Dataset columns: id, title, text, formulas, url.
1372746
Soft SUSY breaking
In theoretical physics, soft SUSY breaking is a type of supersymmetry breaking that does not cause ultraviolet divergences to appear in scalar masses. Overview. These terms are relevant operators—i.e. operators whose coefficients have a positive dimension of mass—though there are some exceptions. A model with soft SUSY breaking was proposed in 1981 by Howard Georgi and Savas Dimopoulos. Before this, dynamical models of supersymmetry breaking were in use, but they suffered from giving rise to color- and charge-breaking vacua. Soft SUSY breaking decouples the origin of supersymmetry breaking from its phenomenological consequences. In effect, soft SUSY breaking adds explicit symmetry breaking terms to the supersymmetric Standard Model Lagrangian. The source of SUSY breaking is a different sector in which supersymmetry is broken spontaneously. Divorcing the spontaneous supersymmetry breaking from the supersymmetric Standard Model leads to the notion of mediated supersymmetry breaking. Nonholomorphic soft supersymmetry breaking interactions. In models based on low-energy supersymmetry, the soft supersymmetry breaking interactions, apart from the mass terms, are usually taken to be holomorphic functions of the fields. While a superpotential such as that of the MSSM must be holomorphic, there is no reason why the soft supersymmetry breaking interactions have to be holomorphic functions of the fields. Of course, an arbitrary nonholomorphic interaction may lead to the appearance of quadratic divergences (i.e. hard supersymmetry breaking); however, there are scenarios with no gauge singlet fields in which nonholomorphic interactions can also be of the soft supersymmetry breaking type. One may consider hidden-sector-based supersymmetry breaking, with formula_0 and formula_1 being chiral superfields. Then there exist nonholomorphic formula_2-term contributions of the forms formula_3 that are soft supersymmetry breaking in nature. The above lead to nonholomorphic trilinear soft terms like formula_4 and an explicit Higgsino soft mass term like formula_5 in the Lagrangian. The coefficients of both the formula_6 and formula_5 terms are proportional to formula_7, where formula_8 is the vacuum expectation value of the auxiliary field component of formula_0 and formula_9 is the scale of mediation of supersymmetry breaking. Beyond the MSSM, there can be higgsino-gaugino interactions like formula_10 that are also nonholomorphic in nature.
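Schematically, the statements above can be collected into a single Lagrangian fragment. The following LaTeX snippet is only a sketch added here (it is not part of the original article); the dimensionless coefficients c_1 and c_2 are unspecified, model-dependent constants standing in for whatever factors a concrete model would produce:

```latex
% Sketch only: after the auxiliary component of X acquires the VEV |F|,
% the D-term operators in formula_3 generate soft terms of the schematic form
% below; c_1 and c_2 are unspecified dimensionless coefficients.
\mathcal{L}_{\text{soft}}^{\text{NH}} \;\supset\;
    c_1\,\frac{|F|^2}{M^3}\,\phi^{2}\phi^{*}
  \;+\; c_2\,\frac{|F|^2}{M^3}\,\psi\psi
  \;+\; \text{h.c.}
```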
[ { "math_id": 0, "text": "X " }, { "math_id": 1, "text": "\\Phi " }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "\\frac{1}{M^3} [XX^*\\Phi^2\\Phi^*]_D ~and~\n \\frac{1}{M^3} [XX^*D^\\alpha\\Phi D_\\alpha\\Phi]_D" }, { "math_id": 4, "text": "\\phi^2\\phi^*" }, { "math_id": 5, "text": "\\psi \\psi " }, { "math_id": 6, "text": "\\phi^2\\phi^* " }, { "math_id": 7, "text": "\\frac{|F|^2}{M^3} " }, { "math_id": 8, "text": "|F| " }, { "math_id": 9, "text": "M " }, { "math_id": 10, "text": "\\psi \\lambda " } ]
https://en.wikipedia.org/wiki?curid=1372746
13727501
T-norm fuzzy logics
T-norm fuzzy logics are a family of non-classical logics, informally delimited by having a semantics that takes the real unit interval [0, 1] for the system of truth values and functions called t-norms for permissible interpretations of conjunction. They are mainly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning. T-norm fuzzy logics belong to the broader classes of fuzzy logics and many-valued logics. In order to generate a well-behaved implication, the t-norms are usually required to be left-continuous; logics of left-continuous t-norms further belong to the class of substructural logics, among which they are marked by the validity of the "law of prelinearity", ("A" → "B") ∨ ("B" → "A"). Both propositional and first-order (or higher-order) t-norm fuzzy logics, as well as their expansions by modal and other operators, are studied. Logics that restrict the t-norm semantics to a subset of the real unit interval (for example, finitely valued Łukasiewicz logics) are usually included in the class as well. Important examples of t-norm fuzzy logics are monoidal t-norm logic (MTL) of all left-continuous t-norms, basic logic (BL) of all continuous t-norms, product fuzzy logic of the product t-norm, and the nilpotent minimum logic of the nilpotent minimum t-norm. Some independently motivated logics belong among t-norm fuzzy logics, too, for example Łukasiewicz logic (which is the logic of the Łukasiewicz t-norm) or Gödel–Dummett logic (which is the logic of the minimum t-norm). Motivation. As members of the family of fuzzy logics, t-norm fuzzy logics primarily aim at generalizing classical two-valued logic by admitting intermediary truth values between 1 (truth) and 0 (falsity) representing "degrees" of truth of propositions. The degrees are assumed to be real numbers from the unit interval [0, 1]. In propositional t-norm fuzzy logics, propositional connectives are stipulated to be truth-functional, that is, the truth value of a complex proposition formed by a propositional connective from some constituent propositions is a function (called the "truth function" of the connective) of the truth values of the constituent propositions. The truth functions operate on the set of truth degrees (in the standard semantics, on the [0, 1] interval); thus the truth function of an "n"-ary propositional connective "c" is a function "F""c": [0, 1]"n" → [0, 1]. Truth functions generalize truth tables of propositional connectives known from classical logic to operate on the larger system of truth values. T-norm fuzzy logics impose certain natural constraints on the truth function of conjunction. The truth function formula_0 of conjunction is assumed to satisfy the following conditions: commutativity (formula_1), associativity (formula_2), monotonicity (if formula_3 then formula_4), neutrality of 1 (formula_5), the falsity condition formula_6, and left-continuity. These assumptions make the truth function of conjunction a left-continuous t-norm, which explains the name of the family of fuzzy logics ("t-norm based"). Particular logics of the family can make further assumptions about the behavior of conjunction (for example, Gödel–Dummett logic requires its idempotence) or other connectives (for example, the logic IMTL (involutive monoidal t-norm logic) requires the involutiveness of negation). 
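As a small illustration (a sketch added here, not part of the original article), the truth functions of the three basic continuous t-norms named in the lead, namely Łukasiewicz (formula_20), minimum (formula_21), and product (formula_22), can be written down and the conditions above spot-checked numerically on a finite grid; the grid check is of course only a sanity check, not a proof:

```python
# Sketch: the three basic t-norms used as truth functions of strong conjunction.
lukasiewicz = lambda x, y: max(x + y - 1.0, 0.0)
goedel      = lambda x, y: min(x, y)             # minimum t-norm
product     = lambda x, y: x * y

def looks_like_tnorm(t, grid):
    """Spot-check commutativity, associativity, monotonicity and the unit 1
    on a finite grid of truth degrees (necessary, not sufficient)."""
    for x in grid:
        if abs(t(1.0, x) - x) > 1e-12:            # 1 is the neutral element
            return False
        for y in grid:
            if abs(t(x, y) - t(y, x)) > 1e-12:    # commutativity
                return False
            for z in grid:
                if abs(t(t(x, y), z) - t(x, t(y, z))) > 1e-12:  # associativity
                    return False
                if x <= y and t(x, z) > t(y, z) + 1e-12:        # monotonicity
                    return False
    return True

grid = [i / 10 for i in range(11)]
print([looks_like_tnorm(t, grid) for t in (lukasiewicz, goedel, product)])
# -> [True, True, True]
```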
All left-continuous t-norms formula_7 have a unique residuum, that is, a binary function formula_8 such that for all "x", "y", and "z" in [0, 1], formula_9 if and only if formula_10 The residuum of a left-continuous t-norm can explicitly be defined as formula_11 This ensures that the residuum is the pointwise largest function such that for all "x" and "y", formula_12 The latter can be interpreted as a fuzzy version of the modus ponens rule of inference. The residuum of a left-continuous t-norm thus can be characterized as the weakest function that makes the fuzzy modus ponens valid, which makes it a suitable truth function for implication in fuzzy logic. Left-continuity of the t-norm is the necessary and sufficient condition for this relationship between a t-norm conjunction and its residual implication to hold. Truth functions of further propositional connectives can be defined by means of the t-norm and its residuum, for instance the residual negation formula_13 or the bi-residual equivalence formula_14 Truth functions of propositional connectives may also be introduced by additional definitions: the most usual ones are the minimum (which plays the role of another conjunctive connective), the maximum (which plays the role of a disjunctive connective), or the Baaz Delta operator, defined in [0, 1] as formula_15 if formula_16 and formula_17 otherwise. In this way, a left-continuous t-norm, its residuum, and the truth functions of additional propositional connectives determine the truth values of complex propositional formulae in [0, 1]. Formulae that always evaluate to 1 are called "tautologies" with respect to the given left-continuous t-norm formula_18 or "formula_19tautologies." The set of all formula_19tautologies is called the "logic" of the t-norm formula_18 as these formulae represent the laws of fuzzy logic (determined by the t-norm) that hold (to degree 1) regardless of the truth degrees of atomic formulae. Some formulae are tautologies with respect to a larger class of left-continuous t-norms; the set of such formulae is called the logic of the class. Important t-norm logics are the logics of particular t-norms or classes of t-norms, for example Łukasiewicz logic (the logic of the Łukasiewicz t-norm formula_20), Gödel–Dummett logic (the logic of the minimum t-norm formula_21), product fuzzy logic (the logic of the product t-norm formula_22), monoidal t-norm logic MTL (the logic of the class of all left-continuous t-norms), and basic fuzzy logic BL (the logic of the class of all continuous t-norms). It turns out that many logics of particular t-norms and classes of t-norms are axiomatizable. The completeness theorem of the axiomatic system with respect to the corresponding t-norm semantics on [0, 1] is then called the "standard completeness" of the logic. Besides the standard real-valued semantics on [0, 1], the logics are sound and complete with respect to general algebraic semantics, formed by suitable classes of prelinear commutative bounded integral residuated lattices. History. Some particular t-norm fuzzy logics have been introduced and investigated long before the family was recognized (even before the notions of fuzzy logic or t-norm emerged), most notably Łukasiewicz logic and Gödel–Dummett logic. A systematic study of particular t-norm fuzzy logics and their classes began with Hájek's (1998) monograph "Metamathematics of Fuzzy Logic", which presented the notion of the logic of a continuous t-norm, the logics of the three basic continuous t-norms (Łukasiewicz, Gödel, and product), and the 'basic' fuzzy logic BL of all continuous t-norms (all of them both propositional and first-order). The book also started the investigation of fuzzy logics as non-classical logics with Hilbert-style calculi, algebraic semantics, and metamathematical properties known from other logics (completeness theorems, deduction theorems, complexity, etc.). 
Since then, a plethora of t-norm fuzzy logics have been introduced and their metamathematical properties have been investigated. Some of the most important t-norm fuzzy logics were introduced in 2001, by Esteva and Godo (MTL, IMTL, SMTL, NM, WNM), Esteva, Godo, and Montagna (propositional ŁΠ), and Cintula (first-order ŁΠ). Logical language. The logical vocabulary of propositional t-norm fuzzy logics standardly comprises the following connectives: implication formula_23, strong conjunction formula_24 (also written formula_25), and weak conjunction formula_26, definable as formula_27 The falsity constant formula_28 (also written formula_29 or formula_30) completes the basic language; negation formula_31 is then defined as formula_32, equivalence formula_33 as formula_34, (weak) disjunction formula_36 as formula_37, and the truth constant formula_38 (also written formula_39 or formula_40) as formula_41 In t-norm logics, the defining formula for equivalence is provably equivalent to formula_35 Some propositional t-norm logics add further propositional connectives to the above language, most often the following ones: the Baaz Delta formula_42, forming formulae formula_43 (the expansion of a logic formula_44 by it is usually denoted formula_45); propositional truth constants formula_47 for truth degrees formula_46, governed by bookkeeping axioms such as formula_48 and formula_49 an involutive negation formula_50 (with standard truth function formula_59 the expansion of a logic formula_44 by it is usually denoted formula_52); and a strong (dual) disjunction formula_53, definable in the presence of the involutive negation as formula_54 Well-formed formulae of propositional t-norm logics are defined from propositional variables (usually countably many) by the above logical connectives, as usual in propositional logics. In order to save parentheses, it is common to use the following order of precedence: unary connectives bind most closely; then binary connectives other than implication and equivalence; implication and equivalence bind most loosely. First-order variants of t-norm logics employ the usual logical language of first-order logic with the above propositional connectives and the following quantifiers: the universal quantifier formula_55 and the existential quantifier formula_56. The first-order variant of a propositional t-norm logic formula_44 is usually denoted by formula_57 Semantics. Algebraic semantics is predominantly used for propositional t-norm fuzzy logics, with three main classes of algebras with respect to which a t-norm fuzzy logic formula_44 is complete: the general semantics of all formula_44-algebras, the linear semantics of linearly ordered formula_44-algebras, and the standard semantics of formula_44-algebras whose lattice reduct is the real unit interval [0, 1].
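The residuum construction described above can be made concrete with a short numerical sketch (added here, not part of the original article): the residuum of the Łukasiewicz t-norm is computed directly from the definition formula_11 on a finite grid of candidate values, then checked against its known closed form min(1, 1 − x + y) and against the fuzzy modus ponens inequality formula_12.

```python
# Sketch: residuum of the Łukasiewicz t-norm computed from its definition
# (x => y) = sup{ z | z * x <= y }, compared with the closed form min(1, 1 - x + y).
def t_luk(x, y):
    return max(x + y - 1.0, 0.0)

def residuum(t, x, y, steps=2000):
    # numerical sup over a grid of candidate z values in [0, 1]
    return max(z / steps for z in range(steps + 1) if t(z / steps, x) <= y + 1e-12)

for x in (0.0, 0.3, 0.6, 1.0):
    for y in (0.0, 0.4, 0.8, 1.0):
        r = residuum(t_luk, x, y)
        assert abs(r - min(1.0, 1.0 - x + y)) < 1e-3   # closed form for Łukasiewicz
        assert t_luk(x, r) <= y + 1e-9                 # fuzzy modus ponens holds
print("Łukasiewicz residuum matches min(1, 1 - x + y) on the sample points")
```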
[ { "math_id": 0, "text": "*\\colon[0,1]^2\\to[0,1]" }, { "math_id": 1, "text": "x*y=y*x" }, { "math_id": 2, "text": "(x*y)*z = x*(y*z)" }, { "math_id": 3, "text": "x \\le y" }, { "math_id": 4, "text": "x*z \\le y*z" }, { "math_id": 5, "text": "1*x = x" }, { "math_id": 6, "text": "0*x = 0" }, { "math_id": 7, "text": "*" }, { "math_id": 8, "text": "\\Rightarrow" }, { "math_id": 9, "text": "x*y\\le z" }, { "math_id": 10, "text": "x\\le y\\Rightarrow z." }, { "math_id": 11, "text": "(x\\Rightarrow y)=\\sup\\{z\\mid z*x\\le y\\}." }, { "math_id": 12, "text": "x*(x\\Rightarrow y)\\le y." }, { "math_id": 13, "text": "\\neg x=(x\\Rightarrow 0)" }, { "math_id": 14, "text": "x\\Leftrightarrow y = (x\\Rightarrow y)*(y\\Rightarrow x)." }, { "math_id": 15, "text": "\\Delta x = 1" }, { "math_id": 16, "text": "x=1" }, { "math_id": 17, "text": "\\Delta x = 0" }, { "math_id": 18, "text": "*," }, { "math_id": 19, "text": "*\\mbox{-}" }, { "math_id": 20, "text": "x*y = \\max(x+y-1,0)" }, { "math_id": 21, "text": "x*y = \\min(x,y)" }, { "math_id": 22, "text": "x*y = x\\cdot y" }, { "math_id": 23, "text": "\\rightarrow" }, { "math_id": 24, "text": "\\And" }, { "math_id": 25, "text": "\\otimes" }, { "math_id": 26, "text": "\\wedge" }, { "math_id": 27, "text": "A\\wedge B \\equiv A \\mathbin{\\And} (A \\rightarrow B)." }, { "math_id": 28, "text": "\\bot" }, { "math_id": 29, "text": "0" }, { "math_id": 30, "text": "\\overline{0}" }, { "math_id": 31, "text": "\\neg" }, { "math_id": 32, "text": "\\neg A \\equiv A \\rightarrow \\bot" }, { "math_id": 33, "text": "\\leftrightarrow" }, { "math_id": 34, "text": "A \\leftrightarrow B \\equiv (A \\rightarrow B) \\wedge (B \\rightarrow A)" }, { "math_id": 35, "text": "(A \\rightarrow B) \\mathbin{\\And} (B \\rightarrow A)." }, { "math_id": 36, "text": "\\vee" }, { "math_id": 37, "text": "A \\vee B \\equiv ((A \\rightarrow B) \\rightarrow B) \\wedge ((B \\rightarrow A) \\rightarrow A)" }, { "math_id": 38, "text": "\\top" }, { "math_id": 39, "text": "1" }, { "math_id": 40, "text": "\\overline{1}" }, { "math_id": 41, "text": "\\top \\equiv \\bot \\rightarrow \\bot." }, { "math_id": 42, "text": "\\triangle" }, { "math_id": 43, "text": "\\triangle A" }, { "math_id": 44, "text": "L" }, { "math_id": 45, "text": "L_{\\triangle}." }, { "math_id": 46, "text": "r" }, { "math_id": 47, "text": "\\overline{r}." }, { "math_id": 48, "text": "\\overline{r \\mathbin{\\And} s} \\leftrightarrow (\\overline{r} \\mathbin{\\And} \\overline{s})," }, { "math_id": 49, "text": "\\overline{r \\rightarrow s} \\leftrightarrow (\\overline{r} \\mathbin{\\rightarrow} \\overline{s})," }, { "math_id": 50, "text": "\\sim" }, { "math_id": 51, "text": "\\neg\\neg A \\leftrightarrow A" }, { "math_id": 52, "text": "L_{\\sim}" }, { "math_id": 53, "text": "\\oplus" }, { "math_id": 54, "text": "A \\oplus B \\equiv \\mathrm{\\sim}(\\mathrm{\\sim}A \\mathbin{\\And} \\mathrm{\\sim}B)." }, { "math_id": 55, "text": "\\forall" }, { "math_id": 56, "text": "\\exists" }, { "math_id": 57, "text": "L\\forall." }, { "math_id": 58, "text": "L_\\sim" }, { "math_id": 59, "text": "f_\\sim(x)=1-x," } ]
https://en.wikipedia.org/wiki?curid=13727501
13728473
Tropical savanna climate
Climate subtype. Tropical savanna climate or tropical wet and dry climate is a tropical climate sub-type that corresponds to the Köppen climate classification categories Aw (for a dry "winter") and As (for a dry "summer"). The driest month has less than 60 mm of precipitation and also less than formula_0 mm of precipitation. This latter fact is in direct contrast to a tropical monsoon climate, whose driest month sees less than 60 mm of precipitation but has "more" than formula_0 mm of precipitation. In essence, a tropical savanna climate tends to either see less overall rainfall than a tropical monsoon climate or have more pronounced dry season(s). It is impossible for a tropical savanna climate to have more than 2500 mm of annual precipitation, as such a value would result in a negative threshold in that equation. In tropical savanna climates, the dry season can become severe, and often drought conditions prevail during the course of the year. Tropical savanna climates often feature tree-studded grasslands, due to their dryness, rather than thick jungle. It is this widespread occurrence of tall, coarse grass (called savanna) which has led to Aw and As climates often being referred to as the tropical savanna. However, there is some doubt whether tropical grasslands are climatically induced. Additionally, pure savannas, without trees, are the exception rather than the rule. Versions. There are generally four types of tropical savanna climates: Distribution. Tropical savanna climates are most commonly found in Africa, Asia, Central America, and South America. The climate is also prevalent in sections of northern Australia, the Pacific Islands, in extreme southern North America in South Florida, and some islands in the Caribbean. Most places that have this climate are found at the outer margins of the tropical zone, but occasionally an inner-tropical location (e.g., San Marcos, Antioquia, Colombia) also qualifies. Similarly, the Caribbean coast, eastward from the Gulf of Urabá on the Colombia–Panamá border to the Orinoco river delta, on the Atlantic Ocean (ca. ), has long dry periods (the extreme is the BSh climate (see below), characterized by very low, unreliable precipitation, present, for instance, in extensive areas in the Guajira, and Coro, western Venezuela, the northernmost peninsulas in South America, which receive < total annual precipitation, practically all in two or three months). This condition extends to the Lesser Antilles and Greater Antilles, forming the Circumcaribbean dry belt. The length and severity of the dry season diminishes inland (southward); at the latitude of the Amazon river—which flows eastward, just south of the equatorial line—the climate is Af. East from the Andes, between the arid Caribbean and the ever-wet Amazon, are the Orinoco river Llanos or savannas, from where this climate takes its name. Sometimes "As" is used in place of "Aw" if the dry season occurs during the time of higher sun and longer days. This is typically due to a rain shadow effect that cuts off ITCZ-triggered summer precipitation in a tropical area while winter precipitation remains sufficient to preclude a hot semi-arid climate ("BSh") and temperatures in the summer months are warm enough to preclude a Mediterranean climate ("Csa/Csb") classification. This is the case in East Africa (Mombasa, Kenya; Somalia), Sri Lanka (Trincomalee) and coastal regions of Northeastern Brazil (from São Luís through Natal to Maceió), for instance. 
The difference between "summer" and "winter" in such tropical locations is usually so slight that a distinction between an "As" and "Aw" climate is trivial. In most places that have tropical wet and dry climates, however, the dry season occurs during the time of lower sun and shorter days because of the reduction of or lack of convection, which in turn is due to the meridional shifts of the Intertropical Convergence Zone during the course of the year, depending on which hemisphere the location is in.
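As a worked illustration of the thresholds described above (a sketch added here, not part of the article; it assumes the location already qualifies as tropical on temperature grounds), the Köppen dry-month rule separating savanna (Aw/As), monsoon (Am), and rainforest (Af) climates can be expressed as a small function:

```python
def classify_tropical(driest_month_mm, annual_mm):
    """Split tropical (A) climates by the Köppen dry-month rule stated above.
    Assumes every monthly mean temperature already qualifies as tropical."""
    threshold = 100.0 - annual_mm / 25.0          # formula_0 in the text
    if driest_month_mm >= 60.0:
        return "Af (tropical rainforest)"
    if driest_month_mm >= threshold:
        return "Am (tropical monsoon)"
    return "Aw/As (tropical savanna)"

# Hypothetical example stations:
print(classify_tropical(15, 1200))   # -> Aw/As (15 mm < 60 and < 100 - 1200/25 = 52)
print(classify_tropical(40, 2200))   # -> Am    (40 mm < 60 but >= 100 - 2200/25 = 12)
```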
[ { "math_id": 0, "text": "100-\\left (\\frac{\\text{Total Annual Precipitation (mm)}}{25} \\right)" } ]
https://en.wikipedia.org/wiki?curid=13728473
13731830
Listing's law
Listing's law, named after German mathematician Johann Benedict Listing (1808–1882), describes the three-dimensional orientation of the eye and its axes of rotation. Listing's law has been shown to hold when the head is stationary and upright and gaze is directed toward far targets, i.e., when the eyes are either fixating, making saccades, or pursuing moving visual targets. Listing's law (often abbreviated L1) has been generalized to yield the "binocular extension of Listing's law" (often abbreviated L2), which also covers vergence. Listing proposed it on the basis of its geometric beauty but never published it. It was first published by Ruete in an 1855 textbook. Helmholtz first found empirical justification for it based on measurements of afterimages. Definition. Listing's law states that the eye does not achieve all possible 3D orientations and that, instead, all achieved eye orientations can be reached by starting from one specific "primary" reference orientation and then rotating about an axis that lies within the plane orthogonal to the primary orientation's gaze direction (line of sight / visual axis). This plane is called Listing's plane. It can be shown that Listing's law implies that, if we start from any chosen eye orientation, all achieved eye orientations can be reached by starting from this orientation and then rotating about an axis that lies within a specific plane that is associated with this chosen orientation. (Only for the primary reference orientation is the gaze direction orthogonal to its associated plane.) Listing's law can be deduced without starting with the orthogonality assumption. If one assumes that all achieved eye orientations can be reached from some chosen eye orientation by rotating about an axis that lies within some specific plane, then the existence of a unique primary orientation with an orthogonal Listing's plane is assured. The expression of Listing's law can be simplified by creating a coordinate system where the origin is primary position, the vertical and horizontal axes of rotation are aligned in Listing's plane, and the third (torsional) axis is orthogonal to Listing's plane. In this coordinate system, Listing's law simply states that the torsional component of eye orientation is held at zero. (Note that this is not the same description of ocular torsion as rotation around the line of sight: whereas movements that start or end at the primary position can indeed be performed without any rotation about the line of sight, this is not the case for arbitrary movements.) Listing's law can also be formulated in a coordinate-free form using geometric algebra. Listing's law is a specific realization of the more general "Donders' law". Donders' law. For any one gaze direction, the eye's 3D spatial orientation is unique and independent of how the eye reached that gaze direction (previous gaze directions, eye orientations, temporal movements). Donders' law is implied by Listing's law. Note that it is theoretically possible for Donders' law to hold while Listing's law fails. Listing's plane. The Listing's plane of a subject can be measured by recording the vector of rotation that would cause the eye to rotate from its primary position to a rotated position. It is orthogonal to the line of sight at the primary position. The line of sight is typically horizontal, but does not necessarily point straight ahead (perpendicular to the coronal plane). Instead, it may point towards the nose or the temples by as much as 15 degrees, varying across subjects. 
Also, within each subject, the primary position tilts towards the temples when viewing distant objects, due to vergence; the tilt angle is 0.72° per degree of vergence. The plane has a thickness (standard deviation) of about 1 degree. Listing's half-angle rule. Let formula_0 be the gaze direction when the eye is in the primary position. Consider the following scenario: the eye is looking in a certain direction formula_1, then it turns towards a different direction formula_2. If the eye follows Listing's law, then the orientation of the eye is uniquely determined in both gaze directions, and so there exists a unique rotation that turns the eye from the first orientation to the second. It is a theorem of geometry that, for any formula_3, the rotation axis lies in the plane perpendicular to formula_4. This is Listing's half-angle rule. It can be proved by noting that a rotation by formula_5 is composed of two reflections across two planes formula_6 apart. The plane is called the velocity plane (or displacement plane); Listing's plane is the velocity plane of the primary position. Purpose. There has been considerable debate for over a century whether the purpose of Listing's law is primarily motor or perceptual. Some modern neuroscientists – who have tended to emphasize optimization of multiple variables – consider Listing's law to be the best compromise between motor factors (e.g., taking the shortest possible rotation path) and visual factors (see below for details). Common misconceptions. The axes of rotation associated with Listing's law are only in Listing's plane for movements that head toward or away from primary position. For all other eye movements towards or away from some non-primary position, the eye must rotate about an axis of rotation that tilts out of Listing's plane. Such axes lie in a specific plane associated with this non-primary position. This plane's normal lies halfway between the primary gaze direction and the gaze direction of this non-primary position. This is called "the half-angle rule". (This complication is one of the most difficult aspects of Listing's law to understand, but it follows directly from the non-commutativity of physical rotations, which means that one rotation followed by a second rotation does not yield the same result as these same rotations performed in the reverse order.) Modifications and violations. Half-Listing's law strategy. Listing's law is violated when the eyes counter-rotate during head rotation to maintain gaze stability, either due to the vestibulo-ocular reflex (VOR) or the optokinetic reflex. Here the eye simply rotates about approximately the same axis as the head (which could even be a pure torsional rotation). This generally results in slow movements that drive the eye torsionally out of Listing's plane. However, when the head translates without rotating, gaze direction remains stable and Listing's law is still maintained. Specifically, if the head rolls (tilting left and right), the counterroll reflex would roll the eyes in the opposite direction, violating Listing's law. Listing's law persists, with a torsional bias added, when the head is held at a tilted posture and the eyes counter-roll; and when the head is held steadily pitched upward or downward, Listing's plane tilts slightly in the opposite direction. 
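The half-angle rule stated above can be checked numerically. The following is a rough sketch added here (not from the original article); it assumes a primary gaze direction along +z, builds Listing-compatible orientations for two arbitrary gaze directions, and verifies that the axis of the rotation between them is orthogonal to the direction halfway between the primary and initial gaze directions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

n = np.array([0.0, 0.0, 1.0])                 # assumed primary gaze direction

def listing_orientation(v):
    """Eye orientation reached from the primary position by rotating about an
    axis in Listing's plane (the plane orthogonal to the primary gaze n)."""
    v = v / np.linalg.norm(v)
    axis = np.cross(n, v)
    s = np.linalg.norm(axis)
    if s < 1e-12:                             # gaze already along n
        return R.identity()
    angle = np.arctan2(s, float(np.dot(n, v)))
    return R.from_rotvec(axis / s * angle)

v  = np.array([1.0, 0.3, 0.8])                # arbitrary initial gaze direction
vp = np.array([-0.4, 0.9, 0.6])               # arbitrary final gaze direction

q1, q2 = listing_orientation(v), listing_orientation(vp)
rel = q2 * q1.inv()                           # rotation carrying orientation 1 to 2
axis = rel.as_rotvec()                        # its direction is the rotation axis

bisector = n + v / np.linalg.norm(v)          # halfway between primary and initial gaze
print(np.dot(axis, bisector))                 # ~0: the axis lies in the half-angle plane
```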
Perfect VOR would stabilize the retinal image but would violate Listing's law. As a compromise, eye motion follows the half-Listing's law strategy: instead of following Listing's half-angle rule (a geometric consequence of Listing's law), the eyes react to head motion in VOR by rotating around a modified velocity plane. The modified velocity plane makes an angle with Listing's plane that is 1/4, instead of 1/2, of the angle between the gaze direction and the primary direction. Other violations. When larger "gaze saccades" are accompanied by a head movement, Listing's law cannot be maintained constantly because the eyes move much faster than the head. The eye typically reaches the destination in 80 ms, but the head needs about 300 ms. In this case, the eyes start at a position following Listing's law and arrive at the destination violating it; as the head continues to move into position, the eyes retain their orientation, and once the head reaches the destination the eyes end up following Listing's law again. The temporary violation can reach up to 15 degrees of torsion relative to Listing's law. The data can be explained by assuming that the eyes take the fastest possible path to their final orientation, with no constraints on torsion, except that it stays less than 15 degrees. Listing's law does not hold during sleep. Listing's law holds during fixation, saccades, and smooth pursuit. Furthermore, Listing's law has been generalized to the "binocular extension of Listing's law", which holds also during vergence. Adaptation. Listing's law can be violated in neurological conditions, such as acute unilateral fourth nerve palsy. However, there is an adaptive mechanism that restores Listing's law, so that chronic patients with unilateral fourth nerve palsy satisfy Listing's law again. The adaptation fails under central fascicular palsy, as even chronic patients suffer from deviations from Listing's law. Binocular extension. While Listing's law holds only for eyes that fixate a distant point (at optical infinity), it has been extended to include also vergence. From this "binocular extension of Listing's law", it follows that vergence can lead to a change of cyclotorsion. The Listing's planes of the two eyes tilt outward, opposite to the eyes, when they converge on a near target. During convergence, there is a relative excyclotorsion on upgaze and a relative incyclotorsion on downgaze. Shape and thickness. Certain slight physiological deviations from Listing's rule are commonly described in terms of the "shape" and "thickness" of Listing's plane: Visual consequences. Since Listing's law and its variants determine the orientation of the eye(s) for any particular gaze direction, they therefore determine the spatial pattern of visual stimulation on the retina(s). For example, since Listing's law defines torsion as zero about a head-fixed axis, this results in "false torsional" tilts about the line of sight when the eye is at tertiary (oblique) positions, which the brain must compensate for when interpreting the visual image. Torsion is not good for binocular vision because it complicates the already difficult problem of matching images from the two eyes for stereopsis (depth vision). The binocular version of Listing's law is thought to be a best compromise to simplify this problem, although it does not completely rid the visual system of the need to know current eye orientation. Physiology. 
In the 1990s there was considerable debate about whether Listing's law is a neural or mechanical phenomenon. However, the accumulated evidence suggests that both factors play a role in the implementation of different aspects of Listing's law. The horizontal recti muscles of the eyes only contribute to horizontal eye rotation and position, but the vertical recti and oblique muscles each have approximately equal vertical and torsional actions (in Listing's plane coordinates). Thus, to hold eye position in Listing's plane, there needs to be a balance of activation between these muscles so that torsion cancels to zero. The eye muscles may also contribute to Listing's law by having position-dependent pulling directions during motion, i.e., this might be the mechanism that implements the "half-angle rule" described above. Higher gaze control centers in the frontal cortex and superior colliculus are only concerned with pointing gaze in the right direction and do not appear to be involved in 3D eye control or the implementation of Listing's law. However, the brainstem reticular formation centers that control vertical eye position (the interstitial nucleus of Cajal; INC) and saccade velocity (the rostral interstitial nucleus of the medial longitudinal fasciculus; riMLF) are equally involved in torsional control, each being divided into populations of neurons that control directions similar to those of the vertical and torsional pulling eye muscles. However, these neural coordinate systems appear to align with Listing's plane in a way that probably simplifies Listing's law: positive and negative torsional control is balanced across the midline of the brainstem so that equal activation produces positions and movements in Listing's plane. Thus torsional control is only needed for movements toward or away from Listing's plane. However, it remains unclear how 2D activity in the higher gaze centres results in the right pattern of 3D activity in the brainstem. The brainstem premotor centers (INC, riMLF, etc.) project to the motoneurons for eye muscles, which encode positions and displacements of the eyes while leaving the "half-angle rule" to the mechanics of the eyes themselves (see above). The cerebellum also plays a role in correcting deviations from Listing's plane. Pathology. Damage to any of the physiology described above can disrupt Listing's law and thus have negative impacts for vision. Disorders of the eye muscles (such as strabismus) often cause torsional offsets in eye position that are particularly troublesome when they differ between the two eyes, as the resulting cyclodisparity may lead to cyclodiplopia (double vision due to relative torsion) and may prevent binocular fusion. Damage to the vestibular system and brainstem reticular formation centres for 3D eye control can cause torsional offsets and/or torsional drifting motion of the eyes that severely disrupts vision. Degeneration of the cerebellum causes torsional control to become "sloppy". Similar effects occur during alcohol consumption. The influence of strabismus surgery on the Listing's planes of the two eyes is not fully understood. In one study, patients' eyes showed greater adherence to Listing's rule after the operation; however, the relative orientation of the Listing's planes of the two eyes had changed. Measurement. The orientation of Listing's plane (equivalently, the location of the primary position) of an individual can be measured using scleral coils. It can also be measured using a synoptometer. 
Alternatively, it can be measured using eye tracking (see also Eye tracking on the ISS for an example). Discovery and history. Listing's law was named after German mathematician Johann Benedict Listing (1808–1882). It is not clear how Listing derived this idea, but he apparently based it on geometric aesthetics. Listing's law was first confirmed experimentally by the 19th-century polymath Hermann von Helmholtz, who compared visual afterimages at various eye positions to predictions derived from Listing's law and found that they matched. Listing's law was first measured directly, with the use of 3D eye coils, in the 1980s by Ferman, Collewijn and colleagues. In the late 1980s Tweed and Vilis were the first to directly measure and visualize Listing's plane, and they also contributed to the understanding of the laws of rotational kinematics that underlie Listing's law. Since then many investigators have used similar technology to test various aspects of Listing's law. Demer and Miller have championed the role of eye muscles, whereas Crawford and colleagues worked out several of the neural mechanisms described above over the past two decades.
[ { "math_id": 0, "text": "\\hat n" }, { "math_id": 1, "text": "\\hat v" }, { "math_id": 2, "text": "\\hat v'" }, { "math_id": 3, "text": "\\hat v, \\hat v'" }, { "math_id": 4, "text": "\\frac{\\hat n + \\hat v}{2}" }, { "math_id": 5, "text": "\\theta" }, { "math_id": 6, "text": "\\theta/2" } ]
https://en.wikipedia.org/wiki?curid=13731830
13733
Hilbert's basis theorem
Polynomial ideals are finitely generated. In mathematics, Hilbert's basis theorem asserts that every ideal of a polynomial ring over a field has a finite generating set (a finite "basis" in Hilbert's terminology). In modern algebra, rings whose ideals have this property are called Noetherian rings. Every field and the ring of integers are Noetherian rings. So, the theorem can be generalized and restated as: "every polynomial ring over a Noetherian ring is also Noetherian". The theorem was stated and proved by David Hilbert in 1890 in his seminal article on invariant theory, where he solved several problems on invariants. In this article, he also proved two other fundamental theorems on polynomials, the Nullstellensatz (zero-locus theorem) and the syzygy theorem (theorem on relations). These three theorems were the starting point of the interpretation of algebraic geometry in terms of commutative algebra. In particular, the basis theorem implies that every algebraic set is the intersection of a finite number of hypersurfaces. Another aspect of this article had a great impact on mathematics of the 20th century: the systematic use of non-constructive methods. For example, the basis theorem asserts that every ideal has a finite generating set, but the original proof does not provide any way to compute it for a specific ideal. This approach was so astonishing for mathematicians of the time that the first version of the article was rejected by Paul Gordan, the greatest specialist of invariants of that time, with the comment "This is not mathematics. This is theology." Later, he recognized "I have convinced myself that even theology has its merits." Statement. If formula_0 is a ring, let formula_1 denote the ring of polynomials in the indeterminate formula_2 over formula_0. Hilbert proved that if formula_0 is "not too large", in the sense that formula_0 is Noetherian, the same must be true for formula_1. Formally, Hilbert's Basis Theorem. If formula_0 is a Noetherian ring, then formula_1 is a Noetherian ring. Corollary. If formula_0 is a Noetherian ring, then formula_3 is a Noetherian ring. Hilbert proved the theorem (for the special case of multivariate polynomials over a field) in the course of his proof of finite generation of rings of invariants. The theorem is interpreted in algebraic geometry as follows: every algebraic set is the set of the common zeros of finitely many polynomials. Hilbert's proof is highly non-constructive: it proceeds by induction on the number of variables, and, at each induction step, uses the non-constructive proof for one variable less. Introduced more than eighty years later, Gröbner bases allow a direct proof that is as constructive as possible: Gröbner bases produce an algorithm for testing whether a polynomial belongs to the ideal generated by other polynomials. So, given an infinite sequence of polynomials, one can construct algorithmically the list of those polynomials that do not belong to the ideal generated by the preceding ones. Gröbner basis theory implies that this list is necessarily finite, and is thus a finite basis of the ideal. However, for deciding whether the list is complete, one must consider every element of the infinite sequence, which cannot be done in the finite time allowed to an algorithm. Proof. Theorem. If formula_0 is a left (resp. right) Noetherian ring, then the polynomial ring formula_1 is also a left (resp. right) Noetherian ring. Remark. 
We will give two proofs, in both only the "left" case is considered; the proof for the right case is similar. First proof. Suppose formula_4 is a non-finitely generated left ideal. Then by recursion (using the axiom of dependent choice) there is a sequence of polynomials formula_5 such that if formula_6 is the left ideal generated by formula_7 then formula_8 is of minimal degree. By construction, formula_9 is a non-decreasing sequence of natural numbers. Let formula_10 be the leading coefficient of formula_11 and let formula_12 be the left ideal in formula_0 generated by formula_13. Since formula_0 is Noetherian the chain of ideals formula_14 must terminate. Thus formula_15 for some integer formula_16. So in particular, formula_17 Now consider formula_18 whose leading term is equal to that of formula_19; moreover, formula_20. However, formula_21, which means that formula_22 has degree less than that of formula_19, contradicting the minimality. Second proof. Let formula_4 be a left ideal. Let formula_23 be the set of leading coefficients of members of formula_24. This is obviously a left ideal over formula_0, and so is finitely generated by the leading coefficients of finitely many members of formula_24; say formula_25. Let formula_26 be the maximum of the set formula_27, and let formula_28 be the set of leading coefficients of members of formula_24 whose degree is formula_29. As before, the formula_28 are left ideals over formula_0, and so are finitely generated by the leading coefficients of finitely many members of formula_24, say formula_30 with degrees formula_29. Now let formula_31 be the left ideal generated by: formula_32 We have formula_33 and claim also formula_34. Suppose for the sake of contradiction this is not so. Then let formula_35 be of minimal degree, and denote its leading coefficient by formula_36. Case 1: formula_37. Regardless of this condition, we have formula_38, so formula_36 is a left linear combination formula_39 of the leading coefficients of the formula_40. Consider formula_41 which has the same leading term as formula_42; moreover formula_43 while formula_44. Therefore formula_45 and formula_46, which contradicts minimality. Case 2: formula_47. Then formula_48 so formula_36 is a left linear combination formula_49 of the leading coefficients of the formula_50. Considering formula_51, we obtain a similar contradiction as in Case 1. Thus our claim holds, and formula_52, which is finitely generated. Note that the only reason we had to split into two cases was to ensure that the powers of formula_2 multiplying the factors were non-negative in the constructions. Applications. Let formula_0 be a Noetherian commutative ring. Hilbert's basis theorem has some immediate corollaries. First, by induction, the multivariate polynomial ring formula_53 is Noetherian. Second, any affine algebraic set over formula_54 may be written as the common zero locus of an ideal formula_55; since the ideal is finitely generated, every algebraic set is the common zero locus of finitely many polynomials. Third, if formula_56 is a finitely generated formula_0-algebra, then formula_57 for some ideal formula_55; the basis theorem gives formula_58 for finitely many polynomials, i.e., formula_56 is finitely presented. Formal proofs. Formal proofs of Hilbert's basis theorem have been verified through the Mizar project (see HILBASIS file) and Lean (see ring_theory.polynomial).
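Since the passage above contrasts the non-constructive proof with the constructive Gröbner-basis approach, here is a small sketch (added here, not part of the article, and assuming SymPy's groebner/contains API) of testing ideal membership algorithmically; the generator polynomials are arbitrary examples:

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Arbitrary example generators of an ideal in Q[x, y].
f1 = x**2 + y**2 - 1
f2 = x*y - 2

G = groebner([f1, f2], x, y, order='lex')   # a Groebner basis of the ideal

candidate = y*f1 + (x + 3)*f2               # in the ideal by construction
print(G.contains(candidate))                # True
print(G.contains(x + y))                    # False: x + y does not lie in the ideal
```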
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "R[X]" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "R[X_1,\\dotsc,X_n]" }, { "math_id": 4, "text": "\\mathfrak a \\subseteq R[X]" }, { "math_id": 5, "text": "\\{ f_0, f_1, \\ldots \\}" }, { "math_id": 6, "text": "\\mathfrak b_n" }, { "math_id": 7, "text": "f_0, \\ldots, f_{n-1}" }, { "math_id": 8, "text": "f_n \\in \\mathfrak a \\setminus \\mathfrak b_n" }, { "math_id": 9, "text": "\\{\\deg(f_0), \\deg(f_1), \\ldots \\}" }, { "math_id": 10, "text": "a_n" }, { "math_id": 11, "text": "f_n" }, { "math_id": 12, "text": "\\mathfrak{b}" }, { "math_id": 13, "text": "a_0,a_1,\\ldots" }, { "math_id": 14, "text": "(a_0)\\subset(a_0,a_1)\\subset(a_0,a_1,a_2) \\subset \\cdots" }, { "math_id": 15, "text": "\\mathfrak b = (a_0,\\ldots ,a_{N-1})" }, { "math_id": 16, "text": "N" }, { "math_id": 17, "text": "a_N=\\sum_{i<N} u_{i}a_{i}, \\qquad u_i \\in R." }, { "math_id": 18, "text": "g = \\sum_{i<N}u_{i}X^{\\deg(f_{N})-\\deg(f_{i})}f_{i}," }, { "math_id": 19, "text": "f_N" }, { "math_id": 20, "text": "g\\in\\mathfrak b_N" }, { "math_id": 21, "text": "f_N \\notin \\mathfrak b_N" }, { "math_id": 22, "text": "f_N - g \\in \\mathfrak a \\setminus \\mathfrak b_N" }, { "math_id": 23, "text": "\\mathfrak b" }, { "math_id": 24, "text": "\\mathfrak a" }, { "math_id": 25, "text": "f_0, \\ldots, f_{N-1}" }, { "math_id": 26, "text": "d" }, { "math_id": 27, "text": "\\{\\deg(f_0),\\ldots, \\deg(f_{N-1})\\}" }, { "math_id": 28, "text": "\\mathfrak b_k" }, { "math_id": 29, "text": "\\le k" }, { "math_id": 30, "text": "f^{(k)}_{0}, \\ldots, f^{(k)}_{N^{(k)}-1}" }, { "math_id": 31, "text": "\\mathfrak a^*\\subseteq R[X]" }, { "math_id": 32, "text": "\\left\\{f_{i},f^{(k)}_{j} \\, : \\ i<N,\\, j<N^{(k)},\\, k<d \\right\\}\\!\\!\\;." }, { "math_id": 33, "text": "\\mathfrak a^*\\subseteq\\mathfrak a" }, { "math_id": 34, "text": "\\mathfrak a\\subseteq\\mathfrak a^*" }, { "math_id": 35, "text": "h\\in \\mathfrak a \\setminus \\mathfrak a^*" }, { "math_id": 36, "text": "a" }, { "math_id": 37, "text": "\\deg(h)\\ge d" }, { "math_id": 38, "text": "a\\in \\mathfrak b" }, { "math_id": 39, "text": "a=\\sum_j u_j a_j" }, { "math_id": 40, "text": "f_j" }, { "math_id": 41, "text": "h_0 =\\sum_{j}u_{j}X^{\\deg(h)-\\deg(f_{j})}f_{j}," }, { "math_id": 42, "text": "h" }, { "math_id": 43, "text": "h_0 \\in \\mathfrak a^*" }, { "math_id": 44, "text": "h\\notin\\mathfrak a^*" }, { "math_id": 45, "text": "h - h_0 \\in \\mathfrak a\\setminus\\mathfrak a^*" }, { "math_id": 46, "text": "\\deg(h - h_0) < \\deg(h)" }, { "math_id": 47, "text": "\\deg(h) = k < d" }, { "math_id": 48, "text": "a\\in\\mathfrak b_k" }, { "math_id": 49, "text": "a=\\sum_j u_j a^{(k)}_j" }, { "math_id": 50, "text": "f^{(k)}_j" }, { "math_id": 51, "text": "h_0=\\sum_j u_j X^{\\deg(h)-\\deg(f^{(k)}_{j})}f^{(k)}_{j}," }, { "math_id": 52, "text": "\\mathfrak a = \\mathfrak a^*" }, { "math_id": 53, "text": "R[X_0,\\dotsc,X_{n-1}]" }, { "math_id": 54, "text": "R^n" }, { "math_id": 55, "text": "\\mathfrak a\\subset R[X_0, \\dotsc, X_{n-1}]" }, { "math_id": 56, "text": "A" }, { "math_id": 57, "text": "A \\simeq R[X_0, \\dotsc, X_{n-1}] / \\mathfrak a" }, { "math_id": 58, "text": "\\mathfrak a = (p_0,\\dotsc, p_{N-1})" } ]
https://en.wikipedia.org/wiki?curid=13733
1373338
Rooted product of graphs
Binary operation performed on graphs. In mathematical graph theory, the rooted product of a graph G and a rooted graph H is defined as follows: take one copy of H for each vertex of G, and for every vertex vi of G, identify vi with the root node of the i-th copy of H. More formally, assuming that formula_0 and that the root node of H is "h"1, define formula_1, where formula_2 and formula_3. If G is also rooted at "g"1, one can view the product itself as rooted, at ("g"1, "h"1). The rooted product is a subgraph of the cartesian product of the same two graphs. Applications. The rooted product is especially relevant for trees, as the rooted product of two trees is another tree. For instance, Koh et al. (1980) used rooted products to find graceful numberings for a wide family of trees. If H is a two-vertex complete graph "K"2, then for any graph G, the rooted product of G and H has domination number exactly half of its number of vertices. Every connected graph in which the domination number is half the number of vertices arises in this way, with the exception of the four-vertex cycle graph. These graphs can be used to generate examples in which the bound of Vizing's conjecture, an unproven inequality on the domination number of a different graph product, the cartesian product of graphs, is exactly met. They are also well-covered graphs.
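The definition above translates directly into code. The following is a small sketch added here (not from the article) that builds the rooted product with NetworkX, following the vertex and edge sets formula_2 and formula_3; the path/K2 example is arbitrary:

```python
import networkx as nx

def rooted_product(G, H, root):
    """Rooted product of G with the rooted graph (H, root): one copy of H per
    vertex of G, with each copy's root identified with that vertex of G."""
    P = nx.Graph()
    P.add_nodes_from((g, h) for g in G.nodes for h in H.nodes)
    # Edges of G, taken between the root vertices of the copies.
    P.add_edges_from(((g1, root), (g2, root)) for g1, g2 in G.edges)
    # Edges inside each copy of H.
    P.add_edges_from(((g, h1), (g, h2)) for g in G.nodes for h1, h2 in H.edges)
    return P

# Example: G is the path on 3 vertices, H is K2 rooted at vertex 0.
G = nx.path_graph(3)
H = nx.complete_graph(2)
P = rooted_product(G, H, root=0)
print(P.number_of_nodes(), P.number_of_edges())   # 6 5 (three copies of K2 glued along a path)
```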
[ { "math_id": 0, "text": "\\begin{align}\nV(G) &= \\{g_1, \\ldots, g_n \\}, \\\\\nV(H) &= \\{h_1, \\ldots, h_m \\},\n\\end{align}" }, { "math_id": 1, "text": "G \\circ H := (V, E)" }, { "math_id": 2, "text": "V = \\left\\{(g_i, h_j): 1\\leq i\\leq n, 1\\leq j\\leq m\\right\\}" }, { "math_id": 3, "text": "E = \\Bigl\\{\\bigl((g_i, h_1), (g_k, h_1)\\bigr): (g_i, g_k) \\in E(G)\\Bigr\\} \\cup \\bigcup_{i=1}^n \\Bigl\\{\\bigl((g_i, h_j), (g_i, h_k)\\bigr): (h_j, h_k) \\in E(H)\\Bigr\\}" } ]
https://en.wikipedia.org/wiki?curid=1373338
13733769
Darcy friction factor formulae
Equations for calculations of the Darcy friction factor. In fluid dynamics, the Darcy friction factor formulae are equations that allow the calculation of the "Darcy friction factor", a dimensionless quantity used in the Darcy–Weisbach equation, for the description of friction losses in pipe flow as well as open-channel flow. The Darcy friction factor is also known as the "Darcy–Weisbach friction factor", "resistance coefficient" or simply "friction factor"; by definition it is four times larger than the Fanning friction factor. Notation. In this article, the following conventions and definitions are to be understood: Flow regime. Which friction factor formula may be applicable depends upon the type of flow that exists: Transition flow. Transition (neither fully laminar nor fully turbulent) flow occurs in the range of Reynolds numbers between 2300 and 4000. The value of the Darcy friction factor is subject to large uncertainties in this flow regime. Turbulent flow in smooth conduits. The Blasius correlation is the simplest equation for computing the Darcy friction factor. Because the Blasius correlation has no term for pipe roughness, it is valid only for smooth pipes. However, the Blasius correlation is sometimes used in rough pipes because of its simplicity. The Blasius correlation is valid up to a Reynolds number of 100000. Turbulent flow in rough conduits. The Darcy friction factor for fully turbulent flow (Reynolds number greater than 4000) in rough conduits can be modeled by the Colebrook–White equation. Free surface flow. The last formula in the "Colebrook equation" section of this article is for free surface flow. The approximations elsewhere in this article are not applicable for this type of flow. Choosing a formula. Before choosing a formula it is worth knowing that in the paper on the Moody chart, Moody stated the accuracy is about ±5% for smooth pipes and ±10% for rough pipes. If more than one formula is applicable in the flow regime under consideration, the choice of formula may be influenced by one or more of the following: Colebrook–White equation. The phenomenological Colebrook–White equation (or Colebrook equation) expresses the Darcy friction factor "f" as a function of Reynolds number Re and pipe relative roughness ε / "D"h, fitting the data of experimental studies of turbulent flow in smooth and rough pipes. The equation can be used to (iteratively) solve for the Darcy–Weisbach friction factor "f". For a conduit flowing completely full of fluid at Reynolds numbers greater than 4000, it is expressed as: formula_0 or formula_1 where formula_2 is the hydraulic diameter of the pipe and formula_3 is its hydraulic radius. Note: Some sources use a constant of 3.71 in the denominator for the roughness term in the first equation above. Solving. The Colebrook equation is usually solved numerically due to its implicit nature. Recently, the Lambert W function has been employed to obtain an explicit reformulation of the Colebrook equation. Setting formula_4 the equation becomes formula_5 or equivalently formula_6 Writing formula_7 this gives formula_8 whose solution in terms of the Lambert W function is formula_9 and hence formula_10 Expanded forms. Additional, mathematically equivalent forms of the Colebrook equation are: formula_11 where 1.7384... = 2 log (2 × 3.7) = 2 log (7.4) and 18.574 = 2.51 × 3.7 × 2; and formula_12 or formula_13 where 1.1364... = 1.7384... − 2 log (2) = 2 log (7.4) − 2 log (2) = 2 log (3.7) and 9.287 = 18.574 / 2 = 2.51 × 3.7. The additional equivalent forms above assume that the constants 3.7 and 2.51 in the formula at the top of this section are exact. 
The constants are probably values which were rounded by Colebrook during his curve fitting; but they are effectively treated as exact when comparing (to several decimal places) results from explicit formulae (such as those found elsewhere in this article) to the friction factor computed via Colebrook's implicit equation. Equations similar to the additional forms above (with the constants rounded to fewer decimal places, or perhaps shifted slightly to minimize overall rounding errors) may be found in various references. It may be helpful to note that they are essentially the same equation. Free surface flow. Another form of the Colebrook-White equation exists for free surfaces. Such a condition may exist in a pipe that is flowing partially full of fluid. For free surface flow: formula_14 The above equation is valid only for turbulent flow. Another approach for estimating "f" in free surface flows, which is valid under all the flow regimes (laminar, transition and turbulent), is the following: formula_15 where "a" is: formula_16 and "b" is: formula_17 where "Reh" is the Reynolds number based on the characteristic hydraulic length "h" (the hydraulic radius for 1D flows or the water depth for 2D flows) and "Rh" is the hydraulic radius (for 1D flows) or the water depth (for 2D flows). The Lambert W function can be calculated as follows: formula_18 Approximations of the Colebrook equation. Haaland equation. The "Haaland equation" was proposed in 1983 by Professor S.E. Haaland of the Norwegian Institute of Technology. It is used to solve directly for the Darcy–Weisbach friction factor "f" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation, but the discrepancy from experimental data is well within the accuracy of the data. The Haaland equation is expressed: formula_19 Swamee–Jain equation. The Swamee–Jain equation is used to solve directly for the Darcy–Weisbach friction factor "f" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. formula_20 Serghides's solution. Serghides's solution is used to solve directly for the Darcy–Weisbach friction factor "f" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. It was derived using Steffensen's method. The solution involves calculating three intermediate values and then substituting those values into a final equation. formula_21 formula_22 formula_23 formula_24 The equation was found to match the Colebrook–White equation within 0.0023% for a test set with a 70-point matrix consisting of ten relative roughness values (in the range 0.00004 to 0.05) by seven Reynolds numbers (2500 to 10^8). Goudar–Sonnad equation. The Goudar equation is the most accurate approximation for solving directly for the Darcy–Weisbach friction factor "f" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. The equation has the following form: formula_25 formula_26 formula_27 formula_28 formula_29 formula_30 formula_31 formula_32 formula_33 formula_34 Brkić solution. Brkić shows one approximation of the Colebrook equation based on the Lambert W-function: formula_35 formula_36 The equation was found to match the Colebrook–White equation within 3.15%. Brkić-Praks solution. 
Brkić and Praks show one approximation of the Colebrook equation based on the Wright formula_37-function, a cognate of the Lambert W-function: formula_38 formula_39, formula_40, formula_41formula_42, and formula_43 The equation was found to match the Colebrook–White equation within 0.0497%. Praks-Brkić solution. Praks and Brkić show one approximation of the Colebrook equation based on the Wright formula_37-function, a cognate of the Lambert W-function: formula_44 formula_45, formula_46, formula_41formula_42, and formula_43 The equation was found to match the Colebrook–White equation within 0.0012%. Niazkar's solution. Since Serghides's solution was found to be one of the most accurate approximations of the implicit Colebrook–White equation, Niazkar modified Serghides's solution to solve directly for the Darcy–Weisbach friction factor "f" for a full-flowing circular pipe. Niazkar's solution is shown in the following: formula_47 formula_22 formula_23 formula_24 Niazkar's solution was found to be the most accurate correlation based on a comparative analysis conducted in the literature among 42 different explicit equations for estimating the Colebrook friction factor. Blasius correlations. Early approximations for smooth pipes by Paul Richard Heinrich Blasius in terms of the Darcy–Weisbach friction factor are given in a 1913 article: formula_48. Johann Nikuradse in 1932 proposed that this corresponds to a power law correlation for the fluid velocity profile. Mishra and Gupta in 1979 proposed a correction for curved or helically coiled tubes, taking into account the equivalent curve radius, Rc: formula_49, with formula_50 where "f" is a function of: valid for: Swamee equation. The Swamee equation is used to solve directly for the Darcy–Weisbach friction factor ("f") for a full-flowing circular pipe for all flow regimes (laminar, transitional, turbulent). It is an exact solution for the Hagen–Poiseuille equation in the laminar flow regime and an approximation of the implicit Colebrook–White equation in the turbulent regime with a maximum deviation of less than 2.38% over the specified range. Additionally, it provides a smooth transition between the laminar and turbulent regimes to be valid as a full-range equation, 0 < Re < 10^8. formula_51 Table of Approximations. The following table lists historical approximations to the Colebrook–White relation for pressure-driven flow. The Churchill equation (1977) is the only equation that can be evaluated for very slow flow (Reynolds number < 1), but the Cheng (2008) and Bellos et al. (2018) equations also return an approximately correct value for the friction factor in the laminar flow region (Reynolds number < 2300). All of the others are for transitional and turbulent flow only.
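As a practical illustration of the iterative solution discussed above (a sketch added here, not part of the article; the Reynolds number and relative roughness values are arbitrary examples), the Colebrook–White equation can be solved by fixed-point iteration on 1/√f and compared with the explicit Swamee–Jain approximation:

```python
import math

def colebrook(Re, rel_rough, tol=1e-12, max_iter=100):
    """Darcy friction factor from the Colebrook-White equation,
    1/sqrt(f) = -2 log10( rel_rough/3.7 + 2.51/(Re*sqrt(f)) ),
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 8.0                                   # initial guess, 1/sqrt(f) ~ 8
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new**2

def swamee_jain(Re, rel_rough):
    """Explicit Swamee-Jain approximation of the Colebrook-White equation."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re**0.9) ** 2

Re, rel_rough = 1e5, 1e-4                     # example turbulent-flow conditions
print(colebrook(Re, rel_rough), swamee_jain(Re, rel_rough))
# The two values agree closely, as expected for this explicit approximation.
```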
[ { "math_id": 0, "text": " \\frac{1}{\\sqrt{f}}= -2 \\log \\left( \\frac { \\varepsilon} {3.7 D_\\mathrm{h}} + \\frac {2.51} {\\mathrm{Re} \\sqrt{f}} \\right)" }, { "math_id": 1, "text": " \\frac{1}{\\sqrt{f}}= -2 \\log \\left( \\frac{\\varepsilon}{14.8 R_\\mathrm{h}} + \\frac{2.51}{\\mathrm{Re}\\sqrt{f}} \\right)" }, { "math_id": 2, "text": "D_\\mathrm{h}" }, { "math_id": 3, "text": "R_\\mathrm{h}" }, { "math_id": 4, "text": "x=\\frac{1}{\\sqrt{f}}, b=\\frac{\\varepsilon}{14.8R_h}, a= \\frac{2.51}{Re} " }, { "math_id": 5, "text": " x=-2\\log(ax+b) " }, { "math_id": 6, "text": " 10^{-\\frac{x}{2}}= ax+b " }, { "math_id": 7, "text": " p=10^{-\\frac{1}{2}} " }, { "math_id": 8, "text": " p^x = ax + b " }, { "math_id": 9, "text": " x = -\\frac{W\\left(-\\frac{\\ln p}{a}\\,p^{-\\frac{b}{a}}\\right)}{\\ln p} - \\frac{b}{a} " }, { "math_id": 10, "text": " f = \\frac{1}{\\left(\\dfrac{2W\\left(\\frac{\\ln 10}{2a}\\,10^{\\frac{b}{2a}}\\right)}{\\ln 10} - \\dfrac{b}{a}\\right)^2} " }, { "math_id": 11, "text": " \\frac{1}{\\sqrt{f}}= 1.7384\\ldots -2 \\log \\left( \\frac { 2 \\varepsilon} {D_\\mathrm{h}} + \\frac {18.574} {\\mathrm{Re} \\sqrt{f}} \\right)" }, { "math_id": 12, "text": " \\frac{1}{\\sqrt{f}}= 1.1364\\ldots + 2 \\log\\left (D_\\mathrm{h} / \\varepsilon\\right) -2 \\log \\left( 1 + \\frac { 9.287} {\\mathrm{Re} (\\varepsilon/D_\\mathrm{h}) \\sqrt{f}} \\right)" }, { "math_id": 13, "text": " \\frac{1}{\\sqrt{f}}= 1.1364\\ldots -2 \\log \\left( \\frac {\\varepsilon}{D_\\mathrm{h}} + \\frac {9.287} {\\mathrm{Re} \\sqrt{f}} \\right) " }, { "math_id": 14, "text": "\\frac{1}{\\sqrt{f}} = -2 \\log \\left(\\frac{\\varepsilon}{12R_\\mathrm{h}} + \\frac{2.51}{\\mathrm{Re}\\sqrt{f}}\\right)." }, { "math_id": 15, "text": "f=\\left ( \\frac{24}{Re_h} \\right )\n\\left [ \\frac{0.86e^{W(1.35Re_h)}} {Re_h} \\right ]^{2(1-a)b}\n\\left \\{ \\frac{1.34}{\\left [ \\ln{12.21\\left ( \\frac{R_h}{\\epsilon} \\right )} \\right ]^2} \\right \\}^{(1-a)(1-b)}\n" }, { "math_id": 16, "text": "a= \\frac{1}{1+\\left ( \\frac{Re_h}{678} \\right )^{8.4}} \n" }, { "math_id": 17, "text": "b=\\frac{1}{1+\\left ( \\frac{Re_h}{150\\left ( \\frac{R_h}{\\epsilon} \\right )} \\right )^{1.8}} \n" }, { "math_id": 18, "text": "W(1.35Re_h)=\\ln{1.35Re_h}-\\ln{\\ln{1.35Re_h}}+\\left ( \\frac{\\ln{\\ln{1.35Re_h}}}{\\ln{1.35Re_h}} \\right )+\n\\left ( \\frac{\\ln{[\\ln{1.35Re_h}]^2-2\\ln{\\ln{1.35Re_h}}}}{2[\\ln{1.35Re_h}]^2} \\right )\n" }, { "math_id": 19, "text": " \\frac{1}{\\sqrt {f}} = -1.8 \\log \\left[ \\left( \\frac{\\varepsilon/D}{3.7} \\right)^{1.11} + \\frac{6.9}{\\mathrm{Re}} \\right] " }, { "math_id": 20, "text": " f = \\frac{0.25}{\\left[\\log\\left (\\frac{\\varepsilon/D}{3.7} + \\frac{5.74}{\\mathrm{Re}^{0.9}}\\right)\\right]^2}" }, { "math_id": 21, "text": " A = -2\\log\\left( \\frac{\\varepsilon/D}{3.7} + {12\\over \\mathrm{Re}}\\right) " }, { "math_id": 22, "text": " B = -2\\log \\left(\\frac{\\varepsilon/D}{3.7} + {2.51 A \\over \\mathrm{Re}}\\right) " }, { "math_id": 23, "text": " C = -2\\log \\left(\\frac{\\varepsilon/D}{3.7} + {2.51 B \\over \\mathrm{Re}}\\right) " }, { "math_id": 24, "text": " \\frac{1}{\\sqrt{f}} = A - \\frac{(B - A)^2}{C - 2B + A} " }, { "math_id": 25, "text": " a = {2 \\over \\ln(10)}" }, { "math_id": 26, "text": " b = \\frac{\\varepsilon/D}{3.7} " }, { "math_id": 27, "text": " d = {\\ln(10)\\mathrm{Re}\\over 5.02} " }, { "math_id": 28, "text": " s = {bd + \\ln(d)} " }, { "math_id": 29, "text": " q = {{s}^{s/(s+1)}} " }, { "math_id": 30, "text": " g = {bd + \\ln{d \\over q}} " }, { 
"math_id": 31, "text": " z = {\\ln{q \\over g}} " }, { "math_id": 32, "text": " D_{LA} = z{{g\\over {g+1}}} " }, { "math_id": 33, "text": " D_{CFA} = D_{LA} \\left(1 + \\frac{z/2}{(g+1)^2+(z/3)(2g-1)}\\right) " }, { "math_id": 34, "text": " \\frac{1}{\\sqrt {f}} = {a\\left[ \\ln\\left( d / q \\right) + D_{CFA} \\right] } " }, { "math_id": 35, "text": " S = \\ln\\frac{\\mathrm{Re}}{\\mathrm{1.816\\ln\\frac{1.1\\mathrm{Re}}{ \\ln\\left( 1+1.1\\mathrm{Re} \\right) }}}" }, { "math_id": 36, "text": " \\frac{1}{\\sqrt {f}} = -2\\log \\left(\\frac{\\varepsilon/D}{3.71} + {2.18 S \\over \\mathrm{Re}}\\right) " }, { "math_id": 37, "text": "\\omega" }, { "math_id": 38, "text": "\\displaystyle\\frac{1}{\\sqrt{f}}\\approx 0.8686\\cdot \\left[ B-C+\\displaystyle\\frac{1.038\\cdot C}{\\mathrm{0.332+}\\,x}\\right] \\," }, { "math_id": 39, "text": "A\\approx \\displaystyle \\frac{Re\\cdot \\epsilon/D }{8.0884}" }, { "math_id": 40, "text": "B\\approx \\mathrm{ln}\\,\\left( Re\\right) -0.7794" }, { "math_id": 41, "text": "C=" }, { "math_id": 42, "text": "\\mathrm{ln}\\,\\left( x\\right)" }, { "math_id": 43, "text": "x=A+B" }, { "math_id": 44, "text": "\\displaystyle\\frac{1}{\\sqrt{f}}\\approx 0.8685972\\cdot \\left[ B-C+\\displaystyle\\frac{C}{x-0.5588\\cdot C+1.2079}\\, \\right]" }, { "math_id": 45, "text": "A\\approx \\displaystyle \\frac{Re\\cdot \\epsilon/D }{8.0897}" }, { "math_id": 46, "text": "B\\approx \\mathrm{ln}\\,\\left( Re\\right) -0.779626" }, { "math_id": 47, "text": " A = -2\\log\\left( \\frac{\\varepsilon/D}{3.7} + {4.5547\\over \\mathrm{Re^{0.8784}}}\\right) " }, { "math_id": 48, "text": "f = 0.3164 \\mathrm{Re}^{-{1 \\over 4}}" }, { "math_id": 49, "text": "f = 0.316 \\mathrm{Re}^{-{1 \\over 4}} + 0.0075\\sqrt{\\frac {D}{2 R_c}}" }, { "math_id": 50, "text": "R_c = R\\left[1 + \\left(\\frac{H}{2 \\pi R} \\right)^2\\right]" }, { "math_id": 51, "text": " f = \\left \\lbrace \\left(\\frac{64}{\\mathrm{Re}}\\right)^{8} + 9.5 \\left[ \\ln \\left(\\frac{\\varepsilon}{{3.7}{D}} + \\frac{5.74}{\\mathrm{Re}^{0.9}} \\right) - \\left(\\frac{2500}{\\mathrm{Re}}\\right)^{6}\\right]^{-16} \\right \\rbrace ^{\\frac{1}{8}}\n" } ]
https://en.wikipedia.org/wiki?curid=13733769
13735033
Helium atom scattering
Helium atom scattering (HAS) is a surface analysis technique used in materials science. It provides information about the surface structure and lattice dynamics of a material by measuring the diffracted atoms from a monochromatic helium beam incident on the sample. History. The first recorded helium diffraction experiment was completed in 1930 by Immanuel Estermann and Otto Stern on the (100) crystal face of lithium fluoride. This experimentally established the feasibility of atom diffraction when the de Broglie wavelength, λ, of the impinging atoms is on the order of the interatomic spacing of the material. At the time, the major limit to the experimental resolution of this method was due to the large velocity spread of the helium beam. It wasn't until the development of high pressure nozzle sources capable of producing intense and strongly monochromatic beams in the 1970s that HAS gained popularity for probing surface structure. Interest in studying the collision of rarefied gases with solid surfaces was helped by a connection with aeronautics and space problems of the time. Plenty of studies showing the fine structures in the diffraction pattern of materials using helium atom scattering were published in the 1970s. However, it wasn't until a third generation of nozzle beam sources was developed, around 1980, that studies of surface phonons could be made by helium atom scattering. These nozzle beam sources were capable of producing helium atom beams with an energy resolution of less than 1meV, making it possible to explicitly resolve the very small energy changes resulting from the inelastic collision of a helium atom with the vibrational modes of a solid surface, so HAS could now be used to probe lattice dynamics. The first measurement of such a surface phonon dispersion curve was reported in 1981, leading to a renewed interest in helium atom scattering applications, particularly for the study of surface dynamics. Basic principles. Surface sensitivity. Generally speaking, surface bonding is different from the bonding within the bulk of a material. In order to accurately model and describe the surface characteristics and properties of a material, it is necessary to understand the specific bonding mechanisms at work at the surface. To do this, one must employ a technique that is able to probe only the surface, we call such a technique "surface-sensitive". That is, the 'observing' particle (whether it be an electron, a neutron, or an atom) needs to be able to only 'see' (gather information from) the surface. If the penetration depth of the incident particle is too deep into the sample, the information it carries out of the sample for detection will contain contributions not only from the surface, but also from the bulk material. While there are several techniques that probe only the first few monolayers of a material, such as low-energy electron diffraction (LEED), helium atom scattering is unique in that it does not penetrate the surface of the sample at all! In fact, the scattering 'turnaround' point of the helium atom is 3-4 angstroms above the surface plane of atoms on the material. Therefore, the information carried out in the scattered helium atom comes solely from the very surface of the sample. A visual comparison of helium scattering and electron scattering is shown below: Helium at thermal energies can be modeled classically as scattering from a hard potential wall, with the location of scattering points representing a constant electron density surface. 
Since single scattering dominates the helium-surface interactions, the collected helium signal easily gives information on the surface structure without the complications of considering multiple electron scattering events (such as in LEED). Scattering mechanism. A qualitative sketch of the elastic one-dimensional interaction potential between the incident helium atom and an atom on the surface of the sample is shown here: This potential can be broken down into an attractive portion due to Van der Waals forces, which dominates over large separation distances, and a steep repulsive force due to electrostatic repulsion of the positive nuclei, which dominates the short distances. To modify the potential for a two-dimensional surface, a function is added to describe the surface atomic corrugations of the sample. The resulting three-dimensional potential can be modeled as a corrugated Morse potential as: formula_0 The first term is for the laterally-averaged surface potential - a potential well with a depth "D" at the minimum of "z" = "z"m and a fitting parameter "α", and the second term is the repulsive potential modified by the corrugation function, "ξ"("x","y"), with the same periodicity as the surface and fitting parameter "β". Helium atoms, in general, can be scattered either elastically (with no energy transfer to or from the crystal surface) or inelastically through excitation or deexcitation of the surface vibrational modes (phonon creation or annihilation). Each of these scattering results can be used in order to study different properties of a material's surface. Why use helium atoms? There are several advantages to using helium atoms as compared with x-rays, neutrons, and electrons to probe a surface and study its structures and phonon dynamics. As mentioned previously, the lightweight helium atoms at thermal energies do not penetrate into the bulk of the material being studied. This means that in addition to being strictly surface-sensitive they are truly non-destructive to the sample. Their de Broglie wavelength is also on the order of the interatomic spacing of materials, making them ideal probing particles. Since they are neutral, helium atoms are insensitive to surface charges. As a noble gas, the helium atoms are chemically inert. When used at thermal energies, as is the usual scenario, the helium atomic beam is an inert probe (chemically, electrically, magnetically, and mechanically). It is therefore capable of studying the surface structure and dynamics of a wide variety of materials, including those with reactive or metastable surfaces. A helium atom beam can even probe surfaces in the presence of electromagnetic fields and during ultra-high vacuum surface processing without interfering with the ongoing process. Because of this, helium atoms can be useful to make measurements of sputtering or annealing, and adsorbate layer depositions. Finally, because the thermal helium atom has no rotational and vibrational degrees of freedom and no available electronic transitions, only the translational kinetic energy of the incident and scattered beam need be analyzed in order to extract information about the surface. Instrumentation. The accompanying figure is a general schematic of a helium atom scattering experimental setup. It consists of a nozzle beam source, an ultra high vacuum scattering chamber with a crystal manipulator, and a detector chamber. Every system can have a different particular arrangement and setup, but most will have this basic structure. Sources. 
The helium atom beam, with a very narrow energy spread of less than 1 meV, is created through free adiabatic expansion of helium at a pressure of ~200 bar into a low-vacuum chamber through a small ~5–10 μm nozzle. Depending on the system operating temperature range, typical helium atom energies produced can be 5–200 meV. A conical aperture between A and B called the skimmer extracts the center portion of the helium beam. At this point, the atoms of the helium beam should be moving with nearly uniform velocity. Also contained in section B is a chopper system, which is responsible for creating the beam pulses needed to generate the time-of-flight measurements to be discussed later. Scattering chamber. The scattering chamber, area C, generally contains the crystal manipulator and any other analytical instruments that can be used to characterize the crystal surface. Equipment that can be included in the main scattering chamber includes a LEED screen (to make complementary measurements of the surface structure), an Auger analysis system (to determine the contamination level of the surface), a mass spectrometer (to monitor the vacuum quality and residual gas composition), and, for working with metal surfaces, an ion gun (for sputter cleaning of the sample surface). In order to maintain clean surfaces, the pressure in the scattering chamber needs to be in the range of 10^-8 to 10^-9 Pa. This requires the use of turbomolecular or cryogenic vacuum pumps. Crystal manipulator. The crystal manipulator allows for at least three different motions of the sample. The azimuthal rotation allows the crystal to change the direction of the surface atoms, the tilt angle is used to set the normal of the crystal to be in the scattering plane, and the rotation of the manipulator around the "z"-axis alters the beam incidence angle. The crystal manipulator should also incorporate a system to control the temperature of the crystal. Detector. After the beam scatters off the crystal surface, it goes into the detector area "D". The most commonly used detector setup is an electron bombardment ion source followed by a mass filter and an electron multiplier. The beam is directed through a series of differential pumping stages that improve the signal-to-noise ratio before hitting the detector. A time-of-flight analyzer can follow the detector to take energy loss measurements. Elastic measurements. Under conditions for which elastic diffractive scattering dominates, the relative angular positions of the diffraction peaks reflect the geometric properties of the surface being examined. That is, the locations of the diffraction peaks reveal the symmetry of the two-dimensional space group that characterizes the observed surface of the crystal. The width of the diffraction peaks reflects the energy spread of the beam. The elastic scattering is governed by two kinematic conditions - conservation of energy and conservation of the momentum component parallel to the crystal surface: formula_1 formula_2 Here G is a reciprocal lattice vector, kG and ki are the final and initial (incident) wave vectors of the helium atom. The Ewald sphere construction will determine the diffracted beams to be seen and the scattering angles at which they will appear. A characteristic diffraction pattern will appear, determined by the periodicity of the surface, in a similar manner to that seen for Bragg scattering in electron and x-ray diffraction.
Most helium atom scattering studies will scan the detector in a plane defined by the incoming atomic beam direction and the surface normal, reducing the Ewald sphere to a circle of radius "R"="k"0 intersecting only reciprocal lattice rods that lie in the scattering plane as shown here: The intensities of the diffraction peaks provide information about the static gas-surface interaction potentials. Measuring the diffraction peak intensities under different incident beam conditions can reveal the surface corrugation (the surface electron density) of the outermost atoms on the surface. Note that the detection of the helium atoms is much less efficient than for electrons, so the scattered intensity can only be determined for one point in k-space at a time. For an ideal surface, there should be no elastic scattering intensity between the observed diffraction peaks. If there is intensity seen here, it is due to a surface imperfection, such as steps or adatoms. From the angular position, width and intensity of the peaks, information is gained regarding the surface structure and symmetry, and the ordering of surface features. Inelastic measurements. The inelastic scattering of the helium atom beam reveals the surface phonon dispersion for a material. At scattering angles far away from the specular or diffraction angles, the scattering intensity of the ordered surface is dominated by inelastic collisions. In order to study the inelastic scattering of the helium atom beam due only to single-phonon contributions, an energy analysis needs to be made of the scattered atoms. The most popular way to do this is through the use of time-of-flight (TOF) analysis. The TOF analysis requires the beam to be pulsed through the mechanical chopper, producing collimated beam 'packets' that have a 'time-of-flight' (TOF) to travel from the chopper to the detector. The beams that scatter inelastically will lose some energy in their encounter with the surface and therefore have a different velocity after scattering than they were incident with. The creation or annihilation of surface phonons can be measured, therefore, by the shifts in the energy of the scattered beam. By changing the scattering angles or incident beam energy, it is possible to sample inelastic scattering at different values of energy and momentum transfer, mapping out the dispersion relations for the surface modes. Analyzing the dispersion curves reveals sought-after information about the surface structure and bonding. A TOF analysis plot would show intensity peaks as a function of time. The main peak (with the highest intensity) is that for the unscattered helium beam 'packet'. A peak to the left is that for the annihilation of a phonon. If a phonon creation process occurred, it would appear as a peak to the right: The qualitative sketch above shows what a time-of-flight plot might look like near a diffraction angle. However, as the crystal rotates away from the diffraction angle, the elastic (main) peak drops in intensity. The intensity never shrinks to zero even far from diffraction conditions, however, due to incoherent elastic scattering from surface defects. The intensity of the incoherent elastic peak and its dependence on scattering angle can therefore provide useful information about surface imperfections present on the crystal. 
The kinematics of the phonon annihilation or creation process are extremely simple - conservation of energy and momentum can be combined to yield an equation for the energy exchange Δ"E" and momentum exchange q during the collision process. This inelastic scattering process is described as a phonon of energy Δ"E" = ħ"ω" and wavevector q. The vibrational modes of the lattice can then be described by the dispersion relations "ω"(q), which give the possible phonon frequencies ω as a function of the phonon wavevector q. In addition to detecting surface phonons, because of the low energy of the helium beam, low-frequency vibrations of adsorbates can be detected as well, leading to the determination of their potential energy.
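As a rough numerical illustration of the elastic kinematic conditions formula_1 and formula_2 above, the following Python sketch (not part of the cited literature) computes the wavevector of a thermal helium beam from its de Broglie wavelength and lists the in-plane diffraction angles allowed by the parallel-momentum condition for a one-dimensional surface periodicity; the beam energy, incidence angle and lattice constant are arbitrary example values.

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_HE = 6.6464e-27       # mass of a 4He atom, kg
EV = 1.602176634e-19    # joules per electronvolt

def wavevector(energy_mev):
    """Wavevector magnitude (1/m) of a helium atom with the given beam energy in meV."""
    energy_j = energy_mev * 1e-3 * EV
    wavelength = H / math.sqrt(2.0 * M_HE * energy_j)   # de Broglie wavelength
    return 2.0 * math.pi / wavelength

def in_plane_diffraction_angles(energy_mev, theta_i_deg, lattice_const_m):
    """Final angles (degrees from the surface normal) allowed by the elastic
    conditions |k_f| = |k_i| and k_parallel,f = k_parallel,i + G for a
    one-dimensional surface periodicity with lattice constant a."""
    k = wavevector(energy_mev)
    g = 2.0 * math.pi / lattice_const_m      # smallest reciprocal lattice vector
    allowed = {}
    for n in range(-10, 11):                 # scan a generous range of diffraction orders
        s = math.sin(math.radians(theta_i_deg)) + n * g / k
        if abs(s) <= 1.0:
            allowed[n] = math.degrees(math.asin(s))
    return allowed

# Example values only: a 20 meV beam at 45 degrees incidence on a surface with a
# 3 angstrom period (negative angles lie on the other side of the surface normal).
print(in_plane_diffraction_angles(20.0, 45.0, 3.0e-10))
```

For a 20 meV beam the de Broglie wavelength comes out close to 1 Å, comparable to typical interatomic spacings, which is why only a handful of diffraction orders satisfy the condition.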
[ { "math_id": 0, "text": "V(z)=D \\big\\{\\exp \\left[-2\\alpha (z-z_m)\\right] - 2 \\exp \\left[-\\alpha(z-z_m>)\\right]\\big\\} + 2\\beta D \\exp \\left[2 \\alpha (z-z_m) \\right] \\xi(x,y)" }, { "math_id": 1, "text": "E_f = E_i \\Rightarrow \\mathbf k_i^2 = \\mathbf k_{\\mathbf G}^2 = k_{\\mathbf G z}^2 + k_{\\parallel \\mathbf G}^2" }, { "math_id": 2, "text": "\\mathbf k_{\\parallel \\mathbf G} = \\mathbf k_{\\parallel i} + \\mathbf G" } ]
https://en.wikipedia.org/wiki?curid=13735033
13737856
Molar mass constant
Physical constant defined as the ratio of the molar mass and relative mass The molar mass constant, usually denoted by "M"u, is a physical constant defined as one twelfth of the molar mass of carbon-12: "M"u = "M"(12C)/12. The molar mass of an element or compound is its relative atomic mass (atomic weight) or relative molecular mass (molecular weight) multiplied by the molar mass constant. The mole and the atomic mass unit (dalton) were originally defined in the International System of Units (SI) in such a way that the constant was exactly 1 g/mol, which made the numerical value of the molar mass of a substance, in grams per mole, equal to the average mass of its constituent particles (atoms, molecules, or formula units) relative to the atomic mass constant, "m"u. Thus, for example, the average molecular mass of water is approximately 18.015 daltons, making the mass of one mole of water approximately 18.015 grams. On 20 May 2019, the SI definition of mole changed in such a way that the molar mass constant remains nearly, but is no longer exactly, 1 g/mol. However, the difference is insignificant for all practical purposes. According to the SI, the value of "M"u now depends on the mass of one atom of carbon-12, which must be determined experimentally. The 2022 CODATA recommended value of "M"u is 1.000 000 001 05(31) × 10^-3 kg/mol. The molar mass constant is important in writing dimensionally correct equations. While one may informally say "the molar mass of an element "M" is the same as its atomic weight "A"", the atomic weight (relative atomic mass) "A" is a dimensionless quantity, whereas the molar mass "M" has the units of mass per mole. Formally, "M" is "A" times the molar mass constant "M"u. Prior to 2019 redefinition. The molar mass constant was unusual (but not unique) among physical constants by having an exactly defined value rather than being measured experimentally. From the old definition of the mole, the molar mass of carbon-12 was exactly 12 g/mol. From the definition of relative atomic mass, the relative atomic mass of carbon-12, that is the atomic weight of a sample of pure carbon-12, is exactly 12. The molar mass constant was thus given by formula_0 The molar mass constant is related to the mass of a carbon-12 atom in grams: formula_1 The Avogadro constant being a fixed value, the mass of a carbon-12 atom depends on the accuracy and precision of the molar mass constant. Post-2019 redefinition. Because the 2019 redefinition of SI base units gave the Avogadro constant an exact numerical value, the value of the molar mass constant is no longer exact, and will be subject to increasing precision with future experimentation. One consequence of this change is that the previously defined relationship between the mass of the 12C atom, the dalton, the kilogram, and the Avogadro number is no longer exact. One of the following had to change: either the mass of a 12C atom is exactly 12 daltons, or the number of daltons in a gram is exactly the numerical value of the Avogadro number. The wording of the 9th SI Brochure implies that the first statement remains valid, which means the second is no longer exactly true. The molar mass constant is still very close to 1 g/mol, but no longer exactly equal to it. Appendix 2 to the 9th SI Brochure states that "the molar mass of carbon 12, "M"(12C), is equal to 0.012 kg/mol within a relative standard uncertainty equal to that of the recommended value of "N"A"h" at the time this Resolution was adopted, namely 4.5 × 10^-10, and that in the future its value will be determined experimentally", which makes no reference to the dalton and is consistent with either statement. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
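As a simple numerical check of the relation formula_1 between the molar mass constant, the Avogadro constant and the mass of a carbon-12 atom, the following Python snippet (illustrative only) uses "M"u ≈ 1 g/mol together with the exact post-2019 value of "N"A:

```python
# Mass of one carbon-12 atom from m(12C) = 12 * M_u / N_A.
N_A = 6.02214076e23    # Avogadro constant, 1/mol (exact since 2019)
M_u = 1.0e-3           # molar mass constant, kg/mol (approximate)

m_C12 = 12 * M_u / N_A
print(m_C12)           # about 1.9926e-26 kg

# Molar mass of water from its relative molecular mass (about 18.015):
M_water = 18.015 * M_u # roughly 18.015 g/mol, i.e. 1.8015e-2 kg/mol
print(M_water)
```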
[ { "math_id": 0, "text": "M_{\\text{u}} = {\\text{molar mass }[M( ^{12}\\mathrm{C} )]\\over \\text{relative atomic mass }[A_{\\text{r}}( ^{12}\\mathrm{C} )]} = {{12\\ {\\rm g/mol}}\\over 12}=1\\ \\rm g/mol" }, { "math_id": 1, "text": "m({}^{12}{\\text{C}}) = \\frac{12 \\times M_{\\text{u}}}{N_{\\text{A}}}" } ]
https://en.wikipedia.org/wiki?curid=13737856
13739701
Panjer recursion
The Panjer recursion is an algorithm to compute the probability distribution approximation of a compound random variable formula_0 where both formula_1 and formula_2 are random variables of special types. In more general cases the distribution of "S" is a compound distribution. The recursion for the special cases considered was introduced in a paper by Harry Panjer (Distinguished Emeritus Professor, University of Waterloo). It is heavily used in actuarial science (see also systemic risk). Preliminaries. We are interested in the compound random variable formula_0 where formula_1 and formula_2 fulfill the following preconditions. Claim size distribution. We assume the formula_2 to be i.i.d. and independent of formula_1. Furthermore, the formula_2 have to be distributed on a lattice formula_3 with lattice width formula_4: formula_5 In actuarial practice, formula_2 is obtained by discretisation of the claim density function (upper, lower...). Claim number distribution. The number of claims "N" is a random variable, which is said to have a "claim number distribution", and which can take values 0, 1, 2, etc. For the "Panjer recursion", the probability distribution of "N" has to be a member of the Panjer class, otherwise known as the (a,b,0) class of distributions. This class consists of all counting random variables which fulfill the following relation: formula_6 for some formula_7 and formula_8 which fulfill formula_9. The initial value formula_10 is determined such that formula_11 The Panjer recursion makes use of this iterative relationship to specify a recursive way of constructing the probability distribution of "S". In the following, formula_12 denotes the probability generating function of "N"; for this see the table in (a,b,0) class of distributions. In the case where the claim number is known, see the De Pril algorithm, which is suitable for computing the sum distribution of formula_13 discrete random variables. Recursion. The algorithm now gives a recursion to compute the formula_14. The starting value is formula_15 with the special cases formula_16 and formula_17 and the recursion proceeds with formula_18 Example. The following example shows the approximated density of formula_19 where formula_20 and formula_21 with lattice width "h" = 0.04. (See Fréchet distribution.) As observed, an issue may arise at the initialization of the recursion. Guégan and Hassani (2009) have proposed a solution to deal with that issue.
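The recursion is straightforward to implement. Below is a minimal Python sketch (not taken from Panjer's paper) for the compound Poisson case, i.e. with a = 0 and b = λ, where the starting value reduces to exp(−λ(1 − f0)); the claim-size probabilities and the Poisson parameter in the example are arbitrary illustrative inputs.

```python
import math

def panjer_compound_poisson(lam, f, n_max):
    """Panjer recursion for a compound Poisson sum S = X_1 + ... + X_N,
    N ~ Poisson(lam), with lattice claim-size probabilities
    f[j] = P[X = h*j] for j = 0..len(f)-1.
    Returns g[k] = P[S = h*k] for k = 0..n_max."""
    g = [0.0] * (n_max + 1)
    g[0] = math.exp(-lam * (1.0 - f[0]))   # p_0 * exp(f_0 * b) with a = 0, b = lam
    for k in range(1, n_max + 1):
        total = 0.0
        for j in range(1, min(k, len(f) - 1) + 1):
            total += (lam * j / k) * f[j] * g[k - j]
        g[k] = total
    return g

# Illustrative example: on average 2 claims, claim sizes of 1, 2 or 3 lattice
# units with probabilities 0.5, 0.3, 0.2 (and f_0 = 0).
g = panjer_compound_poisson(2.0, [0.0, 0.5, 0.3, 0.2], 20)
print(sum(g))   # should be close to 1 once n_max is large enough
```

The same structure extends to any member of the (a,b,0) class by replacing the starting value and the weight (a + b·j/k)/(1 − f0·a) as given in the recursion above.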
[ { "math_id": 0, "text": "S = \\sum_{i=1}^N X_i\\," }, { "math_id": 1, "text": "N\\," }, { "math_id": 2, "text": "X_i\\," }, { "math_id": 3, "text": "h \\mathbb{N}_0\\," }, { "math_id": 4, "text": "h>0\\," }, { "math_id": 5, "text": "f_k = P[X_i = hk].\\," }, { "math_id": 6, "text": "P[N=k] = p_k= \\left(a + \\frac{b}{k} \\right) \\cdot p_{k-1},~~k \\ge 1.\\, " }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "a+b \\ge 0\\," }, { "math_id": 10, "text": "p_0\\," }, { "math_id": 11, "text": "\\sum_{k=0}^\\infty p_k = 1.\\," }, { "math_id": 12, "text": "W_N(x)\\," }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "g_k =P[S = hk] \\," }, { "math_id": 15, "text": "g_0 = W_N(f_0)\\," }, { "math_id": 16, "text": "g_0=p_0\\cdot \\exp(f_0 b) \\quad \\text{ if } \\quad a = 0,\\," }, { "math_id": 17, "text": "g_0=\\frac{p_0}{(1-f_0a)^{1+b/a}} \\quad \\text{ for } \\quad a \\ne 0,\\," }, { "math_id": 18, "text": "g_k=\\frac{1}{1-f_0a}\\sum_{j=1}^k \\left( a+\\frac{b\\cdot j}{k} \\right) \\cdot f_j \\cdot g_{k-j}.\\," }, { "math_id": 19, "text": "\\scriptstyle S \\,=\\, \\sum_{i=1}^N X_i" }, { "math_id": 20, "text": "\\scriptstyle N\\, \\sim\\, \\text{NegBin}(3.5,0.3)\\," }, { "math_id": 21, "text": "\\scriptstyle X \\,\\sim \\,\\text{Frechet}(1.7,1)" } ]
https://en.wikipedia.org/wiki?curid=13739701
13739985
Lehmer's conjecture
Proposed lower bound on the Mahler measure for polynomials with integer coefficients Lehmer's conjecture, also known as Lehmer's Mahler measure problem, is a problem in number theory raised by Derrick Henry Lehmer. The conjecture asserts that there is an absolute constant formula_0 such that every polynomial with integer coefficients formula_1 satisfies one of the following properties: either the Mahler measure formula_2 of formula_3 is at least formula_4, or formula_3 is a product of cyclotomic polynomials and a power of the monomial formula_5, in which case formula_6. There are a number of definitions of the Mahler measure, one of which is to factor formula_3 over formula_7 as formula_8 and then set formula_9 The smallest known Mahler measure (greater than 1) is for "Lehmer's polynomial" formula_10 for which the Mahler measure is the Salem number formula_11 It is widely believed that this example represents the true minimal value: that is, formula_12 in Lehmer's conjecture. Motivation. Consider the Mahler measure for one variable; Jensen's formula shows that if formula_13 then formula_9 In this paragraph, denote formula_14, which is also called the Mahler measure. If formula_15 has integer coefficients, this shows that formula_16 is an algebraic number so formula_17 is the logarithm of an algebraic integer. It also shows that formula_18 and that if formula_19 then formula_15 is a product of cyclotomic polynomials, i.e. monic polynomials all of whose roots are roots of unity, or a monomial polynomial of formula_5, i.e. a power formula_20 for some formula_21. Lehmer noticed that formula_19 is an important value in the study of the integer sequences formula_22 for monic formula_15. If formula_15 does not vanish on the circle then formula_23. If formula_15 does vanish on the circle but not at any root of unity, then the same convergence holds by Baker's theorem (in fact an earlier result of Gelfond is sufficient for this, as pointed out by Lind in connection with his study of quasihyperbolic toral automorphisms). As a result, Lehmer was led to ask whether there is a constant formula_24 such that formula_25 provided formula_15 is not cyclotomic, or, given formula_24, whether there are formula_15 with integer coefficients for which formula_26. Some positive answers have been provided as follows, but Lehmer's conjecture is not yet completely proved and is still a question of much interest. Partial results. Let formula_1 be an irreducible monic polynomial of degree formula_27. Smyth proved that Lehmer's conjecture is true for all polynomials that are not reciprocal, i.e., all polynomials satisfying formula_28. Blanksby and Montgomery, and independently Stewart, proved that there is an absolute constant formula_29 such that either formula_6 or formula_30 Dobrowolski improved this to formula_31 Dobrowolski obtained the value "C" ≥ 1/1200 and asymptotically "C" &gt; 1 - ε for all sufficiently large "D". Voutier in 1996 obtained "C" ≥ 1/4 for "D" ≥ 2. Elliptic analogues. Let formula_32 be an elliptic curve defined over a number field formula_33, and let formula_34 be the canonical height function. The canonical height is the analogue for elliptic curves of the function formula_35. It has the property that formula_36 if and only if formula_37 is a torsion point in formula_38. The elliptic Lehmer conjecture asserts that there is a constant formula_39 such that formula_40 for all non-torsion points formula_41, where formula_42. If the elliptic curve "E" has complex multiplication, then the analogue of Dobrowolski's result holds: formula_43 due to Laurent. For arbitrary elliptic curves, the best known result is formula_44 due to Masser.
For elliptic curves with non-integral j-invariant, this has been improved to formula_45 by Hindry and Silverman. Restricted results. Stronger results are known for restricted classes of polynomials or algebraic numbers. If "P"("x") is not reciprocal then formula_46 and this is clearly best possible. If further all the coefficients of "P" are odd then formula_47 For any algebraic number "α", let formula_48 be the Mahler measure of the minimal polynomial formula_49 of "α". If the field Q("α") is a Galois extension of Q, then Lehmer's conjecture holds for formula_49. Relation to structure of compact group automorphisms. The measure-theoretic entropy of an ergodic automorphism of a compact metrizable abelian group is known to be given by the logarithmic Mahler measure of a polynomial with integer coefficients if it is finite. As pointed out by Lind, this means that the set of possible values of the entropy of such actions is either all of formula_50 or a countable set depending on the solution to Lehmer's problem. Lind also showed that the infinite-dimensional torus either has ergodic automorphisms of finite positive entropy or only has automorphisms of infinite entropy depending on the solution to Lehmer's problem. Since an ergodic compact group automorphism is measurably isomorphic to a Bernoulli shift, and the Bernoulli shifts are classified up to measurable isomorphism by their entropy by Ornstein's theorem, this means that the moduli space of all ergodic compact group automorphisms up to measurable isomorphism is either countable or uncountable depending on the solution to Lehmer's problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
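A quick numerical way to reproduce the value formula_11 quoted above is to compute the roots of Lehmer's polynomial and take the product of the absolute values of those lying outside the unit circle, per the definition formula_9. The short Python sketch below (using NumPy, purely illustrative) does this for any integer polynomial:

```python
import numpy as np

def mahler_measure(coeffs):
    """Mahler measure of a polynomial with the given coefficients
    (highest degree first), computed as |a_0| * prod(max(1, |root|))."""
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * np.prod(np.maximum(1.0, np.abs(roots)))

# Lehmer's polynomial x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
lehmer = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
print(mahler_measure(lehmer))   # approximately 1.17628...
```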
[ { "math_id": 0, "text": "\\mu>1" }, { "math_id": 1, "text": "P(x)\\in\\mathbb{Z}[x]" }, { "math_id": 2, "text": "\\mathcal{M}(P(x))" }, { "math_id": 3, "text": "P(x)" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "\\mathcal{M}(P(x))=1" }, { "math_id": 7, "text": "\\mathbb{C}" }, { "math_id": 8, "text": "P(x)=a_0 (x-\\alpha_1)(x-\\alpha_2)\\cdots(x-\\alpha_D)," }, { "math_id": 9, "text": "\\mathcal{M}(P(x)) = |a_0| \\prod_{i=1}^{D} \\max(1,|\\alpha_i|)." }, { "math_id": 10, "text": "P(x)= x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1 \\,," }, { "math_id": 11, "text": "\\mathcal{M}(P(x))=1.176280818\\dots \\ ." }, { "math_id": 12, "text": "\\mu=1.176280818\\dots" }, { "math_id": 13, "text": "P(x)=a_0 (x-\\alpha_1)(x-\\alpha_2)\\cdots(x-\\alpha_D)" }, { "math_id": 14, "text": "m(P)=\\log(\\mathcal{M}(P(x))" }, { "math_id": 15, "text": "P" }, { "math_id": 16, "text": "\\mathcal{M}(P)" }, { "math_id": 17, "text": "m(P)" }, { "math_id": 18, "text": "m(P)\\ge0" }, { "math_id": 19, "text": "m(P)=0" }, { "math_id": 20, "text": "x^n" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "\\Delta_n=\\text{Res}(P(x), x^n-1)=\\prod^D_{i=1}(\\alpha_i^n-1)" }, { "math_id": 23, "text": "\\lim|\\Delta_n|^{1/n}=\\mathcal{M}(P)" }, { "math_id": 24, "text": "c>0" }, { "math_id": 25, "text": "m(P)>c" }, { "math_id": 26, "text": " 0<m(P)<c " }, { "math_id": 27, "text": "D" }, { "math_id": 28, "text": "x^DP(x^{-1})\\ne P(x)" }, { "math_id": 29, "text": "C>1" }, { "math_id": 30, "text": "\\log\\mathcal{M}(P(x))\\ge \\frac{C}{D\\log D}. " }, { "math_id": 31, "text": "\\log\\mathcal{M}(P(x))\\ge C\\left(\\frac{\\log\\log D}{\\log D}\\right)^3." }, { "math_id": 32, "text": "E/K" }, { "math_id": 33, "text": "K" }, { "math_id": 34, "text": "\\hat{h}_E:E(\\bar{K})\\to\\mathbb{R}" }, { "math_id": 35, "text": "(\\deg P)^{-1}\\log\\mathcal{M}(P(x))" }, { "math_id": 36, "text": "\\hat{h}_E(Q)=0" }, { "math_id": 37, "text": "Q" }, { "math_id": 38, "text": "E(\\bar{K})" }, { "math_id": 39, "text": "C(E/K)>0" }, { "math_id": 40, "text": "\\hat{h}_E(Q) \\ge \\frac{C(E/K)}{D}" }, { "math_id": 41, "text": "Q\\in E(\\bar{K})" }, { "math_id": 42, "text": "D=[K(Q):K]" }, { "math_id": 43, "text": "\\hat{h}_E(Q) \\ge \\frac{C(E/K)}{D} \\left(\\frac{\\log\\log D}{\\log D}\\right)^3 ," }, { "math_id": 44, "text": "\\hat{h}_E(Q) \\ge \\frac{C(E/K)}{D^3(\\log D)^2}," }, { "math_id": 45, "text": "\\hat{h}_E(Q) \\ge \\frac{C(E/K)}{D^2(\\log D)^2}," }, { "math_id": 46, "text": "M(P) \\ge M(x^3 -x - 1) \\approx 1.3247 " }, { "math_id": 47, "text": "M(P) \\ge M(x^2 -x - 1) \\approx 1.618 . " }, { "math_id": 48, "text": "M(\\alpha)" }, { "math_id": 49, "text": "P_\\alpha" }, { "math_id": 50, "text": "(0,\\infty]" } ]
https://en.wikipedia.org/wiki?curid=13739985
13740536
Bridge and torch problem
Logic puzzle The bridge and torch problem (also known as "The Midnight Train" and "Dangerous crossing") is a logic puzzle that deals with four people, a bridge and a torch. It is in the category of river crossing puzzles, where a number of objects must move across a river, with some constraints. Story. Four people come to a river in the night. There is a narrow bridge, and it can only hold two people at a time. They have one torch and, because it's night, the torch has to be used when crossing the bridge. Person A can cross the bridge in 1 minute, B in 2 minutes, C in 5 minutes, and D in 8 minutes. When two people cross the bridge together, they must move at the slower person's pace. The question is, can they all get across the bridge if the torch lasts only 15 minutes? Solution. An obvious first idea is that the cost of returning the torch to the people waiting to cross is an unavoidable expense which should be minimized. This strategy makes A the torch bearer, shuttling each person across the bridge: A and B cross (2 minutes), A returns (1), A and C cross (5), A returns (1), and A and D cross (8), for a total of 17 minutes. This strategy does not permit a crossing in 15 minutes. To find the correct solution, one must realize that forcing the two slowest people to cross individually wastes time which can be saved if they both cross together: A and B cross (2 minutes), A returns (1), C and D cross (8), B returns (2), and A and B cross (2), for a total of 15 minutes. A second equivalent solution swaps the return trips. Basically, the two fastest people cross together on the 1st and 5th trips, the two slowest people cross together on the 3rd trip, and EITHER of the fastest people returns on the 2nd trip, and the other fastest person returns on the 4th trip. Thus the minimum time for four people is given by the following mathematical equations: When formula_0, formula_1 formula_2 A semi-formal approach. Assume that a solution minimizes the total number of crossings. This gives a total of five crossings - three pair crossings and two solo-crossings. Also, assume we always choose the fastest for the solo-cross. First, we show that if the two slowest persons (C and D) cross separately, they accumulate a total crossing time of 15. This is done by taking persons A, C, &amp; D: C+A+D+A = 5+1+8+1=15. (Here we use A because we know that using A to cross both C and D separately is the most efficient.) But the time has elapsed and persons A and B are still on the starting side of the bridge and must cross. So it is not possible for the two slowest (C &amp; D) to cross separately. Second, we show that in order for C and D to cross together, they need to cross on the second pair-cross: i.e. not C or D, so A and B, must cross together first. Remember our assumption at the beginning states that we should minimize crossings and so we have five crossings - 3 pair-crossings and 2 single crossings. Assume that C and D cross first. But then C or D must cross back to bring the torch to the other side, and so whoever solo-crossed must cross again. Hence, they will cross separately. Also, it is impossible for them to cross together last, since this implies that one of them must have crossed previously, otherwise there would be three persons total on the start side. So, since there are only three choices for the pair-crossings and C and D cannot cross first or last, they must cross together on the second, or middle, pair-crossing. Putting all this together, A and B must cross first, since we know C and D cannot and we are minimizing crossings. Then, A must cross next, since we assume we should choose the fastest to make the solo-cross. Then we are at the second, or middle, pair-crossing so C and D must go.
Then we choose to send the fastest back, which is B. A and B are now on the start side and must cross for the last pair-crossing. This gives us B+A+D+B+B = 2+1+8+2+2 = 15. Variations and history. Several variations exist, with cosmetic differences such as differently named people, or variation in the crossing times or time limit. The torch itself may expire in a short time and so serve as the time limit. In a variation called "The Midnight Train", for example, person D needs 10 minutes instead of 8 to cross the bridge, and persons A, B, C and D, now called the four Gabrianni brothers, have 17 minutes to catch the midnight train. The puzzle is known to have appeared as early as 1981, in the book "Super Strategies For Puzzles and Games". In this version of the puzzle, A, B, C and D take 5, 10, 20, and 25 minutes, respectively, to cross, and the time limit is 60 minutes. In all these variations, the structure and solution of the puzzle remain the same. In the case where there is an arbitrary number of people with arbitrary crossing times, and the capacity of the bridge remains equal to two people, the problem has been completely analyzed by graph-theoretic methods. Martin Erwig from Oregon State University has used a variation of the problem to argue for the usability of the Haskell programming language over Prolog for solving search problems. The puzzle is also mentioned in Daniel Dennett's book "From Bacteria to Bach and Back" as his favorite example of a solution that is counter-intuitive. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
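The optimality argument above can also be checked mechanically. The following Python sketch (illustrative, not from any of the cited sources) runs a shortest-path search over crossing schedules for arbitrary crossing times and confirms the 15-minute optimum for times 1, 2, 5, 8; it assumes all crossing times are distinct, since the times themselves are used to identify the crossers.

```python
import heapq
from itertools import combinations, count

def min_crossing_time(times):
    """Minimum time for everyone to cross a bridge that holds at most two
    people, with one torch that must accompany every crossing; a pair walks
    at the slower person's pace.  Solved as a shortest path over states
    (people still on the start side, which side the torch is on)."""
    everyone = frozenset(times)
    best = {}                        # state -> best known time
    tiebreak = count()               # keeps heap entries comparable
    heap = [(0, next(tiebreak), everyone, True)]
    while heap:
        t, _, side, torch_on_start = heapq.heappop(heap)
        if not side:                 # everyone has crossed
            return t
        if t > best.get((side, torch_on_start), float("inf")):
            continue
        pool = side if torch_on_start else everyone - side
        # one or two people walk with the torch
        for group in list(combinations(pool, 1)) + list(combinations(pool, 2)):
            new_side = side - set(group) if torch_on_start else side | set(group)
            new_state = (new_side, not torch_on_start)
            new_t = t + max(group)
            if new_t < best.get(new_state, float("inf")):
                best[new_state] = new_t
                heapq.heappush(heap, (new_t, next(tiebreak), new_side, not torch_on_start))

print(min_crossing_time({1, 2, 5, 8}))     # prints 15
print(min_crossing_time({5, 10, 20, 25}))  # prints 60 for the 1981 variant mentioned above
```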
[ { "math_id": 0, "text": "A < B < C < D" }, { "math_id": 1, "text": "\\min(B + A + C + A + D, B + A + D + B + B)" }, { "math_id": 2, "text": "\\min(2A + B + C + D,A + 3B + D)" } ]
https://en.wikipedia.org/wiki?curid=13740536
13743194
Carbonylation
Chemical reaction which adds a C=O group onto a molecule In chemistry, carbonylation refers to reactions that introduce carbon monoxide (CO) into organic and inorganic substrates. Carbon monoxide is abundantly available and conveniently reactive, so it is widely used as a reactant in industrial chemistry. The term carbonylation also refers to oxidation of protein side chains. Organic chemistry. Several industrially useful organic chemicals are prepared by carbonylations, which can be highly selective reactions. Carbonylations produce organic carbonyls, i.e., compounds that contain the C=O functional group, such as aldehydes (−CHO), carboxylic acids (−CO2H) and esters (−CO2R). Carbonylations are the basis of many types of reactions, including hydroformylation and Reppe reactions. These reactions require metal catalysts, which bind and activate the CO. These processes involve transition metal acyl complexes as intermediates. Much of this theme was developed by Walter Reppe. Hydroformylation. Hydroformylation entails the addition of both carbon monoxide and hydrogen to unsaturated organic compounds, usually alkenes. The usual products are aldehydes: formula_0 The reaction requires metal catalysts that bind CO, forming intermediate metal carbonyls. Many of the commodity carboxylic acids, i.e. propionic, butyric, valeric, etc., as well as many of the commodity alcohols, i.e. propanol, butanol, amyl alcohol, are derived from aldehydes produced by hydroformylation. In this way, hydroformylation is a gateway from alkenes to oxygenates. Decarbonylation. Few organic carbonyls undergo spontaneous decarbonylation, but many can be induced to do so with appropriate catalysts. A common transformation involves the conversion of aldehydes to alkanes, usually catalyzed by metal complexes: formula_1 Few catalysts are highly active or exhibit broad scope. Acetic acid and acetic anhydride. Large-scale applications of carbonylation are the Monsanto acetic acid process and Cativa process, which convert methanol to acetic acid. In another major industrial process, acetic anhydride is prepared by a related carbonylation of methyl acetate. Oxidative carbonylation. Dimethyl carbonate and dimethyl oxalate are produced industrially using carbon monoxide and an oxidant, the combination acting in effect as a source of an electrophilic carbonyl ("CO2+") equivalent. formula_2 The oxidative carbonylation of methanol is catalyzed by copper(I) salts, which form transient carbonyl complexes. For the oxidative carbonylation of alkenes, palladium complexes are used. Hydrocarboxylation and hydroesterification. In hydrocarboxylation, alkenes and alkynes are the substrates. This method is used industrially to produce propionic acid from ethylene using nickel carbonyl as the catalyst: formula_3 In the industrial synthesis of ibuprofen, a benzylic alcohol is converted to the corresponding arylacetic acid via a Pd-catalyzed carbonylation: formula_4 Acrylic acid was once mainly prepared by the hydrocarboxylation of acetylene. Nowadays, however, the preferred route to acrylic acid entails the oxidation of propene, exploiting its low cost and the high reactivity of the allylic bonds. Hydroesterification is like hydrocarboxylation, but it uses alcohols in place of water. formula_5 The process is catalyzed by Herrmann's catalyst. Under similar conditions, other Pd-diphosphine complexes catalyze the formation of polyketones. Other reactions. The Koch reaction is a special case of the hydrocarboxylation reaction that does not rely on metal catalysts.
Instead, the process is catalyzed by strong acids such as sulfuric acid or the combination of phosphoric acid and boron trifluoride. The reaction is less applicable to simple alkenes. The industrial synthesis of glycolic acid is achieved in this way: formula_6 The conversion of isobutene to pivalic acid is also illustrative: formula_7 Alkyl, benzyl, vinyl, aryl, and allyl halides can also be carbonylated in the presence of carbon monoxide and suitable catalysts such as manganese, iron, or nickel powders. In the Collman reaction, an iron carbonyl complex serves as both metal catalyst and carbonyl source. Carbonylation in inorganic chemistry. Metal carbonyls, compounds with the formula M(CO)xLy (M = metal; L = other ligands), are prepared by carbonylation of transition metals. Iron and nickel powder react directly with CO to give Fe(CO)5 and Ni(CO)4, respectively. Most other metals form carbonyls less directly, such as from their oxides or halides. Metal carbonyls are widely employed as catalysts in the hydroformylation and Reppe processes discussed above. Inorganic compounds that contain CO ligands can also undergo decarbonylation, often via a photochemical reaction.
[ { "math_id": 0, "text": "\\ce{RCH=CH2 + H2} + {\\color{red}\\ce{CO}} \\longrightarrow \\ce{RCH2CH2}{\\color{red}\\ce{C}}\\ce{H}{\\color{red}\\ce{O}}" }, { "math_id": 1, "text": "\\ce{R}{\\color{red}\\ce{C}}\\ce{H}{\\color{red}\\ce{O}} \\longrightarrow \\ce{RH} + {\\color{red}\\ce{CO}}" }, { "math_id": 2, "text": "\\ce{4 CH3OH + O2} + 2\\ {\\color{red}\\ce{CO}} \\longrightarrow \\ce{2 (CH3O)2}{\\color{red}\\ce{CO}} + \\ce{2 H2O}" }, { "math_id": 3, "text": "\\ce{RCH=CH2 + H2O} + {\\color{red}\\ce{CO}} \\longrightarrow \\ce{RCH2CH2}{\\color{red}\\ce{CO}}\\ce{OH}" }, { "math_id": 4, "text": "\\ce{ArCH(CH3)OH} + {\\color{red}\\ce{CO}} \\longrightarrow \\ce{ArCH(CH3)}{\\color{red}\\ce{CO}}\\ce{OH}" }, { "math_id": 5, "text": "\\ce{C2H4} + {\\color{red}\\ce{CO}} + \\ce{MeOH -> CH3CH2}{\\color{red}\\ce{CO}}\\ce{OMe}" }, { "math_id": 6, "text": "\\ce{CH2O} + {\\color{red}\\ce{CO}} + \\ce{H2O -> HOCH2}{\\color{red}\\ce{CO}}\\ce{OH}" }, { "math_id": 7, "text": "\\ce{Me2C=CH2 + H2O} + {\\color{red}\\ce{CO}} \\longrightarrow \\ce{Me3C}{\\color{red}\\ce{CO}}\\ce{OH}" } ]
https://en.wikipedia.org/wiki?curid=13743194
1374330
Otoacoustic emission
Sound from the inner ear An otoacoustic emission (OAE) is a sound that is generated from within the inner ear. Having been predicted by Austrian astrophysicist Thomas Gold in 1948, its existence was first demonstrated experimentally by British physicist David Kemp in 1978, and otoacoustic emissions have since been shown to arise through a number of different cellular and mechanical causes within the inner ear. Studies have shown that OAEs disappear after the inner ear has been damaged, so OAEs are often used in the laboratory and the clinic as a measure of inner ear health. Broadly speaking, there are two types of otoacoustic emissions: spontaneous otoacoustic emissions (SOAEs), which occur without external stimulation, and evoked otoacoustic emissions (EOAEs), which require an evoking stimulus. Mechanism of occurrence. OAEs are considered to be related to the amplification function of the cochlea. In the absence of external stimulation, the activity of the cochlear amplifier increases, leading to the production of sound. Several lines of evidence suggest that, in mammals, outer hair cells are the elements that enhance cochlear sensitivity and frequency selectivity and hence act as the energy sources for amplification. Types. Spontaneous. Spontaneous otoacoustic emissions (SOAEs) are sounds that are emitted from the ear without external stimulation and are measurable with sensitive microphones in the external ear canal. At least one SOAE can be detected in approximately 35–50% of the population. The sounds are frequency-stable between 500 Hz and 4,500 Hz and have unstable volumes between -30 dB SPL and +10 dB SPL. The majority of those with SOAEs are unaware of them; however, 1–9% perceive a SOAE as an annoying tinnitus. It has been suggested that "The Hum" phenomena are SOAEs. Evoked. Evoked otoacoustic emissions are currently evoked using three different methodologies: stimulus-frequency OAEs, evoked with a single pure-tone stimulus; transient-evoked OAEs, evoked with a click or tone-burst stimulus; and distortion-product OAEs, evoked with a pair of simultaneous primary tones of frequencies formula_0 and formula_1 presented at a fixed ratio formula_2. In the distortion-product case, the evoked responses from these stimuli occur at frequencies (formula_3) mathematically related to the primary frequencies, with the two most prominent being formula_4 (the "cubic" distortion tone, most commonly used for hearing screening because it produces the most robust emission) and formula_5 (the "quadratic" distortion tone, or simple difference tone). Clinical importance. Otoacoustic emissions are clinically important because they are the basis of a simple, non-invasive test for cochlear hearing loss in newborn babies and in children or adults who are unable or unwilling to cooperate during conventional hearing tests. In addition, OAEs are highly reliable, making them suitable for diagnostic and screening applications. Many Western countries now have national programmes for the universal hearing screening of newborn babies. Newborn hearing screening is state-mandated prior to hospital discharge in the United States. Periodic early childhood hearing screening programs are also utilizing OAE technology. The Early Childhood Hearing Outreach Initiative at the National Center for Hearing Assessment and Management (NCHAM) at Utah State University has helped hundreds of Early Head Start programs across the United States implement OAE screening and follow-up practices in those early childhood educational settings. The primary screening tool is a test for the presence of a click-evoked OAE. Otoacoustic emissions also assist in differential diagnosis of cochlear and higher level hearing losses (e.g., auditory neuropathy). The relationships between otoacoustic emissions and tinnitus have been explored.
Several studies suggest that in about 6% to 12% of normal-hearing persons with tinnitus and SOAEs, the SOAEs are at least partly responsible for the tinnitus. Studies have found that some subjects with tinnitus display oscillating or ringing EOAEs, and in these cases, it is hypothesized that the oscillating EOAEs and tinnitus are related to a common underlying pathology rather than the emissions being the source of the tinnitus. In conjunction with audiometric testing, OAE testing can be completed to determine changes in the responses. Studies have found that exposure to noise can cause a decline in OAE responses. OAEs are a measurement of the activity of outer hair cells in the cochlea, and noise-induced hearing loss occurs as a result of damage to the outer hair cells in the cochlea. Therefore, the damage or loss of some outer hair cells will likely show up on OAEs before showing up on the audiogram. Studies have shown that for some individuals with normal hearing that have been exposed to excessive sound levels, fewer, reduced, or no OAEs can be present. This could be an indication of noise-induced hearing loss before it is seen on an audiogram. In one study, a group of subjects with noise exposure was compared to a group of subjects with normal audiograms and a history of noise exposure, as well as a group of military recruits with no history of noise exposure and a normal audiogram. They found that an increase in severity of the noise-induced hearing loss resulted in OAEs with a smaller range of emissions and reduced amplitude of the emissions. The loss of emissions due to noise exposure was found to occur mostly in higher frequencies, and it was more prominent in the groups that had noise exposure in comparison to the non-exposed group. It was found that OAEs were more sensitive to identifying noise-induced cochlear damage than pure tone audiometry. In conclusion, the study identified OAEs as a method for helping with detection of the early onset of noise-induced hearing loss. It has been found that distortion-product otoacoustic emissions (DPOAE's) have provided the most information for detecting hearing loss in high frequencies when compared to transient-evoked otoacoustic emissions (TEOAE). This is an indication that DPOAE's can help with detecting an early onset of noise-induced hearing loss. A study measuring audiometric thresholds and DPOAEs among individuals in the military showed that there was a decrease in DPOAEs after noise exposure, but did not show a shift in audiometric threshold. This supports OAEs as predicting early signs of noise damage. Biometric importance. In 2009, Stephen Beeby of the University of Southampton led research into utilizing otoacoustic emissions for biometric identification. Devices equipped with a microphone could detect these subsonic emissions and potentially identify an individual, thereby providing access to the device, without the need of a traditional password. It is speculated, however, that colds, medication, trimming one's ear hair, or recording and playing back a signal to the microphone could subvert the identification process. Measuring otoacoustic emissions on earphones. High-end personalized headphone products (e.g., Nuraphone) are being designed to measure OAEs and determine the listener’s sensitivity to different acoustic frequencies. This is then used to personalize the audio signal for each listener. 
In 2022, researchers at the University of Washington built a low-cost prototype that can reliably detect otoacoustic emissions using commodity earphones and microphones attached to a smartphone. The prototype sends two tones of different frequencies through each of the headphone’s earbuds and detects the distortion-product OAEs generated by the cochlea, which are recorded via the microphone. Such low-cost technologies may help larger efforts to achieve universal neonatal hearing screening across the world.
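As a small illustration of the distortion-product relations formula_4 and formula_5 described earlier, the following Python snippet (illustrative only; the primary frequencies and their 1.2 ratio are conventional example choices, not prescribed by this article) lists the frequencies at which a DPOAE measurement would look for emissions:

```python
def distortion_product_frequencies(f1, f2):
    """Cubic (2*f1 - f2) and quadratic (f2 - f1) distortion-product
    frequencies for a pair of primary tones f1 < f2 (in Hz)."""
    return {"cubic": 2 * f1 - f2, "quadratic": f2 - f1}

# Example primaries: f1 = 2000 Hz, f2 = 2400 Hz (an f2/f1 ratio of 1.2,
# a commonly used choice in practice, though not specified above).
print(distortion_product_frequencies(2000, 2400))
# -> {'cubic': 1600, 'quadratic': 400}
```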
[ { "math_id": 0, "text": "f_1" }, { "math_id": 1, "text": "f_2" }, { "math_id": 2, "text": "f_1\\mbox{ }:\\mbox{ }f_2" }, { "math_id": 3, "text": "f_{dp}" }, { "math_id": 4, "text": "f_{dp}=2f_1-f_2" }, { "math_id": 5, "text": "f_{dp}=f_2-f_1" } ]
https://en.wikipedia.org/wiki?curid=1374330
13744357
Low-energy ion scattering
Low-energy ion scattering spectroscopy (LEIS), sometimes referred to simply as ion scattering spectroscopy (ISS), is a surface-sensitive analytical technique used to characterize the chemical and structural makeup of materials. LEIS involves directing a stream of charged particles known as ions at a surface and making observations of the positions, velocities, and energies of the ions that have interacted with the surface. Data that is thus collected can be used to deduce information about the material such as the relative positions of atoms in a surface lattice and the elemental identity of those atoms. LEIS is closely related to both medium-energy ion scattering (MEIS) and high-energy ion scattering (HEIS, known in practice as Rutherford backscattering spectroscopy, or RBS), differing primarily in the energy range of the ion beam used to probe the surface. While much of the information collected using LEIS can be obtained using other surface science techniques, LEIS is unique in its sensitivity to both structure and composition of surfaces. Additionally, LEIS is one of a very few surface-sensitive techniques capable of directly observing hydrogen atoms, an aspect that may make it an increasingly important technique as the hydrogen economy is being explored. Experimental setup. LEIS systems consist of the following: Physics of ion-surface interactions. Several different types of events may take place as a result of the ion beam impinging on a target surface. Some of these events include electron or photon emission, electron transfer (both ion-surface and surface-ion), scattering, adsorption, and sputtering (i.e. ejection of atoms from the surface). For each system and each interaction there exists an interaction cross-section, and the study of these cross-sections is a field in its own right. As the name suggests, LEIS is primarily concerned with scattering phenomena. Elemental composition and two-body collision model. Due to the energy range typically used in ion scattering experiments (&gt; 500 eV), effects of thermal vibrations, phonon oscillations, and interatomic binding are ignored since they are far below this range (~a few eV), and the interaction of particle and surface may be thought of as a classical two-body elastic collision problem. Measuring the energy of ions scattered in this type of interaction can be used to determine the elemental composition of a surface, as is shown in the following: Two-body elastic collisions are governed by the concepts of energy and momentum conservation. Consider a particle with mass mx, velocity v0, and energy given as formula_0 impacting another particle at rest with mass my. The energies of the particles after collision are formula_1 and formula_2 where formula_3 and thus formula_4. Additionally, we know formula_5. Using trigonometry, the scattered-ion energy E1 can then be expressed in terms of E0, the mass ratio my/mx, and the scattering angle; similarly, the recoil energy E2 can be expressed in terms of E0, the two masses, and the recoil angle (the standard binary-collision expressions are illustrated in the sketch further below). In a well-controlled experiment the energy and mass of the primary ions (E0 and mx, respectively) and the scattering or recoiling geometries are all known, so determination of surface elemental composition is given by the correlation between E1 or E2 and my. Higher energy scattering peaks correspond to heavier atoms and lower energy peaks correspond to lighter atoms. Getting quantitative. While obtaining qualitative information about the elemental composition of a surface is relatively straightforward, it is necessary to understand the statistical cross-section of interaction between ion and surface atoms in order to obtain quantitative information.
Stated another way, it is easy to find out if a particular species is present, but much more difficult to determine how much of this species is there. The two-body collision model fails to give quantitative results as it ignores the contributions of Coulomb repulsion as well as the more complicated effects of charge screening by electrons. This is generally less of a problem in MEIS and RBS experiments but presents issues in LEIS. Coulomb repulsion occurs between positively charged primary ions and the nuclei of surface atoms. The interaction potential is given as V(r) = (Z1Z2e^2/r) φ(r), where formula_6 and formula_7 are the atomic numbers of the primary ion and surface atom, respectively, formula_8 is the elementary charge, formula_9 is the interatomic distance, and formula_10 is the screening function. formula_10 accounts for the interference of the electrons orbiting each nucleus. In the case of MEIS and RBS, this potential can be used to calculate the Rutherford scattering cross section formula_11 (see Rutherford scattering). As shown at right, formula_12 represents a finite region for an incoming particle, while formula_13 represents the solid scattering angle after the scattering event. However, for LEIS, formula_14 is typically unknown, which prevents such a clean analysis. Additionally, when using noble gas ion beams there is a high probability of neutralization on impact (which has strong angular dependence) due to the strong tendency of these ions to return to a neutral, closed-shell state. This results in a low flux of scattered ions. See AISS and TOF-SARS below for approaches to avoiding this problem. Shadowing and blocking. Shadowing and blocking are important concepts in almost all types of ion-surface interactions and result from the repulsive nature of the ion-nucleus interaction. As shown at right, when a flux of ions flows in parallel towards a scattering center (nucleus), they are each scattered according to the force of the Coulomb repulsion. This effect is known as shadowing. In a simple Coulomb repulsion model, the resulting region of "forbidden" space behind the scattering center takes the form of a paraboloid with radius formula_15 at a distance L from the scattering center. The flux density is increased near the edge of the paraboloid. Blocking is closely related to shadowing, and involves the interaction between scattered ions and a neighboring scattering center (as such it inherently requires the presence of at least two scattering centers). As shown, ions scattered from the first nucleus are now on diverging paths as they undergo interaction with the second nucleus. This interaction results in another "shadow cone", now called a blocking cone, in which ions scattered from the first nucleus are blocked from exiting at angles below formula_16. Focusing effects again result in an increased flux density near formula_16. In both shadowing and blocking, the "forbidden" regions are actually accessible to trajectories when the mass of incoming ions is greater than that of the surface atoms (e.g. Ar+ impacting Si or Al). In this case the region will have a finite but depleted flux density. For higher energy ions such as those used in MEIS and RBS, the concepts of shadowing and blocking are relatively straightforward since ion-nucleus interactions dominate and electron screening effects are insignificant. However, in the case of LEIS these screening effects do interfere with ion-nucleus interactions and the repulsive potential becomes more complicated.
Also, multiple scattering events are very likely, which complicates analysis. Importantly, because of the lower-energy ions used, LEIS is typically characterized by large interaction cross-sections and shadow cone radii. For this reason, penetration depth is low and the method has much higher first-layer sensitivity than MEIS or RBS. Overall, these concepts are essential for data analysis in impact collision LEIS experiments (see below). Diffraction does not play a major role. The de Broglie wavelength of ions used in LEIS experiments is given as formula_17. Using a worst-case value of 500 eV for a 4He+ ion, λ is still only about 0.006 Å, well below the typical interatomic spacing of 2–3 Å. Because of this, diffraction effects are not significant in a normal LEIS experiment. Variations of technique. Depending on the particular experimental setup, LEIS may be used to obtain a variety of information about a sample. The following includes several of these methods.
[ { "math_id": 0, "text": " E_0 = \\tfrac {1}{2} m_x v_0^2 \\,\\! " }, { "math_id": 1, "text": " E_1 = \\tfrac {1}{2} m_x v_1^2 \\,\\! " }, { "math_id": 2, "text": " E_2 = \\tfrac {1}{2} m_y v_2^2 \\,\\! " }, { "math_id": 3, "text": "E_0 = E_1 + E_2 \\,\\! " }, { "math_id": 4, "text": " \\tfrac{1}{2} m_x v_0^2 = \\tfrac{1}{2} m_x v_1^2 + \\tfrac{1}{2} m_y v_2^2 \\,\\! " }, { "math_id": 5, "text": "m_x v_0 = m_x v_1 \\cos \\theta_1 + m_y v_2 \\cos \\theta_2 \\,\\!" }, { "math_id": 6, "text": "Z_1 \\,\\!" }, { "math_id": 7, "text": "Z_2 \\,\\!" }, { "math_id": 8, "text": "e \\,\\! " }, { "math_id": 9, "text": "r \\,\\!" }, { "math_id": 10, "text": "\\phi (r) \\,\\!" }, { "math_id": 11, "text": " \\tfrac {d \\sigma}{d \\Omega} " }, { "math_id": 12, "text": " d \\sigma \\,\\!" }, { "math_id": 13, "text": " d \\Omega \\,\\!" }, { "math_id": 14, "text": " \\phi (r) \\,\\! " }, { "math_id": 15, "text": "r = 2 \\sqrt {\\tfrac{Z_1 Z_2 e^2 L}{E_0}} " }, { "math_id": 16, "text": "\\alpha_{crit} \\,\\!" }, { "math_id": 17, "text": " \\lambda = \\tfrac{h}{m v} " }, { "math_id": 18, "text": " r = d\\sin\\alpha_{crit}\\,\\!" }, { "math_id": 19, "text": " L = d\\cos\\alpha_{crit}\\,\\!" }, { "math_id": 20, "text": "\\alpha_0\\,\\!" }, { "math_id": 21, "text": "\\alpha_1\\,\\!" }, { "math_id": 22, "text": "\\alpha_2\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=13744357
1374448
Degree (graph theory)
Number of edges touching a vertex in a graph In graph theory, the degree (or valency) of a vertex of a graph is the number of edges that are incident to the vertex; in a multigraph, a loop contributes 2 to a vertex's degree, for the two ends of the edge. The degree of a vertex formula_0 is denoted formula_1 or formula_2. The maximum degree of a graph formula_3 is denoted by formula_4, and is the maximum of formula_3's vertices' degrees. The minimum degree of a graph is denoted by formula_5, and is the minimum of formula_3's vertices' degrees. In the multigraph shown on the right, the maximum degree is 5 and the minimum degree is 0. In a regular graph, every vertex has the same degree, and so we can speak of "the" degree of the graph. A complete graph (denoted formula_6, where formula_7 is the number of vertices in the graph) is a special kind of regular graph where all vertices have the maximum possible degree, formula_8. In a signed graph, the number of positive edges connected to the vertex formula_0 is called positive degformula_9 and the number of connected negative edges is entitled negative degformula_9.&lt;ref name="10.1016/j.physa.2014.11.062"&gt;&lt;/ref&gt; Handshaking lemma. The degree sum formula states that, given a graph formula_10, formula_11. The formula implies that in any undirected graph, the number of vertices with odd degree is even. This statement (as well as the degree sum formula) is known as the handshaking lemma. The latter name comes from a popular mathematical problem, which is to prove that in any group of people, the number of people who have shaken hands with an odd number of other people from the group is even. Degree sequence. The degree sequence of an undirected graph is the non-increasing sequence of its vertex degrees; for the above graph it is (5, 3, 3, 2, 2, 1, 0). The degree sequence is a graph invariant, so isomorphic graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a graph; in some cases, non-isomorphic graphs have the same degree sequence. The degree sequence problem is the problem of finding some or all graphs with the degree sequence being a given non-increasing sequence of positive integers. (Trailing zeroes may be ignored since they are trivially realized by adding an appropriate number of isolated vertices to the graph.) A sequence which is the degree sequence of some graph, i.e. for which the degree sequence problem has a solution, is called a graphic or graphical sequence. As a consequence of the degree sum formula, any sequence with an odd sum, such as (3, 3, 1), cannot be realized as the degree sequence of a graph. The inverse is also true: if a sequence has an even sum, it is the degree sequence of a multigraph. The construction of such a graph is straightforward: connect vertices with odd degrees in pairs (forming a matching), and fill out the remaining even degree counts by self-loops. The question of whether a given degree sequence can be realized by a simple graph is more challenging. This problem is also called graph realization problem and can be solved by either the Erdős–Gallai theorem or the Havel–Hakimi algorithm. The problem of finding or estimating the number of graphs with a given degree sequence is a problem from the field of graph enumeration. More generally, the degree sequence of a hypergraph is the non-increasing sequence of its vertex degrees. A sequence is formula_12-graphic if it is the degree sequence of some formula_12-uniform hypergraph. 
In particular, a formula_13-graphic sequence is graphic. Deciding if a given sequence is formula_12-graphic is doable in polynomial time for formula_14 via the Erdős–Gallai theorem but is NP-complete for all formula_15. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
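As a small illustration of the degree sum formula and the graph realization problem discussed above, the following Python sketch tests whether a sequence is graphic using the Erdős–Gallai inequalities (a minimal implementation, not tuned for efficiency).

    def is_graphic(seq):
        # Erdős–Gallai test: a non-increasing sequence of non-negative integers is the degree
        # sequence of a simple graph iff its sum is even and the k-th partial sum is at most
        # k(k-1) + sum of min(d_i, k) over the remaining terms, for every k.
        d = sorted(seq, reverse=True)
        n = len(d)
        if sum(d) % 2 != 0:          # handshaking lemma: the degree sum equals 2|E|, so it must be even
            return False
        for k in range(1, n + 1):
            lhs = sum(d[:k])
            rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
            if lhs > rhs:
                return False
        return True

    print(is_graphic([5, 3, 3, 2, 2, 1, 0]))  # True: this sequence is realizable by a simple graph
    print(is_graphic([3, 3, 1]))              # False: odd degree sum, as noted in the article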
[ { "math_id": 0, "text": "v" }, { "math_id": 1, "text": "\\deg(v)" }, { "math_id": 2, "text": "\\deg v" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "\\Delta(G)" }, { "math_id": 5, "text": "\\delta(G)" }, { "math_id": 6, "text": "K_n" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "n-1" }, { "math_id": 9, "text": "(v)" }, { "math_id": 10, "text": "G=(V, E)" }, { "math_id": 11, "text": "\\sum_{v \\in V} \\deg(v) = 2|E|\\, " }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "2" }, { "math_id": 14, "text": "k=2" }, { "math_id": 15, "text": "k\\ge 3" } ]
https://en.wikipedia.org/wiki?curid=1374448
1374699
Additive polynomial
In mathematics, the additive polynomials are an important topic in classical algebraic number theory. Definition. Let "k" be a field of prime characteristic "p". A polynomial "P"("x") with coefficients in "k" is called an additive polynomial, or a Frobenius polynomial, if formula_0 as polynomials in "a" and "b". It is equivalent to assume that this equality holds for all "a" and "b" in some infinite field containing "k", such as its algebraic closure. Occasionally absolutely additive is used for the condition above, and additive is used for the weaker condition that "P"("a" + "b") = "P"("a") + "P"("b") for all "a" and "b" in the field. For infinite fields the conditions are equivalent, but for finite fields they are not, and the weaker condition is the "wrong" as it does not behave well. For example, over a field of order "q" any multiple "P" of "x""q" − "x" will satisfy "P"("a" + "b") = "P"("a") + "P"("b") for all "a" and "b" in the field, but will usually not be (absolutely) additive. Examples. The polynomial "x""p" is additive. Indeed, for any "a" and "b" in the algebraic closure of "k" one has by the binomial theorem formula_1 Since "p" is prime, for all "n" = 1, ..., "p"−1 the binomial coefficient formula_2 is divisible by "p", which implies that formula_3 as polynomials in "a" and "b". Similarly all the polynomials of the form formula_4 are additive, where "n" is a non-negative integer. The definition makes sense even if "k" is a field of characteristic zero, but in this case the only additive polynomials are those of the form "ax" for some "a" in "k". The ring of additive polynomials. It is quite easy to prove that any linear combination of polynomials formula_5 with coefficients in "k" is also an additive polynomial. An interesting question is whether there are other additive polynomials except these linear combinations. The answer is that these are the only ones. One can check that if "P"("x") and "M"("x") are additive polynomials, then so are "P"("x") + "M"("x") and "P"("M"("x")). These imply that the additive polynomials form a ring under polynomial addition and composition. This ring is denoted formula_6 This ring is not commutative unless "k" is the field formula_7 (see modular arithmetic). Indeed, consider the additive polynomials "ax" and "x""p" for a coefficient "a" in "k". For them to commute under composition, we must have formula_8 and hence "a""p" − "a" = 0. This is false for "a" not a root of this equation, that is, for "a" outside formula_9 The fundamental theorem of additive polynomials. Let "P"("x") be a polynomial with coefficients in "k", and formula_10 be the set of its roots. Assuming that the roots of "P"("x") are distinct (that is, "P"("x") is separable), then "P"("x") is additive if and only if the set formula_11 forms a group with the field addition.
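A quick numerical check of the additivity of x^p, evaluated pointwise over the prime field, can be written in a few lines of Python. As noted above, pointwise additivity over a finite field is weaker than absolute additivity, but for x^p the check simply reflects the underlying binomial-coefficient identity, which holds at the level of polynomials.

    p = 7  # any prime

    # (a+b)^p == a^p + b^p (mod p) for all residues a, b: the binomial coefficients C(p, n)
    # with 0 < n < p are divisible by p, which is what makes x^p additive in characteristic p.
    assert all(((a + b)**p - a**p - b**p) % p == 0 for a in range(p) for b in range(p))

    # The same pointwise check fails for a non-additive polynomial such as x^2 (for p > 2):
    print(any(((a + b)**2 - a**2 - b**2) % p != 0 for a in range(p) for b in range(p)))  # True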
[ { "math_id": 0, "text": "P(a+b)=P(a)+P(b)\\," }, { "math_id": 1, "text": "(a+b)^p = \\sum_{n=0}^p {p \\choose n} a^n b^{p-n}." }, { "math_id": 2, "text": "\\scriptstyle{p \\choose n}" }, { "math_id": 3, "text": "(a+b)^p \\equiv a^p+b^p \\mod p" }, { "math_id": 4, "text": "\\tau_p^n(x) = x^{p^n}" }, { "math_id": 5, "text": "\\tau_p^n(x)" }, { "math_id": 6, "text": "k\\{ \\tau_p\\}.\\," }, { "math_id": 7, "text": "\\mathbb{F}_p = \\mathbf{Z}/p\\mathbf{Z}" }, { "math_id": 8, "text": "(ax)^p = ax^p,\\," }, { "math_id": 9, "text": "\\mathbb{F}_p." }, { "math_id": 10, "text": "\\{w_1,\\dots,w_m\\}\\subset k" }, { "math_id": 11, "text": "\\{w_1,\\dots,w_m\\}" } ]
https://en.wikipedia.org/wiki?curid=1374699
13747309
DBSCAN
Density-based data clustering algorithm Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander and Xiaowei Xu in 1996. It is a density-based, non-parametric clustering algorithm: given a set of points in some space, it groups together points that are closely packed (points with many nearby neighbors), and marks as outliers points that lie alone in low-density regions (those whose nearest neighbors are too far away). DBSCAN is one of the most common, and most commonly cited, clustering algorithms. In 2014, the algorithm was awarded the test of time award (an award given to algorithms which have received substantial attention in theory and practice) at the leading data mining conference, ACM SIGKDD. As of July 2020, the follow-up paper "DBSCAN Revisited, Revisited: Why and How You Should (Still) Use DBSCAN" appears in the list of the 8 most downloaded articles of the prestigious ACM Transactions on Database Systems (TODS) journal. The popular follow-up HDBSCAN* was initially published by Ricardo J. G. Campello, David Moulavi, and Jörg Sander in 2013, then expanded upon with Arthur Zimek in 2015. It revises some of the original decisions such as the border points and produces a hierarchical instead of a flat result. History. In 1972, Robert F. Ling published a closely related algorithm in "The Theory and Construction of k-Clusters" in "The Computer Journal" with an estimated runtime complexity of O(n³). DBSCAN has a worst-case of O(n²), and the database-oriented range-query formulation of DBSCAN allows for index acceleration. The algorithms slightly differ in their handling of border points. Preliminary. Consider a set of points in some space to be clustered. Let ε be a parameter specifying the radius of a neighborhood with respect to some point. For the purpose of DBSCAN clustering, the points are classified as "core points", ("directly"-) "reachable points" and "outliers", as follows: A point "p" is a "core point" if at least minPts points are within distance ε of it (including "p" itself). A point "q" is "directly reachable" from "p" if "q" is within distance ε of core point "p"; points are only said to be directly reachable from core points. A point "q" is "reachable" from "p" if there is a path "p"1, ..., "p""n" with "p"1 = "p" and "p""n" = "q", where each "p""i"+1 is directly reachable from "p""i". Note that this implies that the initial point and all points on the path must be core points, with the possible exception of q. All points not reachable from any other point are "outliers" or "noise points". Now if p is a core point, then it forms a "cluster" together with all points (core or non-core) that are reachable from it. Each cluster contains at least one core point; non-core points can be part of a cluster, but they form its "edge", since they cannot be used to reach more points. Reachability is not a symmetric relation: by definition, only core points can reach non-core points. The opposite is not true, so a non-core point may be reachable, but nothing can be reached from it. Therefore, a further notion of "connectedness" is needed to formally define the extent of the clusters found by DBSCAN. Two points p and q are density-connected if there is a point o such that both p and q are reachable from o. Density-connectedness "is" symmetric. A cluster then satisfies two properties: all points within the cluster are mutually density-connected (connectivity), and if a point is density-reachable from some point of the cluster, it is part of the cluster as well (maximality). Algorithm. Original query-based algorithm. DBSCAN requires two parameters: ε (eps) and the minimum number of points required to form a dense region (minPts). It starts with an arbitrary starting point that has not been visited. This point's ε-neighborhood is retrieved, and if it contains sufficiently many points, a cluster is started. Otherwise, the point is labeled as noise.
Note that this point might later be found in a sufficiently sized ε-environment of a different point and hence be made part of a cluster. If a point is found to be a dense part of a cluster, its ε-neighborhood is also part of that cluster. Hence, all points that are found within the ε-neighborhood are added, as is their own ε-neighborhood when they are also dense. This process continues until the density-connected cluster is completely found. Then, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or noise. DBSCAN can be used with any distance function (as well as similarity functions or other predicates). The distance function (dist) can therefore be seen as an additional parameter. The algorithm can be expressed in pseudocode as follows:

DBSCAN(DB, distFunc, eps, minPts) {
    C := 0                                                  /* Cluster counter */
    for each point P in database DB {
        if label(P) ≠ undefined then continue               /* Previously processed in inner loop */
        Neighbors N := RangeQuery(DB, distFunc, P, eps)     /* Find neighbors */
        if |N| < minPts then {                              /* Density check */
            label(P) := Noise                               /* Label as Noise */
            continue
        }
        C := C + 1                                          /* next cluster label */
        label(P) := C                                       /* Label initial point */
        SeedSet S := N \ {P}                                /* Neighbors to expand */
        for each point Q in S {                             /* Process every seed point Q */
            if label(Q) = Noise then label(Q) := C          /* Change Noise to border point */
            if label(Q) ≠ undefined then continue           /* Previously processed (e.g., border point) */
            label(Q) := C                                   /* Label neighbor */
            Neighbors N := RangeQuery(DB, distFunc, Q, eps) /* Find neighbors */
            if |N| ≥ minPts then {                          /* Density check (if Q is a core point) */
                S := S ∪ N                                  /* Add new neighbors to seed set */
            }
        }
    }
}

where RangeQuery can be implemented using a database index for better performance, or using a slow linear scan:

RangeQuery(DB, distFunc, Q, eps) {
    Neighbors N := empty list
    for each point P in database DB {                       /* Scan all points in the database */
        if distFunc(Q, P) ≤ eps then {                      /* Compute distance and check epsilon */
            N := N ∪ {P}                                    /* Add to result */
        }
    }
    return N
}

Abstract algorithm. The DBSCAN algorithm can be abstracted into the following steps: 1. Find the points in the ε (eps) neighborhood of every point, and identify the core points with more than minPts neighbors. 2. Find the connected components of core points on the neighbor graph, ignoring all non-core points. 3. Assign each non-core point to a nearby cluster if the cluster is an ε (eps) neighbor, otherwise assign it to noise. A naive implementation of this requires storing the neighborhoods in step 1, thus requiring substantial memory. The original DBSCAN algorithm does not require this by performing these steps for one point at a time. Optimization Criterion. DBSCAN optimizes the following loss function: For any possible clustering formula_0 out of the set of all clusterings formula_1, it minimizes the number of clusters under the condition that every pair of points in a cluster is density-reachable, which corresponds to the original two properties "maximality" and "connectivity" of a cluster: formula_2 where formula_3 gives the smallest formula_4 such that two points p and q are density-connected. Complexity. DBSCAN visits each point of the database, possibly multiple times (e.g., as candidates to different clusters). For practical considerations, however, the time complexity is mostly governed by the number of regionQuery invocations. DBSCAN executes exactly one such query for each point, and if an indexing structure is used that executes a neighborhood query in O(log "n"), an overall average runtime complexity of O("n" log "n") is obtained (if parameter ε is chosen in a meaningful way, i.e. such that on average only O(log "n") points are returned). Without the use of an accelerating index structure, or on degenerated data (e.g.
all points within a distance less than ε), the worst case run time complexity remains O("n"²). The formula_5 - "n" = ("n"²-"n")/2-sized upper triangle of the distance matrix can be materialized to avoid distance recomputations, but this needs O("n"²) memory, whereas a non-matrix based implementation of DBSCAN only needs O("n") memory. Disadvantages. See the section below on extensions for algorithmic modifications to handle these issues. Parameter estimation. Every data mining task has the problem of parameters. Every parameter influences the algorithm in specific ways. For DBSCAN, the parameters ε and "minPts" are needed. The parameters must be specified by the user. Ideally, the value of ε is given by the problem to solve (e.g. a physical distance), and "minPts" is then the desired minimum cluster size. OPTICS can be seen as a generalization of DBSCAN that replaces the ε parameter with a maximum value that mostly affects performance. "MinPts" then essentially becomes the minimum cluster size to find. While the algorithm is much easier to parameterize than DBSCAN, the results are a bit more difficult to use, as it will usually produce a hierarchical clustering instead of the simple data partitioning that DBSCAN produces. Recently, one of the original authors of DBSCAN has revisited DBSCAN and OPTICS, and published a refined version of hierarchical DBSCAN (HDBSCAN*), which no longer has the notion of border points. Instead, only the core points form the cluster. Relationship to spectral clustering. A spectral implementation of DBSCAN is related to spectral clustering in the trivial case of determining connected graph components — the optimal clusters with no edges cut. However, it can be computationally intensive, up to formula_6. Additionally, one has to choose the number of eigenvectors to compute. For performance reasons, the original DBSCAN algorithm remains preferable to its spectral implementation. Extensions. Generalized DBSCAN (GDBSCAN) is a generalization by the same authors to arbitrary "neighborhood" and "dense" predicates. The ε and "minPts" parameters are removed from the original algorithm and moved to the predicates. For example, on polygon data, the "neighborhood" could be any intersecting polygon, whereas the density predicate uses the polygon areas instead of just the object count. Various extensions to the DBSCAN algorithm have been proposed, including methods for parallelization, parameter estimation, and support for uncertain data. The basic idea has been extended to hierarchical clustering by the OPTICS algorithm. DBSCAN is also used as part of subspace clustering algorithms like PreDeCon and SUBCLU. HDBSCAN* is a hierarchical version of DBSCAN which is also faster than OPTICS, from which a flat partition consisting of the most prominent clusters can be extracted from the hierarchy. Availability. Different implementations of the same algorithm were found to exhibit enormous performance differences, with the fastest on a test data set finishing in 1.4 seconds, the slowest taking 13803 seconds. The differences can be attributed to implementation quality, language and compiler differences, and the use of indexes for acceleration. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
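In practice one normally relies on an existing implementation rather than the pseudocode above. A minimal usage sketch with scikit-learn's DBSCAN (assuming scikit-learn is installed) is shown below; the synthetic data set and the parameter values eps=0.3, min_samples=5 are arbitrary examples.

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    # Two dense blobs plus a few scattered outliers.
    X = np.vstack([
        rng.normal(loc=[0, 0], scale=0.2, size=(100, 2)),
        rng.normal(loc=[3, 3], scale=0.2, size=(100, 2)),
        rng.uniform(low=-2, high=5, size=(10, 2)),
    ])

    labels = DBSCAN(eps=0.3, min_samples=5).fit(X).labels_
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # the label -1 marks noise points
    print(n_clusters, "clusters,", int(np.sum(labels == -1)), "noise points")

Here eps plays the role of ε and min_samples that of minPts; choosing them sensibly is the parameter-estimation problem discussed above.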
[ { "math_id": 0, "text": "C= \\{C_1, \\ldots , C_l\\} " }, { "math_id": 1, "text": "\\mathcal{C}" }, { "math_id": 2, "text": "\\min_{C \\subset \\mathcal{C},~ d_{db}(p, q)\\leq \\varepsilon ~\\forall p, q \\in C_i ~ \\forall C_i \\in C} |C|" }, { "math_id": 3, "text": "d_{db}(p,q)" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "\\textstyle\\binom n2" }, { "math_id": 6, "text": "O(n^3)" } ]
https://en.wikipedia.org/wiki?curid=13747309
1374906
Doublet–triplet splitting problem
In particle physics, the doublet–triplet (splitting) problem is a problem of some Grand Unified Theories, such as SU(5), SO(10), and formula_0. Grand unified theories predict Higgs bosons (doublets of formula_1) arise from representations of the unified group that contain other states, in particular, states that are triplets of color. The primary problem with these color triplet Higgs is that they can mediate proton decay in supersymmetric theories that are only suppressed by two powers of GUT scale (i.e. they are dimension 5 supersymmetric operators). In addition to mediating proton decay, they alter gauge coupling unification. The doublet–triplet problem is the question 'what keeps the doublets light while the triplets are heavy?' Doublet–triplet splitting and the μ-problem. In 'minimal' SU(5), the way one accomplishes doublet–triplet splitting is through a combination of interactions formula_2 where formula_3 is an adjoint of SU(5) and is traceless. When formula_3 acquires a vacuum expectation value formula_4 that breaks SU(5) to the Standard Model gauge symmetry the Higgs doublets and triplets acquire a mass formula_5 Since formula_6 is at the GUT scale (formula_7 GeV) and the Higgs doublets need to have a weak scale mass (100 GeV), this requires formula_8. So to solve this doublet–triplet splitting problem requires a tuning of the two terms to within one part in formula_9. This is also why the mu problem of the MSSM (i.e. why are the Higgs doublets so light) and doublet–triplet splitting are so closely intertwined. Solutions to the doublet-triplet splitting. The missing partner mechanism. One solution to the doublet–triplet splitting (DTS) in the context of supersymmetric formula_10 proposed in and is called the missing partner mechanism (MPM). The main idea is that in addition to the usual fields there are two additional chiral super-fields formula_11 and formula_12. Note that formula_13 decomposes as follows under the SM gauge group: formula_14 which contains no field that could couple to the formula_1 doublets of formula_15 or formula_16. Due to group theoretical reasons formula_10 has to be broken by a formula_17 instead of the usual formula_18, at least at the renormalizable level. The superpotential then reads formula_19 After breaking to the SM the colour triplet can get super heavy, suppressing proton decay, while the SM Higgs does not. Note that nevertheless the SM Higgs will have to pick up a mass in order to reproduce the electroweak theory correctly. Note that although solving the DTS problem the MPM tends to render models non-perturbative just above the GUT scale. This problem is addressed by the "Double missing partner mechanism". Dimopoulos–Wilczek mechanism. In an SO(10) theory, there is a potential solution to the doublet–triplet splitting problem known as the 'Dimopoulos–Wilczek' mechanism. In SO(10), the adjoint field, formula_3 acquires a vacuum expectation value of the form formula_20. formula_21 and formula_22 give masses to the Higgs doublet and triplet, respectively, and are independent of each other, because formula_3 is traceless for any values they may have. If formula_23, then the Higgs doublet remains massless. This is very similar to the way that doublet–triplet splitting is done in either higher-dimensional grand unified theories or string theory. To arrange for the VEV to align along this direction (and still not mess up the other details of the model) often requires very contrived models, however. Higgs representations in Grand Unified Theories. 
In SU(5): formula_24 formula_25 In SO(10): formula_26 Proton decay. Non-supersymmetric theories suffer from quartic radiative corrections to the mass squared of the electroweak Higgs boson (see hierarchy problem). In the presence of supersymmetry, the triplet Higgsino needs to be more massive than the GUT scale to prevent proton decay because it generates dimension 5 operators in MSSM; there it is not enough simply to require the triplet to have a GUT scale mass. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
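As a back-of-the-envelope restatement of the fine-tuning quoted earlier for the doublet mass, using only the scales given in the text (a GUT scale of about 10^16 GeV and a weak-scale doublet mass of about 100 GeV):

    GUT_SCALE = 1e16   # GeV, the order of magnitude of f
    WEAK_SCALE = 1e2   # GeV, required Higgs doublet mass scale

    # The doublet mass is (-3*lambda*f + mu). Keeping it at the weak scale while 3*lambda*f and mu
    # are individually of order the GUT scale requires the two terms to cancel to this relative precision:
    print(WEAK_SCALE / GUT_SCALE)  # 1e-14, i.e. a tuning of one part in 10^14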
[ { "math_id": 0, "text": "E_6" }, { "math_id": 1, "text": "SU(2)" }, { "math_id": 2, "text": " \\int d^2\\theta \\; \\lambda H_{\\bar{5}} \\Sigma H_{5} + \\mu H_{\\bar{5}} H_{5}" }, { "math_id": 3, "text": "\\Sigma" }, { "math_id": 4, "text": "\\langle \\Sigma\\rangle = \\rm{diag}(2, 2, 2, -3, -3) f" }, { "math_id": 5, "text": " \\int d^2\\theta \\; (2 \\lambda f + \\mu) H_{\\bar{3}}H_3 + (-3\\lambda f +\\mu) H_{\\bar{2}}H_2" }, { "math_id": 6, "text": " f" }, { "math_id": 7, "text": " 10^{16}" }, { "math_id": 8, "text": "\\mu \\sim 3 \\lambda f \\pm 100 \\mbox{GeV}" }, { "math_id": 9, "text": "10^{14}" }, { "math_id": 10, "text": "SU(5)" }, { "math_id": 11, "text": "Z_{50}" }, { "math_id": 12, "text": "Z_{\\overline{50}}" }, { "math_id": 13, "text": "{\\mathbf{50}}" }, { "math_id": 14, "text": "\n\\mathbf{50}\\rightarrow(\\mathbf{1},\\mathbf{1},-2)+(\\mathbf{3},\\mathbf{1},-\\frac 13)+(\\overline{\\mathbf{3}},\\mathbf{2},-\\frac 76)+(\\mathbf{6},\\mathbf{1},\\frac 43)+(\\overline{\\mathbf{6}},\\mathbf{3},-\\frac 13)+(\\mathbf{8},\\mathbf{2},\\frac 12)" }, { "math_id": 15, "text": "H_{\\overline{5}}" }, { "math_id": 16, "text": "H_{{5}}" }, { "math_id": 17, "text": "\\mathbf{75}" }, { "math_id": 18, "text": "\\mathbf{24}" }, { "math_id": 19, "text": "\nW_{MPM}=y_1 H_{\\overline{5}}H_{75}Z_{50}+y_2 Z_{\\overline{50}}H_{75}H_{5}+m_{50}Z_{{50}}Z_{\\overline{50}}." }, { "math_id": 20, "text": "\\langle \\Sigma \\rangle = \\mbox{diag}( i \\sigma_2 f_3, i\\sigma_2 f_3, i\\sigma_2 f_3, i\\sigma_2 f_2, i \\sigma_2 f_2)" }, { "math_id": 21, "text": "f_2" }, { "math_id": 22, "text": "f_3" }, { "math_id": 23, "text": "f_2=0" }, { "math_id": 24, "text": "5\\rightarrow (1,2)_{1\\over 2}\\oplus (3,1)_{-{1\\over 3}}" }, { "math_id": 25, "text": "\\bar{5}\\rightarrow (1,2)_{-{1\\over 2}}\\oplus (\\bar{3},1)_{1\\over 3}" }, { "math_id": 26, "text": "10\\rightarrow (1,2)_{1\\over 2}\\oplus (1,2)_{-{1\\over 2}}\\oplus (3,1)_{-{1\\over 3}}\\oplus (\\bar{3},1)_{1\\over 3}" } ]
https://en.wikipedia.org/wiki?curid=1374906
1374948
Functional (mathematics)
Types of mappings in mathematics In mathematics, a functional is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author). This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions. In the case where the space formula_2 is a space of functions, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function". However, the fact that formula_2 is a space of functions is not mathematically essential, so this older definition is no longer prevalent. The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is search for a state of a system that minimizes (or maximizes) the action, or in other words the time integral of the Lagrangian. Details. Duality. The mapping formula_4 is a function, where formula_5 is an argument of a function formula_6 At the same time, the mapping of a function to the value of the function at a point formula_7 is a "functional"; here, formula_5 is a parameter. Provided that formula_8 is a linear function from a vector space to the underlying scalar field, the above linear maps are dual to each other, and in functional analysis both are called linear functionals. Definite integral. Integrals such as formula_9 form a special class of functionals. They map a function formula_8 into a real number, provided that formula_10 is real-valued. Examples include Inner product spaces. Given an inner product space formula_16 and a fixed vector formula_17 the map defined by formula_18 is a linear functional on formula_3 The set of vectors formula_19 such that formula_20 is zero is a vector subspace of formula_16 called the "null space" or "kernel" of the functional, or the orthogonal complement of formula_21 denoted formula_22 For example, taking the inner product with a fixed function formula_23 defines a (linear) functional on the Hilbert space formula_24 of square integrable functions on formula_25 formula_26 Locality. If a functional's value can be computed for small segments of the input curve and then summed to find the total value, the functional is called local. Otherwise it is called non-local. For example: formula_27 is local while formula_28 is non-local. This occurs commonly when integrals occur separately in the numerator and denominator of an equation such as in calculations of center of mass. Functional equations. The traditional usage also applies when one talks about a functional equation, meaning an equation between functionals: an equation formula_29 between functionals can be read as an 'equation to solve', with solutions being themselves functions. In such equations there may be several sets of variable unknowns, like when it is said that an "additive" map formula_8 is one "satisfying Cauchy's functional equation": formula_30 Derivative and integration. Functional derivatives are used in Lagrangian mechanics. They are derivatives of functionals; that is, they carry information on how a functional changes when the input function changes by a small amount. 
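As a concrete illustration of the integral functionals described above, the following sketch numerically evaluates the arc-length functional on a uniform grid; the test functions and grid size are arbitrary choices.

    import numpy as np

    def arc_length_functional(f, x0, x1, n=2001):
        # Approximates f |-> integral from x0 to x1 of sqrt(1 + f'(x)^2) dx on a uniform grid.
        x = np.linspace(x0, x1, n)
        y = f(x)
        dy_dx = np.gradient(y, x)                     # finite-difference derivative of the input function
        return np.trapz(np.sqrt(1.0 + dy_dx**2), x)   # trapezoidal quadrature

    # The straight line y = x from 0 to 1 has length sqrt(2) ~ 1.4142.
    print(arc_length_functional(lambda x: x, 0.0, 1.0))
    # A different input function gives a different number: the functional maps functions to real numbers.
    print(arc_length_functional(np.sin, 0.0, np.pi))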
Richard Feynman used functional integrals as the central idea in his sum over the histories formulation of quantum mechanics. This usage implies an integral taken over some function space. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "V^*" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "X." }, { "math_id": 4, "text": "x_0 \\mapsto f(x_0)" }, { "math_id": 5, "text": "x_0" }, { "math_id": 6, "text": "f." }, { "math_id": 7, "text": "f \\mapsto f(x_0)" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "f\\mapsto I[f] = \\int_{\\Omega} H(f(x),f'(x),\\ldots) \\; \\mu(\\mathrm{d}x)" }, { "math_id": 10, "text": "H" }, { "math_id": 11, "text": "f\\mapsto\\int_{x_0}^{x_1}f(x)\\;\\mathrm{d}x" }, { "math_id": 12, "text": "L^p" }, { "math_id": 13, "text": "E" }, { "math_id": 14, "text": "f\\mapsto \\left(\\int_E|f|^p \\; \\mathrm{d}x\\right)^{1/p}" }, { "math_id": 15, "text": "f \\mapsto \\int_{x_0}^{x_1} \\sqrt{ 1+|f'(x)|^2 } \\; \\mathrm{d}x" }, { "math_id": 16, "text": "X," }, { "math_id": 17, "text": "\\vec{x} \\in X," }, { "math_id": 18, "text": "\\vec{y} \\mapsto \\vec{x} \\cdot \\vec{y}" }, { "math_id": 19, "text": "\\vec{y}" }, { "math_id": 20, "text": "\\vec{x}\\cdot \\vec{y}" }, { "math_id": 21, "text": "\\vec{x}," }, { "math_id": 22, "text": "\\{\\vec{x}\\}^\\perp." }, { "math_id": 23, "text": "g \\in L^2([-\\pi,\\pi])" }, { "math_id": 24, "text": "L^2([-\\pi,\\pi])" }, { "math_id": 25, "text": "[-\\pi,\\pi]:" }, { "math_id": 26, "text": "f \\mapsto \\langle f,g \\rangle = \\int_{[-\\pi,\\pi]} \\bar{f} g" }, { "math_id": 27, "text": "F(y) = \\int_{x_0}^{x_1}y(x)\\;\\mathrm{d}x" }, { "math_id": 28, "text": "F(y) = \\frac{\\int_{x_0}^{x_1}y(x)\\;\\mathrm{d}x}{\\int_{x_0}^{x_1} (1+ [y(x)]^2)\\;\\mathrm{d}x}" }, { "math_id": 29, "text": "F = G" }, { "math_id": 30, "text": "f(x + y) = f(x) + f(y) \\qquad \\text{ for all } x, y." } ]
https://en.wikipedia.org/wiki?curid=1374948
13752102
Surface second harmonic generation
Surface second harmonic generation is a method for probing interfaces in atomic and molecular systems. In second harmonic generation (SHG), the light frequency is doubled, essentially converting two photons of the original beam of energy "E" into a single photon of energy 2"E" as it interacts with noncentrosymmetric media. Surface second harmonic generation is a special case of SHG where the second beam is generated because of a break of symmetry caused by an interface. Since centrosymmetric symmetry in centrosymmetric media is only disrupted in the first (occasionally second and third) atomic or molecular layer of a system, properties of the second harmonic signal then provide information about the surface atomic or molecular layers only. Surface SHG is possible even for materials which do not exhibit SHG in the bulk. Although in many situations the dominant second harmonic signal arises from the broken symmetry at the surface, the signal in fact always has contributions from both the surface and bulk. Thus, the most sensitive experiments typically involve modification of a surface and study of the subsequent modification of the harmonic generation properties. History. Second harmonic generation from a surface was first observed by Terhune, Maker, and Savage at the Ford Motor Company in 1962, one year after Franken et al. first discovered second harmonic generation in bulk crystals. Prior to Terhune's discovery, it was believed that crystals could only exhibit second harmonic generation if the crystal was noncentrosymmetric. Terhune observed that calcite, a centrosymmetric crystal which is only capable of SHG in the bulk in the presence of an applied electric field which would break the symmetry of the electronic structure, surprisingly also produced a second harmonic signal in the absence of an external electric field. During the 1960s, SHG was observed for many other centrosymmetric media including metals, semiconductors, oxides, and liquids. In 1968, Bloembergen et al. showed that the second harmonic signal was generated from the surface. Interest in this field waned during the 1970s and only a handful of research groups investigated surface SHG, most notably Y. R. Shen's group at University of California at Berkeley. During the 70s and 80s, most of the research in this field focused on understanding the electronic response, particularly in metals. In 1981, Chen et al. showed that SHG could be used to detect individual monolayers, and since then, much research has gone into using and understanding SHG as surface probe of molecular adsorption and orientation. Excitation of second harmonic signal. Just as bulk second harmonic generation, surface SHG arises out of the second-order susceptibility tensor χ(2). While the χ(2) tensor contains 27 elements, many of these elements are reduced by symmetry arguments. The exact nature of these arguments depends on the application. When determining molecular orientation, it is assumed that χ(2) is rotationally invariant around the "z"-axis (normal to the surface). The number of tensor elements reduces from 27 to the following 7 independent quantities: χZZZ, χZXX = χZYY, χXZX = χYZY, χXXZ = χYYZ, χXYZ = −χYXZ, χXZY = −χYZX, χZXY = −χZYX. Second Harmonic Generation further restricts the independent terms by requiring the tensor is symmetric in the last two indices reducing the number of independent tensor terms to 4: χZZZ, χZXX (equivalently χZYY), χXXZ (equivalently χXZX, χYZY, χYYZ), χXYZ (equivalently χXZY, −χYXZ, −χYZX). 
In order for χZXY = −χZYX to hold under this final condition, both terms must be 0. The four independent terms are material dependent properties and can vary as the external conditions change. These four terms give rise to the second harmonic signal, and allow for calculation of material properties such as electronic structure, atomic organization, and molecular orientation. Detailed analysis of the second harmonic generation from surfaces and interfaces, as well as the ability to detect monolayers and sub-monolayers, may be found in Guyot-Sionnest et al. Applications. Interface structure. It may seem paradoxical at first that surface SHG which relies on a break in symmetry is possible in crystals which have an inherent symmetric structure. At a crystalline interface half of the atomic forces experienced in the bulk crystal are not present which causes changes in the atomic and electronic structures. There are two major changes that occur at the interface: 1) the interplanar distances of the top layers change and 2) the atoms redistribute themselves to a completely new packing structure. While symmetry is maintained in the surface planes, the break in symmetry out-of-plane modifies the second-order susceptibility tensor χ(2), giving rise to optical second harmonic generation. Typical measurements of SHG from crystalline surfaces structures are performed by rotating the sample in an incident beam (Figure 1). The second harmonic signal will vary with the azimuth angle of the sample due to the symmetry of the atomic and electronic structure (Figure 2). As a result, surface SHG theory is highly dependent on geometry of the superstructure. Since electron interactions are responsible for the SHG response, the jellium model is usually numerically solved using Density Functional Theory to predict the SHG response of a given surface. SHG sensitivity to surface structure approach was effectively demonstrated by Heinz, Loy, and Thompson, working for IBM in 1985. They showed that the SHG signal from a freshly cleaved Si(111) surface would alter its behavior as the temperature was raised and the superstructure changed from a 2×1 structure to the 7×7 structure. Noting the change in signal, they were able to verify the existence of one mirror plane in the 2×1 construction and 3 mirror planes in the 7×7 construction thereby providing new information to the bonding structure of the surface atoms. Since then, surface SHG has been used to probe many other metallic surfaces such as reconstructed gold (110), Pd(111), and Al(100). Perhaps one of the most powerful uses of surface SHG is the probing of surface structure of buried interfaces. Traditional surface tools such as atomic force microscopy and scanning tunneling microscopy as well as many forms of electron diffraction must be conducted under vacuum, and are not sensitive to interfaces deeper in the probed medium. SHG measurements allow the incident laser beam to pass without interaction through higher level materials to the target interface where the second harmonic signal is generated. In cases where the transmitting materials do interact with the beam, these contributions to the second harmonic signal can be resolved in other experiments and subtracted out. The resulting measured second harmonic signal contains the second harmonic component from the buried interface alone. This type of measurement is useful for determining the surface structure of the interface. As an example, Cheikh-Rouhou et al. 
demonstrated this process to resolve interface structures of 5 layer systems. Adsorption measurements. Surface SHG is useful for monitoring the growth of monolayers on a surface. As particles adsorb, the SHG signal is altered. Two common applications in surface science are the adsorption of small gas molecules onto a surface and the adsorption of dissolved dye molecules in a liquid to a surface. Bourguignon et al. showed that as carbon monoxide is adsorbed onto a Pd(111) surface, the SHG signal decreased exponentially as predicted by the Langmuir isotherm. As CO coverage approached 1 monolayer, the SHG intensity leveled off. Larger molecules like dyes often can form multilayers on a surface, and this can be measured in situ using SHG. As the first monolayer forms, the intensity can often be seen to increase to a maximum until a uniform distribution of particles is obtained (Figure 3). As additional particles adsorb and the second monolayer begins to form, the SHG signal decreases until it reaches a minimum at the completion of the second monolayer. This alternating behavior can typically be seen for the growth of monolayers. As additional layers form, the SHG response of the substrate is screened by the adsorbate and eventually, the SHG signal levels off. Molecular orientation. As molecular layers adsorb to surfaces it is often useful to know the molecular orientation of the adsorbed molecules. Molecular orientation can be probed by observing the polarization of the second harmonic signal, generated from a polarized beam. Figure 4 shows a typical experimental geometry for molecular orientation experiments. The beam is incident on the sample in a total internal reflection geometry which improves the second harmonic signal because as the wave propagates along the interface, additional second harmonic photons are generated, By rotating either the polarizer or the analyzer, the s- and p-polarized signals are measured which allow for the calculation of the second-order susceptibility tensor χ(2). Simpson's research group has studied this phenomenon in depth. The molecular orientation can differ from the laboratory axis in three directions, corresponding to three angles. Typically, SHG measurements of this type are only able to extract a single parameter, namely the molecular orientation with respect to the surface normal. Calculation of molecular orientation. When dealing with adsorbed molecules on a surface, it is typical to find a uniaxial distribution of the molecules, resulting in x- and y- coordinate terms to be interchangeable. When analyzing the second-order susceptibility tensor χ(2), the quantities χXYZ = -χYXZ must be 0 and only three independent tensor terms remain: χzzz, χzxx, and χxxz. The intensities of the s and p polarizations in the second harmonic are given by following relationships: formula_0 formula_1 where γ is the polarization angle with γ = 0 corresponding to p-polarized light. The "s""i" terms depend on the experimental geometry are functions of the total internal reflection angles of the incident and second harmonic beams and the linear and nonlinear Fresnel factors respectively which relate the electric field components at the interface to incident and detected fields. The second-order susceptibility tensor, χ(2), is the parameter which can be measured in second order experiments, but it does not explicitly provide insight to the molecular orientation of surface molecules. 
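The polarization relations formula_0 and formula_1 just given can be evaluated directly once the geometry factors and susceptibility elements are specified. In the sketch below all numerical values of s1 through s5 and of the χ elements are placeholders, chosen only to show the γ-dependence that is fit in practice.

    import numpy as np

    def shg_intensities(gamma_rad, chi_xxz, chi_zxx, chi_zzz, s, C=1.0, I_omega=1.0):
        # I_s(2w) = C |s1*sin(2*gamma)*chi_xxz|^2 * I_w^2
        # I_p(2w) = C |s5*chi_zxx + cos^2(gamma)*(s2*chi_xxz + s3*chi_zxx + s4*chi_zzz - s5*chi_zxx)|^2 * I_w^2
        s1, s2, s3, s4, s5 = s
        I_s = C * np.abs(s1 * np.sin(2 * gamma_rad) * chi_xxz)**2 * I_omega**2
        I_p = C * np.abs(s5 * chi_zxx + np.cos(gamma_rad)**2 *
                         (s2 * chi_xxz + s3 * chi_zxx + s4 * chi_zzz - s5 * chi_zxx))**2 * I_omega**2
        return I_s, I_p

    gamma = np.linspace(0, np.pi, 181)          # polarization angle; gamma = 0 is p-polarized input
    s_factors = (1.0, 0.8, 0.6, 1.2, 0.9)       # placeholder Fresnel/geometry factors s1..s5
    I_s, I_p = shg_intensities(gamma, chi_xxz=0.5, chi_zxx=0.3, chi_zzz=1.0, s=s_factors)
    print(I_s.max(), I_p.max())                 # curves of this kind are fit to extract the chi elements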
To determine molecular orientation, the second-order hyperpolarizability tensor β, must be calculated. For adsorbed molecules in a uniaxial distribution, the only independent hyperpolarizability tensor terms are βz’z’z’, βz’x’x’, and βx’x’z’ where ’ terms denote the molecular coordinate system as opposed to the laboratory coordinate system. β can be related to χ(2) through orientational averages. As an example, in an isotropic distribution on the surface, χ(2) elements are given by. formula_2 formula_3 formula_4 where "N""s" is the surface number density of the adsorbed molecules, θ and Ψ are orientational angles relating the molecular coordinate system to the laboratory coordinate system, and &lt;x&gt; represents the average value of x. In many cases, only one or two of the molecular hyperpolarizability tensor are dominant. In these cases, the relationships between χ and β can be simplified. Bernhard Dick presents several of these simplifications. Additional applications. In addition to these applications, surface SHG is used to probe other effects. In surface spectroscopy, where either the fundamental or second harmonic are resonant with electronic transitions in the surface atoms, details can be determined about the electronic structure and band gaps. In monolayer microscopy the second harmonic signal is magnified and surface features are imaged with a resolution on the order of a wavelength. Surface SHG can also be used to monitor chemical reactions at a surface with picosecond resolution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm I_s^{2 \\omega}(\\gamma)=C |s_1 \\sin {2 \\gamma}\\ \\chi_{xxz}|^2(I^{\\omega})^2" }, { "math_id": 1, "text": "\\mathrm I_p^{2 \\omega}(\\gamma)=C |s_5\\chi_{zxx}+\\cos^2{ \\gamma}\\ {(s_2\\chi_{xxz} +s_3\\chi_{zxx}+s_4\\chi_{zzz}-s_5\\chi{zxx})}|^2(I^{\\omega})^2" }, { "math_id": 2, "text": "\\chi_{ZZZ}=N_s[\\langle\\cos^3 \\theta\\rangle\\beta_{Z'Z'Z'} + \\langle\\cos \\theta \\sin^2 \\theta \\sin^2 \\Psi\\rangle(\\beta_{Z'X'X'} + \\beta_{X'X'Z'})]" }, { "math_id": 3, "text": "\\chi_{ZXX}=\\frac {1}{2}N_s[\\langle\\cos \\theta \\sin^2 \\theta\\rangle\\beta_{Z'Z'X'}+\\langle\\cos \\theta\\rangle\\beta_{Z'X'X'} - \\langle\\cos \\theta \\sin^2 \\theta \\sin^2 \\Psi\\rangle(\\beta_{Z'X'X'} + \\beta_{X'X'Z'})]" }, { "math_id": 4, "text": "\\chi_{XXZ}=\\frac {1}{2}N_s[\\langle\\cos \\theta \\sin^2 \\theta\\rangle \\beta_{Z'Z'X'} + \\langle\\cos \\theta\\rangle\\beta_{X'X'Z'} - \\langle\\cos \\theta \\sin^2 \\theta \\sin^2 \\Psi\\rangle(\\beta_{Z'X'X'} + \\beta_{X'X'Z'})]" } ]
https://en.wikipedia.org/wiki?curid=13752102
13752761
Sod shock tube
The Sod shock tube problem, named after Gary A. Sod, is a common test for the accuracy of computational fluid codes, like Riemann solvers, and was heavily investigated by Sod in 1978. The test consists of a one-dimensional Riemann problem with the following parameters, for left and right states of an ideal gas: formula_0, formula_1 where formula_2 is the density, formula_3 is the pressure, and formula_4 is the velocity. The time evolution of this problem can be described by solving the Euler equations, which leads to three characteristics describing the propagation speed of the various regions of the system: the rarefaction wave, the contact discontinuity, and the shock discontinuity. If this is solved numerically, one can test against the analytical solution and get information about how well a code captures and resolves shocks and contact discontinuities and reproduces the correct density profile of the rarefaction wave. Analytic derivation. NOTE: The equations provided below are only correct when the rarefaction takes place on the left side of the domain and the shock happens on the right side of the domain. The different states of the solution are separated by the time evolution of the three characteristics of the system, which is due to the finite speed of information propagation. Two of them are equal to the speed of sound of the left and right states: formula_5 formula_6 where formula_7 is the adiabatic gamma. The first one is the position of the beginning of the rarefaction wave while the other is the velocity of the propagation of the shock. Defining formula_8, formula_9 the states after the shock are connected by the Rankine–Hugoniot shock jump conditions: formula_10 But to calculate the density in region 4, we need to know the pressure in that region, which is related by the contact discontinuity to the pressure in region 3 by formula_11 Unfortunately, the pressure in region 3 can only be calculated iteratively; the right solution is found when formula_12 equals formula_13: formula_14 formula_15 formula_16 This function can be evaluated to arbitrary precision, thus giving the pressure in region 3: formula_17 Finally, we can calculate formula_18 formula_19 and formula_20 follows from the adiabatic gas law: formula_21
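The iteration described above can be sketched directly from the stated relations, using the standard Sod initial states from the top of the article and a bracketing root finder for formula_16. This is a minimal illustration; a production Riemann solver would also compute the wave positions as functions of time.

    import numpy as np
    from scipy.optimize import brentq

    g = 1.4                                   # adiabatic gamma
    rhoL, PL, uL = 1.0, 1.0, 0.0              # left state  (region 1)
    rhoR, PR, uR = 0.125, 0.1, 0.0            # right state (region 5)

    G = (g - 1.0) / (g + 1.0)                 # Gamma
    b = (g - 1.0) / (2.0 * g)                 # beta

    def u4(P3):
        # velocity behind the shock (region 4), from the shock jump relation
        return (P3 - PR) * np.sqrt((1.0 - G) / (rhoR * (P3 + G * PR)))

    def u3(P3):
        # velocity behind the rarefaction (region 3), from the isentropic relation
        return (PL**b - P3**b) * np.sqrt((1.0 - G**2) * PL**(1.0 / g) / (G**2 * rhoL))

    # Across the contact discontinuity P4 = P3 and u4 = u3, so solve u3(P3) - u4(P3) = 0.
    P3 = brentq(lambda P: u3(P) - u4(P), PR, PL)
    rho3 = rhoL * (P3 / PL)**(1.0 / g)                      # adiabatic gas law
    rho4 = rhoR * (P3 + G * PR) / (PR + G * P3)             # Rankine-Hugoniot condition
    print(round(P3, 5), round(u3(P3), 5), round(rho3, 5), round(rho4, 5))
    # roughly P3 ~ 0.30313, u ~ 0.92745, rho3 ~ 0.42632, rho4 ~ 0.26557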
[ { "math_id": 0, "text": "\n\\left( \\begin{array}{c}\\rho_L\\\\P_L\\\\u_L\\end{array}\\right)\n=\n\\left( \\begin{array}{c}1.0\\\\1.0\\\\0.0\\end{array} \\right)\n" }, { "math_id": 1, "text": "\n\\left( \\begin{array}{c}\\rho_R\\\\P_R\\\\u_R\\end{array}\\right)\n=\n\\left( \\begin{array}{c}0.125\\\\0.1\\\\0.0\\end{array}\\right)\n" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "u" }, { "math_id": 5, "text": "cs_1 = \\sqrt{\\gamma \\frac{P_L}{\\rho_L}}" }, { "math_id": 6, "text": "cs_5 = \\sqrt{\\gamma \\frac{P_R}{\\rho_R}}" }, { "math_id": 7, "text": "\\gamma" }, { "math_id": 8, "text": "\\Gamma = \\frac{\\gamma - 1}{\\gamma + 1}" }, { "math_id": 9, "text": "\\beta = \\frac{\\gamma - 1}{2 \\gamma}" }, { "math_id": 10, "text": "\\rho_4 = \\rho_5 \\frac{P_4 + \\Gamma P_5}{P_5 + \\Gamma P_4}" }, { "math_id": 11, "text": "P_4 = P_3" }, { "math_id": 12, "text": "u_3" }, { "math_id": 13, "text": "u_4" }, { "math_id": 14, "text": "u_4 = \\left(P_3' - P_5\\right)\\sqrt{\\frac{1-\\Gamma}{\\rho_R(P_3'+\\Gamma P_5)}}" }, { "math_id": 15, "text": "u_3 =\\left(P_1^\\beta-P_3'^\\beta\\right) \\sqrt{\\frac{(1-\\Gamma^2)P_1^{1/\\gamma}}{\\Gamma^2 \\rho_L}}" }, { "math_id": 16, "text": "u_3 - u_4 = 0" }, { "math_id": 17, "text": "P_3 = \\operatorname{calculate}(P_3,s,s,,)" }, { "math_id": 18, "text": "u_3 = u_5 + \\frac{(P_3 - P_5)}{\\sqrt{\\frac{\\rho_5}{2}((\\gamma+1)P_3 +(\\gamma-1)P_5)}}" }, { "math_id": 19, "text": "u_4 = u_3" }, { "math_id": 20, "text": "\\rho_3" }, { "math_id": 21, "text": "\\rho_3 = \\rho_1 \\left(\\frac{P_3}{P_1}\\right)^{1/\\gamma}" } ]
https://en.wikipedia.org/wiki?curid=13752761
13754920
Bundle adjustment
Technique in photogrammetry and computer vision In photogrammetry and computer stereo vision, bundle adjustment is simultaneous refining of the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, given a set of images depicting a number of 3D points from different viewpoints. Its name refers to the "geometrical bundles" of light rays originating from each 3D feature and converging on each camera's optical center, which are adjusted optimally according to an optimality criterion involving the corresponding image projections of all points. Uses. Bundle adjustment is almost always used as the last step of feature-based 3D reconstruction algorithms. It amounts to an optimization problem on the 3D structure and viewing parameters (i.e., camera pose and possibly intrinsic calibration and radial distortion), to obtain a reconstruction which is optimal under certain assumptions regarding the noise pertaining to the observed image features: If the image error is zero-mean Gaussian, then bundle adjustment is the Maximum Likelihood Estimator. Bundle adjustment was originally conceived in the field of photogrammetry during the 1950s and has increasingly been used by computer vision researchers during recent years. General approach. Bundle adjustment boils down to minimizing the reprojection error between the image locations of observed and predicted image points, which is expressed as the sum of squares of a large number of nonlinear, real-valued functions. Thus, the minimization is achieved using nonlinear least-squares algorithms. Of these, Levenberg–Marquardt has proven to be one of the most successful due to its ease of implementation and its use of an effective damping strategy that lends it the ability to converge quickly from a wide range of initial guesses. By iteratively linearizing the function to be minimized in the neighborhood of the current estimate, the Levenberg–Marquardt algorithm involves the solution of linear systems termed the normal equations. When solving the minimization problems arising in the framework of bundle adjustment, the normal equations have a sparse block structure owing to the lack of interaction among parameters for different 3D points and cameras. This can be exploited to gain tremendous computational benefits by employing a sparse variant of the Levenberg–Marquardt algorithm which explicitly takes advantage of the normal equations zeros pattern, avoiding storing and operating on zero-elements. Mathematical definition. Bundle adjustment amounts to jointly refining a set of initial camera and structure parameter estimates for finding the set of parameters that most accurately predict the locations of the observed points in the set of available images. More formally, assume that formula_0 3D points are seen in formula_1 views and let formula_2 be the projection of the formula_3th point on image formula_4. Let formula_5 denote the binary variables that equal 1 if point formula_3 is visible in image formula_4 and 0 otherwise. Assume also that each camera formula_4 is parameterized by a vector formula_6 and each 3D point formula_3 by a vector formula_7. 
Bundle adjustment minimizes the total reprojection error with respect to all 3D point and camera parameters, specifically formula_8 where formula_9 is the predicted projection of point formula_3 on image formula_4 and formula_10 denotes the Euclidean distance between the image points represented by vectors formula_11 and formula_12. Because the minimum is computed over many points and many images, bundle adjustment is by definition tolerant to missing image projections, and if the distance metric is chosen reasonably (e.g., Euclidean distance), bundle adjustment will also minimize a physically meaningful criterion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
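A minimal sketch of this minimization can be written with a generic nonlinear least-squares routine. The camera model below (translation plus a single focal length, with no rotation or lens distortion) is a deliberate toy assumption to keep the example short, and the synthetic scene is arbitrary; real bundle adjusters use full pinhole models and exploit the sparse structure discussed above.

    import numpy as np
    from scipy.optimize import least_squares

    def project(cam, X):
        # Toy pinhole: camera = [tx, ty, tz, f]; no rotation or distortion (simplifying assumption).
        Xc = X + cam[:3]
        return cam[3] * Xc[:2] / Xc[2]

    def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
        # Stack the 2D reprojection errors for every visible (camera, point) pair.
        cams = params[:4 * n_cams].reshape(n_cams, 4)
        pts = params[4 * n_cams:].reshape(n_pts, 3)
        return np.concatenate([project(cams[c], pts[p]) - xy
                               for c, p, xy in zip(cam_idx, pt_idx, obs)])

    # Synthetic scene: 2 cameras, 10 points, all points visible in both views (v_ij = 1).
    rng = np.random.default_rng(1)
    true_cams = np.array([[0.0, 0.0, 5.0, 800.0], [1.0, 0.2, 5.5, 800.0]])
    true_pts = rng.uniform(-1, 1, size=(10, 3))
    cam_idx = np.repeat([0, 1], 10)
    pt_idx = np.tile(np.arange(10), 2)
    obs = np.array([project(true_cams[c], true_pts[p]) for c, p in zip(cam_idx, pt_idx)])
    obs += rng.normal(scale=0.5, size=obs.shape)            # simulated image measurement noise

    x0 = np.concatenate([(true_cams + 0.1).ravel(), (true_pts + 0.05).ravel()])  # perturbed initial guess
    sol = least_squares(residuals, x0, args=(2, 10, cam_idx, pt_idx, obs), method='trf')
    print(sol.cost)    # sum of squared reprojection errors after joint refinement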
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "\\mathbf{x}_{ij}" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": "\\displaystyle v_{ij}" }, { "math_id": 6, "text": "\\mathbf{a}_j" }, { "math_id": 7, "text": "\\mathbf{b}_i" }, { "math_id": 8, "text": "\n\\min_{\\mathbf{a}_j, \\, \\mathbf{b}_i} \\displaystyle\\sum_{i=1}^{n} \\; \\displaystyle\\sum_{j=1}^{m} \\; v_{ij} \\, d(\\mathbf{Q}(\\mathbf{a}_j, \\, \\mathbf{b}_i), \\; \\mathbf{x}_{ij})^2,\n" }, { "math_id": 9, "text": "\\mathbf{Q}(\\mathbf{a}_j, \\, \\mathbf{b}_i)" }, { "math_id": 10, "text": "d(\\mathbf{x}, \\, \\mathbf{y})" }, { "math_id": 11, "text": "\\mathbf{x}" }, { "math_id": 12, "text": "\\mathbf{y}" } ]
https://en.wikipedia.org/wiki?curid=13754920
13755
Hull (watercraft)
Watertight buoyant body of a ship or boat A hull is the watertight body of a ship, boat, submarine, or flying boat. The hull may open at the top (such as a dinghy), or it may be fully or partially covered with a deck. Atop the deck may be a deckhouse and other superstructures, such as a funnel, derrick, or mast. The line where the hull meets the water surface is called the waterline. General features. There is a wide variety of hull types that are chosen for suitability for different usages, the hull shape being dependent upon the needs of the design. Shapes range from a nearly perfect box in the case of scow barges to a needle-sharp surface of revolution in the case of a racing multihull sailboat. The shape is chosen to strike a balance between cost, hydrostatic considerations (accommodation, load carrying, and stability), hydrodynamics (speed, power requirements, and motion and behavior in a seaway) and special considerations for the ship's role, such as the rounded bow of an icebreaker or the flat bottom of a landing craft. In a typical modern steel ship, the hull will have watertight decks, and major transverse members called bulkheads. There may also be intermediate members such as girders, stringers and webs, and minor members called ordinary transverse frames, frames, or longitudinals, depending on the structural arrangement. The uppermost continuous deck may be called the "upper deck", "weather deck", "spar deck", "main deck", or simply "deck". The particular name given depends on the context—the type of ship or boat, the arrangement, or even where it sails. In a typical wooden sailboat, the hull is constructed of wooden planking, supported by transverse frames (often referred to as ribs) and bulkheads, which are further tied together by longitudinal stringers or ceiling. Often but not always there is a centerline longitudinal member called a keel. In fiberglass or composite hulls, the structure may resemble wooden or steel vessels to some extent, or be of a monocoque arrangement. In many cases, composite hulls are built by sandwiching thin fiber-reinforced skins over a lightweight but reasonably rigid core of foam, balsa wood, impregnated paper honeycomb, or other material. Perhaps the earliest proper hulls were built by the Ancient Egyptians, who by 3000 BC knew how to assemble wooden planks into a hull. Hull shapes. Hulls come in many varieties and can have composite shape, (e.g., a fine entry forward and inverted bell shape aft), but are grouped primarily as follows: Hull forms. At present, the most widely used form is the round bilge hull. With a small payload, such a craft has less of its hull below the waterline, giving less resistance and more speed. With a greater payload, resistance is greater and speed lower, but the hull's outward bend provides smoother performance in waves. As such, the inverted bell shape is a popular form used with planing hulls. Chined and hard-chined hulls. A chined hull does not have a smooth rounded transition between bottom and sides. Instead, its contours are interrupted by sharp angles where predominantly longitudinal panels of the hull meet. The sharper the intersection (the more acute the angle), the "harder" the chine. More than one chine per side is possible. The Cajun "pirogue" is an example of a craft with hard chines. Benefits of this type of hull include potentially lower production cost and a (usually) fairly flat bottom, making the boat faster at planing. 
A hard chined hull resists rolling (in smooth water) more than does a hull with rounded bilges (the chine creates turbulence and drag that resist the rolling motion as it moves through the water, whereas the rounded bilge provides less flow resistance around the turn). In rough seas, this can make the boat roll more, as the motion drags first down, then up, on a chine: round-bilge boats are more seakindly in waves, as a result. Chined hulls may have one of three shapes: Each of these chine hulls has its own unique characteristics and use. The flat-bottom hull has high initial stability but high drag. To counter the high drag, hull forms are narrow and sometimes severely tapered at bow and stern. This leads to poor stability when heeled in a sailboat. This is often countered by using heavy interior ballast on sailing versions. They are best suited to sheltered inshore waters. Early racing power boats were fine forward and flat aft. This produced maximum lift and a smooth, fast ride in flat water, but this hull form is easily unsettled in waves. The multi-chine hull approximates a curved hull form. It has less drag than a flat-bottom boat. Multi-chines are more complex to build but produce a more seaworthy hull form. They are usually displacement hulls. V or arc-bottom chine boats have a V shape between 6° and 23°. This is called the deadrise angle. The flatter shape of a 6-degree hull will plane with less wind or a lower-horsepower engine but will pound more in waves. The deep V form (between 18 and 23 degrees) is only suited to high-powered planing boats. They require more powerful engines to lift the boat onto the plane but give a faster, smoother ride in waves. Displacement chined hulls have more wetted surface area, hence more drag, than an equivalent round-hull form, for any given displacement. Smooth curve hulls. Smooth curve hulls are hulls that, just like the curved hulls, use a centreboard or an attached keel. Semi round bilge hulls are somewhat less round. The advantage of the semi-round is that it strikes a middle ground between the S-bottom and chined hull. Typical examples of a semi-round bilge hull can be found in the Centaur and Laser sailing dinghies. S-bottom hulls are sailing boat hulls with a midships transverse half-section shaped like an "s". In the s-bottom, the hull has round bilges and merges smoothly with the keel, and there are no sharp corners on the hull sides between the keel centreline and the sheer line. Boats with this hull form may have a long fixed deep keel, or a long shallow fixed keel with a centreboard swing keel inside. Ballast may be internal, external, or a combination. This hull form was most popular in the late 19th and early to mid 20th centuries. Examples of small sailboats that use this s-shape are the Yngling and Randmeer. Metrics. Hull forms are defined as follows: Block measures that define the principal dimensions. They are: Form derivatives that are calculated from the shape and the block measures. They are: Coefficients help compare hull forms as well (a short worked example is given after the notes below): Note: formula_4 Computer-aided design. Use of computer-aided design has superseded paper-based methods of ship design that relied on manual calculations and lines drawing. Since the early 1990s, a variety of commercial and freeware software packages specialized for naval architecture have been developed that provide 3D drafting capabilities combined with calculation modules for hydrostatics and hydrodynamics. These may be referred to as geometric modeling systems for naval architecture. See also.
&lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
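The form coefficients defined in the Metrics section can be evaluated with a few lines of code. The following Python sketch is purely illustrative: the function name and the numerical figures for the hull are made-up assumptions, not data for any real vessel; only the coefficient formulas themselves come from the section above.

```python
def hull_coefficients(V, L_wl, B_wl, T_wl, A_m, A_w):
    """Block (Cb), midship (Cm), prismatic (Cp) and waterplane (Cw) coefficients."""
    Cb = V / (L_wl * B_wl * T_wl)    # displacement volume over the bounding block
    Cm = A_m / (B_wl * T_wl)         # midship section area over its bounding rectangle
    Cp = V / (L_wl * A_m)            # volume over the prism of midship section area
    Cw = A_w / (L_wl * B_wl)         # waterplane area over its bounding rectangle
    return Cb, Cm, Cp, Cw

# Made-up figures for a small displacement hull
Cb, Cm, Cp, Cw = hull_coefficients(V=180.0,    # displacement volume, m^3
                                   L_wl=24.0,  # waterline length, m
                                   B_wl=5.5,   # waterline beam, m
                                   T_wl=2.2,   # draft, m
                                   A_m=9.2,    # midship section area, m^2
                                   A_w=100.0)  # waterplane area, m^2
print(Cb, Cm, Cp, Cw)
assert abs(Cb - Cp * Cm) < 1e-12    # the identity noted above (Cb = Cp * Cm)
```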
[ { "math_id": 0, "text": "\nC_b = \\frac {V}{L_{WL} \\cdot B_{WL} \\cdot T_{WL}}\n" }, { "math_id": 1, "text": "\nC_m = \\frac {A_m}{ B_{WL} \\cdot T_{WL}}\n" }, { "math_id": 2, "text": "\nC_p = \\frac {V}{L_{WL} \\cdot A_m}\n" }, { "math_id": 3, "text": "\nC_w = \\frac {A_w}{L_{WL} \\cdot B_{WL}}\n" }, { "math_id": 4, "text": " C_b = C_{p} \\cdot C_{m} " } ]
https://en.wikipedia.org/wiki?curid=13755
13756871
Electrophoretic light scattering
Electrophoretic light scattering (also known as laser Doppler electrophoresis and phase analysis light scattering) is based on dynamic light scattering. The frequency shift or phase shift of an incident laser beam depends on the mobility of the dispersed particles. With "dynamic light scattering", Brownian motion causes particle motion. With "electrophoretic" light scattering, an oscillating electric field performs this function. The method is used for measuring electrophoretic mobility, from which zeta potential can then be calculated. Instruments for applying the method are commercially available from several manufacturers. The last set of calculations requires information on the viscosity and dielectric permittivity of the dispersion medium; appropriate electrophoresis theory is also required. Sample dilution is often necessary to eliminate multiple scattering of the incident laser beam and/or particle interactions. Instrumentation. A laser beam passes through the electrophoresis cell, irradiates the particles dispersed in it, and is scattered by the particles. The scattered light is detected by a photo-multiplier after passing through two pinholes. There are two types of optical systems: heterodyne and fringe. Ware and Flygare developed a heterodyne-type ELS instrument, which was the first instrument of this type. In a fringe optics ELS instrument, a laser beam is divided into two beams. These cross inside the electrophoresis cell at a fixed angle to produce a fringe pattern. The scattered light from the particles, which migrate inside the fringe, is intensity-modulated. The frequency shifts from both types of optics obey the same equations. The observed spectra resemble each other. Oka et al. developed an ELS instrument with heterodyne-type optics that is now available commercially. Its optics is shown in Fig. 3. If the frequencies of the intersecting laser beams are the same, then it is not possible to resolve the direction of the motion of the migrating particles. Instead, only the magnitude of the velocity (i.e., the speed) can be determined. Hence, the sign of the zeta potential cannot be ascertained. This limitation can be overcome by shifting the frequency of one of the beams relative to the other. Such shifting may be referred to as frequency modulation or, more colloquially, just modulation. Modulators used in ELS may include piezo-actuated mirrors or acousto-optic modulators. This modulation scheme is employed by the heterodyne light scattering method, too. Phase-analysis light scattering (PALS) is a method for evaluating zeta potential, in which the rate of phase change of the interference between light scattered by the sample and the modulated reference beam is analyzed. This rate is compared with a mathematically generated sine wave predetermined by the modulator frequency. The application of large fields, which can lead to sample heating and breakdown of the colloids, is no longer required. But any non-linearity of the modulator, or any change in the characteristics of the modulator with time, will mean that the generated sine wave no longer reflects the real conditions, and the resulting zeta-potential measurements become less reliable. A further development of the PALS technique is the so-called "continuously monitored PALS" (cmPALS) technique, which addresses the non-linearity of the modulators. An extra modulator detects the interference between the modulated and unmodulated laser light.
Thus, its beat frequency is solely the modulation frequency and is therefore independent of the electrophoretic motion of the particles. This results in faster measurements, higher reproducibility even at low applied electric fields, as well as higher sensitivity of the measurement. Heterodyne light scattering. The frequency of light scattered by particles undergoing electrophoresis is shifted from that of the incident light, formula_1, by the amount of the Doppler effect, formula_0. The shift can be detected by means of heterodyne optics in which the scattered light is mixed with the reference light. The autocorrelation function of intensity of the mixed light, formula_2, can be approximately described by the following damped cosine function [7]: formula_3 where formula_4 is a decay constant and A, B, and C are positive constants dependent on the optical system. The damping frequency formula_5 is an observed frequency, and is the frequency difference between the scattered and reference light: formula_6 where formula_7 is the frequency of the scattered light, formula_8 the frequency of the reference light, formula_9 the frequency of the incident light (laser light), and formula_10 the modulation frequency. The power spectrum of the mixed light, namely the Fourier transform of formula_2, gives a pair of Lorentzian functions at formula_11, each having a half-width of formula_12 at half maximum. In addition to these two, the last term in equation (1) gives another Lorentzian function at formula_13 The Doppler shift of frequency and the decay constant are dependent on the geometry of the optical system and are expressed respectively by the equations formula_14 and formula_15 where formula_16 is the velocity of the particles, formula_17 is the amplitude of the scattering vector, and formula_18 is the translational diffusion constant of the particles. The amplitude of the scattering vector formula_17 is given by the equation formula_19 Since the velocity formula_16 is proportional to the applied electric field, formula_20, the apparent electrophoretic mobility formula_21 is defined by the equation formula_22 Finally, the relation between the Doppler shift frequency and the mobility is given for the case of the optical configuration of Fig. 3 by the equation formula_23 where formula_20 is the strength of the electric field, formula_24 the refractive index of the medium, formula_25 the wavelength of the incident light in vacuum, and formula_26 the scattering angle. The sign of formula_27 is a result of vector calculation and depends on the geometry of the optics. The spectral frequency can be obtained according to Eq. (2). When formula_28, Eq. (2) is modified and expressed as formula_29 The modulation frequency formula_30 can be obtained as the damping frequency without an electric field applied. The particle diameter is obtained by assuming that the particle is spherical. This is called the hydrodynamic diameter, formula_31: formula_32 where formula_33 is the Boltzmann constant, formula_34 is the absolute temperature, and formula_35 the dynamic viscosity of the surrounding fluid. Profile of electro-osmotic flow. Figure 4 shows two examples of heterodyne autocorrelation functions of scattered light from a sodium polystyrene sulfate solution (NaPSS; MW 400,000; 4 mg/mL in 10 mM NaCl). The oscillating correlation function shown by Fig. 4a is a result of interference between the scattered light and the modulated reference light. The beat of Fig.
4b includes additionally the contribution from the frequency changes of light scattered by PSS molecules under an electrical field of 40 V/cm. Figure 5 shows heterodyne power spectra obtained by Fourier transform of the autocorrelation functions shown in Fig. 4. Figure 6 shows plots of Doppler shift frequencies measured at various cell depths and electric field strengths, where the sample is the NaPSS solution. These parabolic curves are called profiles of electro-osmotic flow and indicate that the velocity of the particles changed at different depths. The surface potential of the cell wall produces electro-osmotic flow. Since the electrophoresis chamber is a closed system, backward flow is produced at the center of the cell. Then the observed mobility or velocity from Eq. (7) is a result of the combination of osmotic flow and electrophoretic movement. Electrophoretic mobility analysis has been studied by Mori and Okamoto [16], who have taken into account the effect of electro-osmotic flow at the side wall. The profile of velocity or mobility at the center of the cell is given approximately by Eq. (11) for the case where k&gt;5: formula_36 where formula_37 cell depth, formula_38 apparent electrophoretic velocity of a particle at position z, formula_39 true electrophoretic velocity of the particles, formula_40 thickness of the cell, formula_41 average velocity of osmotic flow at the upper and lower cell walls, formula_42 difference between the velocities of osmotic flow at the upper and lower cell walls, and formula_43 formula_44, a ratio between the two side lengths of the rectangular cross section. The parabolic curve of frequency shift caused by electro-osmotic flow shown in Fig. 6 fits with Eq. (11) with application of the least squares method. Since the mobility is proportional to a frequency shift of the light scattered by a particle and the migrating velocity of a particle, as indicated by Eq. (7), all the velocity, mobility, and frequency shifts are expressed by parabolic equations. Then the true electrophoretic mobility of a particle and the electro-osmotic mobilities at the upper and lower cell walls were obtained. The frequency shift caused only by the electrophoresis of particles is equal to the apparent mobility at the stationary layer. The velocity of the electrophoretic migration thus obtained is proportional to the electric field, as shown in Fig. 7. The frequency shift increases with increasing scattering angle, as shown in Fig. 8. This result is in agreement with the theoretical Eq. (7). Applications. Electrophoretic light scattering (ELS) is primarily used for characterizing the surface charges of colloidal particles like macromolecules or synthetic polymers (e.g., polystyrene) in liquid media in an electric field. In addition to information about surface charges, ELS can also measure the particle size of proteins and determine the zeta potential distribution. Biophysics. ELS is useful for characterizing the surface of proteins. Ware and Flygare (1971) demonstrated that electrophoretic techniques can be combined with laser beat spectroscopy in order to simultaneously determine the electrophoretic mobility and diffusion coefficient of bovine serum albumin. The width of a Doppler-shifted spectrum of light that is scattered from a solution of macromolecules is proportional to the diffusion coefficient. The Doppler shift is proportional to the electrophoretic mobility of a macromolecule.
From studies that have applied this method to poly (L-lysine), ELS is believed to monitor fluctuation mobilities in the presence of solvents with varying salt concentrations. It has also been shown that electrophoretic mobility data can be converted to zeta potential values, which enables the determination of the isoelectric point of proteins and the number of electrokinetic charges on the surface. Other biological macromolecules that can be analyzed with ELS include polysaccharides. pKa values of chitosans can be calculated from the dependency of electrophoretic mobility values on pH and charge density. Like proteins, the size and zeta potential of chitosans can be determined through ELS. ELS has also been applied to nucleic acids and viruses. The technique can be extended to measure electrophoretic mobilities of large bacteria molecules at low ionic strengths. Nanoparticles. ELS has been used to characterize the polydispersity, nanodispersity, and stability of single-walled carbon nanotubes in an aqueous environment with surfactants. The technique can be used in combination with dynamic light scattering to measure these properties of nanotubes in many different solvents. References. &lt;templatestyles src="Reflist/styles.css" /&gt; (1) Surfactant Science Series, Consulting Editor Martin J. Schick Consultant New York, Vol. 76 Electrical Phenomena at Interfaces Second Edition, Fundamentals, Measurements and Applications, Second Edition, Revised and Expanded. Ed by Hiroyuki Ohshima, Kunio Furusawa. 1998. K. Oka and K. Furusawa, Chapter 8 Electrophresis, p. 152 - 223. Marcel Dekker, Inc, (7) B.R. Ware and D.D. Haas, in Fast Method in Physical Biochemistry and Cell Biology. (R.I. Sha'afi and S.M. Fernandez, Eds), Elsevier, New York, 1983, Chap. 8. (9) (10) (11) K. Oka, W. Otani, K. Kameyama, M. Kidai, and T. Takagi, Appl. Theor. Electrophor. 1: 273-278 (1990). (12) K. Oka, W. Otani, Y. Kubo, Y. Zasu, and M. Akagi, U.S. Patent Appl. 465, 186: Jpn. Patent H7-5227 (1995). (16) S. Mori and H. Okamoto, Flotation 28: 1 (1980). (in Japanese): Fusen 28(3): 117 (1980). (17) M. Smoluchowski, in Handbuch der Electrizitat und des Magnetismus. (L. Greatz. Ed). Barth, Leripzig, 1921, pp. 379. (18) P. White, Phil. Mag. S 7, 23, No. 155 (1937). (19) S. Komagat, Res. Electrotech. Lab. (Jpn) 348, March 1933. (20) Y. Fukui, S. Yuu and K. Ushiki, Power Technol. 54: 165 (1988).
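As a numerical illustration of the heterodyne relations above (the scattering vector of Eq. (5), the decay constant of Eq. (4), the mobility relation of Eq. (7), and the hydrodynamic diameter of Eq. (10)), the following Python sketch converts a measured Doppler shift and decay constant into a mobility and a particle size. The instrument settings and measured values used here are made up for illustration only; the field strength of 40 V/cm is chosen to match the example of Fig. 4.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K

def mobility_from_doppler(nu_D, E, n_medium, lambda0, theta):
    """Apparent electrophoretic mobility from the Doppler shift, inverting Eq. (7):
    nu_D = mu * n * E * sin(theta) / lambda0."""
    return nu_D * lambda0 / (n_medium * E * np.sin(theta))

def hydrodynamic_diameter(Gamma, n_medium, lambda0, theta, T, eta):
    """Hydrodynamic diameter from the decay constant, via Eqs. (5), (4) and (10)."""
    q = 4.0 * np.pi * n_medium / lambda0 * np.sin(theta / 2.0)  # scattering vector, Eq. (5)
    D = Gamma / q**2                                            # diffusion constant, Eq. (4)
    return k_B * T / (3.0 * np.pi * eta * D)                    # Eq. (10)

# Made-up example: water at 25 degC, 532 nm laser, 20 degree scattering angle, 40 V/cm
theta = np.deg2rad(20.0)
mu = mobility_from_doppler(nu_D=50.0, E=4000.0, n_medium=1.33,
                           lambda0=532e-9, theta=theta)          # m^2/(V s)
d_H = hydrodynamic_diameter(Gamma=500.0, n_medium=1.33, lambda0=532e-9,
                            theta=theta, T=298.15, eta=0.89e-3)  # m
print(mu, d_H)
```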
[ { "math_id": 0, "text": "\\upsilon_D\\," }, { "math_id": 1, "text": "\\upsilon\\," }, { "math_id": 2, "text": "g(\\tau) \\," }, { "math_id": 3, "text": "g(\\tau)=A+B \\exp(-\\Gamma\\tau)\\cos(2\\pi\\upsilon_o)+C \\exp(-2\\Gamma \\tau)\\,\\qquad (1)" }, { "math_id": 4, "text": "\\Gamma\\," }, { "math_id": 5, "text": "\\upsilon_o\\," }, { "math_id": 6, "text": "\\upsilon_o =| \\upsilon_s - \\upsilon_r | = | (\\upsilon_i+\\upsilon_D)-(\\upsilon_i+\\upsilon_M ) |\\qquad(2)" }, { "math_id": 7, "text": "\\upsilon_s\\," }, { "math_id": 8, "text": "\\upsilon_r\\," }, { "math_id": 9, "text": "\\upsilon_i\\," }, { "math_id": 10, "text": "\\upsilon_M\\," }, { "math_id": 11, "text": "\\pm\\Delta\\upsilon \\," }, { "math_id": 12, "text": "\\Gamma/2\\pi\\," }, { "math_id": 13, "text": "\\upsilon = 0\\," }, { "math_id": 14, "text": "\\upsilon_D = \\frac{Vq}{2\\pi} \\qquad(3)\n" }, { "math_id": 15, "text": "\\Gamma = D|q|^2 \\qquad(4)" }, { "math_id": 16, "text": "\\ V \\," }, { "math_id": 17, "text": "\\ q \\," }, { "math_id": 18, "text": "\\ D \\," }, { "math_id": 19, "text": "\\ |q| = \\frac{4 \\pi n}{\\lambda_0 }\\sin\\left( \\frac{\\theta}{2}\\right) \\qquad(5)" }, { "math_id": 20, "text": "\\ E \\," }, { "math_id": 21, "text": "\\ \\mu_{obs} \\," }, { "math_id": 22, "text": "\\ \\vec{V} = \\mu_{obs} \\vec{E} \\qquad(6)" }, { "math_id": 23, "text": "\\upsilon_D = \\mu_{obs} \\frac{n E}{\\lambda_0} \\sin \\theta \\qquad(7)" }, { "math_id": 24, "text": "\\ n\\," }, { "math_id": 25, "text": "\\ \\lambda_0 \\," }, { "math_id": 26, "text": "\\ \\theta \\," }, { "math_id": 27, "text": "\\ v_D \\," }, { "math_id": 28, "text": "\\ | \\upsilon_M | > | \\upsilon _D | \\," }, { "math_id": 29, "text": "\\upsilon_p = \\upsilon_o = \\pm( \\upsilon _D -| \\upsilon _M | ) \\qquad(8)" }, { "math_id": 30, "text": " \\upsilon _M \\," }, { "math_id": 31, "text": "\\ d_H\\," }, { "math_id": 32, "text": "\\ d_H = \\frac{k_BT}{3 \\pi \\eta D} \\qquad(10)" }, { "math_id": 33, "text": "\\ k_B \\," }, { "math_id": 34, "text": "\\ T \\," }, { "math_id": 35, "text": "\\ \\eta \\," }, { "math_id": 36, "text": "\\ U_a (z) = AU_0(z/b)^2 + \\Delta U_0(z/b)+(1-A) U_0 + U_p \\qquad(11)" }, { "math_id": 37, "text": "\\ z = \\," }, { "math_id": 38, "text": "\\ U_a (z) = \\," }, { "math_id": 39, "text": "\\ U_p = \\," }, { "math_id": 40, "text": "\\ z/b = \\," }, { "math_id": 41, "text": "<Math>\\ U_0 = \\,</math>" }, { "math_id": 42, "text": "<Math>\\Delta U_0 = \\,</math>" }, { "math_id": 43, "text": "<Math>\\ A = \\frac{1}{( 2/3) - (0.420166/k)} \\qquad(12)</math>" }, { "math_id": 44, "text": " \\ k = a/b \\," } ]
https://en.wikipedia.org/wiki?curid=13756871
13757191
Sedimentation potential
Sedimentation potential occurs when dispersed particles move under the influence of gravity, centrifugation or electricity in a medium. This motion disrupts the equilibrium symmetry of the particle's double layer. While the particle moves, the ions in the electric double layer lag behind due to the liquid flow. This causes a slight displacement between the surface charge and the electric charge of the diffuse layer. As a result, the moving particle creates a dipole moment. The sum of all of the dipoles generates an electric field which is called "sedimentation potential". It can be measured with an open electrical circuit; the corresponding closed-circuit effect is called the sedimentation current. There are detailed descriptions of this effect in many books on colloid and interface science. Surface energy. Background related to phenomenon. Electrokinetic phenomena are a family of several different effects that occur in heterogeneous fluids or in porous bodies filled with fluid. All of these phenomena involve the effect of some outside influence on a particle, resulting in a net electrokinetic effect. The common source of all these effects stems from the interfacial 'double layer' of charges. Particles influenced by an external force generate tangential motion of a fluid with respect to an adjacent charged surface. This force may be electric, a pressure gradient, a concentration gradient, or gravity. In addition, the moving phase might be either the continuous fluid or the dispersed phase. Sedimentation potential is the field of electrokinetic phenomena dealing with the generation of an electric field by sedimenting colloid particles. History of models. This phenomenon was first discovered by Dorn in 1879. He observed that a vertical electric field had developed in a suspension of glass beads in water, as the beads were settling. This was the origin of sedimentation potential, which is often referred to as the Dorn effect. Smoluchowski built the first models to calculate the potential in the early 1900s. Booth created a general theory on sedimentation potential in 1954 based on Overbeek's 1943 theory on electrophoresis. In 1980, Stigter extended Booth's model to allow for higher surface potentials. Ohshima created a model based on O'Brien and White's 1978 model used to analyze the sedimentation velocity of a single charged sphere and the sedimentation potential of a dilute suspension. Generation of a potential. As a charged particle moves under gravity or centrifugation, an electric potential is induced. While the particle moves, ions in the electric double layer lag behind due to the liquid flow, creating a net dipole moment. The sum of all dipoles on the particle is what causes sedimentation potential. Sedimentation potential has the opposite effect compared to electrophoresis, where an electric field is applied to the system. Ionic conductivity is often referred to when dealing with sedimentation potential. The following relation, first derived by Smoluchowski in 1903 and 1921, provides a measure of the sedimentation potential due to the settling of charged spheres. This relationship only holds true for non-overlapping electric double layers and for dilute suspensions. In 1954, Booth proved that this idea held true for Pyrex glass powder settling in a KCl solution. From this relation, the sedimentation potential ES is independent of the particle radius, and ES → 0 as Φp → 0 (a single particle).
formula_0 Smoluchowski's sedimentation potential is defined by the above equation, where ε0 is the permittivity of free space, D the dimensionless dielectric constant, ξ the zeta potential, g the acceleration due to gravity, Φ the particle volume fraction, ρ the particle density, ρo the medium density, λ the specific volume conductivity, and η the viscosity. Smoluchowski developed the equation under five assumptions: formula_1 where "Di" is the diffusion coefficient of the "ith" solute species, and "ni∞" is the number concentration of the electrolyte solution. Ohshima's model was developed in 1984 and was originally used to analyze the sedimentation velocity of a single charged sphere and the sedimentation potential of a dilute suspension. The model provided below holds true for dilute suspensions of low zeta potential, i.e. "e"ζ/"k"B"T" ≤ 2: formula_2 Testing. Measurement. Sedimentation potential is measured by attaching electrodes to a glass column filled with the dispersion of interest. A voltmeter is attached to measure the potential generated from the suspension. To account for different geometries of the electrode, the column is typically rotated 180 degrees while measuring the potential. The difference in potential measured upon rotation by 180 degrees is twice the sedimentation potential. The zeta potential can be determined through measurement of the sedimentation potential, as the concentration, conductivity of the suspension, density of the particle, and potential difference are known. By rotating the column 180 degrees, drift and geometry differences of the column can be ignored: formula_3 When dealing with the case of concentrated systems, the zeta potential can be determined through measurement of the sedimentation potential formula_4, from the potential difference relative to the distance between the electrodes. The other parameters represent the following: formula_5 the viscosity of the medium; formula_6 the bulk conductivity; formula_7 the relative permittivity of the medium; formula_8 the permittivity of free space; formula_9 the density of the particle; formula_10 the density of the medium; formula_11 is the acceleration due to gravity; and σ∞ is the electrical conductivity of the bulk electrolyte solution. An improved cell design was developed to determine the sedimentation potential, specific conductivity, volume fraction of the solids, as well as the pH. Two pairs of electrodes are used in this setup, one to measure potential difference and the other for resistance. A flip switch is utilized to avoid polarization of the resistance electrodes and buildup of charge by alternating the current. The pH of the system could be monitored and the electrolyte was drawn into the tube using a vacuum pump. Applications. Applications of sedimentation field flow fractionation (SFFF). Sedimentation field flow fractionation (SFFF) is a non-destructive separation technique which can be used both for separation and for collecting fractions. Some applications of SFFF include characterization of the particle size of latex materials for adhesives, coatings and paints, colloidal silica for binders, coatings and compounding agents, titanium oxide pigments for paints, paper and textiles, emulsions for soft drinks, and biological materials like viruses and liposomes.
Some main aspects of SFFF include: it provides high-resolution size distribution measurements with high precision, the resolution is dependent on experimental conditions, the typical analysis time is 1 to 2 hours, and it is a non-destructive technique which offers the possibility of collecting fractions. Particle size analysis by sedimentation field flow fractionation. As sedimentation field flow fractionation (SFFF) is one of the field flow fractionation separation techniques, it is appropriate for fractionation and characterization of particulate materials and soluble samples in the colloid size range. Differences in interaction between a centrifugal force field and particles with different masses or sizes lead to the separation. An exponential distribution of particles of a certain size or weight results from Brownian motion. Some of the assumptions made to develop the theoretical equations are that there is no interaction between individual particles and that equilibrium can occur anywhere in the separation channels. See also. Various combinations of the driving force and moving phase determine various electrokinetic effects. Following "Fundamentals of Interface and Colloid Science" by Lyklema (1995), the complete family of electrokinetic phenomena includes: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
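A minimal Python sketch of Smoluchowski's relation given at the start of the theory section above, evaluating the sedimentation field strength for a dilute suspension. All numerical inputs are illustrative assumptions, not measured data.

```python
eps0 = 8.8541878128e-12   # permittivity of free space, F/m
g = 9.81                  # acceleration due to gravity, m/s^2

def sedimentation_potential(zeta, phi, rho_p, rho_0, sigma, eta, D_r=78.5):
    """Smoluchowski sedimentation field strength (V/m) for dilute, settling charged
    spheres: E_s = -eps0 * D * zeta * (rho_p - rho_0) * phi * g / (sigma * eta)."""
    return -eps0 * D_r * zeta * (rho_p - rho_0) * phi * g / (sigma * eta)

# Made-up example: silica-like spheres in a dilute aqueous electrolyte
E_s = sedimentation_potential(zeta=-0.040,   # zeta potential, V
                              phi=0.01,      # particle volume fraction
                              rho_p=2200.0,  # particle density, kg/m^3
                              rho_0=1000.0,  # medium density, kg/m^3
                              sigma=0.01,    # conductivity, S/m
                              eta=0.89e-3)   # viscosity, Pa s
print(E_s)   # a fraction of a millivolt per metre for these inputs
```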
[ { "math_id": 0, "text": "E_{s} = - \\frac{\\varepsilon\\zeta (\\rho -\\rho _{0})\\phi _{p}g }{\\sigma ^{\\infty }\\eta }" }, { "math_id": 1, "text": "\\sigma ^{\\infty} = \\frac{e^{2}}{k_{B}T}\\sum z_{i}^{2}D_{i}n_{i\\infty }" }, { "math_id": 2, "text": " E_{s} = - \\frac{\\varepsilon\\zeta (\\rho -\\rho _{0})\\phi _{p} }{\\sigma ^{\\infty }\\eta } gH(\\kappa \\alpha )+\\vartheta(\\zeta ^2)" }, { "math_id": 3, "text": "\\zeta = \\frac{\\eta \\lambda E_{s}}{\\varepsilon_{r}\\varepsilon _{0}(\\rho -\\rho _{0})g }" }, { "math_id": 4, "text": "E_{s}" }, { "math_id": 5, "text": "\\eta" }, { "math_id": 6, "text": "\\lambda " }, { "math_id": 7, "text": "\\varepsilon_{r}" }, { "math_id": 8, "text": "\\varepsilon_{0}" }, { "math_id": 9, "text": "\\rho" }, { "math_id": 10, "text": "\\rho_{0}" }, { "math_id": 11, "text": "g" } ]
https://en.wikipedia.org/wiki?curid=13757191
13757276
Quadratic eigenvalue problem
In mathematics, the quadratic eigenvalue problem (QEP), is to find scalar eigenvalues formula_0, left eigenvectors formula_1 and right eigenvectors formula_2 such that formula_3 where formula_4, with matrix coefficients formula_5 and we require that formula_6, (so that we have a nonzero leading coefficient). There are formula_7 eigenvalues that may be "infinite" or finite, and possibly zero. This is a special case of a nonlinear eigenproblem. formula_8 is also known as a quadratic polynomial matrix. Spectral theory. A QEP is said to be regular if formula_9 identically. The coefficient of the formula_10 term in formula_11 is formula_12, implying that the QEP is regular if formula_13 is nonsingular. Eigenvalues at infinity and eigenvalues at 0 may be exchanged by considering the reversed polynomial, formula_14. As there are formula_15 eigenvectors in a formula_16 dimensional space, the eigenvectors cannot be orthogonal. It is possible to have the same eigenvector attached to different eigenvalues. Applications. Systems of differential equations. Quadratic eigenvalue problems arise naturally in the solution of systems of second order linear differential equations without forcing: formula_17 Where formula_18, and formula_19. If all quadratic eigenvalues of formula_20 are distinct, then the solution can be written in terms of the quadratic eigenvalues and right quadratic eigenvectors as formula_21 Where formula_22 are the quadratic eigenvalues, formula_23 are the formula_15 right quadratic eigenvectors, and formula_24 is a parameter vector determined from the initial conditions on formula_25 and formula_26. Stability theory for linear systems can now be applied, as the behavior of a solution depends explicitly on the (quadratic) eigenvalues. Finite element methods. A QEP can result in part of the dynamic analysis of structures discretized by the finite element method. In this case the quadratic, formula_8 has the form formula_4, where formula_13 is the mass matrix, formula_27 is the damping matrix and formula_28 is the stiffness matrix. Other applications include vibro-acoustics and fluid dynamics. Methods of solution. Direct methods for solving the standard or generalized eigenvalue problems formula_29 and formula_30 are based on transforming the problem to Schur or Generalized Schur form. However, there is no analogous form for quadratic matrix polynomials. One approach is to transform the quadratic matrix polynomial to a linear matrix pencil (formula_31), and solve a generalized eigenvalue problem. Once eigenvalues and eigenvectors of the linear problem have been determined, eigenvectors and eigenvalues of the quadratic can be determined. The most common linearization is the first companion linearization formula_32 with corresponding eigenvector formula_33 For convenience, one often takes formula_34 to be the formula_35 identity matrix. We solve formula_36 for formula_37 and formula_38, for example by computing the Generalized Schur form. We can then take the first formula_16 components of formula_38 as the eigenvector formula_2 of the original quadratic formula_8. Another common linearization is given by formula_39 In the case when either formula_40 or formula_41 is a Hamiltonian matrix and the other is a skew-Hamiltonian matrix, the following linearizations can be used. formula_42 formula_43
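A minimal Python sketch of the first companion linearization L1 described above, assuming NumPy and SciPy are available: the QEP is turned into a generalized eigenvalue problem for the pencil, and the first n components of each linearized eigenvector are taken as the quadratic eigenvectors. The test matrices are made up for illustration.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(M, C, K):
    """Solve (lam**2 * M + lam * C + K) x = 0 via the first companion
    linearization L1 above, taking N as the identity matrix."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])   # constant term of L1
    B = np.block([[I, Z], [Z, M]])     # coefficient of lambda in L1
    lam, z = eig(A, B)                 # generalized eigenproblem A z = lam B z
    return lam, z[:n, :]               # first n rows: the quadratic eigenvectors

# Made-up 3x3 example; M is nonsingular, so all 2n = 6 eigenvalues are finite.
rng = np.random.default_rng(1)
n = 3
M = np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T
K = rng.standard_normal((n, n)); K = K + K.T

lam, X = solve_qep(M, C, K)
residuals = [np.linalg.norm((lj**2 * M + lj * C + K) @ X[:, j])
             for j, lj in enumerate(lam)]
print(max(residuals))   # round-off sized, confirming each pair solves the QEP
```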
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": " Q(\\lambda)x = 0 ~ \\text{ and } ~ y^\\ast Q(\\lambda) = 0," }, { "math_id": 4, "text": "Q(\\lambda)=\\lambda^2 M + \\lambda C + K" }, { "math_id": 5, "text": "M, \\, C, K \\in \\mathbb{C}^{n \\times n}" }, { "math_id": 6, "text": "M\\,\\neq 0" }, { "math_id": 7, "text": "2n" }, { "math_id": 8, "text": "Q(\\lambda)" }, { "math_id": 9, "text": "\\text{det} (Q(\\lambda)) \\not \\equiv 0" }, { "math_id": 10, "text": "\\lambda^{2n}" }, { "math_id": 11, "text": "\\text{det}(Q(\\lambda))" }, { "math_id": 12, "text": "\\text{det}(M)" }, { "math_id": 13, "text": "M" }, { "math_id": 14, "text": " \\lambda^2 Q(\\lambda^{-1}) = \\lambda^2 K + \\lambda C + M " }, { "math_id": 15, "text": " 2n" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": " M q''(t) +C q'(t) + K q(t) = 0 " }, { "math_id": 18, "text": " q(t) \\in \\mathbb{R}^n " }, { "math_id": 19, "text": " M, C, K \\in \\mathbb{R}^{n\\times n}" }, { "math_id": 20, "text": " Q(\\lambda) = \\lambda^2 M + \\lambda C + K " }, { "math_id": 21, "text": "\nq(t) = \\sum_{j=1}^{2n} \\alpha_j x_j e^{\\lambda_j t} = X e^{\\Lambda t} \\alpha\n" }, { "math_id": 22, "text": "\\Lambda = \\text{Diag}([\\lambda_1, \\ldots, \\lambda_{2n}]) \\in \\mathbb{R}^{2n \\times 2n} " }, { "math_id": 23, "text": " X = [x_1, \\ldots, x_{2n}] \\in \\mathbb{R}^{n \\times 2n} " }, { "math_id": 24, "text": " \\alpha = [\\alpha_1, \\cdots, \\alpha_{2n}]^\\top \\in \\mathbb{R}^{2n}" }, { "math_id": 25, "text": " q" }, { "math_id": 26, "text": " q'" }, { "math_id": 27, "text": "C" }, { "math_id": 28, "text": "K" }, { "math_id": 29, "text": " Ax = \\lambda x" }, { "math_id": 30, "text": " Ax = \\lambda B x " }, { "math_id": 31, "text": " A-\\lambda B" }, { "math_id": 32, "text": "\nL1(\\lambda) = \n\\begin{bmatrix}\n0 & N \\\\\n-K & -C \n\\end{bmatrix}\n-\n\\lambda\\begin{bmatrix}\nN & 0 \\\\\n0 & M \n\\end{bmatrix},\n " }, { "math_id": 33, "text": "\nz = \n\\begin{bmatrix}\nx \\\\\n\\lambda x\n\\end{bmatrix}.\n" }, { "math_id": 34, "text": "N" }, { "math_id": 35, "text": "n\\times n" }, { "math_id": 36, "text": " L(\\lambda) z = 0 " }, { "math_id": 37, "text": " \\lambda " }, { "math_id": 38, "text": "z" }, { "math_id": 39, "text": "\nL2(\\lambda)= \\begin{bmatrix}\n-K & 0 \\\\\n0 & N\n\\end{bmatrix}\n-\n\\lambda\\begin{bmatrix}\nC & M \\\\\nN & 0 \n\\end{bmatrix}.\n" }, { "math_id": 40, "text": "A" }, { "math_id": 41, "text": "B" }, { "math_id": 42, "text": "\nL3(\\lambda)= \\begin{bmatrix}\nK & 0 \\\\\nC & K\n\\end{bmatrix}\n-\n\\lambda\\begin{bmatrix}\n0 & K \\\\\n-M & 0 \n\\end{bmatrix}.\n" }, { "math_id": 43, "text": "\nL4(\\lambda)= \\begin{bmatrix}\n0 & -K \\\\\nM & 0\n\\end{bmatrix}\n-\n\\lambda\\begin{bmatrix}\nM & C \\\\\n0 & M \n\\end{bmatrix}.\n" } ]
https://en.wikipedia.org/wiki?curid=13757276
13760785
Prehomogeneous vector space
In mathematics, a prehomogeneous vector space (PVS) is a finite-dimensional vector space "V" together with a subgroup "G" of the general linear group GL("V") such that "G" has an open dense orbit in "V". The term prehomogeneous vector space was introduced by Mikio Sato in 1970. These spaces have many applications in geometry, number theory and analysis, as well as representation theory. The irreducible PVS were classified first by Vinberg in his 1960 thesis in the special case when G is simple and later by Sato and Tatsuo Kimura in 1977 in the general case by means of a transformation known as "castling". They are subdivided into two types, according to whether the semisimple part of "G" acts prehomogeneously or not. If it doesn't then there is a homogeneous polynomial on "V" which is invariant under the semisimple part of "G". Setting. In the setting of Sato, "G" is an algebraic group and "V" is a rational representation of "G" which has a (nonempty) open orbit in the Zariski topology. However, PVS can also be studied from the point of view of Lie theory: for instance, in Knapp (2002), "G" is a complex Lie group and "V" is a holomorphic representation of "G" with an open dense orbit. The two approaches are essentially the same, and the theory has validity over the real numbers. We assume, for simplicity of notation, that the action of "G" on "V" is a faithful representation. We can then identify "G" with its image in GL("V"), although in practice it is sometimes convenient to let "G" be a covering group. Although prehomogeneous vector spaces do not necessarily decompose into direct sums of irreducibles, it is natural to study the irreducible PVS (i.e., when "V" is an irreducible representation of "G"). In this case, a theorem of Élie Cartan shows that "G" ≤ GL("V") is a reductive group, with a centre that is at most one-dimensional. This, together with the obvious dimensional restriction dim "G" ≥ dim "V", is the key ingredient in the Sato–Kimura classification. Castling. The classification of PVS is complicated by the following fact. Suppose "m" &gt; "n" &gt; 0 and "V" is an "m"-dimensional representation of "G" over a field F. Then: ("G" × SL("n"), "V" ⊗ F"n") is a PVS if and only if ("G" × SL("m" − "n"), "V"* ⊗ F"m"−"n") is a PVS. The proof is to observe that both conditions are equivalent to there being an open dense orbit of the action of "G" on the Grassmannian of "n"-planes in "V", because this is isomorphic to the Grassmannian of ("m" − "n")-planes in "V"*. This transformation of PVS is called castling. Given a PVS "V", a new PVS can be obtained by tensoring "V" with F and castling. By repeating this process, and regrouping tensor products, many new examples can be obtained, which are said to be "castling-equivalent". Thus PVS can be grouped into castling equivalence classes. Sato and Kimura show that in each such class, there is essentially one PVS of minimal dimension, which they call "reduced", and they classify the reduced irreducible PVS. Classification. The classification of irreducible reduced PVS ("G", "V") splits into two cases: those for which "G" is semisimple, and those for which it is reductive with one-dimensional centre. If "G" is semisimple, it is (perhaps a covering of) a subgroup of SL("V"), and hence "G" × GL(1) acts prehomogenously on "V", with one-dimensional centre. We exclude such trivial extensions of semisimple PVS from the PVS with one-dimensional center. 
In other words, in the case that "G" has one-dimensional center, we assume that the semisimple part does "not" act prehomogeneously; it follows that there is a "relative invariant", i.e., a function invariant under the semisimple part of "G", which is homogeneous of a certain degree "d". This makes it possible to restrict attention to semisimple "G" ≤ SL("V") and split the classification as follows: However, it turns out that the classification is much shorter, if one allows not just products with GL(1), but also with SL("n") and GL("n"). This is quite natural in terms of the castling transformation discussed previously. Thus we wish to classify irreducible reduced PVS in terms of semisimple "G" ≤ SL("V") and "n" ≥ 1 such that either: In the latter case, there is a homogeneous polynomial which separates the "G" × GL("n") orbits into "G" × SL("n") orbits. This has an interpretation in terms of the grassmannian Gr"n"("V") of "n"-planes in "V" (at least for "n" ≤ dim "V"). In both cases "G" acts on Gr"n"("V") with a dense open orbit "U". In the first case the complement Gr"n"("V") &amp;setminus; "U" has codimension ≥ 2; in the second case it is a divisor of some degree "d", and the relative invariant is a homogeneous polynomial of degree "nd". In the following, the classification list will be presented over the complex numbers. Irregular examples. Type 1 Spin(10, C) on C16 Type 2 Sp(2"m", C) × SO(3, C) on C2"m" ⊗ C3 Both of these examples are PVS only for "n" = 1. Remaining examples. The remaining examples are all type 2. To avoid discussing the finite groups appearing, the lists present the Lie algebra of the isotropy group rather than the isotropy group itself. Here ΛC6 ≅ C14 denotes the space of 3-forms whose contraction with the given symplectic form is zero. Proofs. Sato and Kimura establish this classification by producing a list of possible irreducible prehomogeneous ("G", "V"), using the fact that "G" is reductive and the dimensional restriction. They then check whether each member of this list is prehomogeneous or not. However, there is a general explanation why most of the pairs ("G", "V") in the classification are prehomogeneous, in terms of isotropy representations of generalized flag varieties. Indeed, in 1974, Richardson observed that if "H" is a semisimple Lie group with a parabolic subgroup "P", then the action of "P" on the nilradical formula_0⊥ of its Lie algebra has a dense open orbit. This shows in particular (and was noted independently by Vinberg in 1975) that the Levi factor "G" of "P" acts prehomogeneously on "V" := formula_0⊥/[formula_0⊥, formula_0⊥]. Almost all of the examples in the classification can be obtained by applying this construction with "P" a maximal parabolic subgroup of a simple Lie group "H": these are classified by connected Dynkin diagrams with one distinguished node. Applications. One reason that PVS are interesting is that they classify generic objects that arise in "G"-invariant situations. For example, if "G" = GL(7), then the above tables show that there are generic 3-forms under the action of "G", and the stabilizer of such a 3-form is isomorphic to the exceptional Lie group G2. Another example concerns the prehomogeneous vector spaces with a cubic relative invariant. 
By the Sato-Kimura classification, there are essentially four such examples, and they all come from complexified isotropy representations of hermitian symmetric spaces for a larger group "H" (i.e., "G" is the semisimple part of the stabilizer of a point, and "V" is the corresponding tangent representation). In each case a generic point in "V" identifies it with the complexification of a Jordan algebra of 3 × 3 hermitian matrices (over the division algebras R, C, H and O respectively) and the cubic relative invariant is identified with a suitable determinant. The isotropy algebra of such a generic point, the Lie algebra of "G" and the Lie algebra of "H" give the complexifications of the first three rows of the Freudenthal magic square. Other Hermitian symmetric spaces yields prehomogeneous vector spaces whose generic points define Jordan algebras in a similar way. The Jordan algebra "J"("m" − 1) in the last row is the spin factor (which is the vector space R"m"−1 ⊕ R, with a Jordan algebra structure defined using the inner product on R"m"−1). It reduces to "J"2(R), "J"2(C), "J"2(H), "J"2(O) for "m" = 3, 4, 6 and 10 respectively. The relation between hermitian symmetric spaces and Jordan algebras can be explained using Jordan triple systems.
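The open-orbit condition of the Setting section can be checked numerically in small cases: the G-orbit of a point is open precisely when the differential of the orbit map g ↦ g · v, taken at the identity, has rank equal to dim V. The Python sketch below does this for a classical prehomogeneous vector space, GL(2) acting by congruence, g · S = g S g^T, on the 3-dimensional space of symmetric 2 × 2 matrices (relative invariant det S). The example and the helper function are chosen here for illustration and are not taken from the tables above.

```python
import numpy as np

def orbit_differential(S):
    """3 x 4 matrix of the differential X |-> X S + S X^T of the orbit map at the
    identity, in the standard bases of gl(2) and of symmetric 2x2 matrices."""
    sym_positions = [(0, 0), (0, 1), (1, 1)]
    cols = []
    for i in range(2):
        for j in range(2):
            X = np.zeros((2, 2)); X[i, j] = 1.0
            T = X @ S + S @ X.T
            cols.append([T[a, b] for a, b in sym_positions])
    return np.array(cols).T

S0 = np.eye(2)                            # a nondegenerate symmetric matrix
print(np.linalg.matrix_rank(orbit_differential(S0)) == 3)   # True: its orbit is open

S1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # a degenerate form
print(np.linalg.matrix_rank(orbit_differential(S1)) == 3)   # False: outside the open orbit
```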
[ { "math_id": 0, "text": "\\mathfrak{p}" } ]
https://en.wikipedia.org/wiki?curid=13760785
1376120
Sackur–Tetrode equation
Expression of monatomic ideal gas entropy The Sackur–Tetrode equation is an expression for the entropy of a monatomic ideal gas. It is named for Hugo Martin Tetrode (1895–1931) and Otto Sackur (1880–1914), who developed it independently as a solution of Boltzmann's gas statistics and entropy equations, at about the same time in 1912. Formula. The Sackur–Tetrode equation expresses the entropy formula_0 of a monatomic ideal gas in terms of its thermodynamic state—specifically, its volume formula_1, internal energy formula_2, and the number of particles formula_3: formula_4 where formula_5 is the Boltzmann constant, formula_6 is the mass of a gas particle and formula_7 is the Planck constant. The equation can also be expressed in terms of the thermal wavelength formula_8: formula_9 For a derivation of the Sackur–Tetrode equation, see the Gibbs paradox. For the constraints placed upon the entropy of an ideal gas by thermodynamics alone, see the ideal gas article. The above expressions assume that the gas is in the classical regime and is described by Maxwell–Boltzmann statistics (with "correct Boltzmann counting"). From the definition of the thermal wavelength, this means the Sackur–Tetrode equation is valid only when formula_10 The entropy predicted by the Sackur–Tetrode equation approaches negative infinity as the temperature approaches zero. Sackur–Tetrode constant. The Sackur–Tetrode constant, written "S"0/"R", is equal to "S"/"kBN" evaluated at a temperature of "T" = 1 kelvin, at standard pressure (100 kPa or 101.325 kPa, to be specified), for one mole of an ideal gas composed of particles of mass equal to the atomic mass constant ("m"u = ‍). Its 2018 CODATA recommended value is: "S"0/"R" = for "p"o = 100 kPa "S"0/"R" = for "p"o = 101.325 kPa. Information-theoretic interpretation. In addition to the thermodynamic perspective of entropy, the tools of information theory can be used to provide an information perspective of entropy. In particular, it is possible to derive the Sackur–Tetrode equation in information-theoretic terms. The overall entropy is represented as the sum of four individual entropies, i.e., four distinct sources of missing information. These are positional uncertainty, momenta uncertainty, the quantum mechanical uncertainty principle, and the indistinguishability of the particles. Summing the four pieces, the Sackur–Tetrode equation is then given as formula_11 The derivation uses Stirling's approximation, formula_12. Strictly speaking, the use of dimensioned arguments to the logarithms is incorrect, however their use is a "shortcut" made for simplicity. If each logarithmic argument were divided by an unspecified standard value expressed in terms of an unspecified standard mass, length and time, these standard values would cancel in the final result, yielding the same conclusion. The individual entropy terms will not be absolute, but will rather depend upon the standards chosen, and will differ with different standards by an additive constant. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
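The thermal-wavelength form of the equation is straightforward to evaluate numerically. The Python sketch below reproduces the Sackur–Tetrode constant for p = 100 kPa (about −1.1517) and evaluates the molar entropy of argon at 298.15 K and 100 kPa, which lands near the commonly tabulated standard molar entropy of roughly 155 J/(mol·K). The physical constants and the comparison figure are standard values quoted here for orientation; they are not taken from this article.

```python
import numpy as np

# Standard CODATA-style constants (SI)
k_B = 1.380649e-23        # Boltzmann constant, J/K
h   = 6.62607015e-34      # Planck constant, J s
R   = 8.314462618         # molar gas constant, J/(mol K)
m_u = 1.66053906660e-27   # atomic mass constant, kg

def S_over_kN(m, T, p):
    """S/(k_B N) of a monatomic ideal gas of particle mass m at temperature T and
    pressure p, using the thermal-wavelength form of the Sackur-Tetrode equation."""
    Lam = h / np.sqrt(2.0 * np.pi * m * k_B * T)   # thermal wavelength
    v = k_B * T / p                                # volume per particle, V/N
    return np.log(v / Lam**3) + 2.5

# Sackur-Tetrode constant: particle mass m_u, T = 1 K, p = 100 kPa
print(S_over_kN(m_u, 1.0, 1.0e5))                  # about -1.1517

# Molar entropy of argon (m = 39.948 u) at 298.15 K and 100 kPa
print(R * S_over_kN(39.948 * m_u, 298.15, 1.0e5))  # about 155 J/(mol K)
```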
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "U" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "\n\\frac{S}{k_{\\rm B} N} = \\ln\n\\left[ \\frac VN \\left(\\frac{4\\pi m}{3h^2}\\frac UN\\right)^{3/2}\\right]+\n{\\frac 52}\n," }, { "math_id": 5, "text": "k_\\mathrm{B}" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "h" }, { "math_id": 8, "text": "\\Lambda" }, { "math_id": 9, "text": "\n\\frac{S}{k_{\\rm B}N} = \\ln\\left(\\frac{V}{N\\Lambda^3}\\right)+\\frac{5}{2} ,\n" }, { "math_id": 10, "text": "\\frac{V}{N\\Lambda^3}\\gg 1 ." }, { "math_id": 11, "text": "\n\\begin{align}\n\\frac{S}{k_{\\rm B} N} & = [\\ln V] + \\left[\\frac 32 \\ln\\left(2\\pi e m k_{\\rm B} T\\right)\\right] + [ -3\\ln h] + \\left[-\\frac{\\ln N!}{N}\\right] \\\\\n& \\approx \\ln \\left[\\frac{V}{N} \\left(\\frac{2\\pi m k_{\\rm B} T}{h^2}\\right)^{\\frac 32}\\right] + \\frac 52 \n\\end{align}\n" }, { "math_id": 12, "text": "\\ln N! \\approx N \\ln N - N" } ]
https://en.wikipedia.org/wiki?curid=1376120
13762380
Oscillating U-tube
The oscillating U-tube is a technique to determine the density of liquids and gases based on an electronic measurement of the frequency of oscillation, from which the density value is calculated. This measuring principle is based on the mass-spring model. The sample is filled into a container that is capable of oscillation. The eigenfrequency of this container is influenced by the sample's mass. This container is a hollow, U-shaped glass tube (oscillating U-tube) which is electronically excited into undamped oscillation. The two branches of the U-shaped oscillator function as its spring elements. The direction of oscillation is normal to the plane of the two branches. The oscillator's eigenfrequency is only influenced by the part of the sample that is actually involved in the oscillation. The volume involved in the oscillation is limited by the stationary oscillation nodes at the bearing points of the oscillator. If the oscillator is filled at least up to its bearing points, the same precisely defined volume always participates in the oscillation, thus the measured value of the sample's mass can be used to calculate its density. Overfilling the oscillator beyond the bearing points is irrelevant to the measurement. For this reason the oscillator can also be employed to measure the density of sample media that flow through the tube (continuous measurement). In modern digital density meters, piezo elements are used to excite the U-tube, while optical pickups determine the period of oscillation. This period τ can be measured with high resolution and is related in a simple way to the density ρ of the sample in the oscillator: formula_0 A and B are the respective instrument constants of each oscillator. Their values are determined by calibrating with two substances of precisely known densities ρ1 and ρ2. Modern instruments calculate and store the constants A and B after the two calibration measurements, which are mostly performed with air and water. They employ suitable measures to compensate for various parasitic influences on the measuring result, e.g. the influence of the sample's viscosity and the non-linearity caused by the measuring instrument's finite mass, as well as aging effects of the glass (reference oscillator). In 1967 the company Anton Paar GmbH presented the first digital density meter for liquids and gases employing the oscillating U-tube principle.
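The two-point calibration described above amounts to solving ρ = A·τ² − B for the two instrument constants from a pair of reference measurements. A minimal Python sketch follows; the oscillation periods are made-up numbers, while the reference densities are typical values for air and water near 20 °C.

```python
def calibrate(tau1, rho1, tau2, rho2):
    """Instrument constants A and B from two calibration points, using rho = A*tau**2 - B."""
    A = (rho1 - rho2) / (tau1**2 - tau2**2)
    B = A * tau1**2 - rho1
    return A, B

def density(tau, A, B):
    """Density of a sample from its measured oscillation period."""
    return A * tau**2 - B

# Calibration with air and water (made-up periods, typical densities near 20 degC)
A, B = calibrate(2.6300e-3, 1.204,      # air:   period in s, density in kg/m^3
                 2.8600e-3, 998.2)      # water: period in s, density in kg/m^3

print(density(2.8100e-3, A, B))         # density of an unknown sample, kg/m^3
```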
[ { "math_id": 0, "text": "\n\\rho = A \\cdot \\tau^2 - B\n" } ]
https://en.wikipedia.org/wiki?curid=13762380
13763478
Automorphisms of the symmetric and alternating groups
In group theory, a branch of mathematics, the automorphisms and outer automorphisms of the symmetric groups and alternating groups are both standard examples of these automorphisms, and objects of study in their own right, particularly the exceptional outer automorphism of S6, the symmetric group on 6 elements. Formally, formula_1 is complete and the natural map formula_7 is an isomorphism. Indeed, the natural maps formula_12 are isomorphisms. formula_13 formula_14 The exceptional outer automorphism of S6. Among symmetric groups, only S6 has a non-trivial outer automorphism, which one can call "exceptional" (in analogy with exceptional Lie algebras) or "exotic". In fact, Out(S6) = C2. This was discovered by Otto Hölder in 1895. The specific nature of the outer automorphism is as follows. The 360 permutations in the even subgroup (A6) are transformed amongst themselves: And the odd part is also conserved: Thus, all 720 permutations on 6 elements are accounted for. The outer automorphism does not preserve cycle structure in general, mapping some single cycles to the product of two or three cycles and vice versa. This also yields another outer automorphism of A6, and this is the only exceptional outer automorphism of a finite simple group: for the infinite families of simple groups, there are formulas for the number of outer automorphisms, and the simple group of order 360, thought of as A6, would be expected to have two outer automorphisms, not four. However, when A6 is viewed as PSL(2, 9) the outer automorphism group has the expected order. (For sporadic groups – i.e. those not falling in an infinite family – the notion of exceptional outer automorphism is ill-defined, as there is no general formula.) Construction. There are numerous constructions, listed in . Note that as an outer automorphism, it is a "class" of automorphisms, well-determined only up to an inner automorphism, hence there is not a natural one to write down. One method is: Throughout the following, one can work with the multiplication action on cosets or the conjugation action on conjugates. To see that S6 has an outer automorphism, recall that homomorphisms from a group "G" to a symmetric group S"n" are essentially the same as actions of "G" on a set of "n" elements, and the subgroup fixing a point is then a subgroup of index at most "n" in "G". Conversely if we have a subgroup of index "n" in "G", the action on the cosets gives a transitive action of "G" on "n" points, and therefore a homomorphism to S"n". Construction from graph partitions. Before the more mathematically rigorous constructions, it helps to understand a simple construction. Take a complete graph with 6 vertices, K6. It has 15 edges, which can be partitioned into perfect matchings in 15 different ways, each perfect matching being a set of three edges no two of which share a vertex. It is possible to find a set of 5 perfect matchings from the set of 15 such that no two matchings share an edge, and that between them include all 5 × 3 = 15 edges of the graph; this graph factorization can be done in 6 different ways. Consider a permutation of the 6 vertices, and see its effect on the 6 different factorizations. We get a map from 720 input permutations to 720 output permutations. That map is precisely the outer automorphism of S6. Being an automorphism, the map must preserve the order of elements, but unlike inner automorphisms, it does not preserve cycle structure, thereby indicating that it must be an outer automorphism. 
For instance, a 2-cycle maps to a product of three 2-cycles; it is easy to see that a 2-cycle affects all of the 6 graph factorizations in some way, and hence has no fixed points when viewed as a permutation of factorizations. The fact that it is possible to construct this automorphism at all relies on a large number of numerical coincidences which apply only to "n" = 6. Exotic map S5 → S6. There is a subgroup (indeed, 6 conjugate subgroups) of S6 which is abstractly isomorphic to S5, but which acts transitively as subgroups of S6 on a set of 6 elements. (The image of the obvious map S"n" → S"n"+1 fixes an element and thus is not transitive.) Sylow 5-subgroups. Janusz and Rotman construct it thus: This follows from inspection of 5-cycles: each 5-cycle generates a group of order 5 (thus a Sylow subgroup), there are 5!/5 = 120/5 = 24  5-cycles, yielding 6 subgroups (as each subgroup also includes the identity), and S"n" acts transitively by conjugation on the set of cycles of a given class, hence transitively by conjugation on these subgroups. Alternately, one could use the Sylow theorems, which state generally that all Sylow p-subgroups are conjugate. PGL(2,5). The projective linear group of dimension two over the finite field with five elements, PGL(2, 5), acts on the projective line over the field with five elements, P1(F5), which has six elements. Further, this action is faithful and 3-transitive, as is always the case for the action of the projective linear group on the projective line. This yields a map PGL(2, 5) → S6 as a transitive subgroup. Identifying PGL(2, 5) with S5 and the projective special linear group PSL(2, 5) with A5 yields the desired exotic maps S5 → S6 and A5 → A6. Following the same philosophy, one can realize the outer automorphism as the following two inequivalent actions of S6 on a set with six elements: Frobenius group. Another way: To construct an outer automorphism of S6, we need to construct an "unusual" subgroup of index 6 in S6, in other words one that is not one of the six obvious S5 subgroups fixing a point (which just correspond to inner automorphisms of S6). The Frobenius group of affine transformations of F5 (maps formula_20 where "a" ≠ 0) has order 20 = (5 − 1) · 5 and acts on the field with 5 elements, hence is a subgroup of S5. S5 acts transitively on the coset space, which is a set of 120/20 = 6 elements (or by conjugation, which yields the action above). Other constructions. Ernst Witt found a copy of Aut(S6) in the Mathieu group M12 (a subgroup "T" isomorphic to S6 and an element "σ" that normalizes "T" and acts by outer automorphism). Similarly to S6 acting on a set of 6 elements in 2 different ways (having an outer automorphism), M12 acts on a set of 12 elements in 2 different ways (has an outer automorphism), though since "M"12 is itself exceptional, one does not consider this outer automorphism to be exceptional itself. The full automorphism group of A6 appears naturally as a maximal subgroup of the Mathieu group M12 in 2 ways, as either a subgroup fixing a division of the 12 points into a pair of 6-element sets, or as a subgroup fixing a subset of 2 points. Another way to see that S6 has a nontrivial outer automorphism is to use the fact that A6 is isomorphic to PSL2(9), whose automorphism group is the projective semilinear group PΓL2(9), in which PSL2(9) is of index 4, yielding an outer automorphism group of order 4. The most visual way to see this automorphism is to give an interpretation via algebraic geometry over finite fields, as follows. 
Consider the action of S6 on affine 6-space over the field k with 3 elements. This action preserves several things: the hyperplane "H" on which the coordinates sum to 0, the line "L" in "H" where all coordinates coincide, and the quadratic form "q" given by the sum of the squares of all 6 coordinates. The restriction of "q" to "H" has defect line "L", so there is an induced quadratic form "Q" on the 4-dimensional "H"/"L" that one checks is non-degenerate and non-split. The zero scheme of "Q" in "H"/"L" defines a smooth quadric surface "X" in the associated projective 3-space over "k". Over an algebraic closure of "k", "X" is a product of two projective lines, so by a descent argument "X" is the Weil restriction to "k" of the projective line over a quadratic étale algebra "K". Since "Q" is not split over "k", an auxiliary argument with special orthogonal groups over "k" forces "K" to be a field (rather than a product of two copies of "k"). The natural S6-action on everything in sight defines a map from S6 to the "k"-automorphism group of "X", which is the semi-direct product "G" of PGL2("K") = PGL2(9) against the Galois involution. This map carries the simple group A6 nontrivially into (hence onto) the subgroup PSL2(9) of index 4 in the semi-direct product "G", so S6 is thereby identified as an index-2 subgroup of "G" (namely, the subgroup of "G" generated by PSL2(9) and the Galois involution). Conjugation by any element of "G" outside of S6 defines the nontrivial outer automorphism of S6. Structure of outer automorphism. On cycles, it exchanges permutations of type (12) with (12)(34)(56) (class 21 with class 23), and of type (123) with (145)(263) (class 31 with class 32). The outer automorphism also exchanges permutations of type (12)(345) with (123456) (class 2131 with class 61). For each of the other cycle types in S6, the outer automorphism fixes the class of permutations of the cycle type. On A6, it interchanges the 3-cycles (like (123)) with elements of class 32 (like (123)(456)). No other outer automorphisms. To see that none of the other symmetric groups have outer automorphisms, it is easiest to proceed in two steps: The latter can be shown in two ways: Each permutation of order two (called an involution) is a product of "k" &gt; 0 disjoint transpositions, so that it has cyclic structure 2"k"1"n"−2"k". What is special about the class of transpositions ("k" = 1)? If one forms the product of two distinct transpositions "τ"1 and "τ"2, then one always obtains either a 3-cycle or a permutation of type 221"n"−4, so the order of the produced element is either 2 or 3. On the other hand, if one forms the product of two distinct involutions "σ"1, "σ"2 of type "k" &gt; 1, then provided "n" ≥ 7, it is always possible to produce an element of order 6, 7 or 4, as follows. We can arrange that the product contains either For "k" ≥ 5, adjoin to the permutations "σ"1, "σ"2 of the last example redundant 2-cycles that cancel each other, and we still get two 4-cycles. Now we arrive at a contradiction, because if the class of transpositions is sent via the automorphism "f" to a class of involutions that has "k" &gt; 1, then there exist two transpositions "τ"1, "τ"2 such that "f"("τ"1) "f"("τ"2) has order 6, 7 or 4, but we know that "τ"1"τ"2 has order 2 or 3. No other outer automorphisms of S6. S6 has exactly one (class) of outer automorphisms: Out(S6) = C2. To see this, observe that there are only two conjugacy classes of S6 of size 15: the transpositions and those of class 23. 
Each element of Aut(S6) either preserves each of these conjugacy classes, or exchanges them. Any representative of the outer automorphism constructed above exchanges the conjugacy classes, whereas an index 2 subgroup stabilizes the transpositions. But an automorphism that stabilizes the transpositions is inner, so the inner automorphisms form an index 2 subgroup of Aut(S6), so Out(S6) = C2. More pithily: an automorphism that stabilizes transpositions is inner, and there are only two conjugacy classes of order 15 (transpositions and triple transpositions), hence the outer automorphism group is at most order 2. Small "n". Symmetric. For "n" = 2, S2 = C2 = Z/2 and the automorphism group is trivial (obviously, but more formally because Aut(Z/2) = GL(1, Z/2) = Z/2* = C1). The inner automorphism group is thus also trivial (also because S2 is abelian). Alternating. For "n" = 1 and 2, A1 = A2 = C1 is trivial, so the automorphism group is also trivial. For "n" = 3, A3 = C3 = Z/3 is abelian (and cyclic): the automorphism group is GL(1, Z/3*) = C2, and the inner automorphism group is trivial (because it is abelian).
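The Sylow 5-subgroup construction of the exotic embedding S5 → S6 described earlier is small enough to check by brute force. The following Python sketch (plain itertools, no group-theory library; all helper names are ad hoc) counts the 24 five-cycles of S5, groups them into the 6 Sylow 5-subgroups, and verifies that conjugation permutes those 6 subgroups transitively — which is exactly the transitive action on 6 points that yields the exotic map S5 → S6.

from itertools import permutations

def compose(p, q):
    """(p o q)(x) = p[q[x]]; a permutation is a tuple mapping index -> image."""
    return tuple(p[q[x]] for x in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def order(p):
    e = tuple(range(len(p)))
    q, n = p, 1
    while q != e:
        q, n = compose(p, q), n + 1
    return n

S5 = list(permutations(range(5)))
five_cycles = [p for p in S5 if order(p) == 5]
assert len(five_cycles) == 24                # 5!/5 = 24 five-cycles

# Each Sylow 5-subgroup is the identity together with four 5-cycles.
identity = tuple(range(5))
sylow_subgroups = set()
for c in five_cycles:
    subgroup, q = {identity}, c
    while q != identity:
        subgroup.add(q)
        q = compose(c, q)
    sylow_subgroups.add(frozenset(subgroup))
assert len(sylow_subgroups) == 6             # six Sylow 5-subgroups

def conjugate(g, H):
    g_inv = inverse(g)
    return frozenset(compose(compose(g, h), g_inv) for h in H)

# Conjugation permutes the six subgroups, and the orbit of any one of them
# is all six: the induced homomorphism S5 -> S6 lands in S6 as a transitive subgroup.
some_subgroup = next(iter(sylow_subgroups))
orbit = {conjugate(g, some_subgroup) for g in S5}
assert orbit == sylow_subgroups
print("S5 acts transitively on its", len(sylow_subgroups), "Sylow 5-subgroups")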
[ { "math_id": 0, "text": "n\\neq 2,6" }, { "math_id": 1, "text": "\\mathrm{S}_n" }, { "math_id": 2, "text": "n=6" }, { "math_id": 3, "text": "n=1,2" }, { "math_id": 4, "text": "n=3" }, { "math_id": 5, "text": "\\operatorname{Aut}(\\mathrm{S}_n) = \\mathrm{S}_n" }, { "math_id": 6, "text": "\\operatorname{Out}(\\mathrm{S}_n) = \\mathrm{C}_1" }, { "math_id": 7, "text": "\\mathrm{S}_n \\to \\operatorname{Aut}(\\mathrm{S}_n)" }, { "math_id": 8, "text": "n\\neq 1,2,6" }, { "math_id": 9, "text": "\\operatorname{Out}(\\mathrm{A}_n)=\\mathrm{S}_n/\\mathrm{A}_n=\\mathrm{C}_2" }, { "math_id": 10, "text": "n\\neq 2,3,6" }, { "math_id": 11, "text": "\\operatorname{Aut}(\\mathrm{A}_n)=\\operatorname{Aut}(\\mathrm{S}_n)=\\mathrm{S}_n" }, { "math_id": 12, "text": "\\mathrm{S}_n \\to \\operatorname{Aut}(\\mathrm{S}_n) \\to \\operatorname{Aut}(\\mathrm{A}_n)" }, { "math_id": 13, "text": "\\operatorname{Aut}(\\mathrm{S}_1)=\\operatorname{Out}(\\mathrm{S}_1)=\\operatorname{Aut}(\\mathrm{A}_1)=\\operatorname{Out}(\\mathrm{A}_1)=\\mathrm{C}_1" }, { "math_id": 14, "text": "\\operatorname{Aut}(\\mathrm{S}_2)=\\operatorname{Out}(\\mathrm{S}_2)=\\operatorname{Aut}(\\mathrm{A}_2)=\\operatorname{Out}(\\mathrm{A}_2)=\\mathrm{C}_1" }, { "math_id": 15, "text": "\\operatorname{Aut}(\\mathrm{A}_3)=\\operatorname{Out}(\\mathrm{A}_3)=\\mathrm{S}_3/\\mathrm{A}_3=\\mathrm{C}_2" }, { "math_id": 16, "text": "\\operatorname{Out}(\\mathrm{S}_6)=\\mathrm{C}_2" }, { "math_id": 17, "text": "\\operatorname{Aut}(\\mathrm{S}_6)=\\mathrm{S}_6 \\rtimes \\mathrm{C}_2" }, { "math_id": 18, "text": "\\operatorname{Out}(\\mathrm{A}_6)=\\mathrm{C}_2 \\times \\mathrm{C}_2" }, { "math_id": 19, "text": "\\operatorname{Aut}(\\mathrm{A}_6)=\\operatorname{Aut}(\\mathrm{S}_6)=\\mathrm{S}_6 \\rtimes \\mathrm{C}_2." }, { "math_id": 20, "text": "x \\mapsto ax+b" } ]
https://en.wikipedia.org/wiki?curid=13763478
13763527
Lommel function
The Lommel differential equation, named after Eugen von Lommel, is an inhomogeneous form of the Bessel differential equation: formula_0 Solutions are given by the Lommel functions "s"μ,ν("z") and "S"μ,ν("z"), introduced by Eugen von Lommel (1880), formula_1 formula_2 where "J"ν("z") is a Bessel function of the first kind and "Y"ν("z") a Bessel function of the second kind. The "s" function can also be written as formula_3 where "p""F""q" is a generalized hypergeometric function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
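As a quick numerical cross-check of the two representations above, the following sketch (assuming the mpmath library is available; the parameter values are arbitrary) evaluates sμ,ν(z) both from the Bessel-integral definition and from the 1F2 hypergeometric form; the two results should agree to many digits.

import mpmath as mp

mp.mp.dps = 30  # working precision (decimal digits)

def lommel_s_hyp(mu, nu, z):
    """s_{mu,nu}(z) from the generalized hypergeometric representation."""
    prefactor = z**(mu + 1) / ((mu - nu + 1) * (mu + nu + 1))
    return prefactor * mp.hyp1f2(1, (mu - nu + 3) / 2, (mu + nu + 3) / 2, -z**2 / 4)

def lommel_s_integral(mu, nu, z):
    """s_{mu,nu}(z) from the definition in terms of Bessel-function integrals."""
    int_j = mp.quad(lambda x: x**mu * mp.besselj(nu, x), [0, z])
    int_y = mp.quad(lambda x: x**mu * mp.bessely(nu, x), [0, z])
    return mp.pi / 2 * (mp.bessely(nu, z) * int_j - mp.besselj(nu, z) * int_y)

mu, nu, z = mp.mpf("2.5"), mp.mpf("0.5"), mp.mpf(3)
print(lommel_s_hyp(mu, nu, z))
print(lommel_s_integral(mu, nu, z))   # should match the line above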
[ { "math_id": 0, "text": "z^2 \\frac{d^2y}{dz^2} + z \\frac{dy}{dz} + (z^2 - \\nu^2)y = z^{\\mu+1}." }, { "math_id": 1, "text": "s_{\\mu,\\nu}(z) = \\frac{\\pi}{2} \\left[ Y_{\\nu} (z) \\! \\int_{0}^{z} \\!\\! x^{\\mu} J_{\\nu}(x) \\, dx - J_\\nu (z) \\! \\int_{0}^{z} \\!\\! x^{\\mu} Y_{\\nu}(x) \\, dx \\right]," }, { "math_id": 2, "text": "S_{\\mu,\\nu}(z) = s_{\\mu,\\nu}(z) + 2^{\\mu-1} \\Gamma\\left(\\frac{\\mu + \\nu + 1}{2}\\right) \\Gamma\\left(\\frac{\\mu - \\nu + 1}{2}\\right)\n\\left(\\sin \\left[(\\mu - \\nu)\\frac{\\pi}{2}\\right] J_\\nu(z) - \\cos \\left[(\\mu - \\nu)\\frac{\\pi}{2}\\right] Y_\\nu(z)\\right)," }, { "math_id": 3, "text": " s_{\\mu, \\nu} (z) = \\frac{z^{\\mu + 1}}{(\\mu - \\nu + 1)(\\mu + \\nu + 1)} {}_1F_2(1; \\frac{\\mu}{2} - \\frac{\\nu}{2} + \\frac{3}{2} , \\frac{\\mu}{2} + \\frac{\\nu}{2} + \\frac{3}{2} ;-\\frac{z^2}{4})," } ]
https://en.wikipedia.org/wiki?curid=13763527
13766990
Henry George theorem
Economic theorem The Henry George theorem states that under certain conditions, aggregate spending by government on public goods will increase aggregate rent based on land value (land rent) more than that amount, with the benefit of the last marginal investment equaling its cost. The theory is named for 19th century U.S. political economist and activist Henry George. Theory. This general relationship, first noted by the French physiocrats in the 18th century, is one basis for advocating the collection of a tax based on land rents to help defray the cost of public investment that helps create land values. Henry George popularized this method of raising public revenue in his works (especially in "Progress and Poverty"), which launched the 'single tax' movement. In 1977, Joseph Stiglitz showed that under certain conditions, beneficial investments in public goods will increase aggregate land rents by at least as much as the investments' cost. This proposition was dubbed the "Henry George theorem", as it characterizes a situation where Henry George's 'single tax' on land values is not only efficient, it is also the only tax necessary to finance public expenditures. Henry George had famously advocated for the replacement of all other taxes with a land value tax, arguing that as the location value of land was improved by public works, its economic rent was the most logical source of public revenue. Subsequent studies generalized the principle and found that the theorem holds even after relaxing assumptions. Studies indicate that even existing land prices, which are depressed due to the existing burden of taxation on income and investment, are great enough to replace taxes at all levels of government. Economists later discussed whether the theorem provides a practical guide for determining optimal city and enterprise size. Mathematical treatments suggest that an entity obtains optimal population when the opposing marginal costs and marginal benefits of additional residents are balanced. The status quo alternative is that the bulk of the value of public improvements is captured by the landowners, because the state has only (unfocused) income and capital taxes by which to do so. Derivation. Stiglitz (1977). The following derivation follows an economic model presented in Joseph Stiglitz’ 1977 theory of local public goods. Suppose a community in which production, which is a function of the population size of the workforce N, yields private and public goods. The community seeks to maximize the utility function: formula_0 subject to the corresponding resource constraint: formula_1 where Y is output, c is the per capita consumption of private goods, and G is the aggregate consumption of local public goods, reflected by government expenditure on its provision. Land rents in this model are calculated using the ‘Ricardian rent identity’ (see Luigi Pasinetti’s “A Mathematical Formulation of the Ricardian System”): formula_2 where formula_3 the marginal product of labor. From the resource constraint formula_4, it follows that formula_5 The community’s utility maximization problem becomes: formula_6 Differentiation of the utility function with respect to N yields: formula_7 For the population size to be optimal, formula_8 must equal zero. Moreover, as per the utility function, formula_9. Consequently, formula_10 with first-order conditions: formula_11 formula_12 formula_13 Comparison of the first-order condition for G with the Ricardian rent identity reveals an equality, but only when the population is optimal. 
Thus, formula_14 In summary, under these conditions land rents are just sufficient to finance the provision of local public goods: the population size that maximizes utility is also the one at which aggregate land rent equals government expenditure on local public goods. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
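The key step above — that the first-order condition for G coincides with the Ricardian rent identity at the optimal population — can be checked symbolically for a concrete production function. The sketch below (assuming SymPy; the square-root production function is an arbitrary illustrative choice, not part of the original model) holds G fixed, finds the population size that maximizes per-capita private consumption (and hence utility, since U is increasing in c), and confirms that land rent equals G there.

import sympy as sp

N, G, A = sp.symbols('N G A', positive=True)
f = A * sp.sqrt(N)                      # illustrative production function f(N)
c = (f - G) / N                         # per-capita private consumption, c = (f(N) - G)/N

N_star = sp.solve(sp.Eq(sp.diff(c, N), 0), N)[0]   # population maximizing c for fixed G
rent = f - sp.diff(f, N) * N                       # Ricardian rent R = f(N) - f'(N) N

print(N_star)                                      # 4*G**2/A**2
print(sp.simplify(rent.subs(N, N_star) - G))       # 0, i.e. R = G at the optimum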
[ { "math_id": 0, "text": "U = U(c, G) \\; ," }, { "math_id": 1, "text": "Y = f(N) = cN + G \\; ." }, { "math_id": 2, "text": "R = f(N) - f^\\prime(N)N \\; ." }, { "math_id": 3, "text": " f^\\prime(N)= \\partial Y/\\partial N = " }, { "math_id": 4, "text": "f(N) = cN + G" }, { "math_id": 5, "text": "c = \\frac{f(N) - G}{N} \\; ." }, { "math_id": 6, "text": "\\max U = U\\left(\\frac{f(N) - G}{N}, G\\right ) \\; ." }, { "math_id": 7, "text": "\\frac{dU}{dN} = \\frac{\\partial U}{\\partial c} \\cdot \\frac{Nf^\\prime(N) - f(N) + G}{N^2} \\; ." }, { "math_id": 8, "text": "dU/dN" }, { "math_id": 9, "text": "\\partial U/\\partial c \\ne 0" }, { "math_id": 10, "text": "\\frac{Nf^\\prime(N) - f(N) + G}{N^2} = 0 \\; ," }, { "math_id": 11, "text": " c = f^\\prime(N) \\; ," }, { "math_id": 12, "text": " G = f(N) - f^\\prime(N)N \\; ," }, { "math_id": 13, "text": " N = \\frac{f(N) - G}{f^\\prime(N)} \\; ." }, { "math_id": 14, "text": "\\frac{dU}{dN} = 0 \\Rightarrow R = G \\; ." } ]
https://en.wikipedia.org/wiki?curid=13766990
1376702
Vector clock
Algorithm for partial ordering of events and detecting causality in distributed systems A vector clock is a data structure used for determining the partial ordering of events in a distributed system and detecting causality violations. Just as in Lamport timestamps, inter-process messages contain the state of the sending process's logical clock. A vector clock of a system of "N" processes is an array/vector of "N" logical clocks, one clock per process; a local "largest possible values" copy of the global clock-array is kept in each process. Denote formula_0 as the vector clock maintained by process formula_1; the clock updates proceed as follows. Initially, all clocks are zero. Each time a process experiences an internal event, it increments its own logical clock in the vector by one; for instance, upon an event at process formula_3, it updates formula_2. Each time a process sends a message, it first increments its own logical clock by one (as for an internal event) and then attaches a copy of its entire vector to the message, so that process formula_5 sends the pair formula_4. Each time a process formula_3 receives a message formula_4, it increments its own logical clock by one, formula_6, and updates each element of its vector to the maximum of its current value and the value carried in the message: formula_7. History. Lamport originated the idea of logical Lamport clocks in 1978. However, the logical clocks in that paper were scalars, not vectors. The generalization to vector time was developed several times, apparently independently, by different authors in the early 1980s. At least 6 papers contain the concept. The papers canonically cited in reference to vector clocks are Colin Fidge’s and Friedemann Mattern’s 1988 works, as they (independently) established the name "vector clock" and the mathematical properties of vector clocks. Partial ordering property. Vector clocks allow for the partial causal ordering of events. Defining the following: formula_8 denotes the vector clock of event formula_9, and formula_10 denotes the component of that clock for process formula_11; formula_12 that is, formula_8 is less than formula_13 if and only if formula_10 is less than or equal to formula_14 for every process index formula_11 and at least one of those inequalities is strict (formula_15); formula_16 denotes that event formula_9 happened before event formula_17, and if formula_16 then formula_18. Properties: antisymmetry — if formula_19, then not formula_20; transitivity — if formula_19 and formula_21, then formula_22, and correspondingly if formula_23 and formula_24, then formula_25. Relation with other orders: letting formula_26 be the real time at which event formula_9 occurs, if formula_19 then formula_27; letting formula_28 be the Lamport clock of event formula_9, if formula_19 then formula_29. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
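The update rules translate almost directly into code. The following is a minimal Python sketch (class and method names are my own, not from any particular library) of a per-process vector clock plus the happened-before test on timestamps.

class VectorClock:
    """Vector clock for one process in a fixed group of n processes."""

    def __init__(self, n, pid):
        self.clock = [0] * n      # one component per process
        self.pid = pid            # index of the owning process

    def tick(self):
        """Internal event: increment the process's own component."""
        self.clock[self.pid] += 1

    def send(self):
        """Tick, then return a copy of the vector to attach to the message."""
        self.tick()
        return list(self.clock)

    def receive(self, msg_clock):
        """Tick, then take the element-wise maximum with the received vector."""
        self.tick()
        self.clock = [max(mine, theirs) for mine, theirs in zip(self.clock, msg_clock)]

def happened_before(a, b):
    """True iff timestamp a causally precedes timestamp b (a < b)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Two processes exchanging one message:
p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
p1.tick()                          # event at P1
stamp_send = p0.send()             # P0 sends a message
p1.receive(stamp_send)             # P1 receives it
print(p0.clock, p1.clock)          # [1, 0] and [1, 2]
print(happened_before(stamp_send, p1.clock))   # True: the send precedes the receive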
[ { "math_id": 0, "text": "VC_i" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "VC_{i}[i] \\leftarrow VC_{i}[i] + 1" }, { "math_id": 3, "text": "P_i" }, { "math_id": 4, "text": "(m, VC_{j})" }, { "math_id": 5, "text": "P_j" }, { "math_id": 6, "text": "VC_{i}[i]\\leftarrow VC_{i}[i]+1" }, { "math_id": 7, "text": "VC_{i}[k]\\leftarrow \\max(VC_{i}[k], VC_{j}[k]), \\forall k" }, { "math_id": 8, "text": "VC(x)" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "VC(x)_z" }, { "math_id": 11, "text": "z" }, { "math_id": 12, "text": "VC(x) < VC(y) \\iff \\forall z [VC(x)_z \\le VC(y)_z] \\land \\exists z' [ VC(x)_{z'} < VC(y)_{z'} ]" }, { "math_id": 13, "text": "VC(y)" }, { "math_id": 14, "text": "VC(y)_z" }, { "math_id": 15, "text": "VC(x)_{z'} < VC(y)_{z'}" }, { "math_id": 16, "text": "x \\to y\\;" }, { "math_id": 17, "text": "y" }, { "math_id": 18, "text": "VC(x) < VC(y)" }, { "math_id": 19, "text": "VC(a) < VC(b)" }, { "math_id": 20, "text": "(VC(b) < VC(a))" }, { "math_id": 21, "text": "VC(b) < VC(c)" }, { "math_id": 22, "text": "VC(a) < VC(c)" }, { "math_id": 23, "text": "a \\to b\\;" }, { "math_id": 24, "text": "b \\to c\\;" }, { "math_id": 25, "text": "a \\to c\\;" }, { "math_id": 26, "text": "RT(x)" }, { "math_id": 27, "text": "RT(a) < RT(b)" }, { "math_id": 28, "text": "C(x)" }, { "math_id": 29, "text": "C(a) < C(b)" } ]
https://en.wikipedia.org/wiki?curid=1376702
1376785
Stock duration
Weighted measure of times until stock dividends are received The duration of a stock is the average of the times until its cash flows are received, weighted by their present values. The most popular model of duration uses dividends as the cash flows. In vernacular, the duration of a stock is how long we need to receive dividends to be repaid the purchase price of the stock. If a stock doesn't pay dividends, other methods using distributable cash flows, may be utilized. The duration of an equity is a noisy analogue of the Macaulay duration of a bond, due to the variability and unpredictability of dividend payments. The duration of a stock or the stock market is implied rather than deterministic. Duration of the U.S. stock market as a whole, and most individual stocks within it, is many years to a few decades. A nominal value, assumed in many analyses, would be 20-30 years, analogous to long term bonds. Higher price/earnings and other multiples imply longer duration. Duration is a measure of the price sensitivity of a stock to changes in the long term interest rate, i.e., the longer the duration, the more sensitive the stock is to interest rates. In U.S. stock markets, an SEC rule adoption in 1982 (rule 10b-18) that allowed discretionary stock buybacks has distorted the calculation of duration based on dividends since at least the early 1990s. The rule change had no ascertainable impact on duration, but duration now needs to account for all cash distributions including buybacks. Duration in the discounted cash flow model. Present value of a stock The "present value" or value, i.e., the hypothetical fair price of a stock according to the Dividend Discount Model, is the sum of the present values of all its dividends in perpetuity. The simplest version of the model assumes constant growth, constant discount rate and constant dividend yield in perpetuity. Then the present value of the stock is where P is the price of the stock D is the initial dividend amount r is the periodic discount rate (either annual or quarterly) g is the dividend growth rate (either annual or quarterly corresponding to r) The requisite assumptions are hardly ever true in perpetuity, so the computed value is highly hypothetical. In the Discounted Cash Flow Model (DCFM) of security analysis, the value of a security is the present value of all its future cash flows including interest or dividends and the implied cash flow of the residual value of the security itself, if any. A special case of the DCFM, based on a stock's dividend, is called the Dividend Discount Model. Under that model, the value of a stock depends on how long we expect to receive dividends, their cash amounts, spacing (usually monthly, quarterly or semiannually), and a hypothesized long term discount rate that incorporates inflation in the currency and risk on the firm's payouts. The duration of the stock is how long we need to receive dividends for the present value of the dividends plus the residual value of the stock to total to the price paid. Conceptually, it corresponds to the duration of a bond but the duration of a bond is deterministic and that of a stock is not. It is not necessary for the dividends to be reinvested – that's a separate risk, reinvestment risk, and does not affect the risks and therefore the value of the stock. 
If a stock does not pay a dividend or pays a very low dividend, analysts may instead use a firm's free cash flow, taking into account any necessary capital expenditures, to approximate what distributable cash could be available to shareholders. Low interest rates lengthen duration, because distant cash flows retain relatively more present value; high interest rates shorten duration, because deeply discounted cash flows in the far future contribute relatively little to present value. The duration of the U.S. stock market, represented by the S&amp;P 500 for example (or another broad index), as well as of most individual stocks, is many years to several decades. Generally, higher price/earnings and other equity multiples imply longer duration and greater risk that the implied cash flows may not arrive as expected. Duration. The first approximation, in years, to the duration of a stock is the ratio of the two terms, stock price divided by the annual dividend amount. Since the present value of future dividends gets a bit less with each passing year (or even quarter or month), the duration is a bit longer than that approximation. But the duration of a stock, unlike that of a bond, isn't deterministic. The stock price and dividend are taken directly from the market, and they're tangible. Everything else is projected into the future: interest rates, growth, volatility, idiosyncratic risks, and dividend amounts. For European stocks, dividends aren't fixed, but paid as a proportion of profits, so even the base amounts are projections. Historically, before the 1990s, the average dividend yield on U.S. stocks had been a little less than 4%, so the first approximation to duration has been a little more than 25 years. The projected duration, taking into account changes in the present value of future dividends, has been about 33% longer, which gives a duration in the low 30s (years). Traditionally, analysts have cited the duration of the U.S. market as 20-30 years. Since the last recession in 2008-09, multiples have become inflated and dividend yields have dropped, so the current implied duration of stocks according to the Dividend Discount Model (DDM) has risen to at least 80 years (Dec. 2021). However, the implied duration from other means isn't nearly that long. A one-stage mathematical model using current growth, etc., is usually not sustainable, i.e. those conditions aren't expected to obtain for possibly decades. Therefore, most analysts use a 2-stage or 3-stage model to assess present value and duration of stocks. It is improbable that stock duration can reasonably exceed a person's working and investing career of 45-50 years. Since duration and present value (or just value) of a stock are terms in the same equation, which can be solved for one by making assumptions about the other, an excessively long duration can provide a check on over-enthusiastic stock valuations. Example. Suppose a stock costing $100 pays a 4% dividend, grows at a terminal rate of 6.5% and has a discount rate of 7.9%. The price/dividend first estimate of 25 years is easily calculated. If we assume an additional 33% duration to account for the discounted value of future dividend payments, that yields a duration of 33.3 years. Present value of the dividend payment in year one is $4, year two $4*1.065*.921=$3.92, year three $3.85, etc. There is an infinite series, such that each year's dividend payment has a present value of .9809 of the previous year's payment, starting with $4. The present value of the stock in perpetuity (i.e. 
the sum of present values of all dividend payments) is $209.04. To recover the price paid of $100 must take some time considerably less than till the end of time. That time is between 33 and 34 years: the present value of dividends paid through the 34th year (but not the 33rd) will exceed $100. That is very close to the rule of thumb estimate above. However, what may be gained in mathematical precision is lost by the compounding of uncertainties, particularly about growth, over the term of 34 years: they make any numbers we may calculate with conjectural. It may be more appropriate to derive an empirical estimate of duration, and encapsulate it in a rule of thumb that's reasonable most of the time. Price sensitivity versus duration. The price sensitivity of a stock versus duration, often called modified duration, is the percentage change in price in response to a 1% change in the long-term return that the stock is priced to deliver. The modified duration is duration divided by (1 + growth rate). There is some ambiguity in the literature when referring to duration; much of the time modified duration is referred to simply as "duration", and they have similar values, so much confusion results. Convexity. The modified duration formula assumes a linear relationship between percent change in return and percent change in price; but because returns compound, it overestimates the actual change in price. This difference is called "convexity".
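A rough sketch of these calculations in Python is given below (function names are illustrative). It is only indicative: the exact payback year depends on conventions the example above leaves open, such as whether dividends arrive at the start or end of each year and when growth begins, so the figures it prints need not match the article's numbers exactly.

def ddm_value(d, r, g):
    """Gordon-growth present value of the dividend stream, P = D/(r - g)."""
    return d / (r - g)

def first_approximation_duration(price, d):
    """Price divided by the annual dividend: the first estimate of duration."""
    return price / d

def payback_years(price, d, r, g, max_years=500):
    """Years of discounted dividends needed to recover the purchase price,
    assuming the first dividend d is paid at the end of year one and grows at g."""
    total_pv, year = 0.0, 0
    while total_pv < price and year < max_years:
        year += 1
        total_pv += d * (1 + g) ** (year - 1) / (1 + r) ** year
    return year

price, d, r, g = 100.0, 4.0, 0.079, 0.065
print(ddm_value(d, r, g))                      # value of the perpetuity under these inputs
print(first_approximation_duration(price, d))  # 25.0 years
print(payback_years(price, d, r, g))           # duration estimate under this convention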
[ { "math_id": 0, "text": "P=\\frac{D}{(r-g)}" } ]
https://en.wikipedia.org/wiki?curid=1376785
13768214
K-edge
In X-ray absorption spectroscopy, the K-edge is a sudden increase in x-ray absorption occurring when the energy of the X-rays is just above the binding energy of the innermost electron shell of the atoms interacting with the photons. The term is based on X-ray notation, where the innermost electron shell is known as the K-shell. Physically, this sudden increase in attenuation is caused by the photoelectric absorption of the photons. For this interaction to occur, the photons must have more energy than the binding energy of the K-shell electrons (K-edge). A photon having an energy just above the binding energy of the electron is therefore more likely to be absorbed than a photon having an energy just below this binding energy or significantly above it. The energies near the K-edge are also objects of study, and provide other information. Use. The two radiocontrast agents iodine and barium have ideal K-shell binding energies for absorption of X-rays: 33.2 keV and 37.4 keV respectively, which is close to the mean energy of most diagnostic X-ray beams. Similar sudden increases in attenuation may also be found for other inner shells than the K shell; the general term for the phenomenon is absorption edge. Dual-energy computed tomography techniques take advantage of the increased attenuation of iodinated radiocontrast at lower tube energies to heighten the degree of contrast between iodinated radiocontrast and other high attenuation biological material present in the body such as blood and hemorrhage. Metal K-edge. Metal K-edge spectroscopy is a spectroscopic technique used to study the electronic structures of transition metal atoms and complexes. This method measures X-ray absorption caused by the excitation of a 1s electron to valence bound states localized on the metal, which creates a characteristic absorption peak called the K-edge. The K-edge can be divided into the pre-edge region (comprising the pre-edge and rising edge transitions) and the near-edge region (comprising the intense edge transition and ~150 eV above it). Pre-edge. The K-edge of an open shell transition metal ion displays a weak pre-edge 1s-to-valence-metal-d transition at a lower energy than the intense edge jump. This dipole-forbidden transition gains intensity through a quadrupole mechanism and/or through 4p mixing into the final state. The pre-edge contains information about ligand fields and oxidation state. Higher oxidation of the metal leads to greater stabilization of the 1s orbital with respect to the metal d orbitals, resulting in higher energy of the pre-edge. Bonding interactions with ligands also cause changes in the metal's effective nuclear charge (Zeff), leading to changes in the energy of the pre-edge. The intensity under the pre-edge transition depends on the geometry around the absorbing metal and can be correlated to the structural symmetry in the molecule. Molecules with centrosymmetry have low pre-edge intensity, whereas the intensity increases as the molecule moves away from centrosymmetry. This change is due to the higher mixing of the 4p with the 3d orbitals as the molecule loses centrosymmetry. Rising-edge. A rising-edge follows the pre-edge, and may consist of several overlapping transitions that are hard to resolve. The energy position of the rising-edge contains information about the oxidation state of the metal. In the case of copper complexes, the rising-edge consists of intense transitions, which provide information about bonding. 
For CuI species, this transition is a distinct shoulder and arises from intense electric-dipole-allowed 1s→4p transitions. The normalized intensity and energy of the rising-edge transitions in these CuI complexes can be used to distinguish between two-, three- and four-coordinate CuI sites. In the case of higher-oxidation-state copper atoms, the 1s→4p transition lies higher in energy, mixed in with the near-edge region. However, an intense transition in the rising-edge region is observed for CuIII and some CuII complexes from a formally forbidden two electron 1s→4p+shakedown transition. This “shakedown” process arises from a 1s→4p transition that leads to relaxation of the excited state, followed by a ligand-to-metal charge transfer to the excited state. This rising-edge transition can be fitted to a valence bond configuration (VBCI) model to obtain the composition of the ground state wavefunction and information on ground state covalency. The VBCI model describes the ground and excited state as a linear combination of the metal-based d-state and the ligand-based charge transfer state. The higher the contribution of the charge transfer state to the ground state, the higher is the ground state covalency indicating stronger metal-ligand bonding. Near-edge. The near-edge region is difficult to quantitatively analyze because it describes transitions to continuum levels that are still under the influence of the core potential. This region is analogous to the EXAFS region and contains structural information. Extraction of metrical parameters from the edge region can be obtained by using the multiple-scattering code implemented in the MXAN software. Ligand K-edge. Ligand K-edge spectroscopy is a spectroscopic technique used to study the electronic structures of metal-ligand complexes. This method measures X-ray absorption caused by the excitation of ligand 1s electrons to unfilled p orbitals (principal quantum number formula_0) and continuum states, which creates a characteristic absorption feature called the K-edge. Pre-edges. Transitions at energies lower than the edge can occur, provided they lead to orbitals with some ligand p character; these features are called pre-edges. Pre-edge intensities ("D"0) are related to the amount of ligand (L) character in the unfilled orbital: formula_1 where formula_2 is the wavefunction of the unfilled orbital, r is the transition dipole operator, and formula_3 is the "covalency" or ligand character in the orbital. Since formula_4, the above expression relating intensity and quantum transition operators can be simplified to use experimental values: formula_5 where "n" is the number of absorbing ligand atoms, "h" is the number of holes, and "Is" is the transition dipole integral which can be determined experimentally. Therefore, by measuring the intensity of pre-edges, it is possible to experimentally determine the amount of ligand character in a molecular orbital. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
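Since the pre-edge relation above is simply D0 = α²·h·Is/(3n), extracting the covalency from a measured pre-edge intensity is a one-line rearrangement. The snippet below is a minimal sketch; the numerical inputs are invented for illustration and do not come from any real data set.

def ligand_character(d0, n_absorbers, holes, i_s):
    """Ligand character alpha^2 per hole from the pre-edge intensity:
    D0 = alpha^2 * h * I_s / (3 * n)  =>  alpha^2 = 3 * n * D0 / (h * I_s)."""
    return 3.0 * n_absorbers * d0 / (holes * i_s)

# Hypothetical example: one absorbing ligand atom, one hole, assumed transition dipole integral
alpha_sq = ligand_character(d0=0.83, n_absorbers=1, holes=1, i_s=6.54)
print(f"ligand character per hole: {alpha_sq:.2f}")   # fraction of ligand p character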
[ { "math_id": 0, "text": "n \\leq 4" }, { "math_id": 1, "text": "D_0(L \\ 1s \\rightarrow \\psi^*) = const \\ \\vert \\langle L \\ 1s \\vert \\mathbf{r} \\vert \\psi^* \\rangle \\vert^2 \n= \\alpha^2 \\ const \\ \\vert \\langle L \\ 1s \\vert \\mathbf{r} \\vert L \\ np \\rangle \\vert^2 " }, { "math_id": 2, "text": "\\psi^*" }, { "math_id": 3, "text": "\\alpha^2" }, { "math_id": 4, "text": "\\psi^* = \\sqrt{1-\\alpha^2} \\vert M_d \\rangle - \\alpha \\vert L_{np} \\rangle " }, { "math_id": 5, "text": " D_0 = \\frac{\\alpha^2 h}{3n}I_s" } ]
https://en.wikipedia.org/wiki?curid=13768214
1376839
Leverage (finance)
The use of borrowed funds in the purchase of an asset In finance, leverage, also known as gearing, is any technique involving borrowing funds to buy an investment. Financial leverage is named after a lever in physics, which amplifies a small input force into a greater output force, because successful leverage amplifies the smaller amounts of money needed for borrowing into large amounts of profit. However, the technique also involves the high risk of not being able to pay back a large loan. Normally, a lender will set a limit on how much risk it is prepared to take and will set a limit on how much leverage it will permit, and would require the acquired asset to be provided as collateral security for the loan. Leveraging enables gains to be multiplied. On the other hand, losses are also multiplied, and there is a risk that leveraging will result in a loss if financing costs exceed the income from the asset, or the value of the asset falls. Leverage can arise in a number of situations. Securities like options and futures are effectively leveraged bets between parties where the principal is implicitly borrowed and lent at interest rates of very short treasury bills. Equity owners of businesses leverage their investment by having the business borrow a portion of its needed financing. The more it borrows, the less equity it needs, so any profits or losses are shared among a smaller base and are proportionately larger as a result. Businesses leverage their operations by using fixed cost inputs when revenues are expected to be variable. An increase in revenue will result in a larger increase in operating profit. Hedge funds may leverage their assets by financing a portion of their portfolios with the cash proceeds from the short sale of other positions. History. Before the 1980s, quantitative limits on bank leverage were rare. Banks in most countries had a reserve requirement, a fraction of deposits that was required to be held in liquid form, generally precious metals or government notes or deposits. This does not limit leverage. A capital requirement is a fraction of assets that is required to be funded in the form of equity or equity-like securities. Although these two are often confused, they are in fact opposite. A reserve requirement is a fraction of certain liabilities (from the right hand side of the balance sheet) that must be held as a certain kind of asset (from the left hand side of the balance sheet). A capital requirement is a fraction of assets (from the left hand side of the balance sheet) that must be held as a certain kind of liability or equity (from the right hand side of the balance sheet). Before the 1980s, regulators typically imposed judgmental capital requirements, a bank was supposed to be "adequately capitalized," but these were not objective rules. National regulators began imposing formal capital requirements in the 1980s, and by 1988 most large multinational banks were held to the Basel I standard. Basel I categorized assets into five risk buckets, and mandated minimum capital requirements for each. This limits accounting leverage. If a bank is required to hold 8% capital against an asset, that is the same as an accounting leverage limit of 1/.08 or 12.5 to 1. While Basel I is generally credited with improving bank risk management it suffered from two main defects. 
It did not require capital for all off-balance sheet risks (there was a clumsy provision for derivatives, but not for certain other off-balance sheet exposures) and it encouraged banks to pick the riskiest assets in each bucket (for example, the capital requirement was the same for all corporate loans, whether to solid companies or ones near bankruptcy, and the requirement for government loans was zero). Work on Basel II began in the early 1990s and it was implemented in stages beginning in 2005. Basel II attempted to limit economic leverage rather than accounting leverage. It required advanced banks to estimate the risk of their positions and allocate capital accordingly. While this is much more rational in theory, it is more subject to estimation error, both honest and opportunistic. The poor performance of many banks during the financial crisis of 2007–2009 led to calls to reimpose leverage limits, by which most people meant accounting leverage limits, if they understood the distinction at all. However, in view of the problems with Basel I, it seems likely that some hybrid of accounting and notional leverage will be used, and the leverage limits will be imposed in addition to, not instead of, Basel II economic leverage limits. Financial crisis of 2007–2008. The financial crisis of 2007–2008, like many previous financial crises, was blamed in part on excessive leverage. Consumers in the United States and many other developed countries had high levels of debt relative to their wages and the value of collateral assets. When home prices fell, and debt interest rates reset higher, and businesses laid off employees, borrowers could no longer afford debt payments, and lenders could not recover their principal by selling collateral. Financial institutions were highly levered. Lehman Brothers, for example, in its last annual financial statements, showed accounting leverage of 31.4 times ($691 billion in assets divided by $22 billion in stockholders' equity). Bankruptcy examiner Anton R. Valukas determined that the true accounting leverage was higher: it had been understated due to dubious accounting treatments including the so-called repo 105 (allowed by Ernst &amp; Young). Banks' notional leverage was more than twice as high, due to off-balance sheet transactions. At the end of 2007, Lehman had $738 billion of notional derivatives in addition to the assets above, plus significant off-balance sheet exposures to special purpose entities, structured investment vehicles and conduits, plus various lending commitments, contractual payments and contingent obligations. On the other hand, almost half of Lehman's balance sheet consisted of closely offsetting positions and very-low-risk assets, such as regulatory deposits. The company emphasized "net leverage", which excluded these assets. On that basis, Lehman held $373 billion of "net assets" and a "net leverage ratio" of 16.1. Risk. While leverage magnifies profits when the returns from the asset more than offset the costs of borrowing, leverage may also magnify losses. A corporation that borrows too much money might face bankruptcy or default during a business downturn, while a less-leveraged corporation might survive. An investor who buys a stock on 50% margin will lose 40% if the stock declines 20%; in such a case the investor may also be unable to repay the significant total loss incurred. Risk may depend on the volatility in value of collateral assets. Brokers may demand additional funds when the value of securities held declines. 
Banks may decline to renew mortgages when the value of real estate declines below the debt's principal. Even if cash flows and profits are sufficient to maintain the ongoing borrowing costs, loans may be called in. This may happen exactly at a time when there is little market liquidity, i.e. a paucity of buyers, and sales by others are depressing prices. It means that as market price falls, leverage goes up in relation to the revised equity value, multiplying losses as prices continue to go down. This can lead to rapid ruin, for even if the underlying asset value decline is mild or temporary the debt-financing may be only short-term, and thus due for immediate repayment. The risk can be mitigated by negotiating the terms of leverage, by maintaining unused capacity for additional borrowing, and by leveraging only liquid assets which may rapidly be converted to cash. There is an implicit assumption in that account, however, which is that the underlying leveraged asset is the same as the unleveraged one. If a company borrows money to modernize, add to its product line or expand internationally, the extra trading profit from the additional diversification might more than offset the additional risk from leverage. Or if an investor uses a fraction of his or her portfolio to margin stock index futures (high risk) and puts the rest in a low-risk money-market fund, he or she might have the same volatility and expected return as an investor in an unlevered low-risk equity-index fund. Or if both long and short positions are held by a pairs-trading stock strategy, the matching and off-setting economic leverage may lower overall risk levels. So while adding leverage to a given asset always adds risk, it is not the case that a levered company or investment is always riskier than an unlevered one. In fact, many highly levered hedge funds have less return volatility than unlevered bond funds, and normally heavily indebted low-risk public utilities are usually less risky stocks than unlevered high-risk technology companies. Definitions. The term leverage is used differently in investments and corporate finance, and has multiple definitions in each field. Accounting leverage. Accounting leverage is total assets divided by total assets minus total liabilities. Banking. Under Basel III, banks are expected to maintain a leverage ratio in excess of 3%. The ratio is defined as formula_0. Here the exposure is defined broadly and includes off-balance sheet items and derivative "add-ons", whereas Tier 1 capital is limited to the bank's "core capital". Notional leverage. Notional leverage is total notional amount of assets plus total notional amount of liabilities divided by equity. Economic leverage. Economic leverage is volatility of equity divided by volatility of an unlevered investment in the same assets. For example, assume a party buys $100 of a 10-year fixed-rate treasury bond and enters into a fixed-for-floating 10-year interest rate swap to convert the payments to floating rate. The derivative is off-balance sheet, so it is ignored for accounting leverage. Accounting leverage is therefore 1 to 1. The notional amount of the swap does count for notional leverage, so notional leverage is 2 to 1. The swap removes most of the economic risk of the treasury bond, so economic leverage is near zero. formula_1 formula_2 formula_3 Corporate finance. There are several ways to define operating leverage, the most common
is: formula_4 Financial leverage is usually defined as: formula_5 For outsiders, it is hard to calculate operating leverage as fixed and variable costs are usually not disclosed. In an attempt to estimate operating leverage, one can use the percentage change in operating income for a one-percent change in revenue. The product of the two is called total leverage, and estimates the percentage change in net income for a one-percent change in revenue. There are several variants of each of these definitions, and the financial statements are usually adjusted before the values are computed. Moreover, there are industry-specific conventions that differ somewhat from the treatment above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
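The definitions in this section are simple ratios, so they can be written down directly. The following Python sketch (function names are mine, not standard terminology) encodes them and reproduces the article's Lehman Brothers accounting-leverage figure as a usage example.

def accounting_leverage(total_assets, total_liabilities):
    """Total assets divided by total assets minus total liabilities (i.e. by equity)."""
    return total_assets / (total_assets - total_liabilities)

def notional_leverage(notional_assets, notional_liabilities, equity):
    return (notional_assets + notional_liabilities) / equity

def degree_of_operating_leverage(ebit, fixed_costs):
    return (ebit + fixed_costs) / ebit

def degree_of_financial_leverage(ebit, total_interest_expense):
    return ebit / (ebit - total_interest_expense)

def degree_of_combined_leverage(ebit, fixed_costs, total_interest_expense):
    return (ebit + fixed_costs) / (ebit - total_interest_expense)

def operating_leverage(revenue, variable_cost, fixed_cost):
    """(Revenue - VC) / (Revenue - VC - FC), the corporate-finance definition."""
    return (revenue - variable_cost) / (revenue - variable_cost - fixed_cost)

def financial_leverage(total_debt, shareholders_equity):
    return total_debt / shareholders_equity

# Lehman Brothers, last annual statements: $691bn assets, $22bn equity
print(round(accounting_leverage(691, 691 - 22), 1))   # 31.4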
[ { "math_id": 0, "text": " \\frac{\\mbox{Tier 1 Capital}}{\\mbox{Total exposure}}" }, { "math_id": 1, "text": "\\text{Degree of Operating Leverage} = \\frac{\\mathrm{EBIT\\;+\\;Fixed\\;Costs}}{\\mathrm{EBIT}}" }, { "math_id": 2, "text": "\\text{Degree of Financial Leverage} = \\frac{\\mathrm{EBIT}}{\\mathrm{EBIT} -\n \\text{Total Interest Expense}}" }, { "math_id": 3, "text": "\\text{Degree of Combined Leverage} = \\text{DOL} \\times \\text{DFL} = \\frac{\\mathrm{EBIT} + \\text{Fixed Costs}}{\\mathrm{EBIT} - \\text{Total Interest Expense}}" }, { "math_id": 4, "text": "\n\\begin{align}\n\\text{Operating leverage} & = \\frac{\\text{Revenue} - \\text{Variable Cost}}{\\text{Revenue} - \\text{Variable Cost} - \\text{Fixed Cost}} = \\frac{\\text{Revenue} - \\text{Variable Cost}}{\\text{Operating Income}}\n\\end{align}\n" }, { "math_id": 5, "text": "\\text{Financial leverage}= \\frac{\\text{Total Debt}}{\\text{Shareholders' Equity}}" } ]
https://en.wikipedia.org/wiki?curid=1376839
1377178
Scapegoat tree
Type of balanced binary search tree In computer science, a scapegoat tree is a self-balancing binary search tree, invented by Arne Andersson in 1989 and again by Igal Galperin and Ronald L. Rivest in 1993. It provides worst-case formula_2 lookup time (with formula_3 as the number of entries) and formula_1 amortized insertion and deletion time. Unlike most other self-balancing binary search trees which also provide worst case formula_1 lookup time, scapegoat trees have no additional per-node memory overhead compared to a regular binary search tree: besides key and value, a node stores only two pointers to the child nodes. This makes scapegoat trees easier to implement and, due to data structure alignment, can reduce node overhead by up to one-third. Instead of the small incremental rebalancing operations used by most balanced tree algorithms, scapegoat trees rarely but expensively choose a "scapegoat" and completely rebuild the subtree rooted at the scapegoat into a complete binary tree. Thus, scapegoat trees have formula_0 worst-case update performance. Theory. A binary search tree is said to be weight-balanced if half the nodes are on the left of the root, and half on the right. An α-weight-balanced node is defined as meeting a relaxed weight balance criterion: size(left) ≤ α*size(node) size(right) ≤ α*size(node) Where size can be defined recursively as: function size(node) is if node = nil then return 0 else return size(node-&gt;left) + size(node-&gt;right) + 1 end if end function Even a degenerate tree (linked list) satisfies this condition if α=1, whereas an α=0.5 would only match almost complete binary trees. A binary search tree that is α-weight-balanced must also be α-height-balanced, that is height(tree) ≤ floor(log1/α(size(tree))) By contraposition, a tree that is not α-height-balanced is not α-weight-balanced. Scapegoat trees are not guaranteed to keep α-weight-balance at all times, but are always loosely α-height-balanced in that height(scapegoat tree) ≤ floor(log1/α(size(tree))) + 1. Violations of this height balance condition can be detected at insertion time, and imply that a violation of the weight balance condition must exist. This makes scapegoat trees similar to red–black trees in that they both have restrictions on their height. They differ greatly though in their implementations of determining where the rotations (or in the case of scapegoat trees, rebalances) take place. Whereas red–black trees store additional 'color' information in each node to determine the location, scapegoat trees find a scapegoat which isn't α-weight-balanced to perform the rebalance operation on. This is loosely similar to AVL trees, in that the actual rotations depend on 'balances' of nodes, but the means of determining the balance differs greatly. Since AVL trees check the balance value on every insertion/deletion, it is typically stored in each node; scapegoat trees are able to calculate it only as needed, which is only when a scapegoat needs to be found. Unlike most other self-balancing search trees, scapegoat trees are entirely flexible as to their balancing. They support any α such that 0.5 &lt; α &lt; 1. A high α value results in fewer balances, making insertion quicker but lookups and deletions slower, and vice versa for a low α. Therefore in practical applications, an α can be chosen depending on how frequently these actions should be performed. Operations. Lookup. Lookup is not modified from a standard binary search tree, and has a worst-case time of formula_1. 
This is in contrast to splay trees which have a worst-case time of formula_0. The reduced node memory overhead compared to other self-balancing binary search trees can further improve locality of reference and caching. Insertion. Insertion is implemented with the same basic ideas as an unbalanced binary search tree, however with a few significant changes. When finding the insertion point, the depth of the new node must also be recorded. This is implemented via a simple counter that gets incremented during each iteration of the lookup, effectively counting the number of edges between the root and the inserted node. If this node violates the α-height-balance property (defined above), a rebalance is required. To rebalance, an entire subtree rooted at a scapegoat undergoes a balancing operation. The scapegoat is defined as being an ancestor of the inserted node which isn't α-weight-balanced. There will always be at least one such ancestor. Rebalancing any of them will restore the α-height-balanced property. One way of finding a scapegoat, is to climb from the new node back up to the root and select the first node that isn't α-weight-balanced. Climbing back up to the root requires formula_1 storage space, usually allocated on the stack, or parent pointers. This can actually be avoided by pointing each child at its parent as you go down, and repairing on the walk back up. To determine whether a potential node is a viable scapegoat, we need to check its α-weight-balanced property. To do this we can go back to the definition: size(left) ≤ α*size(node) size(right) ≤ α*size(node) However a large optimisation can be made by realising that we already know two of the three sizes, leaving only the third to be calculated. Consider the following example to demonstrate this. Assuming that we're climbing back up to the root: size(parent) = size(node) + size(sibling) + 1 But as: size(inserted node) = 1. The case is trivialized down to: size[x+1] = size[x] + size(sibling) + 1 Where x = this node, x + 1 = parent and size(sibling) is the only function call actually required. Once the scapegoat is found, the subtree rooted at the scapegoat is completely rebuilt to be perfectly balanced. This can be done in formula_0 time by traversing the nodes of the subtree to find their values in sorted order and recursively choosing the median as the root of the subtree. As rebalance operations take formula_0 time (dependent on the number of nodes of the subtree), insertion has a worst-case performance of formula_0 time. However, because these worst-case scenarios are spread out, insertion takes formula_1 amortized time. Sketch of proof for cost of insertion. Define the Imbalance of a node "v" to be the absolute value of the difference in size between its left node and right node minus 1, or 0, whichever is greater. In other words: formula_4 Immediately after rebuilding a subtree rooted at "v", I("v") = 0. Lemma: Immediately before rebuilding the subtree rooted at "v", formula_5 Proof of lemma: Let formula_7 be the root of a subtree immediately after rebuilding. formula_8. If there are formula_9 degenerate insertions (that is, where each inserted node increases the height by 1), then formula_10, formula_11 and formula_12. Since formula_13 before rebuilding, there were formula_14 insertions into the subtree rooted at formula_15 that did not result in rebuilding. Each of these insertions can be performed in formula_1 time. The final insertion that causes rebuilding costs formula_16. 
Using aggregate analysis it becomes clear that the amortized cost of an insertion is formula_1: formula_17 Deletion. Scapegoat trees are unusual in that deletion is easier than insertion. To enable deletion, scapegoat trees need to store an additional value with the tree data structure. This property, which we will call MaxNodeCount simply represents the highest achieved NodeCount. It is set to NodeCount whenever the entire tree is rebalanced, and after insertion is set to max(MaxNodeCount, NodeCount). To perform a deletion, we simply remove the node as you would in a simple binary search tree, but if NodeCount ≤ α*MaxNodeCount then we rebalance the entire tree about the root, remembering to set MaxNodeCount to NodeCount. This gives deletion a worst-case performance of formula_0 time, whereas the amortized time is formula_1. Sketch of proof for cost of deletion. Suppose the scapegoat tree has formula_3 elements and has just been rebuilt (in other words, it is a complete binary tree). At most formula_18 deletions can be performed before the tree must be rebuilt. Each of these deletions take formula_1 time (the amount of time to search for the element and flag it as deleted). The formula_19 deletion causes the tree to be rebuilt and takes formula_20 (or just formula_0) time. Using aggregate analysis it becomes clear that the amortized cost of a deletion is formula_1: formula_21 Etymology. The name Scapegoat tree "[...] is based on the common wisdom that, when something goes wrong, the first thing people tend to do is find someone to blame (the scapegoat)." In the Bible, a scapegoat is an animal that is ritually burdened with the sins of others, and then driven away. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
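A compact Python sketch of the insertion path just described is given below (a toy implementation, not a reference one): it records the depth of each insertion, detects a violation of the loose α-height bound, walks back up to the first ancestor that is not α-weight-balanced, and rebuilds that subtree as a perfectly balanced one. For brevity it recomputes subtree sizes naively rather than using the sibling-size optimisation discussed above, and it omits deletion.

import math

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class ScapegoatTree:
    def __init__(self, alpha=0.667):
        self.alpha, self.root, self.size = alpha, None, 0

    def _size(self, node):
        return 0 if node is None else 1 + self._size(node.left) + self._size(node.right)

    def _flatten(self, node, out):
        if node is not None:                     # in-order traversal -> sorted node list
            self._flatten(node.left, out)
            out.append(node)
            self._flatten(node.right, out)

    def _build_balanced(self, nodes, lo, hi):
        if lo >= hi:
            return None
        mid = (lo + hi) // 2                     # median becomes the subtree root
        root = nodes[mid]
        root.left = self._build_balanced(nodes, lo, mid)
        root.right = self._build_balanced(nodes, mid + 1, hi)
        return root

    def insert(self, key):
        path, node = [], self.root               # ordinary BST descent, recording the path
        while node is not None:
            path.append(node)
            node = node.left if key < node.key else node.right
        new = Node(key)
        if not path:
            self.root = new
        elif key < path[-1].key:
            path[-1].left = new
        else:
            path[-1].right = new
        self.size += 1
        depth = len(path)
        if depth <= math.floor(math.log(self.size, 1 / self.alpha)):
            return                               # still loosely alpha-height-balanced
        # Find the scapegoat: the first ancestor that is not alpha-weight-balanced.
        child, child_size = new, 1
        for i in range(len(path) - 1, -1, -1):
            ancestor = path[i]
            ancestor_size = self._size(ancestor)
            if child_size > self.alpha * ancestor_size:
                nodes = []
                self._flatten(ancestor, nodes)
                rebuilt = self._build_balanced(nodes, 0, len(nodes))
                if i == 0:
                    self.root = rebuilt
                elif path[i - 1].left is ancestor:
                    path[i - 1].left = rebuilt
                else:
                    path[i - 1].right = rebuilt
                return
            child, child_size = ancestor, ancestor_size

tree = ScapegoatTree()
for k in range(1, 11):                           # ascending inserts trigger rebuilds
    tree.insert(k)
print(tree._size(tree.root), tree.root.key)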
[ { "math_id": 0, "text": "O(n)" }, { "math_id": 1, "text": "O(\\log n)" }, { "math_id": 2, "text": "{\\color{Blue}O(\\log n)}" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "I(v) = \\operatorname{max}(|\\operatorname{left}(v) - \\operatorname{right}(v)| - 1, 0) " }, { "math_id": 5, "text": "I(v) \\in \\Omega (|v|) " }, { "math_id": 6, "text": "\\Omega " }, { "math_id": 7, "text": "v_0" }, { "math_id": 8, "text": "h(v_0) = \\log(|v_0| + 1) " }, { "math_id": 9, "text": "\\Omega (|v_0|)" }, { "math_id": 10, "text": "I(v) \\in \\Omega (|v_0|) " }, { "math_id": 11, "text": "h(v) = h(v_0) + \\Omega (|v_0|) " }, { "math_id": 12, "text": "\\log(|v|) \\le \\log(|v_0| + 1) + 1 " }, { "math_id": 13, "text": "I(v) \\in \\Omega (|v|)" }, { "math_id": 14, "text": "\\Omega (|v|)" }, { "math_id": 15, "text": "v" }, { "math_id": 16, "text": "O(|v|)" }, { "math_id": 17, "text": "{\\Omega (|v|) O(\\log n) + O(|v|) \\over \\Omega (|v|)} = O(\\log n) " }, { "math_id": 18, "text": "n/2 - 1" }, { "math_id": 19, "text": "n/2" }, { "math_id": 20, "text": "O(\\log n) + O(n)" }, { "math_id": 21, "text": "\n{\\sum_{1}^{n/2} O(\\log n) + O(n) \\over n/2} = \n{{n \\over 2}O(\\log n) + O(n) \\over n/2} = \nO(\\log n) \\ \n" } ]
https://en.wikipedia.org/wiki?curid=1377178
1377241
Phragmén–Lindelöf principle
In complex analysis, the Phragmén–Lindelöf principle (or method), first formulated by Lars Edvard Phragmén (1863–1937) and Ernst Leonard Lindelöf (1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic function formula_0 (i.e, formula_1) on an unbounded domain formula_2 when an additional (usually mild) condition constraining the growth of formula_3 on formula_2 is given. It is a generalization of the maximum modulus principle, which is only applicable to bounded domains. Background. In the theory of complex functions, it is known that the modulus (absolute value) of a holomorphic (complex differentiable) function in the interior of a bounded region is bounded by its modulus on the boundary of the region. More precisely, if a non-constant function formula_4 is holomorphic in a bounded region formula_2 and continuous on its closure formula_5, then formula_6 for all formula_7. This is known as the "maximum modulus principle." (In fact, since formula_8 is compact and formula_3 is continuous, there actually exists some formula_9 such that formula_10.) The maximum modulus principle is generally used to conclude that a holomorphic function is bounded in a region after showing that it is bounded on its boundary. However, the maximum modulus principle cannot be applied to an unbounded region of the complex plane. As a concrete example, let us examine the behavior of the holomorphic function formula_11 in the unbounded strip formula_12. Although formula_13, so that formula_3 is bounded on boundary formula_14, formula_3 grows rapidly without bound when formula_15 along the positive real axis. The difficulty here stems from the extremely fast growth of formula_3 along the positive real axis. If the growth rate of formula_3 is guaranteed to not be "too fast," as specified by an appropriate growth condition, the "Phragmén–Lindelöf principle" can be applied to show that boundedness of formula_0 on the region's boundary implies that formula_0 is in fact bounded in the whole region, effectively extending the maximum modulus principle to unbounded regions. Outline of the technique. Suppose we are given a holomorphic function formula_0 and an unbounded region formula_16, and we want to show that formula_17 on formula_16. In a typical Phragmén–Lindelöf argument, we introduce a certain multiplicative factor formula_18 satisfying formula_19 to "subdue" the growth of formula_0. In particular, formula_18 is chosen such that (i): formula_20 is holomorphic for all formula_21 and formula_22 on the boundary formula_23 of an appropriate "bounded" subregion formula_24; and (ii): the asymptotic behavior of formula_20 allows us to establish that formula_22 for formula_25 (i.e., the unbounded part of formula_16 outside the closure of the bounded subregion). This allows us to apply the maximum modulus principle to first conclude that formula_22 on formula_26 and then extend the conclusion to all formula_27. Finally, we let formula_28 so that formula_29 for every formula_27 in order to conclude that formula_17 on formula_16. In the literature of complex analysis, there are many examples of the Phragmén–Lindelöf principle applied to unbounded regions of differing types, and also a version of this principle may be applied in a similar fashion to subharmonic and superharmonic functions. Example of application. 
To continue the example above, we can impose a growth condition on a holomorphic function formula_0 that prevents it from "blowing up" and allows the Phragmén–Lindelöf principle to be applied. To this end, we now include the condition that formula_30 for some real constants formula_31 and formula_32, for all formula_27. It can then be shown that formula_33 for all formula_34 implies that formula_33 in fact holds for all formula_27. Thus, we have the following proposition: Proposition. "Let" formula_35 "Let" formula_0 "be holomorphic on formula_16 and continuous on formula_36, and suppose there exist real constants formula_37 such that" formula_38 "for all formula_27 and formula_33 for all formula_39. Then formula_33 for all formula_27". Note that this conclusion fails when formula_40, precisely as the motivating counterexample in the previous section demonstrates. The proof of this statement employs a typical Phragmén–Lindelöf argument: Proof: "(Sketch)" We fix formula_41 and define for each formula_21 the auxiliary function formula_18 by formula_42. Moreover, for a given formula_43, we define formula_44 to be the open rectangle in the complex plane enclosed within the vertices formula_45. Now, fix formula_21 and consider the function formula_20. Because one can show that formula_46 for all formula_47, it follows that formula_48 for formula_34. Moreover, one can show for formula_49 that formula_50 uniformly as formula_51. This allows us to find an formula_52 such that formula_53 whenever formula_49 and formula_54. Now consider the bounded rectangular region formula_55. We have established that formula_48 for all formula_56. Hence, the maximum modulus principle implies that formula_48 for all formula_57. Since formula_53 also holds whenever formula_27 and formula_58, we have in fact shown that formula_53 holds for all formula_27. Finally, because formula_59 as formula_28, we conclude that formula_33 for all formula_27. Q.E.D. Phragmén–Lindelöf principle for a sector in the complex plane. A particularly useful statement proved using the Phragmén–Lindelöf principle bounds holomorphic functions on a sector of the complex plane if it is bounded on its boundary. This statement can be used to give a complex analytic proof of the Hardy's uncertainty principle, which states that a function and its Fourier transform cannot both decay faster than exponentially. Proposition. "Let formula_60 be a function that is holomorphic in a sector" formula_61 "of central angle formula_62, and continuous on its boundary. If" "for formula_34, and" "for all formula_27, where formula_64 and formula_65, then formula_63 holds also for all formula_27." Remarks. The condition (2) can be relaxed to with the same conclusion. Special cases. In practice the point 0 is often transformed into the point ∞ of the Riemann sphere. This gives a version of the principle that applies to strips, for example bounded by two lines of constant real part in the complex plane. This special case is sometimes known as Lindelöf's theorem. Carlson's theorem is an application of the principle to functions bounded on the imaginary axis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
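A small numerical illustration of the strip argument sketched above (plain Python, with arbitrarily chosen constants satisfying c < b < 1): it checks on a grid that the auxiliary factor h_ε has modulus at most 1 on the closed strip, and that even the largest growth permitted by the hypothesis, exp(A·exp(c·|Re z|)), is eventually overwhelmed by h_ε along the real axis (the comparison is done on logarithms to avoid floating-point overflow).

import cmath, math

A, c, b, eps = 1.0, 0.5, 0.75, 1e-3          # constants with c < b < 1

def h(z):
    """Auxiliary factor h_eps(z) = exp(-eps*(e^{bz} + e^{-bz}))."""
    return cmath.exp(-eps * (cmath.exp(b * z) + cmath.exp(-b * z)))

# |h_eps(z)| <= 1 on the closed strip |Im z| <= pi/2 (sampled on a grid):
grid = [complex(x / 10.0, y * math.pi / 40.0)
        for x in range(-200, 201) for y in range(-20, 21)]
print(max(abs(h(z)) for z in grid))          # slightly below 1

def log_bound_times_h(x):
    """log of exp(A*e^{c x}) * |h_eps(x)| along the positive real axis."""
    return A * math.exp(c * x) - 2.0 * eps * math.cosh(b * x)

for x in (5, 10, 20, 30, 40):
    print(x, log_bound_times_h(x))           # eventually hugely negative: f*h_eps -> 0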
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "|f(z)|<M\\ \\ (z\\in \\Omega)" }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "|f|" }, { "math_id": 4, "text": "f:\\mathbb{C}\\to\\mathbb{C}" }, { "math_id": 5, "text": "\\overline{\\Omega}=\\Omega\\cup\\partial \\Omega" }, { "math_id": 6, "text": "|f(z_0)|<\\sup_{z\\in \\partial \\Omega} |f(z)|" }, { "math_id": 7, "text": "z_0\\in \\Omega" }, { "math_id": 8, "text": "\\overline{\\Omega}" }, { "math_id": 9, "text": "w_0\\in\\partial \\Omega" }, { "math_id": 10, "text": "|f(w_0)|=\\sup_{z\\in \\Omega} |f(z)|" }, { "math_id": 11, "text": "f(z) = \\exp(\\exp(z))" }, { "math_id": 12, "text": "S = \\left\\{z:\\Im(z)\\in \\left(-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right)\\right\\}" }, { "math_id": 13, "text": "|f(x\\pm \\pi i/2)|=1" }, { "math_id": 14, "text": "\\partial S" }, { "math_id": 15, "text": "|z|\\to\\infty" }, { "math_id": 16, "text": "S" }, { "math_id": 17, "text": "|f|\\leq M" }, { "math_id": 18, "text": "h_\\epsilon" }, { "math_id": 19, "text": "\\lim_{\\epsilon \\to 0} h_\\epsilon= 1" }, { "math_id": 20, "text": "fh_\\epsilon" }, { "math_id": 21, "text": "\\epsilon>0" }, { "math_id": 22, "text": "|fh_\\epsilon|\\leq M" }, { "math_id": 23, "text": "\\partial S_{\\mathrm{bdd}}" }, { "math_id": 24, "text": "S_{\\mathrm{bdd}}\\subset S" }, { "math_id": 25, "text": "z\\in S\\setminus \\overline{S_{\\mathrm{bdd}}}" }, { "math_id": 26, "text": "\\overline{S_{\\mathrm{bdd}}}" }, { "math_id": 27, "text": "z\\in S" }, { "math_id": 28, "text": "\\epsilon\\to 0" }, { "math_id": 29, "text": "f(z)h_\\epsilon(z)\\to f(z)" }, { "math_id": 30, "text": "|f(z)|<\\exp\\left(A\\exp(c \\cdot \\left|\\Re(z)\\right|)\\right)" }, { "math_id": 31, "text": "c<1" }, { "math_id": 32, "text": "A<\\infty" }, { "math_id": 33, "text": "|f(z)|\\leq 1" }, { "math_id": 34, "text": "z\\in\\partial S" }, { "math_id": 35, "text": "S=\\left\\{z:\\Im(z)\\in \\left(-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right)\\right\\},\\quad \\overline{S}=\\left\\{z:\\Im(z)\\in \\left[-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right]\\right\\}." 
}, { "math_id": 36, "text": "\\overline{S}" }, { "math_id": 37, "text": "c<1,\\ A<\\infty" }, { "math_id": 38, "text": "|f(z)|<\\exp\\bigl(A\\exp(c\\cdot|\\Re(z)|)\\bigr)" }, { "math_id": 39, "text": "z\\in\\overline{S}\\setminus S=\\partial S" }, { "math_id": 40, "text": "c=1" }, { "math_id": 41, "text": "b\\in(c,1)" }, { "math_id": 42, "text": "h_\\epsilon(z)=e^{-\\epsilon(e^{b z}+e^{-b z})}" }, { "math_id": 43, "text": "a>0" }, { "math_id": 44, "text": "S_{a}" }, { "math_id": 45, "text": "\\{a\\pm i\\pi/2,-a\\pm i\\pi/2\\}" }, { "math_id": 46, "text": "|h_\\epsilon(z)|\\leq1" }, { "math_id": 47, "text": "z\\in \\overline{S}" }, { "math_id": 48, "text": "|f(z)h_\\epsilon(z)|\\leq 1" }, { "math_id": 49, "text": "z\\in\\overline{S}" }, { "math_id": 50, "text": "|f(z)h_\\epsilon(z)|\\to 0" }, { "math_id": 51, "text": "|\\Re(z)|\\to\\infty" }, { "math_id": 52, "text": "x_0" }, { "math_id": 53, "text": "|f(z)h_\\epsilon(z)|\\leq1" }, { "math_id": 54, "text": "|\\Re(z)|\\geq x_0" }, { "math_id": 55, "text": "S_{x_0}" }, { "math_id": 56, "text": "z\\in\\partial S_{x_0}" }, { "math_id": 57, "text": "z\\in \\overline{S_{x_0}}" }, { "math_id": 58, "text": "|\\Re(z)|> x_0" }, { "math_id": 59, "text": "fh_\\epsilon\\to f" }, { "math_id": 60, "text": "F" }, { "math_id": 61, "text": " S = \\left\\{ z \\, \\big| \\, \\alpha < \\arg z < \\beta \\right\\} " }, { "math_id": 62, "text": "\\beta-\\alpha=\\pi/\\lambda" }, { "math_id": 63, "text": "|F(z)| \\leq 1" }, { "math_id": 64, "text": "\\rho\\in[0,\\lambda)" }, { "math_id": 65, "text": "C>0" } ]
https://en.wikipedia.org/wiki?curid=1377241
13772472
Ω-logic
In set theory, Ω-logic is an infinitary logic and deductive system proposed by W. Hugh Woodin (1999) as part of an attempt to generalize the theory of determinacy of pointclasses to cover the structure formula_0. Just as the axiom of projective determinacy yields a canonical theory of formula_1, he sought to find axioms that would give a canonical theory for the larger structure. The theory he developed involves a controversial argument that the continuum hypothesis is false. Analysis. Woodin's Ω-conjecture asserts that if there is a proper class of Woodin cardinals (for technical reasons, most results in the theory are most easily stated under this assumption), then Ω-logic satisfies an analogue of the completeness theorem. From this conjecture, it can be shown that, if there is any single axiom which is comprehensive over formula_0 (in Ω-logic), it must imply that the continuum is not formula_2. Woodin also isolated a specific axiom, a variation of Martin's maximum, which states that any Ω-consistent formula_3 (over formula_0) sentence is true; this axiom implies that the continuum is formula_4. Woodin also related his Ω-conjecture to a proposed abstract definition of large cardinals: he took a "large cardinal property" to be a formula_5 property formula_6 of ordinals which implies that α is a strong inaccessible, and which is invariant under forcing by sets of cardinal less than α. Then the Ω-conjecture implies that if there are arbitrarily large models containing a large cardinal, this fact will be provable in Ω-logic. The theory involves a definition of Ω-validity: a statement is an Ω-valid consequence of a set theory "T" if it holds in every model of "T" having the form formula_7 for some ordinal formula_8 and some forcing notion formula_9. This notion is clearly preserved under forcing, and in the presence of a proper class of Woodin cardinals it will also be invariant under forcing (in other words, Ω-satisfiability is preserved under forcing as well). There is also a notion of Ω-provability; here the "proofs" consist of universally Baire sets and are checked by verifying that for every countable transitive model of the theory, and every forcing notion in the model, the generic extension of the model (as calculated in "V") contains the "proof", restricted its own reals. For a proof-set "A" the condition to be checked here is called ""A"-closed". A complexity measure can be given on the proofs by their ranks in the Wadge hierarchy. Woodin showed that this notion of "provability" implies Ω-validity for sentences which are formula_3 over "V". The Ω-conjecture states that the converse of this result also holds. In all currently known core models, it is known to be true; moreover the consistency strength of the large cardinals corresponds to the least proof-rank required to "prove" the existence of the cardinals. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H_{\\aleph_2}" }, { "math_id": 1, "text": "H_{\\aleph_1}" }, { "math_id": 2, "text": "\\aleph_1" }, { "math_id": 3, "text": "\\Pi_2" }, { "math_id": 4, "text": "\\aleph_2" }, { "math_id": 5, "text": "\\Sigma_2" }, { "math_id": 6, "text": "P(\\alpha)" }, { "math_id": 7, "text": "V^\\mathbb{B}_\\alpha" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\mathbb{B}" } ]
https://en.wikipedia.org/wiki?curid=13772472
1377405
Zeckendorf's theorem
On the unique representation of integers as sums of non-consecutive Fibonacci numbers In mathematics, Zeckendorf's theorem, named after Belgian amateur mathematician Edouard Zeckendorf, is a theorem about the representation of integers as sums of Fibonacci numbers. Zeckendorf's theorem states that every positive integer can be represented uniquely as the sum of "one or more" distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. More precisely, if N is any positive integer, there exist positive integers "c""i" ≥ 2, with "c""i" + 1 &gt; "c""i" + 1, such that formula_0 where Fn is the nth Fibonacci number. Such a sum is called the Zeckendorf representation of N. The Fibonacci coding of N can be derived from its Zeckendorf representation. For example, the Zeckendorf representation of 64 is 64 = 55 + 8 + 1. There are other ways of representing 64 as the sum of Fibonacci numbers 64 = 55 + 5 + 3 + 1 64 = 34 + 21 + 8 + 1 64 = 34 + 21 + 5 + 3 + 1 64 = 34 + 13 + 8 + 5 + 3 + 1 but these are not Zeckendorf representations because 34 and 21 are consecutive Fibonacci numbers, as are 5 and 3. For any given positive integer, its Zeckendorf representation can be found by using a greedy algorithm, choosing the largest possible Fibonacci number at each stage. History. While the theorem is named after the eponymous author who published his paper in 1972, the same result had been published 20 years earlier by Gerrit Lekkerkerker. As such, the theorem is an example of Stigler's Law of Eponymy. Proof. Zeckendorf's theorem has two parts: The first part of Zeckendorf's theorem (existence) can be proven by induction. For "n" = 1, 2, 3 it is clearly true (as these are Fibonacci numbers), for "n" = 4 we have 4 = 3 + 1. If "n" is a Fibonacci number then there is nothing to prove. Otherwise there exists j such that "F""j" &lt; "n" &lt; "F""j" + 1. Now suppose each positive integer "a" &lt; "n" has a Zeckendorf representation (induction hypothesis) and consider "b" = "n" − "F""j". Since "b" &lt; "n", b has a Zeckendorf representation by the induction hypothesis. At the same time, "b" = "n" − "F""j" &lt; "F""j" + 1 − "F""j" "F""j" − 1  (we apply the definition of Fibonacci number in the last equality), so the Zeckendorf representation of b does not contain "F""j" − 1, and hence also does not contain "F""j". As a result, "n" can be represented as the sum of Fj and the Zeckendorf representation of b, such that the Fibonacci numbers involved in the sum are distinct. The second part of Zeckendorf's theorem (uniqueness) requires the following lemma: "Lemma": The sum of any non-empty set of distinct, non-consecutive Fibonacci numbers whose largest member is Fj is strictly less than the next larger Fibonacci number "F""j" + 1. The lemma can be proven by induction on j. Now take two non-empty sets formula_1 and formula_2 of distinct non-consecutive Fibonacci numbers which have the same sum, formula_3. Consider sets formula_4 and formula_5 which are equal to formula_1 and formula_2 from which the common elements have been removed (i. e. formula_6 and formula_7). Since formula_1 and formula_2 had equal sum, and we have removed exactly the elements from formula_8 from both sets, formula_4 and formula_5 must have the same sum as well, formula_9. Now we will show by contradiction that at least one of formula_4 and formula_5 is empty. Assume the contrary, i. e. 
that formula_4 and formula_5 are both non-empty and let the largest member of formula_4 be Fs and the largest member of formula_5 be Ft. Because formula_4 and formula_5 contain no common elements, "F""s" ≠ "F""t". Without loss of generality, suppose "F""s" &lt; "F""t". Then by the lemma, formula_10, and, by the fact that formula_11, formula_12, whereas clearly formula_13. This contradicts the fact that formula_4 and formula_5 have the same sum, and we can conclude that either formula_4 or formula_5 must be empty. Now assume (again without loss of generality) that formula_4 is empty. Then formula_4 has sum 0, and so must formula_5. But since formula_5 can only contain positive integers, it must be empty too. To conclude: formula_14 which implies formula_15, proving that each Zeckendorf representation is unique. Fibonacci multiplication. One can define the following operation formula_16 on natural numbers a, b: given the Zeckendorf representations formula_17 and formula_18 we define the Fibonacci product formula_19 For example, the Zeckendorf representation of 2 is formula_20, and the Zeckendorf representation of 4 is formula_21 (formula_22 is disallowed from representations), so formula_23 A simple rearrangement of sums shows that this is a commutative operation; however, Donald Knuth proved the surprising fact that this operation is also associative. Representation with negafibonacci numbers. The Fibonacci sequence can be extended to negative index n using the rearranged recurrence relation formula_25 which yields the sequence of "negafibonacci" numbers satisfying formula_26 Any integer can be uniquely represented as a sum of negafibonacci numbers in which no two consecutive negafibonacci numbers are used. For example: 0 = "F"−1 + "F"−2, for example, so the uniqueness of the representation does depend on the condition that no two consecutive negafibonacci numbers are used. This gives a system of coding integers, similar to the representation of Zeckendorf's theorem. In the string representing the integer x, the nth digit is 1 if F−n appears in the sum that represents x; that digit is 0 otherwise. For example, 24 may be represented by the string 100101001, which has the digit 1 in places 9, 6, 4, and 1, because 24 = "F"−1 + "F"−4 + "F"−6 + "F"−9. The integer x is represented by a string of odd length if and only if "x" &gt; 0. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. "This article incorporates material from proof that the Zeckendorf representation of a positive integer is unique on PlanetMath, which is licensed under the ."
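The greedy construction described above, together with the Fibonacci product, is short to implement. The following Python sketch is illustrative (the helper names zeckendorf, fib and fib_product are not standard terminology): it returns the indices c_i ≥ 2 of the Zeckendorf representation and uses them to evaluate the Fibonacci product defined above.

```python
def zeckendorf(n):
    """Return the Zeckendorf representation of n as a list of Fibonacci
    indices c_i >= 2 (with F_1 = F_2 = 1, F_3 = 2, ...), largest first."""
    fibs = [(2, 1)]                      # (index, value), starting at F_2 = 1
    while fibs[-1][1] <= n:              # build Fibonacci numbers up to n
        i, v = fibs[-1]
        prev = fibs[-2][1] if len(fibs) > 1 else 1   # F_1 = 1
        fibs.append((i + 1, v + prev))
    indices = []
    for i, v in reversed(fibs):          # greedy: take the largest F_i <= n
        if v <= n:
            indices.append(i)
            n -= v
    return indices

def fib(i):
    a, b = 1, 1                          # F_1, F_2
    for _ in range(i - 2):
        a, b = b, a + b
    return b

def fib_product(a, b):
    """Fibonacci product a o b built from the Zeckendorf indices."""
    return sum(fib(ci + dj) for ci in zeckendorf(a) for dj in zeckendorf(b))

print(zeckendorf(64))       # [10, 6, 2]  ->  55 + 8 + 1
print(fib_product(2, 4))    # 18, as in the example above
```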
[ { "math_id": 0, "text": "N = \\sum_{i = 0}^k F_{c_i}," }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "\\sum_{x \\in S} x = \\sum_{x \\in T} x" }, { "math_id": 4, "text": "S'" }, { "math_id": 5, "text": "T'" }, { "math_id": 6, "text": "S' = S\\setminus T" }, { "math_id": 7, "text": "T' = T\\setminus S" }, { "math_id": 8, "text": "S\\cap T" }, { "math_id": 9, "text": "\\sum_{x \\in S'} x = \\sum_{x \\in T'} x" }, { "math_id": 10, "text": "\\sum_{x \\in S'} x < F_{s + 1}" }, { "math_id": 11, "text": "F_{s} < F_{s + 1} \\leq F_{t}" }, { "math_id": 12, "text": "\\sum_{x \\in S'} x < F_t" }, { "math_id": 13, "text": "\\sum_{x \\in T'} x \\geq F_t" }, { "math_id": 14, "text": "S' = T' = \\emptyset" }, { "math_id": 15, "text": "S = T" }, { "math_id": 16, "text": "a\\circ b" }, { "math_id": 17, "text": "a=\\sum_{i=0}^kF_{c_i}\\;(c_i\\ge2)" }, { "math_id": 18, "text": "b=\\sum_{j=0}^lF_{d_j}\\;(d_j\\ge2)" }, { "math_id": 19, "text": "a\\circ b=\\sum_{i=0}^k\\sum_{j=0}^lF_{c_i+d_j}." }, { "math_id": 20, "text": "F_3" }, { "math_id": 21, "text": "F_4 + F_2" }, { "math_id": 22, "text": "F_1" }, { "math_id": 23, "text": "2 \\circ 4 = F_{3+4} + F_{3+2} = 13 + 5 = 18." }, { "math_id": 24, "text": "4 \\circ 4 = (F_4 + F_2) \\circ (F_4 + F_2) = F_{4+4} + 2F_{4+2} + F_{2+2} = 21 + 2\\cdot 8 + 3 = 40 = F_9 + F_5 + F_2." }, { "math_id": 25, "text": "F_{n-2} = F_n - F_{n-1}, " }, { "math_id": 26, "text": "F_{-n} = (-1)^{n+1} F_n. " } ]
https://en.wikipedia.org/wiki?curid=1377405
13774593
Differential capacitance
Differential capacitance in physics, electronics, and electrochemistry is a measure of the voltage-dependent capacitance of a nonlinear capacitor, such as an electrical double layer or a semiconductor diode. It is defined as the derivative of charge with respect to potential. Description. In electrochemistry differential capacitance is a parameter introduced for characterizing electrical double layers: formula_0 where σ is surface charge and ψ is electric surface potential. Capacitance is usually defined as the stored charge between two conducting surfaces separated by a dielectric divided by the voltage between the surfaces. Another definition is the rate of change of the stored charge or surface charge (σ) divided by the rate of change of the voltage between the surfaces or the electric surface potential (ψ). The latter is called the "differential capacitance," but usually the stored charge is directly proportional to the voltage, making the capacitances given by the two definitions equal. This type of differential capacitance may be called "parallel plate capacitance," after the usual form of the capacitor. However, the term is meaningful when applied to any two conducting bodies such as spheres, and not necessarily ones of the same size, for example, the elevated terminals of a Tesla wireless system and the earth. These are widely spaced insulated conducting bodies positioned over a spherically conducting ground plane. "The differential capacitance between the spheres is obtained by assuming opposite charges ±q on them..." Another form of differential capacitance refers to single isolated conducting bodies. It is usually discussed in books under the topic of "electrostatics." This capacitance is best defined as the rate of change of charge stored in the body divided by the rate of change of the potential of the body. The definition of the absolute potential of the body depends on what is selected as a reference. This is sometimes referred to as the "self-capacitance" of a body. If the body is a conducting sphere, the self-capacitance is proportional to its radius, and is roughly 1pF per centimetre of radius. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
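As a numerical illustration of the distinction between differential capacitance dQ/dV and the "total" capacitance Q/V, the following Python sketch differentiates an assumed, purely illustrative nonlinear charge–voltage curve; it is not a model of any particular device.

```python
import numpy as np

# Made-up nonlinear charge-voltage relation Q(V), for illustration only.
def charge(v):
    return 1e-9 * np.tanh(v)          # coulombs

v = np.linspace(0.1, 2.0, 200)        # volts
q = charge(v)

c_diff = np.gradient(q, v)            # differential capacitance dQ/dV
c_total = q / v                       # "static" capacitance Q/V

# For a nonlinear capacitor the two generally differ:
print(c_diff[0], c_total[0])          # nearly equal at small V
print(c_diff[-1], c_total[-1])        # clearly different at larger V
```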
[ { "math_id": 0, "text": " C = \\frac{\\mathrm{d} \\sigma}{ \\mathrm{d} \\Psi}" } ]
https://en.wikipedia.org/wiki?curid=13774593
13785655
Magnetic anomaly
Local variation in the Earth's magnetic field In geophysics, a magnetic anomaly is a local variation in the Earth's magnetic field resulting from variations in the chemistry or magnetism of the rocks. Mapping of variation over an area is valuable in detecting structures obscured by overlying material. The magnetic variation (geomagnetic reversals) in successive bands of ocean floor parallel with mid-ocean ridges was important evidence for seafloor spreading, a concept central to the theory of plate tectonics. Measurement. Magnetic anomalies are generally a small fraction of the magnetic field. The total field ranges from 25,000 to 65,000 nanoteslas (nT). To measure anomalies, magnetometers need a sensitivity of 10 nT or less. There are three main types of magnetometer used to measure magnetic anomalies: Data acquisition. Ground-based. In ground-based surveys, measurements are made at a series of stations, typically 15 to 60 m apart. Usually a proton precession magnetometer is used and it is often mounted on a pole. Raising the magnetometer reduces the influence of small ferrous objects that were discarded by humans. To further reduce unwanted signals, the surveyors do not carry metallic objects such as keys, knives or compasses, and objects such as motor vehicles, railway lines, and barbed wire fences are avoided. If some such contaminant is overlooked, it may show up as a sharp spike in the anomaly, so such features are treated with suspicion. The main application for ground-based surveys is the detailed search for minerals. Aeromagnetic. Airborne magnetic surveys are often used in oil surveys to provide preliminary information for seismic surveys. In some countries such as Canada, government agencies have made systematic surveys of large areas. The survey generally involves making a series of parallel runs at a constant height and with intervals of anywhere from a hundred meters to several kilometers. These are crossed by occasional tie lines, perpendicular to the main survey, to check for errors. The plane is a source of magnetism, so sensors are either mounted on a boom (as in the figure) or towed behind on a cable. Aeromagnetic surveys have a lower spatial resolution than ground surveys, but this can be an advantage for a regional survey of deeper rocks. Shipborne. In shipborne surveys, a magnetometer is towed a few hundred meters behind a ship in a device called a "fish". The sensor is kept at a constant depth of about 15 m. Otherwise, the procedure is similar to that used in aeromagnetic surveys. Spacecraft. Sputnik 3 in 1958 was the first spacecraft to carry a magnetometer. In the autumn of 1979, Magsat was launched and jointly operated by NASA and USGS until the spring of 1980. It had a caesium vapor scalar magnetometer and a fluxgate vector magnetometer. CHAMP, a German satellite, made precise gravity and magnetic measurements from 2001 to 2010. A Danish satellite, Ørsted, was launched in 1999 and is still in operation, while the Swarm mission of the European Space Agency involves a "constellation" of three satellites that were launched in November, 2013. Data reduction. There are two main corrections that are needed for magnetic measurements. The first is removing short-term variations in the field from external sources; e.g., "diurnal variations" that have a period of 24 hours and magnitudes of up to 30 nT, probably from the action of the solar wind on the ionosphere. In addition, magnetic storms can have peak magnitudes of 1000 nT and can last for several days. 
Their contribution can be measured by returning to a base station repeatedly or by having another magnetometer that periodically measures the field at a fixed location. Second, since the anomaly is the local contribution to the magnetic field, the main geomagnetic field must be subtracted from it. The International Geomagnetic Reference Field is usually used for this purpose. This is a large-scale, time-averaged mathematical model of the Earth's field based on measurements from satellites, magnetic observatories and other surveys. Some corrections that are needed for gravity anomalies are less important for magnetic anomalies. For example, the vertical gradient of the magnetic field is 0.03 nT/m or less, so an elevation correction is generally not needed. Interpretation. Theoretical background. The magnetization in the surveyed rock is the vector sum of induced and remanent magnetization: formula_0 The induced magnetization of many minerals is the product of the ambient magnetic field and their magnetic susceptibility χ: formula_1 Some susceptibilities are given in the table. Minerals that are diamagnetic or paramagnetic only have an induced magnetization. Ferromagnetic minerals such as magnetite also can carry a remanent magnetization or remanence. This remanence can last for millions of years, so it may be in a completely different direction from the present Earth's field. If a remanence is present, it is difficult to separate from the induced magnetization unless samples of the rock are measured. The ratio of the magnitudes, "Q" "M"r/"M"i, is called the Koenigsberger ratio. Magnetic anomaly modeling. Interpretation of magnetic anomalies is usually done by matching observed and modeled values of the anomalous magnetic field. An algorithm developed by Talwani and Heirtzler(1964) (and further elaborated by Kravchinsky et al., 2019) treats both induced and remnant magnetizations as vectors and allows theoretical estimation of the remnant magnetization from the existing apparent polar wander paths for different tectonic units or continents. Applications. Ocean floor stripes. Magnetic surveys over the oceans have revealed a characteristic pattern of anomalies around mid-ocean ridges. They involve a series of positive and negative anomalies in the intensity of the magnetic field, forming stripes running parallel to each ridge. They are often symmetric about the axis of the ridge. The stripes are generally tens of kilometers wide, and the anomalies are a few hundred nanoteslas. The source of these anomalies is primarily permanent magnetization carried by titanomagnetite minerals in basalt and gabbros. They are magnetized when ocean crust is formed at the ridge. As magma rises to the surface and cools, the rock acquires a thermoremanent magnetization in the direction of the field. Then the rock is carried away from the ridge by the motions of the tectonic plates. Every few hundred thousand years, the direction of the magnetic field reverses. Thus, the pattern of stripes is a global phenomenon and can be used to calculate the velocity of seafloor spreading. In fiction. In the "Space Odyssey" series by Arthur C. Clarke, a series of monoliths are left by extraterrestrials for humans to find. One near the crater Tycho is found by its unnaturally powerful magnetic field and named "Tycho Magnetic Anomaly 1" (TMA-1). One orbiting Jupiter is named TMA-2, and one in the Olduvai Gorge is found in 2513 and retroactively named TMA-0 because it was first encountered by primitive humans. See also. 
&lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
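The relations of the Interpretation section can be evaluated directly. The Python sketch below uses assumed, order-of-magnitude values for the ambient field, the susceptibility and the remanent magnetization (they are not survey data) to compute the induced magnetization and the Koenigsberger ratio.

```python
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability (T m/A)

B_earth = 50_000e-9              # ambient field: 50,000 nT in teslas (assumed)
H = B_earth / MU0                # field strength in A/m

chi = 3.0                        # volume susceptibility of a magnetite-rich rock (assumed)
M_induced = chi * H              # induced magnetization M_i = chi * H, in A/m

M_remanent = 480.0               # remanent magnetization in A/m (assumed)
Q = M_remanent / M_induced       # Koenigsberger ratio Q = M_r / M_i

print(f"H = {H:.1f} A/m, M_i = {M_induced:.1f} A/m, Q = {Q:.2f}")
```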
[ { "math_id": 0, "text": " \\mathbf{M} = \\mathbf{M}_\\text{i} + \\mathbf{M}_\\text{r}." }, { "math_id": 1, "text": " \\mathbf{M}_\\text{i} = \\chi \\mathbf{H}. " } ]
https://en.wikipedia.org/wiki?curid=13785655
1378619
Representation theory of SU(2)
First case of a Lie group that is both compact and non-abelian In the study of the representation theory of Lie groups, the study of representations of SU(2) is fundamental to the study of representations of semisimple Lie groups. It is the first case of a Lie group that is both a compact group and a non-abelian group. The first condition implies the representation theory is discrete: representations are direct sums of a collection of basic irreducible representations (governed by the Peter–Weyl theorem). The second means that there will be irreducible representations in dimensions greater than 1. SU(2) is the universal covering group of SO(3), and so its representation theory includes that of the latter, by dint of a surjective homomorphism to it. This underlies the significance of SU(2) for the description of non-relativistic spin in theoretical physics; see below for other physical and historical context. As shown below, the finite-dimensional irreducible representations of SU(2) are indexed by a non-negative integer formula_0 and have dimension formula_1. In the physics literature, the representations are labeled by the quantity formula_2, where formula_3 is then either an integer or a half-integer, and the dimension is formula_4. Lie algebra representations. The representations of the group are found by considering representations of formula_5, the Lie algebra of SU(2). Since the group SU(2) is simply connected, every representation of its Lie algebra can be integrated to a group representation; we will give an explicit construction of the representations at the group level below. Real and complexified Lie algebras. The real Lie algebra formula_5 has a basis given by formula_6 The matrices are a representation of the quaternions: formula_9 formula_10 formula_11 where I is the conventional 2×2 identity matrix:formula_12 Consequently, the commutator brackets of the matrices satisfy formula_13 It is then convenient to pass to the complexified Lie algebra formula_14 (Skew self-adjoint matrices with trace zero plus self-adjoint matrices with trace zero gives all matrices with trace zero.) As long as we are working with representations over formula_15 this passage from real to complexified Lie algebra is harmless. The reason for passing to the complexification is that it allows us to construct a nice basis of a type that does not exist in the real Lie algebra formula_5. The complexified Lie algebra is spanned by three elements formula_16, formula_17, and formula_18, given by formula_19 or, explicitly, formula_20 The non-trivial/non-identical part of the group's multiplication table is formula_21 formula_22 formula_23 where O is the 2×2 all-zero matrix. Hence their commutation relations are formula_24 Up to a factor of 2, the elements formula_18, formula_16 and formula_17 may be identified with the angular momentum operators formula_25, formula_26, and formula_27, respectively. The factor of 2 is a discrepancy between conventions in math and physics; we will attempt to mention both conventions in the results that follow. Weights and the structure of the representation. In this setting, the eigenvalues for formula_18 are referred to as the weights of the representation. The following elementary result is a key step in the analysis. 
Suppose that formula_28 is an eigenvector for formula_18 with eigenvalue formula_29; that is, that formula_30 Then formula_31 In other words, formula_32 is either the zero vector or an eigenvector for formula_18 with eigenvalue formula_33 and formula_34 is either zero or an eigenvector for formula_18 with eigenvalue formula_35 Thus, the operator formula_16 acts as a raising operator, increasing the weight by 2, while formula_17 acts as a lowering operator. Suppose now that formula_36 is an irreducible, finite-dimensional representation of the complexified Lie algebra. Then formula_18 can have only finitely many eigenvalues. In particular, there must be some final eigenvalue formula_37 with the property that formula_38 is "not" an eigenvalue. Let formula_39 be an eigenvector for formula_18 with that eigenvalue formula_40 formula_41 then we must have formula_42 or else the above identity would tell us that formula_43 is an eigenvector with eigenvalue formula_44 Now define a "chain" of vectors formula_45 by formula_46. A simple argument by induction then shows that formula_47 for all formula_48 Now, if formula_49 is not the zero vector, it is an eigenvector for formula_18 with eigenvalue formula_50 Since, again, formula_51 has only finitely many eigenvectors, we conclude that formula_52 must be zero for some formula_53 (and then formula_54 for all formula_55). Let formula_56 be the last nonzero vector in the chain; that is, formula_57 but formula_58 Then of course formula_59 and by the above identity with formula_60 we have formula_61 Since formula_62 is at least one and formula_63 we conclude that formula_64 "must be equal to the non-negative integer" formula_65 We thus obtain a chain of formula_62 vectors, formula_66 such that formula_67 acts as formula_68 and formula_69 acts as formula_70 and formula_51 acts as formula_71 Since the vectors formula_49 are eigenvectors for formula_18 with distinct eigenvalues, they must be linearly independent. Furthermore, the span of formula_74 is clearly invariant under the action of the complexified Lie algebra. Since formula_36 is assumed irreducible, this span must be all of formula_75 We thus obtain a complete description of what an irreducible representation must look like; that is, a basis for the space and a complete description of how the generators of the Lie algebra act. Conversely, for any formula_76 we can construct a representation by simply using the above formulas and checking that the commutation relations hold. This representation can then be shown to be irreducible. Conclusion: For each non-negative integer formula_77 there is a unique irreducible representation with highest weight formula_65 Each irreducible representation is equivalent to one of these. The representation with highest weight formula_73 has dimension formula_62 with weights formula_78 each having multiplicity one. The Casimir element. We now introduce the (quadratic) Casimir element, formula_79 given by formula_80. We can view formula_79 as an element of the universal enveloping algebra or as an operator in each irreducible representation. 
Viewing formula_81 as an operator on the representation with highest weight formula_73, we may easily compute that formula_81 commutes with each formula_82 Thus, by Schur's lemma, formula_81 acts as a scalar multiple formula_83 of the identity for each formula_84 We can write formula_79 in terms of the formula_85 basis as follows: formula_86 which can be reduced to formula_87 The eigenvalue of formula_81 in the representation with highest weight formula_73 can be computed by applying formula_81 to the highest weight vector, which is annihilated by formula_88 thus, we get formula_89 In the physics literature, the Casimir is normalized as formula_90 Labeling things in terms of formula_91 the eigenvalue formula_92 of formula_93 is then computed as formula_94 The group representations. Action on polynomials. Since SU(2) is simply connected, a general result shows that every representation of its (complexified) Lie algebra gives rise to a representation of SU(2) itself. It is desirable, however, to give an explicit realization of the representations at the group level. The group representations can be realized on spaces of polynomials in two complex variables. That is, for each non-negative integer formula_0, we let formula_95 denote the space of homogeneous polynomials formula_96 of degree formula_0 in two complex variables. Then the dimension of formula_95 is formula_1. There is a natural action of SU(2) on each formula_95, given by formula_97. The associated Lie algebra representation is simply the one described in the previous section. (See here for an explicit formula for the action of the Lie algebra on the space of polynomials.) The characters. The character of a representation formula_98 is the function formula_99 given by formula_100. Characters plays an important role in the representation theory of compact groups. The character is easily seen to be a class function, that is, invariant under conjugation. In the SU(2) case, the fact that the character is a class function means it is determined by its value on the maximal torus formula_101 consisting of the diagonal matrices in SU(2), since the elements are orthogonally diagonalizable with the spectral theorem. Since the irreducible representation with highest weight formula_0 has weights formula_102, it is easy to see that the associated character satisfies formula_103 This expression is a finite geometric series that can be simplified to formula_104 This last expression is just the statement of the Weyl character formula for the SU(2) case. Actually, following Weyl's original analysis of the representation theory of compact groups, one can classify the representations entirely from the group perspective, without using Lie algebra representations at all. In this approach, the Weyl character formula plays an essential part in the classification, along with the Peter–Weyl theorem. The SU(2) case of this story is described here. Relation to the representations of SO(3). Note that either all of the weights of the representation are even (if formula_0 is even) or all of the weights are odd (if formula_0 is odd). In physical terms, this distinction is important: The representations with even weights correspond to ordinary representations of the rotation group SO(3). By contrast, the representations with odd weights correspond to double-valued (spinorial) representation of SO(3), also known as projective representations. 
In the physics conventions, formula_0 being even corresponds to formula_3 being an integer while formula_0 being odd corresponds to formula_3 being a half-integer. These two cases are described as integer spin and half-integer spin, respectively. The representations with odd, positive values of formula_0 are faithful representations of SU(2), while the representations of SU(2) with non-negative, even formula_0 are not faithful. Another approach. See under the example for Borel–Weil–Bott theorem. Most important irreducible representations and their applications. Representations of SU(2) describe non-relativistic spin, due to being a double covering of the rotation group of Euclidean 3-space. Relativistic spin is described by the representation theory of SL2(C), a supergroup of SU(2), which in a similar way covers SO+(1;3), the relativistic version of the rotation group. SU(2) symmetry also supports concepts of isobaric spin and weak isospin, collectively known as "isospin". The representation with formula_105 (i.e., formula_106 in the physics convention) is the 2 representation, the fundamental representation of SU(2). When an element of SU(2) is written as a complex 2 × 2 matrix, it is simply a multiplication of column 2-vectors. It is known in physics as the spin-1/2 and, historically, as the multiplication of quaternions (more precisely, multiplication by a unit quaternion). This representation can also be viewed as a double-valued projective representation of the rotation group SO(3). The representation with formula_107 (i.e., formula_108) is the 3 representation, the adjoint representation. It describes 3-d rotations, the standard representation of SO(3), so real numbers are sufficient for it. Physicists use it for the description of massive spin-1 particles, such as vector mesons, but its importance for spin theory is much higher because it anchors spin states to the geometry of the physical 3-space. This representation emerged simultaneously with the 2 when William Rowan Hamilton introduced versors, his term for elements of SU(2). Note that Hamilton did not use standard group theory terminology since his work preceded Lie group developments. The formula_109 (i.e. formula_110) representation is used in particle physics for certain baryons, such as the Δ. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
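The matrix relations above are easy to verify numerically, and the irreducible representation with highest weight m described in the Lie-algebra section can be built explicitly on the chain of vectors v_0, …, v_m. The following Python sketch (a numerical check, not part of the article's derivation) constructs matrices for H, X, Y in that basis, verifies the commutation relations, and checks that the Casimir element acts as m(m + 2) times the identity.

```python
import numpy as np

def rep_matrices(m):
    """Matrices of H, X, Y on the basis v_0, ..., v_m of the irreducible
    representation with highest weight m, following the chain formulas:
    H v_k = (m - 2k) v_k,  Y v_k = v_{k+1},  X v_k = k(m - k + 1) v_{k-1}."""
    n = m + 1
    H = np.diag([m - 2 * k for k in range(n)]).astype(float)
    X = np.zeros((n, n))
    Y = np.zeros((n, n))
    for k in range(1, n):
        X[k - 1, k] = k * (m - (k - 1))   # X v_k = k(m - (k-1)) v_{k-1}
        Y[k, k - 1] = 1.0                 # Y v_{k-1} = v_k
    return H, X, Y

def comm(A, B):
    return A @ B - B @ A

m = 4                                     # any non-negative integer works
H, X, Y = rep_matrices(m)

print(np.allclose(comm(H, X), 2 * X))     # [H, X] = 2X
print(np.allclose(comm(H, Y), -2 * Y))    # [H, Y] = -2Y
print(np.allclose(comm(X, Y), H))         # [X, Y] = H

C = 4 * Y @ X + H @ H + 2 * H             # Casimir C = 4YX + H^2 + 2H
print(np.allclose(C, m * (m + 2) * np.eye(m + 1)))   # eigenvalue m(m + 2)
```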
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "m + 1" }, { "math_id": 2, "text": "l = m/2" }, { "math_id": 3, "text": "l" }, { "math_id": 4, "text": "2l + 1" }, { "math_id": 5, "text": "\\mathfrak{su}(2)" }, { "math_id": 6, "text": "u_1 = \\begin{bmatrix}\n 0 & i\\\\\n i & 0\n \\end{bmatrix} ,\\qquad\n u_2 = \\begin{bmatrix}\n 0 & -1\\\\\n 1 & ~~0\n \\end{bmatrix} ,\\qquad\n u_3 = \\begin{bmatrix}\n i & ~~0\\\\\n 0 & -i\n \\end{bmatrix}~,\n" }, { "math_id": 7, "text": "u_1 = +i\\ \\sigma_1 \\;, \\, u_2 = -i\\ \\sigma_2 \\;," }, { "math_id": 8, "text": "u_3 = +i\\ \\sigma_3 ~." }, { "math_id": 9, "text": " u_1\\,u_1 = -I\\, , ~~\\quad u_2\\,u_2 = -I \\, , ~~\\quad u_3\\,u_3 = -I\\, ," }, { "math_id": 10, "text": " u_1\\,u_2 = +u_3\\, , \\quad u_2\\,u_3 = +u_1\\, , \\quad u_3\\,u_1 = +u_2\\, ," }, { "math_id": 11, "text": " u_2\\,u_1 = -u_3\\, , \\quad u_3\\,u_2 = -u_1\\, , \\quad u_1\\,u_3 = -u_2 ~." }, { "math_id": 12, "text": "~~I = \\begin{bmatrix}\n 1 & 0\\\\\n 0 & 1\n\\end{bmatrix} ~." }, { "math_id": 13, "text": "[u_1, u_2] = 2 u_3\\, ,\\quad [u_2, u_3] = 2 u_1\\, ,\\quad [u_3, u_1] = 2 u_2 ~." }, { "math_id": 14, "text": "\\mathrm{su}(2) + i\\,\\mathrm{su}(2) = \\mathrm{sl}(2;\\mathbb C) ~." }, { "math_id": 15, "text": "\\mathbb C" }, { "math_id": 16, "text": "X" }, { "math_id": 17, "text": "Y" }, { "math_id": 18, "text": "H" }, { "math_id": 19, "text": "\n H = \\frac{1}{i}u_3, \\qquad\n X = \\frac{1}{2i}\\left(u_1 - iu_2\\right), \\qquad \n Y = \\frac{1}{2i}(u_1 + iu_2) ~;\n" }, { "math_id": 20, "text": " \n H = \\begin{bmatrix}\n 1 & ~~0\\\\\n 0 & -1\n \\end{bmatrix}, \\qquad\n X = \\begin{bmatrix}\n 0 & 1\\\\\n 0 & 0\n \\end{bmatrix}, \\qquad\n Y = \\begin{bmatrix}\n 0 & 0\\\\\n 1 & 0\n \\end{bmatrix}\n~." }, { "math_id": 21, "text": " H X ~=~~~~X ,\\qquad H Y ~= -Y ,\\qquad X Y ~=~ \\tfrac{1}{2}\\left(I + H \\right)," }, { "math_id": 22, "text": " X H ~= -X ,\\qquad Y H ~=~~~~Y ,\\qquad Y X ~=~ \\tfrac{1}{2}\\left(I - H \\right)," }, { "math_id": 23, "text": " H H ~=~~~I~ ,\\qquad X X ~=~~~~O ,\\qquad Y Y ~=~ ~O," }, { "math_id": 24, "text": "[H, X] = 2 X, \\qquad [H, Y] = -2 Y, \\qquad [X, Y] = H." }, { "math_id": 25, "text": "J_z" }, { "math_id": 26, "text": "J_+" }, { "math_id": 27, "text": "J_-" }, { "math_id": 28, "text": "v" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "H v = \\alpha v." }, { "math_id": 31, "text": "\\begin{alignat}{5}\nH (X v) &= (X H + [H,X]) v &&= (\\alpha + 2) X v,\\\\[3pt]\nH (Y v) &= (Y H + [H,Y]) v &&= (\\alpha - 2) Y v.\n\\end{alignat}" }, { "math_id": 32, "text": "Xv" }, { "math_id": 33, "text": "\\alpha + 2" }, { "math_id": 34, "text": "Y v" }, { "math_id": 35, "text": "\\alpha - 2." }, { "math_id": 36, "text": "V" }, { "math_id": 37, "text": "\\lambda \\in \\mathbb{C}" }, { "math_id": 38, "text": "\\lambda + 2" }, { "math_id": 39, "text": "v_0" }, { "math_id": 40, "text": "\\lambda:" }, { "math_id": 41, "text": "H v_0 = \\lambda v_0," }, { "math_id": 42, "text": "X v_0 = 0," }, { "math_id": 43, "text": "X v_0" }, { "math_id": 44, "text": "\\lambda + 2 ." }, { "math_id": 45, "text": "v_0, v_1, \\ldots" }, { "math_id": 46, "text": "v_k = Y^k v_0" }, { "math_id": 47, "text": "X v_k = k(\\lambda - (k - 1))v_{k-1}" }, { "math_id": 48, "text": "k = 1, 2, \\ldots ." }, { "math_id": 49, "text": " v_k " }, { "math_id": 50, "text": " \\lambda - 2k ." 
}, { "math_id": 51, "text": " H " }, { "math_id": 52, "text": " v_\\ell " }, { "math_id": 53, "text": " \\ell " }, { "math_id": 54, "text": "v_k = 0" }, { "math_id": 55, "text": " k > \\ell " }, { "math_id": 56, "text": "v_m" }, { "math_id": 57, "text": " v_m \\neq 0 " }, { "math_id": 58, "text": " v_{m+1} = 0 ." }, { "math_id": 59, "text": " X v_{m+1} = 0 " }, { "math_id": 60, "text": "k = m + 1 ," }, { "math_id": 61, "text": " 0 = X v_{m+1} = (m + 1)(\\lambda - m)v_m ." }, { "math_id": 62, "text": " m + 1 " }, { "math_id": 63, "text": " v_m \\neq 0 ," }, { "math_id": 64, "text": " \\lambda " }, { "math_id": 65, "text": " m ." }, { "math_id": 66, "text": " v_0, v_1, \\ldots, v_m ," }, { "math_id": 67, "text": " Y " }, { "math_id": 68, "text": " Y v_m = 0, \\quad Y v_k = v_{k+1} \\quad (k < m) " }, { "math_id": 69, "text": " X " }, { "math_id": 70, "text": " X v_0 = 0, \\quad X v_k = k (m - (k - 1)) v_{k-1} \\quad (k \\ge 1)" }, { "math_id": 71, "text": "H v_k = (m - 2k) v_k ." }, { "math_id": 72, "text": "\\lambda" }, { "math_id": 73, "text": " m " }, { "math_id": 74, "text": " v_0, \\ldots , v_m " }, { "math_id": 75, "text": " V ." }, { "math_id": 76, "text": " m \\geq 0 " }, { "math_id": 77, "text": " m ," }, { "math_id": 78, "text": " m, m - 2, \\ldots, -(m - 2), -m ," }, { "math_id": 79, "text": "C" }, { "math_id": 80, "text": "C = -\\left(u_1^2 + u_2^2 + u_3^2\\right)" }, { "math_id": 81, "text": " C " }, { "math_id": 82, "text": " u_i ." }, { "math_id": 83, "text": "c_m" }, { "math_id": 84, "text": " m ." }, { "math_id": 85, "text": "\\{ H, X, Y \\} " }, { "math_id": 86, "text": "C = (X + Y)^2 - (-X + Y)^2 + H^2 ," }, { "math_id": 87, "text": "C = 4YX + H^2 + 2H ." }, { "math_id": 88, "text": " X ;" }, { "math_id": 89, "text": "c_m = m^2 + 2m = m(m + 2) ." }, { "math_id": 90, "text": " C' = \\frac{1}{4}C ." }, { "math_id": 91, "text": " \\ell = \\frac{1}{2}m ," }, { "math_id": 92, "text": " d_\\ell " }, { "math_id": 93, "text": " C' " }, { "math_id": 94, "text": " d_\\ell = \\frac{1}{4}(2\\ell)(2\\ell + 2) = \\ell (\\ell + 1) ." }, { "math_id": 95, "text": "V_m" }, { "math_id": 96, "text": "p" }, { "math_id": 97, "text": "[U \\cdot p](z) = p\\left(U^{-1}z\\right),\\quad z\\in\\mathbb C^2, U\\in\\mathrm{SU}(2)" }, { "math_id": 98, "text": "\\Pi: G \\rightarrow \\operatorname{GL}(V)" }, { "math_id": 99, "text": "\\Chi: G \\rightarrow \\mathbb{C}" }, { "math_id": 100, "text": "\\Chi(g) = \\operatorname{trace}(\\Pi(g))" }, { "math_id": 101, "text": "T" }, { "math_id": 102, "text": "m, m - 2, \\ldots, -(m - 2), -m" }, { "math_id": 103, "text": "\\Chi\\left(\\begin{pmatrix}\n e^{i\\theta} & 0\\\\\n 0 & e^{-i\\theta}\n\\end{pmatrix}\\right) = e^{im\\theta} + e^{i(m-2)\\theta} + \\cdots + e^{-i(m-2)\\theta} + e^{-im\\theta}." }, { "math_id": 104, "text": "\\Chi\\left(\\begin{pmatrix}\n e^{i\\theta} & 0\\\\\n 0 & e^{-i\\theta}\n\\end{pmatrix}\\right) = \\frac{\\sin((m + 1)\\theta)}{\\sin(\\theta)}." }, { "math_id": 105, "text": "m = 1" }, { "math_id": 106, "text": "l = 1/2" }, { "math_id": 107, "text": "m = 2" }, { "math_id": 108, "text": "l = 1" }, { "math_id": 109, "text": "m = 3" }, { "math_id": 110, "text": "l = 3/2" } ]
https://en.wikipedia.org/wiki?curid=1378619
1378699
Covering group
Concept in topological group theory In mathematics, a covering group of a topological group "H" is a covering space "G" of "H" such that "G" is a topological group and the covering map "p" : "G" → "H" is a continuous group homomorphism. The map "p" is called the covering homomorphism. A frequently occurring case is a double covering group, a topological double cover in which "H" has index 2 in "G"; examples include the spin groups, pin groups, and metaplectic groups. Roughly explained, saying that for example the metaplectic group Mp2"n" is a "double cover" of the symplectic group Sp2"n" means that there are always two elements in the metaplectic group representing one element in the symplectic group. Properties. Let "G" be a covering group of "H". The kernel "K" of the covering homomorphism is just the fiber over the identity in "H" and is a discrete normal subgroup of "G". The kernel "K" is closed in "G" if and only if "G" is Hausdorff (and if and only if "H" is Hausdorff). Going in the other direction, if "G" is any topological group and "K" is a discrete normal subgroup of "G" then the quotient map "p" : "G" → "G" / "K" is a covering homomorphism. If "G" is connected then "K", being a discrete normal subgroup, necessarily lies in the center of "G" and is therefore abelian. In this case, the center of "H" = "G" / "K" is given by formula_0 As with all covering spaces, the fundamental group of "G" injects into the fundamental group of "H". Since the fundamental group of a topological group is always abelian, every covering group is a normal covering space. In particular, if "G" is path-connected then the quotient group "π"1("H") / "π"1("G") is isomorphic to "K". The group "K" acts simply transitively on the fibers (which are just left cosets) by right multiplication. The group "G" is then a principal "K"-bundle over "H". If "G" is a covering group of "H" then the groups "G" and "H" are locally isomorphic. Moreover, given any two connected locally isomorphic groups "H"1 and "H"2, there exists a topological group "G" with discrete normal subgroups "K"1 and "K"2 such that "H"1 is isomorphic to "G" / "K"1 and "H"2 is isomorphic to "G" / "K"2. Group structure on a covering space. Let "H" be a topological group and let "G" be a covering space of "H". If "G" and "H" are both path-connected and locally path-connected, then for any choice of element "e"* in the fiber over "e" ∈ "H", there exists a unique topological group structure on "G", with "e"* as the identity, for which the covering map "p" : "G" → "H" is a homomorphism. The construction is as follows. Let "a" and "b" be elements of "G" and let "f" and "g" be paths in "G" starting at "e"* and terminating at "a" and "b" respectively. Define a path "h" : "I" → "H" by "h"("t") = "p"("f"("t"))"p"("g"("t")). By the path-lifting property of covering spaces there is a unique lift of "h" to "G" with initial point "e"*. The product "ab" is defined as the endpoint of this path. By construction we have "p"("ab") = "p"("a")"p"("b"). One must show that this definition is independent of the choice of paths "f" and "g", and also that the group operations are continuous. Alternatively, the group law on "G" can be constructed by lifting the group law "H" × "H" → "H" to "G", using the lifting property of the covering map "G" × "G" → "H" × "H". The non-connected case is interesting and is studied in the papers by Taylor and by Brown-Mucuk cited below. 
Essentially there is an obstruction to the existence of a universal cover that is also a topological group such that the covering map is a morphism: this obstruction lies in the third cohomology group of the group of components of "G" with coefficients in the fundamental group of "G" at the identity. Universal covering group. If "H" is a path-connected, locally path-connected, and semilocally simply connected group then it has a universal cover. By the previous construction the universal cover can be made into a topological group with the covering map a continuous homomorphism. This group is called the universal covering group of "H". There is also a more direct construction, which we give below. Let "PH" be the path group of "H". That is, "PH" is the space of paths in "H" based at the identity together with the compact-open topology. The product of paths is given by pointwise multiplication, i.e. ("fg")("t") = "f"("t")"g"("t"). This gives "PH" the structure of a topological group. There is a natural group homomorphism "PH" → "H" that sends each path to its endpoint. The universal cover of "H" is given as the quotient of "PH" by the normal subgroup of null-homotopic loops. The projection "PH" → "H" descends to the quotient giving the covering map. One can show that the universal cover is simply connected and the kernel is just the fundamental group of "H". That is, we have a short exact sequence formula_1 where ~"H" is the universal cover of "H". Concretely, the universal covering group of "H" is the space of homotopy classes of paths in "H" with pointwise multiplication of paths. The covering map sends each path class to its endpoint. Lattice of covering groups. As the above suggest, if a group has a universal covering group (if it is path-connected, locally path-connected, and semilocally simply connected), with discrete center, then the set of all topological groups that are covered by the universal covering group form a lattice, corresponding to the lattice of subgroups of the center of the universal covering group: inclusion of subgroups corresponds to covering of quotient groups. The maximal element is the universal covering group ~"H", while the minimal element is the universal covering group mod its center, . This corresponds algebraically to the universal perfect central extension (called "covering group", by analogy) as the maximal element, and a group mod its center as minimal element. This is particularly important for Lie groups, as these groups are all the (connected) realizations of a particular Lie algebra. For many Lie groups the center is the group of scalar matrices, and thus the group mod its center is the projectivization of the Lie group. These covers are important in studying projective representations of Lie groups, and spin representations lead to the discovery of spin groups: a projective representation of a Lie group need not come from a linear representation of the group, but does come from a linear representation of some covering group, in particular the universal covering group. The finite analog led to the covering group or Schur cover, as discussed above. A key example arises from SL2(R), which has center {±1} and fundamental group Z. It is a double cover of the centerless projective special linear group PSL2(R), which is obtained by taking the quotient by the center. 
By Iwasawa decomposition, both groups are circle bundles over the complex upper half-plane, and their universal cover formula_2 is a real line bundle over the half-plane that forms one of Thurston's eight geometries. Since the half-plane is contractible, all bundle structures are trivial. The preimage of SL2(Z) in the universal cover is isomorphic to the braid group on three strands. Lie groups. The above definitions and constructions all apply to the special case of Lie groups. In particular, every covering of a manifold is a manifold, and the covering homomorphism becomes a smooth map. Likewise, given any discrete normal subgroup of a Lie group, the quotient group is a Lie group and the quotient map is a covering homomorphism. Two Lie groups are locally isomorphic if and only if their Lie algebras are isomorphic. This implies that a homomorphism "φ" : "G" → "H" of Lie groups is a covering homomorphism if and only if the induced map on Lie algebras formula_3 is an isomorphism. Since for every Lie algebra formula_4 there is a unique simply connected Lie group "G" with Lie algebra formula_4, it follows that the universal covering group of a connected Lie group "H" is the (unique) simply connected Lie group "G" having the same Lie algebra as "H".
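A concrete finite-dimensional instance of a double covering homomorphism, of the kind provided by the spin groups mentioned above, is the 2-to-1 map from SU(2) = Spin(3) onto SO(3). The Python sketch below is a numerical illustration, not taken from the article: it builds an SU(2) element, maps it to a rotation matrix via the adjoint action on the Pauli matrices, and checks that U and −U have the same image, so the kernel of the covering homomorphism is {±I}.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def su2_element(axis, angle):
    """U = cos(angle/2) I - i sin(angle/2) (n . sigma), an element of SU(2)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    K = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * K

def covering_map(U):
    """The 2-to-1 homomorphism SU(2) -> SO(3) via the adjoint action:
    R_ij = (1/2) tr(sigma_i U sigma_j U^dagger)."""
    R = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = 0.5 * np.trace(paulis[i] @ U @ paulis[j] @ U.conj().T).real
    return R

U = su2_element([1.0, 2.0, 3.0], 0.7)
R = covering_map(U)

print(np.allclose(R @ R.T, np.eye(3)))       # R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))     # det R = 1, so R lies in SO(3)
print(np.allclose(covering_map(-U), R))      # U and -U have the same image
```

Because the kernel {±I} is exactly the center of SU(2), this is also an instance of the lattice described above: SO(3) is the universal covering group SU(2) mod its center.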
[ { "math_id": 0, "text": "\\mathrm{Z}(H) \\cong \\mathrm{Z}(G)/K ." }, { "math_id": 1, "text": "1\\to \\pi_1(H) \\to \\tilde H \\to H \\to 1" }, { "math_id": 2, "text": "{\\mathrm{S}\\widetilde{\\mathrm{L}_2(}\\mathbf{R})}" }, { "math_id": 3, "text": "\\phi_* : \\mathfrak g \\to \\mathfrak h" }, { "math_id": 4, "text": "\\mathfrak g" } ]
https://en.wikipedia.org/wiki?curid=1378699
13790
Hash function
Mapping arbitrary data to fixed-size values A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable-length output. The values returned by a hash function are called "hash values", "hash codes", "hash digests", "digests", or simply "hashes". The values are usually used to index a fixed-size table called a "hash table". Use of a hash function to index a hash table is called "hashing" or "scatter-storage addressing". Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally- and storage-space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often-exponential storage requirements of direct access of state spaces of large or variable-length keys. Use of hash functions relies on statistical properties of key and function interaction: worst-case behavior is intolerably bad but rare, and average-case behavior can be nearly optimal (minimal collision).527 Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. The hash function differs from these concepts mainly in terms of data integrity. Hash tables may use non-cryptographic hash functions, while cryptographic hash functions are used in cybersecurity to secure sensitive data such as passwords. Overview. In a hash table, a hash function takes a key as an input, which is associated with a datum or record and used to identify it to the data storage and retrieval application. The keys may be fixed-length, like an integer, or variable-length, like a name. In some cases, the key is the datum itself. The output is a hash code used to index a hash table holding the data or records, or pointers to them. A hash function may be considered to perform three functions: A good hash function satisfies two basic properties: it should be very fast to compute, and it should minimize duplication of output values (collisions). Hash functions rely on generating favorable probability distributions for their effectiveness, reducing access time to nearly constant. High table loading factors, pathological key sets, and poorly designed hash functions can result in access times approaching linear in the number of items in the table. Hash functions can be designed to give the best worst-case performance, good performance under high table loading factors, and in special cases, perfect (collisionless) mapping of keys into hash codes. Implementation is based on parity-preserving bit operations (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a collision-resolution method that employs an auxiliary data structure like linked lists, or systematic probing of the table to find an empty slot. Hash tables. Hash functions are used in conjunction with hash tables to store and retrieve data items or data records. The hash function translates the key associated with each datum or record into a hash code, which is used to index the hash table. 
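The indexing step just described can be illustrated with a minimal Python sketch; it uses Python's built-in hash purely for demonstration and is not a production hash table. Collisions, discussed next, are handled here by chaining entries within each bucket.

```python
class ChainedHashTable:
    """Toy hash table: the hash code of a key, reduced modulo the table
    size, selects a bucket; colliding items are chained in a list."""

    def __init__(self, size=16):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)      # hash code -> table index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                           # key already present: replace
                bucket[i] = (key, value)
                return
        bucket.append((key, value))                # collision -> add to the chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable()
table.put("alice", 42)
table.put("bob", 7)
print(table.get("alice"))    # 42
```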
When an item is to be added to the table, the hash code may index an empty slot (also called a bucket), in which case the item is added to the table there. If the hash code indexes a full slot, then some kind of collision resolution is required: the new item may be omitted (not added to the table), or replace the old item, or be added to the table in some other location by a specified procedure. That procedure depends on the structure of the hash table. In "chained hashing", each slot is the head of a linked list or chain, and items that collide at the slot are added to the chain. Chains may be kept in random order and searched linearly, or in serial order, or as a self-ordering list by frequency to speed up access. In "open address hashing", the table is probed starting from the occupied slot in a specified manner, usually by linear probing, quadratic probing, or double hashing until an open slot is located or the entire table is probed (overflow). Searching for the item follows the same procedure until the item is located, an open slot is found, or the entire table has been searched (item not in table). Specialized uses. Hash functions are also used to build caches for large data sets stored in slow media. A cache is generally simpler than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two colliding items. Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. A special case of hashing is known as geometric hashing or the "grid method". In these applications, the set of all inputs is some sort of metric space, and the hashing function can be interpreted as a partition of that space into a grid of "cells". The table is often an array with two or more indices (called a "grid file", "grid index", "bucket grid", and similar names), and the hash function returns an index tuple. This principle is widely used in computer graphics, computational geometry, and many other disciplines, to solve many proximity problems in the plane or in three-dimensional space, such as finding closest pairs in a set of points, similar shapes in a list of shapes, similar images in an image database, and so on. Hash tables are also used to implement associative arrays and dynamic sets. Properties. Uniformity. A good hash function should map the expected inputs as evenly as possible over its output range. That is, every hash value in the output range should be generated with roughly the same probability. The reason for this last requirement is that the cost of hashing-based methods goes up sharply as the number of "collisions"—pairs of inputs that are mapped to the same hash value—increases. If some hash values are more likely to occur than others, then a larger fraction of the lookup operations will have to search through a larger set of colliding table entries. This criterion only requires the value to be "uniformly distributed", not "random" in any sense. A good randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but the converse need not be true. Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain only a hundred or so member names, out of the very large set of all possible names. 
In these cases, the uniformity criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set of all possible entries. In other words, if a typical set of "m" records is hashed to "n" table slots, then the probability of a bucket receiving many more than "m"/"n" records should be vanishingly small. In particular, if "m" &lt; "n", then very few buckets should have more than one or two records. A small number of collisions is virtually inevitable, even if "n" is much larger than "m"—see the birthday problem. In special cases when the keys are known in advance and the key set is static, a hash function can be found that achieves absolute (or collisionless) uniformity. Such a hash function is said to be "perfect". There is no algorithmic way of constructing such a function—searching for one is a factorial function of the number of keys to be mapped versus the number of table slots that they are mapped into. Finding a perfect hash function over more than a very small set of keys is usually computationally infeasible; the resulting function is likely to be more computationally complex than a standard hash function and provides only a marginal advantage over a function with good statistical properties that yields a minimum number of collisions. See universal hash function. Testing and measurement. When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared test. This test is a goodness-of-fit measure: it is the actual distribution of items in buckets versus the expected (or uniform) distribution of items. The formula is formula_0 where n is the number of keys, m is the number of buckets, and "b""j" is the number of items in bucket j. A ratio within one confidence interval (such as 0.95 to 1.05) is indicative that the hash function evaluated has an expected uniform distribution. Hash functions can have some technical properties that make it more likely that they will have a uniform distribution when applied. One is the strict avalanche criterion: whenever a single input bit is complemented, each of the output bits changes with a 50% probability. The reason for this property is that selected subsets of the keyspace may have low variability. For the output to be uniformly distributed, a low amount of variability, even one bit, should translate into a high amount of variability (i.e. distribution over the tablespace) in the output. Each bit should change with a probability of 50% because, if some bits are reluctant to change, then the keys become clustered around those values. If the bits want to change too readily, then the mapping is approaching a fixed XOR function of a single bit. Standard tests for this property have been described in the literature. The relevance of the criterion to a multiplicative hash function is assessed here. Efficiency. In data storage and retrieval applications, the use of a hash function is a trade-off between search time and data storage space. If search time were unbounded, then a very compact unordered linear list would be the best medium; if storage space were unbounded, then a randomly accessible structure indexable by the key-value would be very large and very sparse, but very fast. A hash function takes a finite amount of time to map a potentially large keyspace to a feasible amount of storage space searchable in a bounded amount of time regardless of the number of keys. 
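The bucket-count test from the Testing and measurement subsection above can be sketched as follows in Python. The ratio implemented here is the usual form of that test (actual versus expected bucket occupancy, tending to 1 for a uniform distribution); the use of Python's built-in hash and the toy keys are illustrative assumptions.

```python
def uniformity_ratio(bucket_counts, n):
    """Chi-squared-style ratio of actual vs. expected bucket occupancy.
    A value close to 1 indicates a roughly uniform distribution."""
    m = len(bucket_counts)
    actual = sum(b * (b + 1) / 2 for b in bucket_counts)
    expected = (n / (2 * m)) * (n + 2 * m - 1)
    return actual / expected

m = 64                                    # number of buckets
keys = [f"key-{i}" for i in range(10_000)]
counts = [0] * m
for key in keys:
    counts[hash(key) % m] += 1            # bucket each key by its hash code

print(round(uniformity_ratio(counts, len(keys)), 3))   # expect a value near 1.0
```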
In most applications, the hash function should be computable with minimum latency and secondarily in a minimum number of instructions. Computational complexity varies with the number of instructions required and latency of individual instructions, with the simplest being the bitwise methods (folding), followed by the multiplicative methods, and the most complex (slowest) are the division-based methods. Because collisions should be infrequent, and cause a marginal delay but are otherwise harmless, it is usually preferable to choose a faster hash function over one that needs more computation but saves a few collisions. Division-based implementations can be of particular concern because the division is microprogrammed on nearly all chip architectures. Division (modulo) by a constant can be inverted to become a multiplication by the word-size multiplicative-inverse of that constant. This can be done by the programmer, or by the compiler. Division can also be reduced directly into a series of shift-subtracts and shift-adds, though minimizing the number of such operations required is a daunting problem; the number of assembly instructions resulting may be more than a dozen and swamp the pipeline. If the architecture has hardware multiply functional units, then the multiply-by-inverse is likely a better approach. We can allow the table size "n" to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let "n" be significantly less than 2"b". Consider a pseudorandom number generator function "P"(key) that is uniform on the interval [0, 2"b" − 1]. A hash function uniform on the interval [0, "n" − 1] is "n" "P"(key) / 2"b". We can replace the division by a (possibly faster) right bit shift: "n P"(key) » "b". If keys are being hashed repeatedly, and the hash function is costly, then computing time can be saved by precomputing the hash codes and storing them with the keys. Matching hash codes almost certainly means that the keys are identical. This technique is used for the transposition table in game-playing programs, which stores a 64-bit hashed representation of the board position. Universality. A "universal hashing" scheme is a randomized algorithm that selects a hash function "h" among a family of such functions, in such a way that the probability of a collision of any two distinct keys is 1/"m", where "m" is the number of distinct hash values desired—independently of the two keys. Universal hashing ensures (in a probabilistic sense) that the hash function application will behave as well as if it were using a random function, for any distribution of the input data. It will, however, have more collisions than perfect hashing and may require more operations than a special-purpose hash function. Applicability. A hash function that allows only certain table sizes or strings only up to a certain length, or cannot accept a seed (i.e. allow double hashing) is less useful than one that does. A hash function is applicable in a variety of situations. Particularly within cryptography, notable applications include: Deterministic. A hash procedure must be deterministic—for a given input value, it must always generate the same hash value. In other words, it must be a function of the data to be hashed, in the mathematical sense of the term. This requirement excludes hash functions that depend on external variable parameters, such as pseudo-random number generators or the time of day. 
It also excludes functions that depend on the memory address of the object being hashed, because the address may change during execution (as may happen on systems that use certain methods of garbage collection), although sometimes rehashing of the item is possible. The determinism is in the context of the reuse of the function. For example, Python adds the feature that hash functions make use of a randomized seed that is generated once when the Python process starts in addition to the input to be hashed. The Python hash (SipHash) is still a valid hash function when used within a single run, but if the values are persisted (for example, written to disk), they can no longer be treated as valid hash values, since in the next run the random value might differ. Defined range. It is often desirable that the output of a hash function have fixed size (but see below). If, for example, the output is constrained to 32-bit integer values, then the hash values can be used to index into an array. Such hashing is commonly used to accelerate data searches. Producing fixed-length output from variable-length input can be accomplished by breaking the input data into chunks of specific size. Hash functions used for data searches use some arithmetic expression that iteratively processes chunks of the input (such as the characters in a string) to produce the hash value. Variable range. In many applications, the range of hash values may be different for each run of the program or may change along the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function which takes two parameters—the input data "z", and the number "n" of allowed hash values. A common solution is to compute a fixed hash function with a very large range (say, 0 to 232 − 1), divide the result by "n", and use the division's remainder. If "n" is itself a power of 2, this can be done by bit masking and bit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between 0 and "n" − 1, for any value of "n" that may occur in the application. Depending on the function, the remainder may be uniform only for certain values of "n", e.g. odd or prime numbers. Variable range with minimal movement (dynamic hash function). When the hash function is used to store values in a hash table that outlives the run of the program, and the hash table needs to be expanded or shrunk, the hash table is referred to as a dynamic hash table. A hash function that will relocate the minimum number of records when the table is resized is desirable. What is needed is a hash function "H"("z","n") (where "z" is the key being hashed and "n" is the number of allowed hash values) such that "H"("z","n" + 1) = "H"("z","n") with probability close to "n"/("n" + 1). Linear hashing and spiral hashing are examples of dynamic hash functions that execute in constant time but relax the property of uniformity to achieve the minimal movement property. Extendible hashing uses a dynamic hash function that requires space proportional to "n" to compute the hash function, and it becomes a function of the previous keys that have been inserted. Several algorithms that preserve the uniformity property but require time proportional to "n" to compute the value of "H"("z","n") have been invented. A hash function with minimal movement is especially useful in distributed hash tables. Data normalization. 
In some applications, the input data may contain features that are irrelevant for comparison purposes. For example, when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters. For such data, one must use a hash function that is compatible with the data equivalence criterion being used: that is, any two inputs that are considered equivalent must yield the same hash value. This can be accomplished by normalizing the input before hashing it, as by upper-casing all letters. Hashing integer data types. There are several common algorithms for hashing integers. The method giving the best distribution is data-dependent. One of the simplest and most common methods in practice is the modulo division method. Identity hash function. If the data to be hashed is small enough, then one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing this "identity" hash function is effectively zero. This hash function is perfect, as it maps each input to a distinct hash value. The meaning of "small enough" depends on the size of the type that is used as the hashed value. For example, in Java, the hash code is a 32-bit integer. Thus the 32-bit integer codice_0 and 32-bit floating-point codice_1 objects can simply use the value directly, whereas the 64-bit integer codice_2 and 64-bit floating-point codice_3 cannot. Other types of data can also use this hashing scheme. For example, when mapping character strings between upper and lower case, one can use the binary encoding of each character, interpreted as an integer, to index a table that gives the alternative form of that character ("A" for "a", "8" for "8", etc.). If each character is stored in 8 bits (as in extended ASCII or ISO Latin 1), the table has only 28 = 256 entries; in the case of Unicode characters, the table would have 17 × 216 = entries. The same technique can be used to map two-letter country codes like "us" or "za" to country names (262 = 676 table entries), 5-digit ZIP codes like 13083 to city names ( entries), etc. Invalid data values (such as the country code "xx" or the ZIP code 00000) may be left undefined in the table or mapped to some appropriate "null" value. Trivial hash function. If the keys are uniformly or sufficiently uniformly distributed over the key space, so that the key values are essentially random, then they may be considered to be already "hashed". In this case, any number of any bits in the key may be extracted and collated as an index into the hash table. For example, a simple hash function might mask off the m least significant bits and use the result as an index into a hash table of size 2"m". Folding. A folding hash code is produced by dividing the input into sections of m bits, where 2"m" is the table size, and using a parity-preserving bitwise operation such as ADD or XOR to combine the sections, followed by a mask or shifts to trim off any excess bits at the high or low end. For example, for a table size of 15 bits and a 64-bit key value of 0x0123456789ABCDEF, there are five sections consisting of 0x4DEF, 0x1357, 0x159E, 0x091A, and 0x0. Adding yields 0x7FFE, a 15-bit value. Mid-squares. A mid-squares hash code is produced by squaring the input and extracting an appropriate number of middle digits or bits. For example, if the input is and the hash table size , then squaring the key produces , so the hash code is taken as the middle 4 digits of the 17-digit number (ignoring the high digit) 8750. 
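The folding example just worked out, together with a mid-squares variant, can be written in a few lines of C. The choice of key and table size in main(), and the particular "middle" bits kept by the mid-squares function, are illustrative assumptions; the folding call, however, reproduces the 0x7FFE sum shown above.

#include <stdint.h>
#include <stdio.h>

/* Folding: add the m-bit sections of the key, then mask off any
   carry bits that spill past m bits. */
uint32_t fold_hash(uint64_t key, unsigned m) {
    uint64_t mask = ((uint64_t)1 << m) - 1, h = 0;
    while (key != 0) {
        h += key & mask;          /* next m-bit section, low end first */
        key >>= m;
    }
    return (uint32_t)(h & mask);
}

/* Mid-squares: square the key and keep m bits from around the middle
   of the 64-bit product (which m bits count as "middle" is an
   arbitrary choice here). */
uint32_t midsquare_hash(uint32_t key, unsigned m) {
    uint64_t sq = (uint64_t)key * key;
    return (uint32_t)((sq >> (32 - m / 2)) & (((uint64_t)1 << m) - 1));
}

int main(void) {
    /* Five 15-bit sections of this key add up to 0x7FFE, as in the text. */
    printf("0x%X\n", (unsigned)fold_hash(0x0123456789ABCDEFULL, 15));
    /* Illustrative key and 16-bit table size for the mid-squares variant. */
    printf("0x%X\n", (unsigned)midsquare_hash(123456789u, 16));
    return 0;
}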
The mid-squares method produces a reasonable hash code if there is not a lot of leading or trailing zeros in the key. This is a variant of multiplicative hashing, but not as good because an arbitrary key is not a good multiplier. Division hashing. A standard technique is to use a modulo function on the key, by selecting a divisor M which is a prime number close to the table size, so "h"("K") ≡ "K" (mod "M"). The table size is usually a power of 2. This gives a distribution from {0, "M" − 1}. This gives good results over a large number of key sets. A significant drawback of division hashing is that division is microprogrammed on most modern architectures (including x86) and can be 10 times slower than multiplication. A second drawback is that it will not break up clustered keys. For example, the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address. This technique works well in practice because many key sets are sufficiently random already, and the probability that a key set will be cyclical by a large prime number is small. Algebraic coding. Algebraic coding is a variant of the division method of hashing which uses division by a polynomial modulo 2 instead of an integer to map n bits to m bits. In this approach, "M" = 2"m", and we postulate an mth-degree polynomial "Z"("x") = "x""m" + ζ"m"−1"x""m"−1 + &amp;ctdot; + ζ0. A key "K" = ("k""n"−1&amp;mldr;"k"1"k"0)2 can be regarded as the polynomial "K"("x") = "k""n"−1"x""n"−1 + &amp;ctdot; + "k"1"x" + "k"0. The remainder using polynomial arithmetic modulo 2 is "K"("x") mod "Z"("x") = "h""m"−1"x""m"−1 + &amp;ctdot; "h"1"x" + "h"0. Then "h"("K") = ("h""m"−1&amp;mldr;"h"1"h"0)2. If "Z"("x") is constructed to have t or fewer non-zero coefficients, then keys which share fewer than t bits are guaranteed to not collide. Z is a function of k, t, and n (the last of which is a divisor of 2"k" − 1) and is constructed from the finite field GF(2"k"). Knuth gives an example: taking ("n","m","t") = (15,10,7) yields "Z"("x") = "x"10 + "x"8 + "x"5 + "x"4 + "x"2 + "x" + 1. The derivation is as follows: Let S be the smallest set of integers such that {1,2,&amp;mldr;,"t"} &amp;SubsetEqual; "S" and (2"j" mod "n") &amp;in; "S" ∀"j" &amp;in; "S". Define formula_1 where α &amp;in;"n" GF(2"k") and where the coefficients of "P"("x") are computed in this field. Then the degree of "P"("x") = |"S"|. Since α2"j" is a root of "P"("x") whenever α"j" is a root, it follows that the coefficients "pi" of "P"("x") satisfy "p" = "p"i, so they are all 0 or 1. If "R"("x") = "r""n"−1"x""n"−1 + &amp;ctdot; + "r"1"x" + "r"0 is any nonzero polynomial modulo 2 with at most t nonzero coefficients, then "R"("x") is not a multiple of "P"("x") modulo 2. If follows that the corresponding hash function will map keys with fewer than t bits in common to unique indices. The usual outcome is that either n will get large, or t will get large, or both, for the scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode implementation. Unique permutation hashing. Unique permutation hashing has a guaranteed best worst-case insertion time. Multiplicative hashing. Standard multiplicative hashing uses the formula "h""a"("K") = ⌊("aK" mod "W") / ("W"/"M")⌋, which produces a hash value in {0, &amp;mldr;, "M" − 1}. The value a is an appropriately chosen value that should be relatively prime to W; it should be large, and its binary representation a random mix of 1s and 0s. 
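As a brief aside before the multiplicative special case treated next, the division method described earlier in this section can be sketched as follows; the prime 997, close to a nominal table size of 1000, is purely illustrative.

#include <stdio.h>

/* Division (modulo) hashing with a prime divisor chosen close to the
   intended table size; 997 is just one such prime. */
#define PRIME_M 997u

unsigned div_hash(unsigned long key) {
    return (unsigned)(key % PRIME_M);
}

int main(void) {
    /* The clustered keys from the text land on distinct slots here,
       whereas modulo 1000 they would all collide. */
    printf("%u %u %u\n", div_hash(123000), div_hash(456000), div_hash(789000));
    return 0;
}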
An important practical special case occurs when "W" = 2"w" and "M" = 2"m" are powers of 2 and w is the machine word size. In this case, the formula above becomes "h""a"("K") = ⌊("aK" mod 2"w") / 2"w"−"m"⌋. This is special because arithmetic modulo 2"w" is done by default in low-level programming languages and integer division by a power of 2 is simply a right-shift, so, in C, for example, this function becomes unsigned hash(unsigned K) { return (a*K) >> (w-m); } and for fixed m and w this translates into a single integer multiplication and right-shift, making it one of the fastest hash functions to compute. Multiplicative hashing is susceptible to a "common mistake" that leads to poor diffusion—higher-value input bits do not affect lower-value output bits. A transmutation on the input which shifts the span of retained top bits down and XORs or ADDs them to the key before the multiplication step corrects for this. The resulting function looks like: unsigned hash(unsigned K) { K ^= K >> (w-m); return (a*K) >> (w-m); } Fibonacci hashing. Fibonacci hashing is a form of multiplicative hashing in which the multiplier is 2"w" / φ, where w is the machine word length and φ (phi) is the golden ratio (approximately 1.618). A property of this multiplier is that it uniformly distributes over the table space, blocks of consecutive keys with respect to any block of bits in the key. Consecutive keys within the high bits or low bits of the key (or some other field) are relatively common. The multipliers for various word lengths are: The multiplier should be odd, so the least significant bit of the output is invertible modulo 2"w". The last two values given above are rounded (up and down, respectively) by more than 1/2 of a least-significant bit to achieve this. Zobrist hashing. Tabulation hashing, more generally known as "Zobrist hashing" after Albert Zobrist, is a method for constructing universal families of hash functions by combining table lookup with XOR operations. This algorithm has proven to be very fast and of high quality for hashing purposes (especially hashing of integer-number keys). Zobrist hashing was originally introduced as a means of compactly representing chess positions in computer game-playing programs. A unique random number was assigned to represent each type of piece (six each for black and white) on each space of the board. Thus a table of 64×12 such numbers is initialized at the start of the program. The random numbers could be any length, but 64 bits was natural due to the 64 squares on the board. A position was transcribed by cycling through the pieces in a position, indexing the corresponding random numbers (vacant spaces were not included in the calculation) and XORing them together (the starting value could be 0 (the identity value for XOR) or a random seed). The resulting value was reduced by modulo, folding, or some other operation to produce a hash table index. The original Zobrist hash was stored in the table as the representation of the position. Later, the method was extended to hashing integers by representing each byte in each of 4 possible positions in the word by a unique 32-bit random number. Thus, a table of 256×4 random numbers is constructed. A 32-bit hashed integer is transcribed by successively indexing the table with the value of each byte of the plain text integer and XORing the loaded values together (again, the starting value can be the identity value or a random seed). 
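A sketch of the byte-table scheme just described, for 32-bit keys, is given below. The tables would normally be filled with high-quality random numbers once at start-up; the small seeded generator used here is only a stand-in for that step.

#include <stdint.h>
#include <stdio.h>

static uint32_t table[4][256];    /* one 256-entry table per byte position */

/* Fill the tables once, at start-up.  Any decent random-number source
   could be used; this xorshift step is just a stand-in. */
static void init_tables(uint32_t seed) {
    for (int pos = 0; pos < 4; pos++)
        for (int v = 0; v < 256; v++) {
            seed ^= seed << 13;
            seed ^= seed >> 17;
            seed ^= seed << 5;
            table[pos][v] = seed;
        }
}

/* Tabulation (Zobrist-style) hashing of a 32-bit key: look up each
   byte in its own table and XOR the four values together, starting
   from 0, the identity value for XOR. */
static uint32_t tab_hash(uint32_t key) {
    uint32_t h = 0;
    for (int pos = 0; pos < 4; pos++)
        h ^= table[pos][(key >> (8 * pos)) & 0xFF];
    return h;
}

int main(void) {
    init_tables(0x12345678u);     /* arbitrary nonzero seed */
    printf("%08X %08X\n", (unsigned)tab_hash(42), (unsigned)tab_hash(43));
    return 0;
}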
The natural extension to 64-bit integers is by use of a table of 28×8 64-bit random numbers. This kind of function has some nice theoretical properties, one of which is called "3-tuple independence", meaning that every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values. Customized hash function. A hash function can be designed to exploit existing entropy in the keys. If the keys have leading or trailing zeros, or particular fields that are unused, always zero or some other constant, or generally vary little, then masking out only the volatile bits and hashing on those will provide a better and possibly faster hash function. Selected divisors or multipliers in the division and multiplicative schemes may make more uniform hash functions if the keys are cyclic or have other redundancies. Hashing variable-length data. When the data values are long (or variable-length) character strings—such as personal names, web page addresses, or mail messages—their distribution is usually very uneven, with complicated dependencies. For example, text in any natural language has highly non-uniform distributions of characters, and character pairs, characteristic of the language. For such data, it is prudent to use a hash function that depends on all characters of the string—and depends on each character in a different way. Middle and ends. Simplistic hash functions may add the first and last "n" characters of a string along with the length, or form a word-size hash from the middle 4 characters of a string. This saves iterating over the (potentially long) string, but hash functions that do not hash on all characters of a string can readily become linear due to redundancies, clustering, or other pathologies in the key set. Such strategies may be effective as a custom hash function if the structure of the keys is such that either the middle, ends, or other fields are zero or some other invariant constant that does not differentiate the keys; then the invariant parts of the keys can be ignored. Character folding. The paradigmatic example of folding by characters is to add up the integer values of all the characters in the string. A better idea is to multiply the hash total by a constant, typically a sizable prime number, before adding in the next character, ignoring overflow. Using exclusive-or instead of addition is also a plausible alternative. The final operation would be a modulo, mask, or other function to reduce the word value to an index the size of the table. The weakness of this procedure is that information may cluster in the upper or lower bits of the bytes; this clustering will remain in the hashed result and cause more collisions than a proper randomizing hash. ASCII byte codes, for example, have an upper bit of 0, and printable strings do not use the first 32 byte codes, so the information (95 bytecodes) is clustered in the remaining bits in an unobvious manner. The classic approach, dubbed the PJW hash based on the work of Peter J. Weinberger at Bell Labs in the 1970s, was originally designed for hashing identifiers into compiler symbol tables as given in the . This hash function offsets the bytes 4 bits before adding them together. When the quantity wraps, the high 4 bits are shifted out and if non-zero, xored back into the low byte of the cumulative quantity. The result is a word-size hash code to which a modulo or other reducing operation can be applied to produce the final hash index. 
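The two character-folding variants just described translate into a few lines of C. The multiplier 31 in the first function is merely one common choice of constant, and the second function follows the PJW description given above for 32-bit words; neither is tied to any particular library.

#include <stdio.h>

/* Character folding with a multiplier: the running total is multiplied
   by a constant before each character is added; overflow is ignored. */
unsigned fold_chars(const char *s) {
    unsigned h = 0;
    for (; *s; s++)
        h = h * 31u + (unsigned char)*s;
    return h;
}

/* PJW-style hash for 32-bit words: each byte is shifted in 4 bits up;
   when the high 4 bits become occupied they are folded back into the
   low byte and then cleared. */
unsigned fold_pjw(const char *s) {
    unsigned h = 0, g;
    for (; *s; s++) {
        h = (h << 4) + (unsigned char)*s;
        if ((g = h & 0xF0000000u) != 0) {
            h ^= g >> 24;
            h &= ~g;
        }
    }
    return h;
}

int main(void) {
    printf("%08X %08X\n", fold_chars("hello"), fold_pjw("hello"));
    return 0;
}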
Today, especially with the advent of 64-bit word sizes, much more efficient variable-length string hashing by word chunks is available. Word length folding. Modern microprocessors will allow for much faster processing if 8-bit character strings are not hashed by processing one character at a time, but by interpreting the string as an array of 32-bit or 64-bit integers and hashing/accumulating these "wide word" integer values by means of arithmetic operations (e.g. multiplication by constant and bit-shifting). The final word, which may have unoccupied byte positions, is filled with zeros or a specified randomizing value before being folded into the hash. The accumulated hash code is reduced by a final modulo or other operation to yield an index into the table. Radix conversion hashing. Analogous to the way an ASCII or EBCDIC character string representing a decimal number is converted to a numeric quantity for computing, a variable-length string can be converted as "x""k"−1a"k"−1 + "x""k"−2a"k"−2 + &amp;ctdot; + "x"1"a" + "x"0. This is simply a polynomial in a radix "a" &gt; 1 that takes the components ("x"0,"x"1...,"x""k"−1) as the characters of the input string of length "k". It can be used directly as the hash code, or a hash function applied to it to map the potentially large value to the hash table size. The value of "a" is usually a prime number large enough to hold the number of different characters in the character set of potential keys. Radix conversion hashing of strings minimizes the number of collisions. Available data sizes may restrict the maximum length of string that can be hashed with this method. For example, a 128-bit word will hash only a 26-character alphabetic string (ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using radix 97 and a 64-bit word. However, alphabetic keys are usually of modest length, because keys must be stored in the hash table. Numeric character strings are usually not a problem; 64 bits can count up to 1019, or 19 decimal digits with radix 10. Rolling hash. In some applications, such as substring search, one can compute a hash function "h" for every "k"-character substring of a given "n"-character string by advancing a window of width "k" characters along the string, where "k" is a fixed integer, and "n" &gt; "k". The straightforward solution, which is to extract such a substring at every character position in the text and compute "h" separately, requires a number of operations proportional to "k"·"n". However, with the proper choice of "h", one can use the technique of rolling hash to compute all those hashes with an effort proportional to "mk" + "n" where "m" is the number of occurrences of the substring.[] The most familiar algorithm of this type is Rabin-Karp with best and average case performance "O"("n"+"mk") and worst case "O"("n"·"k") (in all fairness, the worst case here is gravely pathological: both the text string and substring are composed of a repeated single character, such as "t"="AAAAAAAAAAA", and "s"="AAA"). The hash function used for the algorithm is usually the Rabin fingerprint, designed to avoid collisions in 8-bit character strings, but other suitable hash functions are also used. Analysis. Worst case results for a hash function can be assessed two ways: theoretical and practical. The theoretical worst case is the probability that all keys map to a single slot. The practical worst case is the expected longest probe sequence (hash function + collision resolution method). 
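Returning to the rolling hash described above: the window hash can be updated in constant time as the window slides. The sketch below is Rabin–Karp in spirit but, for brevity, replaces the Rabin fingerprint with an ordinary polynomial hash modulo 2^64 and an arbitrary radix, and confirms candidate matches with a direct comparison.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Radix larger than the byte alphabet; the value is arbitrary. */
#define RADIX 257ull

int main(void) {
    const char *text = "abracadabra";
    const char *pat  = "cad";
    size_t n = strlen(text), k = strlen(pat);

    /* RADIX^(k-1), used to remove the outgoing character. */
    uint64_t high = 1;
    for (size_t i = 1; i < k; i++) high *= RADIX;

    /* Hash of the pattern and of the first window. */
    uint64_t hp = 0, hw = 0;
    for (size_t i = 0; i < k; i++) {
        hp = hp * RADIX + (unsigned char)pat[i];
        hw = hw * RADIX + (unsigned char)text[i];
    }

    for (size_t i = 0; i + k <= n; i++) {
        /* Equal hashes are only a hint; confirm with a real compare. */
        if (hw == hp && memcmp(text + i, pat, k) == 0)
            printf("match at %zu\n", i);
        if (i + k < n)            /* slide the window one character */
            hw = (hw - (unsigned char)text[i] * high) * RADIX
                 + (unsigned char)text[i + k];
    }
    return 0;
}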
This analysis considers uniform hashing, that is, any key will map to any particular slot with probability 1/"m", a characteristic of universal hash functions. While Knuth worries about adversarial attack on real time systems, Gonnet has shown that the probability of such a case is "ridiculously small". His representation was that the probability of "k" of "n" keys mapping to a single slot is α"k" / ("e"α "k"!), where "α" is the load factor, "n"/"m". History. The term "hash" offers a natural analogy with its non-technical meaning (to chop up or make a mess out of something), given how hash functions scramble their input data to derive their output.514 In his research for the precise origin of the term, Donald Knuth notes that, while Hans Peter Luhn of IBM appears to have been the first to use the concept of a hash function in a memo dated January 1953, the term itself did not appear in published literature until the late 1960s, in Herbert Hellerman's "Digital Computer System Principles", even though it was already widespread jargon by then. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\sum_{j=0}^{m-1}(b_j)(b_j+1)/2}{(n/2m)(n+2m-1)}," }, { "math_id": 1, "text": "P(x)=\\prod_{j\\in S}(x-\\alpha^j)" } ]
https://en.wikipedia.org/wiki?curid=13790
13790456
Laman graph
In graph theory, the Laman graphs are a family of sparse graphs describing the minimally rigid systems of rods and joints in the plane. Formally, a Laman graph is a graph on "n" vertices such that, for all "k", every "k"-vertex subgraph has at most 2"k" − 3 edges, and such that the whole graph has exactly 2"n" − 3 edges. Laman graphs are named after Gerard Laman, of the University of Amsterdam, who in 1970 used them to characterize rigid planar structures. However, this characterization, the Geiringer–Laman theorem, had already been discovered in 1927 by Hilda Geiringer. Rigidity. Laman graphs arise in rigidity theory: if one places the vertices of a Laman graph in the Euclidean plane, in general position, there will in general be no simultaneous continuous motion of all the points, other than Euclidean congruences, that preserves the lengths of all the graph edges. A graph is rigid in this sense if and only if it has a Laman subgraph that spans all of its vertices. Thus, the Laman graphs are exactly the minimally rigid graphs, and they form the bases of the two-dimensional rigidity matroids. If "n" points in the plane are given, then there are 2"n" degrees of freedom in their placement (each point has two independent coordinates), but a rigid graph has only three degrees of freedom (the position of a single one of its vertices and the rotation of the remaining graph around that vertex). Intuitively, adding an edge of fixed length to a graph reduces its number of degrees of freedom by one, so the 2"n" − 3 edges in a Laman graph reduce the 2"n" degrees of freedom of the initial point placement to the three degrees of freedom of a rigid graph. However, not every graph with 2"n" − 3 edges is rigid; the condition in the definition of a Laman graph that no subgraph can have too many edges ensures that each edge contributes to reducing the overall number of degrees of freedom, and is not wasted within a subgraph that is already itself rigid due to its other edges. Planarity. A pointed pseudotriangulation is a planar straight-line drawing of a graph, with the properties that the outer face is convex, that every bounded face is a pseudotriangle, a polygon with only three convex vertices, and that the edges incident to every vertex span an angle of less than 180 degrees. The graphs that can be drawn as pointed pseudotriangulations are exactly the planar Laman graphs. However, Laman graphs have planar embeddings that are not pseudotriangulations, and there are Laman graphs that are not planar, such as the utility graph "K"3,3. Sparsity. and define a graph as being formula_0-sparse if every nonempty subgraph with formula_1 vertices has at most formula_2 edges, and formula_0-tight if it is formula_0-sparse and has exactly formula_2 edges. Thus, in their notation, the Laman graphs are exactly the (2,3)-tight graphs, and the subgraphs of the Laman graphs are exactly the (2,3)-sparse graphs. The same notation can be used to describe other important families of sparse graphs, including trees, pseudoforests, and graphs of bounded arboricity. Based on this characterization, it is possible to recognize n-vertex Laman graphs in time "O"("n"2), by simulating a "pebble game" that begins with a graph with n vertices and no edges, with two pebbles placed on each vertex, and performs a sequence of the following two kinds of steps to create all of the edges of the graph: If these operations can be used to construct an orientation of the given graph, then it is necessarily (2,3)-sparse, and vice versa. 
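The counting condition can be checked directly, if inefficiently, by enumerating vertex subsets. The C sketch below does this for the utility graph K3,3 mentioned above; the exponential enumeration is only an illustration of the definition, not a substitute for the pebble game.

#include <stdio.h>

#define N 6
#define E 9

/* Edge list of K3,3 (vertices 0,1,2 on one side, 3,4,5 on the other). */
static const int edge[E][2] = {
    {0,3},{0,4},{0,5},{1,3},{1,4},{1,5},{2,3},{2,4},{2,5}
};

int main(void) {
    if (E != 2 * N - 3) { printf("not Laman\n"); return 0; }
    for (unsigned s = 1; s < (1u << N); s++) {   /* every vertex subset */
        int k = 0, m = 0;
        for (int v = 0; v < N; v++)
            if (s & (1u << v)) k++;
        if (k < 2) continue;
        for (int e = 0; e < E; e++)
            if ((s & (1u << edge[e][0])) && (s & (1u << edge[e][1])))
                m++;                             /* edge inside the subset */
        if (m > 2 * k - 3) { printf("not Laman\n"); return 0; }
    }
    printf("Laman\n");
    return 0;
}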
However, faster algorithms are possible, running in time formula_3, based on testing whether doubling one edge of the given graph results in a multigraph that is (2,2)-tight (equivalently, whether it can be decomposed into two edge-disjoint spanning trees) and then using this decomposition to check whether the given graph is a Laman graph. Network flow techniques can be used to test whether a planar graph is a Laman graph more quickly, in time formula_4. Henneberg construction. Before Laman's and Geiringer's work, Lebrecht Henneberg characterized the two-dimensional minimally rigid graphs (that is, the Laman graphs) in a different way. Henneberg showed that the minimally rigid graphs on two or more vertices are exactly the graphs that can be obtained, starting from a single edge, by a sequence of operations of the following two types: A sequence of these operations that forms a given graph is known as a Henneberg construction of the graph. For instance, the complete bipartite graph "K"3,3 may be formed using the first operation to form a triangle and then applying the second operation to subdivide each edge of the triangle and connect each subdivision point with the opposite triangle vertex. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(k,\\ell)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "kn-\\ell" }, { "math_id": 3, "text": "O(n^{3/2}\\sqrt{\\log n})" }, { "math_id": 4, "text": "O(n\\log^3 n)" } ]
https://en.wikipedia.org/wiki?curid=13790456
13792019
Stokes's law of sound attenuation
Formula for sound intensity loss in a Newtonian fluid In acoustics, Stokes's law of sound attenuation is a formula for the attenuation of sound in a Newtonian fluid, such as water or air, due to the fluid's viscosity. It states that the amplitude of a plane wave decreases exponentially with distance traveled, at a rate α given by formula_0 where η is the dynamic viscosity coefficient of the fluid, ω is the sound's angular frequency, ρ is the fluid density, and V is the speed of sound in the medium. The law and its derivation were published in 1845 by the Anglo-Irish physicist G. G. Stokes, who also developed Stokes's law for the friction force in fluid motion. A generalisation of Stokes attenuation taking into account the effect of thermal conductivity was proposed by the German physicist Gustav Kirchhoff in 1868. Sound attenuation in fluids is also accompanied by acoustic dispersion, meaning that the different frequencies are propagating at different sound speeds. Interpretation. Stokes's law of sound attenuation applies to sound propagation in an isotropic and homogeneous Newtonian medium. Consider a plane sinusoidal pressure wave that has amplitude A0 at some point. After traveling a distance d from that point, its amplitude "A"("d") will be formula_1 The parameter α is a kind of attenuation constant, dimensionally the reciprocal of length. In the International System of Units (SI), it is expressed in neper per meter or simply reciprocal of meter (m–1). That is, if α = 1 m–1, the wave's amplitude decreases by a factor of 1/"e" for each meter traveled. Importance of volume viscosity. The law is amended to include a contribution by the volume viscosity ζ: formula_2 The volume viscosity coefficient is relevant when the fluid's compressibility cannot be ignored, such as in the case of ultrasound in water. The volume viscosity of water at 15 C is 3.09 centipoise. Modification for very high frequencies. Stokes's law is actually an asymptotic approximation for low frequencies of a more general formula involving relaxation time τ: formula_3 The relaxation time for water is about per radian, corresponding to an angular frequency ω of radians (500 gigaradians) per second and therefore a frequency of about . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\alpha = \\frac{2 \\eta\\omega^2}{3\\rho V^3}" }, { "math_id": 1, "text": "A(d) = A_0e^{-\\alpha d}" }, { "math_id": 2, "text": "\n\\alpha = \\frac{\\left( 2\\eta + \\frac{3}{2}\\zeta \\right)\\omega^2}{3\\rho V^3} = \\frac{\\left( \\frac{4}{3}\\eta + \\zeta \\right)\\omega^2}{2\\rho V^3}\n" }, { "math_id": 3, "text": "\\begin{align}\n2\\left(\\frac{\\alpha V}{\\omega}\\right)^2 &= \\frac{1}{\\sqrt{1+\\left(\\omega\\tau\\right)^2}}-\\frac{1}{1+\\left(\\omega\\tau\\right)^2}\\\\\n\\alpha &= \\frac{\\omega}{V}\\sqrt{\\frac{\\sqrt{1+\\left(\\omega\\tau\\right)^2}-1}{2\\left(1+\\left(\\omega\\tau\\right)^2\\right)}}\\\\\n\\tau &= \\frac{\\frac{4\\eta}{3} + \\zeta}{\\rho V^2} = \\frac{4\\eta+3\\zeta}{3\\rho V^2}\\\\\n\\alpha &= \\omega \\sqrt{\\frac{3}{2}}\\left(\\frac{\n\\rho\\left(\\sqrt{\\left(\\omega\\left(4\\eta+3\\zeta\\right)\\right)^2+\\left(3\\rho V^2\\right)^2}-3\\rho V^2\\right)}{\n\\left(\\omega\\left(4\\eta+3\\zeta\\right)\\right)^2+\\left(3\\rho V^2\\right)^2}\n\\right)^\\frac{1}{2}\\\\\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=13792019
1379266
Ring-imaging Cherenkov detector
The ring-imaging Cherenkov, or RICH, detector is a device for identifying the type of an electrically charged subatomic particle of known momentum, that traverses a transparent refractive medium, by measurement of the presence and characteristics of the Cherenkov radiation emitted during that traversal. RICH detectors were first developed in the 1980s and are used in high energy elementary particle-, nuclear- and astro-physics experiments. This article outlines the origins and principles of the RICH detector, with brief examples of its different forms in modern physics experiments. Ring-imaging Cherenkov (RICH) detector. Origins. The ring-imaging detection technique was first proposed by Jacques Séguinot and Tom Ypsilantis, working at CERN in 1977. Their research and development, of high precision single-photon detectors and related optics, lay the foundations for the design development and construction of the first large-scale Particle Physics RICH detectors, at CERN's OMEGA facility and LEP (Large Electron–Positron Collider) DELPHI experiment. Principles. A ring-imaging Cherenkov (RICH) detector allows the identification of electrically charged subatomic particle types through the detection of the Cherenkov radiation emitted (as photons) by the particle in traversing a medium with refractive index formula_0 &gt; 1. The identification is achieved by measurement of the angle of emission, formula_1, of the Cherenkov radiation, which is related to the charged particle's velocity formula_2 by formula_3 where formula_4 is the speed of light. Knowledge of the particle's momentum and direction (normally available from an associated momentum-spectrometer) allows a predicted formula_2 for each hypothesis of the particles type; using the known formula_0 of the RICH radiator gives a corresponding prediction of formula_1 that can be compared to the formula_1 of the detected Cherenkov photons, thus indicating the particle's identity (usually as a probability per particle type). A typical (simulated) distribution of formula_1 vs momentum of the source particle, for single Cherenkov photons, produced in a gaseous radiator (n~1.0005, angular resolution~0.6mrad) is shown in the following Fig.1: The different particle types follow distinct contours of constant mass, smeared by the effective angular resolution of the RICH detector; at higher momenta each particle emits a number of Cherenkov photons which, taken together, give a more precise measure of the average formula_1 than does a single photon (see Fig.3 below), allowing effective particle separation to extend beyond 100 GeV in this example. This particle identification is essential for the detailed understanding of the intrinsic physics of the structure and interactions of elementary particles. The essence of the ring-imaging method is to devise an optical system with single-photon detectors, that can isolate the Cherenkov photons that each particle emits, to form a single "ring image" from which an accurate formula_1 can be determined. A polar plot of the Cherenkov angles of photons associated with a 22 GeV/c particle in a radiator with formula_5=1.0005 is shown in Fig.2; both pion and kaon are illustrated; protons are below Cherenkov threshold, formula_6, producing no radiation in this case (which would also be a very clear signal of particle type = proton, since fluctuations in the number of photons follow Poisson statistics about the expected mean, so that the probability of e.g. 
a 22 GeV/c kaon producing zero photons when ~12 were expected is very small; "e−12" or 1 in 162755). The number of detected photons shown for each particle type is, for illustration purposes, the average for that type in a RICH having formula_7 ~ 25 (see below). The distribution in azimuth is random between 0 and 360 degrees; the distribution in formula_1 is spread with RMS angular resolution ~ 0.6 milliradians. Note that, because the points of emission of the photons can be at any place on the (normally straight line) trajectory of the particle through the radiator, the emerging photons occupy a light-cone in space. In a RICH detector the photons within this light-cone pass through an optical system and impinge upon a position sensitive photon detector. With a suitably focusing optical system this allows reconstruction of a ring, similar to that above in Fig.2, the radius of which gives a measure of the Cherenkov emission angle formula_1. The resolving power of this method is illustrated by comparing the Cherenkov angle "per photon", see the first plot, Fig.1 above, with the mean Cherenkov angle "per particle" (averaged over all photons emitted by that particle) obtained by ring-imaging, shown in Fig.3; the greatly enhanced separation between particle types is very clear. Optical Precision and Response. This ability of a RICH system to successfully resolve different hypotheses for the particle type depends on two principal factors, which in turn depend upon the listed sub-factors; formula_8 is a measure of the intrinsic optical precision of the RICH detector. formula_7 is a measure of the optical response of the RICH; it can be thought of as the limiting case of the number of actually detected photons produced by a particle whose velocity approaches that of light, averaged over all relevant particle trajectories in the RICH detector. The average number of Cherenkov photons detected, for a slower particle, of charge formula_9 (normally ±1), emitting photons at angle formula_1 is then formula_10 and the precision with which the mean Cherenkov angle can be determined with these photons is approximately formula_11 to which the angular precision of the emitting particle's measured direction must be added in quadrature, if it is not negligible compared to formula_12. Particle Identification. Given the known momentum of the emitting particle and the refractive index of the radiator, the expected Cherenkov angle for each particle type can be predicted, and its difference from the observed mean Cherenkov angle calculated. Dividing this difference by formula_13 then gives a measure of the 'number of sigma' deviation of the hypothesis from the observation, which can be used in computing a probability or likelihood for each possible hypothesis. The following Fig.4 shows the 'number of sigma' deviation of the kaon hypothesis from a true pion ring image ("π not k") and of the pion hypothesis from a true kaon ring image ("k not π"), as a function of momentum, for a RICH with formula_5 = 1.0005, formula_7 = 25, formula_8 = 0.64 milliradians; Also shown are the average number of detected photons from pions("Ngπ") or from kaons("Ngk"). One can see that the RICH's ability to separate the two particle types exceeds 4-sigma everywhere between threshold and 80 GeV/c, finally dropping below 3-sigma at about 100 GeV. It is important to note that this result is for an 'ideal' detector, with homogeneous acceptance and efficiency, normal error distributions and zero background. 
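Under the same idealised assumptions, the relations collected above can be strung together into a small numerical example. The refractive index, photon-yield parameter and single-photon angular resolution are the values used for the figures above, while the pion and kaon masses are approximate values supplied here for illustration.

#include <math.h>
#include <stdio.h>

/* Radiator and detector parameters from the example in the text. */
#define N_INDEX 1.0005      /* refractive index                         */
#define N_C     25.0        /* photons detected as beta approaches 1    */
#define SIGMA   0.0006      /* single-photon angular resolution, rad    */

/* Predicted Cherenkov angle (rad) for mass m (GeV/c^2) at momentum
   p (GeV/c); returns -1 below threshold. */
double cherenkov_angle(double m, double p) {
    double beta = p / sqrt(p * p + m * m);
    if (N_INDEX * beta <= 1.0) return -1.0;      /* no radiation */
    return acos(1.0 / (N_INDEX * beta));
}

/* Per-ring angular precision for a unit-charge particle emitting at
   angle theta, using N = N_C sin^2(theta) / (1 - 1/n^2) photons. */
double ring_sigma(double theta) {
    double nphot = N_C * sin(theta) * sin(theta)
                   / (1.0 - 1.0 / (N_INDEX * N_INDEX));
    return SIGMA / sqrt(nphot);
}

int main(void) {
    /* Approximate charged pion and kaon masses, GeV/c^2 (assumed here). */
    double m_pi = 0.1396, m_k = 0.4937;
    for (double p = 10.0; p <= 100.0; p += 10.0) {
        double t_pi = cherenkov_angle(m_pi, p);
        double t_k  = cherenkov_angle(m_k, p);
        if (t_pi < 0 || t_k < 0) { printf("p=%5.1f below threshold\n", p); continue; }
        printf("p=%5.1f GeV/c  separation = %4.1f sigma\n",
               p, fabs(t_pi - t_k) / ring_sigma(t_pi));
    }
    return 0;
}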
No such detector exists, of course, and in a real experiment much more sophisticated procedures are actually used to account for those effects; position dependent acceptance and efficiency; non-Gaussian error distributions; non negligible and variable event-dependent backgrounds. In practice, for the multi-particle final states produced in a typical collider experiment, separation of kaons from other final state hadrons, mainly pions, is the most important purpose of the RICH. In that context the two most vital RICH functions, which maximise signal and minimise combinatorial backgrounds, are its ability to "correctly identify a kaon as a kaon" and its ability "not to misidentify a pion as a kaon". The related probabilities, which are the usual measures of signal detection and background rejection in real data, are plotted in Fig.5 below to show their variation with momentum (simulation with 10% random background); Note that the ~30% "π → k" misidentification rate at 100 GeV is, for the most part, due to the presence of 10% background hits (faking photons) in the simulated detector; the 3-sigma separation in the mean Cherenkov angle (shown in Fig.4 above) would, by itself, only account for about 6% misidentification. More detailed analyses of the above type, for operational RICH detectors, can be found in the published literature. For example, the LHCb experiment at the CERN LHC studies, amongst other "B-meson" decays, the particular process "B0 → π+π−". The following Fig.6 shows, on the left, the "π+π−" mass distribution without RICH identification, where all particles are assumed to be "π"; the "B0 → π+π−" signal of interest is the turquoise-dotted line and is completely swamped by background due to "B" and "Λ" decays involving kaons and protons, and combinatorial background from particles not associated with the "B0" decay. On the right are the same data with RICH identification used to select only pions and reject kaons and protons; the "B0 → π+π−" signal is preserved but all kaon- and proton-related backgrounds are greatly reduced, so that the overall "B0" signal/background has improved by a factor ~ 6, allowing much more precise measurement of the decay process. RICH Types. Both focusing and proximity-focusing detectors are in use (Fig.7). In a focusing RICH detector, the photons are collected by a spherical mirror with focal length formula_14 and focused onto the photon detector placed at the focal plane. The result is a circle with a radius formula_15, independent of the emission point along the particle's track (formula_16). This scheme is suitable for low refractive index radiators (i.e., gases) with their larger radiator length needed to create enough photons. In the more compact proximity-focusing design a thin radiator volume emits a cone of Cherenkov light which traverses a small distance, the proximity gap, and is detected on the photon detector plane. The image is a ring of light the radius of which is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is mainly determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification (HMPID), one of the detectors of ALICE (A Large Ion Collider Experiment), which is one of the five experiments at the LHC (Large Hadron Collider) at CERN. 
In a DIRC (Detection of Internally Reflected Cherenkov light, Fig.8), another design of a RICH detector, light that is captured by total internal reflection inside the solid radiator reaches the light sensors at the detector perimeter, the precise rectangular cross section of the radiator preserving the angular information of the Cherenkov light cone. One example is the DIRC of the BaBar experiment at SLAC. The LHCb experiment on the Large Hadron Collider, Fig.9, uses two RICH detectors for differentiating between pions and kaons. The first (RICH-1) is located immediately after the Vertex Locator (VELO) around the interaction point and is optimised for low-momentum particles and the second (RICH-2) is located after the magnet and particle-tracker layers and optimised for higher-momentum particles. The Alpha Magnetic Spectrometer device AMS-02, Fig.10, recently mounted on the International Space Station uses a RICH detector in combination with other devices to analyze cosmic rays. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " n " }, { "math_id": 1, "text": " \\theta_c " }, { "math_id": 2, "text": " v " }, { "math_id": 3, "text": "\\cos \\theta_c = \\frac{c}{nv}" }, { "math_id": 4, "text": "c" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": " c/nv > 1 " }, { "math_id": 7, "text": "N_c" }, { "math_id": 8, "text": " \\sigma " }, { "math_id": 9, "text": "q" }, { "math_id": 10, "text": " N = \\dfrac{N_c q^2 \\sin^2(\\theta_c)}{1 - \\dfrac{1}{n^2}} " }, { "math_id": 11, "text": "\\sigma_m = \\frac{\\sigma}{\\sqrt{N}}" }, { "math_id": 12, "text": "\\sigma_m" }, { "math_id": 13, "text": " \\sigma_m " }, { "math_id": 14, "text": "f" }, { "math_id": 15, "text": "r = f\\theta_c" }, { "math_id": 16, "text": "\\theta_c \\ll 1" } ]
https://en.wikipedia.org/wiki?curid=1379266
1379315
Time-of-flight detector
A time-of-flight (TOF) detector is a particle detector which can discriminate between a lighter and a heavier elementary particle of the same momentum using their time of flight between two scintillators. The first of the scintillators activates a clock upon being hit while the other stops the clock upon being hit. If the two masses are denoted by formula_0 and formula_1 and have velocities formula_2 and formula_3 then the time of flight difference is given by formula_4 where formula_5 is the distance between the scintillators. The approximation is in the relativistic limit at momentum formula_6 and formula_7 denotes the speed of light in vacuum.
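A short numerical check of the formula in C, comparing the exact expression with the relativistic approximation for a kaon and a pion of the same momentum; the masses, momentum and flight path are illustrative values that do not come from the text.

#include <math.h>
#include <stdio.h>

/* Time-of-flight difference over a path of L metres for two particles
   of the same momentum p (GeV/c), with masses in GeV/c^2.  The factors
   of c appear explicitly because the result is returned in seconds.
   The pion and kaon masses in main() are approximate values added
   only for illustration. */
#define C 299792458.0

double tof_exact(double m1, double m2, double p, double L) {
    double b1 = p / sqrt(p * p + m1 * m1);   /* v1/c */
    double b2 = p / sqrt(p * p + m2 * m2);   /* v2/c */
    return (L / C) * (1.0 / b1 - 1.0 / b2);
}

double tof_approx(double m1, double m2, double p, double L) {
    return L * (m1 * m1 - m2 * m2) / (2.0 * p * p * C);
}

int main(void) {
    double m_pi = 0.1396, m_k = 0.4937;      /* GeV/c^2, approximate */
    double p = 2.0, L = 10.0;                /* 2 GeV/c over 10 m    */
    printf("exact  : %g ns\n", tof_exact(m_k, m_pi, p, L) * 1e9);
    printf("approx : %g ns\n", tof_approx(m_k, m_pi, p, L) * 1e9);
    return 0;
}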
[ { "math_id": 0, "text": "m_1" }, { "math_id": 1, "text": " m_2" }, { "math_id": 2, "text": "v_1" }, { "math_id": 3, "text": "v_2" }, { "math_id": 4, "text": "\\Delta t = L\\left(\\frac{1}{v_1}-\\frac{1}{v_2}\\right)\\approx \\frac{Lc}{2p^2}(m_1^2-m_2^2)" }, { "math_id": 5, "text": "L" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=1379315
13793480
Volume viscosity
Volume viscosity (also called bulk viscosity, or second viscosity or, dilatational viscosity) is a material property relevant for characterizing fluid flow. Common symbols are formula_0 or formula_1. It has dimensions (mass / (length × time)), and the corresponding SI unit is the pascal-second (Pa·s). Like other material properties (e.g. density, shear viscosity, and thermal conductivity) the value of volume viscosity is specific to each fluid and depends additionally on the fluid state, particularly its temperature and pressure. Physically, volume viscosity represents the irreversible resistance, over and above the reversible resistance caused by isentropic bulk modulus, to a compression or expansion of a fluid. At the molecular level, it stems from the finite time required for energy injected in the system to be distributed among the rotational and vibrational degrees of freedom of molecular motion. Knowledge of the volume viscosity is important for understanding a variety of fluid phenomena, including sound attenuation in polyatomic gases (e.g. Stokes's law), propagation of shock waves, and dynamics of liquids containing gas bubbles. In many fluid dynamics problems, however, its effect can be neglected. For instance, it is 0 in a monatomic gas at low density, whereas in an incompressible flow the volume viscosity is superfluous since it does not appear in the equation of motion. Volume viscosity was introduced in 1879 by Sir Horace Lamb in his famous work "Hydrodynamics". Although relatively obscure in the scientific literature at large, volume viscosity is discussed in depth in many important works on fluid mechanics, fluid acoustics, theory of liquids, and rheology. Derivation and use. At thermodynamic equilibrium, the negative-one-third of the trace of the Cauchy stress tensor is often identified with the thermodynamic pressure, formula_2 which depends only on equilibrium state variables like temperature and density (equation of state). In general, the trace of the stress tensor is the sum of thermodynamic pressure contribution and another contribution which is proportional to the divergence of the velocity field. This coefficient of proportionality is called volume viscosity. Common symbols for volume viscosity are formula_3 and formula_4. Volume viscosity appears in the classic Navier-Stokes equation if it is written for compressible fluid, as described in most books on general hydrodynamics and acoustics. formula_5 where formula_6 is the shear viscosity coefficient and formula_3 is the volume viscosity coefficient. The parameters formula_6 and formula_3 were originally called the first and bulk viscosity coefficients, respectively. The operator formula_7 is the material derivative. By introducing the tensors (matrices) formula_8, formula_9 and formula_10 (where "e" is a scalar called dilation, and formula_11 is the identity tensor), which describes crude shear flow (i.e. the strain rate tensor), pure shear flow (i.e. the deviatoric part of the strain rate tensor, i.e. the shear rate tensor) and compression flow (i.e. the isotropic dilation tensor), respectively, formula_12 formula_13 formula_14 the classic Navier-Stokes equation gets a lucid form. 
formula_15 Note that the term in the momentum equation that contains the volume viscosity disappears for an incompressible flow because there is no divergence of the flow, and so also no flow dilation "e" to which is proportional: formula_16 So the incompressible Navier-Stokes equation can be simply written: formula_17 In fact, note that for the incompressible flow the strain rate is purely deviatoric since there is no dilation ("e"=0). In other words, for an incompressible flow the isotropic stress component is simply the pressure: formula_18 and the deviatoric (shear) stress is simply twice the product between the shear viscosity and the strain rate (Newton's constitutive law): formula_19 Therefore, in the incompressible flow the volume viscosity plays no role in the fluid dynamics. However, in a compressible flow there are cases where formula_20, which are explained below. In general, moreover, formula_3 is not just a property of the fluid in the classic thermodynamic sense, but also depends on the process, for example the compression/expansion rate. The same goes for shear viscosity. For a Newtonian fluid the shear viscosity is a pure fluid property, but for a non-Newtonian fluid it is not a pure fluid property due to its dependence on the velocity gradient. Neither shear nor volume viscosity are equilibrium parameters or properties, but transport properties. The velocity gradient and/or compression rate are therefore independent variables together with pressure, temperature, and other state variables. Landau's explanation. According to Landau, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;In compression or expansion, as in any rapid change of state, the fluid ceases to be in thermodynamic equilibrium, and internal processes are set up in it which tend to restore this equilibrium. These processes are usually so rapid (i.e. their relaxation time is so short) that the restoration of equilibrium follows the change in volume almost immediately unless, of course, the rate of change of volume is very large. He later adds: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It may happen, nevertheless, that the relaxation times of the processes of restoration of equilibrium are long, i.e. they take place comparatively slowly. After an example, he concludes (with formula_3 used to represent volume viscosity): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Hence, if the relaxation time of these processes is long, a considerable dissipation of energy occurs when the fluid is compressed or expanded, and, since this dissipation must be determined by the second viscosity, we reach the conclusion that formula_3 is large. Measurement. A brief review of the techniques available for measuring the volume viscosity of liquids can be found in Dukhin &amp; Goetz and Sharma (2019). One such method is by using an acoustic rheometer. Below are values of the volume viscosity for several Newtonian liquids at 25 °C (reported in cP): methanol - 0.8 ethanol - 1.4 propanol - 2.7 pentanol - 2.8 acetone - 1.4 toluene - 7.6 cyclohexanone - 7.0 hexane - 2.4 Recent studies have determined the volume viscosity for a variety of gases, including carbon dioxide, methane, and nitrous oxide. These were found to have volume viscosities which were hundreds to thousands of times larger than their shear viscosities. 
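Returning to the decomposition given under Derivation and use, the viscous part of the stress, 2μγ + 3ζeI, can be assembled numerically from a velocity-gradient matrix, as in the following sketch; the gradient entries and the two viscosity coefficients are made-up values.

#include <stdio.h>

/* Viscous stress tau = 2*mu*gamma + 3*zeta*e*I, built from a velocity
   gradient grad[i][j] = d v_i / d x_j, following the decomposition
   epsilon = (grad + grad^T)/2, e = (div v)/3, gamma = epsilon - e*I.
   With mu and zeta in Pa*s and the gradient in 1/s, tau is in Pa. */
void viscous_stress(const double grad[3][3], double mu, double zeta,
                    double tau[3][3]) {
    double e = (grad[0][0] + grad[1][1] + grad[2][2]) / 3.0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            double eps = 0.5 * (grad[i][j] + grad[j][i]);
            double gamma = eps - (i == j ? e : 0.0);
            tau[i][j] = 2.0 * mu * gamma + (i == j ? 3.0 * zeta * e : 0.0);
        }
}

int main(void) {
    double grad[3][3] = {{ 1.0, 0.5, 0.0},
                         { 0.0,-0.2, 0.0},
                         { 0.0, 0.0,-0.3}};   /* made-up compressible flow */
    double tau[3][3];
    viscous_stress(grad, 1.0e-3, 3.0e-3, tau); /* made-up mu and zeta */
    for (int i = 0; i < 3; i++)
        printf("%10.3e %10.3e %10.3e\n", tau[i][0], tau[i][1], tau[i][2]);
    return 0;
}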
Fluids having large volume viscosities include those used as working fluids in power systems having non-fossil fuel heat sources, wind tunnel testing, and pharmaceutical processing. Modeling. There are many publications dedicated to numerical modeling of volume viscosity. A detailed review of these studies can be found in Sharma (2019) and Cramer. In the latter study, a number of common fluids were found to have bulk viscosities which were hundreds to thousands of times larger than their shear viscosities. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\zeta, \\mu', \\mu_\\mathrm{b}, \\kappa" }, { "math_id": 1, "text": "\\xi" }, { "math_id": 2, "text": "-{1\\over3}\\sigma_a^a = P," }, { "math_id": 3, "text": "\\zeta" }, { "math_id": 4, "text": "\\mu_{v}" }, { "math_id": 5, "text": "\\rho \\frac{D \\mathbf{v}}{Dt} = -\\nabla P + \\nabla\\cdot\\left[\\mu\\left(\\nabla\\mathbf{v} + \\left(\\nabla\\mathbf{v}\\right)^T - \\frac{2}{3} (\\nabla\\cdot\\mathbf{v})\\mathbf{I}\\right) \\right] + \\nabla\\cdot[\\zeta(\\nabla\\cdot \\mathbf{v})\\mathbf{I}] + \\rho \\mathbf{g}" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": " D\\mathbf{v}/Dt " }, { "math_id": 8, "text": " \\boldsymbol{\\epsilon} " }, { "math_id": 9, "text": " \\boldsymbol{\\gamma} " }, { "math_id": 10, "text": " e \\mathbf{I} " }, { "math_id": 11, "text": " \\mathbf{I} " }, { "math_id": 12, "text": " \\boldsymbol{\\epsilon} = \\frac{1}{2} \\left( \\nabla\\mathbf{v} + \\left(\\nabla\\mathbf{v}\\right)^T \\right)" }, { "math_id": 13, "text": " e = \\frac{1}{3} \\nabla \\! \\cdot \\! \\mathbf{v}" }, { "math_id": 14, "text": " \\boldsymbol{\\gamma} = \\boldsymbol{\\epsilon} - e \\mathbf{I} " }, { "math_id": 15, "text": "\\rho \\frac{D \\mathbf{v}}{Dt} = -\\nabla (P - 3 \\zeta e) + \\nabla\\cdot ( 2\\mu \\boldsymbol \\gamma) + \\rho \\mathbf{g}" }, { "math_id": 16, "text": " \\nabla \\! \\cdot \\! \\mathbf{v} =0 " }, { "math_id": 17, "text": "\\rho \\frac{D \\mathbf{v}}{Dt} = -\\nabla P + \\nabla\\cdot ( 2\\mu \\boldsymbol \\epsilon) + \\rho \\mathbf{g}" }, { "math_id": 18, "text": "p= \\frac 1 3 Tr(\\boldsymbol \\sigma)" }, { "math_id": 19, "text": "\\boldsymbol \\tau = 2 \\mu \\boldsymbol \\epsilon" }, { "math_id": 20, "text": "\\zeta\\gg\\mu" } ]
https://en.wikipedia.org/wiki?curid=13793480
13793747
Group method of data handling
Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models. GMDH is used in such fields as data mining, knowledge discovery, prediction, complex systems modeling, optimization and pattern recognition. GMDH algorithms are characterized by inductive procedure that performs sorting-out of gradually complicated polynomial models and selecting the best solution by means of the "external criterion". The last section of contains a summary of the applications of GMDH in the 1970s. Other names include "polynomial feedforward neural network", or "self-organization of models". It was one of the first deep learning methods, used to train an eight-layer neural net in 1971. Mathematical content. Polynomial regression. This section is based on. This is the general problem of statistical modelling of data: Consider a dataset formula_0, with formula_1 points. Each point contains formula_2 observations, and one target formula_3 to predict. How to best predict the target based on the observations? First, we split the full dataset into two parts: a training set and a validation set. The training set would be used to fit more and more model parameters, and the validation set would be used to decide which parameters to include, and when to stop fitting completely. The GMDH starts by considering degree-2 polynomial in 2 variables. Suppose we want to predict the target using just the formula_4 parts of the observation, and using only degree-2 polynomials, then the most we can do is this:formula_5where the parameters formula_6 are computed by linear regression. Now, the parameters formula_6 depend on which formula_4 we have chosen, and we do not know which formula_4 we should choose, so we choose all of them. That is, we perform all formula_7 such polynomial regressions:formula_8obtaining formula_7 polynomial models of the dataset. We do not want to accept all the polynomial models, since it would contain too many models. To only select the best subset of these models, we run each model formula_9 on the validation dataset, and select the models whose mean-square-error is below a threshold. We also write down the smallest mean-square-error achieved as formula_10. Suppose that after this process, we have obtained a set of formula_11 models. We now run the models on the training dataset, to obtain a sequence of transformed observations: formula_12. The same algorithm can now be run again. The algorithm continues, giving us formula_13. As long as each formula_14 is smaller than the previous one, the process continues, giving us increasingly deep models. As soon as some formula_15, the algorithm terminates. The last layer fitted (layer formula_16) is discarded, as it has overfit the training set. The previous layers are outputted. More sophisticated methods for deciding when to terminate are possible. For example, one might keep running the algorithm for several more steps, in the hope of passing a temporary rise in formula_14. In general. Instead of a degree-2 polynomial in 2 variables, each unit may use higher-degree polynomials in more variables: formula_17 And more generally, a GMDH model with multiple inputs and one output is a subset of components of the "base function" (1): formula_18 where "fi" are elementary functions dependent on different sets of inputs, "ai" are coefficients and "m" is the number of the base function components. External criteria. 
External criteria are optimization objectives for the model, such as minimizing mean-squared error on the validation set, as given above. The most common criteria are: Idea. Like linear regression, which fits a linear equation over data, GMDH fits arbitrarily high orders of polynomial equations over data. To choose between models, two or more subsets of a data sample are used, similar to the train-validation-test split. GMDH combined ideas from: black box modeling, successive genetic selection of pairwise features, the Gabor's principle of "freedom of decisions choice", and the Beer's principle of external additions. Inspired by an analogy between constructing a model out of noisy data, and sending messages through a noisy channel, they proposed "noise-immune modelling": the higher the noise, the less parameters must the optimal model have, since the noisy channel does not allow more bits to be sent through. The model is structured as a feedforward neural network, but without restrictions on the depth, they had a procedure for automatic models structure generation, which imitates the process of biological selection with pairwise genetic features. History. The method was originated in 1968 by Prof. Alexey G. Ivakhnenko in the Institute of Cybernetics in Kyiv. Period 1968–1971 is characterized by application of only regularity criterion for solving of the problems of identification, pattern recognition and short-term forecasting. As reference functions polynomials, logical nets, fuzzy Zadeh sets and Bayes probability formulas were used. Authors were stimulated by very high accuracy of forecasting with the new approach. Noise immunity was not investigated. Period 1972–1975. The problem of modeling of noised data and incomplete information basis was solved. Multicriteria selection and utilization of additional priory information for noiseimmunity increasing were proposed. Best experiments showed that with extended definition of the optimal model by additional criterion noise level can be ten times more than signal. Then it was improved using Shannon's Theorem of General Communication theory. Period 1976–1979. The convergence of multilayered GMDH algorithms was investigated. It was shown that some multilayered algorithms have "multilayerness error" – analogous to static error of control systems. In 1977 a solution of objective systems analysis problems by multilayered GMDH algorithms was proposed. It turned out that sorting-out by criteria ensemble finds the only optimal system of equations and therefore to show complex object elements, their main input and output variables. Period 1980–1988. Many important theoretical results were received. It became clear that full physical models cannot be used for long-term forecasting. It was proved, that non-physical models of GMDH are more accurate for approximation and forecast than physical models of regression analysis. Two-level algorithms which use two different time scales for modeling were developed. Since 1989 the new algorithms (AC, OCC, PF) for non-parametric modeling of fuzzy objects and SLP for expert systems were developed and investigated. Present stage of GMDH development can be described as blossom out of deep learning neuronets and parallel inductive algorithms for multiprocessor computers. Such procedure is currently used in deep learning networks. GMDH-type neural networks. There are many different ways to choose an order for partial models consideration. 
The very first consideration order used in GMDH and originally called multilayered inductive procedure is the most popular one. It is a sorting-out of gradually complicated models generated from "base function". The best model is indicated by the minimum of the external criterion characteristic. Multilayered procedure is equivalent to the Artificial Neural Network with polynomial activation function of neurons. Therefore, the algorithm with such an approach usually referred as GMDH-type Neural Network or Polynomial Neural Network. Li showed that GMDH-type neural network performed better than the classical forecasting algorithms such as Single Exponential Smooth, Double Exponential Smooth, ARIMA and back-propagation neural network. Combinatorial GMDH. Another important approach to partial models consideration that becomes more and more popular is a combinatorial search that is either limited or full. This approach has some advantages against Polynomial Neural Networks, but requires considerable computational power and thus is not effective for objects with a large number of inputs. An important achievement of Combinatorial GMDH is that it fully outperforms linear regression approach if noise level in the input data is greater than zero. It guarantees that the most optimal model will be founded during exhaustive sorting. Basic Combinatorial algorithm makes the following steps: In contrast to GMDH-type neural networks Combinatorial algorithm usually does not stop at the certain level of complexity because a point of increase of criterion value can be simply a local minimum, see Fig.1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
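As a concrete illustration of the multilayered procedure described above, the following sketch fits a degree-2 polynomial for every pair of input columns on the training split, ranks the candidates by their mean-squared error on the validation split (the external criterion), feeds the outputs of the surviving units into the next layer, and stops once the best validation error starts to rise, discarding the overfit layer. It is a minimal reading of the algorithm as described here, assuming NumPy and at least two input columns; the helper names (gmdh_layer, gmdh_fit) and the keep-the-k-best selection rule, used here in place of a fixed error threshold, are illustrative choices rather than part of any standard GMDH implementation.

```python
import itertools
import numpy as np

def _design(xi, xj):
    # degree-2 basis in two variables: 1, xi, xj, xi^2, xj^2, xi*xj
    return np.column_stack([np.ones_like(xi), xi, xj, xi**2, xj**2, xi * xj])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=8):
    """One GMDH layer: fit all pairwise degree-2 models on the training split,
    rank them by validation MSE (the external criterion), and return the best
    MSE together with the outputs of the surviving units."""
    candidates = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        A = _design(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        mse = float(np.mean((y_va - _design(X_va[:, i], X_va[:, j]) @ coef) ** 2))
        candidates.append((mse, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    chosen = candidates[:keep]
    Z_tr = np.column_stack([_design(X_tr[:, i], X_tr[:, j]) @ c for _, i, j, c in chosen])
    Z_va = np.column_stack([_design(X_va[:, i], X_va[:, j]) @ c for _, i, j, c in chosen])
    return chosen[0][0], Z_tr, Z_va

def gmdh_fit(X_tr, y_tr, X_va, y_va, max_layers=10, keep=8):
    """Stack layers while the external criterion keeps improving."""
    history = []
    for _ in range(max_layers):
        best_mse, X_tr, X_va = gmdh_layer(X_tr, y_tr, X_va, y_va, keep=keep)
        if history and best_mse > history[-1]:
            break                      # this layer overfits the training set; discard it
        history.append(best_mse)
    return history
```

Keeping a fixed number of units per layer bounds the number of pairwise models fitted at the next layer; a threshold on the external criterion, as in the description above, plays the same role.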
[ { "math_id": 0, "text": "\\{(x_1, ..., x_k; y)_s\\}_{s=1:n}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "x_1, ..., x_k" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "i, j" }, { "math_id": 5, "text": "y \\approx f_{a,b,c,d,e,h}(x_i, x_j) := a + bx_i + cx_j + dx_i^2 + ex_j^2 + fx_{i}x_{j}" }, { "math_id": 6, "text": "a, b, c, d, e, f" }, { "math_id": 7, "text": "\\frac 12 k(k-1)" }, { "math_id": 8, "text": "y \\approx f_{(i, j); a,b,c,d,e,h}(x_i, x_j) := a_{i,j} + b_{i,j}x_i + c_{i,j}x_j + d_{i,j}x_i^2 + e_{i,j}x_j^2 + f_{i,j}x_{i}x_{j} \\quad \\forall 1 \\leq i < j \\leq k" }, { "math_id": 9, "text": "f_{(i, j); a,b,c,d,e,h}" }, { "math_id": 10, "text": "minMSE_1" }, { "math_id": 11, "text": "k_1" }, { "math_id": 12, "text": "z_1, z_2, ..., z_{k_1}" }, { "math_id": 13, "text": "minMSE_1, minMSE_2, ..." }, { "math_id": 14, "text": "minMSE" }, { "math_id": 15, "text": "minMSE_{L+1} > minMSE_{L}" }, { "math_id": 16, "text": "L+1" }, { "math_id": 17, "text": " Y(x_1,\\dots,x_n) = a_0+\\sum\\limits_{i = 1}^n {a_i} x_i+\\sum\\limits_{i = 1}^n \n{\\sum\\limits_{j = i}^n {a_{i j} } } x_i x_j+\\sum\\limits_{i = 1}^n \n{\\sum\\limits_{j = i}^n{\\sum\\limits_{k = j}^n {a_{i j k} } } }x_i x_j x_k+\\cdots " }, { "math_id": 18, "text": " Y(x_1,\\dots,x_n)=a_0+\\sum\\limits_{i = 1}^m a_i f_i" } ]
https://en.wikipedia.org/wiki?curid=13793747
13793909
Penman–Monteith equation
The Penman–Monteith equation approximates net evapotranspiration (ET) from meteorological data, as a replacement for direct measurement of evapotranspiration. The equation is widely used, and was derived by the United Nations Food and Agriculture Organization for modeling reference evapotranspiration ET0. Significance. Evapotranspiration contributions are very significant in a watershed's water balance, yet are often not emphasized in results because the precision of this component is often weak relative to more directly measured phenomena, e.g. rain and stream flow. In addition to weather uncertainties, the Penman–Monteith equation is sensitive to vegetation specific parameters, e.g. stomatal resistance or conductance. Various forms of crop coefficients (Kc) account for differences between specific vegetation modeled and a "reference evapotranspiration" (RET or ET0) standard. Stress coefficients (Ks) account for reductions in ET due to environmental stress (e.g. soil saturation reduces root-zone O2, low soil moisture induces wilt, air pollution effects, and salinity). Models of native vegetation cannot assume crop management to avoid recurring stress. Equation. Per Monteith's "Evaporation and Environment", the equation is: formula_0 "λ"v = Latent heat of vaporization. Energy required per unit mass of water vaporized. (J g−1) "L"v = Volumetric latent heat of vaporization. Energy required per water volume vaporized. ("L"v = 2453 MJ m−3) "E" = Mass water evapotranspiration rate (g s−1 m−2) "ET" = Water volume evapotranspired (mm s−1) Δ = Rate of change of saturation specific humidity with air temperature. (Pa K−1) "R"n = Net irradiance (W m−2), the external source of energy flux "G" = Ground heat flux (W m−2), usually difficult to measure "c"p = Specific heat capacity of air (J kg−1 K−1) "ρ"a = dry air density (kg m−3) δ"e" = vapor pressure deficit (Pa) "g"a = Conductivity of air, atmospheric conductance (m s−1) "g"s = Conductivity of stoma, surface or stomatal conductance (m s−1) "γ" = Psychrometric constant ("γ" ≈ 66 Pa K−1) Note: Often resistances are used rather than conductivities. formula_1 where rc refers to the resistance to flux from a vegetation canopy to the extent of some defined boundary layer. The atmospheric conductance "g"a accounts for aerodynamic effects like the zero plane displacement height and the roughness length of the surface. The stomatal conductance "g"s accounts for effect of leaf density (Leaf Area Index), water stress and CO2 concentration in the air, that is to say to plant reaction to external factors. Different models exist to link the stomatal conductance to these vegetation characteristics, like the ones from P.G. Jarvis (1976) or Jacobs et al. (1996). Accuracy. While the Penman-Monteith method is widely considered accurate for practical purposes and is recommended by the Food and Agriculture Organization of the United Nations, errors when compared to direct measurement or other techniques can range from -9 to 40%. Variations and alternatives. FAO 56 Penman-Monteith equation. To avoid the inherent complexity of determining stomatal and atmospheric conductance, the Food and Agriculture Organization proposed in 1998 a simplified equation for the reference evapotranspiration "ET"0. It is defined as the evapotranpiration for "[an] hypothetical reference crop with an assumed crop height of 0.12 m, a fixed surface resistance of 70 s m-1 and an albedo of 0.23." 
This reference surface is defined to represent "an extensive surface of green grass of uniform height, actively growing, completely shading the ground and with adequate water". The corresponding equation is: formula_2 "ET"0 = Reference evapotranspiration, Water volume evapotranspired (mm day−1) Δ = Rate of change of saturation specific humidity with air temperature. (Pa K−1) "R"n = Net irradiance (MJ m−2 day−1), the external source of energy flux "G" = Ground heat flux (MJ m−2 day−1), usually equivalent to zero on a day "T" = Air temperature at 2m (K) "u2" = Wind speed at 2m height (m/s) δ"e" = vapor pressure deficit (kPa) "γ" = Psychrometric constant ("γ" ≈ 66 Pa K−1) N.B.: The coefficient 0.408 and 900 are not unitless but account for the conversion from energy values to equivalent water depths: radiation [mm day−1] = 0.408 radiation [MJ m−2 day−1]. This reference evapotranspiration ET0 can then be used to evaluate the evapotranspiration rate ET from unstressed plant through crop coefficients Kc: ET = Kc * ET0. Variations. The standard methods of the American Society of Civil Engineers modify the standard Penman–Monteith equation for use with an hourly time step. The SWAT model is one of many GIS-integrated hydrologic models estimating ET using Penman–Monteith equations. Priestley–Taylor. The Priestley–Taylor equation was developed as a substitute to the Penman–Monteith equation to remove dependence on observations. For Priestley–Taylor, only radiation (irradiance) observations are required. This is done by removing the aerodynamic terms from the Penman–Monteith equation and adding an empirically derived constant factor, formula_3. The underlying concept behind the Priestley–Taylor model is that an air mass moving above a vegetated area with abundant water would become saturated with water. In these conditions, the actual evapotranspiration would match the Penman rate of reference evapotranspiration. However, observations revealed that actual evaporation was 1.26 times greater than reference evaporation, and therefore the equation for actual evaporation was found by taking reference evapotranspiration and multiplying it by formula_3. The assumption here is for vegetation with an abundant water supply (i.e. the plants have low moisture stress). Areas like arid regions with high moisture stress are estimated to have higher formula_3 values. The assumption that an air mass moving over a vegetated surface with abundant water saturates has been questioned later. The lowest and turbulent part of the atmosphere, the atmospheric boundary layer, is not a closed box, but constantly brings in dry air from higher up in the atmosphere towards the surface. As water evaporates more easily into a dry atmosphere, evapotranspiration is enhanced. This explains the larger than unity value of the Priestley-Taylor parameter formula_3. The proper equilibrium of the system has been derived and involves the characteristics of the interface of the atmospheric boundary layer and the overlying free atmosphere. History. The equation is named after Howard Penman and John Monteith. Penman first published his equation in 1948 and Monteith revised it in 1965. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
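The FAO-56 form above is simple enough to evaluate directly. The following sketch is a plain transcription of that equation, assuming the usual FAO-56 unit conventions (the slope Δ and the psychrometric constant γ in kPa per kelvin, the vapour pressure deficit in kPa, the radiation terms in MJ per square metre per day, and the temperature in kelvin inside the 900/T factor); the function name and the example inputs are illustrative and are not taken from the FAO text.

```python
def fao56_reference_et0(delta, rn, g, temp_k, u2, vpd, gamma=0.066):
    """Daily reference evapotranspiration ET0 in mm/day.

    delta  : slope of the saturation vapour pressure curve [kPa/K]
    rn     : net irradiance at the grass surface [MJ m-2 day-1]
    g      : soil heat flux density [MJ m-2 day-1], roughly 0 over a full day
    temp_k : mean air temperature at 2 m [K]
    u2     : wind speed at 2 m [m/s]
    vpd    : vapour pressure deficit [kPa]
    gamma  : psychrometric constant [kPa/K], about 0.066 near sea level
    """
    numerator = 0.408 * delta * (rn - g) + gamma * (900.0 / temp_k) * u2 * vpd
    return numerator / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative call: a mild day with delta ~0.145 kPa/K (about 20 degrees C),
# Rn = 13.3 MJ m-2 day-1, negligible soil heat flux, light wind.
et0 = fao56_reference_et0(delta=0.145, rn=13.3, g=0.0,
                          temp_k=293.15, u2=2.0, vpd=0.6)
```

For these example inputs the result is roughly 4 mm per day, a typical order of magnitude for a mild, lightly windy day over well-watered grass.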
[ { "math_id": 0, "text": " \\overset{\\text{Energy flux rate}}{\\lambda_v E=\\frac{\\Delta (R_n-G) + \\rho_a c_p \\left( \\delta e \\right) g_a }\n{\\Delta + \\gamma \\left ( 1 + g_a / g_s \\right)}}\n~ \\iff ~\n \\overset{\\text{Volume flux rate}}{ET=\\frac{\\Delta (R_n-G) + \\rho_a c_p \\left( \\delta e \\right) g_a }\n{ \\left( \\Delta + \\gamma \\left ( 1 + g_a / g_s \\right) \\right) L_v }}\n" }, { "math_id": 1, "text": " g_a = \\tfrac{1}{ r_a} ~ ~ \\And ~ ~ g_s = \\tfrac{1}{ r_s} = \\tfrac{1}{ r_c}" }, { "math_id": 2, "text": " ET_o = \\frac{0.408 \\Delta (R_n-G) + \\frac{900}{T} \\gamma u_2 \\delta e }{\\Delta + \\gamma (1 + 0.34 u_2)}\n" }, { "math_id": 3, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=13793909
13795874
Shift theorem
In mathematics, the (exponential) shift theorem is a theorem about polynomial differential operators ("D"-operators) and exponential functions. It permits one to eliminate, in certain cases, the exponential from under the "D"-operators. Statement. The theorem states that, if "P"("D") is a polynomial of the "D"-operator, then, for any sufficiently differentiable function "y", formula_0 To prove the result, proceed by induction. Note that only the special case formula_1 needs to be proved, since the general result then follows by linearity of "D"-operators. The result is clearly true for "n" = 1 since formula_2 Now suppose the result true for "n" = "k", that is, formula_3 Then, formula_4 This completes the proof. The shift theorem can be applied equally well to inverse operators: formula_5 Related. There is a similar version of the shift theorem for Laplace transforms (formula_6): formula_7 Examples. The exponential shift theorem can be used to speed the calculation of higher derivatives of functions that is given by the product of an exponential and another function. For instance, if formula_8, one has that formula_9 Another application of the exponential shift theorem is to solve linear differential equations whose characteristic polynomial has repeated roots. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
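The worked example above can be checked symbolically. The following sketch, assuming SymPy is available, compares the direct third derivative of e^x sin(x) with the shifted operator (D + a)^3 applied to sin(x); expanding (D + a)^3 by the binomial theorem is legitimate because the scalar a commutes with D.

```python
import sympy as sp

x = sp.symbols('x')
a = 1               # exponent coefficient in e^(a*x)
f = sp.sin(x)

# Left-hand side: apply P(D) = D^3 directly to e^(a*x) * f(x).
lhs = sp.diff(sp.exp(a * x) * f, x, 3)

# Right-hand side: shift the operator, P(D + a) = (D + a)^3, apply it to f,
# then multiply by e^(a*x); (D + a)^3 is expanded with the binomial theorem.
rhs = sp.exp(a * x) * sum(
    sp.binomial(3, k) * a**(3 - k) * sp.diff(f, x, k) for k in range(4)
)

assert sp.simplify(lhs - rhs) == 0   # both sides equal e^x * (2*cos(x) - 2*sin(x))
```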
[ { "math_id": 0, "text": "P(D)(e^{ax}y)\\equiv e^{ax}P(D+a)y." }, { "math_id": 1, "text": "P(D)=D^n" }, { "math_id": 2, "text": "D(e^{ax}y)=e^{ax}(D+a)y." }, { "math_id": 3, "text": "D^k(e^{ax}y)=e^{ax}(D+a)^k y." }, { "math_id": 4, "text": "\\begin{align}\nD^{k+1}(e^{ax}y)&\\equiv\\frac{d}{dx}\\left\\{e^{ax}\\left(D+a\\right)^ky\\right\\}\\\\\n&{}=e^{ax}\\frac{d}{dx}\\left\\{\\left(D+a\\right)^k y\\right\\} + ae^{ax}\\left\\{\\left(D+a\\right)^ky\\right\\}\\\\\n&{}=e^{ax}\\left\\{\\left(\\frac{d}{dx}+a\\right) \\left(D+a\\right)^ky\\right\\}\\\\\n&{}=e^{ax}(D+a)^{k+1}y.\n\\end{align}" }, { "math_id": 5, "text": "\\frac{1}{P(D)}(e^{ax}y)=e^{ax}\\frac{1}{P(D+a)}y." }, { "math_id": 6, "text": "t<a" }, { "math_id": 7, "text": "e^{-as}\\mathcal{L}\\{f(t)\\} = \\mathcal{L}\\{f(t-a)\\}." }, { "math_id": 8, "text": "f(x) = \\sin(x) e^x" }, { "math_id": 9, "text": "\\begin{align}\nD^3 f &= D^3 (e^x\\sin(x)) = e^x (D+1)^3 \\sin (x) \\\\\n&= e^x \\left(D^3 + 3D^2 + 3D + 1\\right) \\sin(x) \\\\\n&= e^x\\left(-\\cos(x)-3\\sin(x)+3\\cos(x)+\\sin(x)\\right)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=13795874
13796065
Freudenthal magic square
Mathematical magic square In mathematics, the Freudenthal magic square (or Freudenthal–Tits magic square) is a construction relating several Lie algebras (and their associated Lie groups). It is named after Hans Freudenthal and Jacques Tits, who developed the idea independently. It associates a Lie algebra to a pair of division algebras "A", "B". The resulting Lie algebras have Dynkin diagrams according to the table at the right. The "magic" of the Freudenthal magic square is that the constructed Lie algebra is symmetric in "A" and "B", despite the original construction not being symmetric, though Vinberg's symmetric method gives a symmetric construction. The Freudenthal magic square includes all of the exceptional Lie groups apart from "G"2, and it provides one possible approach to justify the assertion that "the exceptional Lie groups all exist because of the octonions": "G"2 itself is the automorphism group of the octonions (also, it is in many ways like a classical Lie group because it is the stabilizer of a generic 3-form on a 7-dimensional vector space – see prehomogeneous vector space). Constructions. See history for context and motivation. These were originally constructed circa 1958 by Freudenthal and Tits, with more elegant formulations following in later years. Tits' approach. Tits' approach, discovered circa 1958 and published in , is as follows. Associated with any normed real division algebra "A" (i.e., R, C, H or O) there is a Jordan algebra, "J"3("A"), of 3 × 3 "A"-Hermitian matrices. For any pair ("A", "B") of such division algebras, one can define a Lie algebra formula_0 where formula_1 denotes the Lie algebra of derivations of an algebra, and the subscript 0 denotes the trace-free part. The Lie algebra "L" has formula_2 as a subalgebra, and this acts naturally on formula_3. The Lie bracket on formula_3 (which is not a subalgebra) is not obvious, but Tits showed how it could be defined, and that it produced the following table of compact Lie algebras. By construction, the row of the table with "A"=R gives formula_4, and similarly vice versa. Vinberg's symmetric method. The "magic" of the Freudenthal magic square is that the constructed Lie algebra is symmetric in "A" and "B". This is not obvious from Tits' construction. Ernest Vinberg gave a construction which is manifestly symmetric, in . Instead of using a Jordan algebra, he uses an algebra of skew-hermitian trace-free matrices with entries in "A" ⊗ "B", denoted formula_5. Vinberg defines a Lie algebra structure on formula_6 When "A" and "B" have no derivations (i.e., R or C), this is just the Lie (commutator) bracket on formula_5. In the presence of derivations, these form a subalgebra acting naturally on formula_5 as in Tits' construction, and the tracefree commutator bracket on formula_5 is modified by an expression with values in formula_7. Triality. A more recent construction, due to Pierre Ramond and Bruce Allison and developed by Chris Barton and Anthony Sudbery, uses triality in the form developed by John Frank Adams; this was presented in , and in streamlined form in . Whereas Vinberg's construction is based on the automorphism groups of a division algebra "A" (or rather their Lie algebras of derivations), Barton and Sudbery use the group of automorphisms of the corresponding triality. The triality is the trilinear map formula_8 obtained by taking three copies of the division algebra "A", and using the inner product on "A" to dualize the multiplication. 
The automorphism group is the subgroup of SO("A"1) × SO("A"2) × SO("A"3) preserving this trilinear map. It is denoted Tri("A"). The following table compares its Lie algebra to the Lie algebra of derivations. Barton and Sudbery then identify the magic square Lie algebra corresponding to ("A","B") with a Lie algebra structure on the vector space formula_10 The Lie bracket is compatible with a Z2 × Z2 grading, with tri("A") and tri("B") in degree (0,0), and the three copies of "A" ⊗ "B" in degrees (0,1), (1,0) and (1,1). The bracket preserves tri("A") and tri("B") and these act naturally on the three copies of "A" ⊗ "B", as in the other constructions, but the brackets between these three copies are more constrained. For instance when "A" and "B" are the octonions, the triality is that of Spin(8), the double cover of SO(8), and the Barton-Sudbery description yields formula_11 where V, S+ and S− are the three 8-dimensional representations of formula_9 (the fundamental representation and the two spin representations), and the hatted objects are an isomorphic copy. With respect to one of the Z2 gradings, the first three summands combine to give formula_12 and the last two together form one of its spin representations Δ+128 (the superscript denotes the dimension). This is a well known symmetric decomposition of E8. The Barton–Sudbery construction extends this to the other Lie algebras in the magic square. In particular, for the exceptional Lie algebras in the last row (or column), the symmetric decompositions are: formula_13 formula_14 formula_15 formula_16 Generalizations. Split composition algebras. In addition to the normed division algebras, there are other composition algebras over R, namely the split-complex numbers, the split-quaternions and the split-octonions. If one uses these instead of the complex numbers, quaternions, and octonions, one obtains the following variant of the magic square (where the split versions of the division algebras are denoted by a prime). Here all the Lie algebras are the split real form except for so3, but a sign change in the definition of the Lie bracket can be used to produce the split form so2,1. In particular, for the exceptional Lie algebras, the maximal compact subalgebras are as follows: A non-symmetric version of the magic square can also be obtained by combining the split algebras with the usual division algebras. According to Barton and Sudbery, the resulting table of Lie algebras is as follows. The real exceptional Lie algebras appearing here can again be described by their maximal compact subalgebras. Arbitrary fields. The split forms of the composition algebras and Lie algebras can be defined over any field K. This yields the following magic square. There is some ambiguity here if K is not algebraically closed. In the case K = C, this is the complexification of the Freudenthal magic squares for R discussed so far. More general Jordan algebras. The squares discussed so far are related to the Jordan algebras "J"3("A"), where "A" is a division algebra. There are also Jordan algebras "Jn"("A"), for any positive integer "n", as long as "A" is associative. These yield split forms (over any field K) and compact forms (over R) of generalized magic squares. For "n" = 2, J"2"("O") is also a Jordan algebra. In the compact case (over R) this yields a magic square of orthogonal Lie algebras. The last row and column here are the orthogonal algebra part of the isotropy algebra in the symmetric decomposition of the exceptional Lie algebras mentioned previously. 
These constructions are closely related to hermitian symmetric spaces – cf. prehomogeneous vector spaces. Symmetric spaces. Riemannian symmetric spaces, both compact and non-compact, can be classified uniformly using a magic square construction, in . The irreducible compact symmetric spaces are, up to finite covers, either a compact simple Lie group, a Grassmannian, a Lagrangian Grassmannian, or a double Lagrangian Grassmannian of subspaces of formula_17 for normed division algebras A and B. A similar construction produces the irreducible non-compact symmetric spaces. History. Rosenfeld projective planes. Following Ruth Moufang's discovery in 1933 of the Cayley projective plane or "octonionic projective plane" P2(O), whose symmetry group is the exceptional Lie group F4, and with the knowledge that "G"2 is the automorphism group of the octonions, it was proposed by that the remaining exceptional Lie groups "E"6, "E"7, and E8 are isomorphism groups of projective planes over certain algebras over the octonions: This proposal is appealing, as there are certain exceptional compact Riemannian symmetric spaces with the desired symmetry groups and whose dimension agree with that of the putative projective planes (dim(P2(K ⊗ K′)) = 2 dim(K)dim(K′)), and this would give a uniform construction of the exceptional Lie groups as symmetries of naturally occurring objects (i.e., without an a priori knowledge of the exceptional Lie groups). The Riemannian symmetric spaces were classified by Cartan in 1926 (Cartan's labels are used in sequel); see classification for details, and the relevant spaces are: The difficulty with this proposal is that while the octonions are a division algebra, and thus a projective plane is defined over them, the bioctonions, quateroctonions and octooctonions are not division algebras, and thus the usual definition of a projective plane does not work. This can be resolved for the bioctonions, with the resulting projective plane being the complexified Cayley plane, but the constructions do not work for the quateroctonions and octooctonions, and the spaces in question do not obey the usual axioms of projective planes, hence the quotes on "(putative) projective plane". However, the tangent space at each point of these spaces can be identified with the plane (H ⊗ O)2, or (O ⊗ O)2 further justifying the intuition that these are a form of generalized projective plane. Accordingly, the resulting spaces are sometimes called Rosenfeld projective planes and notated as if they were projective planes. More broadly, these compact forms are the Rosenfeld elliptic projective planes, while the dual non-compact forms are the Rosenfeld hyperbolic projective planes. A more modern presentation of Rosenfeld's ideas is in , while a brief note on these "planes" is in . The spaces can be constructed using Tits' theory of buildings, which allows one to construct a geometry with any given algebraic group as symmetries, but this requires starting with the Lie groups and constructing a geometry from them, rather than constructing a geometry independently of a knowledge of the Lie groups. Magic square. While at the level of manifolds and Lie groups, the construction of the projective plane P2(K ⊗ K′) of two normed division algebras does not work, the corresponding construction at the level of Lie algebras "does" work. 
That is, if one decomposes the Lie algebra of infinitesimal isometries of the projective plane P2(K) and applies the same analysis to P2(K ⊗ K′), one can use this decomposition, which holds when P2(K ⊗ K′) can actually be defined as a projective plane, as a "definition" of a "magic square Lie algebra" "M"(K,K′). This definition is purely algebraic, and holds even without assuming the existence of the corresponding geometric space. This was done independently circa 1958 in and by Freudenthal in a series of 11 papers, starting with and ending with , though the simplified construction outlined here is due to . See also. &lt;templatestyles src="Col-begin/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
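A quick numerical consistency check of Tits' construction is possible by counting dimensions: dim L = dim der(A) + dim der(J3(B)) + dim(A0) · dim(J3(B)0), with dim J3(B)0 = 3 dim B + 2 and dim A0 = dim A − 1. The sketch below assumes the standard dimensions dim der(H) = 3 and dim der(O) = 14 (compact g2), together with dim der(J3(A)) = 3, 8, 21, 52 for A = R, C, H, O, and verifies both the symmetry in A and B and the expected dimensions of the compact forms in the table, including f4, e6, e7 and e8.

```python
# dim A, dim der(A), dim der(J3(A)) for the four normed division algebras
ALGEBRAS = {
    "R": (1, 0, 3),    # der(J3(R)) ~ so(3)
    "C": (2, 0, 8),    # der(J3(C)) ~ su(3)
    "H": (4, 3, 21),   # der(H) ~ so(3), der(J3(H)) ~ sp(3)
    "O": (8, 14, 52),  # der(O) ~ g2,    der(J3(O)) ~ f4
}

def tits_dimension(a, b):
    """Dimension of L = der(A) + der(J3(B)) + A0 (tensor) J3(B)0 in Tits' construction."""
    dim_a, der_a, _ = ALGEBRAS[a]
    dim_b, _, der_j3b = ALGEBRAS[b]
    return der_a + der_j3b + (dim_a - 1) * (3 * dim_b + 2)

expected = {                       # compact forms from the magic square table
    ("R", "R"): 3,   ("R", "C"): 8,   ("R", "H"): 21,  ("R", "O"): 52,   # so3, su3, sp3, f4
    ("C", "C"): 16,  ("C", "H"): 35,  ("C", "O"): 78,                    # su3+su3, su6, e6
    ("H", "H"): 66,  ("H", "O"): 133,                                    # so12, e7
    ("O", "O"): 248,                                                     # e8
}
for (a, b), dim in expected.items():
    # the construction is not manifestly symmetric in A and B, but the dimensions are
    assert tits_dimension(a, b) == tits_dimension(b, a) == dim
```

Matching dimensions is of course only a consistency check, not a construction of the Lie bracket.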
[ { "math_id": 0, "text": " L=\\left (\\mathfrak{der}(A)\\oplus\\mathfrak{der}(J_3(B))\\right )\\oplus \\left (A_0\\otimes J_3(B)_0 \\right )" }, { "math_id": 1, "text": "\\mathfrak{der}" }, { "math_id": 2, "text": "\\mathfrak{der}(A)\\oplus\\mathfrak{der}(J_3(B))" }, { "math_id": 3, "text": "A_0\\otimes J_3(B)_0" }, { "math_id": 4, "text": "\\mathfrak{der}(J_3(B))" }, { "math_id": 5, "text": "\\mathfrak{sa}_3(A\\otimes B)" }, { "math_id": 6, "text": " \\mathfrak{der}(A)\\oplus\\mathfrak{der}(B)\\oplus\\mathfrak{sa}_3(A\\otimes B)." }, { "math_id": 7, "text": " \\mathfrak{der}(A)\\oplus\\mathfrak{der}(B)" }, { "math_id": 8, "text": " A_1\\times A_2\\times A_3 \\to \\mathbf R" }, { "math_id": 9, "text": "\\mathfrak{so}_8" }, { "math_id": 10, "text": " \\mathfrak{tri}(A)\\oplus\\mathfrak{tri}(B)\\oplus (A_1\\otimes B_1)\\oplus (A_2\\otimes B_2)\\oplus (A_3\\otimes B_3). " }, { "math_id": 11, "text": "\\mathfrak e_8\\cong \\mathfrak{so}_8\\oplus\\widehat{\\mathfrak{so}}_8\\oplus(V\\otimes \\widehat V)\\oplus (S_+\\otimes\\widehat S_+)\\oplus (S_-\\otimes \\widehat S_-)" }, { "math_id": 12, "text": "\\mathfrak{so}_{16}" }, { "math_id": 13, "text": "\\mathfrak f_4\\cong \\mathfrak{so}_9\\oplus \\Delta^{16}" }, { "math_id": 14, "text": "\\mathfrak e_6\\cong (\\mathfrak{so}_{10}\\oplus \\mathfrak u_1)\\oplus \\Delta^{32}" }, { "math_id": 15, "text": "\\mathfrak e_7\\cong (\\mathfrak{so}_{12}\\oplus \\mathfrak{sp}_1)\\oplus \\Delta_+^{64}" }, { "math_id": 16, "text": "\\mathfrak e_8\\cong \\mathfrak{so}_{16}\\oplus \\Delta_+^{128}." }, { "math_id": 17, "text": "(\\mathbf A \\otimes \\mathbf B)^n," } ]
https://en.wikipedia.org/wiki?curid=13796065
13797188
Vaught conjecture
The Vaught conjecture is a conjecture in the mathematical field of model theory originally proposed by Robert Lawson Vaught in 1961. It states that the number of countable models of a first-order complete theory in a countable language is finite or ℵ0 or 2ℵ0. Morley showed that the number of countable models is finite or ℵ0 or ℵ1 or 2ℵ0, which solves the conjecture except for the case of ℵ1 models when the continuum hypothesis fails. For this remaining case, Robin Knight (2002, 2007) has announced a counterexample to the Vaught conjecture and the topological Vaught conjecture. As of 2021, the counterexample has not been verified. Statement of the conjecture. Let formula_0 be a first-order, countable, complete theory with infinite models. Let formula_1 denote the number of models of "T" of cardinality formula_2 up to isomorphism—the spectrum of the theory formula_0. Morley proved that if "I"("T", ℵ0) is infinite then it must be ℵ0 or ℵ1 or the cardinality of the continuum. The Vaught conjecture is the statement that it is not possible for formula_3. The conjecture is a trivial consequence of the continuum hypothesis; so this axiom is often excluded in work on the conjecture. Alternatively, there is a sharper form of the conjecture that states that any countable complete "T" with uncountably many countable models will have a perfect set of uncountable models (as pointed out by John Steel, in "On Vaught's conjecture". Cabal Seminar 76–77 (Proc. Caltech-UCLA Logic Sem., 1976–77), pp. 193–208, Lecture Notes in Math., 689, Springer, Berlin, 1978, this form of the Vaught conjecture is equiprovable with the original). Original formulation. The original formulation by Vaught was not stated as a conjecture, but as a problem: "Can it be proved, without the use of the continuum hypothesis, that there exists a complete theory having exactly" ℵ1 "non-isomorphic denumerable models?" By the result by Morley mentioned at the beginning, a positive solution to the conjecture essentially corresponds to a negative answer to Vaught's problem as originally stated. Vaught's theorem. Vaught proved that the number of countable models of a complete theory cannot be 2. It can be any finite number other than 2, for example: The idea of the proof of Vaught's theorem is as follows. If there are at most countably many countable models, then there is a smallest one: the atomic model, and a largest one, the saturated model, which are different if there is more than one model. If they are different, the saturated model must realize some "n"-type omitted by the atomic model. Then one can show that an atomic model of the theory of structures realizing this "n"-type (in a language expanded by finitely many constants) is a third model, not isomorphic to either the atomic or the saturated model. In the example above with 3 models, the atomic model is the one where the sequence is unbounded, the saturated model is the one where the sequence converges, and an example of a type not realized by the atomic model is an element greater than all elements of the sequence. Topological Vaught conjecture. The topological Vaught conjecture is the statement that whenever a Polish group acts continuously on a Polish space, there are either countably many orbits or continuum many orbits. The topological Vaught conjecture is more general than the original Vaught conjecture: Given a countable language we can form the space of all structures on the natural numbers for that language. 
If we equip this with the topology generated by first-order formulas, then it is known from A. Grzegorczyk, A. Mostowski, C. Ryll-Nardzewski, "Definability of sets of models of axiomatic theories" ("Bulletin of the Polish Academy of Sciences (series Mathematics, Astronomy, Physics)", vol. 9 (1961), pp. 163–7) that the resulting space is Polish. There is a continuous action of the infinite symmetric group (the collection of all permutations of the natural numbers with the topology of point-wise convergence) that gives rise to the equivalence relation of isomorphism. Given a complete first-order theory "T", the set of structures satisfying "T" is a minimal, closed invariant set, and hence Polish in its own right.
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "I(T, \\alpha)" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\aleph_{0} < I(T,\\aleph_{0}) < 2^{\\aleph_{0}}" } ]
https://en.wikipedia.org/wiki?curid=13797188
13798040
Field with one element
In mathematics, the field with one element is a suggestive name for an object that should behave similarly to a finite field with a single element, if such a field could exist. This object is denoted F1, or, in a French–English pun, Fun. The name "field with one element" and the notation F1 are only suggestive, as there is no field with one element in classical abstract algebra. Instead, F1 refers to the idea that there should be a way to replace sets and operations, the traditional building blocks for abstract algebra, with other, more flexible objects. Many theories of F1 have been proposed, but it is not clear which, if any, of them give F1 all the desired properties. While there is still no field with a single element in these theories, there is a field-like object whose characteristic is one. Most proposed theories of F1 replace abstract algebra entirely. Mathematical objects such as vector spaces and polynomial rings can be carried over into these new theories by mimicking their abstract properties. This allows the development of commutative algebra and algebraic geometry on new foundations. One of the defining features of theories of F1 is that these new foundations allow more objects than classical abstract algebra does, one of which behaves like a field of characteristic one. The possibility of studying the mathematics of F1 was originally suggested in 1956 by Jacques Tits, published in , on the basis of an analogy between symmetries in projective geometry and the combinatorics of simplicial complexes. F1 has been connected to noncommutative geometry and to a possible proof of the Riemann hypothesis. History. In 1957, Jacques Tits introduced the theory of buildings, which relate algebraic groups to abstract simplicial complexes. One of the assumptions is a non-triviality condition: If the building is an "n"‑dimensional abstract simplicial complex, and if "k" &lt; "n", then every "k"‑simplex of the building must be contained in at least three "n"‑simplices. This is analogous to the condition in classical projective geometry that a line must contain at least three points. However, there are degenerate geometries that satisfy all the conditions to be a projective geometry except that the lines admit only two points. The analogous objects in the theory of buildings are called apartments. Apartments play such a constituent role in the theory of buildings that Tits conjectured the existence of a theory of projective geometry in which the degenerate geometries would have equal standing with the classical ones. This geometry would take place, he said, over a "field of characteristic one". Using this analogy it was possible to describe some of the elementary properties of F1, but it was not possible to construct it. After Tits' initial observations, little progress was made until the early 1990s. In the late 1980s, Alexander Smirnov gave a series of talks in which he conjectured that the Riemann hypothesis could be proven by considering the integers as a curve over a field with one element. By 1991, Smirnov had taken some steps towards algebraic geometry over F1, introducing extensions of F1 and using them to handle the projective line P1 over F1. Algebraic numbers were treated as maps to this P1, and conjectural approximations to the Riemann–Hurwitz formula for these maps were suggested. These approximations imply solutions to important problems like the abc conjecture. The extensions of F1 later on were denoted as F"q" with "q" = 1"n". 
Together with Mikhail Kapranov, Smirnov went on to explore how algebraic and number-theoretic constructions in prime characteristic might look in "characteristic one", culminating in an unpublished work released in 1995. In 1993, Yuri Manin gave a series of lectures on zeta functions where he proposed developing a theory of algebraic geometry over F1. He suggested that zeta functions of varieties over F1 would have very simple descriptions, and he proposed a relation between the K‑theory of F1 and the homotopy groups of spheres. This inspired several people to attempt to construct explicit theories of F1‑geometry. The first published definition of a variety over F1 came from Christophe Soulé in 1999, who constructed it using algebras over the complex numbers and functors from categories of certain rings. In 2000, Zhu proposed that F1 was the same as F2 except that the sum of one and one was one, not zero. Deitmar suggested that F1 should be found by forgetting the additive structure of a ring and focusing on the multiplication. Toën and Vaquié built on Hakim's theory of relative schemes and defined F1 using symmetric monoidal categories. Their construction was later shown to be equivalent to Deitmar's by Vezzani. Nikolai Durov constructed F1 as a commutative algebraic monad. Borger used descent to construct it from the finite fields and the integers. Alain Connes and Caterina Consani developed both Soulé and Deitmar's notions by "gluing" the category of multiplicative monoids and the category of rings to create a new category formula_0 then defining F1‑schemes to be a particular kind of representable functor on formula_1 Using this, they managed to provide a notion of several number-theoretic constructions over F1 such as motives and field extensions, as well as constructing Chevalley groups over F12. Along with Matilde Marcolli, Connes and Consani have also connected F1 with noncommutative geometry. It has also been suggested to have connections to the unique games conjecture in computational complexity theory. Oliver Lorscheid, along with others, has recently achieved Tits' original aim of describing Chevalley groups over F1 by introducing objects called blueprints, which are a simultaneous generalisation of both semirings and monoids. These are used to define so-called "blue schemes", one of which is Spec F1. Lorscheid's ideas depart somewhat from other ideas of groups over F1, in that the F1‑scheme is not itself the Weyl group of its base extension to normal schemes. Lorscheid first defines the Tits category, a full subcategory of the category of blue schemes, and defines the "Weyl extension", a functor from the Tits category to Set. A Tits–Weyl model of an algebraic group formula_2 is a blue scheme "G" with a group operation that is a morphism in the Tits category, whose base extension is formula_2 and whose Weyl extension is isomorphic to the Weyl group of formula_3 F1‑geometry has been linked to tropical geometry, via the fact that semirings (in particular, tropical semirings) arise as quotients of some monoid semiring N["A"] of finite formal sums of elements of a monoid "A", which is itself an F1‑algebra. This connection is made explicit by Lorscheid's use of blueprints. The Giansiracusa brothers have constructed a tropical scheme theory, for which their category of tropical schemes is equivalent to the category of Toën–Vaquié F1‑schemes. This category embeds faithfully, but not fully, into the category of blue schemes, and is a full subcategory of the category of Durov schemes. 
Motivations. Algebraic number theory. One motivation for F1 comes from algebraic number theory. Weil's proof of the Riemann hypothesis for curves over finite fields starts with a curve "C" over a finite field "k", which comes equipped with a function field "F", which is a field extension of "k". Each such function field gives rise to a Hasse–Weil zeta function "ζ""F", and the Riemann hypothesis for finite fields determines the zeroes of "ζ""F". Weil's proof then uses various geometric properties of "C" to study "ζ""F". The field of rational numbers Q is linked in a similar way to the Riemann zeta function, but Q is not the function field of a variety. Instead, Q is the function field of the scheme Spec Z. This is a one-dimensional scheme (also known as an algebraic curve), and so there should be some "base field" that this curve lies over, of which Q would be a field extension (in the same way that "C" is a curve over "k", and "F" is an extension of "k"). The hope of F1‑geometry is that a suitable object F1 could play the role of this base field, which would allow for a proof of the Riemann hypothesis by mimicking Weil's proof with F1 in place of "k". Arakelov geometry. Geometry over a field with one element is also motivated by Arakelov geometry, where Diophantine equations are studied using tools from complex geometry. The theory involves complicated comparisons between finite fields and the complex numbers. Here the existence of F1 is useful for technical reasons. Expected properties. F1 is not a field. F1 cannot be a field because by definition all fields must contain two distinct elements, the additive identity zero and the multiplicative identity one. Even if this restriction is dropped (for instance by letting the additive and multiplicative identities be the same element), a ring with one element must be the zero ring, which does not behave like a finite field. For instance, all modules over the zero ring are isomorphic (as the only element of such a module is the zero element). However, one of the key motivations of F1 is the description of sets as "F1‑vector spaces" – if finite sets were modules over the zero ring, then every finite set would be the same size, which is not the case. Moreover, the spectrum of the trivial ring is empty, but the spectrum of a field has one point. Computations. Various structures on a set are analogous to structures on a projective space, and can be computed in the same way: Sets are projective spaces. The number of elements of P(F) = P"n"−1(F"q"), the ("n" − 1)‑dimensional projective space over the finite field F"q", is the "q"‑integer formula_4 Taking "q" = 1 yields ["n"]"q" = "n". The expansion of the "q"‑integer into a sum of powers of "q" corresponds to the Schubert cell decomposition of projective space. Permutations are maximal flags. There are "n"! permutations of a set with "n" elements, and ["n"]!"q" maximal flags in F, where formula_5 is the "q"‑factorial. Indeed, a permutation of a set can be considered a filtered set, as a flag is a filtered vector space: for instance, the ordering (0, 1, 2) of the set {0, 1, 2} corresponds to the filtration {0} ⊂ {0, 1} ⊂ {0, 1, 2}. Subsets are subspaces. The binomial coefficient formula_6 gives the number of "m"-element subsets of an "n"-element set, and the "q"‑binomial coefficient formula_7 gives the number of "m"-dimensional subspaces of an "n"-dimensional vector space over F"q". 
The expansion of the "q"‑binomial coefficient into a sum of powers of "q" corresponds to the Schubert cell decomposition of the Grassmannian. Monoid schemes. Deitmar's construction of monoid schemes has been called "the very core of F1‑geometry", as most other theories of F1‑geometry contain descriptions of monoid schemes. Morally, it mimicks the theory of schemes developed in the 1950s and 1960s by replacing commutative rings with monoids. The effect of this is to "forget" the additive structure of the ring, leaving only the multiplicative structure. For this reason, it is sometimes called "non-additive geometry". Monoids. A multiplicative monoid is a monoid "A" that also contains an absorbing element 0 (distinct from the identity 1 of the monoid), such that 0"a" = 0 for every "a" in the monoid "A". The field with one element is then defined to be F1 = {0, 1}, the multiplicative monoid of the field with two elements, which is initial in the category of multiplicative monoids. A monoid ideal in a monoid "A" is a subset "I" that is multiplicatively closed, contains 0, and such that "IA" = {"ra" : "r" ∈ "I", "a" ∈ "A"} = "I". Such an ideal is prime if "A" ∖ "I" is multiplicatively closed and contains 1. For monoids "A" and "B", a monoid homomorphism is a function "f" : "A" → "B" such that Monoid schemes. The "spectrum" of a monoid "A", denoted Spec "A", is the set of prime ideals of "A". The spectrum of a monoid can be given a Zariski topology, by defining basic open sets formula_14 for each "h" in "A". A "monoidal space" is a topological space along with a sheaf of multiplicative monoids called the "structure sheaf". An "affine monoid scheme" is a monoidal space that is isomorphic to the spectrum of a monoid, and a monoid scheme is a sheaf of monoids that has an open cover by affine monoid schemes. Monoid schemes can be turned into ring-theoretic schemes by means of a base extension functor – ⊗F1 Z that sends the monoid "A" to the Z‑module (i.e. ring) Z["A"] / ⟨0"A"⟩, and a monoid homomorphism "f" : "A" → "B" extends to a ring homomorphism "f"Z : "A" ⊗F1 Z → "B" ⊗F1 Z that is linear as a Z‑module homomorphism. The base extension of an affine monoid scheme is defined via the formula formula_15 which in turn defines the base extension of a general monoid scheme. Consequences. This construction achieves many of the desired properties of F1‑geometry: Spec F1 consists of a single point, so behaves similarly to the spectrum of a field in conventional geometry, and the category of affine monoid schemes is dual to the category of multiplicative monoids, mirroring the duality of affine schemes and commutative rings. Furthermore, this theory satisfies the combinatorial properties expected of F1 mentioned in previous sections; for instance, projective space over F1 of dimension "n" as a monoid scheme is identical to an apartment of projective space over F"q" of dimension "n" when described as a building. However, monoid schemes do not fulfill all of the expected properties of a theory of F1‑geometry, as the only varieties that have monoid scheme analogues are toric varieties. More precisely, if "X" is a monoid scheme whose base extension is a flat, separated, connected scheme of finite type, then the base extension of "X" is a toric variety. Other notions of F1‑geometry, such as that of Connes–Consani, build on this model to describe F1‑varieties that are not toric. Field extensions. 
One may define field extensions of the field with one element as the group of roots of unity, or more finely (with a geometric structure) as the group scheme of roots of unity. This is non-naturally isomorphic to the cyclic group of order "n", the isomorphism depending on choice of a primitive root of unity: formula_16 Thus a vector space of dimension "d" over F1"n" is a finite set of order "dn" on which the roots of unity act freely, together with a base point. From this point of view the finite field F"q" is an algebra over F1"n", of dimension "d" = ("q" − 1)/"n" for any "n" that is a factor of "q" − 1 (for example "n" = "q" − 1 or "n" = 1). This corresponds to the fact that the group of units of a finite field F"q" (which are the "q" − 1 non-zero elements) is a cyclic group of order "q" − 1, on which any cyclic group of order dividing "q" − 1 acts freely (by raising to a power), and the zero element of the field is the base point. Similarly, the real numbers R are an algebra over F12, of infinite dimension, as the real numbers contain ±1, but no other roots of unity, and the complex numbers C are an algebra over F1"n" for all "n", again of infinite dimension, as the complex numbers have all roots of unity. From this point of view, any phenomenon that only depends on a field having roots of unity can be seen as coming from F1 – for example, the discrete Fourier transform (complex-valued) and the related number-theoretic transform (Z/"n"Z‑valued). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
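The counting identities in the Computations section above, where q-integers count points of projective spaces, q-factorials count maximal flags, and q-binomial coefficients count subspaces, all reducing to ordinary set counting at q = 1, can be verified with a few lines of code. This is only an illustrative sketch; the function names are ad hoc.

```python
def q_int(n, q):
    """[n]_q = 1 + q + ... + q^(n-1); the number of points of P^(n-1)(F_q)."""
    return sum(q**i for i in range(n))

def q_factorial(n, q):
    """[n]!_q = [1]_q [2]_q ... [n]_q; the number of maximal flags in (F_q)^n."""
    out = 1
    for k in range(1, n + 1):
        out *= q_int(k, q)
    return out

def q_binomial(n, m, q):
    """Gaussian binomial: m-dimensional subspaces of (F_q)^n.  The division is
    exact because the q-binomial is a polynomial in q with integer coefficients."""
    return q_factorial(n, q) // (q_factorial(m, q) * q_factorial(n - m, q))

# Setting q = 1 recovers ordinary counting of finite sets ("F1 vector spaces"):
assert q_int(5, 1) == 5                  # a projective space over F1 is just a finite set
assert q_factorial(4, 1) == 24           # maximal flags become permutations
assert q_binomial(5, 2, 1) == 10         # subspaces become 2-element subsets
# and an honest finite field still works, e.g. q = 2:
assert q_int(3, 2) == 7                  # the Fano plane P^2(F_2) has 7 points
assert q_binomial(4, 2, 2) == 35         # 2-dimensional subspaces of (F_2)^4
```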
[ { "math_id": 0, "text": "\\mathfrak{M}\\mathfrak{R}," }, { "math_id": 1, "text": "\\mathfrak{M}\\mathfrak{R}." }, { "math_id": 2, "text": "\\mathcal{G}" }, { "math_id": 3, "text": "\\mathcal{G}." }, { "math_id": 4, "text": "[n]_q := \\frac{q^n-1}{q-1}=1+q+q^2+\\dots+q^{n-1}." }, { "math_id": 5, "text": "[n]!_q := [1]_q [2]_q \\dots [n]_q" }, { "math_id": 6, "text": "\\frac{n!}{m!(n-m)!}" }, { "math_id": 7, "text": "\\frac{[n]!_q}{[m]!_q[n-m]!_q}" }, { "math_id": 8, "text": "f(0) = 0 ;" }, { "math_id": 9, "text": "f(1) = 1 ," }, { "math_id": 10, "text": "f(ab) = f(a)f(b)" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "b" }, { "math_id": 13, "text": "A ." }, { "math_id": 14, "text": "U_h = \\{\\mathfrak{p}\\in\\text{Spec}A:h\\notin\\mathfrak{p}\\}," }, { "math_id": 15, "text": "\\operatorname{Spec}(A)\\times_{\\operatorname{Spec}(\\mathbf{F}_1)}\\operatorname{Spec}(\\mathbf{Z})=\\operatorname{Spec}\\big( A\\otimes_{\\mathbf{F}_1}\\mathbf{Z}\\big)," }, { "math_id": 16, "text": "\\mathbf{F}_{1^n} = \\mu_n." } ]
https://en.wikipedia.org/wiki?curid=13798040
13802
Hammer
Tool A hammer is a tool, most often a hand tool, consisting of a weighted "head" fixed to a long handle that is swung to deliver an impact to a small area of an object. This can be, for example, to drive nails into wood, to shape metal (as with a forge), or to crush rock. Hammers are used for a wide range of driving, shaping, breaking and non-destructive striking applications. Traditional disciplines include carpentry, blacksmithing, warfare, and percussive musicianship (as with a gong). Hammering is use of a hammer in its strike capacity, as opposed to prying with a secondary claw or grappling with a secondary hook. Carpentry and blacksmithing hammers are generally wielded from a stationary stance against a stationary target as gripped and propelled with one arm, in a lengthy downward planar arc—downward to add kinetic energy to the impact—pivoting mainly around the shoulder and elbow, with a small but brisk wrist rotation shortly before impact; for extreme impact, concurrent motions of the torso and knee can lower the shoulder joint during the swing to further increase the length of the swing arc (but this is tiring). War hammers are often wielded in non-vertical planes of motion, with a far greater share of energy input provided from the legs and hips, which can also include a lunging motion, especially against moving targets. Small mallets can be swung from the wrists in a smaller motion permitting a much higher cadence of repeated strikes. Use of hammers and heavy mallets for demolition must adapt the hammer stroke to the location and orientation of the target, which can necessitate a clubbing or golfing motion with a two-handed grip. The modern hammer head is typically made of steel which has been heat treated for hardness, and the handle (also known as a haft or helve) is typically made of wood or plastic. Ubiquitous in framing, the claw hammer has a "claw" to pull nails out of wood, and is commonly found in an inventory of household tools in North America. Other types of hammers vary in shape, size, and structure, depending on their purposes. Hammers used in many trades include sledgehammers, mallets, and ball-peen hammers. Although most hammers are hand tools, powered hammers, such as steam hammers and trip hammers, are used to deliver forces beyond the capacity of the human arm. There are over 40 different types of hammers that have many different types of uses. For hand hammers, the grip of the shaft is an important consideration. Many forms of hammering by hand are heavy work, and perspiration can lead to slippage from the hand, turning a hammer into a dangerous or destructive uncontrolled projectile. Steel is highly elastic and transmits shock and vibration; steel is also a good conductor of heat, making it unsuitable for contact with bare skin in frigid conditions. Modern hammers with steel shafts are almost invariably clad with a synthetic polymer to improve grip, dampen vibration, and to provide thermal insulation. A suitably contoured handle is also an important aid in providing a secure grip during heavy use. Traditional wooden handles were reasonably good in all regards, but lack strength and durability compared to steel, and there are safety issues with wooden handles if the head becomes loose on the shaft. The high elasticity of the steel head is important in energy transfer, especially when used in conjunction with an equally elastic anvil. 
In terms of human physiology, many uses of the hammer involve coordinated ballistic movements under intense muscular forces which must be planned in advance at the neuromuscular level, as they occur too rapidly for conscious adjustment in flight. For this reason, accurate striking at speed requires more practice than a tapping movement to the same target area. It has been suggested that the cognitive demands for pre-planning, sequencing and accurate timing associated with the related ballistic movements of throwing, clubbing, and hammering precipitated aspects of brain evolution in early hominids. History. The use of simple hammers dates to around 3.3 million years ago according to the 2012 find made by Sonia Harmand and Jason Lewis of Stony Brook University, who while excavating a site near Kenya's Lake Turkana discovered a very large deposit of various shaped stones including those used to strike wood, bone, or other stones to break them apart and shape them. The first hammers were made without handles. Stones attached to sticks with strips of leather or animal sinew were being used as hammers with handles by about 30,000 BCE during the middle of the Paleolithic Stone Age. The addition of a handle gave the user better control and less accidents. The hammer became the primary tool used for building, food, and protection. The hammer's archaeological record shows that it may be the oldest tool for which definite evidence exists. Construction and materials. A traditional hand-held hammer consists of a separate head and a handle, which can be fastened together by means of a special wedge made for the purpose, or by glue, or both. This two-piece design is often used to combine a dense metallic striking head with a non-metallic mechanical-shock-absorbing handle (to reduce user fatigue from repeated strikes). If wood is used for the handle, it is often hickory or ash, which are tough and long-lasting materials that can dissipate shock waves from the hammer head. Rigid fiberglass resin may be used for the handle; this material does not absorb water or decay but does not dissipate shock as well as wood. A loose hammer head is considered hazardous due to the risk of the head becoming detached from the handle while being swung becoming a dangerous uncontrolled projectile. Wooden handles can often be replaced when worn or damaged; specialized kits are available covering a range of handle sizes and designs, plus special wedges and spacers for secure attachment. Some hammers are one-piece designs made mostly of a single material. A one-piece metallic hammer may optionally have its handle coated or wrapped in a resilient material such as rubber for improved grip and to reduce user fatigue. The hammer head may be surfaced with a variety of materials including brass, bronze, wood, plastic, rubber, or leather. Some hammers have interchangeable striking surfaces, which can be selected as needed or replaced when worn out. Designs and variations. A large hammer-like tool is a "maul" (sometimes called a "beetle"), a wood- or rubber-headed hammer is a "mallet", and a hammer-like tool with a cutting blade is usually called a "hatchet". The essential part of a hammer is the head, a compact solid mass that is able to deliver a blow to the intended target without itself deforming. The impacting surface of the tool is usually flat or slightly rounded; the opposite end of the impacting mass may have a ball shape, as in the ball-peen hammer. Some upholstery hammers have a magnetized face, to pick up tacks. 
In the hatchet, the flat hammer head may be secondary to the cutting edge of the tool. The impact between steel hammer heads and the objects being hit can create sparks, which may ignite flammable or explosive gases. These are a hazard in some industries such as underground coal mining (due to the presence of methane gas), or in other hazardous environments such as petroleum refineries and chemical plants. In these environments, a variety of non-sparking metal tools are used, primarily made of aluminium or beryllium copper. In recent years, the handles have been made of durable plastic or rubber, though wood is still widely used because of its shock-absorbing qualities and repairability. Mechanically powered. Mechanically powered hammers often look quite different from the hand tools, but nevertheless, most of them work on the same principle. They include: Physics. As a force amplifier. A hammer is a simple force amplifier that works by converting mechanical work into kinetic energy and back. In the swing that precedes each blow, the hammer head stores a certain amount of kinetic energy—equal to the length "D" of the swing times the force "f" produced by the muscles of the arm and by gravity. When the hammer strikes, the head is stopped by an opposite force coming from the target, equal and opposite to the force applied by the head to the target. If the target is a hard and heavy object, or if it is resting on some sort of anvil, the head can travel only a very short distance "d" before stopping. Since the stopping force "F" times that distance must be equal to the head's kinetic energy, it follows that "F" is much greater than the original driving force "f"—roughly, by a factor "D"/"d". In this way, great strength is not needed to produce a force strong enough to bend steel, or crack the hardest stone. Effect of the head's mass. The amount of energy delivered to the target by the hammer-blow is equivalent to one half the mass of the head times the square of the head's speed at the time of impact formula_0. While the energy delivered to the target increases linearly with mass, it increases quadratically with the speed (see the effect of the handle, below). High tech titanium heads are lighter and allow for longer handles, thus increasing velocity and delivering the same energy with less arm fatigue than that of a heavier steel head hammer. A titanium head has about 3% recoil energy and can result in greater efficiency and less fatigue when compared to a steel head with up to 30% recoil. Dead blow hammers use special rubber or steel shot to absorb recoil energy, rather than bouncing the hammer head after impact. Effect of the handle. The handle of the hammer helps in several ways. It keeps the user's hands away from the point of impact. It provides a broad area that is better-suited for gripping by the hand. Most importantly, it allows the user to maximize the speed of the head on each blow. The primary constraint on additional handle length is the lack of space to swing the hammer. This is why sledgehammers, largely used in open spaces, can have handles that are much longer than a standard carpenter's hammer. The second most important constraint is more subtle. Even without considering the effects of fatigue, the longer the handle, the harder it is to guide the head of the hammer to its target at full speed. Most designs are a compromise between practicality and energy efficiency. With too long a handle, the hammer is inefficient because it delivers force to the wrong place, off-target. 
With too short a handle, the hammer is inefficient because it does not deliver enough force, requiring more blows to complete a given task. Modifications have also been made with respect to the effect of the hammer on the user. Handles made of shock-absorbing materials or varying angles attempt to make it easier for the user to continue to wield this age-old device, even as nail guns and other powered drivers encroach on its traditional field of use. As hammers must be used in many circumstances, where the position of the person using them cannot be taken for granted, trade-offs are made for the sake of practicality. In areas where one has plenty of room, a long handle with a heavy head (like a sledgehammer) can deliver the maximum amount of energy to the target. It is not practical to use such a large hammer for all tasks, however, and thus the overall design has been modified repeatedly to achieve the optimum utility in a wide variety of situations. Effect of gravity. Gravity exerts a force on the hammer head. If hammering downwards, gravity increases the acceleration during the hammer stroke and increases the energy delivered with each blow. If hammering upwards, gravity reduces the acceleration during the hammer stroke and therefore reduces the energy delivered with each blow. Some hammering methods, such as traditional mechanical pile drivers, rely entirely on gravity for acceleration on the down stroke. Ergonomics and injury risks. A hammer may cause significant injury if it strikes the body. Both manual and powered hammers can cause peripheral neuropathy or a variety of other ailments when used improperly. Awkward handles can cause repetitive stress injury (RSI) to hand and arm joints, and uncontrolled shock waves from repeated impacts can injure nerves and the skeleton. Additionally, striking metal objects with a hammer may produce small metallic projectiles which can become lodged in the eye. It is therefore recommended to wear safety glasses. War hammers. A war hammer is a late medieval weapon of war intended for close combat action. Symbolism. The hammer, being one of the most used tools by man, has been used very much in symbols such as flags and heraldry. In the Middle Ages, it was used often in blacksmith guild logos, as well as in many family symbols. The hammer and pick are used as a symbol of mining. In mythology, the gods Thor (Norse) and Sucellus (Celtic and Gallo-Roman), and the hero Hercules (Greek), all had hammers that appear in their lore and carried different meanings. Thor, the god of thunder and lightning, wields a hammer named Mjölnir. Many artifacts of decorative hammers have been found, leading modern practitioners of this religion to often wear reproductions as a sign of their faith. In American folklore, the hammer of John Henry represents the strength and endurance of a man. A political party in Singapore, Workers' Party of Singapore, based their logo on a hammer to symbolize the party's civic nationalism and social democracy ideology. A variant, well-known symbol with a hammer in it is the hammer and sickle, which was the symbol of the former Soviet Union and is strongly linked to communism and early socialism. The hammer in this symbol represents the industrial working class (and the sickle represents the agricultural working class). The hammer is used in some coats of arms in former socialist countries like East Germany. Similarly, the Hammer and Sword symbolizes Strasserism, a strand of Nazism seeking to appeal to the working class. 
Another variant of the symbol was used for the North Korean party, Workers' Party of Korea, incorporated with an ink brush on the middle, which symbolizes both Juche and Songun ideologies. In Pink Floyd – The Wall, two hammers crossed are used as a symbol for the fascist takeover of the concert during "In the Flesh". This also has the meaning of the hammer beating down any "nails" that stick out. The gavel, a small wooden mallet, is used to symbolize a mandate to preside over a meeting or judicial proceeding, and a graphic image of one is used as a symbol of legislative or judicial decision-making authority. Judah Maccabee was nicknamed "The Hammer", possibly in recognition of his ferocity in battle. The name "Maccabee" may derive from the Aramaic "maqqaba". The hammer in the song "If I Had a Hammer" represents a relentless message of justice broadcast across the land. The song became a symbol of the civil rights movement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
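The force-amplification estimate from the Physics section above can be made concrete with a back-of-the-envelope calculation. The sketch below is an illustration added here; the mass, speed and distances are invented values, not measurements of any particular hammer.

import math

# Illustrative numbers only (assumed, not measured):
m = 0.5        # hammer-head mass, kg
v = 10.0       # head speed at impact, m/s
D = 0.8        # length of the swing, m
d = 0.003      # distance over which the head is stopped, m

E = 0.5 * m * v**2     # kinetic energy at impact, J
f = E / D              # average driving force over the swing, N
F = f * D / d          # average stopping force, roughly f * (D/d)

print(f"energy delivered      : {E:.1f} J")
print(f"average driving force : {f:.1f} N")
print(f"average stopping force: {F:.0f} N  (amplification ~ {D/d:.0f}x)")

With these assumed values the stopping force is several hundred times the driving force, which is the D/d amplification described above.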
[ { "math_id": 0, "text": "(E={mv^2 \\over 2})" } ]
https://en.wikipedia.org/wiki?curid=13802
13803804
Calculation of glass properties
The calculation of glass properties is used to predict glass properties of interest The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). History. Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time. In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped to show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work on the problem further and to evaluate all possible glass components systematically. Finally, Schott succeeded in producing homogeneous glass samples, and he invented borosilicate glass with the optical properties Abbe needed. These inventions gave rise to the well-known companies Zeiss and Schott Glass (see also Timeline of microscope technology). Systematic glass research was born. In 1908, Eugene Sullivan founded glass research also in the United States (Corning, New York). At the beginning of glass research it was most important to know the relation between the glass composition and its properties. For this purpose Otto Schott introduced the additivity principle in several publications for calculation of glass properties. This principle implies that the relation between the glass composition and a specific property is linear to all glass component concentrations, assuming an ideal mixture, with "Ci" and "bi" representing specific glass component concentrations and related coefficients respectively in the equation below. The additivity principle is a simplification and only valid within narrow composition ranges as seen in the displayed diagrams for the refractive index and the viscosity. 
Nevertheless, the application of the additivity principle lead the way to many of Schott's inventions, including optical glasses, glasses with low thermal expansion for cooking and laboratory ware (Duran), and glasses with reduced freezing point depression for mercury thermometers. Subsequently, English and Gehlhoff "et al." published similar additive glass property calculation models. Schott's additivity principle is still widely in use today in glass research and technology. Additivity Principle:    formula_0 Global models. Schott and many scientists and engineers afterwards applied the additivity principle to experimental data measured in their own laboratory within sufficiently narrow composition ranges (local glass models). This is most convenient because disagreements between laboratories and non-linear glass component interactions do not need to be considered. In the course of several decades of systematic glass research thousands of glass compositions were studied, resulting in millions of published glass properties, collected in glass databases. This huge pool of experimental data was not investigated as a whole, until Bottinga, Kucuk, Priven, Choudhary, Mazurin, and Fluegel published their global glass models, using various approaches. In contrast to the models by Schott the global models consider many independent data sources, making the model estimates more reliable. In addition, global models can reveal and quantify "non-additive" influences of certain glass component combinations on the properties, such as the "mixed-alkali effect" as seen in the adjacent diagram, or the "boron anomaly". Global models also reflect interesting developments of glass property measurement accuracy, e.g., a decreasing accuracy of experimental data in modern scientific literature for some glass properties, shown in the diagram. They can be used for accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). In the following sections (except melting enthalpy) "empirical" modeling techniques are presented, which seem to be a successful way for handling huge amounts of experimental data. The resulting models are applied in contemporary engineering and research for the calculation of glass properties. Non-empirical ("deductive") glass models exist. They are often not created to obtain reliable glass property predictions in the first place (except melting enthalpy), but to establish relations among several properties (e.g. atomic radius, atomic mass, chemical bond strength and angles, chemical valency, heat capacity) to gain scientific insight. In future, the investigation of property relations in deductive models may ultimately lead to reliable predictions for all desired properties, provided the property relations are well understood and all required experimental data are available. Methods. Glass properties and glass behavior during production can be calculated through statistical analysis of glass databases such as GE-SYSTEM SciGlass and Interglad, sometimes combined with the finite element method. For estimating the melting enthalpy thermodynamic databases are used. Linear regression. If the desired glass property is not related to crystallization (e.g., liquidus temperature) or phase separation, linear regression can be applied using common polynomial functions up to the third degree. Below is an example equation of the second degree. 
The "C"-values are the glass component concentrations like Na2O or CaO in percent or other fractions, the "b"-values are coefficients, and "n" is the total number of glass components. The glass main component silica (SiO2) is excluded in the equation below because of over-parametrization due to the constraint that all components sum up to 100%. Many terms in the equation below can be neglected based on correlation and significance analysis. Systematic errors such as seen in the picture are quantified by dummy variables. Further details and examples are available in an online tutorial by Fluegel. formula_1 Non-linear regression. The liquidus temperature has been modeled by non-linear regression using neural networks and disconnected peak functions. The disconnected peak functions approach is based on the observation that within one primary crystalline phase field linear regression can be applied and at eutectic points sudden changes occur. Glass melting enthalpy. The glass melting enthalpy reflects the amount of energy needed to convert the mix of raw materials (batch) to a melt glass. It depends on the batch and glass compositions, on the efficiency of the furnace and heat regeneration systems, the average residence time of the glass in the furnace, and many other factors. A pioneering article about the subject was written by Carl Kröger in 1953. Finite element method. For modeling of the glass flow in a glass melting furnace the finite element method is applied commercially, based on data or models for viscosity, density, thermal conductivity, heat capacity, absorption spectra, and other relevant properties of the glass melt. The finite element method may also be applied to glass forming processes. Optimization. It is often required to optimize several glass properties simultaneously, including production costs. This can be performed, e.g., by simplex search, or in a spreadsheet as follows: It is possible to weight the desired properties differently. Basic information about the principle can be found in an article by Huff "et al." The combination of several glass models together with further relevant technological and financial functions can be used in six sigma optimization. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
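As a minimal illustration of the additive model above, the coefficients b_i can be estimated from measured composition–property pairs by ordinary least squares. The Python sketch below uses invented concentrations and property values purely for illustration (they are not real glass data) and implements only the first-degree, purely additive form of the regression equation.

import numpy as np

# Hypothetical composition/property data (invented numbers, not real glasses).
# Columns: concentrations of Na2O, CaO, Al2O3; SiO2 is the excluded remainder.
C = np.array([
    [15.0, 10.0, 1.0],
    [12.0, 12.0, 2.0],
    [18.0,  8.0, 0.5],
    [14.0, 11.0, 3.0],
    [10.0, 14.0, 1.5],
])
prop = np.array([2.49, 2.50, 2.47, 2.52, 2.51])  # invented property values

# Design matrix with an intercept b0; least-squares estimate of b0 and b_i.
X = np.hstack([np.ones((C.shape[0], 1)), C])
coef, *_ = np.linalg.lstsq(X, prop, rcond=None)
b0, b = coef[0], coef[1:]

# Additive prediction for a new composition: property = b0 + sum_i b_i * C_i
new_glass = np.array([16.0, 9.0, 1.0])
print("coefficients:", np.round(coef, 4))
print("predicted property:", round(float(b0 + new_glass @ b), 4))

The second- and third-degree models discussed above correspond simply to augmenting the design matrix with the products C_i·C_k before the same least-squares step.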
[ { "math_id": 0, "text": "\\mbox{Glass Property} = b_0 + \\sum_{i=1}^n b_iC_i" }, { "math_id": 1, "text": "\\mbox{Glass Property} = b_0 + \\sum_{i=1}^n \\left( b_iC_i + \\sum_{k=i}^n b_{ik}C_iC_k \\right)" } ]
https://en.wikipedia.org/wiki?curid=13803804
13808007
Acoustic rheometer
An acoustic rheometer is a device used to measure the rheological properties of fluids, such as viscosity and elasticity, by utilizing sound waves. It works by generating acoustic waves in the fluid and analyzing the changes in the wave propagation caused by the fluid's rheological behavior. An acoustic rheometer uses a piezo-electric crystal to generate the acoustic waves, applying an oscillating extensional stress to the system. The system response can be interpreted in terms of extensional rheology. This interpretation is based on a link between shear rheology, extensional rheology and acoustics. The relationship between these scientific disciplines was described in detail by Litovitz and Davis in 1964. It is well known that the properties of a viscoelastic fluid are characterised in shear rheology with a shear modulus "G", which links shear stress "Tij" and shear strain "Sij" formula_0 There is a similar linear relationship in extensional rheology between extensional stress "P", extensional strain "S" and extensional modulus "K": formula_1 Detailed theoretical analysis indicates that the propagation of sound or ultrasound through a viscoelastic fluid depends on both the shear modulus "G" and the extensional modulus "K". It is convenient to introduce a combined longitudinal modulus "M": formula_2 There are simple equations that express the longitudinal modulus in terms of acoustic properties, the sound speed "V" and the attenuation α: formula_3 formula_4 An acoustic rheometer measures the sound speed and attenuation of ultrasound for a set of frequencies in the megahertz range. These measurable parameters can be converted into the real and imaginary components of the "longitudinal modulus". The sound speed determines M', which is a measure of system elasticity. It can be converted into fluid compressibility. The attenuation determines M", which is a measure of viscous properties, i.e. energy dissipation. This parameter can be considered as an extensional viscosity. In the case of a Newtonian liquid, the attenuation yields information on the volume viscosity. Stokes' law (sound attenuation) provides the relationship among the attenuation, the dynamic viscosity and the volume viscosity of a Newtonian fluid. This type of rheometer works at much higher frequencies than others. It is suitable for studying effects with much shorter relaxation times than any other rheometer. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
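As a small numerical sketch of the conversion described above (added here for illustration, not part of the original text), the real and imaginary parts of the longitudinal modulus can be computed directly from a sound speed and an attenuation; the numbers below are assumed, roughly water-like values rather than actual measurements.

import math

# Illustrative input values (assumed, not measured):
rho   = 1000.0   # density, kg/m^3
V     = 1500.0   # sound speed, m/s
alpha = 25.0     # attenuation, Np/m, at the chosen frequency
f     = 10e6     # ultrasound frequency, Hz
omega = 2 * math.pi * f

M_real = rho * V**2                      # M'  = rho * V^2 (elastic part)
M_imag = 2 * rho * alpha * V**3 / omega  # M'' = 2 rho alpha V^3 / omega (loss part)

print(f"M'  = {M_real:.3e} Pa")
print(f"M'' = {M_imag:.3e} Pa")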
[ { "math_id": 0, "text": " {T_{ij} = G \\cdot S _{ij}} " }, { "math_id": 1, "text": " {P = - K \\cdot S} " }, { "math_id": 2, "text": " M =M' + M''=K + \\frac{4}{3}G" }, { "math_id": 3, "text": " M'= \\rho \\cdot V^2" }, { "math_id": 4, "text": " M''= \\frac{2 \\rho \\alpha V^3}{\\omega}" } ]
https://en.wikipedia.org/wiki?curid=13808007
13808626
Fréchet distribution
Continuous probability distribution The Fréchet distribution, also known as inverse Weibull distribution, is a special case of the generalized extreme value distribution. It has the cumulative distribution function formula_0 where "α" &gt; 0 is a shape parameter. It can be generalised to include a location parameter "m" (the minimum) and a scale parameter "s" &gt; 0 with the cumulative distribution function formula_1 It is named for Maurice Fréchet, who wrote a related paper in 1927; further work was done by Fisher and Tippett in 1928 and by Gumbel in 1958. Characteristics. The single parameter Fréchet with parameter formula_2 has standardized moment formula_3 (with formula_4) defined only for formula_5: formula_6 where formula_7 is the Gamma function. In particular: for formula_8 the mean is formula_9 and for formula_10 the variance is formula_11 The quantile formula_12 of order formula_13 can be expressed through the inverse of the distribution, formula_14. In particular the median is: formula_15 The mode of the distribution is formula_16 Especially for the 3-parameter Fréchet, the first quartile is formula_17 and the third quartile formula_18 Also the quantiles for the mean and mode are: formula_19 formula_20 Applications. In hydrology, the Fréchet distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges. However, in most hydrological applications, the distribution fitting is via the generalized extreme value distribution as this avoids imposing the assumption that the distribution does not have a lower bound (as required by the Fréchet distribution). References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;templatestyles src="Refbegin/styles.css" /&gt;
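A small self-contained sketch (an illustration added here; the function names and the parameter choices α = 2.5, m = 0, s = 1 are arbitrary) shows how the distribution can be sampled by inverting the cumulative distribution function, together with a check of the median formula quoted above.

import math
import random

alpha, m, s = 2.5, 0.0, 1.0  # shape, location, scale (illustrative choices)

def frechet_cdf(x, alpha, m, s):
    # Pr(X <= x) = exp(-((x - m)/s)**(-alpha)) for x > m, 0 otherwise
    return math.exp(-((x - m) / s) ** (-alpha)) if x > m else 0.0

def frechet_sample(alpha, m, s):
    # Inverse-transform sampling: x = m + s * (-log U)**(-1/alpha), U uniform on (0, 1)
    u = random.random()
    return m + s * (-math.log(u)) ** (-1.0 / alpha)

random.seed(1)
samples = [frechet_sample(alpha, m, s) for _ in range(100_000)]

# Empirical check: about half of the samples should fall below the median
# m + s * (log 2)**(-1/alpha).
median = m + s * (math.log(2)) ** (-1.0 / alpha)
below = sum(x <= median for x in samples) / len(samples)
print("theoretical median:", round(median, 4), " empirical P(X <= median):", round(below, 4))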
[ { "math_id": 0, "text": "\\Pr(X \\le x)=e^{-x^{-\\alpha}} \\text{ if } x>0. " }, { "math_id": 1, "text": "\\Pr(X \\le x)=e^{-\\left(\\frac{x-m}{s}\\right)^{-\\alpha}} \\text{ if } x>m. " }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\mu_k=\\int_0^\\infty x^k f(x)dx=\\int_0^\\infty t^{-\\frac{k}{\\alpha}}e^{-t} \\, dt," }, { "math_id": 4, "text": "t=x^{-\\alpha}" }, { "math_id": 5, "text": "k<\\alpha" }, { "math_id": 6, "text": "\\mu_k=\\Gamma\\left(1-\\frac{k}{\\alpha}\\right)" }, { "math_id": 7, "text": "\\Gamma\\left(z\\right)" }, { "math_id": 8, "text": "\\alpha>1" }, { "math_id": 9, "text": "E[X]=\\Gamma(1-\\tfrac{1}{\\alpha})" }, { "math_id": 10, "text": "\\alpha>2" }, { "math_id": 11, "text": "\\text{Var}(X)=\\Gamma(1-\\tfrac{2}{\\alpha})-\\big(\\Gamma(1-\\tfrac{1}{\\alpha})\\big)^2." }, { "math_id": 12, "text": "q_y" }, { "math_id": 13, "text": "y" }, { "math_id": 14, "text": "q_y=F^{-1}(y)=\\left(-\\log_e y \\right)^{-\\frac{1}{\\alpha}}" }, { "math_id": 15, "text": "q_{1/2}=(\\log_e 2)^{-\\frac{1}{\\alpha}}." }, { "math_id": 16, "text": "\\left(\\frac{\\alpha}{\\alpha+1}\\right)^\\frac{1}{\\alpha}." }, { "math_id": 17, "text": "q_1= m+\\frac{s}{\\sqrt[\\alpha]{\\log(4)}} " }, { "math_id": 18, "text": "q_3= m+\\frac{s}{\\sqrt[\\alpha]{\\log(\\frac{4}{3})}}. " }, { "math_id": 19, "text": "F(mean)=\\exp \\left( -\\Gamma^{-\\alpha} \\left(1- \\frac{1}{\\alpha} \\right) \\right)" }, { "math_id": 20, "text": "F(mode)=\\exp \\left( -\\frac{\\alpha+1}{\\alpha} \\right)." }, { "math_id": 21, "text": "Z_i = -1/\\log F_i(X_i)" }, { "math_id": 22, "text": "(R, W)= (Z_1 + Z_2, Z_1/(Z_1 + Z_2))" }, { "math_id": 23, "text": "R \\gg 1" }, { "math_id": 24, "text": "W" }, { "math_id": 25, "text": " X \\sim U(0,1) \\," }, { "math_id": 26, "text": " m + s(-\\log(X))^{-1/\\alpha} \\sim \\textrm{Frechet}(\\alpha,s,m)\\," }, { "math_id": 27, "text": " X \\sim \\textrm{Frechet}(\\alpha,s,m)\\," }, { "math_id": 28, "text": " k X + b \\sim \\textrm{Frechet}(\\alpha,k s,k m + b)\\," }, { "math_id": 29, "text": " X_i \\sim \\textrm{Frechet}(\\alpha,s,m) \\, " }, { "math_id": 30, "text": " Y=\\max\\{\\,X_1,\\ldots,X_n\\,\\} \\, " }, { "math_id": 31, "text": " Y \\sim \\textrm{Frechet}(\\alpha,n^{\\tfrac{1}{\\alpha}} s,m) \\," }, { "math_id": 32, "text": " X \\sim \\textrm{Frechet}(\\alpha,s,m=0)\\," }, { "math_id": 33, "text": "X^{-1} \\sim \\textrm{Weibull}(k=\\alpha, \\lambda=s^{-1})\\," } ]
https://en.wikipedia.org/wiki?curid=13808626
1381368
Superlattice
Periodic structure of layers of two or more materials A superlattice is a periodic structure of layers of two (or more) materials. Typically, the thickness of one layer is several nanometers. It can also refer to a lower-dimensional structure such as an array of quantum dots or quantum wells. Discovery. Superlattices were discovered early in 1925 by Johansson and Linde after the studies on gold-copper and palladium-copper systems through their special X-ray diffraction patterns. Further experimental observations and theoretical modifications on the field were done by Bradley and Jay, Gorsky, Borelius, Dehlinger and Graf, Bragg and Williams and Bethe. Theories were based on the transition of arrangement of atoms in crystal lattices from disordered state to an ordered state. Mechanical properties. J.S. Koehler theoretically predicted that by using alternate (nano-)layers of materials with high and low elastic constants, shearing resistance is improved by up to 100 times as the Frank–Read source of dislocations cannot operate in the nanolayers. The increased mechanical hardness of such superlattice materials was confirmed firstly by Lehoczky in 1978 on Al-Cu and Al-Ag, and later on by several others, such as Barnett and Sproul on hard PVD coatings. Semiconductor properties. If the superlattice is made of two semiconductor materials with different band gaps, each quantum well sets up new selection rules that affect the conditions for charges to flow through the structure. The two different semiconductor materials are deposited alternately on each other to form a periodic structure in the growth direction. Since the 1970 proposal of synthetic superlattices by Esaki and Tsu, advances in the physics of such ultra-fine semiconductors, presently called quantum structures, have been made. The concept of quantum confinement has led to the observation of quantum size effects in isolated quantum well heterostructures and is closely related to superlattices through the tunneling phenomena. Therefore, these two ideas are often discussed on the same physical basis, but each has different physics useful for applications in electric and optical devices. Semiconductor superlattice types. Superlattice miniband structures depend on the heterostructure type, either "type I", "type II" or "type III". For type I the bottom of the conduction band and the top of the valence subband are formed in the same semiconductor layer. In type II the conduction and valence subbands are staggered in both real and reciprocal space, so that electrons and holes are confined in different layers. Type III superlattices involve semimetal material, such as HgTe/CdTe. Although the bottom of the conduction subband and the top of the valence subband are formed in the same semiconductor layer in Type III superlattice, which is similar with Type I superlattice, the band gap of Type III superlattices can be continuously adjusted from semiconductor to zero band gap material and to semimetal with negative band gap. Another class of quasiperiodic superlattices is named after Fibonacci. A Fibonacci superlattice can be viewed as a one-dimensional quasicrystal, where either electron hopping transfer or on-site energy takes two values arranged in a Fibonacci sequence. Semiconductor materials. Semiconductor materials, which are used to fabricate the superlattice structures, may be divided by the element groups, IV, III-V and II-VI. 
While group III-V semiconductors (especially GaAs/AlxGa1−xAs) have been extensively studied, group IV heterostructures such as the SixGe1−x system are much more difficult to realize because of the large lattice mismatch. Nevertheless, the strain modification of the subband structures is interesting in these quantum structures and has attracted much attention. In the GaAs/AlAs system both the difference in lattice constant between GaAs and AlAs and the difference of their thermal expansion coefficient are small. Thus, the remaining strain at room temperature can be minimized after cooling from epitaxial growth temperatures. The first compositional superlattice was realized using the GaAs/AlxGa1−xAs material system. A graphene/boron nitride system forms a semiconductor superlattice once the two crystals are aligned. Its charge carriers move perpendicular to the electric field, with little energy dissipation. h-BN has a hexagonal structure similar to graphene's. The superlattice has broken inversion symmetry. Locally, topological currents are comparable in strength to the applied current, indicating large valley-Hall angles. Production. Superlattices can be produced using various techniques, but the most common are molecular-beam epitaxy (MBE) and sputtering. With these methods, layers can be produced with thicknesses of only a few atomic spacings. An example of specifying a superlattice is [Fe20V30]20. It describes a bi-layer of 20Å of iron (Fe) and 30Å of vanadium (V) repeated 20 times, thus yielding a total thickness of 1000Å or 100 nm. The MBE technology as a means of fabricating semiconductor superlattices is of primary importance. In addition to the MBE technology, metal-organic chemical vapor deposition (MO-CVD) has contributed to the development of semiconductor superlattices, which are composed of quaternary III-V compound semiconductors like InGaAsP alloys. Newer techniques combining gas-source handling with ultrahigh-vacuum (UHV) technology, such as MBE using metal-organic molecules as source materials and gas-source MBE using hydride gases such as arsine (AsH3) and phosphine (PH3), have also been developed. Generally speaking, MBE is a method of using three temperatures in binary systems, e.g., the substrate temperature, the source material temperature of the group III and the group V elements in the case of III-V compounds. The structural quality of the produced superlattices can be verified by means of X-ray diffraction or neutron diffraction spectra which contain characteristic satellite peaks. Other effects associated with the alternating layering are: giant magnetoresistance, tunable reflectivity for X-ray and neutron mirrors, neutron spin polarization, and changes in elastic and acoustic properties. Depending on the nature of its components, a superlattice may be called "magnetic", "optical" or "semiconducting". Miniband structure. The schematic structure of a periodic superlattice is shown below, where A and B are two semiconductor materials of respective layer thickness "a" and "b" (period: formula_0). When "a" and "b" are not too small compared with the interatomic spacing, an adequate approximation is obtained by replacing these fast varying potentials by an effective potential derived from the band structure of the original bulk semiconductors. It is straightforward to solve 1D Schrödinger equations in each of the individual layers, whose solutions formula_1 are linear combinations of real or imaginary exponentials.
For a large barrier thickness, tunneling is a weak perturbation with regard to the uncoupled dispersionless states, which are fully confined as well. In this case the dispersion relation formula_2, periodic over formula_3 with over formula_4 by virtue of the Bloch theorem, is fully sinusoidal: formula_5 and the effective mass changes sign for formula_6: formula_7 In the case of minibands, this sinusoidal character is no longer preserved. Only high up in the miniband (for wavevectors well beyond formula_8) is the top actually 'sensed' and does the effective mass change sign. The shape of the miniband dispersion influences miniband transport profoundly and accurate dispersion relation calculations are required given wide minibands. The condition for observing single miniband transport is the absence of interminiband transfer by any process. The thermal quantum "kBT" should be much smaller than the energy difference formula_9 between the first and second miniband, even in the presence of the applied electric field. Bloch states. For an ideal superlattice a complete set of eigenstates states can be constructed by products of plane waves formula_10 and a "z"-dependent function formula_11 which satisfies the eigenvalue equation formula_12. As formula_13 and formula_14 are periodic functions with the superlattice period "d", the eigenstates are Bloch state formula_15 with energy formula_16. Within first-order perturbation theory in k2, one obtains the energy formula_17. Now, formula_18 will exhibit a larger probability in the well, so that it seems reasonable to replace the second term by formula_19 where formula_20 is the effective mass of the quantum well. Wannier functions. By definition the Bloch functions are delocalized over the whole superlattice. This may provide difficulties if electric fields are applied or effects due to the superlattice's finite length are considered. Therefore, it is often helpful to use different sets of basis states that are better localized. A tempting choice would be the use of eigenstates of single quantum wells. Nevertheless, such a choice has a severe shortcoming: the corresponding states are solutions of two different Hamiltonians, each neglecting the presence of the other well. Thus these states are not orthogonal, creating complications. Typically, the coupling is estimated by the transfer Hamiltonian within this approach. For these reasons, it is more convenient to use the set of Wannier functions. Wannier–Stark ladder. Applying an electric field "F" to the superlattice structure causes the Hamiltonian to exhibit an additional scalar potential "eφ"("z") = −"eFz" that destroys the translational invariance. In this case, given an eigenstate with wavefunction formula_21 and energy formula_22, then the set of states corresponding to wavefunctions formula_23 are eigenstates of the Hamiltonian with energies "E""j" = "E"0 − "jeFd". These states are equally spaced both in energy and real space and form the so-called "Wannier–Stark ladder". The potential formula_24 is not bounded for the infinite crystal, which implies a continuous energy spectrum. Nevertheless, the characteristic energy spectrum of these Wannier–Stark ladders could be resolved experimentally. Transport. The motion of charge carriers in a superlattice is different from that in the individual layers: mobility of charge carriers can be enhanced, which is beneficial for high-frequency devices, and specific optical properties are used in semiconductor lasers. 
If an external bias is applied to a conductor, such as a metal or a semiconductor, typically an electric current is generated. The magnitude of this current is determined by the band structure of the material, scattering processes, the applied field strength and the equilibrium carrier distribution of the conductor. A particular case of superlattices called superstripes are made of superconducting units separated by spacers. In each miniband the superconducting order parameter, called the superconducting gap, takes different values, producing a multi-gap, or two-gap or multiband superconductivity. Recently, Felix and Pereira investigated the thermal transport by phonons in periodic and quasiperiodic superlattices of graphene-hBN according to the Fibonacci sequence. They reported that the contribution of coherent thermal transport (phonons like-wave) was suppressed as quasiperiodicity increased. Other dimensionalities. Soon after two-dimensional electron gases (2DEG) had become commonly available for experiments, research groups attempted to create structures that could be called 2D artificial crystals. The idea is to subject the electrons confined to an interface between two semiconductors (i.e. along "z"-direction) to an additional modulation potential "V"("x","y"). Contrary to the classical superlattices (1D/3D, that is 1D modulation of electrons in 3D bulk) described above, this is typically achieved by treating the heterostructure surface: depositing a suitably patterned metallic gate or etching. If the amplitude of "V"("x","y") is large (take formula_25 as an example) compared to the Fermi level, formula_26, the electrons in the superlattice should behave similarly to electrons in an atomic crystal with square lattice (in the example, these "atoms" would be located at positions ("na","ma") where "n","m" are integers). The difference is in the length and energy scales. Lattice constants of atomic crystals are of the order of 1Å while those of superlattices ("a") are several hundreds or thousands larger as dictated by technological limits (e.g. electron-beam lithography used for the patterning of the heterostructure surface). Energies are correspondingly smaller in superlattices. Using the simple quantum-mechanically confined-particle model suggests formula_27. This relation is only a rough guide and actual calculations with currently topical graphene (a natural atomic crystal) and artificial graphene (superlattice) show that characteristic band widths are of the order of 1 eV and 10 meV, respectively. In the regime of weak modulation (formula_28), phenomena like commensurability oscillations or fractal energy spectra (Hofstadter butterfly) occur. Artificial two-dimensional crystals can be viewed as a 2D/2D case (2D modulation of a 2D system) and other combinations are experimentally available: an array of quantum wires (1D/2D) or 3D/3D photonic crystals. Applications. The superlattice of palladium-copper system is used in high performance alloys to enable a higher electrical conductivity, which is favored by the ordered structure. Further alloying elements like silver, rhenium, rhodium and ruthenium are added for better mechanical strength and high temperature stability. This alloy is used for probe needles in probe cards. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
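The sinusoidal miniband dispersion quoted in the miniband-structure section above can be evaluated numerically. The sketch below is only an illustration: the miniband width Δ = 20 meV and the period d = 10 nm are assumed values, not data for any particular superlattice, and the zone-centre effective mass follows from m* = 2ħ²/(Δd²).

import numpy as np

hbar  = 1.054571817e-34   # J*s
eV    = 1.602176634e-19   # J
Delta = 0.020 * eV        # miniband width, 20 meV (assumed)
d     = 10e-9             # superlattice period, 10 nm (assumed)

# Tight-binding miniband E(k) = (Delta/2) * (1 - cos(k d)) over the mini-Brillouin zone
k = np.linspace(-np.pi / d, np.pi / d, 2001)
E = 0.5 * Delta * (1.0 - np.cos(k * d))

# Effective mass at the zone centre: m* = hbar^2 / (d^2 E / dk^2) = 2 hbar^2 / (Delta d^2)
m_star = 2.0 * hbar**2 / (Delta * d**2)
m_e    = 9.1093837015e-31
print("miniband top :", E.max() / eV * 1000, "meV")
print("m* at k = 0  :", m_star / m_e, "electron masses")

With these assumed numbers the miniband electron is much lighter than a free electron, which is why miniband transport can be fast despite the weak interwell coupling.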
[ { "math_id": 0, "text": "d=a+b" }, { "math_id": 1, "text": " \\psi" }, { "math_id": 2, "text": " E_z(k_z) " }, { "math_id": 3, "text": "2 \\pi /d " }, { "math_id": 4, "text": " d=a+b " }, { "math_id": 5, "text": "\\ E_z(k_z)=\\frac{\\Delta}{2}(1-\\cos(k_z d))" }, { "math_id": 6, "text": " 2\\pi /d" }, { "math_id": 7, "text": "\\ {m^* = \\frac{\\hbar^2}{\\partial^2 E / \\partial k^2}}|_{k=0}" }, { "math_id": 8, "text": "2 \\pi /d" }, { "math_id": 9, "text": " E_2-E_1" }, { "math_id": 10, "text": " e^{ i \\mathbf{k} \\cdot \\mathbf{r} }/ 2\\pi " }, { "math_id": 11, "text": "f_k (z)" }, { "math_id": 12, "text": " \\left( E_c(z) - \\frac{\\partial }{\\partial z} \\frac{\\hbar^2}{2 m_c (z)} \\frac{\\partial }{\\partial z} + \\frac {\\hbar^2 \\mathbf{k} ^2}{2m_c (z)} \\right) f_k (z) = E f_k (z) " }, { "math_id": 13, "text": " E_c (z) " }, { "math_id": 14, "text": " m_c(z) " }, { "math_id": 15, "text": " f_k (z)= \\phi _{q, \\mathbf{k}}(z)" }, { "math_id": 16, "text": "E^\\nu (q, \\mathbf{k})" }, { "math_id": 17, "text": " E^ \\nu (q, \\mathbf{k}) \\approx E^ \\nu(q, \\mathbf{0}) + \\langle \\phi _{q, \\mathbf{k}} \\mid \\frac{\\hbar^2 \\mathbf{k}^2}{2m_c (z)} \\mid \\phi _{q, \\mathbf{k}} \\rangle " }, { "math_id": 18, "text": " \\phi _{q, \\mathbf{0}} (z) " }, { "math_id": 19, "text": " E_k = \\frac{\\hbar^2 \\mathbf{k}^2}{2m_w} " }, { "math_id": 20, "text": "m_w" }, { "math_id": 21, "text": " \\Phi_0 (z) " }, { "math_id": 22, "text": "E_0" }, { "math_id": 23, "text": "\\Phi_j (z)= \\Phi_0 (z-jd) " }, { "math_id": 24, "text": " \\Phi_0 (z)" }, { "math_id": 25, "text": "V(x,y)=-V_0(\\cos 2\\pi x/a+\\cos 2\\pi y/a), V_0>0" }, { "math_id": 26, "text": "|V_0|\\gg E_f" }, { "math_id": 27, "text": "E\\propto 1/a^2" }, { "math_id": 28, "text": "|V_0|\\ll E_f" } ]
https://en.wikipedia.org/wiki?curid=1381368
13816037
Kervaire invariant
In mathematics, the Kervaire invariant is an invariant of a framed formula_0-dimensional manifold that measures whether the manifold could be surgically converted into a sphere. This invariant evaluates to 0 if the manifold can be converted to a sphere, and 1 otherwise. This invariant was named after Michel Kervaire who built on work of Cahit Arf. The Kervaire invariant is defined as the Arf invariant of the skew-quadratic form on the middle dimensional homology group. It can be thought of as the simply-connected "quadratic" L-group formula_1, and thus analogous to the other invariants from L-theory: the signature, a formula_2-dimensional invariant (either symmetric or quadratic, formula_3), and the De Rham invariant, a formula_4-dimensional "symmetric" invariant formula_5. In any given dimension, there are only two possibilities: either all manifolds have Arf–Kervaire invariant equal to 0, or half have Arf–Kervaire invariant 0 and the other half have Arf–Kervaire invariant 1. The Kervaire invariant problem is the problem of determining in which dimensions the Kervaire invariant can be nonzero. For differentiable manifolds, this can happen in dimensions 2, 6, 14, 30, 62, and possibly 126, and in no other dimensions. On May 30, 2024, Zhouli Xu (in collaboration with Weinan Lin and Guozhen Wang) announced during a seminar at Princeton University that the final case of dimension 126 has been settled. Xu stated that formula_6 survives so that there exists a manifold of Kervaire invariant 1 in dimension 126. (https://www.math.princeton.edu/events/computing-differentials-adams-spectral-sequence-2024-05-30t170000) Definition. The Kervaire invariant is the Arf invariant of the quadratic form determined by the framing on the middle-dimensional formula_7-coefficient homology group formula_8 and is thus sometimes called the Arf–Kervaire invariant. The quadratic form (properly, skew-quadratic form) is a quadratic refinement of the usual ε-symmetric form on the middle dimensional homology of an (unframed) even-dimensional manifold; the framing yields the quadratic refinement. The quadratic form "q" can be defined by algebraic topology using functional Steenrod squares, and geometrically via the self-intersections of immersions formula_9 formula_10 formula_11 determined by the framing, or by the triviality/non-triviality of the normal bundles of embeddings formula_9 formula_10 formula_11 (for formula_12) and the mod 2 Hopf invariant of maps formula_13 (for formula_14). History. The Kervaire invariant is a generalization of the Arf invariant of a framed surface (that is, a 2-dimensional manifold with stably trivialized tangent bundle) which was used by Lev Pontryagin in 1950 to compute the homotopy group formula_15 of maps formula_16 formula_10 formula_17 (for formula_18), which is the cobordism group of surfaces embedded in formula_16 with trivialized normal bundle. Kervaire (1960) used his invariant for "n" = 10 to construct the Kervaire manifold, a 10-dimensional PL manifold with no differentiable structure, the first example of such a manifold, by showing that his invariant does not vanish on this PL manifold, but vanishes on all smooth manifolds of dimension 10. Kervaire and Milnor (1963) compute the group of exotic spheres (in dimension greater than 4), with one step in the computation depending on the Kervaire invariant problem.
Specifically, they show that the set of exotic spheres of dimension "n" – specifically the monoid of smooth structures on the standard "n"-sphere – is isomorphic to the group formula_19 of "h"-cobordism classes of oriented homotopy "n"-spheres. They compute this latter in terms of a map formula_20 where formula_21 is the cyclic subgroup of "n"-spheres that bound a parallelizable manifold of dimension formula_22, formula_23 is the "n"th stable homotopy group of spheres, and "J" is the image of the J-homomorphism, which is also a cyclic group. The groups formula_21 and formula_24 have easily understood cyclic factors, which are trivial or order two except in dimension formula_25, in which case they are large, with order related to the Bernoulli numbers. The quotients are the difficult parts of the groups. The map between these quotient groups is either an isomorphism or is injective and has an image of index 2. It is the latter if and only if there is an "n"-dimensional framed manifold of nonzero Kervaire invariant, and thus the classification of exotic spheres depends up to a factor of 2 on the Kervaire invariant problem. Examples. For the standard embedded torus, the skew-symmetric form is given by formula_26 (with respect to the standard symplectic basis), and the skew-quadratic refinement is given by formula_27 with respect to this basis: formula_28: the basis curves don't self-link; and formula_29: a (1,1) self-links, as in the Hopf fibration. This form thus has Arf invariant 0 (most of its elements have norm 0; it has isotropy index 1), and thus the standard embedded torus has Kervaire invariant 0. Kervaire invariant problem. The question of in which dimensions "n" there are "n"-dimensional framed manifolds of nonzero Kervaire invariant is called the Kervaire invariant problem. This is only possible if "n" is 2 mod 4, and indeed one must have "n" is of the form formula_30 (two less than a power of two). The question is almost completely resolved: there are manifolds with nonzero Kervaire invariant in dimension 2, 6, 14, 30, 62, and none in all other dimensions other than possibly 126. However, Zhouli Xu (in collaboration with Weinan Lin and Guozhen Wang) announced on May 30, 2024 that there exists a manifold with nonzero Kervaire invariant in dimension 126. The main results are those of William Browder (1969), who reduced the problem from differential topology to stable homotopy theory and showed that the only possible dimensions are formula_30, and those of Michael A. Hill, Michael J. Hopkins, and Douglas C. Ravenel (2016), who showed that there were no such manifolds for formula_31 (formula_32). Together with explicit constructions for lower dimensions (through 62), this leaves open only dimension 126. It was conjectured by Michael Atiyah that there is such a manifold in dimension 126, and that the higher-dimensional manifolds with nonzero Kervaire invariant are related to well-known exotic manifolds two dimension higher, in dimensions 16, 32, 64, and 128, namely the Cayley projective plane formula_33 (dimension 16, octonionic projective plane) and the analogous Rosenfeld projective planes (the bi-octonionic projective plane in dimension 32, the quateroctonionic projective plane in dimension 64, and the octo-octonionic projective plane in dimension 128), specifically that there is a construction that takes these projective planes and produces a manifold with nonzero Kervaire invariant in two dimensions lower. Kervaire–Milnor invariant. 
The Kervaire–Milnor invariant is a closely related invariant of framed surgery of a 2, 6 or 14-dimensional framed manifold, that gives isomorphisms from the 2nd and 6th stable homotopy group of spheres to formula_7, and a homomorphism from the 14th stable homotopy group of spheres onto formula_7. For "n" = 2, 6, 14 there is an exotic framing on formula_34 with Kervaire–Milnor invariant 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
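For the rank-2 quadratic refinements appearing in the torus example above, the Arf invariant can be computed mechanically. The short sketch below is only an illustration added here: it evaluates Arf(q) = Σ q(a_i)q(b_i) mod 2 on a symplectic basis and checks the "democratic" characterisation that Arf(q) is the value that q takes on a majority of elements.

from itertools import product

def arf_invariant(qa, qb):
    # q is given by its values on a symplectic basis (a_1, b_1, ..., a_g, b_g);
    # Arf(q) = sum_i q(a_i) * q(b_i) mod 2.
    return sum(x * y for x, y in zip(qa, qb)) % 2

def q_value(v, qa, qb):
    # Value of the quadratic refinement on v = sum_i x_i a_i + y_i b_i,
    # using q(u + w) = q(u) + q(w) + u.w mod 2 for each hyperbolic pair.
    total = 0
    for i in range(len(qa)):
        x, y = v[2 * i], v[2 * i + 1]
        total += x * qa[i] + y * qb[i] + x * y
    return total % 2

def arf_by_majority(qa, qb):
    g = len(qa)
    values = [q_value(v, qa, qb) for v in product((0, 1), repeat=2 * g)]
    return 1 if sum(values) > len(values) // 2 else 0

# Standard embedded torus from the example above: q(a) = q(b) = 0  ->  Arf = 0
print(arf_invariant([0], [0]), arf_by_majority([0], [0]))   # 0 0
# The other rank-2 refinement, q(a) = q(b) = 1, has Arf = 1.
print(arf_invariant([1], [1]), arf_by_majority([1], [1]))   # 1 1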
[ { "math_id": 0, "text": "(4k+2)" }, { "math_id": 1, "text": "L_{4k+2}" }, { "math_id": 2, "text": "4k" }, { "math_id": 3, "text": "L^{4k} \\cong L_{4k}" }, { "math_id": 4, "text": "(4k+1)" }, { "math_id": 5, "text": "L^{4k+1}" }, { "math_id": 6, "text": "h_{6}^2" }, { "math_id": 7, "text": "\\Z/2\\Z" }, { "math_id": 8, "text": "q\\colon H_{2m+1}(M;\\Z/2\\mathbb{Z}) \\to \\Z/2\\Z," }, { "math_id": 9, "text": "S^{2m+1}" }, { "math_id": 10, "text": "\\to" }, { "math_id": 11, "text": "M^{4m+2}" }, { "math_id": 12, "text": "m \\neq 0,1,3" }, { "math_id": 13, "text": "S^{4m+2+k} \\to S^{2m+1+k}" }, { "math_id": 14, "text": "m = 0,1,3" }, { "math_id": 15, "text": "\\pi_{n+2}(S^n)=\\Z/2\\Z" }, { "math_id": 16, "text": "S^{n+2}" }, { "math_id": 17, "text": "S^n" }, { "math_id": 18, "text": "n\\geq 2" }, { "math_id": 19, "text": "\\Theta_n" }, { "math_id": 20, "text": "\\Theta_n/bP_{n+1}\\to \\pi_n^S/J,\\, " }, { "math_id": 21, "text": "bP_{n+1}" }, { "math_id": 22, "text": "n+1" }, { "math_id": 23, "text": "\\pi_n^S" }, { "math_id": 24, "text": "J" }, { "math_id": 25, "text": "n = 4k+3" }, { "math_id": 26, "text": "\\begin{pmatrix}0 & 1\\\\-1 & 0\\end{pmatrix}" }, { "math_id": 27, "text": "xy" }, { "math_id": 28, "text": "Q(1,0)=Q(0,1)=0" }, { "math_id": 29, "text": "Q(1,1)=1" }, { "math_id": 30, "text": "2^k-2" }, { "math_id": 31, "text": "k \\geq 8" }, { "math_id": 32, "text": "n \\geq 254" }, { "math_id": 33, "text": "\\mathbf{O}P^2" }, { "math_id": 34, "text": "S^{n/2} \\times S^{n/2}" } ]
https://en.wikipedia.org/wiki?curid=13816037
13818545
Maneuvering speed
Airspeed limitation selected by the designer of the aircraft In aviation, the maneuvering speed of an aircraft is an airspeed limitation selected by the designer of the aircraft. At speeds close to, and faster than, the maneuvering speed, full deflection of any flight control surface should not be attempted because of the risk of damage to the aircraft structure. The maneuvering speed of an aircraft is shown on a cockpit placard and in the aircraft's flight manual but is not commonly shown on the aircraft's airspeed indicator. In the context of air combat maneuvering (ACM), the maneuvering speed is also known as corner speed or cornering speed. Implications. It has been widely misunderstood that flight below maneuvering speed will provide total protection from structural failure. In response to the destruction of American Airlines Flight 587, a CFR Final Rule was issued clarifying that "flying at or below the design maneuvering speed does not allow a pilot to make multiple large control inputs in one airplane axis or single full control inputs in more than one airplane axis at a time". Such actions "may result in structural failures at any speed, including below the maneuvering speed." Design maneuvering speed VA. VA is the design maneuvering speed and is a calibrated airspeed. Maneuvering speed cannot be slower than formula_0 and need not be greater than Vc. If formula_1 is chosen by the manufacturer to be exactly formula_0 the aircraft will stall in a nose-up pitching maneuver before the structure is subjected to its limiting aerodynamic load. However, if formula_1 is selected to be greater than formula_0, the structure will be subjected to loads which exceed the limiting load unless the pilot checks the maneuver. The maneuvering speed or maximum operating maneuvering speed depicted on a cockpit placard is calculated for the maximum weight of the aircraft. Some Pilot's Operating Handbooks also present safe speeds for weights less than the maximum. The formula used to calculate a safe speed for a lower weight is formula_2, where VA is maneuvering speed (at maximum weight), W2 is actual weight, W1 is maximum weight. Maximum operating maneuvering speed VO. Some aircraft have a maximum operating maneuvering speed VO. Note that this is a different concept than design maneuvering speed. The concept of maximum operating maneuvering speed was introduced to the US type-certification standards for light aircraft in 1993. The maximum operating maneuvering speed is selected by the aircraft designer and cannot be more than formula_0, where Vs is the stalling speed of the aircraft, and "n" is the maximal allowed positive load factor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
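A minimal sketch of the weight correction described above follows; the speeds, weights, stall speed and load factor are invented for illustration and do not refer to any particular aircraft or flight manual.

import math

# Illustrative numbers only (assumed):
V_A_max_weight = 105.0   # design maneuvering speed at maximum weight, knots CAS
W_max          = 1100.0  # maximum weight, kg
W_actual       = 950.0   # actual weight, kg

# V_A at reduced weight: V_A * sqrt(W2 / W1), valid for W2 <= W1
V_A_reduced = V_A_max_weight * math.sqrt(W_actual / W_max)
print(f"Maneuvering speed at {W_actual:.0f} kg: {V_A_reduced:.1f} knots CAS")

# Stall-speed relation V_A >= V_s * sqrt(n), e.g. with V_s = 50 kt and n = +3.8
V_s, n = 50.0, 3.8
print(f"V_s * sqrt(n) = {V_s * math.sqrt(n):.1f} knots")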
[ { "math_id": 0, "text": "V_s \\sqrt{n}" }, { "math_id": 1, "text": "V_A" }, { "math_id": 2, "text": "\\scriptstyle V_A \\sqrt{W_2 \\over W_1}" } ]
https://en.wikipedia.org/wiki?curid=13818545
1382023
Quartic interaction
Quantum field theory with four-point interactions In quantum field theory, a quartic interaction is a type of self-interaction in a scalar field. Other types of quartic interactions may be found under the topic of four-fermion interactions. A classical free scalar field formula_0 satisfies the Klein–Gordon equation. If a scalar field is denoted formula_0, a quartic interaction is represented by adding a potential energy term formula_1 to the Lagrangian density. The coupling constant formula_2 is dimensionless in 4-dimensional spacetime. This article uses the formula_3 metric signature for Minkowski space. The Lagrangian for a real scalar field. The Lagrangian density for a real scalar field with a quartic interaction is formula_4 This Lagrangian has a global Z2 symmetry mapping formula_5. The Lagrangian for a complex scalar field. The Lagrangian for a complex scalar field can be motivated as follows. For "two" scalar fields formula_6 and formula_7 the Lagrangian has the form formula_8 which can be written more concisely introducing a complex scalar field formula_9 defined as formula_10 formula_11 Expressed in terms of this complex scalar field, the above Lagrangian becomes formula_12 which is thus equivalent to the SO(2) model of real scalar fields formula_13, as can be seen by expanding the complex field formula_9 in real and imaginary parts. With formula_14 real scalar fields, we can have a formula_15 model with a global SO(N) symmetry given by the Lagrangian formula_16 Expanding the complex field in real and imaginary parts shows that it is equivalent to the SO(2) model of real scalar fields. In all of the models above, the coupling constant formula_2 must be positive, since otherwise the potential would be unbounded below, and there would be no stable vacuum. Also, the Feynman path integral discussed below would be ill-defined. In 4 dimensions, formula_17 theories have a Landau pole. This means that without a cut-off on the high-energy scale, renormalization would render the theory trivial. The formula_17 model belongs to the Griffiths-Simon class, meaning that it can be represented also as the weak limit of an Ising model on a certain type of graph. The triviality of both the formula_17 model and the Ising model in formula_18 can be shown via a graphical representation known as the random current expansion. Feynman integral quantization. The Feynman diagram expansion may be obtained also from the Feynman path integral formulation. The time-ordered vacuum expectation values of polynomials in φ, known as the "n"-particle Green's functions, are constructed by integrating over all possible fields, normalized by the vacuum expectation value with no external fields, formula_19 All of these Green's functions may be obtained by expanding the exponential in "J"("x")φ("x") in the generating function formula_20 A Wick rotation may be applied to make time imaginary. Changing the signature to (++++) then gives a φ4 statistical mechanics integral over a 4-dimensional Euclidean space, formula_21 Normally, this is applied to the scattering of particles with fixed momenta, in which case, a Fourier transform is useful, giving instead formula_22 where formula_23 is the Dirac delta function. The standard trick to evaluate this functional integral is to write it as a product of exponential factors, schematically, formula_24 The second two exponential factors can be expanded as power series, and the combinatorics of this expansion can be represented graphically. 
The integral with λ = 0 can be treated as a product of infinitely many elementary Gaussian integrals, and the result may be expressed as a sum of Feynman diagrams, calculated using the following Feynman rules: The last rule takes into account the effect of dividing by formula_26. The Minkowski-space Feynman rules are similar, except that each vertex is represented by formula_27, while each internal line is represented by a factor "i"/("q"2-"m"2 + "i" "ε"), where the "ε" term represents the small Wick rotation needed to make the Minkowski-space Gaussian integral converge. Renormalization. The integrals over unconstrained momenta, called "loop integrals", in the Feynman graphs typically diverge. This is normally handled by renormalization, which is a procedure of adding divergent counter-terms to the Lagrangian in such a way that the diagrams constructed from the original Lagrangian and counterterms are finite. A renormalization scale must be introduced in the process, and the coupling constant and mass become dependent upon it. It is this dependence that leads to the Landau pole mentioned earlier, and requires that the cutoff be kept finite. Alternatively, if the cutoff is allowed to go to infinity, the Landau pole can be avoided only if the renormalized coupling runs to zero, rendering the theory trivial. Spontaneous symmetry breaking. An interesting feature can occur if "m"2 turns negative, but with λ still positive. In this case, the vacuum consists of two lowest-energy states, each of which spontaneously breaks the Z2 global symmetry of the original theory. This leads to the appearance of interesting collective states like domain walls. In the "O"(2) theory, the vacua would lie on a circle, and the choice of one would spontaneously break the "O"(2) symmetry. A continuous broken symmetry leads to a Goldstone boson. This type of spontaneous symmetry breaking is the essential component of the Higgs mechanism. Spontaneous breaking of discrete symmetries. The simplest relativistic system in which we can see spontaneous symmetry breaking is one with a single scalar field formula_0 with Lagrangian formula_28 where formula_29 and formula_30 Minimizing the potential with respect to formula_0 leads to formula_31 We now expand the field around this minimum writing formula_32 and substituting in the lagrangian we get formula_33 where we notice that the scalar formula_34 has now a "positive " mass term. Thinking in terms of vacuum expectation values lets us understand what happens to a symmetry when it is spontaneously broken. The original Lagrangian was invariant under the formula_35 symmetry formula_36. Since formula_37 are both minima, there must be two different vacua: formula_38 with formula_39 Since the formula_35 symmetry takes formula_36, it must take formula_40 as well. The two possible vacua for the theory are equivalent, but one has to be chosen. Although it seems that in the new Lagrangian the formula_35 symmetry has disappeared, it is still there, but it now acts as formula_41 This is a general feature of spontaneously broken symmetries: the vacuum breaks them, but they are not actually broken in the Lagrangian, just hidden, and often realized only in a nonlinear way. Exact solutions. 
There exists a set of exact classical solutions to the equation of motion of the theory written in the form formula_42 that can be written for the massless, formula_43, case as formula_44 where formula_45 is the Jacobi elliptic sine function and formula_46 are two integration constants, provided the following dispersion relation holds formula_47 The interesting point is that we started with a massless equation but the exact solution describes a wave with a dispersion relation proper to a massive solution. When the mass term is not zero one gets formula_48 being now the dispersion relation formula_49 Finally, for the case of a symmetry breaking one has formula_50 being formula_51 and the following dispersion relation holds formula_52 These wave solutions are interesting as, notwithstanding we started with an equation with a wrong mass sign, the dispersion relation has the right one. Besides, Jacobi function formula_53 has no real zeros and so the field is never zero but moves around a given constant value that is initially chosen describing a spontaneous breaking of symmetry. A proof of uniqueness can be provided if we note that the solution can be sought in the form formula_54 being formula_55. Then, the partial differential equation becomes an ordinary differential equation that is the one defining the Jacobi elliptic function with formula_56 satisfying the proper dispersion relation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
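The spontaneous breaking of the discrete symmetry discussed above (with the conventions of that section, V(φ) = −μ²φ²/2 + λφ⁴/4) can be verified symbolically: the minima sit at φ² = μ²/λ, and the shifted field σ acquires a squared mass 2μ². A small verification sketch, with symbol names chosen here for illustration:

```python
import sympy as sp

phi, sigma, mu, lam = sp.symbols('phi sigma mu lam', positive=True)

# The symmetry-breaking potential V(phi) = -mu^2 phi^2 / 2 + lam phi^4 / 4
V = -sp.Rational(1, 2) * mu**2 * phi**2 + sp.Rational(1, 4) * lam * phi**4

# Minimum at phi^2 = mu^2 / lam
v = sp.sqrt(mu**2 / lam)
assert V.diff(phi).subs(phi, v) == 0

# Expand around the minimum, phi = v + sigma
V_shifted = sp.expand(V.subs(phi, v + sigma))

# Coefficient of sigma^2 is mu^2, i.e. (1/2) m_sigma^2 with m_sigma^2 = 2 mu^2,
# matching the -(1/2)(sqrt(2) mu)^2 sigma^2 mass term in the shifted Lagrangian
print(sp.simplify(V_shifted.coeff(sigma, 2)))        # mu**2
print(sp.simplify(V.diff(phi, 2).subs(phi, v)))      # 2*mu**2
```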
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "({\\lambda}/{4!}) \\varphi^4" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "(+ - - -)" }, { "math_id": 4, "text": "\\mathcal{L}(\\varphi)=\\frac{1}{2} [\\partial^\\mu \\varphi \\partial_\\mu \\varphi -m^2 \\varphi^2] -\\frac{\\lambda}{4!} \\varphi^4." }, { "math_id": 5, "text": "\\varphi\\to-\\varphi" }, { "math_id": 6, "text": "\\varphi_1" }, { "math_id": 7, "text": "\\varphi_2" }, { "math_id": 8, "text": " \\mathcal{L}(\\varphi_1,\\varphi_2) =\n\\frac{1}{2} [ \\partial_\\mu \\varphi_1 \\partial^\\mu \\varphi_1 - m^2 \\varphi_1^2]\n+ \\frac{1}{2} [ \\partial_\\mu \\varphi_2 \\partial^\\mu \\varphi_2 - m^2 \\varphi_2^2]\n- \\frac{1}{4} \\lambda (\\varphi_1^2 + \\varphi_2^2)^2,\n" }, { "math_id": 9, "text": "\\phi" }, { "math_id": 10, "text": " \\phi \\equiv \\frac{1}{\\sqrt{2}} (\\varphi_1 + i \\varphi_2), " }, { "math_id": 11, "text": " \\phi^* \\equiv \\frac{1}{\\sqrt{2}} (\\varphi_1 - i \\varphi_2). " }, { "math_id": 12, "text": "\\mathcal{L}(\\phi)=\\partial^\\mu \\phi^* \\partial_\\mu \\phi -m^2 \\phi^* \\phi -\\lambda (\\phi^* \\phi)^2," }, { "math_id": 13, "text": "\\varphi_1, \\varphi_2" }, { "math_id": 14, "text": "N" }, { "math_id": 15, "text": "\\varphi^4" }, { "math_id": 16, "text": "\\mathcal{L}(\\varphi_1,...,\\varphi_N)=\\frac{1}{2} [\\partial^\\mu \\varphi_a \\partial_\\mu \\varphi_a - m^2 \\varphi_a \\varphi_a] -\\frac{1}{4} \\lambda (\\varphi_a \\varphi_a)^2, \\quad a=1,...,N." }, { "math_id": 17, "text": "\\phi^4" }, { "math_id": 18, "text": "d\\geq 4" }, { "math_id": 19, "text": "\\langle\\Omega|\\mathcal{T}\\{{\\phi}(x_1)\\cdots {\\phi}(x_n)\\}|\\Omega\\rangle=\\frac{\\int \\mathcal{D}\\phi \\phi(x_1)\\cdots \\phi(x_n) e^{i\\int d^4x \\left({1\\over 2}\\partial^\\mu \\phi \\partial_\\mu \\phi -{m^2 \\over 2}\\phi^2-{\\lambda\\over 4!}\\phi^4\\right)}}{\\int \\mathcal{D}\\phi e^{i\\int d^4x \\left({1\\over 2}\\partial^\\mu \\phi \\partial_\\mu \\phi -{m^2 \\over 2}\\phi^2-{\\lambda\\over 4!}\\phi^4\\right)}}." }, { "math_id": 20, "text": "Z[J] =\\int \\mathcal{D}\\phi e^{i\\int d^4x \\left({1\\over 2}\\partial^\\mu \\phi \\partial_\\mu \\phi -{m^2 \\over 2}\\phi^2-{\\lambda\\over 4!}\\phi^4+J\\phi\\right)} = Z[0] \\sum_{n=0}^{\\infty} \\frac{1}{n!} \\langle\\Omega|\\mathcal{T}\\{{\\phi}(x_1)\\cdots {\\phi}(x_n)\\}|\\Omega\\rangle." }, { "math_id": 21, "text": "Z[J]=\\int \\mathcal{D}\\phi e^{-\\int d^4x \\left({1\\over 2}(\\nabla\\phi)^2+{m^2 \\over 2}\\phi^2+{\\lambda\\over 4!}\\phi^4+J\\phi\\right)}." }, { "math_id": 22, "text": "\\tilde{Z}[\\tilde{J}]=\\int \\mathcal{D}\\tilde\\phi e^{-\\int d^4p \\left({1\\over 2}(p^2+m^2)\\tilde\\phi^2-\\tilde{J}\\tilde\\phi+{\\lambda\\over 4!}{\\int {d^4p_1 \\over (2\\pi)^4}{d^4p_2 \\over (2\\pi)^4}{d^4p_3 \\over (2\\pi)^4}\\delta(p-p_1-p_2-p_3)\\tilde\\phi(p)\\tilde\\phi(p_1)\\tilde\\phi(p_2)\\tilde\\phi(p_3)}\\right)}." }, { "math_id": 23, "text": "\\delta(x)" }, { "math_id": 24, "text": "\\tilde{Z}[\\tilde{J}]=\\int \\mathcal{D}\\tilde\\phi \\prod_p \\left[e^{-(p^2+m^2)\\tilde\\phi^2/2} e^{-\\lambda/4!\\int {d^4p_1 \\over (2\\pi)^4}{d^4p_2 \\over (2\\pi)^4}{d^4p_3 \\over (2\\pi)^4}\\delta(p-p_1-p_2-p_3)\\tilde\\phi(p)\\tilde\\phi(p_1)\\tilde\\phi(p_2)\\tilde\\phi(p_3)} e^{\\tilde{J}\\tilde\\phi}\\right]." 
}, { "math_id": 25, "text": "\\tilde{\\phi}(p)" }, { "math_id": 26, "text": "\\tilde{Z}[0]" }, { "math_id": 27, "text": "-i\\lambda" }, { "math_id": 28, "text": "\\mathcal{L}(\\varphi) = \\frac{1}{2} (\\partial \\varphi)^2 + \\frac{1}{2}\\mu^2 \\varphi^2 - \\frac{1}{4} \\lambda \\varphi^4 \\equiv \\frac{1}{2} (\\partial \\varphi)^2 - V(\\varphi), " }, { "math_id": 29, "text": " \\mu^2 > 0" }, { "math_id": 30, "text": " V(\\varphi) \\equiv - \\frac{1}{2}\\mu^2 \\varphi^2 + \\frac{1}{4} \\lambda \\varphi^4. " }, { "math_id": 31, "text": " V'(\\varphi_0) = 0 \\Longleftrightarrow \\varphi_0^2 \\equiv v^2 = \\frac{\\mu^2}{\\lambda}. " }, { "math_id": 32, "text": " \\varphi(x) = v + \\sigma(x), " }, { "math_id": 33, "text": " \\mathcal{L}(\\varphi) =\n\\underbrace{-\\frac{\\mu^4}{4\\lambda}}_{\\text{unimportant constant}}\n+ \\underbrace{\\frac{1}{2} [( \\partial \\sigma)^2 - (\\sqrt{2}\\mu)^2 \\sigma^2 ]}_{\\text{massive scalar field}}\n+ \\underbrace{ (-\\lambda v \\sigma^3 - \\frac{\\lambda}{4} \\sigma^4) }_{\\text{self-interactions}}. " }, { "math_id": 34, "text": "\\sigma" }, { "math_id": 35, "text": "Z_2" }, { "math_id": 36, "text": " \\varphi \\rightarrow -\\varphi" }, { "math_id": 37, "text": " \\langle \\Omega | \\varphi | \\Omega \\rangle = \\pm \\sqrt{ \\frac{6\\mu^2}{\\lambda} }" }, { "math_id": 38, "text": "|\\Omega_\\pm \\rangle" }, { "math_id": 39, "text": " \\langle \\Omega_\\pm | \\varphi | \\Omega_\\pm \\rangle = \\pm \\sqrt{ \\frac{6\\mu^2}{\\lambda} }. " }, { "math_id": 40, "text": " | \\Omega_+ \\rangle \\leftrightarrow | \\Omega_- \\rangle " }, { "math_id": 41, "text": " \\sigma \\rightarrow -\\sigma - 2v. " }, { "math_id": 42, "text": " \\partial^2\\varphi+\\mu_0^2\\varphi+\\lambda\\varphi^3=0" }, { "math_id": 43, "text": "\\mu_0=0" }, { "math_id": 44, "text": "\\varphi(x) = \\pm\\mu\\left(\\frac{2}{\\lambda}\\right)^{1\\over 4}{\\rm sn}(p\\cdot x+\\theta,i)," }, { "math_id": 45, "text": "\\, \\rm sn\\!" }, { "math_id": 46, "text": "\\,\\mu,\\theta" }, { "math_id": 47, "text": "p^2=\\mu^2\\left(\\frac{\\lambda}{2}\\right)^{1\\over 2}." }, { "math_id": 48, "text": "\\varphi(x) = \\pm\\sqrt{\\frac{2\\mu^4}{\\mu_0^2 + \\sqrt{\\mu_0^4 + 2\\lambda\\mu^4}}}{\\rm sn}\\left(p\\cdot x+\\theta,\\sqrt{\\frac{-\\mu_0^2 + \\sqrt{\\mu_0^4 + 2\\lambda\\mu^4}}{-\\mu_0^2 - \n \\sqrt{\\mu_0^4 + 2\\lambda\\mu^4}}}\\right)" }, { "math_id": 49, "text": "p^2=\\mu_0^2+\\frac{\\lambda\\mu^4}{\\mu_0^2+\\sqrt{\\mu_0^4+2\\lambda\\mu^4}}." }, { "math_id": 50, "text": "\\varphi(x) =\\pm v\\cdot {\\rm dn}(p\\cdot x+\\theta,i)," }, { "math_id": 51, "text": "v=\\sqrt{\\frac{2\\mu_0^2}{3\\lambda}}" }, { "math_id": 52, "text": "p^2=\\frac{\\lambda v^2}{2}." }, { "math_id": 53, "text": "\\, {\\rm dn}\\!" }, { "math_id": 54, "text": "\\varphi=\\varphi(\\xi)" }, { "math_id": 55, "text": "\\xi=p\\cdot x" }, { "math_id": 56, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=1382023
138215
Exact sequence
Sequence of homomorphisms such that each kernel equals the preceding image An exact sequence is a sequence of morphisms between objects (for example, groups, rings, modules, and, more generally, objects of an abelian category) such that the image of one morphism equals the kernel of the next. Definition. In the context of group theory, a sequence formula_1 of groups and group homomorphisms is said to be exact at formula_0 if formula_2. The sequence is called exact if it is exact at each formula_0 for all formula_3, i.e., if the image of each homomorphism is equal to the kernel of the next. The sequence of groups and homomorphisms may be either finite or infinite. A similar definition can be made for other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. More generally, the notion of an exact sequence makes sense in any category with kernels and cokernels, and more specially in abelian categories, where it is widely used. Simple cases. To understand the definition, it is helpful to consider relatively simple cases where the sequence is of group homomorphisms, is finite, and begins or ends with the trivial group. Traditionally, this, along with the single identity element, is denoted 0 (additive notation, usually when the groups are abelian), or denoted 1 (multiplicative notation). Short exact sequence. Short exact sequences are exact sequences of the form formula_4 As established above, for any such short exact sequence, "f" is a monomorphism and "g" is an epimorphism. Furthermore, the image of "f" is equal to the kernel of "g". It is helpful to think of "A" as a subobject of "B" with "f" embedding "A" into "B", and of "C" as the corresponding factor object (or quotient), "B"/"A", with "g" inducing an isomorphism formula_5 The short exact sequence formula_6 is called split if there exists a homomorphism "h" : "C" → "B" such that the composition "g" ∘ "h" is the identity map on "C". It follows that if these are abelian groups, "B" is isomorphic to the direct sum of "A" and "C": formula_7 Long exact sequence. A general exact sequence is sometimes called a long exact sequence, to distinguish from the special case of a short exact sequence. A long exact sequence is equivalent to a family of short exact sequences in the following sense: Given a long sequence with "n ≥" 2, we can split it up into the short sequences where formula_8 for every formula_9. By construction, the sequences "(2)" are exact at the formula_10's (regardless of the exactness of "(1)"). Furthermore, "(1)" is a long exact sequence if and only if "(2)" are all short exact sequences. Examples. Integers modulo two. Consider the following sequence of abelian groups: formula_11 The first homomorphism maps each element "i" in the set of integers Z to the element 2"i" in Z. The second homomorphism maps each element "i" in Z to an element "j" in the quotient group; that is, "j" "i" mod 2. Here the hook arrow formula_12 indicates that the map 2× from Z to Z is a monomorphism, and the two-headed arrow formula_13 indicates an epimorphism (the map mod 2). This is an exact sequence because the image 2Z of the monomorphism is the kernel of the epimorphism. Essentially "the same" sequence can also be written as formula_14 In this case the monomorphism is 2"n" ↦ 2"n" and although it looks like an identity function, it is not onto (that is, not an epimorphism) because the odd numbers don't belong to 2Z. 
The image of 2Z through this monomorphism is however exactly the same subset of Z as the image of Z through "n" ↦ 2"n" used in the previous sequence. This latter sequence does differ in the concrete nature of its first object from the previous one as 2Z is not the same set as Z even though the two are isomorphic as groups. The first sequence may also be written without using special symbols for monomorphism and epimorphism: formula_15 Here 0 denotes the trivial group, the map from Z to Z is multiplication by 2, and the map from Z to the factor group Z/2Z is given by reducing integers modulo 2. This is indeed an exact sequence: The first and third sequences are somewhat of a special case owing to the infinite nature of Z. It is not possible for a finite group to be mapped by inclusion (that is, by a monomorphism) as a proper subgroup of itself. Instead the sequence that emerges from the first isomorphism theorem is formula_16 (here the trivial group is denoted formula_17 as these groups are not supposed to be abelian). As a more concrete example of an exact sequence on finite groups: formula_18 where formula_19 is the cyclic group of order "n" and formula_20 is the dihedral group of order 2"n", which is a non-abelian group. Intersection and sum of modules. Let "I" and "J" be two ideals of a ring "R". Then formula_21 is an exact sequence of "R"-modules, where the module homomorphism formula_22 maps each element "x" of formula_23 to the element &amp;NoBreak;&amp;NoBreak; of the direct sum formula_24, and the homomorphism formula_25 maps each element &amp;NoBreak;&amp;NoBreak; of formula_24 to &amp;NoBreak;&amp;NoBreak;. These homomorphisms are restrictions of similarly defined homomorphisms that form the short exact sequence formula_26 Passing to quotient modules yields another exact sequence formula_27 Grad, curl and div in differential geometry. Another example can be derived from differential geometry, especially relevant for work on the Maxwell equations. Consider the Hilbert space formula_28 of scalar-valued square-integrable functions on three dimensions formula_29. Taking the gradient of a function formula_30 moves us to a subset of formula_31, the space of vector valued, still square-integrable functions on the same domain formula_32 — specifically, the set of such functions that represent conservative vector fields. (The generalized Stokes' theorem has preserved integrability.) First, note the curl of all such fields is zero — since formula_33 for all such "f". However, this only proves that the image of the gradient is a subset of the kernel of the curl. To prove that they are in fact the same set, prove the converse: that if the curl of a vector field formula_34 is 0, then formula_34 is the gradient of some scalar function. This follows almost immediately from Stokes' theorem (see the proof at conservative force.) The image of the gradient is then precisely the kernel of the curl, and so we can then take the curl to be our next morphism, taking us again to a (different) subset of formula_31. Similarly, we note that formula_35 so the image of the curl is a subset of the kernel of the divergence. The converse is somewhat involved (for the general case see Poincaré lemma): Having thus proved that the image of the curl is precisely the kernel of the divergence, this morphism in turn takes us back to the space we started from formula_28. 
Since definitionally we have landed on a space of integrable functions, any such function can (at least formally) be integrated in order to produce a vector field which divergence is that function — so the image of the divergence is the entirety of formula_28, and we can complete our sequence: formula_36 Equivalently, we could have reasoned in reverse: in a simply connected space, a curl-free vector field (a field in the kernel of the curl) can always be written as a gradient of a scalar function (and thus is in the image of the gradient). Similarly, a solenoidal vector field can be written as a curl of another field. (Reasoning in this direction thus makes use of the fact that 3-dimensional space is topologically trivial.) This short exact sequence also permits a much shorter proof of the validity of the Helmholtz decomposition that does not rely on brute-force vector calculus. Consider the subsequence formula_37 Since the divergence of the gradient is the Laplacian, and since the Hilbert space of square-integrable functions can be spanned by the eigenfunctions of the Laplacian, we already see that some inverse mapping formula_38 must exist. To explicitly construct such an inverse, we can start from the definition of the vector Laplacian formula_39 Since we are trying to construct an identity mapping by composing some function with the gradient, we know that in our case formula_40. Then if we take the divergence of both sides formula_41 we see that if a function is an eigenfunction of the vector Laplacian, its divergence must be an eigenfunction of the scalar Laplacian with the same eigenvalue. Then we can build our inverse function formula_42 simply by breaking any function in formula_31 into the vector-Laplacian eigenbasis, scaling each by the inverse of their eigenvalue, and taking the divergence; the action of formula_43 is thus clearly the identity. Thus by the splitting lemma, formula_44, or equivalently, any square-integrable vector field on formula_45 can be broken into the sum of a gradient and a curl — which is what we set out to prove. Properties. The splitting lemma states that, for a short exact sequence formula_46 the following conditions are equivalent. For non-commutative groups, the splitting lemma does not apply, and one has only the equivalence between the two last conditions, with "the direct sum" replaced with "a semidirect product". In both cases, one says that such a short exact sequence "splits". The snake lemma shows how a commutative diagram with two exact rows gives rise to a longer exact sequence. The nine lemma is a special case. The five lemma gives conditions under which the middle map in a commutative diagram with exact rows of length 5 is an isomorphism; the short five lemma is a special case thereof applying to short exact sequences. The importance of short exact sequences is underlined by the fact that every exact sequence results from "weaving together" several overlapping short exact sequences. Consider for instance the exact sequence formula_47 which implies that there exist objects "Ck" in the category such that formula_48. 
Suppose in addition that the cokernel of each morphism exists, and is isomorphic to the image of the next morphism in the sequence: formula_49 (This is true for a number of interesting categories, including any abelian category such as the abelian groups; but it is not true for all categories that allow exact sequences, and in particular is not true for the category of groups, in which coker("f") : "G" → "H" is not "H"/im("f") but formula_50, the quotient of "H" by the conjugate closure of im("f").) Then we obtain a commutative diagram in which all the diagonals are short exact sequences: The only portion of this diagram that depends on the cokernel condition is the object formula_51 and the final pair of morphisms formula_52. If there exists any object formula_53 and morphism formula_54 such that formula_55 is exact, then the exactness of formula_56 is ensured. Again taking the example of the category of groups, the fact that im("f") is the kernel of some homomorphism on "H" implies that it is a normal subgroup, which coincides with its conjugate closure; thus coker("f") is isomorphic to the image "H"/im("f") of the next morphism. Conversely, given any list of overlapping short exact sequences, their middle terms form an exact sequence in the same manner. Applications of exact sequences. In the theory of abelian categories, short exact sequences are often used as a convenient language to talk about subobjects and factor objects. The extension problem is essentially the question "Given the end terms "A" and "C" of a short exact sequence, what possibilities exist for the middle term "B"?" In the category of groups, this is equivalent to the question, what groups "B" have "A" as a normal subgroup and "C" as the corresponding factor group? This problem is important in the classification of groups. See also Outer automorphism group. Notice that in an exact sequence, the composition "f""i"+1 ∘ "f""i" maps "A""i" to 0 in "A""i"+2, so every exact sequence is a chain complex. Furthermore, only "f""i"-images of elements of "A""i" are mapped to 0 by "f""i"+1, so the homology of this chain complex is trivial. More succinctly: Exact sequences are precisely those chain complexes which are acyclic. Given any chain complex, its homology can therefore be thought of as a measure of the degree to which it fails to be exact. If we take a series of short exact sequences linked by chain complexes (that is, a short exact sequence of chain complexes, or from another point of view, a chain complex of short exact sequences), then we can derive from this a long exact sequence (that is, an exact sequence indexed by the natural numbers) on homology by application of the zig-zag lemma. It comes up in algebraic topology in the study of relative homology; the Mayer–Vietoris sequence is another example. Long exact sequences induced by short exact sequences are also characteristic of derived functors. Exact functors are functors that transform exact sequences into exact sequences. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
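The containments underlying the grad–curl–div sequence discussed above — every gradient is curl-free and every curl is divergence-free — can be checked symbolically. Below is a minimal sketch using SymPy's vector module on generic smooth fields; it verifies only the "image ⊆ kernel" half of exactness, since the converse inclusions rely on the topology of the domain, as explained in that section.

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# A generic scalar field f and a generic vector field F = P i + Q j + R k
f = sp.Function('f')(x, y, z)
F = (sp.Function('P')(x, y, z) * N.i
     + sp.Function('Q')(x, y, z) * N.j
     + sp.Function('R')(x, y, z) * N.k)

print(curl(gradient(f)))                    # expected: 0 (the zero vector)
print(sp.simplify(divergence(curl(F))))     # expected: 0
```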
[ { "math_id": 0, "text": "G_i" }, { "math_id": 1, "text": "G_0\\;\\xrightarrow{\\ f_1\\ }\\; G_1 \\;\\xrightarrow{\\ f_2\\ }\\; G_2 \\;\\xrightarrow{\\ f_3\\ }\\; \\cdots \\;\\xrightarrow{\\ f_n\\ }\\; G_n" }, { "math_id": 2, "text": "\\operatorname{im}(f_i)=\\ker(f_{i+1})" }, { "math_id": 3, "text": "1\\leq i<n" }, { "math_id": 4, "text": "0 \\to A \\xrightarrow{f} B \\xrightarrow{g} C \\to 0." }, { "math_id": 5, "text": "C \\cong B/\\operatorname{im}(f) = B/\\operatorname{ker}(g)" }, { "math_id": 6, "text": "0 \\to A \\xrightarrow{f} B \\xrightarrow{g} C \\to 0\\," }, { "math_id": 7, "text": "B \\cong A \\oplus C." }, { "math_id": 8, "text": "K_i = \\operatorname{im}(f_i)" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "K_i" }, { "math_id": 11, "text": "\\mathbf{Z} \\mathrel{\\overset{2\\times}{\\,\\hookrightarrow}} \\mathbf{Z} \\twoheadrightarrow \\mathbf{Z}/2\\mathbf{Z}" }, { "math_id": 12, "text": "\\hookrightarrow" }, { "math_id": 13, "text": "\\twoheadrightarrow" }, { "math_id": 14, "text": "2\\mathbf{Z} \\mathrel{\\,\\hookrightarrow} \\mathbf{Z} \\twoheadrightarrow \\mathbf{Z}/2\\mathbf{Z}" }, { "math_id": 15, "text": "0 \\to \\mathbf{Z} \\mathrel{\\overset{2\\times}{\\longrightarrow}} \\mathbf{Z} \\longrightarrow \\mathbf{Z}/2\\mathbf{Z} \\to 0" }, { "math_id": 16, "text": "1 \\to N \\to G \\to G/N \\to 1" }, { "math_id": 17, "text": "1," }, { "math_id": 18, "text": "1 \\to C_n \\to D_{2n} \\to C_2 \\to 1" }, { "math_id": 19, "text": "C_n" }, { "math_id": 20, "text": "D_{2n}" }, { "math_id": 21, "text": "0 \\to I\\cap J \\to I\\oplus J \\to I + J \\to 0 " }, { "math_id": 22, "text": "I\\cap J \\to I\\oplus J" }, { "math_id": 23, "text": "I\\cap J" }, { "math_id": 24, "text": "I\\oplus J" }, { "math_id": 25, "text": "I\\oplus J \\to I+J" }, { "math_id": 26, "text": "0\\to R \\to R\\oplus R \\to R \\to 0 " }, { "math_id": 27, "text": "0\\to R/(I\\cap J) \\to R/I \\oplus R/J \\to R/(I+J) \\to 0 " }, { "math_id": 28, "text": "L^2" }, { "math_id": 29, "text": "\\left\\lbrace f:\\mathbb{R}^3 \\to \\mathbb{R} \\right\\rbrace" }, { "math_id": 30, "text": "f\\in\\mathbb{H}_1" }, { "math_id": 31, "text": "\\mathbb{H}_3" }, { "math_id": 32, "text": "\\left\\lbrace f:\\mathbb{R}^3\\to\\mathbb{R}^3 \\right\\rbrace" }, { "math_id": 33, "text": "\\operatorname{curl} (\\operatorname{grad} f ) \\equiv \\nabla \\times (\\nabla f) = 0" }, { "math_id": 34, "text": "\\vec{F}" }, { "math_id": 35, "text": "\\operatorname{div} \\left(\\operatorname{curl} \\vec{v}\\right) \\equiv \\nabla \\cdot \\nabla \\times \\vec{v} = 0," }, { "math_id": 36, "text": "0 \\to L^2 \\mathrel{\\xrightarrow{\\operatorname{grad}}} \\mathbb{H}_3 \\mathrel{\\xrightarrow{\\operatorname{curl}}} \\mathbb{H}_3 \\mathrel{\\xrightarrow{\\operatorname{div}}} L^2 \\to 0" }, { "math_id": 37, "text": "0 \\to L^2 \\mathrel{\\xrightarrow{\\operatorname{grad}}} \\mathbb{H}_3 \\mathrel{\\xrightarrow{\\operatorname{curl}}} \\operatorname{im}(\\operatorname{curl}) \\to 0." 
}, { "math_id": 38, "text": "\\nabla^{-1}:\\mathbb{H}_3 \\to L^2" }, { "math_id": 39, "text": "\\nabla^2 \\vec{A} = \\nabla\\left(\\nabla\\cdot\\vec{A}\\right) + \\nabla\\times\\left(\\nabla\\times\\vec{A}\\right)" }, { "math_id": 40, "text": "\\nabla\\times\\vec{A} = \\operatorname{curl}\\left(\\vec{A}\\right) = 0" }, { "math_id": 41, "text": "\\begin{align}\n \\nabla\\cdot\\nabla^2\\vec{A}\n & = \\nabla\\cdot\\nabla\\left(\\nabla\\cdot\\vec{A}\\right) \\\\\n & = \\nabla^2\\left(\\nabla\\cdot\\vec{A}\\right) \\\\\n\\end{align}" }, { "math_id": 42, "text": "\\nabla^{-1}" }, { "math_id": 43, "text": "\\nabla^{-1}\\circ\\nabla" }, { "math_id": 44, "text": "\\mathbb{H}_3 \\cong L^2 \\oplus \\operatorname{im}(\\operatorname{curl})" }, { "math_id": 45, "text": "\\mathbb{R}^3" }, { "math_id": 46, "text": "0 \\to A \\;\\xrightarrow{\\ f\\ }\\; B \\;\\xrightarrow{\\ g\\ }\\; C \\to 0," }, { "math_id": 47, "text": "A_1\\to A_2\\to A_3\\to A_4\\to A_5\\to A_6" }, { "math_id": 48, "text": "C_k \\cong \\ker (A_k\\to A_{k+1}) \\cong \\operatorname{im} (A_{k-1}\\to A_k)" }, { "math_id": 49, "text": "C_k \\cong \\operatorname{coker} (A_{k-2}\\to A_{k-1})" }, { "math_id": 50, "text": "H / {\\left\\langle \\operatorname{im} f \\right\\rangle}^H" }, { "math_id": 51, "text": "C_7" }, { "math_id": 52, "text": "A_6 \\to C_7\\to 0" }, { "math_id": 53, "text": "A_{k+1}" }, { "math_id": 54, "text": "A_k \\to A_{k+1}" }, { "math_id": 55, "text": "A_{k-1} \\to A_k \\to A_{k+1}" }, { "math_id": 56, "text": "0 \\to C_k \\to A_k \\to C_{k+1} \\to 0" } ]
https://en.wikipedia.org/wiki?curid=138215
13823014
Krener's theorem
In mathematics, Krener's theorem is a result attributed to Arthur J. Krener in geometric control theory about the topological properties of attainable sets of finite-dimensional control systems. It states that any attainable set of a bracket-generating system has nonempty interior or, equivalently, that any attainable set has nonempty interior in the topology of the corresponding orbit. Heuristically, Krener's theorem prohibits attainable sets from being hairy. Theorem. Let formula_0 be a smooth control system, where formula_1 belongs to a finite-dimensional manifold formula_2 and formula_3 belongs to a control set formula_4. Consider the family of vector fields formula_5. Let formula_6 be the Lie algebra generated by formula_7 with respect to the Lie bracket of vector fields. Given formula_8, if the vector space formula_9 is equal to formula_10, then formula_11 belongs to the closure of the interior of the attainable set from formula_11. Remarks and consequences. Even if formula_12 is different from formula_10, the attainable set from formula_11 has nonempty interior in the orbit topology, as it follows from Krener's theorem applied to the control system restricted to the orbit through formula_11. When all the vector fields in formula_13 are analytic, formula_14 if and only if formula_11 belongs to the closure of the interior of the attainable set from formula_11. This is a consequence of Krener's theorem and of the orbit theorem. As a corollary of Krener's theorem one can prove that if the system is bracket-generating and if the attainable set from formula_8 is dense in formula_2, then the attainable set from formula_11 is actually equal to formula_2.
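As an illustration of the bracket-generating condition in the theorem, consider the standard unicycle-type system ẋ = u1 cos θ, ẏ = u1 sin θ, θ̇ = u2 (an example chosen here for illustration, not taken from the article). The short symbolic sketch below computes one Lie bracket and checks that the two control fields together with their bracket span the tangent space at every point, so the system is bracket-generating and Krener's theorem gives attainable sets with nonempty interior.

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')
q = sp.Matrix([x, y, th])

# Control vector fields of the unicycle: q' = u1*f1(q) + u2*f2(q)
f1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])
f2 = sp.Matrix([0, 0, 1])

def lie_bracket(f, g, q):
    """Lie bracket of vector fields: [f, g](q) = Dg(q) f(q) - Df(q) g(q)."""
    return g.jacobian(q) * f - f.jacobian(q) * g

f3 = lie_bracket(f1, f2, q)
print(f3.T)                      # Matrix([[sin(theta), -cos(theta), 0]])

# f1, f2 and [f1, f2] span R^3 at every q: the determinant is a nonzero constant
M = sp.Matrix.hstack(f1, f2, f3)
print(sp.simplify(M.det()))      # 1
```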
[ { "math_id": 0, "text": "{\\ }\\dot q=f(q,u)" }, { "math_id": 1, "text": "{\\ q}" }, { "math_id": 2, "text": "\\ M" }, { "math_id": 3, "text": "\\ u" }, { "math_id": 4, "text": "\\ U" }, { "math_id": 5, "text": "{\\mathcal F}=\\{f(\\cdot,u)\\mid u\\in U\\}" }, { "math_id": 6, "text": "\\ \\mathrm{Lie}\\,\\mathcal{F}" }, { "math_id": 7, "text": "{\\mathcal F}" }, { "math_id": 8, "text": "\\ q\\in M" }, { "math_id": 9, "text": "\\ \\mathrm{Lie}_q\\,\\mathcal{F}=\\{g(q)\\mid g\\in \\mathrm{Lie}\\,\\mathcal{F}\\}" }, { "math_id": 10, "text": "\\ T_q M" }, { "math_id": 11, "text": "\\ q" }, { "math_id": 12, "text": "\\mathrm{Lie}_q\\,\\mathcal{F}" }, { "math_id": 13, "text": "\\ \\mathcal{F}" }, { "math_id": 14, "text": "\\ \\mathrm{Lie}_q\\,\\mathcal{F}=T_q M" } ]
https://en.wikipedia.org/wiki?curid=13823014
1382381
Orthogonal transformation
Linear algebra operation In linear algebra, an orthogonal transformation is a linear transformation "T" : "V" → "V" on a real inner product space "V" that preserves the inner product. That is, for each pair "u", "v" of elements of "V", we have formula_0 Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases. Orthogonal transformations are injective: if formula_1 then formula_2, hence formula_3, so the kernel of formula_4 is trivial. Orthogonal transformations in two- or three-dimensional Euclidean space are rigid rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that reverse the direction front to back, orthogonal to the mirror plane, like (real-world) mirrors do. The matrices corresponding to proper rotations (without reflection) have a determinant of +1. Transformations with reflection are represented by matrices with a determinant of −1. This allows the concept of rotation and reflection to be generalized to higher dimensions. In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of "V". The columns of the matrix form another orthonormal basis of "V". If an orthogonal transformation is invertible (which is always the case when "V" is finite-dimensional) then its inverse formula_5 is another orthogonal transformation identical to the transpose of formula_4: formula_6. Examples. Consider the inner-product space formula_7 with the standard Euclidean inner product and standard basis. Then, the matrix transformation formula_8 is orthogonal. To see this, consider formula_9 Then, formula_10 The previous example can be extended to construct all orthogonal transformations. For example, the following matrices define orthogonal transformations on formula_11: formula_12 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
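The defining property and the determinant statements above are straightforward to check numerically; the following is a small illustrative sketch (the angle and test vectors are arbitrary choices):

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle

# A proper rotation about the z-axis and a reflection through the xy-plane
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
S = np.diag([1.0, 1.0, -1.0])

for Q, name in [(R, "rotation"), (S, "reflection")]:
    # Orthogonality: Q^T Q = I, so the inverse is the transpose
    assert np.allclose(Q.T @ Q, np.eye(3))
    # Inner products (hence lengths and angles) are preserved
    u, v = np.random.default_rng(1).normal(size=(2, 3))
    assert np.isclose((Q @ u) @ (Q @ v), u @ v)
    print(name, "det =", round(np.linalg.det(Q)))   # +1 for the rotation, -1 for the reflection
```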
[ { "math_id": 0, "text": "\\langle u,v \\rangle = \\langle Tu,Tv \\rangle \\, ." }, { "math_id": 1, "text": "Tv = 0" }, { "math_id": 2, "text": "0 = \\langle Tv,Tv \\rangle = \\langle v,v \\rangle" }, { "math_id": 3, "text": "v = 0" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "T^{-1}" }, { "math_id": 6, "text": "T^{-1} = T^{\\mathtt{T}}" }, { "math_id": 7, "text": "(\\mathbb{R}^2,\\langle\\cdot,\\cdot\\rangle)" }, { "math_id": 8, "text": "\nT = \\begin{bmatrix}\n\\cos(\\theta) & -\\sin(\\theta) \\\\\n\\sin(\\theta) & \\cos(\\theta)\n\\end{bmatrix} : \\mathbb{R}^2 \\to \\mathbb{R}^2\n" }, { "math_id": 9, "text": "\n\\begin{align}\nTe_1 = \\begin{bmatrix}\\cos(\\theta) \\\\ \\sin(\\theta)\\end{bmatrix} && Te_2 = \\begin{bmatrix}-\\sin(\\theta) \\\\ \\cos(\\theta)\\end{bmatrix}\n\\end{align}\n" }, { "math_id": 10, "text": "\n\\begin{align}\n&\\langle Te_1,Te_1\\rangle = \\begin{bmatrix} \\cos(\\theta) & \\sin(\\theta) \\end{bmatrix} \\cdot \\begin{bmatrix} \\cos(\\theta) \\\\ \\sin(\\theta) \\end{bmatrix} = \\cos^2(\\theta) + \\sin^2(\\theta) = 1\\\\\n&\\langle Te_1,Te_2\\rangle = \\begin{bmatrix} \\cos(\\theta) & \\sin(\\theta) \\end{bmatrix} \\cdot \\begin{bmatrix} -\\sin(\\theta) \\\\ \\cos(\\theta) \\end{bmatrix} = \\sin(\\theta)\\cos(\\theta) - \\sin(\\theta)\\cos(\\theta) = 0\\\\\n&\\langle Te_2,Te_2\\rangle = \\begin{bmatrix} -\\sin(\\theta) & \\cos(\\theta) \\end{bmatrix} \\cdot \\begin{bmatrix} -\\sin(\\theta) \\\\ \\cos(\\theta) \\end{bmatrix} = \\sin^2(\\theta) + \\cos^2(\\theta) = 1\\\\\n\\end{align}\n" }, { "math_id": 11, "text": "(\\mathbb{R}^3,\\langle\\cdot,\\cdot\\rangle)" }, { "math_id": 12, "text": "\n\\begin{bmatrix}\n\\cos(\\theta) & -\\sin(\\theta) & 0 \\\\\n\\sin(\\theta) & \\cos(\\theta) & 0 \\\\\n0 & 0 & 1\n\\end{bmatrix},\n\\begin{bmatrix}\n\\cos(\\theta) & 0 & -\\sin(\\theta) \\\\\n0 & 1 & 0 \\\\\n\\sin(\\theta) & 0 & \\cos(\\theta)\n\\end{bmatrix},\n\\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & \\cos(\\theta) & -\\sin(\\theta) \\\\\n0 & \\sin(\\theta) & \\cos(\\theta) \n\\end{bmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=1382381
13831369
Hydrolysis constant
The word hydrolysis is applied to chemical reactions in which a substance reacts with water. In organic chemistry, the products of the reaction are usually molecular, being formed by combination with H and OH groups (e.g., hydrolysis of an ester to an alcohol and a carboxylic acid). In inorganic chemistry, the word most often applies to cations forming soluble hydroxide or oxide complexes with, in some cases, the formation of hydroxide and oxide precipitates. Metal hydrolysis and associated equilibrium constant values. The hydrolysis reaction for a hydrated metal ion in aqueous solution can be written as: "p" Mz+ + "q" H2O ⇌ M"p"(OH)"q"("pz–q") + "q" H+ and the corresponding formation constant as: formula_0 and associated equilibria can be written as: MO"x"(OH)"z–2x"(s) + "z" H+ ⇌ M"z+" + ("z–x") H2O MO"x"(OH)"z–2x"(s) + x H2O ⇌ M"z+" + z OH− "p" MO"x"(OH)"z–2x"(s) + ("pz–q") H+ ⇌ M"p"(OH)"q"("pz–q") + ("pz–px–q") H2O Aluminium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Americium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Americium(V). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Antimony(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Antimony(V). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Arsenic(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Arsenic(V). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Barium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Berkelium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Beryllium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Bismuth. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Boron. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Cadmium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Calcium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Californium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Cerium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Chromium(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K (The divalent state is unstable in water, producing hydrogen whilst being oxidised to a higher valency state (Baes and Mesmer, 1976). The reliability of the data is in doubt.): Chromium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Chromium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Cobalt(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Cobalt(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Copper(I). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Copper(II). 
Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Curium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Dysprosium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Erbium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Europium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Gadolinium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Gallium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Germanium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Gold(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Hafnium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Holmium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Indium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Iridium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Iron(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Iron(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Lanthanum. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Lead(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Lead(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Lithium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Magnesium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Manganese(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Manganese(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Mercury(I). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) 0.5 M HClO4 Mercury(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Molybdenum(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution, T = 298.15 K and I = 3 M NaClO4 (a) or 0.1 M Na+ medium, Data at I = 0 are not available (b): Neodymium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Neptunium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Neptunium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Neptunium(V). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Neptunium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Nickel(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Niobium. 
Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Osmium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution, "I" = 0.1 M and T = 298.15 K: Osmium(VIII). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Palladium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Plutonium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Plutonium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Plutonium(V). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Plutonium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Potassium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Praseodymium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Radium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Rhodium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Samarium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Scandium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Selenium(–II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Selenium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Selenium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Silicon. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Silver. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Sodium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Strontium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Tantalum. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) Tellurium(-II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) Tellurium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) Tellurium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) Terbium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Thallium(I). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) Thallium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: (a) Thorium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Thulium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Tin(II). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Tin(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Tungsten. 
Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Titanium(III). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Titanium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Uranium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Uranium(VI). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Vanadium(IV). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Vanadium(V). Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Ytterbium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Yttrium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Zinc. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: Zirconium. Hydrolysis constants (log values) in critical compilations at infinite dilution and T = 298.15 K: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
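As a worked illustration of the formation constant defined at the top of the article, a tabulated log β value can be converted into a species concentration via [M_p(OH)_q] = β_pq [M]^p [H+]^(−q) with [H+] = 10^(−pH). The sketch below uses placeholder numbers rather than values from the compilations above, and it ignores activity corrections (the compiled constants refer to infinite dilution).

```python
def hydrolysis_species_conc(log_beta, p, q, free_metal_conc, pH):
    """[M_p(OH)_q] = beta_pq * [M]^p * [H+]^(-q), with [H+] = 10^(-pH).
    Concentrations in mol/L; activity coefficients are neglected."""
    h = 10.0 ** (-pH)
    beta = 10.0 ** log_beta
    return beta * free_metal_conc ** p * h ** (-q)

# Placeholder example: a 1:1 species M(OH)^(z-1) with log beta = -5.0,
# free metal concentration 1e-4 M, at pH 6
print(hydrolysis_species_conc(-5.0, p=1, q=1, free_metal_conc=1e-4, pH=6.0))
```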
[ { "math_id": 0, "text": "\\beta_{pq} = \\frac{[M_p(OH)_q^{(pz-q)}][H^+]^q}{[M^{z+}]^p}" } ]
https://en.wikipedia.org/wiki?curid=13831369
13833
Hash table
Associative array for storing key-value pairs In computing, a hash table is a data structure that implements an associative array, also called a dictionary or simply map; an associative array is an abstract data type that maps keys to values. A hash table uses a hash function to compute an "index", also called a "hash code", into an array of "buckets" or "slots", from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. A map implemented by a hash table is called a hash map. Most hash table designs employ an imperfect hash function. Hash collisions, where the hash function generates the same index for more than one key, therefore typically must be accommodated in some way. In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key–value pairs, at amortized constant average cost per operation. Hashing is an example of a space-time tradeoff. If memory is infinite, the entire key can be used directly as an index to locate its value with a single memory access. On the other hand, if infinite time is available, values can be stored without regard for their keys, and a binary search or linear search can be used to retrieve the element.458 In many situations, hash tables turn out to be on average more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets. History. The idea of hashing arose independently in different places. In January 1953, Hans Peter Luhn wrote an internal IBM memorandum that used hashing with chaining. The first example of open addressing was proposed by A. D. Linh, building on Luhn's memorandum.547 Around the same time, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research implemented hashing for the IBM 701 assembler.124 Open addressing with linear probing is credited to Amdahl, although Andrey Ershov independently had the same idea. The term "open addressing" was coined by W. Wesley Peterson on his article which discusses the problem of search in large files.15 The first published work on hashing with chaining is credited to Arnold Dumey, who discussed the idea of using remainder modulo a prime as a hash function.15 The word "hashing" was first published in an article by Robert Morris.126 A theoretical analysis of linear probing was submitted originally by Konheim and Weiss.15 Overview. An associative array stores a set of (key, value) pairs and allows insertion, deletion, and lookup (search), with the constraint of unique keys. In the hash table implementation of associative arrays, an array formula_0 of length formula_1 is partially filled with formula_2 elements, where formula_3. A value formula_4 gets stored at an index location formula_5, where formula_6 is a hash function, and formula_7.2 Under reasonable assumptions, hash tables have better time complexity bounds on search, delete, and insert operations in comparison to self-balancing binary search trees.1 Hash tables are also commonly used to implement sets, by omitting the stored value for each key and merely tracking whether the key is present.1 Load factor. 
A "load factor" formula_8 is a critical statistic of a hash table, and is defined as follows: formula_9 where The performance of the hash table deteriorates in relation to the load factor formula_8.2 The software typically ensures that the load factor formula_8 remains below a certain constant, formula_10. This helps maintain good performance. Therefore, a common approach is to resize or "rehash" the hash table whenever the load factor formula_8 reaches formula_10. Similarly the table may also be resized if the load factor drops below formula_11. Load factor for separate chaining. With separate chaining hash tables, each slot of the bucket array stores a pointer to a list or array of data. Separate chaining hash tables suffer gradually declining performance as the load factor grows, and no fixed point beyond which resizing is absolutely needed. With separate chaining, the value of formula_10 that gives best performance is typically between 1 and 3. Load factor for open addressing. With open addressing, each slot of the bucket array holds exactly one item. Therefore an open-addressed hash table cannot have a load factor greater than 1. The performance of open addressing becomes very bad when the load factor approaches 1. Therefore a hash table that uses open addressing "must" be resized or "rehashed" if the load factor formula_8 approaches 1. With open addressing, acceptable figures of max load factor formula_10 should range around 0.6 to 0.75.110 Hash function. A hash function formula_12 maps the universe formula_13 of keys to indices or slots within the table, that is, formula_14 for formula_15. The conventional implementations of hash functions are based on the "integer universe assumption" that all elements of the table stem from the universe formula_16, where the bit length of formula_17 is confined within the word size of a computer architecture.2 A hash function formula_6 is said to be perfect for a given set formula_18 if it is injective on formula_18, that is, if each element formula_19 maps to a different value in formula_20. A perfect hash function can be created if all the keys are known ahead of time. Integer universe assumption. The schemes of hashing used in "integer universe assumption" include hashing by division, hashing by multiplication, universal hashing, dynamic perfect hashing, and static perfect hashing.2 However, hashing by division is the commonly used scheme.264110 Hashing by division. The scheme in hashing by division is as follows:2 formula_21 Where formula_22 is the hash digest of formula_19 and formula_1 is the size of the table. Hashing by multiplication. The scheme in hashing by multiplication is as follows: formula_23 Where formula_0 is a real-valued constant and formula_1 is the size of the table. An advantage of the hashing by multiplication is that the formula_1 is not critical. Although any value formula_0 produces a hash function, Donald Knuth suggests using the golden ratio.3 Choosing a hash function. Uniform distribution of the hash values is a fundamental requirement of a hash function. A non-uniform distribution increases the number of collisions and the cost of resolving them. Uniformity is sometimes difficult to ensure by design, but may be evaluated empirically using statistical tests, e.g., a Pearson's chi-squared test for discrete uniform distributions. The distribution needs to be uniform only for table sizes that occur in the application. 
In particular, if one uses dynamic resizing with exact doubling and halving of the table size, then the hash function needs to be uniform only when the size is a power of two. Here the index can be computed as some range of bits of the hash function. On the other hand, some hashing algorithms prefer to have the size be a prime number. For open addressing schemes, the hash function should also avoid "clustering", the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash is claimed to have particularly poor clustering behavior. K-independent hashing offers a way to prove a certain hash function does not have bad keysets for a given type of hashtable. A number of K-independence results are known for collision resolution schemes such as linear probing and cuckoo hashing. Since K-independence can prove a hash function works, one can then focus on finding the fastest possible such hash function. Collision resolution. A search algorithm that uses hashing consists of two parts. The first part is computing a hash function which transforms the search key into an array index. The ideal case is such that no two search keys hashes to the same array index. However, this is not always the case and is impossible to guarantee for unseen given data.515 Hence the second part of the algorithm is collision resolution. The two common methods for collision resolution are separate chaining and open addressing.458 Separate chaining. In separate chaining, the process involves building a linked list with key–value pair for each search array index. The collided items are chained together through a single linked list, which can be traversed to access the item with a unique search key.464 Collision resolution through chaining with linked list is a common method of implementation of hash tables. Let formula_24 and formula_4 be the hash table and the node respectively, the operation involves as follows:258 Chained-Hash-Insert("T", "k") "insert" "x" "at the head of linked list" "T"["h"("k")] Chained-Hash-Search("T", "k") "search for an element with key" "k" "in linked list" "T"["h"("k")] Chained-Hash-Delete("T", "k") "delete" "x" "from the linked list" "T"["h"("k")] If the element is comparable either numerically or lexically, and inserted into the list by maintaining the total order, it results in faster termination of the unsuccessful searches. Other data structures for separate chaining. If the keys are ordered, it could be efficient to use "self-organizing" concepts such as using a self-balancing binary search tree, through which the theoretical worst case could be brought down to formula_25, although it introduces additional complexities.521 In dynamic perfect hashing, two-level hash tables are used to reduce the look-up complexity to be a guaranteed formula_26 in the worst case. In this technique, the buckets of formula_27 entries are organized as perfect hash tables with formula_28 slots providing constant worst-case lookup time, and low amortized time for insertion. A study shows array-based separate chaining to be 97% more performant when compared to the standard linked list method under heavy load.99 Techniques such as using fusion tree for each buckets also result in constant time for all operations with high probability. Caching and locality of reference. 
The linked list of separate chaining implementation may not be cache-conscious due to spatial locality—locality of reference—when the nodes of the linked list are scattered across memory, thus the list traversal during insert and search may entail CPU cache inefficiencies.91 In cache-conscious variants of collision resolution through separate chaining, a dynamic array found to be more cache-friendly is used in the place where a linked list or self-balancing binary search trees is usually deployed, since the contiguous allocation pattern of the array could be exploited by hardware-cache prefetchers—such as translation lookaside buffer—resulting in reduced access time and memory consumption. Open addressing. Open addressing is another collision resolution technique in which every entry record is stored in the bucket array itself, and the hash resolution is performed through probing. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some "probe sequence", until an unoccupied slot is found. When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found, or an unused array slot is found, which indicates an unsuccessful search. Well-known probe sequences include: The performance of open addressing may be slower compared to separate chaining since the probe sequence increases when the load factor formula_8 approaches 1.93 The probing results in an infinite loop if the load factor reaches 1, in the case of a completely filled table.471 The average cost of linear probing depends on the hash function's ability to distribute the elements uniformly throughout the table to avoid clustering, since formation of clusters would result in increased search time.472 Caching and locality of reference. Since the slots are located in successive locations, linear probing could lead to better utilization of CPU cache due to locality of references resulting in reduced memory latency. Other collision resolution techniques based on open addressing. Coalesced hashing. Coalesced hashing is a hybrid of both separate chaining and open addressing in which the buckets or nodes link within the table. The algorithm is ideally suited for fixed memory allocation.4 The collision in coalesced hashing is resolved by identifying the largest-indexed empty slot on the hash table, then the colliding value is inserted into that slot. The bucket is also linked to the inserted node's slot which contains its colliding hash address.8 Cuckoo hashing. Cuckoo hashing is a form of open addressing collision resolution technique which guarantees formula_26 worst-case lookup complexity and constant amortized time for insertions. The collision is resolved through maintaining two hash tables, each having its own hashing function, and collided slot gets replaced with the given item, and the preoccupied element of the slot gets displaced into the other hash table. The process continues until every key has its own spot in the empty buckets of the tables; if the procedure enters into infinite loop—which is identified through maintaining a threshold loop counter—both hash tables get rehashed with newer hash functions and the procedure continues. Hopscotch hashing. Hopscotch hashing is an open addressing based algorithm which combines the elements of cuckoo hashing, linear probing and chaining through the notion of a "neighbourhood" of buckets—the subsequent buckets around any given occupied bucket, also called a "virtual" bucket. 
The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput in concurrent settings, thus well suited for implementing resizable concurrent hash table.350 The neighbourhood characteristic of hopscotch hashing guarantees a property that, the cost of finding the desired item from any given buckets within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to be an item into its neighbourhood—with a possible cost involved in displacing other items.352 Each bucket within the hash table includes an additional "hop-information"—an "H"-bit bit array for indicating the relative distance of the item which was originally hashed into the current virtual bucket within "H"-1 entries.352 Let formula_27 and formula_29 be the key to be inserted and bucket to which the key is hashed into respectively; several cases are involved in the insertion procedure such that the neighbourhood property of the algorithm is vowed: if formula_29 is empty, the element is inserted, and the leftmost bit of bitmap is set to 1; if not empty, linear probing is used for finding an empty slot in the table, the bitmap of the bucket gets updated followed by the insertion; if the empty slot is not within the range of the "neighbourhood," i.e. "H"-1, subsequent swap and hop-info bit array manipulation of each bucket is performed in accordance with its neighbourhood invariant properties.353 Robin Hood hashing. Robin Hood hashing is an open addressing based collision resolution algorithm; the collisions are resolved through favouring the displacement of the element that is farthest—or longest "probe sequence length" (PSL)—from its "home location" i.e. the bucket to which the item was hashed into.12 Although Robin Hood hashing does not change the theoretical search cost, it significantly affects the variance of the distribution of the items on the buckets,2 i.e. dealing with cluster formation in the hash table. Each node within the hash table that uses Robin Hood hashing should be augmented to store an extra PSL value. Let formula_4 be the key to be inserted, formula_30 be the (incremental) PSL length of formula_4, formula_24 be the hash table and formula_31 be the index, the insertion procedure is as follows:5 Dynamic resizing. Repeated insertions cause the number of entries in a hash table to grow, which consequently increases the load factor; to maintain the amortized formula_26 performance of the lookup and insertion operations, a hash table is dynamically resized and the items of the tables are "rehashed" into the buckets of the new hash table, since the items cannot be copied over as varying table sizes results in different hash value due to modulo operation. If a hash table becomes "too empty" after deleting some elements, resizing may be performed to avoid excessive memory usage. Resizing by moving all entries. Generally, a new hash table with a size double that of the original hash table gets allocated privately and every item in the original hash table gets moved to the newly allocated one by computing the hash values of the items followed by the insertion operation. Rehashing is simple, but computationally expensive. Alternatives to all-at-once rehashing. Some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the hash table all at once, because it may interrupt time-critical operations. 
If one cannot avoid dynamic resizing, a solution is to perform the resizing gradually to avoid storage blip—typically at 50% of new table's size—during rehashing and to avoid memory fragmentation that triggers heap compaction due to deallocation of large memory blocks caused by the old hash table. In such case, the rehashing operation is done incrementally through extending prior memory block allocated for the old hash table such that the buckets of the hash table remain unaltered. A common approach for amortized rehashing involves maintaining two hash functions formula_37 and formula_38. The process of rehashing a bucket's items in accordance with the new hash function is termed as "cleaning", which is implemented through command pattern by encapsulating the operations such as formula_39, formula_40 and formula_41 through a formula_42 wrapper such that each element in the bucket gets rehashed and its procedure involve as follows:3 Linear hashing. Linear hashing is an implementation of the hash table which enables dynamic growths or shrinks of the table one bucket at a time. Performance. The performance of a hash table is dependent on the hash function's ability in generating quasi-random numbers (formula_45) for entries in the hash table where formula_46, formula_2 and formula_47 denotes the key, number of buckets and the hash function such that formula_48. If the hash function generates the same formula_45 for distinct keys (formula_49), this results in "collision", which is dealt with in a variety of ways. The constant time complexity (formula_26) of the operation in a hash table is presupposed on the condition that the hash function doesn't generate colliding indices; thus, the performance of the hash table is directly proportional to the chosen hash function's ability to disperse the indices. However, construction of such a hash function is practically infeasible, that being so, implementations depend on case-specific collision resolution techniques in achieving higher performance.2 Applications. Associative arrays. Hash tables are commonly used to implement many types of in-memory tables. They are used to implement associative arrays. Database indexing. Hash tables may also be used as disk-based data structures and database indices (such as in dbm) although B-trees are more popular in these applications. Caches. Hash tables can be used to implement caches, auxiliary data tables that are used to speed up the access to data that is primarily stored in slower media. In this application, hash collisions can be handled by discarding one of the two colliding entries—usually erasing the old item that is currently stored in the table and overwriting it with the new item, so every item in the table has a unique hash value. Sets. Hash tables can be used in the implementation of set data structure, which can store unique values without any particular order; set is typically used in testing the membership of a value in the collection, rather than element retrieval. Transposition table. A transposition table to a complex Hash Table which stores information about each section that has been searched. Implementations. Many programming languages provide hash table functionality, either as built-in associative arrays or as standard library modules. In JavaScript, an "object" is a mutable collection of key-value pairs (called "properties"), where each key is either a string or a guaranteed-unique "symbol"; any other value, when used as a key, is first coerced to a string. 
Aside from the seven "primitive" data types, every value in JavaScript is an object. ECMAScript 2015 also added the codice_0 data structure, which accepts arbitrary values as keys. C++11 includes codice_1 in its standard library for storing keys and values of arbitrary types. Go's built-in codice_2 implements a hash table in the form of a type. The Java programming language includes the codice_3, codice_4, codice_5, and codice_6 generic collections. Python's built-in codice_7 implements a hash table in the form of a type. Ruby's built-in codice_8 uses the open addressing model from Ruby 2.4 onwards. The Rust programming language includes codice_4 and codice_3 as part of the Rust Standard Library. The .NET standard library includes codice_3 and codice_12, so they can be used from languages such as C# and VB.NET.
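For illustration, the separate-chaining scheme described earlier in this article can be sketched in a few lines of Python. This is a simplified, hypothetical example (the class and method names are invented here, and dynamic resizing is omitted); it is not the implementation used by any of the libraries listed above.

class ChainedHashMap:
    # A minimal hash table using separate chaining; no dynamic resizing.
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]   # one chain (Python list) per bucket
        self.count = 0

    def _chain(self, key):
        return self.table[hash(key) % len(self.table)]   # bucket index = h(key) mod m

    def put(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:                  # key already present: overwrite its value
                chain[i] = (key, value)
                return
        chain.append((key, value))        # empty bucket or collision: append to the chain
        self.count += 1

    def get(self, key, default=None):
        for k, v in self._chain(key):     # walk the chain for this bucket
            if k == key:
                return v
        return default

    def delete(self, key):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                del chain[i]
                self.count -= 1
                return True
        return False

    def load_factor(self):                # n / m, the load factor defined earlier
        return self.count / len(self.table)

A short usage check: after m = ChainedHashMap(); m.put('key', 1), the call m.get('key') returns 1 and m.load_factor() returns 0.125.
See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;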
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "m \\ge n" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "A[h(x)]" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "h(x) < m" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\text{load factor}\\ (\\alpha) = \\frac{n}{m}," }, { "math_id": 10, "text": "\\alpha_{\\max}" }, { "math_id": 11, "text": "\\alpha_{\\max}/4" }, { "math_id": 12, "text": "h : U \\rightarrow \\{0, ..., m-1\\}" }, { "math_id": 13, "text": "U" }, { "math_id": 14, "text": "h(x) \\in {0, ..., m-1}" }, { "math_id": 15, "text": "x \\in U" }, { "math_id": 16, "text": "U = \\{0, ..., u - 1\\}" }, { "math_id": 17, "text": "u" }, { "math_id": 18, "text": "S" }, { "math_id": 19, "text": "x \\in S" }, { "math_id": 20, "text": "{0, ..., m-1}" }, { "math_id": 21, "text": "h(x)\\ =\\ x\\, \\bmod\\, m" }, { "math_id": 22, "text": "M" }, { "math_id": 23, "text": "h(x) = \\lfloor m \\bigl((M A) \\bmod 1\\bigr) \\rfloor" }, { "math_id": 24, "text": "T" }, { "math_id": 25, "text": "O(\\log{n})" }, { "math_id": 26, "text": "O(1)" }, { "math_id": 27, "text": "k" }, { "math_id": 28, "text": "k^2" }, { "math_id": 29, "text": "Bk" }, { "math_id": 30, "text": "x.psl" }, { "math_id": 31, "text": "j" }, { "math_id": 32, "text": "x.psl\\ \\le\\ T[j].psl" }, { "math_id": 33, "text": "x.psl\\ >\\ T[j].psl" }, { "math_id": 34, "text": "T[j]" }, { "math_id": 35, "text": "x'" }, { "math_id": 36, "text": "j+1" }, { "math_id": 37, "text": "h_\\text{old}" }, { "math_id": 38, "text": "h_\\text{new}" }, { "math_id": 39, "text": "\\mathrm{Add}(\\mathrm{key})" }, { "math_id": 40, "text": "\\mathrm{Get}(\\mathrm{key})" }, { "math_id": 41, "text": "\\mathrm{Delete}(\\mathrm{key})" }, { "math_id": 42, "text": "\\mathrm{Lookup}(\\mathrm{key}, \\text{command})" }, { "math_id": 43, "text": "\\mathrm{Table}[h_\\text{old}(\\mathrm{key})]" }, { "math_id": 44, "text": "\\mathrm{Table}[h_\\text{new}(\\mathrm{key})]" }, { "math_id": 45, "text": "\\sigma" }, { "math_id": 46, "text": "K" }, { "math_id": 47, "text": "h(x)" }, { "math_id": 48, "text": "\\sigma\\ =\\ h(K)\\ \\%\\ n" }, { "math_id": 49, "text": "K_1 \\ne K_2,\\ h(K_1)\\ =\\ h(K_2)" } ]
https://en.wikipedia.org/wiki?curid=13833
13835110
Nonnegative rank (linear algebra)
In linear algebra, the nonnegative rank of a nonnegative matrix is a concept similar to the usual linear rank of a real matrix, but with the added requirement that certain coefficients and entries of vectors/matrices have to be nonnegative. For example, the linear rank of a matrix is the smallest number of vectors such that every column of the matrix can be written as a linear combination of those vectors. For the nonnegative rank, it is required that the vectors must have nonnegative entries, and also that the coefficients in the linear combinations are nonnegative. Formal definition. There are several equivalent definitions, all modifying the definition of the linear rank slightly. Apart from the definition given above, there is the following: The nonnegative rank of a nonnegative "m×n"-matrix "A" is equal to the smallest number "q" such that there exists a nonnegative "m×q"-matrix "B" and a nonnegative "q×n"-matrix "C" such that "A = BC" (the usual matrix product). To obtain the linear rank, drop the condition that "B" and "C" must be nonnegative. Further, the nonnegative rank is the smallest number of nonnegative rank-one matrices into which the matrix can be decomposed additively: formula_0 where "Rj ≥ 0" stands for ""Rj" is nonnegative". (To obtain the usual linear rank, drop the condition that the "Rj" have to be nonnegative.) Given a nonnegative formula_1 matrix "A", the nonnegative rank formula_2 of "A" satisfies formula_3 where formula_4 denotes the usual linear rank of "A". A fallacy. The rank of the matrix "A" is the largest number of columns which are linearly independent, i.e., none of the selected columns can be written as a linear combination of the other selected columns. It is not true that adding nonnegativity to this characterization gives the nonnegative rank: the nonnegative rank is in general less than or equal to the largest number of columns such that no selected column can be written as a nonnegative linear combination of the other selected columns. Connection with the linear rank. It is always true that "rank(A) ≤ rank+(A)". In fact "rank+(A) = rank(A)" holds whenever "rank(A) ≤ 2". In the case "rank(A) ≥ 3", however, "rank(A) &lt; rank+(A)" is possible. For example, the matrix formula_5 satisfies "rank(A) = 3 &lt; 4 = rank+(A)". These two results (including the 4×4 matrix example above) were first provided by Thomas in a response to a question posed in 1973 by Berman and Plemmons. Computing the nonnegative rank. The nonnegative rank of a matrix can be determined algorithmically. It has been proved that determining whether formula_6 is NP-hard. Obvious questions concerning the complexity of nonnegative rank computation remain unanswered to date. For example, the complexity of determining the nonnegative rank of matrices of fixed rank "k" is unknown for "k &gt; 2". Ancillary facts. Nonnegative rank has important applications in combinatorial optimization: the minimum number of facets of an extension of a polyhedron "P" is equal to the nonnegative rank of its so-called "slack matrix".
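As a small numerical illustration (an assumed NumPy sketch, not part of the original article), the ordinary rank of the 4×4 matrix above is easy to check by computation, whereas bounding the nonnegative rank amounts to exhibiting (or ruling out) nonnegative factorizations:

import numpy as np

# The 4x4 example discussed above: linear rank 3, nonnegative rank 4.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])

print(np.linalg.matrix_rank(A))   # prints 3, the usual linear rank

# Any nonnegative factorization A = B @ C with inner dimension q certifies rank_+(A) <= q.
# The trivial factorization with q = 4 always exists:
B, C = A.astype(float), np.eye(4)
assert np.allclose(A, B @ C) and (B >= 0).all() and (C >= 0).all()
# Showing that no such factorization exists with q = 3 is what makes rank_+(A) = 4;
# no single library call does this, and the general decision problem is NP-hard.

References. &lt;templatestyles src="Reflist/styles.css" /&gt;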
[ { "math_id": 0, "text": " \\mbox{rank}_+(A) = \\min\\{ q \\mid \\sum_{j=1}^q R_j = A, \\; \\mbox{rank}\\,R_1=\\dots=\\mbox{rank}\\,R_q =1, \\; R_1,\\dots,R_q \\ge 0\\}," }, { "math_id": 1, "text": " m \\times n " }, { "math_id": 2, "text": " rank_+(A) " }, { "math_id": 3, "text": "\\mbox{rank}\\,(A) \\leq \\mbox{rank}_+(A) \\leq \\min(m,n)," }, { "math_id": 4, "text": "\\mbox{rank}\\,(A)" }, { "math_id": 5, "text": "\\mathbf{A} = \\begin{bmatrix}\n1 & 1 & 0 & 0\\\\\n1 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 1\\\\\n0 & 0 & 1 & 1 \\end{bmatrix}," }, { "math_id": 6, "text": "{{\\text{rank}}_+}(A)= \\text{rank}(A)" } ]
https://en.wikipedia.org/wiki?curid=13835110
1383616
Shotgun slug
Type of ammunition used mainly in hunting medium and large game A shotgun slug is a heavy projectile (a slug) made of lead, copper, or other material and fired from a shotgun. Slugs are designed for hunting large game, and other uses, particularly in areas near human population where their short range and slow speed helps increase safety margin. The first effective modern shotgun slug was introduced by Wilhelm Brenneke in 1898, and his design remains in use today. Most shotgun slugs are designed to be fired through a cylinder bore, improved cylinder choke, rifled choke tubes, or fully rifled bores. Slugs differ from round ball lead projectiles in that they are stabilized in some manner. In the early development of firearms, smooth-bored barrels were not differentiated to fire either single or multiple projectiles. Single projectiles were used for larger game and warfare, though shot could be loaded as needed for small game, birds, and activities such as trench clearing. As firearms became specialized and differentiated, shotguns were still able to fire round balls though rifled muskets were far more accurate and effective. Modern slugs emerged as a way of improving on the accuracy of round balls. Early slugs were heavier in front than in the rear, similar to a Minié ball, to provide aerodynamic stabilization. Rifled barrels, rifled slugs and rifled choke tubes were developed later to provide gyroscopic spin stabilization in place of or in addition to aerodynamic stabilization. Some of these slugs are saboted sub-caliber projectiles, resulting in greatly improved external ballistics performance. A shotgun slug typically has more physical mass than a rifle bullet. For example, the lightest common .30-06 Springfield rifle bullet weighs 150 grains (, while the lightest common 12 gauge shotgun slug weighs &lt;templatestyles src="Fraction/styles.css" /&gt;7⁄8 oz (). Slugs made of low-density material, such as rubber, are available as less than lethal specialty ammunition. Uses. Shotgun slugs are used to hunt medium to large game at short ranges by firing a single large projectile rather than a large number of smaller ones. In many populated areas, hunters are restricted to shotguns even for medium to large game, such as deer and elk, due to concerns about the range of modern rifle bullets. In such cases a slug will provide a longer range than a load of buckshot, which traditionally was used at ranges up to approximately , without approaching the range of a rifle. In Alaska, seasoned professional guides and wild life officials use pump-action 12 gauge shotguns loaded with slugs for defense against both black and brown bears under . Law enforcement officers are frequently equipped with shotguns. In contrast to traditional buckshot, slugs offer benefits of accuracy, range, and increased wounding potential at longer ranges while avoiding stray pellets that could injure bystanders or damage property. Further, a shotgun allows selecting a desired shell to meet the need in a variety of situations. Examples include a less-lethal cartridge in the form of a bean bag round or other less lethal buckshot and slugs. A traditional rifle would offer greater range and accuracy than slugs, but without the variety of ammunition choices and versatility. Design considerations. The mass of a shotgun slug is kept within SAAMI pressure limits for shot loads in any given shotgun shell load design. Slugs are designed to pass safely through open chokes and should never be fired through tight or unknown barrels. 
The internal pressure of the shotshell load will actually be slightly higher than the equivalent mass slug projectile load, due to an increased resistance that occurs from a phenomenon known as shot setback. Common 12 gauge slug masses are &lt;templatestyles src="Fraction/styles.css" /&gt;7⁄8 oz (, 1 oz (, and &lt;templatestyles src="Fraction/styles.css" /&gt;1+1⁄8 oz (, the same weight as common birdshot payloads. Comparisons with rifle bullets. A 1 oz ( Foster 12 gauge shotgun slug achieves a velocity of approximately with a muzzle energy of . slugs travel at around with a muzzle energy of . In contrast, a .30-06 Springfield bullet weighing at a velocity of achieves an energy of . A bullet at , which is a very common 30-06 Springfield load and not its true maximum potential, achieves of energy. Due to the slug's larger caliber and shape, it has greater air resistance and slows down much more quickly than a bullet. It slows to less than half its muzzle energy at , which is below the minimum recommended energy threshold for taking large game. The minimum recommended muzzle energy is ( for deer, for elk, and for moose). A slug also becomes increasingly inaccurate with distance; out to or more, with a maximum practical range of approximately . In contrast, centerfire cartridges fired from rifles can easily travel at longer ranges of or more. Shotgun slugs are best suited for use over shorter ranges. Taylor knock-out factor. The Taylor knock-out factor (TKOF) was developed as a measure of stopping power for hunting game, however it is a rather flawed calculation. It is defined as the product of bullet mass, velocity and diameter, using the imperial units grains (equal to 64.79891 mg), feet per second (equal to 0.3048 m/s) and inches (equal to 25.4 mm): formula_0 Some TKOF example values for shotgun slugs are: To compare with rifles, some TKOF example values for rifle cartridges are: Types. Full-bore slugs. Full-bore slugs such as the Brenneke and Foster designs use a spin-stabilization method of stabilization through the use of angled fins on the slug’s outer walls. The slight 750 RPM spin is enough to stabilize the slug because the slug’s center of pressure is so much further back than its center of mass. Saboted slugs are similar in shape to handgun bullets and airguns pellets. Their center of pressure is in front of their center of mass, meaning a higher twist rate is required to achieve proper stabilization. Most saboted slugs are designed for rifled shotgun barrels and are stabilized through gyroscopic forces from their spin. Brenneke slugs. The Brenneke slug was developed by the German gun and ammunition designer Wilhelm Brenneke (1865–1951) in 1898. The original Brenneke slug is a solid lead slug with ribs cast onto the outside, much like a rifled Foster slug. There is a plastic, felt or cellulose fiber wad attached to the base that remains attached after firing. This wad serves as a gas seal, preventing the gasses from going around the projectile. The lead "ribs" that are used for inducing spin also swage through any choked bore from improved cylinder to full. The soft metal, typically lead, fins squish or swage down in size to fit through the choke to allow for an easy passage. Foster slugs. The "Foster slug", invented by Karl M. Foster in 1931, and patented in 1947 (U.S. patent 2414863), is a type of shotgun slug designed to be fired through a smoothbore shotgun barrel, even though it commonly labeled as a "rifled" slug. 
A rifled slug is for smooth bores and a sabot slug is for rifled barrels. Most Foster slugs also have "rifling", which consists of ribs on the outside of the slug. Like the Brenneke, these ribs impart a rotation on the slug to correct for manufacturing irregularities, thus improving precision (i.e. group size). Similar to traditional rifling, the rotation of the slug imparts gyroscopic stabilization. Saboted slugs. Saboted slugs are shotgun projectiles smaller than the bore of the shotgun and supported by a plastic sabot. The sabot is traditionally designed to engage the rifling in a rifled shotgun barrel and impart a ballistic spin onto the projectile. This differentiates them from traditional slugs, which are not designed to benefit from a rifled barrel (though neither does the other any damage). Due to the fact that they do not contact the bore, they can be made from a variety of materials including lead, copper, brass, or steel. Saboted slugs can vary in shape, but are typically bullet-shaped for increased ballistic coefficient and greater range. The sabot is generally plastic and serves to seal the bore and keep the slug centered in the barrel while it rotates with the rifling. The sabot separates from the slug after it departs the muzzle. Saboted slugs fired from rifled bores are superior in accuracy over any smooth-bored slug options with accuracy approaching that of low-velocity rifle calibers. Wad slugs. A modern variant between the Foster slug and the sabot slug is the wad slug. This is a type of shotgun slug designed to be fired through a smoothbore shotgun barrel. Like the traditional Foster slug, a deep hollow is located in the rear of this slug, which serves to retain the center of mass near the front tip of the slug much like the Foster slug. However, unlike the Foster slug, a wad slug additionally has a key or web wall molded across the deep hollow, spanning the hollow, which serves to increase the structural integrity of the slug while also reducing the amount of expansion of the slug when fired, reducing the stress on the shot wad in which it rides down a barrel. Also, unlike Foster slugs that have thin fins on the outside of the slug, much like those on the Brenneke, the wad slug is shaped with an ogive or bullet shape, with a smooth outer surface. The wad slug is loaded using a standard shotshell wad, which acts like a sabot. The diameter of the wad slug is slightly less than the nominal bore diameter, being around for a 12-gauge wad slug, and a wad slug is generally cast solely from pure lead, necessary for increasing safety if the slug is ever fired through a choked shotgun. Common 12 gauge slug masses are &lt;templatestyles src="Fraction/styles.css" /&gt;7⁄8 oz ((), 1 oz ((), and &lt;templatestyles src="Fraction/styles.css" /&gt;1+1⁄8 oz ((), the same as common birdshot payloads. Depending on the specific stack-up, a card wad is also sometimes located between the slug and the shotshell wad, depending largely on which hull is specified, with the primary intended purpose of improving fold crimps on the loaded wad slug shell that serves to regulate fired shotshell pressures and improve accuracy. It is also possible to fire a wad slug through rifled slug barrels, and, unlike with the Foster slug where lead fouling is often a problem, a wad slug typically causes no significant leading, being nested inside a traditional shotshell wad functioning as a sabot as it travels down the shotgun barrel. 
Accuracy of wad slugs falls off quickly at ranges beyond , thereby largely equaling the ranges possible with Foster slugs, while still not reaching the ranges possible with traditional sabot slugs using thicker-walled sabots. Unlike the Foster slug which is traditionally roll-crimped, the wad slug is fold-crimped. Because of this important difference, and because it uses standard shotshell wads, a wad slug can easily be reloaded using any standard modern shotshell reloading press without requiring specialized roll-crimp tools. Plumbata slugs. A plumbata slug has a plastic stabilizer attached to the projectile. The stabilizer may be fitted into a cavity in the bottom of the slug, or it may fit over the slug and into external notches on the slug. With the first method discarding sabots may be added. And with the second, the stabilizer may act as a sabot, but remains attached to the projectile and is commonly known as an "Impact Discarding Sabot" (IDS). Steel slugs. There are some types of all-steel subcaliber slugs supported by a protective plastic sabot (the projectile would damage the barrel without a sabot). Examples include Russian "Tandem" wadcutter-type slug (the name is historical, as early versions consisted of two spherical steel balls) and ogive "UDAR" ("Strike") slug and French spool-like "Balle Blondeau" (Blondeau slug) and "Balle fleche Sauvestre" (Sauvestre flechette) with steel sabot inside expanding copper body and plastic rear empennage. Made of non-deforming steel, these slugs are well-suited to shooting in brush, but may produce overpenetration. They also may be used for disabling vehicles by firing in the engine compartment or for defeating hard body armor. Improvised slugs. Wax slugs. Another variant of a Great Depression–era shotgun slug design is the wax slug. These were made by hand by cutting the end off a standard birdshot loaded shotshell, shortening the shell very slightly, pouring the lead shot out, and melting paraffin, candle wax, or crayons in a pan on a stovetop, mixing the lead birdshot in the melted wax, and then using a spoon to pour the liquified wax containing part of the birdshot back into the shotshell, all while not overfilling the shotgun shell. Once the shell cooled, the birdshot was now held in a mass by the cooled paraffin, and formed a slug. No roll or fold crimp was required to hold the wax slug in the hull. These were often used to hunt deer during the Depression. Cut shell slugs. Yet another expedient shotgun slug design is the cut shell. These are made by hand from a standard birdshot shell by cutting a ring around and through the hull of the shell that nearly encircles the shell, with the cut traditionally located in the middle of the wad separating the powder and shot. A small amount of the shell wall is retained, amounting to roughly a quarter of the circumference of the shotshell hull. When fired, the end of the hull separates from the base and travels through the bore and down range. Cut shells have the advantage of expedience. They can be handmade on the spot as the need arises while on a hunt for small game if a larger game animal such as a deer or a bear appears. In terms of safety, part of the shell may remain behind in the barrel, causing potential problems if not noticed and cleared before another shot is fired. Guns for use with slugs. Many hunters hunt with shotgun slugs where rifle usage is not allowed, or as a way of saving the cost of a rifle by getting additional use out of their shotgun. 
A barrel for shooting slugs can require some special considerations. The biggest drawback of a rifled shotgun barrel is the inability to fire buckshot or birdshot accurately. While buckshot or birdshot will not rapidly damage the gun (it can wear the rifling of the barrel with long-term repeated use), the shot's spread increases nearly four-fold compared to a smooth bore, and pellets tend to form a ring-shaped pattern due to the pellets' tangential velocity moving them away from the bore line. In practical terms, the effective range of a rifled shotgun loaded with buckshot is limited to or less. Iron sights or a low magnification telescopic sight are needed for accuracy, rather than the bead sight used with shot, and an open choke is best. Since most current production shotguns come equipped with sighting ribs and interchangeable choke tubes, converting a standard shotgun to a slug gun can be as simple as attaching clamp-on sights to the rib and switching to a skeet or cylinder choke tube. There are also rifled choke tubes of cylinder bore. Many repeating shotguns have barrels that can easily be removed and replaced in under a minute with no tools, so many hunters simply use an additional barrel for shooting slugs. Slug barrels will generally be somewhat shorter, have rifle type sights or a base for a telescopic sight, and may be either rifled or smooth bore. Smooth-bore shotgun barrels are quite a bit less expensive than rifled shotgun barrels, and Foster type slugs, as well as wad slugs, can work well up to in a smooth-bore barrel. For achieving accuracy at and beyond, however, a dedicated rifled slug barrel usually provides significant advantages. Another option is to use a rifled choke in a smooth-bore barrel, at least for shotguns having a removable choke tube. Rifled chokes are considerably less expensive than a rifled shotgun barrel, and a smooth-bore barrel paired with a rifled choke is often nearly as accurate as a rifled shotgun barrel dedicated for use with slugs. There are many options in selecting shotguns for use with slugs. Improvements in slug performance have also led to some very specialized slug guns. The H&amp;R Ultra Slug Hunter, for example, uses a heavy rifled barrel (see Accurize) to obtain high accuracy from slugs. Reloading shotgun slugs. Shotgun slugs are often hand loaded, primarily to save cost but also to improve performance over that possible with commercially manufactured slug shells. In contrast, it is possible to reload slug shells with hand-cast lead slugs for less than $0.50 (c. 2013) each. The recurring cost depends heavily on which published recipe is used. Some published recipes for handloading 1 oz ( 12 gauge slugs require as much as of powder each, whereas other 12 gauge recipes for &lt;templatestyles src="Fraction/styles.css" /&gt;7⁄8 oz ( slugs require only of powder. Shotguns operate at much lower pressures than pistols and rifles, typically operating at pressures of or less, for 12 gauge shells, whereas rifles and pistols routinely are operated at pressures in excess of , and sometimes upwards of . The SAAMI maximum permitted pressure limit is only for 12 gauge and shells, including shotgun slugs, so the typical operating pressures for many shotgun shells are only slightly below the maximum permitted pressures allowed for the use of safe ammunition. This small safety margin, and the possibility of pressure varying by over with small changes in components, require great care and consistency in hand-loading. Legal issues. 
Shotgun slugs are sometimes subject to specific regulation in many countries in the world. Legislation differs with each country. The Netherlands. Large game (including deer and wild boar) hunting is only allowed with large caliber rifles; shotguns are only allowed for small and medium-sized game, up to foxes and geese. However, when a shotgun has a rifled barrel, it is considered a rifle, and it becomes legal for hunting roe deer with a minimum caliber and at a and deer or wild boar with a minimum caliber and at . Sweden. Slugs fired from a single-barrel shotgun are allowed for hunting wild boar, fallow deer, and mouflon, although when hunting for wounded game there are no restrictions. The shot must be fired at a range of no more than . The hunter must also have the legal right to use a rifle for such game in order to hunt with shotgun slugs. United Kingdom. Ammunition which contains no fewer than five projectiles, none of which exceed in diameter, is legal with a Section 2 Shotgun Certificate. Slugs, which contain only one projectile and usually exceed in diameter, are controlled under the Firearms Act, and require a firearms certificate to possess, which is very strictly regulated. Legal uses in the UK include, but are not restricted to, practical shotgun enthusiasts as members of clubs and at competitions, such as those run by or affiliated to the UKPSA. United States. Rifled barrels for shotguns are an unusual legal issue in the United States. Firearms with rifled barrels are designed to fire single projectiles, and a firearm that is designed to fire a single projectile with a diameter greater than .50 inches (12.7 mm) is considered a destructive device and as such is severely restricted. However, the ATF has ruled that as long as the gun was designed to fire shot, and modified (by the user or the manufacturer) to fire single projectiles with the addition of a rifled barrel, then the firearm is still considered a shotgun and not a destructive device. In some areas, rifles are prohibited for hunting animals such as deer. This is generally due to safety concerns. Shotgun slugs have a far shorter maximum range than most rifle cartridges, and are safer for use near populated areas. In some areas, there are designated zones and special shotgun-only seasons for deer. This may include a modern slug shotgun, with rifled barrel and high performance sabot slugs, which provides rifle-like power and accuracy at ranges over . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{TKOF} = \\text{Mass(grains)} \\times \\text{Velocity(feet per second)} \\times \\text{Diameter(inches)} \\times \\frac{1}{7000}." } ]
https://en.wikipedia.org/wiki?curid=1383616
138378
Splitting lemma
In mathematics, and more specifically in homological algebra, the splitting lemma states that in any abelian category, the following statements are equivalent for a short exact sequence formula_0 1. Left split: there exists a morphism "t": "B" → "A" such that "tq" is the identity on "A". 2. Right split: there exists a morphism "u": "C" → "B" such that "ru" is the identity on "C". 3. Direct sum: "B" is isomorphic to the direct sum of "A" and "C", with "q" corresponding to the natural injection of "A" and "r" corresponding to the natural projection onto "C". If any of these statements holds, the sequence is called a split exact sequence, and the sequence is said to "split". In the above short exact sequence, where the sequence splits, it allows one to refine the first isomorphism theorem, which states that: "C" ≅ "B"/ker "r" ≅ "B"/"q"("A") (i.e., "C" is isomorphic to the coimage of "r" or cokernel of "q") to: "B" = "q"("A") ⊕ "u"("C") ≅ "A" ⊕ "C" where the first isomorphism theorem is then just the projection onto "C". It is a categorical generalization of the rank–nullity theorem (in the form V ≅ ker "T" ⊕ im "T") in linear algebra. Proof for the category of abelian groups. 3. ⇒ 1. and 3. ⇒ 2.. First, to show that 3. implies both 1. and 2., we assume 3. and take as "t" the natural projection of the direct sum onto "A", and take as "u" the natural injection of "C" into the direct sum. 1. ⇒ 3.. To prove that 1. implies 3., first note that any member of "B" is in the set (ker "t" + im "q"). This follows since for all "b" in "B", "b" = ("b" − "qt"("b")) + "qt"("b"); "qt"("b") is in im "q", and "b" − "qt"("b") is in ker "t", since "t"("b" − "qt"("b")) = "t"("b") − "tqt"("b") = "t"("b") − ("tq")"t"("b") = "t"("b") − "t"("b") = 0. Next, the intersection of im "q" and ker "t" is 0, since if there exists "a" in "A" such that "q"("a") = "b", and "t"("b") = 0, then 0 = "tq"("a") = "a"; and therefore, "b" = 0. This proves that "B" is the direct sum of im "q" and ker "t". So, for all "b" in "B", "b" can be uniquely identified by some "a" in "A", "k" in ker "t", such that "b" = "q"("a") + "k". By exactness ker "r" = im "q". The subsequence "B" ⟶ "C" ⟶ 0 implies that "r" is onto; therefore for any "c" in "C" there exists some "b" = "q"("a") + "k" such that "c" = "r"("b") = "r"("q"("a") + "k") = "r"("k"). Therefore, for any "c" in "C", there exists "k" in ker "t" such that "c" = "r"("k"), and "r"(ker "t") = "C". If "r"("k") = 0, then "k" is in im "q"; since the intersection of im "q" and ker "t" is 0, then "k" = 0. Therefore, the restriction "r": ker "t" → "C" is an isomorphism; and ker "t" is isomorphic to "C". Finally, im "q" is isomorphic to "A" due to the exactness of 0 ⟶ "A" ⟶ "B"; so "B" is isomorphic to the direct sum of "A" and "C", which proves (3). 2. ⇒ 3.. To show that 2. implies 3., we follow a similar argument. Any member of "B" is in the set ker "r" + im "u", since for all "b" in "B", "b" = ("b" − "ur"("b")) + "ur"("b"), which is in ker "r" + im "u". The intersection of ker "r" and im "u" is 0, since if "r"("b") = 0 and "u"("c") = "b", then 0 = "ru"("c") = "c". By exactness, im "q" = ker "r", and since "q" is an injection, im "q" is isomorphic to "A", so "A" is isomorphic to ker "r". Since "ru" is a bijection, "u" is an injection, and thus im "u" is isomorphic to "C". So "B" is again the direct sum of "A" and "C". An alternative "abstract nonsense" proof of the splitting lemma may be formulated entirely in category theoretic terms. Non-abelian groups. In the form stated here, the splitting lemma does not hold in the full category of groups, which is not an abelian category. Partially true. It is partially true: if a short exact sequence of groups is left split or a direct sum (1. or 3.), then all of the conditions hold. For a direct sum this is clear, as one can inject from or project to the summands.
For a left split sequence, the map "t" × "r": "B" → "A" × "C" gives an isomorphism, so "B" is a direct sum (3.), and thus inverting the isomorphism and composing with the natural injection "C" → "A" × "C" gives an injection "C" → "B" splitting "r" (2.). However, if a short exact sequence of groups is right split (2.), then it need not be left split or a direct sum (neither 1. nor 3. follows): the problem is that the image of the right splitting need not be normal. What is true in this case is that "B" is a semidirect product, though not in general a direct product. Counterexample. To form a counterexample, take the smallest non-abelian group "B" ≅ "S"3, the symmetric group on three letters. Let "A" denote the alternating subgroup, and let "C" = "B"/"A" ≅ {±1}. Let "q" and "r" denote the inclusion map and the sign map respectively, so that formula_1 is a short exact sequence. 3. fails, because "S"3 is not abelian, but 2. holds: we may define "u": "C" → "B" by mapping the generator to any two-cycle. Note for completeness that 1. fails: any map "t": "B" → "A" must send every two-cycle to the identity, because a group homomorphism sends a two-cycle (an element of order 2) to an element whose order divides 2, and every element of "A" other than the identity has order 3, since "A" is the alternating subgroup of "S"3, namely the cyclic group of order 3. But every permutation is a product of two-cycles, so "t" is the trivial map, whence "tq": "A" → "A" is the trivial map, not the identity.
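This counterexample can also be checked by direct computation. The following is an illustrative Python sketch (not part of the original article; the helper functions are invented for this purpose), verifying that the sequence is right split while "S"3 fails to be a direct sum of "A" and "C":

# Permutations of {0, 1, 2} are represented as tuples p with p[i] the image of i.
def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def sign(p):                       # the sign map r: counts inversions
    inversions = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

identity = (0, 1, 2)
three_cycle = (1, 2, 0)            # generates the alternating subgroup A
two_cycle = (1, 0, 2)              # a transposition, chosen as u(-1)

# Right splitting u: C -> B with r(u(c)) = c for both elements of C = {+1, -1}.
u = {1: identity, -1: two_cycle}
assert all(sign(u[c]) == c for c in (1, -1))

# Condition 3. fails: S3 is non-abelian, whereas a direct sum of A and C would be abelian.
assert compose(two_cycle, three_cycle) != compose(three_cycle, two_cycle)

Running the sketch raises no assertion errors, matching the discussion above.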
[ { "math_id": 0, "text": "0 \\longrightarrow A \\mathrel{\\overset{q}{\\longrightarrow}} B \\mathrel{\\overset{r}{\\longrightarrow}} C \\longrightarrow 0." }, { "math_id": 1, "text": "0 \\longrightarrow A \\mathrel{\\stackrel{q}{\\longrightarrow}} B \\mathrel{\\stackrel{r}{\\longrightarrow}} C \\longrightarrow 0 " } ]
https://en.wikipedia.org/wiki?curid=138378
1383899
Linear time-invariant system
Mathematical model which is both linear and time-invariant In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response "y"("t") of the system to an arbitrary input "x"("t") can be found directly using convolution: "y"("t") = ("x" ∗ "h")("t") where "h"("t") is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining "h"("t")), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers. Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as "linear translation-invariant" to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, "linear shift-invariant" is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves. Overview. The defining properties of any LTI system are "linearity" and "time invariance". The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system formula_1 is simply the convolution of the input to the system formula_0 with the system's impulse response formula_11. This is called a continuous time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time: formula_12 where "y", "x", and "h" are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral. LTI systems can also be characterized in the "frequency domain" by the system's transfer function, which is the Laplace transform of the system's impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. This is, if the input to a system is the complex waveform formula_13 for some complex amplitude formula_14 and complex frequency formula_15, the output will be some complex constant times the input, say formula_16 for some new complex amplitude formula_17. The ratio formula_18 is the transfer function at frequency formula_15. 
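As a concrete numerical illustration of this eigenfunction property (a hypothetical sketch, not part of the original article; the impulse response and test frequency below are arbitrary choices), a discrete-time convolution in NumPy shows that a complex exponential input is only rescaled by the system:

import numpy as np

h = np.array([0.5, 0.3, 0.2])              # assumed impulse response of a discrete LTI system
omega = 0.4 * np.pi                        # arbitrary test frequency (radians per sample)
n = np.arange(200)
x = np.exp(1j * omega * n)                 # complex exponential input

y = np.convolve(x, h)[:len(n)]             # output y = x * h (discrete convolution)

# Transfer function value H(e^{j*omega}) = sum_k h[k] * e^{-j*omega*k}
H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

# Past the start-up transient, the output equals H times the input: same frequency,
# only the (complex) amplitude changes.
assert np.allclose(y[len(h):], H * x[len(h):])
print(abs(H), np.angle(H))                 # gain and phase shift at this frequency

The same behaviour is what the continuous-time statement above expresses with formula_13 and formula_16.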
Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input. LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits. Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals. A linear system that is not time-invariant can be solved using other approaches such as the Green function method. Continuous-time systems. Impulse response and convolution. The behavior of a linear, continuous-time, time-invariant system with input signal "x"("t") and output signal "y"("t") is described by the convolution integral: where formula_19 is the system's response to an impulse: formula_20. formula_21 is therefore proportional to a weighted average of the input function formula_22. The weighting function is formula_23, simply shifted by amount formula_24. As formula_24 changes, the weighting function emphasizes different parts of the input function. When formula_25 is zero for all negative formula_26, formula_27 depends only on values of formula_28 prior to time formula_24, and the system is said to be causal. To understand why the convolution produces the output of an LTI system, let the notation formula_29 represent the function formula_30 with variable formula_31 and constant formula_26. And let the shorter notation formula_32 represent formula_33. Then a continuous-time system transforms an input function, formula_34 into an output function, formula_35. And in general, every value of the output can depend on every value of the input. This concept is represented by: formula_36 where formula_37 is the transformation operator for time formula_24. In a typical system, formula_27 depends most heavily on the values of formula_28 that occurred near time formula_24. Unless the transform itself changes with formula_24, the output function is just constant, and the system is uninteresting. For a linear system, formula_38 must satisfy Eq.1: And the time-invariance requirement is: In this notation, we can write the impulse response as formula_39 Similarly: Substituting this result into the convolution integral: formula_40 which has the form of the right side of Eq.2 for the case formula_41 and formula_42 Eq.2 then allows this continuation: formula_43 In summary, the input function, formula_32, can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown at Eq.1. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse responses, combined in the same way. 
And the time-invariance property allows that combination to be represented by the convolution integral. The mathematical operations above have a simple graphical simulation. Exponentials as eigenfunctions. An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is, formula_44 where "f" is the eigenfunction and formula_45 is the eigenvalue, a constant. The exponential functions formula_46, where formula_47, are eigenfunctions of a linear, time-invariant operator. A simple proof illustrates this concept. Suppose the input is formula_48. The output of the system with impulse response formula_11 is then formula_49 which, by the commutative property of convolution, is equivalent to formula_50 where the scalar formula_51 is dependent only on the parameter "s". So the system's response is a scaled version of the input. In particular, for any formula_47, the system output is the product of the input formula_52 and the constant formula_53. Hence, formula_46 is an eigenfunction of an LTI system, and the corresponding eigenvalue is formula_53. Direct proof. It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems. Let's set formula_54 some complex exponential and formula_55 a time-shifted version of it. formula_56 by linearity with respect to the constant formula_57. formula_58 by time invariance of formula_59. So formula_60. Setting formula_61 and renaming we get: formula_62 i.e. that a complex exponential formula_63 as input will give a complex exponential of same frequency as output. Fourier and Laplace transforms. The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided Laplace transform formula_64 is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form formula_65 where formula_66 and formula_67). The Fourier transform formula_68 gives the eigenvalues for pure complex sinusoids. Both of formula_53 and formula_69 are called the "system function", "system response", or "transfer function". The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of "t" less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as the bilateral Laplace transform). The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not square integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist. Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist formula_70 One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. 
If we evaluate the system response (Laplace transform of the impulse response) at complex frequency "s" "jω", where "ω" 2"πf", we obtain |"H"("s")| which is the system gain for frequency "f". The relative phase shift between the output and input for that frequency component is likewise given by arg("H"("s")). Important system properties. Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time, however this restriction is not present in other cases such as image processing. Causality. A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is formula_71 where formula_11 is the impulse response. It is not possible in general to determine causality from the two-sided Laplace transform. However, when working in the time domain, one normally uses the one-sided Laplace transform which requires causality. Stability. A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying formula_72 leads to an output satisfying formula_73 (that is, a finite maximum absolute value of formula_0 implies a finite maximum absolute value of formula_1), then the system is stable. A necessary and sufficient condition is that formula_11, the impulse response, is in L1 (has a finite L1 norm): formula_74 In the frequency domain, the region of convergence must contain the imaginary axis formula_75. As an example, the ideal low-pass filter with impulse response equal to a sinc function is not BIBO stable, because the sinc function does not have a finite L1 norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for formula_76 and equal to a sinusoid at the cut-off frequency for formula_77, then the output will be unbounded for all times other than the zero crossings. Discrete-time systems. Almost everything in continuous-time systems has a counterpart in discrete-time systems. Discrete-time systems from continuous-time systems. In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to. In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. If formula_0 is a CT signal, then the sampling circuit used before an analog-to-digital converter will transform it to a DT signal: formula_78 where "T" is the sampling period. Before sampling, the input signal is normally run through a so-called Nyquist filter which removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component "above" the folding frequency (or Nyquist frequency) is aliased to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency. Impulse response and convolution. Let formula_79 represent the sequence formula_80 And let the shorter notation formula_81 represent formula_82 A discrete system transforms an input sequence, formula_81 into an output sequence, formula_83 In general, every element of the output can depend on every element of the input. 
Representing the transformation operator by formula_84, we can write: formula_85 Note that unless the transform itself changes with "n", the output sequence is just constant, and the system is uninteresting. (Thus the subscript, "n".) In a typical system, "y"["n"] depends most heavily on the elements of "x" whose indices are near "n". For the special case of the Kronecker delta function, formula_86 the output sequence is the impulse response: formula_87 For a linear system, formula_84 must satisfy: And the time-invariance requirement is: In such a system, the impulse response, formula_88, characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity: formula_89 which expresses formula_81 in terms of a sum of weighted delta functions. Therefore: formula_90 where we have invoked Eq.4 for the case formula_91 and formula_92. And because of Eq.5, we may write: formula_93 Therefore: which is the familiar discrete convolution formula. The operator formula_95 can therefore be interpreted as proportional to a weighted average of the function "x"["k"]. The weighting function is "h"[−"k"], simply shifted by amount "n". As "n" changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at "n"=0 is a "time" reversed copy of the unshifted weighting function. When "h"["k"] is zero for all negative "k", the system is said to be causal. Exponentials as eigenfunctions. An eigenfunction is a function for which the output of the operator is the same function, scaled by some constant. In symbols, formula_96 where "f" is the eigenfunction and formula_45 is the eigenvalue, a constant. The exponential functions formula_97, where formula_98, are eigenfunctions of a linear, time-invariant operator. formula_99 is the sampling interval, and formula_100. A simple proof illustrates this concept. Suppose the input is formula_101. The output of the system with impulse response formula_102 is then formula_103 which is equivalent to the following by the commutative property of convolution formula_104 where formula_105 is dependent only on the parameter "z". So formula_106 is an eigenfunction of an LTI system because the system response is the same as the input times the constant formula_107. Z and discrete-time Fourier transforms. The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The Z transform formula_108 is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids; i.e. exponentials of the form formula_109, where formula_66. These can also be written as formula_106 with formula_110. The discrete-time Fourier transform (DTFT) formula_111 gives the eigenvalues of pure sinusoids. Both of formula_107 and formula_112 are called the "system function", "system response", or "transfer function". Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for t&lt;0. The discrete-time Fourier transform Fourier series may be used for analyzing periodic signals. Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. 
That is, formula_113 Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior. Important system properties. The input-output characteristics of discrete-time LTI system are completely described by its impulse response formula_102. Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function "is" stable. Causality. A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input. A necessary and sufficient condition for causality is formula_114 where formula_102 is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique. When a region of convergence is specified, then causality can be determined. Stability. A system is bounded input, bounded output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if formula_115 implies that formula_116 (that is, if bounded input implies bounded output, in the sense that the maximum absolute values of formula_117 and formula_94 are finite), then the system is stable. A necessary and sufficient condition is that formula_102, the impulse response, satisfies formula_118 In the frequency domain, the region of convergence must contain the unit circle (i.e., the locus satisfying formula_119 for complex "z"). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
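The discrete-time relations above lend themselves to a quick numerical check. The sketch below is not part of the original article; it is a minimal Python illustration (the impulse response, sequence lengths and parameter values are all invented for the example) of the convolution sum, the BIBO criterion on the l1 norm of the impulse response, and the eigenfunction property of z to the power n.

```python
# Minimal illustration (not from the article): discrete convolution, the BIBO
# criterion on the l1 norm of h[n], and the eigenfunction property of z**n.
# The impulse response and all numbers below are invented example values.

def convolve(x, h):
    """y[n] = sum_k x[k] * h[n - k] for finite sequences (zero elsewhere)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

h = [0.5 ** n for n in range(30)]      # causal, absolutely summable
l1_norm = sum(abs(v) for v in h)       # finite (about 2), so the system is BIBO stable
print("||h||_1 =", l1_norm)

step = [1.0] * 50                      # a bounded input
y = convolve(step, h)
print("max |y[n]| =", max(abs(v) for v in y), "<= ||h||_1 * max|x| =", l1_norm)

z = 0.9                                # eigenfunction check: x[n] = z**n
H_z = sum(h[m] * z ** (-m) for m in range(len(h)))
n0 = 100                               # an index far from truncation effects
y_n0 = sum(h[m] * z ** (n0 - m) for m in range(len(h)))
print(y_n0, "equals", H_z * z ** n0)   # output = H(z) * input at every such n
```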
[ { "math_id": 0, "text": "x(t)" }, { "math_id": 1, "text": "y(t)" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "ax(t)" }, { "math_id": 4, "text": "ay(t)" }, { "math_id": 5, "text": "x'(t)" }, { "math_id": 6, "text": "y'(t)" }, { "math_id": 7, "text": "x(t)+x'(t)" }, { "math_id": 8, "text": "y(t)+y'(t)" }, { "math_id": 9, "text": "x(t-T)" }, { "math_id": 10, "text": "y(t-T)" }, { "math_id": 11, "text": "h(t)" }, { "math_id": 12, "text": "y_{i} = x_{i} * h_{i}" }, { "math_id": 13, "text": "A_s e^{st}" }, { "math_id": 14, "text": "A_s" }, { "math_id": 15, "text": "s" }, { "math_id": 16, "text": "B_s e^{st}" }, { "math_id": 17, "text": "B_s" }, { "math_id": 18, "text": "B_s/A_s" }, { "math_id": 19, "text": " h(t)" }, { "math_id": 20, "text": "x(\\tau) = \\delta(\\tau)" }, { "math_id": 21, "text": " y(t) " }, { "math_id": 22, "text": "x(\\tau)" }, { "math_id": 23, "text": " h(-\\tau)" }, { "math_id": 24, "text": " t" }, { "math_id": 25, "text": " h(\\tau)" }, { "math_id": 26, "text": " \\tau" }, { "math_id": 27, "text": " y(t)" }, { "math_id": 28, "text": " x" }, { "math_id": 29, "text": " \\{x(u-\\tau);\\ u\\}" }, { "math_id": 30, "text": " x(u-\\tau)" }, { "math_id": 31, "text": " u" }, { "math_id": 32, "text": " \\{x\\}" }, { "math_id": 33, "text": " \\{x(u);\\ u\\}" }, { "math_id": 34, "text": " \\{x\\}," }, { "math_id": 35, "text": "\\{y\\}" }, { "math_id": 36, "text": "y(t) \\mathrel{\\stackrel{\\text{def}}{=}} O_t\\{x\\}," }, { "math_id": 37, "text": " O_t" }, { "math_id": 38, "text": " O" }, { "math_id": 39, "text": " h(t) \\mathrel{\\stackrel{\\text{def}}{=}} O_t\\{\\delta(u);\\ u\\}." }, { "math_id": 40, "text": "\n\\begin{align}\n (x * h)(t) &= \\int_{-\\infty}^\\infty x(\\tau)\\cdot h(t - \\tau) \\,\\mathrm{d}\\tau \\\\[4pt]\n &= \\int_{-\\infty}^\\infty x(\\tau)\\cdot O_t\\{\\delta(u-\\tau);\\ u\\} \\, \\mathrm{d}\\tau,\\,\n\\end{align}\n" }, { "math_id": 41, "text": " c_\\tau = x(\\tau)" }, { "math_id": 42, "text": " x_\\tau(u) = \\delta(u-\\tau)." 
}, { "math_id": 43, "text": "\n\\begin{align}\n (x * h)(t) &= O_t\\left\\{\\int_{-\\infty}^\\infty x(\\tau)\\cdot \\delta(u-\\tau) \\, \\mathrm{d}\\tau;\\ u \\right\\}\\\\[4pt]\n &= O_t\\left\\{x(u);\\ u \\right\\}\\\\\n &\\mathrel{\\stackrel{\\text{def}}{=}} y(t).\\,\n\\end{align}\n" }, { "math_id": 44, "text": "\\mathcal{H}f = \\lambda f," }, { "math_id": 45, "text": "\\lambda" }, { "math_id": 46, "text": "A e^{s t}" }, { "math_id": 47, "text": "A, s \\in \\mathbb{C}" }, { "math_id": 48, "text": "x(t) = A e^{s t}" }, { "math_id": 49, "text": "\\int_{-\\infty}^\\infty h(t - \\tau) A e^{s \\tau}\\, \\mathrm{d} \\tau" }, { "math_id": 50, "text": "\\begin{align}\n \\overbrace{\\int_{-\\infty}^\\infty h(\\tau) \\, A e^{s (t - \\tau)} \\, \\mathrm{d} \\tau}^{\\mathcal{H} f}\n &= \\int_{-\\infty}^\\infty h(\\tau) \\, A e^{s t} e^{-s \\tau} \\, \\mathrm{d} \\tau \\\\[4pt]\n &= A e^{s t} \\int_{-\\infty}^{\\infty} h(\\tau) \\, e^{-s \\tau} \\, \\mathrm{d} \\tau \\\\[4pt]\n &= \\overbrace{\\underbrace{A e^{s t}}_{\\text{Input}}}^{f} \\overbrace{\\underbrace{H(s)}_{\\text{Scalar}}}^{\\lambda}, \\\\\n\\end{align}" }, { "math_id": 51, "text": "H(s) \\mathrel{\\stackrel{\\text{def}}{=}} \\int_{-\\infty}^\\infty h(t) e^{-s t} \\, \\mathrm{d} t" }, { "math_id": 52, "text": "A e^{st}" }, { "math_id": 53, "text": "H(s)" }, { "math_id": 54, "text": "v(t) = e^{i \\omega t}" }, { "math_id": 55, "text": "v_a(t) = e^{i \\omega (t+a)}" }, { "math_id": 56, "text": "H[v_a](t) = e^{i\\omega a} H[v](t)" }, { "math_id": 57, "text": "e^{i \\omega a}" }, { "math_id": 58, "text": "H[v_a](t) = H[v](t+a)" }, { "math_id": 59, "text": "H" }, { "math_id": 60, "text": "H[v](t+a) = e^{i \\omega a} H[v](t)" }, { "math_id": 61, "text": "t = 0" }, { "math_id": 62, "text": "H[v](\\tau) = e^{i\\omega \\tau} H[v](0)" }, { "math_id": 63, "text": "e^{i \\omega \\tau}" }, { "math_id": 64, "text": "H(s) \\mathrel{\\stackrel{\\text{def}}{=}} \\mathcal{L}\\{h(t)\\} \\mathrel{\\stackrel{\\text{def}}{=}} \\int_0^\\infty h(t) e^{-s t} \\, \\mathrm{d} t" }, { "math_id": 65, "text": "e^{j \\omega t}" }, { "math_id": 66, "text": "\\omega \\in \\mathbb{R}" }, { "math_id": 67, "text": "j \\mathrel{\\stackrel{\\text{def}}{=}} \\sqrt{-1}" }, { "math_id": 68, "text": "H(j \\omega) = \\mathcal{F}\\{h(t)\\}" }, { "math_id": 69, "text": "H(j\\omega)" }, { "math_id": 70, "text": "y(t) = (h*x)(t) \\mathrel{\\stackrel{\\text{def}}{=}} \\int_{-\\infty}^\\infty h(t - \\tau) x(\\tau) \\, \\mathrm{d} \\tau \\mathrel{\\stackrel{\\text{def}}{=}} \\mathcal{L}^{-1}\\{H(s)X(s)\\}." }, { "math_id": 71, "text": "h(t) = 0 \\quad \\forall t < 0," }, { "math_id": 72, "text": "\\ \\|x(t)\\|_{\\infty} < \\infty" }, { "math_id": 73, "text": "\\ \\|y(t)\\|_{\\infty} < \\infty" }, { "math_id": 74, "text": "\\|h(t)\\|_1 = \\int_{-\\infty}^\\infty |h(t)| \\, \\mathrm{d}t < \\infty." }, { "math_id": 75, "text": "s = j\\omega" }, { "math_id": 76, "text": "t < 0" }, { "math_id": 77, "text": "t > 0" }, { "math_id": 78, "text": "x_n \\mathrel{\\stackrel{\\text{def}}{=}} x(nT) \\qquad \\forall \\, n \\in \\mathbb{Z}," }, { "math_id": 79, "text": "\\{x[m - k];\\ m\\}" }, { "math_id": 80, "text": "\\{x[m - k];\\text{ for all integer values of } m\\}." }, { "math_id": 81, "text": "\\{x\\}" }, { "math_id": 82, "text": "\\{x[m];\\ m\\}." }, { "math_id": 83, "text": "\\{y\\}." }, { "math_id": 84, "text": "O" }, { "math_id": 85, "text": "y[n] \\mathrel{\\stackrel{\\text{def}}{=}} O_n\\{x\\}." 
}, { "math_id": 86, "text": "x[m] = \\delta[m]," }, { "math_id": 87, "text": "h[n] \\mathrel{\\stackrel{\\text{def}}{=}} O_n\\{\\delta[m];\\ m\\}." }, { "math_id": 88, "text": "\\{h\\}" }, { "math_id": 89, "text": "x[m] \\equiv \\sum_{k=-\\infty}^{\\infty} x[k] \\cdot \\delta[m - k]," }, { "math_id": 90, "text": "\\begin{align}\n y[n] = O_n\\{x\\}\n &= O_n\\left\\{\\sum_{k=-\\infty}^\\infty x[k]\\cdot \\delta[m-k];\\ m \\right\\}\\\\\n &= \\sum_{k=-\\infty}^\\infty x[k]\\cdot O_n\\{\\delta[m-k];\\ m\\},\\,\n\\end{align}" }, { "math_id": 91, "text": "c_k = x[k]" }, { "math_id": 92, "text": "x_k[m] = \\delta[m-k]" }, { "math_id": 93, "text": "\\begin{align}\n O_n\\{\\delta[m-k];\\ m\\} &\\mathrel{\\stackrel{\\quad}{=}} O_{n-k}\\{\\delta[m];\\ m\\} \\\\\n &\\mathrel{\\stackrel{\\text{def}}{=}} h[n-k].\n\\end{align}" }, { "math_id": 94, "text": "y[n]" }, { "math_id": 95, "text": "O_n" }, { "math_id": 96, "text": "\\mathcal{H}f = \\lambda f ," }, { "math_id": 97, "text": "z^n = e^{sT n}" }, { "math_id": 98, "text": "n \\in \\mathbb{Z}" }, { "math_id": 99, "text": "T \\in \\mathbb{R}" }, { "math_id": 100, "text": "z = e^{sT}, \\ z,s \\in \\mathbb{C}" }, { "math_id": 101, "text": "x[n] = z^n" }, { "math_id": 102, "text": "h[n]" }, { "math_id": 103, "text": "\\sum_{m=-\\infty}^{\\infty} h[n-m] \\, z^m" }, { "math_id": 104, "text": "\\sum_{m=-\\infty}^{\\infty} h[m] \\, z^{(n - m)} = z^n \\sum_{m=-\\infty}^{\\infty} h[m] \\, z^{-m} = z^n H(z)" }, { "math_id": 105, "text": "H(z) \\mathrel{\\stackrel{\\text{def}}{=}} \\sum_{m=-\\infty}^\\infty h[m] z^{-m}" }, { "math_id": 106, "text": "z^n" }, { "math_id": 107, "text": "H(z)" }, { "math_id": 108, "text": "H(z) = \\mathcal{Z}\\{h[n]\\} = \\sum_{n=-\\infty}^\\infty h[n] z^{-n}" }, { "math_id": 109, "text": "e^{j \\omega n}" }, { "math_id": 110, "text": "z = e^{j \\omega}" }, { "math_id": 111, "text": "H(e^{j \\omega}) = \\mathcal{F}\\{h[n]\\}" }, { "math_id": 112, "text": "H(e^{j\\omega})" }, { "math_id": 113, "text": "y[n] = (h*x)[n] = \\sum_{m=-\\infty}^\\infty h[n-m] x[m] = \\mathcal{Z}^{-1}\\{H(z)X(z)\\}." }, { "math_id": 114, "text": "h[n] = 0 \\ \\forall n < 0," }, { "math_id": 115, "text": "\\|x[n]\\|_{\\infty} < \\infty" }, { "math_id": 116, "text": "\\|y[n]\\|_{\\infty} < \\infty" }, { "math_id": 117, "text": "x[n]" }, { "math_id": 118, "text": "\\|h[n]\\|_1 \\mathrel{\\stackrel{\\text{def}}{=}} \\sum_{n = -\\infty}^\\infty |h[n]| < \\infty." }, { "math_id": 119, "text": "|z| = 1" } ]
https://en.wikipedia.org/wiki?curid=1383899
1383986
Absorption (chemistry)
Chemical process Absorption is a physical or chemical phenomenon or a process in which atoms, molecules or ions enter the liquid or solid bulk phase of a material. This is a different process from adsorption, since molecules undergoing absorption are taken up by the volume, not by the surface (as in the case for adsorption). A more common definition is that "Absorption is a chemical or physical phenomenon in which the molecules, atoms and ions of the substance getting absorbed enter into the bulk phase (gas, liquid or solid) of the material in which it is taken up." A more general term is "sorption", which covers absorption, adsorption, and ion exchange. Put simply, absorption is the condition in which one substance takes in another substance. In many processes important in technology, chemical absorption is used in place of the physical process, e.g., absorption of carbon dioxide by sodium hydroxide – such acid-base processes do not follow the Nernst partition law (see: solubility). For some examples of this effect, see liquid-liquid extraction. It is possible to extract a solute from one liquid phase to another without a chemical reaction. Examples of such solutes are noble gases and osmium tetroxide. The process of absorption means that a substance captures and transforms energy. An absorbent distributes the material it captures throughout its whole volume, whereas an adsorbent distributes it only over its surface. The process by which a gas or liquid penetrates into the body of an absorbent is commonly known as absorption. <templatestyles src="Template:Quote_box/styles.css" /> IUPAC definition absorption: 1) The process of one material (absorbate) being retained by another (absorbent); this may be the physical solution of a gas, liquid, or solid in a liquid, attachment of molecules of a gas, vapour, liquid, or dissolved substance to a solid surface by physical forces, etc. In spectrophotometry, absorption of light at characteristic wavelengths or bands of wavelengths is used to identify the chemical nature of molecules, atoms or ions and to measure the concentrations of these species. 2) A phenomenon in which radiation transfers to matter which it traverses some of or all its energy. Equation. If absorption is a physical process not accompanied by any other physical or chemical process, it usually follows the Nernst distribution law: "the ratio of concentrations of some solute species in two bulk phases, when they are in equilibrium and in contact, is constant for a given solute and bulk phases": formula_0 The value of the constant KN depends on temperature and is called the "partition coefficient". This equation is valid if concentrations are not too large and if the species "x" does not change its form in either of the two phases "1" or "2". If such a molecule undergoes association or dissociation then this equation still describes the equilibrium between "x" in both phases, but only for the same form – concentrations of all remaining forms must be calculated by taking into account all the other equilibria. In the case of gas absorption, one may calculate its concentration by using, e.g., the Ideal gas law, "c = p/RT". Alternatively, one may use partial pressures instead of concentrations. Types of absorption. Absorption is a process that may be chemical (reactive) or physical (non-reactive). Chemical absorption. Chemical absorption or reactive absorption is a chemical reaction between the absorbed and the absorbing substances. Sometimes it combines with physical absorption.
This type of absorption depends upon the stoichiometry of the reaction and the concentration of its reactants. Reactive absorption (RA) may be carried out in different types of units, with a wide spectrum of phase flow types and interactions. In most cases, RA is carried out in plate or packed columns. Physical absorption. Water in a solid. Hydrophilic solids, which include many solids of biological origin, can readily absorb water. Polar interactions between water and the molecules of the solid favor partition of the water into the solid, which can allow significant absorption of water vapor even in relatively low humidity. Moisture regain. A fiber (or other hydrophilic material) that has been exposed to the atmosphere will usually contain some water even if it feels dry. The water can be driven off by heating in an oven, leading to a measurable decrease in weight, which will gradually be regained if the fiber is returned to a 'normal' atmosphere. This effect is crucial in the textile industry, where the proportion of a material's weight made up by water is called the "moisture regain". References. <templatestyles src="Reflist/styles.css" />
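As a concrete reading of the Nernst distribution law above, the following sketch is not from the article; it is a small Python example (the amounts, volumes and the value of the partition coefficient are made up) that solves the mole balance for a solute split between two phases, together with a moisture-regain calculation for the textile case.

```python
# Hypothetical numerical sketch (not from the article): partitioning a solute
# between two bulk phases under the Nernst distribution law c1/c2 = K_N, and a
# moisture-regain calculation.  All amounts, volumes and K_N are made-up values.

def nernst_partition(n_total, v1, v2, k_n):
    """Equilibrium concentrations (c1, c2) from the mole balance
    n_total = c1*v1 + c2*v2 together with c1 = k_n * c2."""
    c2 = n_total / (k_n * v1 + v2)
    return k_n * c2, c2

c1, c2 = nernst_partition(n_total=0.10, v1=1.0, v2=2.0, k_n=4.0)   # mol, L, L
print(f"c1 = {c1:.4f} mol/L, c2 = {c2:.4f} mol/L, ratio = {c1 / c2:.1f}")

def moisture_regain(conditioned_mass, oven_dry_mass):
    """Moisture regain as a percentage of the oven-dry mass."""
    return 100.0 * (conditioned_mass - oven_dry_mass) / oven_dry_mass

print(f"moisture regain = {moisture_regain(108.5, 100.0):.1f} %")
```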
[ { "math_id": 0, "text": "\\frac{[x]_{1}}{[x]_{2}} = \\text{constant} = K_{N(x,12)}" } ]
https://en.wikipedia.org/wiki?curid=1383986
138404
Chain complex
Tool in homological algebra In mathematics, a chain complex is an algebraic structure that consists of a sequence of abelian groups (or modules) and a sequence of homomorphisms between consecutive groups such that the image of each homomorphism is included in the kernel of the next. Associated to a chain complex is its homology, which describes how the images are included in the kernels. A cochain complex is similar to a chain complex, except that its homomorphisms are in the opposite direction. The homology of a cochain complex is called its cohomology. In algebraic topology, the singular chain complex of a topological space X is constructed using continuous maps from a simplex to X, and the homomorphisms of the chain complex capture how these maps restrict to the boundary of the simplex. The homology of this chain complex is called the singular homology of X, and is a commonly used invariant of a topological space. Chain complexes are studied in homological algebra, but are used in several areas of mathematics, including abstract algebra, Galois theory, differential geometry and algebraic geometry. They can be defined more generally in abelian categories. Definitions. A chain complex formula_0 is a sequence of abelian groups or modules ..., "A"0, "A"1, "A"2, "A"3, "A"4, ... connected by homomorphisms (called boundary operators or differentials) "d""n" : "A""n" → "A""n"−1, such that the composition of any two consecutive maps is the zero map. Explicitly, the differentials satisfy "d""n" ∘ "d""n"+1 = 0, or with indices suppressed, "d"2 = 0. The complex may be written out as follows. formula_1 The cochain complex formula_2 is the dual notion to a chain complex. It consists of a sequence of abelian groups or modules ..., "A"0, "A"1, "A"2, "A"3, "A"4, ... connected by homomorphisms "d""n" : "A""n" → "A""n"+1 satisfying "d""n"+1 ∘ "d""n" = 0. The cochain complex may be written out in a similar fashion to the chain complex. formula_3 The index "n" in either "A""n" or "A""n" is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension. All the concepts and definitions for chain complexes apply to cochain complexes, except that they will follow this different convention for dimension, and often terms will be given the prefix "co-". In this article, definitions will be given for chain complexes when the distinction is not required. A bounded chain complex is one in which almost all the "A""n" are 0; that is, a finite complex extended to the left and right by 0. An example is the chain complex defining the simplicial homology of a finite simplicial complex. A chain complex is bounded above if all modules above some fixed degree "N" are 0, and is bounded below if all modules below some fixed degree are 0. Clearly, a complex is bounded both above and below if and only if the complex is bounded. The elements of the individual groups of a (co)chain complex are called (co)chains. The elements in the kernel of "d" are called (co)cycles (or closed elements), and the elements in the image of "d" are called (co)boundaries (or exact elements). Right from the definition of the differential, all boundaries are cycles. The "n"-th (co)homology group "H""n" ("H""n") is the group of (co)cycles modulo (co)boundaries in degree "n", that is, formula_4 Exact sequences. An exact sequence (or exact complex) is a chain complex whose homology groups are all zero. 
This means all closed elements in the complex are exact. A short exact sequence is a bounded exact sequence in which only the groups "A""k", "A""k"+1, "A""k"+2 may be nonzero. For example, the following chain complex is a short exact sequence. formula_5 In the middle group, the closed elements are the elements pZ; these are clearly the exact elements in this group. Chain maps. A chain map "f" between two chain complexes formula_6 and formula_7 is a sequence formula_8 of homomorphisms formula_9 for each "n" that commutes with the boundary operators on the two chain complexes, so formula_10. This is written out in the following commutative diagram. A chain map sends cycles to cycles and boundaries to boundaries, and thus induces a map on homology formula_11. A continuous map "f" between topological spaces "X" and "Y" induces a chain map between the singular chain complexes of "X" and "Y", and hence induces a map "f"* between the singular homology of "X" and "Y" as well. When "X" and "Y" are both equal to the "n"-sphere, the map induced on homology defines the degree of the map "f". The concept of chain map reduces to the one of boundary through the construction of the cone of a chain map. Chain homotopy. A chain homotopy offers a way to relate two chain maps that induce the same map on homology groups, even though the maps may be different. Given two chain complexes "A" and "B", and two chain maps "f", "g" : "A" → "B", a chain homotopy is a sequence of homomorphisms "h""n" : "A""n" → "B""n"+1 such that "hd""A" + "d""B""h" = "f" − "g". The maps may be written out in a diagram as follows, but this diagram is not commutative. The map "hd""A" + "d""B""h" is easily verified to induce the zero map on homology, for any "h". It immediately follows that "f" and "g" induce the same map on homology. One says "f" and "g" are chain homotopic (or simply homotopic), and this property defines an equivalence relation between chain maps. Let "X" and "Y" be topological spaces. In the case of singular homology, a homotopy between continuous maps "f", "g" : "X" → "Y" induces a chain homotopy between the chain maps corresponding to "f" and "g". This shows that two homotopic maps induce the same map on singular homology. The name "chain homotopy" is motivated by this example. Examples. Singular homology. Let "X" be a topological space. Define "C""n"("X") for natural "n" to be the free abelian group formally generated by singular n-simplices in "X", and define the boundary map formula_12 to be formula_13 where the hat denotes the omission of a vertex. That is, the boundary of a singular simplex is the alternating sum of restrictions to its faces. It can be shown that ∂2 = 0, so formula_14 is a chain complex; the singular homology formula_15 is the homology of this complex. Singular homology is a useful invariant of topological spaces up to homotopy equivalence. The degree zero homology group is a free abelian group on the path-components of "X". de Rham cohomology. The differential "k"-forms on any smooth manifold "M" form a real vector space called Ω"k"("M") under addition. The exterior derivative "d" maps Ω"k"("M") to Ω"k"+1("M"), and "d"2 = 0 follows essentially from symmetry of second derivatives, so the vector spaces of "k"-forms along with the exterior derivative are a cochain complex. formula_16 The cohomology of this complex is called the de Rham cohomology of "M". Locally constant functions are designated with its isomorphism formula_17 with c the count of mutually disconnected components of "M". 
This way the complex was extended to leave the complex exact at zero-form level using the subset operator. Smooth maps between manifolds induce chain maps, and smooth homotopies between maps induce chain homotopies. Category of chain complexes. Chain complexes of "K"-modules with chain maps form a category Ch"K", where "K" is a commutative ring. If "V" = "V"formula_18 and "W" = "W"formula_18 are chain complexes, their tensor product formula_19 is a chain complex with degree "n" elements given by formula_20 and differential given by formula_21 where "a" and "b" are any two homogeneous vectors in "V" and "W" respectively, and formula_22 denotes the degree of "a". This tensor product makes the category Ch"K" into a symmetric monoidal category. The identity object with respect to this monoidal product is the base ring "K" viewed as a chain complex in degree 0. The braiding is given on simple tensors of homogeneous elements by formula_23 The sign is necessary for the braiding to be a chain map. Moreover, the category of chain complexes of "K"-modules also has internal Hom: given chain complexes "V" and "W", the internal Hom of "V" and "W", denoted Hom("V","W"), is the chain complex with degree "n" elements given by formula_24 and differential given by formula_25. We have a natural isomorphism formula_26 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
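For chain complexes of finite-dimensional vector spaces, the homology can be computed from the ranks of the boundary maps. The sketch below is not part of the article; it is a minimal Python/NumPy example (the triangle-boundary complex and the variable names are chosen purely for illustration) computing dim H_n = dim ker d_n − rank d_{n+1}.

```python
# Illustrative sketch (not from the article): Betti numbers of the simplicial
# chain complex of the boundary of a triangle (3 vertices, 3 edges), which has
# the homology of a circle.  Coefficients are rational, so ranks suffice.
import numpy as np

# d1 : C1 -> C0; columns are the edges [v0,v1], [v1,v2], [v0,v2], rows the vertices.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]], dtype=float)

rank_d1 = np.linalg.matrix_rank(d1)
dim_C0, dim_C1 = 3, 3
rank_d0 = 0   # d0 : C0 -> 0 is the zero map
rank_d2 = 0   # C2 = 0, so d2 is the zero map into C1

# dim H_n = dim ker(d_n) - rank(d_{n+1}) = (dim C_n - rank d_n) - rank d_{n+1}
dim_H0 = (dim_C0 - rank_d0) - rank_d1
dim_H1 = (dim_C1 - rank_d1) - rank_d2
print("Betti numbers:", dim_H0, dim_H1)   # expected 1, 1 (homology of a circle)
```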
[ { "math_id": 0, "text": "(A_\\bullet, d_\\bullet)" }, { "math_id": 1, "text": "\n\\cdots\n\\xleftarrow{d_0} A_0\n\\xleftarrow{d_1} A_1\n\\xleftarrow{d_2} A_2\n\\xleftarrow{d_3} A_3\n\\xleftarrow{d_4} A_4\n\\xleftarrow{d_5}\n\\cdots\n" }, { "math_id": 2, "text": "(A^\\bullet, d^\\bullet)" }, { "math_id": 3, "text": "\n\\cdots\n\\xrightarrow{d^{-1}}\nA^0 \\xrightarrow{d^0}\nA^1 \\xrightarrow{d^1}\nA^2 \\xrightarrow{d^2}\nA^3 \\xrightarrow{d^3}\nA^4 \\xrightarrow{d^4}\n\\cdots\n" }, { "math_id": 4, "text": "H_n = \\ker d_{n}/\\mbox{im } d_{n+1} \\quad \\left(H^n = \\ker d^{n}/\\mbox{im } d^{n-1} \\right)" }, { "math_id": 5, "text": "\n\\cdots\n\\xrightarrow{} \\; 0 \\;\n\\xrightarrow{} \\; \\mathbf{Z} \\;\n\\xrightarrow{\\times p} \\; \\mathbf{Z}\n\\twoheadrightarrow \\mathbf{Z}/p\\mathbf{Z} \\;\n\\xrightarrow{} \\; 0 \\;\n\\xrightarrow{}\n\\cdots\n" }, { "math_id": 6, "text": "(A_\\bullet, d_{A,\\bullet})" }, { "math_id": 7, "text": "(B_\\bullet, d_{B,\\bullet})" }, { "math_id": 8, "text": "f_\\bullet" }, { "math_id": 9, "text": "f_n : A_n \\rightarrow B_n" }, { "math_id": 10, "text": " d_{B,n} \\circ f_n = f_{n-1} \\circ d_{A,n}" }, { "math_id": 11, "text": "(f_\\bullet)_*:H_\\bullet(A_\\bullet, d_{A,\\bullet}) \\rightarrow H_\\bullet(B_\\bullet, d_{B,\\bullet})" }, { "math_id": 12, "text": "\\partial_n: C_n(X) \\to C_{n-1}(X)" }, { "math_id": 13, "text": "\\partial_n : \\, (\\sigma: [v_0,\\ldots,v_n] \\to X) \\mapsto (\\sum_{i=0}^n (-1)^i \\sigma: [v_0,\\ldots, \\hat v_i, \\ldots, v_n] \\to X)" }, { "math_id": 14, "text": "(C_\\bullet, \\partial_\\bullet)" }, { "math_id": 15, "text": "H_\\bullet(X)" }, { "math_id": 16, "text": " 0\\stackrel{\\subset}{\\to}\\ {\\Re^{c}} \\stackrel{\\subset}{\\to}\\ {\\Omega^0(M)} \\stackrel{d}{\\to}\\ {\\Omega^1(M)} \\stackrel{d}{\\to}\\ {\\Omega^2(M)} \\stackrel{d}{\\to}\\ \\Omega^3(M) \\to \\cdots" }, { "math_id": 17, "text": " \\Re^c" }, { "math_id": 18, "text": "{}_*" }, { "math_id": 19, "text": " V \\otimes W " }, { "math_id": 20, "text": " (V \\otimes W)_n = \\bigoplus_{\\{i,j|i+j=n\\}} V_i \\otimes W_j " }, { "math_id": 21, "text": " \\partial (a \\otimes b) = \\partial a \\otimes b + (-1)^{\\left|a\\right|} a \\otimes \\partial b " }, { "math_id": 22, "text": " \\left|a\\right| " }, { "math_id": 23, "text": " a \\otimes b \\mapsto (-1)^{\\left|a\\right|\\left|b\\right|} b \\otimes a " }, { "math_id": 24, "text": "\\Pi_{i}\\text{Hom}_K (V_i,W_{i+n})" }, { "math_id": 25, "text": " (\\partial f)(v) = \\partial(f(v)) - (-1)^{\\left|f\\right|} f(\\partial(v)) " }, { "math_id": 26, "text": "\\text{Hom}(A\\otimes B, C) \\cong \\text{Hom}(A,\\text{Hom}(B,C))" } ]
https://en.wikipedia.org/wiki?curid=138404
13844097
Packing dimension
Dimension of a subset of a metric space In mathematics, the packing dimension is one of a number of concepts that can be used to define the dimension of a subset of a metric space. Packing dimension is in some sense dual to Hausdorff dimension, since packing dimension is constructed by "packing" small open balls inside the given subset, whereas Hausdorff dimension is constructed by covering the given subset by such small open balls. The packing dimension was introduced by C. Tricot Jr. in 1982. Definitions. Let ("X", "d") be a metric space with a subset "S" ⊆ "X" and let "s" ≥ 0 be a real number. The "s"-dimensional packing pre-measure of "S" is defined to be formula_0 Unfortunately, this is just a pre-measure and not a true measure on subsets of "X", as can be seen by considering dense, countable subsets. However, the pre-measure leads to a "bona fide" measure: the "s"-dimensional packing measure of "S" is defined to be formula_1 i.e., the packing measure of "S" is the infimum of the packing pre-measures of countable covers of "S". Having done this, the packing dimension dimP("S") of "S" is defined analogously to the Hausdorff dimension: formula_2 An example. The following example is the simplest situation where Hausdorff and packing dimensions may differ. Fix a sequence formula_3 such that formula_4 and formula_5. Define inductively a nested sequence formula_6 of compact subsets of the real line as follows: Let formula_7. For each connected component of formula_8 (which will necessarily be an interval of length formula_9), delete the middle interval of length formula_10, obtaining two intervals of length formula_11, which will be taken as connected components of formula_12. Next, define formula_13. Then formula_14 is topologically a Cantor set (i.e., a compact totally disconnected perfect space). For example, formula_14 will be the usual middle-thirds Cantor set if formula_15. It is possible to show that the Hausdorff and the packing dimensions of the set formula_14 are given respectively by: formula_16 It follows easily that given numbers formula_17, one can choose a sequence formula_3 as above such that the associated (topological) Cantor set formula_14 has Hausdorff dimension formula_18 and packing dimension formula_19. Generalizations. One can consider dimension functions more general than "diameter to the "s"": for any function "h" : [0, +∞) → [0, +∞], let the packing pre-measure of "S" with dimension function "h" be given by formula_20 and define the packing measure of "S" with dimension function "h" by formula_21 The function "h" is said to be an exact (packing) dimension function for "S" if "P""h"("S") is both finite and strictly positive. Properties. The packing dimension of a set coincides with its modified upper box dimension: formula_22 Note, however, that the packing dimension is "not" equal to the (unmodified) box dimension. For example, the set of rationals Q has box dimension one and packing dimension zero.
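The example above can be explored numerically. The following sketch is not from the article; it is a rough Python illustration (the choice of contraction ratios and block lengths is arbitrary) that evaluates the quantities n log 2 / (−log a_n), whose liminf and limsup give the Hausdorff and packing dimensions of the set K. The printed tail minimum and maximum are only crude numerical stand-ins for liminf and limsup.

```python
# Illustrative sketch (not from the article): evaluating s_n = n*log 2 / (-log a_n)
# for Cantor-like sets built from a sequence (a_n) with a_0 = 1, a_{n+1} < a_n / 2.
import math

def dims(ratios):
    """ratios[n] = a_{n+1} / a_n (each must be < 1/2).  Returns the sequence
    s_n = n*log 2 / (-log a_n); its liminf/limsup are dim_H(K)/dim_P(K)."""
    a, out = 1.0, []
    for n, r in enumerate(ratios, start=1):
        a *= r
        out.append(n * math.log(2) / (-math.log(a)))
    return out

# Middle-thirds Cantor set: a_n = 3**-n, so s_n is constant and equals log 2 / log 3.
print(dims([1 / 3] * 5)[-1], "vs", math.log(2) / math.log(3))

# Alternating blocks of contraction ratios 1/4 and 1/16 (block k has length 2**k),
# which makes s_n oscillate, so the liminf and limsup differ.
ratios = []
for k in range(12):
    ratios += [1 / 4 if k % 2 == 0 else 1 / 16] * (2 ** k)
s = dims(ratios)
tail = s[len(s) // 2:]
print("approx dim_H (liminf) ~", min(tail), "  approx dim_P (limsup) ~", max(tail))
```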
[ { "math_id": 0, "text": "P_0^s (S) = \\limsup_{\\delta \\downarrow 0}\\left\\{ \\left. \\sum_{i \\in I} \\mathrm{diam} (B_i)^s \\right| \\begin{matrix} \\{ B_i \\}_{i \\in I} \\text{ is a countable collection} \\\\ \\text{of pairwise disjoint closed balls with} \\\\ \\text{diameters } \\leq \\delta \\text{ and centres in } S \\end{matrix} \\right\\}." }, { "math_id": 1, "text": "P^s (S) = \\inf \\left\\{ \\left. \\sum_{j \\in J} P_0^s (S_j) \\right| S \\subseteq \\bigcup_{j \\in J} S_j, J \\text{ countable} \\right\\}," }, { "math_id": 2, "text": "\\begin{align}\n\\dim_{\\mathrm{P}} (S) &{} = \\sup \\{ s \\geq 0 | P^s (S) = + \\infty \\} \\\\\n&{} = \\inf \\{ s \\geq 0 | P^s (S) = 0 \\}.\n\\end{align}" }, { "math_id": 3, "text": "(a_n)" }, { "math_id": 4, "text": "a_0=1" }, { "math_id": 5, "text": "0<a_{n+1}<a_n/2" }, { "math_id": 6, "text": "E_0 \\supset E_1 \\supset E_2 \\supset \\cdots" }, { "math_id": 7, "text": "E_0=[0,1]" }, { "math_id": 8, "text": "E_n" }, { "math_id": 9, "text": "a_n" }, { "math_id": 10, "text": "a_n - 2a_{n+1}" }, { "math_id": 11, "text": "a_{n+1}" }, { "math_id": 12, "text": "E_{n+1}" }, { "math_id": 13, "text": "K = \\bigcap_n E_n" }, { "math_id": 14, "text": "K" }, { "math_id": 15, "text": "a_n=3^{-n}" }, { "math_id": 16, "text": "\\begin{align}\n\\dim_{\\mathrm{H}} (K) &{} = \\liminf_{n\\to\\infty} \\frac{n \\log 2}{- \\log a_n} \\, , \\\\\n\\dim_{\\mathrm{P}} (K) &{} = \\limsup_{n\\to\\infty} \\frac{n \\log 2}{- \\log a_n} \\, .\n\\end{align}" }, { "math_id": 17, "text": "0 \\leq d_1 \\leq d_2 \\leq 1" }, { "math_id": 18, "text": "d_1" }, { "math_id": 19, "text": "d_2" }, { "math_id": 20, "text": "P_0^h (S) = \\lim_{\\delta \\downarrow 0} \\sup \\left\\{ \\left. \\sum_{i \\in I} h \\big( \\mathrm{diam} (B_i) \\big) \\right| \\begin{matrix} \\{ B_{i} \\}_{i \\in I} \\text{ is a countable collection} \\\\ \\text{of pairwise disjoint balls with} \\\\ \\text{diameters } \\leq \\delta \\text{ and centres in } S \\end{matrix} \\right\\}" }, { "math_id": 21, "text": "P^h (S) = \\inf \\left\\{ \\left. \\sum_{j \\in J} P_0^h (S_j) \\right| S \\subseteq \\bigcup_{j \\in J} S_j, J \\text{ countable} \\right\\}." }, { "math_id": 22, "text": "\\dim_{\\mathrm{P}} (S) = \\overline{\\dim}_\\mathrm{MB} (S)." } ]
https://en.wikipedia.org/wiki?curid=13844097
1384568
Character group
In mathematics, a character group is the group of representations of a group by complex-valued functions. These functions can be thought of as one-dimensional matrix representations and so are special cases of the group characters that arise in the related context of character theory. Whenever a group is represented by matrices, the function defined by the trace of the matrices is called a character; however, these traces "do not" in general form a group. Some important properties of these one-dimensional characters apply to characters in general: they are constant on conjugacy classes, and the characters of distinct irreducible representations are orthogonal. The primary importance of the character group for finite abelian groups is in number theory, where it is used to construct Dirichlet characters. The character group of the cyclic group also appears in the theory of the discrete Fourier transform. For locally compact abelian groups, the character group (with an assumption of continuity) is central to Fourier analysis. Preliminaries. Let formula_0 be an abelian group. A function formula_1 mapping the group to the non-zero complex numbers is called a character of formula_0 if it is a group homomorphism from formula_0 to formula_2—that is, if formula_3 for all formula_4. If formula_5 is a character of a finite group formula_0, then each function value formula_6 is a root of unity, since for each formula_7 there exists formula_8 such that formula_9, and hence formula_10. Each character "f" is constant on conjugacy classes of "G", that is, "f"("hgh"−1) = "f"("g"). For this reason, a character is sometimes called a class function. A finite abelian group of order "n" has exactly "n" distinct characters. These are denoted by "f"1, ..., "f""n". The function "f"1 is the trivial representation, which is given by formula_11 for all formula_7. It is called the principal character of G; the others are called the non-principal characters. Definition. If "G" is an abelian group, then the set of characters "fk" forms an abelian group under pointwise multiplication. That is, the product of characters formula_12 and formula_13 is defined by formula_14 for all formula_7. This group is the character group of G and is sometimes denoted as formula_15. The identity element of formula_15 is the principal character "f"1, and the inverse of a character "fk" is its reciprocal 1/"fk". If formula_0 is finite of order "n", then formula_15 is also of order "n". In this case, since formula_16 for all formula_7, the inverse of a character is equal to the complex conjugate. Alternative definition. There is another definition of character group (pg. 29) which uses formula_17 as the target instead of just formula_18. This is useful when studying complex tori because the character group of the lattice in a complex torus formula_19 is canonically isomorphic to the dual torus via the Appell-Humbert theorem. That is, formula_20 We can express explicit elements in the character group as follows: recall that elements in formula_21 can be expressed as formula_22 for formula_23. If we consider the lattice as a subgroup of the underlying real vector space of formula_24, then a homomorphism formula_25 can be factored as a map formula_26 This follows from elementary properties of homomorphisms. Note that formula_27 giving us the desired factorization. As the group formula_28 we have the isomorphism of the character group, as a group, with the group of homomorphisms of formula_29 to formula_30. Since formula_31 for any abelian group formula_0, we have formula_32; after composing with the complex exponential, we find that formula_33 which is the expected result.
Examples. Finitely generated abelian groups. Since every finitely generated abelian group is isomorphic to formula_34 the character group can be easily computed in all finitely generated cases. From universal properties, and the isomorphism between finite products and coproducts, we see that the character group of formula_0 is isomorphic to formula_35 For the first factor, this is isomorphic to formula_36; the second is computed by looking at the maps which send the generator formula_37 to the various powers of the formula_38-th root of unity formula_39. Orthogonality of characters. Consider the formula_40 matrix "A" = "A"("G") whose matrix elements are formula_41 where formula_42 is the "k"th element of "G". The sum of the entries in the "j"th row of "A" is given by formula_43 if formula_44, and formula_45. The sum of the entries in the "k"th column of "A" is given by formula_46 if formula_47, and formula_48. Let formula_49 denote the conjugate transpose of "A". Then formula_50. This implies the desired orthogonality relationship for the characters: i.e., formula_51, where formula_52 is the Kronecker delta and formula_53 is the complex conjugate of formula_54.
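For the cyclic group Z/n the orthogonality relation can be verified directly. The sketch below is not from the article; it is a short Python/NumPy check (the choice n = 6 and the sample indices are arbitrary) that the character matrix A with entries f_j(g_k) = exp(2πijk/n) satisfies AA* = nI, and that each row is multiplicative.

```python
# Illustrative sketch (not from the article): characters of the cyclic group Z/n
# are f_j(g_k) = exp(2*pi*i*j*k/n); the matrix of values satisfies A A* = n I.
import numpy as np

n = 6
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(2j * np.pi * jj * kk / n)        # A[j, k] = f_j(g_k)

print(np.allclose(A @ A.conj().T, n * np.eye(n)))    # orthogonality: A A* = n I

a, b, row = 2, 5, 4                                   # multiplicativity of one character
print(np.isclose(A[row, a] * A[row, b], A[row, (a + b) % n]))
```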
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "f: G \\to \\mathbb{C}\\setminus\\{0\\}" }, { "math_id": 2, "text": "\\mathbb C^\\times" }, { "math_id": 3, "text": "f(g_1 g_2) = f(g_1)f(g_2)" }, { "math_id": 4, "text": "g_1, g_2 \\in G" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "f(g)" }, { "math_id": 7, "text": "g \\in G" }, { "math_id": 8, "text": "k \\in \\mathbb{N}" }, { "math_id": 9, "text": "g^{k} = e" }, { "math_id": 10, "text": "f(g)^{k} = f(g^{k}) = f(e) = 1" }, { "math_id": 11, "text": "f_1(g) = 1" }, { "math_id": 12, "text": "f_j" }, { "math_id": 13, "text": "f_k" }, { "math_id": 14, "text": "(f_j f_k)(g)= f_j(g) f_k(g)" }, { "math_id": 15, "text": "\\hat{G}" }, { "math_id": 16, "text": "|f_k(g)| = 1" }, { "math_id": 17, "text": "U(1) = \\{z \\in \\mathbb{C}^*: |z|=1 \\}" }, { "math_id": 18, "text": "\\mathbb{C}^*" }, { "math_id": 19, "text": "V/\\Lambda" }, { "math_id": 20, "text": "\\text{Hom}(\\Lambda, U(1)) \\cong V^\\vee/\\Lambda^\\vee = X^\\vee" }, { "math_id": 21, "text": "U(1)" }, { "math_id": 22, "text": "e^{2\\pi i x}" }, { "math_id": 23, "text": "x \\in \\mathbb{R}" }, { "math_id": 24, "text": "V" }, { "math_id": 25, "text": "\\phi: \\Lambda \\to U(1)" }, { "math_id": 26, "text": "\\phi : \\Lambda \\to \\mathbb{R} \\xrightarrow{\\exp({2\\pi i \\cdot })} U(1)" }, { "math_id": 27, "text": "\\begin{align}\n\\phi(x+y) &= \\exp({2\\pi i }f(x+y)) \\\\\n&= \\phi(x) + \\phi(y) \\\\\n&= \\exp(2\\pi i f(x))\\exp(2\\pi i f(y))\n\\end{align}" }, { "math_id": 28, "text": "\\text{Hom}(\\Lambda,\\mathbb{R}) \\cong \\text{Hom}(\\mathbb{Z}^{2n},\\mathbb{R})" }, { "math_id": 29, "text": "\\mathbb{Z}^{2n}" }, { "math_id": 30, "text": "\\mathbb{R}" }, { "math_id": 31, "text": "\\text{Hom}(\\mathbb{Z},G)\\cong G" }, { "math_id": 32, "text": "\\text{Hom}(\\mathbb{Z}^{2n}, \\mathbb{R}) \\cong \\mathbb{R}^{2n}" }, { "math_id": 33, "text": "\\text{Hom}(\\mathbb{Z}^{2n}, U(1)) \\cong \\mathbb{R}^{2n}/\\mathbb{Z}^{2n}" }, { "math_id": 34, "text": "G \\cong \\mathbb{Z}^{n}\\oplus \\bigoplus_{i=1}^m \\mathbb{Z}/a_i" }, { "math_id": 35, "text": "\\text{Hom}(\\mathbb{Z},\\mathbb{C}^*)^{\\oplus n}\\oplus\\bigoplus_{i=1}^k\\text{Hom}(\\mathbb{Z}/n_i,\\mathbb{C}^*)" }, { "math_id": 36, "text": "(\\mathbb{C}^*)^{\\oplus n}" }, { "math_id": 37, "text": "1 \\in \\mathbb{Z}/n_i" }, { "math_id": 38, "text": "n_i" }, { "math_id": 39, "text": "\\zeta_{n_i} = \\exp(2\\pi i/n_i)" }, { "math_id": 40, "text": "n \\times n" }, { "math_id": 41, "text": "A_{jk} = f_j(g_k)" }, { "math_id": 42, "text": "g_k" }, { "math_id": 43, "text": "\\sum_{k=1}^n A_{jk} = \\sum_{k=1}^n f_j(g_k) = 0" }, { "math_id": 44, "text": "j \\neq 1" }, { "math_id": 45, "text": "\\sum_{k=1}^n A_{1k} = n" }, { "math_id": 46, "text": "\\sum_{j=1}^n A_{jk} = \\sum_{j=1}^n f_j(g_k) = 0" }, { "math_id": 47, "text": "k \\neq 1" }, { "math_id": 48, "text": "\\sum_{j=1}^n A_{j1} = \\sum_{j=1}^n f_j(e) = n" }, { "math_id": 49, "text": "A^\\ast" }, { "math_id": 50, "text": "AA^\\ast = A^\\ast A = nI" }, { "math_id": 51, "text": "\\sum_{k=1}^n {f_k}^* (g_i) f_k (g_j) = n \\delta_{ij}" }, { "math_id": 52, "text": "\\delta_{ij}" }, { "math_id": 53, "text": "f^*_k (g_i)" }, { "math_id": 54, "text": "f_k (g_i)" } ]
https://en.wikipedia.org/wiki?curid=1384568
138484
Commutative diagram
Collection of maps which give the same result In mathematics, and especially in category theory, a commutative diagram is a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result. It is said that commutative diagrams play the role in category theory that equations play in algebra. Description. A commutative diagram often consists of three parts: objects (also known as vertices), morphisms (also known as arrows or edges), and paths or composites. Arrow symbols. In algebra texts, the type of morphism can be denoted with different arrow usages: a monomorphism may be labeled with a hooked arrow formula_0 or a tailed arrow formula_1, an epimorphism with a two-headed arrow formula_2, and an isomorphism with formula_3. A dashed arrow typically represents the claim that the indicated morphism exists whenever the rest of the diagram holds; such an arrow may be labeled formula_4, and if the morphism is in addition unique, formula_5 or formula_6. The meanings of different arrows are not entirely standardized: the arrows used for monomorphisms, epimorphisms, and isomorphisms are also used for injections, surjections, and bijections, as well as the cofibrations, fibrations, and weak equivalences in a model category. Verifying commutativity. Commutativity makes sense for a polygon of any finite number of sides (including just 1 or 2), and a diagram is commutative if every polygonal subdiagram is commutative. Note that a diagram may be non-commutative, i.e., the composition of different paths in the diagram may not give the same result. Examples. Example 1. In the left diagram, which expresses the first isomorphism theorem, commutativity of the triangle means that formula_7. In the right diagram, commutativity of the square means formula_8. Example 2. In order for the diagram below to commute, three equalities must be satisfied: (1) formula_9, (2) formula_10, and (3) formula_11. Here, since the first equality follows from the last two, it suffices to show that (2) and (3) are true in order for the diagram to commute. However, since equality (3) generally does not follow from the other two, it is generally not enough to have only equalities (1) and (2) if one were to show that the diagram commutes. Diagram chasing. Diagram chasing (also called diagrammatic search) is a method of mathematical proof used especially in homological algebra, where one establishes a property of some morphism by tracing the elements of a commutative diagram. A proof by diagram chasing typically involves the formal use of the properties of the diagram, such as injective or surjective maps, or exact sequences. A syllogism is constructed, for which the graphical display of the diagram is just a visual aid. It follows that one ends up "chasing" elements around the diagram, until the desired element or result is constructed or verified. Examples of proofs by diagram chasing include those typically given for the five lemma, the snake lemma, the zig-zag lemma, and the nine lemma. In higher category theory. In higher category theory, one considers not only objects and arrows, but arrows between the arrows, arrows between arrows between arrows, and so on ad infinitum. For example, the category of small categories Cat is naturally a 2-category, with functors as its arrows and natural transformations as the arrows between functors. In this setting, commutative diagrams may include these higher arrows as well, which are often depicted in the following style: formula_12. For example, the following (somewhat trivial) diagram depicts two categories C and D, together with two functors F, G : C → D and a natural transformation α : F ⇒ G: There are two kinds of composition in a 2-category (called vertical composition and horizontal composition), and they may also be depicted via pasting diagrams (see the definition of a 2-category for examples). Diagrams as functors. A commutative diagram in a category "C" can be interpreted as a functor from an index category "J" to "C"; one calls the functor a diagram.
More formally, a commutative diagram is a visualization of a diagram indexed by a poset category. Such a diagram typically includes: a node for every object in the index category; an arrow for each morphism in a generating set (omitting identity maps and morphisms expressible as composites of others); and the commutativity of the diagram, that is, the equality of the different compositions of maps between two objects, corresponding to the uniqueness of a map between two objects in a poset category. Conversely, given a commutative diagram, it defines a poset category, where: the objects are the nodes; there is a morphism between two objects whenever there is a directed path between the corresponding nodes; and any two such morphisms between the same pair of objects are equal, which is precisely the commutativity of the diagram. However, not every diagram commutes (the notion of diagram strictly generalizes commutative diagram). As a simple example, the diagram of a single object with an endomorphism (formula_13), or with two parallel arrows (formula_14, that is, formula_15, sometimes called the free quiver), as used in the definition of equalizer, need not commute. Further, diagrams may be messy or impossible to draw when the number of objects or morphisms is large (or even infinite). Bibliography. <templatestyles src="Refbegin/styles.css" />
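Verifying commutativity of a concrete diagram of maps between finite sets amounts to checking that all composites between the same endpoints agree, as in the square of Example 1. The sketch below is not from the article; it is a tiny Python illustration with invented sets and maps.

```python
# Tiny illustration (not from the article): checking that a square of maps
# between finite sets commutes, i.e. that h∘f = k∘g on every element.
# The sets and maps below are invented for the example.

def compose(g, f):
    """Return g ∘ f for maps encoded as dictionaries."""
    return {x: g[f[x]] for x in f}

A = [0, 1, 2]
f = {0: "a", 1: "a", 2: "b"}        # f : A -> B
g = {0: "x", 1: "x", 2: "y"}        # g : A -> C
h = {"a": "p", "b": "q"}            # h : B -> D
k = {"x": "p", "y": "q"}            # k : C -> D

hf, kg = compose(h, f), compose(k, g)
print(all(hf[x] == kg[x] for x in A))   # True: the square commutes
```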
[ { "math_id": 0, "text": "\\hookrightarrow" }, { "math_id": 1, "text": "\\rightarrowtail" }, { "math_id": 2, "text": "\\twoheadrightarrow" }, { "math_id": 3, "text": "\\overset{\\sim}{\\rightarrow}" }, { "math_id": 4, "text": "\\exists" }, { "math_id": 5, "text": "!" }, { "math_id": 6, "text": "\\exists!" }, { "math_id": 7, "text": "f = \\tilde{f} \\circ \\pi" }, { "math_id": 8, "text": "h \\circ f = k \\circ g" }, { "math_id": 9, "text": "r \\circ h \\circ g = H \\circ G \\circ l" }, { "math_id": 10, "text": "m \\circ g = G \\circ l" }, { "math_id": 11, "text": "r \\circ h = H \\circ m" }, { "math_id": 12, "text": "\\Rightarrow" }, { "math_id": 13, "text": "f\\colon X \\to X" }, { "math_id": 14, "text": "\\bullet \\rightrightarrows \\bullet" }, { "math_id": 15, "text": "f,g\\colon X \\to Y" } ]
https://en.wikipedia.org/wiki?curid=138484
13849063
Demographic gravitation
Social effect Demographic gravitation is a concept of "social physics", introduced by Princeton University astrophysicist John Quincy Stewart in 1947. It is an attempt to use equations and notions of classical physics, such as gravity, to seek simplified insights and even laws of demographic behaviour for large numbers of human beings. A basic conception within it is that large numbers of people, in a city for example, actually behave as an attractive force for other people to migrate there. It has been related to W. J. Reilly's law of retail gravitation, George Kingsley Zipf's Demographic Energy, and to the theory of trip distribution through gravity models. Writing in the journal "Sociometry", Stewart set out an "agenda for social physics." Comparing the microscopic versus macroscopic viewpoints in the methodology of formulating physical laws, he made an analogy with the social sciences: Fortunately for physics, the macroscopic approach was the commonsense one, and the early investigators – Boyle, Charles, Gay-Lussac – were able to establish the laws of gases. The situation with respect to "social physics" is reversed... If Robert Boyle had taken the attitude of many social scientists, he would not have been willing to measure the pressure and volume of a sample of air until an encyclopedic history of its molecules had been compiled. Boyle did not even know that air contained argon and helium but he found a very important law. Stewart proceeded to apply the Newtonian formulae of gravitation to "the average interrelations of people" on a wide geographic scale, elucidating such notions as "the demographic force of attraction," demographic energy, force, potential and gradient. Key equations. The following are some of the key equations (with plain English paraphrases) from his article in "Sociometry": formula_0 (the demographic force between two groups of people is the product of their populations divided by the square of the distance separating them) formula_1 (the demographic energy is the product of the two populations divided by the distance between them) formula_2 (the demographic potential exerted by population 2 on the people at point 1 is population 2 divided by the intervening distance) formula_3 (the demographic potential at any point is population divided by distance, e.g. in persons per mile) formula_4 (the demographic gradient is accordingly expressed in persons per square mile) The potential of population at any point is equivalent to the measure of proximity of people at that point (this also has relevance to the Georgist theory of economic rent on land). For comparison, Reilly's retail gravity equilibrium (or Balance/Break Point) is paraphrased as: formula_5 Recently, a stochastic version has been proposed according to which the probability formula_6 that a site formula_7 becomes urban is given by formula_8 where formula_9 for urban sites and formula_10 otherwise, formula_11 is the distance between sites formula_7 and formula_12, and formula_13 controls the overall growth rate. The parameter formula_14 determines the degree of compactness.
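As a numerical reading of the quantities above, the sketch below is not from Stewart's article; it is a small Python example with invented populations and distance. The "breaking point" line uses the standard rearrangement of Reilly's balance condition (equal attraction from the two centres, with the two partial distances summing to the separation), which is an interpretation rather than a formula quoted from the text.

```python
# Small illustration (not from Stewart's article): evaluating the demographic
# force, energy and potential for two hypothetical cities, and the Reilly
# breaking point between them.  Populations and distance are invented values.
import math

def demographic_force(n1, n2, d):
    return n1 * n2 / d ** 2

def demographic_energy(n1, n2, d):
    return n1 * n2 / d

def demographic_potential(n, d):
    return n / d

N1, N2, D = 1_000_000, 250_000, 100.0        # persons, persons, miles
print("force:  ", demographic_force(N1, N2, D))
print("energy: ", demographic_energy(N1, N2, D))
print("potential of city 2 at city 1:", demographic_potential(N2, D))

# Balance point: N1/d1**2 = N2/(D - d1)**2  =>  d1 = D / (1 + sqrt(N2 / N1)).
d1 = D / (1 + math.sqrt(N2 / N1))
print("breaking point at about", round(d1, 1), "miles from the larger city")
```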
[ { "math_id": 0, "text": "F = \\frac{N_1 N_2}{d^2}" }, { "math_id": 1, "text": "E = \\frac{N_1 N_2}{d}" }, { "math_id": 2, "text": " PN_1 = \\frac{N_2}{d}" }, { "math_id": 3, "text": "P = \\frac{N}{d}" }, { "math_id": 4, "text": "\\text{gradient} = \\frac{N}{m^2}" }, { "math_id": 5, "text": "\\frac{N_1}{d^2} = \\frac{N_2}{d^2}" }, { "math_id": 6, "text": "p_j" }, { "math_id": 7, "text": "j" }, { "math_id": 8, "text": "p_j=C\\frac{\\sum_k w_k d_{j,k}^{-\\gamma}}{\\sum_k d_{j,k}^{-\\gamma}}" }, { "math_id": 9, "text": "w_k=1" }, { "math_id": 10, "text": "w_k=0" }, { "math_id": 11, "text": "d_{j,k}" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "C" }, { "math_id": 14, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=13849063
1384943
Glove problem
In operations research, the glove problem (also known as the condom problem) is an optimization problem used as an example that the cheapest capital cost often leads to dramatic increase in operational time, but that the shortest operational time need not be given by the most expensive capital cost. Problem statement. "M" doctors are each to examine each of "N" patients, wearing gloves to avoid contamination. Each glove can be used any number of times, but the same side of one glove cannot be exposed to more than one person. Gloves can be re-used any number of times, and more than one can be used simultaneously. Given "M" doctors and "N" patients, the minimum number of gloves "G"("M", "N") required for all the doctors to examine all the patients is given by: Details. A naive approach would be to estimate the number of gloves as simply "G"("M", "N") = "MN". But this number can be significantly reduced by exploiting the fact that each glove has two sides, and it is not necessary to use both sides simultaneously. A better solution can be found by assigning each person his or her own glove, which is to be used for the entire operation. Every pairwise encounter is then protected by a double layer. Note that the outer surface of the doctor's gloves meets only the inner surface of the patient's gloves. This gives an answer of "M" + "N" gloves, which is significantly lower than "MN". The makespan with this scheme is "K" · max("M", "N"), where "K" is the duration of one pairwise encounter. Note that this is exactly the same makespan if MN gloves were used. Clearly in this case, increasing capital cost has not produced a shorter operation time. The number "G"("M", "N") may be refined further by allowing asymmetry in the initial distribution of gloves. The best scheme is given by: This scheme uses (1 · "N") + (("M" − 1 − 1) · 1) + (1 · 0) = "M" + "N" − 2 gloves. This number cannot be reduced further. The makespan is then given by: Makespan: "K" · (2"N" + max("M" − 2, "N")). Clearly, the minimum "G"("M", "N") increases the makespan significantly, sometimes by a factor of 3. Note that the benefit in the number of gloves is only 2 units. One or the other solution may be preferred depending on the relative cost of a glove judged against the longer operation time. In theory, the intermediate solution with ("M" + "N" − 1) should also occur as a candidate solution, but this requires such narrow windows on "M", "N" and the cost parameters to be optimal that it is often ignored. Other factors. The statement of the problem does not make it clear that the principle of contagion applies, i.e. if the inside of one glove has been touched by the outside of another that previously touched some person, then that inside also counts as touched by that person. Also, medical gloves are reversible; therefore a better solution exists, which uses formula_0 gloves where the less numerous group are equipped with a glove each, the more numerous in pairs. The first of each pair use a clean interface, the second reverse the glove. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
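The counts and makespans discussed above are easy to tabulate. The following sketch is not from the article; it is a small Python helper (the values of M, N and the unit examination time K are arbitrary, and the M + N − 2 entry presupposes at least two doctors and two patients, an assumption made explicit here) comparing the naive count, the one-glove-per-person count, the M + N − 2 count derived in the text, and the reversible-glove bound from the "Other factors" section, together with the two makespan expressions.

```python
# Small illustration (not from the article): glove counts and makespans for
# M doctors and N patients; K is the duration of one examination.  The values
# of M, N and K are arbitrary, and the M + N - 2 entry assumes M, N >= 2.
import math

def glove_counts(m, n):
    return {
        "naive (M*N)": m * n,
        "one glove per person (M+N)": m + n,
        "scheme in the text (M+N-2)": m + n - 2,
        "reversible-glove bound": min(math.ceil(m / 2) + n, m + math.ceil(n / 2)),
    }

def makespans(m, n, k=1.0):
    return {
        "M*N or M+N gloves": k * max(m, n),
        "M+N-2 gloves": k * (2 * n + max(m - 2, n)),
    }

M, N = 5, 8
print(glove_counts(M, N))
print(makespans(M, N))
```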
[ { "math_id": 0, "text": "\\min\\left(\\lceil M/2\\rceil+N, M+\\lceil N/2\\rceil\\right) " } ]
https://en.wikipedia.org/wiki?curid=1384943
1385257
Cheap talk
Game-theoretic concept In game theory, cheap talk is communication between players that does not directly affect the payoffs of the game. Providing and receiving information is free. This is in contrast to signalling, in which sending certain messages may be costly for the sender depending on the state of the world. This basic setting, introduced by Vincent Crawford and Joel Sobel, has given rise to a variety of variants. To give a formal definition, cheap talk is communication that is: (1) costless to transmit and receive, (2) non-binding (it does not limit the strategic choices of either party), and (3) unverifiable (it cannot be verified by a third party such as a court). Therefore, an agent engaging in cheap talk could lie with impunity, but may choose in equilibrium not to do so. Applications. Game theory. Cheap talk can, in general, be added to any game and has the potential to enhance the set of possible equilibrium outcomes. For example, one can add a round of cheap talk at the beginning of the Battle of the Sexes. Each player announces whether they intend to go to the football game or the opera. Because the Battle of the Sexes is a coordination game, this initial round of communication may enable the players to select among multiple equilibria, thereby achieving higher payoffs than in the uncoordinated case. The messages and strategies which yield this outcome are symmetric for each player. They are: 1) announce opera or football with equal probability; 2) if a person announces opera (or football), then upon hearing this message the other person will say opera (or football) as well (Farrell and Rabin, 1996). If they both announce different options, then no coordination is achieved. In the case of only one player messaging, this could also give that player a first-mover advantage. It is not guaranteed, however, that cheap talk will have an effect on equilibrium payoffs. The Prisoner's Dilemma, by contrast, is a game whose only equilibrium is in dominant strategies. Any pre-play cheap talk will be ignored and players will play their dominant strategies (Defect, Defect) regardless of the messages sent. Biological applications. It has been commonly argued that cheap talk will have no effect on the underlying structure of the game. In biology authors have often argued that costly signalling best explains signalling between animals (see Handicap principle, Signalling theory). This general belief has received some challenges (see work by Carl Bergstrom and Brian Skyrms 2002, 2004). In particular, several models using evolutionary game theory indicate that cheap talk can have effects on the evolutionary dynamics of particular games. Crawford and Sobel's definition. Setting. In the basic form of the game, there are two players communicating, one sender "S" and one receiver "R". Type. Sender "S" gets knowledge of the state of the world or of his "type" "t". Receiver "R" does not know "t"; he has only ex-ante beliefs about it, and relies on a message from "S" to possibly improve the accuracy of his beliefs. Message. "S" decides to send message "m". Message "m" may disclose full information, but it may also give limited, blurred information: it will typically say "The state of the world is between "t1" and "t2"". It may give no information at all. The form of the message does not matter, as long as there is mutual understanding and a common interpretation. It could be a general statement from a central bank's chairman, a political speech in any language, etc. Whatever the form, it is eventually taken to mean "The state of the world is between "t1" and "t2"". Action. Receiver "R" receives message "m".
"R" updates his beliefs about the state of the world given new information that he might get, using Bayes's rule. "R" decides to take action "a". This action impacts both his own utility and the sender's utility. Utility. The decision of "S" regarding the content of "m" is based on maximizing his utility, given what he expects "R" to do. Utility is a way to quantify satisfaction or wishes. It can be financial profits, or non-financial satisfaction—for instance the extent to which the environment is protected. "→ Quadratic utilities:" The respective utilities of "S" and "R" can be specified by the following: formula_0 formula_1 The theory applies to more general forms of utility, but quadratic preferences makes exposition easier. Thus "S" and "R" have different objectives if "b ≠ 0". Parameter "b" is interpreted as "conflict of interest" between the two players, or alternatively as bias."UR" is maximized when "a = t", meaning that the receiver wants to take action that matches the state of the world, which he does not know in general. "US" is maximized when "a = t + b", meaning that "S" wants a slightly higher action to be taken, if "b &gt; 0". Since "S" does not control action, "S" must obtain the desired action by choosing what information to reveal. Each player's utility depends on the state of the world and on both players' decisions that eventually lead to action "a". Nash equilibrium. We look for an equilibrium where each player decides optimally, assuming that the other player also decides optimally. Players are rational, although "R" has only limited information. Expectations get realized, and there is no incentive to deviate from this situation. Theorem. Crawford and Sobel characterize possible Nash equilibria. When interests are aligned, then information is fully disclosed. When conflict of interest is very large, all information is kept hidden. These are extreme cases. The model allowing for more subtle case when interests are close, but different and in these cases optimal behavior leads to some but not all information being disclosed, leading to various kinds of carefully worded sentences that we may observe. More generally: Messages. While messages could ex-ante assume an infinite number of possible values "μ(t)" for the infinite number of possible states of the world "t", actually they may take only a finite number of values "(m1, m2, . . . , mN)". Thus an equilibrium may be characterized by a partition "(t0(N), t1(N). . . tN(N))" of the set of types [0, 1], where "0 = t0(N) &lt; t1(N) &lt; . . . &lt; tN(N) = 1". This partition is shown on the top right segment of Figure 1. The "ti(N)"'s are the bounds of intervals where the messages are constant: for "ti-1(N) &lt; t &lt; ti(N), μ(t) = mi". Actions. Since actions are functions of messages, actions are also constant over these intervals: for "ti-1(N) &lt; t &lt; ti(N)", "α(t) = α(mi) = ai". The action function is now indirectly characterized by the fact that each value "ai" optimizes return for the "R", knowing that "t" is between "t1" and "t2". Mathematically (assuming that "t" is uniformly distributed over [0, 1]), formula_2 → "Quadratic utilities:" Given that "R" knows that "t" is between "ti-1" and "ti", and in the special case quadratic utility where "R" wants action "a" to be as close to "t" as possible, we can show that quite intuitively the optimal action is the middle of the interval: formula_3 Indifference condition. At "t ti", The sender has to be indifferent between sending either message "mi-1" or "mi". 
formula_4       "1 ≤ i ≤ N-1" This gives information about "N" and the "ti". "→ Practically:" We consider a partition of size "N". One can show that formula_5 "N" must be small enough that the numerator "1 - 2bN(N-1)" is positive. This determines the maximum allowed value formula_6 where formula_7 is the ceiling of formula_8, i.e. the smallest positive integer greater than or equal to formula_8. Example: We assume that "b = 1/20". Then "N* = 3". We now describe all the equilibria for "N = 1", "2", or "3" (see Figure 2). N = 1: This is the babbling equilibrium. "t0 = 0, t1 = 1"; "a1 = 1/2 = 0.5". N = 2: "t0 = 0, t1 = 2/5 = 0.4, t2 = 1"; "a1 = 1/5 = 0.2, a2 = 7/10 = 0.7". N = N* = 3: "t0 = 0, t1 = 2/15, t2 = 7/15, t3 = 1"; "a1 = 1/15, a2 = 3/10 = 0.3, a3 = 11/15". With "N = 1", we get the "coarsest" possible message, which does not give any information, so everything is red on the top left panel of Figure 2. With "N = 3", the message is "finer". However, it remains quite coarse compared to full revelation, which would be the 45° line, but which is not a Nash equilibrium. With a higher "N", and thus a finer message, the blue area is larger, which implies higher expected utility. Disclosing more information benefits both parties. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
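The closed-form characterization above lends itself to direct computation. The following sketch (an illustration added here, not part of Crawford and Sobel's paper; the function names and the numerical tolerance are arbitrary choices) evaluates the maximum partition size "N*", the cutoffs "ti" and the actions "ai" for a given bias "b", and checks the sender's indifference condition at each interior cutoff. For "b = 1/20" it reproduces the three equilibria listed in the example above.

```python
# Partition equilibria of the uniform-type, quadratic-utility cheap talk game.
import math

def max_partition_size(b):
    """Largest N with 1 - 2*b*N*(N-1) > 0, i.e. N* = ceil(-1/2 + sqrt(1 + 2/b)/2)."""
    return math.ceil(-0.5 + 0.5 * math.sqrt(1.0 + 2.0 / b))

def partition_equilibrium(b, N):
    """Cutoffs t_0..t_N and induced actions a_1..a_N for a size-N equilibrium."""
    if N > max_partition_size(b):
        raise ValueError("no equilibrium with that many steps for this bias")
    t1 = (1.0 - 2.0 * b * N * (N - 1)) / N
    t = [t1 * i + 2.0 * b * i * (i - 1) for i in range(N + 1)]  # t_i = t_1 i + 2 b i (i - 1)
    a = [(t[i - 1] + t[i]) / 2.0 for i in range(1, N + 1)]      # midpoint of each interval
    return t, a

if __name__ == "__main__":
    b = 1.0 / 20.0
    assert max_partition_size(b) == 3
    for N in (1, 2, 3):
        t, a = partition_equilibrium(b, N)
        # Sender indifference at interior cutoffs: U^S(a_i, t_i) = U^S(a_{i+1}, t_i),
        # with U^S(a, t) = -(a - t - b)**2.
        for i in range(1, N):
            assert abs((a[i - 1] - t[i] - b) ** 2 - (a[i] - t[i] - b) ** 2) < 1e-12
        print(N, [round(x, 4) for x in t], [round(x, 4) for x in a])
    # Output matches the example: N = 3 gives t = (0, 2/15, 7/15, 1), a = (1/15, 3/10, 11/15).
```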
[ { "math_id": 0, "text": "U^S(a, t) = -(a-t-b)^2" }, { "math_id": 1, "text": "U^R(a,t)=-(a-t)^2" }, { "math_id": 2, "text": "a_i = \\bar{a}(t_{i-1}, t_i) = \\mathrm{arg}\\max_a \\int_{t_{i-1}}^{t_i} U^R(a, t) dt" }, { "math_id": 3, "text": "a_i = \\frac{t_{i-1} + t_i}{2}" }, { "math_id": 4, "text": "U^S (a_i, t_i) = U^S (a_{i+1}, t_i)" }, { "math_id": 5, "text": "t_i = t_1 i + 2 b i (i-1) \\qquad t_1 = \\frac{1-2 b N (N-1)}{N}" }, { "math_id": 6, "text": "N^* = \\langle -\\frac{1}{2}+\\frac{1}{2} \\sqrt{1+\\frac{2}{b}} \\rangle" }, { "math_id": 7, "text": "\\langle Z \\rangle" }, { "math_id": 8, "text": "Z" } ]
https://en.wikipedia.org/wiki?curid=1385257
13856372
Pre-measure
In mathematics, a pre-measure is a set function that is, in some sense, a precursor to a "bona fide" measure on a given space. Indeed, one of the fundamental theorems in measure theory states that a pre-measure can be extended to a measure. Definition. Let formula_0 be a ring of subsets (closed under union and relative complement) of a fixed set formula_1 and let formula_2 be a set function. formula_3 is called a pre-measure if formula_4 and, for every countable (or finite) sequence formula_5 of pairwise disjoint sets whose union lies in formula_6 formula_7 The second property is called formula_8-additivity. Thus, what is missing for a pre-measure to be a measure is that it is not necessarily defined on a sigma-algebra (or a sigma-ring). Carathéodory's extension theorem. It turns out that pre-measures give rise quite naturally to outer measures, which are defined for all subsets of the space formula_9 More precisely, if formula_3 is a pre-measure defined on a ring of subsets formula_0 of the space formula_10 then the set function formula_11 defined by formula_12 is an outer measure on formula_1 and the measure formula_13 induced by formula_11 on the formula_8-algebra formula_14 of Carathéodory-measurable sets satisfies formula_15 for formula_16 (in particular, formula_14 includes formula_0). The infimum of the empty set is taken to be formula_17 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
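The construction can be made concrete on a toy finite example. In the sketch below (an illustration only, not part of the formal statement; the underlying set, the generating blocks and the weights are arbitrary choices), the ring is generated by two blocks of a four-element set, the pre-measure adds up per-element weights, and the outer measure is computed by brute force over finite covers drawn from the ring. The script then checks that the outer measure agrees with the pre-measure on the ring, as the extension theorem asserts.

```python
# Toy Caratheodory construction: pre-measure on a ring of subsets of a finite set,
# outer measure computed by brute force over finite covers from the ring.
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})
blocks = [frozenset({0, 1}), frozenset({2, 3})]
# Ring generated by the blocks: {}, {0,1}, {2,3}, {0,1,2,3}.
R = [frozenset()] + [frozenset().union(*c) for k in (1, 2) for c in combinations(blocks, k)]

weights = {0: 1.0, 1: 2.0, 2: 4.0, 3: 8.0}

def mu0(A):
    """Pre-measure on R: mu0(empty) = 0 and mu0 is additive over disjoint unions."""
    return sum(weights[x] for x in A)

def outer_measure(S):
    """mu*(S) = infimum, over families of ring members covering S, of the summed
    pre-measure.  On a finite space finite covers suffice, so enumerate them all."""
    best = float("inf")  # the infimum of the empty set of covers is +infinity
    families = chain.from_iterable(combinations(R, k) for k in range(1, len(R) + 1))
    for family in families:
        if S <= frozenset().union(*family):
            best = min(best, sum(mu0(A) for A in family))
    return best

if __name__ == "__main__":
    for A in R:                                # the extension agrees with mu0 on the ring
        assert outer_measure(A) == mu0(A)
    print(outer_measure(frozenset({0})))       # 3.0: the cheapest ring cover of {0} is {0,1}
```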
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\mu_0 : R \\to [0, \\infty]" }, { "math_id": 3, "text": "\\mu_0" }, { "math_id": 4, "text": "\\mu_0(\\varnothing) = 0" }, { "math_id": 5, "text": "A_1, A_2, \\ldots \\in R" }, { "math_id": 6, "text": "R," }, { "math_id": 7, "text": "\\mu_0 \\left(\\bigcup_{n=1}^\\infty A_n\\right) = \\sum_{n=1}^\\infty \\mu_0(A_n)." }, { "math_id": 8, "text": "\\sigma" }, { "math_id": 9, "text": "X." }, { "math_id": 10, "text": "X," }, { "math_id": 11, "text": "\\mu^*" }, { "math_id": 12, "text": "\\mu^* (S) = \\inf \\left\\{\\left. \\sum_{i=1}^{\\infty} \\mu_0(A_i) \\right| A_i \\in R, S \\subseteq \\bigcup_{i=1}^{\\infty} A_i\\right\\}" }, { "math_id": 13, "text": "\\mu" }, { "math_id": 14, "text": "\\Sigma" }, { "math_id": 15, "text": "\\mu(A) = \\mu_0(A)" }, { "math_id": 16, "text": "A \\in R" }, { "math_id": 17, "text": "+\\infty." } ]
https://en.wikipedia.org/wiki?curid=13856372
13860
Hahn–Banach theorem
Theorem on extension of bounded linear functionals The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry. History. The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s. The special case of the theorem for the space formula_0 of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly, and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz. The first Hahn–Banach theorem was proved by Eduard Helly in 1912 who showed that certain linear functionals defined on a subspace of a certain type of normed space (formula_1) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction. The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations. This is needed to solve problems such as the moment problem, whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so. Riesz and Helly solved the problem for certain classes of spaces (such as formula_2 and formula_3) where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem: (&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;The vector problem) Given a collection formula_4 of bounded linear functionals on a normed space formula_5 and a collection of scalars formula_6 determine if there is an formula_7 such that formula_8 for all formula_9 If formula_5 happens to be a reflexive space then to solve the vector problem, it suffices to solve the following dual problem: (The functional problem) Given a collection formula_10 of vectors in a normed space formula_5 and a collection of scalars formula_6 determine if there is a bounded linear functional formula_11 on formula_5 such that formula_12 for all formula_9 Riesz went on to define formula_2 space (formula_13) in 1910 and the formula_14 spaces in 1913. 
While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces, and in 1912 Helly solved it for a more general class of spaces. It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem. The following theorem states the general functional problem and characterizes its solution. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (The functional problem) —  Let formula_10 be vectors in a real or complex normed space formula_5 and let formula_15 be scalars also indexed by formula_16 There exists a continuous linear functional formula_11 on formula_5 such that formula_12 for all formula_17 if and only if there exists a formula_18 such that for any choice of scalars formula_19 where all but finitely many formula_20 are formula_21 the following holds: formula_22 The Hahn–Banach theorem can be deduced from the above theorem. If formula_5 is reflexive then this theorem solves the vector problem. Hahn–Banach theorem. A real-valued function formula_23 defined on a subset formula_24 of formula_5 is said to be dominated (above) by a function formula_25 if formula_26 for every formula_27 Hence the following version of the Hahn–Banach theorem is called the dominated extension theorem. &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Hahn–Banach dominated extension theorem (for real linear functionals) —  If formula_25 is a sublinear function (such as a norm or a seminorm, for example) defined on a real vector space formula_5 then any linear functional defined on a vector subspace of formula_5 that is dominated above by formula_28 has at least one linear extension to all of formula_5 that is also dominated above by formula_29 Explicitly, if formula_25 is a sublinear function, which by definition means that it satisfies formula_30 and if formula_23 is a linear functional defined on a vector subspace formula_24 of formula_5 such that formula_31 then there exists a linear functional formula_32 such that formula_33 formula_34 Moreover, if formula_28 is a seminorm then formula_35 necessarily holds for all formula_36 The theorem remains true if the requirements on formula_28 are relaxed to require only that formula_28 be a convex function: formula_37 A function formula_25 is convex and satisfies formula_38 if and only if formula_39 for all vectors formula_40 and all non-negative real formula_41 such that formula_42 Every sublinear function is a convex function. On the other hand, if formula_25 is convex with formula_43 then the function defined by formula_44 is positively homogeneous (because for all formula_45 and formula_46 one has formula_47), hence, being convex, it is sublinear. It is also bounded above by formula_48 and satisfies formula_49 for every linear functional formula_50 So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals.
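A small finite-dimensional example may help fix ideas (the example is an illustration chosen here, not taken from the sources above). On X = R2 take the sublinear function p(x) = |x1| + 2|x2|, the subspace M = {(t, t)} and the functional f(t, t) = 3t, which is dominated above by p on M. The sketch below enumerates the candidate linear extensions F(x) = a x1 + b x2 with a + b = 3 and spot-checks domination by p; the single dominated extension (a, b) = (1, 2) survives, consistent with the theorem's guarantee that at least one dominated extension exists.

```python
# Grid search for dominated linear extensions of f(t, t) = 3t under p(x) = |x1| + 2|x2|.

def p(x1, x2):
    return abs(x1) + 2.0 * abs(x2)

def dominated(a, b, samples):
    """Spot-check F(x) = a*x1 + b*x2 <= p(x) on sample directions.  For this p,
    checking the four axis directions is in fact sufficient (|a| <= 1 and |b| <= 2)."""
    return all(a * x1 + b * x2 <= p(x1, x2) + 1e-12 for x1, x2 in samples)

if __name__ == "__main__":
    samples = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, 1), (2, -3), (-5, 4)]
    # Candidate extensions satisfy F(t, t) = (a + b) t = 3t, i.e. b = 3 - a.
    grid = [x / 100.0 for x in range(-300, 301)]
    found = [(a, 3.0 - a) for a in grid if dominated(a, 3.0 - a, samples)]
    print(found)  # only (1.0, 2.0) survives: a dominated extension exists (and is unique here)
```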
If formula_32 is linear then formula_51 if and only if formula_52 which is the (equivalent) conclusion that some authors write instead of formula_50 It follows that if formula_25 is also symmetric, meaning that formula_53 holds for all formula_54 then formula_51 if and only if formula_55 Every norm is a seminorm and both are symmetric balanced sublinear functions. A sublinear function is a seminorm if and only if it is a balanced function. On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. The identity function formula_56 on formula_57 is an example of a sublinear function that is not a seminorm. For complex or real vector spaces. The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Hahn–Banach theorem —  Suppose formula_25 is a seminorm on a vector space formula_5 over the field formula_58 which is either formula_59 or formula_60 If formula_61 is a linear functional on a vector subspace formula_24 such that formula_62 then there exists a linear functional formula_63 such that formula_64 formula_65 The theorem remains true if the requirements on formula_28 are relaxed to require only that for all formula_40 and all scalars formula_66 and formula_67 satisfying formula_68 formula_69 This condition holds if and only if formula_28 is a convex and balanced function satisfying formula_70 or equivalently, if and only if it is convex, satisfies formula_70 and formula_71 for all formula_7 and all unit length scalars formula_72 A complex-valued functional formula_73 is said to be dominated by formula_28 if formula_35 for all formula_45 in the domain of formula_74 With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly: Hahn–Banach dominated extension theorem: If formula_25 is a seminorm defined on a real or complex vector space formula_75 then every dominated linear functional defined on a vector subspace of formula_5 has a dominated linear extension to all of formula_76 In the case where formula_5 is a real vector space and formula_25 is merely a convex or sublinear function, this conclusion will remain true if both instances of "dominated" (meaning formula_77) are weakened to instead mean "dominated above" (meaning formula_51). Proof The following observations allow the Hahn–Banach theorem for real vector spaces to be applied to (complex-valued) linear functionals on complex vector spaces. Every linear functional formula_78 on a complex vector space is completely determined by its real part formula_79 through the formula formula_82 and moreover, if formula_83 is a norm on formula_5 then their dual norms are equal: formula_84 In particular, a linear functional on formula_5 extends another one defined on formula_85 if and only if their real parts are equal on formula_24 (in other words, a linear functional formula_73 extends formula_11 if and only if formula_86 extends formula_87).
The real part of a linear functional on formula_5 is always a &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;real-linear functional (meaning that it is linear when formula_5 is considered as a real vector space) and if formula_88 is a real-linear functional on a complex vector space then formula_89 defines the unique linear functional on formula_5 whose real part is formula_90 If formula_73 is a linear functional on a (complex or real) vector space formula_5 and if formula_25 is a seminorm then formula_93 Stated in simpler language, a linear functional is dominated by a seminorm formula_28 if and only if its real part is dominated above by formula_29 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof of Hahn–Banach for complex vector spaces by reduction to real vector spaces Suppose formula_25 is a seminorm on a complex vector space formula_5 and let formula_94 be a linear functional defined on a vector subspace formula_24 of formula_5 that satisfies formula_95 on formula_96 Consider formula_5 as a real vector space and apply the Hahn–Banach theorem for real vector spaces to the real-linear functional formula_97 to obtain a real-linear extension formula_88 that is also dominated above by formula_98 so that it satisfies formula_99 on formula_5 and formula_100 on formula_96 The map formula_78 defined by formula_101 is a linear functional on formula_5 that extends formula_11 (because their real parts agree on formula_24) and satisfies formula_77 on formula_5 (because formula_92 and formula_28 is a seminorm). formula_81 The proof above shows that when formula_28 is a seminorm then there is a one-to-one correspondence between dominated linear extensions of formula_94 and dominated real-linear extensions of formula_102 the proof even gives a formula for explicitly constructing a linear extension of formula_11 from any given real-linear extension of its real part. Continuity A linear functional formula_73 on a topological vector space is continuous if and only if this is true of its real part formula_103 if the domain is a normed space then formula_104 (where one side is infinite if and only if the other side is infinite). Assume formula_5 is a topological vector space and formula_25 is sublinear function. If formula_28 is a continuous sublinear function that dominates a linear functional formula_73 then formula_73 is necessarily continuous. Moreover, a linear functional formula_73 is continuous if and only if its absolute value formula_105 (which is a seminorm that dominates formula_73) is continuous. In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function. Proof. The Hahn–Banach theorem for real vector spaces ultimately follows from Helly's initial result for the special case where the linear functional is extended from formula_24 to a larger vector space in which formula_24 has codimension formula_106 &lt;templatestyles src="Math_theorem/styles.css" /&gt; Lemma (&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;One–dimensional dominated extension theorem) —  Let formula_25 be a sublinear function on a real vector space formula_75 let formula_23 a linear functional on a proper vector subspace formula_107 such that formula_108 on formula_24 (meaning formula_26 for all formula_109), and let formula_7 be a vector not in formula_24 (so formula_110). 
There exists a linear extension formula_111 of formula_11 such that formula_51 on formula_112 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Given any real number formula_113 the map formula_114 defined by formula_115 is always a linear extension of formula_11 to formula_116 but it might not satisfy formula_117 It will be shown that formula_67 can always be chosen so as to guarantee that formula_118 which will complete the proof. If formula_119 then formula_120 which implies formula_121 So define formula_122 where formula_123 are real numbers. To guarantee formula_118 it suffices that formula_124 (in fact, this is also necessary) because then formula_67 satisfies "the decisive inequality" formula_126 To see that formula_127 follows, assume formula_128 and substitute formula_129 in for both formula_130 and formula_131 to obtain formula_132 If formula_133 (respectively, if formula_134) then the right (respectively, the left) hand side equals formula_135 so that multiplying by formula_136 gives formula_137 formula_81 This lemma remains true if formula_25 is merely a convex function instead of a sublinear function. The lemma above is the key step in deducing the dominated extension theorem from Zorn's lemma. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof of dominated extension theorem using Zorn's lemma The set of all possible dominated linear extensions of formula_11 is partially ordered by extension of each other, so by Zorn's lemma there is a maximal extension formula_74 By the codimension-1 result, if formula_73 is not defined on all of formula_75 then it can be further extended. Thus formula_73 must be defined everywhere, as claimed. formula_81 When formula_24 has countable codimension, using induction and the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case uses Zorn's lemma, although the strictly weaker ultrafilter lemma (which is equivalent to the compactness theorem and to the Boolean prime ideal theorem) may be used instead. Hahn–Banach can also be proved using Tychonoff's theorem for compact Hausdorff spaces (which is also equivalent to the ultrafilter lemma). The Mizar project has completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file. Continuous extension theorem. The Hahn–Banach theorem can be used to guarantee the existence of continuous linear extensions of continuous linear functionals. &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Hahn–Banach continuous extension theorem —  Every continuous linear functional formula_11 defined on a vector subspace formula_24 of a (real or complex) locally convex topological vector space formula_5 has a continuous linear extension formula_73 to all of formula_76 If in addition formula_5 is a normed space, then this extension can be chosen so that its dual norm is equal to that of formula_140 In category-theoretic terms, the underlying field of the vector space is an injective object in the category of locally convex vector spaces. On a normed (or seminormed) space, a linear extension formula_73 of a bounded linear functional formula_11 is said to be norm-preserving if it has the same dual norm as the original functional: formula_141 Because of this terminology, the second part of the above theorem is sometimes referred to as the "norm-preserving" version of the Hahn–Banach theorem.
Explicitly: &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Norm-preserving Hahn–Banach continuous extension theorem — Every continuous linear functional formula_11 defined on a vector subspace formula_24 of a (real or complex) normed space formula_5 has a continuous linear extension formula_73 to all of formula_5 that satisfies formula_142 Proof of the continuous extension theorem. The following observations allow the continuous extension theorem to be deduced from the Hahn–Banach theorem. The absolute value of a linear functional is always a seminorm. A linear functional formula_73 on a topological vector space formula_5 is continuous if and only if its absolute value formula_105 is continuous, which happens if and only if there exists a continuous seminorm formula_28 on formula_5 such that formula_77 on the domain of formula_74 If formula_5 is a locally convex space then this statement remains true when the linear functional formula_73 is defined on a proper vector subspace of formula_76 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof of the continuous extension theorem for locally convex spaces Let formula_11 be a continuous linear functional defined on a vector subspace formula_24 of a locally convex topological vector space formula_76 Because formula_5 is locally convex, there exists a continuous seminorm formula_143 on formula_5 that dominates formula_11 (meaning that formula_144 for all formula_109). By the Hahn–Banach theorem, there exists a linear extension of formula_11 to formula_75 call it formula_145 that satisfies formula_77 on formula_76 This linear functional formula_73 is continuous since formula_77 and formula_28 is a continuous seminorm. Proof for normed spaces A linear functional formula_11 on a normed space is continuous if and only if it is bounded, which means that its dual norm formula_146 is finite, in which case formula_147 holds for every point formula_130 in its domain. Moreover, if formula_148 is such that formula_149 for all formula_130 in the functional's domain, then necessarily formula_150 If formula_73 is a linear extension of a linear functional formula_11 then their dual norms always satisfy formula_151 so that equality formula_152 is equivalent to formula_153 which holds if and only if formula_154 for every point formula_45 in the extension's domain. 
This can be restated in terms of the function formula_155 defined by formula_156 which is always a seminorm: A linear extension of a bounded linear functional formula_11 is norm-preserving if and only if the extension is dominated by the seminorm formula_159 Applying the Hahn–Banach theorem to formula_11 with this seminorm formula_157 thus produces a dominated linear extension whose norm is (necessarily) equal to that of formula_160 which proves the theorem: &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof of the norm-preserving Hahn–Banach continuous extension theorem Let formula_11 be a continuous linear functional defined on a vector subspace formula_24 of a normed space formula_76 Then the function formula_143 defined by formula_161 is a seminorm on formula_5 that dominates formula_160 meaning that formula_144 holds for every formula_27 By the Hahn–Banach theorem, there exists a linear functional formula_73 on formula_5 that extends formula_11 (which guarantees formula_151) and that is also dominated by formula_98 meaning that formula_35 for every formula_36 The fact that formula_158 is a real number such that formula_154 for every formula_54 guarantees formula_162 Since formula_163 is finite, the linear functional formula_73 is bounded and thus continuous. Non-locally convex spaces. The continuous extension theorem might fail if the topological vector space (TVS) formula_5 is not locally convex. For example, for formula_164 the Lebesgue space formula_2 is a complete metrizable TVS (an F-space) that is not locally convex (in fact, its only convex open subsets are itself formula_2 and the empty set) and the only continuous linear functional on formula_2 is the constant formula_165 function . Since formula_2 is Hausdorff, every finite-dimensional vector subspace formula_166 is linearly homeomorphic to Euclidean space formula_167 or formula_168 (by F. Riesz's theorem) and so every non-zero linear functional formula_11 on formula_24 is continuous but none has a continuous linear extension to all of formula_169 However, it is possible for a TVS formula_5 to not be locally convex but nevertheless have enough continuous linear functionals that its continuous dual space formula_170 separates points; for such a TVS, a continuous linear functional defined on a vector subspace might have a continuous linear extension to the whole space. If the TVS formula_5 is not locally convex then there might not exist any continuous seminorm formula_25 defined on formula_5 (not just on formula_24) that dominates formula_160 in which case the Hahn–Banach theorem can not be applied as it was in the above proof of the continuous extension theorem. 
However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: If formula_5 is any TVS (not necessarily locally convex), then a continuous linear functional formula_11 defined on a vector subspace formula_24 has a continuous linear extension formula_73 to all of formula_5 if and only if there exists some continuous seminorm formula_28 on formula_5 that dominates formula_140 Specifically, if given a continuous linear extension formula_73 then formula_171 is a continuous seminorm on formula_5 that dominates formula_172 and conversely, if given a continuous seminorm formula_143 on formula_5 that dominates formula_11 then any dominated linear extension of formula_11 to formula_5 (the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension. Geometric Hahn–Banach (the Hahn–Banach separation theorems). The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets: formula_173 and formula_174 This sort of argument appears widely in convex geometry, optimization theory, and economics. Lemmas to this end derived from the original Hahn–Banach theorem are known as the Hahn–Banach separation theorems. They are generalizations of the hyperplane separation theorem, which states that two disjoint nonempty convex subsets of a finite-dimensional space formula_175 can be separated by some affine hyperplane, which is a fiber (level set) of the form formula_176 where formula_177 is a non-zero linear functional and formula_178 is a scalar. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  Let formula_179 and formula_180 be non-empty convex subsets of a real locally convex topological vector space formula_76 If formula_181 and formula_182 then there exists a continuous linear functional formula_11 on formula_5 such that formula_183 and formula_184 for all formula_185 (such an formula_11 is necessarily non-zero). When the convex sets have additional properties, such as being open or compact for example, then the conclusion can be substantially strengthened: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  Let formula_179 and formula_180 be convex non-empty disjoint subsets of a real topological vector space formula_76 If formula_5 is complex (rather than real) then the same claims hold, but for the real part of formula_140 Then following important corollary is known as the Geometric Hahn–Banach theorem or Mazur's theorem (also known as Ascoli–Mazur theorem). It follows from the first bullet above and the convexity of formula_96 &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (Mazur) —  Let formula_24 be a vector subspace of the topological vector space formula_5 and suppose formula_192 is a non-empty convex open subset of formula_5 with formula_193 Then there is a closed hyperplane (codimension-1 vector subspace) formula_194 that contains formula_138 but remains disjoint from formula_195 Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals. 
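For finite-dimensional intuition, the separation statement can also be checked numerically. The sketch below is an illustrative construction (the point sets, the unit margin and the use of SciPy's linear-programming routine are choices made here, not part of the theorem): it finds a non-zero functional "w" and a scalar "s" such that the supremum of w·x over the first set lies strictly below "s" and the infimum over the second set lies strictly above it. Because the two convex hulls are compact, convex and disjoint, they can be strictly separated, which is what makes the feasibility problem below solvable.

```python
# Separating two finite point sets in the plane (hence their convex hulls) by an
# affine hyperplane {x : w.x = s}, via a feasibility linear program.
import numpy as np
from scipy.optimize import linprog

A_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # convex hull: a triangle
B_pts = np.array([[2.0, 2.0], [3.0, 2.5], [2.5, 3.0]])   # hull lies away from the triangle

# Unknowns x = (w1, w2, s).  Constraints: w.a <= s - 1 for a in A and w.b >= s + 1 for b in B,
# written in the linprog form A_ub @ x <= b_ub; the objective is irrelevant (pure feasibility).
rows, rhs = [], []
for a in A_pts:
    rows.append([a[0], a[1], -1.0]); rhs.append(-1.0)     #  w.a - s <= -1
for b in B_pts:
    rows.append([-b[0], -b[1], 1.0]); rhs.append(-1.0)    # -w.b + s <= -1
res = linprog(c=[0.0, 0.0, 0.0], A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(None, None)] * 3, method="highs")
assert res.success
w, s = res.x[:2], res.x[2]
print("functional w =", w, " level s =", s)
print("sup over A:", max(A_pts @ w), "  inf over B:", min(B_pts @ w))
# By construction sup_A w.a <= s - 1 < s < s + 1 <= inf_B w.b, so {x : w.x = s}
# strictly separates the two convex hulls.
```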
&lt;templatestyles src="Math_theorem/styles.css" /&gt; Corollary (Separation of a subspace and an open convex set) —  Let formula_24 be a vector subspace of a locally convex topological vector space formula_75 and formula_196 be a non-empty open convex subset disjoint from formula_96 Then there exists a continuous linear functional formula_11 on formula_5 such that formula_197 for all formula_109 and formula_198 on formula_199 Supporting hyperplanes. Since points are trivially convex, geometric Hahn–Banach implies that functionals can detect the boundary of a set. In particular, let formula_5 be a real topological vector space and formula_200 be convex with formula_201 If formula_202 then there is a functional that is vanishing at formula_203 but supported on the interior of formula_204 Call a normed space formula_5 smooth if at each point formula_45 in its unit ball there exists a unique closed hyperplane to the unit ball at formula_125 Köthe showed in 1983 that a normed space is smooth at a point formula_45 if and only if the norm is Gateaux differentiable at that point. Balanced or disked neighborhoods. Let formula_196 be a convex balanced neighborhood of the origin in a locally convex topological vector space formula_5 and suppose formula_7 is not an element of formula_199 Then there exists a continuous linear functional formula_11 on formula_5 such that formula_205 Applications. The Hahn–Banach theorem is the first sign of an important philosophy in functional analysis: to understand a space, one should understand its continuous functionals. For example, linear subspaces are characterized by functionals: if X is a normed vector space with linear subspace M (not necessarily closed) and if formula_80 is an element of X not in the closure of M, then there exists a continuous linear map formula_186 with formula_197 for all formula_206 formula_207 and formula_208 (To see this, note that formula_209 is a sublinear function.) Moreover, if formula_80 is an element of X, then there exists a continuous linear map formula_186 such that formula_210 and formula_211 This implies that the natural injection formula_212 from a normed space X into its double dual formula_213 is isometric. That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space is Hausdorff or locally convex. However, suppose X is a topological vector space, not necessarily Hausdorff or locally convex, but with a nonempty, proper, convex, open set M. Then geometric Hahn–Banach implies that there is a hyperplane separating M from any other point. In particular, there must exist a nonzero functional on X — that is, the continuous dual space formula_170 is non-trivial. Considering X with the weak topology induced by formula_214 then X becomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new space separates points. Thus X with this weak topology becomes Hausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces. Partial differential equations. The Hahn–Banach theorem is often useful when one wishes to apply the method of a priori estimates. Suppose that we wish to solve the linear differential equation formula_215 for formula_216 with formula_11 given in some Banach space X. 
If we have control on the size of formula_91 in terms of formula_217 and we can think of formula_91 as a bounded linear functional on some suitable space of test functions formula_218 then we can view formula_11 as a linear functional by adjunction: formula_219 At first, this functional is only defined on the image of formula_220 but using the Hahn–Banach theorem, we can try to extend it to the entire codomain X. The resulting functional is often defined to be a weak solution to the equation. Characterizing reflexive Banach spaces. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  A real Banach space is reflexive if and only if every pair of non-empty disjoint closed convex subsets, one of which is bounded, can be strictly separated by a hyperplane. Example from Fredholm theory. To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Proposition —  Suppose formula_5 is a Hausdorff locally convex TVS over the field formula_221 and formula_222 is a vector subspace of formula_5 that is TVS–isomorphic to formula_223 for some set formula_224 Then formula_222 is a closed and complemented vector subspace of formula_76 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Since formula_223 is a complete TVS so is formula_225 and since any complete subset of a Hausdorff TVS is closed, formula_222 is a closed subset of formula_76 Let formula_226 be a TVS isomorphism, so that each formula_227 is a continuous surjective linear functional. By the Hahn–Banach theorem, we may extend each formula_228 to a continuous linear functional formula_229 on formula_76 Let formula_230 so formula_73 is a continuous linear surjection such that its restriction to formula_222 is formula_231 Let formula_232 which is a continuous linear map whose restriction to formula_222 is formula_233 where formula_234 denotes the identity map on formula_235 This shows that formula_236 is a continuous linear projection onto formula_222 (that is, formula_237). Thus formula_222 is complemented in formula_5 and formula_238 in the category of TVSs. formula_81 The above result may be used to show that every closed vector subspace of formula_239 is complemented because any such space is either finite dimensional or else TVS–isomorphic to formula_240 Generalizations. General template There are now many other versions of the Hahn–Banach theorem. The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: formula_25 is a sublinear function (possibly a seminorm) on a vector space formula_75 formula_24 is a vector subspace of formula_5 (possibly closed), and formula_11 is a linear functional on formula_24 satisfying formula_95 on formula_24 (and possibly some other conditions). One then concludes that there exists a linear extension formula_73 of formula_11 to formula_5 such that formula_77 on formula_5 (possibly with additional properties). &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  If formula_241 is an absorbing disk in a real or complex vector space formula_5 and if formula_11 be a linear functional defined on a vector subspace formula_24 of formula_5 such that formula_242 on formula_243 then there exists a linear functional formula_73 on formula_5 extending formula_11 such that formula_244 on formula_245 For seminorms. 
&lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Hahn–Banach theorem for seminorms —  If formula_246 is a seminorm defined on a vector subspace formula_24 of formula_75 and if formula_247 is a seminorm on formula_5 such that formula_248 then there exists a seminorm formula_249 on formula_5 such that formula_250 on formula_24 and formula_251 on formula_76 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof of the Hahn–Banach theorem for seminorms Let formula_252 be the convex hull of formula_253 Because formula_252 is an absorbing disk in formula_75 its Minkowski functional formula_236 is a seminorm. Then formula_254 on formula_24 and formula_251 on formula_76 So for example, suppose that formula_11 is a bounded linear functional defined on a vector subspace formula_24 of a normed space formula_75 so its the operator norm formula_158 is a non-negative real number. Then the linear functional's absolute value formula_255 is a seminorm on formula_24 and the map formula_247 defined by formula_256 is a seminorm on formula_5 that satisfies formula_257 on formula_96 The Hahn–Banach theorem for seminorms guarantees the existence of a seminorm formula_249 that is equal to formula_258 on formula_24 (since formula_259) and is bounded above by formula_260 everywhere on formula_5 (since formula_251). Geometric separation. &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Hahn–Banach sandwich theorem — Let formula_25 be a sublinear function on a real vector space formula_75 let formula_261 be any subset of formula_75 and let formula_262 be any map. If there exist positive real numbers formula_66 and formula_67 such that formula_263 then there exists a linear functional formula_32 on formula_5 such that formula_51 on formula_5 and formula_264 on formula_265 Maximal dominated linear extension. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (Andenaes, 1970) — Let formula_25 be a sublinear function on a real vector space formula_75 let formula_23 be a linear functional on a vector subspace formula_24 of formula_5 such that formula_108 on formula_138 and let formula_261 be any subset of formula_76 Then there exists a linear functional formula_32 on formula_5 that extends formula_160 satisfies formula_51 on formula_75 and is (pointwise) maximal on formula_252 in the following sense: if formula_266 is a linear functional on formula_5 that extends formula_11 and satisfies formula_267 on formula_75 then formula_268 on formula_252 implies formula_269 on formula_265 If formula_270 is a singleton set (where formula_271 is some vector) and if formula_32 is such a maximal dominated linear extension of formula_139 then formula_272 Vector valued Hahn–Banach. &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Vector–valued Hahn–Banach theorem — If formula_5 and formula_222 are vector spaces over the same field and if formula_273 is a linear map defined on a vector subspace formula_24 of formula_75 then there exists a linear map formula_274 that extends formula_140 Invariant Hahn–Banach. 
A set formula_275 of maps formula_276 is (with respect to function composition formula_277) if formula_278 for all formula_279 Say that a function formula_11 defined on a subset formula_24 of formula_5 is if formula_280 and formula_281 on formula_24 for every formula_282 &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;An invariant Hahn–Banach theorem —  Suppose formula_275 is a commutative set of continuous linear maps from a normed space formula_5 into itself and let formula_11 be a continuous linear functional defined some vector subspace formula_24 of formula_5 that is formula_275-invariant, which means that formula_280 and formula_281 on formula_24 for every formula_282 Then formula_11 has a continuous linear extension formula_73 to all of formula_5 that has the same operator norm formula_152 and is also formula_275-invariant, meaning that formula_283 on formula_5 for every formula_282 This theorem may be summarized: Every formula_275-invariant continuous linear functional defined on a vector subspace of a normed space formula_5 has a formula_275-invariant Hahn–Banach extension to all of formula_76 For nonlinear functions. The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem. &lt;templatestyles src="Math_theorem/styles.css" /&gt; &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Mazur–Orlicz theorem —  Let formula_25 be a sublinear function on a real or complex vector space formula_75 let formula_284 be any set, and let formula_285 and formula_286 be any maps. The following statements are equivalent: The following theorem characterizes when any scalar function on formula_5 (not necessarily linear) has a continuous linear extension to all of formula_76 &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;The extension principle) —  Let formula_11 a scalar-valued function on a subset formula_252 of a topological vector space formula_76 Then there exists a continuous linear functional formula_73 on formula_5 extending formula_11 if and only if there exists a continuous seminorm formula_28 on formula_5 such that formula_293 for all positive integers formula_131 and all finite sequences formula_294 of scalars and elements formula_288 of formula_265 Converse. Let X be a topological vector space. A vector subspace M of X has the extension property if any continuous linear functional on M can be extended to a continuous linear functional on X, and we say that X has the Hahn–Banach extension property (HBEP) if every vector subspace of X has the extension property. The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable topological vector spaces there is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex. On the other hand, a vector space X of uncountable dimension, endowed with the finest vector topology, then this is a topological vector spaces with the Hahn–Banach extension property that is neither locally convex nor metrizable. A vector subspace M of a TVS X has the separation property if for every element of X such that formula_295 there exists a continuous linear functional formula_11 on X such that formula_296 and formula_197 for all formula_27 Clearly, the continuous dual space of a TVS X separates points on X if and only if formula_297 has the separation property. 
In 1992, Kakol proved that any infinite dimensional vector space X, there exist TVS-topologies on X that do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points on X. However, if X is a TVS then every vector subspace of X has the extension property if and only if every vector subspace of X has the separation property. Relation to axiom of choice and other theorems. The proof of the Hahn–Banach theorem for real vector spaces (HB) commonly uses Zorn's lemma, which in the axiomatic framework of Zermelo–Fraenkel set theory (ZF) is equivalent to the axiom of choice (AC). It was discovered by Łoś and Ryll-Nardzewski and independently by Luxemburg that HB can be proved using the ultrafilter lemma (UL), which is equivalent (under ZF) to the Boolean prime ideal theorem (BPI). BPI is strictly weaker than the axiom of choice and it was later shown that HB is strictly weaker than BPI. The ultrafilter lemma is equivalent (under ZF) to the Banach–Alaoglu theorem, which is another foundational theorem in functional analysis. Although the Banach–Alaoglu theorem implies HB, it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB). However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces. The Hahn–Banach theorem is also equivalent to the following statement: (∗): On every Boolean algebra B there exists a "probability charge", that is: a non-constant finitely additive map from formula_180 into formula_298 In ZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set. Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox. For separable Banach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Proofs &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C[a, b]" }, { "math_id": 1, "text": "\\Complex^{\\N}" }, { "math_id": 2, "text": "L^p([0, 1])" }, { "math_id": 3, "text": "C([a, b])" }, { "math_id": 4, "text": "\\left(f_i\\right)_{i \\in I}" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\left(c_i\\right)_{i \\in I}," }, { "math_id": 7, "text": "x \\in X" }, { "math_id": 8, "text": "f_i(x) = c_i" }, { "math_id": 9, "text": "i \\in I." }, { "math_id": 10, "text": "\\left(x_i\\right)_{i \\in I}" }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": "f\\left(x_i\\right) = c_i" }, { "math_id": 13, "text": "1 < p < \\infty" }, { "math_id": 14, "text": "\\ell^p" }, { "math_id": 15, "text": "\\left(c_i\\right)_{i \\in I}" }, { "math_id": 16, "text": "I \\neq \\varnothing." }, { "math_id": 17, "text": "i \\in I" }, { "math_id": 18, "text": "K > 0" }, { "math_id": 19, "text": "\\left(s_i\\right)_{i \\in I}" }, { "math_id": 20, "text": "s_i" }, { "math_id": 21, "text": "0," }, { "math_id": 22, "text": "\\left|\\sum_{i \\in I} s_i c_i\\right| \\leq K \\left\\|\\sum_{i \\in I} s_i x_i\\right\\|." }, { "math_id": 23, "text": "f : M \\to \\R" }, { "math_id": 24, "text": "M" }, { "math_id": 25, "text": "p : X \\to \\R" }, { "math_id": 26, "text": "f(m) \\leq p(m)" }, { "math_id": 27, "text": "m \\in M." }, { "math_id": 28, "text": "p" }, { "math_id": 29, "text": "p." }, { "math_id": 30, "text": "p(x + y) \\leq p(x) + p(y) \\quad \\text{ and } \\quad p(t x) = t p(x) \\qquad \\text{ for all } \\; x, y \\in X \\; \\text{ and all real } \\; t \\geq 0," }, { "math_id": 31, "text": "f(m) \\leq p(m) \\quad \\text{ for all } m \\in M" }, { "math_id": 32, "text": "F : X \\to \\R" }, { "math_id": 33, "text": "F(m) = f(m) \\quad \\text{ for all } m \\in M," }, { "math_id": 34, "text": "F(x) \\leq p(x) \\quad ~\\;\\, \\text{ for all } x \\in X." }, { "math_id": 35, "text": "|F(x)| \\leq p(x)" }, { "math_id": 36, "text": "x \\in X." }, { "math_id": 37, "text": "p(t x + (1 - t) y) \\leq t p(x) + (1 - t) p(y) \\qquad \\text{ for all } 0 < t < 1 \\text{ and } x, y \\in X." }, { "math_id": 38, "text": "p(0) \\leq 0" }, { "math_id": 39, "text": "p(a x + b y) \\leq a p(x) + b p(y)" }, { "math_id": 40, "text": "x, y \\in X" }, { "math_id": 41, "text": "a, b \\geq 0" }, { "math_id": 42, "text": "a + b \\leq 1." }, { "math_id": 43, "text": "p(0) \\geq 0," }, { "math_id": 44, "text": "p_0(x) \\;\\stackrel{\\scriptscriptstyle\\text{def}}{=}\\; \\inf_{t > 0} \\frac{p(tx)}{t}" }, { "math_id": 45, "text": "x" }, { "math_id": 46, "text": "r>0" }, { "math_id": 47, "text": "p_0(rx)=\\inf_{t > 0} \\frac{p(trx)}{t} ) =r\\inf_{t > 0} \\frac{p(trx)}{tr} = r\\inf_{\\tau > 0} \\frac{p(\\tau x)}{\\tau}=rp_0(x)" }, { "math_id": 48, "text": "p_0 \\leq p," }, { "math_id": 49, "text": "F \\leq p_0" }, { "math_id": 50, "text": "F \\leq p." }, { "math_id": 51, "text": "F \\leq p" }, { "math_id": 52, "text": "-p(-x) \\leq F(x) \\leq p(x) \\quad \\text{ for all } x \\in X," }, { "math_id": 53, "text": "p(-x) = p(x)" }, { "math_id": 54, "text": "x \\in X," }, { "math_id": 55, "text": "|F| \\leq p." }, { "math_id": 56, "text": "\\R \\to \\R" }, { "math_id": 57, "text": "X := \\R" }, { "math_id": 58, "text": "\\mathbf{K}," }, { "math_id": 59, "text": "\\R" }, { "math_id": 60, "text": "\\Complex." 
}, { "math_id": 61, "text": "f : M \\to \\mathbf{K}" }, { "math_id": 62, "text": "|f(m)| \\leq p(m) \\quad \\text{ for all } m \\in M," }, { "math_id": 63, "text": "F : X \\to \\mathbf{K}" }, { "math_id": 64, "text": "F(m) = f(m) \\quad \\; \\text{ for all } m \\in M," }, { "math_id": 65, "text": "|F(x)| \\leq p(x) \\quad \\;\\, \\text{ for all } x \\in X." }, { "math_id": 66, "text": "a" }, { "math_id": 67, "text": "b" }, { "math_id": 68, "text": "|a| + |b| \\leq 1," }, { "math_id": 69, "text": "p(a x + b y) \\leq |a| p(x) + |b| p(y)." }, { "math_id": 70, "text": "p(0) \\leq 0," }, { "math_id": 71, "text": "p(u x) \\leq p(x)" }, { "math_id": 72, "text": "u." }, { "math_id": 73, "text": "F" }, { "math_id": 74, "text": "F." }, { "math_id": 75, "text": "X," }, { "math_id": 76, "text": "X." }, { "math_id": 77, "text": "|F| \\leq p" }, { "math_id": 78, "text": "F : X \\to \\Complex" }, { "math_id": 79, "text": "\\; \\operatorname{Re} F : X \\to \\R \\;" }, { "math_id": 80, "text": "z" }, { "math_id": 81, "text": "\\blacksquare" }, { "math_id": 82, "text": "F(x) \\;=\\; \\operatorname{Re} F(x) - i \\operatorname{Re} F(i x) \\qquad \\text{ for all } x \\in X" }, { "math_id": 83, "text": "\\|\\cdot\\|" }, { "math_id": 84, "text": "\\|F\\| = \\|\\operatorname{Re} F\\|." }, { "math_id": 85, "text": "M \\subseteq X" }, { "math_id": 86, "text": "\\operatorname{Re} F" }, { "math_id": 87, "text": "\\operatorname{Re} f" }, { "math_id": 88, "text": "R : X \\to \\R" }, { "math_id": 89, "text": "x \\mapsto R(x) - i R(i x)" }, { "math_id": 90, "text": "R." }, { "math_id": 91, "text": "u" }, { "math_id": 92, "text": "\\operatorname{Re} F \\leq p" }, { "math_id": 93, "text": "|F| \\,\\leq\\, p \\quad \\text{ if and only if } \\quad \\operatorname{Re} F \\,\\leq\\, p." }, { "math_id": 94, "text": "f : M \\to \\Complex" }, { "math_id": 95, "text": "|f| \\leq p" }, { "math_id": 96, "text": "M." }, { "math_id": 97, "text": "\\; \\operatorname{Re} f : M \\to \\R \\;" }, { "math_id": 98, "text": "p," }, { "math_id": 99, "text": "R \\leq p" }, { "math_id": 100, "text": "R = \\operatorname{Re} f" }, { "math_id": 101, "text": "F(x) \\;=\\; R(x) - i R(i x)" }, { "math_id": 102, "text": "\\operatorname{Re} f : M \\to \\R;" }, { "math_id": 103, "text": "\\operatorname{Re} F;" }, { "math_id": 104, "text": "\\|F\\| = \\|\\operatorname{Re} F\\|" }, { "math_id": 105, "text": "|F|" }, { "math_id": 106, "text": "1." }, { "math_id": 107, "text": "M \\subsetneq X" }, { "math_id": 108, "text": "f \\leq p" }, { "math_id": 109, "text": "m \\in M" }, { "math_id": 110, "text": "M \\oplus \\R x = \\operatorname{span} \\{M, x\\}" }, { "math_id": 111, "text": "F : M \\oplus \\R x \\to \\R" }, { "math_id": 112, "text": "M \\oplus \\R x." }, { "math_id": 113, "text": "b," }, { "math_id": 114, "text": "F_b : M \\oplus \\R x \\to \\R" }, { "math_id": 115, "text": "F_b(m + r x) = f(m) + r b" }, { "math_id": 116, "text": "M \\oplus \\R x" }, { "math_id": 117, "text": "F_b \\leq p." }, { "math_id": 118, "text": "F_b \\leq p," }, { "math_id": 119, "text": "m, n \\in M" }, { "math_id": 120, "text": "f(m) - f(n) = f(m - n) \\leq p(m - n) = p(m + x - x - n) \\leq p(m + x) + p(- x - n)" }, { "math_id": 121, "text": "-p(-n - x) - f(n) ~\\leq~ p(m + x) - f(m)." }, { "math_id": 122, "text": "a = \\sup_{n \\in M}[-p(-n - x) - f(n)] \\qquad \\text{ and } \\qquad c = \\inf_{m \\in M} [p(m + x) - f(m)]" }, { "math_id": 123, "text": "a \\leq c" }, { "math_id": 124, "text": "a \\leq b \\leq c" }, { "math_id": 125, "text": "x." 
}, { "math_id": 126, "text": "-p(-n - x) - f(n) ~\\leq~ b ~\\leq~ p(m + x) - f(m) \\qquad \\text{ for all }\\; m, n \\in M." }, { "math_id": 127, "text": "f(m) + r b \\leq p(m + r x)" }, { "math_id": 128, "text": "r \\neq 0" }, { "math_id": 129, "text": "\\tfrac{1}{r} m" }, { "math_id": 130, "text": "m" }, { "math_id": 131, "text": "n" }, { "math_id": 132, "text": "-p\\left(- \\tfrac{1}{r} m - x\\right) - \\tfrac{1}{r} f\\left(m\\right) ~\\leq~ b ~\\leq~ p\\left(\\tfrac{1}{r} m + x\\right) - \\tfrac{1}{r} f\\left(m\\right)." }, { "math_id": 133, "text": "r > 0" }, { "math_id": 134, "text": "r < 0" }, { "math_id": 135, "text": "\\tfrac{1}{r} \\left[p(m + r x) - f(m)\\right]" }, { "math_id": 136, "text": "r" }, { "math_id": 137, "text": "r b \\leq p(m + r x) - f(m)." }, { "math_id": 138, "text": "M," }, { "math_id": 139, "text": "f : M \\to \\R," }, { "math_id": 140, "text": "f." }, { "math_id": 141, "text": "\\|F\\| = \\|f\\|." }, { "math_id": 142, "text": "\\|f\\| = \\|F\\|." }, { "math_id": 143, "text": "p : X \\to \\Reals" }, { "math_id": 144, "text": "|f(m)| \\leq p(m)" }, { "math_id": 145, "text": "F," }, { "math_id": 146, "text": "\\|f\\| = \\sup \\{|f(m)| : \\|m\\| \\leq 1, m \\in \\operatorname{domain} f\\}" }, { "math_id": 147, "text": "|f(m)| \\leq \\|f\\| \\|m\\|" }, { "math_id": 148, "text": "c \\geq 0" }, { "math_id": 149, "text": "|f(m)| \\leq c \\|m\\|" }, { "math_id": 150, "text": "\\|f\\| \\leq c." }, { "math_id": 151, "text": "\\|f\\| \\leq \\|F\\|" }, { "math_id": 152, "text": "\\|f\\| = \\|F\\|" }, { "math_id": 153, "text": "\\|F\\| \\leq \\|f\\|," }, { "math_id": 154, "text": "|F(x)| \\leq \\|f\\| \\|x\\|" }, { "math_id": 155, "text": "\\|f\\| \\, \\|\\cdot\\| : X \\to \\Reals" }, { "math_id": 156, "text": "x \\mapsto \\|f\\| \\, \\|x\\|," }, { "math_id": 157, "text": "\\|f\\| \\, \\|\\cdot\\|" }, { "math_id": 158, "text": "\\|f\\|" }, { "math_id": 159, "text": "\\|f\\| \\, \\|\\cdot\\|." }, { "math_id": 160, "text": "f," }, { "math_id": 161, "text": "p(x) = \\|f\\| \\, \\|x\\|" }, { "math_id": 162, "text": "\\|F\\| \\leq \\|f\\|." }, { "math_id": 163, "text": "\\|F\\| = \\|f\\|" }, { "math_id": 164, "text": "0 < p < 1," }, { "math_id": 165, "text": "0" }, { "math_id": 166, "text": "M \\subseteq L^p([0, 1])" }, { "math_id": 167, "text": "\\Reals^{\\dim M}" }, { "math_id": 168, "text": "\\Complex^{\\dim M}" }, { "math_id": 169, "text": "L^p([0, 1])." }, { "math_id": 170, "text": "X^*" }, { "math_id": 171, "text": "p := |F|" }, { "math_id": 172, "text": "f;" }, { "math_id": 173, "text": "\\{-p(- x - n) - f(n) : n \\in M\\}," }, { "math_id": 174, "text": "\\{p(m + x) - f(m) : m \\in M\\}." }, { "math_id": 175, "text": "\\R^n" }, { "math_id": 176, "text": "f^{-1}(s) = \\{x : f(x) = s\\}" }, { "math_id": 177, "text": "f \\neq 0" }, { "math_id": 178, "text": "s" }, { "math_id": 179, "text": "A" }, { "math_id": 180, "text": "B" }, { "math_id": 181, "text": "\\operatorname{Int} A \\neq \\varnothing" }, { "math_id": 182, "text": "B \\cap \\operatorname{Int} A = \\varnothing" }, { "math_id": 183, "text": "\\sup f(A) \\leq \\inf f(B)" }, { "math_id": 184, "text": "f(a) < \\inf f(B)" }, { "math_id": 185, "text": "a \\in \\operatorname{Int} A" }, { "math_id": 186, "text": "f : X \\to \\mathbf{K}" }, { "math_id": 187, "text": "s \\in \\R" }, { "math_id": 188, "text": "f(a) < s \\leq f(b)" }, { "math_id": 189, "text": "a \\in A, b \\in B." 
}, { "math_id": 190, "text": "s, t \\in \\R" }, { "math_id": 191, "text": "f(a) < t < s < f(b)" }, { "math_id": 192, "text": "K" }, { "math_id": 193, "text": "K \\cap M = \\varnothing." }, { "math_id": 194, "text": "N \\subseteq X" }, { "math_id": 195, "text": "K." }, { "math_id": 196, "text": "U" }, { "math_id": 197, "text": "f(m) = 0" }, { "math_id": 198, "text": "\\operatorname{Re} f > 0" }, { "math_id": 199, "text": "U." }, { "math_id": 200, "text": "A \\subseteq X" }, { "math_id": 201, "text": "\\operatorname{Int} A \\neq \\varnothing." }, { "math_id": 202, "text": "a_0 \\in A \\setminus \\operatorname{Int} A" }, { "math_id": 203, "text": "a_0," }, { "math_id": 204, "text": "A." }, { "math_id": 205, "text": "\\sup |f(U)| \\leq |f(x)|." }, { "math_id": 206, "text": "m \\in M," }, { "math_id": 207, "text": "f(z) = 1," }, { "math_id": 208, "text": "\\|f\\| = \\operatorname{dist}(z, M)^{-1}." }, { "math_id": 209, "text": "\\operatorname{dist}(\\cdot, M)" }, { "math_id": 210, "text": "f(z) = \\|z\\|" }, { "math_id": 211, "text": "\\|f\\| \\leq 1." }, { "math_id": 212, "text": "J" }, { "math_id": 213, "text": "V^{**}" }, { "math_id": 214, "text": "X^*," }, { "math_id": 215, "text": "P u = f" }, { "math_id": 216, "text": "u," }, { "math_id": 217, "text": "\\|f\\|_X" }, { "math_id": 218, "text": "g," }, { "math_id": 219, "text": "(f, g) = (u, P^*g)." }, { "math_id": 220, "text": "P," }, { "math_id": 221, "text": "\\mathbf{K}" }, { "math_id": 222, "text": "Y" }, { "math_id": 223, "text": "\\mathbf{K}^I" }, { "math_id": 224, "text": "I." }, { "math_id": 225, "text": "Y," }, { "math_id": 226, "text": "f = \\left(f_i\\right)_{i \\in I} : Y \\to \\mathbf{K}^I" }, { "math_id": 227, "text": "f_i : Y \\to \\mathbf{K}" }, { "math_id": 228, "text": "f_i" }, { "math_id": 229, "text": "F_i : X \\to \\mathbf{K}" }, { "math_id": 230, "text": "F := \\left(F_i\\right)_{i \\in I} : X \\to \\mathbf{K}^I" }, { "math_id": 231, "text": "F\\big\\vert_Y = \\left(F_i\\big\\vert_Y\\right)_{i \\in I} = \\left(f_i\\right)_{i \\in I} = f." }, { "math_id": 232, "text": "P := f^{-1} \\circ F : X \\to Y," }, { "math_id": 233, "text": "P\\big\\vert_Y = f^{-1} \\circ F\\big\\vert_Y = f^{-1} \\circ f = \\mathbf{1}_Y," }, { "math_id": 234, "text": "\\mathbb{1}_Y" }, { "math_id": 235, "text": "Y." }, { "math_id": 236, "text": "P" }, { "math_id": 237, "text": "P \\circ P = P" }, { "math_id": 238, "text": "X = Y \\oplus \\ker P" }, { "math_id": 239, "text": "\\R^{\\N}" }, { "math_id": 240, "text": "\\R^{\\N}." }, { "math_id": 241, "text": "D" }, { "math_id": 242, "text": "|f| \\leq 1" }, { "math_id": 243, "text": "M \\cap D," }, { "math_id": 244, "text": "|F| \\leq 1" }, { "math_id": 245, "text": "D." }, { "math_id": 246, "text": "p : M \\to \\Reals" }, { "math_id": 247, "text": "q : X \\to \\Reals" }, { "math_id": 248, "text": "p \\leq q\\big\\vert_M," }, { "math_id": 249, "text": "P : X \\to \\Reals" }, { "math_id": 250, "text": "P\\big\\vert_M = p" }, { "math_id": 251, "text": "P \\leq q" }, { "math_id": 252, "text": "S" }, { "math_id": 253, "text": "\\{m \\in M : p(m) \\leq 1\\} \\cup \\{x \\in X : q(x) \\leq 1\\}." 
}, { "math_id": 254, "text": "p = P" }, { "math_id": 255, "text": "p := |f|" }, { "math_id": 256, "text": "q(x) = \\|f\\| \\, \\|x\\|" }, { "math_id": 257, "text": "p \\leq q\\big\\vert_M" }, { "math_id": 258, "text": "|f|" }, { "math_id": 259, "text": "P\\big\\vert_M = p = |f|" }, { "math_id": 260, "text": "P(x) \\leq \\|f\\| \\, \\|x\\|" }, { "math_id": 261, "text": "S \\subseteq X" }, { "math_id": 262, "text": "f : S \\to \\R" }, { "math_id": 263, "text": "0 \\geq \\inf_{s \\in S} [p(s - a x - b y) - f(s) - a f(x) - b f(y)] \\qquad \\text{ for all } x, y \\in S," }, { "math_id": 264, "text": "f \\leq F \\leq p" }, { "math_id": 265, "text": "S." }, { "math_id": 266, "text": "\\widehat{F} : X \\to \\R" }, { "math_id": 267, "text": "\\widehat{F} \\leq p" }, { "math_id": 268, "text": "F \\leq \\widehat{F}" }, { "math_id": 269, "text": "F = \\widehat{F}" }, { "math_id": 270, "text": "S = \\{s\\}" }, { "math_id": 271, "text": "s \\in X" }, { "math_id": 272, "text": "F(s) = \\inf_{m \\in M} [f(s) + p(s - m)]." }, { "math_id": 273, "text": "f : M \\to Y" }, { "math_id": 274, "text": "F : X \\to Y" }, { "math_id": 275, "text": "\\Gamma" }, { "math_id": 276, "text": "X \\to X" }, { "math_id": 277, "text": "\\,\\circ\\," }, { "math_id": 278, "text": "F \\circ G = G \\circ F" }, { "math_id": 279, "text": "F, G \\in \\Gamma." }, { "math_id": 280, "text": "L(M) \\subseteq M" }, { "math_id": 281, "text": "f \\circ L = f" }, { "math_id": 282, "text": "L \\in \\Gamma." }, { "math_id": 283, "text": "F \\circ L = F" }, { "math_id": 284, "text": "T" }, { "math_id": 285, "text": "R : T \\to \\R" }, { "math_id": 286, "text": "v : T \\to X" }, { "math_id": 287, "text": "R \\leq F \\circ v" }, { "math_id": 288, "text": "s_1, \\ldots, s_n" }, { "math_id": 289, "text": "n > 0" }, { "math_id": 290, "text": "t_1, \\ldots, t_n \\in T" }, { "math_id": 291, "text": "T," }, { "math_id": 292, "text": "\\sum_{i=1}^n s_i R\\left(t_i\\right) \\leq p\\left(\\sum_{i=1}^n s_i v\\left(t_i\\right)\\right)." }, { "math_id": 293, "text": "\\left|\\sum_{i=1}^n a_i f(s_i)\\right| \\leq p\\left(\\sum_{i=1}^n a_is_i\\right)" }, { "math_id": 294, "text": "a_1, \\ldots, a_n" }, { "math_id": 295, "text": "x \\not\\in M," }, { "math_id": 296, "text": "f(x) \\neq 0" }, { "math_id": 297, "text": "\\{0\\}," }, { "math_id": 298, "text": "[0, 1]." } ]
https://en.wikipedia.org/wiki?curid=13860
13860656
Extensional viscosity
Polymer solution parameter Extensional viscosity (also known as elongational viscosity) is the viscosity coefficient that applies when the applied stress is an extensional stress. It is often used for characterizing polymer solutions. Extensional viscosity can be measured using rheometers that apply extensional stress; an acoustic rheometer is one example of such a device. Extensional viscosity is defined as the ratio of the normal stress difference to the rate of strain. For uniaxial extension along direction formula_0: formula_1 where formula_2 is the extensional viscosity or elongational viscosity, formula_3 is the normal stress along direction n, and formula_4 is the rate of strain: formula_5 The ratio between the extensional viscosity formula_6 and the dynamic viscosity formula_7 is known as the Trouton ratio, formula_8. For a Newtonian fluid, the Trouton ratio equals three. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
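A minimal numerical sketch of the defining ratio and of the Trouton ratio (the function name and the sample stress, rate and viscosity values are illustrative assumptions, not taken from the article):

```python
# Extensional viscosity from the normal stress difference and the strain rate,
# plus the Trouton ratio against an assumed dynamic (shear) viscosity.

def extensional_viscosity(sigma_zz, sigma_xx, sigma_yy, strain_rate):
    """eta_e = (sigma_zz - 0.5*sigma_xx - 0.5*sigma_yy) / strain_rate, uniaxial extension along z."""
    return (sigma_zz - 0.5 * sigma_xx - 0.5 * sigma_yy) / strain_rate

eta = 0.5    # assumed dynamic viscosity, Pa*s
rate = 2.0   # assumed extension rate, 1/s
eta_e = extensional_viscosity(sigma_zz=4.0, sigma_xx=1.0, sigma_yy=1.0, strain_rate=rate)
print(eta_e, eta_e / eta)   # these sample numbers happen to give a Trouton ratio of 3, the Newtonian value
```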
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "\\eta_e = \\frac{\\sigma_{zz} - \\frac{1}{2}\\sigma_{xx} - \\frac{1}{2}\\sigma_{yy}}{\\dot{\\varepsilon}}\\,\\!" }, { "math_id": 2, "text": "\\eta_e\\,\\!" }, { "math_id": 3, "text": "\\sigma_{nn}\\,\\!" }, { "math_id": 4, "text": "\\dot{\\varepsilon}\\,\\!" }, { "math_id": 5, "text": "\\dot{\\varepsilon} = \\frac{\\partial v_z}{\\partial z}\\,\\!" }, { "math_id": 6, "text": "\\eta_e" }, { "math_id": 7, "text": "\\eta" }, { "math_id": 8, "text": "\\mathrm{Tr} = \\eta_e/\\eta" } ]
https://en.wikipedia.org/wiki?curid=13860656
13864461
Pollaczek–Khinchine formula
In queueing theory, a discipline within the mathematical theory of probability, the Pollaczek–Khinchine formula states a relationship between the queue length and service time distribution Laplace transforms for an M/G/1 queue (where jobs arrive according to a Poisson process and have general service time distribution). The term is also used to refer to the relationships between the mean queue length and mean waiting/service time in such a model. The formula was first published by Felix Pollaczek in 1930 and recast in probabilistic terms by Aleksandr Khinchin two years later. In ruin theory the formula can be used to compute the probability of ultimate ruin (probability of an insurance company going bankrupt). Mean queue length. The formula states that the mean number of customers in system "L" is given by formula_0 where formula_1 is the arrival rate of the Poisson process, formula_2 is the mean of the service time distribution "S", formula_3 is the utilization, and Var("S") is the variance of the service time distribution. For the mean queue length to be finite it is necessary that formula_4, as otherwise jobs arrive faster than they leave the queue. The traffic intensity formula_3 ranges between 0 and 1 and is the mean fraction of time that the server is busy. If the arrival rate formula_1 is greater than or equal to the service rate formula_5, the queuing delay becomes infinite. The variance term enters the expression due to Feller's paradox. Mean waiting time. If we write "W" for the mean time a customer spends in the system, then formula_6 where formula_7 is the mean waiting time (time spent in the queue waiting for service) and formula_5 is the service rate. Using Little's law, which states that formula_8, where "L" is the mean number of customers in the system, formula_1 is the arrival rate and "W" is the mean time a customer spends in the system, we obtain formula_9 We can write an expression for the mean waiting time as formula_10 Queue length transform. Writing π("z") for the probability-generating function of the number of customers in the queue formula_11 where g("s") is the Laplace transform of the service time probability density function. Waiting time transform. Writing W*("s") for the Laplace–Stieltjes transform of the waiting time distribution, formula_12 where again g("s") is the Laplace transform of the service time probability density function. "n"th moments can be obtained by differentiating the transform "n" times, multiplying by (−1)"n" and evaluating at "s" = 0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
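A short sketch of the mean-value relations above for an M/G/1 queue (the function name is mine, and the exponential-service example at the end is an illustrative check, not part of the article):

```python
def mg1_means(lam, mean_s, var_s):
    """Pollaczek-Khinchine mean values for an M/G/1 queue.

    lam    -- Poisson arrival rate
    mean_s -- mean service time (1/mu)
    var_s  -- variance of the service time distribution S
    """
    mu = 1.0 / mean_s
    rho = lam / mu                      # traffic intensity, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: rho >= 1")
    L = rho + (rho**2 + lam**2 * var_s) / (2 * (1 - rho))   # mean number in system
    W = L / lam                         # mean time in system, by Little's law
    Wq = W - mean_s                     # mean waiting time in the queue
    return L, W, Wq

# Check with exponential service (M/M/1), where Var(S) = (1/mu)^2:
print(mg1_means(lam=0.8, mean_s=1.0, var_s=1.0))   # (4.0, 5.0, 4.0); L matches rho/(1-rho)
```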
[ { "math_id": 0, "text": " L = \\rho + \\frac{\\rho^2 + \\lambda^2 \\operatorname{Var}(S)}{2(1-\\rho)}" }, { "math_id": 1, "text": "\\lambda" }, { "math_id": 2, "text": "1/\\mu" }, { "math_id": 3, "text": "\\rho=\\lambda/\\mu" }, { "math_id": 4, "text": "\\rho < 1" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "W=W'+\\mu^{-1}" }, { "math_id": 7, "text": "W'" }, { "math_id": 8, "text": "L=\\lambda W" }, { "math_id": 9, "text": "W = \\frac{\\rho + \\lambda \\mu \\text{Var}(S)}{2(\\mu-\\lambda)} + \\mu^{-1}." }, { "math_id": 10, "text": "W' = \\frac{L}{\\lambda} - \\mu^{-1} = \\frac{\\rho + \\lambda \\mu \\text{Var}(S)}{2(\\mu-\\lambda)}." }, { "math_id": 11, "text": "\\pi(z) = \\frac{(1-z)(1-\\rho)g(\\lambda(1-z))}{g(\\lambda(1-z))-z}" }, { "math_id": 12, "text": "W^\\ast(s) = \\frac{(1-\\rho)sg(s)}{s-\\lambda(1-g(s))}" } ]
https://en.wikipedia.org/wiki?curid=13864461
13864664
Complementary monopoly
A complementary monopoly is an economic concept. It considers a situation where consent must be obtained from more than one agent in order to obtain a good, which leads to a reduction in the surplus generated relative to an outright monopoly if the agents do not cooperate. The theory was originally proposed in the nineteenth century by Antoine Augustin Cournot. This can be seen in private toll roads where different operators control different sections of the road. The solution is for one agent to purchase all sections of the road. Complementary goods are a less extreme form of this effect. In this case, one good is still of value even if the other good is not obtained. In a 1968 paper Hugo F. Sonnenschein claims that complementary monopoly is equivalent to Cournot duopoly. Example. Consider a road between two towns, each half of which is owned by a different agent. A customer must pass through two toll booths in order to travel from one town to the other. Each agent sets the price of his toll booth. Given a demand function formula_0, the optimal price for a single monopolist is formula_1 leading to revenue of formula_2 If both agents set their prices independently, then the Nash equilibrium is for each to set their price at formula_3. This leads to an increase in the total price to formula_4 and a decrease in total revenue to formula_5 The total revenue generated by the two owners is reduced and the price is increased. This means that both the owners and the users of the road are worse off than they would otherwise be. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
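A small numerical sketch of this example, comparing the single-monopolist outcome with the two-agent Nash equilibrium (the concrete values of D_max and P_max, and the helper names, are arbitrary illustrative choices):

```python
# Toll-road example: demand D = D_max * (P_max - P), where P is the total price paid.
D_MAX, P_MAX = 100.0, 12.0          # illustrative parameters

def demand(total_price):
    return max(D_MAX * (P_MAX - total_price), 0.0)

# Single monopolist: charges P_max / 2, earning D_max * P_max^2 / 4.
p_mono = P_MAX / 2
rev_mono = demand(p_mono) * p_mono

# Two independent agents: each charges P_max / 3 in the Nash equilibrium,
# so the total price rises to 2 * P_max / 3 and joint revenue falls to 2 * D_max * P_max^2 / 9.
p_each = P_MAX / 3
rev_joint = demand(2 * p_each) * (2 * p_each)
print(rev_mono, rev_joint)          # 3600.0 vs 3200.0 for these parameters

# Numerical check that P_max / 3 is each agent's best response to the other charging P_max / 3:
candidates = [i * 0.01 for i in range(int(P_MAX * 100))]
best = max(candidates, key=lambda p: demand(p + p_each) * p)
print(round(best, 2))               # 4.0 == P_max / 3
```

The grid search at the end confirms numerically that charging formula_3 is indeed a best response when the other agent does the same.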
[ { "math_id": 0, "text": "D = D_{max}\\cdot (P_{max} - P)" }, { "math_id": 1, "text": "P = \\frac{P_{max}} {2}" }, { "math_id": 2, "text": "R = {D}\\cdot {P} = {D_{max}\\cdot (P_{max} - \\frac{P_{max}} {2})}\\cdot \\frac{P_{max}} {2} = {D_{max}\\cdot \\frac{P_{max}^2} {4}}" }, { "math_id": 3, "text": "P = \\frac{P_{max}} {3}" }, { "math_id": 4, "text": "P = \\frac{2\\cdot P_{max}} {3}" }, { "math_id": 5, "text": "R = {D}\\cdot {P} = {D_{max}\\cdot (P_{max} - \\frac{2\\cdot P_{max}} {3})}\\cdot \\frac{2\\cdot P_{max}} {3} = {D_{max}\\cdot \\frac{2\\cdot P_{max}^2} {9}}" } ]
https://en.wikipedia.org/wiki?curid=13864664
13865816
Putter
Type of golf club A putter is a club used in the sport of golf to make relatively short and low-speed strokes with the intention of rolling the ball into the hole from a short distance away. It is differentiated from the other clubs (typically, irons and woods) by a clubhead with a very flat, low-profile, low-loft striking face, and by other features which are only allowed on putters, such as bent shafts, non-circular grips, and positional guides. Putters are generally used from very close distances to the cup, generally on the putting green, though certain courses have fringes and roughs near the green which are also suitable for putting. While no club in a player's bag is absolutely indispensable nor required to be carried by strict rules, the putter comes closest. It is a highly specialized tool for a specific job, and virtually no golfer is without one. Design. Putting is the most precise aspect of the game of golf. The putter must be designed to give the golfer every technical advantage including smooth stroke, good glide, sweet impact, and bounce-less topspin ball launch as well as every technique advantage including perfect fit as to shaft angle and length. The striking face of a putter is usually not perpendicular to the ground: putters have a small amount of loft, intended to "lift" the ball out of any depression it has made or settled into on the green, which reduces bouncing. This loft is typically 5–6°, and by strict rules cannot be more than 10°. The putter is the only club that may have a grip that is not perfectly round; "shield"-like cross-sections with a flat top and curved underside are most common. The putter is also the only club allowed to have a bent shaft; often, club-makers will attach the shaft to the club-head on the near edge for visibility, but to increase stability, the shaft is bent near the clubhead mounting so that its lie and the resulting clubhead position places the line of the straight part of the shaft at the sweet spot of the subhead, where the ball should be for the best putt. This increases accuracy as the golfer can direct their swing through the ball, without feeling like they are slightly behind it. Many putters also have an offset hosel, which places the shaft of the club in line with the center of the ball at impact, again to improve stability and feel as, combined with the vertical bend, the shaft will point directly into the center of the ball at impact. Historically putters were known as "putting cleeks" and were made entirely from woods such as beech, ash and hazel. In the 1900s putters heads evolved, with iron club heads becoming a more popular design. The design of the putter's club head has undergone radical changes since the late 1950s. Putters were originally a forged iron piece very similar in shape to the irons of the day. One of the first to apply scientific principles to golf club design was engineer Karsten Solheim. In 1959 instead of attaching the shaft at the heel of the blade, Solheim attached it in the center, transferring much of the weight of the club head to the perimeter. Through attempts to lower the center of gravity of the club head, it evolved into a shorter, thicker head slightly curved from front to rear (the so-called "hot dog" putter). The introduction of investment casting for club heads allowed drastically different shapes to be made far more easily and cheaply than with forging, resulting in several design improvements. 
First of all, the majority of mass behind the clubface was placed as low as possible, resulting in an L-shaped side profile with a thin, flat club face and another thin block along the bottom of the club behind the face. Additionally, peripheral weighting, or the placing of mass as far away from the center of the clubface as possible, increases the moment of inertia of the club head, reducing twisting if the club contacts the ball slightly off-center and thus giving the club a larger "sweet spot" with which to contact the ball. Newer innovations include replacing the metal at the "sweet spot" with a softer metal or polymer compound that will give and rebound at impact, which increases the peak impulse (force formula_0 time) imparted to the ball for better distance. Putters are subdivided into mallet, peripheral weighted and blade styles. Power instability and practice/play convertibility are features embodied in the latest putter design technology. Variations. Long-shaft putters. Though most putters have a shaft (slightly shorter for most ladies and juniors, longer for most men), putters are also made with longer shaft lengths and grips, and are designed to reduce the "degrees of freedom" allowed a player when he or she putts. Simply, the more joints that can easily bend or twist during the putting motion, the more degrees of freedom a player has when putting, which gives more flexibility and feel but can result in more inconsistent putts. With a normal putter, the player has six degrees of freedom: hands, wrists, elbows, shoulders, waist and knees, all of which can be moved just slightly to affect the path of the ball and likely prevent a putt from falling in the cup. Such motions, especially nervous uncontrollable motions, are called "yips", and having a chronic case of the yips can ruin a golfer's short game. German professional golfer Bernhard Langer is famous for having such a severe case that he once needed four putts to hole out from within three feet of the cup. A "belly putter" is typically about longer than a normal putter and is designed to be "anchored" against the abdomen of the player. This design reduces or removes the importance of the hands, wrists, elbows, and shoulders. A "long putter" is even longer and is designed to be anchored from the chest or even the chin and similarly reduces the impact of the hands, wrists, elbows and shoulders. The disadvantages are decreased feel and control over putting power, especially with the long putter. Their use in professional tournaments is hotly contested; Jim Furyk and others on the pro tours including Langer and Vijay Singh have used belly putters at some point with a marked improvement of their short game, while players like Tiger Woods and officials like former USGA technical director Frank Thomas have condemned it as conferring an unfair advantage on users. In November 2012, a proposed change for the 2016 edition of the rules of golf was announced, which would forbid players from anchoring a club against their body in any way. This rule change will affect the use of long and belly putters by players. Notable players affected include Adam Scott, Tim Clark, Kevin Stadler, Keegan Bradley, Webb Simpson, Carl Pettersson and Ernie Els. This new rule (14-1b Anchoring the Club) was approved in May 2013 and took effect on 1 January 2016. This new rule prohibits "anchoring" a putter when making a stroke. It does not ban long-shafted putters, rather, it bans the method by which they were originally designed to be used. Fetch Mallet. 
The fetch mallet is so called because it can be used to retrieve the golf ball from the cup. In November 2018 Lee Westwood won the Nedbank Golf Challenge using a PING Sigma 2 Fetch putter. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\times" } ]
https://en.wikipedia.org/wiki?curid=13865816
138677
Landau's function
Mathematical function In mathematics, Landau's function "g"("n"), named after Edmund Landau, is defined for every natural number "n" to be the largest order of an element of the symmetric group "S""n". Equivalently, "g"("n") is the largest least common multiple (lcm) of any partition of "n", or the maximum number of times a permutation of "n" elements can be recursively applied to itself before it returns to its starting sequence. For instance, 5 = 2 + 3 and lcm(2,3) = 6. No other partition of 5 yields a bigger lcm, so "g"(5) = 6. An element of order 6 in the group "S"5 can be written in cycle notation as (1 2) (3 4 5). Note that the same argument applies to the number 6, that is, "g"(6) = 6. There are arbitrarily long sequences of consecutive numbers "n", "n" + 1, ..., "n" + "m" on which the function "g" is constant. The integer sequence "g"(0) = 1, "g"(1) = 1, "g"(2) = 2, "g"(3) = 3, "g"(4) = 4, "g"(5) = 6, "g"(6) = 6, "g"(7) = 12, "g"(8) = 15, ... (sequence in the OEIS) is named after Edmund Landau, who proved in 1902 that formula_0 (where ln denotes the natural logarithm). Equivalently (using little-o notation), formula_1. More precisely, formula_2 If formula_3, where formula_4 denotes the prime counting function, formula_5 the logarithmic integral function with inverse formula_6, and we may take formula_7 for some constant "c" &gt; 0 by Ford, then formula_8 The statement that formula_9 for all sufficiently large "n" is equivalent to the Riemann hypothesis. It can be shown that formula_10 with the only equality between the functions at "n" = 0, and indeed formula_11
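A direct, exponential-time sketch that computes g(n) for small n straight from the definition, as the maximum lcm over all partitions of n (the function names are mine; math.lcm requires Python 3.9+):

```python
from math import lcm
from functools import lru_cache

def landau_g(n):
    """Largest lcm of any partition of n (g(0) = 1 for the empty partition)."""
    @lru_cache(maxsize=None)
    def best(remaining, largest_part):
        # Maximum lcm achievable by splitting `remaining` into parts <= largest_part.
        result = 1
        for part in range(1, min(remaining, largest_part) + 1):
            result = max(result, lcm(part, best(remaining - part, part)))
        return result
    return best(n, n)

print([landau_g(n) for n in range(9)])   # [1, 1, 2, 3, 4, 6, 6, 12, 15], matching the sequence above
```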
[ { "math_id": 0, "text": "\\lim_{n\\to\\infty}\\frac{\\ln(g(n))}{\\sqrt{n \\ln(n)}} = 1" }, { "math_id": 1, "text": "g(n) = e^{(1+o(1))\\sqrt{n\\ln n}}" }, { "math_id": 2, "text": "\\ln g(n)=\\sqrt{n\\ln n}\\left(1+\\frac{\\ln\\ln n-1}{2\\ln n}-\\frac{(\\ln\\ln n)^2-6\\ln\\ln n+9}{8(\\ln n)^2}+O\\left(\\left(\\frac{\\ln\\ln n}{\\ln n}\\right)^3\\right)\\right)." }, { "math_id": 3, "text": "\\pi(x)-\\operatorname{Li}(x)=O(R(x))" }, { "math_id": 4, "text": "\\pi" }, { "math_id": 5, "text": "\\operatorname{Li}" }, { "math_id": 6, "text": "\\operatorname{Li}^{-1}" }, { "math_id": 7, "text": "R(x)=x\\exp\\bigl(-c(\\ln x)^{3/5}(\\ln\\ln x)^{-1/5}\\bigr)" }, { "math_id": 8, "text": "\\ln g(n)=\\sqrt{\\operatorname{Li}^{-1}(n)}+O\\bigl(R(\\sqrt{n\\ln n})\\ln n\\bigr)." }, { "math_id": 9, "text": "\\ln g(n)<\\sqrt{\\mathrm{Li}^{-1}(n)}" }, { "math_id": 10, "text": "g(n)\\le e^{n/e}" }, { "math_id": 11, "text": "g(n) \\le \\exp\\left(1.05314\\sqrt{n\\ln n}\\right)." } ]
https://en.wikipedia.org/wiki?curid=138677
1387081
Rubik's Clock
Rubik's puzzle The Rubik's Clock is a mechanical puzzle invented and patented by Christopher C. Wiggs and Christopher J. Taylor. The Hungarian sculptor and professor of architecture Ernő Rubik bought the patent from them to market the product under his name. It was first marketed in 1988. The Rubik's Clock is a two-sided puzzle, each side presenting nine clocks to the puzzler. There are four dials, one at each corner of the puzzle, each allowing the corresponding corner clock to be rotated directly. (The corner clocks, unlike the other clocks, rotate on both sides of the puzzle simultaneously and can never be operated independently. Thus the puzzle contains only 14 independent clocks.) There are also four pins which span both sides of the puzzle; each pin is arranged such that if it is "in" on one side it is "out" on the other. The state of each pin (in or out) determines whether the adjacent corner clock is mechanically connected to the three other adjacent clocks on the front side or on the back side: thus the configuration of the pins determines which sets of clocks can be turned simultaneously by rotating a suitable dial. The aim of the puzzle is to set all nine clocks to 12 o'clock (straight up) on both sides of the puzzle simultaneously. A common method of doing so is to start by constructing a cross on both sides (at 12 o'clock) and then to solve the corner clocks. The Rubik's Clock is listed as one of the 17 WCA events, with records for the fastest time to solve one puzzle and for the fastest average time to solve five puzzles (discarding the slowest and fastest times). Combinations. Since there are 14 independent clocks, with 12 settings each, there are a total of formula_0 = 1,283,918,464,548,864 possible combinations for the clock faces. This does not take the pin positions into account. Notation. The puzzle is oriented with 12 o'clock on top, and either side in front. The following moves can be made: Pin movements: Wheel movements: Puzzle rotation: Records. The world record for a single solve is held by Brendyn Dunagan of the United States with a time of 1.97 seconds, set at La La Land 2024. The world record for Olympic average of five solves is held by Eryk Kasperek of Poland with an average of 2.52 seconds, set at Cube4fun Lublin on WEII 2024.
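The quoted count of clock-face positions can be checked directly (a trivial sketch):

```python
# 14 independent clocks with 12 positions each, ignoring the pin configuration.
print(12**14)   # 1283918464548864
```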
[ { "math_id": 0, "text": "12^{14}" } ]
https://en.wikipedia.org/wiki?curid=1387081
13872825
Linear space (geometry)
A linear space is a basic structure in incidence geometry. A linear space consists of a set of elements called points, and a set of elements called lines. Each line is a distinct subset of the points. The points in a line are said to be incident with the line. Each two points are in a line, and any two lines may have no more than one point in common. Intuitively, this rule can be visualized as the property that two straight lines never intersect more than once. Linear spaces can be seen as a generalization of projective and affine planes, and more broadly, of 2-formula_0 block designs, where the requirement that every block contains the same number of points is dropped and the essential structural characteristic is that 2 points are incident with exactly 1 line. The term "linear space" was coined by Paul Libois in 1964, though many results about linear spaces are much older. Definition. Let "L" = ("P", "G", "I") be an incidence structure, for which the elements of "P" are called points and the elements of "G" are called lines. "L" is a "linear space" if the following three axioms hold: (L1) two distinct points are incident with exactly one common line; (L2) every line is incident with at least two points; (L3) "L" contains at least two lines. Some authors drop (L3) when defining linear spaces. In such a situation, linear spaces complying with (L3) are considered "nontrivial" and those that do not are "trivial". Examples. The regular Euclidean plane with its points and lines constitutes a linear space; moreover, all affine and projective spaces are linear spaces as well. The table below shows all possible nontrivial linear spaces of five points. Because any two points are always incident with one line, the lines being incident with only two points are not drawn, by convention. The trivial case is simply a line through five points. In the first illustration, the ten lines connecting the ten pairs of points are not drawn. In the second illustration, seven lines connecting seven pairs of points are not drawn. A linear space of "n" points containing a line incident with "n" − 1 points is called a "near pencil". (See pencil) Properties. The De Bruijn–Erdős theorem shows that in any finite linear space formula_1 which is not a single point or a single line, we have formula_2.
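A small sketch that checks the three axioms as stated above for a finite incidence structure, here the near-pencil on four points (the representation of lines as point sets and the helper name are my own choices, not from the article):

```python
from itertools import combinations

def is_linear_space(points, lines):
    """Check the linear-space axioms for a finite incidence structure.
    `lines` is a collection of sets of points."""
    lines = [frozenset(l) for l in lines]
    # (L1) every pair of distinct points lies on exactly one common line
    for p, q in combinations(points, 2):
        if sum(1 for l in lines if p in l and q in l) != 1:
            return False
    # (L2) every line is incident with at least two points
    if any(len(l) < 2 for l in lines):
        return False
    # (L3) there are at least two lines (non-triviality)
    return len(lines) >= 2

# Near-pencil on 4 points: one long line {1, 2, 3}, plus the 2-point lines through point 4.
points = {1, 2, 3, 4}
lines = [{1, 2, 3}, {1, 4}, {2, 4}, {3, 4}]
print(is_linear_space(points, lines))            # True
print(is_linear_space({1, 2, 3}, [{1, 2, 3}]))   # False: a single line through all points is trivial
```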
[ { "math_id": 0, "text": "(v,k,1)" }, { "math_id": 1, "text": "S=({\\mathcal P},{\\mathcal L}, \\textbf{I})" }, { "math_id": 2, "text": "|\\mathcal{P}| \\leq |\\mathcal{L}|" } ]
https://en.wikipedia.org/wiki?curid=13872825
1387689
Table of Lie groups
This article gives a table of some common Lie groups and their associated Lie algebras. The following are noted: the topological properties of the group (dimension; connectedness; compactness; the nature of the fundamental group; and whether or not it is simply connected) as well as their algebraic properties (abelian; simple; semisimple). For more examples of Lie groups and other related topics see the list of simple Lie groups; the Bianchi classification of groups of up to three dimensions; the classification of low-dimensional real Lie algebras for up to four dimensions; and the list of Lie group topics. Real Lie groups and their algebras. Column legend Complex Lie groups and their algebras. Note that a "complex Lie group" is defined as a complex analytic manifold that is also a group whose multiplication and inversion are each given by a holomorphic map. The dimensions in the table below are dimensions over C. Note that every complex Lie group/algebra can also be viewed as a real Lie group/algebra of twice the dimension. Complex Lie algebras. The dimensions given are dimensions over C. Note that every complex Lie algebra can also be viewed as a real Lie algebra of twice the dimension. The Lie algebra of affine transformations of dimension two, in fact, exists for any field. An instance has already been listed in the first table for real Lie algebras.
[ { "math_id": 0, "text": "\\pi_0" }, { "math_id": 1, "text": "\\pi_1" } ]
https://en.wikipedia.org/wiki?curid=1387689
1387827
Gaussian noise
Type of noise in signal processing In signal processing theory, Gaussian noise, named after Carl Friedrich Gauss, is a kind of signal noise that has a probability density function (pdf) equal to that of the normal distribution (which is also known as the Gaussian distribution). In other words, the values that the noise can take are Gaussian-distributed. The probability density function formula_0 of a Gaussian random variable formula_1 is given by: formula_2 where formula_1 represents the value (for example, the grey level of an image pixel), formula_3 the mean value and formula_4 the standard deviation. A special case is "white Gaussian noise", in which the values at any pair of times are identically distributed and statistically independent (and hence uncorrelated). In communication channel testing and modelling, Gaussian noise is used as additive white noise to generate additive white Gaussian noise. In telecommunications and computer networking, communication channels can be affected by wideband Gaussian noise coming from many natural sources, such as the thermal vibrations of atoms in conductors (referred to as thermal noise or Johnson–Nyquist noise), shot noise, black-body radiation from the earth and other warm objects, and from celestial sources such as the Sun. Gaussian noise in digital images. Principal sources of Gaussian noise in digital images arise during acquisition (e.g. sensor noise caused by poor illumination and/or high temperature) and/or transmission (e.g. electronic circuit noise). In digital image processing Gaussian noise can be reduced using a spatial filter, though when smoothing an image, an undesirable outcome may be the blurring of fine-scaled image edges and details, because they also correspond to blocked high frequencies. Conventional spatial filtering techniques for noise removal include: mean (convolution) filtering, median filtering and Gaussian smoothing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
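A short NumPy/SciPy sketch that adds Gaussian noise to a test image and then reduces it with a mean filter (the parameter values and the flat test image are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter   # simple mean (box) filter

def add_gaussian_noise(image, mean=0.0, sigma=10.0, rng=None):
    """Return a copy of `image` with i.i.d. Gaussian noise added to every pixel."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=mean, scale=sigma, size=image.shape)
    return np.clip(image.astype(float) + noise, 0, 255)   # keep values in the usual 8-bit range

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)                  # flat grey test image
noisy = add_gaussian_noise(clean, sigma=10.0, rng=rng)
smoothed = uniform_filter(noisy, size=3)          # mean filtering averages 3x3 neighbourhoods
print(noisy.std(), smoothed.std())                # the noise standard deviation drops, roughly by 3,
                                                  # at the cost of blurring any edges that were present
```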
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "\\varphi(z) = \\frac 1 {\\sigma\\sqrt{2\\pi}} e^{ -(z-\\mu)^2/(2\\sigma^2) }" }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=1387827
13879720
Cone (formal languages)
In formal language theory, a cone is a set of formal languages that has some desirable closure properties enjoyed by some well-known sets of languages, in particular by the families of regular languages, context-free languages and the recursively enumerable languages. The concept of a cone is a more abstract notion that subsumes all of these families. A similar notion is the faithful cone, having somewhat relaxed conditions. For example, the context-sensitive languages do not form a cone, but still have the required properties to form a faithful cone. The terminology "cone" has a French origin. In the American-oriented literature one usually speaks of a "full trio". The "trio" corresponds to the faithful cone. Definition. A cone is a family formula_0 of languages such that formula_0 contains at least one non-empty language, and for any formula_1 over some alphabet formula_2, the following closure properties hold: if formula_3 is an arbitrary homomorphism from formula_4 to some formula_5, then formula_6 is in formula_0; if formula_3 is an arbitrary homomorphism from some formula_5 to formula_4, then formula_7 is in formula_0; and if formula_8 is any regular language over formula_2, then formula_9 is in formula_0. The family of all regular languages is contained in any cone. If one restricts the definition to homomorphisms that do not introduce the empty word formula_10 then one speaks of a "faithful cone"; the inverse homomorphisms are not restricted. Within the Chomsky hierarchy, the regular languages, the context-free languages, and the recursively enumerable languages are all cones, whereas the context-sensitive languages and the recursive languages are only faithful cones. Relation to Transducers. A finite state transducer is a finite state automaton that has both input and output. It defines a transduction formula_11, mapping a language formula_12 over the input alphabet into another language formula_13 over the output alphabet. Each of the cone operations (homomorphism, inverse homomorphism, intersection with a regular language) can be implemented using a finite state transducer. And, since finite state transducers are closed under composition, every sequence of cone operations can be performed by a finite state transducer. Conversely, every finite state transduction formula_11 can be decomposed into cone operations. In fact, there exists a normal form for this decomposition, which is commonly known as "Nivat's Theorem": Namely, each such formula_11 can be effectively decomposed as formula_14, where formula_15 are homomorphisms, and formula_8 is a regular language depending only on formula_11. Altogether, this means that a family of languages is a cone if and only if it is closed under finite state transductions. This is a very powerful set of operations. For instance one easily writes a (nondeterministic) finite state transducer with alphabet formula_16 that removes every second formula_17 in words of even length (and does not change words otherwise). Since the context-free languages form a cone, they are closed under this exotic operation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
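The three cone operations can be illustrated on finite samples of a language; in the sketch below the bounded-length sampling, the helper names and the particular homomorphisms are my own illustrative choices rather than anything from the article:

```python
import re
from itertools import product

def image(words, h):
    """h(L): apply a homomorphism given letter-by-letter as a dict."""
    return {"".join(h[c] for c in w) for w in words}

def preimage(words, h, alphabet, max_len):
    """h^{-1}(L), restricted to source words of length <= max_len."""
    out = set()
    for n in range(max_len + 1):
        for letters in product(alphabet, repeat=n):
            w = "".join(letters)
            if "".join(h[c] for c in w) in words:
                out.add(w)
    return out

def intersect_regular(words, pattern):
    """L ∩ R, with R given as a regular expression."""
    return {w for w in words if re.fullmatch(pattern, w)}

# A finite sample of the context-free language { a^n b^n : n >= 0 }.
L = {"a" * n + "b" * n for n in range(5)}
h = {"a": "ab", "b": ""}   # this homomorphism erases b, so it would not be allowed in a faithful cone
print(sorted(image(L, h)))                                        # {(ab)^n : n <= 4}
print(sorted(preimage({"ab", "abab"}, {"a": "a", "b": "b"}, "ab", 4)))
print(sorted(intersect_regular(L, r"a{2}b{2}")))                  # {'aabb'}
```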
[ { "math_id": 0, "text": "\\mathcal{S}" }, { "math_id": 1, "text": "L \\in \\mathcal{S}" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "h" }, { "math_id": 4, "text": "\\Sigma^\\ast" }, { "math_id": 5, "text": "\\Delta^\\ast" }, { "math_id": 6, "text": "h(L)" }, { "math_id": 7, "text": "h^{-1}(L)" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "L\\cap R" }, { "math_id": 10, "text": "\\lambda" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "L" }, { "math_id": 13, "text": "T(L)" }, { "math_id": 14, "text": "T(L) = g(h^{-1}(L) \\cap R)" }, { "math_id": 15, "text": "g, h" }, { "math_id": 16, "text": "\\{a,b\\}" }, { "math_id": 17, "text": "b" } ]
https://en.wikipedia.org/wiki?curid=13879720
1388070
System of imprimitivity
The concept of a system of imprimitivity is used in mathematics, particularly in algebra and analysis, both within the context of the theory of group representations. It was used by George Mackey as the basis for his theory of induced unitary representations of locally compact groups. The simplest case, and the context in which the idea was first noticed, is that of finite groups (see primitive permutation group). Consider a group "G" and subgroups "H" and "K", with "K" contained in "H". Then the left cosets of "H" in "G" are each the union of left cosets of "K". Not only that, but translation (on one side) by any element "g" of "G" respects this decomposition. The connection with induced representations is that the permutation representation on cosets is the special case of induced representation, in which a representation is induced from a trivial representation. The structure, combinatorial in this case, respected by translation shows that either "K" is a maximal subgroup of "G", or there is a system of imprimitivity (roughly, a lack of full "mixing"). In order to generalise this to other cases, the concept is re-expressed: first in terms of functions on "G" constant on "K"-cosets, and then in terms of projection operators (for example the averaging over "K"-cosets of elements of the group algebra). Mackey also used the idea for his explication of quantization theory based on preservation of relativity groups acting on configuration space. This generalized work of Eugene Wigner and others and is often considered to be one of the pioneering ideas in canonical quantization. Example. To motivate the general definitions, a definition is first formulated, in the case of finite groups and their representations on finite-dimensional vector spaces. Let "G" be a finite group and "U" a representation of "G" on a finite-dimensional complex vector space "H". The action of "G" on elements of "H" induces an action of "G" on the vector subspaces "W" of "H" in this way: formula_0 Let "X" be a set of subspaces of "H" such that formula_1 Then ("U","X") is a system of imprimitivity for "G". Two assertions must hold in the definition above: formula_2 holds only when all the coefficients "c""W" are zero. If the action of "G" on the elements of "X" is transitive, then we say this is a transitive system of imprimitivity. Let "G" be a finite group and "G"0 a subgroup of "G". A representation "U" of "G" is induced from a representation "V" of "G"0 if and only if there exist the following: such that "G"0 is the stabilizer subgroup of "W" under the action of "G", i.e. formula_3 and "V" is equivalent to the representation of "G"0 on "W"0 given by "U""h" | "W"0 for "h" ∈ "G"0. Note that by this definition, "induced by" is a relation between representations. We would like to show that there is actually a mapping on representations which corresponds to this relation. For finite groups one can show that a well-defined inducing construction exists on equivalence of representations by considering the character of a representation "U" defined by formula_4 If a representation "U" of "G" is induced from a representation "V" of "G"0, then formula_5 Thus the character function χ"U" (and therefore "U" itself) is completely determined by χ"V". Example. Let "G" be a finite group and consider the space "H" of complex-valued functions on "G". 
The left regular representation of "G" on "H" is defined by formula_6 Now "H" can be considered as the algebraic direct sum of the one-dimensional spaces "W""x", for "x" ∈ "G", where formula_7 The spaces "W""x" are permuted by L"g". Infinite dimensional systems of imprimitivity. To generalize the finite dimensional definition given in the preceding section, a suitable replacement for the set "X" of vector subspaces of "H" which is permuted by the representation "U" is needed. As it turns out, a naïve approach based on subspaces of "H" will not work; for example the translation representation of R on "L"2(R) has no system of imprimitivity in this sense. The right formulation of direct sum decomposition is formulated in terms of projection-valued measures. Mackey's original formulation was expressed in terms of a locally compact second countable (lcsc) group "G", a standard Borel space "X" and a Borel group action formula_8 We will refer to this as a standard Borel "G"-space. The definitions can be given in a much more general context, but the original setup used by Mackey is still quite general and requires fewer technicalities. Definition. Let "G" be a lcsc group acting on a standard Borel space "X". A system of imprimitivity based on ("G", "X") consists of a separable Hilbert space "H" and a pair consisting of which satisfy formula_9 Example. Let "X" be a standard "G" space and μ a σ-finite countably additive "invariant" measure on "X". This means formula_10 for all "g" ∈ "G" and Borel subsets "A" of "G". Let π("A") be multiplication by the indicator function of "A" and "U""g" be the operator formula_11 Then ("U", π) is a system of imprimitivity of ("G", "X") on "L"2μ("X"). This system of imprimitivity is sometimes called the "Koopman system of imprimitivity". Homogeneous systems of imprimitivity. A system of imprimitivity is homogeneous of multiplicity "n", where 1 ≤ "n" ≤ ω if and only if the corresponding projection-valued measure π on "X" is homogeneous of multiplicity "n". In fact, "X" breaks up into a countable disjoint family {"X""n"} 1 ≤ "n" ≤ ω of Borel sets such that π is homogeneous of multiplicity "n" on "X""n". It is also easy to show "X""n" is "G" invariant. "Lemma". Any system of imprimitivity is an orthogonal direct sum of homogeneous ones. It can be shown that if the action of "G" on "X" is transitive, then any system of imprimitivity on "X" is homogeneous. More generally, if the action of "G" on "X" is ergodic (meaning that "X" cannot be reduced by invariant proper Borel sets of "X") then any system of imprimitivity on "X" is homogeneous. We now discuss how the structure of homogeneous systems of imprimitivity can be expressed in a form which generalizes the Koopman representation given in the example above. In the following, we assume that μ is a σ-finite measure on a standard Borel "G"-space "X" such that the action of "G" respects the measure class of μ. This condition is weaker than invariance, but it suffices to construct a unitary translation operator similar to the Koopman operator in the example above. "G" respects the measure class of μ means that the Radon-Nikodym derivative formula_12 is well-defined for every "g" ∈ "G", where formula_13 It can be shown that there is a version of "s" which is jointly Borel measurable, that is formula_14 is Borel measurable and satisfies formula_12 for almost all values of ("g", "x") ∈ "G" × "X". Suppose "H" is a separable Hilbert space, U("H") the unitary operators on "H". 
A "unitary cocycle" is a Borel mapping formula_15 such that formula_16 for almost all "x" ∈ "X" formula_17 for almost all ("g", "h", "x"). A unitary cocycle is "strict" if and only if the above relations hold for all ("g", "h", "x"). It can be shown that for any unitary cocycle there is a strict unitary cocycle which is equal almost everywhere to it (Varadarajan, 1985). "Theorem". Define formula_18 Then "U" is a unitary representation of "G" on the Hilbert space formula_19 Moreover, if for any Borel set "A", π("A") is the projection operator formula_20 then ("U", π) is a system of imprimitivity of ("G","X"). Conversely, any homogeneous system of imprimitivity is of this form, for some measure σ-finite measure μ. This measure is unique up to measure equivalence, that is to say, two such measures have the same sets of measure 0. Much more can be said about the correspondence between homogeneous systems of imprimitivity and cocycles. When the action of "G" on "X" is transitive however, the correspondence takes a particularly explicit form based on the representation obtained by restricting the cocycle Φ to a fixed point subgroup of the action. We consider this case in the next section. Example. A system of imprimitivity ("U", π) of ("G","X") on a separable Hilbert space "H" is "irreducible" if and only if the only closed subspaces invariant under all the operators "U""g" and π("A") for "g" and element of "G" and "A" a Borel subset of "X" are "H" or {0}. If ("U", π) is irreducible, then π is homogeneous. Moreover, the corresponding measure on "X" as per the previous theorem is ergodic. Induced representations. If "X" is a Borel "G" space and "x" ∈ "X", then the fixed point subgroup formula_21 is a closed subgroup of "G". Since we are only assuming the action of "G" on "X" is Borel, this fact is non-trivial. To prove it, one can use the fact that a standard Borel "G"-space can be imbedded into a compact "G"-space in which the action is continuous. "Theorem". Suppose "G" acts on "X" transitively. Then there is a σ-finite quasi-invariant measure μ on "X" which is unique up to measure equivalence (that is any two such measures have the same sets of measure zero). If Φ is a strict unitary cocycle formula_15 then the restriction of Φ to the fixed point subgroup "G""x" is a Borel measurable unitary representation "U" of "G""x" on "H" (Here U("H") has the strong operator topology). However, it is known that a Borel measurable unitary representation is equal almost everywhere (with respect to Haar measure) to a strongly continuous unitary representation. This restriction mapping sets up a fundamental correspondence: "Theorem". Suppose "G" acts on "X" transitively with quasi-invariant measure μ. There is a bijection from unitary equivalence classes of systems of imprimitivity of ("G", "X") and unitary equivalence classes of representation of "G""x". Moreover, this bijection preserves irreducibility, that is a system of imprimitivity of ("G", "X") is irreducible if and only if the corresponding representation of "G""x" is irreducible. Given a representation "V" of "G""x" the corresponding representation of "G" is called the "representation induced by" "V". See theorem 6.2 of (Varadarajan, 1985). Applications to the theory of group representations. Systems of imprimitivity arise naturally in the determination of the representations of a group "G" which is the semi-direct product of an abelian group "N" by a group "H" that acts by automorphisms of "N". 
This means "N" is a normal subgroup of "G" and "H" a subgroup of "G" such that "G" = "N H" and "N" ∩ "H" = {"e"} (with "e" being the identity element of "G"). An important example of this is the inhomogeneous Lorentz group. Fix "G", "H" and "N" as above and let "X" be the character space of "N". In particular, "H" acts on "X" by formula_22 "Theorem". There is a bijection between unitary equivalence classes of representations of "G" and unitary equivalence classes of systems of imprimitivity based on ("H", "X"). This correspondence preserves intertwining operators. In particular, a representation of "G" is irreducible if and only if the corresponding system of imprimitivity is irreducible. This result is of particular interest when the action of "H" on "X" is such that every ergodic quasi-invariant measure on "X" is transitive. In that case, each such measure is the image of (a totally finite version) of Haar measure on "X" by the map formula_23 A necessary condition for this to be the case is that there is a countable set of "H" invariant Borel sets which separate the orbits of "H". This is the case for instance for the action of the Lorentz group on the character space of R4. Example: the Heisenberg group. The Heisenberg group is the group of 3 × 3 "real" matrices of the form: formula_24 This group is the semi-direct product of formula_25 and the abelian normal subgroup formula_26 Denote the typical matrix in "H" by ["w"] and the typical one in "N" by ["s","t"]. Then formula_27 "w" acts on the dual of R2 by multiplication by the transpose matrix formula_28 This allows us to completely determine the orbits and the representation theory. "Orbit structure": The orbits fall into two classes: "Fixed point subgroups": These also fall into two classes depending on the orbit: "Classification": This allows us to completely classify all irreducible representations of the Heisenberg group. These are parametrized by the set consisting of We can write down explicit formulas for these representations by describing the restrictions to "N" and "H". "Case 1". The corresponding representation π is of the form: It acts on "L"2(R) with respect to Lebesgue measure and formula_29 formula_30 "Case 2". The corresponding representation is given by the 1-dimensional character formula_31 formula_32
[ { "math_id": 0, "text": " U_g W = \\{ U_g w: w \\in W \\}. " }, { "math_id": 1, "text": " H = \\bigoplus_{W \\in X} W. " }, { "math_id": 2, "text": " \\sum_{W \\in X} c_W v_W = 0, \\quad v_W \\in W \\setminus \\{ 0 \\} " }, { "math_id": 3, "text": " G_0 = \\{g \\in G: U_g W_0 \\subseteq W_0\\}." }, { "math_id": 4, "text": " \\chi_U(g) = \\operatorname{tr}(U_g). " }, { "math_id": 5, "text": " \\chi_U(g) = \\frac{1}{|G_0|} \\sum_{\\{ x \\in G : {x}^{-1} \\,\ng \\, x \\in G_0\\}} \\chi_V({x}^{-1} \\ g \\ x), \\quad \\forall g \\in G. " }, { "math_id": 6, "text": " [\\operatorname{L}_g \\psi](h) = \\psi(g^{-1} h). " }, { "math_id": 7, "text": " W_x = \\{\\psi \\in H: \\psi(g) = 0, \\quad \\forall g \\neq x\\}." }, { "math_id": 8, "text": " G \\times X \\rightarrow X, \\quad (g,x) \\mapsto g \\cdot x. " }, { "math_id": 9, "text": " U_g \\pi(A) U_{g^{-1}} = \\pi(g \\cdot A). " }, { "math_id": 10, "text": " \\mu(g^{-1} A) = \\mu(A) \\quad " }, { "math_id": 11, "text": " [U_g \\psi] (x) =\\psi(g^{-1} x).\\quad " }, { "math_id": 12, "text": " s(g,x) = \\bigg[\\frac{d \\mu}{d g^{-1}\\mu}\\bigg](x) \\in [0, \\infty) " }, { "math_id": 13, "text": " g^{-1}\\mu(A) = \\mu(g A). \\quad " }, { "math_id": 14, "text": " s : G \\times X \\rightarrow [0, \\infty) " }, { "math_id": 15, "text": " \\Phi: G \\times X \\rightarrow \\operatorname{U}(H) " }, { "math_id": 16, "text": " \\Phi(e, x) = I \\quad " }, { "math_id": 17, "text": " \\Phi(g h, x) = \\Phi(g, h \\cdot x) \\Phi(h, x) " }, { "math_id": 18, "text": " [U_g \\psi](x) = \\sqrt{s(g,g^{-1}x)}\\ \\Phi(g, g^{-1} x) \\ \\psi(g^{-1} x). " }, { "math_id": 19, "text": " \\int_X^\\oplus H d \\mu(x)." }, { "math_id": 20, "text": " \\pi(A) \\psi = 1_A \\psi, \\quad \\int_X^\\oplus H d \\mu(x) \\rightarrow \\int_X^\\oplus H d \\mu(x), " }, { "math_id": 21, "text": " G_x = \\{g \\in G: g \\cdot x = x \\} " }, { "math_id": 22, "text": " [ h \\cdot \\chi](n) = \\chi(h^{-1} n h). " }, { "math_id": 23, "text": " g \\mapsto g \\cdot x_0. " }, { "math_id": 24, "text": " \\begin{bmatrix} 1 & x & z \\\\0 & 1 & y \\\\ 0 & 0 & 1 \\end{bmatrix}. " }, { "math_id": 25, "text": " H = \\bigg\\{\\begin{bmatrix} 1 & w & 0 \\\\0 & 1 & 0 \\\\ 0 & 0 & 1\\end{bmatrix}: w \\in \\mathbb{R} \\bigg\\} " }, { "math_id": 26, "text": " N = \\bigg\\{\\begin{bmatrix} 1 & 0 & t \\\\0 & 1 & s \\\\ 0 & 0 & 1\\end{bmatrix}: s,t \\in \\mathbb{R} \\bigg\\}. " }, { "math_id": 27, "text": " [w]^{-1} \\begin{bmatrix}s \\\\ t \\end{bmatrix} [w] = \\begin{bmatrix}s \\\\ - w s + t \\end{bmatrix} = \\begin{bmatrix} 1 & 0 \\\\ -w & 1 \\end{bmatrix} \\begin{bmatrix} s \\\\ t \\end{bmatrix} " }, { "math_id": 28, "text": " \\begin{bmatrix} 1 & -w \\\\ 0 & 1 \\end{bmatrix}. " }, { "math_id": 29, "text": " (\\pi [s,t] \\psi)(x) = e^{i t y_0} e^{i s x} \\psi (x). \\quad " }, { "math_id": 30, "text": " (\\pi[w] \\psi)(x) = \\psi(x+w y_0).\\quad " }, { "math_id": 31, "text": " \\pi [s,t] = e^{i s x_0}. \\quad " }, { "math_id": 32, "text": " \\pi[w] = e^{i \\lambda w}. \\quad " } ]
https://en.wikipedia.org/wiki?curid=1388070
13880758
4-Hydroxybutyrate dehydrogenase
Class of enzymes In enzymology, a 4-hydroxybutyrate dehydrogenase (EC 1.1.1.61) is an enzyme that catalyzes the chemical reaction 4-hydroxybutanoate + NAD+ formula_0 succinate semialdehyde + NADH + H+ The two substrates of this enzyme are therefore 4-hydroxybutanoic acid, and NAD+, whereas its 3 products are succinate semialdehyde, NADH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 4-hydroxybutanoate:NAD+ oxidoreductase. This enzyme is also called gamma-hydroxybutyrate dehydrogenase. This enzyme participates in butanoate metabolism and the degradation of the neurotransmitter 4-hydroxybutanoic acid. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=13880758
13883
Huffman coding
Technique to compress data In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code is Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". The output from Huffman's algorithm can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file). The algorithm derives this table from the estimated probability or frequency of occurrence ("weight") for each possible value of the source symbol. As in other entropy encoding methods, more common symbols are generally represented using fewer bits than less common symbols. Huffman's method can be efficiently implemented, finding a code in time linear to the number of input weights if these weights are sorted. However, although optimal among methods encoding symbols separately, Huffman coding is not always optimal among all compression methods - it is replaced with arithmetic coding or asymmetric numeral systems if a better compression ratio is required. History. In 1951, David A. Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam. The professor, Robert M. Fano, assigned a term paper on the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sorted binary tree and quickly proved this method the most efficient. In doing so, Huffman outdid Fano, who had worked with Claude Shannon to develop a similar code. Building the tree from the bottom up guaranteed optimality, unlike the top-down approach of Shannon–Fano coding. Terminology. Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix code (sometimes called "prefix-free codes", that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol). Huffman coding is such a widespread method for creating prefix codes that the term "Huffman code" is widely used as a synonym for "prefix code" even when such a code is not produced by Huffman's algorithm. Problem definition. Formalized description. Input. Alphabet formula_0, which is the symbol alphabet of size formula_1. Tuple formula_2, which is the tuple of the (positive) symbol weights (usually proportional to probabilities), i.e. formula_3. Output. Code formula_4, which is the tuple of (binary) codewords, where formula_5 is the codeword for formula_6. Goal. Let formula_7 be the weighted path length of code formula_8. Condition: formula_9 for any code formula_10. Example. We give an example of the result of Huffman coding for a code with five characters and given weights. We will not verify that it minimizes "L" over all codes, but we will compute "L" and compare it to the Shannon entropy "H" of the given set of weights; the result is nearly optimal. For any code that is "biunique", meaning that the code is "uniquely decodeable", the sum of the probability budgets across all symbols is always less than or equal to one. In this example, the sum is strictly equal to one; as a result, the code is termed a "complete" code. 
If this is not the case, one can always derive an equivalent code by adding extra symbols (with associated null probabilities), to make the code complete while keeping it "biunique". As defined by Shannon (1948), the information content "h" (in bits) of each symbol "a"i with non-null probability is formula_11 The entropy "H" (in bits) is the weighted sum, across all symbols "a""i" with non-zero probability "w""i", of the information content of each symbol: formula_12 As a consequence of Shannon's source coding theorem, the entropy is a measure of the smallest codeword length that is theoretically possible for the given alphabet with associated weights. In this example, the weighted average codeword length is 2.25 bits per symbol, only slightly larger than the calculated entropy of 2.205 bits per symbol. So not only is this code optimal in the sense that no other feasible code performs better, but it is very close to the theoretical limit established by Shannon. In general, a Huffman code need not be unique. Thus the set of Huffman codes for a given probability distribution is a non-empty subset of the codes minimizing formula_14 for that probability distribution. (However, for each minimizing codeword length assignment, there exists at least one Huffman code with those lengths.) Basic technique. Compression. The technique works by creating a binary tree of nodes. These can be stored in a regular array, the size of which depends on the number of symbols, formula_1. A node can be either a leaf node or an internal node. Initially, all nodes are leaf nodes, which contain the symbol itself, the weight (frequency of appearance) of the symbol and optionally, a link to a parent node which makes it easy to read the code (in reverse) starting from a leaf node. Internal nodes contain a weight, links to two child nodes and an optional link to a parent node. As a common convention, bit '0' represents following the left child and bit '1' represents following the right child. A finished tree has up to formula_1 leaf nodes and formula_15 internal nodes. A Huffman tree that omits unused symbols produces the most optimal code lengths. The process begins with the leaf nodes containing the probabilities of the symbol they represent. Then, the process takes the two nodes with smallest probability, and creates a new internal node having these two nodes as children. The weight of the new node is set to the sum of the weight of the children. We then apply the process again, on the new internal node and on the remaining nodes (i.e., we exclude the two leaf nodes), we repeat this process until only one node remains, which is the root of the Huffman tree. The simplest construction algorithm uses a priority queue where the node with lowest probability is given highest priority: Since efficient priority queue data structures require O(log "n") time per insertion, and a tree with "n" leaves has 2"n"−1 nodes, this algorithm operates in O("n" log "n") time, where "n" is the number of symbols. If the symbols are sorted by probability, there is a linear-time (O("n")) method to create a Huffman tree using two queues, the first one containing the initial weights (along with pointers to the associated leaves), and combined weights (along with pointers to the trees) being put in the back of the second queue. 
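A minimal Python sketch of the two-queue construction just described (the function name, the node layout, and the sample weights, which are chosen to reproduce the 2.25-bit average of the five-symbol example above, are assumptions of this sketch rather than part of the original description):

```python
from collections import deque

def huffman_codes(freqs):
    """Build a Huffman code with the two-queue method.
    `freqs` maps symbols to weights; symbols must be comparable for the initial sort."""
    # Nodes are (weight, symbol, left, right); leaves carry the symbol, internal nodes carry None.
    leaves = deque(sorted((w, s, None, None) for s, w in freqs.items()))
    internal = deque()

    def pop_min():
        # Take from whichever queue shows the smaller weight at its front; ties go to the leaf queue.
        if not internal or (leaves and leaves[0][0] <= internal[0][0]):
            return leaves.popleft()
        return internal.popleft()

    while len(leaves) + len(internal) > 1:
        a, b = pop_min(), pop_min()
        internal.append((a[0] + b[0], None, a, b))   # combined node joins the back of the second queue

    codes = {}
    def assign(node, prefix):
        _, sym, left, right = node
        if sym is not None:
            codes[sym] = prefix or "0"               # degenerate one-symbol alphabet
        else:
            assign(left, prefix + "0")               # '0' for the left child
            assign(right, prefix + "1")              # '1' for the right child
    assign((internal or leaves)[0], "")
    return codes

# Weights consistent with the five-symbol example discussed above.
freqs = {"a": 0.10, "b": 0.15, "c": 0.30, "d": 0.16, "e": 0.29}
codes = huffman_codes(freqs)
print(codes)
print(sum(freqs[s] * len(codes[s]) for s in freqs))   # ~2.25 bits/symbol, vs. entropy ~2.205
```

Breaking ties toward the leaf queue here is the same convention noted below for keeping the variance of the codeword lengths low.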
This assures that the lowest weight is always kept at the front of one of the two queues: Once the Huffman tree has been generated, it is traversed to generate a dictionary which maps the symbols to binary codes as follows: The final encoding of any symbol is then read by a concatenation of the labels on the edges along the path from the root node to the symbol. In many cases, time complexity is not very important in the choice of algorithm here, since "n" here is the number of symbols in the alphabet, which is typically a very small number (compared to the length of the message to be encoded); whereas complexity analysis concerns the behavior when "n" grows to be very large. It is generally beneficial to minimize the variance of codeword length. For example, a communication buffer receiving Huffman-encoded data may need to be larger to deal with especially long symbols if the tree is especially unbalanced. To minimize variance, simply break ties between queues by choosing the item in the first queue. This modification will retain the mathematical optimality of the Huffman coding while both minimizing variance and minimizing the length of the longest character code. Decompression. Generally speaking, the process of decompression is simply a matter of translating the stream of prefix codes to individual byte values, usually by traversing the Huffman tree node by node as each bit is read from the input stream (reaching a leaf node necessarily terminates the search for that particular byte value). Before this can take place, however, the Huffman tree must be somehow reconstructed. In the simplest case, where character frequencies are fairly predictable, the tree can be preconstructed (and even statistically adjusted on each compression cycle) and thus reused every time, at the expense of at least some measure of compression efficiency. Otherwise, the information to reconstruct the tree must be sent a priori. A naive approach might be to prepend the frequency count of each character to the compression stream. Unfortunately, the overhead in such a case could amount to several kilobytes, so this method has little practical use. If the data is compressed using canonical encoding, the compression model can be precisely reconstructed with just formula_16 bits of information (where B is the number of bits per symbol). Another method is to simply prepend the Huffman tree, bit by bit, to the output stream. For example, assuming that the value of 0 represents a parent node and 1 a leaf node, whenever the latter is encountered the tree building routine simply reads the next 8 bits to determine the character value of that particular leaf. The process continues recursively until the last leaf node is reached; at that point, the Huffman tree will thus be faithfully reconstructed. The overhead using such a method ranges from roughly 2 to 320 bytes (assuming an 8-bit alphabet). Many other techniques are possible as well. In any case, since the compressed data can include unused "trailing bits" the decompressor must be able to determine when to stop producing output. This can be accomplished by either transmitting the length of the decompressed data along with the compression model or by defining a special code symbol to signify the end of input (the latter method can adversely affect code length optimality, however). Main properties. 
Main properties. The probabilities used can be generic ones for the application domain that are based on average experience, or they can be the actual frequencies found in the text being compressed. This requires that a frequency table be stored with the compressed text. See the Decompression section above for more information about the various techniques employed for this purpose. Optimality. Huffman's original algorithm is optimal for a symbol-by-symbol coding with a known input probability distribution, i.e., separately encoding unrelated symbols in such a data stream. However, it is not optimal when the symbol-by-symbol restriction is dropped, or when the probability mass functions are unknown. Also, if symbols are not independent and identically distributed, a single code may be insufficient for optimality. Other methods such as arithmetic coding often have better compression capability. Although both aforementioned methods can combine an arbitrary number of symbols for more efficient coding and generally adapt to the actual input statistics, arithmetic coding does so without significantly increasing its computational or algorithmic complexities (though the simplest version is slower and more complex than Huffman coding). Such flexibility is especially useful when input probabilities are not precisely known or vary significantly within the stream. However, Huffman coding is usually faster and arithmetic coding was historically a subject of some concern over patent issues. Thus many technologies have historically avoided arithmetic coding in favor of Huffman and other prefix coding techniques. As of mid-2010, the most commonly used techniques for this alternative to Huffman coding have passed into the public domain as the early patents have expired. For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding, e.g., ASCII coding. This reflects the fact that compression is not possible with such an input, no matter what the compression method, i.e., doing nothing to the data is the optimal thing to do. Huffman coding is optimal among all methods in any case where each input symbol is a known independent and identically distributed random variable having a probability that is dyadic. Prefix codes, and thus Huffman coding in particular, tend to have inefficiency on small alphabets, where probabilities often fall between these optimal (dyadic) points. The worst case for Huffman coding can happen when the probability of the most likely symbol far exceeds 2^−1 = 0.5, making the upper limit of inefficiency unbounded. There are two related approaches for getting around this particular inefficiency while still using Huffman coding. Combining a fixed number of symbols together ("blocking") often increases (and never decreases) compression. As the size of the block approaches infinity, Huffman coding theoretically approaches the entropy limit, i.e., optimal compression. However, blocking arbitrarily large groups of symbols is impractical, as the complexity of a Huffman code is linear in the number of possibilities to be encoded, a number that is exponential in the size of a block. This limits the amount of blocking that is done in practice. A practical alternative, in widespread use, is run-length encoding. This technique adds one step in advance of entropy coding, specifically counting (runs) of repeated symbols, which are then encoded. For the simple case of Bernoulli processes, Golomb coding is optimal among prefix codes for coding run length, a fact proved via the techniques of Huffman coding. A similar approach is taken by fax machines using modified Huffman coding. However, run-length coding is not as adaptable to as many input types as other compression technologies.
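To make the blocking argument above concrete, the following sketch (an illustrative calculation, not part of the article; the source probability 0.9 is an assumed example value) compares the entropy of a heavily skewed two-symbol source with the rates obtained by Huffman-coding single symbols and pairs of symbols:

```python
from math import log2
import heapq
import itertools

def huffman_lengths(probs):
    """Code lengths of a binary Huffman code for the given probabilities."""
    counter = itertools.count()
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        for i in a + b:
            lengths[i] += 1          # every symbol under the merged node moves one level down
        heapq.heappush(heap, (p1 + p2, next(counter), a + b))
    return lengths

p = 0.9                                                  # assumed skewed source
entropy = -(p * log2(p) + (1 - p) * log2(1 - p))         # about 0.469 bits/symbol
rate_single = 1.0                                        # two symbols always cost 1 bit each
pairs = [p * p, p * (1 - p), (1 - p) * p, (1 - p) ** 2]  # block two symbols at a time
rate_pairs = sum(q * L for q, L in zip(pairs, huffman_lengths(pairs))) / 2

print(entropy, rate_single, rate_pairs)                  # ~0.469, 1.0, 0.645
```

Coding one symbol at a time costs a full bit per symbol, more than twice the entropy of about 0.469 bits; blocking pairs already cuts the rate to about 0.645 bits per symbol, and larger blocks approach the entropy limit at the cost of an alphabet that grows exponentially with the block size, which is exactly the trade-off described above.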
Variations. Many variations of Huffman coding exist, some of which use a Huffman-like algorithm, and others of which find optimal prefix codes (while, for example, putting different restrictions on the output). Note that, in the latter case, the method need not be Huffman-like, and, indeed, need not even be polynomial time. "n"-ary Huffman coding. The "n"-ary Huffman algorithm uses the {0, 1, ..., "n" − 1} alphabet to encode messages and build an "n"-ary tree. This approach was considered by Huffman in his original paper. The same algorithm applies as for binary (formula_17) codes, except that the "n" least probable symbols are taken together, instead of just the 2 least probable. Note that for "n" greater than 2, not all sets of source words can properly form an "n"-ary tree for Huffman coding. In these cases, additional 0-probability place holders must be added. This is because the tree must form an "n" to 1 contractor; for binary coding, this is a 2 to 1 contractor, and any sized set can form such a contractor. If the number of source words is congruent to 1 modulo "n"−1, then the set of source words will form a proper Huffman tree (a small sketch of this padding rule is given after the length-limited coding paragraph below). Adaptive Huffman coding. A variation called adaptive Huffman coding involves calculating the probabilities dynamically based on recent actual frequencies in the sequence of source symbols, and changing the coding tree structure to match the updated probability estimates. It is used rarely in practice, since the cost of updating the tree makes it slower than optimized adaptive arithmetic coding, which is more flexible and has better compression. Huffman template algorithm. Most often, the weights used in implementations of Huffman coding represent numeric probabilities, but the algorithm given above does not require this; it requires only that the weights form a totally ordered commutative monoid, meaning a way to order weights and to add them. The Huffman template algorithm enables one to use any kind of weights (costs, frequencies, pairs of weights, non-numerical weights) and one of many combining methods (not just addition). Such algorithms can solve other minimization problems, such as minimizing formula_18, a problem first applied to circuit design. Length-limited Huffman coding/minimum variance Huffman coding. Length-limited Huffman coding is a variant where the goal is still to achieve a minimum weighted path length, but there is an additional restriction that the length of each codeword must be less than a given constant. The package-merge algorithm solves this problem with a simple greedy approach very similar to that used by Huffman's algorithm. Its time complexity is formula_19, where formula_20 is the maximum length of a codeword. No algorithm is known to solve this problem in formula_21 or formula_22 time, unlike the presorted and unsorted conventional Huffman problems, respectively.
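As mentioned above, here is a minimal sketch of the "n"-ary padding rule (illustrative only; the function name and return convention are not from the article): zero-probability placeholders are added until the node count is congruent to 1 modulo "n" − 1, and then the "n" lightest nodes are merged at each step.

```python
import heapq
import itertools

def nary_code_lengths(weights, n=3):
    """Return the code length of each symbol under an n-ary Huffman code."""
    m = len(weights)
    pad = (1 - m) % (n - 1)              # placeholders so that (m + pad) - 1 is a multiple of n - 1
    counter = itertools.count()          # tie-breaker for the heap
    heap = [(w, next(counter), [i]) for i, w in enumerate(weights)]
    heap += [(0, next(counter), []) for _ in range(pad)]
    heapq.heapify(heap)
    depth = [0] * m
    while len(heap) > 1:
        group = [heapq.heappop(heap) for _ in range(n)]      # the n least probable nodes
        members = [i for _, _, idxs in group for i in idxs]
        for i in members:
            depth[i] += 1                # every merged symbol moves one level down
        heapq.heappush(heap, (sum(w for w, _, _ in group), next(counter), members))
    return depth

print(nary_code_lengths([0.4, 0.3, 0.2, 0.05, 0.05], n=3))   # [1, 1, 2, 2, 2]
```

With five source words and "n" = 3 no placeholder is needed, since 5 is congruent to 1 modulo 2; with six source words one zero-weight placeholder would be added first.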
Huffman coding with unequal letter costs. In the standard Huffman coding problem, it is assumed that each symbol in the set that the code words are constructed from has an equal cost to transmit: a code word whose length is "N" digits will always have a cost of "N", no matter how many of those digits are 0s, how many are 1s, etc. When working under this assumption, minimizing the total cost of the message and minimizing the total number of digits are the same thing. "Huffman coding with unequal letter costs" is the generalization without this assumption: the letters of the encoding alphabet may have non-uniform lengths, due to characteristics of the transmission medium. An example is the encoding alphabet of Morse code, where a 'dash' takes longer to send than a 'dot', and therefore the cost of a dash in transmission time is higher. The goal is still to minimize the weighted average codeword length, but it is no longer sufficient just to minimize the number of symbols used by the message. No algorithm is known to solve this in the same manner or with the same efficiency as conventional Huffman coding, though it has been solved by Karp, whose solution has been refined for the case of integer costs by Golin. Optimal alphabetic binary trees (Hu–Tucker coding). In the standard Huffman coding problem, it is assumed that any codeword can correspond to any input symbol. In the alphabetic version, the alphabetic order of inputs and outputs must be identical. Thus, for example, formula_23 could not be assigned code formula_24, but instead should be assigned either formula_25 or formula_26. This is also known as the Hu–Tucker problem, after T. C. Hu and Alan Tucker, the authors of the paper presenting the first formula_22-time solution to this optimal binary alphabetic problem, which has some similarities to the Huffman algorithm, but is not a variation of this algorithm. A later method, the Garsia–Wachs algorithm of Adriano Garsia and Michelle L. Wachs (1977), uses simpler logic to perform the same comparisons in the same total time bound. These optimal alphabetic binary trees are often used as binary search trees. The canonical Huffman code. If weights corresponding to the alphabetically ordered inputs are in numerical order, the Huffman code has the same lengths as the optimal alphabetic code, which can be found from calculating these lengths, rendering Hu–Tucker coding unnecessary. The code resulting from numerically (re-)ordered input is sometimes called the "canonical Huffman code" and is often the code used in practice, due to ease of encoding/decoding. The technique for finding this code is sometimes called Huffman–Shannon–Fano coding, since it is optimal like Huffman coding, but alphabetic in weight probability, like Shannon–Fano coding. The Huffman–Shannon–Fano code corresponding to the example is formula_27, which, having the same codeword lengths as the original solution, is also optimal. But in "canonical Huffman code", the result is formula_28. Applications. Arithmetic coding and Huffman coding produce equivalent results — achieving entropy — when every symbol has a probability of the form 1/2"k". In other circumstances, arithmetic coding can offer better compression than Huffman coding because — intuitively — its "code words" can have effectively non-integer bit lengths, whereas code words in prefix codes such as Huffman codes can only have an integer number of bits. Therefore, a code word of length "k" only optimally matches a symbol of probability 1/2"k" and other probabilities are not represented optimally; whereas the code word length in arithmetic coding can be made to exactly match the true probability of the symbol. This difference is especially striking for small alphabet sizes. Prefix codes nevertheless remain in wide use because of their simplicity, high speed, and lack of patent coverage.
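A minimal sketch of the canonical construction mentioned above (illustrative, not from the article; the symbol labels a–e are assumed names for the example's five symbols): given only the code length of each symbol, codewords are assigned as consecutive binary numbers, sorted by length and then by symbol.

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords from code lengths alone."""
    code, prev_len, out = 0, 0, {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= length - prev_len            # append zeros when moving to longer codes
        out[sym] = format(code, "0{}b".format(length))
        code += 1                             # the next codeword is simply the increment
        prev_len = length
    return out

print(canonical_codes({"a": 3, "b": 3, "c": 2, "d": 2, "e": 2}))
# {'c': '00', 'd': '01', 'e': '10', 'a': '110', 'b': '111'}
```

For the lengths 3, 3, 2, 2, 2 of the example this reproduces the canonical codeword set quoted above (110, 111, 00, 01, 10 in symbol order), and the regularity of the assignment is what allows a decoder to rebuild the entire code from the lengths alone, as noted in the Decompression section.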
Prefix codes are often used as a "back-end" to other compression methods. Deflate (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 have a front-end model and quantization followed by the use of prefix codes; these are often called "Huffman codes" even though most applications use pre-defined variable-length codes rather than codes designed using Huffman's algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A = (a_{1},a_{2},\\dots,a_{n})" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "W = (w_{1},w_{2},\\dots,w_{n})" }, { "math_id": 3, "text": "w_{i} = \\operatorname{weight}\\left(a_{i}\\right),\\, i \\in \\{1, 2, \\dots, n\\}" }, { "math_id": 4, "text": "C\\left(W\\right) = (c_{1},c_{2},\\dots,c_{n})" }, { "math_id": 5, "text": "c_{i}" }, { "math_id": 6, "text": "a_{i},\\, i \\in \\{1, 2, \\dots, n\\}" }, { "math_id": 7, "text": "L\\left(C\\left(W\\right)\\right) = \\sum_{i=1}^{n} {w_{i}\\operatorname{length}\\left(c_{i}\\right)}" }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "L\\left(C\\left(W\\right)\\right) \\leq L\\left(T\\left(W\\right)\\right)" }, { "math_id": 10, "text": "T\\left(W\\right)" }, { "math_id": 11, "text": "h(a_i) = \\log_2{1 \\over w_i}. " }, { "math_id": 12, "text": " H(A) = \\sum_{w_i > 0} w_i h(a_i) = \\sum_{w_i > 0} w_i \\log_2{1 \\over w_i} = - \\sum_{w_i > 0} w_i \\log_2{w_i}. " }, { "math_id": 13, "text": "\\lim_{w \\to 0^+} w \\log_2 w = 0" }, { "math_id": 14, "text": "L(C)" }, { "math_id": 15, "text": "n-1" }, { "math_id": 16, "text": "B\\cdot 2^B" }, { "math_id": 17, "text": "n = 2" }, { "math_id": 18, "text": "\\max_i\\left[w_{i}+\\mathrm{length}\\left(c_{i}\\right)\\right]" }, { "math_id": 19, "text": "O(nL)" }, { "math_id": 20, "text": "L" }, { "math_id": 21, "text": "O(n)" }, { "math_id": 22, "text": "O(n\\log n)" }, { "math_id": 23, "text": "A = \\left\\{a,b,c\\right\\}" }, { "math_id": 24, "text": "H\\left(A,C\\right) = \\left\\{00,1,01\\right\\}" }, { "math_id": 25, "text": "H\\left(A,C\\right) =\\left\\{00,01,1\\right\\}" }, { "math_id": 26, "text": "H\\left(A,C\\right) = \\left\\{0,10,11\\right\\}" }, { "math_id": 27, "text": "\\{000,001,01,10,11\\}" }, { "math_id": 28, "text": "\\{110,111,00,01,10\\}" } ]
https://en.wikipedia.org/wiki?curid=13883
13885392
Crop coefficient
Crop coefficients are properties of plants used in predicting evapotranspiration (ET). The most basic crop coefficient, "K"c, is simply the ratio of ET observed for the crop studied over that observed for the well-calibrated reference crop under the same conditions. formula_0 Potential evapotranspiration (PET) is the evaporation and transpiration that potentially could occur if a field of the crop had an ideal unlimited water supply. RET is the reference ET, often denoted as ET0. Even in agricultural crops, where ideal conditions are approximated as much as is practical, plants are not always growing (and therefore transpiring) at their theoretical potential. Plants have growth stages and states of health induced by a variety of environmental conditions. RET usually represents the PET of the reference crop's most active growth. "K"c then becomes a function or series of values specific to the crop of interest through its growing season. These can be quite elaborate in the case of certain maize varieties, but tend to use a trapezoidal or leaf area index (LAI) curve for common crop or vegetation canopies. Stress coefficients, "K"s, account for diminished ET due to specific stress factors. These are often assumed to combine by multiplication. formula_1 Water stress is the most ubiquitous stress factor, often denoted as "K"w. Stress coefficients tend to be functions ranging between 0 and 1. The simplest are linear, but thresholds are appropriate for some toxicity responses. Crop coefficients can exceed 1 when the crop evapotranspiration exceeds that of RET. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " PET = K_c * RET" }, { "math_id": 1, "text": "ET_{estimate} = K_w * K_{s_1} * K_{s_2} * K_c * ET_o" } ]
https://en.wikipedia.org/wiki?curid=13885392
13885985
Computron tube
The Computron was an electron tube designed to perform the parallel addition and multiplication of digital numbers. It was conceived by Richard L. Snyder, Jr., Jan A. Rajchman, Paul Rudnick and the digital computer group at the laboratories of the Radio Corporation of America under the direction of Vladimir Zworykin. Development began in 1941 under contract OEM-sr-591 to Division 7 of the National Defense Research Committee of the United States Office of Scientific Research and Development. The numerical function of the Computron was to solve the equation formula_0 where A, B, C, and D are 14-bit inputs and S is a 28-bit output. This function was key to the RCA attempt to produce a fire-control system based on a non-analog (digital) computer for use in artillery aiming during WWII. A simple way to describe the physically complex Computron is to begin with a cathode ray tube structure in the form of a right-circular cylinder with a central vertical cathode structure. The cylinder is composed of 14 discrete planes, each plane having 14 individual radially outward-projecting beams. Each of the 196 individual beams is steered by multiple deflection plates toward its two targets. Some deflection plates are connected to circuitry external to the Computron and are the data inputs. The balance of the plates are connected to internal targets and are the partial sums and products from other stages within the tube. Some of the targets are connected to circuitry outside the tube and represent the result. The electronic function of the Computron design incorporated steered, rather than gated, multiple electron beams. Additionally, the Computron was based on the ability of a secondary electron emission target, under electron bombardment, to assume the potential of the nearest collector electrode. The Additron Tube design by Josef Kates, by contrast, gated electron beams of a fixed trajectory with several control grids which either passed or blocked a current. The Computron was a complex cathode ray tube, while the Additron was a triode with multiple grids and targets. A subsection of the Computron was prototyped and tested, and the concept validated, but the building of an entire device was never attempted. A United States patent for the Computron was filed on 30 July 1943 and granted on 22 July 1947. Modern implications. The Computron design was an early attempt not only to produce a vacuum tube integrated circuit, for both size and reliability (lifetime) reasons, but also to minimize external electrical connections between active elements. The goal of integration is not merely to reduce external signal connections into and out of a package by including multiple active devices in one package, as in the Loewe 3NF tube. It is to merge the functions of the active devices for a technical synergy. A modern example would be the multiple-emitter transistor of transistor–transistor logic integrated circuits. Another modern construct anticipated by the Computron is the barrel shifter circuit, which is used in many microprocessors aimed at numeric computation. Damning praise. The Computron was an idea born of the necessity of war research. It was to be a key element of the electronic digital computer, which had yet to be built. But the project was begun to increase the accuracy of artillery in battle, not to advance the state of the embryonic electronic computer. Its fate was well described in a letter to Dr. Paul E.
Klopsteg, Head of NDRC Division 17, dated 6 February 1943, which concludes: ...As I said above, our entire Division is exceedingly reluctant to see a development which is scientifically so beautiful and so promising dropped at this point, though cold reason tells us that we cannot justify the expenditure of additional Government funds on the basis of Fire Control at this time. Sincerely yours, Harold L. Hazen Chief, Division 7
[ { "math_id": 0, "text": "S=(A*B)+C+D" } ]
https://en.wikipedia.org/wiki?curid=13885985
13888357
Chapman–Robbins bound
In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter. It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems. However, it is usually more difficult to compute. The bound was independently discovered by John Hammersley in 1950, and by Douglas Chapman and Herbert Robbins in 1951. Statement. Let formula_0 be the set of parameters for a family of probability distributions formula_1 on formula_2. For any two formula_3, let formula_4 be the formula_5-divergence from formula_6 to formula_7. Then: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Given any scalar random variable formula_8, and any two formula_9, we have formula_10. A generalization to the multivariable case is: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Given any multivariate random variable formula_11, and any formula_12, formula_13 Proof. By the variational representation of the chi-squared divergence: formula_14 Plug in formula_15 to obtain: formula_16 Switch the denominator and the left side and take the supremum over formula_17 to obtain the single-variate case. For the multivariate case, we define formula_18 for any formula_19. Then plug in formula_20 in the variational representation to obtain: formula_21 Taking the supremum over formula_22 and using the linear algebra fact that formula_23, we obtain the multivariate case. Relation to Cramér–Rao bound. Usually, formula_24 is the sample space of formula_25 independent draws of a formula_26-valued random variable formula_27 with distribution formula_28 from a family of probability distributions parameterized by formula_29, formula_30 is its formula_25-fold product measure, and formula_31 is an estimator of formula_32. Then, for formula_33, the expression inside the supremum in the Chapman–Robbins bound converges to the Cramér–Rao bound of formula_34 when formula_35, assuming the regularity conditions of the Cramér–Rao bound hold. This implies that, when both bounds exist, the Chapman–Robbins version is always at least as tight as the Cramér–Rao bound; in many cases, it is substantially tighter. The Chapman–Robbins bound also holds under much weaker regularity conditions. For example, no assumption is made regarding differentiability of the probability density function "p"("x"; "θ") of formula_28. When "p"("x"; "θ") is non-differentiable, the Fisher information is not defined, and hence the Cramér–Rao bound does not exist. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Theta" }, { "math_id": 1, "text": "\\{\\mu_\\theta : \\theta\\in\\Theta\\}" }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "\\theta, \\theta' \\in \\Theta" }, { "math_id": 4, "text": "\\chi^2(\\mu_{\\theta'}; \\mu_{\\theta})" }, { "math_id": 5, "text": " \\chi^2" }, { "math_id": 6, "text": "\\mu_{\\theta}" }, { "math_id": 7, "text": "\\mu_{\\theta'}" }, { "math_id": 8, "text": "\\hat g: \\Omega \\to \\R" }, { "math_id": 9, "text": "\\theta, \\theta'\\in\\Theta" }, { "math_id": 10, "text": "\\operatorname{Var}_\\theta[\\hat g] \\geq \\sup_{\\theta'\\neq \\theta \\in \\Theta}\\frac{(E_{\\theta'}[\\hat g] - E_{\\theta}[\\hat g])^2}{\\chi^2(\\mu_{\\theta'} ; \\mu_\\theta)}" }, { "math_id": 11, "text": "\\hat g: \\Omega \\to \\R^m" }, { "math_id": 12, "text": "\\theta, \\theta' \\in\\Theta" }, { "math_id": 13, "text": "\\chi^2(\\mu_{\\theta'} ; \\mu_\\theta) \\geq\n(E_{\\theta'}[\\hat g] - E_{\\theta}[\\hat g])^T \\operatorname{Cov}_\\theta[\\hat g]^{-1} (E_{\\theta'}[\\hat g] - E_{\\theta}[\\hat g])" }, { "math_id": 14, "text": "\\chi^2(P; Q) = \\sup_g \\frac{(E_P[g]-E_Q[g])^2}{\\operatorname{Var}_Q[g]}" }, { "math_id": 15, "text": "g = \\hat g, P = \\mu_{\\theta'}, Q = \\mu_\\theta" }, { "math_id": 16, "text": "\\chi^2(\\mu_{\\theta'}; \\mu_\\theta) \\geq \\frac{(E_{\\theta'}[\\hat g]-E_\\theta[\\hat g])^2}{\\operatorname{Var}_\\theta[\\hat g]}" }, { "math_id": 17, "text": "\\theta'" }, { "math_id": 18, "text": "h = \\sum_{i=1}^m v_i \\hat g_i" }, { "math_id": 19, "text": "v\\neq 0 \\in \\R^m" }, { "math_id": 20, "text": "g = h" }, { "math_id": 21, "text": "\\chi^2(\\mu_{\\theta'}; \\mu_\\theta) \\geq \\frac{(E_{\\theta'}[h]-E_\\theta[h])^2}{\\operatorname{Var}_\\theta[h]} = \\frac{\\langle v, E_{\\theta'}[\\hat g]-E_\\theta[\\hat g]\\rangle^2}{v^T \\operatorname{Cov}_\\theta[\\hat g] v} " }, { "math_id": 22, "text": "v\\neq 0 \\in\\R^m " }, { "math_id": 23, "text": "\\sup_{v\\neq 0} \\frac{v^T ww^T v}{v^T M v} = w^T M^{-1}w " }, { "math_id": 24, "text": "\\Omega = \\mathcal X^n" }, { "math_id": 25, "text": "n" }, { "math_id": 26, "text": "\\mathcal X" }, { "math_id": 27, "text": "X" }, { "math_id": 28, "text": "\\lambda_\\theta" }, { "math_id": 29, "text": "\\theta \\in \\Theta \\subseteq \\mathbb R^m" }, { "math_id": 30, "text": "\\mu_\\theta = \\lambda_\\theta^{\\otimes n}" }, { "math_id": 31, "text": "\\hat g : \\mathcal X^n \\to \\Theta" }, { "math_id": 32, "text": "\\theta" }, { "math_id": 33, "text": "m=1" }, { "math_id": 34, "text": "\\hat g" }, { "math_id": 35, "text": "\\theta' \\to \\theta" } ]
https://en.wikipedia.org/wiki?curid=13888357