Fields: id · title · text · formulas · url
7852591
Polynomial matrix
In mathematics, a polynomial matrix or matrix of polynomials is a matrix whose elements are univariate or multivariate polynomials. Equivalently, a polynomial matrix is a polynomial whose coefficients are matrices. A univariate polynomial matrix "P" of degree "p" is defined as: formula_0 where formula_1 denotes a matrix of constant coefficients, and formula_2 is non-zero. An example 3×3 polynomial matrix, degree 2: formula_3 We can express this by saying that for a ring "R", the rings formula_4 and formula_5 are isomorphic. Properties. Note that polynomial matrices are "not" to be confused with monomial matrices, which are simply matrices with exactly one non-zero entry in each row and column. If by λ we denote any element of the field over which we constructed the matrix, by "I" the identity matrix, and we let "A" be a polynomial matrix, then the matrix λ"I" − "A" is the characteristic matrix of the matrix "A". Its determinant, |λ"I" − "A"| is the characteristic polynomial of the matrix "A".
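As a rough illustration of the coefficient-matrix view described above, the following NumPy sketch (not part of the article; variable and function names are illustrative) stores the example 3×3 polynomial matrix of degree 2 as its list of constant coefficient matrices formula_1 and evaluates it at a scalar value of x.

```python
import numpy as np

# Coefficient matrices A(0), A(1), A(2) of the example 3x3 polynomial matrix of degree 2.
A = [
    np.array([[1, 0, 0], [0, 0, 2], [2, -1, 0]]),   # A(0)
    np.array([[0, 0, 1], [0, 2, 0], [3, 0, 0]]),    # A(1)
    np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]]),    # A(2)
]

def eval_poly_matrix(coeffs, x):
    """Evaluate P(x) = sum_n A(n) x^n, treating P as a polynomial with matrix coefficients."""
    return sum(c * x**n for n, c in enumerate(coeffs))

P2 = eval_poly_matrix(A, 2.0)
# Entry-wise check against the original entries, e.g. P[0,1] = x^2 and P[2,0] = 3x + 2 at x = 2.
assert P2[0, 1] == 4.0 and P2[2, 0] == 8.0
print(P2)
```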
[ { "math_id": 0, "text": "P = \\sum_{n=0}^p A(n)x^n = A(0)+A(1)x+A(2)x^2+ \\cdots +A(p)x^p" }, { "math_id": 1, "text": "A(i)" }, { "math_id": 2, "text": "A(p)" }, { "math_id": 3, "text": "\nP=\\begin{pmatrix}\n1 & x^2 & x \\\\\n0 & 2x & 2 \\\\\n3x+2 & x^2-1 & 0\n\\end{pmatrix}\n=\\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 0 & 2 \\\\\n2 & -1 & 0\n\\end{pmatrix}\n\n+\\begin{pmatrix}\n0 & 0 & 1 \\\\\n0 & 2 & 0 \\\\\n3 & 0 & 0\n\\end{pmatrix}x+\\begin{pmatrix}\n0 & 1 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{pmatrix}x^2.\n" }, { "math_id": 4, "text": "M_n(R[X])" }, { "math_id": 5, "text": "(M_n(R))[X]" } ]
https://en.wikipedia.org/wiki?curid=7852591
7852809
Hamiltonian matrix
Mathematical matrix In mathematics, a Hamiltonian matrix is a 2"n"-by-2"n" matrix A such that "JA" is symmetric, where J is the skew-symmetric matrix formula_0 and "In" is the n-by-n identity matrix. In other words, A is Hamiltonian if and only if ("JA")T = "JA" where ()T denotes the transpose. Properties. Suppose that the 2"n"-by-2"n" matrix A is written as the block matrix formula_1 where a, b, c, and d are n-by-n matrices. Then the condition that "A" be Hamiltonian is equivalent to requiring that the matrices "b" and "c" are symmetric, and that "a" + "d"T = 0. Another equivalent condition is that "A" is of the form "A" = "JS" with "S" symmetric. It follows easily from the definition that the transpose of a Hamiltonian matrix is Hamiltonian. Furthermore, the sum (and any linear combination) of two Hamiltonian matrices is again Hamiltonian, as is their commutator. It follows that the space of all Hamiltonian matrices is a Lie algebra, denoted sp(2"n"). The dimension of sp(2"n") is 2"n"2 + "n". The corresponding Lie group is the symplectic group Sp(2"n"). This group consists of the symplectic matrices, those matrices A which satisfy "A"T"JA" = "J". Thus, the matrix exponential of a Hamiltonian matrix is symplectic. However the logarithm of a symplectic matrix is not necessarily Hamiltonian because the exponential map from the Lie algebra to the group is not surjective. The characteristic polynomial of a real Hamiltonian matrix is even. Thus, if a Hamiltonian matrix has λ as an eigenvalue, then −λ, λ* and −λ* are also eigenvalues. It follows that the trace of a Hamiltonian matrix is zero. The square of a Hamiltonian matrix is skew-Hamiltonian (a matrix A is skew-Hamiltonian if ("JA")T = −"JA"). Conversely, every skew-Hamiltonian matrix arises as the square of a Hamiltonian matrix. Extension to complex matrices. As for symplectic matrices, the definition for Hamiltonian matrices can be extended to complex matrices in two ways. One possibility is to say that a matrix A is Hamiltonian if ("JA")T = "JA", as above. Another possibility is to use the condition ("JA")* = "JA" where the superscript asterisk ((⋅)*) denotes the conjugate transpose. Hamiltonian operators. Let V be a vector space, equipped with a symplectic form Ω. A linear map formula_2 is called a Hamiltonian operator with respect to Ω if the form formula_3 is symmetric. Equivalently, it should satisfy formula_4 Choose a basis "e"1, …, "e"2"n" in V, such that Ω is written as formula_5. A linear operator is Hamiltonian with respect to Ω if and only if its matrix in this basis is Hamiltonian.
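The properties above are easy to check numerically. The following NumPy sketch (illustrative only; the random construction is just one convenient way to produce a Hamiltonian matrix, using the equivalent condition "A" = "JS" with "S" symmetric noted above) verifies the defining condition, the block characterisation, the zero trace, and the pairing of eigenvalues λ, −λ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# J is the standard skew-symmetric matrix [[0, I], [-I, 0]].
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

# Any A = J S with S symmetric is Hamiltonian.
S = rng.standard_normal((2 * n, 2 * n))
S = (S + S.T) / 2
A = J @ S

# Defining property: (JA)^T = JA.
assert np.allclose((J @ A).T, J @ A)

# Block characterisation: with A = [[a, b], [c, d]], b and c are symmetric and a + d^T = 0.
a, b = A[:n, :n], A[:n, n:]
c, d = A[n:, :n], A[n:, n:]
assert np.allclose(b, b.T) and np.allclose(c, c.T) and np.allclose(a + d.T, 0)

# The trace vanishes and the eigenvalues come in pairs (lambda, -lambda).
assert abs(np.trace(A)) < 1e-12
eig = np.linalg.eigvals(A)
assert np.allclose(np.sort_complex(eig), np.sort_complex(-eig))
```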
[ { "math_id": 0, "text": "J =\n\\begin{bmatrix}\n0_n & I_n \\\\\n-I_n & 0_n \\\\\n\\end{bmatrix}" }, { "math_id": 1, "text": " A = \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix}" }, { "math_id": 2, "text": "A : \\; V \\mapsto V" }, { "math_id": 3, "text": "x, y \\mapsto \\Omega(A(x), y)" }, { "math_id": 4, "text": "\\Omega(A(x), y) = -\\Omega(x, A(y))" }, { "math_id": 5, "text": "\\sum_i e_i \\wedge e_{n+i}" } ]
https://en.wikipedia.org/wiki?curid=7852809
7852887
Cylindric algebra
Algebraization of first-order logic with equality In mathematics, the notion of cylindric algebra, developed by Alfred Tarski, arises naturally in the algebraization of first-order logic with equality. This is comparable to the role Boolean algebras play for propositional logic. Cylindric algebras are Boolean algebras equipped with additional cylindrification operations that model quantification and equality. They differ from polyadic algebras in that the latter do not model equality. Definition of a cylindric algebra. A cylindric algebra of dimension formula_0 (where formula_0 is any ordinal number) is an algebraic structure formula_1 such that formula_2 is a Boolean algebra, formula_3 a unary operator on formula_4 for every formula_5 (called a "cylindrification"), and formula_6 a distinguished element of formula_4 for every formula_5 and formula_7 (called a "diagonal"), such that the following hold: (C1) formula_8 (C2) formula_9 (C3) formula_10 (C4) formula_11 (C5) formula_12 (C6) If formula_13, then formula_14 (C7) If formula_15, then formula_16 Assuming a presentation of first-order logic without function symbols, the operator formula_17 models existential quantification over variable formula_5 in formula formula_18 while the operator formula_6 models the equality of variables formula_5 and formula_7. Hence, reformulated using standard logical notations, the axioms read as (C1) formula_19 (C2) formula_20 (C3) formula_21 (C4) formula_22 (C5) formula_23 (C6) If formula_5 is a variable different from both formula_7 and formula_24, then formula_25 (C7) If formula_5 and formula_7 are different variables, then formula_26 Cylindric set algebras. A cylindric set algebra of dimension formula_0 is an algebraic structure formula_27 such that formula_28 is a field of sets, formula_29 is given by formula_30, and formula_6 is given by formula_31. It necessarily validates the axioms C1–C7 of a cylindric algebra, with formula_32 instead of formula_33, formula_34 instead of formula_35, set complement for complement, empty set as 0, formula_36 as the unit, and formula_37 instead of formula_38. The set "X" is called the "base". A representation of a cylindric algebra is an isomorphism from that algebra to a cylindric set algebra. Not every cylindric algebra has a representation as a cylindric set algebra. It is easier to connect the semantics of first-order predicate logic with cylindric set algebra. Generalizations. Cylindric algebras have been generalized to the case of many-sorted logic (Caleiro and Gonçalves 2006), which allows for a better modeling of the duality between first-order formulas and terms. Relation to monadic Boolean algebra. When formula_39 and formula_40 are restricted to being only 0, then formula_3 becomes formula_41, the diagonals can be dropped out, and the following theorem of cylindric algebra (Pinter 1973): formula_42 turns into the axiom formula_43 of monadic Boolean algebra. The axiom (C4) drops out (becomes a tautology). Thus monadic Boolean algebra can be seen as a restriction of cylindric algebra to the one variable case.
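The cylindric set algebra definition above can be made concrete for a small finite base. The following Python sketch (illustrative only; the base X, the dimension, and the test sets are arbitrary choices, not from the article) builds the cylindrifications formula_29 and the diagonals formula_6 over subsets of X^α and spot-checks a few of the axioms C1–C5.

```python
from itertools import product

# A tiny cylindric set algebra of dimension 2 over the base X = {0, 1, 2}.
# Elements of the algebra are subsets of X^2, represented as frozensets of tuples.
X = [0, 1, 2]
ALPHA = 2
UNIT = frozenset(product(X, repeat=ALPHA))

def c(kappa, S):
    """Cylindrification: all points agreeing with some point of S on every coordinate except kappa."""
    return frozenset(y for y in UNIT
                     if any(all(y[b] == x[b] for b in range(ALPHA) if b != kappa) for x in S))

def d(kappa, lam):
    """Diagonal element: points whose kappa-th and lambda-th coordinates are equal."""
    return frozenset(x for x in UNIT if x[kappa] == x[lam])

S = frozenset({(0, 1), (2, 2)})
T = frozenset({(1, 1), (2, 0)})

# (C1) c_k(empty) = empty, (C2) S <= c_k(S), (C3) c_k(S & c_k(T)) = c_k(S) & c_k(T).
assert c(0, frozenset()) == frozenset()
assert S <= c(0, S)
assert c(0, S & c(0, T)) == c(0, S) & c(0, T)
# (C4) cylindrifications commute, (C5) d_kk is the unit.
assert c(0, c(1, S)) == c(1, c(0, S))
assert d(0, 0) == UNIT
```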
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "(A,+,\\cdot,-,0,1,c_\\kappa,d_{\\kappa\\lambda})_{\\kappa,\\lambda<\\alpha}" }, { "math_id": 2, "text": "(A,+,\\cdot,-,0,1)" }, { "math_id": 3, "text": "c_\\kappa" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "\\kappa" }, { "math_id": 6, "text": "d_{\\kappa\\lambda}" }, { "math_id": 7, "text": "\\lambda" }, { "math_id": 8, "text": "c_\\kappa 0=0" }, { "math_id": 9, "text": "x\\leq c_\\kappa x" }, { "math_id": 10, "text": "c_\\kappa(x\\cdot c_\\kappa y)=c_\\kappa x\\cdot c_\\kappa y" }, { "math_id": 11, "text": "c_\\kappa c_\\lambda x=c_\\lambda c_\\kappa x" }, { "math_id": 12, "text": "d_{\\kappa\\kappa}=1" }, { "math_id": 13, "text": "\\kappa\\notin\\{\\lambda,\\mu\\}" }, { "math_id": 14, "text": "d_{\\lambda\\mu}=c_\\kappa(d_{\\lambda\\kappa}\\cdot d_{\\kappa\\mu})" }, { "math_id": 15, "text": "\\kappa\\neq\\lambda" }, { "math_id": 16, "text": "c_\\kappa(d_{\\kappa\\lambda}\\cdot x)\\cdot c_\\kappa(d_{\\kappa\\lambda}\\cdot -x)=0" }, { "math_id": 17, "text": "c_\\kappa x" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "\\exists \\kappa. \\mathit{false} \\iff \\mathit{false}" }, { "math_id": 20, "text": "x \\implies \\exists \\kappa. x" }, { "math_id": 21, "text": "\\exists \\kappa. (x\\wedge \\exists \\kappa. y) \\iff (\\exists\\kappa. x) \\wedge (\\exists\\kappa. y)" }, { "math_id": 22, "text": "\\exists\\kappa \\exists\\lambda. x \\iff \\exists \\lambda \\exists\\kappa. x" }, { "math_id": 23, "text": "\\kappa=\\kappa \\iff \\mathit{true}" }, { "math_id": 24, "text": "\\mu" }, { "math_id": 25, "text": "\\lambda=\\mu \\iff \\exists\\kappa. (\\lambda=\\kappa \\wedge \\kappa=\\mu)" }, { "math_id": 26, "text": "\\exists\\kappa. (\\kappa=\\lambda \\wedge x) \\wedge \\exists\\kappa. (\\kappa=\\lambda\\wedge \\neg x) \\iff \\mathit{false}" }, { "math_id": 27, "text": "(A, \\cup, \\cap, -, \\empty, X^\\alpha, c_\\kappa,d_{\\kappa\\lambda})_{\\kappa,\\lambda<\\alpha}" }, { "math_id": 28, "text": "\\langle X^\\alpha, A \\rangle" }, { "math_id": 29, "text": "c_\\kappa S" }, { "math_id": 30, "text": "\\{y \\in X^\\alpha \\mid \\exists x \\in S\\ \\forall \\beta \\neq \\kappa\\ y(\\beta) = x(\\beta)\\}" }, { "math_id": 31, "text": "\\{x \\in X^\\alpha \\mid x(\\kappa) = x(\\lambda)\\}" }, { "math_id": 32, "text": "\\cup" }, { "math_id": 33, "text": "+" }, { "math_id": 34, "text": "\\cap" }, { "math_id": 35, "text": "\\cdot" }, { "math_id": 36, "text": "X^\\alpha" }, { "math_id": 37, "text": "\\subseteq" }, { "math_id": 38, "text": "\\le" }, { "math_id": 39, "text": "\\alpha = 1" }, { "math_id": 40, "text": "\\kappa, \\lambda" }, { "math_id": 41, "text": "\\exists" }, { "math_id": 42, "text": " c_\\kappa (x + y) = c_\\kappa x + c_\\kappa y " }, { "math_id": 43, "text": " \\exists (x + y) = \\exists x + \\exists y " } ]
https://en.wikipedia.org/wiki?curid=7852887
7853003
Conchoid of Dürer
Plane algebraic curve In geometry, the conchoid of Dürer, also called Dürer's shell curve, is a plane, algebraic curve, named after Albrecht Dürer and introduced in 1525. It is not a true conchoid. Construction. Suppose two perpendicular lines are given, with intersection point "O". For concreteness we may assume that these are the coordinate axes and that "O" is the origin, that is (0, 0). Let points "Q" = ("q", 0) and "R" = (0, "r") move on the axes in such a way that "q" + "r" = "b", a constant. On the line "QR", extended as necessary, mark points "P" and P' at a fixed distance a from "Q". The locus of the points "P" and P' is Dürer's conchoid. Equation. The equation of the conchoid in Cartesian form is formula_0 In parametric form the equation is given by formula_1 where the parameter t is measured in radians. Properties. The curve has two components, asymptotic to the lines formula_2. Each component is a rational curve. If "a" > "b" there is a loop, if "a" = "b" there is a cusp at (0,"a"). A special case is "b" = 0, for which the curve factors into the circle formula_3 and the pair of lines formula_2. The envelope of straight lines used in the construction forms a parabola (as seen in Dürer's original diagram) and therefore the curve is a point-glissette formed by a line and one of its points sliding respectively against a parabola and one of its tangents. History. It was first described by the German painter and mathematician Albrecht Dürer (1471–1528) in his book "Underweysung der Messung" ("Instruction in Measurement with Compass and Straightedge" p. 38), calling it "Ein muschellini" ("Conchoid" or "Shell"). Dürer only drew one branch of the curve.
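A quick numerical check of the two equations above: the following NumPy sketch (illustrative only; the parameter values a and b are arbitrary) evaluates the left-hand side of the Cartesian form formula_0 at points generated by the parametric form formula_1 and confirms that the residual is at the level of floating-point error.

```python
import numpy as np

def cartesian(x, y, a, b):
    """Left-hand side of the Cartesian equation of the conchoid (formula_0)."""
    return (2 * y**2 * (x**2 + y**2) - 2 * b * y**2 * (x + y)
            + (b**2 - 3 * a**2) * y**2 - a**2 * x**2
            + 2 * a**2 * b * (x + y) + a**2 * (a**2 - b**2))

def parametric(t, a, b):
    """Parametric form (formula_1); t must avoid cos(t) = sin(t), where the curve runs to infinity."""
    x = b * np.cos(t) / (np.cos(t) - np.sin(t)) + a * np.cos(t)
    y = a * np.sin(t)
    return x, y

a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 400)
t = t[np.abs(np.cos(t) - np.sin(t)) > 0.05]   # stay away from the asymptotic directions
x, y = parametric(t, a, b)
print(np.max(np.abs(cartesian(x, y, a, b))))  # a number near machine precision: the forms agree
```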
[ { "math_id": 0, "text": "2y^2(x^2+y^2) - 2by^2(x+y) + (b^2-3a^2)y^2 - a^2x^2 + 2a^2b(x+y) + a^2(a^2-b^2) = 0 . " }, { "math_id": 1, "text": "\\begin{align}\nx &= \\frac{b \\cos(t)}{\\cos(t) - \\sin(t)} + a \\cos(t),\\\\\ny &= a \\sin(t),\n\\end{align}" }, { "math_id": 2, "text": "y = \\pm a / \\sqrt2" }, { "math_id": 3, "text": "x^2+y^2=a^2" } ]
https://en.wikipedia.org/wiki?curid=7853003
7853706
Mathieu transformation
The Mathieu transformations make up a subgroup of canonical transformations preserving the differential form formula_0 The transformation is named after the French mathematician Émile Léonard Mathieu. Details. In order to have this invariance, there should exist at least one relation between formula_1 and formula_2 only (without any formula_3 involved): formula_4 where formula_5. When formula_6, a Mathieu transformation becomes a Lagrange point transformation.
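A minimal symbolic illustration of the preserved form, assuming SymPy and a single degree of freedom; the particular transformation Q = q², P = p/(2q) is an arbitrary example of a point transformation (the case m = n mentioned above), not taken from the article.

```python
import sympy as sp

# One-degree-of-freedom illustration: the point transformation Q = q**2 with P = p / (2*q)
# preserves the form p*dq = P*dQ, so it is a Mathieu (here, Lagrange point) transformation.
q, p = sp.symbols('q p', positive=True)

Q = q**2
P = p / (2 * q)

# Write P*dQ in terms of the old coordinates via dQ = (dQ/dq) dq.
P_dQ = sp.simplify(P * sp.diff(Q, q))
assert P_dQ == p   # i.e. P*dQ equals p*dq
print(P_dQ)
```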
[ { "math_id": 0, "text": "\\sum_i p_i \\delta q_i=\\sum_i P_i \\delta Q_i \\," }, { "math_id": 1, "text": "q_i" }, { "math_id": 2, "text": "Q_i" }, { "math_id": 3, "text": "p_i,P_i" }, { "math_id": 4, "text": "\n\\begin{align}\n\\Omega_1(q_1,q_2,\\ldots,q_n,Q_1,Q_2,\\ldots Q_n) & =0 \\\\\n& {}\\ \\ \\vdots\\\\\n\\Omega_m(q_1,q_2,\\ldots,q_n,Q_1,Q_2,\\ldots Q_n) & =0\n\\end{align}\n" }, { "math_id": 5, "text": "1 < m \\le n" }, { "math_id": 6, "text": "m=n" } ]
https://en.wikipedia.org/wiki?curid=7853706
7859407
Weakly o-minimal structure
In model theory, a weakly o-minimal structure is a model-theoretic structure whose definable sets in the domain are just finite unions of convex sets. Definition. A linearly ordered structure, "M", with language "L" including an ordering relation <, is called weakly o-minimal if every parametrically definable subset of "M" is a finite union of convex (definable) subsets. A theory is weakly o-minimal if all its models are weakly o-minimal. Note that, in contrast to o-minimality, it is possible for a theory to have models that are weakly o-minimal and to have other models that are not weakly o-minimal. Difference from o-minimality. In an o-minimal structure formula_0 the definable sets in formula_1 are finite unions of points and intervals, where "interval" stands for a set of the form formula_2, for some "a" and "b" in formula_3. For weakly o-minimal structures formula_0 this is relaxed so that the definable sets in "M" are finite unions of convex definable sets. A set formula_4 is convex if, whenever "a" and "b" are in formula_4 with "a" < "b", and "c" ∈ formula_1 satisfies "a" < "c" < "b", then "c" is in formula_4. Points and intervals are of course convex sets, but there are convex sets that are neither points nor intervals, as explained below. If we have a weakly o-minimal structure expanding (R, <) (for instance, the real ordered field), then the structure will be o-minimal. The two notions are different in other settings though. For example, let "R" be the ordered field of real algebraic numbers with the usual ordering < inherited from R. Take a transcendental number, say "π", and add a unary relation "S" to the structure given by the subset (−"π","π") ∩ "R". Now consider the subset "A" of "R" defined by the formula formula_5 so that the set consists of all strictly positive real algebraic numbers that are less than "π". The set is clearly convex, but cannot be written as a finite union of points and intervals whose endpoints are in "R". To write it as an interval one would have to include the endpoint "π", which isn't in "R"; otherwise one would require infinitely many intervals, such as the union formula_6 Since we have a definable set that isn't a finite union of points and intervals, this structure is not o-minimal. However, it is known that the structure is weakly o-minimal, and in fact the theory of this structure is weakly o-minimal.
[ { "math_id": 0, "text": "(M,<,...)" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "I=\\{r\\in M\\,:\\,a<r<b\\}" }, { "math_id": 3, "text": "M \\cup \\{\\pm \\infty\\}" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "0<a \\,\\wedge\\, S(a)" }, { "math_id": 6, "text": "\\bigcup_{\\alpha<\\pi}(0,\\alpha)." } ]
https://en.wikipedia.org/wiki?curid=7859407
7859676
Missing data
Statistical concept In statistics, missing data, or missing values, occur when no data value is stored for the variable in an observation. Missing data are a common occurrence and can have a significant effect on the conclusions that can be drawn from the data. Missing data can occur because of nonresponse: no information is provided for one or more items or for a whole unit ("subject"). Some items are more likely to generate a nonresponse than others: for example items about private subjects such as income. Attrition is a type of missingness that can occur in longitudinal studies—for instance studying development where a measurement is repeated after a certain period of time. Missingness occurs when participants drop out before the test ends and one or more measurements are missing. Data often are missing in research in economics, sociology, and political science because governments or private entities choose not to, or fail to, report critical statistics, or because the information is not available. Sometimes missing values are caused by the researcher—for example, when data collection is done improperly or mistakes are made in data entry. These forms of missingness take different types, with different impacts on the validity of conclusions from research: Missing completely at random, missing at random, and missing not at random. Missing data can be handled similarly as censored data. Types. Understanding the reasons why data are missing is important for handling the remaining data correctly. If values are missing completely at random, the data sample is likely still representative of the population. But if the values are missing systematically, analysis may be biased. For example, in a study of the relation between IQ and income, if participants with an above-average IQ tend to skip the question ‘What is your salary?’, analyses that do not take into account this missing at random (MAR pattern (see below)) may falsely fail to find a positive association between IQ and salary. Because of these problems, methodologists routinely advise researchers to design studies to minimize the occurrence of missing values. Graphical models can be used to describe the missing data mechanism in detail. Missing completely at random. Values in a data set are missing completely at random (MCAR) if the events that lead to any particular data-item being missing are independent both of observable variables and of unobservable parameters of interest, and occur entirely at random. When data are MCAR, the analysis performed on the data is unbiased; however, data are rarely MCAR. In the case of MCAR, the missingness of data is unrelated to any study variable: thus, the participants with completely observed data are in effect a random sample of all the participants assigned a particular intervention. With MCAR, the random assignment of treatments is assumed to be preserved, but that is usually an unrealistically strong assumption in practice. Missing at random. Missing at random (MAR) occurs when the missingness is not random, but where missingness can be fully accounted for by variables where there is complete information. Since MAR is an assumption that is impossible to verify statistically, we must rely on its substantive reasonableness. An example is that males are less likely to fill in a depression survey but this has nothing to do with their level of depression, after accounting for maleness. 
Depending on the analysis method, these data can still induce parameter bias in analyses due to the contingent emptiness of cells (male, very high depression may have zero entries). However, if the parameter is estimated with Full Information Maximum Likelihood, MAR will provide asymptotically unbiased estimates. Missing not at random. Missing not at random (MNAR) (also known as nonignorable nonresponse) is data that is neither MAR nor MCAR (i.e. the value of the variable that's missing is related to the reason it's missing). To extend the previous example, this would occur if men failed to fill in a depression survey "because" of their level of depression. Samuelson and Spirer (1992) discussed how missing and/or distorted data about demographics, law enforcement, and health could be indicators of patterns of human rights violations. They gave several fairly well documented examples. Structured Missingness. Missing data can also arise in subtle ways that are not well accounted for in classical theory. An increasingly encountered problem arises in which data may not be MAR but missing values exhibit an association or structure, either explicitly or implicitly. Such missingness has been described as ‘structured missingness’. Structured missingness commonly arises when combining information from multiple studies, each of which may vary in its design and measurement set and therefore only contain a subset of variables from the union of measurement modalities. In these situations, missing values may relate to the various sampling methodologies used to collect the data or reflect characteristics of the wider population of interest, and so may impart useful information. For instance, in a health context, structured missingness has been observed as a consequence of linking clinical, genomic and imaging data. The presence of structured missingness may be a hindrance to make effective use of data at scale, including through both classical statistical and current machine learning methods. For example, there might be bias inherent in the reasons why some data might be missing in patterns, which might have implications in predictive fairness for machine learning models. Furthermore, established methods for dealing with missing data, such as imputation, do not usually take into account the structure of the missing data and so development of new formulations is needed to deal with structured missingness appropriately or effectively. Finally, characterising structured missingness within the classical framework of MCAR, MAR, and MNAR is a work in progress. Techniques of dealing with missing data. Missing data reduces the representativeness of the sample and can therefore distort inferences about the population. Generally speaking, there are three main approaches to handle missing data: (1) "Imputation"—where values are filled in the place of missing data, (2) "omission"—where samples with invalid data are discarded from further analysis and (3) "analysis"—by directly applying methods unaffected by the missing values. One systematic review addressing the prevention and handling of missing data for patient-centered outcomes research identified 10 standards as necessary for the prevention and handling of missing data. These include standards for study design, study conduct, analysis, and reporting. In some practical application, the experimenters can control the level of missingness, and prevent missing values before gathering the data. 
For example, in computer questionnaires, it is often not possible to skip a question. A question has to be answered, otherwise one cannot continue to the next. So missing values due to the participant are eliminated by this type of questionnaire, though this method may not be permitted by an ethics board overseeing the research. In survey research, it is common to make multiple efforts to contact each individual in the sample, often sending letters to attempt to persuade those who have decided not to participate to change their minds. However, such techniques can either help or hurt in terms of reducing the negative inferential effects of missing data, because the kind of people who are willing to be persuaded to participate after initially refusing or not being home are likely to be significantly different from the kinds of people who will still refuse or remain unreachable after additional effort. In situations where missing values are likely to occur, the researcher is often advised to plan to use data analysis methods that are robust to missingness. An analysis is robust when we are confident that mild to moderate violations of the technique's key assumptions will produce little or no bias, or distortion in the conclusions drawn about the population. Imputation. Some data analysis techniques are not robust to missingness, and require the missing data to be "filled in", or imputed. Rubin (1987) argued that repeating imputation even a few times (5 or less) enormously improves the quality of estimation. For many practical purposes, 2 or 3 imputations capture most of the relative efficiency that could be captured with a larger number of imputations. However, a too-small number of imputations can lead to a substantial loss of statistical power, and some scholars now recommend 20 to 100 or more. Any multiply-imputed data analysis must be repeated for each of the imputed data sets and, in some cases, the relevant statistics must be combined in a relatively complicated way. In some disciplines multiple imputation is not used, owing to a lack of training and to misconceptions about the method. Methods such as listwise deletion have been used instead to handle missing data, but this has been found to introduce additional bias. A beginner's guide provides step-by-step instructions on how to impute data. The expectation-maximization algorithm is an approach in which values of the statistics which would be computed if a complete dataset were available are estimated (imputed), taking into account the pattern of missing data. In this approach, values for individual missing data-items are not usually imputed. Interpolation. In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. In the comparison of two paired samples with missing data, a test statistic that uses all available data without the need for imputation is the partially overlapping samples t-test. This is valid under normality and assuming MCAR. Partial deletion. Methods which involve reducing the data available to a dataset having no missing values include listwise (complete-case) deletion and pairwise deletion. Full analysis. Methods which take full account of all information available, without the distortion resulting from using imputed values as if they were actually observed, include full information maximum likelihood estimation. Partial identification methods may also be used. Model-based techniques. 
Model-based techniques, often using graphs, offer additional tools for testing missing data types (MCAR, MAR, MNAR) and for estimating parameters under missing data conditions. For example, a test for refuting MAR/MCAR reads as follows: For any three variables "X,Y", and "Z" where "Z" is fully observed and "X" and "Y" partially observed, the data should satisfy: formula_0. In words, the observed portion of "X" should be independent of the missingness status of "Y," conditional on every value of "Z". Failure to satisfy this condition indicates that the problem belongs to the MNAR category. (Remark: These tests are necessary for variable-based MAR, which is a slight variation of event-based MAR.) When data fall into the MNAR category, techniques are available for consistently estimating parameters when certain conditions hold in the model. For example, if "Y" explains the reason for missingness in "X" and "Y" itself has missing values, the joint probability distribution of "X" and "Y" can still be estimated if the missingness of "Y" is random. The estimand in this case will be: formula_1 where formula_2 and formula_3 denote the observed portions of their respective variables. Different model structures may yield different estimands and different procedures of estimation whenever consistent estimation is possible. The preceding estimand calls for first estimating formula_4 from complete data and multiplying it by formula_5 estimated from cases in which "Y" is observed regardless of the status of "X". Moreover, in order to obtain a consistent estimate it is crucial that the first term be formula_4 as opposed to formula_6. In many cases model-based techniques permit the model structure to undergo refutation tests. Any model which implies the independence between a partially observed variable "X" and the missingness indicator of another variable "Y" (i.e. formula_7), conditional on formula_8 can be submitted to the following refutation test: formula_9. Finally, the estimands that emerge from these techniques are derived in closed form and do not require iterative procedures such as Expectation Maximization that are susceptible to local optima. A special class of problems appears when the probability of the missingness depends on time. For example, in trauma databases the probability of losing data about the trauma outcome depends on the number of days after the trauma. In these cases various non-stationary Markov chain models are applied.
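As a rough illustration of how the missingness mechanism affects even simple estimates, the following NumPy simulation (illustrative only; the variables, sample size, and missingness probabilities are arbitrary choices echoing the IQ/salary example above) compares complete-case means under MCAR, MAR, and MNAR.

```python
import numpy as np

# Income depends on IQ; compare complete-case means of income when values go missing
# completely at random (MCAR), depending only on observed IQ (MAR), or depending on
# the (unobserved) income itself (MNAR).
rng = np.random.default_rng(42)
n = 100_000
iq = rng.normal(100, 15, n)
income = 1000 + 20 * (iq - 100) + rng.normal(0, 100, n)

def complete_case_mean(values, missing_mask):
    return values[~missing_mask].mean()

mcar = rng.random(n) < 0.3                                 # missing completely at random
mar = rng.random(n) < np.where(iq > 110, 0.6, 0.1)         # driven by the fully observed IQ
mnar = rng.random(n) < np.where(income > 1200, 0.6, 0.1)   # driven by the missing value itself

print("true mean     ", income.mean())
print("MCAR estimate ", complete_case_mean(income, mcar))  # approximately unbiased
print("MAR estimate  ", complete_case_mean(income, mar))   # biased, but correctable given IQ
print("MNAR estimate ", complete_case_mean(income, mnar))  # biased; needs modelling assumptions
```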
[ { "math_id": 0, "text": "X \\perp\\!\\!\\!\\perp R_y |(R_x,Z)" }, { "math_id": 1, "text": "\n\\begin{align}\nP(X,Y)& =P(X|Y) P(Y) \\\\\n & =P(X|Y,R_x=0,R_y=0) P(Y|R_y=0)\n\\end{align}\n" }, { "math_id": 2, "text": "R_x=0" }, { "math_id": 3, "text": "R_y=0" }, { "math_id": 4, "text": "P(X|Y)" }, { "math_id": 5, "text": "P(Y)" }, { "math_id": 6, "text": "P(Y|X)" }, { "math_id": 7, "text": "R_y" }, { "math_id": 8, "text": "R_x" }, { "math_id": 9, "text": "X \\perp\\!\\!\\!\\perp R_y | R_x =0" } ]
https://en.wikipedia.org/wiki?curid=7859676
7862454
Falconer's formula
Mathematical formula used to calculate heritability in twin studies Heritability is the proportion of the variance of a specific trait in a population that is caused by genetic factors. Falconer's formula is a mathematical formula that is used in twin studies to estimate the relative contribution of genetic vs. environmental factors to variation in a particular trait (that is, the heritability of the trait) based on the difference between twin correlations. Statistical models for heritability commonly include an error term that absorbs phenotypic variation that cannot be described by genetics; this term represents unique, subject-specific influences on a trait. Falconer's formula was first proposed by the Scottish geneticist Douglas Falconer. The formula is formula_0 where formula_1 is the broad sense heritability, formula_2 is the (monozygotic, MZ) identical twin correlation, and formula_3 is the (dizygotic, DZ) fraternal twin correlation. Falconer's formula assumes the equal contribution of environmental factors in MZ pairs and DZ pairs. Therefore, any additional phenotypic correlation in MZ pairs relative to DZ pairs is due to genetic factors. Subtracting the DZ correlation from the MZ correlation yields half the proportion of phenotypic variance contributed by genetic factors, hence the factor of two in the formula. Because MZ twins are always of the same sex while DZ pairs may be of opposite sexes, comparing MZ correlations with correlations from mixed-sex DZ pairs counts sex differences as heritable. To avoid this error, only genetic studies comparing MZ twins with same-sex DZ twins are valid. Correlations between formula_4 (additive genetics) and formula_5 (common environment) must be included in the derivation shown below. formula_6 formula_7
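A small simulation can illustrate the formula. The following NumPy sketch (illustrative only; it assumes a purely additive model with no correlation between formula_4 and formula_5, so the correction terms in the derivation above vanish) generates MZ and DZ twin pairs and recovers the heritability from the two twin correlations.

```python
import numpy as np

# ACE-style simulation: additive-genetic variance A, shared-environment variance C,
# unique-environment variance E; Falconer's formula should recover H^2 = 2*(r_mz - r_dz) ~ A.
rng = np.random.default_rng(1)
A, C, E = 0.5, 0.3, 0.2          # true variance components (sum to 1)
n_pairs = 200_000

def twin_pairs(shared_fraction):
    """Simulate trait values for twin pairs sharing a fraction of their additive genetic variance."""
    g_shared = rng.normal(0, np.sqrt(shared_fraction * A), n_pairs)
    g_own = rng.normal(0, np.sqrt((1 - shared_fraction) * A), (2, n_pairs))
    c = rng.normal(0, np.sqrt(C), n_pairs)
    e = rng.normal(0, np.sqrt(E), (2, n_pairs))
    return g_shared + g_own + c + e            # shape (2, n_pairs): twin 1 and twin 2

r_mz = np.corrcoef(*twin_pairs(1.0))[0, 1]     # MZ twins share all additive genetic variance
r_dz = np.corrcoef(*twin_pairs(0.5))[0, 1]     # DZ twins share half on average
print(f"r_mz = {r_mz:.3f}, r_dz = {r_dz:.3f}, Falconer H^2 = {2 * (r_mz - r_dz):.3f}")  # ~0.5
```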
[ { "math_id": 0, "text": "{H_b}^2 = 2(r_{mz} - r_{dz})" }, { "math_id": 1, "text": "{H_b}^2" }, { "math_id": 2, "text": "r_{mz}" }, { "math_id": 3, "text": "r_{dz}" }, { "math_id": 4, "text": "A = {H_b}^2" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "r_{mz} = A + C + 2 \\cdot \\text{Corr}(A,C)" }, { "math_id": 7, "text": "r_{dz} = \\frac{1}{2}A + C + 2 \\cdot \\text{Corr}(\\tfrac{1}{2}A,C)" } ]
https://en.wikipedia.org/wiki?curid=7862454
7864525
Shift matrix
In mathematics, a shift matrix is a binary matrix with ones only on the superdiagonal or subdiagonal, and zeroes elsewhere. A shift matrix "U" with ones on the superdiagonal is an upper shift matrix. The alternative subdiagonal matrix "L" is unsurprisingly known as a lower shift matrix. The ("i", "j")th components of "U" and "L" are formula_0 where formula_1 is the Kronecker delta symbol. For example, the 5 × 5 shift matrices are formula_2 Clearly, the transpose of a lower shift matrix is an upper shift matrix and vice versa. As a linear transformation, a lower shift matrix shifts the components of a column vector one position down, with a zero appearing in the first position. An upper shift matrix shifts the components of a column vector one position up, with a zero appearing in the last position. Premultiplying a matrix "A" by a lower shift matrix results in the elements of "A" being shifted downward by one position, with zeroes appearing in the top row. Postmultiplication by a lower shift matrix results in a shift left. Similar operations involving an upper shift matrix result in the opposite shift. Clearly all finite-dimensional shift matrices are nilpotent; an "n" × "n" shift matrix "S" becomes the zero matrix when raised to the power of its dimension "n". Shift matrices act on shift spaces. The infinite-dimensional shift matrices are particularly important for the study of ergodic systems. Important examples of infinite-dimensional shifts are the Bernoulli shift, which acts as a shift on Cantor space, and the Gauss map, which acts as a shift on the space of continued fractions (that is, on Baire space.) Properties. Let "L" and "U" be the "n" × "n" lower and upper shift matrices, respectively. The following properties hold for both "U" and "L". Let us therefore only list the properties for "U"; for example, the characteristic polynomial of "U" is formula_3, so its only eigenvalue is 0. Further properties relate "U" and "L" to one another; for instance, "L" = "U"T, and the products "UL" and "LU" are the diagonal matrices obtained from the identity by setting the last and the first diagonal entry, respectively, to zero. If "N" is any nilpotent matrix, then "N" is similar to a block diagonal matrix of the form formula_4 where each of the blocks "S"1, "S"2, ..., "S""r" is a shift matrix (possibly of different sizes). Examples. Let formula_5 Then, formula_6 Clearly there are many possible permutations. For example, formula_7 is equal to the matrix "A" shifted up and left along the main diagonal. formula_8
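The constructions and the shifting behaviour described above are easy to reproduce numerically. A minimal NumPy sketch (illustrative only):

```python
import numpy as np

n = 5
U = np.eye(n, k=1)    # upper shift matrix: ones on the superdiagonal
L = np.eye(n, k=-1)   # lower shift matrix: ones on the subdiagonal

assert np.array_equal(U.T, L)                    # transpose of an upper shift is a lower shift
assert np.array_equal(np.linalg.matrix_power(U, n), np.zeros((n, n)))  # nilpotent: U^n = 0

v = np.arange(1, n + 1, dtype=float)             # column vector (1, 2, 3, 4, 5)^T
print(L @ v)   # [0. 1. 2. 3. 4.]  -> components shifted down, zero enters at the top
print(U @ v)   # [2. 3. 4. 5. 0.]  -> components shifted up, zero enters at the bottom

A = np.arange(25, dtype=float).reshape(n, n)
print(np.array_equal((L @ A)[1:], A[:-1]))       # premultiplying by L shifts the rows of A down
print(np.array_equal((A @ L)[:, :-1], A[:, 1:])) # postmultiplying by L shifts the columns left
```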
[ { "math_id": 0, "text": "U_{ij} = \\delta_{i+1,j}, \\quad L_{ij} = \\delta_{i,j+1}," }, { "math_id": 1, "text": "\\delta_{ij}" }, { "math_id": 2, "text": "U_5 = \\begin{pmatrix}\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix} \\quad\nL_5 = \\begin{pmatrix}\n0 & 0 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0\n\\end{pmatrix}." }, { "math_id": 3, "text": "p_U(\\lambda) = (-1)^n\\lambda^n." }, { "math_id": 4, "text": "\\begin{pmatrix} \n S_1 & 0 & \\ldots & 0 \\\\ \n 0 & S_2 & \\ldots & 0 \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & \\ldots & S_r \n\\end{pmatrix}" }, { "math_id": 5, "text": "S = \\begin{pmatrix}\n0 & 0 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0\n\\end{pmatrix}; \\quad A = \\begin{pmatrix}\n1 & 1 & 1 & 1 & 1 \\\\\n1 & 2 & 2 & 2 & 1 \\\\\n1 & 2 & 3 & 2 & 1 \\\\\n1 & 2 & 2 & 2 & 1 \\\\\n1 & 1 & 1 & 1 & 1\n\\end{pmatrix}." }, { "math_id": 6, "text": "SA = \\begin{pmatrix}\n0 & 0 & 0 & 0 & 0 \\\\\n1 & 1 & 1 & 1 & 1 \\\\\n1 & 2 & 2 & 2 & 1 \\\\\n1 & 2 & 3 & 2 & 1 \\\\\n1 & 2 & 2 & 2 & 1\n\\end{pmatrix}; \\quad AS = \\begin{pmatrix}\n1 & 1 & 1 & 1 & 0 \\\\\n2 & 2 & 2 & 1 & 0 \\\\\n2 & 3 & 2 & 1 & 0 \\\\\n2 & 2 & 2 & 1 & 0 \\\\\n1 & 1 & 1 & 1 & 0\n\\end{pmatrix}." }, { "math_id": 7, "text": "S^\\mathsf{T} A S" }, { "math_id": 8, "text": "\nS^\\mathsf{T}AS=\\begin{pmatrix}\n2 & 2 & 2 & 1 & 0 \\\\\n2 & 3 & 2 & 1 & 0 \\\\\n2 & 2 & 2 & 1 & 0 \\\\\n1 & 1 & 1 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix}." } ]
https://en.wikipedia.org/wiki?curid=7864525
7864709
Green's function (many-body theory)
Correlators of field operators In many-body theory, the term Green's function (or Green function) is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators. The name comes from the Green's functions used to solve inhomogeneous differential equations, to which they are loosely related. (Specifically, only two-point 'Green's functions' in the case of a non-interacting system are Green's functions in the mathematical sense; the linear operator that they invert is the Hamiltonian operator, which in the non-interacting case is quadratic in the fields.) Spatially uniform case. Basic definitions. We consider a many-body theory with field operator (annihilation operator written in the position basis) formula_0. The Heisenberg operators can be written in terms of Schrödinger operators as formula_1and the creation operator is formula_2, where formula_3 is the grand-canonical Hamiltonian. Similarly, for the imaginary-time operators, formula_4 formula_5 [Note that the imaginary-time creation operator formula_6 is not the Hermitian conjugate of the annihilation operator formula_7.] In real time, the formula_8-point Green function is defined by formula_9 where we have used a condensed notation in which formula_10 signifies formula_11 and formula_12 signifies formula_13. The operator formula_14 denotes time ordering, and indicates that the field operators that follow it are to be ordered so that their time arguments increase from right to left. In imaginary time, the corresponding definition is formula_15 where formula_10 signifies formula_16. (The imaginary-time variables formula_17 are restricted to the range from formula_18 to the inverse temperature formula_19.) Note regarding signs and normalization used in these definitions: The signs of the Green functions have been chosen so that Fourier transform of the two-point (formula_20) thermal Green function for a free particle is formula_21 and the retarded Green function is formula_22 where formula_23 is the Matsubara frequency. Throughout, formula_24 is formula_25 for bosons and formula_26 for fermions and formula_27 denotes either a commutator or anticommutator as appropriate. Two-point functions. The Green function with a single pair of arguments (formula_20) is referred to as the two-point function, or propagator. In the presence of both spatial and temporal translational symmetry, it depends only on the difference of its arguments. Taking the Fourier transform with respect to both space and time gives formula_28 where the sum is over the appropriate Matsubara frequencies (and the integral involves an implicit factor of formula_29, as usual). In real time, we will explicitly indicate the time-ordered function with a superscript T: formula_30 The real-time two-point Green function can be written in terms of 'retarded' and 'advanced' Green functions, which will turn out to have simpler analyticity properties. The retarded and advanced Green functions are defined by formula_31 and formula_32 respectively. They are related to the time-ordered Green function by formula_33 where formula_34 is the Bose–Einstein or Fermi–Dirac distribution function. Imaginary-time ordering and "β"-periodicity. The thermal Green functions are defined only when both imaginary-time arguments are within the range formula_18 to formula_35. The two-point Green function has the following properties. (The position or momentum arguments are suppressed in this section.) 
Firstly, it depends only on the difference of the imaginary times: formula_36 The argument formula_37 is allowed to run from formula_38 to formula_35. Secondly, formula_39 is (anti)periodic under shifts of formula_35. Because of the small domain within which the function is defined, this means just formula_40 for formula_41. Time ordering is crucial for this property, which can be proved straightforwardly, using the cyclicity of the trace operation. These two properties allow for the Fourier transform representation and its inverse, formula_42 Finally, note that formula_39 has a discontinuity at formula_43; this is consistent with a long-distance behaviour of formula_44. Spectral representation. The propagators in real and imaginary time can both be related to the spectral density (or spectral weight), given by formula_45 where |"α"⟩ refers to a (many-body) eigenstate of the grand-canonical Hamiltonian "H" − "μN", with eigenvalue "Eα". The imaginary-time propagator is then given by formula_46 and the retarded propagator by formula_47 where the limit as formula_48 is implied. The advanced propagator is given by the same expression, but with formula_49 in the denominator. The time-ordered function can be found in terms of formula_50 and formula_51. As claimed above, formula_52 and formula_53 have simple analyticity properties: the former (latter) has all its poles and discontinuities in the lower (upper) half-plane. The thermal propagator formula_54 has all its poles and discontinuities on the imaginary formula_55 axis. The spectral density can be found very straightforwardly from formula_50, using the Sokhatsky–Weierstrass theorem formula_56 where P denotes the Cauchy principal part. This gives formula_57 This furthermore implies that formula_58 obeys the following relationship between its real and imaginary parts: formula_59 where formula_60 denotes the principal value of the integral. The spectral density obeys a sum rule, formula_61 which gives formula_62 as formula_63. Hilbert transform. The similarity of the spectral representations of the imaginary- and real-time Green functions allows us to define the function formula_64 which is related to formula_65 and formula_50 by formula_66 and formula_67 A similar expression obviously holds for formula_51. The relation between formula_68 and formula_69 is referred to as a Hilbert transform. Proof of spectral representation. We demonstrate the proof of the spectral representation of the propagator in the case of the thermal Green function, defined as formula_70 Due to translational symmetry, it is only necessary to consider formula_71 for formula_72, given by formula_73 Inserting a complete set of eigenstates gives formula_74 Since formula_75 and formula_76 are eigenstates of formula_77, the Heisenberg operators can be rewritten in terms of Schrödinger operators, giving formula_78 Performing the Fourier transform then gives formula_79 Momentum conservation allows the final term to be written as (up to possible factors of the volume) formula_80 which confirms the expressions for the Green functions in the spectral representation. The sum rule can be proved by considering the expectation value of the commutator, formula_81 and then inserting a complete set of eigenstates into both terms of the commutator: formula_82 Swapping the labels in the first term then gives formula_83 which is exactly the result of the integration of ρ. Non-interacting case. 
In the non-interacting case, formula_84 is an eigenstate with (grand-canonical) energy formula_85, where formula_86 is the single-particle dispersion relation measured with respect to the chemical potential. The spectral density therefore becomes formula_87 From the commutation relations, formula_88 with possible factors of the volume again. The sum, which involves the thermal average of the number operator, then gives simply formula_89, leaving formula_90 The imaginary-time propagator is thus formula_91 and the retarded propagator is formula_92 Zero-temperature limit. As "β" → ∞, the spectral density becomes formula_93 where "α" = 0 corresponds to the ground state. Note that only the first (second) term contributes when ω is positive (negative). General case. Basic definitions. We can use 'field operators' as above, or creation and annihilation operators associated with other single-particle states, perhaps eigenstates of the (noninteracting) kinetic energy. We then use formula_94 where formula_95 is the annihilation operator for the single-particle state formula_96 and formula_97 is that state's wavefunction in the position basis. This gives formula_98 with a similar expression for formula_99. Two-point functions. These depend only on the difference of their time arguments, so that formula_100 and formula_101 We can again define retarded and advanced functions in the obvious way; these are related to the time-ordered function in the same way as above. The same periodicity properties as described in above apply to formula_102. Specifically, formula_103 and formula_104 for formula_105. Spectral representation. In this case, formula_106 where formula_107 and formula_108 are many-body states. The expressions for the Green functions are modified in the obvious ways: formula_109 and formula_110 Their analyticity properties are identical to those of formula_111 and formula_58 defined in the translationally invariant case. The proof follows exactly the same steps, except that the two matrix elements are no longer complex conjugates. Noninteracting case. If the particular single-particle states that are chosen are 'single-particle energy eigenstates', i.e. formula_112 then for formula_113 an eigenstate: formula_114 so is formula_115: formula_116 and so is formula_117: formula_118 We therefore have formula_119 We then rewrite formula_120 therefore formula_121 use formula_122 and the fact that the thermal average of the number operator gives the Bose–Einstein or Fermi–Dirac distribution function. Finally, the spectral density simplifies to give formula_123 so that the thermal Green function is formula_124 and the retarded Green function is formula_125 Note that the noninteracting Green function is diagonal, but this will not be true in the interacting case.
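As a rough numerical illustration of the spectral representation and the sum rule for the non-interacting case, the following NumPy sketch (illustrative only; it keeps the positive infinitesimal η finite, so the delta function in formula_90 appears as a narrow Lorentzian, and the energy ξ is an arbitrary choice) checks that the spectral density obtained from the retarded propagator formula_92 integrates to one and peaks at ω = ξ.

```python
import numpy as np

xi = 0.7          # single-particle energy measured from the chemical potential
eta = 1e-3        # positive infinitesimal, kept finite for the numerics
omega = np.linspace(-20.0, 20.0, 2_000_001)
d_omega = omega[1] - omega[0]

# Free retarded propagator G^R(omega) = 1 / (-(omega + i*eta) + xi); rho = 2 Im G^R.
GR = 1.0 / (-(omega + 1j * eta) + xi)
rho = 2.0 * GR.imag

# Sum rule: the integral of rho(omega)/(2*pi) over omega equals 1.
print(np.sum(rho) * d_omega / (2 * np.pi))   # ~1.0

# rho approximates 2*pi*delta(omega - xi): a narrow Lorentzian centred at xi.
print(omega[np.argmax(rho)])                 # ~0.7
```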
[ { "math_id": 0, "text": "\\psi(\\mathbf{x})" }, { "math_id": 1, "text": "\\psi(\\mathbf{x},t) = e^{i K t} \\psi(\\mathbf{x}) e^{-i K t},\n" }, { "math_id": 2, "text": "\\bar\\psi(\\mathbf{x},t) = [\\psi(\\mathbf{x},t)]^\\dagger" }, { "math_id": 3, "text": "K = H - \\mu N" }, { "math_id": 4, "text": "\\psi(\\mathbf{x},\\tau) = e^{K \\tau} \\psi(\\mathbf{x}) e^{-K\\tau}" }, { "math_id": 5, "text": "\\bar\\psi(\\mathbf{x},\\tau) = e^{K \\tau} \\psi^\\dagger(\\mathbf{x}) e^{-K\\tau}." }, { "math_id": 6, "text": "\\bar\\psi(\\mathbf{x},\\tau)" }, { "math_id": 7, "text": "\\psi(\\mathbf{x},\\tau)" }, { "math_id": 8, "text": "2n" }, { "math_id": 9, "text": " G^{(n)}(1 \\ldots n \\mid 1' \\ldots n') = i^n \\langle T\\psi(1)\\ldots\\psi(n)\\bar\\psi(n')\\ldots\\bar\\psi(1')\\rangle, " }, { "math_id": 10, "text": "j" }, { "math_id": 11, "text": "(\\mathbf{x}_j, t_j)" }, { "math_id": 12, "text": "j'" }, { "math_id": 13, "text": "(\\mathbf{x}_j', t_j')" }, { "math_id": 14, "text": "T" }, { "math_id": 15, "text": " \\mathcal{G}^{(n)}(1 \\ldots n \\mid 1' \\ldots n') = \\langle T\\psi(1)\\ldots\\psi(n)\\bar\\psi(n')\\ldots\\bar\\psi(1')\\rangle, " }, { "math_id": 16, "text": "\\mathbf{x}_j, \\tau_j" }, { "math_id": 17, "text": "\\tau_j" }, { "math_id": 18, "text": "0" }, { "math_id": 19, "text": "\\beta = \\frac{1}{k_\\text{B} T}" }, { "math_id": 20, "text": "n=1" }, { "math_id": 21, "text": " \\mathcal{G}(\\mathbf{k},\\omega_n) = \\frac{1}{-i\\omega_n + \\xi_\\mathbf{k}}, " }, { "math_id": 22, "text": "G^{\\mathrm{R}}(\\mathbf{k},\\omega) = \\frac{1}{-(\\omega+i\\eta) + \\xi_\\mathbf{k}}," }, { "math_id": 23, "text": "\\omega_n = \\frac{[2n+\\theta(-\\zeta)]\\pi}{\\beta}" }, { "math_id": 24, "text": "\\zeta" }, { "math_id": 25, "text": "+1" }, { "math_id": 26, "text": "-1" }, { "math_id": 27, "text": "[\\ldots,\\ldots]=[\\ldots,\\ldots]_{-\\zeta}" }, { "math_id": 28, "text": "\\mathcal{G}(\\mathbf{x}\\tau\\mid\\mathbf{x}'\\tau') = \\int_\\mathbf{k} d\\mathbf{k} \\frac{1}{\\beta}\\sum_{\\omega_n} \\mathcal{G}(\\mathbf{k},\\omega_n) e^{i \\mathbf{k}\\cdot(\\mathbf{x}-\\mathbf{x}')-i\\omega_n (\\tau-\\tau')}," }, { "math_id": 29, "text": "(L/2\\pi)^{d}" }, { "math_id": 30, "text": "G^{\\mathrm{T}}(\\mathbf{x} t\\mid\\mathbf{x}' t') = \\int_\\mathbf{k} d \\mathbf{k} \\int \\frac{d\\omega}{2\\pi} G^{\\mathrm{T}}(\\mathbf{k},\\omega) e^{i \\mathbf{k}\\cdot(\\mathbf{x} -\\mathbf{x} ')-i\\omega(t-t')}." }, { "math_id": 31, "text": "G^{\\mathrm{R}}(\\mathbf{x} t \\mid \\mathbf{x}' t') = -i\\langle[\\psi(\\mathbf{x} ,t),\\bar\\psi(\\mathbf{x} ',t')]_{\\zeta}\\rangle\\Theta(t-t')" }, { "math_id": 32, "text": "G^{\\mathrm{A}}(\\mathbf{x} t\\mid\\mathbf{x} 't') = i\\langle[\\psi(\\mathbf{x} ,t),\\bar\\psi(\\mathbf{x}', t')]_{\\zeta}\\rangle \\Theta(t'-t)," }, { "math_id": 33, "text": "G^{\\mathrm{T}}(\\mathbf{k},\\omega) = [1+\\zeta n(\\omega)]G^{\\mathrm{R}}(\\mathbf{k},\\omega) - \\zeta n(\\omega) G^{\\mathrm{A}}(\\mathbf{k},\\omega)," }, { "math_id": 34, "text": "n(\\omega) = \\frac{1}{e^{\\beta \\omega}-\\zeta}" }, { "math_id": 35, "text": "\\beta" }, { "math_id": 36, "text": "\\mathcal{G}(\\tau,\\tau') = \\mathcal{G}(\\tau - \\tau')." 
}, { "math_id": 37, "text": "\\tau - \\tau'" }, { "math_id": 38, "text": "-\\beta" }, { "math_id": 39, "text": "\\mathcal{G}(\\tau)" }, { "math_id": 40, "text": "\\mathcal{G}(\\tau - \\beta) = \\zeta \\mathcal{G}(\\tau)," }, { "math_id": 41, "text": "0 < \\tau < \\beta" }, { "math_id": 42, "text": "\\mathcal{G}(\\omega_n) = \\int_0^\\beta d\\tau \\, \\mathcal{G}(\\tau)\\, e^{i\\omega_n \\tau}." }, { "math_id": 43, "text": "\\tau = 0" }, { "math_id": 44, "text": "\\mathcal{G}(\\omega_n) \\sim 1/|\\omega_n|" }, { "math_id": 45, "text": "\\rho(\\mathbf{k},\\omega) = \\frac{1}{\\mathcal{Z}}\\sum_{\\alpha,\\alpha'} 2\\pi \\delta(E_\\alpha-E_{\\alpha'} - \\omega) |\\langle \\alpha \\mid \\psi_\\mathbf{k}^\\dagger \\mid \\alpha'\\rangle|^2 \\left(e^{-\\beta E_{\\alpha'}} - \\zeta e^{-\\beta E_{\\alpha}}\\right)," }, { "math_id": 46, "text": "\n\\mathcal{G}(\\mathbf{k},\\omega_n) = \\int_{-\\infty}^\\infty \\frac{d\\omega'}{2\\pi} \\frac{\\rho(\\mathbf{k},\\omega')}{-i\\omega_n+\\omega'}~,\n" }, { "math_id": 47, "text": "G^{\\mathrm{R}}(\\mathbf{k},\\omega) = \\int_{-\\infty}^\\infty \\frac{d\\omega'}{2\\pi} \\frac{\\rho(\\mathbf{k},\\omega')}{-(\\omega+i\\eta)+\\omega'}," }, { "math_id": 48, "text": "\\eta \\to 0^+" }, { "math_id": 49, "text": "-i\\eta" }, { "math_id": 50, "text": "G^{\\mathrm{R}}" }, { "math_id": 51, "text": "G^{\\mathrm{A}}" }, { "math_id": 52, "text": "G^{\\mathrm{R}}(\\omega)" }, { "math_id": 53, "text": "G^{\\mathrm{A}}(\\omega)" }, { "math_id": 54, "text": "\\mathcal{G}(\\omega_n)" }, { "math_id": 55, "text": "\\omega_n" }, { "math_id": 56, "text": "\\lim_{\\eta \\to 0^+} \\frac{1}{x\\pm i\\eta} = P\\frac{1}{x} \\mp i\\pi\\delta(x)," }, { "math_id": 57, "text": "\\rho(\\mathbf{k},\\omega) = 2\\operatorname{Im} G^{\\mathrm{R}}(\\mathbf{k},\\omega)." }, { "math_id": 58, "text": "G^{\\mathrm{R}}(\\mathbf{k},\\omega)" }, { "math_id": 59, "text": "\\operatorname{Re} G^{\\mathrm{R}}(\\mathbf{k},\\omega) = -2 P \\int_{-\\infty}^\\infty \\frac{d\\omega'}{2\\pi} \\frac{\\operatorname{Im} G^{\\mathrm{R}}(\\mathbf{k},\\omega')}{\\omega-\\omega'}," }, { "math_id": 60, "text": "P" }, { "math_id": 61, "text": "\\int_{-\\infty}^\\infty \\frac{d\\omega}{2\\pi} \\rho(\\mathbf{k},\\omega) = 1," }, { "math_id": 62, "text": "G^{\\mathrm{R}}(\\omega)\\sim\\frac{1}{|\\omega|}" }, { "math_id": 63, "text": "|\\omega| \\to \\infty" }, { "math_id": 64, "text": "G(\\mathbf{k},z) = \\int_{-\\infty}^\\infty \\frac{dx}{2\\pi} \\frac{\\rho(\\mathbf{k},x)}{-z+x}," }, { "math_id": 65, "text": "\\mathcal{G}" }, { "math_id": 66, "text": "\\mathcal{G}(\\mathbf{k},\\omega_n) = G(\\mathbf{k}, i\\omega_n)" }, { "math_id": 67, "text": "G^{\\mathrm{R}}(\\mathbf{k},\\omega) = G(\\mathbf{k},\\omega + i\\eta)." }, { "math_id": 68, "text": "G(\\mathbf{k},z)" }, { "math_id": 69, "text": "\\rho(\\mathbf{k},x)" }, { "math_id": 70, "text": "\\mathcal{G}(\\mathbf{x} , \\tau\\mid\\mathbf{x} ',\\tau') = \\langle T\\psi(\\mathbf{x} ,\\tau)\\bar\\psi(\\mathbf{x} ', \\tau') \\rangle." 
}, { "math_id": 71, "text": "\\mathcal{G}(\\mathbf{x} ,\\tau\\mid\\mathbf{0},0)" }, { "math_id": 72, "text": "\\tau > 0" }, { "math_id": 73, "text": "\n\\mathcal{G}(\\mathbf{x},\\tau\\mid\\mathbf{0},0) = \\frac{1}{\\mathcal{Z}}\\sum_{\\alpha'} e^{-\\beta E_{\\alpha'}}\n\\langle\\alpha' \\mid \\psi(\\mathbf{x},\\tau)\\bar\\psi(\\mathbf{0},0) \\mid \\alpha' \\rangle.\n" }, { "math_id": 74, "text": "\n\\mathcal{G}(\\mathbf{x} ,\\tau\\mid\\mathbf{0},0) = \\frac{1}{\\mathcal{Z}}\\sum_{\\alpha,\\alpha'} e^{-\\beta E_{\\alpha'}}\n\\langle\\alpha' \\mid \\psi(\\mathbf{x} ,\\tau)\\mid\\alpha \\rangle\\langle\\alpha \\mid \\bar\\psi(\\mathbf{0},0) \\mid \\alpha' \\rangle.\n" }, { "math_id": 75, "text": "|\\alpha \\rangle" }, { "math_id": 76, "text": "|\\alpha' \\rangle" }, { "math_id": 77, "text": "H-\\mu N" }, { "math_id": 78, "text": "\n\\mathcal{G}(\\mathbf{x} ,\\tau|\\mathbf{0},0) = \\frac{1}{\\mathcal{Z}}\\sum_{\\alpha,\\alpha'} e^{-\\beta E_{\\alpha'}}\ne^{\\tau(E_{\\alpha'} - E_\\alpha)}\\langle\\alpha' \\mid \\psi(\\mathbf{x} )\\mid\\alpha \\rangle \\langle\\alpha \\mid \\psi^\\dagger(\\mathbf{0}) \\mid \\alpha' \\rangle.\n" }, { "math_id": 79, "text": "\n\\mathcal{G}(\\mathbf{k},\\omega_n) = \\frac{1}{\\mathcal{Z}} \\sum_{\\alpha,\\alpha'} e^{-\\beta E_{\\alpha'}} \\frac{1-\\zeta e^{\\beta(E_{\\alpha'} - E_\\alpha)}}{-i\\omega_n + E_\\alpha - E_{\\alpha'}} \\int_{\\mathbf{k}'} d\\mathbf{k}' \\langle\\alpha \\mid \\psi(\\mathbf{k}) \\mid \\alpha' \\rangle\\langle\\alpha' \\mid \\psi^\\dagger(\\mathbf{k}')\\mid\\alpha \\rangle.\n" }, { "math_id": 80, "text": "|\\langle\\alpha' \\mid\\psi^\\dagger(\\mathbf{k})\\mid\\alpha \\rangle|^2," }, { "math_id": 81, "text": "1 = \\frac{1}{\\mathcal{Z}} \\sum_\\alpha \\langle\\alpha \\mid e^{-\\beta(H-\\mu N)}[\\psi_\\mathbf{k},\\psi_\\mathbf{k}^\\dagger]_{-\\zeta} \\mid \\alpha \\rangle," }, { "math_id": 82, "text": "\n1 = \\frac{1}{\\mathcal{Z}} \\sum_{\\alpha,\\alpha'} e^{-\\beta E_\\alpha} \\left(\n\\langle\\alpha \\mid \\psi_\\mathbf{k} \\mid \\alpha' \\rangle\\langle\\alpha' \\mid \\psi_\\mathbf{k}^\\dagger \\mid \\alpha \\rangle - \\zeta \\langle\\alpha \\mid \\psi_\\mathbf{k}^\\dagger \\mid \\alpha' \\rangle\\langle\\alpha' \\mid \\psi_\\mathbf{k}\\mid\\alpha \\rangle\n\\right).\n" }, { "math_id": 83, "text": "\n1 = \\frac{1}{\\mathcal{Z}} \\sum_{\\alpha,\\alpha'}\n\\left(e^{-\\beta E_{\\alpha'}} - \\zeta e^{-\\beta E_\\alpha} \\right)\n|\\langle\\alpha \\mid \\psi_\\mathbf{k}^\\dagger \\mid \\alpha' \\rangle|^2 ~,\n" }, { "math_id": 84, "text": "\\psi_\\mathbf{k}^\\dagger\\mid\\alpha' \\rangle" }, { "math_id": 85, "text": "E_{\\alpha'} + \\xi_\\mathbf{k}" }, { "math_id": 86, "text": "\\xi_\\mathbf{k} = \\epsilon_\\mathbf{k} - \\mu" }, { "math_id": 87, "text": "\n\\rho_0(\\mathbf{k},\\omega) = \\frac{1}{\\mathcal{Z}}\\,2\\pi\\delta(\\xi_\\mathbf{k} - \\omega) \\sum_{\\alpha'}\\langle\\alpha' \\mid\\psi_\\mathbf{k}\\psi_\\mathbf{k}^\\dagger\\mid\\alpha' \\rangle(1-\\zeta e^{-\\beta\\xi_\\mathbf{k}})e^{-\\beta E_{\\alpha'}}.\n" }, { "math_id": 88, "text": "\n\\langle\\alpha' \\mid \\psi_\\mathbf{k}\\psi_\\mathbf{k}^\\dagger\\mid\\alpha' \\rangle =\n\\langle\\alpha' \\mid(1+\\zeta\\psi_\\mathbf{k}^\\dagger\\psi_\\mathbf{k})\\mid\\alpha' \\rangle,\n" }, { "math_id": 89, "text": "[1 + \\zeta n(\\xi_\\mathbf{k})]\\mathcal{Z}" }, { "math_id": 90, "text": "\\rho_0(\\mathbf{k},\\omega) = 2\\pi\\delta(\\xi_\\mathbf{k} - \\omega)." 
}, { "math_id": 91, "text": "\\mathcal{G}_0(\\mathbf{k},\\omega) = \\frac{1}{-i\\omega_n + \\xi_\\mathbf{k}}" }, { "math_id": 92, "text": "G_0^{\\mathrm{R}}(\\mathbf{k},\\omega) = \\frac{1}{-(\\omega+i \\eta) + \\xi_\\mathbf{k}}." }, { "math_id": 93, "text": "\n\\rho(\\mathbf{k},\\omega) = 2\\pi\\sum_{\\alpha} \\left[ \\delta(E_\\alpha - E_0 - \\omega)\n\\left|\\left\\langle \\alpha \\mid \\psi_\\mathbf{k}^\\dagger \\mid 0 \\right\\rangle\\right|^2\n- \\zeta \\delta(E_0 - E_{\\alpha} - \\omega)\n\\left|\\left\\langle 0 \\mid \\psi_\\mathbf{k}^\\dagger \\mid \\alpha \\right\\rangle\\right|^2\\right]\n" }, { "math_id": 94, "text": "\\psi(\\mathbf{x} ,\\tau) = \\varphi_\\alpha(\\mathbf{x} ) \\psi_\\alpha(\\tau)," }, { "math_id": 95, "text": "\\psi_\\alpha" }, { "math_id": 96, "text": "\\alpha" }, { "math_id": 97, "text": "\\varphi_\\alpha(\\mathbf{x} )" }, { "math_id": 98, "text": "\n\\mathcal{G}^{(n)}_{\\alpha_1\\ldots\\alpha_n|\\beta_1\\ldots\\beta_n}(\\tau_1 \\ldots \\tau_n | \\tau_1' \\ldots \\tau_n')\n= \\langle T\\psi_{\\alpha_1}(\\tau_1)\\ldots\\psi_{\\alpha_n}(\\tau_n)\\bar\\psi_{\\beta_n}(\\tau_n')\\ldots\\bar\\psi_{\\beta_1}(\\tau_1')\\rangle\n" }, { "math_id": 99, "text": "G^{(n)}" }, { "math_id": 100, "text": "\n\\mathcal{G}_{\\alpha\\beta}(\\tau\\mid \\tau')\n= \\frac{1}{\\beta}\\sum_{\\omega_n}\n\\mathcal{G}_{\\alpha\\beta}(\\omega_n)\\,e^{-i\\omega_n (\\tau-\\tau')}\n" }, { "math_id": 101, "text": "\nG_{\\alpha\\beta}(t\\mid t')\n= \\int_{-\\infty}^{\\infty}\\frac{d\\omega}{2\\pi}\\,\nG_{\\alpha\\beta}(\\omega)\\,e^{-i\\omega(t-t')}.\n" }, { "math_id": 102, "text": "\\mathcal{G}_{\\alpha\\beta}" }, { "math_id": 103, "text": "\\mathcal{G}_{\\alpha\\beta}(\\tau\\mid\\tau') = \\mathcal{G}_{\\alpha\\beta}(\\tau-\\tau')" }, { "math_id": 104, "text": "\\mathcal{G}_{\\alpha\\beta}(\\tau) = \\mathcal{G}_{\\alpha\\beta}(\\tau + \\beta)," }, { "math_id": 105, "text": "\\tau < 0" }, { "math_id": 106, "text": "\n\\rho_{\\alpha\\beta}(\\omega) = \\frac{1}{\\mathcal{Z}}\\sum_{m,n} 2\\pi \\delta(E_n-E_m-\\omega)\\;\n\\langle m \\mid \\psi_\\alpha\\mid n \\rangle\\langle n \\mid \\psi_\\beta^\\dagger\\mid m \\rangle\n\\left(e^{-\\beta E_m} - \\zeta e^{-\\beta E_n}\\right) ,\n" }, { "math_id": 107, "text": "m" }, { "math_id": 108, "text": "n" }, { "math_id": 109, "text": " \\mathcal{G}_{\\alpha\\beta}(\\omega_n) = \\int_{-\\infty}^{\\infty} \\frac{d\\omega'}{2\\pi} \\frac{\\rho_{\\alpha\\beta}(\\omega')}{-i\\omega_n+\\omega'}" }, { "math_id": 110, "text": "G^{\\mathrm{R}}_{\\alpha\\beta}(\\omega) = \\int_{-\\infty}^{\\infty} \\frac{d\\omega'}{2\\pi} \\frac{\\rho_{\\alpha\\beta}(\\omega')}{-(\\omega+i\\eta)+\\omega'}." }, { "math_id": 111, "text": "\\mathcal{G}(\\mathbf{k},\\omega_n)" }, { "math_id": 112, "text": "[H-\\mu N,\\psi_\\alpha^\\dagger] = \\xi_\\alpha\\psi_\\alpha^\\dagger," }, { "math_id": 113, "text": "|n \\rangle" }, { "math_id": 114, "text": "(H-\\mu N)\\mid n \\rangle = E_n \\mid n \\rangle," }, { "math_id": 115, "text": "\\psi_\\alpha \\mid n \\rangle" }, { "math_id": 116, "text": "(H-\\mu N)\\psi_\\alpha\\mid n \\rangle = (E_n - \\xi_\\alpha) \\psi_\\alpha \\mid n \\rangle," }, { "math_id": 117, "text": "\\psi_\\alpha^\\dagger\\mid n \\rangle" }, { "math_id": 118, "text": "(H-\\mu N)\\psi_\\alpha^\\dagger \\mid n \\rangle = (E_n + \\xi_\\alpha) \\psi_\\alpha^\\dagger \\mid n \\rangle." 
}, { "math_id": 119, "text": "\\langle m \\mid \\psi_\\alpha\\mid n \\rangle\\langle n \\mid \\psi_\\beta^\\dagger\\mid m \\rangle =\\delta_{\\xi_\\alpha, \\xi_\\beta} \\delta_{E_n, E_m + \\xi_\\alpha} \\langle m \\mid \\psi_\\alpha\\mid n \\rangle\\langle n \\mid \\psi_\\beta^\\dagger \\mid m \\rangle." }, { "math_id": 120, "text": "\n\\rho_{\\alpha\\beta}(\\omega) = \\frac{1}{\\mathcal{Z}}\\sum_{m,n} 2\\pi \\delta(\\xi_\\alpha-\\omega)\n\\delta_{\\xi_\\alpha,\\xi_\\beta}\\langle m \\mid \\psi_\\alpha\\mid n \\rangle\\langle n \\mid \\psi_\\beta^\\dagger \\mid m \\rangle\ne^{-\\beta E_m} \\left(1 - \\zeta e^{-\\beta \\xi_\\alpha}\\right),\n" }, { "math_id": 121, "text": "\n\\rho_{\\alpha\\beta}(\\omega) = \\frac{1}{\\mathcal{Z}}\\sum_m 2\\pi \\delta(\\xi_\\alpha-\\omega)\n\\delta_{\\xi_\\alpha,\\xi_\\beta}\\langle m \\mid \\psi_\\alpha\\psi_\\beta^\\dagger e^{-\\beta (H-\\mu N)}\\mid m \\rangle\n\\left(1 - \\zeta e^{-\\beta \\xi_\\alpha}\\right),\n" }, { "math_id": 122, "text": "\\langle m \\mid \\psi_\\alpha \\psi_\\beta^\\dagger\\mid m \\rangle = \\delta_{\\alpha,\\beta}\\langle m \\mid \\zeta \\psi_\\alpha^\\dagger \\psi_\\alpha + 1 \\mid m \\rangle" }, { "math_id": 123, "text": "\\rho_{\\alpha\\beta} = 2\\pi \\delta(\\xi_\\alpha - \\omega)\\delta_{\\alpha\\beta}," }, { "math_id": 124, "text": "\\mathcal{G}_{\\alpha\\beta}(\\omega_n) = \\frac{\\delta_{\\alpha\\beta}}{-i\\omega_n + \\xi_\\beta}" }, { "math_id": 125, "text": "G_{\\alpha\\beta}(\\omega) = \\frac{\\delta_{\\alpha\\beta}}{-(\\omega+i\\eta) + \\xi_\\beta}." } ]
https://en.wikipedia.org/wiki?curid=7864709
786751
Accrued interest
Money earned on an investment with interest In finance, accrued interest is the interest on a bond or loan that has accumulated since the principal investment, or since the previous coupon payment if there has been one already. For a type of obligation such as a bond, interest is calculated and paid at set intervals (for instance annually or semi-annually). However ownership of bonds/loans can be transferred between different investors at any time, not just on an interest payment date. After such a transfer, the new owner will usually receive the next interest payment, but the previous owner must be compensated for the period of time for which he or she owned the bond. In other words, the previous owner must be paid the interest that accrued before the sale. This is generally done in one of two ways, depending on market convention: On the other hand, if the sale is made during a short set period immediately before the next interest payment, then the seller, not the buyer, will receive the interest payment from the issuer of the loan (the borrower), and Accounting. In accounting, accrual-based accounting generally requires (in order to present a true and fair view) that accrued interest is computed and recorded at the end of each accounting period, perhaps by means of adjusting journal entries. This enables the accrued interest to be included in the lender's balance sheet as an asset (and in the borrower's balance sheet as a provision or liability). However if the accounts use the market price as derived by method 2 above, then such an adjustment for accrued interest is not necessary, as it has already been included in the market price. Formula. The primary formula for calculating the interest accrued in a given period is: formula_0 where formula_1 is the accrued interest, formula_2 is the fraction of the year, formula_3 is the principal, and formula_4 is the annualized interest rate. formula_2 is usually calculated as follows: formula_5 where formula_6 is the number of days in the period, and formula_7 is the number of days in the year. The main variables that affect the calculation are the period between interest payments and the day count convention used to determine the fraction of year, and the date rolling convention in use.
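As an illustration of the formula above, here is a minimal sketch (not part of the original article) that computes accrued interest assuming an actual/365 day count; real markets use the specific day count and date rolling conventions mentioned above.

def accrued_interest(principal, annual_rate, days_elapsed, days_in_year=365):
    """Interest accrued over a fraction of the year: I_A = T * P * R."""
    t = days_elapsed / days_in_year      # T, the fraction of the year
    return t * principal * annual_rate

# Example: 90 days of accrual on a 10,000 principal paying 6% per year.
print(accrued_interest(10_000, 0.06, 90))   # about 147.95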
[ { "math_id": 0, "text": "\nI_A = T \\times P \\times R\n" }, { "math_id": 1, "text": "I_A" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "\nT = \\frac{D_P}{D_Y}\n" }, { "math_id": 6, "text": "D_P" }, { "math_id": 7, "text": "D_Y" } ]
https://en.wikipedia.org/wiki?curid=786751
7869295
Adjoint filter
In signal processing, the adjoint filter mask formula_0 of a filter mask formula_1 is reversed in time and the elements are complex conjugated. formula_2 Its name is derived from the fact that convolution with the adjoint filter is the adjoint operator of the original filter, with respect to the Hilbert space formula_3 of square-summable sequences, whose inner product induces the Euclidean norm. formula_4 The autocorrelation of a signal formula_5 can be written as formula_6. Properties. The adjoint operation is an involution, formula_7, it is compatible with convolution, formula_8, and it maps a left shift to the corresponding right shift, formula_9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
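A small numerical check of the adjoint property, written as a sketch in Python/NumPy. The filter and signal values are made up, and the "valid" truncation only handles the alignment of finite arrays, since the article works with bi-infinite sequences.

import numpy as np

h = np.array([1 + 2j, 0.5 - 1j, 3 + 0j])              # an arbitrary filter mask
x = np.arange(8) + 1j * np.arange(8, 0, -1)            # arbitrary input signal
y = np.arange(10) - 1j * np.arange(10)                 # len(y) = len(x) + len(h) - 1

h_adj = np.conj(h)[::-1]                               # adjoint: reverse in time, conjugate

lhs = np.vdot(np.convolve(h, x), y)                    # <h * x, y>  (vdot conjugates its first argument)
rhs = np.vdot(x, np.convolve(h_adj, y, mode="valid"))  # <x, h^* * y>
print(np.allclose(lhs, rhs))                           # True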
[ { "math_id": 0, "text": "h^*" }, { "math_id": 1, "text": "h" }, { "math_id": 2, "text": "(h^*)_k = \\overline{h_{-k}}" }, { "math_id": 3, "text": "\\ell_2" }, { "math_id": 4, "text": "\\langle h*x, y \\rangle = \\langle x, h^* * y \\rangle" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "x^* * x" }, { "math_id": 7, "text": "{h^*}^* = h" }, { "math_id": 8, "text": "(h*g)^* = h^* * g^*" }, { "math_id": 9, "text": "(h\\leftarrow k)^* = h^* \\rightarrow k" } ]
https://en.wikipedia.org/wiki?curid=7869295
7870034
Algebraic logic
Reasoning about equations with free variables In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic . Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator . Calculus of relations. A homogeneous binary relation is found in the power set of "X" × "X" for some set "X", while a heterogeneous relation is found in the power set of "X" × "Y", where "X" ≠ "Y". Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and lattice of these sets becomes an algebra through "relative multiplication" or composition of relations. "The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion." The "conversion" refers to the converse relation that always exists, contrary to function theory. A given relation may be represented by a logical matrix; then the converse relation is represented by the transpose matrix. A relation obtained as the composition of two others is then represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic. Example. An example of calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements "S" and questions "Q". There are two relations π and α from "Q" to "S": "q" α "a" holds when "a" is a direct answer to question "q". The other relation, "q" π "p" holds when "p" is a presupposition of question "q". The converse relation πT runs from "S" to "Q" so that the composition πTα is a homogeneous relation on "S". The art of putting the right question to elicit a sufficient answer is recognized in Socratic method dialogue. Functions. The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation R that satisfies the formula formula_0 where I is the identity relation on the range of R. The injective property corresponds to univalence of formula_1, or the formula formula_2 where this time I is the identity on the domain of R. But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is formula_3 Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation. The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using formula_4 for the complement of relation R. These equivalences provide alternative formulas for univalent relations (formula_5), and total relations (formula_6). Therefore, mappings satisfy the formula formula_7 Schmidt uses this principle as "slipping below negation from the left". For a mapping formula_8 Abstraction. 
The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it. Then he asked if every algebra satisfying the axioms could be represented by a set relation. The negative answer opened the frontier of abstract algebraic logic. Algebras as models of logics. Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory. In algebraic logic: In the table below, the left column contains one or more logical or mathematical systems, and the algebraic structure which are its models are shown on the right in the same row. Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators." Algebraic formalisms going beyond first-order logic in at least some respects include: History. Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918. But nearly all of Leibniz's known work on algebraic logic was published only in 1903 after Louis Couturat discovered it in Leibniz's Nachlass. and translated selections from Couturat's volume into English. Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole and Augustus De Morgan. In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his "Principles of the Algebra of Logic" in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic". Logic turned more algebraic when binary relations were combined with composition of relations. For sets "A" and "B", a relation over "A" and "B" is represented as a member of the power set of "A"×"B" with properties described by Boolean algebra. The "calculus of relations" is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder. In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K. In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions. The "Boole–Schröder algebra of logic" was developed at University of California, Berkeley in a textbook by Clarence Lewis in 1918. He treated the logic of relations as derived from the propositional functions of two or more variables. Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy. Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of "Principia Mathematica", and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations". According to Helena Rasiowa, "The years 1920-40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method. Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic." discusses the rich historical connections between algebraic logic and model theory. 
The founders of model theory, Ernst Schröder and Leopold Loewenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set theoretic model theory as a major branch of contemporary mathematical logic, also: In the practice of the calculus of relations, Jacques Riguet used the algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated "rectangular relations" by taking the outer product of logical vectors; these contribute to the "non-enlargeable rectangles" of formal concept analysis. Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in . To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see . References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. Historical perspective
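As a concrete sketch of the calculus of relations described earlier in this article, relations can be coded as logical (Boolean) matrices: composition becomes Boolean matrix multiplication, the converse is the transpose, and the univalence and totality conditions can be tested directly. The relation below is a made-up example.

import numpy as np

def compose(r, s):
    """Composition of relations given as Boolean matrices."""
    return (r.astype(int) @ s.astype(int)) > 0

def is_univalent(r):
    """R^T R is contained in the identity: at most one image per element."""
    return bool(np.all(compose(r.T, r) <= np.eye(r.shape[1], dtype=bool)))

def is_total(r):
    """The identity is contained in R R^T: at least one image per element."""
    return bool(np.all(np.eye(r.shape[0], dtype=bool) <= compose(r, r.T)))

R = np.array([[1, 0],
              [0, 1],
              [0, 1]], dtype=bool)    # relates a 3-element set to a 2-element set

print(is_univalent(R) and is_total(R))   # True, so R is a mapping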
[ { "math_id": 0, "text": "R^T R \\subseteq I ," }, { "math_id": 1, "text": "R^T" }, { "math_id": 2, "text": "R R^T \\subseteq I ," }, { "math_id": 3, "text": "I \\subseteq R R^T ." }, { "math_id": 4, "text": "\\bar{R}" }, { "math_id": 5, "text": " R \\bar{I} \\subseteq \\bar{R}" }, { "math_id": 6, "text": "\\bar{R} \\subseteq R \\bar{I}" }, { "math_id": 7, "text": "\\bar{R} = R \\bar{I} ." }, { "math_id": 8, "text": "f\\bar{A} = \\overline{f A} ." } ]
https://en.wikipedia.org/wiki?curid=7870034
7870623
Swiss Formula
Method to cut and harmonize tariff rates The Swiss Formula is a mathematical formula designed to cut and harmonize tariff rates in international trade. Several countries are pushing for its use in World Trade Organization trade negotiations. It was first introduced by the Swiss Delegation to the WTO during the Doha Development Round (or more simply the Doha Round) of trade negotiations. Something similar was used in the Tokyo Round. The aim was to provide a mechanism where maximum tariffs could be agreed, and where existing low tariff countries would make a commitment to some further reduction. Details. The formula is of the form formula_0 where "A" is both the maximum tariff which is agreed to apply anywhere and a common coefficient to determine tariff reductions in each country; "T"old is the existing tariff rate for a particular country; and "T"new is the implied future tariff rate for that country. So for example, a value "A" of 25% might be negotiated. If a very high tariff country has a rate "T"old of 6000%, then its "T"new rate would be about 24.9%, almost the maximum of 25%. Somewhere with an existing tariff "T"old of 64% would move to a "T"new rate of about 18%, rather lower than the maximum; one with a rate "T"old of 12% would move to a "T"new rate of about 8.1%, substantially lower than the maximum. A very low tariff country with a rate "T"old of 2.3% would move to a "T"new rate of about 2.1%. Mathematically, the Swiss formula has these characteristics: the new rate always stays below the ceiling "A"; every existing tariff is reduced; and the higher the initial tariff, the larger the proportional cut, so tariff rates are harmonized as well as lowered. Criticisms. It has been argued, however, that the formula is too simple for use in tariff negotiations and that it does not lead to proportionate reduction in tariffs across all countries. It is because of this that those who believe an "ideal formula" exists are still looking for the ideal formula, with the Koreans having already suggested an alternative formula, though it has not yet been adopted nor is there any proof that an ideal formula exists. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
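The worked examples above can be reproduced with a few lines of code (a sketch; the coefficient 25 is simply the negotiated figure used in the example).

def swiss_formula(t_old, a=25.0):
    """New tariff rate T_new = A * T_old / (A + T_old)."""
    return a * t_old / (a + t_old)

for t_old in (6000, 64, 12, 2.3):
    print(t_old, "->", round(swiss_formula(t_old), 1))
# 6000 -> 24.9, 64 -> 18.0, 12 -> 8.1, 2.3 -> 2.1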
[ { "math_id": 0, "text": "T_\\text{new}=\\frac{A \\times T_\\text{old}}{A+T_\\text{old}} = \\frac 1 {\\dfrac 1 {T_\\text{old}} + \\dfrac 1 A} " } ]
https://en.wikipedia.org/wiki?curid=7870623
7870701
Polyphase matrix
In signal processing, a polyphase matrix is a matrix whose elements are filter masks. It represents a filter bank as it is used in sub-band coders, also known as discrete wavelet transforms. If formula_0 are two filters, then one level of the traditional wavelet transform maps an input signal formula_1 to two output signals formula_2, each of half the length: formula_3 Note that the dot denotes polynomial multiplication, i.e., convolution, and formula_4 means downsampling. If the above formula is implemented directly, you will compute values that are subsequently discarded by the down-sampling. You can avoid their computation by splitting the filters and the signal into even and odd indexed values before the wavelet transformation: formula_5 The arrows formula_6 and formula_7 denote left and right shifting, respectively. They have the same precedence as convolution, because they are in fact convolutions with a shifted discrete delta impulse. formula_8 The wavelet transformation reformulated in terms of the split filters is: formula_9 This can be written as a matrix-vector multiplication formula_10 This matrix formula_11 is the polyphase matrix. Of course, a polyphase matrix can have any size; it need not be square. That is, the principle scales well to arbitrary filter banks, multiwavelets, and wavelet transforms based on fractional refinements. Properties. The representation of sub-band coding by the polyphase matrix is more than a notational simplification. It allows the adaptation of many results from matrix theory and module theory. The following properties are explained for a formula_12 matrix, but they scale equally to higher dimensions. Invertibility/perfect reconstruction. If a polyphase matrix allows reconstruction of a processed signal from the filtered data, this is called the perfect reconstruction property. Mathematically this is equivalent to invertibility. According to the theorem of invertibility of a matrix over a ring, the polyphase matrix is invertible if and only if the determinant of the polyphase matrix is a Kronecker delta, which is zero everywhere except for one value. formula_13 By Cramer's rule the inverse of formula_11 can be given immediately. formula_14 Orthogonality. Orthogonality means that the adjoint matrix formula_15 is also the inverse matrix of formula_11. The adjoint matrix is the transposed matrix with adjoint filters. formula_16 It implies that the Euclidean norm of the input signal is preserved; that is, the corresponding wavelet transform is an isometry. formula_17 The orthogonality condition formula_18 can be written out as formula_19 Operator norm. For non-orthogonal polyphase matrices the question arises which Euclidean norms the output can assume. This can be bounded with the help of the operator norm. formula_20 For the formula_12 polyphase matrix the Euclidean operator norm can be given explicitly using the Frobenius norm formula_21 and the z transform formula_22: formula_23 This is a special case of the formula_24 matrix where the operator norm can be obtained via the z transform and the spectral radius of a matrix or the corresponding spectral norm. formula_25 A signal for which these bounds are attained can be derived from the eigenvector corresponding to the maximizing or minimizing eigenvalue. Lifting scheme. The concept of the polyphase matrix allows matrix decomposition. For instance, the decomposition into addition matrices leads to the lifting scheme.
However, classical matrix decompositions like LU and QR decomposition cannot be applied immediately, because the filters form a ring with respect to convolution, not a field.
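For the Haar filter pair the polyphase components are plain numbers, so the polyphase matrix is an ordinary 2x2 matrix and the properties above can be checked numerically. The sketch below ignores the one-sample shift convention used for the odd phase above and simply pairs each even sample with the following odd sample; the input signal is made up.

import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)    # lowpass filter (h_e, h_o)
g = np.array([1.0, -1.0]) / np.sqrt(2)   # highpass filter (g_e, g_o)
P = np.vstack([h, g])                    # polyphase matrix for length-2 filters

print(np.allclose(P @ P.T, np.eye(2)))          # orthogonality: P P* = I
print(np.isclose(abs(np.linalg.det(P)), 1.0))   # nonzero constant determinant: perfect reconstruction

a0 = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0])
phases = np.vstack([a0[0::2], a0[1::2]])         # even and odd samples of the input
a1, d1 = P @ phases                              # one analysis step: approximation and detail
print(a1, d1)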
[ { "math_id": 0, "text": "\\scriptstyle h,\\, g" }, { "math_id": 1, "text": "\\scriptstyle a_0" }, { "math_id": 2, "text": "\\scriptstyle a_1,\\, d_1" }, { "math_id": 3, "text": "\\begin{align}\n a_1 &= (h \\cdot a_0) \\downarrow 2 \\\\\n d_1 &= (g \\cdot a_0) \\downarrow 2\n\\end{align}" }, { "math_id": 4, "text": "\\scriptstyle\\downarrow" }, { "math_id": 5, "text": "\\begin{align}\n h_\\mbox{e} &= h \\downarrow 2 & a_{0,\\mbox{e}} &= a_0 \\downarrow 2 \\\\\n h_\\mbox{o} &= (h \\leftarrow 1) \\downarrow 2 & a_{0,\\mbox{o}} &= (a_0 \\leftarrow 1) \\downarrow 2\n\\end{align}" }, { "math_id": 6, "text": "\\scriptstyle\\leftarrow" }, { "math_id": 7, "text": "\\scriptstyle\\rightarrow" }, { "math_id": 8, "text": "\\delta = (\\dots, 0, 0, \\underset{0-\\mbox{th position}}{1}, 0, 0, \\dots)" }, { "math_id": 9, "text": "\\begin{align}\n a_1 &= h_\\mbox{e} \\cdot a_{0,\\mbox{e}} +\n h_\\mbox{o} \\cdot a_{0,\\mbox{o}} \\rightarrow 1 \\\\\n d_1 &= g_\\mbox{e} \\cdot a_{0,\\mbox{e}} +\n g_\\mbox{o} \\cdot a_{0,\\mbox{o}} \\rightarrow 1\n\\end{align}" }, { "math_id": 10, "text": "\\begin{align}\n P &= \\begin{pmatrix}\n h_\\mbox{e} & h_\\mbox{o} \\rightarrow 1 \\\\\n g_\\mbox{e} & g_\\mbox{o} \\rightarrow 1\n \\end{pmatrix} \\\\\n \\begin{pmatrix} a_1 \\\\ d_1 \\end{pmatrix} &= P \\cdot\n \\begin{pmatrix}\n a_{0,\\mbox{e}} \\\\\n a_{0,\\mbox{o}}\n \\end{pmatrix}\n\\end{align}" }, { "math_id": 11, "text": "\\scriptstyle P" }, { "math_id": 12, "text": "\\scriptstyle 2 \\,\\times\\, 2" }, { "math_id": 13, "text": "\\begin{align}\n \\det P &= h_{\\mbox{e}} \\cdot g_{\\mbox{o}} - h_{\\mbox{o}} \\cdot g_{\\mbox{e}} \\\\\n \\exists A\\ A \\cdot P &= I \\iff \\exists c\\ \\exists k\\ \\det P = c \\cdot \\delta \\rightarrow k\n\\end{align}" }, { "math_id": 14, "text": "P^{-1} \\cdot \\det P =\n \\begin{pmatrix}\n g_\\mbox{o} \\rightarrow 1 & - h_\\mbox{o} \\rightarrow 1 \\\\\n -g_\\mbox{e} & h_\\mbox{e}\n \\end{pmatrix}\n" }, { "math_id": 15, "text": "\\scriptstyle P^*" }, { "math_id": 16, "text": "P^* = \\begin{pmatrix}\n h_\\mbox{e}^* & g_\\mbox{e}^* \\\\\n h_\\mbox{o}^* \\leftarrow 1 & g_\\mbox{o}^* \\leftarrow 1\n \\end{pmatrix}\n" }, { "math_id": 17, "text": "\\left\\|a_1\\right\\|_2^2 + \\left\\|d_1\\right\\|_2^2 = \\left\\|a_0\\right\\|_2^2" }, { "math_id": 18, "text": "P \\cdot P^* = I" }, { "math_id": 19, "text": "\\begin{align}\n h_\\mbox{e}^* \\cdot h_\\mbox{e} + h_\\mbox{o}^* \\cdot h_\\mbox{o} &= \\delta \\\\\n g_\\mbox{e}^* \\cdot g_\\mbox{e} + g_\\mbox{o}^* \\cdot g_\\mbox{o} &= \\delta \\\\\n h_\\mbox{e}^* \\cdot g_\\mbox{e} + h_\\mbox{o}^* \\cdot g_\\mbox{o} &= 0\n\\end{align}" }, { "math_id": 20, "text": "\\forall x\\ \\left\\|P \\cdot x\\right\\|_2 \\in \\left[\\left\\|P^{-1}\\right\\|_2^{-1} \\cdot \\|x\\|_2, \\|P\\|_2 \\cdot \\|x\\|_2\\right]" }, { "math_id": 21, "text": "\\scriptstyle\\|\\cdot\\|_F" }, { "math_id": 22, "text": "\\scriptstyle Z" }, { "math_id": 23, "text": "\\begin{align}\n p(z) &= \\frac{1}{2} \\cdot \\left\\|Z P(z)\\right\\|_F^2 \\\\\n q(z) &= \\left|\\det [Z P(z)]\\right|^2 \\\\\n \\|P\\|_2 &= \\max\\left\\{\\sqrt{p(z) + \\sqrt{p(z)^2 - q(z)}} : z\\in\\mathbb{C}\\ \\land\\ |z| = 1\\right\\} \\\\\n \\left\\|P^{-1}\\right\\|_2^{-1} &= \\min\\left\\{\\sqrt{p(z) - \\sqrt{p(z)^2 - q(z)}} : z\\in\\mathbb{C}\\ \\land\\ |z| = 1\\right\\}\n\\end{align}" }, { "math_id": 24, "text": "n\\times n" }, { "math_id": 25, "text": "\\begin{align}\n \\left\\|P\\right\\|_2\n &= \\sqrt{\\max\\left\\{\\lambda_\\text{max} \\left[Z P^*(z) \\cdot Z P(z)\\right] : z\\in\\mathbb{C}\\ \\land\\ 
|z| = 1\\right\\}} \\\\\n &= \\max\\left\\{\\left\\|Z P(z)\\right\\|_2 : z\\in\\mathbb{C}\\ \\land\\ |z| = 1\\right\\} \\\\[3pt]\n \\left\\|P^{-1}\\right\\|_2^{-1}\n &= \\sqrt{\\min\\left\\{\\lambda_\\text{min}\\left[Z P^*(z) \\cdot Z P(z)\\right] : z\\in\\mathbb{C}\\ \\land\\ |z| = 1\\right\\}}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=7870701
7871339
Salah times
Timing of Islamic prayers Salat times are prayer times when Muslims perform "salat". The term is primarily used for the five daily prayers including the Friday prayer, which takes the place of the Dhuhr prayer and must be performed in congregation. Muslims believe the salah times were revealed by Allah to Muhammad (ﷺ). Prayer times are standard for Muslims around the world, especially the fard prayer times. They depend on the condition of the Sun and geography. There are varying opinions regarding the exact salah times, the schools of Islamic thought differing in minor details. All schools of thought agree that any given prayer cannot be performed before its stipulated time. Most Muslims pray five times a day, with their prayers being known as Fajr (before dawn), Dhuhr (noon), Asr (late afternoon), Maghrib (at sunset), and Isha (nighttime), always facing towards the Kaaba. Some Muslims pray three times a day. The direction of prayer is called the qibla; the early Muslims initially prayed in the direction of Jerusalem before this was changed to Mecca in 624 CE, about a year after Muhammad (SAW)'s migration to Medina. The timings of the five prayers are fixed intervals defined by daily astronomical phenomena. For example, the Maghrib prayer can be performed at any time after sunset and before the disappearance of the red twilight from the west. In a mosque, the muezzin broadcasts the call to prayer at the beginning of each interval. Because the start and end times for prayers are related to the solar diurnal motion, they vary throughout the year and depend on the local latitude and longitude when expressed in local time. In modern times, various religious or scientific agencies in Muslim countries produce annual prayer timetables for each locality, and electronic clocks capable of calculating local prayer times have been created. In the past, some mosques employed astronomers called the "muwaqqit"s who were responsible for regulating the prayer time using mathematical astronomy. The five intervals were defined by Muslim authorities in the decades after the death of prophet Muhammad (SAW) in 632, based on the hadith (the reported sayings and actions) of the Islamic prophet. Daily prayers. The daily prayers are considered obligatory by many and they are performed at times determined essentially by the position of the Sun in the sky. Hence, salat times vary at different locations on the Earth. Wudu is needed for all of the prayers. Some Muslims pray three times a day. Fajr (dawn). Fajr begins at true dawn, the beginning of twilight, when the morning light appears across the full width of the sky, and ends at sunrise. Dhuhr (midday). The time interval for offering the Zuhr or Dhuhr salah starts after the sun passes its zenith and lasts until the call for the Asr prayer is given. This prayer needs to be offered in the middle of the work-day, and people normally make their prayers during their lunch break. Asr (afternoon). Asr salat is the third of the obligatory prayers that Muslims offer daily. It is also known as the "middle prayer". The Asr prayer starts when the shadow of an object is the same length as the object itself (or, according to the Hanafi school, twice its length) plus the shadow length at Dhuhr, and lasts until the start of sunset. Asr can be split into two sections; the preferred time is before the sun starts to turn orange, while the time of necessity is from when the sun turns orange until 15 minutes before Maghrib. Maghrib (sunset).
The Maghrib prayer begins when the sun sets, and lasts until the red light has left the sky in the west. Isha (night). The Isha'a or Isha prayer starts when the red twilight disappears from the west, and lasts until the middle of the night, which is the middle point between Maghrib salat and Fajr salat (others say it is a third of the night, or until Fajr time). Time calculation. To calculate prayer times, two astronomical measures are necessary: the declination of the sun and the difference between clock time and sundial time. This difference being the result of the eccentricity of the Earth's orbit and the inclination of its axis, it is called the equation of time. The declination of the sun is the angle between the sun's rays and the equatorial plane. In addition to the above measures, to calculate prayer times for a specific location we need its spherical coordinates. In the following, formula_0 denotes the local time zone (in hours from UTC), formula_1 the longitude, formula_2 the latitude, formula_3 the equation of time (in hours), and formula_4 the declination of the sun. We first give the midday (Dhuhr) time. The midday time is simply when the local true solar time reaches noon: formula_5 The first term is the 12 o'clock noon, the second term accounts for the difference between true and mean solar times, and the third term accounts for the difference between the local mean solar time and the timezone. The other times require converting the Sun's altitude to time. We use a variant of the generalized sunrise equation: formula_6 This gives, in hours, the difference between Dhuhr time and when the sun is at altitude formula_7. Now we calculate three of the other prayer times: Maghrib begins when the centre of the sun is about 0.833° below the horizon (accounting for atmospheric refraction and the apparent radius of the solar disc), formula_11, and sunrise, which ends the Fajr interval, is formula_10; Fajr and Isha are commonly computed from twilight angles of 18° and 17° respectively, formula_13 and formula_14. Muslims use readily available apps on their phones to find daily prayer times in their locality. Technological advances have allowed for products such as software-enhanced azan clocks that use a combination of GPS and microchips to calculate these formulas. This allows Muslims to live further away from mosques than previously possible, as they no longer need to rely solely on a muezzin in order to keep an accurate prayer schedule. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
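A sketch of the calculation described above. The declination and the equation of time must come from an astronomical model or almanac and are simply passed in here; the twilight angles of 18° for Fajr and 17° for Isha follow the convention used in the formulas above, and the example inputs are made up. Angles are in degrees, times in hours.

import math

def hour_offset(angle, phi, delta):
    """T(alpha): hours between Dhuhr and the sun standing `angle` degrees below the horizon."""
    num = -math.sin(math.radians(angle)) - math.sin(math.radians(phi)) * math.sin(math.radians(delta))
    den = math.cos(math.radians(phi)) * math.cos(math.radians(delta))
    return math.degrees(math.acos(num / den)) / 15.0

def prayer_times(phi, lam, zone, delta, eq_time):
    """phi latitude, lam longitude, zone hours from UTC, delta declination, eq_time equation of time."""
    dhuhr = 12 + eq_time + (zone - lam / 15.0)
    t_horizon = hour_offset(0.833, phi, delta)       # sun 0.833 degrees below the horizon
    return {
        "fajr":    dhuhr - hour_offset(18.0, phi, delta),
        "sunrise": dhuhr - t_horizon,
        "dhuhr":   dhuhr,
        "maghrib": dhuhr + t_horizon,
        "isha":    dhuhr + hour_offset(17.0, phi, delta),
    }

print(prayer_times(phi=30.0, lam=31.0, zone=2.0, delta=10.0, eq_time=-0.05))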
[ { "math_id": 0, "text": " Z " }, { "math_id": 1, "text": " \\lambda " }, { "math_id": 2, "text": " \\phi " }, { "math_id": 3, "text": "\\Delta t" }, { "math_id": 4, "text": " \\delta " }, { "math_id": 5, "text": "T_{\\mathsf{Dhuhr}} = 12 + \\Delta t + (Z - \\lambda/15) " }, { "math_id": 6, "text": "T(\\alpha) = \\frac{1}{15} \\arccos \\left( \\frac{-\\sin(\\alpha)-\\sin(\\phi)\\sin(\\delta)}{\\cos(\\phi)\\cos(\\delta)} \\right)" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": " T(-0.833^{\\circ}) " }, { "math_id": 9, "text": "\\alpha = 0 " }, { "math_id": 10, "text": "T_{\\mathsf{Shuruq}} = T_{\\mathsf{Dhuhr}} - T(0.833^{\\circ})" }, { "math_id": 11, "text": "T_{\\mathsf{Maghrib}} = T_{\\mathsf{Dhuhr}} + T(0.833^{\\circ})" }, { "math_id": 12, "text": " 0.0347^{\\circ} \\times \\sqrt{h} " }, { "math_id": 13, "text": "T_{\\mathsf{Fajr}} = T_{\\mathsf{Dhuhr}} - T(18^{\\circ})" }, { "math_id": 14, "text": "T_{\\mathsf{Isha}} = T_{\\mathsf{Dhuhr}} + T(17^{\\circ})" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "A(n) = \\arccot(n+\\left|\\tan(\\phi-\\delta)\\right|)." }, { "math_id": 17, "text": "T_{\\mathsf{Shuruq}} = T_{\\mathsf{Dhuhr}} + T(A(n))," } ]
https://en.wikipedia.org/wiki?curid=7871339
7872003
Zipper (data structure)
Technique of representing an aggregate data structure A zipper is a technique of representing an aggregate data structure so that it is convenient for writing programs that traverse the structure arbitrarily and update its contents, especially in purely functional programming languages. The zipper was described by Gérard Huet in 1997. It includes and generalizes the gap buffer technique sometimes used with arrays. The zipper technique is general in the sense that it can be adapted to lists, trees, and other recursively defined data structures. Such modified data structures are usually referred to as "a tree with zipper" or "a list with zipper" to emphasize that the structure is conceptually a tree or list, while the zipper is a detail of the implementation. A layperson's explanation for a tree with zipper would be an ordinary computer filesystem with operations to go to parent (often codice_0), and the possibility to go downwards (codice_1). The zipper is the pointer to the current path. Behind the scenes the zippers are efficient when making (functional) changes to a data structure, where a new, slightly changed, data structure is returned from an edit operation (instead of making a change in the current data structure). Example: Bidirectional list traversal. Many common data structures in computer science can be expressed as the structure generated by a few primitive constructor operations or observer operations. These include the structure of finite lists, which can be generated by two operations: A list such as codice_6 is therefore the declaration codice_7. It is possible to describe the location in such a list as the number of steps from the front of the list to the target location. More formally, a location in the list is the number of codice_8 operations required to reconstruct the whole list from that particular location. For example, in codice_9 a codice_10 and a codice_11 operation would be required to reconstruct the list relative to position X otherwise known as codice_12. This recording together with the location is called a zipped representation of the list or a list-zipper. To be clear, a location in the list is not just the number of codice_8 operations, but also all of the other information about those codice_8; in this case, the values that must be reconnected. Here, these values may be conveniently represented in a separate list in the order of application from the target location. Specifically, from the context of "3" in the list codice_15, a recording (commonly referred to as a 'path') could be represented as codice_16 where codice_10 is applied followed by codice_18 to reconstitute the original list starting from codice_19. A list-zipper always represents the entire data structure. However, this information is from the perspective of a specific location within that data structure. Consequently, a list-zipper is a pair consisting of both the location as a context or starting point, and a recording or path that permits reconstruction from that starting location. In particular, the list-zipper of codice_15 at the location of "3" may be represented as codice_21. Now, if "3" is changed to "10", then the list-zipper becomes codice_22. The list may then be efficiently reconstructed: codice_23 or other locations traversed to. With the list represented this way, it is easy to define relatively efficient operations on immutable data structures such as Lists and Trees at arbitrary locations. 
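A minimal sketch of such a list-zipper in Python, with the reversed path and the remaining list stored as a pair, mirroring the representation just described (the function names are made up for illustration):

def go_right(zipper):
    path, rest = zipper
    return ([rest[0]] + path, rest[1:])          # step over the focused element

def go_left(zipper):
    path, rest = zipper
    return (path[1:], [path[0]] + rest)          # undo one step

def set_focus(zipper, value):
    path, rest = zipper
    return (path, [value] + rest[1:])            # replace the focused element

def to_list(zipper):
    path, rest = zipper
    return list(reversed(path)) + rest           # reconstruct the whole list

z = ([], [1, 2, 3, 4])                # focus on the first element
z = go_right(go_right(z))             # focus on 3: ([2, 1], [3, 4])
z = set_focus(z, 10)                  # ([2, 1], [10, 4])
print(to_list(z))                     # [1, 2, 10, 4]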
In particular, applying the zipper transform to a tree makes it easy to insert or remove values at any particular location in the tree. Contexts and differentiation. The type of a zipper's contexts can be constructed via a transformation of the original type that is closely related to the derivative of calculus through decategorification. The recursive types that zippers are formed from can be viewed as the least fixed point of a unary type constructor of kind formula_0. For example, with a higher-order type constructor formula_1 that constructs the least fixed point of its argument, an unlabeled binary tree can be represented as formula_2 and an unlabeled list may take the form formula_3. Here, the notation of exponentiation, multiplication, and addition correspond to function types, product types, and sum types respectively, whilst the natural numbers label the finite types; in this way, the type constructors resemble polynomial functions in formula_4. The derivative of a type constructor can therefore be formed through this syntactic analogy: for the example of an unlabeled ternary tree, the derivative of its type constructor formula_5 would be equivalent to formula_6, analogously to the use of the sum and power rules in differential calculus. The type of the contexts of a zipper over an original type formula_7 is equivalent to the derivative of the type constructor applied to the original type, formula_8. For illustration, consider the recursive data structure of a binary tree with nodes that are either sentinel nodes of type or contain a value of type A: formula_9 The partial derivative of the type constructor can be computed to be formula_10 Thus, the type of the zipper's contexts is formula_11 As such, we find that the context of each non-sentinel child node in the labelled binary tree is a triple consisting of In general, a zipper for a tree of type formula_7 consists of two parts: a list of contexts of type formula_8 of the current node and each of its ancestors up until the root node, and the value of type formula_7 that the current node contains. Uses. The zipper is often used where there is some concept of focus or a cursor used to navigate around in some set of data, since its semantics reflect that of moving around but in a functional non-destructive manner. The zipper has been used in Alternatives and extensions. Direct modification. In a non-purely-functional programming language, it may be more convenient to simply traverse the original data structure and modify it in place (perhaps after deep cloning it, to avoid affecting other code that might hold a reference to it). Generic zipper. The generic zipper is a technique to achieve the same goal as the conventional zipper by capturing the state of the traversal in a continuation while visiting each node. (The Haskell code given in the reference uses generic programming to generate a traversal function for any data structure, but this is optional – any suitable traversal function can be used.) However, the generic zipper involves inversion of control, so some uses of it require a state machine (or equivalent) to keep track of what to do next. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "* \\rightarrow *" }, { "math_id": 1, "text": "\\text{lfp} : (* \\rightarrow *) \\rightarrow *" }, { "math_id": 2, "text": "\\text{lfp}(T \\mapsto T^2 + 1)" }, { "math_id": 3, "text": "\\text{lfp}(T \\mapsto T + 1)" }, { "math_id": 4, "text": "\\mathbb{N} \\rightarrow \\mathbb{N}" }, { "math_id": 5, "text": "(T \\mapsto T^3 + 1)'" }, { "math_id": 6, "text": "T \\mapsto 3 \\times T^2" }, { "math_id": 7, "text": "\\text{lfp}(f)" }, { "math_id": 8, "text": "f'(\\text{lfp}(f))" }, { "math_id": 9, "text": "\\text{BTree} := \\text{lfp}(T \\mapsto A \\times T^2 + 1)" }, { "math_id": 10, "text": "(T \\mapsto A \\times T^2 + 1)' = T \\mapsto 2 \\times A \\times T" }, { "math_id": 11, "text": "(T \\mapsto 2 \\times A \\times T)(\\text{BTree}) = 2 \\times A \\times \\text{BTree}" } ]
https://en.wikipedia.org/wiki?curid=7872003
7872813
Spread of a matrix
Mathematical term In mathematics, and more specifically matrix theory, the spread of a matrix is the largest distance in the complex plane between any two eigenvalues of the matrix. Definition. Let formula_0 be a square matrix with eigenvalues formula_1. That is, these values formula_2 are the complex numbers such that there exists a nonzero vector formula_3 on which formula_0 acts by scalar multiplication: formula_4 Then the spread of formula_0 is the non-negative number formula_5
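A short sketch of the definition (the matrix is a made-up example):

import numpy as np

def spread(a):
    """Largest distance in the complex plane between any two eigenvalues of a."""
    lam = np.linalg.eigvals(a)
    return max(abs(x - y) for x in lam for y in lam)

A = np.array([[2.0, 1.0],
              [0.0, -3.0]])      # triangular, so the eigenvalues are 2 and -3
print(spread(A))                  # 5.0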
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\lambda_1, \\ldots, \\lambda_n" }, { "math_id": 2, "text": "\\lambda_i" }, { "math_id": 3, "text": "v_i" }, { "math_id": 4, "text": "Av_i=\\lambda_i v_i." }, { "math_id": 5, "text": "s(A) = \\max \\{|\\lambda_i - \\lambda_j| : i,j=1,\\ldots n\\}." }, { "math_id": 6, "text": "0" }, { "math_id": 7, "text": "1" }, { "math_id": 8, "text": "B" }, { "math_id": 9, "text": "BAB^{-1}" } ]
https://en.wikipedia.org/wiki?curid=7872813
7876320
Flow coefficient
Measure of a device's efficiency at allowing fluid flow The flow coefficient of a device is a relative measure of its efficiency at allowing fluid flow. It describes the relationship between the pressure drop across an orifice valve or other assembly and the corresponding flow rate. Mathematically, the flow coefficient "C"v (or flow-capacity rating of valve) can be expressed as formula_0 where Q is the rate of flow (expressed in US gallons per minute), SG is the specific gravity of the fluid (for water = 1), Δ"P" is the pressure drop across the valve (expressed in psi). In more practical terms, the "flow coefficient" "C"v is the volume (in US gallons) of water at 60 °F that will flow per minute through a valve with a pressure drop of 1 psi across the valve. The use of the flow coefficient offers a standard method of comparing valve capacities and sizing valves for specific applications that is widely accepted by industry. The general definition of the flow coefficient can be expanded into equations modeling the flow of liquids, gases and steam using the discharge coefficient. For gas flow in a pneumatic system the "C"v for the same assembly can be used with a more complex equation. Absolute pressures (psia) must be used for gas rather than simply differential pressure. For air flow at room temperature, when the outlet pressure is less than 1/2 the absolute inlet pressure, the flow becomes quite simple (although it reaches sonic velocity internally). With "C"v = 1.0 and 200 psia inlet pressure, the flow is 100 standard cubic feet per minute (scfm). The flow is proportional to the absolute inlet pressure, so the flow in scfm would equal the "C"v flow coefficient if the inlet pressure were reduced to 2 psia and the outlet were connected to a vacuum with less than 1 psi absolute pressure (1.0 scfm when "C"v = 1.0, 2 psia input). Flow factor. The metric equivalent flow factor ("K"v) is calculated using metric units: formula_1 where "K"v is the flow factor (expressed in m³/h), Q is the flowrate (expressed in m³/h), SG is the specific gravity of the fluid (for water = 1), Δ"P" is the differential pressure across the device (expressed in bar). "K"v can be calculated from "C"v using the equation formula_2 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
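A sketch of the two formulas, assuming the imperial inputs described above (US gallons per minute and psi) and the stated conversion factor between the two coefficients; the example valve figures are made up.

import math

def flow_coefficient(q_gpm, sg, dp_psi):
    """C_v = Q * sqrt(SG / dP)."""
    return q_gpm * math.sqrt(sg / dp_psi)

def kv_from_cv(cv):
    """K_v from C_v, using C_v = 1.156 * K_v."""
    return cv / 1.156

cv = flow_coefficient(q_gpm=30.0, sg=1.0, dp_psi=4.0)   # water through a valve with a 4 psi drop
print(cv, kv_from_cv(cv))                                # 15.0 and about 12.98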
[ { "math_id": 0, "text": "C_\\text{v} = Q \\sqrt{\\frac{\\text{SG}}{\\Delta P}}," }, { "math_id": 1, "text": "K_\\text{v} = Q \\sqrt{\\frac{\\text{SG}}{\\Delta P}}," }, { "math_id": 2, "text": "C_{\\text{v}} = 1.156 \\cdot K_\\text{v}." } ]
https://en.wikipedia.org/wiki?curid=7876320
7876585
Exchange matrix
Square matrix whose entries are 1 along the anti-diagonal and 0 elsewhere In mathematics, especially linear algebra, the exchange matrices (also called the reversal matrix, backward identity, or standard involutory permutation) are special cases of permutation matrices, where the 1 elements reside on the antidiagonal and all other elements are zero. In other words, they are 'row-reversed' or 'column-reversed' versions of the identity matrix. formula_0 Definition. If J is an "n" × "n" exchange matrix, then the elements of J are formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
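A quick sketch with NumPy showing how pre- and postmultiplication by the exchange matrix reverse rows and columns; the 3x3 matrix of consecutive integers is just an example.

import numpy as np

J = np.fliplr(np.eye(3))                    # 3x3 exchange matrix: ones on the antidiagonal
A = np.arange(1, 10).reshape(3, 3)          # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

print(np.array_equal(J @ A, A[::-1, :]))    # premultiplying reverses the rows
print(np.array_equal(A @ J, A[:, ::-1]))    # postmultiplying reverses the columns
print(np.array_equal(J @ J, np.eye(3)))     # J is its own inverse (an involution)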
[ { "math_id": 0, "text": "\\begin{align}\n J_2 &= \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix} \\\\[4pt]\n J_3 &= \\begin{pmatrix}\n 0 & 0 & 1 \\\\\n 0 & 1 & 0 \\\\\n 1 & 0 & 0\n \\end{pmatrix} \\\\\n &\\quad \\vdots \\\\[2pt]\n J_n &= \\begin{pmatrix}\n 0 & 0 & \\cdots & 0 & 1 \\\\\n 0 & 0 & \\cdots & 1 & 0 \\\\\n \\vdots & \\vdots & \\,{}_{_{\\displaystyle\\cdot}} \\!\\, {}^{_{_{\\displaystyle\\cdot}}} \\! \\dot\\phantom{j} & \\vdots & \\vdots \\\\ \n 0 & 1 & \\cdots & 0 & 0 \\\\\n 1 & 0 & \\cdots & 0 & 0 \n \\end{pmatrix}\n\\end{align}" }, { "math_id": 1, "text": "J_{i,j} = \\begin{cases} \n1, & i + j = n + 1 \\\\\n0, & i + j \\ne n + 1\\\\\n\\end{cases}" }, { "math_id": 2, "text": "\n\\begin{pmatrix}\n0 & 0 & 1 \\\\\n0 & 1 & 0 \\\\\n1 & 0 & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n1 & 2 & 3 \\\\\n4 & 5 & 6 \\\\\n7 & 8 & 9\n\\end{pmatrix} =\n\\begin{pmatrix}\n7 & 8 & 9 \\\\\n4 & 5 & 6 \\\\\n1 & 2 & 3\n\\end{pmatrix}.\n" }, { "math_id": 3, "text": "\n\\begin{pmatrix}\n1 & 2 & 3 \\\\\n4 & 5 & 6 \\\\\n7 & 8 & 9\n\\end{pmatrix}\n\\begin{pmatrix}\n0 & 0 & 1 \\\\\n0 & 1 & 0 \\\\\n1 & 0 & 0\n\\end{pmatrix} =\n\\begin{pmatrix}\n3 & 2 & 1 \\\\\n6 & 5 & 4 \\\\\n9 & 8 & 7\n\\end{pmatrix}.\n" }, { "math_id": 4, "text": "\n J_n^\\mathsf{T} = J_n." }, { "math_id": 5, "text": "\n J_n^k = \\begin{cases}\n I & \\text{ if } k \\text{ is even,} \\\\[2pt]\n J_n & \\text{ if } k \\text{ is odd.}\n \\end{cases}\n " }, { "math_id": 6, "text": "\n J_n^{-1} = J_n." }, { "math_id": 7, "text": "\n \\operatorname{tr}(J_n) = n\\bmod 2." }, { "math_id": 8, "text": "\n \\det(J_n) = (-1)^\\frac{n(n-1)}{2}\n " }, { "math_id": 9, "text": "\n \\det(\\lambda I- J_n) = \\begin{cases}\n \\big[(\\lambda+1)(\\lambda-1)\\big]^\\frac{n}{2} & \\text{ if } n \\text{ is even,} \\\\[4pt]\n (\\lambda-1)^\\frac{n+1}{2}(\\lambda+1)^\\frac{n-1}{2} & \\text{ if } n \\text{ is odd.}\n \\end{cases}" }, { "math_id": 10, "text": "\n \\operatorname{adj}(J_n) = \\sgn(\\pi_n) J_n.\n " } ]
https://en.wikipedia.org/wiki?curid=7876585
787776
Curse of dimensionality
Difficulties arising when analyzing data with many aspects ("dimensions") The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. The curse generally refers to issues that arise when the number of datapoints is small (in a suitably defined sense) relative to the intrinsic dimension of the data. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient. Domains. Combinatorics. In some problems, each variable can take one of several discrete values, or the range of possible values is divided to give a finite number of possibilities. Taking the variables together, a huge number of combinations of values must be considered. This effect is also known as the combinatorial explosion. Even in the simplest case of formula_0 binary variables, the number of possible combinations already is formula_1, exponential in the dimensionality. Naively, each additional dimension doubles the effort needed to try all combinations. Sampling. There is an exponential increase in volume associated with adding extra dimensions to a mathematical space. For example, 10^2 = 100 evenly spaced sample points suffice to sample a unit interval (try to visualize a "1-dimensional" cube) with no more than 10^−2 = 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice that has a spacing of 10^−2 = 0.01 between adjacent points would require 10^20 = [(10^2)^10] sample points. In general, with a spacing distance of 10^−"n" the 10-dimensional hypercube appears to be a factor of 10^("n"(10−1)) = [(10^"n")^10/(10^"n")] "larger" than the 1-dimensional hypercube, which is the unit interval. In the above example "n" = 2: when using a sampling distance of 0.01 the 10-dimensional hypercube appears to be 10^18 "larger" than the unit interval. This effect is a combination of the combinatorics problems above and the distance function problems explained below. Optimization. When solving dynamic optimization problems by numerical backward induction, the objective function must be computed for each combination of values. This is a significant obstacle when the dimension of the "state variable" is large. Machine learning. In machine learning problems that involve learning a "state-of-nature" from a finite number of data samples in a high-dimensional feature space with each feature having a range of possible values, typically an enormous amount of training data is required to ensure that there are several samples with each combination of values.
In an abstract sense, as the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially. A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation. In machine learning and insofar as predictive performance is concerned, the "curse of dimensionality" is used interchangeably with the "peaking phenomenon", which is also known as "Hughes phenomenon". This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily. Nevertheless, in the context of a "simple" classifier (e.g., linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix), Zollanvari, "et al.", showed both analytically and empirically that as long as the relative cumulative efficacy of an additional feature set (with respect to features that are already part of the classifier) is greater (or less) than the size of this additional feature set, the expected error of the classifier constructed using these additional features will be less (or greater) than the expected error of the classifier constructed without them. In other words, both the size of additional features and their (relative) cumulative discriminatory effect are important in observing a decrease or increase in the average predictive power. In metric learning, higher dimensions can sometimes allow a model to achieve better performance. After normalizing embeddings to the surface of a hypersphere, FaceNet achieves the best performance using 128 dimensions as opposed to 64, 256, or 512 dimensions in one ablation study. A loss function for unitary-invariant dissimilarity between word embeddings was found to be minimized in high dimensions. Data mining. In data mining, the curse of dimensionality refers to a data set with too many features. Consider the first table, which depicts 200 individuals and 2000 genes (features) with a 1 or 0 denoting whether or not they have a genetic mutation in that gene. A data mining application to this data set may be finding the correlation between specific genetic mutations and creating a classification algorithm such as a decision tree to determine whether an individual has cancer or not. A common practice of data mining in this domain would be to create association rules between genetic mutations that lead to the development of cancers. To do this, one would have to loop through each genetic mutation of each individual and find other genetic mutations that occur over a desired threshold and create pairs. They would start with pairs of two, then three, then four until they result in an empty set of pairs. The complexity of this algorithm can lead to calculating all permutations of gene pairs for each individual or row. Given the formula for calculating the permutations of n items with a group size of r is: formula_2, calculating the number of three pair permutations of any given individual would be 7988004000 different pairs of genes to evaluate for each individual. The number of pairs created will grow by an order of factorial as the size of the pairs increase. The growth is depicted in the permutation table (see right). 
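Since the permutation table itself is not reproduced here, the counts it refers to can be recomputed directly (a sketch; math.perm requires Python 3.8 or later):

from math import perm

n = 2000                      # number of genes (features) in the example
for r in (2, 3, 4):
    print(r, perm(n, r))      # ordered selections of r distinct genes
# r = 3 gives 7988004000, the figure quoted above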
As we can see from the permutation table above, one of the major problems data miners face regarding the curse of dimensionality is that the space of possible parameter values grows exponentially or factorially as the number of features in the data set grows. This problem critically affects both computational time and space when searching for associations or optimal features to consider. Another problem data miners may face when dealing with too many features is the notion that the number of false predictions or classifications tends to increase as the number of features grows in the data set. In terms of the classification problem discussed above, keeping every data point could lead to a higher number of false positives and false negatives in the model. This may seem counterintuitive, but consider the genetic mutation table from above, depicting all genetic mutations for each individual. Each genetic mutation, whether it correlates with cancer or not, will have some input or weight in the model that guides the decision-making process of the algorithm. There may be mutations that are outliers or ones that dominate the overall distribution of genetic mutations when in fact they do not correlate with cancer. These features may be working against one's model, making it more difficult to obtain optimal results. This problem is up to the data miner to solve, and there is no universal solution. The first step any data miner should take is to explore the data, in an attempt to gain an understanding of how it can be used to solve the problem. One must first understand what the data means, and what they are trying to discover before they can decide if anything must be removed from the data set. Then they can create or use a feature selection or dimensionality reduction algorithm to remove samples or features from the data set if they deem it necessary. One example of such methods is the interquartile range method, used to remove outliers in a data set by flagging values of a feature that fall far outside its interquartile range. Distance function. When a measure such as a Euclidean distance is defined using many coordinates, there is little difference in the distances between different pairs of points. One way to illustrate the "vastness" of high-dimensional Euclidean space is to compare the proportion of an inscribed hypersphere with radius formula_3 and dimension formula_0, to that of a hypercube with edges of length formula_4 The volume of such a sphere is formula_5, where formula_6 is the gamma function, while the volume of the cube is formula_7. As the dimension formula_0 of the space increases, the hypersphere becomes an insignificant volume relative to that of the hypercube. This can clearly be seen by comparing the proportions as the dimension formula_0 goes to infinity: formula_8 as formula_9. Furthermore, the distance between the center and the corners is formula_10, which increases without bound for fixed r. In this sense, when points are uniformly generated in a high-dimensional hypercube, almost all points are much farther than formula_3 units away from the centre. In high dimensions, the volume of the "d"-dimensional unit hypercube (with coordinates of the vertices formula_11) is concentrated near a sphere with the radius formula_12 for large dimension "d". Indeed, for each coordinate formula_13 the average value of formula_14 in the cube is formula_15.
The variance of formula_14 for uniform distribution in the cube is formula_16 Therefore, the squared distance from the origin, formula_17 has the average value "d"/3 and variance 4"d"/45. For large "d", distribution of formula_18 is close to the normal distribution with the mean 1/3 and the standard deviation formula_19 according to the central limit theorem. Thus, when uniformly generating points in high dimensions, both the "middle" of the hypercube, and the corners are empty, and all the volume is concentrated near the surface of a sphere of "intermediate" radius formula_20. This also helps to understand the chi-squared distribution. Indeed, the (non-central) chi-squared distribution associated to a random point in the interval [-1, 1] is the same as the distribution of the length-squared of a random point in the "d"-cube. By the law of large numbers, this distribution concentrates itself in a narrow band around "d" times the standard deviation squared (σ2) of the original derivation. This illuminates the chi-squared distribution and also illustrates that most of the volume of the "d"-cube concentrates near the boundary of a sphere of radius formula_21. A further development of this phenomenon is as follows. Any fixed distribution on the real numbers induces a product distribution on points in formula_22. For any fixed "n", it turns out that the difference between the minimum and the maximum distance between a random reference point "Q" and a list of "n" random data points "P"1...,"P""n" become indiscernible compared to the minimum distance: formula_23. This is often cited as distance functions losing their usefulness (for the nearest-neighbor criterion in feature-comparison algorithms, for example) in high dimensions. However, recent research has shown this to only hold in the artificial scenario when the one-dimensional distributions formula_24 are independent and identically distributed. When attributes are correlated, data can become easier and provide higher distance contrast and the signal-to-noise ratio was found to play an important role, thus feature selection should be used. More recently, it has been suggested that there may be a conceptual flaw in the argument that contrast-loss creates a curse in high dimensions. Machine learning can be understood as the problem of assigning instances to their respective generative process of origin, with class labels acting as symbolic representations of individual generative processes. The curse's derivation assumes all instances are independent, identical outcomes of a single high dimensional generative process. If there is only one generative process, there would exist only one (naturally occurring) class and machine learning would be conceptually ill-defined in both high and low dimensions. Thus, the traditional argument that contrast-loss creates a curse, may be fundamentally inappropriate. In addition, it has been shown that when the generative model is modified to accommodate multiple generative processes, contrast-loss can morph from a curse to a blessing, as it ensures that the nearest-neighbor of an instance is almost-surely its most closely related instance. From this perspective, contrast-loss makes high dimensional distances especially meaningful and not especially non-meaningful as is often argued. Nearest neighbor search. The effect complicates nearest neighbor search in high dimensional space. 
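This loss of contrast can be observed directly by sampling i.i.d. uniform points, the artificial scenario discussed above, and comparing the nearest and farthest distances from a reference point (a sketch):

import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(d, n=1000):
    q = rng.random(d)                          # random reference point Q
    points = rng.random((n, d))                # n random data points
    dist = np.linalg.norm(points - q, axis=1)
    return (dist.max() - dist.min()) / dist.min()

for d in (2, 10, 100, 1000):
    print(d, round(relative_contrast(d), 3))   # the relative gap shrinks as d grows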
It is not possible to quickly reject candidates by using the difference in one coordinate as a lower bound for a distance based on all the dimensions. However, it has recently been observed that the mere number of dimensions does not necessarily result in difficulties, since "relevant" additional dimensions can also increase the contrast. In addition, for the resulting ranking it remains useful to discern close and far neighbors. Irrelevant ("noise") dimensions, however, reduce the contrast in the manner described above. In time series analysis, where the data are inherently high-dimensional, distance functions also work reliably as long as the signal-to-noise ratio is high enough. "k"-nearest neighbor classification. Another effect of high dimensionality on distance functions concerns "k"-nearest neighbor ("k"-NN) graphs constructed from a data set using a distance function. As the dimension increases, the indegree distribution of the "k"-NN digraph becomes skewed with a peak on the right because of the emergence of a disproportionate number of hubs, that is, data-points that appear in many more "k"-NN lists of other data-points than the average. This phenomenon can have a considerable impact on various techniques for classification (including the "k"-NN classifier), semi-supervised learning, and clustering, and it also affects information retrieval. Anomaly detection. In a 2012 survey, Zimek et al. identified a number of characteristic problems that arise when searching for anomalies in high-dimensional data. Many of the analyzed specialized methods tackle one or another of these problems, but there remain many open research questions. Blessing of dimensionality. Surprisingly, and despite the expected "curse of dimensionality" difficulties, common-sense heuristics based on the most straightforward methods "can yield results which are almost surely optimal" for high-dimensional problems. The term "blessing of dimensionality" was introduced in the late 1990s. Donoho in his "Millennium manifesto" clearly explained why the "blessing of dimensionality" will form a basis of future data mining. The effects of the blessing of dimensionality were discovered in many applications and found their foundation in the concentration of measure phenomena. One example of the blessing of dimensionality phenomenon is linear separability of a random point from a large finite random set with high probability, even if this set is exponentially large: the number of elements in this random set can grow exponentially with dimension. Moreover, this linear functional can be selected in the form of the simplest linear Fisher discriminant. This separability theorem was proven for a wide class of probability distributions: general uniformly log-concave distributions, product distributions in a cube, and many other families (reviewed in recent literature). "The blessing of dimensionality and the curse of dimensionality are two sides of the same coin." For example, the typical property of essentially high-dimensional probability distributions in a high-dimensional space is: the squared distance of random points to a selected point is, with high probability, close to the average (or median) squared distance. This property significantly simplifies the expected geometry of data and indexing of high-dimensional data (blessing), but, at the same time, it makes similarity search in high dimensions difficult and even useless (curse). Zimek et al. noted that while the typical formalizations of the curse of dimensionality affect i.i.d. 
data, data that is separated in each attribute becomes easier to handle even in high dimensions, and they argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular, for unsupervised data analysis this effect is known as swamping. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "2^d" }, { "math_id": 2, "text": "\\frac{n!}{(n - r)!}" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "2r." }, { "math_id": 5, "text": "\\frac{2r^d\\pi^{d/2}}{d \\; \\Gamma(d/2)}" }, { "math_id": 6, "text": "\\Gamma" }, { "math_id": 7, "text": "(2r)^d" }, { "math_id": 8, "text": "\\frac{V_\\mathrm{hypersphere}}{V_\\mathrm{hypercube}} = \\frac{\\pi^{d/2}}{d2^{d-1}\\Gamma(d/2)} \\rightarrow 0" }, { "math_id": 9, "text": "d \\rightarrow \\infty" }, { "math_id": 10, "text": "r\\sqrt{d}" }, { "math_id": 11, "text": "\\pm 1 " }, { "math_id": 12, "text": "\\sqrt{d}/\\sqrt{3}" }, { "math_id": 13, "text": "x_i" }, { "math_id": 14, "text": "x_i^2" }, { "math_id": 15, "text": "\\left\\langle x_i^2 \\right\\rangle = \\frac{1}{2}\\int_{-1}^{1}x^2 dx = \\frac{1}{3}" }, { "math_id": 16, "text": "\\frac{1}{2}\\int_{-1}^{1}x^4 dx - \\left\\langle x_i^2\\right\\rangle^2 = \\frac{4}{45}" }, { "math_id": 17, "text": "r^2 = \\sum_i x_i^2" }, { "math_id": 18, "text": "r^2/d" }, { "math_id": 19, "text": "2/{\\sqrt{45d}}" }, { "math_id": 20, "text": "\\sqrt{d/3}" }, { "math_id": 21, "text": "\\sigma\\sqrt{d}" }, { "math_id": 22, "text": "\\mathbb{R}^d" }, { "math_id": 23, "text": "\\lim_{d \\to \\infty} E\\left(\\frac{\\operatorname{dist}_{\\max} (d) - \\operatorname{dist}_{\\min} (d)}{\\operatorname{dist}_{\\min} (d)}\\right) \\to 0" }, { "math_id": 24, "text": "\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=787776
7878238
Yamartino method
Algorithm The Yamartino method is an algorithm for calculating an approximation of the circular variance of wind direction during a single pass through the incoming data. Background. The simple method for calculating circular variance requires two passes through the list of values. The first pass determines the circular mean of those values, while the second pass determines the variance. This double-pass method requires access to all values. There is also a single-pass method for calculating the standard deviation, but that method is unsuitable for angular data such as wind direction. Trying to calculate angular moments by naively applying the standard formulas to angular expressions yields absurd results. For example, a dataset that measures wind directions of 1° and 359° would average to 180°, but expressing the same data as 1° and -1° (equal to 359°) would give an average of 0°. Thus, we define circular moments by placing all measured angles on a unit circle, then calculating the moments of these points. The Yamartino method, introduced by Robert J. Yamartino in 1984, solves both of these problems. A further discussion of the Yamartino method, along with other methods of estimating the standard deviation of wind direction, can be found in Farrugia &amp; Micallef. It is possible to calculate the exact standard deviation in one pass. However, that method needs slightly more calculation effort. Algorithm. Over the time interval to be averaged across, "n" measurements of wind direction ("θ") will be made and two totals are accumulated without storage of the "n" individual values. At the end of the interval the calculations are as follows: with the average values of sin "θ" and cos "θ" defined as formula_0 formula_1 Then the average wind direction is given via the four-quadrant arctan(x,y) function as formula_2 From twenty different functions for "σ""θ" using variables obtained in a single pass of the wind direction data, Yamartino found the best function to be formula_3 where formula_4 The key here is to remember that sin²"θ" + cos²"θ" = 1, so that, for example, with a constant wind direction at any value of "θ", the value of formula_5 will be zero, leading to a zero value for the standard deviation. The use of formula_5 alone produces a result close to that produced with a double-pass when the dispersion of angles is small (not crossing the discontinuity), but by construction it is always between 0 and 1. Taking the arcsine then produces the double-pass answer when there are just two equally common angles: in the extreme case of an oscillating wind blowing backwards and forwards, it produces a result of formula_6 radians, i.e. a right angle. The final factor adjusts this figure upwards so that it produces the double-pass result of formula_7 radians for an almost uniform distribution of angles across all directions, while making minimal change to results for small dispersions. The theoretical maximum error against the correct double-pass "σ""θ" is therefore about 15% with an oscillating wind. Comparisons against Monte Carlo generated cases indicate that Yamartino's algorithm is within 2% for more realistic distributions. A variant might be to weight each wind direction observation by the wind speed at that time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
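As a concrete illustration of the single-pass procedure described above, here is a minimal sketch in Python (the function name and the degree-based interface are illustrative assumptions, not part of the original formulation).

```python
import math

def yamartino(directions_deg):
    """Single-pass estimate of the mean wind direction and the Yamartino
    standard deviation, both returned in degrees."""
    n, sum_sin, sum_cos = 0, 0.0, 0.0
    for theta in directions_deg:             # one pass, no storage of values
        rad = math.radians(theta)
        sum_sin += math.sin(rad)
        sum_cos += math.cos(rad)
        n += 1
    sa, ca = sum_sin / n, sum_cos / n
    mean = math.degrees(math.atan2(sa, ca)) % 360.0
    eps = math.sqrt(max(0.0, 1.0 - (sa * sa + ca * ca)))   # guard against rounding
    sigma = math.asin(eps) * (1.0 + (2.0 / math.sqrt(3.0) - 1.0) * eps ** 3)
    return mean, math.degrees(sigma)

# Directions of 1 deg and 359 deg average to about 0 deg (not 180 deg),
# with a standard deviation of about 1 deg.
print(yamartino([1.0, 359.0]))
```

The two running sums are the only state kept between observations, which is why the method suits data loggers that cannot store every reading.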
[ { "math_id": 0, "text": "s_a = \\frac 1 n \\sum_{i=1}^n \\sin \\theta_i," }, { "math_id": 1, "text": "c_a = \\frac 1 n \\sum_{i=1}^n \\cos \\theta_i." }, { "math_id": 2, "text": "\\theta_a=\\arctan(c_a,s_a)." }, { "math_id": 3, "text": "\\sigma_\\theta = \\arcsin (\\varepsilon) \\left[1+\\left(\\tfrac 2 {\\sqrt 3} -1\\right) \\varepsilon^3\\right], " }, { "math_id": 4, "text": "\\varepsilon=\\sqrt{1-(s^2_a+c^2_a)}." }, { "math_id": 5, "text": "\\varepsilon" }, { "math_id": 6, "text": "\\tfrac{\\pi}{2}" }, { "math_id": 7, "text": "\\tfrac{\\pi}{\\sqrt{3}}" } ]
https://en.wikipedia.org/wiki?curid=7878238
787827
Klaus Roth
British mathematician (1925–2015) Klaus Friedrich Roth (29 October 1925 – 10 November 2015) was a German-born British mathematician who won the Fields Medal for proving Roth's theorem on the Diophantine approximation of algebraic numbers. He was also a winner of the De Morgan Medal and the Sylvester Medal, and a Fellow of the Royal Society. Roth moved to England as a child in 1933 to escape the Nazis, and was educated at the University of Cambridge and University College London, finishing his doctorate in 1950. He taught at University College London until 1966, when he took a chair at Imperial College London. He retired in 1988. Beyond his work on Diophantine approximation, Roth made major contributions to the theory of progression-free sets in arithmetic combinatorics and to the theory of irregularities of distribution. He was also known for his research on sums of powers, on the large sieve, on the Heilbronn triangle problem, and on square packing in a square. He was a coauthor of the book "Sequences" on integer sequences. Biography. Early life. Roth was born to a Jewish family in Breslau, Prussia, on 29 October 1925. His parents settled with him in London to escape Nazi persecution in 1933, and he was raised and educated in the UK. His father, a solicitor, had been exposed to poison gas during World War I and died while Roth was still young. Roth became a pupil at St Paul's School, London from 1939 to 1943, and with the rest of the school he was evacuated from London to Easthampstead Park during the Blitz. At school, he was known for his ability in both chess and mathematics. He tried to join the Air Training Corps, but was blocked for some years for being German and then after that for lacking the coordination needed for a pilot. Mathematical education. Roth read mathematics at Peterhouse, Cambridge, and played first board for the Cambridge chess team, finishing in 1945. Despite his skill in mathematics, he achieved only third-class honours on the Mathematical Tripos, because of his poor test-taking ability. His Cambridge tutor, John Charles Burkill, was not supportive of Roth continuing in mathematics, recommending instead that he take "some commercial job with a statistical bias". Instead, he briefly became a schoolteacher at Gordonstoun, between finishing at Cambridge and beginning his graduate studies. On the recommendation of Harold Davenport, he was accepted in 1946 to a master's program in mathematics at University College London, where he worked under the supervision of Theodor Estermann. He completed a master's degree there in 1948, and a doctorate in 1950. His dissertation was "Proof that almost all Positive Integers are Sums of a Square, a Positive Cube and a Fourth Power". Career. On receiving his master's degree in 1948, Roth became an assistant lecturer at University College London, and in 1950 he was promoted to lecturer. His most significant contributions, on Diophantine approximation, progression-free sequences, and discrepancy, were all published in the mid-1950s, and by 1958 he was given the Fields Medal, mathematicians' highest honour. However, it was not until 1961 that he was promoted to full professor. During this period, he continued to work closely with Harold Davenport. He took sabbaticals at the Massachusetts Institute of Technology in the mid-1950s and mid-1960s, and seriously considered migrating to the United States. 
Walter Hayman and Patrick Linstead countered this possibility, which they saw as a threat to British mathematics, with an offer of a chair in pure mathematics at Imperial College London, and Roth accepted the chair in 1966. He retained this position until official retirement in 1988. He remained at Imperial College as Visiting Professor until 1996. Roth's lectures were usually very clear but could occasionally be erratic. The Mathematics Genealogy Project lists him as having only two doctoral students, but one of them, William Chen, who continued Roth's work in discrepancy theory, became a Fellow of the Australian Mathematical Society and head of the mathematics department at Macquarie University. Personal life. In 1955, Roth married Mélèk Khaïry, who had attracted his attention when she was a student in his first lecture; Khaïry was a daughter of Egyptian senator Khaïry Pacha. She came to work for the psychology department at University College London, where she published research on the effects of toxins on rats. On Roth's retirement, they moved to Inverness; Roth dedicated a room of their house to Latin dancing, a shared interest of theirs. Khaïry died in 2002, and Roth died in Inverness on 10 November 2015 at the age of 90. They had no children, and Roth dedicated the bulk of his estate, over one million pounds, to two health charities "to help elderly and infirm people living in the city of Inverness". He sent the Fields Medal with a smaller bequest to Peterhouse. Contributions. Roth was known as a problem-solver in mathematics, rather than as a theory-builder. Harold Davenport writes that the "moral in Dr Roth's work" is that "the great unsolved problems of mathematics may still yield to direct attack, however difficult and forbidding they appear to be, and however much effort has already been spent on them". His research interests spanned several topics in number theory, discrepancy theory, and the theory of integer sequences. Diophantine approximation. The subject of Diophantine approximation seeks accurate approximations of irrational numbers by rational numbers. The question of how accurately algebraic numbers could be approximated became known as the Thue–Siegel problem, after previous progress on this question by Axel Thue and Carl Ludwig Siegel. The accuracy of approximation can be measured by the approximation exponent of a number formula_0, defined as the largest number formula_1 such that formula_0 has infinitely many rational approximations formula_2 with formula_3. If the approximation exponent is large, then formula_0 has more accurate approximations than a number whose exponent is smaller. The smallest possible approximation exponent is two: even the hardest-to-approximate numbers can be approximated with exponent two using continued fractions. Before Roth's work, it was believed that the algebraic numbers could have a larger approximation exponent, related to the degree of the polynomial defining the number. In 1955, Roth published what is now known as Roth's theorem, completely settling this question. His theorem falsified the supposed connection between approximation exponent and degree, and proved that, in terms of the approximation exponent, the algebraic numbers are the least accurately approximated of any irrational numbers. More precisely, he proved that for irrational algebraic numbers, the approximation exponent is always exactly two. 
In a survey of Roth's work presented by Harold Davenport to the International Congress of Mathematicians in 1958, when Roth was given the Fields Medal, Davenport called this result Roth's "greatest achievement". Arithmetic combinatorics. Another result called "Roth's theorem", from 1953, is in arithmetic combinatorics and concerns sequences of integers with no three in arithmetic progression. These sequences had been studied in 1936 by Paul Erdős and Pál Turán, who conjectured that they must be sparse. However, in 1942, Raphaël Salem and Donald C. Spencer constructed progression-free subsets of the numbers from formula_4 to formula_5 of size proportional to formula_6, for every formula_7. Roth vindicated Erdős and Turán by proving that it is not possible for the size of such a set to be proportional to formula_5: every dense set of integers contains a three-term arithmetic progression. His proof uses techniques from analytic number theory, including the Hardy–Littlewood circle method, to estimate the number of progressions in a given sequence and show that, when the sequence is dense enough, this number is nonzero. Other authors later strengthened Roth's bound on the size of progression-free sets. A strengthening in a different direction, Szemerédi's theorem, shows that dense sets of integers contain arbitrarily long arithmetic progressions. Discrepancy. Although Roth's work on Diophantine approximation led to the highest recognition for him, it is his research on irregularities of distribution that (according to an obituary by William Chen and Bob Vaughan) he was most proud of. His 1954 paper on this topic laid the foundations for modern discrepancy theory. It concerns the placement of formula_5 points in a unit square so that, for every rectangle bounded between the origin and a point of the square, the area of the rectangle is well approximated by the fraction of the points that lie in it. Roth measured this approximation by the squared difference between the number of points and formula_5 times the area, and proved that for a randomly chosen rectangle the expected value of the squared difference is logarithmic in formula_5. This result is best possible, and significantly improved a previous bound on the same problem by Tatyana Pavlovna Ehrenfest. Despite the prior work of Ehrenfest and Johannes van der Corput on the same problem, Roth was known for boasting that this result "started a subject". Other topics. Some of Roth's earliest works included a 1949 paper on sums of powers, showing that almost all positive integers could be represented as a sum of a square, a cube, and a fourth power, and a 1951 paper on the gaps between squarefree numbers, described as "quite sensational" and "of considerable importance", respectively, by Chen and Vaughan. His inaugural lecture at Imperial College concerned the large sieve: bounding the size of sets of integers from which many congruence classes of numbers modulo prime numbers have been forbidden. Roth had previously published a paper on this problem in 1965. Another of Roth's interests was the Heilbronn triangle problem, of placing points in a square to avoid triangles of small area. His 1951 paper on the problem was the first to prove a nontrivial upper bound on the area that can be achieved. He eventually published four papers on this problem, the latest in 1976. Roth also made significant progress on square packing in a square. 
If unit squares are packed into an formula_8 square in the obvious, axis-parallel way, then for values of formula_9 that are just below an integer, nearly formula_10 area can be left uncovered. After Paul Erdős and Ronald Graham proved that a more clever tilted packing could leave a significantly smaller area, only formula_11, Roth and Bob Vaughan responded with a 1978 paper proving the first nontrivial lower bound on the problem. As they showed, for some values of formula_9, the uncovered area must be at least proportional to formula_12. In 1966, Heini Halberstam and Roth published their book "Sequences", on integer sequences. Initially planned to be the first of a two-volume set, its topics included the densities of sums of sequences, bounds on the number of representations of integers as sums of members of sequences, density of sequences whose sums represent all integers, sieve theory and the probabilistic method, and sequences in which no element is a multiple of another. A second edition was published in 1983. Recognition. Roth won the Fields Medal in 1958 for his work on Diophantine approximation. He was the first British Fields medalist. He was elected to the Royal Society in 1960, and later became an Honorary Fellow of the Royal Society of Edinburgh, Fellow of University College London, Fellow of Imperial College London, and Honorary Fellow of Peterhouse. It was a source of amusement to him that his Fields Medal, election to the Royal Society, and professorial chair came to him in the reverse order of their prestige. The London Mathematical Society gave Roth the De Morgan Medal in 1983. In 1991, the Royal Society gave him their Sylvester Medal "for his many contributions to number theory and in particular his solution of the famous problem concerning approximating algebraic numbers by rationals." A festschrift of 32 essays on topics related to Roth's research was published in 2009, in honour of Roth's 80th birthday, and in 2017 the editors of the journal "Mathematika" dedicated a special issue to Roth. After Roth's death, the Imperial College Department of Mathematics instituted the Roth Scholarship in his honour. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "e" }, { "math_id": 2, "text": "p/q" }, { "math_id": 3, "text": "|x-p/q|<1/q^e" }, { "math_id": 4, "text": "1" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "n^{1-\\varepsilon}" }, { "math_id": 7, "text": "\\varepsilon>0" }, { "math_id": 8, "text": "s\\times s" }, { "math_id": 9, "text": "s" }, { "math_id": 10, "text": "2s" }, { "math_id": 11, "text": "O(s^{7/11})" }, { "math_id": 12, "text": "\\sqrt{s}" } ]
https://en.wikipedia.org/wiki?curid=787827
7878457
Computer
Automatic general-purpose device for performing arithmetic or logical operations A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation; or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Etymology. It was not until the mid-20th century that the word acquired its modern definition; according to the "Oxford English Dictionary", the first known use of the word "computer" was in a different sense, in a 1613 book called "The Yong Mans Gleanings" by the English writer Richard Brathwait: "I haue ["sic"] read the truest computer of Times, and the best Arithmetician that euer ["sic"] breathed, and he reduceth thy dayes into a short number." 
This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The "Online Etymology Dictionary" gives the first attested use of "computer" in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The "Online Etymology Dictionary" states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The "Online Etymology Dictionary" indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as "Turing machine"". The name has remained, although modern computers are capable of many higher-level functions. History. Pre-20th century. Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. 
The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. First computer. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. 
The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the "mill") in 1888. He gave a successful demonstration of its use in computing tables in 1906. Electromechanical calculating machine. In his work "Essays on Automatics", published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like formula_0 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. Analog computers. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Digital computers. Electromechanical. By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well. 
Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. Vacuum tubes and digital electronic circuits. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. 
Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. Modern computers. Concept of modern computer. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, "On Computable Numbers". Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Stored programs. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his "First Draft of a Report on the EDVAC" in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. 
Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons &amp; Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. Transistors. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. 
In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. Integrated circuits. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide in the late 1950s. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. 
In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC; all of this is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. Mobile computers. The first mobile computers were heavy and ran from mains power. The IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types. Computers can be classified in a number of different ways. Hardware. The term "hardware" covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices. When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are keyboards, mice and joysticks. Output devices. The means through which a computer gives output are known as output devices. Some examples of output devices are monitors and printers. Control unit. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. 
Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the next instruction from the memory cell indicated by the program counter, decodes it into control signals for the other units, increments the program counter, fetches any data the instruction requires from memory, passes that data to the ALU or registers, has the requested operation carried out, writes the result back to memory or a register, and then repeats the cycle for the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. Central processing unit (CPU). The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a "microprocessor". Arithmetic logic unit (ALU). The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. Memory. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. 
Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. Input/output (I/O). I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.
Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. Multitasking. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Multiprocessing. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. Software. "Software" refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media.
It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". Languages. There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications. Programs. The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. Stored program architecture. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
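In a high-level programming language, the same task is only a few lines long. The snippet below uses Python purely as an illustrative sketch, since the text itself does not tie the example to any particular language:
total = 0
for number in range(1, 1001):    # the numbers 1 to 1,000 inclusive
    total = total + number
print(total)                     # prints 500500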
The following example is written in the MIPS assembly language:
begin:
  addi $8, $0, 0           # initialize sum to 0
  addi $9, $0, 1           # set first number to add = 1
loop:
  slti $10, $9, 1001       # check whether the number is still less than or equal to 1000
  beq $10, $0, finish      # if it is not, exit the loop
  add $8, $8, $9           # update sum
  addi $9, $9, 1           # get next number
  j loop                   # repeat the summing process
finish:
  add $2, $8, $0           # put sum in output register
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. Machine code. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Programming language. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. Low-level languages. Machine languages and the assembly languages that represent them (collectively termed "low-level programming languages") are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC. Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. High-level languages. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Bugs. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet. Computers have been used to coordinate information between multiple locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the Arpanet possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments. Unconventional computers. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, a typical modern definition of a computer is: ""A device that computes", especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that "processes information" qualifies as a computer. Future. There is active research to make non-classical computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly. Computer architecture paradigms. There are many types of computer architectures: Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) 
is able to perform the same computational tasks, given enough time and storage capacity. Artificial intelligence. A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing. Professions and organizations. As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "a^x(y - z)^2" } ]
https://en.wikipedia.org/wiki?curid=7878457
787853
Roth's theorem
Algebraic numbers are not near many rationals In mathematics, Roth's theorem or Thue–Siegel–Roth theorem is a fundamental result in diophantine approximation to algebraic numbers. It is of a qualitative type, stating that algebraic numbers cannot have many rational number approximations that are 'very good'. Over half a century, the meaning of "very good" here was refined by a number of mathematicians, starting with Joseph Liouville in 1844 and continuing with work of Axel Thue (1909), Carl Ludwig Siegel (1921), Freeman Dyson (1947), and Klaus Roth (1955). Statement. Roth's theorem states that every irrational algebraic number formula_0 has approximation exponent equal to 2. This means that, for every formula_1, the inequality formula_2 can have only finitely many solutions in coprime integers formula_3 and formula_4. Roth's proof of this fact resolved a conjecture by Siegel. It follows that every irrational algebraic number α satisfies formula_5 with formula_6 a positive number depending only on formula_1 and formula_0. Discussion. The first result in this direction is Liouville's theorem on approximation of algebraic numbers, which gives an approximation exponent of "d" for an algebraic number α of degree "d" ≥ 2. This is already enough to demonstrate the existence of transcendental numbers. Thue realised that an exponent less than "d" would have applications to the solution of Diophantine equations and in Thue's theorem from 1909 established an exponent formula_7, which he applied to prove the finiteness of the solutions of the Thue equation. Siegel's theorem improves this to an exponent about 2√"d", and Dyson's theorem of 1947 has exponent about √(2"d"). Roth's result with exponent 2 is in some sense the best possible, because this statement would fail on setting formula_8: by Dirichlet's theorem on diophantine approximation there are infinitely many solutions in this case. However, there is a stronger conjecture of Serge Lang that formula_9 can have only finitely many solutions in integers "p" and "q". If one lets α run over the whole of the set of real numbers, not just the algebraic reals, then both Roth's conclusion and Lang's hold for almost all formula_0. So both the theorem and the conjecture assert that a certain countable set misses a certain set of measure zero. The theorem is not currently effective: that is, there is no bound known on the possible values of "p","q" given formula_0. It has been shown that Roth's techniques can be used to give an effective bound for the number of "p"/"q" satisfying the inequality, using a "gap" principle. The fact that we do not actually know "C"(ε) means that the project of solving the equation, or bounding the size of the solutions, is out of reach. Proof technique. The proof technique involves constructing an auxiliary multivariate polynomial in an arbitrarily large number of variables depending upon formula_10, leading to a contradiction in the presence of too many good approximations. More specifically, one finds a certain number of rational approximations to the irrational algebraic number in question, and then applies the function over each of these simultaneously (i.e. each of these rational numbers serves as the input to a unique variable in the expression defining our function). By its nature, it was ineffective (see effective results in number theory); this is of particular interest since a major application of this type of result is to bound the number of solutions of some diophantine equations. Generalizations.
There is a higher-dimensional version, Schmidt's subspace theorem, of the basic result. There are also numerous extensions, for example using the p-adic metric, based on the Roth method. William J. LeVeque generalized the result by showing that a similar bound holds when the approximating numbers are taken from a fixed algebraic number field. Define the "height" "H"(ξ) of an algebraic number ξ to be the maximum of the absolute values of the coefficients of its minimal polynomial. Fix κ > 2. For a given algebraic number α and algebraic number field "K", the inequality formula_11 has only finitely many solutions in elements ξ of "K". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
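The exponent in Roth's theorem can be illustrated numerically. The following Python sketch (written for illustration here, not drawn from the references) lists continued-fraction convergents "p"/"q" of the algebraic number 2^(1/3) together with the effective exponent "e" defined by |α − "p"/"q"| = 1/"q"^"e"; Roth's theorem says that, for any ε > 0, only finitely many "p"/"q" can achieve an exponent above 2 + ε:
# Convergents p/q of alpha = 2^(1/3) and the effective exponent e with
# |alpha - p/q| = 1/q^e. Double precision limits how far this can be pushed,
# so only the first ten convergents are examined; illustrative sketch only.
from math import log

alpha = 2.0 ** (1.0 / 3.0)
x = alpha
p_prev, p_prev2 = 1, 0          # numerator recurrence of the convergents
q_prev, q_prev2 = 0, 1          # denominator recurrence
for _ in range(10):
    a = int(x)                                   # next partial quotient
    p_prev, p_prev2 = a * p_prev + p_prev2, p_prev
    q_prev, q_prev2 = a * q_prev + q_prev2, q_prev
    p, q = p_prev, q_prev
    error = abs(alpha - p / q)
    if q > 1:
        exponent = -log(error) / log(q)
        print(f"{p}/{q}: |alpha - p/q| = {error:.3e}, exponent = {exponent:.3f}")
    x = 1.0 / (x - a)                            # continue the expansion
The printed exponents cluster just above 2, as Dirichlet's theorem guarantees from below and Roth's theorem bounds from above.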
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\varepsilon>0" }, { "math_id": 2, "text": "\\left|\\alpha - \\frac{p}{q}\\right| < \\frac{1}{q^{2 + \\varepsilon}}" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "q" }, { "math_id": 5, "text": "\\left|\\alpha - \\frac{p}{q}\\right| > \\frac{C(\\alpha,\\varepsilon)}{q^{2 + \\varepsilon}}" }, { "math_id": 6, "text": "C(\\alpha,\\varepsilon)" }, { "math_id": 7, "text": "d/2 + 1 + \\varepsilon" }, { "math_id": 8, "text": "\\varepsilon = 0" }, { "math_id": 9, "text": "\\left|\\alpha - \\frac{p}{q}\\right| < \\frac{1}{q^2 \\log(q)^{1+\\varepsilon}}" }, { "math_id": 10, "text": "\\varepsilon" }, { "math_id": 11, "text": " | \\alpha - \\xi | < \\frac{1}{H(\\xi)^\\kappa} " } ]
https://en.wikipedia.org/wiki?curid=787853
7878893
Andrica's conjecture
Andrica's conjecture (named after the Romanian mathematician Dorin Andrica) is a conjecture regarding the gaps between prime numbers. The conjecture states that the inequality formula_1 holds for all formula_2, where formula_3 is the "n"th prime number. If formula_4 denotes the "n"th prime gap, then Andrica's conjecture can also be rewritten as formula_5 Empirical evidence. Imran Ghory has used data on the largest prime gaps to confirm the conjecture for formula_2 up to 1.3002 × 10^16. Using a table of maximal gaps and the above gap inequality, the confirmation value can be extended exhaustively to 4 × 10^18. The discrete function formula_6 is plotted in the figures opposite. The high-water marks for formula_0 occur for "n" = 1, 2, and 4, with "A"4 ≈ 0.670873..., with no larger value among the first 10^5 primes. Since the Andrica function decreases asymptotically as "n" increases, a prime gap of ever increasing size is needed to make the difference large as "n" becomes large. It therefore seems highly likely the conjecture is true, although this has not yet been proven. Generalizations. As a generalization of Andrica's conjecture, the following equation has been considered: formula_7 where formula_8 is the "n"th prime and "x" can be any positive number. The largest possible solution for "x" is easily seen to occur for "n"=1, when "x"max = 1. The smallest solution for "x" is conjectured to be "x"min ≈ 0.567148... (sequence in the OEIS) which occurs for "n" = 30. This conjecture has also been stated as an inequality, the generalized Andrica conjecture: formula_9 for formula_10 References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
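The quantity formula_6 is easy to examine by computer. The following Python sketch (illustrative only; it uses naive trial division rather than the large-scale computations reported above) evaluates it for the primes below 10,000 and confirms that the largest value occurs at "n" = 4:
# A_n = sqrt(p_{n+1}) - sqrt(p_n) for the primes below 10,000,
# using a naive trial-division primality test (fine at this small scale).
from math import isqrt, sqrt

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, isqrt(k) + 1))

primes = [k for k in range(2, 10000) if is_prime(k)]
best_n, best_value = 0, 0.0
for i in range(len(primes) - 1):
    a_n = sqrt(primes[i + 1]) - sqrt(primes[i])
    if a_n > best_value:
        best_n, best_value = i + 1, a_n          # n is 1-based, as in the article
print(best_n, best_value)                        # prints 4 followed by A_4 = 0.670873...
Every value computed this way is well below 1, in line with the conjecture.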
[ { "math_id": 0, "text": "A_n" }, { "math_id": 1, "text": "\\sqrt{p_{n+1}} - \\sqrt{p_n} < 1 " }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "p_n" }, { "math_id": 4, "text": "g_n = p_{n+1} - p_n" }, { "math_id": 5, "text": "g_n < 2\\sqrt{p_n} + 1." }, { "math_id": 6, "text": "A_n = \\sqrt{p_{n+1}}-\\sqrt{p_n}" }, { "math_id": 7, "text": " p _ {n+1} ^ x - p_ n ^ x = 1, " }, { "math_id": 8, "text": " p_n " }, { "math_id": 9, "text": " p _ {n+1} ^ x - p_ n ^ x < 1 " }, { "math_id": 10, "text": "x < x_{\\min}." } ]
https://en.wikipedia.org/wiki?curid=7878893
7879246
Lucas's theorem
In number theory, Lucas's theorem expresses the remainder of division of the binomial coefficient formula_0 by a prime number "p" in terms of the base "p" expansions of the integers "m" and "n". Lucas's theorem first appeared in 1878 in papers by Édouard Lucas. Statement. For non-negative integers "m" and "n" and a prime "p", the following congruence relation holds: formula_1 where formula_2 and formula_3 are the base "p" expansions of "m" and "n" respectively. This uses the convention that formula_4 if "m" < "n". Proofs. There are several ways to prove Lucas's theorem. &lt;templatestyles src="Math_proof/styles.css" /&gt;Combinatorial proof Let "M" be a set with "m" elements, and divide it into "mi" cycles of length "pi" for the various values of "i". Then each of these cycles can be rotated separately, so that a group "G" which is the Cartesian product of cyclic groups "Cpi" acts on "M". It thus also acts on subsets "N" of size "n". Since the number of elements in "G" is a power of "p", the same is true of any of its orbits. Thus in order to compute formula_0 modulo "p", we only need to consider fixed points of this group action. The fixed points are those subsets "N" that are a union of some of the cycles. More precisely one can show by induction on "k"-"i", that "N" must have exactly "ni" cycles of size "pi". Thus the number of choices for "N" is exactly formula_5. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof based on generating functions This proof is due to Nathan Fine. If "p" is a prime and "n" is an integer with 1 ≤ "n" ≤ "p" − 1, then the numerator of the binomial coefficient formula_6 is divisible by "p" but the denominator is not. Hence "p" divides formula_7. In terms of ordinary generating functions, this means that formula_8 Continuing by induction, we have for every nonnegative integer "i" that formula_9 Now let "m" be a nonnegative integer, and let "p" be a prime. Write "m" in base "p", so that formula_10 for some nonnegative integer "k" and integers "m""i" with 0 ≤ "m""i" ≤ "p"-1. Then formula_11 as the representation of "n" in base "p" is unique and in the final product, "n""i" is the "i"th digit in the base "p" representation of "n". This proves Lucas's theorem. Non-prime moduli. Lucas's theorem can be generalized to give an expression for the remainder when formula_12 is divided by a prime power "p""k". However, the formulas become more complicated. If the modulus is the square of a prime "p", the following congruence relation holds for all 0 ≤ "s" ≤ "r" ≤ "p" − 1, "a" ≥ 0, and "b" ≥ 0: formula_13 where formula_14 is the "n"th harmonic number. Generalizations of Lucas's theorem for higher prime powers "p""k" are also given by Davis and Webb (1990) and Granville (1997). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
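The theorem gives an efficient way to compute a binomial coefficient modulo a prime, since only the base "p" digits of "m" and "n" are needed. The following Python sketch (written for illustration, not taken from the references) does exactly that and checks the answer against a direct computation:
# Binomial coefficient C(m, n) modulo a prime p, via Lucas's theorem.
from math import comb

def binomial_mod_p(m, n, p):
    result = 1
    while m > 0 or n > 0:
        m_i, n_i = m % p, n % p          # current base-p digits of m and n
        if n_i > m_i:                    # convention: C(m_i, n_i) = 0 when n_i > m_i
            return 0
        result = (result * comb(m_i, n_i)) % p
        m //= p
        n //= p
    return result

# The digit-by-digit product agrees with reducing the full binomial coefficient:
assert binomial_mod_p(1000, 300, 13) == comb(1000, 300) % 13
print(binomial_mod_p(1000, 300, 13))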
[ { "math_id": 0, "text": "\\tbinom{m}{n}" }, { "math_id": 1, "text": "\\binom{m}{n}\\equiv\\prod_{i=0}^k\\binom{m_i}{n_i}\\pmod p," }, { "math_id": 2, "text": "m=m_kp^k+m_{k-1}p^{k-1}+\\cdots +m_1p+m_0," }, { "math_id": 3, "text": "n=n_kp^k+n_{k-1}p^{k-1}+\\cdots +n_1p+n_0" }, { "math_id": 4, "text": "\\tbinom{m}{n} = 0" }, { "math_id": 5, "text": "\\prod_{i=0}^k\\binom{m_i}{n_i}\\pmod{p}" }, { "math_id": 6, "text": " \\binom p n = \\frac{p \\cdot (p-1) \\cdots (p-n+1)}{n \\cdot (n-1) \\cdots 1} " }, { "math_id": 7, "text": "\\tbinom{p}{n}" }, { "math_id": 8, "text": "(1+X)^p\\equiv1+X^p\\pmod{p}." }, { "math_id": 9, "text": "(1+X)^{p^i}\\equiv1+X^{p^i}\\pmod{p}." }, { "math_id": 10, "text": "m=\\sum_{i=0}^{k}m_ip^i" }, { "math_id": 11, "text": "\\begin{align}\n\\sum_{n=0}^{m}\\binom{m}{n}X^n &\n=(1+X)^m=\\prod_{i=0}^{k}\\left((1+X)^{p^i}\\right)^{m_i}\\\\\n & \\equiv \\prod_{i=0}^{k}\\left(1+X^{p^i}\\right)^{m_i}\n=\\prod_{i=0}^{k}\\left(\\sum_{j_i=0}^{m_i}\\binom{m_i}{j_i}X^{j_ip^i}\\right)\\\\\n & =\\prod_{i=0}^{k}\\left(\\sum_{j_i=0}^{p-1}\\binom{m_i}{j_i}X^{j_ip^i}\\right) \\\\\n & =\\sum_{n=0}^{m}\\left(\\prod_{i=0}^{k}\\binom{m_i}{n_i}\\right)X^n\n\\pmod{p},\n\\end{align}" }, { "math_id": 12, "text": "\\tbinom mn" }, { "math_id": 13, "text": "\\binom{pa+r}{pb+s}\\equiv\\binom ab\\binom rs(1+pa(H_r-H_{r-s})+pb(H_{r-s}-H_s))\\pmod{p^2}," }, { "math_id": 14, "text": "H_n=1+\\tfrac12+\\tfrac13+\\cdots+\\tfrac1n" } ]
https://en.wikipedia.org/wiki?curid=7879246
7880215
Rank (differential topology)
In mathematics, the rank of a differentiable map formula_0 between differentiable manifolds at a point formula_1 is the rank of the derivative of formula_2 at formula_3. Recall that the derivative of formula_2 at formula_3 is a linear map formula_4 from the tangent space at "p" to the tangent space at "f"("p"). As a linear map between vector spaces it has a well-defined rank, which is just the dimension of the image in "T""f"("p")"N": formula_5 Constant rank maps. A differentiable map "f" : "M" → "N" is said to have constant rank if the rank of "f" is the same for all "p" in "M". Constant rank maps have a number of nice properties and are an important concept in differential topology. Three special cases of constant rank maps occur. A constant rank map "f" : "M" → "N" is an immersion if its rank equals the dimension of "M" (so that the derivative is everywhere injective), a submersion if its rank equals the dimension of "N" (the derivative is everywhere surjective), and a local diffeomorphism if its rank equals both (the derivative is everywhere bijective). The map "f" itself need not be injective, surjective, or bijective for these conditions to hold, only the behavior of the derivative is important. For example, there are injective maps which are not immersions and immersions which are not injections. However, if "f" : "M" → "N" is a smooth map of constant rank then injectivity of "f" implies that it is an immersion, surjectivity implies that it is a submersion, and bijectivity implies that it is a diffeomorphism. Constant rank maps have a nice description in terms of local coordinates. Suppose "M" and "N" are smooth manifolds of dimensions "m" and "n" respectively, and "f" : "M" → "N" is a smooth map with constant rank "k". Then for all "p" in "M" there exist coordinates ("x"1, ..., "x""m") centered at "p" and coordinates ("y"1, ..., "y""n") centered at "f"("p") such that "f" is given by formula_6 in these coordinates. Examples. Maps whose rank is generically maximal, but drops at certain singular points, occur frequently in coordinate systems. For example, in spherical coordinates, the rank of the map from the two angles to a point on the sphere (formally, a map "T"2 → "S"2 from the torus to the sphere) is 2 at regular points, but is only 1 at the north and south poles (zenith and nadir). A subtler example occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, due to 3-dimensional rotations being heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space RP3, and it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simple, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus "T"3 of three angles to the real projective space RP3 of rotations, but this map does not have rank 3 at all points (formally because it cannot be a covering map, as the only (non-trivial) covering space is the hypersphere "S"3), and the phenomenon of the rank dropping to 2 at certain points is referred to in engineering as "gimbal lock."
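The rank drop at the poles of the sphere can be checked numerically. The following Python/NumPy sketch (an illustration added here, with the finite-difference step size chosen arbitrarily) builds the map from the two angles (θ, φ) to a point on the unit sphere, approximates its Jacobian matrix by finite differences, and prints its rank at a generic point and at the north pole:
# Rank of the derivative of f(theta, phi) =
# (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)),
# where theta is the polar angle, so theta = 0 is the north pole.
import numpy as np

def f(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def jacobian(theta, phi, h=1e-6):
    # forward-difference approximation of the 3-by-2 Jacobian matrix
    base = f(theta, phi)
    d_theta = (f(theta + h, phi) - base) / h
    d_phi = (f(theta, phi + h) - base) / h
    return np.column_stack([d_theta, d_phi])

print(np.linalg.matrix_rank(jacobian(1.0, 0.5), tol=1e-3))   # 2 at a generic point
print(np.linalg.matrix_rank(jacobian(0.0, 0.5), tol=1e-3))   # 1 at the north pole
The column of the Jacobian corresponding to φ vanishes at the pole, which is exactly the rank drop described above.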
[ { "math_id": 0, "text": "f:M\\to N" }, { "math_id": 1, "text": "p\\in M" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "d_p f : T_p M \\to T_{f(p)}N\\," }, { "math_id": 5, "text": "\\operatorname{rank}(f)_p = \\dim(\\operatorname{im}(d_p f))." }, { "math_id": 6, "text": "f(x^1,\\ldots,x^m) = (x^1,\\ldots, x^k,0,\\ldots,0)\\," } ]
https://en.wikipedia.org/wiki?curid=7880215
78809
Speed limit
Maximum legal speed of vehicles Speed limits on road traffic, as used in most countries, set the legal maximum speed at which vehicles may travel on a given stretch of road. Speed limits are generally indicated on a traffic sign reflecting the maximum permitted speed, expressed as kilometres per hour (km/h) or miles per hour (mph) or both. Speed limits are commonly set by the legislative bodies of national or provincial governments and enforced by national or regional police and judicial authorities. Speed limits may also be variable, or in some places nonexistent, such as on most of the Autobahnen in Germany. The first numeric speed limit for automobiles was the limit introduced in the United Kingdom in 1861. As of 2018, the highest posted speed limit in the world is , applied on two motorways in the UAE. Speed limits and safety distance are poorly enforced in the UAE, specifically on the Abu Dhabi to Dubai motorway – which results in dangerous traffic, according to a French government travel advisory. Additionally, "drivers often drive at high speeds [and] unsafe driving practices are common, especially on inter-city highways. On highways, unmarked speed bumps and drifting sand create additional hazards", according to a travel advisory issued by the U.S. State Department. There are several reasons to regulate speed on roads. It is often done in an attempt to improve road traffic safety and to reduce the number of casualties from traffic collisions. The World Health Organization (WHO) identified speed control as one of a number of steps that can be taken to reduce road casualties. As of 2021, the WHO estimates that approximately 1.3 million people die of road traffic crashes each year. Authorities may also set speed limits to reduce the environmental impact of road traffic (vehicle noise, vibration, emissions) or to enhance the safety of pedestrians, cyclists, and other road-users. For example, a draft proposal from Germany's National Platform on the Future of Mobility task force recommended a blanket 130 km/h (81 mph) speed limit across the Autobahnen to curb fuel consumption and carbon emissions. Some cities have reduced limits to as little as for both safety and efficiency reasons. However, some research indicates that changes in the speed limit may not always alter average vehicle speed. Lower speed limits could reduce the use of over-engineered vehicles. History. In Western cultures, speed limits predate the use of motorized vehicles. In 1652, the Dutch colony of New Amsterdam passed a law stating, "No wagons, carts or sleighs shall be run, rode or driven at a gallop". The punishment for breaking the law was "two pounds Flemish", the equivalent of US $50 in 2019. The 1832 "Stage Carriage Act" introduced the offense of endangering the safety of a passenger or person by "furious driving" in the United Kingdom (UK). In 1872, then-President of the United States Ulysses S. Grant was arrested for speeding in his horse-drawn carriage in Washington, D.C. A series of Locomotive Acts (in 1861, 1865 and 1878) created the first numeric speed limits for mechanically propelled vehicles in the UK; the 1861 Act introduced a UK speed limit of on open roads in town, which was reduced to in towns and in rural areas by the 1865 "Red Flag Act". The Locomotives on Highways Act 1896, which raised the speed limit to is celebrated by the annual London to Brighton Veteran Car Run.
On 28 January 1896, the first person to be convicted of speeding is believed to be Walter Arnold of East Peckham, Kent, UK, who was fined 1 shilling plus costs for speeding at . In 1901, Connecticut was the first state in the United States to impose a numerical speed limit for motor vehicles, setting the maximum legal speed to in cities and on rural roads. Speed limits then propagated across the United States; by 1930 all but 12 states had established numerical limits. In 1903, in the UK, the national speed limit was raised to ; however, as this was difficult to enforce due to the lack of speedometers, the 1930 "Road Traffic Act" abolished speed limits entirely. In 1934, a new limit of was imposed in urban centers, and in July 1967, a national speed limit was introduced. In Australia, during the early 20th century, there were people reported for "furious driving" offenses. One conviction in 1905 cited a vehicle furiously driving when passing a tram traveling at half that speed. In May 1934, the Nazi-era "Road Traffic Act" imposed the first nationwide speed limit in Germany. In the 1960s, in continental Europe, some speed limits were established based on the V85 speed, (so that 85% of drivers respect this speed). In 1974, Australian speed limits underwent metrication: the urban speed limit of was converted to ; the rural speed limits of and were changed to and respectively. In 2010, Sweden defined the "Vision Zero" program, a multi-national road traffic safety project that aims to achieve a highway system with no fatalities or serious injuries involving road traffic. Regulations. Most countries use the metric speed unit of kilometres per hour, while others, including the United States, United Kingdom, and Liberia, use speed limits given in miles per hour. Vienna Convention on Road Traffic. In countries bound by the Vienna Conventions on Road Traffic (1968 &amp; 1977), Article 13 defines a basic rule for speed and distance between vehicles: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Every driver of a vehicle shall in all circumstances have his vehicle under control to be able to exercise due and proper care and to be at all times in a position to perform all manœuvres required of him. He shall, when adjusting the speed of his vehicle, pay constant regard to the circumstances, in particular the lie of the land, the state of the road, the condition and load of his vehicle, the weather conditions and the density of traffic, so as to be able to stop his vehicle within his range of forward vision and short of any foreseeable obstruction. He shall slow down and if necessary stop whenever circumstances so require, and particularly when visibility is not excellent. Reasonable speed. Most legal systems expect drivers to drive at a safe speed for the conditions at hand, regardless of posted limits. In the United Kingdom, and elsewhere in common law, this is known as the reasonable man requirement. The German Highway Code ("Straßenverkehrs-Ordnung") section on speed begins with the statement (translated to English): Any person driving a vehicle may only drive so fast that the car is under control. Speeds must be adapted to the road, traffic, visibility and weather conditions as well as the personal skills and characteristics of the vehicle and load. 
In France, the law clarifies that even if the speed is limited by law and by local authority, the driver assumes the responsibility to control a vehicle's speed, and to reduce that speed in various circumstances (such as when overtaking a pedestrian or bicycle, individually or in a group; when overtaking a stopped convoy; when passing a transportation vehicle loading or unloading people or children; when the road does not appear clear, or risky; when visibility is low, etc.). If drivers do not control their speed, or do not reduce it in such cases, they can be penalized. Other qualifying conditions include driving through fog, heavy rain, ice, snow, gravel, or when drivers encounter sharp corners, a blinding glare, darkness, crossing traffic, or when there is an obstructed view of orthogonal traffic—such as by road curvature, parked cars, vegetation, or snow banks—thus limiting the Assured Clear Distance Ahead (ACDA). In the United States, this requirement is referred to as the basic rule, as outlined by US federal government law (49 CFR 392.14), which applies in all states as permitted under the commerce clause and due process clause. The basic speed law is almost always supplemented by specific maximum or minimum limits but applies regardless. In California, for instance, Vehicle Code section 22350 states that "No person shall drive a vehicle upon a highway at speed greater than is reasonable... and in no event at a speed which endangers the safety of persons or property". The reasonable speed may be different than the posted speed limit. "Basic rule" speed laws are statutory reinforcements of the centuries-old common law negligence doctrine as specifically applied to vehicular speed. Citations for violations of the basic speed law without a crash have sometimes been ruled unfairly vague or arbitrary, hence a violation of the due process of law, at least in the State of Montana. Even within states, differing jurisdictions (counties and cities) choose to prosecute similar cases with differing approaches. Excessive speed. Consequential results of basic law violations are often categorized as "excessive speed" crashes; for example, the leading cause of crashes on German autobahns in 2012 fell into that category: 6,587 so-called "speed related" crashes claimed the lives of 179 people, which represented almost half (46.3%) of 387 autobahn fatalities in 2012. However, "excessive speed" does not necessarily mean the speed limit was exceeded, rather that police determined at least one party traveled too fast for existing conditions. Examples of conditions where drivers may find themselves driving too fast include wet roadways (due to rain, snow, or ice), reduced visibility (due to fog or "white out" snow), uneven roads, construction zones, curves, intersections, gravel roads, and heavy traffic. Per distance traveled, consequences of inappropriate speed are more frequent on lower speed, lower quality roads; in the United States, for example, the "speeding fatality rate for local roads is three times that for Interstates". For speed management, a distinction can exist between "excess speed", which consists of driving in excess of the speed limit, and "inappropriate speed", which consists of going too fast for the conditions. Maximum speed limits. 
Most countries have a legally assigned numerical maximum speed limit which applies on all roads when no other speed limit indications are present; lower speed limits are often shown on a sign at the start of the restricted section, although the presence of streetlights or the physical arrangement of the road may sometimes also be used instead. A posted speed limit may only apply to that road or to all roads beyond the sign that defines them depending on local laws. The speed limit is commonly set at or below the 85th percentile speed (the operating speed which no more than 15% of traffic exceeds), and in the US is frequently set below that speed. Thus, if the 85th percentile operating speed as measured by a "Traffic and Engineering Survey" exceeds the design speed, legal protection is given to motorists traveling at such speeds (design speed is "based on conservative assumptions about the driver, the vehicle, and roadway characteristics"). The theory behind the 85th percentile rules is that, as a policy, most citizens should be deemed reasonable and prudent, and limits must be practical to enforce. However, there are some circumstances where motorists do not tend to process all the risks involved, and as a mass, choose a poor 85th percentile speed. This rule, in practice, is a process for "voting the speed limit" by driving, in contrast to delegating the speed limit to an engineering expert. The maximum speed permitted by statute, as posted, is normally based on ideal driving conditions and the basic speed rule always applies. Violation of the statute generally raises a rebuttable presumption of negligence. On international European roads, speed should be taken into account during the design stage. Minimum speed limits. Some roads also have minimum speed limits, usually where slow speeds can impede traffic flow or be dangerous. The use of minimum speed limits is not as common as maximum speed limits, since the risks of speed are less common at lower speeds. In some jurisdictions, laws requiring a minimum speed are primarily centered around red-light districts or similar areas, where they may colloquially be referred to as "kerb crawling laws". Middle speed limits. Traffic rules limiting only middle speeds are rare. One such example exists on the ice roads in Estonia, where it is advised to avoid driving at the speed of as the vehicle may create resonance that may in turn induce the breaking of ice. This means that two sets of speeds are allowed: under and between . Variable speed limits. In Germany, the first known experiments with variable speed limit signs took place in 1965 on a stretch of German motorway, the A8 between Munich and the border city of Salzburg, Austria. Mechanically variable message signs could display speeds of 60, 80 and 100 km/h, as well as text indicating a "danger zone" or "accident". Personnel monitored traffic using video technology and manually controlled the signage. Beginning in the 1970s, additional advanced traffic control systems were put into service. Modern motorway control systems can work without human intervention using various types of sensors to measure traffic flow and weather conditions. In 2009, of German motorways were equipped with such systems. In the United States, heavily traveled portions of the New Jersey Turnpike began using variable speed limit signs in combination with variable message signs in the late 1960s. Officials can adjust the speed limit according to weather, traffic conditions, and construction. 
More typically, variable speed limits are used on remote stretches of highway in the United States in areas with extreme changes in driving conditions. For example, variable limits were introduced in October 2010 on a stretch of Interstate 80 in Wyoming, replacing the winter season speed reduction from that had been in place since 2008. This Variable Speed Limit system has been proven effective in terms of reducing crash frequency and road closures. Similarly, Interstate 90 at Snoqualmie Pass and other mountain passes in Washington State have variable speed limits as to slow traffic in severe winter weather. As a response to fog-induced chain-reaction collisions involving 99 vehicles in 1990, a variable speed limit system covering of Interstate 75 in Tennessee was implemented in fog-prone areas around the Hiwassee River. The Georgia Department of Transportation installed variable speed limits on part of Interstate 285 around Atlanta in 2014. These speeds can be as low as but are generally set to . In 2016, the Oregon Department of Transportation installed a variable speed zone on a stretch of Interstate 84 between Baker City and Ladd Canyon. The new electronic signs collect data regarding temperature, skid resistance, and average motorist speed to determine the most effective speed limit for the area before presenting the limit on the sign. This speed zone was scheduled to be activated November 2016. Ohio established variable speed limits on three highways in 2017, then in 2019 granted the authority to the Ohio Department of Transportation to establish variable limits on any of its highways. In the United Kingdom, a variable speed limit was introduced on part of the M25 motorway in 1995, on the busiest section from junction 10 to 16. Initial results suggested savings in journey times, smoother-flowing traffic, and a decrease in the number of crashes; the scheme was made permanent in 1997. However, a 2004 National Audit Organisation report noted that the business case was unproved; conditions at the site of the Variable Speed Limits trial were not stable before or during the trial, and the study was deemed neither properly controlled nor reliable. Since December 2008 the upgraded section of the M1 between the M25 and Luton has had the capability for variable speed limits. In January 2010 temporary variable speed cameras on the M1 between J25 and J28 were made permanent. New Zealand introduced variable speed limits in February 2001. The first installation was on the Ngauranga Gorge section of the dual carriageway on State Highway 1, characterized by steep terrain, numerous bends, high traffic volumes, and a higher than average accident rate. The speed limit is normally . Austria undertook a short-term experiment in 2006, with a variable limit configuration that could increase statutory limits under the most favorable conditions, as well as reduce them. In June 2006, a stretch of motorway was configured with variable speed limits that could increase the general Austrian motorway limit of . Then Austrian Transport Minister Hubert Gorbach called the experiment "a milestone in European transport policy-despite all predictions to the contrary"; however, the experiment was discontinued. Roads without speed limits. Just over half of the German autobahns have only an advisory speed limit (a "Richtgeschwindigkeit"), 15% have temporary speed limits due to weather or traffic conditions, and 33% have permanent speed limits, according to 2008 estimates. 
The advisory speed limit applies to any road in Germany outside of towns which is either a dual carriageway or features at least two lanes per direction, regardless of its classification (e.g. Autobahn, Federal Highway, State Road, etc.), unless there is a speed limit posted, although it is less common for non-autobahn roads to be unrestricted. All other roads in Germany outside of towns, regardless of classification, do have a general speed limit of , which is usually reduced to at Allée-streets (roads bordered by trees or bushes on one or both sites). Travel speeds are not regularly monitored in Germany; however, a 2008 report noted that on the autobahn in Niemegk (between Leipzig and Berlin) "significantly more than 60% of road users exceed [and] more than 30% of motorists exceed ". Measurements from the state of Brandenburg in 2006 showed average speeds of on a 6-lane section of autobahn in free-flowing conditions. Prior to German reunification in 1990, accident reduction programs in eastern German states were primarily focused on restrictive traffic regulation. Within two years of reunification, the availability of high-powered vehicles and a 54% increase in motorized traffic led to a doubling of annual traffic deaths, despite "interim arrangements [which] involved the continuation of the speed limit of on autobahns and of outside cities". An extensive program of the four "E"s (enforcement, education, engineering, and emergency response) brought the number of traffic deaths back to pre-unification levels after a decade of effort, while traffic regulations were conformed to western standards (e.g., freeway advisory limit, on other rural roads). Many rural roads on the Isle of Man have no speed limits; a 2004 proposal to introduce general speed limits of and on Mountain Road, for safety reasons, was not pursued following consultation. Measured travel speeds on the island are relatively low. The Indian states of Andhra Pradesh, Maharashtra, and Telangana also do not have speed limits by default. Roads formerly without speed limits. Many roads without a maximum limit became permanently limited following the 1973 oil crisis. For example, Switzerland and Austria had no maximum restriction prior to 1973 on motorways and rural roads, but imposed a temporary maximum limit in response to higher fuel prices; the limit on motorways was increased to later in 1974. Montana and Nevada were the last remaining U.S. states relying exclusively on the basic rule, without a specific, numeric rural speed limit before the National Maximum Speed Law of 1974. After the repeal of federal speed mandates in 1996, Montana was the only state to revert to the basic rule for daylight rural speed regulation. The Montana Supreme Court ruled that the basic rule was too vague to allow citation, prosecution, and conviction of a driver; concluding enforcement was a violation of the due process requirement of the Montana Constitution. In response, Montana's legislature imposed a limit on rural freeways in 1999. Australia's Northern Territory had no rural speed limit until 2007, and again from 2014 to 2016. Sections of the Stuart Highway had no limits as part of an open speed limit trial. Method. Several methods exist to set up a speed limit: For instance, the "Injury Minimization" (known as Safe System) method takes into account the crash types that are likely to occur, the impact forces that result, and the tolerance of the human body to withstand these forces to set speed limit. 
This method is used in countries such as the Netherlands and Sweden. The "Operating speed" method sets the maximum speed at or around the 85th percentile speed. This reduces the need to enforce the speed limit, but also allows drivers to fail to select the appropriate travel speed when they misjudge the risk their environment induces. This is one method used in the United States of America. Enforcement. Speed limit enforcement is the action taken by appropriately empowered authorities to check that road vehicles are complying with the speed limit. Methods used include roadside speed monitoring, set up and operated by the police, and automated roadside speed camera systems, which may incorporate the use of an automatic number plate recognition system. In 2012, in the UK, 30% of drivers did not comply with speed limits. In Europe, between 2009 and 2012, 20% of European drivers were fined for excessive speed. In 2012, in Europe, 62% of people supported the idea of setting up speed-limiting devices, with adequate tolerance levels in order to limit driver confusion. One efficient scheme consists of penalty points and charges for speeding slightly over the speed limit. Another possibility is to alter the roadway by implementing traffic calming measures, vehicle-activated signs, or safety cameras. The city of Munich has adopted "self-explaining roads": roadway widths, intersection controls, and crossing types have been harmonized so that drivers assume the speed limit without a posted sign. Effectiveness. Compliance. Speed limits are more likely to be complied with if drivers have an expectation that the speed limits will be consistently enforced. To be effective and abided by, the speed limits need to be perceived as credible; they should be reasonable regarding factors such as how well the driver can see ahead and to the sides on a particular road. Speed limits also need to conform to road infrastructure, education, and enforcement activity. (Table: Measure of effect of the speed limit reduction from 90 km/h to 80 km/h, in July 2018, on the French network; ONISR, 28 January 2019.) In the UK, in 2017, the average free flow speed for each vehicle type is correlated with the applicable speed limit for that road type; for motorways and national speed limit single carriageway roads, the average free flow speed is below the designated speed limit for each vehicle type, except motorcycles on motorways. (Table: Average free flow speed in the UK in 2017.) Relationship with crash frequency. A 1998 US Federal Highway Administration report cited a number of studies regarding the effects of reductions in speed limits and the observed changes in speeding, fatalities, injuries and property damage which followed. Some states increase penalties for more serious offenses by designating speeds greatly exceeding the maximum limit as reckless driving. A 2018 OECD-ITF case study established a strong relationship between speed and crash frequency: when the mean speed decreases, the number of crashes and casualties decreases; to the contrary, when speed increases, the number of crashes and casualties increases. In no case was an increase in mean speed associated with a decrease in the number of crashes or casualties. (Table: Relationship between change of mean speed and change of fatalities; source: OECD-ITF.) South Dakota increased its maximum speed limit from in 1996. Annual surveys of speed on South Dakota Interstate roads show that from 2000 to 2011, the average speed rose from . A 1999 study found that the U.S. 
states that increased speed limits in the wake of the repeal of federally mandated speed limits had a 15% increase in fatalities. The "Synthesis of Safety Research Related to Speed and Speed Limits" report sponsored by the Federal Highway Administration, published in 1998, found that changing speed limits on low and moderate speed roads appeared to have no significant effect on traffic speed or the number of crashes, whilst on high-speed roads such as freeways, increased speed limits generally resulted in higher traffic speeds and more crashes. The report stated that limited evidence suggests that speed limits have a positive effect on a system wide basis. Research in 1998 showed that the reduction of some United Kingdom speed limits to had achieved only a drop in speeds and no discernible reduction in accidents; speed limit zones, which use self-enforcing traffic calming, achieved average speed reductions of ; child pedestrian accidents were reduced by 70% and child cyclist accidents by 48%. Zones where speeds are set at 30 km/h (or 20 mph) are gaining popularity as they are found to be effective at reducing crashes and increasing community cohesion. Studies undertaken in conjunction with Australia's move from speed limits to in built-up areas found that the measure was effective in reducing speed and the frequency and severity of crashes. A study of the impact of the replacement of with speed limits in New South Wales, Australia, showed only a drop in urban areas and a drop in rural areas. The report noted that widespread community compliance would require a combination of strategies including traffic calming treatments. Information campaigns are also used by authorities to bolster support for speed limits, for example the "Speeding. No one thinks big of you." campaign in Australia in 2007. Justification. Speed limits are set primarily to balance road traffic safety concerns with the effect on travel time and mobility. Speed limits are also sometimes used to reduce consumption of fuel or in response to environmental concerns (e.g. to reduce vehicle emissions or fuel use). Some speed limits have also been initiated to reduce gas-oil imports during the 1973 oil crisis. Road traffic safety. According to a 2004 report from the World Health Organization, 22% of all injury mortality worldwide was from road traffic injuries in 2002, and without "increased efforts and new initiatives" casualty rates would increase by 65% between 2000 and 2020. The report identified that the speed of vehicles was "at the core of the problem", and recommended that speed limits be set appropriately for the road function and design, along with the implementation of physical measures related to the road and the vehicle, and increased effective enforcement by the police. Road incidents are said to be the leading cause of deaths among children 10–19 years of age (260,000 children die a year; 10 million are injured). Maximum speed limits place an upper limit on speed choice and, if obeyed, can reduce the differences in vehicle speeds by drivers using the same road at the same time. Traffic engineers observe that the likelihood of a crash happening is significantly higher if vehicles are traveling at speeds faster or slower than the mean speed of traffic; when severity is taken into account, the risk is lowest for those traveling at or below the median speed and "increases exponentially for motorists travelling much faster". 
It is desirable to attempt to reduce the speed of road vehicles in some circumstances because the kinetic energy involved in a motor vehicle collision is proportional to the square of the speed at impact. The probability of a fatality is, for typical collision speeds, empirically correlated to the fourth power of the speed "difference" (depending on the type of collision, not necessarily the same as "travel" speed) at impact, rising much faster than kinetic energy. formula_0 formula_1 Typically motorways have higher speed limits than conventional roads because motorways have features which decrease the likelihood of collisions and the severity of impacts. For example, motorways separate opposing traffic and crossing traffic, employ traffic barriers, and prohibit the most vulnerable users such as pedestrians and bicyclists. Germany's crash experience illustrates the relative effectiveness of these strategies on crash severity: on autobahns 22 people died per 1,000 injury crashes, a lower rate than the 29 deaths per 1,000 injury accidents on conventional rural roads. However, the rural risk is five times higher than on urban roads; speeds are higher on rural roads and autobahns than urban roads, increasing the severity potential of a crash. The net effect of speed, crash probability, and impact mitigation strategies may be measured by the rate of deaths per billion-travel-kilometres: the autobahn fatality rate is 2 deaths per billion-travel-kilometres, lower than either the 8.7 rate on rural roads or the 5.3 rate in urban areas. The overall national fatality rate was 5.6, slightly higher than the urban rate and more than twice that of autobahns.&lt;ref name="http://www.bast.de 2011"&gt;&lt;/ref&gt; The 2009 technical report "An Analysis of Speeding-Related Crashes: Definitions and the Effects of Road Environments" by the National Highway Traffic Safety Administration showed that, among fatal speeding-related crashes, about 55% listed "exceeding posted speed limits" among their crash factors, and 45% had "driving too fast for conditions" among their crash factors. However, the authors of the report did not attempt to determine whether the factors were a crash cause, contributor, or an unrelated factor. Furthermore, separate research finds that only 1.6% of crashes are "caused" by drivers that exceed the posted speed limit. Finally, exceeding the posted limit may not be a remarkable factor in the crash analysis as there are roadways where virtually all motorists are in technical violation of the law. The speed limit will also take note of the speed at which the road was designed to be driven (the design speed), which is defined in the US as "a selected speed used to determine the various geometric design features of the roadway". However, traffic engineers recognize that "operating speeds and even posted speed limits can be higher than design speeds without necessarily compromising safety" since design speed is "based on conservative assumptions about driver, vehicle and roadway characteristics". Vision Zero, which envisions reducing road fatalities and serious injuries to zero by 2020, suggests the following "possible long term maximum travel speeds related to the infrastructure, given best practice in vehicle design and 100% restraint use": "Roads with no possibility of a side impact or frontal impact" are sometimes designated as Type 1 (motorways/freeways/Autobahns), Type 2 ("2+2 roads"), or Type 3 ("2+1 roads"). 
These roadways have crash barriers separating opposing traffic, limited access, grade separation and prohibitions on slower and more vulnerable road users. Undivided rural roads can be quite dangerous even with speed limits that appear low by comparison. For example, in 2011, Germany's -limited rural roads had a fatality rate of 8.7 deaths per billion travel-km, over four times higher than the autobahn rate of 2 deaths. Autobahns accounted for 31% of German road travel in 2011, but just 11% (453 of 4,009) of traffic deaths. In 2018, an IRTAD WG published a document which recommended maximum speed limits, taking into account forces the human body can tolerate and survive. Fuel efficiency. Fuel efficiency sometimes affects speed limit selection. The United States instituted a National Maximum Speed Law of , as part of the Emergency Highway Energy Conservation Act, in response to the 1973 oil crisis to reduce fuel consumption. According to a report published in 1986 by The Heritage Foundation, a conservative advocacy group, the law was widely disregarded by motorists and hardly reduced consumption at all. In 2009, the American Trucking Associations called for a speed limit, and also national fuel economy standards, claiming that the lower speed limit was not effective at saving fuel. Environmental considerations. Speed limits can also be used to improve local air quality issues or other factors affecting environmental quality (e.g. the "environmental speed limits" in an area of Texas). The European Union is also increasingly using speed limits in response to environmental concerns. European studies have stated that, whereas the effects of specific speed reduction schemes on particulate emissions from trucks are ambiguous, lower maximum speeds for trucks consistently result in lower emissions of CO2 and better fuel efficiency. Advocacy. Speed limits, and especially some of the methods used to attempt to enforce them, have always been controversial. A variety of organisations and individuals either oppose or support the use of speed limits and their enforcement. Opposition. Speed limits and their enforcement have been opposed by various groups and for various reasons since their inception. In the UK, the Motorists' Mutual Association (est. 1905) was formed initially to warn members about speed traps; the organisation would go on to become the AA. More recently, advocacy groups seek to have certain speed limits as well as other measures removed. For example, automated camera enforcement has been criticised by motoring advocacy groups including the Association of British Drivers and the German Auto Club (ADAC). Arguments used by those advocating a relaxation of speed limits or their removal include: Support. Various other advocacy groups press for stricter limits and better enforcement. The Pedestrians Association was formed in the United Kingdom in 1929 to protect the interests of the pedestrian. Their president published a critique of motoring legislation and the influence of motoring groups in 1947 titled "Murder most foul", which laid out an emotional but detailed view of the situation as they saw it, calling for tighter speed limits. Historically, the Pedestrians' Association and the Automobile Association were described as "bitterly opposed" in the early years of United Kingdom motoring legislation. More recently, organisations such as RoadPeace, Twenty is Plenty, and Vision Zero have campaigned for lower speed limits in residential areas. 
In the United States, advocacy groups favoring stricter limits and better enforcement include the Advocates for Highway and Auto Safety, the Insurance Institute for Highway Safety, and the National Safety Council. Signage. Most countries worldwide measure speed limits in kilometres per hour, while the United Kingdom, United States, and several smaller countries measure speed limits in miles per hour instead. Signs in Samoa display both units simultaneously. There are two basic designs for speed limit signs: the Vienna Convention on Road Signs and Signals specifies a white or yellow circle with a red border, while the "Manual on Uniform Traffic Control Devices" (MUTCD) published by the United States Federal Highway Administration specifies a white rectangle with the legend "SPEED LIMIT". Vienna-style speed limit signs originated in Europe and are used in most of the world, including many countries that otherwise follow the MUTCD. Variations on the MUTCD design are used in Canada, Guam, Liberia, Puerto Rico, the mainland United States, and the U.S. Virgin Islands. Australia also used a variation on the MUTCD design until the country metricated in 1974. The Central American Integration System (SICA) equivalent to the US MUTCD specifies a variation on the MUTCD design as an option, though it is not widely used. In the United States, Canada, Australia and Peru, speed limit signs are rectangular. In most of the United States, speed limit signs bear the words "SPEED LIMIT" above the numeric speed limit, as specified in the MUTCD. However, in Alaska and California, speed limits are often labeled "MAXIMUM SPEED" instead. In Oregon, most speed limit signs are simply labeled "SPEED". Canada has similar signs bearing the legend "MAXIMUM", which has a similar meaning in English and French, the country's two main languages. Peru uses a similar, reversed variation of the MUTCD order in which the words "VELOCIDAD MAXIMA" (maximum speed) are placed below the numeric limit. Australia uses the same rectangular design, but inscribes the numeric speed limit within a red circle as in Vienna Convention signs. The MUTCD formerly specified an optional metric design that included the words "SPEED LIMIT" and the numeric limit inscribed within a black circle, though it was rarely used in the United States; this design is still occasionally found in Liberia. Speed limit signs in Mexico and Panama are square, unlike those in the United States. In the European Union, large signposts showing the national (maximum) speed limits of the respective country are usually erected immediately after border crossings, with a repeater sign some after the first. Some places provide an additional "speed zone ahead" sign ahead of the restriction, and speed limit reminder signs may appear at regular intervals, which may be painted on the road surface. In Ontario, the type, location, and frequency of speed limit signs are covered by regulation 615 of the Ontario Highway Traffic Act. Maximum speed limit. Some speed limits are applicable to a zone. Minimum speed limit. Minimum speed limits are often expressed with signs using blue circles, based on the obligatory sign specifications of the Vienna Convention on Road Signs and Signals. In the United States, minimum speed limit signs are identical to their respective maximum speed limit signs, with "SPEED LIMIT" replaced with "MINIMUM SPEED". Some South American countries (e.g. Argentina) use a red border. Japan and South Korea use their normal speed limit sign, with a line below the limit. Special speed limits. 
In some countries, speed limits may apply to certain classes of vehicles or special conditions such as night-time. Usually, these speed limits will be reduced from the normal limit for safety reasons. Speed limit derestriction. In some countries, derestriction signs are used to mark where a speed zone ends. The speed limit beyond the sign is the prevailing limit for the general area; for example, the sign might be used to show the end of an urban area. In the United Kingdom, the sign means that the national speed limit applies ( on open roads and on dual carriageways and motorways). In New Zealand it means you are on an open road, but the maximum legal speed of still applies. On roads without general speed limits, such as the German Autobahn, a portion of the Stuart Highway, and rural areas on the Isle of Man, it means the end of all quantitative speed limits. Advisory speed limit. Advisory speed limits may provide a safe suggested speed in an area, or warn of the maximum safe speed for dangerous curves. In Germany, an advisory speed limit may be combined with a traffic signal to recommend the speed at which drivers should drive to reach the next light at its green phase, thereby avoiding a stop. Technology. Some European cars include in-vehicle systems that support drivers’ compliance with the speed limit, known as intelligent speed adaptation (ISA). ISA supports drivers in complying with the speed limit in various parts of the network, while speed limiters for heavy goods vehicles and coaches only govern the maximum speed. These systems have positive effects on speed behaviour and improve safety. Speed-limiting devices such as ISA are considered useful by 25% of European car drivers. In 2019, Google Maps integrated alerts for speed traps within its application, along with audible alerts for nearby speed cameras. The technology was first developed by Waze; police officers have requested that it be removed from the application. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Documents referenced from 'Notes' section. &lt;templatestyles src="Refbegin/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_\\mathrm{kin} = \\frac{1}{2} m v^2" }, { "math_id": 1, "text": "s_\\mathrm{GefahrBrems} \\approx \\frac{1}{2} \\cdot \\left( \\frac{v^2}{10^2} \\right) " } ]
https://en.wikipedia.org/wiki?curid=78809
7883127
Quasi-Newton method
Optimization algorithm Quasi-Newton methods are methods used to find either zeroes or local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or is too expensive to compute at every iteration. The "full" Newton's method requires the Jacobian in order to search for zeros, or the Hessian for finding extrema. Some iterative methods that reduce to Newton's method, such as SLSQP, may be considered quasi-Newtonian. Search for zeros: root finding. Newton's method to find zeroes of a function formula_0 of multiple variables is given by formula_1, where formula_2 is the left inverse of the Jacobian matrix formula_3 of formula_0 evaluated for formula_4. Strictly speaking, any method that replaces the exact Jacobian formula_3 with an approximation is a quasi-Newton method. For instance, the chord method (where formula_3 is replaced by formula_5 for all iterations) is a simple example. The methods given below for optimization refer to an important subclass of quasi-Newton methods, secant methods. Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require that the matrix that is used is symmetrical. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes. Broyden's "good" and "bad" methods are two methods commonly used to find extrema that can also be applied to find zeroes. Other methods that can be used are the column-updating method, the inverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method. More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found. Search for extrema: optimization. The search for a minimum or maximum of a scalar-valued function is nothing else than the search for the zeroes of the gradient of that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, if formula_0 is the gradient of formula_6, then searching for the zeroes of the vector-valued function formula_0 corresponds to the search for the extrema of the scalar-valued function formula_6; the Jacobian of formula_0 now becomes the Hessian of formula_6. The main difference is that the Hessian matrix is a symmetric matrix, unlike the Jacobian when searching for zeroes. Most quasi-Newton methods used in optimization exploit this property. In optimization, quasi-Newton methods (a special case of variable-metric methods) are algorithms for finding local maxima and minima of functions. Quasi-Newton methods are based on Newton's method to find the stationary point of a function, where the gradient is 0. Newton's method assumes that the function can be locally approximated as a quadratic in the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and the Hessian matrix of second derivatives of the function to be minimized. In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. 
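To make the idea concrete before the general construction that follows, here is a minimal sketch (an illustration added for this discussion, not taken from any reference implementation) of the one-dimensional secant method applied to the derivative of a function: successive derivative evaluations stand in for the second derivative, exactly the role that gradient differences play for the Hessian in higher dimensions. The test function is an arbitrary choice.

```python
# Hedged sketch: the 1-D secant method applied to f'(x), the scalar prototype
# that quasi-Newton methods generalize to many dimensions.
def fprime(x):
    # derivative of the illustrative function f(x) = x**4 - 3*x**2 + 2
    return 4 * x**3 - 6 * x

def secant_on_derivative(g, x0, x1, tol=1e-12, max_iter=100):
    """Find a zero of g (here g = f') without ever evaluating g' (i.e. without f'')."""
    for _ in range(max_iter):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:          # degenerate secant; stop
            break
        # secant step: approximate g'(x1) by the finite difference (g1 - g0) / (x1 - x0)
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

stationary_point = secant_on_derivative(fprime, 1.0, 1.5)
print(stationary_point)   # about 1.2247 (= sqrt(1.5)), a local minimum of f
```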
Quasi-Newton methods are a generalization of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian. The first quasi-Newton algorithm was proposed by William C. Davidon, a physicist working at Argonne National Laboratory. He developed the first quasi-Newton algorithm in 1959: the DFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for "symmetric rank-one"), the BHHH method, the widespread BFGS method (suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extension L-BFGS. Broyden's class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee the update matrix to maintain positive-definiteness and can be used for indefinite problems. Broyden's method does not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian). One of the chief advantages of quasi-Newton methods over Newton's method is that the Hessian matrix (or, in the case of quasi-Newton methods, its approximation) formula_7 does not need to be inverted. Newton's method, and its derivatives such as interior point methods, require the Hessian to be inverted, which is typically implemented by solving a system of linear equations and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of formula_8 directly. As in Newton's method, one uses a second-order approximation to find the minimum of a function formula_9. The Taylor series of formula_9 around an iterate is formula_10 where formula_11 is the gradient, and formula_7 is an approximation to the Hessian matrix. The gradient of this approximation (with respect to formula_12) is formula_13 and setting this gradient to zero (which is the goal of optimization) provides the Newton step: formula_14 The Hessian approximation formula_7 is chosen to satisfy formula_15 which is called the "secant equation" (the Taylor series of the gradient itself). In more than one dimension formula_7 is underdetermined. In one dimension, solving for formula_7 and applying the Newton step with the updated value is equivalent to the secant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). Most methods (but with exceptions, such as Broyden's method) seek a symmetric solution (formula_16); furthermore, the variants listed below can be motivated by finding an update formula_17 that is as close as possible to formula_18 in some norm; that is, formula_19, where formula_20 is some positive-definite matrix that defines the norm. An approximate initial value formula_21 is often sufficient to achieve rapid convergence, although there is no general strategy to choose formula_22. Note that formula_23 should be positive-definite. The unknown formula_24 is updated by applying the Newton step calculated using the current approximate Hessian matrix formula_25: formula_26, with formula_27 chosen to satisfy the Wolfe conditions; formula_28; the gradient is then computed at the new point, formula_29, and formula_30 is used to update the approximate Hessian formula_17, or directly its inverse formula_31 using the Sherman–Morrison formula. 
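The iteration just described can be sketched in a few lines. The following is an illustrative implementation (not taken from any particular library) of BFGS working directly on the inverse Hessian approximation, the possibility mentioned in the preceding sentence; the objective function, the starting point, and the crude backtracking line search are assumptions made for the example, whereas production code would use a line search satisfying the Wolfe conditions.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, max_iter=200, tol=1e-8):
    """Hedged sketch of BFGS maintaining H, an approximation of the inverse Hessian."""
    n = x0.size
    H = np.eye(n)                  # initial inverse-Hessian approximation (beta * I with beta = 1)
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # quasi-Newton direction
        alpha = 1.0                # backtracking line search (sufficient decrease only, for brevity)
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p              # step s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g              # gradient change y_k
        sy = s @ y
        if sy > 1e-12:             # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            # rank-two BFGS update applied directly to the inverse Hessian
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Illustrative use on the Rosenbrock function (an assumed test problem)
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
rosen_grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                                 200 * (z[1] - z[0]**2)])
print(bfgs_minimize(rosen, rosen_grad, np.array([-1.2, 1.0])))   # approaches [1, 1]
```

In practice one would normally call an existing routine, for example scipy.optimize.minimize with method="BFGS" or "L-BFGS-B", rather than hand-rolling the update.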
The most popular update formulas are: Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method. Relationship to matrix inversion. When formula_34 is a convex quadratic function with positive-definite Hessian formula_7, one would expect the matrices formula_35 generated by a quasi-Newton method to converge to the inverse Hessian formula_36. This is indeed the case for the class of quasi-Newton methods based on least-change updates. Notable implementations. Implementations of quasi-Newton methods are available in many programming languages. Notable open source implementations include: Notable proprietary implementations include: See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "g" }, { "math_id": 1, "text": "x_{n+1} = x_n -[J_g(x_n)]^{-1} g(x_n)" }, { "math_id": 2, "text": "[J_g(x_n)]^{-1}" }, { "math_id": 3, "text": "J_g(x_n)" }, { "math_id": 4, "text": "x_n" }, { "math_id": 5, "text": "J_g(x_0)" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "B^{-1}" }, { "math_id": 9, "text": "f(x)" }, { "math_id": 10, "text": "f(x_k + \\Delta x) \\approx f(x_k) + \\nabla f(x_k)^{\\mathrm T} \\,\\Delta x + \\frac{1}{2} \\Delta x^{\\mathrm T} B \\,\\Delta x," }, { "math_id": 11, "text": "\\nabla f" }, { "math_id": 12, "text": "\\Delta x" }, { "math_id": 13, "text": "\\nabla f(x_k + \\Delta x) \\approx \\nabla f(x_k) + B \\,\\Delta x," }, { "math_id": 14, "text": "\\Delta x = -B^{-1} \\nabla f(x_k)." }, { "math_id": 15, "text": "\\nabla f(x_k + \\Delta x) = \\nabla f(x_k) + B \\,\\Delta x," }, { "math_id": 16, "text": "B^T = B" }, { "math_id": 17, "text": "B_{k+1}" }, { "math_id": 18, "text": " B_{k}" }, { "math_id": 19, "text": "B_{k+1} = \\operatorname{argmin}_B \\|B - B_k\\|_V" }, { "math_id": 20, "text": "V " }, { "math_id": 21, "text": "B_0 = \\beta I " }, { "math_id": 22, "text": " \\beta " }, { "math_id": 23, "text": "B_0" }, { "math_id": 24, "text": "x_k" }, { "math_id": 25, "text": "B_{k}" }, { "math_id": 26, "text": "\\Delta x_k = -\\alpha_k B_k^{-1} \\nabla f(x_k)" }, { "math_id": 27, "text": "\\alpha" }, { "math_id": 28, "text": "x_{k+1} = x_k + \\Delta x_k" }, { "math_id": 29, "text": "\\nabla f(x_{k+1})" }, { "math_id": 30, "text": "y_k = \\nabla f(x_{k+1}) - \\nabla f(x_k)" }, { "math_id": 31, "text": "H_{k+1} = B_{k+1}^{-1}" }, { "math_id": 32, "text": "B_k" }, { "math_id": 33, "text": "\\alpha_k" }, { "math_id": 34, "text": "f " }, { "math_id": 35, "text": "H_k" }, { "math_id": 36, "text": "H = B^{-1}" } ]
https://en.wikipedia.org/wiki?curid=7883127
788338
Phot
Measure of illuminance A phot (ph) is a photometric unit of illuminance, or luminous flux through an area. It is not an SI unit but rather is associated with the older centimetre–gram–second system of units. The name was coined by André Blondel in 1921. Metric equivalence: formula_0 Metric dimensions: Illuminance = luminous intensity × solid angle / length²
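As a small worked illustration of the metric equivalence above (the numeric values are made up for the example):

```python
# Hedged sketch of the conversion implied by 1 ph = 1 lm/cm^2 = 10,000 lx.
LUX_PER_PHOT = 10_000

def phot_to_lux(phot):
    return phot * LUX_PER_PHOT

def lux_to_phot(lux):
    return lux / LUX_PER_PHOT

print(phot_to_lux(1.5))   # 15000.0 lux
print(lux_to_phot(500))   # 0.05 phot
```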
[ { "math_id": 0, "text": "1\\ \\mathrm{phot} = 1\\ \\frac{\\mathrm{lumen}}{\\mathrm{centimetre}^2} = 10,000\\ \\frac{\\mathrm{lumens}}{\\mathrm{metre}^2} = 10,000\\ \\mathrm{lux} = 10\\ \\mathrm{kilolux}" } ]
https://en.wikipedia.org/wiki?curid=788338
788497
Concentration of measure
Statistical parameter In mathematics, concentration of measure (about a median) is a principle that is applied in measure theory, probability and combinatorics, and has consequences for other fields such as Banach space theory. Informally, it states that "A random variable that depends in a Lipschitz way on many independent variables (but not too much on any of them) is essentially constant". The concentration of measure phenomenon was put forth in the early 1970s by Vitali Milman in his works on the local theory of Banach spaces, extending an idea going back to the work of Paul Lévy. It was further developed in the works of Milman and Gromov, Maurey, Pisier, Schechtman, Talagrand, Ledoux, and others. The general setting. Let formula_0 be a metric space with a measure formula_1 on the Borel sets with formula_2. Let formula_3 where formula_4 is the formula_5-"extension" (also called formula_5-fattening in the context of the Hausdorff distance) of a set formula_6. The function formula_7 is called the "concentration rate" of the space formula_8. The following equivalent definition has many applications: formula_9 where the supremum is over all 1-Lipschitz functions formula_10, and the median (or Lévy mean) formula_11 is defined by the inequalities formula_12 Informally, the space formula_8 exhibits a concentration phenomenon if formula_13 decays very fast as formula_5 grows. More formally, a family of metric measure spaces formula_14 is called a "Lévy family" if the corresponding concentration rates formula_15 satisfy formula_16 and a "normal Lévy family" if formula_17 for some constants formula_18. For examples see below. Concentration on the sphere. The first example goes back to Paul Lévy. According to the spherical isoperimetric inequality, among all subsets formula_6 of the sphere formula_19 with prescribed spherical measure formula_20, the spherical cap formula_21, for suitable formula_22, has the smallest formula_5-extension formula_23 (for any formula_24). Applying this to sets of measure formula_25 (where formula_26), one can deduce the following concentration inequality: formula_27, where formula_28 are universal constants. Therefore formula_29 meet the definition above of a normal Lévy family. Vitali Milman applied this fact to several problems in the local theory of Banach spaces, in particular, to give a new proof of Dvoretzky's theorem. Concentration of measure in physics. All classical statistical physics is based on the concentration of measure phenomena: The fundamental idea (‘theorem’) about equivalence of ensembles in the thermodynamic limit (Gibbs, 1902 and Einstein, 1902-1904) is exactly the thin shell concentration theorem. For each mechanical system, consider the phase space equipped with the invariant Liouville measure (the phase volume) and the conserved energy "E". The microcanonical ensemble is just an invariant distribution over the surface of constant energy E obtained by Gibbs as the limit of distributions in phase space with constant density in thin layers between the surfaces of states with energy "E" and with energy "E+ΔE". The canonical ensemble is given by the probability density in the phase space (with respect to the phase volume) formula_30 where quantities F=const and T=const are defined by the conditions of probability normalisation and the given expectation of energy "E". 
When the number of particles is large, then the difference between average values of the macroscopic variables for the canonical and microcanonical ensembles tends to zero, and their fluctuations are explicitly evaluated. These results are proven rigorously under some regularity conditions on the energy function "E" by Khinchin (1943). The simplest particular case when "E" is a sum of squares was well-known in detail before Khinchin and Lévy and even before Gibbs and Einstein. This is the Maxwell–Boltzmann distribution of the particle energy in ideal gas. The microcanonical ensemble is very natural from the naïve physical point of view: this is just a natural equidistribution on the isoenergetic hypersurface. The canonical ensemble is very useful because of an important property: if a system consists of two non-interacting subsystems, i.e. if the energy "E" is the sum, formula_31, where formula_32 are the states of the subsystems, then the equilibrium states of subsystems are independent, the equilibrium distribution of the system is the product of equilibrium distributions of the subsystems with the same T. The equivalence of these ensembles is the cornerstone of the mechanical foundations of thermodynamics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
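A small numerical experiment makes the concentration phenomenon described in this article visible; it is an illustrative sketch added here, not part of the original text, and the dimension, sample size and choice of 1-Lipschitz function (the first coordinate) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation_probability(n, samples=20_000, eps=0.1):
    """Estimate P(|f - median(f)| >= eps) for f(x) = x_1 with x uniform on the sphere S^(n-1).
    Since f is 1-Lipschitz, the normal Levy family bound C * exp(-c * n * eps**2) applies."""
    x = rng.normal(size=(samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform random points on the sphere
    f = x[:, 0]
    med = np.median(f)                              # close to 0 by symmetry
    return np.mean(np.abs(f - med) >= eps)

for n in (10, 100, 1000):
    print(n, deviation_probability(n))
# The empirical deviation probability drops rapidly as the dimension grows,
# as the concentration inequality for the sphere predicts.
```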
[ { "math_id": 0, "text": "(X, d)" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "\\mu(X) = 1" }, { "math_id": 3, "text": "\\alpha(\\epsilon) = \\sup \\left\\{\\mu( X \\setminus A_\\epsilon) \\, | A \\mbox{ is a Borel set and} \\, \\mu(A) \\geq 1/2 \\right\\}," }, { "math_id": 4, "text": "A_\\epsilon = \\left\\{ x \\, | \\, d(x, A) < \\epsilon \\right\\} " }, { "math_id": 5, "text": "\\epsilon" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "\\alpha(\\cdot)" }, { "math_id": 8, "text": "X" }, { "math_id": 9, "text": "\\alpha(\\epsilon) = \\sup \\left\\{ \\mu( \\{ F \\geq \\mathop{M} + \\epsilon \\}) \\right\\}," }, { "math_id": 10, "text": "F: X \\to \\mathbb{R}" }, { "math_id": 11, "text": " M = \\mathop{\\mathrm{Med}} F " }, { "math_id": 12, "text": "\\mu \\{ F \\geq M \\} \\geq 1/2, \\, \\mu \\{ F \\leq M \\} \\geq 1/2." }, { "math_id": 13, "text": "\\alpha(\\epsilon)" }, { "math_id": 14, "text": "(X_n, d_n, \\mu_n)" }, { "math_id": 15, "text": "\\alpha_n" }, { "math_id": 16, "text": "\\forall \\epsilon > 0 \\,\\, \\alpha_n(\\epsilon) \\to 0 {\\rm \\;as\\; } n\\to \\infty," }, { "math_id": 17, "text": "\\forall \\epsilon > 0 \\,\\, \\alpha_n(\\epsilon) \\leq C \\exp(-c n \\epsilon^2)" }, { "math_id": 18, "text": "c,C>0" }, { "math_id": 19, "text": "S^n" }, { "math_id": 20, "text": "\\sigma_n(A)" }, { "math_id": 21, "text": " \\left\\{ x \\in S^n | \\mathrm{dist}(x, x_0) \\leq R \\right\\}, " }, { "math_id": 22, "text": "R" }, { "math_id": 23, "text": "A_\\epsilon" }, { "math_id": 24, "text": "\\epsilon > 0" }, { "math_id": 25, "text": "\\sigma_n(A) = 1/2" }, { "math_id": 26, "text": "\\sigma_n(S^n) = 1" }, { "math_id": 27, "text": "\\sigma_n(A_\\epsilon) \\geq 1 - C \\exp(- c n \\epsilon^2) " }, { "math_id": 28, "text": "C,c" }, { "math_id": 29, "text": "(S^n)_n" }, { "math_id": 30, "text": "\\rho = e^{\\frac{F - E}{k T}}," }, { "math_id": 31, "text": "E=E_1(X_1)+E_2(X_2)" }, { "math_id": 32, "text": "X_1, X_2" } ]
https://en.wikipedia.org/wiki?curid=788497
7885048
Couple (mechanics)
Pair of equal and opposite forces acting along different lines of action of force on a rigid body In mechanics, a couple is a system of forces with a resultant (a.k.a. net or sum) moment of force but no resultant force. A more descriptive term is force couple or pure moment. Its effect is to impart angular momentum but no linear momentum. In rigid body dynamics, force couples are "free vectors", meaning their effects on a body are independent of the point of application. The resultant moment of a couple is a special case of moment. A couple has the property that it is independent of reference point. Simple couple. A couple is a pair of forces, equal in magnitude, oppositely directed, and displaced by perpendicular distance or moment. The simplest kind of couple consists of two equal and opposite forces whose lines of action do not coincide. This is called a "simple couple". The forces have a turning effect or moment called a torque about an axis which is normal (perpendicular) to the plane of the forces. The SI unit for the torque of the couple is newton metre. If the two forces are F and −"F", then the magnitude of the torque is given by the following formula: formula_0 where formula_1 is the moment of the couple, "F" is the magnitude of one of the forces, and "d" is the perpendicular distance between the lines of action of the two forces. The magnitude of the torque is equal to "F" • "d", with the direction of the torque given by the unit vector formula_2, which is perpendicular to the plane containing the two forces and positive being a counter-clockwise couple. When d is taken as a vector between the points of action of the forces, then the torque is the cross product of d and F, i.e. formula_3 Independence of reference point. The moment of a force is only defined with respect to a certain point P (it is said to be the "moment about P") and, in general, when P is changed, the moment changes. However, the moment (torque) of a "couple" is "independent" of the reference point P: Any point will give the same moment. In other words, a couple, unlike any more general moments, is a "free vector". (This fact is called "Varignon's Second Moment Theorem".) The proof of this claim is as follows: Suppose there is a set of force vectors F1, F2, etc. that form a couple, with position vectors (about some origin P), r1, r2, etc., respectively. The moment about P is formula_4 Now we pick a new reference point P' that differs from P by the vector r. The new moment is formula_5 Now the distributive property of the cross product implies formula_6 However, the definition of a force couple means that formula_7 Therefore, formula_8 This proves that the moment is independent of reference point, which is proof that a couple is a free vector. Forces and couples. A force "F" applied to a rigid body at a distance "d" from the center of mass has the same effect as the same force applied directly to the center of mass and a couple "Cℓ = Fd". The couple produces an angular acceleration of the rigid body at right angles to the plane of the couple. The force at the center of mass accelerates the body in the direction of the force without change in orientation. The general theorems are: A single force acting at any point "O′" of a rigid body can be replaced by an equal and parallel force "F" acting at any given point "O" and a couple with forces parallel to "F" whose moment is "M = Fd", "d" being the separation of "O" and "O′". Conversely, a couple and a force in the plane of the couple can be replaced by a single force, appropriately located. 
Any couple can be replaced by another in the same plane of the same direction and moment, having any desired force or any desired arm. Applications. Couples are very important in mechanical engineering and the physical sciences. A few examples are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
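A short numerical check of the two facts established above, that a couple has zero resultant force and that its moment does not depend on the reference point, can be done with a few lines of code; the force and position values are arbitrary numbers chosen for the illustration.

```python
import numpy as np

# Hedged sketch: equal and opposite forces F and -F applied at points r1 and r2 form a couple.
F  = np.array([0.0, 3.0, 0.0])       # newtons
r1 = np.array([1.0, 0.0, 0.0])       # metres, point of application of  F
r2 = np.array([-1.0, 0.0, 0.0])      # metres, point of application of -F

def moment_about(p):
    """Total moment of the force pair about an arbitrary reference point p."""
    return np.cross(r1 - p, F) + np.cross(r2 - p, -F)

print(F + (-F))                                   # zero resultant force
print(moment_about(np.zeros(3)))                  # [0. 0. 6.]  N*m  (F*d with d = 2 m)
print(moment_about(np.array([5.0, -2.0, 7.0])))   # the same moment about any other point
```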
[ { "math_id": 0, "text": "\\tau = F d " }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\hat{e}" }, { "math_id": 3, "text": " \\mathbf{\\tau} = | \\mathbf{d} \\times \\mathbf{F} | ." }, { "math_id": 4, "text": "M = \\mathbf{r}_1\\times \\mathbf{F}_1 + \\mathbf{r}_2\\times \\mathbf{F}_2 + \\cdots" }, { "math_id": 5, "text": "M' = (\\mathbf{r}_1+\\mathbf{r})\\times \\mathbf{F}_1 + (\\mathbf{r}_2+\\mathbf{r})\\times \\mathbf{F}_2 + \\cdots" }, { "math_id": 6, "text": "M' = \\left(\\mathbf{r}_1\\times \\mathbf{F}_1 + \\mathbf{r}_2\\times \\mathbf{F}_2 + \\cdots\\right) + \\mathbf{r}\\times \\left(\\mathbf{F}_1 + \\mathbf{F}_2 + \\cdots \\right)." }, { "math_id": 7, "text": "\\mathbf{F}_1 + \\mathbf{F}_2 + \\cdots = 0." }, { "math_id": 8, "text": "M' = \\mathbf{r}_1\\times \\mathbf{F}_1 + \\mathbf{r}_2\\times \\mathbf{F}_2 + \\cdots = M" } ]
https://en.wikipedia.org/wiki?curid=7885048
788540
Bond valuation
Fair price of a bond Bond valuation is the process by which an investor arrives at an estimate of the theoretical fair value, or intrinsic worth, of a bond. As with any security or capital investment, the theoretical fair value of a bond is the present value of the stream of cash flows it is expected to generate. Hence, the value of a bond is obtained by discounting the bond's expected cash flows to the present using an appropriate discount rate. In practice, this discount rate is often determined by reference to similar instruments, provided that such instruments exist. Various related yield-measures are then calculated for the given price. Where the market price of bond is less than its par value, the bond is selling at a discount. Conversely, if the market price of bond is greater than its par value, the bond is selling at a premium. For this and other relationships between price and yield, see below. If the bond includes embedded options, the valuation is more difficult and combines option pricing with discounting. Depending on the type of option, the option price as calculated is either added to or subtracted from the price of the "straight" portion. See further under Bond option. This total is then the value of the bond. Bond valuation. The fair price of a "straight bond" (a bond with no embedded options; see ) is usually determined by discounting its expected cash flows at the appropriate discount rate. Although this present value relationship reflects the theoretical approach to determining the value of a bond, in practice its price is (usually) determined with reference to other, more liquid instruments. The two main approaches here, Relative pricing and Arbitrage-free pricing, are discussed next. Finally, where it is important to recognise that future interest rates are uncertain and that the discount rate is not adequately represented by a single fixed number—for example when an option is written on the bond in question—stochastic calculus may be employed. Present value approach. The basic method for calculating a bond's theoretical fair value, or intrinsic worth, uses the present value (PV) formula shown below, using a single market interest rate to discount cash flows in all periods. A more complex approach would use different interest rates for cash flows in different periods. The formula shown below assumes that a coupon payment has just been made (see below for adjustments on other dates). formula_0 where: formula_1 par value formula_2 contractual interest rate formula_3 coupon payment (periodic interest payment) formula_4 number of payments formula_5 market interest rate, or required yield, or observed / appropriate yield to maturity (see below) formula_6 value at maturity, usually equals par value formula_7 theoretical fair value Relative price approach. Under this approach—an extension, or application, of the above—the bond will be priced relative to a benchmark, usually a government security; see Relative valuation. Here, the yield to maturity on the bond is determined based on the bond's Credit rating relative to a government security with similar maturity or duration; see Credit spread (bond). The better the quality of the bond, the smaller the spread between its required return and the YTM of the benchmark. This required return is then used to discount the bond cash flows, replacing formula_8 in the formula above, to obtain the price. Arbitrage-free pricing approach. 
As distinct from the two related approaches above, a bond may be thought of as a "package of cash flows"—coupon or face—with each cash flow viewed as a zero-coupon instrument maturing on the date it will be received. Thus, rather than using a single discount rate, one should use multiple discount rates, discounting each cash flow at its own rate. Here, each cash flow is separately discounted at the same rate as a zero-coupon bond corresponding to the coupon date, and of equivalent credit worthiness (if possible, from the same issuer as the bond being valued, or if not, with the appropriate credit spread). Under this approach, the bond price should reflect its "arbitrage-free" price, as any deviation from this price will be exploited and the bond will then quickly reprice to its correct level. Here, we apply the rational pricing logic relating to "Assets with identical cash flows". In detail: (1) the bond's coupon dates and coupon amounts are known with certainty. Therefore, (2) some multiple (or fraction) of zero-coupon bonds, each corresponding to the bond's coupon dates, can be specified so as to produce identical cash flows to the bond. Thus (3) the bond price today must be equal to the sum of each of its cash flows discounted at the discount rate implied by the value of the corresponding ZCB. Stochastic calculus approach. When modelling a bond option, or other interest rate derivative (IRD), it is important to recognize that future interest rates are uncertain, and therefore, the discount rate(s) referred to above, under all three cases—i.e. whether for all coupons or for each individual coupon—is not adequately represented by a fixed (deterministic) number. In such cases, stochastic calculus is employed. The following is a partial differential equation (PDE) in stochastic calculus, which, by arbitrage arguments, is satisfied by any zero-coupon bond formula_9, over (instantaneous) time formula_10, for corresponding changes in formula_11, the short rate. formula_12 The solution to the PDE (i.e. the corresponding formula for bond value) — given in Cox et al. — is: formula_13 where formula_14 is the expectation with respect to risk-neutral probabilities, and formula_15 is a random variable representing the discount rate; see also Martingale pricing. To actually determine the bond price, the analyst must choose the specific short-rate model to be employed. The approaches commonly used are: Note that depending on the model selected, a closed-form (“Black like”) solution may not be available, and a lattice- or simulation-based implementation of the model in question is then employed. See also . Clean and dirty price. When the bond is not valued precisely on a coupon date, the calculated price, using the methods above, will incorporate accrued interest: i.e. any interest due to the owner of the bond over the "stub period" since the previous coupon date (see day count convention). The price of a bond which includes this accrued interest is known as the "dirty price" (or "full price" or "all in price" or "Cash price"). The "clean price" is the price excluding any interest that has accrued. Clean prices are generally more stable over time than dirty prices. This is because the dirty price will drop suddenly when the bond goes "ex interest" and the purchaser is no longer entitled to receive the next coupon payment. In many markets, it is market practice to quote bonds on a clean-price basis. 
When a purchase is settled, the accrued interest is added to the quoted clean price to arrive at the actual amount to be paid. Yield and price relationships. Once the price or value has been calculated, various yields relating the price of the bond to its coupons can then be determined. Yield to maturity. The yield to maturity (YTM) is the discount rate which returns the market price of a bond without embedded optionality; it is identical to formula_8 (required return) in the above equation. YTM is thus the internal rate of return of an investment in the bond made at the observed price. Since YTM can be used to price a bond, bond prices are often quoted in terms of YTM. To achieve a return equal to YTM, i.e. where it is the required return on the bond, the bond owner must: Coupon rate. The coupon rate is the coupon payment formula_17 as a percentage of the face value formula_18. formula_19 Coupon yield is also called nominal yield. Current yield. The current yield is the coupon payment formula_17 as a percentage of the ("current") bond price formula_9. formula_20 Relationship. The concept of current yield is closely related to other bond concepts, including yield to maturity, and coupon yield. The relationship between yield to maturity and the coupon rate is as follows: Price sensitivity. The sensitivity of a bond's market price to interest rate (i.e. yield) movements is measured by its duration, and, additionally, by its convexity. Duration is a linear measure of how the price of a bond changes in response to interest rate changes. It is approximately equal to the percentage change in price for a given change in yield, and may be thought of as the elasticity of the bond's price with respect to discount rates. For example, for small interest rate changes, the duration is the approximate percentage by which the value of the bond will fall for a 1% per annum increase in market interest rate. So the market price of a 17-year bond with a duration of 7 would fall about 7% if the market interest rate (or more precisely the corresponding force of interest) increased by 1% per annum. Convexity is a measure of the "curvature" of price changes. It is needed because the price is not a linear function of the discount rate, but rather a convex function of the discount rate. Specifically, duration can be formulated as the first derivative of the price with respect to the interest rate, and convexity as the second derivative (see: Bond duration closed-form formula; Bond convexity closed-form formula; Taylor series). Continuing the above example, for a more accurate estimate of sensitivity, the convexity score would be multiplied by the square of the change in interest rate, and the result added to the value derived by the above linear formula. For embedded options, see effective duration and effective convexity; more generally, see . Accounting treatment. In accounting for liabilities, any bond discount or premium must be amortized over the life of the bond. A number of methods may be used for this depending on applicable accounting rules. One possibility is that amortization amount in each period is calculated from the following formula: formula_21 formula_22 = amortization amount in period number "n+1" formula_23 Bond Discount or Bond Premium = formula_24 = formula_25 Bond Discount or Bond Premium = formula_26 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
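The sketch below ties several of the quantities above together: the present-value price, the yield to maturity recovered from an observed price by bisection, and a numerical estimate of duration. It is an illustration added here rather than part of the article, and the 5-year, 5 percent coupon bond with a 6 percent market rate is a made-up example with annual coupon periods assumed.

```python
# Hedged sketch of present-value bond pricing and the related yield measures.
def bond_price(face, coupon_rate, periods, market_rate):
    """Sum of discounted coupons plus the discounted redemption value (annual periods assumed)."""
    coupon = face * coupon_rate
    pv_coupons = coupon * (1 - (1 + market_rate) ** -periods) / market_rate
    pv_face = face * (1 + market_rate) ** -periods
    return pv_coupons + pv_face

def yield_to_maturity(price, face, coupon_rate, periods, lo=1e-9, hi=1.0, tol=1e-10):
    """Recover the discount rate that reproduces an observed price (bisection; price falls as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, periods, mid) > price:
            lo = mid          # computed price too high, so the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

def modified_duration(face, coupon_rate, periods, market_rate, bump=1e-6):
    """Numerical duration: percentage price change per unit change in yield."""
    p0 = bond_price(face, coupon_rate, periods, market_rate)
    p_up = bond_price(face, coupon_rate, periods, market_rate + bump)
    return -(p_up - p0) / (p0 * bump)

price = bond_price(1000, 0.05, 5, 0.06)
print(round(price, 2))                                    # about 957.88: the bond sells at a discount
print(round(yield_to_maturity(price, 1000, 0.05, 5), 6))  # recovers about 0.06
print(round(modified_duration(1000, 0.05, 5, 0.06), 2))   # about 4.28
```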
[ { "math_id": 0, "text": "\\begin{align}\nTFV &= \\begin{matrix}\n \\left(\\frac{C}{1+i}+\\frac{C}{(1+i)^2}+ ... +\\frac{C}{(1+i)^N}\\right) + \\frac{M}{(1+i)^N} \n \\end{matrix}\\\\\n &= \\begin{matrix}\n \\left(\\sum_{n=1}^N\\frac{C}{(1+i)^n}\\right) + \\frac{M}{(1+i)^N} \n \\end{matrix}\\\\\n &= \\begin{matrix}\n C\\left(\\frac{1-(1+i)^{-N}}{i}\\right)+M(1+i)^{-N}\n \\end{matrix}\n\\end{align}" }, { "math_id": 1, "text": "F =" }, { "math_id": 2, "text": "i_F =" }, { "math_id": 3, "text": "C = F * i_F =" }, { "math_id": 4, "text": "N =" }, { "math_id": 5, "text": "i =" }, { "math_id": 6, "text": "M =" }, { "math_id": 7, "text": "TFV =" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "P" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "r" }, { "math_id": 12, "text": "\\frac{1}{2}\\sigma(r)^{2}\\frac{\\partial^2 P}{\\partial r^2}+[a(r)+\\sigma(r)+\\varphi(r,t)]\\frac{\\partial P}{\\partial r}+\\frac{\\partial P}{\\partial t} - rP = 0" }, { "math_id": 13, "text": "P[t, T, r(t)] = E_t^{\\ast}[e^{-R(t,T)}]" }, { "math_id": 14, "text": "E_t^{\\ast}" }, { "math_id": 15, "text": "R(t,T)" }, { "math_id": 16, "text": "P_0" }, { "math_id": 17, "text": "C" }, { "math_id": 18, "text": "F" }, { "math_id": 19, "text": "\\text{Coupon rate} = \\frac{C}{F}" }, { "math_id": 20, "text": "\\text{Current yield} = \\frac{C}{P_0}. " }, { "math_id": 21, "text": "n\\in\\{0,1, ... ,N-1\\}" }, { "math_id": 22, "text": "a_{n+1}" }, { "math_id": 23, "text": "a_{n+1}=|iP-C|{(1+i)}^n" }, { "math_id": 24, "text": "|F-P|" }, { "math_id": 25, "text": "a_1+a_2+ ... + a_N" }, { "math_id": 26, "text": "F|i-i_F|(\\frac{1-(1+i)^{-N}}{i})" } ]
https://en.wikipedia.org/wiki?curid=788540
7886457
Metzler matrix
Square matrix whose off-diagonal entries are nonnegative In mathematics, a Metzler matrix is a matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero): formula_0 It is named after the American economist Lloyd Metzler. Metzler matrices appear in stability analysis of time delayed differential equations and positive linear dynamical systems. Their properties can be derived by applying the properties of nonnegative matrices to matrices of the form "M" + "aI", where "M" is a Metzler matrix. Definition and terminology. In mathematics, especially linear algebra, a matrix is called Metzler, quasipositive (or quasi-positive) or essentially nonnegative if all of its elements are non-negative except for those on the main diagonal, which are unconstrained. That is, a Metzler matrix is any matrix "A" which satisfies formula_1 Metzler matrices are also sometimes referred to as formula_2-matrices, as a "Z"-matrix is equivalent to a negated quasipositive matrix. Properties. The exponential of a Metzler (or quasipositive) matrix is a nonnegative matrix because of the corresponding property for the exponential of a nonnegative matrix. This is natural, once one observes that the generator matrices of continuous-time Markov chains are always Metzler matrices, and that probability distributions are always non-negative. A Metzler matrix has an eigenvector in the nonnegative orthant because of the corresponding property for nonnegative matrices. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Bibliography. &lt;templatestyles src="Reflist/styles.css" /&gt;
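A quick numerical check of the exponential property stated above can be written with SciPy's matrix exponential; the matrix entries below are arbitrary values chosen for the illustration.

```python
import numpy as np
from scipy.linalg import expm

def is_metzler(M):
    """All off-diagonal entries nonnegative; the diagonal is unconstrained."""
    off_diag = M - np.diag(np.diag(M))
    return bool(np.all(off_diag >= 0))

# Arbitrary illustrative Metzler matrix (shaped like a continuous-time Markov generator)
A = np.array([[-3.0, 1.0, 2.0],
              [ 0.5, -1.0, 0.5],
              [ 2.0, 0.0, -2.0]])

print(is_metzler(A))                               # True
print(bool(np.all(expm(A) >= 0)))                  # True: the exponential is entrywise nonnegative
print(bool(np.all(expm(A - 5 * np.eye(3)) >= 0)))  # still true; shifting the diagonal does not matter
```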
[ { "math_id": 0, "text": "\\forall_{i\\neq j}\\, x_{ij} \\geq 0." }, { "math_id": 1, "text": "A=(a_{ij});\\quad a_{ij}\\geq 0, \\quad i\\neq j." }, { "math_id": 2, "text": "Z^{(-)}" } ]
https://en.wikipedia.org/wiki?curid=7886457
788704
Conjunction elimination
In propositional logic, conjunction elimination (also called and elimination, ∧ elimination, or simplification) is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction "A and B" is true, then "A" is true, and "B" is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. An example in English: It's raining and it's pouring. Therefore it's raining. The rule consists of two separate sub-rules, which can be expressed in formal language as: formula_0 and formula_1 The two sub-rules together mean that, whenever an instance of "formula_2" appears on a line of a proof, either "formula_3" or "formula_4" can be placed on a subsequent line by itself. The above example in English is an application of the first sub-rule. Formal notation. The "conjunction elimination" sub-rules may be written in sequent notation: formula_5 and formula_6 where formula_7 is a metalogical symbol meaning that formula_3 is a syntactic consequence of formula_2 and formula_4 is also a syntactic consequence of formula_2 in a logical system; and expressed as truth-functional tautologies or theorems of propositional logic: formula_8 and formula_9 where formula_3 and formula_4 are propositions expressed in some formal system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
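A brute-force truth-table check, added here purely as an illustration, confirms that both tautologies above hold under every assignment of truth values.

```python
from itertools import product

# Hedged sketch: verify (P and Q) -> P and (P and Q) -> Q over all truth assignments.
def implies(a, b):
    return (not a) or b

for P, Q in product([False, True], repeat=2):
    assert implies(P and Q, P)   # first sub-rule
    assert implies(P and Q, Q)   # second sub-rule

print("Both forms of conjunction elimination are tautologies.")
```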
[ { "math_id": 0, "text": "\\frac{P \\land Q}{\\therefore P}" }, { "math_id": 1, "text": "\\frac{P \\land Q}{\\therefore Q}" }, { "math_id": 2, "text": "P \\land Q" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "(P \\land Q) \\vdash P" }, { "math_id": 6, "text": "(P \\land Q) \\vdash Q" }, { "math_id": 7, "text": "\\vdash" }, { "math_id": 8, "text": "(P \\land Q) \\to P" }, { "math_id": 9, "text": "(P \\land Q) \\to Q" } ]
https://en.wikipedia.org/wiki?curid=788704
7887913
Signature matrix
In mathematics, a signature matrix is a diagonal matrix whose diagonal elements are plus or minus 1, that is, any matrix of the form: formula_0 Any such matrix is its own inverse, hence is an involutory matrix. It is consequently a square root of the identity matrix. Note however that not all square roots of the identity are signature matrices. Noting that signature matrices are both symmetric and involutory, they are thus orthogonal. Consequently, any linear transformation corresponding to a signature matrix constitutes an isometry. Geometrically, signature matrices represent a reflection in each of the axes corresponding to the negated rows or columns. Properties. If "A" is an "N" × "N" signature matrix, then: formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
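A minimal numerical sketch (using NumPy; the random construction is only for illustration and not from the article) confirms the involutory, orthogonality and trace properties stated above:

    import numpy as np

    N = 5
    A = np.diag(np.random.choice([-1, 1], size=N))   # a random N x N signature matrix

    print(np.array_equal(A @ A, np.eye(N)))   # involutory: A*A = I
    print(np.allclose(A.T @ A, np.eye(N)))    # orthogonal: A^T A = I
    print(-N <= np.trace(A) <= N)             # the trace bound above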
[ { "math_id": 0, "text": "A=\\begin{pmatrix}\n\\pm 1 & 0 & \\cdots & 0 & 0 \\\\\n0 & \\pm 1 & \\cdots & 0 & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n0 & 0 & \\cdots & \\pm 1 & 0 \\\\\n0 & 0 & \\cdots & 0 & \\pm 1 \n\\end{pmatrix}" }, { "math_id": 1, "text": "-N\\leq \\operatorname{tr}(A)\\leq N" } ]
https://en.wikipedia.org/wiki?curid=7887913
789046
Jack Steinberger
German-American physicist, Nobel laureate (1921–2020) Jack Steinberger (born Hans Jakob Steinberger; May 25, 1921 – December 12, 2020) was a German-born American physicist noted for his work with neutrinos, the subatomic particles considered to be elementary constituents of matter. He was a recipient of the 1988 Nobel Prize in Physics, along with Leon M. Lederman and Melvin Schwartz, for the discovery of the muon neutrino. Through his career as an experimental particle physicist, he held positions at the University of California, Berkeley, Columbia University (1950–68), and the CERN (1968–86). He was also a recipient of the United States National Medal of Science in 1988, and the Matteucci Medal from the Italian Academy of Sciences in 1990. Early life and education. Steinberger was born in the city of Bad Kissingen in Bavaria, Germany, on May 25, 1921 into a Jewish family. The rise of Nazism in Germany, with its open anti-Semitism, prompted his parents, Ludwig Lazarus (a cantor and religious teacher) and Berta May Steinberger, to send him out of the country. Steinberger emigrated to the United States at the age of 13, making the trans-Atlantic trip with his brother Herbert. Jewish charities in the U.S. arranged for Barnett Farroll to care for him as a foster child. Steinberger attended New Trier Township High School, in Winnetka, Illinois. He was reunited with his parents and younger brother in 1938. Steinberger studied chemical engineering at Armour Institute of Technology (now Illinois Institute of Technology) but left after his scholarship ended to help supplement his family's income. He obtained a bachelor's degree in chemistry from the University of Chicago, in 1942. Shortly thereafter, he joined the Signal Corps at MIT. With the help of the G.I. Bill, he returned to graduate studies at the University of Chicago in 1946, where he studied under Edward Teller and Enrico Fermi. His Ph.D. thesis concerned the energy spectrum of electrons emitted in muon decay; his results showed that this was a three-body decay, and implied the participation of two neutral particles in the decay (later identified as the electron (formula_0) and muon (formula_1) neutrinos) rather than one. Career. Early research. After receiving his doctorate, Steinberger attended the Institute for Advanced Study in Princeton for a year. In 1949 he published a calculation of the lifetime of the neutral pion, which anticipated the study of anomalies in quantum field theory. Following Princeton, in 1949, Steinberger went to the Radiation Lab at the University of California at Berkeley, where he performed an experiment which demonstrated the production of neutral pions and their decay to photon pairs. This experiment utilized the 330 MeV synchrotron and the newly invented scintillation counters. Despite this and other achievements, he was asked to leave the Radiation Lab at Berkeley in 1950, due to his refusal to sign the so-called non-Communist Oath. Steinberger accepted a faculty position at Columbia University in 1950. The newly commissioned meson beam at Nevis Labs provided the tool for several important experiments. Measurements of the production cross-section of pions on various nuclear targets showed that the pion has odd parity. A direct measurement of the production of pions on a liquid hydrogen target, then not a common tool, provided the data needed to show that the pion has spin zero. The same target was used to observe the relatively rare decay of neutral pions to a photon, an electron, and a positron. 
A related experiment measured the mass difference between the charged and neutral pions based on the angular correlation between the neutral pions produced when the negative pion is captured by the proton in the hydrogen nucleus. Other important experiments studied the angular correlation between electron–positron pairs in neutral pion decays, and established the rare decay of a charged pion to an electron and neutrino; the latter required use of a liquid-hydrogen bubble chamber. Investigations of strange particles. During 1954–1955, Steinberger contributed to the development of the bubble chamber with the construction of a 15 cm device for use with the Cosmotron at Brookhaven National Laboratory. The experiment used a pion beam to produce pairs of hadrons with strange quarks to elucidate the puzzling production and decay properties of these particles. In 1956, he used a 30 cm chamber outfitted with three cameras to discover the neutral Sigma hyperon and measure its mass. This observation was important for confirming the existence of the SU(3) flavor symmetry which hypothesizes the existence of the strange quark. An important characteristic of the weak interaction is its violation of parity symmetry. This characteristic was established through the measurement of the spins and parities of many hyperons. Steinberger and his collaborators contributed several such measurements using large (75 cm) liquid-hydrogen bubble chambers and separated hadron beams at Brookhaven. One example is the measurement of the invariant mass distribution of electron–positron pairs produced in the decay of Sigma-zero hyperons to Lambda-zero hyperons. Neutrinos and the weak neutral current. In the 1960s, the emphasis in the study of the weak interaction shifted from strange particles to neutrinos. Leon Lederman, Steinberger and Schwartz built large spark chambers at Nevis Labs and exposed them in 1961 to neutrinos produced in association with muons in the decays of charged pions and kaons. They used the Alternating Gradient Synchrotron (AGS) at Brookhaven, and obtained a number of convincing events in which muons were produced, but no electrons. This result, for which they received the Nobel Prize in 1988, proved the existence of a type of neutrino associated with the muon, distinct from the neutrino produced in beta decay. Study of CP violation. The CP violation (charge conjugation and parity) was established in the neutral kaon system in 1964. Steinberger recognized that the phenomenological parameter epsilon ("ε") which quantifies the degree of CP violation could be measured in interference phenomena (See CP violation). In collaboration with Carlo Rubbia, he performed an experiment while on sabbatical at CERN during 1965 which demonstrated robustly the expected interference effect, and also measured precisely the difference in mass of the short-lived and long-lived neutral kaon masses. Back in the United States, Steinberger conducted an experiment at Brookhaven to observe CP violation in the semi-leptonic decays of neutral kaons. The charge asymmetry relates directly to the epsilon parameter, which was thereby measured precisely. This experiment also allowed the deduction of the phase of epsilon, and confirmed that CPT is a good symmetry of nature. CERN. In 1968, Steinberger left Columbia University and accepted a position as a department director at CERN. He constructed an experiment there utilizing multi-wire proportional chambers (MWPC), recently invented by Georges Charpak. 
The MWPCs, augmented by micro-electronic amplifiers, allowed much larger samples of events to be recorded. Several results for neutral kaons were obtained and published in the early 1970s, including the observation of the rare decay of the neutral kaon to a muon pair, the time dependence of the asymmetry for semi-leptonic decays, and a more-precise measurement of the neutral kaon mass difference. A new era in experimental technique was opened. These new techniques proved crucial for the first demonstration of direct CP-violation. The NA31 experiment at CERN was built in the early 1980s using the CERN SPS 400 GeV proton synchrotron. As well as banks of MWPCs and a hadron calorimeter, it featured a liquid argon electromagnetic calorimeter with exceptional spatial and energy resolution. NA31 showed that direct CP violation is real. Steinberger worked on the ALEPH experiment at the Large Electron–Positron Collider (LEP), where he served as the experiment's spokesperson. Among the ALEPH experiment's initial accomplishments was the precise measurement of the number of families of leptons and quarks in the Standard Model through the measurement of the decays of the Z boson. He retired from CERN in 1986, and went on to become a professor at the Scuola Normale Superiore di Pisa in Italy. He continued his association with the CERN laboratory through his visits into his 90s. Nobel Prize. Steinberger was awarded the Nobel Prize in Physics in 1988, "for the neutrino beam method and the demonstration of the doublet structure of the leptons through the discovery of the muon neutrino". He shared the prize with Leon M. Lederman and Melvin Schwartz; at the time of the research, all three experimenters were at Columbia University. The experiment used charged pion beams generated with the Alternating Gradient Synchrotron at Brookhaven National Laboratory. The pions decayed to muons which were detected in front of a steel wall; the neutrinos were detected in spark chambers installed behind the wall. The coincidence of muons and neutrinos demonstrated that a second kind of neutrino was created in association with muons. Subsequent experiments proved this neutrino to be distinct from the first kind (electron-type). Steinberger, Lederman and Schwartz published their work in "Physical Review Letters" in 1962. He gave his Nobel medal to New Trier High School in Winnetka, Illinois (USA), of which he was an alumnus. He was also awarded the National Medal of Science in 1988, by the then US president, Ronald Reagan and was the recipient of the Matteucci Medal in 1990, from the Italian Academy of Sciences. Personal life. Steinberger's first marriage to Joan Beauregard ended in a divorce, after which he married his former student, biologist Cynthia Alff. He had four children, two from each of his marriages. His son Ned Steinberger is the founder of the eponymous company for headless guitars and basses, and his daughter Julia Steinberger is an ecological economist at the University of Lausanne. As an atheist and a humanist, Steinberger was a Humanist Laureate in the International Academy of Humanism. In his own words, he is noted to have enjoyed tennis, mountaineering and sailing. In the 1980s Steinberger resumed relations with his native town Bad Kissingen. He often visited Bad Kissingen after that. The school he had attended there was named "Jack-Steinberger-Gymnasium" in 2001. In 2006 Steinberger was made honorary citizen of Bad Kissingen. "I feel welcome in Bad Kissingen. This is my hometown and I was raised there. 
I feel as a German again now" he told the Bavarian broadcasting company "Bayerischer Rundfunk" in 2013. He died on December 12, 2020, at his home in Geneva. He was aged 99. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu_e" }, { "math_id": 1, "text": "\\nu_\\mu" } ]
https://en.wikipedia.org/wiki?curid=789046
7892663
Stanley–Wilf conjecture
Theorem that the growth rate of every proper permutation class is singly exponential The Stanley–Wilf conjecture, formulated independently by Richard P. Stanley and Herbert Wilf in the late 1980s, states that the growth rate of every proper permutation class is singly exponential. It was proved by Adam Marcus and Gábor Tardos (2004) and is no longer a conjecture. Marcus and Tardos actually proved a different conjecture, due to Zoltán Füredi and Péter Hajnal (1992), which had been shown to imply the Stanley–Wilf conjecture by . Statement. The Stanley–Wilf conjecture states that for every permutation "β", there is a constant "C" such that the number |"S""n"("β")| of permutations of length "n" which avoid "β" as a permutation pattern is at most "C""n". As observed, this is equivalent to the convergence of the limit formula_0 The upper bound given by Marcus and Tardos for "C" is exponential in the length of "β". A stronger conjecture of had stated that one could take "C" to be ("k" − 1)2, where "k" denotes the length of "β", but this conjecture was disproved for the permutation "β" = 4231 by . Indeed, has shown that "C" is, in fact, exponential in "k" for almost all permutations. Allowable growth rates. The growth rate (or Stanley–Wilf limit) of a permutation class is defined as formula_1 where "an" denotes the number of permutations of length "n" in the class. Clearly not every positive real number can be a growth rate of a permutation class, regardless of whether it is defined by a single forbidden pattern or a set of forbidden patterns. For example, numbers strictly between 0 and 1 cannot be growth rates of permutation classes. proved that if the number of permutations in a class of length "n" is ever less than the "n"th Fibonacci number then the enumeration of the class is eventually polynomial. Therefore, numbers strictly between 1 and the golden ratio also cannot be growth rates of permutation classes. Kaiser and Klazar went on to establish every possible growth constant of a permutation class below 2; these are the largest real roots of the polynomials formula_2 for an integer "k" ≥ 2. This shows that 2 is the least accumulation point of growth rates of permutation classes. later extended the characterization of growth rates of permutation classes up to a specific algebraic number κ≈2.20. From this characterization, it follows that κ is the least accumulation point of accumulation points of growth rates and that all growth rates up to κ are algebraic numbers. established that there is an algebraic number ξ≈2.31 such that there are uncountably many growth rates in every neighborhood of ξ, but only countably many growth rates below it. characterized the (countably many) growth rates below ξ, all of which are also algebraic numbers. Their results also imply that in the set of all growth rates of permutation classes, ξ is the least accumulation point from above. In the other direction, proved that every real number at least 2.49 is the growth rate of a permutation class. That result was later improved by , who proved that every real number at least 2.36 is the growth rate of a permutation class. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
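For small "n" the quantity |"S""n"("β")| can be computed by brute force, which makes the statement easy to experiment with. The sketch below (an illustrative implementation, not from the article) counts pattern-avoiding permutations; for any pattern of length 3 the counts are the Catalan numbers 1, 2, 5, 14, 42, …, whose growth rate (and hence Stanley–Wilf limit) is 4:

    from itertools import permutations, combinations

    def contains(perm, pattern):
        # True if perm has a subsequence order-isomorphic to pattern
        k = len(pattern)
        for idx in combinations(range(len(perm)), k):
            sub = [perm[i] for i in idx]
            if all((sub[a] < sub[b]) == (pattern[a] < pattern[b])
                   for a in range(k) for b in range(a + 1, k)):
                return True
        return False

    def avoiders(n, pattern):
        return sum(1 for p in permutations(range(1, n + 1))
                   if not contains(p, pattern))

    print([avoiders(n, (1, 2, 3)) for n in range(1, 6)])   # -> [1, 2, 5, 14, 42]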
[ { "math_id": 0, "text": "\\lim_{n\\to\\infty} \\sqrt[n]{|S_n(\\beta)|}." }, { "math_id": 1, "text": "\\limsup_{n\\to\\infty} \\sqrt[n]{a_n}," }, { "math_id": 2, "text": "x^{k+1}-2x^k+1" } ]
https://en.wikipedia.org/wiki?curid=7892663
7894744
Achilles number
Numbers with special prime factorization An Achilles number is a number that is powerful but not a perfect power. A positive integer "n" is a powerful number if, for every prime factor "p" of "n", "p"^2 is also a divisor. In other words, every prime factor appears at least squared in the factorization. All Achilles numbers are powerful. However, not all powerful numbers are Achilles numbers: only those that cannot be represented as "m"^"k", where "m" and "k" are positive integers greater than 1. Achilles numbers were named by Henry Bottomley after Achilles, a hero of the Trojan war, who was also powerful but imperfect. "Strong Achilles numbers" are Achilles numbers whose Euler totients are also Achilles numbers; the smallest are 500 and 864. Sequence of Achilles numbers. A number "n" = "p"1^"a"1 "p"2^"a"2 … "p""k"^"a""k" is powerful if min("a"1, "a"2, …, "a""k") ≥ 2. If in addition gcd("a"1, "a"2, …, "a""k") = 1 the number is an Achilles number. The Achilles numbers up to 5000 are: 72, 108, 200, 288, 392, 432, 500, 648, 675, 800, 864, 968, 972, 1125, 1152, 1323, 1352, 1372, 1568, 1800, 1944, 2000, 2312, 2592, 2700, 2888, 3087, 3200, 3267, 3456, 3528, 3872, 3888, 4000, 4232, 4500, 4563, 4608, 5000 (sequence in the OEIS). The smallest pair of consecutive Achilles numbers is: 5425069447 = 7^3 × 41^2 × 97^2 and 5425069448 = 2^3 × 26041^2. Examples. As an example, 108 is a powerful number. Its prime factorization is 2^2 · 3^3, and thus its prime factors are 2 and 3. Both 2^2 = 4 and 3^2 = 9 are divisors of 108. However, 108 cannot be represented as "m"^"k", where "m" and "k" are positive integers greater than 1, so 108 is an Achilles number. The integer 360 is not an Achilles number because it is not powerful. One of its prime factors is 5 but 360 is not divisible by 5^2 = 25. Finally, 784 is not an Achilles number. It is a powerful number, because not only are 2 and 7 its only prime factors, but also 2^2 = 4 and 7^2 = 49 are divisors of it. It is a perfect power: formula_0 So it is not an Achilles number. The integer 500 = 2^2 × 5^3 is a strong Achilles number as its Euler totient of 200 = 2^3 × 5^2 is also an Achilles number. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
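The defining test — powerful (every prime exponent at least 2) but not a perfect power (the exponents have greatest common divisor 1) — is straightforward to check by machine. The following sketch (an illustrative implementation using naive trial division, not taken from the article) reproduces the start of the sequence above:

    from math import gcd
    from functools import reduce

    def prime_exponents(n):
        # exponents in the prime factorization of n (n >= 2), via trial division
        exps = []
        d = 2
        while d * d <= n:
            if n % d == 0:
                e = 0
                while n % d == 0:
                    n //= d
                    e += 1
                exps.append(e)
            d += 1
        if n > 1:
            exps.append(1)
        return exps

    def is_achilles(n):
        exps = prime_exponents(n)
        powerful = all(e >= 2 for e in exps)      # every prime appears at least squared
        perfect_power = reduce(gcd, exps) > 1     # n = m^k for some k > 1
        return powerful and not perfect_power

    print([n for n in range(2, 1000) if is_achilles(n)])
    # expected: [72, 108, 200, 288, 392, 432, 500, 648, 675, 800, 864, 968, 972]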
[ { "math_id": 0, "text": "784=2^4 \\cdot 7^2 = (2^2)^2 \\cdot 7^2 = (2^2 \\cdot 7)^2 = 28^2. \\, " } ]
https://en.wikipedia.org/wiki?curid=7894744
7895070
K correction
K correction converts measurements of astronomical objects into their respective rest frames. The correction acts on that object's observed magnitude (or equivalently, its flux). Because astronomical observations often measure through a single filter or bandpass, observers only measure a fraction of the total spectrum, redshifted into the frame of the observer. For example, to compare measurements of stars at different redshifts viewed through a red filter, one must estimate K corrections to these measurements in order to make comparisons. If one could measure all wavelengths of light from an object (a bolometric flux), a K correction would not be required, nor would it be required if one could measure the light emitted in an emission line. The term traces back to Carl Wilhelm Wirtz (1918), who referred to the correction as a "Konstanten k" (German for "constant") correction dealing with the effects of redshift in his work on nebulae. In the English-language literature the term "K correction" is attributed to Edwin Hubble, who supposedly chose formula_0 arbitrarily to represent the reduction factor in magnitude due to this same effect, and who may not have been aware of, or given credit to, the earlier work. The K correction can be defined as follows: formula_1 i.e., the adjustment to the standard relationship between absolute and apparent magnitude required to correct for the redshift effect. Here, DL is the luminosity distance measured in parsecs. The exact nature of the calculation that needs to be applied in order to perform a K correction depends upon the type of filter used to make the observation and the shape of the object's spectrum. If multi-color photometric measurements are available for a given object thus defining its spectral energy distribution (SED), K corrections then can be computed by fitting it against a theoretical or empirical SED template. It has been shown that K corrections in many frequently used broad-band filters for low-redshift galaxies can be precisely approximated using two-dimensional polynomials as functions of a redshift and one observed color. This approach is implemented in the K corrections calculator web-service. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
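As a worked illustration of the defining relation above (the numerical values below are invented for the example and carry no astronomical significance):

    import math

    def absolute_magnitude(m, d_L_pc, k_corr):
        # M = m - 5*(log10(D_L) - 1) - K, with D_L in parsecs
        return m - 5.0 * (math.log10(d_L_pc) - 1.0) - k_corr

    # e.g. an object observed at m = 20.0, D_L = 1e9 pc, with K = 0.3:
    print(absolute_magnitude(20.0, 1.0e9, 0.3))   # -> -20.3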
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": " M = m - 5 (\\log_{10}{D_L} - 1) - K_{Corr}\\!\\," } ]
https://en.wikipedia.org/wiki?curid=7895070
7897443
Lookahead carry unit
Fast Digital Addition Circuit A lookahead carry unit (LCU) is a logical unit in digital circuit design used to decrease calculation time in adder units and used in conjunction with carry look-ahead adders (CLAs). 4-bit adder. A single 4-bit CLA is shown below: 16-bit adder. By combining four 4-bit CLAs, a 16-bit adder can be created but additional logic is needed in the form of an LCU. The LCU accepts the group propagate (formula_0) and group generate (formula_1) from each of the four CLAs. formula_0 and formula_1 have the following expressions for each CLA adder: formula_2 formula_3 The LCU then generates the carry input for each CLA. Assume that formula_4 is formula_0 and formula_5 is formula_1 from the "i"th CLA; then the output carry bits are formula_6 formula_7 formula_8 formula_9 Substituting formula_10 into formula_11, then formula_11 into formula_12, then formula_12 into formula_13 yields the expanded equations: formula_6 formula_14 formula_15 formula_16 formula_10 corresponds to the carry input into the second CLA; formula_11 to the third CLA; formula_12 to the fourth CLA; and formula_13 to the overflow carry bit. In addition, the LCU can calculate its own propagate and generate: formula_17 formula_18 formula_19 64-bit adder. Combining four CLAs and an LCU creates a 16-bit adder. Four of these units can be combined to form a 64-bit adder. An additional (second-level) LCU is needed that accepts the propagate (formula_20) and generate (formula_21) from each LCU, and the four carry outputs generated by the second-level LCU are fed into the first-level LCUs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
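The carry recurrence above is easy to exercise in software. The following sketch (an illustrative software model, not from the article) computes C4, C8, C12 and C16 from the group propagate and generate bits of four CLAs:

    def lcu_carries(P, G, c0):
        # P[i], G[i]: group propagate/generate of the i-th 4-bit CLA (i = 0..3)
        # c0: carry into the least-significant CLA
        carries = [c0]
        for p, g in zip(P, G):
            carries.append(g | (p & carries[-1]))   # C_next = G + P*C
        return carries                              # [C0, C4, C8, C12, C16]

    # example: the second CLA generates a carry, the last two propagate it
    print(lcu_carries(P=[0, 0, 1, 1], G=[0, 1, 0, 0], c0=0))   # -> [0, 0, 1, 1, 1]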
[ { "math_id": 0, "text": "P_G" }, { "math_id": 1, "text": "G_G" }, { "math_id": 2, "text": "P_G = P_0 \\cdot P_1 \\cdot P_2 \\cdot P_3" }, { "math_id": 3, "text": "G_G = G_3 + G_2 \\cdot P_3 + G_1 \\cdot P_2 \\cdot P_3 + G_0 \\cdot P_1 \\cdot P_2 \\cdot P_3" }, { "math_id": 4, "text": "P_i" }, { "math_id": 5, "text": "G_i" }, { "math_id": 6, "text": "C_{4} = G_0 + P_0 \\cdot C_0" }, { "math_id": 7, "text": "C_{8} = G_{4} + P_{4} \\cdot C_{4}" }, { "math_id": 8, "text": "C_{12} = G_{8} + P_{8} \\cdot C_{8}" }, { "math_id": 9, "text": "C_{16} = G_{12} + P_{12} \\cdot C_{12}" }, { "math_id": 10, "text": "C_{4}" }, { "math_id": 11, "text": "C_{8}" }, { "math_id": 12, "text": "C_{12}" }, { "math_id": 13, "text": "C_{16}" }, { "math_id": 14, "text": "C_{8} = G_4 + G_0 \\cdot P_4 + C_0 \\cdot P_0 \\cdot P_4" }, { "math_id": 15, "text": "C_{12} = G_8 + G_4 \\cdot P_8 + G_0 \\cdot P_4 \\cdot P_8 + C_0 \\cdot P_0 \\cdot P_4 \\cdot P_8" }, { "math_id": 16, "text": "C_{16} = G_{12} + G_8 \\cdot P_{12} + G_4 \\cdot P_8 \\cdot P_{12} + G_0 \\cdot P_4 \\cdot P_8 \\cdot P_{12} + C_0 \\cdot P_0 \\cdot P_4 \\cdot P_8 \\cdot P_{12}" }, { "math_id": 17, "text": "P_{LCU} = P_0 \\cdot P_4 \\cdot P_8 \\cdot P_{12}" }, { "math_id": 18, "text": "G_{LCU} = G_{12} + G_8 \\cdot P_{12} + G_4 \\cdot P_8 \\cdot P_{12} + G_0 \\cdot P_4 \\cdot P_8 \\cdot P_{12}" }, { "math_id": 19, "text": "C_{16} = G_{LCU} + C_0 \\cdot P_{LCU}" }, { "math_id": 20, "text": "P_{LCU}" }, { "math_id": 21, "text": "G_{LCU}" } ]
https://en.wikipedia.org/wiki?curid=7897443
7899870
Cauchy matrix
In mathematics, a Cauchy matrix, named after Augustin-Louis Cauchy, is an "m"×"n" matrix with elements "a""ij" in the form formula_0 where formula_1 and formula_2 are elements of a field formula_3, and formula_4 and formula_5 are injective sequences (they contain "distinct" elements). The Hilbert matrix is a special case of the Cauchy matrix, where formula_6 Every submatrix of a Cauchy matrix is itself a Cauchy matrix. Cauchy determinants. The determinant of a Cauchy matrix is clearly a rational fraction in the parameters formula_4 and formula_5. If the sequences were not injective, the determinant would vanish, and tends to infinity if some formula_1 tends to formula_2. A subset of its zeros and poles are thus known. The fact is that there are no more zeros and poles: The determinant of a square Cauchy matrix A is known as a Cauchy determinant and can be given explicitly as formula_7     (Schechter 1959, eqn 4; Cauchy 1841, p. 154, eqn. 10). It is always nonzero, and thus all square Cauchy matrices are invertible. The inverse A−1 = B = [bij] is given by formula_8     (Schechter 1959, Theorem 1) where "A"i(x) and "B"i(x) are the Lagrange polynomials for formula_4 and formula_5, respectively. That is, formula_9 with formula_10 Generalization. A matrix C is called Cauchy-like if it is of the form formula_11 Defining X=diag(xi), Y=diag(yi), one sees that both Cauchy and Cauchy-like matrices satisfy the displacement equation formula_12 (with formula_13 for the Cauchy one). Hence Cauchy-like matrices have a common displacement structure, which can be exploited while working with the matrix. For example, there are known algorithms in the literature for approximate Cauchy matrix–vector multiplication in formula_14 operations, for LU factorization (and hence linear system solution) in formula_15 operations, and for approximate or unstable linear-system solvers running in formula_16 operations. Here formula_17 denotes the size of the matrix (one usually deals with square matrices, though all algorithms can be easily generalized to rectangular matrices). References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
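The explicit determinant formula is easy to verify numerically. The sketch below (an illustrative check with NumPy; the parameter values are arbitrary) builds a small square Cauchy matrix and compares its determinant against the closed-form product:

    import numpy as np

    def cauchy_matrix(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        return 1.0 / (x[:, None] - y[None, :])        # a_ij = 1 / (x_i - y_j)

    def cauchy_det(x, y):
        n = len(x)
        num = np.prod([(x[i] - x[j]) * (y[j] - y[i])
                       for i in range(1, n) for j in range(i)])
        den = np.prod([xi - yj for xi in x for yj in y])
        return num / den

    x, y = [1.0, 2.0, 3.0], [4.0, 6.0, 9.0]           # distinct parameters
    A = cauchy_matrix(x, y)
    print(np.linalg.det(A), cauchy_det(x, y))          # the two values agree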
[ { "math_id": 0, "text": "\na_{ij}={\\frac{1}{x_i-y_j}};\\quad x_i-y_j\\neq 0,\\quad 1 \\le i \\le m,\\quad 1 \\le j \\le n\n" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "y_j" }, { "math_id": 3, "text": "\\mathcal{F}" }, { "math_id": 4, "text": "(x_i)" }, { "math_id": 5, "text": "(y_j)" }, { "math_id": 6, "text": "x_i-y_j = i+j-1. \\;" }, { "math_id": 7, "text": " \\det \\mathbf{A}={{\\prod_{i=2}^n \\prod_{j=1}^{i-1} (x_i-x_j)(y_j-y_i)}\\over {\\prod_{i=1}^n \\prod_{j=1}^n (x_i-y_j)}}" }, { "math_id": 8, "text": "b_{ij} = (x_j - y_i) A_j(y_i) B_i(x_j) \\," }, { "math_id": 9, "text": "A_i(x) = \\frac{A(x)}{A^\\prime(x_i)(x-x_i)} \\quad\\text{and}\\quad B_i(x) = \\frac{B(x)}{B^\\prime(y_i)(x-y_i)}, " }, { "math_id": 10, "text": "A(x) = \\prod_{i=1}^n (x-x_i) \\quad\\text{and}\\quad B(x) = \\prod_{i=1}^n (x-y_i). " }, { "math_id": 11, "text": "C_{ij}=\\frac{r_i s_j}{x_i-y_j}." }, { "math_id": 12, "text": "\\mathbf{XC}-\\mathbf{CY}=rs^\\mathrm{T}" }, { "math_id": 13, "text": "r=s=(1,1,\\ldots,1)" }, { "math_id": 14, "text": "O(n \\log n)" }, { "math_id": 15, "text": "O(n^2)" }, { "math_id": 16, "text": "O(n \\log^2 n)" }, { "math_id": 17, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=7899870
7901142
Fluorescence interference contrast microscopy
Fluorescence interference contrast (FLIC) microscopy is a microscopic technique developed to achieve z-resolution on the nanometer scale. FLIC occurs whenever fluorescent objects are in the vicinity of a reflecting surface (e.g. Si wafer). The resulting interference between the direct and the reflected light leads to a double sin2 modulation of the intensity, I, of a fluorescent object as a function of distance, h, above the reflecting surface. This allows for the "nanometer height measurements". FLIC microscope is well suited to measuring the topography of a membrane that contains fluorescent probes e.g. an artificial lipid bilayer, or a living cell membrane or the structure of fluorescently labeled proteins on a surface. FLIC optical theory. General two layer system. The optical theory underlying FLIC was developed by Armin Lambacher and Peter Fromherz. They derived a relationship between the observed fluorescence intensity and the distance of the fluorophore from a reflective silicon surface. The observed fluorescence intensity, formula_0, is the product of the excitation probability per unit time, formula_1, and the probability of measuring an emitted photon per unit time, formula_2. Both probabilities are a function of the fluorophore height above the silicon surface, so the observed intensity will also be a function of the fluorophore height. The simplest arrangement to consider is a fluorophore embedded in silicon dioxide (refractive index formula_3) a distance "d" from an interface with silicon (refractive index formula_4). The fluorophore is excited by light of wavelength formula_5 and emits light of wavelength formula_6. The unit vector formula_7 gives the orientation of the transition dipole of excitation of the fluorophore. formula_1 is proportional to the squared projection of the local electric field, formula_8, which includes the effects of interference, on the direction of the transition dipole. formula_9 The local electric field, formula_8, at the fluorophore is affected by interference between the direct incident light and the light reflecting off the silicon surface. The interference is quantified by the phase difference formula_10 given by formula_11 formula_12 is the angle of the incident light with respect to the silicon plane normal. Not only does interference modulate formula_8, but the silicon surface does not perfectly reflect the incident light. Fresnel coefficients give the change in amplitude between an incident and reflected wave. The Fresnel coefficients depend on the angles of incidence, formula_13 and formula_14, the indices of refraction of the two mediums and the polarization direction. The angles formula_13 and formula_14 can be related by Snell's Law. The expressions for the reflection coefficients are: formula_15 TE refers to the component of the electric field perpendicular to the plane of incidence and TM to the parallel component (The incident plane is defined by the plane normal and the propagation direction of the light). In cartesian coordinates, the local electric field is formula_16 formula_17 is the polarization angle of the incident light with respect to the plane of incidence. The orientation of the excitation dipole is a function of its angle formula_18 to the normal and formula_19 azimuthal to the plane of incidence. formula_20 The above two equations for formula_8 and formula_21 can be combined to give the probability of exciting the fluorophore per unit time formula_1. Many of the parameters used above would vary in a normal experiment. 
The variation in the five following parameters should be included in this theoretical description. The squared projection formula_22 must be averaged over these quantities to give the probability of excitation formula_1. Averaging over the first 4 parameters gives formula_23 formula_24 Normalization factors are not included. formula_25 is a distribution of the orientation angle of the fluorophore dipoles. The azimuthal angle formula_19 and the polarization angle formula_17 are integrated over analytically, so they no longer appear in the above equation. To finally obtain the probability of excitation per unit time, the above equation is integrated over the spread in excitation wavelength, accounting for the intensity formula_26 and the extinction coefficient of the fluorophore formula_27. formula_28 The steps to calculate formula_2 are equivalent to those above in calculating formula_1 except that the parameter labels "em" are replaced with "ex" and "in" is replaced with "out". formula_29 The resulting fluorescence intensity measured is proportional to the product of the excitation probability and emission probability formula_30 It is important to note that this theory determines a proportionality relation between the measured fluorescence intensity formula_0 and the distance of the fluorophore above the reflective surface. The fact that it is not an equality relation will have a significant effect on the experimental procedure. Experimental Setup. A silicon wafer is typically used as the reflective surface in a FLIC experiment. An oxide layer is then thermally grown on top of the silicon wafer to act as a spacer. On top of the oxide is placed the fluorescently labeled specimen, such as a lipid membrane, a cell or membrane bound proteins. With the sample system built, all that is needed is an epifluorescence microscope and a CCD camera to make quantitative intensity measurements. The silicon dioxide thickness is very important in making accurate FLIC measurements. As mentioned before, the theoretical model describes the "relative" fluorescence intensity measured versus the fluorophore height. The fluorophore position cannot be simply read off of a single measured FLIC curve. The basic procedure is to manufacture the oxide layer with at least two known thicknesses (the layer can be made with photolithographic techniques and the thickness measured by ellipsometry). The thicknesses used depends on the sample being measured. For a sample with fluorophore height in the range of 10 nm, oxide thickness around 50 nm would be best because the FLIC intensity curve is steepest here and would produce the greatest contrast between fluorophore heights. Oxide thickness above a few hundred nanometers could be problematic because the curve begins to get smeared out by polychromatic light and a range of incident angles. A ratio of measured fluorescence intensities at different oxide thicknesses is compared to the predicted ratio to calculate the fluorophore height above the oxide (formula_31). formula_32 The above equation can then be solved numerically to find formula_33. Imperfections of the experiment, such as imperfect reflection, nonnormal incidence of light and polychromatic light tend to smear out the sharp fluorescence curves. The spread in incidence angle can be controlled by the numerical aperture (N.A.). However, depending on the numerical aperture used, the experiment will yield good lateral resolution (x-y) or good vertical resolution (z), but not both. A high N.A. 
(~1.0) gives good lateral resolution which is best if the goal is to determine long range topography. Low N.A. (~0.001), on the other hand, provides accurate z-height measurement to determine the height of a fluorescently labeled molecule in a system. Analysis. The basic analysis involves fitting the intensity data with the theoretical model allowing the distance of the fluorophore above the oxide surface (formula_33) to be a free parameter. The FLIC curves shift to the left as the distance of the fluorophore above the oxide increases. formula_33 is usually the parameter of interest, but several other free parameters are often included to optimize the fit. Normally an amplitude factor (a) and a constant additive term for the background (b) are included. The amplitude factor scales the relative model intensity and the constant background shifts the curve up or down to account for fluorescence coming from out of focus areas, such as the top side of a cell. Occasionally the numerical aperture (N.A.) of the microscope is allowed to be a free parameter in the fitting. The other parameters entering the optical theory, such as different indices of refraction, layer thicknesses and light wavelengths, are assumed constant with some uncertainty. A FLIC chip may be made with oxide terraces of 9 or 16 different heights arranged in blocks. After a fluorescence image is captured, each 9 or 16 terrace block yields a separate FLIC curve that defines a unique formula_33. The average formula_33 is found by compiling all the formula_33 values into a histogram. The statistical error in the calculation of formula_33 comes from two sources: the error in fitting of the optical theory to the data and the uncertainty in the thickness of the oxide layer. Systematic error comes from three sources: the measurement of the oxide thickness (usually by ellipsometer), the fluorescence intensity measurement with the CCD, and the uncertainty in the parameters used in the optical theory. The systematic error has been estimated to be formula_34.
[ { "math_id": 0, "text": "I_{FLIC}" }, { "math_id": 1, "text": "P_{ex}" }, { "math_id": 2, "text": "P_{em}" }, { "math_id": 3, "text": "n_{1}" }, { "math_id": 4, "text": "n_{0}" }, { "math_id": 5, "text": "\\lambda_{ex}" }, { "math_id": 6, "text": "\\lambda_{em}" }, { "math_id": 7, "text": "''e_{ex}''" }, { "math_id": 8, "text": "F_{in}" }, { "math_id": 9, "text": "P_{ex}\\propto \\mid F_{in}\\cdot e_{ex}\\mid^{2} " }, { "math_id": 10, "text": "\\Phi_{in}" }, { "math_id": 11, "text": " \\Phi_{in} = \\frac{4\\pi n_{1}d\\cos \\theta^{in}_{1}}{\\lambda_{ex}}" }, { "math_id": 12, "text": "\\theta^{in}_{1}" }, { "math_id": 13, "text": "\\theta_{i}" }, { "math_id": 14, "text": "\\theta_{j}" }, { "math_id": 15, "text": "\nr^{TE}_{ij} = \\frac{n_{i}\\cos \\theta_{i} - n_{j}\\cos \\theta_{j}}{n_{i}\\cos \\theta_{i} + n_{j}\\cos \\theta_{j}}\\quad r^{TM}_{ij} = \\frac{n_{j}\\cos \\theta_{i} - n_{i}\\cos \\theta_{j}}{n_{j}\\cos \\theta_{i} + n_{i}\\cos \\theta_{j}}\n" }, { "math_id": 16, "text": "F_{in} = \\sin \\gamma_{in} \\left[\\begin{array}{c}0 \\\\1 + r^{TE}_{10}\\textit{e}^{ i\\Phi_{in}} \\\\0\\end{array}\\right] + \\cos \\gamma _{in} \\left[\\begin{array}{c}\\cos \\theta ^{in}_{1}(1-r^{TM}_{10}\\textit{e}^{i\\Phi_{in}}) \\\\0 \\\\ \\sin \\theta ^{in}_{1}(1+r^{TM}_{10}\\textit{e}^{i\\Phi_{in}})\\end{array}\\right]\n" }, { "math_id": 17, "text": "\\gamma_{in}" }, { "math_id": 18, "text": "\\theta_{ex}" }, { "math_id": 19, "text": "\\phi_{ex}" }, { "math_id": 20, "text": "\\textit{e}_{ex} = \\left[\\begin{array}{c}\\cos \\phi_{ex}\\sin \\theta_{ex}\\\\\\sin \\phi_{ex}\\sin \\theta_{ex} \\\\\\cos \\theta_{ex}\\end{array}\\right]" }, { "math_id": 21, "text": "\\textit{e}_{ex}" }, { "math_id": 22, "text": "\\mid F_{in}\\cdot e_{ex}\\mid^{2}" }, { "math_id": 23, "text": "\n<\\mid F_{in}\\cdot e_{ex}\\mid^{2}> \\propto \\int \\sin \\theta_{1}^{in}d\\theta_{1}^{in}A_{in}(\\theta_{1}^{in}) \\times \\int \\sin \\theta_{ex}d\\theta_{ex}O(\\theta_{ex})U_{ex}(\\lambda_{in},\\theta_{1}^{in}.\\theta_{ex})" }, { "math_id": 24, "text": " U_{ex} = \\sin^{2}\\theta_{ex}\\mid 1+r^{TE}_{10}\\textit{e}^{i\\Phi_{in}}\\mid^{2} + \\sin^{2}\\theta_{ex}\\cos^{2}\\theta^{in}_{1}\\mid 1-r^{TM}_{10}\\textit{e}^{i\\Phi_{in}}\\mid^{2}+2\\cos^{2}\\theta_{ex}\\sin^{2}\\theta^{in}_{1}\\mid 1+r^{TM}_{10}\\textit{e}^{i\\Phi_{in}}\\mid^{2}\n" }, { "math_id": 25, "text": "O(\\theta_{ex})" }, { "math_id": 26, "text": "I(\\lambda_{ex})" }, { "math_id": 27, "text": "\\epsilon(\\lambda_{ex})" }, { "math_id": 28, "text": "\nP_{ex}\\propto \\int d\\lambda_{ex}I(\\lambda_{ex})\\epsilon(\\lambda_{ex})<\\mid F_{in}\\cdot e_{ex}\\mid^{2}>\n" }, { "math_id": 29, "text": "\nP_{em}\\propto \\int d\\lambda_{em}\\Phi_{det}(\\lambda_{em})\\textit{f}(\\lambda_{em})<\\mid F_{in}\\cdot e_{ex}\\mid^{2}>\n" }, { "math_id": 30, "text": "\nI_{FLIC} \\propto P_{ex}P_{em}\n" }, { "math_id": 31, "text": "d_{\\textit{f}}," }, { "math_id": 32, "text": "\n\\frac{I_{theory}(d_{1})}{I_{theory}(d_{0})}=\\frac{I_{exp}(d_{1}+d_{\\textit{f}})}{I_{exp}(d_{0}+d_{\\textit{f}})}\n" }, { "math_id": 33, "text": "d_{\\textit{f}}" }, { "math_id": 34, "text": "\\sim 1 nm" } ]
https://en.wikipedia.org/wiki?curid=7901142
7901646
Diphenylketene
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Diphenylketene is a chemical substance of the ketene family. Diphenylketene, like most stable disubstituted ketenes, is a red-orange oil at room temperature and pressure. Due to the successive double bonds in the ketene structure R1R2C=C=O, diphenyl ketene is a heterocumulene. The most important reaction of diphenyl ketene is the [2+2] cycloaddition at C-C, C-N, C-O, and C-S multiple bonds. History. Diphenyl ketene was first isolated by Hermann Staudinger in 1905 and identified as the first example of the exceptionally reactive class of ketenes with the general formula R1R2C=C=O (R1=R2=phenyl group). Preparation. The first synthesis by H. Staudinger was based on 2-chlorodiphenylacetyl chloride (prepared from benzilic acid and thionyl chloride) from which two chlorine atoms are cleaved with zinc in a dehalogenation reaction: An early synthesis uses benzilmonohydrazone (from Diphenylethanedione and hydrazine hydrate), which is oxidized with mercury(II)oxide and calcium sulfate to form mono-diazoketone, and is then converted into the diphenylketene at 100 °C under nitrogen elimination in 58% yield: A further early diphenylketene synthesis originates from Eduard Wedekind, who had already obtained diphenyl ketene in 1901 by the dehydrohalogenation of diphenylacetyl chloride with triethylamine, without isolation and characterization though. This variant was also described in 1911 by H. Staudinger. A standard laboratory protocol is based on the Staudinger method and yields diphenyl ketene as an orange oil in yields of 53 to 57%. In a more recent process, 2-bromo-2,2-diphenylacetyl bromide is reacted with triphenylphosphine to give diphenyl ketene in yields up to 81%. Recently, a synthesis of diphenyl ketene from diphenylacetic acid and the Hendrickson reagent (triphenylphosphonium anhydride-trifluoromethanesulfonate) with water elimination in 72% yield has been reported. Properties. Diphenyl ketene is at room temperature an orange-colored to red oil (with the color of concentrated potassium dichromate solution) which is miscible with nonpolar organic solvents (such as diethyl ether, acetone, benzene, tetrahydrofuran, chloroform) and solidifies in the cold forming yellow crystals. The compound is easily oxidized by air but can be stored in tightly closed containers at 0 °C for several weeks without decomposition or in a nitrogen atmosphere with the addition of a small amount of hydroquinone as a polymerization inhibitor. Reactivity. Diphenylketene can undergo attack from a host of nucleophiles, including alcohols, amines, and enolates with fairly slow rates. These rates can be increased in the presence of catalysts. At present the mechanism of attack is unknown, but work is underway to determine the exact mechanism. The high reactivity of the diphenyl ketene is also evident in the formation of three dimers: and oligomers produced therefrom. Application. Ketenes (of the general formula R1R2C=C=O) have many parallels to isocyanates (of the general formula R-N=C=O) in their constitution as well as in their reactivity. Diphenyl ketene reacts with water in an addition reaction to form diphenylacetic acid, with ethanol to diphenyl acetic ethyl ester or with ammonia to the corresponding amide. Carboxylic acids produce mixed anhydrides of diphenylacetic acid, which can be used to activate protected amino acids for peptide linkage. 
formula_0 The protected dipeptide Z-Leu-Phe-OEt (N-benzyloxycarbonyl-L-leucyl-L-phenylalanine ethyl ester) is thus obtained in 59% yield via the activation of Z-leucine with diphenyl ketene and subsequent reaction with phenylalanine ethyl ester. Diphenyl ketene is prone to autoxidation, in which the corresponding polyester is formed at temperatures above 60 °C via an intermediate diphenyl acetolactone. In a Wittig reaction, allenes can be prepared from diphenyl ketene. With triphenylphosphine diphenylmethylene and diphenyl ketene, at e. g. 140 °C and under pressure tetraphenyl allenes are formed in 70% yield. The synthetically most interesting reactions of diphenyl ketene are [2+2]cycloadditions, e.g. the reaction with cyclopentadiene yielding a Diels-Alder adduct. Imines such as benzalaniline form β-lactams with diphenyl ketene. With carbonyl compounds β-lactones are formed analogously. The [2+2]cycloaddition of diphenyl ketene with phenylacetylene leads first to a cyclobutenone which thermally aromatizes to a phenyl vinyl ketene and cyclizes in a [4+2]cycloaddition to 3,4-diphenyl-1-naphthol in 81% yield. From this so-called Smith-Hoehn reaction a general synthesis method for substituted phenols and quinones has been developed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{(Phenyl)2C=C=O ->[{}\\atop\\text{Z-Leu}] (Phenyl)2CO-O-CO-{}}\\text{Z-Leu }\\ce{->[{}\\atop\\ce{H-Phe-OEt}]}\\text{ Z-Leu}\\ce{-Phe-OEt}" } ]
https://en.wikipedia.org/wiki?curid=7901646
7902939
J-coupling
Type of coupling used in NMR spectroscopy In nuclear chemistry and nuclear physics, "J"-couplings (also called spin-spin coupling or indirect dipole–dipole coupling) are mediated through chemical bonds connecting two spins. It is an indirect interaction between two nuclear spins that arises from hyperfine interactions between the nuclei and local electrons. In NMR spectroscopy, "J"-coupling contains information about relative bond distances and angles. Most importantly, "J"-coupling provides information on the connectivity of chemical bonds. It is responsible for the often complex splitting of resonance lines in the NMR spectra of fairly simple molecules. "J"-coupling is a frequency "difference" that is not affected by the strength of the magnetic field, so is always stated in Hz. Vector model and manifestations for chemical structure assignments. The origin of "J"-coupling can be visualized by a vector model for a simple molecule such as hydrogen fluoride (HF). In HF, the two nuclei have spin 1/2. Four states are possible, depending on the relative alignment of the H and F nuclear spins with the external magnetic field. The selection rules of NMR spectroscopy dictate that Δ"I" = 1, which means that a given photon (in the radio frequency range) can affect ("flip") only one of the two nuclear spins. "J"-coupling provides three parameters: the multiplicity (the "number of lines"), the magnitude of the coupling (strong, medium, weak), and the sign of the coupling. Multiplicity. The multiplicity provides information on the number of centers coupled to the signal of interest, and their nuclear spin. For simple systems, as in 1H–1H coupling in NMR spectroscopy, the multiplicity is one more than the number of adjacent protons which are magnetically nonequivalent to the protons of interest. For ethanol, each methyl proton is coupled to the two methylene protons, so the methyl signal is a triplet, while each methylene proton is coupled to the three methyl protons, so the methylene signal is a quartet. Nuclei with spins greater than 1/2, which are called quadrupolar, can give rise to greater splitting, although in many cases coupling to quadrupolar nuclei is not observed. Many elements consist of nuclei with nuclear spin and without. In these cases, the observed spectrum is the sum of spectra for each isotopomer. One of the great conveniences of NMR spectroscopy for organic molecules is that several important lighter spin nuclei are either monoisotopic, e.g. 31P and 19F, or have very high natural abundance, e.g. 1H. An additional convenience is that 12C and 16O have no nuclear spin so these nuclei, which are common in organic molecules, do not cause splitting patterns in NMR. Magnitude of "J"-coupling. For 1H–1H coupling, the magnitude of "J" decreases rapidly with the number of bonds between the coupled nuclei, especially in saturated molecules. Generally speaking two-bond coupling (i.e. 1H–C–1H) is stronger than three-bond coupling (1H–C–C–1H). The magnitude of the coupling also provides information on the dihedral angles relating the coupling partners, as described by the Karplus equation for three-bond coupling constants. For heteronuclear coupling, the magnitude of "J" is related to the nuclear magnetic moments of the coupling partners. 19F, with a high nuclear magnetic moment, gives rise to large coupling to protons. 103Rh, with a very small nuclear magnetic moment, gives only small couplings to 1H. 
To correct for the effect of the nuclear magnetic moment (or equivalently the gyromagnetic ratio "γ"), the "reduced coupling constant" "K" is often discussed, where "K" = 4π²"J"/("hγ"1"γ"2), with "γ"1 and "γ"2 the gyromagnetic ratios of the two coupled nuclei. For coupling of a 13C nucleus and a directly bonded proton, the dominant term in the coupling constant "J"C–H is the Fermi contact interaction, which is a measure of the s-character of the bond at the two nuclei. Where the external magnetic field is very low, e.g. in Earth's field NMR, "J"-coupling signals of the order of hertz usually dominate chemical shifts which are of the order of millihertz and are not normally resolvable. Sign of "J"-coupling. The value of each coupling constant also has a sign, and coupling constants of comparable magnitude often have opposite signs. If the coupling constant between two given spins is negative, the energy is lower when these two spins are parallel, and conversely if their coupling constant is positive. For a molecule with a single "J"-coupling constant, the appearance of the NMR spectrum is unchanged if the sign of the coupling constant is reversed, although spectral lines at given positions may represent different transitions. The simple NMR spectrum therefore does not indicate the sign of the coupling constant, which there is no simple way of predicting. However, for some molecules with two distinct "J"-coupling constants, the relative signs of the two constants can be experimentally determined by a double resonance experiment. For example, in the diethylthallium ion (C2H5)2Tl+, this method showed that the methyl-thallium (CH3-Tl) and methylene-thallium (CH2-Tl) coupling constants have opposite signs. The first experimental method to determine the absolute sign of a "J"-coupling constant was proposed in 1962 by Buckingham and Lovering, who suggested the use of a strong electric field to align the molecules of a polar liquid. The field produces a direct dipolar coupling of the two spins, which adds to the observed "J"-coupling if their signs are parallel and subtracts from the observed "J"-coupling if their signs are opposed. This method was first applied to 4-nitrotoluene, for which the "J"-coupling constant between two adjacent (or ortho) ring protons was shown to be positive because the splitting of the two peaks for each proton decreases with the applied electric field. Another way to align molecules for NMR spectroscopy is to dissolve them in a nematic liquid crystal solvent. This method has also been used to determine the absolute sign of "J"-coupling constants. "J"-coupling Hamiltonian. The Hamiltonian of a molecular system may be taken as: "H" = D1 + D2 + D3. For a singlet molecular state and frequent molecular collisions, D1 and D3 are almost zero. The full form of the "J"-coupling interaction between spins "I"j and "I"k on the same molecule is: "H" = 2π "I"j · "J"jk · "I"k where "J"jk is the "J"-coupling tensor, a real 3 × 3 matrix. It depends on molecular orientation, but in an isotropic liquid it reduces to a number, the so-called scalar coupling. In 1D NMR, the scalar coupling leads to oscillations in the free induction decay as well as splittings of lines in the spectrum. Decoupling. By selective radio frequency irradiation, NMR spectra can be fully or partially decoupled, eliminating or selectively reducing the coupling effect. Carbon-13 NMR spectra are often recorded with proton decoupling. History. In September 1951, H. S. Gutowsky, D. W. McCall, and C. P. 
Slichter reported experiments on HPF6, CH3OPF2, and POCl2F, where they explained the presence of multiple resonance lines with an interaction of the form formula_0. Independently, in October 1951, E. L. Hahn and D. E. Maxwell reported a "spin echo experiment" which indicates the existence of an interaction between two protons in dichloroacetaldehyde. In the echo experiment, two short, intense pulses of radiofrequency magnetic field are applied to the spin ensemble at the nuclear resonance condition and are separated by a time interval of "τ". The echo appears with a given amplitude at time 2"τ". For each setting of "τ", the maximum value of the echo signal is measured and plotted as a function of "τ". If the spin ensemble consists of a magnetic moment, a monotonic decay in the echo envelope is obtained. In the Hahn–Maxwell experiment, the decay was modulated by two frequencies: one frequency corresponded with the difference in chemical shift between the two non-equivalent spins and a second frequency, "J", that was smaller and independent of magnetic field strength ("J" = 0.7 Hz). Such an interaction came as a great surprise. The direct interaction between two magnetic dipoles depends on the relative position of two nuclei in such a way that when averaged over all possible orientations of the molecule it equals zero. In November 1951, N. F. Ramsey and E. M. Purcell proposed a mechanism that explained the observation and gave rise to an interaction of the form I1·I2. The mechanism is the magnetic interaction between each nucleus and the electron spin of its own atom together with the exchange coupling of the electron spins with each other. In the 1990s, direct evidence was found for the presence of "J"-couplings between magnetically active nuclei on both sides of the hydrogen bond. Initially, it was surprising to observe such couplings across hydrogen bonds since "J"-couplings are usually associated with the presence of purely covalent bonds. However, it is now well established that the H-bond "J"-couplings follow the same electron-mediated polarization mechanism as their covalent counterparts. The spin–spin coupling between nonbonded atoms in close proximity has sometimes been observed between fluorine, nitrogen, carbon, silicon and phosphorus atoms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A \\mathbf{\\mu}_1\\cdot\\mathbf{\\mu}_2" } ]
https://en.wikipedia.org/wiki?curid=7902939
7903
Diffie–Hellman key exchange
Method of exchanging cryptographic keys Diffie–Hellman (DH) key exchange is a mathematical method of securely exchanging cryptographic keys over a public channel and was one of the first public-key protocols as conceived by Ralph Merkle and named after Whitfield Diffie and Martin Hellman. DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography. Published in 1976 by Diffie and Hellman, this is the earliest publicly known work that proposed the idea of a private key and a corresponding public key. Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trusted courier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel. This key can then be used to encrypt subsequent communications using a symmetric-key cipher. Diffie–Hellman is used to secure a variety of Internet services. However, research published in October 2015 suggests that the parameters in use for many DH Internet applications at that time are not strong enough to prevent compromise by very well-funded attackers, such as the security services of some countries. The scheme was published by Whitfield Diffie and Martin Hellman in 1976, but in 1997 it was revealed that James H. Ellis, Clifford Cocks, and Malcolm J. Williamson of GCHQ, the British signals intelligence agency, had previously shown in 1969 how public-key cryptography could be achieved. Although Diffie–Hellman key exchange itself is a non-authenticated key-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provide forward secrecy in Transport Layer Security's ephemeral modes (referred to as EDH or DHE depending on the cipher suite). The method was followed shortly afterwards by RSA, an implementation of public-key cryptography using asymmetric algorithms. Expired US patent 4,200,770 from 1977 describes the now public-domain algorithm. It credits Hellman, Diffie, and Merkle as inventors. Name. In 2006, Hellman suggested the algorithm be called Diffie–Hellman–Merkle key exchange in recognition of Ralph Merkle's contribution to the invention of public-key cryptography (Hellman, 2006), writing: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The system...has since become known as Diffie–Hellman key exchange. While that system was first described in a paper by Diffie and me, it is a public key distribution system, a concept developed by Merkle, and hence should be called 'Diffie–Hellman–Merkle key exchange' if names are to be associated with it. I hope this small pulpit might help in that endeavor to recognize Merkle's equal contribution to the invention of public key cryptography. Description. General overview. Diffie–Hellman key exchange establishes a shared secret between two parties that can be used for secret communication for exchanging data over a public network. An analogy illustrates the concept of public key exchange by using colors instead of very large numbers: The process begins by having the two parties, Alice and Bob, publicly agree on an arbitrary starting color that does not need to be kept secret. In this example, the color is yellow. Each person also selects a secret color that they keep to themselves – in this case, red and cyan. 
The crucial part of the process is that Alice and Bob each mix their own secret color together with their mutually shared color, resulting in orange-tan and light-blue mixtures respectively, and then publicly exchange the two mixed colors. Finally, each of them mixes the color they received from the partner with their own private color. The result is a final color mixture (yellow-brown in this case) that is identical to their partner's final color mixture. If a third party listened to the exchange, they would only know the common color (yellow) and the first mixed colors (orange-tan and light-blue), but it would be very hard for them to find out the final secret color (yellow-brown). Bringing the analogy back to a real-life exchange using large numbers rather than colors, this determination is computationally expensive. It is impossible to compute in a practical amount of time even for modern supercomputers. Cryptographic explanation. The simplest and the original implementation, later formalized as Finite Field Diffie–Hellman in "RFC 7919", of the protocol uses the multiplicative group of integers modulo "p", where "p" is prime, and "g" is a primitive root modulo "p". These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to "p"–1. Here is an example of the protocol, with non-secret values in blue, and secret values in red. Both Alice and Bob have arrived at the same values because under mod p, formula_0 More specifically, formula_1 Only "a" and "b" are kept secret. All the other values – "p", "g", "ga" mod "p", and "gb" mod "p" – are sent in the clear. The strength of the scheme comes from the fact that "gab" mod "p" = "gba" mod "p" take extremely long times to compute by any known algorithm just from the knowledge of "p", "g", "ga" mod "p", and "gb" mod "p". Such a function that is easy to compute but hard to invert is called a one-way function. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel. Of course, much larger values of "a", "b", and "p" would be needed to make this example secure, since there are only 23 possible results of "n" mod 23. However, if "p" is a prime of at least 600 digits, then even the fastest modern computers using the fastest known algorithm cannot find "a" given only "g", "p" and "ga" mod "p". Such a problem is called the discrete logarithm problem. The computation of "ga" mod "p" is known as modular exponentiation and can be done efficiently even for large numbers. Note that "g" need not be large at all, and in practice is usually a small integer (like 2, 3, ...). Secrecy chart. The chart below depicts who knows what, again with non-secret values in blue, and secret values in red. Here Eve is an eavesdropper – she watches what is sent between Alice and Bob, but she does not alter the contents of their communications. Now s is the shared secret key and it is known to both Alice and Bob, but "not" to Eve. Note that it is not helpful for Eve to compute "AB", which equals "g"a + b mod p. Note: It should be difficult for Alice to solve for Bob's private key or for Bob to solve for Alice's private key. 
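As an illustration of the exchange just described, here is a minimal Python sketch of the finite-field protocol. The modulus p = 23 comes from the example discussed in the text; the generator g = 5 and the private exponents a = 6 and b = 15 are arbitrary toy values assumed here for demonstration only, and parameters this small provide no real security.

p, g = 23, 5          # p = 23 as in the text; g = 5 is an assumed small generator
a = 6                 # Alice's private exponent (illustrative value)
b = 15                # Bob's private exponent (illustrative value)

A = pow(g, a, p)      # Alice sends A = g^a mod p in the clear
B = pow(g, b, p)      # Bob sends B = g^b mod p in the clear

s_alice = pow(B, a, p)    # Alice computes (g^b)^a mod p
s_bob = pow(A, b, p)      # Bob computes (g^a)^b mod p
assert s_alice == s_bob   # both now hold the shared secret g^(ab) mod p
print(A, B, s_alice)

With realistically sized parameters, recovering the private exponents from the transmitted values is the discrete logarithm problem mentioned above, and the security of the exchange rests on that difficulty.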
If it is not difficult for Alice to solve for Bob's private key (or vice versa), then an eavesdropper, Eve, may simply substitute her own private / public key pair, plug Bob's public key into her private key, produce a fake shared secret key, and solve for Bob's private key (and use that to solve for the shared secret key). Eve may attempt to choose a public / private key pair that will make it easy for her to solve for Bob's private key. Generalization to finite cyclic groups. Here is a more general description of the protocol: Both Alice and Bob are now in possession of the group element "gab" = "gba", which can serve as the shared secret key. The group "G" satisfies the requisite condition for secure communication as long as there is no efficient algorithm for determining "gab" given "g", "ga", and "gb". For example, the elliptic curve Diffie–Hellman protocol is a variant that represents an element of G as a point on an elliptic curve instead of as an integer modulo n. Variants using hyperelliptic curves have also been proposed. The supersingular isogeny key exchange is a Diffie–Hellman variant that was designed to be secure against quantum computers, but it was broken in July 2022. Ephemeral and/or static keys. The used keys can either be ephemeral or static (long term) key, but could even be mixed, so called semi-static DH. These variants have different properties and hence different use cases. An overview over many variants and some also discussions can for example be found in NIST SP 800-56A. A basic list: It is possible to use ephemeral and static keys in one key agreement to provide more security as for example shown in NIST SP 800-56A, but it is also possible to combine those in a single DH key exchange, which is then called triple DH (3-DH). Triple Diffie–Hellman (3-DH). In 1997 a kind of triple DH was proposed by Simon Blake-Wilson, Don Johnson, Alfred Menezes in 1997, which was improved by C. Kudla and K. G. Paterson in 2005 and shown to be secure. The long term secret keys of Alice and Bob are denoted by "a" and "b" respectively, with public keys "A" and "B", as well as the ephemeral key pairs "x, X" and "y, Y". Then protocol is: The long term public keys need to be transferred somehow. That can be done beforehand in a separate, trusted channel, or the public keys can be encrypted using some partial key agreement to preserve anonymity. For more of such details as well as other improvements like side channel protection or explicit key confirmation, as well as early messages and additional password authentication, see e.g. US patent "Advanced modular handshake for key agreement and optional authentication". Extended Triple Diffie–Hellman (X3DH). X3DH was initially proposed as part of the Double Ratchet Algorithm used in the Signal Protocol. The protocol offers forward secrecy and cryptographic deniability. It operates on an elliptic curve. The protocol uses five public keys. Alice has an identity key IKA and an ephemeral key EKA. Bob has an identity key IKB, a signed prekey SPKB, and a one-time prekey OPKB. Bob first publishes his three keys to a server, which Alice downloads and verifies the signature on. Alice then initiates the exchange to Bob. The OPK is optional. Operation with more than two parties. Diffie–Hellman key agreement is not limited to negotiating a key shared by only two participants. 
Any number of users can take part in an agreement by performing iterations of the agreement protocol and exchanging intermediate data (which does not itself need to be kept secret). For example, Alice, Bob, and Carol could participate in a Diffie–Hellman agreement as follows, with all operations taken to be modulo "p": An eavesdropper has been able to see ga mod p, gb mod p, gc mod p, gab mod p, gac mod p, and gbc mod p, but cannot use any combination of these to efficiently reproduce gabc mod p. To extend this mechanism to larger groups, two basic principles must be followed: These principles leave open various options for choosing in which order participants contribute to keys. The simplest and most obvious solution is to arrange the "N" participants in a circle and have "N" keys rotate around the circle, until eventually every key has been contributed to by all "N" participants (ending with its owner) and each participant has contributed to "N" keys (ending with their own). However, this requires that every participant perform "N" modular exponentiations. By choosing a more desirable order, and relying on the fact that keys can be duplicated, it is possible to reduce the number of modular exponentiations performed by each participant to log2("N") + 1 using a divide-and-conquer-style approach, given here for eight participants: Once this operation has been completed all participants will possess the secret gabcdefgh, but each participant will have performed only four modular exponentiations, rather than the eight implied by a simple circular arrangement. Security and practical considerations. The protocol is considered secure against eavesdroppers if "G" and "g" are chosen properly. In particular, the order of the group G must be large, particularly if the same group is used for large amounts of traffic. The eavesdropper has to solve the Diffie–Hellman problem to obtain "g""ab". This is currently considered difficult for groups whose order is large enough. An efficient algorithm to solve the discrete logarithm problem would make it easy to compute "a" or "b" and solve the Diffie–Hellman problem, making this and many other public key cryptosystems insecure. Fields of small characteristic may be less secure. The order of "G" should have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain "a" or "b". For this reason, a Sophie Germain prime "q" is sometimes used to calculate "p" = 2"q" + 1, called a safe prime, since the order of "G" is then only divisible by 2 and "q". Sometimes "g" is chosen to generate the order "q" subgroup of "G", rather than "G", so that the Legendre symbol of "ga" never reveals the low order bit of "a". A protocol using such a choice is for example IKEv2. The generator "g" is often a small integer such as 2. Because of the random self-reducibility of the discrete logarithm problem a small "g" is equally secure as any other generator of the same group. If Alice and Bob use random number generators whose outputs are not completely random and can be predicted to some extent, then it is much easier to eavesdrop. In the original description, the Diffie–Hellman exchange by itself does not provide authentication of the communicating parties and can be vulnerable to a man-in-the-middle attack. 
Mallory (an active attacker executing the man-in-the-middle attack) may establish two distinct key exchanges, one with Alice and the other with Bob, effectively masquerading as Alice to Bob, and vice versa, allowing her to decrypt, then re-encrypt, the messages passed between them. Note that Mallory must be in the middle from the beginning and continuing to be so, actively decrypting and re-encrypting messages every time Alice and Bob communicate. If she arrives after the keys have been generated and the encrypted conversation between Alice and Bob has already begun, the attack cannot succeed. If she is ever absent, her previous presence is then revealed to Alice and Bob. They will know that all of their private conversations had been intercepted and decoded by someone in the channel. In most cases it will not help them get Mallory's private key, even if she used the same key for both exchanges. A method to authenticate the communicating parties to each other is generally needed to prevent this type of attack. Variants of Diffie–Hellman, such as STS protocol, may be used instead to avoid these types of attacks. Denial-of-service attack. A CVE released in 2021 ("CVE-2002-20001") disclosed a denial-of-service attack (DoS) against the protocol variants use ephemeral keys, called D(HE)at attack. The attack exploits that the Diffie–Hellman key exchange allows attackers to send arbitrary numbers that are actually not public keys, triggering expensive modular exponentiation calculations on the victim's side. Another CVE released in 2022 ("CVE-2022-40735") disclosed that the Diffie–Hellman key exchange implementations may use long private exponents that arguably make modular exponentiation calculations unnecessarily expensive. An attacker can exploit both vulnerabilities together. Practical attacks on Internet traffic. The number field sieve algorithm, which is generally the most effective in solving the discrete logarithm problem, consists of four computational steps. The first three steps only depend on the order of the group G, not on the specific number whose finite log is desired. It turns out that much Internet traffic uses one of a handful of groups that are of order 1024 bits or less. By precomputing the first three steps of the number field sieve for the most common groups, an attacker need only carry out the last step, which is much less computationally expensive than the first three steps, to obtain a specific logarithm. The Logjam attack used this vulnerability to compromise a variety of Internet services that allowed the use of groups whose order was a 512-bit prime number, so called export grade. The authors needed several thousand CPU cores for a week to precompute data for a single 512-bit prime. Once that was done, individual logarithms could be solved in about a minute using two 18-core Intel Xeon CPUs. As estimated by the authors behind the Logjam attack, the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would cost on the order of $100 million, well within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims in leaked NSA documents that NSA is able to break much of current cryptography. To avoid these vulnerabilities, the Logjam authors recommend use of elliptic curve cryptography, for which no similar attack is known. 
Failing that, they recommend that the order, "p", of the Diffie–Hellman group should be at least 2048 bits. They estimate that the pre-computation required for a 2048-bit prime is 109 times more difficult than for 1024-bit primes. Other uses. Encryption. Public key encryption schemes based on the Diffie–Hellman key exchange have been proposed. The first such scheme is the ElGamal encryption. A more modern variant is the Integrated Encryption Scheme. Forward secrecy. Protocols that achieve forward secrecy generate new key pairs for each session and discard them at the end of the session. The Diffie–Hellman key exchange is a frequent choice for such protocols, because of its fast key generation. Password-authenticated key agreement. When Alice and Bob share a password, they may use a password-authenticated key agreement (PK) form of Diffie–Hellman to prevent man-in-the-middle attacks. One simple scheme is to compare the hash of s concatenated with the password calculated independently on both ends of channel. A feature of these schemes is that an attacker can only test one specific password on each iteration with the other party, and so the system provides good security with relatively weak passwords. This approach is described in ITU-T Recommendation X.1035, which is used by the G.hn home networking standard. An example of such a protocol is the Secure Remote Password protocol. Public key. It is also possible to use Diffie–Hellman as part of a public key infrastructure, allowing Bob to encrypt a message so that only Alice will be able to decrypt it, with no prior communication between them other than Bob having trusted knowledge of Alice's public key. Alice's public key is formula_2. To send her a message, Bob chooses a random "b" and then sends Alice formula_3 (unencrypted) together with the message encrypted with symmetric key formula_4. Only Alice can determine the symmetric key and hence decrypt the message because only she has "a" (the private key). A pre-shared public key also prevents man-in-the-middle attacks. In practice, Diffie–Hellman is not used in this way, with RSA being the dominant public key algorithm. This is largely for historical and commercial reasons, namely that RSA Security created a certificate authority for key signing that became Verisign. Diffie–Hellman, as elaborated above, cannot directly be used to sign certificates. However, the ElGamal and DSA signature algorithms are mathematically related to it, as well as MQV, STS and the IKE component of the IPsec protocol suite for securing Internet Protocol communications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{\\color{Blue}A}^{\\color{Red}b}\\bmod {\\color{Blue}p} = {\\color{Blue}g}^{\\color{Red}ab}\\bmod {\\color{Blue}p} = {\\color{Blue}g}^{\\color{Red}ba}\\bmod {\\color{Blue}p} = {\\color{Blue}B}^{\\color{Red}a}\\bmod {\\color{Blue}p}" }, { "math_id": 1, "text": "({\\color{Blue}g}^{\\color{Red}a}\\bmod {\\color{Blue}p})^{\\color{Red}b}\\bmod {\\color{Blue}p} = ({\\color{Blue}g}^{\\color{Red}b}\\bmod {\\color{Blue}p})^{\\color{Red}a}\\bmod {\\color{Blue}p}" }, { "math_id": 2, "text": "(g^a \\bmod{p}, g, p)" }, { "math_id": 3, "text": "g^b \\bmod p" }, { "math_id": 4, "text": "(g^a)^b \\bmod{p}" } ]
https://en.wikipedia.org/wiki?curid=7903
7904551
Magnetic dipole–dipole interaction
Direct interaction between two magnetic dipoles Magnetic dipole–dipole interaction, also called dipolar coupling, refers to the direct interaction between two magnetic dipoles. Roughly speaking, the magnetic field of a dipole goes as the inverse cube of the distance, and the force of its magnetic field on another dipole goes as the first derivative of the magnetic field. It follows that the dipole-dipole interaction goes as the inverse fourth power of the distance. Suppose m1 and m2 are two magnetic dipole moments that are far enough apart that they can be treated as point dipoles in calculating their interaction energy. The potential energy "H" of the interaction is then given by: formula_0 where "μ"0 is the magnetic constant, r̂ is a unit vector parallel to the line joining the centers of the two dipoles, and |r| is the distance between the centers of m1 and m2. Last term with formula_1-function vanishes everywhere but the origin, and is necessary to ensure that formula_2 vanishes everywhere. Alternatively, suppose "γ"1 and "γ"2 are gyromagnetic ratios of two particles with spin quanta "S"1 and "S"2. (Each such quantum is some integral multiple of .) Then: formula_3 where formula_4 is a unit vector in the direction of the line joining the two spins, and |r| is the distance between them. Finally, the interaction energy can be expressed as the dot product of the moment of either dipole into the field from the other dipole: formula_5 where B2(r1) is the field that dipole 2 produces at dipole 1, and B1(r2) is the field that dipole 1 produces at dipole 2. It is not the sum of these terms. The force F arising from the interaction between m1 and m2 is given by: formula_6 The Fourier transform of "H" can be calculated from the fact that formula_7 and is given by formula_8 Dipolar coupling and NMR spectroscopy. The direct dipole-dipole coupling is very useful for molecular structural studies, since it depends only on known physical constants and the inverse cube of internuclear distance. Estimation of this coupling provides a direct spectroscopic route to the distance between nuclei and hence the geometrical form of the molecule, or additionally also on intermolecular distances in the solid state leading to NMR crystallography notably in amorphous materials. For example, in water, NMR spectra of hydrogen atoms of water molecules are narrow lines because dipole coupling is averaged due to chaotic molecular motion. In solids, where water molecules are fixed in their positions and do not participate in the diffusion mobility, the corresponding NMR spectra have the form of the Pake doublet. In solids with vacant positions, dipole coupling is averaged partially due to water diffusion which proceeds according to the symmetry of the solids and the probability distribution of molecules between the vacancies. Although internuclear magnetic dipole couplings contain a great deal of structural information, in isotropic solution, they average to zero as a result of diffusion. However, their effect on nuclear spin relaxation results in measurable nuclear Overhauser effects (NOEs). The residual dipolar coupling (RDC) occurs if the molecules in solution exhibit a partial alignment leading to an incomplete averaging of spatially anisotropic magnetic interactions i.e. dipolar couplings. RDC measurement provides information on the global folding of the protein-long distance structural information. It also provides information about "slow" dynamics in molecules. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
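As a rough numerical illustration of the point-dipole interaction energy given above, the following Python sketch evaluates the first term of the expression (the delta-function contact term vanishes for any nonzero separation). The proton moment, the 2 Å separation and the chosen orientations are illustrative values assumed here, not taken from the article.

import numpy as np

mu0 = 4e-7 * np.pi   # vacuum permeability in SI units

def dipole_dipole_energy(m1, m2, r_vec):
    # H = -mu0/(4 pi r^3) * [3 (m1.rhat)(m2.rhat) - m1.m2], valid for r != 0
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return -mu0 / (4 * np.pi * r**3) * (3 * np.dot(m1, r_hat) * np.dot(m2, r_hat) - np.dot(m1, m2))

mu_p = 1.41e-26                      # approximate proton magnetic moment, J/T
m = np.array([0.0, 0.0, mu_p])       # both moments pointing along z
print(dipole_dipole_energy(m, m, np.array([0.0, 0.0, 2e-10])))  # separation along z (head-to-tail)
print(dipole_dipole_energy(m, m, np.array([2e-10, 0.0, 0.0])))  # separation perpendicular to the moments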
[ { "math_id": 0, "text": " H = -\\frac{\\mu_0}{4\\pi|\\mathbf r|^3}\\left[ 3(\\mathbf m_1\\cdot\\hat\\mathbf r)(\\mathbf m_2\\cdot\\hat\\mathbf r) - \\mathbf m_1\\cdot\\mathbf m_2\\right]-\\mu_0 \\frac{2}{3} \\mathbf m_1\\cdot\\mathbf m_2 \\delta(\\mathbf r), " }, { "math_id": 1, "text": "\\delta" }, { "math_id": 2, "text": "\\nabla\\cdot\\mathbf B" }, { "math_id": 3, "text": " H = -\\frac{\\mu_0\\gamma_1\\gamma_2\\hbar^2}{4\\pi|\\mathbf r|^3 } \\left[3(\\mathbf S_1 \\cdot\\hat\\mathbf r)(\\mathbf S_2\\cdot\\hat\\mathbf r)-\\mathbf S_1\\cdot\\mathbf S_2\\right] ," }, { "math_id": 4, "text": "\\hat\\mathbf r" }, { "math_id": 5, "text": " H = -\\mathbf m_1\\cdot{\\mathbf B}_2({\\mathbf r}_1)=-\\mathbf m_2\\cdot{\\mathbf B}_1({\\mathbf r}_2), " }, { "math_id": 6, "text": "\n\\mathbf F = \\frac{3\\mu_0}{4\\pi|\\mathbf r|^4}\\{(\\hat\\mathbf r\\times\\mathbf m_1)\\times\\mathbf m_2 +(\\hat\\mathbf r\\times\\mathbf m_2)\\times\\mathbf m_1 - 2 \\hat\\mathbf r(\\mathbf m_1 \\cdot \\mathbf m_2) + 5\\hat\\mathbf r[(\\hat\\mathbf r\\times\\mathbf m_1)\\cdot(\\hat\\mathbf r\\times \\mathbf m_2)]\\}." }, { "math_id": 7, "text": " \\frac{3 (\\mathbf m_1\\cdot\\hat\\mathbf r)(\\mathbf m_2\\cdot\\hat\\mathbf r) - \\mathbf m_1\\cdot\\mathbf m_2}{4\\pi|\\mathbf r|^3} = (\\mathbf m_1\\cdot\\mathbf \\nabla)(\\mathbf m_2\\cdot\\mathbf \\nabla)\\frac{1}{4\\pi |\\mathbf r|} " }, { "math_id": 8, "text": " H = {\\mu_0}\\frac{ (\\mathbf m_1\\cdot\\mathbf q)(\\mathbf m_2\\cdot\\mathbf q) - |\\mathbf q|^2 \\mathbf m_1\\cdot\\mathbf m_2}{|\\mathbf q|^2}. " } ]
https://en.wikipedia.org/wiki?curid=7904551
7905090
Residual dipolar coupling
The residual dipolar coupling between two spins in a molecule occurs if the molecules in solution exhibit a partial alignment leading to an incomplete averaging of spatially anisotropic dipolar couplings. Partial molecular alignment leads to an incomplete averaging of anisotropic magnetic interactions such as the magnetic dipole-dipole interaction (also called dipolar coupling), the chemical shift anisotropy, or the electric quadrupole interaction. The resulting so-called "residual" anisotropic magnetic interactions are useful in biomolecular NMR spectroscopy. History and pioneering works. NMR spectroscopy in partially oriented media was reported by Alfred Saupe. After this initiation, several NMR spectra in various liquid crystalline phases were reported (see "e.g." ). A second technique for partial alignment that is not limited by a minimum anisotropy is strain-induced alignment in a gel (SAG). The technique was extensively used to study the properties of polymer gels by means of high-resolution deuterium NMR, but only lately gel alignment was used to induce RDCs in molecules dissolved into the gel. SAG allows the unrestricted scaling of alignment over a wide range and can be used for aqueous as well as organic solvents, depending on the polymer used. As a first example in organic solvents, RDC measurements in stretched polystyrene (PS) gels swollen in CDCl3 were reported as a promising alignment method. In 1995, NMR spectra were reported for cyanometmyoglobin, which has a very highly anisotropic paramagnetic susceptibility. When taken at very high field, these spectra may contain data that can usefully complement NOEs in determining a tertiary fold. In 1996 and 1997, the RDCs of a diamagnetic protein ubiquitin were reported. The results were in good agreement with the crystal structures. Physics. The secular dipolar coupling Hamiltonian of two spins, formula_0 and formula_1 is given by: formula_2 where The above equation can be rewritten in the following form: formula_11 where formula_12 In isotropic solution molecular tumbling reduces the average value of formula_13 to zero. We thus observe no dipolar coupling. If the solution is not isotropic then the average value of formula_13 may be different from zero, and one may observe "residual" couplings. RDC can be positive or negative, depending on the range of angles that are sampled. In addition to static distance and angular information, RDCs may contain information about a molecule's internal motion. To each atom in a molecule one can associate a motion tensor B, that may be computed from RDCs according to the following relation: formula_14 where A is the molecular alignment tensor. The rows of B contain the motion tensors for each atom. The motion tensors also have five degrees of freedom. From each motion tensor, 5 parameters of interest can be computed. The variables Si2, ηi, αi, βi and γi are used to denote these 5 parameters for atom i. Si2 is the magnitude of atom i's motion; ηi is a measure of the anisotropy of atom i's motion; αi and βi are related to the polar coordinates of the bond vector expressed in the initial arbitrary reference frame (i.e., the PDB frame). If the motion of the atom is anisotropic (i.e., ηi = 0), the final parameter, γi measures the principal orientation of the motion. Note that the RDC-derived motion parameters are local measurements. Measurement. Any RDC measurement in solution consists of two steps, aligning the molecules and NMR studies: Methods for aligning molecules. 
For diamagnetic molecules at moderate field strengths, molecules have little preference in orientation, the tumbling samples a nearly isotropic distribution, and average dipolar couplings goes to zero. Actually, most molecules have preferred orientations in the presence of a magnetic field, because most have anisotropic magnetic susceptibility tensors, Χ. The method is most suitable for systems with large values for magnetic susceptibility tensor. This includes: Protein-nucleic acid complex, nucleic acids, proteins with large number of aromatic residues, porphyrin containing proteins and metal binding proteins (metal may be replaced by lanthanides). For a fully oriented molecule, the dipolar coupling for an 1H-15N amide group would be over 20 kHz, and a pair of protons separated by 5 Å would have up to ~1 kHz coupling. However the degree of alignment achieved by applying magnetic field is so low that the largest 1H-15N or 1H-13C dipolar couplings are &lt;5 Hz. Therefore, many different alignment media have been designed: NMR experiments. There are numerous methods that have been designed to accurately measure coupling constant between nuclei. They have been classified into two groups: "frequency based methods" where separation of peaks centers (splitting) is measured in a frequency domain, and "intensity based methods" where the coupling is extracted from the resonance intensity instead of splitting. The two methods complement each other as each of them is subject to a different kind of systematic errors. Here are the prototypical examples of NMR experiments belonging to each of the two groups: Structural biology. RDC measurement provides information on the global folding of the protein or protein complex. As opposed to traditional NOE based NMR structure determinations, RDCs provide long distance structural information. It also provides information about the dynamics in molecules on time scales slower than nanoseconds. Studies of biomolecular structure. Most NMR studies of protein structure are based on analysis of the Nuclear Overhauser effect, NOE, between different protons in the protein. Because the NOE depends on the inverted sixth power of the distance between the nuclei, r−6, NOEs can be converted into distance restraints that can be used in molecular dynamics-type structure calculations. RDCs provide orientational restraints rather than distance restraints, and has several advantages over NOEs: Provided that a very complete set of RDCs is available, it has been demonstrated for several model systems that molecular structures can be calculated exclusively based on these anisotropic interactions, without recourse to NOE restraints. However, in practice, this is not achievable and RDC is used mainly to refine a structure determined by NOE data and J-coupling. One problem with using dipolar couplings in structure determination is that a dipolar coupling does not uniquely describe an internuclear vector orientation. Moreover, if a very small set of dipolar couplings are available, the refinement may lead to a structure worse than the original one. For a protein with N aminoacids, 2N RDC constraint for backbone is the minimum needed for an accurate refinement. The information content of an individual RDC measurement for a specific bond vector (such as a specific backbone NH bond in a protein molecule) can be understood by showing the target curve that traces out directions of perfect agreement between the observed RDC value and the value calculated from the model. 
Such a curve (see figure) has two symmetrical branches that lie on a sphere with its polar axis along the magnetic field direction. Their height from the sphere's equator depends on the magnitude of the RDC value and their shape depends on the "rhombicity" (asymmetry) of the molecular alignment tensor. If the molecular alignment were completely symmetrical around the magnetic field direction, the target curve would just consist of two circles at the same angle from the poles as the angle formula_8 that the specific bond vector makes to the applied magnetic field. In the case of elongated molecules such as RNA, where local torsional information and short distances are not enough to constrain the structures, RDC measurements can provide information about the orientations of specific chemical bonds throughout a nucleic acid with respect to a single coordinate frame. Particularly, RNA molecules are proton-poor and overlap of ribose resonances make it very difficult to use J-coupling and NOE data to determine the structure. Moreover, RDCs between nuclei with a distance larger than 5-6 Å can be detected. This distance is too much for generation of NOE signal. This is because RDC is proportional to r−3 whereas NOE is proportional to r−6. RDC measurements have also been proved to be extremely useful for a rapid determination of the relative orientations of units of known structures in proteins. In principle, the orientation of a structural subunit, which may be as small as a turn of a helix or as large as an entire domain, can be established from as few as five RDCs per subunit. Protein dynamics. As a RDC provides spatially and temporally averaged information about an angle between the external magnetic field and a bond vector in a molecule, it may provide rich geometrical information about dynamics on a slow timescale (&gt;10−9 s) in proteins. In particular, due to its radial dependence the RDC is in particular sensitive to large-amplitude angular processes An early example by Tolman "et al." found previously published structures of myoglobin insufficient to explain measured RDC data, and devised a simple model of slow dynamics to remedy this. However, for many classes of proteins, including intrinsically disordered proteins, analysis of RDCs becomes more involved, as defining an alignment frame is not trivial.&lt;ref name="doi10.1021/jacs.5b01289"&gt;&lt;/ref&gt; The problem can be addressed by circumventing the necessity of explicitly defining the alignment frame. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. Books: Review papers: Classic papers:
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "S," }, { "math_id": 2, "text": "H_\\mathrm{D}={\\frac{\\hbar^2\\gamma_I\\gamma_S}{4\\pi r^3_{IS}}}[1-3 \\cos^2\\theta](3I_zS_z-\\vec{I}\\cdot \\vec{S})" }, { "math_id": 3, "text": "\\hbar" }, { "math_id": 4, "text": "\\gamma_I" }, { "math_id": 5, "text": "\\gamma_S" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "r_{IS}" }, { "math_id": 8, "text": "\\theta" }, { "math_id": 9, "text": "\\vec{I}" }, { "math_id": 10, "text": "\\vec{S}" }, { "math_id": 11, "text": "H_\\mathrm{D}=D_{IS}(\\theta)[2I_zS_z-(I_xS_x+I_yS_y)]\\!" }, { "math_id": 12, "text": "D_{IS}(\\theta)=\\frac{\\hbar^2\\gamma_I\\gamma_S}{4\\pi r^3_{IS}}[1-3 \\cos^2\\theta].\\!" }, { "math_id": 13, "text": "D_{IS}" }, { "math_id": 14, "text": "D_{IS}=-\\frac{\\mu_0\\gamma_I\\gamma_S h}{(2\\pi r_{IS})^3} BA\\!" } ]
https://en.wikipedia.org/wiki?curid=7905090
7905850
Mathematical Institute, University of Oxford
Department of mathematics in University of Oxford The Mathematical Institute is the mathematics department at the University of Oxford in England. It is one of the nine departments of the university's Mathematical, Physical and Life Sciences Division. The institute includes both pure and applied mathematics (Statistics is a separate department) and is one of the largest mathematics departments in the United Kingdom with about 200 academic staff. It was ranked (in a joint submission with Statistics) as the top mathematics department in the UK in the 2021 Research Excellence Framework. Research at the Mathematical Institute covers all branches of mathematical sciences ranging from, for example, algebra, number theory, and geometry to the application of mathematics to a wide range of fields including industry, finance, networks, and the brain. It has more than 850 undergraduates and 550 doctoral or masters students. The institute inhabits a purpose-built building between Somerville College and Green Templeton College on Woodstock Road, next to the Faculty of Philosophy. History. The earliest forerunner of the Mathematical Institute was the School of Geometry and Arithmetic in the Bodleian Library's main quadrangle. This was completed in 1620. Notable mathematicians associated with the university include Christopher Wren who, before his notable career as an architect, made contributions in analytical mathematics, astronomy, and mathematical physics; Edmond Halley who published a series of profound papers on astronomy while Savilian Professor of Geometry in the early 18th century; John Wallis, whose innovations include using the symbol formula_0 for infinity; Charles Dodgson, who made significant contributions to geometry and logic while also achieving fame as a children's author under his pen name Lewis Carroll; and Henry John Stephen Smith, another Savilian Professor of Geometry, whose work in number theory and matrices attracted international recognition to Oxford mathematics. Dodgson jokingly proposed that the university should grant its mathematicians a narrow strip of level ground, reaching "ever so far", so that they could test whether or not parallel lines ever meet. The building of an institute was originally proposed by G. H. Hardy in 1930. Lectures were normally given in the individual colleges of the university and Hardy proposed a central space where mathematics lectures could be held and where mathematicians could regularly meet. This proposal was too ambitious for the university, who allocated just six rooms for mathematicians in an extension to the Radcliffe Science Library built in 1934. A dedicated Mathematical Institute was built in 1966 and was located at the northern end of St Giles' near the junction with Banbury Road in central north Oxford. The needs of the institute soon outgrew its building, so it also occupied a neighbouring house on St Giles and two annexes: Dartington House on Little Clarendon Street, and the Gibson Building on the site of the Radcliffe Infirmary. In 2008 the institute was given US$25 million — the largest grant ever for a mathematics department in the UK — to establish the Oxford Centre for Collaborative Applied Mathematics (OCCAM). Since 2013 the institute has been housed in the purpose-built Andrew Wiles Building in the Radcliffe Observatory Quarter in North Oxford, near the original Radcliffe Infirmary. Wiles, the university's Regius Professor of Mathematics, is known for proving Fermat's Last Theorem. 
The design and construction of the building was informed by the academic staff to incorporate mathematical ideas; Sir Roger Penrose designed a non-periodic pattern (a Penrose tiling) to decorate the ground at the entrance, and two structures where natural light enters the building have "crystals" illustrating concepts from graph theory and the vibration of a two-dimensional surface. Research. The institute is home to a number of research groups and funded research centres. Groups in mathematical logic, algebra, number theory, numerical analysis, geometry, topology, and mathematical physics date back to at least the 1960s. More recent groups include a combinatorics group, the Wolfson Centre for Mathematical Biology (WCMB), the Oxford Centre for Industrial Applied Mathematics (OCIAM) which includes a centre studying financial derivatives, and the Oxford Centre for Nonlinear Partial Differential Equations (OxPDE). In the 21st century, the institute's research topics have come to include quantum computing, tumour growth, and string theory, among other physical, biological, and economic problems. In 2012 the office of the President of the Clay Mathematics Institute (CMI) moved to the Mathematical Institute as Nick Woodhouse became CMI's president. The CMI offers the Millennium Prizes of one million dollars for solving famous mathematical problems that were unsolved in 2000. The current CMI president, Martin Bridson, is also based at the institute. Like other university departments in the UK, the institute has been rated for the quality and impact of its research. In the 2008 Research Assessment Exercise, Oxford was joint first (with the University of Cambridge) for applied mathematics and third for pure mathematics. In the 2014 Research Excellence Framework, the institute submitted jointly with the Department of Statistics, getting the highest placement for mathematical sciences in the UK. In the 2021 Research Excellence Framework, Oxford maintained its top place. Teaching. The institute has more than 850 undergraduate students on four degree courses: Mathematics, Mathematics and Statistics, Mathematics and Philosophy, and Mathematics and Computer Science. Students decide during their degree whether to earn a Bachelor of Arts (BA) after three years or to continue to a fourth year to earn a Master of Mathematics (MMath). In 2017, the time allowed for exams was increased from 90 to 105 minutes for each paper for all students, with one motivation being to improve women's scores and close the gender performance gap. The 550 postgraduate students take one of five courses to earn a Master of Science (MSc) or conduct research to earn a DPhil (the Oxford name for a Doctor of Philosophy). "The Guardian"'s 2021 ranking of "Best UK universities for mathematics" placed Oxford at the top. Outreach. The institute promotes understanding of mathematics outside the university by running public lectures, by hosting events for school students, and by supporting staff members who promote mathematics to the general public. Of those staff members, the best known are Sir Roger Penrose, David Acheson, and Marcus du Sautoy. Penrose, a former Rouse Ball Professor of Mathematics who has an emeritus post at the institute, has written a series of popular books on mathematics and physics. Acheson has reached a wide audience through publishing, radio, and YouTube. 
Du Sautoy is the current Simonyi Professor for the Public Understanding of Science and is known as a television and radio broadcaster as well as an author of popular books on mathematics. Alumni. Sir Michael Atiyah was a member between 1961 and 1990. Mary Cartwright, who earned her first degree and doctorate at Oxford, was the first female mathematician to be awarded Fellowship of the Royal Society and the first female president of the London Mathematical Society. In popular culture. In 2015, the final episode, "What Lies Tangled", of the British television detective drama "" was set and filmed in the Mathematical Institute. Sir Andrew Wiles played a professor who appears in the background of one shot. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\infty" } ]
https://en.wikipedia.org/wiki?curid=7905850
7907151
Rellich–Kondrachov theorem
In mathematics, the Rellich–Kondrachov theorem is a compact embedding theorem concerning Sobolev spaces. It is named after the Austrian-German mathematician Franz Rellich and the Russian mathematician Vladimir Iosifovich Kondrashov. Rellich proved the "L"2 theorem and Kondrashov the "L""p" theorem. Statement of the theorem. Let Ω ⊆ R"n" be an open, bounded Lipschitz domain, and let 1 ≤ "p" &lt; "n". Set formula_0 Then the Sobolev space "W"1,"p"(Ω; R) is continuously embedded in the "L""p" space "L""p"∗(Ω; R) and is compactly embedded in "L""q"(Ω; R) for every 1 ≤ "q" &lt; "p"∗. In symbols, formula_1 and formula_2 Kondrachov embedding theorem. On a compact manifold with "C"1 boundary, the Kondrachov embedding theorem states that if "k" &gt; "ℓ" and "k" − "n"/"p" &gt; "ℓ" − "n"/"q" then the Sobolev embedding formula_3 is completely continuous (compact). Consequences. Since an embedding is compact if and only if the inclusion (identity) operator is a compact operator, the Rellich–Kondrachov theorem implies that any uniformly bounded sequence in "W"1,"p"(Ω; R) has a subsequence that converges in "L""q"(Ω; R). Stated in this form, in the past the result was sometimes referred to as the Rellich–Kondrachov selection theorem, since one "selects" a convergent subsequence. (However, today the customary name is "compactness theorem", whereas "selection theorem" has a precise and quite different meaning, referring to set-valued functions). The Rellich–Kondrachov theorem may be used to prove the Poincaré inequality, which states that for "u" ∈ "W"1,"p"(Ω; R) (where Ω satisfies the same hypotheses as above), formula_4 for some constant "C" depending only on "p" and the geometry of the domain Ω, where formula_5 denotes the mean value of "u" over Ω.
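For instance, taking "n" = 3 and "p" = 2 in the statement above gives "p"∗ = "np"/("n" − "p") = 6, so for a bounded Lipschitz domain Ω ⊆ R3 the space "W"1,2(Ω; R) is continuously embedded in "L"6(Ω; R) and compactly embedded in "L""q"(Ω; R) for every 1 ≤ "q" &lt; 6.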
[ { "math_id": 0, "text": "p^{*} := \\frac{n p}{n - p}." }, { "math_id": 1, "text": "W^{1, p} (\\Omega) \\hookrightarrow L^{p^{*}} (\\Omega)" }, { "math_id": 2, "text": "W^{1, p} (\\Omega) \\subset \\subset L^{q} (\\Omega) \\text{ for } 1 \\leq q < p^{*}." }, { "math_id": 3, "text": "W^{k,p}(M)\\subset W^{\\ell,q}(M)" }, { "math_id": 4, "text": "\\| u - u_\\Omega \\|_{L^p (\\Omega)} \\leq C \\| \\nabla u \\|_{L^p (\\Omega)}" }, { "math_id": 5, "text": "u_\\Omega := \\frac{1}{\\operatorname{meas} (\\Omega)} \\int_\\Omega u(x) \\, \\mathrm{d} x " } ]
https://en.wikipedia.org/wiki?curid=7907151
790823
Factorial moment
Expectation or average of the falling factorial of a random variable In probability theory, the factorial moment is a mathematical quantity defined as the expectation or average of the falling factorial of a random variable. Factorial moments are useful for studying non-negative integer-valued random variables, and arise in the use of probability-generating functions to derive the moments of discrete random variables. Factorial moments serve as analytic tools in the mathematical field of combinatorics, which is the study of discrete mathematical structures. Definition. For a natural number "r", the "r"-th factorial moment of a probability distribution on the real or complex numbers, or, in other words, a random variable "X" with that probability distribution, is formula_0 where E denotes the expectation operator and formula_1 is the falling factorial, which gives rise to the name, although the notation ("x")"r" varies depending on the mathematical field. Of course, the definition requires that the expectation is meaningful, which is the case if ("X")"r" ≥ 0 or if E[|("X")"r"|] is finite. If X is the number of successes in n trials, and pr is the probability that any r of the n trials are all successes, then formula_2 Examples. Poisson distribution. If a random variable "X" has a Poisson distribution with parameter "λ", then the factorial moments of "X" are formula_3 which are simple in form compared to its moments, which involve Stirling numbers of the second kind. Binomial distribution. If a random variable "X" has a binomial distribution with success probability "p" ∈ [0,1] and number of trials "n", then the factorial moments of "X" are formula_4 where by convention, formula_5 and formula_6 are understood to be zero if "r" &gt; "n". Hypergeometric distribution. If a random variable "X" has a hypergeometric distribution with population size "N", number of success states "K" ∈ {0,...,"N"} in the population, and draws "n" ∈ {0,...,"N"}, then the factorial moments of "X" are formula_7 Beta-binomial distribution. If a random variable "X" has a beta-binomial distribution with parameters "α" &gt; 0, "β" &gt; 0, and number of trials "n", then the factorial moments of "X" are formula_8 Calculation of moments. The "r"th raw moment of a random variable "X" can be expressed in terms of its factorial moments by the formula formula_9 where the curly braces denote Stirling numbers of the second kind. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
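As a quick numerical check of the Poisson example above, the Python sketch below computes a factorial moment directly from the probability mass function and compares it with λ^r; the choices λ = 3, r = 4 and the truncation of the sum at n = 200 are arbitrary assumptions made for illustration.

import math

lam, r = 3.0, 4   # illustrative parameter and order

def falling_factorial(x, r):
    out = 1
    for k in range(r):
        out *= (x - k)
    return out

# E[(X)_r] computed by summing over the Poisson pmf (truncated far into the tail)
factorial_moment = sum(falling_factorial(n, r) * math.exp(-lam) * lam**n / math.factorial(n)
                       for n in range(200))
print(factorial_moment, lam**r)   # both are approximately 81.0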
[ { "math_id": 0, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] = \\operatorname{E}\\bigl[ X(X-1)(X-2)\\cdots(X-r+1)\\bigr]," }, { "math_id": 1, "text": "(x)_r := \\underbrace{x(x-1)(x-2)\\cdots(x-r+1)}_{r \\text{ factors}} \\equiv \\frac{x!}{(x-r)!}" }, { "math_id": 2, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] = n(n-1)(n-2)\\cdots(n-r+1)p_r" }, { "math_id": 3, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] =\\lambda^r," }, { "math_id": 4, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] = \\binom{n}{r} p^r r! = (n)_r p^r," }, { "math_id": 5, "text": "\\textstyle{\\binom{n}{r}} " }, { "math_id": 6, "text": "(n)_r" }, { "math_id": 7, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] = \\frac{\\binom{K}{r}\\binom{n}{r}r!}{\\binom{N}{r}} = \\frac{(K)_r (n)_r}{(N)_r}. " }, { "math_id": 8, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] = \\binom{n}{r}\\frac{B(\\alpha+r,\\beta)r!}{B(\\alpha,\\beta)} =\n(n)_r \\frac{B(\\alpha+r,\\beta)}{B(\\alpha,\\beta)} " }, { "math_id": 9, "text": "\\operatorname{E}[X^r] = \\sum_{j=0}^r \\left\\{ {r \\atop j} \\right\\} \\operatorname{E}[(X)_j], " } ]
https://en.wikipedia.org/wiki?curid=790823
7908748
First-order reduction
In computer science, a first-order reduction is a very strong type of reduction between two computational problems in computational complexity theory. A first-order reduction is a reduction where each component is restricted to be in the class FO of problems calculable in first-order logic. Since we have formula_0, the first-order reductions are stronger reductions than the logspace reductions. Many important complexity classes are closed under first-order reductions, and many of the traditional complete problems are first-order complete as well (Immerman 1999 p. 49-50). For example, ST-connectivity is FO-complete for NL, and NL is closed under FO reductions (Immerman 1999, p. 51) (as are P, NP, and most other "well-behaved" classes).
[ { "math_id": 0, "text": "\\mbox{FO} \\subsetneq \\mbox{L}" } ]
https://en.wikipedia.org/wiki?curid=7908748
79099
Maple (software)
Mathematical computing environment Maple is a symbolic and numeric computing environment as well as a multi-paradigm programming language. It covers several areas of technical computing, such as symbolic mathematics, numerical analysis, data processing, visualization, and others. A toolbox, MapleSim, adds functionality for multidomain physical modeling and code generation. Maple's capacity for symbolic computing include those of a general-purpose computer algebra system. For instance, it can manipulate mathematical expressions and find symbolic solutions to certain problems, such as those arising from ordinary and partial differential equations. Maple is developed commercially by the Canadian software company Maplesoft. The name 'Maple' is a reference to the software's Canadian heritage. Overview. Core functionality. Users can enter mathematics in traditional mathematical notation. Custom user interfaces can also be created. There is support for numeric computations, to arbitrary precision, as well as symbolic computation and visualization. Examples of symbolic computations are given below. Maple incorporates a dynamically typed imperative-style programming language (resembling Pascal), which permits variables of lexical scope. There are also interfaces to other languages (C, C#, Fortran, Java, MATLAB, and Visual Basic), as well as to Microsoft Excel. Maple supports MathML 2.0, which is a W3C format for representing and interpreting mathematical expressions, including their display in web pages. There is also functionality for converting expressions from traditional mathematical notation to markup suitable for the typesetting system LaTeX. Architecture. Maple is based on a small kernel, written in C, which provides the Maple language. Most functionality is provided by libraries, which come from a variety of sources. Most of the libraries are written in the Maple language; these have viewable source code. Many numerical computations are performed by the NAG Numerical Libraries, ATLAS libraries, or GMP libraries. Different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java. History. The first concept of Maple arose from a meeting in late 1980 at the University of Waterloo. Researchers at the university wished to purchase a computer powerful enough to run the Lisp-based computer algebra system Macsyma. Instead, they opted to develop their own computer algebra system, named Maple, that would run on lower cost computers. Aiming for portability, they began writing Maple in programming languages from the BCPL family (initially using a subset of B and C, and later on only C). A first limited version appeared after three weeks, and fuller versions entered mainstream use beginning in 1982. By the end of 1983, over 50 universities had copies of Maple installed on their machines. In 1984, the research group arranged with Watcom Products Inc to license and distribute the first commercially available version, Maple 3.3. In 1988 Waterloo Maple Inc. (Maplesoft) was founded. The company's original goal was to manage the distribution of the software, but eventually it grew to have its own R&amp;D department, where most of Maple's development takes place today (the remainder being done at various university laboratories). In 1989, the first graphical user interface for Maple was developed and included with version 4.3 for the Macintosh. 
X11 and Windows versions of the new interface followed in 1990 with Maple V. In 1992, Maple V Release 2 introduced the Maple "worksheet" that combined text, graphics, and input and typeset output. In 1994 a special issue of a newsletter created by Maple developers called "MapleTech" was published. In 1999, with the release of Maple 6, Maple included some of the NAG Numerical Libraries. In 2003, the current "standard" interface was introduced with Maple 9. This interface is primarily written in Java (although portions, such as the rules for typesetting mathematical formulae, are written in the Maple language). The Java interface was criticized for being slow; improvements have been made in later versions, although the Maple 11 documentation recommends the previous ("classic") interface for users with less than 500 MB of physical memory. Between 1995 and 2005 Maple lost significant market share to competitors due to a weaker user interface. With Maple 10 in 2005, Maple introduced a new "document mode" interface, which has since been further developed across several releases. In September 2009 Maple and Maplesoft were acquired by the Japanese software retailer Cybernet Systems. Version history. &lt;templatestyles src="Div col/styles.css"/&gt; Features. Features of Maple include: Examples of Maple code. The following code, which computes the factorial of a nonnegative integer, is an example of an imperative programming construct within Maple: myfac := proc(n::nonnegint) local out, i; out := 1; for i from 2 to n do out := out * i end do; out end proc; Simple functions can also be defined using the "maps to" arrow notation: myfac := n -&gt; product(i, i = 1..n); Integration. Find formula_0. int(cos(x/a), x); Output: formula_1 Determinant. Compute the determinant of a matrix. M := Matrix(1,2,3], [a,b,c], [x,y,z); # example Matrix formula_2 LinearAlgebra:-Determinant(M); formula_3 Series expansion. series(tanh(x), x = 0, 15) formula_4 formula_5 Solve equations numerically. The following code numerically calculates the roots of a high-order polynomial: f := x^53-88*x^5-3*x-5 = 0 fsolve(f) -1.097486315, -.5226535640, 1.099074017 The same command can also solve systems of equations: f := (cos(x+y))^2 + exp(x)*y+cot(x-y)+cosh(z+x) = 0: g := x^5 - 8*y = 2: h := x+3*y-77*z=55; fsolve( {f,g,h} ); Plotting of function of single variable. Plot formula_6 with formula_7 ranging from -10 to 10: plot(x*sin(x), x = -10..10); Plotting of function of two variables. Plot formula_8 with formula_7 and formula_9 ranging from -1 to 1: plot3d(x^2+y^2, x = -1..1, y = -1..1); formula_10 Animation of functions. plots:-animate(subs(k = 0.5, f), x=-30..30, t=-10..10, numpoints=200, frames=50, color=red, thickness=3); plots:-animate3d(cos(t*x)*sin(3*t*y), x=-Pi..Pi, y=-Pi..Pi, t=1..2); M := Matrix(400,400,200], [100,100,-400], [1,1,1, datatype=float[8]): plot3d(1, x=0..2*Pi, y=0..Pi, axes=none, coords=spherical, viewpoint=[path=M]); Laplace transform. f := (1+A*t+B*t^2)*exp(c*t); formula_11 inttrans:-laplace(f, t, s); formula_12 inttrans:-invlaplace(1/(s-a), s, x); formula_13 Fourier transform. inttrans:-fourier(sin(x), x, w) formula_14 Integral equations. Find functions formula_15 that satisfy the integral equation formula_16. eqn:= f(x)-3*Int((x*y+x^2*y^2)*f(y), y=-1..1) = h(x): intsolve(eqn,f(x)); formula_17 Use of the Maple engine. The Maple engine is used within several other products from Maplesoft: Listed below are third-party commercial products that no longer use the Maple engine: See also. 
&lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\int\\cos\\left(\\frac{x}{a}\\right)dx" }, { "math_id": 1, "text": "a \\sin\\left(\\frac{x}{a}\\right)" }, { "math_id": 2, "text": "\n \\begin{bmatrix}\n 1 & 2 & 3 \\\\\n a & b & c \\\\\n x & y & z\n \\end{bmatrix}\n" }, { "math_id": 3, "text": "bz-cy+3ay-2az+2xc-3xb" }, { "math_id": 4, "text": "x-\\frac{1}{3}\\,x^3+\\frac{2}{15}\\,x^5-\\frac{17}{315}\\,x^7" }, { "math_id": 5, "text": "{}+\\frac{62}{2835}\\,x^9-\\frac{1382}{155925}\\,x^{11}+\\frac{21844}{6081075}\\,x^{13}+\\mathcal{O}\\left(x^{15}\\right)" }, { "math_id": 6, "text": "x \\sin(x)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "x^2+y^2" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "f := \\frac{2k^2}{\\cosh^2\\left(x k - 4 k^3 t\\right)}" }, { "math_id": 11, "text": " \\left(1 + A \\, t + B \\, t^2\\right) e^{c t}" }, { "math_id": 12, "text": "\\frac{1}{s-c}+\\frac{A}{(s-c)^2}+\\frac{2B}{(s-c)^3}" }, { "math_id": 13, "text": "e^{ax}" }, { "math_id": 14, "text": "\\mathrm{I}\\pi\\,(\\mathrm{Dirac}(w+1)-\\mathrm{Dirac}(w-1))" }, { "math_id": 15, "text": "f" }, { "math_id": 16, "text": "f(x)-3\\int_{-1}^1(xy+x^2y^2)f(y)dy = h(x)" }, { "math_id": 17, "text": "f \\left( x \\right) =\\int _{-1}^{1}\\! \\left( -15\\,{x}^{2}{y}^{2}-3\\,xy \\right) h \\left( y \\right) {dy}+h \\left( x \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=79099
7913943
Killing tensor
In mathematics, a Killing tensor or Killing tensor field is a generalization of a Killing vector, for symmetric tensor fields instead of just vector fields. It is a concept in Riemannian and pseudo-Riemannian geometry, and is mainly used in the theory of general relativity. Killing tensors satisfy an equation similar to Killing's equation for Killing vectors. Like Killing vectors, every Killing tensor corresponds to a quantity which is conserved along geodesics. However, unlike Killing vectors, which are associated with symmetries (isometries) of a manifold, Killing tensors generally lack such a direct geometric interpretation. Killing tensors are named after Wilhelm Killing. Definition and properties. In the following definition, parentheses around tensor indices are notation for symmetrization. For example: formula_0 Definition. A Killing tensor is a tensor field formula_1 (of some order "m") on a (pseudo)-Riemannian manifold which is symmetric (that is, formula_2) and satisfies: formula_3 This equation is a generalization of Killing's equation for Killing vectors: formula_4 Properties. Killing vectors are a special case of Killing tensors. Another simple example of a Killing tensor is the metric tensor itself. A linear combination of Killing tensors is a Killing tensor. A symmetric product of Killing tensors is also a Killing tensor; that is, if formula_5 and formula_6 are Killing tensors, then formula_7 is a Killing tensor too. Every Killing tensor corresponds to a constant of motion on geodesics. More specifically, for every geodesic with tangent vector formula_8, the quantity formula_9 is constant along the geodesic. Examples. Since Killing tensors are a generalization of Killing vectors, the examples at are also examples of Killing tensors. The following examples focus on Killing tensors not simply obtained from Killing vectors. FLRW metric. The Friedmann–Lemaître–Robertson–Walker metric, widely used in cosmology, has spacelike Killing vectors corresponding to its spatial symmetries, in particular rotations around arbitrary axes and in the flat case for formula_10 translations along formula_11, formula_12, and formula_13. It also has a Killing tensor formula_14 where "a" is the scale factor, formula_15 is the "t"-coordinate basis vector, and the −+++ signature convention is used. Kerr metric. The Kerr metric, describing a rotating black hole, has two independent Killing vectors. One Killing vector corresponds to the time translation symmetry of the metric, and another corresponds to the axial symmetry about the axis of rotation. In addition, as shown by Walker and Penrose (1970), there is a nontrivial Killing tensor of order 2. The constant of motion corresponding to this Killing tensor is called the Carter constant. Killing–Yano tensor. An antisymmetric tensor of order "p", formula_16, is a Killing–Yano tensor if it satisfies the equation formula_17. While also a generalization of the Killing vector, it differs from the usual Killing tensor in that the covariant derivative is only contracted with one tensor index. Conformal Killing tensor. Conformal Killing tensors are a generalization of Killing tensors and conformal Killing vectors. A conformal Killing tensor is a tensor field formula_1 (of some order "m") which is symmetric and satisfies formula_18 for some symmetric tensor field formula_19. This generalizes the equation for conformal Killing vectors, which states that formula_20 for some scalar field formula_21. 
Every conformal Killing tensor corresponds to a constant of motion along null geodesics. More specifically, for every null geodesic with tangent vector formula_22, the quantity formula_23 is constant along the geodesic. The property of being a conformal Killing tensor is preserved under conformal transformations in the following sense. If formula_24 is a conformal Killing tensor with respect to a metric formula_25, then formula_26 is a conformal Killing tensor with respect to the conformally equivalent metric formula_27, for all positive-valued formula_28. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
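A simple worked illustration of the definitions above is provided by the metric tensor itself. Because the Levi-Civita connection is metric-compatible, the covariant derivative of the metric vanishes identically, so its symmetrized covariant derivative vanishes as well and the metric is a Killing tensor of order 2; it is also, trivially, a conformal Killing tensor with the tensor field formula_19 taken to be zero. The corresponding conserved quantity along a geodesic is the squared norm of the tangent vector formula_8, which is indeed constant along affinely parametrized geodesics.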
[ { "math_id": 0, "text": "T_{(\\alpha\\beta\\gamma)} = \\frac{1}{6}(T_{\\alpha\\beta\\gamma} + T_{\\alpha\\gamma\\beta} + T_{\\beta\\alpha\\gamma} + T_{\\beta\\gamma\\alpha} + T_{\\gamma\\alpha\\beta} + T_{\\gamma\\beta\\alpha})" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "K_{\\beta_1 \\cdots \\beta_m} = K_{(\\beta_1 \\cdots \\beta_m)}" }, { "math_id": 3, "text": "\\nabla_{(\\alpha}K_{\\beta_1 \\cdots \\beta_m)} = 0" }, { "math_id": 4, "text": "\\nabla_{(\\alpha}K_{\\beta)} = \\frac{1}{2} (\\nabla_{\\alpha}K_{\\beta} + \\nabla_{\\beta}K_{\\alpha}) = 0" }, { "math_id": 5, "text": "S_{\\alpha_1 \\cdots \\alpha_l}" }, { "math_id": 6, "text": "T_{\\beta_1 \\cdots \\beta_m}" }, { "math_id": 7, "text": "S_{(\\alpha_1 \\cdots \\alpha_l}T_{\\beta_1 \\cdots \\beta_m)}" }, { "math_id": 8, "text": "u^\\alpha" }, { "math_id": 9, "text": "K_{\\beta_1 \\cdots \\beta_m} u^{\\beta_1} \\cdots u^{\\beta_m}" }, { "math_id": 10, "text": "k=1" }, { "math_id": 11, "text": "x" }, { "math_id": 12, "text": "y" }, { "math_id": 13, "text": "z" }, { "math_id": 14, "text": "K_{\\mu\\nu} = a^2 (g_{\\mu\\nu} + U_{\\mu}U_{\\nu})" }, { "math_id": 15, "text": "U^{\\mu} = (1,0,0,0)" }, { "math_id": 16, "text": "f_{a_1 a_2 ... a_p}" }, { "math_id": 17, "text": "\\nabla_b f_{c a_2 ... a_p} + \\nabla_c f_{b a_2 ... a_p} = 0\\," }, { "math_id": 18, "text": "\\nabla_{(\\alpha}K_{\\beta_1 \\cdots \\beta_m)} = k_{(\\beta_1 \\cdots \\beta_{m-1}} g_{\\beta_m \\alpha)}" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": "\\nabla_\\alpha K_\\beta + \\nabla_\\beta K_\\alpha = \\lambda g_{\\alpha \\beta}" }, { "math_id": 21, "text": "\\lambda" }, { "math_id": 22, "text": "v^\\alpha" }, { "math_id": 23, "text": "K_{\\beta_1 \\cdots \\beta_m} v^{\\beta_1} \\cdots v^{\\beta_m}" }, { "math_id": 24, "text": "K_{\\beta_1 \\cdots \\beta_m}" }, { "math_id": 25, "text": "g_{\\alpha \\beta}" }, { "math_id": 26, "text": "\\tilde{K}_{\\beta_1 \\cdots \\beta_m} = u^m K_{\\beta_1 \\cdots \\beta_m}" }, { "math_id": 27, "text": "\\tilde{g}_{\\alpha \\beta} = u g_{\\alpha \\beta}" }, { "math_id": 28, "text": "u" } ]
https://en.wikipedia.org/wiki?curid=7913943
7914038
Generalizability theory
Generalizability theory, or G theory, is a statistical framework for conceptualizing, investigating, and designing reliable observations. It is used to determine the reliability (i.e., reproducibility) of measurements under specific conditions. It is particularly useful for assessing the reliability of performance assessments. It was originally introduced in Cronbach, L.J., Rajaratnam, N., &amp; Gleser, G.C. (1963). Overview. In G theory, sources of variation are referred to as "facets". Facets are similar to the "factors" used in analysis of variance, and may include persons, raters, items/forms, time, and settings among other possibilities. These facets are potential sources of error and the purpose of generalizability theory is to quantify the amount of error caused by each facet and interaction of facets. The usefulness of data gained from a G study is crucially dependent on the design of the study. Therefore, the researcher must carefully consider the ways in which he/she hopes to generalize any specific results. Is it important to generalize from one setting to a larger number of settings? From one rater to a larger number of raters? From one set of items to a larger set of items? The answers to these questions will vary from one researcher to the next, and will drive the design of a G study in different ways. In addition to deciding which facets the researcher generally wishes to examine, it is necessary to determine which facet will serve as the object of measurement (i.e. the systematic source of variance) for the purpose of analysis. The remaining facets of interest are then considered to be sources of measurement error. In most cases, the object of measurement will be the person to whom a number/score is assigned. In other cases it may be a group of performers such as a team or classroom. Ideally, nearly all of the measured variance will be attributed to the object of measurement (e.g. individual differences), with only a negligible amount of variance attributed to the remaining facets (e.g., rater, time, setting). The results from a G study can also be used to inform a decision, or D, study. In a D study, we can ask the hypothetical question of "what would happen if different aspects of this study were altered?" For example, a soft drink company might be interested in assessing the quality of a new product through use of a consumer rating scale. By employing a D study, it would be possible to estimate how the consistency of quality ratings would change if consumers were asked 10 questions instead of 2, or if 1,000 consumers rated the soft drink instead of 100. By employing simulated D studies, it is therefore possible to examine how the generalizability coefficients (similar to reliability coefficients in Classical test theory) would change under different circumstances, and consequently determine the ideal conditions under which our measurements would be the most reliable. Comparison with classical test theory. The focus of classical test theory (CTT) is on determining the error of measurement. Perhaps the most famous model of CTT is the equation formula_0, where X is the observed score, T is the true score, and E is the error involved in measurement. Although "E" could represent many different types of error, such as rater or instrument error, CTT only allows us to estimate one type of error at a time. Essentially it throws all sources of error into one error term.
This may be suitable in the context of highly controlled laboratory conditions, but variance is a part of everyday life. In field research, for example, it is unrealistic to expect that the conditions of measurement will remain constant. Generalizability theory acknowledges and allows for variability in assessment conditions that may affect measurements. The advantage of G theory lies in the fact that researchers can estimate what proportion of the total variance in the results is due to the individual factors that often vary in assessment, such as setting, time, items, and raters. Another important difference between CTT and G theory is that the latter approach takes into account how the consistency of outcomes may change if a measure is used to make absolute versus relative decisions. An example of an absolute, or criterion-referenced, decision would be when an individual's test score is compared to a cut-off score to determine eligibility or diagnosis (i.e. a child's score on an achievement test is used to determine eligibility for a gifted program). In contrast, an example of a relative, or norm-referenced, decision would be when the individual's test score is used to either (a) determine relative standing as compared to his/her peers (i.e. a child's score on a reading subtest is used to determine which reading group he/she is placed in), or (b) make intra-individual comparisons (i.e. comparing previous versus current performance within the same individual). The type of decision that the researcher is interested in will determine which formula should be used to calculate the generalizability coefficient (similar to a reliability coefficient in CTT).
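A D study of the sort described above can be sketched with a short calculation. The following Python fragment is only an illustration: the variance components are invented numbers, and the formulas assume a simple crossed persons × items random-effects design, in which the relative error variance is the person-by-item interaction variance divided by the number of items, and the absolute error variance additionally charges the item variance. It computes the generalizability coefficient (for relative decisions) and the dependability coefficient (for absolute decisions) as the number of items is increased:

    var_p, var_i, var_pi = 0.50, 0.10, 0.40    # person, item, and interaction/residual variance components (illustrative values)
    for n_items in (2, 5, 10, 20):
        rel_error = var_pi / n_items                       # relative error variance
        abs_error = var_i / n_items + var_pi / n_items     # absolute error variance
        g_coeff = var_p / (var_p + rel_error)              # generalizability coefficient (relative decisions)
        phi = var_p / (var_p + abs_error)                  # dependability coefficient (absolute decisions)
        print(n_items, round(g_coeff, 3), round(phi, 3))

Both coefficients approach 1 as items are added, and the dependability coefficient is never larger than the generalizability coefficient, reflecting the extra sources of error that count against absolute decisions.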
[ { "math_id": 0, "text": "X = T + E" } ]
https://en.wikipedia.org/wiki?curid=7914038
7914891
Hitchin functional
The Hitchin functional is a mathematical concept with applications in string theory that was introduced by the British mathematician Nigel Hitchin. Hitchin (2000) and Hitchin (2001) are the original articles on the Hitchin functional. As with Hitchin's introduction of generalized complex manifolds, this is an example of a mathematical tool found useful in mathematical physics. Formal definition. This is the definition for 6-manifolds. The definition in Hitchin's article is more general, but more abstract. Let formula_0 be a compact, oriented 6-manifold with trivial canonical bundle. Then the Hitchin functional is a functional on 3-forms defined by the formula: formula_1 where formula_2 is a 3-form and * denotes the Hodge star operator. The main theorem of Hitchin's articles states, roughly, that the critical points of the functional formula_3 restricted to a fixed cohomology class formula_4, at which formula_5, are precisely the 3-forms formula_2 that arise as real parts of non-vanishing holomorphic 3-forms for complex structures on formula_0. The proof of the theorem in Hitchin's articles Hitchin (2000) and Hitchin (2001) is relatively straightforward. The power of this concept is in the converse statement: if the exact form formula_6 is known, we only have to look at its critical points to find the possible complex structures. Stable forms. Action functionals often determine geometric structures on formula_0, and geometric structures are often characterized by the existence of particular differential forms on formula_0 that obey some integrability conditions. If a "2"-form formula_9 can be written with local coordinates as formula_10 and satisfies formula_11, then formula_9 defines a symplectic structure. A "p"-form formula_12 is "stable" if it lies in an open orbit of the local formula_13 action, where n=dim(M), namely if any small perturbation formula_14 can be undone by a local formula_13 action. So any "1"-form that vanishes nowhere is stable; stability of a "2"-form (or of a "p"-form when "p" is even) is equivalent to non-degeneracy. What about "p"=3? For large "n", "3"-forms are difficult to handle, because the dimension of formula_15, which is of the order of formula_16, grows faster than the dimension of formula_13, which is of the order of formula_17. But there is a very lucky exceptional case, namely formula_18, when dim formula_19 and dim formula_20. Let formula_21 be a stable real "3"-form in dimension "6". Then the stabilizer of formula_21 under formula_22 has real dimension "36-20=16"; in fact it is either formula_23 or formula_24. Focus on the case of formula_24: if formula_21 has stabilizer formula_24, then it can be written with local coordinates as follows: formula_25 where formula_26 and formula_27 are bases of formula_28. Then formula_29 determines an almost complex structure on formula_0. Moreover, if there exist local coordinates formula_30 such that formula_31, then it in fact determines a complex structure on formula_0. Given the stable form formula_32: formula_25, we can define another real "3"-form formula_33. Then formula_34 is a holomorphic "3"-form in the almost complex structure determined by formula_21. Furthermore, this almost complex structure is an actual complex structure precisely if formula_35, i.e. formula_36 and formula_37. This formula_2 is just the "3"-form formula_2 in the formal definition of the Hitchin functional. These ideas lead to the notion of a generalized complex structure. Use in string theory. Hitchin functionals arise in many areas of string theory. An example is the compactification of the 10-dimensional string with a subsequent orientifold projection formula_38 using an involution formula_39. In this case, formula_0 is the internal 6 (real) dimensional Calabi-Yau space. The coupling to the complexified Kähler coordinates formula_40 is given by formula_41 The potential function is the functional formula_42, where J is the almost complex structure.
Both are Hitchin functionals. As an application to string theory, the famous OSV conjecture used the Hitchin functional in order to relate the topological string to 4-dimensional black hole entropy. Using a similar technique, the Hitchin functional for formula_7 holonomy has been used to argue for topological M-theory, and a formula_8 holonomy version might similarly be related to a topological F-theory. More recently, E. Witten proposed the mysterious superconformal field theory in six dimensions, called the 6D (2,0) superconformal field theory; the Hitchin functional gives one of the bases of it.
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\Phi(\\Omega) = \\int_M \\Omega \\wedge * \\Omega," }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "\\Phi" }, { "math_id": 4, "text": "[\\Omega] \\in H^3(M,R)" }, { "math_id": 5, "text": "\\Omega \\wedge * \\Omega < 0" }, { "math_id": 6, "text": "\\Phi(\\Omega)" }, { "math_id": 7, "text": "G_2" }, { "math_id": 8, "text": "Spin(7)" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "\\omega=dp_1\\wedge dq_1+\\cdots+dp_m\\wedge dq_m" }, { "math_id": 11, "text": "d\\omega=0" }, { "math_id": 12, "text": "\\omega\\in\\Omega^p(M,\\mathbb{R})" }, { "math_id": 13, "text": "GL(n,\\mathbb{R})" }, { "math_id": 14, "text": "\\omega\\mapsto\\omega+\\delta\\omega" }, { "math_id": 15, "text": "\\wedge^3(\\mathbb{R}^n)" }, { "math_id": 16, "text": "n^3" }, { "math_id": 17, "text": "n^2" }, { "math_id": 18, "text": "n=6" }, { "math_id": 19, "text": "\\wedge^3(\\mathbb{R}^6)=20" }, { "math_id": 20, "text": "GL(6,\\mathbb{R})=36" }, { "math_id": 21, "text": "\\rho" }, { "math_id": 22, "text": "GL(6,\\mathbb{R})" }, { "math_id": 23, "text": "SL(3,\\mathbb{R})\\times SL(3,\\mathbb{R})" }, { "math_id": 24, "text": "SL(3,\\mathbb{C})" }, { "math_id": 25, "text": "\\rho=\\frac{1}{2}(\\zeta_1\\wedge\\zeta_2\\wedge\\zeta_3+\\bar{\\zeta_1}\\wedge\\bar{\\zeta_2}\\wedge\\bar{\\zeta_3})" }, { "math_id": 26, "text": "\\zeta_1=e_1+ie_2,\\zeta_2=e_3+ie_4,\\zeta_3=e_5+ie_6" }, { "math_id": 27, "text": "e_i" }, { "math_id": 28, "text": "T^*M" }, { "math_id": 29, "text": "\\zeta_i" }, { "math_id": 30, "text": "(z_1,z_2,z_3)" }, { "math_id": 31, "text": "\\zeta_i=dz_i" }, { "math_id": 32, "text": "\\rho\\in\\Omega^3(M,\\mathbb{R})" }, { "math_id": 33, "text": "\\tilde{\\rho}(\\rho)=\\frac{1}{2}(\\zeta_1\\wedge\\zeta_2\\wedge\\zeta_3-\\bar{\\zeta_1}\\wedge\\bar{\\zeta_2}\\wedge\\bar{\\zeta_3})" }, { "math_id": 34, "text": "\\Omega=\\rho+i\\tilde{\\rho}(\\rho)" }, { "math_id": 35, "text": "d\\Omega=0" }, { "math_id": 36, "text": "d\\rho=0" }, { "math_id": 37, "text": "d\\tilde{\\rho}(\\rho)=0" }, { "math_id": 38, "text": "\\kappa" }, { "math_id": 39, "text": "\\nu" }, { "math_id": 40, "text": "\\tau" }, { "math_id": 41, "text": "g_{ij} = \\tau \\text{im} \\int \\tau i^*(\\nu \\cdot \\kappa \\tau)." }, { "math_id": 42, "text": "V[J] = \\int J \\wedge J \\wedge J" } ]
https://en.wikipedia.org/wiki?curid=7914891
79150
Feigenbaum constants
Mathematical constants related to chaotic behavior In mathematics, specifically bifurcation theory, the Feigenbaum constants are two mathematical constants which both express ratios in a bifurcation diagram for a non-linear map. They are named after the physicist Mitchell J. Feigenbaum. History. Feigenbaum originally related the first constant to the period-doubling bifurcations in the logistic map, but also showed it to hold for all one-dimensional maps with a single quadratic maximum. As a consequence of this generality, every chaotic system that corresponds to this description will bifurcate at the same rate. Feigenbaum made this discovery in 1975, and he officially published it in 1978. The first constant. The first Feigenbaum constant δ is the limiting ratio of each bifurcation interval to the next between every period doubling of a one-parameter map formula_0 where "f" ("x") is a function parameterized by the bifurcation parameter "a". It is given by the limit formula_1 where "a"n are the discrete values of "a" at the "n"th period doubling. Illustration. Non-linear maps. To see how this number arises, consider the real one-parameter map formula_2 Here "a" is the bifurcation parameter and "x" is the variable. The values of "a" for which the period doubles (e.g. the largest value of "a" with no period-2 orbit, or the largest "a" with no period-4 orbit), are "a"1, "a"2 etc. These are tabulated below: The ratio in the last column converges to the first Feigenbaum constant. The same number arises for the logistic map formula_3 with real parameter "a" and variable "x". Tabulating the bifurcation values again: Fractals. In the case of the Mandelbrot set for the complex quadratic polynomial formula_4 the Feigenbaum constant is the limiting ratio between the diameters of successive circles on the real axis in the complex plane. The bifurcation parameter is a root point of the period-2"n" component. This series converges to the Feigenbaum point c = −1.401155... The ratio in the last column converges to the first Feigenbaum constant. Other maps also reproduce this ratio; in this sense the Feigenbaum constant in bifurcation theory is analogous to π in geometry and "e" in calculus. The second constant. The second Feigenbaum constant or Feigenbaum's alpha constant (sequence in the OEIS), formula_5 is the ratio between the width of a tine and the width of one of its two subtines (except the tine closest to the fold). A negative sign is applied to α when the ratio between the lower subtine and the width of the tine is measured. These numbers apply to a large class of dynamical systems (for example, from dripping faucets to population growth). Other values. The period-3 window in the logistic map also has a period-doubling route to chaos, reaching chaos at formula_6, and it has its own two Feigenbaum constants: formula_7. Properties. Both numbers are believed to be transcendental, although they have not been proven to be so. In fact, there is no known proof that either constant is even irrational. The first proof of the universality of the Feigenbaum constants was carried out by Oscar Lanford, with computer assistance, in 1982 (with a small correction by Jean-Pierre Eckmann and Peter Wittwer of the University of Geneva in 1987). Over the years, non-numerical methods were discovered for different parts of the proof, aiding Mikhail Lyubich in producing the first complete non-numerical proof. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes.
&lt;templatestyles src="Reflist/styles.css" /&gt; OEIS sequence A006891 (Decimal expansion of Feigenbaum reduction parameter) OEIS sequence A195102 (Decimal expansion of the parameter for the biquadratic solution of the Feigenbaum-Cvitanovic equation)
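The first constant can also be estimated directly with a short computation. The following Python sketch is illustrative only: rather than locating the period-doubling bifurcation points themselves, it finds the "superstable" parameter values of the logistic map, at which the critical point x = 1/2 is periodic with period 2 to the power n; the ratios of successive gaps between these parameters converge to the same constant δ.

    def iterate_with_derivative(a, n_iter):
        # Iterate x -> a*x*(1-x) from the critical point x = 1/2,
        # tracking dx/da for Newton's method in the parameter a.
        x, dxda = 0.5, 0.0
        for _ in range(n_iter):
            dxda = x * (1.0 - x) + a * (1.0 - 2.0 * x) * dxda
            x = a * x * (1.0 - x)
        return x, dxda

    def superstable(a_guess, period, steps=50):
        # Newton iteration on g(a) = f^period(1/2) - 1/2.
        a = a_guess
        for _ in range(steps):
            x, dxda = iterate_with_derivative(a, period)
            a -= (x - 0.5) / dxda
        return a

    A = [2.0, 1.0 + 5.0 ** 0.5]   # superstable parameters for periods 1 and 2 (exactly 2 and 1+sqrt(5))
    delta = 4.7                    # rough starting value for the gap ratio
    for n in range(2, 11):
        guess = A[-1] + (A[-1] - A[-2]) / delta     # extrapolate assuming geometric shrinkage of the gaps
        A.append(superstable(guess, 2 ** n))
        delta = (A[-2] - A[-3]) / (A[-1] - A[-2])
        print(n, delta)                             # the ratio approaches 4.669...

With double-precision arithmetic the estimate approaches 4.669 after a handful of doublings.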
[ { "math_id": 0, "text": "x_{i+1} = f(x_i)," }, { "math_id": 1, "text": "\\delta = \\lim_{n\\to\\infty} \\frac{a_{n-1} - a_{n-2}}{a_n - a_{n-1}} = 4.669\\,201\\,609\\,\\ldots," }, { "math_id": 2, "text": "f(x) = a-x^2." }, { "math_id": 3, "text": "f(x) = ax(1-x)" }, { "math_id": 4, "text": "f(z) = z^2 + c" }, { "math_id": 5, "text": "\\alpha = 2.502\\,907\\,875\\,095\\,892\\,822\\,283\\,902\\,873\\,218...," }, { "math_id": 6, "text": "r = 3.854 077 963 591\\dots" }, { "math_id": 7, "text": "\\delta = 55.26, \\alpha = 9.277" } ]
https://en.wikipedia.org/wiki?curid=79150
7915003
Preferential attachment
Stochastic process formalizing cumulative advantage A preferential attachment process is any of a class of processes in which some quantity, typically some form of wealth or credit, is distributed among a number of individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not. "Preferential attachment" is only the most recent of many names that have been given to such processes. They are also referred to under the names Yule process, cumulative advantage, the rich get richer, and the Matthew effect. They are also related to Gibrat's law. The principal reason for scientific interest in preferential attachment is that it can, under suitable circumstances, generate power law distributions. If preferential attachment is non-linear, measured distributions may deviate from a power law. These mechanisms may generate distributions which are approximately power law over transient periods. Definition. A preferential attachment process is a stochastic urn process, meaning a process in which discrete units of wealth, usually called "balls", are added in a random or partly random fashion to a set of objects or containers, usually called "urns". A preferential attachment process is an urn process in which additional balls are added continuously to the system and are distributed among the urns as an increasing function of the number of balls the urns already have. In the most commonly studied examples, the number of urns also increases continuously, although this is not a necessary condition for preferential attachment and examples have been studied with constant or even decreasing numbers of urns. A classic example of a preferential attachment process is the growth in the number of species per genus in some higher taxon of biotic organisms. New genera ("urns") are added to a taxon whenever a newly appearing species is considered sufficiently different from its predecessors that it does not belong in any of the current genera. New species ("balls") are added as old ones speciate (i.e., split in two) and, assuming that new species belong to the same genus as their parent (except for those that start new genera), the probability that a new species is added to a genus will be proportional to the number of species the genus already has. This process, first studied by British statistician Udny Yule, is a "linear" preferential attachment process, since the rate at which genera accrue new species is linear in the number they already have. Linear preferential attachment processes in which the number of urns increases are known to produce a distribution of balls over the urns following the so-called Yule distribution. In the most general form of the process, balls are added to the system at an overall rate of "m" new balls for each new urn. Each newly created urn starts out with "k"0 balls and further balls are added to urns at a rate proportional to the number "k" that they already have plus a constant "a" &gt; −"k"0. 
With these definitions, the fraction "P"("k") of urns having "k" balls in the limit of long time is given by formula_0 for "k" ≥ "k"0 (and zero otherwise), where B("x", "y") is the Euler beta function: formula_1 with Γ("x") being the standard gamma function, and formula_2 The beta function behaves asymptotically as B("x", "y") ~ "x"−"y" for large "x" and fixed "y", which implies that for large values of "k" we have formula_3 In other words, the preferential attachment process generates a "long-tailed" distribution following a Pareto distribution or power law in its tail. This is the primary reason for the historical interest in preferential attachment: the species distribution and many other phenomena are observed empirically to follow power laws and the preferential attachment process is a leading candidate mechanism to explain this behavior. Preferential attachment is considered a possible candidate for, among other things, the distribution of the sizes of cities, the wealth of extremely wealthy individuals, the number of citations received by learned publications, and the number of links to pages on the World Wide Web. The general model described here includes many other specific models as special cases. In the species/genus example above, for instance, each genus starts out with a single species ("k"0 = 1) and gains new species in direct proportion to the number it already has ("a" = 0), and hence "P"("k") = B("k", "γ")/B("k"0, "γ" − 1) with "γ"=2 + 1/"m". Similarly the Price model for scientific citations corresponds to the case "k"0 = 0, "a" = 1 and the widely studied Barabási-Albert model corresponds to "k"0 = "m", "a" = 0. Preferential attachment is sometimes referred to as the Matthew effect, but the two are not precisely equivalent. The Matthew effect, first discussed by Robert K. Merton, is named for a passage in the biblical Gospel of Matthew: "For everyone who has will be given more, and he will have an abundance. Whoever does not have, even what he has will be taken from him." (Matthew , New International Version.) The preferential attachment process does not incorporate the taking away part. This point may be moot, however, since the scientific insight behind the Matthew effect is in any case entirely different. Qualitatively it is intended to describe not a mechanical multiplicative effect like preferential attachment but a specific human behavior in which people are more likely to give credit to the famous than to the little known. The classic example of the Matthew effect is a scientific discovery made simultaneously by two different people, one well known and the other little known. It is claimed that under these circumstances people tend more often to credit the discovery to the well-known scientist. Thus the real-world phenomenon the Matthew effect is intended to describe is quite distinct from (though certainly related to) preferential attachment. History. The first rigorous consideration of preferential attachment seems to be that of Udny Yule in 1925, who used it to explain the power-law distribution of the number of species per genus of flowering plants. The process is sometimes called a "Yule process" in his honor. Yule was able to show that the process gave rise to a distribution with a power-law tail, but the details of his proof are, by today's standards, contorted and difficult, since the modern tools of stochastic process theory did not yet exist and he was forced to use more cumbersome methods of proof. 
Most modern treatments of preferential attachment make use of the master equation method, whose use in this context was pioneered by Simon in 1955, in work on the distribution of sizes of cities and other phenomena. The first application of preferential attachment to learned citations was given by Price in 1976. (He referred to the process as a "cumulative advantage" process.) His was also the first application of the process to the growth of a network, producing what would now be called a scale-free network. It is in the context of network growth that the process is most frequently studied today. Price also promoted preferential attachment as a possible explanation for power laws in many other phenomena, including Lotka's law of scientific productivity and Bradford's law of journal use. The application of preferential attachment to the growth of the World Wide Web was proposed by Barabási and Albert in 1999. Barabási and Albert also coined the name "preferential attachment" by which the process is best known today and suggested that the process might apply to the growth of other networks as well. For growing networks, the precise functional form of preferential attachment can be estimated by maximum likelihood estimation.
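The urn process described above is also easy to simulate directly. The following Python sketch is purely illustrative (the function name, default parameters, and the two-branch sampling trick are choices made for the example): it grows the system one urn at a time, attaching each of the m extra balls to an existing urn with probability proportional to its current ball count plus a (for non-negative a). For the default choices k0 = 1, a = 0, m = 1, the resulting distribution of balls per urn should show a power-law tail with exponent close to γ = 3, as the formula above predicts.

    import random

    def simulate(n_urns, m=1, k0=1, a=0.0, seed=0):
        # Linear preferential attachment with non-negative offset a.
        rng = random.Random(seed)
        counts = [k0]               # balls in each urn
        balls = [0] * k0            # one entry per ball, recording which urn holds it
        for _ in range(n_urns - 1):
            for _ in range(m):
                # choose an urn with probability proportional to (count + a)
                if rng.random() * (len(balls) + a * len(counts)) < len(balls):
                    i = balls[rng.randrange(len(balls))]    # the "count" part of the weight
                else:
                    i = rng.randrange(len(counts))          # the uniform "+a" part
                counts[i] += 1
                balls.append(i)
            counts.append(k0)                               # the new urn
            balls.extend([len(counts) - 1] * k0)
        return counts

    counts = simulate(100_000)
    print(max(counts), sum(c >= 100 for c in counts))       # a few very rich urns, as expected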
[ { "math_id": 0, "text": "\nP(k)={\\mathrm{B}(k+a,\\gamma)\\over\\mathrm{B}(k_0+a,\\gamma-1)},\n" }, { "math_id": 1, "text": "\n\\mathrm{B}(x,y)={\\Gamma(x)\\Gamma(y)\\over\\Gamma(x+y)},\n" }, { "math_id": 2, "text": "\n\\gamma=2 + {k_0 + a\\over m}.\n" }, { "math_id": 3, "text": "\nP(k) \\propto k^{-\\gamma}.\n" } ]
https://en.wikipedia.org/wiki?curid=7915003
79173
Bifurcation diagram
Visualization of sudden behavior changes caused by continuous parameter changes In mathematics, particularly in dynamical systems, a bifurcation diagram shows the values visited or approached asymptotically (fixed points, periodic orbits, or chaotic attractors) of a system as a function of a bifurcation parameter in the system. It is usual to represent stable values with a solid line and unstable values with a dotted line, although often the unstable points are omitted. Bifurcation diagrams enable the visualization of bifurcation theory. In the context of discrete-time dynamical systems, the diagram is also called an orbit diagram. Logistic map. An example is the bifurcation diagram of the logistic map: formula_0 The bifurcation parameter "r" is shown on the horizontal axis of the plot and the vertical axis shows the set of values of the logistic function visited asymptotically from almost all initial conditions. The bifurcation diagram shows the forking of the periods of stable orbits from 1 to 2 to 4 to 8 etc. Each of these bifurcation points is a period-doubling bifurcation. The ratio of the lengths of successive intervals between values of "r" for which bifurcation occurs converges to the first Feigenbaum constant. The diagram also shows period doublings from 3 to 6 to 12 etc., from 5 to 10 to 20 etc., and so forth. Symmetry breaking in bifurcation sets. In a dynamical system such as formula_1, which is structurally stable when formula_2, if a bifurcation diagram is plotted treating formula_3 as the bifurcation parameter, but for different values of formula_4, the case formula_5 is the symmetric pitchfork bifurcation. When formula_6, we say we have a pitchfork with "broken symmetry". Applications. Consider a system of differential equations that describes some physical quantity, which for concreteness could represent one of three examples: 1. the position and velocity of an undamped and frictionless pendulum, 2. a neuron's membrane potential over time, and 3. the average concentration of a virus in a patient's bloodstream. The differential equations for these examples include "parameters" that may affect the output of the equations. Changing the pendulum's mass and length will affect its oscillation frequency, changing the magnitude of injected current into a neuron may transition the membrane potential from resting to spiking, and the long-term viral load in the bloodstream may decrease with carefully timed treatments. In general, researchers may seek to quantify how the long-term (asymptotic) behavior of a system of differential equations changes if a parameter is changed. In the dynamical systems branch of mathematics, a bifurcation diagram quantifies these changes by showing how fixed points, periodic orbits, or chaotic attractors of a system change as a function of a bifurcation parameter. Bifurcation diagrams are used to visualize these changes.
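A diagram of the kind described for the logistic map can be generated with a few lines of code. The following Python sketch is illustrative only; the parameter range, the number of discarded transient iterations, and the plotting style are arbitrary choices. It iterates the map for many values of "r", discards the transient, and plots the remaining iterates:

    import numpy as np
    import matplotlib.pyplot as plt

    r = np.linspace(2.5, 4.0, 2000)   # bifurcation parameter values
    x = 0.5 * np.ones_like(r)         # one trajectory per parameter value
    for _ in range(500):              # discard transients so only the attractor remains
        x = r * x * (1.0 - x)
    for _ in range(200):              # record and plot the asymptotic values
        x = r * x * (1.0 - x)
        plt.plot(r, x, ',k', alpha=0.25)
    plt.xlabel('r')
    plt.ylabel('x')
    plt.show()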
[ { "math_id": 0, "text": " x_{n+1}=rx_n(1-x_n). " }, { "math_id": 1, "text": " \\ddot {x} + f(x;\\mu) + \\varepsilon g(x) = 0," }, { "math_id": 2, "text": " \\mu \\neq 0 " }, { "math_id": 3, "text": " \\mu " }, { "math_id": 4, "text": " \\varepsilon " }, { "math_id": 5, "text": " \\varepsilon = 0" }, { "math_id": 6, "text": " \\varepsilon \\neq 0 " } ]
https://en.wikipedia.org/wiki?curid=79173
7917643
Involutory matrix
Type of square matrix In mathematics, an involutory matrix is a square matrix that is its own inverse. That is, multiplication by the matrix formula_0 is an involution if and only if formula_1, where formula_2 is the formula_3 identity matrix. Involutory matrices are all square roots of the identity matrix. This is a consequence of the fact that any invertible matrix multiplied by its inverse is the identity. Examples. The formula_4 real matrix formula_5 is involutory provided that formula_6 The Pauli matrices in M(2, C) are involutory: formula_7 One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which represents multiplication of a row or column by −1, is also involutory; it is in fact a trivial example of a signature matrix, all of which are involutory. Some simple examples of involutory matrices are shown below: formula_8 where "I" is the 3 × 3 identity matrix (which is trivially involutory), "R" is the 3 × 3 identity matrix with a pair of rows interchanged, and "S" is a signature matrix. Any block-diagonal matrices constructed from involutory matrices will also be involutory, since the square of a block-diagonal matrix is the block-diagonal matrix of the squares of its blocks. Symmetry. An involutory matrix which is also symmetric is an orthogonal matrix, and thus represents an isometry (a linear transformation which preserves Euclidean distance). Conversely every orthogonal involutory matrix is symmetric. As a special case of this, every reflection and 180° rotation matrix is involutory. Properties. An involution is non-defective, and each eigenvalue equals formula_9, so an involution diagonalizes to a signature matrix. A normal involution is Hermitian (complex) or symmetric (real) and also unitary (complex) or orthogonal (real). The determinant of an involutory matrix over any field is ±1. If A is an "n" × "n" matrix, then A is involutory if and only if P+ = (I + A)/2 is idempotent. This relation gives a bijection between involutory matrices and idempotent matrices. Similarly, A is involutory if and only if P− = (I − A)/2 is idempotent. These two operators form the symmetric and antisymmetric projections formula_10 of a vector formula_11 with respect to the involution A, in the sense that formula_12, or formula_13. The same construct applies to any involutory function, such as the complex conjugate (real and imaginary parts), transpose (symmetric and antisymmetric matrices), and Hermitian adjoint (Hermitian and skew-Hermitian matrices). If A is an involutory matrix in M("n", R), which is a matrix algebra over the real numbers, and A is not a scalar multiple of I, then the subalgebra {"x" I + "y" A: "x", "y" ∈ R} generated by A is isomorphic to the split-complex numbers. If A and B are two involutory matrices which commute with each other (i.e. AB = BA) then AB is also involutory. If A is an involutory matrix then every integer power of A is involutory. In fact, A"n" will be equal to A if "n" is odd and I if "n" is even. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
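A quick numerical check of the 2 × 2 example and of the idempotent projections described above (an illustrative NumPy sketch; the particular values of a and b are arbitrary, chosen only to satisfy the condition a² + bc = 1):

    import numpy as np

    a, b = 0.6, 0.8
    c = (1 - a**2) / b                 # enforce a^2 + bc = 1
    A = np.array([[a, b], [c, -a]])
    print(np.allclose(A @ A, np.eye(2)))               # True: A is involutory

    P_plus = (np.eye(2) + A) / 2
    P_minus = (np.eye(2) - A) / 2
    print(np.allclose(P_plus @ P_plus, P_plus))        # True: P+ is idempotent
    print(np.allclose(P_minus @ P_minus, P_minus))     # True: P- is idempotent

    v = np.array([1.0, 2.0])
    print(np.allclose(A @ (P_plus @ v), P_plus @ v))        # A acts as +1 on the symmetric projection
    print(np.allclose(A @ (P_minus @ v), -(P_minus @ v)))   # and as -1 on the antisymmetric projection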
[ { "math_id": 0, "text": "A_{n \\times n}" }, { "math_id": 1, "text": " A^2=I " }, { "math_id": 2, "text": "I" }, { "math_id": 3, "text": "n \\times n" }, { "math_id": 4, "text": "2\\times2" }, { "math_id": 5, "text": "\\begin{pmatrix}a & b \\\\ c & -a \\end{pmatrix}" }, { "math_id": 6, "text": "a^2 + bc = 1 ." }, { "math_id": 7, "text": "\\begin{align}\n \\sigma_1 = \\sigma_x &=\n \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix}, \\\\\n \\sigma_2 = \\sigma_y &=\n \\begin{pmatrix}\n 0 & -i \\\\\n i & 0\n \\end{pmatrix}, \\\\\n \\sigma_3 = \\sigma_z &=\n \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & -1\n \\end{pmatrix}.\n\\end{align}" }, { "math_id": 8, "text": "\n\\begin{array}{cc}\n\\mathbf{I} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{pmatrix}\n; & \n\\mathbf{I}^{-1} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{pmatrix}\n\\\\\n\\\\\n\\mathbf{R} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & 1 & 0\n\\end{pmatrix}\n; &\n\\mathbf{R}^{-1} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & 1 & 0\n\\end{pmatrix}\n\\\\\n\\\\\n\\mathbf{S} = \\begin{pmatrix}\n+1 & 0 & 0 \\\\\n0 & -1 & 0 \\\\\n0 & 0 & -1\n\\end{pmatrix}\n; &\n\\mathbf{S}^{-1} = \\begin{pmatrix}\n+1 & 0 & 0 \\\\\n0 & -1 & 0 \\\\\n0 & 0 & -1\n\\end{pmatrix}\n\\\\\n\\end{array}\n" }, { "math_id": 9, "text": "\\pm 1" }, { "math_id": 10, "text": "v_\\pm = P_\\pm v" }, { "math_id": 11, "text": "v = v_+ + v_-" }, { "math_id": 12, "text": "Av_\\pm = \\pm v_\\pm" }, { "math_id": 13, "text": "A P_\\pm = \\pm P_\\pm" } ]
https://en.wikipedia.org/wiki?curid=7917643
7918341
Essentially unique
In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is "unique" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of "sameness", which is often formalized using an equivalence relation. A related notion is a universal property, where an object is not only essentially unique, but unique "up to a unique isomorphism" (meaning that it has trivial automorphism group). In general there can be more than one isomorphism between examples of an essentially unique object. Examples. Set theory. At the most basic level, there is an essentially unique set of any given cardinality, whether one labels the elements formula_0 or formula_1. In this case, the non-uniqueness of the isomorphism (e.g., match 1 to formula_2 or 1 to "formula_3") is reflected in the symmetric group. On the other hand, there is an essentially unique totally ordered set of any given finite cardinality that is unique up to unique isomorphism: if one writes formula_4 and formula_5, then the only order-preserving isomorphism is the one which maps 1 to "formula_2," 2 to "formula_6," and 3 to "formula_3." Number theory. The fundamental theorem of arithmetic establishes that the factorization of any positive integer into prime numbers is essentially unique, i.e., unique up to the ordering of the prime factors. Group theory. In the context of classification of groups, there is an essentially unique group containing exactly 2 elements. Similarly, there is also an essentially unique group containing exactly 3 elements: the cyclic group of order three. In fact, regardless of how one chooses to write the three elements and denote the group operation, all such groups can be shown to be isomorphic to each other, and hence are "the same". On the other hand, there does not exist an essentially unique group with exactly 4 elements, as there are in this case two non-isomorphic groups in total: the cyclic group of order 4 and the Klein four-group. Measure theory. There is an essentially unique measure that is translation-invariant, strictly positive and locally finite on the real line. In fact, any such measure must be a constant multiple of Lebesgue measure, so specifying that the measure of the unit interval should be 1 determines the solution uniquely. Topology. There is an essentially unique two-dimensional, compact, simply connected manifold: the 2-sphere. In this case, it is unique up to homeomorphism. In the area of topology known as knot theory, there is an analogue of the fundamental theorem of arithmetic: the decomposition of a knot into a sum of prime knots is essentially unique. Lie theory. A maximal compact subgroup of a semisimple Lie group may not be unique, but is unique up to conjugation. Category theory. An object that is the limit or colimit over a given diagram is essentially unique, as there is a "unique" isomorphism to any other limiting/colimiting object. Coding theory. Given the task of using 24-bit words to store 12 bits of information in such a way that 4-bit errors can be detected and 3-bit errors can be corrected, the solution is essentially unique: the extended binary Golay code. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
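The group-theoretic example above can be made concrete with a short computation. The following Python sketch is illustrative only: it models the cyclic group of order 4 as addition modulo 4 and the Klein four-group as bitwise XOR on {0, 1, 2, 3}, and compares the sorted lists of element orders; since the lists differ, the two groups are not isomorphic, which is why there is no essentially unique group with exactly 4 elements.

    def element_orders(elements, op, identity):
        orders = []
        for g in elements:
            x, n = g, 1
            while x != identity:
                x, n = op(x, g), n + 1
            orders.append(n)
        return sorted(orders)

    elements = [0, 1, 2, 3]
    print(element_orders(elements, lambda a, b: (a + b) % 4, 0))  # [1, 2, 4, 4] for the cyclic group of order 4
    print(element_orders(elements, lambda a, b: a ^ b, 0))        # [1, 2, 2, 2] for the Klein four-group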
[ { "math_id": 0, "text": "\\{1,2,3\\}" }, { "math_id": 1, "text": "\\{a,b,c\\}" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": "\\{1 < 2 < 3\\}" }, { "math_id": 5, "text": "\\{a< b< c\\}" }, { "math_id": 6, "text": "b" } ]
https://en.wikipedia.org/wiki?curid=7918341
791863
Seebeck coefficient
Measure of voltage induced by change of temperature The Seebeck coefficient (also known as thermopower, thermoelectric power, and thermoelectric sensitivity) of a material is a measure of the magnitude of an induced thermoelectric voltage in response to a temperature difference across that material, as induced by the Seebeck effect. The SI unit of the Seebeck coefficient is volts per kelvin (V/K), although it is more often given in microvolts per kelvin (μV/K). The use of materials with a high Seebeck coefficient is one of many important factors for the efficient behaviour of thermoelectric generators and thermoelectric coolers. More information about high-performance thermoelectric materials can be found in the Thermoelectric materials article. In thermocouples the Seebeck effect is used to measure temperatures, and for accuracy it is desirable to use materials with a Seebeck coefficient that is stable over time. Physically, the magnitude and sign of the Seebeck coefficient can be approximately understood as being given by the entropy per unit charge carried by electrical currents in the material. It may be positive or negative. In conductors that can be understood in terms of independently moving, nearly-free charge carriers, the Seebeck coefficient is negative for negatively charged carriers (such as electrons), and positive for positively charged carriers (such as electron holes). &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Definition. One way to define the Seebeck coefficient is the voltage built up when a small temperature gradient is applied to a material, and when the material has come to a steady state where the current density is zero everywhere. If the temperature difference Δ"T" between the two ends of a material is small, then the Seebeck coefficient of a material is defined as: formula_0 where Δ"V" is the thermoelectric voltage seen at the terminals. (See below for more on the signs of Δ"V" and Δ"T".) Note that the voltage shift expressed by the Seebeck effect cannot be measured directly, since the measured voltage (by attaching a voltmeter) contains an additional voltage contribution, due to the temperature gradient and Seebeck effect in the measurement leads. The voltmeter voltage is always dependent on "relative" Seebeck coefficients among the various materials involved. Most generally and technically, the Seebeck coefficient is defined in terms of the portion of electric current driven by temperature gradients, as in the vector differential equation formula_1 where formula_2 is the current density, formula_3 is the electrical conductivity, formula_4 is the voltage gradient, and formula_5 is the temperature gradient. The zero-current, steady state special case described above has formula_6, which implies that the two electrical conductivity terms have cancelled out and so formula_7 Sign convention. The sign is made explicit in the following expression: formula_8 Thus, if "S" is positive, the end with the higher temperature has the lower voltage, and vice versa. The voltage gradient in the material will point against the temperature gradient. The Seebeck effect is generally dominated by the contribution from charge carrier diffusion (see below) which tends to push charge carriers towards the cold side of the material until a compensating voltage has built up. As a result, in p-type semiconductors (which have only positive mobile charges, electron holes), "S" is positive. 
Likewise, in n-type semiconductors (which have only negative mobile charges, electrons), "S" is negative. In most conductors, however, the charge carriers exhibit both hole-like and electron-like behaviour and the sign of "S" usually depends on which of them predominates. Relationship to other thermoelectric coefficients. According to the second Thomson relation (which holds for all non-magnetic materials in the absence of an externally applied magnetic field), the Seebeck coefficient is related to the Peltier coefficient formula_9 by the exact relation formula_10 where formula_11 is the thermodynamic temperature. According to the first Thomson relation and under the same assumptions about magnetism, the Seebeck coefficient is related to the Thomson coefficient formula_12 by formula_13 The constant of integration is such that formula_14 at absolute zero, as required by Nernst's theorem. Measurement. Relative Seebeck coefficient. In practice the absolute Seebeck coefficient is difficult to measure directly, since the voltage output of a thermoelectric circuit, as measured by a voltmeter, only depends on "differences" of Seebeck coefficients. This is because electrodes attached to a voltmeter must be placed onto the material in order to measure the thermoelectric voltage. The temperature gradient then also typically induces a thermoelectric voltage across one leg of the measurement electrodes. Therefore, the measured Seebeck coefficient is a contribution from the Seebeck coefficient of the material of interest and the material of the measurement electrodes. This arrangement of two materials is usually called a thermocouple. The measured Seebeck coefficient is then a contribution from both and can be written as: formula_15 Absolute Seebeck coefficient. Although only relative Seebeck coefficients are important for externally measured voltages, the absolute Seebeck coefficient can be important for other effects where voltage is measured indirectly. Determination of the absolute Seebeck coefficient therefore requires more complicated techniques and is more difficult, but such measurements have been performed on standard materials. These measurements only had to be performed once for all time, and for all materials; for any other material, the absolute Seebeck coefficient can be obtained by performing a relative Seebeck coefficient measurement against a standard material. A measurement of the Thomson coefficient formula_16, which expresses the strength of the Thomson effect, can be used to yield the absolute Seebeck coefficient through the relation: formula_17, provided that formula_16 is measured down to absolute zero. The reason this works is that formula_18 is expected to decrease to zero as the temperature is brought to zero—a consequence of Nernst's theorem. Such a measurement based on the integration of formula_19 was published in 1932, though it relied on the interpolation of the Thomson coefficient in certain regions of temperature. Superconductors have zero Seebeck coefficient, as mentioned below. By making one of the wires in a thermocouple superconducting, it is possible to get a direct measurement of the absolute Seebeck coefficient of the other wire, since it alone determines the measured voltage from the entire thermocouple. A publication in 1958 used this technique to measure the absolute Seebeck coefficient of lead between 7.2 K and 18 K, thereby filling in an important gap in the previous 1932 experiment mentioned above. 
The combination of the superconductor-thermocouple technique up to 18 K, with the Thomson-coefficient-integration technique above 18 K, allowed determination of the absolute Seebeck coefficient of lead up to room temperature. By proxy, these measurements led to the determination of absolute Seebeck coefficients for "all materials", even up to higher temperatures, by a combination of Thomson coefficient integrations and thermocouple circuits. The difficulty of these measurements, and the rarity of reproducing experiments, lends some degree of uncertainty to the absolute thermoelectric scale thus obtained. In particular, the 1932 measurements may have incorrectly measured the Thomson coefficient over the range 20 K to 50 K. Since nearly all subsequent publications relied on those measurements, this would mean that all of the commonly used values of absolute Seebeck coefficient (including those shown in the figures) are too low by about 0.3 μV/K, for all temperatures above 50 K. Seebeck coefficients for some common materials. In the table below are Seebeck coefficients at room temperature for some common, nonexotic materials, measured relative to platinum. The Seebeck coefficient of platinum itself is approximately −5 μV/K at room temperature, and so the values listed below should be compensated accordingly. For example, the Seebeck coefficients of Cu, Ag, Au are 1.5 μV/K, and of Al −1.5 μV/K. The Seebeck coefficient of semiconductors very much depends on doping, with generally positive values for p doped materials and negative values for n doping. Physical factors that determine the Seebeck coefficient. A material's temperature, crystal structure, and impurities influence the value of thermoelectric coefficients. The Seebeck effect can be attributed to two things: charge-carrier diffusion and phonon drag. Charge carrier diffusion. On a fundamental level, an applied voltage difference refers to a difference in the thermodynamic chemical potential of charge carriers, and the direction of the current under a voltage difference is determined by the universal thermodynamic process in which (given equal temperatures) particles flow from high chemical potential to low chemical potential. In other words, the direction of the current in Ohm's law is determined via the thermodynamic arrow of time (the difference in chemical potential could be exploited to produce work, but is instead dissipated as heat which increases entropy). On the other hand, for the Seebeck effect not even the sign of the current can be predicted from thermodynamics, and so to understand the origin of the Seebeck coefficient it is necessary to understand the "microscopic" physics. Charge carriers (such as thermally excited electrons) constantly diffuse around inside a conductive material. Due to thermal fluctuations, some of these charge carriers travel with a higher energy than average, and some with a lower energy. When no voltage differences or temperature differences are applied, the carrier diffusion perfectly balances out and so on average one sees no current: formula_20. A net current can be generated by applying a voltage difference (Ohm's law), or by applying a temperature difference (Seebeck effect). To understand the microscopic origin of the thermoelectric effect, it is useful to first describe the microscopic mechanism of the normal Ohm's law electrical conductance—to describe what determines the formula_3 in formula_21. 
Microscopically, what is happening in Ohm's law is that higher energy levels have a higher concentration of carriers per state, on the side with higher chemical potential. For each interval of energy, the carriers tend to diffuse and spread into the area of device where there are fewer carriers per state of that energy. As they move, however, they occasionally scatter dissipatively, which re-randomizes their energy according to the local temperature and chemical potential. This dissipation empties out the carriers from these higher energy states, allowing more to diffuse in. The combination of diffusion and dissipation favours an overall drift of the charge carriers towards the side of the material where they have a lower chemical potential. For the thermoelectric effect, now, consider the case of uniform voltage (uniform chemical potential) with a temperature gradient. In this case, at the hotter side of the material there is more variation in the energies of the charge carriers, compared to the colder side. This means that high energy levels have a higher carrier occupation per state on the hotter side, but also the hotter side has a "lower" occupation per state at lower energy levels. As before, the high-energy carriers diffuse away from the hot end, and produce entropy by drifting towards the cold end of the device. However, there is a competing process: at the same time low-energy carriers are drawn back towards the hot end of the device. Though these processes both generate entropy, they work against each other in terms of charge current, and so a net current only occurs if one of these drifts is stronger than the other. The net current is given by formula_22, where (as shown below) the thermoelectric coefficient formula_23 depends literally on how conductive high-energy carriers are, compared to low-energy carriers. The distinction may be due to a difference in rate of scattering, a difference in speeds, a difference in density of states, or a combination of these effects. Mott formula. The processes described above apply in materials where each charge carrier sees an essentially static environment so that its motion can be described independently from other carriers, and independent of other dynamics (such as phonons). In particular, in electronic materials with weak electron-electron interactions, weak electron-phonon interactions, etc. it can be shown in general that the linear response conductance is formula_24 and the linear response thermoelectric coefficient is formula_25 where formula_26 is the energy-dependent conductivity, and formula_27 is the Fermi–Dirac distribution function. These equations are known as the Mott relations, of Sir Nevill Francis Mott. The derivativeformula_28 is a function peaked around the chemical potential (Fermi level) formula_29 with a width of approximately formula_30. The energy-dependent conductivity (a quantity that cannot actually be directly measured — one only measures formula_31) is calculated as formula_32 where formula_33 is the electron diffusion constant and formula_34 is the electronic density of states (in general, both are functions of energy). In materials with strong interactions, none of the above equations can be used since it is not possible to consider each charge carrier as a separate entity. The Wiedemann–Franz law can also be exactly derived using the non-interacting electron picture, and so in materials where the Wiedemann–Franz law fails (such as superconductors), the Mott relations also generally tend to fail. 
The formulae above can be simplified in a couple of important limiting cases: Mott formula in metals. In semimetals and metals, where transport only occurs near the Fermi level and formula_26 changes slowly in the range formula_35, one can perform a Sommerfeld expansion formula_36, which leads to formula_37 This expression is sometimes called "the Mott formula", however it is much less general than Mott's original formula expressed above. In the free electron model with scattering, the value of formula_38 is of order formula_39, where formula_40 is the Fermi temperature, and so a typical value of the Seebeck coefficient in the Fermi gas is formula_41 (the prefactor varies somewhat depending on details such as dimensionality and scattering). In highly conductive metals the Fermi temperatures are typically around 104 – 105 K, and so it is understandable why their absolute Seebeck coefficients are only of order 1 – 10 μV/K at room temperature. Note that whereas the free electron model predicts a negative Seebeck coefficient, real metals actually have complicated band structures and may exhibit positive Seebeck coefficients (examples: Cu, Ag, Au). The fraction formula_42 in semimetals is sometimes calculated from the measured derivative of formula_43 with respect to some energy shift induced by field effect. This is not necessarily correct and the estimate of formula_38 can be incorrect (by a factor of two or more), since the disorder potential depends on screening which also changes with field effect. Mott formula in semiconductors. In semiconductors at low levels of doping, transport only occurs far away from the Fermi level. At low doping in the conduction band (where formula_44, where formula_45 is the minimum energy of the conduction band edge), one has formula_46. Approximating the conduction band levels' conductivity function as formula_47 for some constants formula_48 and formula_49, formula_50 whereas in the valence band when formula_51 and formula_52, formula_53 The values of formula_49 and formula_54 depend on material details; in bulk semiconductor these constants range between 1 and 3, the extremes corresponding to acoustic-mode lattice scattering and ionized-impurity scattering. In extrinsic (doped) semiconductors either the conduction or valence band will dominate transport, and so one of the numbers above will give the measured values. In general however the semiconductor may also be intrinsic in which case the bands conduct in parallel, and so the measured values will be formula_55 This results in a crossover behaviour, as shown in the figure. The highest Seebeck coefficient is obtained when the semiconductor is lightly doped, however a high Seebeck coefficient is not necessarily useful on its own. For thermoelectric power devices (coolers, generators) it is more important to maximize the thermoelectric power factor formula_56, or the thermoelectric figure of merit, and the optimum generally occurs at high doping levels. Phonon drag. Phonons are not always in local thermal equilibrium; they move against the thermal gradient. They lose momentum by interacting with electrons (or other carriers) and imperfections in the crystal. If the phonon-electron interaction is predominant, the phonons will tend to push the electrons to one end of the material, hence losing momentum and contributing to the thermoelectric field. This contribution is most important in the temperature region where phonon-electron scattering is predominant. 
This happens for formula_57 where formula_58 is the Debye temperature. At lower temperatures there are fewer phonons available for drag, and at higher temperatures they tend to lose momentum in phonon-phonon scattering instead of phonon-electron scattering. At lower temperatures, material boundaries also play an increasing role as the phonons can travel significant distances. Practically speaking, phonon drag is an important effect in semiconductors near room temperature (even though this is well above formula_59), and it is comparable in magnitude to the carrier-diffusion effect described in the previous section. This region of the thermopower-versus-temperature function is highly variable under a magnetic field. Relationship with entropy. The Seebeck coefficient of a material corresponds thermodynamically to the amount of entropy "dragged along" by the flow of charge inside a material; it is in some sense the entropy per unit charge in the material. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
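As a rough numerical illustration of the band-transport formulas above (a sketch with assumed values, not measured data: a non-degenerate n-type semiconductor whose conduction-band edge sits 0.2 eV above the chemical potential, with the scattering exponent taken as 1, and a metal with an assumed Fermi temperature of 5 × 10^4 K), the semiconductor expression gives a Seebeck coefficient of several hundred μV/K while the degenerate-metal estimate gives only a few μV/K at room temperature:

    import math

    k_B = 8.617e-5                    # Boltzmann constant in eV/K (so k_B/e in V/K has the same numerical value)
    T = 300.0                         # temperature in K

    # Non-degenerate n-type semiconductor: S = -(k_B/e) * [(E_C - mu)/(k_B*T) + a_C + 1]
    E_C_minus_mu = 0.20               # assumed band-edge offset in eV
    a_C = 1.0                         # assumed scattering exponent
    S_semi = -k_B * (E_C_minus_mu / (k_B * T) + a_C + 1) * 1e6
    print(round(S_semi))              # about -840 microvolts per kelvin

    # Degenerate metal (free-electron estimate): S ~ -(pi^2/3) * (k_B/e) * (T / T_F)
    T_F = 5.0e4                       # assumed Fermi temperature in K
    S_metal = -(math.pi ** 2 / 3) * k_B * (T / T_F) * 1e6
    print(round(S_metal, 2))          # about -1.7 microvolts per kelvin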
[ { "math_id": 0, "text": "\nS = -{\\Delta V \\over \\Delta T}\n" }, { "math_id": 1, "text": "\\mathbf J = -\\sigma \\boldsymbol \\nabla V - \\sigma S \\boldsymbol \\nabla T" }, { "math_id": 2, "text": "\\scriptstyle\\mathbf J" }, { "math_id": 3, "text": "\\scriptstyle\\sigma" }, { "math_id": 4, "text": "\\scriptstyle\\boldsymbol \\nabla V" }, { "math_id": 5, "text": "\\scriptstyle\\boldsymbol \\nabla T" }, { "math_id": 6, "text": "\\scriptstyle\\mathbf J=0" }, { "math_id": 7, "text": "\\boldsymbol \\nabla V = -S\\boldsymbol \\nabla T." }, { "math_id": 8, "text": "S = -\\frac{V_{\\rm left}-V_{\\rm right}}{T_{\\rm left}-T_{\\rm right}}" }, { "math_id": 9, "text": "\\scriptstyle \\Pi" }, { "math_id": 10, "text": "S = \\frac{\\Pi}{T}, " }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\scriptstyle \\mathcal K" }, { "math_id": 13, "text": "S = \\int \\frac{\\mathcal K}{T}\\, dT. " }, { "math_id": 14, "text": "\\scriptstyle S=0" }, { "math_id": 15, "text": "\nS_{AB} = S_B-S_A = {\\Delta V_B \\over \\Delta T} - {\\Delta V_A \\over \\Delta T}\n" }, { "math_id": 16, "text": "\\mathcal{K}" }, { "math_id": 17, "text": "S(T) = \\int_0^T {\\mathcal{K}(T') \\over T'} dT'" }, { "math_id": 18, "text": "S(T)" }, { "math_id": 19, "text": "\\mathcal{K}/T" }, { "math_id": 20, "text": "\\scriptstyle\\mathbf J = 0" }, { "math_id": 21, "text": "\\scriptstyle\\mathbf J = -\\sigma\\boldsymbol\\nabla V" }, { "math_id": 22, "text": "\\scriptstyle\\mathbf J = -\\sigma S\\boldsymbol\\nabla T" }, { "math_id": 23, "text": "\\scriptstyle\\sigma S" }, { "math_id": 24, "text": "\\sigma = \\int c(E) \\Bigg( -\\frac{df(E)}{dE} \\Bigg) \\, dE," }, { "math_id": 25, "text": "\\sigma S = \\frac{k_{\\rm B}}{-e} \\int \\frac{E - \\mu}{k_{\\rm B}T} c(E) \\Bigg( -\\frac{df(E)}{dE} \\Bigg) \\, dE" }, { "math_id": 26, "text": "\\scriptstyle c(E)" }, { "math_id": 27, "text": "\\scriptstyle f(E)" }, { "math_id": 28, "text": " -\\frac{df(E)}{dE} = \\frac{1}{4k_{\\rm B}T} \\operatorname{sech}^2\\left( \\frac{E-\\mu}{2k_{\\rm B}T}\\right)" }, { "math_id": 29, "text": " \\mu" }, { "math_id": 30, "text": "3.5 k_{\\rm B}T" }, { "math_id": 31, "text": "\\sigma" }, { "math_id": 32, "text": "c(E) = e^2 D(E) \\nu(E)" }, { "math_id": 33, "text": "D(E)" }, { "math_id": 34, "text": "\\nu(E)" }, { "math_id": 35, "text": "E \\approx \\mu \\pm k_{\\rm B}T" }, { "math_id": 36, "text": "\\scriptstyle c(E) = c(\\mu) + c'(\\mu) (E-\\mu) + O[(E-\\mu)^2]" }, { "math_id": 37, "text": "S_{\\rm metal} = \\frac{\\pi^2 k_{\\rm B}^2 T}{-3 e} \\frac{c'(\\mu)}{c(\\mu)} + O[(k_{\\rm B}T)^3], \\quad \\sigma_{\\rm metal} = c(\\mu) + O[(k_{\\rm B}T)^2]." 
}, { "math_id": 38, "text": "\\scriptstyle c'(\\mu) / c(\\mu) " }, { "math_id": 39, "text": "\\scriptstyle 1/(k_{\\rm B}T_{\\rm F}) " }, { "math_id": 40, "text": "T_{\\rm F}" }, { "math_id": 41, "text": "\\scriptstyle S_{\\rm Fermi~gas} \\approx \\tfrac{\\pi^2 k_{\\rm B}}{-3e} T/T_{\\rm F}" }, { "math_id": 42, "text": "\\scriptstyle c'(\\mu) / c(\\mu)" }, { "math_id": 43, "text": "\\scriptstyle \\sigma_{\\rm metal} " }, { "math_id": 44, "text": " \\scriptstyle E_{\\rm C} - \\mu \\gg k_{\\rm B}T" }, { "math_id": 45, "text": " \\scriptstyle E_{\\rm C} " }, { "math_id": 46, "text": "\\scriptstyle -\\frac{df(E)}{dE} \\approx \\tfrac{1}{k_{\\rm B}T} e^{-(E-\\mu)/(k_{\\rm B}T)}" }, { "math_id": 47, "text": "\\scriptstyle c(E) = A_{\\rm C} (E - E_{\\rm C})^{a_{\\rm C}} " }, { "math_id": 48, "text": "\\scriptstyle A_{\\rm C}" }, { "math_id": 49, "text": "\\scriptstyle a_{\\rm C}" }, { "math_id": 50, "text": "S_{\\rm C} = \\frac{k_{\\rm B}}{-e} \\Big[ \\frac{E_{\\rm C} - \\mu}{k_{\\rm B}T} + a_{\\rm C} + 1\\Big], \\quad \\sigma_{\\rm C} = A_{\\rm C} (k_{\\rm B}T)^{a_{\\rm C}} e^{-\\frac{E_{\\rm C} - \\mu}{k_{\\rm B}T}} \\Gamma(a_{\\rm C}+1)." }, { "math_id": 51, "text": " \\scriptstyle \\mu - E_{\\rm V}\\gg kT" }, { "math_id": 52, "text": "\\scriptstyle c(E) = A_{\\rm V} (E_{\\rm V} - E)^{a_{\\rm V}} " }, { "math_id": 53, "text": "S_{\\rm V} = \\frac{k}{e} \\Big[ \\frac{\\mu - E_{\\rm V}}{k_{\\rm B}T} + a_{\\rm V} + 1\\Big], \\quad \\sigma_{\\rm V} = A_{\\rm V} (k_{\\rm B}T)^{a_{\\rm V}} e^{-\\frac{\\mu - E_{\\rm V}}{k_{\\rm B}T}} \\Gamma(a_{\\rm V}+1)." }, { "math_id": 54, "text": "\\scriptstyle a_{\\rm V}" }, { "math_id": 55, "text": "S_{\\rm semi} = \\frac{\\sigma_{\\rm C} S_{\\rm C} + \\sigma_{\\rm V} S_{\\rm V}}{\\sigma_{\\rm C} + \\sigma_{\\rm V}}, \\quad \\sigma_{\\rm semi} = \\sigma_{\\rm C} + \\sigma_{\\rm V} " }, { "math_id": 56, "text": " \\scriptstyle \\sigma S^2 " }, { "math_id": 57, "text": "T \\approx {1 \\over 5} \\theta_\\mathrm{D}" }, { "math_id": 58, "text": "\\scriptstyle \\theta_{\\rm D}" }, { "math_id": 59, "text": "\\scriptstyle \\theta_{\\rm D}/5" } ]
https://en.wikipedia.org/wiki?curid=791863
7919098
Grammage
The mass per unit of area of paper Grammage and basis weight, in the pulp and paper industry, are the area density of a paper product, that is, its mass per unit of area. Two ways of expressing grammage are commonly used: Grammage. In the metric system, the mass per unit area of all types of paper and paperboard is expressed in terms of grams per square metre (g/m2 or gsm). This quantity is commonly called "grammage" in both English and French, though printers in most English-speaking countries still refer to the "weight" of paper. formula_0 Typical office paper has , therefore a typical A4 sheet (&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄16 of a square metre) weighs . The abbreviation "gsm" instead of the standard "g/m2" symbol is also widely encountered in English-speaking countries. Typically grammage is measured in paper mill on-line by a quality control system and verified by laboratory measurement. Basis weight. In countries that use American paper sizes, a less verifiable measure known as "basis weight" is used in addition to or instead of grammage. The basis weight of paper is the density of paper expressed in terms of the mass of a ream of given dimensions and a sheet count. In the US system, the weight is specified in avoirdupois pounds and the sheet count of a paper ream is usually 500 sheets. However, the mass specified is not the mass of the ream that is sold to the customer. Instead, it is the mass of the uncut "basis ream" in which the sheets have some larger size (parent size). Often, that is a size used during the manufacturing process before the paper is cut to the dimensions in which it is sold. So, to compute the mass per area, one must know The standard dimensions and sheet count of a ream vary according to the type of paper. These "uncut" basis sizes are not normally labelled on the product, are not formally standardized, and therefore have to be guessed or inferred somehow from trading practice. Historically, this convention is the product of pragmatic considerations such as the size of a sheet mold. By using the same basis sheet size for the same type of paper, consumers can easily compare papers of differing brands. Twenty-pound bond paper is always lighter and thinner than 32-pound bond, no matter what its cut size, and 20-pound bond "letter size" and 20-pound bond "legal size" papers are the same weight paper with a different cut size. However, a sheet of common copy paper that has a basis weight of does not have the same mass as the same size sheet of coarse paper (newsprint). In the former case, the standard ream is 500 sheets of paper, and in the latter, 500 sheets of paper. Here are some basic ream sizes for various types of paper. Units are inches except where noted. Sheets can be cut into four sheets, a standard for business stationery known conventionally as "letter sized paper". So, the ream became commonly used. The book-paper ream developed because such a size can easily be cut into sixteen book sized sheets without significant waste (nominally before trimming and binding). Early newsprint presses printed sheets in size, and so the ream dimensions for newsprint became , with 500 sheets to a ream. Newsprint was made from ground wood pulp, and ground wood hanging paper (wallpaper) was made on newsprint machines. Newsprint was used as wrapping paper, and the first paper bags were made from newsprint. 
The newsprint ream standard also became the standard for packaging papers, even though in packaging papers kraft pulp, rather than ground wood, was used for greater strength. Paper weight is sometimes stated using the "#" symbol. For example, "20#" means "20 pounds per basis ream of that kind of paper". When the weight of a ream of paper is given in pounds, it is often accompanied by its "M weight" (M is 1000 in Roman numerals). The M weight is the weight (in pounds) of 1000 cut sheets. Paper suppliers will often charge by M weight, since it is always consistent within a specific paper size, and because it allows a simple weight calculation for shipping charges. For example, a 500-sheet ream of 20# copy paper may be specified "10 M". 1000 cut sheets (or two reams) will weigh , half of the four reams of cut paper resulting from the 20# basis ream of paper. Caliper. Paper thickness, or caliper, is a common measurement specified and required for certain printing applications. Since a paper's density is typically not directly known or specified, the thickness of a sheet of paper cannot be calculated from its grammage or basis weight alone. Instead, it is measured and specified separately as its caliper. However, paper thickness for most typical business papers might be similar across comparable brands. If thickness is not specified for a paper in question, it must be either measured or guessed based on a comparable paper's specification. Caliper is usually measured in micrometres (μm), or in the United States also in mils (1 mil = &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄1000 in = 25.4 μm). Commonly, 20-pound bond paper ranges between roughly in thickness. The paper density is calculated by dividing the grammage by the caliper, and is usually expressed in grams per cubic centimetre (g/cm3), which avoids the need for unit conversions between metres and micrometres (a conversion factor of 1,000,000). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
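To make the relationship between basis weight and grammage concrete, here is a small Python sketch of the conversion. The 500-sheet, 17 in × 22 in bond basis ream used in the example is an assumption supplied for illustration (a common convention for bond and writing papers), not a value stated elsewhere in this article.

```python
def basis_weight_to_grammage(pounds, length_in, width_in, sheets=500):
    """Convert a US basis weight (pounds per basis ream) to grammage in g/m^2."""
    GRAMS_PER_POUND = 453.59237
    SQ_METRES_PER_SQ_INCH = 0.0254 ** 2
    ream_area_m2 = sheets * length_in * width_in * SQ_METRES_PER_SQ_INCH
    return pounds * GRAMS_PER_POUND / ream_area_m2

# Example: 20# bond on an assumed 17 in x 22 in, 500-sheet basis ream.
print(round(basis_weight_to_grammage(20, 17, 22), 1), "g/m^2")   # about 75.2
```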
[ { "math_id": 0, "text": "\\text{grammage} = \\frac{\\text{mass} \\text{ (g)}}{\\text{length} \\text{ }(\\text{m}) \\times \\text{width} \\text{ } (\\text{m})}" } ]
https://en.wikipedia.org/wiki?curid=7919098
7919595
Clos network
Kind of multistage circuit-switching network In the field of telecommunications, a Clos network is a kind of multistage circuit-switching network which represents a theoretical idealization of practical, multistage switching systems. It was invented by Edson Erwin in 1938 and first formalized by the American engineer Charles Clos in 1952. By adding stages, a Clos network reduces the number of crosspoints required to compose a large crossbar switch. A Clos network topology (diagrammed below) is parameterized by three integers "n", "m", and "r": "n" represents the number of sources which feed into each of "r" ingress stage crossbar switches; each ingress stage crossbar switch has "m" outlets; and there are "m" middle stage crossbar switches. Circuit switching arranges a dedicated communications path for a connection between endpoints for the duration of the connection. This sacrifices total bandwidth available if the dedicated connections are poorly utilized, but makes the connection and bandwidth more predictable, and only introduces control overhead when the connections are initiated, rather than with every packet handled, as in modern packet-switched networks. When the Clos network was first devised, the number of crosspoints was a good approximation of the total cost of the switching system. While this was important for electromechanical crossbars, it became less relevant with the advent of VLSI, wherein the interconnects could be implemented either directly in silicon, or within a relatively small cluster of boards. Upon the advent of complex data centers, with huge interconnect structures, each based on optical fiber links, Clos networks regained importance. A subtype of Clos network, the Beneš network, has also found recent application in machine learning. Topology. Clos networks have three stages: the ingress stage, the middle stage, and the egress stage. Each stage is made up of a number of crossbar switches (see diagram below), often just called "crossbars". The network implements an r-way perfect shuffle between stages. Each call entering an ingress crossbar switch can be routed through any of the available middle stage crossbar switches, to the relevant egress crossbar switch. A middle stage crossbar is available for a particular new call if both the link connecting the ingress switch to the middle stage switch, and the link connecting the middle stage switch to the egress switch, are free. Clos networks are defined by three integers "n", "m", and "r". "n" represents the number of sources which feed into each of "r" ingress stage crossbar switches. Each ingress stage crossbar switch has "m" outlets, and there are "m" middle stage crossbar switches. There is exactly one connection between each ingress stage switch and each middle stage switch. There are "r" egress stage switches, each with "m" inputs and "n" outputs. Each middle stage switch is connected exactly once to each egress stage switch. Thus, the ingress stage has "r" switches, each of which has "n" inputs and "m" outputs. The middle stage has "m" switches, each of which has "r" inputs and "r" outputs. The egress stage has "r" switches, each of which has "m" inputs and "n" outputs. Blocking characteristics. The relative values of "m" and "n" define the blocking characteristics of the Clos network. Strict-sense nonblocking Clos networks ("m" ≥ 2"n"−1): the original 1953 Clos result. 
If "m" ≥ 2"n"−1, the Clos network is "strict-sense nonblocking", meaning that an unused input on an ingress switch can always be connected to an unused output on an egress switch, "without having to re-arrange existing calls". This is the result which formed the basis of Clos's classic 1953 paper. Assume that there is a free terminal on the input of an ingress switch, and this has to be connected to a free terminal on a particular egress switch. In the worst case, "n"−1 other calls are active on the ingress switch in question, and "n"−1 other calls are active on the egress switch in question. Assume, also in the worst case, that each of these calls passes through a different middle-stage switch. Hence in the worst case, 2"n"−2 of the middle stage switches are unable to carry the new call. Therefore, to ensure strict-sense nonblocking operation, another middle stage switch is required, making a total of 2"n"−1. The below diagram shows the worst case when the already established calls (blue and red) are passing different middle-stage switches, so another middle-stage switch is necessary to establish a call between the green input and output. Rearrangeably nonblocking Clos networks ("m" ≥ "n"). If "m" ≥ "n", the Clos network is "rearrangeably nonblocking", meaning that an unused input on an ingress switch can always be connected to an unused output on an egress switch, but for this to take place, existing calls may have to be rearranged by assigning them to different centre stage switches in the Clos network. To prove this, it is sufficient to consider "m" = "n", with the Clos network fully utilised; that is, "r"×"n" calls in progress. The proof shows how any permutation of these "r"×"n" input terminals onto "r"×"n" output terminals may be broken down into smaller permutations which may each be implemented by the individual crossbar switches in a Clos network with "m" = "n". The proof uses Hall's marriage theorem which is given this name because it is often explained as follows. Suppose there are "r" boys and "r" girls. The theorem states that if every subset of "k" boys (for each "k" such that 0 ≤ "k" ≤ "r") between them know "k" or more girls, then each boy can be paired off with a girl that he knows. It is obvious that this is a necessary condition for pairing to take place; what is surprising is that it is sufficient. In the context of a Clos network, each boy represents an ingress switch, and each girl represents an egress switch. A boy is said to know a girl if the corresponding ingress and egress switches carry the same call. Each set of "k" boys must know at least "k" girls because "k" ingress switches are carrying "k"×"n" calls and these cannot be carried by less than "k" egress switches. Hence each ingress switch can be paired off with an egress switch that carries the same call, via a one-to-one mapping. These "r" calls can be carried by one middle-stage switch. If this middle-stage switch is now removed from the Clos network, "m" is reduced by 1, and we are left with a smaller Clos network. The process then repeats itself until "m" = 1, and every call is assigned to a middle-stage switch. Blocking probabilities: the Lee and Jacobaeus approximations. Real telephone switching systems are rarely strict-sense nonblocking for reasons of cost, and they have a small probability of blocking, which may be evaluated by the Lee or Jacobaeus approximations, assuming no rearrangements of existing calls. 
Here, the potential number of other active calls on each ingress or egress switch is "u" = "n"−1. In the Lee approximation, it is assumed that each internal link between stages is already occupied by a call with a certain probability "p", and that this is completely independent between different links. This overestimates the blocking probability, particularly for small "r". The probability that a given internal link is busy is "p" = "uq"/"m", where "q" is the probability that an ingress or egress link is busy. Conversely, the probability that a link is free is 1−"p". The probability that the path connecting an ingress switch to an egress switch via a particular middle stage switch is free is the probability that both links are free, (1−"p")2. Hence the probability of it being unavailable is 1−(1−"p")2 = 2"p"−"p"2. The probability of blocking, or the probability that no such path is free, is then [1−(1−"p")2]"m". The Jacobaeus approximation is more accurate, and to see how it is derived, assume that some particular mapping of the calls entering the Clos network (input calls) onto middle stage switches already exists. This reflects the fact that only the "relative" configurations of the ingress and egress switches are of relevance. There are "i" input calls entering via the same ingress switch as the free input terminal to be connected, and there are "j" calls leaving the Clos network (output calls) via the same egress switch as the free output terminal to be connected. Hence 0 ≤ "i" ≤ "u", and 0 ≤ "j" ≤ "u". Let "A" be the number of ways of assigning the "j" output calls to the "m" middle stage switches. Let "B" be the number of these assignments which result in blocking. This is the number of cases in which the remaining "m"−"j" middle stage switches coincide with "m"−"j" of the "i" input calls, which is the number of subsets containing "m"−"j" of these calls. Then the probability of blocking is: formula_0 If "f""i" is the probability that "i" other calls are already active on the ingress switch, and "g""j" is the probability that "j" other calls are already active on the egress switch, the overall blocking probability is: formula_1 This may be evaluated with "f""i" and "g""j" each given by a binomial distribution. After considerable algebraic manipulation, this may be written as: formula_2 Clos networks with more than three stages. Clos networks may also be generalised to any odd number of stages. By replacing each centre stage crossbar switch with a 3-stage Clos network, Clos networks of five stages may be constructed. By applying the same process repeatedly, 7, 9, 11... stages are possible. Beneš network ("m" = "n" = 2). A rearrangeably nonblocking network of this type with "m" = "n" = 2 is generally called a "Beneš network", even though it was discussed and analyzed by others before Václav E. Beneš. The number of inputs and outputs is "N" = "r"×"n" = 2"r". Such networks have 2 log2"N" − 1 stages, each containing "N"/2 2×2 crossbar switches, and use a total of "N" log2"N" − "N"/2 2×2 crossbar switches. For example, an 8×8 Beneš network (i.e. with "N" = 8) is shown below; it has 2 log28 − 1 = 5 stages, each containing "N"/2 = 4 2×2 crossbar switches, and it uses a total of "N" log2"N" − "N"/2 = 20 2×2 crossbar switches. The central three stages consist of two smaller 4×4 Beneš networks, while in the center stage, each 2×2 crossbar switch may itself be regarded as a 2×2 Beneš network. This example therefore highlights the recursive construction of this type of network.
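The Lee and Jacobaeus estimates above are straightforward to evaluate numerically. The following Python sketch implements both directly from the formulas given earlier; the switch dimensions and the ingress/egress link occupancy "q" used in the example are arbitrary illustrative values.

```python
from math import comb

def lee_blocking(n, m, q):
    """Lee approximation: probability that all m two-link paths are busy."""
    u = n - 1
    p = u * q / m                       # occupancy of an internal link
    return (1 - (1 - p) ** 2) ** m

def jacobaeus_blocking(n, m, q):
    """Jacobaeus approximation, summing beta_ij over binomially distributed i and j."""
    u = n - 1
    total = 0.0
    for i in range(u + 1):
        f_i = comb(u, i) * q ** i * (1 - q) ** (u - i)
        for j in range(u + 1):
            g_j = comb(u, j) * q ** j * (1 - q) ** (u - j)
            if i + j >= m:              # otherwise some middle-stage switch is surely free
                total += f_i * g_j * comb(i, m - j) / comb(m, j)
    return total

# Illustrative example: n = 16 inputs per ingress switch, m = 16 middle switches,
# each ingress/egress link busy with probability q = 0.7.
print(lee_blocking(16, 16, 0.7))
print(jacobaeus_blocking(16, 16, 0.7))
```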
[ { "math_id": 0, "text": " \\beta_{ij} = \\frac{B}{A} = \\frac\n{\\left( \\begin{array}{c} i \\\\ m-j \\end{array} \\right)}\n{\\left( \\begin{array}{c} m \\\\ j \\end{array} \\right)}\n = \\frac{i!j!}{(i+j-m)!m!}" }, { "math_id": 1, "text": " P_B = \\sum_{i=0}^{u}\\sum_{j=0}^{u}f_ig_j\\beta_{ij} " }, { "math_id": 2, "text": "P_B = \\frac{(u!)^2(2-p)^{2u-m}p^m}{m!(2u-m)!}" } ]
https://en.wikipedia.org/wiki?curid=7919595
7921
Derivative
Instantaneous rate of change (mathematics) In mathematics, the derivative is a fundamental tool that quantifies the sensitivity of change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation. There are multiple different notations for differentiation, two of the most commonly used being Leibniz notation and prime notation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances. Derivatives can be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. Definition. As a limit. A function of a real variable formula_0 is differentiable at a point formula_1 of its domain, if its domain contains an open interval containing formula_1, and the limit formula_2 exists. This means that, for every positive real number formula_3, there exists a positive real number formula_4 such that, for every formula_5 such that formula_6 and formula_7 then formula_8 is defined, and formula_9 where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit. If the function formula_10 is differentiable at formula_1, that is if the limit formula_11 exists, then this limit is called the "derivative" of formula_10 at formula_1. Multiple notations for the derivative exist. The derivative of formula_10 at formula_1 can be denoted formula_12, read as "formula_10 prime of formula_1"; or it can be denoted formula_13, read as "the derivative of formula_10 with respect to formula_14 at formula_1" or "formula_15 by (or over) formula_16 at formula_1". See below. If formula_10 is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point formula_14 to the value of the derivative of formula_10 at formula_14. This function is written formula_17 and is called the "derivative function" or the "derivative of" formula_10. 
The function formula_10 sometimes has a derivative at most, but not all, points of its domain. The function whose value at formula_1 equals formula_18 whenever formula_18 is defined and elsewhere is undefined is also called the derivative of formula_10. It is still a function, but its domain may be smaller than the domain of formula_10. For example, let formula_19 be the squaring function: formula_20. Then the quotient in the definition of the derivative is formula_21 The division in the last step is valid as long as formula_22. The closer formula_23 is to formula_24, the closer this expression becomes to the value formula_25. The limit exists, and for every input formula_26 the limit is formula_25. So, the derivative of the squaring function is the doubling function: formula_27. The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function formula_19, specifically the points formula_28 and formula_29. As formula_23 is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of formula_19 at formula_26. In other words, the derivative is the slope of the tangent. Using infinitesimals. One way to think of the derivative formula_13 is as the ratio of an infinitesimal change in the output of the function formula_19 to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form formula_30 for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the formula_31 in the Leibniz notation. Thus, the derivative of formula_32 becomes formula_33 for an arbitrary infinitesimal formula_34, where formula_35 denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function formula_20 as an example again, formula_36 Continuity and differentiability. If formula_10 is differentiable at formula_1, then formula_10 must also be continuous at formula_1. As an example, choose a point formula_1 and let formula_10 be the step function that returns the value 1 for all formula_14 less than formula_1, and returns a different value 10 for all formula_14 greater than or equal to formula_1. The function formula_10 cannot have a derivative at formula_1. If formula_5 is negative, then formula_37 is on the low part of the step, so the secant line from formula_1 to formula_37 is very steep; as formula_5 tends to zero, the slope tends to infinity. If formula_5 is positive, then formula_37 is on the high part of the step, so the secant line from formula_1 to formula_37 has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by formula_38 is continuous at formula_39, but it is not differentiable there. 
If formula_5 is positive, then the slope of the secant line from 0 to formula_5 is one; if formula_5 is negative, then the slope of the secant line from formula_40 to formula_5 is formula_41. This can be seen graphically as a "kink" or a "cusp" in the graph at formula_42. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function given by formula_43 is not differentiable at formula_39. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point. Notation. One common symbol for the derivative of a function is Leibniz notation. They are written as the quotient of two differentials formula_44 and formula_16, which were introduced by Gottfried Wilhelm Leibniz in 1675. It is still commonly used when the equation formula_45 is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by formula_46, read as "the derivative of formula_47 with respect to formula_14". This derivative can alternately be treated as the application of a differential operator to a function, formula_48 Higher derivatives are expressed using the notation formula_49 for the formula_50-th derivative of formula_51. These are abbreviations for multiple applications of the derivative operator; for example, formula_52 Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if formula_53 and formula_54 then formula_55 Another common notation for differentiation is by using the prime mark in the symbol of a function formula_0. This is known as "prime notation", due to Joseph-Louis Lagrange. The first derivative is written as formula_56, read as "formula_10 prime of formula_14", or formula_57, read as "formula_47 prime". Similarly, the second and the third derivatives can be written as formula_58 and formula_59, respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as formula_60 or formula_61 The latter notation generalizes to yield the notation formula_62 for the formula_50-th derivative of formula_19. In Newton's notation or the "dot notation," a dot is placed over a symbol to represent a time derivative. If formula_47 is a function of formula_63, then the first and second derivatives can be written as formula_64 and formula_65, respectively. This notation is used exclusively for derivatives with respect to time or arc length. 
It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables. Another notation is "D-notation", which represents the differential operator by the symbol formula_66 The first derivative is written formula_67 and higher derivatives are written with a superscript, so the formula_50-th derivative is formula_68 This notation is sometimes called "Euler notation", although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable being differentiated with respect to is indicated with a subscript; for example, given the function formula_69 its partial derivative with respect to formula_70 can be written formula_71 or formula_72 Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. formula_73 and formula_74. Rules of computation. In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using "rules" for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation. Rules for basic functions. The following are the rules for the derivatives of the most common basic functions. Here, formula_1 is a real number, and formula_75 is the mathematical constant, approximately 2.71828. Rules for combined functions. Here, formula_10 and formula_91 are functions. The following are some of the most basic rules for deducing the derivative of functions from derivatives of basic functions. Computation example. The derivative of the function given by formula_103 is formula_104 Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions formula_105, formula_106, formula_107, formula_108, and formula_109, as well as the constant formula_110, were also used. Higher-order derivatives. "Higher order derivatives" means that a function is differentiated repeatedly. Given that formula_10 is a differentiable function, the derivative of formula_10 is the first derivative, denoted as formula_17. The derivative of formula_17 is the second derivative, denoted as formula_58, and the derivative of formula_58 is the third derivative, denoted as formula_59. By continuing this process, one obtains, if it exists, the formula_111-th derivative as the derivative of the formula_112-th derivative, also called the "derivative of order formula_111". As has been discussed above, the formula_111-th derivative of a function formula_10 may be denoted as formula_113. A function that has formula_114 successive derivatives is called "formula_114 times differentiable". If the formula_114-th derivative is continuous, then the function is said to be of differentiability class formula_115. A function that has infinitely many derivatives is called "infinitely differentiable" or "smooth". One example of an infinitely differentiable function is a polynomial; differentiating it repeatedly eventually yields a constant function, and every subsequent derivative of that function is zero. In one of its applications, the higher-order derivatives may have specific interpretations in physics.
Suppose that a function represents the position of an object at a given time. The first derivative of that function is the velocity of the object with respect to time, the second derivative of the function is the acceleration of the object with respect to time, and the third derivative is the jerk. In other dimensions. Vector-valued functions. A vector-valued function formula_116 of a real variable sends real numbers to vectors in some vector space formula_117. A vector-valued function can be split up into its coordinate functions formula_118, meaning that formula_119. This includes, for example, parametric curves in formula_120 or formula_121. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of formula_122 is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is, formula_123 if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of formula_116 exists for every value of formula_63, then formula_124 is another vector-valued function. Partial derivatives. Functions can depend upon more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function formula_125 with respect to the variable formula_70 is variously denoted by &lt;templatestyles src="Block indent/styles.css"/&gt;formula_126, formula_127, formula_128, formula_129, or formula_130, among other possibilities. It can be thought of as the rate of change of the function in the formula_70-direction. Here ∂ is a rounded "d" called the partial derivative symbol. To distinguish it from the letter "d", ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let formula_131, then the partial derivatives of the function formula_10 with respect to the variables formula_14 and formula_47 are, respectively: formula_132 In general, the partial derivative of a function formula_133 in the direction formula_134 at the point formula_135 is defined to be: formula_136 This is fundamental for the study of functions of several real variables. Let formula_133 be such a real-valued function. If all the partial derivatives of formula_10 with respect to formula_137 are defined at the point formula_138, these partial derivatives define the vector formula_139 which is called the gradient of formula_10 at formula_1. If formula_10 is differentiable at every point in some domain, then the gradient is a vector-valued function formula_140 that maps the point formula_138 to the vector formula_141. Consequently, the gradient determines a vector field. Directional derivatives. If formula_10 is a real-valued function on formula_117, then the partial derivatives of formula_10 measure its variation in the direction of the coordinate axes. For example, if formula_10 is a function of formula_14 and formula_47, then its partial derivatives measure the variation in formula_10 in the formula_14 and formula_47 directions. However, they do not directly measure the variation of formula_10 in any other direction, such as along the diagonal line formula_142. These are measured using directional derivatives.
Choose a vector formula_143, then the directional derivative of formula_10 in the direction of formula_144 at the point formula_145 is: formula_146 If all the partial derivatives of formula_10 exist and are continuous at formula_145, then they determine the directional derivative of formula_10 in the direction formula_144 by the formula: formula_147 Total derivative, total differential and Jacobian matrix. When formula_10 is a function from an open subset of formula_117 to formula_148, then the directional derivative of formula_10 in a chosen direction is the best linear approximation to formula_10 at that point and in that direction. However, when formula_149, no single directional derivative can give a complete picture of the behavior of formula_10. The total derivative gives a complete picture by considering all directions at once. That is, for any vector formula_144 starting at formula_150, the linear approximation formula holds: formula_151 Similarly with the single-variable derivative, formula_152 is chosen so that the error in this approximation is as small as possible. The total derivative of formula_10 at formula_150 is the unique linear transformation formula_153 such that formula_154 Here formula_155 is a vector in formula_117, so the norm in the denominator is the standard length on formula_117. However, formula_156 is a vector in formula_148, and the norm in the numerator is the standard length on formula_148. If formula_157 is a vector starting at formula_1, then formula_158 is called the pushforward of formula_144 by formula_10. If the total derivative exists at formula_150, then all the partial derivatives and directional derivatives of formula_10 exist at formula_150, and for all formula_144, formula_159 is the directional derivative of formula_10 in the direction formula_144. If formula_10 is written using coordinate functions, so that formula_160, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of formula_10 at formula_150: formula_161 Generalizations. The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
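As a numerical sanity check of the computation example given earlier, the following Python sketch compares the symbolic derivative of formula_103 with a symmetric difference quotient, which approximates the limit appearing in the definition of the derivative. The sample points and step size are arbitrary choices.

```python
import math

def f(x):
    # The function from the computation example: x^4 + sin(x^2) - ln(x) e^x + 7.
    return x**4 + math.sin(x**2) - math.log(x) * math.exp(x) + 7

def f_prime(x):
    # Its symbolic derivative, as derived in the text.
    return 4*x**3 + 2*x*math.cos(x**2) - math.exp(x)/x - math.log(x)*math.exp(x)

def difference_quotient(func, x, h=1e-6):
    # Symmetric difference quotient; tends to the derivative as h tends to 0.
    return (func(x + h) - func(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0):
    print(x, f_prime(x), difference_quotient(f, x))   # the two columns agree closely
```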
[ { "math_id": 0, "text": " f(x) " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": "L=\\lim_{h \\to 0}\\frac{f(a+h)-f(a)}h " }, { "math_id": 3, "text": "\\varepsilon" }, { "math_id": 4, "text": "\\delta" }, { "math_id": 5, "text": " h " }, { "math_id": 6, "text": "|h| < \\delta" }, { "math_id": 7, "text": "h\\ne 0" }, { "math_id": 8, "text": "f(a+h)" }, { "math_id": 9, "text": "\\left|L-\\frac{f(a+h)-f(a)}h\\right|<\\varepsilon," }, { "math_id": 10, "text": " f " }, { "math_id": 11, "text": " L " }, { "math_id": 12, "text": "f'(a)" }, { "math_id": 13, "text": "\\frac{df}{dx}(a)" }, { "math_id": 14, "text": " x " }, { "math_id": 15, "text": " df " }, { "math_id": 16, "text": " dx " }, { "math_id": 17, "text": " f' " }, { "math_id": 18, "text": " f'(a) " }, { "math_id": 19, "text": "f" }, { "math_id": 20, "text": "f(x) = x^2" }, { "math_id": 21, "text": "\\frac{f(a+h) - f(a)}{h} = \\frac{(a+h)^2 - a^2}{h} = \\frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h." }, { "math_id": 22, "text": "h \\neq 0" }, { "math_id": 23, "text": "h" }, { "math_id": 24, "text": "0" }, { "math_id": 25, "text": "2a" }, { "math_id": 26, "text": "a" }, { "math_id": 27, "text": "f'(x) = 2x" }, { "math_id": 28, "text": "(a,f(a))" }, { "math_id": 29, "text": "(a+h, f(a+h))" }, { "math_id": 30, "text": "1 + 1 + \\cdots + 1 " }, { "math_id": 31, "text": "d" }, { "math_id": 32, "text": "f(x)" }, { "math_id": 33, "text": "f'(x) = \\operatorname{st}\\left( \\frac{f(x + dx) - f(x)}{dx} \\right)" }, { "math_id": 34, "text": "dx" }, { "math_id": 35, "text": "\\operatorname{st}" }, { "math_id": 36, "text": " \\begin{align}\n f'(x) &= \\operatorname{st}\\left(\\frac{x^2 + 2x \\cdot dx + (dx)^2 -x^2}{dx}\\right) \\\\\n &= \\operatorname{st}\\left(\\frac{2x \\cdot dx + (dx)^2}{dx}\\right) \\\\\n &= \\operatorname{st}\\left(\\frac{2x \\cdot dx}{dx} + \\frac{(dx)^2}{dx}\\right) \\\\\n &= \\operatorname{st}\\left(2x + dx\\right) \\\\\n &= 2x.\n\\end{align} " }, { "math_id": 37, "text": " a + h " }, { "math_id": 38, "text": " f(x) = |x| " }, { "math_id": 39, "text": " x = 0 " }, { "math_id": 40, "text": " 0 " }, { "math_id": 41, "text": " -1 " }, { "math_id": 42, "text": "x=0" }, { "math_id": 43, "text": " f(x) = x^{1/3} " }, { "math_id": 44, "text": " dy " }, { "math_id": 45, "text": "y=f(x)" }, { "math_id": 46, "text": "\\frac{dy}{dx} " }, { "math_id": 47, "text": " y " }, { "math_id": 48, "text": "\\frac{dy}{dx} = \\frac{d}{dx} f(x)." }, { "math_id": 49, "text": " \\frac{d^n y}{dx^n} " }, { "math_id": 50, "text": "n" }, { "math_id": 51, "text": "y = f(x)" }, { "math_id": 52, "text": "\\frac{d^2y}{dx^2} = \\frac{d}{dx}\\Bigl(\\frac{d}{dx} f(x)\\Bigr)." }, { "math_id": 53, "text": "u = g(x)" }, { "math_id": 54, "text": "y = f(g(x))" }, { "math_id": 55, "text": "\\frac{dy}{dx} = \\frac{dy}{du} \\cdot \\frac{du}{dx}." }, { "math_id": 56, "text": " f'(x) " }, { "math_id": 57, "text": " y' " }, { "math_id": 58, "text": " f'' " }, { "math_id": 59, "text": " f''' " }, { "math_id": 60, "text": "f^{\\mathrm{iv}}" }, { "math_id": 61, "text": " f^{(4)}." }, { "math_id": 62, "text": "f^{(n)}" }, { "math_id": 63, "text": " t " }, { "math_id": 64, "text": "\\dot{y}" }, { "math_id": 65, "text": "\\ddot{y}" }, { "math_id": 66, "text": "D." }, { "math_id": 67, "text": "D f(x)" }, { "math_id": 68, "text": "D^nf(x)." }, { "math_id": 69, "text": "u = f(x, y)," }, { "math_id": 70, "text": "x" }, { "math_id": 71, "text": "D_x u" }, { "math_id": 72, "text": "D_x f(x,y)." 
}, { "math_id": 73, "text": "D_{xy} f(x,y) = \\frac{\\partial}{\\partial y} \\Bigl(\\frac{\\partial}{\\partial x} f(x,y) \\Bigr)" }, { "math_id": 74, "text": "D_{x}^2 f(x,y) = \\frac{\\partial}{\\partial x} \\Bigl(\\frac{\\partial}{\\partial x} f(x,y) \\Bigr)" }, { "math_id": 75, "text": " e " }, { "math_id": 76, "text": " \\frac{d}{dx}x^a = ax^{a-1} " }, { "math_id": 77, "text": " \\frac{d}{dx}e^x = e^x " }, { "math_id": 78, "text": " \\frac{d}{dx}a^x = a^x\\ln(a) " }, { "math_id": 79, "text": " a > 0 " }, { "math_id": 80, "text": " \\frac{d}{dx}\\ln(x) = \\frac{1}{x} " }, { "math_id": 81, "text": " x > 0 " }, { "math_id": 82, "text": " \\frac{d}{dx}\\log_a(x) = \\frac{1}{x\\ln(a)} " }, { "math_id": 83, "text": " x, a > 0 " }, { "math_id": 84, "text": " \\frac{d}{dx}\\sin(x) = \\cos(x) " }, { "math_id": 85, "text": " \\frac{d}{dx}\\cos(x) = -\\sin(x) " }, { "math_id": 86, "text": " \\frac{d}{dx}\\tan(x) = \\sec^2(x) = \\frac{1}{\\cos^2(x)} = 1 + \\tan^2(x) " }, { "math_id": 87, "text": " \\frac{d}{dx}\\arcsin(x) = \\frac{1}{\\sqrt{1-x^2}} " }, { "math_id": 88, "text": " -1 < x < 1 " }, { "math_id": 89, "text": " \\frac{d}{dx}\\arccos(x)= -\\frac{1}{\\sqrt{1-x^2}} " }, { "math_id": 90, "text": " \\frac{d}{dx}\\arctan(x)= \\frac{1}{{1+x^2}} " }, { "math_id": 91, "text": " g " }, { "math_id": 92, "text": "f'(x) = 0. " }, { "math_id": 93, "text": "(\\alpha f + \\beta g)' = \\alpha f' + \\beta g' " }, { "math_id": 94, "text": "g" }, { "math_id": 95, "text": "\\alpha" }, { "math_id": 96, "text": "\\beta" }, { "math_id": 97, "text": "(fg)' = f 'g + fg' " }, { "math_id": 98, "text": "(\\alpha f)' = \\alpha f'" }, { "math_id": 99, "text": "\\alpha' f = 0 \\cdot f = 0" }, { "math_id": 100, "text": "\\left(\\frac{f}{g} \\right)' = \\frac{f'g - fg'}{g^2}" }, { "math_id": 101, "text": "f(x) = h(g(x))" }, { "math_id": 102, "text": "f'(x) = h'(g(x)) \\cdot g'(x). 
" }, { "math_id": 103, "text": "f(x) = x^4 + \\sin \\left(x^2\\right) - \\ln(x) e^x + 7" }, { "math_id": 104, "text": " \\begin{align}\n f'(x) &= 4 x^{(4-1)}+ \\frac{d\\left(x^2\\right)}{dx}\\cos \\left(x^2\\right) - \\frac{d\\left(\\ln {x}\\right)}{dx} e^x - \\ln(x) \\frac{d\\left(e^x\\right)}{dx} + 0 \\\\\n &= 4x^3 + 2x\\cos \\left(x^2\\right) - \\frac{1}{x} e^x - \\ln(x) e^x.\n\\end{align} " }, { "math_id": 105, "text": " x^2 " }, { "math_id": 106, "text": " x^4 " }, { "math_id": 107, "text": " \\sin (x) " }, { "math_id": 108, "text": " \\ln (x) " }, { "math_id": 109, "text": " \\exp(x) = e^x " }, { "math_id": 110, "text": " 7 " }, { "math_id": 111, "text": " n " }, { "math_id": 112, "text": " (n - 1) " }, { "math_id": 113, "text": " f^{(n)} " }, { "math_id": 114, "text": " k " }, { "math_id": 115, "text": " C^k " }, { "math_id": 116, "text": " \\mathbf{y} " }, { "math_id": 117, "text": " \\R^n " }, { "math_id": 118, "text": " y_1(t), y_2(t), \\dots, y_n(t) " }, { "math_id": 119, "text": " \\mathbf{y} = (y_1(t), y_2(t), \\dots, y_n(t))" }, { "math_id": 120, "text": " \\R^2 " }, { "math_id": 121, "text": " \\R^3 " }, { "math_id": 122, "text": " \\mathbf{y}(t) " }, { "math_id": 123, "text": " \\mathbf{y}'(t)=\\lim_{h\\to 0}\\frac{\\mathbf{y}(t+h) - \\mathbf{y}(t)}{h}, " }, { "math_id": 124, "text": " \\mathbf{y}' " }, { "math_id": 125, "text": "f(x, y, \\dots)" }, { "math_id": 126, "text": "f_x" }, { "math_id": 127, "text": "f'_x" }, { "math_id": 128, "text": "\\partial_x f" }, { "math_id": 129, "text": "\\frac{\\partial}{\\partial x}f" }, { "math_id": 130, "text": "\\frac{\\partial f}{\\partial x}" }, { "math_id": 131, "text": "f(x,y) = x^2 + xy + y^2" }, { "math_id": 132, "text": " \\frac{\\partial f}{\\partial x} = 2x + y, \\qquad \\frac{\\partial f}{\\partial y} = x + 2y." }, { "math_id": 133, "text": " f(x_1, \\dots, x_n) " }, { "math_id": 134, "text": " x_i " }, { "math_id": 135, "text": "(a_1, \\dots, a_n) " }, { "math_id": 136, "text": "\\frac{\\partial f}{\\partial x_i}(a_1,\\ldots,a_n) = \\lim_{h \\to 0}\\frac{f(a_1,\\ldots,a_i+h,\\ldots,a_n) - f(a_1,\\ldots,a_i,\\ldots,a_n)}{h}." }, { "math_id": 137, "text": " x_j " }, { "math_id": 138, "text": " (a_1, \\dots, a_n) " }, { "math_id": 139, "text": "\\nabla f(a_1, \\ldots, a_n) = \\left(\\frac{\\partial f}{\\partial x_1}(a_1, \\ldots, a_n), \\ldots, \\frac{\\partial f}{\\partial x_n}(a_1, \\ldots, a_n)\\right)," }, { "math_id": 140, "text": " \\nabla f " }, { "math_id": 141, "text": " \\nabla f(a_1, \\dots, a_n) " }, { "math_id": 142, "text": " y = x " }, { "math_id": 143, "text": " \\mathbf{v} = (v_1,\\ldots,v_n) " }, { "math_id": 144, "text": " \\mathbf{v} " }, { "math_id": 145, "text": " \\mathbf{x} " }, { "math_id": 146, "text": " D_{\\mathbf{v}}{f}(\\mathbf{x}) = \\lim_{h \\rightarrow 0}{\\frac{f(\\mathbf{x} + h\\mathbf{v}) - f(\\mathbf{x})}{h}}." }, { "math_id": 147, "text": " D_{\\mathbf{v}}{f}(\\mathbf{x}) = \\sum_{j=1}^n v_j \\frac{\\partial f}{\\partial x_j}. " }, { "math_id": 148, "text": " \\R^m " }, { "math_id": 149, "text": " n > 1 " }, { "math_id": 150, "text": " \\mathbf{a} " }, { "math_id": 151, "text": "f(\\mathbf{a} + \\mathbf{v}) \\approx f(\\mathbf{a}) + f'(\\mathbf{a})\\mathbf{v}." }, { "math_id": 152, "text": " f'(\\mathbf{a}) " }, { "math_id": 153, "text": " f'(\\mathbf{a}) \\colon \\R^n \\to \\R^m " }, { "math_id": 154, "text": "\\lim_{\\mathbf{h}\\to 0} \\frac{\\lVert f(\\mathbf{a} + \\mathbf{h}) - (f(\\mathbf{a}) + f'(\\mathbf{a})\\mathbf{h})\\rVert}{\\lVert\\mathbf{h}\\rVert} = 0." 
}, { "math_id": 155, "text": " \\mathbf{h} " }, { "math_id": 156, "text": " f'(\\mathbf{a}) \\mathbf{h} " }, { "math_id": 157, "text": " v " }, { "math_id": 158, "text": " f'(\\mathbf{a}) \\mathbf{v} " }, { "math_id": 159, "text": " f'(\\mathbf{a})\\mathbf{v} " }, { "math_id": 160, "text": " f = (f_1, f_2, \\dots, f_m) " }, { "math_id": 161, "text": "f'(\\mathbf{a}) = \\operatorname{Jac}_{\\mathbf{a}} = \\left(\\frac{\\partial f_i}{\\partial x_j}\\right)_{ij}." }, { "math_id": 162, "text": "\\C" }, { "math_id": 163, "text": "\\R^2" }, { "math_id": 164, "text": "z" }, { "math_id": 165, "text": "x+iy" }, { "math_id": 166, "text": "M" }, { "math_id": 167, "text": "\\R^3" }, { "math_id": 168, "text": "f:M\\to N" }, { "math_id": 169, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=7921
7922560
Stagnation enthalpy
In thermodynamics and fluid mechanics, the stagnation enthalpy of a fluid is the static enthalpy of the fluid at a stagnation point. The stagnation enthalpy is also called total enthalpy. At a point where the flow does not stagnate, it corresponds to the static enthalpy the fluid would have at that point if it were brought to rest from velocity formula_0 isentropically. That means all the kinetic energy is converted without losses and added to the local static enthalpy. When the potential energy of the fluid is negligible, the mass-specific stagnation enthalpy represents the total energy of a flowing fluid stream per unit mass. Stagnation enthalpy, or total enthalpy, is the static enthalpy (associated with the temperature and static pressure at that point) plus the enthalpy associated with the dynamic pressure, or velocity. This can be expressed in a formula in various ways. Often it is expressed in specific quantities, where specific means mass-specific, to get an intensive quantity: formula_1 where: formula_2 mass-specific total enthalpy, in [J/kg] formula_3 mass-specific static enthalpy, in [J/kg] formula_4 fluid velocity at the point of interest, in [m/s] formula_5 mass-specific kinetic energy, in [J/kg] The volume-specific version of this equation (in units of energy per volume, [J/m^3]) is obtained by multiplying the equation by the fluid density formula_6: formula_7 where: formula_8 volume-specific total enthalpy, in [J/m^3] formula_9 volume-specific static enthalpy, in [J/m^3] formula_4 fluid velocity at the point of interest, in [m/s] formula_10 fluid density at the point of interest, in [kg/m^3] formula_11 volume-specific kinetic energy, in [J/m^3] The non-specific version of this equation, in which extensive quantities are used, is: formula_12 where: formula_13 total enthalpy, in [J] formula_14 static enthalpy, in [J] formula_15 fluid mass, in [kg] formula_4 fluid velocity at the point of interest, in [m/s] formula_16 kinetic energy, in [J] The subscript ‘0’ usually denotes the stagnation condition and is used as such here. Enthalpy is the energy associated with the temperature plus the energy associated with the pressure. The stagnation enthalpy adds a term associated with the kinetic energy of the fluid mass. The total enthalpy for a real or ideal gas does not change across a shock. The total enthalpy cannot be measured directly. Instead, the static enthalpy and the fluid velocity can be measured. Static enthalpy is often used in the energy equation for a fluid. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. http://ocw.mit.edu/ans7870/16/16.unified/thermoF03/chapter_6.htm
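A minimal Python sketch of the mass-specific relation formula_1 is given below; the static enthalpy and velocity in the example are illustrative numbers only, not measurements.

```python
def stagnation_enthalpy(h_static, velocity):
    """Mass-specific total (stagnation) enthalpy in J/kg.

    h_static -- mass-specific static enthalpy, J/kg
    velocity -- local flow velocity, m/s
    """
    return h_static + velocity ** 2 / 2

# Illustrative numbers: static enthalpy of 300 kJ/kg at a flow velocity of 200 m/s.
print(stagnation_enthalpy(300e3, 200.0))   # 320000.0 J/kg, i.e. 320 kJ/kg
```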
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": " \nh_0 = h + \\frac{V^2}{2}\n" }, { "math_id": 2, "text": "h_0 =" }, { "math_id": 3, "text": "h =" }, { "math_id": 4, "text": "V =" }, { "math_id": 5, "text": "\\frac{V^2}{2} =" }, { "math_id": 6, "text": "\\rho" }, { "math_id": 7, "text": " \nh_0^* = h^* + \\rho\\frac{V^2}{2}\n" }, { "math_id": 8, "text": "h_0^* =" }, { "math_id": 9, "text": "h^* =" }, { "math_id": 10, "text": "\\rho =" }, { "math_id": 11, "text": "\\rho\\frac{V^2}{2} =" }, { "math_id": 12, "text": " \nH_0 = H + m\\frac{V^2}{2}\n" }, { "math_id": 13, "text": "H_0 =" }, { "math_id": 14, "text": "H =" }, { "math_id": 15, "text": "m =" }, { "math_id": 16, "text": "m\\frac{V^2}{2} =" } ]
https://en.wikipedia.org/wiki?curid=7922560
7926008
Quantum finite automaton
Quantum analog of probabilistic automata In quantum computing, quantum finite automata (QFA) or quantum state machines are a quantum analog of probabilistic automata or a Markov decision process. They provide a mathematical abstraction of real-world quantum computers. Several types of automata may be defined, including "measure-once" and "measure-many" automata. Quantum finite automata can also be understood as the quantization of subshifts of finite type, or as a quantization of Markov chains. QFAs are, in turn, special cases of geometric finite automata or topological finite automata. The automata work by receiving a finite-length string formula_0 of letters formula_1 from a finite alphabet formula_2, and assigning to each such string a probability formula_3 indicating the probability of the automaton being in an accept state; that is, indicating whether the automaton accepted or rejected the string. The languages accepted by QFAs are not the regular languages of deterministic finite automata, nor are they the stochastic languages of probabilistic finite automata. Study of these quantum languages remains an active area of research. Informal description. There is a simple, intuitive way of understanding quantum finite automata. One begins with a graph-theoretic interpretation of deterministic finite automata (DFA). A DFA can be represented as a directed graph, with states as nodes in the graph, and arrows representing state transitions. Each arrow is labelled with a possible input symbol, so that, given a specific state and an input symbol, the arrow points at the next state. One way of representing such a graph is by means of a set of adjacency matrices, with one matrix for each input symbol. In this case, the list of possible DFA states is written as a column vector. For a given input symbol, the adjacency matrix indicates how any given state (row in the state vector) will transition to the next state; a state transition is given by matrix multiplication. One needs a distinct adjacency matrix for each possible input symbol, since each input symbol can result in a different transition. The entries in the adjacency matrix must be zero's and one's. For any given column in the matrix, only one entry can be non-zero: this is the entry that indicates the next (unique) state transition. Similarly, the state of the system is a column vector, in which only one entry is non-zero: this entry corresponds to the current state of the system. Let formula_2 denote the set of input symbols. For a given input symbol formula_4, write formula_5 as the adjacency matrix that describes the evolution of the DFA to its next state. The set formula_6 then completely describes the state transition function of the DFA. Let "Q" represent the set of possible states of the DFA. If there are "N" states in "Q", then each matrix formula_5 is "N" by "N"-dimensional. The initial state formula_7 corresponds to a column vector with a one in the "q"0'th row. A general state "q" is then a column vector with a one in the "q"'th row. By abuse of notation, let "q"0 and "q" also denote these two vectors. Then, after reading input symbols formula_8 from the input tape, the state of the DFA will be given by formula_9 The state transitions are given by ordinary matrix multiplication (that is, multiply "q"0 by formula_5, "etc."); the order of application is 'reversed' only because we follow the standard notation of linear algebra. 
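The following Python sketch makes the matrix picture concrete: states are one-hot column vectors, each input letter has its own 0–1 adjacency matrix (columns indexed by the current state, rows by the next state), and reading a string is repeated matrix multiplication. The two-state machine used here is a hypothetical example, not the automaton discussed later in the article.

```python
import numpy as np

# Transition matrices of a hypothetical two-state DFA over the alphabet {a, b}.
U = {'a': np.array([[1, 0],     # on 'a': state 0 -> 0, state 1 -> 1 (identity)
                    [0, 1]]),
     'b': np.array([[0, 1],     # on 'b': state 0 -> 1, state 1 -> 0 (swap)
                    [1, 0]])}

q = np.array([1, 0])            # start in state 0, written as a one-hot column vector
for letter in "abb":            # read the input string one letter at a time
    q = U[letter] @ q           # one matrix multiplication per input letter
print(q)                        # [1 0]: after "abb" the machine is back in state 0
```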
The above description of a DFA, in terms of linear operators and vectors, almost begs for generalization, by replacing the state-vector "q" by some general vector, and the matrices formula_10 by some general operators. This is essentially what a QFA does: it replaces "q" by a unit vector, and the formula_10 by unitary matrices. Other, similar generalizations also become obvious: the vector "q" can be some distribution on a manifold; the set of transition matrices become automorphisms of the manifold; this defines a topological finite automaton. Similarly, the matrices could be taken as automorphisms of a homogeneous space; this defines a geometric finite automaton. Before moving on to the formal description of a QFA, there are two noteworthy generalizations that should be mentioned and understood. The first is the non-deterministic finite automaton (NFA). In this case, the vector "q" is replaced by a vector that can have more than one entry that is non-zero. Such a vector then represents an element of the power set of "Q"; it’s just an indicator function on "Q". Likewise, the state transition matrices formula_10 are defined in such a way that a given column can have several non-zero entries in it. Equivalently, the multiply-add operations performed during component-wise matrix multiplication should be replaced by Boolean and-or operations, that is, so that one is working with a ring of characteristic 2. A well-known theorem states that, for each DFA, there is an equivalent NFA, and vice versa. This implies that the set of languages that can be recognized by DFA's and NFA's are the same; these are the regular languages. In the generalization to QFAs, the set of recognized languages will be different. Describing that set is one of the outstanding research problems in QFA theory. Another generalization that should be immediately apparent is to use a stochastic matrix for the transition matrices, and a probability vector for the state; this gives a probabilistic finite automaton. The entries in the state vector must be real numbers, positive, and sum to one, in order for the state vector to be interpreted as a probability. The transition matrices must preserve this property: this is why they must be stochastic. Each state vector should be imagined as specifying a point in a simplex; thus, this is a topological automaton, with the simplex being the manifold, and the stochastic matrices being linear automorphisms of the simplex onto itself. Since each transition is (essentially) independent of the previous (if we disregard the distinction between accepted and rejected languages), the PFA essentially becomes a kind of Markov chain. By contrast, in a QFA, the manifold is complex projective space formula_11, and the transition matrices are unitary matrices. Each point in formula_11 corresponds to a (pure) quantum-mechanical state; the unitary matrices can be thought of as governing the time evolution of the system (viz in the Schrödinger picture). The generalization from pure states to mixed states should be straightforward: A mixed state is simply a measure-theoretic probability distribution on formula_11. A worthy point to contemplate is the distributions that result on the manifold during the input of a language. In order for an automaton to be 'efficient' in recognizing a language, that distribution should be 'as uniform as possible'. This need for uniformity is the underlying principle behind maximum entropy methods: these simply guarantee crisp, compact operation of the automaton. 
Put in other words, the machine learning methods used to train hidden Markov models generalize to QFAs as well: the Viterbi algorithm and the forward–backward algorithm generalize readily to the QFA. Although the study of QFA was popularized in the work of Kondacs and Watrous in 1997 and later by Moore and Crutchfield, they were described as early as 1971, by Ion Baianu. Measure-once automata. Measure-once automata were introduced by Cris Moore and James P. Crutchfield. They may be defined formally as follows. As with an ordinary finite automaton, the quantum automaton is considered to have formula_12 possible internal states, represented in this case by an formula_12-state qudit formula_13. More precisely, the formula_12-state qudit formula_14 is an element of formula_15-dimensional complex projective space, carrying an inner product formula_16 that is the Fubini–Study metric. The state transitions, transition matrices or de Bruijn graphs are represented by a collection of formula_17 unitary matrices formula_5, with one unitary matrix for each letter formula_4. That is, given an input letter formula_18, the unitary matrix describes the transition of the automaton from its current state formula_13 to its next state formula_19: formula_20 Thus, the triple formula_21 forms a quantum semiautomaton. The accept state of the automaton is given by an formula_17 projection matrix formula_22, so that, given a formula_12-dimensional quantum state formula_13, the probability of formula_13 being in the accept state is formula_23 The probability of the state machine accepting a given finite input string formula_0 is given by formula_24 Here, the vector formula_13 is understood to represent the initial state of the automaton, that is, the state the automaton was in before it started accepting the string input. The empty string formula_25 is understood to correspond to the unit matrix, so that formula_26 is just the probability of the initial state being an accepted state. Because the left-action of formula_5 on formula_13 reverses the order of the letters in the string formula_27, it is not uncommon for QFAs to be defined using a right action on the Hermitian transpose states, simply in order to keep the order of the letters the same. A language over the alphabet formula_2 is accepted with probability formula_28 by a quantum finite automaton (and a given, fixed initial state formula_13), if, for all sentences formula_27 in the language, one has formula_29. Example. Consider the classical deterministic finite automaton given by the state transition table in which the input symbol 0 exchanges the two states "S"1 and "S"2, while the input symbol 1 leaves each state unchanged. The quantum state is a vector, in bra–ket notation formula_30 with the complex numbers formula_31 normalized so that formula_32 The unitary transition matrices are formula_33 and formula_34 Taking formula_35 to be the accept state, the projection matrix is formula_36 As should be readily apparent, if the initial state is the pure state formula_37 or formula_38, then the result of running the machine will be exactly identical to the classical deterministic finite state machine. In particular, there is a language accepted by this automaton with probability one, for these initial states, and it is identical to the regular language for the classical DFA, and is given by the regular expression: formula_39 The non-classical behaviour occurs if both formula_40 and formula_41 are non-zero.
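The acceptance probabilities of this measure-once example can be checked directly. The sketch below (using numpy purely as an illustrative tool) implements the acceptance formula formula_24 for the two transition matrices and the projection given above; the initial states and test strings are arbitrary choices.

```python
import numpy as np

# Measure-once QFA from the example above: U_0 exchanges the two basis states,
# U_1 is the identity, and P projects onto the accept state |S_1>.
U = {"0": np.array([[0, 1], [1, 0]], dtype=complex),
     "1": np.eye(2, dtype=complex)}
P = np.array([[1, 0], [0, 0]], dtype=complex)

def accept_probability(word, psi):
    """Pr(word) = || P U_{sigma_k} ... U_{sigma_0} |psi> ||^2."""
    for letter in word:
        psi = U[letter] @ psi      # left action, letters applied in the order read
    out = P @ psi
    return float(np.vdot(out, out).real)

psi0 = np.array([1.0, 0.0], dtype=complex)            # pure state |S_1>
print(accept_probability("0110", psi0))               # even number of 0s -> 1.0
print(accept_probability("011", psi0))                # odd number of 0s  -> 0.0
psi_plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
print(accept_probability("0", psi_plus))              # superposed start  -> 0.5
```

The last line illustrates the non-classical case mentioned above: with both amplitudes non-zero, a string is accepted with a probability strictly between 0 and 1.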
More subtle behaviour occurs when the matrices formula_42 and formula_43 are not so simple; see, for example, the de Rham curve as an example of a quantum finite state machine acting on the set of all possible finite binary strings. Measure-many automata. Measure-many automata were introduced by Kondacs and Watrous in 1997. The general framework resembles that of the measure-once automaton, except that instead of there being one projection, at the end, there is a projection, or quantum measurement, performed after each letter is read. A formal definition follows. The Hilbert space formula_44 is decomposed into three orthogonal subspaces formula_45 In the literature, these orthogonal subspaces are usually formulated in terms of the set formula_46 of orthogonal basis vectors for the Hilbert space formula_44. This set of basis vectors is divided up into subsets formula_47 and formula_48, such that formula_49 is the linear span of the basis vectors in the accept set. The reject space is defined analogously, and the remaining space is designated the "non-halting" subspace. There are three projection matrices, formula_50, formula_51 and formula_52, each projecting to the respective subspace: formula_53 and so on. The parsing of the input string proceeds as follows. Consider the automaton to be in a state formula_13. After reading an input letter formula_18, the automaton will be in the state formula_54 At this point, a measurement whose three possible outcomes have eigenspaces formula_55, formula_56, formula_57 is performed on the state formula_19, at which time its wave-function collapses into one of the three subspaces formula_55 or formula_56 or formula_57. The probability of collapse to the "accept" subspace is given by formula_58 and analogously for the other two spaces. If the wave function has collapsed to either the "accept" or "reject" subspaces, then further processing halts. Otherwise, processing continues, with the next letter read from the input, and applied to what must be an eigenstate of formula_52. Processing continues until the whole string is read, or the machine halts. Often, additional symbols formula_59 and $ are adjoined to the alphabet, to act as the left and right end-markers for the string. In the literature, the measure-many automaton is often denoted by the tuple formula_60. Here, formula_46, formula_2, formula_61 and formula_62 are as defined above. The initial state is denoted by formula_63. The unitary transformations are denoted by the map formula_64, formula_65 so that formula_66 Relation to quantum computing. As of 2019, most quantum computers are implementations of measure-once quantum finite automata, and the software systems for programming them expose the state-preparation of formula_13, measurement formula_22 and a choice of unitary transformations formula_5, such the controlled NOT gate, the Hadamard transform and other quantum logic gates, directly to the programmer. The primary difference between real-world quantum computers and the theoretical framework presented above is that the initial state preparation cannot ever result in a point-like pure state, nor can the unitary operators be precisely applied. Thus, the initial state must be taken as a mixed state formula_67 for some probability distribution formula_68 characterizing the ability of the machinery to prepare an initial state close to the desired initial pure state formula_13. This state is not stable, but suffers from some amount of quantum decoherence over time. 
Precise measurements are also not possible, and one instead uses positive operator-valued measures to describe the measurement process. Finally, each unitary transformation is not a single, sharply defined quantum logic gate, but rather a mixture formula_69 for some probability distribution formula_70 describing how well the machinery can effect the desired transformation formula_5. As a result of these effects, the actual time evolution of the state cannot be taken as an infinite-precision pure point, operated on by a sequence of arbitrarily sharp transformations, but rather as an ergodic process, or more accurately, a mixing process that not only concatenates transformations onto a state, but also smears the state over time. There is no quantum analog to the push-down automaton or stack machine. This is due to the no-cloning theorem: there is no way to make a copy of the current state of the machine, push it onto a stack for later reference, and then return to it. Geometric generalizations. The above constructions indicate how the concept of a quantum finite automaton can be generalized to arbitrary topological spaces. For example, one may take some ("N"-dimensional) Riemann symmetric space to take the place of formula_11. In place of the unitary matrices, one uses the isometries of the Riemannian manifold, or, more generally, some set of open functions appropriate for the given topological space. The initial state may be taken to be a point in the space. The set of accept states can be taken to be some arbitrary subset of the topological space. One then says that a formal language is accepted by this topological automaton if the point, after iteration by the homeomorphisms, intersects the accept set. But, of course, this is nothing more than the standard definition of an M-automaton. The behaviour of topological automata is studied in the field of topological dynamics. The quantum automaton differs from the topological automaton in that, instead of having a binary result (is the iterated point in, or not in, the final set?), one has a probability. The quantum probability is (the square of) the initial state projected onto some final state "P"; that is formula_71. But this probability amplitude is just a very simple function of the distance between the point formula_72 and the point formula_73 in formula_11, under the distance metric given by the Fubini–Study metric. To recap, the quantum probability of a language being accepted can be interpreted as a metric, with the probability of accept being unity, if the metric distance between the initial and final states is zero, and otherwise the probability of accept is less than one, if the metric distance is non-zero. Thus, it follows that the quantum finite automaton is just a special case of a geometric automaton or a metric automaton, where formula_11 is generalized to some metric space, and the probability measure is replaced by a simple function of the metric on that space. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
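For comparison with the measure-once case, the following sketch simulates a single probabilistic run of the measure-many automaton described earlier: after each letter the state is measured against the accept, reject and non-halting subspaces, and the run halts as soon as it collapses into one of the first two. The three-dimensional toy example (one basis vector per subspace, a single-letter alphabet, and an arbitrary rotation angle) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_measure_many(word, psi, U, P_acc, P_rej, P_non):
    """One run: apply U[letter], measure {accept, reject, non-halting}, halt early."""
    projectors = (P_acc, P_rej, P_non)
    for letter in word:
        psi = U[letter] @ psi
        probs = np.array([np.vdot(P @ psi, P @ psi).real for P in projectors])
        outcome = rng.choice(3, p=probs / probs.sum())
        psi = projectors[outcome] @ psi          # wave-function collapse
        psi = psi / np.linalg.norm(psi)
        if outcome == 0:
            return "accept"
        if outcome == 1:
            return "reject"
    return "did not halt"

# Toy example: basis vectors 0, 1, 2 span the accept, reject and non-halting
# subspaces; the single letter rotates amplitude from non-halting into accept.
theta = np.pi / 4
U = {"a": np.array([[np.cos(theta), 0.0, np.sin(theta)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(theta), 0.0, np.cos(theta)]])}
P_acc, P_rej, P_non = np.diag([1.0, 0, 0]), np.diag([0, 1.0, 0]), np.diag([0, 0, 1.0])
psi0 = np.array([0.0, 0.0, 1.0])                 # start in the non-halting subspace
print(run_measure_many("aaa", psi0, U, P_acc, P_rej, P_non))
```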
[ { "math_id": 0, "text": "\\sigma=(\\sigma_0,\\sigma_1,\\cdots,\\sigma_k)" }, { "math_id": 1, "text": "\\sigma_i" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "\\operatorname{Pr}(\\sigma)" }, { "math_id": 4, "text": "\\alpha\\in\\Sigma" }, { "math_id": 5, "text": "U_\\alpha" }, { "math_id": 6, "text": "\\{U_\\alpha | \\alpha\\in\\Sigma\\}" }, { "math_id": 7, "text": "q_0\\in Q" }, { "math_id": 8, "text": "\\alpha\\beta\\gamma\\cdots" }, { "math_id": 9, "text": "q = \\cdots U_\\gamma U_\\beta U_\\alpha q_0." }, { "math_id": 10, "text": "\\{U_\\alpha\\}" }, { "math_id": 11, "text": "\\mathbb{C}P^N" }, { "math_id": 12, "text": "N" }, { "math_id": 13, "text": "|\\psi\\rangle" }, { "math_id": 14, "text": "|\\psi\\rangle\\in P(\\mathbb {C}^N)" }, { "math_id": 15, "text": "(N-1)" }, { "math_id": 16, "text": "\\Vert\\cdot\\Vert" }, { "math_id": 17, "text": "N\\times N" }, { "math_id": 18, "text": "\\alpha" }, { "math_id": 19, "text": "|\\psi^\\prime\\rangle" }, { "math_id": 20, "text": "|\\psi^\\prime\\rangle = U_\\alpha |\\psi\\rangle" }, { "math_id": 21, "text": "(P(\\mathbb {C}^N),\\Sigma,\\{U_\\alpha\\;\\vert\\;\\alpha\\in\\Sigma\\})" }, { "math_id": 22, "text": "P" }, { "math_id": 23, "text": "\\langle\\psi |P |\\psi\\rangle = \\Vert P |\\psi\\rangle\\Vert^2" }, { "math_id": 24, "text": "\\operatorname{Pr}(\\sigma) = \\Vert P U_{\\sigma_k} \\cdots U_{\\sigma_1} U_{\\sigma_0}|\\psi\\rangle\\Vert^2 " }, { "math_id": 25, "text": "\\varnothing" }, { "math_id": 26, "text": "\\operatorname{Pr}(\\varnothing)= \\Vert P |\\psi\\rangle\\Vert^2" }, { "math_id": 27, "text": "\\sigma" }, { "math_id": 28, "text": "p" }, { "math_id": 29, "text": "p\\leq\\operatorname{Pr}(\\sigma)" }, { "math_id": 30, "text": "|\\psi\\rangle=a_1 |S_1\\rangle + a_2|S_2\\rangle = \n\\begin{bmatrix} a_1 \\\\ a_2 \\end{bmatrix}\n" }, { "math_id": 31, "text": "a_1,a_2" }, { "math_id": 32, "text": "\\begin{bmatrix} a^*_1 \\;\\; a^*_2 \\end{bmatrix} \\begin{bmatrix} a_1 \\\\ a_2 \\end{bmatrix} = a_1^*a_1 + a_2^*a_2 = 1" }, { "math_id": 33, "text": "U_0=\\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}" }, { "math_id": 34, "text": "U_1=\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}" }, { "math_id": 35, "text": "S_1" }, { "math_id": 36, "text": "P=\\begin{bmatrix} 1 & 0 \\\\ 0 & 0 \\end{bmatrix}" }, { "math_id": 37, "text": "|S_1\\rangle" }, { "math_id": 38, "text": "|S_2\\rangle" }, { "math_id": 39, "text": "(1^*(01^*0)^*)^* \\,\\!" 
}, { "math_id": 40, "text": "a_1" }, { "math_id": 41, "text": "a_2" }, { "math_id": 42, "text": "U_0" }, { "math_id": 43, "text": "U_1" }, { "math_id": 44, "text": "\\mathcal{H}_Q" }, { "math_id": 45, "text": "\\mathcal{H}_Q=\\mathcal{H}_\\text{accept} \\oplus \\mathcal{H}_\\text{reject} \\oplus \\mathcal{H}_\\text{non-halting}" }, { "math_id": 46, "text": "Q" }, { "math_id": 47, "text": "Q_\\text{acc} \\subseteq Q" }, { "math_id": 48, "text": "Q_\\text{rej} \\subseteq Q" }, { "math_id": 49, "text": "\\mathcal{H}_\\text{accept}=\\operatorname{span} \\{|q\\rangle : |q\\rangle \\in Q_\\text{acc} \\}" }, { "math_id": 50, "text": "P_\\text{acc}" }, { "math_id": 51, "text": "P_\\text{rej}" }, { "math_id": 52, "text": "P_\\text{non}" }, { "math_id": 53, "text": "P_\\text{acc}:\\mathcal{H}_Q \\to \\mathcal{H}_\\text{accept}" }, { "math_id": 54, "text": "|\\psi^\\prime\\rangle =U_\\alpha |\\psi\\rangle" }, { "math_id": 55, "text": "\\mathcal{H}_\\text{accept}" }, { "math_id": 56, "text": "\\mathcal{H}_\\text{reject}" }, { "math_id": 57, "text": "\\mathcal{H}_\\text{non-halting}" }, { "math_id": 58, "text": "\\operatorname{Pr}_\\text{acc} (\\sigma) = \\Vert P_\\text{acc} |\\psi^\\prime\\rangle \\Vert^2," }, { "math_id": 59, "text": "\\kappa" }, { "math_id": 60, "text": "(Q;\\Sigma; \\delta; q_0; Q_\\text{acc}; Q_\\text{rej})" }, { "math_id": 61, "text": "Q_\\text{acc}" }, { "math_id": 62, "text": "Q_\\text{rej}" }, { "math_id": 63, "text": "|q_0\\rangle" }, { "math_id": 64, "text": "\\delta" }, { "math_id": 65, "text": "\\delta:Q\\times \\Sigma \\times Q \\to \\mathbb{C}" }, { "math_id": 66, "text": "U_\\alpha |q_i\\rangle = \\sum_{q_j\\in Q} \\delta (q_i, \\alpha, q_j) |q_j\\rangle " }, { "math_id": 67, "text": "\\rho = \\int p(x) |\\psi_x\\rangle dx" }, { "math_id": 68, "text": "p(x)" }, { "math_id": 69, "text": "U_{\\alpha, (\\rho)}=\\int p_\\alpha(x) U_{\\alpha,x} dx" }, { "math_id": 70, "text": "p_\\alpha(x)" }, { "math_id": 71, "text": "\\mathbf{Pr} = \\vert \\langle P\\vert \\psi\\rangle \\vert^2" }, { "math_id": 72, "text": "\\vert P\\rangle" }, { "math_id": 73, "text": "\\vert \\psi\\rangle" } ]
https://en.wikipedia.org/wiki?curid=7926008
7930037
Subnormal operator
In mathematics, especially operator theory, subnormal operators are bounded operators on a Hilbert space defined by weakening the requirements for normal operators. Some examples of subnormal operators are isometries and Toeplitz operators with analytic symbols. Definition. Let "H" be a Hilbert space. A bounded operator "A" on "H" is said to be subnormal if "A" has a normal extension. In other words, "A" is subnormal if there exists a Hilbert space "K" such that "H" can be embedded in "K" and there exists a normal operator "N" of the form formula_0 for some bounded operators formula_1 Normality, quasinormality, and subnormality. Normal operators. Every normal operator is subnormal by definition, but the converse is not true in general. A simple class of examples can be obtained by weakening the properties of unitary operators. A unitary operator is an isometry with dense range. Consider now an isometry "A" whose range is not necessarily dense. A concrete example of such an operator is the unilateral shift, which is not normal. But "A" is subnormal and this can be shown explicitly. Define an operator "U" on formula_2 by formula_3 Direct calculation shows that "U" is unitary, therefore a normal extension of "A". The operator "U" is called the "unitary dilation" of the isometry "A". Quasinormal operators. An operator "A" is said to be quasinormal if "A" commutes with "A*A". A normal operator is thus quasinormal; the converse is not true. A counterexample is given, as above, by the unilateral shift. Therefore, the family of normal operators is a proper subset of both quasinormal and subnormal operators. A natural question is how the quasinormal and subnormal operators are related. We will show that a quasinormal operator is necessarily subnormal but not vice versa. Thus the normal operators are a proper subfamily of the quasinormal operators, which in turn are contained in the subnormal operators. To argue the claim that a quasinormal operator is subnormal, recall the following property of quasinormal operators: Fact: A bounded operator "A" is quasinormal if and only if in its polar decomposition "A" = "UP", the partial isometry "U" and positive operator "P" commute. Given a quasinormal "A", the idea is to construct dilations for "U" and "P" in a sufficiently nice way so everything commutes. Suppose for the moment that "U" is an isometry. Let "V" be the unitary dilation of "U", formula_4 Define formula_5 The operator "N" = "VQ" is clearly an extension of "A". We show it is a normal extension via direct calculation. Unitarity of "V" means formula_6 On the other hand, formula_7 Because "UP = PU" and "P" is self-adjoint, we have "U*P = PU*" and "DU*P = PDU*". Comparing entries then shows "N" is normal. This proves quasinormality implies subnormality. For a counterexample that shows the converse is not true, consider again the unilateral shift "A". The operator "B" = "A" + "s" for some non-zero scalar "s" remains subnormal. But if "B" is quasinormal, a straightforward calculation shows that "A*A = AA*", which is a contradiction. Minimal normal extension. Non-uniqueness of normal extensions. Given a subnormal operator "A", its normal extension "B" is not unique. For example, let "A" be the unilateral shift, on "l"2(N). One normal extension is the bilateral shift "B" on "l"2(Z) defined by formula_8 where ˆ denotes the zero-th position.
"B" can be expressed in terms of the operator matrix formula_9 Another normal extension is given by the unitary dilation "B' " of "A" defined above: formula_10 whose action is described by formula_11 Minimality. Thus one is interested in the normal extension that is, in some sense, smallest. More precisely, a normal operator "B" acting on a Hilbert space "K" is said to be a minimal extension of a subnormal "A" if " K' " ⊂ "K" is a reducing subspace of "B" and "H" ⊂ " K' ", then "K' " = "K". (A subspace is a reducing subspace of "B" if it is invariant under both "B" and "B*".) One can show that if two operators "B"1 and "B"2 are minimal extensions on "K"1 and "K"2, respectively, then there exists a unitary operator formula_12 Also, the following intertwining relationship holds: formula_13 This can be shown constructively. Consider the set "S" consisting of vectors of the following form: formula_14 Let "K' " ⊂ "K"1 be the subspace that is the closure of the linear span of "S". By definition, "K' " is invariant under "B"1* and contains "H". The normality of "B"1 and the assumption that "H" is invariant under "B"1 imply "K' " is invariant under "B"1. Therefore, "K' " = "K"1. The Hilbert space "K"2 can be identified in exactly the same way. Now we define the operator "U" as follows: formula_15 Because formula_16 , the operator "U" is unitary. Direct computation also shows (the assumption that both "B"1 and "B"2 are extensions of "A" are needed here) formula_17 formula_18 When "B"1 and "B"2 are not assumed to be minimal, the same calculation shows that above claim holds verbatim with "U" being a partial isometry. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N = \\begin{bmatrix} A & B\\\\ 0 & C\\end{bmatrix}" }, { "math_id": 1, "text": "B : H^{\\perp} \\rightarrow H, \\quad \\mbox{and} \\quad C : H^{\\perp} \\rightarrow H^{\\perp}." }, { "math_id": 2, "text": "H \\oplus H" }, { "math_id": 3, "text": " U = \\begin{bmatrix} A & I - AA^* \\\\ 0 & - A^* \\end{bmatrix}." }, { "math_id": 4, "text": " V = \\begin{bmatrix} U & I - UU^* \\\\ 0 & - U^* \\end{bmatrix}\n= \\begin{bmatrix} U & D_{U^*} \\\\ 0 & - U^* \\end{bmatrix}\n." }, { "math_id": 5, "text": " Q = \\begin{bmatrix} P & 0 \\\\ 0 & P \\end{bmatrix}." }, { "math_id": 6, "text": "N^*N = QV^*VQ = Q^2 = \\begin{bmatrix} P^2 & 0 \\\\ 0 & P^2 \\end{bmatrix}." }, { "math_id": 7, "text": "N N^* = \\begin{bmatrix} UP^2U^* + D_{U^*} P^2 D_{U^*} & -D_{U^*}P^2 U \\\\ -U^* P^2 D_{U^*} & U^* P^2 U \\end{bmatrix}." }, { "math_id": 8, "text": "B (\\ldots, a_{-1}, {\\hat a_0}, a_1, \\ldots) = (\\ldots, {\\hat a_{-1}}, a_0, a_1, \\ldots)," }, { "math_id": 9, "text": " B = \\begin{bmatrix} A & I - AA^* \\\\ 0 & A^* \\end{bmatrix}." }, { "math_id": 10, "text": " B' = \\begin{bmatrix} A & I - AA^* \\\\ 0 & - A^* \\end{bmatrix}" }, { "math_id": 11, "text": "\nB' (\\ldots, a_{-2}, a_{-1}, {\\hat a_0}, a_1, a_2, \\ldots) = (\\ldots, - a_{-2}, {\\hat a_{-1}}, a_0, a_1, a_2, \\ldots).\n" }, { "math_id": 12, "text": "U: K_1 \\rightarrow K_2." }, { "math_id": 13, "text": "U B_1 = B_2 U. \\," }, { "math_id": 14, "text": "\n\\sum_{i=0}^n (B_1^*)^i h_i = h_0+ B_1 ^* h_1 + (B_1^*)^2 h_2 + \\cdots + (B_1^*)^n h_n \\quad \\text{where} \\quad h_i \\in H.\n" }, { "math_id": 15, "text": "\nU \\sum_{i=0}^n (B_1^*)^i h_i = \\sum_{i=0}^n (B_2^*)^i h_i\n" }, { "math_id": 16, "text": "\n\\left\\langle \\sum_{i=0}^n (B_1^*)^i h_i, \\sum_{j=0}^n (B_1^*)^j h_j\\right\\rangle\n= \\sum_{i j} \\langle h_i, (B_1)^i (B_1^*)^j h_j\\rangle\n= \\sum_{i j} \\langle (B_2)^j h_i, (B_2)^i h_j\\rangle\n= \\left\\langle \\sum_{i=0}^n (B_2^*)^i h_i, \\sum_{j=0}^n (B_2^*)^j h_j\\right\\rangle ,\n" }, { "math_id": 17, "text": "\\text{if } g = \\sum_{i=0}^n (B_1^*)^i h_i ," }, { "math_id": 18, "text": "\\text{then } U B_1 g = B_2 U g = \\sum_{i=0}^n (B_2^*)^i A h_i." } ]
https://en.wikipedia.org/wiki?curid=7930037
7931806
Essential matrix
In computer vision, the essential matrix is a formula_0 matrix, formula_1 that relates corresponding points in stereo images assuming that the cameras satisfy the pinhole camera model. Function. More specifically, if formula_2 and formula_3 are homogeneous "normalized" image coordinates in image 1 and 2, respectively, then formula_4 if formula_2 and formula_3 correspond to the same 3D point in the scene (not an "if and only if" due to the fact that points that lie on the same epipolar line in the first image will get mapped to the same epipolar line in the second image). The above relation which defines the essential matrix was published in 1981 by H. Christopher Longuet-Higgins, introducing the concept to the computer vision community. Richard Hartley and Andrew Zisserman's book reports that an analogous matrix appeared in photogrammetry long before that. Longuet-Higgins' paper includes an algorithm for estimating formula_1 from a set of corresponding normalized image coordinates as well as an algorithm for determining the relative position and orientation of the two cameras given that formula_1 is known. Finally, it shows how the 3D coordinates of the image points can be determined with the aid of the essential matrix. Use. The essential matrix can be seen as a precursor to the "fundamental matrix", formula_5. Both matrices can be used for establishing constraints between matching image points, but the essential matrix can only be used in relation to calibrated cameras since the inner camera parameters (matrices formula_6 and formula_7) must be known in order to achieve the normalization. If, however, the cameras are calibrated the essential matrix can be useful for determining both the relative position and orientation between the cameras and the 3D position of corresponding image points. The essential matrix is related to the fundamental matrix with formula_8 Derivation and definition. This derivation follows the paper by Longuet-Higgins. Two normalized cameras project the 3D world onto their respective image planes. Let the 3D coordinates of a point P be formula_9 and formula_10 relative to each camera's coordinate system. Since the cameras are normalized, the corresponding image coordinates are formula_11 and formula_12 A homogeneous representation of the two image coordinates is then given by formula_13 and formula_14 which also can be written more compactly as formula_15 and formula_16 where formula_17 and formula_3 are homogeneous representations of the 2D image coordinates and formula_18 and formula_19 are proper 3D coordinates but in two different coordinate systems. Another consequence of the normalized cameras is that their respective coordinate systems are related by means of a translation and rotation. This implies that the two sets of 3D coordinates are related as formula_20 where formula_21 is a formula_0 rotation matrix and formula_22 is a 3-dimensional translation vector. The essential matrix is then defined as the product formula_1 = formula_21 formula_23, where formula_23 is the matrix representation of the cross product with formula_22. Note: Here, the transformation formula_24 will transform points in the 2nd view to the 1st view. For the definition of formula_25 we are only interested in the orientations of the normalized image coordinates (See also: Triple product). As such we don't need the translational component when substituting image coordinates into the essential equation.
To see that this definition of formula_25 describes a constraint on corresponding image coordinates, multiply formula_1 from left and right with the 3D coordinates of point P in the two different coordinate systems: formula_26 Here, step (1) inserts the relation formula_20 and the definition of formula_1, the term involving formula_22 vanishing because formula_23 annihilates formula_22; step (2) uses formula_27, which holds since formula_21 is a rotation matrix; and step (3) follows from the properties of the matrix representation of the cross product, since formula_23 formula_18 is orthogonal to formula_18. Finally, it can be assumed that both formula_28 and formula_29 are &gt; 0, otherwise they are not visible in both cameras. This gives formula_30 which is the constraint that the essential matrix defines between corresponding image points. Properties. Not every arbitrary formula_0 matrix can be an essential matrix for some stereo cameras. To see this, notice that it is defined as the matrix product of one rotation matrix and one skew-symmetric matrix, both formula_0. The skew-symmetric matrix must have two singular values which are equal and another which is zero. The multiplication of the rotation matrix does not change the singular values which means that also the essential matrix has two singular values which are equal and one which is zero. The properties described here are sometimes referred to as "internal constraints" of the essential matrix. If the essential matrix formula_1 is multiplied by a non-zero scalar, the result is again an essential matrix which defines exactly the same constraint as formula_1 does. This means that formula_1 can be seen as an element of a projective space, that is, two such matrices are considered equivalent if one is a non-zero scalar multiplication of the other. This is a relevant position, for example, if formula_1 is estimated from image data. However, it is also possible to take the position that formula_1 is defined as formula_31 where formula_32, and then formula_1 has a well-defined "scaling". It depends on the application which position is the more relevant. The constraints can also be expressed as formula_33 and formula_34 Here, the last equation is a matrix constraint, which can be seen as 9 constraints, one for each matrix element. These constraints are often used for determining the essential matrix from five corresponding point pairs. The essential matrix has five or six degrees of freedom, depending on whether or not it is seen as a projective element. The rotation matrix formula_21 and the translation vector formula_22 have three degrees of freedom each, in total six. If the essential matrix is considered as a projective element, however, one degree of freedom related to scalar multiplication must be subtracted, leaving five degrees of freedom in total. Estimation. Given a set of corresponding image points it is possible to estimate an essential matrix which satisfies the defining epipolar constraint for all the points in the set. However, if the image points are subject to noise, which is the common case in any practical situation, it is not possible to find an essential matrix which satisfies all constraints exactly. Depending on how the error related to each constraint is measured, it is possible to determine or estimate an essential matrix which optimally satisfies the constraints for a given set of corresponding image points. The most straightforward approach is to set up a total least squares problem, commonly known as the eight-point algorithm. Extracting rotation and translation. Given that the essential matrix has been determined for a stereo camera pair -- for example, using the estimation method above -- this information can be used for determining also the rotation formula_21 and translation formula_22 (up to a scaling) between the two cameras' coordinate systems.
In these derivations formula_1 is seen as a projective element rather than having a well-determined scaling. Finding one solution. The following method for determining formula_21 and formula_22 is based on performing a SVD of formula_1, see Hartley &amp; Zisserman's book. It is also possible to determine formula_21 and formula_22 without an SVD, for example, following Longuet-Higgins' paper. An SVD of formula_1 gives formula_35 where formula_36 and formula_37 are orthogonal formula_0 matrices and formula_38 is a formula_0 diagonal matrix with formula_39 The diagonal entries of formula_38 are the singular values of formula_1 which, according to the internal constraints of the essential matrix, must consist of two identical and one zero value. Define formula_40   with   formula_41 and make the following ansatz formula_42 formula_43 Since formula_38 may not completely fulfill the constraints when dealing with real world data (f.e. camera images), the alternative formula_44   with   formula_45 may help. Proof. First, these expressions for formula_21 and formula_23 do satisfy the defining equation for the essential matrix formula_46 Second, it must be shown that this formula_23 is a matrix representation of the cross product for some formula_22. Since formula_47 it is the case that formula_48 is skew-symmetric, i.e., formula_49. This is also the case for our formula_23, since formula_50 According to the general properties of the matrix representation of the cross product it then follows that formula_23 must be the cross product operator of exactly one vector formula_22. Third, it must also need to be shown that the above expression for formula_21 is a rotation matrix. It is the product of three matrices which all are orthogonal which means that formula_51, too, is orthogonal or formula_52. To be a proper rotation matrix it must also satisfy formula_53. Since, in this case, formula_1 is seen as a projective element this can be accomplished by reversing the sign of formula_1 if necessary. Finding all solutions. So far one possible solution for formula_21 and formula_22 has been established given formula_1. It is, however, not the only possible solution and it may not even be a valid solution from a practical point of view. To begin with, since the scaling of formula_1 is undefined, the scaling of formula_22 is also undefined. It must lie in the null space of formula_1 since formula_54 For the subsequent analysis of the solutions, however, the exact scaling of formula_22 is not so important as its "sign", i.e., in which direction it points. Let formula_55 be normalized vector in the null space of formula_1. It is then the case that both formula_55 and formula_56 are valid translation vectors relative formula_1. It is also possible to change formula_57 into formula_58 in the derivations of formula_21 and formula_22 above. For the translation vector this only causes a change of sign, which has already been described as a possibility. For the rotation, on the other hand, this will produce a different transformation, at least in the general case. To summarize, given formula_1 there are two opposite directions which are possible for formula_22 and two different rotations which are compatible with this essential matrix. In total this gives four classes of solutions for the rotation and translation between the two camera coordinate systems. On top of that, there is also an unknown scaling formula_59 for the chosen translation direction. 
It turns out, however, that only one of the four classes of solutions can be realized in practice. Given a pair of corresponding image coordinates, three of the solutions will always produce a 3D point which lies "behind" at least one of the two cameras and therefore cannot be seen. Only one of the four classes will consistently produce 3D points which are in front of both cameras. This must then be the correct solution. Still, however, it has an undetermined positive scaling related to the translation component. The above determination of formula_21 and formula_22 assumes that formula_1 satisfy the internal constraints of the essential matrix. If this is not the case which, for example, typically is the case if formula_1 has been estimated from real (and noisy) image data, it has to be assumed that it approximately satisfy the internal constraints. The vector formula_55 is then chosen as right singular vector of formula_1 corresponding to the smallest singular value. 3D points from corresponding image points. Many methods exist for computing formula_60 given corresponding normalized image coordinates formula_61 and formula_62, if the essential matrix is known and the corresponding rotation and translation transformations have been determined. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
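The properties and the extraction procedure described above can be verified numerically. The sketch below (numpy, with an arbitrarily chosen rotation and translation) constructs formula_1 from formula_21 and formula_22, checks the epipolar constraint and the singular-value pattern, and recovers the translation direction and the two rotation candidates from the SVD; the specific values are illustrative only.

```python
import numpy as np

def skew(t):
    """Matrix representation [t]_x of the cross product with t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Illustrative relative pose (rotation about the z-axis plus a translation).
angle = 0.3
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.1])
E = R @ skew(t)                                  # essential matrix

# Epipolar constraint for a 3D point seen by both normalized cameras.
X = np.array([0.4, -0.3, 5.0])                   # coordinates in camera 1
Xp = R @ (X - t)                                 # coordinates in camera 2
y, yp = X / X[2], Xp / Xp[2]                     # normalized homogeneous image points
print(yp @ E @ y)                                # ~0 up to rounding

# Internal constraints and extraction of the rotation and translation from E.
U, S, Vt = np.linalg.svd(E)
print(S)                                         # two equal singular values, one ~0
W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
cands = [U @ W @ Vt, U @ W.T @ Vt]
cands = [C if np.linalg.det(C) > 0 else -C for C in cands]   # keep proper rotations
print(any(np.allclose(C, R) for C in cands))     # True: one candidate reproduces R
t_dir = Vt[2]                                    # right singular vector of the ~0 value
print(np.allclose(np.cross(t_dir, t), 0, atol=1e-9))   # True: parallel to t (up to sign)
```

The sign flips applied to the candidates reflect the sign ambiguity of the SVD noted in the text; together with the two choices of translation direction this reproduces the four classes of solutions, of which only one places the scene in front of both cameras.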
[ { "math_id": 0, "text": " 3 \\times 3 " }, { "math_id": 1, "text": " \\mathbf{E} " }, { "math_id": 2, "text": " \\mathbf{y}" }, { "math_id": 3, "text": " \\mathbf{y}' " }, { "math_id": 4, "text": " (\\mathbf{y}')^\\top \\, \\mathbf{E} \\, \\mathbf{y} = 0 " }, { "math_id": 5, "text": " \\mathbf{F} " }, { "math_id": 6, "text": "\\mathbf{K}" }, { "math_id": 7, "text": "\\mathbf{K}'" }, { "math_id": 8, "text": " \\mathbf{E} = ({\\mathbf{K}'})^{\\top} \\; \\mathbf{F} \\; \\mathbf{K} . " }, { "math_id": 9, "text": " (x_1, x_2, x_3) " }, { "math_id": 10, "text": " (x'_1, x'_2, x'_3) " }, { "math_id": 11, "text": "\n\\begin{pmatrix} y_1 \\\\ y_2 \\end{pmatrix} = \\frac{1}{x_3} \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix} " }, { "math_id": 12, "text": " \\begin{pmatrix} y'_1 \\\\ y'_2 \\end{pmatrix} = \\frac{1}{x'_3} \\begin{pmatrix} x'_1 \\\\ x'_2 \\end{pmatrix} " }, { "math_id": 13, "text": " \\begin{pmatrix} y_1 \\\\ y_2 \\\\ 1 \\end{pmatrix} = \\frac{1}{x_3} \\begin{pmatrix} x_1 \\\\ x_2 \\\\ x_{3} \\end{pmatrix} " }, { "math_id": 14, "text": " \\begin{pmatrix} y'_1 \\\\ y'_2 \\\\ 1 \\end{pmatrix} = \\frac{1}{x'_3} \\begin{pmatrix} x'_1 \\\\ x'_2 \\\\ x'_{3} \\end{pmatrix} " }, { "math_id": 15, "text": "\n\\mathbf{y} = \\frac{1}{x_{3}} \\, \\tilde{\\mathbf{x}}\n" }, { "math_id": 16, "text": " \\mathbf{y}' = \\frac{1}{x'_{3}} \\, \\tilde{\\mathbf{x}}'\n" }, { "math_id": 17, "text": " \\mathbf{y} " }, { "math_id": 18, "text": " \\tilde{\\mathbf{x}} " }, { "math_id": 19, "text": " \\tilde{\\mathbf{x}}' " }, { "math_id": 20, "text": " \\tilde{\\mathbf{x}}' = \\mathbf{R} \\, (\\tilde{\\mathbf{x}} - \\mathbf{t}) " }, { "math_id": 21, "text": " \\mathbf{R} " }, { "math_id": 22, "text": " \\mathbf{t} " }, { "math_id": 23, "text": " [\\mathbf{t}]_{\\times} " }, { "math_id": 24, "text": " [\\mathbf{R}^{T} | \\mathbf{t}] " }, { "math_id": 25, "text": "\\mathbf{E}" }, { "math_id": 26, "text": " \\tilde{\\mathbf{x}}'^{T} \\, \\mathbf{E} \\, \\tilde{\\mathbf{x}} \\, \\stackrel{(1)}{=} \\,\\tilde{\\mathbf{x}}^{T} \\, \\mathbf{R}^{T} \\, \\mathbf{R} \\, [\\mathbf{t}]_{\\times} \\, \\tilde{\\mathbf{x}} \\, \\stackrel{(2)}{=} \\, \\tilde{\\mathbf{x}}^{T} \\, [\\mathbf{t}]_{\\times} \\, \\tilde{\\mathbf{x}} \\, \\stackrel{(3)}{=} \\, 0\n" }, { "math_id": 27, "text": " \\mathbf{R}^{T} \\, \\mathbf{R} = \\mathbf{I} " }, { "math_id": 28, "text": " x_{3} " }, { "math_id": 29, "text": " x'_{3} " }, { "math_id": 30, "text": " 0 = (\\tilde{\\mathbf{x}}')^{T} \\, \\mathbf{E} \\, \\tilde{\\mathbf{x}} = \\frac{1}{x'_{3}} (\\tilde{\\mathbf{x}}')^{T} \\, \\mathbf{E} \\, \\frac{1}{x_{3}} \\tilde{\\mathbf{x}} = (\\mathbf{y}')^{T} \\, \\mathbf{E} \\, \\mathbf{y}\n" }, { "math_id": 31, "text": " \\mathbf{E} =[\\mathbf{\\widetilde{t}}]_{\\times} \\, \\mathbf{R}\n" }, { "math_id": 32, "text": " \\mathbf{\\widetilde{t}} = -\\mathbf{R}\\mathbf{t} " }, { "math_id": 33, "text": " \\det \\mathbf{E} = 0\n" }, { "math_id": 34, "text": " 2 \\mathbf{E} \\mathbf{E}^T \\mathbf{E} - \\operatorname{tr} ( \\mathbf{E} \\mathbf{E}^T ) \\mathbf{E} = 0 .\n" }, { "math_id": 35, "text": " \\mathbf{E} = \\mathbf{U} \\, \\mathbf{\\Sigma} \\, \\mathbf{V}^{T} " }, { "math_id": 36, "text": " \\mathbf{U} " }, { "math_id": 37, "text": " \\mathbf{V} " }, { "math_id": 38, "text": " \\mathbf{\\Sigma} " }, { "math_id": 39, "text": " \\mathbf{\\Sigma} = \\begin{pmatrix} s & 0 & 0 \\\\ 0 & s & 0 \\\\ 0 & 0 & 0 \\end{pmatrix} " }, { "math_id": 40, "text": " \\mathbf{W} = \\begin{pmatrix} 0 & -1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix} " }, { "math_id": 
41, "text": " \\mathbf{W}^{-1} = \\mathbf{W}^{T} =\\begin{pmatrix} 0 & 1 & 0 \\\\ -1 & 0 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix} " }, { "math_id": 42, "text": " [\\mathbf{t}]_{\\times} = \\mathbf{U} \\, \\mathbf{W} \\, \\mathbf{\\Sigma} \\, \\mathbf{U}^{T} " }, { "math_id": 43, "text": " \\mathbf{R} = \\mathbf{U} \\, \\mathbf{W}^{-1} \\, \\mathbf{V}^{T} " }, { "math_id": 44, "text": " [\\mathbf{t}]_{\\times} = \\mathbf{U} \\, \\mathbf{Z} \\, \\mathbf{U}^{T} " }, { "math_id": 45, "text": " \\mathbf{Z} = \\begin{pmatrix} 0 & 1 & 0 \\\\ -1 & 0 & 0 \\\\ 0 & 0 & 0 \\end{pmatrix} " }, { "math_id": 46, "text": " [\\mathbf{t}]_{\\times}\\,\\mathbf{R} = \\mathbf{U} \\, \\mathbf{W} \\, \\mathbf{\\Sigma} \\, \\mathbf{U}^{T} \\mathbf{U} \\, \\mathbf{W}^{-1} \\, \\mathbf{V}^{T}\\, = \\mathbf{U} \\, \\mathbf{\\Sigma} \\, \\mathbf{V}^{T} = \\mathbf{E} " }, { "math_id": 47, "text": " \\mathbf{W} \\, \\mathbf{\\Sigma} = \\begin{pmatrix} 0 & -s & 0 \\\\ s & 0 & 0 \\\\ 0 & 0 & 0 \\end{pmatrix} " }, { "math_id": 48, "text": " \\mathbf{W} \\, \\mathbf{\\Sigma} " }, { "math_id": 49, "text": " (\\mathbf{W} \\, \\mathbf{\\Sigma})^{T} = - \\mathbf{W} \\, \\mathbf{\\Sigma} " }, { "math_id": 50, "text": " ([\\mathbf{t}]_{\\times})^{T} = \\mathbf{U} \\, (\\mathbf{W} \\, \\mathbf{\\Sigma})^{T} \\, \\mathbf{U}^{T} = - \\mathbf{U} \\, \\mathbf{W} \\, \\mathbf{\\Sigma} \\, \\mathbf{U}^{T} = - [\\mathbf{t}]_{\\times} " }, { "math_id": 51, "text": "\\mathbf{R}" }, { "math_id": 52, "text": " \\det(\\mathbf{R}) = \\pm 1 " }, { "math_id": 53, "text": " \\det(\\mathbf{R}) = 1 " }, { "math_id": 54, "text": " \\mathbf{E} \\, \\mathbf{t} = \\mathbf{R} \\, [\\mathbf{t}]_{\\times} \\, \\mathbf{t} = \\mathbf{0} " }, { "math_id": 55, "text": " \\hat{\\mathbf{t}} " }, { "math_id": 56, "text": " -\\hat{\\mathbf{t}} " }, { "math_id": 57, "text": " \\mathbf{W} " }, { "math_id": 58, "text": " \\mathbf{W}^{-1} " }, { "math_id": 59, "text": " s > 0 " }, { "math_id": 60, "text": " (x_{1}, x_{2}, x_{3}) " }, { "math_id": 61, "text": " (y_{1}, y_{2}) " }, { "math_id": 62, "text": " (y'_{1}, y'_{2}) " } ]
https://en.wikipedia.org/wiki?curid=7931806
7932644
Unit tangent bundle
In Riemannian geometry, the unit tangent bundle of a Riemannian manifold ("M", "g"), denoted by T1"M", UT("M") or simply UT"M", is the unit sphere bundle for the tangent bundle T("M"). It is a fiber bundle over "M" whose fiber at each point is the unit sphere in the tangent bundle: formula_0 where T"x"("M") denotes the tangent space to "M" at "x". Thus, elements of UT("M") are pairs ("x", "v"), where "x" is some point of the manifold and "v" is some tangent direction (of unit length) to the manifold at "x". The unit tangent bundle is equipped with a natural projection formula_1 formula_2 which takes each point of the bundle to its base point. The fiber "π"−1("x") over each point "x" ∈ "M" is an ("n"−1)-sphere S"n"−1, where "n" is the dimension of "M". The unit tangent bundle is therefore a sphere bundle over "M" with fiber S"n"−1. The definition of unit sphere bundle can easily accommodate Finsler manifolds as well. Specifically, if "M" is a manifold equipped with a Finsler metric "F" : T"M" → R, then the unit sphere bundle is the subbundle of the tangent bundle whose fiber at "x" is the indicatrix of "F": formula_3 If "M" is an infinite-dimensional manifold (for example, a Banach, Fréchet or Hilbert manifold), then UT("M") can still be thought of as the unit sphere bundle for the tangent bundle T("M"), but the fiber "π"−1("x") over "x" is then the infinite-dimensional unit sphere in the tangent space. Structures. The unit tangent bundle carries a variety of differential geometric structures. The metric on "M" induces a contact structure on UT"M". This is given in terms of a tautological one-form, defined at a point "u" of UT"M" (a unit tangent vector of "M") by formula_4 where formula_5 is the pushforward along π of the vector "v" ∈ T"u"UT"M". Geometrically, this contact structure can be regarded as the distribution of (2"n"−2)-planes which, at the unit vector "u", is the pullback of the orthogonal complement of "u" in the tangent space of "M". This is a contact structure, for the fiber of UT"M" is obviously an integral manifold (the vertical bundle is everywhere in the kernel of θ), and the remaining tangent directions are filled out by moving up the fiber of UT"M". Thus the maximal integral manifold of θ is (an open set of) "M" itself. On a Finsler manifold, the contact form is defined by the analogous formula formula_6 where "g""u" is the fundamental tensor (the hessian of the Finsler metric). Geometrically, the associated distribution of hyperplanes at the point "u" ∈ UT"x""M" is the inverse image under π* of the tangent hyperplane to the unit sphere in T"x""M" at "u". The volume form θ∧"d"θ"n"−1 defines a measure on "M", known as the kinematic measure, or Liouville measure, that is invariant under the geodesic flow of "M". As a Radon measure, the kinematic measure μ is defined on compactly supported continuous functions "ƒ" on UT"M" by formula_7 where d"V" is the volume element on "M", and μ"p" is the standard rotationally-invariant Borel measure on the Euclidean sphere UT"p""M". The Levi-Civita connection of "M" gives rise to a splitting of the tangent bundle formula_8 into a vertical space "V" = kerπ* and horizontal space "H" on which π* is a linear isomorphism at each point of UT"M". This splitting induces a metric on UT"M" by declaring that this splitting be an orthogonal direct sum, and defining the metric on "H" by the pullback: formula_9 and defining the metric on "V" as the induced metric from the embedding of the fiber UT"x""M" into the Euclidean space T"x""M". 
Equipped with this metric and contact form, UT"M" becomes a Sasakian manifold.
[ { "math_id": 0, "text": "\\mathrm{UT} (M) := \\coprod_{x \\in M} \\left\\{ v \\in \\mathrm{T}_{x} (M) \\left| g_x(v,v) = 1 \\right. \\right\\}," }, { "math_id": 1, "text": "\\pi : \\mathrm{UT} (M) \\to M," }, { "math_id": 2, "text": "\\pi : (x, v) \\mapsto x," }, { "math_id": 3, "text": "\\mathrm{UT}_x (M) = \\left\\{ v \\in \\mathrm{T}_{x} (M) \\left| F(v) = 1 \\right. \\right\\}." }, { "math_id": 4, "text": "\\theta_u(v) = g(u,\\pi_* v)\\," }, { "math_id": 5, "text": "\\pi_*" }, { "math_id": 6, "text": "\\theta_u(v) = g_u(u,\\pi_*v)\\," }, { "math_id": 7, "text": "\\int_{UTM} f\\,d\\mu = \\int_M dV(p) \\int_{UT_pM} \\left.f\\right|_{UT_pM}\\,d\\mu_p" }, { "math_id": 8, "text": "T(UTM) = H\\oplus V" }, { "math_id": 9, "text": "g_H(v,w) = g(v,w),\\quad v,w\\in H" } ]
https://en.wikipedia.org/wiki?curid=7932644
7932827
Spatial multiplexing
Spatial multiplexing or space-division multiplexing (SM, SDM or SMX) is a multiplexing technique in MIMO wireless communication, fiber-optic communication and other communications technologies used to transmit independent channels separated in space. Fiber-optic communication. In fiber-optic communication SDM refers to the usage of the transverse dimension of the fiber to separate the channels. Techniques. Multi-core fiber (MCF). Multi-core fibers are designed with more than a single core. Different types of MCFs exist, of which “Uncoupled MCF” is the most common, in which each core is treated as an independent optical path. The main limitation of these systems is the presence of inter-core crosstalk. In recent times, different splicing techniques and coupling methods have been proposed and demonstrated, and despite many of the component technologies still being in the development stage, MCF systems already present the capability for huge transmission capacity. Recently, several component technologies for multicore optical fiber have been demonstrated, such as three-dimensional Y-splitters between different multicore fibers, a universal interconnection among the same fiber cores, and a device for fast swapping and interchange of wavelength-division multiplexed data among cores of multicore optical fiber. Multi-mode fibers (MMF) and Few-mode fibers (FMF). Multi-mode fibers have a larger core that allows the propagation of multiple cylindrical transverse modes (also referred to as linearly polarized modes), in contrast to a single mode fiber (SMF) that only supports the fundamental mode. The transverse modes are mutually spatially orthogonal, and each allows propagation in both orthogonal polarizations. Typical MMFs are currently not viable for SDM, as the high mode count results in unmanageable levels of modal coupling and dispersion. The utilization of few-mode fibers, which are MMFs with a core size designed specially to allow a low count of spatial modes, is currently under consideration. Due to physical imperfections, the modes exchange power and experience different effective refractive indices as they propagate through the fiber. The power exchange results in modal coupling, and this effect is known to reduce the achievable capacity of the fiber if the modes experience unequal gain or attenuation. Therefore, if not compensated, the capacity increase is not linear in the mode count. The difference in effective refractive indices produces delay spread and hence inter-symbol interference. Mode multiplexers include photonic lanterns, multi-plane light converters, and other devices. Fiber bundles. Bundled fibers are also considered a form of SDM. Wireless communications. If the transmitter is equipped with formula_0 antennas and the receiver has formula_1 antennas, the maximum spatial multiplexing order (the number of streams) is formula_2 if a linear receiver is used. This means that formula_3 streams can be transmitted in parallel, ideally leading to an formula_3 increase of the spectral efficiency (the number of bits per second per Hz that can be transmitted over the wireless channel). The practical multiplexing gain can be limited by spatial correlation, which means that some of the parallel streams may have very weak channel gains. Encoding. Open-loop approach.
In an open-loop MIMO system with formula_0 transmitter antennas and formula_1 receiver antennas, the input-output relationship can be described as formula_4 where formula_5 is the formula_6 vector of transmitted symbols, formula_7 are the formula_8 vectors of received symbols and noise respectively and formula_9 is the formula_10 matrix of channel coefficients. An often encountered problem in open-loop spatial multiplexing is to guard against instances of high channel correlation and strong power imbalances between the multiple streams. One extension designed to address this, which is being considered for DVB-NGH systems, is the so-called "enhanced Spatial Multiplexing (eSM)" scheme. Closed-loop approach. A closed-loop MIMO system utilizes Channel State Information (CSI) at the transmitter. In most cases, only partial CSI is available at the transmitter because of the limitations of the feedback channel. In a closed-loop MIMO system the input-output relationship can be described as formula_11 where formula_12 is the formula_13 vector of transmitted symbols, formula_7 are the formula_14 vectors of received symbols and noise respectively, formula_9 is the formula_15 matrix of channel coefficients and formula_16 is the formula_17 linear precoding matrix. A precoding matrix formula_16 is used to precode the symbols in the vector to enhance the performance. The column dimension formula_3 of formula_16 can be selected smaller than formula_0, which is useful if the system requires formula_18 streams; this can happen, for example, when either the rank of the MIMO channel or the number of receiver antennas is smaller than the number of transmit antennas. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
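A minimal numerical sketch of the open-loop relationship formula_4 described above, using a zero-forcing (pseudo-inverse) linear receiver; the antenna counts, QPSK modulation, Rayleigh channel statistics and SNR are illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Open-loop spatial multiplexing: Ns = min(Nt, Nr) parallel streams, y = H x + n.
Nt, Nr, n_symbols, snr_db = 2, 4, 1000, 20
Ns = min(Nt, Nr)

H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
bits = rng.integers(0, 2, size=(2 * Ns, n_symbols))
x = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)  # QPSK, one row per stream

noise_std = 10 ** (-snr_db / 20)
n = noise_std * (rng.normal(size=(Nr, n_symbols)) + 1j * rng.normal(size=(Nr, n_symbols))) / np.sqrt(2)

y = H @ x + n                            # received samples on the Nr antennas
x_hat = np.linalg.pinv(H) @ y            # zero-forcing linear receiver
ber = (np.mean(np.sign(x_hat.real) != np.sign(x.real))
       + np.mean(np.sign(x_hat.imag) != np.sign(x.imag))) / 2
print(ber)                               # empirical bit error rate
```

Zero forcing is only one possible linear receiver; MMSE or successive-interference-cancellation detectors would plug into the same model by replacing the pseudo-inverse line.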
[ { "math_id": 0, "text": "N_t" }, { "math_id": 1, "text": "N_r" }, { "math_id": 2, "text": "N_s=\\min(N_t, N_r)\\!" }, { "math_id": 3, "text": "N_s" }, { "math_id": 4, "text": "\\mathbf{y}=\\mathbf{Hx}+\\mathbf{n}" }, { "math_id": 5, "text": "\\mathbf{x} = [x_1, x_2, \\ldots, x_{N_t}]^T" }, { "math_id": 6, "text": "N_t\\times 1" }, { "math_id": 7, "text": "\\mathbf{y,n}" }, { "math_id": 8, "text": "N_r \\times 1" }, { "math_id": 9, "text": "\\mathbf{H}" }, { "math_id": 10, "text": "N_r \\times N_t" }, { "math_id": 11, "text": "\\mathbf{y}=\\mathbf{HWs}+\\mathbf{n}" }, { "math_id": 12, "text": "\\mathbf{s} = [s_1, s_2, \\ldots, s_{N_s}]^T" }, { "math_id": 13, "text": "N_s\\times 1" }, { "math_id": 14, "text": "N_r\\times 1" }, { "math_id": 15, "text": "N_r\\times N_t" }, { "math_id": 16, "text": "\\mathbf{W}" }, { "math_id": 17, "text": "N_t\\times N_s" }, { "math_id": 18, "text": "N_s (\\neq N_t)" } ]
https://en.wikipedia.org/wiki?curid=7932827
793295
Mapping class group
Group of isotopy classes of a topological automorphism group In mathematics, in the subfield of geometric topology, the mapping class group is an important algebraic invariant of a topological space. Briefly, the mapping class group is a certain discrete group corresponding to symmetries of the space. Motivation. Consider a topological space, that is, a space with some notion of closeness between points in the space. We can consider the set of homeomorphisms from the space into itself, that is, continuous maps with continuous inverses: functions which stretch and deform the space continuously without breaking or gluing the space. This set of homeomorphisms can be thought of as a space itself. It forms a group under functional composition. We can also define a topology on this new space of homeomorphisms. The open sets of this new function space will be made up of sets of functions that map compact subsets "K" into open subsets "U" as "K" and "U" range throughout our original topological space, completed with their finite intersections (which must be open by definition of topology) and arbitrary unions (again which must be open). This gives a notion of continuity on the space of functions, so that we can consider continuous deformation of the homeomorphisms themselves: called homotopies. We define the mapping class group by taking homotopy classes of homeomorphisms, and inducing the group structure from the functional composition group structure already present on the space of homeomorphisms. Definition. The term mapping class group has a flexible usage. Most often it is used in the context of a manifold "M". The mapping class group of "M" is interpreted as the group of isotopy classes of automorphisms of "M". So if "M" is a topological manifold, the mapping class group is the group of isotopy classes of homeomorphisms of "M". If "M" is a smooth manifold, the mapping class group is the group of isotopy classes of diffeomorphisms of "M". Whenever the group of automorphisms of an object "X" has a natural topology, the mapping class group of "X" is defined as formula_0, where formula_1 is the path-component of the identity in formula_2. (Notice that in the compact-open topology, path components and isotopy classes coincide, i.e., two maps "f" and "g" are in the same path-component iff they are isotopic). For topological spaces, this is usually the compact-open topology. In the low-dimensional topology literature, the mapping class group of "X" is usually denoted MCG("X"), although it is also frequently denoted formula_3, where one substitutes for Aut the appropriate group for the category to which "X" belongs. Here formula_4 denotes the 0-th homotopy group of a space. So in general, there is a short exact sequence of groups: formula_5 Frequently this sequence is not split. If working in the homotopy category, the mapping class group of "X" is the group of homotopy classes of homotopy equivalences of "X". There are many subgroups of mapping class groups that are frequently studied. If "M" is an oriented manifold, formula_6 would be the orientation-preserving automorphisms of "M" and so the mapping class group of "M" (as an oriented manifold) would be index two in the mapping class group of "M" (as an unoriented manifold) provided "M" admits an orientation-reversing automorphism. Similarly, the subgroup that acts as the identity on all the homology groups of "M" is called the Torelli group of "M". Examples. Sphere. 
In any category (smooth, PL, topological, homotopy) formula_7 corresponding to maps of degree ±1. Torus. In the homotopy category formula_8 This is because the n-dimensional torus formula_9 is an Eilenberg–MacLane space. For other categories if formula_10, one has the following split-exact sequences: In the category of topological spaces formula_11 In the PL-category formula_12 (⊕ representing direct sum). In the smooth category formula_13 where formula_14 are the Kervaire–Milnor finite abelian groups of homotopy spheres and formula_15 is the group of order 2. Surfaces. The mapping class groups of surfaces have been heavily studied, and are sometimes called Teichmüller modular groups (note the special case of formula_16 above), since they act on Teichmüller space and the quotient is the moduli space of Riemann surfaces homeomorphic to the surface. These groups exhibit features similar both to hyperbolic groups and to higher rank linear groups. They have many applications in Thurston's theory of geometric three-manifolds (for example, to surface bundles). The elements of this group have also been studied by themselves: an important result is the Nielsen–Thurston classification theorem, and a generating family for the group is given by Dehn twists which are in a sense the "simplest" mapping classes. Every finite group is a subgroup of the mapping class group of a closed, orientable surface; in fact one can realize any finite group as the group of isometries of some compact Riemann surface (which immediately implies that it injects in the mapping class group of the underlying topological surface). Non-orientable surfaces. Some non-orientable surfaces have mapping class groups with simple presentations. For example, every homeomorphism of the real projective plane formula_17 is isotopic to the identity: formula_18 The mapping class group of the Klein bottle "K" is: formula_19 The four elements are the identity, a Dehn twist on a two-sided curve which does not bound a Möbius strip, the y-homeomorphism of Lickorish, and the product of the twist and the y-homeomorphism. It is a nice exercise to show that the square of the Dehn twist is isotopic to the identity. We also remark that the closed genus three non-orientable surface "N"3 (the connected sum of three projective planes) has: formula_20 This is because the surface "N" has a unique class of one-sided curves such that, when "N" is cut open along such a curve "C", the resulting surface formula_21 is "a torus with a disk removed". As an unoriented surface, its mapping class group is formula_22. (Lemma 2.1). 3-Manifolds. Mapping class groups of 3-manifolds have received considerable study as well, and are closely related to mapping class groups of 2-manifolds. For example, any finite group can be realized as the mapping class group (and also the isometry group) of a compact hyperbolic 3-manifold. Mapping class groups of pairs. Given a pair of spaces "(X,A)" the mapping class group of the pair is the isotopy-classes of automorphisms of the pair, where an automorphism of "(X,A)" is defined as an automorphism of "X" that preserves "A", i.e. "f": "X" → "X" is invertible and "f(A)" = "A". Symmetry group of knot and links. If "K" ⊂ S3 is a knot or a link, the symmetry group of the knot (resp. link) is defined to be the mapping class group of the pair (S3, "K"). The symmetry group of a hyperbolic knot is known to be dihedral or cyclic; moreover every dihedral and cyclic group can be realized as symmetry groups of knots. 
The symmetry group of a torus knot is known to be Z2, the cyclic group of order two. Torelli group. Notice that there is an induced action of the mapping class group on the homology (and cohomology) of the space "X". This is because (co)homology is functorial and Homeo0 acts trivially (because all elements are isotopic, hence homotopic to the identity, which acts trivially, and action on (co)homology is invariant under homotopy). The kernel of this action is the "Torelli group", named after the Torelli theorem. In the case of orientable surfaces, this is the action on first cohomology "H"1(Σ) ≅ Z2"g". Orientation-preserving maps are precisely those that act trivially on top cohomology "H"2(Σ) ≅ Z. "H"1(Σ) has a symplectic structure, coming from the cup product; since these maps are automorphisms, and maps preserve the cup product, the mapping class group acts as symplectic automorphisms, and indeed all symplectic automorphisms are realized, yielding the short exact sequence: formula_23 One can extend this to formula_24 The symplectic group is well understood. Hence understanding the algebraic structure of the mapping class group often reduces to questions about the Torelli group. Note that for the torus (genus 1) the map to the symplectic group is an isomorphism, and the Torelli group vanishes. Stable mapping class group. One can embed the surface formula_25 of genus "g" and 1 boundary component into formula_26 by attaching an additional hole on the end (i.e., gluing together formula_25 and formula_27), and thus automorphisms of the small surface fixing the boundary extend to the larger surface. Taking the direct limit of these groups and inclusions yields the stable mapping class group, whose rational cohomology ring was conjectured by David Mumford (one of the conjectures known as the Mumford conjectures). The integral (not just rational) cohomology ring was computed in 2002 by Ib Madsen and Michael Weiss, proving Mumford's conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
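As a concrete illustration of the genus-1 case mentioned above, where the action on first homology identifies the orientation-preserving mapping class group of the torus with SL(2, Z), the following Python sketch represents Dehn twists about the two standard curves by integer matrices and checks the expected relations. The particular sign convention for the twist matrices is an assumption; other conventions differ by conjugation or transposition.

```python
import numpy as np

# Dehn twists about the two standard curves on the torus, acting on
# H_1(T^2) = Z^2.  (Sign conventions vary; this is one common choice.)
Ta = np.array([[1, 1], [0, 1]])
Tb = np.array([[1, 0], [-1, 1]])

# Both matrices lie in SL(2, Z): integer entries and determinant 1.
assert np.isclose(np.linalg.det(Ta), 1) and np.isclose(np.linalg.det(Tb), 1)

# Braid relation satisfied by the two twists: Ta Tb Ta = Tb Ta Tb.
assert np.array_equal(Ta @ Tb @ Ta, Tb @ Ta @ Tb)

# (Ta Tb) has order 6 and (Ta Tb Ta) has order 4, the familiar
# presentation-level facts about SL(2, Z).
eye = np.eye(2, dtype=int)
assert np.array_equal(np.linalg.matrix_power(Ta @ Tb, 6), eye)
assert np.array_equal(np.linalg.matrix_power(Ta @ Tb @ Ta, 4), eye)
```

The same two twist matrices generate SL(2, Z), which is one way of seeing the statement above that Dehn twists generate the mapping class group in this simplest case.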
[ { "math_id": 0, "text": "\\operatorname{Aut}(X)/\\operatorname{Aut}_0(X)" }, { "math_id": 1, "text": "\\operatorname{Aut}_0(X)" }, { "math_id": 2, "text": "\\operatorname{Aut}(X)" }, { "math_id": 3, "text": "\\pi_0(\\operatorname{Aut}(X))" }, { "math_id": 4, "text": "\\pi_0" }, { "math_id": 5, "text": "1 \\rightarrow \\operatorname{Aut}_0(X) \\rightarrow \\operatorname{Aut}(X) \\rightarrow \\operatorname{MCG}(X) \\rightarrow 1." }, { "math_id": 6, "text": "\\operatorname{Aut}(M)" }, { "math_id": 7, "text": "\\operatorname{MCG}(S^2) \\simeq \\Z/2\\Z," }, { "math_id": 8, "text": " \\operatorname{MCG}(\\mathbf{T}^n) \\simeq \\operatorname{GL}(n,\\Z). " }, { "math_id": 9, "text": "\\mathbf{T}^n = (S^1)^n" }, { "math_id": 10, "text": "n\\ge 5" }, { "math_id": 11, "text": "0\\to \\Z_2^\\infty\\to \\operatorname{MCG}(\\mathbf{T}^n) \\to \\operatorname{GL}(n,\\Z)\\to 0" }, { "math_id": 12, "text": "0\\to \\Z_2^\\infty\\oplus\\binom n2\\Z_2\\to \\operatorname{MCG}(\\mathbf{T}^n)\\to \\operatorname{GL}(n,\\Z)\\to 0" }, { "math_id": 13, "text": "0\\to \\Z_2^\\infty\\oplus\\binom n2\\Z_2\\oplus\\sum_{i=0}^n\\binom n i\\Gamma_{i+1}\\to \\operatorname{MCG}(\\mathbf{T}^n)\\to \\operatorname{GL}(n,\\Z)\\to 0" }, { "math_id": 14, "text": "\\Gamma_i" }, { "math_id": 15, "text": "\\Z_2" }, { "math_id": 16, "text": "\\operatorname{MCG}(\\mathbf{T}^2)" }, { "math_id": 17, "text": "\\mathbf{P}^2(\\R)" }, { "math_id": 18, "text": " \\operatorname{MCG}(\\mathbf{P}^2(\\R)) = 1. " }, { "math_id": 19, "text": " \\operatorname{MCG}(K)= \\Z_2 \\oplus \\Z_2." }, { "math_id": 20, "text": " \\operatorname{MCG}(N_3) = \\operatorname{GL}(2,\\Z). " }, { "math_id": 21, "text": "N\\setminus C" }, { "math_id": 22, "text": "\\operatorname{GL}(2,\\Z)" }, { "math_id": 23, "text": "1 \\to \\operatorname{Tor}(\\Sigma) \\to \\operatorname{MCG}(\\Sigma) \\to \\operatorname{Sp}(H^1(\\Sigma)) \\cong \\operatorname{Sp}_{2g}(\\mathbf{Z}) \\to 1" }, { "math_id": 24, "text": "1 \\to \\operatorname{Tor}(\\Sigma) \\to \\operatorname{MCG}^*(\\Sigma) \\to \\operatorname{Sp}^{\\pm}(H^1(\\Sigma)) \\cong \\operatorname{Sp}^{\\pm}_{2g}(\\mathbf{Z}) \\to 1" }, { "math_id": 25, "text": "\\Sigma_{g,1}" }, { "math_id": 26, "text": "\\Sigma_{g+1,1}" }, { "math_id": 27, "text": "\\Sigma_{1,2}" } ]
https://en.wikipedia.org/wiki?curid=793295
793367
Arbitrage pricing theory
Asset pricing theory In finance, arbitrage pricing theory (APT) is a multi-factor model for asset pricing which relates various macro-economic (systematic) risk variables to the pricing of financial assets. Proposed by economist Stephen Ross in 1976, it is widely believed to be an improved alternative to its predecessor, the capital asset pricing model (CAPM). APT is founded upon the law of one price, which suggests that within an equilibrium market, rational investors will implement arbitrage such that the equilibrium price is eventually realised. As such, APT argues that when opportunities for arbitrage are exhausted in a given period, then the expected return of an asset is a linear function of various factors or theoretical market indices, where the sensitivity to each factor is represented by a factor-specific beta coefficient or factor loading. Consequently, it provides traders with an indication of ‘true’ asset value and enables exploitation of market discrepancies via arbitrage. The linear factor model structure of the APT is used as the basis for evaluating asset allocation, the performance of managed funds as well as the calculation of cost of capital. Furthermore, the newer APT model is more dynamic and is used in a wider range of theoretical applications than the preceding CAPM model. A 1986 article written by Gregory Connor and Robert Korajczyk utilised the APT framework and applied it to portfolio performance measurement, suggesting that the Jensen coefficient is an acceptable measurement of portfolio performance. Model. APT is a single-period static model, which helps investors understand the trade-off between risk and return. The average investor aims to optimise the returns for any given level of risk and, as such, expects a positive return for bearing greater risk. As per the APT model, risky asset returns are said to follow a "factor intensity structure" if they can be expressed as: formula_0 where formula_1 is a constant for asset formula_2, formula_3 is a systematic factor, formula_4 is the sensitivity of the formula_2th asset to factor formula_5 (also called the factor loading), and formula_6 is the risky asset's idiosyncratic random shock with mean zero. Idiosyncratic shocks are assumed to be uncorrelated across assets and uncorrelated with the factors. The APT model states that if asset returns follow a factor structure then the following relation exists between expected returns and the factor sensitivities: formula_7 where formula_8 is the risk premium of the factor and formula_9 is the risk-free rate. That is, the expected return of an asset "j" is a linear function of the asset's sensitivities to the "n" factors. Note that there are some assumptions and requirements that have to be fulfilled for the latter to be correct: There must be perfect competition in the market, and the total number of factors may never surpass the total number of assets (in order to avoid the problem of matrix singularity). General Model. For a set of assets with returns formula_10, factor loadings formula_11, and factors formula_12, a general factor model that is used in APT is: formula_13 where formula_14 follows a multivariate normal distribution. In general, it is useful to assume that the factors are distributed as: formula_15 where formula_16 is the expected risk premium vector and formula_17 is the factor covariance matrix.
Assuming that the noise terms for the returns and factors are uncorrelated, the mean and covariance for the returns are respectively: formula_18 It is generally assumed that we know the factors in a model, which allows least squares to be utilized. However, an alternative to this is to assume that the factors are latent variables and employ factor analysis - akin to the form used in psychometrics - to extract them. Assumptions of APT Model. The APT model for asset valuation is founded on the following assumptions: Arbitrage. Arbitrage is the practice whereby investors take advantage of slight variations in asset valuation from its fair price to generate a profit. It is the realisation of a positive expected return from overvalued or undervalued securities in the inefficient market without any incremental risk and zero additional investments. Mechanics. In the APT context, arbitrage consists of trading in two assets – with at least one being mispriced. The arbitrageur sells the asset which is relatively too expensive and uses the proceeds to buy one which is relatively too cheap. Under the APT, an asset is mispriced if its current price diverges from the price predicted by the model. The asset price today should equal the sum of all future cash flows discounted at the APT rate, where the expected return of the asset is a linear function of various factors, and sensitivity to changes in each factor is represented by a factor-specific beta coefficient. A correctly priced asset here may in fact be a "synthetic" asset - a "portfolio" consisting of other correctly priced assets. This portfolio has the same exposure to each of the macroeconomic factors as the mispriced asset. The arbitrageur creates the portfolio by identifying n correctly priced assets (one per risk-factor, plus one) and then weighting the assets such that portfolio beta per factor is the same as for the mispriced asset. When the investor is long the asset and short the portfolio (or vice versa) he has created a position which has a positive expected return (the difference between asset return and portfolio return) and which has a net zero exposure to any macroeconomic factor and is therefore risk free (other than for firm specific risk). The arbitrageur is thus in a position to make a risk-free profit. Difference between the capital asset pricing model. The APT along with the capital asset pricing model (CAPM) is one of two influential theories on asset pricing. The APT differs from the CAPM in that it is less restrictive in its assumptions, making it more flexible for use in a wider range of applications. Thus, it possesses greater explanatory power (as opposed to statistical) for expected asset returns. It assumes that each investor will hold a unique portfolio with its own particular array of betas, as opposed to the identical "market portfolio". In some ways, the CAPM can be considered a "special case" of the APT in that the securities market line represents a single-factor model of the asset price, where beta is exposed to changes in value of the market. Fundamentally, the CAPM is derived on the premise that all factors in the economy can be reconciled into one factor represented by a market portfolio, thus implying they all have equivalent weight on the asset’s return. In contrast, the APT model suggests that each stock reacts uniquely to various macroeconomic factors and thus the impact of each must be accounted for separately.
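The mean and covariance expressions given earlier translate directly into a few lines of linear algebra. The following Python sketch uses made-up loadings, risk premia and covariances purely for illustration; under the model, an asset whose observed expected return differs from the value computed here would be considered mispriced and a candidate for the arbitrage trade described above.

```python
import numpy as np

# Illustrative (made-up) inputs: 4 assets, 2 factors.
r_f = 0.02                                   # risk-free rate
Lambda = np.array([[1.2, 0.3],               # factor loadings, one row per asset
                   [0.8, 0.9],
                   [0.5, 0.1],
                   [1.0, 0.6]])
mu = np.array([0.05, 0.03])                  # expected factor risk premia
Omega = np.array([[0.04, 0.01],              # factor covariance matrix
                  [0.01, 0.02]])
Psi = np.diag([0.02, 0.03, 0.01, 0.025])     # idiosyncratic variances

# Expected returns and covariance implied by the factor model:
# E(r) = r_f + Lambda mu,  Cov(r) = Lambda Omega Lambda^T + Psi
expected_returns = r_f + Lambda @ mu
covariance = Lambda @ Omega @ Lambda.T + Psi

print(expected_returns)   # one APT-consistent expected return per asset
```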
A disadvantage of APT is that the selection and the number of factors to use in the model are ambiguous. Most academics use three to five factors to model returns, but the factors selected have not been empirically robust. In many instances the CAPM, as a model to estimate expected returns, has empirically outperformed the more advanced APT. Additionally, the APT can be seen as a "supply-side" model, since its beta coefficients reflect the sensitivity of the underlying asset to economic factors. Thus, factor shocks would cause structural changes in assets' expected returns, or in the case of stocks, in firms' profitabilities. On the other hand, the capital asset pricing model is considered a "demand side" model. Its results, although similar to those of the APT, arise from a maximization problem of each investor's utility function, and from the resulting market equilibrium (investors are considered to be the "consumers" of the assets). Implementation. As with the CAPM, the factor-specific betas are found via a linear regression of historical security returns on the factor in question. Unlike the CAPM, however, the APT does not itself reveal the identity of its priced factors - the number and nature of these factors are likely to change over time and between economies. As a result, this issue is essentially empirical in nature. Several "a priori" guidelines as to the characteristics required of potential factors are, however, suggested: Chen, Roll and Ross identified the following macro-economic factors as significant in explaining security returns: As a practical matter, indices or spot or futures market prices may be used in place of macro-economic factors, which are reported at low frequency (e.g. monthly) and often with significant estimation errors. Market indices are sometimes derived by means of factor analysis. More direct "indices" that might be used are: International arbitrage pricing theory. International arbitrage pricing theory (IAPT) is an important extension of the base idea of arbitrage pricing theory which further considers factors such as exchange rate risk. In 1983 Bruno Solnik created an extension of the original arbitrage pricing theory to include risk related to international exchange rates, hence making the model applicable to international markets with multi-currency transactions. Solnik suggested that there may be several factors common to all international assets, and conversely, there may be other common factors applicable to certain markets based on nationality. Fama and French originally proposed a three-factor model in 1995 which, consistent with the suggestion from Solnik above, suggests that integrated international markets may experience a common set of factors, hence making it possible to price assets in all integrated markets using their model. The Fama and French three factor model attempts to explain stock returns based on market risk, size, and value. A 2012 paper aimed to empirically investigate Solnik’s IAPT model and the suggestion that base currency fluctuations have a direct and comprehensible effect on the risk premiums of assets. This was tested by generating a returns relation which broke down individual investor returns into currency and non-currency (universal) returns. The paper utilised Fama and French’s three factor model (explained above) to estimate international currency impacts on common factors.
It was concluded that the total foreign exchange risk in international markets consisted of the immediate exchange rate risk and the residual market factors. This, along with empirical data tests validates the idea that foreign currency fluctuations have a direct effect on risk premiums and the factor loadings included in the APT model, hence, confirming the validity of the IAPT model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r_j = a_j + \\beta_{j1}f_1 + \\beta_{j2}f_2 + \\cdots + \\beta_{jn}f_n + \\epsilon_j" }, { "math_id": 1, "text": "a_j" }, { "math_id": 2, "text": "j" }, { "math_id": 3, "text": "f_n" }, { "math_id": 4, "text": "\\beta_{jn}" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\epsilon_j" }, { "math_id": 7, "text": "\\mathbb{E}\\left(r_j\\right) = r_f + \\beta_{j1}RP_1 + \\beta_{j2}RP_2 + \\cdots + \\beta_{jn}RP_n" }, { "math_id": 8, "text": "RP_n" }, { "math_id": 9, "text": "r_f" }, { "math_id": 10, "text": "r\\in\\mathbb{R}^{m}" }, { "math_id": 11, "text": "\\Lambda \\in\\mathbb{R}^{m\\times n}" }, { "math_id": 12, "text": "f\\in\\mathbb{R}^{n}" }, { "math_id": 13, "text": "r = r_{f} + \\Lambda f + \\epsilon, \\quad \\epsilon \\sim \\mathcal{N}(0,\\Psi)" }, { "math_id": 14, "text": "\\epsilon" }, { "math_id": 15, "text": "f \\sim \\mathcal{N}(\\mu,\\Omega)" }, { "math_id": 16, "text": "\\mu" }, { "math_id": 17, "text": "\\Omega" }, { "math_id": 18, "text": "\\mathbb{E}(r) = r_{f} + \\Lambda \\mu, \\quad \\text{Cov}(r) = \\Lambda\\Omega\\Lambda^{T} + \\Psi" } ]
https://en.wikipedia.org/wiki?curid=793367
7934659
Void ratio
Dimensionless quantity related to porosity The void ratio (formula_0) of a mixture of solids and fluids (gases and liquids), or of a porous composite material such as concrete, is the ratio of the volume of the voids (formula_1) filled by the fluids to the volume of all the solids (formula_2). It is a dimensionless quantity in materials science and in soil science, and is closely related to the porosity (often noted as formula_3, or formula_4, depending on the convention), the ratio of the volume of voids (formula_1) to the total (or bulk) volume (formula_5), as follows: formula_6 in which, for idealized porous media with a rigid and undeformable skeleton structure ("i.e.," without variation of total volume (formula_5) when the water content of the sample changes (no expansion or swelling with the wetting of the sample); nor contraction or shrinking effect after drying of the sample), the total (or bulk) volume (formula_5) of an ideal porous material is the sum of the volume of the solids (formula_2) and the volume of voids (formula_1): formula_7 and formula_8 where formula_0 is the void ratio, formula_3 is the porosity, "VV" is the volume of void-space (gases and liquids), "VS" is the volume of solids, and "V""T" is the total (or bulk) volume. This figure is relevant in composites, in mining (particular with regard to the properties of tailings), and in soil science. In geotechnical engineering, it is considered one of the state variables of soils and represented by the symbol formula_0. Note that in geotechnical engineering, the symbol formula_3 usually represents the angle of shearing resistance, a shear strength (soil) parameter. Because of this, in soil science and geotechnics, these two equations are usually presented using formula_4 for porosity: formula_9 and formula_10 where formula_0 is the void ratio, formula_4 is the porosity, "VV" is the volume of void-space (air and water), "VS" is the volume of solids, and "VT" is the total (or bulk) volume. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
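The algebra relating the two quantities is simple enough to express in a few lines. The following Python sketch converts between void ratio and porosity and computes both from measured volumes; the sample volumes are illustrative numbers only.

```python
def void_ratio_from_volumes(v_voids, v_solids):
    """e = V_V / V_S."""
    return v_voids / v_solids

def porosity_from_void_ratio(e):
    """phi = e / (1 + e)."""
    return e / (1.0 + e)

def void_ratio_from_porosity(phi):
    """e = phi / (1 - phi)."""
    return phi / (1.0 - phi)

# Illustrative sample: 0.35 m^3 of voids and 0.65 m^3 of solids (total 1.0 m^3).
e = void_ratio_from_volumes(0.35, 0.65)
phi = porosity_from_void_ratio(e)
assert abs(void_ratio_from_porosity(phi) - e) < 1e-12  # the two relations are inverses
print(f"void ratio e = {e:.3f}, porosity = {phi:.3f}")
```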
[ { "math_id": 0, "text": "e" }, { "math_id": 1, "text": "V_V" }, { "math_id": 2, "text": "V_S" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "{\\eta}" }, { "math_id": 5, "text": "V_T" }, { "math_id": 6, "text": "e = \\frac{V_V}{V_S} = \\frac{V_V}{V_T - V_V} = \\frac{\\phi}{1 - \\phi}" }, { "math_id": 7, "text": "V_T = V_S + V_V" }, { "math_id": 8, "text": "\\phi = \\frac{V_V}{V_T} = \\frac{V_V}{V_S + V_V} = \\frac{e}{1 + e}" }, { "math_id": 9, "text": "e = \\frac{V_V}{V_S} = \\frac{V_V}{V_T - V_V} = \\frac{n}{1 - {\\eta}}" }, { "math_id": 10, "text": "{\\eta} = \\frac{V_V}{V_T} = \\frac{V_V}{V_S + V_V} = \\frac{e}{1 + e}" } ]
https://en.wikipedia.org/wiki?curid=7934659
7935464
Black's equation
Black's Equation is a mathematical model for the mean time to failure (MTTF) of a semiconductor circuit due to electromigration: a phenomenon of molecular rearrangement (movement) in the solid phase caused by an electromagnetic field. The equation is: formula_0 where formula_1 is a constant, formula_2 is the current density, formula_3 is a model parameter, formula_4 is the activation energy, formula_5 is the Boltzmann constant, and formula_6 is the absolute temperature. The model is abstract, not based on a specific physical model, but flexibly describes the failure rate dependence on the temperature, the electrical stress, and the specific technology and materials. More adequately described as descriptive than prescriptive, the values for "A", "n", and "Q" are found by fitting the model to experimental data. The model's value is that it maps experimental data taken at elevated temperature and electrical stress levels in short periods of time to expected component failure rates under actual operating conditions. Experimental data is obtained by running tests that combine high temperature operating life (HTOL) stress, electrical stress, and any other relevant operating environment variables. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
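As a quick illustration of how the equation is used once "A", "n" and "Q" have been fitted, the sketch below evaluates the MTTF for one set of parameters. The numerical values are placeholders chosen only to show the calculation, not figures for any particular technology.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(a_const, j, n, q_ev, temp_k):
    """Mean time to failure per Black's equation: MTTF = A / j**n * exp(Q / (k*T))."""
    return a_const / (j ** n) * math.exp(q_ev / (BOLTZMANN_EV * temp_k))

# Placeholder parameters purely for illustration (not from any datasheet):
# A in hours * (A/cm^2)^n, current density in A/cm^2, Q in eV, T in kelvin.
mttf_hours = black_mttf(a_const=1e9, j=1e6, n=2.0, q_ev=0.7, temp_k=398.0)
print(f"Estimated MTTF: {mttf_hours:.2e} hours")
```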
[ { "math_id": 0, "text": "\\mathrm{MTTF} = \\frac{A}{j^{n}} e^\\left(\\frac{Q}{kT}\\right)" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "j" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=7935464
7936939
Potential density
The potential density of a fluid parcel at pressure formula_0 is the density that the parcel would acquire if adiabatically brought to a reference pressure formula_1, often 1 bar (100 kPa). Whereas density changes with changing pressure, potential density of a fluid parcel is conserved as the pressure experienced by the parcel changes (provided no mixing with other parcels or net heat flux occurs). The concept is used in oceanography and (to a lesser extent) atmospheric science. Potential density is a dynamically important property: for static stability potential density must decrease upward. If it doesn't, a fluid parcel displaced upward finds itself lighter than its neighbors, and continues to move upward; similarly, a fluid parcel displaced downward would be heavier than its neighbors. This is true even if the density of the fluid decreases upward. In stable conditions (potential density decreasing upward) motion along surfaces of constant potential density (isopycnals) is energetically favored over flow across these surfaces (diapycnal flow), so most of the motion within a 3-D geophysical fluid takes place along these 2-D surfaces. In oceanography, the symbol formula_2 is used to denote "potential density", with the reference pressure formula_3 taken to be the pressure at the ocean surface. The corresponding "potential density anomaly" is denoted by formula_4 kg/m3. Because the compressibility of seawater varies with salinity and temperature, the reference pressure must be chosen to be near the actual pressure to keep the definition of potential density dynamically meaningful. Reference pressures are often chosen as a whole multiple of 100 bar; for water near a pressure of 400 bar (40 MPa), say, the reference pressure 400 bar would be used, and the potential density anomaly symbol would be written formula_5. Surfaces of constant potential density (relative to and in the vicinity of a given reference pressure) are used in the analyses of ocean data and to construct models of ocean currents. Neutral density surfaces, defined using another variable called neutral density (formula_6), can be considered the continuous analog of these potential density surfaces. Potential density adjusts for the effect of compression in two ways: A parcel's density may be calculated from an equation of state: formula_7 where formula_8 is temperature, formula_0 is pressure, and formula_9 are other tracers that affect density (e.g. salinity of seawater). The potential density would then be calculated as: formula_10 where formula_11 is the potential temperature of the fluid parcel for the same reference pressure formula_3.
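The two-step recipe above (replace the in-situ temperature by the potential temperature, then evaluate the equation of state at the reference pressure) can be sketched as follows. The linear equation of state and the constant adiabatic lapse rate used here are deliberately crude toy stand-ins for illustration only; real oceanographic work uses a full seawater equation of state such as TEOS-10.

```python
def toy_eos(pressure_dbar, temp_c, salinity_psu):
    """Toy linear equation of state (illustration only, not a real seawater EOS)."""
    rho0, alpha, beta, gamma = 1027.0, 0.17, 0.78, 4.5e-3
    return rho0 - alpha * temp_c + beta * (salinity_psu - 35.0) + gamma * pressure_dbar

def toy_potential_temperature(temp_c, pressure_dbar, p_ref_dbar=0.0):
    """Toy adiabatic correction using an assumed constant lapse rate per dbar."""
    lapse_per_dbar = 1.2e-4
    return temp_c - lapse_per_dbar * (pressure_dbar - p_ref_dbar)

def toy_potential_density(pressure_dbar, temp_c, salinity_psu, p_ref_dbar=0.0):
    """rho_theta = eos(P_0, theta, S): evaluate the EOS at the reference pressure."""
    theta = toy_potential_temperature(temp_c, pressure_dbar, p_ref_dbar)
    return toy_eos(p_ref_dbar, theta, salinity_psu)

# A deep parcel: 4000 dbar, 1.5 degC in situ, salinity 34.7 (illustrative values).
rho_insitu = toy_eos(4000.0, 1.5, 34.7)
rho_pot = toy_potential_density(4000.0, 1.5, 34.7, p_ref_dbar=0.0)
sigma_theta = rho_pot - 1000.0  # potential density anomaly convention
print(rho_insitu, rho_pot, sigma_theta)
```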
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "P_{0}" }, { "math_id": 2, "text": "\\rho_\\theta" }, { "math_id": 3, "text": "P_0" }, { "math_id": 4, "text": "\\sigma_\\theta = \\rho_\\theta - 1000 " }, { "math_id": 5, "text": "\\sigma_4" }, { "math_id": 6, "text": " \\gamma^n " }, { "math_id": 7, "text": "\\rho = \\rho(P,T,S_1,S_2,...) " }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "S_n" }, { "math_id": 10, "text": "\\rho_\\theta = \\rho(P_0,\\theta,S_1,S_2,...) " }, { "math_id": 11, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=7936939
7937743
Spin echo
Response of spin to electromagnetic radiation In magnetic resonance, a spin echo or Hahn echo is the refocusing of spin magnetisation by a pulse of resonant electromagnetic radiation. Modern nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) make use of this effect. The NMR signal observed following an initial excitation pulse decays with time due to both spin relaxation and any "inhomogeneous" effects which cause spins in the sample to precess at different rates. The first of these, relaxation, leads to an irreversible loss of magnetisation. But the inhomogeneous dephasing can be removed by applying a 180° "inversion" pulse that inverts the magnetisation vectors. Examples of inhomogeneous effects include a magnetic field gradient and a distribution of chemical shifts. If the inversion pulse is applied after a period "t" of dephasing, the inhomogeneous evolution will rephase to form an echo at time 2"t". In simple cases, the intensity of the echo relative to the initial signal is given by "e–2t/T2" where "T"2 is the time constant for spin–spin relaxation. The echo time (TE) is the time between the excitation pulse and the peak of the signal. Echo phenomena are important features of coherent spectroscopy which have been used in fields other than magnetic resonance including laser spectroscopy and neutron scattering. History. Echoes were first detected in nuclear magnetic resonance by Erwin Hahn in 1950, and spin echoes are sometimes referred to as Hahn echoes. In nuclear magnetic resonance and magnetic resonance imaging, radiofrequency radiation is most commonly used. In 1972 F. Mezei introduced spin-echo neutron scattering, a technique that can be used to study magnons and phonons in single crystals. The technique is now applied in research facilities using triple axis spectrometers. In 2020 two teams demonstrated that when strongly coupling an ensemble of spins to a resonator, the Hahn pulse sequence does not just lead to a single echo, but rather to a whole train of periodic echoes. In this process the first Hahn echo acts back on the spins as a refocusing pulse, leading to self-stimulated secondary echoes. Principle. The spin-echo effect was discovered by Erwin Hahn when he applied two successive 90° pulses separated by short time period, but detected a signal, the echo, when no pulse was applied. This phenomenon of spin echo was explained by Erwin Hahn in his 1950 paper, and further developed by Carr and Purcell who pointed out the advantages of using a 180° refocusing pulse for the second pulse. The pulse sequence may be better understood by breaking it down into the following steps: Several simplifications are used in this sequence: no decoherence is included and each spin experiences perfect pulses during which the environment provides no spreading. Six spins are shown above and these are not given the chance to dephase significantly. The spin-echo technique is more useful when the spins have dephased more significantly such as in the animation below: Spin-echo decay. A Hahn-echo decay experiment can be used to measure the spin–spin relaxation time, as shown in the animation below. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence which is not refocused by the π pulse. In simple cases, an exponential decay is measured which is described by the T2 time. Stimulated echo. Hahn's 1950 paper showed that another method for generating spin echoes is to apply three successive 90° pulses. 
After the first 90° pulse, the magnetization vector spreads out as described above, forming what can be thought of as a "pancake" in the x-y plane. The spreading continues for a time formula_0, and then a second 90° pulse is applied such that the "pancake" is now in the x-z plane. After a further time formula_1 a third pulse is applied and a stimulated echo is observed after waiting for a time formula_0 after the last pulse. Photon echo. Hahn echoes have also been observed at optical frequencies. For this, resonant light is applied to a material with an inhomogeneously broadened absorption resonance. Instead of using two spin states in a magnetic field, photon echoes use two energy levels that are present in the material even in zero magnetic field. Fast spin echo. Fast spin echo (RARE, FAISE or FSE), also called turbo spin echo (TSE), is an MRI sequence that results in fast scan times. In this sequence, several 180° refocusing radio-frequency pulses are delivered during each repetition time (TR) interval, and the phase-encoding gradient is briefly switched on between echoes. The FSE/TSE pulse sequence superficially resembles a conventional spin-echo (CSE) sequence in that it uses a series of 180°-refocusing pulses after a single 90°-pulse to generate a train of echoes. The FSE/TSE technique, however, changes the phase-encoding gradient for each of these echoes (a conventional multi-echo sequence collects all echoes in a train with the same phase encoding). As a result of changing the phase-encoding gradient between echoes, multiple lines of k-space (i.e., phase-encoding steps) can be acquired within a given repetition time (TR). As multiple phase-encoding lines are acquired during each TR interval, FSE/TSE techniques may significantly reduce imaging time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
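The exponential echo-decay relation quoted earlier (echo amplitude proportional to e–2t/T2 relative to the initial signal) is the basis of the Hahn-echo T2 measurement described above. The sketch below simulates noisy echo amplitudes for a range of pulse spacings and recovers T2 by a log-linear fit; the T2 value and noise level are made-up illustration parameters, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

t2_true = 80e-3                      # assumed "true" T2 of 80 ms (illustrative)
tau = np.linspace(5e-3, 100e-3, 20)  # spacing between excitation and inversion pulses
echo = np.exp(-2 * tau / t2_true)    # echo amplitude at time 2*tau, relative to initial signal
echo_noisy = echo + rng.normal(scale=0.01, size=echo.size)

# Recover T2 from a log-linear fit: ln(echo) = -(2 / T2) * tau + const
slope, _ = np.polyfit(tau, np.log(np.clip(echo_noisy, 1e-6, None)), 1)
t2_fit = -2.0 / slope
print(f"fitted T2 = {t2_fit * 1e3:.1f} ms (true value {t2_true * 1e3:.0f} ms)")
```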
[ { "math_id": 0, "text": "\\tau" }, { "math_id": 1, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=7937743
7938
Diatomic molecule
Molecule composed of any two atoms Diatomic molecules (from Greek "di-" 'two') are molecules composed of only two atoms, of the same or different chemical elements. If a diatomic molecule consists of two atoms of the same element, such as hydrogen (H2) or oxygen (O2), then it is said to be homonuclear. Otherwise, if a diatomic molecule consists of two different atoms, such as carbon monoxide (CO) or nitric oxide (NO), the molecule is said to be heteronuclear. The bond in a homonuclear diatomic molecule is non-polar. The only chemical elements that form stable homonuclear diatomic molecules at standard temperature and pressure (STP) (or at typical laboratory conditions of 1 bar and 25 °C) are the gases hydrogen (H2), nitrogen (N2), oxygen (O2), fluorine (F2), and chlorine (Cl2), and the liquid bromine (Br2). The noble gases (helium, neon, argon, krypton, xenon, and radon) are also gases at STP, but they are monatomic. The homonuclear diatomic gases and noble gases together are called "elemental gases" or "molecular gases", to distinguish them from other gases that are chemical compounds. At slightly elevated temperatures, the halogens bromine (Br2) and iodine (I2) also form diatomic gases. All halogens have been observed as diatomic molecules, except for astatine and tennessine, which are uncertain. Other elements form diatomic molecules when evaporated, but these diatomic species repolymerize when cooled. Heating ("cracking") elemental phosphorus gives diphosphorus (P2). Sulfur vapor is mostly disulfur (S2). Dilithium (Li2) and disodium (Na2) are known in the gas phase. Ditungsten (W2) and dimolybdenum (Mo2) form with sextuple bonds in the gas phase. Dirubidium (Rb2) is diatomic. Heteronuclear molecules. All other diatomic molecules are chemical compounds of two different elements. Many elements can combine to form heteronuclear diatomic molecules, depending on temperature and pressure. Examples are gases carbon monoxide (CO), nitric oxide (NO), and hydrogen chloride (HCl). Many 1:1 binary compounds are not normally considered diatomic because they are polymeric at room temperature, but they form diatomic molecules when evaporated, for example gaseous MgO, SiO, and many others. Occurrence. Hundreds of diatomic molecules have been identified in the environment of the Earth, in the laboratory, and in interstellar space. About 99% of the Earth's atmosphere is composed of two species of diatomic molecules: nitrogen (78%) and oxygen (21%). The natural abundance of hydrogen (H2) in the Earth's atmosphere is only of the order of parts per million, but H2 is the most abundant diatomic molecule in the universe. The interstellar medium is dominated by hydrogen atoms. Molecular geometry. All diatomic molecules are linear and characterized by a single parameter which is the bond length or distance between the two atoms. Diatomic nitrogen has a triple bond, diatomic oxygen has a double bond, and diatomic hydrogen, fluorine, chlorine, iodine, and bromine all have single bonds. Historical significance. Diatomic elements played an important role in the elucidation of the concepts of element, atom, and molecule in the 19th century, because some of the most common elements, such as hydrogen, oxygen, and nitrogen, occur as diatomic molecules. John Dalton's original atomic hypothesis assumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another.
For example, Dalton assumed water's formula to be HO, giving the atomic weight of oxygen as eight times that of hydrogen, instead of the modern value of about 16. As a consequence, confusion existed regarding atomic weights and molecular formulas for about half a century. As early as 1805, Gay-Lussac and von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen, and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules. However, these results were mostly ignored until 1860, partly due to the belief that atoms of one element would have no chemical affinity toward atoms of the same element, and also partly due to apparent exceptions to Avogadro's law that were not explained until later in terms of dissociating molecules. At the 1860 Karlsruhe Congress on atomic weights, Cannizzaro resurrected Avogadro's ideas and used them to produce a consistent table of atomic weights, which mostly agree with modern values. These weights were an important prerequisite for the discovery of the periodic law by Dmitri Mendeleev and Lothar Meyer. Excited electronic states. Diatomic molecules are normally in their lowest or ground state, which conventionally is also known as the formula_0 state. When a gas of diatomic molecules is bombarded by energetic electrons, some of the molecules may be excited to higher electronic states, as occurs, for example, in the natural aurora; high-altitude nuclear explosions; and rocket-borne electron gun experiments. Such excitation can also occur when the gas absorbs light or other electromagnetic radiation. The excited states are unstable and naturally relax back to the ground state. Over various short time scales after the excitation (typically a fraction of a second, or sometimes longer than a second if the excited state is metastable), transitions occur from higher to lower electronic states and ultimately to the ground state, and each transition results in the emission of a photon. This emission is known as fluorescence. Successively higher electronic states are conventionally named formula_1, formula_2, formula_3, etc. (but this convention is not always followed, and sometimes lower case letters and alphabetically out-of-sequence letters are used, as in the example given below). The excitation energy must be greater than or equal to the energy of the electronic state in order for the excitation to occur. In quantum theory, an electronic state of a diatomic molecule is represented by the molecular term symbol formula_4 where formula_5 is the total electronic spin quantum number, formula_6 is the total electronic angular momentum quantum number along the internuclear axis, and formula_7 is the vibrational quantum number. formula_6 takes on values 0, 1, 2, ..., which are represented by the electronic state symbols formula_8, formula_9, formula_10... For example, the following table lists the common electronic states (without vibrational quantum numbers) along with the energy of the lowest vibrational level (formula_11) of diatomic nitrogen (N2), the most abundant gas in the Earth's atmosphere. The subscripts and superscripts after formula_6 give additional quantum mechanical details about the electronic state. The superscript formula_12 or formula_13 determines whether reflection in a plane containing the internuclear axis introduces a sign change in the wavefunction.
The subscript formula_14 or formula_15 applies to molecules of identical atoms: when reflecting the state along a plane perpendicular to the molecular axis, states that do not change are labelled formula_14 (gerade), and states that change sign are labelled formula_15 (ungerade). The aforementioned fluorescence occurs in distinct regions of the electromagnetic spectrum, called "emission bands": each band corresponds to a particular transition from a higher electronic state and vibrational level to a lower electronic state and vibrational level (typically, many vibrational levels are involved in an excited gas of diatomic molecules). For example, N2 formula_1-formula_0 emission bands (a.k.a. Vegard-Kaplan bands) are present in the spectral range from 0.14 to 1.45 μm (micrometres). A given band can be spread out over several nanometers in electromagnetic wavelength space, owing to the various transitions that occur in the molecule's rotational quantum number, formula_16. These are classified into distinct sub-band branches, depending on the change in formula_16. The formula_17 branch corresponds to formula_18, the formula_19 branch to formula_20, and the formula_21 branch to formula_22. Bands are spread out even further by the limited spectral resolution of the spectrometer that is used to measure the spectrum. The spectral resolution depends on the instrument's point spread function. Energy levels. The molecular term symbol is a shorthand expression of the angular momenta that characterize the electronic quantum states of a diatomic molecule, which are also eigenstates of the electronic molecular Hamiltonian. It is also convenient, and common, to represent a diatomic molecule as two point masses connected by a massless spring. The energies involved in the various motions of the molecule can then be broken down into three categories: the translational, rotational, and vibrational energies. The rotational energy levels of a diatomic molecule can be described as below, while the vibrational energy levels can be described using the harmonic oscillator approximation or using quantum vibrational interaction potentials. These potentials give more accurate energy levels because they take multiple vibrational effects into account. Concerning history, the first treatment of diatomic molecules with quantum mechanics was made by Lucy Mensing in 1926. Translational energies. The translational energy of the molecule is given by the kinetic energy expression: formula_23 where formula_24 is the mass of the molecule and formula_7 is its velocity. Rotational energies. Classically, the kinetic energy of rotation is formula_25 where formula_26 is the angular momentum and formula_27 is the moment of inertia of the molecule. For microscopic, atomic-level systems like a molecule, angular momentum can only have specific discrete values given by formula_28 where formula_29 is a non-negative integer and formula_30 is the reduced Planck constant. Also, for a diatomic molecule the moment of inertia is formula_31 where formula_32 is the reduced mass of the molecule and formula_33 is the average distance between the centers of the two atoms in the molecule. So, substituting the angular momentum and moment of inertia into Erot, the rotational energy levels of a diatomic molecule are given by: formula_34 Vibrational energies.
Another type of motion of a diatomic molecule is for each atom to oscillate—or vibrate—along the line connecting the two atoms. The vibrational energy is approximately that of a quantum harmonic oscillator: formula_35 where formula_36 is an integer formula_30 is the reduced Planck constant and formula_37 is the angular frequency of the vibration. Comparison between rotational and vibrational energy spacings. The spacing, and the energy of a typical spectroscopic transition, between vibrational energy levels is about 100 times greater than that of a typical transition between rotational energy levels. Hund's cases. The good quantum numbers for a diatomic molecule, as well as good approximations of rotational energy levels, can be obtained by modeling the molecule using Hund's cases. Mnemonics. The mnemonics "BrINClHOF", pronounced "Brinklehof", "HONClBrIF", pronounced "Honkelbrif", “HOBrFINCl”, pronounced “Hoberfinkel”, and "HOFBrINCl", pronounced "Hofbrinkle", have been coined to aid recall of the list of diatomic elements. Another method, for English-speakers, is the sentence: "Never Have Fear of Ice Cold Beer" as a representation of Nitrogen, Hydrogen, Fluorine, Oxygen, Iodine, Chlorine, Bromine. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
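To make the rotational and vibrational expressions above concrete, the sketch below evaluates E_rot = l(l+1)ħ²/(2μr0²) and E_vib = (n+1/2)ħω for a generic diatomic molecule. The reduced mass, bond length and vibrational frequency are round illustrative numbers of roughly the right order for a light diatomic molecule, not tabulated constants for any specific species.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg
EV = 1.602176634e-19     # joules per electronvolt

# Illustrative parameters (round numbers, not tabulated data):
mu = 7.0 * AMU      # reduced mass ~7 amu
r0 = 1.1e-10        # bond length ~1.1 angstrom
omega = 4.0e14      # vibrational angular frequency ~4e14 rad/s

moment_of_inertia = mu * r0 ** 2

def rotational_energy(l):
    """E_rot = l(l+1) hbar^2 / (2 mu r0^2), returned in eV."""
    return l * (l + 1) * HBAR ** 2 / (2.0 * moment_of_inertia) / EV

def vibrational_energy(n):
    """E_vib = (n + 1/2) hbar omega, returned in eV."""
    return (n + 0.5) * HBAR * omega / EV

# Vibrational level spacings come out far larger than the lowest rotational
# spacings, in line with the comparison made in the text above.
print([round(rotational_energy(l), 6) for l in range(4)])
print([round(vibrational_energy(n), 4) for n in range(3)])
```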
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "^{2S+1} \\Lambda (v)^{+/-}_{(g/u)}" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "\\Lambda" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "\\Sigma" }, { "math_id": 9, "text": "\\Pi" }, { "math_id": 10, "text": "\\Delta" }, { "math_id": 11, "text": "v=0" }, { "math_id": 12, "text": "+" }, { "math_id": 13, "text": "-" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "u" }, { "math_id": 16, "text": "J" }, { "math_id": 17, "text": "R" }, { "math_id": 18, "text": "\\Delta J = +1" }, { "math_id": 19, "text": "P" }, { "math_id": 20, "text": "\\Delta J = -1" }, { "math_id": 21, "text": "Q" }, { "math_id": 22, "text": "\\Delta J = 0" }, { "math_id": 23, "text": "E_\\text{trans}=\\frac{1}{2}mv^2" }, { "math_id": 24, "text": "m" }, { "math_id": 25, "text": "E_\\text{rot} = \\frac{L^2}{2 I} \\," }, { "math_id": 26, "text": "L \\," }, { "math_id": 27, "text": "I \\," }, { "math_id": 28, "text": "L^2 = \\ell(\\ell+1) \\hbar^2 \\," }, { "math_id": 29, "text": "\\ell" }, { "math_id": 30, "text": "\\hbar" }, { "math_id": 31, "text": "I = \\mu r_{0}^2 \\," }, { "math_id": 32, "text": "\\mu \\," }, { "math_id": 33, "text": "r_{0} \\," }, { "math_id": 34, "text": "E_\\text{rot} = \\frac{l(l+1) \\hbar^2}{2 \\mu r_{0}^2} \\ \\ \\ \\ \\ l=0,1,2,... \\," }, { "math_id": 35, "text": "E_\\text{vib} = \\left(n+\\frac{1}{2} \\right)\\hbar \\omega \\ \\ \\ \\ \\ n=0,1,2,.... \\," }, { "math_id": 36, "text": "n" }, { "math_id": 37, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=7938
7939
Duopoly
Type of oligopoly A duopoly (from Greek δύο, "duo" "two" and πωλεῖν, "polein" "to sell") is a type of oligopoly where two firms have dominant or exclusive control over a market, and most (if not all) of the competition within that market occurs directly between them. Duopoly is the most commonly studied form of oligopoly due to its simplicity. Duopolies sell to consumers in a competitive market where the choice of an individual consumer cannot affect the firm in a duopoly market, as the defining characteristic of duopolies is that decisions made by each seller are dependent on what the other competitor does. Duopolies can exist in various forms, such as Cournot, Bertrand, or Stackelberg competition. These models demonstrate how firms in a duopoly can compete on output or price, depending on the assumptions made about firm behavior and market conditions. Similar features are discernible in national political systems of party duopoly. Duopoly models in economics and game theory. Cournot duopoly. Cournot model in game theory. In 1838, Antoine Augustin Cournot published a book titled "Researches Into the Mathematical Principles of the Theory of Wealth" in which he introduced and developed this model for the first time. Cournot duopoly, also known as Cournot competition, is an imperfect competition model in which two firms with identical cost functions compete with homogeneous products in a static context. The Cournot model shows that two firms assume each other's output and treat this as a fixed amount, and produce in their own firm according to this. The Cournot duopoly model relies on the following assumptions: In this model, two companies, each of which chooses its own quantity of output, compete against each other while facing constant marginal and average costs. The market price is determined by the sum of the output of the two companies. formula_0 is the equation for the market demand function. Π1(q1, q2) = (P(q1 + q2) − c1)*q1 Π2(q1, q2) = (P(q1 + q2) − c2)*q2 The general process for obtaining a Nash equilibrium of a game using the best response functions is followed in order to discover a Nash equilibrium of Cournot's model for a specific cost function and demand function. A Nash equilibrium of the Cournot model is a pair (q1*, q2*) such that, given q2*, q1* solves MAXq1 Π1(q1, q2*) = (P(q1 + q2*) − c1)q1 and, given q1*, q2* solves MAXq2 Π2(q1*, q2) = (P(q1* + q2) − c2)q2. Given the other firm's optimal quantity, each firm maximizes its profit over the residual inverse demand. In equilibrium, no firm can increase profits by changing its output level; the two first order conditions set equal to zero give the best responses. Cournot's duopoly marked the beginning of the study of oligopolies, and specifically duopolies, as well as the expansion of the research of market structures, which had previously focussed on the extremes of perfect competition and monopoly. In the Cournot duopoly model, firms choose the quantity of output they produce simultaneously, taking into consideration the quantity produced by their competitor. Each firm's profit depends on the total output produced by both firms, and the market price is determined by the sum of their outputs. The goal of each firm is to maximize its profit given the output produced by the other firm. This process continues until both firms reach a Nash equilibrium, where neither firm has an incentive to change its output level given the output of the other firm. Bertrand duopoly. Bertrand model in game theory.
The Bertrand competition was developed by a French mathematician called Joseph Louis François Bertrand after investigating the claims of the Cournot model in "Researches into the mathematical principles of the theory of wealth, 1838". According to the Cournot model, firms in a duopoly would be able to keep prices above marginal cost and hence be extremely profitable. Bertrand took issue with this. In this market structure, each firm could only choose whole amounts and each firm receives zero payoffs when the aggregate demand exceeds the size of the amount that they share with each other. The market demand function is formula_1. The Bertrand model has similar assumptions to the Cournot model: in a game of two firms, each firm competes in price instead of output. Each one of them will assume that the other will not change prices in response to its price cuts. When both firms use this logic, they will reach a Nash equilibrium. Let pm be the monopoly price, pm = argmaxp (p − c)D(p). The best response of firm i to a rival price pj is: if pj &gt; pm, Ri(pj) = pm; if c &lt; pj ≤ pm, Ri(pj) = pj − ε (undercutting the rival slightly); if pj ≤ c, Ri(pj) = c. For rival prices above cost, each firm has an incentive to undercut the rival to get the whole demand. If the rival prices below cost, a firm makes losses whenever it attracts demand, and is better off charging at the cost level. The Nash equilibrium is p1 = p2 = c. Bertrand paradox. Under static price competition with homogeneous products and constant, symmetric marginal cost, firms price at the level of marginal cost and make no economic profits. In contrast to the Cournot model, the Bertrand duopoly model assumes that firms compete on price rather than quantity. Each firm sets its price simultaneously, anticipating that the other firm will not change its price in response. When both firms use this logic, they will reach a Nash equilibrium, where neither firm has an incentive to change its price given the price set by the other firm. In this model, firms tend to price their products at the level of their marginal cost, resulting in zero economic profits, a phenomenon known as the Bertrand paradox. Quality standards. In a duopoly, quality standards can play a significant role in the competitive dynamics between the two firms. A low-quality manufacturer may benefit from a slightly stringent quality standard in the absence of sunk costs, whereas a high-quality producer may suffer from it. Consumer welfare improves if the firm generating the higher quality does not considerably enhance its quality in response to its competitor's increase in quality. Exit from the industry is triggered by a sufficiently strict requirement. The high-quality producer exits first when there are no sunk costs. In some cases, firms may engage in a quality competition, attempting to outdo one another by improving their products or services to attract more customers. Politics. Like a market, a political system can be dominated by two groups, which exclude other parties or ideologies from participation. This is known as a two-party system. In such a system, one party or the other tends to dominate government at any given time (the Majority party), while the other has only limited power (the Minority party). According to Duverger's law, this tends to be caused by a simple winner-take-all voting system without runoffs or ranked choices. The United States and many Latin American countries, such as Costa Rica, Guyana, and the Dominican Republic, have two-party government systems. Duopoly in Danish court politics.
The prime minister-finance minister duopoly is an unusual form of court politics. There have been few other countries where the prime minister and the Treasury have had such a tumultuous relationship as Australia and the United Kingdom. There have been some confrontations in the past when the Finance ministry did not have the full support of the prime minister, leading to internal ministerial battles over economic strategy. A permanent civil service is a basic requirement for the duopoly system to function properly. The permanent civil service in general, and the Socialist Party in particular, are critical to the duopoly's effective operation. The conventional inter-governmental duopoly is carried by civil servants. The duopoly is confronted with some quandaries, such as tensions between different groups in the office over their relative positions. Departmental budget cuts are being made across the board. The prime ministerial-finance-ministry duopoly requires more credibility. Trust is a rare commodity among Australians and Britons. Denmark has a lot to offer. The Danish duopoly works together. Australia and the United Kingdom have competitive duopolies, and competitive duopolies are unstable. Types of duopoly. Cournot duopoly. A Cournot duopoly is a model of strategic interaction between two firms where they simultaneously choose their output levels, assuming the rival's output level is fixed. The firms compete on quantity, and each firm attempts to maximize its profit given the other firm's output level. This leads to a Nash equilibrium where neither firm has an incentive to change its output, given the other firm's output. Bertrand duopoly. In a Bertrand duopoly, two firms compete on price instead of quantity. Each firm assumes that its rival's price is fixed and chooses its own price to maximize profit. This model predicts that, under certain conditions, firms will set prices equal to marginal cost, leading to perfect competition. Stackelberg duopoly. A Stackelberg duopoly is a model where one firm (the leader) chooses its output level first, followed by the other firm (the follower). The follower observes the leader's output decision and adjusts its own output to maximize profit. The Stackelberg model often results in a higher total output and lower market price than the Cournot and Bertrand models. Examples in business. A commonly cited example of a duopoly is that involving Visa and Mastercard, who between them control a large proportion of the electronic payment processing market. In 2000 they were the defendants in a United States Department of Justice antitrust lawsuit. An appeal was upheld in 2004. Examples where two companies control an overwhelming proportion of a market are: Media. In Finland, the state-owned broadcasting company Yleisradio and the private broadcaster Mainos-TV had a legal duopoly (in the economists' sense of the word) from the 1950s to 1993. No other broadcasters were allowed. Mainos-TV operated by leasing air time from Yleisradio, broadcasting in reserved blocks between Yleisradio's own programming on its two channels. This was a unique phenomenon in the world. Between 1986 and 1992 there was an independent third channel but it was jointly owned by Yle and M-TV; only in 1993 did M-TV get its own channel. In Kenya, mobile service providers Safaricom and Airtel in Kenya form a duopoly in the Kenyan telecommunications industry. In Singapore, the mass media industry is presently dominated by two players, namely Mediacorp and SPH Media Trust. 
In the United Kingdom, the BBC and ITV formed an effective duopoly (with Channel 4 originally being economically dependent on ITV) until the development of multichannel from the 1990s onwards. Broadcasting. Duopoly is used in the United States broadcast television and radio industry to refer to a single company owning two outlets in the same city. This usage is technically incompatible with the normal definition of the word and may lead to confusion, inasmuch as there are generally more than two owners of broadcast television stations in markets with broadcast duopolies. In Canada, this definition is therefore more commonly called a "twinstick". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
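To tie the Cournot model described earlier to numbers, the sketch below solves the two first-order conditions for a linear inverse demand P(Q) = a − bQ and constant marginal costs. The parameter values are arbitrary illustrations; with symmetric costs the familiar closed-form quantity q* = (a − c) / (3b) is recovered, and the equilibrium price stays above marginal cost, in contrast to the Bertrand outcome.

```python
def cournot_equilibrium(a, b, c1, c2):
    """Nash equilibrium quantities for linear inverse demand P(Q) = a - b*Q
    and constant marginal costs c1, c2 (interior solution assumed)."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    price = a - b * (q1 + q2)
    return q1, q2, price

# Arbitrary illustrative parameters.
a, b, c = 100.0, 1.0, 10.0
q1, q2, p = cournot_equilibrium(a, b, c, c)

# With symmetric costs each firm produces (a - c) / (3b) = 30 and the market
# price of 40 sits above the marginal cost of 10.
assert abs(q1 - (a - c) / (3 * b)) < 1e-12
print(q1, q2, p)
```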
[ { "math_id": 0, "text": "P(Q)=a-bQ" }, { "math_id": 1, "text": "Q(P)=a-bP" } ]
https://en.wikipedia.org/wiki?curid=7939
794163
Annualized failure rate
Probability that a device or component will fail during a year of use Annualized failure rate (AFR) gives the estimated probability that a device or component will fail during a full year of use. It is a relation between the mean time between failure (MTBF) and the hours that a number of devices are run per year. AFR is estimated from a sample of like components—AFR and MTBF as given by vendors are population statistics that can not predict the behaviour of an individual unit. Hard disk drives. For example, AFR is used to characterize the reliability of hard disk drives. The relationship between AFR and MTBF (in hours) is: formula_0 This equation assumes that the device or component is powered on for the full 8766 hours of a year, and gives the estimated fraction of an original sample of devices or components that will fail in one year, or, equivalently, 1 − AFR is the fraction of devices or components that will show no failures over a year. It is based on an exponential failure distribution (see failure rate for a full derivation). Note: Some manufacturers count a year as 8760 hours. This ratio can be approximated by, assuming a small AFR, formula_1 For example, a common specification for PATA and SATA drives may be an MTBF of 300,000 hours, giving an approximate theoretical 2.92% annualized failure rate i.e. a 2.92% chance that a given drive will fail during a year of use. The AFR for a drive is derived from time-to-fail data from a reliability-demonstration test (RDT). AFR will increase towards and beyond the end of the service life of a device or component. Google's 2007 study found, based on a large field sample of drives, that actual AFRs for individual drives ranged from 1.7% for first year drives to over 8.6% for three-year-old drives. A CMU 2007 study showed an estimated 3% mean AFR over 1–5 years based on replacement logs for a large sample of drives. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
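The relation between AFR and MTBF can be checked with a short Python sketch (an illustration, not part of any cited source); the 300,000-hour MTBF is the example figure quoted above, and 8766 hours per year is the convention stated in the article.

```python
import math

def afr_exact(mtbf_hours, hours_per_year=8766):
    """AFR = 1 - exp(-hours_per_year / MTBF), assuming an exponential
    failure distribution and power-on for the full year."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

def afr_approx(mtbf_hours, hours_per_year=8766):
    """Linear approximation AFR ~= hours_per_year / MTBF, valid when AFR is small."""
    return hours_per_year / mtbf_hours

print(afr_exact(300_000))    # ~0.0288 (2.88%)
print(afr_approx(300_000))   # ~0.0292 (the 2.92% approximation quoted above)
```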
[ { "math_id": 0, "text": "AFR = 1-exp(-8766/MTBF)" }, { "math_id": 1, "text": "AFR = {8766 \\over MTBF} " } ]
https://en.wikipedia.org/wiki?curid=794163
7941780
Quantum Markov chain
In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical definitions of probability with quantum probability. Introduction. Very roughly, the theory of a quantum Markov chain resembles that of a measure-many automaton, with some important substitutions: the initial state is to be replaced by a density matrix, and the projection operators are to be replaced by positive operator valued measures. Formal statement. More precisely, a quantum Markov chain is a pair formula_0 with formula_1 a density matrix and formula_2 a quantum channel such that formula_3 is a completely positive trace-preserving map, and formula_4 a C*-algebra of bounded operators. The pair must obey the quantum Markov condition, that formula_5 for all formula_6. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
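The two ingredients of the definition, a density matrix and a completely positive trace-preserving map, can be illustrated with a small NumPy sketch. This is only a hypothetical single-qubit example using the standard depolarizing channel; it does not construct the tensor-product algebra appearing in the quantum Markov condition itself.

```python
import numpy as np

# A single-qubit density matrix: positive semidefinite, trace one.
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]], dtype=complex)

def depolarizing(rho, p):
    """Depolarizing channel, a standard completely positive trace-preserving map:
    with probability p the state is replaced by the maximally mixed state."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

sigma = depolarizing(rho, 0.3)
print(np.trace(sigma).real)        # 1.0 -- the trace is preserved
print(np.linalg.eigvalsh(sigma))   # non-negative eigenvalues -- positivity is preserved
```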
[ { "math_id": 0, "text": "(E,\\rho)" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "E:\\mathcal{B}\\otimes\\mathcal{B}\\to\\mathcal{B}" }, { "math_id": 4, "text": "\\mathcal{B}" }, { "math_id": 5, "text": "\\operatorname{Tr} \\rho (b_1\\otimes b_2) = \\operatorname{Tr} \\rho E(b_1, b_2)" }, { "math_id": 6, "text": "b_1,b_2\\in \\mathcal{B}" } ]
https://en.wikipedia.org/wiki?curid=7941780
7943517
Juliette Peirce
Second wife of Charles Sanders Peirce Juliette Peirce (; d. October 4, 1934) was the second wife of the mathematician and philosopher Charles Sanders Peirce. History. Almost nothing is known about Juliette Peirce's life before she met Charles—not even her name, which is variously given as Juliette Annette Froissy or Juliette Pourtalai. Some historians believe she was French, but others have speculated that she had a Gypsy heritage (Ketner 1998, p. 279ff). On occasion, she claimed to be a Habsburg princess. Scanty facts about her provide only a few possible clues to her past. She spoke French, had her own income, had gynecological illnesses that prevented her from having children, and owned a deck of tarot cards said to have predicted the downfall of Napoleon. She probably first met Charles in New York City at the Hotel Brevoort's New Year's Eve ball in December 1876. Controversy. Charles Peirce's first wife, Harriet Melusina Fay, had left him in 1875, but he was not divorced from her until 1882. Charles and Juliette became close friends and travel companions, and were likely romantically involved before his divorce was official. This indiscretion is sometimes said to have cost him his career. Peirce had a teaching position at Johns Hopkins University. When he was being considered for a permanent post, one of the major American scientists of the day, Simon Newcomb, who apparently did not like Peirce, pointed out to a Johns Hopkins trustee that Peirce, while an employee of the university, had traveled with a woman to whom he was not married. The ensuing scandal led to Peirce's dismissal. His later applications to many universities for teaching posts were all unsuccessful, and in fact he never again held a full-time permanent position anywhere. As a result, Juliette was often blamed for Peirce's failure to reach the eminent social stature his intellect might have commanded. There were strains with Peirce's mother Sarah, brother Jem (James Mills Peirce), and most of all his aunt Lizzie, who owned the house in which Sarah and Jem lived, but despite that and strains in the marriage itself, Peirce remained powerfully attached to Juliette. In a diary entry for January 6, 1889, Peirce wrote, regarding Juliette's health, "If I should lose her, I would not survive her. Therefore, I must turn my "whole" energy to saving her." Except during occasional travels by one or the other, they remained together until his death in 1914, and she never remarried. Arisbe. In 1887 Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km2) of rural land near Milford, Pennsylvania, land that never yielded an economic return. There he had an 1854 farmhouse remodeled to his design. The local people, many of whom were French, accepted Juliette. The Peirces led an active social life there and became friends with relatives of Gifford Pinchot. Except for occasional travels and stays elsewhere, the Peirces spent the rest of their lives there. They named their property "Arisbe" for possibly any or all of the following reasons: Even as they sank into poverty, they continued to make expansions to the house, almost losing it and their land because of unpaid debts. A Santiago conjecture. In "His Glassy Essence" (1998), p. 279ff, Kenneth Ketner speculates that Juliette was of Spanish Gypsy origin, and that Charles's adding "Santiago" to his name was his way of "informally ... paying tribute to his wife ... and to her cultural origins as a Spanish woman who was a Gitano, or Spanish Gypsy of Andalusia." 
It involves the movement of Gypsies into Spain along the pilgrimage to Santiago de Compostela, Santiago's being the patron saint of Spain, Juliette's being in Spain at the time when Peirce's friend and colleague Ernst Schröder's "Logik" was published, and other reasons. Illnesses and Juliette's widowhood. Peirce suffered from his late teens through the rest of his life with an ailment then known as "facial neuralgia" which would today be diagnosed as trigeminal neuralgia, a chronic, intensely painful condition against which he self-medicated with drugs such as morphine, cocaine, and alcohol. His mental and physical illnesses worsened with time, and he suffered numerous breakdowns over the course of his life, rendering him increasingly unreliable. His earnings from temporary posts, lectures and articles dwindled, until he and Juliette lived in poverty. At his death he had more than 100,000 pages of unpublished writing. Juliette sold these to Harvard and Victor Lenzen was responsible for relocating them there. In her later years, Juliette was described as increasingly frail. She contracted, and eventually died of, tuberculosis. When Peirce died in 1914, Juliette was left destitute and alone. She lived another twenty years, dedicated to bringing Peirce and his ideas the recognition she believed they deserved. An obituary in "Science" described her as a "gracious lady" who "lived and passed away...in the distinction of her devotion." In popular culture. "Pierce-Arrow", by Susan Howe, New Directions, 1999, consists of an essay and poems focusing on Charles and his wife Juliette. The spelling of the title is correct, referring to the old motor car company, as well as punning for example on the Peirce arrow ("formula_0"), logical symbol for "neither...nor...". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\downarrow" } ]
https://en.wikipedia.org/wiki?curid=7943517
7944108
Fixed-asset turnover
Fixed-asset turnover is the ratio of sales (on the profit and loss account) to the value of fixed assets (on the balance sheet). It indicates how well the business is using its fixed assets to generate sales. formula_0 Generally speaking, the higher the ratio, the better, because a high ratio indicates the business has less money tied up in fixed assets for each unit of currency of sales revenue. A declining ratio may indicate that the business is over-invested in plant, equipment, or other fixed assets. In A.A.T. assessments this financial measure is calculated in two different ways. 1. Total Asset Turnover Ratio = Revenue / Total Assets 2. Net Asset Turnover Ratio = Revenue / (Total Assets - Current Liabilities) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
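The ratios above are straightforward to compute; the following Python sketch uses made-up figures purely for illustration.

```python
def fixed_asset_turnover(net_sales, average_net_fixed_assets):
    """Fixed-asset turnover = net sales / average net fixed assets."""
    return net_sales / average_net_fixed_assets

def net_asset_turnover(revenue, total_assets, current_liabilities):
    """A.A.T.-style net asset turnover = revenue / (total assets - current liabilities)."""
    return revenue / (total_assets - current_liabilities)

# Hypothetical figures:
print(fixed_asset_turnover(1_200_000, 400_000))          # 3.0
print(net_asset_turnover(1_200_000, 900_000, 150_000))   # 1.6
```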
[ { "math_id": 0, "text": "Fixed\\ Asset\\ Turnover = \\frac{Net\\ sales}{Average\\ net\\ fixed\\ assets}" } ]
https://en.wikipedia.org/wiki?curid=7944108
794492
Fitting lemma
In mathematics, the Fitting lemma – named after the mathematician Hans Fitting – is a basic statement in abstract algebra. Suppose "M" is a module over some ring. If "M" is indecomposable and has finite length, then every endomorphism of "M" is either an automorphism or nilpotent. As an immediate consequence, we see that the endomorphism ring of every finite-length indecomposable module is local. A version of Fitting's lemma is often used in the representation theory of groups. This is in fact a special case of the version above, since every "K"-linear representation of a group "G" can be viewed as a module over the group algebra "KG". Proof. To prove Fitting's lemma, we take an endomorphism "f" of "M" and consider the following two chains of submodules: the chain of images formula_0 and the chain of kernels formula_1. Because formula_2 has finite length, both of these chains must eventually stabilize, so there is some formula_3 with formula_4 for all formula_5, and some formula_6 with formula_7 for all formula_8 Let now formula_9, and note that by construction formula_10 and formula_11 We claim that formula_12. Indeed, every formula_13 satisfies formula_14 for some formula_15 but also formula_16, so that formula_17, therefore formula_18 and thus formula_19 Moreover, formula_20: for every formula_21, there exists some formula_15 such that formula_22 (since formula_23), and thus formula_24, so that formula_25 and thus formula_26 Consequently, formula_2 is the direct sum of formula_27 and formula_28. (This statement is also known as the "Fitting decomposition theorem".) Because formula_2 is indecomposable, one of those two summands must be equal to formula_2 and the other must be the zero submodule. Depending on which of the two summands is zero, we find that formula_29 is either bijective or nilpotent.
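For the special case in which the module is a finite-dimensional vector space over a field, the Fitting decomposition of the space into the image and the kernel of a high power of the endomorphism can be checked numerically. The sketch below is illustrative only: the matrix is chosen by hand (an invertible block next to a nilpotent block), and such a module is decomposable, so the lemma's either-or conclusion does not apply to it, but the decomposition used in the proof is visible.

```python
import numpy as np
from scipy.linalg import null_space, orth

# A hand-picked endomorphism of a 4-dimensional space:
# an invertible 2x2 block alongside a nilpotent 2x2 block.
A = np.array([[2., 1., 0., 0.],
              [0., 3., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])

n = A.shape[0]
Ak = np.linalg.matrix_power(A, n)   # by the n-th power both chains have stabilized

image  = orth(Ak)         # orthonormal basis of im(A^n)
kernel = null_space(Ak)   # orthonormal basis of ker(A^n)

# Fitting decomposition: the whole space is the direct sum of im(A^n) and ker(A^n).
print(image.shape[1] + kernel.shape[1] == n)                    # True
print(np.linalg.matrix_rank(np.hstack([image, kernel])) == n)   # True
```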
[ { "math_id": 0, "text": "\\mathrm{im}(f) \\supseteq \\mathrm{im}(f^2) \\supseteq \\mathrm{im}(f^3) \\supseteq \\ldots" }, { "math_id": 1, "text": "\\mathrm{ker}(f) \\subseteq \\mathrm{ker}(f^2) \\subseteq \\mathrm{ker}(f^3) \\subseteq \\ldots" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\mathrm{im}(f^n) = \\mathrm{im}(f^{n'})" }, { "math_id": 5, "text": "n' \\geq n" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "\\mathrm{ker}(f^m) = \\mathrm{ker}(f^{m'})" }, { "math_id": 8, "text": "m' \\geq m." }, { "math_id": 9, "text": "k = \\max\\{n, m\\}" }, { "math_id": 10, "text": "\\mathrm{im}(f^{2k}) = \\mathrm{im}(f^{k})" }, { "math_id": 11, "text": "\\mathrm{ker}(f^{2k}) = \\mathrm{ker}(f^{k})." }, { "math_id": 12, "text": "\\mathrm{ker}(f^k) \\cap \\mathrm{im}(f^k) = 0" }, { "math_id": 13, "text": "x \\in \\mathrm{ker}(f^k) \\cap \\mathrm{im}(f^k)" }, { "math_id": 14, "text": "x=f^k(y)" }, { "math_id": 15, "text": "y \\in M" }, { "math_id": 16, "text": "f^k(x)=0" }, { "math_id": 17, "text": "0=f^k(x)=f^k(f^k(y))=f^{2k}(y)" }, { "math_id": 18, "text": "y \\in \\mathrm{ker}(f^{2k}) = \\mathrm{ker}(f^k)" }, { "math_id": 19, "text": "x=f^k(y)=0." }, { "math_id": 20, "text": "\\mathrm{ker}(f^k) + \\mathrm{im}(f^k) = M" }, { "math_id": 21, "text": "x \\in M" }, { "math_id": 22, "text": "f^k(x)=f^{2k}(y)" }, { "math_id": 23, "text": "f^k(x) \\in \\mathrm{im}(f^k) = \\mathrm{im}(f^{2k})" }, { "math_id": 24, "text": "f^k(x-f^k(y))\n= f^k(x)-f^{2k}(y)=0" }, { "math_id": 25, "text": "x-f^k(y) \\in \\mathrm{ker}(f^k)" }, { "math_id": 26, "text": "x \\in \\mathrm{ker}(f^k)+f^k(y) \\subseteq \\mathrm{ker}(f^k) + \\mathrm{im}(f^k)." }, { "math_id": 27, "text": "\\mathrm{im}(f^k)" }, { "math_id": 28, "text": "\\mathrm{ker}(f^k)" }, { "math_id": 29, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=794492
794534
Convex combination
Linear combination of points where all coefficients are non-negative and sum to 1 In convex geometry and vector algebra, a convex combination is a linear combination of points (which can be vectors, scalars, or more generally points in an affine space) where all coefficients are non-negative and sum to 1. In other words, a convex combination is a weighted average in which the weights are non-negative and normalized so that they sum to 1. Formal definition. More formally, given a finite number of points formula_0 in a real vector space, a convex combination of these points is a point of the form formula_1 where the real numbers formula_2 satisfy formula_3 and formula_4 As a particular example, every convex combination of two points lies on the line segment between the points. A set is convex if it contains all convex combinations of its points. The convex hull of a given set of points is identical to the set of all their convex combinations. There exist subsets of a vector space that are not closed under linear combinations but are closed under convex combinations. For example, the interval formula_5 is convex but generates the real-number line under linear combinations. Another example is the convex set of probability distributions, as linear combinations preserve neither nonnegativity nor affinity (i.e., having total integral one). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
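A short Python sketch of the definition follows; the points and weights are arbitrary examples, not drawn from any cited source.

```python
import numpy as np

def convex_combination(points, weights):
    """Weighted sum of points whose weights are non-negative and sum to 1."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0) or not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must be non-negative and sum to 1")
    return np.asarray(points, dtype=float).T @ w

# Every convex combination of two points lies on the segment between them:
print(convex_combination([[0, 0], [4, 2]], [0.75, 0.25]))   # [1.  0.5]
```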
[ { "math_id": 0, "text": "x_1, x_2, \\dots, x_n" }, { "math_id": 1, "text": "\\alpha_1x_1+\\alpha_2x_2+\\cdots+\\alpha_nx_n" }, { "math_id": 2, "text": "\\alpha_i" }, { "math_id": 3, "text": "\\alpha_i\\ge 0 " }, { "math_id": 4, "text": "\\alpha_1+\\alpha_2+\\cdots+\\alpha_n=1." }, { "math_id": 5, "text": "[0,1]" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=794534
794679
Suffix tree
Tree containing all suffixes of a given text In computer science, a suffix tree (also called PAT tree or, in an earlier form, position tree) is a compressed trie containing all the suffixes of the given text as their keys and positions in the text as their values. Suffix trees allow particularly fast implementations of many important string operations. The construction of such a tree for the string formula_0 takes time and space linear in the length of formula_0. Once constructed, several operations can be performed quickly, such as locating a substring in formula_0, locating a substring if a certain number of mistakes are allowed, and locating matches for a regular expression pattern. Suffix trees also provided one of the first linear-time solutions for the longest common substring problem. These speedups come at a cost: storing a string's suffix tree typically requires significantly more space than storing the string itself. History. The concept was first introduced by . Rather than the suffix formula_1, Weiner stored in his trie the "prefix identifier" for each position, that is, the shortest string starting at formula_2 and occurring only once in formula_0. His "Algorithm D" takes an uncompressed trie for formula_3 and extends it into a trie for formula_4. This way, starting from the trivial trie for formula_5, a trie for formula_6 can be built by formula_7 successive calls to Algorithm D; however, the overall run time is formula_8. Weiner's "Algorithm B" maintains several auxiliary data structures, to achieve an over all run time linear in the size of the constructed trie. The latter can still be formula_8 nodes, e.g. for formula_9 Weiner's "Algorithm C" finally uses compressed tries to achieve linear overall storage size and run time. Donald Knuth subsequently characterized the latter as "Algorithm of the Year 1973" according to his student Vaughan Pratt. The text book reproduced Weiner's results in a simplified and more elegant form, introducing the term "position tree". was the first to build a (compressed) trie of all suffixes of formula_0. Although the suffix starting at formula_2 is usually longer than the prefix identifier, their path representations in a compressed trie do not differ in size. On the other hand, McCreight could dispense with most of Weiner's auxiliary data structures; only suffix links remained. further simplified the construction. He provided the first online-construction of suffix trees, now known as Ukkonen's algorithm, with running time that matched the then fastest algorithms. These algorithms are all linear-time for a constant-size alphabet, and have worst-case running time of formula_10 in general. gave the first suffix tree construction algorithm that is optimal for all alphabets. In particular, this is the first linear-time algorithm for strings drawn from an alphabet of integers in a polynomial range. Farach's algorithm has become the basis for new algorithms for constructing both suffix trees and suffix arrays, for example, in external memory, compressed, succinct, etc. Definition. The suffix tree for the string formula_0 of length formula_11 is defined as a tree such that: If a suffix of formula_0 is also the prefix of another suffix, such a tree does not exist for the string. For example, in the string "abcbc", the suffix "bc" is also a prefix of the suffix "bcbc". In such a case, the path spelling out "bc" will not end in a leaf, violating the fifth rule. 
To fix this problem, formula_0 is padded with a terminal symbol not seen in the string (usually denoted codice_0). This ensures that no suffix is a prefix of another, and that there will be formula_11 leaf nodes, one for each of the formula_11 suffixes of formula_0. Since all internal non-root nodes are branching, there can be at most formula_13 such nodes, and formula_14 nodes in total (formula_11 leaves, formula_13 internal non-root nodes, 1 root). Suffix links are a key feature for older linear-time construction algorithms, although most newer algorithms, which are based on Farach's algorithm, dispense with suffix links. In a complete suffix tree, all internal non-root nodes have a suffix link to another internal node. If the path from the root to a node spells the string formula_15, where formula_16 is a single character and formula_17 is a string (possibly empty), it has a suffix link to the internal node representing formula_17. See for example the suffix link from the node for codice_1 to the node for codice_2 in the figure above. Suffix links are also used in some algorithms running on the tree. A generalized suffix tree is a suffix tree made for a set of strings instead of a single string. It represents all suffixes from this set of strings. Each string must be terminated by a different termination symbol. Functionality. A suffix tree for a string formula_0 of length formula_11 can be built in formula_18 time, if the letters come from an alphabet of integers in a polynomial range (in particular, this is true for constant-sized alphabets). For larger alphabets, the running time is dominated by first sorting the letters to bring them into a range of size formula_19; in general, this takes formula_10 time. The costs below are given under the assumption that the alphabet is constant. Assume that a suffix tree has been built for the string formula_0 of length formula_11, or that a generalised suffix tree has been built for the set of strings formula_20 of total length formula_21. You can: The suffix tree can be prepared for constant time lowest common ancestor retrieval between nodes in formula_18 time. One can then also: Applications. Suffix trees can be used to solve a large number of string problems that occur in text-editing, free-text search, computational biology and other application areas. Primary applications include: Suffix trees are often used in bioinformatics applications, searching for patterns in DNA or protein sequences (which can be viewed as long strings of characters). The ability to search efficiently with mismatches might be considered their greatest strength. Suffix trees are also used in data compression; they can be used to find repeated data, and can be used for the sorting stage of the Burrows–Wheeler transform. Variants of the LZW compression schemes use suffix trees (LZSS). A suffix tree is also used in suffix tree clustering, a data clustering algorithm used in some search engines. Implementation. If each node and edge can be represented in formula_39 space, the entire tree can be represented in formula_18 space. The total length of all the strings on all of the edges in the tree is formula_8, but each edge can be stored as the position and length of a substring of S, giving a total space usage of formula_18 computer words. The worst-case space usage of a suffix tree is seen with a fibonacci word, giving the full formula_48 nodes. An important choice when making a suffix tree implementation is the parent-child relationships between nodes. 
The most common is using linked lists called sibling lists. Each node has a pointer to its first child, and to the next node in the child list it is a part of. Other implementations with efficient running time properties use hash maps, sorted or unsorted arrays (with array doubling), or balanced search trees. We are interested in: Let σ be the size of the alphabet. Then you have the following costs: The insertion cost is amortised, and that the costs for hashing are given for perfect hashing. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about 10 to 20 times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of 8 (for array including LCP values built within 32-bit address space and 8-bit characters.) This factor depends on the properties and may reach 2 with usage of 4-byte wide characters (needed to contain any symbol in some UNIX-like systems, see wchar_t) on 32-bit systems. Researchers have continued to find smaller indexing structures. Parallel construction. Various parallel algorithms to speed up suffix tree construction have been proposed. Recently, a practical parallel algorithm for suffix tree construction with formula_19 work (sequential time) and formula_49 span has been developed. The algorithm achieves good parallel scalability on shared-memory multicore machines and can index the human genome – approximately 3GB – in under 3 minutes using a 40-core machine. External construction. Though linear, the memory usage of a suffix tree is significantly higher than the actual size of the sequence collection. For a large text, construction may require external memory approaches. There are theoretical results for constructing suffix trees in external memory. The algorithm by is theoretically optimal, with an I/O complexity equal to that of sorting. However the overall intricacy of this algorithm has prevented, so far, its practical implementation. On the other hand, there have been practical works for constructing disk-based suffix trees which scale to (few) GB/hours. The state of the art methods are TDD, TRELLIS, DiGeST, and B2ST. TDD and TRELLIS scale up to the entire human genome resulting in a disk-based suffix tree of a size in the tens of gigabytes. However, these methods cannot handle efficiently collections of sequences exceeding 3 GB. DiGeST performs significantly better and is able to handle collections of sequences in the order of 6 GB in about 6 hours. All these methods can efficiently build suffix trees for the case when the tree does not fit in main memory, but the input does. The most recent method, B2ST, scales to handle inputs that do not fit in main memory. ERA is a recent parallel suffix tree construction method that is significantly faster. ERA can index the entire human genome in 19 minutes on an 8-core desktop computer with 16 GB RAM. On a simple Linux cluster with 16 nodes (4 GB RAM per node), ERA can index the entire human genome in less than 9 minutes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
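The linear-time constructions discussed above (Weiner, McCreight, Ukkonen, Farach) are involved; the following Python sketch instead builds the uncompressed suffix trie naively in quadratic time and space, padding with a terminal symbol as described in the definition, simply to illustrate how the suffixes support substring queries. It is illustrative only and is not an implementation of any algorithm cited in this article.

```python
def build_suffix_trie(s, end="$"):
    """Naive O(n^2) suffix trie of s + terminal symbol: one root-to-leaf path
    per suffix.  (A suffix tree would compress each unary path into one edge.)"""
    s += end
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
        node["#"] = i          # leaf: remember where this suffix starts
    return root

def occurrences(trie, pattern):
    """All starting positions of pattern: walk its characters, then collect leaves."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return []
        node = node[ch]
    positions, stack = [], [node]
    while stack:
        current = stack.pop()
        for key, value in current.items():
            if key == "#":
                positions.append(value)
            else:
                stack.append(value)
    return sorted(positions)

trie = build_suffix_trie("abcbc")
print(occurrences(trie, "bc"))   # [1, 3]
```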
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "S[i..n]" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "S[k+1..n]" }, { "math_id": 4, "text": "S[k..n]" }, { "math_id": 5, "text": "S[n..n]" }, { "math_id": 6, "text": "S[1..n]" }, { "math_id": 7, "text": "n - 1" }, { "math_id": 8, "text": "O(n^2)" }, { "math_id": 9, "text": "S = a^n b^n a^n b^n \\$ ." }, { "math_id": 10, "text": "O(n\\log n)" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "1" }, { "math_id": 13, "text": "n-1" }, { "math_id": 14, "text": "n+(n-1)+1=2n" }, { "math_id": 15, "text": "\\chi\\alpha" }, { "math_id": 16, "text": "\\chi" }, { "math_id": 17, "text": "\\alpha" }, { "math_id": 18, "text": "\\Theta(n)" }, { "math_id": 19, "text": "O(n)" }, { "math_id": 20, "text": "D=\\{S_1,S_2,\\dots,S_K\\}" }, { "math_id": 21, "text": "n=n_1+n_2+\\cdots+n_K" }, { "math_id": 22, "text": "P" }, { "math_id": 23, "text": "m" }, { "math_id": 24, "text": "O(m)" }, { "math_id": 25, "text": "P_1,\\dots,P_q" }, { "math_id": 26, "text": "z" }, { "math_id": 27, "text": "O(m + z)" }, { "math_id": 28, "text": "P[i\\dots m]" }, { "math_id": 29, "text": "D" }, { "math_id": 30, "text": "\\Theta(m)" }, { "math_id": 31, "text": "S_i" }, { "math_id": 32, "text": "S_j" }, { "math_id": 33, "text": "\\Theta(n_i + n_j)" }, { "math_id": 34, "text": "\\Theta(n + z)" }, { "math_id": 35, "text": "\\Sigma" }, { "math_id": 36, "text": "O(n + z)" }, { "math_id": 37, "text": "S_i[p..n_i]" }, { "math_id": 38, "text": "S_j[q..n_j]" }, { "math_id": 39, "text": "\\Theta(1)" }, { "math_id": 40, "text": "O(k n + z)" }, { "math_id": 41, "text": "\\Theta(g n)" }, { "math_id": 42, "text": "g" }, { "math_id": 43, "text": "\\Theta(k n)" }, { "math_id": 44, "text": "k" }, { "math_id": 45, "text": "O(n \\log n + z)" }, { "math_id": 46, "text": "O(k n \\log (n/k) + z)" }, { "math_id": 47, "text": "k=2,\\dots,K" }, { "math_id": 48, "text": "2n" }, { "math_id": 49, "text": "O(\\log^2 n)" } ]
https://en.wikipedia.org/wiki?curid=794679
7947411
Schur orthogonality relations
In mathematics, the Schur orthogonality relations, which were proven by Issai Schur through Schur's lemma, express a central fact about representations of finite groups. They admit a generalization to the case of compact groups in general, and in particular compact Lie groups, such as the rotation group SO(3). Finite groups. Intrinsic statement. The space of complex-valued class functions of a finite group "G" has a natural inner product: formula_0 where formula_1 denotes the complex conjugate of the value of formula_2 on "g". With respect to this inner product, the irreducible characters form an orthonormal basis for the space of class functions, and this yields the orthogonality relation for the rows of the character table: formula_3 For formula_4, applying the same inner product to the columns of the character table yields: formula_5 where the sum is over all of the irreducible characters formula_6 of formula_7, and formula_8 denotes the order of the centralizer of formula_9. Note that since g and h are conjugate iff they are in the same column of the character table, this implies that the columns of the character table are orthogonal. The orthogonality relations can aid many computations including: Coordinates statement. Let formula_10 be a matrix element of an irreducible matrix representation formula_11 of a finite group formula_12 of order |"G"|. Since it can be proven that any matrix representation of any finite group is equivalent to a unitary representation, we assume formula_11 is unitary: formula_13 where formula_14 is the (finite) dimension of the irreducible representation formula_11. The orthogonality relations, only valid for matrix elements of "irreducible" representations, are: formula_15 Here formula_16 is the complex conjugate of formula_17 and the sum is over all elements of "G". The Kronecker delta formula_18 is 1 if the matrices are in the same irreducible representation formula_19. If formula_11 and formula_20 are non-equivalent it is zero. The other two Kronecker delta's state that the row and column indices must be equal (formula_21 and formula_22) in order to obtain a non-vanishing result. This theorem is also known as the Great (or Grand) Orthogonality Theorem. Every group has an identity representation (all group elements mapped to 1). This is an irreducible representation. The great orthogonality relations immediately imply that formula_23 for formula_24 and any irreducible representation formula_25 not equal to the identity representation. Example of the permutation group on 3 objects. The 3! permutations of three objects form a group of order 6, commonly denoted S3 (the symmetric group of degree three). This group is isomorphic to the point group formula_26, consisting of a threefold rotation axis and three vertical mirror planes. The groups have a 2-dimensional irreducible representation ("l" = 2). In the case of S3 one usually labels this representation by the Young tableau formula_27 and in the case of formula_26 one usually writes formula_28. In both cases the representation consists of the following six real matrices, each representing a single group element: formula_29 The normalization of the (1,1) element: formula_30 In the same manner one can show the normalization of the other matrix elements: (2,2), (1,2), and (2,1). The orthogonality of the (1,1) and (2,2) elements: formula_31 Similar relations hold for the orthogonality of the elements (1,1) and (1,2), etc. 
One verifies easily in the example that all sums of corresponding matrix elements vanish because of the orthogonality of the given irreducible representation to the identity representation. Direct implications. The trace of a matrix is a sum of diagonal matrix elements, formula_32 The collection of traces is the "character" formula_33 of a representation. Often one writes for the trace of a matrix in an irreducible representation with character formula_34 formula_35 In this notation we can write several character formulas: formula_36 which allows us to check whether or not a representation is irreducible. (The formula means that the lines in any character table have to be orthogonal vectors.) And formula_37 which helps us to determine how often the irreducible representation formula_11 is contained within the reducible representation formula_38 with character formula_39. For instance, if formula_40 and the order of the group is formula_41 then the number of times that formula_42 is contained within the given "reducible" representation formula_38 is formula_43 See Character theory for more about group characters. Compact groups. The generalization of the orthogonality relations from finite groups to compact groups (which include compact Lie groups such as SO(3)) is basically simple: Replace the summation over the group by an integration over the group. Every compact group formula_7 has unique bi-invariant Haar measure, so that the volume of the group is 1. Denote this measure by formula_44. Let formula_45 be a complete set of irreducible representations of formula_7, and let formula_46 be a matrix coefficient of the representation formula_47. The orthogonality relations can then be stated in two parts: 1) If formula_48 then formula_49 2) If formula_50 is an orthonormal basis of the representation space formula_47 then formula_51 where formula_52 is the dimension of formula_47. These orthogonality relations and the fact that all of the representations have finite dimensions are consequences of the Peter–Weyl theorem. An example: SO(3). An example of an r = 3 parameter group is the matrix group SO(3) consisting of all 3 × 3 orthogonal matrices with unit determinant. A possible parametrization of this group is in terms of Euler angles: formula_53 (see e.g., this article for the explicit form of an element of SO(3) in terms of Euler angles). The bounds are formula_54 and formula_55. Not only the recipe for the computation of the volume element formula_56 depends on the chosen parameters, but also the final result, i.e. the analytic form of the weight function (measure) formula_57. For instance, the Euler angle parametrization of SO(3) gives the weight formula_58 while the n, ψ parametrization gives the weight formula_59 with formula_60 It can be shown that the irreducible matrix representations of compact Lie groups are finite-dimensional and can be chosen to be unitary: formula_61 With the shorthand notation formula_62 the orthogonality relations take the form formula_63 with the volume of the group: formula_64 As an example we note that the irreducible representations of SO(3) are Wigner D-matrices formula_65, which are of dimension formula_66. Since formula_67 they satisfy formula_68 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Any physically or chemically oriented book on group theory mentions the orthogonality relations. The following more advanced books give the proofs: The following books give more mathematically inclined treatments:
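The great orthogonality theorem can be verified numerically for the two-dimensional irreducible representation of S3 given above. The NumPy sketch below simply re-enters the six matrices from the example and checks every combination of indices; since the matrices are real, complex conjugation can be omitted.

```python
import numpy as np

s = np.sqrt(3) / 2
reps = [np.array(m) for m in (
    [[1, 0], [0, 1]],
    [[1, 0], [0, -1]],
    [[-0.5,  s], [ s, 0.5]],
    [[-0.5, -s], [-s, 0.5]],
    [[-0.5,  s], [-s, -0.5]],
    [[-0.5, -s], [ s, -0.5]],
)]
order, dim = len(reps), 2   # |G| = 6, dimension l = 2

# sum_R Gamma(R)_{nm} Gamma(R)_{n'm'} = delta_{nn'} delta_{mm'} |G| / l
for n in range(dim):
    for m in range(dim):
        for n2 in range(dim):
            for m2 in range(dim):
                total = sum(R[n, m] * R[n2, m2] for R in reps)
                expected = order / dim if (n, m) == (n2, m2) else 0.0
                assert np.isclose(total, expected)
print("orthogonality relations verified")
```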
[ { "math_id": 0, "text": "\\left\\langle \\alpha, \\beta \\right\\rangle := \\frac{1}{\\left| G \\right|}\\sum_{g \\in G} \\alpha(g) \\overline{\\beta(g)}" }, { "math_id": 1, "text": "\\overline{\\beta(g)}" }, { "math_id": 2, "text": "\\beta" }, { "math_id": 3, "text": "\\left\\langle \\chi_i, \\chi_j \\right\\rangle = \\begin{cases} 0& \\mbox{ if } i \\ne j, \\\\ 1& \\mbox{ if } i=j. \\end{cases}" }, { "math_id": 4, "text": "g, h \\in G" }, { "math_id": 5, "text": "\\sum_{\\chi_i} \\chi_i(g) \\overline{\\chi_i(h)} = \\begin{cases} \\left| C_G(g) \\right| & \\mbox{ if } g, h \\mbox{ are conjugate } \\\\ 0& \\mbox{ otherwise.}\\end{cases}" }, { "math_id": 6, "text": "\\chi_i" }, { "math_id": 7, "text": "G" }, { "math_id": 8, "text": "\\left | C_G(g) \\right |" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": "\\Gamma^{(\\lambda)} (R)_{mn}" }, { "math_id": 11, "text": "\\Gamma^{(\\lambda)}" }, { "math_id": 12, "text": "G = \\{R\\}" }, { "math_id": 13, "text": "\n \\sum_{n=1}^{l_\\lambda} \\; \\Gamma^{(\\lambda)} (R)_{nm}^*\\;\\Gamma^{(\\lambda)} (R)_{nk} = \\delta_{mk} \\quad \\hbox{for all}\\quad R \\in G,\n" }, { "math_id": 14, "text": "l_\\lambda" }, { "math_id": 15, "text": "\n \\sum_{R\\in G}^{|G|} \\; \\Gamma^{(\\lambda)} (R)_{nm}^*\\;\\Gamma^{(\\mu)} (R)_{n'm'} = \n\\delta_{\\lambda\\mu} \\delta_{nn'}\\delta_{mm'} \\frac{|G|}{l_\\lambda}.\n" }, { "math_id": 16, "text": "\\Gamma^{(\\lambda)} (R)_{nm}^*" }, { "math_id": 17, "text": "\\Gamma^{(\\lambda)} (R)_{nm}\\," }, { "math_id": 18, "text": "\\delta_{\\lambda\\mu}" }, { "math_id": 19, "text": "\\Gamma^{(\\lambda)} = \\Gamma^{(\\mu)}" }, { "math_id": 20, "text": "\\Gamma^{(\\mu)}" }, { "math_id": 21, "text": "n=n'" }, { "math_id": 22, "text": "m=m'" }, { "math_id": 23, "text": "\n \\sum_{R\\in G}^{|G|} \\; \\Gamma^{(\\mu)} (R)_{nm} = 0 \n" }, { "math_id": 24, "text": "n,m=1,\\ldots,l_\\mu" }, { "math_id": 25, "text": "\\Gamma^{(\\mu)}\\," }, { "math_id": 26, "text": "C_{3v}" }, { "math_id": 27, "text": " \\lambda = [2,1]" }, { "math_id": 28, "text": " \\lambda = E" }, { "math_id": 29, "text": "\\begin{pmatrix}\n1 & 0 \\\\\n0 & 1\n\\end{pmatrix}\\quad\\begin{pmatrix}\n1 & 0 \\\\\n0 & -1\n\\end{pmatrix}\\quad\\begin{pmatrix}\n-\\frac{1}{2} & \\frac{\\sqrt{3}}{2} \\\\\n\\frac{\\sqrt{3}}{2}& \\frac{1}{2}\n\\end{pmatrix}\\quad\\begin{pmatrix}\n-\\frac{1}{2} & -\\frac{\\sqrt{3}}{2} \\\\\n-\\frac{\\sqrt{3}}{2}& \\frac{1}{2}\n\\end{pmatrix}\\quad\\begin{pmatrix}\n-\\frac{1}{2} & \\frac{\\sqrt{3}}{2} \\\\\n-\\frac{\\sqrt{3}}{2}& -\\frac{1}{2}\n\\end{pmatrix}\\quad\\begin{pmatrix}\n-\\frac{1}{2} & -\\frac{\\sqrt{3}}{2} \\\\\n\\frac{\\sqrt{3}}{2}& -\\frac{1}{2}\n\\end{pmatrix}" }, { "math_id": 30, "text": "\\sum_{R \\in G}^{6} \\; \\Gamma(R)_{11}^*\\;\\Gamma(R)_{11} = 1^2 + 1^2 + \\left(-\\tfrac{1}{2}\\right)^2 + \\left(-\\tfrac{1}{2}\\right)^2 + \\left(-\\tfrac{1}{2}\\right)^2 + \\left(-\\tfrac{1}{2}\\right)^2\n= 3." }, { "math_id": 31, "text": " \\sum_{R\\in G}^{6} \\; \\Gamma(R)_{11}^*\\;\\Gamma(R)_{22} = 1^2+(1)(-1)+\\left(-\\tfrac{1}{2}\\right)\\left(\\tfrac{1}{2}\\right)\n+\\left(-\\tfrac{1}{2}\\right)\\left(\\tfrac{1}{2}\\right)\n +\\left(-\\tfrac{1}{2}\\right)^2 +\\left(-\\tfrac{1}{2}\\right)^2\n= 0 .\n" }, { "math_id": 32, "text": "\\operatorname{Tr}\\big(\\Gamma(R)\\big) = \\sum_{m=1}^{l} \\Gamma(R)_{mm}." 
}, { "math_id": 33, "text": "\\chi \\equiv \\{\\operatorname{Tr}\\big(\\Gamma(R)\\big)\\;|\\; R \\in G\\}" }, { "math_id": 34, "text": "\\chi^{(\\lambda)}" }, { "math_id": 35, "text": "\\chi^{(\\lambda)} (R)\\equiv \\operatorname{Tr}\\left(\\Gamma^{(\\lambda)}(R)\\right)." }, { "math_id": 36, "text": "\\sum_{R\\in G}^{|G|} \\chi^{(\\lambda)}(R)^* \\, \\chi^{(\\mu)}(R)= \\delta_{\\lambda\\mu} |G|," }, { "math_id": 37, "text": "\\sum_{R\\in G}^{|G|} \\chi^{(\\lambda)}(R)^* \\, \\chi(R) = n^{(\\lambda)} |G|," }, { "math_id": 38, "text": "\\Gamma \\," }, { "math_id": 39, "text": "\\chi(R)" }, { "math_id": 40, "text": "n^{(\\lambda)}\\, |G| = 96" }, { "math_id": 41, "text": "|G| = 24\\," }, { "math_id": 42, "text": "\\Gamma^{(\\lambda)}\\," }, { "math_id": 43, "text": "n^{(\\lambda)} = 4\\, ." }, { "math_id": 44, "text": "dg" }, { "math_id": 45, "text": "(\\pi^\\alpha)" }, { "math_id": 46, "text": "\\phi^\\alpha_{v,w}(g)=\\langle v,\\pi^\\alpha(g)w\\rangle " }, { "math_id": 47, "text": "\\pi^\\alpha" }, { "math_id": 48, "text": "\\pi^\\alpha \\ncong \\pi^\\beta " }, { "math_id": 49, "text": "\n\\int_G \\phi^\\alpha_{v,w}(g)\\phi^\\beta_{v',w'}(g)dg=0\n" }, { "math_id": 50, "text": "\\{e_i\\}" }, { "math_id": 51, "text": "\n\\int_G \\phi^\\alpha_{e_i,e_j}(g)\\overline{\\phi^\\alpha_{e_m,e_n}(g)}dg=\\delta_{i,m}\\delta_{j,n}\\frac{1}{d^\\alpha}\n" }, { "math_id": 52, "text": "d^\\alpha" }, { "math_id": 53, "text": "\\mathbf{x} = (\\alpha, \\beta, \\gamma)" }, { "math_id": 54, "text": "0 \\le\\alpha, \\gamma \\le 2\\pi" }, { "math_id": 55, "text": "0 \\le \\beta \\le\\pi" }, { "math_id": 56, "text": " \\omega(\\mathbf{x})\\, dx_1 dx_2\\cdots dx_r " }, { "math_id": 57, "text": "\\omega(\\mathbf{x})" }, { "math_id": 58, "text": "\\omega(\\alpha,\\beta,\\gamma) = \\sin\\! \\beta \\,," }, { "math_id": 59, "text": "\\omega(\\psi,\\theta,\\phi) = 2(1-\\cos\\psi)\\sin\\!\\theta\\, " }, { "math_id": 60, "text": "0\\le \\psi \\le \\pi, \\;\\; 0 \\le\\phi\\le 2\\pi,\\;\\; 0 \\le \\theta \\le \\pi." }, { "math_id": 61, "text": "\n \\Gamma^{(\\lambda)}(R^{-1}) =\\Gamma^{(\\lambda)}(R)^{-1}=\\Gamma^{(\\lambda)}(R)^\\dagger\\quad \\hbox{with}\\quad \\Gamma^{(\\lambda)}(R)^\\dagger_{mn} \\equiv \\Gamma^{(\\lambda)}(R)^*_{nm}.\n" }, { "math_id": 62, "text": "\n \\Gamma^{(\\lambda)}(\\mathbf{x})= \\Gamma^{(\\lambda)}\\Big(R(\\mathbf{x})\\Big)\n" }, { "math_id": 63, "text": "\n \\int_{x_1^0}^{x_1^1} \\cdots \\int_{x_r^0}^{x_r^1}\\; \\Gamma^{(\\lambda)}(\\mathbf{x})^*_{nm} \\Gamma^{(\\mu)}(\\mathbf{x})_{n'm'}\\; \\omega(\\mathbf{x}) dx_1\\cdots dx_r \\; = \\delta_{\\lambda \\mu} \\delta_{n n'} \\delta_{m m'} \\frac{|G|}{l_\\lambda},\n" }, { "math_id": 64, "text": "\n |G| = \\int_{x_1^0}^{x_1^1} \\cdots \\int_{x_r^0}^{x_r^1} \\omega(\\mathbf{x}) dx_1\\cdots dx_r .\n" }, { "math_id": 65, "text": "D^\\ell(\\alpha \\beta \\gamma)" }, { "math_id": 66, "text": "2\\ell+1 " }, { "math_id": 67, "text": "\n |\\mathrm{SO}(3)| = \\int_{0}^{2\\pi} d\\alpha \\int_{0}^{\\pi} \\sin\\!\\beta\\, d\\beta \\int_{0}^{2\\pi} d\\gamma = 8\\pi^2,\n" }, { "math_id": 68, "text": "\n \\int_{0}^{2\\pi} \\int_{0}^{\\pi} \\int_{0}^{2\\pi} D^{\\ell}(\\alpha \\beta\\gamma)^*_{nm} \\; D^{\\ell'}(\\alpha \\beta\\gamma)_{n'm'}\\; \\sin\\!\\beta\\, d\\alpha\\, d\\beta\\, d\\gamma = \\delta_{\\ell\\ell'}\\delta_{nn'}\\delta_{mm'} \\frac{8\\pi^2}{2\\ell+1}. \n" } ]
https://en.wikipedia.org/wiki?curid=7947411
794841
Canonical normal form
Standard forms of Boolean functions In Boolean algebra, any Boolean function can be expressed in the canonical disjunctive normal form (CDNF), minterm canonical form, or Sum of Products (SoP or SOP) as a disjunction (OR) of minterms. The De Morgan dual is the canonical conjunctive normal form (CCNF), maxterm canonical form, or Product of Sums (PoS or POS) which is a conjunction (AND) of maxterms. These forms can be useful for the simplification of Boolean functions, which is of great importance in the optimization of Boolean formulas in general and digital circuits in particular. Other canonical forms include the complete sum of prime implicants or Blake canonical form (and its dual), and the algebraic normal form (also called Zhegalkin or Reed–Muller). Minterms. For a boolean function of formula_0 variables formula_1, a minterm is a product term in which each of the formula_0 variables appears "exactly once" (either in its complemented or uncomplemented form). Thus, a "minterm" is a logical expression of "n" variables that employs only the complement operator and the conjunction operator (logical AND). A minterm gives a true value for just one combination of the input variables, the minimum nontrivial amount. For example, "a" "b"' "c", is true only when "a" and "c" both are true and "b" is false—the input arrangement where "a" = 1, "b" = 0, "c" = 1 results in 1. Indexing minterms. There are 2"n" minterms of "n" variables, since a variable in the minterm expression can be in either its direct or its complemented form—two choices per variable. Minterms are often numbered by a binary encoding of the complementation pattern of the variables, where the variables are written in a standard order, usually alphabetical. This convention assigns the value 1 to the direct form (formula_2) and 0 to the complemented form (formula_3); the minterm is then formula_4. For example, minterm formula_5 is numbered 1102 = 610 and denoted formula_6. Minterm canonical form. Given the truth table of a logical function, it is possible to write the function as a "sum of products" or "sum of minterms". This is a special form of disjunctive normal form. For example, if given the truth table for the arithmetic sum bit "u" of one bit position's logic of an adder circuit, as a function of "x" and "y" from the addends and the carry in, "ci": Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write "u" as a sum of minterms formula_7 and formula_8. If we wish to verify this: formula_9 evaluated for all 8 combinations of the three variables will match the table. Maxterms. For a boolean function of n variables formula_1, a maxterm is a sum term in which each of the n variables appears "exactly once" (either in its complemented or uncomplemented form). Thus, a "maxterm" is a logical expression of n variables that employs only the complement operator and the disjunction operator (logical OR). Maxterms are a dual of the minterm idea, following the complementary symmetry of De Morgan's laws. Instead of using ANDs and complements, we use ORs and complements and proceed similarly. It is apparent that a maxterm gives a "false" value for just one combination of the input variables, i.e. it is true at the maximal number of possibilities. For example, the maxterm "a"′ + "b" + "c"′ is false only when "a" and "c" both are true and "b" is false—the input arrangement where a = 1, b = 0, c = 1 results in 0. Indexing maxterms. 
There are again 2"n" maxterms of n variables, since a variable in the maxterm expression can also be in either its direct or its complemented form—two choices per variable. The numbering is chosen so that the complement of a minterm is the respective maxterm. That is, each maxterm is assigned an index based on the opposite conventional binary encoding used for minterms. The maxterm convention assigns the value 0 to the direct form formula_10 and 1 to the complemented form formula_11. For example, we assign the index 6 to the maxterm formula_12 (110) and denote that maxterm as "M"6. The complement formula_13 is the minterm formula_14, using de Morgan's law. Maxterm canonical form. If one is given a truth table of a logical function, it is possible to write the function as a "product of sums" or "product of maxterms". This is a special form of conjunctive normal form. For example, if given the truth table for the carry-out bit "co" of one bit position's logic of an adder circuit, as a function of "x" and "y" from the addends and the carry in, "ci": Observing that the rows that have an output of 0 are the 1st, 2nd, 3rd, and 5th, we can write "co" as a product of maxterms formula_15 and formula_16. If we wish to verify this: formula_17 evaluated for all 8 combinations of the three variables will match the table. Minimal PoS and SoP forms. It is often the case that the canonical minterm form is equivalent to a smaller SoP form. This smaller form would still consist of a sum of product terms, but have fewer product terms and/or product terms that contain fewer variables. For example, the following 3-variable function: has the canonical minterm representation formula_18, but it has an equivalent SoP form formula_19. In this trivial example, it is obvious that formula_20, and the smaller form has both fewer product terms and fewer variables within each term. The minimal SoP representations of a function according to this notion of "smallest" are referred to as "minimal SoP forms". In general, there may be multiple minimal SoP forms, none clearly smaller or larger than another. In a similar manner, a canonical maxterm form can be reduced to various minimal PoS forms. While this example was simplified by applying normal algebraic methods [formula_21], in less obvious cases a convenient method for finding minimal PoS/SoP forms of a function with up to four variables is using a Karnaugh map. The Quine–McCluskey algorithm can solve slightly larger problems. The field of logic optimization developed from the problem of finding optimal implementations of Boolean functions, such as minimal PoS and SoP forms. Application example. The sample truth tables for minterms and maxterms above are sufficient to establish the canonical form for a single bit position in the addition of binary numbers, but are not sufficient to design the digital logic unless your inventory of gates includes AND and OR. Where performance is an issue (as in the Apollo Guidance Computer), the available parts are more likely to be NAND and NOR because of the complementing action inherent in transistor logic. The values are defined as voltage states, one near ground and one near the DC supply voltage Vcc, e.g. +5 VDC. If the higher voltage is defined as the 1 "true" value, a NOR gate is the simplest possible useful logical element. Specifically, a 3-input NOR gate may consist of 3 bipolar junction transistors with their emitters all grounded, their collectors tied together and linked to Vcc through a load impedance. 
Each base is connected to an input signal, and the common collector point presents the output signal. Any input that is a 1 (high voltage) to its base shorts its transistor's emitter to its collector, causing current to flow through the load impedance, which brings the collector voltage (the output) very near to ground. That result is independent of the other inputs. Only when all 3 input signals are 0 (low voltage) do the emitter-collector impedances of all 3 transistors remain very high. Then very little current flows, and the voltage-divider effect with the load impedance imposes on the collector point a high voltage very near to Vcc. The complementing property of these gate circuits may seem like a drawback when trying to implement a function in canonical form, but there is a compensating bonus: such a gate with only one input implements the complementing function, which is required frequently in digital logic. This example assumes the Apollo parts inventory: 3-input NOR gates only, but the discussion is simplified by supposing that 4-input NOR gates are also available (in Apollo, those were compounded out of pairs of 3-input NORs). Canonical and non-canonical consequences of NOR gates. A set of 8 NOR gates, if their inputs are all combinations of the direct and complement forms of the 3 input variables "ci, x," and "y", always produce minterms, never maxterms—that is, of the 8 gates required to process all combinations of 3 input variables, only one has the output value 1. That's because a NOR gate, despite its name, could better be viewed (using De Morgan's law) as the AND of the complements of its input signals. The reason this is not a problem is the duality of minterms and maxterms, i.e. each maxterm is the complement of the like-indexed minterm, and vice versa. In the minterm example above, we wrote formula_22 but to perform this with a 4-input NOR gate we need to restate it as a product of sums (PoS), where the sums are the opposite maxterms. That is, formula_23 In the maxterm example above, we wrote formula_24 but to perform this with a 4-input NOR gate we need to notice the equality to the NOR of the same minterms. That is, formula_25 Design trade-offs considered in addition to canonical forms. One might suppose that the work of designing an adder stage is now complete, but we haven't addressed the fact that all 3 of the input variables have to appear in both their direct and complement forms. There's no difficulty about the addends "x" and "y" in this respect, because they are static throughout the addition and thus are normally held in latch circuits that routinely have both direct and complement outputs. (The simplest latch circuit made of NOR gates is a pair of gates cross-coupled to make a flip-flop: the output of each is wired as one of the inputs to the other.) There is also no need to create the complement form of the sum "u". However, the carry out of one bit position must be passed as the carry into the next bit position in both direct and complement forms. The most straightforward way to do this is to pass "co" through a 1-input NOR gate and label the output "co"′, but that would add a gate delay in the worst possible place, slowing down the rippling of carries from right to left. An additional 4-input NOR gate building the canonical form of "co"′ (out of the opposite minterms as "co") solves this problem. formula_26 The trade-off to maintain full speed in this way includes an unexpected cost (in addition to having to use a bigger gate). 
If we'd just used that 1-input gate to complement "co", there would have been no use for the minterm formula_8, and the gate that generated it could have been eliminated. Nevertheless, it is still a good trade. Now we could have implemented those functions exactly according to their SoP and PoS canonical forms, by turning NOR gates into the functions specified. A NOR gate is made into an OR gate by passing its output through a 1-input NOR gate; and it is made into an AND gate by passing each of its inputs through a 1-input NOR gate. However, this approach not only increases the number of gates used, but also doubles the number of gate delays processing the signals, cutting the processing speed in half. Consequently, whenever performance is vital, going beyond canonical forms and doing the Boolean algebra to make the unenhanced NOR gates do the job is well worthwhile. Top-down vs. bottom-up design. We have now seen how the minterm/maxterm tools can be used to design an adder stage in canonical form with the addition of some Boolean algebra, costing just 2 gate delays for each of the outputs. That's the "top-down" way to design the digital circuit for this function, but is it the best way? The discussion has focused on identifying "fastest" as "best," and the augmented canonical form meets that criterion flawlessly, but sometimes other factors predominate. The designer may have a primary goal of minimizing the number of gates, and/or of minimizing the fanouts of signals to other gates since big fanouts reduce resilience to a degraded power supply or other environmental factors. In such a case, a designer may develop the canonical-form design as a baseline, then try a bottom-up development, and finally compare the results. The bottom-up development involves noticing that "u = ci" XOR ("x" XOR "y"), where XOR means eXclusive OR [true when either input is true but not when both are true], and that "co" = "ci x" + "x y" + "y ci". One such development takes twelve NOR gates in all: six 2-input gates and two 1-input gates to produce "u" in 5 gate delays, plus three 2-input gates and one 3-input gate to produce "co"′ in 2 gate delays. The canonical baseline took eight 3-input NOR gates plus three 4-input NOR gates to produce "u, co" and "co"′ in 2 gate delays. If the circuit inventory actually includes 4-input NOR gates, the top-down canonical design looks like a winner in both gate count and speed. But if (contrary to our convenient supposition) the circuits are actually 3-input NOR gates, of which two are required for each 4-input NOR function, then the canonical design takes 14 gates compared to 12 for the bottom-up approach, but still produces the sum digit "u" considerably faster. The fanout comparison is tabulated as: The description of the bottom-up development mentions "co"′ as an output but not "co". Does that design simply never need the direct form of the carry out? Well, yes and no. At each stage, the calculation of "co"′ depends only on "ci"′, "x"′ and "y"′, which means that the carry propagation ripples along the bit positions just as fast as in the canonical design without ever developing "co". The calculation of "u", which does require "ci" to be made from "ci"′ by a 1-input NOR, is slower but for any word length the design only pays that penalty once (when the leftmost sum digit is developed). That's because those calculations overlap, each in what amounts to its own little pipeline without affecting when the next bit position's sum bit can be calculated. 
And, to be sure, the "co"′ out of the leftmost bit position will probably have to be complemented as part of the logic determining whether the addition overflowed. But using 3-input NOR gates, the bottom-up design is very nearly as fast for doing parallel addition on a non-trivial word length, cuts down on the gate count, and uses lower fanouts ... so it wins if gate count and/or fanout are paramount! We'll leave the exact circuitry of the bottom-up design of which all these statements are true as an exercise for the interested reader, assisted by one more algebraic formula: "u" = ["ci"("x" XOR "y") + "ci"′("x" XOR "y")′]′. Decoupling the carry propagation from the sum formation in this way is what elevates the performance of a "carry-lookahead adder" over that of a "ripple carry adder". Application in digital circuit design. One application of Boolean algebra is digital circuit design, with one goal to minimize the number of gates and another to minimize the settling time. There are sixteen possible functions of two variables, but in digital logic hardware, the simplest gate circuits implement only four of them: "conjunction" (AND), "disjunction" (inclusive OR), and the respective complements of those (NAND and NOR). Most gate circuits accept more than 2 input variables; for example, the spaceborne Apollo Guidance Computer, which pioneered the application of integrated circuits in the 1960s, was built with only one type of gate, a 3-input NOR, whose output is true only when all 3 inputs are false. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
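The canonical forms are easy to generate mechanically from a truth table. The Python sketch below does so for the full-adder sum and carry functions used in the examples above; the variable ordering (ci, x, y) and the helper names are choices made here for illustration, not part of any cited source.

```python
from itertools import product

def canonical_forms(f, names):
    """Return (sum of minterms, product of maxterms) for a Boolean function f
    over the named variables, enumerating rows in binary-counting order."""
    minterms, maxterms = [], []
    for values in product([0, 1], repeat=len(names)):
        if f(*values):
            minterms.append("".join(n if v else n + "'" for n, v in zip(names, values)))
        else:
            maxterms.append("(" + " + ".join(n if not v else n + "'" for n, v in zip(names, values)) + ")")
    return " + ".join(minterms), "".join(maxterms)

def u(ci, x, y):           # sum bit of a one-bit full adder
    return ci ^ x ^ y

def co(ci, x, y):          # carry-out of a one-bit full adder
    return (ci & x) | (x & y) | (y & ci)

print(canonical_forms(u, ["ci", "x", "y"])[0])
# ci'x'y + ci'xy' + cix'y' + cixy                      (minterms m1, m2, m4, m7)
print(canonical_forms(co, ["ci", "x", "y"])[1])
# (ci + x + y)(ci + x + y')(ci + x' + y)(ci' + x + y)  (maxterms M0, M1, M2, M4)
```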
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "{x_1,\\dots,x_n}" }, { "math_id": 2, "text": "x_i" }, { "math_id": 3, "text": "x'_i" }, { "math_id": 4, "text": "\\sum\\limits_{i=1}^n2^{i-1}\\operatorname{value}(x_i)" }, { "math_id": 5, "text": "a b c'" }, { "math_id": 6, "text": "m_6" }, { "math_id": 7, "text": "m_1, m_2, m_4," }, { "math_id": 8, "text": "m_7" }, { "math_id": 9, "text": " u(ci,x,y) = m_1 + m_2 + m_4 + m_7 = (ci',x',y)+(ci',x,y') + (ci,x',y')+(ci,x,y)" }, { "math_id": 10, "text": "(x_i)" }, { "math_id": 11, "text": "(x'_i)" }, { "math_id": 12, "text": "a' + b' + c" }, { "math_id": 13, "text": "(a' + b' + c)'" }, { "math_id": 14, "text": "a b c' = m_6" }, { "math_id": 15, "text": "M_0, M_1, M_2" }, { "math_id": 16, "text": "M_4" }, { "math_id": 17, "text": "co(ci, x, y) = M_0 M_1 M_2 M_4 = (ci + x + y) (ci + x + y') (ci + x' + y) (ci' + x + y)" }, { "math_id": 18, "text": "f = a'bc + abc" }, { "math_id": 19, "text": "f = bc" }, { "math_id": 20, "text": "bc = a'bc + abc" }, { "math_id": 21, "text": "f = (a' + a) b c" }, { "math_id": 22, "text": "u(ci, x, y) = m_1 + m_2 + m_4 + m_7" }, { "math_id": 23, "text": "u(ci, x, y) = \\mathrm{AND}(M_0,M_3,M_5,M_6) = \\mathrm{NOR}(m_0,m_3,m_5,m_6)." }, { "math_id": 24, "text": "co(ci, x, y) = M_0 M_1 M_2 M_4" }, { "math_id": 25, "text": "co(ci, x, y) = \\mathrm{AND}(M_0,M_1,M_2,M_4) = \\mathrm{NOR}(m_0,m_1,m_2,m_4)." }, { "math_id": 26, "text": "co'(ci, x, y) = \\mathrm{AND}(M_3,M_5,M_6,M_7) = \\mathrm{NOR}(m_3,m_5,m_6,m_7)." } ]
https://en.wikipedia.org/wiki?curid=794841
7950934
Clifford torus
Geometrical object in four-dimensional space In geometric topology, the Clifford torus is the simplest and most symmetric flat embedding of the Cartesian product of two circles (in the same sense that the surface of a cylinder is "flat"). It is named after William Kingdon Clifford. It resides in R4, as opposed to in R3. To see why R4 is necessary, note that if each of the two circles exists in its own independent copy of R2, the resulting product space will be R4 rather than R3. The historically popular view that the Cartesian product of two circles is an R3 torus in contrast requires the highly asymmetric application of a rotation operator to the second circle, since that circle will only have one independent axis "z" available to it after the first circle consumes "x" and "y". Stated another way, a torus embedded in R3 is an asymmetric reduced-dimension projection of the maximally symmetric Clifford torus embedded in R4. The relationship is similar to that of projecting the edges of a cube onto a sheet of paper. Such a projection creates a lower-dimensional image that accurately captures the connectivity of the cube edges, but also requires the arbitrary selection and removal of one of the three fully symmetric and interchangeable axes of the cube. If each circle has a radius of 1/√2, their Clifford torus product will fit perfectly within the unit 3-sphere "S"3, which is a 3-dimensional submanifold of R4. When mathematically convenient, the Clifford torus can be viewed as residing inside the complex coordinate space C2, since C2 is topologically equivalent to R4. The Clifford torus is an example of a square torus, because it is isometric to a square with opposite sides identified. (Some video games, including Asteroids, are played on a square torus; anything that moves off one edge of the screen reappears on the opposite edge with the same orientation.) It is further known as a Euclidean 2-torus (the "2" is its topological dimension); figures drawn on it obey Euclidean geometry as if it were flat, whereas the surface of a common "doughnut"-shaped torus is positively curved on the outer rim and negatively curved on the inner. Although having a different geometry than the standard embedding of a torus in three-dimensional Euclidean space, the square torus can also be embedded into three-dimensional space, by the Nash embedding theorem; one possible embedding modifies the standard torus by a fractal set of ripples running in two perpendicular directions along the surface. Formal definition. The unit circle "S"1 in R2 can be parameterized by an angle coordinate: formula_0 In another copy of R2, take another copy of the unit circle formula_1 Then the Clifford torus is formula_2 Since each copy of "S"1 is an embedded submanifold of R2, the Clifford torus is an embedded torus in R2 × R2 = R4. If R4 is given by coordinates ("x"1, "y"1, "x"2, "y"2), then the Clifford torus is given by formula_3 This shows that in R4 the Clifford torus is a submanifold of the unit 3-sphere "S"3. It is easy to verify that the Clifford torus is a minimal surface in "S"3. Alternative derivation using complex numbers. It is also common to consider the Clifford torus as an embedded torus in C2. In two copies of C, we have the following unit circles (still parametrized by an angle coordinate): formula_4 and formula_5 Now the Clifford torus appears as formula_6 As before, this is an embedded submanifold, in the unit sphere "S"3 in C2.
If C2 is given by coordinates ("z"1, "z"2), then the Clifford torus is given by formula_7 In the Clifford torus as defined above, the distance of any point of the Clifford torus to the origin of C2 is formula_8 The set of all points at a distance of 1 from the origin of C2 is the unit 3-sphere, and so the Clifford torus sits inside this 3-sphere. In fact, the Clifford torus divides this 3-sphere into two congruent solid tori (see Heegaard splitting). Since O(4) acts on R4 by orthogonal transformations, we can move the "standard" Clifford torus defined above to other equivalent tori via rigid rotations. These are all called "Clifford tori". The six-dimensional group O(4) acts transitively on the space of all such Clifford tori sitting inside the 3-sphere. However, this action has a two-dimensional stabilizer (see group action) since rotation in the meridional and longitudinal directions of a torus preserves the torus (as opposed to moving it to a different torus). Hence, there is actually a four-dimensional space of Clifford tori. In fact, there is a one-to-one correspondence between Clifford tori in the unit 3-sphere and pairs of polar great circles (i.e., great circles that are maximally separated). Given a Clifford torus, the associated polar great circles are the core circles of each of the two complementary regions. Conversely, given any pair of polar great circles, the associated Clifford torus is the locus of points of the 3-sphere that are equidistant from the two circles. More general definition of Clifford tori. The flat tori in the unit 3-sphere "S"3 that are the product of circles of radius "r" in one 2-plane R2 and radius √(1 − "r"2) in another 2-plane R2 are sometimes also called "Clifford tori". The same circles may be thought of as having radii that are cos "θ" and sin "θ" for some angle "θ" in the range 0 ≤ "θ" ≤ "π"/2 (where we include the degenerate cases "θ" = 0 and "θ" = "π"/2). The union for 0 ≤ "θ" ≤ "π"/2 of all of these tori of form formula_9 (where "S"("r") denotes the circle in the plane R2 defined by having center (0, 0) and radius "r") is the 3-sphere "S"3. Note that we must include the two degenerate cases "θ" = 0 and "θ" = "π"/2, each of which corresponds to a great circle of "S"3, and which together constitute a pair of polar great circles. This torus "T""θ" is readily seen to have area formula_10 so only the torus "T""π"/4 has the maximum possible area of 2"π"2. This torus "T""π"/4 is the torus "T""θ" that is most commonly called the "Clifford torus" – and it is also the only one of the "T""θ" that is a minimal surface in "S"3. Still more general definition of Clifford tori in higher dimensions. Any unit sphere S2"n"−1 in an even-dimensional Euclidean space R2"n" = C"n" may be expressed in terms of the complex coordinates as follows: formula_11 Then, for any non-negative numbers "r"1, ..., "r""n" such that "r"12 + ... + "r""n"2 = 1, we may define a generalized Clifford torus as follows: formula_12 These generalized Clifford tori are all disjoint from one another. We may once again conclude that the union of each one of these tori "T""r"1, ..., "r""n" is the unit (2"n" − 1)-sphere "S"2"n"−1 (where we must again include the degenerate cases where at least one of the radii "r""k" = 0). Uses in mathematics. In symplectic geometry, the Clifford torus gives an example of an embedded Lagrangian submanifold of C2 with the standard symplectic structure. (Of course, any product of embedded circles in C gives a Lagrangian torus of C2, so these need not be Clifford tori.)
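As an illustration (not part of the original article), the following NumPy sketch samples points of the Clifford torus as parameterized above, checks that they lie on the unit 3-sphere with each coordinate pair contributing exactly 1/2, and confirms numerically that among the tori "T""θ" the area 2"π"2 sin 2"θ" is maximized at "θ" = "π"/4.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, phi = rng.uniform(0.0, 2.0 * np.pi, size=(2, 1000))

# Points of the Clifford torus (1/sqrt(2)) S^1 x (1/sqrt(2)) S^1 in R^4.
pts = np.stack([np.cos(theta), np.sin(theta),
                np.cos(phi), np.sin(phi)], axis=1) / np.sqrt(2)

# Every point lies on the unit 3-sphere ...
assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)
# ... and satisfies x1^2 + y1^2 = x2^2 + y2^2 = 1/2.
assert np.allclose(pts[:, 0]**2 + pts[:, 1]**2, 0.5)
assert np.allclose(pts[:, 2]**2 + pts[:, 3]**2, 0.5)

# Areas of the more general tori T_theta = S(cos t) x S(sin t): 2*pi^2*sin(2t),
# maximised at t = pi/4, i.e. at the Clifford torus itself.
t = np.linspace(0.0, np.pi / 2, 901)
areas = 2 * np.pi**2 * np.sin(2 * t)
print(t[np.argmax(areas)], np.pi / 4)   # both approximately 0.7853981...
```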
The Lawson conjecture states that every minimally embedded torus in the 3-sphere with the round metric must be a Clifford torus. A proof of this conjecture was published by Simon Brendle in 2013. Clifford tori and their images under conformal transformations are the global minimizers of the Willmore functional.
[ { "math_id": 0, "text": "S^1 = \\bigl\\{ ( \\cos\\theta, \\sin\\theta ) \\,\\big|\\, 0 \\leq \\theta < 2\\pi \\bigr\\}." }, { "math_id": 1, "text": "S^1 = \\bigl\\{ ( \\cos\\varphi, \\sin\\varphi ) \\,\\big|\\, 0 \\leq \\varphi < 2\\pi \\bigr\\}." }, { "math_id": 2, "text": "\\tfrac{1}{\\sqrt{2}}S^1 \\times \\tfrac{1}{\\sqrt{2}} S^1 = \\left\\{\\left. \\tfrac{1}{\\sqrt{2}} ( \\cos\\theta, \\sin\\theta, \\cos\\varphi, \\sin\\varphi ) \\,\\right|\\, 0 \\leq \\theta < 2\\pi, 0 \\leq \\varphi < 2\\pi \\right\\}." }, { "math_id": 3, "text": "x_1^2 + y_1^2 = x_2^2 + y_2^2 = \\tfrac{1}{2}." }, { "math_id": 4, "text": "S^1 = \\left\\{\\left. e^{i\\theta} \\,\\right|\\, 0 \\leq \\theta < 2\\pi \\right\\}" }, { "math_id": 5, "text": "S^1 = \\left\\{\\left. e^{i\\varphi} \\,\\right|\\, 0 \\leq \\varphi < 2\\pi \\right\\}." }, { "math_id": 6, "text": "\\tfrac{1}{\\sqrt{2}}S^1 \\times \\tfrac{1}{\\sqrt{2}}S^1 = \\left\\{\\left. \\tfrac{1}{\\sqrt{2}} \\left( e^{i\\theta}, e^{i\\varphi} \\right) \\, \\right| \\, 0 \\leq \\theta < 2\\pi, 0 \\leq \\varphi < 2\\pi \\right\\}." }, { "math_id": 7, "text": "\\left| z_1 \\right|^2 = \\left| z_2 \\right|^2 = \\tfrac{1}{2}." }, { "math_id": 8, "text": "\\sqrt{ \\tfrac{1}{2}\\left| e^{i\\theta} \\right|^2 + \\tfrac{1}{2}\\left| e^{i\\varphi} \\right|^2} = 1." }, { "math_id": 9, "text": "T_\\theta = S(\\cos\\theta)\\times S(\\sin\\theta)" }, { "math_id": 10, "text": " \\operatorname{area}\\left(T_\\theta\\right) = 4\\pi^2\\cos\\theta\\sin\\theta = 2\\pi^2\\sin2\\theta," }, { "math_id": 11, "text": "S^{2n-1} = \\left\\{(z_1, \\ldots, z_n) \\in \\mathbf{C}^n : |z_1|^2 + \\cdots + |z_n|^2 = 1\\right\\}." }, { "math_id": 12, "text": "T_{r_1,\\ldots,r_n} = \\bigl\\{(z_1, \\ldots, z_n) \\in \\mathbf{C}^n : |z_k| = r_k,~1 \\leqslant k \\leqslant n\\bigr\\}." } ]
https://en.wikipedia.org/wiki?curid=7950934
7951270
Sign (mathematics)
Number property of being positive or negative In mathematics, the sign of a real number is its property of being either positive, negative, or 0. Depending on local conventions, zero may be considered as having its own unique sign, having no sign, or having both positive and negative sign. In some contexts, it makes sense to distinguish between a positive and a negative zero. In mathematics and physics, the phrase "change of sign" is associated with exchanging an object for its additive inverse (multiplication with −1, negation), an operation which is not restricted to real numbers. It applies among other objects to vectors, matrices, and complex numbers, which are not prescribed to be only either positive, negative, or zero. The word "sign" is also often used to indicate binary aspects of mathematical or scientific objects, such as odd and even (sign of a permutation), sense of orientation or rotation (cw/ccw), one-sided limits, and other concepts described below. Sign of a number. Numbers from various number systems, such as the integers, rationals, complex numbers, quaternions, and octonions, may have multiple attributes that fix certain properties of a number. A number system that bears the structure of an ordered ring contains a unique number that, when added to any number, leaves the latter unchanged. This unique number is known as the system's additive identity element. For example, the integers have the structure of an ordered ring. This number is generally denoted as 0. Because of the total order in this ring, there are numbers greater than zero, called the "positive" numbers. Another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. These numbers less than 0 are called the "negative" numbers. The numbers in each such pair are their respective additive inverses. This attribute of a number, being exclusively either "zero" (0), "positive" (+), or "negative" (−), is called its sign, and is often encoded to the real numbers 0, 1, and −1, respectively (similar to the way the sign function is defined). Since rational and real numbers are also ordered rings (in fact ordered fields), the "sign" attribute also applies to these number systems. When a minus sign is used between two numbers, it represents the binary operation of subtraction. When a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse (sometimes called "negation") of the operand. Abstractly then, the difference of two numbers is the sum of the minuend with the additive inverse of the subtrahend. While 0 is its own additive inverse (−0 = 0), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. A double application of this operation is written as −(−3) = 3. The plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression. In common numeral notation (used in arithmetic and elsewhere), the sign of a number is often made explicit by placing a plus or a minus sign before the number. For example, +3 denotes "positive three", and −3 denotes "negative three" (algebraically: the additive inverse of 3). Without specific context (or when no explicit sign is given), a number is interpreted by default as positive.
This notation establishes a strong association of the minus sign "−" with negative numbers, and the plus sign "+" with positive numbers. Sign of zero. Within the convention of zero being neither positive nor negative, a specific sign-value 0 may be assigned to the number value 0. This is exploited in the formula_0-function, as defined for real numbers. In arithmetic, +0 and −0 both denote the same number 0. There is generally no danger of confusing the value with its sign, although the convention of assigning both signs to 0 does not immediately allow for this discrimination. In certain European countries, e.g. in Belgium and France, 0 is considered to be "both" positive and negative following the convention set forth by Nicolas Bourbaki. In some contexts, such as floating-point representations of real numbers within computers, it is useful to consider signed versions of zero, with signed zeros referring to different, discrete number representations (see signed number representations for more). The symbols +0 and −0 rarely appear as substitutes for 0+ and 0−, used in calculus and mathematical analysis for one-sided limits (right-sided limit and left-sided limit, respectively). This notation refers to the behaviour of a function as its real input variable approaches 0 along positive (resp., negative) values; the two limits need not exist or agree. Terminology for signs. When 0 is said to be neither positive nor negative, the following phrases may refer to the sign of a number: a number is "positive" if it is greater than zero, "negative" if it is less than zero, "non-negative" if it is greater than or equal to zero, and "non-positive" if it is less than or equal to zero. When 0 is said to be both positive and negative, modified phrases are used to refer to the sign of a number: a number is "strictly positive" if it is greater than zero, "strictly negative" if it is less than zero, "positive" if it is greater than or equal to zero, and "negative" if it is less than or equal to zero. For example, the absolute value of a real number is always "non-negative", but is not necessarily "positive" in the first interpretation, whereas in the second interpretation, it is called "positive"—though not necessarily "strictly positive". The same terminology is sometimes used for functions that yield real or other signed values. For example, a function would be called a "positive function" if its values are positive for all arguments of its domain, or a "non-negative function" if all of its values are non-negative. Complex numbers. Complex numbers cannot be totally ordered in a way compatible with their arithmetic, so they cannot carry the structure of an ordered ring, and, accordingly, cannot be partitioned into positive and negative complex numbers. They do, however, share an attribute with the reals, which is called "absolute value" or "magnitude". Magnitudes are always non-negative real numbers, and to any non-zero number there belongs a positive real number, its absolute value. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3. This is written in symbols as |−3| = 3 and |3| = 3. In general, any arbitrary real value can be specified by its magnitude and its sign. Using the standard encoding, any real value is given by the product of its magnitude and its sign. This relation can be generalized to define a "sign" for complex numbers. Since the real and complex numbers both form a field and contain the positive reals, they also contain the reciprocals of the magnitudes of all non-zero numbers. This means that any non-zero number may be multiplied with the reciprocal of its magnitude, that is, divided by its magnitude. It is immediate that the quotient of any non-zero real number by its magnitude yields exactly its sign.
By analogy, the sign of a complex number "z" can be defined as the quotient of "z" and its magnitude. The sign of a complex number is then the exponential of the product of its argument with the imaginary unit, and it represents in some sense the complex argument of the number. This is to be compared to the sign of real numbers, except with formula_1 For the definition of a complex sign-function, see below. Sign functions. When dealing with numbers, it is often convenient to have their sign available as a number. This is accomplished by functions that extract the sign of any number, and map it to a predefined value before making it available for further calculations. For example, it might be advantageous to formulate an intricate algorithm for positive values only, and take care of the sign only afterwards. Real sign function. The sign function or signum function extracts the sign of a real number, by mapping the set of real numbers to the set of the three reals formula_2 It can be defined as follows: formula_3 Thus sgn("x") is 1 when x is positive, and sgn("x") is −1 when x is negative. For non-zero values of x, this function can also be defined by the formula formula_4 where |"x"| is the absolute value of "x". Complex sign function. While a real number has a 1-dimensional direction, a complex number has a 2-dimensional direction. The complex sign function requires the magnitude of its argument "z" = "x" + "iy", which can be calculated as formula_5 Analogously to the above, the complex sign function extracts the complex sign of a complex number by mapping the set of non-zero complex numbers to the set of unimodular complex numbers, and 0 to 0: formula_6 It may be defined as follows: Let "z" also be expressed by its magnitude and one of its arguments φ as "z" = |"z"|⋅"eiφ", then formula_7 This definition may also be recognized as a normalized vector, that is, a vector whose direction is unchanged, and whose length is fixed to unity. If the original value was ("R", "θ") in polar form, then sign("R", "θ") is (1, "θ"). Extension of sign() or signum() to any number of dimensions is obvious, but this has already been defined as normalizing a vector. Signs per convention. In situations where there are exactly two possibilities on equal footing for an attribute, these are often labelled by convention as "plus" and "minus", respectively. In some contexts, the choice of this assignment (i.e., which range of values is considered positive and which negative) is natural, whereas in other contexts, the choice is arbitrary, making an explicit sign convention necessary, the only requirement being consistent use of the convention. Sign of an angle. In many contexts, it is common to associate a sign with the measure of an angle, particularly an oriented angle or an angle of rotation. In such a situation, the sign indicates whether the angle is in the clockwise or counterclockwise direction. Though different conventions can be used, it is common in mathematics to have counterclockwise angles count as positive, and clockwise angles count as negative. It is also possible to associate a sign to an angle of rotation in three dimensions, assuming that the axis of rotation has been oriented. Specifically, a right-handed rotation around an oriented axis typically counts as positive, while a left-handed rotation counts as negative. An angle which is the negative of a given angle has an equal arc, but the opposite axis. Sign of a change.
When a quantity "x" changes over time, the change in the value of "x" is typically defined by the equation formula_8 Using this convention, an increase in "x" counts as positive change, while a decrease of "x" counts as negative change. In calculus, this same convention is used in the definition of the derivative. As a result, the derivative of an increasing function is non-negative, while the derivative of a decreasing function is non-positive. Sign of a direction. When studying one-dimensional displacements and motions in analytic geometry and physics, it is common to label the two possible directions as positive and negative. Because the number line is usually drawn with positive numbers to the right, and negative numbers to the left, a common convention is for motions to the right to be given a positive sign, and for motions to the left to be given a negative sign. On the Cartesian plane, the rightward and upward directions are usually thought of as positive, with rightward being the positive "x"-direction, and upward being the positive "y"-direction. If a displacement vector is separated into its vector components, then the horizontal part will be positive for motion to the right and negative for motion to the left, while the vertical part will be positive for motion upward and negative for motion downward. Likewise, a negative speed (rate of change of displacement) implies a velocity in the opposite direction, i.e., receding instead of advancing; a special case is the radial speed. In 3D space, notions related to sign can be found in the two normal orientations and orientability in general. Signedness in computing. In computing, an integer value may be either signed or unsigned, depending on whether the computer is keeping track of a sign for the number. By restricting an integer variable to non-negative values only, one more bit can be used for storing the value of a number. Because of the way integer arithmetic is done within computers, signed number representations usually do not store the sign as a single independent bit, instead using e.g. two's complement. In contrast, real numbers are stored and manipulated as floating point values. The floating point values are represented using three separate values, mantissa, exponent, and sign. Given this separate sign bit, it is possible to represent both positive and negative zero. Most programming languages normally treat positive zero and negative zero as equivalent values, although they provide means by which the distinction can be detected. Other meanings. In addition to the sign of a real number, the word sign is also used in various related ways throughout mathematics and other sciences.
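A small Python sketch (not part of the original article) of the real and complex sign functions defined above, implemented as the quotient of a number by its magnitude, with 0 mapped to 0.

```python
def sign(x):
    """Sign of a real or complex number as the quotient x / |x| (0 maps to 0).

    For real x this gives -1, 0 or 1; for complex x it gives a unimodular
    complex number, i.e. exp(i*arg(x)) on the unit circle.
    """
    return 0 if x == 0 else x / abs(x)

print(sign(-7.5))         # -1.0
print(sign(42))           # 1.0
print(sign(0))            # 0
print(sign(3 + 4j))       # (0.6+0.8j), a complex number of magnitude 1
print(abs(sign(3 + 4j)))  # 1.0
```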
[ { "math_id": 0, "text": "\\sgn" }, { "math_id": 1, "text": "e^{i \\pi}= -1." }, { "math_id": 2, "text": "\\{-1,\\; 0,\\; 1\\}." }, { "math_id": 3, "text": "\\begin{align}\n\\sgn : {} & \\Reals \\to \\{-1, 0, 1\\} \\\\\n& x \\mapsto \\sgn(x) = \\begin{cases}\n-1 & \\text{if } x < 0, \\\\ \n~~\\, 0 & \\text{if } x = 0, \\\\\n~~\\, 1 & \\text{if } x > 0.\n\\end{cases}\n\\end{align}" }, { "math_id": 4, "text": " \\sgn(x) = \\frac{x}{|x|} = \\frac{|x|}{x}," }, { "math_id": 5, "text": "|z| = \\sqrt{z\\bar z} = \\sqrt{x^2 + y^2}." }, { "math_id": 6, "text": "\\{z \\in \\Complex : |z| = 1\\} \\cup \\{0\\}." }, { "math_id": 7, "text": "\\sgn(z) = \\begin{cases}\n0 &\\text{for } z=0\\\\\n\\dfrac{z}{|z|} = e^{i\\varphi} &\\text{otherwise}.\n\\end{cases}" }, { "math_id": 8, "text": "\\Delta x = x_\\text{final} - x_\\text{initial}. " } ]
https://en.wikipedia.org/wiki?curid=7951270
7951427
Higher spin alternating sign matrix
In mathematics, a higher spin alternating sign matrix is a generalisation of the alternating sign matrix (ASM), where the columns and rows sum to an integer "r" (the "spin") rather than simply summing to 1 as in the usual alternating sign matrix definition. HSASMs are square matrices whose elements may be integers in the range −"r" to +"r". When traversing any row or column of an ASM or HSASM, the partial sum of its entries must always be non-negative. High spin ASMs have found application in statistical mechanics and physics, where they have been found to represent symmetry groups in ice crystal formation. Some typical examples of HSASMs are shown below: formula_0 The set of HSASMs is a superset of the ASMs. The extreme points of the convex hull of the set of "r"-spin HSASMs are themselves integer multiples of the usual ASMs.
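The defining conditions can be checked mechanically. The following Python sketch is not from the article; it tests a candidate matrix against the properties stated above, reading the partial sums left-to-right along rows and top-to-bottom along columns (the directional reading of the partial-sum condition as phrased here), and applies it to the first of the example matrices referred to above (formula_0).

```python
import numpy as np

def is_hsasm(m, r):
    """Check the stated HSASM properties: square, entries in [-r, r], every row
    and column sums to r, and the running partial sums along each row
    (left-to-right) and column (top-to-bottom) never go negative."""
    m = np.asarray(m)
    if m.ndim != 2 or m.shape[0] != m.shape[1]:
        return False
    if np.abs(m).max() > r:
        return False
    rows_ok = np.all(m.sum(axis=1) == r) and np.all(m.cumsum(axis=1) >= 0)
    cols_ok = np.all(m.sum(axis=0) == r) and np.all(m.cumsum(axis=0) >= 0)
    return bool(rows_ok and cols_ok)

# First example matrix from the article (spin r = 2).
example = [[0, 0, 2, 0],
           [0, 2, -1, 1],
           [2, -1, 2, -1],
           [0, 1, -1, 2]]
print(is_hsasm(example, r=2))              # True

# An ordinary permutation matrix is the r = 1 case.
print(is_hsasm(np.eye(3, dtype=int), r=1))  # True
```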
[ { "math_id": 0, "text": "\n\\begin{pmatrix}\n 0 & 0 & 2 & 0 \\\\\n 0 & 2 &-1 & 1 \\\\\n 2 &-1 & 2 &-1 \\\\\n 0 & 1 &-1 & 2 \n\\end{pmatrix};\\quad\n\\begin{pmatrix}\n 0 & 0 & 2 & 0&0 \\\\\n 0 & 1 &-1 & 2 &0\\\\\n 2 &-1 &-1 & 0 &2\\\\\n 0 & 0 & 2 & 0 &0\\\\\n0&2&0&0&0\n\\end{pmatrix};\\quad\n\\begin{pmatrix}\n 0 & 0 & 0 & 2 \\\\\n 0 & 2 & 0 & 0 \\\\\n 2 &-2 & 2 & 0 \\\\\n 0 & 2 & 0 & 0 \n\\end{pmatrix};\\quad\n\\begin{pmatrix}\n 0 & 2 & 0 & 0 \\\\\n 0 & 0 & 0 & 2 \\\\\n 2 & 0 & 0 & 0 \\\\\n 0 & 0 & 2 & 0 \n\\end{pmatrix}.\n" } ]
https://en.wikipedia.org/wiki?curid=7951427
7952276
Joseph F. Traub
American computer scientist Joseph Frederick Traub (June 24, 1932 – August 24, 2015) was an American computer scientist. He was the Edwin Howard Armstrong Professor of Computer Science at Columbia University and External Professor at the Santa Fe Institute. He held positions at Bell Laboratories, University of Washington, Carnegie Mellon, and Columbia, as well as sabbatical positions at Stanford, Berkeley, Princeton, California Institute of Technology, and Technical University, Munich. Traub was the author or editor of ten monographs and some 120 papers in computer science, mathematics, physics, finance, and economics. In 1959 he began his work on optimal iteration theory culminating in his 1964 monograph, "Iterative Methods for the Solution of Equations". Subsequently, he pioneered work with Henryk Woźniakowski on computational complexity applied to continuous scientific problems (information-based complexity). He collaborated in creating significant new algorithms including the Jenkins-Traub Algorithm for Polynomial Zeros, as well as the Shaw-Traub, Kung-Traub, and Brent-Traub algorithms. One of his research areas was continuous quantum computing. As of November 10, 2015, his works have been cited 8500 times, and he has an h-index of 35. From 1971 to 1979 Traub headed the Computer Science Department at Carnegie Mellon during a critical period. From 1979 to 1989 he was the founding Chair of the Computer Science Department at Columbia. From 1986 to 1992 he served as founding Chair of the Computer Science and Telecommunications Board, National Academies and held the post again 2005–2009. Traub was founding editor of the "Annual Review of Computer Science" (1986–1990) and Editor-in-Chief of the "Journal of Complexity" (1985–2015). Both his research and institution building work have had a major impact on the field of computer science. Early career. Traub attended the Bronx High School of Science where he was captain and first board of the chess team. After graduating from City College of New York he entered Columbia in 1954 intending to take a PhD in physics. In 1955, on the advice of a fellow student, Traub visited the IBM Watson Research Lab at Columbia. At the time, this was one of the few places in the country where a student could gain access to computers. Traub found his proficiency for algorithmic thinking matched perfectly with computers. In 1957 he became a Watson Fellow through Columbia. His thesis was on computational quantum mechanics. His 1959 PhD is in applied mathematics since computer science degrees were not yet available. (Indeed, there was no Computer Science Department at Columbia until Traub was invited there in 1979 to start the Department.) Career. In 1959, Traub joined the Research Division of Bell Laboratories in Murray Hill, NJ. One day a colleague asked him how to compute the solution of a certain problem. Traub could think of a number of ways to solve the problem. What was the optimal algorithm, that is, a method which would minimize the required computational resources? To his surprise, there was no theory of optimal algorithms. (The phrase computational complexity, which is the study of the minimal resources required to solve computational problems was not introduced until 1965.) Traub had the key insight that the optimal algorithm for solving a continuous problem depended on the available information. This was to eventually lead to the field of information-based complexity. 
The first area for which Traub applied his insight was the solution of nonlinear equations. This research led to the 1964 monograph, "Iterative Methods for the Solution of Equations". which is still in print. In 1966 Traub spent a sabbatical year at Stanford University where he met a student named Michael Jenkins. Together they developed the Jenkins-Traub Algorithm for Polynomial Zeros, which was published as Jenkins' Ph.D. thesis. This algorithm is still one of the most widely used methods for this problem and is included in many textbooks. In 1970 Traub became a professor at the University of Washington and in 1971 he became Head of the Carnegie Mellon Computer Science Department. The Department was quite small but still included "giants" such as Allen Newell and Herbert A. Simon. By 1978, under Traub's leadership, the Department had grown to some 50 teaching and research faculty. One of Traub's PhD students was H. T. Kung, now a chaired professor at Harvard. They created the Kung-Traub algorithm for computing the expansion of an algebraic function. They showed that computing the first formula_0 terms was no harder than multiplying two formula_0-th degree polynomials. In 1973 Traub invited Henryk Woźniakowski to visit CMU. They pioneered the field of information-based complexity, co-authoring three monographs and numerous papers. Woźniakowski became a professor at both Columbia and the University of Warsaw, Poland. In 1978, while on sabbatical at Berkeley, he was recruited by Peter Likins to become founding Chairman of the Computer Science Department at Columbia and Edwin Howard Armstrong Professor of Computer Science. He served as chair 1979–1989. In 1980 he co-authored "A General Theory of Optimal Algorithms", with Woźniakowski. This was the first research monograph on information-based complexity. Greg Wasilkowski joined Traub and Woźniakowski in two more monographs "Information, Uncertainty, Complexity", Addison-Wesley, 1983, and "Information-Based Complexity", Academic Press, 1988. In 1985 Traub became founding Editor-in-Chief of the "Journal of Complexity". This was probably the first journal which had complexity in the sense of computational complexity in its title. In 1986, Traub was asked by the National Academies to form a Computer Science Board. The original name of the Board was the Computer Science and Technology Board (CSTB). Several years later CSTB was asked to also be responsible for telecommunications so it was renamed the Computer Science and Telecommunications Board, preserving the abbreviation CSTB. The Board deals with critical national issues in computer science and telecommunications. Traub served as founding chair 1986–1992 and held the post again 2005–2009. In 1990 Traub taught in the summer school of the Santa Fe Institute (SFI). He has since played a variety of roles at SFI. In the nineties he organized a series of Workshops on Limits to Scientific Knowledge funded by the Alfred P. Sloan Foundation. The goal was to enrich science in the same way that the work of Gödel and Turing on the limits of mathematics enriched that field. There were a series of Workshops on limits in various disciplines: physics, economics, and geophysics. Starting in 1991 Traub was co-organizer of an international Seminar on "Continuous Algorithms and Complexity" at Schloss Dagstuhl, Germany. Many of the Seminar talks are on information-based complexity and more recently on continuous quantum computing. 
Traub was invited by the Accademia Nazionale dei Lincee in Rome, Italy, to present the 1993 Lezione Lincee. He chose to give the cycle of six lectures at the Scuola Normale in Pisa. He invited Arthur Werschulz to join him in publishing the lectures. The lectures appeared in expanded form as "Complexity and Information", Cambridge University Press, 1998. In 1994 he asked a PhD student, Spassimir Paskov, to compare the Monte Carlo method (MC) with the Quasi-Monte Carlo method (QMC) when calculating a collateralized mortgage obligation (CMO) Traub had obtained from Goldman Sachs. This involved the numerical approximation of a number of integrals in 360 dimensions. To the surprise of the research group Paskov reported that QMC always beat MC for this problem. People in finance had always used MC for such problems and the experts in number theory believed QMC should not be used for integrals of dimension greater than 12. Paskov and Traub reported their results to a number of Wall Street firms to considerable initial skepticism. They first published the results in 1995. The theory and software was greatly improved by Anargyros Papageorgiou. Today QMC is widely used in the financial sector to value financial derivatives. QMC is not a panacea for all high dimensional integrals. Research is continuing on the characterization of problems for which QMC is superior to MC. In 1999 Traub received the Mayor's medal for Science and Technology. Decisions regarding this award are made by the New York Academy of Sciences. The medal was awarded by Mayor Rudy Giuliani in a ceremony in Gracie Mansion. Traub and his colleagues have also worked on continuous quantum computing. Moore's law is an empirical observation that the number of features on a chip doubles roughly every 18 months. This has held since the early 60s and is responsible for the computer and telecommunications revolution. It is widely believed that Moore's law will cease to hold in 10–15 years using silicon technology. There is therefore interest in creating new technologies. One candidate is quantum computing. That is building a computer using the principles of quantum mechanics. The motivation is that most problems in physical science, engineering, and mathematical finance have continuous mathematical models. In 2005 Traub donated archival material to the Carnegie Mellon University Library. This collection is being digitized. Patents on algorithms and software. The U.S. patents US5940810 and US0605837 were issued to Traub "et al." for the FinDer Software System and were assigned to Columbia University. These patents cover an application of a well known technique (low discrepancy sequences) to a well known problem (valuation of securities). Personal life. Traub had two daughters, Claudia Traub-Cooper and Hillary Spector. He lived in Manhattan and Santa Fe with his wife, author Pamela McCorduck. He often opined on current events by writing to the New York Times, which frequently published his comments. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "2^3P" } ]
https://en.wikipedia.org/wiki?curid=7952276
7952767
Generating function (physics)
Function used to generate other functions In physics, and more specifically in Hamiltonian mechanics, a generating function is, loosely, a function whose partial derivatives generate the differential equations that determine a system's dynamics. Common examples are the partition function of statistical mechanics, the Hamiltonian, and the function which acts as a bridge between two sets of canonical variables when performing a canonical transformation. In canonical transformations. There are four basic generating functions, each depending on one old and one new canonical variable: "F"1("q", "Q"), with "p" = ∂"F"1/∂"q" and "P" = −∂"F"1/∂"Q"; "F"2("q", "P"), with "p" = ∂"F"2/∂"q" and "Q" = ∂"F"2/∂"P"; "F"3("p", "Q"), with "q" = −∂"F"3/∂"p" and "P" = −∂"F"3/∂"Q"; and "F"4("p", "P"), with "q" = −∂"F"4/∂"p" and "Q" = ∂"F"4/∂"P". Example. Sometimes a given Hamiltonian can be turned into one that looks like the harmonic oscillator Hamiltonian, which is formula_0 For example, with the Hamiltonian formula_1 where "p" is the generalized momentum and "q" is the generalized coordinate, a good canonical transformation to choose would be "P" = "pq"2 and "Q" = −1/"q". (1) This turns the Hamiltonian into formula_2 which is in the form of the harmonic oscillator Hamiltonian. The generating function "F" for this transformation is of the third kind, formula_3 To find "F" explicitly, use the equation for its derivative from the relations above, formula_4 and substitute the expression for "P" from equation (1), expressed in terms of "p" and "Q": formula_5 Integrating this with respect to "Q" results in the generating function of the transformation given by equation (1), namely "F"3("p", "Q") = "p"/"Q" (the constant of integration may be taken to be zero). To confirm that this is the correct generating function, verify that it matches (1): formula_6 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
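As a quick consistency check (not part of the original article), the following SymPy sketch substitutes the inverse of transformation (1) into the original Hamiltonian and verifies the two defining derivative relations of the third-kind generating function "F"3 = "p"/"Q".

```python
import sympy as sp

p, q, Q, P = sp.symbols('p q Q P')

# Original Hamiltonian and transformation (1): P = p*q**2, Q = -1/q.
H_old = 1/(2*q**2) + p**2 * q**4 / 2

# Substituting the inverse relations q = -1/Q and p = P*Q**2 should give
# the harmonic-oscillator form Q**2/2 + P**2/2.
H_new = sp.simplify(H_old.subs({q: -1/Q, p: P*Q**2}))
print(H_new)                                   # P**2/2 + Q**2/2

# Generating function of the third kind, F3(p, Q) = p/Q.
F3 = p / Q
print(sp.simplify(-sp.diff(F3, p) - (-1/Q)))   # 0  -> q = -dF3/dp = -1/Q
print(sp.simplify(-sp.diff(F3, Q) - p/Q**2))   # 0  -> P = -dF3/dQ = p/Q**2
```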
[ { "math_id": 0, "text": "H = aP^2 + bQ^2." }, { "math_id": 1, "text": "H = \\frac{1}{2q^2} + \\frac{p^2 q^4}{2}," }, { "math_id": 2, "text": "H = \\frac{Q^2}{2} + \\frac{P^2}{2}," }, { "math_id": 3, "text": "F = F_3(p,Q)." }, { "math_id": 4, "text": "P = - \\frac{\\partial F_3}{\\partial Q}," }, { "math_id": 5, "text": "\\frac{p}{Q^2} = - \\frac{\\partial F_3}{\\partial Q}" }, { "math_id": 6, "text": "q = - \\frac{\\partial F_3}{\\partial p} = \\frac{-1}{Q}" } ]
https://en.wikipedia.org/wiki?curid=7952767
795334
Dislocation
Linear crystallographic defect or irregularity In materials science, a dislocation or Taylor's dislocation is a linear crystallographic defect or irregularity within a crystal structure that contains an abrupt change in the arrangement of atoms. The movement of dislocations allow atoms to slide over each other at low stress levels and is known as "glide" or slip. The crystalline order is restored on either side of a "glide dislocation" but the atoms on one side have moved by one position. The crystalline order is not fully restored with a "partial dislocation". A dislocation defines the boundary between "slipped" and "unslipped" regions of material and as a result, must either form a complete loop, intersect other dislocations or defects, or extend to the edges of the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms which is defined by the Burgers vector. Plastic deformation of a material occurs by the creation and movement of many dislocations. The number and arrangement of dislocations influences many of the properties of materials. The two primary types of dislocations are "sessile" dislocations which are immobile and "glissile" dislocations which are mobile. Examples of sessile dislocations are the "stair-rod" dislocation and the Lomer–Cottrell junction. The two main types of mobile dislocations are "edge" and "screw " dislocations. Edge dislocations can be visualized as being caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the surrounding planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. This phenomenon is analogous to half of a piece of paper inserted into a stack of paper, where the defect in the stack is noticeable only at the edge of the half sheet. The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor, proposed that the low stresses observed to produce plastic deformation compared to theoretical predictions at the time could be explained in terms of the theory of dislocations. History. The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. The term 'dislocation' referring to a defect on the atomic scale was coined by G. I. Taylor in 1934. Prior to the 1930s, one of the enduring challenges of materials science was to explain plasticity in microscopic terms. A simplistic attempt to calculate the shear stress at which neighbouring atomic planes "slip" over each other in a perfect crystal suggests that, for a material with shear modulus formula_0, shear strength formula_1 is given approximately by: formula_2 The shear modulus in metals is typically within the range 20 000 to 150 000 MPa indicating a predicted shear stress of 3 000 to 24 000 MPa. This was difficult to reconcile with measured shear stresses in the range of 0.5 to 10 MPa. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor, independently proposed that plastic deformation could be explained in terms of the theory of dislocations. Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge. In effect, a half plane of atoms is moved in response to shear stress by breaking and reforming a line of bonds, one (or a few) at a time. 
The energy required to break a row of bonds is far less than that required to break all the bonds on an entire plane of atoms at once. Even this simple model of the force required to move a dislocation shows that plasticity is possible at much lower stresses than in a perfect crystal. In many materials, particularly ductile materials, dislocations are the "carrier" of plastic deformation, and the energy required to move them is less than the energy required to fracture the material. Mechanisms. A dislocation is a linear crystallographic defect or irregularity within a crystal structure which contains an abrupt change in the arrangement of atoms. The crystalline order is restored on either side of a dislocation but the atoms on one side have moved or slipped. Dislocations define the boundary between slipped and unslipped regions of material and cannot end within a lattice and must either extend to a free edge or form a loop within the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms in the lattice which is called the Burgers vector. The Burgers vector of a dislocation remains constant even though the shape of the dislocation may change. A variety of dislocation types exist, with mobile dislocations known as "glissile" and immobile dislocations called "sessile". The movement of mobile dislocations allow atoms to slide over each other at low stress levels and is known as glide or slip. The movement of dislocations may be enhanced or hindered by the presence of other elements within the crystal and over time, these elements may diffuse to the dislocation forming a Cottrell atmosphere. The pinning and breakaway from these elements explains some of the unusual yielding behavior seen with steels. The interaction of hydrogen with dislocations is one of the mechanisms proposed to explain hydrogen embrittlement. Dislocations behave as though they are a distinct entity within a crystalline material where some types of dislocation can move through the material bending, flexing and changing shape and interacting with other dislocations and features within the crystal. Dislocations are generated by deforming a crystalline material such as metals, which can cause them to initiate from surfaces, particularly at stress concentrations or within the material at defects and grain boundaries. The number and arrangement of dislocations give rise to many of the properties of metals such as ductility, hardness and yield strength. Heat treatment, alloy content and cold working can change the number and arrangement of the dislocation population and how they move and interact in order to create useful properties. Generating dislocations. When metals are subjected to cold working (deformation at temperatures which are relatively low as compared to the material's absolute melting temperature, formula_3 i.e., typically less than formula_4) the dislocation density increases due to the formation of new dislocations. The consequent increasing overlap between the strain fields of adjacent dislocations gradually increases the resistance to further dislocation motion. This causes a hardening of the metal as deformation progresses. This effect is known as strain hardening or work hardening. Dislocation density formula_5 in a material can be increased by plastic deformation by the following relationship: formula_6. Since the dislocation density increases with plastic deformation, a mechanism for the creation of dislocations must be activated in the material. 
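The specific relationship referenced above is not reproduced here, but the practical consequence of it, namely that a higher dislocation density raises the stress needed for further dislocation motion, can be illustrated with the standard Taylor hardening relation τ = τ0 + αGb√ρ. The following sketch is an illustration only; the Taylor relation is a textbook expression rather than the formula cited in the text, and the material constants are rough, copper-like values chosen here, not taken from the article.

```python
import numpy as np

# Taylor hardening relation: tau = tau0 + alpha * G * b * sqrt(rho)
# (standard textbook relation, used for illustration; not necessarily the
#  specific relationship referenced in the text above).
alpha = 0.3        # dimensionless constant, typically 0.3-0.5
G = 45e9           # shear modulus, Pa (roughly copper)
b = 0.256e-9       # Burgers vector magnitude, m (roughly copper)
tau0 = 10e6        # friction stress, Pa (illustrative)

for rho in (1e10, 1e12, 1e14, 1e16):   # dislocation density, m^-2
    tau = tau0 + alpha * G * b * np.sqrt(rho)
    print(f"rho = {rho:.0e} m^-2  ->  tau = {tau/1e6:6.1f} MPa")
# Annealed metals (low rho) come out soft; heavily cold-worked metals
# (high rho) come out hundreds of MPa stronger, as described above.
```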
Three mechanisms for dislocation formation are homogeneous nucleation, grain boundary initiation, and interfaces between the lattice and the surface, precipitates, dispersed phases, or reinforcing fibers. Homogeneous nucleation. The creation of a dislocation by "homogeneous nucleation" is a result of the rupture of the atomic bonds along a line in the lattice. A plane in the lattice is sheared, resulting in 2 oppositely faced half planes or dislocations. These dislocations move away from each other through the lattice. Since homogeneous nucleation forms dislocations from perfect crystals and requires the simultaneous breaking of many bonds, the energy required for homogeneous nucleation is high. For instance, the stress required for homogeneous nucleation in copper has been shown to be formula_7, where formula_0 is the shear modulus of copper (46 GPa). Solving for formula_8, we see that the required stress is 3.4 GPa, which is very close to the theoretical strength of the crystal. Therefore, in conventional deformation homogeneous nucleation requires a concentrated stress, and is very unlikely. Grain boundary initiation and interface interaction are more common sources of dislocations. Irregularities at the grain boundaries in materials can produce dislocations which propagate into the grain. The steps and ledges at the grain boundary are an important source of dislocations in the early stages of plastic deformation. Frank–Read source. The Frank–Read source is a mechanism that is able to produce a stream of dislocations from a pinned segment of a dislocation. Stress bows the dislocation segment, expanding until it creates a dislocation loop that breaks free from the source. Surfaces. The surface of a crystal can produce dislocations in the crystal. Due to the small steps on the surface of most crystals, stress in some regions on the surface is much larger than the average stress in the lattice. This stress leads to dislocations. The dislocations are then propagated into the lattice in the same manner as in grain boundary initiation. In single crystals, the majority of dislocations are formed at the surface. The dislocation density 200 micrometres into the surface of a material has been shown to be six times higher than the density in the bulk. However, in polycrystalline materials the surface sources do not have a major effect because most grains are not in contact with the surface. Interfaces. The interface between a metal and an oxide can greatly increase the number of dislocations created. The oxide layer puts the surface of the metal in tension because the oxygen atoms squeeze into the lattice, and the oxygen atoms are under compression. This greatly increases the stress on the surface of the metal and consequently the amount of dislocations formed at the surface. The increased amount of stress on the surface steps results in an increase in dislocations formed and emitted from the interface. Dislocations may also form and remain in the interface plane between two crystals. This occurs when the lattice spacing of the two crystals do not match, resulting in a misfit of the lattices at the interface. The stress caused by the lattice misfit is released by forming regularly spaced misfit dislocations. Misfit dislocations are edge dislocations with the dislocation line in the interface plane and the Burgers vector in the direction of the interface normal. Interfaces with misfit dislocations may form e.g. as a result of epitaxial crystal growth on a substrate. Irradiation. 
Dislocation loops may form in the damage created by energetic irradiation. A prismatic dislocation loop can be understood as an extra (or missing) collapsed disk of atoms, and can form when interstitial atoms or vacancies cluster together. This may happen directly as a result of single or multiple collision cascades, which results in locally high densities of interstitial atoms and vacancies. In most metals, prismatic dislocation loops are the energetically most preferred clusters of self-interstitial atoms. Interaction and arrangement. Geometrically necessary dislocations. Geometrically necessary dislocations are arrangements of dislocations that can accommodate a limited degree of plastic bending in a crystalline material. Tangles of dislocations are found at the early stage of deformation and appear as non well-defined boundaries; the process of dynamic recovery leads eventually to the formation of a cellular structure containing boundaries with misorientation lower than 15° (low angle grain boundaries). Pinning. Adding pinning points that inhibit the motion of dislocations, such as alloying elements, can introduce stress fields that ultimately strengthen the material by requiring a higher applied stress to overcome the pinning stress and continue dislocation motion. The effects of strain hardening by accumulation of dislocations and the grain structure formed at high strain can be removed by appropriate heat treatment (annealing) which promotes the recovery and subsequent recrystallization of the material. The combined processing techniques of work hardening and annealing allow for control over dislocation density, the degree of dislocation entanglement, and ultimately the yield strength of the material. Persistent slip bands. Repeated cycling of a material can lead to the generation and bunching of dislocations surrounded by regions that are relatively dislocation free. This pattern forms a ladder like structure known as a "persistent slip bands" (PSB). PSB's are so-called, because they leave marks on the surface of metals that even when removed by polishing, return at the same place with continued cycling. PSB walls are predominately made up of edge dislocations. In between the walls, plasticity is transmitted by screw dislocations. Where PSB's meet the surface, extrusions and intrusions form, which under repeated cyclic loading, can lead to the initiation of a fatigue crack. Movement. Glide. Dislocations can slip in planes containing both the dislocation line and the Burgers vector, the so called glide plane. For a screw dislocation, the dislocation line and the Burgers vector are parallel, so the dislocation may slip in any plane containing the dislocation. For an edge dislocation, the dislocation and the Burgers vector are perpendicular, so there is one plane in which the dislocation can slip. Climb. "Dislocation climb" is an alternative mechanism of dislocation motion that allows an edge dislocation to move out of its slip plane. The driving force for dislocation climb is the movement of vacancies through a crystal lattice. If a vacancy moves next to the boundary of the extra half plane of atoms that forms an edge dislocation, the atom in the half plane closest to the vacancy can "jump" and fill the vacancy. This atom shift "moves" the vacancy in line with the half plane of atoms, causing a shift, or positive climb, of the dislocation. The process of a vacancy being absorbed at the boundary of a half plane of atoms, rather than created, is known as negative climb. 
Since dislocation climb results from individual atoms "jumping" into vacancies, climb occurs in single atom diameter increments. During positive climb, the crystal shrinks in the direction perpendicular to the extra half plane of atoms because atoms are being removed from the half plane. Since negative climb involves an addition of atoms to the half plane, the crystal grows in the direction perpendicular to the half plane. Therefore, compressive stress in the direction perpendicular to the half plane promotes positive climb, while tensile stress promotes negative climb. This is one main difference between slip and climb, since slip is caused by only shear stress. One additional difference between dislocation slip and climb is the temperature dependence. Climb occurs much more rapidly at high temperatures than low temperatures due to an increase in vacancy motion. Slip, on the other hand, has only a small dependence on temperature. Dislocation avalanches. Dislocation avalanches occur when multiple simultaneous movement of dislocations occur. Dislocation Velocity. Dislocation velocity is largely dependent upon shear stress and temperature, and can often be fit using a power law function: formula_9 where formula_10 is a material constant, formula_11 is the applied shear stress, formula_12 is a constant that decreases with increasing temperature. Increased shear stress will increase the dislocation velocity, while increased temperature will typically decrease the dislocation velocity. Greater phonon scattering at higher temperatures is hypothesized to be responsible for increased damping forces which slow the dislocation movement. Geometry. Two main types of mobile dislocations exist: edge and screw. Dislocations found in real materials are typically "mixed", meaning that they have characteristics of both. Edge. A crystalline material consists of a regular array of atoms, arranged into lattice planes. An edge dislocation is a defect where an extra half-plane of atoms is introduced midway through the crystal, distorting nearby planes of atoms. When enough force is applied from one side of the crystal structure, this extra plane passes through planes of atoms breaking and joining bonds with them until it reaches the grain boundary. The dislocation has two properties, a line direction, which is the direction running along the bottom of the extra half plane, and the Burgers vector which describes the magnitude and direction of distortion to the lattice. In an edge dislocation, the Burgers vector is perpendicular to the line direction. The stresses caused by an edge dislocation are complex due to its inherent asymmetry. These stresses are described by three equations: formula_13 formula_14 formula_15 where formula_16 is the shear modulus of the material, formula_17 is the Burgers vector, formula_18 is Poisson's ratio and formula_19 and formula_20 are coordinates. These equations suggest a vertically oriented dumbbell of stresses surrounding the dislocation, with compression experienced by the atoms near the "extra" plane, and tension experienced by those atoms near the "missing" plane. Screw. A "screw dislocation" can be visualized by cutting a crystal along a plane and slipping one half across the other by a lattice vector, the halves fitting back together without leaving a defect. If the cut only goes part way through the crystal, and then slipped, the boundary of the cut is a screw dislocation. 
It comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes in the crystal lattice. In pure screw dislocations, the Burgers vector is parallel to the line direction. An array of screw dislocations can cause what is known as a twist boundary. In a twist boundary, the misalignment between adjacent crystal grains occurs due to the cumulative effect of screw dislocations within the material. These dislocations cause a rotational misorientation between the adjacent grains, leading to a twist-like deformation along the boundary. Twist boundaries can significantly influence the mechanical and electrical properties of materials, affecting phenomena such as grain boundary sliding, creep, and fracture behavior The stresses caused by a screw dislocation are less complex than those of an edge dislocation and need only one equation, as symmetry allows one radial coordinate to be used: formula_21 where formula_16 is the shear modulus of the material, formula_17 is the Burgers vector, and formula_22 is a radial coordinate. This equation suggests a long cylinder of stress radiating outward from the cylinder and decreasing with distance. This simple model results in an infinite value for the core of the dislocation at formula_23 and so it is only valid for stresses outside of the core of the dislocation. If the Burgers vector is very large, the core may actually be empty resulting in a micropipe, as commonly observed in silicon carbide. Mixed. In many materials, dislocations are found where the line direction and Burgers vector are neither perpendicular nor parallel and these dislocations are called "mixed dislocations", consisting of both screw and edge character. They are characterized by formula_24, the angle between the line direction and Burgers vector, where formula_25 for pure edge dislocations and formula_26 for screw dislocations. Partial. Partial dislocations leave behind a stacking fault. Two types of partial dislocation are the "Frank partial dislocation" which is sessile and the "Shockley partial dislocation" which is glissile. A Frank partial dislocation is formed by inserting or removing a layer of atoms on the {111} plane which is then bounded by the Frank partial. Removal of a close packed layer is known as an "intrinsic" stacking fault and inserting a layer is known as an "extrinsic" stacking fault. The Burgers vector is normal to the {111} glide plane so the dislocation cannot glide and can only move through "climb". In order to lower the overall energy of the lattice, edge and screw dislocations typically disassociate into a stacking fault bounded by two Shockley partial dislocations. The width of this stacking-fault region is proportional to the stacking-fault energy of the material. The combined effect is known as an "extended dislocation" and is able to glide as a unit. However, dissociated screw dislocations must recombine before they can cross slip, making it difficult for these dislocations to move around barriers. Materials with low stacking-fault energies have the greatest dislocation dissociation and are therefore more readily cold worked. Stair-rod and the Lomer–Cottrell junction. If two glide dislocations that lie on different {111} planes split into Shockley partials and intersect, they will produce a stair-rod dislocation with a Lomer-Cottrell dislocation at its apex. It is called a "stair-rod" because it is analogous to the rod that keeps carpet in-place on a stair. Jog. 
A jog describes the steps of a dislocation line that are not in the glide plane of a crystal structure. A dislocation line is rarely uniformly straight, often containing many curves and steps that can impede or facilitate dislocation movement by acting as pinning points or nucleation points, respectively. Because jogs are out of the glide plane, under shear they cannot move by glide (movement along the glide plane). They must instead rely on climb facilitated by vacancy diffusion to move through the lattice. Away from the melting point of a material, vacancy diffusion is a slow process, so jogs act as immobile barriers at room temperature for most metals. Jogs typically form when two non-parallel dislocations cross during slip. The presence of jogs in a material increases its yield strength by preventing easy glide of dislocations. A pair of immobile jogs in a dislocation will act as a Frank–Read source under shear, increasing the overall dislocation density of a material. When a material's yield strength is increased by increasing its dislocation density, particularly when this is done by mechanical work, it is called work hardening. At high temperatures, the vacancy-facilitated movement of jogs becomes a much faster process, diminishing their overall effectiveness in impeding dislocation movement. Kink. Kinks are steps in a dislocation line parallel to glide planes. Unlike jogs, they facilitate glide by acting as a nucleation point for dislocation movement. The lateral spreading of a kink from the nucleation point allows for forward propagation of the dislocation while only moving a few atoms at a time, reducing the overall energy barrier to slip. Example in two dimensions (2D). In two dimensions (2D), only edge dislocations exist, which play a central role in the melting of 2D crystals; there is no screw dislocation. Those dislocations are topological point defects, which implies that they cannot be created in isolation by an affine transformation without cutting the hexagonal crystal up to infinity (or at least up to its border). They can only be created in pairs with antiparallel Burgers vectors. If many dislocations are thermally excited, for example, the discrete translational order of the crystal is destroyed. Simultaneously, the shear modulus and the Young's modulus disappear, which implies that the crystal has melted to a fluid phase. The orientational order is not yet destroyed (as indicated by lattice lines in one direction), and one finds, very similarly to liquid crystals, a fluid phase with a typically sixfold director field. This so-called hexatic phase still has an orientational stiffness. The isotropic fluid phase appears if the dislocations dissociate into isolated fivefold and sevenfold disclinations. This two-step melting is described within the so-called Kosterlitz–Thouless–Halperin–Nelson–Young theory (KTHNY theory), based on two transitions of the Kosterlitz–Thouless type. Observation. Transmission electron microscopy (TEM). Transmission electron microscopy can be used to observe dislocations within the microstructure of the material. Thin foils of material are prepared to render them transparent to the electron beam of the microscope. The electron beam undergoes diffraction by the regular crystal lattice planes into a diffraction pattern, and contrast is generated in the image by this diffraction (as well as by thickness variations, varying strain, and other mechanisms). 
Dislocations have different local atomic structure and produce a strain field, and therefore will cause the electrons in the microscope to scatter in different ways. In such images, dislocation lines show a characteristic 'wiggly' contrast as they pass through the thickness of the material (dislocations cannot end in a crystal, and these dislocations terminate at the surfaces since the image is a 2D projection). Dislocations do not have random structures; the local atomic structure of a dislocation is determined by the Burgers vector. One very useful application of the TEM in dislocation imaging is the ability to experimentally determine the Burgers vector. Determination of the Burgers vector is achieved by what is known as formula_27 ("g dot b") analysis. When performing dark field microscopy with the TEM, a diffracted spot is selected to form the image (as mentioned before, lattice planes diffract the beam into spots), and the image is formed using only electrons that were diffracted by the plane responsible for that diffraction spot. The vector in the diffraction pattern from the transmitted spot to the diffracted spot is the formula_28 vector. The contrast of a dislocation is scaled by a factor of the dot product of this vector and the Burgers vector (formula_27). As a result, if the Burgers vector and formula_28 vector are perpendicular, there will be no signal from the dislocation and the dislocation will not appear at all in the image. Therefore, by examining different dark field images formed from spots with different g vectors, the Burgers vector can be determined. Other methods. Field ion microscopy and atom probe techniques offer methods of producing much higher magnifications (typically 3 million times and above) and permit the observation of dislocations at an atomic level. Where surface relief can be resolved to the level of an atomic step, screw dislocations appear as distinctive spiral features, thus revealing an important mechanism of crystal growth: where there is a surface step, atoms can more easily add to the crystal, and the surface step associated with a screw dislocation is never destroyed no matter how many atoms are added to it. Chemical etching. When a dislocation line intersects the surface of a metallic material, the associated strain field locally increases the relative susceptibility of the material to acid etching, and an etch pit of regular geometrical form results. In this way, dislocations in silicon, for example, can be observed "indirectly" using an interference microscope. Crystal orientation can be determined by the shape of the etch pits associated with the dislocations. If the material is deformed and repeatedly re-etched, a series of etch pits can be produced which effectively trace the movement of the dislocation in question. Dislocation forces. Forces on dislocations. Dislocation motion as a result of external stress on a crystal lattice can be described using virtual internal forces which act perpendicular to the dislocation line. The Peach–Koehler equation can be used to calculate the force per unit length on a dislocation as a function of the Burgers vector, formula_17, stress, formula_29, and the sense vector, formula_30. formula_31 The force per unit length of dislocation is a function of the general state of stress, formula_32, and the sense vector, formula_30. formula_33 The components of the stress field can be obtained from the Burgers vector, normal stresses, formula_29, and shear stresses, formula_11. 
formula_34 Forces between dislocations. The force between dislocations can be derived from the energy of interaction of the dislocations, formula_35, which is the work done in displacing the cut faces parallel to a chosen axis to create one dislocation in the stress field of the other. For the formula_19 and formula_20 directions: formula_36 The forces are then found by taking the derivatives. formula_37 Free surface forces. Dislocations will also tend to move towards free surfaces due to the lower strain energy. This fictitious force can be expressed for a screw dislocation with the formula_20 component equal to zero as: formula_38 where formula_39 is the distance from the free surface in the formula_19 direction. The force for an edge dislocation with formula_40 can be expressed as: formula_41 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
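The stress-field and force expressions above lend themselves to a quick numerical check. The short Python sketch below evaluates the edge-dislocation stress components and the Peach–Koehler glide force per unit length; the material constants (roughly aluminium-like) and the applied shear stress are illustrative assumptions, not values taken from this article.

import numpy as np

# Illustrative constants (roughly aluminium-like); these are assumptions
# for this sketch, not values quoted in the text above.
mu = 26e9        # shear modulus, Pa
nu = 0.33        # Poisson's ratio
b = 2.86e-10     # magnitude of the Burgers vector, m

def edge_stresses(x, y):
    """Stress components of an edge dislocation at position (x, y) relative to the core."""
    pre = mu * b / (2 * np.pi * (1 - nu))
    r4 = (x**2 + y**2)**2
    sigma_xx = -pre * y * (3 * x**2 + y**2) / r4
    sigma_yy = pre * y * (x**2 - y**2) / r4
    tau_xy = pre * x * (x**2 - y**2) / r4
    return sigma_xx, sigma_yy, tau_xy

print(edge_stresses(5e-9, 5e-9))   # stresses a few nanometres from the core, in Pa

# Peach-Koehler force per unit length, f = (b . sigma) x s, for an edge
# dislocation lying along z with its Burgers vector along x, under an
# applied shear stress tau_xy of 50 MPa (an arbitrary example value).
burgers = np.array([b, 0.0, 0.0])
sense = np.array([0.0, 0.0, 1.0])
sigma = np.array([[0.0, 50e6, 0.0],
                  [50e6, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
force = np.cross(burgers @ sigma, sense)
print(force)   # glide force per unit length, N/m

The printed glide force reduces to the familiar value of the shear stress times the Burgers vector magnitude per unit length, which is a useful sanity check on the sign conventions.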
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\tau_m" }, { "math_id": 2, "text": " \\tau_m = \\frac {G} {2 \\pi}." }, { "math_id": 3, "text": "T_m" }, { "math_id": 4, "text": "0.4T_m" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\tau \\propto \\sqrt{\\rho}" }, { "math_id": 7, "text": " \\frac {\\tau_{\\text{hom}}}{G}=7.4\\times10^{-2}" }, { "math_id": 8, "text": "\\tau_{\\text{hom}} \\,\\!" }, { "math_id": 9, "text": "v = A\\tau^m" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "\\tau" }, { "math_id": 12, "text": "m" }, { "math_id": 13, "text": " \\sigma_{xx} = \\frac {-\\mu \\mathbf{b}} {2 \\pi (1-\\nu)} \\frac {y(3x^2 +y^2)} {(x^2 +y^2)^2}" }, { "math_id": 14, "text": " \\sigma_{yy} = \\frac {\\mu \\mathbf{b}} {2 \\pi (1-\\nu)} \\frac {y(x^2 -y^2)} {(x^2 +y^2)^2}" }, { "math_id": 15, "text": " \\tau_{xy} = \\frac {\\mu \\mathbf{b}} {2 \\pi (1-\\nu)} \\frac {x(x^2 -y^2)} {(x^2 +y^2)^2}" }, { "math_id": 16, "text": "\\mu" }, { "math_id": 17, "text": "\\mathbf{b}" }, { "math_id": 18, "text": "\\nu" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": "y" }, { "math_id": 21, "text": " \\tau_{r} = \\frac {-\\mu \\mathbf{b}} {2 \\pi r} " }, { "math_id": 22, "text": "r" }, { "math_id": 23, "text": "r=0" }, { "math_id": 24, "text": "\\varphi" }, { "math_id": 25, "text": "\\varphi=\\pi/2" }, { "math_id": 26, "text": "\\varphi=0" }, { "math_id": 27, "text": "\\vec{g} \\cdot \\vec{b}" }, { "math_id": 28, "text": "\\vec{g}" }, { "math_id": 29, "text": "\\sigma" }, { "math_id": 30, "text": "\\mathbf{s}" }, { "math_id": 31, "text": "\\mathbf{f} = (\\mathbf{b}\\cdot \\sigma)\\times \\mathbf{s}" }, { "math_id": 32, "text": "\\mathbf{F}" }, { "math_id": 33, "text": "\n\\mathbf{f} = \\mathbf{F} \\times \\mathbf{s} = \n\\begin{vmatrix}\n\\hat\\imath & \\hat\\jmath & \\hat k \\\\\nF_x & F_y & F_z \\\\\ns_x & s_y & s_z\n\\end{vmatrix}" }, { "math_id": 34, "text": "\\begin{aligned}\nF_x &= b_x\\sigma_{xx} + b_y\\tau_{xy} + b_z\\tau_{xz} \\\\\nF_y &= b_x\\tau_{yx} + b_y\\sigma_{yy} + b_z\\tau_{yz} \\\\\nF_z &= b_x\\tau_{zx} + b_y\\tau_{zy} + b_z\\sigma_{zz}\n\\end{aligned}" }, { "math_id": 35, "text": "U_{\\rm int}" }, { "math_id": 36, "text": "\\begin{aligned}\nU_{\\rm int} &= \\int_{x}^{\\infty} (b_x\\tau_{xy} + b_y\\sigma_{yy} + b_z\\sigma_{zy})\\, dx \\\\\nU_{\\rm int} &= \\int_{y}^{\\infty} (b_x\\sigma_{xx} + b_y\\tau_{yx} + b_z\\sigma_{zx})\\, dy\n\\end{aligned}" }, { "math_id": 37, "text": "\\begin{aligned}\nF_x &= - \\frac{\\partial U_{\\rm int}}{\\partial x} \\\\\nF_y &= - \\frac{\\partial U_{\\rm int}}{\\partial y}\n\\end{aligned}" }, { "math_id": 38, "text": "F_x = b\\tau_{zy} = -\\frac{Gb^2}{4\\pi d}" }, { "math_id": 39, "text": "d" }, { "math_id": 40, "text": "y = 0" }, { "math_id": 41, "text": "F_x = b\\tau_{xy} = -\\frac{Gb^2}{4\\pi (1 - \\nu)d}" } ]
https://en.wikipedia.org/wiki?curid=795334
7954712
Turnstile (symbol)
Symbol in mathematical logic In mathematical logic and computer science, the symbol ⊢ (formula_0) has taken the name turnstile because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee and is often read as "yields", "proves", "satisfies" or "entails". Interpretations. The turnstile represents a binary relation. It has several different interpretations in different contexts. Read as an assertion sign, formula_1 can be read "I know A is true". In the same vein, a conditional assertion formula_2 can be read as: "From P, I know that Q". In a formal system, formula_2 means that Q is derivable from P in the system. Consistent with its use for derivability, a "⊢" followed by an expression without anything preceding it denotes a theorem, which is to say that the expression can be derived from the rules using an empty set of axioms. As such, the expression formula_3 means that Q is a theorem in the system. Similarly, formula_4 means that S is provable from T. This usage is demonstrated in the article on propositional calculus. The syntactic consequence of provability should be contrasted with semantic consequence, denoted by the double turnstile symbol formula_5. One says that formula_6 is a semantic consequence of formula_7, written formula_8, when formula_6 is true in all possible valuations in which formula_7 is true. For propositional logic, it may be shown that semantic consequence formula_5 and derivability formula_0 are equivalent to one another. That is, propositional logic is sound (formula_0 implies formula_5) and complete (formula_5 implies formula_0). Typography. In TeX, the turnstile symbol formula_0 is obtained from the command \vdash. In Unicode, the turnstile symbol (⊢) is called right tack and is at code point U+22A2. (Code point U+22A6 is named "assertion sign" (⊦).) On a typewriter, a turnstile can be composed from a vertical bar (|) and a dash (–). In LaTeX, there is a turnstile package which produces this sign in many ways, and is capable of putting labels below or above it, in the correct places. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
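The definition of semantic consequence given above can be checked mechanically for propositional logic by enumerating valuations. The following Python sketch does this for two atoms; the encoding of formulas as Python functions is an illustrative choice, not a standard library interface.

from itertools import product

ATOMS = ["p", "q"]

def entails(premise, conclusion):
    """Semantic consequence: every valuation making the premise true also makes the conclusion true."""
    for values in product([False, True], repeat=len(ATOMS)):
        valuation = dict(zip(ATOMS, values))
        if premise(valuation) and not conclusion(valuation):
            return False
    return True

# Formulas are encoded as functions from a valuation to a truth value.
p_and_q = lambda v: v["p"] and v["q"]
p_or_q = lambda v: v["p"] or v["q"]

print(entails(p_and_q, p_or_q))   # True: (p and q) semantically entails (p or q)
print(entails(p_or_q, p_and_q))   # False: the converse fails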
[ { "math_id": 0, "text": "\\vdash" }, { "math_id": 1, "text": "\\vdash A" }, { "math_id": 2, "text": "P \\vdash Q" }, { "math_id": 3, "text": "\\vdash Q" }, { "math_id": 4, "text": "T \\vdash S" }, { "math_id": 5, "text": "\\models" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": "T \\models S" }, { "math_id": 9, "text": "A_1,\\,\\dots,A_m \\,\\vdash\\, B_1,\\,\\dots,B_n" }, { "math_id": 10, "text": "A_1,\\,\\dots,A_m" }, { "math_id": 11, "text": "B_1,\\,\\dots,B_n" }, { "math_id": 12, "text": "\\dashv" }, { "math_id": 13, "text": "F \\dashv G" }, { "math_id": 14, "text": "G \\vdash F" }, { "math_id": 15, "text": "\\lambda \\vdash n" }, { "math_id": 16, "text": "5\\vdash2" }, { "math_id": 17, "text": "Q=2;R=1" }, { "math_id": 18, "text": "\\varphi \\vdash \\psi" }, { "math_id": 19, "text": "\\varphi" }, { "math_id": 20, "text": "\\psi" } ]
https://en.wikipedia.org/wiki?curid=7954712
7958880
D'Alembert–Euler condition
In mathematics and physics, especially the study of mechanics and fluid dynamics, the d'Alembert-Euler condition is a requirement that the streaklines of a flow are irrotational. Let x = x(X,"t") be the coordinates of the point x into which X is carried at time "t" by a (fluid) flow. Let formula_0 be the second material derivative of x. Then the d'Alembert-Euler condition is: formula_1 The d'Alembert-Euler condition is named for Jean le Rond d'Alembert and Leonhard Euler who independently first described its use in the mid-18th century. It is not to be confused with the Cauchy–Riemann conditions. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
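For a concrete flow map the condition can be verified symbolically. The SymPy sketch below takes a hypothetical linear flow x = x(X, t), forms the second material derivative by differentiating twice in time at fixed X, rewrites the result as a field over the spatial coordinates, and checks that its curl vanishes; the flow chosen here is an invented example, not one discussed in this article.

import sympy as sp

X1, X2, X3, t = sp.symbols('X1 X2 X3 t')
x1, x2, x3 = sp.symbols('x1 x2 x3')

# Hypothetical flow map x = x(X, t) in material coordinates X.
x = sp.Matrix([X1 + t**2 * X2, X2 + t**2 * X1, X3])

# Second material derivative: differentiate twice in t at fixed X.
x_ddot = x.diff(t, 2)

# Express the acceleration as a field over the spatial coordinates by
# inverting the (linear) flow map at a fixed time t.
sol = sp.solve([sp.Eq(x[0], x1), sp.Eq(x[1], x2), sp.Eq(x[2], x3)],
               [X1, X2, X3], dict=True)[0]
a = x_ddot.subs(sol)

curl = sp.Matrix([
    sp.diff(a[2], x2) - sp.diff(a[1], x3),
    sp.diff(a[0], x3) - sp.diff(a[2], x1),
    sp.diff(a[1], x1) - sp.diff(a[0], x2),
])
print(sp.simplify(curl))   # zero vector: the d'Alembert-Euler condition holds for this flow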
[ { "math_id": 0, "text": "\\ddot{\\mathbf{x}}=\\frac{D^2\\mathbf{x}}{Dt^2}" }, { "math_id": 1, "text": "\\mathrm{curl}\\ \\ddot{\\mathbf{x}}=\\mathbf{0}. \\, " } ]
https://en.wikipedia.org/wiki?curid=7958880
7959499
Generalized quantifier
Expression denoting a set of sets in formal semantics In formal semantics, a generalized quantifier (GQ) is an expression that denotes a set of sets. This is the standard semantics assigned to quantified noun phrases. For example, the generalized quantifier "every boy" denotes the set of sets of which every boy is a member: formula_0 This treatment of quantifiers has been essential in achieving a compositional semantics for sentences containing quantifiers. Type theory. A version of type theory is often used to make the semantics of different kinds of expressions explicit. The standard construction defines the set of types recursively as follows: "e" and "t" are types; if formula_7 and formula_8 are types, then so is formula_1; and nothing else is a type. Given this definition, we have the simple types "e" and "t", but also a countable infinity of complex types, some of which include: formula_2 We can now assign types to the words in our sentence above (Every boy sleeps) as follows. Expressions of type "e" denote individuals, elements of the domain formula_3, and expressions of type "t" denote truth values, elements of formula_4. Expressions of a complex type formula_1 denote functions from denotations of type formula_7 to denotations of type formula_8; the set of such functions is written formula_9, so that expressions of type formula_5 denote functions in formula_6, that is, functions from individuals to truth values. The noun "boy" and the verb "sleeps" are both of type formula_5, while the determiner "every" is of type formula_10, and so we can see that the generalized quantifier in our example, "every boy", is of type formula_11 Thus, every denotes a function from a "set" to a function from a set to a truth value. Put differently, it denotes a function from a set to a set of sets. It is the function which, for any two sets "A" and "B", yields "every"("A")("B") = 1 if and only if formula_12. Typed lambda calculus. A useful way to write complex functions is the lambda calculus. For example, one can write the meaning of "sleeps" as the following lambda expression, which is a function from an individual "x" to the proposition that "x sleeps". formula_13 Such lambda terms are functions whose domain is what precedes the period, and whose range is the type of thing that follows the period. If "x" is a variable that ranges over elements of formula_3, then the following lambda term denotes the identity function on individuals: formula_14 We can now write the meaning of "every" with the following lambda term, where "X,Y" are variables of type formula_5: formula_15 If we abbreviate the meaning of "boy" and "sleeps" as "B" and "S", respectively, we have that the sentence "every boy sleeps" now means the following: formula_16 By β-reduction, formula_17 and formula_18. The expression "every" is a determiner. Combined with a noun, it yields a "generalized quantifier" of type formula_11. Properties. Monotonicity. Monotone increasing GQs. A "generalized quantifier" GQ is said to be monotone increasing (also called upward entailing) if, for every pair of sets "X" and "Y", the following holds: if formula_19, then GQ("X") entails GQ("Y"). The GQ "every boy" is monotone increasing. For example, the set of things that "run fast" is a subset of the set of things that "run". Therefore, "every boy runs fast" entails "every boy runs". Monotone decreasing GQs. A GQ is said to be monotone decreasing (also called downward entailing) if, for every pair of sets "X" and "Y", the following holds: If formula_19, then GQ("Y") entails GQ("X"). An example of a monotone decreasing GQ is "no boy": for this GQ, "no boy runs" entails "no boy runs fast". The lambda term for the determiner "no" is the following. It says that the two sets have an empty intersection. formula_20 Monotone decreasing GQs are among the expressions that can license a negative polarity item, such as "any". Monotone increasing GQs do not license negative polarity items. Non-monotone GQs. A GQ is said to be "non-monotone" if it is neither monotone increasing nor monotone decreasing. An example of such a GQ is "exactly three boys". Neither of the following sentences entails the other: "Exactly three students ran" and "Exactly three students ran fast". 
The first sentence does not entail the second. The fact that the number of students that ran is exactly three does not entail that each of these students "ran fast", so the number of students who ran fast can be smaller than 3. Conversely, the second sentence does not entail the first. The sentence "exactly three students ran fast" can be true, even though the number of students who merely ran (i.e. not so fast) is greater than 3. The lambda term for the (complex) determiner "exactly three" is the following. It says that the cardinality of the intersection between the two sets equals 3. formula_21 Conservativity. A determiner D is said to be "conservative" if the following equivalence holds: formula_22 For example, the following two sentences are equivalent: "Every boy sleeps" and "Every boy is a boy who sleeps". It has been proposed that "all" determiners, in every natural language, are conservative. The expression "only" is not conservative: the sentences "Only boys sleep" and "Only boys are boys who sleep" are not equivalent. But it is, in fact, not common to analyze "only" as a determiner. Rather, it is standardly treated as a focus-sensitive adverb.
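Because the denotations above are just operations on sets, the behaviour of these determiners is easy to experiment with. The Python sketch below encodes "every", "no" and "exactly three" as functions mapping a set A to the characteristic function of the corresponding set of sets, using a small invented universe of individuals; it is only an illustration of the definitions, not an implementation from the semantics literature.

def every(A):
    return lambda B: A <= B                 # every(A)(B) = 1 iff A is a subset of B

def no(A):
    return lambda B: not (A & B)            # empty intersection

def exactly_three(A):
    return lambda B: len(A & B) == 3        # |A intersect B| = 3

# A toy universe: these particular sets are invented for illustration.
boys = {"al", "bo", "cy"}
sleeps = {"al", "bo", "cy", "di"}
runs = {"al", "bo"}

print(every(boys)(sleeps))      # True:  every boy sleeps
print(every(boys)(runs))        # False: not every boy runs
print(no(boys)(set()))          # True:  no boy is in the empty set

# Conservativity: D(A)(B) holds exactly when D(A)(A & B) holds.
print(every(boys)(sleeps) == every(boys)(boys & sleeps))   # True
print(no(boys)(runs) == no(boys)(boys & runs))             # True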
[ { "math_id": 0, "text": "\\{X \\mid \\forall x (x \\text{ is a boy} \\to x \\in X) \\}" }, { "math_id": 1, "text": "\\langle a,b\\rangle" }, { "math_id": 2, "text": "\\langle e,t\\rangle;\\qquad \\langle t,t\\rangle;\\qquad \\langle\\langle e,t\\rangle, t\\rangle; \\qquad\\langle e,\\langle e,t\\rangle\\rangle; \\qquad \\langle\\langle e,t\\rangle,\\langle \\langle e, t\\rangle, t\\rangle\\rangle;\\qquad \\ldots" }, { "math_id": 3, "text": "D_e" }, { "math_id": 4, "text": "\\{0,1\\}" }, { "math_id": 5, "text": "\\langle e,t\\rangle" }, { "math_id": 6, "text": "D_t^{D_e}" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "D_b^{D_a}" }, { "math_id": 10, "text": "\\langle\\langle e,t\\rangle,\\langle \\langle e, t\\rangle, t\\rangle\\rangle" }, { "math_id": 11, "text": "\\langle\\langle e,t\\rangle,t\\rangle" }, { "math_id": 12, "text": "A\\subseteq B" }, { "math_id": 13, "text": "\\lambda x. \\mathrm{sleep}'(x)" }, { "math_id": 14, "text": "\\lambda x.x" }, { "math_id": 15, "text": "\\lambda X.\\lambda Y. X\\subseteq Y" }, { "math_id": 16, "text": "(\\lambda X.\\lambda Y. X\\subseteq Y)(B)(S)" }, { "math_id": 17, "text": "(\\lambda Y. B \\subseteq Y)(S)" }, { "math_id": 18, "text": "B\\subseteq S" }, { "math_id": 19, "text": "X\\subseteq Y" }, { "math_id": 20, "text": "\\lambda X.\\lambda Y. X\\cap Y= \\emptyset" }, { "math_id": 21, "text": "\\lambda X.\\lambda Y. |X\\cap Y|=3" }, { "math_id": 22, "text": "D(A)(B) \\leftrightarrow D(A)(A\\cap B)" } ]
https://en.wikipedia.org/wiki?curid=7959499
7960510
Restricted sumset
Sumset of a field subject to a specific polynomial restriction In additive number theory and combinatorics, a restricted sumset has the form formula_0 where formula_1 are finite nonempty subsets of a field "F" and formula_2 is a polynomial over "F". If formula_3 is a constant non-zero function, for example formula_4 for any formula_5, then formula_6 is the usual sumset formula_7 which is denoted by formula_8 if formula_9 When formula_10 "S" is written as formula_11 which is denoted by formula_12 if formula_9 Note that |"S"| > 0 if and only if there exist formula_13 with formula_14 Cauchy–Davenport theorem. The Cauchy–Davenport theorem, named after Augustin Louis Cauchy and Harold Davenport, asserts that for any prime "p" and nonempty subsets "A" and "B" of the prime order cyclic group formula_15 we have the inequality formula_16 where formula_17, i.e. we are using modular arithmetic. It can be generalised to arbitrary (not necessarily abelian) groups using a Dyson transform. If formula_18 are subsets of a group formula_19, then formula_20 where formula_21 is the size of the smallest nontrivial subgroup of formula_19 (we set it to formula_22 if there is no such subgroup). We may use this to deduce the Erdős–Ginzburg–Ziv theorem: given any sequence of 2"n"−1 elements in the cyclic group formula_23, there are "n" elements that sum to zero modulo "n". (Here "n" does not need to be prime.) A direct consequence of the Cauchy–Davenport theorem is: Given any sequence "S" of "p"−1 or more nonzero elements, not necessarily distinct, of formula_15, every element of formula_15 can be written as the sum of the elements of some subsequence (possibly empty) of "S". Kneser's theorem generalises this to general abelian groups. Erdős–Heilbronn conjecture. The Erdős–Heilbronn conjecture, posed by Paul Erdős and Hans Heilbronn in 1964, states that formula_24 if "p" is a prime and "A" is a nonempty subset of the field Z/"p"Z. This was first confirmed by J. A. Dias da Silva and Y. O. Hamidoune in 1994, who showed that formula_25 where "A" is a finite nonempty subset of a field "F", and "p"("F") is a prime "p" if "F" is of characteristic "p", and "p"("F") = ∞ if "F" is of characteristic 0. Various extensions of this result were given by Noga Alon, M. B. Nathanson and I. Ruzsa in 1996, Q. H. Hou and Zhi-Wei Sun in 2002, and G. Karolyi in 2004. Combinatorial Nullstellensatz. A powerful tool in the study of lower bounds for cardinalities of various restricted sumsets is the following fundamental principle: the combinatorial Nullstellensatz. Let formula_26 be a polynomial over a field formula_27. Suppose that the coefficient of the monomial formula_28 in formula_26 is nonzero and formula_29 is the total degree of formula_26. If formula_30 are finite subsets of formula_27 with formula_31 for formula_32, then there are formula_13 such that formula_33. This tool was rooted in a paper of N. Alon and M. Tarsi in 1989, and developed by Alon, Nathanson and Ruzsa in 1995–1996, and reformulated by Alon in 1999. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
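For small primes, the bounds above can be verified directly by brute force. The following Python sketch computes an ordinary sumset and a restricted sumset with distinct summands modulo a small prime and compares their sizes with the Cauchy–Davenport and Erdős–Heilbronn lower bounds; the particular sets chosen are arbitrary examples.

from itertools import combinations

p = 11
A = {1, 2, 3, 5}
B = {0, 1, 8}

# Ordinary sumset A + B in Z/pZ and the Cauchy-Davenport lower bound.
sumset = {(a + b) % p for a in A for b in B}
print(len(sumset), ">=", min(p, len(A) + len(B) - 1))

# Restricted sumset 2^A (sums of two *distinct* elements of A) and the
# Erdos-Heilbronn lower bound.
restricted = {(a1 + a2) % p for a1, a2 in combinations(A, 2)}
print(len(restricted), ">=", min(p, 2 * len(A) - 3))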
[ { "math_id": 0, "text": "S=\\{a_1+\\cdots+a_n:\\ a_1\\in A_1,\\ldots,a_n\\in A_n \\ \\mathrm{and}\\ P(a_1,\\ldots,a_n)\\not=0\\}," }, { "math_id": 1, "text": " A_1,\\ldots,A_n" }, { "math_id": 2, "text": "P(x_1,\\ldots,x_n)" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "P(x_1,\\ldots,x_n)=1" }, { "math_id": 5, "text": "x_1,\\ldots,x_n" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "A_1+\\cdots+A_n" }, { "math_id": 8, "text": "nA" }, { "math_id": 9, "text": "A_1=\\cdots=A_n=A." }, { "math_id": 10, "text": "P(x_1,\\ldots,x_n) = \\prod_{1 \\le i < j \\le n} (x_j-x_i)," }, { "math_id": 11, "text": "A_1\\dotplus\\cdots\\dotplus A_n" }, { "math_id": 12, "text": "n^{\\wedge} A" }, { "math_id": 13, "text": "a_1\\in A_1,\\ldots,a_n\\in A_n" }, { "math_id": 14, "text": "P(a_1,\\ldots,a_n)\\not=0." }, { "math_id": 15, "text": "\\mathbb{Z}/p\\mathbb{Z}" }, { "math_id": 16, "text": "|A+B| \\ge \\min\\{p,\\, |A|+|B|-1\\}" }, { "math_id": 17, "text": "A+B := \\{a+b \\pmod p \\mid a \\in A, b \\in B\\}" }, { "math_id": 18, "text": "A, B" }, { "math_id": 19, "text": "G" }, { "math_id": 20, "text": "|A+B| \\ge \\min\\{p(G),\\, |A|+|B|-1\\}" }, { "math_id": 21, "text": "p(G)" }, { "math_id": 22, "text": "1" }, { "math_id": 23, "text": "\\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 24, "text": "|2^\\wedge A| \\ge \\min\\{p,\\, 2|A|-3\\}" }, { "math_id": 25, "text": "|n^\\wedge A| \\ge \\min\\{p(F),\\ n|A|-n^2+1\\}," }, { "math_id": 26, "text": "f(x_1,\\ldots,x_n)" }, { "math_id": 27, "text": "F" }, { "math_id": 28, "text": "x_1^{k_1}\\cdots x_n^{k_n}" }, { "math_id": 29, "text": "k_1+\\cdots+k_n" }, { "math_id": 30, "text": "A_1,\\ldots,A_n" }, { "math_id": 31, "text": "|A_i|>k_i" }, { "math_id": 32, "text": "i=1,\\ldots,n" }, { "math_id": 33, "text": "f(a_1,\\ldots,a_n)\\not = 0 " } ]
https://en.wikipedia.org/wiki?curid=7960510
7960760
Hubble volume
Region of the observable universe In cosmology, a Hubble volume (named for the astronomer Edwin Hubble) or Hubble sphere, Hubble bubble, subluminal sphere, causal sphere and sphere of causality is a spherical region of the observable universe surrounding an observer beyond which objects recede from that observer at a rate greater than the speed of light due to the expansion of the universe. The Hubble volume is approximately equal to 10^31 cubic light years (or about 10^79 cubic meters). The proper radius of a Hubble sphere (known as the Hubble radius or the Hubble length) is formula_0, where formula_1 is the speed of light and formula_2 is the Hubble constant. The surface of a Hubble sphere is called the "microphysical horizon", the "Hubble surface", or the "Hubble limit". More generally, the term "Hubble volume" can be applied to any region of space with a volume of order formula_3. However, the term is also frequently (but mistakenly) used as a synonym for the observable universe; the latter is larger than the Hubble volume. The center of the Hubble volume and of the observable universe is arbitrary in relation to the overall universe; instead, each is centered on its origin, the (impersonal or personal) observer. The Hubble length formula_0 is 14.4 billion light years in the standard cosmological model, equivalent to formula_1 times the Hubble time. The Hubble time is the reciprocal of the Hubble constant, and is slightly larger than the age of the universe (13.8 billion years) as it is the age the universe would have had if the expansion had been linear. Hubble limit as an event horizon. For objects at the Hubble limit, the space between us and the object of interest has an average expansion speed of "c". So, in a universe with constant Hubble parameter, light emitted at the present time by objects outside the Hubble limit would never be seen by an observer on Earth. That is, the Hubble limit would coincide with a cosmological event horizon (a boundary separating events visible at some time and those that are never visible). See Hubble horizon for more details. However, the Hubble parameter is not constant in various cosmological models, so the Hubble limit does not, in general, coincide with a cosmological event horizon. For example, in a decelerating Friedmann universe the Hubble sphere expands with time, and its boundary overtakes light emitted by more distant galaxies, so that light emitted at earlier times by objects "outside" the Hubble volume may still eventually arrive inside the sphere and be seen by us. Similarly, in an accelerating universe with a decreasing Hubble constant, the Hubble volume expands with time and can overtake light from sources previously receding relative to us. In both of these circumstances, the cosmological event horizon lies beyond the Hubble horizon. In a universe with an increasing Hubble constant, the Hubble horizon will contract, and its boundary overtakes light emitted by nearer galaxies, so that light emitted at earlier times by objects "inside" the Hubble sphere will eventually recede outside the sphere and will never be seen by us. If the shrinkage of the Hubble volume does not stop due to some yet unknown phenomenon (one suggestion is the "early phase transition"), the Hubble volume will become nearly a point (due to the uncertainty principle, pure singularities are impossible; also, a proportion of their self-interactions is energetic enough to produce escaping particles via quantum tunneling), meeting the criteria of a big bang. 
The justification of this view is that no subluminal Hubble volume will exist and pointwise superluminal expansion (the generalization of the Big Bang theory) will prevail everywhere or at least in a vast region of the universe. In this cyclic cosmology (there are many other cyclic versions) the universe always expands and does not revert to a smaller default size (non-conformal or expandatory conformal, non-Penrosean expandatory cyclic cosmology). Observations indicate that the expansion of the universe is accelerating, and the Hubble constant is thought to be decreasing. Thus, sources of light outside the Hubble horizon but inside the cosmological event horizon can eventually reach us. A fairly counter-intuitive result is that photons we observe from the first ~5 billion years of the universe come from regions that are, and always have been, receding from us at superluminal speeds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
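The quoted figures can be reproduced with a few lines of arithmetic. The Python sketch below assumes a Hubble constant of about 68 km/s/Mpc (a representative value; the precise figure is still debated) and recovers a Hubble time and Hubble length of roughly 14.4 billion years and 14.4 billion light years, and a Hubble volume of order 10^79 cubic metres, i.e. about 10^31 cubic light years.

import math

# Representative inputs; the Hubble constant below is an assumption, not a
# value asserted by this article.
c = 2.998e8                      # speed of light, m/s
H0 = 68 * 1000 / 3.086e22        # 68 km/s/Mpc converted to 1/s
LY = 9.461e15                    # metres per light year
YEAR = 3.156e7                   # seconds per year

hubble_time = 1 / H0                                   # s
hubble_length = c / H0                                 # m
hubble_volume = 4 / 3 * math.pi * hubble_length**3     # m^3

print(hubble_time / YEAR / 1e9)          # ~14.4 billion years
print(hubble_length / LY / 1e9)          # ~14.4 billion light years
print(hubble_volume)                     # ~1e79 cubic metres
print(hubble_volume / LY**3)             # ~1e31 cubic light years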
[ { "math_id": 0, "text": "c/H_0" }, { "math_id": 1, "text": "c" }, { "math_id": 2, "text": "H_0" }, { "math_id": 3, "text": "(c/H_0)^3" } ]
https://en.wikipedia.org/wiki?curid=7960760