8370929
Fitness model (network theory)
In complex network theory, the fitness model is a model of the evolution of a network: how the links between nodes change over time depends on the fitness of the nodes. Fitter nodes attract more links at the expense of less fit nodes. It has been used to model the network structure of the World Wide Web. Description of the model. The model is based on the idea of fitness, an inherent competitive factor that nodes may have, capable of affecting the network's evolution. According to this idea, the nodes' intrinsic ability to attract links in the network varies from node to node, the most efficient (or "fit") nodes being able to gather more edges at the expense of others. In that sense, not all nodes are identical, and each node's degree grows according to its fitness. The fitness factors of all the nodes composing the network may form a distribution ρ(η) characteristic of the system being studied. Ginestra Bianconi and Albert-László Barabási proposed a model, now called the Bianconi–Barabási model, a variant of the Barabási–Albert (BA) model in which the probability for a node to connect to another is weighted by a term expressing the fitness of the node involved. The fitness parameter is time-independent and multiplies the preferential-attachment probability. A fitness model in which fitnesses are not coupled to preferential attachment was introduced by Caldarelli et al. Here a link is created between two vertices formula_0 with a probability given by a linking function formula_1 of the fitnesses of the vertices involved. The degree of a vertex i is given by: formula_2 If formula_3 is an invertible and increasing function of formula_4, then the probability distribution formula_5 is given by formula_6 As a result, if the fitnesses formula_7 are distributed as a power law, then the node degrees follow a power law as well. Less intuitively, a fast-decaying fitness distribution such as formula_8, together with a linking function of the kind formula_9 with formula_10 a constant and formula_11 the Heaviside step function, also produces scale-free networks. Such a model has been successfully applied to describe trade between nations, using GDP as the fitness of the various nodes formula_0 and a linking function of the kind formula_12 Fitness model and the evolution of the Web. The fitness model has been used to model the network structure of the World Wide Web. In a PNAS article, Kong et al. extended the fitness model to include random node deletion, a common phenomenon on the Web. When the deletion rate of web pages is accounted for, they found that the overall fitness distribution is exponential. Nonetheless, even this small variance in the fitness is amplified through the preferential attachment mechanism, leading to a heavy-tailed distribution of incoming links on the Web. References.
[ { "math_id": 0, "text": "i,j" }, { "math_id": 1, "text": "f(\\eta_i,\\eta_j)" }, { "math_id": 2, "text": "k(\\eta_i)=N\\int_0^\\infty \\!\\!\\! f(\\eta_i,\\eta_j) \\rho(\\eta_j) d\\eta_j " }, { "math_id": 3, "text": "k(\\eta_i)" }, { "math_id": 4, "text": "\\eta_i" }, { "math_id": 5, "text": "P(k)" }, { "math_id": 6, "text": "P(k)=\\rho(\\eta(k)) \\cdot \\eta'(k)" }, { "math_id": 7, "text": "\\eta" }, { "math_id": 8, "text": "\\rho(\\eta)=e^{-\\eta}" }, { "math_id": 9, "text": " f(\\eta_i,\\eta_j)=\\Theta(\\eta_i+\\eta_j-Z)" }, { "math_id": 10, "text": "Z" }, { "math_id": 11, "text": "\\Theta" }, { "math_id": 12, "text": " \\frac{\\delta \\eta_i\\eta_j}{1+ \\delta \\eta_i\\eta_j}" } ]
https://en.wikipedia.org/wiki?curid=8370929
8371035
Girvan–Newman algorithm
Community detection algorithm The Girvan–Newman algorithm (named after Michelle Girvan and Mark Newman) is a hierarchical method used to detect communities in complex systems. Edge betweenness and community structure. The Girvan–Newman algorithm detects communities by progressively removing edges from the original network. The connected components of the remaining network are the communities. Instead of trying to construct a measure that tells us which edges are the most central to communities, the Girvan–Newman algorithm focuses on edges that are most likely "between" communities. Vertex betweenness is an indicator of highly central nodes in networks. For any node formula_0, vertex betweenness is defined as the fraction of shortest paths between pairs of nodes that run through it. It is relevant to models where the network modulates transfer of goods between known start and end points, under the assumption that such transfer seeks the shortest available route. The Girvan–Newman algorithm extends this definition to the case of edges, defining the "edge betweenness" of an edge as the number of shortest paths between pairs of nodes that run along it. If there is more than one shortest path between a pair of nodes, each path is assigned equal weight such that the total weight of all of the paths is equal to unity. If a network contains communities or groups that are only loosely connected by a few inter-group edges, then all shortest paths between different communities must go along one of these few edges. Thus, at least one of the edges connecting communities will have high edge betweenness. By removing these edges, the groups are separated from one another and so the underlying community structure of the network is revealed. The algorithm's steps for community detection are summarized below: the betweenness of all existing edges in the network is calculated first; the edge(s) with the highest betweenness are removed; the betweenness of all edges affected by the removal is recalculated; and the removal and recalculation steps are repeated until no edges remain. The fact that only the betweennesses affected by the removal need to be recalculated may lessen the running time of the process' simulation in computers. However, the betweenness centrality must be recalculated with each step, or severe errors occur. The reason is that the network adapts itself to the new conditions set after the edge removal. For instance, if two communities are connected by more than one edge, then there is no guarantee that all of these edges will have high betweenness. According to the method, we know that at least one of them will have high betweenness, but nothing more than that is known. By recalculating betweennesses after the removal of each edge, it is ensured that at least one of the remaining edges between two communities will always have a high value. The end result of the Girvan–Newman algorithm is a dendrogram. As the Girvan–Newman algorithm runs, the dendrogram is produced from the top down (i.e. the network splits up into different communities with the successive removal of links). The leaves of the dendrogram are individual nodes.
[ { "math_id": 0, "text": "i" } ]
https://en.wikipedia.org/wiki?curid=8371035
8371092
Gödel numbering for sequences
In mathematics, a Gödel numbering for sequences provides an effective way to represent each finite sequence of natural numbers as a single natural number. While a set theoretical embedding is surely possible, the emphasis is on the effectiveness of the functions manipulating such representations of sequences: the operations on sequences (accessing individual members, concatenation) can be "implemented" using total recursive functions, and in fact by primitive recursive functions. It is usually used to build sequential “data types” in arithmetic-based formalizations of some fundamental notions of mathematics. It is a specific case of the more general idea of Gödel numbering. For example, recursive function theory can be regarded as a formalization of the notion of an algorithm, and can be regarded as a programming language to mimic lists by encoding a sequence of natural numbers in a single natural number. Gödel numbering. Besides using Gödel numbering to encode unique sequences of symbols into unique natural numbers (i.e. place numbers into mutually exclusive or one-to-one correspondence with the sequences), we can use it to encode whole “architectures” of sophisticated “machines”. For example, we can encode Markov algorithms or Turing machines into natural numbers and thereby prove that the expressive power of recursive function theory is no less than that of the former machine-like formalizations of algorithms. Accessing members. Any such representation of sequences should contain all the information in the original sequence—most importantly, each individual member must be retrievable. However, the length does not have to match directly; even if we want to handle sequences of different length, we can store length data as a surplus member, or as the other member of an ordered pair by using a pairing function. We expect that there is an effective way for this information retrieval process in the form of an appropriate total recursive function. We want to find a total recursive function "f" with the property that for all "n" and for any "n"-length sequence of natural numbers formula_0, there exists an appropriate natural number "a", called the Gödel number of the sequence, such that for all "i" where formula_1, formula_2. There are effective functions which can retrieve each member of the original sequence from a Gödel number of the sequence. Moreover, we can define some of them in a constructive way, so we can go well beyond mere proofs of existence. Gödel's β-function lemma. By an ingenious use of the Chinese remainder theorem, we can constructively define such a recursive function formula_3 (using simple number-theoretical functions, all of which can be defined in a total recursive way) fulfilling the specifications given above. Gödel defined the formula_3 function using the Chinese remainder theorem in his article written in 1931. This is a primitive recursive function. Thus, for all "n" and for any "n"-length sequence of natural numbers formula_0, there exists an appropriate natural number "a", called the Gödel number of the sequence, such that formula_4. Using a pairing function. Our specific solution will depend on a pairing function—there are several ways to implement the pairing function, so one method must be selected. Now, we can abstract from the details of the implementation of the pairing function. 
We need only to know its “interface”: let formula_5, "K", and "L" denote the pairing function and its two projection functions, respectively, satisfying the specification: formula_6 formula_7 Remainder for natural numbers. We shall use another auxiliary function that will compute the remainder for natural numbers. Examples: formula_8 (since 5 = 1·3 + 2) and formula_9 (since 7 = 3·2 + 1). It can be proven that this function can be implemented as a recursive function. Using the Chinese remainder theorem. Implementation of the β function. Using the Chinese remainder theorem, we can prove that implementing formula_3 as formula_10 will work, according to the specification we expect formula_3 to satisfy. We can use a more concise form by an abuse of notation (constituting a sort of pattern matching): formula_11 Let us achieve even more readability by more modularity and reuse (as these notions are used in computer science): by defining formula_12 the sequence formula_13, we can write formula_14. We shall use this formula_15 notation in the proof. Hand-tuned assumptions. For proving the correctness of the above definition of the formula_3 function, we shall use several lemmas. These have their own assumptions. Now we try to find out these assumptions, calibrating and tuning their strength carefully: they should be stated neither in a superfluously strong nor in an unsatisfactorily weak form. Let formula_16 be a sequence of natural numbers. Let "m" be chosen to satisfy formula_17 formula_18 The first assumption is meant as formula_19 It is needed to meet an assumption of the Chinese remainder theorem (that of being pairwise coprime). In the literature, sometimes this requirement is replaced with a stronger one, e.g. constructively built with the factorial function, but the stronger premise is not required for this proof. The second assumption does not concern the Chinese remainder theorem in any way. It will have importance in proving that the specification for formula_3 is met eventually. It ensures that a solution formula_20 of the simultaneous congruence system formula_21 (for each "i" where formula_1) also satisfies formula_22. A stronger assumption for "m" requiring formula_23 automatically satisfies the second assumption (if we define the notation formula_15 as above). Proof that the (coprimality) assumption of the Chinese remainder theorem is met. In the section Hand-tuned assumptions, we required that formula_17. What we want to prove is that we can produce a sequence of pairwise coprime numbers in a way that will turn out to correspond to the Implementation of the β function. In detail: formula_24 remembering that formula_12 we defined formula_13. The proof is by contradiction; assume the negation of the original statement: formula_25 First steps. We know what the “coprime” relation means (in a lucky way, its negation can be formulated in a concise form); thus, let us substitute in the appropriate way: formula_26 Using a “more” prenex normal form (but note allowing a constraint-like notation in quantifiers): formula_27 Because of a theorem on divisibility, formula_28 allows us to also say formula_29. Substituting the definitions of formula_30-sequence notation, we get formula_31, thus (as equality axioms postulate identity to be a congruence relation) we get formula_32. Since "p" is a prime element (note that the irreducible element property is used), we get formula_33. Resorting to the first hand-tuned assumption. Now we must resort to our assumption formula_17. The assumption was chosen carefully to be as weak as possible, but strong enough to enable us to use it now. 
The assumed negation of the original statement contains an appropriate existential statement using indices formula_34; this entails formula_35, thus the mentioned assumption can be applied, so formula_36 holds. Using an (object) theorem of the propositional calculus as a lemma. We can prove by several means known in propositional calculus that formula_37 holds. Since formula_36, by the transitivity property of the divisibility relation, formula_38. Thus (as equality axioms postulate identity to be a congruence relation) formula_39 can be proven. Reaching the contradiction. The negation of the original statement contained formula_40 and we have just proved formula_39. Thus, formula_41 should also hold. But after substituting the definition of formula_15, formula_42 Thus, summarizing the above three statements, by transitivity of the equality, formula_43 should also hold. However, in the negation of the original statement "p" is existentially quantified and restricted to primes formula_44. This establishes the contradiction we wanted to reach. End of reductio ad absurdum. By reaching contradiction with its negation, we have just proven the original statement: formula_45 The system of simultaneous congruences. We build a system of simultaneous congruences formula_46 formula_47 formula_48 We can write it in a more concise way: formula_49 Many statements will be made below, all beginning with "formula_50". To achieve a more ergonomic treatment, from now on all statements should be read as being in the scope of an formula_50 quantification. Thus, formula_51 begins here. Let us choose a solution formula_52 for the system of simultaneous congruences. At least one solution must exist, because formula_53 are pairwise coprime as proven in the previous sections, so we can refer to the solution ensured by the Chinese remainder theorem. Thus, from now on we can regard formula_52 as satisfying formula_54, which means (by definition of modular arithmetic) that formula_55 Resorting to the second hand-tuned assumption. Recall the second assumption, “formula_56”, and remember that we are now in the scope of an implicit quantification for "i", so we don't repeat its quantification for each statement. The second assumption formula_57 implies that formula_58. Now by transitivity of equality we get formula_59. QED. Our original goal was to prove that the definition formula_60 is sufficient to achieve what we declared in the specification of formula_3: we want formula_61 to hold. This can be seen now by transitivity of equality, looking at the above three equations. Existence and uniqueness. We have just proven the correctness of the definition of formula_3: its specification requiring formula_62 is met. Although proving this was most important for establishing an encoding scheme for sequences, we still have to fill in some gaps. These are notions related to existence and uniqueness (although by uniqueness, “at most one” is meant here, and the conjunction of the two is deferred as a final result). Uniqueness of encoding, achieved by minimalization. Our ultimate question is: what number should stand for the encoding of the sequence formula_63? The specification declares only an existential quantification, not yet a functional connection. We want a constructive and algorithmic connection: a (total) recursive function that performs the encoding. Totality, because minimalization is restricted to special functions. 
This gap can be filled in a straightforward way: we shall use minimalization, and the totality of the resulting function is ensured by everything we have proven till now (i.e. the correctness of the definition of formula_3 by meeting its specification). In fact, the specification formula_62 plays here the role of a more general notion (a “special function”). The importance of this notion is that it enables us to split off the (sub)class of (total) recursive functions from the (super)class of partial recursive functions. In brief, the specification says that a function "f" satisfying the specification formula_64 is a special function; that is, for each fixed combination of the all-but-last arguments, the function "f" has a root in its last argument: formula_65 The Gödel numbering function g can be chosen to be total recursive. Thus, let us choose the minimal possible number that fits well in the specification of the formula_3 function: formula_66 formula_67. It can be proven (using the notions of the previous section) that "g" is (total) recursive. Access of length. If we use the above scheme for encoding sequences only in contexts where the length of the sequences is fixed, then no problem arises. In other words, we can use them in an analogous way as arrays are used in programming. But sometimes we need dynamically stretching sequences, or we need to deal with sequences whose length cannot be typed in a static way. In other words, we may encode sequences in an analogous way to lists in programming. To illustrate both cases: if we form the Gödel numbering of a Turing machine, then each row in the matrix of the “program” can be represented with tuples, sequences of fixed length (thus, without storing the length), because the number of columns is fixed. But if we want to reason about configuration-like things (of Turing machines), and specifically if we want to encode the significant part of the tape of a running Turing machine, then we have to represent sequences together with their length. We can mimic dynamically stretching sequences by representing sequence concatenation (or at least, augmenting a sequence with one more element) with a total recursive function. Length can be stored simply as a surplus member: formula_68 formula_69. The corresponding modification of the proof is straightforward, by adding a surplus formula_70 to the system of simultaneous congruences (provided that the surplus member index is chosen to be 0). Also, the assumptions have to be modified accordingly.
[ { "math_id": 0, "text": "\\langle a_0,\\dots a_{n-1} \\rangle" }, { "math_id": 1, "text": "0\\le i \\le n-1" }, { "math_id": 2, "text": "f(a,i) = a_i" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\\beta(a,i) = a_i" }, { "math_id": 5, "text": "\\pi" }, { "math_id": 6, "text": "K\\left(\\pi\\left(x,y\\right)\\right) = x" }, { "math_id": 7, "text": "L\\left(\\pi\\left(x,y\\right)\\right) = y" }, { "math_id": 8, "text": "\\mathrm{rem}(5, 3) = 2" }, { "math_id": 9, "text": "\\mathrm{rem}(7, 2) = 1" }, { "math_id": 10, "text": "\\beta(s,i) = \\mathrm{rem}\\left(K\\left(s\\right),\\left(i+1\\right)\\cdot L\\left(s\\right)+1\\right)" }, { "math_id": 11, "text": "\\beta\\left(\\pi\\left(x_0,m\\right),i\\right) = \\mathrm{rem}\\left(x_0, \\left(i+1\\right)\\cdot m+1\\right)" }, { "math_id": 12, "text": "\\forall i<n" }, { "math_id": 13, "text": "m_i = (i+1)\\cdot m+1" }, { "math_id": 14, "text": "\\beta\\left(\\pi\\left(x_0,m\\right),i\\right) = \\mathrm{rem}\\left(x_0, m_i\\right)" }, { "math_id": 15, "text": "m_i" }, { "math_id": 16, "text": "a_0,\\dots a_{n-1}" }, { "math_id": 17, "text": "\\forall i \\in \\overline n \\setminus \\left\\{0\\right\\} \\left(i \\mid m\\right)" }, { "math_id": 18, "text": "\\forall i < n \\left( a_i < m_i \\right)" }, { "math_id": 19, "text": "1 \\mid m \\land \\dots \\land n-1 \\mid m" }, { "math_id": 20, "text": "\\tilde x" }, { "math_id": 21, "text": "x \\equiv a_i \\pmod{m_i}" }, { "math_id": 22, "text": "a_i = \\mathrm{rem}(\\tilde x, m_i)" }, { "math_id": 23, "text": "\\forall i < n \\; (a_i < m)" }, { "math_id": 24, "text": "\\forall i<n,j < n \\; \\left( i \\neq j \\rightarrow \\mathrm{coprime}\\left(m_i,m_j\\right) \\right)" }, { "math_id": 25, "text": "\\exists i<n,j < n \\; \\left( i \\neq j \\land \\lnot \\mathrm{coprime}\\left(m_i,m_j\\right) \\right)" }, { "math_id": 26, "text": "\\exists i<n,j < n \\; \\left( i \\neq j \\land \\exists p \\in \\mathrm{Prime} \\; \\left( p \\mid m_i \\land p \\mid m_j \\right) \\right)" }, { "math_id": 27, "text": "\\exists i<n,j < n,p \\in \\mathrm{Prime} \\; \\left( i \\neq j \\land p \\mid m_i \\land p \\mid m_j \\right)" }, { "math_id": 28, "text": "p \\mid m_i \\land p \\mid m_j" }, { "math_id": 29, "text": "p \\mid m_i - m_j" }, { "math_id": 30, "text": "m_k" }, { "math_id": 31, "text": "m_i - m_j = (i-j) \\cdot m" }, { "math_id": 32, "text": " p \\mid (i-j) \\cdot m" }, { "math_id": 33, "text": "p \\mid i-j \\lor p \\mid m" }, { "math_id": 34, "text": "i<n\\land j<n \\land i\\neq j" }, { "math_id": 35, "text": "i-j \\in \\overline n \\setminus \\left\\{0\\right\\}" }, { "math_id": 36, "text": "i-j \\mid m" }, { "math_id": 37, "text": "\\left(A \\land \\left( A \\rightarrow B\\right)\\right) \\rightarrow B" }, { "math_id": 38, "text": "p \\mid i-j \\rightarrow p \\mid m" }, { "math_id": 39, "text": "p \\mid m" }, { "math_id": 40, "text": "p \\mid m_i" }, { "math_id": 41, "text": "p \\mid m_i - \\left(i+1\\right)\\cdot m" }, { "math_id": 42, "text": "m_i - \\left(i+1\\right)\\cdot m = 1" }, { "math_id": 43, "text": "p \\mid 1" }, { "math_id": 44, "text": "\\exists p \\in \\mathrm{Prime}" }, { "math_id": 45, "text": "\\forall i<n,j<n \\; \\left( i \\neq j \\rightarrow \\mathrm{coprime}\\left(m_i,m_j\\right)\\right)" }, { "math_id": 46, "text": "x \\equiv a_0 \\pmod{m_0}" }, { "math_id": 47, "text": "\\vdots" }, { "math_id": 48, "text": "x \\equiv a_{n-1} \\pmod{m_{n-1}}" }, { "math_id": 49, "text": "\\forall i < n \\; \\left(x \\equiv a_i \\pmod{m_i}\\right)" }, { "math_id": 50, "text": 
"\\forall i < n \\; \\left(\\dots\\right)" }, { "math_id": 51, "text": "\\forall i < n (" }, { "math_id": 52, "text": "x_0" }, { "math_id": 53, "text": "m_0,\\dots m_{n-1}" }, { "math_id": 54, "text": "x_0 \\equiv a_i \\pmod{m_i}" }, { "math_id": 55, "text": "\\mathrm{rem}\\left(x_0,m_i\\right) = \\mathrm{rem}\\left(a_i,m_i\\right)" }, { "math_id": 56, "text": "\\forall i < n \\; \\left(a_i < m_i \\right)" }, { "math_id": 57, "text": "a_i < m_i" }, { "math_id": 58, "text": "\\mathrm{rem}\\left(a_i,m_i\\right) = a_i" }, { "math_id": 59, "text": "\\mathrm{rem}\\left(x_0,m_i\\right) = a_i" }, { "math_id": 60, "text": "\\beta\\left(\\pi\\left(x_0,m\\right),i\\right) = \\mathrm{rem}\\left(x_0,m_i\\right)" }, { "math_id": 61, "text": "\\beta\\left(\\pi\\left(x_0,m\\right),i\\right) = a_i" }, { "math_id": 62, "text": "\\forall a_0,\\dots, a_{n-1}\\;\\exists s\\;\\forall i < n \\; \\beta(s,i) = a_i" }, { "math_id": 63, "text": "\\left\\langle a_0,\\dots,a_{n-1}\\right\\rangle" }, { "math_id": 64, "text": "f\\left(a_0,\\dots, a_{n-1}, s\\right) = 0 \\leftrightarrow \\forall i < n \\; \\left(\\beta(s,i) = a_i\\right)" }, { "math_id": 65, "text": "\\forall a_0,\\dots,a_{n-1}\\;\\exists s\\; \\left(f\\left(a_0,\\dots,a_{n-1},s\\right)=0\\right)" }, { "math_id": 66, "text": "g : \\mathbb N^n \\to \\mathbb N" }, { "math_id": 67, "text": "\\left\\langle a_0,\\dots,a_{n-1}\\right\\rangle \\longmapsto \\mu a . \\left[ \\forall i < n \\; \\left(\\beta\\left(a,i\\right) = a_i\\right)\\right]" }, { "math_id": 68, "text": "g : \\mathbb N^* \\to \\mathbb N" }, { "math_id": 69, "text": "\\left\\langle a_0,\\dots,a_{n-1}, a_n\\right\\rangle \\longmapsto \\mu a . \\left[ a_0 = n \\land \\forall i < n \\; \\left(\\beta\\left(a,i+1\\right) = a_i\\right)\\right]" }, { "math_id": 70, "text": "x \\equiv n \\pmod{m_0}" } ]
https://en.wikipedia.org/wiki?curid=8371092
8371384
3-j symbol
Coefficients coupled with angular momentum In quantum mechanics, the Wigner 3-j symbols, also called 3-"jm" symbols, are an alternative to Clebsch–Gordan coefficients for the purpose of adding angular momenta. While the two approaches address exactly the same physical problem, the 3-"j" symbols do so more symmetrically. Mathematical relation to Clebsch–Gordan coefficients. The 3-"j" symbols are given in terms of the Clebsch–Gordan coefficients by formula_0 The "j" and "m" components are angular-momentum quantum numbers, i.e., every "j" (and every corresponding "m") is either a nonnegative integer or half-odd-integer. The exponent of the sign factor is always an integer, so it remains the same when transposed to the left, and the inverse relation follows upon making the substitution "m"3 → −"m"3: formula_1 Explicit expression. formula_2 where formula_3 is the Kronecker delta. The summation is performed over those integer values "k" for which the argument of each factorial in the denominator is non-negative, i.e., the summation runs from the lower limit formula_4 to the upper limit formula_5 Factorials of negative numbers are conventionally taken equal to zero, so that the values of the 3"j" symbol at, for example, formula_6 or formula_7 are automatically set to zero. Definitional relation to Clebsch–Gordan coefficients. The CG coefficients are defined so as to express the addition of two angular momenta in terms of a third: formula_8 The 3-"j" symbols, on the other hand, are the coefficients with which three angular momenta must be added so that the resultant is zero: formula_9 Here formula_10 is the zero-angular-momentum state (formula_11). It is apparent that the 3-"j" symbol treats all three angular momenta involved in the addition problem on an equal footing and is therefore more symmetrical than the CG coefficient. Since the state formula_10 is unchanged by rotation, one also says that the contraction of the product of three rotational states with a 3-"j" symbol is invariant under rotations. Selection rules. The Wigner 3-"j" symbol is zero unless all these conditions are satisfied: formula_12 Symmetry properties. A 3-"j" symbol is invariant under an even permutation of its columns: formula_13 An odd permutation of the columns gives a phase factor: formula_14 formula_15 Changing the sign of the formula_16 quantum numbers (time reversal) also gives a phase: formula_17 The 3-"j" symbols also have so-called Regge symmetries, which are not due to permutations or time reversal. These symmetries are: formula_18 formula_19 With the Regge symmetries, the 3-"j" symbol has a total of 72 symmetries. These are best displayed by defining a Regge symbol, which is in one-to-one correspondence with a 3-"j" symbol and has the properties of a semi-magic square: formula_20 whereby the 72 symmetries now correspond to 3! row and 3! column interchanges plus a transposition of the matrix. These facts can be used to devise an effective storage scheme. Orthogonality relations. A system of two angular momenta with magnitudes "j"1 and "j"2 can be described either in terms of the uncoupled basis states (labeled by the quantum numbers "m"1 and "m"2), or the coupled basis states (labeled by "j"3 and "m"3). 
The 3-"j" symbols constitute a unitary transformation between these two bases, and this unitarity implies the orthogonality relations formula_21 formula_22 The "triangular delta" {"j"1 "j"2 "j"3} is equal to 1 when the triad ("j"1, "j"2, "j"3) satisfies the triangle conditions, and is zero otherwise. The triangular delta itself is sometimes confusingly called a "3-"j" symbol" (without the "m") in analogy to 6-"j" and 9-"j" symbols, all of which are irreducible summations of 3-"jm" symbols where no m variables remain. Relation to spherical harmonics; Gaunt coefficients. The 3-"jm" symbols give the integral of the products of three spherical harmonics formula_23 with formula_24, formula_25 and formula_26 integers. These integrals are called Gaunt coefficients. Relation to integrals of spin-weighted spherical harmonics. Similar relations exist for the spin-weighted spherical harmonics if formula_27: formula_28 formula_29 Asymptotic expressions. For formula_30 a non-zero 3-"j" symbol is formula_31 where formula_32, and formula_33 is a Wigner function. Generally a better approximation obeying the Regge symmetry is given by formula_34 where formula_35. Metric tensor. The following quantity acts as a metric tensor in angular-momentum theory and is also known as a "Wigner 1-jm symbol": formula_36 It can be used to perform time reversal on angular momenta. formula_37 Special cases and other properties. From equation (3.7.9) in formula_38 formula_39 where "P" are Legendre polynomials. Relation to Racah "V"-coefficients. Wigner 3-"j" symbols are related to Racah V-coefficients by a simple phase: formula_40 Relation to group theory. This section essentially recasts the definitional relation in the language of group theory. A group representation of a group is a homomorphism of the group into a group of linear transformations over some vector space. The linear transformations can be given by a group of matrices with respect to some basis of the vector space. The group of transformations leaving angular momenta invariant is the three dimensional rotation group SO(3). When "spin" angular momenta are included, the group is its double covering group, SU(2). A reducible representation is one where a change of basis can be applied to bring all the matrices into block diagonal form. A representation is irreducible (irrep) if no such transformation exists. For each value of "j", the 2"j"+1 kets form a basis for an irreducible representation (irrep) of SO(3)/SU(2) over the complex numbers. Given two irreps, the tensor direct product can be reduced to a sum of irreps, giving rise to the Clebcsh-Gordon coefficients, or by reduction of the triple product of three irreps to the trivial irrep 1 giving rise to the 3j symbols. 3j symbols for other groups. The formula_41 symbol has been most intensely studied in the context of the coupling of angular momentum. For this, it is strongly related to the group representation theory of the groups SU(2) and SO(3) as discussed above. However, many other groups are of importance in physics and chemistry, and there has been much work on the formula_41 symbol for these other groups. In this section, some of that work is considered. Simply reducible groups. The original paper by Wigner was not restricted to SO(3)/SU(2) but instead focussed on simply reducible (SR) groups. 
These are groups in which every element is conjugate to its inverse (the group is ambivalent) and the Kronecker product of any two irreps is multiplicity free. For SR groups, every irrep is equivalent to its complex conjugate, and under permutations of the columns the absolute value of the symbol is invariant; the phase of each symbol can be chosen so that it at most changes sign under odd permutations and remains unchanged under even permutations. General compact groups. Compact groups form a wide class of groups with topological structure. They include the finite groups with added discrete topology and many of the Lie groups. General compact groups will neither be ambivalent nor multiplicity free. Derome and Sharp, and Derome, examined the formula_41 symbol for the general case using the relation to the Clebsch–Gordan coefficients of formula_44 where formula_45 is the dimension of the representation space of formula_46 and formula_47 is the complex conjugate representation to formula_48. By examining permutations of columns of the formula_41 symbol, they distinguished three cases. Further research into formula_41 symbols for compact groups has been performed based on these principles. SU(n). The special unitary group SU(n) is the Lie group of n × n unitary matrices with determinant 1. The group SU(3) is important in particle theory. There are many papers dealing with the formula_41 or equivalent symbols. The formula_41 symbol for the group SU(4) has been studied, and there is also work on the general SU(n) groups. Crystallographic point groups. There are many papers dealing with the formula_41 symbols or Clebsch–Gordan coefficients for the finite crystallographic point groups and the double point groups. The book by Butler references these and details the theory, along with tables. Magnetic groups. Magnetic groups include antilinear operators as well as linear operators. They need to be dealt with using Wigner's theory of corepresentations of unitary and antiunitary groups. A significant departure from standard representation theory is that the multiplicity of the irreducible corepresentation formula_47 in the direct product of the irreducible corepresentations formula_57 is generally smaller than the multiplicity of the trivial corepresentation in the triple product formula_58, leading to significant differences between the Clebsch–Gordan coefficients and the formula_41 symbol. The formula_41 symbols have been examined for the grey groups and for the magnetic point groups.
[ { "math_id": 0, "text": "\n \\begin{pmatrix}\n j_1 & j_2 & j_3 \\\\\n m_1 & m_2 & m_3\n \\end{pmatrix}\n \\equiv\n \\frac{(-1)^{j_1 - j_2 - m_3}}{\\sqrt{2 j_3 + 1}}\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | j_3 \\, (-m_3) \\rangle.\n" }, { "math_id": 1, "text": "\n\\langle j_1 \\, m_1 \\, j_2 \\, m_2 | j_3 \\, m_3 \\rangle\n = (-1)^{-j_1 + j_2 - m_3} \\sqrt{2 j_3 + 1}\n \\begin{pmatrix}\n j_1 & j_2 & j_3 \\\\\n m_1 & m_2 & -m_3\n \\end{pmatrix}.\n" }, { "math_id": 2, "text": "\n\\begin{align}\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n& \\equiv \\delta(m_1+m_2+m_3,0) (-1)^{j_1 - j_2 - m_3} {} \\sqrt{\\frac{(j_1+j_2-j_3)!(j_1-j_2+j_3)!(-j_1+j_2+j_3)!}{(j_1+j_2+j_3+1)!}}\\ \\times {} \\\\[6pt]\n&\\times\\sqrt{(j_1-m_1)!(j_1+m_1)!(j_2-m_2)!(j_2+m_2)!(j_3-m_3)!(j_3+m_3)!}\\ \\times {} \\\\[6pt]\n&\\times\\sum_{k=K}^N \\frac{(-1)^k}{k!(j_1+j_2-j_3-k)!(j_1-m_1-k)!(j_2+m_2-k)!(j_3-j_2+m_1+k)!(j_3-j_1-m_2+k)!},\n\\end{align}\n" }, { "math_id": 3, "text": "\\delta(i,j)" }, { "math_id": 4, "text": "K=\\max(0, j_2-j_3-m_1, j_1-j_3+m_2)," }, { "math_id": 5, "text": "N=\\min(j_1+j_2-j_3, j_1-m_1, j_2+m_2)." }, { "math_id": 6, "text": "j_3>j_1+j_2" }, { "math_id": 7, "text": "j_1<m_1" }, { "math_id": 8, "text": "\n|j_3\\, m_3\\rangle\n = \\sum_{m_1=-j_1}^{j_1} \\sum_{m_2=-j_2}^{j_2}\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | j_3 \\, m_3 \\rangle\n |j_1 \\, m_1 \\, j_2 \\, m_2 \\rangle.\n" }, { "math_id": 9, "text": "\n \\sum_{m_1=-j_1}^{j_1} \\sum_{m_2=-j_2}^{j_2} \\sum_{m_3=-j_3}^{j_3}\n |j_1 m_1\\rangle |j_2 m_2\\rangle |j_3 m_3\\rangle\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n = |0 \\, 0\\rangle.\n" }, { "math_id": 10, "text": "|0 \\, 0\\rangle" }, { "math_id": 11, "text": "j = m = 0" }, { "math_id": 12, "text": "\\begin{align}\n& m_i \\in \\{-j_i, -j_i + 1, -j_i + 2, \\ldots, j_i\\} \\quad (i = 1, 2, 3), \\\\\n& m_1 + m_2 + m_3 = 0, \\\\\n& |j_1 - j_2| \\le j_3 \\le j_1 + j_2, \\\\\n& (j_1 + j_2 + j_3) \\text{ is an integer (and, moreover, an even integer if } m_1 = m_2 = m_3 = 0 \\text{)}. 
\\\\\n\\end{align}" }, { "math_id": 13, "text": "\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n j_2 & j_3 & j_1\\\\\n m_2 & m_3 & m_1\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n j_3 & j_1 & j_2\\\\\n m_3 & m_1 & m_2\n\\end{pmatrix}.\n" }, { "math_id": 14, "text": "\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n=\n(-1)^{j_1+j_2+j_3}\n\\begin{pmatrix}\n j_2 & j_1 & j_3\\\\\n m_2 & m_1 & m_3\n\\end{pmatrix}\n" }, { "math_id": 15, "text": "\n=\n(-1)^{j_1+j_2+j_3}\n\\begin{pmatrix}\n j_1 & j_3 & j_2\\\\\n m_1 & m_3 & m_2\n\\end{pmatrix}\n=\n(-1)^{j_1+j_2+j_3}\n\\begin{pmatrix}\n j_3 & j_2 & j_1\\\\\n m_3 & m_2 & m_1\n\\end{pmatrix}.\n" }, { "math_id": 16, "text": "m" }, { "math_id": 17, "text": "\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n -m_1 & -m_2 & -m_3\n\\end{pmatrix}\n=\n(-1)^{j_1+j_2+j_3}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}.\n" }, { "math_id": 18, "text": "\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n j_1 & \\frac{j_2+j_3-m_1}{2} & \\frac{j_2+j_3+m_1}{2}\\\\\n j_3-j_2 & \\frac{j_2-j_3-m_1}{2}-m_3 & \\frac{j_2-j_3+m_1}{2}+m_3\n\\end{pmatrix},\n" }, { "math_id": 19, "text": "\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n=\n(-1)^{j_1+j_2+j_3}\n\\begin{pmatrix}\n \\frac{j_2+j_3+m_1}{2} & \\frac{j_1+j_3+m_2}{2} & \\frac{j_1+j_2+m_3}{2}\\\\\n j_1 - \\frac{j_2+j_3-m_1}{2} & j_2 - \\frac{j_1+j_3-m_2}{2} & j_3-\\frac{j_1+j_2-m_3}{2}\n\\end{pmatrix}.\n" }, { "math_id": 20, "text": "\nR=\n\\begin{array}{|ccc|}\n \\hline\n -j_1+j_2+j_3 & j_1-j_2+j_3 & j_1+j_2-j_3\\\\\n j_1-m_1 & j_2-m_2 & j_3-m_3\\\\\n j_1+m_1 & j_2+m_2 & j_3+m_3\\\\\n \\hline\n\\end{array},\n" }, { "math_id": 21, "text": "\n(2 j_3 + 1)\\sum_{m_1 m_2}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n\\begin{pmatrix}\n j_1 & j_2 & j'_3\\\\\n m_1 & m_2 & m'_3\n\\end{pmatrix}\n= \\delta_{j_3, j'_3} \\delta_{m_3, m'_3} \\begin{Bmatrix} j_1 & j_2 & j_3 \\end{Bmatrix},\n" }, { "math_id": 22, "text": "\n\\sum_{j_3 m_3} (2 j_3 + 1)\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1' & m_2' & m_3\n\\end{pmatrix}\n= \\delta_{m_1, m_1'} \\delta_{m_2, m_2'}.\n" }, { "math_id": 23, "text": "\n\\begin{align}\n& \\int Y_{l_1 m_1}(\\theta, \\varphi) Y_{l_2 m_2}(\\theta, \\varphi) Y_{l_3 m_3}(\\theta, \\varphi)\\,\\sin\\theta\\,\\mathrm{d}\\theta\\,\\mathrm{d}\\varphi \\\\\n&\\quad = \\sqrt{\\frac{(2l_1 + 1)(2l_2 + 1)(2l_3 + 1)}{4\\pi}}\n\\begin{pmatrix}\n l_1 & l_2 & l_3 \\\\\n 0 & 0 & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n l_1 & l_2 & l_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n\\end{align}\n" }, { "math_id": 24, "text": "l_1" }, { "math_id": 25, "text": "l_2" }, { "math_id": 26, "text": "l_3" }, { "math_id": 27, "text": "s_1 + s_2 + s_3 = 0" }, { "math_id": 28, "text": "\n\\begin{align}\n& \\int d\\mathbf{\\hat n} \\,_{s_1}\\!Y_{j_1 m_1}(\\mathbf{\\hat n}) \\,_{s_2}\\!Y_{j_2 m_2}(\\mathbf{\\hat n}) \\,_{s_3}\\!Y_{j_3 m_3}(\\mathbf{\\hat n}) \\\\\n&\\quad = \\sqrt{\\frac{(2j_1 + 1)(2j_2 + 1)(2j_3 + 1)}{4\\pi}}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n -s_1 & -s_2 & -s_3\n\\end{pmatrix}.\n\\end{align}\n" }, { "math_id": 29, "text": "\n\\begin{align}\n& {-}\\sqrt{(l_3 \\mp s_3)(l_3 \\pm s_3 + 1)} \n\\begin{pmatrix}\n l_1 & l_2 & l_3 \\\\\n s_1 & s_2 & s_3 \\pm 1\n\\end{pmatrix}=\n \\\\\n&\\quad 
= \\sqrt{(l_1 \\mp s_1)(l_1 \\pm s_1 + 1)} \n\\begin{pmatrix}\n l_1 & l_2 & l_3 \\\\\n s_1 \\pm 1 & s_2 & s_3\n\\end{pmatrix}\n+ \\sqrt{(l_2 \\mp s_2)(l_2 \\pm s_2 + 1)} \n\\begin{pmatrix}\n l_1 & l_2 & l_3 \\\\\n s_1 & s_2 \\pm 1 & s_3\n\\end{pmatrix}.\n\\end{align}\n" }, { "math_id": 30, "text": "l_1 \\ll l_2, l_3" }, { "math_id": 31, "text": "\n\\begin{pmatrix}\n l_1 & l_2 & l_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n \\approx (-1)^{l_3+m_3} \\frac{d^{l_1}_{m_1, l_3 - l_2}(\\theta)}{\\sqrt{2l_3 + 1}},\n" }, { "math_id": 32, "text": "\\cos(\\theta) = -2m_3 / (2l_3 + 1)" }, { "math_id": 33, "text": "d^l_{mn}" }, { "math_id": 34, "text": "\n\\begin{pmatrix}\n l_1 & l_2 & l_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n \\approx (-1)^{l_3+m_3} \\frac{ d^{l_1}_{m_1, l_3-l_2}(\\theta)}{\\sqrt{l_2+l_3+1}},\n" }, { "math_id": 35, "text": "\\cos(\\theta) = (m_2 - m_3)/(l_2 + l_3 + 1)" }, { "math_id": 36, "text": "\\begin{pmatrix}\n j \\\\\n m \\quad m'\n\\end{pmatrix}\n:= \\sqrt{2 j + 1}\n\\begin{pmatrix}\n j & 0 & j \\\\\n m & 0 & m'\n\\end{pmatrix}\n= (-1)^{j - m'} \\delta_{m, -m'}.\n" }, { "math_id": 37, "text": "\\sum_m (-1)^{j - m}\n\\begin{pmatrix}\n j & j & J \\\\\n m & -m & 0\n\\end{pmatrix} = \\sqrt{2 j + 1} \\, \\delta_{J, 0}.\n" }, { "math_id": 38, "text": "\n\\begin{pmatrix}\n j & j & 0 \\\\\n m & -m & 0\n\\end{pmatrix} = \\frac{1}{\\sqrt{2 j + 1}} (-1)^{j - m}.\n" }, { "math_id": 39, "text": "\n\\frac{1}{2} \\int_{-1}^1 P_{l_1}(x) P_{l_2}(x) P_{l}(x) \\, dx = \n\\begin{pmatrix}\n l & l_1 & l_2 \\\\\n 0 & 0 & 0\n\\end{pmatrix}^2,\n" }, { "math_id": 40, "text": "\nV(j_1 \\, j_2 \\, j_3; m_1 \\, m_2 \\, m_3) = (-1)^{j_1 - j_2 - j_3}\n\\begin{pmatrix}\n j_1 & j_2 & j_3 \\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}.\n" }, { "math_id": 41, "text": "3j" }, { "math_id": 42, "text": "X" }, { "math_id": 43, "text": "X^{-1}" }, { "math_id": 44, "text": "\n \\begin{pmatrix}\n j_1 & j_2 & j_3 \\\\\n m_1 & m_2 & m_3\n \\end{pmatrix}\n \\equiv\n \\frac{1}{[j_3]}\n \\langle j_1 \\, m_1 \\, j_2 \\, m_2 | j_3^* \\, m_3 \\rangle.\n" }, { "math_id": 45, "text": "[j]" }, { "math_id": 46, "text": "j" }, { "math_id": 47, "text": "j_3^*" }, { "math_id": 48, "text": "j_3" }, { "math_id": 49, "text": "j_1, j_2, j_3" }, { "math_id": 50, "text": "S_3" }, { "math_id": 51, "text": "[2]" }, { "math_id": 52, "text": "[1^2]" }, { "math_id": 53, "text": "S_2" }, { "math_id": 54, "text": "[3]" }, { "math_id": 55, "text": "[1^3]" }, { "math_id": 56, "text": "[21]" }, { "math_id": 57, "text": "j_1 \\otimes j_2" }, { "math_id": 58, "text": "j_1 \\otimes j_2 \\otimes j_3" } ]
https://en.wikipedia.org/wiki?curid=8371384
8373800
Walter Zinn
Nuclear physicist (1906–2000) Walter Henry Zinn (December 10, 1906 – February 14, 2000) was a Canadian-born American nuclear physicist who was the first director of the Argonne National Laboratory from 1946 to 1956. He worked at the Manhattan Project's Metallurgical Laboratory during World War II, and supervised the construction of Chicago Pile-1, the world's first nuclear reactor, which went critical on December 2, 1942, at the University of Chicago. At Argonne he designed and built several new reactors, including Experimental Breeder Reactor I, the first nuclear reactor to produce electric power, which went live on December 20, 1951. Early life. Walter Henry Zinn was born in Berlin (now Kitchener), Ontario, on December 10, 1906, the son of John Zinn, who worked in a tire factory, and Maria Anna Stoskopf. He had an older brother, Albert, who also became a factory worker. Zinn entered Queen's University, where he earned a Bachelor of Arts degree in mathematics in 1927 and a Master of Arts degree in 1930. He then entered Columbia University in 1930, where he studied physics, writing his Doctor of Philosophy thesis on "Two-crystal study of the structure and width of K X-ray absorption limits". This was subsequently published in the "Physical Review". To support himself, Zinn taught at Queen's University from 1927 to 1928, and at Columbia from 1931 to 1932. He became an instructor at the City College of New York in 1932. While at Queen's he met Jennie A. (Jean) Smith, a fellow student. They were married in 1933 and had two sons, John Eric and Robert James. In 1938, Zinn became a naturalised United States citizen. Manhattan Project. In 1939, the Pupin Physics Laboratories at Columbia where Zinn worked were the center of intensive research into the properties of uranium and nuclear fission, which had recently been discovered by Lise Meitner, Otto Hahn and Fritz Strassmann. At Columbia, Zinn, Enrico Fermi, Herbert L. Anderson, John R. Dunning and Leo Szilard investigated whether uranium-238 fissioned with slow neutrons, as Fermi believed, or only the uranium-235 isotope, as Niels Bohr contended. Since pure uranium-235 was not available, Fermi and Szilard chose to work with natural uranium. They were particularly interested in whether a nuclear chain reaction could be initiated. This would require more than one neutron to be emitted per fission on average in order to keep the chain reaction going. By March 1939, they established that about two were being emitted per fission on average. The delay between an atom absorbing a neutron and fission occurring would be the key to controlling a chain reaction. At this point Zinn began working for Fermi, constructing experimental uranium lattices. To slow neutrons down requires a neutron moderator. Water was Fermi's first choice, but it tended to absorb neutrons as well as slow them. In July, Szilard suggested using carbon, in the form of graphite. The critical radius of a spherical reactor was calculated to be: formula_0 In order for a self-sustaining nuclear chain reaction to occur, they needed k > 1. For a practical reactor configuration, it needed to be at least 3 or 4 percent more; but in August 1941 Zinn's initial experiments indicated a disappointing value of 0.87. Fermi pinned his hopes of a better result on an improved configuration, and purer uranium and graphite. 
In early 1942, with the United States now embroiled in World War II, Arthur Compton concentrated the Manhattan Project's various teams working on plutonium at the Metallurgical Laboratory at the University of Chicago. Zinn used athletes to build Fermi's increasingly large experimental configurations under the stands of the disused Stagg Field. In July 1942, Fermi measured a k = 1.007 from a uranium oxide lattice. This raised hopes that pure uranium would yield a suitable value of k. By December 1942, Zinn and Anderson had the new configuration ready at Stagg Field. It contained graphite together with uranium metal and uranium oxide. When the experiment was carried out on the afternoon of December 2, 1942, the reactor, known as Chicago Pile-1, reached criticality without incident. Since the reactor had no radiation shield, it was run at a maximum power of only 200 W, enough to power a light bulb, and ran for only three months. It was shut down on February 28, 1943, because the US Army did not want to risk an accident near densely populated downtown Chicago. The Army leased a portion of the Cook County Forest Preserves, known as "Site A" to the Manhattan Project, and as "the Country Club" to the hundred or so scientists, guards and others who worked there. Zinn was placed in charge of Site A, under Fermi. Chicago Pile-1 was disassembled and rebuilt, this time with a radiation shield, at Site A. The reactor, now known as Chicago Pile-2, was operational again on March 20, 1943. Within a few months, Fermi began designing a new reactor, which became known as Chicago Pile-3. This was a very different type of reactor. It was much smaller in both diameter and height. It was powered by 120 uranium metal rods, and moderated by heavy water. Once again Zinn was in charge of construction, which commenced on New Year's Day in 1944. Chicago Pile-3 went critical on May 15, 1944, and commenced operation on June 23 at its full power of 300 kW. When Fermi departed for the Hanford Site, Zinn became the sole authority at Site A. On September 29, 1944, Zinn received an urgent call from Samuel Allison, the director of the Metallurgical Laboratory. The B Reactor at Hanford had shut down shortly after reaching full power, only to come back to life again some hours later. Norman Hilberry suspected a neutron poison was responsible. If so, it had a half-life of around 9.7 hours. Xenon-135 had a half-life close to that, but had not been detected at Argonne or by the X-10 Graphite Reactor in Oak Ridge, Tennessee. Zinn quickly brought Chicago Pile-3 up to full power, and within twelve hours had made a series of measurements that confirmed the Hanford results. Over the following months, some 175 technical personnel were transferred from the Metallurgical Laboratory to Hanford and Los Alamos. Zinn's Argonne Laboratory was reduced to a skeleton staff, but Compton would not countenance its closure. Argonne National Laboratory. On July 11, 1946, the Argonne laboratory officially became the Argonne National Laboratory, with Zinn as its first director. Alvin Weinberg characterized Zinn as "a model of what a director of the then-emerging national laboratories should be: sensitive to the aspirations of both contractor and fund provider, but confident enough to prevail when this was necessary." One of the first problems confronting Zinn was that of accommodation. 
The Federal government had promised to restore Site A to the Cook County Forest Preserves after the war, and despite intervention from the Secretary of War, Robert P. Patterson, the most the Cook County Forest Preserves Commission would agree to was that the Argonne National Laboratory could continue to occupy a portion of the lease until a new site was found. Zinn rejected alternate sites outside the Chicago area, and the Army found a new site for the laboratory's permanent home in DuPage County, Illinois, which became known as Site D. Under Zinn, the Argonne National Laboratory adopted slightly more progressive hiring practices than other contemporary institutions. Three African American women and seven men, six of whom had worked on the Manhattan Project, were employed in research at Argonne at a time when the Los Alamos National Laboratory had no African American scientists. Argonne also appointed women to positions of authority, with Maria Goeppert-Mayer as a section leader in the theoretical physics division, and Hoylande Young as director of the technical information division. The Atomic Energy Commission (AEC) replaced the Manhattan Project on January 1, 1947, and on January 1, 1948, it announced that the Argonne National Laboratory would be "focused chiefly on problems of reactor development." Zinn did not seek the additional responsibility, which he realised would divert the Laboratory away from research, and divert him from other responsibilities, such as designing a fast breeder reactor. He even obtained a written assurance from Carroll L. Wilson, the AEC's general manager, that it would not. He was therefore willing to collaborate with Alvin Weinberg to allow the Oak Ridge National Laboratory to remain involved in reactor design. Nonetheless, reactor research accounted for almost half the laboratory's budget in 1949, and 84 percent of its research was classified. Zinn did not get along well with Captain Hyman G. Rickover, the US Navy's Director of Naval Reactors, but nonetheless Argonne assisted in the development of nuclear marine propulsion, eventually producing two reactors, a land-based prototype Mark I and a propulsion reactor, the Mark II. The STR (Submarine Thermal Reactor) pressurized water reactor designed at Argonne powered the first nuclear-powered submarine, USS "Nautilus", and became the basis of nearly all the reactors installed in warships. The other branch of reactor development at the Argonne National Laboratory, and the one closer to Zinn's heart, was the fast breeder reactor. At the time it was believed that uranium was a scarce resource, so it would be wise to make the best use of it. The breeders were designed to create more fissile material than they consumed. By 1948, he had become convinced that it would be unwise to build large experimental reactors near Chicago, and the AEC acquired land around Arco, Idaho, which became an outpost of Argonne. The Experimental Breeder Reactor I (EBR-I, but known at Argonne as "ZIP" — Zinn's Infernal Pile) was the first reactor to be cooled by liquid metal, and the first to produce electricity. It proved the breeder concept. AEC Chairman Gordon Dean described it as a major milestone in nuclear history. The BORAX Experiments were a series of destructive tests of boiling water reactors conducted by Argonne National Laboratory in Idaho. The BORAX-1 test was conducted under Zinn's supervision in 1954. 
He had the control rods removed to demonstrate that the reactor would shut down without trouble, and it immediately blew up with a loud bang and a tall column of dark smoke, a turn of events that he had not anticipated. He shouted to Harold Lichtenberg to put the control rods back in again, but Lichtenberg pointed out that one was already flying through the air. Zinn later had to testify on the experiment before the Joint Committee on Atomic Energy. Later life. After leaving the Argonne National Laboratory in 1956, Zinn moved to Florida, where he founded his own consulting firm, General Nuclear Engineering, with its headquarters in Dunedin, Florida. The company was involved in the design and construction of pressurized water reactors. It was acquired by Combustion Engineering in 1964, and he became a vice president and head of its nuclear division. He stepped down from this position in 1970, but remained a board member until 1986. He served as a member of the President's Science Advisory Committee from 1960 to 1962, and a member of the General Advisory Committee of the AEC and its successor, the Energy Research and Development Administration, from 1972 to 1975. Over the years Zinn received multiple awards for his work, including a special commendation from the AEC in 1956, the Atoms for Peace Award in 1960, the Enrico Fermi Award in 1969, and the Elliott Cresson Medal from The Franklin Institute in 1970. In 1955 he was elected as the first president of the American Nuclear Society (ANS). Zinn's wife Jean died in 1964. He married Mary Teresa Pratt in 1966, and thereby acquired two stepsons, Warren and Robert Johnson. He died in Mease Countryside Hospital in Safety Harbor, Florida, on February 14, 2000, after suffering a stroke. He was survived by his wife Mary, sons John and Robert, and stepson Warren. Robert had become a professor of astronomy at Yale University. Walter H. Zinn Award. Since 1976, the American Nuclear Society's Operations and Power Division has annually presented the Walter H. Zinn Award to recognize an individual "for a notable and sustained contribution to the nuclear power industry that has not been widely recognized." Notes.
[ { "math_id": 0, "text": "R_{crit} = - \\frac{\\pi M}{\\sqrt{k - 1}}" } ]
https://en.wikipedia.org/wiki?curid=8373800
837770
Entropy of fusion
Increase in entropy when a solid melts In thermodynamics, the entropy of fusion is the increase in entropy when melting a solid substance. This is almost always positive since the degree of disorder increases in the transition from an organized crystalline solid to the disorganized structure of a liquid; the only known exception is helium. It is denoted as formula_0 and normally expressed in joules per mole-kelvin, J/(mol·K). A natural process such as a phase transition will occur when the associated change in the Gibbs free energy is negative. formula_1 where Δ"H"fus is the enthalpy of fusion. Since this is a thermodynamic equation, the symbol "T" refers to the absolute thermodynamic temperature, measured in kelvins (K). Equilibrium occurs when the temperature is equal to the melting point formula_2 so that formula_3 and the entropy of fusion is the heat of fusion divided by the melting point: formula_4 Helium. Helium-3 has a negative entropy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative entropy of fusion below 0.8 K. This means that, at appropriate constant pressures, these substances freeze with the addition of heat. Notes.
[ { "math_id": 0, "text": "\\Delta S_{\\text{fus}}" }, { "math_id": 1, "text": "\\Delta G_{\\text{fus}} = \\Delta H_{\\text{fus}} - T \\times \\Delta S_{\\text{fus}} < 0," }, { "math_id": 2, "text": "T = T_f" }, { "math_id": 3, "text": "\\Delta G_{\\text{fus}} = \\Delta H_{\\text{fus}} - T_f \\times \\Delta S_{\\text{fus}} = 0," }, { "math_id": 4, "text": "\\Delta S_{\\text{fus}} = \\frac {\\Delta H_{\\text{fus}}} {T_f}" } ]
https://en.wikipedia.org/wiki?curid=837770
8378
Dipole
Electromagnetic phenomenon In physics, a dipole (from Ancient Greek roots meaning 'twice' and 'axis') is an electromagnetic phenomenon which occurs in two ways: an electric dipole is a separation of positive and negative electric charges, while a magnetic dipole is a closed circulation of electric current, such as a loop of wire with a constant current flowing through it. Dipoles, whether electric or magnetic, can be characterized by their dipole moment, a vector quantity. For the simple electric dipole, the electric dipole moment points from the negative charge towards the positive charge, and has a magnitude equal to the strength of each charge times the separation between the charges. (To be precise: for the definition of the dipole moment, one should always consider the "dipole limit", where, for example, the distance of the generating charges should "converge" to 0 while simultaneously the charge strength should "diverge" to infinity in such a way that the product remains a positive constant.) For the magnetic (dipole) current loop, the magnetic dipole moment points through the loop (according to the right hand grip rule), with a magnitude equal to the current in the loop times the area of the loop. Similar to magnetic current loops, the electron particle and some other fundamental particles have magnetic dipole moments, as an electron generates a magnetic field identical to that generated by a very small current loop. However, an electron's magnetic dipole moment is not due to a current loop, but to an intrinsic property of the electron. The electron may also have an "electric" dipole moment, though this has yet to be observed (see electron electric dipole moment). A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles, see Classification below) and may be labeled "north" and "south". In terms of the Earth's magnetic field, they are respectively "north-seeking" and "south-seeking" poles: if the magnet were freely suspended in the Earth's magnetic field, the north-seeking pole would point towards the north and the south-seeking pole would point towards the south. The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. In a magnetic compass, the north pole of a bar magnet points north. However, that means that Earth's geomagnetic north pole is the "south" pole (south-seeking pole) of its dipole moment and vice versa. The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin since the existence of magnetic monopoles has never been experimentally demonstrated. Classification. A "physical dipole" consists of two equal and opposite point charges: in the literal sense, two poles. Its field at large distances (i.e., distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A "point (electric) dipole" is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field. Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic "point dipole" has a magnetic field of exactly the same form as the electric field of an electric point dipole. 
A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop. Any configuration of charges or currents has a 'dipole moment', which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion when the total charge ("monopole moment") is 0—as it "always" is for the magnetic case, since there are no magnetic monopoles. The dipole term is the dominant one at large distances: Its field falls off in proportion to 1/"r"3, as compared to 1/"r"4 for the next (quadrupole) term and higher powers of 1/"r" for higher terms, or 1/"r"2 for the monopole term. Molecular dipoles. Many molecules have such dipole moments due to non-uniform distributions of positive and negative charges on the various atoms. Such is the case with polar compounds like hydrogen fluoride (HF), where electron density is shared unequally between atoms. Therefore, a molecule's dipole is an electric dipole with an inherent electric field that should not be confused with a magnetic dipole, which generates a magnetic field. The physical chemist Peter J. W. Debye was the first scientist to study molecular dipoles extensively, and, as a consequence, dipole moments are measured in the non-SI unit named "debye" in his honor. For molecules there are three types of dipoles: permanent dipoles, instantaneous dipoles, and induced dipoles. More generally, an induced dipole of "any" polarizable charge distribution "ρ" (remember that a molecule has a charge distribution) is caused by an electric field external to "ρ". This field may, for instance, originate from an ion or polar molecule in the vicinity of "ρ" or may be macroscopic (e.g., a molecule between the plates of a charged capacitor). The size of the induced dipole moment is equal to the product of the strength of the external field and the dipole polarizability of "ρ". Dipole moment values can be obtained from measurement of the dielectric constant; typical gas phase values are given in the unit debye. Potassium bromide (KBr) has one of the highest dipole moments because it is an ionic compound that exists as a molecule in the gas phase. The overall dipole moment of a molecule may be approximated as a vector sum of bond dipole moments. As a vector sum it depends on the relative orientation of the bonds, so that from the dipole moment information can be deduced about the molecular geometry. For example, the zero dipole of CO2 implies that the two C=O bond dipole moments cancel so that the molecule must be linear. For H2O the O−H bond moments do not cancel because the molecule is bent. For ozone (O3) which is also a bent molecule, the bond dipole moments are not zero even though the O−O bonds are between similar atoms. This agrees with the Lewis structures for the resonance forms of ozone which show a positive charge on the central oxygen atom. An example in organic chemistry of the role of geometry in determining dipole moment is the "cis" and "trans" isomers of 1,2-dichloroethene. In the "cis" isomer the two polar C−Cl bonds are on the same side of the C=C double bond and the molecular dipole moment is 1.90 D. In the "trans" isomer, the dipole moment is zero because the two C−Cl bonds are on opposite sides of the C=C and cancel (and the two bond moments for the much less polar C−H bonds also cancel).
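To make the vector-sum picture above concrete, the short Python sketch below adds two O−H bond dipoles for a bent water molecule. The 104.5° bond angle is the standard value; the per-bond moment of 1.5 D is an assumed, illustrative figure chosen so that the resultant lands near the observed molecular moment of about 1.85 D.

import math

# Assumed illustrative numbers (not from the article):
bond_moment = 1.5                    # magnitude of each O-H bond dipole, in debye
bond_angle = math.radians(104.5)     # H-O-H angle of water

# Place the two bond dipoles symmetrically about the molecular axis.
half = bond_angle / 2
d1 = (bond_moment * math.cos(half), bond_moment * math.sin(half))
d2 = (bond_moment * math.cos(half), -bond_moment * math.sin(half))

# Vector sum: the perpendicular components cancel, the axial components add.
total = (d1[0] + d2[0], d1[1] + d2[1])
magnitude = math.hypot(*total)
print(f"Resultant molecular dipole ~ {magnitude:.2f} D")   # ~1.84 D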
Another example of the role of molecular geometry is boron trifluoride, which has three polar bonds with a difference in electronegativity greater than the traditionally cited threshold of 1.7 for ionic bonding. However, due to the equilateral triangular distribution of the fluoride ions centered on and in the same plane as the boron cation, the symmetry of the molecule results in its dipole moment being zero. Quantum-mechanical dipole operator. Consider a collection of "N" particles with charges "qi" and position vectors r"i". For instance, this collection may be a molecule consisting of electrons, all with charge −"e", and nuclei with charge "eZi", where "Zi" is the atomic number of the "i" th nucleus. The dipole observable (physical quantity) has the quantum mechanical dipole operator: formula_0 Notice that this definition is valid only for neutral atoms or molecules, i.e. total charge equal to zero. In the ionized case, we have formula_1 where formula_2 is the center of mass of the molecule/group of particles. Atomic dipoles. A non-degenerate ("S"-state) atom can have only a zero permanent dipole. This fact follows quantum mechanically from the inversion symmetry of atoms. All 3 components of the dipole operator are antisymmetric under inversion with respect to the nucleus, formula_3 where formula_4 is the dipole operator and formula_5 is the inversion operator. The permanent dipole moment of an atom in a non-degenerate state (see degenerate energy level) is given as the expectation (average) value of the dipole operator, formula_6 where formula_7 is an "S"-state, non-degenerate, wavefunction, which is symmetric or antisymmetric under inversion: formula_8. Since the product of the wavefunction (in the ket) and its complex conjugate (in the bra) is always symmetric under inversion and its inverse, formula_9 it follows that the expectation value changes sign under inversion. We used here the fact that formula_10, being a symmetry operator, is unitary: formula_11 and by definition the Hermitian adjoint formula_12 may be moved from bra to ket and then becomes formula_13. Since the only quantity that is equal to minus itself is the zero, the expectation value vanishes, formula_14 In the case of open-shell atoms with degenerate energy levels, one could define a dipole moment by the aid of the first-order Stark effect. This gives a non-vanishing dipole (by definition proportional to a non-vanishing first-order Stark shift) only if some of the wavefunctions belonging to the degenerate energies have opposite parity; i.e., have different behavior under inversion. This is a rare occurrence, but happens for the excited H-atom, where 2s and 2p states are "accidentally" degenerate (see article Laplace–Runge–Lenz vector for the origin of this degeneracy) and have opposite parity (2s is even and 2p is odd). Field of a static magnetic dipole. Magnitude. The far-field strength, "B", of a dipole magnetic field is given by formula_15 where "B" is the strength of the field, measured in teslas; "r" is the distance from the center, measured in metres; "λ" is the magnetic latitude (equal to 90° − "θ") where "θ" is the magnetic colatitude, measured in radians or degrees from the dipole axis; "m" is the dipole moment, measured in ampere-square metres or joules per tesla; "μ"0 is the permeability of free space, measured in henries per metre. Conversion to cylindrical coordinates is achieved using "r"2 = "z"2 + "ρ"2 and formula_16 where "ρ" is the perpendicular distance from the "z"-axis. Then, formula_17 Vector form. 
The field itself is a vector quantity: formula_18 where B is the field; r is the vector from the position of the dipole to the position where the field is being measured; "r" is the absolute value of r: the distance from the dipole; r̂ = r/"r" is the unit vector parallel to r; m is the (vector) dipole moment; "μ"0 is the permeability of free space. This is "exactly" the field of a point dipole, "exactly" the dipole term in the multipole expansion of an arbitrary field, and "approximately" the field of any dipole-like configuration at large distances. Magnetic vector potential. The vector potential A of a magnetic dipole is formula_19 with the same definitions as above. Field from an electric dipole. The electrostatic potential at position r due to an electric dipole at the origin is given by: formula_20 where p is the (vector) dipole moment, and "є"0 is the permittivity of free space. This term appears as the second term in the multipole expansion of an arbitrary electrostatic potential Φ(r). If the source of Φ(r) is a dipole, as it is assumed here, this term is the only non-vanishing term in the multipole expansion of Φ(r). The electric field from a dipole can be found from the gradient of this potential: formula_21 This is of the same form as the expression for the magnetic field of a point magnetic dipole, ignoring the delta function. In a real electric dipole, however, the charges are physically separate and the electric field diverges or converges at the point charges. This is different from the magnetic field of a real magnetic dipole, which is continuous everywhere. The delta function represents the strong field pointing in the opposite direction between the point charges, which is often omitted since one is rarely interested in the field at the dipole's position. For further discussion of the internal field of dipoles, see the references. Torque on a dipole. Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge. When placed in a homogeneous electric or magnetic field, equal but opposite forces arise on each side of the dipole creating a torque τ: formula_22 for an electric dipole moment p (in coulomb-meters), or formula_23 for a magnetic dipole moment m (in ampere-square meters). The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of formula_24. The energy of a magnetic dipole is similarly formula_25. Dipole radiation. In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time. It is an extension, or a more physical next-step, to spherical wave radiation. In particular, consider a harmonically oscillating electric dipole, with angular frequency "ω" and a dipole moment "p"0 along the ẑ direction of the form formula_26 In vacuum, the exact field produced by this oscillating dipole can be derived using the retarded potential formulation as: formula_27 For "rω"/"c" ≫ 1, the far-field takes the simpler form of a radiating "spherical" wave, but with angular dependence embedded in the cross-product: formula_28 The time-averaged Poynting vector formula_29 is not distributed isotropically, but concentrated around the directions lying perpendicular to the dipole moment, as a result of the non-spherical electric and magnetic waves. 
In fact, the spherical harmonic function (sin "θ") responsible for such toroidal angular distribution is precisely the "l" = 1 "p" wave. The total time-average power radiated by the field can then be derived from the Poynting vector as formula_30 Notice that the dependence of the power on the fourth power of the frequency of the radiation is in accordance with Rayleigh scattering, and underlies why the sky appears mainly blue. A circularly polarized dipole is described as a superposition of two linear dipoles. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
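The closed-form expressions above are easy to evaluate numerically. The Python sketch below is a minimal illustration with made-up input values: it evaluates the magnetic field of a point dipole (formula_18, with the delta-function term omitted), the torque on a magnetic moment in a uniform field (formula_23), and the time-averaged power radiated by an oscillating electric dipole (formula_30).

import numpy as np

MU0 = 4e-7 * np.pi      # permeability of free space, H/m
C = 299_792_458.0       # speed of light, m/s

def b_dipole(m, r):
    """Field of a magnetic point dipole at displacement r (delta term omitted)."""
    r = np.asarray(r, float)
    rnorm = np.linalg.norm(r)
    rhat = r / rnorm
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rnorm**3

def radiated_power(p0, omega):
    """Time-averaged power of a harmonically oscillating electric dipole."""
    return MU0 * omega**4 * p0**2 / (12 * np.pi * C)

# Example values (assumed, purely illustrative):
m = np.array([0.0, 0.0, 1.0])          # magnetic dipole moment, A m^2
print(b_dipole(m, [0.0, 0.0, 0.1]))    # on-axis field 10 cm away, ~2e-4 T
print(np.cross(m, [0.01, 0.0, 0.0]))   # torque m x B in a uniform 0.01 T field
print(radiated_power(p0=1e-30, omega=2 * np.pi * 1e9))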
[ { "math_id": 0, "text": "\\mathfrak{p} = \\sum_{i=1}^N \\, q_i \\, \\mathbf{r}_i \\, ." }, { "math_id": 1, "text": "\\mathfrak{p} = \\sum_{i=1}^N \\, q_i \\, (\\mathbf{r}_i - \\mathbf{r}_c) ," }, { "math_id": 2, "text": " \\mathbf{r}_c" }, { "math_id": 3, "text": " \\mathfrak{I} \\;\\mathfrak{p}\\; \\mathfrak{I}^{-1} = -\\mathfrak{p}, " }, { "math_id": 4, "text": "\\mathfrak{p}" }, { "math_id": 5, "text": "\\mathfrak{I}" }, { "math_id": 6, "text": "\\left\\langle \\mathfrak{p} \\right\\rangle = \\left\\langle\\, S\\, | \\mathfrak{p} |\\, S \\,\\right\\rangle," }, { "math_id": 7, "text": " |\\, S\\, \\rangle " }, { "math_id": 8, "text": " \\mathfrak{I}\\, |\\, S\\, \\rangle = \\pm|\\, S\\, \\rangle" }, { "math_id": 9, "text": "\n \\left\\langle \\mathfrak{p} \\right\\rangle =\n \\left\\langle\\, \\mathfrak{I}^{-1}\\, S\\, | \\mathfrak{p} |\\, \\mathfrak{I}^{-1}\\, S\\, \\right\\rangle =\n \\left\\langle\\, S\\, | \\mathfrak{I}\\, \\mathfrak{p}\\, \\mathfrak{I}^{-1} |\\, S\\, \\right\\rangle =\n -\\left\\langle \\mathfrak{p} \\right\\rangle\n" }, { "math_id": 10, "text": " \\mathfrak{I}" }, { "math_id": 11, "text": " \\mathfrak{I}^{-1} = \\mathfrak{I}^{*}\\," }, { "math_id": 12, "text": " \\mathfrak{I}^*\\," }, { "math_id": 13, "text": " \\mathfrak{I}^{**} = \\mathfrak{I}\\," }, { "math_id": 14, "text": "\\left\\langle \\mathfrak{p} \\right\\rangle = 0." }, { "math_id": 15, "text": "B(m, r, \\lambda) = \\frac{\\mu_0}{4\\pi} \\frac{m}{r^3} \\sqrt{1 + 3\\sin^2(\\lambda)} \\, ," }, { "math_id": 16, "text": "\\lambda = \\arcsin\\left(\\frac{z}{\\sqrt{z^2 + \\rho^2}}\\right)" }, { "math_id": 17, "text": "B(\\rho, z) = \\frac{\\mu_0 m}{4 \\pi \\left(z^2 + \\rho^2\\right)^\\frac32} \\sqrt{1 + \\frac{3 z^2}{z^2 + \\rho^2}}" }, { "math_id": 18, "text": "\\mathbf{B}(\\mathbf{m}, \\mathbf{r}) =\n \\frac{\\mu_0}{4\\pi} \\ \\frac{3(\\mathbf{m} \\cdot \\hat{\\mathbf{r}}) \\hat{\\mathbf{r}} - \\mathbf{m}}{r^3}\n " }, { "math_id": 19, "text": "\\mathbf{A}(\\mathbf{r}) = \\frac{\\mu_0}{4\\pi} \\frac{\\mathbf{m} \\times \\hat{\\mathbf{r}}}{r^2}" }, { "math_id": 20, "text": " \\Phi(\\mathbf{r}) = \\frac{1}{4\\pi\\epsilon_0}\\,\\frac{\\mathbf{p}\\cdot\\hat{\\mathbf{r}}}{r^2}" }, { "math_id": 21, "text": " \\mathbf{E} = - \\nabla \\Phi =\\frac {1} {4\\pi\\epsilon_0} \\ \\frac{3(\\mathbf{p}\\cdot\\hat{\\mathbf{r}})\\hat{\\mathbf{r}}-\\mathbf{p}}{r^3} - \\delta^3(\\mathbf{r})\\frac{\\mathbf{p}}{3\\epsilon_0}." }, { "math_id": 22, "text": " \\boldsymbol{\\tau} = \\mathbf{p} \\times \\mathbf{E}" }, { "math_id": 23, "text": " \\boldsymbol{\\tau} = \\mathbf{m} \\times \\mathbf{B}" }, { "math_id": 24, "text": " U = -\\mathbf{p} \\cdot \\mathbf{E}" }, { "math_id": 25, "text": " U = -\\mathbf{m} \\cdot \\mathbf{B}" }, { "math_id": 26, "text": "\\mathbf{p}(\\mathbf{r}, t) = \\mathbf{p}(\\mathbf{r})e^{-i\\omega t} = p_0\\hat{\\mathbf{z}}e^{-i\\omega t} ." 
}, { "math_id": 27, "text": "\\begin{align}\n \\mathbf{E} &= \\frac{1}{4\\pi\\varepsilon_0} \\left\\{\n \\frac{\\omega^2}{c^2 r} \\left( \\hat{\\mathbf{r}} \\times \\mathbf{p} \\right) \\times \\hat{\\mathbf{r}} +\n \\left( \\frac{1}{r^3} - \\frac{i\\omega}{cr^2} \\right)\n \\left( 3\\hat{\\mathbf{r}} \\left[\\hat{\\mathbf{r}} \\cdot \\mathbf{p}\\right] - \\mathbf{p} \\right)\n \\right\\} e^\\frac{i\\omega r}{c} e^{-i\\omega t} \\\\\n \\mathbf{B} &= \\frac{\\omega^2}{4\\pi\\varepsilon_0 c^3} (\\hat{\\mathbf{r}} \\times \\mathbf{p}) \\left( 1 - \\frac{c}{i\\omega r} \\right) \\frac{e^{i\\omega r/c}}{r} e^{-i\\omega t}.\n\\end{align}" }, { "math_id": 28, "text": "\\begin{align}\n \\mathbf{B}\n &= \\frac{\\omega^2}{4\\pi\\varepsilon_0 c^3} (\\hat{\\mathbf{r}} \\times \\mathbf{p}) \\frac{e^{i\\omega (r/c - t)}}{r}\n = \\frac{\\omega^2 \\mu_0 p_0 }{4\\pi c} (\\hat{\\mathbf{r}} \\times \\hat{\\mathbf{z}}) \\frac{e^{i\\omega (r/c - t)}}{r}\n = -\\frac{\\omega^2 \\mu_0 p_0 }{4\\pi c} \\sin(\\theta) \\frac{e^{i\\omega (r/c - t)}}{r} \\mathbf{\\hat{\\phi}} \\\\\n \\mathbf{E}\n &= c \\mathbf{B} \\times \\hat{\\mathbf{r}}\n = -\\frac{\\omega^2 \\mu_0 p_0}{4\\pi} \\sin(\\theta) \\left(\\hat{\\phi} \\times \\mathbf{\\hat{r}}\\right) \\frac{e^{i\\omega (r/c - t)}}{r}\n = -\\frac{\\omega^2 \\mu_0 p_0}{4\\pi} \\sin(\\theta) \\frac{e^{i\\omega (r/c - t)}}{r} \\hat{\\theta}.\n\\end{align}" }, { "math_id": 29, "text": "\\langle \\mathbf{S} \\rangle = \\left(\\frac{\\mu_0 p_0^2\\omega^4}{32\\pi^2 c}\\right) \\frac{\\sin^2(\\theta)}{r^2} \\mathbf{\\hat{r}}" }, { "math_id": 30, "text": "P = \\frac{\\mu_0 \\omega^4 p_0^2}{12\\pi c}." } ]
https://en.wikipedia.org/wiki?curid=8378
8378107
Simon model
In applied probability theory, the Simon model is a class of stochastic models that results in a power-law distribution function. It was proposed by Herbert A. Simon to account for the wide range of empirical distributions following a power-law. It models the dynamics of a system of elements with associated counters (e.g., words and their frequencies in texts, or nodes in a network and their connectivity formula_0). In this model the dynamics of the system is based on constant growth via addition of new elements (new instances of words) as well as incrementing the counters (new occurrences of a word) at a rate proportional to their current values. Description. To model this type of network growth as described above, Bornholdt and Ebel considered a network with formula_1 nodes, each node formula_13 having connectivity formula_2, formula_3. These nodes form classes formula_4 of formula_5 nodes with identical connectivity formula_0. Repeat the following steps: (i) With probability formula_6 add a new node and attach a link to it from an arbitrarily chosen node. (ii) With probability formula_7 add one link from an arbitrary node to a node formula_8 of class formula_4 chosen with a probability proportional to formula_9. For this stochastic process, Simon found a stationary solution exhibiting power-law scaling, formula_10, with exponent formula_11 Properties. (i) The Barabási-Albert (BA) model can be mapped to the subclass formula_12 of Simon's model, when using the simpler probability for a node being connected to another node formula_13 with connectivity formula_2 formula_14 (the same as the preferential attachment in the BA model). In other words, the Simon model describes a general class of stochastic processes that can result in a scale-free network, appropriate to capture Pareto and Zipf's laws. (ii) The only free parameter of the model formula_6 reflects the relative growth of number of nodes versus the number of links. In general formula_6 has small values; therefore, the scaling exponents can be predicted to be formula_15. For instance, Bornholdt and Ebel studied the linking dynamics of the World Wide Web, and predicted the scaling exponent as formula_16, which was consistent with observation. (iii) The interest in the scale-free model comes from its ability to describe the topology of complex networks. The Simon model does not have an underlying network structure, as it was designed to describe events whose frequency follows a power-law. Thus network measures going beyond the degree distribution such as the average path length, spectral properties, and clustering coefficient, cannot be obtained from this mapping. The Simon model is related to generalized scale-free models with growth and preferential attachment properties; for more detail, see the literature on such models.
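The growth rules (i)–(ii) above are straightforward to simulate. The Python sketch below is a rough illustration, not a reference implementation: it tracks only the connectivity counters, adds a new node with probability α, otherwise increments an existing counter chosen proportionally to its current value, and compares the result against the predicted exponent γ = 1 + 1/(1 − α).

import random
from collections import Counter

def simulate_simon(alpha, steps, seed=0):
    """Toy Simon-model run: returns the list of node connectivities."""
    rng = random.Random(seed)
    degrees = [1]        # start from a single node with one link
    targets = [0]        # multiset of node indices, one entry per unit of connectivity
    for _ in range(steps):
        if rng.random() < alpha:          # step (i): new node with one incoming link
            degrees.append(1)
            targets.append(len(degrees) - 1)
        else:                             # step (ii): pick a node proportionally to k
            i = rng.choice(targets)
            degrees[i] += 1
            targets.append(i)
    return degrees

degrees = simulate_simon(alpha=0.1, steps=200_000)
predicted_gamma = 1 + 1 / (1 - 0.1)       # exponent from the text, ~2.11
print(predicted_gamma, sorted(Counter(degrees).items())[:5])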
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "k_i" }, { "math_id": 3, "text": "i = 1, \\ldots, n" }, { "math_id": 4, "text": "[k]" }, { "math_id": 5, "text": "f(k)" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "1-\\alpha" }, { "math_id": 8, "text": "j" }, { "math_id": 9, "text": "k f(k)" }, { "math_id": 10, "text": "P(k) \\propto k^{- \\gamma}" }, { "math_id": 11, "text": "\\gamma = 1 + \\frac{1}{1- \\alpha}." }, { "math_id": 12, "text": "\\alpha= 1/2" }, { "math_id": 13, "text": "i" }, { "math_id": 14, "text": "P(\\mathrm{new\\ link\\ to\\ } i) \\propto k_i " }, { "math_id": 15, "text": "\\gamma\\approx 2" }, { "math_id": 16, "text": "\\gamma \\approx 2.1" } ]
https://en.wikipedia.org/wiki?curid=8378107
8378440
Copying mechanism
In the study of scale-free networks, a copying mechanism is a process by which such a network can form and grow, by means of repeated steps in which nodes are duplicated with mutations from existing nodes. Several variations have been studied. In the general copying model, a growing network starts as a small initial graph and, at each time step, a new vertex is added with a given number "k" of new outgoing edges. As a result of a stochastic selection, the neighbors of the new vertex are either chosen randomly among the existing vertices, or one existing vertex is randomly selected and "k" of its neighbors are "copied" as heads of the new edges. Motivation. Copying mechanisms for modeling growth of the World Wide Web are motivated by the following intuition: Those are the growth and preferential attachment properties of the networks. Description. For the simple case, nodes are never deleted. At each step we create a new node with a single edge emanating from it. Let u be a page chosen uniformly at random from the pages in existence before this step. (I) With probability formula_0, the only parameter of the model, the new edge points to u. (II) With probability formula_1, the new edge points to the destination of u's (sole) out-link; the new node attains its edge by copying. The second process increases the probability of high-degree nodes' receiving new incoming edges. In fact, since u is selected randomly, the probability that a webpage with degree formula_2 will receive a new hyperlink is proportional with formula_3, indicating that the copying mechanism effectively amounts to a linear preferential attachment. Kumar et al. prove that the expectation of the incoming degree distribution is formula_4, thus formula_5 follows a power-law with an exponent which varies between 2 (for formula_6) and formula_7 (for formula_8). Above is the linear growth copying model. Since the web is currently growing exponentially, there is the exponential growth copying model. At each step a new epoch of vertices arrives whose size is a constant fraction of the current graph. Each of these vertices may link only to vertices from previous epochs. The evolving models above are by no means complete. They can be extended in several ways. First of all, the tails in the models are either static, chosen uniformly from the new vertices, or chosen from the existing vertices proportional to their out-degrees. This process could be made more sophisticated to account for the observed deviations of the out-degree distribution from the power-law distribution. Similarly, the models can be extended to include death processes, which cause vertices and edges to disappear as time evolves. A number of other extensions are possible, but we seek to determine the properties of this simple model, in order to understand which extensions are necessary to capture the complexity of the web. Undirected models. Protein interaction networks. Vazquez proposed a growing graph based on duplication modeling protein interactions. At every time step a prototype is chosen randomly. With probability q edges of the prototype are copied. With probability p an edge to the prototype is created. Proteome networks. Sole proposed a growing graph initialized with a 5-ring substrate. At every time step a new node is added and a prototype is chosen at random. The prototype's edges are copied with a probability δ. 
Furthermore, random nodes are connected to the newly introduced node with probability α = β/N, where δ and β are given parameters in (0,1) and N is the total number of nodes at the considered time step (see Fig. 1). Directed models. Biological networks. Middendorf-Ziv (MZ) proposed a growing directed graph modeling biological network dynamics. A prototype is chosen at random and duplicated. The prototype or progenitor node has edges pruned with probability β and edges added with probability α ≪ β. It is based loosely on the undirected protein network model of Sole et al. WWW networks and citation networks. Vazquez proposed a growth model based on a recursive 'copying' mechanism, continuing to 2nd nearest neighbors, 3rd nearest neighbors etc. The authors call it a 'random walk' mechanism. Growing network with copying (GNC). Krapivsky and Redner proposed a new growing network model, which grows by adding nodes one at a time. A newly introduced node randomly selects a target node and links to it, as well as to all ancestor nodes of the target node (Fig. 2). If the target node is the initial root node, no additional links are generated by the copying mechanism. If the newly introduced node were to always choose the root node as the target, a star graph would be generated. On the other hand, if the target node is always the most recent one in the network, all previous nodes are ancestors of the target and the copying mechanism would give a complete graph. Correspondingly, the total number of links formula_9 in a network of N nodes can range from N−1 (star graph) to N(N−1)/2 (complete graph). Notice also that the number of outgoing links from each new node (the out-degree) can range between 1 and the current number of nodes. Notes. <templatestyles src="Reflist/styles.css" />
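The linear growth copying model described earlier (steps (I)–(II)) can be simulated in a few lines. The sketch below is a rough illustration under the text's simplifying assumption that every node has exactly one out-link: with probability p the new edge points to a uniformly chosen page u, otherwise it copies u's out-link destination, and the in-degree distribution should develop a power-law tail with exponent (2 − p)/(1 − p).

import random
from collections import Counter

def simulate_copying(p, n_nodes, seed=0):
    """Toy run of the linear growth copying model; returns each node's out-link target."""
    rng = random.Random(seed)
    out_link = [0]                          # node 0 points to itself just to seed the process
    for new in range(1, n_nodes):
        u = rng.randrange(new)              # uniformly chosen existing page
        if rng.random() < p:
            out_link.append(u)              # (I) point directly to u
        else:
            out_link.append(out_link[u])    # (II) copy u's out-link destination
    return out_link

p = 0.3
targets = simulate_copying(p, 100_000)
print("predicted exponent:", (2 - p) / (1 - p))         # ~2.43
print("most-linked pages:", Counter(targets).most_common(3))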
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "1-p" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "(1-p)k" }, { "math_id": 4, "text": "P(k_{in})=k^{-(2-p)/(1-p)}" }, { "math_id": 5, "text": "P(k)" }, { "math_id": 6, "text": "p\\rightarrow 0" }, { "math_id": 7, "text": "\\infty" }, { "math_id": 8, "text": "p\\rightarrow 1" }, { "math_id": 9, "text": "L_N" } ]
https://en.wikipedia.org/wiki?curid=8378440
837875
Malliavin calculus
Mathematical techniques used in probability theory and related fields In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. P. Malliavin first initiated the calculus on infinite-dimensional space; significant later contributors such as S. Kusuoka, D. Stroock, J.-M. Bismut, Shinzo Watanabe, and I. Shigekawa completed the foundations. Malliavin calculus is named after Paul Malliavin whose ideas led to a proof that Hörmander's condition implies the existence and smoothness of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. The calculus has been applied to stochastic partial differential equations as well. The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications in, for example, stochastic filtering. Overview and history. Malliavin introduced Malliavin calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. His calculus enabled Malliavin to prove regularity bounds for the solution's density. The calculus has been applied to stochastic partial differential equations. Invariance principle. The usual invariance principle for Lebesgue integration over the whole real line is that, for any real number ε and integrable function "f", the following holds formula_0 and hence formula_1 This can be used to derive the integration by parts formula since, setting "f" = "gh", it implies formula_2 A similar idea can be applied in stochastic analysis for the differentiation along a Cameron-Martin-Girsanov direction. Indeed, let formula_3 be a square-integrable predictable process and set formula_4 If formula_5 is a Wiener process, the Girsanov theorem then yields the following analogue of the invariance principle: formula_6 Differentiating with respect to ε on both sides and evaluating at ε=0, one obtains the following integration by parts formula: formula_7 Here, the left-hand side is the Malliavin derivative of the random variable formula_8 in the direction formula_9 and the integral appearing on the right hand side should be interpreted as an Itô integral. Gaussian probability space. The toy model of Malliavin calculus is an irreducible Gaussian probability space formula_10. This is a (complete) probability space formula_11 together with a closed subspace formula_12 such that all formula_13 are mean zero Gaussian variables and formula_14. If one chooses a basis for formula_15 then one calls formula_5 a "numerical model". On the other hand, for any separable Hilbert space formula_16 there exists a canonical irreducible Gaussian probability space formula_17 named the "Segal model" having formula_16 as its Gaussian subspace. Properties of a Gaussian probability space that do not depend on the particular choice of basis are called "intrinsic", and those that do depend on the choice are called "extrinsic". We denote the countably infinite product of real spaces as formula_18. 
Let formula_19 be the canonical Gaussian measure, by transferring the Cameron-Martin theorem from formula_20 into a numerical model formula_5, the additive group of formula_15 will define a quasi-automorphism group on formula_21. A construction can be done as follows: choose an orthonormal basis in formula_15, let formula_22 denote the translation on formula_23 by formula_24, denote the map into the Cameron-Martin space by formula_25, denote formula_26 and formula_27 we get a canonical representation of the additive group formula_28 acting on the endomorphisms by defining formula_29 One can show that the action of formula_30 is extrinsic meaning it does not depend on the choice of basis for formula_15, further formula_31 for formula_32 and for the infinitesimal generator of formula_33 that formula_34 where formula_35 is the identity operator and formula_36 denotes the multiplication operator by the random variable on formula_21 associated to formula_37 (acting on the endomorphisms). Clark–Ocone formula. One of the most useful results from Malliavin calculus is the Clark–Ocone theorem, which allows the process in the martingale representation theorem to be identified explicitly. A simplified version of this theorem is as follows: Consider the standard Wiener measure on the canonical space formula_38, equipped with its canonical filtration. For formula_39 satisfying formula_40 which is Lipschitz and such that "F" has a strong derivative kernel, in the sense that for formula_9 in "C"[0,1] formula_41 then formula_42 where "H" is the previsible projection of "F"'("x", ("t",1]) which may be viewed as the derivative of the function "F" with respect to a suitable parallel shift of the process "X" over the portion ("t",1] of its domain. This may be more concisely expressed by formula_43 Much of the work in the formal development of the Malliavin calculus involves extending this result to the largest possible class of functionals "F" by replacing the derivative kernel used above by the "Malliavin derivative" denoted formula_44 in the above statement of the result. Skorokhod integral. The Skorokhod integral operator which is conventionally denoted δ is defined as the adjoint of the Malliavin derivative in the white noise case when the Hilbert space is an formula_45 space, thus for u in the domain of the operator which is a subset of formula_46, for F in the domain of the Malliavin derivative, we require formula_47 where the inner product is that on formula_48 viz formula_49 The existence of this adjoint follows from the Riesz representation theorem for linear operators on Hilbert spaces. It can be shown that if "u" is adapted then formula_50 where the integral is to be understood in the Itô sense. Thus this provides a method of extending the Itô integral to non adapted integrands. Applications. The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications for example in stochastic filtering. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
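The integration-by-parts identity quoted in the Invariance principle section, E(⟨DF(X), φ⟩) = E[F(X) ∫ h dX], can be sanity-checked by Monte Carlo for a very simple functional. The sketch below is only an illustration, not part of the article: it assumes F(X) = X(1), the terminal value of the Wiener process, and a constant direction h ≡ 1, so the directional derivative ⟨DF(X), φ⟩ equals φ(1) = 1 and the right-hand side should also average to 1.

import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 200_000, 500
dt = 1.0 / n_steps

# Brownian increments and terminal values on [0, 1].
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W1 = dW.sum(axis=1)                  # F(X) = X(1)

h = np.ones(n_steps)                 # constant direction h = 1, so phi(1) = 1
ito_integral = (dW * h).sum(axis=1)  # Ito integral of h against X (here equal to W1)

lhs = 1.0                            # <DF(X), phi> = phi(1) for F(X) = X(1)
rhs = np.mean(W1 * ito_integral)     # Monte Carlo estimate of E[F(X) * int h dX]
print(lhs, rhs)                      # the two agree up to sampling error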
[ { "math_id": 0, "text": " \\int_{-\\infty}^\\infty f(x)\\, d \\lambda(x) = \\int_{-\\infty}^\\infty f(x+\\varepsilon)\\, d \\lambda(x) " }, { "math_id": 1, "text": "\\int_{-\\infty}^\\infty f'(x)\\, d \\lambda(x)=0." }, { "math_id": 2, "text": "0 = \\int_{-\\infty}^\\infty f' \\,d \\lambda = \\int_{-\\infty}^\\infty (gh)' \\,d \\lambda = \\int_{-\\infty}^\\infty g h'\\, d \\lambda +\n\\int_{-\\infty}^\\infty g' h\\, d \\lambda." }, { "math_id": 3, "text": "h_s" }, { "math_id": 4, "text": " \\varphi(t) = \\int_0^t h_s\\, d s ." }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": " E(F(X + \\varepsilon\\varphi))= E \\left [F(X) \\exp \\left ( \\varepsilon\\int_0^1 h_s\\, d X_s -\n\\frac{1}{2}\\varepsilon^2 \\int_0^1 h_s^2\\, ds \\right ) \\right ]." }, { "math_id": 7, "text": "E(\\langle DF(X), \\varphi\\rangle) = E\\Bigl[ F(X) \\int_0^1 h_s\\, dX_s\\Bigr].\n" }, { "math_id": 8, "text": "F" }, { "math_id": 9, "text": "\\varphi" }, { "math_id": 10, "text": "X=(\\Omega,\\mathcal{F},P,\\mathcal{H})" }, { "math_id": 11, "text": "(\\Omega,\\mathcal{F},P)" }, { "math_id": 12, "text": "\\mathcal{H}\\subset L^2(\\Omega,\\mathcal{F},P)" }, { "math_id": 13, "text": "H\\in \\mathcal{H}" }, { "math_id": 14, "text": "\\mathcal{F}=\\sigma(H:H\\in \\mathcal{H})" }, { "math_id": 15, "text": "\\mathcal{H}" }, { "math_id": 16, "text": "\\mathcal{G}" }, { "math_id": 17, "text": "\\operatorname{Seg}(\\mathcal{G})" }, { "math_id": 18, "text": "\\R^{\\N}=\\prod\\limits_{i=1}^{\\infty}\\R" }, { "math_id": 19, "text": "\\gamma" }, { "math_id": 20, "text": "(\\R^{\\N},\\mathcal{B}(\\R^{\\N}),\\gamma^{\\N}=\\otimes_{n\\in\\N} \\gamma)" }, { "math_id": 21, "text": "\\Omega" }, { "math_id": 22, "text": "\\tau_{\\alpha}(x)=x+\\alpha" }, { "math_id": 23, "text": "\\R^\\N" }, { "math_id": 24, "text": "\\alpha" }, { "math_id": 25, "text": "j:\\mathcal{H}\\to \\ell^2" }, { "math_id": 26, "text": "L^{\\infty-0}(\\Omega,\\mathcal{F},P)=\\bigcap\\limits_{p<\\infty}L^p(\\Omega,\\mathcal{F},P)\\quad" }, { "math_id": 27, "text": "\\quad q:L^{\\infty-0}(\\R^{\\N},\\mathcal{B}(\\R^{\\N}),\\gamma^{\\N})\\to L^{\\infty-0}(\\Omega,\\mathcal{F},P)," }, { "math_id": 28, "text": "\\rho:\\mathcal{H}\\to \\operatorname{End}(L^{\\infty-0}(\\Omega,\\mathcal{F},P))" }, { "math_id": 29, "text": "\\rho(h)=q\\circ \\tau_{j(h)}\\circ q^{-1}." }, { "math_id": 30, "text": "\\rho" }, { "math_id": 31, "text": "\\rho(h+h')=\\rho(h)\\rho(h')" }, { "math_id": 32, "text": "h,h'\\in \\mathcal{H}" }, { "math_id": 33, "text": "(\\rho(h))_{h}" }, { "math_id": 34, "text": "\\lim\\limits_{\\varepsilon \\to 0}\\frac{\\rho(\\varepsilon h)-I}{\\varepsilon}=M_h" }, { "math_id": 35, "text": "I" }, { "math_id": 36, "text": "M_h" }, { "math_id": 37, "text": "h\\in \\mathcal{H}" }, { "math_id": 38, "text": "C[0,1]" }, { "math_id": 39, "text": "F: C[0,1] \\to \\R" }, { "math_id": 40, "text": " E(F(X)^2) < \\infty" }, { "math_id": 41, "text": " \\lim_{\\varepsilon \\to 0} \\frac 1 \\varepsilon (F(X+\\varepsilon \\varphi) - F(X) ) = \\int_0^1 F'(X,dt) \\varphi(t)\\ \\mathrm{a.e.}\\ X" }, { "math_id": 42, "text": "F(X) = E(F(X)) + \\int_0^1 H_t \\,d X_t ," }, { "math_id": 43, "text": "F(X) = E(F(X))+\\int_0^1 E (D_t F \\mid \\mathcal{F}_t ) \\, d X_t ." 
}, { "math_id": 44, "text": "D_t" }, { "math_id": 45, "text": "L^2" }, { "math_id": 46, "text": "L^2([0,\\infty) \\times \\Omega)" }, { "math_id": 47, "text": " E (\\langle DF, u \\rangle ) = E (F \\delta (u) )," }, { "math_id": 48, "text": "L^2[0,\\infty)" }, { "math_id": 49, "text": " \\langle f, g \\rangle = \\int_0^\\infty f(s) g(s) \\, ds." }, { "math_id": 50, "text": " \\delta(u) = \\int_0^\\infty u_t\\, d W_t ," } ]
https://en.wikipedia.org/wiki?curid=837875
8379669
Mean absolute error
Statistical error measure In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of "Y" versus "X" include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement. MAE is calculated as the sum of absolute errors (i.e., the Manhattan distance) divided by the sample size:formula_0It is thus an arithmetic average of the absolute errors formula_1, where formula_2 is the prediction and formula_3 the true value. Alternative formulations may include relative frequencies as weight factors. The mean absolute error uses the same scale as the data being measured. This is known as a scale-dependent accuracy measure and therefore cannot be used to make comparisons between predicted values that use different scales. The mean absolute error is a common measure of forecast error in time series analysis, sometimes used in confusion with the more standard definition of mean absolute deviation. The same confusion exists more generally. Quantity disagreement and allocation disagreement. In remote sensing the MAE is sometimes expressed as the sum of two components: quantity disagreement and allocation disagreement. Quantity disagreement is the absolute value of the mean error:formula_4Allocation disagreement is MAE minus quantity disagreement. It is also possible to identify the types of difference by looking at an formula_5 plot. Quantity difference exists when the average of the X values does not equal the average of the Y values. Allocation difference exists if and only if points reside on both sides of the identity line. Related measures. The mean absolute error is one of a number of ways of comparing forecasts with their eventual outcomes. Well-established alternatives are the mean absolute scaled error (MASE), mean absolute log error (MALE), and the mean squared error. These all summarize performance in ways that disregard the direction of over- or under- prediction; a measure that does place emphasis on this is the mean signed difference. Where a prediction model is to be fitted using a selected performance measure, in the sense that the least squares approach is related to the mean squared error, the equivalent for mean absolute error is least absolute deviations. MAE is not identical to root-mean square error (RMSE), although some researchers report and interpret it that way. The MAE is conceptually simpler and also easier to interpret than RMSE: it is simply the average absolute vertical or horizontal distance between each point in a scatter plot and the Y=X line. In other words, MAE is the average absolute difference between X and Y. Furthermore, each error contributes to MAE in proportion to the absolute value of the error. This is in contrast to RMSE which involves squaring the differences, so that a few large differences will increase the RMSE to a greater degree than the MAE. Optimality property. The "mean absolute error" of a real variable "c" with respect to the random variable "X" isformula_6Provided that the probability distribution of "X" is such that the above expectation exists, then "m" is a median of "X" if and only if "m" is a minimizer of the mean absolute error with respect to "X". In particular, "m" is a sample median if and only if "m" minimizes the arithmetic mean of the absolute deviations. 
More generally, a median is defined as a minimum of formula_7 as discussed at Multivariate median (and specifically at Spatial median). This optimization-based definition of the median is useful in statistical data-analysis, for example, in "k"-medians clustering. Proof of optimality. Statement: The classifier minimising formula_8 is formula_9. Proof: The loss function for classification is formula_10 Differentiating with respect to "a" gives formula_11 This means formula_12 Hence, formula_13 References. <templatestyles src="Reflist/styles.css" />
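A small numeric illustration of the quantities above, using a made-up pair of series: the MAE itself, the quantity-disagreement/allocation-disagreement split used in remote sensing, and the fact that the sample median minimizes the mean absolute error.

import numpy as np

y = np.array([2.0, 3.0, 5.0, 4.0, 6.0])   # predictions (illustrative values)
x = np.array([1.0, 4.0, 5.0, 2.0, 9.0])   # observations

errors = y - x
mae = np.mean(np.abs(errors))                       # 1.4
quantity_disagreement = abs(np.mean(errors))        # 0.2
allocation_disagreement = mae - quantity_disagreement
print(mae, quantity_disagreement, allocation_disagreement)

# The sample median minimizes the mean absolute error:
candidates = np.linspace(x.min(), x.max(), 1001)
best = candidates[np.argmin([np.mean(np.abs(x - c)) for c in candidates])]
print(best, np.median(x))   # the minimizer coincides with the median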
[ { "math_id": 0, "text": "\\mathrm{MAE} = \\frac{\\sum_{i=1}^n\\left| y_i - x_i\\right|}{n} =\\frac{\\sum_{i=1}^n\\left| e_i \\right|}{n}." }, { "math_id": 1, "text": "|e_i| = |y_i - x_i|" }, { "math_id": 2, "text": "y_i" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "\\left|\\frac{\\sum_{i=1}^n y_i-x_i}{n}\\right|." }, { "math_id": 5, "text": "(x,y)" }, { "math_id": 6, "text": "E(\\left|X-c\\right|)." }, { "math_id": 7, "text": "E(|X-c| - |X| )," }, { "math_id": 8, "text": "\\mathbb{E}|y-\\hat{y}|" }, { "math_id": 9, "text": "\\hat{f}(x)=\\text{Median}(y|X=x)" }, { "math_id": 10, "text": "\\begin{align}\nL &= \\mathbb{E}[|y-a||X=x]\\\\\n &= \\int_{-\\infty}^{\\infty}|y-a|f_{Y|X}(y)\\, dy\\\\\n &= \\int_{-\\infty}^a (a-y)f_{Y|X}(y)\\, dy+\\int_a^{\\infty}(y-a)f_{Y|X}(y)\\, dy.\\\\\n\\end{align}" }, { "math_id": 11, "text": "\\frac{\\partial }{\\partial a}L = \\int_{-\\infty}^af_{Y|X}(y)\\, dy+\\int_a^{\\infty}-f_{Y|X}(y)\\, dy=0 ." }, { "math_id": 12, "text": "\\int_{-\\infty}^a f(y)\\, dy = \\int_a^{\\infty} f(y)\\, dy ." }, { "math_id": 13, "text": "F_{Y|X}(a)=0.5 ." } ]
https://en.wikipedia.org/wiki?curid=8379669
838061
Broaching (metalworking)
Tool with teeth used to cut metal Broaching is a machining process that uses a toothed tool, called a broach, to remove material. There are two main types of broaching: "linear" and "rotary". In linear broaching, which is the more common process, the broach is run linearly against a surface of the workpiece to produce the cut. Linear broaches are used in a broaching machine, which is also sometimes shortened to "broach". In rotary broaching, the broach is rotated and pressed into the workpiece to cut an axisymmetric shape. A rotary broach is used in a lathe or screw machine. In both processes the cut is performed in one pass of the broach, which makes it very efficient. Broaching is used when precision machining is required, especially for odd shapes. Commonly machined surfaces include circular and non-circular holes, splines, keyways, and flat surfaces. Typical workpieces include small to medium-sized castings, forgings, screw machine parts, and stampings. Even though broaches can be expensive, broaching is usually favored over other processes when used for high-quantity production runs. Broaches are shaped similar to a saw, except the height of the teeth increases over the length of the tool. Moreover, the broach contains three distinct sections: one for roughing, another for semi-finishing, and the final one for finishing. Broaching is an unusual machining process because it has the feed built into the tool. The profile of the machined surface is always the inverse of the profile of the broach. The rise per tooth (RPT), also known as the "step" or feed per tooth, determines the amount of material removed and the size of the chip. The broach can be moved relative to the workpiece or vice versa. Because all of the features are built into the broach, no complex motion or skilled labor is required to use it. A broach is effectively a collection of single-point cutting tools arrayed in sequence, cutting one after the other; its cut is analogous to multiple passes of a shaper. History. The concept of broaching can be traced back to the early 1850s, with the first applications used for cutting keyways in pulleys and gears. After World War I, broaching was used to rifle gun barrels. In the 1920s and 30s the tolerances were tightened and the cost reduced thanks to advances in form grinding and broaching machines. Process. The process depends on the type of broaching being performed. Surface broaching is very simple as either the workpiece is moved against a stationary surface broach, or the workpiece is held stationary while the broach is moved against it. Internal broaching is more involved. The process begins by clamping the workpiece into a special holding fixture, called a "workholder", which mounts in the broaching machine. The broaching machine "elevator", which is the part of the machine that moves the broach above the workholder, then lowers the broach through the workpiece. Once through, the broaching machine's "puller", essentially a hook, grabs the "pilot" of the broach. The elevator then releases the top of the follower and the puller pulls the broach through the workpiece completely. The workpiece is then removed from the machine and the broach is raised back up to reengage with the elevator. The broach usually only moves linearly, but sometimes it is also rotated to create a spiral spline or gun-barrel rifling. Cutting fluids are used for three reasons: Fortified petroleum cutting fluids are the most common. 
However, heavy-duty water-soluble cutting fluids are being used because of their superior cooling, cleanliness, and non-flammability. Usage. Broaching was originally developed for machining internal keyways. However, it was soon discovered that broaching is very useful for machining other surfaces and shapes for high volume workpieces. Because each broach is specialized to cut just one shape, either the broach must be specially designed for the geometry of the workpiece or the workpiece must be designed around a standard broach geometry. A customized broach is usually only viable with high volume workpieces, because the broach can cost US$15,000 to US$30,000 to produce. Broaching speeds vary from 20 to 120 surface feet per minute (SFPM). This results in a complete cycle time of 5 to 30 seconds. Most of the time is consumed by the return stroke, broach handling, and workpiece loading and unloading. The only limitations on broaching are that there are no obstructions over the length of the surface to be machined, the geometry to be cut does not have curves in multiple planes, and that the workpiece is strong enough to withstand the forces involved. Specifically for internal broaching a hole must first exist in the workpiece so the broach can enter. Also, there are limits on the size of internal cuts. Common internal holes can range from in diameter but it is possible to achieve a range of . Surface broaches' range is usually , although the feasible range is . Tolerances are usually ±0.002 in (±0.05 mm), but in precise applications a tolerance of ±0.0005 in (±0.01 mm) can be held. Surface finishes are usually between 16 and 63 microinches (μin), but can range from 8 to 125 μin. There may be small burrs on the exit side of the cut. Broaching works best on softer materials, such as brass, bronze, copper alloys, aluminium, graphite, hard rubbers, wood, composites, and plastic. However, it still has a good machinability rating on mild steels and free machining steels. When broaching, the machinability rating is closely related to the hardness of the material. For steels the ideal hardness range is between 16 and 24 Rockwell C (HRC); a hardness greater than HRC 35 will dull the broach quickly. Broaching is more difficult on harder materials, stainless steel and titanium, but is still possible. Types. Broaches can be categorized by many means: If the broach is large enough the costs can be reduced by using a "built-up" or "modular" construction. This involves producing the broach in pieces and assembling it. If any portion wears out only that section has to be replaced, instead of the entire broach. Most broaches are made from high speed steel (HSS) or an alloy steel; titanium nitride (TiN) coatings are common on HSS to prolong life. Except when broaching cast iron, tungsten carbide is rarely used as a tooth material because the cutting edge will crack on the first pass. Surface broaches. The "slab broach" is the simplest surface broach. It is a general purpose tool for cutting flat surfaces. "Slot broaches" (G &amp; H) are for cutting slots of various dimensions at high production rates. Slot broaching is much quicker than milling when more than one slot needs to be machined, because multiple broaches can be run through the part at the same time on the same broaching machine. "Contour broaches" are designed to cut concave, convex, cam, contoured, and irregular shaped surfaces. "Pot broaches" are cut the inverse of an internal broach; they cut the outside diameter of a cylindrical workpiece. 
They are named after the pot looking fixture in which the broaches are mounted; the fixture is often referred to as a "pot". The pot is designed to hold multiple broaching tools concentrically over its entire length. The broach is held stationary while the workpiece is pushed or pulled through it. This has replaced hobbing for some involute gears and cutting external splines and slots. "Straddle broaches" use two slab broaches to cut parallel surfaces on opposite sides of a workpiece in one pass. This type of broaching holds closer tolerances than if the two cuts were done independently. It is named after the fact that the broaches "straddle" the workpiece on multiple sides. Internal broaches. "Solid" broaches are the most common type; they are made from one solid piece of material. For broaches that wear out quickly "shell" broaches are used; these broaches are similar to a solid broach, except there is a hole through the center where it mounts on an arbor. Shell broaches cost more initially, but save the cost overall if the broach must be replaced often because the pilots are on the mandrel and do not have to be reproduced with each replacement. "Modular" broaches are commonly used for large internal broaching applications. They are similar to shell broaches in that they are a multi-piece construction. This design is used because it is cheaper to build and resharpen and is more flexible than a solid design. A common type of internal broach is the "keyway" broach (C &amp; D). It uses a special fixture called a "horn" to support the broach and properly locate the part with relation to the broach. A "concentricity broach" is a special type of spline cutting broach which cuts both the minor diameter and the spline form to ensure precise concentricity. The "cut-and-recut broach" is used to cut thin-walled workpieces. Thin-walled workpieces have a tendency to expand during cutting and then shrink afterward. This broach overcomes that problem by first broaching with the standard roughing teeth, followed by a "breathing" section, which serves as a pilot as the workpiece shrinks. The teeth after the "breathing" section then include roughing, semi-finishing, and finishing teeth. Design. For defining the geometry of a broach an internal type is shown below. Note that the geometries of other broaches are similar. where: The most important characteristic of a broach is the rise per tooth (RPT), which is how much material is removed by each tooth. The RPT varies for each section of the broach, which are the roughing section ("t"r), semi-finishing section ("t"s), and finishing section ("t"f). The roughing teeth remove most of the material so the number of roughing teeth required dictates how long the broach is. The semi-finishing teeth provide surface finish and the finishing teeth provide the final finishing. The finishing section's RPT (tf) is usually zero so that as the first finishing teeth wear the later ones continue the sizing function. For free-machining steels the RPT ranges from . For surface broaching the RPT is usually between and for diameter broaching is usually between . The exact value depends on many factors. If the cut is too big it will impart too much stress into the teeth and the workpiece; if the cut is too small the teeth rub instead of cutting. One way to increase the RPT while keeping the stresses down is with "chip breakers". 
They are notches in the teeth designed to break the chip and decrease the overall amount of material being removed by any given tooth (see the drawing above). For broaching to be effective, the workpiece should have more material than the final dimension of the cut. The "hook" ("α") angle is a parameter of the material being cut. For steel, it is between 15 and 20° and for cast iron it is between 6 and 8°. The "back-off" ("γ") provides clearance for the teeth so that they don't rub on the workpiece; it is usually between 1 and 3°. When radially broaching workpieces that require a deep cut per tooth, such as forgings or castings, a "rotor-cut" or "jump-cut" design can be used; these broaches are also known as "free egress" or "nibbling" broaches. In this design the RPT is designated to two or three rows of teeth. For the broach to work the first tooth of that cluster has a wide notch, or undercut, and then the next tooth has a smaller notch (in a three tooth design) and the final tooth has no notch. This allows for a deep cut while keeping stresses, forces, and power requirements low. There are two different options for achieving the same goal when broaching a flat surface. The first is similar to the rotor-cut design, which is known as a "double-cut" design. Here four teeth in a row have the same RPT, but each progressive tooth takes only a portion of the cut due to notches in the teeth (see the image gallery below). The other option is known as a "progressive" broach, which completely machines the center of the workpiece and then the rest of the broach machines outward from there. All of these designs require a broach that is longer than if a standard design were used. For some circular broaches, "burnishing teeth" are provided instead of finishing teeth. They are not really teeth, as they are just rounded discs that are oversized. This results in burnishing the hole to the proper size. This is primarily used on non-ferrous and cast iron workpieces. The pitch defines the tooth construction, strength, and number of teeth in contact with the workpiece. The pitch is usually calculated from workpiece length, so that the broach can be designed to have at least two teeth in contact with the workpiece at any time; the pitch remains constant for all teeth of the broach. One way to calculate the pitch is: formula_0 Broaching machines. Broaching machines are relatively simple as they only have to move the broach in a linear motion at a predetermined speed and provide a means for handling the broach automatically. Most machines are hydraulic, but a few specialty machines are mechanically driven. The machines are distinguished by whether their motion is horizontal or vertical. The choice of machine is primarily dictated by the stroke required. Vertical broaching machines rarely have a stroke longer than . Vertical broaching machines can be designed for push broaching, pull-down broaching, pull-up broaching, or surface broaching. Push broaching machines are similar to an arbor press with a guided ram; typical capacities are 5 to 50 tons. The two ram pull-down machine is the most common type of broaching machine. This style machine has the rams under the table. Pull-up machines have the ram above the table; they usually have more than one ram. Most surface broaching is done on a vertical machine. Horizontal broaching machines are designed for pull broaching, surface broaching, continuous broaching, and rotary broaching. 
Pull style machines are basically vertical machines laid on the side with a longer stroke. Surface style machines hold the broach stationary while the workpieces are clamped into fixtures that are mounted on a conveyor system. Continuous style machines are similar to the surface style machines except adapted for internal broaching. Horizontal machines used to be much more common than vertical machines; however, today they represent just 10% of all broaching machines purchased. Vertical machines are more popular because they take up less space. Broaching is often impossible without the specific broaching or keyway machines unless you have a system that can be used in conjunction with a modern machining centre or driven tooling lathe; these extra bits of equipment open up the possibility of producing keyways, splines and Torx through one-hit machining. Rotary broaching. A somewhat different design of cutting tool that can achieve the irregular hole or outer profile of a broach is called a "rotary broach" or "wobble broach". One of the biggest advantages to this type of broaching is that it does not require a broaching machine, but instead is used on lathes, milling machines, screw machines or Swiss lathes. Rotary broaching requires two tooling components: a tool holder and a broach. The leading (cutting) edge of the broach has a contour matching the desired final shape. The broach is mounted in a special tool holder that allows it to freely rotate. The tool holder is special because it holds the tool so that its axis of rotation is inclined slightly to the axis of rotation of the work. A typical value for this misalignment is 1°. This angle is what produces a rotating edge for the broach to cut the workpiece. Either the workpiece or the tool holder is rotated. If the tool holder is rotated, the misalignment causes the broach to appear as though it is "wobbling", which is the origin of the term "wobble broach". For internal broaching the sides of the broach are drafted inward so it becomes thinner; for external broaching the sides are drafted outward, to make the pocket bigger. This draft keeps the broach from jamming; the draft must be larger than the angle of misalignment. If the work piece rotates, the broach is pressed against it, is driven by it, and rotates synchronously with it. If the tool holder rotates, the broach is pressed against the workpiece, but is driven by the tool holder. Ideally the tool advances at the same rate that it cuts. The ideal rate of cut is defined as: Rate of cut [inches per rotation (IPR)] = (diameter of tool [inches]) × sin(Angle of misalignment [degrees]) If it advances much faster, then the tool becomes choked; conversely, if it advances much slower, then an interrupted or zig-zag cut occurs. In practice the rate of cut is slightly less than the ideal rate so that the load is released on the non-cutting edge of the tool. There is some spiraling of the tool as it cuts, so the form at the bottom of the workpiece may be rotated with respect to the form at the top of the hole or profile. Spiraling may be undesirable because it binds the body of the tool and prevents it from cutting sharply. One solution to this is to reverse the rotation in mid cut, causing the tool to spiral in the opposite direction. If reversing the machine is not practical, then interrupting the cut is another possible solution. In general, a rotary broach will not cut as accurately as a push or pull broach. 
However, the ability to use this type of cutting tool on common machine tools is highly advantageous. In addition, push or pull broaches cannot be used in a blind hole, while a rotary broach can, as long as there is sufficient space for chips at the bottom of the hole. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
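As a quick, hedged illustration of two relations from this article, the rule-of-thumb pitch formula and the ideal rate of cut for a rotary broach, the following Python sketch plugs in purely illustrative numbers (the workpiece length, tool diameter and 1° misalignment are assumptions, not values taken from any particular design):

```python
import math

def broach_pitch(workpiece_length):
    """Rule-of-thumb pitch, P ~ 0.35 * sqrt(L_w); the empirical constant
    assumes workpiece length and pitch share the same length unit."""
    return 0.35 * math.sqrt(workpiece_length)

def rotary_broach_feed(tool_diameter, misalignment_deg=1.0):
    """Ideal rate of cut for a rotary (wobble) broach:
    feed per revolution = tool diameter * sin(misalignment angle)."""
    return tool_diameter * math.sin(math.radians(misalignment_deg))

# Illustrative numbers only (assumptions, not article data):
L_w = 2.0        # workpiece length, inches
d_tool = 0.5     # rotary broach diameter, inches
print(f"pitch ~ {broach_pitch(L_w):.3f} in")               # ~0.495 in
print(f"feed  ~ {rotary_broach_feed(d_tool):.4f} in/rev")  # ~0.0087 in/rev
```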
[ { "math_id": 0, "text": "P \\cong 0.35 \\sqrt{L_\\mathrm w}" } ]
https://en.wikipedia.org/wiki?curid=838061
8384818
Moving shock
In fluid dynamics, a moving shock is a shock wave that is travelling through a fluid (often gaseous) medium with a velocity relative to the velocity of the fluid already making up the medium. As such, the normal shock relations require modification to calculate the properties before and after the moving shock. A knowledge of moving shocks is important for studying the phenomena surrounding detonation, among other applications. Theory. To derive the theoretical equations for a moving shock, one may start by denoting the region in front of the shock as subscript 1, with the subscript 2 defining the region behind the shock. This is shown in the figure, with the shock wave propagating to the right. The velocity of the gas is denoted by "u", pressure by "p", and the local speed of sound by "a". The speed of the shock wave relative to the gas is "W", making the total velocity equal to "u1" + "W". Next, suppose a reference frame is then fixed to the shock so it appears stationary as the gas in regions 1 and 2 move with a velocity relative to it. Redefining region 1 as "x" and region 2 as "y" leads to the following shock-relative velocities: formula_0 formula_1 With these shock-relative velocities, the properties of the regions before and after the shock can be defined below introducing the temperature as "T", the density as "ρ", and the Mach number as "M": formula_2 formula_3 formula_4 formula_5 Introducing the heat capacity ratio as "γ", the speed of sound, density, and pressure ratios can be derived: formula_6 formula_7 formula_8 One must keep in mind that the above equations are for a shock wave moving towards the right. For a shock moving towards the left, the "x" and "y" subscripts must be switched and: formula_9 formula_10 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
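The ratios above are straightforward to evaluate once the shock Mach number and the heat capacity ratio are known. The following Python sketch assumes a calorically perfect gas with "γ" = 1.4 and an illustrative shock Mach number; it simply transcribes the three ratio formulas for a right-moving shock:

```python
import math

def moving_shock_ratios(Mx, gamma=1.4):
    """Property ratios across a moving normal shock, from the relations above.
    Mx = W / a1 is the shock Mach number relative to the gas ahead of it."""
    a_ratio = math.sqrt(1.0 + 2.0 * (gamma - 1.0) / (gamma + 1.0) ** 2
                        * (gamma * Mx**2 - 1.0 / Mx**2 - (gamma - 1.0)))
    rho_ratio = 1.0 / (1.0 - 2.0 / (gamma + 1.0) * (1.0 - 1.0 / Mx**2))
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (Mx**2 - 1.0)
    return a_ratio, rho_ratio, p_ratio

# Example: a shock travelling at Mx = 2 into still air (u1 = 0), gamma = 1.4
a21, rho21, p21 = moving_shock_ratios(2.0)
print(f"a2/a1     = {a21:.3f}")     # ~1.299
print(f"rho2/rho1 = {rho21:.3f}")   # ~2.667
print(f"p2/p1     = {p21:.3f}")     # ~4.500
```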
[ { "math_id": 0, "text": "\\ u_y = W + u_1 - u_2," }, { "math_id": 1, "text": "\\ u_x = W." }, { "math_id": 2, "text": "\\ p_1 = p_x \\quad ; \\quad p_2 = p_y \\quad ; \\quad T_1 = T_x \\quad ; \\quad T_2 = T_y," }, { "math_id": 3, "text": "\\ \\rho_1 = \\rho_x \\quad ; \\quad \\rho_2 = \\rho_y \\quad ; \\quad a_1 = a_x \\quad ; \\quad a_2 = a_y," }, { "math_id": 4, "text": "\\ M_x = \\frac{u_x}{a_x} = \\frac{W}{a_1}," }, { "math_id": 5, "text": "\\ M_y = \\frac{u_y}{a_y} = \\frac{W + u_1 - u_2}{a_2}." }, { "math_id": 6, "text": "\\ \\frac{a_2}{a_1} = \\sqrt{1 + \\frac{2(\\gamma - 1)}{(\\gamma + 1)^2}\\left[\\gamma M_x^2 - \\frac{1}{M_x^2} - (\\gamma - 1)\\right]}," }, { "math_id": 7, "text": "\\ \\frac{\\rho_2}{\\rho_1} = \\frac{1}{1-\\frac{2}{\\gamma + 1}\\left[1 - \\frac{1}{M_x^2}\\right]}," }, { "math_id": 8, "text": "\\ \\frac{p_2}{p_1} = 1 + \\frac{2\\gamma}{\\gamma + 1}\\left[M_x^2 - 1\\right]." }, { "math_id": 9, "text": "\\ u_y = W - u_1 + u_2," }, { "math_id": 10, "text": "\\ M_y = \\frac{W - u_1 + u_2}{a_2}." } ]
https://en.wikipedia.org/wiki?curid=8384818
8385078
Melnikov distance
In mathematics, the Melnikov method is a tool to identify the existence of chaos in a class of dynamical systems under periodic perturbation. Background. The Melnikov method is used in many cases to predict the occurrence of chaotic orbits in non-autonomous smooth nonlinear systems under periodic perturbation. According to the method, it is possible to construct a function called the "Melnikov function" which can be used to predict either regular or chaotic behavior of a dynamical system. Thus, the Melnikov function is used to determine a measure of the distance between the stable and unstable manifolds in the Poincaré map; when this measure is equal to zero, the manifolds cross each other transversally, and from that crossing the system becomes chaotic. The method was introduced by H. Poincaré in 1890 and by V. Melnikov in 1963, and is therefore also called the "Poincaré-Melnikov method". It is described in several textbooks, such as those of Guckenheimer & Holmes, Kuznetsov, S. Wiggins, Awrejcewicz & Holicke, and others. The Melnikov distance has many applications, as it can be used to predict chaotic vibrations; in this method, the critical amplitude is found by setting the distance between the homoclinic orbits and the stable manifolds equal to zero. Guckenheimer & Holmes, building on the KAM theorem, were the first to determine a set of parameters of relatively weakly perturbed two-degree-of-freedom Hamiltonian systems at which homoclinic bifurcation occurred. The Melnikov distance. Consider the following class of systems given by formula_0 or, in vector form, formula_1 where formula_6, formula_7, formula_8 and formula_9 Assume that system (1) is smooth on the region of interest, formula_10 is a small perturbation parameter and formula_11 is a periodic vector function in formula_12 with the period formula_13. If formula_14, then there is an unperturbed system formula_15 From this system (3), looking at the phase space in Figure 1, consider the following assumptions: A1) the unperturbed system has a hyperbolic fixed point formula_16 connected to itself by a homoclinic orbit formula_17 A2) define formula_22 The interior of formula_18 is filled with a continuous family of periodic orbits formula_19 with formula_21 having period formula_20. To obtain the Melnikov function, some tricks have to be used; for example, to get rid of the time dependence and to gain geometrical advantages, a new cyclic coordinate formula_23 is introduced, given by formula_24 Then, the system (1) can be rewritten in vector form as follows: formula_26 Hence, looking at Figure 2, the three-dimensional phase space formula_27 where formula_28 and formula_29 has the hyperbolic fixed point formula_30 of the unperturbed system becoming a periodic orbit formula_31 The two-dimensional stable and unstable manifolds of formula_32 are denoted by formula_2 and formula_3, respectively. By the assumption formula_33 formula_2 and formula_3 coincide along a two-dimensional homoclinic manifold. This is denoted by formula_34 where formula_35 is the time of flight from a point formula_36 to the point formula_37 on the homoclinic connection. In Figure 3, for any point formula_38 a vector formula_25 is constructed, normal to formula_5 as follows: formula_39 Thus varying formula_35 and formula_40 serves to move formula_25 to every point on formula_4 Splitting of stable and unstable manifolds. If formula_41 is sufficiently small, that is, for the perturbed system (2), then formula_42 becomes formula_43 formula_5 becomes formula_44 and the stable and unstable manifolds become different from each other. 
Furthermore, for this sufficiently small formula_10 in a neighborhood formula_45 the periodic orbit formula_42 of the unperturbed vector field (3) persists as a periodic orbit, formula_46 Moreover, formula_47 and formula_48 are formula_49 formula_10-close to formula_50 and formula_51 respectively. Consider the following cross-section of the phase space formula_54 then formula_55 and formula_56 are the trajectories of the unperturbed and perturbed vector fields, respectively. The projections of these trajectories onto formula_57 are given by formula_58 and formula_59 Looking at Figure 4, the splitting of formula_52 and formula_60 is defined as follows: consider the points at which they intersect formula_25 transversely, formula_61 and formula_62, respectively. It is therefore natural to define the distance between formula_52 and formula_53 at the point formula_63 denoted by formula_64 which can be rewritten as formula_65 Since formula_61 and formula_62 lie on formula_66 and formula_67 formula_68 can be rewritten as formula_69 The manifolds formula_52 and formula_53 may intersect formula_25 in more than one point, as shown in Figure 5. For this to be possible, after every intersection, for formula_10 sufficiently small, the trajectory must pass through formula_70 again. Deduction of the Melnikov function. Expanding eq. (5) in a Taylor series about formula_71 gives formula_72 where formula_73 and formula_74 When formula_75 the Melnikov function is defined to be formula_76 since formula_77 is not zero on formula_36, considering formula_35 finite and formula_78 Using eq. (6) directly would require knowing the solution to the perturbed problem. To avoid this, Melnikov defined a time-dependent Melnikov function formula_79 where formula_80 and formula_81 are the trajectories starting at formula_82 and formula_83 respectively. Taking the time derivative of this function allows for some simplifications. The time derivative of one of the terms in eq. (7) is formula_84 From the equation of motion, formula_85 it follows that formula_86 Plugging equations (2) and (9) back into (8) gives formula_87 The first two terms on the right-hand side can be verified to cancel by explicitly evaluating the matrix multiplications and dot products; formula_88 has been reparameterized to formula_89. Integrating the remaining term yields an expression that does not depend on the solution of the perturbed problem: formula_90 The lower integration bound has been chosen to be the time where formula_91, so that formula_92 and therefore the boundary terms are zero. Combining these terms and setting formula_93 the final form for the Melnikov distance is obtained: formula_94 Then, using this equation, the following theorem holds. Theorem 1: Suppose there is a point formula_95 such that 1) formula_96 and 2) formula_97. Then, for formula_10 sufficiently small, formula_52 and formula_53 intersect transversely at formula_98 Moreover, if formula_99 for all formula_100, then formula_101 Simple zeros of the Melnikov function imply chaos. By Theorem 1, a simple zero of the Melnikov function implies a transversal intersection of the stable manifold formula_52 and the unstable manifold formula_53, which results in a homoclinic tangle. Such a tangle is a very complicated structure, with the stable and unstable manifolds intersecting an infinite number of times. Consider a small element of phase volume departing from the neighborhood of a point near the transversal intersection, along the unstable manifold of a fixed point. 
Clearly, when this volume element approaches the hyperbolic fixed point it will be distorted considerably, due to the repetitive infinite intersections and stretching (and folding) associated with the relevant invariant sets. Therefore, it is reasonable to expect that the volume element will undergo an infinite sequence of stretch-and-fold transformations, as in the horseshoe map. This intuitive expectation is rigorously confirmed by a theorem stated as follows. Theorem 2: Suppose that a diffeomorphism formula_102, where formula_103 is an n-dimensional manifold, has a hyperbolic fixed point formula_104 with a stable manifold formula_105 and an unstable manifold formula_106 that intersect transversely at some point formula_107, formula_108 where formula_109 Then, formula_103 contains a hyperbolic set formula_110, invariant under formula_111, on which formula_111 is topologically conjugate to a shift on finitely many symbols. Thus, according to Theorem 2, the dynamics with a transverse homoclinic point is topologically similar to the horseshoe map and has the property of sensitivity to initial conditions; hence, when the Melnikov distance (12) has a simple zero, the system is chaotic. References. <templatestyles src="Reflist/styles.css" />
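Equation (12) lends itself to direct numerical evaluation. The sketch below is only an illustration under assumptions that are not part of this article: it applies the recipe to a common textbook choice of unperturbed Hamiltonian (a Duffing-type oscillator with a damping-plus-forcing perturbation), approximates the integral by quadrature over a finite window, and scans the phase t0 for sign changes, i.e. candidate simple zeros of the kind required by Theorem 1:

```python
import numpy as np

# Hedged sketch: numerically evaluate the Melnikov integral, eq. (12),
#   M(t0) = integral of DH(q0(t)) . g(q0(t), w*t + w*t0 + phi0, 0) dt,
# for an assumed textbook example that is NOT taken from this article:
# unperturbed Hamiltonian H = y^2/2 - x^2/2 + x^4/4 (Duffing type) with
# perturbation g = (0, gamma*cos(phi) - delta*y).

gamma, delta, omega, phi0 = 0.3, 0.2, 1.0, 0.0

def q0(t):
    """Homoclinic orbit of the unperturbed system (assumed example)."""
    x = np.sqrt(2.0) / np.cosh(t)
    y = -np.sqrt(2.0) * np.tanh(t) / np.cosh(t)   # y = dx/dt
    return x, y

def DH(x, y):
    """Gradient (dH/dx, dH/dy) of H = y^2/2 - x^2/2 + x^4/4."""
    return -x + x**3, y

def g(x, y, phi):
    """Perturbation (g1, g2) evaluated at epsilon = 0."""
    return np.zeros_like(x), gamma * np.cos(phi) - delta * y

def melnikov(t0, t_span=30.0, n=20001):
    """Quadrature approximation of M(t0); the homoclinic orbit decays
    exponentially, so a finite window and a plain Riemann sum suffice."""
    t = np.linspace(-t_span, t_span, n)
    x, y = q0(t)
    hx, hy = DH(x, y)
    g1, g2 = g(x, y, omega * t + omega * t0 + phi0)
    return float(np.sum(hx * g1 + hy * g2) * (t[1] - t[0]))

t0s = np.linspace(0.0, 2.0 * np.pi / omega, 200)
M = np.array([melnikov(t0) for t0 in t0s])
crossings = np.where(np.diff(np.sign(M)) != 0)[0]
print("sign changes (candidate simple zeros) near t0 =", t0s[crossings])
```

With these illustrative parameter values the forcing amplitude dominates the damping term, so the scan reports sign changes, the situation in which Theorem 1 predicts transverse intersections.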
[ { "math_id": 0, "text": "{{ \\begin{array}{lcl} \\dot{x} &=& \\frac{\\partial H}{\\partial y}(x,y) + \\epsilon g_{1}(x,y,t,\\epsilon) \\\\ \\dot{y} &=& -\\frac{\\partial H}{\\partial x}(x,y) + \\epsilon g_{2}(x,y,t,\\epsilon),\\end{array}{{(1)}}}}\n" }, { "math_id": 1, "text": "{{\\dot{q} = J DH(q) + \\epsilon g(q,t,\\epsilon)~\\ ~\\ {{(2)}}}}" }, { "math_id": 2, "text": "W^{s}(\\gamma (t))" }, { "math_id": 3, "text": "W^{u}(\\gamma (t))" }, { "math_id": 4, "text": "\\Gamma_{\\gamma}." }, { "math_id": 5, "text": "\\Gamma_{\\gamma}" }, { "math_id": 6, "text": "q=(x,y)" }, { "math_id": 7, "text": "DH = \\left(\\frac{\\partial H}{\\partial x},\\frac{\\partial H}{\\partial y}\\right)" }, { "math_id": 8, "text": "g=(g_{1},g_{2})" }, { "math_id": 9, "text": "J = \\left( \\begin{array}{cc}\n0 & 1 \\\\\n-1 & 0 \\\\\n\\end{array} \\right)." }, { "math_id": 10, "text": "\\epsilon" }, { "math_id": 11, "text": "g" }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "T = \\dfrac{2 \\pi}{\\omega}" }, { "math_id": 14, "text": "\\epsilon = 0" }, { "math_id": 15, "text": "{\\dot{q} = J DH(q). ~\\ ~\\ {(3)}}\n" }, { "math_id": 16, "text": "p_0" }, { "math_id": 17, "text": "q_{0}(t) = (x_{0}(t),y_{0}(t));" }, { "math_id": 18, "text": "\\Gamma_{p_{0}}" }, { "math_id": 19, "text": "q^{\\alpha}(t)" }, { "math_id": 20, "text": "T^{\\alpha}" }, { "math_id": 21, "text": "\\alpha \\in (-1, 0)," }, { "math_id": 22, "text": "\\Gamma_{p_{0}} = \\{q \\in \\mathbb{R}^{2}|q=q_{0}(t), t \\in \\mathbb{R}\\} = W^{s}(p_{0}) \\cap W^{u}(p_{0}) \\cup \\{p_{0}\\}." }, { "math_id": 23, "text": "\\phi" }, { "math_id": 24, "text": "\\phi = \\omega t + \\phi_{0}." }, { "math_id": 25, "text": "\\pi_{p}" }, { "math_id": 26, "text": "{{\n\\begin{array}{lcl} \\dot{q} &=& J DH(q) + \\epsilon g(q,\\phi,\\epsilon) \\\\ \n\\dot{\\phi} &=& \\omega.\\end{array} ~\\ ~\\ {{(4)}}}}" }, { "math_id": 27, "text": "\\mathbb{R}^{2} \\times \\mathbb{S}^{1}," }, { "math_id": 28, "text": "q \\in \\mathbb{R}^{2}" }, { "math_id": 29, "text": "\\phi \\in \\mathbb{S}^{1}" }, { "math_id": 30, "text": "p_{0}" }, { "math_id": 31, "text": "\\gamma(t) = (p_{0}, \\phi(t))." }, { "math_id": 32, "text": "\\gamma (t)" }, { "math_id": 33, "text": "A1," }, { "math_id": 34, "text": "\\Gamma_{\\gamma} = \\{(q,\\phi)\\in \\mathbb{R}^{2} \\times \\mathbb{S}^{1}|q=q_{0}(-t_{0}), t_{0} \\in \\mathbb{R}; \\phi= \\phi_{0} \\in (0, 2\\pi]\\}," }, { "math_id": 35, "text": "t_0" }, { "math_id": 36, "text": "q_{0}(-t_{0})" }, { "math_id": 37, "text": "q_{0}(0)" }, { "math_id": 38, "text": "p \\equiv (q_{0}(-t_{0}), \\phi_{0})," }, { "math_id": 39, "text": "\\pi_{p} \\equiv (DH(q_{0}(-t_{0}),0)." }, { "math_id": 40, "text": "\\phi_0" }, { "math_id": 41, "text": "\\epsilon \\neq 0" }, { "math_id": 42, "text": "\\gamma(t)" }, { "math_id": 43, "text": "\\gamma_{\\epsilon}(t)," }, { "math_id": 44, "text": "\\Gamma_{\\gamma_{\\epsilon}}," }, { "math_id": 45, "text": "\\mathcal{N}(\\epsilon_{0})," }, { "math_id": 46, "text": "\\gamma_{\\epsilon}(t) = \\gamma(t) + \\mathcal{O}(\\epsilon)." 
}, { "math_id": 47, "text": "W^{s}_{loc}(\\gamma_{\\epsilon}(t))" }, { "math_id": 48, "text": "W^{u}_{loc}(\\gamma_{\\epsilon}(t))" }, { "math_id": 49, "text": "C^{r}" }, { "math_id": 50, "text": "W^{s}_{loc}(\\gamma(t))" }, { "math_id": 51, "text": "W^{u}_{loc}(\\gamma(t))" }, { "math_id": 52, "text": "W^{s}(\\gamma_{\\epsilon}(t))" }, { "math_id": 53, "text": "W^{u}(\\gamma_{\\epsilon}(t))" }, { "math_id": 54, "text": "\\Sigma^{\\phi_0} = \\{ (q,\\phi) \\in \\mathbb{R}^{2}| \\phi = \\phi_0 \\}," }, { "math_id": 55, "text": "(q(t),\\phi(t))" }, { "math_id": 56, "text": "(q_{\\epsilon}(t),\\phi(t))" }, { "math_id": 57, "text": "\\Sigma^{\\phi_0}" }, { "math_id": 58, "text": "(q(t),\\phi_{0}(t))" }, { "math_id": 59, "text": "(q_{\\epsilon}(t),\\phi_{0}(t))." }, { "math_id": 60, "text": "W^{u}(\\gamma_{\\epsilon}(t))," }, { "math_id": 61, "text": "p^{s}_{\\epsilon}" }, { "math_id": 62, "text": "p^{u}_{\\epsilon}" }, { "math_id": 63, "text": "p," }, { "math_id": 64, "text": "d(p,\\epsilon) \\equiv |p^{s}_{\\epsilon} - p^{u}_{\\epsilon}|" }, { "math_id": 65, "text": "d(p,\\epsilon) = \\dfrac{(p^{s}_{\\epsilon} - p^{u}_{\\epsilon}) \\cdot\n\n(DH(q_{0}(-t_{0}),0)}{\\parallel(DH(q_{0}(-t_{0}),0)\\parallel }." }, { "math_id": 66, "text": "\\pi_{p}, p^{s}_{\\epsilon} = (q_{\\epsilon}^{s},\\phi_0)" }, { "math_id": 67, "text": "p^{u}_{\\epsilon} = (q_{\\epsilon}^{u},\\phi_0)," }, { "math_id": 68, "text": "d(p,\\epsilon)" }, { "math_id": 69, "text": "{{ d(t_{0}, \\phi_{0}, \\epsilon) = \\dfrac{DH(q_{0}(-t_{0})) \\cdot\n\n(q_{\\epsilon}^{u}-q_{\\epsilon}^{s})}{\\parallel(DH(q_{0}(-t_{0}))\n\n\\parallel}. ~\\ ~\\ {{(5)}}}}" }, { "math_id": 70, "text": "\\mathcal{N}(\\epsilon_{0})" }, { "math_id": 71, "text": "\\epsilon = 0," }, { "math_id": 72, "text": "d(t_{0},\\phi_{0},\\epsilon) = d(t_{0},\\phi_{0},0) + \\epsilon\n\n\\frac{\\partial d}{\\partial \\epsilon}(t_{0},\\phi_{0},0) +\n\n\\mathcal{O}(\\epsilon^{2})," }, { "math_id": 73, "text": "d(t_{0},\\phi_{0},0)=0" }, { "math_id": 74, "text": "\\frac{\\partial d}{\\partial \\epsilon}(t_{0},\\phi_{0},0) =\n\n\\dfrac{DH(q_{0}(-t_{0})) \\cdot\n\n\\left(\\frac{\\partial q_{\\epsilon}^{u}}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n\n-\\frac{\\partial q_{\\epsilon}^{s}}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\\right)\n\n}{\\parallel(DH(q_{0}(-t_{0}))\\parallel }." }, { "math_id": 75, "text": "d(t_{0},\\phi_{0},\\epsilon) = 0," }, { "math_id": 76, "text": "{{M(t_{0},\\phi_0) \\equiv DH(q_{0}(-t_{0})) \\cdot\n\n\\left(\\frac{\\partial q_{\\epsilon}^{u}}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n\n-\\frac{\\partial q_{\\epsilon}^{s}}{\\partial \\epsilon}\n\n\\Big |_{\\epsilon=0}\\right), ~\\ ~\\ {{(6)}}}}" }, { "math_id": 77, "text": "DH(q_{0}(-t_{0})) = \\left(\n\n\\dfrac{\\partial H}{\\partial x}(q_{0}(-t_{0})),\n\n\\dfrac{\\partial H}{\\partial y}(q_{0}(-t_{0}))\\right)" }, { "math_id": 78, "text": "M(t_{0},\\phi_0) = 0 \\Rightarrow \\dfrac{\\partial d}{\\partial \\epsilon}\n\n(t_{0},\\phi_{0}) = 0." 
}, { "math_id": 79, "text": "{{M(t;t_{0},\\phi_0) \\equiv DH(q_{0}(t-t_{0})) \\cdot\n\n\\left(\\frac{\\partial q_{\\epsilon}^{u}(t)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n\n-\\frac{\\partial q_{\\epsilon}^{s}(t)}{\\partial \\epsilon}\n\n\\Big |_{\\epsilon=0}\\right) ~\\ ~\\ {{(7)}}}}" }, { "math_id": 80, "text": "q_\\epsilon^u(t)" }, { "math_id": 81, "text": "q_\\epsilon^s(t)" }, { "math_id": 82, "text": "q_\\epsilon^u" }, { "math_id": 83, "text": "q_\\epsilon^s" }, { "math_id": 84, "text": "{{\\dfrac{d}{dt} \\left(DH(q_{0}(t-t_{0})) \\cdot\n\n\\frac{\\partial q_{\\epsilon}^{u,s}(t)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n\n\\right)\n=\n\\left(D^2H(q_{0}(t-t_{0})\\dot{q_0}(t-t_0))\\right) \\cdot\n\n\\frac{\\partial q_{\\epsilon}^{u,s}(t)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n+\nDH(q_{0}(t-t_{0})) \\cdot\n\n\\dfrac{d}{dt}\\frac{\\partial q_{\\epsilon}^{u,s}(t)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n. ~\\ ~\\ {{(8)}}}}" }, { "math_id": 85, "text": "\\dot{q}_{\\epsilon}^{u,s}(t)\n= JDH(q_{\\epsilon}^{u,s}(t))\n+ \\epsilon g(q_{\\epsilon}^{u,s}(t),t,\\epsilon)," }, { "math_id": 86, "text": "{{\\dfrac{d}{dt}\\frac{\\partial q_{\\epsilon}^{u,s}}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n= JD^2H(q_0(t-t_0))\\dfrac{\\partial q_{\\epsilon}^{u,s}}{\\partial\\epsilon}\\Big |_{\\epsilon=0}\n+ g(q_0(t-t_0),t,0) ~\\ ~\\ {{(9)}}}}" }, { "math_id": 87, "text": "{{\\begin{align}{ll} \\dfrac{d}{dt} \\left(DH(q_{0}(t-t_{0})) \\cdot\n\n\\dfrac{\\partial q_{\\epsilon}^{u,s}}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n\n\\right)\n=&\nD^2H(q_{0}(t-t_{0}))JDH(q_0(t-t_0) \\cdot\n\n\\dfrac{\\partial q_{\\epsilon}^{u,s}(t)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n\\\\&+\n\\ DH(q_{0}(t-t_{0})) \\cdot\nJD^2H(q_0(t-t_0))\\dfrac{\\partial q_{\\epsilon}^{u,s}(t)}{\\partial\\epsilon}\\Big |_{\\epsilon=0}\n\\\\&+\n\\ DH(q_{0}(t-t_{0})) \\cdot\ng(q_0(t-t_0),\\phi(t),0)\\end{align} ~\\ ~\\ {{(10)}}}}" }, { "math_id": 88, "text": "g(q,t,\\epsilon)" }, { "math_id": 89, "text": "g(q,\\phi,\\epsilon)" }, { "math_id": 90, "text": "{{\\begin{array}{lcl}DH(q_{0}(\\tau-t_{0})) \\cdot\n\\dfrac{\\partial q_{\\epsilon}^{u}(\\tau)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n& =\n\\displaystyle \\int_{-\\infty}^{\\tau} DH(q_{0}(t-t_{0})) \\cdot\ng(q_0(t-t_0),\\omega t+\\phi_0,0) dt\n\\\\\nDH(q_{0}(\\tau-t_{0})) \\cdot\n\\dfrac{\\partial q_{\\epsilon}^{s}(\\tau)}{\\partial \\epsilon}\\Big |_{\\epsilon=0}\n& =\n\\displaystyle \\int_{\\infty}^{\\tau} DH(q_{0}(t-t_{0})) \\cdot\ng(q_0(t-t_0),\\omega t+\\phi_0,0) dt \\end{array} ~\\ ~\\ {{(11)}}}}" }, { "math_id": 91, "text": "q_{\\epsilon}^{u,s}(t) = \\gamma(t)" }, { "math_id": 92, "text": "\\frac{\\partial q_{\\epsilon}^{u,s}(t)}{\\partial\\epsilon} = 0" }, { "math_id": 93, "text": "\\tau=0," }, { "math_id": 94, "text": "{{M(t_{0},\\phi_0) = \\int_{-\\infty}^{+\\infty} DH(q_{0}(t)) \\cdot\n\ng(q_{0}(t), \\omega t + \\omega t_0 + \\phi_0, 0) dt. ~\\ ~\\ {{(12)}}}}" }, { "math_id": 95, "text": "(t_0, \\phi_0) = (\\bar{t_0},\\bar{\\phi_0})" }, { "math_id": 96, "text": "M(\\bar{t_0},\\bar{\\phi_0}) = 0" }, { "math_id": 97, "text": "\\left.\\frac{\\partial M}{\\partial t_0}\\right|_{(\\bar{t_0},\\bar{\\phi_0})} \\neq 0" }, { "math_id": 98, "text": "(q_{0}(-t_0) + \\mathcal{O}(\\epsilon), \\phi_0)." }, { "math_id": 99, "text": "M(t_{0},\\phi_0) \\neq 0" }, { "math_id": 100, "text": "(t_{0},\\phi_0) \\in \\mathbb{R}^{1} \\times \\mathbb{S}^{1}" }, { "math_id": 101, "text": "W^{s}(\\gamma_{\\epsilon}(t)) \\cap W^{u}(\\gamma_{\\epsilon}(t)) = \\emptyset." 
}, { "math_id": 102, "text": " P : M \\rightarrow M" }, { "math_id": 103, "text": "M" }, { "math_id": 104, "text": "\\bar{x}" }, { "math_id": 105, "text": "W^{s}(\\bar{x})" }, { "math_id": 106, "text": "W^{u}(\\bar{x})" }, { "math_id": 107, "text": "x_0 \\neq \\bar{x}" }, { "math_id": 108, "text": "W^{s}(\\bar{x}) \\perp W^{u}(\\bar{x})," }, { "math_id": 109, "text": "dimW^{s} + dimW^{u}=n." }, { "math_id": 110, "text": "\\Lambda" }, { "math_id": 111, "text": "P" } ]
https://en.wikipedia.org/wiki?curid=8385078
8386232
Bicycle and motorcycle geometry
Collection of key measurements that define a particular bike configuration Bicycle and motorcycle geometry is the collection of key measurements (lengths and angles) that define a particular bike configuration. Primary among these are wheelbase, steering axis angle, fork offset, and trail. These parameters have a major influence on how a bike handles. Wheelbase. The wheelbase is the "horizontal" distance between the centers (or the ground contact points) of the front and rear wheels. Wheelbase is a function of rear frame length, steering axis angle, and fork offset. It is similar to the term wheelbase used for automobiles and trains. Wheelbase has a major influence on the longitudinal stability of a bike, along with the height of the center of mass of the combined bike and rider. Short bikes are much more suitable for performing wheelies and stoppies. Steering axis angle. The steering axis angle is called "caster angle" when measured from vertical axis or "head angle" when measured from horizontal axis. The "steering axis" is the axis about which the steering mechanism (fork, handlebars, front wheel, etc.) pivots. The steering axis angle usually matches the angle of the head tube. Bicycle head angle. In "bicycles", the steering axis angle is measured from the horizontal and called the "head angle"; a 90° head angle would be vertical. For example, Lemond offers: Due to front fork suspension, modern mountain bikes—as opposed to road bikes—tend to have slacker head tube angles, generally around 70°, although they can be as low as 62° (depending on frame geometry setting). At least one manufacturer, Cane Creek, offers an after-market threadless headset that enables changing the head angle. Motorcycle rake angle. In "motorcycles", the steering axis angle is measured from the vertical and called the "caster angle", "rake angle", or just "rake"; a 0° rake is therefore vertical. For example, Moto Guzzi offers: Fork offset. The "fork offset" is the "perpendicular" distance from the steering axis to the center of the front wheel. In "bicycles", fork offset is also called "fork rake". Road racing bicycle forks have an offset of . The offset may be implemented by curving the forks, adding a perpendicular tab at their lower ends, offsetting the fork blade sockets of the fork crown ahead of the steerer, or by mounting the forks into the crown at an angle to the steer tube. The development of forks with curves is attributed to George Singer. In "motorcycles" with telescopic fork tubes, fork offset can be implemented by either an "offset" in the triple tree, adding a "triple tree rake" (usually measured in degrees from 0) to the fork tubes as they mount into the triple tree, or a combination of the two. Other, less-common motorcycle forks, such as trailing link or leading link forks, can implement offset by the length of link arms. Fork length. The length of a fork is measured parallel to the steer tube from the lower fork crown bearing to the axle center. Trail. "Trail" is the "horizontal" distance from where the front wheel touches the ground to where the steering axis intersects the ground. The measurement is considered "positive" if the front wheel ground contact point is behind (towards the rear of the bike) the steering axis intersection with the ground. Most bikes have positive trail, though a few, such as the two-mass-skate bicycle and the Python Lowracer, have negative trail. 
Trail is often cited as an important determinant of bicycle handling characteristics, and is sometimes listed in bicycle manufacturers' geometry data. Wilson and Papodopoulos argue that mechanical trail may be a more important and informative variable, although both expressions describe very nearly the same thing. Trail is a function of steering axis angle, fork offset, and wheel size. Their relationship can be described by this formula: formula_0 and formula_1 where formula_2 is wheel radius, formula_3 is the bicycle head angle measured from the horizontal, formula_4 is the motorcycle rake angle measured from the vertical, and formula_5 is the fork offset. Trail can be increased by increasing the wheel size, decreasing or slackening the head angle, or decreasing the fork offset. Trail decreases as head angle increases (becomes steeper), as fork offset increases, or as wheel diameter decreases. Motorcyclists tend to speak of trail in relation to rake angle. The larger the rake angle, the larger the trail. Note that, on a bicycle, as rake angle increases, head angle decreases. Trail can vary as the bike leans or steers. In the case of traditional geometry, trail decreases (and wheelbase increases if measuring distance between ground contact points and not hubs) as the bike leans and steers in the direction of the lean. Trail can also vary as the suspension activates, in response to braking for example. As telescopic forks compress due to load transfer during braking, the trail and the wheelbase both decrease. At least one motorcycle, the MotoCzysz C1, has a fork with adjustable trail, from . Mechanical trail. "Mechanical trail" is the "perpendicular" distance between the steering axis and the point of contact between the front wheel and the ground. It may also be referred to as "normal trail". In each case, its value is equal to the numerator in the expression for trail. formula_6, and formula_7 Although the scientific understanding of bicycle steering remains incomplete, we do have a good overall understanding of the interdependent dynamic complexities. Mechanical trail is certainly one of the most important variables in determining the handling characteristics of a bicycle. A trail of zero may give some advantages: Skilled and alert riders may have more path control if the mechanical trail is lower while a higher trail is known to make a bicycle easier to ride "no hands" and thus more subjectively stable. Wheel flop. Wheel flop refers to steering behavior in which a bicycle or motorcycle tends to turn more than expected due to the front wheel "flopping" over when the handlebars are rotated. Wheel flop is caused by the lowering of the front end of a bicycle or motorcycle as the handlebars are rotated away from the "straight ahead" position. This lowering phenomenon occurs according to the following equation: formula_8 where: formula_9 = "wheel flop factor," the distance that the center of the front wheel axle is lowered when the handlebars are rotated from the straight ahead position to a position 90 degrees away from straight ahead formula_10 = trail formula_11 = head angle Because wheel flop involves the lowering of the front end of a bicycle or motorcycle, the force due to gravity will tend to cause handlebar rotation to continue with increasing rotational velocity and without additional rider input on the handlebars. 
Once the handlebars are turned, the rider needs to apply torque to the handlebars to bring them back to the straight ahead position and bring the front end of the bicycle or motorcycle back up to the original height. The rotational inertia of the front wheel will lessen the severity of the wheel flop effect because it results in opposing torque being required to initiate or accelerate changing the direction of the front wheel. According to the equation listed above, increasing the trail and/or decreasing the head angle will increase the wheel flop factor on a bicycle or motorcycle, which will increase the torque required to bring the handlebars back to the straight ahead position and increase the vehicle's tendency to veer suddenly off the line of a curve. Also, increasing the weight borne by the front wheel of the vehicle, either by increasing the mass of the vehicle, rider and cargo or by changing the weight ratio to shift the center of mass forward, will increase the severity of the wheel flop effect. Increasing the rotational inertia of the front wheel by increasing the speed of the vehicle and the rotational speed of the wheel will tend to counter the wheel flop effect. A certain amount of wheel flop is generally considered to be desirable. Bicycle Quarterly magazine states, "A bike with too little wheel flop will be sluggish in its reactions to handlebar inputs. A bike with too much wheel flop will tend to veer off its line at low and moderate speeds." Modifications. Forks may be modified or replaced, thereby altering the geometry of the bike. Changing fork length. Increasing the length of the fork, for example by switching from rigid to suspension, raises the front of a bicycle and thus decreases its head angle. Lengthening the fork would have the opposite effect on the rake of a motorcycle, since rake is measured in the opposite direction. A rule of thumb is a 10 mm change in fork length gives a half-degree change in the steering axis angle. Changing fork offset. Increasing the offset of a fork reduces the trail, and if performed on an existing fork by bending without lengthening the blades, shortens the fork. Legal requirements. The state of North Dakota (USA) has minimum and maximum requirements on rake and trail for "manufacture, sale, and safe operation of a motorcycle upon public highways." "4. All motorcycles, except three-wheel motorcycles, must meet the following specifications in relationship to front wheel geometry: MAXIMUM: Rake: 45 degrees; Trail: positive MINIMUM: Rake: 20 degrees; Trail: positive Manufacturer's specifications must include the specific rake and trail for each motorcycle or class of motorcycles and the terms "rake" and "trail" must be defined by the director by rules adopted pursuant to chapter 28–32." Other aspects. For other aspects of geometry, such as ergonomics or intended use, see the bicycle frame article. For motorcycles the other main geometric parameters are seat height and relative foot peg and handlebar placement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
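The trail, mechanical trail, and wheel flop expressions above can be combined into a small calculator. The Python sketch below uses illustrative road-bike numbers; the wheel radius, head angle and fork offset are assumptions, not manufacturer data:

```python
import math

def bicycle_trail(wheel_radius, head_angle_deg, fork_offset):
    """Trail = (Rw*cos(Ah) - Of) / sin(Ah); head angle measured from horizontal."""
    a = math.radians(head_angle_deg)
    return (wheel_radius * math.cos(a) - fork_offset) / math.sin(a)

def mechanical_trail(wheel_radius, head_angle_deg, fork_offset):
    """Mechanical (normal) trail = Rw*cos(Ah) - Of."""
    a = math.radians(head_angle_deg)
    return wheel_radius * math.cos(a) - fork_offset

def wheel_flop_factor(trail, head_angle_deg):
    """Wheel flop factor f = b*sin(theta)*cos(theta), with b = trail."""
    a = math.radians(head_angle_deg)
    return trail * math.sin(a) * math.cos(a)

# Illustrative road-bike numbers (assumptions, not manufacturer data):
Rw, Ah, Of = 0.336, 73.0, 0.045   # wheel radius (m), head angle (deg), offset (m)
t = bicycle_trail(Rw, Ah, Of)
print(f"trail            ~ {1000 * t:.0f} mm")                            # ~56 mm
print(f"mechanical trail ~ {1000 * mechanical_trail(Rw, Ah, Of):.0f} mm")  # ~53 mm
print(f"wheel flop       ~ {1000 * wheel_flop_factor(t, Ah):.0f} mm")      # ~16 mm
```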
[ { "math_id": 0, "text": "\\text{Trail}_\\text{bicycle} = \\frac{R_w \\cos(A_h) - O_f}{\\sin(A_h)}" }, { "math_id": 1, "text": "\\text{Trail}_\\text{motorcycle} = \\frac{R_w \\sin(A_r) - O_f}{\\cos(A_r)}" }, { "math_id": 2, "text": "R_w" }, { "math_id": 3, "text": "A_h" }, { "math_id": 4, "text": "A_r" }, { "math_id": 5, "text": "O_f" }, { "math_id": 6, "text": "\\text{MechanicalTrail}_\\text{bicycle} = R_w \\cos(A_h) - O_f" }, { "math_id": 7, "text": "\\text{MechanicalTrail}_\\text{motorcycle} = R_w \\sin(A_r) - O_f" }, { "math_id": 8, "text": "f = b \\sin\\theta \\cos \\theta" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "b" }, { "math_id": 11, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=8386232
8387306
Heat capacity rate
The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. It is typically denoted as "C", listed from empirical data experimentally determined in various reference works, and is typically stated as a comparison between a hot and a cold fluid, "C"h and "C"c either graphically, or as a linearized equation. It is an important quantity in heat exchanger technology common to either heating or cooling systems and needs, and the solution of many real world problems such as the design of disparate items as different as a microprocessor and an internal combustion engine. Basis. A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, wherein one fluid usually of dissimilar nature is used to cool another fluid such as the hot gases or steam cooled in a power plant by a heat sink from a water source—a case of dissimilar fluids, or for specifying the minimal cooling needs of heat transfer across boundaries, such as in air cooling. As the ability of a fluid to resist change in temperature itself changes as heat transfer occurs changing its net average instantaneous temperature, it is a quantity of interest in designs which have to compensate for the fact that it varies continuously in a dynamic system. While itself varying, such change must be taken into account when designing a system for overall behavior to stimuli or likely environmental conditions, and in particular the worst-case conditions encountered under the high stresses imposed near the limits of operability— for example, an air-cooled engine in a desert climate on a very hot day. If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through a heat exchanger, the hot fluid would have a very small change in temperature while the cold fluid would heat up a significant amount. If the cool fluid has a much lower heat capacity rate, that is desirable. If they were equal, they would both change more or less temperature equally, assuming equal mass-flow per unit time through a heat exchanger. In practice, a cooling fluid which has both a higher specific heat capacity and a lower heat capacity rate is desirable, accounting for the pervasiveness of water cooling solutions in technology—the polar nature of the water molecule creates some distinct sub-atomic behaviors favorable in practice. formula_0 where C = heat capacity rate of the fluid of interest in W⋅K−1,&lt;br&gt; dm/dt = mass flow rate of the fluid of interest and&lt;br&gt; cp = specific heat of the fluid of interest. See also. &lt;templatestyles src="Div col/styles.css"/&gt;
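A minimal numerical sketch of the defining equation, using assumed (illustrative) fluid properties and flow rates rather than tabulated reference data:

```python
def heat_capacity_rate(mass_flow_kg_per_s, cp_J_per_kg_K):
    """C = cp * dm/dt, returned in W/K."""
    return mass_flow_kg_per_s * cp_J_per_kg_K

# Illustrative streams (assumed values): water and air at 0.2 kg/s each.
C_water = heat_capacity_rate(0.2, 4186.0)   # ~837 W/K
C_air   = heat_capacity_rate(0.2, 1005.0)   # ~201 W/K
print(C_water, C_air)
# If these two streams met in a heat exchanger, the air side, having the
# smaller heat capacity rate, would see the larger temperature change.
```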
[ { "math_id": 0, "text": "C=c_p\\frac{dm}{dt}" } ]
https://en.wikipedia.org/wiki?curid=8387306
8389168
Averch–Johnson effect
The Averch–Johnson effect is the tendency of regulated companies to engage in excessive amounts of capital accumulation in order to expand the volume of their profits. If a company's profit-to-capital ratio is regulated at a certain percentage, then there is a strong incentive for the company to over-invest in order to increase profits overall. This investment goes beyond any optimal efficiency point for capital that the company may have calculated, as higher profit is almost always desired over and above efficiency. Excessive capital accumulation under rate-of-return regulation is informally known as gold plating, although the Averch-Johnson effect of overcapitalization does not in general involve "gold-plating". Mathematical derivation. Suppose that a regulated firm wishes to maximize its profit: formula_0 where formula_1 is the revenue function, formula_2 is the firm's capital stock, formula_3 is the firm's labor stock, formula_4 is the wage rate, and formula_5 is the cost of capital. The firm's profit is constrained such that formula_6 where formula_7 is the allowable rate of return. Assume that formula_8. We may then form a functional to find the firm's optimal action: formula_9 where formula_10 is the Lagrange multiplier (also known as the shadow price). The derivatives of this functional are: formula_11 Taken together, this implies that: formula_12 The ratio of the marginal product of capital to the marginal product of labor is: formula_13 Since this effective cost of capital is perceived to be less than the market cost of capital, the firm will tend to overinvest in capital. References. <templatestyles src="Reflist/styles.css" />
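A small numerical sketch of the result above: for any allowed return greater than the market cost of capital, and assuming the multiplier lies strictly between 0 and 1 (the standard case when the constraint binds, an assumption not spelled out in the text), the regulated first-order condition pushes the marginal product of capital below the market cost of capital, which is the over-investment incentive. The numbers are illustrative only:

```python
def regulated_marginal_product_of_capital(r, sigma, lam):
    """First-order condition from the derivation above: R_K = (r - lam*sigma) / (1 - lam)."""
    return (r - lam * sigma) / (1.0 - lam)

# Illustrative numbers (assumptions): market cost of capital r, allowed
# rate of return sigma > r, and a shadow price lam strictly between 0 and 1.
r, sigma = 0.08, 0.12
for lam in (0.1, 0.3, 0.5):
    RK = regulated_marginal_product_of_capital(r, sigma, lam)
    print(f"lam = {lam:.1f}:  R_K = {RK:.4f}  versus market r = {r:.2f}")
# R_K comes out below r in every case, so the firm keeps adding capital past
# the unregulated optimum, which is the over-capitalization described above.
```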
[ { "math_id": 0, "text": "\\pi = R(K,L) - wL - rK" }, { "math_id": 1, "text": "R(K,L)" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "w" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "\\sigma = {R-wL\\over{K}}" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "\\sigma > r" }, { "math_id": 9, "text": "J = R(K,L)-wL-rK - \\lambda[R(K,L)-wL-\\sigma K]" }, { "math_id": 10, "text": "\\lambda" }, { "math_id": 11, "text": "\\begin{aligned}\n{\\partial J\\over{\\partial K}} &= (1-\\lambda)R_{K} - r + \\lambda \\sigma \\\\\n{\\partial J\\over{\\partial L}} &= (1-\\lambda)R_{L} - (1-\\lambda)w\n\\end{aligned}" }, { "math_id": 12, "text": "R_{K} = {r-\\lambda \\sigma\\over{1-\\lambda}}, \\quad R_{L} = w" }, { "math_id": 13, "text": "{R_{K}\\over{R_{L}}} = {r-\\alpha\\over{w}}, \\quad \\alpha = {\\lambda\\over{1-\\lambda}}(\\sigma - r)" } ]
https://en.wikipedia.org/wiki?curid=8389168
83909
Graham's law
Graham's law of diffusion Graham's law of effusion (also called Graham's law of diffusion) was formulated by Scottish physical chemist Thomas Graham in 1848. Graham found experimentally that the rate of effusion of a gas is inversely proportional to the square root of the molar mass of its particles. This formula is stated as: formula_0, where: Rate1 is the rate of effusion for the first gas. (volume or number of moles per unit time). Rate2 is the rate of effusion for the second gas. "M1" is the molar mass of gas 1 "M2" is the molar mass of gas 2. Graham's law states that the rate of diffusion or of effusion of a gas is inversely proportional to the square root of its molecular weight. Thus, if the molecular weight of one gas is four times that of another, it would diffuse through a porous plug or escape through a small pinhole in a vessel at half the rate of the other (heavier gases diffuse more slowly). A complete theoretical explanation of Graham's law was provided years later by the kinetic theory of gases. Graham's law provides a basis for separating isotopes by diffusion—a method that came to play a crucial role in the development of the atomic bomb. Graham's law is most accurate for molecular effusion which involves the movement of one gas at a time through a hole. It is only approximate for diffusion of one gas in another or in air, as these processes involve the movement of more than one gas. In the same conditions of temperature and pressure, the molar mass is proportional to the mass density. Therefore, the rates of diffusion of different gases are inversely proportional to the square roots of their mass densities: formula_1 where: "ρ" is the mass density. Examples. First Example: Let gas 1 be H2 and gas 2 be O2. (This example is solving for the ratio between the rates of the two gases) formula_2 Therefore, hydrogen molecules effuse four times faster than those of oxygen. Graham's Law can also be used to find the approximate molecular weight of a gas if one gas is a known species, and if there is a specific ratio between the rates of two gases (such as in the previous example). The equation can be solved for the unknown molecular weight. formula_3 Graham's law was the basis for separating uranium-235 from uranium-238 found in natural uraninite (uranium ore) during the Manhattan Project to build the first atomic bomb. The United States government built a gaseous diffusion plant at the Clinton Engineer Works in Oak Ridge, Tennessee, at the cost of $479 million (equivalent to $ in 2023). In this plant, uranium from uranium ore was first converted to uranium hexafluoride and then forced repeatedly to diffuse through porous barriers, each time becoming a little more enriched in the slightly lighter uranium-235 isotope. Second Example: An unknown gas diffuses 0.25 times as fast as He. What is the molar mass of the unknown gas? Using the formula of gaseous diffusion, we can set up this equation. formula_4 Which is the same as the following because the problem states that the rate of diffusion of the unknown gas relative to the helium gas is 0.25. formula_5 Rearranging the equation results in formula_6 History. Graham's research on the diffusion of gases was triggered by his reading about the observations of German chemist Johann Döbereiner that hydrogen gas diffused out of a small crack in a glass bottle faster than the surrounding air diffused in to replace it. Graham measured the rate of diffusion of gases through plaster plugs, through very fine tubes, and through small orifices. 
In this way he slowed down the process so that it could be studied quantitatively. He first stated in 1831 that the rate of effusion of a gas is inversely proportional to the square root of its density, and later in 1848 showed that this rate is inversely proportional to the square root of the molar mass. Graham went on to study the diffusion of substances in solution and in the process made the discovery that some apparent solutions actually are suspensions of particles too large to pass through a parchment filter. He termed these materials colloids, a term that has come to denote an important class of finely divided materials. Around the time Graham did his work, the concept of molecular weight was being established largely through the measurements of gases. Daniel Bernoulli suggested in 1738 in his book "Hydrodynamica" that heat increases in proportion to the velocity, and thus kinetic energy, of gas particles. Italian physicist Amedeo Avogadro also suggested in 1811 that equal volumes of different gases contain equal numbers of molecules. Thus, the relative molecular weights of two gases are equal to the ratio of weights of equal volumes of the gases. Avogadro's insight together with other studies of gas behaviour provided a basis for later theoretical work by Scottish physicist James Clerk Maxwell to explain the properties of gases as collections of small particles moving through largely empty space. Perhaps the greatest success of the kinetic theory of gases, as it came to be called, was the discovery that for gases, the temperature as measured on the Kelvin (absolute) temperature scale is directly proportional to the average kinetic energy of the gas molecules. Graham's law for diffusion could thus be understood as a consequence of the molecular kinetic energies being equal at the same temperature. The rationale of the above can be summed up as follows: Kinetic energy of each type of particle (in this example, Hydrogen and Oxygen, as above) within the system is equal, as defined by thermodynamic temperature: formula_7 Which can be simplified and rearranged to: formula_8 or: formula_9 Ergo, when constraining the system to the passage of particles through an area, Graham's Law appears as written at the start of this article. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
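The two worked examples in the Examples section above can be reproduced in a few lines of Python; the only inputs are the molar masses and the stated rate ratio:

```python
import math

def effusion_rate_ratio(M1, M2):
    """Graham's law: rate1 / rate2 = sqrt(M2 / M1), molar masses in g/mol."""
    return math.sqrt(M2 / M1)

# First example: H2 (2 g/mol) versus O2 (32 g/mol)
print(effusion_rate_ratio(2.0, 32.0))   # 4.0, hydrogen effuses four times faster

# Second example: an unknown gas effusing 0.25 times as fast as He (4 g/mol)
rate_ratio = 0.25                       # rate_unknown / rate_He
M_unknown = 4.0 / rate_ratio**2         # M_unknown = M_He / (rate ratio)^2
print(M_unknown)                        # 64.0 g/mol
```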
[ { "math_id": 0, "text": "{\\mbox{Rate}_1 \\over \\mbox{Rate}_2}=\\sqrt{M_2 \\over M_1}" }, { "math_id": 1, "text": "{\\mbox{r}} \\propto {\\mbox{1}\\over\\sqrt{\\rho}}" }, { "math_id": 2, "text": "{\\mbox{Rate H}_2 \\over \\mbox{Rate O}_2} =\\sqrt{M(O_2) \\over M(H_2)} ={\\sqrt{32} \\over \\sqrt{2}}= \\sqrt{16} = 4" }, { "math_id": 3, "text": "{M_2}={M_1 \\mbox{Rate}_1^2 \\over \\mbox{Rate}_2^2}" }, { "math_id": 4, "text": "\\frac{\\mathrm{Rate}_\\mathrm{unknown}}{\\mathrm{Rate}_\\mathrm{He}} = \\frac{\\sqrt{4}}{\\sqrt{M_2}}" }, { "math_id": 5, "text": "0.25 = \\frac{\\sqrt{4}}{\\sqrt{M_2}}" }, { "math_id": 6, "text": "M = (\\frac{\\sqrt{4}}{0.25})^2 = \\frac{\\mathrm{64g}}{\\mathrm{mol}}" }, { "math_id": 7, "text": " \\frac{1}{2}m_{\\rm H_{2}}v^{2}_{\\rm H_{2}}=\\frac{1}{2}m_{\\rm O_{2}}v^{2}_{\\rm O_{2}} " }, { "math_id": 8, "text": " \\frac{v^{2}_{\\rm H_{2}}}{v^{2}_{\\rm O_{2}}} = \\frac{m_{\\rm O_{2}}}{m_{\\rm H_{2}}} " }, { "math_id": 9, "text": " \\frac{v_{\\mathrm H_{2}}}{v_{\\mathrm O_{2}}} = \\sqrt{\\frac{m_{\\mathrm O_{2}}}{m_{\\mathrm H_{2}}}} " } ]
https://en.wikipedia.org/wiki?curid=83909
839259
Active electronically scanned array
Type of phased array radar An active electronically scanned array (AESA) is a type of phased array antenna, which is a computer-controlled antenna array in which the beam of radio waves can be electronically steered to point in different directions without moving the antenna. In the AESA, each antenna element is connected to a small solid-state transmit/receive module (TRM) under the control of a computer, which performs the functions of a transmitter and/or receiver for the antenna. This contrasts with a passive electronically scanned array (PESA), in which all the antenna elements are connected to a single transmitter and/or receiver through phase shifters under the control of the computer. AESA's main use is in radar, and these are known as active phased array radar (APAR). The AESA is a more advanced, sophisticated, second-generation of the original PESA phased array technology. PESAs can only emit a single beam of radio waves at a single frequency at a time. The PESA must utilize a Butler matrix if multiple beams are required. The AESA can radiate multiple beams of radio waves at multiple frequencies simultaneously. AESA radars can spread their signal emissions across a wider range of frequencies, which makes them more difficult to detect over background noise, allowing ships and aircraft to radiate powerful radar signals while still remaining stealthy, as well as being more resistant to jamming. Hybrids of AESA and PESA can also be found, consisting of subarrays that individually resemble PESAs, where each subarray has its own RF front end. Using a hybrid approach, the benefits of AESA (e.g., multiple independent beams) can be realized at a lower cost compared to pure AESA. History. Bell Labs proposed replacing the Nike Zeus radars with a phased array system in 1960, and was given the go-ahead for development in June 1961. The result was the Zeus Multi-function Array Radar (ZMAR), an early example of an active electronically steered array radar system. ZMAR became MAR when the Zeus program ended in favor of the Nike-X system in 1963. The MAR (Multi-function Array Radar) was made of a large number of small antennas, each one connected to a separate computer-controlled transmitter or receiver. Using a variety of beamforming and signal processing steps, a single MAR was able to perform long-distance detection, track generation, discrimination of warheads from decoys, and tracking of the outbound interceptor missiles. MAR allowed the entire battle over a wide space to be controlled from a single site. Each MAR, and its associated battle center, would process tracks for hundreds of targets. The system would then select the most appropriate battery for each one, and hand off particular targets for them to attack. One battery would normally be associated with the MAR, while others would be distributed around it. Remote batteries were equipped with a much simpler radar whose primary purpose was to track the outgoing Sprint missiles before they became visible to the potentially distant MAR. These smaller Missile Site Radars (MSR) were passively scanned, forming only a single beam instead of the MAR's multiple beams. While MAR was ultimately successful, the cost of the system was enormous. When the ABM problem became so complex that even a system like MAR could no longer deal with realistic attack scenarios, the Nike-X concept was abandoned in favor of much simpler concepts like the Sentinel program, which did not use MAR. A second example, MAR-II, was abandoned in-place on Kwajalein Atoll. 
The first Soviet APAR, the 5N65, was developed in 1963–1965 as a part of the S-225 ABM system. After some modifications in the system concept in 1967 it was built at Sary Shagan Test Range in 1970–1971 and nicknamed Flat Twin in the West. Four years later another radar of this design was built on Kura Test Range, while the S-225 system was never commissioned. US based manufacturers of the AESA radars used in the F-22 and Super Hornet include Northrop Grumman and Raytheon. These companies also design, develop and manufacture the transmit/receive modules which comprise the 'building blocks' of an AESA radar. The requisite electronics technology was developed in-house via Department of Defense research programs such as MMIC Program. In 2016 the Congress funded a military industry competition to produce new radars for two dozen National Guard fighter aircraft. Basic concept. Radar systems generally work by connecting an antenna to a powerful radio transmitter to emit a short pulse of signal. The transmitter is then disconnected and the antenna is connected to a sensitive receiver which amplifies any echos from target objects. By measuring the time it takes for the signal to return, the radar receiver can determine the distance to the object. The receiver then sends the resulting output to a display of some sort. The transmitter elements were typically klystron tubes or magnetrons, which are suitable for amplifying or generating a narrow range of frequencies to high power levels. To scan a portion of the sky, the radar antenna must be physically moved to point in different directions. Starting in the 1960s new solid-state devices capable of delaying the transmitter signal in a controlled way were introduced. That led to the first practical large-scale passive electronically scanned array (PESA), or simply phased array radar. PESAs took a signal from a single source, split it into hundreds of paths, selectively delayed some of them, and sent them to individual antennas. The radio signals from the separate antennas overlapped in space, and the interference patterns between the individual signals were controlled to reinforce the signal in certain directions, and mute it in all others. The delays could be easily controlled electronically, allowing the beam to be steered very quickly without moving the antenna. A PESA can scan a volume of space much quicker than a traditional mechanical system. Additionally, thanks to progress in electronics, PESAs added the ability to produce several active beams, allowing them to continue scanning the sky while at the same time focusing smaller beams on certain targets for tracking or guiding semi-active radar homing missiles. PESAs quickly became widespread on ships and large fixed emplacements in the 1960s, followed by airborne sensors as the electronics shrank. AESAs are the result of further developments in solid-state electronics. In earlier systems the transmitted signal was originally created in a klystron or traveling wave tube or similar device, which are relatively large. Receiver electronics were also large due to the high frequencies that they worked with. The introduction of gallium arsenide microelectronics through the 1980s served to greatly reduce the size of the receiver elements until effective ones could be built at sizes similar to those of handheld radios, only a few cubic centimeters in volume. The introduction of JFETs and MESFETs did the same to the transmitter side of the systems as well. 
It gave rise to amplifier-transmitters with a low-power solid-state waveform generator feeding an amplifier, allowing any radar so equipped to transmit on a much wider range of frequencies, to the point of changing operating frequency with every pulse sent out. Shrinking the entire assembly (the transmitter, receiver and antenna) into a single "transmitter-receiver module" (TRM) about the size of a carton of milk and arraying these elements produces an AESA. The primary advantage of an AESA over a PESA is the capability of the different modules to operate on different frequencies. Unlike the PESA, where the signal is generated at single frequencies by a small number of transmitters, in the AESA each module generates and radiates its own independent signal. This allows the AESA to produce numerous simultaneous "sub-beams" that it can recognize due to different frequencies, and actively track a much larger number of targets. AESAs can also produce beams that consist of many different frequencies at once, using post-processing of the combined signal from a number of TRMs to re-create a display as if there was a single powerful beam being sent. However, this means that the noise present in each frequency is also received and added. Advantages. AESAs add many capabilities of their own to those of the PESAs. Among these are: the ability to form multiple beams simultaneously, to use groups of TRMs for different roles concurrently, like radar detection, and, more importantly, their multiple simultaneous beams and scanning frequencies create difficulties for traditional, correlation-type radar detectors. Low probability of intercept. Radar systems work by sending out a signal and then listening for its echo off distant objects. Each of these paths, to and from the target, is subject to the inverse square law of propagation in both the transmitted signal and the signal reflected back. That means that a radar's received energy drops with the fourth power of the distance, which is why radar systems require high powers, often in the megawatt range, to be effective at long range. The radar signal being sent out is a simple radio signal, and can be received with a simple radio receiver. Military aircraft and ships have defensive receivers, called "radar warning receivers" (RWR), which detect when an enemy radar beam is on them, thus revealing the position of the enemy. Unlike the radar unit, which must send the pulse out and then receive its reflection, the target's receiver does not need the reflection and thus the signal drops off only as the square of distance. This means that the receiver is always at an advantage [neglecting disparity in antenna size] over the radar in terms of range - it will always be able to detect the signal long before the radar can see the target's echo. Since the position of the radar is extremely useful information in an attack on that platform, this means that radars generally must be turned off for lengthy periods if they are subject to attack; this is common on ships, for instance. Unlike the radar, which knows which direction it is sending its signal, the receiver simply gets a pulse of energy and has to interpret it. Since the radio spectrum is filled with noise, the receiver's signal is integrated over a short period of time, making periodic sources like a radar add up and stand out over the random background. The rough direction can be calculated using a rotating antenna, or similar passive array using phase or amplitude comparison. 
Typically RWRs store the detected pulses for a short period of time, and compare their broadcast frequency and pulse repetition frequency against a database of known radars. The direction to the source is normally combined with symbology indicating the likely purpose of the radar – airborne early warning and control, surface-to-air missile, etc. This technique is much less useful against a radar with a frequency-agile (solid state) transmitter. Since the AESA (or PESA) can change its frequency with every pulse (except when using doppler filtering), and generally does so using a random sequence, integrating over time does not help pull the signal out of the background noise. Moreover, a radar may be designed to extend the duration of the pulse and lower its peak power. An AESA or modern PESA will often have the capability to alter these parameters during operation. This makes no difference to the total energy reflected by the target but makes the detection of the pulse by an RWR system less likely. Nor does the AESA have any sort of fixed pulse repetition frequency, which can also be varied and thus hide any periodic brightening across the entire spectrum. Older generation RWRs are essentially useless against AESA radars, which is why AESAs are also known as "low probability of intercept radars". Modern RWRs must be made highly sensitive (small angles and bandwidths for individual antennas, low transmission loss and noise) and add successive pulses through time-frequency processing to achieve useful detection rates. High jamming resistance. Jamming is likewise much more difficult against an AESA. Traditionally, jammers have operated by determining the operating frequency of the radar and then broadcasting a signal on it to confuse the receiver as to which is the "real" pulse and which is the jammer's. This technique works as long as the radar system cannot easily change its operating frequency. When the transmitters were based on klystron tubes this was generally true, and radars, especially airborne ones, had only a few frequencies to choose among. A jammer could listen to those possible frequencies and select the one to be used to jam. Most radars using modern electronics are capable of changing their operating frequency with every pulse. This can make jamming less effective; although it is possible to send out broadband white noise to conduct barrage jamming against all the possible frequencies, this reduces the amount of jammer energy in any one frequency. An AESA has the additional capability of spreading its frequencies across a wide band even in a single pulse, a technique known as a "chirp". In this case, the jamming will be the same frequency as the radar for only a short period, while the rest of the radar pulse is unjammed. AESAs can also be switched to a receive-only mode, and use these powerful jamming signals to track its source, something that required a separate receiver in older platforms. By integrating received signals from the targets' own radar along with a lower rate of data from its own broadcasts, a detection system with a precise RWR like an AESA can generate more data with less energy. Some receive beamforming-capable systems, usually ground-based, may even discard a transmitter entirely. However, using a single receiving antenna only gives a direction. Obtaining a range and a target vector requires at least two physically separate passive devices for triangulation to provide instantaneous determinations, unless phase interferometry is used. 
Target motion analysis can estimate these quantities by incorporating many directional measurements over time, along with knowledge of the position of the receiver and constraints on the possible motion of the target. Other advantages. Since each element in an AESA is a powerful radio receiver, active arrays have many roles besides traditional radar. One use is to dedicate several of the elements to reception of common radar signals, eliminating the need for a separate radar warning receiver. The same basic concept can be used to provide traditional radio support, and with some elements also broadcasting, form a very high bandwidth data link. The F-35 uses this mechanism to send sensor data between aircraft in order to provide a synthetic picture of higher resolution and range than any one radar could generate. In 2007, tests by Northrop Grumman, Lockheed Martin, and L-3 Communications enabled the AESA system of a Raptor to act like a WiFi access point, able to transmit data at 548 megabits per second and receive at gigabit speed; this is far faster than the Link 16 system used by US and allied aircraft, which transfers data at just over 1 Mbit/s. To achieve these high data rates requires a highly directional antenna which AESA provides but which precludes reception by other units not within the antennas beamwidth, whereas like most Wi-Fi designs, Link-16 transmits its signal omni-directionally to ensure all units within range can receive the data. AESAs are also much more reliable than either PESAs or older designs. Since each module operates independently of the others, single failures have little effect on the operation of the system as a whole. Additionally, the modules individually operate at low powers, perhaps 40 to 60 watts, so the need for a large high-voltage power supply is eliminated. Replacing a mechanically scanned array with a fixed AESA mount (such as on the Boeing F/A-18E/F Super Hornet) can help reduce an aircraft's overall radar cross-section (RCS), but some designs (such as the Eurofighter Typhoon and Gripen NG) forgo this advantage in order to combine mechanical scanning with electronic scanning and provide a wider angle of total coverage. This high off-nose pointing allows the AESA equipped fighter to employ a crossing the T maneuver, often referred to as "beaming" in the context of air-to-air combat, against a mechanically scanned radar that would filter out the low closing speed of the perpendicular flight as ground clutter while the AESA swivels 40 degrees towards the target in order to keep it within the AESA's 60 degree off-angle limit. Limitations. With a half wavelength distance between the elements, the maximum beam angle is approximately formula_0°. With a shorter element distance, the highest field of view (FOV) for a flat phased array antenna is currently 120° (formula_1°), although this can be combined with mechanical steering as noted above. List of existing systems. Surface systems (land, maritime). The first AESA radar employed on an operational warship was the Japanese OPS-24 manufactured by Mitsubishi Electric introduced on the JDS Hamagiri (DD-155), the first ship of the latter batch of the Asagiri-class destroyer, launched in 1988. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
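As a numerical aside to the Limitations section above, the relationship between element spacing and usable scan angle can be sketched with the textbook grating-lobe criterion d ≤ λ/(1 + sin θ). This criterion is an assumption of the sketch rather than something stated in this article, and it only bounds where grating lobes appear; the practical ±45° to ±60° figures quoted above are tighter because element-pattern roll-off and scan loss also limit useful coverage.

```python
# Hedged sketch: grating-lobe-free scan limit versus element spacing, using the
# common approximation d/lambda <= 1/(1 + sin(theta_max)), i.e.
# theta_max = arcsin(lambda/d - 1).  Practical limits are narrower (see text).
import math

for spacing in (0.50, 0.53, 0.60, 0.70):        # element spacing in wavelengths
    arg = 1.0 / spacing - 1.0
    theta = 90.0 if arg >= 1.0 else math.degrees(math.asin(arg))
    print(f"d = {spacing:.2f} wavelengths -> grating-lobe-free scan up to about ±{theta:.1f}°")
```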
[ { "math_id": 0, "text": "\\pm 45" }, { "math_id": 1, "text": "\\pm 60" } ]
https://en.wikipedia.org/wiki?curid=839259
8393604
Clip coordinates
Coordinate system used in computer graphics The clip coordinate system is a homogeneous coordinate system in the graphics pipeline that is used for clipping. Objects' coordinates are transformed via a projection transformation into clip coordinates, at which point it may be efficiently determined on an object-by-object basis which portions of the objects will be visible to the user. In the context of OpenGL or Vulkan, the result of executing vertex processing shaders is considered to be in clip coordinates. All coordinates may then be divided by the formula_0 component of 3D homogeneous coordinates, in what is called the perspective division. More concretely, a point in clip coordinates is represented with four components, formula_1 and the following equality defines the relationship between the normalized device coordinates formula_2, formula_3 and formula_4 and clip coordinates, formula_5 Clip coordinates are convenient for clipping algorithms as points can be checked if their coordinates are outside of the viewing volume. For example, a coordinate formula_6 for a point is within the viewing volume if it satisfies the inequality formula_7. Polygons with vertices outside of the viewing volume may be clipped to fit within the volume. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
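A minimal sketch of the containment test and perspective division described above is given below. The sample point is arbitrary, the function names are not from any particular API, and note that some APIs use a [0, w] depth range rather than the symmetric range assumed here.

```python
# Minimal sketch of clip-space containment and the perspective division.
# The example point is made up; a [-w, w] range is assumed for all three axes.
def inside_view_volume(xc, yc, zc, wc):
    return all(-wc <= v <= wc for v in (xc, yc, zc))

def to_ndc(xc, yc, zc, wc):
    return (xc / wc, yc / wc, zc / wc)     # the perspective division

point = (1.0, -0.5, 2.0, 2.5)              # (x_c, y_c, z_c, w_c)
print(inside_view_volume(*point))          # True: each component lies in [-w_c, w_c]
print(to_ndc(*point))                      # (0.4, -0.2, 0.8) in normalized device coordinates
```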
[ { "math_id": 0, "text": "w" }, { "math_id": 1, "text": "\\begin{pmatrix}x_c\\\\y_c\\\\z_c\\\\w_c\\end{pmatrix}," }, { "math_id": 2, "text": "x_n" }, { "math_id": 3, "text": "y_n" }, { "math_id": 4, "text": "z_n" }, { "math_id": 5, "text": "\\begin{pmatrix}x_n\\\\y_n\\\\z_n\\end{pmatrix} = \\begin{pmatrix}x_c / w_c\\\\y_c / w_c\\\\z_c / w_c\\end{pmatrix}." }, { "math_id": 6, "text": "x_c" }, { "math_id": 7, "text": "-w_c \\leq x_c \\leq w_c" } ]
https://en.wikipedia.org/wiki?curid=8393604
8393831
Solving quadratic equations with continued fractions
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is formula_0 where "a" ≠ 0. The quadratic equation on a number formula_1 can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, which is an algebraic fraction that can be evaluated as a decimal fraction only by applying an additional root extraction algorithm. If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions. Simple example. Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation formula_2 and manipulate it directly. Subtracting one from both sides we obtain formula_3 This is easily factored into formula_4 from which we obtain formula_5 and finally formula_6 Now comes the crucial step. We substitute this expression for "x" back into itself, recursively, to obtain formula_7 But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity "x" as far down and to the right as we please, and obtaining in the limit the infinite continued fraction formula_8 By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ..., where each successive convergent is formed by taking the numerator plus the denominator of the preceding term as the denominator in the next term, then adding in the preceding denominator to form the new numerator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers. Algebraic explanation. We can gain further insight into this simple example by considering the successive powers of formula_9 That sequence of successive powers is given by formula_10 and so forth. Notice how the fractions derived as successive approximants to √2 appear in this geometric progression. Since 0 &lt; "ω" &lt; 1, the sequence {"ω""n"} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit. We can also find these numerators and denominators appearing in the successive powers of formula_11 The sequence of successive powers {"ω"−"n"} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example. Notice also that the set obtained by forming all the combinations "a" + "b"√2, where "a" and "b" are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field. General quadratic equation. Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial formula_12 which can always be obtained by dividing the original equation by its leading coefficient. 
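Before working with the monic form just introduced, the convergents quoted in the simple example above can be checked numerically. The following sketch applies the fundamental recurrence formulas to the expansion of √2, whose partial denominators are all 2 after the leading 1; the variable names are arbitrary.

```python
# Numerical check of the convergents 1, 3/2, 7/5, 17/12, ... of
# sqrt(2) = 1 + 1/(2 + 1/(2 + ...)).  The fundamental recurrences with partial
# denominator 2 are h_n = 2*h_{n-1} + h_{n-2} and k_n = 2*k_{n-1} + k_{n-2}.
from fractions import Fraction

h_prev, h = 1, 3      # numerators of the first two convergents 1/1 and 3/2
k_prev, k = 1, 2      # denominators: the Pell numbers 1, 2, 5, 12, 29, 70, 169, ...
convergents = [Fraction(h_prev, k_prev), Fraction(h, k)]
for _ in range(5):
    h_prev, h = h, 2 * h + h_prev
    k_prev, k = k, 2 * k + k_prev
    convergents.append(Fraction(h, k))

for c in convergents:
    print(c, float(c), abs(float(c) - 2**0.5))
# 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169 -> the error shrinks toward 0,
# so the convergents approach sqrt(2) = 1.41421356...
```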
Starting from this monic equation we see that formula_13 But now we can apply the last equation to itself recursively to obtain formula_14 If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial "x"2 + "bx" + "c" = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have imaginary parts. In particular, if "b" and "c" are real numbers and "b"2 − 4"c" &lt; 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form "u" + "iv" (where "v" ≠ 0), which does not lie on the real number line. General theorem. By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients formula_12 given by formula_14 either converges or diverges depending on both the coefficient "b" and the value of the discriminant, "b"2 − 4"c". If "b" = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and formula_15. If "b" ≠ 0 we distinguish three cases. When the monic quadratic equation with real coefficients is of the form "x"2 = "c", the general solution described above is useless because division by zero is not well defined. As long as "c" is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if formula_16 just choose some positive real number "p" such that formula_17 Then by direct manipulation we obtain formula_18 and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers. Complex coefficients. By the fundamental theorem of algebra, if the monic polynomial equation "x"2 + "bx" + "c" = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant "b"2 − 4"c" is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved. The continued fraction solution to the general monic quadratic equation with complex coefficients formula_19 given by formula_14 converges or not depending on the value of the discriminant, "b"2 − 4"c", and on the relative magnitude of its two roots. Denoting the two roots by "r"1 and "r"2 we distinguish three cases. In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges. This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
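The convergence behaviour discussed above can also be explored numerically. The sketch below truncates the general continued fraction at increasing depth for one invented example equation; the helper function is not from the article, only the recursion it implements is.

```python
# Hedged numerical sketch of the recursion x = -b - c/(-b - c/(...)) for a monic
# quadratic x^2 + b*x + c = 0.  Example equation (chosen for illustration):
# x^2 - 3x + 2 = 0, whose roots are 1 and 2.
def cf_convergents(b, c, n):
    """First n convergents obtained by truncating the continued fraction."""
    out = [-b]                  # depth-1 truncation
    x = -b
    for _ in range(n - 1):
        x = -b - c / x          # add one more level of the fraction
        out.append(x)
    return out

print(cf_convergents(-3.0, 2.0, 8))
# 3.0, 2.333..., 2.142..., 2.066..., ...  The values settle on 2, the root of
# larger absolute value.  With b = 0, or with complex roots of equal modulus
# (negative discriminant over the reals), the sequence fails to settle,
# matching the convergence discussion above.
```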
[ { "math_id": 0, "text": "ax^2+bx+c=0," }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "\nx^2 = 2\n" }, { "math_id": 3, "text": "\nx^2 - 1 = 1.\n" }, { "math_id": 4, "text": "\n(x+1)(x-1) = 1\n" }, { "math_id": 5, "text": "\n(x-1) = \\frac{1}{1+x}\n" }, { "math_id": 6, "text": "\nx = 1+\\frac{1}{1+x}.\n" }, { "math_id": 7, "text": "\nx = 1+\\cfrac{1}{1+\\left(1+\\cfrac{1}{1+x}\\right)} = 1+\\cfrac{1}{2+\\cfrac{1}{1+x}}.\n" }, { "math_id": 8, "text": "\nx = 1+\\cfrac{1} {2+\\cfrac{1} {2+\\cfrac{1} {2+\\cfrac{1} {2+\\cfrac{1} {2+\\ddots}}}}} = \\sqrt{2}.\n" }, { "math_id": 9, "text": "\n\\omega = \\sqrt{2} - 1.\n" }, { "math_id": 10, "text": "\n\\begin{align}\n\\omega^2& = 3 - 2\\sqrt{2}, & \\omega^3& = 5\\sqrt{2} - 7, & \\omega^4& = 17 - 12\\sqrt{2}, \\\\\n\\omega^5& = 29\\sqrt{2}-41, & \\omega^6& = 99 - 70\\sqrt{2}, & \\omega^7& = 169\\sqrt{2} - 239, \\,\n\\end{align}\n" }, { "math_id": 11, "text": "\n\\omega^{-1} = \\sqrt{2} + 1.\n" }, { "math_id": 12, "text": "\nx^2 + bx + c = 0\n" }, { "math_id": 13, "text": "\n\\begin{align}\nx^2 + bx& = -c\\\\\nx + b& = \\frac{-c}{x}\\\\\nx& = -b - \\frac{c}{x}\\,\n\\end{align}\n" }, { "math_id": 14, "text": "\nx = -b-\\cfrac{c} {-b-\\cfrac{c} {-b-\\cfrac{c} {-b-\\cfrac{c} {-b-\\ddots\\,}}}}\n" }, { "math_id": 15, "text": "\\infty" }, { "math_id": 16, "text": "\nx^2 = c\\qquad(c>0)\n" }, { "math_id": 17, "text": "\np^2 < c.\n" }, { "math_id": 18, "text": "\n\\begin{align}\nx^2-p^2& = c-p^2\\\\\n(x+p)(x-p)& = c-p^2\\\\\nx-p& = \\frac{c-p^2}{p+x}\\\\\nx& = p + \\frac{c-p^2}{p+x}\\\\\n& = p+\\cfrac{c-p^2} {p+\\left(p+\\cfrac{c-p^2} {p+x}\\right)}& = p+\\cfrac{c-p^2} {2p+\\cfrac{c-p^2} {2p+\\cfrac{c-p^2} {2p+\\ddots\\,}}}\\,\n\\end{align}\n" }, { "math_id": 19, "text": "\nx^2 + bx + c = 0\\qquad (b\\ne0)\n" } ]
https://en.wikipedia.org/wiki?curid=8393831
8395423
Chain transfer
Movement of the active site on one growing polymer chain to another molecule In polymer chemistry, chain transfer is a polymerization reaction by which the activity of a growing polymer chain is transferred to another molecule: formula_0 where • is the active center, P is the initial polymer chain, X is the end group, and R is the substituent to which the active center is transferred. Chain transfer reactions reduce the average molecular weight of the final polymer. Chain transfer can be either introduced deliberately into a polymerization (by use of a "chain transfer agent") or it may be an unavoidable side-reaction with various components of the polymerization. Chain transfer reactions occur in most forms of addition polymerization including radical polymerization, ring-opening polymerization, coordination polymerization, and cationic polymerization, as well as anionic polymerization. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; IUPAC definitions Chain transfer (in a chain polymerization): Chemical reaction occurring during a "chain polymerization" in which an "active center" is transferred from a growing macromolecule or oligomer molecule to another molecule or to another site on the same molecule. Chain-transfer agent: Substance able to react with a "chain carrier" by a reaction in which the original chain carrier is deactivated and a new chain carrier is generated. Types. Chain transfer reactions are usually categorized by the nature of the molecule that reacts with the growing chain. Historical development. Chain transfer was first proposed by Hugh Stott Taylor and William H. Jones in 1930. They were studying the production of polyethylene [(C2H4)"n"] from ethylene [C2H4] and hydrogen [H2] in the presence of ethyl radicals that had been generated by the thermal decomposition of (Et)2Hg and (Et)4Pb. The observed product mixture could be best explained by postulating "transfer" of radical character from one reactant to another. Flory incorporated the radical transfer concept in his mathematical treatment of vinyl polymerization in 1937. He coined the term "chain transfer" to explain observations that, during polymerization, average polymer chain lengths were usually lower than predicted by rate considerations alone. The first widespread use of chain transfer agents came during World War II in the US Rubber Reserve Company. The "Mutual" recipe for styrene-butadiene rubber was based on the Buna-S recipe, developed by I. G. Farben in the 1930s. The Buna-S recipe, however, produced a very tough, high molecular weight rubber that required heat processing to break it down and make it processable on standard rubber mills. Researchers at Standard Oil Development Company and the U. S. Rubber Company discovered that addition of a mercaptan "modifier" to the recipe not only produced a lower molecular weight and more tractable rubber, but it also increased the polymerization rate. Use of a mercaptan modifier became standard in the Mutual recipe. Although German scientists had become familiar with the actions of chain transfer agents in the 1930s, Germany continued to make unmodified rubber to the end of the war and did not fully exploit their knowledge. Throughout the 1940s and 1950s, progress was made in the understanding of the chain transfer reaction and the behavior of chain transfer agents. Snyder "et al." proved the sulfur from a mercaptan modifier did indeed become incorporated into a polymer chain under the conditions of bulk or emulsion polymerization. 
A series of papers from Frank R. Mayo (at the U.S. Rubber Co.) laid the foundation for determining the rates of chain transfer reactions. In the early 1950s, workers at DuPont conclusively demonstrated that short and long branching in polyethylene was due to two different mechanisms of chain transfer to polymer. Around the same time, the presence of chain transfer in cationic polymerizations was firmly established. Current activity. The nature of chain transfer reactions is currently well understood and is given in standard polymerization textbooks. Since the 1980s, however, a particularly active area of research has been in the various forms of free radical living polymerizations including catalytic chain transfer polymerization, RAFT, and iodine transfer polymerization (ITP). In these processes, the chain transfer reaction produces a polymer chain with similar chain transfer activity to the original chain transfer agent. Therefore, there is no net loss of chain transfer activity.
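The reduction in average molecular weight mentioned at the start of this article is commonly quantified with the Mayo equation, 1/DPn = 1/DPn,0 + Cs[S]/[M], which relates the number-average degree of polymerization DPn to the transfer constant Cs and the concentrations of transfer agent [S] and monomer [M]. The Mayo equation itself is standard textbook material rather than something stated above, and every numerical value in the sketch below is an invented, illustrative assumption.

```python
# Rough illustrative sketch (all values assumed): the Mayo equation,
# 1/DPn = 1/DPn0 + Cs*[S]/[M], shows how a chain-transfer agent S lowers the
# number-average degree of polymerization DPn.
DPn0 = 5000.0     # degree of polymerization with no transfer agent (assumed)
Cs   = 20.0       # chain-transfer constant of the agent (assumed)
M    = 8.0        # monomer concentration, mol/L (assumed)

for S in (0.0, 0.001, 0.005, 0.02):      # transfer-agent concentration, mol/L
    DPn = 1.0 / (1.0 / DPn0 + Cs * S / M)
    print(f"[S] = {S:6.3f} mol/L  ->  DPn = {DPn:7.1f}")
# Even small amounts of an effective transfer agent sharply shorten the chains,
# which is consistent with the effect of the mercaptan "modifiers" described above.
```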
[ { "math_id": 0, "text": "\\ce{P}^\\bullet + \\ce{XR -> PX + R}^\\bullet" } ]
https://en.wikipedia.org/wiki?curid=8395423
8396078
Z-channel (information theory)
In coding theory and information theory, a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems. Definition. A Z-channel is a channel with binary input and binary output, where each 0 bit is transmitted correctly, but each 1 bit has probability "p" of being transmitted incorrectly as a 0, and probability 1–"p" of being transmitted correctly as a 1. In other words, if "X" and "Y" are the random variables describing the probability distributions of the input and the output of the channel, respectively, then the crossovers of the channel are characterized by the conditional probabilities: formula_0 Capacity. The channel capacity formula_1 of the Z-channel formula_2 with the crossover 1 → 0 probability "p" is given by the following equation: formula_4 where formula_5 for the binary entropy function formula_6. This capacity is obtained when the input variable "X" has Bernoulli distribution with probability formula_3 of having value 0 and formula_7 of value 1, where: formula_8 For small "p", the capacity is approximated by formula_9 as compared to the capacity formula_10 of the binary symmetric channel with crossover probability "p". For any "p", the capacity-achieving input is biased towards 0: the probability formula_7 of transmitting a 1 is less than 0.5 (i.e. more 0s should be transmitted than 1s), because transmitting a 1 introduces noise. As formula_12, the limiting value of formula_7 is formula_13. Bounds on the size of an asymmetric-error-correcting code. Define the following distance function formula_14 on the words formula_15 of length "n" transmitted via a Z-channel formula_16 Define the sphere formula_17 of radius "t" around a word formula_18 of length "n" as the set of all the words at distance "t" or less from formula_19, in other words, formula_20 A code formula_21 of length "n" is said to be "t"-asymmetric-error-correcting if for any two codewords formula_22, one has formula_23. Denote by formula_24 the maximum number of codewords in a "t"-asymmetric-error-correcting code of length "n". The Varshamov bound. For "n"≥1 and "t"≥1, formula_25 The constant-weight code bound. For "n > 2t ≥ 2", let the sequence "B0, B1, ..., Bn-2t-1" be defined as formula_26 for formula_27. Then formula_28 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
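The capacity expression above is easy to check numerically. In the following sketch the particular values of "p" are arbitrary and the function names are ad hoc; it evaluates the closed-form capacity, the binary-symmetric-channel capacity for comparison, and the capacity-achieving input probability of a 0.

```python
# Numerical check of the Z-channel capacity cap(Z) = log2(1 + (1-p)*p**(p/(1-p)))
# and of the optimal input distribution, compared with the binary symmetric
# channel capacity 1 - H(p).
import math

def H(q):                       # binary entropy, in bits
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def z_capacity(p):
    if p == 1.0:
        return 0.0
    return math.log2(1 + (1 - p) * p**(p / (1 - p)))

def z_optimal_alpha(p):         # capacity-achieving probability of sending a 0
    s = H(p) / (1 - p)
    return 1 - 1 / ((1 - p) * (1 + 2**s))

for p in (0.01, 0.1, 0.3, 0.5):
    print(f"p = {p:4}:  cap(Z) = {z_capacity(p):.4f}   "
          f"1 - H(p) = {1 - H(p):.4f}   alpha* = {z_optimal_alpha(p):.4f}")
# For these values of p, cap(Z) exceeds the symmetric-channel capacity 1 - H(p),
# and the optimal input favours the reliable symbol 0 (alpha* > 0.5), in line
# with the discussion above.
```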
[ { "math_id": 0, "text": "\\begin{align}\n\\operatorname {Pr} [ Y = 0 | X = 0 ] &= 1 \\\\\n\\operatorname {Pr} [ Y = 0 | X = 1 ] &= p \\\\\n\\operatorname {Pr} [ Y = 1 | X = 0 ] &= 0 \\\\\n\\operatorname {Pr} [ Y = 1 | X = 1 ] &= 1 - p\n\\end{align}" }, { "math_id": 1, "text": "\\mathsf{cap}(\\mathbb{Z})" }, { "math_id": 2, "text": "\\mathbb{Z}" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\mathsf{cap}(\\mathbb{Z}) = \\mathsf{H}\\left(\\frac{1}{1+2^{\\mathsf{s}(p)}}\\right) - \\frac{\\mathsf{s}(p)}{1+2^{\\mathsf{s}(p)}} = \\log_2(1{+}2^{-\\mathsf{s}(p)}) = \\log_2\\left(1+(1-p) p^{p/(1-p)}\\right) " }, { "math_id": 5, "text": "\\mathsf{s}(p) = \\frac{\\mathsf{H}(p)}{1-p}" }, { "math_id": 6, "text": "\\mathsf{H}(\\cdot)" }, { "math_id": 7, "text": "1-\\alpha" }, { "math_id": 8, "text": "\\alpha = 1 - \\frac{1}{(1-p)(1+2^{\\mathsf{H}(p)/(1-p)})}," }, { "math_id": 9, "text": " \\mathsf{cap}(\\mathbb{Z}) \\approx 1- 0.5 \\mathsf{H}(p) " }, { "math_id": 10, "text": "1{-}\\mathsf{H}(p)" }, { "math_id": 11, "text": "\\alpha<0.5" }, { "math_id": 12, "text": "p\\rightarrow 1" }, { "math_id": 13, "text": "\\frac{1}{e}" }, { "math_id": 14, "text": "\\mathsf{d}_A(\\mathbf{x}, \\mathbf{y})" }, { "math_id": 15, "text": "\\mathbf{x}, \\mathbf{y} \\in \\{0,1\\}^n" }, { "math_id": 16, "text": "\\mathsf{d}_A(\\mathbf{x}, \\mathbf{y}) \\stackrel{\\vartriangle}{=} \\max\\left\\{ \\big|\\{i \\mid x_i = 0, y_i = 1\\}\\big| , \\big|\\{i \\mid x_i = 1, y_i = 0\\}\\big| \\right\\}." }, { "math_id": 17, "text": "V_t(\\mathbf{x})" }, { "math_id": 18, "text": "\\mathbf{x} \\in \\{0,1\\}^n" }, { "math_id": 19, "text": "\\mathbf{x}" }, { "math_id": 20, "text": "V_t(\\mathbf{x}) = \\{\\mathbf{y} \\in \\{0, 1\\}^n \\mid \\mathsf{d}_A(\\mathbf{x}, \\mathbf{y}) \\leq t\\}." }, { "math_id": 21, "text": "\\mathcal{C}" }, { "math_id": 22, "text": "\\mathbf{c}\\ne \\mathbf{c}' \\in \\{0,1\\}^n" }, { "math_id": 23, "text": "V_t(\\mathbf{c}) \\cap V_t(\\mathbf{c}') = \\emptyset" }, { "math_id": 24, "text": "M(n,t)" }, { "math_id": 25, "text": "M(n,t) \\leq \\frac{2^{n+1}}{\\sum_{j = 0}^t{\\left( \\binom{\\lfloor n/2\\rfloor}{j}+\\binom{\\lceil n/2\\rceil}{j}\\right)}}." }, { "math_id": 26, "text": "B_0 = 2, \\quad B_i = \\min_{0 \\leq j < i}\\{ B_j + A(n{+}t{+}i{-}j{-}1, 2t{+}2, t{+}i)\\}" }, { "math_id": 27, "text": "i > 0" }, { "math_id": 28, "text": "M(n,t) \\leq B_{n-2t-1}." } ]
https://en.wikipedia.org/wiki?curid=8396078
8398
Dimension
Property of a mathematical space In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it – for example, both a latitude and longitude are required to locate a point on the surface of a sphere. A two-dimensional Euclidean space is a two-dimensional space on the plane. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces. In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space. The concept of dimension is not restricted to physical objects. &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space. In mathematics. In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two etc. The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. The dimension of Euclidean "n"-space E"n"is "n". When trying to generalize to other types of spaces, one is faced with the question "what makes E"n" "n"-dimensional?" One answer is that to cover a fixed ball in E"n" by small balls of radius "ε", one needs on the order of "ε"−"n" such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. 
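The ε-covering idea just described can be turned into a small numerical experiment. The following sketch is purely illustrative and is not drawn from the article: it counts, in exact arithmetic, how many boxes of side ε meet a finite stage of the middle-thirds Cantor set, a standard example whose box-counting dimension is not an integer.

```python
# Illustrative box-counting sketch: cover a finite stage of the middle-thirds
# Cantor set with boxes of side eps and read the dimension off log N / log(1/eps).
from fractions import Fraction
import math

def cantor_left_endpoints(depth):
    """Left endpoints of the intervals at the given construction stage (exact arithmetic)."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt += [(a, a + third), (b - third, b)]
        intervals = nxt
    return [a for a, _ in intervals]

def box_count(points, eps):
    """Number of boxes [m*eps, (m+1)*eps) containing at least one point."""
    return len({x // eps for x in points})

points = cantor_left_endpoints(10)
for k in (2, 3, 4, 5, 6):
    eps = Fraction(1, 3**k)
    n = box_count(points, eps)
    print(f"eps = 3^-{k}:  N(eps) = {n:4d},  log N / log(1/eps) = "
          f"{math.log(n) / math.log(3**k):.4f}")
# Each ratio equals log 2 / log 3 = 0.6309..., the non-integer box-counting
# (and Hausdorff) dimension of the Cantor set.
```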
For example, the boundary of a ball in E"n" looks locally like E"n"-1 and this leads to the notion of the inductive dimension. While these notions agree on E"n", they turn out to be different when one looks at more general spaces. A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract "has four dimensions"", mathematicians usually express this as: "The tesseract "has dimension 4"", or: "The dimension of the tesseract "is" 4" or: 4D. Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 "Theorie der vielfachen Kontinuität", and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry. The rest of this section examines some of the more important mathematical definitions of dimension. Vector spaces. The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the "Hamel dimension" or "algebraic dimension" to distinguish it from other notions of dimension. For the non-free case, this generalizes to the notion of the length of a module. Manifolds. The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean "n"-space, in which the number "n" is the manifold's dimension. For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point. In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases are simplified by having extra space in which to "work"; and the cases "n" 3 and 4 are in some senses the most difficult. This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied. Complex dimension. The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number ("x" + "iy") has a real part "x" and an imaginary part "y", in which x and y are both real numbers; hence, the complex dimension is half the real dimension. Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension. Varieties. The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any Regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). 
This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless if the hyperplane contains the variety. An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains formula_0 of sub-varieties of the given algebraic set (the length of such a chain is the number of "formula_1"). Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if "V" is a variety of dimension "m" and "G" is an algebraic group of dimension "n" acting on "V", then the quotient stack ["V"/"G"] has dimension "m" − "n". Krull dimension. The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length "n" being a sequence formula_2 of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety. For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0. Topological spaces. For any normal topological space "X", the Lebesgue covering dimension of "X" is defined to be the smallest integer "n" for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than "n" + 1 elements. In this case dim "X" "n". For "X" a manifold, this coincides with the dimension mentioned above. If no such integer "n" exists, then the dimension of "X" is said to be infinite, and one writes dim "X" ∞. Moreover, "X" has dimension −1, i.e. dim "X" −1 if and only if "X" is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open". An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a "new direction", one obtains a 2-dimensional object. In general one obtains an ("n" + 1)-dimensional object by dragging an "n"-dimensional object in a "new" direction. The inductive dimension of a topological space may refer to the "small inductive dimension" or the "large inductive dimension", and is based on the analogy that, in the case of metric spaces, balls have "n"-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension -1. Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles. Hausdorff dimension. 
The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values. Hilbert spaces. Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide. In physics. Spatial dimensions. Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies; "i.e.", moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.) Time. A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction. The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy). The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime, and in the special, flat case as Minkowski space. Time is different from other spatial dimensions as time operates in all spatial dimensions. Time operates in the first, second and third as well as theoretical spatial dimensions such as a fourth spatial dimension. Time is not however present in a single point of absolute infinite singularity as defined as a geometric point, as an infinitely small point can have no change and therefore no time. Just as when an object moves through positions in space, it also moves through positions in time. In this sense the force moving any object to change is "time". Additional dimensions. In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. 
Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" at such tiny scales as to be effectively invisible to current experiments. In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza-Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building. In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a (3 + 1)-dimensional subspace. Thus the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume. Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea it would be since three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration. Extra dimensions are said to be universal if all fields are equally free to propagate within them. In computer graphics and spatial data. 
Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, computer-aided design, and geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions: points (zero-dimensional), polylines (one-dimensional), polygons (two-dimensional) and, in some systems, volumetric solids (three-dimensional). Frequently in these systems, especially GIS and cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This "dimensional generalization" correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road implies a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood, but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines). More dimensions. &lt;templatestyles src="Div col/styles.css"/&gt;* Degrees of freedom List of topics by dimension. &lt;templatestyles src="Div col/styles.css"/&gt;* Zero See also. &lt;templatestyles src="Div col/styles.css"/&gt;* Dimension (data warehouse) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_0\\subsetneq V_1\\subsetneq \\cdots \\subsetneq V_d" }, { "math_id": 1, "text": "\\subsetneq" }, { "math_id": 2, "text": "\\mathcal{P}_0\\subsetneq \\mathcal{P}_1\\subsetneq \\cdots \\subsetneq\\mathcal{P}_n " } ]
https://en.wikipedia.org/wiki?curid=8398
840
Axiom of choice
Axiom of set theory In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that "a Cartesian product of a collection of non-empty sets is non-empty". Informally put, the axiom of choice says that given any collection of sets, each containing at least one element, it is possible to construct a new set by choosing one element from each set, even if the collection is infinite. Formally, it states that for every indexed family formula_0 of nonempty sets, there exists an indexed set formula_1 such that formula_2 for every formula_3. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem. In many cases, a set created by choosing elements can be made without invoking the axiom of choice, particularly if the number of sets from which to choose the elements is finite, or if a canonical rule on how to choose the elements is available — some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets , the set containing each smallest element is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets are collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. But no definite choice function is known for the collection of all non-empty subsets of the real numbers. In that case, the axiom of choice must be invoked. Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate collection (i.e. set) of shoes; this makes it possible to define a choice function directly. For an "infinite" collection of pairs of socks (assumed to have no distinguishing features), there is no obvious way to make a function that forms a set out of selecting one sock from each pair without invoking the axiom of choice. Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced. Statement. A choice function (also called selector or selection) is a function "f", defined on a collection "X" of nonempty sets, such that for every set "A" in "X", "f"("A") is an element of "A". With this concept, the axiom can be stated: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Axiom — For any set "X" of nonempty sets, there exists a choice function "f" that is defined on "X" and maps each set of "X" to an element of that set. Formally, this may be expressed as follows: formula_4 Thus, the negation of the axiom may be expressed as the existence of a collection of nonempty sets which has no choice function. 
Formally, this may be derived making use of the logical equivalence of formula_5 to formula_6. Each choice function on a collection "X" of nonempty sets is an element of the Cartesian product of the sets in "X". This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all "distinct" sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to: Given any family of nonempty sets, their Cartesian product is a nonempty set. Nomenclature. In this article and other discussions of the Axiom of Choice the following abbreviations are common: Variants. There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it. One variation avoids the use of choice functions by, in effect, replacing each choice function with its range: Given any set "X", if the empty set is not an element of "X" and the elements of "X" are pairwise disjoint, then there exists a set "C" such that its intersection with any of the elements of "X" contains exactly one element. This can be formalized in first-order logic as: ∀x ( ∃o (o ∈ x ∧ ¬∃n (n ∈ o)) ∨ ∃a ∃b ∃c (a ∈ x ∧ b ∈ x ∧ c ∈ a ∧ c ∈ b ∧ ¬(a = b)) ∨ ∃c ∀e (e ∈ x → ∃a (a ∈ e ∧ a ∈ c ∧ ∀b ((b ∈ e ∧ b ∈ c) → a = b)))) Note that P ∨ Q ∨ R is logically equivalent to (¬P ∧ ¬Q) → R.&lt;br&gt; In English, this first-order sentence reads: Given any set "X", "X" contains the empty set as an element or the elements of "X" are not pairwise disjoint or there exists a set "C" such that its intersection with any of the elements of "X" contains exactly one element. This guarantees for any partition of a set "X" the existence of a subset "C" of "X" containing exactly one element from each part of the partition. Another equivalent axiom only considers collections "X" that are essentially powersets of other sets: For any set "A", the power set of "A" (with the empty set removed) has a choice function. Authors who use this formulation often speak of the "choice function on A", but this is a slightly different notion of choice function. Its domain is the power set of "A" (with the empty set removed), and so makes sense for any set "A", whereas with the definition used elsewhere in this article, the domain of a choice function on a "collection of sets" is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as Every set has a choice function. which is equivalent to For any set "A" there is a function "f" such that for any non-empty subset "B" of "A", "f"("B") lies in "B". The negation of the axiom can thus be expressed as: There is a set "A" such that for all functions "f" (on the set of non-empty subsets of "A"), there is a "B" such that "f"("B") does not lie in "B". Restriction to finite sets. The usual statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. 
However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by the principle of finite induction. In the even simpler case of a collection of "one" set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections. Usage. Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set "X" contains only non-empty sets, a mathematician might have said "let "F"("s") be one of the members of "s" for all "s" in "X"" to define a function "F". In general, it is impossible to prove that "F" exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo. Examples. The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection "X" is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to add the axiom of choice to our axioms of set theory. The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our selection forms a legitimate set (as defined by the other ZF axioms of set theory)? For example, suppose that "X" is the set of all non-empty subsets of the real numbers. First we might try to proceed as if "X" were finite. If we try to choose an element from each set, then, because "X" is infinite, our choice procedure will never come to an end, and consequently we shall never be able to produce a choice function for all of "X". Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if "x" is in (0,1), then so is "x"/2, and "x"/2 is always strictly smaller than "x". So this attempt also fails. Additionally, consider for instance the unit circle "S", and the action on "S" by a group "G" consisting of all rational rotations, that is, rotations by angles which are rational multiples of "π". Here "G" is countable while "S" is uncountable. Hence "S" breaks up into uncountably many orbits under "G". Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset "X" of "S" with the property that all of its translates by "G" are disjoint from "X". The set of those translates partitions the circle into a countable collection of pairwise disjoint sets, which are all pairwise congruent. Since "X" is not measurable for any rotation-invariant countably additive finite measure on "S", finding an algorithm to form a set from selecting a point in each orbit requires that one add the axiom of choice to our axioms of set theory. See non-measurable set for more details. In classical arithmetic, the natural numbers are well-ordered: for every nonempty subset of the natural numbers, there is a unique least element under the natural ordering. 
In this way, one may specify a set from any given subset. One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds. Criticism and acceptance. A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no individual well-ordering of the reals is definable. Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable. The axiom of choice proves the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles. Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice. Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive. One example is the Banach–Tarski paradox, which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets. Moreover, paradoxical consequences of the axiom of choice for the no-signaling principle in physics have recently been pointed out. Despite these seemingly paradoxical results, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. But the debate is interesting enough that it is considered notable when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type that requires the axiom of choice to be true. Theorems of ZF hold true in any model of that theory, regardless of the truth or falsity of the axiom of choice in that particular model. The implications of choice below, including weaker versions of the axiom itself, are listed because they are not theorems of ZF. The Banach–Tarski paradox, for example, is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Such statements can be rephrased as conditional statements—for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice. In constructive mathematics. 
As discussed above, in the classical theory of ZFC, the axiom of choice enables nonconstructive proofs in which the existence of a type of object is proved without an explicit instance being constructed. In fact, in set theory and topos theory, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle. The principle is thus not available in constructive set theory, where non-classical logic is employed. The situation is different when the principle is formulated in Martin-Löf type theory. There and in higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on approach) included as an axiom or provable as a theorem. A cause for this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does. The type theoretical context is discussed further below. Different choice principles have been thoroughly studied in constructive contexts, and the principles' status varies between different schools and varieties of constructive mathematics. Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle. Errett Bishop, who is notable for developing a framework for constructive analysis, argued that an axiom of choice was constructively acceptable, saying &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A choice function exists in constructive mathematics, because a choice is implied by the very meaning of existence. Although the axiom of countable choice in particular is commonly used in constructive mathematics, its use has also been questioned. Independence. It has been known since as early as 1922 that the axiom of choice may fail in a variant of ZF with urelements, through the technique of permutation models introduced by Abraham Fraenkel and developed further by Andrzej Mostowski. The basic technique can be illustrated as follows: Let "x""n" and "y""n" be distinct urelements for "n"=1, 2, 3..., and build a model where each set is symmetric under the interchange "x""n" ↔ "y""n" for all but a finite number of "n". Then the set "X" = {{"x"1, "y"1}, {"x"2, "y"2}, {"x"3, "y"3}, ...} can be in the model but sets such as {"x"1, "x"2, "x"3, ...} cannot, and thus "X" cannot have a choice function. In 1938, Kurt Gödel showed that the "negation" of the axiom of choice is not a theorem of ZF by constructing an inner model (the constructible universe) that satisfies ZFC, thus showing that ZFC is consistent if ZF itself is consistent. In 1963, Paul Cohen employed the technique of forcing, developed for this purpose, to show that, assuming ZF is consistent, the axiom of choice itself is not a theorem of ZF. He did this by constructing a much more complex model that satisfies ZF¬C (ZF with the negation of AC added as axiom) and thus showing that ZF¬C is consistent. Cohen's model is a symmetric model, which is similar to permutation models, but uses "generic" subsets of the natural numbers (justified by forcing) in place of urelements. Together these results establish that the axiom of choice is logically independent of ZF. The assumption that ZF is consistent is harmless because adding another axiom to an already inconsistent system cannot make the situation worse. 
Because of independence, the decision whether to use the axiom of choice (or its negation) in a proof cannot be made by appeal to other axioms of set theory. It must be made on other grounds. One argument in favor of using the axiom of choice is that it is convenient because it allows one to prove some simplifying propositions that otherwise could not be proved. Many theorems provable using choice are of an elegant general character: the cardinalities of any two sets are comparable, every nontrivial ring with unity has a maximal ideal, every vector space has a basis, every connected graph has a spanning tree, and every product of compact spaces is compact, among many others. Frequently, the axiom of choice allows generalizing a theorem to "larger" objects. For example, it is provable without the axiom of choice that every vector space of finite dimension has a basis, but the generalization to all vector spaces requires the axiom of choice. Likewise, a finite product of compact spaces can be proven to be compact without the axiom of choice, but the generalization to infinite products (Tychonoff's theorem) requires the axiom of choice. The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC. Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When attempting to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF. The axiom of choice is not the only significant statement that is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZFC. However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF. Stronger axioms. The axiom of constructibility and the generalized continuum hypothesis each imply the axiom of choice and so are strictly stronger than it. In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is an axiom called the axiom of global choice that is stronger than the axiom of choice for sets because it also applies to proper classes. The axiom of global choice follows from the axiom of limitation of size. Tarski's axiom, which is used in Tarski–Grothendieck set theory and states (in the vernacular) that every set belongs to {{em|some}} Grothendieck universe, is stronger than the axiom of choice. Equivalents. There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem. Category theory. Several results in category theory invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. 
For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), or even locally small categories, whose hom-objects are sets, then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above. Examples of category-theoretic statements which require choice include: Weaker forms. There are several weaker statements that are not equivalent to the axiom of choice but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice. Given an ordinal parameter α ≥ ω+2 — for every set "S" with rank less than α, "S" is well-orderable. Given an ordinal parameter α ≥ 1 — for every set "S" with Hartogs number less than ωα, "S" is well-orderable. As the ordinal parameter is increased, these approximate the full axiom of choice more and more closely. Other choice axioms weaker than axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to Tarski's 1930 ultrafilter lemma: every filter is a subset of some ultrafilter. Results requiring AC (or weaker forms) but weaker than it. One of the most interesting aspects of the axiom of choice is the large number of places in mathematics where it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF. Possibly equivalent implications of AC. There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. Zermelo cited the partition principle, which was formulated before AC itself, as a justification for believing AC. In 1906, Russell declared PP to be equivalent, but whether the partition principle implies AC is the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every "known" model of ZF where choice fails, these statements fail too, but it is unknown whether they can hold without choice. Stronger forms of the negation of AC. If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts the nonexistence of any choice function on perhaps only a single set of nonempty sets. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is. It is also consistent with ZF + DC that every set of reals is Lebesgue measurable, but this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). 
The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals). Quine's system of axiomatic set theory, New Foundations (NF), takes its name from the title ("New Foundations for Mathematical Logic") of the 1937 article that introduced it. In the NF axiomatic system, the axiom of choice can be disproved. Statements implying the negation of AC. There are models of Zermelo–Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo–Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to validate the negation of some standard ZFC theorems. As any model of ZF¬C is also a model of ZF, it is the case that for each of the following statements, there exists a model of ZF in which that statement is true. For proofs, see Jech (2008). Additionally, by imposing definability conditions on sets (in the sense of descriptive set theory) one can often prove restricted versions of the axiom of choice from axioms incompatible with general choice. This appears, for example, in the Moschovakis coding lemma. Axiom of choice in type theory. In type theory, a different kind of statement is known as the axiom of choice. This form begins with two types, σ and τ, and a relation "R" between objects of type σ and objects of type τ. The axiom of choice states that if for each "x" of type σ there exists a "y" of type τ such that "R"("x","y"), then there is a function "f" from objects of type σ to objects of type τ such that "R"("x","f"("x")) holds for all "x" of type σ: formula_14 Unlike in set theory, the axiom of choice in type theory is typically stated as an axiom scheme, in which "R" varies over all formulas or over all formulas of a particular logical form. References. Jech, Thomas (2008) [1973]. "The Axiom of Choice". Mineola, New York: Dover Publications. ISBN 978-0-486-46624-8. Zermelo's original papers are translated in: Jean van Heijenoort, 2002. "From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931". New edition. Harvard University Press. ISBN 0-674-32449-8. 1904. "Proof that every set can be well-ordered," 139–41. 1908. "Investigations in the foundations of set theory I," 199–215.
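To make the type-theoretic formulation above concrete, here is a hedged Lean 4 sketch. It is ours, not part of the article; the theorem names are invented, and only the core-library operators Classical.choose and Classical.choose_spec are assumed. When the hypothesis hands over its witnesses as data, the choice function is simply a projection; with a propositional "there exists", a classical choice operator is invoked, which mirrors the set-theoretic situation described earlier.

```lean
-- Sketch (ours): the axiom of choice as a statement about a relation R
-- between two types σ and τ, as in the displayed formula.

-- 1. If each witness is given as data (a subtype element), the "choice
--    function" is just the first projection — a definition, not an axiom.
def choiceFromData {σ τ : Type} (R : σ → τ → Prop)
    (h : ∀ x : σ, {y : τ // R x y}) :
    {f : σ → τ // ∀ x : σ, R x (f x)} :=
  ⟨fun x => (h x).1, fun x => (h x).2⟩

-- 2. With a propositional ∃ (no witness data), Lean's classical choice
--    operator is needed, which is where the axiom of choice enters.
theorem choiceFromExists {σ τ : Type} (R : σ → τ → Prop)
    (h : ∀ x : σ, ∃ y : τ, R x y) :
    ∃ f : σ → τ, ∀ x : σ, R x (f x) :=
  ⟨fun x => Classical.choose (h x), fun x => Classical.choose_spec (h x)⟩
```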
[ { "math_id": 0, "text": "(S_i)_{i \\in I}" }, { "math_id": 1, "text": "(x_i)_{i \\in I}" }, { "math_id": 2, "text": "x_i \\in S_i" }, { "math_id": 3, "text": "i \\in I" }, { "math_id": 4, "text": "\\forall X \\left[ \\varnothing \\notin X \\implies \\exists f \\colon X \\rightarrow \\bigcup_{A\\in X} A \\quad \\forall A \\in X \\, ( f(A) \\in A ) \\right] \\,." }, { "math_id": 5, "text": "\\neg \\forall X \\left[ P(X)\\to Q(X) \\right]" }, { "math_id": 6, "text": "\\exists X \\left[ P(X)\\land \\neg Q(X) \\right]" }, { "math_id": 7, "text": "I \\neq \\varnothing" }, { "math_id": 8, "text": "\\left(X_i\\right)_{i \\in I}" }, { "math_id": 9, "text": "\\prod_{i \\in I} X_{i}" }, { "math_id": 10, "text": "G_S" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "\\Sigma" }, { "math_id": 13, "text": "\\mathbb{R}^\\Omega" }, { "math_id": 14, "text": "\n(\\forall x^\\sigma)(\\exists y^\\tau) R(x,y) \\to (\\exists f^{\\sigma \\to \\tau})(\\forall x^\\sigma) R(x,f(x)).\n" } ]
https://en.wikipedia.org/wiki?curid=840
8400
Duodecimal
Base-12 numeral system The duodecimal system, also known as base twelve or dozenal, is a positional numeral system using twelve as its base. In duodecimal, the number twelve is denoted "10", meaning 1 twelve and 0 units; in the decimal system, this number is instead written as "12" meaning 1 ten and 2 units, and the string "10" means ten. In duodecimal, "100" means twelve squared, "1000" means twelve cubed, and "0.1" means a twelfth. Various symbols have been used to stand for ten and eleven in duodecimal notation; this page uses A and B, as in hexadecimal, which make a duodecimal count from zero to twelve read 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, 10. The Dozenal Societies of America and Great Britain (organisations promoting the use of duodecimal) use turned digits in their published material: ↊ (a turned 2) for ten and ↋ (a turned 3) for eleven. The number twelve, a superior highly composite number, is the smallest number with four non-trivial factors (2, 3, 4, 6), and the smallest to include as factors all four numbers (1 to 4) within the subitizing range, and the smallest abundant number. All multiples of reciprocals of 3-smooth numbers (fractions of the form a/(2^b·3^c), where a, b, and c are integers) have a terminating representation in duodecimal. In particular, 1/4 (0.3), 1/3 (0.4), 1/2 (0.6), 2/3 (0.8), and 3/4 (0.9) all have a short terminating representation in duodecimal. There is also higher regularity observable in the duodecimal multiplication table. As a result, duodecimal has been described as the optimal number system. In these respects, duodecimal is considered superior to decimal, which has only 2 and 5 as factors, and other proposed bases like octal or hexadecimal. Sexagesimal (base sixty) does even better in this respect (the reciprocals of all 5-smooth numbers terminate), but at the cost of unwieldy multiplication tables and a much larger number of symbols to memorize. "In this section, numerals are in decimal. For example, "10" means 9+1, and "12" means 9+3." Origin. Georges Ifrah speculatively traced the origin of the duodecimal system to a system of finger counting based on the knuckle bones of the four larger fingers. Using the thumb as a pointer, it is possible to count to 12 by touching each finger bone, starting with the farthest bone on the fifth finger, and counting on. In this system, one hand counts repeatedly to 12, while the other displays the number of iterations, until five dozens, i.e. the 60, are full. This system is still in use in many regions of Asia. Languages using duodecimal number systems are uncommon. Languages in the Nigerian Middle Belt such as Janji, Gbiri-Niragu (Gure-Kahugu), Piti, and the Nimbia dialect of Gwandara; and the Chepang language of Nepal are known to use duodecimal numerals. Germanic languages have special words for 11 and 12, such as "eleven" and "twelve" in English. They come from Proto-Germanic *"ainlif" and *"twalif" (meaning, respectively, "one left" and "two left"), suggesting a decimal rather than duodecimal origin. However, Old Norse used a hybrid decimal–duodecimal counting system, with its words for "one hundred and eighty" meaning 200 and "two hundred" meaning 240. In the British Isles, this style of counting survived well into the Middle Ages as the long hundred. Historically, units of time in many civilizations are duodecimal. There are twelve signs of the zodiac, twelve months in a year, and the Babylonians had twelve hours in a day (although at some point, this was changed to 24). 
Traditional Chinese calendars, clocks, and compasses are based on the twelve Earthly Branches or 24 (12×2) Solar terms. There are 12 inches in an imperial foot, 12 troy ounces in a troy pound, 12 old British pence in a shilling, 24 (12×2) hours in a day; many other items are counted by the dozen, gross (144, square of 12), or great gross (1728, cube of 12). The Romans used a fraction system based on 12, including the uncia, which became both the English words "ounce" and "inch". Pre-decimalisation, Ireland and the United Kingdom used a mixed duodecimal-vigesimal currency system (12 pence = 1 shilling, 20 shillings or 240 pence to the pound sterling or Irish pound), and Charlemagne established a monetary system that also had a mixed base of twelve and twenty, the remnants of which persist in many places. Notations and pronunciations. In a positional numeral system of base "n" (twelve for duodecimal), each of the first "n" natural numbers is given a distinct numeral symbol, and then "n" is denoted "10", meaning 1 times "n" plus 0 units. For duodecimal, the standard numeral symbols for 0–9 are typically preserved for zero through nine, but there are numerous proposals for how to write the numerals representing "ten" and "eleven". More radical proposals do not use any Arabic numerals under the principle of "separate identity." Pronunciation of duodecimal numbers also has no standard, but various systems have been proposed. Transdecimal symbols. Several authors have proposed using letters of the alphabet for the transdecimal symbols. Latin letters such as ⟨A, B⟩ (as in hexadecimal) or ⟨T, E⟩ (initials of "Ten" and "Eleven") are convenient because they are widely accessible, and for instance can be typed on typewriters. However, when mixed with ordinary prose, they might be confused for letters. As an alternative, Greek letters could be used instead. Frank Emerson Andrews, an early American advocate for duodecimal, suggested and used in his 1935 book "New Numbers" ⟨X, E⟩ (italic capital X from the Roman numeral for ten and a rounded italic capital E similar to open E), along with italic numerals "0"–"9". Edna Kramer in her 1951 book "The Main Stream of Mathematics" used a sextile (six-pointed asterisk) for ten and a hash (octothorpe) for eleven. The symbols were chosen because they were available on some typewriters; they are also on push-button telephones. This notation was used in publications of the Dozenal Society of America (DSA) from 1974 to 2008. From 2008 to 2015, the DSA used the symbols devised by William Addison Dwiggins. The Dozenal Society of Great Britain (DSGB) proposed the symbols ⟨↊, ↋⟩. This notation, derived from Arabic digits by 180° rotation, was introduced by Isaac Pitman in 1857. In March 2013, a proposal was submitted to include the digit forms for ten and eleven propagated by the Dozenal Societies in the Unicode Standard. Of these, the British/Pitman forms were accepted for encoding as characters at code points U+218A and U+218B. They were included in Unicode 8.0 (2015). After the Pitman digits were added to Unicode, the DSA took a vote and then began publishing PDF content using the Pitman digits instead, but continues to use the letters X and E on its webpage. Base notation. There are also varying proposals of how to distinguish a duodecimal number from a decimal one. The most common method used in mainstream mathematics sources comparing various number bases uses a subscript "10" or "12", e.g. "5412 = 6410". 
To avoid ambiguity about the meaning of the subscript 10, the subscripts might be spelled out, "54twelve = 64ten". In 2015 the Dozenal Society of America adopted the more compact single-letter abbreviation "z" for "dozenal" and "d" for "decimal", "54z = 64d". Other proposed methods include italicizing duodecimal numbers ""54" = 64", adding a "Humphrey point" (a semicolon instead of a decimal point) to duodecimal numbers "54;6 = 64.5", prefixing duodecimal numbers by an asterisk "*54 = 64", or some combination of these. The Dozenal Society of Great Britain uses an asterisk prefix for duodecimal whole numbers, and a Humphrey point for other duodecimal numbers. Pronunciation. The Dozenal Society of America suggested the pronunciation of ten and eleven as "dek" and "el". For the names of powers of twelve, there are two prominent systems. Duodecimal numbers. In this system, the prefix "e"- is added for fractions. As numbers get larger (or fractions smaller), the last two morphemes are successively replaced with tri-mo, quad-mo, penta-mo, and so on. Multiple digits in this series are pronounced differently: 12 is "do two"; 30 is "three do"; 100 is "gro"; BA9 is "el gro dek do nine"; B86 is "el gro eight do six"; 8BB,15A is "eight gro el do el, one gro five do dek"; ABA is "dek gro el do dek"; BBB is "el gro el do el"; 0.06 is "six egro"; and so on. Systematic Dozenal Nomenclature (SDN). This system uses "-qua" ending for the positive powers of 12 and "-cia" ending for the negative powers of 12, and an extension of the IUPAC systematic element names (with syllables dec and lev for the two extra digits needed for duodecimal) to express which power is meant. After hex-, further prefixes continue sept-, oct-, enn-, dec-, lev-, unnil-, unun-. Advocacy and "dozenalism". William James Sidis used 12 as the base for his constructed language Vendergood in 1906, noting it being the smallest number with four factors and its prevalence in commerce. The case for the duodecimal system was put forth at length in Frank Emerson Andrews' 1935 book "New Numbers: How Acceptance of a Duodecimal Base Would Simplify Mathematics". Emerson noted that, due to the prevalence of factors of twelve in many traditional units of weight and measure, many of the computational advantages claimed for the metric system could be realized "either" by the adoption of ten-based weights and measure "or" by the adoption of the duodecimal number system. Both the Dozenal Society of America and the Dozenal Society of Great Britain promote widespread adoption of the duodecimal system. They use the word "dozenal" instead of "duodecimal" to avoid the more overtly decimal terminology. However, the etymology of "dozenal" itself is also an expression based on decimal terminology since "dozen" is a direct derivation of the French word "douzaine", which is a derivative of the French word for twelve, "douze", descended from Latin "duodecim". Mathematician and mental calculator Alexander Craig Aitken was an outspoken advocate of duodecimal: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The duodecimal tables are easy to master, easier than the decimal ones; and in elementary teaching they would be so much more interesting, since young children would find more fascinating things to do with twelve rods or blocks than with ten. Anyone having these tables at command will do these calculations more than one-and-a-half times as fast in the duodecimal scale as in the decimal. 
This is my experience; I am certain that even more so it would be the experience of others. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;But the final quantitative advantage, in my own experience, is this: in varied and extensive calculations of an ordinary and not unduly complicated kind, carried out over many years, I come to the conclusion that the efficiency of the decimal system might be rated at about 65 or less, if we assign 100 to the duodecimal. In media. In "Little Twelvetoes", American television series "Schoolhouse Rock!" portrayed an alien being with twelve fingers and twelve toes using duodecimal arithmetic, using "dek" and "el" as names for ten and eleven, and Andrews' script-X and script-E for the digit symbols. Duodecimal systems of measurements. Systems of measurement proposed by dozenalists include: "In this section, numerals are in decimal. For example, "10" means 9+1, and "12" means 9+3." Comparison to other number systems. The Dozenal Society of America argues that if a base is too small, significantly longer expansions are needed for numbers; if a base is too large, one must memorise a large multiplication table to perform arithmetic. Thus, it presumes that "a number base will need to be between about 7 or 8 through about 16, possibly including 18 and 20". The number 12 has six factors, which are 1, 2, 3, 4, 6, and 12, of which 2 and 3 are prime. It is the smallest number to have six factors, the largest number to have at least half of the numbers below it as divisors, and is only slightly larger than 10. (The numbers 18 and 20 also have six factors but are much larger.) Ten, in contrast, only has four factors, which are 1, 2, 5, and 10, of which 2 and 5 are prime. Six shares the prime factors 2 and 3 with twelve; however, like ten, six only has four factors (1, 2, 3, and 6) instead of six. Its corresponding base, senary, is below the DSA's stated threshold. Eight and Sixteen only have 2 as a prime factor. Therefore, in octal and hexadecimal, the only terminating fractions are those whose denominator is a power of two. Thirty is the smallest number that has three different prime factors (2, 3, and 5, the first three primes), and it has eight factors in total (1, 2, 3, 5, 6, 10, 15, and 30). Sexagesimal was actually used by the ancient Sumerians and Babylonians, among others; its base, sixty, adds the four convenient factors 4, 12, 20, and 60 to 30 but no new prime factors. The smallest number that has four different prime factors is 210; the pattern follows the primorials. However, these numbers are quite large to use as bases, and are far beyond the DSA's stated threshold. In all base systems, there are similarities to the representation of multiples of numbers that are one less than or one more than the base."In the following multiplication table, numerals are written in duodecimal. For example, "10" means twelve, and "12" means fourteen." Conversion tables to and from decimal. To convert numbers between bases, one can use the general conversion algorithm (see the relevant section under positional notation). Alternatively, one can use digit-conversion tables. The ones provided below can be used to convert any duodecimal number between 0;1 and BB,BBB;B to decimal, or any decimal number between 0.1 and 99,999.9 to duodecimal. To use them, the given number must first be decomposed into a sum of numbers with only one significant digit each. 
For example: 12,345.6 = 10,000 + 2,000 + 300 + 40 + 5 + 0.6 This decomposition works the same no matter what base the number is expressed in. Just isolate each non-zero digit, padding them with as many zeros as necessary to preserve their respective place values. If the digits in the given number include zeroes (for example, 7,080.9), these are left out in the digit decomposition (7,080.9 = 7,000 + 80 + 0.9). Then, the digit conversion tables can be used to obtain the equivalent value in the target base for each digit. If the given number is in duodecimal and the target base is decimal, we get: 10,000 + 2,000 + 300 + 40 + 5 + 0;6 &lt;br&gt; = 20,736 + 3,456 + 432 + 48 + 5 + 0.5 Because the summands are already converted to decimal, the usual decimal arithmetic is used to perform the addition and recompose the number, arriving at the conversion result: Duodecimal ---&gt; Decimal 10,000 = 20,736 2,000 = 3,456 300 = 432 40 = 48 5 = 5 + 0;6 = + 0.5 12,345;6 = 24,677.5 That is, 12,345;6 equals 24,677.5 If the given number is in decimal and the target base is duodecimal, the method is same. Using the digit conversion tables: 10,000 + 2,000 + 300 + 40 + 5 + 0.6 &lt;br&gt; = 5,954 + 1,1A8 + 210 + 34 + 5 + 0;7249 To sum these partial products and recompose the number, the addition must be done with duodecimal rather than decimal arithmetic: Decimal --&gt; Duodecimal 10,000 = 5,954 2,000 = 1,1A8 300 = 210 40 = 34 5 = 5 + 0.6 = + 0;7249 12,345.6 = 7,189;7249 That is, 12,345.6 equals 7,189;7249 Fractions and irrational numbers. Fractions. Duodecimal fractions for rational numbers with 3-smooth denominators terminate: while other rational numbers have recurring duodecimal fractions: As explained in recurring decimals, whenever an irreducible fraction is written in radix point notation in any base, the fraction can be expressed exactly (terminates) if and only if all the prime factors of its denominator are also prime factors of the base. Because formula_0 in the decimal system, fractions whose denominators are made up solely of multiples of 2 and 5 terminate:  = ,  = , and  =  can be expressed exactly as 0.125, 0.05, and 0.002 respectively. and , however, recur (0.333... and 0.142857142857...). Because formula_1 in the duodecimal system, is exact; and recur because they include 5 as a factor; is exact, and recurs, just as it does in decimal. The number of denominators that give terminating fractions within a given number of digits, "n", in a base "b" is the number of factors (divisors) of formula_2, the "n"th power of the base "b" (although this includes the divisor 1, which does not produce fractions when used as the denominator). The number of factors of "formula_2" is given using its prime factorization. For decimal, formula_3. The number of divisors is found by adding one to each exponent of each prime and multiplying the resulting quantities together, so the number of factors of "formula_4" is formula_5. For example, the number 8 is a factor of 103 (1000), so formula_6 and other fractions with a denominator of 8 cannot require more than three fractional decimal digits to terminate. formula_7 For duodecimal, formula_8. This has formula_9 divisors. The sample denominator of 8 is a factor of a gross formula_10 in decimal), so eighths cannot need more than two duodecimal fractional places to terminate. 
formula_11 Because both ten and twelve have two unique prime factors, the number of divisors of "formula_2" for "b" 10 or 12 grows quadratically with the exponent "n" (in other words, of the order of formula_12). Recurring digits. The Dozenal Society of America argues that factors of 3 are more commonly encountered in real-life division problems than factors of 5. Thus, in practical applications, the nuisance of repeating decimals is encountered less often when duodecimal notation is used. Advocates of duodecimal systems argue that this is particularly true of financial calculations, in which the twelve months of the year often enter into calculations. However, when recurring fractions "do" occur in duodecimal notation, they are less likely to have a very short period than in decimal notation, because 12 (twelve) is between two prime numbers, 11 (eleven) and 13 (thirteen), whereas ten is adjacent to the composite number 9. Nonetheless, having a shorter or longer period does not help the main inconvenience that one does not get a finite representation for such fractions in the given base (so rounding, which introduces inexactitude, is necessary to handle them in calculations), and overall one is more likely to have to deal with infinite recurring digits when fractions are expressed in decimal than in duodecimal, because one out of every three consecutive numbers contains the prime factor 3 in its factorization, whereas only one out of every five contains the prime factor 5. All other prime factors, except 2, are not shared by either ten or twelve, so they do not influence the relative likeliness of encountering recurring digits (any irreducible fraction that contains any of these other factors in its denominator will recur in either base). Also, the prime factor 2 appears twice in the factorization of twelve, whereas only once in the factorization of ten; which means that most fractions whose denominators are powers of two will have a shorter, more convenient terminating representation in duodecimal than in decimal: The duodecimal period length of 1/"n" are (in decimal) 0, 0, 0, 0, 4, 0, 6, 0, 0, 4, 1, 0, 2, 6, 4, 0, 16, 0, 6, 4, 6, 1, 11, 0, 20, 2, 0, 6, 4, 4, 30, 0, 1, 16, 12, 0, 9, 6, 2, 4, 40, 6, 42, 1, 4, 11, 23, 0, 42, 20, 16, 2, 52, 0, 4, 6, 6, 4, 29, 4, 15, 30, 6, 0, 4, 1, 66, 16, 11, 12, 35, 0, ... (sequence in the OEIS) The duodecimal period length of 1/("n"th prime) are (in decimal) 0, 0, 4, 6, 1, 2, 16, 6, 11, 4, 30, 9, 40, 42, 23, 52, 29, 15, 66, 35, 36, 26, 41, 8, 16, 100, 102, 53, 54, 112, 126, 65, 136, 138, 148, 150, 3, 162, 83, 172, 89, 90, 95, 24, 196, 66, 14, 222, 113, 114, 8, 119, 120, 125, 256, 131, 268, 54, 138, 280, ... (sequence in the OEIS) Smallest prime with duodecimal period "n" are (in decimal) 11, 13, 157, 5, 22621, 7, 659, 89, 37, 19141, 23, 20593, 477517, 211, 61, 17, 2693651, 1657, 29043636306420266077, 85403261, 8177824843189, 57154490053, 47, 193, 303551, 79, 306829, 673, 59, 31, 373, 153953, 886381, 2551, 71, 73, ... (sequence in the OEIS) Irrational numbers. The representations of irrational numbers in any positional number system (including decimal and duodecimal) neither terminate nor repeat. The following table gives the first digits for some important algebraic and transcendental numbers in both decimal and duodecimal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
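As a check on the conversion procedure and the terminating-fraction criterion described above, here is a minimal Python sketch. It is ours (the helper names are not standard library functions); it reproduces the worked example 12,345;6 = 24,677.5 and tests whether a reduced fraction terminates in a given base.

```python
from fractions import Fraction
from math import gcd

DIGITS = "0123456789AB"  # A = ten, B = eleven, as used in this article

def duodecimal_to_decimal(s: str) -> Fraction:
    """Convert a duodecimal string like '12345;6' (Humphrey point ';') to a Fraction."""
    whole, _, frac = s.partition(";")
    value = Fraction(0)
    for d in whole:
        value = value * 12 + DIGITS.index(d)
    for i, d in enumerate(frac, start=1):
        value += Fraction(DIGITS.index(d), 12 ** i)
    return value

def terminates(numerator: int, denominator: int, base: int) -> bool:
    """A reduced fraction terminates in `base` iff every prime factor of the
    denominator also divides the base."""
    d = denominator // gcd(numerator, denominator)
    g = gcd(d, base)
    while g > 1:
        while d % g == 0:
            d //= g
        g = gcd(d, base)
    return d == 1

print(duodecimal_to_decimal("12345;6"))            # 49355/2, i.e. 24677.5, matching the text
print(terminates(5, 8, 12), terminates(1, 5, 12))  # True  False (1/5 recurs in duodecimal)
```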
[ { "math_id": 0, "text": "2\\times5=10" }, { "math_id": 1, "text": "2\\times2\\times3=12" }, { "math_id": 2, "text": "b^n" }, { "math_id": 3, "text": "10^n=2^n\\times 5^n" }, { "math_id": 4, "text": "10^n" }, { "math_id": 5, "text": "(n+1)(n+1)=(n+1)^2" }, { "math_id": 6, "text": "\\frac{1}{8}" }, { "math_id": 7, "text": "\\frac{5}{8}=0.625_{10}." }, { "math_id": 8, "text": "10^n=2^{2n}\\times 3^n" }, { "math_id": 9, "text": "(2n+1)(n+1)" }, { "math_id": 10, "text": "12^2=144" }, { "math_id": 11, "text": "\\frac{5}{8}=0.76_{12}." }, { "math_id": 12, "text": "n^2" } ]
https://en.wikipedia.org/wiki?curid=8400
840106
Master equation
Equations governing time evolution of physical systems In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states. The name was proposed in 1940: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"When the probabilities of the elementary processes are known, one can write down a continuity equation for W, from which all other equations can be derived and which we will call therefore the "master” equation." Introduction. A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable "t". The most familiar form of a master equation is a matrix form: formula_0 where formula_1 is a column vector, and formula_2 is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state "i" is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrix formula_2 depends on the time, formula_3 ), the process is not stationary and the master equation reads formula_4 When the connections represent multi exponential jumping time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation: formula_5 The matrix formula_2 can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, and then the process is not in equilibrium. Detailed description of the matrix and properties of the system. Let formula_2 be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but is appropriate for conventional matrix multiplication. For each state "k", the increase in occupation probability depends on the contribution from all other states to "k", and is given by: formula_6 where formula_7 is the probability for the system to be in the state formula_8, while the matrix formula_2 is filled with a grid of transition-rate constants. Similarly, formula_9 contributes to the occupation of all other states formula_10 formula_11 In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation. The master equation can be simplified so that the terms with "ℓ" = "k" do not appear in the summation. This allows calculations even if the main diagonal of formula_2 is not defined or has been assigned an arbitrary value. formula_12 The final equality arises from the fact that formula_13 because the summation over the probabilities formula_14 yields one, a constant function. 
Since this has to hold for any probability formula_1 (and in particular for any probability of the form formula_15 for some k) we get formula_16 Using this we can write the diagonal elements as formula_17 The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium—i.e. if, for all states "k" and "ℓ" having equilibrium probabilities formula_18 and formula_19, formula_20 These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations. Examples of master equations. Many physical problems in classical and quantum mechanics, as well as problems in other sciences, can be reduced to the form of a "master equation", thereby performing a great simplification of the problem (see mathematical model). The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a "master equation", it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix). Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion. Stochastic chemical kinetics provide yet another example of the use of the master equation. A master equation may be used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules). The chemical master equation can also be solved for very large models, such as the DNA damage signal from the fungal pathogen Candida albicans. Quantum master equations. A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical. The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation). Theorem about eigenvalues of the matrix and time evolution. Because formula_2 fulfills formula_21 and formula_22 one can show that: there is at least one eigenvalue formula_23 equal to zero; every other eigenvalue formula_23 satisfies formula_24; and every eigenvector formula_25 belonging to a non-zero eigenvalue satisfies formula_26. This has important consequences for the time evolution of a state. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
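As a concrete illustration of the matrix form of the master equation, the following Python sketch (ours; the three-state rate matrix is made up for the example) propagates a probability vector with the matrix exponential and checks the column-sum and normalization properties discussed above.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# A made-up rate matrix for a three-state system: A[k, l] is the rate l -> k,
# and each diagonal entry is minus the sum of the other entries in its column,
# so every column sums to zero.
A = np.array([[-0.5,  0.3,  0.1],
              [ 0.2, -0.7,  0.4],
              [ 0.3,  0.4, -0.5]])

assert np.allclose(A.sum(axis=0), 0.0)  # columns sum to zero

P0 = np.array([1.0, 0.0, 0.0])          # start in state 0 with certainty

for t in (0.0, 1.0, 10.0, 100.0):
    P = expm(A * t) @ P0                # solution of dP/dt = A P
    print(t, P, P.sum())                # probabilities stay normalized (sum = 1)

# The stationary distribution is the null eigenvector of A, rescaled to sum to 1.
w, v = np.linalg.eig(A)
stationary = np.real(v[:, np.argmin(np.abs(w))])
print(stationary / stationary.sum())
```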
[ { "math_id": 0, "text": " \\frac{d\\vec{P}}{dt} = \\mathbf{A}\\vec{P}," }, { "math_id": 1, "text": "\\vec{P}" }, { "math_id": 2, "text": "\\mathbf{A}" }, { "math_id": 3, "text": "\\mathbf{A}\\rightarrow\\mathbf{A}(t)" }, { "math_id": 4, "text": " \\frac{d\\vec{P}}{dt} = \\mathbf{A}(t)\\vec{P}." }, { "math_id": 5, "text": " \\frac{d\\vec{P}}{dt}= \\int^t_0 \\mathbf{A}(t- \\tau )\\vec{P}( \\tau ) \\, d \\tau . " }, { "math_id": 6, "text": " \\sum_\\ell A_{k\\ell}P_\\ell, " }, { "math_id": 7, "text": " P_\\ell " }, { "math_id": 8, "text": " \\ell " }, { "math_id": 9, "text": "P_k" }, { "math_id": 10, "text": " P_\\ell, " }, { "math_id": 11, "text": " \\sum_\\ell A_{\\ell k}P_k, " }, { "math_id": 12, "text": "\n \\frac{dP_k}{dt}\n = \\sum_\\ell(A_{k\\ell}P_\\ell)\n = \\sum_{\\ell\\neq k}(A_{k\\ell}P_\\ell) + A_{kk}P_k\n = \\sum_{\\ell\\neq k}(A_{k\\ell}P_\\ell - A_{\\ell k}P_k). " }, { "math_id": 13, "text": " \\sum_{\\ell, k}(A_{\\ell k}P_k) = \\frac{d}{dt} \\sum_\\ell(P_{\\ell}) = 0 " }, { "math_id": 14, "text": " P_{\\ell} " }, { "math_id": 15, "text": " P_{\\ell} = \\delta_{\\ell k}" }, { "math_id": 16, "text": " \\sum_{\\ell}(A_{\\ell k}) = 0 \\qquad \\forall k." }, { "math_id": 17, "text": " A_{kk} = -\\sum_{\\ell\\neq k}(A_{\\ell k}) \\Rightarrow A_{kk} P_k = -\\sum_{\\ell\\neq k}(A_{\\ell k} P_k) ." }, { "math_id": 18, "text": "\\pi_k" }, { "math_id": 19, "text": "\\pi_\\ell" }, { "math_id": 20, "text": "A_{k \\ell} \\pi_\\ell = A_{\\ell k} \\pi_k ." }, { "math_id": 21, "text": " \\sum_{\\ell}A_{\\ell k} = 0 \\qquad \\forall k" }, { "math_id": 22, "text": " A_{\\ell k} \\geq 0 \\qquad \\forall \\ell\\neq k," }, { "math_id": 23, "text": " \\lambda" }, { "math_id": 24, "text": " 0 > \\operatorname{Re} \\lambda \\geq 2 \\operatorname{min}_i A_{ii}" }, { "math_id": 25, "text": "v" }, { "math_id": 26, "text": " \\sum_{i}v_{i} = 0" } ]
https://en.wikipedia.org/wiki?curid=840106
8401893
Runge–Kutta–Fehlberg method
Algorithm in numerical analysis In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods. The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that identical function evaluations are used in conjunction with each other to create methods of varying order and similar error constants. The method presented in Fehlberg's 1969 paper has been dubbed the RKF45 method, and is a method of order O("h"4) with an error estimator of order O("h"5). By performing one extra calculation, the error in the solution can be estimated and controlled by using the higher-order embedded method that allows for an adaptive stepsize to be determined automatically. Butcher tableau for Fehlberg's 4(5) method. Any Runge–Kutta method is uniquely identified by its Butcher tableau. The embedded pair proposed by Fehlberg is given in the table. The first row of coefficients at the bottom of the table gives the fifth-order accurate method, and the second row gives the fourth-order accurate method. Implementing an RK4(5) Algorithm. The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α2=1/3) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: The coefficients in the table below do not work. Fehlberg outlines a solution to solving a system of "n" differential equations of the form: formula_0 to iteratively solve for formula_1 where "h" is an adaptive stepsize to be determined algorithmically: The solution is the weighted average of six increments, where each increment is the product of the size of the interval, formula_2, and an estimated slope specified by function "f" on the right-hand side of the differential equation. formula_3 Then the weighted average is: formula_4 The estimate of the truncation error is: formula_5 At the completion of the step, a new stepsize is calculated: formula_6 If formula_7, then replace formula_2 with formula_8 and repeat the step. If formula_9, then the step is completed. Replace formula_2 with formula_8 for the next step. The coefficients found by Fehlberg for Formula 2 (derivation with his parameter α2 = 3/8) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: In another table in Fehlberg, coefficients for an RKF4(5) derived by D. Sarafyan are given:
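The adaptive-stepsize scheme described above is straightforward to implement. The sketch below is ours: since the coefficient tables did not survive in this text, it uses the widely published RKF45 coefficients (those of the Butcher tableau mentioned earlier), which may differ from the Formula 1, Formula 2, and Sarafyan variants referenced above and should be checked against Fehlberg's paper before serious use.

```python
def rkf45_step(f, x, y, h):
    """One Runge-Kutta-Fehlberg 4(5) step for y' = f(x, y), scalar y.
    Returns (y4, truncation_error_estimate) using the commonly cited coefficients."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 4, y + k1 / 4)
    k3 = h * f(x + 3 * h / 8, y + 3 * k1 / 32 + 9 * k2 / 32)
    k4 = h * f(x + 12 * h / 13,
               y + 1932 * k1 / 2197 - 7200 * k2 / 2197 + 7296 * k3 / 2197)
    k5 = h * f(x + h,
               y + 439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513 - 845 * k4 / 4104)
    k6 = h * f(x + h / 2,
               y - 8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
                 + 1859 * k4 / 4104 - 11 * k5 / 40)
    # Fourth-order solution (used to advance) and fifth-order solution (for the error).
    y4 = y + 25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5
    y5 = y + 16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430 \
           - 9 * k5 / 50 + 2 * k6 / 55
    return y4, abs(y5 - y4)

def rkf45(f, x0, y0, x_end, h=0.1, eps=1e-8):
    """Adaptive integration from x0 to x_end with the stepsize rule of the text."""
    x, y = x0, y0
    while x < x_end:
        h = min(h, x_end - x)
        y_new, te = rkf45_step(f, x, y, h)
        # New stepsize per the text; doubling when te == 0 is our own guard.
        h_new = 0.9 * h * (eps / te) ** 0.2 if te > 0 else 2 * h
        if te > eps:
            h = h_new                      # reject: shrink the step and retry
            continue
        x, y, h = x + h, y_new, h_new      # accept the step and adapt h
    return y

# Example: y' = y with y(0) = 1, so y(1) should be close to e = 2.718281828...
print(rkf45(lambda x, y: y, 0.0, 1.0, 1.0))
```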
[ { "math_id": 0, "text": "\\frac{dy_i}{dx} = f_i(x,y_1,y_2, \\ldots, y_n), i=1,2,\\ldots,n" }, { "math_id": 1, "text": "y_i(x+h), i=1,2,\\ldots,n" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "\\begin{align}\nk_1&=h\\cdot f(x+A(1) \\cdot h,y) \\\\\nk_2&=h\\cdot f(x+A(2)\\cdot h,y+B(2,1)\\cdot k_1) \\\\\nk_3&=h\\cdot f(x+A(3)\\cdot h, y+B(3,1)\\cdot k_1+B(3,2)\\cdot k_2 ) \\\\\nk_4&=h\\cdot f(x+A(4)\\cdot h, y+B(4,1)\\cdot k_1+B(4,2)\\cdot k_2+B(4,3)\\cdot k_3 ) \\\\\nk_5&=h\\cdot f(x+A(5)\\cdot h, y+B(5,1)\\cdot k_1+B(5,2)\\cdot k_2+B(5,3)\\cdot k_3+B(5,4)\\cdot k_4 ) \\\\\nk_6&=h\\cdot f(x+A(6)\\cdot h, y+B(6,1)\\cdot k_1+B(6,2)\\cdot k_2+B(6,3)\\cdot k_3+B(6,4)\\cdot k_4+B(6,5) \\cdot k_5)\n\\end{align}" }, { "math_id": 4, "text": "y(x+h)=y(x) + CH(1) \\cdot k_1 + CH(2) \\cdot k_2 + CH(3) \\cdot k_3 + CH(4) \\cdot k_4 + CH(5) \\cdot k_5 + CH(6) \\cdot k_6 " }, { "math_id": 5, "text": "\\mathrm{TE} = \\left|\\mathrm{CT}(1) \\cdot k_1 + \\mathrm{CT}(2) \\cdot k_2 + \\mathrm{CT}(3) \\cdot k_3 + \\mathrm{CT}(4) \\cdot k_4 + \\mathrm{CT}(5) \\cdot k_5 + \\mathrm{CT}(6) \\cdot k_6\\right|" }, { "math_id": 6, "text": "h_{\\text{new}} = 0.9 \\cdot h \\cdot \\left ( \\frac {\\varepsilon} {TE} \\right )^{1/5}" }, { "math_id": 7, "text": "\\mathrm{TE} > \\varepsilon" }, { "math_id": 8, "text": "h_{\\text{new}}" }, { "math_id": 9, "text": "TE\\leqslant\\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=8401893
84029
Euclidean planes in three-dimensional space
Flat surface In Euclidean geometry, a plane is a flat two-dimensional surface that extends indefinitely. Euclidean planes often arise as subspaces of three-dimensional space formula_0. A prototypical example is one of a room's walls, infinitely extended and assumed to be infinitesimally thin. While a pair of real numbers formula_1 suffices to describe points on a plane, the relationship with out-of-plane points requires special consideration for their embedding in the ambient space formula_0. Derived concepts. A plane segment or planar region (or simply "plane", in lay use) is a planar surface region; it is analogous to a line segment. A "bivector" is an oriented plane segment, analogous to directed line segments. A "face" is a plane segment bounding a solid object. A "slab" is a region bounded by two parallel planes. A "parallelepiped" is a region bounded by three pairs of parallel planes. Occurrence in nature. A plane serves as a mathematical model for many physical phenomena, such as specular reflection in a plane mirror or wavefronts in a traveling plane wave. The free surface of undisturbed liquids tends to be nearly flat (see flatness). The flattest surface ever manufactured is a quantum-stabilized atom mirror. In astronomy, various "reference planes" are used to define positions in orbit. "Anatomical planes" may be lateral ("sagittal"), frontal ("coronal") or transversal. In geology, "beds" (layers of sediments) often are planar. Planes are involved in different forms of imaging, such as the "focal plane", "picture plane", and "image plane". Background. Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms (called "common notions") and postulates (or axioms) which he then used to prove various geometrical statements. Although the plane in its modern sense is not directly given a definition anywhere in the "Elements", it may be thought of as part of the common notions. Euclid never used numbers to measure length, angle, or area. The Euclidean plane equipped with a chosen Cartesian coordinate system is called a "Cartesian plane"; a non-Cartesian Euclidean plane equipped with a polar coordinate system would be called a "polar plane". A plane is a ruled surface. Representation. This section is solely concerned with planes embedded in three dimensions: specifically, in R3. Determination by contained points and lines. In a Euclidean space of any number of dimensions, a plane is uniquely determined by any of the following: three non-collinear points (points not on a single line); a line and a point not on that line; two distinct but intersecting lines; or two distinct but parallel lines. Properties. The following statements hold in three-dimensional Euclidean space but not in higher dimensions, though they have higher-dimensional analogues: two distinct planes are either parallel or they intersect in a line; a line is either parallel to a plane, intersects it at a single point, or is contained in the plane; two distinct lines perpendicular to the same plane must be parallel to each other; and two distinct planes perpendicular to the same line must be parallel to each other. Point–normal form and general form of the equation of a plane. In a manner analogous to the way lines in a two-dimensional space are described using a point-slope form for their equations, planes in a three dimensional space have a natural description using a point in the plane and a vector orthogonal to it (the normal vector) to indicate its "inclination". Specifically, let r0 be the position vector of some point "P"0 = ("x"0, "y"0, "z"0), and let n = ("a", "b", "c") be a nonzero vector. The plane determined by the point "P"0 and the vector n consists of those points "P", with position vector r, such that the vector drawn from "P"0 to "P" is perpendicular to n. 
Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be described as the set of all points r such that formula_2 The dot here means a dot (scalar) product.&lt;br&gt; Expanded this becomes formula_3 which is the "point–normal" form of the equation of a plane. This is just a linear equation formula_4 where formula_5 which is the expanded form of formula_6 In mathematics it is a common convention to express the normal as a unit vector, but the above argument holds for a normal vector of any non-zero length. Conversely, it is easily shown that if "a", "b", "c", and "d" are constants and "a", "b", and "c" are not all zero, then the graph of the equation formula_4 is a plane having the vector n = ("a", "b", "c") as a normal. This familiar equation for a plane is called the "general form" of the equation of the plane. Thus for example a regression equation of the form "y" = "d" + "ax" + "cz" (with "b" = −1) establishes a best-fit plane in three-dimensional space when there are two explanatory variables. Describing a plane with a point and two vectors lying on it. Alternatively, a plane may be described parametrically as the set of all points of the form formula_7 where s and t range over all real numbers, v and w are given linearly independent vectors defining the plane, and r0 is the vector representing the position of an arbitrary (but fixed) point on the plane. The vectors v and w can be visualized as vectors starting at r0 and pointing in different directions along the plane. The vectors v and w can be perpendicular, but cannot be parallel. Describing a plane through three points. Let p1 = ("x"1, "y"1, "z"1), p2 = ("x"2, "y"2, "z"2), and p3 = ("x"3, "y"3, "z"3) be non-collinear points. Method 1. The plane passing through p1, p2, and p3 can be described as the set of all points ("x","y","z") that satisfy the following determinant equations: formula_8 Method 2. To describe the plane by an equation of the form formula_9, solve the following system of equations: formula_10 formula_11 formula_12 This system can be solved using Cramer's rule and basic matrix manipulations. Let formula_13 If "D" is non-zero (so for planes not through the origin) the values for "a", "b" and "c" can be calculated as follows: formula_14 formula_15 formula_16 These equations are parametric in "d". Setting "d" equal to any non-zero number and substituting it into these equations will yield one solution set. Method 3. This plane can also be described by the "point and a normal vector" prescription above. A suitable normal vector is given by the cross product formula_17 and the point r0 can be taken to be any of the given points p1, p2 or p3 (or any other point in the plane). See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. Explanatory notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
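Method 3 above translates directly into a few lines of code. The following numpy sketch (ours) computes the coefficients a, b, c, d of the general form ax + by + cz + d = 0 for the plane through three given points.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (a, b, c, d) with a*x + b*y + c*z + d = 0 for the plane
    through three non-collinear points, using n = (p2 - p1) x (p3 - p1)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # normal vector
    if np.allclose(n, 0):
        raise ValueError("points are collinear; they do not determine a plane")
    d = -np.dot(n, p1)                  # d = -(a*x0 + b*y0 + c*z0)
    return (*n, d)

# Example: the plane through (1,0,0), (0,1,0), (0,0,1) is x + y + z - 1 = 0.
a, b, c, d = plane_from_points((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(a, b, c, d)   # 1 1 1 -1
```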
[ { "math_id": 0, "text": "\\mathbb{R}^3" }, { "math_id": 1, "text": "\\mathbb{R}^2" }, { "math_id": 2, "text": "\\boldsymbol{n} \\cdot (\\boldsymbol{r}-\\boldsymbol{r}_0)=0." }, { "math_id": 3, "text": " a (x-x_0) + b(y-y_0) + c(z-z_0) = 0," }, { "math_id": 4, "text": " ax + by + cz + d = 0," }, { "math_id": 5, "text": " d = -(ax_0 + by_0 + cz_0)," }, { "math_id": 6, "text": "- \\boldsymbol{n} \\cdot \\boldsymbol{r}_0." }, { "math_id": 7, "text": "\\boldsymbol{r} = \\boldsymbol{r}_0 + s \\boldsymbol{v} + t \\boldsymbol{w}," }, { "math_id": 8, "text": "\\begin{vmatrix}\nx - x_1 & y - y_1 & z - z_1 \\\\\nx_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\\\\nx_3 - x_1 & y_3 - y_1 & z_3 - z_1\n\\end{vmatrix} = \\begin{vmatrix}\nx - x_1 & y - y_1 & z - z_1 \\\\\nx - x_2 & y - y_2 & z - z_2 \\\\\nx - x_3 & y - y_3 & z - z_3\n\\end{vmatrix} = 0. " }, { "math_id": 9, "text": " ax + by + cz + d = 0 " }, { "math_id": 10, "text": " ax_1 + by_1 + cz_1 + d = 0" }, { "math_id": 11, "text": " ax_2 + by_2 + cz_2 + d = 0" }, { "math_id": 12, "text": " ax_3 + by_3 + cz_3 + d = 0." }, { "math_id": 13, "text": "D = \\begin{vmatrix}\nx_1 & y_1 & z_1 \\\\\nx_2 & y_2 & z_2 \\\\\nx_3 & y_3 & z_3\n\\end{vmatrix}." }, { "math_id": 14, "text": "a = \\frac{-d}{D} \\begin{vmatrix}\n1 & y_1 & z_1 \\\\\n1 & y_2 & z_2 \\\\\n1 & y_3 & z_3\n\\end{vmatrix}" }, { "math_id": 15, "text": "b = \\frac{-d}{D} \\begin{vmatrix}\nx_1 & 1 & z_1 \\\\\nx_2 & 1 & z_2 \\\\\nx_3 & 1 & z_3\n\\end{vmatrix}" }, { "math_id": 16, "text": "c = \\frac{-d}{D} \\begin{vmatrix}\nx_1 & y_1 & 1 \\\\\nx_2 & y_2 & 1 \\\\\nx_3 & y_3 & 1\n\\end{vmatrix}." }, { "math_id": 17, "text": "\\boldsymbol n = ( \\boldsymbol p_2 - \\boldsymbol p_1 ) \\times ( \\boldsymbol p_3 - \\boldsymbol p_1 ), " } ]
https://en.wikipedia.org/wiki?curid=84029
8404953
Factor shares
In macroeconomics, factor shares are the share of production given to the factors of production, usually capital and labor. This concept uses the methods and fits into the framework of neoclassical economics. Derivation. In exogenous growth models, the production function can be represented by: formula_0 with "Y" total production, "K" capital, and "L" labor. So a representative agent will attempt to maximize a profit function: formula_1 where formula_2 is the cost to the firm, "r" the rental rate of capital, "w" the wage rate for labor, and "P" is the price of the output. As in microeconomic supply and demand models, the first-order conditions require that the derivatives of this function with respect to capital and labor be zero at the function's maximum. Thus (assuming "P = 1") we can calculate the wage and the rental rate of capital: formula_3 and formula_4. Now we can write the expenditure allocated to labor as formula_5 and to capital as formula_6. So the factor share devoted to labor is: formula_7 and the factor share devoted to capital is: formula_8 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
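As a worked example, take a Cobb–Douglas production function; this specific functional form is our assumption, since the derivation above works with a general F(K, L). The following sympy sketch carries out the derivation symbolically and confirms that with F(K, L) = K^α · L^(1−α) the labor share is 1 − α and the capital share is α, independent of K and L.

```python
import sympy as sp

K, L, alpha = sp.symbols("K L alpha", positive=True)

# Assumed Cobb-Douglas production function (constant returns to scale).
F = K**alpha * L**(1 - alpha)

w = sp.diff(F, L)   # wage = marginal product of labor   (with P = 1)
r = sp.diff(F, K)   # rental rate = marginal product of capital

labor_share = sp.simplify(w * L / F)
capital_share = sp.simplify(r * K / F)

print(labor_share)    # 1 - alpha
print(capital_share)  # alpha
print(sp.simplify(labor_share + capital_share))  # 1: factor payments exhaust output
```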
[ { "math_id": 0, "text": "Y=F(K,L)\\," }, { "math_id": 1, "text": "\\pi = max_{\\{K,L\\}} F(K,L)*P - (rK + wL)\\," }, { "math_id": 2, "text": "rK+wL\\," }, { "math_id": 3, "text": "w=D_L[F(K,L)]\\," }, { "math_id": 4, "text": "r=D_K[F(K,L)]\\," }, { "math_id": 5, "text": "wL=D_L[F(K,L)]*L\\," }, { "math_id": 6, "text": "rK=D_K[F(K,L)]*K\\," }, { "math_id": 7, "text": "wL/Y=D_L[F(K,L)]*L/F(K,L)\\," }, { "math_id": 8, "text": "rK/Y=D_K[F(K,L)]*K/F(K,L)\\," } ]
https://en.wikipedia.org/wiki?curid=8404953
8405295
Finite-rank operator
In functional analysis, a branch of mathematics, a finite-rank operator is a bounded linear operator between Banach spaces whose range is finite-dimensional. Finite-rank operators on a Hilbert space. A canonical form. Finite-rank operators are matrices (of finite size) transplanted to the infinite dimensional setting. As such, these operators may be described via linear algebra techniques. From linear algebra, we know that a rectangular matrix, with complex entries, formula_0 has rank formula_1 if and only if formula_2 is of the form formula_3 Exactly the same argument shows that an operator formula_4 on a Hilbert space formula_5 is of rank formula_1 if and only if formula_6 where the conditions on formula_7 are the same as in the finite dimensional case. Therefore, by induction, an operator formula_4 of finite rank formula_8 takes the form formula_9 where formula_10 and formula_11 are orthonormal bases. Notice this is essentially a restatement of singular value decomposition. This can be said to be a "canonical form" of finite-rank operators. Generalizing slightly, if formula_8 is now countably infinite and the sequence of positive numbers formula_12 accumulate only at formula_13, formula_4 is then a compact operator, and one has the canonical form for compact operators. Compact operators are trace class only if the series formula_14 is convergent; a property that automatically holds for all finite-rank operators. Algebraic property. The family of finite-rank operators formula_15 on a Hilbert space formula_5 form a two-sided *-ideal in formula_16, the algebra of bounded operators on formula_5. In fact it is the minimal element among such ideals, that is, any two-sided *-ideal formula_17 in formula_16 must contain the finite-rank operators. This is not hard to prove. Take a non-zero operator formula_18, then formula_19 for some formula_20. It suffices to have that for any formula_21, the rank-1 operator formula_22 that maps formula_23 to formula_24 lies in formula_17. Define formula_25 to be the rank-1 operator that maps formula_23 to formula_26, and formula_27 analogously. Then formula_28 which means formula_22 is in formula_17 and this verifies the claim. Some examples of two-sided *-ideals in formula_29 are the trace-class, Hilbert–Schmidt operators, and compact operators. formula_30 is dense in all three of these ideals, in their respective norms. Since any two-sided ideal in formula_31 must contain formula_30, the algebra formula_31 is simple if and only if it is finite dimensional. Finite-rank operators on a Banach space. A finite-rank operator formula_32 between Banach spaces is a bounded operator such that its range is finite dimensional. Just as in the Hilbert space case, it can be written in the form formula_33 where now formula_34, and formula_35 are bounded linear functionals on the space formula_36. A bounded linear functional is a particular case of a finite-rank operator, namely of rank one. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
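A finite-dimensional numerical sketch of the canonical form, using NumPy's singular value decomposition (the matrix and test vector below are randomly generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A finite-dimensional stand-in: a rank-3 operator on an 8-dimensional space.
n = 3
A = rng.standard_normal((8, n)) @ rng.standard_normal((n, 8))

# The singular value decomposition gives the canonical form
#   A h = sum_i alpha_i <h, v_i> u_i
# with orthonormal u_i, v_i and alpha_i >= 0 (only n of them nonzero here).
U, alpha, Vh = np.linalg.svd(A)
h = rng.standard_normal(8)

Ah = sum(alpha[i] * np.vdot(Vh[i], h) * U[:, i] for i in range(n))
assert np.allclose(Ah, A @ h)
```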
[ { "math_id": 0, "text": " M \\in \\mathbb{C}^{n \\times m} " }, { "math_id": 1, "text": "1" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "M = \\alpha \\cdot u v^*, \\quad \\mbox{where} \\quad \\|u \\| = \\|v\\| = 1 \\quad \\mbox{and} \\quad \\alpha \\geq 0 ." }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "H" }, { "math_id": 6, "text": "T h = \\alpha \\langle h, v\\rangle u \\quad \\mbox{for all} \\quad h \\in H ," }, { "math_id": 7, "text": " \\alpha, u, v " }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "T h = \\sum _{i = 1} ^n \\alpha_i \\langle h, v_i\\rangle u_i \\quad \\mbox{for all} \\quad h \\in H ," }, { "math_id": 10, "text": "\\{ u_i \\}" }, { "math_id": 11, "text": "\\{v_i\\}" }, { "math_id": 12, "text": "\\{ \\alpha_i \\} " }, { "math_id": 13, "text": "0" }, { "math_id": 14, "text": " \\sum _i \\alpha _i " }, { "math_id": 15, "text": "F(H)" }, { "math_id": 16, "text": "L(H)" }, { "math_id": 17, "text": "I" }, { "math_id": 18, "text": "T\\in I" }, { "math_id": 19, "text": "Tf = g" }, { "math_id": 20, "text": "f, g \\neq 0" }, { "math_id": 21, "text": "h, k\\in H" }, { "math_id": 22, "text": " S_{h, k} " }, { "math_id": 23, "text": "h" }, { "math_id": 24, "text": "k" }, { "math_id": 25, "text": " S_{h, f} " }, { "math_id": 26, "text": "f" }, { "math_id": 27, "text": " S_{g,k}" }, { "math_id": 28, "text": "S_{h,k} = S_{g,k} T S_{h,f}, \\," }, { "math_id": 29, "text": " L(H) " }, { "math_id": 30, "text": " F(H)" }, { "math_id": 31, "text": " L(H)" }, { "math_id": 32, "text": "T:U\\to V" }, { "math_id": 33, "text": "T h = \\sum _{i = 1} ^n \\langle u_i, h\\rangle v_i \\quad \\mbox{for all} \\quad h \\in U ," }, { "math_id": 34, "text": "v_i\\in V" }, { "math_id": 35, "text": "u_i\\in U'" }, { "math_id": 36, "text": "U" } ]
https://en.wikipedia.org/wiki?curid=8405295
8405353
Probability integral transform
Probability theory operation In probability theory, the probability integral transform (also known as universality of the uniform) relates to the result that data values that are modeled as being random variables from any given continuous distribution can be converted to random variables having a standard uniform distribution. This holds exactly provided that the distribution being used is the true distribution of the random variables; if the distribution is one fitted to the data, the result will hold approximately in large samples. The result is sometimes modified or extended so that the result of the transformation is a standard distribution other than the uniform distribution, such as the exponential distribution. The transform was introduced by Ronald Fisher in his 1932 edition of the book "Statistical Methods for Research Workers". Applications. One use for the probability integral transform in statistical data analysis is to provide the basis for testing whether a set of observations can reasonably be modelled as arising from a specified distribution. Specifically, the probability integral transform is applied to construct an equivalent set of values, and a test is then made of whether a uniform distribution is appropriate for the constructed dataset. Examples of this are P–P plots and Kolmogorov–Smirnov tests. A second use for the transformation is in the theory related to copulas which are a means of both defining and working with distributions for statistically dependent multivariate data. Here the problem of defining or manipulating a joint probability distribution for a set of random variables is simplified or reduced in apparent complexity by applying the probability integral transform to each of the components and then working with a joint distribution for which the marginal variables have uniform distributions. A third use is based on applying the inverse of the probability integral transform to convert random variables from a uniform distribution to have a selected distribution: this is known as inverse transform sampling. Statement. Suppose that a random variable formula_0 has a continuous distribution for which the cumulative distribution function (CDF) is formula_1 Then the random variable formula_2 defined as formula_3 has a standard uniform distribution. Equivalently, if formula_4 is the uniform measure on formula_5, the distribution of formula_0 on formula_6 is the pushforward measure formula_7. Proof. Given any random continuous variable formula_0, define formula_8. Given formula_9, if formula_10 exists (i.e., if there exists a unique formula_11 such that formula_12), then: formula_13 If formula_10 does not exist, then it can be replaced in this proof by the function formula_14, where we define formula_15, formula_16, and formula_17 for formula_18, with the same result that formula_19. Thus, formula_20 is just the CDF of a formula_21 random variable, so that formula_2 has a uniform distribution on the interval formula_5. Examples. For a first, illustrative example, let formula_0 be a random variable with a standard normal distribution formula_22. Then its CDF is formula_23 where formula_24 is the error function. Then the new random variable formula_25 defined by formula_26 is uniformly distributed. As second example, if formula_0 has an exponential distribution with unit mean, then its CDF is formula_27 and the immediate result of the probability integral transform is that formula_28 has a uniform distribution. 
Moreover, by symmetry of the uniform distribution, formula_29 also has a uniform distribution.
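A short numerical sketch of both directions of the transform, using NumPy and SciPy (the sample size and random seed are arbitrary); the Kolmogorov-Smirnov test mentioned above is used to check uniformity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Draw from an exponential distribution with unit mean and apply its CDF,
# F(x) = 1 - exp(-x); the transformed values should look standard uniform.
x = rng.exponential(scale=1.0, size=10_000)
y = 1.0 - np.exp(-x)        # probability integral transform
z = np.exp(-x)              # also uniform, by the symmetry noted above

# Kolmogorov-Smirnov tests against Uniform(0, 1): large p-values are expected.
print(stats.kstest(y, "uniform"))
print(stats.kstest(z, "uniform"))

# The inverse direction (inverse transform sampling): uniform draws pushed
# through the inverse CDF recover the unit-mean exponential distribution.
u = rng.uniform(size=10_000)
x_new = -np.log(1.0 - u)
print(stats.kstest(x_new, "expon"))
```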
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "F_X." }, { "math_id": 2, "text": "Y" }, { "math_id": 3, "text": "Y:=F_X(X) \\,," }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "[0, 1]" }, { "math_id": 6, "text": "\\R" }, { "math_id": 7, "text": "\\mu \\circ F_X^{-1}" }, { "math_id": 8, "text": "Y = F_X (X)" }, { "math_id": 9, "text": "y \\in [0,1] " }, { "math_id": 10, "text": " F^{-1}_X(y) " }, { "math_id": 11, "text": "x" }, { "math_id": 12, "text": " F_X(x)=y " }, { "math_id": 13, "text": " \\begin{align}\nF_Y (y) &= \\operatorname{P}(Y\\leq y) \\\\\n &= \\operatorname{P}(F_X (X)\\leq y) \\\\\n &= \\operatorname{P}(X\\leq F^{-1}_X (y)) \\\\\n &= F_X (F^{-1}_X (y)) \\\\\n &= y\n\\end{align} " }, { "math_id": 14, "text": "\\chi" }, { "math_id": 15, "text": "\\chi(0)=-\\infty" }, { "math_id": 16, "text": "\\chi(1)=\\infty" }, { "math_id": 17, "text": " \\chi(y) \\equiv \\inf \\{x : F_X(x)\\ge y \\} " }, { "math_id": 18, "text": "y\\in(0,1)" }, { "math_id": 19, "text": "F_Y(y)=y" }, { "math_id": 20, "text": "F_Y" }, { "math_id": 21, "text": "\\mathrm{Uniform}(0,1)" }, { "math_id": 22, "text": "\\mathcal{N}(0,1)" }, { "math_id": 23, "text": "\\Phi(x) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^x {\\rm e}^{-t^2/2} \\, {\\rm d}t \n = \\frac12\\Big[\\, 1 + \\operatorname{erf}\\Big(\\frac{x}{\\sqrt{2}}\\Big)\\,\\Big],\\quad x\\in\\mathbb{R},\n\\," }, { "math_id": 24, "text": "\\operatorname{erf}()," }, { "math_id": 25, "text": "Y," }, { "math_id": 26, "text": "Y:=\\Phi(X)," }, { "math_id": 27, "text": "F(x)=1-\\exp(-x)," }, { "math_id": 28, "text": "Y=1-\\exp(-X)" }, { "math_id": 29, "text": "Z=\\exp(-X)" } ]
https://en.wikipedia.org/wiki?curid=8405353
8405799
Costin Amigo
The Costin Amigo is a lightweight sports car built in the United Kingdom from 1970 to 1972. The Amigo was designed by Frank Costin and built by Costin Automotive Racing Products Ltd. The car's chassis is made of timbers and plywood. History. Frank Costin was an engineer who started his career in aviation design and later moved into automobiles and auto racing. He is considered to have been one of the preeminent aerodynamicists of his time. In Costin's personal history of automotive designs, the Amigo was Auto Project XVIII. The name was chosen to denote a car that was driver friendly. The goals set by Costin for the Amigo included the capability to cruise at a steady with an engine speed below 5000 rpm, the ability to cover without tiring the driver or stopping for fuel while carrying adequate luggage for the trip, and a rate of fuel consumption of . The project was started in 1968, while Costin was still based in North Wales. It subsequently moved to Little Staughton, Bedfordshire, and finally to a location near Luton, where Vauxhall had a large factory. The car was officially announced in December 1970. Production of the prototype was financed by television industry executive Jack Wiggins. Additional backing was provided by Paul Pycroft de Ferranti. The Amigo's selling price was set at £3,326 78p. Some sources say only eight of the cars were ever built, while others say the total was nine. One reports a total of nine with two cars left incomplete when production ended, one of those later completed in 1979. Features. Chassis and body. The car's chassis is described as a wooden monocoque. This was not the first such structure designed and built by Costin. In 1959 he had partnered with Jem Marsh to start Marcos Engineering and produce the timber chassis Marcos GT Xylon that debuted in 1959. In 1965 the Costin-Nathan sports racer was launched, funded by Roger Nathan. And in 1967 the Costin-Harris Protos open wheel car started to be raced by Ron Harris Racing in Formula Two (F2) events, as well as one appearance in Formula One (F1) at the 1967 German Grand Prix joint F1/F2 event at the Nürburgring. The Amigo's chassis is made up of six interconnected torsion boxes. Three longitudinal boxes form the car's center tunnel and left and right sill boxes, and three lateral boxes define the engine compartment, cockpit, and boot and rear suspension bay. The underside of the car is enclosed with the exception of some service openings. The chassis is made of gaboon plywood. Parana pine replaces the Sitka spruce used by Costin on the earlier Marcos structure for jointing strips and local reinforcements. The wooden components are bonded with Aerolite adhesive from Ciba. The completed chassis weighs . Torsional stiffness is per degree of twist. Rollover protection is provided by a triangulated steel tube attached to the double-boxed rear bulkhead. The fiberglass body is bonded to the chassis with an Araldite adhesive, but is not structural. Its shape includes a reverse or reflex camber line like the one Costin had used in his aerodynamic refinements of the body of the original Lotus Elite. This contributes to aerodynamic stability at speed, although it is said to be detrimental to the same when in traffic. An unusual feature on some cars is a fin-like pylon that is attached just ahead of the trailing edge of the roof and is topped by a small red lamp. Costin's focus on aerodynamic efficiency meant that even items like the external mirrors were subject to rigorous scrutiny. 
The car's drag coefficient (formula_0) is 0.29. Running gear. Much of the car's running gear is sourced from Vauxhall, with many parts coming from the VX 4/90 in particular. The front suspension includes the crossmember from a Vauxhall Victor along with the Vauxhall's front suspension of upper and lower wishbones and coil springs. Custom trailing arms of Costin's design were added. The rear suspension employs a Vauxhall Victor live axle with leading arms, coil springs, and a Panhard rod. The damper units are special self-leveling Koni pieces, that were only otherwise made available to Ferrari. Brakes are the same front disc and rear drum assemblies used on the Vauxhall. Power train. Motive power comes from a 2.0 L Vauxhall Slant-4 engine. Some references mention a 2.3 L version of the same engine. The larger engine does not appear to have been used except in modified cars. The engine is paired with a four-speed manual transmission also from Vauxhall, augmented by a de Normanville overdrive manufactured by Laycock Engineering. A limited-slip differential was substituted for the original Vauxhall unit. Performance. The car is reported to be capable of a top speed in the range of , and able to accelerate from in from 7.1 to 7.5 seconds. Technical data. &lt;templatestyles src="Template:Table alignment/tables.css" /&gt; Motorsports. Amigo chassis number 060 appeared in the 3 Hours of Le Mans in 1971. The car was powered by a Lotus-Ford Twin Cam engine tuned by Brian Hart. Hart was also the driver, and was partnered with Paul Pycroft de Ferranti, although Pycroft never took the wheel. The car did not finish. With its Hart-tuned Lotus-Ford engine the car was capable of a top speed in excess of , and was able to reach from a standing start in 5.5 seconds. Driven by Gerry Marshall, chassis 060 won at Thruxton Circuit the same year. At this point the car had a 2.3 L slant four tuned by Bill Blydenstein to Dealer Team Vauxhall (DTV) specifications. The car was later completely rebuilt by Blydenstein, with a freshened dry sump 2.3 litre engine, a 5-speed ZF transmission and dual circuit brakes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\scriptstyle C_\\mathrm d\\," } ]
https://en.wikipedia.org/wiki?curid=8405799
840644
United States tort law
This article addresses torts in United States law. As such, it covers primarily common law. Moreover, it provides general rules, as individual states all have separate civil codes. There are three general categories of torts: intentional torts, negligence, and strict liability torts. Intentional torts. Intentional torts involve situations in which the defendant desires or knows to a substantial certainty that his act will cause the plaintiff damage. They include battery, assault, false imprisonment, intentional infliction of emotional distress ("IIED"), trespass to land, trespass to chattels, conversion, invasion of privacy, malicious prosecution, abuse of process, fraud, inducing breach of contract, intentional interference with business relations, and defamation of character (libel/slander). Elements. The elements of most intentional torts follow the same pattern: intent, act, result, and causation. Intent. This element typically requires the defendant to desire or know to a substantial certainty that something will occur as a result of his act. Therefore, the term intent, for purposes of this section, always includes either desire or knowledge to a substantial certainty. For an example in battery, Dave shoots a gun into a crowd of people because he is specifically trying to hit someone with a bullet. This element would be satisfied, as David had an actual desire to procure the harm required for this tort. Alternatively, Dave shoots a gun into a crowd of people for some reason and genuinely hopes no one gets hit but knows that it is virtually inevitable that someone will actually get hit. This element would still be satisfied, as David had knowledge to a substantial certainty that harm would result. In contrast, if all that can be said about the defendant's state of mind is that he "should have" known better, he will not be liable for an intentional tort. This situation might occur if, as opposed to the examples above, Dave shoots a gun in a remote part of the desert without looking just for fun, not wanting to hit anyone, but the bullet does hit someone. Dave did not have a desire or knowledge to a substantial certainty that someone would get hit in this situation. He may, however, be liable for some other tort, namely negligence. Transferred intent. Transferred intent is the legal principle that intent can be transferred from one victim or tort to another. [1] In tort law, there are generally five areas in which transferred intent is applicable: battery, assault, false imprisonment, trespass to land, and trespass to chattels. Generally, any intent to cause any one of these five torts which results in the completion of any of the five tortious acts will be considered an intentional act, even if the actual target of the tort is one other than the intended target of the original tort. Act. The element of an act varies by whatever tort is in question but always requires voluntariness. For example, if Dave has a muscle spasm that makes his arm fling out to his side and hit Paula, who is standing next to him, any case that Paula attempts to bring against Dave for battery will fail for lack of the requisite act (which will be discussed in the section on battery, below). The act was not voluntary. Result. This element typically refers to damage, although damage is not required to prevail on certain intentional torts, such as trespass to land. Causation. This element refers to actual cause and proximate cause. It will be treated in its own section. Causes of action. Battery. 
A person commits a battery when he acts either intending to cause a harmful or offensive contact with another or intending to cause another imminent apprehension of such contact and when such contact results. Therefore, there is a variety of ways in which a person can commit a battery, as illustrated by the following examples of defendant Dave and plaintiff Paula. Apprehension is a broader term than fear. If a defendant intends to cause the plaintiff to actually fear a harmful contact, for example, it will therefore always suffice as apprehension, but there are other ways to achieve apprehension as well. Assault. Assault is notably similar to battery. Indeed, the elements of intent and act are identical. The only difference is the result. A person commits an assault when he acts either intending to cause a harmful or offensive contact with another or intending to cause another imminent apprehension of such contact and when such imminent apprehension results. Therefore, there is a variety of ways in which a person can commit an assault. False imprisonment. A person commits false imprisonment when he acts intending to confine another and when confinement actually results that the confinee is either aware of or damaged by. Confinement must typically be within boundaries that the defendant establishes. For example, a person is not confined when he is refused entry to a building, because he is free to leave. In addition, a person is not confined unless the will to leave of an ordinary person in the same situation would be overborne. For example, Dave calls Paula into a room with one door. Dave closes the door and stands in front of it. He tells Paula that if she wants to leave, he will open the door and get out of her way but also threatens to blink twice if she does so. An ordinary person's will to leave would not be overborne by Dave's threat to blink twice. No damage is required in false imprisonment, hence the requirement of a result of awareness "or" damage. For example, Dave calls Paula into a room with one door. Dave closes the door and stands in front of it. He tells Paula that if she wants to leave, he will take out a gun and shoot her. (Note that this "would" overcome the will of an ordinary person to leave.) An hour later, Dave changes his mind and leaves the premises. Paula subsequently leaves and is not physically injured at all. Her awareness of confinement is sufficient to satisfy the element of the result in false imprisonment. Alternatively, Paula is a narcoleptic. She suddenly falls into a deep sleep while feeding the chickens in a barn on Dave's farm in a remote area. Not wanting to move her, Dave locks her in the barn from the outside when he needs to go into town, trying to protect her but also knowing that she won't be able to leave (or call for help) if she wakes up. While Dave is away, the chickens severely scratch Paula's arms, but she does not wake up. Dave returns, unlocks the barn, and successfully wakes up Paula to tend to her wounds. Even though she was unaware of her confinement, she was damaged by it and will have a claim of false imprisonment against Dave. Intentional infliction of emotional distress. A person is liable for intentional infliction of emotional distress (IIED) when he intentionally "or recklessly" engages in extreme and outrageous conduct that is highly likely to cause "severe" emotional distress. 
This is a notable exception to the general rule given above that for almost all intentional torts only desire or knowledge to a substantial certainty will do. IIED also includes recklessness. This still distinguishes it from negligent infliction of emotional distress, though. Extreme and outrageous conduct refers to the act. Severe emotional distress refers to the result. This is another intentional tort for which no damage is ordinarily required. However, some jurisdictions require the accompaniment of physical effects. In other words, emotional distress will not be deemed to exist in those jurisdictions unless there are physical manifestations, such as vomiting or fainting. Trespass to land. A person commits trespass to land when he wrongfully and intentionally enters, or causes a thing or third person to enter, land owned or occupied by another. Trespass to chattel. A person commits trespass to chattel when he acts either intending to dispossess the rightful possessor of a chattel or intending to use or "intermeddle" with the chattel of another and when dispossession of the chattel for a substantial time results, or damage to the chattel results, or physical injury to the rightful possessor results. Conversion. A person commits conversion when he acts intending to exercise "dominion and control" and when interference with the rightful possessor's control results that is so serious that it requires the actor to pay the full value of the chattel to the rightful possessor. An exercise of dominion and control refers to the act. Serious interference refers to the result. Seriousness is determined by the following factors: The remedy for this cause of action not only requires the defendant to pay the plaintiff the full value of the chattel but also is properly considered a forced sale. The plaintiff must tender the defendant the chattel. Therefore, a plaintiff may not elect to pursue this cause of action but instead trespass to chattel, namely when he wants to keep his chattel despite its potential damage. Affirmative defenses. The following are affirmative defenses to intentional torts. Consent. Consent can be a defense to any intentional tort, although lack of consent is occasionally incorporated into the definition of an intentional tort, such as trespass to land. However, lack of consent is not always an essential element to establish a prima facie case in such situations. Therefore, it is properly treated as an affirmative defense. Self-defense. Self-defense is typically a defense to battery. Similar to self-defense is the defense of others. Defense of property. This is typically a defense to trespass to land or trespass to chattels, as it can refer to realty or personalty. Necessity. Necessity is typically a defense to trespass to land. There are two kinds of necessity, private and public. Private necessity. This is a partial privilege. A party who has this privilege is still liable for damage caused. This defense is therefore more important when there is a concomitant issue of whether the opposing party has a valid privilege of defense of property. The following example is derived from an actual Vermont case from 1908 called Ploof v. Putnam. Paula is sailing on a lake when a violent storm suddenly breaks out. She navigates to the nearest dock and quickly ties up her vessel, not damaging the dock at all. The dock belongs to Dave. 
Dave attempts to exercise the privilege of defense of property, as Paula would ordinarily be committing a trespass to land in this situation, and unties the vessel. Paula therefore drifts back away from the shore. Her boat is damaged, and she suffers personal injuries, both as a result of the storm. If Paula had damaged Dave's dock, she would be liable for it, even though she has a valid privilege of private necessity. More importantly, Dave is now liable to Paula for the damage to her boat and for her personal injuries. Because of the private necessity, Paula is not considered a trespasser. So, Dave did not in fact have a valid privilege of defense of property. Ordinarily, for private necessity to be valid, the party attempting to exercise it must not have created the emergency. For example, if Paula intentionally punctures her fuel tank just so she can race over to Dave's dock and tie up, she will not have a valid privilege of private necessity. As such, she would be a trespasser, and Dave would have a valid privilege of defense of property. Public necessity. This is a complete privilege. A party who has this privilege, typically a public official or governmental entity, is not liable for any damage caused. A famous early case on this privilege involved John W. Geary, the first mayor of San Francisco, who made the decision during a major fire to burn down several private residences to establish a fire break. Negligence. Amongst unintentional torts one finds negligence as being the most common source of common law. Most Americans are under the impression that most people can sue for any type of negligence, but it is untrue in most US jurisdictions (partly because negligence is one of the few torts for which ordinary people can and do obtain liability insurance.) It is a form of extracontractual liability that is based upon a failure to comply with the duty of care of a reasonable person, which failure is the actual cause and proximate cause of damages. That is, but for the tortfeasor's act or omission, the damages to the plaintiff would not have been incurred, and the damages were a reasonably foreseeable consequence of the tortious conduct. Some jurisdictions recognize one or more designations less than actual intentional wrongdoing, but more egregious than mere negligence, such as "wanton", "reckless" or "despicable" conduct. A finding in those states that a defendant's conduct was "wanton," "reckless" or "despicable", rather than merely negligent, can be significant because certain defenses, such as contributory negligence, are often unavailable when such conduct is the cause of the damages. Breach of duty. Breach is ordinarily established by showing that the defendant failed to exercise reasonable care. Some courts use the terms ordinary care or prudent care instead. Conduct is typically considered to be unreasonable when the disadvantages outweigh the advantages. Judge Learned Hand famously reduced this to algebraic form in "United States v. Carroll Towing Co.": Where formula_0 which means that if the burden of exercising more care is less than the probability of damage or harm multiplied by the severity of the expected loss, and a person fails to undertake the burden, he is not exercising reasonable care and is thus breaching his duty to do so (assuming he has one). In other words, the burden of prevention is less than the probability that the injury will occur multiplied by the gravity of the harm/injury. 
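As a toy illustration of the Hand calculus B &lt; PL (the dollar figures below are entirely hypothetical and carry no legal weight):

```python
# Toy illustration of the Hand formula B < P * L (all figures hypothetical).
def breaches_duty(burden, probability, loss):
    """Under the Carroll Towing calculus, failing to take a precaution is a
    breach when its cost is less than the expected loss it would prevent."""
    return burden < probability * loss

# E.g. a $500 precaution against a 1% chance of a $100,000 injury:
print(breaches_duty(burden=500, probability=0.01, loss=100_000))   # True
```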
Under this formula, duty changes as circumstances change—if the cost of prevention increases, then the duty to prevent decreases; if the likelihood of damage or the severity of the potential damage increases, then duty to prevent increases. There are other ways of establishing breach, as well. Violation of statute. This is also known as negligence per se. An incident would not have happened if there was not a breach. Breach can be shown in most jurisdictions if a defendant violates a statute that pertains to safety and the purpose of which is to prevent the result of the case. Note that this is an alternative way to show breach. A violation of statute will not have occurred in every case. Therefore, just because it cannot be shown does not mean that there has been no breach. Even if it is attempted to be shown but fails, there may be other bases of breach. Excuse. Occasionally, there is a valid excuse for violating a safety statute, namely when it is safer or arguably safer to violate than to comply with it. This happened in "Tedla v. Ellman". A statute required pedestrians using roadways to walk against traffic. At the time in question, there was heavy traffic going the opposite direction as the plaintiff. Therefore, the plaintiff would have had to walk past many more vehicles, arguably increasing his chances of being hit. So, the plaintiff walked with traffic on the other side of the road, thus violating the statute. There were far fewer vehicles travelling that direction, but the plaintiff was hit anyway. Even though the purpose of the statute was to prevent precisely the result that occurred, the plaintiff nonetheless prevailed because of a valid excuse for violating the statute, namely that it was probably safer not to comply. Violation of custom. Breach can be shown in most jurisdictions if a defendant violates a custom that is widespread and itself reasonable. For example, where ten percent of a certain industry does a certain thing, it probably will not be considered a custom for purposes of breach in negligence. Alternatively, if 90 percent of a certain industry does a certain thing, but the thing is inherently unsafe, and it is upholding the custom as a cost-saving measure, violation of that custom (doing something safer) will not constitute breach. As with violation of statute, this is an alternative way to show breach. Therefore, just because it cannot be shown, or is attempted to be shown but fails, does not mean that there has been no breach. There may be other ways of showing breach. Res ipsa loquitur. This is a Latin phrase that means "the thing speaks for itself." It is a rare alternative basis of breach. Ordinarily, it only applies when the plaintiff has little or limited access to the evidence of negligent conduct. Res ipsa loquitur requires that the defendant have exclusive control over the thing that causes the injury and that the act be one that would not ordinarily occur without negligence. Likely defendant negligence was responsible and plaintiff was not cause. Causation. Causation is typically a bigger issue in negligence cases than intentional torts. However, as mentioned previously, it is an element of any tort. The defendant's act must be an actual cause and a proximate cause of the result in a particular cause of action. Actual cause. Actual cause has historically been determined by the "but for" test. If the result would not have occurred but for the defendant's act, the act is an actual cause of the result. 
Several other tests have been created to supplement this general rule, however, especially to deal with cases in which the plaintiff suffers great harm, yet because multiple acts by multiple defendants, the but for test is unhelpful. This situation occurred in the famous case of "Summers v. Tice". For example, Dan and Dave both negligently fire their shotguns at Paula. Paula is struck by only one pellet and it is impossible to determine which gun it was fired from. Using the but for test alone, Dan and Dave can both escape liability. Dan can say that but for his own negligence, Paula still might have suffered the same harm. Dave can make the same argument. As a matter of public policy, most courts will nonetheless hold Dan and Dave jointly and severally liable. The act of each defendant is therefore said to be an actual cause, even if this is a fiction. A similar situation arises when it is impossible to show that the defendant(s) was/were negligent at all. This almost inevitably arises in cases also involving res ipsa loquitor. See Ybarra v. Spangard. For example, making the facts of that case more extreme, Paula goes to the hospital for an appendectomy. She wakes up, and finds her left arm has also been amputated for no apparent reason. (Note that this would implicate multiple issues and other causes of action than negligence.) For purposes of actual cause, unless there is evidence or an admission of negligent conduct, Paula will be unable to show an actual cause. In this situation too, most courts will hold all the defendants that Paula names (possibly everyone on the medical staff that was in the room during her surgery) jointly and severally liable. The act of each defendant is likewise said to be an actual cause, even if this is a fiction. Substantial factor test. Another test deals with cases in which there are two actual causes but only one is negligent. For example, there are three equidistant points, A, B, and C. Paula's house is at point A. Dave negligently ignites a fire at point B. Lightning simultaneously strikes point C, starting a second fire. The fire at point B and the fire at point C both burn towards point A. Paula's house burns down. Unlike "Summers v. Tice", there is only one defendant in this situation. Most courts will still hold Dave's negligence to be an actual cause, as his conduct was a "substantial factor" in causing Paula's damage. This is sometimes called the substantial factor test. Proximate cause. There are many tests for determining whether an actual cause is a proximate one. Most involve some form of foreseeability. Justice Cardozo has two factors to determine if there was a proximate cause between the plaintiff's injury and the defendant's breach of duty: Justice Andrews has several factors to determine if there was a proximate cause between the plaintiff's injury and the defendant's breach of duty: Other causes of action. Interspousal Tort Suits. Historically at common law, spouses were not allowed to sue one another at all. Initially, this was due to the doctrine of coverture. Even after this legal doctrine was abandoned with the adoption of the Married Women's Property Acts, many courts disallowed lawsuits between spouses other than divorce or criminal proceedings for the fear that it would disrupt marital harmony. From the 1860s until 1913, courts completely rejected the notion of interspousal liability. Then, in 1914, one woman was allowed to bring a civil suit against her husband for assault and false imprisonment. 
Between 1914 and 1920, there were seven state supreme courts that allowed spouses to sue one another for claims such as assault and battery, wrongful imprisonment, wrongful death, and infliction of venereal disease. However, recognition of spouses' ability to sue one another stalled around 1921. Scholars suggest this change in direction is due to the rise of tort suits arising out of automobile accidents. Courts declined to extend spouses the ability to sue each another after car accidents for fear of collusion and insurance fraud. This fear stems from the fact that both sides of a negligent car accident suit between spouses want the injured party to recover. Courts then blurred the lines between willful and negligent tort suits to disallow any interspousal tort suits. This argument contrasts the popular narrative that patriarchal restrictions were responsible for interspousal immunity from suit. Damages. Punitive damages (sums intended to punish the defendant) may be awarded in addition to actual damages intended to compensate the plaintiff. Punitive damage awards generally require a higher showing than mere negligence, but lower than intention. For instance, grossly negligent, reckless, or outrageous conduct may be grounds for an award of punitive damages. These punitive damages awards can be quite substantial in some cases. Strict liability. Strict liability torts are brought for injuries resulting from ultrahazardous activities, for which the defendant will be held liable even if there was no negligence on his/her part. Strict liability also applies to some types of product liability claims and to copyright infringement and some trademark cases. Some statutory torts are also strict liability, including many environmental torts. The term "strict liability" refers to the fact that the tortfeasor's liability is not premised on their culpable state of mind (whether they knew or intended to accomplish the wrongful act, or violated a standard of care by doing so,) but, instead, strictly on the conduct itself or its result. Product liability. Product liability refers to the liability of manufacturers, wholesalers and retailers for unreasonably dangerous products. Federal torts. Although federal courts often hear tort cases arising out of common law or state statutes, there are relatively few tort claims that arise exclusively as a result of federal law. The most common federal tort claim is the 42 U.S.C. § 1983 remedy for violation of one's civil rights under color of federal or state law, which can be used to sue for anything from a free speech claim to use of excessive force by the police. Tort claims arising out of injuries occurring on vessels on navigable waters of the United States fall under federal admiralty jurisdiction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B<PL" } ]
https://en.wikipedia.org/wiki?curid=840644
8406584
COSMO solvation model
COSMO (COnductor-like Screening MOdel) is a calculation method for determining the electrostatic interaction of a molecule with a solvent. COSMO is a dielectric continuum model (a.k.a. continuum solvation model). These models can be used in computational chemistry to model solvation effects. COSMO has become a popular method of these solvation models in recent years. The COSMO formalism is similar to the method proposed earlier by Hoshi et al. The COSMO approach is based – as many other dielectric continuum models – on the surface segmentation of a molecule surface (usually referred to as 'solvent accessible surface' SAS approach). Continuum solvation models – such as COSMO – treat each solvent as a continuum with a permittivity "formula_0". Continuum solvation models approximate the solvent by a dielectric continuum, surrounding the solute molecules outside of a molecular cavity. In most cases it is constructed as an assembly of atom-centered spheres with radii approximately 20% larger than the Van der Waals radius. For the actual calculation the cavity surface is approximated by segments, e.g., hexagons, pentagons, or triangles. Unlike other continuum solvation models, COSMO derives the polarization charges of the continuum, caused by the polarity of the solute, from a scaled-conductor approximation. If the solvent were an ideal conductor the electric potential on the cavity surface must disappear. If the distribution of the electric charge in the molecule is known, e.g. from quantum chemistry, then it is possible to calculate the charge formula_1 on the surface segments. For solvents with finite dielectric constant this charge "formula_2" is lower by approximately a factor formula_3: formula_4 The factor formula_3 is approximately formula_5 where the value of formula_6 should be set to 0.5 for neutral molecules and to 0.0 for ions, see original derivation. The value of formula_6 is erroneously set to 0 in the popular C-PCM reimplementation of COSMO in Gaussian. From the thus determined solvent charges formula_2 and the known charge distribution of the molecule, the energy of the interaction between the solvent and the solute molecule can be calculated. The COSMO method can be used for all methods in theoretical chemistry where the charge distribution of a molecule can be determined, for example semiempirical calculations, Hartree–Fock-method calculations or density functional theory (quantum physics) calculations. Variants and implementations. COSMO has been implemented in a number of quantum chemistry or semi-empirical codes such as ADF, GAMESS-US, Gaussian, MOPAC, NWChem, TURBOMOLE, and Q-Chem. A COSMO version of the polarizable continuum model PCM has also been developed . Depending on the implementation, the details of the cavity construction and the used radii, the segments representing the molecule surface and the formula_6 value for the dielectric scaling function formula_3 may vary –which at times causes problems regarding the reproducibility of published results. Comparison with other methods. While models based on the multipole expansion of the charge distribution of a molecule are limited to small, quasi-spherical or ellipsoidal molecules, the COSMO method has the advantage (as many other dielectric continuum models) that it can be applied to large and irregularly formed molecular structures. In contrast to the polarizable continuum model (PCM), which uses the exact dielectric boundary conditions, the COSMO method uses the approximative scaling function formula_3. 
Though the scaling is an approximation, it turns out to provide a more accurate description of the so-called outlying charge, reducing the corresponding error. A comparison of COSMO with the integral equation formalism PCM (IEFPCM), which combines the exact dielectric boundary conditions with a reduced outlying-charge error, showed that the differences between the methods are small compared with the deviations of both from experimental solvation data. The errors introduced by treating the solvent as a continuum, and thus neglecting effects such as hydrogen bonding and reorientation, therefore matter more for reproducing experimental data than the details of the particular continuum solvation method. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
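A minimal sketch of the scaled-conductor idea in Python (the dielectric constant for water is a standard literature value, and the segment charges are illustrative numbers, not the output of any COSMO implementation):

```python
import numpy as np

def cosmo_scaling(eps, x=0.5):
    """Dielectric scaling factor f(eps) = (eps - 1) / (eps + x).

    x = 0.5 for neutral solutes and 0.0 for ions, as described above.
    """
    return (eps - 1.0) / (eps + x)

# Water at room temperature has eps of roughly 78.4 (literature value).
eps_water = 78.4
f = cosmo_scaling(eps_water)
print(f)    # about 0.981, close to the ideal-conductor limit of 1

# Screening charges q* from an ideal-conductor calculation are scaled down
# by f for a solvent with finite dielectric constant: q = f(eps) * q*.
q_star = np.array([-0.12, 0.05, 0.07])   # hypothetical segment charges, in e
q = f * q_star
print(q)
```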
[ { "math_id": 0, "text": "\\varepsilon" }, { "math_id": 1, "text": "q^*" }, { "math_id": 2, "text": "q" }, { "math_id": 3, "text": "f(\\varepsilon)" }, { "math_id": 4, "text": "q = f(\\varepsilon) q^*." }, { "math_id": 5, "text": "f(\\varepsilon)=\\frac{\\varepsilon-1}{\\varepsilon+x}," }, { "math_id": 6, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=8406584
8407
Dodecahedron
Polyhedron with 12 faces In geometry, a dodecahedron (from grc " ' ()"; from " ' ()" 'twelve' and " ' ()" 'base, seat, face') or duodecahedron is any polyhedron with twelve flat faces. The most familiar dodecahedron is the regular dodecahedron with regular pentagons as faces, which is a Platonic solid. There are also three regular star dodecahedra, which are constructed as stellations of the convex form. All of these have icosahedral symmetry, order 120. Some dodecahedra have the same combinatorial structure as the regular dodecahedron (in terms of the graph formed by its vertices and edges), but their pentagonal faces are not regular: The pyritohedron, a common crystal form in pyrite, has pyritohedral symmetry, while the tetartoid has tetrahedral symmetry. The rhombic dodecahedron can be seen as a limiting case of the pyritohedron, and it has octahedral symmetry. The elongated dodecahedron and trapezo-rhombic dodecahedron variations, along with the rhombic dodecahedra, are space-filling. There are numerous other dodecahedra. While the regular dodecahedron shares many features with other Platonic solids, one unique property of it is that one can start at a corner of the surface and draw an infinite number of straight lines across the figure that return to the original point without crossing over any other corner. Regular dodecahedron. The convex regular dodecahedron is one of the five regular Platonic solids and can be represented by its Schläfli symbol {5, 3}. The dual polyhedron is the regular icosahedron {3, 5}, having five equilateral triangles around each vertex. The convex regular dodecahedron also has three stellations, all of which are regular star dodecahedra. They form three of the four Kepler–Poinsot polyhedra. They are the small stellated dodecahedron {5/2, 5}, the great dodecahedron {5, 5/2}, and the great stellated dodecahedron {5/2, 3}. The small stellated dodecahedron and great dodecahedron are dual to each other; the great stellated dodecahedron is dual to the great icosahedron {3, 5/2}. All of these regular star dodecahedra have regular pentagonal or pentagrammic faces. The convex regular dodecahedron and great stellated dodecahedron are different realisations of the same abstract regular polyhedron; the small stellated dodecahedron and great dodecahedron are different realisations of another abstract regular polyhedron. Other pentagonal dodecahedra. In crystallography, two important dodecahedra can occur as crystal forms in some symmetry classes of the cubic crystal system that are topologically equivalent to the regular dodecahedron but less symmetrical: the pyritohedron with pyritohedral symmetry, and the tetartoid with tetrahedral symmetry: Pyritohedron. A pyritohedron is a dodecahedron with pyritohedral (Th) symmetry. Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices (see figure). However, the pentagons are not constrained to be regular, and the underlying atomic arrangement has no true fivefold symmetry axis. Its 30 edges are divided into two sets – containing 24 and 6 edges of the same length. The only axes of rotational symmetry are three mutually perpendicular twofold axes and four threefold axes. Although regular dodecahedra do not exist in crystals, the pyritohedron form occurs in the crystals of the mineral pyrite, and it may be an inspiration for the discovery of the regular Platonic solid form. 
The true regular dodecahedron can occur as a shape for quasicrystals (such as holmium–magnesium–zinc quasicrystal) with icosahedral symmetry, which includes true fivefold rotation axes. Crystal pyrite. The name "crystal pyrite" comes from one of the two common crystal habits shown by pyrite (the other one being the cube). In pyritohedral pyrite, the faces have a Miller index of (210), which means that the dihedral angle is 2·arctan(2) ≈ 126.87° and each pentagonal face has one angle of approximately 121.6° in between two angles of approximately 106.6° and opposite two angles of approximately 102.6°. The following formulas show the measurements for the face of a perfect crystal (which is rarely found in nature). formula_0 formula_1 formula_2 Cartesian coordinates. The eight vertices of a cube have the coordinates (±1, ±1, ±1). The coordinates of the 12 additional vertices are (0, ±(1 + "h"), ±(1 − "h"2)), (±(1 + "h"), ±(1 − "h"2), 0) and (±(1 − "h"2), 0, ±(1 + "h")). "h" is the height of the wedge-shaped "roof" above the faces of that cube with edge length 2. An important case is "h" = (a quarter of the cube edge length) for perfect natural pyrite (also the pyritohedron in the Weaire–Phelan structure). Another one is "h" = = 0.618... for the regular dodecahedron. See section "Geometric freedom" for other cases. Two pyritohedra with swapped nonzero coordinates are in dual positions to each other like the dodecahedra in the compound of two dodecahedra. Geometric freedom. The pyritohedron has a geometric degree of freedom with limiting cases of a cubic convex hull at one limit of collinear edges, and a rhombic dodecahedron as the other limit as 6 edges are degenerated to length zero. The regular dodecahedron represents a special intermediate case where all edges and angles are equal. It is possible to go past these limiting cases, creating concave or nonconvex pyritohedra. The "endo-dodecahedron" is concave and equilateral; it can tessellate space with the convex regular dodecahedron. Continuing from there in that direction, we pass through a degenerate case where twelve vertices coincide in the centre, and on to the regular great stellated dodecahedron where all edges and angles are equal again, and the faces have been distorted into regular pentagrams. On the other side, past the rhombic dodecahedron, we get a nonconvex equilateral dodecahedron with fish-shaped self-intersecting equilateral pentagonal faces. Tetartoid. A tetartoid (also tetragonal pentagonal dodecahedron, pentagon-tritetrahedron, and tetrahedric pentagon dodecahedron) is a dodecahedron with chiral tetrahedral symmetry (T). Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices. However, the pentagons are not regular and the figure has no fivefold symmetry axes. Although regular dodecahedra do not exist in crystals, the tetartoid form does. The name tetartoid comes from the Greek root for one-fourth because it has one fourth of full octahedral symmetry, and half of pyritohedral symmetry. The mineral cobaltite can have this symmetry form. Abstractions sharing the solid's topology and symmetry can be created from the cube and the tetrahedron. In the cube each face is bisected by a slanted edge. In the tetrahedron each edge is trisected, and each of the new vertices connected to a face center. (In Conway polyhedron notation this is a gyro tetrahedron.) Cartesian coordinates. 
The following points are vertices of a tetartoid pentagon under tetrahedral symmetry: ("a", "b", "c"); (−"a", −"b", "c"); (−, −, ); (−"c", −"a", "b"); (−, , ), under the following conditions: 0 ≤ "a" ≤ "b" ≤ "c", "n" = "a"2"c" − "bc"2, "d"1 = "a"2 − "ab" + "b"2 + "ac" − 2"bc", "d"2 = "a"2 + "ab" + "b"2 − "ac" − 2"bc", "nd"1"d"2 ≠ 0. Geometric freedom. The regular dodecahedron is a tetartoid with more than the required symmetry. The triakis tetrahedron is a degenerate case with 12 zero-length edges. (In terms of the colors used above this means, that the white vertices and green edges are absorbed by the green vertices.) Dual of triangular gyrobianticupola. A lower symmetry form of the regular dodecahedron can be constructed as the dual of a polyhedron constructed from two triangular anticupola connected base-to-base, called a "triangular gyrobianticupola." It has D3d symmetry, order 12. It has 2 sets of 3 identical pentagons on the top and bottom, connected 6 pentagons around the sides which alternate upwards and downwards. This form has a hexagonal cross-section and identical copies can be connected as a partial hexagonal honeycomb, but all vertices will not match. Rhombic dodecahedron. The "rhombic dodecahedron" is a zonohedron with twelve rhombic faces and octahedral symmetry. It is dual to the quasiregular cuboctahedron (an Archimedean solid) and occurs in nature as a crystal form. The rhombic dodecahedron packs together to fill space. The "rhombic dodecahedron" can be seen as a degenerate pyritohedron where the 6 special edges have been reduced to zero length, reducing the pentagons into rhombic faces. The rhombic dodecahedron has several stellations, the first of which is also a parallelohedral spacefiller. Another important rhombic dodecahedron, the Bilinski dodecahedron, has twelve faces congruent to those of the rhombic triacontahedron, i.e. the diagonals are in the ratio of the golden ratio. It is also a zonohedron and was described by Bilinski in 1960. This figure is another spacefiller, and can also occur in non-periodic spacefillings along with the rhombic triacontahedron, the rhombic icosahedron and rhombic hexahedra. Other dodecahedra. There are 6,384,634 topologically distinct "convex" dodecahedra, excluding mirror images—the number of vertices ranges from 8 to 20. (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.) Topologically distinct dodecahedra (excluding pentagonal and rhombic forms) Practical usage. Armand Spitz used a dodecahedron as the "globe" equivalent for his Digital Dome planetarium projector, based upon a suggestion from Albert Einstein. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
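A short Python sketch that generates the 20 pyritohedron vertices from the cube-plus-roof coordinates given above; setting the roof height h to 1/φ ≈ 0.618 recovers the regular dodecahedron, which the equal circumradii confirm (the check itself is an illustration, not taken from the article):

```python
import itertools
import numpy as np

def pyritohedron_vertices(h):
    """Vertices of a pyritohedron built on the cube (+-1, +-1, +-1),
    following the coordinate description given above."""
    cube = [np.array(v) for v in itertools.product((-1.0, 1.0), repeat=3)]
    extra = []
    for s1, s2 in itertools.product((-1.0, 1.0), repeat=2):
        extra.append(np.array([0.0, s1 * (1 + h), s2 * (1 - h * h)]))
        extra.append(np.array([s1 * (1 + h), s2 * (1 - h * h), 0.0]))
        extra.append(np.array([s1 * (1 - h * h), 0.0, s2 * (1 + h)]))
    return cube + extra

# h = 1/phi (about 0.618) gives the regular dodecahedron: every vertex then
# lies at the same distance sqrt(3) from the centre.
h = 2.0 / (1.0 + np.sqrt(5.0))
verts = pyritohedron_vertices(h)
radii = sorted(np.linalg.norm(v) for v in verts)
print(len(verts), radii[0], radii[-1])   # 20 vertices, equal radii
```

Other values of h in the same routine trace out the geometric freedom described above, from the cube limit toward the rhombic dodecahedron limit.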
[ { "math_id": 0, "text": "\\text{Height} = \\frac{\\sqrt{5}}{2} \\cdot \\text{Long side}" }, { "math_id": 1, "text": "\\text{Width} = \\frac{4}{3} \\cdot \\text{Long side}" }, { "math_id": 2, "text": "\\text{Short sides} = \\sqrt{\\frac{7}{12}} \\cdot \\text{Long side}" } ]
https://en.wikipedia.org/wiki?curid=8407
840704
Involute
Curve traced by a string as it is unwrapped from another curve In mathematics, an involute (also known as an evolvent) is a particular type of curve that is dependent on another shape or curve. An involute of a curve is the locus of a point on a piece of taut string as the string is either unwrapped from or wrapped around the curve. The evolute of an involute is the original curve. It is generalized by the roulette family of curves. That is, the involutes of a curve are the roulettes of the curve generated by a straight line. The notions of the involute and evolute of a curve were introduced by Christiaan Huygens in his work titled "Horologium oscillatorium sive de motu pendulorum ad horologia aptato demonstrationes geometricae" (1673), where he showed that the involute of a cycloid is still a cycloid, thus providing a method for constructing the cycloidal pendulum, which has the useful property that its period is independent of the amplitude of oscillation. Involute of a parameterized curve. Let formula_0 be a regular curve in the plane with its curvature nowhere 0 and formula_1, then the curve with the parametric representation formula_2 is an "involute" of the given curve. Adding an arbitrary but fixed number formula_3 to the integral formula_4 results in an involute corresponding to a string extended by formula_3 (like a ball of wool yarn having some length of thread already hanging before it is unwound). Hence, the involute can be varied by constant formula_5 and/or adding a number to the integral (see Involutes of a semicubic parabola). If formula_6 one gets formula_7 Properties of involutes. In order to derive properties of a regular curve it is advantageous to suppose the arc length formula_8 to be the parameter of the given curve, which lead to the following simplifications: formula_9 and formula_10, with formula_11 the curvature and formula_12 the unit normal. One gets for the involute: formula_13 and formula_14 and the statement: and from formula_17 follows: The family of involutes and the family of tangents to the original curve makes up an orthogonal coordinate system. Consequently, one may construct involutes graphically. First, draw the family of tangent lines. Then, an involute can be constructed by always staying orthogonal to the tangent line passing the point. Cusps. This section is based on. There are generically two types of cusps in involutes. The first type is at the point where the involute touches the curve itself. This is a cusp of order 3/2. The second type is at the point where the curve has an inflection point. This is a cusp of order 5/2. This can be visually seen by constructing a map formula_23 defined by formula_24where formula_25 is the arclength parametrization of the curve, and formula_26 is the slope-angle of the curve at the point formula_25. This maps the 2D plane into a surface in 3D space. For example, this maps the circle into the hyperboloid of one sheet. By this map, the involutes are obtained in a three-step process: map formula_27 to formula_28, then to the surface in formula_29, then project it down to formula_28 by removing the z-axis: formula_30where formula_31 is any real constant. Since the mapping formula_32 has nonzero derivative at all formula_33, cusps of the involute can only occur where the derivative of formula_32 is vertical (parallel to the z-axis), which can only occur where the surface in formula_29 has a vertical tangent plane. 
Generically, the surface has vertical tangent planes at only two cases: where the surface touches the curve, and where the curve has an inflection point. cusp of order 3/2. For the first type, one can start by the involute of a circle, with equationformula_34then set formula_35, and expand for small formula_36, to obtainformula_37thus giving the order 3/2 curve formula_38, a semicubical parabola. cusp of order 5/2. For the second type, consider the curve formula_39. The arc from formula_40 to formula_41 is of length formula_42, and the tangent at formula_41 has angle formula_43. Thus, the involute starting from formula_40 at distance formula_44 has parametric formulaformula_45Expand it up to order formula_46, we obtainformula_47which is a cusp of order 5/2. Explicitly, one may solve for the polynomial expansion satisfied by formula_48:formula_49or formula_50which clearly shows the cusp shape. Setting formula_51, we obtain the involute passing the origin. It is special as it contains no cusp. By serial expansion, it has parametric equationformula_52or formula_53 Examples. Involutes of a circle. For a circle with parametric representation formula_54, one has formula_55. Hence formula_56, and the path length is formula_57. Evaluating the above given equation of the involute, one gets formula_58 for the parametric equation of the involute of the circle. The formula_5 term is optional; it serves to set the start location of the curve on the circle. The figure shows involutes for formula_59 (green), formula_35 (red), formula_60 (purple) and formula_61 (light blue). The involutes look like Archimedean spirals, but they are actually not. The arc length for formula_62 and formula_63 of the involute is formula_64 Involutes of a semicubic parabola. The parametric equation formula_65 describes a semicubical parabola. From formula_66 one gets formula_67 and formula_68. Extending the string by formula_69 extensively simplifies further calculation, and one gets formula_70 Eliminating t yields formula_71 showing that this involute is a parabola. The other involutes are thus parallel curves of a parabola, and are not parabolas, as they are curves of degree six (See ). Involutes of a catenary. For the catenary formula_72, the tangent vector is formula_73, and, as formula_74 its length is formula_75. Thus the arc length from the point (0, 1) is formula_76 Hence the involute starting from (0, 1) is parametrized by formula_77 and is thus a tractrix. The other involutes are not tractrices, as they are parallel curves of a tractrix. Involutes of a cycloid. The parametric representation formula_78 describes a cycloid. From formula_79, one gets (after having used some trigonometric formulas) formula_80 and formula_81 Hence the equations of the corresponding involute are formula_82 formula_83 which describe the shifted red cycloid of the diagram. Hence formula_85 Involute and evolute. The evolute of a given curve formula_86 consists of the curvature centers of formula_86. Between involutes and evolutes the following statement holds: "A curve is the evolute of any of its involutes." Application. The most common profiles of modern gear teeth are involutes of a circle. In an involute gear system the teeth of two meshing gears contact at a single instantaneous point that follows along a single straight line of action. The forces exerted the contacting teeth exert on each other also follow this line, and are normal to the teeth. 
The involute gear system maintaining these conditions follows the fundamental law of gearing: the ratio of angular velocities between the two gears must remain constant throughout. With teeth of other shapes, the relative speeds and forces rise and fall as successive teeth engage, resulting in vibration, noise, and excessive wear. For this reason, nearly all modern planar gear systems use either the involute or the related cycloidal gear system. The involute of a circle is also an important shape in gas compression, as a scroll compressor can be built based on this shape. Scroll compressors produce less noise than conventional compressors and have proven to be quite efficient. The High Flux Isotope Reactor uses involute-shaped fuel elements, since these allow a constant-width channel between them for coolant. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
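The parametric construction above is easy to check numerically. The following Python sketch (an illustration only; the function involute and its arguments are ad-hoc names, not an established API) evaluates the general involute formula for a circle by numerically integrating the speed, compares the result with the closed-form circle involute quoted in the examples, and verifies the arc-length formula L = (r/2)t^2 for the unwound portion.

import numpy as np

def involute(curve, dcurve, ts):
    """Numerically evaluate the involute of a parametrized plane curve.

    curve(t) and dcurve(t) return arrays of shape (2,); the string is unwound
    starting at the first sample ts[0].  The speed |c'(t)| is integrated with
    the trapezoidal rule, so ts should be reasonably dense.
    """
    pts = np.array([curve(t) for t in ts])           # points c(t)
    vel = np.array([dcurve(t) for t in ts])          # velocities c'(t)
    speed = np.linalg.norm(vel, axis=1)              # |c'(t)|
    # cumulative arc length measured from ts[0]
    s = np.concatenate(([0.0],
                        np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(ts))))
    unit_tangent = vel / speed[:, None]
    return pts - unit_tangent * s[:, None]           # c(t) - T(t) * arclength

# Example: involute of the unit circle, unwound from t = 0.
r = 1.0
ts = np.linspace(0.0, 4.0, 2001)
num = involute(lambda t: np.array([r * np.cos(t), r * np.sin(t)]),
               lambda t: np.array([-r * np.sin(t), r * np.cos(t)]),
               ts)

# Closed form quoted in the text (a = 0): X = r(cos t + t sin t), Y = r(sin t - t cos t)
exact = np.column_stack([r * (np.cos(ts) + ts * np.sin(ts)),
                         r * (np.sin(ts) - ts * np.cos(ts))])
print(np.max(np.abs(num - exact)))        # agreement to rounding error

# Arc length of the involute up to t2 = 4 should be r * t2**2 / 2 = 8.
seg = np.diff(num, axis=0)
print(np.sum(np.linalg.norm(seg, axis=1)), r * ts[-1]**2 / 2)   # both close to 8.0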
[ { "math_id": 0, "text": " \\vec c(t),\\; t\\in [t_1,t_2] " }, { "math_id": 1, "text": "a\\in (t_1,t_2)" }, { "math_id": 2, "text": "\\vec C_a(t)=\\vec c(t) -\\frac{\\vec c'(t)}{|\\vec c'(t)|}\\; \\int_a^t|\\vec c'(w)|\\; dw " }, { "math_id": 3, "text": "l_0" }, { "math_id": 4, "text": " \\Bigl(\\int_a^t|\\vec c'(w)|\\; dw\\Bigr) " }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "\\vec c(t)=(x(t),y(t))^T" }, { "math_id": 7, "text": "\\begin{align}\nX(t) &= x(t) - \\frac{x'(t)}{\\sqrt{x'(t)^2 + y'(t)^2}} \\int_a^t \\sqrt{x'(w)^2 + y'(w)^2} \\,dw \\\\\nY(t) &= y(t) - \\frac{y'(t)}{\\sqrt{x'(t)^2 + y'(t)^2}} \\int_a^t \\sqrt{x'(w)^2 + y'(w)^2} \\,dw \\; .\n\\end{align}" }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "\\;|\\vec c'(s)|=1\\;" }, { "math_id": 10, "text": "\\;\\vec c''(s)=\\kappa(s)\\vec n(s)\\;" }, { "math_id": 11, "text": "\\kappa" }, { "math_id": 12, "text": "\\vec n" }, { "math_id": 13, "text": "\\vec C_a(s)=\\vec c(s) -\\vec c'(s)(s-a)\\ " }, { "math_id": 14, "text": "\\vec C_a'(s)=-\\vec c''(s)(s-a)=-\\kappa(s)\\vec n(s)(s-a)\\; " }, { "math_id": 15, "text": " \\vec C_a(a)" }, { "math_id": 16, "text": "| \\vec C_a'(a)|=0" }, { "math_id": 17, "text": "\\; \\vec C_a'(s)\\cdot\\vec c'(s)=0 \\;" }, { "math_id": 18, "text": "\\vec C_a(s)" }, { "math_id": 19, "text": "\\vec c(s)" }, { "math_id": 20, "text": "\\vec C_a(s)=\\vec C_0(s)+a\\vec c'(s)" }, { "math_id": 21, "text": "\\vec c'(s)" }, { "math_id": 22, "text": "\\vec C_0(s)" }, { "math_id": 23, "text": "f: \\R^2 \\to \\R^3" }, { "math_id": 24, "text": "(s, t) \\mapsto (x(s) + t\\cos(\\theta), y(s) + t\\sin(\\theta), t)" }, { "math_id": 25, "text": "(x(s), y(s))" }, { "math_id": 26, "text": "\\theta" }, { "math_id": 27, "text": "\\R" }, { "math_id": 28, "text": "\\R^2" }, { "math_id": 29, "text": "\\R^3" }, { "math_id": 30, "text": "s \\mapsto (s, l- s) \\mapsto f(s, l- s) \\mapsto (f(s, l- s)_x, f(s, l- s)_y)" }, { "math_id": 31, "text": "l" }, { "math_id": 32, "text": "s \\mapsto f(s, l-s)" }, { "math_id": 33, "text": "s\\in \\R" }, { "math_id": 34, "text": "\\begin{align}\nX(t) &= r(\\cos t + (t - a)\\sin t)\\\\\nY(t) &= r(\\sin t - (t - a)\\cos t)\n\\end{align}" }, { "math_id": 35, "text": "a = 0" }, { "math_id": 36, "text": "t" }, { "math_id": 37, "text": "\\begin{align}\nX(t) &= r + r t^2/2 + O(t^4)\\\\\nY(t) &= rt^3/3 + O(t^5)\n\\end{align}" }, { "math_id": 38, "text": "Y^2 - \\frac{8}{9r} (X-r)^{3} + O(Y^{8/3}) = 0 " }, { "math_id": 39, "text": "y = x^3" }, { "math_id": 40, "text": "x= 0" }, { "math_id": 41, "text": "x = s" }, { "math_id": 42, "text": "\\int_0^s \\sqrt{1 + (3t^2)^2}dt = s + \\frac{9}{10} s^5 - \\frac 98 s^9 + O(s^{13})" }, { "math_id": 43, "text": "\\theta = \\arctan(3s^2)" }, { "math_id": 44, "text": "L" }, { "math_id": 45, "text": "\\begin{cases}\n x(s) = s + (L-s-\\frac{9}{10}s^5 + \\cdots)\\cos\\theta \\\\\n y(s) = s^3 + (L-s-\\frac{9}{10}s^5 + \\cdots)\\sin\\theta\n\\end{cases}" }, { "math_id": 46, "text": "s^5" }, { "math_id": 47, "text": "\\begin{cases}\n x(s) = L - \\frac 92 L s^4 + (\\frac 92 L - \\frac{9}{10}) s^5 + O(s^6)\\\\\n y(s) = 3Ls^2 - 2 s^3 + O(s^6)\n\\end{cases}" }, { "math_id": 48, "text": "x, y" }, { "math_id": 49, "text": "\\left(x - L + \\frac{y^2}{2L} \\right)^2 - \\left(\\frac 92 L + \\frac{51}{10} \\right)^2 \\left(\\frac{y}{3L} \\right)^5 + O(s^{11}) = 0" }, { "math_id": 50, "text": "x = L - \\frac{y^2}{2L} \\pm \\left(\\frac 92 L + \\frac{51}{10} \\right) \\left(\\frac{y}{3L} \\right)^{2.5} + O(y^{2.75}),\\quad \\quad y \\geq 0 " }, { "math_id": 51, 
"text": "L=0" }, { "math_id": 52, "text": "\\begin{cases}\n x(s) = \\frac{18}{5} s^5 - \\frac{126}{5} s^9 + O(s^{13}) \\\\\n y(s) = -2s^3 + \\frac{54}{5} s^7 - \\frac{318}{5} s^{11} + O(s^{15})\n\\end{cases}" }, { "math_id": 53, "text": "x = -\\frac{18}{5 \\cdot 2^{1/3}}y^{5/3} + O(y^3)" }, { "math_id": 54, "text": "(r\\cos(t), r\\sin(t))" }, { "math_id": 55, "text": "\\vec c'(t) = (-r\\sin t, r\\cos t)" }, { "math_id": 56, "text": "|\\vec c'(t)| = r" }, { "math_id": 57, "text": "r(t - a)" }, { "math_id": 58, "text": "\\begin{align}\nX(t) &= r(\\cos (t+a) + t\\sin (t+a))\\\\\nY(t) &= r(\\sin (t+a) - t\\cos (t+a))\n\\end{align}" }, { "math_id": 59, "text": "a = -0.5" }, { "math_id": 60, "text": "a = 0.5" }, { "math_id": 61, "text": "a = 1" }, { "math_id": 62, "text": "a=0" }, { "math_id": 63, "text": "0 \\le t \\le t_2" }, { "math_id": 64, "text": "L = \\frac{r}{2} t_2^2." }, { "math_id": 65, "text": "\\vec c(t) = (\\tfrac{t^3}{3}, \\tfrac{t^2}{2})" }, { "math_id": 66, "text": "\\vec c'(t) = (t^2, t)" }, { "math_id": 67, "text": "|\\vec c'(t)| = t\\sqrt{t^2 + 1}" }, { "math_id": 68, "text": "\\int_0^t w\\sqrt{w^2 + 1}\\,dw = \\frac{1}{3}\\sqrt{t^2 + 1}^3 - \\frac13" }, { "math_id": 69, "text": "l_0={1\\over3}" }, { "math_id": 70, "text": "\\begin{align} X(t)&= -\\frac{t}{3}\\\\\nY(t) &= \\frac{t^2}{6} - \\frac{1}{3}.\\end{align}" }, { "math_id": 71, "text": "Y = \\frac{3}{2}X^2 - \\frac{1}{3}," }, { "math_id": 72, "text": "(t, \\cosh t)" }, { "math_id": 73, "text": "\\vec c'(t) = (1, \\sinh t)" }, { "math_id": 74, "text": " 1 + \\sinh^2 t =\\cosh^2 t," }, { "math_id": 75, "text": "|\\vec c'(t)| = \\cosh t" }, { "math_id": 76, "text": "\\textstyle\\int_0^t \\cosh w\\,dw = \\sinh t." }, { "math_id": 77, "text": "(t - \\tanh t, 1/\\cosh t)," }, { "math_id": 78, "text": "\\vec c(t) = (t - \\sin t, 1 - \\cos t)" }, { "math_id": 79, "text": "\\vec c'(t) = (1 - \\cos t, \\sin t)" }, { "math_id": 80, "text": "|\\vec c'(t)| = 2\\sin\\frac{t}{2}," }, { "math_id": 81, "text": "\\int_\\pi^t 2\\sin\\frac{w}{2}\\,dw = -4\\cos\\frac{t}{2}." }, { "math_id": 82, "text": "X(t) = t + \\sin t," }, { "math_id": 83, "text": "Y(t) = 3 + \\cos t," }, { "math_id": 84, "text": "(t - \\sin t, 1 - \\cos t)" }, { "math_id": 85, "text": "(t + \\sin t, 3 + \\cos t)." }, { "math_id": 86, "text": "c_0" } ]
https://en.wikipedia.org/wiki?curid=840704
8407270
Total dynamic head
Term used in fluid dynamics In fluid dynamics, total dynamic head (TDH) is the work to be done by a pump, per unit weight of fluid. TDH is expressed as the total equivalent height to which a fluid is to be pumped, taking into account friction losses in the pipe. formula_0 TDH = Static Lift + Pressure Head + Velocity Head + Friction Loss where: "Static lift" is the difference in elevation between the suction point and the discharge point. "Pressure head" is the difference in pressure between the suction point and the discharge point, expressed as an equivalent height of fluid. "Velocity head" represents the kinetic energy of the fluid due to its bulk motion. "Friction loss" (or "head loss") represents energy lost to friction as the fluid flows through the pipe. This equation can be derived from Bernoulli's equation. For incompressible liquids such as water, "Static lift + Pressure head" together equal the difference in fluid surface elevation between the suction basin and the discharge basin.
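As a worked illustration of the sum above, the short Python sketch below (the numbers and the helper name total_dynamic_head are invented for the example) adds the four contributions for a small water-pumping problem, converting a pressure difference and a pipe velocity into their equivalent heads.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def total_dynamic_head(static_lift_m, delta_pressure_pa, velocity_m_s,
                       friction_loss_m, rho=RHO_WATER, g=G):
    """TDH = static lift + pressure head + velocity head + friction loss, in metres."""
    pressure_head = delta_pressure_pa / (rho * g)    # delta p / (rho g)
    velocity_head = velocity_m_s**2 / (2.0 * g)      # v^2 / (2 g), suction velocity neglected
    return static_lift_m + pressure_head + velocity_head + friction_loss_m

# Example: 12 m of elevation, discharge tank pressurised 50 kPa above the suction
# tank, 2 m/s flow in the pipe, and an estimated 3 m of friction loss.
tdh = total_dynamic_head(12.0, 50e3, 2.0, 3.0)
print(f"Total dynamic head: {tdh:.2f} m")   # about 12 + 5.10 + 0.20 + 3 = 20.30 m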
[ { "math_id": 0, "text": " {TDH = \\Delta z + \\Delta \\frac{\\psi}{\\rho g} + \\Delta \\frac{v^2}{2g}} + h_F " } ]
https://en.wikipedia.org/wiki?curid=8407270
840758
Ext functor
Construction in homological algebra In mathematics, the Ext functors are the derived functors of the Hom functor. Along with the Tor functor, Ext is one of the core concepts of homological algebra, in which ideas from algebraic topology are used to define invariants of algebraic structures. The cohomology of groups, Lie algebras, and associative algebras can all be defined in terms of Ext. The name comes from the fact that the first Ext group Ext1 classifies extensions of one module by another. In the special case of abelian groups, Ext was introduced by Reinhold Baer (1934). It was named by Samuel Eilenberg and Saunders MacLane (1942), and applied to topology (the universal coefficient theorem for cohomology). For modules over any ring, Ext was defined by Henri Cartan and Eilenberg in their 1956 book "Homological Algebra". Definition. Let "R" be a ring and let "R"-Mod be the category of modules over "R". (One can take this to mean either left "R"-modules or right "R"-modules.) For a fixed "R"-module "A", let "T"("B") = Hom"R"("A", "B") for "B" in "R"-Mod. (Here Hom"R"("A", "B") is the abelian group of "R"-linear maps from "A" to "B"; this is an "R"-module if "R" is commutative.) This is a left exact functor from "R"-Mod to the category of abelian groups Ab, and so it has right derived functors "RiT". The Ext groups are the abelian groups defined by formula_0 for an integer "i". By definition, this means: take any injective resolution formula_1 remove the term "B", and form the cochain complex: formula_2 For each integer "i", Ext("A", "B") is the cohomology of this complex at position "i". It is zero for "i" negative. For example, Ext("A", "B") is the kernel of the map Hom"R"("A", "I"0) → Hom"R"("A", "I"1), which is isomorphic to Hom"R"("A", "B"). An alternative definition uses the functor "G"("A")=Hom"R"("A", "B"), for a fixed "R"-module "B". This is a contravariant functor, which can be viewed as a left exact functor from the opposite category ("R"-Mod)op to Ab. The Ext groups are defined as the right derived functors "RiG": formula_3 That is, choose any projective resolution formula_4 remove the term "A", and form the cochain complex: formula_5 Then Ext("A", "B") is the cohomology of this complex at position "i". One may wonder why the choice of resolution has been left vague so far. In fact, Cartan and Eilenberg showed that these constructions are independent of the choice of projective or injective resolution, and that both constructions yield the same Ext groups. Moreover, for a fixed ring "R", Ext is a functor in each variable (contravariant in "A", covariant in "B"). For a commutative ring "R" and "R"-modules "A" and "B", Ext("A", "B") is an "R"-module (using that Hom"R"("A", "B") is an "R"-module in this case). For a non-commutative ring "R", Ext("A", "B") is only an abelian group, in general. If "R" is an algebra over a ring "S" (which means in particular that "S" is commutative), then Ext("A", "B") is at least an "S"-module. Properties of Ext. Here are some of the basic properties and computations of Ext groups. formula_8 for any "R"-module "B". Here "B"["u"] denotes the "u"-torsion subgroup of "B", {"x" ∈ "B": "ux" = 0}. Taking "R" to be the ring formula_9 of integers, this calculation can be used to compute formula_10 for any finitely generated abelian group "A". formula_11 for any "R"-module "A". Also, a short exact sequence 0 → "K" → "L" → "M" → 0 induces a long exact sequence of the form formula_12 for any "R"-module "B". formula_13 formula_14 Ext and extensions. 
Equivalence of extensions. The Ext groups derive their name from their relation to extensions of modules. Given "R"-modules "A" and "B", an extension of "A" by "B" is a short exact sequence of "R"-modules formula_15 Two extensions formula_16 formula_17 are said to be equivalent (as extensions of "A" by "B") if there is a commutative diagram: Note that the Five lemma implies that the middle arrow is an isomorphism. An extension of "A" by "B" is called split if it is equivalent to the trivial extension formula_18 There is a one-to-one correspondence between equivalence classes of extensions of "A" by "B" and elements of Ext("A", "B"). The trivial extension corresponds to the zero element of Ext("A", "B"). The Baer sum of extensions. The Baer sum is an explicit description of the abelian group structure on Ext("A", "B"), viewed as the set of equivalence classes of extensions of "A" by "B". Namely, given two extensions formula_19 and formula_20 first form the pullback over formula_21, formula_22 Then form the quotient module formula_23 The Baer sum of "E" and "E′" is the extension formula_24 where the first map is formula_25 and the second is formula_26. Up to equivalence of extensions, the Baer sum is commutative and has the trivial extension as identity element. The negative of an extension 0 → "B" → "E" → "A" → 0 is the extension involving the same module "E", but with the homomorphism "B" → "E" replaced by its negative. Construction of Ext in abelian categories. Nobuo Yoneda defined the abelian groups Ext("A", "B") for objects "A" and "B" in any abelian category C; this agrees with the definition in terms of resolutions if C has enough projectives or enough injectives. First, Ext("A","B") = HomC("A", "B"). Next, Ext("A", "B") is the set of equivalence classes of extensions of "A" by "B", forming an abelian group under the Baer sum. Finally, the higher Ext groups Ext("A", "B") are defined as equivalence classes of "n-extensions", which are exact sequences formula_27 under the equivalence relation generated by the relation that identifies two extensions formula_28 if there are maps formula_29 for all "m" in {1, 2, ..., "n"} so that every resulting square commutes formula_30 that is, if there is a chain map formula_31 which is the identity on "A" and "B". The Baer sum of two "n"-extensions as above is formed by letting formula_32 be the pullback of formula_33 and formula_34 over "A", and formula_35 be the pushout of formula_36 and formula_37 under "B". Then the Baer sum of the extensions is formula_38 The derived category and the Yoneda product. An important point is that Ext groups in an abelian category C can be viewed as sets of morphisms in a category associated to C, the derived category "D"(C). The objects of the derived category are complexes of objects in C. Specifically, one has formula_39 where an object of C is viewed as a complex concentrated in degree zero, and ["i"] means shifting a complex "i" steps to the left. From this interpretation, there is a bilinear map, sometimes called the Yoneda product: formula_40 which is simply the composition of morphisms in the derived category. The Yoneda product can also be described in more elementary terms. For "i" = "j" = 0, the product is the composition of maps in the category C. In general, the product can be defined by splicing together two Yoneda extensions. Alternatively, the Yoneda product can be defined in terms of resolutions. (This is close to the definition of the derived category.) 
For example, let "R" be a ring, with "R"-modules "A", "B", "C", and let "P", "Q", and "T" be projective resolutions of "A", "B", "C". Then Ext("A","B") can be identified with the group of chain homotopy classes of chain maps "P" → "Q"["i"]. The Yoneda product is given by composing chain maps: formula_41 By any of these interpretations, the Yoneda product is associative. As a result, formula_42 is a graded ring, for any "R"-module "A". For example, this gives the ring structure on group cohomology formula_43 since this can be viewed as formula_44. Also by associativity of the Yoneda product: for any "R"-modules "A" and "B", formula_45 is a module over formula_42. formula_48 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{Ext}_R^i(A,B)=(R^iT)(B)," }, { "math_id": 1, "text": "0 \\to B \\to I^0 \\to I^1 \\to \\cdots," }, { "math_id": 2, "text": "0 \\to \\operatorname{Hom}_R(A,I^0) \\to \\operatorname{Hom}_R(A,I^1) \\to \\cdots." }, { "math_id": 3, "text": "\\operatorname{Ext}_R^i(A,B)=(R^iG)(A)." }, { "math_id": 4, "text": "\\cdots \\to P_1 \\to P_0 \\to A \\to 0, " }, { "math_id": 5, "text": "0\\to \\operatorname{Hom}_R(P_0,B)\\to \\operatorname{Hom}_R(P_1,B) \\to \\cdots." }, { "math_id": 6, "text": "\\operatorname{Ext}^i_{\\Z}(A,B) = 0" }, { "math_id": 7, "text": "\\operatorname{Ext}^i_R(A,B)=0" }, { "math_id": 8, "text": "\\operatorname{Ext}_R^i(R/(u),B)\\cong\\begin{cases} B[u] & i=0\\\\ B/uB & i=1\\\\ 0 &\\text{otherwise,}\\end{cases}" }, { "math_id": 9, "text": "\\Z" }, { "math_id": 10, "text": "\\operatorname{Ext}^1_{\\Z}(A,B)" }, { "math_id": 11, "text": "0 \\to \\mathrm{Hom}_R(A,K) \\to \\mathrm{Hom}_R(A,L) \\to \\mathrm{Hom}_R(A,M) \\to \\mathrm{Ext}^1_R(A,K) \\to \\mathrm{Ext}^1_R(A,L) \\to \\cdots," }, { "math_id": 12, "text": "0 \\to \\mathrm{Hom}_R(M,B) \\to \\mathrm{Hom}_R(L,B) \\to \\mathrm{Hom}_R(K,B) \\to \\mathrm{Ext}^1_R(M,B) \\to \\mathrm{Ext}^1_R(L,B) \\to \\cdots," }, { "math_id": 13, "text": "\\begin{align}\n\\operatorname{Ext}^i_R \\left(\\bigoplus_\\alpha M_\\alpha,N \\right) &\\cong\\prod_\\alpha \\operatorname{Ext}^i_R (M_\\alpha,N) \\\\\n\\operatorname{Ext}^i_R \\left(M,\\prod_\\alpha N_\\alpha \\right ) &\\cong\\prod_\\alpha \\operatorname{Ext}^i_R (M,N_\\alpha)\n\\end{align}" }, { "math_id": 14, "text": "S^{-1} \\operatorname{Ext}_R^i(A, B) \\cong \\operatorname{Ext}_{S^{-1} R}^i \\left (S^{-1} A, S^{-1} B \\right )." }, { "math_id": 15, "text": "0\\to B\\to E\\to A\\to 0." }, { "math_id": 16, "text": "0\\to B\\to E\\to A\\to 0" }, { "math_id": 17, "text": "0\\to B\\to E' \\to A\\to 0" }, { "math_id": 18, "text": "0\\to B\\to A\\oplus B\\to A\\to 0." }, { "math_id": 19, "text": "0\\to B\\xrightarrow[f]{} E \\xrightarrow[g]{} A\\to 0" }, { "math_id": 20, "text": "0\\to B\\xrightarrow[f']{} E'\\xrightarrow[g']{} A\\to 0," }, { "math_id": 21, "text": "A" }, { "math_id": 22, "text": "\\Gamma = \\left\\{ (e, e') \\in E \\oplus E' \\; | \\; g(e) = g'(e')\\right\\}." }, { "math_id": 23, "text": "Y = \\Gamma / \\{(f(b), -f'(b)) \\;|\\;b \\in B\\}." }, { "math_id": 24, "text": "0\\to B\\to Y\\to A\\to 0," }, { "math_id": 25, "text": "b \\mapsto [(f(b), 0)] = [(0, f'(b))]" }, { "math_id": 26, "text": "(e, e') \\mapsto g(e) = g'(e')" }, { "math_id": 27, "text": "0\\to B\\to X_n\\to\\cdots\\to X_1\\to A\\to 0," }, { "math_id": 28, "text": "\\begin{align}\n\\xi : 0 &\\to B\\to X_n\\to\\cdots\\to X_1\\to A\\to 0 \\\\\n\\xi': 0 &\\to B\\to X'_n\\to\\cdots\\to X'_1\\to A\\to 0\n\\end{align}" }, { "math_id": 29, "text": "X_m \\to X'_m" }, { "math_id": 30, "text": "\n\\begin{array}{cc cc cc c cc cc cc}\n 0 & \\longrightarrow & B & \\longrightarrow & X_n & \\longrightarrow &\n \\dots & \\longrightarrow & X_1 & \\longrightarrow & A & \n \\longrightarrow & 0 \n \\\\\n && \\Bigg\\Vert && \\Bigg\\downarrow \\iota_n \\! 
&&&&\n \\Bigg\\downarrow \\iota_1 && \\Bigg\\Vert && \n \\\\\n 0 & \\longrightarrow & B & \\longrightarrow & X'_n & \\longrightarrow &\n \\dots & \\longrightarrow & X'_1 & \\longrightarrow & A & \n \\longrightarrow & 0\n\\end{array}\n" }, { "math_id": 31, "text": "\\iota\\colon \\xi \\to \\xi'" }, { "math_id": 32, "text": "X''_1" }, { "math_id": 33, "text": "X_1" }, { "math_id": 34, "text": "X'_1" }, { "math_id": 35, "text": "X''_n" }, { "math_id": 36, "text": "X_n" }, { "math_id": 37, "text": "X'_n" }, { "math_id": 38, "text": "0\\to B\\to X''_n\\to X_{n-1}\\oplus X'_{n-1}\\to\\cdots\\to X_2\\oplus X'_2\\to X''_1\\to A\\to 0." }, { "math_id": 39, "text": "\\operatorname{Ext}^i_{\\mathbf C}(A,B) = \\operatorname{Hom}_{D({\\mathbf C})}(A,B[i])," }, { "math_id": 40, "text": "\\operatorname{Ext}^i_{\\mathbf C}(A,B) \\times \\operatorname{Ext}^j_{\\mathbf C}(B,C) \\to \\operatorname{Ext}^{i+j}_{\\mathbf C}(A,C)," }, { "math_id": 41, "text": "P\\to Q[i]\\to T[i+j]." }, { "math_id": 42, "text": "\\operatorname{Ext}^*_R(A,A)" }, { "math_id": 43, "text": "H^*(G, \\Z)," }, { "math_id": 44, "text": "\\operatorname{Ext}^*_{\\Z[G]}(\\Z,\\Z)" }, { "math_id": 45, "text": "\\operatorname{Ext}^*_R(A,B)" }, { "math_id": 46, "text": "H^*(G,M)=\\operatorname{Ext}_{\\Z[G]}^*(\\Z, M)" }, { "math_id": 47, "text": "\\Z[G]" }, { "math_id": 48, "text": "HH^*(A,M)=\\operatorname{Ext}^*_{A\\otimes_k A^{\\text{op}}} (A, M)." }, { "math_id": 49, "text": "H^*(\\mathfrak g,M)=\\operatorname{Ext}^*_{U\\mathfrak g}(k,M)" }, { "math_id": 50, "text": "\\mathfrak g" }, { "math_id": 51, "text": "U\\mathfrak g" }, { "math_id": 52, "text": "H^*(X, A) = \\operatorname{Ext}^*(\\Z_X, A)." }, { "math_id": 53, "text": "\\Z_X" }, { "math_id": 54, "text": "\\operatorname{Ext}^*_R(k,k)" } ]
https://en.wikipedia.org/wiki?curid=840758
8407637
Goos–Hänchen effect
The Goos–Hänchen effect (named after Hermann Fritz Gustav Goos (1883 – 1968) and Hilda Hänchen (1919 – 2013)) is an optical phenomenon in which linearly polarized light undergoes a small lateral shift when totally internally reflected. The shift is perpendicular to the direction of propagation in the plane containing the incident and reflected beams. This effect is the linear polarization analog of the Imbert–Fedorov effect. The acoustic analog of the Goos–Hänchen effect is known as the Schoch displacement. Description. This effect occurs because the reflections of a finite-sized beam interfere along a line transverse to the average propagation direction. As shown in the figure, the superposition of two plane waves with slightly different angles of incidence but with the same frequency or wavelength is given by formula_0 where formula_1 and formula_2 with formula_3. It can be shown that the two waves generate an interference pattern transverse to the average propagation direction, formula_4, and on the interface along the formula_5 plane. Both waves are reflected from the surface and undergo different phase shifts, which leads to a lateral shift of the finite beam. Therefore, the Goos–Hänchen effect is a coherence phenomenon. Research. This effect continues to be a topic of scientific research, for example in the context of nanophotonics applications. A negative "Goos–Hänchen shift" was shown by Walter J. Wild and Lee Giles. Sensitive detection of biological molecules has been achieved by measuring the Goos–Hänchen shift, the lateral shift being linearly related to the concentration of the target molecules. The work by M. Merano et al. studied the Goos–Hänchen effect experimentally for the case of an optical beam reflecting from a metal surface (gold) at 826 nm. They report a substantial, negative lateral shift of the reflected beam in the plane of incidence for p-polarization and a smaller, positive shift for s-polarization. Generation of giant Goos–Hänchen shift. The lateral Goos–Hänchen shift is only about 5–10 μm at a total-internal-reflection interface between water and air, which makes it very difficult to measure experimentally. To generate a giant Goos–Hänchen shift of up to 100 μm, surface plasmon resonance techniques have been applied at a metal–dielectric interface. The electrons on the metallic surface are strongly resonant with the optical waves under a specific excitation condition. The light is fully absorbed by the metallic nanostructures, creating an extremely dark point at the resonance angle. Thus, a giant Goos–Hänchen position shift is generated by this singular dark point at the totally internally reflecting interface. This giant Goos–Hänchen shift has been applied not only to highly sensitive detection of biological molecules but also to the observation of the photonic spin Hall effect, which is important in quantum information processing and communications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
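The two-plane-wave superposition described above can be reproduced numerically. The Python sketch below (all parameter values are arbitrary choices for illustration, not taken from the cited experiments) samples the sum of the two waves on the interface plane and compares the spacing of the resulting transverse interference fringes with the value predicted from the difference of the two wave vectors; it is this phase-sensitive interference that underlies the lateral beam shift.

import numpy as np

# Illustrative parameters (arbitrary, not from the article):
wavelength = 826e-9                  # metres, same order as the Merano et al. experiment
n1 = 1.5                             # refractive index of the incidence medium
k = 2 * np.pi * n1 / wavelength      # wavenumber k = (omega/c) * n1
theta0 = np.radians(45.0)            # mean angle of incidence
dtheta = np.radians(0.5)             # small angular spread between the two plane waves

def wavevector(theta):
    return k * np.array([np.cos(theta), np.sin(theta)])   # (x, z) components

k1 = wavevector(theta0 + dtheta)
k2 = wavevector(theta0 - dtheta)

# Sample the superposition on the interface plane x = 0, along z (time factor dropped):
z = np.linspace(0.0, 200e-6, 20001)
field = np.exp(1j * k1[1] * z) + np.exp(1j * k2[1] * z)
intensity = np.abs(field)**2

# Transverse fringe period: numerically from the intensity minima, and analytically
# from the z-component of k1 - k2, i.e. 2 k cos(theta0) sin(dtheta).
minima = z[1:-1][(intensity[1:-1] < intensity[:-2]) & (intensity[1:-1] < intensity[2:])]
print("fringe period (numerical):", np.diff(minima).mean())
print("fringe period (analytic): ", np.pi / (k * np.cos(theta0) * np.sin(dtheta)))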
[ { "math_id": 0, "text": "\\mathbf{\\underline{E}}(x,z,t)=\\mathbf{\\underline{E}}^{TE/TM} \\left( e^{j\\mathbf{k}_1 \\cdot \\mathbf{r}} + e^{j\\mathbf{k}_2 \\cdot \\mathbf{r}} \\right) \\cdot e^{-j \\omega t}" }, { "math_id": 1, "text": " \\mathbf{k}_{1} = k \\left( \\cos{\\left( \\theta_0 + \\Delta \\theta \\right)} \\mathbf{\\hat{x}} + \n\\sin{\\left( \\theta_0 + \\Delta \\theta \\right)} \\mathbf{\\hat{z}}\n\\right) " }, { "math_id": 2, "text": " \\mathbf{k}_{2} = k \\left( \\cos{\\left( \\theta_0 - \\Delta \\theta \\right)} \\mathbf{\\hat{x}} + \n\\sin{\\left( \\theta_0 - \\Delta \\theta \\right)} \\mathbf{\\hat{z}}\n\\right) " }, { "math_id": 3, "text": " k = \\begin{matrix}\\frac{\\omega }{c} \\end{matrix} n_1 " }, { "math_id": 4, "text": " \\mathbf{k}_0 = k \\left( \\cos{\\theta_0} \\mathbf{\\hat{x}} + \\sin{\\theta_0} \\mathbf{\\hat{z}} \\right) " }, { "math_id": 5, "text": "(y,z)" } ]
https://en.wikipedia.org/wiki?curid=8407637
840894
Computation tree logic
Computation tree logic (CTL) is a branching-time logic, meaning that its model of time is a tree-like structure in which the future is not determined; there are different paths in the future, any one of which might be an actual path that is realized. It is used in formal verification of software or hardware artifacts, typically by software applications known as model checkers, which determine if a given artifact possesses safety or liveness properties. For example, CTL can specify that when some initial condition is satisfied (e.g., all program variables are positive or no cars on a highway straddle two lanes), then all possible executions of a program avoid some undesirable condition (e.g., dividing a number by zero or two cars colliding on a highway). In this example, the safety property could be verified by a model checker that explores all possible transitions out of program states satisfying the initial condition and ensures that all such executions satisfy the property. Computation tree logic belongs to a class of temporal logics that includes linear temporal logic (LTL). Although there are properties expressible only in CTL and properties expressible only in LTL, all properties expressible in either logic can also be expressed in CTL*. History. CTL was first proposed by Edmund M. Clarke and E. Allen Emerson in 1981, who used it to synthesize so-called "synchronisation skeletons", "i.e" abstractions of concurrent programs. Since the introduction of CTL, there has been debate about the relative merits of CTL and LTL. Because CTL is more computationally efficient to model check, it has become more common in industrial use, and many of the most successful model-checking tools use CTL as a specification language. Syntax of CTL. The language of well-formed formulas for CTL is generated by the following grammar: formula_0 where formula_1 ranges over a set of atomic formulas. It is not necessary to use all connectives – for example, formula_2 comprises a complete set of connectives, and the others can be defined using them. For example, the following is a well-formed CTL formula: formula_5 The following is not a well-formed CTL formula: formula_6 The problem with this string is that formula_7 can occur only when paired with an formula_8 or an formula_9. CTL uses atomic propositions as its building blocks to make statements about the states of a system. These propositions are then combined into formulas using logical operators and temporal operators. Operators. Logical operators. The logical operators are the usual ones: ¬, ∨, ∧, ⇒ and ⇔. Along with these operators CTL formulas can also make use of the boolean constants true and false. Temporal operators. The temporal operators are the following: In CTL*, the temporal operators can be freely mixed. In CTL, operators must always be grouped in pairs: one path operator followed by a state operator. See the examples below. CTL* is strictly more expressive than CTL. Minimal set of operators. In CTL there are minimal sets of operators. All CTL formulas can be transformed to use only those operators. This is useful in model checking. One minimal set of operators is: {true, ∨, ¬, EG, EU, EX}. Some of the transformations used for temporal operators are: Semantics of CTL. Definition. CTL formulae are interpreted over transition systems. A transition system is a triple formula_10, where formula_11 is a set of states, formula_12 is a transition relation, assumed to be serial, i.e. 
every state has at least one successor, and formula_13 is a labelling function, assigning propositional letters to states. Let formula_14 be such a transition model, with formula_15, and formula_16, where formula_17 is the set of well-formed formulas over the language of formula_18. Then the relation of semantic entailment formula_19 is defined recursively on formula_20: Characterisation of CTL. Rules 10–15 above refer to computation paths in models and are what ultimately characterise the "Computation Tree"; they are assertions about the nature of the infinitely deep computation tree rooted at the given state formula_36. Semantic equivalences. The formulae formula_20 and formula_37 are said to be semantically equivalent if any state in any model that satisfies one also satisfies the other. This is denoted formula_38 It can be seen that formula_8 and formula_9 are duals, being universal and existential computation path quantifiers respectively: formula_39. Furthermore, so are formula_40 and formula_41. Hence an instance of De Morgan's laws can be formulated in CTL: formula_42 formula_43 formula_44 It can be shown using such identities that a subset of the CTL temporal connectives is adequate if it contains formula_45, at least one of formula_46 and at least one of formula_47 and the boolean connectives. The important equivalences below are called the expansion laws; they allow unfolding the verification of a CTL connective towards its successors in time. formula_48 formula_49 formula_50 formula_51 formula_52 formula_53 Examples. Let "P" mean "I like chocolate" and Q mean "It's warm outside." "I will like chocolate from now on, no matter what happens." "It's possible I may like chocolate some day, at least for one day." "It's always possible (AF) that I will suddenly start liking chocolate for the rest of time." (Note: not just the rest of my life, since my life is finite, while G is infinite). "Depending on what happens in the future (E), it's possible that for the rest of time (G), I'll be guaranteed at least one (AF) chocolate-liking day still ahead of me. However, if something ever goes wrong, then all bets are off and there's no guarantee about whether I'll ever like chocolate." The two following examples show the difference between CTL and CTL*, as they allow for the until operator to not be qualified with any path operator (A or E): "From now until it's warm outside, I will like chocolate every single day. Once it's warm outside, all bets are off as to whether I'll like chocolate anymore. Oh, and it's guaranteed to be warm outside eventually, even if only for a single day." "It's possible that: there will eventually come a time when it will be warm forever (AG.Q) and that before that time there will always be "some" way to get me to like chocolate the next day (EX.P)." Relations with other logics. Computation tree logic (CTL) is a subset of CTL* as well as of the modal μ calculus. CTL is also a fragment of Alur, Henzinger and Kupferman's alternating-time temporal logic (ATL). Computation tree logic (CTL) and linear temporal logic (LTL) are both a subset of CTL*. CTL and LTL are not equivalent and they have a common subset, which is a proper subset of both CTL and LTL. Extensions. CTL has been extended with second-order quantification formula_54 and formula_55 to "quantified computational tree logic" (QCTL). 
There are two semantics for QCTL: the structure semantics, in which the quantified atomic propositions label the states of the Kripke structure itself, and the tree semantics, in which they label the nodes of its computation tree (the unwinding of the structure). A reduction from the model-checking problem of QCTL with the structure semantics to TQBF (true quantified Boolean formulae) has been proposed, in order to take advantage of QBF solvers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
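Because {true, ∨, ¬, EG, EU, EX} is an adequate set of operators, an explicit model checker only needs to handle these three temporal connectives; the remaining ones reduce to them through the equivalences listed above. The Python sketch below (a toy illustration, not the interface of any production model checker) computes the satisfaction sets of EX, EU and EG formulas on a small serial transition system by the standard fixed-point iterations.

# A small serial transition system: every state has at least one successor.
transitions = {
    's0': {'s1'},
    's1': {'s0', 's2'},
    's2': {'s2'},
}
labels = {
    's0': {'p'},
    's1': {'p'},
    's2': {'r'},
}
STATES = set(transitions)

def sat_EX(phi):
    """States with at least one successor satisfying phi (phi is a set of states)."""
    return {s for s in STATES if transitions[s] & phi}

def sat_EU(phi1, phi2):
    """E[phi1 U phi2]: least fixed point of Z = phi2 union (phi1 intersect EX Z)."""
    result = set(phi2)
    changed = True
    while changed:
        new = result | {s for s in phi1 if transitions[s] & result}
        changed = new != result
        result = new
    return result

def sat_EG(phi):
    """EG phi: greatest fixed point of Z = phi intersect EX Z."""
    result = set(phi)
    changed = True
    while changed:
        new = {s for s in result if transitions[s] & result}
        changed = new != result
        result = new
    return result

p = {s for s in STATES if 'p' in labels[s]}
r = {s for s in STATES if 'r' in labels[s]}

print(sat_EX(r))       # {'s1', 's2'}: each has a successor labelled r
print(sat_EU(p, r))    # {'s0', 's1', 's2'}: along some path, p holds until r does
print(sat_EG(p))       # {'s0', 's1'}: the loop s0 -> s1 -> s0 -> ... stays inside p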
[ { "math_id": 0, "text": "\\begin{align}\n\\phi &::= \\bot \\mid \\top \\mid p \\mid (\\neg\\phi) \\mid (\\phi\\land\\phi) \\mid (\\phi\\lor\\phi) \\mid \n(\\phi\\Rightarrow\\phi) \\mid (\\phi\\Leftrightarrow\\phi) \\\\\n&\\mid\\quad \\mbox{AX }\\phi \\mid \\mbox{EX }\\phi \\mid \\mbox{AF }\\phi \\mid \\mbox{EF }\\phi \\mid \\mbox{AG }\\phi \\mid \\mbox{EG }\\phi \\mid \n\\mbox{A }[\\phi \\mbox{ U } \\phi] \\mid \\mbox{E }[\\phi \\mbox{ U } \\phi]\n\\end{align}" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "\\{\\neg, \\land, \\mbox{AX}, \\mbox{AU}, \\mbox{EU}\\}" }, { "math_id": 3, "text": "\\mbox{A}" }, { "math_id": 4, "text": "\\mbox{E}" }, { "math_id": 5, "text": "\\mbox{EF }(\\mbox{EG } p \\Rightarrow \\mbox{AF } r)" }, { "math_id": 6, "text": "\\mbox{EF }\\big(r \\mbox{ U } q\\big)" }, { "math_id": 7, "text": "\\mathrm U" }, { "math_id": 8, "text": "\\mathrm A" }, { "math_id": 9, "text": "\\mathrm E" }, { "math_id": 10, "text": "\\mathcal{M}=(S,{\\rightarrow},L)" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "{\\rightarrow} \\subseteq S \\times S" }, { "math_id": 13, "text": "L" }, { "math_id": 14, "text": "\\mathcal{M}=(S,\\rightarrow,L)" }, { "math_id": 15, "text": "s \\in S" }, { "math_id": 16, "text": "\\phi \\in F" }, { "math_id": 17, "text": "F" }, { "math_id": 18, "text": "\\mathcal{M}" }, { "math_id": 19, "text": "(\\mathcal{M}, s \\models \\phi)" }, { "math_id": 20, "text": "\\phi" }, { "math_id": 21, "text": "\\Big( (\\mathcal{M}, s) \\models \\top \\Big) \\land \\Big( (\\mathcal{M}, s) \\not\\models \\bot \\Big)" }, { "math_id": 22, "text": "\\Big( (\\mathcal{M}, s) \\models p \\Big) \\Leftrightarrow \\Big( p \\in L(s) \\Big)" }, { "math_id": 23, "text": "\\Big( (\\mathcal{M}, s) \\models \\neg\\phi \\Big) \\Leftrightarrow \\Big( (\\mathcal{M}, s) \\not\\models \\phi \\Big)" }, { "math_id": 24, "text": "\\Big( (\\mathcal{M}, s) \\models \\phi_1 \\land \\phi_2 \\Big) \\Leftrightarrow \\Big( \\big((\\mathcal{M}, s) \\models \\phi_1 \\big) \\land \\big((\\mathcal{M}, s) \\models \\phi_2 \\big) \\Big)" }, { "math_id": 25, "text": "\\Big( (\\mathcal{M}, s) \\models \\phi_1 \\lor \\phi_2 \\Big) \\Leftrightarrow \\Big( \\big((\\mathcal{M}, s) \\models \\phi_1 \\big) \\lor \\big((\\mathcal{M}, s) \\models \\phi_2 \\big) \\Big)" }, { "math_id": 26, "text": "\\Big( (\\mathcal{M}, s) \\models \\phi_1 \\Rightarrow \\phi_2 \\Big) \\Leftrightarrow \\Big( \\big((\\mathcal{M}, s) \\not\\models \\phi_1 \\big) \\lor \\big((\\mathcal{M}, s) \\models \\phi_2 \\big) \\Big)" }, { "math_id": 27, "text": "\\bigg( (\\mathcal{M}, s) \\models \\phi_1 \\Leftrightarrow \\phi_2 \\bigg) \\Leftrightarrow \\bigg( \\Big( \\big((\\mathcal{M}, s) \\models \\phi_1 \\big) \\land \\big((\\mathcal{M}, s) \\models \\phi_2 \\big) \\Big) \\lor \\Big( \\neg \\big((\\mathcal{M}, s) \\models \\phi_1 \\big) \\land \\neg \\big((\\mathcal{M}, s) \\models \\phi_2 \\big) \\Big) \\bigg)" }, { "math_id": 28, "text": "\\Big( (\\mathcal{M}, s) \\models AX\\phi \\Big) \\Leftrightarrow \\Big( \\forall \\langle s \\rightarrow s_1 \\rangle \\big( (\\mathcal{M}, s_1) \\models \\phi \\big) \\Big)" }, { "math_id": 29, "text": "\\Big( (\\mathcal{M}, s) \\models EX\\phi \\Big) \\Leftrightarrow \\Big( \\exists \\langle s \\rightarrow s_1 \\rangle \\big( (\\mathcal{M}, s_1) \\models \\phi \\big) \\Big)" }, { "math_id": 30, "text": "\\Big( (\\mathcal{M}, s) \\models AG\\phi \\Big) \\Leftrightarrow \\Big( \\forall \\langle s_1 \\rightarrow s_2 \\rightarrow \\ldots \\rangle (s=s_1) \\forall i \\big( (\\mathcal{M}, 
s_i) \\models \\phi \\big) \\Big)" }, { "math_id": 31, "text": "\\Big( (\\mathcal{M}, s) \\models EG\\phi \\Big) \\Leftrightarrow \\Big( \\exists \\langle s_1 \\rightarrow s_2 \\rightarrow \\ldots \\rangle (s=s_1) \\forall i \\big( (\\mathcal{M}, s_i) \\models \\phi \\big) \\Big)" }, { "math_id": 32, "text": "\\Big( (\\mathcal{M}, s) \\models AF\\phi \\Big) \\Leftrightarrow \\Big( \\forall \\langle s_1 \\rightarrow s_2 \\rightarrow \\ldots \\rangle (s=s_1) \\exists i \\big( (\\mathcal{M}, s_i) \\models \\phi \\big) \\Big)" }, { "math_id": 33, "text": "\\Big( (\\mathcal{M}, s) \\models EF\\phi \\Big) \\Leftrightarrow \\Big( \\exists \\langle s_1 \\rightarrow s_2 \\rightarrow \\ldots \\rangle (s=s_1) \\exists i \\big( (\\mathcal{M}, s_i) \\models \\phi \\big) \\Big)" }, { "math_id": 34, "text": "\\bigg( (\\mathcal{M}, s) \\models A[\\phi_1 U \\phi_2] \\bigg) \\Leftrightarrow \\bigg( \\forall \\langle s_1 \\rightarrow s_2 \\rightarrow \\ldots \\rangle (s=s_1) \\exists i \\Big( \\big( (\\mathcal{M}, s_i) \\models \\phi_2 \\big) \\land \\big( \\forall (j < i) (\\mathcal{M}, s_j) \\models \\phi_1 \\big) \\Big) \\bigg)" }, { "math_id": 35, "text": "\\bigg( (\\mathcal{M}, s) \\models E[\\phi_1 U \\phi_2] \\bigg) \\Leftrightarrow \\bigg( \\exists \\langle s_1 \\rightarrow s_2 \\rightarrow \\ldots \\rangle (s=s_1) \\exists i \\Big( \\big( (\\mathcal{M}, s_i) \\models \\phi_2 \\big) \\land \\big( \\forall (j < i) (\\mathcal{M}, s_j) \\models \\phi_1 \\big) \\Big) \\bigg)" }, { "math_id": 36, "text": "s" }, { "math_id": 37, "text": "\\psi" }, { "math_id": 38, "text": "\\phi \\equiv \\psi" }, { "math_id": 39, "text": "\\neg \\mathrm A\\Phi \\equiv \\mathrm E \\neg \\Phi " }, { "math_id": 40, "text": "\\mathrm G" }, { "math_id": 41, "text": "\\mathrm F" }, { "math_id": 42, "text": "\\neg AF\\phi \\equiv EG\\neg\\phi" }, { "math_id": 43, "text": "\\neg EF\\phi \\equiv AG\\neg\\phi" }, { "math_id": 44, "text": "\\neg AX\\phi \\equiv EX\\neg\\phi" }, { "math_id": 45, "text": "EU" }, { "math_id": 46, "text": "\\{AX,EX\\}" }, { "math_id": 47, "text": "\\{EG,AF,AU\\}" }, { "math_id": 48, "text": "AG\\phi \\equiv \\phi \\land AX AG \\phi" }, { "math_id": 49, "text": "EG\\phi \\equiv \\phi \\land EX EG \\phi" }, { "math_id": 50, "text": "AF\\phi \\equiv \\phi \\lor AX AF \\phi" }, { "math_id": 51, "text": "EF\\phi \\equiv \\phi \\lor EX EF \\phi" }, { "math_id": 52, "text": "A[\\phi U \\psi] \\equiv \\psi \\lor (\\phi \\land AX A [\\phi U \\psi])" }, { "math_id": 53, "text": "E[\\phi U \\psi] \\equiv \\psi \\lor (\\phi \\land EX E [\\phi U \\psi])" }, { "math_id": 54, "text": "\\exists p" }, { "math_id": 55, "text": "\\forall p" } ]
https://en.wikipedia.org/wiki?curid=840894
840950
Comma category
Mathematics construct In mathematics, a comma category (a special case being a slice category) is a construction in category theory. It provides another way of looking at morphisms: instead of simply relating objects of a category to one another, morphisms become objects in their own right. This notion was introduced in 1963 by F. W. Lawvere (Lawvere, 1963 p. 36), although the technique did not become generally known until many years later. Several mathematical concepts can be treated as comma categories. Comma categories also guarantee the existence of some limits and colimits. The name comes from the notation originally used by Lawvere, which involved the comma punctuation mark. The name persists even though standard notation has changed, since the use of a comma as an operator is potentially confusing, and even Lawvere dislikes the uninformative term "comma category" (Lawvere, 1963 p. 13). Definition. The most general comma category construction involves two functors with the same codomain. Often one of these will have domain 1 (the one-object one-morphism category). Some accounts of category theory consider only these special cases, but the term comma category is actually much more general. General form. Suppose that formula_0, formula_1, and formula_2 are categories, and formula_3 and formula_4 (for source and target) are functors: formula_5 We can form the comma category formula_6 as follows: Morphisms are composed by taking formula_17 to be formula_18, whenever the latter expression is defined. The identity morphism on an object formula_7 is formula_19. Slice category. The first special case occurs when formula_20, the functor formula_3 is the identity functor, and formula_21 (the category with one object formula_22 and one morphism). Then formula_23 for some object formula_24 in formula_0. formula_25 In this case, the comma category is written formula_26, and is often called the "slice category" over formula_24 or the category of "objects over formula_24". The objects formula_27 can be simplified to pairs formula_28, where formula_29. Sometimes, formula_30 is denoted by formula_31. A morphism formula_32 from formula_33 to formula_34 in the slice category can then be simplified to an arrow formula_13 making the following diagram commute: Coslice category. The dual concept to a slice category is a coslice category. Here, formula_35, formula_3 has domain formula_36 and formula_4 is an identity functor. formula_37 In this case, the comma category is often written formula_38, where formula_39 is the object of formula_1 selected by formula_3. It is called the "coslice category" with respect to formula_40, or the category of "objects under formula_40". The objects are pairs formula_41 with formula_42. Given formula_41 and formula_43, a morphism in the coslice category is a map formula_14 making the following diagram commute: Arrow category. formula_3 and formula_4 are identity functors on formula_2 (so formula_44). formula_45 In this case, the comma category is the arrow category formula_46. Its objects are the morphisms of formula_2, and its morphisms are commuting squares in formula_2. Other variations. In the case of the slice or coslice category, the identity functor may be replaced with some other functor; this yields a family of categories particularly useful in the study of adjoint functors. 
For example, if formula_4 is the forgetful functor mapping an abelian group to its underlying set, and formula_47 is some fixed set (regarded as a functor from 1), then the comma category formula_48 has objects that are maps from formula_47 to a set underlying a group. This relates to the left adjoint of formula_4, which is the functor that maps a set to the free abelian group having that set as its basis. In particular, the initial object of formula_48 is the canonical injection formula_49, where formula_50 is the free group generated by formula_47. An object of formula_48 is called a "morphism from formula_47 to formula_4" or a "formula_4-structured arrow with domain formula_47". An object of formula_51 is called a "morphism from formula_3 to formula_52" or a "formula_3-costructured arrow with codomain formula_52". Another special case occurs when both formula_3 and formula_4 are functors with domain formula_36. If formula_53 and formula_54, then the comma category formula_6, written formula_55, is the discrete category whose objects are morphisms from formula_8 to formula_9. An inserter category is a (non-full) subcategory of the comma category where formula_56 and formula_57 are required. The comma category can also be seen as the inserter of formula_58 and formula_59, where formula_60 and formula_61 are the two projection functors out of the product category formula_62. Properties. For each comma category there are forgetful functors from it. Examples of use. Some notable categories. Several interesting categories have a natural definition in terms of comma categories. Limits and universal morphisms. Limits and colimits in comma categories may be "inherited". If formula_0 and formula_1 are complete, formula_94 is a continuous functor, and formula_95 is another functor (not necessarily continuous), then the comma category formula_6 produced is complete, and the projection functors formula_96 and formula_97 are continuous. Similarly, if formula_0 and formula_1 are cocomplete, and formula_98 is cocontinuous, then formula_6 is cocomplete, and the projection functors are cocontinuous. For example, note that in the above construction of the category of graphs as a comma category, the category of sets is complete and cocomplete, and the identity functor is continuous and cocontinuous. Thus, the category of graphs is complete and cocomplete. The notion of a universal morphism to a particular colimit, or from a limit, can be expressed in terms of a comma category. Essentially, we create a category whose objects are cones, and where the limiting cone is a terminal object; then, each universal morphism for the limit is just the morphism to the terminal object. This works in the dual case, with a category of cocones having an initial object. For example, let formula_2 be a category with formula_99 the functor taking each object formula_100 to formula_101 and each arrow formula_87 to formula_102. A universal morphism from formula_103 to formula_104 consists, by definition, of an object formula_101 and morphism formula_105 with the universal property that for any morphism formula_106 there is a unique morphism formula_107 with formula_108. In other words, it is an object in the comma category formula_109 having a morphism to any other object in that category; it is initial. This serves to define the coproduct in formula_2, when it exists. Adjunctions. 
Lawvere showed that the functors formula_110 and formula_111 are adjoint if and only if the comma categories formula_112 and formula_113, with formula_114 and formula_115 the identity functors on formula_116 and formula_2 respectively, are isomorphic, and equivalent elements in the comma category can be projected onto the same element of formula_117. This allows adjunctions to be described without involving sets, and was in fact the original motivation for introducing comma categories. Natural transformations. If the domains of formula_118 are equal, then the diagram which defines morphisms in formula_119 with formula_120 is identical to the diagram which defines a natural transformation formula_121. The difference between the two notions is that a natural transformation is a particular collection of morphisms of type of the form formula_122, while objects of the comma category contains "all" morphisms of type of such form. A functor to the comma category selects that particular collection of morphisms. This is described succinctly by an observation by S.A. Huq that a natural transformation formula_123, with formula_124, corresponds to a functor formula_125 which maps each object formula_8 to formula_126 and maps each morphism formula_127 to formula_12. This is a bijective correspondence between natural transformations formula_121 and functors formula_125 which are sections of both forgetful functors from formula_119.
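It can also help to compute a comma category explicitly in a small, finite setting. The Python sketch below (an ad-hoc illustration; the data structures are not from any standard library) treats the slice category of finite sets over a fixed set X: objects are functions f : A → X, and the code enumerates the morphisms (A, f) → (A′, f′), that is, the functions g : A → A′ with f′ ∘ g = f, which is exactly the commuting-triangle condition in the definition of the slice category.

from itertools import product

def functions(domain, codomain):
    """All set maps domain -> codomain, each represented as a dict."""
    domain, codomain = list(domain), list(codomain)
    for images in product(codomain, repeat=len(domain)):
        yield dict(zip(domain, images))

def slice_morphisms(obj1, obj2):
    """Morphisms (A, f) -> (A2, f2) in FinSet/X: maps g with f2(g(a)) = f(a) for all a."""
    (A, f), (A2, f2) = obj1, obj2
    return [g for g in functions(A, A2) if all(f2[g[a]] == f[a] for a in A)]

# The base object X and two objects of the slice category FinSet/X:
X = {'x', 'y'}
obj1 = ({1, 2, 3}, {1: 'x', 2: 'x', 3: 'y'})      # f  : {1, 2, 3} -> X
obj2 = ({'a', 'b'}, {'a': 'x', 'b': 'y'})         # f' : {a, b}    -> X

for g in slice_morphisms(obj1, obj2):
    print(g)
# Exactly one morphism here, {1: 'a', 2: 'a', 3: 'b'}: each element of the domain
# must be sent to a point of A' lying over the same point of X.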
[ { "math_id": 0, "text": "\\mathcal{A}" }, { "math_id": 1, "text": "\\mathcal{B}" }, { "math_id": 2, "text": "\\mathcal{C}" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\mathcal A \\xrightarrow{\\;\\; S\\;\\;} \\mathcal C\\xleftarrow{\\;\\; T\\;\\;} \\mathcal B" }, { "math_id": 6, "text": "(S \\downarrow T)" }, { "math_id": 7, "text": "(A, B, h)" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "B" }, { "math_id": 10, "text": "h : S(A)\\rightarrow T(B)" }, { "math_id": 11, "text": "(A', B', h')" }, { "math_id": 12, "text": "(f, g)" }, { "math_id": 13, "text": "f : A \\rightarrow A'" }, { "math_id": 14, "text": "g : B \\rightarrow B'" }, { "math_id": 15, "text": "\\mathcal A" }, { "math_id": 16, "text": "\\mathcal B" }, { "math_id": 17, "text": "(f', g') \\circ (f, g)" }, { "math_id": 18, "text": "(f' \\circ f, g' \\circ g)" }, { "math_id": 19, "text": "(\\mathrm{id}_{A}, \\mathrm{id}_{B})" }, { "math_id": 20, "text": "\\mathcal{C} = \\mathcal{A}" }, { "math_id": 21, "text": "\\mathcal{B}=\\textbf{1}" }, { "math_id": 22, "text": "*" }, { "math_id": 23, "text": "T(*) = A_*" }, { "math_id": 24, "text": "A_*" }, { "math_id": 25, "text": "\\mathcal A \\xrightarrow{\\;\\; \\mathrm{id}_{\\mathcal{A}}\\;\\;} \\mathcal A\\xleftarrow{\\;\\; A_*\\;\\;} \\textbf{1}" }, { "math_id": 26, "text": "(\\mathcal{A} \\downarrow A_*)" }, { "math_id": 27, "text": "(A, *, h)" }, { "math_id": 28, "text": "(A, h)" }, { "math_id": 29, "text": "h : A \\rightarrow A_*" }, { "math_id": 30, "text": "h" }, { "math_id": 31, "text": "\\pi_A" }, { "math_id": 32, "text": "(f,\\mathrm{id}_*)" }, { "math_id": 33, "text": "(A, \\pi_A)" }, { "math_id": 34, "text": "(A', \\pi_{A'})" }, { "math_id": 35, "text": "\\mathcal{C} = \\mathcal{B}" }, { "math_id": 36, "text": "\\textbf{1}" }, { "math_id": 37, "text": "\\textbf{1} \\xrightarrow{\\;\\; B_*\\;\\;} \\mathcal B\\xleftarrow{\\;\\; \\mathrm{id}_{\\mathcal{B}}\\;\\;} \\mathcal B" }, { "math_id": 38, "text": "(B_*\\downarrow \\mathcal{B})" }, { "math_id": 39, "text": "B_*=S(*)" }, { "math_id": 40, "text": "B_*" }, { "math_id": 41, "text": "(B, \\iota_B)" }, { "math_id": 42, "text": "\\iota_B : B_* \\rightarrow B" }, { "math_id": 43, "text": "(B', \\iota_{B'})" }, { "math_id": 44, "text": "\\mathcal{A} = \\mathcal{B} = \\mathcal{C}" }, { "math_id": 45, "text": "\\mathcal{C} \\xrightarrow{\\;\\; \\mathrm{id}_{\\mathcal{C}}\\;\\;} \\mathcal C\\xleftarrow{\\;\\; \\mathrm{id}_{\\mathcal{C}}\\;\\;} \\mathcal C" }, { "math_id": 46, "text": "\\mathcal{C}^\\rightarrow" }, { "math_id": 47, "text": "s" }, { "math_id": 48, "text": "(s \\downarrow T)" }, { "math_id": 49, "text": "s\\rightarrow T(G)" }, { "math_id": 50, "text": "G" }, { "math_id": 51, "text": "(S \\downarrow t)" }, { "math_id": 52, "text": "t" }, { "math_id": 53, "text": "S(*)=A" }, { "math_id": 54, "text": "T(*)=B" }, { "math_id": 55, "text": "(A\\downarrow B)" }, { "math_id": 56, "text": "\\mathcal{A} = \\mathcal{B}" }, { "math_id": 57, "text": "f = g" }, { "math_id": 58, "text": "S \\circ \\pi_1" }, { "math_id": 59, "text": "T \\circ \\pi_2" }, { "math_id": 60, "text": "\\pi_1" }, { "math_id": 61, "text": "\\pi_2" }, { "math_id": 62, "text": "\\mathcal{A} \\times \\mathcal{B}" }, { "math_id": 63, "text": "S\\downarrow T \\to \\mathcal A" }, { "math_id": 64, "text": "(A, B, h)\\mapsto A" }, { "math_id": 65, "text": "(f, g)\\mapsto f" }, { "math_id": 66, "text": "S\\downarrow T \\to \\mathcal B" }, { "math_id": 67, "text": "(A, B, h)\\mapsto B" }, { "math_id": 68, 
"text": "(f, g)\\mapsto g" }, { "math_id": 69, "text": "S\\downarrow T\\to {\\mathcal C}^{\\rightarrow}" }, { "math_id": 70, "text": "(A, B, h)\\mapsto h" }, { "math_id": 71, "text": "(f, g)\\mapsto (Sf,Tg)" }, { "math_id": 72, "text": "\\scriptstyle {(\\bull \\downarrow \\mathbf{Set})}" }, { "math_id": 73, "text": "\\scriptstyle {\\bull}" }, { "math_id": 74, "text": "\\scriptstyle {\\mathbf{Set}}" }, { "math_id": 75, "text": "\\scriptstyle {(\\bull \\downarrow \\mathbf{Top})}" }, { "math_id": 76, "text": "R" }, { "math_id": 77, "text": "\\scriptstyle {(R \\downarrow \\mathbf{Ring})}" }, { "math_id": 78, "text": "f: R \\to S" }, { "math_id": 79, "text": "h: S \\to T" }, { "math_id": 80, "text": "\\scriptstyle {(\\mathbf{Set} \\downarrow D)}" }, { "math_id": 81, "text": "\\scriptstyle {D: \\, \\mathbf{Set} \\rightarrow \\mathbf{Set}}" }, { "math_id": 82, "text": "s \\times s" }, { "math_id": 83, "text": "(a, b, f)" }, { "math_id": 84, "text": "a" }, { "math_id": 85, "text": "b" }, { "math_id": 86, "text": "f : a \\rightarrow (b \\times b)" }, { "math_id": 87, "text": "f" }, { "math_id": 88, "text": "b \\times b" }, { "math_id": 89, "text": "(g, h) : (a, b, f) \\rightarrow (a', b', f')" }, { "math_id": 90, "text": "f' \\circ g = D(h) \\circ f" }, { "math_id": 91, "text": "(S \\downarrow A)" }, { "math_id": 92, "text": "(B, \\pi_B)" }, { "math_id": 93, "text": "\\pi_B" }, { "math_id": 94, "text": "T : \\mathcal{B} \\rightarrow \\mathcal{C}" }, { "math_id": 95, "text": "S \\colon \\mathcal{A} \\rightarrow \\mathcal{C}" }, { "math_id": 96, "text": "(S\\downarrow T) \\rightarrow \\mathcal{A}" }, { "math_id": 97, "text": "(S\\downarrow T) \\rightarrow \\mathcal{B}" }, { "math_id": 98, "text": "S : \\mathcal{A} \\rightarrow \\mathcal{C}" }, { "math_id": 99, "text": "F : \\mathcal{C} \\rightarrow \\mathcal{C} \\times \\mathcal{C}" }, { "math_id": 100, "text": "c" }, { "math_id": 101, "text": "(c, c)" }, { "math_id": 102, "text": "(f, f)" }, { "math_id": 103, "text": "(a, b)" }, { "math_id": 104, "text": "F" }, { "math_id": 105, "text": "\\rho : (a, b) \\rightarrow (c, c)" }, { "math_id": 106, "text": "\\rho' : (a, b) \\rightarrow (d, d)" }, { "math_id": 107, "text": "\\sigma : c \\rightarrow d" }, { "math_id": 108, "text": "F(\\sigma) \\circ \\rho = \\rho'" }, { "math_id": 109, "text": "((a, b) \\downarrow F)" }, { "math_id": 110, "text": "F : \\mathcal{C} \\rightarrow \\mathcal{D}" }, { "math_id": 111, "text": "G : \\mathcal{D} \\rightarrow \\mathcal{C}" }, { "math_id": 112, "text": "(F \\downarrow id_\\mathcal{D})" }, { "math_id": 113, "text": "(id_\\mathcal{C} \\downarrow G)" }, { "math_id": 114, "text": "id_\\mathcal{D}" }, { "math_id": 115, "text": "id_\\mathcal{C}" }, { "math_id": 116, "text": "\\mathcal{D}" }, { "math_id": 117, "text": "\\mathcal{C} \\times \\mathcal{D}" }, { "math_id": 118, "text": "S, T" }, { "math_id": 119, "text": "S\\downarrow T" }, { "math_id": 120, "text": "A=B, A'=B', f=g" }, { "math_id": 121, "text": "S\\to T" }, { "math_id": 122, "text": "S(A)\\to T(A)" }, { "math_id": 123, "text": "\\eta:S\\to T" }, { "math_id": 124, "text": "S, T:\\mathcal A \\to \\mathcal C" }, { "math_id": 125, "text": "\\mathcal A \\to (S\\downarrow T)" }, { "math_id": 126, "text": "(A, A, \\eta_A)" }, { "math_id": 127, "text": "f=g" } ]
https://en.wikipedia.org/wiki?curid=840950
8410
Decibel
Logarithmic unit expressing the ratio of physical quantities The decibel (symbol: dB) is a relative unit of measurement equal to one tenth of a bel (B). It expresses the ratio of two values of a power or root-power quantity on a logarithmic scale. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10) (approximately 1.26) or root-power ratio of 10^(1/20) (approximately 1.12). The unit expresses a relative change or an absolute value. In the latter case, the numeric value expresses the ratio of a value to a fixed reference value; when used in this way, the unit symbol is often suffixed with letter codes that indicate the reference value. For example, for the reference value of 1 volt, a common suffix is "V" (e.g., "20 dBV"). Two principal types of scaling of the decibel are in common use. When expressing a power ratio, it is defined as ten times the logarithm with base 10. That is, a change in "power" by a factor of 10 corresponds to a 10 dB change in level. When expressing root-power quantities, a change in "amplitude" by a factor of 10 corresponds to a 20 dB change in level. The decibel scales differ by a factor of two, so that the related power and root-power levels change by the same value in linear systems, where power is proportional to the square of amplitude. The definition of the decibel originated in the measurement of transmission loss and power in telephony of the early 20th century in the Bell System in the United States. The bel was named in honor of Alexander Graham Bell, but the bel is seldom used. Instead, the decibel is used for a wide variety of measurements in science and engineering, most prominently for sound power in acoustics, as well as in electronics and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels. History. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. Until the mid-1920s, the unit for loss was "miles of standard cable" (MSC). 1 MSC corresponded to the loss of power over one mile (approximately 1.6 km) of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and closely matched the smallest attenuation detectable to a listener. A standard telephone cable was "a cable having uniformly distributed resistance of 88 ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire). In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the "Transmission Unit" (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power. The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU the decibel, one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the "bel", in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit.
The naming and early definition of the decibel is described in the NBS Standard's Yearbook of 1931: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Since the earliest days of the telephone, the need for a unit in which to measure the transmission efficiency of telephone facilities has been recognized. The introduction of cable in 1896 afforded a stable basis for a convenient unit and the "mile of standard" cable came into general use shortly thereafter. This unit was employed up to 1923 when a new unit was adopted as being more suitable for modern telephone work. The new transmission unit is widely used among the foreign telephone organizations and recently it was termed the "decibel" at the suggestion of the International Advisory Committee on Long Distance Telephony. The decibel may be defined by the statement that two amounts of power differ by 1 decibel when they are in the ratio of 100.1 and any two amounts of power differ by "N" decibels when they are in the ratio of 10"N"(0.1). The number of transmission units expressing the ratio of any two powers is therefore ten times the common logarithm of that ratio. This method of designating the gain or loss of power in telephone circuits permits direct addition or subtraction of the units expressing the efficiency of different parts of the circuit ... In 1954, J. W. Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name "logit" for "standard magnitudes which combine by multiplication", to contrast with the name "unit" for "standard magnitudes which combine by addition". In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO). The IEC permits the use of the decibel with root-power quantities as well as power and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO. Definition. ISO 80000-3 describes definitions for quantities and units of space and time. The IEC Standard 60027-3:2002 defines the following quantities. The decibel (dB) is one-tenth of a bel: 1 dB = 0.1 B. The bel (B) is &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 ln(10) nepers: 1 B = &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 ln(10) Np. The neper is the change in the level of a root-power quantity when the root-power quantity changes by a factor of "e", that is 1 Np = ln(e) = 1, thereby relating all of the units as nondimensional natural "log" of root-power-quantity ratios, =  = . Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity. Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two root-power quantities of √10:1. Two signals whose levels differ by one decibel have a power ratio of 101/10, which is approximately , and an amplitude (root-power quantity) ratio of 101/20 (). 
The bel is rarely used either without a prefix or with SI unit prefixes other than "deci"; it is preferred, for example, to use "hundredths of a decibel" rather than "millibels". Thus, five one-thousandths of a bel would normally be written 0.05 dB, and not 5 mB. The method of expressing a ratio as a level in decibels depends on whether the measured property is a "power quantity" or a "root-power quantity"; see "Power, root-power, and field quantities" for details. Power quantities. When referring to measurements of "power" quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to reference value. Thus, the ratio of "P" (measured power) to "P"0 (reference power) is represented by "L""P", that ratio expressed in decibels, which is calculated using the formula: formula_0 The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). "P" and "P"0 must measure the same type of quantity, and have the same units before calculating the ratio. If "P" = "P"0 in the above equation, then "L""P" = 0. If "P" is greater than "P"0 then "L""P" is positive; if "P" is less than "P"0 then "L""P" is negative. Rearranging the above equation gives the following formula for "P" in terms of "P"0 and "L""P": formula_1 Root-power (field) quantities. When referring to measurements of root-power quantities, it is usual to consider the ratio of the squares of "F" (measured) and "F"0 (reference). This is because the definitions were originally formulated to give the same value for relative ratios for both power and root-power quantities. Thus, the following definition is used: formula_2 The formula may be rearranged to give formula_3 Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. Taking voltage as an example, this leads to the equation for power gain level "L""G": formula_4 where "V"out is the root-mean-square (rms) output voltage, "V"in is the rms input voltage. A similar formula holds for current. The term "root-power quantity" is introduced by ISO Standard 80000-1:2009 as a substitute of "field quantity". The term "field quantity" is deprecated by that standard and "root-power" is used throughout this article. Relationship between power and root-power levels. Although power and root-power quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make "changes" in the respective levels match under restricted conditions such as when the medium is linear and the "same" waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship formula_5 holding. In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes. 
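The two definitions above, and their consistency when power is proportional to the square of amplitude, can be checked numerically. The following is a minimal Python sketch; the function names are illustrative and not from any standard library.

```python
import math

def power_ratio_to_db(p, p0):
    """Level of power p relative to reference p0: L_P = 10*log10(P/P0) dB."""
    return 10 * math.log10(p / p0)

def root_power_ratio_to_db(f, f0):
    """Level of a root-power (field) quantity f relative to f0: L_F = 20*log10(F/F0) dB."""
    return 20 * math.log10(f / f0)

def db_to_power_ratio(level_db):
    """Invert the power-level definition: P/P0 = 10**(L_P/10)."""
    return 10 ** (level_db / 10)

# When power is proportional to the square of amplitude, both forms give the same level:
amplitude_ratio = 31.62                              # roughly sqrt(1000)
power_ratio = amplitude_ratio ** 2
print(power_ratio_to_db(power_ratio, 1.0))           # ~30.0 dB
print(root_power_ratio_to_db(amplitude_ratio, 1.0))  # ~30.0 dB

# Commonly quoted approximations: 3 dB is close to a power factor of 2, 10 dB is exactly 10.
print(db_to_power_ratio(3.0))                        # ~1.9953
print(db_to_power_ratio(10.0))                       # 10.0
```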
For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities "P"0 and "F"0 need not be related), or equivalently, formula_6 must hold to allow the power level difference to be equal to the root-power level difference from power "P"1 and "F"1 to "P"2 and "F"2. An example might be an amplifier with unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. Frequency-dependent impedances may be analyzed by considering the quantities power spectral density and the associated root-power quantities via the Fourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently. Conversions. Since logarithm differences measured in these units often represent power ratios and root-power ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic root-power (amplitude) ratio. Examples. The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarly dBm for a 1 mW reference point. (31.62 V / 1 V)2 ≈ 1 kW / 1 W, illustrating the consequence from the definitions above that "L""G" has the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared. A change in power ratio by a factor of 10 corresponds to a change in level of 10 dB. A change in power ratio by a factor of 2 or is approximately a change of 3 dB. More precisely, the change is ± dB, but this is almost universally rounded to 3 dB in technical writing. This implies an increase in voltage by a factor of . Likewise, a doubling or halving of the voltage, corresponding to a quadrupling or quartering of the power, is commonly described as 6 dB rather than ± dB. Should it be necessary to make the distinction, the number of decibels is written with additional significant figures. 3.000 dB corresponds to a power ratio of 103/10, or , about 0.24% different from exactly 2, and a voltage ratio of , 0.12% different from exactly √2. Similarly, an increase of 6.000 dB corresponds to the power ratio is 106/10 ≈ , about 0.5% different from 4. Properties. The decibel is useful for representing large ratios and for simplifying representation of multiplicative effects, such as attenuation from multiple sources along a signal chain. Its application in systems with additive effects is less intuitive, such as in the combined sound pressure level of two machines operating together. Care is also necessary with decibels directly in fractions and with the units of multiplicative operations. Reporting large ratios. The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See "Bode plot" and "Semi-log plot". For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing". Representation of multiplication operations. 
Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiply the amplification factors; that is, log("A" × "B" × "C")= log("A") + log("B") + log("C"). Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately 2× power gain, and 10 dB is 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication. For example: However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret. Quantities in decibels are not necessarily additive, thus being "of unacceptable form for use in dimensional analysis". Thus, units require special care in decibel operations. Take, for example, carrier-to-noise-density ratio "C"/"N"0 (in hertz), involving carrier power "C" (in watts) and noise power spectral density "N"0 (in W/Hz). Expressed in decibels, this ratio would be a subtraction ("C"/"N"0)dB = "C"dB − "N"0dB. However, the linear-scale units still simplify in the implied fraction, so that the results would be expressed in dB-Hz. Representation of addition operations. According to Mitschke, "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." However, for the same reason that humans excel at additive operation over multiplication, decibels are awkward in inherently additive operations:if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!; suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA.; in order to find a representative value of the sound level in a room a number of measurements are taken at different positions within the room, and an average value is calculated. [...] Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB. Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return. For example, where operations on decibels are logarithmic addition/subtraction and logarithmic multiplication/division, while operations on the linear scale are the usual operations: formula_11 formula_12 The logarithmic mean is obtained from the logarithmic sum by subtracting formula_13, since logarithmic division is linear subtraction. Fractions. 
Attenuation constants, in topics such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. In this case, dB/m represents decibel per meter, dB/mi represents decibel per mile, for example. These quantities are to be manipulated obeying the rules of dimensional analysis, e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km. Uses. Perception. The human perception of the intensity of sound and light more nearly approximates the logarithm of intensity rather than a linear relationship (see Weber–Fechner law), making the dB scale a useful measure. Acoustics. The decibel is commonly used in acoustics as a unit of sound power level or sound pressure level. The reference pressure for sound in air is set at the typical threshold of perception of an average human and there are common comparisons used to illustrate different levels of sound pressure. As sound pressure is a root-power quantity, the appropriate version of the unit definition is used: formula_14 where "p"rms is the root mean square of the measured sound pressure and "p"ref is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water. Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value. Sound intensity is proportional to the square of sound pressure. Therefore the sound intensity level can also be defined as: formula_15 The human ear has a large dynamic range in sound reception. The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is equal to or greater than 1 trillion (1012). Such large measurement ranges are conveniently expressed in logarithmic scale: the base-10 logarithm of 1012 is 12, which is expressed as a sound intensity level of 120 dB re 1 pW/m2. The reference values of I and p in air have been chosen such that this corresponds approximately to a sound pressure level of 120 dB re 20 μPa. Since the human ear is not equally sensitive to all sound frequencies, the acoustic power spectrum is modified by frequency weighting (A-weighting being the most common standard) to get the weighted acoustic power before converting to a sound level or noise level in decibels. Telephony. The decibel is used in telephony and audio. Similarly to the use in acoustics, a frequency weighted power is often used. For audio noise measurements in electrical circuits, the weightings are called psophometric weightings. Electronics. In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components. Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space, waveguide, coaxial cable, fiber optics, etc.) using a link budget. The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. For example, it can be combined with "m" for "milliwatt" to produce the "dBm". A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW). In professional audio specifications, a popular unit is the dBu. 
This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, or √1 mW×600 Ω≈ 0.775 VRMS. When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm are identical. Optics. In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities. In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B. Video and digital imaging. In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in a CCD imager where response voltage is linear in intensity. Thus, a camera signal-to-noise ratio or dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest. Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear. However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. Most commonly, quantities called "dynamic range" or "signal-to-noise" (of the camera) would be specified in 20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value. Photographers typically use an alternative base-2 log unit, the stop, to describe light intensity ratios or dynamic range. Suffixes and reference values. Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt. In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative. This form of attaching suffixes to dB is widespread in practice, albeit being against the rules promulgated by standards bodies (ISO and IEC), given the "unacceptability of attaching information to units" and the "unacceptability of mixing information with units". The IEC 60027-3 standard recommends the following format: "L""x" (re "x"ref) or as "L""x"/"x"ref, where "x" is the quantity symbol and "x"ref is the value of the reference quantity, e.g., "L""E" (re 1 μV/m) = 20 dB or "L""E"/(1 μV/m) = 20 dB for the electric field strength "E" relative to 1 μV/m reference value. If the measurement result 20 dB is presented separately, it can be specified using the information in parentheses, which is then part of the surrounding text and not a part of the unit: 20 dB (re: 1 μV/m) or 20 dB (1 μV/m). 
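The dBu and dBm reference values described above can be reproduced with a short calculation. The sketch below (Python, with illustrative function names) assumes only the 600-ohm reference impedance and 1 mW reference power quoted in the text.

```python
import math

R_REF = 600.0      # ohms, historical telephone reference impedance
P_REF = 1e-3       # watts (1 mW, the dBm reference)

# RMS voltage that dissipates 1 mW in 600 ohms: V = sqrt(P * R)
v_dbu_ref = math.sqrt(P_REF * R_REF)
print(v_dbu_ref)                    # ~0.7746 V

def dbu(v_rms):
    """Voltage level in dBu: 20*log10 of the ratio to ~0.7746 V RMS."""
    return 20 * math.log10(v_rms / v_dbu_ref)

def dbm(power_watts):
    """Power level in dBm: 10*log10 of the ratio to 1 mW."""
    return 10 * math.log10(power_watts / P_REF)

# 1 V RMS is about 2.218 dBu, as quoted in the article.
print(dbu(1.0))                     # ~2.218

# Into a 600-ohm load dBu and dBm coincide, since P = V^2 / R.
v = 1.0
print(dbm(v ** 2 / R_REF))          # also ~2.218
```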
Outside of documents adhering to SI units, the practice is very common as illustrated by the following examples. There is no general rule, with various discipline-specific practices. Sometimes the suffix is a unit symbol ("W","K","m"), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dB‑Hz", or with a space, as in "dB HL", or enclosed in parentheses, as in "dB(sm)", or with no intervening character, as in "dBm" (which is non-compliant with international standards). List of suffixes. Voltage. Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above. Acoustics. Probably the most common usage of "decibels" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing: The measures of pressure (a root-power quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10. Audio electronics. See also dBV and dBu above. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nL_P = \\frac{1}{2} \\ln\\!\\left(\\frac{P}{P_0}\\right)\\,\\text{Np} = 10 \\log_{10}\\!\\left(\\frac{P}{P_0}\\right)\\,\\text{dB}.\n" }, { "math_id": 1, "text": "\nP = 10^\\frac{L_P}{10\\,\\text{dB}} P_0.\n" }, { "math_id": 2, "text": "\nL_F = \\ln\\!\\left(\\frac{F}{F_0}\\right)\\,\\text{Np} = 10 \\log_{10}\\!\\left(\\frac{F^2}{F_0^2}\\right)\\,\\text{dB} = 20 \\log_{10} \\left(\\frac{F}{F_0}\\right)\\,\\text{dB}.\n" }, { "math_id": 3, "text": "\nF = 10^\\frac{L_F}{20\\,\\text{dB}} F_0.\n" }, { "math_id": 4, "text": "\nL_G = 20 \\log_{10}\\!\\left (\\frac{V_\\text{out}}{V_\\text{in}}\\right)\\,\\text{dB},\n" }, { "math_id": 5, "text": " \\frac{P(t)}{P_0} = \\left(\\frac{F(t)}{F_0}\\right)^2 " }, { "math_id": 6, "text": " \\frac{P_2}{P_1} = \\left(\\frac{F_2}{F_1}\\right)^2 " }, { "math_id": 7, "text": "\nL_G = 10 \\log_{10} \\left(\\frac{1\\,000\\,\\text{W}}{1\\,\\text{W}}\\right)\\,\\text{dB} = 30\\,\\text{dB}.\n" }, { "math_id": 8, "text": "\nL_G = 20 \\log_{10} \\left(\\frac{31.62\\,\\text{V}}{1\\,\\text{V}}\\right)\\,\\text{dB} = 30\\,\\text{dB}.\n" }, { "math_id": 9, "text": "\nL_G = 10 \\log_{10} \\left(\\frac{10\\text{ W}}{0.001\\text{ W}}\\right) \\text{ dB} = 40 \\text{ dB}.\n" }, { "math_id": 10, "text": "\nG = 10^\\frac{3}{10} \\times 1 = 1.995\\,26\\ldots \\approx 2.\n" }, { "math_id": 11, "text": "87\\,\\text{dBA} \\ominus 83\\,\\text{dBA} = 10 \\cdot \\log_{10}\\bigl(10^{87/10} - 10^{83/10}\\bigr)\\,\\text{dBA} \\approx 84.8\\,\\text{dBA}" }, { "math_id": 12, "text": "\n\\begin{align}\nM_\\text{lm}(70, 90) &= \\left(70\\,\\text{dBA} + 90\\,\\text{dBA}\\right)/2 \\\\\n&= 10 \\cdot \\log_{10}\\left(\\bigl(10^{70/10} + 10^{90/10}\\bigr)/2\\right)\\,\\text{dBA} \\\\\n&= 10 \\cdot \\left(\\log_{10}\\bigl(10^{70/10} + 10^{90/10}\\bigr) - \\log_{10} 2\\right)\\,\\text{dBA}\n\\approx 87\\,\\text{dBA}.\n\\end{align}\n" }, { "math_id": 13, "text": "10\\log_{10} 2" }, { "math_id": 14, "text": "\nL_p = 20 \\log_{10}\\!\\left(\\frac{p_{\\text{rms}}}{p_{\\text{ref}}}\\right)\\,\\text{dB},\n" }, { "math_id": 15, "text": "\nL_p = 10 \\log_{10}\\!\\left(\\frac{I}{I_{\\text{ref}}}\\right)\\,\\text{dB},\n" }, { "math_id": 16, "text": "V = \\sqrt{600 \\, \\Omega \\cdot 0.001\\,\\text{W}} \\approx 0.7746\\,\\text{V}" }, { "math_id": 17, "text": "20\\cdot\\log_{10}\\left ( \\frac{1\\,V_\\text{RMS}}{\\sqrt{0.6}\\,V} \\right )=2.218\\,\\text{dBu}." }, { "math_id": 18, "text": "V = \\sqrt{R \\cdot P}" }, { "math_id": 19, "text": "R" }, { "math_id": 20, "text": "P" }, { "math_id": 21, "text": "L_\\text{ov} = 10\\log_{10}\\left ( \\frac{P}{P_0} \\right )\\ [\\text{dBov}]," }, { "math_id": 22, "text": "P_0=1.0" }, { "math_id": 23, "text": "x_\\text{over}" }, { "math_id": 24, "text": "L= -3.01\\ \\text{dBov}" }, { "math_id": 25, "text": "\\sqrt{0.6}\\,\\text{V}\\, \\approx 0.7746\\,\\text{V}\\, \\approx -2.218\\,\\text{dBV}" } ]
https://en.wikipedia.org/wiki?curid=8410
8410162
Lexis ratio
The Lexis ratio is used in statistics as a measure which seeks to evaluate differences between the statistical properties of random mechanisms where the outcome is two-valued — for example "success" or "failure", "win" or "lose". The idea is that the probability of success might vary between different sets of trials in different situations. This ratio is not much used currently, having been largely replaced by the use of the chi-squared test in testing for the homogeneity of samples. This measure compares the between-set variance of the sample proportions (evaluated for each set) with what the variance should be if there were no difference in the true proportions of success across the different sets. Thus the measure is used to evaluate how data compares to a fixed-probability-of-success Bernoulli distribution. The Lexis ratio is sometimes denoted "L" or "Q", where formula_0 Here formula_1 is the (weighted) sample variance derived from the observed proportions of success in sets in "Lexis trials" and formula_2 is the variance computed from the expected Bernoulli distribution on the basis of the overall average proportion of success. Trials where "L" falls significantly above or below 1 are known as "supernormal" and "subnormal," respectively. This ratio ("Q") is a measure that can be used to distinguish between three types of variation in sampling for attributes: Bernoullian, Lexian and Poissonian. Definition. Let there be "k" samples of sizes "n"1, "n"2, "n"3, ... , "n"k, and let these samples have proportions "p"1, "p"2, "p"3, ..., "p"k of the attribute being examined, respectively. Then the Lexis ratio is formula_3 If the Lexis ratio is significantly below 1, the sampling is referred to as Poissonian (or subnormal); if it is equal to 1, the sampling is referred to as Bernoullian (or normal); and if it is above 1, it is referred to as Lexian (or supranormal). Chuprov showed in 1922 that in the case of statistical homogeneity formula_4 and formula_5 where "E"() is the expectation and "var"() is the variance. The formula for the variance is approximate and holds only for large values of "n". An alternative definition is formula_6 where formula_7 is the (weighted) sample variance derived from the observed proportions of success in sets in "Lexis trials" and formula_2 is the variance computed from the expected Bernoulli distribution on the basis of the overall average proportion of success. Lexis variation. A closely related concept is the Lexis variation. Let "k" samples each of size "n" be drawn at random. Let the probability of success ("p") be constant and let the actual probability of success in the "k"th sample be "p"1, "p"2, ... , "p"k. The average probability of success ("p") is formula_8 The variance in the number of successes is formula_9 where var( "p"i ) is the variance of the "p"i. If all the "p"i are equal, the sampling is said to be Bernoullian; where the "p"i differ, the sampling is said to be Lexian and the dispersion is said to be supranormal. Lexian sampling occurs in sampling from non-homogeneous strata. History. Wilhelm Lexis introduced this statistic to test the then commonly held assumption that sampling data could be regarded as homogeneous. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
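A minimal numeric sketch of the definition above (Python). The counts used here are invented purely for illustration, and the overall average proportion of success is taken as the pooled proportion across all sets, which is one reasonable reading of the definition.

```python
def lexis_ratio(successes, sizes):
    """Q = sum(n_i * (p_i - p)^2) / ((k - 1) * p * (1 - p)),
    with p the pooled (overall) proportion of successes."""
    k = len(sizes)
    p_i = [s / n for s, n in zip(successes, sizes)]
    p = sum(successes) / sum(sizes)              # overall average proportion of success
    numerator = sum(n * (pi - p) ** 2 for n, pi in zip(sizes, p_i))
    return numerator / ((k - 1) * p * (1 - p))

# Invented example: four sets of two-valued trials.
sizes = [100, 100, 100, 100]
successes = [48, 52, 50, 47]          # fairly homogeneous -> Q near or below 1
print(lexis_ratio(successes, sizes))

successes = [30, 70, 45, 60]          # heterogeneous -> Q well above 1 (Lexian / supranormal)
print(lexis_ratio(successes, sizes))
```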
[ { "math_id": 0, "text": "L^2 =Q^2 = \\frac{s^2}{\\sigma_0^2}." }, { "math_id": 1, "text": "s^2 " }, { "math_id": 2, "text": "\\sigma_0^2" }, { "math_id": 3, "text": " Q = \\frac{ \\sum{ n_i ( p_i - p )^2 } }{ ( k - 1 ) p( 1 - p ) } " }, { "math_id": 4, "text": " E( Q ) = 1 " }, { "math_id": 5, "text": " var( Q ) = \\frac{ 2 }{ n - 1 } " }, { "math_id": 6, "text": " Q = \\frac{ s^2 }{ \\sigma_0^2 }" }, { "math_id": 7, "text": "s^2 \\," }, { "math_id": 8, "text": " p = \\frac{ 1 }{ k } \\sum{ p_i } " }, { "math_id": 9, "text": " var(successes) = n p ( 1 - p ) + n ( n - 1 ) var( p_i ) " } ]
https://en.wikipedia.org/wiki?curid=8410162
8410911
Mian–Chowla sequence
Sequence of numbers with distinct sums In mathematics, the Mian–Chowla sequence is an integer sequence defined recursively in the following way. The sequence starts with formula_0 Then for formula_1, formula_2 is the smallest integer such that every pairwise sum formula_3 is distinct, for all formula_4 and formula_5 less than or equal to formula_6. Properties. Initially, with formula_7, there is only one pairwise sum, 1 + 1 = 2. The next term in the sequence, formula_8, is 2 since the pairwise sums then are 2, 3 and 4, i.e., they are distinct. Then, formula_9 can't be 3 because there would be the non-distinct pairwise sums 1 + 3 = 2 + 2 = 4. We find then that formula_10, with the pairwise sums being 2, 3, 4, 5, 6 and 8. The sequence thus begins 1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148, 182, 204, 252, 290, 361, 401, 475, ... (sequence in the OEIS). Similar sequences. If we define formula_11, the resulting sequence is the same except each term is one less (that is, 0, 1, 3, 7, 12, 20, 30, 44, 65, 80, 96, ... OEIS: ). History. The sequence was invented by Abdul Majid Mian and Sarvadaman Chowla.
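The greedy rule in the definition translates directly into a short program. The following Python sketch (with illustrative names) regenerates the opening terms of the sequence by tracking the set of pairwise sums seen so far.

```python
def mian_chowla(count):
    """Greedily build the Mian-Chowla sequence: each new term is the smallest
    integer that keeps all pairwise sums a_i + a_j (i <= j) distinct."""
    seq = [1]
    sums = {2}                      # pairwise sums seen so far (1 + 1)
    candidate = 1
    while len(seq) < count:
        candidate += 1
        new_sums = {candidate + a for a in seq} | {2 * candidate}
        if sums.isdisjoint(new_sums):
            seq.append(candidate)
            sums |= new_sums
    return seq

print(mian_chowla(10))   # [1, 2, 4, 8, 13, 21, 31, 45, 66, 81]
```

Once a candidate has been rejected it can never become admissible again, because the set of forbidden sums only grows, so the candidate counter never needs to be reset.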
[ { "math_id": 0, "text": "a_1 = 1." }, { "math_id": 1, "text": " n>1" }, { "math_id": 2, "text": "a_n" }, { "math_id": 3, "text": "a_i + a_j" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "j" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "a_1" }, { "math_id": 8, "text": "a_2" }, { "math_id": 9, "text": "a_3" }, { "math_id": 10, "text": "a_3 = 4" }, { "math_id": 11, "text": "a_1 = 0" } ]
https://en.wikipedia.org/wiki?curid=8410911
8412074
Acid–base homeostasis
Process by which the human body regulates pH Acid–base homeostasis is the homeostatic regulation of the pH of the body's extracellular fluid (ECF). The proper balance between the acids and bases (i.e. the pH) in the ECF is crucial for the normal physiology of the body—and for cellular metabolism. The pH of the intracellular fluid and the extracellular fluid need to be maintained at a constant level. The three dimensional structures of many extracellular proteins, such as the plasma proteins and membrane proteins of the body's cells, are very sensitive to the extracellular pH. Stringent mechanisms therefore exist to maintain the pH within very narrow limits. Outside the acceptable range of pH, proteins are denatured (i.e. their 3D structure is disrupted), causing enzymes and ion channels (among others) to malfunction. An acid–base imbalance is known as acidemia when the pH is acidic, or alkalemia when the pH is alkaline. Lines of defense. In humans and many other animals, acid–base homeostasis is maintained by multiple mechanisms involved in three lines of defense: The second and third lines of defense operate by making changes to the buffers, each of which consists of two components: a weak acid and its conjugate base. It is the ratio concentration of the weak acid to its conjugate base that determines the pH of the solution. Thus, by manipulating firstly the concentration of the weak acid, and secondly that of its conjugate base, the pH of the extracellular fluid (ECF) can be adjusted very accurately to the correct value. The bicarbonate buffer, consisting of a mixture of carbonic acid (H2CO3) and a bicarbonate (HCO3-) salt in solution, is the most abundant buffer in the extracellular fluid, and it is also the buffer whose acid-to-base ratio can be changed very easily and rapidly. Acid–base balance. The pH of the extracellular fluid, including the blood plasma, is normally tightly regulated between 7.32 and 7.42 by the chemical buffers, the respiratory system, and the renal system. The normal pH in the fetus differs from that in the adult. In the fetus, the pH in the umbilical vein pH is normally 7.25 to 7.45 and that in the umbilical artery is normally 7.18 to 7.38. Aqueous buffer solutions will react with strong acids or strong bases by absorbing excess H+ ions, or OH- ions, replacing the strong acids and bases with weak acids and weak bases. This has the effect of damping the effect of pH changes, or reducing the pH change that would otherwise have occurred. But buffers cannot correct abnormal pH levels in a solution, be that solution in a test tube or in the extracellular fluid. Buffers typically consist of a pair of compounds in solution, one of which is a weak acid and the other a weak base. The most abundant buffer in the ECF consists of a solution of carbonic acid (H2CO3), and the bicarbonate (HCO3-) salt of, usually, sodium (Na+). Thus, when there is an excess of OH- ions in the solution carbonic acid "partially" neutralizes them by forming H2O and bicarbonate (HCO3-) ions. Similarly an excess of H+ ions is "partially" neutralized by the bicarbonate component of the buffer solution to form carbonic acid (H2CO3), which, because it is a weak acid, remains largely in the undissociated form, releasing far fewer H+ ions into the solution than the original strong acid would have done. The pH of a buffer solution depends solely on the "ratio" of the molar concentrations of the weak acid to the weak base. 
The higher the concentration of the weak acid in the solution (compared to the weak base) the lower the resulting pH of the solution. Similarly, if the weak base predominates the higher the resulting pH. This principle is exploited to "regulate" the pH of the extracellular fluids (rather than just "buffering" the pH). For the carbonic acid-bicarbonate buffer, a molar ratio of weak acid to weak base of 1:20 produces a pH of 7.4; and vice versa—when the pH of the extracellular fluids is 7.4 then the ratio of carbonic acid to bicarbonate ions in that fluid is 1:20. Henderson–Hasselbalch equation. The Henderson–Hasselbalch equation, when applied to the carbonic acid-bicarbonate buffer system in the extracellular fluids, states that: formula_0 where: However, since the carbonic acid concentration is directly proportional to the partial pressure of carbon dioxide (formula_1) in the extracellular fluid, the equation can be rewritten as follows: formula_2 where: The pH of the extracellular fluids can thus be controlled by the regulation of formula_1 and the other metabolic acids. Homeostatic mechanisms. Homeostatic control can change the "P"CO2 and hence the pH of the arterial plasma within a few seconds. The partial pressure of carbon dioxide in the arterial blood is monitored by the central chemoreceptors of the medulla oblongata. These chemoreceptors are sensitive to the levels of carbon dioxide and pH in the cerebrospinal fluid. The central chemoreceptors send their information to the respiratory centers in the medulla oblongata and pons of the brainstem. The respiratory centres then determine the average rate of ventilation of the alveoli of the lungs, to keep the "P"CO2 in the arterial blood constant. The respiratory center does so via motor neurons which activate the muscles of respiration (in particular, the diaphragm). A rise in the "P"CO2 in the arterial blood plasma above reflexly causes an increase in the rate and depth of breathing. Normal breathing is resumed when the partial pressure of carbon dioxide has returned to 5.3 kPa. The converse happens if the partial pressure of carbon dioxide falls below the normal range. Breathing may be temporally halted, or slowed down to allow carbon dioxide to accumulate once more in the lungs and arterial blood. The sensor for the plasma HCO concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. The metabolism of these cells produces CO2, which is rapidly converted to H+ and HCO through the action of carbonic anhydrase. When the extracellular fluids tend towards acidity, the renal tubular cells secrete the H+ ions into the tubular fluid from where they exit the body via the urine. The HCO ions are simultaneously secreted into the blood plasma, thus raising the bicarbonate ion concentration in the plasma, lowering the carbonic acid/bicarbonate ion ratio, and consequently raising the pH of the plasma. The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions into the plasma. These combine with the bicarbonate ions in the plasma to form carbonic acid (H+ + HCO formula_3 H2CO3), thus raising the carbonic acid:bicarbonate ratio in the extracellular fluids, and returning its pH to normal. In general, metabolism produces more waste acids than bases. 
Urine produced is generally acidic and is partially neutralized by the ammonia (NH3) that is excreted into the urine when glutamate and glutamine (carriers of excess, no longer needed, amino groups) are deaminated by the distal renal tubular epithelial cells. Thus some of the "acid content" of the urine resides in the resulting ammonium ion (NH4+) content of the urine, though this has no effect on pH homeostasis of the extracellular fluids. Imbalance. Acid–base imbalance occurs when a significant insult causes the blood pH to shift out of the normal range (7.32 to 7.42). An abnormally low pH in the extracellular fluid is called an "acidemia" and an abnormally high pH is called an "alkalemia". "Acidemia" and "alkalemia" unambiguously refer to the actual change in the pH of the extracellular fluid (ECF). Two other similar sounding terms are "acidosis" and "alkalosis". They refer to the customary effect of a component, respiratory or metabolic. "Acidosis" would cause an "acidemia" on its own (i.e. if left "uncompensated" by an alkalosis). Similarly, an "alkalosis" would cause an "alkalemia" on its own. In medical terminology, the terms "acidosis" and "alkalosis" should always be qualified by an adjective to indicate the etiology of the disturbance: "respiratory" (indicating a change in the partial pressure of carbon dioxide), or "metabolic" (indicating a change in the Base Excess of the ECF). There are therefore four different acid-base problems: metabolic acidosis, respiratory acidosis, metabolic alkalosis, and respiratory alkalosis. One or a combination of these conditions may occur simultaneously. For instance, a "metabolic acidosis" (as in uncontrolled diabetes mellitus) is almost always partially compensated by a "respiratory alkalosis" (hyperventilation). Similarly, a "respiratory acidosis" can be completely or partially corrected by a "metabolic alkalosis". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
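As a rough numerical illustration of the Henderson–Hasselbalch form given above: the sketch below (Python) assumes the conventional clinical units of mmol/L for bicarbonate and mmHg for the partial pressure of carbon dioxide, which is what the 0.0307 solubility factor presupposes, and the example values are typical textbook figures rather than values taken from this article.

```python
import math

def plasma_ph(bicarbonate_mmol_per_l, pco2_mmhg):
    """Henderson-Hasselbalch for the bicarbonate buffer:
    pH = 6.1 + log10([HCO3-] / (0.0307 * PCO2))."""
    return 6.1 + math.log10(bicarbonate_mmol_per_l / (0.0307 * pco2_mmhg))

# Typical arterial values: [HCO3-] ~ 24 mmol/L, PCO2 ~ 40 mmHg -> pH close to 7.4
print(round(plasma_ph(24, 40), 2))

# A lower bicarbonate with unchanged PCO2 (a metabolic acidosis) lowers the pH:
print(round(plasma_ph(15, 40), 2))
```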
[ { "math_id": 0, "text": " \\mathrm{pH} = \\mathrm{p}K_{\\mathrm{a}~\\mathrm{H}_2\\mathrm{CO}_3} + \\log_{10} \\left ( \\frac{[\\mathrm{HCO}_3^-]}{[\\mathrm{H}_2\\mathrm{CO}_3]} \\right )," }, { "math_id": 1, "text": "P_{{\\mathrm{CO}}_2}" }, { "math_id": 2, "text": " \\mathrm{pH} = 6.1 + \\log_{10} \\left ( \\frac{[\\mathrm{HCO}_3^-]}{0.0307 \\times P_{\\mathrm{CO}_2}} \\right )," }, { "math_id": 3, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=8412074
8415223
Harrod–Johnson diagram
Geometric representation used in economics In two-sector macroeconomic models, the Harrod–Johnson diagram, occasionally referred to as the Samuelson-Harrod-Johnson diagram, is a way of visualizing the relationship between the output price ratios, the input price ratios, and the endowment ratio of the two goods. Often the goods are a consumption and investment good, and this diagram shows what will happen to the price ratio if the endowment changes. The diagram juxtaposes a graph which has input price ratios as its horizontal axis, endowment ratios as its positive vertical axis, and output price ratios as its negative vertical axis. The diagram is named after economists Roy F. Harrod and Harry G. Johnson; the Samuelson-Harrod-Johnson name is in reference to economist Paul Samuelson. Economist Hirofumi Uzawa, comparing the Harrod-Johnson diagram to Abba P. Lerner's earlier factor-price equalization theorem, considered Lerner's to be more accurate, as well as more beautiful. Derivation. If good 1 is an investment good governed by the equation formula_1 and good 2 is a consumption good governed by the equation formula_2, then rental and wage rates can be calculated by optimizing a representative firm's profit function, giving formula_3 for the rental rate of capital, "r", and formula_4 for the wage rate of labor, "w", so the input price ratio, formula_0, is formula_5 for formula_6 Normalizing this equation by letting formula_7, and solving for formula_8 provides the formulas to be graphed in the first quadrant. On the other hand, normalizing the equation formula_9 (or formula_10, which is equivalent), and solving for the price ratio, formula_11 provides the formula which is to be graphed in the fourth quadrant. Graphing these three functions together shows the relationship. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
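The derivation above leaves the production functions general. As a purely illustrative sketch, assuming Cobb–Douglas technologies of the form F_i(K, L) = K^a_i · L^(1 − a_i) (an assumption not made in the article, with invented parameter values), the capital–labour ratios k_i and the output price ratio implied by a given wage–rental ratio can be computed as follows (Python):

```python
def harrod_johnson_point(omega, a1=0.6, a2=0.3):
    """For assumed Cobb-Douglas sectors F_i(K, L) = K**a_i * L**(1 - a_i),
    the first-order conditions give omega = w/r = ((1 - a_i)/a_i) * k_i in each sector,
    so k_i = (a_i / (1 - a_i)) * omega; the price ratio follows from equal
    value marginal products of capital across sectors."""
    k1 = a1 / (1 - a1) * omega
    k2 = a2 / (1 - a2) * omega
    # p1 * dF1/dK = p2 * dF2/dK  =>  p1/p2 = (a2 * k2**(a2 - 1)) / (a1 * k1**(a1 - 1))
    p1_over_p2 = (a2 * k2 ** (a2 - 1)) / (a1 * k1 ** (a1 - 1))
    return k1, k2, p1_over_p2

# Sample a few points of the curves that the diagram plots against omega.
for omega in (0.5, 1.0, 2.0):
    k1, k2, p = harrod_johnson_point(omega)
    print(f"omega={omega}: k1={k1:.3f}, k2={k2:.3f}, p1/p2={p:.3f}")
```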
[ { "math_id": 0, "text": "\\omega" }, { "math_id": 1, "text": "Y_1=F_1(K,L)\\," }, { "math_id": 2, "text": "Y_2=F_2(K,L)\\," }, { "math_id": 3, "text": "p_1 D_K[F_1(K,L)]=r=p_2 D_K[F_2(K,L)]\\," }, { "math_id": 4, "text": "p_1 D_L[F_1(K,L)]=w=p_2 D_L[F_2(K,L)]\\," }, { "math_id": 5, "text": "\\omega=w/r=\\frac{p_i D_L[F_i(K,L)]}{p_i D_K[F_i(K,L)]}\\," }, { "math_id": 6, "text": "i=\\{1,2\\}." }, { "math_id": 7, "text": "k_i = K_i/L_i" }, { "math_id": 8, "text": "k_i," }, { "math_id": 9, "text": "p_1 D_K[F_1(K,L)]=p_2 D_K[F_2(K,L)]" }, { "math_id": 10, "text": "p_1 D_L[F_1(K,L)]=p_2 D_L[F_2(K,L)]\\," }, { "math_id": 11, "text": "p_1/p_2," } ]
https://en.wikipedia.org/wiki?curid=8415223
8416103
Evolvability (computer science)
The term evolvability is used for a recent framework of computational learning introduced by Leslie Valiant in his paper of the same name and described below. The aim of this theory is to model biological evolution and categorize which types of mechanisms are evolvable. Evolution is an extension of PAC learning and learning from statistical queries. General framework. Let formula_0 and formula_1 be collections of functions on formula_2 variables. Given an "ideal function" formula_3, the goal is to find by local search a "representation" formula_4 that closely approximates formula_5. This closeness is measured by the "performance" formula_6 of formula_7 with respect to formula_5. As is the case in the biological world, there is a difference between genotype and phenotype. In general, there can be multiple representations (genotypes) that correspond to the same function (phenotype). That is, for some formula_8, with formula_9, still formula_10 for all formula_11. However, this need not be the case. The goal then, is to find a representation that closely matches the phenotype of the ideal function, and the spirit of the local search is to allow only small changes in the genotype. Let the "neighborhood" formula_12 of a representation formula_7 be the set of possible mutations of formula_7. For simplicity, consider Boolean functions on formula_13, and let formula_14 be a probability distribution on formula_15. Define the performance in terms of this. Specifically, formula_16 Note that formula_17 In general, for non-Boolean functions, the performance will not correspond directly to the probability that the functions agree, although it will have some relationship. Throughout an organism's life, it will only experience a limited number of environments, so its performance cannot be determined exactly. The "empirical performance" is defined by formula_18 where formula_19 is a multiset of formula_20 independent selections from formula_15 according to formula_14. If formula_20 is large enough, evidently formula_21 will be close to the actual performance formula_6. Given an ideal function formula_3, initial representation formula_4, "sample size" formula_20, and "tolerance" formula_22, the "mutator" formula_23 is a random variable defined as follows. Each formula_24 is classified as beneficial, neutral, or deleterious, depending on its empirical performance. Specifically, If there are any beneficial mutations, then formula_23 is equal to one of these at random. If there are no beneficial mutations, then formula_23 is equal to a random neutral mutation. In light of the similarity to biology, formula_7 itself is required to be available as a mutation, so there will always be at least one neutral mutation. The intention of this definition is that at each stage of evolution, all possible mutations of the current genome are tested in the environment. Out of the ones who thrive, or at least survive, one is chosen to be the candidate for the next stage. Given formula_29, we define the sequence formula_30 by formula_31. Thus formula_32 is a random variable representing what formula_33 has evolved to after formula_34 "generations". Let formula_35 be a class of functions, formula_36 be a class of representations, and formula_37 a class of distributions on formula_38. 
We say that formula_35 is "evolvable by formula_36 over formula_37" if there exist polynomials formula_39, formula_40, formula_41, and formula_42 such that for all formula_2 and all formula_43, for all ideal functions formula_3 and representations formula_29, with probability at least formula_44, formula_45 where the sizes of neighborhoods formula_12 for formula_46 are at most formula_47, the sample size is formula_48, the tolerance is formula_49, and the generation size is formula_50. formula_35 is "evolvable over formula_37" if it is evolvable by some formula_36 over formula_37. formula_35 is "evolvable" if it is evolvable over all distributions formula_37. Results. The class of conjunctions and the class of disjunctions are evolvable over the uniform distribution for short conjunctions and disjunctions, respectively. The class of parity functions (which evaluate to the parity of the number of true literals in a given subset of literals) is not evolvable, even for the uniform distribution. Evolvability implies PAC learnability.
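The framework can be illustrated with a toy simulation. The Python sketch below is a simplified illustration rather than Valiant's construction: it assumes the ideal function and the representations are monotone conjunctions over n Boolean variables, the distribution is uniform, and the neighborhood of a conjunction consists of itself together with the conjunctions obtained by adding or dropping a single variable. The sample size, tolerance and number of generations are arbitrary choices for the demonstration.

```python
import random

def value(conjunction, x):
    """A monotone conjunction evaluates to +1 iff all of its variables are true."""
    return 1 if all(x[i] for i in conjunction) else -1

def empirical_perf(target, rep, samples):
    """Empirical performance Perf_s(f, r): average of f(x)*r(x) over the sample."""
    return sum(value(target, x) * value(rep, x) for x in samples) / len(samples)

def mutate(target, rep, n, sample_size, tolerance):
    """One generation: score every neighbour on a fresh sample, take a random
    beneficial mutation if any exists, otherwise a random neutral one."""
    samples = [[random.random() < 0.5 for _ in range(n)] for _ in range(sample_size)]
    base = empirical_perf(target, rep, samples)
    neighbours = [rep]                                            # rep itself is always available
    neighbours += [rep | {i} for i in range(n) if i not in rep]   # add one variable
    neighbours += [rep - {i} for i in rep]                        # drop one variable
    gains = [(empirical_perf(target, nb, samples) - base, nb) for nb in neighbours]
    beneficial = [nb for g, nb in gains if g >= tolerance]
    neutral = [nb for g, nb in gains if -tolerance < g < tolerance]
    return random.choice(beneficial) if beneficial else random.choice(neutral)

def evolve(target, n, generations=40, sample_size=5000, tolerance=0.01):
    rep = frozenset()                                             # start from the empty conjunction
    for _ in range(generations):
        rep = mutate(target, rep, n, sample_size, tolerance)
    return rep

random.seed(1)
n = 6
ideal = frozenset({0, 2, 5})          # the hidden conjunction x0 AND x2 AND x5
print(sorted(evolve(ideal, n)))       # typically converges to [0, 2, 5]
```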
[ { "math_id": 0, "text": "F_n\\," }, { "math_id": 1, "text": "R_n\\," }, { "math_id": 2, "text": "n\\," }, { "math_id": 3, "text": "f \\in F_n" }, { "math_id": 4, "text": "r \\in R_n" }, { "math_id": 5, "text": "f\\," }, { "math_id": 6, "text": "\\operatorname{Perf}(f,r)" }, { "math_id": 7, "text": "r\\," }, { "math_id": 8, "text": "r,r' \\in R_n" }, { "math_id": 9, "text": "r \\neq r'\\," }, { "math_id": 10, "text": "r(x) = r'(x)\\," }, { "math_id": 11, "text": "x \\in X_n" }, { "math_id": 12, "text": "N(r)\\," }, { "math_id": 13, "text": "X_n = \\{-1,1\\}^n\\," }, { "math_id": 14, "text": "D_n\\," }, { "math_id": 15, "text": "X_n\\," }, { "math_id": 16, "text": " \\operatorname{Perf}(f,r) = \\sum_{x \\in X_n} f(x) r(x) D_n(x). " }, { "math_id": 17, "text": "\\operatorname{Perf}(f,r) = \\operatorname{Prob}(f(x)=r(x)) - \\operatorname{Prob}(f(x) \\neq r(x))." }, { "math_id": 18, "text": " \\operatorname{Perf}_s(f,r) = \\frac{1}{s} \\sum_{x \\in S} f(x)r(x), " }, { "math_id": 19, "text": "S\\," }, { "math_id": 20, "text": "s\\," }, { "math_id": 21, "text": "\\operatorname{Perf}_s(f,r)" }, { "math_id": 22, "text": "t\\," }, { "math_id": 23, "text": "\\operatorname{Mut}(f,r,s,t)" }, { "math_id": 24, "text": "r' \\in N(r)" }, { "math_id": 25, "text": "r'\\," }, { "math_id": 26, "text": "\\operatorname{Perf}_s(f,r') - \\operatorname{Perf}_s(f,r) \\geq t" }, { "math_id": 27, "text": "-t < \\operatorname{Perf}_s(f,r') - \\operatorname{Perf}_s(f,r) < t" }, { "math_id": 28, "text": "\\operatorname{Perf}_s(f,r') - \\operatorname{Perf}_s(f,r) \\leq -t" }, { "math_id": 29, "text": "r_0 \\in R_n" }, { "math_id": 30, "text": "r_0,r_1,r_2,\\ldots" }, { "math_id": 31, "text": "r_{i+1} = \\operatorname{Mut}(f,r_i,s,t)" }, { "math_id": 32, "text": "r_g\\," }, { "math_id": 33, "text": "r_0\\," }, { "math_id": 34, "text": "g\\," }, { "math_id": 35, "text": "F\\," }, { "math_id": 36, "text": "R\\," }, { "math_id": 37, "text": "D\\," }, { "math_id": 38, "text": "X\\," }, { "math_id": 39, "text": "p(\\cdot,\\cdot)" }, { "math_id": 40, "text": "s(\\cdot,\\cdot)" }, { "math_id": 41, "text": "t(\\cdot,\\cdot)" }, { "math_id": 42, "text": "g(\\cdot,\\cdot)" }, { "math_id": 43, "text": "\\epsilon > 0\\," }, { "math_id": 44, "text": "1 - \\epsilon\\," }, { "math_id": 45, "text": " \\operatorname{Perf}(f,r_{g(n,1/\\epsilon)}) \\geq 1-\\epsilon, " }, { "math_id": 46, "text": "r \\in R_n\\," }, { "math_id": 47, "text": "p(n,1/\\epsilon)\\," }, { "math_id": 48, "text": "s(n,1/\\epsilon)\\," }, { "math_id": 49, "text": "t(1/n,\\epsilon)\\," }, { "math_id": 50, "text": "g(n,1/\\epsilon)\\," } ]
https://en.wikipedia.org/wiki?curid=8416103
841685
Tractrix
Curve traced by a point on a rod as one end is dragged along a line In geometry, a tractrix (from la " trahere" 'to pull, drag'; plural: tractrices) is the curve along which an object moves, under the influence of friction, when pulled on a horizontal plane by a line segment attached to a pulling point (the "tractor") that moves at a right angle to the initial line between the object and the puller at an infinitesimal speed. It is therefore a curve of pursuit. It was first introduced by Claude Perrault in 1670, and later studied by Isaac Newton (1676) and Christiaan Huygens (1693). Mathematical derivation. Suppose the object is placed at ("a", 0) (or (4, 0) in the example shown at right), and the puller at the origin, so a is the length of the pulling thread (4 in the example at right). Then the puller starts to move along the y axis in the positive direction. At every moment, the thread will be tangent to the curve "y" = "y"("x") described by the object, so that it becomes completely determined by the movement of the puller. Mathematically, if the coordinates of the object are ("x", "y"), the of the puller is formula_0 by the Pythagorean theorem. Writing that the slope of thread equals that of the tangent to the curve leads to the differential equation formula_1 with the initial condition "y"("a") = 0. Its solution is formula_2 where the sign ± depends on the direction (positive or negative) of the movement of the puller. The first term of this solution can also be written formula_3 where arsech is the inverse hyperbolic secant function. The sign before the solution depends whether the puller moves upward or downward. Both branches belong to the tractrix, meeting at the cusp point ("a", 0). Basis of the tractrix. The essential property of the tractrix is constancy of the distance between a point P on the curve and the intersection of the tangent line at P with the asymptote of the curve. The tractrix might be regarded in a multitude of ways: The function admits a horizontal asymptote. The curve is symmetrical with respect to the y-axis. The curvature radius is "r" = "a" cot . A great implication that the tractrix had was the study of its surface of revolution about its asymptote: the pseudosphere. Studied by Eugenio Beltrami in 1868, as a surface of constant negative Gaussian curvature, the pseudosphere is a local model of hyperbolic geometry. The idea was carried further by Kasner and Newman in their book "Mathematics and the Imagination", where they show a toy train dragging a pocket watch to generate the tractrix. Practical application. In 1927, P. G. A. H. Voigt patented a horn loudspeaker design based on the assumption that a wave front traveling through the horn is spherical of a constant radius. The idea is to minimize distortion caused by internal reflection of sound within the horn. The resulting shape is the surface of revolution of a tractrix.An important application is in the forming technology for sheet metal. In particular a tractrix profile is used for the corner of the die on which the sheet metal is bent during deep drawing. A toothed belt-pulley design provides improved efficiency for mechanical power transmission using a tractrix catenary shape for its teeth. This shape minimizes the friction of the belt teeth engaging the pulley, because the moving teeth engage and disengage with minimal sliding contact. Original timing belt designs used simpler trapezoidal or circular tooth shapes, which cause significant sliding and friction. Drawing machines. 
A history of all these machines can be seen in an article by H. J. M. Bos. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
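The defining property stated above, the constancy of the tangent segment from the curve to the asymptote, can be verified numerically from the closed-form solution. A short Python sketch (the slope is approximated by a finite difference, and the sample points are arbitrary):

```python
import math

def tractrix_y(x, a=1.0):
    """Upper branch of the tractrix: y = a*arsech(x/a) - sqrt(a^2 - x^2), for 0 < x <= a."""
    return a * math.log((a + math.sqrt(a * a - x * x)) / x) - math.sqrt(a * a - x * x)

def tangent_segment_length(x, a=1.0, h=1e-6):
    """Length of the tangent segment from (x, y(x)) to its intersection with the y-axis."""
    slope = (tractrix_y(x + h, a) - tractrix_y(x - h, a)) / (2 * h)
    y_intercept = tractrix_y(x, a) - slope * x
    return math.hypot(x, tractrix_y(x, a) - y_intercept)

for x in (0.2, 0.5, 0.9):
    print(round(tangent_segment_length(x), 6))   # all ~1.0, the thread length a
```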
[ { "math_id": 0, "text": "y + \\operatorname{sign}(y)\\sqrt{a^2 - x^2}," }, { "math_id": 1, "text": "\\frac{dy}{dx} = \\pm\\frac{\\sqrt{a^2-x^2}}{x}" }, { "math_id": 2, "text": "y = \\int_x^a \\frac{\\sqrt{a^2-t^2}}{t}\\,dt = \\pm\\! \\left( a\\ln{\\frac{a+\\sqrt{a^2-x^2}}{x}}-\\sqrt{a^2-x^2} \\right)," }, { "math_id": 3, "text": "a \\operatorname{arsech}\\frac{x}{a}, " }, { "math_id": 4, "text": "x = t - \\tanh(t), y= 1/{\\cosh(t)}" } ]
https://en.wikipedia.org/wiki?curid=841685
841689
Pullback (category theory)
Most general completion of a commutative square given two morphisms with same codomain In category theory, a branch of mathematics, a pullback (also called a fiber product, fibre product, fibered product or Cartesian square) is the limit of a diagram consisting of two morphisms "f" : "X" → "Z" and "g" : "Y" → "Z" with a common codomain. The pullback is written "P" "X" ×"f", "Z", "g" "Y". Usually the morphisms f and g are omitted from the notation, and then the pullback is written "P" "X" ×"Z" "Y". The pullback comes equipped with two natural morphisms "P" → "X" and "P" → "Y". The pullback of two morphisms "f" and "g" need not exist, but if it does, it is essentially uniquely defined by the two morphisms. In many situations, "X" ×"Z" "Y" may intuitively be thought of as consisting of pairs of elements ("x", "y") with "x" in "X", "y" in "Y", and "f"("x")   "g"("y"). For the general definition, a universal property is used, which essentially expresses the fact that the pullback is the "most general" way to complete the two given morphisms to a commutative square. The dual concept of the pullback is the "pushout". Universal property. Explicitly, a pullback of the morphisms f and g consists of an object P and two morphisms "p"1 : "P" → "X" and "p"2 : "P" → "Y" for which the diagram commutes. Moreover, the pullback ("P", "p"1, "p"2) must be universal with respect to this diagram. That is, for any other such triple ("Q", "q"1, "q"2) where "q"1 : "Q" → "X" and "q"2 : "Q" → "Y" are morphisms with "f" "q"1   "g" "q"2, there must exist a unique "u" : "Q" → "P" such that formula_0 This situation is illustrated in the following commutative diagram. As with all universal constructions, a pullback, if it exists, is unique up to isomorphism. In fact, given two pullbacks ("A", "a"1, "a"2) and ("B", "b"1, "b"2) of the same cospan "X" → "Z" ← "Y", there is a unique isomorphism between A and B respecting the pullback structure. Pullback and product. The pullback is similar to the product, but not the same. One may obtain the product by "forgetting" that the morphisms f and g exist, and forgetting that the object Z exists. One is then left with a discrete category containing only the two objects X and Y, and no arrows between them. This discrete category may be used as the index set to construct the ordinary binary product. Thus, the pullback can be thought of as the ordinary (Cartesian) product, but with additional structure. Instead of "forgetting" Z, f, and g, one can also "trivialize" them by specializing Z to be the terminal object (assuming it exists). f and g are then uniquely determined and thus carry no information, and the pullback of this cospan can be seen to be the product of X and Y. Examples. Commutative rings. In the category of commutative rings (with identity), the pullback is called the fibered product. Let A, B, and C be commutative rings (with identity) and "α" : "A" → "C" and "β" : "B" → "C" (identity preserving) ring homomorphisms. Then the pullback of this diagram exists and is given by the subring of the product ring "A" × "B" defined by formula_1 along with the morphisms formula_2 given by formula_3 and formula_4 for all formula_5. We then have formula_6 Groups and modules. In complete analogy to the example of commutative rings above, one can show that all pullbacks exist in the category of groups and in the category of modules over some fixed ring. Sets. 
In the category of sets, the pullback of functions "f" : "X" → "Z" and "g" : "Y" → "Z" always exists and is given by the set formula_7 together with the restrictions of the projection maps "π"1 and "π"2 to "X" ×"Z" "Y". Alternatively one may view the pullback in Set asymmetrically: formula_8 where formula_9 is the disjoint union of sets (the involved sets are not disjoint on their own unless f resp. g is injective). In the first case, the projection "π"1 extracts the x index while "π"2 forgets the index, leaving elements of Y. This example motivates another way of characterizing the pullback: as the equalizer of the morphisms "f" ∘ "p"1, "g" ∘ "p"2 : "X" × "Y" → "Z" where "X" × "Y" is the binary product of X and Y and "p"1 and "p"2 are the natural projections. This shows that pullbacks exist in any category with binary products and equalizers. In fact, by the existence theorem for limits, all finite limits exist in a category with binary products and equalizers; equivalently, all finite limits exist in a category with terminal object and pullbacks (by the fact that binary product = pullback on the terminal object, and that an equalizer is a pullback involving binary product). Graphs of functions. A specific example of a pullback is given by the graph of a function. Suppose that formula_10 is a function. The "graph" of f is the set formula_11 The graph can be reformulated as the pullback of f and the identity function on Y. By definition, this pullback is formula_12 and this equals formula_13. Fiber bundles. Another example of a pullback comes from the theory of fiber bundles: given a bundle map "π" : "E" → "B" and a continuous map "f" : "X" → "B", the pullback (formed in the category of topological spaces with continuous maps) "X" ×"B" "E" is a fiber bundle over X called the pullback bundle. The associated commutative diagram is a morphism of fiber bundles. This is also the case in the category of differentiable manifolds. A special case is the pullback of two fiber bundles "E""1", "E"2 → "B". In this case "E"1 × "E"2 is a fiber bundle over "B × B", and pulling back along the diagonal map "B" → "B × B" gives a space homeomorphic (diffeomorphic) to "E"1 ×"B" "E"2, which is a fiber bundle over "B". The pullback of two smooth transverse maps into the same differentiable manifold is also a differentiable manifold, and the tangent space of the pullback is the pullback of the tangent spaces along the differential maps. Preimages and intersections. Preimages of sets under functions can be described as pullbacks as follows: Suppose "f" : "A" → "B", "B"0 ⊆ "B". Let g be the inclusion map "B"0 ↪ "B". Then a pullback of f and g (in Set) is given by the preimage "f"−1["B"0] together with the inclusion of the preimage in A "f"−1["B"0] ↪ "A" and the restriction of f to "f"−1["B"0] "f"−1["B"0] → "B"0. Because of this example, in a general category the pullback of a morphism "f" and a monomorphism "g" can be thought of as the "preimage" under "f" of the subobject specified by "g". Similarly, pullbacks of two monomorphisms can be thought of as the "intersection" of the two subobjects. Least common multiple. Consider the multiplicative monoid of positive integers Z+ as a category with one object. In this category, the pullback of two positive integers "m" and "n" is just the pair formula_14, where the numerators are both the least common multiple of "m" and "n". The same pair is also the pushout. 
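To make the set-theoretic description concrete, the following Python sketch computes the fibered product of two finite functions together with its two projections, and checks that the defining square commutes. The particular sets and functions are made-up illustrations, not taken from the article; the preimage computed at the end mirrors the "Preimages and intersections" example above.

# Pullback (fibered product) of two finite functions f: X -> Z and g: Y -> Z,
# following the set-theoretic description X x_Z Y = {(x, y) : f(x) = g(y)}.
# The concrete sets and maps below are illustrative choices, not from the text.

def pullback(X, Y, f, g):
    """Return the fibered product together with its two projection maps."""
    P = [(x, y) for x in X for y in Y if f(x) == g(y)]
    p1 = lambda pair: pair[0]   # projection P -> X
    p2 = lambda pair: pair[1]   # projection P -> Y
    return P, p1, p2

# Example: Z classifies parity; f and g send elements of X and Y to a parity in Z = {0, 1}.
X = [1, 2, 3, 4]
Y = ['a', 'bb', 'ccc']
f = lambda x: x % 2          # X -> Z
g = lambda y: len(y) % 2     # Y -> Z

P, p1, p2 = pullback(X, Y, f, g)
print(P)   # pairs (x, y) with matching parity, e.g. (1, 'a'), (2, 'bb'), ...

# The defining square commutes: f(p1(q)) == g(p2(q)) for every q in P.
assert all(f(p1(q)) == g(p2(q)) for q in P)

# Special case from the article: a preimage is a pullback along an inclusion.
B0 = [0]                      # subset of Z, included via the identity
preimage = [x for x, _ in pullback(X, B0, f, lambda b: b)[0]]
print(preimage)               # [2, 4] == f^{-1}[{0}]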
Properties. In an abelian category all pullbacks exist, and they preserve kernels, in the following sense: if the commutative square formed by "p"1 : "P" → "X", "p"2 : "P" → "Y", "f" : "X" → "Z" and "g" : "Y" → "Z" is a pullback diagram, then the induced morphism ker("p"2) → ker("f") is an isomorphism, and so is the induced morphism ker("p"1) → ker("g"). Every pullback diagram thus gives rise to a commutative diagram of the following form, where all rows and columns are exact: formula_15 Furthermore, in an abelian category, if "X" → "Z" is an epimorphism, then so is its pullback "P" → "Y", and symmetrically: if "Y" → "Z" is an epimorphism, then so is its pullback "P" → "X". In these situations, the pullback square is also a pushout square. Graphically this means that two pullback squares, placed side by side and sharing one morphism, form a larger pullback square when ignoring the inner shared morphism. formula_16 Weak pullbacks. A weak pullback of a cospan "X" → "Z" ← "Y" is a cone over the cospan that is only weakly universal, that is, the mediating morphism "u" : "Q" → "P" above is not required to be unique. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p_1 \\circ u=q_1, \\qquad p_2\\circ u=q_2." }, { "math_id": 1, "text": "A \\times_{C} B = \\left\\{(a,b) \\in A \\times B \\; \\big| \\; \\alpha(a) = \\beta(b) \\right\\}" }, { "math_id": 2, "text": "\\beta' \\colon A \\times_{C} B \\to A, \\qquad \\alpha'\\colon A \\times_{C} B \\to B" }, { "math_id": 3, "text": "\\beta'(a, b) = a" }, { "math_id": 4, "text": "\\alpha'(a, b) = b" }, { "math_id": 5, "text": "(a, b) \\in A \\times_C B" }, { "math_id": 6, "text": "\\alpha \\circ \\beta' = \\beta \\circ \\alpha'." }, { "math_id": 7, "text": "X\\times_Z Y = \\{(x, y) \\in X \\times Y| f(x) = g(y)\\} = \\bigcup_{z \\in f(X) \\cap g(Y)} f^{-1}[\\{z\\}] \\times g^{-1}[\\{z\\}] ," }, { "math_id": 8, "text": "X\\times_Z Y \\cong \\coprod_{x\\in X} g^{-1}[\\{f(x)\\}] \\cong \\coprod_{y\\in Y} f^{-1}[\\{g(y)\\}]" }, { "math_id": 9, "text": "\\coprod" }, { "math_id": 10, "text": "f \\colon X \\to Y" }, { "math_id": 11, "text": "\\Gamma_f = \\{(x, f(x)) \\colon x \\in X\\} \\subseteq X \\times Y." }, { "math_id": 12, "text": "X \\times_{f,Y,1_Y} Y = \\{(x, y) \\colon f(x) = 1_Y(y)\\} = \\{(x, y) \\colon f(x) = y\\} \\subseteq X \\times Y," }, { "math_id": 13, "text": "\\Gamma_f" }, { "math_id": 14, "text": "\\left(\\frac{\\operatorname{lcm}(m,n)}{m}, \\frac{\\operatorname{lcm}(m,n)}{n}\\right)" }, { "math_id": 15, "text": "\n\\begin{array}{ccccccc} \n&&&&0&&0\\\\\n&&&&\\downarrow&&\\downarrow\\\\\n&&&&L&=&L\\\\\n&&&&\\downarrow&&\\downarrow\\\\\n0&\\rightarrow&K&\\rightarrow&P&\\rightarrow&Y \\\\\n&&\\parallel&&\\downarrow& & \\downarrow\\\\\n0&\\rightarrow&K&\\rightarrow&X&\\rightarrow&Z\n\\end{array}\n" }, { "math_id": 16, "text": "\n\\begin{array}{ccccc} \nQ&\\xrightarrow{t}&P& \\xrightarrow{r} & A \\\\\n\\downarrow_{u} & & \\downarrow_{s} & &\\downarrow_{f}\\\\\nD & \\xrightarrow{h} & B &\\xrightarrow{g} & C\n\\end{array}\n" } ]
https://en.wikipedia.org/wiki?curid=841689
8417346
River regime
The river regime generally refers to the mathematical relationship between the river discharge and its width, depth and slope. Thus, "river regime" describes a series of characteristic power-law relationships between discharge and width, depth and slope. It is described by the fact that the discharge through a river of an approximate rectangular cross-section must, through conservation of mass, equal formula_0 where formula_1 is the volumetric discharge, formula_2 is the mean flow velocity, formula_3 is the channel width (breadth) and formula_4 is the channel depth. Because of this relationship, as discharge increases, depth, width, and/or mean velocity must increase as well. Empirically derived relationships between width, depth, and velocity are: formula_5 formula_6 formula_7 formula_1 refers to a "dominant discharge" or "channel-forming discharge", which is typically the 1–2 year flood, though there is a large amount of scatter around this mean. This is the event that causes significant erosion and deposition and determines the channel morphology. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
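As a rough numerical illustration of these power-law relations, the sketch below scales width, depth and velocity with discharge using the exponents quoted above. The proportionality coefficients are arbitrary placeholders, not measured values (in a real channel they are basin-specific); the point of the check is only that the exponents sum to 1, so the product of the three quantities scales exactly like the discharge.

# Hydraulic-geometry scaling b ~ Q^0.5, h ~ Q^0.4, u ~ Q^0.1 (exponents from the text).
# The coefficients k_b, k_h, k_u are illustrative placeholders; for a real channel
# they are fitted so that k_b * k_h * k_u = 1 (mass conservation, Q = u*b*h).

def channel_geometry(Q, k_b=1.0, k_h=1.0, k_u=1.0):
    b = k_b * Q ** 0.5   # width
    h = k_h * Q ** 0.4   # depth
    u = k_u * Q ** 0.1   # mean velocity
    return b, h, u

for Q in (1.0, 10.0, 100.0):          # discharge in arbitrary units
    b, h, u = channel_geometry(Q)
    print(f"Q={Q:6.1f}  b={b:6.2f}  h={h:5.2f}  u={u:5.2f}  u*b*h={u*b*h:7.1f}")

# Because 0.5 + 0.4 + 0.1 = 1, the product u*b*h scales exactly like Q, so the
# three relations are mutually consistent with conservation of mass.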
[ { "math_id": 0, "text": "Q = \\bar{u} b h" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "\\bar{u}" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "h" }, { "math_id": 5, "text": "b \\propto Q^{0.5}" }, { "math_id": 6, "text": "h \\propto Q^{0.4}" }, { "math_id": 7, "text": "u \\propto Q^{0.1}" } ]
https://en.wikipedia.org/wiki?curid=8417346
841890
Haldane's dilemma
Limit on the speed of beneficial evolution Haldane's dilemma, also known as the waiting time problem, is a limit on the speed of beneficial evolution, calculated by J. B. S. Haldane in 1957. Before the invention of DNA sequencing technologies, it was not known how much polymorphism DNA harbored, although alloenzymes (variant forms of an enzyme which differ structurally but not functionally from other alloenzymes coded for by different alleles at the same locus) were beginning to make it clear that substantial polymorphism existed. This was puzzling because the amount of polymorphism known to exist seemed to exceed the theoretical limits that Haldane calculated, that is, the limits imposed if polymorphisms present in the population generally influence an organism's fitness. Motoo Kimura's landmark paper on neutral theory in 1968 built on Haldane's work to suggest that most molecular evolution is neutral, resolving the dilemma. Although neutral evolution remains the consensus theory among modern biologists, and thus Kimura's resolution of Haldane's dilemma is widely regarded as correct, some biologists argue that adaptive evolution explains a large fraction of substitutions in protein coding sequence, and they propose alternative solutions to Haldane's dilemma. Substitution cost. In the introduction to "The Cost of Natural Selection" Haldane writes that it is difficult for breeders to simultaneously select all the desired qualities, partly because the required genes may not be found together in the stock; but, he writes, especially in slowly breeding animals such as cattle, one cannot cull even half the females, even though only one in a hundred of them combines the various qualities desired. That is, the problem for the cattle breeder is that keeping only the specimens with the desired qualities will lower the reproductive capability too much to keep a useful breeding stock. Haldane states that this same problem arises with respect to natural selection. Characters that are positively correlated at one time may be negatively correlated at a later time, so simultaneous optimization of more than one character is a problem also in nature. And, as Haldane writes [i]n this paper I shall try to make quantitative the fairly obvious statement that natural selection cannot occur with great intensity for a number of characters at once unless they happen to be controlled by the same genes. In faster breeding species there is less of a problem. Haldane mentions the peppered moth, "Biston betularia", whose variation in pigmentation is determined by several alleles at a single gene. One of these alleles, "C", is dominant to all the others, and any CC or Cx moths are dark (where "x" is any other allele). Another allele, "c", is recessive to all the others, and cc moths are light. Against the originally pale lichens the darker moths were easier for birds to pick out, but in areas, where pollution has darkened the lichens, the cc moths had become rare. Haldane mentions that in a single day the frequency of cc moths might be halved. Another potential problem is that if "ten other independently inherited characters had been subject to selection of the same intensity as that for colour, only formula_0, or one in 1024, of the original genotype would have survived." The species would most likely have become extinct; but it might well survive ten other selective periods of comparable selectivity, if they happened in different centuries. Selection intensity. 
Haldane proceeds to define the "intensity of selection" regarding "juvenile survival" (that is, survival to reproductive age) as formula_1, where formula_2 is the proportion of those with the optimal genotype (or genotypes) that survive to reproduce, and formula_3 is the proportion of the entire population that similarly so survive. The proportion for the entire population that die without reproducing is thus formula_4, and this would have been formula_5 if all genotypes had survived as well as the optimal. Hence formula_6 is the proportion of "genetic" deaths due to selection. As Haldane mentions, if formula_7, then formula_8. The cost. Haldane writes: I shall investigate the following case mathematically. A population is in equilibrium under selection and mutation. One or more genes are rare because their appearance by mutation is balanced by natural selection. A sudden change occurs in the environment, for example, pollution by smoke, a change of climate, the introduction of a new food source, predator, or pathogen, and above all migration to a new habitat. It will be shown later that the general conclusions are not affected if the change is slow. The species is less adapted to the new environment, and its reproductive capacity is lowered. It is gradually improved as a result of natural selection. But meanwhile, a number of deaths, or their equivalents in lowered fertility, have occurred. If selection at the formula_9 selected locus is responsible for formula_10 of these deaths in any generation the reproductive capacity of the species will be formula_11 of that of the optimal genotype, or formula_12 nearly, if every formula_10 is small. Thus the intensity of selection approximates to formula_13. Comparing to the above, we have that formula_14, if we say that formula_15 is the quotient of deaths for the formula_9 selected locus and formula_3 is again the quotient of deaths for the entire population. The problem statement is therefore that the alleles in question are "not" particularly beneficial under the previous circumstances; but a change in environment favors these genes by natural selection. The individuals without the genes are therefore disfavored, and the favorable genes spread in the population by the death (or lowered fertility) of the individuals without the genes. Note that Haldane's model as stated here allows for more than one gene to move towards fixation at a time; but each such will add to the cost of substitution. The total cost of substitution of the formula_9 gene is the sum formula_16 of all values of formula_10 over all generations of selection; that is, until fixation of the gene. Haldane states that he will show that formula_16 depends mainly on formula_17, the small frequency of the gene in question, as selection begins – that is, at the time that the environmental change occurs (or begins to occur). A mathematical model of the cost of diploids. Let A and a be two alleles with frequencies formula_18 and formula_19 in the formula_20 generation. Their relative fitnesses are taken to be 1 for AA, 1 − λformula_21 for Aa, and 1 − formula_21 for aa, where 0 ≤ formula_21 ≤ 1, and 0 ≤ λ ≤ 1. If λ = 0, then Aa has the same fitness as AA, e.g. if Aa is phenotypically equivalent with AA (A dominant), and if λ = 1, then Aa has the same fitness as aa, e.g. if Aa is phenotypically equivalent with aa (A recessive). In general λ indicates how close in fitness Aa is to aa. 
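Before going into Haldane's continuum treatment, the following Python sketch simulates this diploid model directly: it iterates the allele frequencies under the relative fitnesses just given (the standard scheme implied by the death-fraction expression worked out in the next paragraphs), accumulates the fraction of selective deaths in each generation, and compares the total with the closed-form approximation for D derived below. The parameter values K, λ and p0 are illustrative choices, not Haldane's own worked example.

# Numerical illustration of Haldane's "cost of substitution".
# Genotypes AA, Aa, aa are assigned relative fitnesses 1, 1 - lam*K, 1 - K,
# matching the selective-death fraction d_n = 2*lam*K*p*q + K*q^2 used in the
# following section. K, lam and p0 are illustrative values.
from math import log

def substitution_cost(p0, K, lam, tol=1e-8, max_gen=2_000_000):
    """Sum the selective-death fractions over the whole sweep of allele A."""
    p, D, gen = p0, 0.0, 0
    while 1.0 - p > tol and gen < max_gen:
        q = 1.0 - p
        d = 2 * lam * K * p * q + K * q * q      # fraction of selective deaths
        D += d
        w_bar = 1.0 - d                          # mean fitness this generation
        p = p * (1.0 - lam * K * q) / w_bar      # standard selection recursion
        gen += 1
    return D, gen

p0, K, lam = 1e-4, 0.01, 0.5
D, gens = substitution_cost(p0, K, lam)
print(f"simulated cost D ~ {D:.1f} 'deaths' per head, over {gens} generations")

# For comparison, the closed-form approximation derived in the next section,
# D ~ (1/(1-lam)) * (-ln p0 + lam*ln((1-lam)/lam)), gives:
print(f"closed form      ~ {(-log(p0) + lam * log((1 - lam) / lam)) / (1 - lam):.1f}")
# Both land in the "10 or 20 times the number in a generation" range that Haldane
# quotes, and for weak selection the total is insensitive to the value of K.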
The fraction of selective deaths in the formula_20 generation then is formula_22 and the total number of deaths is the population size multiplied by formula_23 Important number 300. Haldane approximates the above equation by taking the continuum limit of the above equation. This is done by multiplying and dividing it by dq so that it is in integral form formula_24 substituting q=1-p, the cost (given by the total number of deaths, 'D', required to make a substitution) is given by formula_25 Assuming λ &lt; 1, this gives formula_26 where the last approximation assumes formula_17 to be small. If λ = 1, then we have formula_27 In his discussion Haldane writes that the substitution cost, if it is paid by juvenile deaths, "usually involves a number of deaths equal to about 10 or 20 times the number in a generation" – the minimum being the population size (= "the number in a generation") and rarely being 100 times that number. Haldane assumes 30 to be the mean value. Assuming substitution of genes to take place slowly, one gene at a time over "n" generations, the fitness of the species will fall below the optimum (achieved when the substitution is complete) by a factor of about 30/"n", so long as this is small – small enough to prevent extinction. Haldane doubts that high intensities – such as in the case of the peppered moth – have occurred frequently and estimates that a value of "n" = 300 is a probable number of generations. This gives a selection intensity of formula_28. Haldane then continues: The number of loci in a vertebrate species has been estimated at about 40,000. 'Good' species, even when closely related, may differ at several thousand loci, even if the differences at most of them are very slight. But it takes as many deaths, or their equivalents, to replace a gene by one producing a barely distinguishable phenotype as by one producing a very different one. If two species differ at 1000 loci, and the mean rate of gene substitution, as has been suggested, is one per 300 generations, it will take 300,000 generations to generate an interspecific difference. It may take a good deal more, for if an allele a1 is replaced by a10, the population may pass through stages where the commonest genotype is a1a1, a2a2, a3a3, and so on, successively, the various alleles in turn giving maximal fitness in the existing environment and the residual environment. The number 300 of generations is a conservative estimate for a slowly evolving species not at the brink of extinction by Haldane's calculation. For a difference of at least 1,000 genes, 300,000 generations might be needed – maybe more, if some gene runs through more than one optimisation. Origin of the term "Haldane's dilemma". Apparently the first use of the term "Haldane's dilemma" was by paleontologist Leigh Van Valen in his 1963 paper "Haldane's Dilemma, Evolutionary Rates, and Heterosis". Van Valen writes: Haldane (1957 [= "The Cost of Natural Selection"]) drew attention to the fact that in the process of the evolutionary substitution of one allele for another, at any intensity of selection and no matter how slight the importance of the locus, a substantial number of individuals would usually be lost because they did not already possess the new allele. 
Kimura (1960, 1961) has referred to this loss as the substitutional (or evolutional) load, but because it necessarily involves either a completely new mutation or (more usually) previous change in the environment or the genome, I like to think of it as a dilemma for the population: for most organisms, rapid turnover in a few genes precludes rapid turnover in the others. A corollary of this is that, if an environmental change occurs that necessitates the rather rapid replacement of several genes if a population is to survive, the population becomes extinct. That is, since a high number of deaths are required to fix one gene rapidly, and dead organisms do not reproduce, fixation of more than one gene simultaneously would conflict. Note that Haldane's model assumes independence of genes at different loci; if the selection intensity is 0.1 for each gene moving towards fixation, and there are "N" such genes, then the reproductive capacity of the species will be lowered to 0.9"N" times the original capacity. Therefore, if it is necessary for the population to fix more than one gene, it may not have reproductive capacity to counter the deaths. Evolution above Haldane's limit. Various models evolve at rates above Haldane's limit. J. A. Sved showed that a threshold model of selection, where individuals with a phenotype less than the threshold die and individuals with a phenotype above the threshold are all equally fit, allows for a greater substitution rate than Haldane's model (though no obvious upper limit was found, though tentative paths to calculate one were examined e.g. the death rate). John Maynard Smith and Peter O'Donald followed on the same track. Additionally, the effects of density-dependent processes, epistasis, and soft selective sweeps on the maximum rate of substitution have been examined. By looking at the polymorphisms within species and divergence between species an estimate can be obtained for the fraction of substitutions that occur due to selection. This parameter is generally called alpha (hence DFE-alpha), and appears to be large in some species, although almost all approaches suggest that the human-chimp divergence was primarily neutral. However, if divergence between "Drosophila" species was as adaptive as the alpha parameter suggests, then it would exceed Haldane's limit. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "(1/2)^{10}" }, { "math_id": 1, "text": "I = \\ln (s_0/S)" }, { "math_id": 2, "text": "s_0" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "1-S" }, { "math_id": 5, "text": "1-s_0" }, { "math_id": 6, "text": "s_0-S" }, { "math_id": 7, "text": "s_0 \\approx S" }, { "math_id": 8, "text": "I \\approx s_0-S" }, { "math_id": 9, "text": "i^{th}" }, { "math_id": 10, "text": "d_i" }, { "math_id": 11, "text": "\\prod \\left( 1- d_i \\right)" }, { "math_id": 12, "text": "\\exp \\left ( -\\sum d_i\\right)" }, { "math_id": 13, "text": "\\sum d_i" }, { "math_id": 14, "text": "d_i = s_{0i} - S" }, { "math_id": 15, "text": "s_{0i}" }, { "math_id": 16, "text": "D_i" }, { "math_id": 17, "text": "p_0" }, { "math_id": 18, "text": "p_n" }, { "math_id": 19, "text": "q_n" }, { "math_id": 20, "text": "n^\\mbox{th}" }, { "math_id": 21, "text": "K" }, { "math_id": 22, "text": "d_n = 2\\lambda Kp_nq_n + Kq_n^2 = Kq_n[2\\lambda + (1 - 2\\lambda)q_n]" }, { "math_id": 23, "text": "D = K \\sum_0^\\infin q_n \\; [2\\lambda + (1 - 2\\lambda)q_n]." }, { "math_id": 24, "text": " dq_n=-Kp_n q_n[\\lambda + (1-2 \\lambda )q_n ] " }, { "math_id": 25, "text": "D = \\int_0^{q_{_0}} \\frac{[2\\lambda + (1 - 2\\lambda)q]}{(1 - q)[\\lambda + (1 - 2\\lambda)q]}dq = \\frac{1}{1 - \\lambda} \\int_0^{q_{_0}} \\left[\\frac{1}{1 - q} + \\frac{\\lambda(1 - 2\\lambda)}{\\lambda + (1 - 2\\lambda)q}\\right]dq." }, { "math_id": 26, "text": "D = \\frac{1}{1 - \\lambda} \\left[-\\mbox{ln } p_0 + \\lambda \\mbox{ ln }\\left(\\frac{1 - \\lambda - (1 - 2\\lambda) p_0}{\\lambda}\\right)\\right] \\approx \\frac{1}{1 - \\lambda} \\left[-\\mbox{ln } p_0 + \\lambda \\mbox{ ln }\\left(\\frac{1 - \\lambda}{\\lambda}\\right)\\right]" }, { "math_id": 27, "text": "D = \\int_0^{q_{_0}} \\frac{2 - q}{(1 - q)^2} dq = \\int_0^{q_{_0}} \\left[\\frac{1}{1 - q} + \\frac{1}{(1 - q)^2}\\right]dq = p_0^{-1} - \\mbox{ ln } p_0 + O(\\lambda K)." }, { "math_id": 28, "text": "I = 30/300 = 0.1" } ]
https://en.wikipedia.org/wiki?curid=841890
8419626
Linear response function
A linear response function describes the input-output relationship of a signal transducer, such as a radio turning electromagnetic waves into music or a neuron turning synaptic input into a response. Because of its many applications in information theory, physics and engineering there exist alternative names for specific linear response functions such as susceptibility, impulse response or impedance; see also transfer function. The concept of a Green's function or fundamental solution of an ordinary differential equation is closely related. Mathematical definition. Denote the input of a system by formula_0 (e.g. a force), and the response of the system by formula_1 (e.g. a position). Generally, the value of formula_1 will depend not only on the present value of formula_0, but also on past values. Approximately formula_1 is a weighted sum of the previous values of formula_2, with the weights given by the linear response function formula_3: formula_4 The explicit term on the right-hand side is the leading order term of a Volterra expansion for the full nonlinear response. If the system in question is highly non-linear, higher order terms in the expansion, denoted by the dots, become important and the signal transducer cannot adequately be described just by its linear response function. The complex-valued Fourier transform formula_5 of the linear response function is very useful as it describes the output of the system if the input is a sine wave formula_6 with frequency formula_7. The output reads formula_8 with amplitude gain formula_9 and phase shift formula_10. Example. Consider a damped harmonic oscillator with input given by an external driving force formula_0, formula_11 The complex-valued Fourier transform of the linear response function is given by formula_12 The amplitude gain is given by the magnitude of the complex number formula_13 and the phase shift by the arctan of the imaginary part of the function divided by the real one. From this representation, we see that for small formula_14 the Fourier transform formula_5 of the linear response function yields a pronounced maximum ("Resonance") at the frequency formula_15. The linear response function for a harmonic oscillator is mathematically identical to that of an RLC circuit. The width of the maximum, formula_16 typically is much smaller than formula_17 so that the Quality factor formula_18 can be extremely large. Kubo formula. The exposition of linear response theory, in the context of quantum statistics, can be found in a paper by Ryogo Kubo. This defines particularly the Kubo formula, which considers the general case that the "force" "h"("t") is a perturbation of the basic operator of the system, the Hamiltonian, formula_19 where formula_20 corresponds to a measurable quantity as input, while the output "x"("t") is the perturbation of the thermal expectation of another measurable quantity formula_21. The Kubo formula then defines the quantum-statistical calculation of the susceptibility formula_22 by a general formula involving only the mentioned operators. As a consequence of the principle of causality the complex-valued function formula_23 has poles only in the lower half-plane. This leads to the Kramers–Kronig relations, which relates the real and the imaginary parts of formula_23 by integration. The simplest example is once more the damped harmonic oscillator.
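The damped-oscillator example above lends itself to a short numerical illustration. The Python sketch below evaluates the complex response χ̃(ω) = 1/(ω0² − ω² + iγω) from the text and prints the amplitude gain and phase shift at a few frequencies; the values of ω0 and γ are arbitrary illustrative choices.

# Frequency response of the damped harmonic oscillator from the text:
# chi(omega) = 1 / (omega_0^2 - omega^2 + i*gamma*omega).
# omega_0 and gamma below are arbitrary illustrative values.
import cmath

def chi(omega, omega_0, gamma):
    return 1.0 / (omega_0**2 - omega**2 + 1j * gamma * omega)

omega_0, gamma = 1.0, 0.05            # weak damping: gamma << omega_0
for omega in (0.5, 0.9, 1.0, 1.1, 2.0):
    c = chi(omega, omega_0, gamma)
    gain, phase = abs(c), cmath.phase(c)
    print(f"omega={omega:4.2f}  |chi|={gain:8.3f}  arg(chi)={phase:+6.3f} rad")

# The gain peaks sharply near omega = omega_0 ("resonance"); for small gamma the
# peak height is ~ 1/(gamma*omega_0) and the width of the peak is ~ gamma, so the
# quality factor Q = omega_0 / (width) ~ omega_0/gamma becomes large.
print("Q ~", omega_0 / gamma)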
[ { "math_id": 0, "text": "h(t)" }, { "math_id": 1, "text": "x(t)" }, { "math_id": 2, "text": "h(t')" }, { "math_id": 3, "text": "\\chi(t-t')" }, { "math_id": 4, "text": "x(t) = \\int_{-\\infty}^t dt'\\, \\chi(t-t') h(t') + \\cdots\\,." }, { "math_id": 5, "text": "\\tilde{\\chi}(\\omega) " }, { "math_id": 6, "text": "h(t) = h_0 \\sin(\\omega t)" }, { "math_id": 7, "text": "\\omega" }, { "math_id": 8, "text": "x(t) = \\left|\\tilde{\\chi}(\\omega)\\right| h_0 \\sin(\\omega t+\\arg\\tilde{\\chi}(\\omega))\\,," }, { "math_id": 9, "text": "|\\tilde{\\chi}(\\omega)|" }, { "math_id": 10, "text": "\\arg\\tilde{\\chi}(\\omega)" }, { "math_id": 11, "text": "\\ddot{x}(t)+\\gamma \\dot{x}(t)+\\omega_0^2 x(t) = h(t). " }, { "math_id": 12, "text": "\\tilde{\\chi}(\\omega) = \\frac{\\tilde{x}(\\omega)}{\\tilde{h}(\\omega)} = \\frac{1}{\\omega_0^2-\\omega^2+i\\gamma\\omega}. " }, { "math_id": 13, "text": "\\tilde\\chi (\\omega )," }, { "math_id": 14, "text": "\\gamma" }, { "math_id": 15, "text": " \\omega\\approx\\omega_0" }, { "math_id": 16, "text": "\\Delta\\omega ," }, { "math_id": 17, "text": "\\omega_0 ," }, { "math_id": 18, "text": "Q:=\\omega_0 /\\Delta\\omega" }, { "math_id": 19, "text": "\\hat H_0 \\to \\hat{H}_0 -h(t')\\hat{B}(t') " }, { "math_id": 20, "text": "\\hat B" }, { "math_id": 21, "text": "\\hat A(t)" }, { "math_id": 22, "text": "\\chi ( t -t' )" }, { "math_id": 23, "text": "\\tilde{\\chi }(\\omega )" } ]
https://en.wikipedia.org/wiki?curid=8419626
8420425
Elasticity of cell membranes
Ability of cell membranes to deform elastically A cell membrane defines a boundary between a cell and its environment. The primary constituent of a membrane is a phospholipid bilayer that forms in a water-based environment due to the hydrophilic nature of the lipid head and the hydrophobic nature of the two tails. In addition there are other lipids and proteins in the membrane, the latter typically in the form of isolated rafts. Of the numerous models that have been developed to describe the deformation of cell membranes, a widely accepted model is the fluid mosaic model proposed by Singer and Nicolson in 1972. In this model, the cell membrane surface is modeled as a two-dimensional fluid-like lipid bilayer where the lipid molecules can move freely. The proteins are partially or fully embedded in the lipid bilayer. Fully embedded proteins are called integral membrane proteins because they traverse the entire thickness of the lipid bilayer. These communicate information and matter between the interior and the exterior of the cell. Proteins that are only partially embedded in the bilayer are called peripheral membrane proteins. The membrane skeleton is a network of proteins below the bilayer that links with the proteins in the lipid membrane. Elasticity of closed lipid vesicles. The simplest component of a membrane is the lipid bilayer which has a thickness that is much smaller than the length scale of the cell. Therefore, the lipid bilayer can be represented by a two-dimensional mathematical surface. In 1973, based on similarities between lipid bilayers and nematic liquid crystals, Helfrich proposed the following expression for the curvature energy per unit area of the closed lipid bilayer where formula_0 are bending rigidities, formula_1 is the spontaneous curvature of the membrane, and formula_2 and formula_3 are the mean and Gaussian curvature of the membrane surface, respectively. The free energy of a closed bilayer under the osmotic pressure formula_4 (the outer pressure minus the inner one) as: where "dA" and "dV" are the area element of the membrane and the volume element enclosed by the closed bilayer, respectively, and "λ" is the Lagrange multiplier for area inextensibility of the membrane, which has the same dimension as surface tension. By taking the first order variation of above free energy, Ou-Yang and Helfrich derived an equation to describe the equilibrium shape of the bilayer as: They also obtained that the threshold pressure for the instability of a spherical bilayer was where formula_5 being the radius of the spherical bilayer. Using the shape equation (3) of closed vesicles, Ou-Yang predicted that there was a lipid torus with the ratio of two generated radii being exactly formula_6. His prediction was soon confirmed by the experiment Additionally, researchers obtained an analytical solution to (3) which explained the classical problem, the biconcave discoidal shape of normal red blood cells. In the last decades, the Helfrich model has been extensively used in computer simulations of vesicles, red blood cells and related systems. From a numerical point-of-view bending forces stemming from the Helfrich model are very difficult to compute as they require the numerical evaluation of fourth-order derivatives and, accordingly, a large variety of numerical methods have been proposed for this task. Elasticity of open lipid membranes. The opening-up process of lipid bilayers by talin was observed by Saitoh et al. 
arose the interest of studying the equilibrium shape equation and boundary conditions of lipid bilayers with free exposed edges. Capovilla et al., Tu and Ou-Yang carefully studied this problem. The free energy of a lipid membrane with an edge formula_7 is written as where formula_8 and formula_9 represent the arclength element and the line tension of the edge, respectively. This line tension is a function of dimension and distribution of molecules comprising the edge, and their interaction strength and range. The first order variation gives the shape equation and boundary conditions of the lipid membrane: where formula_10, formula_11, and formula_12 are normal curvature, geodesic curvature, and geodesic torsion of the boundary curve, respectively. formula_13 is the unit vector perpendicular to the tangent vector of the curve and the surface normal vector of the membrane. Elasticity of cell membranes. A cell membrane is simplified as lipid bilayer plus membrane skeleton. The skeleton is a cross-linking protein network and joints to the bilayer at some points. Assume that each proteins in the membrane skeleton have similar length which is much smaller than the whole size of the cell membrane, and that the membrane is locally 2-dimensional uniform and homogenous. Thus the free energy density can be expressed as the invariant form of formula_14, formula_3, formula_15 and formula_16: where formula_17 is the in-plane strain of the membrane skeleton. Under the assumption of small deformations, and invariant between formula_18 and formula_19, (10) can be expanded up to second order terms as: where formula_20 and formula_21 are two elastic constants. In fact, the first two terms in (11) are the bending energy of the cell membrane which contributes mainly from the lipid bilayer. The last two terms come from the entropic elasticity of the membrane skeleton. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. Reviews on configurations of lipid vesicles. [1] R. Lipowsky, The Conformation of Membranes, Nature 349 (1991) 475-481. [2] U. Seifert, Configurations of Fluid Membranes and Vesicles, Adv. Phys. 46 (1997) 13-137. [3] Z. C. Ou-Yang, J. X. Liu and Y. Z. Xie, Geometric Methods in the Elastic Theory of Membranes in Liquid Crystal Phases (World Scientific, Singapore, 1999). [4] A. Biria, M. Maleki and E. Fried, (2013). Continuum theory for the edge of an open lipid bilayer, Advances in Applied Mechanics 46 (2013) 1-68. Research papers on closed vesicles. [1] W. Helfrich, Elastic Properties of Lipid Bilayers—Theory and Possible Experiments, Z. Naturforsch. C 28 (1973) 693-703. [2] O.-Y. Zhong-Can and W. Helfrich, Instability and Deformation of a Spherical Vesicle by Pressure, Phys. Rev. Lett. 59 (1987) 2486-2488. [3] O.-Y. Zhong-Can, Anchor Ring-Vesicle Membranes, Phys. Rev. A 41 (1990) 4517-4520. [4] H. Naito, M. Okuda, and O.-Y. Zhong-Can, Counterexample to Some Shape Equations for Axisymmetric Vesicles, Phys. Rev. E 48 (1993) 2304-2307. [5] U. Seifert, Vesicles of toroidal topology, Phys. Rev. Lett. 66 (1991) 2404-2407. [6] U. Seifert, K. Berndl, and R. Lipowsky, Shape transformations of vesicles: Phase diagram for spontaneous- curvature and bilayer-coupling models, Phys. Rev. A 44 (1991) 1182-1202. [7] L. Miao, et al., Budding transitions of fluid-bilayer vesicles: The effect of area-difference elasticity, Phys. Rev. E 49 (1994) 5389-5407. Research papers on open membranes. [1] A. Saitoh, K. Takiguchi, Y. Tanaka, and H. 
Hotani, Opening-up of liposomal membranes by talin, Proc. Natl. Acad. Sci. 95 (1998) 1026-1031. [2] R. Capovilla, J. Guven, and J.A. Santiago, Lipid membranes with an edge, Phys. Rev. E 66 (2002) 021607. [3] R. Capovilla and J. Guven, Stresses in lipid membranes, J. Phys. A 35 (2002) 6233-6247. [4] Z. C. Tu and Z. C. Ou-Yang, Lipid membranes with free edges, Phys. Rev. E 68, (2003) 061915. [5] T. Umeda, Y. Suezaki, K. Takiguchi, and H. Hotani, Theoretical analysis of opening-up vesicles with single and two holes, Phys. Rev. E 71 (2005) 011913. [6] A. Biria, M. Maleki and E. Fried, (2013). Continuum theory for the edge of an open lipid bilayer, Advances in Applied Mechanics 46 (2013) 1-68. Numerical solutions on lipid membranes. [1] J. Yan, Q. H. Liu, J. X. Liu and Z. C. Ou-Yang, Numerical observation of nonaxisymmetric vesicles in fluid membranes, Phys. Rev. E 58 (1998) 4730-4736. [2] J. J. Zhou, Y. Zhang, X. Zhou, Z. C. Ou-Yang, Large Deformation of Spherical Vesicle Studied by Perturbation Theory and Surface Evolver, Int J Mod Phys B 15 (2001) 2977-2991. [3] Y. Zhang, X. Zhou, J. J. Zhou and Z. C. Ou-Yang, Triconcave Solution to the Helfrich Variation Problem for the Shape of Lipid Bilayer Vesicles is Found by Surface Evolver, In. J. Mod. Phys. B 16 (2002) 511-517. [4] Q. Du, C. Liu and X. Wang, Simulating the deformation of vesicle membranes under elastic bending energy in three dimensions, J. Comput. Phys. 212 (2006) 757. [5] X. Wang and Q. Du, physics/0605095. Selected papers on cell membranes. [1] Y. C. Fung and P. Tong, Theory of the Sphering of Red Blood Cells, Biophys. J. 8 (1968) 175-198. [2] S. K. Boey, D. H. Boal, and D. E. Discher, Simulations of the Erythrocyte Cytoskeleton at Large Deformation. I. Microscopic Models, Biophys. J. 75 (1998) 1573-1583. [3] D. E. Discher, D. H. Boal, and S. K. Boey, Simulations of the Erythrocyte Cytoskeleton at Large Deformation. II. Micropipette Aspiration, Biophys. J. 75 (1998) 1584-1597. [4] E. Sackmann, A.R. Bausch and L. Vonna, Physics of Composite Cell Membrane and Actin Based Cytoskeleton, in Physics of bio-molecules and cells, Edited by H. Flyvbjerg, F. Julicher, P. Ormos And F. David (Springer, Berlin, 2002). [5] G. Lim, M. Wortis, and R. Mukhopadhyay, Stomatocyte–discocyte–echinocyte sequence of the human red blood cell: Evidence for the bilayer–couple hypothesis from membrane mechanics, Proc. Natl. Acad. Sci. 99 (2002) 16766-16769. [6] Z. C. Tu and Z. C. Ou-Yang, A Geometric Theory on the Elasticity of Bio-membranes, J. Phys. A: Math. Gen. 37 (2004) 11407-11429. [7] Z. C. Tu and Z. C. Ou-Yang, Elastic theory of low-dimensional continua and its applications in bio- and nano-structures,arxiv:0706.0001.
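Returning to the curvature-energy functional introduced at the start of this article: the displayed equations themselves are not reproduced in the text above, so the sketch below assumes the usual Helfrich density (k_c/2)(2H − c0)² + k̄K per unit area and sets the spontaneous curvature c0 = 0 (which also sidesteps differing sign conventions). For a sphere the bending term then integrates to 8πk_c and the Gaussian term to 4πk̄, independently of the radius; the value of k_c is an arbitrary unit.

# Bending energy of a spherical vesicle under the Helfrich functional discussed
# above, assuming the standard density (k_c/2)*(2H - c0)^2 + kbar*K and taking
# the spontaneous curvature c0 = 0.
from math import pi

def sphere_bending_energy(R, k_c):
    """Integrate 2*k_c*H^2 over a sphere of radius R (H = 1/R, area = 4*pi*R^2)."""
    H = 1.0 / R
    area = 4.0 * pi * R**2
    return 2.0 * k_c * H**2 * area        # = 8*pi*k_c, independent of R

k_c = 1.0                                  # bending rigidity in arbitrary units
for R in (0.5, 1.0, 2.0, 10.0):
    print(R, sphere_bending_energy(R, k_c) / pi)   # prints ~8.0 regardless of R

# The Gaussian term contributes a further 4*pi*kbar for any closed surface of
# spherical topology (Gauss-Bonnet), so the total energy is fixed by topology and
# the rigidities alone: the Helfrich energy of a sphere is scale invariant.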
[ { "math_id": 0, "text": "k_c,\\bar{k}" }, { "math_id": 1, "text": "c_0" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "K" }, { "math_id": 4, "text": "\\Delta p" }, { "math_id": 5, "text": "R" }, { "math_id": 6, "text": "\\sqrt{2}" }, { "math_id": 7, "text": "C" }, { "math_id": 8, "text": "ds" }, { "math_id": 9, "text": "\\gamma" }, { "math_id": 10, "text": "k_n" }, { "math_id": 11, "text": "k_g" }, { "math_id": 12, "text": "\\tau_g" }, { "math_id": 13, "text": "\\mathbf{e}_2" }, { "math_id": 14, "text": "2H" }, { "math_id": 15, "text": "\\mathrm{tr}(\\varepsilon)" }, { "math_id": 16, "text": "\\det(\\varepsilon)" }, { "math_id": 17, "text": "\\varepsilon" }, { "math_id": 18, "text": "\\mathrm{tr}\\varepsilon" }, { "math_id": 19, "text": "-\\mathrm{tr}\\varepsilon" }, { "math_id": 20, "text": "k_d" }, { "math_id": 21, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=8420425
8421588
Amortizing loan
Loan where the principal sum of the loan is paid down over the life of the loan In banking and finance, an amortizing loan is a loan where the principal of the loan is paid down over the life of the loan (that is, amortized) according to an amortization schedule, typically through equal payments. Similarly, an amortizing bond is a bond that repays part of the principal (face value) along with the coupon payments. Compare with a sinking fund, which amortizes the total debt outstanding by repurchasing some bonds. Each payment to the lender will consist of a portion of interest and a portion of principal. Mortgage loans are typically amortizing loans. The calculations for an amortizing loan are those of an annuity using the time value of money formulas and can be done using an amortization calculator. An amortizing loan should be contrasted with a bullet loan, where a large portion of the loan will be paid at the final maturity date instead of being paid down gradually over the loan's life. An accumulated amortization loan represents the amount of amortization expense that has been claimed since the acquisition of the asset. Effects. Amortization of debt has two major effects: Equated monthly installment. In EMI or Equated Monthly Installments, payments are divided into equal amounts for the duration of the loan, making it the simplest repayment model. A greater amount of the payment is applied to interest at the beginning of the amortization schedule, while more money is applied to principal at the end. This is captured by the formula formula_0 or, equivalently, formula_1 where: "P" is the principal amount borrowed, "A" is the periodic amortization payment, "r" is the periodic interest rate divided by 100 (nominal annual interest rate also divided by 12 in case of monthly installments), and "n" is the total number of payments (for a 30-year loan with monthly payments "n" = 30 × 12 = 360). Negative amortization. Negative amortization (also called deferred interest) occurs if the payments made do not cover the interest due. The remaining interest owed is added to the outstanding loan balance, making it larger than the original loan amount. If the repayment model for a loan is "fully amortized", then the last payment (which, if the schedule was calculated correctly, should be equal to all others) pays off all remaining principal and interest on the loan. If the repayment model on a loan is not fully amortized, then the last payment due may be a large balloon payment of all remaining principal and interest. If the borrower lacks the funds or assets to immediately make that payment, or adequate credit to refinance the balance into a new loan, the borrower may end up in default. Weighted-average life. The number weighted average of the times of the principal repayments of an amortizing loan is referred to as the weighted-average life (WAL), also called "average life". It's the average time until a dollar of principal is repaid. In a formula, formula_2 where: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
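The equated-installment formula above is easy to exercise numerically. The Python sketch below computes the periodic payment A, builds the amortization schedule, and evaluates the weighted-average life from the principal portions of the payments; the loan figures are made-up illustrative numbers.

# Equated-installment amortization: A = P * r*(1+r)^n / ((1+r)^n - 1), as in the text.
# The loan figures below are made-up illustrative numbers.

def payment(P, annual_rate, years, periods_per_year=12):
    r = annual_rate / periods_per_year      # periodic rate; annual_rate passed as a decimal (6% -> 0.06)
    n = years * periods_per_year
    return P * r * (1 + r) ** n / ((1 + r) ** n - 1)

def amortization_schedule(P, annual_rate, years, periods_per_year=12):
    """Yield (period, interest, principal_repaid, balance) for each payment."""
    r = annual_rate / periods_per_year
    A = payment(P, annual_rate, years, periods_per_year)
    balance = P
    for k in range(1, years * periods_per_year + 1):
        interest = balance * r
        principal = A - interest
        balance -= principal
        yield k, interest, principal, balance

P, rate, years = 200_000, 0.06, 30          # example: 6% nominal annual rate, monthly payments
A = payment(P, rate, years)
print(f"monthly payment: {A:.2f}")          # ~1199.10

rows = list(amortization_schedule(P, rate, years))
print(f"first payment: interest {rows[0][1]:.2f}, principal {rows[0][2]:.2f}")
print(f"last payment:  interest {rows[-1][1]:.2f}, principal {rows[-1][2]:.2f}")
print(f"final balance: {rows[-1][3]:.6f}")  # ~0, the loan is fully amortized

# Weighted-average life: time-weighted share of principal repaid in each period.
wal_months = sum(k * principal for k, _, principal, _ in rows) / P
print(f"WAL: {wal_months / 12:.1f} years")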
[ { "math_id": 0, "text": "P \\,=\\,A\\cdot\\frac{1-\\left(\\frac{1}{1+r}\\right)^{n} }{r}" }, { "math_id": 1, "text": "A \\,=\\,P\\cdot\\frac{r(1 + r)^n}{(1 + r)^n - 1}" }, { "math_id": 2, "text": "\\text{WAL} = \\sum_{i=1}^n \\frac {P_i}{P} t_i," }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "P_i" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "\\frac{P_i}{P}" }, { "math_id": 7, "text": "t_i" } ]
https://en.wikipedia.org/wiki?curid=8421588
842224
Elastomer
Polymer with rubber-like elastic properties An elastomer is a polymer with viscoelasticity (i.e. both viscosity and elasticity) and with weak intermolecular forces, generally low Young's modulus (E) and high failure strain compared with other materials. The term, a portmanteau of "elastic polymer", is often used interchangeably with "rubber", although the latter is preferred when referring to vulcanisates. Each of the monomers which link to form the polymer is usually a compound of several elements among carbon, hydrogen, oxygen and silicon. Elastomers are amorphous polymers maintained above their glass transition temperature, so that considerable molecular reconformation is feasible without breaking of covalent bonds. At ambient temperatures, such rubbers are thus relatively compliant (E ≈ 3 MPa) and deformable. Their primary uses are for seals, adhesives, and molded flexible parts. Rubber-like solids with elastic properties are called elastomers. Polymer chains are held together in these materials by relatively weak intermolecular bonds, which permit the polymers to stretch in response to macroscopic stresses. Elastomers are usually thermosets (requiring vulcanization) but may also be thermoplastic (see thermoplastic elastomer). The long polymer chains cross-link during curing (i.e., vulcanizing). The molecular structure of elastomers can be imagined as a 'spaghetti and meatball' structure, with the meatballs signifying cross-links. The elasticity is derived from the ability of the long chains to reconfigure themselves to distribute an applied stress. The covalent cross-linkages ensure that the elastomer will return to its original configuration when the stress is removed. Crosslinking most likely occurs in an equilibrated polymer without any solvent. The free energy expression derived from the Neohookean model of rubber elasticity gives the free energy change due to deformation per unit volume of the sample. The strand concentration, v, is the number of strands per unit volume, which does not depend on the overall size and shape of the elastomer. Beta compares the mean-square end-to-end distance of the strands between crosslinks to that of free chains obeying random-walk statistics; here it is set to 1. formula_0 formula_1 In the specific case of shear deformation, the elastomer, besides obeying the simplest model of rubber elasticity, is also taken to be incompressible. For pure shear, the shear strain is expressed in terms of the extension ratios λ. Pure shear is a two-dimensional stress state in which one extension ratio is fixed at 1, reducing the strain-energy function above to: formula_2 The shear stress is then obtained by differentiating the strain-energy function with respect to the shear strain, giving the shear modulus, G, times the shear strain: formula_3 Shear stress is then proportional to the shear strain even at large strains. Notice that a low shear modulus corresponds to a low strain-energy density for a given deformation, and vice versa. Shearing deformations in elastomers therefore require far less energy than changes in volume. formula_4 Examples. Unsaturated rubbers that can be cured by sulfur vulcanization: Saturated rubbers that cannot be cured by sulfur vulcanization: Various other types of elastomers: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
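Reading off G = k_BTνβ (with β = 1) from the two shear expressions above, the short Python sketch below evaluates the shear stress σ = Gγ and the pure-shear strain-energy density for a few strains. The temperature and strand density are illustrative numbers, not values from the article.

# Rubber-elasticity sketch: with the pure-shear strain-energy density f = G*gamma^2/2
# (G = k_B*T*nu*beta, beta = 1 as above), the shear stress is sigma = G*gamma even
# at large strain. The strand density and temperature are illustrative numbers.
k_B = 1.380649e-23        # J/K
T = 300.0                 # K
nu_el = 2.5e26            # elastically active strands per m^3 (illustrative)

G = k_B * T * nu_el       # shear modulus, Pa
print(f"G ~ {G/1e6:.2f} MPa")   # ~1 MPa; with E ~ 3G for an incompressible solid,
                                # this is consistent with the E ~ 3 MPa quoted above

for gamma in (0.1, 0.5, 1.0, 2.0):
    sigma = G * gamma                   # shear stress, Pa
    f = 0.5 * G * gamma**2              # strain-energy density, J/m^3
    print(f"gamma={gamma:4.1f}  sigma={sigma/1e6:5.2f} MPa  f={f/1e6:5.2f} MJ/m^3")

# A low G means little energy is stored per unit shear, which is why elastomers
# change shape (shear) far more readily than they change volume.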
[ { "math_id": 0, "text": "\\Delta f_d = \\frac{\\Delta F_d}{V} = \\frac{K_BT\\nu_{el}\\beta\\lambda_1p^2 + \\lambda_2p + 2\\lambda_3p^2 - 3}{2}" }, { "math_id": 1, "text": "v_{el} = \\frac{n_{el}}{V} , \\beta = 1" }, { "math_id": 2, "text": "\\Delta f_{d}= \\frac{k_{B}T\\nu_{s}\\beta\\gamma^2}{2}" }, { "math_id": 3, "text": "\\sigma_{12} = \\frac{d(\\Delta f_{d})}{d\\gamma} = G\\gamma" }, { "math_id": 4, "text": "\\Delta f_d = W = \\frac{G(\\lambda_{1p}^2+\\lambda_{2p}^2+\\lambda_{3p}^2-3)}{2}" } ]
https://en.wikipedia.org/wiki?curid=842224
8423611
Extremely large telescope
20-100-m-aperture astronomical observatory An extremely large telescope (ELT) is an astronomical observatory featuring an optical telescope with an aperture for its primary mirror from 20 metres up to 100 metres across, when discussing reflecting telescopes operating at optical wavelengths, including ultraviolet (UV), visible, and near infrared light. Among many planned capabilities, extremely large telescopes are expected to increase the chance of finding Earth-like planets around other stars. Telescopes for radio wavelengths can be much bigger physically, such as the 305-metre-aperture fixed-focus radio telescope of the Arecibo Observatory (now defunct). Freely steerable radio telescopes with diameters up to 100 metres have been in operation since the 1970s. These telescopes have a number of features in common, in particular the use of a segmented primary mirror (similar to the existing Keck telescopes), and the use of high-order adaptive optics systems. Although extremely large telescope designs are large, they can have smaller apertures than the synthesized apertures of many large optical interferometers. However, they may collect much more light, along with other advantages. Budget. Published budget figures are estimates and can vary over time. For construction costs, the cost of a giant telescope is commonly estimated to scale with its aperture diameter according to the following relation: formula_0 Projects. There were several telescopes in various stages in the 1990s and early 2000s, and some developed into construction projects. Some of these projects have been cancelled, or merged into ongoing extremely large telescopes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
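To give a feel for the D^2.7 scaling rule quoted above, the Python sketch below compares the relative construction cost of a few aperture sizes; only ratios are meaningful here, since the proportionality constant is not specified, and the chosen diameters are just examples.

# Relative construction-cost scaling, cost proportional to D^2.7 (rule from the text).
# Only ratios are meaningful; no absolute price is implied.
def relative_cost(D, D_ref=10.0):
    """Cost of a D-metre telescope relative to a 10 m reference aperture."""
    return (D / D_ref) ** 2.7

for D in (10, 20, 30, 39, 100):
    print(f"D = {D:3d} m  ->  ~{relative_cost(D):6.1f} x the cost of a 10 m telescope")

# Doubling the aperture multiplies the estimated cost by 2^2.7 ~ 6.5, a key
# constraint when weighing 30 m-class designs against 100 m-class concepts.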
[ { "math_id": 0, "text": "cost \\varpropto D^{2.7}" } ]
https://en.wikipedia.org/wiki?curid=8423611
842387
Evolute
Centers of curvature of a curve In the differential geometry of curves, the evolute of a curve is the locus of all its centers of curvature. That is to say that when the center of curvature of each point on a curve is drawn, the resultant shape will be the evolute of that curve. The evolute of a circle is therefore a single point at its center. Equivalently, an evolute is the envelope of the normals to a curve. The evolute of a curve, a surface, or more generally a submanifold, is the caustic of the normal map. Let M be a smooth, regular submanifold in R"n". For each point p in M and each vector v, based at p and normal to M, we associate the point "p" + v. This defines a Lagrangian map, called the normal map. The caustic of the normal map is the evolute of M. Evolutes are closely connected to involutes: A curve is the evolute of any of its involutes. History. Apollonius (c. 200 BC) discussed evolutes in Book V of his "Conics". However, Huygens is sometimes credited with being the first to study them (1673). Huygens formulated his theory of evolutes sometime around 1659 to help solve the problem of finding the tautochrone curve, which in turn helped him construct an isochronous pendulum. This was because the tautochrone curve is a cycloid, and the cycloid has the unique property that its evolute is also a cycloid. The theory of evolutes, in fact, allowed Huygens to achieve many results that would later be found using calculus. Evolute of a parametric curve. If formula_0 is the parametric representation of a regular curve in the plane with its curvature nowhere 0 and formula_1 its curvature radius and formula_2 the unit normal pointing to the curvature center, then formula_3 describes the evolute of the given curve. For formula_4 and formula_5 one gets formula_6 and formula_7 Properties of the evolute. In order to derive properties of a regular curve it is advantageous to use the arc length formula_8 of the given curve as its parameter, because of formula_9 and formula_10 (see Frenet–Serret formulas). Hence the tangent vector of the evolute formula_11 is: formula_12 From this equation one gets the following properties of the evolute: "Proof" of the last property: Let be formula_14 at the section of consideration. An involute of the evolute can be described as follows: formula_16 where formula_17 is a fixed string extension (see Involute of a parameterized curve ). With formula_18 and formula_19 one gets formula_20 That means: For the string extension formula_21 the given curve is reproduced. "Proof:" A parallel curve with distance formula_22 off the given curve has the parametric representation formula_23 and the radius of curvature formula_24 (see parallel curve). Hence the evolute of the parallel curve is formula_25 Examples. Evolute of a parabola. For the parabola with the parametric representation formula_26 one gets from the formulae above the equations: formula_27 formula_28 which describes a semicubic parabola Evolute of an ellipse. For the ellipse with the parametric representation formula_29 one gets: formula_30 formula_31 These are the equations of a non symmetric astroid. Eliminating parameter formula_32 leads to the implicit representation formula_33 Evolute of a cycloid. For the cycloid with the parametric representation formula_34 the evolute will be: formula_35 formula_36 which describes a transposed replica of itself. Evolute of log-aesthetic curves. The evolute of a log-aesthetic curve is another log-aesthetic curve. 
One instance of this relation is that the evolute of an Euler spiral is a spiral with Cesàro equation formula_37. Evolutes of some curves. Radial curve. A curve with a similar definition is the radial of a given curve. For each point on the curve take the vector from the point to the center of curvature and translate it so that it begins at the origin. Then the locus of points at the end of such vectors is called the radial of the curve. The equation for the radial is obtained by removing the x and y terms from the equation of the evolute. This produces formula_38 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
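The evolute formulas lend themselves to a quick numerical check. The Python sketch below evaluates the general parametric expressions X(t), Y(t) given above for an ellipse and verifies that they reproduce the closed-form astroid coordinates from the ellipse example; the semi-axes a and b are arbitrary illustrative values.

# Evolute of an ellipse (a*cos t, b*sin t) computed two ways: from the general
# parametric formulas X(t), Y(t) above, and from the closed forms
# X = (a^2-b^2)/a * cos^3 t, Y = (b^2-a^2)/b * sin^3 t. a and b are arbitrary.
from math import cos, sin, isclose

def evolute_general(x, y, xp, yp, xpp, ypp):
    """General evolute formulas for a parametric plane curve."""
    denom = xp * ypp - xpp * yp
    X = x - yp * (xp**2 + yp**2) / denom
    Y = y + xp * (xp**2 + yp**2) / denom
    return X, Y

a, b = 3.0, 2.0
for t in (0.3, 1.0, 2.2):
    x, y = a*cos(t), b*sin(t)
    xp, yp = -a*sin(t), b*cos(t)          # first derivatives
    xpp, ypp = -a*cos(t), -b*sin(t)       # second derivatives
    X, Y = evolute_general(x, y, xp, yp, xpp, ypp)
    X_cf = (a**2 - b**2)/a * cos(t)**3    # closed form from the text
    Y_cf = (b**2 - a**2)/b * sin(t)**3
    assert isclose(X, X_cf, abs_tol=1e-12) and isclose(Y, Y_cf, abs_tol=1e-12)
    # these points also satisfy the astroid relation (aX)^(2/3)+(bY)^(2/3)=(a^2-b^2)^(2/3)
    print(f"t={t:.1f}: evolute point ({X:+.4f}, {Y:+.4f})")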
[ { "math_id": 0, "text": "\\vec x= \\vec c(t),\\; t\\in [t_1, t_2]" }, { "math_id": 1, "text": "\\rho(t)" }, { "math_id": 2, "text": "\\vec n(t)" }, { "math_id": 3, "text": "\\vec E(t) = \\vec c(t) + \\rho (t) \\vec n(t)" }, { "math_id": 4, "text": "\\vec c(t)=(x(t),y(t))^\\mathsf{T}" }, { "math_id": 5, "text": "\\vec E=(X,Y)^\\mathsf{T}" }, { "math_id": 6, "text": " X(t) = x(t) - \\frac{y'(t) \\Big(x'(t)^2+y'(t)^2\\Big)}{x'(t) y''(t) - x''(t) y'(t)}" }, { "math_id": 7, "text": " Y(t) = y(t) + \\frac{x'(t) \\Big(x'(t)^2+y'(t)^2\\Big)}{x'(t) y''(t) - x''(t) y'(t)}." }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "\\left|\\vec c'\\right| = 1" }, { "math_id": 10, "text": "\\vec n' = -\\vec c'/\\rho" }, { "math_id": 11, "text": "\\vec E=\\vec c +\\rho \\vec n " }, { "math_id": 12, "text": "\\vec E' = \\vec c' +\\rho'\\vec n + \\rho\\vec n' = \\rho'\\vec n\\ ." }, { "math_id": 13, "text": "\\rho' = 0" }, { "math_id": 14, "text": "\\rho' > 0" }, { "math_id": 15, "text": "\\rho' < 0" }, { "math_id": 16, "text": "\\vec C_0=\\vec E -\\frac{\\vec E'}{\\left|\\vec E'\\right|} \\left(\\int_0^s\\left|\\vec E'(w)\\right| \\mathrm dw + l_0 \\right) ," }, { "math_id": 17, "text": "l_0" }, { "math_id": 18, "text": "\\vec E=\\vec c +\\rho\\vec n\\; ,\\; \\vec E'=\\rho'\\vec n" }, { "math_id": 19, "text": "\\rho'>0" }, { "math_id": 20, "text": "\\vec C_0 = \\vec c +\\rho\\vec n-\\vec n \\left(\\int_0^s \\rho'(w) \\; \\mathrm dw \\;+l_0\\right)= \\vec c + (\\rho(0) - l_0)\\; \\vec n\\, ." }, { "math_id": 21, "text": "l_0=\\rho(0)" }, { "math_id": 22, "text": "d" }, { "math_id": 23, "text": "\\vec c_d = \\vec c + d\\vec n" }, { "math_id": 24, "text": "\\rho_d=\\rho -d" }, { "math_id": 25, "text": "\\vec E_d = \\vec c_d +\\rho_d \\vec n =\\vec c +d\\vec n +(\\rho -d)\\vec n=\\vec c +\\rho \\vec n = \\vec E\\; ." }, { "math_id": 26, "text": "(t,t^2)" }, { "math_id": 27, "text": "X=\\cdots=-4t^3" }, { "math_id": 28, "text": "Y=\\cdots=\\frac{1}{2} + 3t^2 \\, ," }, { "math_id": 29, "text": "(a\\cos t, b\\sin t)" }, { "math_id": 30, "text": "X= \\cdots = \\frac{a^2-b^2}{a}\\cos ^3t" }, { "math_id": 31, "text": "Y= \\cdots = \\frac{b^2-a^2}{b}\\sin ^3t \\; ." }, { "math_id": 32, "text": "t" }, { "math_id": 33, "text": "(aX)^{\\tfrac{2}{3}} +(bY)^{\\tfrac{2}{3}} = (a^2-b^2)^{\\tfrac{2}{3}}\\ ." }, { "math_id": 34, "text": "(r(t - \\sin t), r(1 - \\cos t))" }, { "math_id": 35, "text": "X=\\cdots=r(t + \\sin t)" }, { "math_id": 36, "text": "Y=\\cdots=r(\\cos t - 1)" }, { "math_id": 37, "text": "\\kappa(s) = -s^{-3}" }, { "math_id": 38, "text": "(X, Y)= \\left(-y'\\frac{{x'}^2+{y'}^2}{x'y''-x''y'}\\; ,\\; x'\\frac{{x'}^2+{y'}^2}{x'y''-x''y'}\\right) ." } ]
https://en.wikipedia.org/wiki?curid=842387
842430
Inflaton
Hypothetical field that may have driven cosmic inflation The inflaton field is a hypothetical scalar field which is conjectured to have driven cosmic inflation in the very early universe. The field, originally postulated by Alan Guth, provides a mechanism by which a period of rapid expansion from 10−35 to 10−34 seconds after the initial expansion can be generated, forming a universe consistent with observed spatial isotropy and homogeneity. Cosmological inflation. The basic model of inflation proceeds in three phases: Expanding vacuum state with high potential energy. In quantum field theory, a vacuum state or vacuum is a state of quantum fields which is at locally minimal potential energy. Quantum particles are excitations which deviate from this minimal potential energy state, therefore a vacuum state has no particles in it. Depending on the specifics of a quantum field theory, it can have more than one vacuum state. Different vacua, despite all "being empty" (having no particles), will generally have different vacuum energy. Quantum field theory stipulates that the pressure of the vacuum energy is always negative and equal in magnitude to its energy density. Inflationary theory postulates that there is a vacuum state with very large vacuum energy, caused by a non-zero vacuum expectation value of the inflaton field. Any region of space in this state will rapidly expand. Even if initially it is not empty (contains some particles), very rapid exponential expansion dilutes particle density to essentially zero. Phase transition to true vacuum. Inflationary theory further postulates that this "inflationary vacuum" state is not the state with globally lowest energy; rather, it is a "false vacuum", also known as a "metastable" state. For each observer at any chosen point of space, the false vacuum eventually tunnels into a state with the same potential energy, but which is not a vacuum (it is not at a local minimum of the potential energy—it can "decay"). This state can be seen as a true vacuum, filled with a large number of inflaton particles. However, the rate of expansion of the true vacuum does not change at that moment: Only its exponential character changes to much slower expansion of the FLRW metric. This ensures that expansion rate precisely matches the energy density. Slow roll and reheating. In the true vacuum, inflaton particles decay, eventually giving rise to the observed Standard Model particles. The shape of the potential energy function near "tunnel exit" from false vacuum state must have a shallow slope, otherwise particle production would be confined to the boundary of expanding true vacuum bubble, which contradicts observation (our Universe is not built of huge completely void bubbles). In other words, the quantum state should "roll to the bottom slowly". When complete, the decay of inflaton particles fills the space with hot and dense Big Bang plasma. Field quanta. Just like every other quantum field, excitations of the inflaton field are expected to be quantized. The field quanta of the inflaton field are known as inflatons. Depending on the modeled potential energy density, the inflaton field's ground state might, or might not, be zero. The term "inflaton" follows the typical style of other quantum particles’ names – such as photon, gluon, boson, and fermion – deriving from the word "inflation". The term was first used in a paper by Nanopoulos, Olive, and Srednicki (1983). The nature of the inflaton field is currently not known. 
One of the obstacles for narrowing its properties is that current quantum theory is not able to correctly predict the observed vacuum energy, based on the particle content of a chosen theory (see vacuum catastrophe). Atkins (2012) suggested that it is possible that no new field is necessary – that a modified version of the Higgs field could function as an inflaton. Non-minimally coupled inflation. Non-minimally coupled inflation is an inflationary model in which the constant which couples gravity to the inflaton field is not small. The coupling constant is usually represented by formula_0 (letter "xi"), which features in the action (constructed by modifying the Einstein–Hilbert action): formula_1, with formula_0 representing the strength of the interaction between formula_2 and formula_3, which respectively relate to the curvature of space and the magnitude of the inflaton field. See also. &lt;templatestyles src="Div col/styles.css"/&gt;* Hubble's law References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\xi" }, { "math_id": 1, "text": "S = \\int d^4x \\sqrt{-g} \\left[ \\tfrac{1}{2} m_P^2 R - \\tfrac{1}{2}\\partial^{\\mu}\\phi \\partial_{\\mu}\\phi \n- V(\\phi) - \\tfrac{1}{2} \\xi R \\phi^2\\right]" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "\\phi" } ]
https://en.wikipedia.org/wiki?curid=842430
842493
High-electron-mobility transistor
Type of field-effect transistor A high-electron-mobility transistor (HEMT or HEM FET), also known as heterostructure FET (HFET) or modulation-doped FET (MODFET), is a field-effect transistor incorporating a junction between two materials with different band gaps (i.e. a heterojunction) as the channel instead of a doped region (as is generally the case for a MOSFET). A commonly used material combination is GaAs with AlGaAs, though there is wide variation, dependent on the application of the device. Devices incorporating more indium generally show better high-frequency performance, while in recent years, gallium nitride HEMTs have attracted attention due to their high-power performance. Like other FETs, HEMTs can be used in integrated circuits as digital on-off switches. FETs can also be used as amplifiers for large amounts of current using a small voltage as a control signal. Both of these uses are made possible by the FET’s unique current–voltage characteristics. HEMT transistors are able to operate at higher frequencies than ordinary transistors, up to millimeter wave frequencies, and are used in high-frequency products such as cell phones, satellite television receivers, voltage converters, and radar equipment. They are widely used in satellite receivers, in low power amplifiers and in the defense industry. Applications. The applications of HEMTs include microwave and millimeter wave communications, imaging, radar, radio astronomy, and power switching. They are found in many types of equipment ranging from cellphones, power supply adapters and DBS receivers to radio astronomy and electronic warfare systems such as radar systems. Numerous companies worldwide develop, manufacture, and sell HEMT-based devices in the form of discrete transistors, as 'monolithic microwave integrated circuits' (MMICs), or within power switching integrated circuits. HEMTs are suitable for applications where high gain and low noise at high frequencies are required, as they have shown current gain to frequencies greater than 600 GHz and power gain to frequencies greater than 1THz. Gallium nitride based HEMTs are used as power switching transistors for voltage converter applications due to their low on-state resistances, low switching losses, and high breakdown strength. These gallium nitride enhanced voltage converter applications include AC adapters, which benefit from smaller package sizes due to the power circuitry requiring smaller passive electronic components. History. The invention of the high-electron-mobility transistor (HEMT) is usually attributed to physicist Takashi Mimura (三村 高志), while working at Fujitsu in Japan. The basis for the HEMT was the GaAs (gallium arsenide) MOSFET (metal–oxide–semiconductor field-effect transistor), which Mimura had been researching as an alternative to the standard silicon (Si) MOSFET since 1977. He conceived the HEMT in Spring 1979, when he read about a modulated-doped heterojunction superlattice developed at Bell Labs in the United States, by Ray Dingle, Arthur Gossard and Horst Störmer who filed a patent in April 1978. Mimura filed a patent disclosure for a HEMT in August 1979, and then a patent later that year. The first demonstration of a HEMT device, the D-HEMT, was presented by Mimura and Satoshi Hiyamizu in May 1980, and then they later demonstrated the first E-HEMT in August 1980. Independently, Daniel Delagebeaudeuf and Tranc Linh Nuyen, while working at Thomson-CSF in France, filed a patent for a similar type of field-effect transistor in March 1979. 
It also cites the Bell Labs patent as an influence. The first demonstration of an "inverted" HEMT was presented by Delagebeaudeuf and Nuyen in August 1980. One of the earliest mentions of a GaN-based HEMT is in the 1993 "Applied Physics Letters" article, by Khan "et al". Later, in 2004, P.D. Ye and B. Yang "et al" demonstrated a GaN (gallium nitride) metal–oxide–semiconductor HEMT (MOS-HEMT). It used atomic layer deposition (ALD) aluminum oxide (Al2O3) film both as a gate dielectric and for surface passivation. Operation. Field effect transistors whose operation relies on the formation of a two-dimensional electron gas (2DEG) are known as HEMTs. In HEMTS electric current flows between a drain and source element via the 2DEG, which is located at the interface between two layers of differing band gaps, termed the heterojunction. Some examples of previously explored heterojunction layer compositions (heterostructures) for HEMTs include AlGaN/GaN, AlGaAs/GaAs, InGaAs/GaAs, and Si/SiGe. Advantages. The advantages of HEMTs over other transistor architectures, like the bipolar junction transistor and the MOSFET, are the higher operating temperatures, higher breakdown strengths, and lower specific on-state resistances, all in the case of GaN-based HEMTs compared to Si-based MOSFETs. Furthermore, InP-based HEMTs exhibit low noise performance and higher switching speeds. 2DEG channel creation. The wide band element is doped with donor atoms; thus it has excess electrons in its conduction band. These electrons will diffuse to the adjacent narrow band material’s conduction band due to the availability of states with lower energy. The movement of electrons will cause a change in potential and thus an electric field between the materials. The electric field will push electrons back to the wide band element’s conduction band. The diffusion process continues until electron diffusion and electron drift balance each other, creating a junction at equilibrium similar to a p–n junction. Note that the undoped narrow band gap material now has excess majority charge carriers. The fact that the charge carriers are majority carriers yields high switching speeds, and the fact that the low band gap semiconductor is undoped means that there are no donor atoms to cause scattering and thus yields high mobility. In the case of GaAs HEMTs, they make use of high mobility electrons generated using the heterojunction of a highly doped wide-bandgap n-type donor-supply layer (AlGaAs in our example) and a non-doped narrow-bandgap channel layer with no dopant impurities (GaAs in this case). The electrons generated in the thin n-type AlGaAs layer drop completely into the GaAs layer to form a depleted AlGaAs layer, because the heterojunction created by different band-gap materials forms a quantum well (a steep canyon) in the conduction band on the GaAs side where the electrons can move quickly without colliding with any impurities because the GaAs layer is undoped, and from which they cannot escape. The effect of this is the creation of a very thin layer of highly mobile conducting electrons with very high concentration, giving the channel very low resistivity (or to put it another way, "high electron mobility"). Electrostatic mechanism. Since GaAs has higher electron affinity, free electrons in the AlGaAs layer are transferred to the undoped GaAs layer where they form a two dimensional high mobility electron gas within 100 ångström (10 nm) of the interface. 
The n-type AlGaAs layer of the HEMT is depleted completely through two depletion mechanisms: The Fermi level of the gate metal is matched to the pinning point, which is 1.2 eV below the conduction band. With the reduced AlGaAs layer thickness, the electrons supplied by donors in the AlGaAs layer are insufficient to pin the layer. As a result, band bending is moving upward and the two-dimensional electrons gas does not appear. When a positive voltage greater than the threshold voltage is applied to the gate, electrons accumulate at the interface and form a two-dimensional electron gas. Modulation doping in HEMTs. An important aspect of HEMTs is that the band discontinuities across the conduction and valence bands can be modified separately. This allows the type of carriers in and out of the device to be controlled. As HEMTs require electrons to be the main carriers, a graded doping can be applied in one of the materials, thus making the conduction band discontinuity smaller and keeping the valence band discontinuity the same. This diffusion of carriers leads to the accumulation of electrons along the boundary of the two regions inside the narrow band gap material. The accumulation of electrons leads to a very high current in these devices. The term "modulation doping" refers to the fact that the dopants are spatially in a different region from the current carrying electrons. This technique was invented by Horst Störmer at Bell Labs. Manufacture. MODFETs can be manufactured by epitaxial growth of a strained SiGe layer. In the strained layer, the germanium content increases linearly to around 40-50%. This concentration of germanium allows the formation of a quantum well structure with a high conduction band offset and a high density of very mobile charge carriers. The end result is a FET with ultra-high switching speeds and low noise. InGaAs/AlGaAs, AlGaN/InGaN, and other compounds are also used in place of SiGe. InP and GaN are starting to replace SiGe as the base material in MODFETs because of their better noise and power ratios. Versions of HEMTs. By growth technology: pHEMT and mHEMT. Ideally, the two different materials used for a heterojunction would have the same lattice constant (spacing between the atoms). In practice, the lattice constants are typically slightly different (e.g. AlGaAs on GaAs), resulting in crystal defects. As an analogy, imagine pushing together two plastic combs with a slightly different spacing. At regular intervals, you'll see two teeth clump together. In semiconductors, these discontinuities form deep-level traps and greatly reduce device performance. A HEMT where this rule is violated is called a pHEMT or pseudomorphic HEMT. This is achieved by using an extremely thin layer of one of the materials – so thin that the crystal lattice simply stretches to fit the other material. This technique allows the construction of transistors with larger bandgap differences than otherwise possible, giving them better performance. Another way to use materials of different lattice constants is to place a buffer layer between them. This is done in the mHEMT or metamorphic HEMT, an advancement of the pHEMT. The buffer layer is made of AlInAs, with the indium concentration graded so that it can match the lattice constant of both the GaAs substrate and the GaInAs channel. 
This brings the advantage that practically any Indium concentration in the channel can be realized, so the devices can be optimized for different applications (low indium concentration provides low noise; high indium concentration gives high gain). By electrical behaviour: eHEMT and dHEMT. HEMTs made of semiconductor hetero-interfaces lacking interfacial net polarization charge, such as AlGaAs/GaAs, require positive gate voltage or appropriate donor-doping in the AlGaAs barrier to attract the electrons towards the gate, which forms the 2D electron gas and enables conduction of electron currents. This behaviour is similar to that of commonly used field-effect transistors in the enhancement mode, and such a device is called enhancement HEMT, or eHEMT. When a HEMT is built from AlGaN/GaN, higher power density and breakdown voltage can be achieved. Nitrides also have different crystal structure with lower symmetry, namely the wurtzite one, which has built-in electrical polarisation. Since this polarization differs between the GaN "channel" layer and AlGaN "barrier" layer, a sheet of uncompensated charge in the order of 0.01-0.03 C/mformula_0 is formed. Due to the crystal orientation typically used for epitaxial growth ("gallium-faced") and the device geometry favorable for fabrication (gate on top), this charge sheet is positive, causing the 2D electron gas to be formed even if there is no doping. Such a transistor is normally on, and will turn off only if the gate is negatively biased - thus this kind of HEMT is known as "depletion HEMT", or dHEMT. By sufficient doping of the barrier with acceptors (e.g. Mg), the built-in charge can be compensated to restore the more customary eHEMT operation, however high-density p-doping of nitrides is technologically challenging due to dopant diffusion into the channel. Induced HEMT. In contrast to a modulation-doped HEMT, an induced high electron mobility transistor provides the flexibility to tune different electron densities with a top gate, since the charge carriers are "induced" to the 2DEG plane rather than created by dopants. The absence of a doped layer enhances the electron mobility significantly when compared to their modulation-doped counterparts. This level of cleanliness provides opportunities to perform research into the field of Quantum Billiard for quantum chaos studies, or applications in ultra stable and ultra sensitive electronic devices. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "^2" } ]
https://en.wikipedia.org/wiki?curid=842493
842555
Enterprise application integration
Use of software for integration Enterprise application integration (EAI) is the use of software and computer systems' architectural principles to integrate a set of enterprise computer applications. Overview. Enterprise application integration is an integration framework composed of a collection of technologies and services which form a middleware or "middleware framework" to enable integration of systems and applications across an enterprise. Many types of business software such as supply chain management applications, ERP systems, CRM applications for managing customers, business intelligence applications, payroll, and human resources systems typically cannot communicate with one another in order to share data or business rules. For this reason, such applications are sometimes referred to as islands of automation or information silos. This lack of communication leads to inefficiencies, wherein identical data are stored in multiple locations, or straightforward processes are unable to be automated. Enterprise application integration is the process of linking such applications within a single organization together in order to simplify and automate business processes to the greatest extent possible, while at the same time avoiding having to make sweeping changes to the existing applications or data structures. Applications can be linked either at the back-end via APIs or (seldom) the front-end (GUI). In the words of research firm Gartner: "[EAI is] the unrestricted sharing of data and business processes among any connected application or data sources in the enterprise." The various systems that need to be linked together may reside on different operating systems, use different database solutions or computer languages, or different date and time formats, or could be legacy systems that are no longer supported by the vendor who originally created them. In some cases, such systems are dubbed "stovepipe systems" because they consist of components that have been jammed together in a way that makes it very hard to modify them in any way. Improving connectivity. If integration is applied without following a structured EAI approach, point-to-point connections grow across an organization. Dependencies are added on an impromptu basis, resulting in a complex structure that is difficult to maintain. This is commonly referred to as spaghetti, an allusion to the programming equivalent of spaghetti code. For example, the number of connections needed to have fully meshed point-to-point connections, with n points, is given by formula_0 (see binomial coefficient). Thus, for ten applications to be fully integrated point-to-point, formula_1 point-to-point connections are needed, following a quadratic growth pattern. However, the number of connections within organizations does not necessarily grow according to the square of the number of points. In general, the number of connections to any point is only limited by the number of other points in an organization, but can be significantly smaller in principle. EAI can also increase coupling between systems and therefore increase management overhead and costs. EAI is not just about sharing data between applications but also focuses on sharing both business data and business processes. A middleware analyst attending to EAI will often look at the system of systems. Purposes. EAI can be used for different purposes: Patterns. This section describes common design patterns for implementing EAI, including integration, access and lifetime patterns. 
These are abstract patterns and can be implemented in many different ways. There are many other patterns commonly used in the industry, ranging from high-level abstract design patterns to highly specific implementation patterns. Integration patterns. EAI systems implement two patterns: "mediation", in which the EAI system acts as a broker that keeps the integrated applications consistent with one another, and "federation", in which it acts as an overarching facade through which external parties access those applications. Both patterns are often used concurrently. The same EAI system could be keeping multiple applications in sync (mediation), while servicing requests from external users against these applications (federation). Access patterns. EAI supports both asynchronous (fire and forget) and synchronous access patterns, the former being typical in the mediation case and the latter in the federation case. Lifetime patterns. An integration operation could be short-lived (e.g., keeping data in sync across two applications could be completed within a second) or long-lived (e.g., one of the steps could involve the EAI system interacting with a human workflow application for approval of a loan that takes hours or days to complete). Topologies. There are two major topologies: hub-and-spoke, and bus. Each has its own advantages and disadvantages. In the hub-and-spoke model, the EAI system is at the center (the hub), and interacts with the applications via the spokes. In the bus model, the EAI system is the bus (or is implemented as a resident module in an already existing message bus or message-oriented middleware). Most large enterprises use zoned networks to create a layered defense against network-oriented threats. For example, an enterprise typically has a credit card processing (PCI-compliant) zone, a non-PCI zone, a data zone, a DMZ zone to proxy external user access, and an IWZ zone to proxy internal user access. Applications need to integrate across multiple zones. The hub-and-spoke model works better in this case. Technologies. Multiple technologies are used in implementing each of the components of the EAI system: Communication architectures. Currently, there are many variations of thought on what constitutes the best infrastructure, component model, and standards structure for Enterprise Application Integration. There seems to be a consensus that four components are essential for a modern enterprise application integration architecture: Although other approaches like connecting at the database or user-interface level have been explored, they have not been found to scale or be able to adjust. Individual applications can publish messages to the centralized broker and subscribe to receive certain messages from that broker. Each application only requires one connection to the broker. This central control approach can be extremely scalable and highly evolvable. Enterprise Application Integration is related to middleware technologies such as message-oriented middleware (MOM), and data representation technologies such as XML or JSON. Other EAI technologies involve using web services as part of service-oriented architecture as a means of integration. Enterprise Application Integration tends to be data-centric. In the near future, it will come to include content integration and business processes. Implementation pitfalls. In 2003 it was reported that 70% of all EAI projects fail. Most of these failures are not due to the software itself or technical difficulties, but due to management issues. Integration Consortium European Chairman Steve Craggs has outlined the seven main pitfalls encountered by companies using EAI systems and explains solutions to these problems. 
Other potential problems may arise in these areas: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
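The quadratic growth of point-to-point links described under "Improving connectivity" can be illustrated with a short, self-contained sketch (Python is used here purely for illustration; the function names are ad hoc and not part of any EAI product):

```python
# Link counts for integrating n applications: a full point-to-point mesh needs
# C(n, 2) = n(n-1)/2 connections, while a hub-and-spoke (broker) topology needs
# only one connection per application.
from math import comb

def point_to_point_links(n: int) -> int:
    """Number of links required to connect n applications pairwise."""
    return comb(n, 2)  # n(n-1)/2

def hub_and_spoke_links(n: int) -> int:
    """Number of links when every application connects only to a central broker."""
    return n

for n in (5, 10, 20, 50):
    print(f"{n:>3} apps: point-to-point = {point_to_point_links(n):>4}, "
          f"hub-and-spoke = {hub_and_spoke_links(n):>3}")
# For 10 applications this reproduces the 45 point-to-point links quoted above,
# against only 10 links to a central broker.
```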
[ { "math_id": 0, "text": "\\tbinom n 2 = \\tfrac{n(n-1)}{2}" }, { "math_id": 1, "text": "\\tfrac{10\\times9}{2} = 45" } ]
https://en.wikipedia.org/wiki?curid=842555
8426019
Negligible function
In mathematics, a negligible function is a function formula_0 such that for every positive integer "c" there exists an integer "N""c" such that for all "x" &gt; "N""c", formula_1 Equivalently, we may also use the following definition. A function formula_0 is negligible if for every positive polynomial poly(·) there exists an integer "N"poly &gt; 0 such that for all "x" &gt; "N"poly formula_2 History. The concept of "negligibility" traces back to the foundations of mathematical analysis. Though the concepts of "continuity" and "infinitesimal" became important in mathematics during Newton and Leibniz's time (1680s), they were not well-defined until the late 1810s. The first reasonably rigorous definition of "continuity" in mathematical analysis was due to Bernard Bolzano, who wrote the modern definition in 1817. Later, Cauchy, Weierstrass and Heine defined continuity as follows (with all numbers in the real number domain formula_3): (Continuous function) A function formula_4 is "continuous" at formula_5 if for every formula_6, there exists a positive number formula_7 such that formula_8 implies formula_9 This classic definition of continuity can be transformed into the definition of negligibility in a few steps by changing the parameters used in the definition. First, in the case formula_10 with formula_11, we must define the concept of "infinitesimal function": (Infinitesimal) A continuous function formula_12 is "infinitesimal" (as formula_13 goes to infinity) if for every formula_6 there exists formula_14 such that for all formula_15 formula_16 Next, we replace formula_6 by the functions formula_17 where formula_18 or by formula_19 where formula_20 is a positive polynomial. This leads to the definitions of negligible functions given at the top of this article. Since the constants formula_6 can be expressed as formula_19 with a constant polynomial, this shows that infinitesimal functions are a superset of negligible functions. Use in cryptography. In complexity-based modern cryptography, a security scheme is "provably secure" if the probability of security failure (e.g., inverting a one-way function, distinguishing cryptographically strong pseudorandom bits from truly random bits) is negligible in terms of the input formula_13 = cryptographic key length formula_21. Hence the definition at the top of this article takes the input to be a natural number, since the key length formula_21 is one. Nevertheless, the general notion of negligibility does not require that the input parameter formula_13 be the key length formula_21. Indeed, formula_13 can be any predetermined system metric, and the corresponding mathematical analysis would then illustrate some hidden analytical behaviors of the system. The reciprocal-of-polynomial formulation is used for the same reason that computational boundedness is defined as polynomial running time: it has mathematical closure properties that make it tractable in the asymptotic setting (see the closure properties below). For example, if an attack succeeds in violating a security condition only with negligible probability, and the attack is repeated a polynomial number of times, the success probability of the overall attack still remains negligible. In practice one might want to have more concrete functions bounding the adversary's success probability and to choose the security parameter large enough that this probability is smaller than some threshold, say 2⁻¹²⁸. Closure properties. 
One of the reasons that negligible functions are used in foundations of complexity-theoretic cryptography is that they obey closure properties. Specifically, if formula_22 are negligible, then the function formula_23 is negligible, and if formula_24 is negligible and formula_25 is a real polynomial, then the function formula_26 is negligible. Conversely, if formula_24 is not negligible, then neither is formula_27 for any real polynomial formula_25. Examples. Assume formula_35; we take the limit as formula_36: Negligible: Non-negligible: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
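As a rough numerical illustration of the definition (not part of the original article; the helper name and the cutoff n_max are arbitrary choices), the following Python sketch checks whether a candidate function eventually stays below 1/n^c for a few values of c:

```python
# mu(n) = 2**(-n) is negligible: for any fixed c, 2**(-n) < 1/n**c for all large
# enough n. By contrast, mu(n) = n**(-3) decays only inverse-polynomially, so the
# bound fails once c >= 3.

def threshold_n(mu, c, n_max=10_000):
    """Smallest n <= n_max with mu(m) < 1/m**c for every m from n to n_max, or None."""
    good_from = None
    for n in range(2, n_max + 1):
        if mu(n) < 1.0 / n**c:
            if good_from is None:
                good_from = n
        else:
            good_from = None  # the bound failed again, so restart the search
    return good_from

negligible = lambda n: 2.0 ** (-n)      # decays faster than any inverse polynomial
not_negligible = lambda n: n ** (-3.0)  # only inverse-polynomial decay

for c in (1, 2, 5, 10):
    print(c, threshold_n(negligible, c), threshold_n(not_negligible, c))
# A finite threshold exists for 2**(-n) at every c tried; for n**(-3) it
# disappears as soon as c >= 3.
```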
[ { "math_id": 0, "text": "\\mu:\\mathbb{N}\\to\\mathbb{R}" }, { "math_id": 1, "text": "|\\mu(x)|<\\frac{1}{x^c}." }, { "math_id": 2, "text": "|\\mu(x)|<\\frac 1 {\\operatorname{poly}(x)}." }, { "math_id": 3, "text": "\\mathbb{R}" }, { "math_id": 4, "text": "f:\\mathbb{R}{\\rightarrow}\\mathbb{R}" }, { "math_id": 5, "text": "x=x_0" }, { "math_id": 6, "text": "\\varepsilon>0" }, { "math_id": 7, "text": "\\delta>0" }, { "math_id": 8, "text": "|x-x_0|<\\delta" }, { "math_id": 9, "text": "|f(x)-f(x_0)|<\\varepsilon." }, { "math_id": 10, "text": "x_0=\\infty" }, { "math_id": 11, "text": "f(x_0)=0" }, { "math_id": 12, "text": "\\mu:\\mathbb{R}\\to\\mathbb{R}" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "N_\\varepsilon" }, { "math_id": 15, "text": "x>N_\\varepsilon" }, { "math_id": 16, "text": "|\\mu(x)|<\\varepsilon\\,." }, { "math_id": 17, "text": "1/x^c" }, { "math_id": 18, "text": "c>0" }, { "math_id": 19, "text": "1/\\operatorname{poly}(x)" }, { "math_id": 20, "text": "\\operatorname{poly}(x)" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "f,g:\\mathbb{N}\\to\\mathbb{R}" }, { "math_id": 23, "text": "x\\mapsto f(x)+g(x)" }, { "math_id": 24, "text": "f:\\mathbb{N}\\to\\mathbb{R}" }, { "math_id": 25, "text": "p" }, { "math_id": 26, "text": "x\\mapsto p(x)\\cdot f(x)" }, { "math_id": 27, "text": "x\\mapsto f(x)/p(x)" }, { "math_id": 28, "text": " n \\mapsto a^{-n}" }, { "math_id": 29, "text": "a\\geq 2" }, { "math_id": 30, "text": " f(n) = 3^{-\\sqrt{n}}" }, { "math_id": 31, "text": " f(n) = n^{-\\log n}" }, { "math_id": 32, "text": " f(n) = (\\log n)^{-\\log n}" }, { "math_id": 33, "text": " f(n) = 2^{-c \\log n}" }, { "math_id": 34, "text": "c" }, { "math_id": 35, "text": " n > 0 " }, { "math_id": 36, "text": " n \\to \\infty" }, { "math_id": 37, "text": " f(n) = 1/x^{n/2} " }, { "math_id": 38, "text": " f(n) = 1/x^{\\log{(n^k)}} " }, { "math_id": 39, "text": " k \\geq 1 " }, { "math_id": 40, "text": " f(n) = 1/x^{({\\log{n})}^k} " }, { "math_id": 41, "text": " f(n) = 1/x^{\\sqrt{n}} " }, { "math_id": 42, "text": " f(n) = \\frac{1}{n^{\\frac{1}{n}}} " }, { "math_id": 43, "text": " f(n) = \\frac{1}{x^{n(\\log{n})}} " } ]
https://en.wikipedia.org/wiki?curid=8426019
8426172
Distance from a point to a plane
Length in solid geometry In Euclidean space, the distance from a point to a plane is the distance between a given point and its orthogonal projection on the plane, the perpendicular distance to the nearest point on the plane. It can be found starting with a change of variables that moves the origin to coincide with the given point then finding the point on the shifted plane formula_0 that is closest to the origin. The resulting point has Cartesian coordinates formula_1: formula_2. The distance between the origin and the point formula_1 is formula_3. Converting general problem to distance-from-origin problem. Suppose we wish to find the nearest point on a plane to the point (formula_4), where the plane is given by formula_5. We define formula_6, formula_7, formula_8, and formula_9, to obtain formula_0 as the plane expressed in terms of the transformed variables. Now the problem has become one of finding the nearest point on this plane to the origin, and its distance from the origin. The point on the plane in terms of the original coordinates can be found from this point using the above relationships between formula_10 and formula_11, between formula_12 and formula_13, and between formula_14 and formula_15; the distance in terms of the original coordinates is the same as the distance in terms of the revised coordinates. Restatement using linear algebra. The formula for the closest point to the origin may be expressed more succinctly using notation from linear algebra. The expression formula_16 in the definition of a plane is a dot product formula_17, and the expression formula_18 appearing in the solution is the squared norm formula_19. Thus, if formula_20 is a given vector, the plane may be described as the set of vectors formula_21 for which formula_22 and the closest point on this plane to the origin is the vector formula_23. The Euclidean distance from the origin to the plane is the norm of this point, formula_24. Why this is the closest point. In either the coordinate or vector formulations, one may verify that the given point lies on the given plane by plugging the point into the equation of the plane. To see that it is the closest point to the origin on the plane, observe that formula_25 is a scalar multiple of the vector formula_26 defining the plane, and is therefore orthogonal to the plane. Thus, if formula_27 is any point on the plane other than formula_25 itself, then the line segments from the origin to formula_25 and from formula_25 to formula_27 form a right triangle, and by the Pythagorean theorem the distance from the origin to formula_28 is formula_29. Since formula_30 must be a positive number, this distance is greater than formula_31, the distance from the origin to formula_25. Alternatively, it is possible to rewrite the equation of the plane using dot products with formula_25 in place of the original dot product with formula_26 (because these two vectors are scalar multiples of each other) after which the fact that formula_25 is the closest point becomes an immediate consequence of the Cauchy–Schwarz inequality. Closest point and distance for a hyperplane and arbitrary point. The vector equation for a hyperplane in formula_32-dimensional Euclidean space formula_33 through a point formula_25 with normal vector formula_34 is formula_35 or formula_36 where formula_37. The corresponding Cartesian form is formula_38 where formula_39. 
The closest point on this hyperplane to an arbitrary point formula_40 is formula_41 and the distance from formula_40 to the hyperplane is formula_42. Written in Cartesian form, the closest point is given by formula_43 for formula_44 where formula_45, and the distance from formula_40 to the hyperplane is formula_46. Thus in formula_47, the point on a plane formula_48 closest to an arbitrary point formula_49 is the point formula_1 given by formula_50 where formula_51, and the distance from the point to the plane is formula_52. (As given, formula_51 and formula_52 correspond to the plane written in the form ax + by + cz + d = 0; for the plane formula_48 the sign of d in both expressions is reversed, consistent with the hyperplane formulas above.) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
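The closest-point and distance formulas above translate directly into a few lines of code. The following Python sketch uses the convention that the plane is written as ax + by + cz = d, matching the hyperplane formulas (the function name and the sample plane are illustrative only):

```python
# Orthogonal projection of a point onto the plane a*x + b*y + c*z = d, and the
# perpendicular distance from the point to that plane.
import math

def closest_point_and_distance(a, b, c, d, y):
    """Return (projection of y onto the plane, distance from y to the plane)."""
    n = (a, b, c)
    n_dot_n = a*a + b*b + c*c            # squared norm of the normal vector
    k = (a*y[0] + b*y[1] + c*y[2] - d) / n_dot_n
    closest = tuple(y[i] - k * n[i] for i in range(3))
    distance = abs(a*y[0] + b*y[1] + c*y[2] - d) / math.sqrt(n_dot_n)
    return closest, distance

# Example: plane x + 2y + 2z = 9 and point (1, 1, 1).
point, dist = closest_point_and_distance(1, 2, 2, 9, (1, 1, 1))
print(point, dist)  # (13/9, 17/9, 17/9) lies on the plane; dist = |1+2+2-9|/3 = 4/3
```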
[ { "math_id": 0, "text": "ax + by + cz = d" }, { "math_id": 1, "text": "(x,y,z)" }, { "math_id": 2, "text": "\\displaystyle x = \\frac {ad}{{a^2+b^2+c^2}}, \\quad \\quad \\displaystyle y = \\frac {bd}{{a^2+b^2+c^2}}, \\quad \\quad \\displaystyle z = \\frac {cd}{{a^2+b^2+c^2}}" }, { "math_id": 3, "text": "\\sqrt{x^2+y^2+z^2}" }, { "math_id": 4, "text": "X_0, Y_0, Z_0" }, { "math_id": 5, "text": "aX + bY + cZ = D" }, { "math_id": 6, "text": "x = X - X_0" }, { "math_id": 7, "text": "y = Y - Y_0" }, { "math_id": 8, "text": "z = Z - Z_0" }, { "math_id": 9, "text": "d = D - aX_0 - bY_0 - cZ_0" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "y" }, { "math_id": 13, "text": "Y" }, { "math_id": 14, "text": "z" }, { "math_id": 15, "text": "Z" }, { "math_id": 16, "text": "ax+by+cz" }, { "math_id": 17, "text": "(a,b,c)\\cdot(x,y,z)" }, { "math_id": 18, "text": "a^2+b^2+c^2" }, { "math_id": 19, "text": "|(a,b,c)|^2" }, { "math_id": 20, "text": "\\mathbf{v}=(a,b,c)" }, { "math_id": 21, "text": "\\mathbf{w}" }, { "math_id": 22, "text": "\\mathbf{v}\\cdot\\mathbf{w}=d" }, { "math_id": 23, "text": "\\mathbf{p}=\\frac{\\mathbf{v}d}{|\\mathbf{v}|^2}" }, { "math_id": 24, "text": "\\frac{|d|}{|\\mathbf{v}|} = \\frac{|d|}{\\sqrt{a^2+b^2+c^2}}" }, { "math_id": 25, "text": "\\mathbf{p}" }, { "math_id": 26, "text": "\\mathbf{v}" }, { "math_id": 27, "text": "\\mathbf{q}" }, { "math_id": 28, "text": "q" }, { "math_id": 29, "text": "\\sqrt{|\\mathbf{p}|^2+|\\mathbf{p}-\\mathbf{q}|^2}" }, { "math_id": 30, "text": "|\\mathbf{p}-\\mathbf{q}|^2" }, { "math_id": 31, "text": "|\\mathbf{p}|" }, { "math_id": 32, "text": "n" }, { "math_id": 33, "text": "\\mathbb{R}^n" }, { "math_id": 34, "text": "\\mathbf{a} \\ne \\mathbf{0}" }, { "math_id": 35, "text": "(\\mathbf{x}-\\mathbf{p})\\cdot\\mathbf{a} = 0" }, { "math_id": 36, "text": "\\mathbf{x}\\cdot\\mathbf{a}=d" }, { "math_id": 37, "text": "d=\\mathbf{p}\\cdot\\mathbf{a}" }, { "math_id": 38, "text": "a_1x_1+a_2x_2+\\cdots+a_nx_n=d" }, { "math_id": 39, "text": "d=\\mathbf{p}\\cdot\\mathbf{a}=a_1p_1+a_2p_2+\\cdots a_np_n" }, { "math_id": 40, "text": "\\mathbf{y}" }, { "math_id": 41, "text": "\\mathbf{x}=\\mathbf{y}-\\left[\\dfrac{(\\mathbf{y}-\\mathbf{p})\\cdot\\mathbf{a}}{\\mathbf{a}\\cdot\\mathbf{a}}\\right]\\mathbf{a}=\\mathbf{y}-\\left[\\dfrac{\\mathbf{y}\\cdot\\mathbf{a}-d}{\\mathbf{a}\\cdot\\mathbf{a}}\\right]\\mathbf{a}" }, { "math_id": 42, "text": "\\left\\|\\mathbf{x}-\\mathbf{y}\\right\\| = \\left\\|\\left[\\dfrac{(\\mathbf{y}-\\mathbf{p})\\cdot\\mathbf{a}}{\\mathbf{a}\\cdot\\mathbf{a}}\\right]\\mathbf{a}\\right\\|=\\dfrac{\\left|(\\mathbf{y}-\\mathbf{p})\\cdot\\mathbf{a}\\right|}{\\left\\|\\mathbf{a}\\right\\|}=\\dfrac{\\left|\\mathbf{y}\\cdot\\mathbf{a}-d\\right|}{\\left\\|\\mathbf{a}\\right\\|}" }, { "math_id": 43, "text": "x_i=y_i-ka_i" }, { "math_id": 44, "text": "1\\le i\\le n" }, { "math_id": 45, "text": "k=\\dfrac{\\mathbf{y}\\cdot\\mathbf{a}-d}{\\mathbf{a}\\cdot\\mathbf{a}}=\\dfrac{a_1y_1+a_2y_2+\\cdots a_ny_n-d}{a_1^2+a_2^2+\\cdots a_n^2}" }, { "math_id": 46, "text": "\\dfrac{\\left|a_1y_1+a_2y_2+\\cdots a_ny_n-d\\right|}{\\sqrt{a_1^2+a_2^2+\\cdots a_n^2}}" }, { "math_id": 47, "text": "\\mathbb{R}^3" }, { "math_id": 48, "text": "ax+by+cz=d" }, { "math_id": 49, "text": "(x_1,y_1,z_1)" }, { "math_id": 50, "text": "\\left.\\begin{array}{l}x=x_1-ka\\\\y=y_1-kb\\\\z=z_1-kc\\end{array}\\right\\}" }, { "math_id": 51, "text": "k=\\dfrac{ax_1+by_1+cz_1+d}{a^2+b^2+c^2}" }, { "math_id": 52, "text": 
"\\dfrac{\\left|ax_1+by_1+cz_1+d\\right|}{\\sqrt{a^2+b^2+c^2}}" } ]
https://en.wikipedia.org/wiki?curid=8426172
8426318
Four Pillars of Destiny
A form of fortune-telling that originated in China The Four Pillars of Destiny, also known as "Ba-Zi", which means "eight characters" or "eight words" in Chinese, is a Chinese astrological concept that a person's destiny or fate can be divined by the two sexagenary cycle characters assigned to their birth year, month, day, and hour. This type of cosmological astrology is also widely used in South Korea, Japan and Vietnam. Development. Four Pillars of Destiny can be dated back to the Han Dynasty, but it was not yet as systematic as it is known today. Method. Days, hours, months, and years are all assigned one of the ten Celestial Stems (Chinese: 十天干) and one of the twelve Terrestrial Branches (Chinese: 十二地支) in the sexagenary cycle. A person's fortune is determined by looking up the branch and stem characters for each of these four parts of their birth time, in relation to the 10-year luck cycle (Chinese: 十年大运). Schools. The schools are the Scholarly School (學院派, "Xué Yuàn Pài") and the Professional School (江湖派, "Jiāng Hú Pài"). The Scholarly School began with "Xú Zi Píng" (徐子平) at the beginning of the Song Dynasty. Xú established the theoretical basis of the system. Representatives of this school and their publications include: In Japan. Definitions. "Shō-Kan" is also the relative pronoun among the "Heavenly Stems". A birthday in the Chinese calendar will be written "甲子", "甲戌", "甲申", "甲午", "甲辰", "甲寅", whereas the will belong to the Shō-Kan. When the Heavenly Stem of a birthday in the Chinese calendar is "甲", the "丁" acts as a Shō-Kan factor, as follows: Example. The chart is as follows: The main structure of Hirohito's chart is "傷官" (Shō-Kan), "格". The day of 丁 (in the Chinese calendar) meets April, the month of , the month of "戊", so that we get the Shō-Kan. The most important element and worker in his chart is the "甲" or "乙". The Inju is also the worker which controls Shō-Kan. In 1945, in the year of "乙酉", the Inju has no effect. The Heavenly Stem "乙" is in . Additionally, the "Dai Un" (Japan's own long-term history) is as follows. The beginning of April in the Lunar calendar is the fifth day, so there are 24 days from day 5 to Hirohito's birthday. One month is equivalent to ten years in "Dai Un", and the 24 days are equivalent to eight years. Events in the historical timeline corresponding to his life from age eight to 18 are as follows. From the age of 8 to the age of 18: 辛卯 Advocates of the Shō-Kan system believe that Hirohito's chart somehow explains the defeat of Japan in World War II after the catastrophic atomic bomb explosions at Hiroshima and Nagasaki. Periodicity of Four Pillars. The problem of periodicity of four pillars is a problem in calendrical arithmetic, but most fortune tellers are unable to handle the mathematics correctly. Hee, for example, proposed that it takes 240 years for a given four-pillar quadruplet to repeat itself. On p. 22, Hee wrote, ... because of the numerous possible combinations, it takes 60 years for the same set of year pillars to repeat itself (by comparison, a set of month pillars repeats itself after just five years). Therefore, if you have a certain day and time, the set of four pillars will repeat itself in 60 years. However, since the same day may not appear in exactly the same month – and even if it is in the same month, the day may not be found in the same half month – it takes 240 years before the identical four pillars appear again ... Hee's proposal is incorrect and can be easily refuted by a counterexample. 
For example, the four-pillar quadruplets for 1984-3-18 and 2044-3-3 are exactly the same (i.e. 甲子-丁卯-辛亥-xx) and they are spaced by only 60 years. But the next iso-quadruplet will reappear only after 360 years (on 2404-4-5). Furthermore, a periodicity of about 1980 years is needed in order to match both the sexagenary cycle and the Gregorian cycle. For example, 4-3-18, 1980-3-18, and 3964-3-18 share the same four-pillar quadruplet. The solution to the iso-Gregorian quadruplet is a Diophantine problem. Suppose that the gap, formula_0, between two successive occurrences of a four-pillar quadruplet is irregular and is given by formula_1 and suppose that formula_2 and formula_3 are two successive rata die numbers with identical Gregorian month and day; then it can be shown that the interval formula_4 is given by formula_5 For formula_0 and formula_4 to coincide, we need to solve formula_6 to which one of the solutions is formula_7 Therefore formula_8 days, or about 1980 Gregorian years. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
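The day-count arithmetic behind the 723180-day figure can be checked mechanically. The following Python sketch assumes that the span decomposes into 1500 common years and 480 leap years and uses the mean Gregorian year of 365.2425 days; it is a plausibility check of the arithmetic, not a full Four Pillars calculator:

```python
# Check the proposed gap g = 60*(365*33 + 8) days: it is a whole number of
# 60-day sexagenary day cycles, it can be written as 365*l3 + 366*l4 for a
# plausible split into common and leap years, and it is close to 1980 years.
GREGORIAN_YEAR_DAYS = 365.2425          # mean Gregorian year length

g = 60 * (365 * 33 + 8)                 # 723180 days
l3, l4 = 1500, 480                      # assumed common-year and leap-year counts
assert g % 60 == 0                      # same day pillar needs a multiple of 60 days
assert 365 * l3 + 366 * l4 == g         # the Diophantine equation is satisfied
print(g, g / GREGORIAN_YEAR_DAYS)       # 723180 days, roughly 1980.0 Gregorian years
```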
[ { "math_id": 0, "text": "g" }, { "math_id": 1, "text": "g = 60(365\\lambda_1 + \\lambda_2)" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "f + g'" }, { "math_id": 4, "text": "g'" }, { "math_id": 5, "text": "g' = 365\\lambda_3 + 366\\lambda_4." }, { "math_id": 6, "text": "60(365\\lambda_1 + \\lambda_2) = 365\\lambda_3 + 366\\lambda_4," }, { "math_id": 7, "text": "(\\lambda_1, \\lambda_2, \\lambda_3, \\lambda_4) = (33, 8, 1500, 480)." }, { "math_id": 8, "text": "g = 60(365 \\times 33 + 8) = 723180" } ]
https://en.wikipedia.org/wiki?curid=8426318
8427933
List of artificial objects leaving the Solar System
Several space probes and the upper stages of their launch vehicles are leaving the Solar System, all of which were launched by NASA. Three of the probes, "Voyager 1", "Voyager 2", and "New Horizons", are still functioning and are regularly contacted by radio communication, while "Pioneer 10" and "Pioneer 11" are now defunct. In addition to these spacecraft, some upper stages and de-spin weights are leaving the Solar System, assuming they continue on their trajectories. These objects are leaving the Solar System because their velocity and direction are taking them away from the Sun, and at their distance from the Sun, its gravitational pull is not sufficient to pull these objects back or into orbit. They are not impervious to the gravitational pull of the Sun and are being slowed, but are still traveling in excess of escape velocity to leave the Solar System and coast into interstellar space. Planetary exploration probes. Although other probes were launched first, "Voyager 1" has achieved a higher speed and overtaken all others. "Voyager 1" overtook "Voyager 2" a few months after launch, on December 19, 1977. It overtook "Pioneer 11" in 1981, and then "Pioneer 10"—becoming the probe farthest from the Sun—on February 17, 1998. "Voyager 2" is moving faster than all other probes launched before it; it overtook "Pioneer 11" in the late 1980s and then "Pioneer 10" — becoming the second-farthest spacecraft from the Sun — on July 18, 2023. Depending on how the "Pioneer" anomaly affects it, "New Horizons" will also probably pass the "Pioneer" probes, but will need many years to do so. It will overtake "Pioneer 11" in 2143, and will overtake "Pioneer 10" in 2314, but will never overtake the "Voyagers". Speed and distance from the Sun. To put the distances in the table in context, Pluto's average distance (semi-major axis) is about 40 AU. Solar escape velocity is a function of distance (r) from the Sun's center, given by formula_0 where the product "G" "Msun" is the heliocentric gravitational parameter. The initial speed required to escape the Sun from its surface is about 618 km/s, and drops down to about 42.1 km/s at Earth's distance from the Sun (1 AU), and to about 4.2 km/s at a distance of 100 AU. In order to leave the Solar System, the probe needs to reach the local escape velocity. After leaving Earth, the Sun's escape velocity is 42.1 km/s. In order to reach this speed, it is highly advantageous to also use the orbital speed of the Earth around the Sun, which is 29.78 km/s. By later passing near a planet, a probe can gain extra speed with a gravity assist. Propulsion stages. Every planetary probe was placed into its escape trajectory by a multistage rocket, the last stage of which ends up on nearly the same trajectory as the probe it launched. Because these stages cannot be actively guided, their trajectories are now different from the probes they launched (the probes having been guided with small thrusters that allowed course changes). However, in cases where the spacecraft acquired escape velocity because of a gravity assist, the stages may not have a similar course and there is the extremely remote possibility that they collided with something. The stages on an escape trajectory are: In addition, two small yo-yo de-spin weights on wires were used to reduce the spin of the "New Horizons" probe prior to its release from the third-stage rocket. Once the spin rate was lowered, these masses and the wires were released, and so are also on an escape trajectory out of the Solar System. 
None of the above objects are trackable – they have no power or radio antennas, spin uncontrollably, and are too small to be detected. Their exact positions are unknowable beyond their projected Solar System escape trajectories. The third stage of "Pioneer 11" is thought to be in solar orbit because its encounter with Jupiter would not have resulted in escape from the Solar System. "Pioneer 11" gained the required velocity to escape the Solar System in its subsequent encounter with Saturn. On January 19, 2006, the "New Horizons" spacecraft to Pluto was launched directly into a solar-escape trajectory at about 16.26 km/s from Cape Canaveral using an Atlas V, with its Common Core Booster first stage, a Centaur upper stage, and a Star 48B third stage. "New Horizons" passed the Moon's orbit in just nine hours. The subsequent encounter with Jupiter only increased its velocity, and enabled the probe to arrive at Pluto three years earlier than without this encounter. Thus the only objects to date to be launched "directly" into a solar escape trajectory were the "New Horizons" spacecraft, its third stage, and the two de-spin masses. The "New Horizons" Centaur (second) stage is not escaping; it is in a 2.83-year heliocentric (solar) orbit. The "Pioneer 10" and "11", and "Voyager 1" and "2" Centaur (second) stages are also in heliocentric orbits. Future. Given the huge emptiness of interstellar space, all the objects listed here are likely to continue into deep space in timelines that, barring the exceptionally unlikely chance of their colliding with (or being collected by) another object, could outlast even the Sun's main-sequence lifetime, billions of years hence. One estimate of the timescale for the Pioneer or Voyager spacecraft to collide with a star (or stellar remnant) is 10²⁰ (100 quintillion) years. They are very unlikely, however, to gain enough velocity to escape the Milky Way galaxy (or its future merger with the Andromeda galaxy) into intergalactic space. "Ulysses". In 1990, the solar probe "Ulysses" was launched towards Jupiter in order to reach a high-inclination heliocentric orbit over the Sun's poles; the spacecraft was shut down in 2008. "Ulysses" is currently in a 79° inclination orbit around the Sun with its apoapsis crossing the orbit of Jupiter. In November 2098, it will have another close fly-by with Jupiter, crossing between the orbits of Europa and Ganymede. After this slingshot maneuver, it will possibly enter a hyperbolic trajectory around the Sun and eventually leave the Solar System. "Ulysses" is now switched off, as its RTG power supply has run down, and it has been uncontactable since 2009; it cannot be tracked or guided in any way. Its exact trajectory is therefore unknowable as factors such as solar radiation pressure could significantly alter its encounter path. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
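The escape-velocity relation quoted under "Speed and distance from the Sun" is easy to evaluate numerically. The following Python sketch uses standard reference values for the heliocentric gravitational parameter, the solar radius and the astronomical unit; these constants are reference figures, not values taken from the article itself:

```python
# v_e = sqrt(2*G*M_sun / r), evaluated at the solar surface, 1 AU and 100 AU.
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational parameter, m^3/s^2
R_SUN = 6.957e8             # nominal solar radius, m
AU = 1.495978707e11         # astronomical unit, m

def solar_escape_velocity(r_m: float) -> float:
    """Escape speed in m/s from the Sun at a distance r_m metres from its centre."""
    return math.sqrt(2.0 * GM_SUN / r_m)

for label, r in (("solar surface", R_SUN), ("1 AU", AU), ("100 AU", 100 * AU)):
    print(f"{label:>13}: {solar_escape_velocity(r) / 1000:.1f} km/s")
# Roughly 618 km/s, 42.1 km/s and 4.2 km/s respectively, matching the values above.
```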
[ { "math_id": 0, "text": "v_e = \\sqrt{\\frac{2GM_\\text{sun}}{r}}," } ]
https://en.wikipedia.org/wiki?curid=8427933
8429
Density
Mass per unit volume Density (volumetric mass density or specific mass) is a substance's mass per unit of volume. The symbol most often used for density is "ρ" (the lower case Greek letter rho), although the Latin letter "D" can also be used. Mathematically, density is defined as mass divided by volume: formula_0 where "ρ" is the density, "m" is the mass, and "V" is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its weight per unit volume, although this is scientifically inaccurate – this quantity is more specifically called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium is the densest known element at standard conditions for temperature and pressure. To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a relative density less than one relative to water means that the substance floats in water. The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid, which causes it to rise relative to denser unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass. Other conceptually comparable quantities or ratios include specific density, relative density (specific gravity), and specific weight. History. Density, floating, and sinking. The understanding that different materials have different densities, and of a relationship between density, floating, and sinking must date to prehistoric times. Much later it was put in writing. Aristotle, for example, wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There is so great a difference in density between salt and fresh water that vessels laden with cargoes of the same weight almost sink in rivers, but ride quite easily at sea and are quite seaworthy. And an ignorance of this has sometimes cost people dear who load their ships in rivers. The following is a proof that the density of a fluid is greater when a substance is mixed with it. If you make water very salt by mixing salt in with it, eggs will float on it. ... If there were any truth in the stories they tell about the lake in Palestine it would further bear out what I say. For they say if you bind a man or beast and throw him into it he floats and does not sink beneath the surface. Volume vs. density; volume of an irregular shape. 
In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (). As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' "books of architecture", two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. Nevertheless, in 1586, Galileo Galilei, in one of his first experiments, made a possible reconstruction of how the experiment could have been performed with ancient Greek resources Units. From the equation for density ("ρ" = "m"/"V"), mass density has any unit that is "mass divided by volume". As there are many units of mass and volume covering many different magnitudes there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density. One g/cm3 is equal to 1000 kg/m3. One cubic centimetre (abbreviation cc) is equal to one millilitre. In industry, other larger or smaller units of mass and or volume are often more practical and US customary units may be used. See below for a list of some of the most common units of density. The litre and tonne are not part of the SI, but are acceptable for use with it, leading to the following units: Densities using the following metric units all have exactly the same numerical value, one thousandth of the value in (kg/m3). Liquid water has a density of about 1 kg/dm3, making any of these SI units numerically convenient to use as most solids and liquids have densities between 0.1 and 20 kg/dm3. In US customary units density can be stated in: Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) in practice are rarely used, though found in older documents. The Imperial gallon was based on the concept that an Imperial fluid ounce of water would have a mass of one Avoirdupois ounce, and indeed 1 g/cm3 ≈ 1.00224129 ounces per Imperial fluid ounce = 10.0224129 pounds per Imperial gallon. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion. Knowing the volume of the unit cell of a crystalline material and its formula weight (in daltons), the density can be calculated. One dalton per cubic ångström is equal to a density of 1.660 539 066 60 g/cm3. Measurement. A number of techniques as well as standards exist for the measurement of density of materials. 
Such techniques include the use of a hydrometer (a buoyancy method for liquids), Hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). However, each individual method or technique measures different types of density (e.g. bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question. Homogeneous materials. The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used, respectively. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object. Heterogeneous materials. If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume the density of an inhomogeneous object at a point becomes: formula_1, where formula_2 is an elementary volume at position formula_3. The mass of the body then can be expressed as formula_4 Non-compact materials. In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules. Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture. The "bulk volume" of a material —inclusive of the void space fraction— is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions. Mass divided by bulk volume determines "bulk density". This is not the same thing as the material volumetric mass density. To determine the material volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a "variable" void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling. In practice, the void fraction is not necessarily air, or even gaseous. In the case of sand, it could be water, which can be advantageous for measurement as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void. In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface) the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. 
In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand). Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two voids materials is reliably known. Changes of density. In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures. The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10−6 bar−1 (1 bar = 0.1 MPa) and a typical thermal expansivity is 10−5 K−1. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius. In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is formula_5 where "M" is the molar mass, "P" is the pressure, "R" is the universal gas constant, and "T" is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature. In the case of volumic thermal expansion at constant pressure and small intervals of temperature the temperature dependence of density is formula_6 where formula_7 is the density at a reference temperature, formula_8 is the thermal expansion coefficient of the material at temperatures close to formula_9. Density of solutions. The density of a solution is the sum of mass (massic) concentrations of the components of that solution. Mass (massic) concentration of each given component formula_10 in a solution sums to density of the solution, formula_11 Expressed as a function of the densities of pure components of the mixture and their volume participation, it allows the determination of excess molar volumes: formula_12 provided that there is no interaction between the components. Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients: formula_13 List of densities. Various materials. &lt;templatestyles src="Reflist/styles.css" /&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
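As a small worked illustration of the relations given in the section on changes of density above, the Python sketch below evaluates the ideal-gas expression and the thermal-expansion correction; the gas and material parameters chosen (dry air, aluminium) are assumed example values.

    # Sketch of the density relations discussed above (illustrative values only).

    R = 8.314462618          # universal gas constant, J/(mol*K)

    def ideal_gas_density(molar_mass_kg_per_mol, pressure_pa, temperature_k):
        """Density of an ideal gas, rho = M*P / (R*T)."""
        return molar_mass_kg_per_mol * pressure_pa / (R * temperature_k)

    def density_after_heating(rho_ref, alpha_per_k, delta_t_k):
        """Volumetric thermal expansion: rho = rho_ref / (1 + alpha * dT)."""
        return rho_ref / (1.0 + alpha_per_k * delta_t_k)

    # Dry air at atmospheric pressure and 20 degC (assumed values):
    print(ideal_gas_density(0.0289647, 101325.0, 293.15))   # about 1.2 kg/m^3

    # Aluminium heated by 100 K, using an assumed volumetric
    # expansion coefficient of about 6.9e-5 per kelvin:
    print(density_after_heating(2700.0, 6.9e-5, 100.0))     # about 2681 kg/m^3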
[ { "math_id": 0, "text": " \\rho = \\frac{m}{V}," }, { "math_id": 1, "text": "\\rho(\\vec{r}) = dm / dV" }, { "math_id": 2, "text": "dV" }, { "math_id": 3, "text": "\\vec r" }, { "math_id": 4, "text": " m = \\int_V \\rho(\\vec{r})\\,dV. " }, { "math_id": 5, "text": "\\rho = \\frac {MP}{RT}," }, { "math_id": 6, "text": "\\rho = \\frac{\\rho_{T_0}}{1 + \\alpha \\cdot \\Delta T}," }, { "math_id": 7, "text": "\\rho_{T_0}" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "T_0" }, { "math_id": 10, "text": "\\rho_i" }, { "math_id": 11, "text": "\\rho = \\sum_i \\rho_i ." }, { "math_id": 12, "text": "\\rho = \\sum_i \\rho_i \\frac{V_i}{V}\\, = \\sum_i \\rho_i \\varphi_i = \\sum_i \\rho_i \\frac{V_i}{\\sum_i V_i + \\sum_i {V^E}_i}," }, { "math_id": 13, "text": "\\overline{V^E}_i = RT \\frac{\\partial\\ln\\gamma_i}{\\partial P}." } ]
https://en.wikipedia.org/wiki?curid=8429
843192
Nephroid
Plane curve; an epicycloid with radii differing by 1/2 In geometry, a nephroid (from grc " "ὁ νεφρός" (ho nephros)" 'kidney-shaped') is a specific plane curve. It is a type of epicycloid in which the smaller circle's radius differs from the larger one by a factor of one-half. Name. Although the term "nephroid" was used to describe other curves, it was applied to the curve in this article by Richard A. Proctor in 1878. Strict definition. A nephroid is Equations. Parametric. If the small circle has radius formula_0, the fixed circle has midpoint formula_1 and radius formula_2, the rolling angle of the small circle is formula_3 and point formula_4 the starting point (see diagram) then one gets the parametric representation: formula_5 formula_6 The complex map formula_7 maps the unit circle to a nephroid Proof of the parametric representation. The proof of the parametric representation is easily done by using complex numbers and their representation as complex plane. The movement of the small circle can be split into two rotations. In the complex plane a rotation of a point formula_8 around point formula_9 (origin) by an angle formula_10 can be performed by the multiplication of point formula_8 (complex number) by formula_11. Hence the rotation formula_12 around point formula_13 by angle formula_3 is formula_14 , rotation formula_15 around point formula_9 by angle formula_10 is formula_16. A point formula_17 of the nephroid is generated by the rotation of point formula_2 by formula_12 and the subsequent rotation with formula_15: formula_18. Herefrom one gets formula_19 Implicit. Inserting formula_21 and formula_22 into the equation shows that this equation is an implicit representation of the curve. Proof of the implicit representation. With formula_24 one gets formula_25 Orientation. If the cusps are on the y-axis the parametric representation is formula_26 and the implicit one: formula_27 Metric properties. For the nephroid above the The proofs of these statements use suitable formulae on curves (arc length, area and radius of curvature) and the parametric representation above formula_31 formula_32 and their derivatives formula_33 formula_34 formula_35 . formula_36 . formula_37 Construction. Nephroid as envelope of a pencil of circles. Proof. Let formula_38 be the circle formula_41 with midpoint formula_1 and radius formula_2. The diameter may lie on the x-axis (see diagram). The pencil of circles has equations: formula_42 The envelope condition is formula_43 One can easily check that the point of the nephroid formula_44 is a solution of the system formula_45 and hence a point of the envelope of the pencil of circles. Nephroid as envelope of a pencil of lines. Similar to the generation of a cardioid as envelope of a pencil of lines the following procedure holds: Proof. The following consideration uses trigonometric formulae for formula_48. In order to keep the calculations simple, the proof is given for the nephroid with cusps on the y-axis. "Equation of the tangent": for the nephroid with parametric representation formula_49: Herefrom one determines the normal vector formula_50, at first. The equation of the tangent formula_51 is: formula_52 For formula_53 one gets the cusps of the nephroid, where there is no tangent. For formula_54 one can divide by formula_55 to obtain "Equation of the chord": to the circle with midpoint formula_1 and radius formula_57: The equation of the chord containing the two points formula_58 is: formula_59 For formula_60 the chord degenerates to a point. 
For formula_61 one can divide by formula_62 and gets the equation of the chord: The two angles formula_64 are defined differently (formula_10 is one half of the rolling angle, formula_65 is the parameter of the circle, whose chords are determined), for formula_66 one gets the same line. Hence any chord from the circle above is tangent to the nephroid and Nephroid as caustic of one half of a circle. The considerations made in the previous section give a proof for the fact, that the caustic of one half of a circle is a nephroid. Proof. The circle may have the origin as midpoint (as in the previous section) and its radius is formula_57. The circle has the parametric representation formula_67 The tangent at the circle point formula_68 has normal vector formula_69. The reflected ray has the normal vector (see diagram) formula_70 and containing circle point formula_71. Hence the reflected ray is part of the line with equation formula_72 which is tangent to the nephroid of the previous section at point formula_73 (see above). The evolute and involute of a nephroid. Evolute. The evolute of a curve is the locus of centers of curvature. In detail: For a curve formula_74 with radius of curvature formula_75 the evolute has the representation formula_76 with formula_77 the suitably oriented unit normal. For a nephroid one gets: Proof. The nephroid as shown in the picture has the parametric representation formula_78 the unit normal vector pointing to the center of curvature formula_79 (see section above) and the radius of curvature formula_80 (s. section on metric properties). Hence the evolute has the representation: formula_81 formula_82 which is a nephroid half as large and rotated 90 degrees (see diagram and section above) Involute. Because the evolute of a nephroid is another nephroid, the involute of the nephroid is also another nephroid. The original nephroid in the image is the involute of the smaller nephroid. Inversion of a nephroid. The inversion formula_83 across the circle with midpoint formula_1 and radius formula_2 maps the nephroid with equation formula_23 onto the curve of degree 6 with equation formula_84 (see diagram) . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
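As a numerical cross-check of the formulas above, the following Python sketch samples the parametric representation with cusps on the x-axis, verifies that the sampled points satisfy the implicit sextic equation, and integrates the arc length, which should come out close to 24a; the choice a = 1 and the sampling density are arbitrary.

    # Numerical check of the nephroid formulas (a = 1 chosen for illustration).
    import numpy as np

    a = 1.0
    phi = np.linspace(0.0, 2.0 * np.pi, 20001)

    # Parametric representation (cusps on the x-axis):
    x = 3*a*np.cos(phi) - a*np.cos(3*phi)
    y = 3*a*np.sin(phi) - a*np.sin(3*phi)

    # Implicit equation: (x^2 + y^2 - 4 a^2)^3 - 108 a^4 y^2 should vanish.
    residual = (x**2 + y**2 - 4*a**2)**3 - 108*a**4*y**2
    print("max |residual| :", np.max(np.abs(residual)))   # close to zero (round-off only)

    # Arc length: integrate sqrt(x'^2 + y'^2) dphi, expected value 24 a.
    dx = 3*a*(-np.sin(phi) + np.sin(3*phi))
    dy = 3*a*( np.cos(phi) - np.cos(3*phi))
    arc_length = np.trapz(np.sqrt(dx**2 + dy**2), phi)
    print("arc length     :", arc_length)                 # about 24.0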
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "(0,0)" }, { "math_id": 2, "text": "2a" }, { "math_id": 3, "text": "2\\varphi" }, { "math_id": 4, "text": "(2a,0)" }, { "math_id": 5, "text": "x(\\varphi) = 3a\\cos\\varphi- a\\cos3\\varphi=6a\\cos\\varphi-4a \\cos^3\\varphi \\ ," }, { "math_id": 6, "text": "y(\\varphi) = 3a \\sin\\varphi - a\\sin3\\varphi =4a\\sin^3\\varphi\\ , \\qquad 0\\le \\varphi < 2\\pi" }, { "math_id": 7, "text": "z \\to z^3 + 3z" }, { "math_id": 8, "text": "z" }, { "math_id": 9, "text": "0" }, { "math_id": 10, "text": "\\varphi" }, { "math_id": 11, "text": " e^{i\\varphi}" }, { "math_id": 12, "text": "\\Phi_3" }, { "math_id": 13, "text": "3a" }, { "math_id": 14, "text": ": z \\mapsto 3a+(z-3a)e^{i2\\varphi}" }, { "math_id": 15, "text": "\\Phi_0" }, { "math_id": 16, "text": ":\\quad z \\mapsto ze^{i\\varphi}" }, { "math_id": 17, "text": " p(\\varphi)" }, { "math_id": 18, "text": "p(\\varphi)=\\Phi_0(\\Phi_3(2a))=\\Phi_0(3a-ae^{i2\\varphi})=(3a-ae^{i2\\varphi})e^{i\\varphi}=3ae^{i\\varphi}-ae^{i3\\varphi}" }, { "math_id": 19, "text": "\n\\begin{array}{cclcccc}\nx(\\varphi)&=&3a\\cos\\varphi-a\\cos3\\varphi &=& 6a\\cos\\varphi-4a \\cos^3\\varphi \\ ,&& \\\\\ny(\\varphi)&=&3a\\sin\\varphi-a\\sin3\\varphi&=& 4a\\sin^3\\varphi &.&\n\\end{array} " }, { "math_id": 20, "text": " e^{i\\varphi}=\\cos\\varphi+ i\\sin\\varphi, \\ \\cos^2\\varphi+ \\sin^2\\varphi=1, \\ \\cos3\\varphi=4\\cos^3\\varphi-3\\cos\\varphi,\\;\\sin 3\\varphi=3\\sin\\varphi -4\\sin^3\\varphi" }, { "math_id": 21, "text": "x(\\varphi)" }, { "math_id": 22, "text": "y(\\varphi)" }, { "math_id": 23, "text": "(x^2+y^2-4a^2)^3=108a^4y^2" }, { "math_id": 24, "text": "x^2+y^2-4a^2=(3a\\cos\\varphi-a\\cos3\\varphi)^2+(3a\\sin\\varphi-a\\sin3\\varphi)^2 -4a^2=\\cdots=6a^2(1-\\cos2\\varphi)=12a^2\\sin^2\\varphi" }, { "math_id": 25, "text": "(x^2+y^2-4a^2)^3=(12a^2)^3\\sin^6\\varphi=108a^4(4a\\sin^3\\varphi)^2=108a^4y^2\\ ." }, { "math_id": 26, "text": "x=3a\\cos \\varphi+a\\cos3\\varphi,\\quad y=3a\\sin \\varphi+a\\sin3\\varphi)." }, { "math_id": 27, "text": "(x^2+y^2-4a^2)^3=108a^4x^2." }, { "math_id": 28, "text": " L= 24 a, " }, { "math_id": 29, "text": " A= 12\\pi a^2\\ " }, { "math_id": 30, "text": "\\rho=|3a\\sin \\varphi|." }, { "math_id": 31, "text": "x(\\varphi)=6a\\cos\\varphi-4a \\cos^3\\varphi \\ , " }, { "math_id": 32, "text": "y(\\varphi)= 4a\\sin^3\\varphi " }, { "math_id": 33, "text": "\\dot x=-6a\\sin\\varphi(1 - 2\\cos^2\\varphi)\\ ,\\quad \\ \\ddot x= -6 a\\cos \\varphi(5-6\\cos^2\\varphi)\\ ," }, { "math_id": 34, "text": "\\dot y=12a\\sin^2\\varphi\\cos\\varphi \\quad , \\quad \\quad \\quad \\quad \n\\ddot y=12a\\sin\\varphi(3\\cos^2\\varphi-1)\\ . " }, { "math_id": 35, "text": "L=2\\int_0^\\pi{\\sqrt{\\dot x^2+\\dot y^2}} \\; d\\varphi=\\cdots =12a\\int_0^\\pi \\sin\\varphi\\; d\\varphi= 24a" }, { "math_id": 36, "text": " A=2\\cdot \\tfrac{1}{2}|\\int_0^\\pi[x \\dot y-y \\dot x]\\; d\\varphi|=\\cdots= 24a^2\\int_0^\\pi\\sin^2\\varphi\\; d\\varphi= 12\\pi a^2" }, { "math_id": 37, "text": "\\rho = \\left|\\frac {\\left({\\dot{x}^2 + \\dot{y}^2}\\right)^\\frac32}{\\dot {x}\\ddot{y} - \\dot{y}\\ddot{x}}\\right|=\\cdots= |3a\\sin \\varphi|." }, { "math_id": 38, "text": "c_0" }, { "math_id": 39, "text": "D_1,D_2" }, { "math_id": 40, "text": "d_{12}" }, { "math_id": 41, "text": "(2a\\cos\\varphi,2a\\sin\\varphi)" }, { "math_id": 42, "text": " f(x,y,\\varphi)=(x-2a\\cos\\varphi)^2+(y-2a\\sin\\varphi)^2-(2a\\sin\\varphi)^2=0 \\ ." 
}, { "math_id": 43, "text": "f_\\varphi(x,y,\\varphi)=2a(x\\sin\\varphi -y\\cos\\varphi-2a\\cos\\varphi\\sin\\varphi)=0\\ . " }, { "math_id": 44, "text": "p(\\varphi)=(6a\\cos\\varphi-4a \\cos^3\\varphi\\; ,\\; 4a\\sin^3\\varphi)" }, { "math_id": 45, "text": "f(x,y,\\varphi)=0, \\; f_\\varphi(x,y,\\varphi)=0" }, { "math_id": 46, "text": "3N" }, { "math_id": 47, "text": "(1,3), (2,6), ...., (n,3n),...., (N,3N), (N+1,3), (N+2,6), ...., " }, { "math_id": 48, "text": " \\cos \\alpha+\\cos\\beta,\\ \\sin \\alpha+\\sin\\beta, \\ \\cos (\\alpha+\\beta), \\ \\cos2\\alpha" }, { "math_id": 49, "text": "x=3\\cos\\varphi + \\cos3\\varphi,\\; y=3\\sin\\varphi+\\sin3\\varphi" }, { "math_id": 50, "text": "\\vec n=(\\dot y , -\\dot x)^T " }, { "math_id": 51, "text": "\\dot y(\\varphi)\\cdot (x -x(\\varphi)) - \\dot x(\\varphi)\\cdot (y-y(\\varphi))= 0" }, { "math_id": 52, "text": "(\\cos2\\varphi\\cdot x \\ + \\ \\sin 2\\varphi\\cdot y)\\cos \\varphi = 4\\cos^2 \\varphi \\ ." }, { "math_id": 53, "text": " \\varphi=\\tfrac{\\pi}{2},\\tfrac{3\\pi}{2}" }, { "math_id": 54, "text": " \\varphi\\ne\\tfrac{\\pi}{2},\\tfrac{3\\pi}{2}" }, { "math_id": 55, "text": "\\cos\\varphi" }, { "math_id": 56, "text": "\\cos2\\varphi \\cdot x + \\sin2\\varphi \\cdot y = 4 \\cos\\varphi \\ ." }, { "math_id": 57, "text": "4" }, { "math_id": 58, "text": "(4\\cos\\theta, 4\\sin\\theta), \\ (4\\cos{\\color{red}3}\\theta, 4\\sin{\\color{red}3}\\theta)) " }, { "math_id": 59, "text": "(\\cos2\\theta \\cdot x + \\sin2\\theta \\cdot y)\\sin\\theta = 4 \\cos\\theta\\sin\\theta \\ ." }, { "math_id": 60, "text": "\\theta =0, \\pi" }, { "math_id": 61, "text": "\\theta \\ne 0,\\pi" }, { "math_id": 62, "text": "\\sin\\theta" }, { "math_id": 63, "text": "\\cos2\\theta \\cdot x + \\sin2\\theta \\cdot y = 4 \\cos\\theta \\ ." }, { "math_id": 64, "text": "\\varphi , \\theta" }, { "math_id": 65, "text": "\\theta" }, { "math_id": 66, "text": "\\varphi=\\theta " }, { "math_id": 67, "text": "k(\\varphi)=4(\\cos\\varphi,\\sin\\varphi) \\ ." }, { "math_id": 68, "text": "K:\\ k(\\varphi)" }, { "math_id": 69, "text": "\\vec n_t=(\\cos\\varphi,\\sin\\varphi)^T" }, { "math_id": 70, "text": "\\vec n_r=(\\cos{\\color{red}2}\\varphi,\\sin{\\color{red}2}\\varphi)^T" }, { "math_id": 71, "text": "K: \\ 4(\\cos\\varphi,\\sin\\varphi) " }, { "math_id": 72, "text": "\\cos{\\color{red}2}\\varphi\\cdot x \\ + \\ \\sin {\\color{red}2}\\varphi\\cdot y = 4\\cos\\varphi \\ ," }, { "math_id": 73, "text": "P:\\ (3\\cos\\varphi + \\cos3\\varphi,3\\sin\\varphi+\\sin3\\varphi)" }, { "math_id": 74, "text": "\\vec x=\\vec c(s)" }, { "math_id": 75, "text": "\\rho(s)" }, { "math_id": 76, "text": "\\vec x=\\vec c(s) + \\rho(s)\\vec n(s)." }, { "math_id": 77, "text": "\\vec n(s)" }, { "math_id": 78, "text": "x=3\\cos\\varphi + \\cos3\\varphi,\\quad y=3\\sin\\varphi+\\sin3\\varphi \\ ," }, { "math_id": 79, "text": "\\vec n(\\varphi)=(-\\cos 2\\varphi,-\\sin 2\\varphi)^T" }, { "math_id": 80, "text": "3\\cos \\varphi" }, { "math_id": 81, "text": "x=3\\cos\\varphi + \\cos3\\varphi -3\\cos\\varphi\\cdot\\cos2\\varphi=\\cdots=3\\cos\\varphi-2\\cos^3\\varphi," }, { "math_id": 82, "text": "y=3\\sin\\varphi+\\sin3\\varphi -3\\cos\\varphi\\cdot\\sin2\\varphi\\ =\\cdots=2\\sin^3\\varphi \\ ," }, { "math_id": 83, "text": " x \\mapsto \\frac{4a^2x}{x^2+y^2}, \\quad y\\mapsto \\frac{4a^2y}{x^2+y^2} " }, { "math_id": 84, "text": "(4a^2-(x^2+y^2))^3=27a^2(x^2+y^2)y^2" } ]
https://en.wikipedia.org/wiki?curid=843192
843211
Fracture mechanics
Study of propagation of cracks in materials Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of analytical solid mechanics to calculate the driving force on a crack and those of experimental solid mechanics to characterize the material's resistance to fracture. Theoretically, the stress ahead of a sharp crack tip becomes infinite and cannot be used to describe the state around a crack. Fracture mechanics is used to characterise the loads on a crack, typically using a single parameter to describe the complete loading state at the crack tip. A number of different parameters have been developed. When the plastic zone at the tip of the crack is small relative to the crack length the stress state at the crack tip is the result of elastic forces within the material and is termed linear elastic fracture mechanics (LEFM) and can be characterised using the stress intensity factor formula_0. Although the load on a crack can be arbitrary, in 1957 G. Irwin found any state could be reduced to a combination of three independent stress intensity factors: When the size of the plastic zone at the crack tip is too large, elastic-plastic fracture mechanics can be used with parameters such as the J-integral or the crack tip opening displacement. The characterising parameter describes the state of the crack tip which can then be related to experimental conditions to ensure similitude. Crack growth occurs when the parameters typically exceed certain critical values. Corrosion may cause a crack to slowly grow when the stress corrosion stress intensity threshold is exceeded. Similarly, small flaws may result in crack growth when subjected to cyclic loading. Known as fatigue, it was found that for long cracks, the rate of growth is largely governed by the range of the stress intensity formula_1 experienced by the crack due to the applied loading. Fast fracture will occur when the stress intensity exceeds the fracture toughness of the material. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. Motivation. The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe (that is, do not grow) and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve through damage tolerance analysis the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide quantitative answers to the following questions: Linear elastic fracture mechanics. Griffith's criterion. Fracture mechanics was developed during World War I by English aeronautical engineer A. A. Griffith – thus the term Griffith crack – to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts: A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. 
Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material. To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens. The artificial flaw was in the form of a surface crack which was much larger than other flaws in a specimen. The experiments showed that the product of the square root of the flaw length (formula_2) and the stress at fracture (formula_3) was nearly constant, which is expressed by the equation: formula_4 An explanation of this relation in terms of linear elasticity theory is problematic. Linear elasticity theory predicts that stress (and hence the strain) at the tip of a sharp flaw in a linear elastic material is infinite. To avoid that problem, Griffith developed a thermodynamic approach to explain the relation that he observed. The growth of a crack, the extension of the surfaces on either side of the crack, requires an increase in the surface energy. Griffith found an expression for the constant formula_5 in terms of the surface energy of the crack by solving the elasticity problem of a finite crack in an elastic plate. Briefly, the approach was: formula_6 where formula_7 is the Young's modulus of the material and formula_8 is the surface energy density of the material. Assuming formula_9 and formula_10 gives excellent agreement of Griffith's predicted fracture stress with experimental results for glass. For the simple case of a thin rectangular plate with a crack perpendicular to the load, the energy release rate, formula_11, becomes: formula_12 where formula_13 is the applied stress, formula_2 is half the crack length, and formula_7 is the Young's modulus, which for the case of plane strain should be divided by the plate stiffness factor formula_14. The strain energy release rate can physically be understood as: "the rate at which energy is absorbed by growth of the crack". However, we also have that: formula_15 If formula_11 ≥ formula_16, this is the criterion for which the crack will begin to propagate. For materials highly deformed before crack propagation, the linear elastic fracture mechanics formulation is no longer applicable and an adapted model is necessary to describe the stress and displacement field close to crack tip, such as on fracture of soft materials. Irwin's modification. "Griffith's work was largely ignored by the engineering community until the early 1950s. The reasons for this appear to be (a) in the actual structural materials the level of energy needed to cause fracture is orders of magnitude higher than the corresponding surface energy, and (b) in structural materials there are always some inelastic deformations around the crack front that would make the assumption of linear elastic medium with infinite stresses at the crack tip highly unrealistic." Griffith's theory provides excellent agreement with experimental data for brittle materials such as glass. For ductile materials such as steel, although the relation formula_17 still holds, the surface energy ("γ") predicted by Griffith's theory is usually unrealistically high. A group working under G. R. Irwin at the U.S. 
Naval Research Laboratory (NRL) during World War II realized that plasticity must play a significant role in the fracture of ductile materials. In ductile materials (and even in materials that appear to be brittle), a plastic zone develops at the tip of the crack. As the applied load increases, the plastic zone increases in size until the crack grows and the elastically strained material behind the crack tip unloads. The plastic loading and unloading cycle near the crack tip leads to the dissipation of energy as heat. Hence, a dissipative term has to be added to the energy balance relation devised by Griffith for brittle materials. In physical terms, additional energy is needed for crack growth in ductile materials as compared to brittle materials. Irwin's strategy was to partition the energy into two parts: Then the total energy is: formula_18 where formula_8 is the surface energy and formula_19 is the plastic dissipation (and dissipation from other sources) per unit area of crack growth. The modified version of Griffith's energy criterion can then be written as formula_20 For brittle materials such as glass, the surface energy term dominates and formula_21. For ductile materials such as steel, the plastic dissipation term dominates and formula_22. For polymers close to the glass transition temperature, we have intermediate values of formula_11 between 2 and 1000 formula_23. Stress intensity factor. Another significant achievement of Irwin and his colleagues was to find a method of calculating the amount of energy available for fracture in terms of the asymptotic stress and displacement fields around a crack front in a linear elastic solid. This asymptotic expression for the stress field in mode I loading is related to the stress intensity factor formula_24 following: formula_25 where formula_26 are the Cauchy stresses, formula_27 is the distance from the crack tip, formula_28 is the angle with respect to the plane of the crack, and formula_29 are functions that depend on the crack geometry and loading conditions. Irwin called the quantity formula_30 the stress intensity factor. Since the quantity formula_29 is dimensionless, the stress intensity factor can be expressed in units of formula_31. Stress intensity replaced strain energy release rate and a term called fracture toughness replaced surface weakness energy. Both of these terms are simply related to the energy terms that Griffith used: formula_32 and formula_33 where formula_34 is the mode formula_35 stress intensity, formula_36 the fracture toughness, and formula_37 is Poisson's ratio. Fracture occurs when formula_38. For the special case of plane strain deformation, formula_36 becomes formula_39 and is considered a material property. The subscript formula_35 arises because of the different ways of loading a material to enable a crack to propagate. It refers to so-called "mode formula_35" loading as opposed to mode formula_40 or formula_41: The expression for formula_34 will be different for geometries other than the center-cracked infinite plate, as discussed in the article on the stress intensity factor. Consequently, it is necessary to introduce a dimensionless correction factor, formula_42, in order to characterize the geometry. This correction factor, also often referred to as the "geometric shape factor", is given by empirically determined series and accounts for the type and geometry of the crack or notch. 
We thus have: formula_43 where formula_42 is a function of the crack length and width of sheet given, for a sheet of finite width formula_44 containing a through-thickness crack of length formula_45, by: formula_46 Strain energy release. Irwin was the first to observe that if the size of the plastic zone around a crack is small compared to the size of the crack, the energy required to grow the crack will not be critically dependent on the state of stress (the plastic zone) at the crack tip. In other words, a purely elastic solution may be used to calculate the amount of energy available for fracture. The energy release rate for crack growth or "strain energy release rate" may then be calculated as the change in elastic strain energy per unit area of crack growth, i.e., formula_47 where "U" is the elastic energy of the system and "a" is the crack length. Either the load "P" or the displacement "u" are constant while evaluating the above expressions. Irwin showed that for a mode I crack (opening mode) the strain energy release rate and the stress intensity factor are related by: formula_48 where "E" is the Young's modulus, "ν" is Poisson's ratio, and "K"I is the stress intensity factor in mode I. Irwin also showed that the strain energy release rate of a planar crack in a linear elastic body can be expressed in terms of the mode I, mode II (sliding mode), and mode III (tearing mode) stress intensity factors for the most general loading conditions. Next, Irwin adopted the additional assumption that the size and shape of the energy dissipation zone remains approximately constant during brittle fracture. This assumption suggests that the energy needed to create a unit fracture surface is a constant that depends only on the material. This new material property was given the name "fracture toughness" and designated "G"Ic. Today, it is the critical stress intensity factor "K"Ic, found in the plane strain condition, which is accepted as the defining property in linear elastic fracture mechanics. Crack tip plastic zone. In theory the stress at the crack tip where the radius is nearly zero, would tend to infinity. This would be considered a stress singularity, which is not possible in real-world applications. For this reason, in numerical studies in the field of fracture mechanics, it is often appropriate to represent cracks as round tipped notches, with a geometry dependent region of stress concentration replacing the crack-tip singularity. In actuality, the stress concentration at the tip of a crack within real materials has been found to have a finite value but larger than the nominal stress applied to the specimen. Nevertheless, there must be some sort of mechanism or property of the material that prevents such a crack from propagating spontaneously. The assumption is, the plastic deformation at the crack tip effectively blunts the crack tip. This deformation depends primarily on the applied stress in the applicable direction (in most cases, this is the y-direction of a regular Cartesian coordinate system), the crack length, and the geometry of the specimen. To estimate how this plastic deformation zone extended from the crack tip, Irwin equated the yield strength of the material to the far-field stresses of the y-direction along the crack (x direction) and solved for the effective radius. 
From this relationship, and assuming that the crack is loaded to the critical stress intensity factor, Irwin developed the following expression for the idealized radius of the zone of plastic deformation at the crack tip: formula_49 Models of ideal materials have shown that this zone of plasticity is centered at the crack tip. This equation gives the approximate ideal radius of the plastic zone deformation beyond the crack tip, which is useful to many structural scientists because it gives a good estimate of how the material behaves when subjected to stress. In the above equation, the parameters of the stress intensity factor and indicator of material toughness, formula_50, and the yield stress, formula_51, are of importance because they illustrate many things about the material and its properties, as well as about the plastic zone size. For example, if formula_36 is high, then it can be deduced that the material is tough, and if formula_51 is low, one knows that the material is more ductile. The ratio of these two parameters is important to the radius of the plastic zone. For instance, if formula_51 is small, then the squared ratio of formula_50 to formula_51 is large, which results in a larger plastic radius. This implies that the material can plastically deform, and, therefore, is tough. This estimate of the size of the plastic zone beyond the crack tip can then be used to more accurately analyze how a material will behave in the presence of a crack. The same process as described above for a single event loading also applies and to cyclic loading. If a crack is present in a specimen that undergoes cyclic loading, the specimen will plastically deform at the crack tip and delay the crack growth. In the event of an overload or excursion, this model changes slightly to accommodate the sudden increase in stress from that which the material previously experienced. At a sufficiently high load (overload), the crack grows out of the plastic zone that contained it and leaves behind the pocket of the original plastic deformation. Now, assuming that the overload stress is not sufficiently high as to completely fracture the specimen, the crack will undergo further plastic deformation around the new crack tip, enlarging the zone of residual plastic stresses. This process further toughens and prolongs the life of the material because the new plastic zone is larger than what it would be under the usual stress conditions. This allows the material to undergo more cycles of loading. This idea can be illustrated further by the graph of Aluminum with a center crack undergoing overloading events. Limitations. But a problem arose for the NRL researchers because naval materials, e.g., ship-plate steel, are not perfectly elastic but undergo significant plastic deformation at the tip of a crack. One basic assumption in Irwin's linear elastic fracture mechanics is small scale yielding, the condition that the size of the plastic zone is small compared to the crack length. However, this assumption is quite restrictive for certain types of failure in structural steels though such steels can be prone to brittle fracture, which has led to a number of catastrophic failures. Linear-elastic fracture mechanics is of limited practical use for structural steels and Fracture toughness testing can be expensive. Elastic–plastic fracture mechanics. Most engineering materials show some nonlinear elastic and inelastic behavior under operating conditions that involve large loads. 
In such materials the assumptions of linear elastic fracture mechanics may not hold, that is, Therefore, a more general theory of crack growth is needed for elastic-plastic materials that can account for: CTOD. Historically, the first parameter for the determination of fracture toughness in the elasto-plastic region was the crack tip opening displacement (CTOD) or "opening at the apex of the crack" indicated. This parameter was determined by Wells during the studies of structural steels, which due to the high toughness could not be characterized with the linear elastic fracture mechanics model. He noted that, before the fracture happened, the walls of the crack were leaving and that the crack tip, after fracture, ranged from acute to rounded off due to plastic deformation. In addition, the rounding of the crack tip was more pronounced in steels with superior toughness. There are a number of alternative definitions of CTOD. In the two most common definitions, CTOD is the displacement at the original crack tip and the 90 degree intercept. The latter definition was suggested by Rice and is commonly used to infer CTOD in finite element models of such. Note that these two definitions are equivalent if the crack tip blunts in a semicircle. Most laboratory measurements of CTOD have been made on edge-cracked specimens loaded in three-point bending. Early experiments used a flat paddle-shaped gage that was inserted into the crack; as the crack opened, the paddle gage rotated, and an electronic signal was sent to an x-y plotter. This method was inaccurate, however, because it was difficult to reach the crack tip with the paddle gage. Today, the displacement V at the crack mouth is measured, and the CTOD is inferred by assuming the specimen halves are rigid and rotate about a hinge point (the crack tip). R-curve. An early attempt in the direction of elastic-plastic fracture mechanics was Irwin's crack extension resistance curve, Crack growth resistance curve or R-curve. This curve acknowledges the fact that the resistance to fracture increases with growing crack size in elastic-plastic materials. The R-curve is a plot of the total energy dissipation rate as a function of the crack size and can be used to examine the processes of slow stable crack growth and unstable fracture. However, the R-curve was not widely used in applications until the early 1970s. The main reasons appear to be that the R-curve depends on the geometry of the specimen and the crack driving force may be difficult to calculate. J-integral. In the mid-1960s James R. Rice (then at Brown University) and G. P. Cherepanov independently developed a new toughness measure to describe the case where there is sufficient crack-tip deformation that the part no longer obeys the linear-elastic approximation. Rice's analysis, which assumes non-linear elastic (or monotonic deformation theory plastic) deformation ahead of the crack tip, is designated the J-integral. This analysis is limited to situations where plastic deformation at the crack tip does not extend to the furthest edge of the loaded part. It also demands that the assumed non-linear elastic behavior of the material is a reasonable approximation in shape and magnitude to the real material's load response. The elastic-plastic failure parameter is designated JIc and is conventionally converted to KIc using the equation below. Also note that the J integral approach reduces to the Griffith theory for linear-elastic behavior. 
The mathematical definition of J-integral is as follows: formula_52 where formula_53 is an arbitrary path clockwise around the apex of the crack, formula_54 is the density of strain energy, formula_55 are the components of the vectors of traction, formula_56 are the components of the displacement vectors, formula_57 is an incremental length along the path formula_53, and formula_58 and formula_59 are the stress and strain tensors. Since engineers became accustomed to using "K"Ic to characterise fracture toughness, a relation has been used to reduce "J"Ic to it: formula_60 where formula_61 for plane stress and formula_62 for plane strain. Cohesive zone model. When a significant region around a crack tip has undergone plastic deformation, other approaches can be used to determine the possibility of further crack extension and the direction of crack growth and branching. A simple technique that is easily incorporated into numerical calculations is the "cohesive zone model" method which is based on concepts proposed independently by Barenblatt and Dugdale in the early 1960s. The relationship between the Dugdale-Barenblatt models and Griffith's theory was first discussed by Willis in 1967. The equivalence of the two approaches in the context of brittle fracture was shown by Rice in 1968. Transition flaw size. Let a material have a yield strength formula_51 and a fracture toughness in mode I formula_39. Based on fracture mechanics, the material will fail at stress formula_63. Based on plasticity, the material will yield when formula_64. These curves intersect when formula_65. This value of formula_2 is called as transition flaw size formula_66., and depends on the material properties of the structure. When the formula_67, the failure is governed by plastic yielding, and when formula_68 the failure is governed by fracture mechanics. The value of formula_69 for engineering alloys is 100 mm and for ceramics is 0.001 mm. If we assume that manufacturing processes can give rise to flaws in the order of micrometers, then, it can be seen that ceramics are more likely to fail by fracture, whereas engineering alloys would fail by plastic deformation. Concrete fracture analysis. Concrete fracture analysis is part of fracture mechanics that studies crack propagation and related failure modes in concrete. As it is widely used in construction, fracture analysis and modes of reinforcement are an important part of the study of concrete, and different concretes are characterized in part by their fracture properties. Common fractures include the cone-shaped fractures that form around anchors under tensile strength. Bažant (1983) proposed a crack band model for materials like concrete whose homogeneous nature changes randomly over a certain range. He also observed that in plain concrete, the size effect has a strong influence on the critical stress intensity factor, and proposed the relation formula_13 = formula_70 / √(1+{formula_71/formula_72formula_73}), where formula_13 = stress intensity factor, formula_70 = tensile strength, formula_71 = size of specimen, formula_73 = maximum aggregate size, and formula_72 = an empirical constant. Atomistic Fracture Mechanics. Atomistic Fracture Mechanics (AFM) is a relatively new field that studies the behavior and properties of materials at the atomic scale when subjected to fracture. It integrates concepts from fracture mechanics with atomistic simulations to understand how cracks initiate, propagate, and interact with the microstructure of materials. 
By using techniques like Molecular Dynamics (MD) simulations, AFM can provide insights into the fundamental mechanisms of crack formation and growth, the role of atomic bonds, and the influence of material defects and impurities on fracture behavior.
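To tie several of the quantities above together, the following Python sketch evaluates, for a centre crack in a wide plate, the mode I stress intensity, the critical stress predicted from the fracture toughness, Irwin's estimate of the plastic-zone radius, and the transition flaw size; the material values used (a generic structural steel) are assumed for illustration only.

    # Illustrative LEFM calculations for a centre crack of half-length a
    # in a wide plate (generic steel-like values, assumed for the example).
    import math

    K_Ic    = 50.0e6      # fracture toughness, Pa*sqrt(m)  (assumed)
    sigma_Y = 400.0e6     # yield strength, Pa              (assumed)
    sigma   = 100.0e6     # applied remote stress, Pa       (assumed)
    a       = 0.01        # crack half-length, m            (assumed)

    # Mode I stress intensity for a centre crack in an infinite plate, K_I = sigma*sqrt(pi*a)
    K_I = sigma * math.sqrt(math.pi * a)

    # Critical stress at which fast fracture is predicted, sigma_fail = K_Ic / sqrt(pi*a)
    sigma_fail = K_Ic / math.sqrt(math.pi * a)

    # Irwin's estimate of the plastic-zone radius, r_p = K_Ic^2 / (2*pi*sigma_Y^2)
    r_p = K_Ic**2 / (2.0 * math.pi * sigma_Y**2)

    # Transition flaw size separating yielding-dominated from fracture-dominated failure
    a_t = K_Ic**2 / (math.pi * sigma_Y**2)

    print(f"K_I        = {K_I/1e6:6.1f} MPa*sqrt(m)")
    print(f"sigma_fail = {sigma_fail/1e6:6.1f} MPa")
    print(f"r_p        = {r_p*1000:6.2f} mm")
    print(f"a_t        = {a_t*1000:6.2f} mm")

With the assumed values the transition flaw size comes out at a few millimetres, consistent with the statement above that engineering alloys tolerate much larger flaws than ceramics before fracture governs failure.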
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "\\Delta K" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "\\sigma_f" }, { "math_id": 4, "text": "\\sigma_f\\sqrt{a} \\approx C" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "C = \\sqrt{\\cfrac{2E\\gamma}{\\pi}}" }, { "math_id": 7, "text": "E" }, { "math_id": 8, "text": "\\gamma" }, { "math_id": 9, "text": "E = 62\\ \\text{GPa}" }, { "math_id": 10, "text": "\\gamma = 1\\ \\text{J/m}^2" }, { "math_id": 11, "text": "G" }, { "math_id": 12, "text": "G = \\frac{\\pi \\sigma^2 a}{E}\\," }, { "math_id": 13, "text": "\\sigma" }, { "math_id": 14, "text": "(1-\\nu^2)" }, { "math_id": 15, "text": "G_c = \\frac{\\pi \\sigma_f^2 a}{E}\\," }, { "math_id": 16, "text": "G_c" }, { "math_id": 17, "text": " \\sigma_f\\sqrt{a} = C " }, { "math_id": 18, "text": "G = 2\\gamma + G_p" }, { "math_id": 19, "text": "G_p" }, { "math_id": 20, "text": "\\sigma_f\\sqrt{a} = \\sqrt{\\cfrac{E~G}{\\pi}}." }, { "math_id": 21, "text": "G \\approx 2\\gamma = 2 \\,\\, \\text{J/m}^2" }, { "math_id": 22, "text": "G \\approx G_p = 1000 \\,\\, \\text{J/m}^2" }, { "math_id": 23, "text": "\\text{J/m}^2" }, { "math_id": 24, "text": "\n K_I\n " }, { "math_id": 25, "text": "\\sigma_{ij} = \\left(\\cfrac{K_{I}}{\\sqrt{2\\pi r}}\\right)~f_{ij}(\\theta)" }, { "math_id": 26, "text": "\n \\sigma_{ij}\n " }, { "math_id": 27, "text": "\n r\n " }, { "math_id": 28, "text": "\n \\theta\n " }, { "math_id": 29, "text": "\n f_{ij}\n " }, { "math_id": 30, "text": "\n K\n " }, { "math_id": 31, "text": "\\text{MPa}\\sqrt{\\text{m}}" }, { "math_id": 32, "text": "K_I = \\sigma \\sqrt{\\pi a}\\," }, { "math_id": 33, "text": "\n K_c = \\begin{cases} \\sqrt{EG_c} & \\text{for plane stress} \\\\\n \\\\\n \\sqrt{\\cfrac{EG_c}{1-\\nu^2}} & \\text{for plane strain} \\end{cases}\n " }, { "math_id": 34, "text": "K_I" }, { "math_id": 35, "text": "\n I\n " }, { "math_id": 36, "text": "K_c" }, { "math_id": 37, "text": "\\nu" }, { "math_id": 38, "text": "K_I \\geq K_c" }, { "math_id": 39, "text": "K_{Ic}" }, { "math_id": 40, "text": "\n II\n " }, { "math_id": 41, "text": "\n III\n " }, { "math_id": 42, "text": "\n Y\n " }, { "math_id": 43, "text": "K_I = Y \\sigma \\sqrt{\\pi a}\\," }, { "math_id": 44, "text": "\n W\n " }, { "math_id": 45, "text": "\n 2a\n " }, { "math_id": 46, "text": "Y \\left ( \\frac{a}{W} \\right ) = \\sqrt{\\sec\\left ( \\frac{\\pi a}{W} \\right )}\\," }, { "math_id": 47, "text": "G := \\left[\\cfrac{\\partial U}{\\partial a}\\right]_P = -\\left[\\cfrac{\\partial U}{\\partial a}\\right]_u" }, { "math_id": 48, "text": "\n G = G_I = \\begin{cases} \\cfrac{K_I^2}{E} & \\text{plane stress} \\\\\n \\cfrac{(1-\\nu^2) K_I^2}{E} & \\text{plane strain} \\end{cases}\n " }, { "math_id": 49, "text": "r_p = \\frac{K_{C}^2}{2\\pi\\sigma_Y^2}" }, { "math_id": 50, "text": "K_C" }, { "math_id": 51, "text": "\\sigma_Y" }, { "math_id": 52, "text": "\n J= \\int_\\Gamma( w \\,dy - T_i \\frac{\\partial u_i}{\\partial x}\\,ds) \\quad \\text{with} \\quad\n w=\\int^{\\varepsilon_{ij}}_0 \\sigma_{ij} \\,d\\varepsilon_{ij}\n" }, { "math_id": 53, "text": "\\Gamma" }, { "math_id": 54, "text": "w" }, { "math_id": 55, "text": "T_i" }, { "math_id": 56, "text": "u_i" }, { "math_id": 57, "text": "ds" }, { "math_id": 58, "text": "\\sigma_{ij}" }, { "math_id": 59, "text": "\\varepsilon_{ij}" }, { "math_id": 60, "text": "K_{Ic} = \\sqrt{E^* J_{Ic}}\\," }, { "math_id": 61, "text": "E^* = E" }, { "math_id": 62, "text": "E^* = \\frac{E}{1 - \\nu^2}" }, { "math_id": 63, "text": 
"\\sigma_\\text{fail}=K_{Ic}/\\sqrt{\\pi a}" }, { "math_id": 64, "text": "\\sigma_{fail}=\\sigma_Y" }, { "math_id": 65, "text": "a=K_{Ic}^2/\\pi\\sigma_Y^2" }, { "math_id": 66, "text": " a_t" }, { "math_id": 67, "text": "a<a_t " }, { "math_id": 68, "text": "a>a_t " }, { "math_id": 69, "text": "a_t" }, { "math_id": 70, "text": "\\tau" }, { "math_id": 71, "text": "d" }, { "math_id": 72, "text": "\\lambda" }, { "math_id": 73, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=843211
8433618
Ubbelohde viscometer
An Ubbelohde type viscometer or suspended-level viscometer is a measuring instrument which uses a capillary based method of measuring viscosity. It is recommended for higher viscosity cellulosic polymer solutions. The advantage of this instrument is that the values obtained are independent of the total volume. The device was developed by the German chemist Leo Ubbelohde (1877-1964). ASTM and other test methods are: ISO 3104, ISO 3105, ASTM D445, ASTM D446, ASTM D4020, IP 71, BS 188. The Ubbelohde viscometer is closely related to the Ostwald viscometer. Both are U-shaped pieces of glassware with a reservoir on one side and a measuring bulb with a capillary on the other. A liquid is introduced into the reservoir and then drawn up through the capillary and measuring bulb. The liquid is then allowed to flow back through the measuring bulb, and the time it takes for the liquid to pass between two calibrated marks is a measure of the viscosity. The Ubbelohde device has a third arm extending from the end of the capillary and open to the atmosphere. In this way the pressure head depends only on a fixed height and no longer on the total volume of liquid. Determination of viscosity. The determination of viscosity is based on Poiseuille's law: formula_0 where t is the time it takes for a volume V to elute. The ratio formula_1 depends on the capillary radius R, on the average applied pressure P, on the capillary length L and on the dynamic viscosity η. The average pressure head is given by: formula_2 with ρ the density of the liquid, g the standard gravity and H the average head of the liquid. In this way the viscosity of a fluid can be determined. Usually the viscosity of a liquid is compared with that of the same liquid containing an analyte, for example a dissolved polymer. The relative viscosity is given by: formula_3 where t0 and ρ0 are the elution time and density of the pure liquid. When the solution is very dilute, formula_4, the so-called specific viscosity becomes: formula_5 This specific viscosity is related to the concentration of the analyte through the intrinsic viscosity [η] by the power series: formula_6 or formula_7 where formula_8 is called the viscosity number. The intrinsic viscosity can be determined experimentally by measuring the viscosity number as a function of concentration and extrapolating to zero concentration; the intercept on the Y-axis gives the intrinsic viscosity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
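As an illustration of the extrapolation described above, the short Python sketch below converts assumed elution times into specific viscosities and fits the viscosity number against concentration with a straight line, whose intercept at zero concentration estimates the intrinsic viscosity; all times and concentrations are invented example values.

    # Sketch: estimating the intrinsic viscosity from dilute-solution elution times.
    # All times and concentrations below are invented illustrative values.
    import numpy as np

    t0 = 100.0                                   # elution time of the pure solvent, s
    c  = np.array([1.0, 2.0, 3.0, 4.0])          # polymer concentration, g/dL
    t  = np.array([105.3, 111.0, 117.1, 123.6])  # elution times of the solutions, s

    eta_sp = (t - t0) / t0          # specific viscosity (dilute limit, rho ~ rho0)
    visc_number = eta_sp / c        # viscosity number eta_sp / c, dL/g

    # Linear fit of eta_sp/c versus c; the intercept at c = 0 estimates [eta].
    slope, intercept = np.polyfit(c, visc_number, 1)
    print("intrinsic viscosity [eta] ~", round(float(intercept), 4), "dL/g")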
[ { "math_id": 0, "text": " \\frac{dV}{dt} = v \\pi R^{2} = \\frac{\\pi R^{4}}{8 \\eta} \\left( \\frac{- \\Delta P}{\\Delta x}\\right) = \\frac{\\pi R^{4}}{8 \\eta} \\frac{ |\\Delta P|}{L}, " }, { "math_id": 1, "text": "\\frac{dv}{dt}" }, { "math_id": 2, "text": "\\Delta P = \\rho g \\Delta H \\," }, { "math_id": 3, "text": "\\eta_r = \\frac{\\eta}{\\eta_0} = \\frac{t \\rho}{t_0 \\rho_0}," }, { "math_id": 4, "text": "\\rho \\simeq \\rho_0 \\," }, { "math_id": 5, "text": "\\eta_{sp} = \\eta_r - 1 = \\frac{t - t_0}{t_0}. \\," }, { "math_id": 6, "text": "\\eta_{sp} = [\\eta] c + k [\\eta]^2 c^2 + \\cdots\\," }, { "math_id": 7, "text": "\\frac{\\eta_{sp}}{c} = [\\eta] + k[\\eta]^2 c + \\cdots,\\," }, { "math_id": 8, "text": "\\frac{\\eta_{sp}}{c}" } ]
https://en.wikipedia.org/wiki?curid=8433618
8433728
Pulse compression
Signal processing technique Pulse compression is a signal processing technique commonly used by radar, sonar and echography to either increase the range resolution when pulse length is constrained or increase the signal to noise ratio when the peak power and the bandwidth (or equivalently range resolution) of the transmitted signal are constrained. This is achieved by modulating the transmitted pulse and then correlating the received signal with the transmitted pulse. Simple pulse. Signal description. The ideal model for the simplest, and historically first type of signals a pulse radar or sonar can transmit is a truncated sinusoidal pulse (also called a CW --carrier wave-- pulse), of amplitude formula_0 and carrier frequency, formula_1, truncated by a rectangular function of width, formula_2. The pulse is transmitted periodically, but that is not the main topic of this article; we will consider only a single pulse, formula_3. If we assume the pulse to start at time formula_4, the signal can be written the following way, using the complex notation: formula_5 Range resolution. Let us determine the range resolution which can be obtained with such a signal. The return signal, written formula_6, is an attenuated and time-shifted copy of the original transmitted signal (in reality, Doppler effect can play a role too, but this is not important here.) There is also noise in the incoming signal, both on the imaginary and the real channel. The noise is assumed to be band-limited, that is to have frequencies only in formula_7 (this generally holds in reality, where a bandpass filter is generally used as one of the first stages in the reception chain); we write formula_8 to denote that noise. To detect the incoming signal, a matched filter is commonly used. This method is optimal when a known signal is to be detected among additive noise having a normal distribution. In other words, the cross-correlation of the received signal with the transmitted signal is computed. This is achieved by convolving the incoming signal with a conjugated and time-reversed version of the transmitted signal. This operation can be done either in software or with hardware. We write formula_9 for this cross-correlation. We have: formula_10 If the reflected signal comes back to the receiver at time formula_11 and is attenuated by factor formula_0, this yields: formula_12 Since we know the transmitted signal, we obtain: formula_13 where formula_14, is the result of the intercorrelation between the noise and the transmitted signal. Function formula_15 is the triangle function, its value is 0 on formula_16, it increases linearly on formula_17 where it reaches its maximum 1, and it decreases linearly on formula_18 until it reaches 0 again. Figures at the end of this paragraph show the shape of the intercorrelation for a sample signal (in red), in this case a real truncated sine, of duration formula_19 seconds, of unit amplitude, and frequency formula_20 hertz. Two echoes (in blue) come back with delays of 3 and 5 seconds and amplitudes equal to 0.5 and 0.3 times the amplitude of the transmitted pulse, respectively; these are just random values for the sake of the example. Since the signal is real, the intercorrelation is weighted by an additional &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 factor. If two pulses come back (nearly) at the same time, the intercorrelation is equal to the sum of the intercorrelations of the two elementary signals. 
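The two-echo situation just described can be reproduced numerically. The Python sketch below correlates a received signal containing two attenuated, delayed copies of a truncated sinusoidal pulse with the transmitted pulse; the delays (3 s and 5 s) and relative amplitudes (0.5 and 0.3) match the example in the text, while the sample rate, pulse duration and carrier frequency are assumed values, and noise is omitted.

    # Idealized matched filtering of a simple (CW) pulse with two echoes.
    # Pulse duration, carrier frequency and sample rate are assumed values.
    import numpy as np

    fs, T, fc = 1000.0, 1.0, 10.0            # sample rate (Hz), pulse length (s), carrier (Hz)
    t = np.arange(0.0, 10.0, 1.0 / fs)       # 10 s observation window

    pulse = np.where(t < T, np.sin(2*np.pi*fc*t), 0.0)   # transmitted truncated sine
    template = pulse[:int(T*fs)]                         # one pulse, used as reference

    received = np.zeros_like(t)
    for delay, amp in [(3.0, 0.5), (5.0, 0.3)]:          # two echoes: (delay s, amplitude)
        n = int(delay * fs)
        received[n:n + len(template)] += amp * template

    # Cross-correlation with the transmitted pulse (matched filter, real signal).
    corr = np.correlate(received, template, mode="full")
    lag = (np.argmax(np.abs(corr)) - (len(template) - 1)) / fs
    print("strongest echo found at a delay of about", lag, "s")   # about 3.0 s

Plotting the magnitude of corr against lag reproduces the two triangular envelopes described above, centred on the two delays.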
To distinguish one "triangular" envelope from that of the other pulse, the times of arrival of the two pulses must clearly be separated by at least formula_2 so that the maxima of both pulses can be separated. If this condition is not met, both triangles will be mixed together and impossible to separate. Since the distance travelled by a wave during formula_2 is formula_21 (where "c" is the speed of the wave in the medium), and since this distance corresponds to a round-trip time, we get that two targets can be resolved only if their ranges differ by at least half of formula_21; this is the range resolution achievable with a simple pulse. Energy and signal-to-noise ratio of the received signal. The instantaneous power of the received pulse is formula_22. The energy put into that signal is: formula_23 If formula_24 is the standard deviation of the noise which is assumed to have the same bandwidth as the signal, the signal-to-noise ratio (SNR) at the receiver is: formula_25 The SNR is proportional to pulse duration formula_26, if other parameters are held constant. This introduces a tradeoff: increasing formula_26 improves the SNR, but reduces the resolution, and vice versa. Pulse compression by linear frequency modulation (or "chirping"). Basic principles. How can one have a large enough pulse (to still have a good SNR at the receiver) without poor resolution? This is where pulse compression enters the picture. The basic principle is the following: a pulse long enough to carry sufficient energy is transmitted, but it is modulated so that, after correlation of the received signal with the transmitted one, the result is much narrower than the original pulse. In radar or sonar applications, linear chirps are the most typically used signals to achieve pulse compression. The pulse being of finite length, the amplitude is a rectangle function. If the transmitted signal has a duration formula_2, begins at formula_27 and linearly sweeps the frequency band formula_28 centered on carrier formula_1, it can be written: formula_29 The chirp definition above means that the phase of the chirped signal (that is, the argument of the complex exponential) is the quadratic: formula_30 thus the instantaneous frequency is (by definition): formula_31 which is the intended linear ramp going from formula_32 at formula_33 to formula_34 at formula_35. The relation of phase to frequency is often used in the other direction, starting with the desired formula_36 and writing the chirp phase via the integration of frequency: formula_37 This transmitted signal is typically reflected by the target and undergoes attenuation due to various causes, so the received signal is a time-delayed, attenuated version of the transmitted signal plus an additive noise of constant power spectral density on formula_38, and zero everywhere else: formula_39 Cross-correlation between the transmitted and the received signal. We now endeavor to compute the correlation of the received signal with the transmitted signal. Two actions are going to be taken to do this: - The first action is a simplification. Instead of computing the cross-correlation we are going to compute an auto-correlation, which amounts to assuming that the autocorrelation peak is centered at zero. This will not change the resolution and the amplitudes but will simplify the math: formula_40 - The second action, as shown below, is to set an amplitude for the reference signal which is not one, but formula_41. Constant formula_42 is to be determined so that energy is conserved through correlation. formula_43 Now, it can be shown that the correlation function of formula_44 with formula_45 is: formula_46 where formula_47 is the correlation of the reference signal with the received noise. Width of the signal after correlation.
Assuming noise is zero, the maximum of the autocorrelation function of formula_48 is reached at 0. Around 0, this function behaves as the sinc (or cardinal sine) term, defined here as formula_49. The −3 dB temporal width of that cardinal sine is approximately equal to formula_50. Everything happens as if, after matched filtering, we had the resolution that would have been reached with a simple pulse of duration formula_51. For the common values of formula_28, formula_51 is smaller than formula_2, hence the "pulse compression" name. Since the cardinal sine can have annoying sidelobes, a common practice is to filter the result by a window (Hamming, Hann, etc.). In practice, this can be done at the same time as the matched filtering by multiplying the reference chirp with the filter. The result will be a signal with a slightly lower maximum amplitude, but the sidelobes will be filtered out, which is more important. Energy and peak power after correlation. When the reference signal formula_44 is correctly scaled using the term formula_42, it is possible to conserve the energy before and after correlation. The peak (and average) power before correlation is: formula_52 Since, before compression, the pulse is box-shaped, the energy before correlation is: formula_53 The peak power after correlation is reached at formula_54: formula_55 Note that if formula_56 this peak power is the energy of the received signal before correlation, which is as expected. After compression, the pulse is approximated by a box having a width equal to the typical width of the formula_57 function, that is, a width formula_58, so the energy after correlation is: formula_59 If energy is conserved: formula_60 ... it follows that: formula_61 so that the peak power after correlation is: formula_62 In conclusion, the peak power of the pulse-compressed signal is formula_63 that of the raw received signal (assuming that the template formula_44 is correctly scaled to conserve energy through correlation). Signal-to-noise gain after correlation. As we have seen above, things are written so that the energy of the signal does not vary during pulse compression. However, it is now located in the main lobe of the cardinal sine, whose width is approximately formula_64. If formula_65 is the power of the signal before compression, and formula_66 the power of the signal after compression, energy formula_67 is conserved and we have: formula_68 which yields an increase in power after pulse compression: formula_69 In the spectral domain, the power spectrum of the chirp has a nearly constant spectral density formula_70 in the interval formula_7 and zero elsewhere, so that energy is equivalently expressed as formula_71. This spectral density remains the same after matched filtering. Imagining now an equivalent sinusoidal (CW) pulse of duration formula_58 and identical input power, this equivalent sinusoidal pulse has an energy: formula_72 After matched filtering, the equivalent sinusoidal pulse turns into a triangular-shaped signal of twice its original width but the same peak power. Energy is conserved. The spectral domain is approximated by a nearly constant spectral density formula_73 in the interval formula_7 where formula_74. Through conservation of energy, we have: formula_75 Since by definition we also have: formula_76 it follows that: formula_77 meaning that the spectral densities of the chirped pulse and the equivalent CW pulse are very nearly identical, and are equivalent to that of a bandpass filter on formula_7.
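The compression behaviour derived above can be checked numerically. The Python sketch below correlates a baseband linear chirp with itself and measures the −3 dB width of the compressed peak, to be compared with 1/Δf, together with the compression ratio; the pulse length and swept bandwidth are arbitrary illustrative values, and the energy scaling and windowing discussed above are omitted.

    # Illustrative compression of a baseband linear chirp (noise-free, unwindowed).
    # Pulse length and swept bandwidth are arbitrary example values.
    import numpy as np

    fs, T, df = 10000.0, 1.0, 100.0              # sample rate (Hz), pulse length (s), swept band (Hz)
    t = np.arange(0.0, T, 1.0 / fs)
    chirp = np.exp(1j * np.pi * (df / T) * (t - T / 2.0) ** 2)   # complex envelope of a linear chirp

    # Matched filtering: correlate the chirp with itself (np.correlate conjugates its
    # second argument, which is what the matched filter requires).
    compressed = np.abs(np.correlate(chirp, chirp, mode="full")) / len(chirp)

    # -3 dB width of the main lobe, to be compared with the predicted value of about 1/df.
    peak = compressed.max()
    main_lobe = np.where(compressed >= peak / np.sqrt(2.0))[0]
    width = (main_lobe[-1] - main_lobe[0]) / fs

    print("-3 dB width of the compressed pulse ~", width, "s (about 1/df =", 1.0 / df, "s)")
    print("pulse compression ratio T*df =", T * df)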
The filtering effect of correlation also acts on the noise, meaning that the reference band for the noise is formula_78 and since formula_79, the same filtering effect is obtained on the noise in both cases after correlation. This means that the net effect of pulse compression is that, compared to the equivalent CW pulse, the signal-to-noise ratio (SNR) has improved by a factor formula_80 because the signal is amplified but not the noise. As a consequence: For technical reasons, correlation is not necessarily done for actual received CW pulses as for chirped pulses. However, during baseband shifting the signal undergoes a bandpass filtering on formula_81 which has the same net effect on the noise as the correlation, so the overall reasoning remains the same (that is, the SNR only makes sense for noise defined on a given bandwidth, here being that of the signal). This gain in the SNR seems magical, but remember that the power spectral density does not represent the phase of the signal. In reality the phases are different for the equivalent CW pulse, the CW pulse after correlation, the original chirped pulse and the correlated chirped pulse, which explains the different shapes of the signals (especially the varying lengths) despite having (nearly) the same power spectrum in all cases. If the peak transmitting power formula_82 and the bandwidth formula_78 are constrained, pulse compression thus achieves a better peak power (but the same resolution) by transmitting a longer pulse (that is, more energy), compared to an equivalent CW pulse of the same peak power formula_82 and bandwidth formula_78, and squeezing the pulse by correlation. This works best only for a limited number of signal types which, after correlation, have a narrower peak than the original signal, and low sidelobes. Stretch processing. While pulse compression can ensure good SNR and fine range resolution at the same time, digital signal processing in such a system can be difficult to implement because of the high instantaneous bandwidth of the waveform (formula_28 can be hundreds of megahertz or even exceed 1 GHz). Stretch processing is a technique for matched filtering of wideband chirped waveforms and is suitable for applications seeking very fine range resolution over relatively short range intervals. The picture above shows the scenario for analyzing stretch processing. The central reference point (CRP) is in the middle of the range window of interest at a range of formula_83, corresponding to a time delay of formula_84. If the transmitted waveform is the chirp waveform: formula_85 then the echo from the target at distance formula_86 can be expressed as: formula_87 where formula_88 is proportional to the scatterer reflectivity. We then multiply the echo by formula_89 and the echo becomes: formula_90 where formula_91 is the wavelength of the electromagnetic wave in air. After sampling and applying a discrete Fourier transform to y(t), the sinusoid frequency formula_92 can be obtained: formula_93 and the differential range formula_94 can be obtained: formula_95 To show that the bandwidth of y(t) is less than the original signal bandwidth formula_28, we suppose that the range window is formula_96 long. If the target is at the lower bound of the range window, the echo will arrive formula_97 seconds after transmission; similarly, if the target is at the upper bound of the range window, the echo will arrive formula_98 seconds after transmission. The differential arrival time formula_99 for each case is formula_100 and formula_101, respectively. 
We can then obtain the bandwidth by considering the difference in sinusoid frequency for targets at the lower and upper bound of the range window: formula_102 As a consequence: To demonstrate that stretch processing preserves range resolution, we need to understand that y(t) is actually an impulse train with pulse duration T and period formula_103, which is equal to the period of the transmitted impulse train. As a result, the Fourier transform of y(t) is actually a sinc function with Rayleigh resolution formula_104. That is, the processor will be able to resolve scatterers whose formula_92 are at least formula_105 apart. Consequently, formula_106 and formula_107, which is the same as the resolution of the original linear frequency modulation waveform. Stepped-frequency waveform. Although stretch processing can reduce the bandwidth of the received baseband signal, all of the analog components in the RF front-end circuitry still must be able to support an instantaneous bandwidth of formula_28. In addition, the effective wavelength of the electromagnetic wave changes during the frequency sweep of a chirp signal, and therefore the antenna look direction will inevitably change in a phased array system. Stepped-frequency waveforms are an alternative technique that can preserve fine range resolution and the SNR of the received signal without a large instantaneous bandwidth. Unlike the chirped waveform, which sweeps linearly across a total bandwidth of formula_28 in a single pulse, a stepped-frequency waveform employs an impulse train where the frequency of each pulse is increased by formula_108 from the preceding pulse. The baseband signal can be expressed as: formula_109 where formula_110 is a rectangular impulse of length formula_111 and M is the number of pulses in a single pulse train. The total bandwidth of the waveform is still equal to formula_112, but the analog components can be reset to support the frequency of the following pulse during the time between pulses. As a result, the problem mentioned above can be avoided. To calculate the distance of the target corresponding to a delay formula_113, individual pulses are processed through the simple pulse matched filter: formula_114 and the output of the matched filter is: formula_115 where formula_116 If we sample formula_117 at formula_118, we can get: formula_119 where l denotes range bin l. Taking the DTFT (with m serving as the time variable here) gives: formula_120 and the peak of the summation occurs when formula_121. Consequently, the DTFT of formula_122 provides a measure of the delay of the target relative to the range bin delay formula_123: formula_124 and the differential range can be obtained: formula_125 where c is the speed of light. To demonstrate that the stepped-frequency waveform preserves range resolution, note that formula_126 is a sinc-like function, and therefore it has a Rayleigh resolution of formula_127. As a result: formula_128 and therefore the differential range resolution is: formula_129 which is the same as the resolution of the original linear-frequency-modulation waveform. Pulse compression by phase coding. There are other means to modulate the signal. Phase modulation is a commonly used technique; in this case, the pulse is divided into formula_130 time slots of duration formula_131 for which the phase at the origin is chosen according to a pre-established convention. 
For instance, it is possible to leave the phase unchanged in some time slots (which comes down to just leaving the signal as it is in those slots) and to de-phase the signal in the other slots by formula_132 (which is equivalent to changing the sign of the signal); this is known as binary phase-shift keying. The sequence of formula_133 phases can be chosen according to a technique known as Barker codes. The advantages of the Barker codes are their simplicity (as indicated above, a formula_132 de-phasing is a simple sign change), but the pulse compression ratio is lower than in the chirp case and the compression is very sensitive to frequency changes due to the Doppler effect if that change is larger than formula_104. Other pseudorandom binary sequences have nearly optimal pulse compression properties, such as Gold codes, JPL codes or Kasami codes, because their autocorrelation peak is very narrow. These sequences have other interesting properties making them suitable for GNSS positioning, for instance. It is possible to code the sequence on more than two phases (polyphase coding). As with a linear chirp, pulse compression is achieved through cross-correlation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
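As a small numerical complement to the description of binary phase coding above, the sketch below (Python with NumPy) computes the aperiodic autocorrelation of the standard length-13 Barker code: the peak equals the code length while every sidelobe has magnitude at most 1, which is the property that makes such codes usable for pulse compression. The choice of code length and the output format are merely illustrative.

```python
import numpy as np

# Length-13 Barker code (phases 0 / pi encoded as +1 / -1)
barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])

# Aperiodic autocorrelation: the receiver correlates the echo with this reference
acf = np.correlate(barker13, barker13, mode="full")

print("autocorrelation:", acf.tolist())
print("peak           :", acf.max())                              # 13 = code length
print("max sidelobe   :", np.abs(acf[:len(barker13) - 1]).max())  # 1 for a Barker code
```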
[ { "math_id": 0, "text": " A" }, { "math_id": 1, "text": " f_0" }, { "math_id": 2, "text": " T" }, { "math_id": 3, "text": "s" }, { "math_id": 4, "text": " t=0" }, { "math_id": 5, "text": "s(t) = \\begin{cases}\n e^{2 i \\pi f_0 t} &\\text{if} \\; 0 \\leq t < T \\\\\n 0 &\\text{otherwise}\n\\end{cases}" }, { "math_id": 6, "text": " r(t)" }, { "math_id": 7, "text": "[f_0-\\Delta f/2, f_0+\\Delta f/2]" }, { "math_id": 8, "text": " N(t)" }, { "math_id": 9, "text": "\\langle s,r \\rangle (t)" }, { "math_id": 10, "text": "\\langle s,r \\rangle (t) = \\int_{t'\\,=\\,0}^{+\\infty} s^\\star(t')r(t+t') dt'" }, { "math_id": 11, "text": " t_r" }, { "math_id": 12, "text": "r(t)= \\left\\{ \\begin{array}{ll} A e^{2 i \\pi f_0 (t\\,-\\,t_r)} +N(t) &\\mbox{if} \\; t_r \\leq t < t_r+T \\\\ N(t) &\\mbox{otherwise}\\end{array}\\right." }, { "math_id": 13, "text": "\\langle s,r \\rangle (t) = A\\Lambda\\left (\\frac{t-t_r}{T} \\right)e^{2 i \\pi f_0 (t\\,-\\,t_r)} + N'(t)" }, { "math_id": 14, "text": " N'(t)" }, { "math_id": 15, "text": "\\Lambda" }, { "math_id": 16, "text": " [-\\infty, -\\frac{1}{2}] \\cup [\\frac{1}{2}, +\\infty]" }, { "math_id": 17, "text": " [-\\frac{1}{2}, 0]" }, { "math_id": 18, "text": " [0,\\frac{1}{2}]" }, { "math_id": 19, "text": " T=1" }, { "math_id": 20, "text": " f_0=10" }, { "math_id": 21, "text": " cT" }, { "math_id": 22, "text": " P(t) = |r|^2(t)" }, { "math_id": 23, "text": "E = \\int_0^T P(t)dt = A^2 T" }, { "math_id": 24, "text": "\\sigma" }, { "math_id": 25, "text": "SNR = \\frac{E_r}{\\sigma^{2}} = \\frac{A^2 T}{\\sigma^{2}}" }, { "math_id": 26, "text": "T" }, { "math_id": 27, "text": " t = 0" }, { "math_id": 28, "text": " \\Delta f" }, { "math_id": 29, "text": "s_c(t) = \\left\\{ \\begin{array}{ll} e^{i 2 \\pi \\left( \\left( f_0 \\,-\\, \\frac{\\Delta f}{2}\\right) t \\, + \\, \\frac{\\Delta f}{2T}t^2 \\, \\right)} &\\mbox{if} \\; 0 \\leq t < T \\\\ 0 &\\mbox{otherwise}\\end{array}\\right." }, { "math_id": 30, "text": "\\phi(t) = 2\\pi \\left( \\left( f_0 \\,-\\, \\frac{\\Delta f}{2}\\right) t \\, + \\, \\frac{\\Delta f}{2T}t^2 \\, \\right) " }, { "math_id": 31, "text": "f(t) = \\frac{1}{2\\pi}\\left[\\frac{d\\phi}{dt}\\right ]_t = f_0-\\frac{\\Delta f}{2}+\\frac{\\Delta f}{T}t" }, { "math_id": 32, "text": " f_0 - \\frac{\\Delta f}{2}" }, { "math_id": 33, "text": "t = 0" }, { "math_id": 34, "text": " f_0 + \\frac{\\Delta f}{2}" }, { "math_id": 35, "text": " t = T" }, { "math_id": 36, "text": "f(t)" }, { "math_id": 37, "text": "\\phi(t) = 2 \\pi \\int_0^t f(u)\\,du " }, { "math_id": 38, "text": "[f_0-\\Delta f/2,f_0+\\Delta f/2\n]" }, { "math_id": 39, "text": "r(t) = \\left\\{ \\begin{array}{ll} Ae^{i 2 \\pi \\left( \\left( f_0 \\,-\\, \\frac{\\Delta f}{2}\\right) (t-t_r) \\, + \\, \\frac{\\Delta f}{2T}(t-t_r)^2 \\, \\right)} +N(t)&\\mbox{if} \\; t_r \\leq t < t_r+T \\\\ N(t) &\\mbox{otherwise}\\end{array}\\right." 
}, { "math_id": 40, "text": "r'(t) = \\begin{cases}\n A e^{2 i \\pi \\left (f_0 \\,+\\, \\frac{\\Delta f}{2T}t\\right) t} +N(t) &\\mbox{if}\\; -\\frac{T}{2} \\leq t < \\frac{T}{2} \\\\\n N(t) &\\mbox{otherwise}\n\\end{cases}" }, { "math_id": 41, "text": "\\rho \\neq 1" }, { "math_id": 42, "text": "\\rho" }, { "math_id": 43, "text": "s_c'(t) = \\begin{cases}\n \\rho e^{2 i \\pi \\left (f_0 \\,+\\, \\frac{\\Delta f}{2T}t\\right) t} &\\mbox{if}\\; -\\frac{T}{2} \\leq t < \\frac{T}{2} \\\\\n0 &\\mbox{otherwise}\n\\end{cases}" }, { "math_id": 44, "text": "s_c'" }, { "math_id": 45, "text": "r'" }, { "math_id": 46, "text": "\\langle s_c', r'\\rangle(t) = \\rho A\\sqrt{T} \\Lambda \\left(\\frac{t}{T} \\right) \\mathrm{sinc} \\left[ \\Delta f t \\Lambda \\left( \\frac{t}{T}\\right) \\right] e^{2 i \\pi f_0 t}+N'(t) " }, { "math_id": 47, "text": "N'(t)" }, { "math_id": 48, "text": " s_{c'}" }, { "math_id": 49, "text": "sinc(x)=sin(\\pi x)/(\\pi x)" }, { "math_id": 50, "text": " T' = \\frac{1}{\\Delta f}" }, { "math_id": 51, "text": " T'" }, { "math_id": 52, "text": " P_{r'}=|r'(t)|^2 = P^{peak}_{r'} = A^2" }, { "math_id": 53, "text": " E_{r'}= \\int_{-T/2}^{T/2} |r'(t)|^2 dt = A^2T" }, { "math_id": 54, "text": "t=0" }, { "math_id": 55, "text": " P^{peak}_{<s_c',r'>}=|<s_c',r'>(0)|^2=\\rho^2 A^2T" }, { "math_id": 56, "text": "\\rho=1" }, { "math_id": 57, "text": "sinc" }, { "math_id": 58, "text": "T'=1/\\Delta f" }, { "math_id": 59, "text": " E_{<s_c',r'>}=\\int_{-\\infty}^{+\\infty} |<s_c',r'>(t)|^2 dt\\approx P^{peak}_{<s_c',r'>}\\times T' = \\rho^2 \\frac{A^2T}{\\Delta f}" }, { "math_id": 60, "text": "E_{r'}=E_{<s_c',r'>}" }, { "math_id": 61, "text": "\\rho=\\sqrt{\\Delta f}" }, { "math_id": 62, "text": " P^{peak}_{<s_c',r'>}=\\rho^2 A^2 T=P_{r'}\\times\\Delta f \\times T" }, { "math_id": 63, "text": "\\Delta f \\times T" }, { "math_id": 64, "text": " T' \\approx \\frac{1}{\\Delta f}" }, { "math_id": 65, "text": " P" }, { "math_id": 66, "text": " P'" }, { "math_id": 67, "text": "E" }, { "math_id": 68, "text": "E = P\\times T = P' \\times T' " }, { "math_id": 69, "text": "P'= P\\times \\frac{T}{T'} " }, { "math_id": 70, "text": "D=P/\\Delta f" }, { "math_id": 71, "text": "E = P\\times T = D.\\Delta f.T " }, { "math_id": 72, "text": "E' = P\\times T' = E\\frac{T'}{T} " }, { "math_id": 73, "text": "D'" }, { "math_id": 74, "text": "\\Delta f\\approx 1/T'" }, { "math_id": 75, "text": "E' = E\\frac{T'}{T} = D\\Delta f T\\frac{T'}{T} =D\\Delta f T' " }, { "math_id": 76, "text": "E' = D'\\Delta f T' " }, { "math_id": 77, "text": "D' = D " }, { "math_id": 78, "text": "\\Delta f" }, { "math_id": 79, "text": "D=D'" }, { "math_id": 80, "text": "T/T'" }, { "math_id": 81, "text": "[f_0-\\Delta f/2,f_0+\\Delta f/2]" }, { "math_id": 82, "text": "P" }, { "math_id": 83, "text": " R_0" }, { "math_id": 84, "text": " t_0" }, { "math_id": 85, "text": "x(t)=\\exp\\left(j\\pi\\frac{\\Delta f}{T}(t)^2\\right)\\exp(j2\\pi f_0(t)), 0\\leq t\\leq T" }, { "math_id": 86, "text": " R_{b}" }, { "math_id": 87, "text": "\\bar{x}(t)=\\rho \\exp\\left(j\\pi\\frac{\\Delta f}{T}(t-t_{b})^2\\right)\\exp(j2\\pi f_0(t-t_{b})), 0\\leq t-t_{b}\\leq T" }, { "math_id": 88, "text": " \\rho" }, { "math_id": 89, "text": " \\exp(-j2\\pi f_0 t)\\exp\\left(-j\\pi\\frac{\\Delta f}{T}(t-t_0)^2\\right)" }, { "math_id": 90, "text": "y(t)=\\rho \\exp\\left(-j\\frac{4\\pi R_{b}}{\\lambda}\\right)\\exp\\left(-j2\\pi\\frac{\\Delta f}{T}\\delta t_{b}(t-t_0)\\right)\\exp\\left(j\\pi \\frac{\\Delta f}{T}(\\delta t_{b})^2\\right),t_0\\leq t-\\delta t_{b}\\leq 
t_0+T" }, { "math_id": 91, "text": " \\lambda" }, { "math_id": 92, "text": " F_{b}" }, { "math_id": 93, "text": "F_{b}=-\\delta t_{b}\\frac{\\Delta f}{T}(Hz)" }, { "math_id": 94, "text": " \\delta R_{b}" }, { "math_id": 95, "text": "\\delta R_{b}=-\\frac{cTF_{b}}{2\\Delta f}" }, { "math_id": 96, "text": " R_{w} = \\frac{cT_{w}}{2}" }, { "math_id": 97, "text": " t_0-T_{w}/2" }, { "math_id": 98, "text": " t_0+T_{w}/2" }, { "math_id": 99, "text": " \\delta t_{b}" }, { "math_id": 100, "text": " -T_{w}/2" }, { "math_id": 101, "text": " T_{w}/2" }, { "math_id": 102, "text": "\\Delta f_{s} = F_{b,\\text{near}}-F_{b,\\text{far}} = -\\frac{\\Delta f}{T}(-T_{w}/2-T_{w}/2) = \\frac{T_{w}}{T} \\Delta f" }, { "math_id": 103, "text": " T_{trans}" }, { "math_id": 104, "text": " \\frac{1}{T}" }, { "math_id": 105, "text": " \\Delta F_{b}=1/T" }, { "math_id": 106, "text": "\\frac{1}{T}=\\left\\vert \\frac{\\Delta f}{T}\\Delta(\\delta t_{b}) \\right\\vert \\Rightarrow \\left\\vert \\Delta(\\delta t_{b})\\right\\vert =\\frac{1}{\\Delta f}" }, { "math_id": 107, "text": "\\Delta(\\delta R_{b})=\\frac{c\\Delta(\\delta t_{b})}{2}=\\frac{c}{2\\Delta f}" }, { "math_id": 108, "text": " \\Delta F" }, { "math_id": 109, "text": "x(t)=\\sum_{m=0}^{M-1}x_{p}(t-mT)e^{j2\\pi m\\Delta F(t-mT)}" }, { "math_id": 110, "text": " x_{p}(t)" }, { "math_id": 111, "text": " \\tau" }, { "math_id": 112, "text": " \\Delta f=M\\Delta F" }, { "math_id": 113, "text": " t_{l}+\\delta t" }, { "math_id": 114, "text": "\nh_{p}(t)=x^*_{p}(-t)\n" }, { "math_id": 115, "text": "\ny_{m}(t)=s^*_{p}(t-(t_{l}+\\delta t)-mT)e^{j2\\pi m\\Delta F(t-(t_{l}+\\delta t)-mT)}\n" }, { "math_id": 116, "text": "\ns^*_{p}(t-(t_{l}+\\delta t)-mT)=x_{p}(t-(t_{l}+\\delta t)-mT)*h_{p}(t)\n" }, { "math_id": 117, "text": " y_{m}(t)" }, { "math_id": 118, "text": " t=t_{l}+mT" }, { "math_id": 119, "text": "\ny[l,m]=s^*_{p}(\\delta t)e^{j2\\pi m\\Delta F\\delta t}\n" }, { "math_id": 120, "text": "\nY[l,\\omega]=\\sum_{m=0}^{M-1}y[l,m]e^{-j\\omega m}=s^*_{p}(\\delta t)\\sum_{m=0}^{M-1}e^{j(\\omega-2\\pi\\Delta F\\delta t)m}\n" }, { "math_id": 121, "text": " \\omega=2\\pi\\Delta F\\delta t" }, { "math_id": 122, "text": " y[l,m]" }, { "math_id": 123, "text": " t_{l}" }, { "math_id": 124, "text": "\\delta t=\\frac{\\omega_{p}}{2\\pi \\Delta F}=\\frac{f_{p}}{\\Delta F}" }, { "math_id": 125, "text": "\\delta R=\\frac{cf_{p}}{2\\Delta F}" }, { "math_id": 126, "text": " Y[l,\\omega]" }, { "math_id": 127, "text": " \\Delta f_{p}=1/M" }, { "math_id": 128, "text": "\\Delta(\\delta t)=\\frac{1}{M\\Delta F}=\\frac{1}{\\Delta f}" }, { "math_id": 129, "text": "\\Delta(\\delta R)=\\frac{c}{2\\Delta f}" }, { "math_id": 130, "text": " N" }, { "math_id": 131, "text": " \\frac{T}{N}" }, { "math_id": 132, "text": " \\pi" }, { "math_id": 133, "text": " \\{0, \\pi \\}" } ]
https://en.wikipedia.org/wiki?curid=8433728
8434642
Spin-polarized scanning tunneling microscopy
Spin-polarized scanning tunneling microscopy (SP-STM) is a type of scanning tunneling microscope (STM) that can provide detailed information about magnetic phenomena on the single-atom scale in addition to the atomic topography gained with STM. SP-STM opened a novel approach to static and dynamic magnetic processes, such as precise investigations of domain walls in ferromagnetic and antiferromagnetic systems, as well as thermal and current-induced switching of nanomagnetic particles. Principle of operation. An extremely sharp tip coated with a thin layer of magnetic material is moved systematically over a sample. A voltage is applied between the tip and the sample, allowing electrons to tunnel between the two, resulting in a current. In the absence of magnetic phenomena, the strength of this current is indicative of local electronic properties. If the tip is magnetized, electrons with spins matching the tip's magnetization will have a higher chance of tunneling. This is essentially the effect of tunnel magnetoresistance, and the tip/surface pair acts as a spin valve. Since a scan using only a magnetized tip cannot distinguish between current changes due to magnetization or spatial separation, multi-domain structures and/or topographical information from another source (frequently conventional STM) must be utilized. This makes magnetic imaging possible down to the atomic scale, for example, in antiferromagnetic systems. Topographical and magnetic information can be simultaneously obtained if the tip's magnetization is modulated at a high frequency (20–30 kHz) using a small coil wound around the tip. The tip's magnetization thus flips too fast for the STM feedback loop to respond to, and topographical information is obtained intact. The high-frequency signal is separated using a lock-in amplifier, and this signal provides the magnetic information about the surface. In standard scanning tunneling microscopy (STM), the tunneling probability of electrons between the probe tip and the sample strongly depends on the distance between them, as it decays exponentially as the separation increases. In spin-polarized STM (SP-STM) the tunneling current also depends on the spin orientation of the tip and the sample. The local density of states (LDOS) of the magnetic tip and the sample is different for different spin orientations, and tunneling can occur only between the states with parallel spin (ignoring spin-flip processes). When the spins of the sample and the tip are parallel there are many available states to which the electrons can tunnel, thus resulting in a large tunneling current. On the other hand, if the spins are antiparallel most of the available states are already filled and the tunneling current will be significantly smaller. With SP-STM it is then possible to probe the spin-dependent local density of states of magnetic samples by measuring the tunneling conductance formula_0, which for small bias is given by formula_1 where formula_2 is the tunneling conductance in the nonmagnetic case, formula_3 is the tunneling matrix element which describes the transitions between the spin-dependent states of the tip and the sample, formula_4, formula_5 and formula_6, formula_7 are the total densities of states and spin polarizations of the tip (t) and the sample (s), respectively, and formula_8 is the angle between the magnetization directions of the tip and the sample. In the nonmagnetic limit (formula_9 or formula_10), this expression reduces to the Tersoff and Hamann model for standard STM tunneling conductance. 
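The small-bias conductance expression above lends itself to a quick numerical illustration. In the sketch below (Python with NumPy; all values are arbitrary normalized examples, not measured data), formula_1 is evaluated for parallel, perpendicular and antiparallel alignment of the tip and sample magnetizations, and the relative magnetic contrast reduces to the product of the two spin polarizations.

```python
import numpy as np

def sp_conductance(G0, M0, n_t, n_s, P_t, P_s, theta):
    """Small-bias SP-STM conductance: 2*pi^2*G0*|M0|^2*n_t*n_s*(1 + P_t*P_s*cos(theta))."""
    return 2 * np.pi**2 * G0 * abs(M0)**2 * n_t * n_s * (1 + P_t * P_s * np.cos(theta))

# Arbitrary illustrative values (normalized units)
G0, M0, n_t, n_s = 1.0, 1.0, 1.0, 1.0
P_t, P_s = 0.4, 0.3            # tip and sample spin polarizations

for label, theta in [("parallel", 0.0), ("perpendicular", np.pi / 2), ("antiparallel", np.pi)]:
    print(f"{label:13s}: G = {sp_conductance(G0, M0, n_t, n_s, P_t, P_s, theta):.3f}")

# Relative magnetic contrast between parallel and antiparallel alignment
Gp = sp_conductance(G0, M0, n_t, n_s, P_t, P_s, 0.0)
Gap = sp_conductance(G0, M0, n_t, n_s, P_t, P_s, np.pi)
print(f"contrast (Gp - Gap)/(Gp + Gap) = {(Gp - Gap) / (Gp + Gap):.3f}  (equals P_t*P_s = {P_t * P_s:.3f})")
```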
In the more general case, with finite bias voltage formula_11, the expression for the tunneling current at tip location formula_12 becomes formula_13 where formula_14 is a constant, formula_15 is the inverse decay length of the electron wavefunction, formula_16 and formula_17 are the charge and mass of the electron, respectively, formula_18 is the energy-integrated LDOS of the tip, and formula_19 and formula_20 are the corresponding magnetization vectors of the spin-polarized LDOS. The tunneling current is the sum of a spin-independent part formula_21 and a spin-dependent part formula_22. Probe tip preparation. The most critical component in the SP-STM setup is the probe tip, which has to be atomically sharp to offer spatial resolution down to the atomic level, have a large enough spin polarization to provide a sufficient signal-to-noise ratio, but at the same time have a small enough stray magnetic field to enable nondestructive magnetic probing of the sample; finally, the spin orientation at the tip apex has to be controlled in order to determine which spin orientation of the sample is imaged. In order to prevent oxidation, the tip preparation usually has to be done in ultra-high vacuum (UHV). There are three main ways to obtain a probe tip suitable for SP-STM measurements: Modes of operation. SP-STM can be operated in one of three modes: constant-current mode and spectroscopic mode, which are similar to standard STM operation modes but with spin resolution, or modulated tip magnetization mode, which is unique to SP-STM measurements. In constant-current mode, the tip-sample separation is kept constant by an electric feedback loop. The measured tunneling current formula_23 consists of spin-averaged and spin-dependent components (formula_24) which can be decomposed from the data. The tunneling current is primarily dominated by the smallest non-zero reciprocal lattice vector, which means that magnetic superstructures, which usually have the longest real-space periodicities (and thus the shortest reciprocal-space periodicities), bring the largest contribution to the spin-dependent tunneling current formula_25. Thus SP-STM is an excellent method to observe the magnetic structure rather than the atomic structure of the sample. The downside is that it is difficult to study scales larger than atomic in constant-current mode, as the topographical features of the surface may interfere with the magnetic features, making data analysis very difficult. The second mode of operation is the spin-resolved spectroscopic mode, which measures the local differential tunneling conductance formula_26 as a function of the bias voltage formula_27 and the spatial coordinates of the tip. The spectroscopic mode can be used under constant-current conditions, in which the sample-tip separation varies, resulting in a superposition of topographic and electronic information which can then be separated. If the spectroscopic mode is used with a constant tip-sample separation, the measured formula_26 is directly related to the spin-resolved LDOS of the sample, whereas the measured tunneling current formula_23 is proportional to the energy-integrated spin-polarized LDOS. By combining the spectroscopic mode with the constant-current mode, it is possible to obtain both topographic and spin-resolved surface data. Thirdly, SP-STM can be used in the modulated magnetization mode, in which the tip magnetization is periodically switched, resulting in a tunneling current that is proportional to the local magnetization of the sample. This makes it possible to separate magnetic features from electronic and topographical features. 
Since the spin-polarized LDOS can change not only magnitude but also sign as a function of energy, the measured tunneling current can vanish even if there is finite magnetization in the sample. Thus the bias dependence of the spin-polarized tunneling current in the modulated magnetization mode has to be studied as well. Only ferromagnetic tips are suitable for the modulated magnetization mode, meaning that their stray fields might make nondestructive imaging impossible. Applications of SP-STM. The spin-polarized scanning tunneling microscope is a versatile instrument which has gained tremendous attention due to its enhanced surface sensitivity and lateral resolution down to the atomic scale, and it can be used as an important tool to study ferromagnetic materials with high magnetic anisotropy, such as dysprosium (Dy), quasi-2D thin films, nanoislands and quasi-1D nanowires. In a study carried out by L. Berbil-Bautista et al., the magnetic domain wall or Néel wall of width 2–5 nm present in these materials was observed by bringing a chromium (Cr)-coated tungsten tip close to the Dy layer. This causes the transfer of Dy particles from the magnetic material onto the apex of the tip. The width of the domain wall is calculated as formula_28 where formula_29 is the exchange stiffness. The magnetic contrast is enhanced due to the presence of electronic states that are not occupied in the cluster of Dy atoms present on the apex of the tip. The formation of 360° domain walls in ferromagnetic films plays an important role in making magnetic random access memory devices. These domain walls are formed when an external magnetic field is applied along the easy direction of the magnetic material. This forces the two 180° walls, which also have an identical sense of rotation, to come closer. In a study carried out by A. Kubetzka et al., SP-STM was used to measure the evolution of 360° domain wall profiles of two-atomic-layer iron nanowires by varying the external magnetic field between 550 and 800 mT. Quantum interference phenomena have been observed in cobalt islands deposited on a copper(111) substrate. This has been attributed to the scattering of surface-state electrons by defects, such as terrace edges, impurities or adsorbates, present on a densely packed noble metal surface. Spin-polarized STM has been used to investigate the electronic structure of triangular cobalt islands deposited on copper(111). This study shows that the substrate and the islands exhibit their own individual standing wave patterns, and this can be used to find the spin-polarized material. New advances in SP-STM. New advances in SP-STM show that this technique can be further used to understand complex phenomena that have not been explained by other imaging techniques. Non-magnetic impurities, such as oxygen on a magnetic surface (an iron double layer on a tungsten (W) substrate), cause the formation of spin-polarized waves. The oxygen impurity adsorbed on the iron double layer can be used to study the interaction between Kondo impurities via the RKKY interaction. This study shows that anisotropic scattering states can be observed around individual oxygen atoms adsorbed on the iron double layer. This gives information on the spin characteristics of the electronic states involved in the scattering process. Similarly, the existence of 2D antiferromagnetism at the interface of manganese (Mn) and W(110) has been observed using the SP-STM technique. 
The importance of this study is that the atomic-scale roughness at the interface between Mn and W(110) causes frustration in the magnetic interaction, and it gives rise to complex spin structures that cannot be studied using other methods. Alternate method. Another way to obtain the magnetization distribution is to have the tip provide a strong stream of spin-polarized electrons. One method to achieve this is to shine polarized laser light onto a GaAs tip, which produces spin-polarized electrons due to spin-orbit coupling. The tip is then scanned along the sample much like in conventional STM. One limitation of this method is that the most effective source of spin-polarized electrons is obtained by having the incident laser light shine directly opposite the tip, i.e. through the sample itself. This restricts the method to measuring thin samples. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G=\\mathrm{d}I/\\mathrm{d}U" }, { "math_id": 1, "text": "G=2\\pi^{2}G_{0}|M_{0}|^{2}n_{\\mathrm{t}}n_{\\mathrm{s}}\\left(1+P_{\\mathrm{t}}P_{\\mathrm{s}}\\cos\\theta\\right)," }, { "math_id": 2, "text": "G_{0}" }, { "math_id": 3, "text": "M_{0}" }, { "math_id": 4, "text": "n_{\\mathrm{t}}" }, { "math_id": 5, "text": "P_{\\mathrm{t}}" }, { "math_id": 6, "text": "n_{\\mathrm{s}}" }, { "math_id": 7, "text": "P_{\\mathrm{s}}" }, { "math_id": 8, "text": "\\theta" }, { "math_id": 9, "text": "P_{t}=0" }, { "math_id": 10, "text": "P_{s}=0" }, { "math_id": 11, "text": "U" }, { "math_id": 12, "text": "\\mathbf{r}" }, { "math_id": 13, "text": "I\\left(\\mathbf{r},U,\\theta\\right)=I_{0}\\left(\\mathbf{r},U\\right)+I_{d}\\left(\\mathbf{r},U,\\theta\\right)=\\frac{4\\pi^{3}C^{2}\\hslash^{3}e}{\\kappa^{2}m^{2}}\\left[n_{\\mathrm{t}}\\tilde{n}_{\\mathrm{s}}\\left(\\mathbf{r},U\\right)+\\mathbf{m}_{\\mathrm{t}}\\mathbf{\\tilde{m}}_{\\mathrm{s}}\\left(\\mathbf{r},U\\right)\\right]," }, { "math_id": 14, "text": "C" }, { "math_id": 15, "text": "\\kappa" }, { "math_id": 16, "text": "e" }, { "math_id": 17, "text": "m" }, { "math_id": 18, "text": "\\tilde{n}_{\\mathrm{s}}" }, { "math_id": 19, "text": "\\mathbf{m}_{\\mathrm{t}}" }, { "math_id": 20, "text": "\\mathbf{\\tilde{m}}_{\\mathrm{s}}" }, { "math_id": 21, "text": "I_{0}" }, { "math_id": 22, "text": "I_{d}" }, { "math_id": 23, "text": "I" }, { "math_id": 24, "text": "I=I_{0}+I_{\\mathrm{d}}" }, { "math_id": 25, "text": "I_{\\mathrm{d}}" }, { "math_id": 26, "text": "\\mathrm{d}I/\\mathrm{d}U" }, { "math_id": 27, "text": "U" }, { "math_id": 28, "text": "{\\displaystyle w={\\sqrt {2(A/k)}}}, " }, { "math_id": 29, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=8434642
8437941
Noncommutative standard model
In theoretical particle physics, the non-commutative Standard Model (best known as the Spectral Standard Model &lt;ref name="10.1007/JHEP09(2012)104"&gt; &lt;/ref&gt; &lt;ref name="10.1007/JHEP11(2013)132"&gt; &lt;/ref&gt;) is a model based on noncommutative geometry that unifies a modified form of general relativity with the Standard Model (extended with right-handed neutrinos). The model postulates that space-time is the product of a 4-dimensional compact spin manifold formula_0 by a finite space formula_1. The full Lagrangian (in Euclidean signature) of the Standard Model minimally coupled to gravity is obtained as pure gravity over that product space. It is therefore close in spirit to Kaluza–Klein theory but without the problem of a massive tower of states. The parameters of the model live at the unification scale and physical predictions are obtained by running the parameters down through renormalization. It is worth stressing that it is more than a simple reformulation of the Standard Model. For example, the scalar sector and the fermion representations are more constrained than in effective field theory. Motivation. Following ideas from Kaluza–Klein and Albert Einstein, the spectral approach seeks unification by expressing all forces as pure gravity on a space formula_2. The group of invariance of such a space should combine the group of invariance of general relativity formula_3 with formula_4, the group of maps from formula_0 to the standard model gauge group formula_5. formula_3 acts on formula_6 by permutations and the full group of symmetries of formula_2 is the semi-direct product: formula_7 Note that the group of invariance of formula_2 is not a simple group as it always contains the normal subgroup formula_6. It was proved by Mather &lt;ref name="10.1090/S0002-9904-1974-13456-7"&gt; &lt;/ref&gt; and Thurston &lt;ref name="10.1090/S0002-9904-1974-13475-0"&gt; &lt;/ref&gt; that for ordinary (commutative) manifolds, the connected component of the identity in formula_3 is always a simple group; therefore, no ordinary manifold can have this semi-direct product structure. It is nevertheless possible to find such a space by enlarging the notion of space. In noncommutative geometry, spaces are specified in algebraic terms. The algebraic object corresponding to a diffeomorphism is an automorphism of the algebra of coordinates. If the algebra is taken to be non-commutative, it has non-trivial automorphisms (the so-called inner automorphisms). These inner automorphisms form a normal subgroup of the group of automorphisms and provide the correct group structure. Picking different algebras then gives rise to different symmetries. The Spectral Standard Model takes as input the algebra formula_8 where formula_9 is the algebra of differentiable functions encoding the 4-dimensional manifold and formula_10 is a finite-dimensional algebra encoding the symmetries of the Standard Model. History. The first ideas to apply noncommutative geometry to particle physics appeared in 1988-89, &lt;ref name="10.1016/0370-2693(89)90083-X"&gt; &lt;/ref&gt;&lt;ref name="10.1063/1.528917"&gt; &lt;/ref&gt; and were formalized a couple of years later by Alain Connes and John Lott in what is known as the Connes-Lott model.&lt;ref name="10.1016/0920-5632(91)90120-4"&gt; &lt;/ref&gt; The Connes-Lott model did not incorporate the gravitational field. 
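As a concrete illustration of the point made in the Motivation section, that a non-commutative algebra possesses non-trivial inner automorphisms while a commutative one does not, the following minimal sketch (Python with NumPy; the matrices are arbitrary examples chosen here) conjugates elements of the full 3×3 matrix algebra, which appears as a summand of formula_10, and of a commutative algebra of diagonal matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_automorphism(u, a):
    """Conjugation a -> u a u^{-1} by an invertible element u of the algebra."""
    return u @ a @ np.linalg.inv(u)

# Noncommutative algebra: full 3x3 complex matrices M_3(C)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))   # generically invertible
print("M_3(C):   u a u^-1 == a ?", np.allclose(inner_automorphism(u, a), a))   # False: non-trivial

# Commutative algebra: diagonal 3x3 matrices (functions on three points)
d = np.diag(rng.standard_normal(3))
v = np.diag(rng.standard_normal(3))   # invertible as long as no diagonal entry is zero
print("diagonal: v d v^-1 == d ?", np.allclose(inner_automorphism(v, d), d))   # True: trivial
```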
In 1997, Ali Chamseddine and Alain Connes published a new action principle, the Spectral Action, &lt;ref name="10.1007/s002200050126"&gt; &lt;/ref&gt; that made it possible to incorporate the gravitational field into the model. Nevertheless, it was quickly noted that the model suffered from the notorious fermion-doubling problem (quadrupling of the fermions) &lt;ref name="10.1103/PhysRevD.55.6357"&gt; &lt;/ref&gt; &lt;ref name="10.1016/S0370-2693(97)01310-5"&gt; &lt;/ref&gt; and required neutrinos to be massless. One year later, experiments at Super-Kamiokande and the Sudbury Neutrino Observatory began to show that solar and atmospheric neutrinos change flavors and therefore are massive, ruling out the Spectral Standard Model. Only in 2006 was a solution to the latter problem proposed, independently by John W. Barrett&lt;ref name="10.1063/1.2408400"&gt; &lt;/ref&gt; and Alain Connes,&lt;ref name="10.1088/1126-6708/2006/11/081"&gt; &lt;/ref&gt; almost at the same time. They showed that massive neutrinos can be incorporated into the model by disentangling the KO-dimension (which is defined modulo 8) from the metric dimension (which is zero) for the finite space. By setting the KO-dimension to be 6, not only were massive neutrinos possible, but the see-saw mechanism was imposed by the formalism and the fermion doubling problem was also addressed. The new version of the model was studied in &lt;ref name="10.4310/ATMP.2007.v11.n6.a3"&gt; &lt;/ref&gt; and, under an additional assumption known as the "big desert" hypothesis, computations were carried out to predict the Higgs boson mass to be around 170 GeV and to postdict the top quark mass. In August 2008, Tevatron experiments excluded a Higgs mass of 158 to 175 GeV at the 95% confidence level. Alain Connes acknowledged on a blog about non-commutative geometry that the prediction about the Higgs mass was invalidated. In July 2012, CERN announced the discovery of the Higgs boson with a mass around 125 GeV/"c"2. A proposal to address the problem of the Higgs mass was published by Ali Chamseddine and Alain Connes in 2012 by taking into account a real scalar field that was already present in the model but had been neglected in previous analyses. Another solution to the Higgs mass problem was put forward by Christopher Estrada and Matilde Marcolli by studying the renormalization group flow in the presence of gravitational correction terms.&lt;ref name="10.1142/S0219887813500369"&gt; &lt;/ref&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{M}" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "\\mathcal{X}" }, { "math_id": 3, "text": "\\text{Diff}(\\mathcal{M})" }, { "math_id": 4, "text": "\\mathcal{G} = \\text{Map}(\\mathcal{M}, G)" }, { "math_id": 5, "text": "G=SU(3) \\times SU(2) \\times U(1)" }, { "math_id": 6, "text": "\\mathcal{G}" }, { "math_id": 7, "text": "\\text{Diff}(\\mathcal{X}) = \\mathcal{G} \\rtimes \\text{Diff}(\\mathcal{M})" }, { "math_id": 8, "text": "A = C^{\\infty}(M) \\otimes A_F " }, { "math_id": 9, "text": "C^{\\infty}(M)" }, { "math_id": 10, "text": "A_F = \\mathbb{C} \\oplus \\mathbb{H} \\oplus M_3(\\mathbb{C})" } ]
https://en.wikipedia.org/wiki?curid=8437941
8439678
Hilbert's arithmetic of ends
In mathematics, specifically in the area of hyperbolic geometry, Hilbert's arithmetic of ends is a method for endowing a geometric set, the set of ideal points or "ends" of a hyperbolic plane, with an algebraic structure as a field. It was introduced by German mathematician David Hilbert. Definitions. Ends. In a hyperbolic plane, one can define an "ideal point" or "end" to be an equivalence class of limiting parallel rays. The set of ends can then be topologized in a natural way and forms a circle. This usage of "end" is not canonical; in particular the concept it indicates is different from that of a topological end (see End (topology) and End (graph theory)). In the Poincaré disk model or Klein model of hyperbolic geometry, every ray intersects the boundary circle (also called the "circle at infinity" or "line at infinity") in a unique point, and the ends may be identified with these points. However, the points of the boundary circle are not considered to be points of the hyperbolic plane itself. Every hyperbolic line has exactly two distinct ends, and every two distinct ends are the ends of a unique line. For the purpose of Hilbert's arithmetic, it is expedient to denote a line by the ordered pair ("a", "b") of its ends. Hilbert's arithmetic arbitrarily fixes three distinct ends, and labels them as 0, 1, and ∞. The set "H" on which Hilbert defines a field structure is the set of all ends other than ∞, while "H"' denotes the set of all ends including ∞. Addition. Hilbert defines the addition of ends using hyperbolic reflections. For every end "x" in "H", its negation −"x" is defined by constructing the hyperbolic reflection of line ("x",∞) across the line (0,∞), and choosing −"x" to be the non-infinite end of the reflected line. The composition of any three hyperbolic reflections whose axes of symmetry all share a common end is itself another reflection, across another line with the same end. Based on this "three reflections theorem", given any two ends "x" and "y" in "H", Hilbert defines the sum "x" + "y" to be the non-infinite end of the symmetry axis of the composition of the three reflections through the lines ("x",∞), (0,∞), and ("y",∞). It follows from the properties of reflections that these operations have the properties required of the negation and addition operations in the algebra of fields: they form the inverse and addition operations of an additive abelian group. Multiplication. The multiplication operation in the arithmetic of ends is defined (for nonzero elements "x" and "y" of "H") by considering the lines (1,−1), ("x",−"x"), and ("y",−"y"). Because of the way −1, −"x", and −"y" are defined by reflection across the line (0,∞), each of the three lines (1,−1), ("x",−"x"), and ("y",−"y") is perpendicular to (0,∞). From these three lines, a fourth line can be determined, the axis of symmetry of the composition of the reflections through ("x",−"x"), (1,−1), and ("y",−"y"). This line is also perpendicular to (0,∞), and so takes the form ("z",−"z") for some end "z". Alternatively, the intersection of this line with the line (0,∞) can be found by adding the lengths of the line segments from the crossing with (1,−1) to the crossings of the other two lines. For exactly one of the two possible choices for "z", an even number of the four elements 1, "x", "y", and "z" lie on the same side of line (0,∞) as each other. The product "xy" is defined to be this choice of "z". 
Because it can be defined by adding lengths of line segments, this operation satisfies the requirement of a multiplication operation over a field, that it forms an abelian group over the nonzero elements of the field, with identity one. The inverse operation of the group is the reflection of an end across the line (1,−1). This multiplication operation can also be shown to obey the distributive property together with the addition operation of the field. Rigid motions. Let formula_0 be a hyperbolic plane and "H" its field of ends, as introduced above. In the plane formula_0, we have rigid motions and their effects on ends as follows:
- The reflection in the line formula_1 sends formula_2 to its negation: formula_3
- The reflection in the line (1,−1) gives: formula_4
- The translation along the line formula_5 that sends 1 to any positive formula_6 is represented by: formula_7
- The composition of the reflections in the lines formula_8 and formula_9 fixes formula_10 and acts on ends as: formula_11
- A rotation around the point "O" where the lines (0,∞) and (1,−1) intersect acts as formula_12 on ends. The rotation around "O" sending 0 to formula_10 gives formula_13
For a more extensive treatment than this article can give, confer.
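The definitions above can be checked computationally in the Poincaré upper half-plane model, where the ends may be identified with the points of the boundary line together with ∞, the line ("a",∞) is realized as a vertical Euclidean line and the line ("r",−"r") as a semicircle centred at 0, so that on the boundary the corresponding reflections act as t → 2a − t and t → r²/t. The sketch below (Python; a minimal check under these model-specific conventions, which are assumptions of the illustration rather than statements from the text) composes the three reflections appearing in the definitions of addition and multiplication, reads off the end of the resulting axis, and recovers ordinary addition of real numbers and, for positive arguments, ordinary multiplication.

```python
import math

# Boundary (end) actions of hyperbolic reflections in the upper half-plane model:
# the line (a, oo) is the vertical line Re(z) = a, the line (r, -r) is the
# semicircle of radius |r| centred at 0.
def reflect_vertical(a):
    return lambda t: 2 * a - t      # reflection across (a, oo)

def reflect_circle(r):
    return lambda t: r * r / t      # reflection across (r, -r), for t != 0

def compose(*maps):
    def f(t):
        for m in maps:              # apply left to right
            t = m(t)
        return t
    return f

def hilbert_sum(x, y):
    # Reflections across (x, oo), (0, oo), (y, oo) compose to t -> 2c - t;
    # the axis is the vertical line at c, whose non-infinite end is the sum.
    f = compose(reflect_vertical(x), reflect_vertical(0.0), reflect_vertical(y))
    return f(0.0) / 2

def hilbert_product(x, y):
    # Reflections across (x,-x), (1,-1), (y,-y) compose to t -> K/t;
    # the axis is the semicircle of radius sqrt(K), with ends +-sqrt(K).
    f = compose(reflect_circle(x), reflect_circle(1.0), reflect_circle(y))
    K = 1.0 * f(1.0)                # since f(t) = K/t, K = t * f(t)
    return math.sqrt(K)             # positive choice, valid for x, y > 0

print(hilbert_sum(2.0, 5.0))        # 7.0
print(hilbert_product(2.0, 5.0))    # 10.0
```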
[ { "math_id": 0, "text": "\\scriptstyle \\Pi" }, { "math_id": 1, "text": "\\scriptstyle(0,\\, \\infty)" }, { "math_id": 2, "text": "\\scriptstyle x\\, \\in\\, H'" }, { "math_id": 3, "text": "x'=-x.\\," }, { "math_id": 4, "text": "x'={1 \\over x}.\\," }, { "math_id": 5, "text": "\\scriptstyle(0,\\,\\infty)" }, { "math_id": 6, "text": "\\scriptstyle a\\, \\in\\, H" }, { "math_id": 7, "text": "x'=ax.\\," }, { "math_id": 8, "text": "\\scriptstyle(0,\\infty)" }, { "math_id": 9, "text": "\\scriptstyle((1/2) a,\\, \\infty)" }, { "math_id": 10, "text": "\\scriptstyle \\infty" }, { "math_id": 11, "text": "x'=x+a.\\," }, { "math_id": 12, "text": "x'=\\frac{x+a}{1-ax}" }, { "math_id": 13, "text": "x'=-{1 \\over x}." } ]
https://en.wikipedia.org/wiki?curid=8439678
84397
Deborah number
The Deborah number (De) is a dimensionless number, often used in rheology to characterize the fluidity of materials under specific flow conditions. It quantifies the observation that given enough time even a solid-like material might flow, or a fluid-like material can act solid when it is deformed rapidly enough. Materials that have low relaxation times flow easily and as such show relatively rapid stress decay. Definition. The Deborah number is the ratio of fundamentally different characteristic times. The Deborah number is defined as the ratio of the time it takes for a material to adjust to applied stresses or deformations, and the characteristic time scale of an experiment (or a computer simulation) probing the response of the material: formula_0 where "t"c stands for the relaxation time and "t"p for the "time of observation", typically taken to be the time scale of the process. The numerator, relaxation time, is the time needed for a reference amount of deformation to occur under a suddenly applied reference load (a more fluid-like material will therefore require less time to flow, giving a lower Deborah number relative to a solid subjected to the same loading rate). The denominator, material time, is the amount of time required to reach a given reference strain (a faster loading rate will therefore reach the reference strain sooner, giving a higher Deborah number). Equivalently, the relaxation time is the time required for the stress induced, by a suddenly applied reference strain, to reduce by a certain reference amount. The relaxation time is actually based on the rate of relaxation that exists at the moment of the suddenly applied load. This incorporates both the elasticity and viscosity of the material. At lower Deborah numbers, the material behaves in a more fluidlike manner, with an associated Newtonian viscous flow. At higher Deborah numbers, the material behavior enters the non-Newtonian regime, increasingly dominated by elasticity and demonstrating solidlike behavior. For example, for a Hookean elastic solid, the relaxation time "t"c will be infinite and it will vanish for a Newtonian viscous fluid. For liquid water, "t"c is typically 10−12 s, for lubricating oils passing through gear teeth at high pressure it is of the order of 10−6 s and for polymers undergoing plastics processing, the relaxation time will be of the order of a few seconds. Therefore, depending on the situation, these liquids may exhibit elastic properties, departing from purely viscous behavior. While De is similar to the Weissenberg number and is often confused with it in technical literature, they have different physical interpretations. The Weissenberg number indicates the degree of anisotropy or orientation generated by the deformation, and is appropriate to describe flows with a constant stretch history, such as simple shear. In contrast, the Deborah number should be used to describe flows with a non-constant stretch history, and physically represents the rate at which elastic energy is stored or released. History. The Deborah number was originally proposed by Markus Reiner, a professor at Technion in Israel, who chose the name inspired by a verse in the Bible, stating "The mountains flowed before the Lord" in a song by the prophetess Deborah in the Book of Judges; הָרִ֥ים נָזְל֖וּ מִפְּנֵ֣י יְהוָ֑ה "hā-rîm nāzəlū mippənê Yahweh"). 
In his 1964 paper (a reproduction of his after-dinner speech to the Fourth International Congress on Rheology in 1962), Markus Reiner further elucidated the name's origin:“Deborah knew two things.  First, that the mountains flow, as everything flows. But, secondly, that they flowed before the Lord, and not before man, for the simple reason that man in his short lifetime cannot see them flowing, while the time of observation of God is infinite. We may therefore well define a nondimensional number the Deborah number D = time of relaxation/time of observation.” Time-temperature superposition. The Deborah number is particularly useful in conceptualizing the time–temperature superposition principle. Time-temperature superposition has to do with altering experimental time scales using reference temperatures to extrapolate temperature-dependent mechanical properties of polymers. A material at low temperature with a long experimental or relaxation time behaves like the same material at high temperature and short experimental or relaxation time if the Deborah number remains the same. This can be particularly useful when working with materials which relax on a long time scale under a certain temperature. The practical application of this idea arises in the Williams–Landel–Ferry equation. Time-temperature superposition avoids the inefficiency of measuring a polymer's behavior over long periods of time at a specified temperature by utilizing the Deborah number. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\mathrm{De} = \\frac{t_\\mathrm{c}}{t_\\mathrm{p}}," } ]
https://en.wikipedia.org/wiki?curid=84397
84400
Zero-point energy
Lowest possible energy of a quantum system or field Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity. The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel each other out. This idea would be true if supersymmetry were an exact symmetry of nature; however, the LHC at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature". Etymology and terminology. The term zero-point energy (ZPE) is a translation from the German . Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate. Overview. In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. 
Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics. The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy. Given the equivalence of mass and energy expressed by Albert Einstein's , any point in space that contains energy can be thought of as having mass to create particles. Virtual particles spontaneously flash into existence at every point in space due to the energy of quantum fluctuations caused by the uncertainty principle. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces, it treats every single point of space as a quantum harmonic oscillator. According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field. The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum" is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant. For decades most physicists assumed that there was some undiscovered fundamental principle that will remove the infinite zero-point energy and make it completely vanish. If the vacuum has no intrinsic, absolute value of energy it will not gravitate. 
It was believed that as the universe expands from the aftermath of the Big Bang, the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe; galaxies and all matter in the universe should begin to decelerate. This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem. Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission, Casimir force, Lamb shift, magnetic moment of the electron and Delbrück scattering. These effects are usually called "radiative corrections". In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states, symmetry breaking, chaos and emergence. Active areas of research include the effects of virtual particles, quantum entanglement, the difference (if any) between inertial and gravitational mass, variation in the speed of light, a reason for the observed value of the cosmological constant and the nature of dark energy. History. Early aether theories. Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was , "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. Debate about the characteristics of the vacuum were largely confined to the realm of philosophy, it was not until much later on with the beginning of the renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum. Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However Maxwell noted that for the most part these aethers were "ad hoc": &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;To those who maintained the existence of a plenum as a philosophical principle, nature's abhorrence of a vacuum was a sufficient reason for imagining an all-surrounding aether ... Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, till a space had been filled three or four times with aethers. 
Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved. Second quantum theory. In 1900, Max Planck derived the average energy ε of a single "energy radiator", e.g., a vibrating atomic unit, as a function of absolute temperature: formula_0 where h is the Planck constant, ν is the frequency, k is the Boltzmann constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900. The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900. In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one hν/2, as an additional term dependent on the frequency ν, which was greater than zero (where h is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be: formula_1 Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern. In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and comparing it with the experimental data. Although they initially believed they had succeeded, they retracted their support for the idea shortly after publication because they found that Planck's second theory may not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". Zero-point energy was also invoked by Peter Debye, who noted that the zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation.
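As an illustrative aside (a standard limiting-case calculation, not part of Planck's original papers), the behaviour of this average oscillator energy in the two temperature extremes makes the role of the zero-point term explicit:

$$
\varepsilon \;=\; \frac{h\nu}{2} \;+\; \frac{h\nu}{e^{h\nu/kT}-1}
\;\longrightarrow\;
\begin{cases}
\dfrac{h\nu}{2}, & T \to 0,\\[4pt]
kT + \dfrac{(h\nu)^2}{12\,kT} + \cdots, & kT \gg h\nu .
\end{cases}
$$

At absolute zero only the zero-point term hν/2 survives, while at high temperature the expression reduces to the classical equipartition value kT, the hν/2 cancelling against the expansion of the Planck term.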
With the development of general relativity, Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating in 1920: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There is a weighty argument to be adduced in favour of the aether hypothesis. To deny the aether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view ... according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it. Kurt Bennewitz and Francis Simon (1923), who worked at Walther Nernst's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen, argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of 10BO and 11BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then, just a year later in 1925, with the development of matrix mechanics in Werner Heisenberg's article "Quantum theoretical re-interpretation of kinematic and mechanical relations", the zero-point energy was derived from quantum mechanics. In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law held down to the zero values of r. For the forces between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would be adequate to separate them; the matter in the universe would tend to shrink into nothing or to diminish indefinitely in size." The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation.
This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability. Quantum field theory and beyond. In 1926 Pascual Jordan published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory". Building on the work of Heisenberg and others, Paul Dirac's theory of emission and absorption (1927) was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. In the words of Dirac: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The light-quantum has the peculiarity that it apparently ceases to exist when it is in one of its stationary states, namely, the zero state, in which its momentum and therefore also its energy, are zero. When a light-quantum is absorbed it can be considered to jump into this zero state, and when one is emitted it can be considered to jump from the zero state to one in which it is physically in evidence, so that it appears to have been created. Since there is no limit to the number of light-quanta that may be created in this way, we must suppose that there are an infinite number of light quanta in the zero state ... Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;From quantum theory there follows the existence of so called zero-point oscillations; for example each oscillator in its lowest state is not completely at rest but always is moving about its equilibrium position. Therefore electromagnetic oscillations also can never cease completely. 
Thus the quantum nature of the electromagnetic field has as its consequence zero point oscillations of the field strength in the lowest energy state, in which there are no light quanta in space ... The zero point oscillations act on an electron in the same way as ordinary electrical oscillations do. They can change the eigenstate of the electron, but only in a transition to a state with the lowest energy, since empty space can only take away energy, and not give it up. In this way spontaneous radiation arises as a consequence of the existence of these unique field strengths corresponding to zero point oscillations. Thus spontaneous radiation is induced radiation of light quanta produced by zero point oscillations of empty space. This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, for which Dirac coined the name quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources. Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and measurement of the magnetic moment of the electron. Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalization was originally developed by Hans Kramers and also Victor Weisskopf (1936), and first successfully applied to calculate a finite value for the Lamb shift by Hans Bethe (1947). As with spontaneous emission, these effects can in part be understood in terms of interactions with the zero-point field. But in light of renormalisation being able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing zero-point energy any physical meaning, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli's 1945 Nobel lecture he made clear his opposition to the idea of zero-point energy, stating "It is clear that this zero-point energy has no physical reality". In 1948 Hendrik Casimir showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions. These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by Van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain Van der Waals forces, which had been developed by Fritz London in 1930, did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder, Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. Soon afterwards, following a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations.
He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) in the case of plane parallel dielectric plates. The generic name for both van der Waals and Casimir forces is dispersion forces, because both of them are caused by dispersions of the operator of the dipole moment. The role of relativistic forces becomes dominant at separations of the order of a hundred nanometers. In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT) which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. FDT has been shown to be true experimentally under certain quantum, non-classical, conditions. In 1963 the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions, such as that an atom's spontaneous emission could be driven by a field of effectively constant frequency (Rabi frequency). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible; after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") or amplified. Amplification was first predicted by Purcell in 1946 (the Purcell effect) and has been experimentally verified. This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom. Uncertainty principle. Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision.
Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well. Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator, formula_2 where V0 is the minimum of the classical potential well. The uncertainty principle tells us that formula_3 making the expectation values of the kinetic and potential terms above satisfy formula_4 The expectation value of the energy must therefore be at least formula_5 where ω = √(k/m) is the angular frequency at which the system oscillates. A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E0 = V0 + ħω/2, requires solving for the ground state of the system. Atomic physics. The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency, denoted with ω and defined by ω = 2πν. This leads to a convention of writing the Planck constant h with a bar through its top (ħ) to denote the quantity h/2π. In these terms, an example of zero-point energy is the above E = ħω/2 associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state. If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system. According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature. The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by: formula_6 where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well. Quantum field theory. In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (e.g. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is "zero-point field" (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
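To give a feel for the magnitudes involved, the short Python sketch below evaluates the two zero-point energies quoted above: the harmonic-oscillator ground-state energy hν/2 and the ground-state (n = 1) energy of a particle confined to a one-dimensional well. The oscillator frequency and the well width used here are illustrative assumptions, not values taken from the article.

```python
# Illustrative numerical sketch (assumed example values, not from the article's sources).
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electronvolt

# Harmonic-oscillator zero-point energy E0 = h*nu/2 for an assumed
# infrared vibrational frequency of 1e14 Hz.
nu = 1.0e14
E_osc = 0.5 * h * nu

# Ground state (n = 1) of an electron in a one-dimensional well of
# assumed width L = 1 nm, using E_n = n^2 * h^2 / (8 * m * L^2).
L = 1.0e-9
n = 1
E_box = n**2 * h**2 / (8.0 * m_e * L**2)

print(f"oscillator zero-point energy : {E_osc / eV:.3f} eV")  # about 0.21 eV
print(f"1 nm box ground-state energy : {E_box / eV:.3f} eV")  # about 0.38 eV
```

Both numbers are a sizeable fraction of an electronvolt, which is one way of seeing why zero-point motion matters for the stability of atoms and condensed matter discussed above.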
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field. Each point in space makes a contribution of E = ħω/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy. Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is, like a sea of energy. Other scientists specializing in general relativity require the energy to be small enough for curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy. In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators is the contribution of vacuum fluctuations, or of the zero-point energy, to the particle masses. Quantum electrodynamic vacuum. The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories. Redefining the zero of energy. In the quantum theory of the electromagnetic field, classical wave amplitudes α and α* are replaced by operators a and a† that satisfy: formula_7 The classical quantity |α|2 appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator a†a. The fact that: formula_8 implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for a†a and a. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" a† and a associated with these classical modes. The zero-point energy of the field arises formally from the non-commutativity of a and a†.
This is true for any harmonic oscillator: the zero-point energy appears when we write the Hamiltonian: formula_9 It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy, and a field Hamiltonian, for example, can be replaced by: formula_10 without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted :HF:, i.e.: formula_11 In other words, within the normal ordering symbol we can commute a and a†. Since zero-point energy is intimately connected to the non-commutativity of a and a†, the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with a and a† and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion. However, things are not quite that simple. The zero-point energy cannot be eliminated simply by dropping it from the Hamiltonian: when we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself, i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian. Electromagnetic field in free space. From Maxwell's equations, the electromagnetic energy of a "free" field, i.e. one with no sources, is described by: formula_12 We introduce the "mode function" A0(r) that satisfies the Helmholtz equation: formula_13 where k = ω/c, and assume it is normalized such that: formula_14 We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position, such that |A0(r)|2 should be independent of r for each mode of the field. The mode function satisfying these conditions is: formula_15 where k · ek = 0 in order to have the transversality condition ∇ · A(r,t) = 0 satisfied for the Coulomb gauge in which we are working. To achieve the desired normalization we pretend space is divided into cubes of volume V = L3 and impose on the field the periodic boundary condition: formula_16 or equivalently formula_17 where n can assume any integer value.
This allows us to consider the field in any one of the imaginary cubes and to define the mode function: formula_18 which satisfies the Helmholtz equation, transversality, and the "box normalization": formula_19 where ek is chosen to be a unit vector which specifies the polarization of the field mode. The condition k · ek = 0 means that there are two independent choices of ek, which we call ek1 and ek2, where ek1 · ek2 = 0 and ek1 · ek1 = ek2 · ek2 = 1. Thus we define the mode functions: formula_20 in terms of which the vector potential becomes: formula_21 or: formula_22 where ωk = kc and akλ, a†kλ are photon annihilation and creation operators for the mode with wave vector k and polarization λ. This gives the vector potential for a plane wave mode of the field. The condition on (kx, ky, kz), each component being an integer multiple of 2π/L, shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write: formula_23 for the total vector potential in free space. Using the fact that: formula_24 we find the field Hamiltonian is: formula_25 This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations: formula_26 Clearly the least eigenvalue for HF is: formula_27 This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor formula_28 shows. The summation becomes approximately the integral: formula_29 for high values of ν. It diverges proportionally to ν4 for large ν. There are two separate questions to consider. First, is the divergence a real one such that the zero-point energy really is infinite? If we consider the volume V to be contained by perfectly conducting walls, very high frequencies can only be contained by taking more and more perfect conduction. No actual method of containing the high frequencies is possible. Such modes will not be stationary in our box and thus not countable in the stationary energy content. So from this physical point of view the above sum should only extend to those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe" questions of general relativity must be included. Suppose even the boxes could be reproduced, fit together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However, the very high frequency quanta will still not be contained. As per John Wheeler's "geons", these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency, since the very high energy quanta will act as a mass source and start curving the geometry. This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? Ignoring the whole zero-point energy is often encouraged for all practical calculations. The reason for this is that energies are not typically defined by an arbitrary data point, but rather by changes in data points, so adding or subtracting a constant (even if infinite) should be allowed. However, this is not the whole story: in reality energy is not so arbitrarily defined; in general relativity the seat of the curvature of spacetime is the energy content and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant with density of field energy.
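A rough numerical sketch can make the size of this divergence concrete. Assuming the standard zero-point spectral energy density ħω3/2π2c3 that the mode sum above yields, the snippet below integrates it up to an assumed Planck-frequency cutoff and compares the result with the approximate observed dark-energy density (an external figure of order 10−9 J/m3); both the cutoff choice and the comparison value are illustrative assumptions, not results from the article.

```python
import math

# Illustrative estimate: integrate rho(omega) = hbar*omega^3 / (2*pi^2*c^3)
# from 0 up to an assumed Planck-frequency cutoff, which gives
# rho_total = hbar*omega_c^4 / (8*pi^2*c^3).
hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

omega_c = math.sqrt(c**5 / (hbar * G))                     # Planck angular frequency, ~1.9e43 rad/s
rho_zpe = hbar * omega_c**4 / (8 * math.pi**2 * c**3)      # J/m^3

rho_observed = 6e-10   # J/m^3, approximate observed vacuum (dark) energy density (assumed)

print(f"cutoff frequency         : {omega_c:.2e} rad/s")
print(f"zero-point energy density: {rho_zpe:.2e} J/m^3")            # ~6e111 J/m^3
print(f"ratio to observed value  : {rho_zpe / rho_observed:.1e}")   # ~1e121
```

The mismatch of roughly 120 orders of magnitude between this cutoff estimate and the observed value is the cosmological constant problem discussed later in the article.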
Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences, e.g. the Casimir effect, contributions to the Lamb shift, and the anomalous magnetic moment of the electron; it is clear it is not just a mathematical constant or artifact that can be cancelled out. Necessity of the vacuum field in QED. The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which nkλ = 0 for all modes (k, λ). The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not of the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero. In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by Σkλ ħωk/2 is infinite. We can make the replacement: formula_30 the zero-point energy density is: formula_31 or in other words the spectral energy density of the vacuum field: formula_32 The zero-point energy density in the frequency range from ω1 to ω2 is therefore: formula_33 This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg/cm3. We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is: formula_34 This has the same form as the corresponding classical Hamiltonian and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance the Heisenberg equations for the coordinate x and the canonical momentum p = mẋ + eA/c of the oscillator are: formula_35 or: formula_36 since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative formula_37 For nonrelativistic motion we may neglect the magnetic force and replace the expression for mẍ by: formula_38 Above we have made the electric dipole approximation in which the spatial dependence of the field is neglected. The Heisenberg equation for akλ is found similarly from the Hamiltonian to be: formula_39 in the electric dipole approximation. In deriving these equations for x, p, and akλ we have used the fact that equal-time particle and field operators commute.
This follows from the assumption that particle and field operators commute at some time (say, t = 0) when the matter-field interaction is presumed to begin, together with the fact that a Heisenberg-picture operator A(t) evolves in time as A(t) = U†(t)A(0)U(t), where U(t) is the time evolution operator satisfying formula_40 Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is: formula_41 and therefore the equation for ȧkλ may be written: formula_42 where formula_43 and formula_44 It can be shown that in the radiation reaction field, if the mass m is regarded as the "observed" mass then we can take formula_45 The total field acting on the dipole has two parts, E0(t) and ERR(t). E0(t) is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation formula_46 satisfied by the field in the (source free) vacuum. For this reason E0(t) is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at t = 0. ERR(t) is the source field, the field generated by the dipole and acting on the dipole. Using the above equation for ERR(t) we obtain an equation for the Heisenberg-picture operator formula_47 that is formally the same as the classical equation for a linear dipole oscillator: formula_48 where τ = 2e2/3mc3. In this instance we have considered a dipole in the vacuum, without any "external" field acting on it. The role of the external field in the above equation is played by the vacuum electric field acting on the dipole. Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory, however, there is always an "external" field, namely the source-free or vacuum field E0(t). According to our earlier equation for akλ(t) the free field is the only field in existence at t = 0, the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole-field system at t = 0 is therefore of the form formula_49 where |vac⟩ is the vacuum state of the field and |ψD⟩ is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero: formula_50 since akλ(0)|vac⟩ = 0. However, the energy density associated with the free field is infinite: formula_51 The important point of this is that the zero-point field energy in HF does not affect the Heisenberg equation for akλ, since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with akλ. We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution for the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density.
This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the term Σkλ ħωk/2 in the field Hamiltonian. The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitarity of time evolution in quantum theory: formula_52 We can calculate [z(t), pz(t)] from the formal solution of the operator equation of motion formula_53 Using the fact that formula_54 and that equal-time particle and field operators commute, we obtain: formula_55 For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., τω0 ≪ 1. Then the integrand above is sharply peaked at ω = ω0 and: formula_56 The necessity of the vacuum field can also be appreciated by making the small damping approximation in formula_57 and formula_58 Without the free field E0(t) in this equation, the operator x(t) would be exponentially damped, and commutators like [z(t), pz(t)] would approach zero for times much longer than the damping time. With the vacuum field included, however, the commutator is iħ at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator. What we have here is an example of a "fluctuation-dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of the radiation reaction field, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails. The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of x, the spectral energy density of the vacuum field must be proportional to the third power of ω in order for [z(t), pz(t)] = iħ to hold. In the case of a dissipative force proportional to ẋ, by contrast, the fluctuation force must be proportional to formula_59 in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem. The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field. Quantum chromodynamic vacuum. The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a "non-perturbative" vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD).
Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics), as it deals with nonlinear equations to characterize such interactions. Higgs field. The Standard Model hypothesises a field called the Higgs field (symbol: ϕ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation value spontaneously breaks electroweak gauge symmetry, which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of ϕ0 in the ground state (the vacuum expectation value or VEV) is then ⟨ϕ0⟩ = v/√2. The measured value of this parameter is approximately 246 GeV. It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number. The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory). Experimental observations. Zero-point energy has many observed physical consequences. It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion without later consequence. Indeed, such treatment could create a problem in a deeper, as yet undiscovered, theory. For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics. Casimir effect. A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates.
The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move. Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997 with Lamoreaux conclusively showing that the Casimir force was real. Results have been repeatedly replicated since then. In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could be repulsive as well as attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction. An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, a hypothetical phenomenon in which light signals travel slightly faster than c between two closely spaced conducting plates. Lamb shift. The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels 2S and 2P (in term symbol notation) of the hydrogen atom which was not predicted by the Dirac equation, according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; this effect is called the Lamb shift. The shift, roughly one ten-millionth of the difference between the energies of the 1s and 2s levels, amounts to about 1,058 MHz in frequency units. A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms. Fine-structure constant. Taking ħ (the Planck constant divided by 2π), c (the speed of light), and e2 = qe2/4πε0 (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where qe is the absolute value of the electronic charge and formula_60 is the vacuum permittivity), we can form a dimensionless quantity called the fine-structure constant: formula_61 The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all, owing to the zero-point energy fluctuations of the electron-positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it. The Heisenberg inequality, where ħ = h/2π and Δx, Δp are the standard deviations of position and momentum, states that: formula_62 It means that a short distance implies large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances.
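For orientation, the low-energy value of the fine-structure constant can be checked directly from the constants in the expression above; the short Python sketch below writes the same quantity out explicitly in SI units (the numerical constants are standard CODATA-style values inserted here for illustration).

```python
import math

# Fine-structure constant alpha = e^2 / (4*pi*eps0*hbar*c) in SI units,
# i.e. the dimensionless combination described in the text above.
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c    = 2.99792458e8        # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.9f}")    # ~0.007297353
print(f"1/alpha = {1/alpha:.3f}")  # ~137.036
```

This is the familiar low-energy value of about 1/137; as the text goes on to explain, the effective value measured at higher energies is larger because of vacuum-polarization screening.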
QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z0 boson rest energy, mzc2 ≈ 90 GeV: formula_63 rather than the low-energy value α ≈ 1/137. The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining α. All in all, α depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of α has been observed for several years now in precision experiments in high-energy physics. Vacuum birefringence. In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. In general, it will not be possible to separate processes in the vacuum from the processes involving matter, since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to highly complex nonlinear interactions: gravity will have an effect on the light at the same time as the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936 and independently the same year by Victor Weisskopf who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit. At this point the vacuum has all the properties of a birefringent medium; thus, in principle, a rotation of the polarization frame (the Faraday effect) can be observed in empty space. Einstein's theories of both special and general relativity state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance. Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted that the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. The first concrete evidence for vacuum birefringence was published in 2017 when a team of astronomers looked at the light coming from the star RX J1856.5-3754, the closest discovered neutron star to Earth. Roberto Mignani at the National Institute for Astrophysics in Milan, who led the team of astronomers, has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems.
The consequences of this discovery probably will also have to be realised on a longer timescale." The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively, such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE). Speculated involvement in other phenomena. Dark energy. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in physics: Why doesn't the zero-point energy of the vacuum cause a large cosmological constant? What cancels it out? In the late 1990s it was discovered that very distant supernovae were dimmer than expected, suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate empty space exerted some form of negative pressure or energy. There is no natural candidate for what might cause what has been called dark energy; the current best guess is that it is the zero-point energy of the vacuum, though this guess is known to be off by 120 orders of magnitude. The European Space Agency's Euclid telescope, launched on 1 July 2023, will map galaxies up to 10 billion light years away. By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary throughout time, it would indicate it is due to quintessence, where observed acceleration is due to the energy of a scalar field, rather than the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses again due to zero-point energy. Cosmic inflation. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in physics: Why does the observable universe have more matter than antimatter? Cosmic inflation is a phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed that quantum vacuum fluctuations caused by zero-point energy, arising in the microscopic inflationary period, later became magnified to cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation).
Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed. The mechanism for inflation is unclear; it is similar in effect to dark energy, but is a far more energetic and short-lived process. As with dark energy the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons produced in the very early universe, but this is far from certain. Cosmology. Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation. Alternative theories. There has been a long debate over the question of whether zero-point fluctuations of quantized vacuum fields are "real", i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger, in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". From such an approach it is possible to derive the Casimir effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978), in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". Jaffe (2005) has highlighted a similar approach in deriving the Casimir effect, stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED." Milonni has shown the necessity of the vacuum field for the formal consistency of QED. Modern physics does not know any better way to construct gauge-invariant, renormalizable theories than with zero-point energy, and it would seem to be a necessity for any attempt at a unified theory. Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are "real"". Chaotic and emergent phenomena. The mathematical models used in classical electromagnetism, quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, in the case of the Casimir effect, the Lamb shift, and so on, these phenomena can be explained by alternative mechanisms other than action of the vacuum, by arbitrary changes to the normal ordering of field operators (see the alternative theories section). This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. In higher symmetry groups and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure.
There are many observed nonlinear physical electromagnetic phenomena such as Aharonov–Bohm (AB) and Altshuler–Aronov–Spivak (AAS) effects, Berry, Aharonov–Anandan, Pancharatnam and Chiao–Wu phase rotation effects, Josephson effect, Quantum Hall effect, the De Haas–Van Alphen effect, the Sagnac effect and many other physically observable phenomena which would indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact and therefore an all encompassing theory would not confine electromagnetism as a local force as is currently done, but as a SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifest as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence. What are called Maxwell's equations today, are in fact a simplified version of the original equations reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has become the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism, for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity. A good example of nonlinear electromagnetics is in high energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular Lorentz force law, which elaborates Maxwell's equations is violated by these force free vortices. These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem, conservation laws such as the laws of thermodynamics need not always apply to dissipative systems, which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. 
The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" or "order out of chaos". It has been argued by some that all emergent order in the universe, from galaxies, solar systems, planets, weather, complex chemistry and evolutionary biology to even consciousness, technology and civilizations, consists of examples of thermodynamic dissipative systems, nature having naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. For example, it has been estimated that the human body is 10,000 times more effective at dissipating energy per unit of mass than the sun. One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems, considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. A good example of a spontaneous phase transition that is attributed to zero-point fluctuations can be found in superconductors. Superconductivity is one of the best-known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations. However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. Bostick, for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular, it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors. Others have also pointed out this connection; Fröhlich has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter (formula_64 = electric charge density / mass density), without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors. This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but "without any associated rise in temperature". This is an example of zero-point energy having multiple stable states (see Quantum phase transition, Quantum critical point, Topological degeneracy, Topological order), one where the overall system structure is independent of a reductionist or deterministic view; "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum. 
The idea that the vacuum energy can have multiple stable energy states is a leading hypothesis for the cause of cosmic inflation. In fact, it has been argued that these early vacuum fluctuations led to the expansion of the universe and in turn have guaranteed the non-equilibrium conditions necessary to drive order from chaos, as without such expansion the universe would have reached thermal equilibrium and no complexity could have existed. With the continued accelerated expansion of the universe, the cosmos generates an energy gradient that increases the "free energy" (i.e. the available, usable or potential energy for useful work) which the universe is able to use to create ever more complex forms of order. The only reason Earth's environment does not decay into an equilibrium state is that it receives a daily dose of sunshine and that, in turn, is due to the sun "polluting" interstellar space with entropy. The sun's fusion power is only possible due to the gravitational disequilibrium of matter that arose from cosmic expansion. In this essence, the vacuum energy can be viewed as the key cause of the structure throughout the universe. That humanity might alter the morphology of the vacuum energy to create an energy gradient for useful work is the subject of much controversy. Purported applications. Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines. Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or discovery of the zero point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research, particularly in the US where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense as well as in China, Germany, Russia and Brazil. Casimir batteries and engines. A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates. In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in "Physical Review" his thought experiment (Gedankenexperiment) for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved." 
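The "tiny force" characterisation above can be quantified. The sketch below evaluates the ideal Casimir pressure between two parallel, perfectly conducting plates, P = π²ħc/(240 d⁴), at a few plate separations; the separations chosen are arbitrary illustrative values.

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (10e-9, 100e-9, 1e-6):   # 10 nm, 100 nm, 1 micrometre
    print(f"d = {d * 1e9:7.1f} nm  ->  P = {casimir_pressure(d):.3e} Pa")

At a 10 nm gap the pressure is of the order of one atmosphere, but it falls off as the fourth power of the separation, which is why it is negligible at everyday distances.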
Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force; he argues that there is sufficient evidence (e.g. the analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore, even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system. In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force. A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However, it has not been conclusively shown to be from zero-point energy and the theory requires further investigation. Single heat baths. In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for the observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy was impossible, so FDT could not be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula; thus the noise must be zero at zero temperature and FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem (i.e., a mathematical artifact) producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties. There have been a growing number of papers showing that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting the negative entropy of quantum fluctuations. 
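The disagreement over the zero-point term can be stated quantitatively. A minimal numerical sketch, assuming the standard Callen–Welton (quantum FDT) form of the one-sided voltage-noise spectral density of a resistor R, S_V(f) = 4R[hf/2 + hf/(e^{hf/kT} − 1)], compares it with the purely thermal Planck/Nyquist form, which lacks the hf/2 zero-point term. The resistance, frequency and temperatures used below are arbitrary illustrative values.

import math

h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

def noise_psd(R, f, T):
    """Return (zero-point part, thermal part) of the one-sided voltage noise PSD, V^2/Hz."""
    x = h * f / (k * T)
    thermal = 4 * R * h * f / math.expm1(x) if x < 700 else 0.0   # guard against float overflow
    zero_point = 4 * R * h * f / 2
    return zero_point, thermal

R, f = 50.0, 1e12   # a 50 ohm resistor observed at 1 THz (assumed values)
for T in (300.0, 4.2, 0.05):
    zp, th = noise_psd(R, f, T)
    print(f"T = {T:6.2f} K : thermal only {th:.3e}, with zero-point term {zp + th:.3e} V^2/Hz")

As the temperature approaches zero the thermal term vanishes while the zero-point term does not, which is exactly the point of contention between the two positions described above.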
Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown. Space travel and gravitational shielding. The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutron stars, for example, it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak. In 1963 Robert Forward, a physicist and aerospace engineer at Hughes Research Laboratories, published a paper showing how, within the framework of general relativity, "anti-gravitational" effects might be achieved. Since all atoms have spin, gravitational permeability may be able to differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron making it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. In 1966 DeWitt was the first to identify the significance of gravitational effects in superconductors. DeWitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization. In 1983, DeWitt's work was substantially expanded by Ross. From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace, was issued three patents. Wallace used DeWitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field: change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices. Such an effect, if any, would be small. Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." A further reference to Wallace's patents occurs in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. 
Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves." In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them". In 1988 Kip Thorne et al. published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy. In 1993 Scharnhorst and Barton showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy, dark energy or zero-point radiation of the vacuum. Building on the work of Thorne, in 1994 Miguel Alcubierre proposed a method for changing the geometry of space by creating a wave that would cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand (see Alcubierre drive). The ship would then ride this wave inside a region of flat space, known as a "warp bubble" and would not move within this bubble but instead be carried along as the region itself moves due to the actions of the drive. In 1992 Evgeny Podkletnov published a heavily debated journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles about gravitational effects in superconductors. One finding they derived is the source of gravitomagnetic flux in a type II superconductor material is due to spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some but defended by others. In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. Li is reported to have left the University of Alabama in 1999 to found the company "AC Gravity LLC". AC Gravity was awarded a U.S. DOD grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were ever made public. In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly. Phantom Works was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by Moscow, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... 
There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable". Froning and Roach (2002) put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum field perturbations are simulated by fluid field perturbations, and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is optimized when the vehicle is a saucer-shaped craft with toroidal electromagnetic fields. The EM fields distort the vacuum field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space. In 2009 Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves. In the spheron model of the nucleus, proposed by the two-time Nobel laureate Linus Pauling, dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state, but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, and therefore dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species with dynamical instabilities, related to the zero-point energy of the electromagnetic field and nuclear forces, and possessing dineutrons, will emit gravitational waves. In experimental physics this approach is still unexplored. In 2014 NASA's Eagleworks Laboratories announced that they had successfully validated the use of a Quantum Vacuum Plasma Thruster which makes use of the Casimir effect for propulsion. In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. The paper suggests that the zero-point field acts as a pilot wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review doesn't guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results, and interpretation, and that they could not find any obvious errors in the methodology and that they found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental errors, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed in order to rule these potential errors out. Zero-point energy in fiction. 
The concept of zero-point energy as an energy source has been a recurring element in science fiction and related media. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Articles in the press. &lt;templatestyles src="Refbegin/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. Press articles. &lt;templatestyles src="Refbegin/styles.css" /&gt; Journal articles. &lt;templatestyles src="Refbegin/styles.css" /&gt; Books. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\varepsilon = \\frac{h\\nu}{ e^{h\\nu/(kT)}-1} \\,," }, { "math_id": 1, "text": "\\varepsilon =\\frac{h\\nu} 2 + \\frac{h\\nu}{e^{h\\nu/(kT)}-1} ~." }, { "math_id": 2, "text": "\\hat{H} = V_0 + \\tfrac{1}{2} k \\left(\\hat{x} - x_0\\right)^2 + \\frac{1}{2m} \\hat{p}^2 \\,," }, { "math_id": 3, "text": "\\sqrt{\\left\\langle \\left(\\hat{x} - x_0\\right)^2 \\right\\rangle} \\sqrt{\\left\\langle \\hat{p}^2 \\right\\rangle} \\geq \\frac{\\hbar}{2} \\,," }, { "math_id": 4, "text": "\\left\\langle \\tfrac{1}{2} k \\left(\\hat{x} - x_0\\right)^2 \\right\\rangle \\left\\langle \\frac{1}{2m} \\hat{p}^2 \\right\\rangle \\geq \\left(\\frac{\\hbar}{4}\\right)^2 \\frac{k}{m} \\,." }, { "math_id": 5, "text": "\\left\\langle \\hat{H} \\right\\rangle \\geq V_0 + \\frac{\\hbar}{2} \\sqrt{\\frac{k}{m}} = V_0 + \\frac{\\hbar \\omega}{2}" }, { "math_id": 6, "text": "\\frac{h^2 n^2}{8 m L^2}" }, { "math_id": 7, "text": "\\left[a,a^\\dagger\\right] = 1" }, { "math_id": 8, "text": "\\left[a,a^\\dagger a\\right] \\ne 1" }, { "math_id": 9, "text": "\\begin{align}\nH_{cl} &= \\frac{p^2}{2m} + \\tfrac{1}{2} m \\omega^2 {q}^2 \\\\\n&= \\tfrac{1}{2} \\hbar \\omega \\left(a a^\\dagger + a^\\dagger a\\right) \\\\\n&=\\hbar \\omega \\left(a^\\dagger a +\\tfrac{1}{2}\\right)\n\\end{align}" }, { "math_id": 10, "text": "\\begin{align}\nH_F - \\left\\langle 0|H_F|0\\right\\rangle &=\\tfrac{1}{2} \\hbar \\omega \\left(a a^\\dagger + a^\\dagger a\\right)-\\tfrac{1}{2}\\hbar \\omega \\\\\n&= \\hbar \\omega \\left(a^\\dagger a + \\tfrac{1}{2} \\right)-\\tfrac{1}{2}\\hbar \\omega \\\\\n&= \\hbar \\omega a^\\dagger a\n\\end{align}" }, { "math_id": 11, "text": ":H_F : \\equiv \\hbar \\omega \\left(a a^\\dagger + a^\\dagger a\\right) : \\equiv \\hbar \\omega a^\\dagger a" }, { "math_id": 12, "text": "\\begin{align}\nH_F &= \\frac{1}{8\\pi}\\int d^3r \\left(\\mathbf{E}^2 +\\mathbf{B}^2\\right) \\\\\n&=\\frac{k^2}{2\\pi}|\\alpha (t)|^2\n\\end{align}" }, { "math_id": 13, "text": " \\left( \\nabla^2 + k^2 \\right) \\mathbf{A}_0(\\mathbf{r}) = 0 " }, { "math_id": 14, "text": "\\int d^3r \\left|\\mathbf{A}_0(\\mathbf{r})\\right|^2 = 1" }, { "math_id": 15, "text": " \\mathbf{A}_0(\\mathbf{r}) = e_{\\mathbf{k}}e^{i\\mathbf{k}\\cdot\\mathbf{r}} " }, { "math_id": 16, "text": "\\mathbf{A}(x+L,y+L,z+L,t)=\\mathbf{A}(x,y,z,t)" }, { "math_id": 17, "text": " \\left(k_x,k_y,k_z\\right)=\\frac{2\\pi}{L}\\left(n_x,n_y,n_z\\right)" }, { "math_id": 18, "text": "\\mathbf{A}_\\mathbf{k}(\\mathbf{r})= \\frac{1}\\sqrt{V} e_{\\mathbf{k}}e^{i\\mathbf{k}\\cdot\\mathbf{r}}" }, { "math_id": 19, "text": "\\int_V d^3r \\left|\\mathbf{A}_\\mathbf{k}(\\mathbf{r})\\right|^2 = 1" }, { "math_id": 20, "text": "\\mathbf{A}_{\\mathbf{k}\\lambda}(\\mathbf{r})=\\frac{1}\\sqrt{V}e_{\\mathbf{k}\\lambda}e^{i\\mathbf{k}\\cdot\\mathbf{r}} \\, , \\quad \\lambda = \\begin{cases} 1\\\\2 \\end{cases}" }, { "math_id": 21, "text": "\\mathbf{A}_{\\mathbf{k}\\lambda}(\\mathbf{r},t)=\\sqrt{\\frac{2\\pi\\hbar c^2}{\\omega_k V}}\\left[a_{\\mathbf{k}\\lambda}(0)e^{i\\mathbf{k}\\cdot\\mathbf{r}}+a_{\\mathbf{k}\\lambda}^\\dagger(0)e^{-i\\mathbf{k}\\cdot\\mathbf{r}}\\right]e_{\\mathbf{k}\\lambda}" }, { "math_id": 22, "text": "\\mathbf{A}_{\\mathbf{k}\\lambda}(\\mathbf{r},t)=\\sqrt{\\frac{2\\pi\\hbar c^2}{\\omega_k V}}\\left[a_{\\mathbf{k}\\lambda}(0)e^{-i(\\omega_k t-\\mathbf{k}\\cdot\\mathbf{r})}+a_{\\mathbf{k}\\lambda}^\\dagger(0)e^{i(\\omega_k t-\\mathbf{k}\\cdot\\mathbf{r})}\\right]\n" }, { "math_id": 23, "text": 
"\\mathbf{A}(\\mathbf{r}t)=\\sum_{\\mathbf{k}\\lambda}\\sqrt{\\frac{2\\pi\\hbar c^2}{\\omega_k V}}\\left[a_{\\mathbf{k}\\lambda}(0)e^{i\\mathbf{k}\\cdot\\mathbf{r}}+a_{\\mathbf{k}\\lambda}^\\dagger(0)e^{-i\\mathbf{k}\\cdot\\mathbf{r}}\\right]e_{\\mathbf{k}\\lambda}" }, { "math_id": 24, "text": "\\int_V d^3r \\mathbf{A}_{\\mathbf{k}\\lambda}(\\mathbf{r})\\cdot \\mathbf{A}_{\\mathbf{k}'\\lambda'}^\\ast(\\mathbf{r})=\\delta_{\\mathbf{k},\\mathbf{k}'}^3\\delta_{\\lambda,\\lambda'}" }, { "math_id": 25, "text": "H_F=\\sum_{\\mathbf{k}\\lambda}\\hbar\\omega_k\\left(a_{\\mathbf{k}\\lambda}^\\dagger a_{\\mathbf{k}\\lambda} + \\tfrac{1}{2} \\right) " }, { "math_id": 26, "text": "\\begin{align}\n\\left[a_{\\mathbf{k}\\lambda}(t),a_{\\mathbf{k}'\\lambda'}^\\dagger(t)\\right]&=\\delta_{\\mathbf{k},\\mathbf{k}'}^3\\delta_{\\lambda,\\lambda'} \\\\[10px]\n\\left[a_{\\mathbf{k}\\lambda}(t),a_{\\mathbf{k}'\\lambda'}(t)\\right]&=\\left[a_{\\mathbf{k}\\lambda}^\\dagger(t),a_{\\mathbf{k}'\\lambda'}^\\dagger(t)\\right]=0\n\\end{align}" }, { "math_id": 27, "text": "\\sum_{\\mathbf{k}\\lambda}\\tfrac{1}{2}\\hbar\\omega_k" }, { "math_id": 28, "text": "\\frac{8\\pi v^2 dv}{c^3}V" }, { "math_id": 29, "text": "\\frac{4\\pi h V}{c^3}\\int v^3 \\, dv" }, { "math_id": 30, "text": "\\sum_{\\mathbf{k}\\lambda}\\longrightarrow\\sum_{\\lambda}\\left (\\frac{1}{2\\pi} \\right )^3 \\int d^3 k = \\frac{V}{8\\pi^3} \\sum_\\lambda \\int d^3 k" }, { "math_id": 31, "text": "\\begin{align}\n\\frac{1}{V}\\sum_{\\mathbf{k}\\lambda}\\tfrac{1}{2}\\hbar\\omega_k &=\\frac{2}{8\\pi^3}\\int d^3 k \\tfrac{1}{2}\\hbar\\omega_k \\\\\n&= \\frac{4\\pi}{4\\pi^3} \\int dk\\,k^2 \\left(\\tfrac{1}{2}\\hbar\\omega_k\\right) \\\\\n&=\\frac{\\hbar}{2\\pi^2 c^3} \\int d\\omega\\,\\omega^3\n\\end{align}" }, { "math_id": 32, "text": "\\rho_0(\\omega)=\\frac{\\hbar\\omega^3}{2\\pi^2c^3}" }, { "math_id": 33, "text": "\\int_{\\omega_1}^{\\omega_2} d\\omega\\rho_0(\\omega) = \\frac{\\hbar}{8\\pi^2c^3}\\left(\\omega_2^4-\\omega_1^4\\right)" }, { "math_id": 34, "text": "H=\\frac{1}{2m}\\left(\\mathbf{p}-\\frac{e}{c}\\mathbf{A}\\right)^2 + \\tfrac{1}{2}m\\omega_0^2\\mathbf{x}^2 + H_F" }, { "math_id": 35, "text": "\\begin{align}\n\\mathbf{\\dot{x}}&=(i\\hbar)^{-1}[\\mathbf{x}.H] = \\frac{1}{m}\\left(\\mathbf{p}-\\frac{e}{c}\\mathbf{A}\\right) \\\\\n\\mathbf{\\dot{p}}&=(i\\hbar)^{-1}[\\mathbf{p}.H]\n\\begin{align}&=\\tfrac{1}{2}\\nabla\\left(\\mathbf{p}-\\frac{e}{c}\\mathbf{A}\\right)^2-m\\omega_0^2\\mathbf{\\dot{x}} \\\\\n&=-\\frac{1}{m} \\left[\\left(\\mathbf{p}-\\frac{e}{c}\\mathbf{A}\\right) \\cdot \\nabla\\right] \\left[-\\frac{e}{c}\\mathbf{A}\\right] - \\frac{1}{m} \\left(\\mathbf{p}-\\frac{e}{c}\\mathbf{A}\\right) \\times \\nabla \\times \\left[-\\frac{e}{c}\\mathbf{A}\\right] -m\\omega_0^2 \\mathbf{\\dot{x}} \\\\\n&= \\frac{e}{c}(\\mathbf{\\dot{x}}\\cdot\\nabla)\\mathbf{A} + \\frac{e}{c}\\mathbf{\\dot{x}} \\times \\mathbf{B} -m\\omega_0^2 \\mathbf{\\dot{x}}\n\\end{align}\\end{align}" }, { "math_id": 36, "text": "\\begin{align}\nm \\mathbf{\\ddot{x}} &= \\mathbf{\\dot{p}} - \\frac{e}{c} \\mathbf{\\dot{A}} \\\\\n&= -\\frac{e}{c} \\left[\\mathbf{\\dot{A}} - \\left(\\mathbf{\\dot{x}} \\cdot \\nabla\\right) \\mathbf{A}\\right] + \\frac{e}{c} \\mathbf{\\dot{x}} \\times \\mathbf{B} - m\\omega_0^2\\mathbf{x} \\\\\n&= e\\mathbf{E} + \\frac{e}{c} \\mathbf{\\dot{x}} \\times \\mathbf{B} - m\\omega_0^2\\mathbf{x}\n\\end{align}" }, { "math_id": 37, "text": "\\mathbf{\\dot{A}}=\\frac{\\partial\\mathbf{A}}{\\partial t} + (\\mathbf{\\dot{x}} \\cdot \\nabla) \\mathbf{A}^3 
\\,." }, { "math_id": 38, "text": "\\begin{align}\n\\mathbf{\\ddot{x}}+\\omega_0^2\\mathbf{x}\n&\\approx \\frac{e}{m}\\mathbf{E} \\\\\n&\\approx \\sum_{\\mathbf{k}\\lambda}\n\\sqrt{\\frac{2\\pi\\hbar\\omega_k}{V}} \\left[a_{\\mathbf{k}\\lambda}(t) + a_{\\mathbf{k}\\lambda}^\\dagger(t)\\right] e_{\\mathbf{k}\\lambda}\n\\end{align}" }, { "math_id": 39, "text": "\\dot{a}_{\\mathbf{k}\\lambda} = i \\omega_k a_{\\mathbf{k}\\lambda} + ie \\sqrt\\frac{2\\pi}{\\hbar \\omega_k V} \\mathbf{\\dot{x}} \\cdot e_{\\mathbf{k}\\lambda}" }, { "math_id": 40, "text": "i\\hbar\\dot{U} = HU \\,,\\quad U^\\dagger(t) = U^{-1}(t) \\,,\\quad U(0) = 1 \\,." }, { "math_id": 41, "text": "a_{\\mathbf{k}\\lambda}(t)=a_{\\mathbf{k}\\lambda}(0)e^{-i\\omega_{k}t}+ie \\sqrt{\\frac{2\\pi}{\\hbar \\omega_k V}} \\int^t_0dt'\\,e_{\\mathbf{k}\\lambda}\\cdot\\mathbf{\\dot{x}}(t')e^{i\\omega_k\\left(t'-t\\right)}" }, { "math_id": 42, "text": "\\mathbf{\\ddot{x}}+\\omega^2_0\\mathbf{x}=\\frac{e}{m}\\mathbf{E}_0(t)+\\frac{e}{m}\\mathbf{E}_{RR}(t)" }, { "math_id": 43, "text": "\\mathbf{E}_0(t)=i\\sum_{\\mathbf{k}\\lambda} \\sqrt{\\frac{2\\pi\\hbar \\omega_k}{V}}\\left[a_{\\mathbf{k}\\lambda}(0)e^{-i\\omega_kt}-a^\\dagger_{\\mathbf{k}\\lambda}(0)e^{i\\omega_kt}\\right]e_{\\mathbf{k}\\lambda}" }, { "math_id": 44, "text": "\\mathbf{E}_{RR}(t)=-\\frac{4\\pi e}{V} \\sum_{\\mathbf{k}\\lambda} \\int^t_0dt'\\left[e_{\\mathbf{k}\\lambda}\\cdot\\mathbf{\\dot{x}}\\left(t'\\right)\\right]\\cos\\omega_k\\left(t'-t\\right)" }, { "math_id": 45, "text": "\\mathbf{E}_{RR}(t)=\\frac{2e}{3c^3}\\mathbf{\\ddot{x}}" }, { "math_id": 46, "text": "\\left[\\nabla^2-\\frac{1}{c^2}\\frac{\\partial^2}{\\partial t^2}\\right]\\mathbf{E}=0" }, { "math_id": 47, "text": "\\mathbf{x}(t)" }, { "math_id": 48, "text": "\n\\mathbf{\\ddot{x}} + \\omega^2_0\\mathbf{x}-\\tau \\mathbf{\\overset{...}{x}}=\\frac{e}{m}\\mathbf{E}_0(t)\n" }, { "math_id": 49, "text": "|\\Psi\\rangle=|\\text{vac}\\rangle|\\psi_D\\rangle \\,," }, { "math_id": 50, "text": "\\langle\\mathbf{E}_0(t)\\rangle=\\langle\\Psi|\\mathbf{E}_0(t)|\\Psi\\rangle=0" }, { "math_id": 51, "text": "\\begin{align}\n\\frac{1}{4\\pi} \\left\\langle \\mathbf{E}^2_0(t) \\right\\rangle &= \\frac{1}{4\\pi} \\sum_{\\mathbf{k}\\lambda} \\sum_{\\mathbf{k'}\\lambda'} \\sqrt{\\frac{2\\pi\\hbar \\omega_k}{V}} \\sqrt{\\frac{2\\pi\\hbar \\omega_{k'}}{V}} \\times \\left\\langle a_{\\mathbf{k}\\lambda}(0)a^\\dagger_{\\mathbf{k'}\\lambda'}(0)\\right\\rangle \\\\\n&= \\frac{1}{4\\pi}\\sum_{\\mathbf{k}\\lambda}\\left (\\frac{2\\pi\\hbar \\omega_k}{V} \\right )\\\\\n&= \\int^\\infin_0dw\\,\\rho_0(\\omega)\n\\end{align}" }, { "math_id": 52, "text": "\\begin{align}\n\\left[z(t),p_z(t)\\right]&=\\left[U^\\dagger(t)z(0)U(t),U^\\dagger(t)p_z(0)U(t)\\right]\\\\\n&=U^\\dagger(t)\\left[z(0),p_z(0)\\right]U(t)\\\\\n&=i\\hbar U^\\dagger(t)U(t)\\\\\n&=i\\hbar\n\\end{align}" }, { "math_id": 53, "text": "\\mathbf{\\ddot{x}} + \\omega^2_0\\mathbf{x}-\\tau \\mathbf{\\overset{...}{x}}=\\frac{e}{m}\\mathbf{E}_0(t)" }, { "math_id": 54, "text": "\\left[a_{\\mathbf{k}\\lambda}(0),a^\\dagger_{\\mathbf{k'}\\lambda'}(0)\\right]=\\delta^3_\\mathbf{kk'},\\delta_{\\lambda\\lambda'}" }, { "math_id": 55, "text": "\\begin{align}\n[z(t),p_z(t)]&=\\left[z(t),m\\dot{z}(t)\\right]+\\left[z(t),\\frac{e}{c}A_z(t)\\right] \\\\\n&=\\left[z(t),m\\dot{z}(t)\\right] \\\\\n&= \\left (\\frac{i\\hbar e^2}{2\\pi^2mc^3} \\right ) \\left (\\frac{8\\pi}{3} \\right ) \\int^\\infin_0\\frac{d\\omega\\,\\omega^4}{\\left(\\omega^2-\\omega^2_0\\right)^2+\\tau^2\\omega^6}\n\\end{align}" }, 
{ "math_id": 56, "text": "\\begin{align}\n\\left[z(t),p_z(t)\\right]&\\approx \\frac{2i\\hbar e^2}{3\\pi mc^3}\\omega^3_0 \\int^\\infin_{-\\infin} \\frac{dx}{x^2 + \\tau^2\\omega^6_0} \\\\\n&= \\left (\\frac{2i\\hbar e^2 \\omega^3_0}{3\\pi mc^3} \\right )\\left (\\frac{\\pi}{\\tau\\omega^3_0} \\right ) \\\\\n&=i\\hbar\n\\end{align}" }, { "math_id": 57, "text": "\\begin{align}\n&\\mathbf{\\ddot{x}} + \\omega^2_0\\mathbf{x}-\\tau \\mathbf{\\overset{...}{x}}=\\frac{e}{m}\\mathbf{E}_0(t) \\\\\n&\\mathbf{\\ddot{x}}\\approx-\\omega^2_0\\mathbf{x}(t) && \\mathbf{\\overset{...}{x}}\\approx-\\omega^2_0\\mathbf{\\dot{x}}\n\\end{align}" }, { "math_id": 58, "text": "\\mathbf{\\ddot{x}}+\\tau\\omega^2_0\\mathbf{\\dot{x}}+\\omega^2_0\\mathbf{x}\\approx\\frac{e}{m}\\mathbf{E}_0(t)" }, { "math_id": 59, "text": "\\omega" }, { "math_id": 60, "text": "\\varepsilon_0" }, { "math_id": 61, "text": "\\alpha = \\frac{e^2}{\\hbar c} = \\frac{q_e^2}{4\\pi\\varepsilon_0\\hbar c} \\approx \\frac{1}{137}" }, { "math_id": 62, "text": "\\Delta_x\\Delta_p\\ge\\frac{1}{2}\\hbar" }, { "math_id": 63, "text": "\\alpha\\approx\\frac{1}{129}" }, { "math_id": 64, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=84400
844290
Hydrogel
Soft water-rich polymer gel A hydrogel is a biphasic material, a mixture of porous, permeable solids and at least 10% by weight or volume of interstitial fluid composed completely or mainly of water. In hydrogels the porous, permeable solid is a water-insoluble, three-dimensional network of natural or synthetic polymers, and the fluid is water or a biological fluid absorbed in large amounts by that network. These properties underpin several applications, especially in the biomedical area. Many hydrogels are synthetic, but some are derived from nature. The term 'hydrogel' was coined in 1894. Chemistry. Classification. The crosslinks which bond the polymers of a hydrogel fall under two general categories: physical hydrogels and chemical hydrogels. Chemical hydrogels have covalent cross-linking bonds, whereas physical hydrogels have non-covalent bonds. Chemical hydrogels can result in strong reversible or irreversible gels due to the covalent bonding. Chemical hydrogels that contain reversible covalent cross-linking bonds, such as hydrogels of thiomers being cross-linked via disulfide bonds, are non-toxic and are used in numerous medicinal products. Physical hydrogels usually have high biocompatibility, are not toxic, and are also easily reversible by simply changing an external stimulus such as pH, ion concentration (alginate) or temperature (gelatine); they are also used for medical applications. Physical crosslinks consist of hydrogen bonds, hydrophobic interactions, and chain entanglements (among others). A hydrogel generated through the use of physical crosslinks is sometimes called a 'reversible' hydrogel. Chemical crosslinks consist of covalent bonds between polymer strands. Hydrogels generated in this manner are sometimes called 'permanent' hydrogels. Hydrogels are prepared using a variety of polymeric materials, which can be divided broadly into two categories according to their origin: natural or synthetic polymers. Natural polymers for hydrogel preparation include hyaluronic acid, chitosan, heparin, alginate, gelatin and fibrin. Common synthetic polymers include polyvinyl alcohol, polyethylene glycol, sodium polyacrylate, acrylate polymers and copolymers thereof. Whereas natural hydrogels are usually non-toxic and often provide other advantages for medical use, such as biocompatibility, biodegradability, antibiotic/antifungal effects and improved regeneration of nearby tissue, their stability and strength are usually much lower than those of synthetic hydrogels. There are also synthetic hydrogels that can be used for medical applications, such as polyethylene glycol (PEG), polyacrylate, and polyvinylpyrrolidone (PVP). Preparation. There are two suggested mechanisms behind physical hydrogel formation, the first one being the gelation of nanofibrous peptide assemblies, usually observed for oligopeptide precursors. The precursors self-assemble into fibers, tapes, tubes, or ribbons that entangle to form non-covalent cross-links. The second mechanism involves non-covalent interactions of cross-linked domains that are separated by water-soluble linkers, and this is usually observed in longer multi-domain structures. The supramolecular interactions must be tuned to produce a self-supporting network that does not precipitate and that is also able to immobilize water, which is vital for gel formation. Most oligopeptide hydrogels have a β-sheet structure, and assemble to form fibers, although α-helical peptides have also been reported. 
The typical mechanism of gelation involves the oligopeptide precursors self-assembling into fibers that become elongated and entangle to form cross-linked gels. One notable method of initiating a polymerization reaction involves the use of light as a stimulus. In this method, photoinitiators, compounds that cleave upon the absorption of photons, are added to the precursor solution, which will become the hydrogel. When the precursor solution is exposed to a concentrated source of light, usually ultraviolet irradiation, the photoinitiators will cleave and form free radicals, which will begin a polymerization reaction that forms crosslinks between polymer strands. This reaction will cease if the light source is removed, allowing the amount of crosslinks formed in the hydrogel to be controlled. The properties of a hydrogel are highly dependent on the type and quantity of its crosslinks, making photopolymerization a popular choice for fine-tuning hydrogels. This technique has seen considerable use in cell and tissue engineering applications due to the ability to inject or mold a precursor solution loaded with cells into a wound site, then solidify it in situ. Physically crosslinked hydrogels can be prepared by different methods depending on the nature of the crosslink involved. Polyvinyl alcohol hydrogels are usually produced by the freeze-thaw technique. In this, the solution is frozen for a few hours, then thawed at room temperature, and the cycle is repeated until a strong and stable hydrogel is formed. Alginate hydrogels are formed by ionic interactions between alginate and doubly charged cations. A salt, usually calcium chloride, is dissolved into an aqueous sodium alginate solution, which causes the calcium ions to create ionic bonds between alginate chains. Gelatin hydrogels are formed by temperature change. An aqueous solution of gelatin forms a hydrogel at temperatures below about 35–37 °C, as van der Waals interactions between collagen fibers become stronger than thermal molecular vibrations. Peptide-based hydrogels. Peptide-based hydrogels possess exceptional biocompatibility and biodegradability qualities, giving rise to their wide range of applications, particularly in biomedicine; as such, their physical properties can be fine-tuned in order to maximise their use. Methods to do this include modulation of the amino acid sequence, pH and chirality, and increasing the number of aromatic residues. The order of amino acids within the sequence is crucial for gelation, as has been shown many times. In one example, the short peptide sequence Fmoc-Phe-Gly readily formed a hydrogel, whereas Fmoc-Gly-Phe failed to do so as a result of the two adjacent aromatic moieties being moved, hindering the aromatic interactions. Altering the pH can also have similar effects; an example involved the use of the naphthalene (Nap) modified dipeptides Nap-Gly-Ala and Nap-Ala-Gly, where a drop in pH induced gelation of the former but led to crystallisation of the latter. A controlled pH decrease method using glucono-δ-lactone (GdL), where the GdL is hydrolysed to gluconic acid in water, is a recent strategy that has been developed as a way to form homogeneous and reproducible hydrogels. The hydrolysis is slow, which allows for a uniform pH change and thus results in reproducible homogeneous gels. In addition to this, the desired pH can be achieved by altering the amount of GdL added. GdL has been used various times for the hydrogelation of Fmoc- and Nap-dipeptides. 
In another direction, Morris et al reported the use of GdL as a 'molecular trigger' to predict and control the order of gelation. Chirality also plays an essential role in gel formation, and even changing the chirality of a single amino acid from its natural L-amino acid to its unnatural D-amino acid can significantly impact the gelation properties, with the natural forms not forming gels. Furthermore, aromatic interactions play a key role in hydrogel formation as a result of π- π stacking driving gelation, shown by many studies. Other. Hydrogels also possess a degree of flexibility very similar to natural tissue due to their significant water content. As responsive "smart materials", hydrogels can encapsulate chemical systems which upon stimulation by external factors such as a change of pH may cause specific compounds such as glucose to be liberated to the environment, in most cases by a gel–sol transition to the liquid state. Chemomechanical polymers are mostly also hydrogels, which upon stimulation change their volume and can serve as actuators or sensors. Mechanical properties. Hydrogels have been investigated for diverse applications. By modifying the polymer concentration of a hydrogel (or conversely, the water concentration), the Young's modulus, shear modulus, and storage modulus can vary from 10 Pa to 3 MPa, a range of about five orders of magnitude. A similar effect can be seen by altering the crosslinking concentration. This much variability of the mechanical stiffness is why hydrogels are so appealing for biomedical applications, where it is vital for implants to match the mechanical properties of the surrounding tissues. Characterizing the mechanical properties of hydrogels can be difficult especially due to the differences in mechanical behavior that hydrogels have in comparison to other traditional engineering materials. In addition to its rubber elasticity and viscoelasticity, hydrogels have an additional time dependent deformation mechanism which is dependent on fluid flow called poroelasticity. These properties are extremely important to consider while performing mechanical experiments. Some common mechanical testing experiments for hydrogels are tension, compression (confined or unconfined), indentation, shear rheometry or dynamic mechanical analysis. Hydrogels have two main regimes of mechanical properties: rubber elasticity and viscoelasticity: Rubber elasticity. In the unswollen state, hydrogels can be modelled as highly crosslinked chemical gels, in which the system can be described as one continuous polymer network. In this case: formula_0 where "G" is the shear modulus, "k" is the Boltzmann constant, "T" is temperature, "Np" is the number of polymer chains per unit volume, "ρ" is the density, "R" is the ideal gas constant, and formula_1 is the (number) average molecular weight between two adjacent cross-linking points. formula_1 can be calculated from the swell ratio, "Q", which is relatively easy to test and measure. For the swollen state, a perfect gel network can be modeled as: formula_2 In a simple uniaxial extension or compression test, the true stress, formula_3, and engineering stress, formula_4, can be calculated as: formula_5 formula_6 where formula_7 is the stretch. Viscoelasticity. For hydrogels, their elasticity comes from the solid polymer matrix while the viscosity originates from the polymer network mobility and the water and other components that make up the aqueous phase. 
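As a small worked illustration of the rubber-elasticity relations given above, the sketch below evaluates the shear modulus G = ρRT/M̄c, the swollen-gel modulus G·Q^(−1/3), and the true and engineering stresses at a chosen stretch. The density, molecular weight between crosslinks, swell ratio and stretch used here are arbitrary illustrative values, not measured data.

# Illustrative evaluation of the rubber-elasticity relations for a hydrogel.
R = 8.314        # ideal gas constant, J/(mol*K)
T = 298.0        # temperature, K
rho = 1100.0     # polymer density, kg/m^3 (assumed)
Mc = 5.0         # number-average molar mass between crosslinks, kg/mol (assumed)
Q = 10.0         # swell ratio (assumed)

G = rho * R * T / Mc           # dry-network shear modulus, Pa
G_sw = G * Q ** (-1.0 / 3.0)   # modulus of the swollen gel, Pa

lam = 1.5                                 # stretch, l_current / l_original
sigma_true = G_sw * (lam**2 - 1.0 / lam)  # true stress, Pa
sigma_eng = G_sw * (lam - 1.0 / lam**2)   # engineering stress, Pa

print(f"G         = {G / 1e3:.1f} kPa")
print(f"G_swollen = {G_sw / 1e3:.1f} kPa")
print(f"sigma_true = {sigma_true / 1e3:.1f} kPa, sigma_eng = {sigma_eng / 1e3:.1f} kPa")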
The viscoelastic properties of a hydrogel are highly dependent on the nature of the applied mechanical motion. Thus, the time dependence of these applied forces is extremely important for evaluating the viscoelasticity of the material. Physical models for viscoelasticity attempt to capture the elastic and viscous material properties of a material. In an elastic material, the stress is proportional to the strain, while in a viscous material, the stress is proportional to the strain rate. The Maxwell model is one developed mathematical model for linear viscoelastic response. In this model, viscoelasticity is modeled analogously to an electrical circuit, with a Hookean spring that represents the Young's modulus and a Newtonian dashpot that represents the viscosity. A material that exhibits the properties described by this model is a Maxwell material. Another physical model used is the Kelvin–Voigt model, and a material that follows this model is called a Kelvin–Voigt material. In order to describe the time-dependent creep and stress-relaxation behavior of hydrogels, a variety of physical lumped parameter models can be used. These modeling methods vary greatly and are extremely complex, so the empirical Prony series description is commonly used to describe the viscoelastic behavior in hydrogels. In order to measure the time-dependent viscoelastic behavior of polymers, dynamic mechanical analysis is often performed. Typically, in these measurements, one side of the hydrogel is subjected to a sinusoidal load in shear mode while the applied stress is measured with a stress transducer and the change in sample length is measured with a strain transducer. One notation used to model the sinusoidal response to the periodic stress or strain is: formula_8 in which G' is the real (elastic or storage) modulus and G" is the imaginary (viscous or loss) modulus. Poroelasticity. Poroelasticity is a characteristic of materials related to the migration of solvent through a porous material and the concurrent deformation that occurs. Poroelasticity in hydrated materials such as hydrogels occurs due to friction between the polymer and water as the water moves through the porous matrix upon compression. This causes a decrease in water pressure, which adds additional stress upon compression. Similar to viscoelasticity, this behavior is time-dependent; thus poroelasticity depends on the compression rate: a hydrogel shows softness upon slow compression, but fast compression makes the hydrogel stiffer. This phenomenon arises because the friction between the water and the porous matrix is proportional to the flow of water, which in turn depends on the compression rate. Thus, a common way to measure poroelasticity is to do compression tests at varying compression rates. Pore size is an important factor in influencing poroelasticity. The Kozeny–Carman equation has been used to predict pore size by relating the pressure drop to the difference in stress between two compression rates. Poroelasticity is described by several coupled equations, so there are few mechanical tests that relate directly to the poroelastic behavior of the material; instead, more complicated approaches such as indentation testing and numerical or computational models are utilized. Numerical or computational methods attempt to simulate the three-dimensional permeability of the hydrogel network. Toughness and Hysteresis. The toughness of a hydrogel refers to the ability of the hydrogel to withstand deformation or mechanical stress without fracturing or breaking apart. 
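To make the complex-modulus notation concrete, the following sketch evaluates the storage and loss moduli of a single Maxwell element (a spring of stiffness G0 in series with a dashpot, giving relaxation time τ), for which G'(ω) = G0·(ωτ)²/(1 + (ωτ)²) and G''(ω) = G0·ωτ/(1 + (ωτ)²). The values of G0 and τ below are assumptions chosen only for illustration, not properties of any particular hydrogel.

def maxwell_moduli(G0, tau, omega):
    """Storage and loss moduli (Pa) of a single Maxwell element at angular frequency omega (rad/s)."""
    x = (omega * tau) ** 2
    G_storage = G0 * x / (1.0 + x)
    G_loss = G0 * (omega * tau) / (1.0 + x)
    return G_storage, G_loss

G0, tau = 1.0e4, 0.5   # assumed: 10 kPa spring stiffness, 0.5 s relaxation time
for omega in (0.1, 1.0 / tau, 100.0):
    Gp, Gpp = maxwell_moduli(G0, tau, omega)
    print(f"omega = {omega:7.2f} rad/s : G' = {Gp:8.1f} Pa, G'' = {Gpp:8.1f} Pa, tan(delta) = {Gpp / Gp:.2f}")

At low frequencies the element flows like a liquid (loss modulus dominates), at high frequencies it responds like a solid (storage modulus dominates), and the crossover occurs near ω = 1/τ, which is the qualitative behaviour probed by dynamic mechanical analysis.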
A hydrogel with high toughness can maintain its structural integrity and functionality under higher stress. Several factors contribute to the toughness of a hydrogel including composition, crosslink density, polymer chain structure, and hydration level. The toughness of a hydrogel is highly dependent on what polymer(s) and crosslinker(s) make up its matrix as certain polymers possess higher toughness and certain crosslinking covalent bonds are inherently stronger. Additionally, higher crosslinking density generally leads to increased toughness by restricting polymer chain mobility and enhancing resistance to deformation. The structure of the polymer chains is also a factor in that, longer chain lengths and higher molecular weight leads to a greater number of entanglements and higher toughness. A good balance (equilibrium) in the hydration of a hydrogel leads is important because too low hydration causes poor flexibility and toughness within the hydrogel, but too high of water content can cause excessive swelling, weakening the mechanical properties of the hydrogel. The hysteresis of a hydrogel refers to the phenomenon where there is a delay in the deformation and recovery of a hydrogel when it is subjected to mechanical stress and relieved of that stress. This occurs because the polymer chains within a hydrogel rearrange, and the water molecules are displaced, and energy is stored as it deforms in mechanical extension or compression. When the mechanical stress is removed, the hydrogel begins to recover its original shape, but there may be a delay in the recovery process due to factors like viscoelasticity, internal friction, etc. This leads to a difference between the stress-strain curve during loading and unloading. Hysteresis within a hydrogel is influenced by several factors including composition, crosslink density, polymer chain structure, and temperature. The toughness and hysteresis of a hydrogel are especially important in the context of biomedical applications such as tissue engineering and drug delivery, as the hydrogel may need to withstand mechanical forces within the body, but also maintain mechanical performance and stability over time. Most typical hydrogels, both natural and synthetic, have a positive correlation between toughness and hysteresis, meaning that the higher the toughness, the longer the hydrogel takes to recover its original shape and vice versa. This is largely due to sacrificial bonds being the source of toughness within many of these hydrogels. Sacrificial bonds are non-covalent interactions such as hydrogen bonds, ionic interactions, and hydrophobic interactions, that can break and reform under mechanical stress. The reforming of these bonds takes time, especially when there are more of them, which leads to an increase in hysteresis. However, there is currently research focused on the development of highly entangled hydrogels, which instead rely on the long chain length of the polymers and their entanglement to limit the deformation of the hydrogel, thereby increasing the toughness without increasing hysteresis as there is no need for the reformation of the bonds. Environmental response. The most commonly seen environmental sensitivity in hydrogels is a response to temperature. Many polymers/hydrogels exhibit a temperature dependent phase transition, which can be classified as either an upper critical solution temperature (UCST) or lower critical solution temperature (LCST). 
UCST polymers increase in their water-solubility at higher temperatures, which lead to UCST hydrogels transitioning from a gel (solid) to a solution (liquid) as the temperature is increased (similar to the melting point behavior of pure materials). This phenomenon also causes UCST hydrogels to expand (increase their swell ratio) as temperature increases while they are below their UCST. However, polymers with LCSTs display an inverse (or negative) temperature-dependence, where their water-solubility decreases at higher temperatures. LCST hydrogels transition from a liquid solution to a solid gel as the temperature is increased, and they also shrink (decrease their swell ratio) as the temperature increases while they are above their LCST. Applications can dictate for diverse thermal responses. For example, in the biomedical field, LCST hydrogels are being investigated as drug delivery systems due to being injectable (liquid) at room temp and then solidifying into a rigid gel upon exposure to the higher temperatures of the human body. There are many other stimuli that hydrogels can be responsive to, including: pH, glucose, electrical signals, light, pressure, ions, antigens, and more. Additives. The mechanical properties of hydrogels can be fine-tuned in many ways beginning with attention to their hydrophobic properties. Another method of modifying the strength or elasticity of hydrogels is to graft or surface coat them onto a stronger/stiffer support, or by making superporous hydrogel (SPH) composites, in which a cross-linkable matrix swelling additive is added. Other additives, such as nanoparticles and microparticles, have been shown to significantly modify the stiffness and gelation temperature of certain hydrogels used in biomedical applications. Processing techniques. While a hydrogel's mechanical properties can be tuned and modified through crosslink concentration and additives, these properties can also be enhanced or optimized for various applications through specific processing techniques. These techniques include electro-spinning, 3D/4D printing, self-assembly, and freeze-casting. One unique processing technique is through the formation of multi-layered hydrogels to create a spatially-varying matrix composition and by extension, mechanical properties. This can be done by polymerizing the hydrogel matrixes in a layer by layer fashion via UV polymerization. This technique can be useful in creating hydrogels that mimic articular cartilage, enabling a material with three separate zones of distinct mechanical properties. Another emerging technique to optimize hydrogel mechanical properties is by taking advantage of the Hofmeister series. Due to this phenomenon, through the addition of salt solution, the polymer chains of a hydrogel aggregate and crystallize, which increases the toughness of the hydrogel. This method, called "salting out", has been applied to poly(vinyl alcohol) hydrogels by adding a sodium sulfate salt solution. Some of these processing techniques can be used synergistically with each other to yield optimal mechanical properties. Directional freezing or freeze-casting is another method in which a directional temperature gradient is applied to the hydrogel is another way to form materials with anisotropic mechanical properties. Utilizing both the freeze-casting and salting-out processing techniques on poly(vinyl alcohol) hydrogels to induce hierarchical morphologies and anisotropic mechanical properties. 
Directional freezing of the hydrogels helps to align and coalesce the polymer chains, creating anisotropic arrays of honeycomb, tube-like structures, while salting out the hydrogel yields a nano-fibril network on the surface of these honeycomb tube-like structures. While maintaining a water content of over 70%, these hydrogels' toughness values are well above those of water-free polymers such as polydimethylsiloxane (PDMS), Kevlar, and synthetic rubber. The values also surpass the toughness of natural tendon and spider silk. Applications. Soft contact lenses. The dominant material for contact lenses is acrylate-siloxane hydrogels. They have replaced hard contact lenses. One of their most attractive properties is oxygen permeability, which is required since the cornea lacks vasculature. Biomaterials. Implanted or injected hydrogels have the potential to support tissue regeneration by mechanical tissue support, localized drug or cell delivery, local cell recruitment or immunomodulation, or encapsulation of nanoparticles for local photothermal therapy or brachytherapy. Polymeric drug delivery systems have overcome challenges due to their biodegradability, biocompatibility, and anti-toxicity. Materials such as collagen, chitosan, cellulose, and poly(lactic-co-glycolic acid) have been implemented extensively for drug delivery to organs such as the eye, nose, kidneys, lungs, intestines, skin, and brain. Future work is focused on reducing toxicity, improving biocompatibility, and expanding assembly techniques. Hydrogels have been considered as vehicles for drug delivery. They can also be made to mimic animal mucosal tissues to be used for testing mucoadhesive properties. They have been examined for use as reservoirs in topical drug delivery; particularly ionic drugs, delivered by iontophoresis. References. " This article incorporates text by Jessica Hutchinson available under the license." &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "G=N_{p}kT={\\rho RT \\over \\overline{M}_{c}}" }, { "math_id": 1, "text": "\\overline{M}_{c}" }, { "math_id": 2, "text": "G_{\\textrm{swollen}}=GQ^{-1/3}" }, { "math_id": 3, "text": "\\sigma _{t}" }, { "math_id": 4, "text": "\\sigma _{e}" }, { "math_id": 5, "text": "\\sigma _{t}=G_{\\textrm{swollen}}\\left ( \\lambda ^{2}-\\lambda ^{-1} \\right )" }, { "math_id": 6, "text": "\\sigma _{e}=G_{\\textrm{swollen}}\\left ( \\lambda -\\lambda ^{-2} \\right )" }, { "math_id": 7, "text": "\\lambda =l_{\\textrm{current}}/l_{\\textrm{original}}" }, { "math_id": 8, "text": "G = G' + iG''" } ]
https://en.wikipedia.org/wiki?curid=844290
844292
Search tree
Data structure in tree form sorted for fast lookup In computer science, a search tree is a tree data structure used for locating specific keys from within a set. In order for a tree to function as a search tree, the key for each node must be greater than any keys in subtrees on the left, and less than any keys in subtrees on the right. The advantage of search trees is their efficient search time given the tree is reasonably balanced, which is to say the leaves at either end are of comparable depths. Various search-tree data structures exist, several of which also allow efficient insertion and deletion of elements, which operations then have to maintain tree balance. Search trees are often used to implement an associative array. The search tree algorithm uses the key from the key–value pair to find a location, and then the application stores the entire key–value pair at that particular location. Types of trees. Binary search tree. A Binary Search Tree is a node-based data structure where each node contains a key and two subtrees, the left and right. For all nodes, the left subtree's key must be less than the node's key, and the right subtree's key must be greater than the node's key. These subtrees must all qualify as binary search trees. The worst-case time complexity for searching a binary search tree is the height of the tree, which can be as small as O(log n) for a tree with n elements. B-tree. B-trees are generalizations of binary search trees in that they can have a variable number of subtrees at each node. While child-nodes have a pre-defined range, they will not necessarily be filled with data, meaning B-trees can potentially waste some space. The advantage is that B-trees do not need to be re-balanced as frequently as other self-balancing trees. Due to the variable range of their node length, B-trees are optimized for systems that read large blocks of data, they are also commonly used in databases. The time complexity for searching a B-tree is O(log n). (a,b)-tree. An (a,b)-tree is a search tree where all of its leaves are the same depth. Each node has at least a children and at most b children, while the root has at least 2 children and at most b children. a and b can be decided with the following formula: formula_0 The time complexity for searching an (a,b)-tree is O(log n). Ternary search tree. A ternary search tree is a type of tree that can have 3 nodes: a low child, an equal child, and a high child. Each node stores a single character and the tree itself is ordered the same way a binary search tree is, with the exception of a possible third node. Searching a ternary search tree involves passing in a string to test whether any path contains it. The time complexity for searching a balanced ternary search tree is O(log n). Searching algorithms. Searching for a specific key. Assuming the tree is ordered, we can take a key and attempt to locate it within the tree. The following algorithms are generalized for binary search trees, but the same idea can be applied to trees of other formats. Recursive. search-recursive(key, node) if node is "NULL" return "EMPTY_TREE" if key &lt; node.key return search-recursive(key, node.left) else if key &gt; node.key return search-recursive(key, node.right) else return node Iterative. searchIterative(key, node) currentNode := node while currentNode is not "NULL" if currentNode.key = key return currentNode else if currentNode.key &gt; key currentNode := currentNode.left else currentNode := currentNode.right Searching for min and max. 
In a sorted tree, the minimum is located at the node farthest left, while the maximum is located at the node farthest right. Minimum. findMinimum(node) if node is "NULL" return "EMPTY_TREE" min := node while min.left is not "NULL" min := min.left return min.key Maximum. findMaximum(node) if node is "NULL" return "EMPTY_TREE" max := node while max.right is not "NULL" max := max.right return max.key References. &lt;templatestyles src="Reflist/styles.css" /&gt;
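The pseudocode above can be made concrete. The following is a minimal, runnable sketch in Python of the recursive and iterative key searches together with the minimum and maximum walks; the Node class and its field names are assumptions introduced here for illustration only.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def search_recursive(key, node):
    # Returns the node holding key, or None (the EMPTY_TREE case above).
    if node is None:
        return None
    if key < node.key:
        return search_recursive(key, node.left)
    elif key > node.key:
        return search_recursive(key, node.right)
    return node

def search_iterative(key, node):
    current = node
    while current is not None:
        if current.key == key:
            return current
        # Go left when the stored key is larger than the target, right otherwise.
        current = current.left if current.key > key else current.right
    return None

def find_minimum(node):
    if node is None:
        return None
    while node.left is not None:
        node = node.left            # minimum is the leftmost node
    return node.key

def find_maximum(node):
    if node is None:
        return None
    while node.right is not None:
        node = node.right           # maximum is the rightmost node
    return node.key

# Example: a balanced tree holding 1..7 with 4 at the root.
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
assert search_recursive(5, root).key == 5
assert search_iterative(8, root) is None
assert find_minimum(root) == 1 and find_maximum(root) == 7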
[ { "math_id": 0, "text": "2 \\le a \\le \\frac{(b+1)}{2}" } ]
https://en.wikipedia.org/wiki?curid=844292
8443164
Van der Waerden number
Van der Waerden's theorem states that for any positive integers "r" and "k" there exists a positive integer "N" such that if the integers {1, 2, ..., "N"} are colored, each with one of "r" different colors, then there are at least "k" integers in arithmetic progression all of the same color. The smallest such "N" is the van der Waerden number "W"("r", "k"). Tables of Van der Waerden numbers. There are two cases in which the van der Waerden number "W"("r", "k") is easy to compute: first, when the number of colors "r" is equal to 1, one has "W"(1, "k") = "k" for any integer "k", since one color produces only trivial colorings RRRRR...RRR (for the single color denoted R). Second, when the length "k" of the forced arithmetic progression is 2, one has "W"("r", 2) = "r" + 1, since one may construct a coloring that avoids arithmetic progressions of length 2 by using each color at most once, but using any color twice creates a length-2 arithmetic progression. (For example, for "r" = 3, the longest coloring that avoids an arithmetic progression of length 2 is RGB.) There are only seven other van der Waerden numbers that are known exactly. The table below gives exact values and bounds for values of "W"("r", "k"); values are taken from Rabung and Lotts except where otherwise noted. Some lower bound colorings computed using SAT approach by Marijn J.H. Heule can be found on github project page. Van der Waerden numbers with "r" ≥ 2 are bounded above by formula_0 as proved by Gowers. For a prime number "p", the 2-color van der Waerden number is bounded below by formula_1 as proved by Berlekamp. One sometimes also writes "w"(r; "k"1, "k"2, ..., "k""r") to mean the smallest number "w" such that any coloring of the integers {1, 2, ..., "w"} with "r" colors contains a progression of length "k""i" of color "i", for some "i". Such numbers are called "off-diagonal van der Waerden numbers". Thus "W"("r", "k") = "w"(r; "k", "k", ..., "k"). Following is a list of some known van der Waerden numbers: Van der Waerden numbers are primitive recursive, as proved by Shelah; in fact he proved that they are (at most) on the fifth level formula_2 of the Grzegorczyk hierarchy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
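As a small illustration of the definition, the following Python sketch verifies by exhaustive search that W(2, 3) = 9: some 2-coloring of {1, ..., 8} avoids a monochromatic 3-term arithmetic progression, while every 2-coloring of {1, ..., 9} contains one. Such brute force is only feasible for very small cases, which is consistent with how few exact values are known.

from itertools import product

def has_mono_ap(coloring, k):
    # coloring[i] is the color of the integer i + 1
    n = len(coloring)
    for start in range(1, n + 1):
        for d in range(1, n):
            prog = [start + j * d for j in range(k)]
            if prog[-1] > n:
                break
            if len({coloring[x - 1] for x in prog}) == 1:
                return True            # monochromatic k-term progression found
    return False

def is_vdw_number(n, r, k):
    # True if every r-coloring of {1..n} contains a monochromatic k-term AP.
    return all(has_mono_ap(c, k) for c in product(range(r), repeat=n))

assert not is_vdw_number(8, 2, 3)   # a good coloring of {1..8} exists
assert is_vdw_number(9, 2, 3)       # so W(2, 3) = 9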
[ { "math_id": 0, "text": "W(r,k)\\le 2^{2^{r^{2^{2^{k+9}}}}}" }, { "math_id": 1, "text": "p\\cdot2^p\\le W(2,p+1)," }, { "math_id": 2, "text": "\\mathcal{E}^5" } ]
https://en.wikipedia.org/wiki?curid=8443164
8444301
Sudoku solving algorithms
Algorithms to complete a sudoku A standard Sudoku contains 81 cells, in a 9×9 grid, and has 9 boxes, each box being the intersection of the first, middle, or last 3 rows, and the first, middle, or last 3 columns. Each cell may contain a number from one to nine, and each number can only occur once in each row, column, and box. A Sudoku starts with some cells containing numbers ("clues"), and the goal is to solve the remaining cells. Proper Sudokus have one solution. Players and investigators use a wide range of computer algorithms to solve Sudokus, study their properties, and make new puzzles, including Sudokus with interesting symmetries and other properties. There are several computer algorithms that will solve 9×9 puzzles (n = 9) in fractions of a second, but combinatorial explosion occurs as n increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved as n increases. Techniques. Backtracking. Some hobbyists have developed computer programs that will solve Sudoku puzzles using a backtracking algorithm, which is a type of brute force search. Backtracking is a "depth-first search" (in contrast to a "breadth-first search"), because it will completely explore one branch to a possible solution before moving to another branch. Although it has been established that approximately 5.96 x 1026 final grids exist, a brute force algorithm can be a practical method to solve Sudoku puzzles. A brute force algorithm visits the empty cells in some order, filling in digits sequentially, or backtracking when the number is found to be not valid. Briefly, a program would solve a puzzle by placing the digit "1" in the first cell and checking if it is allowed to be there. If there are no violations (checking row, column, and box constraints) then the algorithm advances to the next cell and places a "1" in that cell. When checking for violations, if it is discovered that the "1" is not allowed, the value is advanced to "2". If a cell is discovered where none of the 9 digits is allowed, then the algorithm leaves that cell blank and moves back to the previous cell. The value in that cell is then incremented by one. This is repeated until the allowed value in the last (81st) cell is discovered. The animation shows how a Sudoku is solved with this method. The puzzle's clues (red numbers) remain fixed while the algorithm tests each unsolved cell with a possible solution. Notice that the algorithm may discard all the previously tested values if it finds the existing set does not fulfill the constraints of the Sudoku. Advantages of this method are: The disadvantage of this method is that the solving time may be slow compared to algorithms modeled after deductive methods. One programmer reported that such an algorithm may typically require as few as 15,000 cycles, or as many as 900,000 cycles to solve a Sudoku, each cycle being the change in position of a "pointer" as it moves through the cells of a Sudoku. A different approach which also uses backtracking, draws from the fact that in the solution to a standard sudoku the distribution for every individual symbol (value) must be one of only 46656 patterns. In manual sudoku solving this technique is referred to as pattern overlay or using templates and is confined to filling in the last values only. A library with all the possible patterns may get loaded or created at program start. Then every given symbol gets assigned a filtered set with those patterns, which are in accordance with the given clues. 
In the last step, the actual backtracking part, patterns from these sets are tried to be combined or overlayed in a non-conflicting way until the one permissible combination is hit upon. The Implementation is exceptionally easy when using bit vectors, because for all the tests only bit-wise logical operations are needed, instead of any nested iterations across rows and columns. Significant optimization can be achieved by reducing the sets of patterns even further during filtering. By testing every questionable pattern against all the reduced sets that were already accepted for the other symbols the total number of patterns left for backtracking is greatly diminished. And as with all sudoku brute-force techniques, run time can be vastly reduced by first applying some of the most simple solving practices which may fill in some 'easy' values. A Sudoku can be constructed to work against backtracking. Assuming the solver works from top to bottom (as in the animation), a puzzle with few clues (17), no clues in the top row, and has a solution "987654321" for the first row, would work in opposition to the algorithm. Thus the program would spend significant time "counting" upward before it arrives at the grid which satisfies the puzzle. In one case, a programmer found a brute force program required six hours to arrive at the solution for such a Sudoku (albeit using a 2008-era computer). Such a Sudoku can be solved nowadays in less than 1 second using an exhaustive search routine and faster processors. Stochastic search / optimization methods. Sudoku can be solved using stochastic (random-based) algorithms. An example of this method is to: A solution to the puzzle is then found. Approaches for shuffling the numbers include simulated annealing, genetic algorithm and tabu search. Stochastic-based algorithms are known to be fast, though perhaps not as fast as deductive techniques. Unlike the latter however, optimisation algorithms do not necessarily require problems to be logic-solvable, giving them the potential to solve a wider range of problems. Algorithms designed for graph colouring are also known to perform well with Sudokus. It is also possible to express a Sudoku as an integer linear programming problem. Such approaches get close to a solution quickly, and can then use branching towards the end. The simplex algorithm is able to solve proper Sudokus, indicating if the Sudoku is not valid (no solution). If there is more than one solution (non-proper Sudokus) the simplex algorithm will generally yield a solution with fractional amounts of more than one digit in some squares. However, for proper Sudokus, linear programming presolve techniques alone will deduce the solution without any need for simplex iterations. The logical rules used by presolve techniques for the reduction of LP problems include the set of logical rules used by humans to solve Sudokus. Constraint programming. A Sudoku may also be modelled as a constraint satisfaction problem. In his paper "Sudoku as a Constraint Problem", Helmut Simonis describes many "reasoning algorithms" based on constraints which can be applied to model and solve problems. Some constraint solvers include a method to model and solve Sudokus, and a program may require fewer than 100 lines of code to solve a simple Sudoku. If the code employs a strong reasoning algorithm, incorporating backtracking is only needed for the most difficult Sudokus. 
An algorithm combining a constraint-model-based algorithm with backtracking would have the advantage of fast solving time - of the order of a few milliseconds - and the ability to solve all sudokus. Exact cover. Sudoku puzzles may be described as an exact cover problem, or more precisely, an exact hitting set problem. This allows for an elegant description of the problem and an efficient solution. Modelling Sudoku as an exact cover problem and using an algorithm such as Knuth's Algorithm X and his Dancing Links technique "is the method of choice for rapid finding [measured in microseconds] of all possible solutions to Sudoku puzzles." An alternative approach is the use of Gauss elimination in combination with column and row striking. Relations and residuals. Let "Q" be the 9x9 Sudoku matrix, "N" = {1, 2, 3, 4, 5, 6, 7, 8, 9}, and "X" represent a generic row, column, or block. "N" supplies symbols for filling "Q" as well as the index set for the 9 elements of any "X". The given elements "q" in "Q" represent a univalent relation from "Q" to "N". The solution "R" is a "total relation" and hence a function. Sudoku rules require that the restriction of "R" to "X" is a bijection, so any partial solution "C", restricted to an "X", is a partial permutation of "N". Let "T" = { "X" : "X" is a row, column, or block of "Q" }, so "T" has 27 elements. An "arrangement" is either a partial permutation or a permutation on "N". Let "Z" be the set of all arrangements on "N". A partial solution "C" can be reformulated to include the rules as a composition of relations "A" (one-to-three) and "B" requiring compatible arrangements: formula_0 Solution of the puzzle, suggestions for new "q" to enter "Q", come from prohibited arrangements formula_1, the complement of "C" in "Q"x"Z": useful tools in the calculus of relations are residuals: formula_2 maps "T" to "Z", and formula_3 maps "Q" to "T". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
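To make the backtracking description above concrete, here is a short Python sketch of a brute-force backtracking solver. It visits empty cells in order, tries the digits 1 to 9 subject to the row, column, and box constraints, and backtracks when no digit fits; the grid representation (a list of 81 integers with 0 for an empty cell) is an assumption made for this illustration.

def solve(grid):
    # grid: a list of 81 integers, 0 for an empty cell; solved in place.
    if 0 not in grid:
        return True                      # no empty cell left: solved
    cell = grid.index(0)
    row, col = divmod(cell, 9)
    box_r, box_c = 3 * (row // 3), 3 * (col // 3)
    row_vals = {grid[9 * row + c] for c in range(9)}
    col_vals = {grid[9 * r + col] for r in range(9)}
    box_vals = {grid[9 * (box_r + r) + box_c + c]
                for r in range(3) for c in range(3)}
    for digit in range(1, 10):
        if digit not in row_vals and digit not in col_vals and digit not in box_vals:
            grid[cell] = digit
            if solve(grid):
                return True
            grid[cell] = 0               # undo and let the loop try the next digit
    return False                         # no digit fits: backtrack to the caller

For a proper puzzle this fills in the unique solution; as noted above, the running time depends strongly on the clue pattern.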
[ { "math_id": 0, "text": "Q \\xrightarrow{A} T \\xrightarrow{B} Z \\quad \\text{with}\\quad A;B \\subseteq C ." }, { "math_id": 1, "text": "\\bar{C}," }, { "math_id": 2, "text": "A \\backslash C = \\overline{A^T;\\bar{C}}" }, { "math_id": 3, "text": " C/B = \\overline{\\bar{C};B^T}" } ]
https://en.wikipedia.org/wiki?curid=8444301
844461
Representable functor
In mathematics, particularly category theory, a representable functor is a certain functor from an arbitrary category into the category of sets. Such functors give representations of an abstract category in terms of known structures (i.e. sets and functions) allowing one to utilize, as much as possible, knowledge about the category of sets in other settings. From another point of view, representable functors for a category "C" are the functors "given" with "C". Their theory is a vast generalisation of upper sets in posets, and Yoneda's representability theorem generalizes Cayley's theorem in group theory. Definition. Let C be a locally small category and let Set be the category of sets. For each object "A" of C let Hom("A",–) be the hom functor that maps object "X" to the set Hom("A","X"). A functor "F" : C → Set is said to be representable if it is naturally isomorphic to Hom("A",–) for some object "A" of C. A representation of "F" is a pair ("A", Φ) where Φ : Hom("A",–) → "F" is a natural isomorphism. A contravariant functor "G" from C to Set is the same thing as a functor "G" : Cop → Set and is commonly called a presheaf. A presheaf is representable when it is naturally isomorphic to the contravariant hom-functor Hom(–,"A") for some object "A" of C. Universal elements. According to Yoneda's lemma, natural transformations from Hom("A",–) to "F" are in one-to-one correspondence with the elements of "F"("A"). Given a natural transformation Φ : Hom("A",–) → "F" the corresponding element "u" ∈ "F"("A") is given by formula_0 Conversely, given any element "u" ∈ "F"("A") we may define a natural transformation Φ : Hom("A",–) → "F" via formula_1 where "f" is an element of Hom("A","X"). In order to get a representation of "F" we want to know when the natural transformation induced by "u" is an isomorphism. This leads to the following definition: A universal element of a functor "F" : C → Set is a pair ("A","u") consisting of an object "A" of C and an element "u" ∈ "F"("A") such that for every pair ("X","v") consisting of an object "X" of C and an element "v" ∈ "F"("X") there exists a unique morphism "f" : "A" → "X" such that ("Ff")("u") = "v". A universal element may be viewed as a universal morphism from the one-point set {•} to the functor "F" or as an initial object in the category of elements of "F". The natural transformation induced by an element "u" ∈ "F"("A") is an isomorphism if and only if ("A","u") is a universal element of "F". We therefore conclude that representations of "F" are in one-to-one correspondence with universal elements of "F". For this reason, it is common to refer to universal elements ("A","u") as representations. Analogy: Representable functionals. Consider a linear functional on a complex Hilbert space "H", i.e. a linear function formula_3. The Riesz representation theorem states that if "F" is continuous, then there exists a unique element formula_4 which represents "F" in the sense that "F" is equal to the inner product functional formula_5, that is formula_6 for formula_7. For example, the continuous linear functionals on the square-integrable function space formula_8 are all representable in the form formula_9 for a unique function formula_10. The theory of distributions considers more general continuous functionals on the space of test functions formula_11. Such a distribution functional is not necessarily representable by a function, but it may be considered intuitively as a generalized function. 
For instance, the Dirac delta function is the distribution defined by formula_12 for each test function formula_13, and may be thought of as "represented" by an infinitely tall and thin bump function near formula_14. Thus, a function formula_15 may be determined not by its values, but by its effect on other functions via the inner product. Analogously, an object "A" in a category may be characterized not by its internal features, but by its functor of points, i.e. its relation to other objects via morphisms. Just as non-representable functionals are described by distributions, non-representable functors may be described by more complicated structures such as stacks. Properties. Uniqueness. Representations of functors are unique up to a unique isomorphism. That is, if ("A"1,Φ1) and ("A"2,Φ2) represent the same functor, then there exists a unique isomorphism φ : "A"1 → "A"2 such that formula_16 as natural isomorphisms from Hom("A"2,–) to Hom("A"1,–). This fact follows easily from Yoneda's lemma. Stated in terms of universal elements: if ("A"1,"u"1) and ("A"2,"u"2) represent the same functor, then there exists a unique isomorphism φ : "A"1 → "A"2 such that formula_17 Preservation of limits. Representable functors are naturally isomorphic to Hom functors and therefore share their properties. In particular, (covariant) representable functors preserve all limits. It follows that any functor which fails to preserve some limit is not representable. Contravariant representable functors take colimits to limits. Left adjoint. Any functor "K" : "C" → Set with a left adjoint "F" : Set → "C" is represented by ("FX", η"X"(•)) where "X" = {•} is a singleton set and η is the unit of the adjunction. Conversely, if "K" is represented by a pair ("A", "u") and all small copowers of "A" exist in "C" then "K" has a left adjoint "F" which sends each set "I" to the "I"th copower of "A". Therefore, if "C" is a category with all small copowers, a functor "K" : "C" → Set is representable if and only if it has a left adjoint. Relation to universal morphisms and adjoints. The categorical notions of universal morphisms and adjoint functors can both be expressed using representable functors. Let "G" : "D" → "C" be a functor and let "X" be an object of "C". Then ("A",φ) is a universal morphism from "X" to "G" if and only if ("A",φ) is a representation of the functor Hom"C"("X","G"–) from "D" to Set. It follows that "G" has a left-adjoint "F" if and only if Hom"C"("X","G"–) is representable for all "X" in "C". The natural isomorphism Φ"X" : Hom"D"("FX",–) → Hom"C"("X","G"–) yields the adjointness; that is formula_18 is a bijection for all "X" and "Y". The dual statements are also true. Let "F" : "C" → "D" be a functor and let "Y" be an object of "D". Then ("A",φ) is a universal morphism from "F" to "Y" if and only if ("A",φ) is a representation of the functor Hom"D"("F"–,"Y") from "C" to Set. It follows that "F" has a right-adjoint "G" if and only if Hom"D"("F"–,"Y") is representable for all "Y" in "D".
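For finite sets, the correspondence between elements u of F(A) and the natural transformations defined by formula_1, with u recovered via formula_0, can be checked mechanically. The following Python sketch does so for the concrete functor F(S) = S × S (acting componentwise on functions), with functions represented as dictionaries; the choice of functor, the finite sets, and all names are assumptions made purely for illustration.

from itertools import product

A = (0, 1)                       # a fixed finite set A
X = ('a', 'b', 'c')              # another finite set
Y = ('p', 'q')

def F_map(f):                    # the functor F(S) = S x S, on morphisms
    return lambda pair: (f[pair[0]], f[pair[1]])

def hom(S, T):                   # the hom-set: all functions S -> T, as dicts
    return [dict(zip(S, values)) for values in product(T, repeat=len(S))]

def key(f):                      # hashable representation of a function
    return tuple(sorted(f.items()))

# An element u of F(A) induces Phi_S : Hom(A, S) -> F(S), f |-> F(f)(u).
u = (0, 1)
def Phi(S):
    return {key(f): F_map(f)(u) for f in hom(A, S)}

Phi_X, Phi_Y, Phi_A = Phi(X), Phi(Y), Phi(A)

# Naturality: for every g : X -> Y and f : A -> X,  F(g)(Phi_X(f)) = Phi_Y(g o f).
for g in hom(X, Y):
    for f in hom(A, X):
        composite = {a: g[f[a]] for a in A}
        assert F_map(g)(Phi_X[key(f)]) == Phi_Y[key(composite)]

# The universal element: Phi_A applied to the identity on A recovers u.
identity = {a: a for a in A}
assert Phi_A[key(identity)] == u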
[ { "math_id": 0, "text": "u = \\Phi_A(\\mathrm{id}_A).\\," }, { "math_id": 1, "text": "\\Phi_X(f) = (Ff)(u)\\," }, { "math_id": 2, "text": "X\\to A" }, { "math_id": 3, "text": "F: H\\to\\mathbb C" }, { "math_id": 4, "text": "a\\in H" }, { "math_id": 5, "text": "\\langle a, -\\rangle " }, { "math_id": 6, "text": "F(v) = \\langle a,v\\rangle " }, { "math_id": 7, "text": "v\\in H" }, { "math_id": 8, "text": "H = L^2(\\mathbb R)" }, { "math_id": 9, "text": "\\textstyle F(v) = \\langle a,v\\rangle = \\int_{\\mathbb R} a(x)v(x)\\,dx" }, { "math_id": 10, "text": "a(x)\\in H" }, { "math_id": 11, "text": "C=C^\\infty_c(\\mathbb R)" }, { "math_id": 12, "text": "F(v) = v(0)" }, { "math_id": 13, "text": "v(x)\\in C" }, { "math_id": 14, "text": "x=0" }, { "math_id": 15, "text": "a(x)" }, { "math_id": 16, "text": "\\Phi_1^{-1}\\circ\\Phi_2 = \\mathrm{Hom}(\\varphi,-)" }, { "math_id": 17, "text": "(F\\varphi)u_1 = u_2." }, { "math_id": 18, "text": "\\Phi_{X,Y}\\colon \\mathrm{Hom}_{\\mathcal D}(FX,Y) \\to \\mathrm{Hom}_{\\mathcal C}(X,GY)" } ]
https://en.wikipedia.org/wiki?curid=844461
8444826
Fast marching method
Algorithm for solving boundary value problems of the Eikonal equation The fast marching method is a numerical method created by James Sethian for solving boundary value problems of the Eikonal equation: formula_0 formula_1 Typically, such a problem describes the evolution of a closed surface as a function of time formula_2 with speed formula_3 in the normal direction at a point formula_4 on the propagating surface. The speed function is specified, and the time at which the contour crosses a point formula_4 is obtained by solving the equation. Alternatively, formula_5 can be thought of as the minimum amount of time it would take to reach formula_6 starting from the point formula_4. The fast marching method takes advantage of this optimal control interpretation of the problem in order to build a solution outwards starting from the "known information", i.e. the boundary values. The algorithm is similar to Dijkstra's algorithm and uses the fact that information only flows outward from the seeding area. This problem is a special case of level-set methods. More general algorithms exist but are normally slower. Extensions to non-flat (triangulated) domains, solving formula_7 for the surface formula_8 and formula_9, were introduced by Ron Kimmel and James Sethian. Algorithm. First, assume that the domain has been discretized into a mesh. We will refer to meshpoints as nodes. Each node formula_10 has a corresponding value formula_11. The algorithm works just like Dijkstra's algorithm but differs in how the nodes' values are calculated. In Dijkstra's algorithm, a node's value is calculated using a single one of the neighboring nodes. However, in solving the PDE in formula_12, between formula_13 and formula_14 of the neighboring nodes are used. Nodes are labeled as "far" (not yet visited), "considered" (visited and value tentatively assigned), and "accepted" (visited and value permanently assigned). The algorithm proceeds as follows: 1. Assign every node formula_10 the value formula_15 and label it as far; for all nodes formula_16, set formula_17 and label them as accepted. 2. For every far node formula_10, use the Eikonal update formula to calculate a new value formula_18. If formula_19, then set formula_20 and label formula_10 as considered. 3. Let formula_21 be the considered node with the smallest value formula_22 and label it as accepted. 4. For every neighbor formula_10 of formula_21 that is not accepted, calculate a tentative value formula_18. 5. If formula_23, then set formula_20. If formula_10 was labeled as far, relabel it as considered. 6. If there exists a considered node, return to step 3; otherwise, terminate. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
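The following Python sketch implements the scheme above on a uniform 2-D grid with constant speed, using a binary heap for the considered set, in the spirit of Dijkstra's algorithm; the grid layout, boundary handling, and all names are assumptions made for this illustration, not a reference implementation.

import heapq
import math

def fast_marching(seeds, shape, h=1.0, speed=1.0):
    # seeds: iterable of (i, j) grid points on the boundary where u = 0.
    ni, nj = shape
    INF = math.inf
    u = [[INF] * nj for _ in range(ni)]
    accepted = [[False] * nj for _ in range(ni)]
    heap = []
    for (i, j) in seeds:
        u[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def value(i, j):
        return u[i][j] if accepted[i][j] else INF

    def update(i, j):
        # Upwind solve of ((u-a)/h)^2 + ((u-b)/h)^2 = 1/speed^2 in 2-D,
        # using only accepted neighbours in each axis direction.
        a = min(value(i - 1, j) if i > 0 else INF, value(i + 1, j) if i < ni - 1 else INF)
        b = min(value(i, j - 1) if j > 0 else INF, value(i, j + 1) if j < nj - 1 else INF)
        lo, hi = min(a, b), max(a, b)
        step = h / speed
        if hi - lo >= step:                # only one direction contributes
            return lo + step
        return 0.5 * (lo + hi + math.sqrt(2 * step * step - (lo - hi) ** 2))

    while heap:
        _, i, j = heapq.heappop(heap)
        if accepted[i][j]:
            continue
        accepted[i][j] = True              # smallest considered value becomes final
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m, n = i + di, j + dj
            if 0 <= m < ni and 0 <= n < nj and not accepted[m][n]:
                tentative = update(m, n)
                if tentative < u[m][n]:
                    u[m][n] = tentative
                    heapq.heappush(heap, (tentative, m, n))
    return u

# Distance-like arrival-time field from a single seed in a 21 x 21 grid.
field = fast_marching([(10, 10)], (21, 21))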
[ { "math_id": 0, "text": "|\\nabla u(x)|=1/f(x) \\text{ for } x \\in \\Omega" }, { "math_id": 1, "text": "u(x) = 0 \\text{ for } x \\in \\partial \\Omega" }, { "math_id": 2, "text": "u" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "u(x)" }, { "math_id": 6, "text": "\\partial\\Omega" }, { "math_id": 7, "text": "|\\nabla_S u(x)|=1 / f(x),\n" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "x\\in S" }, { "math_id": 10, "text": "x_i" }, { "math_id": 11, "text": "U_i = U(x_i) \\approx u(x_i)" }, { "math_id": 12, "text": "\\mathbb{R}^n" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "U_i=+\\infty" }, { "math_id": 16, "text": "x_i \\in \\partial\\Omega" }, { "math_id": 17, "text": "U_i = 0" }, { "math_id": 18, "text": "\\tilde{U}" }, { "math_id": 19, "text": "\\tilde{U} < U_i" }, { "math_id": 20, "text": "U_i = \\tilde{U}" }, { "math_id": 21, "text": "\\tilde{x}" }, { "math_id": 22, "text": "U" }, { "math_id": 23, "text": "\\tilde{U} < U_i " } ]
https://en.wikipedia.org/wiki?curid=8444826
8447393
Ligand cone angle
Measure of the steric bulk of a ligand in a coordination complex In coordination chemistry, the ligand cone angle (θ) is a measure of the steric bulk of a ligand in a transition metal coordination complex. It is defined as the solid angle formed with the metal at the vertex of a cone and the outermost edge of the van der Waals spheres of the ligand atoms at the perimeter of the base of the cone. Tertiary phosphine ligands are commonly classified using this parameter, but the method can be applied to any ligand. The term "cone angle" was first introduced by Chadwick A. Tolman, a research chemist at DuPont. Tolman originally developed the method for phosphine ligands in nickel complexes, determining the cone angles from measurements of accurate physical models. Asymmetric cases. The concept of cone angle is most easily visualized with symmetrical ligands, e.g. PR3. But the approach has been refined to include less symmetrical ligands of the type PRR′R″ as well as diphosphines. In such asymmetric cases, the substituent angles' half angles, θi/2, are averaged and then doubled to find the total cone angle, "θ". In the case of diphosphines, the half angle of the backbone is approximated as half the chelate bite angle, assuming a bite angle of 74°, 85°, and 90° for diphosphines with methylene, ethylene, and propylene backbones, respectively. The Manz cone angle is often easier to compute than the Tolman cone angle: formula_0 Variations. The Tolman cone angle method assumes empirical bond data and defines the perimeter as the maximum possible circumscription of an idealized free-spinning substituent. The metal-ligand bond length in the Tolman model was determined empirically from crystal structures of tetrahedral nickel complexes. In contrast, the solid-angle concept derives both bond length and the perimeter from empirical solid state crystal structures. There are advantages to each system. If the geometry of a ligand is known, either through crystallography or computations, an exact cone angle ("θ") can be calculated. No assumptions about the geometry are made, unlike the Tolman method. Application. The concept of cone angle is of practical importance in homogeneous catalysis because the size of the ligand affects the reactivity of the attached metal center. For example, the selectivity of hydroformylation catalysts is strongly influenced by the size of the coligands. Despite being monovalent, some phosphines are large enough to occupy more than half of the coordination sphere of a metal center. Recent research has found that other descriptors, such as percent buried volume, are more accurate than cone angle at capturing the relevant steric effects of the phosphine ligand(s) when bound to the metal center. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
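As a numerical illustration of the averaging rule formula_0, the following Python sketch combines three substituent half-angles into a single cone angle; the half-angle values used here are placeholders chosen for illustration, not tabulated data for any real ligand.

def cone_angle(half_angles_deg):
    # theta = (2/3) * sum over substituents of the half-angles theta_i / 2
    return (2.0 / 3.0) * sum(half_angles_deg)

# Hypothetical half-angles (theta_i / 2, in degrees) for a ligand PRR'R''.
half_angles = [66.0, 71.0, 79.0]
theta = cone_angle(half_angles)
print(f"cone angle = {theta:.1f} degrees")   # (2/3) * (66 + 71 + 79) = 144.0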
[ { "math_id": 0, "text": "\\theta = \\frac{2}{3} \\sum_i \\frac{\\theta_i}{2}" } ]
https://en.wikipedia.org/wiki?curid=8447393
8447449
Reciprocity (network science)
In network science, reciprocity is a measure of the likelihood of vertices in a directed network to be mutually linked. Like the clustering coefficient, scale-free degree distribution, or community structure, reciprocity is a quantitative measure used to study complex networks. Motivation. In real network problems, people are interested in determining the likelihood of occurring double links (with opposite directions) between vertex pairs. This problem is fundamental for several reasons. First, in the networks that transport information or material (such as email networks, World Wide Web (WWW), World Trade Web, or Wikipedia ), mutual links facilitate the transportation process. Second, when analyzing directed networks, people often treat them as undirected ones for simplicity; therefore, the information obtained from reciprocity studies helps to estimate the error introduced when a directed network is treated as undirected (for example, when measuring the clustering coefficient). Finally, detecting nontrivial patterns of reciprocity can reveal possible mechanisms and organizing principles that shape the observed network's topology. Definitions. Traditional definition. A traditional way to define the reciprocity formula_0 is using the ratio of the number of links pointing in both directions formula_1 to the total number of links L formula_2 With this definition, formula_3 is for a purely bidirectional network while formula_4 for a purely unidirectional one. Real networks have an intermediate value between 0 and 1. However, this definition of reciprocity has some defects. It cannot tell the relative difference of reciprocity compared with purely random network with the same number of vertices and edges. The useful information from reciprocity is not the value itself, but whether mutual links occur more or less often than expected by chance. Besides, in those networks containing self-linking loops (links starting and ending at the same vertex), the self-linking loops should be excluded when calculating formula_5. Garlaschelli and Loffredo's definition. In order to overcome the defects of the above definition, Garlaschelli and Loffredo defined reciprocity as the correlation coefficient between the entries of the adjacency matrix of a directed graph (formula_6 if a link from formula_7 to formula_8 exists, and formula_9 if not): formula_10, where the average value formula_11. formula_12 measures the ratio of observed to possible directed links (link density), and self-linking loops are now excluded from formula_5 since formula_7 is not equal to formula_8. The definition can be written in the following simple form: formula_13 The new definition of reciprocity gives an absolute quantity which directly allows one to distinguish between reciprocal (formula_14) and antireciprocal (formula_15) networks, with mutual links occurring more and less often than random respectively. If all the links occur in reciprocal pairs, formula_16; if formula_17, formula_18. formula_19 This is another advantage of using formula_20, since it incorporates the idea that complete antireciprocality is more statistically significant in networks with larger density, while it must be regarded as a less pronounced effect in sparser networks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
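The two definitions translate directly into a short computation on the adjacency matrix. The following Python sketch (using NumPy, with the adjacency matrix passed in as an array) computes the traditional ratio r and Garlaschelli and Loffredo's correlation-based ρ, excluding self-linking loops as required; the example matrix is made up for illustration.

import numpy as np

def reciprocity(adj):
    a = np.asarray(adj, dtype=bool).copy()
    np.fill_diagonal(a, False)              # exclude self-linking loops
    n = a.shape[0]
    links = a.sum()                         # L, the total number of directed links
    mutual = np.logical_and(a, a.T).sum()   # L^<->, links whose reverse also exists
    r = mutual / links                      # traditional definition
    density = links / (n * (n - 1))         # a-bar, the link density
    rho = (r - density) / (1 - density)     # Garlaschelli-Loffredo definition
    return r, rho

adj = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 0]]
r, rho = reciprocity(adj)    # r = 4/5 = 0.8, density = 5/12, rho = 23/35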
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "L^{<->}" }, { "math_id": 2, "text": "r = \\frac {L^{<->}}{L}" }, { "math_id": 3, "text": "r = 1" }, { "math_id": 4, "text": "r = 0 " }, { "math_id": 5, "text": "L" }, { "math_id": 6, "text": "a_{ij} = 1" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "j" }, { "math_id": 9, "text": "a_{ij} = 0" }, { "math_id": 10, "text": "\\rho \\equiv \\frac {\\sum_{i \\neq j} (a_{ij} - \\bar{a}) (a_{ji} - \\bar{a})}{\\sum_{i \\neq j} (a_{ij} - \\bar{a})^2}" }, { "math_id": 11, "text": "\\bar{a} \\equiv \\frac {\\sum_{i \\neq j} a_{ij}} {N(N-1)} = \\frac {L} {N(N-1)}" }, { "math_id": 12, "text": "\\bar{a}" }, { "math_id": 13, "text": "\\rho = \\frac {r - \\bar{a}} {1- \\bar{a}}" }, { "math_id": 14, "text": "\\rho > 0" }, { "math_id": 15, "text": "\\rho < 0" }, { "math_id": 16, "text": "\\rho = 1" }, { "math_id": 17, "text": "r=0" }, { "math_id": 18, "text": "\\rho = \\rho_{min}" }, { "math_id": 19, "text": "\\rho_{min} \\equiv \\frac {- \\bar{a}} {1- \\bar{a}}" }, { "math_id": 20, "text": "\\rho" } ]
https://en.wikipedia.org/wiki?curid=8447449
844772
Operational risk
Failure Mode and Effects Analysis (FMEA) Factor Operational risk is the risk of losses caused by flawed or failed processes, policies, systems or events that disrupt business operations. Employee errors, criminal activity such as fraud, and physical events are among the factors that can trigger operational risk. The process to manage operational risk is known as operational risk management. The definition of operational risk, adopted by the European Solvency II Directive for insurers, is a variation adopted from the Basel II regulations for banks: "The risk of a change in value caused by the fact that actual losses, incurred for inadequate or failed internal processes, people and systems, or from external events (including legal risk), differ from the expected losses". The scope of operational risk is then broad, and can also include other classes of risks, such as fraud, security, privacy protection, legal risks, physical (e.g. infrastructure shutdown) or environmental risks. Operational risks similarly may impact broadly, in that they can affect client satisfaction, reputation and shareholder value, all while increasing business volatility. Previously, in Basel I, operational risk was negatively defined: namely that operational risk are all risks which are "not" market risk and not credit risk. Some banks have therefore also used the term operational risk synonymously with non-financial risks. In October 2014, the Basel Committee on Banking Supervision proposed a revision to its operational risk capital framework that sets out a new standardized approach to replace the basic indicator approach and the standardized approach for calculating operational risk capital. Contrary to other risks (e.g. credit risk, market risk, insurance risk) operational risks are usually not willingly incurred nor are they revenue driven. Moreover, they are not diversifiable and cannot be laid off. This means that as long as people, systems, and processes remain imperfect, operational risk cannot be fully eliminated. Operational risk is, nonetheless, manageable as to keep losses within some level of risk tolerance (i.e. the amount of risk one is prepared to accept in pursuit of his objectives), determined by balancing the costs of improvement against the expected benefits. Wider trends such as globalization, the expansion of the internet and the rise of social media, as well as the increasing demands for greater corporate accountability worldwide, reinforce the need for proper risk management. Thus operational risk management (ORM) is a specialized discipline within risk management. It constitutes the continuous-process of risk assessment, decision making, and implementation of risk controls, resulting in the acceptance, mitigation, or avoidance of the various operational risks. ORM somewhat overlaps quality management and the internal audit function. Background. Until Basel II reforms to banking supervision, operational risk was a residual category reserved for risks and uncertainties which were difficult to quantify and manage in traditional ways – the "other risks" basket. Such regulations institutionalized operational risk as a category of regulatory and managerial attention and connected operational risk management with good corporate governance. Businesses in general, and other institutions such as the military, have been aware, for many years, of hazards arising from operational factors, internal or external. 
The primary goal of the military is to fight and win wars in quick and decisive fashion, and with minimal losses. For the military and the businesses of the world alike, operational risk management is an effective process for preserving resources by anticipation. Two decades (from 1980 to the early 2000s) of globalization and deregulation ("e.g." Big Bang (financial markets)), combined with the increased sophistication of financial services around the world, introduced additional complexities into the activities of banks, insurers, and firms in general and therefore their risk profiles. Since the mid-1990s, the topics of market risk and credit risk have been the subject of much debate and research, with the result that financial institutions have made significant progress in the identification, measurement, and management of both these forms of risk. However, the near collapse of the U.S. financial system in September 2008 is an indication that our ability to measure market and credit risk is far from perfect and eventually led to the introduction of new regulatory requirements worldwide, including Basel III regulations for banks and Solvency II regulations for insurers. Events such as the September 11 terrorist attacks, rogue trading losses at Société Générale, Barings, AIB, UBS, and National Australia Bank serve to highlight the fact that the scope of risk management extends beyond merely market and credit risk. These reasons underscore banks' and supervisors' growing focus upon the identification and measurement of operational risk. The list of risks (and, more importantly, the scale of these risks) faced by banks today includes fraud, system failures, terrorism, and employee compensation claims. These types of risk are generally classified under the term 'operational risk'. The identification and measurement of operational risk is a real and live issue for modern-day banks, particularly since the decision by the Basel Committee on Banking Supervision (BCBS) to introduce a capital charge for this risk as part of the new capital adequacy framework (Basel II). Definition. The Basel Committee defines operational risk in Basel II and Basel III as: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk. The Basel Committee recognizes that operational risk is a term that has a variety of meanings and therefore, for internal purposes, banks are permitted to adopt their own definitions of operational risk, provided that the minimum elements in the Committee's definition are included. Scope exclusions. The Basel II definition of operational risk excludes, for example, strategic risk – the risk of a loss arising from a poor strategic business decision. Other risk terms are seen as potential consequences of operational risk events. For example, reputational risk (damage to an organization through loss of its reputation or standing) can arise as a consequence (or impact) of operational failures – as well as from other events. Event types. The following lists the seven official Basel II event types with some examples for each category: Vendor risk. Vendor risk refers to the risk caused by the dependency of one's services or products on a lower-level service or product sourced from a particular vendor. It includes the risks of Difficulties. 
It is relatively straightforward for an organization to set and observe specific, measurable levels of market risk and credit risk because models exist which attempt to predict the potential impact of market movements, or changes in the cost of credit. These models are only as good as the underlying assumptions, and a large part of the recent financial crisis arose because the valuations generated by these models for particular types of investments were based on incorrect assumptions. By contrast, it is relatively difficult to identify or assess levels of operational risk and its many sources. Historically organizations have accepted operational risk as an unavoidable cost of doing business. Many now though collect data on operational losses – for example through system failure or fraud – and are using this data to model operational risk and to calculate a capital reserve against future operational losses. In addition to the Basel II requirement for banks, this is now a requirement for European insurance firms who are in the process of implementing Solvency II, the equivalent of Basel II for the insurance sector. Methods for calculating operational risk capital. Basel II and various supervisory bodies of the countries have prescribed various soundness standards for operational risk management for banks and similar financial institutions. To complement these standards, Basel II has given guidance to 3 broad methods of capital calculation for operational risk: The operational risk management framework should include identification, measurement, monitoring, reporting, control and mitigation frameworks for operational risk. There are a number of methodologies to choose from when modeling operational risk, each with its advantages and target applications. The ultimate choice of the methodology/methodologies to use in your institution depends on a number of factors, including: Standardised Measurement Approach (Basel III). The Basel Committee on Banking Supervision (BCBS) has proposed the "Standardised Measurement Approach" (SMA) as a method of assessing operational risk as a replacement for all existing approaches, including AMA. The objective is to provide stable, comparable and risk-sensitive estimates for the operational risk exposure and is effective January 1, 2022. The SMA puts weight on the internal loss history (losses of the last 10 years must be considered). It is possible to consider net losses (after recoveries and insurance). The marginal coefficient (α) increases with the size of the BI as shown in the table below. The ILM is defined as: formula_0 where the Loss Component (LC) is equal to 15 times average annual operational risk losses incurred over the previous 10 years. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
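As a purely numerical illustration of the ILM formula above, the sketch below evaluates formula_0 in Python for a given loss component and business indicator component. The input figures are hypothetical, the BIC is taken as given rather than derived (the marginal-coefficient table needed to compute a real BIC is not reproduced in the text above), and the final line uses the standard SMA relation that the capital requirement is the BIC scaled by the ILM, which is stated here as an assumption rather than drawn from the text.

import math

def internal_loss_multiplier(loss_component, bic):
    # ILM = ln(e - 1 + (LC / BIC)^0.8); equals 1 when LC = BIC.
    return math.log(math.e - 1 + (loss_component / bic) ** 0.8)

# Hypothetical figures, in millions of the reporting currency.
average_annual_losses = 20.0
lc = 15 * average_annual_losses        # loss component per the SMA definition
bic = 400.0                            # business indicator component (given)
ilm = internal_loss_multiplier(lc, bic)
operational_risk_capital = bic * ilm   # standard SMA relation (assumption)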
[ { "math_id": 0, "text": " ILM = \\ln(\\exp(1)-1 + (LC/BIC)^{0.8})" } ]
https://en.wikipedia.org/wiki?curid=844772
844783
Forgetful functor
Concept in category theory In mathematics, in the area of category theory, a forgetful functor (also known as a stripping functor) 'forgets' or drops some or all of the input's structure or properties 'before' mapping to the output. For an algebraic structure of a given signature, this may be expressed by curtailing the signature: the new signature is an edited form of the old one. If the signature is left as an empty list, the functor is simply to take the underlying set of a structure. Because many structures in mathematics consist of a set with an additional added structure, a forgetful functor that maps to the underlying set is the most common case. Overview. As an example, there are several forgetful functors from the category of commutative rings. A (unital) ring, described in the language of universal algebra, is an ordered tuple formula_0 satisfying certain axioms, where formula_1 and formula_2 are binary functions on the set formula_3, formula_4 is a unary operation corresponding to additive inverse, and 0 and 1 are nullary operations giving the identities of the two binary operations. Deleting the 1 gives a forgetful functor to the category of rings without unit; it simply "forgets" the unit. Deleting formula_2 and 1 yields a functor to the category of abelian groups, which assigns to each ring formula_3 the underlying additive abelian group of formula_3. To each morphism of rings is assigned the same function considered merely as a morphism of addition between the underlying groups. Deleting all the operations gives the functor to the underlying set formula_3. It is beneficial to distinguish between forgetful functors that "forget structure" versus those that "forget properties". For example, in the above example of commutative rings, in addition to those functors that delete some of the operations, there are functors that forget some of the axioms. There is a functor from the category CRing to Ring that forgets the axiom of commutativity, but keeps all the operations. Occasionally the object may include extra sets not defined strictly in terms of the underlying set (in this case, which part to consider the underlying set is a matter of taste, though this is rarely ambiguous in practice). For these objects, there are forgetful functors that forget the extra sets that are more general. Most common objects studied in mathematics are constructed as underlying sets along with extra sets of structure on those sets (operations on the underlying set, privileged subsets of the underlying set, etc.) which may satisfy some axioms. For these objects, a commonly considered forgetful functor is as follows. Let formula_5 be any category based on sets, e.g. groups—sets of elements—or topological spaces—sets of 'points'. As usual, write formula_6 for the objects of formula_5 and write formula_7 for the morphisms of the same. Consider the rule: For all formula_8 in formula_9 the underlying set of formula_10 For all formula_11 in formula_12 the morphism, formula_11, as a map of sets. The functor formula_13 is then the forgetful functor from formula_5 to Set, the category of sets. Forgetful functors are almost always faithful. Concrete categories have forgetful functors to the category of sets—indeed they may be "defined" as those categories that admit a faithful functor to that category. Forgetful functors that only forget axioms are always fully faithful, since every morphism that respects the structure between objects that satisfy the axioms automatically also respects the axioms. 
Forgetful functors that forget structures need not be full; some morphisms do not respect the structure. These functors are still faithful, however, because distinct morphisms that do respect the structure are still distinct when the structure is forgotten. Functors that forget the extra sets need not be faithful, since distinct morphisms respecting the structure of those extra sets may be indistinguishable on the underlying set. In the language of formal logic, a functor of the first kind removes axioms, a functor of the second kind removes predicates, and a functor of the third kind removes types. An example of the first kind is the forgetful functor Ab → Grp. One of the second kind is the forgetful functor Ab → Set. A functor of the third kind is the functor Mod → Ab, where Mod is the fibred category of all modules over arbitrary rings. To see this, just choose a ring homomorphism between the underlying rings that does not change the ring action. Under the forgetful functor, this morphism yields the identity. Note that an object in Mod is a tuple, which includes a ring and an abelian group, so which to forget is a matter of taste. Left adjoints of forgetful functors. Forgetful functors tend to have left adjoints, which are 'free' constructions. For example, the forgetful functor from formula_14 (the category of modules over a ring "R") to formula_15 has a left adjoint formula_16, given by formula_17, sending a set formula_18 to the free "R"-module with basis formula_18. For a more extensive list, see (Mac Lane 1997). As this is a fundamental example of adjoints, we spell it out: adjointness means that given a set "X" and an object (say, an "R"-module) "M", maps "of sets" formula_19 correspond to maps of modules formula_20: every map of sets yields a map of modules, and every map of modules comes from a map of sets. In the case of vector spaces, this is summarized as: "A map between vector spaces is determined by where it sends a basis, and a basis can be mapped to anything." Symbolically: formula_21 The unit of the free–forgetful adjunction is the "inclusion of a basis": formula_22. Fld, the category of fields, furnishes an example of a forgetful functor with no left adjoint. There is no field satisfying a free universal property for a given set. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
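The same correspondence as formula_21 can be checked in a small computation, transposed here from modules to monoids purely because the free monoid on a set (finite words, with concatenation) is easy to write down: a set map X → M extends uniquely to a monoid homomorphism from the words over X to M, and precomposing with the inclusion of generators (the unit of the adjunction) recovers the original map. The following Python sketch is an illustration only; all names in it are assumptions.

from functools import reduce

class Monoid:
    # A monoid given by its binary operation and identity element.
    def __init__(self, op, unit):
        self.op = op
        self.unit = unit

def extend(f, monoid):
    # Unique monoid homomorphism Free(X) -> M induced by the set map f : X -> M.
    return lambda word: reduce(monoid.op, (f(x) for x in word), monoid.unit)

def restrict(phi):
    # Set map X -> M obtained by precomposing phi with the inclusion x |-> (x,).
    return lambda x: phi((x,))

additive_integers = Monoid(lambda a, b: a + b, 0)

# A set map from X = {'a', 'b'} into the integers ...
f = {'a': 3, 'b': 5}.get
phi = extend(f, additive_integers)            # ... extends to words over X.
assert phi(('a', 'b', 'a')) == 11
assert phi(()) == 0                           # the empty word goes to the identity
# Homomorphism property and round trip of the correspondence:
assert phi(('a',) + ('b', 'b')) == additive_integers.op(phi(('a',)), phi(('b', 'b')))
assert restrict(phi)('a') == f('a') and restrict(phi)('b') == f('b')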
[ { "math_id": 0, "text": "(R,+,\\times,a,0,1)" }, { "math_id": 1, "text": "+" }, { "math_id": 2, "text": "\\times" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "\\mathcal{C}" }, { "math_id": 6, "text": "\\operatorname{Ob}(\\mathcal{C})" }, { "math_id": 7, "text": "\\operatorname{Fl}(\\mathcal{C})" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "\\operatorname{Ob}(\\mathcal{C}), A\\mapsto |A|=" }, { "math_id": 10, "text": "A," }, { "math_id": 11, "text": "u" }, { "math_id": 12, "text": "\\operatorname{Fl}(\\mathcal{C}), u\\mapsto |u|=" }, { "math_id": 13, "text": "|\\cdot|" }, { "math_id": 14, "text": "\\mathbf{Mod}(R)" }, { "math_id": 15, "text": "\\mathbf{Set}" }, { "math_id": 16, "text": "\\operatorname{Free}_R" }, { "math_id": 17, "text": "X\\mapsto \\operatorname{Free}_R(X)" }, { "math_id": 18, "text": "X" }, { "math_id": 19, "text": "X \\to |M|" }, { "math_id": 20, "text": "\\operatorname{Free}_R(X) \\to M" }, { "math_id": 21, "text": "\\operatorname{Hom}_{\\mathbf{Mod}_R}(\\operatorname{Free}_R(X),M) = \\operatorname{Hom}_{\\mathbf{Set}}(X,\\operatorname{Forget}(M))." }, { "math_id": 22, "text": "X \\to \\operatorname{Free}_R(X)" } ]
https://en.wikipedia.org/wiki?curid=844783
8449246
Adaptive step size
In mathematics and numerical analysis, an adaptive step size is used in some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) in order to control the errors of the method and to ensure stability properties such as A-stability. Using an adaptive stepsize is of particular importance when there is a large variation in the size of the derivative. For example, when modeling the motion of a satellite about the earth as a standard Kepler orbit, a fixed time-stepping method such as the Euler method may be sufficient. However things are more difficult if one wishes to model the motion of a spacecraft taking into account both the Earth and the Moon as in the Three-body problem. There, scenarios emerge where one can take large time steps when the spacecraft is far from the Earth and Moon, but if the spacecraft gets close to colliding with one of the planetary bodies, then small time steps are needed. Romberg's method and Runge–Kutta–Fehlberg are examples of a numerical integration methods which use an adaptive stepsize. Example. For simplicity, the following example uses the simplest integration method, the Euler method; in practice, higher-order methods such as Runge–Kutta methods are preferred due to their superior convergence and stability properties. Consider the initial value problem formula_0 where "y" and "f" may denote vectors (in which case this equation represents a system of coupled ODEs in several variables). We are given the function "f"("t","y") and the initial conditions ("a", "y""a"), and we are interested in finding the solution at "t" = "b". Let "y"("b") denote the exact solution at "b", and let "yb" denote the solution that we compute. We write formula_1, where formula_2 is the error in the numerical solution. For a sequence ("t""n") of values of "t", with "t""n" = "a" + "nh", the Euler method gives approximations to the corresponding values of "y"("t""n") as formula_3 The local truncation error of this approximation is defined by formula_4 and by Taylor's theorem, it can be shown that (provided "f" is sufficiently smooth) the local truncation error is proportional to the square of the step size: formula_5 where "c" is some constant of proportionality. We have marked this solution and its error with a formula_6. The value of "c" is not known to us. Let us now apply Euler's method again with a different step size to generate a second approximation to "y"("t""n"+1). We get a second solution, which we label with a formula_7. Take the new step size to be one half of the original step size, and apply two steps of Euler's method. This second solution is presumably more accurate. Since we have to apply Euler's method twice, the local error is (in the worst case) twice the original error. formula_8 formula_9 formula_10 formula_11 Here, we assume error factor formula_12 is constant over the interval formula_13. In reality its rate of change is proportional to formula_14. Subtracting solutions gives the error estimate: formula_15 This local error estimate is third order accurate. The local error estimate can be used to decide how stepsize formula_16 should be modified to achieve the desired accuracy. For example, if a local tolerance of formula_17 is allowed, we could let "h" evolve like: formula_18 The formula_19 is a safety factor to ensure success on the next try. The minimum and maximum are to prevent extreme changes from the previous stepsize. 
This choice of step size should, in principle, give an error of about formula_20 on the next try. If formula_21, we consider the step successful, and the error estimate is used to improve the solution: formula_22 This solution is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, this doesn't help in reducing the number of steps. This technique is called Richardson extrapolation. Beginning with an initial stepsize of formula_23, this approach allows the ODE to be integrated from point formula_24 to formula_25 in a controlled way, using a near-optimal number of steps for a given local error tolerance. A drawback is that the step size may become prohibitively small, especially when using the low-order Euler method. Similar techniques can be developed for higher-order methods, such as the 4th-order Runge–Kutta method. Also, a global error tolerance can be achieved by scaling the local error to global scope. Embedded error estimates. Adaptive stepsize methods that use a so-called 'embedded' error estimate include the Bogacki–Shampine, Runge–Kutta–Fehlberg, Cash–Karp and Dormand–Prince methods. These methods are considered to be more computationally efficient, but have lower accuracy in their error estimates. To illustrate the idea of an embedded method, consider the following scheme, which updates formula_26: formula_27 formula_28 The next step formula_29 is predicted from the previous information formula_30. For an embedded RK method, the computation of formula_31 includes a lower-order RK method formula_32. The error can then be written simply as formula_33 formula_34 is the unnormalized error. To normalize it, we compare it against a user-defined tolerance, which consists of the absolute tolerance and the relative tolerance: formula_35 formula_36 Then we compare the normalized error formula_37 against 1 to get the predicted formula_29: formula_38 The parameter "q" is the order corresponding to the RK method formula_32, which has lower order. The above prediction formula is plausible in the sense that it enlarges the step if the estimated local error is smaller than the tolerance and shrinks the step otherwise. The description given above is a simplified version of the procedure used for stepsize control in explicit RK solvers. A more detailed treatment can be found in Hairer's textbook. ODE solvers in many programming languages use this procedure as the default strategy for adaptive stepsize control, adding further engineering parameters to make the system more stable. References. <templatestyles src="Reflist/styles.css" />
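For reference, a rough Python sketch of the embedded-estimate bookkeeping described in this article. The Bogacki–Shampine 3(2) pair is used purely as an example of an embedded method; the names, the default tolerances and the RMS-style norm are assumptions of this sketch, not any particular solver's API.

# A rough sketch of the embedded error estimate and step prediction described above.
import numpy as np

def bs23_step(f, t, y, h):
    """One 3rd-order Bogacki–Shampine step plus its embedded 2nd-order estimate."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9             # higher-order solution
    k4 = f(t + h, y3)
    y2 = y + h * (7 * k1 + 6 * k2 + 8 * k3 + 3 * k4) / 24   # embedded lower-order solution
    return y3, y2 - y3                                      # solution and raw (unnormalized) error

def predict_step(h, err, y_new, y_old, atol=1e-8, rtol=1e-6, q=2):
    """Normalize the error against Atol + Rtol*max(|y_n|, |y_n-1|) and predict the next h."""
    tol = atol + rtol * np.maximum(np.abs(y_new), np.abs(y_old))
    E = np.linalg.norm(err / tol) / np.sqrt(err.size)       # normalized error E_n (RMS-style norm)
    return h * (1.0 / max(E, 1e-12)) ** (1.0 / (q + 1)), E  # h_n = h_{n-1} * (1/E_n)^(1/(q+1))

# Illustrative use on y' = -y:
y_new, raw_err = bs23_step(lambda t, y: -y, 0.0, np.array([1.0]), 0.1)
h_next, E = predict_step(0.1, raw_err, y_new, np.array([1.0]))

A full solver would accept the step when the normalized error is at most 1 and retry with the predicted step size otherwise, typically after multiplying by an additional safety factor.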
[ { "math_id": 0, "text": " y'(t) = f(t,y(t)), \\qquad y(a)=y_a " }, { "math_id": 1, "text": "y_b + \\varepsilon = y(b)" }, { "math_id": 2, "text": "\\varepsilon" }, { "math_id": 3, "text": "y_{n+1}^{(0)}=y_n+hf(t_n,y_n)" }, { "math_id": 4, "text": "\\tau_{n+1}^{(0)}=y(t_{n+1}) - y_{n+1}^{(0)}" }, { "math_id": 5, "text": "\\tau_{n+1}^{(0)}=ch^2" }, { "math_id": 6, "text": "(0)" }, { "math_id": 7, "text": "(1)" }, { "math_id": 8, "text": "y_{n+\\frac{1}{2}}=y_n+\\frac{h}{2}f(t_n,y_n)" }, { "math_id": 9, "text": "y_{n+1}^{(1)}=y_{n+\\frac{1}{2}}+\\frac{h}{2}f(t_{n+\\frac{1}{2}},y_{n+\\frac{1}{2}})" }, { "math_id": 10, "text": "\\tau_{n+1}^{(1)}=c\\left(\\frac{h}{2}\\right)^2+c\\left(\\frac{h}{2}\\right)^2=2c\\left(\\frac{h}{2}\\right)^2=\\frac{1}{2}ch^2=\\frac{1}{2}\\tau_{n+1}^{(0)}" }, { "math_id": 11, "text": "y_{n+1}^{(1)} + \\tau_{n+1}^{(1)}=y(t+h)" }, { "math_id": 12, "text": "c" }, { "math_id": 13, "text": "[t, t+h]" }, { "math_id": 14, "text": "y^{(3)}(t)" }, { "math_id": 15, "text": " y_{n+1}^{(1)}-y_{n+1}^{(0)} = \\tau_{n+1}^{(1)} " }, { "math_id": 16, "text": "h" }, { "math_id": 17, "text": "\\text{tol}" }, { "math_id": 18, "text": " h \\rightarrow 0.9 \\times h \\times \\min \\left(\\max \\left( \\left(\\frac{\\text{tol}}{2\\left|\\tau_{n+1}^{(1)}\\right|}\\right)^{1/2}, 0.3\\right) ,2\\right) " }, { "math_id": 19, "text": "0.9" }, { "math_id": 20, "text": "0.9 \\times \\text{tol}" }, { "math_id": 21, "text": "|\\tau_{n+1}^{(1)}| < \\text{tol}" }, { "math_id": 22, "text": " y_{n+1}^{(2)} = y_{n+1}^{(1)} + \\tau_{n+1}^{(1)} " }, { "math_id": 23, "text": "h=b-a" }, { "math_id": 24, "text": "a" }, { "math_id": 25, "text": "b" }, { "math_id": 26, "text": "y_n" }, { "math_id": 27, "text": "y_{n+1}=y_n + h_n \\psi(t_n,y_n,h_n)" }, { "math_id": 28, "text": "t_{n+1}=t_n + h_n" }, { "math_id": 29, "text": "h_n" }, { "math_id": 30, "text": "h_n=g(t_n,y_n, h_{n-1})" }, { "math_id": 31, "text": "\\psi" }, { "math_id": 32, "text": "\\tilde{\\psi}" }, { "math_id": 33, "text": " \\textrm{err}_n(h) = \\tilde{y}_{n+1} - y_{n+1} = h(\\tilde{\\psi}(t_n, y_n, h_n) - \\psi(t_n, y_n, h_n))" }, { "math_id": 34, "text": " \\textrm{err}_n" }, { "math_id": 35, "text": " \\textrm{tol}_n = \\textrm{Atol} + \\textrm{Rtol} \\cdot \\max(|y_n|, |y_{n-1}|)" }, { "math_id": 36, "text": " E_n = \\textrm{norm}(\\textrm{err}_n / \\textrm{tol}_n)" }, { "math_id": 37, "text": "E_n" }, { "math_id": 38, "text": " h_n = h_{n-1} (1/E_n)^{1/(q+1)}" } ]
https://en.wikipedia.org/wiki?curid=8449246
8449731
Mortgage
Loan secured using real estate A mortgage loan or simply mortgage (), in civil law jurisdictions known also as a hypothec loan, is a loan used either by purchasers of real property to raise funds to buy real estate, or by existing property owners to raise funds for any purpose while putting a lien on the property being mortgaged. The loan is "secured" on the borrower's property through a process known as mortgage origination. This means that a legal mechanism is put into place which allows the lender to take possession and sell the secured property ("foreclosure" or "repossession") to pay off the loan in the event the borrower defaults on the loan or otherwise fails to abide by its terms. The word "mortgage" is derived from a Law French term used in Britain in the Middle Ages meaning "death pledge" and refers to the pledge ending (dying) when either the obligation is fulfilled or the property is taken through foreclosure. A mortgage can also be described as "a borrower giving consideration in the form of a collateral for a benefit (loan)". Mortgage borrowers can be individuals mortgaging their home or they can be businesses mortgaging commercial property (for example, their own business premises, residential property let to tenants, or an investment portfolio). The lender will typically be a financial institution, such as a bank, credit union or building society, depending on the country concerned, and the loan arrangements can be made either directly or indirectly through intermediaries. Features of mortgage loans such as the size of the loan, maturity of the loan, interest rate, method of paying off the loan, and other characteristics can vary considerably. The lender's rights over the secured property take priority over the borrower's other creditors, which means that if the borrower becomes bankrupt or insolvent, the other creditors will only be repaid the debts owed to them from a sale of the secured property if the mortgage lender is repaid in full first. In many jurisdictions, it is normal for home purchases to be funded by a mortgage loan. Few individuals have enough savings or liquid funds to enable them to purchase property outright. In countries where the demand for home ownership is highest, strong domestic markets for mortgages have developed. Mortgages can either be funded through the banking sector (that is, through short-term deposits) or through the capital markets through a process called "securitization", which converts pools of mortgages into fungible bonds that can be sold to investors in small denominations. Mortgage loan basics. Basic concepts and legal regulation. According to Anglo-American property law, a mortgage occurs when an owner (usually of a fee simple / freehold interest in real property, but frequently leasehold in England and Wales) pledges his or her interest (right to the property) as security or collateral for a loan. Therefore, a mortgage is an encumbrance (limitation) on the right to the property just as an easement would be, but because most mortgages occur as a condition for new loan money, the word "mortgage" has become the generic term for a loan secured by such real property. As with other types of loans, mortgages have an interest rate and are scheduled to amortize over a set period of time, typically 30 years in the United States. All types of real property can be, and usually are, secured with a mortgage and bear an interest rate that is supposed to reflect the lender's risk. 
Mortgage lending is the primary mechanism used in many countries to finance private ownership of residential and commercial property (see commercial mortgages). Although the terminology and precise forms will differ from country to country, the basic components tend to be similar: Many other specific characteristics are common to many markets, but the above are the essential features. Governments usually regulate many aspects of mortgage lending, either directly (through legal requirements, for example) or indirectly (through regulation of the participants or the financial markets, such as the banking industry), and often through state intervention (direct lending by the government, direct lending by state-owned banks, or sponsorship of various entities). Other aspects that define a specific mortgage market may be regional, historical, or driven by specific characteristics of the legal or financial system. Mortgage loans are generally structured as long-term loans, the periodic payments for which are similar to an annuity and calculated according to the time value of money formulae. The most basic arrangement would require a fixed monthly payment over a period of ten to thirty years, depending on local conditions. Over this period the principal component of the loan (the original loan) would be slowly paid down through amortization. In practice, many variants are possible and common worldwide and within each country. Lenders provide funds against property to earn interest income, and generally borrow these funds themselves (for example, by taking deposits or issuing bonds). The price at which the lenders borrow money, therefore, affects the cost of borrowing. Lenders may also, in many countries, sell the mortgage loan to other parties who are interested in receiving the stream of cash payments from the borrower, often in the form of a security (by means of a securitization). Mortgage lending will also take into account the (perceived) riskiness of the mortgage loan, that is, the likelihood that the funds will be repaid (usually considered a function of the creditworthiness of the borrower); that if they are not repaid, the lender will be able to foreclose on the real estate assets; and the financial, interest rate risk and time delays that may be involved in certain circumstances. Mortgage underwriting. During the mortgage loan approval process, a mortgage loan underwriter verifies the financial information that the applicant has provided as to income, employment, credit history and the value of the home being purchased via an appraisal. An appraisal may be ordered. The underwriting process may take a few days to a few weeks. Sometimes the underwriting process takes so long that the provided financial statements need to be resubmitted so they are current. It is advisable to maintain the same employment and not to use or open new credit during the underwriting process. Any changes made in the applicant's credit, employment, or financial information could result in the loan being denied. Mortgage loan types. There are many types of mortgages used worldwide, but several factors broadly define the characteristics of the mortgage. All of these may be subject to local regulation and legal requirements. The two basic types of amortized loans are the fixed rate mortgage (FRM) and adjustable-rate mortgage (ARM) (also known as a floating rate or variable rate mortgage). In some countries, such as the United States, fixed rate mortgages are the norm, but floating rate mortgages are relatively common. 
Combinations of fixed and floating rate mortgages are also common, whereby a mortgage loan will have a fixed rate for some period, for example the first five years, and vary after the end of that period. The charge to the borrower depends upon the credit risk in addition to the interest rate risk. The mortgage origination and underwriting process involves checking credit scores, debt-to-income, downpayments (deposits), assets, and assessing property value. Jumbo mortgages and subprime lending are not supported by government guarantees and face higher interest rates. Other innovations described below can affect the rates as well. Loan to value and down payments. Upon making a mortgage loan for the purchase of a property, lenders usually require that the borrower make a down payment (called a deposit in English law); that is, contribute a portion of the cost of the property. This down payment may be expressed as a portion of the value of the property (see below for a definition of this term). The loan to value ratio (or LTV) is the size of the loan against the value of the property. Therefore, a mortgage loan in which the purchaser has made a down payment of 20% has a loan to value ratio of 80%. For loans made against properties that the borrower already owns, the loan to value ratio will be imputed against the estimated value of the property. The loan to value ratio is considered an important indicator of the riskiness of a mortgage loan: the higher the LTV, the higher the risk that the value of the property (in case of foreclosure) will be insufficient to cover the remaining principal of the loan. Value: appraised, estimated, and actual. Since the value of the property is an important factor in understanding the risk of the loan, determining the value is a key factor in mortgage lending. The value may be determined in various ways, but the most common are: Payment and debt ratios. In most countries, a number of more or less standard measures of creditworthiness may be used. Common measures include payment to income (mortgage payments as a percentage of gross or net income); debt to income (all debt payments, including mortgage payments, as a percentage of income); and various net worth measures. In many countries, credit scores are used in lieu of or to supplement these measures. There will also be requirements for documentation of the creditworthiness, such as income tax returns, pay stubs, etc. the specifics will vary from location to location. Income tax incentives usually can be applied in forms of tax refunds or tax deduction schemes. The first implies that income tax paid by individual taxpayers will be refunded to the extent of interest on mortgage loans taken to acquire residential property. Income tax deduction implies lowering tax liability to the extent of interest rate paid for the mortgage loan. Some lenders may also require a potential borrower have one or more months of "reserve assets" available. In other words, the borrower may be required to show the availability of enough assets to pay for the housing costs (including mortgage, taxes, etc.) for a period of time in the event of the job loss or other loss of income. Many countries have lower requirements for certain borrowers, or "no-doc" / "low-doc" lending standards that may be acceptable under certain circumstances. Standard or conforming mortgages. 
Many countries have a notion of standard or conforming mortgages that define a perceived acceptable level of risk, which may be formal or informal, and may be reinforced by laws, government intervention, or market practice. For example, a standard mortgage may be considered to be one with no more than 70–80% LTV and no more than one-third of gross income going to mortgage debt. A standard or conforming mortgage is a key concept as it often defines whether or not the mortgage can be easily sold or securitized, or, if non-standard, may affect the price at which it may be sold. In the United States, a conforming mortgage is one which meets the established rules and procedures of the two major government-sponsored entities in the housing finance market (including some legal requirements). In contrast, lenders who decide to make nonconforming loans are exercising a higher risk tolerance and do so knowing that they face more challenge in reselling the loan. Many countries have similar concepts or agencies that define what are "standard" mortgages. Regulated lenders (such as banks) may be subject to limits or higher-risk weightings for non-standard mortgages. For example, banks and mortgage brokerages in Canada face restrictions on lending more than 80% of the property value; beyond this level, mortgage insurance is generally required. Foreign currency mortgage. In some countries with currencies that tend to depreciate, foreign currency mortgages are common, enabling lenders to lend in a stable foreign currency, whilst the borrower takes on the currency risk that the currency will depreciate and they will therefore need to convert higher amounts of the domestic currency to repay the loan. Repaying the mortgage. In addition to the two standard means of setting the "cost" of a mortgage loan (fixed at a set interest rate for the term, or variable relative to market interest rates), there are variations in "how" that cost is paid, and how the loan itself is repaid. Repayment depends on locality, tax laws and prevailing culture. There are also various mortgage repayment structures to suit different types of borrower. Principal and interest. The most common way to repay a secured mortgage loan is to make regular payments toward the principal and interest over a set term, commonly referred to as (self) amortization in the U.S. and as a repayment mortgage in the UK. A mortgage is a form of annuity (from the perspective of the lender), and the calculation of the periodic payments is based on the time value of money formulas. Certain details may be specific to different locations: interest may be calculated on the basis of a 360-day year, for example; interest may be compounded daily, yearly, or semi-annually; prepayment penalties may apply; and other factors. There may be legal restrictions on certain matters, and consumer protection laws may specify or prohibit certain practices. Depending on the size of the loan and the prevailing practice in the country the term may be short (10 years) or long (50 years plus). In the UK and U.S., 25 to 30 years is the usual maximum term (although shorter periods, such as 15-year mortgage loans, are common). Mortgage payments, which are typically made monthly, contain a repayment of the principal and an interest element. The amount going toward the principal in each payment varies throughout the term of the mortgage. In the early years the repayments are mostly interest. Towards the end of the mortgage, payments are mostly for principal. 
In this way, the payment amount determined at outset is calculated to ensure the loan is repaid at a specified date in the future. This gives borrowers assurance that by maintaining repayment the loan will be cleared at a specified date if the interest rate does not change. Some lenders and third parties offer a bi-weekly mortgage payment program designed to accelerate the payoff of the loan. Similarly, a mortgage can be ended before its scheduled end by paying some or all of the remainder prematurely, called curtailment. An amortization schedule is typically worked out by taking the principal left at the end of each month, multiplying it by the monthly rate and then subtracting the monthly payment. This is typically generated by an amortization calculator using the following formula: formula_0 where formula_1 is the periodic payment amount, formula_2 is the principal (the amount borrowed), formula_3 is the periodic interest rate and formula_4 is the total number of payments (a small numerical sketch of this calculation appears below). Interest only. The main alternative to a principal and interest mortgage is an interest-only mortgage, where the principal is not repaid throughout the term. This type of mortgage is common in the UK, especially when associated with a regular investment plan. With this arrangement regular contributions are made to a separate investment plan designed to build up a lump sum to repay the mortgage at maturity. This type of arrangement is called an "investment-backed mortgage" or is often related to the type of plan used: endowment mortgage if an endowment policy is used, similarly a personal equity plan (PEP) mortgage, Individual Savings Account (ISA) mortgage or pension mortgage. Historically, investment-backed mortgages offered various tax advantages over repayment mortgages, although this is no longer the case in the UK. Investment-backed mortgages are seen as higher risk as they are dependent on the investment making sufficient return to clear the debt. Until recently it was not uncommon for interest only mortgages to be arranged without a repayment vehicle, with the borrower gambling that the property market will rise sufficiently for the loan to be repaid by trading down at retirement (or when rent on the property and inflation combine to surpass the interest rate). Interest-only lifetime mortgage. Recent Financial Services Authority guidelines to UK lenders regarding interest-only mortgages have tightened the criteria on new lending on an interest-only basis. The problem for many people has been the fact that no repayment vehicle had been implemented, or the vehicle itself (e.g. endowment/ISA policy) performed poorly and therefore insufficient funds were available to repay the balance at the end of the term. Moving forward, the FSA under the Mortgage Market Review (MMR) have stated there must be strict criteria on the repayment vehicle being used. As such the likes of Nationwide and other lenders have pulled out of the interest-only market. A resurgence in the equity release market has been the introduction of interest-only lifetime mortgages. Whereas an interest-only mortgage has a fixed term, an interest-only lifetime mortgage will continue for the rest of the mortgagor's life. These schemes have proved of interest to people who do not like the roll-up effect (compounding) of interest on traditional equity release schemes. They have also proved beneficial to people who had an interest-only mortgage with no repayment vehicle and now need to settle the loan. These people can now effectively remortgage onto an interest-only lifetime mortgage to maintain continuity. Interest-only lifetime mortgage schemes are currently offered by two lenders – Stonehaven and more2life. 
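As promised above, here is a small numerical sketch of the amortization formula formula_0 and of the schedule construction described in the text. The loan figures (200,000 borrowed at a 6% nominal annual rate over 30 years) are invented purely for illustration.

# A small numerical sketch of the amortization formula and schedule described above.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12                        # periodic (monthly) interest rate
    n = years * 12                              # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def amortization_schedule(principal, annual_rate, years):
    """Yield (month, interest_paid, principal_repaid, remaining_balance) rows."""
    r = annual_rate / 12
    payment = monthly_payment(principal, annual_rate, years)
    balance = principal
    for month in range(1, years * 12 + 1):
        interest = balance * r                  # principal left times the monthly rate ...
        balance = balance + interest - payment  # ... then subtract the monthly payment
        yield month, interest, payment - interest, balance

payment = monthly_payment(200_000, 0.06, 30)
print(f"monthly payment = {payment:.2f}")       # about 1199.10
for row in list(amortization_schedule(200_000, 0.06, 30))[:3]:
    print(row)                                  # early payments are mostly interest

With these figures the payment comes out to roughly 1,199 per month, the first rows of the schedule show the early payments going mostly to interest as the text describes, and the final balance is (up to rounding) zero, which is exactly what the closed-form payment formula guarantees.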
These interest-only lifetime schemes work by giving the borrower the option of paying the interest on a monthly basis; paying the interest each month keeps the balance level for the rest of the borrower's life. This market is set to increase as more retirees require finance in retirement. Reverse mortgages. For older borrowers (typically in retirement), it may be possible to arrange a mortgage where neither the principal nor interest is repaid. The interest is rolled up with the principal, increasing the debt each year. These arrangements are variously called reverse mortgages, lifetime mortgages or "equity release mortgages" (referring to home equity), depending on the country. The loans are typically not repaid until the borrowers are deceased, hence the age restriction. Through the Federal Housing Administration, the U.S. government insures reverse mortgages via a program called the HECM (Home Equity Conversion Mortgage). Unlike standard mortgages (where the entire loan amount is typically disbursed at the time of loan closing), the HECM program allows the homeowner to receive funds in a variety of ways: as a one-time lump-sum payment; as a monthly tenure payment which continues until the borrower dies or moves out of the house permanently; as a monthly payment over a defined period of time; or as a credit line. Interest and partial principal. In the U.S., a partial amortization or balloon loan is one where the amount of the monthly payments due is calculated (amortized) over a certain term, but the outstanding balance on the principal is due at some point short of that term. In the UK, a partial repayment mortgage is quite common, especially where the original mortgage was investment-backed. Variations. Graduated payment mortgage loans have increasing costs over time and are geared to young borrowers who expect wage increases over time. Balloon payment mortgages have only partial amortization, meaning that the amount of the monthly payments due is calculated (amortized) over a certain term, but the outstanding principal balance is due at some point short of that term, and at the end of the term a balloon payment is due. When interest rates are high relative to the rate on an existing seller's loan, the buyer can consider assuming the seller's mortgage. A wraparound mortgage is a form of seller financing that can make it easier for a seller to sell a property. A biweekly mortgage has payments made every two weeks instead of monthly. Budget loans include taxes and insurance in the mortgage payment; package loans add the costs of furnishings and other personal property to the mortgage. Buydown mortgages allow the seller or lender to pay something similar to points to reduce the interest rate and encourage buyers. Homeowners can also take out equity loans in which they receive cash for a mortgage debt on their house. Shared appreciation mortgages are a form of equity release. In the US, foreign nationals, due to their unique situation, face Foreign National mortgage conditions. Flexible mortgages allow for more freedom by the borrower to skip payments or prepay. Offset mortgages allow deposits to be counted against the mortgage loan. In the UK there is also the endowment mortgage where the borrowers pay interest while the principal is paid with a life insurance policy. Commercial mortgages typically have different interest rates, risks, and contracts than personal loans. Participation mortgages allow multiple investors to share in a loan. Builders may take out blanket loans which cover several properties at once. 
Bridge loans may be used as temporary financing pending a longer-term loan. Hard money loans provide financing in exchange for the mortgaging of real estate collateral. Foreclosure and non-recourse lending. In most jurisdictions, a lender may foreclose the mortgaged property if certain conditions occur – principally, non-payment of the mortgage loan. Subject to local legal requirements, the property may then be sold. Any amounts received from the sale (net of costs) are applied to the original debt. In some jurisdictions, mortgage loans are non-recourse loans: if the funds recouped from sale of the mortgaged property are insufficient to cover the outstanding debt, the lender may not have recourse to the borrower after foreclosure. In other jurisdictions, the borrower remains responsible for any remaining debt. In virtually all jurisdictions, specific procedures for foreclosure and sale of the mortgaged property apply, and may be tightly regulated by the relevant government. There are strict or judicial foreclosures and non-judicial foreclosures, also known as power of sale foreclosures. In some jurisdictions, foreclosure and sale can occur quite rapidly, while in others, foreclosure may take many months or even years. In many countries, the ability of lenders to foreclose is extremely limited, and mortgage market development has been notably slower. National differences. A study issued by the UN Economic Commission for Europe compared German, US, and Danish mortgage systems. The German "Bausparkassen" (savings and loans associations) reported nominal interest rates of approximately 6 per cent per annum in the last 40 years (as of 2004). "Bausparkassen" are not identical with banks that give mortgages. In addition, they charge administration and service fees (about 1.5 per cent of the loan amount). However, in the United States, the average interest rates for fixed-rate mortgages in the housing market started in the tens and twenties in the 1980s and have (as of 2004) reached about 6 per cent per annum. However, gross borrowing costs are substantially higher than the nominal interest rate and amounted for the last 30 years to 10.46 per cent. In Denmark, similar to the United States mortgage market, interest rates have fallen to 6 per cent per annum. A risk and administration fee amounts to 0.5 per cent of the outstanding debt. In addition, an acquisition fee is charged which amounts to one per cent of the principal. United States. The mortgage industry of the United States is a major financial sector. The federal government created several programs, or government sponsored entities, to foster mortgage lending, construction and encourage home ownership. These programs include the Government National Mortgage Association (known as Ginnie Mae), the Federal National Mortgage Association (known as Fannie Mae) and the Federal Home Loan Mortgage Corporation (known as Freddie Mac). The US mortgage sector has been the center of major financial crises over the last century. Unsound lending practices resulted in the National Mortgage Crisis of the 1930s, the savings and loan crisis of the 1980s and 1990s and the subprime mortgage crisis of 2007 which led to the 2010 foreclosure crisis. 
In the United States, the mortgage loan involves two separate documents: the mortgage note (a promissory note) and the security interest evidenced by the "mortgage" document; generally, the two are assigned together, but if they are split traditionally the holder of the note and not the mortgage has the right to foreclose. For example, Fannie Mae promulgates a standard form contract Multistate Fixed-Rate Note 3200 and also separate security instrument mortgage forms which vary by state. Canada. In Canada, the Canada Mortgage and Housing Corporation (CMHC) is the country's national housing agency, providing mortgage loan insurance, mortgage-backed securities, housing policy and programs, and housing research to Canadians. It was created by the federal government in 1946 to address the country's post-war housing shortage, and to help Canadians achieve their homeownership goals. The most common mortgage in Canada is the five-year fixed-rate closed mortgage, as opposed to the U.S. where the most common type is the 30-year fixed-rate open mortgage. Throughout the financial crisis and the ensuing recession, Canada's mortgage market continued to function well, partly due to the residential mortgage market's policy framework, which includes an effective regulatory and supervisory regime that applies to most lenders. Since the crisis, however, the low interest rate environment that has arisen has contributed to a significant increase in mortgage debt in the country. In April 2014, the Office of the Superintendent of Financial Institutions (OSFI) released guidelines for mortgage insurance providers aimed at tightening standards around underwriting and risk management. In a statement, the OSFI has stated that the guideline will "provide clarity about best practices in respect of residential mortgage insurance underwriting, which contribute to a stable financial system." This comes after several years of federal government scrutiny over the CMHC, with former Finance Minister Jim Flaherty musing publicly as far back as 2012 about privatizing the Crown corporation. In an attempt to cool down the real estate prices in Canada, Ottawa introduced a mortgage stress test effective 17 October 2016. Under the stress test, every home buyer who wants to get a mortgage from any federally regulated lender should undergo a test in which the borrower's affordability is judged based on a rate that is not lower than a stress rate set by the Bank of Canada. For high-ratio mortgage (loan to value of more than 80%), which is insured by Canada Mortgage and Housing Corporation, the rate is the maximum of the stress test rate and the current target rate. However, for uninsured mortgage, the rate is the maximum of the stress test rate and the target interest rate plus 2%. This stress test has lowered the maximum mortgage approved amount for all borrowers in Canada. The stress-test rate consistently increased until its peak of 5.34% in May 2018 and it was not changed until July 2019 in which for the first time in three years it decreased to 5.19%. This decision may reflect the push-back from the real-estate industry as well as the introduction of the first-time home buyer incentive program (FTHBI) by the Canadian government in the 2019 Canadian federal budget. Because of all the criticisms from real estate industry, Canada finance minister Bill Morneau ordered to review and consider changes to the mortgage stress test in December 2019. United Kingdom. 
The mortgage industry of the United Kingdom has traditionally been dominated by building societies, but from the 1970s the share of the new mortgage loans market held by building societies has declined substantially. Between 1977 and 1987, the share fell from 96% to 66% while that of banks and other institutions rose from 3% to 36%. (The figures have since changed further in favour of banks in part due to demutualisation.) There are currently over 200 significant separate financial organizations supplying mortgage loans to house buyers in Britain. The major lenders include building societies, banks, specialized mortgage corporations, insurance companies, and pension funds. In the UK, variable-rate mortgages are more common than in the United States. This is in part because mortgage loan financing relies less on fixed income securitized assets (such as mortgage-backed securities) than in the United States, Denmark, and Germany, and more on retail savings deposits, as in Australia and Spain. Thus, lenders prefer variable-rate mortgages to fixed rate ones and whole-of-term fixed rate mortgages are generally not available. Nevertheless, in recent years fixing the rate of the mortgage for short periods has become popular and the initial two, three, five and, occasionally, ten years of a mortgage can be fixed. From 2007 to the beginning of 2013 between 50% and 83% of new mortgages had initial periods fixed in this way. Home ownership rates are comparable to the United States, but overall default rates are lower. Prepayment penalties during a fixed rate period are common, whilst the United States has discouraged their use. Like other European countries and the rest of the world, but unlike most of the United States, mortgage loans are usually not nonrecourse debt, meaning debtors are liable for any loan deficiencies after foreclosure. The customer-facing aspects of the residential mortgage sector are regulated by the Financial Conduct Authority (FCA), and lenders' financial probity is overseen by a separate regulator, the Prudential Regulation Authority (PRA), which is part of the Bank of England. The FCA and PRA were established in 2013 with the aim of responding to criticism of regulatory failings highlighted by the financial crisis of 2007–2008 and its aftermath. Continental Europe. Western European countries present a diversified landscape, with some countries (France, Belgium, Germany, the Netherlands, Denmark) where fixed-rate mortgages are the norm and some countries (Austria, Greece, Italy, Portugal, Spain, Sweden) favouring adjustable-rate mortgages. Much of Europe has home ownership rates comparable to the United States, but overall default rates are lower in Europe than in the United States. Mortgage loan financing relies less on securitizing mortgages and more on formal government guarantees backed by covered bonds (such as the Pfandbriefe) and deposits, except in Denmark and Germany, where asset-backed securities are also common. Prepayment penalties are still common, whilst the United States has discouraged their use. Unlike much of the United States, mortgage loans are usually not nonrecourse debt. Within the European Union, covered bond market volume (covered bonds outstanding) amounted to about €2 trillion at year-end 2007, with Germany, Denmark, Spain, and France each having outstandings above €200 billion. Pfandbrief-like securities have been introduced in more than 25 European countries—and in recent years also in the U.S. and other countries outside Europe—each with their own unique law and regulations. 
Recent trends. On July 28, 2008, US Treasury Secretary Henry Paulson announced that, along with four large U.S. banks, the Treasury would attempt to kick start a market for these securities in the United States, primarily to provide an alternative form of mortgage-backed securities. Similarly, in the UK "the Government is inviting views on options for a UK framework to deliver more affordable long-term fixed-rate mortgages, including the lessons to be learned from international markets and institutions". George Soros's October 10, 2008 "The Wall Street Journal" editorial promoted the Danish mortgage market model. Malaysia. Mortgages in Malaysia can be categorised into two different groups: conventional home loan and Islamic home loan. Under the conventional home loan, banks normally charge a fixed interest rate, a variable interest rate, or both. These interest rates are tied to a base rate (individual bank's benchmark rate). For Islamic home financing, it follows the Sharia Law and comes in 2 common types: Bai’ Bithaman Ajil (BBA) or Musharakah Mutanaqisah (MM). Bai' Bithaman Ajil is when the bank buys the property at current market price and sells it back to you at a much higher price. Musharakah Mutanaqisah is when the bank buys the property together with you. You will then slowly buy the bank's portion of the property through rental (whereby a portion of the rental goes to paying for the purchase of a part of the bank's share in the property until the property comes to your complete ownership). Islamic countries. Islamic Sharia law prohibits the payment or receipt of interest, meaning that Muslims cannot use conventional mortgages. The Islamic mortgage loan cancels any form of interest because of doctrines, so in the mortgage loan process, the lender and the borrower are more like a capital-shared partnership than a debt relationship. However, real estate is far too expensive for most people to buy outright using cash: Islamic mortgages solve this problem by having the property change hands twice. In one variation, the bank will buy the house outright and then act as a landlord. The homebuyer, in addition to paying rent, will pay a contribution towards the purchase of the property. When the last payment is made, the property changes hands. Typically, this may lead to a higher final price for the buyers. This is because in some countries (such as the United Kingdom and India) there is a stamp duty which is a tax charged by the government on a change of ownership. Because ownership changes twice in an Islamic mortgage, a stamp tax may be charged twice. Many other jurisdictions have similar transaction taxes on change of ownership which may be levied. In the United Kingdom, the dual application of stamp duty in such transactions was removed in the Finance Act 2003 in order to facilitate Islamic mortgages. An alternative scheme involves the bank reselling the property according to an installment plan, at a price higher than the original price. Both of these methods compensate the lender as if they were charging interest, but the loans are structured in a way that in name they are not, and the lender shares the financial risks involved in the transaction with the homebuyer. Mortgage insurance. Mortgage insurance is an insurance policy designed to protect the mortgagee (lender) from any default by the mortgagor (borrower). It is used commonly in loans with a loan-to-value ratio over 80%, and employed in the event of foreclosure and repossession. 
This policy is typically paid for by the borrower as a component of the final nominal (note) rate, or in one lump sum up front, or as a separate and itemized component of the monthly mortgage payment. In the last case, mortgage insurance can be dropped when the lender informs the borrower, or its subsequent assigns, that the property has appreciated, the loan has been paid down, or any combination of both has brought the loan-to-value under 80%. In the event of repossession, banks, investors, etc. must resort to selling the property to recoup their original investment (the money lent) and are able to dispose of hard assets (such as real estate) more quickly by reducing the price. Therefore, the mortgage insurance acts as a hedge should the repossessing authority recover less than full and fair market value for any hard asset. References. <templatestyles src="Reflist/styles.css" />
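As a toy illustration of the loan-to-value and mortgage-insurance discussion above, the following Python sketch encodes the commonly cited 80% threshold; the property and loan figures are invented, and actual cancellation rules vary by lender and jurisdiction.

# A toy sketch of the loan-to-value (LTV) arithmetic and insurance threshold above.
def loan_to_value(loan_balance, property_value):
    return loan_balance / property_value

def insurance_required(loan_balance, property_value, threshold=0.80):
    """Mortgage insurance is commonly required while the LTV is above ~80%."""
    return loan_to_value(loan_balance, property_value) > threshold

price, down_payment = 250_000, 50_000          # a 20% down payment ...
loan = price - down_payment
print(loan_to_value(loan, price))              # ... gives an LTV of 0.80
print(insurance_required(loan, price))         # False at exactly 80%
print(insurance_required(190_000, 220_000))    # True: LTV still above the threshold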
[ { "math_id": 0, "text": "A =P\\cdot\\frac{r(1 + r)^n}{(1 + r)^n - 1}" }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " P " }, { "math_id": 3, "text": " r " }, { "math_id": 4, "text": " n " } ]
https://en.wikipedia.org/wiki?curid=8449731
8450479
Bias of an estimator
Statistical property In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased (see bias versus consistency for more). All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because a biased estimator may be unbiased with respect to different measures of central tendency; because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful. Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes "median"-unbiased from the usual "mean"-unbiasedness property. Mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see ); for example, the sample variance is a biased estimator for the population variance. These are all illustrated below. An unbiased estimator for a parameter need not always exist. For example, there is no unbiased estimator for the reciprocal of the parameter of a binomial random variable. Definition. Suppose we have a statistical model, parameterized by a real number "θ", giving rise to a probability distribution for observed data, formula_0, and a statistic formula_1 which serves as an estimator of "θ" based on any observed data formula_2. That is, we assume that our data follows some unknown distribution formula_3 (where "θ" is a fixed, unknown constant that is part of this distribution), and then we construct some estimator formula_1 that maps observed data to values that we hope are close to "θ". The bias of formula_1 relative to formula_4 is defined as formula_5 where formula_6 denotes expected value over the distribution formula_3 (i.e., averaging over all possible observations formula_2). The second equation follows since "θ" is measurable with respect to the conditional distribution formula_3. An estimator is said to be unbiased if its bias is equal to zero for all values of parameter "θ", or equivalently, if the expected value of the estimator matches that of the parameter. Unbiasedness is not guaranteed to carry over. For example, if formula_1 is an unbiased estimator for parameter "θ", it is not guaranteed that g(formula_1) is an unbiased estimator for g("θ)." In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the mean signed difference. Examples. Sample variance. 
The sample variance of a random variable demonstrates two aspects of estimator bias: firstly, the naive estimator is biased, which can be corrected by a scale factor; second, the unbiased estimator is not optimal in terms of mean squared error (MSE), which can be minimized by using a different scale factor, resulting in a biased estimator with lower MSE than the unbiased estimator. Concretely, the naive estimator sums the squared deviations and divides by "n," which is biased. Dividing instead by "n" − 1 yields an unbiased estimator. Conversely, MSE can be minimized by dividing by a different number (depending on distribution), but this results in a biased estimator. This number is always larger than "n" − 1, so this is known as a shrinkage estimator, as it "shrinks" the unbiased estimator towards zero; for the normal distribution the optimal value is "n" + 1. Suppose "X"1, ..., "X""n" are independent and identically distributed (i.i.d.) random variables with expectation "μ" and variance "σ"2. If the sample mean and uncorrected sample variance are defined as formula_7 then "S"2 is a biased estimator of "σ"2, because formula_8 To continue, we note that by subtracting formula_9 from both sides of formula_10, we get formula_11 Meaning, (by cross-multiplication) formula_12. Then, the previous becomes: formula_13 This can be seen by noting the following formula, which follows from the Bienaymé formula, for the term in the inequality for the expectation of the uncorrected sample variance above: formula_14. In other words, the expected value of the uncorrected sample variance does not equal the population variance "σ"2, unless multiplied by a normalization factor. The sample mean, on the other hand, is an unbiased estimator of the population mean "μ". Note that the usual definition of sample variance is formula_15, and this is an unbiased estimator of the population variance. Algebraically speaking, formula_16 is unbiased because: formula_17 where the transition to the second line uses the result derived above for the biased estimator. Thus formula_18, and therefore formula_19 is an unbiased estimator of the population variance, "σ"2. The ratio between the biased (uncorrected) and unbiased estimates of the variance is known as Bessel's correction. The reason that an uncorrected sample variance, "S"2, is biased stems from the fact that the sample mean is an ordinary least squares (OLS) estimator for "μ": formula_20 is the number that makes the sum formula_21 as small as possible. That is, when any other number is plugged into this sum, the sum can only increase. In particular, the choice formula_22 gives, formula_23 and then formula_24 The above discussion can be understood in geometric terms: the vector formula_25 can be decomposed into the "mean part" and "variance part" by projecting to the direction of formula_26 and to that direction's orthogonal complement hyperplane. One gets formula_27 for the part along formula_28 and formula_29 for the complementary part. Since this is an orthogonal decomposition, Pythagorean theorem says formula_30, and taking expectations we get formula_31, as above (but times formula_32). If the distribution of formula_33 is rotationally symmetric, as in the case when formula_34 are sampled from a Gaussian, then on average, the dimension along formula_28 contributes to formula_35 equally as the formula_36 directions perpendicular to formula_28, so that formula_37 and formula_38. This is in fact true in general, as explained above. Estimating a Poisson probability. 
A far more extreme case of a biased estimator being better than any unbiased estimator arises from the Poisson distribution. Suppose that "X" has a Poisson distribution with expectation "λ". Suppose it is desired to estimate formula_39 with a sample of size 1. (For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and "λ" is the average number of calls per minute, then "e"−2"λ" is the probability that no calls arrive in the next two minutes.) Since the expectation of an unbiased estimator "δ"("X") is equal to the estimand, i.e. formula_40 the only function of the data constituting an unbiased estimator is formula_41 To see this, note that when decomposing e−"λ" from the above expression for expectation, the sum that is left is a Taylor series expansion of e−"λ" as well, yielding e−"λ"e−"λ" = e−2"λ" (see Characterizations of the exponential function). If the observed value of "X" is 100, then the estimate is 1, although the true value of the quantity being estimated is very likely to be near 0, which is the opposite extreme. And, if "X" is observed to be 101, then the estimate is even more absurd: It is −1, although the quantity being estimated must be positive. The (biased) maximum likelihood estimator formula_42 is far better than this unbiased estimator. Not only is its value always positive but it is also more accurate in the sense that its mean squared error formula_43 is smaller; compare the unbiased estimator's MSE of formula_44 The MSEs are functions of the true value "λ". The bias of the maximum-likelihood estimator is: formula_45 Maximum of a discrete uniform distribution. The bias of maximum-likelihood estimators can be substantial. Consider a case where "n" tickets numbered from 1 through to "n" are placed in a box and one is selected at random, giving a value "X". If "n" is unknown, then the maximum-likelihood estimator of "n" is "X", even though the expectation of "X" given "n" is only ("n" + 1)/2; we can be certain only that "n" is at least "X" and is probably more. In this case, the natural unbiased estimator is 2"X" − 1. Median-unbiased estimators. The theory of median-unbiased estimators was revived by George W. Brown in 1947: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;An estimate of a one-dimensional parameter θ will be said to be median-unbiased, if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation. Further properties of median-unbiased estimators have been noted by Lehmann, Birnbaum, van der Vaart and Pfanzagl. In particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators do not exist. They are invariant under one-to-one transformations. There are methods of construction median-unbiased estimators for probability distributions that have monotone likelihood-functions, such as one-parameter exponential families, to ensure that they are optimal (in a sense analogous to minimum-variance property considered for mean-unbiased estimators). 
One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: The procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation but for a larger class of loss-functions. Bias with respect to other loss functions. Any minimum-variance "mean"-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function (among mean-unbiased estimators), as observed by Gauss. A minimum-average absolute deviation "median"-unbiased estimator minimizes the risk with respect to the absolute loss function (among median-unbiased estimators), as observed by Laplace. Other loss functions are used in statistics, particularly in robust statistics. Effect of transformations. For univariate parameters, median-unbiased estimators remain median-unbiased under transformations that preserve order (or reverse order). Note that, when a transformation is applied to a mean-unbiased estimator, the result need not be a mean-unbiased estimator of its corresponding population statistic. By Jensen's inequality, a convex function as transformation will introduce positive bias, while a concave function will introduce negative bias, and a function of mixed convexity may introduce bias in either direction, depending on the specific function and distribution. That is, for a non-linear function "f" and a mean-unbiased estimator "U" of a parameter "p", the composite estimator "f"("U") need not be a mean-unbiased estimator of "f"("p"). For example, the square root of the unbiased estimator of the population variance is not a mean-unbiased estimator of the population standard deviation: the square root of the unbiased sample variance, the corrected sample standard deviation, is biased. The bias depends both on the sampling distribution of the estimator and on the transform, and can be quite involved to calculate – see unbiased estimation of standard deviation for a discussion in this case. Bias, variance and mean squared error. While bias quantifies the "average" difference to be expected between an estimator and an underlying parameter, an estimator based on a finite sample can additionally be expected to differ from the parameter due to the randomness in the sample. An estimator that minimises the bias will not necessarily minimise the mean square error. One measure which is used to try to reflect both types of difference is the mean square error, formula_46 This can be shown to be equal to the square of the bias, plus the variance: formula_47 When the parameter is a vector, an analogous decomposition applies: formula_48 where formula_49 is the trace (diagonal sum) of the covariance matrix of the estimator and formula_50 is the square vector norm. Example: Estimation of population variance. For example, suppose an estimator of the form formula_51 is sought for the population variance as above, but this time to minimise the MSE: formula_52 If the variables "X"1 ... "X""n" follow a normal distribution, then "nS"2/σ2 has a chi-squared distribution with "n" − 1 degrees of freedom, giving: formula_53 and so formula_54 With a little algebra it can be confirmed that it is "c" = 1/("n" + 1) which minimises this combined loss function, rather than "c" = 1/("n" − 1) which minimises just the square of the bias. More generally it is only in restricted classes of problems that there will be an estimator that minimises the MSE independently of the parameter values. 
However it is very common that there may be perceived to be a "bias–variance tradeoff", such that a small increase in bias can be traded for a larger decrease in variance, resulting in a more desirable estimator overall. Bayesian view. Most bayesians are rather unconcerned about unbiasedness (at least in the formal sampling-theory sense above) of their estimates. For example, Gelman and coauthors (1995) write: "From a Bayesian perspective, the principle of unbiasedness is reasonable in the limit of large samples, but otherwise it is potentially misleading." Fundamentally, the difference between the Bayesian approach and the sampling-theory approach above is that in the sampling-theory approach the parameter is taken as fixed, and then probability distributions of a statistic are considered, based on the predicted sampling distribution of the data. For a Bayesian, however, it is the "data" which are known, and fixed, and it is the unknown parameter for which an attempt is made to construct a probability distribution, using Bayes' theorem: formula_55 Here the second term, the likelihood of the data given the unknown parameter value θ, depends just on the data obtained and the modelling of the data generation process. However a Bayesian calculation also includes the first term, the prior probability for θ, which takes account of everything the analyst may know or suspect about θ "before" the data comes in. This information plays no part in the sampling-theory approach; indeed any attempt to include it would be considered "bias" away from what was pointed to purely by the data. To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be "unbiased" in sampling theory terms. But the results of a Bayesian approach can differ from the sampling theory approach even if the Bayesian tries to adopt an "uninformative" prior. For example, consider again the estimation of an unknown population variance σ2 of a Normal distribution with unknown mean, where it is desired to optimise "c" in the expected loss function formula_56 A standard choice of uninformative prior for this problem is the Jeffreys prior, formula_57, which is equivalent to adopting a rescaling-invariant flat prior for ln(σ2). One consequence of adopting this prior is that "S"2/σ2 remains a pivotal quantity, i.e. the probability distribution of "S"2/σ2 depends only on "S"2/σ2, independent of the value of "S"2 or σ2: formula_58 However, while formula_59 in contrast formula_60 — when the expectation is taken over the probability distribution of σ2 given "S"2, as it is in the Bayesian case, rather than "S"2 given σ2, one can no longer take σ4 as a constant and factor it out. The consequence of this is that, compared to the sampling-theory calculation, the Bayesian calculation puts more weight on larger values of σ2, properly taking into account (as the sampling-theory calculation cannot) that under this squared-loss function the consequence of underestimating large values of σ2 is more costly in squared-loss terms than that of overestimating small values of σ2. The worked-out Bayesian calculation gives a scaled inverse chi-squared distribution with "n" − 1 degrees of freedom for the posterior probability distribution of σ2. The expected loss is minimised when "cnS"2 = &lt;σ2&gt;; this occurs when "c" = 1/("n" − 3). 
Even with an uninformative prior, therefore, a Bayesian calculation may not give the same expected-loss minimising result as the corresponding sampling-theory calculation. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P_\\theta(x) = P(x\\mid\\theta)" }, { "math_id": 1, "text": "\\hat\\theta" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "P(x\\mid\\theta)" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": " \\operatorname{Bias}(\\hat\\theta, \\theta) =\\operatorname{Bias}_\\theta[\\,\\hat\\theta\\,] = \\operatorname{E}_{x\\mid\\theta}[\\,\\hat{\\theta}\\,]-\\theta = \\operatorname{E}_{x\\mid\\theta}[\\, \\hat\\theta - \\theta \\,]," }, { "math_id": 6, "text": "\\operatorname{E}_{x\\mid\\theta}" }, { "math_id": 7, "text": "\\overline{X}\\,=\\frac 1 n \\sum_{i=1}^n X_i \\qquad S^2=\\frac 1 n \\sum_{i=1}^n\\big(X_i-\\overline{X}\\,\\big)^2 \\qquad " }, { "math_id": 8, "text": "\n \\begin{align}\n \\operatorname{E}[S^2]\n &= \\operatorname{E}\\left[ \\frac 1 n \\sum_{i=1}^n \\big(X_i-\\overline{X}\\big)^2 \\right]\n = \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n \\bigg((X_i-\\mu)-(\\overline{X}-\\mu)\\bigg)^2 \\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n \\bigg((X_i-\\mu)^2 -\n 2(\\overline{X}-\\mu)(X_i-\\mu) +\n (\\overline{X}-\\mu)^2\\bigg) \\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 -\n \\frac 2 n (\\overline{X}-\\mu) \\sum_{i=1}^n (X_i-\\mu) +\n \\frac 1 n (\\overline{X}-\\mu)^2 \\sum_{i=1}^n 1 \\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 -\n \\frac 2 n (\\overline{X}-\\mu)\\sum_{i=1}^n (X_i-\\mu) +\n \\frac 1 n (\\overline{X}-\\mu)^2 \\cdot n\\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 -\n \\frac 2 n (\\overline{X}-\\mu)\\sum_{i=1}^n (X_i-\\mu) +\n (\\overline{X}-\\mu)^2 \\bigg] \\\\[8pt]\n \\end{align}\n" }, { "math_id": 9, "text": "\\mu" }, { "math_id": 10, "text": "\\overline{X}= \\frac 1 n \\sum_{i=1}^nX_i" }, { "math_id": 11, "text": "\n \\begin{align}\n \\overline{X}-\\mu = \\frac 1 n \\sum_{i=1}^n X_i - \\mu = \\frac 1 n \\sum_{i=1}^n X_i - \\frac 1 n \\sum_{i=1}^n\\mu\\ = \\frac 1 n \\sum_{i=1}^n (X_i - \\mu).\\\\[8pt]\n \\end{align}\n" }, { "math_id": 12, "text": "n \\cdot (\\overline{X}-\\mu)=\\sum_{i=1}^n (X_i-\\mu)" }, { "math_id": 13, "text": "\n \\begin{align}\n \\operatorname{E}[S^2]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 -\n \\frac 2 n (\\overline{X}-\\mu)\\sum_{i=1}^n (X_i-\\mu) +\n (\\overline{X}-\\mu)^2 \\bigg]\\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 -\n \\frac 2 n (\\overline{X}-\\mu) \\cdot n \\cdot (\\overline{X}-\\mu)+\n (\\overline{X}-\\mu)^2 \\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 -\n 2(\\overline{X}-\\mu)^2 +\n (\\overline{X}-\\mu)^2 \\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 - (\\overline{X}-\\mu)^2 \\bigg] \\\\[8pt]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2\\bigg] - \\operatorname{E}\\bigg[(\\overline{X}-\\mu)^2 \\bigg] \\\\[8pt]\n &= \\sigma^2 - \\operatorname{E}\\bigg[(\\overline{X}-\\mu)^2 \\bigg]\n = \\left( 1 -\\frac{1}{n}\\right) \\sigma^2 < \\sigma^2.\n \\end{align}\n " }, { "math_id": 14, "text": "\\operatorname{E}\\big[ (\\overline{X}-\\mu)^2 \\big] = \\frac 1 n \\sigma^2" }, { "math_id": 15, "text": "S^2=\\frac 1 {n-1} \\sum_{i=1}^n(X_i-\\overline{X}\\,)^2" }, { "math_id": 16, "text": " \\operatorname{E}[S^2] " }, { "math_id": 17, "text": "\n \\begin{align}\n \\operatorname{E}[S^2]\n &= \\operatorname{E}\\left[ \\frac 1 {n-1}\\sum_{i=1}^n \\big(X_i-\\overline{X}\\big)^2 \\right]\n = 
\\frac{n}{n-1}\\operatorname{E}\\left[ \\frac 1 {n}\\sum_{i=1}^n \\big(X_i-\\overline{X}\\big)^2 \\right] \\\\[8pt]\n &= \\frac{n}{n-1}\\left( 1 -\\frac{1}{n}\\right) \\sigma^2 = \\sigma^2, \\\\[8pt]\n \\end{align}\n" }, { "math_id": 18, "text": "\\operatorname{E}[S^2] = \\sigma^2" }, { "math_id": 19, "text": "S^2=\\frac 1 {n-1}\\sum_{i=1}^n(X_i-\\overline{X}\\,)^2" }, { "math_id": 20, "text": "\\overline{X}" }, { "math_id": 21, "text": "\\sum_{i=1}^n (X_i-\\overline{X})^2" }, { "math_id": 22, "text": "\\mu \\ne \\overline{X}" }, { "math_id": 23, "text": "\n \\frac 1 n \\sum_{i=1}^n (X_i-\\overline{X})^2 < \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2,\n " }, { "math_id": 24, "text": "\n \\begin{align}\n \\operatorname{E}[S^2]\n &= \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\overline{X})^2 \\bigg]\n < \\operatorname{E}\\bigg[ \\frac 1 n \\sum_{i=1}^n (X_i-\\mu)^2 \\bigg] = \\sigma^2.\n \\end{align}\n " }, { "math_id": 25, "text": "\\vec{C}=(X_1 -\\mu, \\ldots, X_n-\\mu)" }, { "math_id": 26, "text": " \\vec{u}=(1,\\ldots, 1)" }, { "math_id": 27, "text": "\\vec{A}=(\\overline{X}-\\mu, \\ldots, \\overline{X}-\\mu)" }, { "math_id": 28, "text": " \\vec{u}" }, { "math_id": 29, "text": "\\vec{B}=(X_1-\\overline{X}, \\ldots, X_n-\\overline{X})" }, { "math_id": 30, "text": " |\\vec{C}|^2= |\\vec{A}|^2+ |\\vec{B}|^2" }, { "math_id": 31, "text": " n \\sigma^2 = n \\operatorname{E}\\left[ (\\overline{X}-\\mu)^2 \\right] +n \\operatorname{E}[S^2] " }, { "math_id": 32, "text": "n" }, { "math_id": 33, "text": "\\vec{C}" }, { "math_id": 34, "text": "X_i" }, { "math_id": 35, "text": " |\\vec{C}|^2" }, { "math_id": 36, "text": "n-1" }, { "math_id": 37, "text": " \\operatorname{E}\\left[ (\\overline{X}-\\mu)^2 \\right] =\\frac{\\sigma^2} n " }, { "math_id": 38, "text": "\\operatorname{E}[S^2] =\\frac{(n-1)\\sigma^2} n " }, { "math_id": 39, "text": "\\operatorname{P}(X=0)^2=e^{-2\\lambda}\\quad" }, { "math_id": 40, "text": "\\operatorname E(\\delta(X))=\\sum_{x=0}^\\infty \\delta(x) \\frac{\\lambda^x e^{-\\lambda}}{x!} = e^{-2\\lambda}," }, { "math_id": 41, "text": "\\delta(x)=(-1)^x. \\, " }, { "math_id": 42, "text": "e^{-2{X}}\\quad" }, { "math_id": 43, "text": "e^{-4\\lambda}-2e^{\\lambda(1/e^2-3)}+e^{\\lambda(1/e^4-1)} \\, " }, { "math_id": 44, "text": "1-e^{-4\\lambda}. \\, " }, { "math_id": 45, "text": "e^{-2\\lambda}-e^{\\lambda(1/e^2-1)}. \\, " }, { "math_id": 46, "text": "\\operatorname{MSE}(\\hat{\\theta})=\\operatorname{E}\\big[(\\hat{\\theta}-\\theta)^2\\big]." 
}, { "math_id": 47, "text": "\\begin{align}\n\\operatorname{MSE}(\\hat{\\theta})= & (\\operatorname{E}[\\hat{\\theta}]-\\theta)^2 + \\operatorname{E}[\\,(\\hat{\\theta} - \\operatorname{E}[\\,\\hat{\\theta}\\,])^2\\,]\\\\\n= & (\\operatorname{Bias}(\\hat{\\theta},\\theta))^2 + \\operatorname{Var}(\\hat{\\theta})\n\\end{align}" }, { "math_id": 48, "text": "\\operatorname{MSE}(\\hat{\\theta }) =\\operatorname{trace}(\\operatorname{Cov}(\\hat{\\theta }))\n+\\left\\Vert\\operatorname{Bias}(\\hat{\\theta},\\theta)\\right\\Vert^{2}" }, { "math_id": 49, "text": "\\operatorname{trace}(\\operatorname{Cov}(\\hat{\\theta }))" }, { "math_id": 50, "text": "\\left\\Vert\\operatorname{Bias}(\\hat{\\theta},\\theta)\\right\\Vert^{2}" }, { "math_id": 51, "text": "T^2 = c \\sum_{i=1}^n\\left(X_i-\\overline{X}\\,\\right)^2 = c n S^2" }, { "math_id": 52, "text": "\\begin{align}\\operatorname{MSE} = & \\operatorname{E}\\left[(T^2 - \\sigma^2)^2\\right] \\\\\n= & \\left(\\operatorname{E}\\left[T^2 - \\sigma^2\\right]\\right)^2 + \\operatorname{Var}(T^2)\\end{align}" }, { "math_id": 53, "text": "\\operatorname{E}[nS^2] = (n-1)\\sigma^2\\text{ and }\\operatorname{Var}(nS^2)=2(n-1)\\sigma^4. " }, { "math_id": 54, "text": "\\operatorname{MSE} = (c (n-1) - 1)^2\\sigma^4 + 2c^2(n-1)\\sigma^4" }, { "math_id": 55, "text": "p(\\theta \\mid D, I) \\propto p(\\theta \\mid I) p(D \\mid \\theta, I)" }, { "math_id": 56, "text": "\\operatorname{Expected Loss} = \\operatorname{E}\\left[\\left(c n S^2 - \\sigma^2\\right)^2\\right] = \\operatorname{E}\\left[\\sigma^4 \\left(c n \\tfrac{S^2}{\\sigma^2} -1 \\right)^2\\right]" }, { "math_id": 57, "text": "\\scriptstyle{p(\\sigma^2) \\;\\propto\\; 1 / \\sigma^2}" }, { "math_id": 58, "text": "p\\left(\\tfrac{S^2}{\\sigma^2}\\mid S^2\\right) = p\\left(\\tfrac{S^2}{\\sigma^2}\\mid \\sigma^2\\right) = g\\left(\\tfrac{S^2}{\\sigma^2}\\right)" }, { "math_id": 59, "text": "\\operatorname{E}_{p(S^2\\mid \\sigma^2)}\\left[\\sigma^4 \\left(c n \\tfrac{S^2}{\\sigma^2} -1 \\right)^2\\right] = \\sigma^4 \\operatorname{E}_{p(S^2\\mid \\sigma^2)}\\left[\\left(c n \\tfrac{S^2}{\\sigma^2} -1 \\right)^2\\right]" }, { "math_id": 60, "text": "\\operatorname{E}_{p(\\sigma^2\\mid S^2)}\\left[\\sigma^4 \\left(c n \\tfrac{S^2}{\\sigma^2} -1 \\right)^2\\right] \\neq \\sigma^4 \\operatorname{E}_{p(\\sigma^2\\mid S^2)}\\left[\\left(c n \\tfrac{S^2}{\\sigma^2} -1 \\right)^2\\right]" } ]
https://en.wikipedia.org/wiki?curid=8450479
845060
Fundamental theorem of Riemannian geometry
Unique existence of the Levi-Civita connection In the mathematical field of Riemannian geometry, the fundamental theorem of Riemannian geometry states that on any Riemannian manifold (or pseudo-Riemannian manifold) there is a unique affine connection that is torsion-free and metric-compatible, called the "Levi-Civita connection" or "(pseudo-)Riemannian connection" of the given metric. Because it is canonically defined by such properties, often this connection is automatically used when given a metric. Statement of the theorem. Fundamental theorem of Riemannian Geometry. Let ("M", "g") be a Riemannian manifold (or pseudo-Riemannian manifold). Then there is a unique connection ∇ which satisfies the following conditions: formula_0 for all vector fields "X", "Y", "Z" (metric-compatibility), and formula_1 for all vector fields "X", "Y" (torsion-freeness). The first condition is called "metric-compatibility" of ∇. It may be equivalently expressed by saying that, given any curve in M, the inner product of any two ∇–parallel vector fields along the curve is constant. It may also be equivalently phrased as saying that the metric tensor is preserved by parallel transport, which is to say that the metric is parallel when considering the natural extension of ∇ to act on (0,2)-tensor fields: ∇"g" = 0. It is further equivalent to require that the connection is induced by a principal bundle connection on the orthonormal frame bundle. The second condition is sometimes called "symmetry" of ∇. It expresses the condition that the torsion of ∇ is zero, and as such is also called "torsion-freeness". There are alternative characterizations. An extension of the fundamental theorem states that given a pseudo-Riemannian manifold there is a unique connection preserving the metric tensor, with any given vector-valued 2-form as its torsion. The difference between an arbitrary connection (with torsion) and the corresponding Levi-Civita connection is the contorsion tensor. The fundamental theorem asserts both existence and uniqueness of a certain connection, which is called the "Levi-Civita connection" or "(pseudo-)Riemannian connection". However, the existence result is extremely direct, as the connection in question may be explicitly defined by either the "second Christoffel identity" or "Koszul formula" as obtained in the proofs below. This explicit definition expresses the Levi-Civita connection in terms of the metric and its first derivatives. As such, if the metric is k-times continuously differentiable, then the Levi-Civita connection is ("k" − 1)-times continuously differentiable. The Levi-Civita connection can also be characterized in other ways, for instance via the Palatini variation of the Einstein–Hilbert action. Proof of the theorem. The proof of the theorem can be presented in various ways. Here the proof is first given in the language of coordinates and Christoffel symbols, and then in the coordinate-free language of covariant derivatives. Regardless of the presentation, the idea is to use the metric-compatibility and torsion-freeness conditions to obtain a direct formula for any connection that is both metric-compatible and torsion-free. This establishes the uniqueness claim in the fundamental theorem. To establish the existence claim, it must be directly checked that the formula obtained does define a connection as desired. Local coordinates. Here the Einstein summation convention will be used, which is to say that an index repeated as both subscript and superscript is being summed over all values. Let m denote the dimension of M.
Recall that, relative to a local chart, a connection is given by "m"3 smooth functions formula_2 with formula_3 for any vector fields X and Y. Torsion-freeness of the connection refers to the condition that ∇"X""Y" − ∇"Y" "X" = ["X", "Y"] for arbitrary X and Y. Written in terms of local coordinates, this is equivalent to formula_4 which by arbitrariness of X and Y is equivalent to the condition that the Christoffel symbols be symmetric in their lower indices, Γ"i""jk" = Γ"i""kj". Similarly, the condition of metric-compatibility is equivalent to the condition formula_5 In this way, it is seen that the conditions of torsion-freeness and metric-compatibility can be viewed as a linear system of equations for the connection, in which the coefficients and 'right-hand side' of the system are given by the metric and its first derivative. The fundamental theorem of Riemannian geometry can be viewed as saying that this linear system has a unique solution. This is seen via the following computation: formula_6 in which the metric-compatibility condition is used three times for the first equality and the torsion-free condition is used three times for the second equality. The resulting formula is sometimes known as the "first Christoffel identity". It can be contracted with the inverse of the metric, "g""kl", to find the "second Christoffel identity": formula_7 This proves the uniqueness of a torsion-free and metric-compatible connection; that is, any such connection must be given by the above formula. To prove the existence, it must be checked that the above formula defines a connection that is torsion-free and metric-compatible. This can be done directly. Invariant formulation. The above proof can also be expressed in terms of vector fields. Torsion-freeness refers to the condition that formula_1 and metric-compatibility refers to the condition that formula_8 where X, Y, and Z are arbitrary vector fields. The computation previously done in local coordinates can be written as formula_9 This reduces immediately to the first Christoffel identity in the case that X, Y, and Z are coordinate vector fields. The equations displayed above can be rearranged to produce the "Koszul formula" or "identity" formula_10 This proves the uniqueness of a torsion-free and metric-compatible connection, since if "g"("W", "Z") is equal to "g"("U", "Z") for arbitrary Z, then W must equal U. This is a consequence of the "non-degeneracy" of the metric. In the local formulation above, this key property of the metric was implicitly used, in the same way, via the existence of "g""kl". Furthermore, by the same reasoning, the Koszul formula can be used to define a vector field ∇"X""Y" when given X and Y, and it is routine to check that this defines a connection that is torsion-free and metric-compatible. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
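As an illustration of the second Christoffel identity, the following SymPy sketch computes the Christoffel symbols directly from a metric; the round metric on the unit 2-sphere is chosen only as a familiar example, and the output reproduces the well-known nonzero symbols Γ"θ""φφ" = −sin θ cos θ and Γ"φ""θφ" = Γ"φ""φθ" = cos θ / sin θ.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]

# Round metric on the unit 2-sphere: ds^2 = d(theta)^2 + sin(theta)^2 d(phi)^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()
dim = len(coords)

# Second Christoffel identity: Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
Gamma = [[[sp.simplify(
              sp.Rational(1, 2) * sum(
                  g_inv[k, l] * (sp.diff(g[j, l], coords[i])
                                 + sp.diff(g[i, l], coords[j])
                                 - sp.diff(g[i, j], coords[l]))
                  for l in range(dim)))
           for j in range(dim)] for i in range(dim)] for k in range(dim)]

for k in range(dim):
    for i in range(dim):
        for j in range(dim):
            if Gamma[k][i][j] != 0:
                print(f"Gamma^{coords[k]}_{coords[i]}{coords[j]} =", Gamma[k][i][j])
```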
[ { "math_id": 0, "text": "X \\big(g(Y,Z)\\big) = g( \\nabla_X Y,Z ) + g( Y,\\nabla_X Z )," }, { "math_id": 1, "text": "\\nabla_XY-\\nabla_YX=[X,Y]," }, { "math_id": 2, "text": "\\left \\{ \\Gamma^l_{ij} \\right \\}," }, { "math_id": 3, "text": "(\\nabla_XY)^i=X^j\\partial_jY^i+X^jY^k\\Gamma_{jk}^i" }, { "math_id": 4, "text": "0=X^jY^k(\\Gamma_{jk}^i-\\Gamma_{kj}^i)," }, { "math_id": 5, "text": "\\partial_kg_{ij}=\\Gamma_{ki}^lg_{lj}+\\Gamma_{kj}^lg_{il}." }, { "math_id": 6, "text": "\\begin{align}\\partial_ig_{jl}+\\partial_jg_{il}-\\partial_lg_{ij}&=\\left(\\Gamma_{ij}^pg_{pl}+\\Gamma_{il}^pg_{jp}\\right)+\\left(\\Gamma_{ji}^pg_{pl}+\\Gamma_{jl}^pg_{ip}\\right)-\\left(\\Gamma_{li}^pg_{pj}+\\Gamma_{lj}^pg_{ip}\\right)\\\\ &=2\\Gamma_{ij}^pg_{pl}\\end{align}" }, { "math_id": 7, "text": "\\Gamma^k_{ij} = \\tfrac{1}{2} g^{kl}\\left ( \\partial_i g_{jl}+ \\partial_j g_{il} - \\partial_l g_{ij} \\right )." }, { "math_id": 8, "text": "X\\left(g(Y,Z)\\right)=g(\\nabla_XY,Z)+g(Y,\\nabla_XZ)," }, { "math_id": 9, "text": "\\begin{align}X\\left(g(Y,Z)\\right)&+Y\\left(g(X,Z)\\right)-Z\\left(g(X,Y)\\right)\\\\\n&=\\Big(g(\\nabla_XY,Z)+g(Y,\\nabla_XZ)\\Big)+\\Big(g(\\nabla_YX,Z)+g(X,\\nabla_YZ)\\Big)-\\Big(g(\\nabla_ZX,Y)+g(X,\\nabla_ZY)\\Big)\\\\ &=g(\\nabla_XY+\\nabla_YX,Z)+g(\\nabla_XZ-\\nabla_ZX,Y)+g(\\nabla_YZ-\\nabla_ZY,X)\\\\\n&=g(2\\nabla_XY+[Y,X],Z)+g([X,Z],Y)+g([Y,Z],X).\\end{align}" }, { "math_id": 10, "text": "2g(\\nabla_XY,Z)=X\\left(g(Y,Z)\\right)+Y\\left(g(X,Z)\\right)-Z\\left(g(X,Y)\\right)-g([Y,X],Z)-g([X,Z],Y)-g([Y,Z],X)." } ]
https://en.wikipedia.org/wiki?curid=845060
8453671
Subtractor
Circuit that performs subtraction In electronics, a subtractor – a digital circuit that performs subtraction of numbers – can be designed using the same approach as that of an adder. The binary subtraction process is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved in performing the subtraction for each bit of the difference: the minuend (formula_0), subtrahend (formula_1), and a borrow in from the previous (less significant) bit order position (formula_2). The outputs are the difference bit (formula_3) and borrow bit formula_4. The subtractor is best understood by considering that the subtrahend and both borrow bits have negative weights, whereas the X and D bits are positive. The operation performed by the subtractor is to rewrite formula_5 (which can take the values -2, -1, 0, or 1) as the sum formula_6. formula_7 formula_8, where ⊕ represents exclusive or. Subtractors are usually implemented within a binary adder for only a small cost when using the standard two's complement notation, by providing an addition/subtraction selector to the carry-in and to invert the second operand. formula_9 (definition of two's complement notation) formula_10 Half subtractor. The half subtractor can be designed with simple combinational Boolean logic. It is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, the minuend formula_11 and subtrahend formula_12, and two outputs, the difference formula_13 and borrow out formula_14. The borrow out signal is set when the subtractor needs to borrow from the next digit in a multi-digit subtraction. That is, formula_15 when formula_16. Since formula_11 and formula_12 are bits, formula_17 if and only if formula_18 and formula_19. An important point worth mentioning is that the half subtractor implements formula_20 and not formula_21, since formula_14 is given by formula_22. This is an important distinction to make since subtraction itself is not commutative, but the difference bit formula_13 is calculated using an XOR gate which is commutative. The truth table for the half subtractor is: X = 0, Y = 0 gives D = 0, Bout = 0; X = 0, Y = 1 gives D = 1, Bout = 1; X = 1, Y = 0 gives D = 1, Bout = 0; X = 1, Y = 1 gives D = 0, Bout = 0. Using this truth table and a Karnaugh map, we find the following logic equations for formula_13 and formula_14: formula_23 formula_24. Consequently, the half subtractor can be implemented with a simplified circuit that avoids crossed traces as well as an explicit negate gate. Full subtractor. The full subtractor is a combinational circuit which is used to perform subtraction of three input bits: the minuend formula_11, subtrahend formula_12, and borrow in formula_25. The full subtractor generates two output bits: the difference formula_13 and borrow out formula_14. formula_25 is set when the previous digit is borrowed from formula_11. Thus, formula_25 is also subtracted from formula_11 as well as the subtrahend formula_12. Or in symbols: formula_26. Like the half subtractor, the full subtractor generates a borrow out when it needs to borrow from the next digit. Since we are subtracting formula_12 and formula_25 from formula_11, a borrow out needs to be generated when formula_27. When a borrow out is generated, 2 is added in the current digit. (This is similar to the subtraction algorithm in decimal. Instead of adding 2, we add 10 when we borrow.) Therefore, formula_28.
The truth table for the full subtractor is: X = 0, Y = 0, Bin = 0 gives D = 0, Bout = 0; X = 0, Y = 0, Bin = 1 gives D = 1, Bout = 1; X = 0, Y = 1, Bin = 0 gives D = 1, Bout = 1; X = 0, Y = 1, Bin = 1 gives D = 0, Bout = 1; X = 1, Y = 0, Bin = 0 gives D = 1, Bout = 0; X = 1, Y = 0, Bin = 1 gives D = 0, Bout = 0; X = 1, Y = 1, Bin = 0 gives D = 0, Bout = 0; X = 1, Y = 1, Bin = 1 gives D = 1, Bout = 1. Therefore the equations are: formula_29 formula_30
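These equations can be checked exhaustively. The following Python sketch implements the half and full subtractor at the gate level and verifies them against the arithmetic identities "X" − "Y" = "D" − 2"B"out (half subtractor) and "X" − "Y" − "B"in = "D" − 2"B"out (full subtractor) for every input combination.

```python
def half_subtractor(x, y):
    """Return (difference, borrow_out) for single bits x, y (computes x - y)."""
    d = x ^ y            # D = X XOR Y
    b_out = (1 - x) & y  # Bout = (NOT X) AND Y
    return d, b_out

def full_subtractor(x, y, b_in):
    """Return (difference, borrow_out) for x - y - b_in, all single bits."""
    d = x ^ y ^ b_in                                        # D = X XOR Y XOR Bin
    b_out = ((1 - x) & b_in) | ((1 - x) & y) | (y & b_in)   # Bout = X'Bin + X'Y + YBin
    return d, b_out

# Exhaustive check against the defining identities
for x in (0, 1):
    for y in (0, 1):
        d, b = half_subtractor(x, y)
        assert x - y == d - 2 * b
        for b_in in (0, 1):
            d, b = full_subtractor(x, y, b_in)
            assert x - y - b_in == d - 2 * b
print("half and full subtractor match the defining identities for all inputs")
```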
[ { "math_id": 0, "text": "X_{i}" }, { "math_id": 1, "text": "Y_{i}" }, { "math_id": 2, "text": "B_{i}" }, { "math_id": 3, "text": "D_{i}" }, { "math_id": 4, "text": "B_{i+1}" }, { "math_id": 5, "text": "X_{i}-Y_{i}-B_{i}" }, { "math_id": 6, "text": "-2B_{i+1}+D_{i}" }, { "math_id": 7, "text": " D_{i} = X_{} \\oplus Y_{i} \\oplus B_{i}" }, { "math_id": 8, "text": " B_{i+1} = X_{i} < (Y_{i} + B_{i})" }, { "math_id": 9, "text": "-B = \\bar{B} + 1" }, { "math_id": 10, "text": "\\begin{alignat}{2}\nA-B & = A + (-B) \\\\\n& = A + \\bar{B} + 1 \\\\\n\\end{alignat}" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "Y" }, { "math_id": 13, "text": "D" }, { "math_id": 14, "text": "B_\\text{out}" }, { "math_id": 15, "text": "B_{\\text{out}} = 1" }, { "math_id": 16, "text": "X < Y" }, { "math_id": 17, "text": "B_\\text{out} = 1" }, { "math_id": 18, "text": "X = 0" }, { "math_id": 19, "text": "Y = 1" }, { "math_id": 20, "text": "X - Y" }, { "math_id": 21, "text": "Y-X" }, { "math_id": 22, "text": "B_{\\text{out}} = \\overline{X} \\cdot Y" }, { "math_id": 23, "text": "D = X \\oplus Y" }, { "math_id": 24, "text": "B_\\text{out} = \\overline X \\cdot Y" }, { "math_id": 25, "text": "B_\\text{in}" }, { "math_id": 26, "text": "X - Y - B_\\text{in}" }, { "math_id": 27, "text": "X < Y + B_\\text{in}" }, { "math_id": 28, "text": "D = X - Y - B_\\text{in} + 2B_\\text{out}" }, { "math_id": 29, "text": "D=X\\oplus Y\\oplus B_{in}" }, { "math_id": 30, "text": "B_{out}=\\bar{X}B_{in}+\\bar{X}Y+YB_{in}" } ]
https://en.wikipedia.org/wiki?curid=8453671
8454
Double planet
A binary system where two planetary-mass objects share an orbital axis external to both In astronomy, a double planet (also binary planet) is a binary satellite system where both objects are planets, or planetary-mass objects, and whose joint barycenter is external to both planetary bodies. Although up to a third of the star systems in the Milky Way are binary, double planets are expected to be much rarer: the typical planet-to-satellite mass ratio is around 1:10000, such pairs are influenced heavily by the gravitational pull of the parent star, and according to the giant-impact hypothesis they are gravitationally stable only under particular circumstances. The Solar System does not have an official double planet; however, the Earth–Moon system is sometimes considered to be one. In promotional materials advertising the SMART-1 mission, the European Space Agency referred to the Earth–Moon system as a double planet. Several dwarf planet candidates can be described as binary planets. At its 2006 General Assembly, the International Astronomical Union considered a proposal that Pluto and Charon be reclassified as a double planet, but the proposal was abandoned in favor of the current IAU definition of planet. Other trans-Neptunian systems with proportionally large planetary-mass satellites include Eris–Dysnomia, Orcus–Vanth and Varda–Ilmarë. Binary asteroids with components of roughly equal mass are sometimes referred to as double minor planets. These include binary asteroids 69230 Hermes and 90 Antiope and binary Kuiper belt objects (KBOs) such as 79360 Sila–Nunam. Definition of "double planet". There is debate as to what criteria should be used to distinguish a "double planet" from a "planet–moon system". The following are considerations. Both bodies satisfy planet criterion. A definition proposed in the "Astronomical Journal" calls for both bodies to individually satisfy an orbit-clearing criterion in order to be called a double planet. Mass ratios closer to 1. One important consideration for defining "double planets" is the ratio of the masses of the two bodies. A mass ratio of 1 would indicate bodies of equal mass, and bodies with mass ratios closer to 1 are more attractive to label as "doubles". Using this definition, the satellites of Mars, Jupiter, Saturn, Uranus, and Neptune can all easily be excluded; they all have masses less than 0.00025 (&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄4000) of the planets around which they revolve. Some dwarf planets, too, have satellites substantially less massive than the dwarf planets themselves. The most notable exception is the Pluto–Charon system. The Charon-to-Pluto mass ratio of 0.122 (≈ &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄8) is close enough to 1 that Pluto and Charon have frequently been described by many scientists as "double dwarf planets" ("double planets" prior to the 2006 definition of "planet"). The International Astronomical Union (IAU) earlier classified Charon as a satellite of Pluto, but had also explicitly expressed the willingness to reconsider the bodies as double dwarf planets in the future. However, a 2006 IAU report classified Charon–Pluto as a double planet. The Moon-to-Earth mass ratio of 0.01230 (≈ &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄81) is also notably close to 1 when compared to all other satellite-to-planet ratios. Consequently, some scientists view the Earth–Moon system as a double planet as well, though this is a minority view.
Eris's lone satellite, Dysnomia, has a radius somewhere around &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄4 that of Eris; assuming similar densities (Dysnomia's compositional make-up may or may not differ substantially from Eris's), the mass ratio would be near &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄40, a value intermediate to the Moon–Earth and Charon–Pluto ratios. Center-of-mass position. Currently, the most commonly proposed definition for a double-planet system is one in which the barycenter, around which both bodies orbit, lies outside both bodies. Under this definition, Pluto and Charon are double dwarf planets, since they orbit a point clearly outside of Pluto, as visible in animations created from images of the "New Horizons" space probe in June 2015. Under this definition, the Earth–Moon system is not currently a double planet; although the Moon is massive enough to cause the Earth to make a noticeable revolution around this center of mass, this point nevertheless lies well within Earth. However, the Moon currently migrates outward from Earth at a rate of approximately per year; in a few billion years, the Earth–Moon system's center of mass will lie outside Earth, which would make it a double-planet system. The center of mass of the Jupiter–Sun system lies outside the surface of the Sun, though arguing that Jupiter and the Sun are a double star is not analogous to arguing Pluto–Charon is a double dwarf planet. Jupiter is too light to be a fusor; were it thirteen times heavier, it would achieve deuterium fusion and become a brown dwarf. Tug-of-war value. Isaac Asimov suggested a distinction between planet–moon and double-planet structures based in part on what he called a "tug-of-war" value, which does not consider their relative sizes. This quantity is simply the ratio of the force exerted on the smaller body by the larger (primary) body to the force exerted on the smaller body by the Sun. This can be shown to equal formula_0 where "m"p is the mass of the primary (the larger body), "m"s is the mass of the Sun, "d"s is the distance between the smaller body and the Sun, and "d"p is the distance between the smaller body and the primary. The tug-of-war value does not rely on the mass of the satellite (the smaller body). This formula actually reflects the relation of the gravitational effects on the smaller body from the larger body and from the Sun. The tug-of-war figure for Saturn's moon Titan is 380, which means that Saturn's hold on Titan is 380 times as strong as the Sun's hold on Titan. Titan's tug-of-war value may be compared with that of Saturn's moon Phoebe, which has a tug-of-war value of just 3.5; that is, Saturn's hold on Phoebe is only 3.5 times as strong as the Sun's hold on Phoebe. Asimov calculated tug-of-war values for several satellites of the planets. He showed that even the largest gas giant, Jupiter, had only a slightly better hold than the Sun on its outer captured satellites, some with tug-of-war values not much higher than one. In nearly every one of Asimov's calculations the tug-of-war value was found to be greater than one, so in those cases the Sun loses the tug-of-war with the planets. The one exception was Earth's Moon, where the Sun wins the tug-of-war with a value of 0.46, which means that Earth's hold on the Moon is less than half as strong as the Sun's. Asimov included this with his other arguments that Earth and the Moon should be considered a binary planet. 
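These two criteria are easy to evaluate numerically. The following Python sketch, using approximate published masses and distances, computes the barycenter offset for the Earth–Moon and Pluto–Charon pairs and the tug-of-war value for the Moon and for Titan; the results come out near the figures quoted above (about 0.46 and 380), with small differences due to the rounded constants used here.

```python
# Approximate values (kg for masses, metres for distances)
M_SUN = 1.989e30
M_EARTH = 5.972e24
M_MOON = 7.346e22
M_PLUTO = 1.303e22
M_CHARON = 1.586e21
M_SATURN = 5.683e26

AU = 1.496e11
D_EARTH_MOON = 3.844e8
D_PLUTO_CHARON = 1.9596e7
D_SATURN_SUN = 9.58 * AU
D_TITAN_SATURN = 1.222e9

R_EARTH = 6.371e6
R_PLUTO = 1.188e6

def barycenter_offset(m_primary, m_secondary, separation):
    """Distance of the pair's barycenter from the primary's centre."""
    return separation * m_secondary / (m_primary + m_secondary)

def tug_of_war(m_primary, d_sun, d_primary, m_sun=M_SUN):
    """Asimov's ratio of the primary's pull to the Sun's pull on the satellite."""
    return (m_primary / m_sun) * (d_sun / d_primary) ** 2

print("Earth-Moon barycenter offset:",
      round(barycenter_offset(M_EARTH, M_MOON, D_EARTH_MOON) / 1e3),
      "km (inside Earth, radius", R_EARTH / 1e3, "km)")
print("Pluto-Charon barycenter offset:",
      round(barycenter_offset(M_PLUTO, M_CHARON, D_PLUTO_CHARON) / 1e3),
      "km (outside Pluto, radius", R_PLUTO / 1e3, "km)")
print("Tug-of-war, Moon :", round(tug_of_war(M_EARTH, AU, D_EARTH_MOON), 2))
print("Tug-of-war, Titan:", round(tug_of_war(M_SATURN, D_SATURN_SUN, D_TITAN_SATURN)))
```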
&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;We might look upon the Moon, then, as neither a true satellite of the Earth nor a captured one, but as a planet in its own right, moving about the Sun in careful step with the Earth. From within the Earth–Moon system, the simplest way of picturing the situation is to have the Moon revolve about the Earth; but if you were to draw a picture of the orbits of the Earth and Moon about the Sun exactly to scale, you would see that the Moon's orbit is everywhere concave toward the Sun. It is always "falling toward" the Sun. All the other satellites, without exception, "fall away" from the Sun through part of their orbits, caught as they are by the superior pull of their primary planets – but not the Moon. See the Path of Earth and Moon around Sun section in the "Orbit of the Moon" article for a more detailed explanation. This definition of double planet depends on the pair's distance from the Sun. If the Earth–Moon system happened to orbit farther away from the Sun than it does now, then Earth would win the tug of war. For example, at the orbit of Mars, the Moon's tug-of-war value would be 1.05. Also, several tiny moons discovered since Asimov's proposal would qualify as double planets by this argument. Neptune's small outer moons Neso and Psamathe, for example, have tug-of-war values of 0.42 and 0.44, less than that of Earth's Moon. Yet their masses are tiny compared to Neptune's, with an estimated ratio of 1.5×10-9 (&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄700,000,000) and 0.4×10-9 (&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,500,000,000). Formation of the system. A final consideration is the way in which the two bodies came to form a system. Both the Earth–Moon and Pluto–Charon systems are thought to have been formed as a result of giant impacts: one body was impacted by a second body, resulting in a debris disk, and through accretion, either two new bodies formed or one new body formed, with the larger body remaining (but changed). However, a giant impact is not a sufficient condition for two bodies being "double planets" because such impacts can also produce tiny satellites, such as the four small outer satellites of Pluto. A now-abandoned hypothesis for the origin of the Moon was actually called the "double-planet hypothesis"; the idea was that the Earth and the Moon formed in the same region of the Solar System's proto-planetary disk, forming a system under gravitational interaction. This idea, too, is a problematic condition for defining two bodies as "double planets" because planets can "capture" moons through gravitational interaction. For example, the moons of Mars (Phobos and Deimos) are thought to be asteroids captured long ago by Mars. Such a definition would also deem Neptune–Triton a double planet, since Triton was a Kuiper belt body the same size and of similar composition to Pluto, later captured by Neptune. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. Informational notes &lt;templatestyles src="Reflist/styles.css" /&gt; Citations &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography Further reading
[ { "math_id": 0, "text": "\\text{tug-of-war value} = \\frac{m_\\mathrm{p}}{m_\\mathrm{s}} \\cdot \\left( \\frac{d_\\mathrm{s}}{d_\\mathrm{p}} \\right)^2" } ]
https://en.wikipedia.org/wiki?curid=8454
8454507
Rayleigh length
Concept in laser optics In optics and especially laser science, the Rayleigh length or Rayleigh range, formula_3, is the distance along the propagation direction of a beam from the waist to the place where the area of the cross section is doubled. A related parameter is the confocal parameter, "b", which is twice the Rayleigh length. The Rayleigh length is particularly important when beams are modeled as Gaussian beams. Explanation. For a Gaussian beam propagating in free space along the formula_4 axis with wave number formula_5, the Rayleigh length is given by formula_6 where formula_7 is the wavelength (the vacuum wavelength divided by formula_8, the index of refraction) and formula_2 is the beam waist, the radial size of the beam at its narrowest point. This equation and those that follow assume that the waist is not extraordinarily small; formula_9. The radius of the beam at a distance formula_1 from the waist is formula_10 The minimum value of formula_0 occurs at formula_11, by definition. At distance formula_3 from the beam waist, the beam radius is increased by a factor formula_12 and the cross sectional area by 2. Related quantities. The total angular spread of a Gaussian beam in radians is related to the Rayleigh length by formula_13 The diameter of the beam at its waist (focus spot size) is given by formula_14. These equations are valid within the limits of the paraxial approximation. For beams with much larger divergence the Gaussian beam model is no longer accurate and a physical optics analysis is required. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
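As a worked example with arbitrary values (a 633 nm beam focused to a 0.1 mm waist), the following Python function evaluates the Rayleigh length, the confocal parameter, the beam radius at a distance "z", and the total angular spread using the formulas above.

```python
import math

def gaussian_beam(wavelength, w0, z):
    """Return (z_R, b, w_z, theta_div) for a Gaussian beam.

    wavelength : wavelength in the medium (vacuum wavelength / n), in metres
    w0         : beam waist radius, in metres
    z          : distance from the waist, in metres
    """
    z_R = math.pi * w0**2 / wavelength          # Rayleigh length
    b = 2 * z_R                                 # confocal parameter
    w_z = w0 * math.sqrt(1 + (z / z_R)**2)      # beam radius at z
    theta_div = 2 * w0 / z_R                    # total angular spread (radians)
    return z_R, b, w_z, theta_div

# Example: 633 nm beam focused to a 0.1 mm waist, evaluated 50 mm from the waist
z_R, b, w_z, theta = gaussian_beam(633e-9, 0.1e-3, z=0.05)
print(f"Rayleigh length  = {z_R * 1e3:.1f} mm")
print(f"confocal param.  = {b * 1e3:.1f} mm")
print(f"w(50 mm)         = {w_z * 1e3:.3f} mm")   # about sqrt(2)*w0, since 50 mm is near z_R
print(f"divergence       = {theta * 1e3:.2f} mrad")
```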
[ { "math_id": 0, "text": "w(z)" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "w_0" }, { "math_id": 3, "text": "z_\\mathrm{R}" }, { "math_id": 4, "text": "\\hat{z}" }, { "math_id": 5, "text": "k = 2\\pi/\\lambda" }, { "math_id": 6, "text": "z_\\mathrm{R} = \\frac{\\pi w_0^2}{\\lambda} = \\frac{1}{2} k w_0^2" }, { "math_id": 7, "text": "\\lambda" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "w_0 \\ge 2\\lambda/\\pi" }, { "math_id": 10, "text": "w(z) = w_0 \\, \\sqrt{ 1+ {\\left( \\frac{z}{z_\\mathrm{R}} \\right)}^2 } . " }, { "math_id": 11, "text": "w(0) = w_0" }, { "math_id": 12, "text": "\\sqrt{2}" }, { "math_id": 13, "text": "\\Theta_{\\mathrm{div}} \\simeq 2\\frac{w_0}{z_R}." }, { "math_id": 14, "text": "D = 2\\,w_0 \\simeq \\frac{4\\lambda}{\\pi\\, \\Theta_{\\mathrm{div}}}" } ]
https://en.wikipedia.org/wiki?curid=8454507
84563
Logistic function
S-shaped curve A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with the equation formula_0 where &lt;templatestyles src="Block indent/styles.css"/&gt;formula_1 is the carrying capacity, the supremum of the values of the function; &lt;templatestyles src="Block indent/styles.css"/&gt;formula_2 is the logistic growth rate, the steepness of the curve; and &lt;templatestyles src="Block indent/styles.css"/&gt;formula_3 is the formula_4 value of the function's midpoint. The logistic function has domain the real numbers, the limit as formula_5 is 0, and the limit as formula_6 is formula_1. The standard logistic function, depicted at right, where formula_7, has the equation formula_8 and is sometimes simply called the sigmoid. It is also sometimes called the expit, being the reciprocal function of the logit. The logistic function finds applications in a range of fields, including biology (especially ecology), biomathematics, chemistry, demography, economics, geoscience, mathematical psychology, probability, sociology, political science, linguistics, statistics, and artificial neural networks. There are various generalizations, depending on the field. History. The logistic function was introduced in a series of three papers by Pierre François Verhulst between 1838 and 1847, who devised it as a model of population growth by adjusting the exponential growth model, under the guidance of Adolphe Quetelet. Verhulst first devised the function in the mid 1830s, publishing a brief note in 1838, then presented an expanded analysis and named the function in 1844 (published 1845); the third paper adjusted the correction term in his model of Belgian population growth. The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth approaches the limit with an exponentially decaying gap, like the initial stage in reverse. Verhulst did not explain the choice of the term "logistic" (), but it is presumably in contrast to the "logarithmic" curve, and by analogy with arithmetic and geometric. His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus "logistic growth" is presumably named by analogy, "logistic" being from , a traditional division of Greek mathematics. As a word derived from ancient Greek mathematical terms, the name of this function is unrelated to the military and management term "logistics", which is instead from "lodgings", though some believe the Greek term also influenced "logistics"; see for details. Mathematical properties. The &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;standard logistic function is the logistic function with parameters formula_9, formula_10, formula_11, which yields formula_12 In practice, due to the nature of the exponential function formula_13, it is often sufficient to compute the standard logistic function for formula_4 over a small range of real numbers, such as a range contained in [−6, +6], as it quickly converges very close to its saturation values of 0 and 1. Symmetries. The logistic function has the symmetry property that formula_14 This reflects that the growth from 0 when formula_4 is small is symmetric with the decay of the gap to the limit (1) when formula_4 is large. Further, formula_15 is an odd function. 
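The symmetry just stated is easy to verify numerically. The following Python sketch evaluates the standard logistic function on a grid and checks that f(x) + f(−x) = 1, i.e. that f(x) − 1/2 is odd; the grid is an arbitrary choice.

```python
import numpy as np

def standard_logistic(x):
    """Standard logistic (sigmoid) function: L = 1, k = 1, x0 = 0."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 1001)
f = standard_logistic(x)

# Symmetry: f(x) + f(-x) = 1, equivalently f(x) - 1/2 is an odd function
assert np.allclose(f + standard_logistic(-x), 1.0)
assert np.allclose((f - 0.5) + (standard_logistic(-x) - 0.5), 0.0)

print("f(-6), f(0), f(6) =", standard_logistic(-6.0), standard_logistic(0.0), standard_logistic(6.0))
```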
The sum of the logistic function and its reflection about the vertical axis, formula_16, is formula_17 The logistic function is thus rotationally symmetrical about the point (0, 1/2). Inverse function. The logistic function is the inverse of the natural logit function formula_18 and so converts the logarithm of odds into a probability. The conversion from the log-likelihood ratio of two alternatives also takes the form of a logistic curve. Hyperbolic tangent. The logistic function is an offset and scaled hyperbolic tangent function: formula_19 or formula_20 This follows from formula_21 The hyperbolic-tangent relationship leads to another form for the logistic function's derivative: formula_22 which ties the logistic function into the logistic distribution. Geometrically, the hyperbolic tangent function is the hyperbolic angle on the unit hyperbola formula_23, which factors as formula_24, and thus has asymptotes the lines through the origin with slope &amp;NoBreak;&amp;NoBreak; and with slope &amp;NoBreak;&amp;NoBreak;, and vertex at &amp;NoBreak;&amp;NoBreak; corresponding to the range and midpoint (&amp;NoBreak;&amp;NoBreak;) of tanh. Analogously, the logistic function can be viewed as the hyperbolic angle on the hyperbola formula_25, which factors as formula_26, and thus has asymptotes the lines through the origin with slope &amp;NoBreak;&amp;NoBreak; and with slope &amp;NoBreak;&amp;NoBreak;, and vertex at &amp;NoBreak;&amp;NoBreak;, corresponding to the range and midpoint (&amp;NoBreak;&amp;NoBreak;) of the logistic function. Parametrically, hyperbolic cosine and hyperbolic sine give coordinates on the unit hyperbola: formula_27, with quotient the hyperbolic tangent. Similarly, formula_28 parametrizes the hyperbola formula_25, with quotient the logistic function. These correspond to linear transformations (and rescaling the parametrization) of the hyperbola formula_29, with parametrization formula_30: the parametrization of the hyperbola for the logistic function corresponds to formula_31 and the linear transformation formula_32, while the parametrization of the unit hyperbola (for the hyperbolic tangent) corresponds to the linear transformation formula_33. Derivative. The standard logistic function has an easily calculated derivative. The derivative is known as the density of the logistic distribution: formula_34 formula_35 The logistic distribution is a location–scale family, which corresponds to parameters of the logistic function. If &amp;NoBreak;&amp;NoBreak; is fixed, then the midpoint &amp;NoBreak;&amp;NoBreak; is the location and the slope &amp;NoBreak;&amp;NoBreak; is the scale. Integral. Conversely, its antiderivative can be computed by the substitution formula_36, since formula_37 so (dropping the constant of integration) formula_38 In artificial neural networks, this is known as the "softplus" function and (with scaling) is a smooth approximation of the ramp function, just as the logistic function (with scaling) is a smooth approximation of the Heaviside step function. Logistic differential equation. The unique standard logistic function is the solution of the simple first-order non-linear ordinary differential equation formula_39 with boundary condition formula_40. This equation is the continuous version of the logistic map. Note that the reciprocal logistic function is solution to a simple first-order "linear" ordinary differential equation. 
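A quick numerical check of several of the identities above: the derivative of the standard logistic function equals f(x)(1 − f(x)), the function can be written as 1/2 + (1/2)tanh(x/2), and differentiating the softplus antiderivative ln(1 + e^x) recovers the function itself. The following sketch verifies all three on an arbitrary grid using finite differences.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 2001)
f = sigmoid(x)
h = 1e-5

# Derivative identity: f'(x) = f(x) (1 - f(x)), checked with a central finite difference
numeric_grad = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
assert np.allclose(numeric_grad, f * (1 - f), atol=1e-8)

# Hyperbolic-tangent form: f(x) = 1/2 + (1/2) tanh(x/2)
assert np.allclose(f, 0.5 + 0.5 * np.tanh(x / 2))

# Antiderivative (softplus): d/dx log(1 + e^x) = f(x)
numeric_grad_softplus = (np.log1p(np.exp(x + h)) - np.log1p(np.exp(x - h))) / (2 * h)
assert np.allclose(numeric_grad_softplus, f, atol=1e-8)

print("derivative, tanh and softplus identities verified on [-6, 6]")
```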
The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 when the function is 1; and the derivative is positive for formula_41 between 0 and 1, and negative for formula_41 above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0 and a stable equilibrium at 1, and thus for any function value greater than 0 and less than 1, it grows to 1. The logistic equation is a special case of the Bernoulli differential equation and has the following solution: formula_42 Choosing the constant of integration formula_43 gives the other well known form of the definition of the logistic curve: formula_44 More quantitatively, as can be seen from the analytical solution, the logistic curve shows early exponential growth for negative argument, which reaches to linear growth of slope 1/4 for an argument near 0, then approaches 1 with an exponentially decaying gap. The differential equation derived above is a special case of a general differential equation that only models the sigmoid function for formula_45. In many modeling applications, the more "general form" formula_46 can be desirable. Its solution is the shifted and scaled sigmoid formula_47. Probabilistic interpretation. When the capacity formula_11, the value of the logistic function is in the range &amp;NoBreak;&amp;NoBreak; and can be interpreted as a probability p. In more detail, p can be interpreted as the probability of one of two alternatives (the parameter of a Bernoulli distribution); the two alternatives are complementary, so the probability of the other alternative is formula_48 and formula_49. The two alternatives are coded as 1 and 0, corresponding to the limiting values as formula_50. In this interpretation the input x is the log-odds for the first alternative (relative to the other alternative), measured in "logistic units" (or logits), &amp;NoBreak;&amp;NoBreak; is the odds for the first event (relative to the second), and, recalling that given odds of formula_51 for (&amp;NoBreak;&amp;NoBreak; against 1), the probability is the ratio of for over (for plus against), formula_52, we see that formula_53 is the probability of the first alternative. Conversely, x is the log-odds "against" the second alternative, &amp;NoBreak;&amp;NoBreak; is the log-odds "for" the second alternative, formula_13 is the odds for the second alternative, and formula_54 is the probability of the second alternative. This can be framed more symmetrically in terms of two inputs, &amp;NoBreak;&amp;NoBreak; and &amp;NoBreak;&amp;NoBreak;, which then generalizes naturally to more than two alternatives. Given two real number inputs, &amp;NoBreak;&amp;NoBreak; and &amp;NoBreak;&amp;NoBreak;, interpreted as logits, their "difference" formula_55 is the log-odds for option 1 (the log-odds "against" option 0), formula_56 is the odds, formula_57 is the probability of option 1, and similarly formula_58 is the probability of option 0. This form immediately generalizes to more alternatives as the softmax function, which is a vector-valued function whose i-th coordinate is formula_59. More subtly, the symmetric form emphasizes interpreting the input x as formula_55 and thus "relative" to some reference point, implicitly to formula_10. 
Notably, the softmax function is invariant under adding a constant to all the logits formula_60, which corresponds to the difference formula_61 being the log-odds for option j against option i, but the individual logits formula_60 not being log-odds on their own. Often one of the options is used as a reference ("pivot"), and its value fixed as 0, so the other logits are interpreted as odds versus this reference. This is generally done with the first alternative, hence the choice of numbering: formula_10, and then formula_62 is the log-odds for option i against option 0. Since formula_63, this yields the formula_64 term in many expressions for the logistic function and generalizations. Generalizations. In growth modeling, numerous generalizations exist, including the generalized logistic curve, the Gompertz function, the cumulative distribution function of the shifted Gompertz distribution, and the hyperbolastic function of type I. In statistics, where the logistic function is interpreted as the probability of one of two alternatives, the generalization to three or more alternatives is the softmax function, which is vector-valued, as it gives the probability of each alternative. Applications. In ecology: modeling population growth. A typical application of the logistic equation is a common model of population growth (see also population dynamics), originally due to Pierre-François Verhulst in 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had read Thomas Malthus' "An Essay on the Principle of Population", which describes the Malthusian growth model of simple (unconstrained) exponential growth. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population. The equation was rediscovered in 1911 by A. G. McKendrick for the growth of bacteria in broth and experimentally tested using a technique for nonlinear parameter estimation. The equation is also sometimes called the "Verhulst-Pearl equation" following its rediscovery in 1920 by Raymond Pearl (1879–1940) and Lowell Reed (1888–1966) of the Johns Hopkins University. Another scientist, Alfred J. Lotka derived the equation again in 1925, calling it the "law of population growth". Letting formula_65 represent population size (formula_66 is often used in ecology instead) and formula_67 represent time, this model is formalized by the differential equation: formula_68 where the constant formula_69 defines the growth rate and formula_70 is the carrying capacity. In the equation, the early, unimpeded growth rate is modeled by the first term formula_71. The value of the rate formula_69 represents the proportional increase of the population formula_65 in one unit of time. Later, as the population grows, the modulus of the second term (which multiplied out is formula_72) becomes almost as large as the first, as some members of the population formula_65 interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called the "bottleneck", and is modeled by the value of the parameter formula_70. The competition diminishes the combined growth rate, until the value of formula_65 ceases to grow (this is called "maturity" of the population). 
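The Verhulst equation can also be integrated numerically; the closed-form solution is given below. The following Python sketch, with arbitrary values of r, K and the initial population, integrates dP/dt = rP(1 − P/K) with a classical Runge–Kutta step and shows the population saturating at the carrying capacity.

```python
import numpy as np

def logistic_rhs(P, r, K):
    """Right-hand side of the Verhulst equation dP/dt = r P (1 - P/K)."""
    return r * P * (1.0 - P / K)

def integrate_logistic(P0, r, K, t_end, dt=0.01):
    """Integrate the logistic ODE with the classical Runge-Kutta (RK4) scheme."""
    n_steps = round(t_end / dt)
    t = np.linspace(0.0, n_steps * dt, n_steps + 1)
    P = np.empty(n_steps + 1)
    P[0] = P0
    for i in range(n_steps):
        p = P[i]
        k1 = logistic_rhs(p, r, K)
        k2 = logistic_rhs(p + 0.5 * dt * k1, r, K)
        k3 = logistic_rhs(p + 0.5 * dt * k2, r, K)
        k4 = logistic_rhs(p + dt * k3, r, K)
        P[i + 1] = p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return t, P

r, K, P0 = 0.8, 1000.0, 10.0   # arbitrary illustrative values
t, P = integrate_logistic(P0, r, K, t_end=20.0)
for i in range(0, len(t), 500):            # report every 5 time units
    print(f"t = {t[i]:5.1f}   P = {P[i]:8.2f}")
```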
The solution to the equation (with formula_73 being the initial population) is formula_74 where formula_75 where formula_70 is the limiting value of formula_65, the highest value that the population can reach given infinite time (or come close to reaching in finite time). The carrying capacity is asymptotically reached independently of the initial value formula_76, and also in the case that formula_77. In ecology, species are sometimes referred to as formula_69-strategist or formula_70-strategist depending upon the selective processes that have shaped their life history strategies. Choosing the variable dimensions so that formula_78 measures the population in units of carrying capacity, and formula_79 measures time in units of formula_80, gives the dimensionless differential equation formula_81 Integral. The antiderivative of the ecological form of the logistic function can be computed by the substitution formula_82, since formula_83 formula_84 Time-varying carrying capacity. Since the environmental conditions influence the carrying capacity, as a consequence it can be time-varying, with formula_85, leading to the following mathematical model: formula_86 A particularly important case is that of carrying capacity that varies periodically with period formula_87: formula_88 It can be shown that in such a case, independently from the initial value formula_76, formula_89 will tend to a unique periodic solution formula_90, whose period is formula_87. A typical value of formula_87 is one year: In such case formula_91 may reflect periodical variations of weather conditions. Another interesting generalization is to consider that the carrying capacity formula_91 is a function of the population at an earlier time, capturing a delay in the way population modifies its environment. This leads to a logistic delay equation, which has a very rich behavior, with bistability in some parameter range, as well as a monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustainable oscillations, finite-time singularities as well as finite-time death. In statistics and machine learning. Logistic functions are used in several roles in statistics. For example, they are the cumulative distribution function of the logistic family of distributions, and they are, a bit simplified, used to model the chance a chess player has to beat their opponent in the Elo rating system. More specific examples now follow. Logistic regression. Logistic functions are used in logistic regression to model how the probability formula_92 of an event may be affected by one or more explanatory variables: an example would be to have the model formula_93 where formula_4 is the explanatory variable, formula_94 and formula_95 are model parameters to be fitted, and formula_41 is the standard logistic function. Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression. Another application of the logistic function is in the Rasch model, used in item response theory. 
In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect. Neural networks. Logistic functions are often used in artificial neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron. A common choice for the activation or "squashing" functions, used to clip large magnitudes to keep the response of the neural network bounded, is formula_96 which is a logistic function. These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners caution that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation. The logistic function is itself the derivative of another proposed activation function, the softplus. In medicine: modeling of growth of tumors. Another application of logistic curve is in medicine, where the logistic differential equation is used to model the growth of tumors. This application can be considered an extension of the above-mentioned use in the framework of ecology (see also the Generalized logistic curve, allowing for more parameters). Denoting with formula_97 the size of the tumor at time formula_67, its dynamics are governed by formula_98 which is of the type formula_99 where formula_100 is the proliferation rate of the tumor. If a chemotherapy is started with a log-kill effect, the equation may be revised to be formula_101 where formula_102 is the therapy-induced death rate. In the idealized case of very long therapy, formula_102 can be modeled as a periodic function (of period formula_87) or (in case of continuous infusion therapy) as a constant function, and one has that formula_103 i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate, then there is the eradication of the disease. Of course, this is an oversimplified model of both the growth and the therapy (e.g. it does not take into account the phenomenon of clonal resistance). In medicine: modeling of a pandemic. A novel infectious pathogen to which a population has no immunity will generally spread exponentially in the early stages, while the supply of susceptible individuals is plentiful. The SARS-CoV-2 virus that causes COVID-19 exhibited exponential growth early in the course of infection in several countries in early 2020. Factors including a lack of susceptible hosts (through the continued spread of infection until it passes the threshold for herd immunity) or reduction in the accessibility of potential hosts through physical distancing measures, may result in exponential-looking epidemic curves first linearizing (replicating the "logarithmic" to "logistic" transition first noted by Pierre-François Verhulst, as noted above) and then reaching a maximal limit. A logistic function, or related functions (e.g. 
the Gompertz function) are usually used in a descriptive or phenomenological manner because they fit well not only to the early exponential rise, but to the eventual levelling off of the pandemic as the population develops a herd immunity. This is in contrast to actual models of pandemics which attempt to formulate a description based on the dynamics of the pandemic (e.g. contact rates, incubation times, social distancing, etc.). Some simple models have been developed, however, which yield a logistic solution. Modeling early COVID-19 cases. A generalized logistic function, also called the Richards growth curve, has been applied to model the early phase of the COVID-19 outbreak. The authors fit the generalized logistic function to the cumulative number of infected cases, here referred to as "infection trajectory". There are different parameterizations of the generalized logistic function in the literature. One frequently used forms is formula_104 where formula_105 are real numbers, and formula_106 is a positive real number. The flexibility of the curve formula_41 is due to the parameter formula_106: (i) if formula_107 then the curve reduces to the logistic function, and (ii) as formula_106 approaches zero, the curve converges to the Gompertz function. In epidemiological modeling, formula_108, formula_109, and formula_110 represent the final epidemic size, infection rate, and lag phase, respectively. See the right panel for an example infection trajectory when formula_111 is set to formula_112. One of the benefits of using a growth function such as the generalized logistic function in epidemiological modeling is its relatively easy application to the multilevel model framework, where information from different geographic regions can be pooled together. In chemistry: reaction models. The concentration of reactants and products in autocatalytic reactions follow the logistic function. The degradation of Platinum group metal-free (PGM-free) oxygen reduction reaction (ORR) catalyst in fuel cell cathodes follows the logistic decay function, suggesting an autocatalytic degradation mechanism. In physics: Fermi–Dirac distribution. The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according to Fermi–Dirac statistics. In optics: mirage. The logistic function also finds applications in optics, particularly in modelling phenomena such as mirages. Under certain conditions, such as the presence of a temperature or concentration gradient due to diffusion and balancing with gravity, logistic curve behaviours can emerge. A mirage, resulting from a temperature gradient that modifies the refractive index related to the density/concentration of the material over distance, can be modelled using a fluid with a refractive index gradient due to the concentration gradient. This mechanism can be equated to a limiting population growth model, where the concentrated region attempts to diffuse into the lower concentration region, while seeking equilibrium with gravity, thus yielding a logistic function curve. In material science: Phase diagrams. See Diffusion bonding. In linguistics: language change. In linguistics, the logistic function can be used to model language change: an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted. 
In agriculture: modeling crop response. The logistic S-curve can be used for modeling the crop response to changes in growth factors. There are two types of response functions: "positive" and "negative" growth curves. For example, the crop yield may "increase" with increasing value of the growth factor up to a certain level (positive function), or it may "decrease" with increasing growth factor values (negative function owing to a negative growth factor), which situation requires an "inverted" S-curve. In economics and sociology: diffusion of innovations. The logistic function can be used to illustrate the progress of the diffusion of an innovation through its life cycle. In "The Laws of Imitation" (1890), Gabriel Tarde describes the rise and spread of new ideas through imitative chains. In particular, Tarde identifies three main stages through which innovations spread: the first one corresponds to the difficult beginnings, during which the idea has to struggle within a hostile environment full of opposing habits and beliefs; the second one corresponds to the properly exponential take-off of the idea, with formula_113; finally, the third stage is logarithmic, with formula_114, and corresponds to the time when the impulse of the idea gradually slows down while, simultaneously new opponent ideas appear. The ensuing situation halts or stabilizes the progress of the innovation, which approaches an asymptote. In a sovereign state, the subnational units (constituent states or cities) may use loans to finance their projects. However, this funding source is usually subject to strict legal rules as well as to economy scarcity constraints, especially the resources the banks can lend (due to their equity or Basel limits). These restrictions, which represent a saturation level, along with an exponential rush in an economic competition for money, create a public finance diffusion of credit pleas and the aggregate national response is a sigmoid curve. Historically, when new products are introduced there is an intense amount of research and development which leads to dramatic improvements in quality and reductions in cost. This leads to a period of rapid industry growth. Some of the more famous examples are: railroads, incandescent light bulbs, electrification, cars and air travel. Eventually, dramatic improvement and cost reduction opportunities are exhausted, the product or process are in widespread use with few remaining potential new customers, and markets become saturated. Logistic analysis was used in papers by several researchers at the International Institute of Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures and energy source substitutions and the role of work in the economy as well as with the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989). Cesare Marchetti published on long economic cycles and on diffusion of innovations. Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures including canals, railroads, highways and airlines, showing that their diffusion followed logistic shaped curves. Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: beginning of a technological era as "irruption", the ascent as "frenzy", the rapid build out as "synergy" and the completion as "maturity". Sequential analysis. 
Link created an extension of Wald's theory of sequential analysis to a distribution-free accumulation of random variables until either a positive or negative bound is first equaled or exceeded. Link derives the probability of first equaling or exceeding the positive boundary as formula_115, the logistic function. This is the first proof that the logistic function may have a stochastic process as its basis. Link provides a century of examples of "logistic" experimental results and a newly derived relation between this probability and the time of absorption at the boundaries. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "f(x) = \\frac{L}{1 + e^{-k(x-x_0)}}" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "x_0" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x \\to -\\infty" }, { "math_id": 6, "text": "x \\to +\\infty" }, { "math_id": 7, "text": "L=1, k=1, x_0=0" }, { "math_id": 8, "text": "f(x) = \\frac{1}{1 + e^{-x}}" }, { "math_id": 9, "text": "k = 1" }, { "math_id": 10, "text": "x_0 = 0" }, { "math_id": 11, "text": "L = 1" }, { "math_id": 12, "text": "f(x) = \\frac{1}{1 + e^{-x}} = \\frac{e^x}{e^x + 1} = \\frac{e^{x/2}}{e^{x/2} + e^{-x/2}}." }, { "math_id": 13, "text": "e^{-x}" }, { "math_id": 14, "text": "1 - f(x) = f(-x)." }, { "math_id": 15, "text": "x \\mapsto f(x) - 1/2" }, { "math_id": 16, "text": "f(-x)" }, { "math_id": 17, "text": "\\frac{1}{1 + e^{-x}} + \\frac{1}{1 + e^{-(-x)}} = \\frac{e^x}{e^x + 1} + \\frac{1}{e^x + 1} = 1." }, { "math_id": 18, "text": " \\operatorname{logit} p = \\log \\frac p {1-p} \\text{ for } 0<p<1 " }, { "math_id": 19, "text": "f(x) = \\frac12 + \\frac12 \\tanh\\left(\\frac{x}{2}\\right)," }, { "math_id": 20, "text": "\\tanh(x) = 2 f(2x) - 1." }, { "math_id": 21, "text": "\n\\begin{align}\n\\tanh(x) & = \\frac{e^x - e^{-x}}{e^x + e^{-x}} = \\frac{e^x \\cdot \\left(1 - e^{-2x}\\right)}{e^x \\cdot \\left(1 + e^{-2x}\\right)} \\\\\n &= f(2x) - \\frac{e^{-2x}}{1 + e^{-2x}} = f(2x) - \\frac{e^{-2x} + 1 - 1}{1 + e^{-2x}} = 2f(2x) - 1.\n\\end{align}\n" }, { "math_id": 22, "text": "\\frac{d}{dx} f(x) = \\frac14 \\operatorname{sech}^2\\left(\\frac{x}{2}\\right)," }, { "math_id": 23, "text": "x^2 - y^2 = 1" }, { "math_id": 24, "text": "(x + y)(x - y) = 1" }, { "math_id": 25, "text": "xy - y^2 = 1" }, { "math_id": 26, "text": "y(x - y) = 1" }, { "math_id": 27, "text": "\\left( (e^t + e^{-t})/2, (e^t - e^{-t})/2\\right)" }, { "math_id": 28, "text": "\\bigl(e^{t/2} + e^{-t/2}, e^{t/2}\\bigr)" }, { "math_id": 29, "text": "xy = 1" }, { "math_id": 30, "text": "(e^{-t}, e^t)" }, { "math_id": 31, "text": "t/2" }, { "math_id": 32, "text": "\\bigl( \\begin{smallmatrix}\n1 & 1\\\\ 0 & 1 \\end{smallmatrix} \\bigr)" }, { "math_id": 33, "text": "\\tfrac{1}{2}\\bigl( \\begin{smallmatrix}\n1 & 1\\\\ -1 & 1 \\end{smallmatrix} \\bigr)" }, { "math_id": 34, "text": "f(x) = \\frac{1}{1 + e^{-x}} = \\frac{e^x}{1 + e^x}," }, { "math_id": 35, "text": "\n\\begin{align}\n\\frac{\\mathrm{d}}{\\mathrm{d}x}f(x) &= \\frac{e^x \\cdot (1 + e^x) - e^x \\cdot e^x}{(1 + e^x)^2} \\\\\n&= \\frac{e^{x}}{(1 + e^{x})^2} \\\\\n&= \\left(\\frac{e^{x}}{1 + e^{x}}\\right) \\left(\\frac{1}{1 + e^{x}}\\right) \\\\\n&= \\left(\\frac{e^{x}}{1 + e^{x}}\\right) \\left(1-\\frac{e^{x}}{1 + e^{x}}\\right) \\\\\n&= f(x)\\left(1 - f(x)\\right)\n\\end{align}\n" }, { "math_id": 36, "text": "u = 1 + e^x" }, { "math_id": 37, "text": "f(x) = \\frac{e^x}{1 + e^x} = \\frac{u'}{u}," }, { "math_id": 38, "text": "\\int \\frac{e^x}{1 + e^x}\\,dx = \\int \\frac{1}{u}\\,du = \\ln u = \\ln (1 + e^x)." }, { "math_id": 39, "text": "\\frac{d}{dx}f(x) = f(x)\\big(1 - f(x)\\big)" }, { "math_id": 40, "text": "f(0) = 1/2" }, { "math_id": 41, "text": "f" }, { "math_id": 42, "text": "f(x) = \\frac{e^x}{e^x + C}." }, { "math_id": 43, "text": "C = 1" }, { "math_id": 44, "text": "f(x) = \\frac{e^x}{e^x + 1} = \\frac{1}{1 + e^{-x}}." 
}, { "math_id": 45, "text": "x > 0" }, { "math_id": 46, "text": "\\frac{df(x)}{dx} = \\frac{k}{a} f(x)\\big(a - f(x)\\big), \\quad f(0) = \\frac a {1 + e^{kr}}" }, { "math_id": 47, "text": "aS\\big(k(x - r)\\big)" }, { "math_id": 48, "text": "q=1-p" }, { "math_id": 49, "text": "p+q=1" }, { "math_id": 50, "text": "x \\to \\pm \\infty" }, { "math_id": 51, "text": "O = O:1" }, { "math_id": 52, "text": "O/(O+1)" }, { "math_id": 53, "text": "e^x/(e^x + 1) = 1/(1 + e^{-x}) = p" }, { "math_id": 54, "text": "e^{-x}/(e^{-x} + 1) = 1/(1 + e^x) = q" }, { "math_id": 55, "text": "x_1 - x_0" }, { "math_id": 56, "text": "e^{x_1 - x_0}" }, { "math_id": 57, "text": "e^{x_1 - x_0}/(e^{x_1 - x_0} + 1) = 1/\\left(1 + e^{-(x_1 - x_0)}\\right) = e^{x_1}/(e^{x_0} + e^{x_1})" }, { "math_id": 58, "text": "e^{x_0}/(e^{x_0} + e^{x_1})" }, { "math_id": 59, "text": "e^{x_i}/\\sum_{i=0}^n e^{x_i}" }, { "math_id": 60, "text": "x_i" }, { "math_id": 61, "text": "x_j - x_i" }, { "math_id": 62, "text": "x_i = x_i - x_0" }, { "math_id": 63, "text": "e^0 = 1" }, { "math_id": 64, "text": "+1" }, { "math_id": 65, "text": "P" }, { "math_id": 66, "text": "N" }, { "math_id": 67, "text": "t" }, { "math_id": 68, "text": "\\frac{dP}{dt}=r P \\left(1 - \\frac{P}{K}\\right)," }, { "math_id": 69, "text": "r" }, { "math_id": 70, "text": "K" }, { "math_id": 71, "text": "+rP" }, { "math_id": 72, "text": "-r P^2 / K" }, { "math_id": 73, "text": "P_0" }, { "math_id": 74, "text": "P(t) = \\frac{K P_0 e^{rt}}{K + P_0 \\left( e^{rt} - 1\\right)} = \\frac{K}{1+\\left(\\frac{K-P_0}{P_0}\\right)e^{-rt}}, " }, { "math_id": 75, "text": "\\lim_{t\\to\\infty} P(t) = K," }, { "math_id": 76, "text": "P(0) > 0" }, { "math_id": 77, "text": "P(0) > K" }, { "math_id": 78, "text": "n" }, { "math_id": 79, "text": "\\tau" }, { "math_id": 80, "text": "1/r" }, { "math_id": 81, "text": "\\frac{dn}{d\\tau} = n (1-n)." }, { "math_id": 82, "text": "u = K + P_0 \\left( e^{rt} - 1\\right)" }, { "math_id": 83, "text": "du = r P_0 e^{rt} dt" }, { "math_id": 84, "text": "\\int \\frac{K P_0 e^{rt}}{K + P_0 \\left( e^{rt} - 1\\right)}\\,dt = \\int \\frac{K}{r} \\frac{1}{u}\\,du = \\frac{K}{r} \\ln u + C = \\frac{K}{r} \\ln \\left(K + P_0 (e^{rt} - 1) \\right) + C" }, { "math_id": 85, "text": "K(t) > 0" }, { "math_id": 86, "text": "\\frac{dP}{dt} = rP \\cdot \\left(1 - \\frac{P}{K(t)}\\right)." }, { "math_id": 87, "text": "T" }, { "math_id": 88, "text": "K(t + T) = K(t)." 
}, { "math_id": 89, "text": "P(t)" }, { "math_id": 90, "text": "P_*(t)" }, { "math_id": 91, "text": "K(t)" }, { "math_id": 92, "text": "p" }, { "math_id": 93, "text": "p = f(a + bx)," }, { "math_id": 94, "text": "a" }, { "math_id": 95, "text": "b" }, { "math_id": 96, "text": "g(h) = \\frac{1}{1 + e^{-2 \\beta h}}," }, { "math_id": 97, "text": "X(t)" }, { "math_id": 98, "text": "X' = r\\left(1 - \\frac X K \\right)X," }, { "math_id": 99, "text": "X' = F(X)X, \\quad F'(X) \\le 0," }, { "math_id": 100, "text": "F(X)" }, { "math_id": 101, "text": "X' = r\\left(1 - \\frac X K \\right)X - c(t) X," }, { "math_id": 102, "text": "c(t)" }, { "math_id": 103, "text": "\\frac 1 T \\int_0^T c(t)\\, dt > r \\to \\lim_{t \\to +\\infty} x(t) = 0," }, { "math_id": 104, "text": " f(t ; \\theta_1,\\theta_2,\\theta_3, \\xi) = \\frac{\\theta_1}{[1 + \\xi \\exp (-\\theta_2 \\cdot (t - \\theta_3) ) ]^{1/\\xi}}" }, { "math_id": 105, "text": "\\theta_1,\\theta_2,\\theta_3" }, { "math_id": 106, "text": " \\xi " }, { "math_id": 107, "text": " \\xi = 1 " }, { "math_id": 108, "text": "\\theta_1" }, { "math_id": 109, "text": "\\theta_2" }, { "math_id": 110, "text": "\\theta_3" }, { "math_id": 111, "text": "(\\theta_1,\\theta_2,\\theta_3)" }, { "math_id": 112, "text": "(10000,0.2,40)" }, { "math_id": 113, "text": "f(x)=2^x" }, { "math_id": 114, "text": "f(x)=\\log(x)" }, { "math_id": 115, "text": "1/(1+e^{-\\theta A})" } ]
https://en.wikipedia.org/wiki?curid=84563
845722
Tunnel magnetoresistance
Magnetic effect in insulators between ferromagnets Tunnel magnetoresistance (TMR) is a magnetoresistive effect that occurs in a magnetic tunnel junction (MTJ), which is a component consisting of two ferromagnets separated by a thin insulator. If the insulating layer is thin enough (typically a few nanometres), electrons can tunnel from one ferromagnet into the other. Since this process is forbidden in classical physics, the tunnel magnetoresistance is a strictly quantum mechanical phenomenon, and lies in the study of spintronics. Magnetic tunnel junctions are manufactured in thin film technology. On an industrial scale the film deposition is done by magnetron sputter deposition; on a laboratory scale molecular beam epitaxy, pulsed laser deposition and electron beam physical vapor deposition are also utilized. The junctions are prepared by photolithography. Phenomenological description. The direction of the two magnetizations of the ferromagnetic films can be switched individually by an external magnetic field. If the magnetizations are in a parallel orientation it is more likely that electrons will tunnel through the insulating film than if they are in the oppositional (antiparallel) orientation. Consequently, such a junction can be switched between two states of electrical resistance, one with low and one with very high resistance. History. The effect was originally discovered in 1975 by Michel Jullière (University of Rennes, France) in Fe/Ge-O/Co-junctions at 4.2 K. The relative change of resistance was around 14%, and did not attract much attention. In 1991 Terunobu Miyazaki (Tohoku University, Japan) found a change of 2.7% at room temperature. Later, in 1994, Miyazaki found 18% in junctions of iron separated by an amorphous aluminum oxide insulator and Jagadeesh Moodera found 11.8% in junctions with electrodes of CoFe and Co. The highest effects observed at this time with aluminum oxide insulators was around 70% at room temperature. Since the year 2000, tunnel barriers of crystalline magnesium oxide (MgO) have been under development. In 2001 Butler and Mathon independently made the theoretical prediction that using iron as the ferromagnet and MgO as the insulator, the tunnel magnetoresistance can reach several thousand percent. The same year, Bowen et al. were the first to report experiments showing a significant TMR in a MgO based magnetic tunnel junction [Fe/MgO/FeCo(001)]. In 2004, Parkin and Yuasa were able to make Fe/MgO/Fe junctions that reach over 200% TMR at room temperature. In 2008, effects of up to 604% at room temperature and more than 1100% at 4.2 K were observed in junctions of CoFeB/MgO/CoFeB by S. Ikeda, H. Ohno group of Tohoku University in Japan. Applications. The read-heads of modern hard disk drives work on the basis of magnetic tunnel junctions. TMR, or more specifically the magnetic tunnel junction, is also the basis of MRAM, a new type of non-volatile memory. The 1st generation technologies relied on creating cross-point magnetic fields on each bit to write the data on it, although this approach has a scaling limit at around 90–130 nm. There are two 2nd generation techniques currently being developed: Thermal Assisted Switching (TAS) and Spin-transfer torque. Magnetic tunnel junctions are also used for sensing applications. Today they are commonly used for position sensors and current sensors in various automotive, industrial and consumer applications. 
These higher performance sensors are replacing Hall sensors in many applications due to their improved performance. Physical explanation. The relative resistance change—or effect amplitude—is defined as formula_0 where formula_1 is the electrical resistance in the anti-parallel state, whereas formula_2 is the resistance in the parallel state. The TMR effect was explained by Jullière with the spin polarizations of the ferromagnetic electrodes. The spin polarization "P" is calculated from the spin dependent density of states (DOS) formula_3 at the Fermi energy: formula_4 The spin-up electrons are those with spin orientation parallel to the external magnetic field, whereas the spin-down electrons have anti-parallel alignment with the external field. The relative resistance change is now given by the spin polarizations of the two ferromagnets, "P1" and "P2": formula_5 If no voltage is applied to the junction, electrons tunnel in both directions with equal rates. With a bias voltage "U", electrons tunnel preferentially to the positive electrode. With the assumption that spin is conserved during tunneling, the current can be described in a two-current model. The total current is split in two partial currents, one for the spin-up electrons and another for the spin-down electrons. These vary depending on the magnetic state of the junctions. There are two possibilities to obtain a defined anti-parallel state. First, one can use ferromagnets with different coercivities (by using different materials or different film thicknesses). And second, one of the ferromagnets can be coupled with an antiferromagnet (exchange bias). In this case the magnetization of the uncoupled electrode remains "free". The TMR becomes infinite if "P1" and "P2" equal 1, i.e. if both electrodes have 100% spin polarization. In this case the magnetic tunnel junction becomes a switch, that switches magnetically between low resistance and infinite resistance. Materials that come into consideration for this are called "ferromagnetic half-metals". Their conduction electrons are fully spin-polarized. This property is theoretically predicted for a number of materials (e.g. CrO2, various Heusler alloys) but its experimental confirmation has been the subject of subtle debate. Nevertheless, if one considers only those electrons that enter into transport, measurements by Bowen et al. of up to 99.6% spin polarization at the interface between La0.7Sr0.3MnO3 and SrTiO3 pragmatically amount to experimental proof of this property. The TMR decreases with both increasing temperature and increasing bias voltage. Both can be understood in principle by magnon excitations and interactions with magnons, as well as due to tunnelling with respect to localized states induced by oxygen vacancies (see Symmetry Filtering section hereafter). Symmetry-filtering in tunnel barriers. Prior to the introduction of epitaxial magnesium oxide (MgO), amorphous aluminum oxide was used as the tunnel barrier of the MTJ, and typical room temperature TMR was in the range of tens of percent. MgO barriers increased TMR to hundreds of percent. This large increase reflects a synergetic combination of electrode and barrier electronic structures, which in turn reflects the achievement of structurally ordered junctions. Indeed, MgO filters the tunneling transmission of electrons with a particular symmetry that are fully spin-polarized within the current flowing across body-centered cubic Fe-based electrodes. 
Thus, in the MTJ's parallel (P) state of electrode magnetization, electrons of this symmetry dominate the junction current. In contrast, in the MTJ's antiparallel (AP) state, this channel is blocked, such that electrons with the next most favorable symmetry to transmit dominate the junction current. Since those electrons tunnel with respect to a larger barrier height, this results in the sizeable TMR. Beyond these large values of TMR across MgO-based MTJs, this impact of the barrier's electronic structure on tunnelling spintronics has been indirectly confirmed by engineering the junction's potential landscape for electrons of a given symmetry. This was first achieved by examining how the electrons of a lanthanum strontium manganite half-metallic electrode with both full spin (P = +1) and symmetry polarization tunnel across an electrically biased SrTiO3 tunnel barrier. The conceptually simpler experiment of inserting an appropriate metal spacer at the junction interface during sample growth was also later demonstrated. While theory, first formulated in 2001, predicts large TMR values associated with a 4 eV barrier height in the MTJ's P state and 12 eV in the MTJ's AP state, experiments reveal barrier heights as low as 0.4 eV. This contradiction is lifted if one takes into account the localized states of oxygen vacancies in the MgO tunnel barrier. Extensive solid-state tunnelling spectroscopy experiments across MgO MTJs revealed in 2014 that the electronic retention on the ground and excited states of an oxygen vacancy, which is temperature-dependent, determines the tunnelling barrier height for electrons of a given symmetry, and thus crafts the effective TMR ratio and its temperature dependence. This low barrier height in turn enables the high current densities required for spin-transfer torque, discussed hereafter. Spin-transfer torque in magnetic tunnel junctions (MTJs). The effect of spin-transfer torque has been studied and applied widely in MTJs, where there is a tunnelling barrier sandwiched between a set of two ferromagnetic electrodes such that there is (free) magnetization of the right electrode, while assuming that the left electrode (with fixed magnetization) acts as spin-polarizer. This may then be pinned to some selecting transistor in a magnetoresistive random-access memory device, or connected to a preamplifier in a hard disk drive application. The spin-transfer torque vector, driven by the linear response voltage, can be computed from the expectation value of the torque operator: formula_6 where formula_7 is the gauge-invariant nonequilibrium density matrix for the steady-state transport, in the zero-temperature limit, in the linear-response regime, and the torque operator formula_8 is obtained from the time derivative of the spin operator: formula_9 Using the general form of a 1D tight-binding Hamiltonian: formula_10 where the total magnetization (as macrospin) is along the unit vector formula_11, and using the properties of the Pauli matrices involving arbitrary classical vectors formula_12, given by formula_13 formula_14 formula_15 it is then possible to first obtain an analytical expression for formula_8 (which can be expressed in compact form using formula_16, and the vector of Pauli spin matrices formula_17).
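The Pauli-matrix identities quoted above (formula_13, formula_14 and formula_15) can be verified numerically. The short Python sketch below checks the first of them, (σ·p)(σ·q) = p·q + i(p×q)·σ, for two arbitrary real vectors; it is only a consistency check on the spin algebra entering the torque operator, not an implementation of the nonequilibrium Green's function calculation itself, and the test vectors p and q are arbitrary values chosen for this example.

import numpy as np

# Pauli spin matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_dot(v):
    # (sigma . v) for a classical 3-vector v.
    return v[0] * sx + v[1] * sy + v[2] * sz

p = np.array([0.3, -1.2, 0.5])
q = np.array([2.0, 0.1, -0.7])

lhs = sigma_dot(p) @ sigma_dot(q)
rhs = np.dot(p, q) * np.eye(2) + 1j * sigma_dot(np.cross(p, q))
print(np.allclose(lhs, rhs))  # True: (sigma.p)(sigma.q) = p.q + i (p x q).sigma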
The spin-transfer torque vector in general MTJs has two components: a parallel component, formula_18, and a perpendicular component, formula_19. In symmetric MTJs (made of electrodes with the same geometry and exchange splitting), the spin-transfer torque vector has only one active component, as the perpendicular component disappears: formula_20. Therefore, only formula_21 vs. formula_22 needs to be plotted at the site of the right electrode to characterise tunnelling in symmetric MTJs, making them appealing for production and characterisation at an industrial scale. Note: In these calculations the active region (for which it is necessary to calculate the retarded Green's function) should consist of the tunnel barrier plus the right ferromagnetic layer of finite thickness (as in realistic devices). The active region is attached to the left ferromagnetic electrode (modeled as a semi-infinite tight-binding chain with non-zero Zeeman splitting) and the right N electrode (a semi-infinite tight-binding chain without any Zeeman splitting), as encoded by the corresponding self-energy terms. Discrepancy between theory and experiment. Theoretical tunnelling magneto-resistance ratios of 10000% have been predicted. However, the largest that have been observed are only 604%. One suggestion is that grain boundaries could be affecting the insulating properties of the MgO barrier; however, the structure of films in buried stack structures is difficult to determine. The grain boundaries may act as short circuit conduction paths through the material, reducing the resistance of the device. Recently, using new scanning transmission electron microscopy techniques, the grain boundaries within FeCoB/MgO/FeCoB MTJs have been atomically resolved. This has allowed first principles density functional theory calculations to be performed on structural units that are present in real films. Such calculations have shown that the band gap can be reduced by as much as 45%. In addition to grain boundaries, point defects such as boron interstitials and oxygen vacancies could be significantly altering the tunnelling magneto-resistance. Recent theoretical calculations have revealed that boron interstitials introduce defect states in the band gap, potentially reducing the TMR further. These theoretical calculations have also been backed up by experimental evidence showing the nature of boron within the MgO layer between two different systems and how the TMR is different. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{TMR} := \\frac{R_{\\mathrm{ap}}-R_{\\mathrm{p}}}{R_{\\mathrm{p}}}" }, { "math_id": 1, "text": "R_\\mathrm{ap}" }, { "math_id": 2, "text": "R_\\mathrm{p}" }, { "math_id": 3, "text": "\\mathcal{D}" }, { "math_id": 4, "text": "P = \\frac{\\mathcal{D}_\\uparrow(E_\\mathrm{F}) - \\mathcal{D}_\\downarrow(E_\\mathrm{F})}{\\mathcal{D}_\\uparrow(E_\\mathrm{F}) + \\mathcal{D}_\\downarrow(E_\\mathrm{F})}" }, { "math_id": 5, "text": "\\mathrm{TMR} = \\frac{2 P_1 P_2}{1 - P_1 P_2}" }, { "math_id": 6, "text": " \\mathbf{T} = \\mathrm{Tr}[\\hat{\\mathbf{T}} \\hat{\\rho}_\\mathrm{neq}] " }, { "math_id": 7, "text": " \\hat{\\rho}_\\mathrm{neq} " }, { "math_id": 8, "text": " \\hat{\\mathbf{T}} " }, { "math_id": 9, "text": "\n\\hat{\\mathbf{T}} = \\frac{d\\hat{\\mathbf{S}}}{dt}= -\\frac{i}{\\hbar}\\left[\\frac{\\hbar}{2}\\boldsymbol{\\sigma},\\hat{H}\\right]\n" }, { "math_id": 10, "text": " \\hat{H}=\\hat{H}_0 - \\Delta (\\boldsymbol{\\sigma} \\cdot \\mathbf{m})/2 " }, { "math_id": 11, "text": " \\mathbf{m}" }, { "math_id": 12, "text": " \\mathbf{p},\\mathbf{q} " }, { "math_id": 13, "text": " (\\boldsymbol{\\sigma} \\cdot \\mathbf{p})(\\boldsymbol{\\sigma} \\cdot \\mathbf{q}) = \\mathbf{p} \\cdot \\mathbf{q} + i(\\mathbf{p}\\times\\mathbf{q})\\cdot \\boldsymbol{\\sigma} " }, { "math_id": 14, "text": " (\\boldsymbol{\\sigma} \\cdot \\mathbf{p}) \\boldsymbol{\\sigma} = \\mathbf{p} + i \\boldsymbol{\\sigma} \\times \\mathbf{p} " }, { "math_id": 15, "text": " \\boldsymbol{\\sigma} (\\boldsymbol{\\sigma} \\cdot \\mathbf{q}) = \\mathbf{q} + i \\mathbf{q} \\times \\boldsymbol{\\sigma} " }, { "math_id": 16, "text": " \\Delta, \\mathbf{m} " }, { "math_id": 17, "text": " \\boldsymbol{\\sigma}=(\\sigma_x,\\sigma_y,\\sigma_z) " }, { "math_id": 18, "text": " T_{\\parallel}=\\sqrt{T_x^2+T_z^2} " }, { "math_id": 19, "text": " T_{\\perp}=T_y " }, { "math_id": 20, "text": " T_{\\perp} \\equiv 0 " }, { "math_id": 21, "text": " T_{\\parallel} " }, { "math_id": 22, "text": " \\theta " } ]
https://en.wikipedia.org/wiki?curid=845722
845737
Glueball
Hypothetical particle composed of gluons In particle physics, a glueball (also gluonium, gluon-ball) is a hypothetical composite particle. It consists solely of gluon particles, without valence quarks. Such a state is possible because gluons carry color charge and experience the strong interaction between themselves. Glueballs are extremely difficult to identify in particle accelerators, because they mix with ordinary meson states. In pure gauge theory, glueballs are the only states of the spectrum and some of them are stable. Theoretical calculations show that glueballs should exist at energy ranges accessible with current collider technology. However, due to the aforementioned difficulty (among others), they have so far not been observed and identified with certainty, although phenomenological calculations have suggested that an experimentally identified glueball candidate, denoted formula_0, has properties consistent with those expected of a Standard Model glueball. The prediction that glueballs exist is one of the most important predictions of the Standard Model of particle physics that has not yet been confirmed experimentally. Glueballs are the only particles predicted by the Standard Model with total angular momentum ("J") (sometimes called "intrinsic spin") that could be either 2 or 3 in their ground states. Experimental evidence was announced in 2021, by the TOTEM collaboration at the LHC in collaboration with the DØ collaboration at the former Tevatron collider at Fermilab, of odderon (a composite gluonic particle with odd C-parity) exchange. This exchange, associated with a quarkless three-gluon vector glueball, was identified in the comparison of proton–proton and proton–antiproton scattering. In 2024, the X(2370) particle was determined to have mass and spin parity consistent with those of a glueball. However, other exotic particle candidates such as a tetraquark could not be ruled out. Properties. In principle, it is theoretically possible for all properties of glueballs to be calculated exactly and derived directly from the equations and fundamental physical constants of quantum chromodynamics (QCD) without further experimental input. So, the predicted properties of these hypothetical particles can be described in exquisite detail using only Standard Model physics that has wide acceptance in the theoretical physics literature. But there is considerable uncertainty in the measurement of some of the relevant key physical constants, and the QCD calculations are so difficult that solutions to these equations are almost always numerical approximations (calculated using several very different methods). This can lead to variation in theoretical predictions of glueball properties, like mass and branching ratios in glueball decays. Constituent particles and color charge. Theoretical studies of glueballs have focused on glueballs consisting of either two gluons or three gluons, by analogy to mesons and baryons that have two and three quarks respectively. As in the case of mesons and baryons, glueballs would be QCD color charge neutral. The baryon number of a glueball is zero. Total angular momentum. Double-gluon glueballs can have total angular momentum "J" = 0 (which are either scalar or pseudo-scalar) or "J" = 2 (tensor). Triple-gluon glueballs can have total angular momentum "J" = 1 (vector boson) or 3 (third-order tensor boson). All glueballs have integer total angular momentum, which implies that they are bosons rather than fermions.
Glueballs are the only particles predicted by the Standard Model with total angular momentum (J) (sometimes called "intrinsic spin") that could be either 2 or 3 in their ground states, although mesons made of two quarks with and with similar masses have been observed and excited states of other mesons can have these values of total angular momentum. Electric charge. All glueballs would have an electric charge of zero, as gluons themselves do not have an electric charge. Mass and parity. Glueballs are predicted by quantum chromodynamics to be massive, despite the fact that gluons themselves have zero rest mass in the Standard Model. Glueballs with all four possible combinations of quantum numbers P (spatial parity) and C (charge parity) for every possible total angular momentum have been considered, producing at least fifteen possible glueball states including excited glueball states that share the same quantum numbers but have differing masses with the lightest states having masses as low as (for a glueball with quantum numbers J = 0, P = +1, C = +1, or equivalently J = 0++), and the heaviest states having masses as great as almost (for a glueball with quantum numbers J = 0, P = +1, C = −1, or J = 0+−). These masses are on the same order of magnitude as the masses of many experimentally observed mesons and baryons, as well as to the masses of the tau lepton, charm quark, bottom quark, some hydrogen isotopes, and some helium isotopes. Stability and decay channels. Just as all Standard Model mesons and baryons, except the proton, are unstable in isolation, all glueballs are predicted by the Standard Model to be unstable in isolation, with various QCD calculations predicting the total decay width (which is functionally related to half-life) for various glueball states. QCD calculations also make predictions regarding the expected decay patterns of glueballs. For example, glueballs would not have radiative or two photon decays, but would have decays into pairs of pions, pairs of kaons, or pairs of eta mesons. Practical impact on macroscopic low energy physics. Because Standard Model glueballs are so ephemeral (decaying almost immediately into more stable decay products) and are only generated in high energy physics, glueballs only arise synthetically in the natural conditions found on Earth that humans can easily observe. They are scientifically notable mostly because they are a testable prediction of the Standard Model, and not because of phenomenological impact on macroscopic processes, or their engineering applications. Lattice QCD simulations. Lattice QCD provides a way to study the glueball spectrum theoretically and from first principles. Some of the first quantities calculated using lattice QCD methods (in 1980) were glueball mass estimates. Morningstar and Peardon computed in 1999 the masses of the lightest glueballs in QCD without dynamical quarks. The three lowest states are tabulated below. The presence of dynamical quarks would slightly alter these data, but also makes the computations more difficult. Since that time calculations within QCD (lattice and sum rules) find the lightest glueball to be a scalar with mass in the range of about . Lattice predictions for scalar and pseudoscalar glueballs, including their excitations, were confirmed by Dyson–Schwinger/Bethe–Salpeter equations in Yang–Mills theory. Experimental candidates. 
Particle accelerator experiments are often able to identify unstable composite particles and assign masses to those particles to a precision of approximately , without being able to immediately assign to the particle resonance that is observed all of the properties of that particle. Scores of such particles have been detected, although particles detected in some experiments but not others can be viewed as doubtful. Many of these candidates have been the subject of active investigation for at least eighteen years. The GlueX experiment has been specifically designed to produce more definitive experimental evidence of glueballs. Some of the candidate particle resonances that could be glueballs, although the evidence is not definitive, include the following: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f_{0}(1710)" } ]
https://en.wikipedia.org/wiki?curid=845737
845864
Pedal curve
Curve generated by the projections of a fixed point on the tangents of another curve In mathematics, a pedal curve of a given curve results from the orthogonal projection of a fixed point on the tangent lines of this curve. More precisely, for a plane curve "C" and a given fixed "pedal point" "P", the pedal curve of "C" is the locus of points "X" so that the line "PX" is perpendicular to a tangent "T" to the curve passing through the point "X". Conversely, at any point "R" on the curve "C", let "T" be the tangent line at that point "R"; then there is a unique point "X" on the tangent "T" which forms with the pedal point "P" a line perpendicular to the tangent "T" (for the special case when the fixed point "P" lies on the tangent "T", the points "X" and "P" coincide) – the pedal curve is the set of such points "X", called the "foot" of the perpendicular to the tangent "T" from the fixed point "P", as the variable point "R" ranges over the curve "C". Complementing the pedal curve, there is a unique point "Y" on the line normal to "C" at "R" so that "PY" is perpendicular to the normal, so "PXRY" is a (possibly degenerate) rectangle. The locus of points "Y" is called the contrapedal curve. The orthotomic of a curve is its pedal magnified by a factor of 2 so that the center of similarity is "P". This is locus of the reflection of "P" through the tangent line "T". The pedal curve is the first in a series of curves "C"1, "C"2, "C"3, etc., where "C"1 is the pedal of "C", "C"2 is the pedal of "C"1, and so on. In this scheme, "C"1 is known as the "first positive pedal" of "C", "C"2 is the "second positive pedal" of "C", and so on. Going the other direction, "C" is the "first negative pedal" of "C"1, the "second negative pedal" of "C"2, etc. Equations. From the Cartesian equation. Take "P" to be the origin. For a curve given by the equation "F"("x", "y")=0, if the equation of the tangent line at "R"=("x"0, "y"0) is written in the form formula_0 then the vector (cos α, sin α) is parallel to the segment "PX", and the length of "PX", which is the distance from the tangent line to the origin, is "p". So "X" is represented by the polar coordinates ("p", α) and replacing ("p", α) by ("r", θ) produces a polar equation for the pedal curve. For example, for the ellipse formula_1 the tangent line at "R"=("x"0, "y"0) is formula_2 and writing this in the form given above requires that formula_3 The equation for the ellipse can be used to eliminate "x"0 and "y"0 giving formula_4 and converting to ("r", θ) gives formula_5 as the polar equation for the pedal. This is easily converted to a Cartesian equation as formula_6 From the polar equation. For "P" the origin and "C" given in polar coordinates by "r" = "f"(θ). Let "R"=("r", θ) be a point on the curve and let "X"=("p", α) be the corresponding point on the pedal curve. Let ψ denote the angle between the tangent line and the radius vector, sometimes known as the polar tangential angle. It is given by formula_7 Then formula_8 and formula_9 These equations may be used to produce an equation in "p" and α which, when translated to "r" and θ gives a polar equation for the pedal curve. For example, let the curve be the circle given by "r" = "a" cos θ. Then formula_10 so formula_11 Also formula_12 So the polar equation of the pedal is formula_13 From the pedal equation. The pedal equations of a curve and its pedal are closely related. 
If "P" is taken as the pedal point and the origin then it can be shown that the angle ψ between the curve and the radius vector at a point "R" is equal to the corresponding angle for the pedal curve at the point "X". If "p" is the length of the perpendicular drawn from "P" to the tangent of the curve (i.e. "PX") and "q" is the length of the corresponding perpendicular drawn from "P" to the tangent to the pedal, then by similar triangles formula_14 It follows immediately that the if the pedal equation of the curve is "f"("p","r")=0 then the pedal equation for the pedal curve is formula_15 From this all the positive and negative pedals can be computed easily if the pedal equation of the curve is known. From parametric equations. Let formula_16 be the vector for "R" to "P" and write formula_17, the tangential and normal components of formula_18 with respect to the curve. Then formula_19 is the vector from "R" to "X" from which the position of "X" can be computed. Specifically, if "c" is a parametrization of the curve then formula_20 parametrises the pedal curve (disregarding points where "c' "is zero or undefined). For a parametrically defined curve, its pedal curve with pedal point (0;0) is defined as formula_21 formula_22 The contrapedal curve is given by: formula_23 With the same pedal point, the contrapedal curve is the pedal curve of the evolute of the given curve. Geometrical properties. Consider a right angle moving rigidly so that one leg remains on the point "P" and the other leg is tangent to the curve. Then the vertex of this angle is "X" and traces out the pedal curve. As the angle moves, its direction of motion at "P" is parallel to "PX" and its direction of motion at "R" is parallel to the tangent "T" = "RX". Therefore, the instant center of rotation is the intersection of the line perpendicular to "PX" at "P" and perpendicular to "RX" at "R", and this point is "Y". It follows that the tangent to the pedal at "X" is perpendicular to "XY". Draw a circle with diameter "PR", then it circumscribes rectangle "PXRY" and "XY" is another diameter. The circle and the pedal are both perpendicular to "XY" so they are tangent at "X". Hence the pedal is the envelope of the circles with diameters "PR" where "R" lies on the curve. The line "YR" is normal to the curve and the envelope of such normals is its evolute. Therefore, "YR" is tangent to the evolute and the point "Y" is the foot of the perpendicular from "P" to this tangent, in other words "Y" is on the pedal of the evolute. It follows that the contrapedal of a curve is the pedal of its evolute. Let "C′" be the curve obtained by shrinking "C" by a factor of 2 toward "P". Then the point "R′" corresponding to "R" is the center of the rectangle "PXRY", and the tangent to "C′" at "R′" bisects this rectangle parallel to "PY" and "XR". A ray of light starting from "P" and reflected by "C′" at "R' "will then pass through "Y". The reflected ray, when extended, is the line "XY" which is perpendicular to the pedal of "C". The envelope of lines perpendicular to the pedal is then the envelope of reflected rays or the catacaustic of "C′". This proves that the catacaustic of a curve is the evolute of its orthotomic. As noted earlier, the circle with diameter "PR" is tangent to the pedal. The center of this circle is "R′" which follows the curve "C′". 
Let "D′" be a curve congruent to "C′" and let "D′" roll without slipping, as in the definition of a roulette, on "C′" so that "D′" is always the reflection of "C′" with respect to the line to which they are mutually tangent. Then when the curves touch at "R′" the point corresponding to "P" on the moving plane is "X", and so the roulette is the pedal curve. Equivalently, the orthotomic of a curve is the roulette of the curve on its mirror image. Example. When "C" is a circle the above discussion shows that the following definitions of a limaçon are equivalent: We also have shown that the catacaustic of a circle is the evolute of a limaçon. Pedals of specific curves. Pedals of some specific curves are: References. Notes &lt;templatestyles src="Reflist/styles.css" /&gt; Sources &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\cos \\alpha x + \\sin \\alpha y = p" }, { "math_id": 1, "text": "\\frac{x^2}{a^2}+\\frac{y^2}{b^2}=1" }, { "math_id": 2, "text": "\\frac{x_0x}{a^2}+\\frac{y_0y}{b^2}=1" }, { "math_id": 3, "text": "\\frac{x_0}{a^2}=\\frac{\\cos \\alpha}{p},\\,\\frac{y_0}{b^2}=\\frac{\\sin \\alpha}{p}." }, { "math_id": 4, "text": "a^2 \\cos^2 \\alpha + b^2 \\sin^2 \\alpha = p^2,\\," }, { "math_id": 5, "text": "a^2 \\cos^2 \\theta + b^2 \\sin^2 \\theta = r^2,\\," }, { "math_id": 6, "text": "a^2 x^2 + b^2 y^2 = (x^2+y^2)^2.\\," }, { "math_id": 7, "text": "r=\\frac{dr}{d\\theta}\\tan \\psi." }, { "math_id": 8, "text": "p=r\\sin \\psi" }, { "math_id": 9, "text": "\\alpha = \\theta + \\psi - \\frac{\\pi}{2}." }, { "math_id": 10, "text": "a \\cos \\theta = -a \\sin \\theta \\tan \\psi" }, { "math_id": 11, "text": "\\tan \\psi = -\\cot \\theta,\\, \\psi = \\frac{\\pi}{2} + \\theta, \\alpha = 2 \\theta." }, { "math_id": 12, "text": "p=r\\sin \\psi\\ = r \\cos \\theta = a \\cos^2 \\theta = a \\cos^2 {\\alpha \\over 2}." }, { "math_id": 13, "text": "r = a \\cos^2 {\\theta \\over 2}." }, { "math_id": 14, "text": "\\frac{p}{r}=\\frac{q}{p}." }, { "math_id": 15, "text": "f(r,\\frac{r^2}{p})=0" }, { "math_id": 16, "text": "\\vec{v} = P - R" }, { "math_id": 17, "text": "\\vec{v} = \\vec{v}_{\\parallel}+\\vec{v}_\\perp" }, { "math_id": 18, "text": "\\vec{v}" }, { "math_id": 19, "text": "\\vec{v}_{\\parallel}" }, { "math_id": 20, "text": "t\\mapsto c(t)+{ c'(t) \\cdot (P-c(t))\\over|c'(t)|^2} c'(t)" }, { "math_id": 21, "text": "X[x,y]=\\frac{(xy'-yx')y'}{x'^2 + y'^2}" }, { "math_id": 22, "text": "Y[x,y]=\\frac{(yx'-xy')x'}{x'^2 + y'^2}." }, { "math_id": 23, "text": "t\\mapsto P-{ c'(t) \\cdot (P-c(t))\\over|c'(t)|^2} c'(t)" } ]
https://en.wikipedia.org/wiki?curid=845864
845928
Capital accumulation
Dynamic that motivates pursuit of profit, central tenet of capitalism Capital accumulation is the dynamic that motivates the pursuit of profit, involving the investment of money or any financial asset with the goal of increasing the initial monetary value of said asset as a financial return whether in the form of profit, rent, interest, royalties or capital gains. The aim of capital accumulation is to create new fixed and working capitals, broaden and modernize the existing ones, grow the material basis of social-cultural activities, as well as constituting the necessary resource for reserve and insurance. The process of capital accumulation forms the basis of capitalism, and is one of the defining characteristics of a capitalist economic system. Definition. The definition of capital accumulation is subject to controversy and ambiguities, because it could refer to: Most often, capital accumulation involves both a net addition and a redistribution of wealth, which may raise the question of who really benefits from it most. If more wealth is produced than there was before, a society becomes richer; the total stock of wealth increases. But if some accumulate capital only at the expense of others, wealth is merely shifted from A to B. It is also possible that some accumulate capital much faster than others. When one person is enriched at the expense of another in circumstances that the law sees as unjust it is called unjust enrichment. In principle, it is possible that a few people or organisations accumulate capital and grow richer, although the total stock of wealth of society "decreases". In economics and accounting, capital accumulation is often equated with investment of profit income or savings, especially in real capital goods. The concentration and centralisation of capital are two of the results of such accumulation (see below). Capital accumulation refers ordinarily to: and by extension to: Both non-financial and financial capital accumulation is usually needed for economic growth, since additional production usually requires additional funds to enlarge the scale of production. Smarter and more productive organization of production can also increase production without increased capital. Capital can be created without increased investment by inventions or improved organization that increase productivity, discoveries of new assets (oil, gold, minerals, etc.), the sale of property, etc. In modern macroeconomics and econometrics the term "capital formation" is often used in preference to "accumulation", though the United Nations Conference on Trade and Development (UNCTAD) refers nowadays to "accumulation". The term is occasionally used in national accounts. Measurement of accumulation. Accumulation can be measured as the monetary value of investments, the amount of income that is reinvested, or as the change in the value of assets owned (the increase in the value of the capital stock). Using company balance sheets, tax data and direct surveys as a basis, government statisticians estimate total investments and assets for the purpose of national accounts, national balance of payments and flow of funds statistics. Usually, the reserve banks and the Treasury provide interpretations and analysis of this data. Standard indicators include capital formation, gross fixed capital formation, fixed capital, household asset wealth, and foreign direct investment. 
Organisations such as the International Monetary Fund, UNCTAD, the World Bank Group, the OECD, and the Bank for International Settlements use national investment data to estimate world trends. The Bureau of Economic Analysis, Eurostat and the Japan Statistical Office provide data on the US, Europe and Japan respectively. Other useful sources of investment information are business magazines such as "Fortune, Forbes, The Economist, Business Week", etc., and various corporate "watchdog" organisations and non-governmental organization publications. A reputable scientific journal is the "Review of Income and Wealth". In the case of the US, the "Analytical Perspectives" document (an annex to the yearly budget) provides useful wealth and capital estimates applying to the whole country. Demand-led growth models. In macroeconomics, following the Harrod–Domar model, the savings ratio (formula_0) and the capital coefficient (formula_1) are regarded as critical factors for accumulation and growth, assuming that all saving is used to finance fixed investment. The rate of growth of the real stock of fixed capital (formula_2) is: formula_3 where formula_4 is the real national income. If the capital-output ratio or capital coefficient (formula_5) is constant, the rate of growth of formula_4 is equal to the rate of growth of formula_2. This is determined by formula_0 (the ratio of net fixed investment or saving to formula_4) and formula_1. A country might, for example, save and invest 12% of its national income, and then if the capital coefficient is 4:1 (i.e. $4 billion must be invested to increase the national income by $1 billion) the rate of growth of the national income might be 3% annually. However, as Keynesian economics points out, savings do not automatically mean investment (as liquid funds may be hoarded, for example). Investment may also not be investment in fixed capital (see above). Assuming that the turnover of total production capital invested remains constant, the proportion of total investment which just maintains the stock of total capital, rather than enlarging it, will typically increase as the total stock increases. The growth rate of incomes and net new investments must then also increase, in order to accelerate the growth of the capital stock. Simply put, the bigger capital grows, the more capital it takes to keep it growing and the more markets must expand. The Harrodian model has a problem of unstable static equilibrium, since if the growth rate is not equal to the Harrodian warranted rate, production will tend to extreme points (infinite or zero production). The neo-Kaleckian models do not suffer from the Harrodian instability but fail to deliver a convergence dynamic of the effective capacity utilization to the planned capacity utilization. In turn, the model of the Sraffian Supermultiplier yields a stable static equilibrium and a convergence to the planned capacity utilization. The Sraffian Supermultiplier model diverges from the Harrodian model since it takes investment as induced and not as autonomous. The autonomous components in this model are the Autonomous Non-Capacity Creating Expenditures, such as exports, credit-led consumption and public spending. The growth rate of these expenditures determines the long run rate of capital accumulation and product growth.
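The numerical example above can be written out as a minimal Python sketch of the Harrod–Domar relationship: with all saving invested, the warranted growth rate is the savings ratio divided by the capital coefficient, so 12% saving and a capital coefficient of 4:1 give roughly 3% growth per year. The function name harrod_domar_growth_rate is simply a label chosen for this illustration.

def harrod_domar_growth_rate(savings_ratio, capital_coefficient):
    # Warranted growth rate g = s / c: all saving s * Y is invested, and
    # c = K / Y units of capital are needed per unit of additional output.
    return savings_ratio / capital_coefficient

g = harrod_domar_growth_rate(savings_ratio=0.12, capital_coefficient=4.0)
print(f"growth rate of national income: {g:.1%}")  # 3.0%

Marxist concept.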
Marx borrowed the idea of capital accumulation or the concentration of capital from early socialist writers such as Charles Fourier, Louis Blanc, Victor Considerant, and Constantin Pecqueur. In Karl Marx's critique of political economy, capital accumulation is the operation whereby profits are reinvested into the economy, increasing the total quantity of capital. Capital was understood by Marx to be expanding value, that is, in other terms, as a sum of capital, usually expressed in money, that is transformed through human labor into a larger value and extracted as profits. Here, capital is defined essentially as economic or commercial asset value that is used by capitalists to obtain additional value (surplus-value). This requires property relations which enable objects of value to be appropriated and owned, and trading rights to be established. Over-accumulation and crisis. The Marxist analysis of capital accumulation and the development of capitalism identifies systemic issues with the process that arise with expansion of the productive forces. A crisis of overaccumulation of capital occurs when the rate of profit is greater than the rate of new profitable investment outlets in the economy, arising from increasing productivity from a rising organic composition of capital (higher capital input to labor input ratio). This depresses the wage bill, leading to stagnant wages and high rates of unemployment for the working class while excess profits search for new profitable investment opportunities. Marx believed that this cyclical process would be the fundamental cause for the dissolution of capitalism and its replacement by socialism, which would operate according to a different economic dynamic. In Marxist thought, socialism would succeed capitalism as the dominant mode of production when the accumulation of capital can no longer sustain itself due to falling rates of profit in real production relative to increasing productivity. A socialist economy would not base production on the accumulation of capital, instead basing production on the criteria of satisfying human needs and directly producing use-values. This concept is encapsulated in the principle of production for use. Concentration and centralization. According to Marx, capital has the tendency for concentration and centralization in the hands of richest capitalists. Marx explains: "It is concentration of capitals already formed, destruction of their individual independence, expropriation of capitalist by capitalist, transformation of many small into few large capitals... Capital grows in one place to a huge mass in a single hand, because it has in another place been lost by many... The battle of competition is fought by cheapening of commodities. The cheapness of commodities demands, "ceteris paribus", on the productiveness of labour, and this again on the scale of production. Therefore, the larger capitals beat the smaller. It will further be remembered that, with the development of the capitalist mode of production, there is an increase in the minimum amount of individual capital necessary to carry on a business under its normal conditions. The smaller capitals, therefore, crowd into spheres of production which Modern Industry has only sporadically or incompletely got hold of. Here competition rages... It always ends in the ruin of many small capitalists, whose capitals partly pass into the hands of their conquerors, partly vanish." Rate of accumulation. 
In Marxian economics, the "rate of accumulation" is defined as (1) the value of the real net increase in the stock of capital in an accounting period, (2) the proportion of realized surplus-value or profit-income which is reinvested, rather than consumed. This rate can be expressed by means of various ratios between the original capital outlay, the realized turnover, surplus-value or profit and reinvestment's (see, e.g., the writings of the economist Michał Kalecki). Other things being equal, the greater the amount of profit-income that is disbursed as personal earnings and used for consumption purposes, the lower the savings rate and the lower the rate of accumulation is likely to be. However, earnings spent on consumption can also stimulate market demand and higher investment. This is the cause of endless controversies in economic theory about "how much to spend, and how much to save". In a boom period of capitalism, the growth of investments is cumulative, i.e. one investment leads to another, leading to a constantly expanding market, an expanding labor force, and an increase in the standard of living for the majority of the people. In a stagnating, decadent capitalism, the accumulation process is increasingly oriented towards investment on military and security forces, real estate, financial speculation, and luxury consumption. In that case, income from value-adding production will decline in favour of interest, rent and tax income, with as a corollary an increase in the level of permanent unemployment. As a rule, the larger the total sum of capital invested, the higher the return on investment will be. The more capital one owns, the more capital one can also borrow and reinvest at a higher rate of profit or interest. The inverse is also true, and this is one factor in the widening gap between the rich and the poor. Ernest Mandel emphasized that the rhythm of capital accumulation and growth depended critically on (1) the division of a society's social product between necessary product and surplus product, and (2) the division of the surplus product between investment and consumption. In turn, this allocation pattern reflected the outcome of competition among capitalists, competition between capitalists and workers, and competition between workers. The pattern of capital accumulation can therefore never be simply explained by commercial factors, it also involved social factors and power relationships. Circuit of capital accumulation from production. Strictly speaking, capital has accumulated only when realized profit income has been "reinvested" in capital assets. But the process of capital accumulation in production has, as suggested in the first volume of Marx's "Das Kapital", at least seven distinct but linked moments: All of these moments do not refer simply to an economic or commercial process. Rather, they assume the existence of legal, social, cultural and economic power conditions, without which creation, distribution and circulation of the new wealth could not occur. This becomes especially clear when the attempt is made to create a market where none exists, or where people refuse to trade. In fact Marx argues that the original or primitive accumulation of capital often occurs through violence, plunder, slavery, robbery, extortion and theft. He argues that the capitalist mode of production requires that people be forced to work in value-adding production for someone else, and for this purpose, they must be cut off from sources of income other than selling their labor power. 
Simple and expanded reproduction. In volume 2 of "Das Kapital", Marx continues the story and shows that, with the aid of bank credit, capital in search of growth can more or less smoothly mutate from one form to another, alternately taking the form of money capital (liquid deposits, securities, etc.), commodity capital (tradeable products, real estate etc.), or production capital (means of production and labor power). His discussion of the simple and expanded reproduction of the conditions of production offers a more sophisticated model of the parameters of the accumulation process as a whole. At simple reproduction, a sufficient amount is produced to sustain society at the given living standard; the stock of capital stays constant. At expanded reproduction, "more" product-value is produced than is necessary to sustain society at a given living standard (a surplus product); the additional product-value is available for investments which enlarge the scale and variety of production. The bourgeois claim that there is no economic law according to which capital is necessarily re-invested in the expansion of production, and that such reinvestment instead depends on anticipated profitability, market expectations and perceptions of investment risk. Such statements only explain the subjective experiences of investors and ignore the objective realities which would influence such opinions. As Marx states in Vol.2, simple reproduction only exists if the variable and surplus capital realized by Dept. 1—producers of means of production—exactly equals that of the constant capital of Dept. 2, producers of articles of consumption (p. 524). Such equilibrium rests on various assumptions, such as a constant labor supply (no population growth). Accumulation does not imply a necessary change in total magnitude of value produced but can simply refer to a change in the composition of an industry (p. 514). Ernest Mandel introduced the additional concept of "contracted economic reproduction", i.e. reduced accumulation where businesses operating at a loss outnumber growing businesses, or economic reproduction on a decreasing scale, for example due to wars, natural disasters or devalorisation. Balanced economic growth requires that different factors in the accumulation process expand in appropriate proportions. But markets themselves cannot spontaneously create that balance; in fact, what drives business activity is precisely the imbalance between supply and demand: inequality is the motor of growth. This partly explains why the worldwide pattern of economic growth is very uneven and unequal, even though markets have existed almost everywhere for a very long time. Some people argue that it also explains government regulation of market trade and protectionism. Origins. According to Marx, capital accumulation has a double origin, namely in trade and in expropriation, both of a legal or illegal kind. The reason is that a stock of capital can be increased through a process of exchange or "trading up", but also through directly taking an asset or resource from someone else, without compensation. David Harvey calls this accumulation by dispossession. Marx does not discuss gifts and grants as a source of capital accumulation, nor does he analyze taxation in detail (he could not, as he died even before completing his major book, "Das Kapital"). The continuation and progress of capital accumulation depends on the removal of obstacles to the expansion of trade, and this has historically often been a violent process. 
As markets expand, more and more new opportunities develop for accumulating capital, because more and more types of goods and services can be traded in. But capital accumulation may also confront resistance, when people refuse to sell, or refuse to buy (for example a strike by investors or workers, or consumer resistance). Capital accumulation as social relation. "Accumulation of capital" sometimes also refers in Marxist writings to the reproduction of capitalist social relations (institutions) on a larger scale over time, i.e., the expansion of the size of the proletariat and of the wealth owned by the bourgeoisie. This interpretation emphasizes that capital ownership, predicated on command over labor, is a social relation: the growth of capital implies the growth of the working class (a "law of accumulation"). In the first volume of "Das Kapital" Marx had illustrated this idea with reference to Edward Gibbon Wakefield's theory of colonisation: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;...Wakefield discovered that in the Colonies, property in money, means of subsistence, machines, and other means of production, does not as yet stamp a man as a capitalist if there be wanting the correlative — the wage-worker, the other man who is compelled to sell himself of his own free-will. He discovered that capital is not a thing, but a social relation between persons, established by the instrumentality of things. Mr. Peel, he moans, took with him from England to Swan River, West Australia, means of subsistence and of production to the amount of £50,000. Mr. Peel had the foresight to bring with him, besides, 3,000 persons of the working-class, men, women, and children. Once arrived at his destination, “Mr. Peel was left without a servant to make his bed or fetch him water from the river.” Unhappy Mr. Peel, who provided for everything except the export of English modes of production to Swan River! In the third volume of "Das Kapital", Marx refers to the "fetishism of capital" reaching its highest point with "interest-bearing capital", because now capital seems to grow of its own accord without anybody doing anything. In this case, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The relations of capital assume their most externalised and most fetish-like form in interest-bearing capital. We have here formula_6, money creating more money, self-expanding value, without the process that effectuates these two extremes. In merchant's capital, formula_7, there is at least the general form of the capitalistic movement, although it confines itself solely to the sphere of circulation, so that profit appears merely as profit derived from alienation; but it is at least seen to be the product of a social relation, not the product of a mere thing. (...) This is obliterated in formula_6, the form of interest-bearing capital. (...) The thing (money, commodity, value) is now capital even as a mere thing, and capital appears as a mere thing. The result of the entire process of reproduction appears as a property inherent in the thing itself. It depends on the owner of the money, i.e., of the commodity in its continually exchangeable form, whether he wants to spend it as money or loan it out as capital. In interest-bearing capital, therefore, this automatic fetish, self-expanding value, money generating money, are brought out in their pure state and in this form it no longer bears the birth-marks of its origin. 
The social relation is consummated in the relation of a thing, of money, to itself.—Instead of the actual transformation of money into capital, we see here only form without content. Markets with social influence. Product recommendations and information about past purchases have been shown to influence consumers choices significantly whether it is for music, movie, book, technological, and other type of products. Social influence often induces a rich-get-richer phenomenon (Matthew effect) where popular products tend to become even more popular. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": " {{\\Delta K} \\over K} = {{{\\Delta K} \\over Y} \\over {K \\over Y}} = {s \\over k }" }, { "math_id": 4, "text": "Y" }, { "math_id": 5, "text": "k={K \\over Y}" }, { "math_id": 6, "text": "M - M'" }, { "math_id": 7, "text": "M - C - M'" } ]
https://en.wikipedia.org/wiki?curid=845928
846412
Conjugate prior
Concept in probability theory In Bayesian probability theory, if, given a likelihood function formula_0, the posterior distribution formula_1 is in the same probability distribution family as the prior probability distribution formula_2, the prior and posterior are then called conjugate distributions with respect to that likelihood function and the prior is called a conjugate prior for the likelihood function formula_0. A conjugate prior is an algebraic convenience, giving a closed-form expression for the posterior; otherwise, numerical integration may be necessary. Further, conjugate priors may give intuition by more transparently showing how a likelihood function updates a prior distribution. The concept, as well as the term "conjugate prior", was introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory. A similar concept had been discovered independently by George Alfred Barnard. Example. The form of the conjugate prior can generally be determined by inspection of the probability density or probability mass function of a distribution. For example, consider a random variable which consists of the number of successes formula_3 in formula_4 Bernoulli trials with "unknown" probability of success formula_5 in [0,1]. This random variable will follow the binomial distribution, with a probability mass function of the form formula_6 The usual conjugate prior is the beta distribution with parameters (formula_7, formula_8): formula_9 where formula_7 and formula_8 are chosen to reflect any existing belief or information (formula_10 and formula_11 would give a uniform distribution) and formula_12 is the Beta function acting as a normalising constant. In this context, formula_7 and formula_8 are called "hyperparameters" (parameters of the prior), to distinguish them from parameters of the underlying model (here formula_5). A typical characteristic of conjugate priors is that the dimensionality of the hyperparameters is one greater than that of the parameters of the original distribution. If all parameters are scalar values, then there will be one more hyperparameter than parameter; but this also applies to vector-valued and matrix-valued parameters. (See the general article on the exponential family, and also consider the Wishart distribution, the conjugate prior of the precision matrix (inverse covariance matrix) of a multivariate normal distribution, for an example where a large dimensionality is involved.) If we sample this random variable and get formula_3 successes and formula_13 failures, then we have formula_14 which is another Beta distribution with parameters formula_15. This posterior distribution could then be used as the prior for more samples, with the hyperparameters simply adding each extra piece of information as it comes. Interpretations. Pseudo-observations. It is often useful to think of the hyperparameters of a conjugate prior distribution as corresponding to having observed a certain number of "pseudo-observations" with properties specified by the parameters. For example, the values formula_7 and formula_8 of a beta distribution can be thought of as corresponding to formula_16 successes and formula_17 failures if the posterior mode is used to choose an optimal parameter setting, or formula_7 successes and formula_8 failures if the posterior mean is used to choose an optimal parameter setting. In general, for nearly all conjugate prior distributions, the hyperparameters can be interpreted in terms of pseudo-observations. 
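The Beta-Binomial update just described is easy to carry out directly. The following Python sketch is an illustration rather than part of the original text: the function name and the particular counts are arbitrary choices, and the prior Beta(2, 2) is used only so that the pseudo-observation reading (one prior success and one prior failure under the posterior-mode interpretation) is visible in the output.

```python
# Minimal sketch of the Beta-Binomial conjugate update (illustrative values).

def beta_binomial_update(alpha, beta, successes, failures):
    """Posterior Beta hyperparameters after observing the given data."""
    return alpha + successes, beta + failures

# Prior Beta(2, 2): one pseudo-success and one pseudo-failure under the
# posterior-mode reading described above.
alpha0, beta0 = 2.0, 2.0

# Observed data: s successes and f failures in n = s + f Bernoulli trials.
s, f = 7, 3

alpha1, beta1 = beta_binomial_update(alpha0, beta0, s, f)

# Point summaries of the posterior over the unknown success probability q.
post_mean = alpha1 / (alpha1 + beta1)            # 9 / 14 = 0.643
post_mode = (alpha1 - 1) / (alpha1 + beta1 - 2)  # 8 / 12 = 0.667

print(f"posterior: Beta({alpha1:g}, {beta1:g})")
print(f"posterior mean = {post_mean:.3f}, posterior mode = {post_mode:.3f}")
```

Feeding the posterior back in as the prior for a further batch of trials amounts to calling the same function again with the new counts, which is the sequential use of the posterior mentioned above.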
This can help provide intuition behind the often messy update equations and help choose reasonable hyperparameters for a prior. Dynamical system. One can think of conditioning on conjugate priors as defining a kind of (discrete time) dynamical system: from a given set of hyperparameters, incoming data updates these hyperparameters, so one can see the change in hyperparameters as a kind of "time evolution" of the system, corresponding to "learning". Starting at different points yields different flows over time. This is again analogous with the dynamical system defined by a linear operator, but note that since different samples lead to different inferences, this is not simply dependent on time but rather on data over time. For related approaches, see Recursive Bayesian estimation and Data assimilation. Practical example. Suppose a rental car service operates in your city. Drivers can drop off and pick up cars anywhere inside the city limits. You can find and rent cars using an app. Suppose you wish to find the probability that you can find a rental car within a short distance of your home address at any time of day. Over three days you look at the app and find the following number of cars within a short distance of your home address: formula_18 Suppose we assume the data comes from a Poisson distribution. In that case, we can compute the maximum likelihood estimate of the parameters of the model, which is formula_19 Using this maximum likelihood estimate, we can compute the probability that there will be at least one car available on a given day: formula_20 This is the Poisson distribution that is "the" most likely to have generated the observed data formula_21. But the data could also have come from another Poisson distribution, e.g., one with formula_22, or formula_23, etc. In fact, there is an infinite number of Poisson distributions that "could" have generated the observed data. With relatively few data points, we should be quite uncertain about which exact Poisson distribution generated this data. Intuitively we should instead take a weighted average of the probability of formula_24 for each of those Poisson distributions, weighted by how likely they each are, given the data we've observed formula_21. Generally, this quantity is known as the posterior predictive distribution formula_25 where formula_26 is a new data point, formula_21 is the observed data and formula_27 are the parameters of the model. Using Bayes' theorem we can expand formula_28 therefore formula_29 Generally, this integral is hard to compute. However, if you choose a conjugate prior distribution formula_2, a closed-form expression can be derived. This is the posterior predictive column in the tables below. Returning to our example, if we pick the Gamma distribution as our prior distribution over the rate of the Poisson distributions, then the posterior predictive is the negative binomial distribution, as can be seen from the table below. The Gamma distribution is parameterized by two hyperparameters formula_30, which we have to choose. By looking at plots of the gamma distribution, we pick formula_31, which seems to be a reasonable prior for the average number of cars. The choice of prior hyperparameters is inherently subjective and based on prior knowledge. 
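Before carrying out the conjugate update, the plug-in calculation and the chosen prior can be restated in a few lines of code. The sketch below is only an illustration (the variable names are mine), and it assumes the Gamma prior is written in its shape-rate form, which matches the update rule used in the next step.

```python
import math

# Observed number of cars near home on three days (from the example above).
counts = [3, 4, 1]

# Maximum-likelihood estimate of the Poisson rate is the sample mean, and the
# corresponding plug-in probability of seeing at least one car.
lam_mle = sum(counts) / len(counts)      # (3 + 4 + 1) / 3 ~= 2.67
p_plugin = 1.0 - math.exp(-lam_mle)      # 1 - e**(-2.67)  ~= 0.93

# Conjugate Gamma(alpha, beta) prior over the Poisson rate, as chosen above.
alpha_prior, beta_prior = 2.0, 2.0

print(f"lambda_MLE = {lam_mle:.2f}, plug-in P(x > 0) = {p_plugin:.2f}")
print(f"prior: Gamma(alpha = {alpha_prior:g}, beta = {beta_prior:g})")
```

The Bayesian treatment below replaces this single plug-in rate with the full posterior over rates.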
Given the prior hyperparameters formula_7 and formula_8 we can compute the posterior hyperparameters formula_32 and formula_33 Given the posterior hyperparameters, we can finally compute the posterior predictive of formula_34 This much more conservative estimate reflects the uncertainty in the model parameters, which the posterior predictive takes into account. Table of conjugate distributions. Let "n" denote the number of observations. In all cases below, the data is assumed to consist of "n" points formula_35 (which will be random vectors in the multivariate cases). If the likelihood function belongs to the exponential family, then a conjugate prior exists, often also in the exponential family; see . Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p(x \\mid \\theta)" }, { "math_id": 1, "text": "p(\\theta \\mid x)" }, { "math_id": 2, "text": "p(\\theta)" }, { "math_id": 3, "text": "s" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "q" }, { "math_id": 6, "text": "p(s) = {n \\choose s}q^s (1-q)^{n-s}" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "\\beta" }, { "math_id": 9, "text": "p(q) = {q^{\\alpha-1}(1-q)^{\\beta-1} \\over \\Beta(\\alpha,\\beta)}" }, { "math_id": 10, "text": "\\alpha=1" }, { "math_id": 11, "text": "\\beta=1" }, { "math_id": 12, "text": "\\Beta(\\alpha,\\beta)" }, { "math_id": 13, "text": "f = n - s" }, { "math_id": 14, "text": "\\begin{align}\n P(s, f \\mid q=x) &= {s+f \\choose s} x^s(1-x)^f,\\\\\n P(q=x) &= {x^{\\alpha-1}(1-x)^{\\beta-1} \\over \\Beta(\\alpha,\\beta)},\\\\\n P(q=x \\mid s,f) &= \\frac{P(s, f \\mid x)P(x)}{\\int P(s, f \\mid y)P(y)dy}\\\\\n & = {{{s+f \\choose s} x^{s+\\alpha-1}(1-x)^{f+\\beta-1} / \\Beta(\\alpha,\\beta)} \\over \\int_{y=0}^1 \\left({s+f \\choose s} y^{s+\\alpha-1}(1-y)^{f+\\beta-1} / \\Beta(\\alpha,\\beta)\\right) dy} \\\\\n & = {x^{s+\\alpha-1}(1-x)^{f+\\beta-1} \\over \\Beta(s+\\alpha,f+\\beta)},\n\\end{align}" }, { "math_id": 15, "text": "(\\alpha + s, \\beta + f)" }, { "math_id": 16, "text": "\\alpha-1" }, { "math_id": 17, "text": "\\beta-1" }, { "math_id": 18, "text": "\\mathbf{x} = [3,4,1]" }, { "math_id": 19, "text": "\\lambda = \\frac{3+4+1}{3} \\approx 2.67." }, { "math_id": 20, "text": "p(x>0 | \\lambda \\approx 2.67) = 1 - p(x=0 | \\lambda \\approx 2.67) = 1-\\frac{2.67^0 e^{-2.67}}{0!} \\approx 0.93" }, { "math_id": 21, "text": "\\mathbf{x}" }, { "math_id": 22, "text": "\\lambda = 3" }, { "math_id": 23, "text": "\\lambda = 2" }, { "math_id": 24, "text": "p(x>0| \\lambda)" }, { "math_id": 25, "text": "p(x|\\mathbf{x}) = \\int_\\theta p(x|\\theta)p(\\theta|\\mathbf{x})d\\theta\\,," }, { "math_id": 26, "text": "x" }, { "math_id": 27, "text": "\\theta" }, { "math_id": 28, "text": "p(\\theta|\\mathbf{x}) = \\frac{p(\\mathbf{x}|\\theta)p(\\theta)}{p(\\mathbf{x})}\\,," }, { "math_id": 29, "text": "p(x|\\mathbf{x}) = \\int_\\theta p(x|\\theta)\\frac{p(\\mathbf{x}|\\theta)p(\\theta)}{p(\\mathbf{x})}d\\theta\\,." }, { "math_id": 30, "text": "\\alpha, \\beta" }, { "math_id": 31, "text": "\\alpha = \\beta = 2" }, { "math_id": 32, "text": "\\alpha' = \\alpha + \\sum_i x_i = 2 + 3+4+1 = 10" }, { "math_id": 33, "text": "\\beta' = \\beta + n = 2+3 = 5" }, { "math_id": 34, "text": "p(x>0|\\mathbf{x}) = 1-p(x=0|\\mathbf{x}) = 1 - NB\\left(0\\, |\\, 10, \\frac{1}{1+5}\\right) \\approx 0.84" }, { "math_id": 35, "text": "x_1,\\ldots,x_n" } ]
https://en.wikipedia.org/wiki?curid=846412
846421
Psychrometrics
Study of gas-vapor mixtures Psychrometrics (or psychrometry, from el " "ψυχρόν" (psuchron)" 'cold' and " "μέτρον" (metron)" 'means of measurement'; also called hygrometry) is the field of engineering concerned with the physical and thermodynamic properties of gas-vapor mixtures. History. With the inventions of the hygrometer and thermometer, the theories of combining the two began to emerge during the sixteenth and seventeenth centuries. In 1818, a German inventor, Ernst Ferdinand August (1795-1870), patented the term “psychrometer”, from the Greek language meaning “cold measure”. The psychrometer is a hygrometric instrument based on the principle that dry air enhances evaporation, unlike wet air, which slows it. Common applications. Although the principles of psychrometry apply to any physical system consisting of gas-vapor mixtures, the most common system of interest is the mixture of water vapor and air, because of its application in heating, ventilation, and air-conditioning and meteorology. In human terms, our thermal comfort is in large part a consequence of not just the temperature of the surrounding air, but (because we cool ourselves via perspiration) the extent to which that air is saturated with water vapor. Many substances are hygroscopic, meaning they attract water, usually in proportion to the relative humidity or above a critical relative humidity. Such substances include cotton, paper, cellulose, other wood products, sugar, calcium oxide (burned lime) and many chemicals and fertilizers. Industries that use these materials are concerned with relative humidity control in production and storage of such materials. Relative humidity is often controlled in manufacturing areas where flammable materials are handled, to avoid fires caused by the static electricity discharges that can occur in very dry air. In industrial drying applications, such as drying paper, manufacturers usually try to achieve an optimum between low relative humidity, which increases the drying rate, and energy usage, which decreases as exhaust relative humidity increases. In many industrial applications it is important to avoid condensation that would ruin product or cause corrosion. Molds and fungi can be controlled by keeping relative humidity low. Wood destroying fungi generally do not grow at relative humidities below 75%. Psychrometric properties. Dry-bulb temperature (DBT). The dry-bulb temperature is the temperature indicated by a thermometer exposed to the air in a place sheltered from direct solar radiation. The term dry-bulb is customarily added to temperature to distinguish it from wet-bulb and dew point temperature. In meteorology and psychrometrics the word temperature by itself without a prefix usually means dry-bulb temperature. Technically, the temperature registered by the dry-bulb thermometer of a psychrometer. The name implies that the sensing bulb or element is in fact dry. WMO provides a 23-page chapter on the measurement of temperature. Wet-bulb temperature (WBT). The thermodynamic wet-bulb temperature is a thermodynamic property of a mixture of air and water vapor. The value indicated by a wet-bulb thermometer often provides an adequate approximation of the thermodynamic wet-bulb temperature. The accuracy of a simple wet-bulb thermometer depends on how fast air passes over the bulb and how well the thermometer is shielded from the radiant temperature of its surroundings. Speeds up to 5,000 ft/min (~60 mph, 25.4 m/s) are best but it may be dangerous to move a thermometer at that speed. 
Errors up to 15% can occur if the air movement is too slow or if there is too much radiant heat present (from sunlight, for example). A wet bulb temperature taken with air moving at about 1–2 m/s is referred to as a screen temperature, whereas a temperature taken with air moving about 3.5 m/s or more is referred to as sling temperature. A psychrometer is a device that includes both a dry-bulb and a wet-bulb thermometer. A sling psychrometer requires manual operation to create the airflow over the bulbs, but a powered psychrometer includes a fan for this function. Knowing both the dry-bulb temperature (DBT) and wet-bulb temperature (WBT), one can determine the relative humidity (RH) from the psychrometric chart appropriate to the air pressure. Dew point temperature. The saturation temperature of the moisture present in the sample of air; it can also be defined as the temperature at which the vapour changes into liquid (condensation). Usually, the level at which water vapor changes into liquid marks the base of a cloud in the atmosphere and is hence called the condensation level. So the temperature value that allows this process (condensation) to take place is called the 'dew point temperature'. A simplified definition is the temperature at which the water vapour turns into "dew" (Chamunoda Zambuko 2012). Humidity. Specific humidity. Specific humidity is defined as the mass of water vapor as a proportion of the mass of the moist air sample (including both dry air and the water vapor); it is closely related to humidity ratio and always lower in value. Absolute humidity. The mass of water vapor per unit volume of air containing the water vapor. This quantity is also known as the water vapor density. Relative humidity. A ratio, expressed in percent, of the amount of atmospheric moisture present relative to the amount that would be present if the air were saturated. Specific enthalpy. Analogous to the specific enthalpy of a pure substance. In psychrometrics, the term quantifies the total energy of both the dry air and water vapour per kilogram of dry air. Specific volume. Analogous to the specific volume of a pure substance. However, in psychrometrics, the term quantifies the total volume of both the dry air and water vapour per unit mass of dry air. Psychrometric ratio. The psychrometric ratio is the ratio of the heat transfer coefficient to the product of mass transfer coefficient and humid heat at a wetted surface. It may be evaluated with the following equation: formula_0 where: formula_1 = psychrometric ratio, dimensionless formula_2 = convective heat transfer coefficient, W m−2 K−1 formula_3 = convective mass transfer coefficient, kg m−2 s−1 formula_4 = humid heat, J kg−1 K−1 The psychrometric ratio is an important property in the area of psychrometry, as it relates the absolute humidity and saturation humidity to the difference between the dry bulb temperature and the adiabatic saturation temperature. Mixtures of air and water vapor are the most common systems encountered in psychrometry. The psychrometric ratio of air-water vapor mixtures is approximately unity, which implies that the difference between the adiabatic saturation temperature and wet bulb temperature of air-water vapor mixtures is small. This property of air-water vapor systems simplifies drying and cooling calculations often performed using psychrometric relationships. Humid heat. Humid heat is the constant-pressure specific heat of moist air, per unit mass of the dry air. 
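As a rough orientation, the humid heat is often approximated (this approximation and the numbers below are assumptions for the sketch, not statements from this article) as c_s ≈ c_pa + W·c_pw, with c_pa ≈ 1.005 kJ/(kg·K) for dry air, c_pw ≈ 1.88 kJ/(kg·K) for water vapor, and W the humidity ratio. The short Python sketch below uses that approximation to evaluate the psychrometric ratio defined above; the convective coefficients are placeholder values chosen only for illustration.

```python
# Illustrative sketch only: the property values and transfer coefficients
# below are assumed round numbers, not data from this article.

C_P_DRY_AIR = 1005.0      # J kg^-1 K^-1, specific heat of dry air (approx.)
C_P_WATER_VAPOR = 1880.0  # J kg^-1 K^-1, specific heat of water vapor (approx.)

def humid_heat(W):
    """Humid heat c_s per unit mass of dry air, J kg^-1 K^-1 (approximation)."""
    return C_P_DRY_AIR + W * C_P_WATER_VAPOR

def psychrometric_ratio(h_c, k_y, W):
    """r = h_c / (k_y * c_s), dimensionless."""
    return h_c / (k_y * humid_heat(W))

# Example: humidity ratio of 0.010 kg water vapor per kg dry air, and
# placeholder convective coefficients h_c (W m^-2 K^-1) and k_y (kg m^-2 s^-1).
W = 0.010
h_c = 30.0
k_y = 0.029

print(f"humid heat c_s = {humid_heat(W):.0f} J/(kg K)")
print(f"psychrometric ratio r = {psychrometric_ratio(h_c, k_y, W):.2f}")
```

With these illustrative numbers the ratio comes out close to one, consistent with the statement above that air-water vapor mixtures have a psychrometric ratio of approximately unity.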
Equivalently, the humid heat is the amount of heat required to change the temperature of the moist air associated with a unit mass of dry air by 1 °C. Pressure. Many psychrometric properties depend on pressure: Psychrometric charts. Terminology. A psychrometric chart is a graph of the thermodynamic parameters of moist air at a constant pressure, often equated to an elevation relative to sea level. The ASHRAE-style psychrometric chart, shown here, was pioneered by Willis Carrier in 1904. It depicts these parameters and is thus a graphical equation of state. The parameters are: formula_6 where: formula_7 = mass of dry air formula_8 = mass of water vapor formula_9 = total volume formula_10 = moist air specific volume, m3 kg−1 formula_11 = humidity ratio The psychrometric chart allows all the parameters of some moist air to be determined from any three independent parameters, one of which must be the pressure. Changes in "state", such as when two air streams mix, can be modeled easily and somewhat graphically using the correct psychrometric chart for the location's air pressure or elevation relative to sea level. For locations at not more than 2000 ft (600 m) of altitude it is common practice to use the sea-level psychrometric chart. In the "ω"-"t" chart, the dry bulb temperature ("t") appears as the abscissa (horizontal axis) and the humidity ratio ("ω") appears as the ordinate (vertical axis). A chart is valid for a given air pressure (or elevation above sea level). From any two independent ones of the six parameters (dry bulb temperature, wet bulb temperature, relative humidity, humidity ratio, specific enthalpy, and specific volume), all the others can be determined. There are formula_12 possible combinations of independent and derived parameters. Locating parameters on chart. The region above the saturation curve is a two-phase region that represents a mixture of saturated moist air and liquid water, in thermal equilibrium. The protractor on the upper left of the chart has two scales. The inner scale represents the sensible-total heat ratio (SHF). The outer scale gives the ratio of enthalpy difference to humidity difference. This is used to establish the slope of a condition line between two processes. The horizontal component of the condition line is the change in sensible heat while the vertical component is the change in latent heat. How to read the chart: fundamental examples. Psychrometric charts are available in SI (metric) and IP (U.S./Imperial) units. They are also available in low and high temperature ranges and for different pressures. A common application is determining the final humidity of air leaving an air conditioner evaporator coil and then heated to a higher temperature. Assume that the air leaving the coil is at 10 °C (50 °F) and is heated to room temperature (not mixed with room air); the final relative humidity is found by following the horizontal humidity-ratio line from the dew point or saturation line to the room dry-bulb temperature line and reading the relative humidity. In typical practice the conditioned air is mixed with room air that is being infiltrated with outside air. Mollier diagram. The "Mollier "i"-"x"" (Enthalpy – Humidity Mixing Ratio) diagram, developed by Richard Mollier in 1923, is an alternative psychrometric chart, preferred by many users in Germany, Austria, Switzerland, the Netherlands, Belgium, France, Scandinavia, Eastern Europe, and Russia. The underlying psychrometric parameter data for the psychrometric chart and the Mollier diagram are identical. 
At first glance there is little resemblance between the charts, but if the chart is rotated by ninety degrees and looked at in a mirror the resemblance becomes apparent. The Mollier diagram coordinates are enthalpy and humidity ratio. The enthalpy coordinate is "skewed" and the lines of constant enthalpy are parallel and evenly spaced. The ASHRAE psychrometric charts since 1961 use similar plotting coordinates. Some psychrometric charts use "dry-bulb temperature" and "humidity ratio" coordinates. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nr = \\frac {h_c} {k_y c_s}\\,\n" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "h_c" }, { "math_id": 3, "text": "k_y" }, { "math_id": 4, "text": "c_s" }, { "math_id": 5, "text": " \\rho " }, { "math_id": 6, "text": "\n\\rho = \\frac {M_{da} + M_w} {V}\\, = \\left ( \\frac {1}{v} \\right ) (1+W)\n" }, { "math_id": 7, "text": "M_{da}" }, { "math_id": 8, "text": "M_w" }, { "math_id": 9, "text": "V" }, { "math_id": 10, "text": "v" }, { "math_id": 11, "text": "W = \\frac {M_w} {M_{da}}\\," }, { "math_id": 12, "text": "\\left({6 \\atop 2}\\right) = 15" } ]
https://en.wikipedia.org/wiki?curid=846421
8464940
Range of a projectile
In physics, a projectile launched with specific initial conditions will have a range. It may be more predictable assuming a flat Earth with a uniform gravity field and no air resistance. The horizontal ranges of a projectile are equal for two complementary angles of projection with the same velocity. The following applies for ranges which are small compared to the size of the Earth. For longer ranges see sub-orbital spaceflight. The maximum horizontal distance travelled by the projectile, neglecting air resistance, can be calculated as follows: formula_0 where "v" is the initial speed of the projectile, "g" is the gravitational acceleration, "θ" is the angle at which the projectile is launched, and "y"0 is the initial height of the projectile above the ground. If y0 is taken to be zero, meaning that the object is being launched on flat ground, the range of the projectile will simplify to: formula_1 Ideal projectile motion. Ideal projectile motion states that there is no air resistance and no change in gravitational acceleration. This assumption simplifies the mathematics greatly, and is a close approximation of actual projectile motion in cases where the distances travelled are small. Ideal projectile motion is also a good introduction to the topic before adding the complications of air resistance. Derivations. A launch angle of 45 degrees displaces the projectile the farthest horizontally. This follows from the equation for the range: formula_2 We can see that the range will be maximum when the value of formula_3 is the highest (i.e. when it is equal to 1). Clearly, formula_4 has to be 90 degrees. That is to say, formula_5 is 45 degrees. Flat ground. First we examine the case where (y0) is zero. The horizontal position of the projectile is formula_6 In the vertical direction formula_7 We are interested in the time when the projectile returns to the height from which it was launched. Let tg be any time when the height of the projectile is equal to its initial value. formula_8 By factoring: formula_9 or formula_10 But t = T = time of flight, so formula_11 The first solution corresponds to when the projectile is first launched. The second solution is the useful one for determining the range of the projectile. Plugging this value for (t) into the horizontal equation yields formula_12 Applying the trigonometric identity formula_13 with x = y = θ, which gives formula_14, allows us to simplify the solution to formula_15 Note that when (θ) is 45°, the solution becomes formula_16 Uneven ground. Now we will allow (y0) to be nonzero. Our equations of motion are now formula_17 and formula_18 Once again we solve for (t) in the case where the (y) position of the projectile is at zero (since this is how we defined our starting height to begin with) formula_19 Again by applying the quadratic formula we find two solutions for the time. After several steps of algebraic manipulation formula_20 The square root must be a positive number, and since the velocity and the sine of the launch angle can also be assumed to be positive, the solution with the greater time will occur when the positive of the plus or minus sign is used. Thus, the solution is formula_21 Solving for the range once again formula_22 To maximize the range at any height formula_23 Checking the limit as formula_24 approaches 0 formula_25 Angle of impact. The angle ψ at which the projectile lands is given by: formula_26 For maximum range, this results in the following equation: formula_27 Rewriting the original solution for θ, we get: formula_28 Multiplying this by the equation for (tan ψ)^2 gives: formula_29 Because of the trigonometric identity formula_30, this means that θ + ψ must be 90 degrees. 
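To make the expressions above concrete, here is a short Python sketch (an illustration with assumed launch values, not part of the original article) that evaluates the general range formula, its flat-ground special case, and the optimum launch angle for a nonzero initial height.

```python
import math

# Illustrative numbers only; g is taken as 9.81 m/s^2.

def projectile_range(v, theta_deg, y0=0.0, g=9.81):
    """Horizontal range for launch speed v (m/s), launch angle (deg), initial height y0 (m)."""
    th = math.radians(theta_deg)
    vy = v * math.sin(th)
    return (v * math.cos(th) / g) * (vy + math.sqrt(vy * vy + 2.0 * g * y0))

def optimal_angle_deg(v, y0, g=9.81):
    """Launch angle maximizing the range from height y0 (the arccos expression above)."""
    return math.degrees(math.acos(math.sqrt((2.0 * g * y0 + v * v) / (2.0 * g * y0 + 2.0 * v * v))))

v = 20.0  # launch speed in m/s
print(projectile_range(v, 45.0))                                 # flat ground: v^2/g ~= 40.8 m
print(optimal_angle_deg(v, 0.0))                                 # 45 degrees on flat ground
print(optimal_angle_deg(v, 10.0))                                # ~39 degrees from a 10 m height
print(projectile_range(v, optimal_angle_deg(v, 10.0), y0=10.0))  # ~49.8 m
```

For y0 = 0 the optimum returned is exactly 45 degrees, matching the limit computed above; for a positive launch height the optimum angle is smaller and the range is correspondingly longer than the flat-ground value.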
Actual projectile motion. In addition to air resistance, which slows a projectile and reduces its range, many other factors also have to be accounted for when actual projectile motion is considered. Projectile characteristics. Generally speaking, a projectile with greater volume faces greater air resistance, reducing the range of the projectile. (And see Trajectory of a projectile.) Air resistance drag can be modified by the projectile shape: a tall and wide, but short projectile will face greater air resistance than a low and narrow, but long, projectile of the same volume. The surface of the projectile also must be considered: a smooth projectile will face less air resistance than a rough-surfaced one, and irregularities on the surface of a projectile may change its trajectory if they create more drag on one side of the projectile than on the other. However, certain irregularities such as dimples on a golf ball may actually increase its range by reducing the amount of turbulence caused behind the projectile as it travels. Mass also becomes important, as a more massive projectile will have more kinetic energy, and will thus be less affected by air resistance. The distribution of mass within the projectile can also be important, as an unevenly weighted projectile may spin undesirably, causing irregularities in its trajectory due to the magnus effect. If a projectile is given rotation along its axes of travel, irregularities in the projectile's shape and weight distribution tend to be cancelled out. See rifling for a greater explanation. Firearm barrels. For projectiles that are launched by firearms and artillery, the nature of the gun's barrel is also important. Longer barrels allow more of the propellant's energy to be given to the projectile, yielding greater range. Rifling, while it may not increase the average (arithmetic mean) range of many shots from the same gun, will increase the accuracy and precision of the gun. Very large ranges. Some cannons or howitzers have been created with a very large range. During World War I the Germans created an exceptionally large cannon, the Paris Gun, which could fire a shell more than 80 miles (130 km). North Korea has developed a gun known in the West as Koksan, with a range of 60 km using rocket-assisted projectiles. (And see Trajectory of a projectile.) Such cannons are distinguished from rockets, or ballistic missiles, which have their own rocket engines, which continue to accelerate the missile for a period after they have been launched. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " d = \\frac{v \\cos \\theta}{g} \\left( v \\sin \\theta + \\sqrt{v^2 \\sin^2 \\theta + 2gy_0} \\right) " }, { "math_id": 1, "text": " d = \\frac{v^{2} \\sin 2\\theta}{g} " }, { "math_id": 2, "text": " d = \\frac{v^2 \\sin 2 \\theta} {g} " }, { "math_id": 3, "text": " \\sin 2 \\theta " }, { "math_id": 4, "text": " 2 \\theta " }, { "math_id": 5, "text": " \\theta " }, { "math_id": 6, "text": " x(t) = v t \\cos \\theta " }, { "math_id": 7, "text": " y(t) = v t \\sin \\theta - \\frac{1} {2} g t^2 " }, { "math_id": 8, "text": " 0 = v t \\sin \\theta - \\frac{1} {2} g t^2 " }, { "math_id": 9, "text": " t = 0 " }, { "math_id": 10, "text": " t = \\frac{2 v \\sin \\theta} {g} " }, { "math_id": 11, "text": " T = \\frac{2 v \\sin \\theta} {g} " }, { "math_id": 12, "text": " x = \\frac {2 v^2 \\cos \\theta \\, \\sin \\theta } {g} " }, { "math_id": 13, "text": "\\sin(x+y) = \\sin x \\, \\cos y \\ + \\ \\sin y \\, \\cos x " }, { "math_id": 14, "text": "\\sin 2\\theta = 2 \\sin \\theta \\, \\cos \\theta " }, { "math_id": 15, "text": " d = \\frac {v^2 \\sin 2 \\theta}{g} " }, { "math_id": 16, "text": " d_{\\max} = \\frac {v^2} {g} " }, { "math_id": 17, "text": " x(t) = v t \\cos \\theta " }, { "math_id": 18, "text": " y(t) = y_0 + v t \\sin \\theta - \\frac{1}{2} g t^2 " }, { "math_id": 19, "text": " 0 = y_0 + v t \\sin \\theta - \\frac{1} {2} g t^2 " }, { "math_id": 20, "text": " t = \\frac {v \\sin \\theta} {g} \\pm \\frac {\\sqrt{v^2 \\sin^2 \\theta + 2 g y_0}} {g} " }, { "math_id": 21, "text": " t = \\frac {v \\sin \\theta} {g} + \\frac {\\sqrt{v^2 \\sin^2 \\theta + 2 g y_0}} {g} " }, { "math_id": 22, "text": " d = \\frac {v \\cos \\theta} {g} \\left ( v \\sin \\theta + \\sqrt{v^2 \\sin^2 \\theta + 2 g y_0} \\right)" }, { "math_id": 23, "text": " \\theta = \\arccos \\sqrt{ \\frac {2 g y_0 + v^2} {2 g y_0 + 2v^2}} " }, { "math_id": 24, "text": " y_0 " }, { "math_id": 25, "text": " \\lim_{y_0 \\to 0} \\arccos \\sqrt{ \\frac {2 g y_0 + v^2} {2 g y_0 + 2v^2}} = \\frac {\\pi} {4} " }, { "math_id": 26, "text": " \\tan \\psi = \\frac {-v_y(t_d)} {v_x(t_d)} = \\frac {\\sqrt { v^2 \\sin^2 \\theta + 2 g y_0 }} { v \\cos \\theta}" }, { "math_id": 27, "text": " \\tan^2 \\psi = \\frac { 2 g y_0 + v^2 } { v^2 } = C+1" }, { "math_id": 28, "text": " \\tan^2 \\theta = \\frac { 1 - \\cos^2 \\theta } { \\cos^2 \\theta } = \\frac { v^2 } { 2 g y_0 + v^2 } = \\frac { 1 } { C + 1 }" }, { "math_id": 29, "text": " \\tan^2 \\psi \\, \\tan^2 \\theta = \\frac { 2 g y_0 + v^2 } { v^2 } \\frac { v^2 } { 2 g y_0 + v^2 } = 1" }, { "math_id": 30, "text": " \\tan (\\theta + \\psi) = \\frac { \\tan \\theta + \\tan \\psi } { 1 - \\tan \\theta \\tan \\psi } " } ]
https://en.wikipedia.org/wiki?curid=8464940