Columns: id (string, 2-8 chars); title (string, 1-130 chars); text (string, 0-252k chars); formulas (list, 1-823 items); url (string, 38-44 chars)
7156065
Dagger symmetric monoidal category
Symmetric monoidal category with a special involution In the mathematical field of category theory, a dagger symmetric monoidal category is a symmetric monoidal category formula_0 that also possesses a dagger structure. That is, this category comes equipped not only with a tensor product in the category theoretic sense but also with a dagger structure, which is used to describe unitary morphisms and self-adjoint morphisms in formula_1: abstract analogues of those found in FdHilb, the category of finite-dimensional Hilbert spaces. This type of category was introduced by Peter Selinger as an intermediate structure between dagger categories and the dagger compact categories that are used in categorical quantum mechanics, an area that now also considers dagger symmetric monoidal categories when dealing with infinite-dimensional quantum mechanical concepts. Formal definition. A dagger symmetric monoidal category is a symmetric monoidal category formula_1 that also has a dagger structure such that for all formula_2, formula_3 and all formula_4 and formula_5 in formula_6: formula_7 formula_8 formula_9 formula_10 formula_11 Here, formula_12 and formula_13 are the natural isomorphisms that form the symmetric monoidal structure. Examples. The following categories are examples of dagger symmetric monoidal categories: the category FdHilb of finite-dimensional Hilbert spaces and linear maps, with the tensor product of Hilbert spaces and with the dagger of a map given by its adjoint, and the category Rel of sets and relations, with the cartesian product as tensor product and the converse relation as dagger. A dagger symmetric monoidal category that is also compact closed is a dagger compact category; both of the above examples are in fact dagger compact. References. <templatestyles src="Refbegin/styles.css" /> <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\langle\\mathbf{C},\\otimes, I\\rangle" }, { "math_id": 1, "text": "\\mathbf{C}" }, { "math_id": 2, "text": "f:A\\rightarrow B " }, { "math_id": 3, "text": "g:C\\rightarrow D " }, { "math_id": 4, "text": " A,B,C" }, { "math_id": 5, "text": " D" }, { "math_id": 6, "text": "Ob(\\mathbf{C})" }, { "math_id": 7, "text": " (f\\otimes g)^\\dagger=f^\\dagger\\otimes g^\\dagger:B\\otimes D\\rightarrow A\\otimes C " }, { "math_id": 8, "text": " \\alpha^\\dagger_{A,B,C}=\\alpha^{-1}_{A,B,C}:A\\otimes (B\\otimes C)\\rightarrow (A\\otimes B)\\otimes C" }, { "math_id": 9, "text": " \\rho^\\dagger_A=\\rho^{-1}_A:A \\rightarrow A \\otimes I" }, { "math_id": 10, "text": " \\lambda^\\dagger_A=\\lambda^{-1}_A: A \\rightarrow I \\otimes A" }, { "math_id": 11, "text": " \\sigma^\\dagger_{A,B}=\\sigma^{-1}_{A,B}:B \\otimes A \\rightarrow A \\otimes B" }, { "math_id": 12, "text": "\\alpha,\\lambda,\\rho" }, { "math_id": 13, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=7156065
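The FdHilb example above can be made concrete with a short sketch. The following Python/NumPy code (an illustration added here, not part of the article) models morphisms as complex matrices, the tensor product as the Kronecker product, and the dagger as the conjugate transpose, and checks the compatibility condition formula_7 together with the definition of a unitary morphism.

import numpy as np

def dagger(f):
    # In FdHilb the dagger of a linear map is its adjoint (conjugate transpose).
    return f.conj().T

def tensor(f, g):
    # The monoidal product of linear maps is the Kronecker product.
    return np.kron(f, g)

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))  # f : A -> B
g = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))  # g : C -> D

# (f (x) g)^dagger = f^dagger (x) g^dagger, a map B (x) D -> A (x) C.
assert np.allclose(dagger(tensor(f, g)), tensor(dagger(f), dagger(g)))

# A unitary morphism u satisfies u^dagger u = id and u u^dagger = id.
theta = 0.3
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
assert np.allclose(dagger(u) @ u, np.eye(2))
assert np.allclose(u @ dagger(u), np.eye(2))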
715688
Chargino
In particle physics, the chargino is a hypothetical particle: one of the mass eigenstates of the charged superpartners, i.e. any new electrically charged fermion (with spin 1/2) predicted by supersymmetry. Charginos are linear combinations of the charged wino and the charged higgsinos. There are two charginos, both fermions and electrically charged; they are typically labeled formula_0 (the lighter) and formula_1 (the heavier), with formula_2 used to refer to the neutralinos. The heavier chargino can decay through the neutral Z boson to the lighter chargino. Both can decay through a charged W boson to a neutralino. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\tilde{\\chi}_1^\\pm" }, { "math_id": 1, "text": "\\tilde{\\chi}_2^\\pm" }, { "math_id": 2, "text": "\\tilde{\\chi}_i^0" } ]
https://en.wikipedia.org/wiki?curid=715688
7156954
Algebraically closed group
Group allowing solution of all algebraic equations In group theory, a group formula_0 is algebraically closed if any finite set of equations and inequations with coefficients in formula_0 that is consistent with formula_0 (that is, solvable in some extension of formula_0) already has a solution in formula_0 itself, without needing a group extension. This notion is made precise in the formal definition below. Informal discussion. Suppose we wished to find an element formula_1 of a group formula_2 satisfying the conditions (equations and inequations): formula_3 formula_4 formula_5 Then it is easy to see that this is impossible because the first two equations imply formula_6. In this case we say the set of conditions is inconsistent with formula_2. (In fact this set of conditions is inconsistent with any group whatsoever.) Now suppose formula_2 is the two-element group consisting of the identity 1 and a single element a of order 2. Then the conditions: formula_3 formula_5 have a solution in formula_2, namely formula_7. However the conditions: formula_8 formula_9 do not have a solution in formula_2, as can easily be checked. However, if we extend the group formula_11 to a four-element group formula_12 by adjoining two new elements b and c whose squares both equal a, then the conditions have two solutions, namely formula_13 and formula_14. Thus there are three possibilities regarding such conditions: they may be inconsistent with formula_2, having no solution in any group extension of formula_2; they may be consistent with formula_2 and have a solution in formula_2 itself; or they may be consistent with formula_2 but have a solution only in some extension of formula_2. It is reasonable to ask whether there are any groups formula_15 such that, whenever a set of conditions like these has a solution at all, it has a solution in formula_15 itself. The answer turns out to be "yes", and we call such groups algebraically closed groups. Formal definition. We first need some preliminary ideas. If formula_2 is a group and formula_16 is the free group on countably many generators, then by a finite set of equations and inequations with coefficients in formula_2 we mean a pair of subsets formula_17 and formula_18 of formula_19, the free product of formula_16 and formula_2. This formalizes the notion of a set of equations and inequations consisting of variables formula_20 and elements formula_21 of formula_2. The set formula_17 represents equations like: formula_22 formula_23 formula_24 The set formula_18 represents inequations like formula_25 formula_24 By a solution in formula_2 to this finite set of equations and inequations, we mean a homomorphism formula_26 such that formula_27 for all formula_28 and formula_29 for all formula_30, where formula_31 is the unique homomorphism formula_32 that equals formula_33 on formula_16 and is the identity on formula_2. This formalizes the idea of substituting elements of formula_2 for the variables to get true equations and inequations. In the example the substitutions formula_34 and formula_35 yield: formula_36 formula_37 formula_24 formula_38 formula_24 We say the finite set of equations and inequations is consistent with formula_2 if we can solve them in a "bigger" group formula_10. More formally: the equations and inequations are consistent with formula_2 if there is a group formula_10 and an embedding formula_39 such that the finite set of equations and inequations formula_40 and formula_41 has a solution in formula_10, where formula_42 is the unique homomorphism formula_43 that equals formula_44 on formula_2 and is the identity on formula_16. Now we formally define the group formula_0 to be algebraically closed if every finite set of equations and inequations that has coefficients in formula_0 and is consistent with formula_0 has a solution in formula_0. Known results. 
It is difficult to give concrete examples of algebraically closed groups, as a number of structural results about them indicate; the proofs of these results are in general very complex. However, a sketch of the proof that a countable group formula_45 can be embedded in an algebraically closed group follows. First we embed formula_45 in a countable group formula_46 with the property that every finite set of equations and inequations with coefficients in formula_45 that is consistent with formula_46 has a solution in formula_46, as follows: There are only countably many finite sets of equations and inequations with coefficients in formula_45. Fix an enumeration formula_47 of them. Define groups formula_48 inductively by: formula_49 formula_50 Now let: formula_51 Now iterate this construction to get a sequence of groups formula_52 and let: formula_53 Then formula_0 is a countable group containing formula_45. It is algebraically closed because any finite set of equations and inequations that is consistent with formula_0 must have coefficients in some formula_54 and so must have a solution in formula_55.
[ { "math_id": 0, "text": "A\\ " }, { "math_id": 1, "text": "x\\ " }, { "math_id": 2, "text": "G\\ " }, { "math_id": 3, "text": "x^2=1\\ " }, { "math_id": 4, "text": "x^3=1\\ " }, { "math_id": 5, "text": "x\\ne 1\\ " }, { "math_id": 6, "text": "x=1\\ " }, { "math_id": 7, "text": "x=a\\ " }, { "math_id": 8, "text": "x^4=1\\ " }, { "math_id": 9, "text": "x^2a^{-1} = 1\\ " }, { "math_id": 10, "text": "H\\ " }, { "math_id": 11, "text": "G \\ " }, { "math_id": 12, "text": "H \\ " }, { "math_id": 13, "text": "x=b \\ " }, { "math_id": 14, "text": "x=c \\ " }, { "math_id": 15, "text": "A \\ " }, { "math_id": 16, "text": "F\\ " }, { "math_id": 17, "text": "E\\ " }, { "math_id": 18, "text": "I\\ " }, { "math_id": 19, "text": "F\\star G" }, { "math_id": 20, "text": "x_i\\ " }, { "math_id": 21, "text": "g_j\\ " }, { "math_id": 22, "text": "x_1^2g_1^4x_3=1" }, { "math_id": 23, "text": "x_3^2g_2x_4g_1=1" }, { "math_id": 24, "text": "\\dots\\ " }, { "math_id": 25, "text": "g_5^{-1}x_3\\ne 1" }, { "math_id": 26, "text": "f:F\\rightarrow G" }, { "math_id": 27, "text": "\\tilde{f}(e)=1\\ " }, { "math_id": 28, "text": "e\\in E" }, { "math_id": 29, "text": "\\tilde{f}(i)\\ne 1\\ " }, { "math_id": 30, "text": "i\\in I" }, { "math_id": 31, "text": "\\tilde{f}" }, { "math_id": 32, "text": "\\tilde{f}:F\\star G\\rightarrow G" }, { "math_id": 33, "text": "f\\ " }, { "math_id": 34, "text": "x_1\\mapsto g_6, x_3\\mapsto g_7" }, { "math_id": 35, "text": "x_4\\mapsto g_8" }, { "math_id": 36, "text": "g_6^2g_1^4g_7=1" }, { "math_id": 37, "text": "g_7^2g_2g_8g_1=1" }, { "math_id": 38, "text": "g_5^{-1}g_7\\ne 1" }, { "math_id": 39, "text": "h:G\\rightarrow H" }, { "math_id": 40, "text": "\\tilde{h}(E)" }, { "math_id": 41, "text": "\\tilde{h}(I)" }, { "math_id": 42, "text": "\\tilde{h}" }, { "math_id": 43, "text": "\\tilde{h}:F\\star G\\rightarrow F\\star H" }, { "math_id": 44, "text": "h\\ " }, { "math_id": 45, "text": "C\\ " }, { "math_id": 46, "text": "C_1\\ " }, { "math_id": 47, "text": "S_0,S_1,S_2,\\dots\\ " }, { "math_id": 48, "text": "D_0,D_1,D_2,\\dots\\ " }, { "math_id": 49, "text": "D_0 = C\\ " }, { "math_id": 50, "text": "D_{i+1} = \n\\left\\{\\begin{matrix} \nD_i\\ &\\mbox{if}\\ S_i\\ \\mbox{is not consistent with}\\ D_i \\\\\n\\langle D_i,h_1,h_2,\\dots,h_n \\rangle &\\mbox{if}\\ S_i\\ \\mbox{has a solution in}\\ H\\supseteq D_i\\ \\mbox{with}\\ x_j\\mapsto h_j\\ 1\\le j\\le n\n\\end{matrix}\\right.\n" }, { "math_id": 51, "text": "C_1=\\cup_{i=0}^{\\infty}D_{i}" }, { "math_id": 52, "text": "C=C_0,C_1,C_2,\\dots\\ " }, { "math_id": 53, "text": "A=\\cup_{i=0}^{\\infty}C_{i}" }, { "math_id": 54, "text": "C_i\\ " }, { "math_id": 55, "text": "C_{i+1}\\ " } ]
https://en.wikipedia.org/wiki?curid=7156954
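The informal discussion above checks small sets of equations and inequations against small groups by hand. The following Python sketch (an illustration added here, not from the article; the multiplication tables are the two-element and four-element groups described above) does the same check by brute force: it searches for an assignment of group elements to the variables that makes every equation equal the identity and every inequation differ from it.

from itertools import product

def solve(table, identity, equations, inequations):
    # Brute-force search for a solution of a finite set of equations and
    # inequations in a finite group given by its multiplication table.
    # A word is a tuple of symbols; symbols naming group elements stand for
    # themselves, all other symbols are variables.
    elements = list(table)
    variables = sorted({s for w in equations + inequations for s in w
                        if s not in elements})

    def evaluate(word, assignment):
        value = identity
        for symbol in word:
            value = table[value][assignment.get(symbol, symbol)]
        return value

    for choice in product(elements, repeat=len(variables)):
        assignment = dict(zip(variables, choice))
        if (all(evaluate(w, assignment) == identity for w in equations) and
                all(evaluate(w, assignment) != identity for w in inequations)):
            return assignment
    return None

# G = {1, a} with a*a = 1.
G = {'1': {'1': '1', 'a': 'a'},
     'a': {'1': 'a', 'a': '1'}}
# H = {1, a, b, c} with b*b = c*c = a and b*c = c*b = 1.
H = {'1': {'1': '1', 'a': 'a', 'b': 'b', 'c': 'c'},
     'a': {'1': 'a', 'a': '1', 'b': 'c', 'c': 'b'},
     'b': {'1': 'b', 'a': 'c', 'b': 'a', 'c': '1'},
     'c': {'1': 'c', 'a': 'b', 'b': '1', 'c': 'a'}}

print(solve(G, '1', [('x', 'x')], [('x',)]))              # x^2 = 1, x != 1  ->  x = a
print(solve(G, '1', [('x',) * 4, ('x', 'x', 'a')], []))   # x^4 = 1, x^2*a = 1: no solution in G
print(solve(H, '1', [('x',) * 4, ('x', 'x', 'a')], []))   # ...but x = b (or x = c) works in H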
71570180
Talent scheduling
Talent scheduling is an optimization problem in computer science and operations research, and it is also a problem in combinatorial optimization. Suppose we need to make a film consisting of several scenes, each of which needs to be shot by one or more actors, and only one scene can be shot per day. The actors' salaries are calculated by the day, and each actor can only be hired for one consecutive period: for example, we cannot hire an actor on the first and third days but not the second day. During the hiring period, the producers still need to pay the actors even on days when they are not involved in the filming. The purpose of talent scheduling is to minimize the actors' total salary by adjusting the sequence of scenes. Mathematical formulation. Consider a film shoot composed of formula_0 shooting days and involving a total of formula_1 actors. Then we use the day out of days matrix (DODM) formula_2 to represent the requirements for the various shooting days, with the formula_3 entry given by: formula_4 Then we define the pay vector in formula_5, with the formula_6th element given by formula_7, the rate of pay per day of the formula_6th actor. Let formula_12 denote any permutation of the n columns of formula_8: formula_9 where formula_10 is the set of permutations of the n shooting days. Then define formula_11 to be the matrix formula_8 with its columns permuted according to formula_12, so that: formula_13 for formula_14 Then we use formula_16 and formula_15 to denote respectively the earliest and latest days in the schedule formula_17 determined by formula_12 which require actor formula_6. So actor formula_6 will be hired for formula_18 days, but only formula_19 of these days are actually required, which means formula_20 days are unnecessary; we have: formula_21 The total cost of unnecessary days is: formula_22 formula_23 is the objective function to be minimized. Proof of strong NP-hardness. The talent scheduling problem can be proved NP-hard by a reduction from the optimal linear arrangement (OLA) problem. Even if we restrict each actor to being needed for just two days and all actors' salaries to 1, the problem is still polynomially reducible to the OLA problem. Thus, this problem is unlikely to have a pseudo-polynomial time algorithm. Integer programming. An integer programming model can be formulated for the problem. In this model, formula_24 denotes the earliest shooting day for talent formula_6, formula_25 is the latest shooting day for talent formula_6, and formula_26 is the scheduling variable for the project, i.e. formula_27 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "T^0 \\in \\{0,1\\}_{m \\times n}" }, { "math_id": 3, "text": "(i,j)" }, { "math_id": 4, "text": "t^0_{m \\times n} = \\begin{cases}\n 1, & \\mbox{if actor i is required in scene j,}\\\\\n 0, & \\mbox{otherwise.}\n\\end{cases}" }, { "math_id": 5, "text": "\\mathfrak{R}^m" }, { "math_id": 6, "text": "i" }, { "math_id": 7, "text": "c_i" }, { "math_id": 8, "text": "T^0" }, { "math_id": 9, "text": "\\sigma :\\{1,2,...,n\\} \\rightarrow \\{1,2,...,n\\}" }, { "math_id": 10, "text": "\\sigma_n" }, { "math_id": 11, "text": "T(\\sigma)" }, { "math_id": 12, "text": "\\sigma" }, { "math_id": 13, "text": "t_{i,j}(\\sigma)=t^0_{i,\\sigma(j)}" }, { "math_id": 14, "text": "i \\in \\{1,2,...,n\\},j \\in \\{1,2,...,n\\}" }, { "math_id": 15, "text": "l_i(\\sigma)" }, { "math_id": 16, "text": "e_i(\\sigma)" }, { "math_id": 17, "text": "S" }, { "math_id": 18, "text": "l_i(\\sigma)-e_i(\\sigma)+1" }, { "math_id": 19, "text": "r_i=\\sum_{j=1}^{n} t^0_{ij}" }, { "math_id": 20, "text": "h_i(S)" }, { "math_id": 21, "text": "h_i(S)=h_i(\\sigma)=l_i(\\sigma)-e_i(\\sigma)+1-r_i=l_i(\\sigma)-e_i(\\sigma)+1-\\sum_{j=1}^{n} t^0_{i,j}" }, { "math_id": 22, "text": "K(\\sigma)=\\sum_{i=1}^{m}c_ih_i(\\sigma)=\\sum_{i=1}^{m}c_i[l_i(\\sigma)-e_i(\\sigma)+1-\\sum_{j=1}^{n} t^0_{i,j}]" }, { "math_id": 23, "text": "K(\\sigma)" }, { "math_id": 24, "text": "e_i" }, { "math_id": 25, "text": "l_i" }, { "math_id": 26, "text": "x_{j,k}" }, { "math_id": 27, "text": " x_{j,k} = \\begin{cases} 1 & \\text{if scene } j \\text{ is scheduled in day } k \\text{ of shooting } \\\\ 0 & \\text{otherwise} \\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=71570180
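To illustrate the objective function formula_22 above, here is a small Python sketch (with made-up data, added here for illustration) that evaluates the total cost of unnecessary hired days for a given ordering of the scenes.

def talent_cost(T0, pay, order):
    # T0[i][j] = 1 if actor i is required in scene j (the day-out-of-days matrix),
    # pay[i] is the daily rate of actor i, and order is a permutation of the
    # scene indices giving the shooting schedule (one scene per day).
    total = 0
    for i in range(len(T0)):
        days = [day for day, scene in enumerate(order) if T0[i][scene]]
        if not days:
            continue
        held = days[-1] - days[0] + 1          # l_i - e_i + 1: whole hiring span
        total += pay[i] * (held - len(days))   # pay only for the unnecessary days
    return total

# Toy instance: 3 actors, 4 scenes.
T0 = [[1, 0, 0, 1],
      [0, 1, 1, 0],
      [1, 1, 0, 0]]
pay = [10, 1, 5]

print(talent_cost(T0, pay, [0, 1, 2, 3]))  # 20: actor 0 idles for two days
print(talent_cost(T0, pay, [0, 3, 1, 2]))  # 5: a better ordering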
715707
Gyroelongated square bipyramid
17th Johnson solid In geometry, the gyroelongated square bipyramid is a polyhedron with 16 triangular faces. It can be constructed from a square antiprism by attaching two equilateral square pyramids, one to each of its square faces. The same shape is also called hexakaidecadeltahedron, heccaidecadeltahedron, or tetrakis square antiprism; these last names mean a polyhedron with 16 triangular faces. It is an example of a deltahedron, and of a Johnson solid. The dual polyhedron of the gyroelongated square bipyramid is a square truncated trapezohedron with eight pentagons and two squares as its faces. The gyroelongated square bipyramid appears in chemistry as the basis for the bicapped square antiprismatic molecular geometry, and in mathematical optimization as a solution to the Thomson problem. Construction. Like other gyroelongated bipyramids, the gyroelongated square bipyramid can be constructed by attaching two equilateral square pyramids onto the square faces of a square antiprism; this process is known as gyroelongation. These pyramids cover each square, replacing it with four equilateral triangles, so that the resulting polyhedron has 16 equilateral triangles as its faces. A polyhedron with only equilateral triangles as faces is called a deltahedron. There are only eight different convex deltahedra, one of which is the gyroelongated square bipyramid. More generally, a convex polyhedron in which all faces are regular is a Johnson solid, and every convex deltahedron is a Johnson solid. The gyroelongated square bipyramid is numbered among the Johnson solids as formula_1. One possible system of Cartesian coordinates for the vertices of a gyroelongated square bipyramid, giving it edge length 2, is: formula_2 Properties. The surface area of a gyroelongated square bipyramid is 16 times the area of an equilateral triangle, that is: formula_3 and the volume of a gyroelongated square bipyramid is obtained by slicing it into two equilateral square pyramids and one square antiprism, and then adding their volumes: formula_4 It has the same three-dimensional symmetry group as the square antiprism, the antiprismatic group formula_0 of order 16. Its dihedral angles can be found, as for the gyroelongated square pyramid, from the angles of the equilateral square pyramid and the square antiprism: the dihedral angle between two adjacent triangles of an equilateral square pyramid is formula_5; the dihedral angle between two adjacent triangles of the square antiprism is formula_6; and the dihedral angle between triangles across the edge where a pyramid is attached to the antiprism, formula_7, is the sum of the pyramid's and the antiprism's triangle-to-square dihedral angles. The dual polyhedron of a gyroelongated square bipyramid is the square truncated trapezohedron. It has eight pentagons and two squares. Application. The gyroelongated square bipyramid can be visualized in the geometry of chemical compounds as the atom cluster surrounding a central atom, giving the bicapped square antiprismatic molecular geometry. It has 10 vertices and 24 edges, corresponding to the "closo" polyhedron with formula_8 skeletal electrons. An example is the nickel carbonyl carbide anion Ni10C(CO)182-, a 22 skeletal electron chemical compound with ten Ni(CO)2 vertices and a deficiency of two carbon monoxides. The Thomson problem concerns the minimum-energy configuration of formula_9 charged particles on a sphere. The minimum solution known for formula_10 places the points at the vertices of a gyroelongated square bipyramid, inscribed in a sphere. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " D_{4d} " }, { "math_id": 1, "text": " J_{17} " }, { "math_id": 2, "text": " \\begin{align}\n \\left(\\pm 1, \\pm 1, 2^{-1/4} \\right),\\qquad &\\left(\\pm \\sqrt{2}, 0, -2^{-1/4} \\right), \\\\\n \\left(0, \\pm \\sqrt{2}, -2^{-1/4} \\right),\\qquad &\\left(0, 0, \\pm \\left(2^{-1/4} + \\sqrt{2}\\right)\\right).\n\\end{align} " }, { "math_id": 3, "text": " 4\\sqrt{3}a^2 \\approx 6.928a^2, " }, { "math_id": 4, "text": " \\frac{\\sqrt{2} + \\sqrt{4 + 3\\sqrt{2}}}{3}a^3 \\approx 1.428a^3. " }, { "math_id": 5, "text": " 109.47^\\circ " }, { "math_id": 6, "text": " 127.55^\\circ " }, { "math_id": 7, "text": " 158.57^\\circ" }, { "math_id": 8, "text": " 2n + 2 " }, { "math_id": 9, "text": " n " }, { "math_id": 10, "text": " n = 10 " } ]
https://en.wikipedia.org/wiki?curid=715707
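The surface-area and volume formulas above (formula_3 and formula_4) and the listed vertex coordinates can be checked numerically; the short Python sketch below (added here for illustration) evaluates both formulas for edge length 2 and confirms that the 10 listed vertices determine exactly 24 edges of that length.

from itertools import combinations
from math import dist, isclose, sqrt

a = 2  # edge length matching the coordinates given in the article
area = 4 * sqrt(3) * a**2                               # 16 equilateral triangles
volume = (sqrt(2) + sqrt(4 + 3 * sqrt(2))) / 3 * a**3   # two pyramids + antiprism
print(round(area, 3), round(volume, 3))                 # 27.713 11.427

# Vertex coordinates from the article (edge length 2).
s, h = sqrt(2), 2 ** -0.25
verts = ([(x, y, h) for x in (1, -1) for y in (1, -1)] +
         [(x, 0, -h) for x in (s, -s)] + [(0, y, -h) for y in (s, -s)] +
         [(0, 0, z) for z in (h + s, -(h + s))])

# A deltahedron with 16 triangular faces has 10 vertices and 24 edges.
edges = [pq for pq in combinations(verts, 2) if isclose(dist(*pq), 2)]
print(len(verts), len(edges))                           # 10 24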
7157316
Attack rate
In epidemiology, the attack rate is the proportion of an at-risk population that contracts the disease during a specified time interval. It is used in hypothetical predictions and during actual outbreaks of disease. An at-risk population is defined as one that has no immunity to the attacking pathogen, which can be either a novel pathogen or an established pathogen. It is used to project the number of infections to expect during an epidemic. This aids in marshalling resources for delivery of medical care as well as production of vaccines and/or anti-viral and anti-bacterial medicines. The rate is arrived at by taking the number of new cases in the population at risk and dividing by the number of persons at risk in the population. formula_0 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mbox{attack rate} = \\frac{\\mbox{number of new cases in the population at risk}}{\\mbox{number of persons at risk in the population}}" } ]
https://en.wikipedia.org/wiki?curid=7157316
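A minimal Python sketch of the calculation described above (the numbers are invented for illustration):

def attack_rate(new_cases, persons_at_risk):
    # Proportion of the at-risk population that contracted the disease
    # during the specified time interval.
    return new_cases / persons_at_risk

# Example: 28 new cases among 400 people at risk during an outbreak.
print(f"attack rate = {attack_rate(28, 400):.1%}")  # 7.0%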
7158665
Unit distance graph
Geometric graph with unit edge lengths In mathematics, particularly geometric graph theory, a unit distance graph is a graph formed from a collection of points in the Euclidean plane by connecting two points whenever the distance between them is exactly one. To distinguish these graphs from a broader definition that allows some non-adjacent pairs of vertices to be at distance one, they may also be called strict unit distance graphs or faithful unit distance graphs. As a hereditary family of graphs, they can be characterized by forbidden induced subgraphs. The unit distance graphs include the cactus graphs, the matchstick graphs and penny graphs, and the hypercube graphs. The generalized Petersen graphs are non-strict unit distance graphs. An unsolved problem of Paul Erdős asks how many edges a unit distance graph on formula_0 vertices can have. The best known lower bound is slightly above linear in formula_0—far from the upper bound, proportional to formula_1. The number of colors required to color unit distance graphs is also unknown (the Hadwiger–Nelson problem): some unit distance graphs require five colors, and every unit distance graph can be colored with seven colors. For every algebraic number there is a unit distance graph with two vertices that must be that distance apart. According to the Beckman–Quarles theorem, the only plane transformations that preserve all unit distance graphs are the isometries. It is possible to construct a unit distance graph efficiently, given its points. Finding all unit distances has applications in pattern matching, where it can be a first step in finding congruent copies of larger patterns. However, determining whether a given graph can be represented as a unit distance graph is NP-hard, and more specifically complete for the existential theory of the reals. Definition. The unit distance graph for a set of points in the plane is the undirected graph having those points as its vertices, with an edge between two vertices whenever their Euclidean distance is exactly one. An abstract graph is said to be a unit distance graph if it is possible to find distinct locations in the plane for its vertices, so that its edges have unit length and so that all non-adjacent pairs of vertices have non-unit distances. When this is possible, the abstract graph is isomorphic to the unit distance graph of the chosen locations. Alternatively, some sources use a broader definition, allowing non-adjacent pairs of vertices to be at unit distance. The resulting graphs are the subgraphs of the unit distance graphs (as defined here). Where the terminology may be ambiguous, the graphs in which non-edges must be a non-unit distance apart may be called strict unit distance graphs or faithful unit distance graphs. The subgraphs of unit distance graphs are equivalently the graphs that can be drawn in the plane using only one edge length. For brevity, this article refers to these as "non-strict unit distance graphs". Unit distance graphs should not be confused with unit disk graphs, which connect pairs of points when their distance is less than or equal to one, and are frequently used to model wireless communication networks. Examples. The complete graph on two vertices is a unit distance graph, as is the complete graph on three vertices (the triangle graph), but not the complete graph on four vertices. Generalizing the triangle graph, every cycle graph is a unit distance graph, realized by a regular polygon. 
Two finite unit distance graphs, connected at a single shared vertex, yield another unit distance graph, as one can be rotated with respect to the other to avoid undesired additional unit distances. By thus connecting graphs, every finite tree or cactus graph may be realized as a unit distance graph. Any Cartesian product of unit distance graphs produces another unit distance graph; however, the same is not true for some other common graph products. For instance, the strong product of graphs, applied to any two non-empty graphs, produces complete subgraphs with four vertices, which are not unit distance graphs. The Cartesian products of path graphs form grid graphs of any dimension, the Cartesian products of the complete graph on two vertices are the hypercube graphs, and the Cartesian products of triangle graphs are the Hamming graphs formula_2. Other specific graphs that are unit distance graphs include the Petersen graph, the Heawood graph, the wheel graph formula_3 (the only wheel graph that is a unit distance graph), and the Moser spindle and Golomb graph (small 4-chromatic unit distance graphs). All generalized Petersen graphs, such as the Möbius–Kantor graph depicted, are non-strict unit distance graphs. Matchstick graphs are a special case of unit distance graphs, in which no edges cross. Every matchstick graph is a planar graph, but some otherwise-planar unit distance graphs (such as the Moser spindle) have a crossing in every representation as a unit distance graph. Additionally, in the context of unit distance graphs, the term 'planar' should be used with care, as some authors use it to refer to the plane in which the unit distances are defined, rather than to a prohibition on crossings. The penny graphs are an even more special case of unit distance and matchstick graphs, in which every non-adjacent pair of vertices are more than one unit apart. Properties. Number of edges. <templatestyles src="Unsolved/styles.css" /> Unsolved problem in mathematics: How many unit distances can be determined by a set of formula_0 points? Paul Erdős (1946) posed the problem of estimating how many pairs of points in a set of formula_0 points could be at unit distance from each other. In graph-theoretic terms, the question asks how dense a unit distance graph can be, and Erdős's publication on this question was one of the first works in extremal graph theory. The hypercube graphs and Hamming graphs provide a lower bound on the number of unit distances, proportional to formula_4 By considering points in a square grid with carefully chosen spacing, Erdős found an improved lower bound of the form formula_5 for a constant formula_6, and offered $500 for a proof of whether the number of unit distances can also be bounded above by a function of this form. The best known upper bound for this problem is formula_7 This bound can be viewed as counting incidences between points and unit circles, and is closely related to the crossing number inequality and to the Szemerédi–Trotter theorem on incidences between points and lines. For small values of formula_0 the exact maximum number of possible edges is known. For formula_8 these numbers of edges are: <templatestyles src="Block indent/styles.css"/> Forbidden subgraphs. If a given graph formula_9 is not a non-strict unit distance graph, neither is any supergraph formula_10 of formula_9. 
A similar idea works for strict unit distance graphs, but using the concept of an induced subgraph, a subgraph formed from all edges between the pairs of vertices in a given subset of vertices. If formula_9 is not a strict unit distance graph, then neither is any other formula_10 that has formula_9 as an induced subgraph. Because of these relations between whether a subgraph or its supergraph is a unit distance graph, it is possible to describe unit distance graphs by their forbidden subgraphs. These are the minimal graphs that are not unit distance graphs of the given type. They can be used to determine whether a given graph formula_9 is a unit distance graph, of either type. formula_9 is a non-strict unit distance graph if and only if formula_9 is not a supergraph of a forbidden graph for the non-strict unit distance graphs. formula_9 is a strict unit distance graph if and only if formula_9 is not an induced supergraph of a forbidden graph for the strict unit distance graphs. For both the non-strict and strict unit distance graphs, the forbidden graphs include both the complete graph formula_11 and the complete bipartite graph formula_12. For formula_12, wherever the vertices on the two-vertex side of this graph are placed, there are at most two positions at unit distance from them to place the other three vertices, so it is impossible to place all three vertices at distinct points. These are the only two forbidden graphs for the non-strict unit distance graphs on up to five vertices; there are six forbidden graphs on up to seven vertices and 74 on up to nine vertices. Because gluing two unit distance graphs (or subgraphs thereof) at a vertex produces strict (respectively non-strict) unit distance graphs, every forbidden graph is a biconnected graph, one that cannot be formed by this gluing process. The wheel graph formula_3 can be realized as a strict unit distance graph with six of its vertices forming a unit regular hexagon and the seventh at the center of the hexagon. Removing one of the edges from the center vertex produces a subgraph that still has unit-length edges, but which is not a strict unit distance graph. The regular-hexagon placement of its vertices is the only way (up to congruence) to place the vertices at distinct locations such that adjacent vertices are a unit distance apart, and this placement also puts the two endpoints of the missing edge at unit distance. Thus, it is a forbidden graph for the strict unit distance graphs, but not one of the six forbidden graphs for the non-strict unit distance graphs.
The full Beckman–Quarles theorem states that the only transformations of the Euclidean plane (or a higher-dimensional Euclidean space) that preserve unit distances are the isometries. Equivalently, for the infinite unit distance graph generated by all the points in the plane, all graph automorphisms preserve all of the distances in the plane, not just the unit distances. If formula_13 is an algebraic number of modulus 1 that is not a root of unity, then the integer combinations of powers of formula_13 form a finitely generated subgroup of the additive group of complex numbers whose unit distance graph has infinite degree. For instance, formula_13 can be chosen as one of the two complex roots of the polynomial formula_16, producing an infinite-degree unit distance graph with four generators. Coloring. <templatestyles src="Unsolved/styles.css" /> Unsolved problem in mathematics: What is the largest possible chromatic number of a unit distance graph? The Hadwiger–Nelson problem concerns the chromatic number of unit distance graphs, and more specifically of the infinite unit distance graph formed from all points of the Euclidean plane. By the de Bruijn–Erdős theorem, which assumes the axiom of choice, this is equivalent to asking for the largest chromatic number of a finite unit distance graph. There exist unit distance graphs requiring five colors in any proper coloring, and all unit distance graphs can be colored with at most seven colors. Answering another question of Paul Erdős, it is possible for triangle-free unit distance graphs to require four colors. Enumeration. The number of strict unit distance graphs on formula_17 labeled vertices is at most formula_18, as expressed using big O notation and little o notation. Generalization to higher dimensions. The definition of a unit distance graph may naturally be generalized to any higher-dimensional Euclidean space. In three dimensions, unit distance graphs of formula_0 points have at most formula_19 edges, where formula_20 is a very slowly growing function related to the inverse Ackermann function. This result leads to a similar bound on the number of edges of three-dimensional relative neighborhood graphs. In four or more dimensions, any complete bipartite graph is a unit distance graph, realized by placing the points on two perpendicular circles with a common center, so unit distance graphs can be dense graphs. The enumeration formulas for unit distance graphs generalize to higher dimensions, and show that in dimensions four or more the number of strict unit distance graphs is much larger than the number of subgraphs of unit distance graphs. Any finite graph may be embedded as a unit distance graph in a sufficiently high dimension. Some graphs may need very different dimensions for embeddings as non-strict unit distance graphs and as strict unit distance graphs. For instance the formula_21-vertex crown graph may be embedded in four dimensions as a non-strict unit distance graph (that is, so that all its edges have unit length). However, it requires at least formula_22 dimensions to be embedded as a strict unit distance graph, so that its edges are the only unit-distance pairs. The dimension needed to realize any given graph as a strict unit distance graph is at most twice its maximum degree. Computational complexity. Constructing a unit distance graph from its points is an important step for other algorithms for finding congruent copies of some pattern in a larger point set. 
These algorithms use this construction to search for candidate positions where one of the distances in the pattern is present, and then use other methods to test the rest of the pattern for each candidate. A known method can be applied to this problem, yielding an algorithm for finding a planar point set's unit distance graph in time formula_23 where formula_24 is the slowly growing iterated logarithm function. It is NP-hard—and more specifically, complete for the existential theory of the reals—to test whether a given graph is a (strict or non-strict) unit distance graph in the plane. It is also NP-complete to determine whether a planar unit distance graph has a Hamiltonian cycle, even when the graph's vertices all have known integer coordinates. References. Notes. <templatestyles src="Reflist/styles.css" /> Sources. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n^{4/3}" }, { "math_id": 2, "text": "H(d,3)" }, { "math_id": 3, "text": "W_7" }, { "math_id": 4, "text": "n\\log n." }, { "math_id": 5, "text": "n^{1+c/\\log\\log n}" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "\\sqrt[3]{\\frac{29n^4}{4}}\\approx 1.936n^{4/3}." }, { "math_id": 8, "text": "n=2,3,4,\\dots" }, { "math_id": 9, "text": "G" }, { "math_id": 10, "text": "H" }, { "math_id": 11, "text": "K_4" }, { "math_id": 12, "text": "K_{2,3}" }, { "math_id": 13, "text": "\\alpha" }, { "math_id": 14, "text": "p" }, { "math_id": 15, "text": "q" }, { "math_id": 16, "text": "z^4-z^3-z^2-z+1" }, { "math_id": 17, "text": "n\\ge 4" }, { "math_id": 18, "text": "\\binom{n(n-1)}{2n}=O\\left(2^{\\bigl(4+o(1)\\bigr)n\\log_2 n}\\right)," }, { "math_id": 19, "text": "n^{3/2}\\beta(n)" }, { "math_id": 20, "text": "\\beta" }, { "math_id": 21, "text": "2n" }, { "math_id": 22, "text": "n-2" }, { "math_id": 23, "text": "n^{4/3}2^{O(\\log^* n)}" }, { "math_id": 24, "text": "\\log^*" } ]
https://en.wikipedia.org/wiki?curid=7158665
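The definition above is easy to experiment with for finite point sets. The Python sketch below (added here for illustration; it uses the naive quadratic construction rather than the faster algorithms discussed in the article) builds the unit distance graph of a coordinate realization of the Moser spindle and confirms that it has 7 vertices, 11 edges, and chromatic number 4.

from itertools import combinations, product
from math import acos, cos, dist, isclose, sin, sqrt

def unit_distance_graph(points, tol=1e-9):
    # Naive O(n^2) construction: join every pair of points exactly one unit apart.
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if isclose(dist(points[i], points[j]), 1.0, abs_tol=tol)]

def rhombus(angle):
    # Two unit equilateral triangles glued along an edge, hinged at the origin
    # and pointing in the given direction; returns the three non-origin vertices.
    c, s = cos(angle), sin(angle)
    mid = (sqrt(3) / 2 * c, sqrt(3) / 2 * s)
    return [(sqrt(3) * c, sqrt(3) * s),
            (mid[0] - s / 2, mid[1] + c / 2),
            (mid[0] + s / 2, mid[1] - c / 2)]

def chromatic_number(n, edges):
    # Brute force over all colorings; fine for very small graphs only.
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[i] != coloring[j] for i, j in edges):
                return k

# Moser spindle: two rhombi sharing the origin, rotated so their far tips
# are exactly one unit apart.
half = acos(5 / 6) / 2
points = [(0.0, 0.0)] + rhombus(half) + rhombus(-half)
edges = unit_distance_graph(points)
print(len(points), len(edges), chromatic_number(len(points), edges))  # 7 11 4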
71587738
Barocaloric material
Barocaloric materials are characterized by strong, reversible thermal responses to changes in pressure. Most are barocaloric solids that undergo a solid-to-solid phase change, from disordered to ordered and rigid under increased pressure, releasing heat. One barocaloric material exhibits the effect without a phase change: natural rubber. Input energy. Barocaloric effects can be achieved at pressures above 200 MPa for intermetallics or about 100 MPa in plastic crystals, but some materials change phase at pressures as low as 80 MPa. The hybrid organic–inorganic layered perovskite (CH3–(CH2)n−1–NH3)2MnCl4 (n = 9, 10) shows a reversible barocaloric entropy change of ΔSr ~218 and ~230 J kg−1 K−1 at 0.08 GPa, at its transition temperature of 294-311.5 K. Barocaloric materials are one of several classes of materials that undergo caloric phase transitions. The others are magnetocaloric, electrocaloric, and elastocaloric. Magnetocaloric effects typically require field strengths larger than 2 T, while electrocaloric materials require field strengths in the kV to MV/m range; elastocaloric materials may require stresses as large as 700 MPa. Potential applications. Barocaloric materials have potential use as refrigerants in cooling systems instead of gases such as hydrofluorocarbons; in such cooling cycles, applied pressure drives the solid-to-solid phase change. A prototype air conditioner was made from a metal tube filled with a metal-halide perovskite (the refrigerant) and water or oil (the heat/pressure transport material). A piston pressurizes the liquid. Another project used a different solid as the refrigerant, achieving reversible entropy changes of formula_0 ~71 J K−1 kg−1 at ambient temperature. Its phase transition temperature is a function of pressure, varying at a rate of ~0.79 K MPa−1. The accompanying saturation driving pressure is ~40 MPa, giving a barocaloric strength of formula_1 ~1.78 J K−1 kg−1 MPa−1 and a temperature span of ~41 K under 80 MPa. Neutron scattering characterizations of its crystal structure and atomic dynamics show that reorientation-vibration coupling is responsible for the pressure sensitivity. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\Delta {S}_{{P}_{0}\\to P}^{{{\\max }}}" }, { "math_id": 1, "text": "\\left|\\Delta {S}_{{P}_{0}\\to P}^{{{\\max }}}/\\Delta P\\right|" } ]
https://en.wikipedia.org/wiki?curid=71587738
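The barocaloric strength formula_1 quoted above is simply the saturation entropy change divided by the saturation driving pressure, as this tiny Python check (added here for illustration) shows.

delta_S = 71   # reversible entropy change, J K^-1 kg^-1
delta_P = 40   # saturation driving pressure, MPa
print(f"barocaloric strength ~ {delta_S / delta_P:.2f} J K^-1 kg^-1 MPa^-1")  # ~1.78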
715886
Time-to-digital converter
In electronic instrumentation and signal processing, a time-to-digital converter (TDC) is a device for recognizing events and providing a digital representation of the time they occurred. For example, a TDC might output the time of arrival for each incoming pulse. Some applications wish to measure the time interval between two events rather than some notion of an absolute time. In electronics, time-to-digital converters (TDCs) or time digitizers are devices commonly used to measure a time interval and convert it into digital (binary) output. In some cases interpolating TDCs are also called time counters (TCs). TDCs are used to determine the time interval between two signal pulses (known as start and stop pulse). Measurement is started and stopped when the rising or falling edge of a signal pulse crosses a set threshold. This pattern is seen in many physical experiments, such as time-of-flight and lifetime measurements in atomic and high-energy physics, experiments that involve laser ranging, and electronic research involving the testing of integrated circuits and high-speed data transfer. Application. TDCs are used to timestamp events and measure time differences between events, especially where picosecond precision and high accuracy is required, such as the measurement of events in high energy physics experiments, where particles (e.g. electrons, photons, and ions) are detected. Another application is cost-effective and non-mechanical water flow metering by measuring the time difference between ultrasound pulses that travel through the flow and arrive at different times depending on the flow speed and direction. In an all-digital phase-locked loop (ADPLL), a TDC measures the phase shift and its result is used to adjust the digital controlled oscillator (DCO). Coarse measurement. If the required time resolution is not high, then counters can be used to make the conversion. Basic counter. In its simplest implementation, a TDC is simply a high-frequency counter that increments every clock cycle. The current contents of the counter represent the current time. When an event occurs, the counter's value is captured in an output register. In that approach, the measurement is an integer number of clock cycles, so the measurement is quantized to a clock period. To get finer resolution, a faster clock is needed. The accuracy of the measurement depends upon the stability of the clock frequency. Typically a TDC uses a crystal oscillator reference frequency for good long term stability. High stability crystal oscillators are usually relatively low frequency, such as 10 MHz (or 100 ns resolution). To get better resolution, a phase-locked loop frequency multiplier can be used to generate a faster clock. One might, for example, multiply the crystal reference oscillator by 100 to get a clock rate of 1 GHz (1 ns resolution). Counter technology. High clock rates impose additional design constraints on the counter: if the clock period is short, it is difficult to update the count. Binary counters, for example, need a fast carry architecture because they essentially add one to the previous counter value. A solution is using a hybrid counter architecture. A Johnson counter, for example, is a fast non-binary counter. It can be used to count very quickly the low order count; a more conventional binary counter can be used to accumulate the high order count. The fast counter is sometimes called a prescaler. 
The speed of counters fabricated in CMOS technology is limited by the capacitance between the gate and the channel and by the resistance of the channel and the signal traces. The product of the two determines the cut-off frequency. Modern chip technology allows multiple metal layers and therefore coils with a large number of windings to be inserted into the chip. This allows designers to peak the device for a specific frequency, which may lie above the cut-off frequency of the original transistor. A peaked variant of the Johnson counter is the traveling-wave counter which also achieves sub-cycle resolution. Other methods to achieve sub-cycle resolution include analog-to-digital converters and vernier Johnson counters. Measuring a time interval. In most situations, the user does not want to just capture an arbitrary time that an event occurs, but wants to measure a time interval, the time between a start event and a stop event. That can be done by measuring an arbitrary time of both the start and stop events and subtracting. The measurement can be off by two counts. The subtraction can be avoided if the counter is held at zero until the start event, counts during the interval, and then stops counting after the stop event. Coarse counters are based on a reference clock with signals generated at a stable frequency formula_0. When the start signal is detected the counter starts counting clock signals and terminates counting after the stop signal is detected. The time interval formula_1 between start and stop is then formula_2 with formula_3 the number of counts and formula_4 the period of the reference clock. Statistical counter. Since start, stop and clock signal are asynchronous, there is a uniform probability distribution of the start and stop signal times between two subsequent clock pulses. This detuning of the start and stop signal from the clock pulses is called quantization error. For a series of measurements on the same constant and asynchronous time interval one measures two different numbers of counted clock pulses, formula_5 and formula_6. These occur with probabilities formula_7 and formula_8, with formula_9 the fractional part of formula_10. The value for the time interval is then obtained by formula_11 Measuring a time interval using a coarse counter with the averaging method described above is relatively time consuming because of the many repetitions that are needed to determine the probabilities formula_12 and formula_13. In comparison to the other methods described later on, a coarse counter has a very limited resolution (1 ns in case of a 1 GHz reference clock), but offers a theoretically unlimited measuring range. Fine measurement. In contrast to the coarse counter in the previous section, fine measurement methods with much better accuracy but far smaller measuring range are presented here. Analogue methods like time interval stretching or double conversion as well as digital methods like tapped delay lines and the Vernier method are under examination. Though the analogue methods still obtain better accuracies, digital time interval measurement is often preferred due to its flexibility in integrated circuit technology and its robustness against external perturbations like temperature changes. The counter implementation's accuracy is limited by the clock frequency. If time is measured by whole counts, then the resolution is limited to the clock period. For example, a 10 MHz clock has a resolution of 100 ns. 
To get resolution finer than a clock period, there are time interpolation circuits. These circuits measure the fraction of a clock period: that is, the time between a clock event and the event being measured. The interpolation circuits often require a significant amount of time to perform their function; consequently, the TDC needs a quiet interval before the next measurement. Ramp interpolator. When counting is not feasible because the clock rate would be too high, analog methods can be used. Analog methods are often used to measure intervals that are between 10 and 200 ns. These methods often use a capacitor that is charged during the interval being measured. Initially, the capacitor is discharged to zero volts. When the start event occurs, the capacitor is charged with a constant current I1; the constant current causes the voltage v on the capacitor to increase linearly with time. The rising voltage is called the fast ramp. When the stop event occurs, the charging current is stopped. The voltage on the capacitor v is directly proportional to the time interval T and can be measured with an analog-to-digital converter (ADC). The resolution of such a system is in the range of 1 to 10 ps. Although a separate ADC can be used, the ADC step is often integrated into the interpolator. A second constant current I2 is used to discharge the capacitor at a constant but much slower rate (the slow ramp). The slow ramp might be 1/1000 of the fast ramp. This discharge effectively "stretches" the time interval; it will take 1000 times as long for the capacitor to discharge to zero volts. The stretched interval can be measured with a counter. The measurement is similar to a dual-slope analog converter. The dual-slope conversion can take a long time: a thousand or so clock ticks in the scheme described above. That limits how often a measurement can be made (dead time). Resolution of 1 ps with a 100 MHz (10 ns) clock requires a stretch ratio of 10,000 and implies a conversion time of 150 μs. To decrease the conversion time, the interpolator circuit can be used twice in a residual interpolator technique. The fast ramp is used initially as above to determine the time. The slow ramp is only 1/100 of the fast ramp. The slow ramp will cross zero at some time during the clock period. When the ramp crosses zero, the fast ramp is turned on again to measure the crossing time (tresidual). Consequently, the time can be determined to 1 part in 10,000. Interpolators are often used with a stable system clock. The start event is asynchronous, but the stop event is a following clock. For convenience, imagine that the fast ramp rises exactly 1 volt during a 100 ns clock period. Assume the start event occurs 67.3 ns after a clock pulse; the fast ramp integrator is triggered and starts rising. The asynchronous start event is also routed through a synchronizer that takes at least two clock pulses. By the next clock pulse, the ramp has risen to 0.327 V. By the second clock pulse, the ramp has risen to 1.327 V and the synchronizer reports the start event has been seen. The fast ramp is stopped and the slow ramp starts. The synchronizer output can be used to capture system time from a counter. After 1327 clocks, the slow ramp returns to its starting point, and the interpolator knows that the event occurred 132.7 ns before the synchronizer reported. The interpolator is actually more involved because there are synchronizer issues and current switching is not instantaneous. 
Also, the interpolator must calibrate the height of the ramp to a clock period. Vernier. Vernier interpolator. The vernier method is more involved. The method involves a triggerable oscillator and a coincidence circuit. At the event, the integer clock count is stored and the oscillator is started. The triggered oscillator has a slightly different frequency than the clock oscillator. For the sake of argument, say the triggered oscillator has a period that is 1 ns faster than the clock. If the event happened 67 ns after the last clock, then the triggered oscillator transition will slide by −1 ns after each subsequent clock pulse. The triggered oscillator will be at 66 ns after the next clock, at 65 ns after the second clock, and so forth. A coincidence detector looks for when the triggered oscillator and the clock transition at the same time, and that indicates the fractional time that needs to be added. The interpolator design is more involved. The triggerable oscillator must be calibrated to the clock. It must also start quickly and cleanly. Vernier method. The Vernier method is a digital version of the time stretching method. Two only slightly detuned oscillators (with frequencies formula_14 and formula_15) start their signals with the arrival of the start and the stop signal. As soon as the leading edges of the oscillator signals coincide, the measurement ends and the numbers of periods of the oscillators (formula_5 and formula_6 respectively) lead to the original time interval formula_1: formula_16 Since highly reliable oscillators with stable and accurate frequency are still quite a challenge, one also realizes the vernier method via two tapped delay lines using two slightly different cell delay times formula_17. This setting is called a differential delay line or vernier delay line. In the example presented here, the first delay line, affiliated with the start signal, contains cells of D flip-flops with delay formula_18 which are initially set to transparent. During the transition of the start signal through one of those cells, the signal is delayed by formula_18 and the state of the flip-flop is sampled as transparent. The second delay line, belonging to the stop signal, is composed of a series of non-inverting buffers with delay formula_19. Propagating through its channel, the stop signal latches the flip-flops of the start signal's delay line. As soon as the stop signal passes the start signal, the latter is stopped and all leftover flip-flops are sampled opaque. Analogously to the above case of the oscillators, the wanted time interval formula_1 is then formula_20 with n the number of cells marked as transparent. Digital delay-line based TDC. In general a digital delay-line based TDC, also known as a tapped delay line, contains a chain of cells (e.g. D-latches) with well defined delay times formula_17. The start signal propagates through this chain and is successively delayed by each cell. The number of cells that the start signal has propagated through when the stop signal arrives gives the (rounded) time interval between the start and stop signal divided by formula_17. Hybrid measurement. Counters can measure long intervals but have limited resolution. Interpolators have high resolution but they cannot measure long intervals. A hybrid approach can achieve both long intervals and high resolution. The long interval can be measured with a counter. 
The counter information is supplemented with two time interpolators: one interpolator measures the (short) interval between the start event and a following clock event, and the second interpolator measures the interval between the stop event and a following clock event. The basic idea has some complications: the start and stop events are asynchronous, and one or both might happen close to a clock pulse. The counter and interpolators must agree on matching the start and end clock events. To accomplish that goal, synchronizers are used. The common hybrid approach is the Nutt method. In this example the fine measurement circuit measures the time between the start and stop pulse and the respective second nearest clock pulse of the coarse counter (Tstart, Tstop), detected by the synchronizer. Thus the wanted time interval is formula_21 with n the number of counter clock pulses and T0 the period of the coarse counter. History. Time measurement has played a crucial role in the understanding of nature from the earliest times. Starting with sun, sand or water driven clocks, we have arrived at the clocks of today, based on the most precise caesium resonators. The first direct predecessor of a TDC was invented in the year 1942 by Bruno Rossi for the measurement of muon lifetimes. It was designed as a time-to-amplitude converter, constantly charging a capacitor during the measured time interval. The corresponding voltage is directly proportional to the time interval under examination. While the basic concepts (like Vernier methods (Pierre Vernier 1584-1638) and time stretching) of dividing time into measurable intervals are still up-to-date, the implementation has changed considerably during the past 50 years. First implemented with vacuum tubes and ferrite pot-core transformers, those ideas are realized in complementary metal–oxide–semiconductor (CMOS) designs today. Errors. Even with the fine measuring methods presented, there are still errors one may wish to remove or at least to consider. Non-linearities of the time-to-digital conversion, for example, can be identified by taking a large number of measurements of a poissonian distributed source (statistical code density test). Small deviations from the uniform distribution reveal the non-linearities. Inconveniently, the statistical code density method is quite sensitive to external temperature changes. Thus stabilizing delay-locked loop or phase-locked loop (DLL or PLL) circuits are recommended. In a similar way, offset errors (non-zero readouts at T = 0) can be removed. For long time intervals, the error due to instabilities in the reference clock (jitter) plays a major role. Thus clocks of superior quality are needed for such TDCs. Furthermore, external noise sources can be eliminated in postprocessing by robust estimation methods. Configurations. TDCs are currently built as stand-alone measuring devices in physical experiments or as system components like PCI cards. They can be made up of either discrete or integrated circuits. Circuit design changes with the purpose of the TDC, which can either be a very good solution for single-shot TDCs with long dead times or some trade-off between dead-time and resolution for multi-shot TDCs. Delay generator. The time-to-digital converter measures the time between a start event and a stop event. There is also a digital-to-time converter or delay generator. The delay generator converts a number to a time delay. When the delay generator gets a start pulse at its input, then it outputs a stop pulse after the specified delay. 
The architectures for TDCs and delay generators are similar. Both use counters for long, stable delays. Both must consider the problem of clock quantization errors. For example, the Tektronix 7D11 Digital Delay uses a counter architecture. A digital delay may be set from 100 ns to 1 s in 100 ns increments. An analog circuit provides an additional fine delay of 0 to 100 ns. A 5 MHz reference clock drives a phase-locked loop to produce a stable 500 MHz clock. It is this fast clock that is gated by the (fine-delayed) start event and determines the main quantization error. The fast clock is divided down to 10 MHz and fed to the main counter. The instrument quantization error depends primarily on the 500 MHz clock (2 ns steps), but other errors also enter; the instrument is specified to have 2.2 ns of jitter. The recycle time is 575 ns. Just as a TDC may use interpolation to get finer than one clock period resolution, a delay generator may use similar techniques. The Hewlett-Packard 5359A High Resolution Time Synthesizer provides delays of 0 to 160 ms, has an accuracy of 1 ns, and achieves a typical jitter of 100 ps. The design uses a triggered phase-locked oscillator that runs at 200 MHz. Interpolation is done with a ramp, an 8-bit digital-to-analog converter, and a comparator. The resolution is about 45 ps. When the start pulse is received, the counter counts down and outputs a stop pulse. For low jitter, the synchronous counter has to feed a zero flag from the most significant bit down to the least significant bit and then combine it with the output from the Johnson counter. A digital-to-analog converter (DAC) could be used to achieve sub-cycle resolution, but it is easier to either use vernier Johnson counters or traveling-wave Johnson counters. The delay generator can be used for pulse-width modulation, e.g. to drive a MOSFET to load a Pockels cell within 8 ns with a specific charge. The output of a delay generator can gate a digital-to-analog converter and so pulses of a variable height can be generated. This allows matching to low levels needed by analog electronics, higher levels for ECL and even higher levels for TTL. If a series of DACs is gated in sequence, variable pulse shapes can be generated to account for any transfer function. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "f_0" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": " T = n\\cdot T_0 " }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "T_0 = 1/f_0" }, { "math_id": 5, "text": "n_1" }, { "math_id": 6, "text": "n_2" }, { "math_id": 7, "text": " p(n_1) = 1 - c" }, { "math_id": 8, "text": "q(n_2) = c" }, { "math_id": 9, "text": "c = Frc(T/T_0)" }, { "math_id": 10, "text": "T/T_0" }, { "math_id": 11, "text": "T = (p\\cdot n_1 + q\\cdot n_2)\\cdot T_0" }, { "math_id": 12, "text": "p" }, { "math_id": 13, "text": "q" }, { "math_id": 14, "text": "f_1" }, { "math_id": 15, "text": "f_2" }, { "math_id": 16, "text": "T = \\frac{n_1-1}{f_1} - \\frac{n_2-1}{f_2}" }, { "math_id": 17, "text": "\\tau" }, { "math_id": 18, "text": "\\tau_L" }, { "math_id": 19, "text": "\\tau_B < \\tau_L" }, { "math_id": 20, "text": "T = n\\cdot (\\tau_1 - \\tau_2)" }, { "math_id": 21, "text": "T = n T_0 + T_{\\mathrm{start}} - T_{\\mathrm{stop}}" } ]
https://en.wikipedia.org/wiki?curid=715886
71590696
Subgroup distortion
Concept in geometric group theory In geometric group theory, a discipline of mathematics, subgroup distortion measures the extent to which an overgroup can reduce the complexity of a group's word problem. Like much of geometric group theory, the concept is due to Misha Gromov, who introduced it in 1993. Formally, let S generate group H, and let G be an overgroup for H generated by "S" ∪ "T". Then each generating set defines a word metric on the corresponding group; the distortion of H in G is the asymptotic equivalence class of the function formula_0 where "B_X"("x", "r") is the ball of radius "r" about center "x" in "X" and diam("S") is the diameter of S. A subgroup with bounded distortion is called undistorted, and is the same thing as a quasi-isometrically embedded subgroup. Examples. For example, consider the infinite cyclic group ℤ = ⟨"b"⟩, embedded as a normal subgroup of the Baumslag–Solitar group BS(1, 2) = ⟨"a", "b"⟩. With respect to the chosen generating sets, the element formula_1 is distance 2^"n" from the origin in ℤ, but distance 2"n" + 1 from the origin in BS(1, 2). In particular, ℤ is at least exponentially distorted with base 2. On the other hand, any embedded copy of ℤ in the free abelian group on two generators ℤ^2 is undistorted, as is any embedding of ℤ into itself. Elementary properties. In a tower of groups "K" ≤ "H" ≤ "G", the distortion of K in G is at least the distortion of K in H. A normal abelian subgroup has distortion determined by the eigenvalues of the conjugation overgroup representation; formally, if "g" ∈ "G" acts on "V" ≤ "G" with eigenvalue λ, then V is at least exponentially distorted with base λ. For many non-normal but still abelian subgroups, the distortion of the normal core gives a strong lower bound. Known values. Every computable function with at most exponential growth can be a subgroup distortion, but Lie subgroups of a nilpotent Lie group always have distortion "n" ↦ "n"^"r" for some rational r. The denominator in the definition is always 2"R"; for this reason, it is often omitted. In that case, a subgroup that is not locally finite has superadditive distortion; conversely every superadditive function (up to asymptotic equivalence) can be found this way. In cryptography. The simplification in a word problem induced by subgroup distortion suffices to construct a cryptosystem, algorithms for encoding and decoding secret messages. Formally, the plaintext message is any object (such as text, images, or numbers) that can be encoded as a number n. The transmitter then encodes n as an element "g" ∈ "H" with word length n. In a public overgroup G that distorts H, the element g has a word of much smaller length, which is then transmitted to the receiver along with a number of "decoys" from "G" \ "H", to obscure the secret subgroup H. The receiver then picks out the element of H, re-expresses the word in terms of generators of H, and recovers n. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R\\mapsto\\frac{\\operatorname{diam}_H(B_G(0,R)\\cap H)}{\\operatorname{diam}_H(B_H(0,R))}\\text{,}" }, { "math_id": 1, "text": "b^{2^n}=a^nba^{-n}" } ]
https://en.wikipedia.org/wiki?curid=71590696
715909
Greek alphabet
Script used to write the Greek language The Greek alphabet has been used to write the Greek language since the late 9th or early 8th century BC. It is derived from the earlier Phoenician alphabet, and was the earliest known alphabetic script to have distinct letters for vowels as well as consonants. In Archaic and early Classical times, the Greek alphabet existed in many local variants, but, by the end of the 4th century BC, the Euclidean alphabet, with 24 letters, ordered from alpha to omega, had become standard and it is this version that is still used for Greek writing today. The uppercase and lowercase forms of the 24 letters are: Α α, Β β, Γ γ, Δ δ, Ε ε, Ζ ζ, Η η, Θ θ, Ι ι, Κ κ, Λ λ, Μ μ, Ν ν, Ξ ξ, Ο ο, Π π, Ρ ρ, Σ σ/ς, Τ τ, Υ υ, Φ φ, Χ χ, Ψ ψ, Ω ω. The Greek alphabet is the ancestor of the Latin and Cyrillic scripts. Like Latin and Cyrillic, Greek originally had only a single form of each letter; it developed the letter case distinction between uppercase and lowercase in parallel with Latin during the modern era. Sound values and conventional transcriptions for some of the letters differ between Ancient and Modern Greek usage because the pronunciation of Greek has changed significantly between the 5th century BC and today. Modern and Ancient Greek also use different diacritics, with modern Greek keeping only the stress accent (acute) and the diaeresis. Apart from its use in writing the Greek language, in both its ancient and its modern forms, the Greek alphabet today also serves as a source of international technical symbols and labels in many domains of mathematics, science, and other fields. Letters. Sound values. In both Ancient and Modern Greek, the letters of the Greek alphabet have fairly stable and consistent symbol-to-sound mappings, making pronunciation of words largely predictable. Ancient Greek spelling was generally near-phonemic. For a number of letters, sound values differ considerably between Ancient and Modern Greek, because their pronunciation has followed a set of systematic phonological shifts that affected the language in its post-classical stages. Among consonant letters, all letters that denoted voiced plosive consonants (/b, d, ɡ/) and aspirated plosives (/pʰ, tʰ, kʰ/) in Ancient Greek stand for corresponding fricative sounds in Modern Greek. The correspondences are as follows: ⟨β⟩ from /b/ to /v/, ⟨δ⟩ from /d/ to /ð/, ⟨γ⟩ from /ɡ/ to /ɣ/, ⟨φ⟩ from /pʰ/ to /f/, ⟨θ⟩ from /tʰ/ to /θ/, and ⟨χ⟩ from /kʰ/ to /x/. Among the vowel symbols, Modern Greek sound values reflect the radical simplification of the vowel system of post-classical Greek, merging multiple formerly distinct vowel phonemes into a much smaller number. This leads to several groups of vowel letters denoting identical sounds today. Modern Greek orthography remains true to the historical spellings in most of these cases. As a consequence, the spellings of words in Modern Greek are often not predictable from the pronunciation alone, while the reverse mapping, from spelling to pronunciation, is usually regular and predictable. The following vowel letters and digraphs are involved in the mergers: ⟨ι, η, υ, ει, οι, υι⟩, all now pronounced /i/; ⟨ε, αι⟩, both pronounced /e/; and ⟨ο, ω⟩, both pronounced /o/. Modern Greek speakers typically use the same, modern symbol–sound mappings in reading Greek of all historical stages. In other countries, students of Ancient Greek may use a variety of conventional approximations of the historical sound system in pronouncing Ancient Greek. Digraphs and letter combinations. Several letter combinations have special conventional sound values different from those of their single components. Among them are several digraphs of vowel letters that formerly represented diphthongs but are now monophthongized.
In addition to the four mentioned above (⟨, οι, υι⟩, pronounced and ⟨⟩, pronounced ), there is also ⟨⟩, and ⟨⟩, pronounced . The Ancient Greek diphthongs ⟨⟩, ⟨⟩ and ⟨⟩ are pronounced , and in Modern Greek. In some environments, they are devoiced to , and . The Modern Greek consonant combinations ⟨⟩ and ⟨⟩ stand for and (or and ); ⟨⟩ stands for and ⟨⟩ stands for . In addition, both in Ancient and Modern Greek, the letter ⟨⟩, before another velar consonant, stands for the velar nasal ; thus ⟨⟩ and ⟨⟩ are pronounced like English ⟨ng⟩ like in the word finger (not like in the word thing). In analogy to ⟨⟩ and ⟨⟩, ⟨⟩ is also used to stand for . There are also the combinations ⟨⟩ and ⟨⟩. Diacritics. In the polytonic orthography traditionally used for ancient Greek and katharevousa, the stressed vowel of each word carries one of three accent marks: either the acute accent (), the grave accent (), or the circumflex accent ( or ). These signs were originally designed to mark different forms of the phonological pitch accent in Ancient Greek. By the time their use became conventional and obligatory in Greek writing, in late antiquity, pitch accent was evolving into a single stress accent, and thus the three signs have not corresponded to a phonological distinction in actual speech ever since. In addition to the accent marks, every word-initial vowel must carry either of two so-called "breathing marks": the rough breathing (), marking an sound at the beginning of a word, or the smooth breathing (), marking its absence. The letter rho (ρ), although not a vowel, also carries rough breathing in a word-initial position. If a rho was geminated within a word, the first always had the smooth breathing and the second the rough breathing (ῤῥ) leading to the transliteration rrh. The vowel letters ⟨⟩ carry an additional diacritic in certain words, the so-called iota subscript, which has the shape of a small vertical stroke or a miniature ⟨⟩ below the letter. This iota represents the former offglide of what were originally long diphthongs, ⟨⟩ (i.e. ), which became monophthongized during antiquity. Another diacritic used in Greek is the diaeresis (), indicating a hiatus. This system of diacritics was first developed by the scholar Aristophanes of Byzantium (c. 257 – c. 185/180 BC), who worked at the Musaeum in Alexandria during the third century BC. Aristophanes of Byzantium also was the first to divide poems into lines, rather than writing them like prose, and also introduced a series of signs for textual criticism. In 1982, a new, simplified orthography, known as "monotonic", was adopted for official use in Modern Greek by the Greek state. It uses only a single accent mark, the acute (also known in this context as "tonos", i.e. simply "accent"), marking the stressed syllable of polysyllabic words, and occasionally the diaeresis to distinguish diphthongal from digraph readings in pairs of vowel letters, making this monotonic system very similar to the accent mark system used in Spanish. The polytonic system is still conventionally used for writing Ancient Greek, while in some book printing and generally in the usage of conservative writers it can still also be found in use for Modern Greek. Although it is not a diacritic, the comma has a similar function as a silent letter in a handful of Greek words, principally distinguishing ("ó,ti", "whatever") from ("óti", "that"). Romanization. There are many different methods of rendering Greek text or Greek names in the Latin script. 
The form in which classical Greek names are conventionally rendered in English goes back to the way Greek loanwords were incorporated into Latin in antiquity. In this system, ⟨⟩ is replaced with ⟨c⟩, the diphthongs ⟨⟩ and ⟨⟩ are rendered as ⟨ae⟩ and ⟨oe⟩ (or ⟨æ,œ⟩); and ⟨⟩ and ⟨⟩ are simplified to ⟨i⟩ and ⟨u⟩. Smooth breathing marks are usually ignored and rough breathing marks are usually rendered as the letter ⟨h⟩. In modern scholarly transliteration of Ancient Greek, ⟨⟩ will usually be rendered as ⟨k⟩, and the vowel combinations ⟨, οι, ει, ου⟩ as ⟨ai, oi, ei, ou⟩. The letters ⟨⟩ and ⟨⟩ are generally rendered as ⟨th⟩ and ⟨ph⟩; ⟨⟩ as either ⟨ch⟩ or ⟨kh⟩; and word-initial ⟨⟩ as ⟨rh⟩. Transcription conventions for Modern Greek differ widely, depending on their purpose, on how close they stay to the conventional letter correspondences of Ancient Greek-based transcription systems, and to what degree they attempt either an exact letter-by-letter transliteration or rather a phonetically based transcription. Standardized formal transcription systems have been defined by the International Organization for Standardization (as ISO 843), by the United Nations Group of Experts on Geographical Names, by the Library of Congress, and others. History. Origins. During the Mycenaean period, from around the sixteenth century to the twelfth century BC, Linear B was used to write the earliest attested form of the Greek language, known as Mycenaean Greek. This writing system, unrelated to the Greek alphabet, last appeared in the thirteenth century BC. In the late ninth century BC or early eighth century BC, the Greek alphabet emerged. The period between the use of the two writing systems, during which no Greek texts are attested, is known as the Greek Dark Ages. The Greeks adopted the alphabet from the earlier Phoenician alphabet, one of the closely related scripts used for the West Semitic languages, calling it Φοινικήια γράμματα 'Phoenician letters'. However, the Phoenician alphabet is limited to consonants. When it was adopted for writing Greek, certain consonants were adapted to express vowels. The use of both vowels and consonants makes Greek the first alphabet in the narrow sense, as distinguished from the abjads used in Semitic languages, which have letters only for consonants. Greek initially took over all of the 22 letters of Phoenician. Five were reassigned to denote vowel sounds: the glide consonants ("yodh") and ("waw") were used for [i] (Ι, "iota") and [u] (Υ, "upsilon"); the glottal stop consonant ("aleph") was used for [a] (Α, "alpha"); the pharyngeal ("ʿayin") was turned into [o] (Ο, "omicron"); and the letter for ("he") was turned into [e] (Ε, "epsilon"). A doublet of waw was also borrowed as a consonant for [w] (Ϝ, digamma). In addition, the Phoenician letter for the emphatic glottal ("heth") was borrowed in two different functions by different dialects of Greek: as a letter for /h/ (Η, heta) by those dialects that had such a sound, and as an additional vowel letter for the long (Η, eta) by those dialects that lacked the consonant. Eventually, a seventh vowel letter for the long (Ω, omega) was introduced. Greek also introduced three new consonant letters for its aspirated plosive sounds and consonant clusters: Φ ("phi") for , Χ ("chi") for and Ψ ("psi") for . In western Greek variants, Χ was instead used for and Ψ for . The origin of these letters is a matter of some debate. 
Three of the original Phoenician letters dropped out of use before the alphabet took its classical shape: the letter Ϻ ("san"), which had been in competition with Σ ("sigma") denoting the same phoneme /s/; the letter Ϙ ("qoppa"), which was redundant with Κ ("kappa") for /k/, and Ϝ ("digamma"), whose sound value /w/ dropped out of the spoken language before or during the classical period. Greek was originally written predominantly from right to left, just like Phoenician, but scribes could freely alternate between directions. For a time, a writing style with alternating right-to-left and left-to-right lines (called "boustrophedon", literally "ox-turning", after the manner of an ox ploughing a field) was common, until in the classical period the left-to-right writing direction became the norm. Individual letter shapes were mirrored depending on the writing direction of the current line. Archaic variants. There were initially numerous local (epichoric) variants of the Greek alphabet, which differed in the use and non-use of the additional vowel and consonant symbols and several other features. Epichoric alphabets are commonly divided into four major types according to their different treatments of additional consonant letters for the aspirated consonants (/pʰ, kʰ/) and consonant clusters (/ks, ps/) of Greek. These four types are often conventionally labelled as "green", "red", "light blue" and "dark blue" types, based on a colour-coded map in a seminal 19th-century work on the topic, "Studien zur Geschichte des griechischen Alphabets" by Adolf Kirchhoff (1867). The "green" (or southern) type is the most archaic and closest to the Phoenician. The "red" (or western) type is the one that was later transmitted to the West and became the ancestor of the Latin alphabet, and bears some crucial features characteristic of that later development. The "blue" (or eastern) type is the one from which the later standard Greek alphabet emerged. Athens used a local form of the "light blue" alphabet type until the end of the fifth century BC, which lacked the letters Ξ and Ψ as well as the vowel symbols Η and Ω. In the Old Attic alphabet, stood for and for . was used for all three sounds (correspondinɡ to classical ), and was used for all of (corresponding to classical ). The letter (heta) was used for the consonant . Some variant local letter forms were also characteristic of Athenian writing, some of which were shared with the neighboring (but otherwise "red") alphabet of Euboia: a form of that resembled a Latin "L" () and a form of that resembled a Latin "S" (). The classical twenty-four-letter alphabet that is now used to represent the Greek language was originally the local alphabet of Ionia. By the late fifth century BC, it was commonly used by many Athenians. In c. 403 BC, at the suggestion of the archon Eucleides, the Athenian Assembly formally abandoned the Old Attic alphabet and adopted the Ionian alphabet as part of the democratic reforms after the overthrow of the Thirty Tyrants. Because of Eucleides's role in suggesting the idea to adopt the Ionian alphabet, the standard twenty-four-letter Greek alphabet is sometimes known as the "Eucleidean alphabet". Roughly thirty years later, the Eucleidean alphabet was adopted in Boeotia and it may have been adopted a few years previously in Macedonia. By the end of the fourth century BC, it had displaced local alphabets across the Greek-speaking world to become the standard form of the Greek alphabet. Letter names. 
When the Greeks adopted the Phoenician alphabet, they took over not only the letter shapes and sound values but also the names by which the sequence of the alphabet could be recited and memorized. In Phoenician, each letter name was a word that began with the sound represented by that letter; thus "ʾaleph", the word for "ox", was used as the name for the glottal stop , "bet", or "house", for the sound, and so on. When the letters were adopted by the Greeks, most of the Phoenician names were maintained or modified slightly to fit Greek phonology; thus, "ʾaleph, bet, gimel" became "alpha, beta, gamma". The Greek names of the following letters are more or less straightforward continuations of their Phoenician antecedents. Between Ancient and Modern Greek, they have remained largely unchanged, except that their pronunciation has followed regular sound changes along with other words (for instance, in the name of "beta", ancient /b/ regularly changed to modern /v/, and ancient /ɛː/ to modern /i/, resulting in the modern pronunciation "vita"). The name of lambda is attested in early sources as besides ; in Modern Greek the spelling is often , reflecting pronunciation. Similarly, iota is sometimes spelled in Modern Greek ( is conventionally transcribed ⟨γ{ι,η,υ,ει,οι}⟩ word-initially and intervocalically before back vowels and ). In the tables below, the Greek names of all letters are given in their traditional polytonic spelling; in modern practice, like with all other words, they are usually spelled in the simplified monotonic system. In the cases of the three historical sibilant letters below, the correspondence between Phoenician and Ancient Greek is less clear, with apparent mismatches both in letter names and sound values. The early history of these letters (and the fourth sibilant letter, obsolete san) has been a matter of some debate. Here too, the changes in the pronunciation of the letter names between Ancient and Modern Greek are regular. In the following group of consonant letters, the older forms of the names in Ancient Greek were spelled with , indicating an original pronunciation with "-ē". In Modern Greek these names are spelled with . The following group of vowel letters were originally called simply by their sound values as long vowels: ē, ō, ū, and . Their modern names contain adjectival qualifiers that were added during the Byzantine period, to distinguish between letters that had become confusable. Thus, the letters ⟨ο⟩ and ⟨ω⟩, pronounced identically by this time, were called "o mikron" ("small o") and "o mega" ("big o"). The letter ⟨ε⟩ was called "e psilon" ("plain e") to distinguish it from the identically pronounced digraph ⟨αι⟩, while, similarly, ⟨υ⟩, which at this time was pronounced , was called "y psilon" ("plain y") to distinguish it from the identically pronounced digraph ⟨οι⟩. Some dialects of the Aegean and Cypriot have retained long consonants and pronounce and ; also, has come to be pronounced in Cypriot. Letter shapes. Like Latin and other alphabetic scripts, Greek originally had only a single form of each letter, without a distinction between uppercase and lowercase. This distinction is an innovation of the modern era, drawing on different lines of development of the letter shapes in earlier handwriting. The oldest forms of the letters in antiquity are majuscule forms. 
Besides the upright, straight inscriptional forms (capitals) found in stone carvings or incised pottery, more fluent writing styles adapted for handwriting on soft materials were also developed during antiquity. Such handwriting has been preserved especially from papyrus manuscripts in Egypt since the Hellenistic period. Ancient handwriting developed two distinct styles: uncial writing, with carefully drawn, rounded block letters of about equal size, used as a book hand for carefully produced literary and religious manuscripts, and cursive writing, used for everyday purposes. The cursive forms approached the style of lowercase letter forms, with ascenders and descenders, as well as many connecting lines and ligatures between letters. In the ninth and tenth century, uncial book hands were replaced with a new, more compact writing style, with letter forms partly adapted from the earlier cursive. This minuscule style remained the dominant form of handwritten Greek into the modern era. During the Renaissance, western printers adopted the minuscule letter forms as lowercase printed typefaces, while modeling uppercase letters on the ancient inscriptional forms. The orthographic practice of using the letter case distinction for marking proper names, titles, etc. developed in parallel to the practice in Latin and other western languages. Derived alphabets. The Greek alphabet was the model for various others: The Armenian and Georgian alphabets are almost certainly modeled on the Greek alphabet, but their graphic forms are quite different. Other uses. Use for other languages. Apart from the daughter alphabets listed above, which were adapted from Greek but developed into separate writing systems, the Greek alphabet has also been adopted at various times and in various places to write other languages. For some of them, additional letters were introduced. In mathematics and science. Greek symbols are used as symbols in mathematics, physics and other sciences. Many symbols have traditional uses, such as lower case epsilon (ε) for an arbitrarily small positive number, lower case pi (π) for the ratio of the circumference of a circle to its diameter, capital sigma (Σ) for summation, and lower case sigma (σ) for standard deviation. For many years the Greek alphabet was used by the World Meteorological Organization for naming North Atlantic hurricanes if a season was so active that it exhausted the regular list of storm names. This happened during the 2005 season (when Alpha through Zeta were used), and the 2020 season (when Alpha through Iota were used), after which the practice was discontinued. In May 2021 the World Health Organization announced that the variants of SARS-CoV-2 of the virus would be named using letters of the Greek alphabet to avoid stigma and simplify communications for non-scientific audiences. Astronomy. Greek letters are used to denote the brighter stars within each of the eighty-eight constellations. In most constellations, the brightest star is designated Alpha and the next brightest Beta etc. For example, the brightest star in the constellation of Centaurus is known as Alpha Centauri. For historical reasons, the Greek designations of some constellations begin with a lower ranked letter. International Phonetic Alphabet. Several Greek letters are used as phonetic symbols in the International Phonetic Alphabet (IPA). Several of them denote fricative consonants; the rest stand for variants of vowel sounds. 
The glyph shapes used for these letters in specialized phonetic fonts is sometimes slightly different from the conventional shapes in Greek typography proper, with glyphs typically being more upright and using serifs, to make them conform more with the typographical character of other, Latin-based letters in the phonetic alphabet. Nevertheless, in the Unicode encoding standard, the following three phonetic symbols are considered the same characters as the corresponding Greek letters proper: On the other hand, the following phonetic letters have Unicode representations separate from their Greek alphabetic use, either because their conventional typographic shape is too different from the original, or because they also have secondary uses as regular alphabetic characters in some Latin-based alphabets, including separate Latin uppercase letters distinct from the Greek ones. The symbol in Americanist phonetic notation for the voiceless alveolar lateral fricative is the Greek letter lambda ⟨⟩, but ⟨ɬ⟩ in the IPA. The IPA symbol for the palatal lateral approximant is ⟨ʎ⟩, which looks similar to lambda, but is actually an inverted lowercase "y". Use as numerals. Greek letters were also used to write numbers. In the classical Ionian system, the first nine letters of the alphabet stood for the numbers from 1 to 9, the next nine letters stood for the multiples of 10, from 10 to 90, and the next nine letters stood for the multiples of 100, from 100 to 900. For this purpose, in addition to the 24 letters which by that time made up the standard alphabet, three otherwise obsolete letters were retained or revived: digamma ⟨Ϝ⟩ for 6, koppa ⟨Ϙ⟩ for 90, and a rare Ionian letter for [ss], today called sampi ⟨Ͳ⟩, for 900. This system has remained in use in Greek up to the present day, although today it is only employed for limited purposes such as enumerating chapters in a book, similar to the way Roman numerals are used in English. The three extra symbols are today written as ⟨ϛ⟩, ⟨ϟ⟩ and ⟨ϡ⟩. To mark a letter as a numeral sign, a small stroke called "keraia" is added to the right of it. Use by student fraternities and sororities. In North America, many college fraternities and sororities are named with combinations of Greek letters, and are hence also known as "Greek letter organizations". This naming tradition was initiated by the foundation of the Phi Beta Kappa Society at the College of William and Mary in 1776. The name of this fraternal organization is an acronym for the ancient Greek phrase (), which means "Love of wisdom, the guide of life" and serves as the organization's motto. Sometimes early fraternal organizations were known by their Greek letter names because the mottos that these names stood for were secret and revealed only to members of the fraternity. Different chapters within the same fraternity are almost always (with a handful of exceptions) designated using Greek letters as serial numbers. The founding chapter of each organization is its A chapter. As an organization expands, it establishes a B chapter, a Γ chapter, and so on and so forth. In an organization that expands to more than 24 chapters, the chapter after Ω chapter is AA chapter, followed by AB chapter, etc. Each of these is still a "chapter Letter", albeit a double-digit letter just as 10 through 99 are double-digit numbers. The Roman alphabet has a similar extended form with such double-digit letters when necessary, but it is used for columns in a table or chart rather than chapters of an organization. Glyph variants. 
Some letters can occur in variant shapes, mostly inherited from medieval minuscule handwriting. While their use in normal typography of Greek is purely a matter of font styles, some such variants have been given separate encodings in Unicode. Computer encodings. For computer usage, a variety of encodings have been used for Greek online, many of them documented in . The two principal ones still used today are ISO/IEC 8859-7 and Unicode. ISO 8859-7 supports only the monotonic orthography; Unicode supports both the monotonic and polytonic orthographies. ISO/IEC 8859-7. For the range A0–FF (hex), it follows the Unicode range 370–3CF (see below) except that some symbols, like ©, ½, § etc. are used where Unicode has unused locations. Like all ISO-8859 encodings, it is equal to ASCII for 00–7F (hex). Greek in Unicode. Unicode supports polytonic orthography well enough for ordinary continuous text in modern and ancient Greek, and even many archaic forms for epigraphy. With the use of combining characters, Unicode also supports Greek philology and dialectology and various other specialized requirements. Most current text rendering engines do not render diacritics well, so, though alpha with macron and acute can be "represented" as U+03B1 U+0304 U+0301, this rarely renders well: . There are two main blocks of Greek characters in Unicode. The first is "Greek and Coptic" (U+0370 to U+03FF). This block is based on ISO 8859-7 and is sufficient to write Modern Greek. There are also some archaic letters and Greek-based technical symbols. This block also supports the Coptic alphabet. Formerly, most Coptic letters shared codepoints with similar-looking Greek letters; but in many scholarly works, both scripts occur, with quite different letter shapes, so as of Unicode 4.1, Coptic and Greek were disunified. Those Coptic letters with no Greek equivalents still remain in this block (U+03E2 to U+03EF). To write polytonic Greek, one may use combining diacritical marks or the precomposed characters in the "Greek Extended" block (U+1F00 to U+1FFF). Combining and letter-free diacritics. Combining and spacing (letter-free) diacritical marks pertaining to Greek language: Encodings with a subset of the Greek alphabet. IBM code pages 437, 860, 861, 862, 863, and 865 contain the letters ΓΘΣΦΩαδεπστφ (plus β as an alternative interpretation for ß). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\epsilon\\,\\!" }, { "math_id": 1, "text": "\\varepsilon\\,\\!" }, { "math_id": 2, "text": "\\varpi\\,\\!" }, { "math_id": 3, "text": "\\Upsilon" }, { "math_id": 4, "text": "\\textstyle\\phi\\,\\!" }, { "math_id": 5, "text": "\\textstyle\\varphi\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=715909
71594556
Lattice Boltzmann methods for solids
Class of computational fluid dynamics methods The Lattice Boltzmann methods for solids (LBMS) are a set of methods for solving partial differential equations (PDE) in solid mechanics. The methods use a discretization of the Boltzmann equation (BM), and their use is known as the lattice Boltzmann methods for solids. LBMS methods are categorized by their reliance on: The LBMS subset remains highly challenging from a computational aspect as much as from a theoretical point of view. Solving solid equations within the LBM framework is still a very active area of research. If solid equations can be solved in this framework, it shows that the Boltzmann equation is capable of describing solid motion as well as fluids and gases, thus unlocking complex physics to be solved such as fluid–structure interaction (FSI) in biomechanics. Proposed insights. Vectorial distributions. The first attempt at LBMS tried to use a Boltzmann-like equation for force (vectorial) distributions. The approach requires more computational memory, but results have been obtained in fracture and solid cracking. Wave solvers. Another approach consists in using LBM as an acoustic solver to capture wave propagation in solids. Force tuning. Introduction. This idea consists of introducing a modified version of the forcing term (or equilibrium distribution) into the LBM as a stress divergence force. This force is considered space-time dependent and contains solid properties formula_0, where formula_1 denotes the Cauchy stress tensor. formula_2 and formula_3 are respectively the gravity vector and solid matter density. The stress tensor is usually computed across the lattice using finite difference schemes. Some results. Force tuning has recently proven its efficiency with a maximum error of 5% in comparison with standard finite element solvers in mechanics. Accurate validation of results can also be a tedious task since these methods are very different; common issues are: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vec{g} = \\frac{1}{\\rho} \\vec{\\mathbf{\\nabla}_{x}} \\cdot \\overline{\\overline{\\sigma}}" }, { "math_id": 1, "text": "\\overline{\\overline{\\sigma}}" }, { "math_id": 2, "text": "\\vec{g}" }, { "math_id": 3, "text": "\\rho" } ]
https://en.wikipedia.org/wiki?curid=71594556
715946
Green–Schwarz mechanism
Mechanism in superstring theory The Green–Schwarz mechanism (sometimes called the Green–Schwarz anomaly cancellation mechanism) is the main discovery that started the first superstring revolution in superstring theory. Discovery. In 1984, Michael Green and John H. Schwarz realized that the anomaly in type I string theory with the gauge group SO(32) cancels because of an extra "classical" contribution from a 2-form field. They realized that one of the necessary conditions for a superstring theory to make sense is that the dimension of the gauge group of type I string theory must be 496 and then demonstrated this to be so. In the original calculation, gauge anomalies, mixed anomalies, and gravitational anomalies were expected to arise from a hexagon Feynman diagram. For the special choice of the gauge group SO(32) or E8 x E8, however, the anomaly factorizes and may be cancelled by a tree diagram. In string theory, this indeed occurs. The tree diagram describes the exchange of a virtual quantum of the B-field. It is somewhat counterintuitive to see that a tree diagram cancels a one-loop diagram, but in reality, both of these diagrams arise as one-loop diagrams in superstring theory in which the anomaly cancellation is more transparent. As recounted in "The Elegant Universe"'s TV version, in the second episode, "The String's the Thing", section "Wrestling with String Theory", Green describes finding 496 on each side of the equals sign during a stormy night filled with lightning, and fondly recalls joking that "the gods are trying to prevent us from completing this calculation". Green soon entitled some of his subsequent lectures "The Theory of Everything". Details. Anomalies in quantum theory arise from one-loop diagrams, with a chiral fermion in the loop and gauge fields, Ricci tensors, or global symmetry currents as the external legs. These diagrams have the form of a triangle in 4 spacetime dimensions, which generalizes to a hexagon in "D" = 10, thus involving 6 external lines. The interesting anomaly in SUSY "D" = 10 gauge theory is the hexagon which has a particular linear combination of the two-form gauge field strength and Ricci tensor, formula_0, for the external lines. Green and Schwarz realized that one can add a so-called Chern–Simons term to the classical action, having the form formula_1, where the integral is over the 10 dimensions, formula_2 is the rank-two Kalb–Ramond field, and formula_3 is a gauge invariant combination of formula_4 (with space-time indices not contracted), which is precisely one of the factors appearing in the hexagon anomaly. If the variation of formula_2 under the transformations of gauge field for formula_5 and under general coordinate transformations is appropriately specified, then the Green–Schwarz term formula_6, when combined with a trilinear vertex through exchange of a gauge boson, has precisely the right variation to cancel the hexagon anomaly. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F^6,\\ F^4 R^2,\\ F^2 R^4,\\ R^6" }, { "math_id": 1, "text": "S_{GS} = \\int B_{2}\\wedge X_8" }, { "math_id": 2, "text": "B_{2}" }, { "math_id": 3, "text": "X_8" }, { "math_id": 4, "text": "F^4,\\ F^2 R^2,\\ R^4" }, { "math_id": 5, "text": "F_{(2)}" }, { "math_id": 6, "text": "S_{GS}" } ]
https://en.wikipedia.org/wiki?curid=715946
71599508
Job 8
Job 8 is the eighth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Bildad (one of Job's friends), which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q100 (4QJobb; 50–1 BCE) with extant verses 15–17. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 8 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 8 records Bildad's first response to Job, which can be divided into several distinct sections: The essence and basis of Bildad's argument (8:1–10). Bildad is the second of Job's friends to speak (verse 1) and he regards Job's words as inappropriate, so he rebukes Job based on his principle that Almighty God will not pervert justice or righteousness. This is in contrast to Eliphaz's approach of God's utter holiness. Bildad believes that suffering is punishment, so the death of Job's children is proof that they have sinned (verses 4–7). The source of Bildad's argument is the long-held traditions, those searched out by former generations, which appear to have stood the test of time (verses 8–10). [Bildad said:] "How long will you speak these things," "and the words of your mouth be like a strong wind?" [Bildad said:] "Does God pervert judgment?" "Or does the Almighty pervert justice?" Verse 3. This verse, stated in the form of a rhetorical question, contains the fundamental premiss of Bildad's argument. The twin concepts, judgment (justice; Hebrew: "mišpāṭ") and justice (righteousness; Hebrew: "tsedeq"), are central in describing the Lord's activity in the Hebrew Bible: on these two principles 'the earth is established', as is 'God's throne' (Psalm 97:2); they are also the two qualities God requires of Israel (Isaiah 5:7; ), and those in which the covenant is grounded (). Bildad's discursive comments and optimistic finish (8:11–22). Bildad's speech (verses 11–19) focuses almost entirely on the negative aspects of the traditional doctrine of retribution, that is, the punishment of the wicked. The excessive and overwhelming details of the discourse seem to force Job to 'understand' that Job's suffering must have been caused by sin. Bildad then concludes his teaching on a fairly positive note (verses 20–22; cf. Psalm 126:2; 132:18), but this 'theoretically optimistic' sense is conditional on Job's repentance of his alleged sin and his turning away from the accusations that God is perverting justice. [Bildad said:] "Those who hate you will be clothed with shame," "and the dwelling place of the wicked will come to nothing." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71599508
7160369
EEStor
FuelPositive Corporation (formerly EEStor ) is a company based in Cedar Park, Texas, United States that claims to have developed a solid state polymer capacitor for electricity storage. The company claims the device stores more energy than lithium-ion batteries at a lower cost than lead-acid batteries used in gasoline-powered cars. Such a device would revolutionize the electric car industry. Many experts believe these claims are not realistic and EEStor has yet to publicly demonstrate these claims. The corporate slogan is "Energy Everywhere". Claimed specifications. The claims are described in detail in several of the company's patents, The following is how EEStor's energy storage device (sometimes referred to the EESU) is claimed to compare to electrochemical batteries used for electric cars: Status and delays. Several delays in production occurred and there has not been a public demonstration of the uniquely high energy density claims of the inventors. This has led to the speculation that the claims are false. In January 2007 EEStor stated in a press release "EEStor, Inc. remains on track to begin shipping production 15 kilowatt-hour Electrical Energy Storage Units (EESU) to ZENN Motor Company in 2007 for use in their electric vehicles." In September 2007, EEStor co-founder Richard Weir told CNET production would begin in the middle of 2008. In August 2008, it was reported he stated "as soon as possible in 2009". ZENN Motor Company (ZMC) denied there was a delay, just a clarification of the schedule, separating "development" and "commercialization". In March 2008 Zenn stated in a quarterly report a "late 2009" launch was scheduled for an EEStor-enabled EV. In December 2009 Zenn announced that production of the lead acid based ZENN LSV would end April 30, 2010. At that time Zenn did not announce a date for production of an EEstor based car. In July 2009 ZENN Motor Company invested an additional $5 million in EEStor, increasing its share of ownership to 10.7%. A Zenn press release indicates they were able to get a 10.7% stake because other EEStor investors did not increase their stake. In a press release dated February 2021, the company announced it was changing its name to "Fuelpositive Corporation". Skepticism from experts and lack of demonstrated claims. EEStor's claims for the EESU exceed by orders of magnitude the energy storage capacity of any capacitor currently sold. Many in the industry have expressed skepticism about the claims. Jim Miller, vice president of advanced transportation technologies at Maxwell Technologies and capacitor expert, stated he was skeptical because of current leakage typically seen at high voltages and because there should be microfractures from temperature changes. He stated "I'm surprised that Kleiner has put money into it." EEStor's claims for the comprehensive permittivity, breakdown strength, and leakage performance of their dielectric material far exceeded those understood to be consistent with the fundamental physical capabilities of any known elemental material or composite structure. For example, the thermochemical theory of polar molecular bond strengths has been confirmed to be valid for a wide range of low-formula_0 thru high-formula_0 paraelectric materials, and shows that there exists a near universal inverse relationship (formula_1) between a material's permittivity (formula_2) and its intrinsic (i.e. defect-free, and thus likely optimal) breakdown strength (formula_3). Patent description and claims. 
EEStor reports a large relative permittivity (19818) at an unusually high electric field strength of 350 MV/m, giving 10^4 J/cm^3 (10^3 Wh/L) in the dielectric. Voltage independence of permittivity was claimed up to 500 V/μm to within 0.25% of low voltage measurements. Variation in permittivity at a single voltage for 10 different components was claimed by measurements in the patent to be less than +/- 0.15%. If true, their capacitors store at least 30 times more energy per volume than (other) cutting-edge methods such as nanotube designs by Dr Schindall at M.I.T., Dr. Ducharme's plastics research, and breakthrough ceramics discussed by Dr. Cann. Northrop Grumman and BASF have also filed patents with similar theoretical energy density claims. The EEStor patents cite a journal article and a Philips Corporation patent as exact descriptions of its "calcined composition-modified barium titanate powder." EEStor's US patent 7033406 mentions aluminum oxide and calcium magnesium aluminosilicate glass as coatings, although their subsequent US patent 7466536 mentions only aluminum oxide. EEStor's latest (2016) US patent WO2016094310 mentions a polymer matrix which can include epoxy and ceramic powders including composition modified barium titanate (CMBT). The patent also mentions a layer thickness of 0.1 microns to 100 microns. It also indicates the CMBT particle density in the polymer matrix can be up to 95%. Phase 4 and Phase 5 testing reports used an epoxy/CMBT solution. More recent testing reports from March 2017 show samples with CMBT ratios of over 80%; in the same report EEStor mentions plans for near-term samples 70 microns thick, with progressively greater levels of densification approaching complete densification. A 70-micron layer targeting a near-term goal of 110 Wh/L energy density is currently in development. Partnerships. In July 2005, Kleiner Perkins Caufield & Byers invested $3 million in EEStor. In April 2007, ZENN Motor Company, a Canadian electric vehicle manufacturer, invested $2.5 million in EEStor for 3.8% ownership and exclusive rights to distribute their devices for passenger and utility vehicles weighing up to 1,400 kg (excluding capacitor mass), along with other rights. In July 2009, Zenn invested another $5 million for a 10.7% stake. A Zenn press release indicates they were able to get a 10.7% stake because other EEStor investors did not increase their stake. In December 2009 Zenn canceled plans for the car but planned to supply the drive train. By April 2010, Zenn had cancelled all production of electric vehicles, leaving ownership of EEStor and their rights to the technology as their focus. Zenn raised CAD$2 million in April 2012, mostly on the promise of EEStor's technology. In January 2008, Lockheed-Martin signed an agreement with EEStor for the exclusive rights to integrate and market EESU units in military and homeland security applications. In December 2008, a patent application was filed by Lockheed-Martin that mentions EEStor's patent as a possible electrical energy storage unit. In September 2008, Light Electric Vehicles Company announced an agreement with EEStor to exclusively provide EEStor's devices for the two and three wheel market. On December 30, 2013, ZENN announced completion of the purchase of Series A preferred shares of EEStor (including Kleiner Perkins Caufield & Byers shares and other private holders' shares) and the associated rights for US$1.5 million, which gives ZENN a total ownership of 41% in EEStor.
On May 8, 2014, ZENN and EEStor completed an exchange offer which gives ZENN a total ownership of 71.3% in EEStor. Following ZENN's acquisition of controlling ownership, on May 19 Ian Clifford assumed the role of CEO following the resignation of James Kofman. ZENN Motor Company Inc. has changed its name to "EEStor Corporation" to better reflect the focus and activities of the company. The name change was approved by shareholders at the company's annual and special meeting held on March 31, 2015. EEStor Corporation (formerly ZENN Motor Company) publicly trades on the Canadian exchanges as symbol ESU and on the US stock exchanges as OTC stock symbol ZNNMF. EEStor Corporation holds 71% equity while the remainder is held privately. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\kappa" }, { "math_id": 1, "text": " \\beta \\propto \\kappa^{-1/2} " }, { "math_id": 2, "text": " \\kappa " }, { "math_id": 3, "text": " \\beta " } ]
https://en.wikipedia.org/wiki?curid=7160369
71606024
Lamp cord trick
Mathematical observation In topology, a branch of mathematics, and specifically knot theory, the lamp cord trick is an observation that two certain spaces are homeomorphic, even if one of the components is knotted. The spaces are formula_0, where formula_1 is a hollow ball homeomorphic to formula_2 and formula_3 a tube connecting the boundary components of formula_1. The name comes from R. H. Bing's book "The Geometric Topology of 3-manifolds". References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "M^3\\backslash T_i,i=1,2" }, { "math_id": 1, "text": "M^3" }, { "math_id": 2, "text": "S^2\\times[0,1]" }, { "math_id": 3, "text": "T_i" } ]
https://en.wikipedia.org/wiki?curid=71606024
7160955
Langley extrapolation
Langley extrapolation is a method for determining the Sun's irradiance at the top of the atmosphere with ground-based instrumentation, and is often used to remove the effect of the atmosphere from measurements of, for example, aerosol optical thickness or ozone. It is based on repeated measurements with a Sun photometer operated at a given location for a cloudless morning or afternoon as the Sun moves across the sky. It is named for American astronomer and physicist Samuel Pierpont Langley. Theory. It is known from Beer's law that, for every instantaneous measurement, the "direct-Sun irradiance" "I" is linked to the "solar extraterrestrial irradiance" "I"0 and the atmospheric optical depth formula_0 by the following equation: (1) "I" = "I"0 exp(−"m"τ), where "m" is a geometrical factor accounting for the slant path through the atmosphere, known as the airmass factor. For a plane-parallel atmosphere, the airmass factor is simple to determine if one knows the solar zenith angle θ: "m" = 1/cos(θ). As time passes, the Sun moves across the sky, and therefore θ and "m" vary according to known astronomical laws. By taking the logarithm of the above equation, one obtains: (2) ln "I" = ln "I"0 − "m"τ, and if one assumes that the atmospheric disturbance formula_0 does not change during the observations (which last for a morning or an afternoon), the plot of ln "I" versus "m" is a straight line with a slope equal to −formula_0. Then, by linear extrapolation to "m" = 0, one obtains "I"0, i.e. the Sun's irradiance that would be observed by an instrument placed above the atmosphere. The requirement for good "Langley plots" is a constant atmosphere (constant formula_0). This requirement can be fulfilled only under particular conditions, since the atmosphere is continuously changing. Needed conditions are in particular: the absence of clouds along the optical path, and the absence of variations in the atmospheric aerosol layer. Since aerosols tend to be more concentrated at low altitude, Langley extrapolation is often performed at high mountain sites. Data from NASA Glenn Research Center indicates that the Langley plot accuracy is improved if the data is taken above the tropopause. Solar cell calibration. A Langley plot can also be used as a method to calculate the performance of solar cells outside the Earth's atmosphere. At the Glenn Research Center, the performance of solar cells is measured as a function of altitude. By extrapolation, researchers determine their performance under space conditions. Low cost LED-based photometers. Sun photometers using low cost light-emitting diode (LED) detectors in place of optical interference filters and photodiodes have a relatively wide spectral response. They might be used by a globally distributed network of students and teachers to monitor atmospheric haze and aerosols, and can be calibrated using Langley extrapolation. In 2001, David Brooks and Forrest Mims were among many to propose detailed procedures to modify the Langley plot in order to account for Rayleigh scattering and atmospheric refraction by a spherical Earth. Di Justo and Gertz compiled a handbook for using Arduino to develop these photometers in 2012. The handbook refers to formula_0 in equations (1) and (2) as the "AOT" (Atmospheric Optical Thickness), and the handbook refers to "I"0 as the "EC" (extraterrestrial constant). The manual suggests that once a photometer is constructed, the user waits for a clear day with few clouds, no haze and constant humidity.
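The Langley fit itself is an ordinary linear regression of ln "I" against airmass "m". The following sketch (a hypothetical Python illustration; the numbers are invented for demonstration and are not real photometer data) shows the extrapolation to "m" = 0:

```python
import numpy as np

# Simulated clear, stable morning: airmass values and direct-Sun readings
m = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
true_tau, true_I0 = 0.12, 1.361           # assumed optical depth and extraterrestrial irradiance
I = true_I0 * np.exp(-true_tau * m)       # Beer's law, equation (1)

# Langley fit: ln I = ln I0 - tau * m is a straight line in m
slope, intercept = np.polyfit(m, np.log(I), 1)
tau_fit = -slope                          # slope gives the optical depth (with a minus sign)
I0_fit = np.exp(intercept)                # intercept at m = 0 gives the extraterrestrial value

print(f"tau = {tau_fit:.3f}, I0 = {I0_fit:.3f}")
```

With real measurements the points scatter about the line, and the quality of the extrapolated "I"0 depends on how well the constant-atmosphere assumption holds during the observation period.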
After the data is fit to equation (1) to find "I"0, the handbook suggests a daily measurement of I. Both "I"0 and I are obtained from the LED current (voltage across a sensing resistor) by subtracting the dark current: "I" = "V"s − "V"d, where formula_1 is the voltage while the LED is pointing at the Sun, and formula_2 is the voltage while the LED is kept dark. There is a misprint in the manual regarding the calculation of formula_0 from this single data point. The correct equation is: τ = ln("I"0 / "I") / "m", where formula_3 was calculated on that clear and stable day using Langley extrapolation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tau" }, { "math_id": 1, "text": "V_s" }, { "math_id": 2, "text": "V_d" }, { "math_id": 3, "text": "I_0" } ]
https://en.wikipedia.org/wiki?curid=7160955
71612575
Potassium perrhenate
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Potassium perrhenate is an inorganic compound with the chemical formula KReO4. Preparation. Potassium perrhenate can be produced by the neutralization of potassium hydroxide and perrhenic acid. formula_0 Properties. Potassium perrhenate is a white solid that is sparingly soluble in water and ethanol. It have a tetragonal crystal system with the space group "I"41/"a" (No. 88), and lattice constants a = 567.4 pm and c = 1266.8 pm. It is a strong oxidizer.
[ { "math_id": 0, "text": "\\mathrm{KOH + HReO_4 \\longrightarrow KReO_4 + H_2O}" } ]
https://en.wikipedia.org/wiki?curid=71612575
71615512
Constant function market maker
Type of market maker Constant-function market makers (CFMM) are a paradigm in the design of trading venues where a trading function and a set of rules determine how liquidity takers (LTs) and liquidity providers (LPs) interact, and how markets are cleared. The trading function is deterministic and known to all market participants. CFMMs display pools of liquidity of two assets. The takers and providers of liquidity interact in the liquidity pools: LPs deposit their assets in the pool and LTs exchange assets directly with the pool. CFMMs rely on two rules: the LT trading condition and the LP provision condition. The LT trading condition links the state of the pool before and after a trade is executed, and it determines the relative prices between the assets by their quantities in the pool. The LP provision condition links the state of the pool before and after liquidity is deposited or withdrawn by an LP. Thus, the trading function establishes the link between liquidity and prices, so LTs can compute the execution costs of their trades as a function of the trade size, and LPs can compute the exact quantities that they deposit. In CFMMs, both conditions state that price formation happens only through LT trades (see below). In decentralized platforms running on peer-to-peer networks, CFMMs are hard-coded and immutable programs implemented as Smart Contracts, where LPs and LTs invoke the code of the contract to execute their transactions. A particular case of CFMMs is that of constant product market makers (CPMMs) such as Uniswap v2 and Uniswap v3, where the trading function uses the product of the quantities of each asset in the pool to determine clearing prices. CFMMs are also popular in prediction markets. Definition. Trading function. Consider a reference asset formula_0 and an asset formula_1 which is valued in terms of formula_0. Assume that the liquidity pool of the CFMM initially consists of quantity formula_2 of asset formula_0 and quantity formula_3 of asset formula_1. The pair formula_4 is referred to as the reserves of the pool (the following definitions can be extended to a basket of more than two assets). The CFMM is characterised by a trading function formula_5 (also known as the invariant) defined over the pool reserves formula_2 and formula_3. The trading function is continuously differentiable and increasing in its arguments (formula_6 denotes the set of positive real numbers). For instance, the trading function of the constant product market maker (CPMM) is formula_7. Other types of CFMMs are the constant sum market maker with formula_8; the constant mean market maker with formula_9, where formula_10 and formula_11; and the hybrid function market maker, which uses combinations of trading functions. LT trading condition and convexity. LT transactions involve exchanging a quantity formula_12 of asset formula_1 for a quantity formula_13 of asset formula_0, and vice versa. The quantities to exchange are determined by the LT trading condition: (1) formula_14 where formula_15 is the "depth" of the pool (see the LP provision condition below) and is a measure of the available liquidity. The value of the depth formula_16 is constant before and after a trade is executed, so the LT trading condition (1) defines a level curve. For a fixed value of the depth formula_15, the level function formula_17 (also known as the forward exchange function) is such that formula_18. For any value formula_15 of the depth, the level function formula_19 is twice differentiable.
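As a concrete numerical illustration of the trading function and the LT trading condition in the constant product case, the following sketch (a hypothetical Python example; the pool sizes are invented and this is not production code from Uniswap or any other venue) computes a swap and compares the executed exchange rate with the marginal rate; the formal definitions of these rates follow below.

```python
def cpmm_swap(x, y, dy):
    """Trade dy units of asset Y into the pool and receive dx units of asset X,
    keeping the constant product x * y (the LT trading condition) invariant."""
    k = x * y                      # depth: psi(x, y) = x * y stays constant
    dx = x - k / (y + dy)          # quantity of X paid out to the liquidity taker
    return dx

# Pool reserves: 1,000,000 units of X (the numeraire) and 500 units of Y
x, y = 1_000_000.0, 500.0
marginal_rate = x / y              # price of Y in units of X for an infinitesimal trade

dy = 10.0                          # the LT sells 10 units of Y into the pool
dx = cpmm_swap(x, y, dy)
exec_rate = dx / dy                # realized exchange rate for this trade size

print(f"marginal rate  = {marginal_rate:.2f} X per Y")              # 2000.00
print(f"executed rate  = {exec_rate:.2f} X per Y")                  # 1960.78
print(f"execution cost = {marginal_rate - exec_rate:.2f} X per Y")  # 39.22
```

The gap between the marginal and executed rates is a direct consequence of the convexity of the level function, and it grows with the trade size relative to the pool's depth.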
The LT trading condition (1) links the state of the pool before and after a liquidity taking trade is executed. For LTs, this condition specifies the exchange rate formula_20, of asset formula_1 in terms of the reference asset formula_0, to trade a (possibly negative) quantity formula_12 of asset formula_1: formula_21 The marginal exchange rate of asset formula_1 in terms of asset formula_0, akin to the midprice in a limit order book (LOB), is the price for an infinitesimal trade in a CFMM: formula_22 It is proven that no roundtrip arbitrage in a CFMM implies that the level function formula_23 must be convex. Execution costs in the CFMM are defined as the difference between the marginal exchange rate and the exchange rate at which a trade is executed. It has been shown that LTs can use the convexity of the level function around the pool's reserves level to approximate the execution costs formula_24 by formula_25. LP provision condition and homotheticity. LP transactions involve depositing or withdrawing quantities formula_26 of asset formula_0 and asset formula_1. Let formula_27 be the initial depth of the pool and let formula_28 be the depth of the pool after an LP deposits formula_29, i.e., formula_30 and formula_31. Let formula_32 and formula_33 be the level functions corresponding to the values formula_27 and formula_28, respectively. Denote by formula_34 the initial marginal exchange rate of the pool. The LP provision condition requires that LPs do not change the marginal rate formula_34, so (2) formula_35 The LP provision condition (2) links the state of the pool before and after a liquidity provision operation is executed. The trading function formula_36 is increasing in the pool reserves formula_2 and formula_37 So, when liquidity provision activity increases (decreases) the size of the pool, the value of the pool's depth formula_15 increases (decreases). The value of formula_15 can be seen as a measure of the liquidity depth in the pool. Note that the LP provision condition holds for any homothetic trading function. Constant Product Market Maker. In CPMMs such as Uniswap v2, the trading function is formula_38 so the level function is formula_39, the marginal exchange rate is formula_40 and the exchange rate for a quantity formula_12 is formula_41 In CPMMs, the liquidity provision condition is formula_42 when the quantities formula_43 are deposited to the pool. Thus, liquidity is provided so that the proportion of the reserves formula_2 and formula_3 in the pool is preserved. Profits and losses of liquidity providers. Fees. For LPs, the key difference between the traditional markets based on LOBs and CFMMs is that in LOBs, market makers post limit orders above and below the mid-price to earn the spread on roundtrip trades, while in CFMMs, LPs earn fees paid by LTs when their liquidity is used. Loss-Versus-Rebalancing. Without fees paid by LTs, liquidity provision in CFMMs is a loss-leading activity. Loss-Versus-Rebalancing (LVR) is a popular measure of these losses. Assume the price follows the dynamics formula_44 then the LVR is given by formula_45 Predictable loss. To thoroughly characterise their losses, LPs can also use Predictable Loss (PL), which is a comprehensive and model-free measure for the unhedgeable and predictable losses of liquidity provision. One source of PL is the convexity cost (losses due to adverse selection, they can be regarded as generalized LVR) whose magnitude depends on liquidity taking activity and the convexity of the level function. 
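Continuing the hypothetical CPMM pool from the sketch above, the convexity term can be checked numerically: the realized execution cost of a trade is close to the approximation formula_25. The code below is illustrative only; the second source of PL is discussed next.

```python
# Sketch (hypothetical CPMM pool): compare the realized execution cost |Z_exec - Z|
# with the convexity approximation 0.5 * phi''(y) * |dy|.

kappa2 = 1_000.0 * 500.0            # depth^2 of a pool with reserves x = 1000, y = 500
x, y = 1_000.0, 500.0

def phi(q: float) -> float:         # level function of the CPMM: phi(y) = kappa^2 / y
    return kappa2 / q

def phi_second(q: float) -> float:  # its second derivative: 2 * kappa^2 / y^3
    return 2.0 * kappa2 / q ** 3

Z = x / y                           # marginal exchange rate
dy = 10.0                           # LT sells 10 units of Y into the pool
Z_exec = (phi(y) - phi(y + dy)) / dy            # realized exchange rate
exact_cost = abs(Z_exec - Z)
approx_cost = 0.5 * phi_second(y) * abs(dy)     # convexity approximation

print(f"exact cost  = {exact_cost:.5f}")        # ~0.0392
print(f"approx cost = {approx_cost:.5f}")       # ~0.0400
```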
The other source is the opportunity cost, which is incurred by LPs who lock assets in the pool instead of investing them in the risk-free asset. For an LP providing reserves formula_46 at time formula_47 and withdrawing liquidity at time formula_48, PL is formula_49 where formula_50 is an increasing stochastic process with initial value formula_51, and formula_52 is a process that describes the reserves in asset formula_1. In particular, formula_53 satisfies formula_54 PL can be estimated without specifying dynamics for the marginal rate or the trading flow and without specifying a parametric form for the level function. PL shows that liquidity provision generates losses for any type of LT trading activity (informed and uninformed). The level of fee revenue must exceed PL in expectation for liquidity provision to be profitable in CFMMs. Impermanent loss. Impermanent loss, or divergence loss, is sometimes used to characterise the risk of providing liquidity in a CFMM. Impermanent loss compares the evolution of the value of the LP's assets in the pool with the evolution of a self-financing buy-and-hold portfolio invested in an alternative venue. The self-financing portfolio is initiated with the same quantities formula_55 as those that the LP deposits in the pool. It can be shown that the impermanent loss formula_56 at time formula_57 is formula_58 where formula_59 are the reserves in asset formula_1 in the pool at time formula_60. The convexity of the level function shows that formula_61. In the case of CPMMs, the impermanent loss is given by formula_62 where formula_63 is the marginal exchange rate in the CPMM pool at time formula_60. formula_64 is not an appropriate measure to characterise the losses of LPs because it can underestimate or overestimate the losses that are solely imputable to liquidity provision. More precisely, the alternative buy-and-hold portfolio is not exposed to the same market risk as the holdings of the LP in the pool, and the impermanent loss can be partly hedged. In contrast, PL is the predictable and unhedgeable component in the wealth of LPs. Concentrated liquidity. Concentrated liquidity is a feature introduced by Uniswap v3 for CPMMs. The key feature of a CPMM pool with CL is that LPs specify a range of exchange rates in which to post liquidity. The bounds of the liquidity range take values in a discretised finite set of exchange rates called ticks. Concentrating liquidity increases fee revenue, but also increases PL and "concentration risk", i.e., the risk of the exchange rate exiting the range. History. An early description of a CFMM was published by economist Robin Hanson in "Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation" (2002). Early literature referred to the broader class of "automated market makers", including that of the Hollywood Stock Exchange founded in 1999; the term "constant-function market maker" was introduced in "Improved Price Oracles: Constant Function Market Makers" (Angeris &amp; Chitra 2020). First seen in production on a Minecraft server in 2012, CFMMs are a popular DEX architecture. Crowdfunded CFMMs. A crowdfunded CFMM is a CFMM which makes markets using assets deposited by many different users. Users may contribute their assets to the CFMM's inventory, and receive in exchange a pro rata share of the inventory, claimable at any point for the assets in the inventory at the time the claim is made. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "(x, y)" }, { "math_id": 5, "text": "f : \\mathbb{R}_{++} \\times \\mathbb{R}_{++} \\rightarrow \\mathbb{R}" }, { "math_id": 6, "text": "\\mathbb{R}_{++} " }, { "math_id": 7, "text": "f(x,y)=x\\times y" }, { "math_id": 8, "text": "f(x,y) = x+y" }, { "math_id": 9, "text": "f(x,y) = w_x x + w_y y " }, { "math_id": 10, "text": "w_x, w_y > 0 " }, { "math_id": 11, "text": "w_x + w_y = 1" }, { "math_id": 12, "text": "\\Delta^y" }, { "math_id": 13, "text": "\\Delta^x" }, { "math_id": 14, "text": "f(x,y) = f(x+ \\Delta^x, y - \\Delta^y) = \\kappa\\,," }, { "math_id": 15, "text": "\\kappa" }, { "math_id": 16, "text": "\\kappa > 0" }, { "math_id": 17, "text": "\\varphi_{\\kappa}" }, { "math_id": 18, "text": "f(x,y) = \\kappa^2 \\iff x=\\varphi_{\\kappa}(y)" }, { "math_id": 19, "text": "\\varphi_{\\kappa}: \\mathbb R_{++} \\mapsto \\mathbb R_{++}" }, { "math_id": 20, "text": "\\tilde Z(\\Delta^y)" }, { "math_id": 21, "text": "\\tilde Z(\\Delta^y) = \\left(\\varphi_\\kappa\\left(y\\right)-\\varphi_\\kappa\\left(y+\\Delta^y\\right)\\right)\\big / \\Delta^y\\,." }, { "math_id": 22, "text": "Z = \\lim_{\\Delta^y \\rightarrow 0} \\tilde Z(\\Delta^y) = -\\varphi'_{\\kappa}(y)." }, { "math_id": 23, "text": "\\varphi" }, { "math_id": 24, "text": "\\left|\\tilde Z(\\Delta^y) - Z\\right|" }, { "math_id": 25, "text": "\\frac12\\,\\varphi_\\kappa^{''}(y) \\left|\\Delta^y\\right|" }, { "math_id": 26, "text": "(\\Delta^x, \\Delta^y)" }, { "math_id": 27, "text": "\\kappa_0" }, { "math_id": 28, "text": "\\kappa_1" }, { "math_id": 29, "text": "\\Delta^x, \\Delta^y" }, { "math_id": 30, "text": "f(x , y ) = \\kappa_0^2" }, { "math_id": 31, "text": "f(x + \\Delta x, y + \\Delta y) = \\kappa_1^2" }, { "math_id": 32, "text": "\\varphi_{\\kappa_0}" }, { "math_id": 33, "text": "\\varphi_{\\kappa_1}" }, { "math_id": 34, "text": "Z" }, { "math_id": 35, "text": "-\\varphi'_{\\kappa_0}\\left(y\\right) = -\\varphi'_{\\kappa_1}\\left(y + \\Delta^y\\right)=Z\\,. " }, { "math_id": 36, "text": "f(x,y)" }, { "math_id": 37, "text": "y." }, { "math_id": 38, "text": "f\\left(x, y\\right) = x\\times y," }, { "math_id": 39, "text": "\\varphi\\left(y\\right) = \\kappa^2 \\big / y" }, { "math_id": 40, "text": "Z = x / y," }, { "math_id": 41, "text": "\\tilde Z\\left(\\Delta^y\\right) = Z - Z^{3/2} \\Delta^y \\big / \\kappa." }, { "math_id": 42, "text": "x/y = (x+ \\Delta^x)/(y+ \\Delta^y)" }, { "math_id": 43, "text": "(\\Delta^x,\\Delta^y)" }, { "math_id": 44, "text": "dS_t = \\sigma_t dW_t" }, { "math_id": 45, "text": "\\text{LVR}_{t}=-\\frac{1}{2}\\int_{0}^{t}\\,\\sigma_s^2\\,\\text{d}s \\,\\leq 0\\,." 
}, { "math_id": 46, "text": "(x_0, y_0)" }, { "math_id": 47, "text": "t=0" }, { "math_id": 48, "text": "T >0" }, { "math_id": 49, "text": " \\text{PL}_{T}= -\\,\\,\\underbrace{\\frac{1}{2}\\int_{0}^{T}\\,\\varphi''\\left(y_{s}\\right)\\,\\text{d}\\left\\langle y,y\\right\\rangle_{s}\\,}_{\\text{Convexity cost}\\,\\geq\\,0} \\,\\,-\\,\\, \\underbrace{\\int_{0}^{T}\\xi_s\\,r\\,\\text{d}s \\,}_{\\text{Opportunity cost}\\,\\geq\\,0} \\,," }, { "math_id": 50, "text": "\\left(\\xi_t\\right)_{t\\in[0, T]}" }, { "math_id": 51, "text": "0" }, { "math_id": 52, "text": "\\left(y_t\\right)_{t\\in[0, T]}" }, { "math_id": 53, "text": "\\text{PL}" }, { "math_id": 54, "text": "\\text{PL}_{t}\\leq-\\frac{1}{2}\\int_{0}^{t}\\,\\varphi''\\left(y_{s}\\right)\\,\\text{d}\\left\\langle y,y\\right\\rangle_{s} \\,\\leq 0\\,." }, { "math_id": 55, "text": "\\left(x_0, y_0\\right)" }, { "math_id": 56, "text": "\\text{IL}_t" }, { "math_id": 57, "text": "t>0" }, { "math_id": 58, "text": "\\text{IL}_t = \\ -\\left(\\varphi\\left(y_0\\right)-\\varphi\\left(y_{t}\\right)-\\varphi'(y_{t})\\left(y_0-y_{t}\\right)\\right)" }, { "math_id": 59, "text": "y_t" }, { "math_id": 60, "text": "t" }, { "math_id": 61, "text": "\\text{IL}_t \\leq 0" }, { "math_id": 62, "text": "\\text{IL}_t = -\\kappa\\,\\sqrt{Z_t}\\left(1-\\sqrt{\\frac{Z_{t}}{Z_0}}\\right)^{2}\\leq 0\\,." }, { "math_id": 63, "text": "Z_t" }, { "math_id": 64, "text": "\\text{IL}" }, { "math_id": 65, "text": "\\varphi = R_1 * R_2" }, { "math_id": 66, "text": "\\varphi = -K\\Phi(\\Phi^{-1}(1-R_1)-\\sigma\\sqrt{\\tau}) + R_2" }, { "math_id": 67, "text": "\\varphi = R_{1}-\\left(p_{1}-\\frac{1}{2}R_{2}\\right)^{2}" }, { "math_id": 68, "text": "\\varphi = R_1 + R_2 " } ]
https://en.wikipedia.org/wiki?curid=71615512
7161754
Schoolmaster snapper
Species of fish &lt;templatestyles src="Template:Taxobox/core/styles.css" /&gt; The schoolmaster snapper (Lutjanus apodus) is a species of marine ray-finned fish, a snapper belonging to the family Lutjanidae. It is found in the western Atlantic Ocean. Like other snapper species, it is a popular food fish. Taxonomy. The schoolmaster snapper was first formally described in 1792 as "Perca apoda" by the German physician, naturalist and taxonomist Johann Julius Walbaum with the type locality given as the Bahamas. Walbaum's description was based on an illustration which omitted the fish's pectoral fins, so he gave it the specific name "apoda" meaning "footless". Description. The schoolmaster snapper has a moderately deep body which is robust and slightly compressed, with a long, pointed snout and a large mouth. One of the upper pairs of canine teeth is clearly larger than the back teeth in the lower jaw and can be seen when the mouth is closed. The vomerine teeth are arranged in a chevron- or crescent-shaped patch with a line of similar teeth extending from the middle of the patch towards the rear. The preoperculum has a weakly developed incision and knob. This species has a protrusible upper jaw which is mostly covered by the cheek bone when the mouth is closed, and both pairs of nostrils are simple holes. The dorsal fin is continuous with a slight incision separating the spiny part from the soft-rayed part. The dorsal fin has 10 spines and 14 soft rays while the rounded anal fin contains 3 spines and 8 soft rays. The interior scale rows on the back are parallel to the lateral line. The caudal fin is slightly emarginate or truncate. The pectoral fins are longer than the distance from the longest point of the snout to the rear edge of the preopercle, reaching the level of the anus. The color is olive gray to brownish on the upper back and upper sides, with a yellow to reddish tinge around the head. The lower sides and belly are lighter; there is no dark lateral spot below the anterior part of the soft dorsal fin. There are 8 narrow, light vertical bars on the side of the body which may be faded or absent in large adults. A solid or broken blue line runs beneath the eye; it may also disappear with growth. From the upper jaw to the tip of the fleshy opercle, the line is often broken into parts that resemble dashes and spots. The fins and tail are bright yellow, yellow green, or pale orange, and the snout contains blue stripes. This fish attains a maximum fork length of , although is more typical, and the maximum published weight is . Distribution. The schoolmaster snapper is found in the western Atlantic from Bermuda and the southeastern coast of the United States from Cape Canaveral in Florida southwards to the Bahamas and into the Gulf of Mexico, where its range runs from the Florida Keys as far north as Tampa, Florida, then from Alabama westwards along the coast of the Gulf to the Yucatan Peninsula and northwestern Cuba. It occurs throughout the Caribbean Sea. It has been recorded as far north as Massachusetts but these records involve juveniles that cannot survive the winter. It is typically found at depths between , with one record at . Adults usually stay near shore and shelter around elkhorn and gorgonian coral. Large adults are sometimes found on the continental shelf. Typical depths are up to . It has been reported that at night, schoolmaster snapper may increase their range to twice the daytime range, mostly by visiting seagrass beds. Biology.
The schoolmaster snapper forms large resting schools during the day which disperse to forage at night, these schools frequently sheltering in beds of sea grass. These aggregations are defensive and are adopted by the fishes to minimise the chance of any one of them being predated. These fish spend 84% of the day swimming, 13% resting, 2% eating and less than 0.5% in other behaviors. Feeding. Studies have reported that the diet of the schoolmaster snapper changes ontogenetically: small individuals, less than in length, have 90% of their diet made up of crustaceans, specifically amphipods and crabs. Larger specimens preferred smaller fish, these making up more than 50% of their diet by weight, and also ate crabs, shrimp, and stomatopods. These differences in diets were attributed to the ability of the bigger fish to open their jaws wider for bigger prey. Reproduction and growth. Schoolmaster snapper are gonochorist, meaning males and females are separate. They spawn over most of the year, with the majority of the spawning happening during middle to late summer. They spawn during April–June off Cuba. They reproduce by spawning in open water with both male and female fish releasing their gametes at the same time. The fertilized eggs then settle to the bottom, where they are left unguarded. The schoolmaster snapper is a slow-growing, long-lived species which has a maximum recorded age of 42 years. As fish grow longer, they increase in weight, but the relationship is not linear. The relationship between length (L) and weight (W) for nearly all species of fish can be expressed by an equation of the form: formula_0 Invariably, b is close to 3.0 for all species, and c varies between species. A weight-length relationship based on 100 schoolmaster snapper ranging in length from 2 to 7 in (50 mm to 180 mm) found the coefficient c was 0.000050015 and the exponent b was 2.9107. This relationship suggests a 12.5-inch schoolmaster snapper (320 mm) will weigh about 2.2 lb (1 kg) (see the short numerical check below). Commercial and recreational use. Schoolmaster snapper, along with other snapper species, are sought by both recreational and commercial fishermen, but the schoolmaster snapper is not as frequently targeted by commercial fisheries as other sympatric Lutjanus snappers. Their food quality is reported to be excellent. However, the consumption of this species has been linked to ciguatera poisoning in humans. Fishing regulations in US state waters are specific to each state, but they have similarities. For example, the minimum length in Florida for schoolmaster snapper is total length with a catch limit of 10 per fisherman per day. However, the 10-fish limit is an aggregate for all species of snapper. Light spinning and baitcasting tackle are used to fish for schoolmaster snapper. Live shrimp and baitfish, as well as shrimp pieces and cut bait, are the best natural bait. While jigs make for the best artificial bait, artificials are rarely used and rarely successful. Conservation. The IUCN lists this species as being of Least Concern because it is not routinely targeted by commercial fisheries and separate commercial catch statistics are unavailable. The juveniles are common in mangroves and on shallow reefs which may be threatened by coastal development, as well as the impacts of climate change. In Colombia, illegal dynamite fishing in the Rosario Islands led to the near extirpation of this species. In some areas, minimum sizes and bag limits have been introduced to conserve the stocks of schoolmaster snapper.
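As referenced under "Reproduction and growth", a brief numerical check of the weight-length relationship (a Python sketch; units of millimetres and grams are assumed from the worked example in the text):

```python
# Brief check of W = c * L**b with the published coefficients; lengths in mm, weights in grams
# (units assumed from the 320 mm ~ 1 kg example quoted above).

c, b = 0.000050015, 2.9107

def weight_g(length_mm: float) -> float:
    return c * length_mm ** b

print(f"{weight_g(320):.0f} g")   # ~980 g, i.e. about 1 kg for a 320 mm (12.5 in) fish
```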
Fishing pressure could lead to a decrease in numbers and make the defensive aggregations of these fish more vulnerable to predation, so it has been suggested that no-take zones be introduced to reduce the amount of fishes killed by fisheries. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "W = cL^b\\!\\," } ]
https://en.wikipedia.org/wiki?curid=7161754
71624
JPEG File Interchange Format
Image file format with multiple editions The JPEG File Interchange Format (JFIF) is an image file format standard published as ITU-T Recommendation T.871 and ISO/IEC 10918-5. It defines supplementary specifications for the container format that contains the image data encoded with the JPEG algorithm. The base specifications for a JPEG container format are defined in Annex B of the JPEG standard, known as JPEG Interchange Format (JIF). JFIF builds on JIF to solve some of JIF's limitations, including unnecessary complexity, component sample registration, resolution, aspect ratio, and color space. Because JFIF is not the original JPEG standard, one might expect another MIME type. However, it is still registered as "image/jpeg" (indicating its primary data format rather than the amended information). JFIF is mutually incompatible with the newer Exchangeable image file format (Exif). Purpose. JFIF defines a number of details that are left unspecified by the JPEG Part 1 standard (ISO/IEC 10918-1, ITU-T Recommendation T.81). Component sample registration. JPEG allows multiple components (such as Y, Cb, and Cr) to have different resolutions, but it does not define how those differing sample arrays should be aligned with each other. JFIF specifies that each sample is registered to the centre of the rectangular image area it represents, rather than to a corner of that area (a "centred" rather than "co-sited" registration). Resolution and aspect ratio. The JPEG standard does not include any method of coding the resolution or aspect ratio of an image. JFIF provides resolution or aspect ratio information using an application segment extension to JPEG. It uses Application Segment #0, with a segment header consisting of the ASCII string "JFIF" followed by a terminating null byte, and specifies that this must be the first segment in the file, hence making it simple to recognize a JFIF file. Exif images recorded by digital cameras generally do not include this segment, but typically comply in all other respects with the JFIF standard. Color space. The JPEG standard used for the compression coding in JFIF files does not define which color encoding is to be used for images. JFIF defines the color model to be used: either Y for greyscale, or YCbCr derived from RGB color primaries as defined in CCIR 601 (now known as Rec. ITU-R BT.601), except with a different "full range" scaling of the Y, Cb and Cr components. Unlike the "studio range" defined in CCIR 601, in which black is represented by Y=16 and white by Y=235 and values outside of this range are available for signal processing "headroom" and "footroom", JFIF uses all 256 levels of the 8-bit representation, so that Y=0 for black and Y=255 for peak white. The RGB color primaries defined in JFIF via CCIR 601 also differ somewhat from what has become common practice in newer applications (e.g., they differ slightly from the color primaries defined in sRGB). Moreover, CCIR 601 (before 2007) did not provide a precise definition of the RGB color primaries; it relied instead on the underlying practices of the television industry. Color interpretation of a JFIF image may be improved by embedding an ICC profile, colorspace metadata, or an sRGB tag, and using an application that interprets this information. File format structure. A JFIF file consists of a sequence of markers or marker segments (for details refer to JPEG, Syntax and structure). The markers are defined in part 1 of the JPEG Standard.
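Before turning to the marker layout, the full-range RGB to YCbCr conversion described under "Color space" can be sketched in Python (BT.601 luma coefficients with all three components using the full 0-255 range; the rounding and clamping details here are illustrative rather than quoted from the specification text):

```python
# Full-range RGB -> YCbCr sketch (BT.601 coefficients, full 0-255 range for Y, Cb and Cr).

def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple[int, int, int]:
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(y), clamp(cb), clamp(cr)

print(rgb_to_ycbcr(255, 255, 255))   # white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))         # black -> (0, 128, 128)
print(rgb_to_ycbcr(255, 0, 0))       # red   -> (76, 85, 255)
```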
Each marker consists of two bytes: an codice_0 byte followed by a byte which is not equal to codice_1 or codice_0 and specifies the type of the marker. Some markers stand alone, but most indicate the start of a marker segment that contains data bytes according to the following pattern: codice_3 The bytes "s1" and "s2" are taken together to represent a big-endian 16-bit integer specifying the length of the following "data bytes" plus the 2 bytes used to represent the length. In other words, "s1" and "s2" specify the number of the following "data bytes" as formula_0. According to part 1 of the JPEG standard, applications can use APP marker segments and define an application specific meaning of the data. In the JFIF standard, the following APP marker segments are defined: They are described below. The JFIF standard requires that the JFIF APP0 marker segment immediately follows the SOI marker. If a JFIF extension APP0 marker segment is used, it must immediately follow the JFIF APP0 marker segment. So a JFIF file will have the following structure: JFIF APP0 marker segment. In the mandatory JFIF APP0 marker segment the parameters of the image are specified. Optionally an uncompressed thumbnail can be embedded. JFIF extension APP0 marker segment. Immediately following the JFIF APP0 marker segment may be a JFIF extension APP0 marker segment. This segment may only be present for JFIF versions 1.02 and above. It allows to embed a thumbnail image in 3 different formats. The thumbnail data depends on the thumbnail format as follows: Compatibility. The newer Exchangeable image file format (Exif) is comparable to JFIF, but the two standards are mutually incompatible. This is because both standards specify that their particular application segment (APP0 for JFIF, APP1 for Exif) must immediately follow the SOI marker. In practice, many programs and digital cameras produce files with both application segments included. This will not affect the image decoding for most decoders, but poorly designed JFIF or Exif parsers may not recognise the file properly. JFIF is compatible with Adobe Photoshop's JPEG "Information Resource Block" extensions, and IPTC Information Interchange Model metadata, since JFIF does not preclude other application segments, and the Photoshop extensions are not required to be the first in the file. However, Photoshop generally saves CMYK buffers as four-component "Adobe JPEGs" that are not conformant with JFIF. Since these files are not in a YCbCr color space, they are typically not decodable by Web browsers and other Internet software. History. Development of the JFIF document was led by Eric Hamilton of C-Cube Microsystems, and agreement on the first version was established in late 1991 at a meeting held at C-Cube involving about 40 representatives of various computer, telecommunications, and imaging companies. Shortly afterwards, a minor revision was published — JFIF 1.01. For nearly 20 years, the latest version available was v1.02, published September 1, 1992. In 1996, RFC 2046 specified that the image format used for transmitting JPEG images across the Internet should be JFIF. The MIME type of "image/jpeg" must be encoded as JFIF. In practice, however, virtually all Internet software can decode any baseline "JIF" image that uses Y or YCbCr components, whether it is JFIF compliant or not. 
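Whether a file is JFIF-compliant in the strict sense can be checked mechanically from the marker and length conventions described earlier. A minimal sketch in Python (assumes a well-formed byte stream and only inspects the first segment after SOI; the helper name and sample bytes are illustrative):

```python
import struct

def first_segment_is_jfif_app0(data: bytes) -> bool:
    """Check that the segment immediately after SOI is a JFIF APP0 segment."""
    if data[:2] != b"\xff\xd8" or len(data) < 11:    # must start with the SOI marker
        return False
    if data[2] != 0xFF or data[3] != 0xE0:           # JFIF requires APP0 right after SOI
        return False
    (length,) = struct.unpack(">H", data[4:6])       # big-endian: 256*s1 + s2
    payload = data[6:4 + length]                     # 'length' counts the two length bytes
    return payload[:5] == b"JFIF\x00"                # null-terminated "JFIF" identifier

# Example (synthetic JFIF header bytes, not a complete image):
hdr = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x02\x01\x00H\x00H\x00\x00"
print(first_segment_is_jfif_app0(hdr))   # True
```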
As time went by, C-Cube was restructured (eventually devolving into Harmonic, LSI Logic, Magnum Semiconductor, Avago Technologies, Broadcom, GigOptix, GigPeak, etc.) and lost interest in the document, and the specification had no official publisher until it was picked up by Ecma International and the ITU-T/ISO/IEC Joint Photographic Experts Group around 2009 to avoid it being lost to history, provide a way to formally cite it in standard publications, and improve its editorial quality. It was published by ECMA in 2009 as Technical Report number 98 to avoid loss of the historical record, and it was formally standardized by ITU-T in 2011 as its Recommendation T.871 and by ISO/IEC in 2013 as ISO/IEC 10918-5. The newer publications included editorial improvements but no substantial technical changes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "256 \\cdot s1 + s2 - 2" } ]
https://en.wikipedia.org/wiki?curid=71624
7163
Catenary
Curve formed by a hanging chain In physics and geometry, a catenary ( , ) is the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends in a uniform gravitational field. The catenary curve has a U-like shape, superficially similar in appearance to a parabola, which it is not. The curve appears in the design of certain types of arches and as a cross section of the catenoid—the shape assumed by a soap film bounded by two parallel circular rings. The catenary is also called the alysoid, chainette, or, particularly in the materials sciences, an example of a funicular. Rope statics describes catenaries in a classic statics problem involving a hanging rope. Mathematically, the catenary curve is the graph of the hyperbolic cosine function. The surface of revolution of the catenary curve, the catenoid, is a minimal surface, specifically a minimal surface of revolution. A hanging chain will assume a shape of least potential energy which is a catenary. Galileo Galilei in 1638 discussed the catenary in the book "Two New Sciences" recognizing that it was different from a parabola. The mathematical properties of the catenary curve were studied by Robert Hooke in the 1670s, and its equation was derived by Leibniz, Huygens and Johann Bernoulli in 1691. Catenaries and related curves are used in architecture and engineering (e.g., in the design of bridges and arches so that forces do not result in bending moments). In the offshore oil and gas industry, "catenary" refers to a steel catenary riser, a pipeline suspended between a production platform and the seabed that adopts an approximate catenary shape. In the rail industry it refers to the overhead wiring that transfers power to trains. (This often supports a contact wire, in which case it does not follow a true catenary curve.) In optics and electromagnetics, the hyperbolic cosine and sine functions are basic solutions to Maxwell's equations. The symmetric modes consisting of two evanescent waves would form a catenary shape. History. The word "catenary" is derived from the Latin word "catēna", which means "chain". The English word "catenary" is usually attributed to Thomas Jefferson, who wrote in a letter to Thomas Paine on the construction of an arch for a bridge: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I have lately received from Italy a treatise on the equilibrium of arches, by the Abbé Mascheroni. It appears to be a very scientifical work. I have not yet had time to engage in it; but I find that the conclusions of his demonstrations are, that every part of the catenary is in perfect equilibrium. It is often said that Galileo thought the curve of a hanging chain was parabolic. However, in his "Two New Sciences" (1638), Galileo wrote that a hanging cord is only an approximate parabola, correctly observing that this approximation improves in accuracy as the curvature gets smaller and is almost exact when the elevation is less than 45°. The fact that the curve followed by a chain is not a parabola was proven by Joachim Jungius (1587–1657); this result was published posthumously in 1669. The application of the catenary to the construction of arches is attributed to Robert Hooke, whose "true mathematical and mechanical form" in the context of the rebuilding of St Paul's Cathedral alluded to a catenary. Some much older arches approximate catenaries, an example of which is the Arch of Taq-i Kisra in Ctesiphon. 
In 1671, Hooke announced to the Royal Society that he had solved the problem of the optimal shape of an arch, and in 1675 published an encrypted solution as a Latin anagram in an appendix to his "Description of Helioscopes," where he wrote that he had found "a true mathematical and mechanical form of all manner of Arches for Building." He did not publish the solution to this anagram in his lifetime, but in 1705 his executor provided it as "ut pendet continuum flexile, sic stabit contiguum rigidum inversum", meaning "As hangs a flexible cable so, inverted, stand the touching pieces of an arch." In 1691, Gottfried Leibniz, Christiaan Huygens, and Johann Bernoulli derived the equation in response to a challenge by Jakob Bernoulli; their solutions were published in the "Acta Eruditorum" for June 1691. David Gregory wrote a treatise on the catenary in 1697 in which he provided an incorrect derivation of the correct differential equation. Euler proved in 1744 that the catenary is the curve which, when rotated about the x-axis, gives the surface of minimum surface area (the catenoid) for the given bounding circles. Nicolas Fuss gave equations describing the equilibrium of a chain under any force in 1796. Inverted catenary arch. Catenary arches are often used in the construction of kilns. To create the desired curve, the shape of a hanging chain of the desired dimensions is transferred to a form which is then used as a guide for the placement of bricks or other building material. The Gateway Arch in St. Louis, Missouri, United States is sometimes said to be an (inverted) catenary, but this is incorrect. It is close to a more general curve called a flattened catenary, with equation "y" = "A" cosh("Bx"), which is a catenary if "AB" = 1. While a catenary is the ideal shape for a freestanding arch of constant thickness, the Gateway Arch is narrower near the top. According to the U.S. National Historic Landmark nomination for the arch, it is a "weighted catenary" instead. Its shape corresponds to the shape that a weighted chain, having lighter links in the middle, would form. Catenary bridges. In free-hanging chains, the force exerted is uniform with respect to length of the chain, and so the chain follows the catenary curve. The same is true of a simple suspension bridge or "catenary bridge," where the roadway follows the cable. A stressed ribbon bridge is a more sophisticated structure with the same catenary shape. However, in a suspension bridge with a suspended roadway, the chains or cables support the weight of the bridge, and so do not hang freely. In most cases the roadway is flat, so when the weight of the cable is negligible compared with the weight being supported, the force exerted is uniform with respect to horizontal distance, and the result is a parabola, as discussed below (although the term "catenary" is often still used, in an informal sense). If the cable is heavy then the resulting curve is between a catenary and a parabola. Anchoring of marine objects. The catenary produced by gravity provides an advantage to heavy anchor rodes. An anchor rode (or anchor line) usually consists of chain or cable or both. Anchor rodes are used by ships, oil rigs, docks, floating wind turbines, and other marine equipment which must be anchored to the seabed. When the rope is slack, the catenary curve presents a lower angle of pull on the anchor or mooring device than would be the case if it were nearly straight. 
This enhances the performance of the anchor and raises the level of force it will resist before dragging. To maintain the catenary shape in the presence of wind, a heavy chain is needed, so that only larger ships in deeper water can rely on this effect. Smaller boats also rely on catenary to maintain maximum holding power. Cable ferries and chain boats present a special case of marine vehicles moving although moored by the two catenaries each of one or more cables (wire ropes or chains) passing through the vehicle and moved along by motorized sheaves. The catenaries can be evaluated graphically. Mathematical description. Equation. The equation of a catenary in Cartesian coordinates has the form formula_0 where cosh is the hyperbolic cosine function, and where a is the distance of the lowest point above the x axis. All catenary curves are similar to each other, since changing the parameter a is equivalent to a uniform scaling of the curve. The Whewell equation for the catenary is formula_1 where formula_2 is the tangential angle and s the arc length. Differentiating gives formula_3 and eliminating formula_2 gives the Cesàro equation formula_4 where formula_5 is the curvature. The radius of curvature is then formula_6 which is the length of the normal between the curve and the x-axis. Relation to other curves. When a parabola is rolled along a straight line, the roulette curve traced by its focus is a catenary. The envelope of the directrix of the parabola is also a catenary. The involute from the vertex, that is the roulette traced by a point starting at the vertex when a line is rolled on a catenary, is the tractrix. Another roulette, formed by rolling a line on a catenary, is another line. This implies that square wheels can roll perfectly smoothly on a road made of a series of bumps in the shape of an inverted catenary curve. The wheels can be any regular polygon except a triangle, but the catenary must have parameters corresponding to the shape and dimensions of the wheels. Geometrical properties. Over any horizontal interval, the ratio of the area under the catenary to its length equals a, independent of the interval selected. The catenary is the only plane curve other than a horizontal line with this property. Also, the geometric centroid of the area under a stretch of catenary is the midpoint of the perpendicular segment connecting the centroid of the curve itself and the x-axis. Science. A moving charge in a uniform electric field travels along a catenary (which tends to a parabola if the charge velocity is much less than the speed of light c). The surface of revolution with fixed radii at either end that has minimum surface area is a catenary revolved about the x-axis. Analysis. Model of chains and arches. In the mathematical model the chain (or cord, cable, rope, string, etc.) is idealized by assuming that it is so thin that it can be regarded as a curve and that it is so flexible any force of tension exerted by the chain is parallel to the chain. The analysis of the curve for an optimal arch is similar except that the forces of tension become forces of compression and everything is inverted. An underlying principle is that the chain may be considered a rigid body once it has attained equilibrium. Equations which define the shape of the curve and the tension of the chain at each point may be derived by a careful inspection of the various forces acting on a segment using the fact that these forces must be in balance if the chain is in static equilibrium. 
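Before the force-balance derivation, the equation and arc-length relations quoted above can be checked numerically; the following Python sketch uses an arbitrary value of the parameter a:

```python
# Numerical check: with the vertex at (0, a), the arc length from the vertex is
# s = a*sinh(x/a), y^2 = a^2 + s^2, and (area under the curve) / (arc length) = a
# over any horizontal interval.

import math

a = 2.5
x1, x2 = -1.0, 3.0

def y(x):  return a * math.cosh(x / a)
def s(x):  return a * math.sinh(x / a)     # arc length measured from the vertex

# y^2 = a^2 + s^2 at an arbitrary point
x = 1.7
assert abs(y(x) ** 2 - (a ** 2 + s(x) ** 2)) < 1e-9

# area / arc length = a over [x1, x2] (closed forms used for both integrals)
area   = a * a * (math.sinh(x2 / a) - math.sinh(x1 / a))
length = s(x2) - s(x1)
print(area / length)     # 2.5, i.e. the parameter a
```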
Let the path followed by the chain be given parametrically by r = ("x", "y") = ("x"("s"), "y"("s")) where s represents arc length and r is the position vector. This is the natural parameterization and has the property that formula_7 where u is a unit tangent vector. A differential equation for the curve may be derived as follows. Let c be the lowest point on the chain, called the vertex of the catenary. The slope of the curve is zero at c since it is a minimum point. Assume r is to the right of c since the other case is implied by symmetry. The forces acting on the section of the chain from c to r are the tension of the chain at c, the tension of the chain at r, and the weight of the chain. The tension at c is tangent to the curve at c and is therefore horizontal without any vertical component and it pulls the section to the left so it may be written (−"T"0, 0) where "T"0 is the magnitude of the force. The tension at r is parallel to the curve at r and pulls the section to the right. The tension at r can be split into two components so it may be written "T"u = ("T" cos "φ", "T" sin "φ"), where T is the magnitude of the force and φ is the angle between the curve at r and the x-axis (see tangential angle). Finally, the weight of the chain is represented by (0, −"ws") where w is the weight per unit length and s is the length of the segment of chain between c and r. The chain is in equilibrium so the sum of three forces is 0, therefore formula_8 and formula_9 and dividing these gives formula_10 It is convenient to write formula_11 which is the length of chain whose weight is equal in magnitude to the tension at c. Then formula_12 is an equation defining the curve. The horizontal component of the tension, "T" cos "φ" = "T"0 is constant and the vertical component of the tension, "T" sin "φ" = "ws" is proportional to the length of chain between r and the vertex. Derivation of equations for the curve. The differential equation formula_13, given above, can be solved to produce equations for the curve. We will solve the equation using the boundary condition that the vertex is positioned at formula_14 and formula_15. First, invoke the formula for arc length to get formula_16 then separate variables to obtain formula_17 A reasonably straightforward approach to integrate this is to use hyperbolic substitution, which gives formula_18 (where formula_19 is a constant of integration), and hence formula_20 But formula_21, so formula_22 which integrates as formula_23 (with formula_24 being the constant of integration satisfying the boundary condition). Since the primary interest here is simply the shape of the curve, the placement of the coordinate axes are arbitrary; so make the convenient choice of formula_25 to simplify the result to formula_26 For completeness, the formula_27 relation can be derived by solving each of the formula_28 and formula_29 relations for formula_30, giving: formula_31 so formula_32 which can be rewritten as formula_33 Alternative derivation. The differential equation can be solved using a different approach. From formula_34 it follows that formula_35 and formula_36 Integrating gives, formula_37 and formula_38 As before, the x and y-axes can be shifted so α and β can be taken to be 0. Then formula_39 and taking the reciprocal of both sides formula_40 Adding and subtracting the last two equations then gives the solution formula_41 and formula_42 Determining parameters. In general the parameter a is the position of the axis. 
The equation can be determined in this case as follows: Relabel if necessary so that "P"1 is to the left of "P"2 and let H be the horizontal and v be the vertical distance from "P"1 to "P"2. Translate the axes so that the vertex of the catenary lies on the y-axis and its height a is adjusted so the catenary satisfies the standard equation of the curve formula_43 and let the coordinates of "P"1 and "P"2 be ("x"1, "y"1) and ("x"2, "y"2) respectively. The curve passes through these points, so the difference of height is formula_44 and the length of the curve from "P"1 to "P"2 is formula_45 When "L"² − "v"² is expanded using these expressions the result is formula_46 so formula_47 This is a transcendental equation in a and must be solved numerically. Since formula_48 is strictly monotonic on formula_49, there is at most one solution with "a" &gt; 0 and so there is at most one position of equilibrium. However, if both ends of the curve ("P"1 and "P"2) are at the same level ("y"1 = "y"2), it can be shown that formula_50 where L is the total length of the curve between "P"1 and "P"2 and h is the sag (vertical distance between "P"1, "P"2 and the vertex of the curve). It can also be shown that formula_51 and formula_52 where H is the horizontal distance between "P"1 and "P"2 which are located at the same level ("H" = "x"2 − "x"1). The horizontal traction force at "P"1 and "P"2 is "T"0 = "wa", where w is the weight per unit length of the chain or cable. Tension relations. There is a simple relationship between the tension in the cable at a point and its x- and/or y- coordinate. Begin by combining the squares of the vector components of the tension: formula_53 which (recalling that formula_54) can be rewritten as formula_55 But, as shown above, formula_56 (assuming that formula_57), so we get the simple relations formula_58 Variational formulation. Consider a chain of length formula_59 suspended from two points of equal height and at distance formula_60. The curve has to minimize its potential energy formula_61 (where w is the weight per unit length) and is subject to the constraint formula_62 The modified Lagrangian is therefore formula_63 where formula_64 is the Lagrange multiplier to be determined. As the independent variable formula_65 does not appear in the Lagrangian, we can use the Beltrami identity formula_66 where formula_67 is an integration constant, in order to obtain a first integral formula_68 This is an ordinary first order differential equation that can be solved by the method of separation of variables. Its solution is the usual hyperbolic cosine where the parameters are obtained from the constraints. Generalizations with vertical force. Nonuniform chains. If the density of the chain is variable then the analysis above can be adapted to produce equations for the curve given the density, or given the curve to find the density. Let w denote the weight per unit length of the chain, then the weight of the chain has magnitude formula_69 where the limits of integration are c and r. Balancing forces as in the uniform chain produces formula_8 and formula_70 and therefore formula_71 Differentiation then gives formula_72 In terms of φ and the radius of curvature ρ this becomes formula_73 Suspension bridge curve. A similar analysis can be done to find the curve followed by the cable supporting a suspension bridge with a horizontal roadway.
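Returning to the "Determining parameters" discussion above: the transcendental equation in a is normally solved numerically. A minimal Python sketch for the equal-height case, using the relation formula_51 and hypothetical values of the span and cable length:

```python
# Given the horizontal span H and cable length L (> H) between endpoints at equal height,
# solve L = 2*a*sinh(H/(2*a)) for the parameter a by bisection (illustrative values).

import math

def solve_a(H: float, L: float) -> float:
    f = lambda a: 2.0 * a * math.sinh(H / (2.0 * a)) - L
    lo = hi = H                     # expand the bracket until the root is enclosed
    while f(lo) <= 0.0:             # need f(lo) > 0: shrink a (deeper sag)
        lo /= 2.0
    while f(hi) >= 0.0:             # need f(hi) < 0: grow a (tighter cable)
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

H, L = 10.0, 12.0                   # assumed span and cable length
a = solve_a(H, L)
h = a * (math.cosh(H / (2.0 * a)) - 1.0)          # sag
print(a, h, (L * L / 4.0 - h * h) / (2.0 * h))    # last value reproduces a
```

The last printed value reproduces a via the closed form formula_50, as a consistency check.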
If the weight of the roadway per unit length is w and the weight of the cable and the wire supporting the bridge is negligible in comparison, then the weight on the cable (see the figure in Catenary#Model of chains and arches) from c to r is wx where x is the horizontal distance between c and r. Proceeding as before gives the differential equation formula_74 This is solved by simple integration to get formula_75 and so the cable follows a parabola. If the weight of the cable and supporting wires is not negligible then the analysis is more complex. Catenary of equal strength. In a catenary of equal strength, the cable is strengthened according to the magnitude of the tension at each point, so its resistance to breaking is constant along its length. Assuming that the strength of the cable is proportional to its density per unit length, the weight, w, per unit length of the chain can be written , where c is constant, and the analysis for nonuniform chains can be applied. In this case the equations for tension are formula_76 Combining gives formula_77 and by differentiation formula_78 where ρ is the radius of curvature. The solution to this is formula_79 In this case, the curve has vertical asymptotes and this limits the span to π"c". Other relations are formula_80 The curve was studied 1826 by Davies Gilbert and, apparently independently, by Gaspard-Gustave Coriolis in 1836. Recently, it was shown that this type of catenary could act as a building block of electromagnetic metasurface and was known as "catenary of equal phase gradient". Elastic catenary. In an elastic catenary, the chain is replaced by a spring which can stretch in response to tension. The spring is assumed to stretch in accordance with Hooke's Law. Specifically, if p is the natural length of a section of spring, then the length of the spring with tension T applied has length formula_81 where E is a constant equal to kp, where k is the stiffness of the spring. In the catenary the value of T is variable, but ratio remains valid at a local level, so formula_82 The curve followed by an elastic spring can now be derived following a similar method as for the inelastic spring. The equations for tension of the spring are formula_83 and formula_84 from which formula_85 where p is the natural length of the segment from c to r and "w"0 is the weight per unit length of the spring with no tension. Write formula_86 so formula_87 Then formula_88 from which formula_89 Integrating gives the parametric equations formula_90 Again, the x and y-axes can be shifted so α and β can be taken to be 0. So formula_91 are parametric equations for the curve. At the rigid limit where E is large, the shape of the curve reduces to that of a non-elastic chain. Other generalizations. Chain under a general force. With no assumptions being made regarding the force G acting on the chain, the following analysis can be made. First, let T = T("s") be the force of tension as a function of s. The chain is flexible so it can only exert a force parallel to itself. Since tension is defined as the force that the chain exerts on itself, T must be parallel to the chain. In other words, formula_92 where T is the magnitude of T and u is the unit tangent vector. Second, let G = G("s") be the external force per unit length acting on a small segment of a chain as a function of s. 
The forces acting on the segment of the chain between s and "s" + Δ"s" are the force of tension T("s" + Δ"s") at one end of the segment, the nearly opposite force −T("s") at the other end, and the external force acting on the segment which is approximately GΔ"s". These forces must balance so formula_93 Divide by Δ"s" and take the limit as Δ"s" → 0 to obtain formula_94 These equations can be used as the starting point in the analysis of a flexible chain acting under any external force. In the case of the standard catenary, G = (0, −"w") where the chain has weight w per unit length. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y = a \\cosh \\left(\\frac{x}{a}\\right) = \\frac{a}{2}\\left(e^\\frac{x}{a} + e^{-\\frac{x}{a}}\\right)," }, { "math_id": 1, "text": "\\tan \\varphi = \\frac{s}{a}," }, { "math_id": 2, "text": "\\varphi" }, { "math_id": 3, "text": "\\frac{d\\varphi}{ds} = \\frac{\\cos^2\\varphi}{a}," }, { "math_id": 4, "text": "\\kappa=\\frac{a}{s^2+a^2}," }, { "math_id": 5, "text": "\\kappa" }, { "math_id": 6, "text": "\\rho = a \\sec^2 \\varphi," }, { "math_id": 7, "text": "\\frac{d\\mathbf{r}}{ds}=\\mathbf{u}" }, { "math_id": 8, "text": "T \\cos \\varphi = T_0" }, { "math_id": 9, "text": "T \\sin \\varphi = ws\\,," }, { "math_id": 10, "text": "\\frac{dy}{dx}=\\tan \\varphi = \\frac{ws}{T_0}\\,." }, { "math_id": 11, "text": "a = \\frac{T_0}{w}" }, { "math_id": 12, "text": "\\frac{dy}{dx}=\\frac{s}{a}" }, { "math_id": 13, "text": "dy/dx = s/a" }, { "math_id": 14, "text": "s_0=0" }, { "math_id": 15, "text": "(x,y)=(x_0,y_0)" }, { "math_id": 16, "text": "\\frac{ds}{dx}\n = \\sqrt{1+\\left(\\frac{dy}{dx}\\right)^2}\n = \\sqrt{1+\\left(\\frac{s}{a}\\right)^2}\\,," }, { "math_id": 17, "text": "\\frac{ds}{\\sqrt{1+(s/a)^2}}\n = dx\\,." }, { "math_id": 18, "text": "a \\sinh^{-1}\\frac{s}{a} + x_0 = x" }, { "math_id": 19, "text": "x_0" }, { "math_id": 20, "text": "\\frac{s}{a} = \\sinh\\frac{x-x_0}{a}\\,." }, { "math_id": 21, "text": "s/a = dy/dx" }, { "math_id": 22, "text": "\\frac{dy}{dx} = \\sinh\\frac{x-x_0}{a}\\,," }, { "math_id": 23, "text": "y = a \\cosh\\frac{x-x_0}{a} + \\delta" }, { "math_id": 24, "text": "\\delta=y_0-a" }, { "math_id": 25, "text": "x_0=0=\\delta" }, { "math_id": 26, "text": "y = a \\cosh\\frac{x}{a} \\quad\\square" }, { "math_id": 27, "text": "y \\leftrightarrow s" }, { "math_id": 28, "text": "x \\leftrightarrow y" }, { "math_id": 29, "text": "x \\leftrightarrow s" }, { "math_id": 30, "text": "x/a" }, { "math_id": 31, "text": "\\cosh^{-1}\\frac{y-\\delta}{a} = \\frac{x-x_0}{a} = \\sinh^{-1}\\frac{s}{a}\\,," }, { "math_id": 32, "text": "y-\\delta = a\\cosh\\left(\\sinh^{-1}\\frac{s}{a}\\right)\\,," }, { "math_id": 33, "text": "y-\\delta = a\\sqrt{1+\\left(\\frac{s}{a}\\right)^2} = \\sqrt{a^2 + s^2}\\,." }, { "math_id": 34, "text": "s = a \\tan \\varphi" }, { "math_id": 35, "text": "\\frac{dx}{d\\varphi} = \\frac{dx}{ds}\\frac{ds}{d\\varphi}=\\cos \\varphi \\cdot a \\sec^2 \\varphi= a \\sec \\varphi" }, { "math_id": 36, "text": "\\frac{dy}{d\\varphi} = \\frac{dy}{ds}\\frac{ds}{d\\varphi}=\\sin \\varphi \\cdot a \\sec^2 \\varphi= a \\tan \\varphi \\sec \\varphi\\,." }, { "math_id": 37, "text": "x = a \\ln(\\sec \\varphi + \\tan \\varphi) + \\alpha" }, { "math_id": 38, "text": "y = a \\sec \\varphi + \\beta\\,." }, { "math_id": 39, "text": "\\sec \\varphi + \\tan \\varphi = e^\\frac{x}{a}\\,," }, { "math_id": 40, "text": "\\sec \\varphi - \\tan \\varphi = e^{-\\frac{x}{a}}\\,." }, { "math_id": 41, "text": "y = a \\sec \\varphi = a \\cosh\\left(\\frac{x}{a}\\right)\\,," }, { "math_id": 42, "text": "s = a \\tan \\varphi = a \\sinh\\left(\\frac{x}{a}\\right)\\,." }, { "math_id": 43, "text": "y = a \\cosh\\left(\\frac{x}{a}\\right)" }, { "math_id": 44, "text": "v = a \\cosh\\left(\\frac{x_2}{a}\\right) - a \\cosh\\left(\\frac{x_1}{a}\\right)\\,." }, { "math_id": 45, "text": "L = a \\sinh\\left(\\frac{x_2}{a}\\right) - a \\sinh\\left(\\frac{x_1}{a}\\right)\\,." 
}, { "math_id": 46, "text": "L^2-v^2=2a^2\\left(\\cosh\\left(\\frac{x_2-x_1}{a}\\right)-1\\right)=4a^2\\sinh^2\\left(\\frac{H}{2a}\\right)\\,," }, { "math_id": 47, "text": "\\frac 1H \\sqrt{L^2-v^2}=\\frac{2a}H \\sinh\\left(\\frac{H}{2a}\\right)\\,." }, { "math_id": 48, "text": "\\sinh(x)/x" }, { "math_id": 49, "text": "x > 0" }, { "math_id": 50, "text": "a = \\frac {\\frac14 L^2-h^2} {2h}\\," }, { "math_id": 51, "text": "L = 2a \\sinh \\frac {H} {2a}\\," }, { "math_id": 52, "text": "H = 2a \\operatorname {arcosh} \\frac {h+a} {a}\\," }, { "math_id": 53, "text": "(T\\cos\\varphi)^2 + (T\\sin\\varphi)^2 = T_0^2 + (ws)^2" }, { "math_id": 54, "text": "T_0=wa" }, { "math_id": 55, "text": "\\begin{align}\nT^2(\\cos^2\\varphi + \\sin^2\\varphi) &= (wa)^2 + (ws)^2 \\\\[6pt]\nT^2 &= w^2 (a^2 + s^2) \\\\[6pt]\nT &= w\\sqrt{a^2+s^2} \\,.\n\\end{align}" }, { "math_id": 56, "text": "y = \\sqrt{a^2 + s^2}" }, { "math_id": 57, "text": "y_0=a" }, { "math_id": 58, "text": "T = wy = wa \\cosh\\frac{x}{a}\\,." }, { "math_id": 59, "text": "L" }, { "math_id": 60, "text": "D" }, { "math_id": 61, "text": " U = \\int_0^D w y\\sqrt{1+y'^2} dx " }, { "math_id": 62, "text": " \\int_0^D \\sqrt{1+y'^2} dx = L\\,." }, { "math_id": 63, "text": " \\mathcal{L} = (w y - \\lambda )\\sqrt{1+y'^2}" }, { "math_id": 64, "text": "\\lambda " }, { "math_id": 65, "text": "x" }, { "math_id": 66, "text": " \\mathcal{L}-y' \\frac{\\partial \\mathcal{L} }{\\partial y'} = C " }, { "math_id": 67, "text": "C" }, { "math_id": 68, "text": "\\frac{(w y - \\lambda )}{\\sqrt{1+y'^2}} = -C" }, { "math_id": 69, "text": "\\int_\\mathbf{c}^\\mathbf{r} w\\, ds\\,," }, { "math_id": 70, "text": "T \\sin \\varphi = \\int_\\mathbf{c}^\\mathbf{r} w\\, ds\\,," }, { "math_id": 71, "text": "\\frac{dy}{dx}=\\tan \\varphi = \\frac{1}{T_0} \\int_\\mathbf{c}^\\mathbf{r} w\\, ds\\,." }, { "math_id": 72, "text": "w=T_0 \\frac{d}{ds}\\frac{dy}{dx} = \\frac{T_0 \\dfrac{d^2y}{dx^2}}{\\sqrt{1+\\left(\\dfrac{dy}{dx}\\right)^2}}\\,." }, { "math_id": 73, "text": "w= \\frac{T_0}{\\rho \\cos^2 \\varphi}\\,." }, { "math_id": 74, "text": "\\frac{dy}{dx}=\\tan \\varphi = \\frac{w}{T_0}x\\,. " }, { "math_id": 75, "text": "y=\\frac{w}{2T_0}x^2 + \\beta" }, { "math_id": 76, "text": "\\begin{align}\nT \\cos \\varphi &= T_0\\,,\\\\\nT \\sin \\varphi &= \\frac{1}{c}\\int T\\, ds\\,.\n\\end{align}" }, { "math_id": 77, "text": "c \\tan \\varphi = \\int \\sec \\varphi\\, ds" }, { "math_id": 78, "text": "c = \\rho \\cos \\varphi" }, { "math_id": 79, "text": "y = c \\ln\\left(\\sec\\left(\\frac{x}{c}\\right)\\right)\\,." }, { "math_id": 80, "text": "x = c\\varphi\\,,\\quad s = \\ln\\left(\\tan\\left(\\frac{\\pi+2\\varphi}{4}\\right)\\right)\\,." }, { "math_id": 81, "text": "s=\\left(1+\\frac{T}{E}\\right)p\\,," }, { "math_id": 82, "text": "\\frac{ds}{dp}=1+\\frac{T}{E}\\,." }, { "math_id": 83, "text": "T \\cos \\varphi = T_0\\,," }, { "math_id": 84, "text": "T \\sin \\varphi = w_0 p\\,," }, { "math_id": 85, "text": "\\frac{dy}{dx}=\\tan \\varphi = \\frac{w_0 p}{T_0}\\,,\\quad T=\\sqrt{T_0^2+w_0^2 p^2}\\,," }, { "math_id": 86, "text": "a = \\frac{T_0}{w_0}" }, { "math_id": 87, "text": "\\frac{dy}{dx}=\\tan \\varphi = \\frac{p}{a} \\quad\\text{and}\\quad T=\\frac{T_0}{a}\\sqrt{a^2+p^2}\\,." 
}, { "math_id": 88, "text": "\\begin{align}\n\\frac{dx}{ds} &= \\cos \\varphi = \\frac{T_0}{T} \\\\[6pt]\n\\frac{dy}{ds} &= \\sin \\varphi = \\frac{w_0 p}{T}\\,,\n\\end{align}" }, { "math_id": 89, "text": "\\begin{alignat}{3}\n\\frac{dx}{dp} &= \\frac{T_0}{T}\\frac{ds}{dp} &&= T_0\\left(\\frac{1}{T}+\\frac{1}{E}\\right) &&= \\frac{a}{\\sqrt{a^2+p^2}}+\\frac{T_0}{E} \\\\[6pt]\n\\frac{dy}{dp} &= \\frac{w_0 p}{T}\\frac{ds}{dp} &&= \\frac{T_0p}{a}\\left(\\frac{1}{T}+\\frac{1}{E}\\right) &&= \\frac{p}{\\sqrt{a^2+p^2}}+\\frac{T_0p}{Ea}\\,.\n\\end{alignat}" }, { "math_id": 90, "text": "\\begin{align}\nx&=a\\operatorname{arsinh}\\left(\\frac{p}{a}\\right)+\\frac{T_0}{E}p + \\alpha\\,, \\\\[6pt]\ny&=\\sqrt{a^2+p^2}+\\frac{T_0}{2Ea}p^2+\\beta\\,.\n\\end{align}" }, { "math_id": 91, "text": "\\begin{align}\nx&=a\\operatorname{arsinh}\\left(\\frac{p}{a}\\right)+\\frac{T_0}{E}p\\,, \\\\[6pt]\ny&=\\sqrt{a^2+p^2}+\\frac{T_0}{2Ea}p^2\n\\end{align}" }, { "math_id": 92, "text": "\\mathbf{T} = T \\mathbf{u}\\,," }, { "math_id": 93, "text": "\\mathbf{T}(s+\\Delta s)-\\mathbf{T}(s)+\\mathbf{G}\\Delta s \\approx \\mathbf{0}\\,." }, { "math_id": 94, "text": "\\frac{d\\mathbf{T}}{ds} + \\mathbf{G} = \\mathbf{0}\\,." } ]
https://en.wikipedia.org/wiki?curid=7163
71630
Unicity distance
Length of ciphertext needed to unambiguously break a cipher In cryptography, unicity distance is the length of an original ciphertext needed to break the cipher by reducing the number of possible spurious keys to zero in a brute force attack. That is, after trying every possible key, there should be just one decipherment that makes sense, i.e. the expected amount of ciphertext needed to determine the key completely, assuming the underlying message has redundancy. Claude Shannon defined the unicity distance in his 1949 paper "Communication Theory of Secrecy Systems". Consider an attack on the ciphertext string "WNAIW" encrypted using a Vigenère cipher with a five letter key. Conceivably, this string could be deciphered into any other string—RIVER and WATER are both possibilities for certain keys. This is a general rule of cryptanalysis: with no additional information it is impossible to decode this message. Of course, even in this case, only a certain number of five letter keys will result in English words. Trying all possible keys we will not only get RIVER and WATER, but SXOOS and KHDOP as well. The number of "working" keys will likely be very much smaller than the set of all possible keys. The problem is knowing which of these "working" keys is the right one; the rest are spurious. Relation with key size and possible plaintexts. In general, given particular assumptions about the size of the key and the number of possible messages, there is an average ciphertext length where there is only one key (on average) that will generate a readable message. In the example above we see only upper case English characters, so if we assume that the plaintext has this form, then there are 26 possible letters for each position in the string. Likewise if we assume five-character upper case keys, there are K = 26^5 possible keys, of which the majority will not "work". A tremendous number of possible messages, N, can be generated using even this limited set of characters: N = 26^L, where L is the length of the message. However, only a smaller set of them is readable plaintext due to the rules of the language, perhaps M of them, where M is likely to be very much smaller than N. Moreover, M has a one-to-one relationship with the number of keys that work, so given K possible keys, only K × (M/N) of them will "work". One of these is the correct key, the rest are spurious. Since M/N gets arbitrarily small as the length L of the message increases, there is eventually some L that is large enough to make the number of spurious keys equal to zero. Roughly speaking, this is the L that makes KM/N=1. This L is the unicity distance. Relation with key entropy and plaintext redundancy. The unicity distance can equivalently be defined as the minimum amount of ciphertext required to permit a computationally unlimited adversary to recover the unique encryption key. The expected unicity distance can then be shown to be: formula_0 where "U" is the unicity distance, "H"("k") is the entropy of the key space (e.g. 128 for 2^128 equiprobable keys, rather less if the key is a memorized pass-phrase). "D" is defined as the plaintext redundancy in bits per character. Now an alphabet of 32 characters can carry 5 bits of information per character (as 32 = 2^5). In general the number of bits of information per character is log2(N), where "N" is the number of characters in the alphabet and log2 is the binary logarithm. So for English each character can convey log2(26) ≈ 4.7 bits of information.
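A short numerical sketch of the formula_0 calculation in Python (the redundancy value D ≈ 3.2 bits per character for English is taken from the discussion that follows):

```python
# Sketch of U = H(k) / D for two example ciphers; D uses the ~1.5 bits of real
# information per English character discussed below, so D = log2(26) - 1.5 ~ 3.2.

import math

D = math.log2(26) - 1.5                      # plaintext redundancy per character, ~3.2

def unicity_distance(key_entropy_bits: float) -> float:
    return key_entropy_bits / D

# Five-letter Vigenere key: H(k) = log2(26^5) ~ 23.5 bits -> U ~ 7 characters,
# which is why the 5-character ciphertext "WNAIW" above still has spurious keys.
print(unicity_distance(5 * math.log2(26)))   # ~7.3

# Simple substitution cipher: H(k) = log2(26!) ~ 88.4 bits -> U ~ 28 characters.
print(unicity_distance(math.log2(math.factorial(26))))   # ~27.6
```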
However the average amount of actual information carried per character in meaningful English text is only about 1.5 bits per character. So the plaintext redundancy is "D" = 4.7 − 1.5 = 3.2. Basically, the bigger the unicity distance, the better. For a one-time pad of unlimited size, given the unbounded entropy of the key space, we have formula_1, which is consistent with the one-time pad being unbreakable. Unicity distance of substitution cipher. For a simple substitution cipher, the number of possible keys is 26! ≈ 4.0329 × 10^26 ≈ 2^88.4, the number of ways in which the alphabet can be permuted. Assuming all keys are equally likely, "H"("k") = log2(26!) ≈ 88.4 bits. For English text "D" ≈ 3.2, thus "U" ≈ 88.4/3.2 ≈ 28. So given 28 characters of ciphertext it should be theoretically possible to work out an English plaintext and hence the key. Practical application. Unicity distance is a useful theoretical measure, but it does not say much about the security of a block cipher when attacked by an adversary with real-world (limited) resources. Consider a block cipher with a unicity distance of three ciphertext blocks. Although there is clearly enough information for a computationally unbounded adversary to find the right key (simple exhaustive search), this may be computationally infeasible in practice. The unicity distance can be increased by reducing the plaintext redundancy. One way to do this is to deploy data compression techniques prior to encryption, for example by removing redundant vowels while retaining readability. This is a good idea anyway, as it reduces the amount of data to be encrypted. Ciphertexts greater than the unicity distance can be assumed to have only one meaningful decryption. Ciphertexts shorter than the unicity distance may have multiple plausible decryptions. Unicity distance is not a measure of how much ciphertext is required for cryptanalysis, but how much ciphertext is required for there to be only one reasonable solution for cryptanalysis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
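The substitution-cipher figures above are easy to check numerically. A minimal sketch in Python (the 1.5 bits-per-character estimate for English is the value assumed above; the numbers are illustrative, not a formal derivation):

import math

# Unicity distance U = H(k) / D for a simple substitution cipher.
key_entropy = math.log2(math.factorial(26))   # H(k): 26! equiprobable keys, about 88.4 bits
bits_per_char = math.log2(26)                 # about 4.7 bits per character for a 26-letter alphabet
redundancy = bits_per_char - 1.5              # D: assumes about 1.5 bits of real information per character
print(round(key_entropy / redundancy))        # prints 28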
[ { "math_id": 0, "text": "U = H(k) / D" }, { "math_id": 1, "text": "U = \\infty" } ]
https://en.wikipedia.org/wiki?curid=71630
71630560
Corisk Index
The CoRisk Index is the first economic indicator of industry risk assessments related to COVID-19. In contrast to conventional economic climate indexes, e.g. the Ifo Business Climate Index or the Purchasing Managers' Index, the CoRisk Index relies on automatically retrieved company filings. The index was developed by a team of researchers at the Oxford Internet Institute, University of Oxford, and the Hertie School of Governance in March 2020. It gained international media attention as an up-to-date empirical source for policy makers and researchers investigating the economic repercussions of the Coronavirus Recession. Methodology. The index is calculated from company 10-K risk reports filed with the U.S. Securities and Exchange Commission (SEC). The CoRisk Index is calculated for each industry as a geometric mean of three measures: formula_0, where "k" refers to the average industry count of Corona-related keywords used in each report and "n" represents the average industry share of negative keywords in Corona-related sentences.
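A purely illustrative sketch of the stated formula in Python (the values of "k" and "n" below are invented for demonstration, not taken from actual SEC filings):

import math

# CoRisk value for one industry, following the formula stated above.
k = 4.0    # assumed average count of Corona-related keywords per report
n = 0.25   # assumed average share of negative keywords in Corona-related sentences
print(round(math.sqrt(k + n), 2))  # 2.06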
[ { "math_id": 0, "text": "CoRisk Index = \\sqrt[2]{k+n}" } ]
https://en.wikipedia.org/wiki?curid=71630560
71637184
Job 10
Bible chapter of the Old Testament Job 10 is the tenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 10 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 10 has a form of a lament to follow Job's contemplation to get a legal settlement in the previous chapter. The first part (verses 1–7) seems to contain a rehearsing of words to be used for a confrontation with a legal adversary in Job's imaginary litigation, but in general, especially in the second part (verses 8–22), it is primarily a complaint addressed to God. Three sharp questions (10:1–7). The opening of this section (verse 1) is similar to the transitional to the complaint in chapter 7 (Job 7:11), but verse 2 is formed as a request from a defendant that a plaintiff makes known the charge against the defendant. Job then probes God's motive by directly asking 'three sharp rhetorical questions' (verses 3–5). Job is convinced that God knows Job is not guilty, that is, a "conviction born of his faith", so whereas he contemplated to look for an 'umpire' or arbiter to settle his case (Job 9:32–34), he is now longing for a 'deliverer' (verse 7b). [Job said:] "Is it good for You that You should oppress," "that You should despise the work of Your hands" "and smile on the counsel of the wicked?" Words of despair (10:8–22). Two thoughts about the accusation in verse 3a are stated in verse 8 which will be unpacked in the next parts within the section: The conclusion of Job's second speech recalls his opening outcry (verses 18–19; cf. Job 3:11–26) and his previous plea (verses 20–22; cf. Job 7:19). There are two significant changes to the earlier statements in Job 3:11, 16: However, acting out of faith, Job does not aim primarily to get relief from his suffering, but to have his relationship with God restored. [Job said:] "Your hands have shaped me and made me completely," "yet You destroy me." [Job said:] "Remember, I pray, that You have made me like clay." "And will You turn me into dust again?" [Job said:] 18"Why then did You bring me forth out of the womb?" "Oh, that I had died, and no eye had seen me!" Verse 18. The two imperfect verbs in this verse stress 'regrets for something which did not happen'. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71637184
7164
Color temperature
Property of light sources related to black-body radiation Color temperature is a parameter describing the color of a visible light source by comparing it to the color of light emitted by an idealized opaque, non-reflective body. The temperature of the ideal emitter that matches the color most closely is defined as the color temperature of the original visible light source. The color temperature scale describes only the color of light emitted by a light source, which may actually be at a different (and often much lower) temperature. Color temperature has applications in lighting, photography, videography, publishing, manufacturing, astrophysics, and other fields. In practice, color temperature is most meaningful for light sources that correspond somewhat closely to the color of some black body, i.e., light in a range going from red to orange to yellow to white to bluish white. Although the concept of correlated color temperature extends the definition to any visible light, the color temperature of a green or a purple light rarely is useful information. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit for absolute temperature. Color temperatures over 5000 K are called "cool colors" (bluish), while lower color temperatures (2700–3000 K) are called "warm colors" (yellowish). "Warm" in this context is with respect to a traditional categorization of colors, not a reference to black body temperature. The hue-heat hypothesis states that low color temperatures will feel warmer while higher color temperatures will feel cooler. The spectral peak of warm-colored light is closer to infrared, and most natural warm-colored light sources emit significant infrared radiation. The fact that "warm" lighting in this sense actually has a "cooler" color temperature often leads to confusion. Categorizing different lighting. &lt;templatestyles src="Stack/styles.css"/&gt; The color temperature of the electromagnetic radiation emitted from an ideal black body is defined as its surface temperature in kelvins, or alternatively in micro reciprocal degrees (mired). This permits the definition of a standard by which light sources are compared. To the extent that a hot surface emits thermal radiation but is not an ideal black-body radiator, the color temperature of the light is not the actual temperature of the surface. An incandescent lamp's light is thermal radiation, and the bulb approximates an ideal black-body radiator, so its color temperature is essentially the temperature of the filament. Thus a relatively low temperature emits a dull red and a high temperature emits the almost white of the traditional incandescent light bulb. Metal workers are able to judge the temperature of hot metals by their color, from dark red to orange-white and then white (see red heat). Many other light sources, such as fluorescent lamps, or light emitting diodes (LEDs) emit light primarily by processes other than thermal radiation. This means that the emitted radiation does not follow the form of a black-body spectrum. These sources are assigned what is known as a correlated color temperature (CCT). CCT is the color temperature of a black-body radiator which to human color perception most closely matches the light from the lamp. Because such an approximation is not required for incandescent light, the CCT for an incandescent light is simply its unadjusted temperature, derived from comparison to a black-body radiator. The Sun. The Sun closely approximates a black-body radiator. 
The effective temperature, defined by the total radiative power per square unit, is 5772 K. The color temperature of sunlight above the atmosphere is about 5900 K. The Sun may appear red, orange, yellow, or white from Earth, depending on its position in the sky. The changing color of the Sun over the course of the day is mainly a result of the scattering of sunlight and is not due to changes in black-body radiation. Rayleigh scattering of sunlight by Earth's atmosphere causes the blue color of the sky, which tends to scatter blue light more than red light. Some daylight in the early morning and late afternoon (the golden hours) has a lower ("warmer") color temperature due to increased scattering of shorter-wavelength sunlight by atmospheric particulates – an optical phenomenon called the Tyndall effect. Daylight has a spectrum similar to that of a black body with a correlated color temperature of 6500 K (D65 viewing standard) or 5500 K (daylight-balanced photographic film standard). For colors based on black-body theory, blue occurs at higher temperatures, whereas red occurs at lower temperatures. This is the opposite of the cultural associations attributed to colors, in which "red" is "hot", and "blue" is "cold". Applications. Lighting. For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices. CCT dimming for LED technology is regarded as a difficult task, since binning, age and temperature drift effects of LEDs change the actual color value output. Here feedback loop systems are used, for example with color sensors, to actively monitor and control the color output of multiple color mixing LEDs. Aquaculture. In fishkeeping, color temperature has different functions and foci in the various branches. Digital photography. In digital photography, the term color temperature sometimes refers to remapping of color values to simulate variations in ambient color temperature. Most digital cameras and raw image software provide presets simulating specific ambient values (e.g., sunny, cloudy, tungsten, etc.) while others allow explicit entry of white balance values in kelvins. These settings vary color values along the blue–yellow axis, while some software includes additional controls (sometimes labeled "tint") adding the magenta–green axis, and are to some extent arbitrary and a matter of artistic interpretation. Photographic film. Photographic emulsion film does not respond to lighting color identically to the human retina or visual perception. An object that appears to the observer to be white may turn out to be very blue or orange in a photograph. The color balance may need to be corrected during printing to achieve a neutral color print. The extent of this correction is limited since color film normally has three layers sensitive to different colors and when used under the "wrong" light source, every layer may not respond proportionally, giving odd color casts in the shadows, although the mid-tones may have been correctly white-balanced under the enlarger. Light sources with discontinuous spectra, such as fluorescent tubes, cannot be fully corrected in printing either, since one of the layers may barely have recorded an image at all. 
Photographic film is made for specific light sources (most commonly daylight film and tungsten film), and, used properly, will create a neutral color print. Matching the sensitivity of the film to the color temperature of the light source is one way to balance color. If tungsten film is used indoors with incandescent lamps, the yellowish-orange light of the tungsten incandescent lamps will appear as white (3200 K) in the photograph. Color negative film is almost always daylight-balanced, since it is assumed that color can be adjusted in printing (with limitations, see above). Color transparency film, being the final artefact in the process, has to be matched to the light source or filters must be used to correct color. Filters on a camera lens, or color gels over the light source(s) may be used to correct color balance. When shooting with a bluish light (high color temperature) source such as on an overcast day, in the shade, in window light, or if using tungsten film with white or blue light, a yellowish-orange filter will correct this. For shooting with daylight film (calibrated to 5600 K) under warmer (low color temperature) light sources such as sunsets, candlelight or tungsten lighting, a bluish (e.g. #80A) filter may be used. More-subtle filters are needed to correct for the difference between, say 3200 K and 3400 K tungsten lamps or to correct for the slightly blue cast of some flash tubes, which may be 6000 K. If there is more than one light source with varied color temperatures, one way to balance the color is to use daylight film and place color-correcting gel filters over each light source. Photographers sometimes use color temperature meters. These are usually designed to read only two regions along the visible spectrum (red and blue); more expensive ones read three regions (red, green, and blue). However, they are ineffective with sources such as fluorescent or discharge lamps, whose light varies in color and may be harder to correct for. Because this light is often greenish, a magenta filter may correct it. More sophisticated colorimetry tools can be used if such meters are lacking. Desktop publishing. In the desktop publishing industry, it is important to know a monitor's color temperature. Color matching software, such as Apple's ColorSync Utility for MacOS, measures a monitor's color temperature and then adjusts its settings accordingly. This enables on-screen color to more closely match printed color. Common monitor color temperatures, along with matching standard illuminants in parentheses, are as follows: D50 is scientific shorthand for a standard illuminant: the daylight spectrum at a correlated color temperature of 5000 K. Similar definitions exist for D55, D65 and D75. Designations such as "D50" are used to help classify color temperatures of light tables and viewing booths. When viewing a color slide at a light table, it is important that the light be balanced properly so that the colors are not shifted towards the red or blue. Digital cameras, web graphics, DVDs, etc., are normally designed for a 6500 K color temperature. The sRGB standard commonly used for images on the Internet stipulates a 6500 K display white point. TV, video, and digital still cameras. The NTSC and PAL TV norms call for a compliant TV screen to display an electrically black and white signal (minimal color saturation) at a color temperature of 6500 K. On many consumer-grade televisions, there is a very noticeable deviation from this requirement. 
However, higher-end consumer-grade televisions can have their color temperatures adjusted to 6500 K by using a preprogrammed setting or a custom calibration. Current versions of ATSC explicitly call for the color temperature data to be included in the data stream, but old versions of ATSC allowed this data to be omitted. In this case, current versions of ATSC cite default colorimetry standards depending on the format. Both of the cited standards specify a 6500 K color temperature. Most video and digital still cameras can adjust for color temperature by zooming into a white or neutral colored object and setting the manual "white balance" (telling the camera that "this object is white"); the camera then shows true white as white and adjusts all the other colors accordingly. White-balancing is necessary especially when indoors under fluorescent lighting and when moving the camera from one lighting situation to another. Most cameras also have an automatic white balance function that attempts to determine the color of the light and correct accordingly. While these settings were once unreliable, they are much improved in today's digital cameras and produce an accurate white balance in a wide variety of lighting situations. Artistic application via control of color temperature. Video camera operators can white-balance objects that are not white, downplaying the color of the object used for white-balancing. For instance, they can bring more warmth into a picture by white-balancing off something that is light blue, such as faded blue denim; in this way white-balancing can replace a filter or lighting gel when those are not available. Cinematographers do not "white balance" in the same way as video camera operators; they use techniques such as filters, choice of film stock, pre-flashing, and, after shooting, color grading, both by exposure at the labs and also digitally. Cinematographers also work closely with set designers and lighting crews to achieve the desired color effects. For artists, most pigments and papers have a cool or warm cast, as the human eye can detect even a minute amount of saturation. Gray mixed with yellow, orange, or red is a "warm gray". Green, blue, or purple create "cool grays". This sense of temperature is the reverse of that of real temperature; bluer is described as "cooler" even though it corresponds to a higher-temperature black body. Lighting designers sometimes select filters by color temperature, commonly to match light that is theoretically white. Since fixtures using discharge type lamps produce a light of a considerably higher color temperature than do tungsten lamps, using the two in conjunction could potentially produce a stark contrast, so sometimes fixtures with HID lamps, commonly producing light of 6000–7000 K, are fitted with 3200 K filters to emulate tungsten light. Fixtures with color mixing features or with multiple colors (if including 3200 K), are also capable of producing tungsten-like light. Color temperature may also be a factor when selecting lamps, since each is likely to have a different color temperature. Color rendering index. The CIE color rendering index (CRI) is a method to determine how well a light source's illumination of eight sample patches compares to the illumination provided by a reference source. Cited together, the CRI and CCT give a numerical estimate of what reference (ideal) light source best approximates a particular artificial light, and what the difference is. Spectral power distribution. 
Light sources and illuminants may be characterized by their spectral power distribution (SPD). The relative SPD curves provided by many manufacturers may have been produced using 10 nm increments or more on their spectroradiometer. The result is what would seem to be a smoother ("fuller spectrum") power distribution than the lamp actually has. Owing to their spiky distribution, much finer increments are advisable for taking measurements of fluorescent lights, and this requires more expensive equipment. Color temperature in astronomy. In astronomy, the color temperature is defined by the local slope of the SPD at a given wavelength, or, in practice, a wavelength range. Given, for example, the color magnitudes "B" and "V" which are calibrated to be equal for an A0V star (e.g. Vega), the stellar color temperature formula_0 is given by the temperature for which the color index formula_1 of a black-body radiator fits the stellar one. Besides the formula_1, other color indices can be used as well. The color temperature (as well as the correlated color temperature defined above) may differ substantially from the effective temperature given by the radiative flux of the stellar surface. For example, the color temperature of an A0V star is about 15000 K compared to an effective temperature of about 9500 K. For most applications in astronomy (e.g., to place a star on the HR diagram or to determine the temperature of a model flux fitting an observed spectrum) the effective temperature is the quantity of interest. Various color-effective temperature relations exist in the literature. These relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
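As a numerical illustration of correlated color temperature, the sketch below (Python) uses McCamy's approximation, which estimates CCT from CIE 1931 chromaticity coordinates; the formula is not given in this article and is quoted here as an assumption, valid only for chromaticities near the black-body locus:

def mccamy_cct(x, y):
    # McCamy's approximation: n is the slope relative to the epicenter (0.3320, 0.1858).
    n = (x - 0.3320) / (y - 0.1858)
    return -449 * n**3 + 3525 * n**2 - 6823.3 * n + 5520.33

print(round(mccamy_cct(0.3127, 0.3290)))  # D65 white point, about 6505 K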
[ { "math_id": 0, "text": "T_C" }, { "math_id": 1, "text": "B-V" } ]
https://en.wikipedia.org/wiki?curid=7164
716401
Cross-polytope
Regular polytope dual to the hypercube in any number of dimensions In geometry, a cross-polytope, hyperoctahedron, orthoplex, or cocube is a regular, convex polytope that exists in "n"-dimensional Euclidean space. A 2-dimensional cross-polytope is a square, a 3-dimensional cross-polytope is a regular octahedron, and a 4-dimensional cross-polytope is a 16-cell. Its facets are simplexes of the previous dimension, while the cross-polytope's vertex figure is another cross-polytope from the previous dimension. The vertices of a cross-polytope can be chosen as the unit vectors pointing along each co-ordinate axis – i.e. all the permutations of (±1, 0, 0, ..., 0). The cross-polytope is the convex hull of its vertices. The "n"-dimensional cross-polytope can also be defined as the closed unit ball (or, according to some authors, its boundary) in the ℓ1-norm on R"n": formula_0 In 1 dimension the cross-polytope is simply the line segment [−1, +1], in 2 dimensions it is a square (or diamond) with vertices {(±1, 0), (0, ±1)}. In 3 dimensions it is an octahedron—one of the five convex regular polyhedra known as the Platonic solids. This can be generalised to higher dimensions with an "n"-orthoplex being constructed as a bipyramid with an ("n"−1)-orthoplex base. The cross-polytope is the dual polytope of the hypercube. The 1-skeleton of an "n"-dimensional cross-polytope is the Turán graph "T"(2"n", "n") (also known as a "cocktail party graph"). 4 dimensions. The 4-dimensional cross-polytope also goes by the name hexadecachoron or 16-cell. It is one of the six convex regular 4-polytopes. These 4-polytopes were first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century. Higher dimensions. The cross-polytope family is one of three regular polytope families, labeled by Coxeter as "βn", the other two being the hypercube family, labeled as "γn", and the simplex family, labeled as "αn". A fourth family, the infinite tessellations of hypercubes, he labeled as "δn". The "n"-dimensional cross-polytope has 2"n" vertices and 2^"n" facets (("n" − 1)-dimensional components), all of which are ("n" − 1)-simplices. The vertex figures are all ("n" − 1)-cross-polytopes. The Schläfli symbol of the cross-polytope is {3,3...,3,4}. The dihedral angle of the "n"-dimensional cross-polytope is formula_1. This gives: δ2 = arccos(0/2) = 90°, δ3 = arccos(−1/3) = 109.47°, δ4 = arccos(−2/4) = 120°, δ5 = arccos(−3/5) = 126.87°, ... δ∞ = arccos(−1) = 180°. The hypervolume of the "n"-dimensional cross-polytope is formula_2 For each pair of non-opposite vertices, there is an edge joining them. More generally, each set of "k" + 1 orthogonal vertices corresponds to a distinct "k"-dimensional component which contains them. The number of "k"-dimensional components (vertices, edges, faces, ..., facets) in an "n"-dimensional cross-polytope is thus given by (see binomial coefficient): formula_3 The extended f-vector for an "n"-orthoplex can be computed as (1,2)^"n", like the coefficients of polynomial products. For example a 16-cell is (1,2)^4 = (1,4,4)^2 = (1,8,24,32,16). There are many possible orthographic projections that can show the cross-polytopes as 2-dimensional graphs. Petrie polygon projections map the points into a regular 2"n"-gon or lower order regular polygons. A second projection takes the 2("n"−1)-gon Petrie polygon of the lower dimension, seen as a bipyramid, projected down the axis, with 2 vertices mapped into the center. 
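The face-count formula and the (1,2)^"n" expansion just described can be verified directly; a short sketch in Python:

from math import comb

def orthoplex_f_vector(n):
    # Number of k-dimensional faces: 2^(k+1) * C(n, k+1), for k = 0 .. n-1.
    return [2 ** (k + 1) * comb(n, k + 1) for k in range(n)]

def extended_f_vector(n):
    # Coefficients of (1 + 2x)^n, i.e. the (1,2)^n expansion described above.
    return [2 ** j * comb(n, j) for j in range(n + 1)]

print(orthoplex_f_vector(4))  # [8, 24, 32, 16]: vertices, edges, faces, cells of the 16-cell
print(extended_f_vector(4))   # [1, 8, 24, 32, 16]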
The vertices of an axis-aligned cross polytope are all at equal distance from each other in the Manhattan distance (L1 norm). Kusner's conjecture states that this set of 2"d" points is the largest possible equidistant set for this distance. Generalized orthoplex. Regular complex polytopes can be defined in complex Hilbert space, called "generalized orthoplexes" (or cross polytopes), β_n^p = 2{3}2{3}...2{4}"p", or ... Real solutions exist with "p" = 2, i.e. β_n^2 = β"n" = 2{3}2{3}...2{4}2 = {3,3..,4}. For "p" &gt; 2, they exist in formula_4. A "p"-generalized "n"-orthoplex has "pn" vertices. "Generalized orthoplexes" have regular simplexes (real) as facets. Generalized orthoplexes make complete multipartite graphs: β_2^p makes K"p","p", a complete bipartite graph; β_3^p makes K"p","p","p", a complete tripartite graph; and β_n^p creates the complete "n"-partite graph K"p","p",...,"p". An orthogonal projection can be defined that maps all the vertices equally-spaced on a circle, with all pairs of vertices connected, except multiples of "n". The regular polygon perimeter in these orthogonal projections is called a Petrie polygon. Related polytope families. Cross-polytopes can be combined with their dual cubes to form compound polytopes: Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{x\\in\\mathbb R^n : \\|x\\|_1 \\le 1\\}." }, { "math_id": 1, "text": "\\delta_n = \\arccos\\left(\\frac{2-n}{n}\\right)" }, { "math_id": 2, "text": "\\frac{2^n}{n!}." }, { "math_id": 3, "text": "2^{k+1}{n \\choose {k+1}}" }, { "math_id": 4, "text": "\\mathbb{\\Complex}^n" } ]
https://en.wikipedia.org/wiki?curid=716401
716422
16-cell
Four-dimensional analog of the octahedron In geometry, the 16-cell is the regular convex 4-polytope (four-dimensional analogue of a Platonic solid) with Schläfli symbol {3,3,4}. It is one of the six regular convex 4-polytopes first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century. It is also called C16, hexadecachoron, or hexdecahedroid ["sic?"]. It is the 4-dimensional member of an infinite family of polytopes called cross-polytopes, "orthoplexes", or "hyperoctahedrons" which are analogous to the octahedron in three dimensions. It is Coxeter's formula_0 polytope. The dual polytope is the tesseract (4-cube), which it can be combined with to form a compound figure. The cells of the 16-cell are dual to the 16 vertices of the tesseract. Geometry. The 16-cell is the second in the sequence of 6 convex regular 4-polytopes (in order of size and complexity). Each of its 4 successor convex regular 4-polytopes can be constructed as the convex hull of a polytope compound of multiple 16-cells: the 16-vertex tesseract as a compound of two 16-cells, the 24-vertex 24-cell as a compound of three 16-cells, the 120-vertex 600-cell as a compound of fifteen 16-cells, and the 600-vertex 120-cell as a compound of seventy-five 16-cells. Coordinates. The 16-cell is the 4-dimensional cross polytope (4-orthoplex), which means its vertices lie in opposite pairs on the 4 axes of a (w, x, y, z) Cartesian coordinate system. The eight vertices are (±1, 0, 0, 0), (0, ±1, 0, 0), (0, 0, ±1, 0), (0, 0, 0, ±1). All vertices are connected by edges except opposite pairs. The edge length is √2. The vertex coordinates form 6 orthogonal central squares lying in the 6 coordinate planes. Squares in "opposite" planes that do not share an axis (e.g. in the "xy" and "wz" planes) are completely disjoint (they do not intersect at any vertices). The 16-cell constitutes an orthonormal "basis" for the choice of a 4-dimensional reference frame, because its vertices exactly define the four orthogonal axes. Structure. The Schläfli symbol of the 16-cell is {3,3,4}, indicating that its cells are regular tetrahedra {3,3} and its vertex figure is a regular octahedron {3,4}. There are 8 tetrahedra, 12 triangles, and 6 edges meeting at every vertex. Its edge figure is a square. There are 4 tetrahedra and 4 triangles meeting at every edge. The 16-cell is bounded by 16 cells, all of which are regular tetrahedra. It has 32 triangular faces, 24 edges, and 8 vertices. The 24 edges bound 6 orthogonal central squares lying on great circles in the 6 coordinate planes (3 pairs of completely orthogonal great squares). At each vertex, 3 great squares cross perpendicularly. The 6 edges meet at the vertex the way 6 edges meet at the apex of a canonical octahedral pyramid. The 6 orthogonal central planes of the 16-cell can be divided into 4 orthogonal central hyperplanes (3-spaces) each forming an octahedron with 3 orthogonal great squares. Rotations. Rotations in 4-dimensional Euclidean space can be seen as the composition of two 2-dimensional rotations in completely orthogonal planes. The 16-cell is a simple frame in which to observe 4-dimensional rotations, because each of the 16-cell's 6 great squares has another completely orthogonal great square (there are 3 pairs of completely orthogonal squares). 
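The coordinates and counts given above are easy to check numerically; the following sketch (Python with NumPy, assumed available) builds the 8 vertices, joins every non-antipodal pair by an edge, and confirms the √2 edge length:

import itertools
import numpy as np

# The 8 vertices are the positive and negative unit vectors along the 4 axes.
vertices = [sign * np.eye(4)[axis] for axis in range(4) for sign in (1, -1)]
# All vertex pairs are joined by an edge except opposite (antipodal) pairs.
edges = [(i, j) for i, j in itertools.combinations(range(8), 2)
         if not np.allclose(vertices[i], -vertices[j])]
lengths = {round(float(np.linalg.norm(vertices[i] - vertices[j])), 3) for i, j in edges}
print(len(vertices), len(edges), lengths)  # 8 24 {1.414}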
Many rotations of the 16-cell can be characterized by the angle of rotation in one of its great square planes (e.g. the "xy" plane) and another angle of rotation in the completely orthogonal great square plane (the "wz" plane). Completely orthogonal great squares have disjoint vertices: 4 of the 16-cell's 8 vertices rotate in one plane, and the other 4 rotate independently in the completely orthogonal plane. In 2 or 3 dimensions a rotation is characterized by a single plane of rotation; this kind of rotation taking place in 4-space is called a simple rotation, in which only one of the two completely orthogonal planes rotates (the angle of rotation in the other plane is 0). In the 16-cell, a simple rotation in one of the 6 orthogonal planes moves only 4 of the 8 vertices; the other 4 remain fixed. (In the simple rotation animation above, all 8 vertices move because the plane of rotation is not one of the 6 orthogonal basis planes.) In a double rotation both sets of 4 vertices move, but independently: the angles of rotation may be different in the 2 completely orthogonal planes. If the two angles happen to be the same, a maximally symmetric isoclinic rotation takes place. In the 16-cell an isoclinic rotation by 90 degrees of any pair of completely orthogonal square planes takes every square plane to its completely orthogonal square plane. Constructions. Octahedral dipyramid. The simplest construction of the 16-cell is on the 3-dimensional cross polytope, the octahedron. The octahedron has 3 perpendicular axes and 6 vertices in 3 opposite pairs (its Petrie polygon is the hexagon). Add another pair of vertices, on a fourth axis perpendicular to all 3 of the other axes. Connect each new vertex to all 6 of the original vertices, adding 12 new edges. This raises two octahedral pyramids on a shared octahedron base that lies in the 16-cell's central hyperplane. The octahedron that the construction starts with has three perpendicular intersecting squares (which appear as rectangles in the hexagonal projections). Each square intersects with each of the other squares at two opposite vertices, with "two" of the squares crossing at each vertex. Then two more points are added in the fourth dimension (above and below the 3-dimensional hyperplane). These new vertices are connected to all the octahedron's vertices, creating 12 new edges and "three more squares" (which appear edge-on as the 3 "diameters" of the hexagon in the projection), and three more octahedra. Something unprecedented has also been created. Notice that each square no longer intersects with "all" of the other squares: it does intersect with four of them (with "three" of the squares crossing at each vertex now), but each square has "one" other square with which it shares "no" vertices: it is not directly connected to that square at all. These two "separate" perpendicular squares (there are three pairs of them) are like the opposite edges of a tetrahedron: perpendicular, but non-intersecting. They lie opposite each other (parallel in some sense), and they don't touch, but they also pass through each other like two perpendicular links in a chain (but unlike links in a chain they have a common center). They are an example of Clifford parallel planes, and the 16-cell is the simplest regular polytope in which they occur. 
Clifford parallelism of objects of more than one dimension (more than just curved "lines") emerges here and occurs in all the subsequent 4-dimensional regular polytopes, where it can be seen as the defining relationship "among" disjoint concentric regular 4-polytopes and their corresponding parts. It can occur between congruent (similar) polytopes of 2 or more dimensions. For example, as noted above all the subsequent convex regular 4-polytopes are compounds of multiple 16-cells; those 16-cells are Clifford parallel polytopes. Tetrahedral constructions. The 16-cell has two Wythoff constructions from regular tetrahedra, a regular form and alternated form, shown here as nets, the second represented by tetrahedral cells of two alternating colors. The alternated form is a lower symmetry construction of the 16-cell called the demitesseract. Wythoff's construction replicates the 16-cell's characteristic 5-cell in a kaleidoscope of mirrors. Every regular 4-polytope has its characteristic 4-orthoscheme, an irregular 5-cell. There are three regular 4-polytopes with tetrahedral cells: the 5-cell, the 16-cell, and the 600-cell. Although all are bounded by "regular" tetrahedron cells, their characteristic 5-cells (4-orthoschemes) are different tetrahedral pyramids, all based on the same characteristic "irregular" tetrahedron. They share the same characteristic tetrahedron (3-orthoscheme) and characteristic right triangle (2-orthoscheme) because they have the same kind of cell. The characteristic 5-cell of the regular 16-cell is represented by the Coxeter-Dynkin diagram , which can be read as a list of the dihedral angles between its mirror facets. It is an irregular tetrahedral pyramid based on the characteristic tetrahedron of the regular tetrahedron. The regular 16-cell is subdivided by its symmetry hyperplanes into 384 instances of its characteristic 5-cell that all meet at its center. The characteristic 5-cell (4-orthoscheme) has four more edges than its base characteristic tetrahedron (3-orthoscheme), joining the four vertices of the base to its apex (the fifth vertex of the 4-orthoscheme, at the center of the regular 16-cell). If the regular 16-cell has unit radius edge and edge length 𝒍 = , its characteristic 5-cell's ten edges have lengths , , around its exterior right-triangle face (the edges opposite the "characteristic angles" 𝟀, 𝝉, 𝟁), plus , , (the other three edges of the exterior 3-orthoscheme facet the characteristic tetrahedron, which are the "characteristic radii" of the regular tetrahedron), plus , , , (edges which are the characteristic radii of the regular 16-cell). The 4-edge path along orthogonal edges of the orthoscheme is , , , , first from a 16-cell vertex to a 16-cell edge center, then turning 90° to a 16-cell face center, then turning 90° to a 16-cell tetrahedral cell center, then turning 90° to the 16-cell center. Helical construction. A 16-cell can be constructed (three different ways) from two Boerdijk–Coxeter helixes of eight chained tetrahedra, each bent in the fourth dimension into a ring. The two circular helixes spiral around each other, nest into each other and pass through each other forming a Hopf link. The 16 triangle faces can be seen in a 2D net within a triangular tiling, with 6 triangles around every vertex. The purple edges represent the Petrie polygon of the 16-cell. The eight-cell ring of tetrahedra contains three octagrams of different colors, eight-edge circular paths that wind twice around the 16-cell on every third vertex of the octagram. 
The orange and yellow edges are two four-edge halves of one octagram, which join their ends to form a Möbius strip. Thus the 16-cell can be decomposed into two cell-disjoint circular chains of eight tetrahedrons each, four edges long, one spiraling to the right (clockwise) and the other spiraling to the left (counterclockwise). The left-handed and right-handed cell rings fit together, nesting into each other and entirely filling the 16-cell, even though they are of opposite chirality. This decomposition can be seen in a 4-4 duoantiprism construction of the 16-cell: or , Schläfli symbol {2}⨂{2} or s{2}s{2}, symmetry [4,2+,4], order 64. Three eight-edge paths (of different colors) spiral along each eight-cell ring, making 90° angles at each vertex. (In the Boerdijk–Coxeter helix before it is bent into a ring, the angles in different paths vary, but are not 90°.) Three paths (with three different colors and apparent angles) pass through each vertex. When the helix is bent into a ring, the segments of each eight-edge path (of various lengths) join their ends, forming a Möbius strip eight edges long along its single-sided circumference of 4𝝅, and one edge wide. The six four-edge halves of the three eight-edge paths each make four 90° angles, but they are "not" the six orthogonal great squares: they are open-ended squares, four-edge 360° helices whose open ends are antipodal vertices. The four edges come from four different great squares, and are mutually orthogonal. Combined end-to-end in pairs of the same chirality, the six four-edge paths make three eight-edge Möbius loops, helical octagrams. Each octagram is both a Petrie polygon of the 16-cell, and the helical track along which all eight vertices rotate together, in one of the 16-cell's distinct isoclinic rotations. Each eight-edge helix is a skew octagram{8/3} that winds three times around the 16-cell and visits every vertex before closing into a loop. Its eight √2 edges are chords of an "isocline", a helical arc on which the 8 vertices circle during an isoclinic rotation. All eight 16-cell vertices are √2 apart except for opposite (antipodal) vertices, which are √4 apart. A vertex moving on the isocline visits three other vertices that are √2 apart before reaching the fourth vertex that is √4 away. The eight-cell ring is chiral: there is a right-handed form which spirals clockwise, and a left-handed form which spirals counterclockwise. The 16-cell contains one of each, so it also contains a left and a right isocline; the isocline is the circular axis around which the eight-cell ring twists. Each isocline visits all eight vertices of the 16-cell. Each eight-cell ring contains half of the 16 cells, but all 8 vertices; the two rings share the vertices, as they nest into each other and fit together. They also share the 24 edges, though left and right octagram helices are different eight-edge paths. Because there are three pairs of completely orthogonal great squares, there are three congruent ways to compose a 16-cell from two eight-cell rings. The 16-cell contains three left-right pairs of eight-cell rings in different orientations, with each cell ring containing its axial isocline. Each left-right pair of isoclines is the track of a left-right pair of distinct isoclinic rotations: the rotations in one pair of completely orthogonal invariant planes of rotation. At each vertex, there are three great squares and six octagram isoclines that cross at the vertex and share a 16-cell axis chord. As a configuration. 
This configuration matrix represents the 16-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole 16-cell. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_1 Tessellations. One can tessellate 4-dimensional Euclidean space by regular 16-cells. This is called the 16-cell honeycomb and has Schläfli symbol {3,3,4,3}. Hence, the 16-cell has a dihedral angle of 120°. Each 16-cell has 16 neighbors with which it shares a tetrahedron, 24 neighbors with which it shares only an edge, and 72 neighbors with which it shares only a single point. Twenty-four 16-cells meet at any given vertex in this tessellation. The dual tessellation, the 24-cell honeycomb, {3,4,3,3}, is made of regular 24-cells. Together with the tesseractic honeycomb {4,3,3,4} these are the only three regular tessellations of R4. Projections. The cell-first parallel projection of the 16-cell into 3-space has a cubical envelope. The closest and farthest cells are projected to inscribed tetrahedra within the cube, corresponding with the two possible ways to inscribe a regular tetrahedron in a cube. Surrounding each of these tetrahedra are 4 other (non-regular) tetrahedral volumes that are the images of the 4 surrounding tetrahedral cells, filling up the space between the inscribed tetrahedron and the cube. The remaining 6 cells are projected onto the square faces of the cube. In this projection of the 16-cell, all its edges lie on the faces of the cubical envelope. The cell-first perspective projection of the 16-cell into 3-space has a triakis tetrahedral envelope. The layout of the cells within this envelope are analogous to that of the cell-first parallel projection. The vertex-first parallel projection of the 16-cell into 3-space has an octahedral envelope. This octahedron can be divided into 8 tetrahedral volumes, by cutting along the coordinate planes. Each of these volumes is the image of a pair of cells in the 16-cell. The closest vertex of the 16-cell to the viewer projects onto the center of the octahedron. Finally the edge-first parallel projection has a shortened octahedral envelope, and the face-first parallel projection has a hexagonal bipyramidal envelope. 4 sphere Venn diagram. A 3-dimensional projection of the 16-cell and 4 intersecting spheres (a Venn diagram of 4 sets) are topologically equivalent. Symmetry constructions. The 16-cell's symmetry group is denoted B4. There is a lower symmetry form of the "16-cell", called a demitesseract or 4-demicube, a member of the demihypercube family, and represented by h{4,3,3}, and Coxeter diagrams or . It can be drawn bicolored with alternating tetrahedral cells. It can also be seen in lower symmetry form as a tetrahedral antiprism, constructed by 2 parallel tetrahedra in dual configurations, connected by 8 (possibly elongated) tetrahedra. It is represented by s{2,4,3}, and Coxeter diagram: . It can also be seen as a snub 4-orthotope, represented by s{21,1,1}, and Coxeter diagram: or . With the tesseract constructed as a 4-4 duoprism, the 16-cell can be seen as its dual, a 4-4 duopyramid. Related complex polygons. The Möbius–Kantor polygon is a regular complex polygon 3{3}3, , in formula_2 shares the same vertices as the 16-cell. It has 8 vertices, and 8 3-edges. 
The regular complex polygon, 2{4}4, , in formula_2 has a real representation as a 16-cell in 4-dimensional space with 8 vertices, 16 2-edges, only half of the edges of the 16-cell. Its symmetry is 4[4]2, order 32. Related uniform polytopes and honeycombs. The regular 16-cell and tesseract are the regular members of a set of 15 uniform 4-polytopes with the same B4 symmetry. The 16-cell is also one of the uniform polytopes of D4 symmetry. The 16-cell is also related to the cubic honeycomb, order-4 dodecahedral honeycomb, and order-4 hexagonal tiling honeycomb which all have octahedral vertex figures. It belongs to the sequence of {3,3,p} 4-polytopes which have tetrahedral cells. The sequence includes three regular 4-polytopes of Euclidean 4-space, the 5-cell {3,3,3}, 16-cell {3,3,4}, and 600-cell {3,3,5}), and the order-6 tetrahedral honeycomb {3,3,6} of hyperbolic space. It is first in a sequence of quasiregular polytopes and honeycombs h{4,p,q}, and a half symmetry sequence, for regular forms {p,3,4}. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
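As a small numerical illustration of the isoclinic rotations described in the Rotations section, the sketch below (Python with NumPy) applies one particular 90°-plus-90° double rotation, acting simultaneously in one pair of completely orthogonal coordinate planes, and confirms that it permutes the 8 vertices:

import numpy as np

# Simultaneous 90° rotations in two completely orthogonal coordinate planes (one isoclinic rotation).
R = np.array([[0, -1, 0,  0],
              [1,  0, 0,  0],
              [0,  0, 0, -1],
              [0,  0, 1,  0]])
vertices = [sign * np.eye(4)[axis] for axis in range(4) for sign in (1, -1)]
images = [R @ v for v in vertices]
print(all(any(np.allclose(w, v) for v in vertices) for w in images))  # True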
[ { "math_id": 0, "text": "\\beta_4" }, { "math_id": 1, "text": "\\begin{bmatrix}\\begin{matrix}8 & 6 & 12 & 8 \\\\ 2 & 24 & 4 & 4 \\\\ 3 & 3 & 32 & 2 \\\\ 4 & 6 & 4 & 16 \\end{matrix}\\end{bmatrix}" }, { "math_id": 2, "text": "\\mathbb{C}^2" } ]
https://en.wikipedia.org/wiki?curid=716422
71642695
Stable Diffusion
Image-generating machine learning model Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. The generative artificial intelligence technology is the premier product of Stability AI and is considered to be a part of the ongoing artificial intelligence boom. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt. Its development involved researchers from the CompVis Group at Ludwig Maximilian University of Munich and Runway with a computational donation from Stability and training data from non-profit organizations. Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly, and it can run on most consumer hardware equipped with a modest GPU with at least 4 GB VRAM. This marked a departure from previous proprietary text-to-image models such as DALL-E and Midjourney which were accessible only via cloud services. Development. Stable Diffusion originated from a project called Latent Diffusion, developed in Germany by researchers at Ludwig Maximilian University in Munich and Heidelberg University. Four of the original 5 authors (Robin Rombach, Andreas Blattmann, Patrick Esser and Dominik Lorenz) later joined Stability AI and released subsequent versions of Stable Diffusion. The technical license for the model was released by the CompVis group at Ludwig Maximilian University of Munich. Development was led by Patrick Esser of Runway and Robin Rombach of CompVis, who were among the researchers who had earlier invented the latent diffusion model architecture used by Stable Diffusion. Stability AI also credited EleutherAI and LAION (a German nonprofit which assembled the dataset on which Stable Diffusion was trained) as supporters of the project. Technology. Architecture. Stable Diffusion uses a kind of diffusion model (DM), called a latent diffusion model (LDM), developed by the CompVis group at LMU Munich. Introduced in 2015, diffusion models are trained with the objective of removing successive applications of Gaussian noise on training images, which can be thought of as a sequence of denoising autoencoders. Stable Diffusion consists of 3 parts: the variational autoencoder (VAE), U-Net, and an optional text encoder. The VAE encoder compresses the image from pixel space to a smaller dimensional latent space, capturing a more fundamental semantic meaning of the image. Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion. The U-Net block, composed of a ResNet backbone, denoises the output from forward diffusion backwards to obtain a latent representation. Finally, the VAE decoder generates the final image by converting the representation back into pixel space. The denoising step can be flexibly conditioned on a string of text, an image, or another modality. The encoded conditioning data is exposed to denoising U-Nets via a cross-attention mechanism. For conditioning on text, the fixed, pretrained CLIP ViT-L/14 text encoder is used to transform text prompts to an embedding space. Researchers point to increased computational efficiency for training and generation as an advantage of LDMs. 
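The pipeline described above can be summarized in a short schematic sketch (Python-style pseudocode; the function names are illustrative stand-ins, not the actual implementation):

def generate_image(prompt, num_steps):
    conditioning = text_encoder(prompt)                    # CLIP text prompt -> embedding sequence
    latent = sample_gaussian_noise()                       # start from noise in the latent space
    for t in noise_schedule(num_steps):                    # iterate the denoising schedule
        predicted_noise = unet(latent, t, conditioning)    # U-Net predicts the noise at step t
        latent = remove_noise(latent, predicted_noise, t)  # slightly less noisy latent
    return vae_decoder(latent)                             # map the latent back to pixel space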
The name "diffusion" takes inspiration from thermodynamic diffusion, and an important link was made between this purely physical field and deep learning in 2015. With 860 million parameters in the U-Net and 123 million in the text encoder, Stable Diffusion is considered relatively lightweight by 2022 standards, and unlike other diffusion models, it can run on consumer GPUs, and even CPU-only if using the OpenVINO version of Stable Diffusion. SD 1.5. The earliest model was the Latent Diffusion Model (LDM) released by CompVis. The models from SD 1.1 up to SD 2.1 were all based on this LDM series and use essentially the same architecture as the earlier LDM. The UNet backbone takes three kinds of inputs: a latent image array, an embedding of the denoising time step, and an encoded conditioning sequence (such as the CLIP text encoding). Each run through the UNet backbone produces a predicted noise vector. This noise vector is scaled down and subtracted away from the latent image array, resulting in a slightly less noisy latent image. The denoising is repeated according to a denoising schedule ("noise schedule"), and the output of the last step is processed by the VAE decoder into a finished image. Similar to the standard U-Net, the U-Net backbone used in SD 1.5 is essentially composed of down-scaling layers followed by up-scaling layers. However, the UNet backbone has additional modules to allow it to handle the embeddings. As an illustration, a single down-scaling layer in the backbone can be described in pseudocode as follows:

def ResBlock(latent_array, time_embedding):
    # Convolve the latent image, then inject the processed time-step embedding.
    latent_array = conv_layer(latent_array)
    time_embedding_processed = feedforward_network(time_embedding)
    latent_array = latent_array + time_embedding_processed  # Broadcasting assumed
    latent_array = conv_layer(latent_array)
    latent_array = latent_array + feedforward_network(time_embedding)  # Broadcasting assumed
    return latent_array

def downscaling_layer(latent_array, time_embedding, conditioning_sequence):
    # A residual block conditioned on the time step, followed by cross-attention
    # (SpatialTransformer) against the conditioning sequence.
    latent_array = ResBlock(latent_array, time_embedding)
    latent_array = SpatialTransformer(latent_array, conditioning_sequence)
    return latent_array

The detailed architecture may be found in . This architecture does not have self-attention mechanisms, although the cross-attention mechanisms can be modified into self-attention mechanisms. SD XL. The XL version uses the same architecture, except larger: a larger UNet backbone, larger cross-attention context, two text encoders instead of one, and it is trained on multiple aspect ratios (not just the square aspect ratio like previous versions). The SD XL Refiner, released at the same time, has the same architecture as SD XL, but it was trained for adding fine details to preexisting images via text-conditional img2img. SD 3.0. The 3.0 version completely changes the backbone: it is not a UNet but a "Rectified Flow Transformer", which implements the rectified flow method with a Transformer. The Transformer architecture used for SD 3.0 has three "tracks", for original text encoding, transformed text encoding, and image encoding (in latent space). The transformed text encoding and image encoding are mixed during each transformer block. The architecture is named multimodal diffusion transformer (MMDiT), where "multimodal" means that it mixes text and image encodings inside its operations. This differs from previous versions of DiT, where the text encoding affects the image encoding, but not vice versa. Training data. 
Stable Diffusion was trained on pairs of images and captions taken from LAION-5B, a publicly available dataset derived from Common Crawl data scraped from the web, where 5 billion image-text pairs were classified based on language and filtered into separate datasets by resolution, a predicted likelihood of containing a watermark, and predicted "aesthetic" score (e.g. subjective visual quality). The dataset was created by LAION, a German non-profit which receives funding from Stability AI. The Stable Diffusion model was trained on three subsets of LAION-5B: laion2B-en, laion-high-resolution, and laion-aesthetics v2 5+. A third-party analysis of the model's training data identified that out of a smaller subset of 12 million images taken from the original wider dataset used, approximately 47% of the sample size of images came from 100 different domains, with Pinterest taking up 8.5% of the subset, followed by websites such as WordPress, Blogspot, Flickr, DeviantArt and Wikimedia Commons. An investigation by Bayerischer Rundfunk showed that LAION's datasets, hosted on Hugging Face, contain large amounts of private and sensitive data. Training procedures. The model was initially trained on the laion2B-en and laion-high-resolution subsets, with the last few rounds of training done on LAION-Aesthetics v2 5+, a subset of 600 million captioned images which the LAION-Aesthetics Predictor V2 predicted that humans would, on average, give a score of at least 5 out of 10 when asked to rate how much they liked them. The LAION-Aesthetics v2 5+ subset also excluded low-resolution images and images which LAION-5B-WatermarkDetection identified as carrying a watermark with greater than 80% probability. Final rounds of training additionally dropped 10% of text conditioning to improve Classifier-Free Diffusion Guidance. The model was trained using 256 Nvidia A100 GPUs on Amazon Web Services for a total of 150,000 GPU-hours, at a cost of $600,000. SD3 was trained at a cost of around $10 million. Limitations. Stable Diffusion has issues with degradation and inaccuracies in certain scenarios. Initial releases of the model were trained on a dataset that consists of 512×512 resolution images, meaning that the quality of generated images noticeably degrades when user specifications deviate from its "expected" 512×512 resolution; the version 2.0 update of the Stable Diffusion model later introduced the ability to natively generate images at 768×768 resolution. Another challenge is in generating human limbs due to poor data quality of limbs in the LAION database. The model is insufficiently trained to understand human limbs and faces due to the lack of representative features in the database, and prompting the model to generate images of such type can confound the model. Stable Diffusion XL (SDXL) version 1.0, released in July 2023, introduced native 1024x1024 resolution and improved generation for limbs and text. Accessibility for individual developers can also be a problem. In order to customize the model for new use cases that are not included in the dataset, such as generating anime characters ("waifu diffusion"), new data and further training are required. Fine-tuned adaptations of Stable Diffusion created through additional retraining have been used for a variety of different use-cases, from medical imaging to algorithmically generated music. 
However, this fine-tuning process is sensitive to the quality of new data; low resolution images or different resolutions from the original data can not only fail to learn the new task but degrade the overall performance of the model. Even when the model is additionally trained on high quality images, it is difficult for individuals to run models in consumer electronics. For example, the training process for waifu-diffusion requires a minimum 30 GB of VRAM, which exceeds the usual resource provided in such consumer GPUs as Nvidia's GeForce 30 series, which has only about 12 GB. The creators of Stable Diffusion acknowledge the potential for algorithmic bias, as the model was primarily trained on images with English descriptions. As a result, generated images reinforce social biases and are from a western perspective, as the creators note that the model lacks data from other communities and cultures. The model gives more accurate results for prompts that are written in English in comparison to those written in other languages, with western or white cultures often being the default representation. End-user fine-tuning. To address the limitations of the model's initial training, end-users may opt to implement additional training to fine-tune generation outputs to match more specific use-cases, a process also referred to as personalization. There are three methods in which user-accessible fine-tuning can be applied to a Stable Diffusion model checkpoint: Capabilities. The Stable Diffusion model supports the ability to generate new images from scratch through the use of a text prompt describing elements to be included or omitted from the output. Existing images can be re-drawn by the model to incorporate new elements described by a text prompt (a process known as "guided image synthesis") through its diffusion-denoising mechanism. In addition, the model also allows the use of prompts to partially alter existing images via inpainting and outpainting, when used with an appropriate user interface that supports such features, of which numerous different open source implementations exist. Stable Diffusion is recommended to be run with 10 GB or more VRAM, however users with less VRAM may opt to load the weights in float16 precision instead of the default float32 to tradeoff model performance with lower VRAM usage. Text to image generation. The text to image sampling script within Stable Diffusion, known as "txt2img", consumes a text prompt in addition to assorted option parameters covering sampling types, output image dimensions, and seed values. The script outputs an image file based on the model's interpretation of the prompt. Generated images are tagged with an invisible digital watermark to allow users to identify an image as generated by Stable Diffusion, although this watermark loses its efficacy if the image is resized or rotated. Each txt2img generation will involve a specific seed value which affects the output image. Users may opt to randomize the seed in order to explore different generated outputs, or use the same seed to obtain the same image output as a previously generated image. Users are also able to adjust the number of inference steps for the sampler; a higher value takes a longer duration of time, however a smaller value may result in visual defects. Another configurable option, the classifier-free guidance scale value, allows the user to adjust how closely the output image adheres to the prompt. 
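These options (half-precision weights, a fixed seed, the number of inference steps, and the guidance scale) are exposed by most front ends; the sketch below uses the third-party Hugging Face Diffusers library as one possible interface (the library, model identifier, and prompt here are assumptions for illustration, not part of Stable Diffusion itself):

import torch
from diffusers import StableDiffusionPipeline

# Load the weights in float16 to reduce VRAM usage, as discussed above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed for a reproducible output
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=30,   # more steps take longer; too few may cause visual defects
    guidance_scale=7.5,       # how closely the output adheres to the prompt
    generator=generator,
).images[0]
image.save("lighthouse.png")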
More experimentative use cases may opt for a lower scale value, while use cases aiming for more specific outputs may use a higher value. Additional text2img features are provided by front-end implementations of Stable Diffusion, which allow users to modify the weight given to specific parts of the text prompt. Emphasis markers allow users to add or reduce emphasis to keywords by enclosing them with brackets. An alternative method of adjusting weight to parts of the prompt are "negative prompts". Negative prompts are a feature included in some front-end implementations, including Stability AI's own DreamStudio cloud service, and allow the user to specify prompts which the model should avoid during image generation. The specified prompts may be undesirable image features that would otherwise be present within image outputs due to the positive prompts provided by the user, or due to how the model was originally trained, with mangled human hands being a common example. Image modification. Stable Diffusion also includes another sampling script, "img2img", which consumes a text prompt, path to an existing image, and strength value between 0.0 and 1.0. The script outputs a new image based on the original image that also features elements provided within the text prompt. The strength value denotes the amount of noise added to the output image. A higher strength value produces more variation within the image but may produce an image that is not semantically consistent with the prompt provided. There are different methods for performing img2img. The main method is SDEdit, which first adds noise to an image, then denoises it as usual in text2img. The ability of img2img to add noise to the original image makes it potentially useful for data anonymization and data augmentation, in which the visual features of image data are changed and anonymized. The same process may also be useful for image upscaling, in which the resolution of an image is increased, with more detail potentially being added to the image. Additionally, Stable Diffusion has been experimented with as a tool for image compression. Compared to JPEG and WebP, the recent methods used for image compression in Stable Diffusion face limitations in preserving small text and faces. Additional use-cases for image modification via img2img are offered by numerous front-end implementations of the Stable Diffusion model. Inpainting involves selectively modifying a portion of an existing image delineated by a user-provided layer mask, which fills the masked space with newly generated content based on the provided prompt. A dedicated model specifically fine-tuned for inpainting use-cases was created by Stability AI alongside the release of Stable Diffusion 2.0. Conversely, outpainting extends an image beyond its original dimensions, filling the previously empty space with content generated based on the provided prompt. A depth-guided model, named "depth2img", was introduced with the release of Stable Diffusion 2.0 on November 24, 2022; this model infers the depth of the provided input image, and generates a new output image based on both the text prompt and the depth information, which allows the coherence and depth of the original input image to be maintained in the generated output. ControlNet. ControlNet is a neural network architecture designed to manage diffusion models by incorporating additional conditions. It duplicates the weights of neural network blocks into a "locked" copy and a "trainable" copy. 
The "trainable" copy learns the desired condition, while the "locked" copy preserves the original model. This approach ensures that training with small datasets of image pairs does not compromise the integrity of production-ready diffusion models. The "zero convolution" is a 1×1 convolution with both weight and bias initialized to zero. Before training, all zero convolutions produce zero output, preventing any distortion caused by ControlNet. No layer is trained from scratch; the process is still fine-tuning, keeping the original model secure. This method enables training on small-scale or even personal devices. User Interfaces. Stability provides an online image generation service called "DreamStudio". The company also released an open source version of "DreamStudio" called "StableStudio". In addition to Stability's interfaces, many third party open source interfaces exist, such as AUTOMATIC1111 Stable Diffusion Web UI, which is the most popular and offers extra features, "Fooocus", which aims to decrease the amount of prompting needed by the user, and "ComfyUI", which has a node-based user interface, essentially a visual programming language akin to many 3D modeling applications. Releases. Key papers Training cost Usage and controversy. Stable Diffusion claims no rights on generated images and freely gives users the rights of usage to any generated images from the model provided that the image content is not illegal or harmful to individuals. The images Stable Diffusion was trained on have been filtered without human input, leading to some harmful images and large amounts of private and sensitive information appearing in the training data. More traditional visual artists have expressed concern that widespread usage of image synthesis software such as Stable Diffusion may eventually lead to human artists, along with photographers, models, cinematographers, and actors, gradually losing commercial viability against AI-based competitors. Stable Diffusion is notably more permissive in the types of content users may generate, such as violent or sexually explicit imagery, in comparison to other commercial products based on generative AI. Addressing the concerns that the model may be used for abusive purposes, CEO of Stability AI, Emad Mostaque, argues that "[it is] peoples' responsibility as to whether they are ethical, moral, and legal in how they operate this technology", and that putting the capabilities of Stable Diffusion into the hands of the public would result in the technology providing a net benefit, in spite of the potential negative consequences. In addition, Mostaque argues that the intention behind the open availability of Stable Diffusion is to end corporate control and dominance over such technologies, who have previously only developed closed AI systems for image synthesis. This is reflected by the fact that any restrictions Stability AI places on the content that users may generate can easily be bypassed due to the availability of the source code. Controversy around photorealistic sexualized depictions of underage characters have been brought up, due to such images generated by Stable Diffusion being shared on websites such as Pixiv. In June of 2024, a hack on an extension of ComfyUI, a user interface for Stable Diffusion, took place, with the hackers claiming they targeted users who committed "one of our sins", which included AI-art generation, art theft, promoting cryptocurrency. Litigation. 
In January 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists. The same month, Stability AI was also sued by Getty Images for using its images in the training data. In July 2023, U.S. District Judge William Orrick inclined to dismiss most of the lawsuit filed by Andersen, McKernan, and Ortiz but allowed them to file a new complaint. License. Unlike models like DALL-E, Stable Diffusion makes its source code available, along with the model (pretrained weights). It applies the Creative ML OpenRAIL-M license, a form of Responsible AI License (RAIL), to the model (M). The license prohibits certain use cases, including crime, libel, harassment, doxing, "exploiting ... minors", giving medical advice, automatically creating legal obligations, producing legal evidence, and "discriminating against or harming individuals or groups based on ... social behavior or ... personal or personality characteristics ... [or] legally protected characteristics or categories". The user owns the rights to their generated output images, and is free to use them commercially. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\text{channel}, \\text{width}, \\text{height})" }, { "math_id": 1, "text": "(3, 64, 64)" }, { "math_id": 2, "text": "t = 0" }, { "math_id": 3, "text": "t = 100" }, { "math_id": 4, "text": "(128, 32, 32)" }, { "math_id": 5, "text": "1024" }, { "math_id": 6, "text": "128" } ]
https://en.wikipedia.org/wiki?curid=71642695
71651194
Silver's dichotomy
Statement about equivalence relations In descriptive set theory, a branch of mathematics, Silver's dichotomy (also known as Silver's theorem) is a statement about equivalence relations, named after Jack Silver. Statement and history. A relation is said to be coanalytic if its complement is an analytic set. Silver's dichotomy is a statement about the equivalence classes of a coanalytic equivalence relation: any coanalytic equivalence relation either has countably many equivalence classes, or else there is a perfect set of reals that are pairwise inequivalent. In the latter case, there must be continuum many equivalence classes of the relation. The first published proof of Silver's dichotomy was by Jack Silver, appearing in 1980 in order to answer a question posed by Harvey Friedman. One application of Silver's dichotomy in recursive set theory is that, since equality restricted to a set formula_0 is coanalytic, there is no Borel equivalence relation formula_1 such that formula_2, where formula_3 denotes Borel-reducibility. Some later results motivated by Silver's dichotomy founded a new field known as invariant descriptive set theory, which studies definable equivalence relations. Silver's dichotomy also admits several weaker recursive versions, which have been compared in strength with subsystems of second-order arithmetic from reverse mathematics, while Silver's dichotomy itself is provably equivalent to formula_4 over formula_5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "(=\\upharpoonright\\aleph_0)\\leq_B R\\leq_B (=\\upharpoonright 2^{\\aleph_0})" }, { "math_id": 3, "text": "\\leq_B" }, { "math_id": 4, "text": "\\Pi_1^1\\mathsf{-CA}_0" }, { "math_id": 5, "text": "\\mathsf{RCA}_0" } ]
https://en.wikipedia.org/wiki?curid=71651194
71656835
Job 16
Job 16 is the sixteenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 16 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 16 consists of three parts: Job reflects on his friends as miserable comforters (16:1–5). Job starts his first speech in the second round of the conversation with his friends with a complaint that all of them are miserable comforters, giving him nothing new as he has heard many things like what they said (verse 2a) and he also was able to speak as they did (verse 4a). Job points out that the friends string words together while shaking their head, not in sympathy but in derision, against him (cf. Psalm 22:7; 109:25), instead of saying something useful as he would do if he were in their shoes (verse 5). [Job said:] "I could strengthen you with my mouth," "and the solace of my lips would assuage your pain." Job complains of God's treatment and wishes help from a heavenly witness (16:6–22). Job states that his pain is not eased by speaking or by not speaking about it, as he firmly believes God is in charge of the world and treats Job as God pleases (verses 12–14). Therefore Job called for help from a heavenly figure, who will argue Job's case with God (verse 21; cf. Job 9:33; 19:25; 31:35). "My friends scorn me, "but I pour out my tears to God." [Job said:] "Oh, that one might plead for a man with God," "as a man pleads for his neighbor!" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71656835
71656873
Job 11
Job 11 is the eleventh chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Zophar the Naamathite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 20 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 11 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 11 starts with an introduction of Zophar, Job's third friend to speak, followed by the exposition of Zophar's fundamental stand (verses 2–6). Zophar argues that humans cannot fathom God's depths (verses 7–12), but he believes that reward will come to the repentant righteous (verses 13–20), ending with a warning that the wicked will be destroyed. Zophar's fundamental position (11:1–12). Zophar thinks that Job is a man full of empty talk who has to be silenced and shamed, because he regards anyone protesting to God as mocking God. Zophar's statements imply that God's wisdom is a secret kept from Job, but not from Zophar, so Zophar can speak on behalf of God (despite having no revelation; verses 5–6). Zophar focuses on God's greatness in creation (like Eliphaz and Bildad at the start of their speeches) to tell Job about the punishment of the wicked, as Zophar perceives Job as a worthless, hollow-minded person in contrast to God's wisdom (verse 12). At the end of the book (Job 42:7–8), it is stated that Zophar is wrongly claiming to speak for God, so Zophar's words are actually his own view. [Zophar said:] "“Should your empty talk make men hold their peace? "And when you mock, will no one shame you?"" Zophar proposes a way forward (11:13–20). In this section Zophar shows the positive results (verses 15–19) of several conditions (verses 13–14) to gain God's favor, concluded by a warning about the destruction of the wicked (verse 20), based on Zophar's conviction that Job is wicked. Many of Zophar's words ring true, but they don't apply to Job's circumstances (because it is stated in the Prologue, chapters 1 and 2, that Job is blameless in this case). Compared to Eliphaz's thought that Job's suffering can be a temporary setback or Bildad's attempt to distinguish the greater sins of Job's children from Job's sin, Zophar insists that Job is getting off lightly, because of his belief that the degree of suffering is proportional to one's wickedness. [Zophar said:] "“But the eyes of the wicked will fail," "and they will not escape," "and their hope will be as the giving up of breath."" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71656873
71659775
Josephson diode
Superconducting quantum material diode A Josephson diode (JD) is a special type of Josephson junction (JJ), which conducts (super)current in one direction better than in the opposite direction. In other words, it has an asymmetric current-voltage characteristic. Since the Josephson diode is a superconducting device, the asymmetry of the "supercurrent" transport is the main focus of attention. In contrast to conventional Josephson junctions, the critical (maximum) supercurrents formula_0 and formula_1 for opposite bias directions differ in absolute value (formula_2). In the presence of such non-reciprocity, bias currents of any magnitude in the range between formula_0 and formula_3 can flow without resistance in only one direction. This asymmetry, characterized by the ratio of critical currents formula_4, is the main figure of merit of Josephson diodes and is the subject of new developments and optimizations. The Josephson diode effect can occur, e.g., in superconducting devices where time reversal symmetry and inversion symmetry are broken. Josephson diodes can be subdivided into two categories: those requiring an external (magnetic) field to be asymmetric and those not requiring an external magnetic field --- the so-called "field-free" Josephson diodes (more attractive for applications). In 2021, the Josephson diode was realized in the absence of an applied magnetic field in a non-centrosymmetric material, followed shortly by the first realization of the zero-field Josephson diode in an inversion-symmetric device. History. For decades, physicists have tried to construct Josephson junction devices with asymmetric critical currents. This did not involve new physical principles or advanced (quantum) material engineering, but rather electrical engineering tricks, such as combining several JJs in a special way (e.g. an asymmetric 3JJ SQUID) or specially designed long JJs or JJ arrays in which Josephson vortex transport is asymmetric in opposite directions. After all, if one does not look inside the device but treats it as a black box with two electrodes, its current-voltage characteristic is asymmetric, with formula_2. Such devices were especially popular in the context of Josephson ratchets -- devices used to rectify random or deterministic signals with zero time average. These devices can be subdivided into several classes. Starting from 2020, one observes a new wave of interest in systems with non-reciprocal supercurrent transport based on novel materials and physical principles. In-depth review of recent developments. Superconducting diode effect. The superconducting diode effect is an example of nonreciprocal superconductivity, where a material is superconducting in one direction and resistive in the other. This leads to half-wave rectification when a square-wave AC current is applied. In 2020, this effect was demonstrated in an artificial [Nb/V/Ta]n superlattice. The phenomenon in the Josephson diode is believed to originate from asymmetric Josephson tunneling. Unlike conventional semiconducting junction diodes, the superconducting diode effect can be realized in Josephson junctions as well as in junction-free bulk superconductors. Theories. Currently, the precise mechanism behind the Josephson diode effect is not fully understood. However, some theories have emerged that are now under theoretical investigation. There are two types of Josephson diodes, distinguished by which symmetries are broken.
These are the Josephson diode that breaks only inversion symmetry, and the Josephson diode that breaks both inversion and time-reversal symmetry. The minimal symmetry-breaking requirement for forming a Josephson diode is broken inversion symmetry, which is required to obtain nonreciprocal transport. One proposed mechanism originates from finite-momentum Cooper pairs. It may also be possible that the superconducting diode effect in the JD originates from self-field effects, but this still has to be rigorously studied. Figures of merit. Depending on the potential application, different parameters of Josephson diodes, from operating temperature to size, can be important. However, the most important parameter is the asymmetry of critical currents (also called non-reciprocity). It can be defined as the dimensionless ratio of the larger to the smaller critical current, formula_13 so that it is always positive and lies in the range from 1 (symmetric JJ) to formula_14 (infinitely asymmetric junction). Instead, some researchers use the so-called "efficiency", defined as formula_15 It lies in the range from 0 (symmetric system) to 1 (infinitely asymmetric system). Among other things, the efficiency formula_16 gives the theoretical limit of the "thermodynamic efficiency" (the ratio of output to input power) that can be reached by the diode during rectification. Intuitively it is clear that the larger the asymmetry formula_4 is, the better the diode performs. A quantitative analysis showed that a large asymmetry allows one to achieve rectification in a wide range of driving current amplitudes, a large counter current (corresponding to a heavy load), against which rectification is still possible, and a large thermodynamic efficiency (ratio of output dc to input ac power). Thus, to create a practically relevant diode one should design a system with high asymmetry. The asymmetry ratios (efficiency) for different implementations of Josephson diodes are summarized in the table below. Size. Previously demonstrated Josephson diodes were rather large (see the table), which hampers their integration into micro- or nano-electronic superconducting circuits or stacking. Novel devices based on heterostructures can potentially have 100 nm-scale dimensions, which is difficult to achieve using previous approaches with long JJs, fluxons, etc. Voltage. An important parameter of any nano-rectifier is the maximum dc voltage produced. See the table for comparison. Operating temperature. Ideally one would like to operate the diode in a wide temperature range. Obviously, an upper limit on the operating temperature is given by the transition temperature formula_17 of the superconducting material(s) used to fabricate the Josephson diodes. In the table below we quote the operating temperature for which the other parameters such as asymmetry are quoted. Many novel diodes, unfortunately, operate below 4.2 K. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
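The conversion between the asymmetry ratio and the efficiency defined in the section on figures of merit is simple arithmetic; the short Python sketch below illustrates it for a few representative asymmetry values (the values and the code are illustrative only).

```python
# Illustrative conversion between the critical-current asymmetry
# A = max(Ic+, |Ic-|) / min(Ic+, |Ic-|) and the efficiency eta = (A - 1) / (A + 1).
def efficiency(asymmetry: float) -> float:
    return (asymmetry - 1.0) / (asymmetry + 1.0)

def asymmetry(eta: float) -> float:
    return (1.0 + eta) / (1.0 - eta)

for a in (1.0, 1.07, 2.0, 2.3, 10.0):   # arbitrary sample asymmetries
    print(f"A = {a:5.2f}  ->  eta = {efficiency(a):.3f}")

# A symmetric junction (A = 1) gives eta = 0; an infinitely asymmetric one approaches eta = 1.
```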
[ { "math_id": 0, "text": "I_{c+}" }, { "math_id": 1, "text": "I_{c-}" }, { "math_id": 2, "text": "I_{c+}\\neq|I_{c-}|" }, { "math_id": 3, "text": "|I_{c-}|" }, { "math_id": 4, "text": "\\mathcal{A}" }, { "math_id": 5, "text": "\\varphi" }, { "math_id": 6, "text": "\\mathcal{A}\\sim10" }, { "math_id": 7, "text": "N\\times" }, { "math_id": 8, "text": "\\mathcal{A}\\approx1.07" }, { "math_id": 9, "text": "\\mathcal{A}\\approx2" }, { "math_id": 10, "text": "T\\approx2\\,\\mathrm{K}" }, { "math_id": 11, "text": "\\mathcal{A}\\approx2.3" }, { "math_id": 12, "text": "T\\approx3.8\\,\\mathrm{K}" }, { "math_id": 13, "text": "\\mathcal{A}=\\frac{\\mathrm{max}(I_{c+},|I_{c-}|)}{\\mathrm{min}(I_{c+},|I_{c-}|)}" }, { "math_id": 14, "text": "\\infty" }, { "math_id": 15, "text": "\n \\eta=\\left|\\frac{I_{c+}-|I_{c-}|}{I_{c+}+|I_{c-}|}\\right| = \\frac{\\mathcal{A}-1}{\\mathcal{A}+1}.\n" }, { "math_id": 16, "text": "\\eta" }, { "math_id": 17, "text": "T_c" } ]
https://en.wikipedia.org/wiki?curid=71659775
71663792
Job 12
12th chapter of Old Testament Job 12 is the twelfth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 25 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 12 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapters 12 to 14 contain Job's closing speech of the first round, where he directly addresses his friends (12:2–3; 13:2, 4–12). Job believes God's hand in creation (12:1–12). Job points out that some who are wicked are prospering, regardless how the righteous is rewarded or is suffering, and that the life of the nature all are in God's hand (verse 9). Job suggests his friends to look behind the 'age-old traditions' and 'past-dogmas' to 'the God who is both the source of all wisdom' and the one in control of all creation (verse 12). [Job said:] "Who among all these does not know" "that the hand of the Lord has done this," "In whose hand is the life of all living things" "and the breath of every human being?" Job believes God's active control of the worlds (12:13–25). This section follows Job's statements in verse 12 (which can also be read as rhetorical questions) to declare the wisdom and might of God (verse 13) whose sovereign activity can be observed in all areas and situations of life (verses 14–25). [Job said:] "He increases the nations and destroys them;" "He enlarges the nations and guides them." Verse 23. Job uses the rise and fall of nations, which does not seem to be governed by any moral principle, as an example of God’s arbitrary power, which is spelled out in detail in Daniel's interpretation of King Nebuchadnezzar's dream (Daniel 2) how no group of humans can thwart the purpose of God Almighty. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71663792
71667739
McShane integral
Integral in integration theory In the branch of mathematics known as integration theory, the McShane integral, created by Edward J. McShane, is a modification of the Henstock-Kurzweil integral. The McShane integral is equivalent to the Lebesgue integral. Definition. Free tagged partition. Given a closed interval ["a", "b"] of the real line, a "free tagged partition" formula_0 "of" formula_1 is a set formula_2 where formula_3 and each tag formula_4. The fact that the tags are allowed to be outside the subintervals is why the partition is called "free". It is also the only difference between the definitions of the Henstock-Kurzweil integral and the McShane integral. For a function formula_5 and a free tagged partition formula_0, define formula_6 Gauge. A positive function formula_7 is called a "gauge" in this context. We say that a free tagged partition formula_0 is formula_8"-fine" if for all formula_9 formula_10 Intuitively, the gauge controls the widths of the subintervals. Like with the Henstock-Kurzweil integral, this provides flexibility (especially near problematic points) not given by the Riemann integral. McShane integral. The value formula_11 is the McShane integral of formula_5 if for every formula_12 we can find a gauge formula_8 such that for all formula_8-fine free tagged partitions formula_0 of formula_1, formula_13 Examples. It is clear that if a function formula_5 is integrable according to the McShane definition, then formula_14 is also Henstock-Kurzweil integrable, and by uniqueness the two integrals coincide. In order to illustrate the above definition, we analyse the McShane integrability of the functions described in the following examples, which are already known to be Henstock-Kurzweil integrable (see the article "Henstock-Kurzweil integral"). Example 1. Let formula_5 be such that formula_15 and formula_16 if formula_17 As is well known, this function is Riemann integrable and the corresponding integral is equal to formula_18 We will show that this formula_14 is also McShane integrable and that its integral assumes the same value. For that purpose, for a given formula_19, let us choose the gauge formula_20 such that formula_21 and formula_22 if formula_23 Any free tagged partition formula_24 of formula_25 can be decomposed into sequences like formula_26, for formula_27, formula_28, for formula_29, and formula_30, where formula_31, such that formula_32 formula_33 This way, we have the Riemann sum formula_34 and consequently formula_35 Therefore, if formula_0 is a free tagged formula_8-fine partition, we have formula_36, for every formula_27, and formula_37, for every formula_29. Since none of those intervals overlaps the interiors of the others, we obtain formula_38 Thus formula_14 is McShane integrable and formula_39 The next example exhibits a function that is McShane integrable but not Riemann integrable. Example 2. Let formula_40 be the well-known Dirichlet function given by formula_41 which is known not to be Riemann integrable. We will show that formula_42 is integrable in the McShane sense and that its integral is zero. Denoting by formula_43 the set of all rational numbers of the interval formula_1, for any formula_19 let us formulate the following gauge formula_44 For any formula_8-fine free tagged partition formula_45 consider its Riemann sum formula_46.
Taking into account that formula_47 whenever formula_48 is irrational, we can exclude from the sequence of ordered pairs that constitute formula_0 the pairs formula_49 where formula_48 is irrational. The remaining pairs form subsequences of the type formula_50 such that formula_51, formula_52 Since none of those intervals overlaps the interiors of the others, each one of these sequences gives rise in the Riemann sum to subsums of the type formula_53. Thus formula_54, which proves that the Dirichlet function is McShane integrable and that formula_55 Relationship with Derivatives. For real functions defined on an interval formula_1, both the Henstock-Kurzweil and McShane integrals satisfy the elementary properties enumerated below, where by formula_56 we denote the value of either of those integrals. With respect to the integrals mentioned above, the proofs of these properties are identical except for slight variations inherent in the differences between the corresponding definitions (see Washek Pfeffer [Sec. 6.1]). This way a certain parallelism between the two integrals is observed. However, this parallelism breaks down when other properties are analysed, such as the absolute integrability and the integrability of the derivatives of integrable differentiable functions. On this matter the following theorems hold (see [Prop. 2.2.3 and Th. 6.1.2]). Theorem 1 (on the absolute integrability of the McShane integral). If formula_74 is McShane integrable on formula_1 then formula_75 is also McShane integrable on formula_1 and formula_76. Theorem 2 (fundamental theorem of the Henstock-Kurzweil integral). If formula_77 is differentiable on formula_1, then formula_78 is Henstock-Kurzweil integrable on formula_1 and formula_79. In order to illustrate these theorems we analyse the following example, based upon Example 2.4.12. Example 3. Let us consider the function: formula_80 formula_81 is obviously differentiable at any formula_82 and differentiable, as well, at formula_83, since formula_84. Moreover formula_85 As the function formula_86 is continuous and, by Theorem 2, the function formula_87 is Henstock-Kurzweil integrable on formula_88 then by properties 6 and 7 the same holds for the function formula_89 But the function formula_90 is not integrable on formula_91 with respect to either of the mentioned integrals. In fact, otherwise, denoting by formula_92 either of those integrals, we would necessarily have formula_93 for any positive integer formula_94. Then, through the change of variable formula_95 and taking into account property 5, we would obtain: formula_96 formula_97. As formula_94 is an arbitrary positive integer and formula_98, we obtain a contradiction. From this example two relevant consequences follow: the function formula_99 is Henstock-Kurzweil integrable but not absolutely integrable, so Theorem 1 does not hold for the Henstock-Kurzweil integral; and the derivative formula_100 in this example is not McShane integrable, so Theorem 2 does not hold for the McShane integral. Relationship with Lebesgue Integral. The more surprising result of the McShane integral is stated in the following theorem, already announced in the introduction. Theorem 3. Let formula_74. Then formula_14 is McShane integrable formula_101 formula_14 is Lebesgue integrable. The corresponding integrals coincide. This fact makes it possible to conclude that with the McShane integral one formulates a kind of unification of integration theory around Riemann sums, which, after all, constitute the origin of that theory. So far no immediate proof of this theorem is known. In Washek Pfeffer [Ch. 4] it is established through the development of the theory of the McShane integral, including measure theory, in relation to already known properties of the Lebesgue integral.
In Charles Swartz that same equivalence is proved in Appendix 4. In addition to the book by Russel Gordon [Ch. 10], on this subject we also call the reader's attention to the works by Robert McLeod [Ch. 8] and by Douglas Kurtz together with Charles W. Swartz. Another perspective on the McShane integral is that it can be viewed as a new formulation of the Lebesgue integral without using measure theory, as an alternative to the courses of Frigyes Riesz and Bela Sz. Nagy [Ch.II] or Serge Lang [Ch.X, §4 Appendix] (see also).
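As a rough numerical illustration of Example 3 (a sketch added for illustration, not a computation taken from the references above), one can check that, after the change of variable used in the text, the integrals of (1/x)|sin(π/x²)| over [1/√n, 1] grow without bound together with the harmonic lower bound (1/π) Σ 1/k derived above.

```python
# Rough numerical illustration of Example 3: after the change of variable x = 1/sqrt(t),
# the integral of (1/x)|sin(pi/x^2)| over [1/sqrt(n), 1] equals
# (1/2) * integral of |sin(pi*t)|/t over [1, n], which grows without bound
# together with the harmonic lower bound (1/pi) * sum_{k=2}^{n} 1/k.
import math
from scipy.integrate import quad

def transformed_integral(n):
    # integrate |sin(pi*t)|/t one unit interval at a time (the integrand is smooth there)
    total = 0.0
    for k in range(1, n):
        piece, _ = quad(lambda t: abs(math.sin(math.pi * t)) / t, k, k + 1)
        total += piece
    return 0.5 * total

for n in (10, 100, 1000):
    value = transformed_integral(n)
    bound = sum(1.0 / k for k in range(2, n + 1)) / math.pi
    print(f"n = {n:4d}   transformed integral ~ {value:7.3f}   harmonic lower bound ~ {bound:7.3f}")
```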
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "[a,b]" }, { "math_id": 2, "text": "\\{(t_i, [a_{i-1}, a_i]) : 1 \\leq i \\leq n \\} " }, { "math_id": 3, "text": " a = a_0 < a_1 < \\dots < a_n = b " }, { "math_id": 4, "text": " t_i \\in [a, b]" }, { "math_id": 5, "text": "f : [a,b] \\to \\mathbb{R}" }, { "math_id": 6, "text": "S(f, P) = \\sum_{i = 1}^n f(t_i) (a_i - a_{i-1})." }, { "math_id": 7, "text": " \\delta : [a, b] \\to (0, +\\infty) " }, { "math_id": 8, "text": "\\delta" }, { "math_id": 9, "text": "i = 1,2, \\dots, n," }, { "math_id": 10, "text": "[a_{i-1}, a_i] \\subseteq [t_i - \\delta(t_i), t_i + \\delta(t_i)]. " }, { "math_id": 11, "text": "\\int_a^b f" }, { "math_id": 12, "text": "\\varepsilon > 0" }, { "math_id": 13, "text": " \\left |\\int_a^bf - S(f, P) \\right| < \\varepsilon. " }, { "math_id": 14, "text": "f" }, { "math_id": 15, "text": "f(a)=f(b)=0" }, { "math_id": 16, "text": "f(x)=1" }, { "math_id": 17, "text": "x\\in]a,b[." }, { "math_id": 18, "text": "b-a." }, { "math_id": 19, "text": "\\varepsilon>0" }, { "math_id": 20, "text": "\\delta(t)" }, { "math_id": 21, "text": "\\delta(a)=\\delta(b)=\\varepsilon/4" }, { "math_id": 22, "text": "\\delta(t)=b-a" }, { "math_id": 23, "text": "t\\in]a,b[." }, { "math_id": 24, "text": "P=\\{(t_i, [a_{i-1}, a_i]) : i=1,...,n\\} " }, { "math_id": 25, "text": "[a,b] " }, { "math_id": 26, "text": "(a,[x_{i_j-1},x_{i_j}])" }, { "math_id": 27, "text": "j=1,...,\\lambda" }, { "math_id": 28, "text": "(b,[x_{i_k-1},x_{i_k}])" }, { "math_id": 29, "text": "k=1,...,\\mu" }, { "math_id": 30, "text": "(t_{i_r},[x_{i_r-1},x_{i_r}])" }, { "math_id": 31, "text": "r=1,...,\\nu" }, { "math_id": 32, "text": "t_{i_r}\\in]a,b[" }, { "math_id": 33, "text": "(\\lambda+\\mu+\\nu=n)." }, { "math_id": 34, "text": "S(f, P) = \\sum_{r=1}^\\nu \\displaystyle(x_{i_r}-x_{i_r-1})" }, { "math_id": 35, "text": "|S(P,f)-(b-a)|=\\textstyle \\sum_{j=1}^\\lambda \\displaystyle(x_{i_j}-x_{i_j-1})+\\textstyle \\sum_{k=1}^\\mu \\displaystyle(x_{i_k}-x_{i_k-1})." }, { "math_id": 36, "text": "[x_{i_j-1},x_{i_j}]\\subset[a-\\delta(a),a+\\delta(a)]" }, { "math_id": 37, "text": "[x_{i_k-1},x_{i_k}]\\subset[b-\\delta(b),b+\\delta(b)]" }, { "math_id": 38, "text": "|S(P,f)-(b-a)| <2\\delta(a)+2\\delta(b)=\\frac{\\varepsilon}{2}+\\frac{\\varepsilon}{2}=\\varepsilon." }, { "math_id": 39, "text": "\\int_a^b f=b-a." }, { "math_id": 40, "text": "d:[a,b]\\rightarrow\\mathbb{R}" }, { "math_id": 41, "text": "d(x) = \\begin{cases} 1, & \\text{if }x\\text{ is rational,} \\\\0, & \\text{if }x\\text{ is irrational,} \\end{cases}" }, { "math_id": 42, "text": "d" }, { "math_id": 43, "text": "\\{r_1,r_2,...,r_n,...\\}" }, { "math_id": 44, "text": "\\delta(x) = \\begin{cases} \\varepsilon2^{-n-1}, & \\text{if }x=r_n\\text{ and } n=1,2,...,\\\\1, & \\text{if }x\\text{ is irrational.} \\end{cases}" }, { "math_id": 45, "text": "P=\\{(t_i,[x_{i-1},x_i]):i=1,...,n\\}" }, { "math_id": 46, "text": "S(P,f)=\\textstyle \\sum_{i=1}^n \\displaystyle f(t_i)(x_i-x_{i-1})" }, { "math_id": 47, "text": "f(t_i)=0" }, { "math_id": 48, "text": "t_i" }, { "math_id": 49, "text": "(t_i,[x_{i-1},x_i])" }, { "math_id": 50, "text": "(r_k,[x_{i_1-1},x_{i_1}]),...,(r_k,[x_{i_k-1},x_{i_k}])" }, { "math_id": 51, "text": "[x_{i_j-1},x_{i_j}]\\subset [r_k-\\delta(r_k),r_k+\\delta(r_k)]" }, { "math_id": 52, "text": "j=1,...,k." 
}, { "math_id": 53, "text": "\\textstyle \\sum_{j=1}^k \\displaystyle f(r_k)(x_{i_j}-x_{i_j-1})=\\textstyle \\sum_{j=1}^k \\displaystyle (x_{i_j}-x_{i_j-1})\\leq2\\delta(r_k)=\\frac{\\varepsilon}{2^n}" }, { "math_id": 54, "text": "0\\leq S(P,f)<\\textstyle \\sum_{n\\geq1} \\displaystyle \\varepsilon/2^n=\\varepsilon" }, { "math_id": 55, "text": "\\int_a^b d=0." }, { "math_id": 56, "text": "\\int_{a}^{b} f" }, { "math_id": 57, "text": "[a,c]" }, { "math_id": 58, "text": "[c,b]" }, { "math_id": 59, "text": "\\int_{a}^{c} f+\\int_{c}^{b} f=\\int_{a}^{b} f" }, { "math_id": 60, "text": "\\phi:[a,b]\\rightarrow[\\alpha,\\beta]" }, { "math_id": 61, "text": "f:[\\alpha,\\beta]\\rightarrow\\mathbb{R}" }, { "math_id": 62, "text": "[\\alpha,\\beta]" }, { "math_id": 63, "text": "(f\\circ\\phi)|\\phi'|" }, { "math_id": 64, "text": "\\int_{a}^{b} (f\\circ\\phi)|\\phi'|=\\int_{\\alpha}^{\\beta} f" }, { "math_id": 65, "text": "kf" }, { "math_id": 66, "text": "\\int_{a}^{b}kf=k\\int_{a}^{b}f" }, { "math_id": 67, "text": "k\\in \\mathbb{R}" }, { "math_id": 68, "text": "g" }, { "math_id": 69, "text": "f+g" }, { "math_id": 70, "text": "\\int_{a}^{b} (f+g)=\\int_{a}^{b} f+\\int_{a}^{b} g" }, { "math_id": 71, "text": "f\\leq g " }, { "math_id": 72, "text": "\\left[ a,b\\right]" }, { "math_id": 73, "text": "\\Rightarrow \\int_{a}^{b}f\\leq \\int_{a}^{b}g" }, { "math_id": 74, "text": "f:[a,b]\\rightarrow\\mathbb{R}" }, { "math_id": 75, "text": "|f|" }, { "math_id": 76, "text": "|\\int_{a}^{b} f|\\leq\\int_{a}^{b} |f|" }, { "math_id": 77, "text": "F:[a,b]\\rightarrow\\mathbb{R}" }, { "math_id": 78, "text": "F'" }, { "math_id": 79, "text": "\\int_{a}^{b} F'=F(b)-F(a)" }, { "math_id": 80, "text": "F(x) = \\begin{cases} x^2\\cos(\\pi/x^2), & \\text{if }x\\neq0, \\\\ 0, & \\text{if }x=0. \\end{cases}" }, { "math_id": 81, "text": "F" }, { "math_id": 82, "text": "x\\neq0" }, { "math_id": 83, "text": "x=0" }, { "math_id": 84, "text": "\\lim_{x \\to 0}\\left ( \\frac{F(x)}{x} \\right )=\\lim_{x \\to 0}\\left ( x\\cos\\frac{\\pi}{x^2} \\right )=0" }, { "math_id": 85, "text": "F'(x) = \\begin{cases} 2x\\cos(\\pi/x^2)+ \\frac{2\\pi}{x}\\sin(\\pi/x^2) , & \\text{if }x\\neq0, \\\\ 0, & \\text{if }x=0. \\end{cases}" }, { "math_id": 86, "text": "h(x) = \\begin{cases} 2x\\cos(\\pi/x^2), & \\text{if }x\\neq0, \\\\ 0, & \\text{if }x=0, \\end{cases}" }, { "math_id": 87, "text": "F'(x)" }, { "math_id": 88, "text": "[0,1]," }, { "math_id": 89, "text": "g_0(x) = \\begin{cases} \\frac{1}{x} \\sin(\\pi/x^2), & \\text{if }x\\neq0, \\\\ 0, & \\text{if }x=0. 
\\end{cases}" }, { "math_id": 90, "text": "g(x)=|g_0(x) |= \\begin{cases} \\frac{1}{x} |\\sin(\\pi/x^2)|, & \\text{if }x\\neq0, \\\\ 0, & \\text{if }x=0, \\end{cases}" }, { "math_id": 91, "text": "[0,1]" }, { "math_id": 92, "text": "\\int_{0}^{1} g(x) dx" }, { "math_id": 93, "text": "\\int_{0}^{1} g(x) dx\\geq\\int_{1/\\sqrt{n}}^{1} \\frac{1}{x} |\\sin(\\pi/x^2)| dx," }, { "math_id": 94, "text": "n" }, { "math_id": 95, "text": "x=1/\\sqrt{t}" }, { "math_id": 96, "text": "\\int_{1/\\sqrt{n}}^{1} \\frac{1}{x} |\\sin(\\pi/x^2)| dx=\\frac{1}{2}\\int_{1}^{n} \\frac{1}{t} |\\sin(\\pi t)| dt\n=\\frac{1}{2}\\sum_{k=2}^n\\int_{k-1}^{k} \\frac{1}{t} |\\sin(\\pi t)| dt\\geq" }, { "math_id": 97, "text": "\\geq\\frac{1}{2}\\sum_{k=2}^n\\frac{1}{k} \\int_{k-1}^{k} |\\sin(\\pi t)| dt=\\frac{1}{\\pi} \\sum_{k=2}^n\\frac{1}{k}" }, { "math_id": 98, "text": "\\lim_{n \\to \\infty}\\sum_{k=2}^N\\frac{1}{k}=+\\infty" }, { "math_id": 99, "text": "g_0" }, { "math_id": 100, "text": "F'" }, { "math_id": 101, "text": "\\Leftrightarrow" } ]
https://en.wikipedia.org/wiki?curid=71667739
71667849
Dirac–Kähler equation
Geometric analogue of the Dirac equation In theoretical physics, the Dirac–Kähler equation, also known as the Ivanenko–Landau–Kähler equation, is the geometric analogue of the Dirac equation that can be defined on any pseudo-Riemannian manifold using the Laplace–de Rham operator. In four-dimensional flat spacetime, it is equivalent to four copies of the Dirac equation that transform into each other under Lorentz transformations, although this is no longer true in curved spacetime. The geometric structure gives the equation a natural discretization that is equivalent to the staggered fermion formalism in lattice field theory, making Dirac–Kähler fermions the formal continuum limit of staggered fermions. The equation was discovered by Dmitri Ivanenko and Lev Landau in 1928 and later rediscovered by Erich Kähler in 1962. Mathematical overview. In four dimensional Euclidean spacetime a generic fields of differential forms formula_0 is written as a linear combination of sixteen basis forms indexed by formula_1, which runs over the sixteen ordered combinations of indices formula_2 with formula_3. Each index runs from one to four. Here formula_4 are antisymmetric tensor fields while formula_5 are the corresponding differential form basis elements formula_6 Using the Hodge star operator formula_7, the exterior derivative formula_8 is related to the codifferential through formula_9. These form the Laplace–de Rham operator formula_10 which can be viewed as the square root of the Laplacian operator since formula_11. The Dirac–Kähler equation is motivated by noting that this is also the property of the Dirac operator, yielding Dirac–Kähler equation formula_12 This equation is closely related to the usual Dirac equation, a connection which emerges from the close relation between the exterior algebra of differential forms and the Clifford algebra of which Dirac spinors are irreducible representations. For the basis elements to satisfy the Clifford algebra formula_13, it is required to introduce a new Clifford product formula_14 acting on basis elements as formula_15 Using this product, the action of the Laplace–de Rham operator on differential form basis elements is written as formula_16 To acquire the Dirac equation, a change of basis must be performed, where the new basis can be packaged into a matrix formula_17 defined using the Dirac matrices formula_18 The matrix formula_19 is designed to satisfy formula_20, decomposing the Clifford algebra into four irreducible copies of the Dirac algebra. This is because in this basis the Clifford product only mixes the column elements indexed by formula_21. Writing the differential form in this basis formula_22 transforms the Dirac–Kähler equation into four sets of the Dirac equation indexed by formula_23 formula_24 The minimally coupled Dirac–Kähler equation is found by replacing the derivative with the covariant derivative formula_25 leading to formula_26 As before, this is also equivalent to four copies of the Dirac equation. In the abelian case formula_27, while in the non-abelian case there are additional color indices. The Dirac–Kähler fermion formula_28 also picks up color indices, with it formally corresponding to cross-sections of the Whitney product of the Atiyah–Kähler bundle of differential forms with the vector bundle of local color spaces. Discretization. There is a natural way in which to discretize the Dirac–Kähler equation using the correspondence between exterior algebra and simplicial complexes. 
In four dimensional space a lattice can be considered as a simplicial complex, whose simplexes are constructed using a basis of formula_29-dimensional hypercubes formula_30 with a base point formula_31 and an orientation determined by formula_1. Then a h-chain is a formal linear combination formula_32 The h-chains admit a boundary operator formula_33 defined as the (h-1)-simplex forming the boundary of the h-chain. A coboundary operator formula_34 can be similarly defined to yield a (h+1)-chain. The dual space of chains consists of formula_29-cochains formula_35, which are linear functions acting on the h-chains mapping them to real numbers. The boundary and coboundary operators admit similar structures in dual space called the dual boundary formula_36 and dual coboundary formula_37 defined to satisfy formula_38 Under the correspondence between the exterior algebra and simplicial complexes, differential forms are equivalent to cochains, while the exterior derivative and codifferential correspond to the dual boundary and dual coboundary, respectively. Therefore, the Dirac–Kähler equation is written on simplicial complexes as formula_39 The resulting discretized Dirac–Kähler fermion formula_40 is equivalent to the staggered fermion found in lattice field theory, which can be seen explicitly by an explicit change of basis. This equivalence shows that the continuum Dirac–Kähler fermion is the formal continuum limit of fermion staggered fermions. Relation to the Dirac equation. As described previously, the Dirac–Kähler equation in flat spacetime is equivalent to four copies of the Dirac equation, despite being a set of equations for antisymmetric tensor fields. The ability of integer spin tensor fields to describe half integer spinor fields is explained by the fact that Lorentz transformations do not commute with the internal Dirac–Kähler formula_41 symmetry, with the parameters of this symmetry being tensors rather than scalars. This means that the Lorentz transformations mix different spins together and the Dirac fermions are not strictly speaking half-integer spin representations of the Clifford algebra. They instead correspond to a coherent superposition of differential forms. In higher dimensions, particularly on formula_42 dimensional surfaces, the Dirac–Kähler equation is equivalent to formula_43 Dirac equations. In curved spacetime, the Dirac–Kähler equation no longer decomposes into four Dirac equations. Rather it is a modified Dirac equation acquired if the Dirac operator remained the square root of the Laplace operator, a property not shared by the Dirac equation in curved spacetime. This comes at the expense of Lorentz invariance, although these effects are suppressed by powers of the Planck mass. The equation also differs in that its zero modes on a compact manifold are always guaranteed to exist whenever some of the Betti numbers vanish, being given by the harmonic forms, unlike for the Dirac equation which never has zero modes on a manifold with positive curvature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
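The Clifford relation quoted in the mathematical overview, and the square-root-of-the-Laplacian property, can be checked numerically in an explicit gamma-matrix representation. The sketch below is an illustration using one standard Hermitian Euclidean representation of the gamma matrices (a choice made here for demonstration, not a construction taken from the sources above).

```python
# Numerical check: in a Euclidean 4x4 gamma-matrix representation the anticommutators
# satisfy {gamma_mu, gamma_nu} = 2 delta_{mu nu} I, the same Clifford relation the
# differential-form basis elements dx^mu are required to satisfy, and
# (gamma . p)^2 = p^2 I, the square-root-of-the-Laplacian property in momentum space.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Hermitian Euclidean gamma matrices: gamma_k = [[0, -i s_k], [i s_k, 0]], gamma_4 = [[0, I], [I, 0]]
gammas = [block(Z2, -1j * s, 1j * s, Z2) for s in (s1, s2, s3)]
gammas.append(block(Z2, I2, I2, Z2))

I4 = np.eye(4, dtype=complex)
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * (mu == nu) * I4)

p = np.array([0.3, -1.2, 0.7, 2.0])                 # arbitrary Euclidean momentum
slash_p = sum(p[mu] * gammas[mu] for mu in range(4))
assert np.allclose(slash_p @ slash_p, np.dot(p, p) * I4)
print("Clifford relations and (gamma.p)^2 = p^2 * I verified")
```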
[ { "math_id": 0, "text": "\n\\Phi = \\sum_{H} \\Phi_H(x)dx_H,\n" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "\\{\\mu_1,\\dots, \\mu_h\\}" }, { "math_id": 3, "text": "\\mu_1<\\cdots < \\mu_h" }, { "math_id": 4, "text": "\\Phi_H(x) = \\Phi_{\\mu_1\\dots \\mu_h}(x)" }, { "math_id": 5, "text": "dx_H" }, { "math_id": 6, "text": "\ndx_H = dx^{\\mu_1}\\wedge \\cdots \\wedge dx^{\\mu_h}.\n" }, { "math_id": 7, "text": "\\star" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "\\delta = -\\star d \\star" }, { "math_id": 10, "text": "d-\\delta" }, { "math_id": 11, "text": "(d-\\delta)^2=\\square" }, { "math_id": 12, "text": " (d-\\delta +m)\\Phi = 0. " }, { "math_id": 13, "text": "\\{dx^\\mu,dx^\\nu\\} = 2\\delta^{\\mu\\nu}" }, { "math_id": 14, "text": "\\vee" }, { "math_id": 15, "text": "\ndx_\\mu \\vee dx_\\nu = dx_\\mu \\wedge dx_\\nu + \\delta_{\\mu\\nu}.\n" }, { "math_id": 16, "text": "\n(d-\\delta)\\Phi(x) = dx^\\mu \\vee \\partial_\\mu \\Phi(x).\n" }, { "math_id": 17, "text": "Z_{ab}" }, { "math_id": 18, "text": "\nZ_{ab} = \\sum_H (-1)^{h(h-1)/2}(\\gamma_H)^T_{ab} dx_H.\n" }, { "math_id": 19, "text": "Z" }, { "math_id": 20, "text": "dx_\\mu \\vee Z = \\gamma_\\mu^T Z" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "\n\\Phi = \\sum_{ab}\\Psi(x)_{ab}Z_{ab},\n" }, { "math_id": 23, "text": "b" }, { "math_id": 24, "text": "\n(\\gamma^\\mu \\partial_\\mu +m)\\Psi(x)_b = 0.\n" }, { "math_id": 25, "text": "dx^\\mu \\vee \\partial_\\mu \\rightarrow dx^\\mu \\vee D_\\mu" }, { "math_id": 26, "text": "\n(d-\\delta+m)\\Phi = iA\\vee \\Phi.\n" }, { "math_id": 27, "text": "A = eA_\\mu dx^\\mu" }, { "math_id": 28, "text": "\\Phi" }, { "math_id": 29, "text": "h" }, { "math_id": 30, "text": "C^{(h)}_{x,H}" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": "\nC^{(h)} = \\sum_{x,H}\\alpha_{x,H}C^{(h)}_{x,H}.\n" }, { "math_id": 33, "text": "\\Delta C_{x,H}^{(h)}" }, { "math_id": 34, "text": "\\nabla C_{x,H}^{(h)}" }, { "math_id": 35, "text": "\\Phi^{(h)}(C^{(h)})" }, { "math_id": 36, "text": "\\hat \\Delta" }, { "math_id": 37, "text": "\\hat \\nabla" }, { "math_id": 38, "text": "\n(\\hat \\Delta \\Phi)(C) = \\Phi(\\Delta C), \\ \\ \\ \\ \\ \\ \\ (\\hat \\nabla \\Phi)(C) = \\Phi(\\nabla C).\n" }, { "math_id": 39, "text": "\n(\\hat \\Delta - \\hat \\nabla +m)\\Phi(C) = 0.\n" }, { "math_id": 40, "text": "\\Phi(C)" }, { "math_id": 41, "text": "\\text{SO}(2,4)" }, { "math_id": 42, "text": "2^{2^n}" }, { "math_id": 43, "text": "2^{2^{n-1}}" } ]
https://en.wikipedia.org/wiki?curid=71667849
71672474
WISEA 1810−1010
Substellar object in Serpens constellation WISEA J181006.18-101000.5 or WISEA 1810-1010 is a substellar object in the constellation Serpens about 8.9 parsec or 29 light-years distant from earth. It stands out because of its peculiar colors matching both L-type and T-type objects, likely due to its very low metallicity. Together with WISEA 0414−5854 it is the first discovered extreme subdwarf (esd) of spectral type T. Lodieu et al. describe WISEA 1810-1010 as a water vapor dwarf due to its atmosphere being dominated by hydrogen and water vapor. Discovery. WISEA 1810-1010 was first identified with the NEOWISE proper motion survey in 2016, but the proper motion could not be confirmed because of the high density of background stars in this field near the galactic plane. In 2020 the object was re-examined with the WiseView tool by the researchers of the Backyard Worlds project and was found to have significant proper motion. Additionally the object was independently discovered by the citizen scientist Arttu Sainio via the Backyard Worlds project. Observations. The object was initially observed by the Backyard Worlds researchers from US and Canada with Keck/NIRES and Palomar/TripleSpec. Later it was observed by another team from Spain, UK and Poland with NOT/ALFOSC, GTC/multiple instruments and Calar Alto/Omega2000. Analysis of the Keck and Palomar spectrum found that WISEA 1810-1010 has much deeper 1.15 μm (Y/J-band) absorption when compared to the extreme subdwarf of spectral type L7 2MASS 0532+8246, but the shape of the H-band is similar to this esdL7. The Y- and J-band spectrum does match better with spectra from subdwarfs with early spectral type T. Distance and physical properties. The distance was first poorly constrained at either 14 or 67 parsec, but using archived and new data the parallax was measured, which constrained the distance to . The object has a mass of , which makes this object a brown dwarf or a sub-brown dwarf, with a temperature of 700 to 900 K. This temperature suggests a spectral type of esdT7±0.5 based on field objects. It might be a later spectral type, because subdwarfs of spectral type L are generally warmer than field type objects. The tentative spectral type by Schneider et al. is based on a larger distance and a higher temperature, which does not reflect the most recent knowledge about this object. Atmosphere. The only chemicals detected in the atmosphere of WISEA 1810-1010 are hydrogen and strong absorption due to water vapor. This is surprising because T-dwarfs are defined by methane in their atmosphere and the hotter L-dwarfs are partly defined by carbon monoxide in their atmosphere. Both are missing in WISEA 1810-1010. The missing of carbon monoxide and methane can be explained by a carbon-deficient and metal-poor atmosphere. Alternatively the spectrum could be explained by an oxygen-enhanced atmosphere.
Model spectra suggest a very metal-poor atmosphere with formula_0. Spectral type. Schneider et al. noted first the similarities of the spectrum with both L-dwarfs and T-dwarfs. The tentative classification as esdT0.0±1.0 was given due to the low estimated temperature. The discovery by Lodieu et al. that methane was not present in the near-infrared spectrum raised the question if a T-dwarf classification was possible. Methane is a key diagnostic feature for T-dwarfs. Jun-Yan Zhang et al. noted that WISEA 1810 cannot be classified as an L-dwarf either because of some key differences, such as: JWST observations of the methane band and other molecules in the mid-infrared of WISEA 1810 or other proposed esdT might resolve the question if these objects can be classified as T-dwarfs. If these objects cannot be classified as T-dwarfs, they might be given a new spectral type. Jun-Yan Zhang et al. proposed the letters H or Z (therefore H-dwarf or Z-dwarf). New esdT (or H/Z-dwarfs) might be discovered in the future with ESA's Euclid and the Rubin Observatory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[Fe/H]=-1.5\\plusmn0.5" } ]
https://en.wikipedia.org/wiki?curid=71672474
71674201
Sun Hong Rhie
Korean-American astrophysicist Dr. Sun Hong Rhie (1 March 1955 – 14 October 2013) was a Korean–American astrophysicist best known for her foundational contributions to the theory of gravitational microlensing, a technique for the discovery of exoplanets. Early life. Rhie was born to Lee Sin Woo and Kim Soon Im on 1 March 1955, near Chiri Mountain in Gurae, South Korea. The family later moved to the city of Gwangju for her father's job as a school principal. Education. She achieved national notoriety for being the top-scoring girl in South Korea in that year's national pre-entrance exams. She attended Seoul National University, where she received her bachelor's degree in physics in 1978. Rhie moved to the United States for her graduate work and received a master's degree in physics from UCLA in 1982. She then transferred to Stanford University, where in 1988 she received her PhD with a thesis on heavy fourth-generation neutrinos, supervised by Fred Gilman. She followed this in 1990 with postdoc positions at the University of California, Berkeley, and Lawrence Livermore National Laboratory. She became a research professor in the department of physics at the University of Notre Dame, where she conducted her most prominent research. Contributions to gravitational microlensing. When the first gravitational microlensing event, MACHO-LMC-1, was discovered in 1993, Rhie noticed that the light curve had a feature that could be explained by a planetary companion. This was noted by astronomer Phil Yock: "It was at one of these early meetings, probably the 1995 one, that Sun said to me that the first microlensing event found by the MACHO group, the one in the LMC that was shown on the cover of Nature, could include a planet." Along with her husband David Bennett, Rhie developed the first planetary microlensing light-curve code, including finite source effects, that enabled the modeling of planetary microlensing light curves. This discovery, and the prompt detection of such events with the MACHO survey, led to the proposal to NASA of a mission concept that would become known as the Microlensing Planet Finder. Eventually its exoplanet measurement capabilities were combined with similar cosmology capabilities that were subsumed into the Nancy Grace Roman Space Telescope. In 1999, the technique was used to discover the first planet orbiting a binary star. Her most noteworthy work was her 2003 demonstration, through an elegant perturbation argument, that a lens system of "N≥2" point masses can have 5("N" − 1) images. The problem is equivalent to a pure analytical question in mathematics concerning the number of zeros of the rational harmonic function of degree "N:" formula_0. The result was considered so noteworthy in pure mathematics, it warranted a 2008 review article in the "Notices of the American Mathematical Society." Personal life. She was married to astrophysicist David Bennett and has a daughter. In her later years, Rhie was diagnosed with schizophrenia, limited her ability to continue her research; unable to tolerate the refereeing of her papers, much of her work is published only at arXiv.org.
[ { "math_id": 0, "text": "f(z)=p(z)/q(z)-\\bar{z}" } ]
https://en.wikipedia.org/wiki?curid=71674201
71675950
Orthogonality (mathematics)
In mathematics, orthogonality is the generalization of the geometric notion of "perpendicularity" to the linear algebra of bilinear forms. Two elements u and v of a vector space with bilinear form formula_0 are orthogonal when formula_1. Depending on the bilinear form, the vector space may contain non-zero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form an orthogonal basis. The concept has been used in the context of orthogonal functions, orthogonal polynomials, and combinatorics. Definitions. A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal. Such a set is called an orthogonal set. In certain cases, the word "normal" is used to mean "orthogonal", particularly in the geometric sense as in the normal to a surface. For example, the "y"-axis is normal to the curve formula_15 at the origin. However, "normal" may also refer to the magnitude of a vector. In particular, a set is called orthonormal (orthogonal plus normal) if it is an orthogonal set of unit vectors. As a result, use of the term "normal" to mean "orthogonal" is often avoided. The word "normal" also has a different meaning in probability and statistics. A vector space with a bilinear form generalizes the case of an inner product. When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality: the axes x′ and t′ of such a plane are hyperbolic-orthogonal for any given formula_16. Euclidean vector spaces. In Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90° (formula_17 radians), or one of the vectors is zero. Hence orthogonality of vectors is an extension of the concept of perpendicular vectors to spaces of any dimension. The orthogonal complement of a subspace is the space of all vectors that are orthogonal to every vector in the subspace. In a three-dimensional Euclidean vector space, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to it, and vice versa. Note that the geometric concept of two planes being perpendicular does not correspond to the orthogonal complement, since in three dimensions a pair of vectors, one from each of a pair of perpendicular planes, might meet at any angle. In four-dimensional Euclidean space, the orthogonal complement of a line is a hyperplane and vice versa, and that of a plane is a plane. Orthogonal functions. Using integral calculus, it is common to define the inner product of two functions formula_18 and formula_19 with respect to a nonnegative weight function formula_20 over an interval formula_21: formula_22 In simple cases, formula_23. We say that functions formula_18 and formula_19 are orthogonal if their inner product (equivalently, the value of this integral) is zero: formula_24 Orthogonality of two functions with respect to one inner product does not imply orthogonality with respect to another inner product. We write the norm with respect to this inner product as formula_25 The members of a set of functions formula_26 are "orthogonal" with respect to formula_20 on the interval formula_21 if formula_27 The members of such a set of functions are "orthonormal" with respect to formula_28 on the interval formula_21 if formula_29 where formula_30 is the Kronecker delta.
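The orthonormality condition above can be checked numerically. The following sketch is not part of the original article; it uses Python with NumPy/SciPy, and the unit weight and the interval [−π, π] are illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a=-np.pi, b=np.pi):
    """Inner product <f, g> with unit weight w(x) = 1 on [a, b]."""
    value, _ = quad(lambda x: f(x) * g(x), a, b)
    return value

# Normalized trigonometric family: sin(nx)/sqrt(pi) and cos(nx)/sqrt(pi) for n = 1..3.
family = [lambda x, n=n: np.sin(n * x) / np.sqrt(np.pi) for n in range(1, 4)]
family += [lambda x, n=n: np.cos(n * x) / np.sqrt(np.pi) for n in range(1, 4)]

gram = np.array([[inner(f, g) for g in family] for f in family])
print(np.round(gram, 6))   # numerically the 6x6 identity matrix, i.e. the Kronecker-delta condition
```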
In other words, every pair of distinct members is orthogonal, and each member has norm 1. See in particular the orthogonal polynomials. Examples. Orthogonal polynomials. Various polynomial sequences named for mathematicians of the past are sequences of orthogonal polynomials. In particular: the Hermite polynomials are orthogonal with respect to a Gaussian weight, the Legendre polynomials with respect to a constant weight on formula_50, the Laguerre polynomials with respect to an exponential weight on the half-line, and the Chebyshev polynomials of the first kind with respect to the weight formula_51 on formula_50. Combinatorics. In combinatorics, two formula_52 Latin squares are said to be orthogonal if their superimposition yields all possible formula_53 combinations of entries. Completely orthogonal. Two flat planes formula_5 and formula_0 of a Euclidean four-dimensional space are called "completely orthogonal" if and only if every line in formula_5 is orthogonal to every line in formula_0. In that case the planes formula_5 and formula_0 intersect at a single point formula_54, so that if a line in formula_5 intersects with a line in formula_0, they intersect at formula_54. formula_5 and formula_0 are perpendicular "and" Clifford parallel. In four-dimensional space we can construct 4 perpendicular axes and 6 perpendicular planes through a point. Without loss of generality, we may take these to be the axes and orthogonal central planes of a formula_55 Cartesian coordinate system. In 4 dimensions we have the same 3 orthogonal planes formula_56 that we have in 3 dimensions, and also 3 others formula_57. Each of the 6 orthogonal planes shares an axis with 4 of the others, and is "completely orthogonal" to just one of the others: the only one with which it does not share an axis. Thus there are 3 pairs of completely orthogonal planes: formula_58 and formula_59 intersect only at the origin; formula_60 and formula_61 intersect only at the origin; formula_62 and formula_63 intersect only at the origin. More generally, two flat subspaces formula_64 and formula_65 of dimensions formula_6 and formula_66 of a Euclidean space formula_67 of at least formula_68 dimensions are called "completely orthogonal" if every line in formula_64 is orthogonal to every line in formula_65. If formula_69 then formula_64 and formula_65 intersect at a single point formula_54. If formula_70 then formula_64 and formula_65 may or may not intersect. If formula_69 then a line in formula_64 and a line in formula_65 may or may not intersect; if they intersect then they intersect at formula_54. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "B(\\mathbf{u},\\mathbf{v})= 0" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "\\langle \\mathbf{u}, \\mathbf{v} \\rangle" }, { "math_id": 4, "text": "\\mathbf{u} \\perp \\mathbf{v}" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "M^*" }, { "math_id": 8, "text": "m'" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "\\langle m',m \\rangle = 0" }, { "math_id": 11, "text": "S' \\subseteq M^* " }, { "math_id": 12, "text": "S \\subseteq M " }, { "math_id": 13, "text": "S' " }, { "math_id": 14, "text": "S " }, { "math_id": 15, "text": "y = x^2 " }, { "math_id": 16, "text": "\\phi " }, { "math_id": 17, "text": "\\frac{\\pi}{2} " }, { "math_id": 18, "text": "f " }, { "math_id": 19, "text": "g " }, { "math_id": 20, "text": "w " }, { "math_id": 21, "text": "[a,b] " }, { "math_id": 22, "text": "\\langle f, g\\rangle_w = \\int_a^b f(x)g(x)w(x)\\,dx." }, { "math_id": 23, "text": "w(x) = 1 " }, { "math_id": 24, "text": "\\langle f, g\\rangle_w = 0." }, { "math_id": 25, "text": "\\|f\\|_w = \\sqrt{\\langle f, f\\rangle_w}" }, { "math_id": 26, "text": "{f_i \\mid i \\in \\mathbb{N}} " }, { "math_id": 27, "text": "\\langle f_i, f_j \\rangle_w=0 \\mid i \\ne j." }, { "math_id": 28, "text": "w" }, { "math_id": 29, "text": "\\langle f_i, f_j \\rangle_w=\\delta_{ij}," }, { "math_id": 30, "text": "\\delta_{ij}=\\left\\{\\begin{matrix}1, & & i=j \\\\ 0, & & i\\neq j\\end{matrix}\\right." }, { "math_id": 31, "text": "(1,3,2)^\\text{T} , (3,-1,0)^\\text{T} , (1,3,-5)^\\text{T} " }, { "math_id": 32, "text": "(1)(3) + (3)(-1) + (2)(0) = 0 \\ , " }, { "math_id": 33, "text": "\\ (3)(1) + (-1)(3) + (0)(-5) = 0 \\ , " }, { "math_id": 34, "text": "(1)(1) + (3)(3) + (2)(-5) = 0 " }, { "math_id": 35, "text": "(1,0,1,0, \\ldots)^\\text{T} " }, { "math_id": 36, "text": "(0,1,0,1,\\ldots)^\\text{T} " }, { "math_id": 37, "text": "\\mathbb{Z}_2^n " }, { "math_id": 38, "text": "\\mathbf{v}_k = \\sum_{i=0\\atop ai+k < n}^{n/a} \\mathbf{e}_i" }, { "math_id": 39, "text": "a" }, { "math_id": 40, "text": "1 \\le k \\le a-1" }, { "math_id": 41, "text": "\\begin{bmatrix}1 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\end{bmatrix}" }, { "math_id": 42, "text": "\\begin{bmatrix}0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\end{bmatrix}" }, { "math_id": 43, "text": "\\begin{bmatrix}0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\end{bmatrix}" }, { "math_id": 44, "text": "2t+3" }, { "math_id": 45, "text": "45t^2 + 9t - 17" }, { "math_id": 46, "text": "\\int_{-1}^1 \\left(2t+3\\right)\\left(45t^2+9t-17\\right)\\,dt = 0" }, { "math_id": 47, "text": "1, \\sin{(nx)}, \\cos{(nx)} \\mid n \\in \\mathbb{N}" }, { "math_id": 48, "text": "[0,2\\pi],[-\\pi,\\pi]" }, { "math_id": 49, "text": "2\\pi" }, { "math_id": 50, "text": "[-1,1]" }, { "math_id": 51, "text": "\\frac{1}{\\sqrt{1-x^2}}." }, { "math_id": 52, "text": "n \\times n" }, { "math_id": 53, "text": "n^2" }, { "math_id": 54, "text": "O" }, { "math_id": 55, "text": "(w,x,y,z)" }, { "math_id": 56, "text": "(xy,xz,yz)" }, { "math_id": 57, "text": "(wx,wy,wz)" }, { "math_id": 58, "text": "xy" }, { "math_id": 59, "text": "wz" }, { "math_id": 60, "text": "xz" }, { "math_id": 61, "text": "wy" }, { "math_id": 62, "text": "yz" }, { "math_id": 63, "text": "wx" }, { "math_id": 64, "text": "S_1" }, { "math_id": 65, "text": "S_2" }, { "math_id": 66, "text": "N" }, { "math_id": 67, "text": "S" }, { "math_id": 68, "text": "M+N" }, { "math_id": 69, "text": "\\dim(S) = M+N" }, { "math_id": 70, "text": "\\dim(S) > M+N" } ]
https://en.wikipedia.org/wiki?curid=71675950
716803
Quantale
Partially ordered algebraic structure In mathematics, quantales are certain partially ordered algebraic structures that generalize locales (point-free topologies) as well as various multiplicative lattices of ideals from ring theory and functional analysis (C*-algebras, von Neumann algebras). Quantales are sometimes referred to as "complete residuated semigroups". Overview. A quantale is a complete lattice formula_0 with an associative binary operation formula_1, called its multiplication, satisfying a distributive property such that formula_2 and formula_3 for all formula_4 and formula_5 (here formula_6 is any index set). The quantale is unital if it has an identity element formula_7 for its multiplication: formula_8 for all formula_9. In this case, the quantale is naturally a monoid with respect to its multiplication formula_10. A unital quantale may be defined equivalently as a monoid in the category Sup of complete join semi-lattices. A unital quantale is an idempotent semiring under join and multiplication. A unital quantale in which the identity is the top element of the underlying lattice is said to be strictly two-sided (or simply "integral"). A commutative quantale is a quantale whose multiplication is commutative. A frame, with its multiplication given by the meet operation, is a typical example of a strictly two-sided commutative quantale. Another simple example is provided by the unit interval together with its usual multiplication. An idempotent quantale is a quantale whose multiplication is idempotent. A frame is the same as an idempotent strictly two-sided quantale. An involutive quantale is a quantale with an involution formula_11 that preserves joins: formula_12 A quantale homomorphism is a map formula_13 that preserves joins and multiplication for all formula_14 and formula_5: formula_15 formula_16
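As a concrete illustration (not drawn from the article itself), the axioms can be checked by brute force on a tiny example: the three-element chain with join given by the maximum and multiplication given by the meet, which is a frame and hence a strictly two-sided, commutative, idempotent quantale. A minimal Python sketch:

```python
from itertools import chain, combinations

# A small example: the chain 0 < a < 1, encoded as 0, 1, 2, with join = max.
Q = [0, 1, 2]

def join(S):        # join of a (possibly empty) subset; the empty join is the bottom element
    return max(S) if S else 0

def mult(x, y):     # multiplication taken to be the meet, so this example is a frame
    return min(x, y)

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Distributivity of multiplication over arbitrary joins, on both sides.
distributive = all(
    mult(x, join(Y)) == join([mult(x, y) for y in Y]) and
    mult(join(Y), x) == join([mult(y, x) for y in Y])
    for x in Q for Y in map(list, subsets(Q))
)
print("distributes over arbitrary joins:", distributive)                                      # True
print("strictly two-sided (identity = top):", all(mult(x, 2) == x == mult(2, x) for x in Q))  # True
```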
[ { "math_id": 0, "text": "Q" }, { "math_id": 1, "text": "\\ast\\colon Q \\times Q \\to Q" }, { "math_id": 2, "text": "x*\\left(\\bigvee_{i\\in I}{y_i}\\right) = \\bigvee_{i\\in I}(x*y_i)" }, { "math_id": 3, "text": "\\left(\\bigvee_{i\\in I}{y_i}\\right)*{x}=\\bigvee_{i\\in I}(y_i*x)" }, { "math_id": 4, "text": "x, y_i \\in Q" }, { "math_id": 5, "text": "i \\in I" }, { "math_id": 6, "text": "I" }, { "math_id": 7, "text": "e" }, { "math_id": 8, "text": "x*e = x = e*x" }, { "math_id": 9, "text": "x \\in Q" }, { "math_id": 10, "text": "\\ast" }, { "math_id": 11, "text": "(xy)^\\circ = y^\\circ x^\\circ" }, { "math_id": 12, "text": "\\biggl(\\bigvee_{i\\in I}{x_i}\\biggr)^\\circ =\\bigvee_{i\\in I}(x_i^\\circ)." }, { "math_id": 13, "text": "f\\colon Q_1 \\to Q_2" }, { "math_id": 14, "text": "x, y, x_i \\in Q_1" }, { "math_id": 15, "text": "f(xy) = f(x) f(y)," }, { "math_id": 16, "text": "f\\left(\\bigvee_{i \\in I}{x_i}\\right) = \\bigvee_{i \\in I} f(x_i)." } ]
https://en.wikipedia.org/wiki?curid=716803
716969
Rational pricing
Assumption in financial economics Rational pricing is the assumption in financial economics that asset prices – and hence asset pricing models – will reflect the arbitrage-free price of the asset as any deviation from this price will be "arbitraged away". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments. Arbitrage mechanics. Arbitrage is the practice of taking advantage of a state of imbalance between two (or possibly more) markets. Where this mismatch can be exploited (i.e. after transaction costs, storage costs, transport costs, dividends, etc.) the arbitrageur can "lock in" a risk-free profit by purchasing and selling simultaneously in both markets. In general, arbitrage ensures that "the law of one price" will hold; arbitrage also equalises the prices of assets with identical cash flows, and sets the price of assets with known future cash flows. The law of one price. The same asset must trade at the same price on all markets ("the law of one price"). Where this is not true, the arbitrageur will buy the asset on the market where it is cheaper and simultaneously sell it on the market where it is dearer, locking in the price difference as the two prices converge. Assets with identical cash flows. Two assets with identical cash flows must trade at the same price. Where this is not true, the arbitrageur will sell the asset with the higher price, buy the asset with the lower price, and pocket the difference; since the future cash flows of the two positions offset exactly, the profit is risk free. An asset with a known future-price. An asset with a known price in the future must today trade at that price discounted at the risk free rate. Note that this condition can be viewed as an application of the above, where the two assets in question are the asset to be delivered and the risk free asset. (a) Where the discounted future price is "higher" than today's price, the arbitrageur buys the asset today, financing the purchase at the risk-free rate, and delivers it at the known future price. (b) Where the discounted future price is "lower" than today's price, the arbitrageur sells the asset today, invests the proceeds at the risk-free rate, and repurchases the asset at the known future price. Point (b) is only possible for those holding the asset but not needing it until the future date. There may be few such parties if short-term demand exceeds supply, leading to backwardation. "See also Fixed income arbitrage; Bond credit rating." Fixed income securities. Rational pricing is one approach used in pricing fixed rate bonds. Here, each cash flow on the bond can be matched by trading in either (a) some multiple of a zero-coupon bond, ZCB, corresponding to each coupon date, and of equivalent creditworthiness (if possible, from the same issuer as the bond being valued) with the corresponding maturity, or (b) a strip corresponding to each coupon, and a ZCB for the return of principal on maturity. Then, given that the cash flows can be replicated, the price of the bond must today equal the sum of each of its cash flows discounted at the same rate as each ZCB (per the logic above). Were this not the case, arbitrage would be possible and would bring the price back into line with the price based on ZCBs. The mechanics are as follows. Where the price of the bond is misaligned with the present value of the ZCBs, the arbitrageur could short-sell the relatively overpriced instrument, buy the relatively underpriced one, and unwind the positions as the prices converge or hold them to maturity, collecting the offsetting cash flows. The pricing formula is then formula_0, where each cash flow formula_1 is discounted at the rate formula_2 that matches the coupon date. Often, the formula is expressed as formula_3, using prices instead of rates, as prices are more readily available. Yield curve modeling. Per the logic outlined, rational pricing applies also to interest rate modeling more generally. Here, "yield curves" in entirety must be arbitrage-free with respect to the prices of individual instruments. Were this not the case, the ZCBs implied by the curve would result in quoted bond-prices, e.g., differing from those observed in the market, presenting an arbitrage opportunity.
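To illustrate the discounting identity above (and why a consistent set of zero-coupon rates matters), the following sketch, in Python, prices a bond once from rates and once from ZCB prices and confirms the two forms agree; the cash flows and zero rates are made-up numbers, not taken from the article:

```python
# Illustrative 3-year 5% annual-coupon bond, face value 100.
cashflows = [5.0, 5.0, 105.0]             # C_t for t = 1, 2, 3
zero_rates = [0.030, 0.035, 0.040]        # r_t: zero-coupon (spot) rate for each coupon date

# P_0 = sum_t C_t / (1 + r_t)^t : each cash flow discounted at the matching ZCB rate.
price_from_rates = sum(c / (1 + r) ** t
                       for t, (c, r) in enumerate(zip(cashflows, zero_rates), start=1))

# Equivalent form using ZCB prices P(t) = 1 / (1 + r_t)^t :  P_0 = sum_t C(t) * P(t).
zcb_prices = [1 / (1 + r) ** t for t, r in enumerate(zero_rates, start=1)]
price_from_zcbs = sum(c * p for c, p in zip(cashflows, zcb_prices))

print(round(price_from_rates, 4), round(price_from_zcbs, 4))   # identical by construction
```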
Investment banks and other market makers thus invest considerable resources in "curve stripping". See Bootstrapping (finance) and Multi-curve framework for the methods employed, and Model risk for further discussion. Pricing derivatives. A derivative is an instrument that allows for buying and selling of the same asset on two markets – the spot market and the derivatives market. Mathematical finance assumes that any imbalance between the two markets will be arbitraged away. Thus, in a correctly priced derivative contract, the derivative price, the strike price (or reference rate), and the spot price will be related such that arbitrage is not possible. See Fundamental theorem of arbitrage-free pricing. Futures. In a futures contract, for no arbitrage to be possible, the price paid on delivery (the forward price) must be the same as the cost (including interest) of buying and storing the asset. In other words, the rational forward price represents the expected future value of the underlying discounted at the risk free rate (the "asset with a known future-price", as above); see Spot–future parity. Thus, for a simple, non-dividend-paying asset, the value of the future/forward, formula_4, will be found by accumulating the present value formula_5 at time formula_6 to maturity formula_7 by the rate of risk-free return formula_8. formula_9 This relationship may be modified for storage costs, dividends, dividend yields, and convenience yields; see futures contract pricing. Any deviation from this equality allows for arbitrage: if the forward price is too high, the arbitrageur sells the forward, borrows to buy and store the asset today, and delivers it at maturity; if the forward price is too low, the arbitrageur buys the forward, sells (or sells short) the asset today, and invests the proceeds at the risk-free rate until delivery. Swaps. Rational pricing underpins the logic of swap valuation. Here, two counterparties "swap" obligations, effectively exchanging cash flow streams calculated against a notional principal amount, and the value of the swap is the present value (PV) of both sets of future cash flows "netted off" against each other. To be arbitrage free, the terms of a swap contract are such that, initially, the "Net" present value of these future cash flows is equal to zero. Once traded, swaps can (must) also be priced using rational pricing. The examples below are for interest rate swaps – and are representative of pure rational pricing as these exclude credit risk – although the principle applies to other swap types as well. Valuation at initiation. Consider a fixed-to-floating interest rate swap where Party A pays a fixed rate ("Swap rate"), and Party B pays a floating rate. Here, the "fixed rate" would be such that the present value of future fixed rate payments by Party A is equal to the present value of the "expected" future floating rate payments (i.e. the NPV is zero). Were this not the case, an arbitrageur, C, could assume the position with the lower present value of payments, borrow funds equal to that present value, meet the cash-flow obligations of the position with the borrowed funds while receiving the higher-valued stream of payments, use those receipts to repay the debt, and pocket the difference. Subsequent valuation. The floating leg of an interest rate swap can be "decomposed" into a series of forward rate agreements. Here, since the swap has identical payments to the FRAs, arbitrage-free pricing must apply as above – i.e. the value of this leg is equal to the value of the corresponding FRAs. Similarly, the "receive-fixed" leg of a swap can be valued by comparison to a bond with the same schedule of payments. (Relatedly, given that their underlyings have the same cash flows, bond options and swaptions are equatable.) Options. As above, where the value of an asset in the future is known (or expected), this value can be used to determine the asset's rational price today. In an option contract, however, exercise is dependent on the price of the underlying, and hence payment is uncertain.
Option pricing models therefore include logic that either "locks in" or "infers" this future value; both approaches deliver identical results. Methods that lock-in future cash flows assume "arbitrage free pricing", and those that infer expected value assume "risk neutral valuation". To do this, (in their simplest, though widely used form) both approaches assume a "binomial model" for the behavior of the underlying instrument, which allows for only two states – up or down. If S is the current price, then in the next period the price will either be "S up" or "S down". Here, the value of the share in the up-state is S × u, and in the down-state is S × d (where u and d are multipliers with d &lt; 1 &lt; u and assuming d &lt; 1+r &lt; u; see the binomial options model). Then, given these two states, the "arbitrage free" approach creates a position that has an identical value in either state – the cash flow in one period is therefore known, and arbitrage pricing is applicable. The risk neutral approach infers expected option value from the intrinsic values at the later two nodes. Although this logic appears far removed from the Black–Scholes formula and the lattice approach in the Binomial options model, it in fact underlies both models; see The Black–Scholes PDE. The assumption of binomial behaviour in the underlying price is defensible as the number of time steps between today (valuation) and exercise increases, and the period per time-step is correspondingly short. The Binomial options model allows for a high number of very short time-steps (if coded correctly), while Black–Scholes, in fact, models a continuous process. The examples below have shares as the underlying, but may be generalised to other instruments. The value of a put option can be derived as below, or may be found from the value of the call using put–call parity. Arbitrage free pricing. Here, the future payoff is "locked in" using either "delta hedging" or the "replicating portfolio" approach. As above, this payoff is then discounted, and the result is used in the valuation of the option today. Delta hedging. It is possible to create a position consisting of Δ shares and 1 call sold, such that the position's value will be identical in the "S up" and "S down" states, and hence known with certainty (see Delta hedging). This certain value corresponds to the forward price above ("An asset with a known future price"), and as above, for no arbitrage to be possible, the present value of the position must be its expected future value discounted at the risk free rate, r. The value of a call is then found by equating the two. The replicating portfolio. It is possible to create a position consisting of Δ shares and $B borrowed at the risk free rate, which will produce identical cash flows to one option on the underlying share. The position created is known as a "replicating portfolio" since its cash flows replicate those of the option. As shown above ("Assets with identical cash flows"), in the absence of arbitrage opportunities, since the cash flows produced are identical, the price of the option today must be the same as the value of the position today. Note that there is no discounting here – the interest rate appears only as part of the construction. This approach is therefore used in preference to others where it is not clear whether the risk free rate may be applied as the discount rate at each decision point, or whether, instead, a premium over risk free, differing by state, would be required. 
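The one-period logic just described can be written out directly. In the sketch below (Python; S, K, u, d and r are illustrative numbers, not taken from the article), the call value obtained from the replicating portfolio coincides with the discounted risk-neutral expectation:

```python
# One-period binomial call pricing: replication vs. risk-neutral expectation.
S, K = 100.0, 100.0       # current share price and strike (illustrative)
u, d, r = 1.2, 0.8, 0.05  # up/down multipliers and one-period risk-free rate, with d < 1+r < u

Cu = max(S * u - K, 0.0)  # call payoff in the up state
Cd = max(S * d - K, 0.0)  # call payoff in the down state

# Replicating portfolio: delta shares plus an amount B at the risk-free rate (negative B = borrowing),
# chosen so that delta*S*u + B*(1+r) = Cu and delta*S*d + B*(1+r) = Cd.
delta = (Cu - Cd) / (S * (u - d))
B = (Cu - delta * S * u) / (1 + r)
call_replication = delta * S + B

# Risk-neutral valuation: p = ((1+r) - d) / (u - d), then discount the expected payoff at r.
p = ((1 + r) - d) / (u - d)
call_risk_neutral = (p * Cu + (1 - p) * Cd) / (1 + r)

print(round(call_replication, 4), round(call_risk_neutral, 4))  # the two values coincide
```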
The best example of such a situation is real options analysis, where managements' actions actually change the risk characteristics of the project in question, and hence the required rate of return could differ in the up- and down-states. Here, in the above formulae, we then have: "Δ × "S up" - B × (1 + r up)..." and "Δ × "S down" - B × (1 + r down)...". (Another case where the modelling assumptions may depart from rational pricing is the valuation of employee stock options.) Risk neutral valuation. Here the value of the option is calculated using the risk neutrality assumption. Under this assumption, the "expected value" (as opposed to "locked in" value) is discounted. The expected value is calculated using the intrinsic values from the later two nodes: "Option up" and "Option down", with u and d as price multipliers as above. These are then weighted by their respective probabilities: "probability" p of an up move in the underlying, and "probability" (1-p) of a down move. The expected value is then discounted at r, the risk-free rate. The risk neutrality assumption. Note that above, the risk neutral formula does not refer to the expected or forecast return of the underlying, nor its volatility – p as solved, relates to the risk-neutral measure as opposed to the actual probability distribution of prices. Nevertheless, both arbitrage free pricing and risk neutral valuation deliver identical results. In fact, it can be shown that "delta hedging" and "risk-neutral valuation" use identical formulae expressed differently. Given this equivalence, it is valid to assume "risk neutrality" when pricing derivatives. A more formal relationship is described via the fundamental theorem of arbitrage-free pricing. Pricing shares. The arbitrage pricing theory (APT), a general theory of asset pricing, has become influential in the pricing of shares. APT holds that the expected return of a financial asset can be modelled as a linear function of various macro-economic factors, where sensitivity to changes in each factor is represented by a factor-specific beta coefficient: formula_14 where formula_15 is the risky asset's expected return, formula_16 is the risk free rate, formula_17 is the macroeconomic factor, formula_18 is the sensitivity of the asset to factor formula_19, and formula_20 is the risky asset's idiosyncratic random shock with mean zero. The model-derived rate of return will then be used to price the asset correctly – the asset price should equal the expected end-of-period price discounted at the rate implied by the model. If the price diverges, arbitrage should bring it back into line. Here, to perform the arbitrage, the investor "creates" a correctly priced asset (a "synthetic" asset), a "portfolio" with the same net exposure to each of the macroeconomic factors as the mispriced asset but a different expected return. See the arbitrage pricing theory article for detail on the construction of the portfolio. The arbitrageur is then in a position to make a risk-free profit as follows: where the actual price is too low relative to the model, the investor shorts the synthetic portfolio and buys the mispriced asset; where it is too high, the investor shorts the mispriced asset and buys the synthetic portfolio; in either case the factor exposures cancel, and the pricing difference is captured as the expected profit. Note that under "true arbitrage", the investor locks in a "guaranteed" payoff, whereas under APT arbitrage, the investor locks in a positive "expected" payoff. The APT thus assumes "arbitrage in expectations" – i.e. that arbitrage by investors will bring asset prices back into line with the returns expected by the model. The capital asset pricing model (CAPM) is an earlier, (more) influential theory on asset pricing.
Although based on different assumptions, the CAPM can, in some ways, be considered a "special case" of the APT; specifically, the CAPM's security market line represents a single-factor model of the asset price, where beta is exposure to changes in the "value of the market" as a whole. No-arbitrage pricing under systemic risk. Classical valuation methods like the Black–Scholes model or the Merton model cannot account for systemic counterparty risk which is present in systems with financial interconnectedness. More details regarding risk-neutral, arbitrage-free asset and derivative valuation can be found in the systemic risk article (see also valuation under systemic risk). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " P_0 = \\sum_{t=1}^T\\frac{C_t}{(1+r_t)^t}" }, { "math_id": 1, "text": "C_t\\," }, { "math_id": 2, "text": "r_t\\," }, { "math_id": 3, "text": " P_0 = \\sum_{t=1}^ T C(t) \\times P(t)" }, { "math_id": 4, "text": "F(t)\\," }, { "math_id": 5, "text": "S(t)\\," }, { "math_id": 6, "text": "t\\," }, { "math_id": 7, "text": "T\\," }, { "math_id": 8, "text": "r\\," }, { "math_id": 9, "text": "F(t) = S(t)\\times (1+r)^{(T-t)}\\," }, { "math_id": 10, "text": "max" }, { "math_id": 11, "text": "\\max" }, { "math_id": 12, "text": "\n\\begin{align}\nS &= \\frac{p \\times S_u + (1-p)\\times S_d}{1 + r} \\\\\n&= \\frac{p\\times u\\times S + (1-p)\\times d\\times S}{1 + r} \\\\\n\\Rightarrow p &= \\frac{(1+r) - d}{u-d}\\\\\n\\end{align}\n" }, { "math_id": 13, "text": "\n\\begin{align}\nC &= \\frac{p\\times C_u + (1-p) \\times C_d}{1+r} \\\\\n&= \\frac{p\\times \\max(S_u - k, 0) + (1-p) \\times\\max(S_d -k, 0)}{1+r} \\\\\n\\end{align}\n" }, { "math_id": 14, "text": "E\\left(r_j\\right) = r_f + b_{j1}F_1 + b_{j2}F_2 + ... + b_{jn}F_n + \\epsilon_j" }, { "math_id": 15, "text": "E(r_j)" }, { "math_id": 16, "text": "r_f" }, { "math_id": 17, "text": "F_k" }, { "math_id": 18, "text": "b_{jk}" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": "\\epsilon_j" } ]
https://en.wikipedia.org/wiki?curid=716969
71698021
Atmospheric correction for interferometric synthetic aperture radar technique
Atmospheric correction for the Interferometric Synthetic Aperture Radar (InSAR) technique is a set of different methods to remove artefact displacement from an interferogram caused by the effect of weather variables such as humidity, temperature, and pressure. An interferogram is generated by processing two synthetic-aperture radar images acquired before and after a geophysical event like an earthquake. Corrections for atmospheric variations are an important stage of InSAR data processing in many study areas to measure surface displacement, because relative humidity differences of 20% can cause inaccuracies of 10–14 cm in InSAR measurements due to varying delays in the radar signal. Overall, atmospheric correction methods can be divided into two categories: a) using Atmospheric Phase Screen (APS) statistical properties and b) using auxiliary (external) data such as GPS measurements, multi-spectral observations, local meteorological models, and global atmospheric models. On the other hand, atmospheric noise might have some value for atmospheric research in meteorology, because the atmospheric artefact signals are related to water vapour in the troposphere. The spatial resolution of an InSAR map for C-band satellites like Sentinel-1, without multi-looking, is around 20 meters. That means InSAR can measure Precipitable Water Vapor (PWV) in the atmosphere on a 20 m grid over hundreds of kilometres, which is much denser than other methods such as GNSS and space-borne passive sensors. However, the long revisit time of Sentinel-1 (temporal resolution, 12 days) is at the moment the main disadvantage of this technique from the meteorologists' side. Nevertheless, the capability of InSAR to measure PWV at high spatial resolution is interesting for meteorological research. What is InSAR? InSAR can provide accurate (millimeter-level) ground displacement fields for large areas over hundreds of kilometres. This technique uses two synthetic aperture radar images of the same area acquired at different times to measure surface motion between those times. Nevertheless, the result of interferometric synthetic-aperture radar in interferogram form includes the actual displacement together with other effects. Hence, these other effects must be calculated and removed from interferograms to achieve an accurate result for the real ground displacement. Some of these errors have more influence than others, such as orbital errors, topographic effects and atmospheric artefacts. There are several methods to reduce these noise sources to a reasonable and acceptable level, except for atmospheric effects, which cannot yet be removed with high accuracy. Thus, atmospheric artefacts remain a significant challenge for the InSAR community. Sometimes, especially in areas with high humidity, the effect of atmospheric noise is much larger than that of geophysical events and prevents the detection of surface displacement. Atmospheric noise. In radar satellites, microwave signals are reflected off a persistent scatterer in a target area, and their two-way travel time is measured by the satellite. Water vapor in the troposphere and free electrons in the ionosphere affect the propagation of microwave signals through the atmosphere, because the different refractive index in these layers affects the speed of propagation. Ionosphere. Ionospheric phase noise, which is more apparent at larger wavelengths such as P- or L-band radars, is a consequence of variations in free electron density at 100–1000 km altitude along the travel path.
Radar satellites with large wavelengths like ALOS PALSAR-1/2 (L-band, λ ≈ 24 cm) and NISAR are more vulnerable to ionospheric delays. However, this noise is much less pronounced for C-band (λ = 5.6 cm) and X-band (λ = 3.1 cm) SAR satellites such as Envisat, RADARSAT, ERS-1/2 and TerraSAR-X, and is usually negligible. Several methods exist to remove the ionospheric noise artefact, most notably the range split-spectrum technique, which exploits the dispersive nature of the ionospheric delay to separate it from the non-dispersive contributions. Troposphere. The troposphere is the lowest layer of the atmosphere, containing up to 90% of the atmosphere's water vapor. The lowest 1.4 km above the ground surface contains 50% of the water vapor mass, and, on both global and local scales, most of the world's weather takes place in this layer. The tropospheric path delays are therefore caused by differences in temperature, atmospheric pressure, and water vapor in the lower part of the atmosphere. The microwave atmospheric delay is the sum of a turbulent and a vertically stratified component. The turbulent component can affect flat and mountainous terrains alike and is usually highly correlated in space and uncorrelated in time. On the other hand, the vertically stratified component, which correlates strongly with topography, only influences areas with topography variations, such as hills or mountains. Although the tropospheric effect is regarded as noise in the InSAR community, it has great advantages in meteorology and enables scientists to predict and model water vapor in the troposphere. The propagation delay of the electromagnetic waves of radar satellites in the troposphere is explained in the following from both the InSAR and the meteorological perspectives. Troposphere effect on InSAR. From the InSAR side, signal delay caused by variations of tropospheric properties in space and time is the source of major challenges for the InSAR technique. In other words, tropospheric perturbation, caused by the differences in relative humidity, temperature, and pressure in the lower part of the troposphere between two acquisitions, can lead to additional noise in the form of fringes of up to 15–20 cm on interferograms. The atmospheric noise on InSAR results can span a wide range of wavelengths (short to long). Long-wavelength errors, usually seen as a ramp (a trend-like signal) in interferograms, are caused by smooth, large-scale changes in the weather system over the study area between the two SAR images. Since this noise is similar to the orbital ramp error and solid earth tides, detecting it in an interferogram is complicated. On the other hand, rapidly changing weather in a small area can cause artefact signals that correlate with topography, because water vapour varies differently at the surface and at altitude. Moreover, a rain cloud over a small region can generate a turbulence error that appears as apparent uplift or subsidence on the interferogram. Overall, the tropospheric error on an interferogram can thus be classified in space and time into a topography-correlated (stratified) component and a turbulent component that is correlated in space but uncorrelated in time. Troposphere effect on Meteorology. In terms of meteorology, the tropospheric delay can, interestingly, be regarded as a useful tool. Traditional methods for measuring water vapor include using a) meteorological stations, b) radiosondes, c) spectrometers, and d) GPS. In recent years, InSAR has enabled meteorological scientists to measure precipitable water vapour (PWV) in the atmosphere with high spatial resolution. Although the temporal sampling of atmospheric properties by InSAR (12 days for Sentinel-1) is coarser than that of GPS (15 min), the main advantage of InSAR PWV maps is their high spatial resolution of around 20 m.
Therefore, meteorological scientists combine traditional GPS tomography of the atmosphere with the atmospheric part of InSAR data to increase their models' temporal and spatial resolution. Tropospheric correction methods. The phase delay through the troposphere can be characterized by the refractivity (N). formula_0 The atmospheric parameters can be used to calculate N(z), the atmosphere's refractivity. This delay can be divided into two components: the dry delay, which is determined by the temperature and pressure of dry air, and the wet delay, which is determined mainly by the amount of water vapor in the troposphere. formula_1 where P indicates the total atmospheric pressure, T is the temperature and e is the partial pressure of water vapor. The coefficients k1 = 0.776 K/Pa, k2 = 0.716 K/Pa, and k3 = 3.75 × 10³ K²/Pa are called refractivity constants. Since pressure varies with height, the dry part is correlated with the topography and can reach a 2.3–2.4 m delay in the zenith direction (zenith hydrostatic delay, ZHD). Although the dry part accounts for most of the delay, this component is not a big challenge because it is relatively stable in both time and space. Therefore, the differential of this component between two SAR images is almost zero. On the contrary, although the wet component (zenith wet delay, ZWD) makes up just 10% of the total delay (at most about 30 cm), it is the major source of tropospheric noise in InSAR, because water content moves in the troposphere and causes stratified and turbulent effects in the interferogram. Therefore, the total delay is calculated by integrating the wet and dry components along the line of sight (LOS) direction between the surface elevation and the satellite: formula_2 In this equation, θ is the radar incidence angle, z is the surface elevation and z_ref a reference height, R_d and R_v are the specific gas constants of dry air and water vapour, g_m is the local gravitational acceleration, and P, T and e are as defined above. Since there is no accurate technique that could determine the total refractivity at the same spatial scale and temporal sampling as the interferogram itself, various methods have been developed to estimate the atmospheric contribution within interferograms. In the following, several methods to mitigate tropospheric artefacts in interferograms are introduced: Stacking has been one of the helpful strategies for reducing tropospheric artefacts in interferograms. The stacking method processes several to hundreds of filtered and unwrapped interferograms together, using algorithms like the Small Baseline Subset (SBAS) and persistent scatterer (PS) techniques, to measure surface displacement over a long period. These methods increase the signal-to-noise ratio without any external information and are very useful for mitigating turbulence errors. The other advantage of this method is that it is independent of external data and straightforward to implement. Limitation: for time series analysis approaches such as PS and SBAS, a large dataset has to be processed; the method mainly reduces the turbulent part of the tropospheric noise, and signals of interest may be incorrectly removed. Moreover, it cannot be applied to a single interferogram. GNSS uses the microwave portion of the electromagnetic spectrum, similar to the SAR technique; both measurements are affected by the atmospheric delay and both provide geodetic measurements with comparable accuracy. Therefore, interpolating the observations of a dense GNSS network to estimate the tropospheric delay can be an accurate strategy for correcting InSAR observations.
Limitation: interpolation of zenith delay measurements requires a dense GNSS network in the study area, but GNSS stations are still sparsely distributed or even absent in many regions, and this method only samples the troposphere in the vicinity of individual GNSS sites. Moreover, GNSS datasets are still not freely available to the public in many areas of the world. Spectrometer measurements are satellite-based observations that allow estimating the atmospheric water vapor from band ratios in the near-infrared spectrum. This method has a relatively high spatial resolution, suitable for calculating the turbulent component of the atmospheric disturbance. Limitation: it requires collocated sensors and cloud-free conditions and is only available in daytime. Time differences between the radar and the Precipitable Water Vapor (PWV) data are a further limitation. Moreover, the spectrometer can only estimate the wet part of the delay, not the atmosphere's dry component. Another approach relies on numerical weather models. The advantages of this method are its insensitivity to the presence of clouds and its global/regional/local coverage. Using the ERA models, the uncertainty in ZWD at high latitudes (> 30°) reaches 1–2 cm, while the uncertainty at low latitudes (< 30°) is about 2–6 cm. Most studies report varying degrees of success with this method. Nonetheless, the ERA-Interim and ERA5 global re-analysis models provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), together with the HRES ECMWF forecast model, are popular choices. Moreover, the GACOS project aims to refine information from HRES with GNSS zenith delays where available. Limitation: the low spatial resolution and the mismatch in time between the model and the SAR acquisition do not permit addressing the turbulent component, which takes place at smaller spatio-temporal scales. Moreover, the complex data processing can be regarded as a disadvantage of this method. Freely available global atmospheric models such as the ERA5 data are generated using Copernicus Climate Change Service information developed for numerical weather prediction. These models can provide accurate and comparatively high-resolution (though still coarse for InSAR studies) parameters for characterizing the state of the atmosphere. Therefore, these methods are promising and practical techniques for atmospheric noise mitigation in the InSAR technique. Based on the combination of different input datasets, ERA5 is a global atmospheric model calculated by the European Centre for Medium-Range Weather Forecasts (ECMWF). Several meteorological parameters are provided, such as pressure, temperature, and relative humidity, at hourly intervals with a horizontal resolution of 0.25 degrees and a vertical sampling of 37 pressure levels from sea level to about 50 km. This method estimates the atmospheric delay along the zenith (vertical) path, and then, using the incidence angle for each pixel, the zenith delay is converted to the LOS direction. In order to obtain the phase delays for the entire SAR scene from the sparse grid points, two interpolations are implemented, in the horizontal and vertical directions. The ERA5 data provide all the weather variables at 37 pressure levels (geopotentials). First, the delay is calculated along the signal path using cubic spline interpolation (vertical direction) and bilinear interpolation (horizontal direction). Then the total delay is projected to the LOS direction using the cosine of the incidence angle for each pixel.
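To illustrate the procedure just described, the sketch below (Python/NumPy; not from the article — the vertical profiles are made up and the integral is evaluated for a single vertical column rather than a full ERA5 grid) computes the hydrostatic and wet zenith delays from pressure, temperature and water-vapour profiles and maps them to the line of sight with the incidence angle:

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative vertical profiles on a coarse height grid (all values are made up).
z = np.linspace(0.0, 15000.0, 151)                 # height above the surface [m]
P = 101325.0 * np.exp(-z / 8000.0)                 # pressure [Pa]
T = 288.15 - 0.0065 * np.clip(z, 0.0, 11000.0)     # temperature [K]
e = 1200.0 * np.exp(-z / 2000.0)                   # water-vapour partial pressure [Pa]

k1, k2, k3 = 0.776, 0.716, 3.75e3                  # refractivity constants [K/Pa, K/Pa, K^2/Pa]

# N = k1*P/T (hydrostatic) + k2*e/T + k3*e/T^2 (wet); delay = 1e-6 * integral of N dz.
zenith_dry = 1e-6 * trapz(k1 * P / T, z)                    # roughly 2 m for this toy profile
zenith_wet = 1e-6 * trapz(k2 * e / T + k3 * e / T**2, z)    # on the order of 10 cm here

incidence = np.deg2rad(35.0)                       # radar incidence angle for one pixel
los_delay = (zenith_dry + zenith_wet) / np.cos(incidence)
print(round(zenith_dry, 3), round(zenith_wet, 3), round(los_delay, 3))
```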
Thus, after calculating formula_3 for the first image t1 and the second image t2, and subtracting their difference from the original interferogram (formula_4), the effect of atmospheric noise can be removed. formula_5 It is worth mentioning that the equation (see the previous section) is an integration along the zenith path over the grid points of pressure, temperature and relative humidity to measure atmospheric phase delays. These parameters are available in the ERA5 data (Table 1. Variables of ERA5 data). Correlation analysis between the interferometric phase and the topography can estimate the amount of the topography-correlated (stratified) component. In other words, the correlation between the range change in the LOS direction of InSAR and the topography is related to the path length traveled by the electromagnetic wave. Estimation of linear and power-law relationships between the interferometric phase and the topography can be used to remove the stratified part of the tropospheric delay. Hence, investigating the phase–elevation relationship with mathematical methods such as regression makes it possible to recognize the atmospheric error in interferograms. Limitation: the main limitation of this model is that other phase terms (e.g., turbulent atmospheric artefacts, deformation-related phase, decorrelation noise) can influence the estimate of the coefficient that relates phase with elevation. This method ignores the spatial variability of tropospheric signals and can easily be biased by orbital and topographic errors. Available packages for atmospheric correction. "TRAIN": consists of MATLAB and shell scripts and can be used with the output of most InSAR software. "PyAPS": this module is written in Python 3 and can estimate the stratified atmospheric noise for interferograms; it uses ERA5 data for the correction. "RAiDER": this Python package implements a ray-tracing method to measure and reduce tropospheric noise. "GACOS": this MATLAB package generates high-resolution atmospheric data and then separates stratified and turbulent signals to remove tropospheric noise, using the Iterative Tropospheric Decomposition (ITD) model. "ICAMS": this Python module uses ERA5 data and considers the stochastic spatial properties of the troposphere to remove tropospheric noise; it calculates the delays along the LOS direction. Summary. Characterizing atmospheric noise remains a challenge in the InSAR community, and addressing it helps researchers to take full advantage of the InSAR technique. All methods to mitigate this noise have limitations; sometimes, combining techniques gives a better result, and there is at present no single best method for reducing tropospheric delays. Global measurements of tectonic/volcanic deformation commonly benefit from global atmospheric corrections. Although ECMWF data, like ERA5, provide global coverage, their low spatial resolution is regarded as a drawback and can cause uncertainty. See also. InSAR technique; Atmosphere of Earth; Meteorology.
[ { "math_id": 0, "text": "\\Delta_{atm}=10^-6\\int N(z)dz" }, { "math_id": 1, "text": "N_{tropo}= \\Bigl(k_1\\frac{P}{T}\\Bigr)_{hydr} +\\Bigl(K'_2\\frac{e}{T} + k_3\\frac{e}{T^2}\\Bigr)_{wet}" }, { "math_id": 2, "text": "\\delta L_{LOS}^s(z)=\\frac{10^{-6}}{cos\\theta}\\{\\frac{k_1R_d}{g_m}\\bigl(P(z)-P(z_{ref}))+\\int_{z}^{z_{re}}\n\\Bigl(\\Bigl(k_2-\\frac{R_d}{R_v}k_1\\Bigr)\\frac{e}{T}+k_3\\frac{e}{T^2}\\Bigr)dz\\}" }, { "math_id": 3, "text": "\\phi_{trop}" }, { "math_id": 4, "text": "\\Delta\\phi_{tot}" }, { "math_id": 5, "text": "\\Delta\\phi=\\Delta\\phi_{tot}-\\bigl(\\phi_{trop}^{t_2}-\\phi_{trop}^{t_1}\\bigr)" } ]
https://en.wikipedia.org/wiki?curid=71698021
7170579
Thermophoresis
Molecular diffusion under the effect of a thermal gradient Thermophoresis (also thermomigration, thermodiffusion, the Soret effect, or the Ludwig–Soret effect) is a phenomenon observed in mixtures of mobile particles where the different particle types exhibit different responses to the force of a temperature gradient. This phenomenon tends to move light molecules to hot regions and heavy molecules to cold regions. The term "thermophoresis" most often applies to aerosol mixtures whose mean free path formula_0 is comparable to its characteristic length scale formula_1, but may also commonly refer to the phenomenon in all phases of matter. The term "Soret effect" normally applies to liquid mixtures, which behave according to different, less well-understood mechanisms than gaseous mixtures. Thermophoresis may not apply to thermomigration in solids, especially multi-phase alloys. Thermophoretic force. The phenomenon is observed at the scale of one millimeter or less. An example that may be observed by the naked eye with good lighting is when the hot rod of an electric heater is surrounded by tobacco smoke: the smoke goes away from the immediate vicinity of the hot rod. As the small particles of air nearest the hot rod are heated, they create a fast flow away from the rod, down the temperature gradient. While the kinetic energy of the particles is similar at the same temperature, lighter particles acquire higher velocity compared to the heavy ones. When they collide with the large, slower-moving particles of the tobacco smoke they push the latter away from the rod. The force that has pushed the smoke particles away from the rod is an example of a thermophoretic force, as the mean free path of air at ambient conditions is 68 nm and the characteristic length scales are between 100–1000 nm. Thermodiffusion is labeled "positive" when particles move from a hot to cold region and "negative" when the reverse is true. Typically the heavier/larger species in a mixture exhibit positive thermophoretic behavior while the lighter/smaller species exhibit negative behavior. In addition to the sizes of the various types of particles and the steepness of the temperature gradient, the heat conductivity and heat absorption of the particles play a role. Recently, Braun and coworkers have suggested that the charge and entropy of the hydration shell of molecules play a major role for the thermophoresis of biomolecules in aqueous solutions. The quantitative description is given by: formula_2 formula_3 particle concentration; formula_4 diffusion coefficient; and formula_5 the thermodiffusion coefficient. The quotient of both coefficients formula_6 is called Soret coefficient. The thermophoresis factor has been calculated from molecular interaction potentials derived from known molecular models. Applications. The thermophoretic force has a number of practical applications. The basis for applications is that, because different particle types move differently under the force of the temperature gradient, the particle types can be separated by that force after they have been mixed together, or prevented from mixing if they are already separated. Impurity ions may move from the cold side of a semiconductor wafer towards the hot side, since the higher temperature makes the transition structure required for atomic jumps more achievable. The diffusive flux may occur in either direction (either up or down the temperature gradient), dependent on the materials involved. 
Thermophoretic force has been used in commercial precipitators for applications similar to electrostatic precipitators. It is exploited in the manufacturing of optical fiber in vacuum deposition processes. It can be important as a transport mechanism in fouling. Thermophoresis has also been shown to have potential in facilitating drug discovery by allowing the detection of aptamer binding by comparison of the bound versus unbound motion of the target molecule. This approach has been termed microscale thermophoresis. Furthermore, thermophoresis has been demonstrated as a versatile technique for manipulating single biological macromolecules, such as genomic-length DNA, and HIV virus in micro- and nanochannels by means of light-induced local heating. Thermophoresis is one of the methods used to separate different polymer particles in field flow fractionation. History. Thermophoresis in gas mixtures was first observed and reported by John Tyndall in 1870 and further understood by John Strutt (Baron Rayleigh) in 1882. Thermophoresis in liquid mixtures was first observed and reported by Carl Ludwig in 1856 and further understood by Charles Soret in 1879. James Clerk Maxwell wrote in 1873 concerning mixtures of different types of molecules (and this could include small particulates larger than molecules): "This process of diffusion... goes on in gases and liquids and even in some solids... The dynamical theory also tells us what will happen if molecules of different masses are allowed to knock about together. The greater masses will go slower than the smaller ones, so that, on an average, every molecule, great or small, will have the same energy of motion. The proof of this dynamical theorem, in which I claim the priority, has recently been greatly developed and improved by Dr. Ludwig Boltzmann." It has been analyzed theoretically by Sydney Chapman. Thermophoresis at solids interfaces was numerically discovered by Schoen et al. in 2006 and was experimentally confirmed by Barreiro et al. Negative thermophoresis in fluids was first noticed in 1967 by Dwyer in a theoretical solution, and the name was coined by Sone. Negative thermophoresis at solids interfaces was first observed by Leng et al. in 2016. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
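Returning to the quantitative description above, the steady state of that drift–diffusion equation in one dimension has a closed form obtained by setting the net flux to zero. The sketch below (Python; the coefficients and temperatures are illustrative values, not from the article) evaluates this steady-state Soret profile:

```python
import numpy as np

# Steady-state Soret profile in one dimension: setting the flux in
#   d(chi)/dt = d/dx ( D d(chi)/dx + D_T chi (1 - chi) dT/dx )
# to zero gives chi/(1 - chi) = [chi0/(1 - chi0)] * exp(-S_T * (T - T0)), with S_T = D_T / D.
D, D_T = 4e-10, 2e-12            # illustrative diffusion and thermodiffusion coefficients
S_T = D_T / D                    # Soret coefficient [1/K]

T0, chi0 = 300.0, 0.5            # reference temperature and concentration at the cold end
T = np.linspace(300.0, 320.0, 5)
odds = chi0 / (1 - chi0) * np.exp(-S_T * (T - T0))
chi = odds / (1 + odds)
print(np.round(chi, 4))          # concentration falls toward the hot side: positive thermodiffusion
```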
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "\\frac{\\partial \\chi}{\\partial t}=\\nabla\\cdot( D\\,\\nabla \\chi+ D_{T}\\, \\chi(1-\\chi)\\,\\nabla T)" }, { "math_id": 3, "text": "\\chi" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "D_T" }, { "math_id": 6, "text": "S_T=\\frac{D_T}{D}" } ]
https://en.wikipedia.org/wiki?curid=7170579
71711287
Von Neumann's elephant
Problem in recreational mathematics Von Neumann's elephant is a problem in recreational mathematics, consisting of constructing a planar curve in the shape of an elephant from only four fixed parameters. It originated from a discussion between physicists John von Neumann and Enrico Fermi. History. In a 2004 article in the journal "Nature", Freeman Dyson recounts a meeting with Fermi in 1953. When Fermi asked Dyson how many arbitrary parameters he had used in his calculations, and Dyson answered four, Fermi recalled the saying of his friend von Neumann: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." By this he meant that Dyson's calculations relied on too many input parameters, presupposing an overfitting phenomenon. Solving the problem (defining four complex numbers to draw an elephantine shape) subsequently became an active research subject of recreational mathematics. A 1975 attempt through least-squares function approximation required dozens of terms. An approximation using four parameters was found by three physicists in 2010. Construction. The construction is based on complex Fourier analysis. The curve found in 2010 is parameterized by: formula_0 The four fixed parameters used are complex, with affixes "z"1 = 50 - 30i, "z"2 = 18 + 8i, "z"3 = 12 - 10i, "z"4 = -14 - 60i. The affix point "z"5 = 40 + 20i is added to make the eye of the elephant, and this value serves as a parameter for the movement of the "trunk". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
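For illustration (not part of the original article), the parameterization above can be plotted directly; a minimal Python/Matplotlib sketch, in which the axis orientation and the marker for the eye are presentational assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 2.0 * np.pi, 1000)

# Body of the elephant, from the parameterization given above.
x = -60 * np.cos(t) + 30 * np.sin(t) - 8 * np.sin(2 * t) + 10 * np.sin(3 * t)
y = 50 * np.sin(t) + 18 * np.sin(2 * t) - 12 * np.cos(3 * t) + 14 * np.cos(5 * t)

plt.plot(y, -x)              # axis swap/flip chosen only to orient the figure; an assumption
plt.plot([20], [-40], "ko")  # the eye, placed from z5 = 40 + 20i (placement convention assumed)
plt.gca().set_aspect("equal")
plt.axis("off")
plt.show()
```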
[ { "math_id": 0, "text": "\\left\\lbrace\n\\begin{array}{lcccccc}\nx(t) & = &-60 \\cos(t) & + 30 \\sin(t) & -8\\sin(2t) & +10\\sin(3t)\\\\ \ny(t) & = &50 \\sin(t) & + 18 \\sin(2t) & -12\\cos(3t) & +14\\cos(5t)\n\\end{array}\n\\right.\n" } ]
https://en.wikipedia.org/wiki?curid=71711287
7171253
Single-ended primary-inductor converter
Electrical device The single-ended primary-inductor converter (SEPIC) is a type of DC/DC converter that allows the electrical potential (voltage) at its output to be greater than, less than, or equal to that at its input. The output of the SEPIC is controlled by the duty cycle of the electronic switch (S1). A SEPIC is essentially a boost converter followed by an inverted buck-boost converter. While similar to a traditional buck-boost converter, it has a few advantages. It has a non-inverted output (the output has the same electrical polarity as the input). Its use of a series capacitor to couple energy from the input to the output allows the circuit to respond more gracefully to a short-circuit output. And it is capable of true shutdown: when the switch S1 is turned off enough, the output ("V"0) drops to 0 V, following a fairly hefty transient dump of charge. SEPICs are useful in applications in which a battery voltage can be above and below that of the regulator's intended output. For example, a single lithium ion battery typically discharges from 4.2 volts to 3 volts; if other components require 3.3 volts, then the SEPIC would be effective. Circuit operation. The schematic diagram for a basic SEPIC is shown in Figure 1. As with other switched mode power supplies (specifically DC-to-DC converters), the SEPIC exchanges energy between the capacitors and inductors in order to convert from one voltage to another. The amount of energy exchanged is controlled by switch S1, which is typically a transistor such as a MOSFET. MOSFETs offer much higher input impedance and lower voltage drop than bipolar junction transistors (BJTs), and do not require biasing resistors as MOSFET switching is controlled by differences in voltage rather than a current, as with BJTs. Continuous mode. A SEPIC is said to be in continuous-conduction mode ("continuous mode") if the currents through inductors L1 and L2 never fall to zero during an operating cycle. During a SEPIC's steady-state operation, the average voltage across capacitor C1 ("V"C1) is equal to the input voltage ("V"in). Because capacitor C1 blocks direct current (DC), the average current through it ("I"C1) is zero, making inductor L2 the only source of DC load current. Therefore, the average current through inductor L2 ("I"L2) is the same as the average load current and hence independent of the input voltage. Looking at average voltages, the following can be written: formula_0 Because the average voltage of "V"C1 equals "V"IN, therefore "V"L1 = −"V"L2. For this reason, the two inductors can be wound on the same core, which begins to resemble a flyback converter, the most basic of the transformer-isolated switched-mode power supply topologies. Since the voltages are the same in magnitude, their effects on the mutual inductance will be zero, assuming the polarity of the windings is correct. Also, since the voltages are the same in magnitude, the ripple currents from the two inductors will be equal in magnitude. The average currents can be summed as follows (average capacitor currents must be zero): formula_1 When switch S1 is turned on, current "I"L1 increases and the current "I"L2 goes more negative. (Mathematically, it decreases due to arrow direction.) The energy to increase the current "I"L1 comes from the input source. Since S1 is a short while closed, and the instantaneous voltage "V"L1 is approximately "V"IN, the voltage "V"L2 is approximately −"V"C1. 
Therefore, D1 is opened and the capacitor C1 supplies the energy to increase the magnitude of the current in "I"L2 and thus increase the energy stored in L2. IL is supplied by C2. The easiest way to visualize this is to consider the bias voltages of the circuit in a DC state, then close S1. When switch S1 is turned off, the current "I"C1 becomes the same as the current "I"L1, since inductors do not allow instantaneous changes in current. The current "I"L2 will continue in the negative direction, in fact it never reverses direction. It can be seen from the diagram that a negative "I"L2 will add to the current "I"L1 to increase the current delivered to the load. Using Kirchhoff's Current Law, it can be shown that "I"D1 = "I"C1 - "I"L2. It can then be concluded, that while S1 is off, power is delivered to the load from both L2 and L1. C1, however is being charged by L1 during this off cycle (as C2 by L1 and L2), and will in turn recharge L2 during the following on cycle. Because the potential (voltage) across capacitor C1 may reverse direction every cycle, a non-polarized capacitor should be used. However, a polarized tantalum or electrolytic capacitor may be used in some cases, because the potential (voltage) across capacitor C1 will not change unless the switch is closed long enough for a half cycle of resonance with inductor L2, and by this time the current in inductor L1 could be quite large. The capacitor CIN has no effect on the ideal circuit's analysis, but is required in actual regulator circuits to reduce the effects of parasitic inductance and internal resistance of the power supply. The boost/buck capabilities of the SEPIC are possible because of capacitor C1 and inductor L2. Inductor L1 and switch S1 create a standard boost converter, which generates a voltage ("V"S1) that is higher than "V"IN, whose magnitude is determined by the duty cycle of the switch S1. Since the average voltage across C1 is "V"IN, the output voltage ("V"O) is "V"S1 - "V"IN. If "V"S1 is less than double "V"IN, then the output voltage will be less than the input voltage. If "V"S1 is greater than double "V"IN, then the output voltage will be greater than the input voltage. Discontinuous mode. A SEPIC is said to be in discontinuous-conduction mode or discontinuous mode if the current through either of inductors L1 or L2 is allowed to fall to zero during an operating cycle. Reliability and efficiency. The voltage drop and switching time of diode D1 is critical to a SEPIC's reliability and efficiency. The diode's switching time needs to be extremely fast in order to not generate high voltage spikes across the inductors, which could cause damage to components. Fast conventional diodes or Schottky diodes may be used. The resistances in the inductors and the capacitors can also have large effects on the converter efficiency and output ripple. Inductors with lower series resistance allow less energy to be dissipated as heat, resulting in greater efficiency (a larger portion of the input power being transferred to the load). Capacitors with low equivalent series resistance (ESR) should also be used for C1 and C2 to minimize ripple and prevent heat build-up, especially in C1 where the current is changing direction frequently. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
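As an illustration of the continuous-mode relationships above: for an ideal (lossless) SEPIC in continuous conduction, the boost stage gives "V"S1 = "V"IN/(1−D), where D is the duty cycle of S1, so "V"O = "V"S1 − "V"IN = "V"IN·D/(1−D). The short Python sketch below applies this ideal relation to the lithium-ion example mentioned earlier (a battery falling from 4.2 V to 3.0 V regulated to 3.3 V). The function names are illustrative only, and a real design must also account for losses, ripple and possible discontinuous-mode operation.

```python
# Ideal (lossless, continuous-conduction-mode) SEPIC conversion ratio.
# Illustrative sketch only; component losses and discontinuous mode change the result.

def sepic_vout(v_in: float, duty: float) -> float:
    """Ideal SEPIC output voltage for input voltage v_in and duty cycle D."""
    return v_in * duty / (1.0 - duty)

def sepic_duty(v_in: float, v_out: float) -> float:
    """Duty cycle that (ideally) regulates v_out from v_in."""
    return v_out / (v_out + v_in)

if __name__ == "__main__":
    # Lithium-ion example from the text: battery from 4.2 V down to 3.0 V, 3.3 V output.
    for v_batt in (4.2, 3.7, 3.3, 3.0):
        d = sepic_duty(v_batt, 3.3)
        print(f"Vin = {v_batt:.1f} V -> D = {d:.3f}, check Vout = {sepic_vout(v_batt, d):.2f} V")
```

Note how the duty cycle crosses 0.5 as the battery voltage falls below the 3.3 V output, which is exactly the buck-to-boost transition described above.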
[ { "math_id": 0, "text": " V_{IN} = V_{L1} + V_{C1} + V_{L2}" }, { "math_id": 1, "text": "I_{D1} = I_{L1} - I_{L2} " } ]
https://en.wikipedia.org/wiki?curid=7171253
7172270
Fisher's method
Statistical method In statistics, Fisher's method, also known as Fisher's combined probability test, is a technique for data fusion or "meta-analysis" (analysis of analyses). It was developed by and named for Ronald Fisher. In its basic form, it is used to combine the results from several independent tests bearing upon the same overall hypothesis ("H"0). Application to independent test statistics. Fisher's method combines extreme value probabilities from each test, commonly known as ""p"-values", into one test statistic ("X"2) using the formula formula_0 where "p""i" is the "p"-value for the "i"th hypothesis test. When the "p"-values tend to be small, the test statistic "X"2 will be large, which suggests that the null hypotheses are not true for every test. When all the null hypotheses are true, and the "p""i" (or their corresponding test statistics) are independent, "X"2 has a chi-squared distribution with 2"k" degrees of freedom, where "k" is the number of tests being combined. This fact can be used to determine the "p"-value for "X"2. The distribution of "X"2 is a chi-squared distribution for the following reason: under the null hypothesis for test "i", the "p"-value "p""i" follows a uniform distribution on the interval [0,1]. The negative logarithm of a uniformly distributed value follows an exponential distribution. Scaling a value that follows an exponential distribution by a factor of two yields a quantity that follows a chi-squared distribution with two degrees of freedom. Finally, the sum of "k" independent chi-squared values, each with two degrees of freedom, follows a chi-squared distribution with 2"k" degrees of freedom. Limitations of independence assumption. Dependence among statistical tests is generally positive, which means that the "p"-value of "X"2 is too small (anti-conservative) if the dependency is not taken into account. Thus, if Fisher's method for independent tests is applied in a dependent setting, and the "p"-value is not small enough to reject the null hypothesis, then that conclusion will continue to hold even if the dependence is not properly accounted for. However, if positive dependence is not accounted for, and the meta-analysis "p"-value is found to be small, the evidence against the null hypothesis is generally overstated. The mean false discovery rate, formula_1, formula_2 reduced for "k" independent or positively correlated tests, may suffice to control alpha for useful comparison to an over-small "p"-value from Fisher's "X"2. Extension to dependent test statistics. In cases where the tests are not independent, the null distribution of "X"2 is more complicated. A common strategy is to approximate the null distribution with a scaled "χ"2-distribution random variable. Different approaches may be used depending on whether or not the covariance between the different "p"-values is known. Brown's method can be used to combine dependent "p"-values whose underlying test statistics have a multivariate normal distribution with a known covariance matrix. A generalization of Brown's method allows one to combine "p"-values when the covariance matrix is known only up to a scalar multiplicative factor. The harmonic mean "p"-value offers an alternative to Fisher's method for combining "p"-values when the dependency structure is unknown but the tests cannot be assumed to be independent. Interpretation. Fisher's method is typically applied to a collection of independent test statistics, usually from separate studies having the same null hypothesis. 
The meta-analysis null hypothesis is that all of the separate null hypotheses are true. The meta-analysis alternative hypothesis is that at least one of the separate "alternative" hypotheses is true. In some settings, it makes sense to consider the possibility of "heterogeneity," in which the null hypothesis holds in some studies but not in others, or where different alternative hypotheses may hold in different studies. A common reason for the latter form of heterogeneity is that effect sizes may differ among populations. For example, consider a collection of medical studies looking at the risk of a high glucose diet for developing type II diabetes. Due to genetic or environmental factors, the true risk associated with a given level of glucose consumption may be greater in some human populations than in others. In other settings, the alternative hypothesis is either universally false, or universally true – there is no possibility of it holding in some settings but not in others. For example, consider several experiments designed to test a particular physical law. Any discrepancies among the results from separate studies or experiments must be due to chance, possibly driven by differences in power. In the case of a meta-analysis using two-sided tests, it is possible to reject the meta-analysis null hypothesis even when the individual studies show strong effects in differing directions. In this case, we are rejecting the hypothesis that the null hypothesis is true in every study, but this does not imply that there is a uniform alternative hypothesis that holds across all studies. Thus, two-sided meta-analysis is particularly sensitive to heterogeneity in the alternative hypotheses. One-sided meta-analysis can detect heterogeneity in the effect magnitudes, but focuses on a single, pre-specified effect direction. Relation to Stouffer's Z-score method. A closely related approach to Fisher's method is Stouffer's Z, based on Z-scores rather than "p"-values, allowing incorporation of study weights. It is named for the sociologist Samuel A. Stouffer. If we let "Z""i" = "Φ"−1(1−"p""i"), where "Φ" is the standard normal cumulative distribution function and "Φ"−1 its inverse, then formula_3 is a Z-score for the overall meta-analysis. This Z-score is appropriate for one-sided right-tailed "p"-values; minor modifications can be made if two-sided or left-tailed "p"-values are being analyzed. Specifically, if two-sided "p"-values are being analyzed, the halved two-sided "p"-value ("p""i"/2) is used, or 1−"p""i" if left-tailed "p"-values are used. Since Fisher's method is based on the average of −log("p""i") values, and the Z-score method is based on the average of the "Z""i" values, the relationship between these two approaches follows from the relationship between "z" and −log("p") = −log(1−"Φ"("z")). For the normal distribution, these two values are not perfectly linearly related, but they follow a highly linear relationship over the range of Z-values most often observed, from 1 to 5. As a result, the power of the Z-score method is nearly identical to the power of Fisher's method. One advantage of the Z-score approach is that it is straightforward to introduce weights. If the "i""th" Z-score is weighted by "w""i", then the meta-analysis Z-score is formula_4 which follows a standard normal distribution under the null hypothesis. While weighted versions of Fisher's statistic can be derived, the null distribution becomes a weighted sum of independent chi-squared statistics, which is less convenient to work with.
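As a worked example of both combination rules, the Python sketch below evaluates Fisher's "X"2 and Stouffer's Z (weighted and unweighted) on a small set of made-up one-sided "p"-values, using SciPy's chi-squared and normal distributions. It is only an illustration, not part of the original method descriptions; the "p"-values and study weights are hypothetical. SciPy's scipy.stats.combine_pvalues provides the same two methods as a ready-made routine.

```python
# Sketch: combining k independent one-sided p-values with Fisher's method and Stouffer's Z.
# The p-values and study weights below are made up for illustration.
import numpy as np
from scipy import stats

p = np.array([0.04, 0.20, 0.01, 0.65])    # hypothetical p-values from k independent tests
k = len(p)

# Fisher: X^2 = -2 * sum(log p_i) ~ chi-squared with 2k degrees of freedom under H0.
x2 = -2.0 * np.sum(np.log(p))
p_fisher = stats.chi2.sf(x2, df=2 * k)

# Stouffer: Z_i = inverse-Phi(1 - p_i); Z = sum(Z_i) / sqrt(k) ~ N(0, 1) under H0.
z = stats.norm.ppf(1.0 - p)
z_comb = z.sum() / np.sqrt(k)
p_stouffer = stats.norm.sf(z_comb)

# Weighted Stouffer: Z = sum(w_i * Z_i) / sqrt(sum(w_i^2)), e.g. weighting by sample size.
w = np.array([10.0, 50.0, 20.0, 5.0])     # hypothetical study weights
z_weighted = (w * z).sum() / np.sqrt((w ** 2).sum())

print(f"Fisher:            X2 = {x2:.3f}, combined p = {p_fisher:.4f}")
print(f"Stouffer:          Z  = {z_comb:.3f}, combined p = {p_stouffer:.4f}")
print(f"Weighted Stouffer: Z  = {z_weighted:.3f}")
```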
[ { "math_id": 0, "text": "X^2_{2k} = -2\\sum_{i=1}^k \\ln p_i," }, { "math_id": 1, "text": "\\alpha(k+1)/(2k)" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\nZ \\sim \\frac{\\sum_{i=1}^k Z_i}{\\sqrt{k}},\n" }, { "math_id": 4, "text": "\nZ \\sim \\frac{\\sum_{i=1}^k w_iZ_i}{\\sqrt{\\sum_{i=1}^k w_i^2}},\n" } ]
https://en.wikipedia.org/wiki?curid=7172270
71723429
List of quantum logic gates
In gate-based quantum computing, various sets of quantum logic gates are commonly used to express quantum operations. The following tables list several unitary quantum logic gates, together with their common name, how they are represented, and some of their properties. Controlled or conjugate transpose (adjoint) versions of some of these gates may not be listed. Identity gate and global phase. The identity gate is the identity operation formula_0, most of the times this gate is not indicated in circuit diagrams, but it is useful when describing mathematical results. It has been described as being a "wait cycle", and a NOP. The global phase gate introduces a global phase formula_1 to the whole qubit quantum state. A quantum state is uniquely defined up to a phase. Because of the Born rule, a phase factor has no effect on a measurement outcome: formula_2 for any formula_3. Because formula_4 when the global phase gate is applied to a single qubit in a quantum register, the entire register's global phase is changed. Also, formula_5 These gates can be extended to any number of qubits or qudits. Clifford qubit gates. This table includes commonly used Clifford gates for qubits. Other Clifford gates, including higher dimensional ones are not included here but by definition can be generated using formula_7 and formula_6. Note that if a Clifford gate "A" is not in the Pauli group, formula_8 or controlled-"A" are not in the Clifford gates. The Clifford set is not a universal quantum gate set. Non-Clifford qubit gates. Relative phase gates. The phase shift is a family of single-qubit gates that map the basis states formula_9 and formula_10. The probability of measuring a formula_11 or formula_12 is unchanged after applying this gate, however it modifies the phase of the quantum state. This is equivalent to tracing a horizontal circle (a line of latitude), or a rotation along the z-axis on the Bloch sphere by formula_3 radians. A common example is the "T" gate where formula_13 (historically known as the formula_14 gate), the phase gate. Note that some Clifford gates are special cases of the phase shift gate: formula_15 The argument to the phase shift gate is in U(1), and the gate performs a phase rotation in U(1) along the specified basis state (e.g. formula_16 rotates the phase about formula_12). Extending formula_16 to a rotation about a generic phase of both basis states of a 2-level quantum system (a qubit) can be done with a series circuit: formula_17. When formula_18 this gate is the rotation operator formula_19 gate and if formula_20 it is a global phase. The "T" gate's historic name of formula_14 gate comes from the identity formula_21, where formula_22. Arbitrary single-qubit phase shift gates formula_16 are natively available for transmon quantum processors through timing of microwave control pulses. It can be explained in terms of change of frame. As with any single qubit gate one can build a controlled version of the phase shift gate. With respect to the computational basis, the 2-qubit controlled phase shift gate is: shifts the phase with formula_3 only if it acts on the state formula_23: formula_24 The controlled-"Z" (or CZ) gate is the special case where formula_25. The controlled-"S" gate is the case of the controlled-formula_16 when formula_26 and is a commonly used gate."" Rotation operator gates. The rotation operator gates formula_27 and formula_28 are the analog rotation matrices in three Cartesian axes of SO(3), along the x, y or z-axes of the Bloch sphere projection. 
As Pauli matrices are related to the generator of rotations, these rotation operators can be written as matrix exponentials with Pauli matrices in the argument. Any formula_29 unitary matrix in SU(2) can be written as a product (i.e. series circuit) of three rotation gates or less. Note that for two-level systems such as qubits and spinors, these rotations have a period of 4π. A rotation of 2π (360 degrees) returns the same statevector with a different phase. We also have formula_30 and formula_31 for all formula_32 The rotation matrices are related to the Pauli matrices in the following way: formula_33 It's possible to work out the adjoint action of rotations on the Pauli vector, namely rotation effectively by double the angle a to apply Rodrigues' rotation formula: formula_34 Taking the dot product of any unit vector with the above formula generates the expression of any single qubit gate when sandwiched within adjoint rotation gates. For example, it can be shown that formula_35. Also, using the anticommuting relation we have formula_36. Rotation operators have interesting identities. For example, formula_37 and formula_38 Also, using the anticommuting relations we have formula_39 and formula_40 Global phase and phase shift can be transformed into each others with the Z-rotation operator: formula_41. The formula_42 gate represents a rotation of π/2 about the "x" axis at the Bloch sphere formula_43. Similar rotation operator gates exist for SU(3) using Gell-Mann matrices. They are the rotation operators used with qutrits. Two-qubit interaction gates. The qubit-qubit Ising coupling or Heisenberg interaction gates "Rxx", "Ryy" and "Rzz" are 2-qubit gates that are implemented natively in some trapped-ion quantum computers, using for example the Mølmer–Sørensen gate procedure. Note that these gates can be expressed in sinusoidal form also, for example formula_44. The CNOT gate can be further decomposed as products of rotation operator gates and exactly a single two-qubit interaction gate, for example formula_45 The SWAP gate can be constructed from other gates, for example using the two-qubit interaction gates: formula_46. In superconducting circuits, the family of gates resulting from Heisenberg interactions is sometimes called the "fSim" gate set. They can be realized using flux-tunable qubits with flux-tunable coupling, or using microwave drives in fixed-frequency qubits with fixed coupling. Non-Clifford swap gates. The √SWAP gate performs half-way of a two-qubit swap (see Clifford gates). It is universal such that any many-qubit gate can be constructed from only √SWAP and single qubit gates. More than one application of the √SWAP is required to produce a Bell state from product states. The √SWAP gate arises naturally in systems that exploit exchange interaction. For systems with Ising like interactions, it is sometimes more natural to introduce the imaginary swap or iSWAP. Note that formula_47 and formula_48, or more generally formula_49 for all real "n" except 0. SWAP"α" arises naturally in spintronic quantum computers. The Fredkin gate (also CSWAP or CS gate), named after Edward Fredkin, is a 3-bit gate that performs a controlled swap. It is universal for classical computation. It has the useful property that the numbers of 0s and 1s are conserved throughout, which in the billiard ball model means the same number of balls are output as input. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
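Several of the single-qubit identities quoted above can be confirmed numerically. The NumPy sketch below assumes the rotation convention used in this article, "R""b"(θ) = exp(−iθ/2·σ"b"), and checks that "X"·"R""y"(π/2) = "H", that "R""z"(γ) together with the global phase e^{iγ/2} reproduces the phase-shift gate "P"(γ), and that "R""x"(π) = −i"X". The helper names are illustrative only.

```python
# Sketch: numerically checking gate identities stated in the text,
# using the convention R_b(theta) = exp(-i * theta/2 * sigma_b).
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def rot(pauli: np.ndarray, theta: float) -> np.ndarray:
    """Rotation operator exp(-i*theta/2 * pauli) about a Pauli axis."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * pauli

def phase(phi: float) -> np.ndarray:
    """Phase-shift gate P(phi) = diag(1, e^{i phi})."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]], dtype=complex)

# Identity: X . R_y(pi/2) = H
print("X Ry(pi/2) == H       :", np.allclose(X @ rot(Y, np.pi / 2), H))

# Identity: R_z(gamma) combined with the global phase e^{i gamma/2} equals P(gamma)
gamma = 0.73   # arbitrary test angle
print("Rz(g)*Ph(g/2) == P(g) :", np.allclose(np.exp(1j * gamma / 2) * rot(Z, gamma), phase(gamma)))

# Identity: R_x(pi) = -iX (a full 2*pi rotation returns the state only up to phase)
print("Rx(pi) == -iX         :", np.allclose(rot(X, np.pi), -1j * X))
```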
[ { "math_id": 0, "text": "I|\\psi\\rangle=|\\psi\\rangle" }, { "math_id": 1, "text": "e^{i\\varphi}" }, { "math_id": 2, "text": "|e^{i\\varphi}|=1" }, { "math_id": 3, "text": "\\varphi" }, { "math_id": 4, "text": "e^{i\\delta}|\\psi\\rangle \\otimes |\\phi\\rangle = e^{i\\delta}(|\\psi\\rangle \\otimes |\\phi\\rangle)," }, { "math_id": 5, "text": "\\mathrm{Ph}(0)=I." }, { "math_id": 6, "text": "\\mathrm{CNOT}" }, { "math_id": 7, "text": "H,S" }, { "math_id": 8, "text": "\\sqrt{A}" }, { "math_id": 9, "text": "P(\\varphi)|0\\rangle = |0\\rangle" }, { "math_id": 10, "text": "P(\\varphi)|1\\rangle= e^{i\\varphi}|1\\rangle" }, { "math_id": 11, "text": "|0\\rangle" }, { "math_id": 12, "text": "|1\\rangle" }, { "math_id": 13, "text": "\\varphi = \\frac{\\pi}{4}" }, { "math_id": 14, "text": "\\pi /8" }, { "math_id": 15, "text": "P(0)=I,\\;P(\\pi)=Z;P(\\pi/2)=S." }, { "math_id": 16, "text": "P(\\varphi)" }, { "math_id": 17, "text": "P(\\beta) \\cdot X \\cdot P(\\alpha) \\cdot X = \\begin{bmatrix} e^{i\\alpha} & 0 \\\\ 0 & e^{i\\beta} \\end{bmatrix}" }, { "math_id": 18, "text": "\\alpha = -\\beta" }, { "math_id": 19, "text": "R_z(2\\beta)" }, { "math_id": 20, "text": "\\alpha =\\beta" }, { "math_id": 21, "text": "R_z(\\pi/4) \\operatorname{Ph}\\left(\\frac{\\pi}{8}\\right) = P(\\pi/4)" }, { "math_id": 22, "text": "R_z(\\pi/4) = \\begin{bmatrix} e^{-i\\pi/8} & 0 \\\\ 0 & e^{i\\pi/8} \\end{bmatrix} " }, { "math_id": 23, "text": "|11\\rangle" }, { "math_id": 24, "text": " |a,b\\rangle \\mapsto \\begin{cases}\ne^{i\\varphi}|a,b\\rangle & \\mbox{for }a=b=1 \\\\\n|a,b\\rangle & \\mbox{otherwise.}\n\\end{cases}" }, { "math_id": 25, "text": "\\varphi = \\pi" }, { "math_id": 26, "text": "\\varphi = \\pi/2" }, { "math_id": 27, "text": "R_x(\\theta),R_y(\\theta)" }, { "math_id": 28, "text": "R_z(\\theta)" }, { "math_id": 29, "text": "2 \\times 2" }, { "math_id": 30, "text": "R_{b}(-\\theta)=R_{b}(\\theta)^{\\dagger}" }, { "math_id": 31, "text": "R_{b}(0)=I" }, { "math_id": 32, "text": " b \\in \\{x, y, z\\}." }, { "math_id": 33, "text": "R_x(\\pi)=-iX, R_y(\\pi)=-iY, R_z(\\pi)=-iZ." }, { "math_id": 34, "text": " \nR_n(-a)\\vec{\\sigma}R_n(a)=e^{i \\frac{a}{2}\\left(\\hat{n} \\cdot \\vec{\\sigma}\\right)} ~ \\vec{\\sigma}~ e^{-i \\frac{a}{2}\\left(\\hat{n} \\cdot \\vec{\\sigma}\\right)} = \\vec{\\sigma} \\cos (a) + \\hat{n} \\times \\vec{\\sigma} ~\\sin (a)+ \\hat{n} ~ \\hat{n} \\cdot \\vec{\\sigma} ~ (1 - \\cos (a))~ .\n" }, { "math_id": 35, "text": "R_y(-\\pi/2)XR_y(\\pi/2)=\\hat{x}\\cdot (\\hat{y}\\times \\vec{\\sigma})=Z" }, { "math_id": 36, "text": "R_y(-\\pi/2)XR_y(\\pi/2)=XR_y(+\\pi/2)R_y(\\pi/2)=X(-iY)=Z" }, { "math_id": 37, "text": "R_y(\\pi/2)Z = H" }, { "math_id": 38, "text": "X R_y(\\pi/2) = H." }, { "math_id": 39, "text": "ZR_y(-\\pi/2) = H" }, { "math_id": 40, "text": "R_y(-\\pi/2)X = H." }, { "math_id": 41, "text": "R_z(\\gamma) \\operatorname{Ph}\\left(\\frac{\\gamma}{2}\\right) = P(\\gamma)" }, { "math_id": 42, "text": "\\sqrt{X}" }, { "math_id": 43, "text": "\\sqrt{X}=e^{i\\pi/4}R_x(\\pi/2)" }, { "math_id": 44, "text": "R_{xx}(\\phi) = \\exp\\left(-i \\frac{\\phi}{2} X\\otimes X\\right)= \\cos\\left(\\frac{\\phi}{2}\\right)I\\otimes I-i \\sin\\left(\\frac{\\phi}{2}\\right)X\\otimes X\n" }, { "math_id": 45, "text": " \\mbox{CNOT} =e^{-i\\frac{\\pi}{4}}R_{y_1}(-\\pi/2)R_{x_1}(-\\pi/2)R_{x_2}(-\\pi/2)R_{xx}(\\pi/2)R_{y_1}(\\pi/2). 
" }, { "math_id": 46, "text": "\\text{SWAP} = e^{i\\frac{\\pi}{4}}R_{xx}(\\pi/2)R_{yy}(\\pi/2)R_{zz}(\\pi/2)" }, { "math_id": 47, "text": "i\\mbox{SWAP}=R_{xx}(-\\pi/2)R_{yy}(-\\pi/2)" }, { "math_id": 48, "text": "\\sqrt{i\\mbox{SWAP}}=R_{xx}(-\\pi/4)R_{yy}(-\\pi/4)" }, { "math_id": 49, "text": "\\sqrt[n]{i\\mbox{SWAP}}=R_{xx}(-\\pi/2n)R_{yy}(-\\pi/2n)" } ]
https://en.wikipedia.org/wiki?curid=71723429
71731023
Félix I
Brazilian space project Félix I (officially "F-360-BD") was a Brazilian Army Technical School (today's Military Institute of Engineering) project led by Lieutenant Colonel Manoel dos Santos Lage which aimed, in 1959, to launch the Flamengo cat into space. But the project was canceled due to pressure from animal advocacy groups, and the launch never took place. History. Origins. The project, also known as "Operation Meow", with limited financial resources, was part of the graduation class of 1958 of the Army Technical School that aimed to create a sounding rocket, something unheard of in Brazil at the time. The official name was "Rocket Sonda 360-BD", unrelated to the later Sonda I. The rocket had an outer diameter of 400 mm, a length of 4.3 meters, and a total mass of 350 kg with the payload, and it used only a single stage and was propelled by gunpowder, reaching a maximum speed of 1,950 m/s. The ultimate goal of Lieutenant-Colonel Manoel dos Santos Lage, head of the Rocket Program and leader of the project, though not shared by the institution, was to develop a satellite launch vehicle. The project also had the collaboration of scientists Carlos Chagas Filho and César Lattes. Carlos Chagas Filho was responsible for the idea of choosing a cat, because he was interested in observing how these animals reacted under laboratory conditions. Most of the material used to build the rocket was obtained from the War Arsenal. The project, which aimed to test a guided missile costing Cr$600,000, was nicknamed "Felix I" by the Rio de Janeiro press after they discovered the team's intention to launch a cat, Flamengo, into space. Originally they planned for the rocket to reach the 300 km mark, but this was abandoned due to difficulties in the calculations. The final decision was that the class of 1958 would develop a rocket that reached an apogee of 120 km and the class of 1950 would work on one that reached 300 km, with the ultimate goal of developing a Thor-type rocket that would reach orbits greater than 500 km by June 1960. Initially the rocket was to be launched in 1957, but it was delayed twice and by December 1958 they hoped to launch in early January 1959. Flight plan. The rocket would be launched from a base in Cabo Frio. Its accelerometer would be connected to a transmitter at a frequency of 73 Mc/s. César Lattes was responsible for building three transmitters and the instruments aimed at cosmic ray detection; Lieutenant-Colonel Carlos Alberto Braga Coelho built the electronics of the rocket; Carlos Chagas Filho (IBCCF) developed the instruments for monitoring the cat's health; and astronomer Mário Ferreira Dias, from the Valongo Observatory, developed the calculations related to the flight. The combustion chamber was built by the Army War Arsenal in company with the students of the Armaments Course, with the carbon steel plate produced by the Companhia Siderúrgica Nacional. The rocket was painted silver with red stripes in a spiral, to help the visibility of the rocket in flight, as the process would be monitored by the National Observatory. The rocket thrust was predicted to be 1,920 kgf with 6G of acceleration, 19.3s of combustion, and a final velocity of 1,960 m/s. The propellant, developed by the Army Technical School, was called "BD 1000C Gunpowder". The rocket would carry a 180-kilogram payload of gunpowder to reach the ionosphere. The payload fairing, with a final mass of 30 kg, would contain an acrylic chamber for the cat, as well as the other instruments for the mission. 
The chamber, with the return speed estimated at 1,800 m/s, would initially be slowed by two formula_0 air braking devices, followed by a 68 kg parachute developed by the Army Air Ground Division Core that would open at an altitude of 5,000 meters, all automatically. The cat would have four hours of oxygen and would be placed face up on a nylon mattress. The flight would last 40 minutes, falling into the sea 30 kilometers from the launch pad, off Angra dos Reis, and would be rescued by the Brazilian Navy. Rescuing the cat alive was considered the greatest challenge of the project. The rocket stages would be rescued by two parachutes. Finally, the flight date would be analyzed by César Lattes. If the mission was successful, the future rockets would be made available to the National Nuclear Energy Council and the Biophysics Institute for scientific research. Flamengo. Flamengo, popularly known as "Meow", the tomcat of Lieutenant-Colonel Lage's daughters, was one of the twelve candidates for the flight. He was the leading candidate and would only be cleared to fly if he was in good health on the day of the flight; his presence on the flight was already confirmed in December 1958. But in October 1958, the Diário do Paraná announced that Carlos Chagas Filho would replace the animal with an amoeba, arguing that a microscopic animal would be of greater scientific use in the study of cosmic rays. Despite this, Colonel Lage kept the cat in the project and when asked in 1959 about the reason for launching the cat, he replied: "... the recovery of this cat, alive, will be an extraordinary achievement". On 19 December 1958 the cat posed for the media inside the Technical School. If the launch had taken place, it would have been Latin America's first living being in space. Controversy. Carlos Chagas Filho, when the experiment began to gain visibility in the media, renounced any renewed interest in sending a cat on the mission and the possibility of any scientific learning, besides citing that the acrylic capsule would face difficulties with drastic temperature changes. In addition to the disagreement with Carlos Chagas Filho, the project team received protests from the "North American Feline Society," something that the project manager disregarded, believing in the safety of the vehicle. The SUIPA also opposed the use of the cat. Members of the Faculty of Veterinary Medicine and other experts were also skeptical of Flamengo's chances of survival, and Leo Rosen, vice president of SUIPA, also reiterated the group's position against the experiment. SUIPA also sent an appeal and a petition, signed by, among others, Rachel de Queiroz and Carlos Drummond de Andrade, to the Commander of the Army Technical School and to the Minister of War, General Teixeira Lott, against launching the cat in the rocket. On the issue of animal experiments, SUIPA supported them only when extremely necessary, and was skeptical of the need for the cat experiment. The Brazilian government received thousands of letters protesting against the experiment, but the Army ignored them. And despite all the protests, including from Europe, the project leader continued with his plans. In November 1958 it was announced that the launch would be held in secret to "avoid sensationalism" and in the same month Colonel João Luís Vieira Maldonado, director of the Meteorology Service, said that the rocket would only carry sounding devices, and no longer the cat. 
However, in January 1959, Colonel Lage still hoped to carry out the launch with the cat and in February of the same year they planned to launch in March. By May 1959, the launch had still not occurred, and freshmen from the National Engineering School held a parade where, among other things, they criticized and satirized the project. In December 1958 the Army announced that it would test a prototype of the rocket before the official launch. End of project. In January 1959 the rocket was on display at the Armament Museum of the Army Technical School. By 1961 it was already clear that the launch had not taken place. It was the last rocket project that Colonel Lage participated in, and it was terminated without ever flying. Finally, on 18 October 1963, the cat Félicette made a suborbital flight as part of the French space program, returned alive, and was sacrificed after two months for an autopsy and study of her brain. Colonel Lage was transferred from the Army Technical School in 1960 and all the equipment related to the rocket was disassembled. Manoel Lage, already a General, born on 4 June 1910, died on 5 August 1977. The Army Technical School was abolished in favor of the Military Institute of Engineering. Because of the project, at that time Brazil was considered one of the three countries with space technology, alongside the United States and the Soviet Union. In terms of satellite launch capabilities, years later Brazil developed the unsuccessful VLS project, terminated in 2016. The country is currently working on the VLM project. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "4.5 m^2" } ]
https://en.wikipedia.org/wiki?curid=71731023
717358
120-cell
Four-dimensional analog of the dodecahedron In geometry, the 120-cell is the convex regular 4-polytope (four-dimensional analogue of a Platonic solid) with Schläfli symbol {5,3,3}. It is also called a C120, dodecaplex (short for "dodecahedral complex"), hyperdodecahedron, polydodecahedron, hecatonicosachoron, dodecacontachoron and hecatonicosahedroid. The boundary of the 120-cell is composed of 120 dodecahedral cells with 4 meeting at each vertex. Together they form 720 pentagonal faces, 1200 edges, and 600 vertices. It is the 4-dimensional analogue of the regular dodecahedron, since just as a dodecahedron has 12 pentagonal facets, with 3 around each vertex, the "dodecaplex" has 120 dodecahedral facets, with 3 around each edge. Its dual polytope is the 600-cell. Geometry. The 120-cell incorporates the geometries of every convex regular polytope in the first four dimensions (except the polygons {7} and above). As the sixth and largest regular convex 4-polytope, it contains inscribed instances of its four predecessors (recursively). It also contains 120 inscribed instances of the first in the sequence, the 5-cell, which is not found in any of the others. The 120-cell is a four-dimensional Swiss Army knife: it contains one of everything. It is daunting but instructive to study the 120-cell, because it contains examples of "every" relationship among "all" the convex regular polytopes found in the first four dimensions. Conversely, it can only be understood by first understanding each of its predecessors, and the sequence of increasingly complex symmetries they exhibit. That is why Stillwell titled his paper on the 4-polytopes and the history of mathematics of more than 3 dimensions "The Story of the 120-cell". Cartesian coordinates. Natural Cartesian coordinates for a 4-polytope centered at the origin of 4-space occur in different frames of reference, depending on the long radius (center-to-vertex) chosen. √8 radius coordinates. The 120-cell with long radius √8 = 2√2 ≈ 2.828 has edge length 4−2φ = 3−√5 ≈ 0.764. In this frame of reference, its 600 vertex coordinates are the {permutations} and [even permutations] of the following: where φ (also called 𝝉) is the golden ratio, ≈ 1.618. Unit radius coordinates. The unit-radius 120-cell has edge length ≈ 0.270. In this frame of reference the 120-cell lies vertex up in standard orientation, and its coordinates are the {permutations} and [even permutations] in the left column below: The table gives the coordinates of at least one instance of each 4-polytope, but the 120-cell contains multiples-of-five inscribed instances of each of its precursor 4-polytopes, occupying different subsets of its vertices. The (600-point) 120-cell is the convex hull of 5 disjoint (120-point) 600-cells. Each (120-point) 600-cell is the convex hull of 5 disjoint (24-point) 24-cells, so the 120-cell is the convex hull of 25 disjoint 24-cells. Each 24-cell is the convex hull of 3 disjoint (8-point) 16-cells, so the 120-cell is the convex hull of 75 disjoint 16-cells. Uniquely, the (600-point) 120-cell is the convex hull of 120 disjoint (5-point) 5-cells. Chords. The 600-point 120-cell has all 8 of the 120-point 600-cell's distinct chord lengths, plus two additional important chords: its own shorter edges, and the edges of its 120 inscribed regular 5-cells. 
These two additional chords give the 120-cell its characteristic isoclinic rotation, in addition to all the rotations of the other regular 4-polytopes which it inherits. They also give the 120-cell a characteristic great circle polygon: an "irregular" great hexagon in which three 120-cell edges alternate with three 5-cell edges. The 120-cell's edges do not form regular great circle polygons in a single central plane the way the edges of the 600-cell, 24-cell, and 16-cell do. Like the edges of the 5-cell and the 8-cell tesseract, they form zig-zag Petrie polygons instead. The 120-cell's Petrie polygon is a triacontagon {30} zig-zag skew polygon. Since the 120-cell has a circumference of 30 edges, it has 15 distinct chord lengths, ranging from its edge length to its diameter. Every regular convex 4-polytope is inscribed in the 120-cell, and the 15 chords enumerated in the rows of the following table are all the distinct chords that make up the regular 4-polytopes and their great circle polygons. The first thing to notice about this table is that it has eight columns, not six; in addition to the six regular convex 4-polytopes, two irregular 4-polytopes occur naturally in the sequence of nested 4-polytopes: the 96-point snub 24-cell and the 480-point diminished 120-cell. The second thing to notice is that each numbered row (each chord) is marked with a triangle , square ☐, phi symbol 𝜙 or pentagram ✩. The 15 chords form polygons of four kinds: great squares ☐ characteristic of the 16-cell, great hexagons and great triangles △ characteristic of the 24-cell, great decagons and great pentagons 𝜙 characteristic of the 600-cell, and skew pentagrams ✩ or decagrams characteristic of the 5-cell which are Petrie polygons that circle through a set of central planes and form face polygons but not great polygons. The annotated chord table is a complete bill of materials for constructing the 120-cell. All of the 2-polytopes, 3-polytopes and 4-polytopes in the 120-cell are made from the 15 1-polytopes in the table. The black integers in table cells are incidence counts of the row's chord in the column's 4-polytope. For example, in the #3 chord row, the 600-cell's 72 great decagons contain 720 #3 chords in all. The red integers are the number of disjoint 4-polytopes above (the column label) which compounded form a 120-cell. For example, the 120-cell is a compound of 25 disjoint 24-cells (25 * 24 vertices = 600 vertices). The green integers are the number of distinct 4-polytopes above (the column label) which can be picked out in the 120-cell. For example, the 120-cell contains 225 distinct 24-cells which share components. The blue integers in the right column are incidence counts of the row's chord at each 120-cell vertex. For example, in the #3 chord row, 24 #3 chords converge at each of the 120-cell's 600 vertices, forming a double icosahedral vertex figure 2{3,5}. In total 300 major chords of 15 distinct lengths meet at each vertex of the 120-cell. Relationships among interior polytopes. The 120-cell is the compound of all five of the other regular convex 4-polytopes. All the relationships among the regular 1-, 2-, 3- and 4-polytopes occur in the 120-cell. It is a four-dimensional jigsaw puzzle in which all those polytopes are the parts. Although there are many sequences in which to construct the 120-cell by putting those parts together, ultimately they only fit together one way. The 120-cell is the unique solution to the combination of all these polytopes. 
The regular 1-polytope occurs in only 15 distinct lengths in any of the component polytopes of the 120-cell. By Alexandrov's uniqueness theorem, convex polyhedra with distinct shapes from each other also have distinct metric spaces of surface distances, so each regular 4-polytope has its own unique subset of these 15 chords. Only 4 of those 15 chords occur in the 16-cell, 8-cell and 24-cell. The four hypercubic chords √1, √2, √3 and √4 are sufficient to build the 24-cell and all its component parts. The 24-cell is the unique solution to the combination of these 4 chords and all the regular polytopes that can be built from them. An additional 4 of the 15 chords are required to build the 600-cell. The four golden chords are square roots of irrational fractions that are functions of √5. The 600-cell is the unique solution to the combination of these 8 chords and all the regular polytopes that can be built from them. Notable among the new parts found in the 600-cell which do not occur in the 24-cell are pentagons, and icosahedra. All 15 chords, and 15 other distinct chordal distances enumerated below, occur in the 120-cell. Notable among the new parts found in the 120-cell which do not occur in the 600-cell are The relationships between the "regular" 5-cell (the simplex regular 4-polytope) and the other regular 4-polytopes are manifest directly only in the 120-cell. The 600-point 120-cell is a compound of 120 disjoint 5-point 5-cells, and it is also a compound of 5 disjoint 120-point 600-cells (two different ways). Each 5-cell has one vertex in each of 5 disjoint 600-cells, and therefore in each of 5 disjoint 24-cells, 5 disjoint 8-cells, and 5 disjoint 16-cells. Each 5-cell is a ring (two different ways) joining 5 disjoint instances of each of the other regular 4-polytopes. Geodesic rectangles. The 30 distinct chords found in the 120-cell occur as 15 pairs of 180° complements. They form 15 distinct kinds of great circle polygon that lie in central planes of several kinds: △ planes that intersect {12} vertices in an irregular dodecagon, 𝜙 planes that intersect {10} vertices in a regular decagon, and ☐ planes that intersect {4} vertices in several kinds of rectangle, including a square. Each great circle polygon is characterized by its pair of 180° complementary chords. The chord pairs form great circle polygons with parallel opposing edges, so each great polygon is either a rectangle or a compound of a rectangle, with the two chords as the rectangle's edges. Each of the 15 complementary chord pairs corresponds to a distinct pair of opposing polyhedral sections of the 120-cell, beginning with a vertex, the 00 section. The correspondence is that each 120-cell vertex is surrounded by each polyhedral section's vertices at a uniform distance (the chord length), the way a polyhedron's vertices surround its center at the distance of its long radius. The #1 chord is the "radius" of the 10 section, the tetrahedral vertex figure of the 120-cell. The #14 chord is the "radius" of its congruent opposing 290 section. The #7 chord is the "radius" of the central section of the 120-cell, in which two opposing 150 sections are coincident. Each kind of great circle polygon (each distinct pair of 180° complementary chords) plays a role in a discrete isoclinic rotation of a distinct class, which takes its great rectangle edges to similar edges in Clifford parallel great polygons of the same kind. 
There is a distinct left and right rotation of this class for each fiber bundle of Clifford parallel great circle polygons in the invariant planes of the rotation. In each class of rotation, vertices rotate on a distinct kind of circular geodesic isocline which has a characteristic circumference, skew Clifford polygram and chord number, listed in the Rotation column above. Polyhedral graph. Considering the adjacency matrix of the vertices representing the polyhedral graph of the unit-radius 120-cell, the graph diameter is 15, connecting each vertex to its coordinate-negation at a Euclidean distance of 2 away (its circumdiameter), and there are 24 different paths to connect them along the polytope edges. From each vertex, there are 4 vertices at distance 1, 12 at distance 2, 24 at distance 3, 36 at distance 4, 52 at distance 5, 68 at distance 6, 76 at distance 7, 78 at distance 8, 72 at distance 9, 64 at distance 10, 56 at distance 11, 40 at distance 12, 12 at distance 13, 4 at distance 14, and 1 at distance 15. The adjacency matrix has 27 distinct eigenvalues ranging from ≈ 0.270, with a multiplicity of 4, to 2, with a multiplicity of 1. The multiplicity of eigenvalue 0 is 18, and the rank of the adjacency matrix is 582. The vertices of the 120-cell polyhedral graph are 3-colorable. The graph is Eulerian having degree 4 in every vertex. Its edge set can be decomposed into two Hamiltonian cycles. Constructions. The 120-cell is the sixth in the sequence of 6 convex regular 4-polytopes (in order of size and complexity). It can be deconstructed into ten distinct instances (or five disjoint instances) of its predecessor (and dual) the 600-cell, just as the 600-cell can be deconstructed into twenty-five distinct instances (or five disjoint instances) of its predecessor the 24-cell, the 24-cell can be deconstructed into three distinct instances of its predecessor the tesseract (8-cell), and the 8-cell can be deconstructed into two disjoint instances of its predecessor (and dual) the 16-cell. The 120-cell contains 675 distinct instances (75 disjoint instances) of the 16-cell. The reverse procedure to construct each of these from an instance of its predecessor preserves the radius of the predecessor, but generally produces a successor with a smaller edge length. The 600-cell's edge length is ~0.618 times its radius (the inverse golden ratio), but the 120-cell's edge length is ~0.270 times its radius. Dual 600-cells. Since the 120-cell is the dual of the 600-cell, it can be constructed from the 600-cell by placing its 600 vertices at the center of volume of each of the 600 tetrahedral cells. From a 600-cell of unit long radius, this results in a 120-cell of slightly smaller long radius ( ≈ 0.926) and edge length of exactly 1/4. Thus the unit edge-length 120-cell (with long radius φ2√2 ≈ 3.702) can be constructed in this manner just inside a 600-cell of long radius 4. The unit radius 120-cell (with edge-length ≈ 0.270) can be constructed in this manner just inside a 600-cell of long radius ≈ 1.080. Reciprocally, the unit-radius 120-cell can be constructed just outside a 600-cell of slightly smaller long radius ≈ 0.926, by placing the center of each dodecahedral cell at one of the 120 600-cell vertices. 
The 120-cell whose coordinates are given above of long radius √8 = 2√2 ≈ 2.828 and edge-length = 3−√5 ≈ 0.764 can be constructed in this manner just outside a 600-cell of long radius φ2, which is smaller than √8 in the same ratio of ≈ 0.926; it is in the golden ratio to the edge length of the 600-cell, so that must be φ. The 120-cell of edge-length 2 and long radius φ2√8 ≈ 7.405 given by Coxeter can be constructed in this manner just outside a 600-cell of long radius φ4 and edge-length φ3. Therefore, the unit-radius 120-cell can be constructed from its predecessor the unit-radius 600-cell in three reciprocation steps. Cell rotations of inscribed duals. Since the 120-cell contains inscribed 600-cells, it contains its own dual of the same radius. The 120-cell contains five disjoint 600-cells (ten overlapping inscribed 600-cells of which we can pick out five disjoint 600-cells in two different ways), so it can be seen as a compound of five of its own dual (in two ways). The vertices of each inscribed 600-cell are vertices of the 120-cell, and (dually) each dodecahedral cell center is a tetrahedral cell center in each of the inscribed 600-cells. The dodecahedral cells of the 120-cell have tetrahedral cells of the 600-cells inscribed in them. Just as the 120-cell is a compound of five 600-cells (in two ways), the dodecahedron is a compound of five regular tetrahedra (in two ways). As two opposing tetrahedra can be inscribed in a cube, and five cubes can be inscribed in a dodecahedron, ten tetrahedra in five cubes can be inscribed in a dodecahedron: two opposing sets of five, with each set covering all 20 vertices and each vertex in two tetrahedra (one from each set, but not the opposing pair of a cube obviously). This shows that the 120-cell contains, among its many interior features, 120 compounds of ten tetrahedra, each of which is dimensionally analogous to the whole 120-cell as a compound of ten 600-cells. All ten tetrahedra can be generated by two chiral five-click rotations of any one tetrahedron. In each dodecahedral cell, one tetrahedral cell comes from each of the ten 600-cells inscribed in the 120-cell. Therefore the whole 120-cell, with all ten inscribed 600-cells, can be generated from just one 600-cell by rotating its cells. Augmentation. Another consequence of the 120-cell containing inscribed 600-cells is that it is possible to construct it by placing 4-pyramids of some kind on the cells of the 600-cell. These tetrahedral pyramids must be quite irregular in this case (with the apex blunted into four 'apexes'), but we can discern their shape in the way a tetrahedron lies inscribed in a dodecahedron. Only 120 tetrahedral cells of each 600-cell can be inscribed in the 120-cell's dodecahedra; its other 480 tetrahedra span dodecahedral cells. Each dodecahedron-inscribed tetrahedron is the center cell of a cluster of five tetrahedra, with the four others face-bonded around it lying only partially within the dodecahedron. The central tetrahedron is edge-bonded to an additional 12 tetrahedral cells, also lying only partially within the dodecahedron. The central cell is vertex-bonded to 40 other tetrahedral cells which lie entirely outside the dodecahedron. Weyl orbits. Another construction method uses quaternions and the Icosahedral symmetry of Weyl group orbits formula_0 of order 120. 
The following describe formula_1 and formula_2 24-cells as quaternion orbit weights of D4 under the Weyl group W(D4): O(1000) : V1 O(0010) : V2 O(0001) : V3 formula_3 With quaternions formula_4 where formula_5 is the conjugate of formula_6 and formula_7 and formula_8, then the Coxeter group formula_9 is the symmetry group of the 600-cell and the 120-cell of order 14400. Given formula_10 such that formula_11 and formula_12 as an exchange of formula_13 within formula_6, we can construct: As a configuration. This configuration matrix represents the 120-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole 120-cell. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_19 Here is the configuration expanded with "k"-face elements and "k"-figures. The diagonal element counts are the ratio of the full Coxeter group order, 14400, divided by the order of the subgroup with mirror removal. Visualization. The 120-cell consists of 120 dodecahedral cells. For visualization purposes, it is convenient that the dodecahedron has opposing parallel faces (a trait it shares with the cells of the tesseract and the 24-cell). One can stack dodecahedrons face to face in a straight line bent in the 4th direction into a great circle with a circumference of 10 cells. Starting from this initial ten cell construct there are two common visualizations one can use: a layered stereographic projection, and a structure of intertwining rings (discrete Hopf fibration). Layered stereographic projection. The cell locations lend themselves to a hyperspherical description. Pick an arbitrary dodecahedron and label it the "north pole". Twelve great circle meridians (four cells long) radiate out in 3 dimensions, converging at the fifth "south pole" cell. This skeleton accounts for 50 of the 120 cells (2 + 4 × 12). Starting at the North Pole, we can build up the 120-cell in 9 latitudinal layers, with allusions to terrestrial 2-sphere topography in the table below. With the exception of the poles, the centroids of the cells of each layer lie on a separate 2-sphere, with the equatorial centroids lying on a great 2-sphere. The centroids of the 30 equatorial cells form the vertices of an icosidodecahedron, with the meridians (as described above) passing through the center of each pentagonal face. The cells labeled "interstitial" in the following table do not fall on meridian great circles. The cells of layers 2, 4, 6 and 8 are located over the faces of the pole cell. The cells of layers 3 and 7 are located directly over the vertices of the pole cell. The cells of layer 5 are located over the edges of the pole cell. Intertwining rings. The 120-cell can be partitioned into 12 disjoint 10-cell great circle rings, forming a discrete/quantized Hopf fibration. Starting with one 10-cell ring, one can place another ring alongside it that spirals around the original ring one complete revolution in ten cells. Five such 10-cell rings can be placed adjacent to the original 10-cell ring. Although the outer rings "spiral" around the inner ring (and each other), they actually have no helical torsion. They are all equivalent. The spiraling is a result of the 3-sphere curvature. The inner ring and the five outer rings now form a six ring, 60-cell solid torus. 
One can continue adding 10-cell rings adjacent to the previous ones, but it's more instructive to construct a second torus, disjoint from the one above, from the remaining 60 cells, that interlocks with the first. The 120-cell, like the 3-sphere, is the union of these two (Clifford) tori. If the center ring of the first torus is a meridian great circle as defined above, the center ring of the second torus is the equatorial great circle that is centered on the meridian circle. Also note that the spiraling shell of 50 cells around a center ring can be either left handed or right handed. It's just a matter of partitioning the cells in the shell differently, i.e. picking another set of disjoint (Clifford parallel) great circles. Other great circle constructs. There is another great circle path of interest that alternately passes through opposing cell vertices, then along an edge. This path consists of 6 edges alternating with 6 cell diameter chords, forming an irregular dodecagon in a central plane. Both these great circle paths have dual great circle paths in the 600-cell. The 10 cell face to face path above maps to a 10 vertex path solely traversing along edges in the 600-cell, forming a decagon. The alternating cell/edge path maps to a path consisting of 12 tetrahedrons alternately meeting face to face then vertex to vertex (six triangular bipyramids) in the 600-cell. This latter path corresponds to a ring of six icosahedra meeting face to face in the snub 24-cell (or icosahedral pyramids in the 600-cell), forming a hexagon. Another great circle polygon path exists which is unique to the 120-cell and has no dual counterpart in the 600-cell. This path consists of 3 120-cell edges alternating with 3 inscribed 5-cell edges (#8 chords), forming the irregular great hexagon with alternating short and long edges illustrated above. Each 5-cell edge runs through the volume of three dodecahedral cells (in a ring of ten face-bonded dodecahedral cells), to the opposite pentagonal face of the third dodecahedron. This irregular great hexagon lies in the same central plane (on the same great circle) as the irregular great dodecagon described above, but it intersects only {6} of the {12} dodecagon vertices. There are two irregular great hexagons inscribed in each irregular great dodecagon, in alternate positions. Perspective projections. As in all the illustrations in this article, only the edges of the 120-cell appear in these renderings. All the other chords are not shown. The complex interior parts of the 120-cell, all its inscribed 600-cells, 24-cells, 8-cells, 16-cells and 5-cells, are completely invisible in all illustrations. The viewer must imagine them. These projections use perspective projection, from a specific viewpoint in four dimensions, projecting the model as a 3D shadow. Therefore, faces and cells that look larger are merely closer to the 4D viewpoint. A comparison of perspective projections of the 3D dodecahedron to 2D (below left), and projections of the 4D 120-cell to 3D (below right), demonstrates two related perspective projection methods, by dimensional analogy. Schlegel diagrams use perspective to show depth in the dimension which has been flattened, choosing a view point "above" a specific cell, thus making that cell the envelope of the model, with other cells appearing smaller inside it. Stereographic projections use the same approach, but are shown with curved edges, representing the spherical polytope as a tiling of a 3-sphere. 
Both these methods distort the object, because the cells are not actually nested inside each other (they meet face-to-face), and they are all the same size. Other perspective projection methods exist, such as the rotating animations above, which do not exhibit this particular kind of distortion, but rather some other kind of distortion (as all projections must). Orthogonal projections. Orthogonal projections of the 120-cell can be done in 2D by defining two orthonormal basis vectors for a specific view direction. The 30-gonal projection was made in 1963 by B. L. Chilton. The H3 decagonal projection shows the plane of the van Oss polygon. 3-dimensional orthogonal projections can also be made with three orthonormal basis vectors, and displayed as a 3d model, and then projecting a certain perspective in 3D for a 2d image. Related polyhedra and honeycombs. H4 polytopes. The 120-cell is one of 15 regular and uniform polytopes with the same H4 symmetry [3,3,5]: {p,3,3} polytopes. The 120-cell is similar to three regular 4-polytopes: the 5-cell {3,3,3} and tesseract {4,3,3} of Euclidean 4-space, and the hexagonal tiling honeycomb {6,3,3} of hyperbolic space. All of these have a tetrahedral vertex figure {3,3}: {5,3,p} polytopes. The 120-cell is a part of a sequence of 4-polytopes and honeycombs with dodecahedral cells: Tetrahedrally diminished 120-cell. Since the 600-point 120-cell has 5 disjoint inscribed 600-cells, it can be diminished by the removal of one of those 120-point 600-cells, creating an irregular 480-point 4-polytope. Each dodecahedral cell of the 120-cell is diminished by removal of 4 of its 20 vertices, creating an irregular 16-point polyhedron called the tetrahedrally diminished dodecahedron because the 4 vertices removed formed a tetrahedron inscribed in the dodecahedron. Since the vertex figure of the dodecahedron is the triangle, each truncated vertex is replaced by a triangle. The 12 pentagon faces are replaced by 12 trapezoids, as one vertex of each pentagon is removed and two of its edges are replaced by the pentagon's diagonal chord. The tetrahedrally diminished dodecahedron has 16 vertices and 16 faces: 12 trapezoid faces and four equilateral triangle faces. Since the vertex figure of the 120-cell is the tetrahedron, each truncated vertex is replaced by a tetrahedron, leaving 120 tetrahedrally diminished dodecahedron cells and 120 regular tetrahedron cells. The regular dodecahedron and the tetrahedrally diminished dodecahedron both have 30 edges, and the regular 120-cell and the tetrahedrally diminished 120-cell both have 1200 edges. The 480-point diminished 120-cell may be called the tetrahedrally diminished 120-cell because its cells are tetrahedrally diminished, or the 600-cell diminished 120-cell because the vertices removed formed a 600-cell inscribed in the 120-cell, or even the regular 5-cells diminished 120-cell because removing the 120 vertices removes one vertex from each of the 120 inscribed regular 5-cells, leaving 120 regular tetrahedra. Davis 120-cell. The Davis 120-cell, introduced by , is a compact 4-dimensional hyperbolic manifold obtained by identifying opposite faces of the 120-cell, whose universal cover gives the regular honeycomb {5,3,3,5} of 4-dimensional hyperbolic space. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
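The numeric values quoted in the coordinate and construction sections above can be double-checked from the golden ratio alone; for instance the unit-radius edge length ≈ 0.270 is the reciprocal of the unit-edge long radius φ2√2 ≈ 3.702. The short Python sketch below recomputes those quoted figures. It is only a sanity check; the closed forms in the comments are taken from, or inferred from, the approximate values given in the text.

```python
# Sketch: recomputing the 120-cell radius / edge-length figures quoted in the text.
import math

phi = (1 + math.sqrt(5)) / 2                   # golden ratio, ~1.618

# Edge length of the unit-radius 120-cell: 1 / (phi^2 * sqrt(2)) ~ 0.270
edge_over_radius = 1 / (phi**2 * math.sqrt(2))
print(f"edge / long radius     = {edge_over_radius:.4f}   (quoted ~0.270)")

# Long radius sqrt(8) frame: edge length 4 - 2*phi = 3 - sqrt(5) ~ 0.764
print(f"sqrt(8)-radius edge    = {4 - 2*phi:.4f}   (3 - sqrt(5) = {3 - math.sqrt(5):.4f})")
print(f"consistency check      = {math.sqrt(8) * edge_over_radius:.4f}   (should match the line above)")

# Unit edge length frame: long radius phi^2 * sqrt(2) ~ 3.702
print(f"unit-edge long radius  = {phi**2 * math.sqrt(2):.4f}   (quoted ~3.702)")

# Coxeter's edge-2 120-cell: long radius phi^2 * sqrt(8) ~ 7.405
print(f"edge-2 long radius     = {phi**2 * math.sqrt(8):.4f}   (quoted ~7.405)")

# Dual construction: ratio of the inscribed 600-cell's long radius to the 120-cell's, phi^2 / sqrt(8) ~ 0.926
print(f"600-cell radius ratio  = {phi**2 / math.sqrt(8):.4f}   (quoted ~0.926)")
```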
[ { "math_id": 0, "text": "O(\\Lambda)=W(H_4)=I" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "T'" }, { "math_id": 3, "text": "T'=\\sqrt{2}\\{V1\\oplus V2\\oplus V3 \\} = \\begin{pmatrix}\n\\frac{-1-e_1}{\\sqrt{2}} & \\frac{1-e_1}{\\sqrt{2}} &\n\\frac{-1+e_1}{\\sqrt{2}} & \\frac{1+e_1}{\\sqrt{2}} &\n\\frac{-e_2-e_3}{\\sqrt{2}} & \\frac{e_2-e_3}{\\sqrt{2}} &\n\\frac{-e_2+e_3}{\\sqrt{2}} & \\frac{e_2+e_3}{\\sqrt{2}}\n\\\\\n\\frac{-1-e_2}{\\sqrt{2}} & \\frac{1-e_2}{\\sqrt{2}} &\n\\frac{-1+e_2}{\\sqrt{2}} & \\frac{1+e_2}{\\sqrt{2}} &\n\\frac{-e_1-e_3}{\\sqrt{2}} & \\frac{e_1-e_3}{\\sqrt{2}} &\n\\frac{-e_1+e_3}{\\sqrt{2}} & \\frac{e_1+e_3}{\\sqrt{2}}\n\\\\\n\\frac{-e_1-e_2}{\\sqrt{2}} & \\frac{e_1-e_2}{\\sqrt{2}} &\n\\frac{-e_1+e_2}{\\sqrt{2}} & \\frac{e_1+e_2}{\\sqrt{2}} &\n\\frac{-1-e_3}{\\sqrt{2}} & \\frac{1-e_3}{\\sqrt{2}} &\n\\frac{-1+e_3}{\\sqrt{2}} & \\frac{1+e_3}{\\sqrt{2}}\n\\end{pmatrix};" }, { "math_id": 4, "text": "(p,q)" }, { "math_id": 5, "text": "\\bar p" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "[p,q]:r\\rightarrow r'=prq" }, { "math_id": 8, "text": "[p,q]^*:r\\rightarrow r''=p\\bar rq" }, { "math_id": 9, "text": "W(H_4)=\\lbrace[p,\\bar p] \\oplus [p,\\bar p]^*\\rbrace " }, { "math_id": 10, "text": "p \\in T" }, { "math_id": 11, "text": "\\bar p=\\pm p^4, \\bar p^2=\\pm p^3, \\bar p^3=\\pm p^2, \\bar p^4=\\pm p" }, { "math_id": 12, "text": "p^\\dagger" }, { "math_id": 13, "text": "-1/\\varphi \\leftrightarrow \\varphi" }, { "math_id": 14, "text": "S=\\sum_{i=1}^4\\oplus p^i T" }, { "math_id": 15, "text": "I=T+S=\\sum_{i=0}^4\\oplus p^i T" }, { "math_id": 16, "text": "J=\\sum_{i,j=0}^4\\oplus p^i\\bar p^{\\dagger j}T'" }, { "math_id": 17, "text": "S'=\\sum_{i=1}^4\\oplus p^i\\bar p^{\\dagger i}T'" }, { "math_id": 18, "text": "T \\oplus T' \\oplus S'" }, { "math_id": 19, "text": "\\begin{bmatrix}\\begin{matrix}600 & 4 & 6 & 4 \\\\ 2 & 1200 & 3 & 3 \\\\ 5 & 5 & 720 & 2 \\\\ 20 & 30 & 12 & 120 \\end{matrix}\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=717358
71736840
Colour refinement algorithm
In graph theory and theoretical computer science, the colour refinement algorithm, also known as the naive vertex classification, or the 1-dimensional version of the Weisfeiler-Leman algorithm, is a routine used for testing whether two graphs are isomorphic. While it solves graph isomorphism on almost all graphs, there are graphs such as all regular graphs that cannot be distinguished using colour refinement. Description. The algorithm takes as an input a graph formula_0 with formula_1 vertices. It proceeds in iterations and in each iteration produces a new colouring of the vertices. Formally a "colouring" is a function from the vertices of this graph into some set (of "colours"). We define a sequence of vertex colourings formula_2 as follows: the initial colouring formula_3 assigns to every vertex formula_5 the same colour formula_4 (if the graph carries vertex labels, formula_6 may instead encode those labels), and in each iteration we set formula_7 for every vertex formula_5. In other words, the new colour of the vertex formula_5 is the pair formed from the previous colour and the multiset of the colours of its neighbours. This algorithm keeps refining the current colouring. At some point it stabilises, i.e., formula_8. This final colouring is called the "stable colouring". Graph Isomorphism. Colour refinement can be used as a subroutine for an important computational problem: graph isomorphism. In this problem we have as input two graphs formula_9 and our task is to determine whether they are isomorphic. Informally, this means that the two graphs are the same up to relabelling of vertices. To test if formula_10 and formula_11 are isomorphic we could try the following. Run colour refinement on both graphs. If the stable colourings produced are different, we know that the two graphs are not isomorphic. However, it could be that the same stable colouring is produced despite the two graphs not being isomorphic; see below. Complexity. It is easy to see that if colour refinement is given a graph on formula_1 vertices as input, a stable colouring is produced after at most formula_12 iterations. Conversely, there exist graphs where this bound is realised. This leads to a formula_13 implementation where formula_14 is the number of vertices and formula_15 the number of edges. This complexity has been proven to be optimal under reasonable assumptions. Expressivity. We say that two graphs formula_10 and formula_11 are "distinguished" by colour refinement if the algorithm yields a different output on formula_10 than on formula_11. There are simple examples of graphs that are not distinguished by colour refinement. For example, it does not distinguish a cycle of length 6 from a pair of triangles. Despite this, the algorithm is very powerful in that a random graph will be identified by the algorithm asymptotically almost surely. Even stronger, it has been shown that as formula_1 increases, the proportion of graphs that are "not" identified by colour refinement decreases exponentially in order formula_1. The expressivity of colour refinement also has a logical characterisation: two graphs can be distinguished by colour refinement if and only if they can be distinguished by the two-variable fragment of first-order logic with counting. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
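A short sketch of the procedure described above, written in Python (an illustrative choice; the graph representation as an adjacency dictionary and the renaming of colours to small integers are assumptions of this sketch, not part of any standard formulation):

```python
def colour_refinement(adj):
    """Run colour refinement on a graph given as a dict mapping each
    vertex to a list of its neighbours; returns the stable colouring."""
    # Initial colouring: every vertex receives the same colour.
    colours = {v: 0 for v in adj}
    while True:
        # New colour = (old colour, multiset of neighbours' old colours);
        # a sorted tuple is used so that equal multisets compare equal.
        signatures = {
            v: (colours[v], tuple(sorted(colours[w] for w in adj[v])))
            for v in adj
        }
        # Rename the signatures to small integers; the names themselves
        # do not matter, only the partition into colour classes does.
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new_colours = {v: palette[signatures[v]] for v in adj}
        # The partition only ever gets finer, so an unchanged number of
        # colour classes means the colouring has stabilised.
        if len(set(new_colours.values())) == len(set(colours.values())):
            return new_colours
        colours = new_colours


# The failure case mentioned above: a 6-cycle and two disjoint triangles are
# both 2-regular, so every vertex keeps a single shared colour in both graphs
# and colour refinement cannot tell them apart.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(sorted(colour_refinement(cycle6).values()))     # [0, 0, 0, 0, 0, 0]
print(sorted(colour_refinement(triangles).values()))  # [0, 0, 0, 0, 0, 0]
```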
[ { "math_id": 0, "text": " G " }, { "math_id": 1, "text": " n " }, { "math_id": 2, "text": " \\lambda_i " }, { "math_id": 3, "text": " \\lambda_0 " }, { "math_id": 4, "text": "\\lambda_0(v)" }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "\\lambda_0" }, { "math_id": 7, "text": "\\lambda_{i+1}(v) = \\left(\\lambda_i(v), \\{\\{ \\lambda_i(w) \\mid w \\text{ is a neighbor of } v\\}\\}\\right)" }, { "math_id": 8, "text": "\\lambda_{i+1} \\equiv \\lambda_i" }, { "math_id": 9, "text": " G, H " }, { "math_id": 10, "text": " G " }, { "math_id": 11, "text": " H " }, { "math_id": 12, "text": " n-1 " }, { "math_id": 13, "text": " O((n+m)\\log n) " }, { "math_id": 14, "text": "n " }, { "math_id": 15, "text": "m " } ]
https://en.wikipedia.org/wiki?curid=71736840
717377
Algebraic stack
Generalization of algebraic spaces or schemes In mathematics, an algebraic stack is a vast generalization of algebraic spaces, or schemes, which are foundational for studying moduli theory. Many moduli spaces are constructed using techniques specific to algebraic stacks, such as Artin's representability theorem, which is used to construct the moduli space of pointed algebraic curves formula_0 and the moduli stack of elliptic curves. Originally, they were introduced by Alexander Grothendieck to keep track of automorphisms on moduli spaces, a technique which allows for treating these moduli spaces as if their underlying schemes or algebraic spaces are smooth. After Grothendieck developed the general theory of descent, and Giraud the general theory of stacks, the notion of algebraic stacks was defined by Michael Artin. Definition. Motivation. One of the motivating examples of an algebraic stack is to consider a groupoid scheme formula_1 over a fixed scheme formula_2. For example, if formula_3 (where formula_4 is the group scheme of roots of unity), formula_5, formula_6 is the projection map, formula_7 is the group action formula_8 and formula_9 is the multiplication map formula_10 on formula_4. Then, given an formula_2-scheme formula_11, the groupoid scheme formula_12 forms a groupoid (where formula_13 are their associated functors). Moreover, this construction is functorial on formula_14 forming a contravariant 2-functor formula_15 where formula_16 is the 2-category of small categories. Another way to view this is as a fibred category formula_17 through the Grothendieck construction. Getting the correct technical conditions, such as the Grothendieck topology on formula_14, gives the definition of an algebraic stack. For instance, in the associated groupoid of formula_18-points for a field formula_18, over the origin object formula_19 there is the groupoid of automorphisms formula_20. However, in order to get an algebraic stack from formula_21, and not just a stack, there are additional technical hypotheses required for formula_21. Algebraic stacks. It turns out that using the fppf-topology (faithfully flat and locally of finite presentation) on formula_14, denoted formula_22, forms the basis for defining algebraic stacks. Then, an algebraic stack is a fibered category formula_23 such that the diagonal map formula_25 of formula_24 is representable by algebraic spaces, and there exists an formula_26 scheme formula_27 and a 1-morphism formula_28 which is surjective and smooth. Explanation of technical conditions. Using the fppf topology. First of all, the fppf-topology is used because it behaves well with respect to descent. For example, if there are schemes formula_29 and formula_30 can be refined to an fppf-cover of formula_31, then if formula_32 is flat, locally finite type, or locally of finite presentation, formula_31 has this property as well. This kind of idea can be extended further by considering properties local either on the target or the source of a morphism formula_33. For a cover formula_34 we say a property formula_35 is local on the source if formula_33 has formula_35 if and only if each formula_36 has formula_35. There is an analogous notion on the target called local on the target. This means given a cover formula_37, formula_33 has formula_35 if and only if each formula_38 has formula_35. For the fppf topology, having an immersion is local on the target. In addition to the previous properties local on the source for the fppf topology, formula_39 being universally open is also local on the source. Also, being locally Noetherian and Jacobson are local on the source and target for the fppf topology. 
This does not hold in the fpqc topology, making it not as "nice" in terms of technical properties. Even though this is true, using algebraic stacks over the fpqc topology still has its use, such as in chromatic homotopy theory. This is because the Moduli stack of formal group laws formula_40 is an fpqc-algebraic stackpg 40. Representable diagonal. By definition, a 1-morphism formula_41 of categories fibered in groupoids is representable by algebraic spaces if for any fppf morphism formula_27 of schemes and any 1-morphism formula_42, the associated category fibered in groupoidsformula_43is representable as an algebraic space, meaning there exists an algebraic spaceformula_44such that the associated fibered category formula_45 is equivalent to formula_43. There are a number of equivalent conditions for representability of the diagonal which help give intuition for this technical condition, but one of main motivations is the following: for a scheme formula_46 and objects formula_47 the sheaf formula_48 is representable as an algebraic space. In particular, the stabilizer group for any point on the stack formula_49 is representable as an algebraic space. Another important equivalence of having a representable diagonal is the technical condition that the intersection of any two algebraic spaces in an algebraic stack is an algebraic space. Reformulated using fiber productsformula_50the representability of the diagonal is equivalent to formula_51 being representable for an algebraic space formula_31. This is because given morphisms formula_52 from algebraic spaces, they extend to maps formula_53 from the diagonal map. There is an analogous statement for algebraic spaces which gives representability of a sheaf on formula_54 as an algebraic space. Note that an analogous condition of representability of the diagonal holds for some formulations of higher stacks where the fiber product is an formula_55-stack for an formula_56-stack formula_24. Surjective and smooth atlas. 2-Yoneda lemma. The existence of an formula_26 scheme formula_27 and a 1-morphism of fibered categories formula_28 which is surjective and smooth depends on defining a smooth and surjective morphisms of fibered categories. Here formula_57 is the algebraic stack from the representable functor formula_58 on formula_59 upgraded to a category fibered in groupoids where the categories only have trivial morphisms. This means the setformula_60is considered as a category, denoted formula_61, with objects in formula_62 as formula_26 morphismsformula_63and morphisms are the identity morphism. Henceformula_64is a 2-functor of groupoids. Showing this 2-functor is a sheaf is the content of the 2-Yoneda lemma. Using the Grothendieck construction, there is an associated category fibered in groupoids denoted formula_28. Representable morphisms of categories fibered in groupoids. To say this morphism formula_28 is smooth or surjective, we have to introduce representable morphisms. A morphism formula_65 of categories fibered in groupoids over formula_66 is said to be representable if given an object formula_67 in formula_66 and an object formula_68 the 2-fibered product formula_69is representable by a scheme. Then, we can say the morphism of categories fibered in groupoids formula_70 is smooth and surjective if the associated morphismformula_71of schemes is smooth and surjective. Deligne-Mumford stacks. 
Algebraic stacks, also known as Artin stacks, are by definition equipped with a smooth surjective atlas formula_28, where formula_57 is the stack associated to some scheme formula_27. If the atlas formula_72 is moreover étale, then formula_24 is said to be a Deligne-Mumford stack. The subclass of Deligne-Mumford stacks is useful because it provides the correct setting for many natural stacks considered, such as the moduli stack of algebraic curves. In addition, they are strict enough that objects represented by points in Deligne-Mumford stacks do not have infinitesimal automorphisms. This is very important because infinitesimal automorphisms make studying the deformation theory of Artin stacks very difficult. For example, the deformation theory of the Artin stack formula_73, the moduli stack of rank formula_56 vector bundles, has infinitesimal automorphisms controlled partially by the Lie algebra formula_74. This leads to an infinite sequence of deformations and obstructions in general, which is one of the motivations for studying moduli of stable bundles. Only in the special case of the deformation theory of line bundles formula_75 is the deformation theory tractable, since the associated Lie algebra is abelian. Note that many stacks cannot be naturally represented as Deligne-Mumford stacks because it only allows for finite covers, that is, algebraic stacks with finite covers. Note that because every étale cover is flat and locally of finite presentation, algebraic stacks defined with the fppf-topology subsume this theory; but it is still useful since many stacks found in nature are of this form, such as the moduli of curves formula_76. Also, the differential-geometric analogues of such stacks are called orbifolds. The étale condition implies the 2-functor formula_77 sending a scheme to its groupoid of formula_4-torsors is representable as a stack over the étale topology, but the Picard-stack formula_78 of formula_79-torsors (equivalently the category of line bundles) is not representable. Stacks of this form are representable as stacks over the fppf-topology. Another reason for considering the fppf-topology versus the étale topology is that over characteristic formula_70 the Kummer sequence formula_80 is exact only as a sequence of fppf sheaves, but not as a sequence of étale sheaves. Defining algebraic stacks over other topologies. Using other Grothendieck topologies on formula_81 gives alternative theories of algebraic stacks which are either not general enough, or don't behave well with respect to exchanging properties from the base of a cover to the total space of a cover. It is useful to recall there is the following hierarchy of generalization formula_82 of big topologies on formula_81. Structure sheaf. The structure sheaf of an algebraic stack is an object pulled back from a universal structure sheaf formula_83 on the site formula_66. This universal structure sheaf is defined as formula_84 and the associated structure sheaf on a category fibered in groupoids formula_85 is defined as formula_86 where formula_87 comes from the map of Grothendieck topologies. In particular, this means that if formula_88 lies over formula_46, so that formula_89, then formula_90. As a sanity check, it's worth comparing this to a category fibered in groupoids coming from an formula_2-scheme formula_32 for various topologies. For example, if formula_91 is a category fibered in groupoids over formula_66, the structure sheaf for an open subscheme formula_92 gives formula_93 so this definition recovers the classic structure sheaf on a scheme. 
Moreover, for a quotient stack formula_94, the structure sheaf just gives the formula_95-invariant sections formula_96 for formula_97 in formula_66. Examples. Classifying stacks. Many classifying stacks for algebraic groups are algebraic stacks. In fact, for an algebraic group space formula_95 over a scheme formula_2 which is flat of finite presentation, the stack formula_98 is algebraic (theorem 6.1).
[ { "math_id": 0, "text": "\\mathcal{M}_{g,n}" }, { "math_id": 1, "text": "(R,U,s,t,m)" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "R = \\mu_n\\times_S\\mathbb{A}^n_S" }, { "math_id": 4, "text": "\\mu_n" }, { "math_id": 5, "text": "U = \\mathbb{A}^n_S" }, { "math_id": 6, "text": "s = \\text{pr}_U" }, { "math_id": 7, "text": "t" }, { "math_id": 8, "text": "\\zeta_n \\cdot (x_1,\\ldots, x_n)=(\\zeta_n x_1,\\ldots,\\zeta_n x_n)" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "m: (\\mu_n\\times_S \\mathbb{A}^n_S)\\times_{\\mu_n\\times_S \\mathbb{A}^n_S} (\\mu_n\\times_S \\mathbb{A}^n_S) \\to \\mu_n\\times_S \\mathbb{A}^n_S" }, { "math_id": 11, "text": "\\pi:X\\to S" }, { "math_id": 12, "text": "(R(X),U(X),s,t,m)" }, { "math_id": 13, "text": "R,U" }, { "math_id": 14, "text": "(\\mathrm{Sch}/S)" }, { "math_id": 15, "text": "(R(-),U(-),s,t,m): (\\mathrm{Sch}/S)^\\mathrm{op} \\to \\text{Cat}" }, { "math_id": 16, "text": "\\text{Cat}" }, { "math_id": 17, "text": "[U/R] \\to (\\mathrm{Sch}/S)" }, { "math_id": 18, "text": "k" }, { "math_id": 19, "text": "0 \\in \\mathbb{A}^n_S(k)" }, { "math_id": 20, "text": "\\mu_n(k)" }, { "math_id": 21, "text": "[U/R]" }, { "math_id": 22, "text": "(\\mathrm{Sch}/S)_{fppf}" }, { "math_id": 23, "text": "p: \\mathcal{X} \\to (\\mathrm{Sch}/S)_{fppf}" }, { "math_id": 24, "text": "\\mathcal{X}" }, { "math_id": 25, "text": "\\Delta:\\mathcal{X} \\to \\mathcal{X}\\times_S\\mathcal{X}" }, { "math_id": 26, "text": "fppf" }, { "math_id": 27, "text": "U \\to S" }, { "math_id": 28, "text": "\\mathcal{U} \\to \\mathcal{X}" }, { "math_id": 29, "text": "X,Y \\in \\operatorname{Ob}(\\mathrm{Sch}/S)" }, { "math_id": 30, "text": "X \\to Y" }, { "math_id": 31, "text": "Y" }, { "math_id": 32, "text": "X" }, { "math_id": 33, "text": "f:X\\to Y" }, { "math_id": 34, "text": "\\{X_i \\to X\\}_{i \\in I}" }, { "math_id": 35, "text": "\\mathcal{P}" }, { "math_id": 36, "text": "X_i \\to Y" }, { "math_id": 37, "text": "\\{Y_i \\to Y \\}_{i \\in I}" }, { "math_id": 38, "text": "X\\times_YY_i \\to Y_i" }, { "math_id": 39, "text": "f" }, { "math_id": 40, "text": "\\mathcal{M}_{fg}" }, { "math_id": 41, "text": "f:\\mathcal{X} \\to \\mathcal{Y}" }, { "math_id": 42, "text": "y: (Sch/U)_{fppf} \\to \\mathcal{Y}" }, { "math_id": 43, "text": "(Sch/U)_{fppf}\\times_{\\mathcal{Y}} \\mathcal{X}" }, { "math_id": 44, "text": "F:(Sch/S)^{op}_{fppf} \\to Sets" }, { "math_id": 45, "text": "\\mathcal{S}_F \\to (Sch/S)_{fppf}" }, { "math_id": 46, "text": "U" }, { "math_id": 47, "text": "x, y \\in \\operatorname{Ob}(\\mathcal{X}_U)" }, { "math_id": 48, "text": "\\operatorname{Isom}(x,y)" }, { "math_id": 49, "text": "x : \\operatorname{Spec}(k) \\to \\mathcal{X}_{\\operatorname{Spec}(k)}" }, { "math_id": 50, "text": "\\begin{matrix}\nY \\times_{\\mathcal{X}}Z & \\to & Y \\\\\n\\downarrow & & \\downarrow \\\\\nZ & \\to & \\mathcal{X}\n\\end{matrix}" }, { "math_id": 51, "text": "Y \\to \\mathcal{X}" }, { "math_id": 52, "text": "Y \\to \\mathcal{X}, Z \\to \\mathcal{X}" }, { "math_id": 53, "text": "\\mathcal{X}\\times\\mathcal{X}" }, { "math_id": 54, "text": "(F/S)_{fppf}" }, { "math_id": 55, "text": "(n-1)" }, { "math_id": 56, "text": "n" }, { "math_id": 57, "text": "\\mathcal{U}" }, { "math_id": 58, "text": "h_U" }, { "math_id": 59, "text": "h_U: (Sch/S)_{fppf}^{op} \\to Sets" }, { "math_id": 60, "text": "h_U(T) = \\text{Hom}_{(Sch/S)_{fppf}}(T,U)" }, { "math_id": 61, "text": "h_\\mathcal{U}(T)" }, { "math_id": 62, "text": "h_U(T)" }, { "math_id": 63, "text": "f:T \\to U" }, { 
"math_id": 64, "text": "h_{\\mathcal{U}}:(Sch/S)_{fppf}^{op} \\to Groupoids" }, { "math_id": 65, "text": "p:\\mathcal{X} \\to \\mathcal{Y}" }, { "math_id": 66, "text": "(Sch/S)_{fppf}" }, { "math_id": 67, "text": "T \\to S" }, { "math_id": 68, "text": "t \\in \\text{Ob}(\\mathcal{Y}_T)" }, { "math_id": 69, "text": "(Sch/T)_{fppf}\\times_{t,\\mathcal{Y}} \\mathcal{X}_T" }, { "math_id": 70, "text": "p" }, { "math_id": 71, "text": "(Sch/T)_{fppf}\\times_{t,\\mathcal{Y}} \\mathcal{X}_T \\to (Sch/T)_{fppf}" }, { "math_id": 72, "text": "\\mathcal{U}\\to \\mathcal{X}" }, { "math_id": 73, "text": "BGL_n = [*/GL_n]" }, { "math_id": 74, "text": "\\mathfrak{gl}_n" }, { "math_id": 75, "text": "[*/GL_1] = [*/\\mathbb{G}_m]" }, { "math_id": 76, "text": "\\mathcal{M}_g" }, { "math_id": 77, "text": "B\\mu_n:(\\mathrm{Sch}/S)^\\text{op} \\to \\text{Cat}" }, { "math_id": 78, "text": "B\\mathbb{G}_m" }, { "math_id": 79, "text": "\\mathbb{G}_m" }, { "math_id": 80, "text": "0 \\to \\mu_p \\to \\mathbb{G}_m \\to \\mathbb{G}_m \\to 0" }, { "math_id": 81, "text": "(F/S)" }, { "math_id": 82, "text": "\\text{fpqc} \\supset \\text{fppf} \\supset \\text{smooth} \\supset \\text{etale} \\supset \\text{Zariski}" }, { "math_id": 83, "text": "\\mathcal{O}" }, { "math_id": 84, "text": "\\mathcal{O}:(Sch/S)_{fppf}^{op} \\to Rings, \\text{ where } U/X \\mapsto \\Gamma(U,\\mathcal{O}_U)" }, { "math_id": 85, "text": "p:\\mathcal{X} \\to (Sch/S)_{fppf}" }, { "math_id": 86, "text": "\\mathcal{O}_\\mathcal{X} := p^{-1}\\mathcal{O}" }, { "math_id": 87, "text": "p^{-1}" }, { "math_id": 88, "text": "x \\in \\text{Ob}(\\mathcal{X})" }, { "math_id": 89, "text": "p(x) = U" }, { "math_id": 90, "text": "\\mathcal{O}_\\mathcal{X}(x)=\\Gamma(U,\\mathcal{O}_U)" }, { "math_id": 91, "text": "(\\mathcal{X}_{Zar},\\mathcal{O}_\\mathcal{X}) = ((Sch/X)_{Zar}, \\mathcal{O}_X)" }, { "math_id": 92, "text": "U \\to X" }, { "math_id": 93, "text": "\\mathcal{O}_\\mathcal{X}(U) = \\mathcal{O}_X(U) = \\Gamma(U,\\mathcal{O}_X)" }, { "math_id": 94, "text": "\\mathcal{X} = [X/G]" }, { "math_id": 95, "text": "G" }, { "math_id": 96, "text": "\\mathcal{O}_{\\mathcal{X}}(U) = \\Gamma(U,u^*\\mathcal{O}_X)^{G}" }, { "math_id": 97, "text": "u:U\\to X" }, { "math_id": 98, "text": "BG" } ]
https://en.wikipedia.org/wiki?curid=717377
7173874
Ecophysiology
Study of adaptation of an organism's physiology to environmental conditions Ecophysiology (from Greek "oikos", "house(hold)"; "physis", "nature, origin"; and "-logia"), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym. Plants. Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis. In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions. Light. Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called the light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum efficiency; the x-intercept is the light intensity at which biochemical assimilation (gross assimilation) balances leaf respiration so that the net CO2 exchange of the leaf is zero, called light compensation point; and a horizontal asymptote representing the maximum assimilation rate. Sometimes, after reaching the maximum, assimilation declines due to processes collectively known as photoinhibition. As with most abiotic factors, light intensity (irradiance) can be both suboptimal and excessive. Suboptimal light (shade) typically occurs at the base of a plant canopy or in an understory environment. Shade tolerant plants have a range of adaptations to help them survive the altered quantity and quality of light typical of shade environments. Excess light occurs at the top of canopies and on open ground when cloud cover is low and the sun's zenith angle is low; typically this occurs in the tropics and at high altitudes. Excess light incident on a leaf can result in photoinhibition and photodestruction. Plants adapted to high light environments have a range of adaptations to avoid or dissipate the excess light energy, as well as mechanisms that reduce the amount of injury caused. Light intensity is also an important component in determining the temperature of plant organs (energy budget). Temperature. In response to extremes of temperature, plants can produce various proteins. 
These protect them from the damaging effects of ice formation and falling rates of enzyme catalysis at low temperatures, and from enzyme denaturation and increased photorespiration at high temperatures. As temperatures fall, production of antifreeze proteins and dehydrins increases. As temperatures rise, production of heat shock proteins increases. Metabolic imbalances associated with temperature extremes result in the build-up of reactive oxygen species, which can be countered by antioxidant systems. Cell membranes are also affected by changes in temperature and can cause the membrane to lose its fluid properties and become a gel in cold conditions or to become leaky in hot conditions. This can affect the movement of compounds across the membrane. To prevent these changes, plants can change the composition of their membranes. In cold conditions, more unsaturated fatty acids are placed in the membrane and in hot conditions, more saturated fatty acids are inserted. Plants can avoid overheating by minimising the amount of sunlight absorbed and by enhancing the cooling effects of wind and transpiration. Plants can reduce light absorption using reflective leaf hairs, scales, and waxes. These features are so common in warm dry regions that these habitats can be seen to form a 'silvery landscape' as the light scatters off the canopies. Some species, such as "Macroptilium purpureum", can move their leaves throughout the day so that they are always orientated to avoid the sun ("paraheliotropism"). Knowledge of these mechanisms has been key to breeding for heat stress tolerance in agricultural plants. Plants can avoid the full impact of low temperature by altering their microclimate. For example, "Raoulia" plants found in the uplands of New Zealand are said to resemble 'vegetable sheep' as they form tight cushion-like clumps to insulate the most vulnerable plant parts and shield them from cooling winds. The same principle has been applied in agriculture by using plastic mulch to insulate the growing points of crops in cool climates in order to boost plant growth. Water. Too much or too little water can damage plants. If there is too little water then tissues will dehydrate and the plant may die. If the soil becomes waterlogged then the soil will become anoxic (low in oxygen), which can kill the roots of the plant. The ability of plants to access water depends on the structure of their roots and on the water potential of the root cells. When soil water content is low, plants can alter their water potential to maintain a flow of water into the roots and up to the leaves (Soil plant atmosphere continuum). This remarkable mechanism allows plants to lift water as high as 120 m by harnessing the gradient created by transpiration from the leaves. In very dry soil, plants close their stomata to reduce transpiration and prevent water loss. The closing of the stomata is often mediated by chemical signals from the root (i.e., abscisic acid). In irrigated fields, the fact that plants close their stomata in response to drying of the roots can be exploited to 'trick' plants into using less water without reducing yields (see partial rootzone drying). The use of this technique was largely developed by Dr Peter Dry and colleagues in Australia If drought continues, the plant tissues will dehydrate, resulting in a loss of turgor pressure that is visible as wilting. 
As well as closing their stomata, most plants can also respond to drought by altering their water potential (osmotic adjustment) and increasing root growth. Plants that are adapted to dry environments (Xerophytes) have a range of more specialized mechanisms to maintain water and/or protect tissues when desiccation occurs. Waterlogging reduces the supply of oxygen to the roots and can kill a plant within days. Plants cannot avoid waterlogging, but many species overcome the lack of oxygen in the soil by transporting oxygen to the root from tissues that are not submerged. Species that are tolerant of waterlogging develop specialised roots near the soil surface and aerenchyma to allow the diffusion of oxygen from the shoot to the root. Roots that are not killed outright may also switch to less oxygen-hungry forms of cellular respiration. Species that are frequently submerged have evolved more elaborate mechanisms that maintain root oxygen levels, such as the aerial roots seen in mangrove forests. However, for many terminally overwatered houseplants, the initial symptoms of waterlogging can resemble those due to drought. This is particularly true for flood-sensitive plants that show drooping of their leaves due to epinasty (rather than wilting). CO2 concentration. CO2 is vital for plant growth, as it is the substrate for photosynthesis. Plants take in CO2 through stomatal pores on their leaves. At the same time as CO2 enters the stomata, moisture escapes. This trade-off between CO2 gain and water loss is central to plant productivity. The trade-off is all the more critical as Rubisco, the enzyme used to capture CO2, is efficient only when there is a high concentration of CO2 in the leaf. Some plants overcome this difficulty by concentrating CO2 within their leaves using C4 carbon fixation or Crassulacean acid metabolism. However, most species used C3 carbon fixation and must open their stomata to take in CO2 whenever photosynthesis is taking place. The concentration of CO2 in the atmosphere is rising due to deforestation and the combustion of fossil fuels. This would be expected to increase the efficiency of photosynthesis and possibly increase the overall rate of plant growth. This possibility has attracted considerable interest in recent years, as an increased rate of plant growth could absorb some of the excess CO2 and reduce the rate of global warming. Extensive experiments growing plants under elevated CO2 using Free-Air Concentration Enrichment have shown that photosynthetic efficiency does indeed increase. Plant growth rates also increase, by an average of 17% for above-ground tissue and 30% for below-ground tissue. However, detrimental impacts of global warming, such as increased instances of heat and drought stress, mean that the overall effect is likely to be a reduction in plant productivity. Reduced plant productivity would be expected to accelerate the rate of global warming. Overall, these observations point to the importance of avoiding further increases in atmospheric CO2 rather than risking runaway climate change. Wind. Wind has three very different effects on plants. Exchange of mass and energy. Wind influences the way leaves regulate moisture, heat, and carbon dioxide. When no wind is present, a layer of still air builds up around each leaf. This is known as the boundary layer and in effect insulates the leaf from the environment, providing an atmosphere rich in moisture and less prone to convective heating or cooling. 
As wind speed increases, the leaf environment becomes more closely linked to the surrounding environment. It may become difficult for the plant to retain moisture as it is exposed to dry air. On the other hand, a moderately high wind allows the plant to cool its leaves more easily when exposed to full sunlight. Plants are not entirely passive in their interaction with wind. Plants can make their leaves less vulnerable to changes in wind speed, by coating their leaves in fine hairs (trichomes) to break up the airflow and increase the boundary layer. In fact, leaf and canopy dimensions are often finely controlled to manipulate the boundary layer depending on the prevailing environmental conditions. Acclimation. Plants can sense the wind through the deformation of their tissues. This signal inhibits the elongation and stimulates the radial expansion of their shoots, while increasing the development of their root system. This syndrome of responses, known as thigmomorphogenesis, results in shorter, stockier plants with strengthened stems, as well as improved anchorage. It was once believed that this occurs mostly in very windy areas. But it has been found that it happens even in areas with moderate winds, so that wind-induced signals were found to be a major ecological factor. Trees have a particularly well-developed capacity to reinforce their trunks when exposed to wind. From the practical side, this realisation prompted arboriculturalists in the UK in the 1960s to move away from the practice of staking young amenity trees to offer artificial support. Wind damage. Wind can damage most of the organs of the plants. Leaf abrasion (due to the rubbing of leaves and branches or to the effect of airborne particles such as sand) and leaf or branch breakage are rather common phenomena that plants have to accommodate. In the more extreme cases, plants can be mortally damaged or uprooted by wind. This has been a major selective pressure acting on terrestrial plants. Nowadays, it is one of the major threats to agriculture and forestry even in temperate zones. It is worse for agriculture in hurricane-prone regions, such as the banana-growing Windward Islands in the Caribbean. When this type of disturbance occurs in natural systems, the only solution is to ensure that there is an adequate stock of seeds or seedlings to quickly take the place of the mature plants that have been lost, although, in many cases, a successional stage will be needed before the ecosystem can be restored to its former state. Animals. Humans. The environment can have major influences on human physiology. Environmental effects on human physiology are numerous; one of the most carefully studied effects is the alteration of thermoregulation in the body due to outside stresses. This is necessary because in order for enzymes to function, blood to flow, and for various body organs to operate, temperature must remain at consistent, balanced levels. Thermoregulation. To achieve this, the body alters three main things to maintain a constant, normal body temperature: heat production, heat exchange with the environment, and evaporative heat loss. The hypothalamus plays an important role in thermoregulation. It connects to thermal receptors in the dermis, and detects changes in surrounding blood to make decisions of whether to stimulate internal heat production or to stimulate evaporation. There are two main types of stresses that can be experienced due to extreme environmental temperatures: heat stress and cold stress. 
Heat stress is physiologically combated in four ways: radiation, conduction, convection, and evaporation. Cold stress is physiologically combated by shivering, accumulation of body fat, circulatory adaptations (that provide an efficient transfer of heat to the epidermis), and increased blood flow to the extremities. There is one part of the body fully equipped to deal with cold stress. The respiratory system protects itself against damage by warming the incoming air to 80-90 degrees Fahrenheit before it reaches the bronchi. This means that not even the most frigid of temperatures can damage the respiratory tract. In both types of temperature-related stress, it is important to remain well-hydrated. Hydration reduces cardiovascular strain, enhances the ability of energy processes to occur, and reduces feelings of exhaustion. Altitude. Extreme temperatures are not the only obstacles that humans face. High altitudes also pose serious physiological challenges on the body. Some of these effects are reduced arterial formula_0, the rebalancing of the acid-base content in body fluids, increased hemoglobin, increased RBC synthesis, enhanced circulation, and increased levels of the glycolysis byproduct 2,3 diphosphoglycerate, which promotes off-loading of O2 by hemoglobin in the hypoxic tissues. Environmental factors can play a huge role in the human body's fight for homeostasis. However, humans have found ways to adapt, both physiologically and tangibly. Scientists. George A. Bartholomew (1919–2006) was a founder of animal physiological ecology. He served on the faculty at UCLA from 1947 to 1989, and almost 1,200 individuals can trace their academic lineages to him. Knut Schmidt-Nielsen (1915–2007) was also an important contributor to this specific scientific field as well as comparative physiology. Hermann Rahn (1912–1990) was an early leader in the field of environmental physiology. Starting out in the field of zoology with a Ph.D. from University of Rochester (1933), Rahn began teaching physiology at the University of Rochester in 1941. It is there that he partnered with Wallace O. Fenn to publish "A Graphical Analysis of the Respiratory Gas Exchange" in 1955. This paper included the landmark O2-CO2 diagram, which formed the basis for much of Rahn's future work. Rahn's research into applications of this diagram led to the development of aerospace medicine and advancements in hyperbaric breathing and high-altitude respiration. Rahn later joined the University at Buffalo in 1956 as the Lawrence D. Bell Professor and Chairman of the Department of Physiology. As Chairman, Rahn surrounded himself with outstanding faculty and made the University an international research center in environmental physiology. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "P_{{\\mathrm{O}}_2}" } ]
https://en.wikipedia.org/wiki?curid=7173874
71739731
Countable Borel relation
Descriptive set theory relation In descriptive set theory, specifically invariant descriptive set theory, countable Borel relations are a class of relations between standard Borel spaces which are particularly well behaved. This concept encapsulates various more specific concepts, such as that of a hyperfinite equivalence relation, but is of interest in and of itself. Motivation. A main area of study in invariant descriptive set theory is the relative complexity of equivalence relations. An equivalence relation formula_0 on a set formula_1 is considered more complex than an equivalence relation formula_2 on a set formula_3 if one can "compute formula_2 using formula_0" - formally, if there is a function formula_4 which is well behaved in some sense (for example, one often requires that formula_5 is Borel measurable) such that formula_6. Such a function is called a reduction of formula_2 to formula_0. If this holds in both directions, that is, one can both "compute formula_2 using formula_0" and "compute formula_0 using formula_2", then formula_0 and formula_2 have a similar level of complexity. When one talks about Borel equivalence relations and requires formula_5 to be Borel measurable, this is often denoted by formula_7. Countable Borel equivalence relations, and relations of similar complexity in the sense described above, appear in various places in mathematics (see examples below). In particular, the Feldman-Moore theorem described below proved useful in the study of certain Von Neumann algebras. Definition. Let formula_1 and formula_3 be standard Borel spaces. A "countable Borel relation" between formula_1 and formula_3 is a subset formula_8 of the Cartesian product formula_9 which is a Borel set (as a subset in the product topology) and satisfies that for any formula_10, the set formula_11 is countable. Note that this definition is not symmetric in formula_1 and formula_3, and thus it is possible that a relation formula_8 is a countable Borel relation between formula_1 and formula_3 but the converse relation is not a countable Borel relation between formula_3 and formula_1. The Luzin–Novikov theorem. This theorem, named after Nikolai Luzin and his doctoral student Pyotr Novikov, is an important result used in many proofs about countable Borel relations. Theorem. Suppose formula_1 and formula_3 are standard Borel spaces and formula_8 is a countable Borel relation between formula_1 and formula_3. Then the set formula_25 is a Borel subset of formula_1. Furthermore, there is a Borel function formula_26 (known as a Borel uniformization) such that the graph of formula_5 is a subset of formula_8. Finally, there exist Borel subsets formula_27 of formula_1 and Borel functions formula_28 such that formula_8 is the union of the graphs of the formula_29, that is formula_30. This has a couple of easy consequences: Below are two more results which can be proven using the Luzin–Novikov theorem, concerning countable Borel equivalence relations: Feldman–Moore theorem. The Feldman–Moore theorem, named after Jacob Feldman and Calvin C. Moore, states: Theorem. Suppose formula_0 is a Borel equivalence relation on a standard Borel space formula_1 which has countable equivalence classes. Then there exists a countable group formula_37 and an action of formula_37 on formula_1 such that for every formula_38 the function formula_39 is Borel measurable, and for any formula_10, the equivalence class of formula_40 with respect to formula_0 is exactly the orbit of formula_40 under the action. 
That is to say, countable Borel equivalence relations are exactly those generated by Borel actions by countable groups. Marker lemma. This lemma is due to Theodore Slaman and John R. Steel, and can be proven using the Feldman–Moore theorem: Lemma. Suppose formula_0 is a Borel equivalence relation on a standard Borel space formula_1 which has countable equivalence classes. Let formula_41. Then there is a decreasing sequence formula_42 such that formula_43 for all formula_44 and formula_45. Less formally, the lemma says that the infinite equivalence classes can be approximated by "arbitrarily small" sets (for instance, if we have a Borel probability measure formula_46 on formula_1, the lemma implies that formula_47 by the continuity of the measure). References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "Y" }, { "math_id": 4, "text": "f:Y \\to X" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "\\forall x,y \\in Y: xFy \\iff f(x)Ef(y)" }, { "math_id": 7, "text": "E \\sim_B F" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "X \\times Y" }, { "math_id": 10, "text": "x \\in X" }, { "math_id": 11, "text": "\\lbrace y \\in Y | (x,y) \\in R \\rbrace" }, { "math_id": 12, "text": "f:X\\to Y" }, { "math_id": 13, "text": "\\Gamma(f)" }, { "math_id": 14, "text": "\\lbrace y \\in Y | (x,y) \\in \\Gamma(f) \\rbrace = \\lbrace f(x) \\rbrace" }, { "math_id": 15, "text": "\\lbrace (f(x),x) | x \\in X \\rbrace" }, { "math_id": 16, "text": "F_2" }, { "math_id": 17, "text": "\\lbrace G \\in X | a \\in G \\rbrace" }, { "math_id": 18, "text": "\\lbrace G \\in X | a \\notin G \\rbrace" }, { "math_id": 19, "text": "a \\in F_2" }, { "math_id": 20, "text": "2^{F_2}" }, { "math_id": 21, "text": "G \\sim H \\iff \\exists a \\in F_2 : G=a^{-1}Ha" }, { "math_id": 22, "text": "\\mathcal{C} " }, { "math_id": 23, "text": "\\lbrace X \\in \\mathcal{C} | n \\in X \\rbrace" }, { "math_id": 24, "text": "\\lbrace X \\in \\mathcal{C} | n \\notin X \\rbrace" }, { "math_id": 25, "text": "Proj_X(R)=\\lbrace x \\in X | \\exists y \\in Y:(x,y) \\in R\\rbrace" }, { "math_id": 26, "text": "f:Proj_X(R) \\to Y" }, { "math_id": 27, "text": "\\lbrace A_n \\rbrace_{n=1}^\\infty" }, { "math_id": 28, "text": "f_n:A_n \\to Y" }, { "math_id": 29, "text": "f_n" }, { "math_id": 30, "text": "R=\\lbrace (x,y) \\in X \\times Y | \\exists n \\in \\N : x \\in A_n \\and y=f_n(x) \\rbrace" }, { "math_id": 31, "text": "Proj_Y(R)" }, { "math_id": 32, "text": "A" }, { "math_id": 33, "text": "[A]_E=\\lbrace x \\in X | \\exists x' \\in A:xEx'\\rbrace" }, { "math_id": 34, "text": "Proj_X(R)" }, { "math_id": 35, "text": "R=E\\cap(X \\times A)" }, { "math_id": 36, "text": "X \\times A" }, { "math_id": 37, "text": "G" }, { "math_id": 38, "text": "g \\in G" }, { "math_id": 39, "text": "x \\mapsto g.x" }, { "math_id": 40, "text": "x" }, { "math_id": 41, "text": "B=\\lbrace x \\in X | |[x]_E|=\\aleph_0 \\rbrace" }, { "math_id": 42, "text": "B \\supseteq S_1 \\supseteq S_2 \\supseteq ... " }, { "math_id": 43, "text": "[S_n]_E=B" }, { "math_id": 44, "text": "S_n" }, { "math_id": 45, "text": "\\bigcap_{n=1}^\\infty S_n = \\emptyset" }, { "math_id": 46, "text": "\\mu" }, { "math_id": 47, "text": "\\lim_{n \\to \\infty} \\mu(S_n) = 0" } ]
https://en.wikipedia.org/wiki?curid=71739731
71740001
Random polytope
Mathematical object In mathematics, a random polytope is a structure commonly used in convex analysis and the analysis of linear programs in "d"-dimensional Euclidean space formula_0. Depending on use, the construction and definition of random polytopes may differ. Definition. There are multiple non-equivalent definitions of a random polytope. For the following definitions, let "K" be a bounded convex set in a Euclidean space: Properties definition 1. Let formula_3 be the set of convex bodies in formula_0. Assume formula_4 and consider a set of uniformly distributed points formula_5 in formula_6. The convex hull of these points, formula_7, is called a random polytope inscribed in formula_6. formula_8 where the set formula_9 stands for the convex hull of the set. We define formula_10 to be the expected volume of formula_11. For a large enough formula_12 and given formula_13. Here, formula_24 is the volume of a smaller cap cut off from formula_25 by aff formula_26, and formula_27 is a facet if and only if formula_28 are all on one side of aff formula_29. Properties definition 2. Assume we are given a multivariate probability distribution on formula_33 satisfying suitable assumptions. Given this distribution, and our assumptions, the following properties hold: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
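A Monte Carlo sketch of the first construction above (uniform points in a convex body formula_6 and their convex hull formula_7), estimating the expected missed volume formula_10; the choice of the unit square as formula_6 and the use of NumPy/SciPy in Python are assumptions of this illustration:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def expected_missed_area(n, trials=2000):
    """Estimate E(K, n) for K = the unit square: the expected area of K
    not covered by the convex hull of n independent uniform points."""
    missed = []
    for _ in range(trials):
        pts = rng.random((n, 2))           # n uniform points in [0, 1]^2
        hull = ConvexHull(pts)             # the random polytope K_n
        missed.append(1.0 - hull.volume)   # in 2D, `volume` is the enclosed area
    return float(np.mean(missed))

# The expected missed volume shrinks as the number of sampled points grows.
for n in (10, 50, 250):
    print(n, round(expected_missed_area(n), 4))
```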
[ { "math_id": 0, "text": "\\R^d" }, { "math_id": 1, "text": "r:(\\R^d \\times \\{0,1\\})^m\\rightarrow \\text{Polytopes} \\in \\R^d" }, { "math_id": 2, "text": "r((p_1, 0), (p_2, 1), (p_3, 1)...(p_m, i_m)) = \\{x \\in \\R^n: | \\frac{p_j}{||p_j||} \\cdot x \\leq ||p_j|| \\text{ if } i_j=1, \\frac{p_j}{||p_j||} \\cdot x \\geq ||p_j|| \\text{ if } i_j=0 \\}" }, { "math_id": 3, "text": "\\Kappa " }, { "math_id": 4, "text": "K \\in\\Kappa " }, { "math_id": 5, "text": "x_1, ..., x_n" }, { "math_id": 6, "text": "K " }, { "math_id": 7, "text": "K_n " }, { "math_id": 8, "text": "K_n = [x_1, ..., x_n] " }, { "math_id": 9, "text": "[S] " }, { "math_id": 10, "text": "E(k,n)" }, { "math_id": 11, "text": "K - K_n" }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": "K \\in \\R^n" }, { "math_id": 14, "text": "K(\\frac{1}{n})\\ll E(K,n) \\ll" }, { "math_id": 15, "text": "K(\\frac{1}{n})" }, { "math_id": 16, "text": "E(K,n) " }, { "math_id": 17, "text": "E(K,n)" }, { "math_id": 18, "text": "B^d \\in \\R^d" }, { "math_id": 19, "text": "B^d(v \\leq t)" }, { "math_id": 20, "text": "\\frac{B^d}{(1-h)B^d}" }, { "math_id": 21, "text": "t^{\\frac{2}{d+1}}" }, { "math_id": 22, "text": "E(B^d,n) \\approx " }, { "math_id": 23, "text": "B^d (\\frac{1}{n}) \\approx n^{\\frac{-2}{d+1}}" }, { "math_id": 24, "text": "V = V(x_1,...,x_d)" }, { "math_id": 25, "text": "K" }, { "math_id": 26, "text": "(x_1,...,x_d)" }, { "math_id": 27, "text": "F=[x_1,...,x_d]" }, { "math_id": 28, "text": "x_{d+1},...,x_n" }, { "math_id": 29, "text": "\\{x_1,...,x_d\\}" }, { "math_id": 30, "text": "E_{\\phi}(K_n) = {{n}\\choose{d}} \\int_K ... \\int_K [(1-V)^{n-d} + V^{n-d}]\\phi(F)dx_1...dx_d" }, { "math_id": 31, "text": "\\phi = f_{d-1}" }, { "math_id": 32, "text": "\\phi(F) = 1" }, { "math_id": 33, "text": "(\\R^d \\times \\{ 0, 1\\})^m=(p_1\\times i_1,\\dots,p_m\\times i_m)^m" }, { "math_id": 34, "text": "(p_1,\\dots,p_d)" }, { "math_id": 35, "text": "i" }, { "math_id": 36, "text": "\\frac{1}{2}" }, { "math_id": 37, "text": "(\\R^d \\times \\{ 0, 1\\})^m" }, { "math_id": 38, "text": "k" }, { "math_id": 39, "text": "m" }, { "math_id": 40, "text": "E_k(m) = 2^{d-k} \\sum_{i = d - k}^{d}{{i}\\choose{d-k}}{{m}\\choose{i}}/\\sum_{i=0}^{d}{{m}\\choose{i}}" }, { "math_id": 41, "text": "\\lim_{m \\to \\infty}E_k(m) = {{d}\\choose{d-k}}2^{d-k}" }, { "math_id": 42, "text": "m>d" }, { "math_id": 43, "text": "V_{max} = {m-[\\frac{1}{2}(d+1)]\\choose m-d} + {m-[\\frac{1}{2}(d+2)]\\choose m-d}" }, { "math_id": 44, "text": "\\pi_{m} = 1 - \\frac{2\\sum_{i=0}^{d-1}{{m-1}\\choose i}}{\\sum_{i=0}^{d}{m\\choose i}}" }, { "math_id": 45, "text": "\\lim_{m \\to \\infty}{\\pi_m} = 1" }, { "math_id": 46, "text": "C_d(m) = \\frac{2m\\sum_{i=0}^{d-1}{{m-1}\\choose i}}{\\sum_{i=0}^{d}{{m}\\choose i}}" }, { "math_id": 47, "text": "\\lim_{m \\to \\infty}C_d(m) = 2d" } ]
https://en.wikipedia.org/wiki?curid=71740001
7174287
Contact number
In chemistry, a contact number (CN) is a simple solvent exposure measure that quantifies residue burial in proteins. The definition of CN varies between authors, but it is generally taken to be the number of either Cformula_0 or Cformula_1 atoms within a sphere around the Cformula_0 or Cformula_1 atom of the residue. The radius of the sphere is typically chosen to be between 8 and 14 Å. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
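A minimal sketch of how such a count might be computed, assuming one representative atom position per residue (e.g. the Cformula_0 atom) is already available as a NumPy array of coordinates in ångströms; the 10 Å cutoff and the Python/NumPy choice are assumptions within the 8–14 Å range mentioned above:

```python
import numpy as np

def contact_numbers(coords, radius=10.0):
    """Contact number of each residue: how many other residues'
    representative atoms lie within `radius` angstroms of its own.

    coords: (N, 3) array, one representative atom position per residue.
    Returns an (N,) array of integer contact numbers."""
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise displacement vectors
    dist = np.sqrt((diff ** 2).sum(axis=-1))         # pairwise distances
    within = dist <= radius
    np.fill_diagonal(within, False)                  # a residue is not its own contact
    return within.sum(axis=1)

# Toy example: four residues spaced 5 angstroms apart along a line.
coords = np.array([[0.0, 0.0, 0.0],
                   [5.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0],
                   [15.0, 0.0, 0.0]])
print(contact_numbers(coords))  # [2 3 3 2]
```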
[ { "math_id": 0, "text": "\\beta" }, { "math_id": 1, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=7174287
717519
Centered pentagonal number
Centered figurate number that represents a pentagon with a dot in the center A centered pentagonal number is a centered figurate number that represents a pentagon with a dot in the center and all other dots surrounding the center in successive pentagonal layers. The centered pentagonal number for "n" is given by the formula formula_0. The first few centered pentagonal numbers are 1, 6, 16, 31, 51, 76, 106, 141, 181, 226, 276, 331, 391, 456, 526, 601, 681, 766, 856, 951, 1051, 1156, 1266, 1381, 1501, 1626, 1756, 1891, 2031, 2176, 2326, 2481, 2641, 2806, 2976 (sequence in the OEIS). The centered pentagonal numbers satisfy the recurrences formula_1 and formula_2, and each is related to the triangular numbers by formula_3.
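A brief sketch (in Python, purely for illustration) computing the sequence from the closed form formula_0 and cross-checking it against the first recurrence formula_1; note that the closed form is indexed from "n" = 1 while the recurrence starts at "n" = 0:

```python
def centered_pentagonal(n):
    """n-th centered pentagonal number from the closed form, for n = 1, 2, 3, ..."""
    return (5 * n * n - 5 * n + 2) // 2

terms = [centered_pentagonal(n) for n in range(1, 11)]
print(terms)  # [1, 6, 16, 31, 51, 76, 106, 141, 181, 226]

# The same values from the recurrence P_k = P_{k-1} + 5k with P_0 = 1.
p, via_recurrence = 1, [1]
for k in range(1, 10):
    p += 5 * k
    via_recurrence.append(p)
assert via_recurrence == terms
```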
[ { "math_id": 0, "text": "P_{n}={{5n^2 - 5n + 2} \\over 2}, n\\geq1" }, { "math_id": 1, "text": "P_{n}=P_{n-1}+5n , P_0=1" }, { "math_id": 2, "text": "P_{n}=3(P_{n-1}-P_{n-2})+P_{n-3} , P_0=1,P_1=6,P_2=16" }, { "math_id": 3, "text": "P_{n}=5T_{n-1}+1" } ]
https://en.wikipedia.org/wiki?curid=717519
717591
Real and nominal value
Value in economics and accounting In economics, nominal value refers to value measured in terms of absolute money amounts, whereas real value is considered and measured against the actual goods or services for which it can be exchanged at a given time. Real value takes into account inflation and the value of an asset in relation to its purchasing power. In macroeconomics, the real gross domestic product compensates for inflation so economists can exclude inflation from growth figures, and see how much an economy actually grows. Nominal GDP would include inflation, and thus be higher. Commodity bundles, price indices and inflation. A commodity bundle is a sample of goods, which is used to represent the sum total of goods across the economy to which the goods belong, for the purpose of comparison across different times (or locations). At a single point of time, a commodity bundle consists of a list of goods, and each good in the list has a market price and a quantity. The market value of the good is the market price times the quantity at that point of time. The nominal value of the commodity bundle at a point of time is the total market value of the commodity bundle, depending on the market price, and the quantity, of each good in the commodity bundle which are current at the time. A price index is the relative price of a commodity bundle. A price index can be measured over time, or at different locations or markets. If it is measured over time, it is a series of values formula_0 over time formula_1. A time series price index is calculated relative to a base or reference date. formula_2 is the value of the index at the base date. For example, if the base date is (the end of) 1992, formula_2 is the value of the index at (the end of) 1992. The price index is typically normalized to start at 100 at the base date, so formula_2 is set to 100. The length of time between each value of formula_1 and the next one, is normally constant regular time interval, such as a calendar year. formula_0 is the value of the price index at time formula_1 after the base date. formula_0 equals 100 times the value of the commodity bundle at time formula_1, divided by the value of the commodity bundle at the base date. If the price of the commodity bundle has increased by one percent over the first period after the base date, then "P"1 = 101. The inflation rate formula_3 between time formula_4 and time formula_1 is the change in the price index divided by the price index value at time formula_4: formula_5 formula_6 expressed as a percentage. Real value. The nominal value of a commodity bundle tends to change over time. In contrast, by definition, the real value of the commodity bundle in aggregate remains the same over time. The real values of individual goods or commodities may rise or fall against each other, in relative terms, but a representative commodity bundle as a whole retains its real value as a constant from one period to the next. Real values can for example be expressed in constant 1992 dollars, with the price level fixed 100 at the base date. The price index is applied to adjust the nominal value formula_7 of a quantity, such as wages or total production, to obtain its real value. The real value is the value expressed in terms of purchasing power in the base year. The index price divided by its base-year value formula_8 gives the growth factor of the price index. Real values can be found by dividing the nominal value by the growth factor of a price index. 
Using the price index growth factor as a divisor for converting a nominal value into a real value, the real value at time "t" relative to the base date is: formula_9 Real growth rate. The real growth rate formula_10 is the change in a nominal quantity formula_11 in real terms since the previous date formula_4. It measures by how much the buying power of the quantity has changed over a single period. formula_12 formula_13 formula_14 formula_15 where formula_16 is the nominal growth rate of formula_11, and formula_3 is the inflation rate. formula_17 For values of formula_3 between −1 and 1 (i.e. ±100 percent), we have the Taylor series formula_18 so formula_19 formula_20 Hence as a first-order ("i.e." linear) approximation, formula_21 Real wages and real gross domestic products. The bundle of goods used to measure the Consumer Price Index (CPI) is applicable to consumers. So for wage earners as consumers, an appropriate way to measure real wages (the buying power of wages) is to divide the nominal wage (after-tax) by the growth factor in the CPI. Gross domestic product (GDP) is a measure of aggregate output. Nominal GDP in a particular period reflects prices that were current at the time, whereas real GDP compensates for inflation. Price indices and the U.S. National Income and Product Accounts are constructed from bundles of commodities and their respective prices. In the case of GDP, a suitable price index is the GDP price index. In the U.S. National Income and Product Accounts, nominal GDP is called "GDP in current dollars" (that is, in prices current for each designated year), and real GDP is called "GDP in [base-year] dollars" (that is, in dollars that can purchase the same quantity of commodities as in the base year). Real interest rates. As was shown in the section above on the real growth rate, formula_17 where formula_10 is the rate of increase of a quantity in real terms, formula_16 is the rate of increase of the same quantity in nominal terms, and formula_3 is the rate of inflation, and as a first-order approximation, formula_22 In the case where the growing quantity is a financial asset, formula_16 is a nominal interest rate and formula_10 is the corresponding real interest rate; the first-order approximation formula_21 is known as the Fisher equation. Looking back into the past, the "ex post" real interest rate is approximately the historical nominal interest rate minus inflation. Looking forward into the future, the expected real interest rate is approximately the nominal interest rate minus the expected inflation rate. Cross-sectional comparison. Not only time-series data, as above, but also cross-sectional data which depends on prices which may vary geographically for example, can be adjusted in a similar way. For example, the total value of a good produced in a region of a country depends on both the amount and the price. To compare the output of different regions, the nominal output in a region can be adjusted by repricing the goods at common or average prices. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P_t" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "P_0" }, { "math_id": 3, "text": "i_t" }, { "math_id": 4, "text": "t-1" }, { "math_id": 5, "text": "i_t = \\frac{P_t-P_{t-1}}{P_{t-1}}" }, { "math_id": 6, "text": "= \\frac{P_t}{P_{t-1}} - 1" }, { "math_id": 7, "text": "Q" }, { "math_id": 8, "text": "P_t / P_0" }, { "math_id": 9, "text": "\\frac{P_0 \\cdot Q_t}{P_t}" }, { "math_id": 10, "text": "r_t" }, { "math_id": 11, "text": "Q_t" }, { "math_id": 12, "text": "r_t = \\frac{P_0 \\cdot Q_t}{P_t} \\Bigg/ \\frac{P_0 \\cdot Q_{t-1}}{P_{t-1}} - 1" }, { "math_id": 13, "text": "= \\frac{P_{t-1} \\cdot Q_t}{P_t \\cdot Q_{t-1}} - 1" }, { "math_id": 14, "text": "= \\frac{Q_t}{Q_{t-1}} (\\frac{P_t}{P_{t-1}})^{-1} - 1" }, { "math_id": 15, "text": "= \\frac{1 + g_t}{1 + i_t} - 1" }, { "math_id": 16, "text": "g_t" }, { "math_id": 17, "text": "1 + r_t = \\frac{1 + g_t}{1 + i_t}" }, { "math_id": 18, "text": "(1 + i_t)^{-1} = 1 - i_t + i_t^2 - i_t^3 + ..." }, { "math_id": 19, "text": "1 + r_t = (1 + g_t)(1 - i_t + i_t^2 - i_t^3 + ...)" }, { "math_id": 20, "text": "= 1 + g_t - i_t - g_t i_t + i_t^2 + \\text {higher order terms.}" }, { "math_id": 21, "text": "r_t = g_t - i_t" }, { "math_id": 22, "text": "r_t = g_t - i_t." } ]
https://en.wikipedia.org/wiki?curid=717591
71759684
Lexicographic dominance
Statistical property Lexicographic dominance is a total order between random variables. It is a form of stochastic ordering. It is defined as follows. Random variable A has lexicographic dominance over random variable B (denoted formula_0) if A assigns a higher probability than B to the best outcome on which the two differ. In other words: let "k" be the first index for which the probability of receiving the k-th best outcome is different for A and B. Then this probability should be higher for A. Variants. Upward lexicographic dominance is defined analogously, but comparing outcomes from the worst upward. Random variable A has upward lexicographic dominance over random variable B (denoted formula_1) if, at the first index (counting from the worst outcome) at which the probabilities differ, A assigns a lower probability than B. To distinguish between the two notions, the standard lexicographic dominance notion is sometimes called downward lexicographic dominance and denoted formula_2. Relation to other dominance notions. First-order stochastic dominance implies both downward-lexicographic and upward-lexicographic dominance. The opposite is not true. For example, suppose there are four outcomes ranked z > y > x > w. Two lotteries A and B over these outcomes can be constructed such that formula_3 holds, yet formula_4 and formula_5. Applications. Lexicographic dominance relations are used in social choice theory to define notions of strategyproofness, incentives for participation, ordinal efficiency and envy-freeness. Hosseini and Larson analyse the properties of rules for fair random assignment based on lexicographic dominance.
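A small sketch can make the definition above concrete. The helper function and the example lotteries are illustrative assumptions only; probabilities are listed from the best outcome down to the worst.

```python
# Downward lexicographic dominance check. A lottery is a list of probabilities
# over the same outcomes, ordered from the best outcome to the worst.
# The function name and the example probabilities are made up for illustration.

def lex_dominates(A, B):
    """True if A strictly (downward-)lexicographically dominates B."""
    for p_a, p_b in zip(A, B):
        if p_a != p_b:           # first index at which the lotteries differ...
            return p_a > p_b     # ...must favour A
    return False                 # identical lotteries: no strict dominance

A = [0.5, 0.3, 0.1, 0.1]         # P(best), P(2nd best), P(3rd best), P(worst)
B = [0.5, 0.2, 0.2, 0.1]

print(lex_dominates(A, B))       # True: tie on the best outcome, A higher on the 2nd best
print(lex_dominates(B, A))       # False
```

Returning False for identical lotteries keeps the relation strict, which matches the use of the symbol for strict dominance above.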
[ { "math_id": 0, "text": "A \\succ_{ld} B" }, { "math_id": 1, "text": "A \\succ_{ul} B" }, { "math_id": 2, "text": "A \\succ_{dl} B" }, { "math_id": 3, "text": "B \\succ_{ul} A" }, { "math_id": 4, "text": "A \\not\\succ_{sd} B" }, { "math_id": 5, "text": "B \\not\\succ_{sd} A" } ]
https://en.wikipedia.org/wiki?curid=71759684
71765251
Job 14
Job 14 is the fourteenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q100 (4QJobb; 50–1 BCE) with extant verses 4–6 and 4Q101 (4QpaleoJobc; 250–150 BCE) with extant verses 13–18. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 14 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapters 12 to 14 contain Job's closing speech of the first round, where he directly addresses his friends (12:2–3; 13:2, 4–12). There are two major units in chapter 14, each with a distinct key question: Job laments the brevity of human life (14:1–6). This section contains Job's laments of his suffering against the backdrop of human sorrow in general (echoing chapter 7). Three phrases ("born of a woman", "few of days" and "full of trouble"; verse 1) and the analogies to "a flower" and "a shadow" (verse 2) emphasize human limitations as well as the brevity of human life. Job attempts to protest that God treats him as a "hired man", which is 'unsuited for his limilations' (verses 5–6). [Job said:] ""Look away from him that he may rest," "Till like a hired man he finishes his day."" Verse 6. Here Job depicts humans as "hired laborers" under a harsh taskmaster, so 'life becomes mere tedium driven by obligation and fear', instead of 'joyful service to a caring master'. Job laments the lack of hope for humans (14:7–22). There are three units in this section: The center point is that Job wants God to 'remember' him (verse 13; cf. Job 7:7, 10:9) and protect him from divine wrath, believing that God is in charge, although in the ways that Job does not fully understand. [Job said:] "Oh, that You would hide me in the grave," "that You would conceal me until Your wrath is past," "that You would appoint me a set time" "and remember me!" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71765251
71765416
Job 15
Job 15 is the fifteenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Eliphaz the Temanite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 35 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 15 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 15 consists of three parts: Eliphaz challenges Job (15:1–16). The first part of this section contains Eliphaz's rebuke to Job for the choices Job made and the emptiness of the words of Job, who thinks of himself as a wise man (verses 1–6). Eliphaz concerns that Job undermines the proper attitude of respecting God (Eliphaz is the only one of Job's three friends who refers to the "fear of God"). Eliphaz challenges each of Job's possible justifications and rejects each in turn: Eliphaz suggests that Job should be satisfied with his current condition, rather than searching for further answers, because no human can come to God with a clean slate (verse 11–13, 16). [Eliphaz said:] "Should a wise man answer with windy knowledge," "and fill his belly with the east wind?" Eliphaz describes the fate of the wicked (15:17–35). The lengthy description exploring the fate of the wicked in this section serves as a warning to Job. Each of the three friends states their particular description with different functions: Eliphaz claims that Job would have known the teaching because it is in the tradition of the sages (verses 18–19). In essence, Eliphaz describes the negative aspect of the doctrine of retribution, that is, 'God will punish those who do evil' (verses 20–24 and 27–35). Eliphaz's final verdict uses the imagery of birth that the conceived wickedness and deceit will grow up to be evil. [Eliphaz said:] "They conceive trouble and give birth to evil," "and their womb prepares deceit." Verse 35. The last statement that 'Job's belly prepares deception' forms an 'inclusio' which frames Eliphaz's speech with the statement at the start that 'Job’s belly was filled with the wind'. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71765416
71765420
Job 31
Job 31 is the 31st chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –31:40. Text. The original text is written in Hebrew language. This chapter is divided into 40 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q99 (4QJoba; 175–60 BCE) with extant verses 14–19 and 4Q100 (4QJobb; 50–1 BCE) with extant verses 20–21. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 31 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. At the end of the Dialogue, Job sums up his speech in a comprehensive review (chapters 29–31), with Job 29 describes Job's former prosperity, Job 30 focuses on Job's current suffering and Job 31 outlines Job's final defense. The whole part is framed by Job's longing for a restored relationship with God (Job 29:2) and the legal challenge to God (Job 31:35–27). Chapter 31 contains Job's final defense before God, in which he pledges the "oath of clearance", a form of self-curse, that is calling down upon oneself the wrath of God, if what the person is swearing is false. This chapter has been regarded as an important source to understand the Hebrew Bible (Old Testament) perspective of "personal ethics of a righteous person". There is no clear structure of Job's oath of clearance as it lists a succession of possible breaches of laws, starting with an "if" and extending throughout the chapter. Job has rejected evil (31:1–12). One by one Job lists his attitudes and actions which reject evil in this section of his oath of clearance. These evil deeds include lust towards young (unmarried) girls (verse 2–4), falsehood and deceit (verses 5–6), moral impurities (verses 7–8), and adultery (verses 9–12). [Job said:] "let me be weighed in an even balance" "that God may know my integrity." Job has behaved righteously (31:13–34). In this section Job lists how he treated his servants (verses 13–15), the poor and marginalized (verses 16–23; refuting Eliphaz's charges in Job 22:5–9), his refusal to trust in riches (verse 24–25) or adopt pagan worship practices (verses 26–28) and some other accusations of sins (verses 29–32). Job strongly denies that he hides any sins (verses 33–34). [Job said:] "Did not He who made me in the womb make him?" "And did not the same One fashion us in the womb?" Verse 15. Job treats his slaves beyond what is required in the Mosaic law (cf. Exodus 21:1-11; Leviticus 25:39-55; Deuteronomy 15:12-18). In the ancient Near East, slaves were typically regarded as property, but Job views his slaves as fellow humans made by God, possessing the same human rights. Job's final plea of vindication (31:35–40). 
The last part begins with an appeal to compel a plaintiff to present any evidence of Job's wrongdoings. This is seen within the boundary of true piety, as a righteous man seeking a vindication. Job completes the last part of his oath of clearance by stating his right treatment of the land. After these statements, there is a note that "the words of Job are ended", that is, Job ends his dispute with God at this point, although Job will still make two short contributions in response of God's speeches (Job 40:3–5; 42:1–6). [Job said:] "let thistles grow instead of wheat," "and weeds instead of barley." "The words of Job are ended." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71765420
7176679
Data fusion
Integration of multiple data sources to provide better information Data fusion is the process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source. Data fusion processes are often categorized as low, intermediate, or high, depending on the processing stage at which fusion takes place. Low-level data fusion combines several sources of raw data to produce new raw data. The expectation is that fused data is more informative and synthetic than the original inputs. For example, sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion. The concept of data fusion has origins in the evolved capacity of humans and animals to incorporate information from multiple senses to improve their ability to survive. For example, a combination of sight, touch, smell, and taste may indicate whether a substance is edible. The JDL/DFIG model. In the mid-1980s, the Joint Directors of Laboratories formed the Data Fusion Subpanel (which later became known as the Data Fusion Group). With the advent of the World Wide Web, data fusion thus included data, sensor, and information fusion. The JDL/DFIG introduced a model of data fusion that divided the various processes. Currently, the six levels with the Data Fusion Information Group (DFIG) model are: Although the JDL Model (Level 1–4) is still in use today, it is often criticized for its implication that the levels necessarily happen in order and also for its lack of adequate representation of the potential for a human-in-the-loop. The DFIG model (Level 0–5) explored the implications of situation awareness, user refinement, and mission management. Despite these shortcomings, the JDL/DFIG models are useful for visualizing the data fusion process, facilitating discussion and common understanding, and important for systems-level information fusion design. Geospatial applications. In the geospatial (GIS) domain, data fusion is often synonymous with data integration. In these applications, there is often a need to combine diverse data sets into a unified (fused) data set which includes all of the data points and time steps from the input data sets. The fused data set is different from a simple combined superset in that the points in the fused data set contain attributes and metadata which might not have been included for these points in the original data set. A simplified example of this process is shown below where data set "α" is fused with data set β to form the fused data set δ. Data points in set "α" have spatial coordinates X and Y and attributes A1 and A2. Data points in set β have spatial coordinates X and Y and attributes B1 and B2. The fused data set contains all points and attributes. In a simple case where all attributes are uniform across the entire analysis domain, the attributes may be simply assigned: "M?, N?, Q?, R?" to M, N, Q, R. In a real application, attributes are not uniform and some type of interpolation is usually required to properly assign attributes to the data points in the fused set. In a much more complicated application, marine animal researchers use data fusion to combine animal tracking data with bathymetric, meteorological, sea surface temperature (SST) and animal habitat data to examine and understand habitat utilization and animal behavior in reaction to external forces such as weather or water temperature. 
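Before returning to the marine-animal application, the simplified fusion of sets "α" and β described above can be sketched in code. Everything in the sketch is an illustrative assumption: the coordinates, the attribute values and the nearest-neighbour fill rule stand in for whatever interpolation a real GIS workflow would use, and no particular toolkit is implied.

```python
# Illustrative sketch of fusing point set alpha (attributes A1, A2) with point
# set beta (attributes B1, B2) into a fused set delta in which every point
# carries X, Y, A1, A2, B1 and B2. Values and the fill rule are invented.

def nearest(point, candidates):
    """Candidate point closest to `point` in 2-D Euclidean distance."""
    return min(candidates,
               key=lambda c: (c["x"] - point["x"]) ** 2 + (c["y"] - point["y"]) ** 2)

alpha = [{"x": 0.0, "y": 0.0, "A1": 1.0, "A2": 2.0},
         {"x": 1.0, "y": 1.0, "A1": 3.0, "A2": 4.0}]
beta  = [{"x": 0.2, "y": 0.1, "B1": 5.0, "B2": 6.0},
         {"x": 2.0, "y": 2.0, "B1": 7.0, "B2": 8.0}]

delta = []
for p in alpha:                              # alpha points gain the B attributes
    q = nearest(p, beta)
    delta.append({**p, "B1": q["B1"], "B2": q["B2"]})
for p in beta:                               # beta points gain the A attributes
    q = nearest(p, alpha)
    delta.append({**p, "A1": q["A1"], "A2": q["A2"]})

print(delta[0])   # every fused point now carries all four attributes
```

The same idea, applied with proper spatial and temporal interpolation, underlies the larger animal-tracking example that follows.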
Each of these data sets exhibit a different spatial grid and sampling rate so a simple combination would likely create erroneous assumptions and taint the results of the analysis. But through the use of data fusion, all data and attributes are brought together into a single view in which a more complete picture of the environment is created. This enables scientists to identify key locations and times and form new insights into the interactions between the environment and animal behaviors. In the figure at right, rock lobsters are studied off the coast of Tasmania. Hugh Pederson of the University of Tasmania used data fusion software to fuse southern rock lobster tracking data (color-coded for in yellow and black for day and night, respectively) with bathymetry and habitat data to create a unique 4D picture of rock lobster behavior. Data integration. In applications outside of the geospatial domain, differences in the usage of the terms Data integration and Data fusion apply. In areas such as business intelligence, for example, data integration is used to describe the combining of data, whereas data fusion is integration followed by reduction or replacement. Data integration might be viewed as set combination wherein the larger set is retained, whereas fusion is a set reduction technique with improved confidence. Application areas. &lt;templatestyles src="Div col/styles.css"/&gt; From multiple traffic sensing modalities. The data from the different sensing technologies can be combined in intelligent ways to determine the traffic state accurately. A Data fusion based approach that utilizes the road side collected acoustic, image and sensor data has been shown to combine the advantages of the different individual methods. Decision fusion. In many cases, geographically dispersed sensors are severely energy- and bandwidth-limited. Therefore, the raw data concerning a certain phenomenon are often summarized in a few bits from each sensor. When inferring on a binary event (i.e., formula_0 or formula_1 ), in the extreme case only binary decisions are sent from sensors to a Decision Fusion Center (DFC) and combined in order to obtain improved classification performance. For enhanced contextual awareness. With a multitude of built-in sensors including motion sensor, environmental sensor, position sensor, a modern mobile device typically gives mobile applications access to a number of sensory data which could be leveraged to enhance the contextual awareness. Using signal processing and data fusion techniques such as feature generation, feasibility study and principal component analysis (PCA) such sensory data will greatly improve the positive rate of classifying the motion and contextual relevant status of the device. Many context-enhanced information techniques are provided by Snidaro, et al. Statistical methods. Bayesian auto-regressive Gaussian processes. Gaussian processes are a popular machine learning model. If an auto-regressive relationship between the data is assumed, and each data source is assumed to be a Gaussian process, this constitutes a non-linear Bayesian regression problem. Semiparametric estimation. Many data fusion methods assume common conditional distributions across several data sources. Recently, methods have been developed to enable efficient estimation within the resulting semiparametric model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
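For the decision-fusion setting described above, in which bandwidth-limited sensors forward only a single bit each, the simplest rule at the fusion centre is a counting ("k-out-of-n") rule. The sketch below is a minimal illustration; the local decisions and the threshold are made-up values, and a real fusion centre would normally derive the threshold from the sensors' error probabilities.

```python
# Minimal hard-decision fusion sketch: each sensor reports one bit
# (1 = "event present") and the fusion centre declares the event when at
# least k of the n bits agree. Decisions and threshold are invented values.

def fuse_decisions(decisions, k):
    """k-out-of-n counting rule at the decision fusion centre."""
    return 1 if sum(decisions) >= k else 0

local_decisions = [1, 0, 1, 1, 0]           # bits from five dispersed sensors
print(fuse_decisions(local_decisions, 3))   # 1 -> fused decision: event present
print(fuse_decisions(local_decisions, 4))   # 0 -> stricter threshold, no detection
```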
[ { "math_id": 0, "text": "\\mathcal{H}_0" }, { "math_id": 1, "text": "\\mathcal{H}_1" } ]
https://en.wikipedia.org/wiki?curid=7176679
7176811
Absolute presentation of a group
In mathematics, an absolute presentation is one method of defining a group. Recall that to define a group formula_0 by means of a presentation, one specifies a set formula_1 of generators so that every element of the group can be written as a product of some of these generators, and a set formula_2 of relations among those generators. In symbols: formula_3 Informally formula_0 is the group generated by the set formula_1 such that formula_4 for all formula_5. But here there is a tacit assumption that formula_0 is the "freest" such group as clearly the relations are satisfied in any homomorphic image of formula_0. One way of being able to eliminate this tacit assumption is by specifying that certain words in formula_1 should not be equal to formula_6 That is we specify a set formula_7, called the set of irrelations, such that formula_8 for all formula_9 Formal definition. To define an absolute presentation of a group formula_0 one specifies a set formula_1 of generators and sets formula_2 and formula_7 of relations and irrelations among those generators. We then say formula_0 has absolute presentation formula_10 provided that: A more algebraic, but equivalent, way of stating condition 2 is: 2a. If formula_14 is a non-trivial normal subgroup of formula_0 then formula_15 Remark: The concept of an absolute presentation has been fruitful in fields such as algebraically closed groups and the Grigorchuk topology. In the literature, in a context where absolute presentations are being discussed, a presentation (in the usual sense of the word) is sometimes referred to as a relative presentation, which is an instance of a retronym. Example. The cyclic group of order 8 has the presentation formula_16 But, up to isomorphism there are three more groups that "satisfy" the relation formula_17 namely: formula_18 formula_19 and formula_20 However, none of these satisfy the irrelation formula_21. So an absolute presentation for the cyclic group of order 8 is: formula_22 It is part of the definition of an absolute presentation that the irrelations are not satisfied in any proper homomorphic image of the group. Therefore: formula_23 Is "not" an absolute presentation for the cyclic group of order 8 because the irrelation formula_24 is satisfied in the cyclic group of order 4. Background. The notion of an absolute presentation arises from Bernhard Neumann's study of the isomorphism problem for algebraically closed groups. A common strategy for considering whether two groups formula_0 and formula_25 are isomorphic is to consider whether a presentation for one might be transformed into a presentation for the other. However algebraically closed groups are neither finitely generated nor recursively presented and so it is impossible to compare their presentations. Neumann considered the following alternative strategy: Suppose we know that a group formula_0 with finite presentation formula_26 can be embedded in the algebraically closed group formula_27 then given another algebraically closed group formula_28, we can ask "Can formula_0 be embedded in formula_28?" It soon becomes apparent that a presentation for a group does not contain enough information to make this decision for while there may be a homomorphism formula_29, this homomorphism need not be an embedding. What is needed is a specification for formula_27 that "forces" any homomorphism preserving that specification to be an embedding. An absolute presentation does precisely this.
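The cyclic-group example given above can be checked with elementary modular arithmetic. The snippet below is only an illustrative verification (it identifies the cyclic group of order n with the integers modulo n), not a general tool for working with presentations.

```python
# In the cyclic group of order n with generator a, a power a^m equals the
# identity exactly when n divides m. The relation a^8 = 1 and the irrelation
# a^4 != 1 therefore become divisibility checks.

for n in (8, 4, 2, 1):
    relation_holds   = (8 % n == 0)   # does a^8 = 1 hold in the cyclic group of order n?
    irrelation_holds = (4 % n != 0)   # does a^4 != 1 hold there?
    print(n, relation_holds, irrelation_holds)

# Only n = 8 satisfies both, so the irrelation a^4 != 1 excludes the proper
# homomorphic images of orders 4, 2 and 1, as stated in the example.
```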
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "G \\simeq \\langle S \\mid R \\rangle." }, { "math_id": 4, "text": "r = 1" }, { "math_id": 5, "text": "r \\in R" }, { "math_id": 6, "text": "1." }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "i \\ne 1" }, { "math_id": 9, "text": "i \\in I." }, { "math_id": 10, "text": "\\langle S \\mid R, I\\rangle." }, { "math_id": 11, "text": "\\langle S \\mid R\\rangle." }, { "math_id": 12, "text": "h:G\\rightarrow H" }, { "math_id": 13, "text": "h(G)" }, { "math_id": 14, "text": "N\\triangleleft G" }, { "math_id": 15, "text": "I\\cap N\\neq \\left\\{ 1\\right\\} ." }, { "math_id": 16, "text": "\\langle a \\mid a^8 = 1\\rangle." }, { "math_id": 17, "text": "a^8 = 1," }, { "math_id": 18, "text": "\\langle a \\mid a^4 = 1\\rangle" }, { "math_id": 19, "text": "\\langle a \\mid a^2 = 1\\rangle" }, { "math_id": 20, "text": "\\langle a \\mid a = 1\\rangle." }, { "math_id": 21, "text": "a^4 \\neq 1" }, { "math_id": 22, "text": "\\langle a \\mid a^8 = 1, a^4 \\neq 1\\rangle." }, { "math_id": 23, "text": "\\langle a \\mid a^8 = 1, a^2 \\neq 1\\rangle" }, { "math_id": 24, "text": "a^2 \\neq 1" }, { "math_id": 25, "text": "H" }, { "math_id": 26, "text": "G=\\langle x_1,x_2 \\mid R \\rangle" }, { "math_id": 27, "text": "G^{*}" }, { "math_id": 28, "text": "H^{*}" }, { "math_id": 29, "text": "h:G\\rightarrow H^{*}" } ]
https://en.wikipedia.org/wiki?curid=7176811
71774359
Job 17
Job 17 is the seventeenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in Hebrew language. This chapter is divided into 16 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 17 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 17 lacks a clear structure with some verses a continuation from the previous chapter and the complaints addressed alternately to God and Job's friends: Job complains for the lack of hope (17:1–10). The section opens with the anguish of the previous chapter, both in Job's expectation of death (verse 1; cf. Job 16:22) and by the useless, mocking words of his friends (verse 2; cf. Job 16:20). Thereafter, Job addresses God directly, asking why God has closed the minds of his friends to understanding Job's plight (verse 4). Then, Job turns to his friends (or onlookers; "among you" or "all of you", verse 10) and conveying his dismay that God, who runs the world, belittles him in the presence of ("spit in the face" or "spit in front of") others (verse 6), before closing with charging his friends for lacking wisdom in their responses (verse 10). [Job said:] "And He has made me a byword of the people," "someone in whose face they spit." Job expresses his despair (17:11–16). In this section Job sinks back to his current despair, as if his life were over ("my days are past") and there is no future for his "plans" or "desires" (verse 11). Job imagines that he would go "over to the dark side" (the darkness of Sheol) to make his "house" (or "bed"; verse 13), where he seems to belong. Job diligently searches for a way forward in the present darkness, but concedes that this does not seem to be feasible (verse 16). [Job said:] "Will they go down to the gates of Sheol?" "Shall we have rest together in the dust?" Verse 16. Job realizes that death cannot return his children to him, cannot restore to him a sense of family (cf. Job 3:17–19; 7:9; Psalm 6:5). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71774359
7177687
Gross–Pitaevskii equation
Description of the ground state of a quantum system The Gross–Pitaevskii equation (GPE, named after Eugene P. Gross and Lev Petrovich Pitaevskii) describes the ground state of a quantum system of identical bosons using the Hartree–Fock approximation and the pseudopotential interaction model. A Bose–Einstein condensate (BEC) is a gas of bosons that are in the same quantum state, and thus can be described by the same wavefunction. A free quantum particle is described by a single-particle Schrödinger equation. Interaction between particles in a real gas is taken into account by a pertinent many-body Schrödinger equation. In the Hartree–Fock approximation, the total wave-function formula_0 of the system of formula_1 bosons is taken as a product of single-particle functions formula_2: formula_3 where formula_4 is the coordinate of the formula_5-th boson. If the average spacing between the particles in a gas is greater than the scattering length (that is, in the so-called dilute limit), then one can approximate the true interaction potential that features in this equation by a pseudopotential. At sufficiently low temperature, where the de Broglie wavelength is much longer than the range of boson–boson interaction, the scattering process can be well approximated by the "s"-wave scattering (i.e. formula_6 in the partial-wave analysis, a.k.a. the hard-sphere potential) term alone. In that case, the pseudopotential model Hamiltonian of the system can be written as formula_7 where formula_8 is the mass of the boson, formula_9 is the external potential, formula_10 is the boson–boson "s"-wave scattering length, and formula_11 is the Dirac delta-function. The variational method shows that if the single-particle wavefunction satisfies the following Gross–Pitaevskii equation formula_12 the total wave-function minimizes the expectation value of the model Hamiltonian under normalization condition formula_13 Therefore, such single-particle wavefunction describes the ground state of the system. GPE is a model equation for the ground-state single-particle wavefunction in a Bose–Einstein condensate. It is similar in form to the Ginzburg–Landau equation and is sometimes referred to as the "nonlinear Schrödinger equation". The non-linearity of the Gross–Pitaevskii equation has its origin in the interaction between the particles: setting the coupling constant of interaction in the Gross–Pitaevskii equation to zero (see the following section) recovers the single-particle Schrödinger equation describing a particle inside a trapping potential. The Gross–Pitaevskii equation is said to be limited to the weakly interacting regime. Nevertheless, it may also fail to reproduce interesting phenomena even within this regime. In order to study the BEC beyond that limit of weak interactions, one needs to implement the Lee-Huang-Yang (LHY) correction. Alternatively, in 1D systems one can use either an exact approach, namely the Lieb-Liniger model, or an extended equation, e.g. the Lieb-Liniger Gross–Pitaevskii equation (sometimes called modified or generalized nonlinear Schrödinger equation). Form of equation. The equation has the form of the Schrödinger equation with the addition of an interaction term. The coupling constant formula_14 is proportional to the "s"-wave scattering length formula_10 of two interacting bosons: formula_15 where formula_16 is the reduced Planck constant, and formula_8 is the mass of the boson. 
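For orientation, the coupling constant can be evaluated numerically. The sketch below uses assumed, order-of-magnitude inputs (roughly those of a heavy alkali atom in a dilute condensate); they are illustrative values, not figures quoted from this article's sources.

```python
# Illustrative evaluation of the contact coupling g = 4*pi*hbar^2*a_s / m.
# The atomic mass and scattering length are assumed example values.
import math

hbar = 1.054571817e-34   # reduced Planck constant, in J*s
m    = 1.44e-25          # boson mass in kg (assumed, roughly a heavy alkali atom)
a_s  = 5.3e-9            # s-wave scattering length in m (assumed value)

g = 4.0 * math.pi * hbar ** 2 * a_s / m
print(g)                 # on the order of 5e-51 J*m^3 for these inputs
```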
The energy density is formula_17 where formula_0 is the wavefunction, or order parameter, and formula_9 is the external potential (e.g. a harmonic trap). The time-independent Gross–Pitaevskii equation, for a conserved number of particles, is formula_18 where formula_19 is the chemical potential, which is found from the condition that the number of particles is related to the wavefunction by formula_20 From the time-independent Gross–Pitaevskii equation, we can find the structure of a Bose–Einstein condensate in various external potentials (e.g. a harmonic trap). The time-dependent Gross–Pitaevskii equation is formula_21 From this equation we can look at the dynamics of the Bose–Einstein condensate. It is used to find the collective modes of a trapped gas. Solutions. Since the Gross–Pitaevskii equation is a nonlinear partial differential equation, exact solutions are hard to come by. As a result, solutions have to be approximated via myriad techniques. Exact solutions. Free particle. The simplest exact solution is the free-particle solution, with formula_22: formula_23 This solution is often called the Hartree solution. Although it does satisfy the Gross–Pitaevskii equation, it leaves a gap in the energy spectrum due to the interaction: formula_24 According to the Hugenholtz–Pines theorem, an interacting Bose gas does not exhibit an energy gap (in the case of repulsive interactions). Soliton. A one-dimensional soliton can form in a Bose–Einstein condensate, and depending upon whether the interaction is attractive or repulsive, there is either a bright or dark soliton. Both solitons are local disturbances in a condensate with a uniform background density. If the BEC is repulsive, so that formula_25, then a possible solution of the Gross–Pitaevskii equation is formula_26 where formula_27 is the value of the condensate wavefunction at formula_28, and formula_29 is the "coherence length" (a.k.a. the "healing length", see below). This solution represents the dark soliton, since there is a deficit of condensate in a space of nonzero density. The dark soliton is also a type of topological defect, since formula_2 flips between positive and negative values across the origin, corresponding to a formula_30 phase shift. For formula_31 the solution is formula_32 where the chemical potential is formula_33. This solution represents the bright soliton, since there is a concentration of condensate in a space of zero density. Healing length. The healing length gives the minimum distance over which the order parameter can heal, which describes how quickly the wave function of the BEC can adjust to changes in the potential. If the condensate density grows from 0 to n within a distance ξ, the healing length can calculated by equating the quantum pressure and the interaction energy: formula_34 The healing length must be much smaller than any length scale in the solution of the single-particle wavefunction. The healing length also determines the size of vortices that can form in a superfluid. It is the distance over which the wavefunction recovers from zero in the center of the vortex to the value in the bulk of the superfluid (hence the name "healing" length). Variational solutions. In systems where an exact analytical solution may not be feasible, one can make a variational approximation. The basic idea is to make a variational ansatz for the wavefunction with free parameters, plug it into the free energy, and minimize the energy with respect to the free parameters. Numerical solutions. 
Several numerical methods, such as the split-step Crank–Nicolson and Fourier spectral methods, have been used for solving GPE. There are also different Fortran and C programs for its solution for the contact interaction and long-range dipolar interaction. Thomas–Fermi approximation. If the number of particles in a gas is very large, the interatomic interaction becomes large so that the kinetic energy term can be neglected in the Gross–Pitaevskii equation. This is called the Thomas–Fermi approximation and leads to the single-particle wavefunction formula_35 And the density profile is formula_36 In a harmonic trap (where the potential energy is quadratic with respect to displacement from the center), this gives a density profile commonly referred to as the "inverted parabola" density profile. Bogoliubov approximation. Bogoliubov treatment of the Gross–Pitaevskii equation is a method that finds the elementary excitations of a Bose–Einstein condensate. To that purpose, the condensate wavefunction is approximated by a sum of the equilibrium wavefunction formula_37 and a small perturbation formula_38: formula_39 Then this form is inserted in the time-dependent Gross–Pitaevskii equation and its complex conjugate, and linearized to first order in formula_38: formula_40 formula_41 Assuming that formula_42 one finds the following coupled differential equations for formula_43 and formula_44 by taking the formula_45 parts as independent components: formula_46 formula_47 For a homogeneous system, i.e. for formula_48, one can get formula_49 from the zeroth-order equation. Then we assume formula_43 and formula_44 to be plane waves of momentum formula_50, which leads to the energy spectrum formula_51 For large formula_50, the dispersion relation is quadratic in formula_50, as one would expect for usual non-interacting single-particle excitations. For small formula_50, the dispersion relation is linear: formula_52 with formula_53 being the speed of sound in the condensate, also known as second sound. The fact that formula_54 shows, according to Landau's criterion, that the condensate is a superfluid, meaning that if an object is moved in the condensate at a velocity inferior to "s", it will not be energetically favorable to produce excitations, and the object will move without dissipation, which is a characteristic of a superfluid. Experiments have been done to prove this superfluidity of the condensate, using a tightly focused blue-detuned laser. The same dispersion relation is found when the condensate is described from a microscopical approach using the formalism of second quantization. Superfluid in rotating helical potential. The optical potential well formula_55 might be formed by two counterpropagating optical vortices with wavelengths formula_56, effective width formula_57 and topological charge formula_58: formula_59 where formula_60. In cylindrical coordinate system formula_61 the potential well have a remarkable "double-helix geometry": formula_62 In a reference frame rotating with angular velocity formula_63, time-dependent Gross–Pitaevskii equation with helical potential is formula_64 where formula_65 is the angular-momentum operator. The solution for condensate wavefunction formula_66 is a superposition of two phase-conjugated matter–wave vortices: formula_67 The macroscopically observable momentum of condensate is formula_68 where formula_69 is number of atoms in condensate. 
This means that the atomic ensemble moves coherently along the formula_70 axis with a group velocity whose direction is defined by the signs of the topological charge formula_58 and the angular velocity formula_71: formula_72 The angular momentum of the helically trapped condensate is exactly zero: formula_73 Numerical modeling of a cold atomic ensemble in a spiral potential has shown the confinement of individual atomic trajectories within the helical potential well. Derivations and Generalisations. The Gross–Pitaevskii equation can also be derived as the semi-classical limit of the many-body theory of s-wave interacting identical bosons represented in terms of coherent states. The semi-classical limit is reached for a large number of quanta, expressing the field theory either in the positive-P representation (generalised Glauber-Sudarshan P representation) or the Wigner representation. Finite-temperature effects can be treated within a generalised Gross–Pitaevskii equation by including scattering between condensate and noncondensate atoms, from which the Gross–Pitaevskii equation may be recovered in the low-temperature limit. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
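As a minimal illustration of the Fourier spectral (split-step) approach mentioned under "Numerical solutions", the sketch below relaxes a one-dimensional condensate in a harmonic trap towards its ground state by imaginary-time propagation, written in dimensionless units with the reduced Planck constant and the mass set to 1, and assuming NumPy is available. The grid size, trap, coupling strength and time step are arbitrary illustrative choices rather than values taken from the references.

```python
# Split-step Fourier sketch for the 1-D GPE in dimensionless units (hbar = m = 1):
#   i d(psi)/dt = [ -(1/2) d^2/dx^2 + V(x) + g |psi|^2 ] psi
# Imaginary-time propagation relaxes an initial guess towards the ground state.
import numpy as np

N, L = 256, 20.0
x  = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k  = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)     # angular wavenumbers

V  = 0.5 * x ** 2                              # harmonic trap
g  = 1.0                                       # repulsive contact coupling (illustrative)
dt = 1.0e-3                                    # imaginary-time step

psi = np.exp(-x ** 2 / 2.0).astype(complex)    # initial guess
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalise to one particle

for _ in range(5000):
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi) ** 2))             # half potential step
    psi = np.fft.ifft(np.exp(-0.5 * dt * k ** 2) * np.fft.fft(psi))   # full kinetic step
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi) ** 2))             # half potential step
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)                     # restore the norm

print("peak density:", float(np.max(np.abs(psi) ** 2)))
```

With the interaction switched off (g = 0) the loop reproduces the Gaussian ground state of the harmonic trap, a convenient sanity check; with repulsive g the profile broadens towards the inverted-parabola shape of the Thomas–Fermi approximation.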
[ { "math_id": 0, "text": "\\Psi" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "\\psi" }, { "math_id": 3, "text": "\n \\Psi(\\mathbf{r}_1, \\mathbf{r}_2, \\dots, \\mathbf{r}_N) = \\psi(\\mathbf{r}_1) \\psi(\\mathbf{r}_2) \\dots \\psi(\\mathbf{r}_N),\n" }, { "math_id": 4, "text": "\\mathbf{r}_i" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "\\ell = 0" }, { "math_id": 7, "text": "\n H = \\sum_{i=1}^N \\left(-\\frac{\\hbar^2}{2m} \\frac{\\partial^2}{\\partial\\mathbf{r}_i^2} + V(\\mathbf{r}_i)\\right)\n + \\sum_{i<j} \\frac{4\\pi\\hbar^2 a_s}{m} \\delta(\\mathbf{r}_i - \\mathbf{r}_j),\n" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "V" }, { "math_id": 10, "text": "a_s" }, { "math_id": 11, "text": "\\delta(\\mathbf{r})" }, { "math_id": 12, "text": "\n \\left(-\\frac{\\hbar^2}{2m} \\frac{\\partial^2}{\\partial\\mathbf{r}^2} + V(\\mathbf{r}) + \\frac{4\\pi\\hbar^2 a_s}{m} |\\psi(\\mathbf{r})|^2\\right) \\psi(\\mathbf{r}) = \\mu\\psi(\\mathbf{r}),\n" }, { "math_id": 13, "text": "\\int dV\\, |\\Psi|^2 = N." }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "g = \\frac{4\\pi\\hbar^2 a_s}{m}," }, { "math_id": 16, "text": "\\hbar" }, { "math_id": 17, "text": "\\mathcal{E} = \\frac{\\hbar^2}{2m} |\\nabla\\Psi(\\mathbf{r})|^2 + V(\\mathbf{r}) |\\Psi(\\mathbf{r})|^2 + \\frac{1}{2} g |\\Psi(\\mathbf{r})|^4," }, { "math_id": 18, "text": "\\mu \\Psi(\\mathbf{r}) = \\left(-\\frac{\\hbar^2}{2m} \\nabla^2 + V(\\mathbf{r}) + g |\\Psi(\\mathbf{r})|^2\\right) \\Psi(\\mathbf{r})," }, { "math_id": 19, "text": "\\mu" }, { "math_id": 20, "text": "N = \\int |\\Psi(\\mathbf{r})|^2 \\, d^3r." }, { "math_id": 21, "text": "i\\hbar\\frac{\\partial\\Psi(\\mathbf{r}, t)}{\\partial t} = \\left(-\\frac{\\hbar^2}{2m} \\nabla^2 + V(\\mathbf{r}) + g |\\Psi(\\mathbf{r}, t)|^2\\right) \\Psi(\\mathbf{r}, t)." }, { "math_id": 22, "text": "V(\\mathbf{r}) = 0" }, { "math_id": 23, "text": "\\Psi(\\mathbf{r}) = \\sqrt{\\frac{N}{V}} e^{i\\mathbf{k}\\cdot\\mathbf{r}}." }, { "math_id": 24, "text": "E(\\mathbf{k}) = N \\left[ \\frac{\\hbar^2 k^2}{2m} + g \\frac{N}{2 V}\\right]." }, { "math_id": 25, "text": "g > 0" }, { "math_id": 26, "text": "\\psi(x) = \\psi_0 \\tanh\\left(\\frac{x}{\\sqrt{2}\\xi}\\right)," }, { "math_id": 27, "text": "\\psi_0" }, { "math_id": 28, "text": "\\infty" }, { "math_id": 29, "text": "\\xi = \\hbar/\\sqrt{2m n_0 g} = 1/\\sqrt{8\\pi a_s n_0}" }, { "math_id": 30, "text": "\\pi" }, { "math_id": 31, "text": "g < 0" }, { "math_id": 32, "text": "\\psi(x, t) = \\psi(0) e^{-i\\mu t/\\hbar} \\frac{1}{\\cosh\\left(\\sqrt{2m|\\mu|/\\hbar^2}x\\right)}," }, { "math_id": 33, "text": "\\mu = g |\\psi(0)|^2/2" }, { "math_id": 34, "text": "\\frac{\\hbar^2}{2m\\xi^2} = gn \\implies \\xi=(8\\pi n a_s)^{-1/2}" }, { "math_id": 35, "text": "\\psi(x, t) = \\sqrt{\\frac{\\mu - V(x)}{Ng}}." }, { "math_id": 36, "text": "n(x, t) =\\frac{\\mu - V(x)}{g}." }, { "math_id": 37, "text": "\\psi_0 = \\sqrt{n} e^{-i\\mu t}" }, { "math_id": 38, "text": "\\delta\\psi" }, { "math_id": 39, "text": "\\psi = \\psi_0 + \\delta\\psi." }, { "math_id": 40, "text": "i\\hbar\\frac{\\partial\\delta\\psi}{\\partial t} = -\\frac{\\hbar^2}{2m} \\nabla^2 \\delta\\psi + V\\delta\\psi + g(2|\\psi_0|^2\\delta\\psi + \\psi_0^2\\delta\\psi^*)," }, { "math_id": 41, "text": "-i\\hbar\\frac{\\partial\\delta\\psi^*}{\\partial t} = -\\frac{\\hbar^2}{2m} \\nabla^2 \\delta\\psi^* + V\\delta\\psi^* + g(2|\\psi_0|^2\\delta\\psi^* + (\\psi_0^*)^2\\delta\\psi)." 
}, { "math_id": 42, "text": "\\delta\\psi = e^{-i\\mu t} \\big(u(\\boldsymbol{r}) e^{-i\\omega t} - v^*(\\boldsymbol{r}) e^{i\\omega t}\\big)," }, { "math_id": 43, "text": "u" }, { "math_id": 44, "text": "v" }, { "math_id": 45, "text": "e^{\\pm i\\omega t}" }, { "math_id": 46, "text": "\\left(-\\frac{\\hbar^2}{2m} \\nabla^2 + V + 2gn - \\hbar\\mu - \\hbar\\omega\\right) u - gnv = 0," }, { "math_id": 47, "text": "\\left(-\\frac{\\hbar^2}{2m} \\nabla^2 + V + 2gn - \\hbar\\mu + \\hbar\\omega\\right) v - gnu = 0." }, { "math_id": 48, "text": "V(\\boldsymbol{r}) = \\text{const}" }, { "math_id": 49, "text": "V = \\hbar\\mu - gn" }, { "math_id": 50, "text": "\\boldsymbol{q}" }, { "math_id": 51, "text": "\\hbar\\omega = \\epsilon_\\boldsymbol{q} = \\sqrt{\\frac{\\hbar^2|\\boldsymbol{q}|^2}{2m} \\left(\\frac{\\hbar^2|\\boldsymbol{q}|^2}{2m} + 2gn\\right)}." }, { "math_id": 52, "text": "\\epsilon_\\boldsymbol{q} = s \\hbar q," }, { "math_id": 53, "text": "s = \\sqrt{ng/m}" }, { "math_id": 54, "text": "\\epsilon_\\boldsymbol{q}/(\\hbar q) > s" }, { "math_id": 55, "text": "V_\\text{twist}(\\mathbf{r}, t) = V_\\text{twist}(z, r, \\theta, t)" }, { "math_id": 56, "text": "\\lambda_\\pm = 2 \\pi c/\\omega_\\pm" }, { "math_id": 57, "text": "D" }, { "math_id": 58, "text": "\\ell" }, { "math_id": 59, "text": "\n E_\\pm(\\mathbf{r}, t) \\sim \\exp\\left(-\\frac{r^2}{2 D^2}\\right) r^{|\\ell|} \\exp(-i\\omega_\\pm t \\pm ik_\\pm z + i\\ell \\theta),\n" }, { "math_id": 60, "text": "\\delta\\omega = (\\omega_+ - \\omega_-)" }, { "math_id": 61, "text": "(z, r, \\theta)" }, { "math_id": 62, "text": "\n V_\\text{twist}(\\mathbf{r}, t) \\sim\n V_0 \\exp\\left(-\\frac{r^2}{D^2}\\right) r^{2|\\ell|} \\left(1 + \\cos[\\delta\\omega t + (k_+ + k_-)z + 2\\ell \\theta]\\right).\n" }, { "math_id": 63, "text": "\\Omega = \\delta\\omega / 2\\ell" }, { "math_id": 64, "text": "\n i\\hbar\\frac{\\partial\\Psi(\\mathbf{r}, t)}{\\partial t} =\n \\left(-\\frac{\\hbar^2}{2m} \\nabla^2 + V_\\text{twist}(\\mathbf{r}) + g |\\Psi(\\mathbf{r}, t)|^2 - \\Omega \\hat L \\right) \\Psi(\\mathbf{r}, t)," }, { "math_id": 65, "text": "\\hat L = -i\\hbar \\frac{\\partial}{\\partial\\theta}" }, { "math_id": 66, "text": "\\Psi(\\mathbf{r}, t)" }, { "math_id": 67, "text": "\n \\Psi(\\mathbf{r}, t) \\sim \\exp\\left(-\\frac{r^2}{2D^2}\\right) r^{|\\ell|} \\times\n \\big(\\exp(-i\\omega_+ t + ik_+ z + i\\ell \\theta) + \\exp(-i\\omega_- t - ik_- z - i\\ell \\theta) \\big).\n" }, { "math_id": 68, "text": "\\langle \\Psi | \\hat P | \\Psi \\rangle = N_\\text{at} \\hbar (k_+ - k_-)," }, { "math_id": 69, "text": "N_\\text{at}" }, { "math_id": 70, "text": "z" }, { "math_id": 71, "text": "\\Omega" }, { "math_id": 72, "text": "V_z = \\frac{2\\Omega \\ell}{k_+ + k_-}." }, { "math_id": 73, "text": "\\langle \\Psi | \\hat L | \\Psi \\rangle = N_\\text{at} [\\ell\\hbar - \\ell\\hbar] = 0." } ]
https://en.wikipedia.org/wiki?curid=7177687
717778
Kinetic term
Type of terms in Lagrangians In physics, a kinetic term is the part of the Lagrangian that is bilinear in the fields (for nonlinear sigma models it is not even bilinear), and it usually contains two derivatives with respect to time (or space); in the case of fermions, the kinetic term usually has one derivative only. The equation of motion derived from such a Lagrangian contains differential operators which are generated by the kinetic term. Unitarity requires kinetic terms to be positive. In mechanics, the kinetic term is formula_0 In quantum field theory, the kinetic terms for a real scalar field, the electromagnetic field and the Dirac field are formula_1
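As a small worked check of the statement above that the equation of motion is generated by the kinetic term, the sketch below derives the Euler–Lagrange equation from the mechanical kinetic term alone, which yields the free equation of motion. It assumes the SymPy library is available; the variable names are arbitrary.

```python
# Euler-Lagrange equation generated by the mechanical kinetic term T = (1/2) * xdot^2.
# Illustrative check only; assumes SymPy is installed.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols("t")
x = sp.Function("x")
T = sp.Rational(1, 2) * sp.diff(x(t), t) ** 2   # kinetic term of a free particle

print(euler_equations(T, x(t), t))   # equivalent to x''(t) = 0, the free equation of motion
```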
[ { "math_id": 0, "text": " T = \\frac{1}{2}\\dot x^2 = \\frac{1}{2}\\left( \\frac{\\partial x}{\\partial t}\\right)^2 ." }, { "math_id": 1, "text": " T = \\frac{1}{2}\\partial_\\mu \\Phi \\partial^\\mu \\Phi + \\frac{1}{4g^2}F_{\\mu\\nu}F^{\\mu\\nu} + i \\bar \\psi \\gamma^\\mu \\partial_\\mu \\psi ." } ]
https://en.wikipedia.org/wiki?curid=717778
71785719
Millennials in the United States
Cohort born from 1981 to 1996 Millennials, also known as Generation Y or Gen Y, are the demographic cohort following Generation X and preceding Generation Z. Unlike their counterparts in most other developed nations, Millennials in the United States are a relatively large cohort in their nation's population, which has implications for their nation's economy and geopolitics. But like their counterparts in other Western nations, Americans who came of age in the 2010s were less inclined to have sexual intercourse and less likely to be religious than their predecessors, though they may identify as spiritual. Millennials have faced economic challenges posed by the Great Recession, and another one in 2020 due to the COVID-19 pandemic. Millennials are sometimes known as digital natives because they came of age when the Internet, electronic devices, and social media entered widespread usage. Terminology and etymology. Authors William Strauss and Neil Howe, who created the Strauss–Howe generational theory, coined the term 'millennial' in 1987. because the oldest members of this demographic cohort came of age at around the turn of the third millennium A.D. They wrote about the cohort in their books "Generations: The History of America's Future, 1584 to 2069" (1991) and "Millennials Rising: The Next Great Generation" (2000). In August 1993, an "Advertising Age" editorial coined the phrase "Generation Y" to describe teenagers of the day, then aged 13–19 (born 1974–1980), who were at the time defined as different from Generation X. However, the 1974–1980 cohort was later re-identified by most media sources as the last wave of Generation X, and by 2003 "Ad Age" had moved their Generation Y starting year up to 1982. According to journalist Bruce Horovitz, in 2012, "Ad Age" "threw in the towel by conceding that millennials is a better name than Gen Y," and by 2014, a past director of data strategy at "Ad Age" said to NPR "the Generation Y label was a placeholder until we found out more about them." Millennials are sometimes called "echo boomers", due to them often being the offspring of the baby boomers, the significant increase in birth rates from the early 1980s to mid-1990s, and their generation's large size relative to that of boomers. In the United States, the echo boom's birth rates peaked in August 1990 and a twentieth-century trend toward smaller families in developed countries continued. Psychologist Jean Twenge described millennials as "Generation Me" in her 2006 book "Generation Me: Why Today's Young Americans Are More Confident, Assertive, Entitled – and More Miserable Than Ever Before", while in 2013, "Time" magazine ran a cover story titled "Millennials: The Me Me Me Generation". Alternative names for this group proposed include the "Net Generation", "Generation 9/11", "Generation Next", and "The Burnout Generation". American sociologist Kathleen Shaputis labeled millennials as the "Boomerang Generation" or "Peter Pan Generation" because of the members' perceived tendency for delaying some rites of passage into adulthood for longer periods than most generations before them. These labels were also a reference to a trend toward members living with their parents for longer periods than previous generations. Kimberly Palmer regards the high cost of housing and higher education, and the relative affluence of older generations, as among the factors driving the trend. 
Questions regarding a clear definition of what it means to be an adult also impact a debate about delayed transitions into adulthood and the emergence of a new life stage, Emerging Adulthood. A 2012 study by professors at Brigham Young University found that college students were more likely to define "adult" based on certain personal abilities and characteristics rather than more traditional "rite of passage" events. Larry Nelson noted that "In prior generations, you get married and you start a career and you do that immediately. What young people today are seeing is that approach has led to divorces, to people unhappy with their careers … The majority want to get married […] they just want to do it right the first time, the same thing with their careers." Date and age range definitions. Oxford Living Dictionaries describes a millennial as a person "born between the early 1980s and the late 1990s." Merriam-Webster Dictionary defines millennial as "a person born in the 1980s or 1990s." Jonathan Rauch, senior fellow at the Brookings Institution, wrote for "The Economist" in 2018 that "generations are squishy concepts", but the 1981 to 1996 birth cohort is a "widely accepted" definition for millennials. Reuters also state that the "widely accepted definition" is 1981–1996. Although the United States Census Bureau have said that "there is no official start and end date for when millennials were born" and they do not officially define millennials, a U.S. Census publication in 2022 noted that Millennials are "colloquially defined as" the cohort born from 1981 to 1996, using this definition in a breakdown of Survey of Income and Program Participation (SIPP) data. The Pew Research Center defines millennials as born from 1981 to 1996, choosing these dates for "key political, economic and social factors", including the September 11 terrorist attacks, the 2003 invasion of Iraq, Great Recession, and Internet explosion. The United States Library of Congress explains that "defining generations is not an exact science" however cites Pew’s 1981-1996 definition to define millennials. Various media outlets and statistical organizations have cited Pew's definition including "The Washington Post", "The New York Times", "The Wall Street Journal", PBS, "The Los Angeles Times", and the United States Bureau of Labor Statistics. The Brookings Institution defines the millennial generation as those born from 1981 to 1996, as does Gallup, Federal Reserve Board, American Psychological Association, and CBS. Psychologist Jean Twenge defines millennials as those born 1980–1994. CNN reports that studies often use 1981–1996 to define millennials, but sometimes list 1980–2000. Sociologist Elwood Carlson, who calls the generation "New Boomers", identified the birth years of 1983–2001, based on the upswing in births after 1983 and finishing with the "political and social challenges" that occurred after the 9/11 terrorist acts. Author Neil Howe defines millennials as being born from 1982 to 2004. The cohorts born during the cusp years before and after millennials have been identified as "microgenerations" with characteristics of both generations. Names given to these cuspers include "Xennials", "Generation Catalano", the "Oregon Trail Generation"; "Zennials," and "Zillennials", respectively. Psychology. Psychologist Jean Twenge, the author of the 2006 book "Generation Me", considers millennials, along with younger members of Generation X, to be part of what she calls "Generation Me". 
Twenge attributes millennials with the traits of confidence and tolerance, but also describes a sense of entitlement and narcissism, based on NPI surveys showing increased narcissism among millennials compared to preceding generations when they were teens and in their twenties. Psychologist Jeffrey Arnett of Clark University, Worcester has criticized Twenge's research on narcissism among millennials, stating "I think she is vastly misinterpreting or over-interpreting the data, and I think it's destructive". He doubts that the Narcissistic Personality Inventory really measures narcissism at all. Arnett says that not only are millennials less narcissistic, they're "an exceptionally generous generation that holds great promise for improving the world." A study published in 2017 in the journal "Psychological Science" found a small "decline" in narcissism among young people since the 1990s. Authors William Strauss and Neil Howe argue that each generation has common characteristics that give it a specific character with four basic generational archetypes, repeating in a cycle. According to their hypothesis, they predicted millennials would become more like the "civic-minded" G.I. Generation with a strong sense of community both local and global. Strauss and Howe ascribe seven basic traits to the millennial cohort: special, sheltered, confident, team-oriented, conventional, pressured, and achieving. However, Arthur E. Levine, author of "When Hope and Fear Collide: A Portrait of Today's College Student", dismissed these generational images as "stereotypes". In addition, psychologist Jean Twenge says Strauss and Howe's assertions are overly deterministic, non-falsifiable, and unsupported by rigorous evidence. Polling agency Ipsos-MORI warned that the word 'millennials' is "misused to the point where it's often mistaken for just another meaningless buzzword" because "many of the claims made about millennial characteristics are simplified, misinterpreted or just plain wrong, which can mean real differences get lost" and that "[e]qually important are the similarities between other generations—the attitudes and behaviors that are staying the same are sometimes just as important and surprising." Though it is often said that millennials ignore conventional advertising, they are in fact heavily influenced by it. They are particularly sensitive to appeals to transparency, to experiences rather than things, and flexibility. Cognitive abilities. Intelligence researcher James R. Flynn discovered that back in the 1950s, the gap between the vocabulary levels of adults and children was much smaller than it is in the early twenty-first century. Between 1953 and 2006, adult gains on the vocabulary subtest of the Wechsler IQ test were 17.4 points whereas the corresponding gains for children were only 4. He asserted that some of the reasons for this are the surge in interest in higher education and cultural changes. The number of Americans pursuing tertiary qualifications and cognitively demanding jobs has risen significantly since the 1950s. This boosted the level of vocabulary among adults. Back in the 1950s, children generally imitated their parents and adopted their vocabulary. This was no longer the case in the 2000s, when teenagers often developed their own subculture and as such were less likely to use adult-level vocabulary on their essays. Psychologists Jean Twenge, W. Keith Campbell, and Ryne A. Sherman analyzed vocabulary test scores on the U.S. 
General Social Survey and found that after correcting for education, the use of sophisticated vocabulary has declined between the mid-1970s and the mid-2010s across all levels of education, from below high school to graduate school. Those with at least a bachelor's degree saw the steepest decline. Hence, the gap between people who never received a high-school diploma and a university graduate has shrunk from an average of 3.4 correct answers in the mid- to late-1970s to 2.9 in the early- to mid-2010s. Higher education offers little to no benefit to verbal ability. Because those with only a moderate level of vocabulary were more likely to be admitted to university than in the past, the average for degree holders declined. There are various explanations for this. Accepting high levels of immigrants, many of whom were not particularly proficient in the English language, could lower the national adult average. Young people nowadays are much less likely to read for pleasure, thus reducing their levels of vocabulary. On the other hand, while the College Board has reported that SAT verbal scores were on the decline, these scores are an imperfect measure of the vocabulary level of the nation as a whole because the test-taking demographic has changed and because more students took the SAT in the 2010s than in the 1970s, which means there are more test-takers with limited ability. Population aging is an unconvincing explanation because the effect is too weak. Cultural identity. A 2007 report by the National Endowment for the Arts stated that as a group, American adults were reading for pleasure less often than before. In particular, Americans aged 15 to 24 spent an average of two hours watching television and only seven minutes on reading. In 2002, only 52% of Americans between the ages of 18 and 24 voluntarily read books, down from 59% in 1992. Reading comprehension skills of American adults of all levels of education deteriorated between the early 1990s and the early 2000s, especially among those with advanced degrees. According to employers, almost three quarters of university graduates were "deficient" in English writing skills. Meanwhile, the reading scores of American tenth-graders proved mediocre, in fifteenth place out of 31 industrialized nations, and the number of twelfth-graders who had never read for pleasure doubled to 19%. Publishers and booksellers observed that the sales of adolescent and young-adult fiction remained strong. This could be because older adults were buying titles intended for younger people, which inflated the market, and because there were fewer readers buying more books. By the late 2010s, viewership of late-night American television among adults aged 18 to 49, the most important demographic group for advertisers, had fallen substantially despite an abundance of materials. This is due in part to the availability and popularity of streaming services. However, when delayed viewing within three days is taken into account, the top shows all saw their viewership numbers boosted. This development undermines the current business model of the television entertainment industry. "If the sky isn't exactly falling on the broadcast TV advertising model, it certainly seems to be a lot closer to the ground than it once was," wrote reporter Anthony Crupi for "Ad Age".
Despite having the reputation for "killing" many things of value to the older generations, millennials and Generation Z are nostalgically preserving Polaroid cameras, vinyl records, needlepoint, and home gardening, to name just some. In fact, Millennials are a key cohort behind the vinyl revival. However, due to the COVID-19 pandemic in the early 2020s, certain items whose futures were in doubt due to a general lack of interest by millennials appear to be reviving with stronger sales than in previous years, such as canned food. A 2019 poll by Ypulse found that among people aged 27 to 37, the musicians most representative of their generation were Taylor Swift, Beyoncé, the Backstreet Boys, Michael Jackson, Drake, and Eminem. (The last two were tied in fifth place.) Since the 2000 U.S. Census, millennials have taken advantage of the possibility of selecting more than one racial group in abundance. In 2015, the Pew Research Center conducted research regarding generational identity that said a majority of millennials surveyed did not like the "millennial" label. It was discovered that millennials are less likely to strongly identify with the generational term when compared to Generation X or the baby boomers, with only 40% of those born between 1981 and 1997 identifying as millennials. Among older millennials, those born 1981–1988, Pew Research found that 43% personally identified as members of the older demographic cohort, Generation X, while only 35% identified as millennials. Among younger millennials (born 1989–1997), generational identity was not much stronger, with only 45% personally identifying as millennials. It was also found that millennials chose most often to define themselves with more negative terms such as self-absorbed, wasteful, or greedy. Fred Bonner, a Samuel DeWitt Proctor Chair in Education at Rutgers University and author of "Diverse Millennial Students in College: Implications for Faculty and Student Affairs", believes that much of the commentary on the Millennial Generation may be partially correct, but overly general and that many of the traits they describe apply primarily to "white, affluent teenagers who accomplish great things as they grow up in the suburbs, who confront anxiety when applying to super-selective colleges, and who multitask with ease as their helicopter parents hover reassuringly above them." During class discussions, Bonner listened to black and Hispanic students describe how some or all of the so-called core traits did not apply to them. They often said that the "special" trait, in particular, is unrecognizable. Other socioeconomic groups often do not display the same attributes commonly attributed to millennials. "It's not that many diverse parents don't want to treat their kids as special," he says, "but they often don't have the social and cultural capital, the time and resources, to do that." The University of Michigan's "Monitoring the Future" study of high school seniors (conducted continually since 1975) and the American Freshman Survey, conducted by UCLA's Higher Education Research Institute of new college students since 1966, showed an increase in the proportion of students who consider wealth a very important attribute, from 45% for Baby Boomers (surveyed between 1967 and 1985) to 70% for Gen Xers, and 75% for millennials. The percentage who said it was important to keep abreast of political affairs fell, from 50% for Baby Boomers to 39% for Gen Xers, and 35% for millennials. 
The notion of "developing a meaningful philosophy of life" decreased the most across generations, from 73% for Boomers to 45% for millennials. The willingness to be involved in an environmental cleanup program dropped from 33% for Baby Boomers to 21% for millennials. Demographics. Historically, the early Anglo-Protestant settlers in the seventeenth century were the most successful group, culturally, economically, and politically, and they maintained their dominance until the early twentieth century. Commitment to the ideals of the Enlightenment meant that they sought to assimilate newcomers from outside of the British Isles, but few were interested in adopting a pan-European identity for the nation, much less turning it into a global melting pot. But in the early 1900s, liberal progressives and modernists began promoting more inclusive ideals for what the national identity of the United States should be. While the more traditionalist segments of society continued to maintain their Anglo-Protestant ethnocultural traditions, universalism and cosmopolitanism started gaining favor among the elites. These ideals became institutionalized after the Second World War, and ethnic minorities started moving towards institutional parity with the once dominant Anglo-Protestants. The Immigration and Nationality Act of 1965 (also known as the Hart-Celler Act), passed at the urging of President Lyndon B. Johnson, abolished national quotas for immigrants and replaced them with a system that admits a fixed number of persons per year based on qualities such as skills and the need for refuge. Immigration subsequently surged from elsewhere in North America (especially Canada and Mexico), Asia, Central America, and the West Indies. By the mid-1980s, most immigrants originated from Asia and Latin America. Some were refugees from Vietnam, Cuba, Haiti, and other parts of the Americas while others came illegally by crossing the long and largely undefended U.S.-Mexican border. At the same time, the postwar baby boom and subsequently falling fertility rate seemed to jeopardize America's social security system as the Baby Boomers retire in the twenty-first century. Provisional data from the Centers for Disease Control and Prevention reveal that U.S. fertility rates have fallen below the replacement level of 2.1 since 1971. (In 2017, the rate fell to 1.765.) Among women born during the late 1950s, one fifth had no children, compared to 10% of those born in the 1930s, thereby leaving behind neither genetic nor cultural legacy. 17% of women from the Baby Boomer generation had only one child each and were responsible for only 8% of the next generation. On the other hand, 11% of Baby Boomer women gave birth to at least four children each, for a grand total of one quarter of the millennial generation. This will likely cause cultural, political, and social changes in the future as parents wield a great deal of influence on their children. For example, by the early 2000s, it had already become apparent that mainstream American culture was shifting from secular individualism towards religiosity. Millennial population size varies, depending on the definition used. Using its own definition, the Pew Research Center estimated that millennials comprised 27% of the U.S. population in 2014. In the same year, using dates ranging from 1982 to 2004, Neil Howe revised the number to over 95 million people in the U.S. In a 2012 "Time" magazine article, it was estimated that there were approximately 80 million U.S. millennials.
The United States Census Bureau, using birth dates ranging from 1982 to 2000, stated the estimated number of U.S. millennials in 2015 was 83.1 million people. In 2017, fewer than 56% millennial were non-Hispanic whites, compared with more than 84% of Americans in their 70s and 80s, 57% had never been married, and 67% lived in a metropolitan area. According to the Brookings Institution, millennials are the “demographic bridge between the largely white older generations (pre-millennials) and much more racially diverse younger generations (post-millennials).” By analyzing data from the U.S. Census Bureau, the Pew Research Center estimated that millennials, whom they define as people born between 1981 and 1996, outnumbered baby boomers, born from 1946 to 1964, for the first time in 2019. That year, there were 72.1 million millennials compared to 71.6 million baby boomers, who had previously been the largest living adult generation in the country. Data from the National Center for Health Statistics shows that about 62 million millennials were born in the United States, compared to 55 million members of Generation X, 76 million baby boomers, and 47 million from the Silent Generation. Between 1981 and 1996, an average of 3.9 million millennial babies were born each year, compared to 3.4 million average Generation X births per year between 1965 and 1980. But millennials continue to grow in numbers as a result of immigration and naturalization. In fact, millennials form the largest group of immigrants to the United States in the 2010s. Pew projected that the millennial generation would reach around 74.9 million in 2033, after which mortality would outweigh immigration. Yet 2020 would be the first time millennials (who are between the ages of 24 and 39) find their share of the electorate shrink as the leading wave of Generation Z (aged 18 to 23) became eligible to vote. In other words, their electoral power peaked in 2016. In absolute terms, however, the number of foreign-born millennials continues to increase as they become naturalized citizens. In fact, 10% of American voters were born outside the country by the 2020 election, up from 6% in 2000. The fact that people from different racial or age groups vote differently means that this demographic change will influence the future of the American political landscape. While younger voters hold significantly different views from their elders, they are considerably less likely to vote. Non-whites tend to favor candidates from the Democratic Party while whites by and large prefer the Republican Party. As of the mid-2010s, the United States is one of the few developed countries that does "not" have a top-heavy population pyramid. In fact, as of 2016, the median age of the U.S. population was younger than that of all other rich nations except Australia, New Zealand, Cyprus, Ireland, and Iceland, whose combined population is only a fraction of the United States. This is because American baby boomers had a higher fertility rate compared to their counterparts from much of the developed world. Canada, Germany, Italy, Japan, and South Korea are all aging rapidly by comparison because their millennials are smaller in number than their parents. This demographic reality puts the United States at an advantage compared to many other major economies as the millennials reach middle age: the nation will still have a significant number of consumers, investors, and taxpayers. 
According to the Pew Research Center, "Among men, only 4% of millennials [ages 21 to 36 in 2017] are veterans, compared with 47%" of men in their 70s and 80s, "many of whom came of age during the Korean War and its aftermath." Some of these former military service members are combat veterans, having fought in Afghanistan and/or Iraq. As of 2016, millennials are the majority of the total veteran population. According to the Pentagon in 2016, 19% of Millennials are interested in serving in the military, and 15% have a parent with a history of military service. Economic prospects and trends. Employment and finances. Quantitative historian Peter Turchin observed that demand for labor in the United States had been stagnant since 2000 and would likely continue to be so until 2020 as the nation approached the trough of the Kondratiev wave. Moreover, the share of people in their 20s continued to grow until the end of the 2010s according to projections by the U.S. Census Bureau, meaning the youth bulge would likely not fade away before the 2020s. As such, the gap between the supply and demand in the labor market would likely not fall before then, and falling or stagnant wages generate sociopolitical stress. For example, between the mid-1970s and 2011, the number of law-school graduates tripled, from around 400,000 to 1.2 million, while the population grew by only 45%. During the 2010s, U.S. law schools produced 25,000 surplus graduates each year, and many of them were in debt. The number of people with a Master of Business Administration (MBA) degree grew even faster. Having more highly educated people than the market can absorb—elite overproduction—can destabilize society. In any case, Millennials were expected to make up approximately half of the U.S. workforce by 2020. The youth unemployment rate in the U.S. reached 19% in July 2010, the highest since the statistic started being gathered in 1948. Underemployment is also a major factor. In the U.S., the economic difficulties have led to dramatic increases in youth poverty, unemployment, and the numbers of young people living with their parents. In April 2012, it was reported that half of all new college graduates in the US were still either unemployed or underemployed. According to Bloomberg L.P., "Three and a half years after the worst recession since the Great Depression, the earnings and employment gap between those in the under-35 population and their parents and grandparents threatens to unravel the American dream of each generation doing better than the last. The nation's younger workers have benefited least from an economic recovery that has been the most uneven in recent history." Despite higher college attendance rates than Generation X, many were stuck in low-paid jobs, with the percentage of degree-educated young adults working in low-wage industries rising from 23% to 33% between 2000 and 2014. Not only did they receive lower wages, they also had to work longer hours for fewer benefits. By the mid-2010s, it had already become clear that the U.S. economy was evolving into a highly dynamic and increasingly service-oriented system, with careers getting replaced by short-term full-time jobs, full-time jobs by part-time positions, and part-time positions by income-generating hobbies. In one important way the economic prospects of millennials are similar to those of their parents, the baby boomers: their huge number means that the competition for jobs was always going to be intense.
In 2014, millennials were entering an increasingly multi-generational workplace. Even though research has shown that millennials are joining the workforce during a tough economic time, they still have remained optimistic, as shown when about nine out of ten millennials surveyed by the Pew Research Center said that they currently have enough money or that they will eventually reach their long-term financial goals. Data from a 2014 study of U.S. millennials revealed over 56% of this cohort considers themselves as part of the working class, with only approximately 35% considering themselves as part of the middle class; this class identity is the lowest polling of any generation. A 2013 joint study by sociologists at the University of Virginia and Harvard University found that the decline and disappearance of stable full-time jobs with health insurance and pensions for people who lack a college degree has had profound effects on working-class Americans, who now are less likely to marry and have children within marriage than those with college degrees. A 2020 paper by economists William G. Gale, Hilary Gelfond, Jason J. Fichtner, and Benjamin H. Harris examines the wealth accumulated by different demographic cohorts using data from the Survey of Consumer Finances. They find that while the Great Recession has diminished the wealth of all age groups in the short run, a longitudinal analysis reveals that older generations have been able to acquire more wealth whereas millennials have gotten poorer overall. In particular, the wealth of millennials in 2016 was less than that of older generations when they were their age in 1989 and 2007. Millennials enjoy a number of important advantages compared to their elders, such as higher levels of education, and longer working lives, but they suffer some disadvantages including limited prospects of economic growth, leading to delayed home ownership and marriage. According to a 2019 TD Ameritrade survey of 1,015 U.S. adults aged 23 and older with at least US$10,000 in investable assets, two thirds of people aged 23 to 38 (millennials) felt they were not saving enough for retirement, and the top reason why was expensive housing (37%). This was especially true for millennials with families. 21% said student debt prevented them from saving for the future. For comparison, this number was 12% for Generation X and 5% for the Baby Boomers. While millennials are well known for taking out large amounts of student loans, these are actually "not" their main source of non-mortgage personal debt, but rather credit card debt. According to a 2019 Harris poll, the average non-mortgage personal debt of millennials was US$27,900, with credit card debt representing the top source at 25%. For comparison, mortgages were the top source of debt for the Baby Boomers and Generation X (28% and 30%, respectively) and student loans for Generation Z (20%). As they saw their economic prospects improved in the aftermath of the Great Recession, the COVID-19 global pandemic hit, forcing lock-down measures that resulted in an enormous number of people losing their jobs. For millennials, this is the second major economic downturn in their adult lives so far. However, by early 2022, as the pandemic waned, workers aged 25 to 64 were returning to the work force at a steady pace. According to the Economist, if the trend continued, then their work-force participation would return to the pre-pandemic level of 83% by the end of 2022. Even so, the U.S. 
economy would continue to face labor shortages, which puts workers at an advantage while contributing to inflation. According to the Department of Education, people with technical or vocational training are slightly more likely to be employed than those with a bachelor's degree and significantly more likely to be employed in their fields of specialty. The United States currently suffers from a shortage of skilled tradespeople. Twenty-first-century manufacturing is increasingly sophisticated, using advanced robotics, 3D printing, cloud computing, among other modern technologies, and technologically savvy employees are precisely what employers need. Four-year university degrees are unnecessary; technical or vocational training, or perhaps apprenticeships would do. According to the Bureau of Labor Statistics, the occupations with the highest median annual pay in the United States in 2018 included medical doctors (especially psychiatrists, anesthesiologists, obstetricians and gynecologists, surgeons, and orthodontists), chief executives, dentists, information system managers, chief architects and engineers, pilots and flight engineers, petroleum engineers, and marketing managers. Their median annual pay ranged from about US$134,000 (marketing managers) to over US$208,000 (aforementioned medical specialties). Meanwhile, the occupations with the fastest projected growth rate between 2018 and 2028 are solar cell and wind turbine technicians, healthcare and medical aides, cyber security experts, statisticians, speech–language pathologists, genetic counselors, mathematicians, operations research analysts, software engineers, forest fire inspectors and prevention specialists, post-secondary health instructors, and phlebotomists. Their projected growth rates are between 23% (medical assistants) and 63% (solar cell installers); their annual median pays range between roughly US$24,000 (personal care aides) to over US$108,000 (physician assistants). Occupations with the highest projected numbers of jobs added between 2018 and 2028 are healthcare and personal aides, nurses, restaurant workers (including cooks and waiters), software developers, janitors and cleaners, medical assistants, construction workers, freight laborers, marketing researchers and analysts, management analysts, landscapers and groundskeepers, financial managers, tractor and truck drivers, and medical secretaries. The total numbers of jobs added ranges from 881,000 (personal care aides) to 96,400 (medical secretaries). Annual median pays range from over US$24,000 (fast-food workers) to about US$128,000 (financial managers). Despite economic recovery and despite being more likely to have a bachelor's degree or higher, millennials are at a financial disadvantage compared to the Baby Boomers and Generation X because of the Great Recession and expensive higher education. Income has become less predictable due to the rise of short-term and freelance positions. Risk management specialist and business economist Olivia S. Mitchell of the University of Pennsylvania calculated that in order to retire at 50% of their last salary before retirement, millennials will have to save 40% of their incomes for 30 years. She told CNBC, "Benefits from Social Security are 76% higher if you claim at age 70 versus 62, which can substitute for a "lot" of extra savings." Maintaining a healthy lifestyle—avoiding smoking, over-drinking, and sleep deprivation—should prove beneficial. Workplace attitudes. 
Millennials have been regarded as troublesome to deal with by employers. They have great expectations for advancement, salary, benefits, and for a coaching relationship with their manager, and frequently switch jobs as a result. They are also more likely to value a "work-life balance" than older cohorts. Their attitude in the workplace has led author Ron Alsop to call them the "Trophy Kids," a term that reflects a trend in competitive sports, as well as many other aspects of life, where mere participation is frequently enough for a reward. However, psychologist Jean Twenge reports data suggesting there are differences between older and younger millennials regarding workplace expectations, with younger millennials being "more practical" and "more attracted to industries with steady work and are more likely to say they are willing to work overtime," which Twenge attributes to younger millennials coming of age following the financial crisis of 2007–2008. Data also suggests millennials are highly interested in volunteering. Volunteering activities between 2007 and 2008 show that the millennial age group experienced almost three times the increase of the overall population, which is consistent with a survey of 130 college upperclassmen depicting an emphasis on altruism in their upbringing. According to the Harvard University Institute of Politics, this has led six out of ten millennials to consider a career in public service. A 2014 Brookings publication shows a generational adherence to corporate social responsibility, with the National Society of High School Scholars (NSHSS) 2013 survey and Universum's 2011 survey depicting a preference to work for companies engaged in the betterment of society. Millennials' shift in attitudes has led to data showing that 64% of millennials would take a 60% pay cut to pursue a career path aligned with their passions, and financial institutions have fallen out of favor, with banks comprising 40% of the generation's least liked brands. Housing. Despite the availability of affordable housing, broadband Internet, the possibility of telecommuting, the reality of high student loan debts and the stereotype of living in their parents' basement, millennials were steadily leaving rural counties for urban areas for lifestyle and economic reasons in the early 2010s. At that time, millennials were responsible for the so-called "back-to-the-city" trend. Many urban areas in different parts of the United States grew considerably as a result. Mini-apartments became more and more common in major urban areas among young people living alone, who were willing to give up space in exchange for living in a location they liked. Data from the Census Bureau reveals that in 2018, 34% of American adults below the age of 35 owned a home, compared to the national average of almost 64%. Yet by the late 2010s, things changed. Like older generations, millennials reevaluate their life choices as they age. Many prefer the slower pace of life and lower costs of living in rural places. While rural America lacked the occupational variety offered by urban America, multiple rural counties can still match one major city in terms of economic opportunities. In addition, rural towns suffered from shortages of certain kinds of professionals, such as medical doctors, and young people moving in, or back, could make a difference for both themselves and their communities. U.S.
Census data shows that following the Great Recession, American suburbs grew faster than dense urban cores thanks to Millennial homeowners relocating to the suburbs. This trend will likely continue as more and more millennials purchase a home. 2019 was the fourth consecutive year in which the number of millennials living in the major American cities declined measurably. Economic recovery and easily obtained mortgages help explain this phenomenon. Exurbs are increasingly popular among millennials, too. According to Karen Harris, managing director at Bain Macro Trends, at the current rate of growth, exurbs will have more people than cities for the first time in 2025. Among the Baby Boomers who have retired, a significant portion opts to live in the suburbs, where the Millennials are also moving in large numbers as they have children of their own. These confluent trends increase the level of economic activity in the American suburbs. While 14% of the U.S. population relocate at least once each year, Americans in their 20s and 30s are more likely to move than retirees. People leaving the big cities generally look for places with a low cost of living, including housing costs, warmer climates, lower taxes, better economic opportunities, and better school districts for their children. The economics of space is also important, now that it has become much easier to transmit information and that e-commerce and delivery services have contracted perceived distances. Places in the South and Southwestern United States are especially popular. In some communities, millennials and their children are moving in so quickly that schools and roads are becoming overcrowded. This rising demand pushes prices upwards, making affordable housing options less plentiful. Entry-level homes, which almost ceased to exist after the housing bubble burst in the 2000s, started to return in numbers as builders responded to rising demand from millennials. In order to cut construction costs, builders offered few to no options for floor plans. Previously, the Great Recession forced millennials to delay home ownership. But by the late 2010s, older millennials had accumulated sufficient savings and were ready to buy a home. Prices have risen in the late 2010s due to high demand, but this could incentivize more companies to enter the business of building affordable homes. Historically, between the 1950s and 1980s, Americans left the cities for the suburbs because of crime. Suburban growth slowed because of the Great Recession but picked up pace afterwards. Overall, American cities with the largest net losses in their millennial populations were New York City, Los Angeles, and Chicago, while those with the top net gains were Houston, Denver, and Dallas. High taxes and a high cost of living are also reasons why people are leaving entire states behind. As is the case with cities, young people are the most likely to relocate. For example, a 2019 poll by Edelman Intelligence of 1,900 residents of California found that 63% of millennials said they were thinking about leaving the Golden State and 55% said they wanted to do so within five years. Popular destinations include Oregon, Nevada, Arizona, and Texas, according to California's Legislative Analyst's Office. Broadly speaking, among younger cohorts of home owners, the Millennials were migrating North while Generation Z was going South. As a consequence of the COVID-19 pandemic in the United States, interest in suburban properties skyrocketed, with millennials being the largest bloc of buyers.
As more and more people reconsidered whether or not they would like to live in a densely populated urban environment with high-rise apartments, cultural amenities, and shared spaces rather than a suburban single-family home with their own backyard, the home-building industry saw a better recovery than expected. As millennials and senior citizens increasingly demand affordable housing outside the major cities, to prevent another housing bubble, banks and regulators have restricted lending to filter out speculators and those with bad credit. By the time they neared midlife in the early 2020s, the bulk of older American millennials had entered the housing market. The number of millennial homeowners grew substantially between the late 2010s and the early 2020s, so much so that by 2022, home-owning millennials outnumbered their renting counterparts for the first time. Millennials working remotely were especially interested in suburban life. Despite their ethnic diversity, younger millennials appear to prefer homogeneous neighborhoods. Education. General trends. According to the Pew Research Center, 53% of American millennials attended or were enrolled in university in 2002. By the early 2020s, 39% of millennials had at least a bachelor's degree, more than the Baby Boomers at 25%. Historically, university students were more likely to be male than female. But by the late 2010s, the situation had reversed: women are now more likely to enroll in university than men. In 2018, upwards of one third of each sex was enrolled in university. In the United States today, high school students are generally encouraged to attend college or university after graduation while the options of technical school and vocational training are often neglected. Historically, high schools separated students on career tracks, but all this changed in the late 1980s and early 1990s thanks to a major effort in the large cities to provide more abstract academic education to everybody. The mission of high schools became preparing students for college. However, this program faltered in the 2010s, as institutions of higher education came under heightened skepticism due to high costs and disappointing results. People became increasingly concerned about debts and deficits. No longer were promises of educating "citizens of the world" or estimates of economic impact coming from abstruse calculations convincing. Colleges and universities found it necessary to prove their worth by clarifying how much money from which industry and company funded research, and how much it would cost to attend. According to the U.S. Department of Education, people with technical or vocational training are slightly more likely to be employed than those with a bachelor's degree and significantly more likely to be employed in their fields of specialty. The United States currently suffers from a shortage of skilled tradespeople. Because jobs (that suited what one studied) were so difficult to find in the few years following the Great Recession, the value of getting a liberal arts degree and studying the humanities at an American university came into question, their ability to develop a well-rounded and broad-minded individual notwithstanding. Those who majored in the humanities and the liberal arts in the 2010s were most likely to regret having done so, whereas those in STEM, especially computer science and engineering, were the least likely.
As of 2019, the total college debt has exceeded US$1.5 trillion, and two out of three college graduates are saddled with debt. The average borrower owes US$37,000, up US$10,000 from ten years before. A 2019 survey by TD Ameritrade found that over 18% of millennials (and 30% of Generation Z) said they have considered taking a gap year between high school and college. In 2019, the Federal Reserve Bank of St. Louis published research (using data from the 2016 "Survey of Consumer Finances") demonstrating that, after controlling for race and age cohort, families with heads of household with post-secondary education who were born before 1980 have enjoyed wealth and income premiums, while for families with heads of household with post-secondary education but born after 1980, the wealth premium has weakened to the point of statistical insignificance (in part because of the rising cost of college) and the income premium, while remaining positive, has declined to historic lows (with more pronounced downward trajectories for heads of household with postgraduate degrees). Quantitative historian Peter Turchin noted that the United States was overproducing university graduates in the 2000s and predicted, using historical trends, that this would be one of the causes of political instability in the 2020s, alongside income inequality, stagnating or declining real wages, and growing public debt. According to Turchin, intensifying competition among graduates, whose numbers were larger than what the economy could absorb, leads to political polarization, social fragmentation, and even violence as many become disgruntled with their dim prospects despite having attained a high level of education. He warned that the turbulent 1960s and 1970s could return, as having a massive young population with university degrees was one of the key reasons for the instability of the past. According to the American Academy of Arts and Sciences, students were turning away from liberal arts programs. Between 2012 and 2015, the number of graduates in the humanities dropped from 234,737 to 212,512. Consequently, many schools have relinquished these subjects, dismissed faculty members, or closed completely. Data from the National Center for Education Statistics revealed that between 2008 and 2017, the number of people majoring in English plummeted by just over a quarter. At the same time, those in philosophy and religion fell 22% and those who studied foreign languages dropped 16%. Meanwhile, the number of university students majoring in homeland security, science, technology, engineering, and mathematics (STEM), and healthcare skyrocketed. Despite the fact that educators and political leaders, such as President Barack Obama, have been trying for years to improve the quality of STEM education in the United States, and that various polls have demonstrated that more students are interested in these subjects, many fail to earn a university degree in STEM. According to "The Atlantic", 48% of students majoring in STEM dropped out of their programs between 2003 and 2009. Data collected by the University of California, Los Angeles (UCLA) in 2011 showed that although these students typically came in with excellent high school GPAs and SAT scores, among science and engineering students, including pre-medical students, 60% changed their majors or failed to graduate, twice the attrition rate of all other majors combined.
Despite their initial interest in secondary school, many university students find themselves overwhelmed by the reality of a rigorous STEM education. Some are mathematically unskilled, while others are simply lazy. The National Science Board raised the alarm all the way back in the mid-1980s that students often forget why they wanted to be scientists and engineers in the first place. Many bright students had an easy time in high school and failed to develop good study habits. In contrast, Chinese, Indian, and Singaporean students are exposed to mathematics and science at a high level from a young age. Moreover, according to education experts, many mathematics schoolteachers were not as well-versed in their subjects as they should be, and might well be uncomfortable with mathematics. Given two students who are equally prepared, the one who goes to a more prestigious university is less likely to graduate with a STEM degree than the one who attends a less difficult school. Competition can defeat even the top students. Meanwhile, grade inflation is a real phenomenon in the humanities, giving students an attractive alternative if their STEM ambitions prove too difficult to achieve. Whereas STEM classes build on top of each other—one has to master the subject matter before moving to the next course—and have black and white answers, this is not the case in the humanities, where things are a lot less clear-cut. In 2015, educational psychologist Jonathan Wai analyzed average test scores from the Army General Classification Test in 1946 (10,000 students), the Selective Service College Qualification Test in 1952 (38,420), Project Talent in the early 1970s (400,000), the Graduate Record Examination between 2002 and 2005 (over 1.2 million), and the SAT Math and Verbal in 2014 (1.6 million). Wai identified one consistent pattern: those with the highest test scores tended to pick the physical sciences and engineering as their majors while those with the lowest were more likely to choose education. During the 2010s, the mental health of American graduate students in general was in a state of crisis. Knowledge of history. A February 2018 survey of 1,350 individuals found that 66% of the American millennials (and 41% of all U.S. adults) surveyed did not know what Auschwitz was, while 41% incorrectly claimed that two million Jews or fewer were killed during the Holocaust, and 22% said that they had never heard of the Holocaust. Over 95% of American millennials were unaware that a portion of the Holocaust occurred in the Baltic states, which lost over 90% of their pre-war Jewish population, and 49% were not able to name a single Nazi concentration camp or ghetto in German-occupied Europe. However, at least 93% of those surveyed believed that teaching about the Holocaust in school is important and 96% believed the Holocaust happened. A YouGov survey found that 42% of American millennials have never heard of Mao Zedong and another 40% are unfamiliar with Che Guevara. Health and welfare. Teenage pregnancy. Teenage pregnancy rates in the United States have been falling steadily since the 1990s. Physical health. Even though the majority of strokes affect people aged 65 or older and the probability of having a stroke doubles only every decade after the age of 55, anyone can suffer from a stroke at any age. A stroke occurs when the blood supply to the brain is disrupted, causing neurons to die within minutes, leading to irreparable brain damage, disability, or even death.
According to statistics from the Centers for Disease Control and Prevention (CDC), strokes are the fifth leading cause of death and a major factor behind disability in the United States. According to the National Stroke Association, the risk of having a stroke is increasing among young adults (those in their 20s and 30s) and even adolescents. During the 2010s, there was a 44% increase in the number of young people hospitalized for strokes. Health experts believe this development is due to a variety of reasons related to lifestyle choices, including obesity, smoking, alcoholism, and physical inactivity. Obesity is also linked to hypertension, diabetes, and high cholesterol levels. CDC data reveals that during the mid-2000s, about 28% of young Americans were obese; this number rose to 36% a decade later. Up to 80% of strokes can be prevented by making healthy lifestyle choices while the rest are due to factors beyond a person's control, namely age and genetic defects (such as congenital heart disease). In addition, between 30% and 40% of young patients suffered from cryptogenic strokes, or those with unknown causes. According to a 2019 report from the American College of Cardiology, the prevalence of heart attacks among Americans under the age of 40 increased by an average rate of two percent per year in the previous decade. About one in five patients who suffered a heart attack came from this age group. This is despite the fact that Americans in general were less likely to suffer from heart attacks than before, due in part to a decline in smoking. The consequences of having a heart attack were much worse for young patients who also had diabetes. Besides the common risk factors of heart attacks, namely diabetes, high blood pressure, and family history, young patients also reported marijuana and cocaine intake, but less alcohol consumption. Sports and fitness. Fewer American millennials follow sports than their Generation X predecessors, with a McKinsey survey finding that 38 percent of millennials, in contrast to 45 percent of Generation X, are committed sports fans. However, the trend is not uniform across all sports; the gap disappears for basketball, mixed martial arts, soccer, and collegiate sports. In the United States, while the popularity of football has declined among millennials, the popularity of soccer has increased more among this group than for any other generation. As of 2018, soccer was the second most popular sport among those aged 18 to 34. Other athletic activities popular among Millennials include boxing, cycling, running, and swimming. On the other hand, golf has fallen in popularity. The Physical Activity Council's 2018 Participation Report found that millennials were more likely than other generations to participate in water sports such as stand-up paddling, board-sailing, and surfing. According to the survey of 30,999 Americans, which was conducted in 2017, approximately half of American millennials participated in high calorie-burning activities while approximately one quarter were sedentary. The same report also found millennials to be more active than Baby Boomers in 2017. Thirty-five percent of both millennials and Generation X were reported to be "active to a healthy level," with millennials' activity level reported as higher overall than that of Generation X in 2017. Vision health. The American Optometric Association sounded the alarm on the link between the regular use of handheld electronic devices and eyestrain.
According to a spokeswoman, digital eyestrain, or computer vision syndrome, is "rampant, especially as we move toward smaller devices and the prominence of devices increase in our everyday lives." Symptoms include dry and irritated eyes, fatigue, eye strain, blurry vision, difficulty focusing, and headaches. However, the syndrome does not cause vision loss or any other permanent damage. In order to alleviate or prevent eyestrain, the Vision Council recommends that people limit screen time, take frequent breaks, adjust screen brightness, change the background from bright colors to gray, increase text sizes, and blink more often. Dental health. Millennials struggle with dental and oral health. More than 30% of young adults have untreated tooth decay (the highest of any age group), 35% have trouble biting and chewing, and some 38% of this age group find life in general “less satisfying” due to teeth and mouth problems. Political views and participation. Views. A 2004 Gallup poll of Americans aged 13 to 17 found that 71% said their social and political views were more or less the same as those of their parents. 21% thought they were more liberal and 7% more conservative. According to demographer and public policy analyst Philip Longman, "even among baby boomers, those who wound up having children have turned out to be remarkably similar to their parents in their attitudes about 'family' values." In the postwar era, most returning servicemen looked forward to "making a home and raising a family" with their wives and lovers, and for many men, family life was a source of fulfillment and a refuge from the stress of their careers. Life in the late 1940s and 1950s was centered on the family, and the family was centered around children. However, political scientist Elias Dinas discovered, by studying the results from the Political Socialization Panel Study and further data from the United Kingdom and the United States, that while children born to politically engaged parents tended to be politically engaged themselves, those who absorbed their parents' views the earliest were also the most likely to abandon them later in life. "The Economist" observed in 2013 that, like their British counterparts, millennials in the United States held more positive attitudes towards recognizing same-sex marriage than older demographic cohorts. However, a 2018 poll conducted by Harris on behalf of the LGBT advocacy group GLAAD found that despite being frequently described as the most tolerant segment of society, people aged 18 to 34—most millennials and the oldest members of Generation Z—have become less accepting of LGBT individuals compared to previous years. Harris found that young women were driving this development; their overall comfort levels dived from 64% in 2017 to 52% in 2018. In general, the fall in comfort levels was the steepest among people aged 18 to 34 between 2016 and 2018. (Seniors aged 72 or above became more tolerant of LGBT doctors or having their (grand)children taking LGBT history lessons during the same period, albeit with a bump in discomfort levels in 2017.) Results from this Harris poll were released on the 50th anniversary of the riots that broke out at the Stonewall Inn, New York City, in June 1969, thought to be the start of the LGBT rights movement. At that time, homosexuality was considered a mental illness or a crime in many U.S. states. In 2018, Gallup conducted a survey of almost 14,000 Americans from all 50 states and the District of Columbia aged 18 and over on their political sympathies.
They found that overall, younger adults tended to lean liberal while older adults tilted conservative. More specifically, groups with strong conservative leanings included the elderly, residents of the Midwest and the South, and people with some or no college education. Groups with strong liberal leanings were adults with advanced degrees, whereas those with moderate liberal leanings included younger adults (18 to 29 and 30 to 49), women, and residents of the East. Gallup found little variation by income group compared to the national average. Between 1992 and 2018, the proportion of people identifying as liberals steadily increased, from 17% to 26%, mainly at the expense of the group identifying as moderates. Meanwhile, the proportion of conservatives remained largely unchanged, albeit with fluctuations. Surveys of American teenagers aged 13 to 17 and adults aged 18 or over conducted by the Pew Research Center in 2018 found that millennials and Generation Z held similar views on various political and social issues. More specifically, 56% of millennials believed that climate change is real and is due to human activities while only 8% rejected the scientific consensus on climate change. 64% wanted the government to play a more active role in solving their problems. 65% were indifferent towards pre-nuptial cohabitation. 48% considered single motherhood to be neither a positive nor a negative for society. 61% saw increased ethnic or racial diversity as good for society. 47% did the same for same-sex marriage, and 53% for interracial marriage. In most cases, millennials tended to hold quite different views from the Silent Generation, with the Baby Boomers and Generation X in between. In the case of financial responsibility in a two-parent household, though, majorities from across the generations answered that it should be shared, with 58% for the Silent Generation, 73% for the Baby Boomers, 78% for Generation X, and 79% for both the millennials and Generation Z. Across all the generations surveyed, at least 84% thought that both parents ought to be responsible for rearing children. Very few thought that fathers should be the ones mainly responsible for taking care of children. In 2015, a Pew Research study found 40% of millennials in the United States supported government restriction of public speech deemed offensive to certain groups. Support for restricting offensive speech was lower among older generations, with 27% of Gen Xers, 24% of Baby Boomers, and only 12% of the Silent Generation supporting such restrictions. Pew Research noted similar age-related trends in the United Kingdom, but not in Germany and Spain, where millennials were less supportive of restricting offensive speech than older generations. In the U.S. and UK during the mid-2010s, younger millennials raised their concerns over microaggressions and advocated for safe spaces and trigger warnings in the university setting. Critics of such changes have raised concerns regarding their impact on free speech, asserting these changes can promote censorship, while proponents have described these changes as promoting inclusiveness. A 2018 Gallup poll found that people aged 18 to 29 have a more favorable view of socialism than capitalism, 51% to 45%. Nationally, 56% of Americans prefer capitalism compared to 37% who favor socialism. Older Americans consistently prefer capitalism to socialism. Whether the current attitudes of millennials and Generation Z on capitalism and socialism will persist or dissipate as they grow older remains to be seen.
Gallup polls conducted in 2019 revealed that 62% of people aged 18 to 29—older members of Generation Z and younger millennials—supported giving women access to abortion while 33% opposed. In general, the older someone was, the less likely they were to support abortion. Gallup found in 2018 that nationwide, Americans are split on the issue of abortion, with equal shares of people, 48% each, considering themselves "pro-life" or "pro-choice". Polls conducted by Gallup and the Pew Research Center found that support for stricter gun laws among people aged 18 to 29 and 18 to 36, respectively, is statistically no different from that of the general population. According to Gallup, 57% of Americans are in favor of stronger gun control legislation. In a 2017 poll, Pew found that among the age group 18 to 29, 27% personally owned a gun and 16% lived with a gun owner, for a total of 43% living in a household with at least one gun. Nationwide, a similar percentage of American adults lived in a household with a gun (41%). In 2019, the Pew Research Center interviewed over 2,000 Americans aged 18 and over on their views of various components of the federal government. They found that 54% of the people between the ages of 18 and 29 wanted a larger government providing more services, compared to 43% who preferred a smaller government with fewer services. Meanwhile, 46% of those between the ages of 30 and 49 favored larger government compared to 49% who picked the other option. Older people were more likely to dislike larger government. Overall, the American people remain divided over the size and scope of government, with 48% preferring smaller government with fewer services and 46% larger government and more services. They found that the most popular federal agencies were the U.S. Postal Service (90% favorable), the National Park Service (86%), NASA (81%), the CDC (80%), the FBI (70%), the Census Bureau (69%), the SSA (66%), the CIA, and the Federal Reserve (both 65%). There is very little to no partisan divide on the Postal Service, the National Park Service, NASA, the CIA, and the Census Bureau. According to a 2019 CBS News poll of 2,143 U.S. residents, 72% of Americans 18 to 44 years of age—Generations X, Y (millennials), and Z—believed that it is a matter of personal responsibility to tackle climate change while 61% of older Americans did the same. In addition, 42% of American adults under 45 years old thought that the U.S. could realistically transition to 100% renewable energy by 2050 while 29% deemed it unrealistic and 29% were unsure. Those numbers for older Americans are 34%, 40%, and 25%, respectively. Differences in opinion might be due to education, as younger Americans are more likely to have been taught about climate change in schools than their elders. As of 2019, only 17% of electricity in the U.S. is generated from renewable energy, of which 7% is from hydroelectric dams, 6% from wind turbines, and 1% from solar panels. There are no rivers left for new dams. Meanwhile, nuclear power plants generate about 20%, but their number is declining as they are being deactivated but not replaced. In early 2019, Harvard University's Institute of Politics Youth Poll asked voters aged 18 to 29—younger millennials and the first wave of Generation Z—what they would like to be priorities for U.S. foreign policy. They found that the top issues for these voters were countering terrorism and protecting human rights (both 39%), and protecting the environment (34%). Preventing nuclear proliferation and defending U.S.
allies were not as important to young American voters. The Poll found that support for single-payer universal healthcare and free college dropped, down 8% to 47% and down 5% to 51%, respectively, if cost estimates were provided. As is the case with many European countries, empirical evidence poses real challenges to the popular argument that the surge of nationalism and populism is an ephemeral phenomenon due to 'angry white old men' who would inevitably be replaced by younger and more liberal voters. Especially since the 1970s, working-class voters, who had previously formed the backbone of support for the New Deal introduced by President Franklin D. Roosevelt, have been turning away from the left-leaning Democratic Party in favor of the right-leaning Republican Party. As the Democratic Party attempted to make itself friendlier towards the university-educated and women during the 1990s, more blue-collar workers and non-degree holders left. Political scientist Larry Bartels argued because about one quarter of Democrat supporters held social views more in-tune with Republican voters and because there was no guarantee millennials would maintain their current political attitudes due to life-cycle effects, this process of political re-alignment would likely continue. As is the case with Europe, there are potential pockets of support for national populism among younger generations. Votes. Millennials are more willing to vote than previous generations when they were at the same age. With voter rates being just below 50% for the four presidential cycles before 2017, they have already surpassed members of Generation X of the same age who were at just 36%. Pew Research described millennials as playing a significant role in the election of Barack Obama as President of the United States. Millennials were between 12 and 27 during the 2008 U.S. presidential election. That year, the number of voters aged 18 to 29 who chose the Democratic candidate was 66%, a record since 1980. The total share of voters who backed the President's party was 53%, another record. For comparison, only 31% of voters in that age group backed John McCain, who got only 46% of the votes. Among millennials, Obama received votes from 54% of whites, 95% of blacks, and 72% of Hispanics. There was no significant difference between those with college degrees and those without, but millennial women were more likely to vote for Obama than men (69% vs. 62%). Among voters between the ages of 18 and 29, 45% identified with the Democratic Party while only 26% sided with the Republican Party, a gap of 19%. Back in 2000, the two main American political parties split the vote of this age group. This was a significant shift in the American political landscape. Millennials not only provided their votes but also the enthusiasm that marked the 2008 election. They volunteered in political campaigns and donated money. But that millennial enthusiasm all but vanished by the next election cycle while older voters showed more interest. In 2012, when Americans reelected Barack Obama, the voter participation gap between people above the age of 65 and those aged 18 to 24 was 31%. Pew polls conducted a year prior showed that while millennials preferred Barack Obama to Mitt Romney (61% to 37%), members of the Silent Generation leaned towards Romney rather than Obama (54% to 41%). But when looking at white millennials only, Pew found that Obama's advantage which he enjoyed in 2008 ceased to be, as they were split between the two candidates. 
Although millennials are one of the largest voting blocs in the United States, their voting turnout rates have been subpar. Between the mid-2000s and the mid-2010s, millennial voting participation was consistently below those of their elders, fluctuating between 46% and 51%. For comparison, turnout rates for Generation X and the Baby Boomers rose during the same period, 60% to 69% and 41% to 63%, respectively, while those of the oldest of voters remained consistently at 69% or more. Millennials may still be a potent force at the ballot box, but it may be years before their participation rates reach their numerical potential as young people are consistently less likely to vote than their elders. In addition, despite the hype surrounding the political engagement and possible record turnout among young voters, millennials' voting power is even weaker than first appeared due to the comparatively higher number of them who are non-citizens (12%, as of 2019), according to William Frey of the Brookings Institution. In general, the phenomenon of growing political distrust and de-alignment in the United States is similar to what has been happening in Europe since the last few decades of the twentieth century, even though events like the Watergate scandal or the threatened impeachment of President Bill Clinton are unique to the United States. Such an atmosphere depresses turnouts among younger voters. Among voters in the 18-to-24 age group, turnout dropped from 51% in 1964 to 38% in 2012. Although people between the ages of 25 and 44 were more likely to vote, their turnout rate followed a similarly declining trend during the same period. Political scientists Roger Eatwell and Matthew Goodwin argued that it was therefore unrealistic for Hillary Clinton to expect high turnout rates among millennials in 2016. This political environment also makes voters more likely to consider political outsiders such as Bernie Sanders and Donald Trump. The Brookings Institution predicted that after 2016, millennials could affect how politics is conducted in the two-party system of the United States, given that they were more likely to identify as liberals or conservatives than Democrats or Republicans, respectively. In particular, while Trump supporters were markedly enthusiastic about their chosen candidate, the number of young voters identifying with the GOP has not increased. Bernie Sanders, a self-proclaimed democratic socialist and Democratic candidate in the 2016 United States presidential election, was the most popular candidate among millennial voters in the primary phase, having garnered more votes from people under 30 in 21 states than the major parties' candidates, Donald Trump and Hillary Clinton, did combined. According to the Brookings Institution, turnout among voters aged 18 to 29 in the 2016 election was 50%. Hillary Clinton won 55% of the votes from this age group while Donald Trump secured 37%. Polls conducted right before the election showed that millennial blacks and Hispanics were concerned about a potential Trump presidency. By contrast, Trump commanded support among young whites, especially men. There was also an enthusiasm gap for the two main candidates. While 32% of young Trump supporters felt excited about the possibility of him being President, only 18% of Clinton supporters said the same about her. 
The Brookings Institution found that among Trump voters in the 18-to-29 age group, 15% were white women with college degrees, 18% were the same without, 14% were white men with college degrees, and 32% were the same without, for a grand total of 79%. These groups were only 48% of Clinton voters of the same age range in total. On the other hand, a total of 52% of Clinton voters aged 18 to 29 were non-whites with college degrees (17%) and non-whites without them (35%). Clinton's chances of success were hampered by low turnouts among minorities, millennials with university degrees, and students. Meanwhile, 41% of white millennials voted for Trump. These people tended to be non-degree holders with full-time jobs and were markedly "less" likely to be financially insecure than those who did not support Trump. Contrary to the claim that young Americans felt comfortable with the ongoing transformation of the ethnic composition of their country due to immigration, not all of them approved of this change despite the fact that they are an ethnically diverse cohort. In the end, Trump won more votes from whites between the ages of 18 and 29 than early polls suggested. A Reuters-Ipsos survey of 16,000 registered voters aged 18 to 34 conducted in the first three months of 2018 (and before the 2018 midterm election) showed that overall support for the Democratic Party among such voters fell by nine percentage points between 2016 and 2018 and that an increasing number favored the Republican Party's approach to the economy. Pollsters found that white millennials, especially men, were driving this change. In 2016, 47% of young whites said they would vote for the Democratic Party, compared to 33% for the Republican Party, a gap of 14% in favor of the Democrats. But in 2018, that gap vanished, and the corresponding numbers were 39% for each party. For young white men, the shift was even more dramatic. In 2016, 48% said they would vote for the Democratic Party and 36% for the Republican Party. But by 2018, those numbers were 37% and 46%, respectively. This is despite the fact that almost two thirds of young voters disapproved of the performance of Republican President Donald J. Trump. According to the Pew Research Center, only 27% of millennials approved of the Trump presidency while 65% disapproved that year. Although American voters below the age of 30 helped Joe Biden win the 2020 U.S. Presidential election, their support for him fell quickly afterwards. By late 2021, only 29% of adults in this age group approved of his performance as President whereas 50% disapproved, a gap of 21 points, the largest of all age groups. In the 2022 midterm election, voters below the age of 30 were the only major age group supporting the Democratic Party, and their numbers were large enough to prevent a 'red wave'. Preferred modes of transportation. Millennials were initially not keen on getting a driver's license or owning a vehicle thanks to new licensing laws and the state of the economy when they came of age, but the oldest among them have already begun buying cars in great numbers. In 2016, millennials purchased more cars and trucks than any living generation except the Baby Boomers. A working paper by economists Christopher Knittel and Elizabeth Murphy, then at the Massachusetts Institute of Technology and the National Bureau of Economic Research, analyzed data from the U.S. Department of Transportation's National Household Travel Survey, the U.S. 
Census Bureau, and American Community Survey in order to compare the driving habits of the Baby Boomers, Generation X, and the oldest millennials (born between 1980 and 1984). The study found that, on the surface, the popular story is true: American millennials on average own 0.4 fewer cars than their elders. But when various factors—including income, marital status, number of children, and geographical location—were taken into account, the distinction disappeared. In addition, once those factors are accounted for, millennials actually drive longer distances than the Baby Boomers. Economic forces, namely low gasoline prices, higher income, and suburban growth, result in millennials having an attitude towards cars that is no different from that of their predecessors. An analysis of the National Household Travel Survey by the State Smart Transportation Initiative revealed that higher-income millennials drive less than their peers probably because they are able to afford the higher costs of living in large cities, where they can take advantage of alternative modes of transportation, including public transit and ride-hailing services. According to the Pew Research Center, young people are more likely to ride public transit. In 2016, 21% of adults aged 18 to 21 took public transit on a daily, almost daily, or weekly basis. By contrast, this figure for all U.S. adults was 11%. Also according to Pew, 51% of U.S. adults aged 18 to 29 used Lyft or Uber in 2018 compared to 28% in 2015. The corresponding figures for all U.S. adults were 15% in 2015 and 36% in 2018. In general, users tend to be urban residents, young (18–29), university graduates, and high-income earners ($75,000 a year or more). Religious beliefs. Although the United States is relatively religious by Western standards, the nation continues to secularize, though the rate is slower than that observed in Europe. In the U.S., millennials are the least likely to be religious when compared to older generations. There is a trend towards irreligion that has been increasing since the 1940s. According to a 2012 study by Pew Research, 32 percent of Americans aged 18–29 are irreligious, as opposed to 21 percent aged 30–49, 15 percent aged 50–64, and only 9 percent of those aged 65 and above. A 2005 study looked at 1,385 people aged 18 to 25 and found that more than half of those in the study said that they pray regularly before a meal. One-third said that they discussed religion with friends, attended religious services, and read religious material weekly. Twenty-three percent of those studied did not identify themselves as religious practitioners. A 2010 Pew Research Center study on millennials shows that of those between 18 and 29 years old, only 3% of these emerging adults self-identified as "atheists" and only 4% self-identified as "agnostics." Overall, 25% of millennials are "Nones" and 75% are religiously affiliated. On the other hand, millennials often describe themselves as "spiritual but not religious" and will sometimes turn to astrology, meditation, or mindfulness techniques, possibly to seek meaning or a sense of control. A 2016 survey by Barna and Impact 360 Institute of about 1,500 Americans aged 13 and up suggests that the proportion of atheists and agnostics was 21% among Generation Z, 15% for millennials, 13% for Generation X, and 9% for Baby Boomers. 59% of Generation Z were Christians (including Catholics), as were 65% of millennials, 65% of Generation X, and 75% of the Baby Boomers. 
41% of teens believed that science and the Bible are fundamentally at odds with one another, with 27% taking the side of science and 17% picking religion. For comparison, 45% of millennials, 34% of Generation X, and 29% of the Baby Boomers believed such a conflict exists. 31% of Generation Z believed that science and religion refer to different aspects of reality, on par with millennials and Generation X (both 30%), and above the Baby Boomers (25%). 28% of Generation Z thought that science and religion are complementary, compared to 25% of millennials, 36% of Generation X, and 45% for Baby Boomers. A 2019 survey by Five Thirty Eight and the American Enterprise Institute identified three key reasons why Millennials were leaving religion in large numbers. Many had grown up in largely secular households and as such never felt a strong connection to organized religion. More young people had irreligious romantic partners or spouses, reinforcing their secular outlook and way of life, and those who had children were less likely to view religion as a source of morality. Social tendencies. Social circles. In March 2014, the Pew Research Center issued a report about how "millennials in adulthood" are "detached from institutions and networked with friends." The report said millennials are somewhat more upbeat than older adults about America's future, with 49% of millennials saying the country's best years are ahead, though they're the first in the modern era to have higher levels of student loan debt and unemployment. Courtship behavior. Writing for "The Atlantic" in 2018, Kate Julian reported that among the countries that kept track of the sexual behavior of their citizens—Australia, Finland, Japan, the Netherlands, Sweden, the United Kingdom, and the United States—all saw a decline in the frequency of sexual intercourse among teenagers and young adults. Although experts disagree on the methodology of data analysis, they do believe that young people today are less sexually engaged than their elders, such as the baby boomers, when they were their age. This is despite the fact that online dating platforms allow for the possibility of casual sex, the wide availability of contraception, and the relaxation of attitudes towards sex outside of marriage. A 2020 study published in the Journal of the American Medical Association (JAMA) by researchers from Indiana University in the United States and the Karolinska Institutet from Sweden found that during the first two decades of the twenty-first century, young Americans had sexual intercourse less frequently than in the past. Among men aged 18 to 24, the share of the sexually inactive increased from 18.9% between 2000 and 2002 to 30.9% between 2016 and 2018. Women aged 18 to 34 had sex less often as well. Reasons for this trend are manifold. People who were unemployed, only had part-time jobs, and students were the most likely to forego sexual experience while those who had higher income were stricter in mate selection. Psychologist Jean Twenge, who did not participate in the study, suggested that this might be due to "a broader cultural trend toward delayed development," meaning various adult activities are postponed. She noted that being economically dependent on one's parents discourages sexual intercourse. Other researchers noted that the rise of the Internet, computer games, and social media could play a role, too, since older and married couples also had sex less often. In short, people had many options. 
A 2019 study by the London School of Hygiene and Tropical Medicine found a similar trend in the United Kingdom. Although this trend precedes the COVID-19 pandemic, fear of infection is likely to fuel the trend in the future, study co-author Peter Ueda told Reuters. In a 2019 poll, the Pew Research Center found that about 47% of American adults believed dating had become more difficult within the last decade or so, while only 19% said it had become easier and 33% thought it was the same. Among men, 65% agreed that the 'Me-too' movement posed challenges for the dating market, as did 43% of women, while 24% and 38%, respectively, thought it made no difference. In all, about half of single adults were not looking for a romantic relationship. Among the rest, 10% were only interested in casual relationships, 14% wanted committed relationships only, and 26% were open to either kind. Among younger people (18 to 39), 27% wanted a committed relationship only, 15% casual dates only, and 58% either type of relationship. For those between the ages of 18 and 49, the top reasons for their decision to avoid dating were having more important priorities in life (61%), preferring being single (41%), being too busy (29%), and pessimism about their chances of success (24%). While most Americans found their romantic partners with the help of friends and family, younger adults were more likely to encounter them online than their elders, with 21% of those aged 18 to 29 and 15% of those aged 30 to 49 saying they met their current partners this way. For comparison, only 8% of those aged 50 to 64 and 5% of those aged 65 and over did the same. People aged 18 to 29 were most likely to have met their current partners in school while adults aged 50 and up were more likely to have met their partners at work. Among those in the 18 to 29 age group, 41% were single, including 51% of men and 32% of women. Among those in the 30 to 49 age group, 23% were single, including 27% of men and 19% of women. This reflects the general trend across the generations that men tend to marry later (and die earlier) than women. Most single people, regardless of whether or not they were interested in dating, felt little to no pressure from their friends and family to seek a romantic partner. Young people, however, were under significant pressure compared to the sample average or older age groups. Among single people aged 18 to 29, 53% thought there was at least some pressure from society on them to find a partner, compared to 42% of people aged 30 to 49, 32% of people aged 50 to 64, and 21% of those aged 65 and over. Family life and offspring. Research conducted by the Urban Institute in 2014 projected that if current trends continue, millennials will have a lower marriage rate compared to previous generations, predicting that by age 40, 31% of millennial women will remain single, approximately twice the share of their single Gen X counterparts. The data showed similar trends for males. A 2016 study from Pew Research showed that young adults aged 18–34 were more likely to live with parents than with a lover or a spouse, an unprecedented occurrence since data collection began in 1880. High student debt is not necessarily the dominant factor for this shift, as the data shows the trend is stronger for those without a college education. 
Richard Fry, a senior economist for Pew Research, said of millennials, "they're the group much more likely to live with their parents," further stating that "they're concentrating more on school, careers and work and less focused on forming new families, spouses or partners and children." In the United States, between the late 1970s and the late 2010s, the shares of people who were married declined among the lower class (from 60% down to 33%) and the middle class (84% down to 66%), but remained steady among the upper class (~80%). In fact, it was the lower and middle classes that were driving the U.S. marriage rate down. Among Americans aged 25 to 39, the divorce rate per 1,000 married persons dropped from 30 to 24 between 1990 and 2015. For comparison, among those aged 50 and up, the divorce rate went from 5 in 1990 to 10 in 2015; that among people aged 40 to 49 increased from 18 to 21 per 1,000 married persons. In general, the level of education is a predictor of marriage and income. University graduates are more likely to get married but less likely to divorce. According to a cross-generational study comparing millennials to Generation X conducted at the Wharton School of Business, more than half of millennial undergraduates surveyed do not plan to have children. The researchers compared surveys of the Wharton graduating classes of 1992 and 2012. In 1992, 78% of women planned to eventually have children; by 2012, that share had dropped to 42%. The results were similar for male students. The research revealed that among both genders, the proportion of undergraduates who reported they eventually planned to have children had dropped by half over the course of a generation. But as their economic prospects improve, most millennials in the United States say they desire marriage, children, and home ownership. According to the Brookings Institution, the number of American mothers who never married ballooned between 1968, when they were extremely rare, and 2008, when they became much more common, especially among the less educated. In particular, in 2008, the share of never-married mothers with at least 16 years of education was 3.3%, compared to 20.1% among those who never graduated from high school. Unintended pregnancies were also higher among the less educated. Geopolitical analyst Peter Zeihan argued that because of the size of the millennial cohort relative to the size of the U.S. population and because they are having children, the United States will continue to maintain an economic advantage over most other developed nations, whose millennial cohorts are not only smaller than those of their elders but are also not having as high a fertility rate. The prospects of any given country are constrained by its demography. An analysis by psychologist Jean Twenge and a colleague of data from the General Social Survey of 40,000 Americans aged 30 and over from the 1970s to the 2010s suggests that socioeconomic status (as determined by factors such as income, educational attainment, and occupational prestige), marriage, and happiness are positively correlated and that these relationships are independent of cohort or age. However, the data cannot tell whether marriage causes happiness or the other way around; correlation does not mean causation. 
Demographer and futurist Mark McCrindle suggested the name "Generation Alpha" (or Generation formula_1) for the offspring of a majority of millennials, people born after Generation Z, noting that scientific disciplines often move to the Greek alphabet after exhausting the Roman alphabet. By 2016, the cumulative number of American women of the millennial generation who had given birth at least once reached 17.3 million. Effects of intensifying assortative mating will likely be seen in the next generation, as parental income and educational level are positively correlated with children's success. In the United States, children from families in the highest income quintile are the most likely to live with married parents (94% in 2018), followed by children of the middle class (74%) and the bottom quintile (35%). Living in the digital age, millennial parents have taken plenty of photographs of their children and have chosen digital storage (e.g. Dropbox), physical photo albums, or both to preserve their memories. Use of digital technology. Marc Prensky coined the term "digital native" to describe "K through college" students in 2001, explaining they "represent the first generations to grow up with this new technology." In their 2007 book "Connecting to the Net.Generation: What Higher Education Professionals Need to Know About Today's Students", authors Reynol Junco and Jeanna Mastrodicasa expanded on the work of William Strauss and Neil Howe to include research-based information about the personality profiles of millennials, especially as it relates to higher education. They conducted a large-sample (7,705) research study of college students. They found that Net Generation college students, born 1982 onwards, were frequently in touch with their parents and used technology at higher rates than people from other generations. In their survey, they found that 97% of these students owned a computer, 94% owned a mobile phone, and 56% owned an MP3 player. They also found that students spoke with their parents an average of 1.5 times a day about a wide range of topics. Other findings in the Junco and Mastrodicasa survey revealed that 76% of students used instant messaging, 92% of those reported multitasking while instant messaging, 40% of them used television to get most of their news, and 34% of students surveyed used the Internet as their primary news source. A 2015 study by Microsoft found that 77% of respondents aged 18 to 24 said yes to the statement, "When nothing is occupying my attention, the first thing I do is reach for my phone," compared to just 10% for those aged 65 and over. One of the most popular forms of media use by millennials is social networking. Millennials use social networking sites, such as Facebook and Twitter, to create a different sense of belonging, make acquaintances, and remain connected with friends. In 2010, research published in the Elon Journal of Undergraduate Research claimed that students who used social media and decided to quit showed the same withdrawal symptoms as a drug addict quitting a stimulant. In the 2014 PBS "Frontline" episode "Generation Like", there is discussion about millennials, their dependence on technology, and the ways the social media sphere is commoditized. Some millennials enjoy having hundreds of channels from cable TV. However, others do not even have a TV, so they watch media over the Internet using smartphones and tablets. 
Jesse Singal of "New York" magazine argues that this technology has created a rift within the generation; older millennials, defined here as those born 1988 and earlier, came of age prior to widespread usage and availability of smartphones, in contrast to younger millennials, those born in 1989 and later, who were exposed to this technology in their teen years. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "n = 29,912" }, { "math_id": 1, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=71785719
71787098
Job 18
Job 18 is the eighteenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Bildad the Shuhite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 21 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 18 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 18 can be divided into two parts: Whereas in their first speeches both Eliphaz and Bildad focus on the nature of God, in their second speeches both explore the topic of the fate of the wicked, suggesting that in the course of the conversation they become more convinced that Job is among the wicked. Bildad rebukes Job (18:1–5). The chapter opens with Bildad's rebuke of Job for considering his friends fools (like cattle, verse 3; cf. Job 17:10) and urges Job to be sensible and take a broader perspective. [Bildad said:] "Indeed, the light of the wicked is put out," "and the flame of his fire does not shine." Bildad describes the fate of the wicked (18:5–21). The second part of the chapter contains Bildad's extended description of the fate of the wicked: insecurity, terror, and hopelessness. The implication is that Job is at least on the way to becoming one of the wicked, so the whole section serves as a strong warning to Job. This is strongly emphasized in the last two verses of the chapter (verses 20–21), which demonstrate Bildad's view of Job's descent into wickedness. [Bildad said:] "They who come after him will be astonished at his day," "as they who went before were seized with fright." Verse 20. In relation to the geography, there are Hebrew terms for the seas: "the hinder sea", referring to the Mediterranean (in the "West"), and "the front sea", referring to the Dead Sea (Zechariah 14:8), namely, the "East". The Greek Septuagint (among other versions) understood the verse as "temporal": "the last groaned for him, and wonder seized the first". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71787098
71788110
Job 19
Job 19 is the nineteenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 29 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 19 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 19 is largely a lament that can be divided into several parts: Job's lament to God and the people (19:1–22). Job's lament in this section is framed by his complaint about his friends tormenting him (verses 1–6) and his plea for them to stop (verses 21–22). In between, Job laments that, although he does not doubt God's ultimate power over his fate, he simply cannot understand why God took away his dignity and reputation ("glory" and "crown", verse 9), and that his family and the people have deserted him ("his brothers", verse 13; "all who knew him", verses 13b, 14b), his "closest friends" (verse 19), basically the entire community (cf. Job 30). [Job said:] "And if indeed I have erred," "my error remains with me." Verse 4. Job insists that even if it were true that he had committed a minor, inadvertent sin (cf. Leviticus 5:18; Numbers 15:8), certainly not the intentional sin his friends accuse him of, then it is solely Job's concern, a matter between Job and God alone, not for his friends to prosecute. The Greek Septuagint version has an insertion between the two lines: "in having spoken words which it is not right to speak, and my words err, and are unreasonable". Job's hope in a living Redeemer (19:23–29). This section is seen as the high point of Job's faith and hope, showing his confident belief in a "living redeemer" (verse 25a). The identity of this redeemer could be a hypothetical legal figure, like the "umpire/arbiter" (Job 9:33) or "witness" (Job 16:19). Job's biggest desire is not justice or vindication, but the restoration of his relationship with God. At the end, Job warns his friends ("you" in verse 28a is plural) to fear the judgment upon them for their wrongful treatment of him. [Job said:] "For I know that my Redeemer lives," "and He will stand at last on the earth;" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71788110
71788972
Job 20
Job 20 is the twentieth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Zophar the Naamathite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 29 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 20 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 20 contains Zophar's second (and final) speech, which can be divided into several parts: Zophar's initial response (20:1–3). In the opening part of the chapter, Zophar responds to Job's rebuke of the three friends (Job 19:28–29) with increasing impatience and the growing "troubled thoughts" he feels as he listens to Job. Zophar claims that a "spirit from/out of his understanding answers me" (verse 3b), which prompts him to reply. [Zophar said:] "I have heard the rebuke that reproaches me," "And the spirit of my understanding causes me to answer." Verse 3. These words (and also the opening statements of Job's other friends) tend to reveal that Job's friends seem more concerned about their wounded pride than about Job's grievous suffering. Zophar's explanation that the wicked will not escape God's wrath (20:4–29). Zophar states his resolutely fixed position on retribution theology in this final speech (Zophar does not participate in the third round of debate), in which he focuses mainly on the 'negative side of the equation': 'God always destroys the wicked'. Like Bildad in the first round and Eliphaz in the second round (Job 15), Zophar appeals to tradition, but in a more hyperbolic way, to emphasize the certainty of his stance. Two themes are emphasized: Zophar's traditional understanding places more weight on the idea that wickedness reaps destructive consequences (verses 14, 16, 18–19, 21; the 'self-destructive nature of human evil') than on the involvement of God, despite the belief that God is still working behind it. In the end, God will also show active wrath against the wicked, as an 'inheritance' allotted to them (verse 29). [Zophar said:] "This is the wicked man’s portion from God," "and the inheritance appointed to him by God." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71788972
71788979
Job 30
Job 30 is the 30th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 31 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 30 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. At the end of the Dialogue, Job sums up his speech in a comprehensive review (chapters 29–31), with Job 29 describing Job's former prosperity, Job 30 focusing on Job's current suffering, and Job 31 outlining Job's final defense. The whole part is framed by Job's longing for a restored relationship with God (Job 29:2) and the legal challenge to God (Job 31:35–37). Chapter 30 describes Job's suffering after his world was turned upside down (in stark contrast with chapter 29), from enjoying "the respect of the most respectable" (Job 29:21–25) to undergoing "the contempt of the most contemptible" (Job 30:1, 9–12). Job complains to God directly about his condition, as he believes God determines all aspects of his life (verses 16–23), before withdrawing in despair that no one, not even God, has shown him mercy or care (verses 24–31). Job speaks of the attack of mockers (30:1–15). The first part of the section describes Job's mockers from Job's point of view (verses 2–8). With the recurrence of "and now" (verse 9; cf. "but now" in verse 1), Job returns to his complaint about his treatment by his "enemies", who include the outcasts of the community. The attacks are depicted as overwhelming in their severity and persistence. [Job said:] "But now those who are younger than I mock me," "whose fathers I disdained to put with the dogs of my flock." Verse 1. The last statement means that Job did not think highly enough of their fathers to put them with the dogs. Job despairs of God's treatment of him (30:16–31). In this section Job reiterates his conviction that God is in total control of his life, and complains that God has shown him no mercy. Job hopes for restoration ("good") but only faces disaster ("evil"), so he can see only a bleak picture of his future life. [Job said:] "My harp is turned to mourning," "and my flute to the voice of those who weep." Verse 31. The mention of the musical instruments may parallel the naming of jackals and owls in verse 29, which are known for emitting screeching sounds (cf. Micah 1:8), instead of the life-enhancing tones of the lyre (harp) and pipes (flute). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71788979
71790076
Ordinal priority approach
Multiple-criteria decision analysis method Ordinal priority approach (OPA) is a multiple-criteria decision analysis method that aids in solving group decision-making problems based on preference relations. Description. Various methods have been proposed to solve multi-criteria decision-making problems. The basis of methods such as the analytic hierarchy process and the analytic network process is the pairwise comparison matrix. The advantages and disadvantages of the pairwise comparison matrix were discussed by Munier and Hontoria in their book. In recent years, the OPA method was proposed to solve multi-criteria decision-making problems based on ordinal data instead of using the pairwise comparison matrix. The OPA method is a major part of Dr. Amin Mahmoudi's PhD thesis from the Southeast University of China. This method uses a linear programming approach to compute the weights of experts, criteria, and alternatives simultaneously. The main reason for using ordinal data in the OPA method is the accessibility and accuracy of ordinal data compared with the exact ratios used in group decision-making problems involving humans. In real-world situations, the experts might not have enough knowledge regarding one alternative or criterion. In this case, the input data of the problem is incomplete, which needs to be incorporated into the linear programming of the OPA. To handle incomplete input data in the OPA method, the constraints related to the corresponding criteria or alternatives should be removed from the OPA linear-programming model. Various types of data normalization methods have been employed in multi-criteria decision-making methods in recent years. Palczewski and Sałabun showed that using various data normalization methods can change the final ranks of the multi-criteria decision-making methods. Javed and colleagues showed that a multiple-criteria decision-making problem can be solved by avoiding data normalization. There is no need to normalize the preference relations, and thus the OPA method does not require data normalization. The OPA method. The OPA model is a linear programming model, which can be solved using a simplex algorithm. The steps of this method are as follows: Step 1: Identifying the experts and determining the preference of experts based on their working experience, educational qualification, etc. Step 2: Identifying the criteria and determining the preference of the criteria by each expert. Step 3: Identifying the alternatives and determining the preference of the alternatives in each criterion by each expert. Step 4: Constructing the following linear programming model and solving it with appropriate optimization software such as LINGO, GAMS, MATLAB, etc. formula_0 In the above model, formula_1 represents the rank of expert formula_2, formula_3 represents the rank of criterion formula_4, formula_5 represents the rank of alternative formula_6, and formula_7 represents the weight of alternative formula_8 in criterion formula_4 by expert formula_2. After solving the OPA linear programming model, the weight of each alternative is calculated by the following equation: formula_9 The weight of each criterion is calculated by the following equation: formula_10 And the weight of each expert is calculated by the following equation: formula_11 Example. Suppose that we are going to investigate the issue of buying a house. There are two experts in this decision problem. Also, there are two criteria, cost (c) and construction quality (q), for buying the house. 
On the other hand, there are three houses (h1, h2, h3) available for purchase. The first expert (x) has three years of working experience and the second expert (y) has two years of working experience. The structure of the problem is shown in the figure. Step 1: The first expert (x) has more experience than expert (y), hence x &gt; y. Step 2: The criteria and their preference are summarized in the following table: Step 3: The alternatives and their preference are summarized in the following table: Step 4: The OPA linear programming model is formed based on the input data as follows: formula_12 After solving the above model using optimization software, the weights of experts, criteria and alternatives are obtained as follows: formula_13 Therefore, House 1 (h1) is considered the best alternative. Moreover, we can understand that the cost criterion (c) is more important than the construction quality criterion (q). Also, based on the experts' weights, we can understand that expert (x) has a higher impact on the final selection than expert (y). Applications. The applications of the OPA method in various fields of study include agriculture, manufacturing, and services; the construction industry; energy and environment; healthcare; information technology; and transportation. Extensions. Several extensions of the OPA method are listed as follows: Software. The following non-profit tools are available to solve MCDM problems using the OPA method: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
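The worked house-buying example can be reproduced with any linear programming solver, not only the packages named in Step 4. The following is a minimal sketch using SciPy's linprog routine; the variable names (w_xch1, ..., w_yqh3, Z) and the small helper used to register constraints are introduced here purely for illustration, and the ordinal input is transcribed directly from the constraints listed in the example above.

```python
import numpy as np
from scipy.optimize import linprog

# Twelve weights w_ijk (expert in {x, y}, criterion in {c, q}, house in {h1, h2, h3}),
# plus the objective variable Z in the last position.
names = ["xch1", "xch2", "xch3", "xqh1", "xqh2", "xqh3",
         "ych1", "ych2", "ych3", "yqh1", "yqh2", "yqh3"]
idx = {n: i for i, n in enumerate(names)}
n_vars = len(names) + 1

A_ub, b_ub = [], []

def leq(coef, terms):
    """Register a constraint Z <= coef * (signed sum of weights) as Z - coef*(...) <= 0."""
    row = np.zeros(n_vars)
    row[-1] = 1.0
    for name, sign in terms:
        row[idx[name]] -= coef * sign
    A_ub.append(row)
    b_ub.append(0.0)

# Expert x (rank 1), criterion c (rank 1): h1 > h3 > h2
leq(1 * 1 * 1, [("xch1", 1), ("xch3", -1)])
leq(1 * 1 * 2, [("xch3", 1), ("xch2", -1)])
leq(1 * 1 * 3, [("xch2", 1)])
# Expert x (rank 1), criterion q (rank 2): h2 > h1 > h3
leq(1 * 2 * 1, [("xqh2", 1), ("xqh1", -1)])
leq(1 * 2 * 2, [("xqh1", 1), ("xqh3", -1)])
leq(1 * 2 * 3, [("xqh3", 1)])
# Expert y (rank 2), criterion c (rank 2): h1 > h2 > h3
leq(2 * 2 * 1, [("ych1", 1), ("ych2", -1)])
leq(2 * 2 * 2, [("ych2", 1), ("ych3", -1)])
leq(2 * 2 * 3, [("ych3", 1)])
# Expert y (rank 2), criterion q (rank 1): h2 > h3 > h1
leq(2 * 1 * 1, [("yqh2", 1), ("yqh3", -1)])
leq(2 * 1 * 2, [("yqh3", 1), ("yqh1", -1)])
leq(2 * 1 * 3, [("yqh1", 1)])

# All weights sum to one; Z does not enter the equality constraint.
A_eq = [np.append(np.ones(len(names)), 0.0)]
b_eq = [1.0]

# Maximize Z  <=>  minimize -Z; weights are non-negative, Z is unrestricted in sign.
c = np.zeros(n_vars)
c[-1] = -1.0
bounds = [(0, None)] * len(names) + [(None, None)]
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

w = res.x[:-1]
def house_weight(house):
    return sum(w[idx[e + crit + house]] for e in "xy" for crit in "cq")

print("Z* =", round(res.x[-1], 6))
print({h: round(house_weight(h), 6) for h in ("h1", "h2", "h3")})
```

Because the sum-to-one constraint forces every ranking constraint to be tight at the optimum, the solver should reproduce the aggregate weights reported above (approximately 0.4259, 0.3519, and 0.2222 for h1, h2, and h3, with Z* = 4/27 ≈ 0.1481).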
[ { "math_id": 0, "text": "\\begin{align}\n&Max Z \\\\\n&S.t. \\\\\n&Z \\leq r_{i}\\bigg (r_{j} \\big(r_{k} (w_{ijk}^{r_{k}} - w_{ijk}^{{r_{k}}+1}) \\big)\\bigg) \\; \\; \\; \\; \\forall i,j \\; and \\; r_{k} \\\\\n&Z \\leq r_{i} r_{j} r_{m} w_{ijk}^{r_{m}} \\; \\; \\; \\forall i,j \\; and \\; r_{m} \\\\\n&\\sum_{i=1}^{p}\\sum_{j=1}^{n}\\sum_{k=1}^{m} w_{ijk} = 1 \\\\\n&w_{ijk}\\geq0 \\; \\; \\; \\forall i, j \\; and \\; k \\\\\n&Z: Unrestricted\\;in\\;sign \\\\\n\\end{align}\n" }, { "math_id": 1, "text": "r_i(i=1,...,p)" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "r_j(j=1...,n)" }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": "r_k(k=1...,m)" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "w_{ijk}" }, { "math_id": 8, "text": "k " }, { "math_id": 9, "text": "\\begin{aligned}\n&w_k=\\sum_{i=1}^{p}\\sum_{j=1}^{n} w_{ijk} \\; \\; \\; \\; \\forall k \\\\\n\\end{aligned}" }, { "math_id": 10, "text": "\\begin{aligned}\n&w_j=\\sum_{i=1}^{p}\\sum_{k=1}^{m} w_{ijk} \\; \\; \\; \\; \\forall j \\\\\n\\end{aligned}" }, { "math_id": 11, "text": "\\begin{aligned}\n&w_i=\\sum_{j=1}^{n}\\sum_{k=1}^{m} w_{ijk} \\; \\; \\; \\; \\forall i \\\\\n\\end{aligned}" }, { "math_id": 12, "text": "\\begin{align}\n&Max Z \\\\\n&S.t. \\\\\n&Z \\leq 1*1* 1* (w_{xch1} - w_{xch3}) \\; \\; \\; \\; \\\\\n&Z \\leq 1*1*2* (w_{xch3} - w_{xch2})\\; \\; \\; \\; \\\\\n&Z \\leq 1* 1 *3* w_{xch2} \\; \\; \\; \\\\\n\\\\\n&Z \\leq 1*2* 1* (w_{xqh2} - w_{xqh1}) \\; \\; \\; \\; \\\\\n&Z \\leq 1*2* 2* (w_{xqh1} - w_{xqh3}) \\; \\; \\; \\; \\\\\n&Z \\leq 1* 2 *3* w_{xqh3} \\; \\; \\; \\\\\n\\\\\n&Z \\leq 2*2* 1* (w_{ych1} - w_{ych2}) \\; \\; \\; \\; \\\\\n&Z \\leq 2*2*2* (w_{ych2} - w_{ych3})\\; \\; \\; \\; \\\\\n&Z \\leq 2* 2 *3* w_{ych3} \\; \\; \\; \\\\\n\\\\\n&Z \\leq 2*1* 1* (w_{yqh2} - w_{yqh3}) \\; \\; \\; \\; \\\\\n&Z \\leq 2*1* 2* (w_{yqh3} - w_{yqh1}) \\; \\; \\; \\; \\\\\n&Z \\leq 2* 1 *3* w_{yqh1} \\; \\; \\; \\\\\n\\\\\n&w_{xch1} + w_{xch2} + w_{xch3} + w_{xqh1} + w_{xqh2} + w_{xqh3}+w_{ych1} + w_{ych2} + w_{ych3} + w_{yqh1} + w_{yqh2} + w_{yqh3}= 1 \\\\\n\\\\\n\\end{align}\n" }, { "math_id": 13, "text": "\\begin{align}&w_{x}=w_{xch1} + w_{xch2} + w_{xch3} + w_{xqh1} + w_{xqh2} + w_{xqh3}=0.666667 \\\\\\\\&w_{y}=w_{ych1} + w_{ych2} + w_{ych3} + w_{yqh1} + w_{yqh2} + w_{yqh3}=0.333333 \\\\\\\\\\\\&w_{c}=w_{xch1} + w_{xch2} + w_{xch3} + w_{ych1} + w_{ych2} + w_{ych3}=0.555556 \\\\\\\\&w_{q}=w_{xqh1} + w_{xqh2} + w_{xqh3} + w_{yqh1} + w_{yqh2} + w_{yqh3}=0.444444 \\\\\\\\\\\\&w_{h1}=w_{xch1} + w_{xqh1} + w_{ych1} + w_{yqh1} = 0.425926 \\\\\\\\&w_{h2}=w_{xch2} + w_{xqh2} + w_{ych2} + w_{yqh2} =0.351852 \\\\\\\\&w_{h3}=w_{xch3} + w_{xqh3} + w_{ych3} + w_{yqh3} =0.222222\\\\\\\\\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=71790076
7179738
Truncated distribution
In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values which lie above or below a given threshold or within a specified range. For example, if the dates of birth of children in a school are examined, these would typically be subject to truncation relative to those of all children in the area given that the school accepts only children in a given age range on a specific date. There would be no information about how many children in the locality had dates of birth before or after the school's cutoff dates if only a direct approach to the school were used to obtain information. Where sampling is such as to retain knowledge of items that fall outside the required range, without recording the actual values, this is known as censoring, as opposed to the truncation here. Definition. The following discussion is in terms of a random variable having a continuous distribution although the same ideas apply to discrete distributions. Similarly, the discussion assumes that truncation is to a semi-open interval "y" ∈ ("a,b"] but other possibilities can be handled straightforwardly. Suppose we have a random variable, formula_0 that is distributed according to some probability density function, formula_1, with cumulative distribution function formula_2 both of which have infinite support. Suppose we wish to know the probability density of the random variable after restricting the support to be between two constants so that the support, formula_3. That is to say, suppose we wish to know how formula_0 is distributed given formula_4. formula_5 where formula_6 for all formula_7 and formula_8 everywhere else. That is, formula_9 where formula_10 is the indicator function. Note that the denominator in the truncated distribution is constant with respect to the formula_11. Notice that in fact formula_12 is a density: formula_13. Truncated distributions need not have parts removed from the top and bottom. A truncated distribution where just the bottom of the distribution has been removed is as follows: formula_14 where formula_6 for all formula_15 and formula_8 everywhere else, and formula_16 is the cumulative distribution function. A truncated distribution where the top of the distribution has been removed is as follows: formula_17 where formula_6 for all formula_18 and formula_8 everywhere else, and formula_16 is the cumulative distribution function. Expectation of truncated random variable. Suppose we wish to find the expected value of a random variable distributed according to the density formula_1 and a cumulative distribution of formula_2 given that the random variable, formula_0, is greater than some known value formula_19. The expectation of a truncated random variable is thus: formula_20 where again formula_21 is formula_6 for all formula_22 and formula_8 everywhere else. Letting formula_23 and formula_24 be the lower and upper limits respectively of support for the original density function formula_25 (which we assume is continuous), properties of formula_26, where formula_27 is some continuous function with a continuous derivative, include: and formula_31 Provided that the limits exist, that is: formula_34, formula_35 and formula_36 where formula_37 represents either formula_38 or formula_39. Examples. The truncated normal distribution is an important example. 
The Tobit model employs truncated distributions. Other examples include the binomial distribution truncated at "x" = 0 and the Poisson distribution truncated at "x" = 0. Random truncation. Suppose we have the following setup: a truncation value, formula_40, is selected at random from a density, formula_41, but this value is not observed. Then a value, formula_11, is selected at random from the truncated distribution, formula_42. Suppose we observe formula_11 and wish to update our belief about the density of formula_40 given the observation. First, by definition: formula_43, and formula_44 Notice that formula_40 must be greater than formula_11; hence when we integrate over formula_40, we set a lower bound of formula_11. The functions formula_45 and formula_16 are the unconditional density and unconditional cumulative distribution function, respectively. By Bayes' rule, formula_46 which expands to formula_47 Two uniform distributions (example). Suppose we know that "t" is uniformly distributed on [0,"T"] and "x"|"t" is distributed uniformly on [0,"t"]. Let "g"("t") and "f"("x"|"t") be the densities that describe "t" and "x" respectively. Suppose we observe a value of "x" and wish to know the distribution of "t" given that value of "x". formula_48
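The definitions above lend themselves to a quick numerical check. The sketch below (not part of the original article) takes a standard normal as the parent distribution, compares the density f(x)/(F(b) - F(a)) with SciPy's built-in truncnorm, evaluates the lower-truncation expectation E(X | X > y) by quadrature, and verifies that the posterior from the two-uniform example integrates to one; the values of a, b, y, T, and the observed x are arbitrary choices made for illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

a, b = -1.0, 2.0             # truncation interval (a, b], chosen arbitrarily
parent = stats.norm()         # parent distribution: f = parent.pdf, F = parent.cdf

def trunc_pdf(x):
    # Truncated density from the definition: f(x) * I{a < x <= b} / (F(b) - F(a))
    inside = (x > a) & (x <= b)
    return np.where(inside, parent.pdf(x), 0.0) / (parent.cdf(b) - parent.cdf(a))

xs = np.linspace(-0.5, 1.5, 5)
print(np.allclose(trunc_pdf(xs), stats.truncnorm(a, b).pdf(xs)))   # True

# Expectation of a lower-truncated variable: E(X | X > y) = int_y^inf x f(x) dx / (1 - F(y))
y = 0.5
numerator, _ = quad(lambda x: x * parent.pdf(x), y, np.inf)
print(numerator / (1 - parent.cdf(y)))   # for the standard normal this equals phi(y)/(1 - Phi(y))

# Two uniform distributions: the posterior g(t | x) = 1 / (t * (ln T - ln x)) on (x, T)
T, x_obs = 3.0, 0.7
posterior = lambda t: 1.0 / (t * (np.log(T) - np.log(x_obs)))
print(quad(posterior, x_obs, T)[0])      # integrates to ~1, as a density should
```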
[ { "math_id": 0, "text": " X " }, { "math_id": 1, "text": " f(x) " }, { "math_id": 2, "text": " F(x) " }, { "math_id": 3, "text": " y = (a,b] " }, { "math_id": 4, "text": " a < X \\leq b " }, { "math_id": 5, "text": "f(x|a < X \\leq b) = \\frac{g(x)}{F(b)-F(a)} = \\frac{f(x) \\cdot I(\\{a < x \\leq b\\})}{F(b)-F(a)} \\propto_x f(x) \\cdot I(\\{a < x \\leq b\\})" }, { "math_id": 6, "text": "g(x) = f(x)" }, { "math_id": 7, "text": " a <x \\leq b " }, { "math_id": 8, "text": " g(x) = 0 " }, { "math_id": 9, "text": " g(x) = f(x)\\cdot I(\\{a < x \\leq b\\}) " }, { "math_id": 10, "text": "I" }, { "math_id": 11, "text": "x" }, { "math_id": 12, "text": "f(x|a < X \\leq b)" }, { "math_id": 13, "text": "\\int_{a}^{b} f(x|a < X \\leq b)dx = \\frac{1}{F(b)-F(a)} \\int_{a}^{b} g(x) dx = 1 " }, { "math_id": 14, "text": "f(x|X>y) = \\frac{g(x)}{1-F(y)}" }, { "math_id": 15, "text": " y < x " }, { "math_id": 16, "text": "F(x)" }, { "math_id": 17, "text": "f(x|X \\leq y) = \\frac{g(x)}{F(y)}" }, { "math_id": 18, "text": " x \\leq y " }, { "math_id": 19, "text": " y " }, { "math_id": 20, "text": " E(X|X>y) = \\frac{\\int_y^\\infty x g(x) dx}{1 - F(y)} " }, { "math_id": 21, "text": " g(x) " }, { "math_id": 22, "text": " x > y " }, { "math_id": 23, "text": " a " }, { "math_id": 24, "text": " b " }, { "math_id": 25, "text": "f" }, { "math_id": 26, "text": " E(u(X)|X>y) " }, { "math_id": 27, "text": "u" }, { "math_id": 28, "text": " \\lim_{y \\to a} E(u(X)|X>y) = E(u(X)) " }, { "math_id": 29, "text": " \\lim_{y \\to b} E(u(X)|X>y) = u(b) " }, { "math_id": 30, "text": " \\frac{\\partial}{\\partial y}[E(u(X)|X>y)] = \\frac{f(y)}{1-F(y)}[E(u(X)|X>y) - u(y)] " }, { "math_id": 31, "text": " \\frac{\\partial}{\\partial y}[E(u(X)|X<y)] = \\frac{f(y)}{F(y)}[-E(u(X)|X<y) + u(y)] " }, { "math_id": 32, "text": " \\lim_{y \\to a}\\frac{\\partial}{\\partial y}[E(u(X)|X>y)] = f(a)[E(u(X)) - u(a)] " }, { "math_id": 33, "text": " \\lim_{y \\to b}\\frac{\\partial}{\\partial y}[E(u(X)|X>y)] = \\frac{1}{2}u'(b) " }, { "math_id": 34, "text": " \\lim_{y \\to c} u'(y) = u'(c) " }, { "math_id": 35, "text": " \\lim_{y \\to c} u(y) = u(c) " }, { "math_id": 36, "text": "\\lim_{y \\to c} f(y) = f(c) " }, { "math_id": 37, "text": " c " }, { "math_id": 38, "text": "a" }, { "math_id": 39, "text": " b" }, { "math_id": 40, "text": "t" }, { "math_id": 41, "text": "g(t)" }, { "math_id": 42, "text": "f(x|t)=Tr(x)" }, { "math_id": 43, "text": "f(x)=\\int_{x}^{\\infty} f(x|t)g(t)dt " }, { "math_id": 44, "text": "F(a)=\\int_{x}^a \\left[\\int_{-\\infty}^{\\infty} f(x|t)g(t)dt \\right]dx ." }, { "math_id": 45, "text": "f(x)" }, { "math_id": 46, "text": "g(t|x)= \\frac{f(x|t)g(t)}{f(x)} ," }, { "math_id": 47, "text": "g(t|x) = \\frac{f(x|t)g(t)}{\\int_{x}^{\\infty} f(x|t)g(t)dt} ." }, { "math_id": 48, "text": "g(t|x) =\\frac{f(x|t)g(t)}{f(x)} = \\frac{1}{t(\\ln(T) - \\ln(x))} \\quad \\text{for all } t > x ." } ]
https://en.wikipedia.org/wiki?curid=7179738
7180468
Unity amplitude
A sinusoidal waveform is said to have unity amplitude when the amplitude of the wave is equal to 1. formula_0 where formula_1. This terminology is most commonly used in digital signal processing and is usually associated with Fourier series and Fourier transform sinusoids that involve a duty cycle, formula_2, and a defined fundamental period, formula_3. Analytic signals with unit amplitude satisfy the Bedrosian theorem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
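As a small illustration (the sampling rate, frequency, and fundamental period below are arbitrary choices, not values from the article), the following generates a unity-amplitude sinusoid and confirms that its peak magnitude is 1.

```python
import numpy as np

T_o = 1.0                                     # assumed fundamental period
t = np.linspace(0.0, T_o, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)                 # a = 1, so the waveform has unity amplitude
print(np.isclose(np.max(np.abs(x)), 1.0))     # True: the peak magnitude equals 1
```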
[ { "math_id": 0, "text": "x(t) = a \\sin(\\theta(t))" }, { "math_id": 1, "text": "a = 1" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "T_o" } ]
https://en.wikipedia.org/wiki?curid=7180468
71805117
Permutation codes
Class of error correction codes Permutation codes are a family of error correction codes that were first introduced by Slepian in 1965 and have been widely studied both in combinatorics and information theory due to their applications related to flash memory and power-line communication. Definition and properties. A permutation code formula_0 is defined as a subset of the symmetric group formula_1 endowed with the usual Hamming distance between strings of length formula_2. More precisely, if formula_3 are permutations in formula_1, then formula_4 The minimum distance of a permutation code formula_0 is defined to be the minimum positive integer formula_5 such that there exist distinct formula_3 formula_6 formula_0 with formula_7. One of the reasons why permutation codes are suitable for certain channels is that the alphabet symbols appear only once in each codeword, which, for example, makes the errors occurring in the context of powerline communication less impactful on codewords. Gilbert-Varshamov bound. A main problem in permutation codes is to determine the value of formula_8, where formula_8 is defined to be the maximum number of codewords in a permutation code of length formula_2 and minimum distance formula_9. There has been little progress made for formula_10, except for small lengths. We can define formula_11 with formula_12 to denote the set of all permutations in formula_1 which have distance exactly formula_13 from the identity. Let formula_14 with formula_15, where formula_16 is the number of derangements of order formula_13. The Gilbert-Varshamov bound is a very well known lower bound, and so far it outperforms other bounds for small values of formula_9. Theorem 1: formula_17 There have been improvements on it for the case where formula_18, as the next theorem shows. Theorem 2: If formula_19 for some integer formula_20, then formula_21. For small values of formula_2 and formula_9, researchers have developed various computer search strategies to directly look for permutation codes with some prescribed automorphisms. Other Bounds. There are numerous bounds on permutation codes; we list two of them here. Gilbert-Varshamov Bound Improvement. An improvement can be made to the Gilbert-Varshamov bound discussed above. Using the connection between permutation codes and independent sets in certain graphs, one can improve the Gilbert–Varshamov bound asymptotically by a factor of formula_22 when the code length goes to infinity. Let formula_23 denote the subgraph induced by the neighbourhood of the identity in formula_24, the Cayley graph formula_25, where formula_26. Let formula_27 denote the maximum degree in formula_23. Theorem 3: Let formula_28 and formula_29 Then, formula_30 where formula_31. The Gilbert-Varshamov bound is formula_32 Theorem 4: When formula_9 is fixed and formula_2 goes to infinity, we have formula_33 Lower bounds using linear codes. Using a formula_34 linear block code, one can prove that there exists a permutation code in the symmetric group of degree formula_2, having minimum distance at least formula_9 and large cardinality. A lower bound for permutation codes that provides asymptotic improvements in certain regimes of length and distance of the permutation code is discussed below. For a given subset formula_35 of the symmetric group formula_1, we denote by formula_36 the maximum cardinality of a permutation code of minimum distance at least formula_9 entirely contained in formula_35, i.e. formula_37. 
Theorem 5: Let formula_38 be integers such that formula_39 and formula_40. Moreover, let formula_41 be a prime power and formula_42 be positive integers such that formula_43 and formula_44. If there exists an formula_34 code formula_0 such that formula_45 has a codeword of Hamming weight formula_2, then formula_46 where formula_47. Corollary 1: For every prime power formula_48 and every formula_49, formula_50. Corollary 2: For every prime power formula_51 and every formula_52, formula_53. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
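To make the definitions above concrete, here is a small Python sketch (not part of the original article; the example code in S4 and the helper names are illustrative assumptions). It computes the Hamming distance between permutations, the minimum distance of a small code by brute force, and the Gilbert–Varshamov lower bound n!/Σ_{k=0}^{d-1} C(n,k)·D_k from Theorem 1:

```python
from itertools import combinations
from math import comb, factorial

def hamming(p, q):
    """Hamming distance: number of positions where two permutations disagree."""
    return sum(a != b for a, b in zip(p, q))

def min_distance(code):
    """Minimum distance of a permutation code, by checking all pairs."""
    return min(hamming(p, q) for p, q in combinations(code, 2))

def derangements(k):
    """D_k, the number of permutations of k symbols with no fixed point."""
    d = [1, 0]  # D_0 = 1, D_1 = 0
    for i in range(2, k + 1):
        d.append((i - 1) * (d[-1] + d[-2]))
    return d[k]

def gv_lower_bound(n, d):
    """Gilbert-Varshamov bound: M(n, d) >= n! / sum_{k=0}^{d-1} C(n, k) * D_k."""
    ball = sum(comb(n, k) * derangements(k) for k in range(d))
    return factorial(n) / ball

# Four permutations of {1, 2, 3, 4} that pairwise disagree in every position:
code = [(1, 2, 3, 4), (2, 1, 4, 3), (3, 4, 1, 2), (4, 3, 2, 1)]
print(min_distance(code))    # -> 4, so this explicit code shows M(4, 4) >= 4
print(gv_lower_bound(4, 4))  # -> 1.6, i.e. the GV bound alone only guarantees M(4, 4) >= 2
```

The gap between the explicit code and the GV value in this toy case mirrors the remark above that the GV bound, while the best general-purpose lower bound for small distances, is not tight.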
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "S_n" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\sigma, \\tau" }, { "math_id": 4, "text": "d(\\tau, \\sigma) = |\\left \\{ i \\in \\{1, 2, ..., n\\} : \\sigma(i) \\neq \\tau(i) \\right \\}|" }, { "math_id": 5, "text": "d_{min}" }, { "math_id": 6, "text": "\\in" }, { "math_id": 7, "text": "d(\\sigma, \\tau) = d_{min} " }, { "math_id": 8, "text": "M(n,d)" }, { "math_id": 9, "text": "d" }, { "math_id": 10, "text": "4 \\leq d \\leq n-1" }, { "math_id": 11, "text": "D(n,k)" }, { "math_id": 12, "text": "k \\in \\{0, 1, ..., n\\}" }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": "D(n,k)= \\{ \\sigma \\in S_n: d_H (\\sigma, id)=k\\}" }, { "math_id": 15, "text": "|D(n,k)|=\\tbinom{n}{k}D_k" }, { "math_id": 16, "text": "D_k" }, { "math_id": 17, "text": "\\frac{n!}{\\sum _{k=0} ^{d-1} |D(n,k)|} \\leq M(n,d) \\leq \\frac{n!}{\\sum _{k=0} ^{[\\frac{d-1}{2}]} |D(n,k)|}\n" }, { "math_id": 18, "text": "d = 4" }, { "math_id": 19, "text": "k^2 \\leq n \\leq k^2+k-2" }, { "math_id": 20, "text": "k \\geq 2" }, { "math_id": 21, "text": "\\frac{n!}{M(n,4)} \\geq 1 + \\frac{(n+1)n(n-1)}{n(n-1)-(n-k^2)((k+1)^2-n)((k+2)(k-1)-n)}" }, { "math_id": 22, "text": "\\log(n)" }, { "math_id": 23, "text": "G(n, d)" }, { "math_id": 24, "text": "\\Gamma (n, d)" }, { "math_id": 25, "text": "\\Gamma (n, d) := \\Gamma (S_n, S(n, d - 1))" }, { "math_id": 26, "text": "S(n, k):= \\bigcup_{i = 1}^k D(n, i)" }, { "math_id": 27, "text": "m(n, d)" }, { "math_id": 28, "text": "m'(n, d) = m(n, d) + 1" }, { "math_id": 29, "text": "M_{IS}(n, d) := n!.\\int_0^1 \\frac{(1 - t)^{\\frac{1}{m'(n, d)}}}{m'(n, d) + [\\Delta(n, d) - m'(n, d)]t}dt" }, { "math_id": 30, "text": "M(n, d) \\ge M_{IS}(n, d)" }, { "math_id": 31, "text": "\\Delta(n, d) = \\sum_{k = 0}^{d - 1}\\binom{n}{k}D_k" }, { "math_id": 32, "text": "M(n, d) \\ge M_{GV}(n, d) := \\frac{n!}{1 + \\Delta(n, d)}" }, { "math_id": 33, "text": "\\frac{M_{IS}(n, d)}{M_{GV}(n, d)} = \\Omega(\\log(n))" }, { "math_id": 34, "text": "[n, k, d]_q" }, { "math_id": 35, "text": "\\Kappa" }, { "math_id": 36, "text": "M(\\Kappa, d)" }, { "math_id": 37, "text": "M(\\Kappa, d) = max\\{|\\Gamma| : \\Gamma \\subset \\Kappa , d(\\Gamma) \\ge d\\}" }, { "math_id": 38, "text": "d, k, n" }, { "math_id": 39, "text": "0 < k < n" }, { "math_id": 40, "text": "1 < d \\le n" }, { "math_id": 41, "text": "q" }, { "math_id": 42, "text": "s, r" }, { "math_id": 43, "text": "n = qs + r" }, { "math_id": 44, "text": "0 \\le r < q" }, { "math_id": 45, "text": "C^\\perp" }, { "math_id": 46, "text": "M(n, d) \\ge \\frac{n!M(\\Kappa, d)}{(s + 1)!^r s!^{q-r}q^{n - k -1}}," }, { "math_id": 47, "text": "\\Kappa = (S_{s + 1})^r \\times (S_s)^{q-r}" }, { "math_id": 48, "text": "q \\ge n" }, { "math_id": 49, "text": "2 < d \\le n" }, { "math_id": 50, "text": "M(n, d) \\ge \\frac{n!}{q^{d - 2}}" }, { "math_id": 51, "text": "q " }, { "math_id": 52, "text": "3 < d < q" }, { "math_id": 53, "text": "M(q + 1, d) \\ge \\frac{(q + 1)!}{2q^{d - 2}}" } ]
https://en.wikipedia.org/wiki?curid=71805117
7180591
Pressure head
In fluid mechanics, the height of a liquid column In fluid mechanics, pressure head is the height of a liquid column that corresponds to a particular pressure exerted by the liquid column on the base of its container. It may also be called static pressure head or simply static head (but not "static head pressure"). Mathematically this is expressed as: formula_0 where formula_1 is pressure head (which is actually a length, typically in units of meters or centimetres of water); formula_2 is fluid pressure (i.e. force per unit area, typically expressed in pascals); formula_3 is the specific weight (i.e. force per unit volume, typically expressed in N/m3 units); formula_4 is the density of the fluid (i.e. mass per unit volume, typically expressed in kg/m3); formula_5 is acceleration due to gravity (i.e. rate of change of velocity, expressed in m/s2). Note that in this equation, the pressure term may be gauge pressure or absolute pressure, depending on the design of the container and whether it is open to the ambient air or sealed without air. Head equation. Pressure head is a component of hydraulic head, in which it is combined with elevation head. When considering dynamic (flowing) systems, there is a third term needed: velocity head. Thus, the three terms of "velocity head", "elevation head", and "pressure head" appear in the head equation derived from the Bernoulli equation for incompressible fluids: formula_6 where formula_7 is velocity head, formula_8 is elevation head, formula_1 is pressure head, and formula_9 is a constant for the system. Practical uses for pressure head. Fluid flow is measured with a wide variety of instruments. The venturi meter in the diagram on the left shows two columns of a measurement fluid at different heights. The height of each column of fluid is proportional to the pressure of the fluid. To demonstrate a classical measurement of pressure head, we could hypothetically replace the working fluid with another fluid having different physical properties. For example, if the original fluid was water and we replaced it with mercury at the same pressure, we would expect to see a rather different value for pressure head. In fact the specific weight of water is 9.8 kN/m3 and the specific weight of mercury is 133 kN/m3. So, for any particular measurement of pressure head, the height of a column of water will be about 13.6 times (133/9.8 ≈ 13.6) the height of the corresponding column of mercury. So if a water column meter reads "13.6 cm H2O", then an equivalent measurement is "1.00 cm Hg". This example demonstrates why there is some confusion surrounding pressure head and its relationship to pressure. Scientists frequently use columns of water (or mercury) to measure pressure (manometric pressure measurement), since for a given fluid, pressure head is proportional to pressure. Measuring pressure in units of "mm of mercury" or "inches of water" makes sense for instrumentation, but these raw measurements of head must frequently be converted to more convenient pressure units using the equations above to solve for pressure. In summary, pressure head is a measurement of length, which can be converted to the units of pressure (force per unit area), as long as strict attention is paid to the density of the measurement fluid and the local value of g. Implications for gravitational anomalies on "ψ". We would normally use pressure head calculations in areas in which formula_5 is constant. However, if the gravitational field fluctuates, we can prove that pressure head fluctuates with it. 
Applications. Static. A mercury barometer is one of the classic uses of static pressure head. Such a barometer is an enclosed column of mercury standing vertically with gradations on the tube. The lower end of the tube is bathed in a pool of mercury open to the ambient to measure the local atmospheric pressure. The reading of a mercury barometer (in mm of Hg, for example) can be converted into an absolute pressure using the above equations. If we had a column of mercury 767 mm high, we could calculate the atmospheric pressure as (767 mm)•(133 kN/m3) = 102 kPa. See the torr, millimeter of mercury, and pascal (unit) articles for barometric pressure measurements at standard conditions. Differential. The combination of a venturi meter and manometer is a common type of flow meter which can be used in many fluid applications to convert differential pressure heads into volumetric flow rate, linear fluid speed, or mass flow rate using Bernoulli's principle. The reading of these meters (in inches of water, for example) can be converted into a differential, or gauge pressure, using the above equations. Velocity head. The pressure of a fluid is different when it flows than when it is not flowing. This is why static pressure and dynamic pressure are never the same in a system in which the fluid is in motion. This pressure difference arises from a change in fluid velocity that produces velocity head, which is a term of the Bernoulli equation that is zero when there is no bulk motion of the fluid. In the picture on the right, the pressure differential is entirely due to the change in velocity head of the fluid, but it can be measured as a pressure head because of the Bernoulli principle. If, on the other hand, we could measure the velocity of the fluid, the pressure head could be calculated from the velocity head. See the Derivations of Bernoulli equation.
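The head-to-pressure conversions described above are easy to script. The following Python sketch (purely illustrative, not part of the original article; it uses the rounded specific weights quoted in this article) converts the 767 mm mercury reading to a pressure and re-expresses that pressure as a water column:

```python
# Convert a liquid-column height to pressure, p = gamma * h, and back.
# Specific weights (force per unit volume) as used above, in N/m^3.
GAMMA_WATER = 9.8e3      # ~9.8 kN/m^3
GAMMA_MERCURY = 133e3    # ~133 kN/m^3

def pressure_from_head(height_m, gamma):
    """Pressure in pascals exerted by a column of the given height."""
    return gamma * height_m

def head_from_pressure(pressure_pa, gamma):
    """Column height in metres corresponding to a given pressure."""
    return pressure_pa / gamma

# Mercury barometer reading of 767 mm:
p_atm = pressure_from_head(0.767, GAMMA_MERCURY)
print(f"{p_atm / 1000:.0f} kPa")       # -> 102 kPa, matching the worked example above

# The same pressure expressed as a water column:
h_water = head_from_pressure(p_atm, GAMMA_WATER)
print(f"{h_water:.1f} m of water")     # -> about 10.4 m, i.e. roughly 13.6x the mercury height
```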
[ { "math_id": 0, "text": "\\psi = \\frac{p}{\\gamma} = \\frac{p}{\\rho \\, g}" }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "\\gamma" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "h_{v} + z_\\text{elevation} + \\psi = C\\," }, { "math_id": 7, "text": "h_{v}" }, { "math_id": 8, "text": "z_\\text{elevation}" }, { "math_id": 9, "text": "C" }, { "math_id": 10, "text": "p>0" }, { "math_id": 11, "text": "g>0" }, { "math_id": 12, "text": "g<0" }, { "math_id": 13, "text": "p<0" }, { "math_id": 14, "text": "\\psi>0" }, { "math_id": 15, "text": "\\psi<0" } ]
https://en.wikipedia.org/wiki?curid=7180591
7180897
Per Enflo
Swedish mathematician and concert pianist Per H. Enflo (born 20 May 1944) is a Swedish mathematician working primarily in functional analysis, a field in which he solved problems that had been considered fundamental. Three of these problems had been open for more than forty years: the basis problem and the approximation problem of functional analysis, and later the invariant subspace problem. In solving these problems, Enflo developed new techniques which were then used by other researchers in functional analysis and operator theory for years. Some of Enflo's research has been important also in other mathematical fields, such as number theory, and in computer science, especially computer algebra and approximation algorithms. Enflo works at Kent State University, where he holds the title of University Professor. Enflo has earlier held positions at the Miller Institute for Basic Research in Science at the University of California, Berkeley, Stanford University, École Polytechnique (Paris), and The Royal Institute of Technology, Stockholm. Enflo is also a concert pianist. Enflo's contributions to functional analysis and operator theory. In mathematics, functional analysis is concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of function spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. In functional analysis, an important class of vector spaces consists of the complete normed vector spaces over the real or complex numbers, which are called Banach spaces. An important example of a Banach space is a Hilbert space, where the norm arises from an inner product. Hilbert spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, stochastic processes, and time-series analysis. Besides studying spaces of functions, functional analysis also studies the continuous linear operators on spaces of functions. Hilbert's fifth problem and embeddings. At Stockholm University, Hans Rådström suggested that Enflo consider Hilbert's fifth problem in the spirit of functional analysis. In two years, 1969–1970, Enflo published five papers on Hilbert's fifth problem; these papers are collected in Enflo (1970), along with a short summary. Some of the results of these papers are described in Enflo (1976) and in the last chapter of Benyamini and Lindenstrauss. Applications in computer science. Enflo's techniques have found application in computer science. Algorithm theorists derive approximation algorithms that embed finite metric spaces into low-dimensional Euclidean spaces with low "distortion" (in Gromov's terminology for the Lipschitz category; cf. Banach–Mazur distance). Low-dimensional problems have lower computational complexity, of course. More importantly, if the problems embed well in either the Euclidean plane or the three-dimensional Euclidean space, then geometric algorithms become exceptionally fast. However, such embedding techniques have limitations, as shown by Enflo's (1969) theorem: For every formula_0, the Hamming cube formula_1 cannot be embedded with "distortion formula_2" (or less) into formula_3-dimensional Euclidean space if formula_4. Consequently, the optimal embedding is the natural embedding, which realizes formula_5 as a subspace of formula_6-dimensional Euclidean space. This theorem, "found by Enflo [1969], is probably the first result showing an unbounded distortion for embeddings into Euclidean spaces. 
Enflo considered the problem of uniform embeddability among Banach spaces, and the distortion was an auxiliary device in his proof." Geometry of Banach spaces. A uniformly convex space is a Banach space such that, for every formula_7 there is some formula_8 such that for any two vectors with formula_9 and formula_10, formula_11 implies that formula_12 Intuitively, the center of a line segment inside the unit ball must lie deep inside the unit ball unless the segment is short. In 1972 Enflo proved that "every super-reflexive Banach space admits an equivalent uniformly convex norm". The basis problem and Mazur's goose. With one paper, which was published in 1973, Per Enflo solved three problems that had stumped functional analysts for decades: the basis problem of Stefan Banach, the "Goose problem" of Stanislaw Mazur, and the approximation problem of Alexander Grothendieck. Grothendieck had shown that his approximation problem was the central problem in the theory of Banach spaces and continuous linear operators. Basis problem of Banach. The basis problem was posed by Stefan Banach in his book, "Theory of Linear Operators". Banach asked whether every separable Banach space has a Schauder basis. A Schauder basis or countable basis is similar to the usual (Hamel) basis of a vector space; the difference is that for Hamel bases we use linear combinations that are "finite" sums, while for Schauder bases they may be "infinite" sums. This makes Schauder bases more suitable for the analysis of infinite-dimensional topological vector spaces including Banach spaces. Schauder bases were described by Juliusz Schauder in 1927. Let "V" denote a Banach space over the field "F". A "Schauder basis" is a sequence ("b""n") of elements of "V" such that for every element "v" ∈ "V" there exists a "unique" sequence (α"n") of elements in "F" so that formula_13 where the convergence is understood with respect to the norm topology. Schauder bases can also be defined analogously in a general topological vector space. Problem 153 in the Scottish Book: Mazur's goose. Banach and other Polish mathematicians would work on mathematical problems at the Scottish Café. When a problem was especially interesting and when its solution seemed difficult, the problem would be written down in the book of problems, which soon became known as the "Scottish Book". For problems that seemed especially important or difficult or both, the problem's proposer would often pledge to award a prize for its solution. On 6 November 1936, Stanislaw Mazur posed a problem on representing continuous functions. Formally writing down "problem 153" in the "Scottish Book", Mazur promised as the reward a "live goose", an especially rich prize during the Great Depression and on the eve of World War II. Fairly soon afterwards, it was realized that Mazur's problem was closely related to Banach's problem on the existence of Schauder bases in separable Banach spaces. Most of the other problems in the "Scottish Book" were solved regularly. However, there was little progress on Mazur's problem and a few other problems, which became famous open problems to mathematicians around the world. Grothendieck's formulation of the approximation problem. Grothendieck's work on the theory of Banach spaces and continuous linear operators introduced the approximation property. A Banach space is said to have the approximation property if every compact operator is a limit of finite-rank operators. The converse is always true. 
In a long monograph, Grothendieck proved that if every Banach space had the approximation property, then every Banach space would have a Schauder basis. Grothendieck thus focused the attention of functional analysts on deciding whether every Banach space has the approximation property. Enflo's solution. In 1972, Per Enflo constructed a separable Banach space that lacks the approximation property and a Schauder basis. In 1972, Mazur awarded a live goose to Enflo in a ceremony at the Stefan Banach Center in Warsaw; the "goose reward" ceremony was broadcast throughout Poland. Invariant subspace problem and polynomials. In functional analysis, one of the most prominent problems was the invariant subspace problem, which required the evaluation of the truth of the following proposition: Given a complex Banach space "H" of dimension &gt; 1 and a bounded linear operator "T" : "H" → "H", then "H" has a non-trivial closed "T"-invariant subspace, i.e. there exists a closed linear subspace "W" of "H" which is different from {0} and "H" such that "T"("W") ⊆ "W". For Banach spaces, the first example of an operator without an invariant subspace was constructed by Enflo. (For Hilbert spaces, the invariant subspace problem remains open.) Enflo proposed a solution to the invariant subspace problem in 1975, publishing an outline in 1976. Enflo submitted the full article in 1981, and the article's complexity and length delayed its publication to 1987. Enflo's long "manuscript had a world-wide circulation among mathematicians" and some of its ideas were described in publications besides Enflo (1976). Enflo's works inspired a similar construction of an operator without an invariant subspace, for example by Beauzamy, who acknowledged Enflo's ideas. In the 1990s, Enflo developed a "constructive" approach to the invariant subspace problem on Hilbert spaces. Multiplicative inequalities for homogeneous polynomials. An essential idea in Enflo's construction was "concentration of polynomials at low degrees": For all positive integers formula_6 and formula_14, there exists formula_15 such that for all homogeneous polynomials formula_16 and formula_17 of degrees formula_6 and formula_14 (in formula_18 variables), we have formula_19 where formula_20 denotes the sum of the absolute values of the coefficients of formula_16. Enflo proved that formula_21 does not depend on the number of variables formula_18. Enflo's original proof was simplified by Montgomery. This result was generalized to other norms on the vector space of homogeneous polynomials. Of these norms, the most used has been the Bombieri norm. Bombieri norm. The Bombieri norm is defined in terms of the following scalar product: For all formula_22 we have formula_23 if formula_24. For every formula_25 we define formula_26 where we use the following notation: if formula_27, we write formula_28 and formula_29 and formula_30 The most remarkable property of this norm is the Bombieri inequality: Let formula_31 be two homogeneous polynomials of degrees formula_32 and formula_33 respectively, in formula_34 variables; then the following inequality holds: formula_35 In the above statement, the Bombieri inequality is the left-hand side inequality; the right-hand side inequality means that the Bombieri norm is a norm of the algebra of polynomials under multiplication. 
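As a small numerical illustration of the multiplicative inequality for the coefficient norm |P| described above (this is only a sketch, not Enflo's construction; it restricts to homogeneous polynomials in two variables, where multiplying polynomials amounts to convolving coefficient vectors, and all names are illustrative):

```python
# Homogeneous polynomials in two variables x, y of degree m are determined by
# their coefficients c_0..c_m on x^m, x^(m-1)y, ..., y^m; multiplying two such
# polynomials convolves their coefficient vectors.
import numpy as np

def l1(coeffs):
    """|P|: the sum of absolute values of the coefficients."""
    return np.abs(coeffs).sum()

def concentration_ratio(p, q):
    """|PQ| / (|P| |Q|) for two homogeneous polynomials given by coefficients."""
    pq = np.convolve(p, q)  # coefficients of the product polynomial
    return l1(pq) / (l1(p) * l1(q))

rng = np.random.default_rng(1)
m, n = 3, 4
worst = min(
    concentration_ratio(rng.standard_normal(m + 1), rng.standard_normal(n + 1))
    for _ in range(10_000)
)
# Enflo's theorem: this ratio is bounded below by a positive constant C(m, n)
# that does not depend on the number of variables.
print(f"smallest ratio observed: {worst:.4f}")
```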
The Bombieri inequality implies that the product of two polynomials cannot be arbitrarily small, and this lower bound is fundamental in applications like polynomial factorization (or in Enflo's construction of an operator without an invariant subspace). Applications. Enflo's idea of "concentration of polynomials at low degrees" has led to important publications in number theory, algebraic and Diophantine geometry, and polynomial factorization. Mathematical biology: Population dynamics. In applied mathematics, Per Enflo has published several papers in mathematical biology, specifically in population dynamics. Human evolution. Enflo has also published in population genetics and paleoanthropology. Today, all humans belong to one population of "Homo sapiens sapiens", which is undivided by species barriers. However, according to the "Out of Africa" model this is not the first species of hominids: the first species of genus "Homo", "Homo habilis", evolved in East Africa at least 2 Ma, and members of this species populated different parts of Africa in a relatively short time. "Homo erectus" evolved more than 1.8 Ma, and by 1.5 Ma had spread throughout the Old World. Anthropologists have been divided as to whether the current human population evolved as one interconnected population (as postulated by the Multiregional Evolution hypothesis), or evolved only in East Africa, speciated, and then migrated out of Africa, replacing human populations in Eurasia (called the "Out of Africa" Model or the "Complete Replacement" Model). Neanderthals and modern humans coexisted in Europe for several thousand years, but the duration of this period is uncertain. Modern humans may have first migrated to Europe 40–43,000 years ago. Neanderthals may have lived as recently as 24,000 years ago in refugia on the south coast of the Iberian peninsula such as Gorham's Cave. Inter-stratification of Neanderthal and modern human remains has been suggested, but is disputed. With Hawks and Wolpoff, Enflo published an explanation of fossil evidence on the DNA of Neanderthal and modern humans. This article tries to resolve a debate in the evolution of modern humans between theories suggesting either multiregional or single African origins. In particular, the extinction of Neanderthals could have happened due to waves of modern humans entering Europe – in technical terms, due to "the continuous influx of modern human DNA into the Neandertal gene pool." Enflo has also written about the population dynamics of zebra mussels in Lake Erie. Piano. Per Enflo is also a concert pianist. A child prodigy in both music and mathematics, Enflo won the Swedish competition for young pianists at age 11 in 1956, and he won the same competition in 1961. At age 12, Enflo appeared as a soloist with the Royal Opera Orchestra of Sweden. He debuted in the Stockholm Concert Hall in 1963. Enflo's teachers included Bruno Seidlhofer, Géza Anda, and Gottfried Boon (who himself was a student of Arthur Schnabel). In 1999 Enflo competed in the first annual Van Cliburn Foundation's International Piano Competition for Outstanding Amateurs. Enflo performs regularly around Kent and in a Mozart series in Columbus, Ohio (with the Triune Festival Orchestra). His solo piano recitals have appeared on the Classics Network of the radio station WOSU, which is sponsored by Ohio State University.
[ { "math_id": 0, "text": "m\\geq 2" }, { "math_id": 1, "text": "C_m" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "2^m" }, { "math_id": 4, "text": " D < \\sqrt{ m }" }, { "math_id": 5, "text": "\\{0,1\\}^m" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "\\epsilon>0" }, { "math_id": 8, "text": "\\delta>0" }, { "math_id": 9, "text": "\\|x\\|\\le1" }, { "math_id": 10, "text": "\\|y\\|\\le 1," }, { "math_id": 11, "text": "\\|x+y\\|>2-\\delta" }, { "math_id": 12, "text": "\\|x-y\\|<\\epsilon." }, { "math_id": 13, "text": " v = \\sum_{n \\in \\N} \\alpha_n b_n \\," }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "C(m,n) > 0 " }, { "math_id": 16, "text": "P" }, { "math_id": 17, "text": "Q" }, { "math_id": 18, "text": "k" }, { "math_id": 19, "text": "|PQ|\\geq C(m,n)|P|\\,|Q|," }, { "math_id": 20, "text": "|P|" }, { "math_id": 21, "text": "C(m,n)" }, { "math_id": 22, "text": " \\alpha,\\beta \\in \\mathbb{N}^N" }, { "math_id": 23, "text": "\\langle X^\\alpha | X^\\beta \\rangle = 0" }, { "math_id": 24, "text": "\\alpha \\neq \\beta" }, { "math_id": 25, "text": " \\alpha \\in \\mathbb{N}^N" }, { "math_id": 26, "text": "||X^\\alpha||^2 = \\frac{|\\alpha|!}{\\alpha!}," }, { "math_id": 27, "text": "\\alpha = (\\alpha_1,\\dots,\\alpha_N) \\in \\mathbb{N}^N" }, { "math_id": 28, "text": "|\\alpha| = \\Sigma_{i=1}^N \\alpha_i" }, { "math_id": 29, "text": "\\alpha! = \\Pi_{i=1}^N (\\alpha_i!)" }, { "math_id": 30, "text": "X^\\alpha = \\Pi_{i=1}^N X_i^{\\alpha_i}." }, { "math_id": 31, "text": "P,Q" }, { "math_id": 32, "text": "d^\\circ(P)" }, { "math_id": 33, "text": "d^\\circ(Q)" }, { "math_id": 34, "text": "N" }, { "math_id": 35, "text": "\\frac{d^\\circ(P)!d^\\circ(Q)!}{(d^\\circ(P)+d^\\circ(Q))!}||P||^2 \\, ||Q||^2 \\leq\n ||P\\cdot Q||^2 \\leq ||P||^2 \\, ||Q||^2." }, { "math_id": 36, "text": " L^p " } ]
https://en.wikipedia.org/wiki?curid=7180897
71812823
Job 21
Book of Job, chapter 21 Job 21 is the 21st chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 34 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 21 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapter 21 contains Job's last speech in the second cycle of debates with his friends, notably the only speech in which "Job confines his remarks to his friends". The chapter can be divided into the following parts: Job's plea to the friends to change their attitude (21:1–6). Job opens his speech with a plea for his friends to actually listen to (rather than "mock") his words, because if they did so, it would bring real comfort to him (verses 2–3). Job's issue is that the friends are interfering with his complaint to God through their inaccurate presumptions or their silence toward his defence (verses 4–5). Laying a complaint before an almighty God is a dangerous task, hence Job approaches it with trembling (verse 6). [Job said:] "As for me, is my complaint against man?" "And if it were, why should I not be impatient?" Job explores why the wicked are not always punished as the friends insisted (21:7–26). This section has two main parts in which Job explores the apparent anomalies of what the friends stated about the fate of the wicked: Job is suspicious of any attempt to trim the facts to fit into a 'tidy theological system', and he confronts the friends to match their neat imaginary world with reality. Verse 7 contains the statement of the general problem for the first topic: "why the wicked not only exist but also live a long life ("advance to old age") and grow mighty in power and wealth". The second topic is framed by the 'reality of death' (verses 17–18 and verses 25–26) as Job asks "how often do the wicked die prematurely" in a series of rhetorical questions with the expected answer: "hardly ever". The implication of both topics is the arbitrariness (lack of connection) between 'a person's righteousness and the fullness of that person's life'; thus divine retribution is not actually reflected in the world. [Job said:] "Lo, their good is not in their hand: the counsel of the wicked is far from me." Job remarks on the failure of the friends' rebuttals (21:27–34). After challenging the friends about their thinking process, Job criticizes them for being blind and deaf to reality because of their rigid theological systems (verses 29–33). 
Job closes the second round of debate by pointing out the insubstantiality of his friends' comfort until now ('mere hot air') and the faithlessness or treachery of what is left standing in their speeches (verse 34). [Job said:] "Have you not asked them who travel the road?" "And do you not know their signs?" Verse 29. The Greek Septuagint version renders the verse as: “Ask those who go by the way, and do not disown their signs.” References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71812823
7181855
Borel conjecture
In mathematics, specifically geometric topology, the Borel conjecture (named for Armand Borel) asserts that an aspherical closed manifold is determined by its fundamental group, up to homeomorphism. It is a rigidity conjecture, asserting that a weak, algebraic notion of equivalence (namely, homotopy equivalence) should imply a stronger, topological notion (namely, homeomorphism). Precise formulation of the conjecture. Let formula_0 and formula_1 be closed and aspherical topological manifolds, and let formula_2 be a homotopy equivalence. The Borel conjecture states that the map formula_3 is homotopic to a homeomorphism. Since aspherical manifolds with isomorphic fundamental groups are homotopy equivalent, the Borel conjecture implies that aspherical closed manifolds are determined, up to homeomorphism, by their fundamental groups. This conjecture is false if topological manifolds and homeomorphisms are replaced by smooth manifolds and diffeomorphisms; counterexamples can be constructed by taking a connected sum with an exotic sphere. The origin of the conjecture. In a May 1953 letter to Jean-Pierre Serre, Armand Borel raised the question whether two aspherical manifolds with isomorphic fundamental groups are homeomorphic. A positive answer to the question "Is every homotopy equivalence between closed aspherical manifolds homotopic to a homeomorphism?" is referred to as the "so-called Borel Conjecture" in a 1986 paper of Jonathan Rosenberg. Motivation for the conjecture. A basic question is the following: if two closed manifolds are homotopy equivalent, are they homeomorphic? This is not true in general: there are homotopy equivalent lens spaces which are not homeomorphic. Nevertheless, there are classes of manifolds for which homotopy equivalences between them can be homotoped to homeomorphisms. For instance, the Mostow rigidity theorem states that a homotopy equivalence between closed hyperbolic manifolds is homotopic to an isometry—in particular, to a homeomorphism. The Borel conjecture is a topological reformulation of Mostow rigidity, weakening the hypothesis from hyperbolic manifolds to aspherical manifolds, and similarly weakening the conclusion from an isometry to a homeomorphism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "f \\colon M \\to N" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "f \\colon M \\to BG" }, { "math_id": 5, "text": "S^3" }, { "math_id": 6, "text": "T^3 = S^1 \\times S^1 \\times S^1" } ]
https://en.wikipedia.org/wiki?curid=7181855
71818883
Job 22
Job 22 is the 22nd chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Eliphaz, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 30 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 22 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. In his last speech of the book (chapter 22), Eliphaz becomes more direct in his accusation of Job as a sinner, going even further than the position of Bildad and Zophar, by confronting Job with a list of alleged offenses (verses 1–11) in contrast to God's knowledge and power (verses 12–20), so at the end Eliphaz urges Job to repent (verses 21–30). Eliphaz lists Job's offenses (22:1–11). Although Eliphaz opens his speech with a gentle tone, he soon attacks Job for having a defective piety toward God ("fear of God"), which could be Job's attempt to bribe God into overlooking his real wickedness (verse 4). It is followed by a string of accusations (summary in verse 5, illustrations in verses 6–11) that Job could have sinned, betraying Eliphaz' steep belief in the retribution theology that only great guilt can explain Job's great suffering. Job will specifically deny all of these charges in his oath of clearance in chapter 31. [Eliphaz said:] "Is it because of your fear of Him that He corrects you," "and enters into judgment with you?" Eliphaz urges Job to acknowledge God's knowledge and repent from his sins (22:12–30). In the first part of this section Eliphaz describes God's majesty (verse 12) to counter what he perceives as Job's claim that God has limited knowledge and is unable to see through deep darkness, and so unable to judge properly. Eliphaz concludes that Job must be guilty by association, as he describes the wicked and implies that Job must be like them (verses 15–20). Finally, Eliphaz outlines the way for Job to return to God, that is, beyond the initial returning, also to receive instruction ("tora" or Torah) from God and place His words in his heart (verse 22); good advice which is misdirected – it is Eliphaz that will need to follow it (Job 42:7–9), instead of Job. Eliphaz' tidy analysis and advice are unfortunately based on a misdiagnosis of Job's situation, and with this speech Eliphaz seems to run out of arguments, as his part in the dialogue grinds to a halt (verse 29). [Eliphaz said:] "Then you will lay up gold as dust," "and the gold of Ophir as the stones of the brooks." 
[Eliphaz said:] "When men are cast down, and you say, ‘There is a time of exaltation!’" "then He will save the humble person." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71818883
71819390
Job 23
Biblical canon Job 23 is the 23rd chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 17 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 23 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. In response to Eliphaz, Job starts by speaking of God indirectly (in the third person), although the speech is addressed to his friends (chapter 23), before he addresses Eliphaz directly (chapter 24) on the issues raised in chapter 22. In chapter 23, Job again ponders the possible legal case against God (verses 1–7), but he is terrified at the prospect of facing God, whom he desperately seeks but cannot see (verses 8–9); yet he believes God knows all his ways and will complete the purposes in his life (verses 10–14), so Job testifies that he both longs for and is afraid of God's presence (verses 15–17). Job ponders the litigation against God (23:1–7). The language of litigation is prominent in this section, as Job revisits the possibility of a legal action to obtain vindication. Job does not intend to achieve victory in the case, but his ultimate goal is to prove how righteous God's judgment is (verse 7). [Job said:] "There an upright man could argue with him," "and I would be acquitted forever by my judge." Job searches for the terrifying God (23:8–17). This section describes Job's search for God, which is framed by the absence (verses 8–9) and the presence (verses 15–17) of God. Job understands that God's purpose in him must be fulfilled (verse 14) before Job can face Him. However, Job then realizes how terrified he would feel when he finally can achieve his wish of being in the presence of God. [Job said:] "16 For God makes my heart soft," "and the Almighty troubles me;" "17 because I was not cut off from the presence of darkness," "nor has He covered the darkness from my face" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71819390
71820767
Job 24
Job 24 is the 24th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 25 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 24 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. In response to Eliphaz, Job starts by speaking of God indirectly (in the third person), although the speech is addressed to his friends (chapter 23). Next (in chapter 24), Job addresses the issue of the oppression of the poor that Eliphaz had raised (Job 22:6-20). Job concurs that oppression exists, but questions why God does not act in judgment against the oppressors, while listing the kinds of actions and attitudes that Job regards as morally reprehensible (to be expanded in chapter 31). Job reflects on the oppression (24:1–17). In this section Job asks about the "times" and God's "days" when the wicked are allowed to oppress and prosper without punishment, followed by ample evidence: the poor must harvest in the fields and glean in the vineyards of the wicked (verse 6), whereas their garment was taken in pledge for a loan (verses 7, 10a; picking up the detail of Eliphaz's speech in Job 22:6b), leaving them naked, hungry, and thirsty, but nonetheless forced to work, carrying sheaves and making olive oil and wine (verses 10–11); in summary, people (cf. Job 11:3) 'groan under their oppression' (cf. the groaning land in Exodus 6), and 'the spirit of the wounded cry out for help' (verse 12). Next are the more serious wrongdoings in the world charged against the wicked (verses 13–17): murder (verse 14), adultery (verse 15), and stealing (verse 16), in the same order as their appearance in the Ten Commandments. The key image here is of darkness (verses 15, 16, 17) and light (verses 13, 14, 16), with a key implication that those who choose the paths of darkness are not caught or held to account, whereas God's light should expose them. [Job said:] "Men groan from outside the city," "and the soul of the wounded cries out;" "yet God does not charge them with wrong." Verse 12. The 'groan' here is comparable to the cries of the Israelites in their Egyptian bondage (Exodus 2:23-25). Job expounds the fate of the wicked (24:18–25). Job knows that the wicked would be swallowed by Sheol or death (verse 19b) suddenly (verses 18–19a) and so completely that they would be utterly forgotten, as if they had never existed (verse 20). 
However, before that happens, God seems to preserve and prolong the lives of such wicked people (verse 22), who wrongfully treat childless women and widows (verse 21), and even to give them security and protection for a "long time" (verse 23). Therefore Job challenges his friends to prove him wrong about the examples he has given. [Job said:] "If it is not so, who will prove me a liar" "and make my speech worth nothing?" Verse 25. Job dares his friends to disprove his argument that there is observable injustice in the world, but that God will eventually balance the scales of justice. Significantly, none of the three friends takes up Job's challenge. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71820767
7184
C*-algebra
Topological complex vector space In mathematics, specifically in functional analysis, a C∗-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra "A" of continuous linear operators on a complex Hilbert space with two additional properties: "A" is a topologically closed set in the norm topology of operators, and "A" is closed under the operation of taking adjoints of operators. Another important class of non-Hilbert C*-algebras includes the algebra formula_0 of complex-valued continuous functions on "X" that vanish at infinity, where "X" is a locally compact Hausdorff space. C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras that are now known as von Neumann algebras. Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space. C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent to which classification is possible, for separable simple nuclear C*-algebras. Abstract characterization. We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark. A C*-algebra, "A", is a Banach algebra over the field of complex numbers, together with a map formula_1 for formula_2 with the following properties: formula_3 formula_4 formula_5 for every formula_6: formula_7 formula_8 Remark. The first four identities say that "A" is a *-algebra. The last identity is called the C* identity and is equivalent to: formula_9, which is sometimes called the B*-identity. For the history behind the names C*- and B*-algebras, see the section below. The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure: formula_10 A bounded linear map, "π" : "A" → "B", between C*-algebras "A" and "B" is called a *-homomorphism if formula_11 and formula_12 In the case of C*-algebras, any *-homomorphism "π" between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity. A bijective *-homomorphism "π" is called a C*-isomorphism, in which case "A" and "B" are said to be isomorphic. Some history: B*-algebras and C*-algebras. The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition formula_13. This condition automatically implies that the *-involution is isometric, that is, formula_14. Hence, formula_15, and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition formula_16. For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'. The term C*-algebra was introduced by I. E. 
Segal in 1947 to describe norm-closed subalgebras of "B"("H"), namely, the space of bounded operators on some Hilbert space "H". 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space". Structure of C*-algebras. C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism. Self-adjoint elements. Self-adjoint elements are those of the form formula_17. The set of elements of a C*-algebra "A" of the form formula_18 forms a closed convex cone. This cone is identical to the elements of the form formula_19. Elements of this cone are called "non-negative" (or sometimes "positive", even though this terminology conflicts with its use for elements of formula_20) The set of self-adjoint elements of a C*-algebra "A" naturally has the structure of a partially ordered vector space; the ordering is usually denoted formula_21. In this ordering, a self-adjoint element formula_22 satisfies formula_23 if and only if the spectrum of formula_24 is non-negative, if and only if formula_25 for some formula_26. Two self-adjoint elements formula_27 and formula_28 of "A" satisfy formula_29 if formula_30. This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction. Quotients and approximate identities. Any C*-algebra "A" has an approximate identity. In fact, there is a directed family {"e"λ}λ∈I of self-adjoint elements of "A" such that formula_31 formula_32 In case "A" is separable, "A" has a sequential approximate identity. More generally, "A" will have a sequential approximate identity if and only if "A" contains a strictly positive element, i.e. a positive element "h" such that "hAh" is dense in "A". Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra. Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra. Examples. Finite-dimensional C*-algebras. The algebra M("n", C) of "n" × "n" matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, C"n", and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type: Theorem. A finite-dimensional C*-algebra, "A", is canonically isomorphic to a finite direct sum formula_33 where min "A" is the set of minimal nonzero self-adjoint central projections of "A". Each C*-algebra, "Ae", is isomorphic (in a noncanonical way) to the full matrix algebra M(dim("e"), C). The finite family indexed on min "A" given by {dim("e")}"e" is called the "dimension vector" of "A". This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. 
In the language of K-theory, this vector is the positive cone of the "K"0 group of "A". A †-algebra (or, more explicitly, a "†-closed algebra") is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science. An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras. C*-algebras of operators. The prototypical example of a C*-algebra is the algebra "B(H)" of bounded (equivalently continuous) linear operators defined on a complex Hilbert space "H"; here "x*" denotes the adjoint operator of the operator "x" : "H" → "H". In fact, every C*-algebra, "A", is *-isomorphic to a norm-closed adjoint closed subalgebra of "B"("H") for a suitable Hilbert space, "H"; this is the content of the Gelfand–Naimark theorem. C*-algebras of compact operators. Let "H" be a separable infinite-dimensional Hilbert space. The algebra "K"("H") of compact operators on "H" is a norm closed subalgebra of "B"("H"). It is also closed under involution; hence it is a C*-algebra. Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras: Theorem. If "A" is a C*-subalgebra of "K"("H"), then there exists Hilbert spaces {"Hi"}"i"∈"I" such that formula_34 where the (C*-)direct sum consists of elements ("Ti") of the Cartesian product Π "K"("Hi") with ||"Ti"|| → 0. Though "K"("H") does not have an identity element, a sequential approximate identity for "K"("H") can be developed. To be specific, "H" is isomorphic to the space of square summable sequences "l"2; we may assume that "H" = "l"2. For each natural number "n" let "Hn" be the subspace of sequences of "l"2 which vanish for indices "k" ≥ "n" and let "en" be the orthogonal projection onto "Hn". The sequence {"en"}"n" is an approximate identity for "K"("H"). "K"("H") is a two-sided closed ideal of "B"("H"). For separable Hilbert spaces, it is the unique ideal. The quotient of "B"("H") by "K"("H") is the Calkin algebra. Commutative C*-algebras. Let "X" be a locally compact Hausdorff space. The space formula_0 of complex-valued continuous functions on "X" that "vanish at infinity" (defined in the article on local compactness) forms a commutative C*-algebra formula_0 under pointwise multiplication and addition. The involution is pointwise conjugation. formula_0 has a multiplicative unit element if and only if formula_35 is compact. As does any C*-algebra, formula_0 has an approximate identity. In the case of formula_0 this is immediate: consider the directed set of compact subsets of formula_35, and for each compact formula_36 let formula_37 be a function of compact support which is identically 1 on formula_36. Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such sequence of functions formula_38 is an approximate identity. The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra formula_0, where formula_35 is the space of characters equipped with the weak* topology. 
Furthermore, if formula_0 is isomorphic to formula_39 as C*-algebras, it follows that formula_35 and formula_40 are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs. C*-enveloping algebra. Given a Banach *-algebra "A" with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E("A") and *-morphism π from "A" into E("A") that is universal, that is, every other continuous *-morphism π ' : "A" → "B" factors uniquely through π. The algebra E("A") is called the C*-enveloping algebra of the Banach *-algebra "A". Of particular importance is the C*-algebra of a locally compact group "G". This is defined as the enveloping C*-algebra of the group algebra of "G". The C*-algebra of "G" provides context for general harmonic analysis of "G" in the case "G" is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra. Von Neumann algebras. Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology. The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it. Type for C*-algebras. A C*-algebra "A" is of type I if and only if for all non-degenerate representations π of "A" the von Neumann algebra π("A")″ (that is, the bicommutant of π("A")) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π("A")″ is a factor. A locally compact group is said to be of type I if and only if its group C*-algebra is type I. However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties. C*-algebras and quantum field theory. In quantum mechanics, one typically describes a physical system with a C*-algebra "A" with unit element; the self-adjoint elements of "A" (elements "x" with "x*" = "x") are thought of as the "observables", the measurable quantities, of the system. A "state" of the system is defined as a positive functional on "A" (a C-linear map φ : "A" → C with φ("u*u") ≥ 0 for all "u" ∈ "A") such that φ(1) = 1. The expected value of the observable "x", if the system is in state φ, is then φ("x"). This C*-algebra approach is used in the Haag–Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
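The finite-dimensional example M(n, C) discussed above can be checked numerically. The following Python/NumPy sketch (purely illustrative, not part of the original article) verifies the C*-identity for a random complex matrix under the conjugate-transpose involution and the operator norm, and also illustrates the remark that the norm is recovered from the spectral radius of x*x:

```python
# Numerical check of the C*-identity ||x* x|| = ||x||^2 in M(n, C):
# matrices with the conjugate-transpose involution and the operator norm.
import numpy as np

def op_norm(a):
    """Operator norm of a matrix: its largest singular value."""
    return np.linalg.norm(a, 2)

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x_star = x.conj().T                      # the involution x -> x*

lhs = op_norm(x_star @ x)                # ||x* x||
rhs = op_norm(x) ** 2                    # ||x||^2
print(np.isclose(lhs, rhs))              # -> True

# The norm is determined algebraically: ||x||^2 equals the spectral radius of x* x.
spec_radius = max(abs(np.linalg.eigvals(x_star @ x)))
print(np.isclose(spec_radius, rhs))      # -> True
```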
[ { "math_id": 0, "text": "C_0(X)" }, { "math_id": 1, "text": " x \\mapsto x^* " }, { "math_id": 2, "text": " x\\in A" }, { "math_id": 3, "text": " x^{**} = (x^*)^* = x " }, { "math_id": 4, "text": " (x + y)^* = x^* + y^* " }, { "math_id": 5, "text": " (x y)^* = y^* x^*" }, { "math_id": 6, "text": "\\lambda\\in\\mathbb{C}" }, { "math_id": 7, "text": " (\\lambda x)^* = \\overline{\\lambda} x^* ." }, { "math_id": 8, "text": " \\|x^* x \\| = \\|x\\|\\|x^*\\|." }, { "math_id": 9, "text": "\\|xx^*\\| = \\|x\\|^2," }, { "math_id": 10, "text": " \\|x\\|^2 = \\|x^* x\\| = \\sup\\{|\\lambda| : x^* x - \\lambda \\,1 \\text{ is not invertible} \\}." }, { "math_id": 11, "text": " \\pi(x y) = \\pi(x) \\pi(y) \\," }, { "math_id": 12, "text": " \\pi(x^*) = \\pi(x)^* \\," }, { "math_id": 13, "text": "\\lVert x x^* \\rVert = \\lVert x \\rVert ^2" }, { "math_id": 14, "text": "\\lVert x \\rVert = \\lVert x^* \\rVert " }, { "math_id": 15, "text": "\\lVert xx^*\\rVert = \\lVert x \\rVert \\lVert x^*\\rVert" }, { "math_id": 16, "text": "\\lVert x \\rVert = \\lVert x^* \\rVert" }, { "math_id": 17, "text": " x = x^* " }, { "math_id": 18, "text": " x^*x " }, { "math_id": 19, "text": " xx^* " }, { "math_id": 20, "text": "\\mathbb{R}" }, { "math_id": 21, "text": " \\geq " }, { "math_id": 22, "text": " x \\in A " }, { "math_id": 23, "text": " x \\geq 0 " }, { "math_id": 24, "text": " x " }, { "math_id": 25, "text": " x = s^*s " }, { "math_id": 26, "text": " s \\in A" }, { "math_id": 27, "text": "x" }, { "math_id": 28, "text": " y " }, { "math_id": 29, "text": " x \\geq y " }, { "math_id": 30, "text": " x - y \\geq 0 " }, { "math_id": 31, "text": " x e_\\lambda \\rightarrow x " }, { "math_id": 32, "text": " 0 \\leq e_\\lambda \\leq e_\\mu \\leq 1\\quad \\mbox{ whenever } \\lambda \\leq \\mu. " }, { "math_id": 33, "text": " A = \\bigoplus_{e \\in \\min A } A e" }, { "math_id": 34, "text": " A \\cong \\bigoplus_{i \\in I } K(H_i)," }, { "math_id": 35, "text": "X" }, { "math_id": 36, "text": "K" }, { "math_id": 37, "text": "f_K" }, { "math_id": 38, "text": "\\{f_K\\}" }, { "math_id": 39, "text": "C_0(Y)" }, { "math_id": 40, "text": "Y" } ]
https://en.wikipedia.org/wiki?curid=7184
7184215
5-cube
5-dimensional hypercube In five-dimensional geometry, a 5-cube is a name for a five-dimensional hypercube with 32 vertices, 80 edges, 80 square faces, 40 cubic cells, and 10 tesseract 4-faces. It is represented by Schläfli symbol {4,3,3,3} or {4,33}, constructed as 3 tesseracts, {4,3,3}, around each cubic ridge. Related polytopes. It is a part of an infinite hypercube family. The dual of a 5-cube is the 5-orthoplex, of the infinite family of orthoplexes. Applying an "alternation" operation, deleting alternating vertices of the 5-cube, creates another uniform 5-polytope, called a 5-demicube, which is also part of an infinite family called the demihypercubes. The 5-cube can be seen as an "order-3 tesseractic honeycomb" on a 4-sphere. It is related to the Euclidean 4-space (order-4) tesseractic honeycomb and paracompact hyperbolic honeycomb order-5 tesseractic honeycomb. As a configuration. This configuration matrix represents the 5-cube. The rows and columns correspond to vertices, edges, faces, cells, and 4-faces. The diagonal numbers say how many of each element occur in the whole 5-cube. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_0 Cartesian coordinates. The Cartesian coordinates of the vertices of a 5-cube centered at the origin and having edge length 2 are (±1,±1,±1,±1,±1), while this 5-cube's interior consists of all points ("x"0, "x"1, "x"2, "x"3, "x"4) with -1 &lt; "x""i" &lt; 1 for all "i". Images. "n"-cube Coxeter plane projections in the Bk Coxeter groups project into k-cube graphs, with power of two vertices overlapping in the projective graphs. Projection. The 5-cube can be projected down to 3 dimensions with a rhombic icosahedron envelope. There are 22 exterior vertices, and 10 interior vertices. The 10 interior vertices have the convex hull of a pentagonal antiprism. The 80 edges project into 40 external edges and 40 internal ones. The 40 cubes project into golden rhombohedra which can be used to dissect the rhombic icosahedron. The projection vectors are u = {1, φ, 0, -1, φ}, v = {φ, 0, 1, φ, 0}, w = {0, 1, φ, 0, -1}, where φ is the golden ratio, formula_1. It is also possible to project penteracts into three-dimensional space, similarly to projecting a cube into two-dimensional space. Symmetry. The "5-cube" has Coxeter group symmetry B5, abstract structure formula_2, order 3840, containing 25 hyperplanes of reflection. The Schläfli symbol for the 5-cube, {4,3,3,3}, matches the Coxeter notation symmetry [4,3,3,3]. Prisms. All hypercubes have lower symmetry forms constructed as prisms. The 5-cube has 7 prismatic forms from the lowest 5-orthotope, { }5, and upwards as orthogonal edges are constrained to be of equal length. The vertices in a prism are equal to the product of the vertices in the elements. The edges of a prism can be partitioned into the number of edges in an element times the number of vertices in all the other elements. Related polytopes. The "5-cube" is 5th in a series of hypercube: The regular skew polyhedron {4,5| 4} can be realized within the 5-cube, with its 32 vertices, 80 edges, and 40 square faces, and the other 40 square faces of the 5-cube become square "holes". This polytope is one of 31 uniform 5-polytopes generated from the regular 5-cube or 5-orthoplex.
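The element counts quoted above can be verified directly from the Cartesian coordinates. The short Python check below is an added illustration (not part of the article): it enumerates the 32 vertices (±1, ..., ±1), finds edges as vertex pairs differing in one coordinate, and counts k-faces combinatorially.

```python
# Verify the 5-cube's element counts from its vertex coordinates.
from itertools import product, combinations
from math import comb

vertices = list(product([-1, 1], repeat=5))
assert len(vertices) == 32

# Two vertices are joined by an edge exactly when they differ in one coordinate.
edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]
assert len(edges) == 80

# k-faces: choose k coordinates to vary, fix the signs of the other 5 - k.
for k, expected in [(2, 80), (3, 40), (4, 10)]:
    assert comb(5, k) * 2 ** (5 - k) == expected

print("5-cube: 32 vertices, 80 edges, 80 squares, 40 cubes, 10 tesseracts")
```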
[ { "math_id": 0, "text": "\\begin{bmatrix}\\begin{matrix}\n32 & 5 & 10 & 10 & 5 \\\\ \n2 & 80 & 4 & 6 & 4 \\\\ \n4 & 4 & 80 & 3 & 3 \\\\ \n8 & 12 & 6 & 40 & 2 \\\\ \n16 & 32 & 24 & 8 & 10 \n\\end{matrix}\\end{bmatrix}" }, { "math_id": 1, "text": "\\frac{1 + \\sqrt{5}}{2}" }, { "math_id": 2, "text": "C_{2}\\wr S_{5}" } ]
https://en.wikipedia.org/wiki?curid=7184215
7184831
Primary pseudoperfect number
Type of number In mathematics, and particularly in number theory, "N" is a primary pseudoperfect number if it satisfies the Egyptian fraction equation formula_0 where the sum is over only the prime divisors of "N". Properties. Equivalently, "N" is a primary pseudoperfect number if it satisfies formula_1 Except for the primary pseudoperfect number "N" = 2, this expression gives a representation for "N" as the sum of distinct divisors of "N". Therefore, each primary pseudoperfect number "N" (except "N" = 2) is also pseudoperfect. The eight known primary pseudoperfect numbers are 2, 6, 42, 1806, 47058, 2214502422, 52495396602, 8490421583559688410706771261086 (sequence in the OEIS). The first four of these numbers are one less than the corresponding numbers in Sylvester's sequence, but then the two sequences diverge. It is unknown whether there are infinitely many primary pseudoperfect numbers, or whether there are any odd primary pseudoperfect numbers. The prime factors of primary pseudoperfect numbers sometimes may provide solutions to Znám's problem, in which all elements of the solution set are prime. For instance, the prime factors of the primary pseudoperfect number 47058 form the solution set {2,3,11,23,31} to Znám's problem. However, the smaller primary pseudoperfect numbers 2, 6, 42, and 1806 do not correspond to solutions to Znám's problem in this way, as their sets of prime factors violate the requirement that no number in the set can equal one plus the product of the other numbers. Anne (1998) observes that there is exactly one solution set of this type that has "k" primes in it, for each "k" ≤ 8, and conjectures that the same is true for larger "k". If a primary pseudoperfect number "N" is one less than a prime number, then "N" × ("N" + 1) is also primary pseudoperfect. For instance, 47058 is primary pseudoperfect, and 47059 is prime, so 47058 × 47059 = 2214502422 is also primary pseudoperfect. History. Primary pseudoperfect numbers were first investigated and named by Butske, Jaje, and Mayernik (2000). Using computational search techniques, they proved the remarkable result that for each positive integer "r" up to 8, there exists exactly one primary pseudoperfect number with precisely "r" (distinct) prime factors, namely, the "r"th known primary pseudoperfect number. Those with 2 ≤ "r" ≤ 8, when reduced modulo 288, form the arithmetic progression 6, 42, 78, 114, 150, 186, 222, as was observed by Sondow and MacMillan (2017).
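The defining equation can be tested directly by computer. The following Python sketch is an added illustration (not part of the article): it checks the equivalent condition 1 + Σ N/p = N over the distinct prime divisors of N, and recovers the first five primary pseudoperfect numbers by a small search.

```python
# Direct test of the defining equation for primary pseudoperfect numbers.
def prime_divisors(n):
    """Distinct prime factors of n by trial division (fine for small n)."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def is_primary_pseudoperfect(n):
    if n < 2:
        return False
    return 1 + sum(n // p for p in prime_divisors(n)) == n

print([n for n in range(2, 50000) if is_primary_pseudoperfect(n)])
# Expected output: [2, 6, 42, 1806, 47058]
```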
[ { "math_id": 0, "text": "\\frac{1}{N} + \\sum_{p \\,|\\;\\! N}\\frac{1}{p} = 1," }, { "math_id": 1, "text": "1 + \\sum_{p \\,|\\;\\! N} \\frac{N}{p} = N." } ]
https://en.wikipedia.org/wiki?curid=7184831
7184839
Exposure range
In photography, exposure range may refer to any of several types of dynamic range: The exposure range of a device is usually expressed in stops, which are equivalent to formula_0 where "c" is the medium or device's contrast ratio. For example, average Digital Video (DV) has a contrast ratio of 45:1, so its exposure range is roughly 5.5 stops. Film has an exposure range of approximately 14 stops. Exposure is usually controlled by changing the lens aperture (the amount of light it gathers), the shutter speed (how long light is gathered) or sensitivity (how strongly the film or sensor responds to light). Changing exposure does not change the exposure range. A graduated neutral density filter can be also used to improve the reproduction of the exposure range of the scene, by darkening bright parts of the image. A graduated filter will reduce the extreme highlights of an image, such as a clear, open sky as well as sunlight. The filter is usually an even gradation of neutral gray to clear, covering the top third to half of the filter in gray. Therefore, it mostly affects the top third to half of the frame's exposure, leaving the highlights in the bottom half to two-thirds unaffected. Often these filters are the square gelatin, polycarbonate, or glass type. A mounting kit such as a square frame receptacle mounted to interchangeable screw-in rings will hold the filter at an appropriate orientation. These can be mounted to the end of an SLR lens camera in the same fashion as a polarizing lens, UV filter, and various other screw-in type filters for SLR and DSLR cameras. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
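The conversion from a contrast ratio to stops is a one-line calculation. The snippet below is a minimal added illustration (not part of the article); the numbers used are only the examples already quoted above.

```python
# Exposure range in stops from a contrast ratio c, using log2(c).
from math import log2

def exposure_range_stops(contrast_ratio):
    return log2(contrast_ratio)

print(round(exposure_range_stops(45), 2))     # average DV, about 5.49 stops
print(round(exposure_range_stops(2 ** 14)))   # a 14-stop medium has ~16384:1 contrast
```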
[ { "math_id": 0, "text": "\\log_{2} (c)" } ]
https://en.wikipedia.org/wiki?curid=7184839
71850826
Job 25
Job 25 is the 25th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Bildad the Shuhite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 6 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 25 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. Clearly Bildad has little to say and is running out of steam before he (and the other two friends, Eliphaz and Zophar) tail off into silence. The dialogue between Job and his three friends is practically over, with neither Job nor the friends getting closer in their positions to each other. Bildad's strong belief in retribution theology makes him see that humans are worthless and contemptible before the transcendent God who establishes "order" (literally "peace") in the heavens (verse 2; cf. Genesis 1:2–3; Job 9:13; 26:12–13; comparable to the defeat of chaos in the Babylonian and Canaanite myth). This speech adds little because it seems like a mechanical repetition of what Eliphaz has said in his first two speeches (Job 4:17; 15:14; concurred with by Job in Job 9:2; 14:4), that no one is righteous before God; Job has accepted that he is a sinner, but he still questions his sufferings compared to those of other sinners. [Bildad said:] "5Behold, even the moon does not shine," "and the stars are not pure in His sight;" "6how much less man, who is a maggot?" "And the son of man, who is a worm?" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71850826
71851227
Job 26
Job 26 is the 26th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 26 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. Job's final speech in the third cycle of debate mainly comprises chapters 26 to 27, but in the silence of his friends, Job continues his speech until chapter 31. Chapter 26 can be divided into two parts: Job rebukes his friends (26:1–4). Job devotes the first part of his speech to challenging Bildad's arguments, asking him to show how Bildad has helped someone who has neither power nor strength (verse 2), or advised someone who has no wisdom, or caused anyone to experience abundant success; all of these evoke no answer from Bildad. Job previously clarifies that wisdom, power and strength belong to God (Job 12:13–16), but none of these was in Bildad's speeches. The allusion in verse 4 refers to Eliphaz's words in Job 4:15, which were echoed by Bildad in his last speech (Job 25:4), implying that none of these statements came from God or reliable sources. At this point, Job ceases to address his friends and focuses his attention on the character of God. [Job said:] "To whom have you uttered words?" "And whose spirit came from you?" Job praises God's majestic power (26:5–14). This section contains Job's praise to God, emphasizing his belief in the big view of God controlling his world, although he cannot understand how his suffering can be part of God's good plan. God's authority covers even the dead, who cannot hide from God (explained using three different terms for the dead: "shades/ghost" (verse 5a; cf. Proverbs 2:18; 9:18; Psalm 88:10), "Sheol" (verse 6a, "place of the dead") and "Abaddon" (verse 6b, "the place of destruction")). God also controls the mythological forces of chaos, such as "Rahab" (verse 12b; cf. Job 9:13) and the fleeing serpent (verse 13b), in anticipation of YHWH's second speech (chapters 40–41). Job knows that his knowledge of God is very limited, just the "outskirts" or like a "whisper" (verse 14). [Job said:] "Behold, these are but the outskirts of his ways," "and how small a whisper do we hear of him!" "But the thunder of his power who can understand?" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71851227
71851317
Gaudin model
Gaudin model In physics, the Gaudin model, sometimes known as the "quantum" Gaudin model, is a model, or a large class of models, in statistical mechanics first described in its simplest case by Michel Gaudin. They are exactly solvable models, and are also examples of quantum spin chains. History. The simplest case was first described by Michel Gaudin in 1976, with the associated Lie algebra taken to be formula_0, the Lie algebra of the two-dimensional special linear group. Mathematical formulation. Let formula_1 be a semi-simple Lie algebra of finite dimension formula_2. Let formula_3 be a positive integer. On the complex plane formula_4, choose formula_3 different points, formula_5. Denote by formula_6 the finite-dimensional irreducible representation of formula_1 corresponding to the dominant integral element formula_7. Let formula_8 be a set of dominant integral weights of formula_1. Define the tensor product formula_9. The model is then specified by a set of operators formula_10 acting on formula_11, known as the Gaudin Hamiltonians. They are described as follows. Denote by formula_12 the invariant scalar product on formula_1 (this is often taken to be the Killing form). Let formula_13 be a basis of formula_1 and formula_14 be the dual basis given through the scalar product. For an element formula_15, denote by formula_16 the operator formula_17 which acts as formula_18 on the formula_19th factor of formula_11 and as identity on the other factors. Then formula_20 These operators are mutually commuting. One problem of interest in the theory of Gaudin models is finding simultaneous eigenvectors and eigenvalues of these operators. Instead of working with the multiple Gaudin Hamiltonians, there is another operator formula_21, sometimes referred to as the Gaudin Hamiltonian. It depends on a complex parameter formula_22, and also on the quadratic Casimir, which is an element of the universal enveloping algebra formula_23, defined as formula_24 This acts on representations formula_11 by multiplying by a number dependent on the representation, denoted formula_25. This is sometimes referred to as the index of the representation. The Gaudin Hamiltonian is then defined as formula_26 Commutativity of formula_21 for different values of formula_22 follows from the commutativity of the formula_10. Higher Gaudin Hamiltonians. When formula_1 has rank greater than 1, the commuting algebra spanned by the Gaudin Hamiltonians and the identity can be expanded to a larger commuting algebra, known as the Gaudin algebra. Similarly to the Harish-Chandra isomorphism, these commuting elements have associated degrees, and in particular the Gaudin Hamiltonians form the degree 2 part of the algebra. For formula_27, the Gaudin Hamiltonians and the identity span the Gaudin algebra. There is another commuting algebra which is 'universal', underlying the Gaudin algebra for any choice of sites and weights, called the Feigin–Frenkel center. Then eigenvectors of the Gaudin algebra define linear functionals on the algebra. If formula_28 is an element of the Gaudin algebra formula_29, and formula_30 an eigenvector of the Gaudin algebra, one obtains a linear functional formula_31 given by formula_32 The linear functional formula_33 is called a character of the Gaudin algebra. The spectral problem, that is, determining eigenvalues and simultaneous eigenvectors of the Gaudin algebra, then becomes a matter of determining characters on the Gaudin algebra. Solutions. 
A solution to a Gaudin model often means determining the spectrum of the Gaudin Hamiltonian or Gaudin Hamiltonians. There are several methods of solution, including the algebraic Bethe ansatz. For sl2. For formula_27, let formula_34 be the standard basis. For any formula_35, one can define the operator-valued meromorphic function formula_36 Its residue at formula_37 is formula_38, while formula_39, the 'full' tensor representation. The formula_40 and formula_41 satisfy several useful properties, namely formula_42, formula_43 and formula_44, but the formula_40 do not form a representation: formula_45. The third property is useful as it allows us to also diagonalize with respect to formula_46, for which a diagonal (but degenerate) basis is known. For an formula_0 Gaudin model specified by sites formula_47 and weights formula_48, define the vacuum vector to be the tensor product of the highest weight states from each representation: formula_49. A Bethe vector (of spin deviation formula_50) is a vector of the form formula_51 for formula_52. Guessing eigenvectors of the form of Bethe vectors is the Bethe ansatz. It can be shown that a Bethe vector is an eigenvector of the Gaudin Hamiltonians if the set of equations formula_53 holds for each formula_54 between 1 and formula_50. These are the Bethe ansatz equations for spin deviation formula_50. For formula_55, this reduces to formula_56 Completeness. In theory, the Bethe ansatz equations can be solved to give the eigenvectors and eigenvalues of the Gaudin Hamiltonian. In practice, if the equations are to completely solve the spectral problem, one must also check whether the Bethe vectors account for the full spectrum. If, for a specific configuration of sites and weights, the Bethe ansatz generates all eigenvectors, then it is said to be complete for that configuration of the Gaudin model. It is possible to construct examples of Gaudin models which are incomplete. One problem in the theory of Gaudin models is then to determine when a given configuration is complete or not, or at least characterize the 'space of models' for which the Bethe ansatz is complete. For formula_27, for formula_5 in general position the Bethe ansatz is known to be complete. Even when the Bethe ansatz is not complete, in this case it is due to the multiplicity of a root being greater than one in the Bethe ansatz equations, and it is possible to find a complete basis by defining generalized Bethe vectors. Conversely, for formula_57, there exist specific configurations for which completeness fails due to the Bethe ansatz equations having no solutions. For general complex simple g. Analogues of the Bethe ansatz equation can be derived for Lie algebras of higher rank. However, these are much more difficult to derive and solve than the formula_0 case. Furthermore, for formula_1 of rank greater than 1, that is, all others besides formula_0, there are higher Gaudin Hamiltonians, for which it is unknown how to generalize the Bethe ansatz. ODE/IM isomorphism. There is an ODE/IM isomorphism between the Gaudin algebra (or the universal Feigin–Frenkel center), which are the 'integrals of motion' for the theory, and opers, which are ordinary differential operators, in this case on formula_58. Generalizations. There exist generalizations arising from weakening the restriction on formula_1 being a strictly semi-simple Lie algebra. For example, when formula_1 is allowed to be an affine Lie algebra, the model is called an affine Gaudin model. A different way to generalize is to pick out a preferred automorphism of a particular Lie algebra formula_1. 
One can then define Hamiltonians which transform nicely under the action of the automorphism. One class of such models is that of the cyclotomic Gaudin models. There is also a notion of a classical Gaudin model. Historically, the quantum Gaudin model was defined and studied first, unlike most physical systems. Certain classical integrable field theories can be viewed as classical dihedral affine Gaudin models. Therefore, understanding quantum affine Gaudin models may allow understanding of the integrable structure of quantum integrable field theories. Such classical field theories include the principal chiral model, coset sigma models and affine Toda field theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
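The commutativity of the Gaudin Hamiltonians stated in the formulation above can be checked numerically in the simplest sl2 case. The following Python sketch is an added illustration (not from the article): it builds the Hamiltonians for N = 3 spin-1/2 sites, using the trace form as the invariant scalar product (a normalization choice; the Killing form would only rescale the operators) and arbitrarily chosen distinct sites z_i, then verifies that all pairwise commutators vanish and that the Hamiltonians sum to zero.

```python
# sl2 Gaudin Hamiltonians on three spin-1/2 sites: a numerical commutativity check.
import numpy as np
from functools import reduce

E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

N = 3
z = np.array([0.3, 1.7, -2.1])            # distinct sites, arbitrary choice

def site(op, i):
    """Operator acting as `op` on tensor factor i and as identity elsewhere."""
    return reduce(np.kron, [op if k == i else I2 for k in range(N)])

def gaudin(i):
    total = np.zeros((2 ** N, 2 ** N))
    for j in range(N):
        if j != i:
            # split Casimir E(x)F + F(x)E + (1/2) H(x)H acting on factors i and j
            omega = (site(E, i) @ site(F, j) + site(F, i) @ site(E, j)
                     + 0.5 * site(H, i) @ site(H, j))
            total += omega / (z[i] - z[j])
    return total

Hs = [gaudin(i) for i in range(N)]
for a in range(N):
    for b in range(N):
        assert np.allclose(Hs[a] @ Hs[b] - Hs[b] @ Hs[a], 0)   # they commute
print("all pairwise commutators vanish; |sum of H_i| =", np.abs(sum(Hs)).max())
```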
[ { "math_id": 0, "text": "\\mathfrak{sl}_2" }, { "math_id": 1, "text": "\\mathfrak{g}" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "\\mathbb{C}" }, { "math_id": 5, "text": "z_i" }, { "math_id": 6, "text": "V_\\lambda" }, { "math_id": 7, "text": "\\lambda" }, { "math_id": 8, "text": "(\\boldsymbol{\\lambda}) := (\\lambda_1, \\cdots, \\lambda_N)" }, { "math_id": 9, "text": "V_{(\\boldsymbol{\\lambda})}:=V_{\\lambda_1}\\otimes \\cdots \\otimes V_{\\lambda_N}" }, { "math_id": 10, "text": "H_i" }, { "math_id": 11, "text": "V_{(\\boldsymbol{\\lambda})}" }, { "math_id": 12, "text": "\\langle \\cdot ,\\cdot\\rangle" }, { "math_id": 13, "text": "\\{I_a\\}" }, { "math_id": 14, "text": "\\{I^a\\}" }, { "math_id": 15, "text": "A\\in \\mathfrak{g}" }, { "math_id": 16, "text": "A^{(i)}" }, { "math_id": 17, "text": "1\\otimes\\cdots\\otimes A \\otimes \\cdots \\otimes 1" }, { "math_id": 18, "text": "A" }, { "math_id": 19, "text": "i" }, { "math_id": 20, "text": "H_i = \\sum_{j \\neq i} \\sum_{a=1}^{d}\\frac{I_a^{(i)}I^{a(j)}}{z_i - z_j}." }, { "math_id": 21, "text": "S(u)" }, { "math_id": 22, "text": "u" }, { "math_id": 23, "text": "U(\\mathfrak{g})" }, { "math_id": 24, "text": "\\Delta = \\frac{1}{2}\\sum_{a=1}^d I_a I^a." }, { "math_id": 25, "text": "\\Delta(\\lambda)" }, { "math_id": 26, "text": "S(u) = \\sum_{i=1}^N \\left[\\frac{H_i}{u - z_i} + \\frac{\\Delta(\\lambda_i)}{(u - z_i)^2}\\right]." }, { "math_id": 27, "text": "\\mathfrak{g} = \\mathfrak{sl}_2" }, { "math_id": 28, "text": "X" }, { "math_id": 29, "text": "\\mathfrak{G}" }, { "math_id": 30, "text": "v" }, { "math_id": 31, "text": "\\chi_v: \\mathfrak{G} \\rightarrow \\mathbb{C}" }, { "math_id": 32, "text": "Xv = \\chi_v(X)v." }, { "math_id": 33, "text": "\\chi_v" }, { "math_id": 34, "text": "\\{E, H, F\\}" }, { "math_id": 35, "text": "X \\in \\mathfrak{g}" }, { "math_id": 36, "text": " X(z) = \\sum_{i = 1}^N\\frac{X^{(i)}}{z - z_i}." }, { "math_id": 37, "text": "z = z_i" }, { "math_id": 38, "text": "X^{(i)}" }, { "math_id": 39, "text": "\\lim_{z \\rightarrow \\infty} zX(z) = \\sum_{i = 1}^N X^{(i)} =: X^{(\\infty)}," }, { "math_id": 40, "text": "X(z)" }, { "math_id": 41, "text": "X^{(\\infty)}" }, { "math_id": 42, "text": "[X(z), Y^{(\\infty)}] = [X, Y](z)" }, { "math_id": 43, "text": "S(u) = \\frac{1}{2}\\sum_a I_a(z) I^a(z)" }, { "math_id": 44, "text": "[H_i, X^{(\\infty)}] = 0" }, { "math_id": 45, "text": "[X(z), Y(z)] = -[X,Y]'(z)" }, { "math_id": 46, "text": "H^{\\infty}" }, { "math_id": 47, "text": "z_1, \\cdots, z_N \\in \\mathbb{C}" }, { "math_id": 48, "text": "\\lambda_1, \\cdots, \\lambda_N \\in \\mathbb{N}" }, { "math_id": 49, "text": "v_0 := v_{\\lambda_1}\\otimes \\cdots \\otimes v_{\\lambda_N}" }, { "math_id": 50, "text": "m" }, { "math_id": 51, "text": "F(w_1)\\cdots F(w_m)v_0" }, { "math_id": 52, "text": "w_i \\in \\mathbb{C}" }, { "math_id": 53, "text": "\\sum_{i = 1}^N \\frac{\\lambda_i}{w_k - z_i} - 2 \\sum_{j \\neq k} \\frac{1}{w_k - w_j} = 0" }, { "math_id": 54, "text": "k" }, { "math_id": 55, "text": "m = 1" }, { "math_id": 56, "text": "\\boldsymbol{\\lambda}(w) := \\sum_{i = 1}^N \\frac{\\lambda_i}{w - z_i} = 0." }, { "math_id": 57, "text": "\\mathfrak{g} = \\mathfrak{sl}_3" }, { "math_id": 58, "text": "\\mathbb{P}^1" } ]
https://en.wikipedia.org/wiki?curid=71851317
71853265
Job 27
27th chapter of the Book of Job Job 27 is the 27th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 23 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 27 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. Job's final speech in the third cycle of debate mainly comprises chapters 26 to 27, but in the silence of his friends, Job continues his speech until chapter 31. Chapter 27 can be divided into three parts: Job again insists on his integrity (27:1–6). After a possible brief pause (see verse 1) Job resumes his speech with a complaint that God's refusal to grant him justice has greatly impacted Job emotionally, so that God has made his life bitter (verse 2). Despite all this, as long as he lives, Job persists in his struggle and does not speak deceitfully (verses 3–4). Job uses the "oath formula" for the first time in verses 2–4 to declare his innocence (a longer legal form appears in chapter 31). Job claims a clear conscience with no reproach by his own "heart" ("the core of his being"), so he is still seeking God to vindicate his integrity and righteousness (verses 5–6). [Job said:] "And Job again took up his discourse, and said:" Job speaks on the fate of the wicked (27:7–23). This section contains Job's point about the wicked, opened with a strong declaration for those against him (verse 7) and followed by a teaching about the fate of the wicked. Job asks a series of rhetorical questions about the relationship between the wicked and God (verses 8–10) to challenge his friends as to why they could not see the reality and why they became so "vain" or "lightweight" (verse 12; cf. Ecclesiastes 1:2ff). In the subsequent speech, Job states his stand on the fate of the wicked, correcting the error in his friends' statements about the same issue, which actually puts them into the same group as the wicked. According to Job, the wicked will eventually be driven out by God, although for a while they seemingly prosper, and be swept away without pity (verses 20–23). [Job said:] "For what is the hope of the hypocrite," "Though he may gain much," "If God takes away his life?" Verse 8. The verse has been linked to the words of Jesus "What shall it profit a man, if he gain the whole world, and lose his own soul? or what shall a man give in exchange for his soul?" 
(Mark 8:36, 37) and to a parable of Jesus "But God said to him, ‘Fool! This night your soul will be required of you; then whose will those things be which you have provided?’" (Luke 12:20). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71853265
7185405
Cahen's constant
Sum of an infinite series, approx 0.6434 In mathematics, Cahen's constant is defined as the value of an infinite series of unit fractions with alternating signs: formula_0 (sequence in the OEIS) Here formula_1 denotes Sylvester's sequence, which is defined recursively by formula_2 Combining these fractions in pairs leads to an alternative expansion of Cahen's constant as a series of positive unit fractions formed from the terms in even positions of Sylvester's sequence. This series for Cahen's constant forms its greedy Egyptian expansion: formula_3 This constant is named after Eugène Cahen (also known for the Cahen–Mellin integral), who was the first to introduce it and prove its irrationality. Continued fraction expansion. The majority of naturally occurring mathematical constants have no known simple patterns in their continued fraction expansions. Nevertheless, the complete continued fraction expansion of Cahen's constant formula_4 is known: it is formula_5 where the sequence of coefficients is defined by the recurrence relation formula_6 All the partial quotients of this expansion are squares of integers. Davison and Shallit made use of the continued fraction expansion to prove that formula_4 is transcendental. Alternatively, one may express the partial quotients in the continued fraction expansion of Cahen's constant through the terms of Sylvester's sequence: To see this, we prove by induction on formula_7 that formula_8. Indeed, we have formula_9, and if formula_8 holds for some formula_7, then formula_10, where we used the recursion for formula_11 in the first step and the recursion for formula_12 in the final step. As a consequence, formula_13 holds for every formula_7, from which it is easy to conclude that formula_14. Best approximation order. Cahen's constant formula_4 has best approximation order formula_15. This means that there exist constants formula_16 such that the inequality formula_17 has infinitely many solutions formula_18, while the inequality formula_19 has at most finitely many solutions formula_18. This implies (but is not equivalent to) the fact that formula_4 has irrationality measure 3. To give a proof, denote by formula_20 the sequence of convergents to Cahen's constant (that means, formula_21). But now it follows from formula_13 and the recursion for formula_12 that formula_22 for every formula_7. As a consequence, the limits formula_23 and formula_24 (recall that formula_25) both exist by basic properties of infinite products, which is due to the absolute convergence of formula_26. Numerically, one can check that formula_27. Thus the well-known inequality formula_28 yields formula_29 and formula_30 for all sufficiently large formula_31. Therefore formula_4 has best approximation order 3 (with formula_32), where we use that any solution formula_18 to formula_33 is necessarily a convergent to Cahen's constant. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
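Both descriptions of the constant are easy to evaluate by computer. The following Python sketch is an added illustration (not part of the article): it sums the alternating series over Sylvester's sequence and also evaluates the continued fraction whose partial quotients follow the pattern [0; 1, 1, 1, s0², s1², (s0s2)², (s1s3)², ...] quoted above, and checks that the two agree numerically.

```python
# Cahen's constant from its series and from its continued fraction.
from fractions import Fraction

# Sylvester's sequence: s_0 = 2, s_{i+1} = s_i^2 - s_i + 1.
s = [2]
for _ in range(8):
    s.append(s[-1] * s[-1] - s[-1] + 1)

# Alternating series C = sum (-1)^i / (s_i - 1).
C_series = sum(Fraction((-1) ** i, s[i] - 1) for i in range(9))

# Partial quotients [0; 1, 1, 1, s0^2, s1^2, (s0*s2)^2, (s1*s3)^2, (s0*s2*s4)^2, ...].
quotients = [0, 1, 1, 1]
for k in range(5):
    prod = 1
    for j in range(k % 2, k + 1, 2):
        prod *= s[j]
    quotients.append(prod * prod)

value = Fraction(0)
for q in reversed(quotients[1:]):      # evaluate the continued fraction bottom-up
    value = 1 / (q + value)
value += quotients[0]

print(float(C_series), float(value))   # both approximately 0.6434105462883...
```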
[ { "math_id": 0, "text": "C = \\sum_{i=0}^\\infty \\frac{(-1)^i}{s_i-1}=\\frac11 - \\frac12 + \\frac16 - \\frac1{42} + \\frac1{1806} - \\cdots\\approx 0.643410546288..." }, { "math_id": 1, "text": "(s_i)_{i \\geq 0}" }, { "math_id": 2, "text": "\\begin{array}{l}\ns_0~~~ = 2; \\\\\ns_{i+1} = 1 + \\prod_{j=0}^i s_j \\text{ for } i \\geq 0.\n\\end{array}" }, { "math_id": 3, "text": "C = \\sum\\frac{1}{s_{2i}}=\\frac12+\\frac17+\\frac1{1807}+\\frac1{10650056950807}+\\cdots" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "C = \\left[a_0^2; a_1^2, a_2^2, a_3^2, a_4^2, \\ldots\\right] = [0;1,1,1,4,9,196,16641,\\ldots]" }, { "math_id": 6, "text": "a_0 = 0,~a_1 = 1,~a_{n+2} = a_n\\left(1 + a_n a_{n+1}\\right)~\\forall~n\\in\\mathbb{Z}_{\\geqslant 0}." }, { "math_id": 7, "text": "n \\geq 1" }, { "math_id": 8, "text": "1+a_n a_{n+1} = s_{n-1}" }, { "math_id": 9, "text": "1+a_1 a_2 = 2 = s_0" }, { "math_id": 10, "text": "1+a_{n+1}a_{n+2} = 1+a_{n+1} \\cdot a_n(1+a_n a_{n+1})= 1+a_n a_{n+1} + (a_na_{n+1})^2 = s_{n-1} + (s_{n-1}-1)^2 = s_{n-1}^2-s_{n-1}+1 = s_n," }, { "math_id": 11, "text": "(a_n)_{n \\geq 0}" }, { "math_id": 12, "text": "(s_n)_{n \\geq 0}" }, { "math_id": 13, "text": "a_{n+2} = a_n \\cdot s_{n-1}" }, { "math_id": 14, "text": "C = [0;1,1,1,s_0^2, s_1^2, (s_0s_2)^2, (s_1s_3)^2, (s_0s_2s_4)^2,\\ldots]" }, { "math_id": 15, "text": "q^{-3}" }, { "math_id": 16, "text": "K_1, K_2 > 0" }, { "math_id": 17, "text": " 0 < \\Big| C - \\frac{p}{q} \\Big| < \\frac{K_1}{q^3} " }, { "math_id": 18, "text": " (p,q) \\in \\mathbb{Z} \\times \\mathbb{N} " }, { "math_id": 19, "text": " 0 < \\Big| C - \\frac{p}{q} \\Big| < \\frac{K_2}{q^3} " }, { "math_id": 20, "text": "(p_n/q_n)_{n \\geq 0}" }, { "math_id": 21, "text": "q_{n-1} = a_n \\text{ for every } n \\geq 1" }, { "math_id": 22, "text": "\\frac{a_{n+2}}{a_{n+1}^2} = \\frac{a_{n} \\cdot s_{n-1}}{a_{n-1}^2 \\cdot s_{n-2}^2} = \\frac{a_n}{a_{n-1}^2} \\cdot \\frac{s_{n-2}^2 - s_{n-2} + 1}{s_{n-1}^2} = \\frac{a_n}{a_{n-1}^2} \\cdot \\Big( 1 - \\frac{1}{s_{n-1}} + \\frac{1}{s_{n-1}^2} \\Big)" }, { "math_id": 23, "text": "\\alpha := \\lim_{n \\to \\infty} \\frac{q_{2n+1}}{q_{2n}^2} = \\prod_{n=0}^\\infty \\Big( 1 - \\frac{1}{s_{2n}} + \\frac{1}{s_{2n}^2}\\Big)" }, { "math_id": 24, "text": "\\beta := \\lim_{n \\to \\infty} \\frac{q_{2n+2}}{q_{2n+1}^2} = 2 \\cdot \\prod_{n=0}^\\infty \\Big( 1 - \\frac{1}{s_{2n+1}} + \\frac{1}{s_{2n+1}^2}\\Big)" }, { "math_id": 25, "text": "s_0 = 2" }, { "math_id": 26, "text": "\\sum_{n=0}^\\infty \\Big| \\frac{1}{s_{n}} - \\frac{1}{s_{n}^2} \\Big|" }, { "math_id": 27, "text": "0 < \\alpha < 1 < \\beta < 2" }, { "math_id": 28, "text": "\\frac{1}{q_n(q_n + q_{n+1})} \\leq \\Big| C - \\frac{p_n}{q_n} \\Big| \\leq \\frac{1}{q_nq_{n+1}}" }, { "math_id": 29, "text": "\\Big| C - \\frac{p_{2n+1}}{q_{2n+1}} \\Big| \\leq \\frac{1}{q_{2n+1}q_{2n+2}} = \\frac{1}{q_{2n+1}^3 \\cdot \n\\frac{q_{2n+2}}{q_{2n+1}^2}} < \\frac{1}{q_{2n+1}^3}" }, { "math_id": 30, "text": "\\Big| C - \\frac{p_n}{q_n} \\Big| \\geq \\frac{1}{q_n(q_n + q_{n+1})} > \\frac{1}{q_n(q_n + 2q_{n}^2)} \\geq \\frac{1}{3q_n^3}" }, { "math_id": 31, "text": "n" }, { "math_id": 32, "text": "K_1 = 1 \\text{ and } K_2 = 1/3" }, { "math_id": 33, "text": "0 < \\Big| C - \\frac{p}{q} \\Big| < \\frac{1}{3q^3}" } ]
https://en.wikipedia.org/wiki?curid=7185405
7185428
Random dynamical system
In the mathematical field of dynamical systems, a random dynamical system is a dynamical system in which the equations of motion have an element of randomness to them. Random dynamical systems are characterized by a state space "S", a set of maps formula_0 from "S" into itself that can be thought of as the set of all possible equations of motion, and a probability distribution "Q" on the set formula_0 that represents the random choice of map. Motion in a random dynamical system can be informally thought of as a state formula_1 evolving according to a succession of maps randomly chosen according to the distribution "Q". An example of a random dynamical system is a stochastic differential equation; in this case the distribution Q is typically determined by "noise terms". It consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. Another example is a discrete-state random dynamical system; some elementary contradistinctions between the Markov chain and random dynamical system descriptions of a stochastic dynamics are discussed below. Motivation 1: Solutions to a stochastic differential equation. Let formula_2 be a formula_3-dimensional vector field, and let formula_4. Suppose that the solution formula_5 to the stochastic differential equation formula_6 exists for all positive time and some (small) interval of negative time dependent upon formula_7, where formula_8 denotes a formula_3-dimensional Wiener process (Brownian motion). Implicitly, this statement uses the classical Wiener probability space formula_9 In this context, the Wiener process is the coordinate process. Now define a flow map or (solution operator) formula_10 by formula_11 (whenever the right hand side is well-defined). Then formula_12 (or, more precisely, the pair formula_13) is a (local, left-sided) random dynamical system. The process of generating a "flow" from the solution to a stochastic differential equation leads us to study suitably defined "flows" on their own. These "flows" are random dynamical systems. Motivation 2: Connection to Markov Chain. An i.i.d. random dynamical system in a discrete space is described by a triplet formula_14: here formula_15 is the state space, formula_16; formula_0 is a family of maps formula_17, each of which can be represented by an formula_18 deterministic transition matrix; and formula_19 is a probability measure on the formula_20-field of formula_0. The discrete random dynamical system comes as follows: the system starts in some state formula_21 in formula_15; a map formula_22 in formula_0 is chosen according to the probability measure formula_19 and the system moves to the state formula_23; independently of the previous map, another map formula_24 is chosen according to formula_19 and the system moves to the state formula_25, and so on. The random variable formula_26 is constructed by means of composition of independent random maps, formula_27. Clearly, formula_26 is a Markov chain. Conversely, can a given Markov chain be represented by compositions of i.i.d. random transformations, and if so, how? Yes, it can, but not uniquely. The proof of existence is similar to that of the Birkhoff–von Neumann theorem for doubly stochastic matrices. Here is an example that illustrates the existence and non-uniqueness. Example: Suppose the state space is formula_28 and the set of the transformations formula_0 is expressed in terms of deterministic transition matrices. Then a Markov transition matrix formula_29 can be represented by the following decomposition by the min-max algorithm, formula_30 In the meantime, another decomposition could be formula_31 Formal definition. Formally, a random dynamical system consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. In detail. Let formula_32 be a probability space, the noise space. Define the base flow formula_33 as follows: for each "time" formula_34, let formula_35 be a measure-preserving measurable function: formula_36 for all formula_37 and formula_34; Suppose also that formula_38, the identity function on formula_39, and that formula_41 for all formula_40. That is, formula_42, formula_34, forms a group of measure-preserving transformations of the noise formula_32. 
For one-sided random dynamical systems, one would consider only positive indices formula_43; for discrete-time random dynamical systems, one would consider only integer-valued formula_43; in these cases, the maps formula_42 would only form a commutative monoid instead of a group. While true in most applications, it is not usually part of the formal definition of a random dynamical system to require that the measure-preserving dynamical system formula_44 is ergodic. Now let formula_45 be a complete separable metric space, the phase space. Let formula_46 be a formula_47-measurable function such that: for all formula_37, formula_48, the identity function on formula_49; for (almost) all formula_37, the map formula_50 is continuous; and the (crude) cocycle property holds, that is, for almost all formula_37, formula_51 In the case of random dynamical systems driven by a Wiener process formula_52, the base flow formula_35 would be given by formula_53. This can be read as saying that formula_42 "starts the noise at time formula_43 instead of time 0". Thus, the cocycle property can be read as saying that evolving the initial condition formula_54 with some noise formula_55 for formula_43 seconds and then through formula_56 seconds with the same noise (as started from the formula_43 seconds mark) gives the same result as evolving formula_54 through formula_57 seconds with that same noise. Attractors for random dynamical systems. The notion of an attractor for a random dynamical system is not as straightforward to define as in the deterministic case. For technical reasons, it is necessary to "rewind time", as in the definition of a pullback attractor. Moreover, the attractor is dependent upon the realisation formula_58 of the noise. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
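The two-state example in "Motivation 2" can be simulated directly. The Python sketch below is an added illustration (not part of the article): it encodes the three deterministic maps of the first (min-max) decomposition — swap, identity, and "send everything to the first state" — with probabilities 0.6, 0.3, 0.1, composes i.i.d. random choices of them, and checks that the empirical transition frequencies reproduce the Markov matrix M. States are indexed 0 and 1 here instead of 1 and 2.

```python
# Simulate the i.i.d. random-map representation of the two-state Markov chain.
import random

maps = [lambda s: 1 - s,   # swap            [[0,1],[1,0]]  prob 0.6
        lambda s: s,       # identity        [[1,0],[0,1]]  prob 0.3
        lambda s: 0]       # all -> state 0  [[1,0],[1,0]]  prob 0.1
probs = [0.6, 0.3, 0.1]

M = [[0.4, 0.6], [0.7, 0.3]]
counts = [[0, 0], [0, 0]]

random.seed(0)
state = 0
for _ in range(200_000):
    alpha = random.choices(maps, weights=probs)[0]   # i.i.d. choice of map
    new_state = alpha(state)
    counts[state][new_state] += 1
    state = new_state

for i in range(2):
    total = sum(counts[i])
    print([round(c / total, 3) for c in counts[i]], "vs", M[i])
```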
[ { "math_id": 0, "text": "\\Gamma" }, { "math_id": 1, "text": "X \\in S" }, { "math_id": 2, "text": "f : \\mathbb{R}^{d} \\to \\mathbb{R}^{d}" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "\\varepsilon > 0" }, { "math_id": 5, "text": "X(t, \\omega; x_{0})" }, { "math_id": 6, "text": "\\left\\{ \\begin{matrix} \\mathrm{d} X = f(X) \\, \\mathrm{d} t + \\varepsilon \\, \\mathrm{d} W (t); \\\\ X (0) = x_{0}; \\end{matrix} \\right." }, { "math_id": 7, "text": "\\omega \\in \\Omega" }, { "math_id": 8, "text": "W : \\mathbb{R} \\times \\Omega \\to \\mathbb{R}^{d}" }, { "math_id": 9, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P}) := \\left( C_{0} (\\mathbb{R}; \\mathbb{R}^{d}), \\mathcal{B} (C_{0} (\\mathbb{R}; \\mathbb{R}^{d})), \\gamma \\right)." }, { "math_id": 10, "text": "\\varphi : \\mathbb{R} \\times \\Omega \\times \\mathbb{R}^{d} \\to \\mathbb{R}^{d}" }, { "math_id": 11, "text": "\\varphi (t, \\omega, x_{0}) := X(t, \\omega; x_{0})" }, { "math_id": 12, "text": "\\varphi" }, { "math_id": 13, "text": "(\\mathbb{R}^{d}, \\varphi)" }, { "math_id": 14, "text": "(S, \\Gamma, Q)" }, { "math_id": 15, "text": "S" }, { "math_id": 16, "text": "\\{s_1, s_2,\\cdots, s_n\\}" }, { "math_id": 17, "text": "S\\rightarrow S" }, { "math_id": 18, "text": "n\\times n" }, { "math_id": 19, "text": "Q" }, { "math_id": 20, "text": "\\sigma" }, { "math_id": 21, "text": "x_0" }, { "math_id": 22, "text": "\\alpha_1" }, { "math_id": 23, "text": "x_1=\\alpha_1(x_0)" }, { "math_id": 24, "text": "\\alpha_2" }, { "math_id": 25, "text": "x_2=\\alpha_2(x_1)" }, { "math_id": 26, "text": "X_n" }, { "math_id": 27, "text": "X_n=\\alpha_n\\circ \\alpha_{n-1}\\circ \\dots \\circ \\alpha_1(X_0)" }, { "math_id": 28, "text": "S=\\{1, 2\\}" }, { "math_id": 29, "text": " M =\\left(\\begin{array}{cc}\n 0.4 & 0.6 \\\\ 0.7 & 0.3 \n \\end{array}\\right)" }, { "math_id": 30, "text": " M =0.6\\left(\\begin{array}{cc}\n 0 & 1 \\\\ 1 & 0\n \\end{array}\\right)+0.3 \\left(\\begin{array}{cc}\n 1 & 0 \\\\ 0 & 1 \n \\end{array}\\right)+ 0.1\\left(\\begin{array}{cc}\n 1 & 0 \\\\ 1 & 0 \n \\end{array}\\right)." }, { "math_id": 31, "text": " M = 0.18 \\left(\\begin{array}{cc}\n 0 & 1 \\\\ 0 & 1 \n \\end{array}\\right)+ 0.28\\left(\\begin{array}{cc}\n 1 & 0 \\\\ 1 & 0 \n \\end{array}\\right)\n +0.42\\left(\\begin{array}{cc}\n 0 & 1 \\\\ 1 & 0 \n \\end{array}\\right)+0.12\\left(\\begin{array}{cc}\n 1 & 0 \\\\ 0 & 1\n \\end{array}\\right)." 
}, { "math_id": 32, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 33, "text": "\\vartheta : \\mathbb{R} \\times \\Omega \\to \\Omega" }, { "math_id": 34, "text": "s \\in \\mathbb{R}" }, { "math_id": 35, "text": "\\vartheta_{s} : \\Omega \\to \\Omega" }, { "math_id": 36, "text": "\\mathbb{P} (E) = \\mathbb{P} (\\vartheta_{s}^{-1} (E))" }, { "math_id": 37, "text": "E \\in \\mathcal{F}" }, { "math_id": 38, "text": "\\vartheta_{0} = \\mathrm{id}_{\\Omega} : \\Omega \\to \\Omega" }, { "math_id": 39, "text": "\\Omega" }, { "math_id": 40, "text": "s, t \\in \\mathbb{R}" }, { "math_id": 41, "text": "\\vartheta_{s} \\circ \\vartheta_{t} = \\vartheta_{s + t}" }, { "math_id": 42, "text": "\\vartheta_{s}" }, { "math_id": 43, "text": "s" }, { "math_id": 44, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P}, \\vartheta)" }, { "math_id": 45, "text": "(X, d)" }, { "math_id": 46, "text": "\\varphi : \\mathbb{R} \\times \\Omega \\times X \\to X" }, { "math_id": 47, "text": "(\\mathcal{B} (\\mathbb{R}) \\otimes \\mathcal{F} \\otimes \\mathcal{B} (X), \\mathcal{B} (X))" }, { "math_id": 48, "text": "\\varphi (0, \\omega) = \\mathrm{id}_{X} : X \\to X" }, { "math_id": 49, "text": "X" }, { "math_id": 50, "text": "(t,x) \\mapsto \\varphi (t, \\omega,x) " }, { "math_id": 51, "text": "\\varphi (t, \\vartheta_{s} (\\omega)) \\circ \\varphi (s, \\omega) = \\varphi (t + s, \\omega)." }, { "math_id": 52, "text": "W : \\mathbb{R} \\times \\Omega \\to X" }, { "math_id": 53, "text": "W (t, \\vartheta_{s} (\\omega)) = W (t + s, \\omega) - W(s, \\omega)" }, { "math_id": 54, "text": "x_{0}" }, { "math_id": 55, "text": "\\omega " }, { "math_id": 56, "text": "t" }, { "math_id": 57, "text": "(t + s)" }, { "math_id": 58, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=7185428
71855994
Cameron–Fon-Der-Flaass IBIS theorem
In mathematics, the Cameron–Fon-Der-Flaass IBIS theorem arises in dynamical algebraic combinatorics. The theorem was discovered in 1995 by the mathematicians Peter Cameron and Dima Fon-Der-Flaass. The theorem is considered to be a link between group theory and graph theory, as it studies the redundancy of a group. Statement. Let formula_0 be a permutation group on formula_1; then the following are equivalent.
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "\\Omega" } ]
https://en.wikipedia.org/wiki?curid=71855994
71856072
Algebraic decision diagram
Symbolic boolean function representation, extension of BDDs An algebraic decision diagram (ADD), or multi-terminal binary decision diagram (MTBDD), is a data structure that is used to symbolically represent a Boolean function whose codomain is an arbitrary finite set S. An ADD is an extension of a reduced ordered binary decision diagram, commonly called simply a binary decision diagram (BDD) in the literature, whose terminal nodes are not restricted to the Boolean values 0 (FALSE) and 1 (TRUE). The terminal nodes may take any value from a set of constants S. Definition. An ADD represents a Boolean function from formula_0 to a finite set of constants S, or carrier of the algebraic structure. Like a BDD, an ADD is a rooted, directed, acyclic graph with several nodes. However, an ADD can have more than two terminal nodes which are elements of the set S, unlike a BDD. An ADD can also be seen as a Boolean function, or a vectorial Boolean function, by extending the codomain of the function, such that formula_1 with formula_2 and formula_3 for some integer n. Therefore, the theorems of Boolean algebra apply to ADDs, notably Boole's expansion theorem. Each node is labeled by a Boolean variable and has two outgoing edges: a 1-edge which represents the evaluation of the variable to the value TRUE, and a 0-edge for its evaluation to FALSE. An ADD employs the same reduction rules as a BDD (or Reduced Ordered BDD): isomorphic subgraphs are merged, and nodes whose two outgoing edges point to the same child are eliminated. ADDs are canonical according to a particular variable ordering. Matrix partitioning. An ADD can be represented by a matrix according to its cofactors. Applications. ADDs were first implemented for sparse matrix multiplication and shortest path algorithms (Bellman-Ford, Repeated Squaring, and Floyd-Warshall procedures). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
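A very small implementation can make the definition and the reduction rules concrete. The Python sketch below is an added illustration (not a reference implementation): it builds a reduced ADD from a truth table, sharing isomorphic subgraphs through a unique table and skipping nodes whose two children coincide; the example function and its value set {0, 2, 5} are arbitrary choices.

```python
# Minimal reduced ADD with hash-consing and the "equal children" rule.
class ADD:
    _unique = {}                      # unique table: shares isomorphic subgraphs

    def __init__(self, var, low, high, value=None):
        self.var, self.low, self.high, self.value = var, low, high, value

    @classmethod
    def terminal(cls, value):
        return cls._make((None, None, None, value))

    @classmethod
    def node(cls, var, low, high):
        if low is high:               # redundant test: both edges agree
            return low
        return cls._make((var, low, high, None))

    @classmethod
    def _make(cls, key):
        if key not in cls._unique:
            cls._unique[key] = ADD(*key)
        return cls._unique[key]

def build(values, var=0):
    """Build an ADD for f: {0,1}^n -> S given as a list of 2^n values."""
    if len(values) == 1:
        return ADD.terminal(values[0])
    half = len(values) // 2
    return ADD.node(var, build(values[:half], var + 1), build(values[half:], var + 1))

# f(x0, x1) with codomain {0, 2, 5}: f(0,0)=0, f(0,1)=2, f(1,0)=5, f(1,1)=5
root = build([0, 2, 5, 5])
print(root.var, root.low.var, root.high.value)   # x1 is tested only on the 0-branch
```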
[ { "math_id": 0, "text": "\\{0,1\\}^n" }, { "math_id": 1, "text": "f: \\{0,1\\}^n \\to Q " }, { "math_id": 2, "text": "S \\subseteq Q" }, { "math_id": 3, "text": "card(Q) = 2^n" } ]
https://en.wikipedia.org/wiki?curid=71856072
7185671
Uniformly Cauchy sequence
In mathematics, a sequence of functions formula_0 from a set "S" to a metric space "M" is said to be uniformly Cauchy if: for all formula_1, there exists formula_2 such that for all formula_3, formula_4 whenever formula_5. Another way of saying this is that formula_6 as formula_7, where the uniform distance formula_8 between two functions is defined by formula_9 Convergence criteria. A sequence of functions {"f"n} from "S" to "M" is pointwise Cauchy if, for each "x" ∈ "S", the sequence {"f"n("x")} is a Cauchy sequence in "M". This is a weaker condition than being uniformly Cauchy. In general a sequence can be pointwise Cauchy and not pointwise convergent, or it can be uniformly Cauchy and not uniformly convergent. Nevertheless, if the metric space "M" is complete, then any pointwise Cauchy sequence converges pointwise to a function from "S" to "M". Similarly, any uniformly Cauchy sequence will tend uniformly to such a function. The uniform Cauchy property is frequently used when "S" is not just a set, but a topological space, and "M" is a complete metric space. The following theorem holds: if "S" is a topological space, "M" is a complete metric space, and each "f"n : "S" → "M" is continuous, then any uniformly Cauchy sequence {"f"n} tends uniformly to a unique continuous function "f" : "S" → "M". Generalization to uniform spaces. A sequence of functions formula_0 from a set "S" to a uniform space "U" is said to be uniformly Cauchy if: for any entourage formula_10, there exists formula_2 such that for all formula_3, the pair ("f"n("x"), "f"m("x")) belongs to formula_10 whenever formula_5.
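The difference between the pointwise and uniform Cauchy properties can be seen numerically. The Python sketch below is an added illustration (not part of the article): on a grid over [0, 1] it estimates the uniform distance between far-apart members of two families, x/n (uniformly Cauchy) and xⁿ (pointwise Cauchy but not uniformly Cauchy); the grid size is an arbitrary choice.

```python
# Estimate sup_x |f_n(x) - f_m(x)| on a grid for two families on [0, 1].
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)

def uniform_distance(f, n, m):
    return np.max(np.abs(f(x, n) - f(x, m)))

g = lambda t, n: t / n          # uniformly Cauchy: sup |t/n - t/m| = |1/n - 1/m|
h = lambda t, n: t ** n         # pointwise Cauchy on [0, 1], but not uniformly

for n, m in [(10, 20), (100, 200), (1000, 2000)]:
    print(n, m, uniform_distance(g, n, m), uniform_distance(h, n, m))
# The first column of distances shrinks toward 0; the second stays near 1/4,
# since max over [0, 1] of (x^n - x^{2n}) equals 1/4 for every n.
```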
[ { "math_id": 0, "text": "\\{f_{n}\\}" }, { "math_id": 1, "text": "\\varepsilon > 0" }, { "math_id": 2, "text": "N>0" }, { "math_id": 3, "text": "x\\in S" }, { "math_id": 4, "text": "d(f_{n}(x), f_{m}(x)) < \\varepsilon" }, { "math_id": 5, "text": "m, n > N" }, { "math_id": 6, "text": "d_u (f_{n}, f_{m}) \\to 0" }, { "math_id": 7, "text": "m, n \\to \\infty" }, { "math_id": 8, "text": "d_u" }, { "math_id": 9, "text": "d_{u} (f, g) := \\sup_{x \\in S} d (f(x), g(x))." }, { "math_id": 10, "text": "\\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=7185671
71859947
Job 28
28th chapter of the Book of Job in the Hebrew Bible Job 28 is the 28th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 28 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 28 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Comparing the three cycles of debate, the third (and final) round can be seen as 'incomplete', because there is no speech from Zophar and the speech by Bildad is very short (6 verses only), which may be a symptom of the disintegration of the friends' arguments. Job's final speech in the third cycle of debate mainly comprises chapters 26 to 27, but in the silence of his friends, Job continues his speech until chapter 31. Chapter 28 can be divided into three parts, separated by two refrains (verses 12, 20), and concluded by the final statement of "fear of the Lord" (verse 28): The refrains ask the question, "Where shall wisdom be found?" and the closing statement apparently gives the answer, "the fear of the Lord, that is wisdom." The tone of Job 28 is calm, quite different from the 'turgid emotions of Job's speeches' both in the preceding (27) and in succeeding chapters (29-31), with a distinctive content as well. This leads to the debate over whether Job is really the speaker of the whole chapter. This chapter may well be an interlude spoken by the narrator (or 'authorial comment') serving as a transition from the three rounds of dialogue (Job 3–27) to the three extended monologues by Job (Job 29–31), Elihu (Job 32–37), and God (Job 38–41). Several factors have been listed that cast doubt on Job being the speaker: Although Zophar covers similar themes in Job 11:7–12, only a few scholars regard this chapter as Zophar's 'missing third speech' due to, among other things, the absence of accusation. Some scholars suggest that it was spoken by Elihu, or that it belongs after Job 42:6, whereas others propose to remove it as inauthentic. The location of Job 28 within the structure of the whole book suggests that it is not an anticipation of the conclusion and theme of the book, so it cannot be "the high point of the book", because it still does not provide the answer for Job. This chapter serves the important literary function of preparing for what is to follow, that is, the possible conclusion of verse 28 is reframed by chapters 29–31, in which Job insists that the issues are not resolved, and finally leads to God's verdict. The achievements of humanity (28:1–12). Job 28:1-11 presents a vivid description of the ancient technology that was used in mining precious metals and gemstones. There was no mining activity in Israel (cf. 
1 Samuel 13:19-22), because its natural resources were limited, unlike in Egypt where extensive mining activity began around 2000 BC, or other parts of ancient Mesopotamia. Mining requires delving deep into dark places to produce stunning products (sapphire/lapis lazuli, gold) from rocks or dust; this principle can be applied to the search for wisdom, which "brings hidden things to light" (verse 11). [Job said:] "He breaks open a shaft away from the inhabitants;" "in places forgotten by feet" "they hang far away from men; and they totter."" Elusive Wisdom (28:13–20). The second stanza describes the limited ability of humans to master wisdom, as they cannot comprehend the value of 'wisdom and understanding' (verse 13), nor can they offer anything to gain them (verses 15–19). Human mining skills cannot be used to find these 'twin jewels of "wisdom and understanding"'. Thus, the whole stanza asserts that wisdom is neither fully attainable nor properly valued. [Job said:] "It cannot be valued in the gold of Ophir," "in precious onyx or sapphire." Source of wisdom (28:20–28). The key assertion of the last stanza is that wisdom is generally concealed from living creatures (verse 21); its location is unknown even when searched as far as the very edges of reality (verse 22). Only God knows the location and nature of wisdom, while also making it known to others (verse 27). The conclusion is that "wisdom and understanding" can only be acquired by fearing God and turning away from evil. [Job said:] "And he said to man," "‘Behold, the fear of the Lord, that is wisdom," "and to turn away from evil is understanding.’" References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71859947
7186232
5-demicube
Regular 5-polytope In five-dimensional geometry, a demipenteract or 5-demicube is a semiregular 5-polytope, constructed from a "5-hypercube" (penteract) with alternated vertices removed. It was discovered by Thorold Gosset. Since it was the only semiregular 5-polytope (made of more than one type of regular facets), he called it a 5-ic semi-regular. E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as HM5 for a 5-dimensional "half measure" polytope. Coxeter named this polytope 121 from its Coxeter diagram, which has branches of length 2, 1 and 1 with a ringed node on one of the short branches, and Schläfli symbol formula_0 or {3,32,1}. It exists in the k21 polytope family as 121 with the Gosset polytopes: 221, 321, and 421. The graph formed by the vertices and edges of the demipenteract is sometimes called the Clebsch graph, though that name sometimes refers to the folded cube graph of order five instead. Cartesian coordinates. Cartesian coordinates for the vertices of a demipenteract centered at the origin and edge length 2√2 are alternate halves of the penteract: (±1,±1,±1,±1,±1) with an odd number of plus signs. As a configuration. This configuration matrix represents the 5-demicube. The rows and columns correspond to vertices, edges, faces, cells and 4-faces. The diagonal numbers say how many of each element occur in the whole 5-demicube. The nondiagonal numbers say how many of the column's element occur in or at the row's element. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order by a subgroup order, obtained by removing one mirror at a time. Related polytopes. It is a part of a dimensional family of uniform polytopes called demihypercubes, formed by alternation of the hypercube family. There are 23 uniform 5-polytopes that can be constructed from the D5 symmetry of the demipenteract, 8 of which are unique to this family, and 15 are shared within the penteractic family. The 5-demicube is third in a dimensional series of semiregular polytopes. Each progressive uniform polytope has the previous polytope as its vertex figure. Thorold Gosset identified this series in 1900 as containing all regular polytope facets, containing all simplexes and orthoplexes (5-simplices and 5-orthoplexes in the case of the 5-demicube). In Coxeter's notation the 5-demicube is given the symbol 121.
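The coordinate description above can be checked with a few lines of code. The Python sketch below is an added illustration (not part of the article): it lists the 16 vertices of the penteract with an odd number of plus signs and counts the edges, which join vertices differing in exactly two coordinates (length 2√2).

```python
# Vertices and edges of the 5-demicube from the alternated penteract coordinates.
from itertools import product, combinations

vertices = [v for v in product([-1, 1], repeat=5) if v.count(1) % 2 == 1]
assert len(vertices) == 16

edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 2]
print(len(vertices), len(edges))   # 16 vertices, 80 edges
```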
[ { "math_id": 0, "text": "\\left\\{3 \\begin{array}{l}3, 3\\\\3\\end{array}\\right\\}" } ]
https://en.wikipedia.org/wiki?curid=7186232
7186253
Extended precision
Floating point number formats Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to "extended precision", arbitrary-precision arithmetic refers to implementations of much larger numeric types (with a storage count that usually is not a power of two) using special software (or, rarely, hardware). Extended precision implementations. There is a long history of extended floating-point formats reaching back nearly to the middle of the last century. Various manufacturers have used different formats for extended precision for different machines. In many cases the format of the extended precision is not quite the same as a scale-up of the ordinary single- and double-precision formats it is meant to extend. In a few cases the implementation was merely a software-based change in the floating-point data format, but in most cases extended precision was implemented in hardware, either built into the central processor itself, or more often, built into the hardware of an optional, attached processor called a "floating-point unit" (FPU) or "floating-point processor" (FPP), accessible to the CPU as a fast input / output device. IBM extended precision formats. The IBM 1130, sold in 1965, offered two floating-point formats: a 32-bit "standard precision" format and a 40-bit "extended precision" format. The standard precision format contains a 24-bit two's complement significand, while the extended precision format utilizes a 32-bit two's complement significand. The latter format makes full use of the CPU's 32-bit integer operations. The characteristic in both formats is an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations are performed by software, and double precision is not supported at all. The extended format occupies three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit "short" floating-point format and a 64-bit "long" floating-point format. The 360/85 and follow-on System/370 add support for a 128-bit "extended" format. These formats are still supported in the current design, where they are now called the "hexadecimal floating-point" (HFP) formats. Microsoft MBF extended precision format. The Microsoft BASIC port for the 6502 CPU, in adaptations such as Commodore BASIC, AppleSoft BASIC, KIM-1 BASIC or MicroTAN BASIC, has supported an extended 40-bit variant of the floating-point format "Microsoft Binary Format" (MBF) since 1977. IEEE 754 extended precision formats. The IEEE 754 floating-point standard recommends that implementations provide extended precision formats. The standard specifies the minimum requirements for an extended format but does not specify an encoding. The encoding is the implementor's choice. The IA32, x86-64, and Itanium processors support what is by far the most influential format on this standard, the Intel 80-bit (64 bit significand) "double extended" format, described in the next section. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors also support a 64-bit significand extended precision format (similar to the Intel format, although padded to a 96-bit format with 16 unused bits inserted between the exponent and significand fields; values with exponent zero and bit 63 one are treated as normalized values). 
The follow-on Coldfire processors do not support this 96-bit extended precision format. The FPA10 math coprocessor for early ARM processors also supports a 64-bit significand extended precision format (similar to the Intel format although padded to a 96-bit format with 16 zero bits inserted between the sign and the exponent fields), but without correct rounding. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754-1985 double extended format, as does the IEEE 754 128-bit binary format. x86 extended precision format. The x86 extended precision format is an 80 bit format first implemented in the Intel 8087 math coprocessor and is supported by all processors that are based on the x86 design that incorporate a floating-point unit (FPU). The Intel 8087 was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32 bit "single precision" format and a 64 bit "double precision" format for encoding and interchanging floating-point numbers. The extended format was designed not to store data at higher precision, but rather to allow for the computation of temporary double results more reliably and accurately by minimizing overflow and roundoff errors in intermediate calculations. All the floating-point registers in the 8087 hold this format, and it automatically converts numbers to this format when loading registers from memory and also converts results back to the more conventional formats when storing the registers back into memory. To enable intermediate subexpression results to be saved in extended precision scratch variables and carried across programming language statements, and to allow otherwise interrupted calculations to resume where they were interrupted, it provides instructions which transfer values between these internal registers and memory without performing any conversion, which therefore enables access to the extended format for calculations – also reviving the issue of the accuracy of functions of such numbers, but at a higher precision. The floating-point units (FPU) on all subsequent x86 processors have supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format. William Kahan, a primary designer of the x87 arithmetic and initial IEEE 754 standard proposal, notes on the development of the x87 floating point: "An extended format as wide as we dared (80 bits) was included to serve the same support role as the 13 decimal internal format serves in Hewlett-Packard's 10 decimal calculators." Moreover, Kahan notes that 64 bits was the widest significand across which carry propagation could be done without increasing the cycle time on the 8087, and that the x87 extended precision was designed to be extensible to higher precision in future processors: "For now the 10 byte extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16 byte format. ... That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed." This 80 bit format uses one bit for the sign of the significand, 15 bits for the exponent field (i.e. the same range as the 128 bit quadruple precision IEEE 754 format) and 64 bits for the significand. 
The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the actual power of two. An exponent field value of 32767 (all fifteen bits 1) is reserved so as to enable the representation of special states such as infinity and Not a Number. If the exponent field is zero, the value is a denormal number and the exponent of 2 is −16382. In this format, "s" is the value of the sign bit (0 means positive, 1 means negative), "e" is the value of the exponent field interpreted as a positive integer, and "m" is the significand interpreted as a positive binary number, where the binary point is located between bits 63 and 62. The "m" field is the combination of the integer and fraction parts of the significand. In contrast to the single and double-precision formats, this format does not utilize an implicit / hidden bit. Rather, bit 63 contains the integer part of the significand and bits 62–0 hold the fractional part. Bit 63 will be 1 on all normalized numbers. There were several advantages to this design when the 8087 was being developed. Introduction to use. The 80 bit floating-point format was widely available by 1984, after the development of C, Fortran and similar computer languages, which initially offered only the common 32 and 64 bit floating-point sizes. On the x86 design most C compilers now support 80 bit extended precision via the long double type, and this was specified in the C99 / C11 standards (IEC 60559 floating-point arithmetic (Annex F)). Compilers on x86 for other languages often support extended precision as well, sometimes via nonstandard extensions: for example, Turbo Pascal offers an Extended type, and several Fortran compilers have a REAL*10 type (analogous to REAL*4 and REAL*8). Such compilers also typically include extended-precision mathematical subroutines, such as square root and trigonometric functions, in their standard libraries. Working range. The 80 bit floating-point format has a range (including subnormals) from approximately 3.65 × 10−4951 to 1.18 × 10+4932. This format is usually described as giving approximately eighteen significant digits of precision (the floor of the minimum guaranteed precision). The use of decimal when talking about binary is unfortunate because most decimal fractions are recurring sequences in binary, just as 1/3 is in decimal. Thus, a value such as 10.15 is represented in binary as equivalent to 10.1499996185 etc. in decimal for single precision but 10.15000000000000035527 etc. in double precision: inter-conversion will involve approximation, except for those few decimal fractions that represent an exact binary value, such as 0.625. For the 80 bit format, the decimal string is 10.1499999999999999996530553 etc. The last 9 is the eighteenth fractional digit and thus the twentieth significant digit of the string. Bounds on conversion between decimal and binary for the 80 bit format can be given as follows: if a decimal string with at most 18 significant digits is correctly rounded to an 80 bit IEEE 754 binary floating-point value (as on input) then converted back to the same number of significant decimal digits (as for output), then the final string will exactly match the original; while, conversely, if an 80 bit IEEE 754 binary floating-point value is correctly converted and (nearest) rounded to a decimal string with at least 21 significant decimal digits then converted back to binary format it will exactly match the original.
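To make the working range and round-trip bounds above concrete, here is a small C sketch (an editorial illustration, not part of the original article). It assumes a platform where the compiler maps long double to the 80 bit x87 format, such as GCC or Clang on x86; on other targets (for example MSVC, where long double is the 64 bit format) the output will differ.

/* Editorial sketch, assuming "long double" is the 80 bit x87 format
 * (e.g. GCC/Clang on x86): print 10.15 at increasing precisions and
 * check the 18-significant-digit round trip described above. */
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);   /* 64 for the x87 format */

    printf("double      10.15 = %.25f\n",  10.15);
    printf("long double 10.15 = %.30Lf\n", 10.15L);

    /* Decimal (18 significant digits) -> long double -> decimal (18 digits)
     * should reproduce the original string exactly. */
    const char *dec = "10.1499999999999999";         /* 18 significant digits */
    long double x = strtold(dec, NULL);
    char back[64];
    snprintf(back, sizeof back, "%.18Lg", x);
    printf("round trip: %s -> %s (%s)\n", dec, back,
           strcmp(dec, back) == 0 ? "match" : "mismatch");
    return 0;
}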
These approximations are particularly troublesome when specifying the best value for constants in formulae to high precision, as might be calculated via arbitrary-precision arithmetic. Need for the 80 bit format. A notable example of the need for a minimum of 64 bits of precision in the significand of the extended precision format is the need to avoid precision loss when performing exponentiation on double-precision values. The x86 floating-point units do not provide an instruction that directly performs exponentiation. Instead, they provide a set of instructions that a program can use in sequence to perform exponentiation using the equation: formula_0 In order to avoid precision loss, the intermediate results "log2("x")" and ""y"·log2("x")" must be computed with much higher precision, because effectively both the exponent and the significand fields of x must fit into the significand field of the intermediate result. Subsequently, the significand field of the intermediate result is split between the exponent and significand fields of the final result when 2 raised to the intermediate result is calculated. The following discussion describes this requirement in more detail. With a little unpacking, an IEEE 754 double-precision value can be represented as: formula_1 where s is the sign of the exponent (either 0 or 1), E is the unbiased exponent, which is an integer that ranges from 0 to 1023, and M is the significand, which is a 53 bit value that falls in the range 1.0 ≤ M < 2.0. Negative numbers and zero can be ignored because the logarithm of these values is undefined. For purposes of this discussion M does not have 53 bits of precision because it is constrained to be greater than or equal to one, i.e. the hidden bit does not count towards the precision (note that in situations where M is less than 1, the value is actually a denormal and therefore may have already suffered precision loss; this situation is beyond the scope of this article). Taking the log of this representation of a double-precision number and simplifying results in the following: formula_2 This result demonstrates that when taking the base-2 logarithm of a number, the sign of the exponent of the original value becomes the sign of the logarithm, the exponent of the original value becomes the integer part of the significand of the logarithm, and the significand of the original value is transformed into the fractional part of the significand of the logarithm. Because E is an integer in the range 0 to 1023, up to 10 bits to the left of the radix point are needed to represent the integer part of the logarithm. Because M falls in the range 1.0 ≤ M < 2.0, the value of log2(M) will fall in the range 0 ≤ log2(M) < 1.0, so at least 52 bits are needed to the right of the radix point to represent the fractional part of the logarithm. Combining 10 bits to the left of the radix point with 52 bits to the right of the radix point means that the significand part of the logarithm must be computed to at least 62 bits of precision. In practice values of M less than formula_3 require 53 bits to the right of the radix point and values of M less than formula_4 require 54 bits to the right of the radix point to avoid precision loss. Balancing this requirement for added precision to the right of the radix point, exponents less than 512 only require 9 bits to the left of the radix point and exponents less than 256 require only 8 bits to the left of the radix point. The final part of the exponentiation calculation is computing 2 raised to the intermediate result. The "intermediate result" consists of an integer part "I" added to a fractional part "F". 
If the intermediate result is negative then a slight adjustment is needed to get a positive fractional part because both "I" and "F" are negative numbers. For positive intermediate results: formula_5 For negative intermediate results: formula_6 Thus the integer part of the intermediate result ("I" or "I" − 1) plus a bias becomes the exponent of the final result, and the transformed positive fractional part of the intermediate result (2 raised to the power "F", or to "F" + 1 for negative intermediate results) becomes the significand of the final result. In order to supply 52 bits of precision to the final result, the positive fractional part must be maintained to at least 52 bits. In conclusion, the exact number of bits of precision needed in the significand of the intermediate result is somewhat data dependent but 64 bits is sufficient to avoid precision loss in the vast majority of exponentiation computations involving double-precision numbers. The number of bits needed for the exponent of the extended precision format follows from the requirement that the product of two double-precision numbers should not overflow when computed using the extended format. The largest possible exponent of a double-precision value is 1023 so the exponent of the largest possible product of two double-precision numbers is 2047 (an 11 bit value). Adding in a bias to account for negative exponents means that the exponent field must be at least 12 bits wide. Combining these requirements (1 bit for the sign, 12 bits for the biased exponent, and 64 bits for the significand) means that the extended precision format would need at least 77 bits. Engineering considerations resulted in the final definition of the 80 bit format (in particular, the IEEE 754 standard requires the exponent range of an extended precision format to match that of the next largest, quad, precision format, which is 15 bits). Iterative refinement schemes are another example of calculations that benefit from extended precision arithmetic; they are used to indirectly clean out errors accumulated in the direct solution during the typically very large number of calculations made for numerical linear algebra. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " x^y = 2^{\\ y\\ \\cdot\\ \\log_2( x ) } " }, { "math_id": 1, "text": "2^{(-1)^s\\ \\cdot\\ E}\\ \\cdot\\ M\\ " }, { "math_id": 2, "text": " \\log_2(2^{(-1)^s\\ \\cdot\\ E}\\ \\cdot\\,M) = (-1)^s\\ \\cdot\\ E\\ \\cdot\\ \\log_2( 2 )\\ +\\ \\log_2(M) = \\pm\\ E\\ +\\ \\log_2( M )\\ " }, { "math_id": 3, "text": "\\ \\sqrt{ 2\\ }\\ " }, { "math_id": 4, "text": "\\ \\sqrt[4]{2\\ }\\ " }, { "math_id": 5, "text": "\\ 2^ \\mathsf{ intermediate\\ result } = 2^{I + F} = 2^I\\ 2^F\\ " }, { "math_id": 6, "text": "\\ 2^{\\mathrm{intermediate\\ result}} = 2^{I+F} = 2^{I\\ +\\ (1-1)\\ +\\ F} = 2^{(I - 1)\\ +\\ (F + 1)} = 2^{ I - 1 }\\ 2^{ F + 1 }\\ " } ]
https://en.wikipedia.org/wiki?curid=7186253
71867222
Job 29
Job 29 is the 29th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –. Text. The original text is written in the Hebrew language. This chapter is divided into 25 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the book is as follows: Within the structure, chapter 29 is grouped into the Dialogue section with the following outline: The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. At the end of the Dialogue, Job sums up his speech in a comprehensive review (chapters 29–31), with Job 29 describing Job's former prosperity, Job 30 focusing on Job's current suffering, and Job 31 outlining Job's final defense. The whole part is framed by Job's longing for a restored relationship with God (Job 29:2) and the legal challenge to God (Job 31:35–37). Chapter 29 begins with the description of Job's former experience of his relationship with God in his family and personal circumstances (verses 2–6), then his former honorable place in the community (verses 7–10) as he actively worked for justice (verses 11–17), followed by the section comprising verses 18–20 that shows Job's expectation of ongoing peace, and then closed by a summary of Job's former prominence as a respected leader in the community. Job's former blessings, honor and public role (29:1–20). The section starts with Job reminiscing about "the day when God watched over me", which he puts before his own prosperity (verse 2), before his full family (verse 5) or abundant materials (verse 6), so it is Job's friendship with God that Job desperately misses. Before his suffering, Job assumed a respected public profile (verse 7), with people young and old acknowledging his wisdom (verse 8), such that even "princes" and "nobles" stopped speaking as soon as Job started to speak (verses 9–10). There is a list of Job's just actions in the community, especially towards the poor and marginalized (verses 12–16), depicting him as the wise ruler of Proverbs (Proverbs 28:4–6, 15–16; 31:4–5). Job describes his expectation in his former life of a peaceful and fulfilling situation (verses 18–20). [Job said:] "when my steps were washed with butter," "and the rock poured out for me streams of oil!" Job's prominence in the community (29:21–25). This section rounds off Job's summary of his former life, picking up some concepts from verses 7–10, mainly about his position in the community. Job's advice was so respected that it usually became the conclusion that ended a discussion, described as "final and life-giving". However, Job also involved himself in the lives of others, acting with genuine care for the people. [Job said:] "I chose their way and sat as chief," "and I lived like a king among his troops," "like one who comforts mourners.’" Verse 25. 
Job's final recollection of his past is how he was deeply loved and well respected, just like a king who comforts mourning people, in stark contrast to how his friends treat him now that he is the one mourning. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=71867222
71867310
Safety and liveness properties
Properties of an execution of a computer program—particularly for concurrent and distributed systems—have long been formulated by giving "safety properties" ("bad things don't happen") and "liveness properties" ("good things do happen"). A program is totally correct with respect to a precondition formula_0 and postcondition formula_1 if any execution started in a state satisfying formula_0 terminates in a state satisfying formula_1. Total correctness is a conjunction of a safety property and a liveness property: the safety property proscribes the "bad thing" of an execution of a program formula_2 that starts in a state satisfying formula_0 terminating in a state that does not satisfy formula_1 (this partial correctness is customarily written as the Hoare triple formula_3), and the liveness property is the "good thing" that every execution of formula_2 started in a state satisfying formula_0 eventually terminates. Note that a "bad thing" is discrete, since it happens at a particular place during execution. A "good thing" need not be discrete, but the liveness property of termination is discrete. Formal definitions that were ultimately proposed for safety properties and liveness properties demonstrated that this decomposition is not only intuitively appealing but is also complete: all properties of an execution are a conjunction of safety and liveness properties. Moreover, undertaking the decomposition can be helpful, because the formal definitions enable a proof that different methods must be used for verifying safety properties versus for verifying liveness properties. Safety. A safety property proscribes discrete "bad things" from occurring during an execution. A safety property thus characterizes what is permitted by stating what is prohibited. The requirement that the "bad thing" be discrete means that a "bad thing" occurring during execution necessarily occurs at some identifiable point. Examples of a discrete "bad thing" that could be used to define a safety property include a deadlock and two processes executing in their critical sections at the same time. An execution of a program can be described formally by giving the infinite sequence of program states that results as execution proceeds, where the last state for a terminating program is repeated infinitely. For a program of interest, let formula_4 denote the set of possible program states, formula_5 denote the set of finite sequences of program states, and formula_6 denote the set of infinite sequences of program states. The relation formula_7 holds for sequences formula_8 and formula_9 iff formula_8 is a prefix of formula_9 or formula_8 equals formula_9. A property of a program is the set of allowed executions. The essential characteristic of a safety property formula_10 is: If some execution formula_8 does not satisfy formula_10 then the defining "bad thing" for that safety property occurs at some point in formula_8. Notice that after such a "bad thing", if further execution results in an execution formula_11, then formula_11 also does not satisfy formula_10, since the "bad thing" in formula_8 also occurs in formula_11. We take this inference about the irremediability of "bad things" to be the defining characteristic for formula_10 to be a safety property. Formalizing this in predicate logic gives a formal definition for formula_10 being a safety property. formula_12 This formal definition for safety properties implies that if an execution formula_8 satisfies a safety property formula_10 then every prefix of formula_8 (with the last state repeated) also satisfies formula_10. Liveness. A liveness property prescribes "good things" for every execution or, equivalently, describes something that must happen during an execution. The "good thing" need not be discrete—it might involve an infinite number of steps. Examples of a "good thing" used to define a liveness property include the program eventually terminating, every request for service eventually being granted, and every process being scheduled infinitely often. The "good thing" in the first example is discrete but not in the others. 
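As a small illustration of the asymmetry between the two kinds of property, the following C sketch (an editorial addition, not drawn from the cited sources; the event encoding and function name are illustrative) monitors one finite execution prefix for the mutual-exclusion safety property, whose defining "bad thing" is both processes being inside their critical sections at once. A violation is detected at an identifiable step and no continuation of the prefix can repair it, whereas no such finite check could refute a liveness property like termination, because every finite prefix still has extensions in which the "good thing" happens.

/* Editorial sketch: a monitor for the mutual-exclusion safety property
 * over one finite execution prefix.  The "bad thing" -- both processes
 * inside their critical sections at once -- is discrete, so a violation
 * shows up at an identifiable step. */
#include <stdio.h>

enum event { ENTER0, EXIT0, ENTER1, EXIT1 };   /* illustrative encoding */

/* Returns the 1-based index of the first step at which both processes
 * are in their critical sections, or 0 if this prefix never violates
 * the property. */
static int first_violation(const enum event *trace, int len) {
    int in0 = 0, in1 = 0;
    for (int i = 0; i < len; ++i) {
        switch (trace[i]) {
        case ENTER0: in0 = 1; break;
        case EXIT0:  in0 = 0; break;
        case ENTER1: in1 = 1; break;
        case EXIT1:  in1 = 0; break;
        }
        if (in0 && in1)
            return i + 1;   /* bad thing occurred; it cannot be undone */
    }
    return 0;               /* prefix is still safe */
}

int main(void) {
    enum event ok[]  = { ENTER0, EXIT0, ENTER1, EXIT1 };
    enum event bad[] = { ENTER0, ENTER1, EXIT0, EXIT1 };

    printf("ok trace : violation at step %d\n",
           first_violation(ok, (int)(sizeof ok / sizeof ok[0])));    /* 0 */
    printf("bad trace: violation at step %d\n",
           first_violation(bad, (int)(sizeof bad / sizeof bad[0]))); /* 2 */
    return 0;
}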
Producing an answer within a specified real-time bound is a safety property rather than a liveness property. This is because a discrete "bad thing" is being proscribed: a partial execution that reaches a state where the answer still has not been produced and the value of the clock (a state variable) violates the bound. Deadlock freedom is a safety property: the "bad thing" is a deadlock (which is discrete). Most of the time, knowing that a program eventually does some "good thing" is not satisfactory; we want to know that the program performs the "good thing" within some number of steps or before some deadline. A property that gives a specific bound to the "good thing" is a safety property (as noted above), whereas the weaker property that merely asserts the bound exists is a liveness property. Proving such a liveness property is likely to be easier than proving the tighter safety property because proving the liveness property doesn't require the kind of detailed accounting that is required for proving the safety property. To differ from a safety property, a liveness property formula_13 cannot rule out any finite prefix formula_14 of an execution (since such an formula_15 would be a "bad thing" and, thus, would be defining a safety property). That leads to defining a liveness property formula_13 to be a property that does not rule out any finite prefix. formula_16 This definition does not restrict a "good thing" to being discrete—the "good thing" can involve all of formula_9, which is an infinite-length execution. History. Lamport used the terms "safety property" and "liveness property" in his 1977 paper on proving the correctness of multiprocess (concurrent) programs. He borrowed the terms from Petri net theory, which was using the terms "liveness" and "boundedness" for describing how the assignment of a Petri net's "tokens" to its "places" could evolve; Petri net "safety" was a specific form of "boundedness". Lamport subsequently developed a formal definition of safety for a NATO short course on distributed systems in Munich. It assumed that properties are invariant under stuttering. The formal definition of safety given above appears in a paper by Alpern and Schneider; the connection between the two formalizations of safety properties appears in a paper by Alpern, Demers, and Schneider. Alpern and Schneider give the formal definition for liveness, accompanied by a proof that all properties can be constructed using safety properties and liveness properties. That proof was inspired by Gordon Plotkin's insight that safety properties correspond to closed sets and liveness properties correspond to dense sets in a natural topology on the set formula_6 of infinite sequences of program states. Subsequently, Alpern and Schneider not only gave a Büchi automaton characterization for the formal definitions of safety properties and liveness properties but used these automata formulations to show that verification of safety properties would require an invariant and verification of liveness properties would require a well-foundedness argument. The correspondence between the kind of property (safety vs liveness) and the kind of proof (invariance vs well-foundedness) was a strong argument that the decomposition of properties into safety and liveness (as opposed to some other partitioning) was a useful one—knowing the type of property to be proved dictated the type of proof that is required. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "\\{P\\} C \\{Q\\}" }, { "math_id": 4, "text": "S" }, { "math_id": 5, "text": "S^*" }, { "math_id": 6, "text": "S^\\omega" }, { "math_id": 7, "text": "\\sigma\\le\\tau" }, { "math_id": 8, "text": "\\sigma" }, { "math_id": 9, "text": "\\tau" }, { "math_id": 10, "text": "SP" }, { "math_id": 11, "text": "\\sigma^\\prime" }, { "math_id": 12, "text": "\\forall \\sigma\\in S^\\omega: \\sigma \\notin SP\\implies (\\exists \\beta \\le \\sigma: (\\forall \\tau \\in S^\\omega: \\beta\\tau \\notin SP))" }, { "math_id": 13, "text": "LP" }, { "math_id": 14, "text": "\\alpha \\in S^*" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "\\forall \\alpha \\in S^*: (\\exists \\tau \\in S^\\omega: \\alpha\\tau \\in LP)" } ]
https://en.wikipedia.org/wiki?curid=71867310