5993806
Quantum relative entropy
In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states. It is the quantum mechanical analog of relative entropy. Motivation. For simplicity, it will be assumed that all objects in the article are finite-dimensional. We first discuss the classical case. Suppose the probabilities of a finite sequence of events are given by the probability distribution "P" = {"p"1..."p""n"}, but somehow we mistakenly assumed it to be "Q" = {"q"1..."q""n"}. For instance, we can mistake an unfair coin for a fair one. According to this erroneous assumption, our uncertainty about the "j"-th event, or equivalently, the amount of information provided after observing the "j"-th event, is formula_0 The (assumed) average uncertainty of all possible events is then formula_1 On the other hand, the Shannon entropy of the probability distribution "p", defined by formula_2 is the real amount of uncertainty before observation. Therefore, the difference between these two quantities formula_3 is a measure of the distinguishability of the two probability distributions "p" and "q". This is precisely the classical relative entropy, or Kullback–Leibler divergence: formula_4 Note that the convention formula_5 is assumed throughout, so that events of zero probability contribute nothing to these sums. Definition. As with many other objects in quantum information theory, quantum relative entropy is defined by extending the classical definition from probability distributions to density matrices. Let "ρ" be a density matrix. The von Neumann entropy of "ρ", which is the quantum mechanical analog of the Shannon entropy, is given by formula_6 For two density matrices "ρ" and "σ", the quantum relative entropy of "ρ" with respect to "σ" is defined by formula_7 We see that, when the states are classically related, i.e. "ρσ" = "σρ", the definition coincides with the classical case, in the sense that if formula_8 and formula_9 with formula_10 and formula_11 (because formula_12 and formula_13 commute, they are simultaneously diagonalizable), then formula_14 is just the ordinary Kullback–Leibler divergence of the probability vector formula_15 with respect to the probability vector formula_16. Non-finite (divergent) relative entropy. In general, the "support" of a matrix "M" is the orthogonal complement of its kernel, i.e. formula_17. When considering the quantum relative entropy, we assume the convention that −"s" · log 0 = ∞ for any "s" > 0. This leads to the definition that formula_18 when formula_19 This can be interpreted in the following way. Informally, the quantum relative entropy is a measure of our ability to distinguish two quantum states, where larger values indicate states that are more different. Orthogonal states are as different as two quantum states can be. This is reflected by non-finite quantum relative entropy for orthogonal quantum states. Following the argument given in the Motivation section, if we erroneously assume the state formula_12 has support in formula_20, this is an error impossible to recover from. However, one should be careful not to conclude that the divergence of the quantum relative entropy formula_21 implies that the states formula_12 and formula_13 are orthogonal or even very different by other measures. Specifically, formula_21 can diverge when formula_12 and formula_13 differ by a "vanishingly small amount" as measured by some norm. For example, let formula_13 have the diagonal representation formula_22 with formula_23 for formula_24 and formula_25 for formula_26 where formula_27 is an orthonormal set. 
The kernel of formula_13 is the space spanned by the set formula_28. Next let formula_29 for a small positive number formula_30. As formula_12 has support (namely the state formula_31) in the kernel of formula_13, formula_21 is divergent even though the trace norm of the difference formula_32 is formula_33 . This means that difference between formula_12 and formula_13 as measured by the trace norm is vanishingly small as formula_34 even though formula_21 is divergent (i.e. infinite). This property of the quantum relative entropy represents a serious shortcoming if not treated with care. Non-negativity of relative entropy. Corresponding classical statement. For the classical Kullback–Leibler divergence, it can be shown that formula_35 and the equality holds if and only if "P" = "Q". Colloquially, this means that the uncertainty calculated using erroneous assumptions is always greater than the real amount of uncertainty. To show the inequality, we rewrite formula_36 Notice that log is a concave function. Therefore -log is convex. Applying Jensen's inequality, we obtain formula_37 Jensen's inequality also states that equality holds if and only if, for all "i", "qi" = (Σ"qj") "pi", i.e. "p" = "q". The result. Klein's inequality states that the quantum relative entropy formula_38 is non-negative in general. It is zero if and only if "ρ" = "σ". Proof Let "ρ" and "σ" have spectral decompositions formula_39 So formula_40 Direct calculation gives formula_41 formula_42 formula_43 where "Pi j" = |"vi*wj"|2. Since the matrix ("Pi j")"i j" is a doubly stochastic matrix and -log is a convex function, the above expression is formula_44 Define "r"i = Σ"j""qj Pi j". Then {"r"i} is a probability distribution. From the non-negativity of classical relative entropy, we have formula_45 The second part of the claim follows from the fact that, since -log is strictly convex, equality is achieved in formula_46 if and only if ("Pi j") is a permutation matrix, which implies "ρ" = "σ", after a suitable labeling of the eigenvectors {"vi"} and {"wi"}. Joint convexity of relative entropy. The relative entropy is jointly convex. For formula_47 and states formula_48 we have formula_49 Monotonicity of relative entropy. The relative entropy decreases monotonically under completely positive trace preserving (CPTP) operations formula_50 on density matrices, formula_51. This inequality is called Monotonicity of quantum relative entropy and was first proved by Lindblad. An entanglement measure. Let a composite quantum system have state space formula_52 and "ρ" be a density matrix acting on "H". The relative entropy of entanglement of "ρ" is defined by formula_53 where the minimum is taken over the family of separable states. A physical interpretation of the quantity is the optimal distinguishability of the state "ρ" from separable states. Clearly, when "ρ" is not entangled formula_54 by Klein's inequality. Relation to other quantum information quantities. One reason the quantum relative entropy is useful is that several other important quantum information quantities are special cases of it. Often, theorems are stated in terms of the quantum relative entropy, which lead to immediate corollaries concerning the other quantities. Below, we list some of these relations. Let "ρ"AB be the joint state of a bipartite system with subsystem "A" of dimension "n"A and "B" of dimension "n"B. Let "ρ"A, "ρ"B be the respective reduced states, and "I"A, "I"B the respective identities. The maximally mixed states are "I"A/"n"A and "I"B/"n"B. 
Then it is possible to show with direct computation that formula_55 formula_56 formula_57 where "I"("A":"B") is the quantum mutual information and "S"("B"|"A") is the quantum conditional entropy. References. <templatestyles src="Reflist/styles.css" />
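The definitions above translate directly into a few lines of linear algebra. The following sketch (ours, not part of the article; NumPy is assumed, and the function names and the 1e-12 tolerance are arbitrary choices) evaluates formula_7 from the eigendecompositions of the two density matrices, returns infinity when the support of formula_12 meets the kernel of formula_13, and checks numerically that the relative entropy to the product of the reduced states reproduces the quantum mutual information for a random two-qubit state.

```python
# Minimal sketch (assumptions noted above): quantum relative entropy via
# eigendecomposition, S(rho||sigma) = sum_i p_i (log p_i - sum_j P_ij log q_j),
# plus a numerical check of S(rho_AB || rho_A (x) rho_B) = S(A) + S(B) - S(AB).
import numpy as np

def random_density_matrix(dim, rng):
    """Random full-rank density matrix: A A^dagger normalized to unit trace."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log rho, with the convention 0 log 0 = 0."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

def relative_entropy(rho, sigma):
    """S(rho||sigma); returns inf if rho has support on the kernel of sigma."""
    w_r, v_r = np.linalg.eigh(rho)
    w_s, v_s = np.linalg.eigh(sigma)
    s = 0.0
    for pi, vi in zip(w_r, v_r.T):
        if pi < 1e-12:
            continue                      # 0 log 0 = 0
        # P_ij = |<v_i|w_j>|^2, the doubly stochastic matrix from the proof above
        overlaps = np.abs(v_s.conj().T @ vi) ** 2
        if np.any((overlaps > 1e-12) & (w_s < 1e-12)):
            return float("inf")           # supp(rho) meets ker(sigma)
        s += pi * (np.log(pi) - np.dot(overlaps, np.log(np.clip(w_s, 1e-300, None))))
    return float(s)

def partial_trace(rho_ab, d_a, d_b, keep):
    """Reduced state of subsystem A (keep=0) or B (keep=1)."""
    r = rho_ab.reshape(d_a, d_b, d_a, d_b)
    return np.einsum("ijkj->ik", r) if keep == 0 else np.einsum("ijil->jl", r)

rng = np.random.default_rng(0)
rho_ab = random_density_matrix(4, rng)                 # two-qubit state
rho_a = partial_trace(rho_ab, 2, 2, 0)
rho_b = partial_trace(rho_ab, 2, 2, 1)

lhs = relative_entropy(rho_ab, np.kron(rho_a, rho_b))
rhs = von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)
print(lhs, rhs)   # the two numbers agree up to numerical error
```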
[ { "math_id": 0, "text": "\\; - \\log q_j." }, { "math_id": 1, "text": "\\; - \\sum_j p_j \\log q_j." }, { "math_id": 2, "text": "\\; - \\sum_j p_j \\log p_j," }, { "math_id": 3, "text": "\\; - \\sum_j p_j \\log q_j - \\left(- \\sum_j p_j \\log p_j\\right) = \\sum_j p_j \\log p_j - \\sum_j p_j \\log q_j" }, { "math_id": 4, "text": "D_{\\mathrm{KL}}(P\\|Q) = \\sum_j p_j \\log \\frac{p_j}{q_j} \\!." }, { "math_id": 5, "text": "\\lim_{x \\searrow 0} x \\log(x) = 0" }, { "math_id": 6, "text": "S(\\rho) = - \\operatorname{Tr} \\rho \\log \\rho." }, { "math_id": 7, "text": "\nS(\\rho \\| \\sigma) = - \\operatorname{Tr} \\rho \\log \\sigma - S(\\rho) = \\operatorname{Tr} \\rho \\log \\rho - \\operatorname{Tr} \\rho \\log \\sigma = \\operatorname{Tr}\\rho (\\log \\rho - \\log \\sigma).\n" }, { "math_id": 8, "text": "\\rho = S D_1 S^{\\mathsf{T}}" }, { "math_id": 9, "text": "\\sigma = S D_2 S^{\\mathsf{T}}" }, { "math_id": 10, "text": "D_1 = \\text{diag}(\\lambda_1, \\ldots, \\lambda_n)" }, { "math_id": 11, "text": "D_2 = \\text{diag}(\\mu_1, \\ldots, \\mu_n)" }, { "math_id": 12, "text": "\\rho" }, { "math_id": 13, "text": "\\sigma" }, { "math_id": 14, "text": "S(\\rho \\| \\sigma) = \\sum_{j = 1}^{n} \\lambda_j \\ln\\left(\\frac{\\lambda_j}{\\mu_j}\\right)" }, { "math_id": 15, "text": "(\\lambda_1, \\ldots, \\lambda_n)" }, { "math_id": 16, "text": "(\\mu_1, \\ldots, \\mu_n)" }, { "math_id": 17, "text": "\\text{supp}(M) = \\text{ker}(M)^\\perp " }, { "math_id": 18, "text": "S(\\rho \\| \\sigma) = \\infty" }, { "math_id": 19, "text": "\\text{supp}(\\rho) \\cap \\text{ker}(\\sigma) \\neq \\{ 0 \\}." }, { "math_id": 20, "text": "\\text{ker}(\\sigma)" }, { "math_id": 21, "text": "S(\\rho\\|\\sigma)" }, { "math_id": 22, "text": "\\sigma=\\sum_{n}\\lambda_n|f_n\\rangle\\langle f_n|" }, { "math_id": 23, "text": "\\lambda_n>0" }, { "math_id": 24, "text": "n=0,1,2,\\ldots" }, { "math_id": 25, "text": "\\lambda_n=0" }, { "math_id": 26, "text": "n=-1,-2,\\ldots" }, { "math_id": 27, "text": "\\{|f_n\\rangle, n\\in \\Z\\}" }, { "math_id": 28, "text": "\\{|f_{n}\\rangle, n=-1,-2,\\ldots\\}" }, { "math_id": 29, "text": " \\rho=\\sigma+\\epsilon|f_{-1}\\rangle\\langle f_{-1}| - \\epsilon|f_1\\rangle\\langle f_1|" }, { "math_id": 30, "text": "\\epsilon" }, { "math_id": 31, "text": "|f_{-1}\\rangle" }, { "math_id": 32, "text": "(\\rho-\\sigma)" }, { "math_id": 33, "text": "2\\epsilon" }, { "math_id": 34, "text": "\\epsilon\\to 0" }, { "math_id": 35, "text": "D_{\\mathrm{KL}}(P\\|Q) = \\sum_j p_j \\log \\frac{p_j}{q_j} \\geq 0," }, { "math_id": 36, "text": "D_{\\mathrm{KL}}(P\\|Q) = \\sum_j p_j \\log \\frac{p_j}{q_j} = \\sum_j (- \\log \\frac{q_j}{p_j})(p_j)." }, { "math_id": 37, "text": "\nD_{\\mathrm{KL}}(P\\|Q) = \\sum_j (- \\log \\frac{q_j}{p_j})(p_j) \\geq - \\log ( \\sum_j \\frac{q_j}{p_j} p_j ) = 0.\n" }, { "math_id": 38, "text": "\nS(\\rho \\| \\sigma) = \\operatorname{Tr}\\rho (\\log \\rho - \\log \\sigma).\n" }, { "math_id": 39, "text": "\\rho = \\sum_i p_i v_i v_i ^* \\; , \\; \\sigma = \\sum_i q_i w_i w_i ^*." }, { "math_id": 40, "text": "\\log \\rho = \\sum_i (\\log p_i) v_i v_i ^* \\; , \\; \\log \\sigma = \\sum_i (\\log q_i)w_i w_i ^*." 
}, { "math_id": 41, "text": "S(\\rho \\| \\sigma)= \\sum_k p_k \\log p_k - \\sum_{i,j} (p_i \\log q_j) | v_i ^* w_j |^2" }, { "math_id": 42, "text": "\\qquad \\quad \\; = \\sum_i p_i ( \\log p_i - \\sum_j \\log q_j | v_i ^* w_j |^2)" }, { "math_id": 43, "text": "\\qquad \\quad \\; = \\sum_i p_i (\\log p_i - \\sum_j (\\log q_j )P_{ij})," }, { "math_id": 44, "text": "\\geq \\sum_i p_i (\\log p_i - \\log (\\sum_j q_j P_{ij}))." }, { "math_id": 45, "text": "S(\\rho \\| \\sigma) \\geq \\sum_i p_i \\log \\frac{p_i}{r_i} \\geq 0." }, { "math_id": 46, "text": "\n\\sum_i p_i (\\log p_i - \\sum_j (\\log q_j )P_{ij}) \\geq \\sum_i p_i (\\log p_i - \\log (\\sum_j q_j P_{ij}))\n" }, { "math_id": 47, "text": "0\\leq \\lambda\\leq 1" }, { "math_id": 48, "text": "\\rho_{1(2)}, \\sigma_{1(2)}" }, { "math_id": 49, "text": "D(\\lambda\\rho_1+(1-\\lambda)\\rho_2\\|\\lambda\\sigma_1+(1-\\lambda)\\sigma_2)\\leq \\lambda D(\\rho_1\\|\\sigma_1)+(1-\\lambda)D(\\rho_2\\|\\sigma_2)" }, { "math_id": 50, "text": "\\mathcal{N}" }, { "math_id": 51, "text": "S(\\mathcal{N}(\\rho)\\|\\mathcal{N}(\\sigma))\\leq S(\\rho\\|\\sigma)" }, { "math_id": 52, "text": "H = \\otimes _k H_k" }, { "math_id": 53, "text": "\\; D_{\\mathrm{REE}} (\\rho) = \\min_{\\sigma} S(\\rho \\| \\sigma)" }, { "math_id": 54, "text": "\\; D_{\\mathrm{REE}} (\\rho) = 0" }, { "math_id": 55, "text": "S(\\rho_{A} || I_{A}/n_A) = \\mathrm{log}(n_A)- S(\\rho_{A}), \\;" }, { "math_id": 56, "text": "S(\\rho_{AB} || \\rho_{A} \\otimes \\rho_{B}) = S(\\rho_{A}) + S(\\rho_{B}) - S(\\rho_{AB}) = I(A:B), " }, { "math_id": 57, "text": "S(\\rho_{AB} || \\rho_{A} \\otimes I_{B}/n_B) = \\mathrm{log}(n_B) + S(\\rho_{A}) - S(\\rho_{AB}) = \\mathrm{log}(n_B)- S(B|A), " } ]
https://en.wikipedia.org/wiki?curid=5993806
59938063
Locally linear graph
Graph where every edge is in one triangle In graph theory, a locally linear graph is an undirected graph in which every edge belongs to exactly one triangle. Equivalently, for each vertex of the graph, its neighbors are each adjacent to exactly one other neighbor, so the neighbors can be paired up into an induced matching. Locally linear graphs have also been called locally matched graphs. Their triangles form the hyperedges of triangle-free 3-uniform linear hypergraphs and the blocks of certain partial Steiner triple systems, and the locally linear graphs are exactly the Gaifman graphs of these hypergraphs or partial Steiner systems. Many constructions for locally linear graphs are known. Examples of locally linear graphs include the triangular cactus graphs, the line graphs of 3-regular triangle-free graphs, and the Cartesian products of smaller locally linear graphs. Certain Kneser graphs, and certain strongly regular graphs, are also locally linear. The question of how many edges locally linear graphs can have is one of the formulations of the Ruzsa–Szemerédi problem. Although dense graphs can have a number of edges proportional to the square of the number of vertices, locally linear graphs have a smaller number of edges, falling short of the square by at least a small non-constant factor. The densest planar graphs that can be locally linear are also known. The least dense locally linear graphs are the triangular cactus graphs. Constructions. Gluing and products. The friendship graphs, graphs formed by gluing together a collection of triangles at a single shared vertex, are locally linear. They are the only finite graphs having the stronger property that every pair of vertices (adjacent or not) share exactly one common neighbor. More generally every triangular cactus graph, a graph formed by gluing triangles at shared vertices without forming any additional cycles, is locally linear. Locally linear graphs may be formed from smaller locally linear graphs by the following operation, a form of the clique-sum operation on graphs. Let formula_0 and formula_1 be any two locally linear graphs, select a triangle from each of them, and glue the two graphs by merging together corresponding pairs of vertices in the two selected triangles. Then the resulting graph remains locally linear. The Cartesian product of any two locally linear graphs remains locally linear, because any triangles in the product come from triangles in one or the other factors. For instance, the nine-vertex Paley graph (the graph of the 3-3 duoprism) is the Cartesian product of two triangles. The Hamming graph formula_2 is a Cartesian product of formula_3 triangles, and again is locally linear. From smaller graphs. Some graphs that are not themselves locally linear can be used as a framework to construct larger locally linear graphs. One such construction involves line graphs. For any graph formula_0, the line graph formula_4 is a graph that has a vertex for every edge of formula_0. Two vertices in formula_4 are adjacent when the two edges that they represent in formula_0 have a common endpoint. If formula_0 is a 3-regular triangle-free graph, then its line graph formula_4 is 4-regular and locally linear. It has a triangle for every vertex formula_5 of formula_0, with the vertices of the triangle corresponding to the three edges incident to formula_5. Every 4-regular locally linear graph can be constructed in this way. For instance, the graph of the cuboctahedron is the line graph of a cube, so it is locally linear. 
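As a quick illustration of the line-graph construction just described, the following sketch (ours, not from the article; it assumes the NetworkX library, and the helper is_locally_linear is our own name, not a library function) builds the line graphs of two 3-regular triangle-free graphs mentioned in the text and checks that every edge lies in exactly one triangle.

```python
# Sketch under the assumptions above: the line graph of a 3-regular
# triangle-free graph is 4-regular and locally linear, i.e. every pair of
# adjacent vertices has exactly one common neighbor.
import networkx as nx

def is_locally_linear(g):
    """True if every edge of g belongs to exactly one triangle."""
    return all(len(list(nx.common_neighbors(g, u, v))) == 1 for u, v in g.edges())

for name, base in [("K_{3,3}", nx.complete_bipartite_graph(3, 3)),
                   ("cube graph", nx.hypercube_graph(3))]:
    lg = nx.line_graph(base)           # vertices of lg are the edges of base
    degrees = {d for _, d in lg.degree()}
    print(name, "-> line graph:",
          lg.number_of_nodes(), "vertices,",
          "4-regular:", degrees == {4},
          "locally linear:", is_locally_linear(lg))
# The line graph of K_{3,3} is the nine-vertex Paley graph mentioned in the text;
# the line graph of the cube is the graph of the cuboctahedron.
```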
The locally linear nine-vertex Paley graph, constructed above as a Cartesian product, may also be constructed in a different way as the line graph of the utility graph formula_6. The line graph of the Petersen graph is also locally linear by this construction. It has a property analogous to the cages: it is the smallest possible graph in which the largest clique has three vertices, each vertex is in exactly two edge-disjoint cliques, and the shortest cycle with edges from distinct cliques has length five. A more complicated expansion process applies to planar graphs. Let formula_0 be a planar graph embedded in the plane in such a way that every face is a quadrilateral, such as the graph of a cube. Gluing a square antiprism onto each face of formula_0, and then deleting the original edges of formula_0, produces a new locally linear planar graph. The numbers of edges and vertices of the result can be calculated from Euler's polyhedral formula: if formula_0 has formula_7 vertices, it has exactly formula_8 faces, and the result of replacing the faces of formula_0 by antiprisms has formula_9 vertices and formula_10 edges. For instance, the cuboctahedron can again be produced in this way, from the two faces (the interior and exterior) of a 4-cycle. The removed 4-cycle of this construction can be seen on the cuboctahedron as a cycle of four diagonals of its square faces, bisecting the polyhedron. Algebraic constructions. Certain Kneser graphs, graphs constructed from the intersection patterns of equal-size sets, are locally linear. Kneser graphs are described by two parameters, the size of the sets they represent and the size of the universe from which these sets are drawn. The Kneser graph formula_11 has formula_12 vertices (in the standard notation for binomial coefficients), representing the formula_13-element subsets of an formula_14-element set. In this graph, two vertices are adjacent when the corresponding subsets are disjoint sets, having no elements in common. In the special case when formula_15, the resulting graph is locally linear, because for each two disjoint formula_13-element subsets formula_16 and formula_17 there is exactly one other formula_13-element subset disjoint from both of them, consisting of all the elements that are neither in formula_16 nor in formula_17. The resulting locally linear graph has formula_18 vertices and formula_19 edges. For instance, for formula_20 the Kneser graph formula_21 is locally linear with 15 vertices and 45 edges. Locally linear graphs can also be constructed from progression-free sets of numbers. Let formula_22 be a prime number, and let formula_23 be a subset of the numbers modulo formula_22 such that no three members of formula_23 form an arithmetic progression modulo formula_22. (That is, formula_23 is a Salem–Spencer set modulo formula_22.) This set can be used to construct a tripartite graph with formula_24 vertices and formula_25 edges that is locally linear. To construct this graph, make three sets of vertices, each numbered from formula_26 to formula_27. For each number formula_28 in the range from formula_26 to formula_27 and each element formula_14 of formula_23, construct a triangle connecting the vertex with number formula_28 in the first set of vertices, the vertex with number formula_29 in the second set of vertices, and the vertex with number formula_30 in the third set of vertices. Form a graph as the union of all of these triangles. Because it is a union of triangles, every edge of the resulting graph belongs to a triangle. 
However, there can be no other triangles than the ones formed in this way. Any other triangle would have vertices numbered formula_31 where formula_14, formula_13, and formula_32 all belong to formula_23, violating the assumption that there be no arithmetic progressions formula_33 in formula_23. For example, with formula_34 and formula_35, the result of this construction is the nine-vertex Paley graph. The triangles in a locally linear graph can be equivalently thought of as forming a 3-uniform hypergraph. Such a hypergraph must be linear, meaning that no two of its hyperedges (the triangles) can share more than one vertex. The locally linear graph itself is the Gaifman graph of the hypergraph, the graph of pairs of vertices that belong to a common hyperedge. In this view it makes sense to talk about the girth of the hypergraph. In graph terms, this is the length of the shortest cycle that is not one of the triangles of the graph. An algebraic construction based on polarity graphs (also called Brown graphs) has been used, in this context, to find dense locally linear graphs that have no 4-cycles; their hypergraph girth is five. A polarity graph is defined from a finite projective plane, and a polarity, an incidence-preserving bijection between its points and its lines. The vertices of the polarity graph are points, and an edge connects two points whenever one is polar to a line containing the other. More algebraically, the vertices of the same graph can be represented by homogeneous coordinates: these are triples of values formula_36 from a finite field, not all zero, where two triples define the same point in the plane whenever they are scalar multiples of each other. Two points, represented by triples in this way, are adjacent when their inner product is zero. The polarity graph for a finite field of odd order formula_37 has formula_38 vertices, of which formula_39 are self-adjacent and do not belong to any triangles. When these are removed, the result is a locally linear graph with formula_40 vertices, formula_41 edges, and hypergraph girth five, giving the maximum possible number of edges for a locally linear graph of this girth up to lower-order terms. Regularity. Regular graphs with few vertices. A graph is regular when all of its vertices have the same degree, the number of incident edges. Every locally linear graph must have even degree at each vertex, because the edges at each vertex can be paired up into triangles. The Cartesian product of two locally linear regular graphs is again locally linear and regular, with degree equal to the sum of the degrees of the factors. Therefore, one can take Cartesian products of locally linear graphs of degree two (triangles) to produce regular locally linear graphs of every even degree. The formula_42-regular locally linear graphs must have at least formula_43 vertices, because there are this many vertices among any triangle and its neighbors alone. (No two vertices of the triangle can share a neighbor without violating local linearity.) Regular graphs with exactly this many vertices are possible only when formula_44 is 1, 2, 3, or 5, and are uniquely defined for each of these four cases. The four regular graphs meeting this bound on the number of vertices are the 3-vertex 2-regular triangle formula_45, the 9-vertex 4-regular Paley graph, the 15-vertex 6-regular Kneser graph formula_21, and the 27-vertex 10-regular complement graph of the Schläfli graph. 
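The Kneser graph construction and the vertex-count bound just discussed can both be verified computationally. Below is a sketch (ours, not from the article; it again assumes NetworkX, and the Kneser graph is built by hand from 2-element subsets so that the snippet stands alone) confirming that formula_21 is a 6-regular locally linear graph on 15 vertices, matching the formula_43 bound for formula_44 equal to 3.

```python
# Sketch under the assumptions above: build KG_{6,2} (vertices are the
# 2-element subsets of {0,...,5}, edges join disjoint subsets) and confirm it
# is a 6-regular locally linear graph on 15 vertices and 45 edges.
from itertools import combinations
import networkx as nx

def kneser_graph_3b_b(b):
    """Kneser graph KG_{3b,b}: b-subsets of a 3b-element set, edges between disjoint subsets."""
    vertices = [frozenset(c) for c in combinations(range(3 * b), b)]
    g = nx.Graph()
    g.add_nodes_from(vertices)
    g.add_edges_from((x, y) for x, y in combinations(vertices, 2) if not x & y)
    return g

def is_locally_linear(g):
    return all(len(list(nx.common_neighbors(g, u, v))) == 1 for u, v in g.edges())

g = kneser_graph_3b_b(2)                            # KG_{6,2}
degrees = {d for _, d in g.degree()}
print(g.number_of_nodes(), g.number_of_edges())     # 15 vertices, 45 edges
print("6-regular:", degrees == {6}, "locally linear:", is_locally_linear(g))
```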
The final 27-vertex 10-regular graph also represents the intersection graph of the 27 lines on a cubic surface. Strongly regular graphs. A strongly regular graph can be characterized by a quadruple of parameters formula_46 where formula_7 is the number of vertices, formula_47 is the number of incident edges per vertex, formula_48 is the number of shared neighbors for every adjacent pair of vertices, and formula_49 is the number of shared neighbors for every non-adjacent pair of vertices. When formula_50, the graph is locally linear. The locally linear graphs already mentioned above that are strongly regular are the Paley graph, with parameters (9, 4, 1, 2), the Kneser graph formula_21, with parameters (15, 6, 1, 3), and the complement of the Schläfli graph, with parameters (27, 10, 1, 5). Further locally linear strongly regular graphs are known beyond these. Other potentially-valid parameter combinations with formula_50 include (99,14,1,2) and (115,18,1,3), but it is unknown whether strongly regular graphs with those parameters exist. The question of the existence of a strongly regular graph with parameters (99,14,1,2) is known as Conway's 99-graph problem, and John Horton Conway has offered a $1000 prize for its solution. Distance-regular graphs. There are finitely many distance-regular graphs of degree 4 or 6 that are locally linear. Beyond the strongly regular graphs of the same degrees, they also include the line graph of the Petersen graph, the Hamming graph formula_51, and the halved Foster graph. Density. One formulation of the Ruzsa–Szemerédi problem asks for the maximum number of edges in an formula_7-vertex locally linear graph. As Imre Z. Ruzsa and Endre Szemerédi proved, this maximum number is formula_52 but is formula_53 for every formula_54. The construction of locally linear graphs from progression-free sets leads to the densest known locally linear graphs, with formula_55 edges. (In these formulas, formula_56, formula_57, and formula_58 are examples of little o notation, big Omega notation, and big O notation, respectively.) Among planar graphs, the maximum number of edges in a locally linear graph with formula_7 vertices is formula_59. The graph of the cuboctahedron is the first in an infinite sequence of polyhedral graphs with formula_60 vertices and formula_61 edges, for formula_62, constructed by expanding the quadrilateral faces of formula_63 into antiprisms. These examples show that the formula_59 upper bound can be attained. Every locally linear graph has the property that it remains connected after any matching is removed from it, because in any path through the graph, each matched edge can be replaced by the other two edges of its triangle. Among the graphs with this property, the least dense are the triangular cactus graphs, which are also the least dense locally linear graphs. Applications. One application of locally linear graphs occurs in the formulation of Greechie diagrams, which are used in quantum logic to help determine whether certain Hilbert space equations can be inferred from each other. In this application, the triangles of locally linear graphs form the blocks of Greechie diagrams with block size three. The Greechie diagrams corresponding to lattices come from the locally linear graphs of hypergraph girth five or more, as constructed for instance from polarity graphs. A combination of random sampling and a graph removal lemma can be used to find large high-girth 3-uniform hypergraphs within arbitrary 3-uniform linear hypergraphs or partial Steiner triple systems. This method can then be used to prove asymptotically tight lower bounds on the independence number of 3-uniform linear hypergraphs and partial Steiner triple systems. 
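Returning to the construction from progression-free sets described earlier, the following sketch (ours, not from the article; the small sets used for formula_23 are hand-checked examples, and NetworkX is assumed) builds the tripartite graph on formula_24 vertices with formula_25 edges and verifies that it is locally linear.

```python
# Sketch under the assumptions above: for a prime p and a set A with no
# nontrivial 3-term arithmetic progression modulo p, the union of the
# triangles {(0,x), (1,x+a), (2,x+2a)} over x in Z_p and a in A is a
# locally linear graph with 3p vertices and 3p*|A| edges.
import networkx as nx

def tripartite_from_progression_free(p, progression_free_set):
    g = nx.Graph()
    for x in range(p):
        for a in progression_free_set:
            u, v, w = (0, x), (1, (x + a) % p), (2, (x + 2 * a) % p)
            g.add_edges_from([(u, v), (v, w), (u, w)])   # one triangle per (x, a)
    return g

def is_locally_linear(g):
    return all(len(list(nx.common_neighbors(g, u, v))) == 1 for u, v in g.edges())

# p = 3 with A = {1, 2} (i.e. {+1, -1} mod 3) reproduces the nine-vertex Paley
# graph; A = {1, 2, 4} has no nontrivial 3-term progression modulo 7.
for p, a_set in [(3, {1, 2}), (7, {1, 2, 4})]:
    g = tripartite_from_progression_free(p, a_set)
    print(p, sorted(a_set), g.number_of_nodes(), g.number_of_edges(),
          "locally linear:", is_locally_linear(g))
    # expected: 3p vertices and 3p*|A| edges, locally linear: True
```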
References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "H(d,3)" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "L(G)" }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "K_{3,3}" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "n-2" }, { "math_id": 9, "text": "5(n-2)+2" }, { "math_id": 10, "text": "12(n-2)" }, { "math_id": 11, "text": "K G_{a,b}" }, { "math_id": 12, "text": "\\tbinom{a}{b}" }, { "math_id": 13, "text": "b" }, { "math_id": 14, "text": "a" }, { "math_id": 15, "text": "a=3b" }, { "math_id": 16, "text": "X" }, { "math_id": 17, "text": "Y" }, { "math_id": 18, "text": "\\tbinom{3b}{b}" }, { "math_id": 19, "text": "\\tfrac{1}{2}\\tbinom{3b}{b}\\tbinom{2b}{b}" }, { "math_id": 20, "text": "b=2" }, { "math_id": 21, "text": "KG_{6,2}" }, { "math_id": 22, "text": "p" }, { "math_id": 23, "text": "A" }, { "math_id": 24, "text": "3p" }, { "math_id": 25, "text": "3p\\cdot|A|" }, { "math_id": 26, "text": "0" }, { "math_id": 27, "text": "p-1" }, { "math_id": 28, "text": "x" }, { "math_id": 29, "text": "x+a" }, { "math_id": 30, "text": "x+2a" }, { "math_id": 31, "text": "(x,x+a,x+a+b)" }, { "math_id": 32, "text": "c=(a+b)/2" }, { "math_id": 33, "text": "(a,c,b)" }, { "math_id": 34, "text": "p=3" }, { "math_id": 35, "text": "A=\\{\\pm 1\\}" }, { "math_id": 36, "text": "(x,y,z)" }, { "math_id": 37, "text": "q" }, { "math_id": 38, "text": "q^2+q+1" }, { "math_id": 39, "text": "q+1" }, { "math_id": 40, "text": "q^2" }, { "math_id": 41, "text": "\\bigl(\\tfrac12+o(1)\\bigr)q^3" }, { "math_id": 42, "text": "2r" }, { "math_id": 43, "text": "6r-3" }, { "math_id": 44, "text": "r" }, { "math_id": 45, "text": "K_3" }, { "math_id": 46, "text": "(n,k,\\lambda,\\mu)" }, { "math_id": 47, "text": "k" }, { "math_id": 48, "text": "\\lambda" }, { "math_id": 49, "text": "\\mu" }, { "math_id": 50, "text": "\\lambda=1" }, { "math_id": 51, "text": "H(3,3)" }, { "math_id": 52, "text": "o(n^2)" }, { "math_id": 53, "text": "\\Omega(n^{2-\\varepsilon})" }, { "math_id": 54, "text": "\\varepsilon>0" }, { "math_id": 55, "text": "n^2/\\exp O(\\sqrt{\\log n})" }, { "math_id": 56, "text": "o" }, { "math_id": 57, "text": "\\Omega" }, { "math_id": 58, "text": "O" }, { "math_id": 59, "text": "\\tfrac{12}{5}(n-2)" }, { "math_id": 60, "text": "n=5k+2" }, { "math_id": 61, "text": "\\tfrac{12}{5}(n-2)=12k" }, { "math_id": 62, "text": "k=2,3,\\dots" }, { "math_id": 63, "text": "K_{2,k}" } ]
https://en.wikipedia.org/wiki?curid=59938063
5994167
Carnot cycle
Idealized thermodynamic cycle A Carnot cycle is an ideal thermodynamic cycle proposed by French physicist Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. By Carnot's theorem, it provides an upper limit on the efficiency of any classical thermodynamic engine during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference through the application of work to the system. In a Carnot cycle, a system or engine transfers energy in the form of heat between two thermal reservoirs at temperatures formula_0 and formula_1 (referred to as the hot and cold reservoirs, respectively), and a part of this transferred energy is converted to the work done by the system. The cycle is reversible, and entropy is conserved, merely transferred between the thermal reservoirs and the system without gain or loss. When work is applied to the system, heat moves from the cold to the hot reservoir (heat pump or refrigeration). When heat moves from the hot to the cold reservoir, the system applies work to the environment. The work formula_2 done by the system or engine to the environment per Carnot cycle depends on the temperatures of the thermal reservoirs and the entropy transferred from the hot reservoir to the system formula_3 per cycle, such that formula_4, where formula_5 is the heat transferred from the hot reservoir to the system per cycle. Stages. A Carnot cycle, as an idealized thermodynamic cycle performed by a Carnot heat engine, consists of the following steps: * Isothermal expansion (point 1 to point 2): the working substance absorbs heat formula_5 from the hot reservoir while remaining at the hot temperature formula_0, expanding and doing work on the surroundings. * Isentropic (reversible adiabatic) expansion (point 2 to point 3): the working substance continues to expand and do work with no heat exchange, cooling from formula_0 to formula_1. * Isothermal compression (point 3 to point 4): the surroundings do work on the working substance, which rejects heat to the cold reservoir while remaining at the cold temperature formula_1. * Isentropic compression (point 4 to point 1): the surroundings do work on the working substance with no heat exchange, raising its temperature from formula_1 back to formula_0 and returning the system to its initial state. In this case, since it is a reversible thermodynamic cycle (no net change in the system and its surroundings per cycle), formula_6 or, equivalently, formula_7 This is true as formula_8 and formula_9 are both smaller in magnitude and in fact are in the same ratio as formula_10. The pressure–volume graph. When a Carnot cycle is plotted on a pressure–volume diagram (Figure 1), the isothermal stages follow the isotherm lines for the working fluid, the adiabatic stages move between isotherms, and the area bounded by the complete cycle path represents the total work that can be done during one cycle. From point 1 to 2 and point 3 to 4 the temperature is constant (isothermal process). Heat transfer from point 4 to 1 and point 2 to 3 is equal to zero (adiabatic process). Properties and significance. The temperature–entropy diagram. The behavior of a Carnot engine or refrigerator is best understood by using a temperature–entropy diagram ("T"–"S" diagram), in which the thermodynamic state is specified by a point on a graph with entropy ("S") as the horizontal axis and temperature ("T") as the vertical axis (Figure 2). For a simple closed system (control mass analysis), any point on the graph represents a particular state of the system. A thermodynamic process is represented by a curve connecting an initial state (A) and a final state (B). The area under the curve is the integral of "T" with respect to "S" along the curve, which is the amount of heat transferred in the process. If the process moves the system to greater entropy, the area under the curve is the amount of heat absorbed by the system in that process; otherwise, it is the amount of heat removed from or leaving the system. For any cyclic process, there is an upper portion of the cycle and a lower portion. In "T"–"S" diagrams for a clockwise cycle, the area under the upper portion will be the energy absorbed by the system during the cycle, while the area under the lower portion will be the energy removed from the system during the cycle. 
The area inside the cycle is then the difference between the two (the absorbed net heat energy), but since the internal energy of the system must have returned to its initial value, this difference must be the amount of work done by the system per cycle. Referring to Figure 1, mathematically, for a reversible process, we may write the amount of work done over a cyclic process as "W" = ∮ "P" "dV" = ∮ ("T" "dS" − "dU"). Since "dU" is an exact differential, its integral over any closed loop is zero and it follows that the area inside the loop on a "T"–"S" diagram is (a) equal to the total work performed by the system on the surroundings if the loop is traversed in a clockwise direction, and (b) is equal to the total work done on the system by the surroundings as the loop is traversed in a counterclockwise direction. The Carnot cycle. Evaluation of the above integral is particularly simple for a Carnot cycle. The amount of energy transferred as work is formula_11 The total amount of heat transferred from the hot reservoir to the system (in the isothermal expansion) will be formula_12 and the total amount of heat transferred from the system to the cold reservoir (in the isothermal compression) will be formula_13 Due to energy conservation, the net heat transferred, formula_14, is equal to the work performed formula_15 The efficiency formula_16 is defined to be "η" = "W"/"QH" = 1 − "TC"/"TH" (Equation 3), where formula_2 is the work done by the system (energy exiting the system as work), formula_5 is the heat put into the system (energy entering the system as heat), formula_17 is the heat taken from the system (energy leaving the system as heat), and formula_18 and formula_19 are, respectively, the maximum and minimum entropy of the system during the cycle. The expression with the temperature formula_20 can be derived from the expressions above with the entropy: formula_21 and formula_22. Since formula_23, a minus sign appears in the final expression for formula_16. This is the Carnot heat engine working efficiency definition as the fraction of the work done by the system to the thermal energy received by the system from the hot reservoir per cycle. This thermal energy is the cycle initiator. Reversed Carnot cycle. The Carnot heat-engine cycle described above is a totally reversible cycle. That is, all the processes that compose it can be reversed, in which case it becomes the Carnot heat pump and refrigeration cycle. This time, the cycle remains exactly the same except that the directions of any heat and work interactions are reversed. Heat is absorbed from the low-temperature reservoir, heat is rejected to a high-temperature reservoir, and a work input is required to accomplish all this. The "P"–"V" diagram of the reversed Carnot cycle is the same as for the Carnot heat-engine cycle except that the directions of the processes are reversed. Carnot's theorem. It can be seen from the above diagram that no cycle operating between temperatures formula_0 and formula_1 can exceed the efficiency of a Carnot cycle. Carnot's theorem is a formal statement of this fact: "No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs." Thus, Equation 3 gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: "All reversible engines operating between the same heat reservoirs are equally efficient." Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. 
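The efficiency formula lends itself to a quick numerical check. The sketch below (ours, not part of the article; the reservoir temperatures are arbitrary round numbers in kelvin) evaluates the Carnot efficiency and compares the effect of lowering the cold-reservoir temperature with that of raising the hot-reservoir temperature by the same amount, anticipating the observation made in the next paragraph.

```python
# Sketch under the assumptions above: Carnot efficiency eta = 1 - T_C/T_H
# (Equation 3), evaluated at a baseline and after shifting each reservoir
# temperature by the same 50 K.
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency for reservoirs at absolute temperatures t_hot > t_cold."""
    return 1.0 - t_cold / t_hot

T_H, T_C, dT = 500.0, 300.0, 50.0          # kelvin
print(f"baseline:         {carnot_efficiency(T_H, T_C):.4f}")        # 0.4000
print(f"T_C lowered 50 K: {carnot_efficiency(T_H, T_C - dT):.4f}")   # 0.5000
print(f"T_H raised 50 K:  {carnot_efficiency(T_H + dT, T_C):.4f}")   # 0.4545
# Lowering the cold-reservoir temperature raises the ceiling efficiency more
# than raising the hot-reservoir temperature by the same amount.
```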
Looking at this formula an interesting fact becomes apparent: Lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature. In other words, the maximum efficiency is achieved if and only if entropy does not change per cycle. An entropy change per cycle is made, for example, if there is friction leading to dissipation of work into heat. In that case, the cycle is not reversible and the Clausius theorem becomes an inequality rather than an equality. Otherwise, since entropy is a state function, the required dumping of heat into the environment to dispose of excess entropy leads to a (minimal) reduction in efficiency. So Equation 3 gives the efficiency of any reversible heat engine. In mesoscopic heat engines, work per cycle of operation in general fluctuates due to thermal noise. If the cycle is performed quasi-statically, the fluctuations vanish even on the mesoscale. However, if the cycle is performed faster than the relaxation time of the working medium, the fluctuations of work are inevitable. Nevertheless, when work and heat fluctuations are counted, an exact equality relates the exponential average of work performed by any heat engine to the heat transfer from the hotter heat bath. Efficiency of real heat engines. Carnot realized that, in reality, it is not possible to build a thermodynamically reversible engine. So, real heat engines are even less efficient than indicated by Equation 3. In addition, real engines that operate along the Carnot cycle style (isothermal expansion / isentropic expansion / isothermal compression / isentropic compression) are rare. Nevertheless, Equation 3 is extremely useful for determining the maximum efficiency that could ever be expected for a given set of thermal reservoirs. Although Carnot's cycle is an idealization, Equation 3 as the expression of the Carnot efficiency is still useful. Consider the average temperatures, formula_24 formula_25 at which the first integral is over a part of a cycle where heat goes into the system and the second integral is over a cycle part where heat goes out from the system. Then, replace "TH" and "TC" in Equation 3 by 〈"TH"〉 and 〈"TC"〉, respectively, to estimate the efficiency a heat engine. For the Carnot cycle, or its equivalent, the average value 〈"TH"〉 will equal the highest temperature available, namely "TH", and 〈"TC"〉 the lowest, namely "TC". For other less efficient thermodynamic cycles, 〈"TH"〉 will be lower than "TH", and 〈"TC"〉 will be higher than "TC". This can help illustrate, for example, why a reheater or a regenerator can improve the thermal efficiency of steam power plants and why the thermal efficiency of combined-cycle power plants (which incorporate gas turbines operating at even higher temperatures) exceeds that of conventional steam plants. The first prototype of the diesel engine was based on the principles of the Carnot cycle. As a macroscopic construct. The Carnot heat engine is, ultimately, a theoretical construct based on an "idealized" thermodynamic system. On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. 
As such, per Carnot's theorem, the Carnot engine may be thought of as the theoretical limit of macroscopic-scale heat engines rather than any practical device that could ever be built. References. <templatestyles src="Reflist/styles.css" /> * Carnot, Sadi, "Reflections on the Motive Power of Fire" * Ewing, J. A. (1910), "The Steam-Engine and Other Engines", 3rd edition, page 62, via Internet Archive
[ { "math_id": 0, "text": "T_H" }, { "math_id": 1, "text": "T_C" }, { "math_id": 2, "text": "W" }, { "math_id": 3, "text": "\\Delta S" }, { "math_id": 4, "text": "W = (T_H - T_C) \\Delta S = (T_H - T_C) \\frac{Q_H}{T_H}" }, { "math_id": 5, "text": "Q_H" }, { "math_id": 6, "text": "\\Delta S_H + \\Delta S_C = \\Delta S_\\text{cycle} = 0, " }, { "math_id": 7, "text": " \\frac{Q_H}{T_H} = - \\frac{Q_C}{T_C}." }, { "math_id": 8, "text": " Q_C " }, { "math_id": 9, "text": " T_C " }, { "math_id": 10, "text": " Q_H/T_H " }, { "math_id": 11, "text": "W = \\oint PdV = \\oint TdS = (T_H-T_C)(S_B-S_A)" }, { "math_id": 12, "text": "Q_H = T_H (S_B-S_A) = T_H \\Delta S_H" }, { "math_id": 13, "text": "Q_C = T_C (S_A - S_B) = T_C \\Delta S_C < 0" }, { "math_id": 14, "text": "Q" }, { "math_id": 15, "text": "W = Q = Q_H - Q_C" }, { "math_id": 16, "text": "\\eta" }, { "math_id": 17, "text": "Q_C" }, { "math_id": 18, "text": "S_B" }, { "math_id": 19, "text": "S_A" }, { "math_id": 20, "text": "\\eta= 1-\\frac{T_C}{T_H}" }, { "math_id": 21, "text": "Q_H = T_H (S_B - S_A) = T_H \\Delta S_H " }, { "math_id": 22, "text": "Q_C = T_C (S_A - S_B) = T_C \\Delta S_C < 0" }, { "math_id": 23, "text": " \\Delta S_C = S_A - S_B = - \\Delta S_H " }, { "math_id": 24, "text": "\\langle T_H\\rangle = \\frac{1}{\\Delta S} \\int_{Q_\\text{in}} TdS" }, { "math_id": 25, "text": "\\langle T_C\\rangle = \\frac{1}{\\Delta S} \\int_{Q_\\text{out}} TdS" } ]
https://en.wikipedia.org/wiki?curid=5994167
59945
History of logic
The history of logic deals with the study of the development of the science of valid inference (logic). Formal logics developed in ancient times in India, China, and Greece. Greek methods, particularly Aristotelian logic (or term logic) as found in the "Organon", found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of propositional logic. Christian and Islamic philosophers such as Boethius (died 524), Avicenna (died 1037), Thomas Aquinas (died 1274) and William of Ockham (died 1347) further developed Aristotle's logic in the Middle Ages, reaching a high point in the mid-fourteenth century, with Jean Buridan. The period between the fourteenth century and the beginning of the nineteenth century was largely one of decline and neglect, and at least one historian of logic regards this time as barren. Empirical methods ruled the day, as evidenced by Sir Francis Bacon's "Novum Organon" of 1620. Logic revived in the mid-nineteenth century, at the beginning of a revolutionary period when the subject developed into a rigorous and formal discipline which took as its exemplar the exact method of proof used in mathematics, a hearkening back to the Greek tradition. The development of the modern "symbolic" or "mathematical" logic during this period by the likes of Boole, Frege, Russell, and Peano is the most significant in the two-thousand-year history of logic, and is arguably one of the most important and remarkable events in human intellectual history. Progress in mathematical logic in the first few decades of the twentieth century, particularly arising from the work of Gödel and Tarski, had a significant impact on analytic philosophy and philosophical logic, particularly from the 1950s onwards, in subjects such as modal logic, temporal logic, deontic logic, and relevance logic. Logic in the East. Origin. The Nasadiya Sukta of the "Rigveda" (RV 10.129) contains ontological speculation in terms of various logical divisions that were later recast formally as the four circles of "catuskoti": "A", "not A", "A and 'not A'", and "not A and not not A". Logic began independently in ancient India and continued to develop to early modern times without any known influence from Greek logic. Before Gautama. Though the origins in India of public debate ("pariṣad"), one form of rational inquiry, are not clear, we know that public debates were common in preclassical India, for they are frequently alluded to in various "Upaniṣads" and in the early Buddhist literature. Public debate is not the only form of public deliberations in preclassical India. Assemblies ("pariṣad" or "sabhā") of various sorts, comprising relevant experts, were regularly convened to deliberate on a variety of matters, including administrative, legal and religious matters. Dattatreya. A philosopher named Dattatreya is stated in the Bhagavata purana to have taught Anviksiki to Alarka, Prahlada and others. It appears from the Markandeya purana that the Anviksiki-vidya expounded by him consisted of a mere disquisition on soul in accordance with the yoga philosophy. Dattatreya expounded the philosophical side of Anviksiki and not its logical aspect. Medhatithi Gautama. 
While the teachers mentioned before dealt with some particular topics of Anviksiki, the credit of founding the Anviksiki in its special sense of a science is to be attributed to Medhatithi Gautama (c. 6th century BC). Gautama founded the "anviksiki" school of logic. The "Mahabharata" (12.173.45), around the 5th century BC, refers to the "anviksiki" and "tarka" schools of logic. Panini. Panini (c. 5th century BC) developed a form of logic (to which Boolean logic has some similarities) for his formulation of Sanskrit grammar. Logic is described by Chanakya (c. 350–283 BC) in his "Arthashastra" as an independent field of inquiry. Nyaya-Vaisheshika. Two of the six Indian schools of thought deal with logic: Nyaya and Vaisheshika. The Nyāya Sūtras of Aksapada Gautama (c. 2nd century AD) constitute the core texts of the Nyaya school, one of the six orthodox schools of Hindu philosophy. This realist school developed a rigid five-member schema of inference involving an initial premise, a reason, an example, an application, and a conclusion. The idealist Buddhist philosophy became the chief opponent to the Naiyayikas. Jain Logic. The Jains made their own unique contribution to this mainstream development of logic by also occupying themselves with the basic epistemological issues, namely those concerning the nature of knowledge, how knowledge is derived, and in what way knowledge can be said to be reliable. The Jains have doctrines of relativity used for logic and reasoning: anekāntavāda (the theory of many-sidedness), syādvāda (the theory of conditioned predication), and nayavāda (the theory of partial standpoints). These Jain philosophical concepts made important contributions to ancient Indian philosophy, especially in the areas of skepticism and relativity. Nagarjuna. Nagarjuna (c. 150–250 AD), the founder of the Madhyamaka ("Middle Way"), developed an analysis known as the catuṣkoṭi (Sanskrit), a "four-cornered" system of argumentation that involves the systematic examination and rejection of each of the four possibilities of a proposition, "P": that "P" is the case, that "P" is not the case, that "P" both is and is not the case, and that "P" neither is nor is not the case. Dignaga. However, Dignāga (c. 480–540 AD) is sometimes said to have developed a formal syllogism, and it was through him and his successor, Dharmakirti, that Buddhist logic reached its height; it is contested whether their analysis actually constitutes a formal syllogistic system. In particular, their analysis centered on the definition of an inference-warranting relation, "vyapti", also known as invariable concomitance or pervasion. To this end, a doctrine known as "apoha" or differentiation was developed. This involved what might be called inclusion and exclusion of defining properties. Dignāga's famous "wheel of reason" ("Hetucakra") is a method of indicating when one thing (such as smoke) can be taken as an invariable sign of another thing (like fire), but the inference is often inductive and based on past observation. Matilal remarks that Dignāga's analysis is much like John Stuart Mill's Joint Method of Agreement and Difference, which is inductive. Logic in China. In China, a contemporary of Confucius, Mozi, "Master Mo", is credited with founding the Mohist school, whose canons dealt with issues relating to valid inference and the conditions of correct conclusions. In particular, one of the schools that grew out of Mohism, the Logicians, is credited by some scholars with the early investigation of formal logic. Due to the harsh rule of Legalism in the subsequent Qin Dynasty, this line of investigation disappeared in China until the introduction of Indian philosophy by Buddhists. Logic in the West. Prehistory of logic. Valid reasoning has been employed in all periods of human history. 
However, logic studies the "principles" of valid reasoning, inference and demonstration. It is probable that the idea of demonstrating a conclusion first arose in connection with geometry, which originally meant the same as "land measurement". The ancient Egyptians discovered geometry, including the formula for the volume of a truncated pyramid. Ancient Babylon was also skilled in mathematics. Esagil-kin-apli's medical "Diagnostic Handbook" in the 11th century BC was based on a logical set of axioms and assumptions, while Babylonian astronomers in the 8th and 7th centuries BC employed an internal logic within their predictive planetary systems, an important contribution to the philosophy of science. Ancient Greece before Aristotle. While the ancient Egyptians empirically discovered some truths of geometry, the great achievement of the ancient Greeks was to replace empirical methods by demonstrative proof. Both Thales and Pythagoras of the Pre-Socratic philosophers seemed aware of geometric methods. Fragments of early proofs are preserved in the works of Plato and Aristotle, and the idea of a deductive system was probably known in the Pythagorean school and the Platonic Academy. The proofs of Euclid of Alexandria are a paradigm of Greek geometry. The three basic principles of geometry are as follows: Further evidence that early Greek thinkers were concerned with the principles of reasoning is found in the fragment called "dissoi logoi", probably written at the beginning of the fourth century BC. This is part of a protracted debate about truth and falsity. In the case of the classical Greek city-states, interest in argumentation was also stimulated by the activities of the Rhetoricians or Orators and the Sophists, who used arguments to defend or attack a thesis, both in legal and political contexts. Thales. It is said Thales, most widely regarded as the first philosopher in the Greek tradition, measured the height of the pyramids by their shadows at the moment when his own shadow was equal to his height. Thales was said to have had a sacrifice in celebration of discovering Thales' theorem just as Pythagoras had the Pythagorean theorem. Thales is the first known individual to use deductive reasoning applied to geometry, by deriving four corollaries to his theorem, and the first known individual to whom a mathematical discovery has been attributed. Indian and Babylonian mathematicians knew his theorem for special cases before he proved it. It is believed that Thales learned that an angle inscribed in a semicircle is a right angle during his travels to Babylon. Pythagoras. Before 520 BC, on one of his visits to Egypt or Greece, Pythagoras might have met the c. 54 years older Thales. The systematic study of proof seems to have begun with the school of Pythagoras (i. e. the Pythagoreans) in the late sixth century BC. Indeed, the Pythagoreans, believing all was number, are the first philosophers to emphasize "form" rather than "matter". Heraclitus and Parmenides. The writing of Heraclitus (c. 535 – c. 475 BC) was the first place where the word "logos" was given special attention in ancient Greek philosophy, Heraclitus held that everything changes and all was fire and conflicting opposites, seemingly unified only by this "Logos". He is known for his obscure sayings. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;This "logos" holds always but humans always prove unable to understand it, both before hearing it and when they have first heard it. 
For though all things come to be in accordance with this "logos", humans are like the inexperienced when they experience such words and deeds as I set out, distinguishing each in accordance with its nature and saying how it is. But other people fail to notice what they do when awake, just as they forget what they do while asleep. In contrast to Heraclitus, Parmenides held that all is one and nothing changes. He may have been a dissident Pythagorean, disagreeing that One (a number) produced the many. "X is not" must always be false or meaningless. What exists can in no way not exist. Our sense perceptions with its noticing of generation and destruction are in grievous error. Instead of sense perception, Parmenides advocated "logos" as the means to Truth. He has been called the discoverer of logic, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;For this view, that That Which Is Not exists, can never predominate. You must debar your thought from this way of search, nor let ordinary experience in its variety force you along this way, (namely, that of allowing) the eye, sightless as it is, and the ear, full of sound, and the tongue, to rule; but (you must) judge by means of the Reason (Logos) the much-contested proof which is expounded by me. Zeno of Elea, a pupil of Parmenides, had the idea of a standard argument pattern found in the method of proof known as "reductio ad absurdum". This is the technique of drawing an obviously false (that is, "absurd") conclusion from an assumption, thus demonstrating that the assumption is false. Therefore, Zeno and his teacher are seen as the first to apply the art of logic. Plato's dialogue Parmenides portrays Zeno as claiming to have written a book defending the monism of Parmenides by demonstrating the absurd consequence of assuming that there is plurality. Zeno famously used this method to develop his paradoxes in his arguments against motion. Such "dialectic" reasoning later became popular. The members of this school were called "dialecticians" (from a Greek word meaning "to discuss"). Plato. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Let no one ignorant of geometry enter here. None of the surviving works of the great fourth-century philosopher Plato (428–347 BC) include any formal logic, but they include important contributions to the field of philosophical logic. Plato raises three questions: The first question arises in the dialogue "Theaetetus", where Plato identifies thought or opinion with talk or discourse ("logos"). The second question is a result of Plato's theory of Forms. Forms are not things in the ordinary sense, nor strictly ideas in the mind, but they correspond to what philosophers later called universals, namely an abstract entity common to each set of things that have the same name. In both the "Republic" and the "Sophist", Plato suggests that the necessary connection between the assumptions of a valid argument and its conclusion corresponds to a necessary connection between "forms". The third question is about definition. Many of Plato's dialogues concern the search for a definition of some important concept (justice, truth, the Good), and it is likely that Plato was impressed by the importance of definition in mathematics. What underlies every definition is a Platonic Form, the common nature present in different particular things. Thus, a definition reflects the ultimate object of understanding, and is the foundation of all valid inference. 
This had a great influence on Plato's student Aristotle, in particular Aristotle's notion of the essence of a thing. Aristotle. The logic of Aristotle, and particularly his theory of the syllogism, has had an enormous influence in Western thought. Aristotle was the first logician to attempt a systematic analysis of logical syntax, of noun (or "term"), and of verb. He was the first "formal logician", in that he demonstrated the principles of reasoning by employing variables to show the underlying logical form of an argument. He sought relations of dependence which characterize necessary inference, and distinguished the validity of these relations, from the truth of the premises. He was the first to deal with the principles of contradiction and excluded middle in a systematic way. The Organon. His logical works, called the "Organon", are the earliest formal study of logic that have come down to modern times. Though it is difficult to determine the dates, the probable order of writing of Aristotle's logical works is: These works are of outstanding importance in the history of logic. In the "Categories", he attempts to discern all the possible things to which a term can refer; this idea underpins his philosophical work "Metaphysics", which itself had a profound influence on Western thought. He also developed a theory of non-formal logic ("i.e.," the theory of fallacies), which is presented in "Topics" and "Sophistical Refutations". "On Interpretation" contains a comprehensive treatment of the notions of opposition and conversion; chapter 7 is at the origin of the square of opposition (or logical square); chapter 9 contains the beginning of modal logic. The "Prior Analytics" contains his exposition of the "syllogism", where three important principles are applied for the first time in history: the use of variables, a purely formal treatment, and the use of an axiomatic system. Stoics. The other great school of Greek logic is that of the Stoics. Stoic logic traces its roots back to the late 5th century BC philosopher Euclid of Megara, a pupil of Socrates and slightly older contemporary of Plato, probably following in the tradition of Parmenides and Zeno. His pupils and successors were called "Megarians", or "Eristics", and later the "Dialecticians". The two most important dialecticians of the Megarian school were Diodorus Cronus and Philo, who were active in the late 4th century BC. The Stoics adopted the Megarian logic and systemized it. The most important member of the school was Chrysippus (c. 278 – c. 206 BC), who was its third head, and who formalized much of Stoic doctrine. He is supposed to have written over 700 works, including at least 300 on logic, almost none of which survive. Unlike with Aristotle, we have no complete works by the Megarians or the early Stoics, and have to rely mostly on accounts (sometimes hostile) by later sources, including prominently Diogenes Laërtius, Sextus Empiricus, Galen, Aulus Gellius, Alexander of Aphrodisias, and Cicero. Three significant contributions of the Stoic school were (i) their account of modality, (ii) their theory of the Material conditional, and (iii) their account of meaning and truth. * Everything that is past is true and necessary. * The impossible does not follow from the possible. * What neither is nor will be is possible. Diodorus used the plausibility of the first two to prove that nothing is possible if it neither is nor will be true. 
Chrysippus, by contrast, denied the second premise and said that the impossible could follow from the possible. The second of the school's contributions listed above, the theory of the material conditional, originates with Philo, who held that a conditional is true unless it has both a true antecedent and a false consequent. Writing "T0" and "T1" for true statements and "F0" and "F1" for false statements, each of the following conditionals therefore counts as true on Philo's criterion: * If "T0", then "T1" * If "F0", then "T0" * If "F0", then "F1" The following conditional does not meet this requirement, and is therefore a false statement according to Philo: * If "T0", then "F0" Indeed, Sextus says "According to [Philo], there are three ways in which a conditional may be true, and one in which it may be false." Philo's criterion of truth is what would now be called a truth-functional definition of "if ... then"; it is the definition used in modern logic. In contrast, Diodorus allowed the validity of conditionals only when the antecedent clause could never lead to an untrue conclusion. A century later, the Stoic philosopher Chrysippus attacked the assumptions of both Philo and Diodorus. Medieval logic. Logic in the Middle East. The works of Al-Kindi, Al-Farabi, Avicenna, Al-Ghazali, Averroes and other Muslim logicians were based on Aristotelian logic and were important in communicating the ideas of the ancient world to the medieval West. Al-Farabi (Alfarabi) (873–950) was an Aristotelian logician who discussed the topics of future contingents, the number and relation of the categories, the relation between logic and grammar, and non-Aristotelian forms of inference. Al-Farabi also considered the theories of conditional syllogisms and analogical inference, which were part of the Stoic tradition of logic rather than the Aristotelian. Maimonides (1138-1204) wrote a "Treatise on Logic" (Arabic: "Maqala Fi-Sinat Al-Mantiq"), referring to Al-Farabi as the "second master", the first being Aristotle. Ibn Sina (Avicenna) (980–1037) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world, and also had an important influence on Western medieval writers such as Albertus Magnus. Avicenna wrote on the hypothetical syllogism and on the propositional calculus, which were both part of the Stoic logical tradition. He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic. He also made use of inductive logic, such as the methods of agreement, difference, and concomitant variation which are critical to the scientific method. One of Avicenna's ideas had a particularly important influence on Western logicians such as William of Ockham: Avicenna's word for a meaning or notion ("ma'na") was translated by the scholastic logicians as the Latin "intentio"; in medieval logic and epistemology, this is a sign in the mind that naturally represents a thing. This was crucial to the development of Ockham's conceptualism: A universal term ("e.g.," "man") does not signify a thing existing in reality, but rather a sign in the mind ("intentio in intellectu") which represents many things in reality; Ockham cites Avicenna's commentary on "Metaphysics" V in support of this view. Fakhr al-Din al-Razi (b. 1149) criticised Aristotle's "first figure" and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill (1806–1873). Al-Razi's work was seen by later Islamic scholars as marking a new direction for Islamic logic, towards a Post-Avicennian logic. This was further elaborated by his student Afdaladdîn al-Khûnajî (d. 1249), who developed a form of logic revolving around the subject matter of conceptions and assents. 
In response to this tradition, Nasir al-Din al-Tusi (1201–1274) began a tradition of Neo-Avicennian logic which remained faithful to Avicenna's work and existed as an alternative to the more dominant Post-Avicennian school over the following centuries. The Illuminationist school was founded by Shahab al-Din Suhrawardi (1155–1191), who developed the idea of "decisive necessity", which refers to the reduction of all modalities (necessity, possibility, contingency and impossibility) to the single mode of necessity. Ibn al-Nafis (1213–1288) wrote a book on Avicennian logic, which was a commentary of Avicenna's "Al-Isharat" ("The Signs") and "Al-Hidayah" ("The Guidance"). Ibn Taymiyyah (1263–1328), wrote the "Ar-Radd 'ala al-Mantiqiyyin", where he argued against the usefulness, though not the validity, of the syllogism and in favour of inductive reasoning. Ibn Taymiyyah also argued against the certainty of syllogistic arguments and in favour of analogy; his argument is that concepts founded on induction are themselves not certain but only probable, and thus a syllogism based on such concepts is no more certain than an argument based on analogy. He further claimed that induction itself is founded on a process of analogy. His model of analogical reasoning was based on that of juridical arguments. This model of analogy has been used in the recent work of John F. Sowa. The "Sharh al-takmil fi'l-mantiq" written by Muhammad ibn Fayd Allah ibn Muhammad Amin al-Sharwani in the 15th century is the last major Arabic work on logic that has been studied. However, "thousands upon thousands of pages" on logic were written between the 14th and 19th centuries, though only a fraction of the texts written during this period have been studied by historians, hence little is known about the original work on Islamic logic produced during this later period. Logic in medieval Europe. "Medieval logic" (also known as "Scholastic logic") generally means the form of Aristotelian logic developed in medieval Europe throughout roughly the period 1200–1600. For centuries after Stoic logic had been formulated, it was the dominant system of logic in the classical world. When the study of logic resumed after the Dark Ages, the main source was the work of the Christian philosopher Boethius, who was familiar with some of Aristotle's logic, but almost none of the work of the Stoics. Until the twelfth century, the only works of Aristotle available in the West were the "Categories", "On Interpretation", and Boethius's translation of the Isagoge of Porphyry (a commentary on the Categories). These works were known as the "Old Logic" ("Logica Vetus" or "Ars Vetus"). An important work in this tradition was the "Logica Ingredientibus" of Peter Abelard (1079–1142). His direct influence was small, but his influence through pupils such as John of Salisbury was great, and his method of applying rigorous logical analysis to theology shaped the way that theological criticism developed in the period that followed. The proof for the principle of explosion, also known as the principle of Pseudo-Scotus, the law according to which any proposition can be proven from a contradiction (including its negation), was first given by the 12th century French logician William of Soissons. By the early thirteenth century, the remaining works of Aristotle's "Organon", including the "Prior Analytics", "Posterior Analytics", and the "Sophistical Refutations" (collectively known as the "Logica Nova" or "New Logic"), had been recovered in the West. 
Logical work until then was mostly paraphrasis or commentary on the work of Aristotle. The period from the middle of the thirteenth to the middle of the fourteenth century was one of significant developments in logic, particularly in three areas which were original, with little foundation in the Aristotelian tradition that came before. These were: The last great works in this tradition are the "Logic" of John Poinsot (1589–1644, known as John of St Thomas), the "Metaphysical Disputations" of Francisco Suarez (1548–1617), and the "Logica Demonstrativa" of Giovanni Girolamo Saccheri (1667–1733). Traditional logic. The textbook tradition. "Traditional logic" generally means the textbook tradition that begins with Antoine Arnauld's and Pierre Nicole's "Logic, or the Art of Thinking", better known as the "Port-Royal Logic". Published in 1662, it was the most influential work on logic after Aristotle until the nineteenth century. The book presents a loosely Cartesian doctrine (that the proposition is a combining of ideas rather than terms, for example) within a framework that is broadly derived from Aristotelian and medieval term logic. Between 1664 and 1700, there were eight editions, and the book had considerable influence after that. The Port-Royal introduces the concepts of extension and intension. The account of propositions that Locke gives in the "Essay" is essentially that of the Port-Royal: "Verbal propositions, which are words, [are] the signs of our ideas, put together or separated in affirmative or negative sentences. So that proposition consists in the putting together or separating these signs, according as the things which they stand for agree or disagree." Dudley Fenner helped popularize Ramist logic, a reaction against Aristotle. Another influential work was the "Novum Organum" by Francis Bacon, published in 1620. The title translates as "new instrument". This is a reference to Aristotle's work known as the "Organon". In this work, Bacon rejects the syllogistic method of Aristotle in favor of an alternative procedure "which by slow and faithful toil gathers information from things and brings it into understanding". This method is known as inductive reasoning, a method which starts from empirical observation and proceeds to lower axioms or propositions; from these lower axioms, more general ones can be induced. For example, in finding the cause of a "phenomenal nature" such as heat, three lists should be constructed: a list of situations in which heat is present, a list of otherwise similar situations in which it is absent, and a list of situations in which it is present in varying degrees. Then, the "form nature" (or cause) of heat may be defined as that which is common to every situation of the presence list, and which is lacking from every situation of the absence list, and which varies by degree in every situation of the variability list. Other works in the textbook tradition include Isaac Watts's "Logick: Or, the Right Use of Reason" (1725), Richard Whately's "Logic" (1826), and John Stuart Mill's "A System of Logic" (1843). Although the latter was one of the last great works in the tradition, Mill's view that the foundations of logic lie in introspection influenced the view that logic is best understood as a branch of psychology, a view which dominated the next fifty years of its development, especially in Germany. Logic in Hegel's philosophy. G.W.F. Hegel indicated the importance of logic to his philosophical system when he condensed his extensive "Science of Logic" into a shorter work published in 1817 as the first volume of his "Encyclopaedia of the Philosophical Sciences." 
The "Shorter" or "Encyclopaedia" "Logic", as it is often known, lays out a series of transitions which leads from the most empty and abstract of categories—Hegel begins with "Pure Being" and "Pure Nothing"—to the "Absolute", the category which contains and resolves all the categories which preceded it. Despite the title, Hegel's "Logic" is not really a contribution to the science of valid inference. Rather than deriving conclusions about concepts through valid inference from premises, Hegel seeks to show that thinking about one concept compels thinking about another concept (one cannot, he argues, possess the concept of "Quality" without the concept of "Quantity"); this compulsion is, supposedly, not a matter of individual psychology, because it arises almost organically from the content of the concepts themselves. His purpose is to show the rational structure of the "Absolute"—indeed of rationality itself. The method by which thought is driven from one concept to its contrary, and then to further concepts, is known as the Hegelian dialectic. Although Hegel's "Logic" has had little impact on mainstream logical studies, its influence can be seen elsewhere: Logic and psychology. Between the work of Mill and Frege stretched half a century during which logic was widely treated as a descriptive science, an empirical study of the structure of reasoning, and thus essentially as a branch of psychology. The German psychologist Wilhelm Wundt, for example, discussed deriving "the logical from the psychological laws of thought", emphasizing that "psychological thinking is always the more comprehensive form of thinking." This view was widespread among German philosophers of the period: Such was the dominant view of logic in the years following Mill's work. This psychological approach to logic was rejected by Gottlob Frege. It was also subjected to an extended and destructive critique by Edmund Husserl in the first volume of his "Logical Investigations" (1900), an assault which has been described as "overwhelming". Husserl argued forcefully that grounding logic in psychological observations implied that all logical truths remained unproven, and that skepticism and relativism were unavoidable consequences. Such criticisms did not immediately extirpate what is called "psychologism". For example, the American philosopher Josiah Royce, while acknowledging the force of Husserl's critique, remained "unable to doubt" that progress in psychology would be accompanied by progress in logic, and vice versa. Rise of modern logic. The period between the fourteenth century and the beginning of the nineteenth century had been largely one of decline and neglect, and is generally regarded as barren by historians of logic. The revival of logic occurred in the mid-nineteenth century, at the beginning of a revolutionary period where the subject developed into a rigorous and formalistic discipline whose exemplar was the exact method of proof used in mathematics. The development of the modern "symbolic" or "mathematical" logic during this period is the most significant in the 2000-year history of logic, and is arguably one of the most important and remarkable events in human intellectual history. A number of features distinguish modern logic from the old Aristotelian or traditional logic, the most important of which are as follows: Modern logic is fundamentally a "calculus" whose rules of operation are determined only by the "shape" and not by the "meaning" of the symbols it employs, as in mathematics. 
Many logicians were impressed by the "success" of mathematics, in that there had been no prolonged dispute about any truly mathematical result. C. S. Peirce noted that even though a mistake in the evaluation of a definite integral by Laplace led to an error concerning the moon's orbit that persisted for nearly 50 years, the mistake, once spotted, was corrected without any serious dispute. Peirce contrasted this with the disputation and uncertainty surrounding traditional logic, and especially reasoning in metaphysics. He argued that a truly "exact" logic would depend upon mathematical, i.e., "diagrammatic" or "iconic" thought. "Those who follow such methods will ... escape all error except such as will be speedily corrected after it is once suspected". Modern logic is also "constructive" rather than "abstractive"; i.e., rather than abstracting and formalising theorems derived from ordinary language (or from psychological intuitions about validity), it constructs theorems by formal methods, then looks for an interpretation in ordinary language. It is entirely symbolic, meaning that even the logical constants (which the medieval logicians called "syncategoremata") and the categoric terms are expressed in symbols. Modern logic. The development of modern logic falls into roughly five periods: Embryonic period. The idea that inference could be represented by a purely mechanical process is found as early as Raymond Llull, who proposed a (somewhat eccentric) method of drawing conclusions by a system of concentric rings. The work of logicians such as the Oxford Calculators led to a method of using letters instead of writing out logical calculations ("calculationes") in words, a method used, for instance, in the "Logica magna" by Paul of Venice. Three hundred years after Llull, the English philosopher and logician Thomas Hobbes suggested that all logic and reasoning could be reduced to the mathematical operations of addition and subtraction. The same idea is found in the work of Leibniz, who had read both Llull and Hobbes, and who argued that logic can be represented through a combinatorial process or calculus. But, like Llull and Hobbes, he failed to develop a detailed or comprehensive system, and his work on this topic was not published until long after his death. Leibniz says that ordinary languages are subject to "countless ambiguities" and are unsuited for a calculus, whose task is to expose mistakes in inference arising from the forms and structures of words; hence, he proposed to identify an alphabet of human thought comprising fundamental concepts which could be composed to express complex ideas, and create a "calculus ratiocinator" that would make all arguments "as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate." Gergonne (1816) said that reasoning does not have to be about objects about which one has perfectly clear ideas, because algebraic operations can be carried out without having any idea of the meaning of the symbols involved. Bolzano anticipated a fundamental idea of modern proof theory when he defined logical consequence or "deducibility" in terms of variables:Hence I say that propositions formula_0, formula_1, formula_2... are "deducible" from propositions formula_3, formula_4, formula_5, formula_6... with respect to variable parts formula_7, formula_8..., if every class of ideas whose substitution for formula_7, formula_8... 
makes all of formula_3, formula_4, formula_5, formula_6... true, also makes all of formula_0, formula_1, formula_2... true. Occasionally, since it is customary, I shall say that propositions formula_0, formula_1, formula_2... "follow", or can be "inferred" or "derived", from formula_3, formula_4, formula_5, formula_6... Propositions formula_3, formula_4, formula_5, formula_6... I shall call the "premises", formula_0, formula_1, formula_2... the "conclusions."This is now known as semantic validity. Algebraic period. Modern logic begins with what is known as the "algebraic school", originating with Boole and including Peirce, Jevons, Schröder, and Venn. Their objective was to develop a calculus to formalise reasoning in the area of classes, propositions, and probabilities. The school begins with Boole's seminal work "Mathematical Analysis of Logic" which appeared in 1847, although De Morgan (1847) is its immediate precursor. The fundamental idea of Boole's system is that algebraic formulae can be used to express logical relations. This idea occurred to Boole in his teenage years, working as an usher in a private school in Lincoln, Lincolnshire. For example, let x and y stand for classes, let the symbol "=" signify that the classes have the same members, xy stand for the class containing all and only the members of x and y and so on. Boole calls these "elective symbols", i.e. symbols which select certain objects for consideration. An expression in which elective symbols are used is called an "elective function", and an equation of which the members are elective functions, is an "elective equation". The theory of elective functions and their "development" is essentially the modern idea of truth-functions and their expression in disjunctive normal form. Boole's system admits of two interpretations, in class logic, and propositional logic. Boole distinguished between "primary propositions" which are the subject of syllogistic theory, and "secondary propositions", which are the subject of propositional logic, and showed how under different "interpretations" the same algebraic system could represent both. An example of a primary proposition is "All inhabitants are either Europeans or Asiatics." An example of a secondary proposition is "Either all inhabitants are Europeans or they are all Asiatics." These are easily distinguished in modern predicate logic, where it is also possible to show that the first follows from the second, but it is a significant disadvantage that there is no way of representing this in the Boolean system. In his "Symbolic Logic" (1881), John Venn used diagrams of overlapping areas to express Boolean relations between classes or truth-conditions of propositions. In 1869 Jevons realised that Boole's methods could be mechanised, and constructed a "logical machine" which he showed to the Royal Society the following year. In 1885 Allan Marquand proposed an electrical version of the machine that is still extant (picture at the Firestone Library). The defects in Boole's system (such as the use of the letter "v" for existential propositions) were all remedied by his followers. Jevons published "Pure Logic, or the Logic of Quality apart from Quantity" in 1864, where he suggested a symbol to signify exclusive or, which allowed Boole's system to be greatly simplified. This was usefully exploited by Schröder when he set out theorems in parallel columns in his "Vorlesungen" (1890–1905). 
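To make the idea of "development" concrete, the following short sketch (in Python, with function names of my own choosing rather than Boole's notation) expands an elective function of two symbols into its constituents, the expansion the text above relates to disjunctive normal form. The algebraic encoding of inclusive "x or y" as x + y − xy, which keeps values within {0, 1}, is an assumption of the sketch rather than Boole's own convention.

```python
from itertools import product

# Boole-style "development" of an elective function of two symbols:
#   f(x, y) = f(1,1)xy + f(1,0)x(1-y) + f(0,1)(1-x)y + f(0,0)(1-x)(1-y)
# On {0, 1} this expands f into its "constituents", essentially a
# disjunctive-normal-form representation of the truth function.

def develop(f):
    """Return the developed (expanded) form of a two-symbol elective function."""
    def expansion(x, y):
        return (f(1, 1) * x * y
                + f(1, 0) * x * (1 - y)
                + f(0, 1) * (1 - x) * y
                + f(0, 0) * (1 - x) * (1 - y))
    return expansion

# Example under the propositional interpretation: inclusive "x or y",
# written algebraically as x + y - xy so that it stays within {0, 1}.
f = lambda x, y: x + y - x * y
g = develop(f)

for x, y in product((0, 1), repeat=2):
    assert f(x, y) == g(x, y)   # the development agrees with f on every assignment
print("development reproduces f on every 0/1 assignment")
```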
Peirce (1880) showed how all the Boolean elective functions could be expressed by the use of a single primitive binary operation, "neither ... nor ..." and equally well "not both ... and ...", however, like many of Peirce's innovations, this remained unknown or unnoticed until Sheffer rediscovered it in 1913. Boole's early work also lacks the idea of the logical sum which originates in Peirce (1867), Schröder (1877) and Jevons (1890), and the concept of inclusion, first suggested by Gergonne (1816) and clearly articulated by Peirce (1870). The success of Boole's algebraic system suggested that all logic must be capable of algebraic representation, and there were attempts to express a logic of relations in such form, of which the most ambitious was Schröder's monumental "Vorlesungen über die Algebra der Logik" ("Lectures on the Algebra of Logic", vol iii 1895), although the original idea was again anticipated by Peirce. Boole's unwavering acceptance of Aristotle's logic is emphasized by the historian of logic John Corcoran in an accessible introduction to "Laws of Thought." Corcoran also wrote a point-by-point comparison of "Prior Analytics" and "Laws of Thought". According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were "to go under, over, and beyond" Aristotle's logic by 1) providing it with mathematical foundations involving equations, 2) extending the class of problems it could treat—from assessing validity to solving equations—and 3) expanding the range of applications it could handle—e.g. from propositions having only two terms to those having arbitrarily many. More specifically, Boole agreed with what Aristotle said; Boole's 'disagreements', if they might be called that, concern what Aristotle did not say. First, in the realm of foundations, Boole reduced the four propositional forms of Aristotelian logic to formulas in the form of equations—by itself a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic—another revolutionary idea—involved Boole's doctrine that Aristotle's rules of inference (the "perfect syllogisms") must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle". Logicist period. After Boole, the next great advances were made by the German mathematician Gottlob Frege. Frege's objective was the program of Logicism, i.e. demonstrating that arithmetic is identical with logic. Frege went much further than any of his predecessors in his rigorous and formal approach to logic, and his calculus or Begriffsschrift is important. Frege also tried to show that the concept of number can be defined by purely logical means, so that (if he was right) logic includes arithmetic and all branches of mathematics that are reducible to arithmetic. He was not the first writer to suggest this. In his pioneering work (The Foundations of Arithmetic), sections 15–17, he acknowledges the efforts of Leibniz, J. S. Mill as well as Jevons, citing the latter's claim that "algebra is a highly developed logic, and number but logical discrimination." 
Frege's first work, the "Begriffsschrift" ("concept script") is a rigorously axiomatised system of propositional logic, relying on just two connectives (negation and the conditional), two rules of inference ("modus ponens" and substitution), and six axioms. Frege referred to the "completeness" of this system, but was unable to prove it. The most significant innovation, however, was his explanation of the quantifier in terms of mathematical functions. Traditional logic regards the sentence "Caesar is a man" as of fundamentally the same form as "all men are mortal." Sentences with a proper name subject were regarded as universal in character, interpretable as "every Caesar is a man". At the outset Frege abandons the traditional "concepts "subject" and "predicate"", replacing them with "argument" and "function" respectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the words "if, and, not, or, there is, some, all," and so forth, deserves attention". Frege argued that a quantifier expression such as "all men" does not have the same logical or semantic form as a proper name such as "Caesar", and that the universal proposition "every A is B" is a complex proposition involving two "functions", namely ' – is A' and ' – is B' such that whatever satisfies the first, also satisfies the second. In modern notation, this would be expressed as formula_9 In English, "for all x, if Ax then Bx". Thus only singular propositions are of subject-predicate form, and they are irreducibly singular, i.e. not reducible to a general proposition. Universal and particular propositions, by contrast, are not of simple subject-predicate form at all. If "all mammals" were the logical subject of the sentence "all mammals are land-dwellers", then to negate the whole sentence we would have to negate the predicate to give "all mammals are "not" land-dwellers". But this is not the case. This functional analysis of ordinary-language sentences later had a great impact on philosophy and linguistics. This means that in Frege's calculus, Boole's "primary" propositions can be represented in a different way from "secondary" propositions. "All inhabitants are either men or women" is formula_10 whereas "All the inhabitants are men or all the inhabitants are women" is formula_11 As Frege remarked in a critique of Boole's calculus: "The real difference is that I avoid [the Boolean] division into two parts ... and give a homogeneous presentation of the lot. In Boole the two parts run alongside one another, so that one is like the mirror image of the other, but for that very reason stands in no organic relation to it." As well as providing a unified and comprehensive system of logic, Frege's calculus also resolved the ancient problem of multiple generality. The ambiguity of "every girl kissed a boy" is difficult to express in traditional logic, but Frege's logic resolves this through the different scope of the quantifiers. Thus formula_12 means that to every girl there corresponds some boy (any one will do) who the girl kissed. But formula_13 means that there is some particular boy whom every girl kissed. Without this device, the project of logicism would have been doubtful or impossible. Using it, Frege provided a definition of the ancestral relation, of the many-to-one relation, and of mathematical induction. 
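A minimal sketch may help make the scope distinction concrete. The following Python fragment checks the two readings just described over a small invented domain, mirroring formula_12 and formula_13; the girls, boys, and "kissed" pairs are hypothetical data chosen for the example, not anything drawn from Frege.

```python
# Toy domain: two girls and two boys; 'kissed' lists (girl, boy) pairs.
girls = {"g1", "g2"}
boys = {"b1", "b2"}
kissed = {("g1", "b1"), ("g2", "b2")}   # each girl kissed a boy, but no one boy was kissed by all

# Reading 1 (formula_12): for every girl there is some boy she kissed.
reading1 = all(any((g, b) in kissed for b in boys) for g in girls)

# Reading 2 (formula_13): there is one particular boy whom every girl kissed.
reading2 = any(all((g, b) in kissed for g in girls) for b in boys)

print(reading1)  # True
print(reading2)  # False -- the two quantifier orderings come apart on this model
```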
This period overlaps with the work of what is known as the "mathematical school", which included Dedekind, Pasch, Peano, Hilbert, Zermelo, Huntington, Veblen and Heyting. Their objective was the axiomatisation of branches of mathematics like geometry, arithmetic, analysis and set theory. Most notable was Hilbert's Program, which sought to ground all of mathematics on a finite set of axioms, proving its consistency by "finitistic" means and providing a procedure which would decide the truth or falsity of any mathematical statement. The standard axiomatization of the natural numbers is named the Peano axioms after him. Peano maintained a clear distinction between mathematical and logical symbols. Although he was unaware of Frege's work, he independently recreated a similar logical apparatus based on the work of Boole and Schröder. The logicist project received a near-fatal setback with the discovery of a paradox in 1901 by Bertrand Russell. This showed that Frege's naive set theory led to a contradiction. Frege's theory contained the axiom that for any formal criterion, there is a set of all objects that meet the criterion. Russell showed that a set containing exactly the sets that are not members of themselves would contradict its own definition (if it is not a member of itself, it is a member of itself, and if it is a member of itself, it is not). This contradiction is now known as Russell's paradox. One important method of resolving this paradox was proposed by Ernst Zermelo. Zermelo set theory was the first axiomatic set theory. It was developed into the now-canonical Zermelo–Fraenkel set theory (ZF). Russell's paradox symbolically is as follows: formula_14 The monumental Principia Mathematica, a three-volume work on the foundations of mathematics, written by Russell and Alfred North Whitehead and published in 1910–1913, also included an attempt to resolve the paradox, by means of an elaborate system of types: a set of elements is of a different type than is each of its elements (the set is not one of its elements, and an element is not the set) and one cannot speak of the "set of all sets". The "Principia" was an attempt to derive all mathematical truths from a well-defined set of axioms and inference rules in symbolic logic. Metamathematical period. The names of Gödel and Tarski dominate the 1930s, a crucial period in the development of metamathematics—the study of mathematics using mathematical methods to produce metatheories, or mathematical theories about other mathematical theories. Early investigations into metamathematics had been driven by Hilbert's program. Work on metamathematics culminated in the work of Gödel, who in 1929 showed that a given first-order sentence is deducible if and only if it is logically valid—i.e. it is true in every structure for its language. This is known as Gödel's completeness theorem. A year later, he proved two important theorems, which showed Hilbert's program to be unattainable in its original form. The first is that no consistent system of axioms whose theorems can be listed by an effective procedure such as an algorithm or computer program is capable of proving all facts about the natural numbers. For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second is that if such a system is also capable of proving certain basic facts about the natural numbers, then the system cannot prove the consistency of the system itself. 
These two results are known as Gödel's incompleteness theorems, or simply "Gödel's Theorem". Later in the decade, Gödel developed the concept of set-theoretic constructibility, as part of his proof that the axiom of choice and the continuum hypothesis are consistent with Zermelo–Fraenkel set theory. In proof theory, Gerhard Gentzen developed natural deduction and the sequent calculus. The former attempts to model logical reasoning as it 'naturally' occurs in practice and is most easily applied to intuitionistic logic, while the latter was devised to clarify the derivation of logical proofs in any formal system. Since Gentzen's work, natural deduction and sequent calculi have been widely applied in the fields of proof theory, mathematical logic and computer science. Gentzen also proved normalization and cut-elimination theorems for intuitionistic and classical logic which could be used to reduce logical proofs to a normal form. Alfred Tarski, a pupil of Łukasiewicz, is best known for his definition of truth and logical consequence, and the semantic concept of logical satisfaction. In 1933, he published (in Polish) "The concept of truth in formalized languages", in which he proposed his semantic theory of truth: a sentence such as "snow is white" is true if and only if snow is white. Tarski's theory separated the metalanguage, which makes the statement about truth, from the object language, which contains the sentence whose truth is being asserted, and gave a correspondence (the T-schema) between phrases in the object language and elements of an interpretation. Tarski's approach to the difficult idea of explaining truth has been enduringly influential in logic and philosophy, especially in the development of model theory. Tarski also produced important work on the methodology of deductive systems, and on fundamental principles such as completeness, decidability, consistency and definability. According to Anita Feferman, Tarski "changed the face of logic in the twentieth century". Alonzo Church and Alan Turing proposed formal models of computability, giving independent negative solutions to Hilbert's "Entscheidungsproblem" in 1936 and 1937, respectively. The "Entscheidungsproblem" asked for a procedure that, given any formal mathematical statement, would algorithmically determine whether the statement is true. Church and Turing proved there is no such procedure; Turing's paper introduced the halting problem as a key example of a mathematical problem without an algorithmic solution. Church's system for computation developed into the modern λ-calculus, while the Turing machine became a standard model for a general-purpose computing device. It was soon shown that many other proposed models of computation were equivalent in power to those proposed by Church and Turing. These results led to the Church–Turing thesis that any deterministic algorithm that can be carried out by a human can be carried out by a Turing machine. Church proved additional undecidability results, showing that both Peano arithmetic and first-order logic are undecidable. Later work by Emil Post and Stephen Cole Kleene in the 1940s extended the scope of computability theory and introduced the concept of degrees of unsolvability. The results of the first few decades of the twentieth century also had an impact upon analytic philosophy and philosophical logic, particularly from the 1950s onwards, in subjects such as modal logic, temporal logic, deontic logic, and relevance logic. Logic after WWII. 
After World War II, mathematical logic branched into four inter-related but separate areas of research: model theory, proof theory, computability theory, and set theory. In set theory, the method of forcing revolutionized the field by providing a robust method for constructing models and obtaining independence results. Paul Cohen introduced this method in 1963 to prove the independence of the continuum hypothesis and the axiom of choice from Zermelo–Fraenkel set theory. His technique, which was simplified and extended soon after its introduction, has since been applied to many other problems in all areas of mathematical logic. Computability theory had its roots in the work of Turing, Church, Kleene, and Post in the 1930s and 40s. It developed into a study of abstract computability, which became known as recursion theory. The priority method, discovered independently by Albert Muchnik and Richard Friedberg in the 1950s, led to major advances in the understanding of the degrees of unsolvability and related structures. Research into higher-order computability theory demonstrated its connections to set theory. The fields of constructive analysis and computable analysis were developed to study the effective content of classical mathematical theorems; these in turn inspired the program of reverse mathematics. A separate branch of computability theory, computational complexity theory, was also characterized in logical terms as a result of investigations into descriptive complexity. Model theory applies the methods of mathematical logic to study models of particular mathematical theories. Alfred Tarski published much pioneering work in the field, which is named after a series of papers he published under the title "Contributions to the theory of models". In the 1960s, Abraham Robinson used model-theoretic techniques to develop calculus and analysis based on infinitesimals, a problem that first had been proposed by Leibniz. In proof theory, the relationship between classical mathematics and intuitionistic mathematics was clarified via tools such as the realizability method invented by Georg Kreisel and Gödel's "Dialectica" interpretation. This work inspired the contemporary area of proof mining. The Curry–Howard correspondence emerged as a deep analogy between logic and computation, including a correspondence between systems of natural deduction and typed lambda calculi used in computer science. As a result, research into this class of formal systems began to address both logical and computational aspects; this area of research came to be known as modern type theory. Advances were also made in ordinal analysis and the study of independence results in arithmetic such as the Paris–Harrington theorem. This was also a period, particularly in the 1950s and afterwards, when the ideas of mathematical logic begin to influence philosophical thinking. For example, tense logic is a formalised system for representing, and reasoning about, propositions qualified in terms of time. The philosopher Arthur Prior played a significant role in its development in the 1960s. Modal logics extend the scope of formal logic to include the elements of modality (for example, possibility and necessity). The ideas of Saul Kripke, particularly about possible worlds, and the formal system now called Kripke semantics have had a profound impact on analytic philosophy. His best known and most influential work is "Naming and Necessity" (1980). 
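As a rough illustration of the possible-worlds idea, the following sketch evaluates the necessity operator over a tiny hand-made Kripke model. The worlds, accessibility relation, and valuation are invented for the example, and the encoding of formulas as nested tuples is an arbitrary convenience, not part of Kripke's own presentation.

```python
# A tiny Kripke model: worlds, an accessibility relation, and a valuation.
worlds = {"w1", "w2"}
access = {("w1", "w2"), ("w2", "w2")}   # w1 sees w2; w2 sees itself; w1 does not see itself
val = {"p": {"w2"}}                     # the atom p holds only at w2

def holds(world, formula):
    """Evaluate a formula at a world. Formulas: 'p', ('not', f), ('->', f, g), ('box', f)."""
    if isinstance(formula, str):
        return world in val[formula]
    op = formula[0]
    if op == "not":
        return not holds(world, formula[1])
    if op == "->":
        return (not holds(world, formula[1])) or holds(world, formula[2])
    if op == "box":  # necessity: true in every accessible world
        return all(holds(v, formula[1]) for (u, v) in access if u == world)
    raise ValueError(op)

print(holds("w1", ("box", "p")))               # True: p holds at every world w1 can see
print(holds("w1", ("->", ("box", "p"), "p")))  # False: "necessarily p implies p" fails here,
                                               # since w1 is not accessible from itself
```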
Deontic logics are closely related to modal logics: they attempt to capture the logical features of obligation, permission and related concepts. Although some basic novelties syncretizing mathematical and philosophical logic were shown by Bolzano in the early 1800s, it was Ernst Mally, a pupil of Alexius Meinong, who was to propose the first formal deontic system in his "Grundgesetze des Sollens", based on the syntax of Whitehead's and Russell's propositional calculus. Another logical system founded after World War II was fuzzy logic by Azerbaijani mathematician Lotfi Asker Zadeh in 1965. Notes. 
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "O" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "D" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "j" }, { "math_id": 9, "text": "\\forall \\; x \\big( A(x) \\rightarrow B (x) \\big)" }, { "math_id": 10, "text": "\\forall \\; x \\Big( I(x) \\rightarrow \\big( M(x) \\lor W(x) \\big) \\Big) " }, { "math_id": 11, "text": "\\forall \\; x \\big( I(x) \\rightarrow M(x) \\big) \\lor \\forall \\;x \\big( I(x) \\rightarrow W(x) \\big)" }, { "math_id": 12, "text": "\\forall \\; x \\Big( G(x) \\rightarrow \\exists \\; y \\big( B(y) \\land K(x,y) \\big) \\Big)" }, { "math_id": 13, "text": "\\exists \\;x \\Big( B(x) \\land \\forall \\;y \\big( G(y) \\rightarrow K(y, x) \\big) \\Big)" }, { "math_id": 14, "text": "\\text{Let } R = \\{ x \\mid x \\not \\in x \\} \\text{, then } R \\in R \\iff R \\not \\in R" } ]
https://en.wikipedia.org/wiki?curid=59945
59945606
Berlekamp–Van Lint–Seidel graph
In graph theory, the Berlekamp–Van Lint–Seidel graph is a locally linear strongly regular graph with parameters formula_0. This means that it has 243 vertices, 22 edges per vertex (for a total of 2673 edges), exactly one shared neighbor per pair of adjacent vertices, and exactly two shared neighbors per pair of non-adjacent vertices. It was constructed by Elwyn Berlekamp, J. H. van Lint, and Johan Jacob Seidel as the coset graph of the ternary Golay code. This graph is the Cayley graph of an abelian group. Among abelian Cayley graphs that are strongly regular and have the last two parameters differing by one, it is the only graph that is not a Paley graph. It is also an integral graph, meaning that the eigenvalues of its adjacency matrix are integers. Like the formula_1 Sudoku graph it is an integral abelian Cayley graph whose group elements all have order 3, one of a small number of possibilities for the orders in such a graph. There are five possible combinations of parameters for strongly regular graphs that have one shared neighbor per pair of adjacent vertices and exactly two shared neighbors per pair of non-adjacent vertices. Of these, two are known to exist: the Berlekamp–Van Lint–Seidel graph and the 9-vertex Paley graph with parameters formula_2. Conway's 99-graph problem concerns the existence of another of these graphs, the one with parameters formula_3. References. 
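As a quick sanity check on the parameter sets mentioned above, the following sketch verifies the standard counting identity and the integrality of the eigenvalue multiplicities for strongly regular graphs. It is an illustration, not part of the construction by Berlekamp, Van Lint, and Seidel, and the simplified check assumes integer eigenvalues, which is the case for these parameters.

```python
from math import isqrt

def srg_feasible(v, k, lam, mu):
    """Check two standard feasibility conditions for srg(v, k, lambda, mu):
    the counting identity k(k - lambda - 1) = (v - k - 1) mu, and integrality
    of the eigenvalue multiplicities (integer-eigenvalue case only)."""
    if k * (k - lam - 1) != (v - k - 1) * mu:
        return False
    # Eigenvalues other than k are the roots of x^2 - (lam - mu)x - (k - mu) = 0.
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    s = isqrt(disc)
    if s * s != disc:          # simplification: require integer eigenvalues
        return False
    r, t = (lam - mu + s) / 2, (lam - mu - s) / 2
    # Multiplicities from f + g = v - 1 and the trace condition k + f*r + g*t = 0.
    g = ((v - 1) * r + k) / (r - t)
    f = v - 1 - g
    return f == int(f) and g == int(g) and f >= 0 and g >= 0

# The three parameter sets named in the text above all pass.
for params in [(243, 22, 1, 2), (9, 4, 1, 2), (99, 14, 1, 2)]:
    print(params, srg_feasible(*params))
```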
[ { "math_id": 0, "text": "(243,22,1,2)" }, { "math_id": 1, "text": "9\\times 9" }, { "math_id": 2, "text": "(9,4,1,2)" }, { "math_id": 3, "text": "(99,14,1,2)" } ]
https://en.wikipedia.org/wiki?curid=59945606
599563
Voltage-controlled oscillator
Oscillator with frequency controlled by a voltage input A voltage-controlled oscillator (VCO) is an electronic oscillator whose oscillation frequency is controlled by a voltage input. The applied input voltage determines the instantaneous oscillation frequency. Consequently, a VCO can be used for frequency modulation (FM) or phase modulation (PM) by applying a modulating signal to the control input. A VCO is also an integral part of a phase-locked loop. VCOs are used in synthesizers to generate a waveform whose pitch can be adjusted by a voltage determined by a musical keyboard or other input. A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear in frequency control over a wide range of input control voltages. Types. VCOs can be generally categorized into two groups based on the type of waveform produced. Frequency control. A voltage-controlled capacitor is one method of making an LC oscillator vary its frequency in response to a control voltage. Any reverse-biased semiconductor diode displays a measure of voltage-dependent capacitance and can be used to change the frequency of an oscillator by varying a control voltage applied to the diode. Special-purpose variable-capacitance varactor diodes are available with well-characterized wide-ranging values of capacitance. A varactor is used to change the capacitance (and hence the frequency) of an LC tank. A varactor can also change loading on a crystal resonator and pull its resonant frequency. For low-frequency VCOs, other methods of varying the frequency (such as altering the charging rate of a capacitor by means of a voltage-controlled current source) are used (see function generator). The frequency of a ring oscillator is controlled by varying either the supply voltage, the current available to each inverter stage, or the capacitive loading on each stage. Phase-domain equations. VCOs are used in analog applications such as frequency modulation and frequency-shift keying. The functional relationship between the control voltage and the output frequency for a VCO (especially those used at radio frequency) may not be linear, but over small ranges, the relationship is approximately linear, and linear control theory can be used. A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear over a wide range of input voltages. Modeling for VCOs is often not concerned with the amplitude or shape (sinewave, triangle wave, sawtooth) but rather its instantaneous phase. In effect, the focus is not on the time-domain signal "A" sin("ωt"+"θ"0) but rather the argument of the sine function (the phase). Consequently, modeling is often done in the phase domain. The instantaneous frequency of a VCO is often modeled as a linear relationship with its instantaneous control voltage. The output phase of the oscillator is the integral of the instantaneous frequency. formula_0 * formula_1 is the instantaneous frequency of the oscillator at time t (not the waveform amplitude) * formula_2 is the quiescent frequency of the oscillator (not the waveform amplitude) * formula_3 is called the oscillator sensitivity, or gain. Its units are hertz per volt. * formula_4 is the VCO's frequency * formula_5 is the VCO's output phase * formula_6 is the time-domain control input or tuning voltage of the VCO For analyzing a control system, the Laplace transforms of the above signals are useful. formula_7 Design and circuits. Tuning range, tuning gain and phase noise are the important characteristics of a VCO. 
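Before going further into design considerations, the phase-domain model above can be illustrated with a short numerical sketch. The sample rate, quiescent frequency, gain, and modulating signal below are arbitrary illustrative values, and the factor of 2π simply converts frequency in hertz into phase in radians for the sine function.

```python
import numpy as np

# Numerical sketch of the phase-domain model:
#   f(t) = f0 + K0 * v_in(t),   theta(t) = integral of f (scaled to radians)
fs = 100_000                      # simulation sample rate, Hz (arbitrary)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of time
f0 = 10_000                       # quiescent frequency, Hz (arbitrary)
K0 = 2_000                        # oscillator gain, Hz per volt (arbitrary)
v_in = 0.5 * np.sin(2 * np.pi * 500 * t)   # 500 Hz modulating (control) voltage

f_inst = f0 + K0 * v_in                     # instantaneous frequency
theta = 2 * np.pi * np.cumsum(f_inst) / fs  # phase = integral of frequency, in radians
vco_out = np.sin(theta)                     # frequency-modulated output waveform

print(f_inst.min(), f_inst.max())  # output frequency swings roughly between 9 kHz and 11 kHz
```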
Generally, low phase noise is preferred in a VCO. Tuning gain and noise present in the control signal affect the phase noise; high noise or high tuning gain imply more phase noise. Other important elements that determine the phase noise are sources of flicker noise (1/"f" noise) in the circuit, the output power level, and the loaded Q factor of the resonator (see Leeson's equation). The low-frequency flicker noise affects the phase noise because the flicker noise is heterodyned to the oscillator output frequency due to the non-linear transfer function of active devices. The effect of flicker noise can be reduced with negative feedback that linearizes the transfer function (for example, emitter degeneration). VCOs generally have lower Q factor compared to similar fixed-frequency oscillators, and so suffer more jitter. The jitter can be made low enough for many applications (such as driving an ASIC), in which case VCOs enjoy the advantages of having no off-chip components (expensive) or on-chip inductors (low yields on generic CMOS processes). LC oscillators. Commonly used VCO circuits are the Clapp and Colpitts oscillators. Of the two, the Colpitts oscillator is the more widely used, and the two circuits are very similar in configuration. Crystal oscillators. A voltage-controlled crystal oscillator (VCXO) is used for fine adjustment of the operating frequency. The frequency of a voltage-controlled crystal oscillator can be varied a few tens of parts per million (ppm) over a control voltage range of typically 0 to 3 volts, because the high Q factor of the crystals allows frequency control over only a small range of frequencies. A temperature-compensated VCXO (TCVCXO) incorporates components that partially correct the dependence on temperature of the resonant frequency of the crystal. A smaller range of voltage control then suffices to stabilize the oscillator frequency in applications where temperature varies, such as heat buildup inside a transmitter. Placing the oscillator in a crystal oven at a constant but higher-than-ambient temperature is another way to stabilize oscillator frequency. High stability crystal oscillator references often place the crystal in an oven and use a voltage input for fine control. The temperature is selected to be the "turnover temperature": the temperature where small changes do not affect the resonance. The control voltage can be used to occasionally adjust the reference frequency to a NIST source. Sophisticated designs may also adjust the control voltage over time to compensate for crystal aging. Clock generators. A clock generator is an oscillator that provides a timing signal to synchronize operations in digital circuits. VCXO clock generators are used in many areas such as digital TV, modems, transmitters and computers. Design parameters for a VCXO clock generator are tuning voltage range, center frequency, frequency tuning range and the timing jitter of the output signal. Jitter is a form of phase noise that must be minimised in applications such as radio receivers, transmitters and measuring equipment. When a wider selection of clock frequencies is needed, the VCXO output can be passed through digital divider circuits to obtain lower frequencies or be fed to a phase-locked loop (PLL). ICs containing both a VCXO (for external crystal) and a PLL are available. 
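To put the pull range quoted above into absolute terms, here is a small illustrative calculation; the 12.288 MHz nominal frequency and the ±50 ppm figure are assumed values chosen for the example, not taken from any particular datasheet.

```python
# Illustrative only: nominal crystal frequency and pull range are assumed values.
f_nominal = 12.288e6   # Hz, a common audio-related crystal frequency (assumed)
pull_ppm = 50          # assume a +/-50 ppm electrical pull range

delta_f = f_nominal * pull_ppm * 1e-6
print(f"+/- {delta_f:.0f} Hz about {f_nominal / 1e6} MHz")   # +/- 614 Hz
```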
A typical application is to provide clock frequencies in a range from 12 kHz to 96 kHz to an audio digital-to-analog converter. Frequency synthesizers. A frequency synthesizer generates precise and adjustable frequencies based on a stable single-frequency clock. A digitally controlled oscillator based on a frequency synthesizer may serve as a digital alternative to analog voltage controlled oscillator circuits. Applications. VCOs are used in function generators, phase-locked loops including frequency synthesizers used in communication equipment and the production of electronic music, to generate variable tones in synthesizers. Function generators are low-frequency oscillators which feature multiple waveforms, typically sine, square, and triangle waves. Monolithic function generators are voltage-controlled. Analog phase-locked loops typically contain VCOs. High-frequency VCOs are usually used in phase-locked loops for radio receivers. Phase noise is the most important specification in this application. Audio-frequency VCOs are used in analog music synthesizers. For these, sweep range, linearity, and distortion are often the most important specifications. Audio-frequency VCOs for use in musical contexts were largely superseded in the 1980s by their digital counterparts, digitally controlled oscillators (DCOs), due to their output stability in the face of temperature changes during operation. Since the 1990s, musical software has become the dominant sound-generating method. Voltage-to-frequency converters are voltage-controlled oscillators with a highly linear relation between applied voltage and frequency. They are used to convert a slow analog signal (such as from a temperature transducer) to a signal suitable for transmission over a long distance, since the frequency will not drift or be affected by noise. Oscillators in this application may have sine or square wave outputs. Where the oscillator drives equipment that may generate radio-frequency interference, adding a varying voltage to its control input, called dithering, can disperse the interference spectrum to make it less objectionable (see spread spectrum clock). References. 
[ { "math_id": 0, "text": "\\begin{align}\n f(t) &= f_0 + K_0 \\cdot \\ v_\\text{in}(t) \\\\\n \\theta(t) &= \\int_{-\\infty}^t f(\\tau)\\,d\\tau \\\\\n\\end{align}" }, { "math_id": 1, "text": "f(t) " }, { "math_id": 2, "text": "f_0 " }, { "math_id": 3, "text": "K_0 " }, { "math_id": 4, "text": "f(\\tau) " }, { "math_id": 5, "text": "\\theta(t) " }, { "math_id": 6, "text": "v_\\text{in}(t) " }, { "math_id": 7, "text": "\\begin{align}\n F(s) &= K_0 \\cdot \\ V_\\text{in}(s) \\\\\n \\Theta(s) &= {F(s) \\over s} \\\\\n\\end{align} " } ]
https://en.wikipedia.org/wiki?curid=599563
59958
Power series
In mathematics, a power series (in one variable) is an infinite series of the form formula_0 where "an" represents the coefficient of the "n"th term and "c" is a constant. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. In many situations, "c" (the "center" of the series) is equal to zero, for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form formula_1 Beyond their role in mathematical analysis, power series also occur in combinatorics as generating functions (a kind of formal power series) and in electronic engineering (under the name of the Z-transform). The familiar decimal notation for real numbers can also be viewed as an example of a power series, with integer coefficients, but with the argument "x" fixed at 1⁄10. In number theory, the concept of "p"-adic numbers is also closely related to that of a power series. Examples. Polynomial. A polynomial of degree d can be expressed as a power series around any center "c", where all terms of degree higher than d have a zero coefficient. For instance, the polynomial formula_2 can be written as a power series around the center formula_3 as formula_4 or around the center formula_5 as formula_6 This is because the Taylor series expansion of f(x) around formula_7 is formula_8 since formula_9, and the non-zero derivatives are formula_10, so that formula_11, and formula_12, a constant, with all higher derivatives equal to zero. Or indeed the expansion is possible around any other center "c". One can view power series as being like "polynomials of infinite degree," although power series are not polynomials. Geometric series, exponential function and sine. The geometric series formula formula_13 which is valid for formula_14, is one of the most important examples of a power series, as are the exponential function formula formula_15 and the sine formula formula_16 valid for all real "x". These power series are also examples of Taylor series. On the set of exponents. Negative powers are not permitted in a power series; for instance, formula_17 is not considered a power series (although it is a Laurent series). Similarly, fractional powers such as formula_18 are not permitted (but see Puiseux series). The coefficients formula_19 are not allowed to depend on formula_20, thus for instance: formula_21 is not a power series. Radius of convergence. A power series formula_22 is convergent for some values of the variable "x", which will always include "x" = "c" (as usual, formula_23 evaluates as 1 and the sum of the series is thus formula_24 for "x" = "c"). The series may diverge for other values of x. If "c" is not the only point of convergence, then there is always a number "r" with 0 &lt; "r" ≤ ∞ such that the series converges whenever |"x" – "c"| &lt; "r" and diverges whenever |"x" – "c"| &gt; "r". The number "r" is called the radius of convergence of the power series; in general it is given as formula_25 or, equivalently, formula_26 (this is the Cauchy–Hadamard theorem; see limit superior and limit inferior for an explanation of the notation). The relation formula_27 is also satisfied, if this limit exists. The set of the complex numbers "x" such that |"x" – "c"| &lt; "r" is called the disc of convergence of the series. The series converges absolutely inside its disc of convergence, and converges uniformly on every compact subset of the disc of convergence. 
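As a small illustration, the ratio form of the radius of convergence can be evaluated symbolically; the sketch below uses SymPy as a convenience (an assumption of the example, not anything required by the theory) and applies it to the exponential series from the examples above, with the geometric series noted in a comment.

```python
from sympy import symbols, limit, oo, factorial, combsimp

n = symbols('n', positive=True, integer=True)

# Ratio form of the radius of convergence: r = lim |a_n / a_{n+1}|,
# which agrees with the Cauchy-Hadamard formula whenever the limit exists.

# Exponential series: a_n = 1/n!, so a_n / a_{n+1} = (n+1)!/n! = n + 1 -> infinity.
ratio = combsimp(factorial(n + 1) / factorial(n))
print(ratio)                 # n + 1
print(limit(ratio, n, oo))   # oo: the exponential series converges for every x

# Geometric series: a_n = 1 for every n, so the ratio is identically 1 and r = 1.
```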
For |"x" – "c"| = "r", there is no general statement on the convergence of the series. However, Abel's theorem states that if the series is convergent for some value z such that |"z" – "c"| = "r", then the sum of the series for "x" = "z" is the limit of the sum of the series for "x" = "c" + "t" ("z" – "c") where t is a real variable less than that tends to . Operations on power series. Addition and subtraction. When two functions "f" and "g" are decomposed into power series around the same center "c", the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if formula_28 and formula_29 then formula_30 It is not true that if two power series formula_31 and formula_32 have the same radius of convergence, then formula_33 also has this radius of convergence. If formula_34 and formula_35, then both series have the same radius of convergence of 1, but the series formula_36 has a radius of convergence of 3. The sum of two power series will have, at minimum, a radius of convergence of the smaller of the two radii of convergence of the two series (and it may be higher than either, as seen in the example above). Multiplication and division. With the same definitions for formula_37 and formula_38, the power series of the product and quotient of the functions can be obtained as follows: formula_39 The sequence formula_40 is known as the convolution of the sequences formula_41 and formula_42. For division, if one defines the sequence formula_43 by formula_44 then formula_45 and one can solve recursively for the terms formula_43 by comparing coefficients. Solving the corresponding equations yields the formulae based on determinants of certain matrices of the coefficients of formula_37 and formula_38 formula_46 formula_47 Differentiation and integration. Once a function formula_37 is given as a power series as above, it is differentiable on the interior of the domain of convergence. It can be differentiated and integrated quite easily, by treating every term separately: formula_48 Both of these series have the same radius of convergence as the original one. Analytic functions. A function "f" defined on some open subset "U" of R or C is called analytic if it is locally given by a convergent power series. This means that every "a" ∈ "U" has an open neighborhood "V" ⊆ "U", such that there exists a power series with center "a" that converges to "f"("x") for every "x" ∈ "V". Every power series with a positive radius of convergence is analytic on the interior of its region of convergence. All holomorphic functions are complex-analytic. Sums and products of analytic functions are analytic, as are quotients as long as the denominator is non-zero. If a function is analytic, then it is infinitely differentiable, but in the real case the converse is not generally true. For an analytic function, the coefficients "a""n" can be computed as formula_49 where formula_50 denotes the "n"th derivative of "f" at "c", and formula_51. This means that every analytic function is locally represented by its Taylor series. The global form of an analytic function is completely determined by its local behavior in the following sense: if "f" and "g" are two analytic functions defined on the same connected open set "U", and if there exists an element "c" ∈ "U" such that "f"("n")("c") = "g"("n")("c") for all "n" ≥ 0, then "f"("x") = "g"("x") for all "x" ∈ "U". 
If a power series with radius of convergence "r" is given, one can consider analytic continuations of the series, i.e. analytic functions "f" which are defined on larger sets than { "x" : |"x" − "c"| &lt; "r" } and agree with the given power series on this set. The number "r" is maximal in the following sense: there always exists a complex number x with |"x" − "c"| = "r" such that no analytic continuation of the series can be defined at x. The power series expansion of the inverse function of an analytic function can be determined using the Lagrange inversion theorem. Behavior near the boundary. The sum of a power series with a positive radius of convergence is an analytic function at every point in the interior of the disc of convergence. However, different behavior can occur at points on the boundary of that disc. For example: the series formula_52 has radius of convergence formula_53 and diverges at every point of formula_54, even though its sum formula_56 on formula_55 extends analytically to every boundary point other than formula_57; the series formula_58 has radius of convergence formula_53 and diverges at formula_57, but converges at every other point of the boundary (for instance at formula_59); and the series formula_60 has radius of convergence formula_53 and converges absolutely at every point of the boundary, since its terms there are bounded by those of the convergent series formula_61. Formal power series. In abstract algebra, one attempts to capture the essence of power series without being restricted to the fields of real and complex numbers, and without the need to talk about convergence. This leads to the concept of formal power series, a concept of great utility in algebraic combinatorics. Power series in several variables. An extension of the theory is necessary for the purposes of multivariable calculus. A power series is here defined to be an infinite series of the form formula_62 where "j" = ("j"1, …, "j""n") is a vector of natural numbers, the coefficients "a"("j"1, …, "j""n") are usually real or complex numbers, and the center "c" = ("c"1, …, "c""n") and argument "x" = ("x"1, …, "x""n") are usually real or complex vectors. The symbol formula_63 is the product symbol, denoting multiplication. In the more convenient multi-index notation this can be written formula_64 where formula_65 is the set of natural numbers, and so formula_66 is the set of ordered "n"-tuples of natural numbers. The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series formula_67 is absolutely convergent in the set formula_68 between two hyperbolas. (This is an example of a "log-convex set", in the sense that the set of points formula_69, where formula_70 lies in the above region, is a convex set. More generally, one can show that when c=0, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series. Order of a power series. Let α be a multi-index for a power series "f"("x"1, "x"2, …, "x""n"). The order of the power series "f" is defined to be the least value formula_71 such that there is "a""α" ≠ 0 with formula_72, or formula_73 if "f" ≡ 0. In particular, for a power series "f"("x") in a single variable "x", the order of "f" is the smallest power of "x" with a nonzero coefficient. This definition readily extends to Laurent series. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sum_{n=0}^\\infty a_n \\left(x - c\\right)^n = a_0 + a_1 (x - c) + a_2 (x - c)^2 + \\dots" }, { "math_id": 1, "text": "\\sum_{n=0}^\\infty a_n x^n = a_0 + a_1 x + a_2 x^2 + \\dots." }, { "math_id": 2, "text": "f(x) = x^2 + 2x + 3" }, { "math_id": 3, "text": "c = 0" }, { "math_id": 4, "text": "f(x) = 3 + 2 x + 1 x^2 + 0 x^3 + 0 x^4 + \\cdots" }, { "math_id": 5, "text": "c = 1" }, { "math_id": 6, "text": "f(x) = 6 + 4(x - 1) + 1(x - 1)^2 + 0(x - 1)^3 + 0(x - 1)^4 + \\cdots " }, { "math_id": 7, "text": "x = 1" }, { "math_id": 8, "text": "f(x) = f(1)+\\frac {f'(1)}{1!} (x-1)+ \\frac{f''(1)}{2!} (x-1)^2+\\frac{f'''(1)}{3!}(x-1)^3+ \\cdots, " }, { "math_id": 9, "text": "f(x=1) = 1 + 2 +3 = 6 " }, { "math_id": 10, "text": "f'(x) = 2x + 2" }, { "math_id": 11, "text": "f'(1) = 4" }, { "math_id": 12, "text": "f''(x) = 2" }, { "math_id": 13, "text": "\\frac{1}{1 - x} = \\sum_{n=0}^\\infty x^n = 1 + x + x^2 + x^3 + \\cdots," }, { "math_id": 14, "text": "|x| < 1" }, { "math_id": 15, "text": "e^x = \\sum_{n=0}^\\infty \\frac{x^n}{n!} = 1 + x + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\cdots," }, { "math_id": 16, "text": "\\sin(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+1}}{(2n + 1)!} = x - \\frac{x^3}{3!} + \\frac{x^5}{5!} - \\frac{x^7}{7!} + \\cdots," }, { "math_id": 17, "text": "1 + x^{-1} + x^{-2} + \\cdots" }, { "math_id": 18, "text": "x^\\frac{1}{2}" }, { "math_id": 19, "text": " a_n" }, { "math_id": 20, "text": "x" }, { "math_id": 21, "text": "\\sin(x) x + \\sin(2x) x^2 + \\sin(3x) x^3 + \\cdots " }, { "math_id": 22, "text": " \\sum_{n=0}^\\infty a_n(x-c)^n" }, { "math_id": 23, "text": "(x-c)^0" }, { "math_id": 24, "text": "a_0" }, { "math_id": 25, "text": "r = \\liminf_{n\\to\\infty} \\left|a_n\\right|^{-\\frac{1}{n}}" }, { "math_id": 26, "text": "r^{-1} = \\limsup_{n\\to\\infty} \\left|a_n\\right|^\\frac{1}{n}" }, { "math_id": 27, "text": "r^{-1} = \\lim_{n\\to\\infty}\\left|{a_{n+1}\\over a_n}\\right|" }, { "math_id": 28, "text": "f(x) = \\sum_{n=0}^\\infty a_n (x - c)^n" }, { "math_id": 29, "text": "g(x) = \\sum_{n=0}^\\infty b_n (x - c)^n" }, { "math_id": 30, "text": "f(x) \\pm g(x) = \\sum_{n=0}^\\infty (a_n \\pm b_n) (x - c)^n." 
}, { "math_id": 31, "text": "\\sum_{n=0}^\\infty a_n x^n" }, { "math_id": 32, "text": "\\sum_{n=0}^\\infty b_n x^n" }, { "math_id": 33, "text": "\\sum_{n=0}^\\infty \\left(a_n + b_n\\right) x^n" }, { "math_id": 34, "text": "a_n = (-1)^n" }, { "math_id": 35, "text": "b_n = (-1)^{n+1} \\left(1 - \\frac{1}{3^n}\\right)" }, { "math_id": 36, "text": "\\sum_{n=0}^\\infty \\left(a_n + b_n\\right) x^n = \\sum_{n=0}^\\infty \\frac{(-1)^n}{3^n} x^n" }, { "math_id": 37, "text": "f(x)" }, { "math_id": 38, "text": "g(x)" }, { "math_id": 39, "text": "\\begin{align}\n f(x)g(x) &= \\left(\\sum_{n=0}^\\infty a_n (x-c)^n\\right)\\left(\\sum_{n=0}^\\infty b_n (x - c)^n\\right) \\\\\n &= \\sum_{i=0}^\\infty \\sum_{j=0}^\\infty a_i b_j (x - c)^{i+j} \\\\\n &= \\sum_{n=0}^\\infty \\left(\\sum_{i=0}^n a_i b_{n-i}\\right) (x - c)^n.\n\\end{align}" }, { "math_id": 40, "text": "m_n = \\sum_{i=0}^n a_i b_{n-i}" }, { "math_id": 41, "text": "a_n" }, { "math_id": 42, "text": "b_n" }, { "math_id": 43, "text": "d_n" }, { "math_id": 44, "text": "\\frac{f(x)}{g(x)} = \\frac{\\sum_{n=0}^\\infty a_n (x - c)^n}{\\sum_{n=0}^\\infty b_n (x - c)^n} = \\sum_{n=0}^\\infty d_n (x - c)^n" }, { "math_id": 45, "text": "f(x) = \\left(\\sum_{n=0}^\\infty b_n (x - c)^n\\right)\\left(\\sum_{n=0}^\\infty d_n (x - c)^n\\right)" }, { "math_id": 46, "text": "d_0=\\frac{a_0}{b_0}" }, { "math_id": 47, "text": "d_n=\\frac{1}{b_0^{n+1}} \\begin{vmatrix}\na_n &b_1 &b_2 &\\cdots&b_n \\\\\na_{n-1}&b_0 &b_1 &\\cdots&b_{n-1}\\\\\na_{n-2}&0 &b_0 &\\cdots&b_{n-2}\\\\\n\\vdots &\\vdots&\\vdots&\\ddots&\\vdots \\\\\na_0 &0 &0 &\\cdots&b_0\\end{vmatrix}" }, { "math_id": 48, "text": "\\begin{align}\n f'(x) &= \\sum_{n=1}^\\infty a_n n (x - c)^{n-1} = \\sum_{n=0}^\\infty a_{n+1} (n + 1) (x - c)^n, \\\\\n \\int f(x)\\,dx &= \\sum_{n=0}^\\infty \\frac{a_n (x - c)^{n+1}}{n + 1} + k = \\sum_{n=1}^\\infty \\frac{a_{n-1} (x - c)^n}{n} + k.\n\\end{align}" }, { "math_id": 49, "text": "a_n = \\frac{f^{\\left( n \\right)} \\left( c \\right)}{n!}" }, { "math_id": 50, "text": "f^{(n)}(c)" }, { "math_id": 51, "text": "f^{(0)}(c) = f(c)" }, { "math_id": 52, "text": "\\sum_{n=0}^{\\infty}z^n" }, { "math_id": 53, "text": "1" }, { "math_id": 54, "text": "|z|=1" }, { "math_id": 55, "text": "|z|<1" }, { "math_id": 56, "text": "\\frac{1}{1-z}" }, { "math_id": 57, "text": "z=1" }, { "math_id": 58, "text": "\\sum_{n=1}^{\\infty}\\frac{z^n}{n}" }, { "math_id": 59, "text": "z=-1" }, { "math_id": 60, "text": "\\sum_{n=1}^{\\infty}\\frac{z^n}{n^2}" }, { "math_id": 61, "text": "\\sum_{n=1}^{\\infty}\\frac{1}{n^2}" }, { "math_id": 62, "text": "f(x_1, \\dots, x_n) = \\sum_{j_1, \\dots, j_n = 0}^\\infty a_{j_1, \\dots, j_n} \\prod_{k=1}^n (x_k - c_k)^{j_k}," }, { "math_id": 63, "text": "\\Pi" }, { "math_id": 64, "text": "f(x) = \\sum_{\\alpha \\in \\N^n} a_\\alpha (x - c)^\\alpha." }, { "math_id": 65, "text": "\\N" }, { "math_id": 66, "text": "\\N^n" }, { "math_id": 67, "text": "\\sum_{n=0}^\\infty x_1^n x_2^n" }, { "math_id": 68, "text": "\\{ (x_1, x_2): |x_1 x_2| < 1\\}" }, { "math_id": 69, "text": "(\\log |x_1|, \\log |x_2|)" }, { "math_id": 70, "text": "(x_1, x_2)" }, { "math_id": 71, "text": "r" }, { "math_id": 72, "text": "r = |\\alpha| = \\alpha_1 + \\alpha_2 + \\cdots + \\alpha_n" }, { "math_id": 73, "text": "\\infty" } ]
https://en.wikipedia.org/wiki?curid=59958
59958932
Danzer set
Set of points touching all convex bodies of unit volume &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Does a Danzer set with bounded density or bounded separation exist? In geometry, a Danzer set is a set of points that touches every convex body of unit volume. Ludwig Danzer asked whether it is possible for such a set to have bounded density. Several variations of this problem remain unsolved. Formulation. A "Danzer set", in an n-dimensional Euclidean space, is a set of points in the space that has a non-empty intersection with every convex body whose n-dimensional volume is one. The whole space is itself a Danzer set, but it is possible for a Danzer set to be a discrete set with only finitely many points in any bounded area. Danzer's question asked whether, more strongly, the average number of points per unit area could be bounded. One way to define the problem more formally is to consider the growth rate of a set formula_0 in formula_1-dimensional Euclidean space, defined as the function that maps a real number formula_2 to the number of points of formula_0 that are within distance formula_2 of the origin. Danzer's question is whether it is possible for a Danzer set to have growth rate formula_3, expressed in big O notation. If so, this would equal the growth rate of well-spaced point sets like the integer lattice (which is not a Danzer set). An equivalent formulation involves the density of a set formula_0, defined as formula_4 where formula_5 denotes the Euclidean ball of radius formula_2 in formula_1-dimensional Euclidean space, centered at the origin, and formula_6 denotes its volume. Danzer's question asks whether there exists a Danzer set of bounded density or, alternatively, whether every set of bounded density has arbitrarily high-volume convex sets disjoint from it. Instead of asking for a set of bounded density that intersects arbitrary convex sets of unit volume, it is equivalent to ask for a set of bounded density that intersects all ellipsoids of unit volume, or all hyperrectangles of unit volume. For instance, in the plane, the shapes of these intersecting sets can be restricted to ellipses, or to rectangles. However, these shapes do not necessarily have their sides or axes parallel to the coordinate axes. Partial results. It is possible to construct a Danzer set of growth rate that is within a polylogarithmic factor of formula_3. For instance, overlaying rectangular grids whose cells have constant volume but differing aspect ratios can achieve a growth rate of formula_7. A construction for Danzer sets is known with a somewhat slower growth rate, formula_8. This construction is based on deep results of Marina Ratner in ergodic theory (Ratner's theorems). Because both the overlaid grids and the improved construction have growth rates faster than formula_3, these sets do not have bounded density, and the answer to Danzer's question remains unknown. Although the existence of a Danzer set of bounded density remains open, it is possible to restrict the classes of point sets that may be Danzer sets in other ways than by their densities, ruling out certain types of solution to Danzer's question. In particular, a Danzer set cannot be the union of finitely many lattices, it cannot be generated by choosing a point in each tile of a substitution tiling (in the same position for each tile of the same type), and it cannot be generated by the cut-and-project method for constructing aperiodic tilings. 
Therefore, the vertices of the pinwheel tiling and Penrose tiling are not Danzer sets. Variations. Bounded coverage. A strengthened variation of the problem, posed by Timothy Gowers, asks whether there exists a Danzer set formula_0 for which there is a finite bound formula_9 on the number of points of intersection between formula_0 and any convex body of unit volume. This version has been solved: it is impossible for a Danzer set with this property to exist. Separation. Another strengthened variation of the problem, still unsolved, is Conway's dead fly problem. John Horton Conway recalled that, as a child, he slept in a room with wallpaper whose flower pattern resembled an array of dead flies, and that he would try to find convex regions that did not have a dead fly in them. In Conway's formulation, the question is whether there exists a Danzer set in which the points of the set (the dead flies) are separated at a bounded distance from each other. Such a set would necessarily also have an upper bound on the distance from each point of the plane to a dead fly (in order to touch all circles of unit area), so it would form a Delone set, a set with both lower and upper bounds on the spacing of the points. It would also necessarily have growth rate formula_3, so if it exists then it would also solve the original version of Danzer's problem. Conway offered a $1000 prize for a solution to his problem, as part of a set of problems also including Conway's 99-graph problem, the analysis of sylver coinage, and the thrackle conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "O(r^d)" }, { "math_id": 4, "text": "\\limsup_{r\\to\\infty} \\frac{|S\\cap B_d(r)|}{V_d(r)}," }, { "math_id": 5, "text": "B_d(r)" }, { "math_id": 6, "text": "V_d(r)" }, { "math_id": 7, "text": "O(n^d\\log^{d-1}n)" }, { "math_id": 8, "text": "O(r^d \\log r)" }, { "math_id": 9, "text": "C" } ]
https://en.wikipedia.org/wiki?curid=59958932
599590
PDF417
Type of barcode PDF417 is a stacked linear barcode format used in a variety of applications such as transport, identification cards, and inventory management. "PDF" stands for Portable Data File. The "417" signifies that each pattern in the code consists of 4 bars and 4 spaces in a pattern that is 17 units (modules) long. The PDF417 symbology was invented by Dr. Ynjiun P. Wang at Symbol Technologies in 1991. It is defined in ISO 15438. Applications. PDF417 is used in many applications by both commercial and government organizations. PDF417 is one of the formats (along with Data Matrix) that can be used to print postage accepted by the United States Postal Service. PDF417 is also used by the airline industry's Bar Coded Boarding Pass (BCBP) standard as the 2D bar code symbology for paper boarding passes. PDF417 is the standard selected by the Department of Homeland Security as the machine readable zone technology for RealID compliant driver licenses and state issued identification cards. PDF417 barcodes are also included on visas and border crossing cards issued by the State of Israel. Features. In addition to features typical of two dimensional bar codes, PDF417's capabilities include the linking of multiple symbols, so that data too large for one symbol can be spread over several, and user-selectable levels of error correction. The introduction of the ISO/IEC document states: Manufacturers of bar code equipment and users of bar code technology require publicly available standard symbology specifications to which they can refer when developing equipment and application standards. It is the intent and understanding of ISO/IEC that the symbology presented in this International Standard is entirely in the public domain and free of all user restrictions, licences and fees. Format. The PDF417 bar code (also called a symbol) consists of 3 to 90 rows, each of which is like a small linear bar code. Each row has: a quiet zone, a start pattern which identifies the format as PDF417, a "row left" codeword containing information about the row (such as its row number and the error correction level), 1 to 30 data codewords, a "row right" codeword with further information about the row, a stop pattern, and a final quiet zone. All rows are the same width; each row has the same number of codewords. Codewords. PDF417 uses a base 929 encoding. Each codeword represents a number from 0 to 928. The codewords are represented by patterns of dark (bar) and light (space) regions. Each of these patterns contains four bars and four spaces (this is where the 4 in the name comes from). The total width is 17 times the width of the narrowest allowed vertical bar (the X dimension); this is where the 17 in the name comes from. Each pattern starts with a bar and ends with a space. The row height must be at least 3 times the minimum width: Y ≥ 3 X. There are three distinct bar–space patterns used to represent each codeword. These patterns are organized into three groups known as clusters. The clusters are labeled 0, 3, and 6. No bar–space pattern is used in more than one cluster. The rows of the symbol cycle through the three clusters, so row 1 uses patterns from cluster 0, row 2 uses cluster 3, row 3 uses cluster 6, and row 4 again uses cluster 0. The cluster can be determined by an equation: formula_0 where "K" is the cluster number and the "bi" refer to the width of the "i"-th black bar in the symbol character (in "X" units). Alternatively, formula_1 where "Ei" is the "i"-th edge-to-next-same-edge distance. Odd indices are the leading edge of a bar to the leading edge of the next bar; even indices are for the trailing edges. One purpose of the three clusters is to determine which row (mod 3) the codeword is in. The clusters allow portions of the symbol to be read using a single scan line that may be skewed from the horizontal. For instance, the scan might start on row 6 at the start of the row but end on row 10.
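The cluster equation given above is simple modular arithmetic on the four bar widths, and the row-to-cluster cycle is equally direct. The sketch below (Python, chosen only for illustration) shows both calculations; the example width tuples are made up for this demonstration and are not actual codeword patterns from the ISO 15438 tables.

```python
def cluster_from_bars(b1, b2, b3, b4):
    """K = (b1 - b2 + b3 - b4) mod 9, using the bar widths measured in X units."""
    return (b1 - b2 + b3 - b4) % 9

def cluster_for_row(row_number):
    """Rows cycle through clusters 0, 3, 6 (row 1 -> 0, row 2 -> 3, row 3 -> 6, row 4 -> 0, ...)."""
    return ((row_number - 1) % 3) * 3

# Hypothetical bar-width tuples (a real codeword has 4 bars and 4 spaces totalling 17 modules):
print(cluster_from_bars(2, 2, 2, 2))   # 0  -> would belong to cluster 0
print(cluster_from_bars(3, 1, 4, 1))   # 5  -> not 0, 3 or 6, so not usable as a PDF417 pattern
print([cluster_for_row(r) for r in range(1, 7)])   # [0, 3, 6, 0, 3, 6]
```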
At the beginning of the scan, the scanner sees the constant start pattern, and then it sees symbols in cluster 6. When the skewed scan straddles rows 6 and 7, then the scanner sees noise. When the scan is on row 7, the scanner sees symbols in cluster 0. Consequently, the scanner knows the direction of the skew. By the time the scanner reaches the right, it is on row 10, so it sees cluster 0 patterns. The scanner will also see a constant stop pattern. Encoding. Of the 929 available code words, 900 are used for data, and 29 for special functions, such as shifting between major modes. The three major modes encode different types of data in different ways, and can be mixed as necessary within a single bar code: Error correction. When the PDF417 symbol is created, from 2 to 512 error detection and correction codewords are added. PDF417 uses Reed–Solomon error correction. When the symbol is scanned, the maximum number of corrections that can be made is equal to the number of codewords added, but the standard recommends that two codewords be held back to ensure reliability of the corrected information. Comparison with other symbologies. PDF417 is a stacked barcode that can be read with a simple linear scan being swept over the symbol. Those linear scans need the left and right columns with the start and stop code words. Additionally, the scan needs to know what row it is scanning, so each row of the symbol must also encode its row number. Furthermore, the reader's line scan won't scan just a row; it will typically start scanning one row, but then cross over to a neighbor and possibly continuing on to cross successive rows. In order to minimize the effect of these crossings, the PDF417 modules are tall and narrow — the height is typically three times the width. Also, each code word must indicate which row it belongs to so crossovers, when they occur, can be detected. The code words are also designed to be delta-decodable, so some code words are redundant. Each PDF data code word represents about 10 bits of information (log2(900) ≈ 9.8), but the printed code word (character) is 17 modules wide. Including a height of 3 modules, a PDF417 code word takes 51 square modules to represent 10 bits. That area does not count other overhead such as the start, stop, row, format, and ECC information. Other 2D codes, such as DataMatrix and QR, are decoded with image sensors instead of uncoordinated linear scans. Those codes still need recognition and alignment patterns, but they do not need to be as prominent. An 8 bit code word will take 8 square modules (ignoring recognition, alignment, format, and ECC information). In practice, a PDF417 symbol takes about four times the area of a DataMatrix or QR Code. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K = b_1 - b_2 + b_3 - b_4 + 9 \\,\\, \\pmod 9" }, { "math_id": 1, "text": "K = E_1 - E_2 + E_5 - E_6 + 9 \\,\\, \\pmod 9" } ]
https://en.wikipedia.org/wiki?curid=599590
59961754
Indistinguishability obfuscation
Type of cryptographic software obfuscation In cryptography, indistinguishability obfuscation (abbreviated IO or iO) is a type of software obfuscation with the defining property that obfuscating any two programs that compute the same mathematical function results in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it. Formally, iO satisfies the property that obfuscations of two circuits of the same size which implement the same function are computationally indistinguishable. Indistinguishability obfuscation has several interesting theoretical properties. Firstly, iO is the "best-possible" obfuscation (in the sense that any secret about a program that can be hidden by any obfuscator at all can also be hidden by iO). Secondly, iO can be used to construct nearly the entire gamut of cryptographic primitives, including both mundane ones such as public-key cryptography and more exotic ones such as deniable encryption and functional encryption (which are types of cryptography that no-one previously knew how to construct), but with the notable exception of collision-resistant hash function families. For this reason, it has been referred to as "crypto-complete". Lastly, unlike many other kinds of cryptography, indistinguishability obfuscation continues to exist even if P=NP (though it would have to be constructed differently in this case), though this does not necessarily imply that iO exists unconditionally. Though the idea of cryptographic software obfuscation has been around since 1996, indistinguishability obfuscation was first proposed by Barak et al. (2001), who proved that iO exists if P=NP. For the P≠NP case (which is harder, but also more plausible), progress was slower: Garg et al. (2013) proposed a construction of iO based on a computational hardness assumption relating to multilinear maps, but this assumption was later disproven. A construction based on "well-founded assumptions" (hardness assumptions that have been well-studied by cryptographers, and thus widely assumed secure) had to wait until Jain, Lin, and Sahai (2020). (Even so, one of the assumptions used in the 2020 proposal is not secure against quantum computers.) Currently known indistinguishability obfuscation candidates are very far from being practical. As measured by a 2017 paper, even obfuscating the toy function which outputs the logical conjunction of its thirty-two Boolean inputs produces a program nearly a dozen gigabytes large. Formal definition. Let formula_0 be some uniform probabilistic polynomial-time algorithm. Then formula_0 is called an "indistinguishability obfuscator" if and only if it satisfies both of the following two statements: Functionality: for every circuit "C" and every input formula_1, the obfuscated circuit computes the same output as the original circuit, that is, formula_2. Indistinguishability: for every two circuits formula_3 of the same size that compute the same function, the distributions formula_4 and formula_5 are computationally indistinguishable; in other words, for every probabilistic polynomial-time adversary "A" there is a negligible function formula_6 (one that is eventually smaller than formula_7 for every polynomial "p") such that, for every security parameter "k", formula_8. History. In 2001, Barak et al., showing that black-box obfuscation is impossible, also proposed the idea of an indistinguishability obfuscator, and constructed an inefficient one. Although this notion seemed relatively weak, Goldwasser and Rothblum (2007) showed that an efficient indistinguishability obfuscator would be a best-possible obfuscator, and any best-possible obfuscator would be an indistinguishability obfuscator. (However, for "inefficient" obfuscators, no best-possible obfuscator exists unless the polynomial hierarchy collapses to the second level.) An open-source software implementation of an iO candidate was created in 2015. Candidate constructions. Barak et al.
(2001) proved that an "inefficient" indistinguishability obfuscator exists for circuits; that is, the lexicographically first circuit that computes the same function. If P = NP holds, then an indistinguishability obfuscator exists, even though no other kind of cryptography would also exist. A candidate construction of iO with provable security under concrete hardness assumptions relating to multilinear maps was published by Garg et al. (2013), but this assumption was later invalidated. (Previously, Garg, Gentry, and Halevi (2012) had constructed a candidate version of a multilinear map based on heuristic assumptions.) Starting from 2016, Lin began to explore constructions of iO based on less strict versions of multilinear maps, constructing a candidate based on maps of degree up to 30, and eventually a candidate based on maps of degree up to 3. Finally, in 2020, Jain, Lin, and Sahai proposed a construction of iO based on the symmetric external Diffie-Helman, learning with errors, and learning plus noise assumptions, as well as the existence of a super-linear stretch pseudorandom generator in the function class NC0. (The existence of pseudorandom generators in NC0 (even with sub-linear stretch) was a long-standing open problem until 2006.) It is possible that this construction could be broken with quantum computing, but there is an alternative construction that may be secure even against that (although the latter relies on less established security assumptions). Practicality. There have been attempts to implement and benchmark iO candidates. In 2017, an obfuscation of the function formula_9 at a security level of 80 bits took 23.5 minutes to produce and measured 11.6 GB, with an evaluation time of 77 ms. Additionally, an obfuscation of the Advanced Encryption Standard encryption circuit at a security level of 128 bits would measure 18 PB and have an evaluation time of about 272 years. Existence. It is useful to divide the question of the existence of iO by using Russell Impagliazzo's "five worlds", which are five different hypothetical situations about average-case complexity: Potential applications. Indistinguishability obfuscators, if they exist, could be used for an enormous range of cryptographic applications, so much so that it has been referred to as a "central hub" for cryptography, the "crown jewel of cryptography", or "crypto-complete". Concretely, an indistinguishability obfuscator (with the additional assumption of the existence of one-way functions) could be used to construct the following kinds of cryptography: Additionally, if iO and one-way functions exist, then problems in the PPAD complexity class are provably hard. However, indistinguishability obfuscation cannot be used to construct "every" possible cryptographic protocol: for example, no black-box construction can convert an indistinguishability obfuscator to a collision-resistant hash function family, even with a trapdoor permutation, except with an "exponential" loss of security. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
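The "inefficient" canonical-form construction discussed at the start of this section — replace a program by a canonical equivalent, such as the lexicographically first circuit computing the same function — can be illustrated with a toy sketch. The Python below is illustrative only: it uses full truth tables instead of circuits, runs in time exponential in the input length, and is in no way a secure or practical obfuscator. It is included solely to show the two defining properties from the formal definition: the obfuscated program still computes the same function, and two different implementations of the same function become literally identical, hence trivially indistinguishable.

```python
def toy_obfuscate(program, n_bits):
    """Canonical form of a program on n_bits-bit inputs: its full truth table (exponential size)."""
    return tuple(program(x) for x in range(2 ** n_bits))

def run_obfuscated(table, x):
    """Evaluating the 'obfuscated' program is just a table lookup."""
    return table[x]

# Two syntactically different implementations of the same 2-bit function (XOR of the two bits):
impl_a = lambda x: (x & 1) ^ ((x >> 1) & 1)
impl_b = lambda x: 1 if bin(x).count("1") % 2 == 1 else 0

ob_a = toy_obfuscate(impl_a, 2)
ob_b = toy_obfuscate(impl_b, 2)
assert ob_a == ob_b                                                  # indistinguishability (here: equality)
assert all(run_obfuscated(ob_a, x) == impl_a(x) for x in range(4))   # functionality is preserved
```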
[ { "math_id": 0, "text": "\\mathcal{iO}" }, { "math_id": 1, "text": "x\\in\\{0, 1\\}^n" }, { "math_id": 2, "text": "\\Pr[C'(x) = C(x): C' \\leftarrow\\mathcal{iO}(C) ] = 1." }, { "math_id": 3, "text": "C_0, C_1" }, { "math_id": 4, "text": "\\{\\mathcal{iO}(C_0)\\}" }, { "math_id": 5, "text": "\\{\\mathcal{iO}(C_1)\\}" }, { "math_id": 6, "text": "\\varepsilon(k)" }, { "math_id": 7, "text": "1/p(k)" }, { "math_id": 8, "text": "|\\Pr[A(\\mathcal{iO}(C_0))=1] - \\Pr[A(\\mathcal{iO}(C_1))=1]|\\leq\\varepsilon(k)." }, { "math_id": 9, "text": "x_1\\wedge x_2\\wedge \\dots \\wedge x_{32}" } ]
https://en.wikipedia.org/wiki?curid=59961754
59961913
Games graph
In graph theory, the Games graph is the largest known locally linear strongly regular graph. Its parameters as a strongly regular graph are (729,112,1,20). This means that it has 729 vertices, and 40824 edges (112 per vertex). Each edge is in a unique triangle (it is a locally linear graph) and each non-adjacent pair of vertices have exactly 20 shared neighbors. It is named after Richard A. Games, who suggested its construction in an unpublished communication and wrote about related constructions. Construction. The construction of this graph involves the 56-point cap set in formula_0. This is a subset of points with no three in line in the five-dimensional projective geometry over a three-element field, and is unique up to symmetry. The six-dimensional projective geometry, formula_1, can be partitioned into a six-dimensional affine space formula_2 and a copy of formula_0, which forms the set of points at infinity with respect to the affine space. The Games graph has as its vertices the 729 points of the affine space formula_2. Each line in the affine space goes through three of these points, and through a fourth point at infinity. The graph contains a triangle for every line of three affine points that passes through a point of the cap set. Properties. Several of the graph's properties follow immediately from this construction. It has formula_3 vertices, because the number of points in an affine space is the size of the base field to the power of the dimension. For each affine point, there are 56 lines through cap set points, 56 triangles containing the corresponding vertex, and formula_4 neighbors of the vertex. And there can be no triangles other than the ones coming from the construction, because any other triangle would have to come from three different lines meeting in a common plane of formula_1, and the three cap set points of the three lines would all lie on the intersection of this plane with formula_0, which is a line. But this would violate the defining property of a cap set that it has no three points on a line, so no such extra triangle can exist. The remaining property of strongly regular graphs, that all non-adjacent pairs of points have the same number of shared neighbors, depends on the specific properties of the 5-dimensional cap set. Related graphs. With the formula_5 Rook's graph and the Brouwer–Haemers graph, the Games graph is one of only three possible strongly regular graphs whose parameters have the form formula_6. The same properties that produce a strongly regular graph from a cap set can also be used with an 11-point cap set in formula_7, producing a smaller strongly regular graph with parameters (243,22,1,2). This graph is the Berlekamp–Van Lint–Seidel graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
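A strongly regular graph with parameters ("v", "k", λ, μ) satisfies the matrix identity A² = kI + λA + μ(J − I − A), where A is the adjacency matrix and J the all-ones matrix, so membership in the parameter family quoted above can be checked mechanically. The sketch below (Python with NumPy, an assumption of this example since the article specifies no software) verifies the identity for the smallest member of the family, the 3 × 3 Rook's graph with parameters (9,4,1,2), and prints the family's parameters for "n" = 1, 2, 4; the last of these gives the Games graph's (729,112,1,20).

```python
import itertools
import numpy as np

# 3 x 3 Rook's graph: vertices are cells of a 3 x 3 grid, adjacent when they share a row or column.
cells = list(itertools.product(range(3), repeat=2))
A = np.array([[int(i != j and (r1 == r2 or c1 == c2))
               for j, (r2, c2) in enumerate(cells)]
              for i, (r1, c1) in enumerate(cells)])

v, k, lam, mu = 9, 4, 1, 2
I, J = np.eye(v, dtype=int), np.ones((v, v), dtype=int)
assert (A @ A == k * I + lam * A + mu * (J - I - A)).all()   # strongly regular with (9, 4, 1, 2)

# Parameter family ((n^2+3n-1)^2, n^2(n+3), 1, n(n+1)): n = 1, 2, 4 give the Rook's graph,
# the Brouwer-Haemers graph, and the Games graph respectively.
for n in (1, 2, 4):
    print(((n * n + 3 * n - 1) ** 2, n * n * (n + 3), 1, n * (n + 1)))
```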
[ { "math_id": 0, "text": "PG(5,3)" }, { "math_id": 1, "text": "PG(6,3)" }, { "math_id": 2, "text": "AG(6,3)" }, { "math_id": 3, "text": "729=3^6" }, { "math_id": 4, "text": "112=56\\times 2" }, { "math_id": 5, "text": "3\\times 3" }, { "math_id": 6, "text": "\\bigl((n^2+3n-1)^2,n^2(n+3),1,n(n+1)\\bigr)" }, { "math_id": 7, "text": "PG(4,3)" } ]
https://en.wikipedia.org/wiki?curid=59961913
59962
Discrete cosine transform
Technique used in signal processing and data compression A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations. A DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT". This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) have been developed to extend the concept of DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards. DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks. DCT block sizes include 8x8 pixels for the standard DCT, and varied integer DCT sizes between 4x4 and 32x32 pixels. The DCT has a strong "energy compaction" property, capable of achieving high quality at high data compression ratios. However, blocky compression artifacts can appear when heavy DCT compression is applied. History. The DCT was first conceived by Nasir Ahmed, T. Natarajan and K. R. Rao while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended for image compression. Ahmed developed a practical DCT algorithm with his PhD students T. Raj Natarajan, Wills Dietrich, and Jeremy Fries, and his friend Dr. K. R. Rao at the University of Texas at Arlington in 1973. They presented their results in a January 1974 paper, titled "Discrete Cosine Transform". It described what is now called the type-II DCT (DCT-II), as well as the type-III inverse DCT (IDCT). Since its introduction in 1974, there has been significant research on the DCT.
In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm. Further developments include a 1978 paper by M. J. Narasimha and A. M. Peterson, and a 1984 paper by B. G. Lee. These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by the Joint Photographic Experts Group as the basis for JPEG's lossy image compression algorithm in 1992. The discrete sine transform (DST) was derived from the DCT, by replacing the Neumann condition at "x=0" with a Dirichlet condition. The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao. A type-I DST (DST-I) was later described by Anil K. Jain in 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978. In 1975, John A. Roese and Guner S. Robinson adapted the DCT for inter-frame motion-compensated video coding. They experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25-bit per pixel for a videotelephone scene with image quality comparable to an intra-frame coder requiring 2-bit per pixel. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, also called block motion compensation. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards. A DCT variant, the modified discrete cosine transform (MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used in most modern audio compression formats, such as Dolby Digital (AC-3), MP3 (which uses a hybrid DCT-FFT algorithm), Advanced Audio Coding (AAC), and Vorbis (Ogg). Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at the University of New Mexico in 1995. This allows the DCT technique to be used for lossless compression of images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT and delta modulation. It is a more effective lossless compression algorithm than entropy coding. Lossless DCT is also known as LDCT. Applications. The DCT is the most widely used transformation technique in signal processing, and by far the most widely used linear transform in data compression. Uncompressed digital media as well as lossless compression have high memory and bandwidth requirements, which is significantly reduced by the DCT lossy compression technique, capable of achieving data compression ratios from 8:1 to 14:1 for near-studio-quality, up to 100:1 for acceptable-quality content. DCT compression standards are used in digital media technologies, such as digital images, digital photos, digital video, streaming media, digital television, streaming television, video on demand (VOD), digital cinema, high-definition video (HD video), and high-definition television (HDTV). The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong "energy compaction" property. 
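The energy-compaction property can be seen directly by transforming a smooth signal and observing where the energy of the coefficients ends up. The following sketch (Python with NumPy; the test signal, block length and normalization are choices made for this illustration, not anything prescribed by the standards mentioned above) builds the orthonormal DCT-II matrix and reports how much of the signal's energy is captured by the first few coefficients.

```python
import numpy as np

N = 64
t = np.arange(N)
# A smooth, slowly varying test block (illustrative only):
signal = np.exp(-((t - 20.0) / 12.0) ** 2) + 0.3 * np.cos(2 * np.pi * t / N)

# Orthonormal DCT-II matrix: C[k, n] = s_k * cos(pi * (2n + 1) * k / (2N)), s_0 = sqrt(1/N), s_k = sqrt(2/N).
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * t[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)

coeffs = C @ signal
energy = coeffs ** 2
print("energy in first 8 of 64 coefficients:", energy[:8].sum() / energy.sum())
# For smooth inputs like this one the printed fraction is close to 1, which is what makes
# coarse quantization of the remaining coefficients so effective for compression.
```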
In typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions. DCTs are widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even and odd boundary conditions at the two ends of the array. DCTs are closely related to Chebyshev polynomials, and fast DCT algorithms (below) are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature. General applications. The DCT is widely used in many applications, which include the following. Visual media standards. The DCT-II is an important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as H.26x, MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of formula_0 blocks are computed and the results are quantized and entropy coded. In this case, formula_1 is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the formula_2 element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies. The integer DCT, an integer approximation of the DCT, is used in Advanced Video Coding (AVC), introduced in 2003, and High Efficiency Video Coding (HEVC), introduced in 2013. The integer DCT is also used in the High Efficiency Image Format (HEIF), which uses a subset of the HEVC video coding format for coding still images. AVC uses 4 x 4 and 8 x 8 blocks. HEVC and HEIF use varied block sizes between 4 x 4 and 32 x 32 pixels. As of 2019[ [update]], AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC which is used by 43% of developers. Multidimensional DCT. Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications like Hyperspectral Imaging coding systems, variable temporal length 3-D DCT coding, video coding algorithms, adaptive video coding and 3-D Compression. Due to enhancement in the hardware, software and introduction of several fast algorithms, the necessity of using MD DCTs is rapidly increasing. DCT-IV has gained popularity for its applications in fast implementation of real-valued polyphase filtering banks, lapped orthogonal transform and cosine-modulated wavelet bases. Digital signal processing. DCT plays an important role in digital signal processing specifically data compression. The DCT is widely implemented in digital signal processors (DSP), as well as digital signal processing software. Many companies have developed DSPs based on DCT technology. DCTs are widely used for applications such as encoding, decoding, video, audio, multiplexing, control signals, signaling, and analog-to-digital conversion. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips. Compression artifacts. 
A common issue with DCT compression in digital media is the appearance of blocky compression artifacts, caused by the DCT blocks. In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other; the DCT is then taken within each block, and the resulting DCT coefficients are quantized. This process can cause blocking artifacts, primarily at high data compression ratios. This can also cause the mosquito noise effect, commonly found in digital video. DCT blocks are often used in glitch art. The artist Rosa Menkman makes use of DCT-based compression artifacts in her glitch art, particularly the DCT blocks found in most digital media formats such as JPEG digital images and MP3 digital audio. Another example is "Jpegs" by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style. Informal overview. Like any Fourier-related transform, discrete cosine transforms (DCTs) express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform (DFT), a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms. The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an "extension" of that function outside the domain. That is, once you write a function formula_3 as a sum of sinusoids, you can evaluate that sum at any formula_4, even for formula_4 where the original formula_3 was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function. However, because DCTs operate on "finite", "discrete" sequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd at "both" the left and right boundaries of the domain (i.e. the min-"n" and max-"n" boundaries in the definitions below, respectively). Second, one has to specify around "what point" the function is even or odd. In particular, consider a sequence "abcd" of four equally spaced data points, and say that we specify an even "left" boundary. There are two sensible possibilities: either the data are even about the sample "a", in which case the even extension is "dcbabcd", or the data are even about the point "halfway" between "a" and the previous point, in which case the even extension is "dcbaabcd" ("a" is repeated). These choices lead to all the standard variations of DCTs and also discrete sine transforms (DSTs). Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. Half of these possibilities, those where the "left" boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST.
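The two even left-boundary extensions of "abcd" described above, and the reason even symmetry matters, can be reproduced in a few lines. In the sketch below (Python with NumPy, chosen only for illustration; the numeric data are arbitrary stand-ins for "a", "b", "c", "d"), the whole-point-even periodic extension has a purely real DFT, while the half-point-even extension is real only after a compensating half-sample phase factor.

```python
import numpy as np

data = [1, 2, 3, 4]                              # stands in for "a b c d"
print(data[:0:-1] + data)                        # [4, 3, 2, 1, 2, 3, 4]    = "dcbabcd"  (even about the sample "a")
print(data[::-1] + data)                         # [4, 3, 2, 1, 1, 2, 3, 4] = "dcbaabcd" (even about the half-sample point)

# One period of the whole-point-even extension (period 2(N-1) = 6): its DFT is purely real.
y_whole = np.array([1, 2, 3, 4, 3, 2], dtype=float)
assert np.allclose(np.fft.fft(y_whole).imag, 0, atol=1e-12)

# One period of the half-point-even extension (period 2N = 8): real only after a half-sample phase shift.
y_half = np.array([1, 2, 3, 4, 4, 3, 2, 1], dtype=float)
k = np.arange(8)
assert np.allclose((np.exp(-1j * np.pi * k / 8) * np.fft.fft(y_half)).imag, 0, atol=1e-12)
```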
These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for the "energy compactification" properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series. In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series, so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed. (Here, we think of the DFT or DCT as approximations for the Fourier series or cosine series of a function, respectively, in order to talk about its "smoothness".) However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries. (A similar problem arises for the DST, in which the odd left boundary condition implies a discontinuity for any function that does not happen to be zero at that boundary.) In contrast, a DCT where "both" boundaries are even "always" yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries) generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience. Formal definition. Formally, the discrete cosine transform is a linear, invertible function formula_5 (where formula_6 denotes the set of real numbers), or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers formula_7 are transformed into the N real numbers formula_8 according to one of the formulas: formula_9 DCT-I. Some authors further multiply the formula_10 and formula_11 terms by formula_12 and correspondingly multiply the formula_13 and formula_14 terms by formula_15 which makes the DCT-I matrix orthogonal, if one further multiplies by an overall scale factor of formula_16 but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent (up to an overall scale factor of 2), to a DFT of formula_17 real numbers with even symmetry. For example, a DCT-I of formula_18 real numbers formula_19 is exactly equivalent to a DFT of eight real numbers formula_20 (even symmetry), divided by two. (In contrast, DCT types II-IV involve a half-sample shift in the equivalent DFT.) 
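The equivalence just stated — a DCT-I of five real numbers "abcde" equals the DFT of the eight real numbers "abcdedcb", divided by two — is easy to check numerically. The sketch below (Python with NumPy) uses the DCT-I convention with halved endpoint terms, which is one common normalization and an assumption of this example since the article leaves the scale factor open; the five sample values are arbitrary.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0, 3.0])     # "a b c d e", N = 5
N = len(x)
k = np.arange(N)

# DCT-I with halved endpoints: X_k = (x_0 + (-1)^k x_{N-1})/2 + sum_{n=1}^{N-2} x_n cos(pi n k / (N-1))
X = 0.5 * (x[0] + (-1.0) ** k * x[-1]) + sum(x[n] * np.cos(np.pi * n * k / (N - 1)) for n in range(1, N - 1))

y = np.concatenate([x, x[-2:0:-1]])          # "a b c d e d c b": even symmetry, length 2(N-1) = 8
Y = np.fft.fft(y)
assert np.allclose(Y.imag, 0, atol=1e-12)    # real, even data -> purely real DFT
assert np.allclose(Y.real[:N], 2 * X)        # the DFT equals twice the DCT-I
```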
Note, however, that the DCT-I is not defined for formula_21 less than 2, while all other DCT types are defined for any positive formula_22 Thus, the DCT-I corresponds to the boundary conditions: formula_23 is even around formula_24 and even around formula_25; similarly for formula_26 formula_27 DCT-II. The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT". This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of formula_28 real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the formula_28 inputs formula_29 where formula_30 formula_31 for formula_32 formula_33 and formula_34 for formula_35 DCT-II transformation is also possible using 2N signal followed by a multiplication by half shift. This is demonstrated by Makhoul. Some authors further multiply the formula_13 term by formula_36 and multiply the rest of the matrix by an overall scale factor of formula_37 (see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. This is the normalization used by Matlab, for example, see. In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications. The DCT-II implies the boundary conditions: formula_23 is even around formula_38 and even around formula_39 formula_40 is even around formula_41 and odd around formula_42 formula_43 DCT-III. Because it is the inverse of DCT-II up to a scale factor (see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT"). Some authors divide the formula_44 term by formula_45 instead of by 2 (resulting in an overall formula_46 term) and multiply the resulting matrix by an overall scale factor of formula_47 (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: formula_48 is even around formula_49 and odd around formula_50 formula_51 is even around formula_52 and even around formula_53 formula_54 DCT-IV. The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of formula_55 A variant of the DCT-IV, where data from different transforms are "overlapped", is called the modified discrete cosine transform (MDCT). The DCT-IV implies the boundary conditions: formula_23 is even around formula_56 and odd around formula_57 similarly for formula_58 DCT V-VIII. DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether formula_21 is even or odd), since the corresponding DFT is of length formula_17 (for DCT-I) or formula_59 (for DCT-II &amp; III) or formula_60 (for DCT-IV). 
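To make the DCT-II/DCT-III relationship concrete, the sketch below (Python with NumPy; the block length and test vector are arbitrary) builds a DCT-II matrix with the commonly used orthonormal scaling — an overall factor of √(2/N) with the k = 0 row further multiplied by 1/√2, as described in the DCT-II section above — verifies that the matrix is orthogonal, and checks that its transpose (the correspondingly scaled DCT-III) inverts it.

```python
import numpy as np

N = 8
idx = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)                  # extra 1/sqrt(2) on the k = 0 row -> orthonormal DCT-II

assert np.allclose(C @ C.T, np.eye(N))   # orthogonal matrix

x = np.array([4.0, 2.0, -1.0, 0.5, 3.0, 3.0, -2.0, 1.0])
X = C @ x                                # forward DCT-II (orthonormal scaling)
assert np.allclose(C.T @ X, x)           # DCT-III (the transpose) recovers the input
```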
The four additional types of discrete cosine transform correspond essentially to real-even DFTs of logically odd order, which have factors of formula_61 in the denominators of the cosine arguments. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below. Inverse transforms. Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/("N" − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/"N". The inverse of DCT-II is DCT-III multiplied by 2/"N" and vice versa. Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by formula_63 so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal. Multidimensional DCTs. Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. M-D DCT-II. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above): formula_64 The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs (see above), e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm. The "3-D DCT-II" is simply the extension of the "2-D DCT-II" to three-dimensional space and mathematically can be calculated by the formula formula_65 The inverse of 3-D DCT-II is 3-D DCT-III and can be computed from the formula given by formula_66 Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a "row-column" algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in the applications based on the 3-D DCT, several fast algorithms have been developed for the computation of 3-D DCT-II. Vector-Radix algorithms are applied for computing M-D DCT to reduce the computational complexity and to increase the computational speed. To compute 3-D DCT-II efficiently, a fast algorithm, the Vector-Radix Decimation in Frequency (VR DIF) algorithm, was developed. 3-D DCT-II VR DIF. In order to apply the VR DIF algorithm, the input data is to be formulated and rearranged as follows. The transform size is "N" × "N" × "N", where "N" is assumed to be a power of two. formula_67 where formula_68 The adjacent figure shows the four stages that are involved in calculating 3-D DCT-II using the VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together as shown in the figure just below, where formula_69.
The original 3-D DCT-II now can be written as formula_70 where formula_71 If the even and the odd parts of formula_72 and formula_73 and are considered, the general formula for the calculation of the 3-D DCT-II can be expressed as formula_74 where formula_75 formula_76 formula_77 formula_78 formula_79 Arithmetic complexity. The whole 3-D DCT calculation needs formula_80 stages, and each stage involves formula_81 butterflies. The whole 3-D DCT requires formula_82 butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed for this stage is formula_83 and the total number of real additions i.e. including the post-additions (recursive additions) which can be calculated directly after the butterfly stage or after the bit-reverse stage are given by formula_84 The conventional method to calculate MD-DCT-II is using a Row-Column-Frame (RCF) approach which is computationally complex and less productive on most advanced recent hardware platforms. The number of multiplications required to compute VR DIF Algorithm when compared to RCF algorithm are quite a few in number. The number of Multiplications and additions involved in RCF approach are given by formula_85 and formula_86 respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transpose and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II such as video compression and other 3-D image processing applications. The main consideration in choosing a fast algorithm is to avoid computational and structural complexities. As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and regular computational structure becomes the most important factor. Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications, it has a simpler computational structure as compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR presents a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterize butterfly-style Cooley–Tukey FFT algorithms. The image to the right shows a combination of horizontal and vertical frequencies for an 8 × 8 formula_87 two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle. For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data ( 8×8 ) is transformed to a linear combination of these 64 frequency squares. MD-DCT-IV. The M-D DCT-IV is just an extension of 1-D DCT-IV on to M dimensional domain. 
The 2-D DCT-IV of a matrix or an image is given by formula_88 for formula_89 and formula_90 We can compute the MD DCT-IV using the regular row-column method or we can use the polynomial transform method for the fast and efficient computation. The main idea of this algorithm is to use the Polynomial Transform to convert the multidimensional DCT into a series of 1-D DCTs directly. MD DCT-IV also has several applications in various fields. Computation. Although the direct application of these formulas would require formula_91 operations, it is possible to compute the same thing with only formula_92 complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with formula_93 pre- and post-processing steps. In general, formula_94 methods to compute DCTs are known as fast cosine transform (FCT) algorithms. The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus formula_95 extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically . Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well . While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: Highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms. Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the 8 × 8 DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.) In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size formula_96 with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) was described by and , and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II. Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. 
If the subsequent size formula_97 real-data FFT is also performed by a real-data split-radix algorithm (as in ), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II (formula_98 real-arithmetic operations). A recent reduction in the operation count to formula_99 also uses a real-data FFT. So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small formula_100 but this is an implementation rather than an algorithmic question since it can be solved by unrolling or inlining.) Example of IDCT. Consider this 8x8 grayscale image of capital letter A. Each basis function is multiplied by its coefficient and then this product is added to the final image. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
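To make the definitions and normalization conventions above concrete, the sketch below implements the 1-D DCT-II and DCT-III directly from their defining sums (the quadratic-cost definitional formulas, not a fast algorithm), checks that DCT-III undoes DCT-II up to the factor 2/N, and checks that the 2-D DCT-II equals 1-D DCT-IIs applied along rows and then columns (the row-column algorithm). Function names are illustrative; in practice one would call an FFT-based library routine (for example SciPy's dct).

```python
import numpy as np

def dct2_1d(x):
    # DCT-II: X_k = sum_n x_n cos[pi/N (n + 1/2) k]
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])

def dct3_1d(x):
    # DCT-III: X_k = x_0/2 + sum_{n=1}^{N-1} x_n cos[pi/N (k + 1/2) n]
    N = len(x)
    n = np.arange(1, N)
    return np.array([x[0] / 2 + np.sum(x[1:] * np.cos(np.pi / N * (k + 0.5) * n))
                     for k in range(N)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# With these unnormalized conventions, DCT-III is the inverse of DCT-II up to 2/N.
assert np.allclose(dct3_1d(dct2_1d(x)) * 2 / len(x), x)

# 2-D DCT-II as a separable (row-column) product of 1-D DCT-IIs.
A = rng.standard_normal((8, 8))
rc = np.apply_along_axis(dct2_1d, 1, A)   # transform each row
rc = np.apply_along_axis(dct2_1d, 0, rc)  # then each column

# Direct evaluation of the 2-D definition, for comparison.
N1, N2 = A.shape
direct = np.zeros((N1, N2))
for k1 in range(N1):
    for k2 in range(N2):
        c1 = np.cos(np.pi / N1 * (np.arange(N1) + 0.5) * k1)
        c2 = np.cos(np.pi / N2 * (np.arange(N2) + 0.5) * k2)
        direct[k1, k2] = c1 @ A @ c2
assert np.allclose(rc, direct)
print("1-D inverse check and 2-D row-column check passed")
```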
[ { "math_id": 0, "text": "N \\times N" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "(0,0)" }, { "math_id": 3, "text": "f(x)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": " f : \\R^{N} \\to \\R^{N} " }, { "math_id": 6, "text": " \\R" }, { "math_id": 7, "text": "~ x_0,\\ \\ldots\\ x_{N - 1} ~" }, { "math_id": 8, "text": " X_0,\\, \\ldots,\\, X_{N - 1} " }, { "math_id": 9, "text": "X_k\n = \\frac{1}{2} (x_0 + (-1)^k x_{N-1}) \n + \\sum_{n=1}^{N-2} x_n \\cos \\left[\\, \\frac{\\pi}{\\,N-1\\,} \\, n \\, k \\,\\right]\n \\qquad \\text{ for } ~ k = 0,\\ \\ldots\\ N-1 ~." }, { "math_id": 10, "text": "x_0 " }, { "math_id": 11, "text": " x_{N-1} " }, { "math_id": 12, "text": " \\sqrt{2\\,}\\, ," }, { "math_id": 13, "text": " X_0 " }, { "math_id": 14, "text": " X_{N-1}" }, { "math_id": 15, "text": " 1/\\sqrt{2\\,} \\,," }, { "math_id": 16, "text": " \\sqrt{\\tfrac{2}{N-1\\,}\\,} ," }, { "math_id": 17, "text": " 2(N-1) " }, { "math_id": 18, "text": "N = 5 " }, { "math_id": 19, "text": " a\\ b\\ c\\ d\\ e " }, { "math_id": 20, "text": " a\\ b\\ c\\ d\\ e\\ d\\ c\\ b " }, { "math_id": 21, "text": " N " }, { "math_id": 22, "text": " N ." }, { "math_id": 23, "text": " x_n " }, { "math_id": 24, "text": " n = 0 " }, { "math_id": 25, "text": " n = N - 1 " }, { "math_id": 26, "text": " X_k ." }, { "math_id": 27, "text": "X_k =\n \\sum_{n=0}^{N-1} x_n \\cos \\left[\\, \\tfrac{\\,\\pi\\,}{N} \\left( n + \\tfrac{1}{2} \\right) k \\, \\right]\n \\qquad \\text{ for } ~ k = 0,\\ \\dots\\ N-1 ~." }, { "math_id": 28, "text": "4N" }, { "math_id": 29, "text": " y_n ," }, { "math_id": 30, "text": " y_{2n} = 0 ," }, { "math_id": 31, "text": " y_{2n+1} = x_n " }, { "math_id": 32, "text": " 0 \\leq n < N ," }, { "math_id": 33, "text": " y_{2N} = 0 ," }, { "math_id": 34, "text": " y_{4N-n} = y_n " }, { "math_id": 35, "text": " 0 < n < 2N ." }, { "math_id": 36, "text": " 1/\\sqrt{N\\,} \\, " }, { "math_id": 37, "text": "\\sqrt{{2}/{N}}" }, { "math_id": 38, "text": " n = -1/2 " }, { "math_id": 39, "text": " n = N - 1/2 \\,;" }, { "math_id": 40, "text": " X_k " }, { "math_id": 41, "text": " k = 0 " }, { "math_id": 42, "text": " k = N ." }, { "math_id": 43, "text": " X_k =\n \\tfrac{1}{2} x_0 +\n \\sum_{n=1}^{N-1} x_n \\cos \\left[\\, \\tfrac{\\,\\pi\\,}{N} \\left( k + \\tfrac{1}{2} \\right) n \\,\\right]\n \\qquad \\text{ for } ~ k = 0,\\ \\ldots\\ N-1 ~." }, { "math_id": 44, "text": "x_0" }, { "math_id": 45, "text": "\\sqrt{2}" }, { "math_id": 46, "text": "x_0/\\sqrt{2}" }, { "math_id": 47, "text": " \\sqrt{2/N}" }, { "math_id": 48, "text": "x_n" }, { "math_id": 49, "text": "n = 0" }, { "math_id": 50, "text": "n = N ;" }, { "math_id": 51, "text": "X_k" }, { "math_id": 52, "text": "k = -1/2" }, { "math_id": 53, "text": "k = N - 1/2." }, { "math_id": 54, "text": "X_k =\n \\sum_{n=0}^{N-1} x_n \\cos \\left[\\, \\tfrac{\\,\\pi\\,}{N} \\, \\left(n + \\tfrac{1}{2} \\right)\\left(k + \\tfrac{1}{2} \\right) \\,\\right]\n \\qquad \\text{ for } k = 0,\\ \\ldots\\ N-1 ~." }, { "math_id": 55, "text": " \\sqrt{2/N}." }, { "math_id": 56, "text": "n = -1/2" }, { "math_id": 57, "text": "n = N - 1/2;" }, { "math_id": 58, "text": "X_k." }, { "math_id": 59, "text": " 4 N " }, { "math_id": 60, "text": " 8 N " }, { "math_id": 61, "text": " N \\pm {1}/{2} " }, { "math_id": 62, "text": " N = 1 ." 
}, { "math_id": 63, "text": "\\sqrt{2/N}" }, { "math_id": 64, "text": "\n\\begin{align}\nX_{k_1,k_2} &=\n \\sum_{n_1=0}^{N_1-1}\n\\left( \\sum_{n_2=0}^{N_2-1}\n x_{n_1,n_2} \n\\cos \\left[\\frac{\\pi}{N_2} \\left(n_2+\\frac{1}{2}\\right) k_2 \\right]\\right)\n\\cos \\left[\\frac{\\pi}{N_1} \\left(n_1+\\frac{1}{2}\\right) k_1 \\right]\\\\\n&= \\sum_{n_1=0}^{N_1-1}\n \\sum_{n_2=0}^{N_2-1}\n x_{n_1,n_2} \n\\cos \\left[\\frac{\\pi}{N_1} \\left(n_1+\\frac{1}{2}\\right) k_1 \\right]\n\\cos \\left[\\frac{\\pi}{N_2} \\left(n_2+\\frac{1}{2}\\right) k_2 \\right] .\n\\end{align}\n" }, { "math_id": 65, "text": "\nX_{k_1,k_2,k_3} =\n \\sum_{n_1=0}^{N_1-1}\n \\sum_{n_2=0}^{N_2-1}\n\\sum_{n_3=0}^{N_3-1}\n x_{n_1,n_2,n_3} \n\\cos \\left[\\frac{\\pi}{N_1} \\left(n_1+\\frac{1}{2}\\right) k_1 \\right]\n\\cos \\left[\\frac{\\pi}{N_2} \\left(n_2+\\frac{1}{2}\\right) k_2 \\right]\n\\cos \\left[\\frac{\\pi}{N_3} \\left(n_3+\\frac{1}{2}\\right) k_3 \\right],\\quad \n\\text{for } k_i = 0,1,2,\\dots,N_i-1.\n" }, { "math_id": 66, "text": "\nx_{n_1,n_2,n_3} =\n \\sum_{k_1=0}^{N_1-1}\n \\sum_{k_2=0}^{N_2-1}\n\\sum_{k_3=0}^{N_3-1}\n X_{k_1,k_2,k_3} \n\\cos \\left[\\frac{\\pi}{N_1} \\left(n_1+\\frac{1}{2}\\right) k_1 \\right]\n\\cos \\left[\\frac{\\pi}{N_2} \\left(n_2+\\frac{1}{2}\\right) k_2 \\right]\n\\cos \\left[\\frac{\\pi}{N_3} \\left(n_3+\\frac{1}{2}\\right) k_3 \\right],\\quad\n\\text{for } n_i=0,1,2,\\dots,N_i-1.\n" }, { "math_id": 67, "text": "\n\\begin{array}{lcl}\\tilde{x}(n_1,n_2,n_3) =x(2n_1,2n_2,2n_3)\\\\ \n\\tilde{x}(n_1,n_2,N-n_3-1)=x(2n_1,2n_2,2n_3+1)\\\\\n\\tilde{x}(n_1,N-n_2-1,n_3)=x(2n_1,2n_2+1,2n_3)\\\\\n\\tilde{x}(n_1,N-n_2-1,N-n_3-1)=x(2n_1,2n_2+1,2n_3+1)\\\\\n\\tilde{x}(N-n_1-1,n_2,n_3)=x(2n_1+1,2n_2,2n_3)\\\\\n\\tilde{x}(N-n_1-1,n_2,N-n_3-1)=x(2n_1+1,2n_2,2n_3+1)\\\\\n\\tilde{x}(N-n_1-1,N-n_2-1,n_3)=x(2n_1+1,2n_2+1,2n_3)\\\\\n\\tilde{x}(N-n_1-1,N-n_2-1,N-n_3-1)=x(2n_1+1,2n_2+1,2n_3+1)\\\\\n\\end{array}\n" }, { "math_id": 68, "text": "0\\leq n_1,n_2,n_3 \\leq \\frac{N}{2} -1" }, { "math_id": 69, "text": "c(\\varphi_i)=\\cos(\\varphi_i)" }, { "math_id": 70, "text": "X(k_1,k_2,k_3)=\\sum_{n_1=1}^{N-1}\\sum_{n_2=1}^{N-1}\\sum_{n_3=1}^{N-1}\\tilde{x}(n_1,n_2,n_3) \\cos(\\varphi k_1)\\cos(\\varphi k_2)\\cos(\\varphi k_3)\n" }, { "math_id": 71, "text": "\\varphi_i= \\frac{\\pi}{2N}(4N_i+1),\\text{ and } i= 1,2,3." }, { "math_id": 72, "text": "k_1,k_2" }, { "math_id": 73, "text": "k_3" }, { "math_id": 74, "text": "X(k_1,k_2,k_3)=\\sum_{n_1=1}^{\\tfrac N 2 -1}\\sum_{n_2=1}^{\\tfrac N 2 -1}\\sum_{n_1=1}^{\\tfrac N 2 -1}\\tilde{x}_{ijl}(n_1,n_2,n_3) \\cos(\\varphi (2k_1+i)\\cos(\\varphi (2k_2+j)\n\\cos(\\varphi (2k_3+l))" }, { "math_id": 75, "text": "\\tilde{x}_{ijl}(n_1,n_2,n_3)=\\tilde{x}(n_1,n_2,n_3)+(-1)^l\\tilde{x}\\left(n_1,n_2,n_3+\\frac{n}{2}\\right) " }, { "math_id": 76, "text": "+(-1)^j\\tilde{x}\\left(n_1,n_2+\\frac{n}{2},n_3\\right)+(-1)^{j+l}\\tilde{x}\\left(n_1,n_2+\\frac{n}{2},n_3+\\frac{n}{2}\\right) " }, { "math_id": 77, "text": "+(-1)^i\\tilde{x}\\left(n_1+\\frac{n}{2},n_2,n_3\\right)+(-1)^{i+j}\\tilde{x}\\left(n_1+\\frac{n}{2}+\\frac{n}{2},n_2,n_3\\right) " }, { "math_id": 78, "text": "+(-1)^{i+l}\\tilde{x}\\left(n_1+\\frac{n}{2},n_2,n_3+\\frac{n}{3}\\right)" }, { "math_id": 79, "text": "+(-1)^{i+j+l}\\tilde{x}\\left(n_1+\\frac{n}{2},n_2+\\frac{n}{2},n_3+\\frac{n}{2}\\right) \\text{ where } i,j,l= 0 \\text{ or } 1." 
}, { "math_id": 80, "text": "~ [\\log_2 N] ~" }, { "math_id": 81, "text": "~ \\tfrac{1}{8}\\ N^3 ~" }, { "math_id": 82, "text": "~ \\left[ \\tfrac{1}{8}\\ N^3 \\log_2 N \\right] ~" }, { "math_id": 83, "text": "~ \\left[ \\tfrac{7}{8}\\ N^3\\ \\log_2 N \\right] ~," }, { "math_id": 84, "text": "~ \\underbrace{\\left[\\frac{3}{2}N^3 \\log_2N\\right]}_\\text{Real}+\\underbrace{\\left[\\frac{3}{2}N^3 \\log_2N-3N^3+3N^2\\right]}_\\text{Recursive} = \\left[\\frac{9}{2}N^3 \\log_2N-3N^3+3N^2\\right] ~." }, { "math_id": 85, "text": "~\\left[\\frac{3}{2}N^3 \\log_2 N \\right]~" }, { "math_id": 86, "text": "~ \\left[\\frac{9}{2}N^3 \\log_2 N - 3N^3 + 3N^2 \\right] ~," }, { "math_id": 87, "text": "(~ N_1 = N_2 = 8 ~)" }, { "math_id": 88, "text": " X_{k,\\ell} =\n \\sum_{n=0}^{N-1} \\; \\sum_{m=0}^{M-1} \\ x_{n,m} \\cos\\left(\\ \\frac{\\,( 2 m + 1 )( 2 k + 1 )\\ \\pi \\,}{4N} \\ \\right) \\cos\\left(\\ \\frac{\\, ( 2n + 1 )( 2 \\ell + 1 )\\ \\pi \\,}{4M} \\ \\right) ~," }, { "math_id": 89, "text": "~~ k = 0,\\ 1,\\ 2\\ \\ldots\\ N-1 ~~" }, { "math_id": 90, "text": "~~ \\ell= 0,\\ 1,\\ 2,\\ \\ldots\\ M-1 ~." }, { "math_id": 91, "text": "~ \\mathcal{O}(N^2) ~" }, { "math_id": 92, "text": "~ \\mathcal{O}(N \\log N ) ~" }, { "math_id": 93, "text": "~\\mathcal{O}(N)~" }, { "math_id": 94, "text": "~\\mathcal{O}(N \\log N )~" }, { "math_id": 95, "text": "~ \\mathcal{O}(N) ~" }, { "math_id": 96, "text": "~ 4N ~" }, { "math_id": 97, "text": "~ N ~" }, { "math_id": 98, "text": "~ 2 N \\log_2 N - N + 2 ~" }, { "math_id": 99, "text": "~ \\tfrac{17}{9} N \\log_2 N + \\mathcal{O}(N)" }, { "math_id": 100, "text": "~ N ~," } ]
https://en.wikipedia.org/wiki?curid=59962
59968610
Learning curve (machine learning)
&lt;templatestyles src="Machine learning/styles.css"/&gt; In machine learning, a learning curve (or training curve) plots the optimal value of a model's loss function on a training set against the same loss function evaluated on a validation data set, using the same parameters that produced the optimal function. Synonyms include "error curve", "experience curve", "improvement curve" and "generalization curve". More abstractly, the learning curve is a curve of (learning effort)-(predictive performance), where usually learning effort means the number of training samples and predictive performance means accuracy on testing samples. Learning curves are useful for many purposes, including comparing different algorithms, choosing model parameters during design, adjusting optimization to improve convergence, and determining the amount of data used for training. Formal definition. In one model of machine learning, the task is to produce a function f(x) which, given some information x, predicts some variable y from training data formula_0 and formula_1. This is distinct from mathematical optimization because formula_2 should predict well for formula_3 outside of formula_4. We often constrain the possible functions to a parameterized family of functions, formula_5, so that our function is more generalizable, or so that the function has certain properties such as those that make finding a good formula_2 easier, or because we have some a priori reason to think that these properties are true. Given that it is not possible to produce a function that perfectly fits our data, it is then necessary to produce a loss function formula_6 to measure how good our prediction is. We then define an optimization process which finds a formula_7 that minimizes formula_8, referred to as formula_9. Training curve for amount of data. Then, if our training data is formula_10 and our validation data is formula_11, a learning curve is the plot of the two curves formula_12 and formula_13, where formula_14 Training curve for number of iterations. Many optimization processes are iterative, repeating the same step until the process converges to an optimal value. Gradient descent is one such algorithm. If you define formula_15 as the approximation of the optimal formula_16 after formula_17 steps, a learning curve is the plot of formula_18 and formula_19. Choosing the size of the training dataset. The learning curve is a tool to find out how much a machine learning model benefits from adding more training data and whether the estimator suffers more from a variance error or a bias error. If both the validation score and the training score converge to a value that is too low as the size of the training set increases, the model will not benefit much from more training data. In the machine learning domain, there are two implications of learning curves differing in the x-axis of the curves, with the experience of the model graphed either as the number of training examples used for learning or the number of iterations used in training the model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
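As a concrete illustration of the training curve for amount of data, the sketch below fits a simple least-squares polynomial model on growing prefixes of a training set and evaluates the same squared-error loss on a held-out validation set, giving the two curves described above. The model, data and loss are hypothetical choices made only for illustration; libraries such as scikit-learn provide ready-made learning-curve utilities, but the sketch uses only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a noisy cubic, split into training and validation sets.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = x**3 - 0.5 * x + 0.1 * rng.standard_normal(n)
    return x, y

x_train, y_train = make_data(200)
x_val, y_val = make_data(100)

def fit(x, y, degree=3):
    # Plays the role of theta*(X_i, Y_i): least-squares fit of a cubic.
    return np.polyfit(x, y, degree)

def loss(theta, x, y):
    # Mean squared error, playing the role of L(f_theta(X), Y).
    return np.mean((np.polyval(theta, x) - y) ** 2)

# Learning curve: loss on the first i training samples vs. loss on validation data.
for i in [10, 20, 50, 100, 200]:
    theta = fit(x_train[:i], y_train[:i])
    print(f"i={i:3d}  train loss={loss(theta, x_train[:i], y_train[:i]):.4f}"
          f"  validation loss={loss(theta, x_val, y_val):.4f}")
```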
[ { "math_id": 0, "text": "X_\\text{train} " }, { "math_id": 1, "text": "Y_\\text{train} " }, { "math_id": 2, "text": "f " }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "X_\\text{train}" }, { "math_id": 5, "text": "\\{f_\\theta(x): \\theta \\in \\Theta \\} " }, { "math_id": 6, "text": "L(f_\\theta(X), Y') " }, { "math_id": 7, "text": "\\theta " }, { "math_id": 8, "text": "L(f_\\theta(X_, Y)) " }, { "math_id": 9, "text": "\\theta^*(X, Y) " }, { "math_id": 10, "text": "\\{x_1, x_2, \\dots, x_n \\}, \\{ y_1, y_2, \\dots y_n \\} " }, { "math_id": 11, "text": "\\{ x_1', x_2', \\dots x_m' \\}, \\{ y_1', y_2', \\dots y_m' \\} " }, { "math_id": 12, "text": "i \\mapsto L(f_{\\theta^*(X_i, Y_i)}(X_i), Y_i ) " }, { "math_id": 13, "text": "i \\mapsto L(f_{\\theta^*(X_i, Y_i)}(X_i'), Y_i' ) " }, { "math_id": 14, "text": "X_i = \\{ x_1, x_2, \\dots x_i \\} " }, { "math_id": 15, "text": "\\theta_i^*" }, { "math_id": 16, "text": "\\theta" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "i \\mapsto L(f_{\\theta_i^*(X, Y)}(X), Y) " }, { "math_id": 19, "text": "i \\mapsto L(f_{\\theta_i^*(X, Y)}(X'), Y') " } ]
https://en.wikipedia.org/wiki?curid=59968610
59969558
Learning rate
Tuning parameter (hyperparameter) in optimization &lt;templatestyles src="Machine learning/styles.css"/&gt; In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model "learns". In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting. While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method. The learning rate is related to the step length determined by inexact line search in quasi-Newton methods and related optimization algorithms. Learning rate schedule. Initial rate can be left as system default or can be selected using a range of techniques. A learning rate schedule changes the learning rate during learning and is most often changed between epochs/iterations. This is mainly done with two parameters: decay and momentum. There are many different learning rate schedules but the most common are time-based, step-based and exponential. Decay serves to settle the learning in a nice place and avoid oscillations, a situation that may arise when a too high constant learning rate makes the learning jump back and forth over a minimum, and is controlled by a hyperparameter. Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and also avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to a ball's mass which must be chosen manually—too high and the ball will roll over minima which we wish to find, too low and it will not fulfil its purpose. The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras. Time-based learning schedules alter the learning rate depending on the learning rate of the previous time iteration. Factoring in the decay the mathematical formula for the learning rate is: formula_0 where formula_1 is the learning rate, formula_2 is a decay parameter and formula_3 is the iteration step. Step-based learning schedules changes the learning rate according to some predefined steps. 
The decay is applied according to the formula: formula_4 where formula_5 is the learning rate at iteration formula_3, formula_6 is the initial learning rate, formula_2 is how much the learning rate should change at each drop (0.5 corresponds to a halving), and formula_7 corresponds to the "drop rate", or how often the rate should be dropped (10 corresponds to a drop every 10 iterations). The floor function (formula_8) rounds its input down to the nearest integer; in particular, its value is 0 for all inputs smaller than 1, so the learning rate is only reduced once every formula_7 iterations. Exponential learning schedules are similar to step-based schedules, but instead of steps, a decreasing exponential function is used. The mathematical formula for factoring in the decay is: formula_9 where formula_2 is a decay parameter. Adaptive learning rate. The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms such as Adagrad, Adadelta, RMSprop, and Adam, which are generally built into deep learning libraries such as Keras. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
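The three schedules translate directly into code. The sketch below implements the time-based, step-based and exponential decay formulas exactly as written above (eta is the learning rate, d the decay parameter, r the drop rate, n the iteration index) and prints a few iterations of each; the particular parameter values are arbitrary illustrative choices.

```python
import math

def time_based(eta0, d, num_steps):
    # eta_{n+1} = eta_n / (1 + d * n)
    eta, rates = eta0, []
    for n in range(num_steps):
        rates.append(eta)
        eta = eta / (1 + d * n)
    return rates

def step_based(eta0, d, r, num_steps):
    # eta_n = eta_0 * d ** floor((1 + n) / r)
    return [eta0 * d ** math.floor((1 + n) / r) for n in range(num_steps)]

def exponential(eta0, d, num_steps):
    # eta_n = eta_0 * exp(-d * n)
    return [eta0 * math.exp(-d * n) for n in range(num_steps)]

steps = 10
print("time-based :", [round(v, 4) for v in time_based(0.1, d=0.01, num_steps=steps)])
print("step-based :", [round(v, 4) for v in step_based(0.1, d=0.5, r=5, num_steps=steps)])
print("exponential:", [round(v, 4) for v in exponential(0.1, d=0.1, num_steps=steps)])
```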
[ { "math_id": 0, "text": "\\eta_{n+1} = \\frac{\\eta_n }{1+dn}" }, { "math_id": 1, "text": "\\eta" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\eta_{n} = \\eta_0d^{\\left\\lfloor\\frac{1+n}{r}\\right\\rfloor}" }, { "math_id": 5, "text": "\\eta_{n}" }, { "math_id": 6, "text": "\\eta_0" }, { "math_id": 7, "text": "r" }, { "math_id": 8, "text": "\\lfloor\\dots\\rfloor" }, { "math_id": 9, "text": "\\eta_{n} = \\eta_0e^{-dn}" } ]
https://en.wikipedia.org/wiki?curid=59969558
5997445
TED spread
Difference between the interest rates on interbank loans The TED spread is the difference between the interest rates on interbank loans and on short-term U.S. government debt ("T-bills"). TED is an acronym formed from "T-Bill" and "ED", the ticker symbol for the Eurodollar futures contract. Initially, the TED spread was the difference between the interest rates for three-month U.S. Treasuries contracts and the three-month Eurodollars contract as represented by the London Interbank Offered Rate (LIBOR). However, since the Chicago Mercantile Exchange dropped T-bill futures after the 1987 crash, the TED spread is now calculated as the difference between the three-month LIBOR and the three-month T-bill interest rate. Formula and reading. formula_0 The size of the spread is usually denominated in basis points (bps). For example, if the T-bill rate is 5.10% and ED trades at 5.50%, the TED spread is 40 bps. The TED spread fluctuates over time but generally has remained within the range of 10 and 50 bps (0.1% and 0.5%) except in times of financial crisis. A rising TED spread often presages a downturn in the U.S. stock market, as it indicates that liquidity is being withdrawn. Indicator of counterparty risk. The TED spread is an indicator of perceived credit risk in the general economy, since T-bills are considered risk-free while LIBOR reflects the credit risk of lending to commercial banks. An increase in the TED spread is a sign that lenders believe the risk of default on interbank loans (also known as counterparty risk) is increasing. Interbank lenders, therefore, demand a higher rate of interest, or accept lower returns on safe investments such as T-bills. When the risk of bank defaults is considered to be decreasing, the TED spread decreases. Boudt, Paulus, and Rosenthal show that a TED spread above 48 basis points is indicative of economic crisis. Historical levels. Highs. The long-term average of the TED spread has been 30 basis points with a maximum of 50 bps. During 2007, the subprime mortgage crisis ballooned the TED spread to a region of 150–200 bps. On September 17, 2008, the TED spread exceeded 300 bps, breaking the previous record set after the Black Monday crash of 1987. Some higher readings for the spread were due to inability to obtain accurate LIBOR rates in the absence of a liquid unsecured lending market. On October 10, 2008, the TED spread reached another new high of 457 basis points. Lows. In October 2013, due to worries regarding a potential default on US debt, the 1-month TED went negative for the first time since tracking started. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
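Since the spread is just the difference of two annualized rates, it is usually quoted in basis points; the small helper below (illustrative only) does the conversion and reproduces the 40 bps example given above.

```python
def ted_spread_bps(libor_3m_pct, tbill_3m_pct):
    """TED spread = 3-month LIBOR rate - 3-month T-bill rate, in basis points."""
    return (libor_3m_pct - tbill_3m_pct) * 100  # 1 percentage point = 100 bps

print(f"{ted_spread_bps(5.50, 5.10):.1f} bps")  # 40.0 bps, matching the example above
```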
[ { "math_id": 0, "text": "\\mbox{TED spread} = {\\mbox{3-month LIBOR rate} - \\mbox{3-month T-bill interest rate}}" } ]
https://en.wikipedia.org/wiki?curid=5997445
59979
Roman surface
Self-intersecting, highly symmetrical mapping of the real projective plane into 3D space In mathematics, the Roman surface or Steiner surface is a self-intersecting mapping of the real projective plane into three-dimensional space, with an unusually high degree of symmetry. This mapping is not an immersion of the projective plane; however, the figure resulting from removing six singular points is one. Its name arises because it was discovered by Jakob Steiner when he was in Rome in 1844. The simplest construction is as the image of a sphere centered at the origin under the map formula_0 This gives an implicit formula of formula_1 Also, taking a parametrization of the sphere in terms of longitude (θ) and latitude (φ), gives parametric equations for the Roman surface as follows: formula_2 formula_3 formula_4 The origin is a triple point, and each of the xy-, yz-, and xz-planes are tangential to the surface there. The other places of self-intersection are double points, defining segments along each coordinate axis which terminate in six pinch points. The entire surface has tetrahedral symmetry. It is a particular type (called type 1) of Steiner surface, that is, a 3-dimensional linear projection of the Veronese surface. Derivation of implicit formula. For simplicity we consider only the case "r" = 1. Given the sphere defined by the points ("x", "y", "z") such that formula_5 we apply to these points the transformation "T" defined by formula_6 say. But then we have formula_7 and so formula_8 as desired. Conversely, suppose we are given ("U", "V", "W") satisfying (*) formula_9 We prove that there exists ("x","y","z") such that (**) formula_5 for which formula_10 with one exception: In case 3.b. below, we show this cannot be proved. 1. In the case where none of "U", "V", "W" is 0, we can set formula_11 It is easy to use (*) to confirm that (**) holds for "x", "y", "z" defined this way. 2. Suppose that "W" is 0. From (*) this implies formula_12 and hence at least one of "U", "V" must be 0 also. This shows that is it impossible for exactly one of "U", "V", "W" to be 0. 3. Suppose that exactly two of "U", "V", "W" are 0. Without loss of generality we assume (***)formula_13 It follows that formula_14 a. In the subcase where formula_18 if we determine "x" and "y" by formula_19 and formula_20 this ensures that (*) holds. It is easy to verify that formula_21 and hence choosing the signs of "x" and "y" appropriately will guarantee formula_22 Since also formula_23 this shows that this subcase leads to the desired converse. b. In this remaining subcase of the case 3., we have formula_24 Since formula_25 it is easy to check that formula_26 and thus in this case, where formula_27 there is no ("x", "y", "z") satisfying formula_28 Hence the solutions ("U", 0, 0) of the equation (*) with formula_29 and likewise, (0, "V", 0) with formula_30 and (0, 0, "W") with formula_31 (each of which is a noncompact portion of a coordinate axis, in two pieces) do not correspond to any point on the Roman surface. 4. If ("U", "V", "W") is the point (0, 0, 0), then if any two of "x", "y", "z" are zero and the third one has absolute value 1, clearly formula_32 as desired. This covers all possible cases. Derivation of parametric equations. Let a sphere have radius "r", longitude "φ", and latitude "θ". Then its parametric equations are formula_33 formula_34 formula_35 Then, applying transformation "T" to all the points on this sphere yields formula_36 formula_37 formula_38 which are the points on the Roman surface. 
Let "φ" range from 0 to 2π, and let "θ" range from 0 to "π/2". Relation to the real projective plane. The sphere, before being transformed, is not homeomorphic to the real projective plane, "RP2". But the sphere centered at the origin has this property, that if point "(x,y,z)" belongs to the sphere, then so does the antipodal point "(-x,-y,-z)" and these two points are different: they lie on opposite sides of the center of the sphere. The transformation "T" converts both of these antipodal points into the same point, formula_39 formula_40 Since this is true of all points of S2, then it is clear that the Roman surface is a continuous image of a "sphere modulo antipodes". Because some distinct pairs of antipodes are all taken to identical points in the Roman surface, it is not homeomorphic to "RP2", but is instead a quotient of the real projective plane "RP2 = S2 / (x~-x)". Furthermore, the map T (above) from S2 to this quotient has the special property that it is locally injective away from six pairs of antipodal points. Or from RP2 the resulting map making this an immersion of RP2 — minus six points — into 3-space. Structure of the Roman surface. The Roman surface has four bulbous "lobes", each one on a different corner of a tetrahedron. A Roman surface can be constructed by splicing together three hyperbolic paraboloids and then smoothing out the edges as necessary so that it will fit a desired shape (e.g. parametrization). Let there be these three hyperbolic paraboloids: These three hyperbolic paraboloids intersect externally along the six edges of a tetrahedron and internally along the three axes. The internal intersections are loci of double points. The three loci of double points: "x" = 0, "y" = 0, and "z" = 0, intersect at a triple point at the origin. For example, given "x" = "yz" and "y" = "zx", the second paraboloid is equivalent to "x" = "y"/"z". Then formula_41 and either "y" = 0 or "z"2 = 1 so that "z" = ±1. Their two external intersections are Likewise, the other external intersections are Let us see the pieces being put together. Join the paraboloids "y" = "xz" and "x" = "yz". The result is shown in Figure 1. The paraboloid "y = x z" is shown in blue and orange. The paraboloid "x = y z" is shown in cyan and purple. In the image the paraboloids are seen to intersect along the "z = 0" axis. If the paraboloids are extended, they should also be seen to intersect along the lines The two paraboloids together look like a pair of orchids joined back-to-back. Now run the third hyperbolic paraboloid, "z" = "xy", through them. The result is shown in Figure 2. On the west-southwest and east-northeast directions in Figure 2 there are a pair of openings. These openings are lobes and need to be closed up. When the openings are closed up, the result is the Roman surface shown in Figure 3. A pair of lobes can be seen in the West and East directions of Figure 3. Another pair of lobes are hidden underneath the third ("z" = "xy") paraboloid and lie in the North and South directions. If the three intersecting hyperbolic paraboloids are drawn far enough that they intersect along the edges of a tetrahedron, then the result is as shown in Figure 4. One of the lobes is seen frontally—head on—in Figure 4. The lobe can be seen to be one of the four corners of the tetrahedron. If the continuous surface in Figure 4 has its sharp edges rounded out—smoothed out—then the result is the Roman surface in Figure 5. 
One of the lobes of the Roman surface is seen frontally in Figure 5, and its bulbous – balloon-like—shape is evident. If the surface in Figure 5 is turned around 180 degrees and then turned upside down, the result is as shown in Figure 6. Figure 6 shows three lobes seen sideways. Between each pair of lobes there is a locus of double points corresponding to a coordinate axis. The three loci intersect at a triple point at the origin. The fourth lobe is hidden and points in the direction directly opposite from the viewer. The Roman surface shown at the top of this article also has three lobes in sideways view. One-sidedness. The Roman surface is non-orientable, i.e. one-sided. This is not quite obvious. To see this, look again at Figure 3. Imagine an ant on top of the "third" hyperbolic paraboloid, "z = x y". Let this ant move North. As it moves, it will pass through the other two paraboloids, like a ghost passing through a wall. These other paraboloids only seem like obstacles due to the self-intersecting nature of the immersion. Let the ant ignore all double and triple points and pass right through them. So the ant moves to the North and falls off the edge of the world, so to speak. It now finds itself on the northern lobe, hidden underneath the third paraboloid of Figure 3. The ant is standing upside-down, on the "outside" of the Roman surface. Let the ant move towards the Southwest. It will climb a slope (upside-down) until it finds itself "inside" the Western lobe. Now let the ant move in a Southeastern direction along the inside of the Western lobe towards the "z = 0" axis, always above the "x-y" plane. As soon as it passes through the "z = 0" axis the ant will be on the "outside" of the Eastern lobe, standing rightside-up. Then let it move Northwards, over "the hill", then towards the Northwest so that it starts sliding down towards the "x = 0" axis. As soon as the ant crosses this axis it will find itself "inside" the Northern lobe, standing right side up. Now let the ant walk towards the North. It will climb up the wall, then along the "roof" of the Northern lobe. The ant is back on the third hyperbolic paraboloid, but this time under it and standing upside-down. (Compare with Klein bottle.) Double, triple, and pinching points. The Roman surface has four "lobes". The boundaries of each lobe are a set of three lines of double points. Between each pair of lobes there is a line of double points. The surface has a total of three lines of double points, which lie (in the parametrization given earlier) on the coordinate axes. The three lines of double points intersect at a triple point which lies on the origin. The triple point cuts the lines of double points into a pair of half-lines, and each half-line lies between a pair of lobes. One might expect from the preceding statements that there could be up to eight lobes, one in each octant of space which has been divided by the coordinate planes. But the lobes occupy alternating octants: four octants are empty and four are occupied by lobes. If the Roman surface were to be inscribed inside the tetrahedron with least possible volume, one would find that each edge of the tetrahedron is tangent to the Roman surface at a point, and that each of these six points happens to be a "Whitney singularity". These singularities, or pinching points, all lie at the edges of the three lines of double points, and they are defined by this property: that there is no plane tangent to any surface at the singularity. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
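Both descriptions of the surface given above, the image of the unit sphere under the map (x, y, z) -> (yz, xz, xy) and the explicit parametrization in longitude theta and latitude phi, can be checked numerically against the implicit equation x^2 y^2 + y^2 z^2 + z^2 x^2 - x y z = 0 (the r = 1 case). The NumPy sketch below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check 1: map random points of the unit sphere through T(x, y, z) = (yz, zx, xy).
p = rng.standard_normal((1000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
x, y, z = p.T
U, V, W = y * z, z * x, x * y
# Image points satisfy U^2 V^2 + V^2 W^2 + W^2 U^2 - U V W = 0 (r = 1 case).
assert np.max(np.abs(U**2 * V**2 + V**2 * W**2 + W**2 * U**2 - U * V * W)) < 1e-10

# Check 2: the explicit parametrization (longitude theta, latitude phi, r = 1)
# satisfies the same implicit equation.
theta = rng.uniform(0, 2 * np.pi, 1000)
phi = rng.uniform(0, np.pi / 2, 1000)
xs = np.cos(theta) * np.cos(phi) * np.sin(phi)
ys = np.sin(theta) * np.cos(phi) * np.sin(phi)
zs = np.cos(theta) * np.sin(theta) * np.cos(phi) ** 2
res = xs**2 * ys**2 + ys**2 * zs**2 + zs**2 * xs**2 - xs * ys * zs
assert np.max(np.abs(res)) < 1e-10

print("Both descriptions satisfy the implicit equation of the Roman surface")
```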
[ { "math_id": 0, "text": "f(x,y,z)=(yz,xz,xy)." }, { "math_id": 1, "text": " x^2 y^2 + y^2 z^2 + z^2 x^2 - r^2 x y z = 0. \\," }, { "math_id": 2, "text": "x=r^{2} \\cos \\theta \\cos \\varphi \\sin \\varphi" }, { "math_id": 3, "text": "y=r^{2} \\sin \\theta \\cos \\varphi \\sin \\varphi" }, { "math_id": 4, "text": "z=r^{2} \\cos \\theta \\sin \\theta \\cos^{2} \\varphi " }, { "math_id": 5, "text": "x^2 + y^2 + z^2 = 1,\\," }, { "math_id": 6, "text": " T(x, y, z) = (y z, z x, x y) = (U,V,W),\\, " }, { "math_id": 7, "text": "\n\\begin{align}\nU^2 V^2 + V^2 W^2 + W^2 U^2 & = z^2 x^2 y^4 + x^2 y^2 z^4 + y^2 z^2 x^4 = (x^2 + y^2 + z^2)(x^2 y^2 z^2) \\\\[8pt]\n& = (1)(x^2 y^2 z^2) = (xy) (yz) (zx) = U V W,\n\\end{align}\n" }, { "math_id": 8, "text": "U^2 V^2 + V^2 W^2 + W^2 U^2 - U V W = 0\\," }, { "math_id": 9, "text": "U^2 V^2 + V^2 W^2 + W^2 U^2 - U V W = 0.\\," }, { "math_id": 10, "text": "U = x y, V = y z, W = z x,\\," }, { "math_id": 11, "text": "x = \\sqrt{\\frac{WU}{V}},\\ y = \\sqrt{\\frac{UV}{W}},\\ z = \\sqrt{\\frac{VW}{U}}.\\," }, { "math_id": 12, "text": "U^2 V^2 = 0\\," }, { "math_id": 13, "text": " U \\neq 0, V = W = 0.\\," }, { "math_id": 14, "text": "z = 0,\\," }, { "math_id": 15, "text": "z \\neq 0,\\," }, { "math_id": 16, "text": "x = y = 0,\\," }, { "math_id": 17, "text": "U = 0,\\," }, { "math_id": 18, "text": "|U| \\leq \\frac{1}{2}," }, { "math_id": 19, "text": "x^2 = \\frac{1 + \\sqrt{1 - 4 U^2}}{2}" }, { "math_id": 20, "text": "y^2 = \\frac{1 - \\sqrt{1 - 4 U^2}}{2}," }, { "math_id": 21, "text": "x^2 y^2 = U^2,\\," }, { "math_id": 22, "text": " x y = U.\\," }, { "math_id": 23, "text": "y z = 0 = V\\text{ and }z x = 0 = W,\\," }, { "math_id": 24, "text": "|U| > \\frac{1}{2}." }, { "math_id": 25, "text": "x^2 + y^2 = 1,\\," }, { "math_id": 26, "text": "xy \\leq \\frac{1}{2}," }, { "math_id": 27, "text": "|U| >1/2,\\ V = W = 0," }, { "math_id": 28, "text": " U = xy,\\ V = yz,\\ W =zx." }, { "math_id": 29, "text": "|U| > \\frac12" }, { "math_id": 30, "text": "|V| > \\frac12" }, { "math_id": 31, "text": "|W| > \\frac12" }, { "math_id": 32, "text": "(xy, yz, zx) = (0, 0, 0) = (U, V, W)\\," }, { "math_id": 33, "text": " x = r \\, \\cos \\theta \\, \\cos \\phi, " }, { "math_id": 34, "text": " y = r \\, \\cos \\theta \\, \\sin \\phi, " }, { "math_id": 35, "text": " z = r \\, \\sin \\theta. " }, { "math_id": 36, "text": " x' = y z = r^2 \\, \\cos \\theta \\, \\sin \\theta \\, \\sin \\phi, " }, { "math_id": 37, "text": " y' = z x = r^2 \\, \\cos \\theta \\, \\sin \\theta \\, \\cos \\phi, " }, { "math_id": 38, "text": " z' = x y = r^2 \\, \\cos^2 \\theta \\, \\cos \\phi \\, \\sin \\phi, " }, { "math_id": 39, "text": " T : (x, y, z) \\rightarrow (y z, z x, x y), " }, { "math_id": 40, "text": " T : (-x, -y, -z) \\rightarrow ((-y) (-z), (-z) (-x), (-x) (-y)) = (y z, z x, x y). " }, { "math_id": 41, "text": " y z = {y \\over z} " } ]
https://en.wikipedia.org/wiki?curid=59979
59982100
StyleGAN
Novel generative adversarial network StyleGAN is a generative adversarial network (GAN) introduced by Nvidia researchers in December 2018, and made source available in February 2019. StyleGAN depends on Nvidia's CUDA software, GPUs, and Google's TensorFlow, or Meta AI's PyTorch, which supersedes TensorFlow as the official implementation library in later StyleGAN versions. The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.&lt;ref name="NVlabs/stylegan2"&gt;&lt;/ref&gt; Nvidia introduced StyleGAN3, described as an "alias-free" version, on June 23, 2021, and made source available on October 12, 2021. History. A direct predecessor of the StyleGAN series is the Progressive GAN, published in 2017. In December 2018, Nvidia researchers distributed a preprint with accompanying software introducing StyleGAN, a GAN for producing an unlimited number of (often convincing) portraits of fake human faces. StyleGAN was able to run on Nvidia's commodity GPU processors. In February 2019, Uber engineer Phillip Wang used the software to create the website This Person Does Not Exist, which displayed a new face on each web page reload. Wang himself has expressed amazement, given that humans are evolved to specifically understand human faces, that nevertheless StyleGAN can competitively "pick apart all the relevant features (of human faces) and recompose them in a way that's coherent." In September 2019, a website called Generated Photos published 100,000 images as a collection of stock photos. The collection was made using a private dataset shot in a controlled environment with similar light and angles. Similarly, two faculty at the University of Washington's Information School used StyleGAN to create "Which Face is Real?", which challenged visitors to differentiate between a fake and a real face side by side. The faculty stated the intention was to "educate the public" about the existence of this technology so they could be wary of it, "just like eventually most people were made aware that you can Photoshop an image". The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality. In 2021, a third version was released, improving consistency between fine and coarse details in the generator. Dubbed "alias-free", this version was implemented with pytorch. Illicit use. In December 2019, Facebook took down a network of accounts with false identities, and mentioned that some of them had used profile pictures created with machine learning techniques. Architecture. Progressive GAN. Progressive GAN is a method for training GAN for large-scale image generation stably, by growing a GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as formula_0, and the discriminator as formula_1. During training, at first only formula_2 are used in a GAN game to generate 4x4 images. Then formula_3 are added to reach the second stage of GAN game, to generate 8x8 images, and so on, until we reach a GAN game to generate 1024x1024 images. To avoid discontinuity between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper). For example, this is how the second stage GAN game starts: StyleGAN. StyleGAN is designed as a combination of Progressive GAN with neural style transfer. 
The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant formula_7 array, and repeatedly passed through style blocks. Each style block applies a "style latent vector" via affine transform ("adaptive instance normalization"), similar to how neural style transfer uses Gramian matrix. It then adds noise, and normalize (subtract the mean, then divide by the variance). At training time, usually only one style latent vector is used per image generated, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector). After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles. Style-mixing between two images formula_8 can be performed as well. First, run a gradient descent to find formula_9 such that formula_10. This is called "projecting an image back to style latent space". Then, formula_11 can be fed to the lower style blocks, and formula_12 to the higher style blocks, to generate a composite image that has the large-scale style of formula_13, and the fine-detail style of formula_14. Multiple images can also be composed this way. StyleGAN2. StyleGAN2 improves upon StyleGAN in two ways. One, it applies the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem. The "blob" problem roughly speaking is because using the style latent vector to normalize the generated image destroys useful information. Consequently, the generator learned to create a "distraction" by a large blob, which absorbs most of the effect of normalization (somewhat similar to using flares to distract a heat-seeking missile). Two, it uses residual connections, which helps it avoid the phenomenon where certain features are stuck at intervals of pixels. For example, the seam between two teeth may be stuck at pixels divisible by 32, because the generator learned to generate teeth during stage N-5, and consequently could only generate primitive teeth at that stage, before scaling up 5 times (thus intervals of 32). This was updated by the StyleGAN2-ADA ("ADA" stands for "adaptive"), which uses invertible data augmentation. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an "overfitting heuristic" reaches a target level, thus the name "adaptive". StyleGAN3. StyleGAN3 improves upon StyleGAN2 by solving the "texture sticking" problem, which can be seen in the official videos. They analyzed the problem by the Nyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon. To solve this, they proposed imposing strict lowpass filters between each generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 is able to generate images that rotate and translate smoothly, and without texture sticking. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
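The style block operation described above (normalize the feature map per channel, apply a per-channel scale and bias obtained from an affine transform of the style latent vector, and add noise) can be sketched in a few lines of NumPy. This is a schematic illustration of adaptive instance normalization as used in StyleGAN-1, not Nvidia's implementation; the array shapes, the toy affine matrix and the exact ordering of the noise and normalization steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def adain_style_block(features, w, affine, noise_scale=0.1):
    """Schematic StyleGAN-1 style block (illustrative only).

    features: (C, H, W) feature map, w: (D,) style latent vector,
    affine:   (2*C, D) matrix mapping w to per-channel (scale, bias).
    """
    C = features.shape[0]
    # Affine transform of the style vector gives per-channel scale and bias.
    style = affine @ w
    scale, bias = 1.0 + style[:C], style[C:]
    # Instance normalization: zero mean, unit variance per channel.
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True) + 1e-8
    normalized = (features - mean) / std
    # Apply the style, then add per-pixel noise.
    styled = scale[:, None, None] * normalized + bias[:, None, None]
    return styled + noise_scale * rng.standard_normal(features.shape)

# Toy sizes: 512-dimensional style vector, 8 channels on a 4x4 grid.
C, D = 8, 512
features = rng.standard_normal((C, 4, 4))
w = rng.standard_normal(D)
affine = 0.01 * rng.standard_normal((2 * C, D))
print(adain_style_block(features, w, affine).shape)  # (8, 4, 4)
```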
[ { "math_id": 0, "text": "G = G_1 \\circ G_2 \\circ \\cdots \\circ G_N" }, { "math_id": 1, "text": "D = D_N \\circ D_{N-1} \\circ \\cdots \\circ D_1" }, { "math_id": 2, "text": "G_N, D_N" }, { "math_id": 3, "text": "G_{N-1}, D_{N-1}" }, { "math_id": 4, "text": "((1-\\alpha) + \\alpha\\cdot G_{N-1})\\circ u \\circ G_N, D_N \\circ d \\circ ((1-\\alpha) + \\alpha\\cdot D_{N-1})" }, { "math_id": 5, "text": "u, d" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "4\\times 4 \\times 512" }, { "math_id": 8, "text": "x, x'" }, { "math_id": 9, "text": "z, z'" }, { "math_id": 10, "text": "G(z)\\approx x, G(z')\\approx x'" }, { "math_id": 11, "text": "z" }, { "math_id": 12, "text": "z'" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "x'" } ]
https://en.wikipedia.org/wiki?curid=59982100
599865
Brill–Noether theory
Field of algebraic geometry In algebraic geometry, Brill–Noether theory, introduced by Alexander von Brill and Max Noether (1874), is the study of special divisors, certain divisors on a curve C that determine more compatible functions than would be predicted. In classical language, special divisors move on the curve in a "larger than expected" linear system of divisors. Throughout, we consider a projective smooth curve over the complex numbers (or over some other algebraically closed field). The condition to be a special divisor D can be formulated in sheaf cohomology terms, as the non-vanishing of the "H"1 cohomology of the sheaf of sections of the invertible sheaf or line bundle associated to D. This means that, by the Riemann–Roch theorem, the "H"0 cohomology or space of holomorphic sections is larger than expected. Alternatively, by Serre duality, the condition is that there exist holomorphic differentials with divisor ≥ –"D" on the curve. Main theorems of Brill–Noether theory. For a given genus g, the moduli space for curves C of genus g should contain a dense subset parameterizing those curves with the minimum in the way of special divisors. One goal of the theory is to 'count constants' for those curves: to predict the dimension of the space of special divisors (up to linear equivalence) of a given degree d, as a function of g, that "must" be present on a curve of that genus. The basic statement can be formulated in terms of the Picard variety Pic("C") of a smooth curve C, and the subset of Pic("C") corresponding to divisor classes of divisors D, with given values d of deg("D") and r of "l"("D") – 1 in the notation of the Riemann–Roch theorem. There is a lower bound ρ for the dimension dim("d", "r", "g") of this subscheme in Pic("C"): formula_0 called the Brill–Noether number. The formula can be memorized via the mnemonic (using our desired formula_1 and Riemann–Roch) formula_2 For smooth curves C and for "d" ≥ 1, "r" ≥ 0, the basic results about the space of linear systems on C of degree d and dimension r are as follows. Other more recent results, not necessarily stated in terms of the space of linear systems, are: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
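Since the Brill–Noether number is a closed-form expression in g, r and d, it is trivial to tabulate; the sketch below simply evaluates rho = g - (r + 1)(g - d + r) for a few arbitrary sample triples.

```python
def brill_noether_number(g, r, d):
    """rho = g - (r + 1) * (g - d + r), the lower bound for dim(d, r, g)."""
    return g - (r + 1) * (g - d + r)

# A few sample values of the lower bound rho.
for g, r, d in [(2, 1, 2), (3, 1, 3), (4, 1, 3), (5, 1, 4), (10, 2, 9)]:
    print(f"g={g:2d} r={r} d={d:2d}  rho={brill_noether_number(g, r, d):3d}")
```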
[ { "math_id": 0, "text": "\\dim(d,r,g) \\geq \\rho = g-(r+1)(g-d+r)" }, { "math_id": 1, "text": "h^0(D) = r+1 " }, { "math_id": 2, "text": "g-(r+1)(g-d+r) = g - h^0(D)h^1(D)" }, { "math_id": 3, "text": "H^0(\\mathcal{O}_{\\mathbb{P}^r}(n))\\rightarrow H^0(\\mathcal{O}_{C}(n))" }, { "math_id": 4, "text": "(r-1)n \\leq (r + 1)d - (r-3)(g-1)," } ]
https://en.wikipedia.org/wiki?curid=599865
5998747
Helium dilution technique
Lung function test The helium dilution technique is a method of measuring the functional residual capacity of the lungs (the volume left in the lungs after normal expiration). This technique uses a closed-circuit system in which a spirometer is filled with a mixture of helium (He) and oxygen. The amount of He in the spirometer is known at the beginning of the test (concentration × volume = amount). The patient is then asked to take normal breaths of the mixture, starting from FRC (functional residual capacity), which is the gas volume in the lung after a normal breath out. The spirometer measures helium concentration. The helium spreads into the lungs of the patient, and settles at a new concentration (C2). Because there is no leak of substances in the system, the amount of helium remains constant during the test, and the FRC is calculated by using the following equation: formula_0 formula_1 formula_2 V2 = total gas volume (FRC + volume of spirometer) V1 = volume of gas in spirometer C1 = initial (known) helium concentration C2 = final helium concentration (measured by the spirometer) Measure. Note that to measure FRC, the patient must be connected to the spirometer directly after a normal breath out (when the lung volume equals FRC); if the patient is instead connected to the spirometer at a different lung volume (such as TLC or RV), the measured volume will be that initial volume and not FRC. In patients with obstructive pulmonary diseases, the measurements of the helium dilution technique are not reliable because of incomplete equilibration of the helium in all areas of the lungs. In such cases it is more accurate to use a body plethysmograph. A simplified helium dilution technique may be used as an alternative to quantitative CT scans to assess end-expiratory lung volumes (EELV) among mechanically ventilated patients with a diagnosis of ALI/ARDS, according to a cross-sectional study. The results show a good correlation [EELV(He) = 208 + 0.858 × EELV(CT), "r" = 0.941, "p" &lt; 0.001] between the two methods, and the helium dilution technique offers the advantages of lower cost, decreased transportation of critically ill patients, and reduced radiation exposure. This study's results may have limited generalizability due to its specificity to the ALI/ARDS population and its small sample size (21 patients).
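Rearranging the helium-conservation equation C1*V1 = C2*(V1 + FRC) gives FRC = C1*V1/C2 - V1, which the short sketch below evaluates (the numbers are illustrative, not clinical data).

```python
def frc_helium_dilution(c1, v1, c2):
    """FRC from helium dilution: C1*V1 = C2*(V1 + FRC)  =>  FRC = C1*V1/C2 - V1."""
    return c1 * v1 / c2 - v1

# Example: spirometer volume 5 L, helium diluted from 10% to 7.5%.
print(f"FRC = {frc_helium_dilution(c1=0.10, v1=5.0, c2=0.075):.2f} L")  # 1.67 L
```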
[ { "math_id": 0, "text": "C_1V_1=C_2V_2" }, { "math_id": 1, "text": "C_1 V_1 = C_2 (V_1 + FRC)" }, { "math_id": 2, "text": "FRC = \\frac{C1V1}{C2}- V1" } ]
https://en.wikipedia.org/wiki?curid=5998747
59995500
Crouzeix's conjecture
Unsolved problem in matrix analysis Crouzeix's conjecture is an unsolved problem in matrix analysis. It was proposed by Michel Crouzeix in 2004, and it can be stated as follows: formula_0 where the set formula_1 is the field of values of an "n"×"n" (i.e. square) complex matrix formula_2 and formula_3 is a complex function that is analytic in the interior of formula_1 and continuous up to the boundary of formula_1. Slightly reformulated, the conjecture can also be stated as follows: for all square complex matrices formula_2 and all complex polynomials formula_4: formula_5 holds, where the norm on the left-hand side is the spectral operator 2-norm. History. Crouzeix's theorem, proved in 2007, states that: formula_6 (the constant formula_7 is independent of the matrix dimension, and thus transferable to infinite-dimensional settings). Michel Crouzeix and Cesar Palencia proved in 2017 that the bound holds with the constant formula_8, improving on the original constant formula_7. The conjecture, which remains unproven, states that the constant can be refined to formula_9. Special cases. While the general case remains open, the conjecture is known to hold in some special cases. For instance, it holds for all normal matrices, for tridiagonal 3×3 matrices with elliptic field of values centered at an eigenvalue, and for general "n"×"n" matrices that are nearly Jordan blocks. Furthermore, Anne Greenbaum and Michael L. Overton provided numerical support for Crouzeix's conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
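The conjecture can be explored numerically. The sketch below approximates the boundary of the field of values W(A) by a standard eigenvalue method (for each angle theta, the top eigenvector of the Hermitian part of exp(i*theta)*A yields a boundary point), evaluates the maximum of |p(z)| over those boundary points, and compares it with the spectral norm of p(A) for a random matrix and polynomial. It is an illustrative check of the inequality, not a proof technique; all sizes and names are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def field_of_values_boundary(A, num_angles=360):
    """Boundary points of W(A) from the Hermitian parts of rotated copies of A."""
    points = []
    for theta in np.linspace(0, 2 * np.pi, num_angles, endpoint=False):
        M = np.exp(1j * theta) * A
        H = (M + np.conj(M).T) / 2
        eigvals, eigvecs = np.linalg.eigh(H)
        v = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
        points.append(np.vdot(v, A @ v))   # boundary point v* A v
    return np.array(points)

def matrix_polynomial(coeffs, A):
    """Evaluate p(A), coefficients given highest degree first, via Horner's rule."""
    P = np.zeros_like(A)
    for c in coeffs:
        P = P @ A + c * np.eye(A.shape[0])
    return P

n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
coeffs = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # a random cubic p

lhs = np.linalg.norm(matrix_polynomial(coeffs, A), 2)            # ||p(A)||_2
rhs = np.max(np.abs(np.polyval(coeffs, field_of_values_boundary(A))))
print(f"||p(A)|| / max over W(A) of |p| = {lhs / rhs:.3f}  (conjectured to be at most 2)")
```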
[ { "math_id": 0, "text": "\\|f(A)\\| \\le 2 \\sup_{z\\in W(A)} |f(z)|," }, { "math_id": 1, "text": "W(A)" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "\\|p(A)\\| \\le 2 \\sup_{z\\in W(A)} |p(z)|" }, { "math_id": 6, "text": "\\|f(A)\\| \\le 11.08 \\sup_{z\\in W(A)} |f(z)|" }, { "math_id": 7, "text": "11.08" }, { "math_id": 8, "text": "1+\\sqrt{2}" }, { "math_id": 9, "text": "2" } ]
https://en.wikipedia.org/wiki?curid=59995500
60000338
Tripod packing
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: How many tripods can have their apexes packed into a given cube? In combinatorics, tripod packing is a problem of finding many disjoint tripods in a three-dimensional grid, where a tripod is an infinite polycube, the union of the grid cubes along three positive axis-aligned rays with a shared apex. Several problems of tiling and packing tripods and related shapes were formulated in 1967 by Sherman K. Stein. Stein originally called the tripods of this problem "semicrosses", and they were also called Stein corners by Solomon W. Golomb. A collection of disjoint tripods can be represented compactly as a monotonic matrix, a square matrix whose nonzero entries increase along each row and column and whose equal nonzero entries are placed in a monotonic sequence of cells, and the problem can also be formulated in terms of finding sets of triples satisfying a compatibility condition called "2-comparability", or of finding compatible sets of triangles in a convex polygon. The best lower bound known for the number of tripods that can have their apexes packed into an formula_0 grid is formula_1, and the best upper bound is formula_2, both expressed in big Omega notation. Equivalent problems. The coordinates formula_3 of the apexes of a solution to the tripod problem form a 2-comparable sets of triples, where two triples are defined as being 2-comparable if there are either at least two coordinates where one triple is smaller than the other, or at least two coordinates where one triple is larger than the other. This condition ensures that the tripods defined from these triples do not have intersecting rays. Another equivalent two-dimensional version of the question asks how many cells of an formula_4 array of square cells (indexed from formula_5 to formula_6) can be filled in by the numbers from formula_5 to formula_6 in such a way that the non-empty cells of each row and each column of the array form strictly increasing sequences of numbers, and the positions holding each value formula_7 form a monotonic chain within the array. An array with these properties is called a monotonic matrix. A collection of disjoint tripods with apexes formula_3 can be transformed into a monotonic matrix by placing the number formula_8 in array cell formula_9 and vice versa. The problem is also equivalent to finding as many triangles as possible among the vertices of a convex polygon, such that no two triangles that share a vertex have nested angles at that vertex. This triangle-counting problem was posed by Peter Braß and its equivalence to tripod packing was observed by Aronov et al. Lower bounds. It is straightforward to find a solution to the tripod packing problem with formula_10 tripods. For instance, for formula_11, the formula_10 triples formula_12 are 2-comparable. After several earlier improvements to this naïve bound, Gowers and Long found solutions to the tripod problem of cardinality formula_1. Upper bounds. From any solution to the tripod packing problem, one can derive a balanced tripartite graph whose vertices are three copies of the numbers from formula_13 to formula_14 (one for each of the three coordinates) with a triangle of edges connecting the three vertices corresponding to the coordinates of the apex of each tripod. There are no other triangles in these graphs (they are locally linear graphs) because any other triangle would lead to a violation of 2-comparability. 
Therefore, by the known upper bounds to the Ruzsa–Szemerédi problem (one version of which is to find the maximum density of edges in a balanced tripartite locally linear graph), the maximum number of disjoint tripods that can be packed in an formula_0 grid is formula_15, and more precisely formula_16. Although Tiskin writes that "tighter analysis of the parameters" can produce a bound that is less than quadratic by a polylogarithmic factor, he does not supply details and his proof that the number is formula_15 uses only the same techniques that are known for the Ruzsa–Szemerédi problem, so this stronger claim appears to be a mistake. An argument of Dean Hickerson shows that, because tripods cannot pack space with constant density, the same is true for analogous problems in higher dimensions. Small instances. For small instances of the tripod problem, the exact solution is known. The numbers of tripods that can be packed into an formula_0 cube, for formula_17, are: &lt;templatestyles src="Block indent/styles.css"/&gt;1, 2, 5, 8, 11, 14, 19, 23, 28, 32, 38, ... For instance, the figure shows the 11 tripods that can be packed into a formula_18 cube. The number of distinct monotonic matrices of order formula_6, for formula_19 is &lt;templatestyles src="Block indent/styles.css"/&gt;2, 19, 712, 87685, 31102080, 28757840751, ... References. &lt;templatestyles src="Reflist/styles.css" /&gt;
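The 2-comparability condition and the naïve lower-bound construction described above are easy to check computationally. The following Python sketch is purely illustrative (it is not drawn from the cited sources, and the grid size used is an arbitrary choice): it generates the triples of the simple construction for a given grid size and verifies that they are pairwise 2-comparable.

```python
from itertools import combinations, product
from math import isqrt

def two_comparable(t, u):
    """Triples t and u are 2-comparable if one of them is strictly smaller
    than the other in at least two of the three coordinates."""
    smaller = sum(a < b for a, b in zip(t, u))
    larger = sum(a > b for a, b in zip(t, u))
    return smaller >= 2 or larger >= 2

def naive_construction(n):
    """The simple construction from the text: with k = floor(sqrt(n)), take the
    k^3 triples (a*k + b + 1, b*k + c + 1, a*k + c + 1) for a, b, c in [0, k-1]."""
    k = isqrt(n)
    return [(a*k + b + 1, b*k + c + 1, a*k + c + 1)
            for a, b, c in product(range(k), repeat=3)]

def is_valid_packing(triples):
    """A set of apex triples gives disjoint tripods iff it is pairwise 2-comparable."""
    return all(two_comparable(t, u) for t, u in combinations(triples, 2))

n = 16                              # arbitrary grid size for the illustration
apexes = naive_construction(n)      # 4**3 = 64 triples, roughly n**1.5 of them
print(len(apexes), is_valid_packing(apexes), max(max(t) for t in apexes) <= n)
# expected output: 64 True True
```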
[ { "math_id": 0, "text": "n\\times n\\times n" }, { "math_id": 1, "text": "\\Omega(n^{1.546})" }, { "math_id": 2, "text": "n^2/\\exp \\Omega(\\log^* n)" }, { "math_id": 3, "text": "(x_i,y_i,z_i)" }, { "math_id": 4, "text": "n\\times n" }, { "math_id": 5, "text": "1" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "z_i" }, { "math_id": 9, "text": "(x_i,y_i)" }, { "math_id": 10, "text": "\\Omega(n^{3/2})" }, { "math_id": 11, "text": "k=\\lfloor\\sqrt{n}\\rfloor" }, { "math_id": 12, "text": "\\bigl\\{ (ak+b+1,bk+c+1,ak+c+1) \\big| a,b,c\\in[0,k-1]\\bigr\\}" }, { "math_id": 13, "text": "0" }, { "math_id": 14, "text": "n-1" }, { "math_id": 15, "text": "o(n^2)" }, { "math_id": 16, "text": "n^2/\\exp\\Omega(\\log^* n)" }, { "math_id": 17, "text": "n\\le 11" }, { "math_id": 18, "text": "5\\times 5\\times 5" }, { "math_id": 19, "text": "n=1,2,3,\\dots" } ]
https://en.wikipedia.org/wiki?curid=60000338
600011
List of mathematical conjectures
This is a list of notable mathematical conjectures. Open problems. The following conjectures remain open. The (incomplete) column "cites" lists the number of results for a Google Scholar search for the term, in double quotes, as of September 2022. Conjectures now proved (theorems). The conjecture terminology may persist: theorems are often still referred to as conjectures, under their anachronistic names. Disproved (no longer conjectures). The conjectures in the following list were not necessarily generally accepted as true before being disproved. In mathematics, ideas are, in principle, not accepted as fact until they have been rigorously proved. However, there have been some ideas that were fairly widely accepted in the past but which were subsequently shown to be false. The following list serves as a repository of such ideas. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2^{2^m}+1" }, { "math_id": 1, "text": "2^{2^5}+1=4,294,967,297 = 641 \\times 6,700,417." }, { "math_id": 2, "text": "\\beth_1" }, { "math_id": 3, "text": "\\aleph_0" }, { "math_id": 4, "text": "\\pi(x) > \\mathrm{li}(x)" } ]
https://en.wikipedia.org/wiki?curid=600011
600030
Poincaré lemma
In mathematics, the Poincaré lemma gives a sufficient condition for a closed differential form to be exact (while an exact form is necessarily closed). Precisely, it states that every closed "p"-form on an open ball in R"n" is exact for "p" with 1 ≤ "p" ≤ "n". The lemma was introduced by Henri Poincaré in 1886. Especially in calculus, the Poincaré lemma also says that every closed 1-form on a simply connected open subset in formula_0 is exact. In the language of cohomology, the Poincaré lemma says that the "k"-th de Rham cohomology group of a contractible open subset of a manifold "M" (e.g., formula_1) vanishes for formula_2. In particular, it implies that the de Rham complex yields a resolution of the constant sheaf formula_3 on "M". The singular cohomology of a contractible space vanishes in positive degree, but the Poincaré lemma "does not follow" from this, since the fact that the singular cohomology of a manifold can be computed as the de Rham cohomology of it, that is, the de Rham theorem, relies on the Poincaré lemma. It does, however, mean that it is enough to prove the Poincaré lemma for open balls; the version for contractible manifolds then follows from topological considerations. The Poincaré lemma is also a special case of the homotopy invariance of de Rham cohomology; in fact, it is common to establish the lemma by showing the homotopy invariance or at least a version of it. Proofs. A standard proof of the Poincaré lemma uses the homotopy invariance formula (see the proofs below, as well as Integration along fibers#Example). The local form of the homotopy operator is described in the references, as is the connection of the lemma with the Maurer–Cartan form. Direct proof. The Poincaré lemma can be proved by means of integration along fibers. (This approach is a straightforward generalization of constructing a primitive function by means of integration in calculus.) We shall prove the lemma for an open subset formula_4 that is star-shaped or a cone over formula_5; i.e., if formula_6 is in formula_7, then formula_8 is in formula_7 for formula_9. This case in particular covers the open ball case, since an open ball can be assumed to be centered at the origin without loss of generality. The trick is to consider differential forms on formula_10 (we use formula_11 for the coordinate on formula_5). First define the operator formula_12 (called the fiber integration) for "k"-forms on formula_13 by formula_14 where formula_15, formula_16 and similarly for formula_17 and formula_18. Now, for formula_19, since formula_20, using the differentiation under the integral sign, we have: formula_21 where formula_22 denote the restrictions of formula_23 to the hyperplanes formula_24 and they are zero since formula_25 is zero there. If formula_26, then a similar computation gives formula_27. Thus, the above formula holds for any formula_28-form formula_23 on formula_13. Finally, let formula_29 and then set formula_30. Then, with the notation formula_31, we get: for any formula_28-form formula_32 on formula_7, formula_33 the formula known as the homotopy formula. The operator formula_34 is called the homotopy operator (also called a chain homotopy). Now, if formula_32 is closed, formula_35. On the other hand, formula_36 and formula_37. Hence, formula_38 which proves the Poincaré lemma. The same proof in fact shows the Poincaré lemma for any contractible open subset "U" of a manifold. Indeed, given such a "U", we have the homotopy formula_39 with formula_40 the identity and formula_41 a point.
Approximating such formula_39, we can assume formula_39 is in fact smooth. The fiber integration formula_12 is also defined for formula_42. Hence, the same argument goes through. Proof using Lie derivatives. Cartan's magic formula for Lie derivatives can be used to give a short proof of the Poincaré lemma. The formula states that the Lie derivative along a vector field formula_43 is given as: formula_44 where formula_45 denotes the interior product; i.e., formula_46. Let formula_47 be a smooth family of smooth maps for some open subset "U" of formula_0 such that formula_48 is defined for "t" in some closed interval "I" and formula_48 is a diffeomorphism for "t" in the interior of "I". Let formula_49 denote the tangent vectors to the curve formula_50; i.e., formula_51. For a fixed "t" in the interior of "I", let formula_52. Then formula_53. Thus, by the definition of a Lie derivative, formula_54. That is, formula_55 Assume formula_56. Then, integrating both sides of the above and then using Cartan's formula and the differentiation under the integral sign, we get: for formula_57, formula_58 where the integration means the integration of each coefficient in a differential form. Letting formula_59, we then have: formula_60 with the notation formula_61 Now, assume formula_7 is an open ball with center formula_62; then we can take formula_63. Then the above formula becomes: formula_64, which proves the Poincaré lemma when formula_32 is closed. Proof in the two-dimensional case. In two dimensions the Poincaré lemma can be proved directly for closed 1-forms and 2-forms as follows. If "ω" = "p" "dx" + "q" "dy" is a closed 1-form on ("a", "b") × ("c", "d"), then "p""y" = "q""x". If "ω" = "df" then "p" = "f""x" and "q" = "f""y". Set formula_65 so that "g""x" = "p". Then "h" = "f" − "g" must satisfy "h""x" = 0 and "h""y" = "q" − "g""y". The right hand side here is independent of "x" since its partial derivative with respect to "x" is 0. So formula_66 and hence formula_67 Similarly, if Ω = "r" "dx" ∧ "dy" then Ω = "d"("a" "dx" + "b" "dy") with "b""x" − "a""y" = "r". Thus a solution is given by "a" = 0 and formula_68 Implication for de Rham cohomology. By definition, the "k"-th de Rham cohomology group formula_69 of an open subset "U" of a manifold "M" is defined as the quotient vector space formula_70 Hence, the conclusion of the Poincaré lemma is precisely that formula_71 for formula_2. Now, differential forms determine a cochain complex called the de Rham complex: formula_72 where "n" = the dimension of "M" and formula_73 denotes the sheaf of differential "k"-forms; i.e., formula_74 consists of "k"-forms on "U" for each open subset "U" of "M". It then gives rise to the complex (the augmented complex) formula_75 where formula_3 is the constant sheaf with values in formula_76; i.e., it is the sheaf of locally constant real-valued functions and formula_77 the inclusion. The kernel of formula_78 is formula_3, since the smooth functions with zero derivatives are locally constant. Also, a sequence of sheaves is exact if and only if it is so locally. The Poincaré lemma thus says the rest of the sequence is exact too (since a manifold is locally diffeomorphic to an open subset of formula_0 and then each point has an open ball as a neighborhood). In the language of homological algebra, it means that the de Rham complex determines a resolution of the constant sheaf formula_3. 
This then implies the de Rham theorem; i.e., the de Rham cohomology of a manifold coincides with the singular cohomology of it (in short, because the singular cohomology can be viewed as a sheaf cohomology.) Once one knows the de Rham theorem, the conclusion of the Poincaré lemma can then be obtained purely topologically. For example, it implies a version of the Poincaré lemma for contractible or simply connected open sets (see §Simply connected case). Simply connected case. Especially in calculus, the Poincaré lemma is stated for a simply connected open subset formula_4. In that case, the lemma says that each closed 1-form on "U" is exact. This version can be seen using algebraic topology as follows. The rational Hurewicz theorem (or rather the real analog of that) says that formula_79 since "U" is simply connected. Since formula_76 is a field, the "k"-th cohomology formula_80 is the dual vector space of the "k"-th homology formula_81. In particular, formula_82 By the de Rham theorem (which follows from the Poincaré lemma for open balls), formula_83 is the same as the first de Rham cohomology group (see §Implication to de Rham cohomology). Hence, each closed 1-form on "U" is exact. Complex-geometry analog. On complex manifolds, the use of the Dolbeault operators formula_84 and formula_85 for complex differential forms, which refine the exterior derivative by the formula formula_86, lead to the notion of formula_85-closed and formula_85-exact differential forms. The local exactness result for such closed forms is known as the Dolbeault–Grothendieck lemma (or formula_85-Poincaré lemma). Importantly, the geometry of the domain on which a formula_85-closed differential form is formula_85-exact is more restricted than for the Poincaré lemma, since the proof of the Dolbeault–Grothendieck lemma holds on a polydisk (a product of disks in the complex plane, on which the multidimensional Cauchy's integral formula may be applied) and there exist counterexamples to the lemma even on contractible domains. The formula_85-Poincaré lemma holds in more generality for pseudoconvex domains. Using both the Poincaré lemma and the formula_85-Poincaré lemma, a refined local formula_87-Poincaré lemma can be proven, which is valid on domains upon which both the aforementioned lemmas are applicable. This lemma states that formula_88-closed complex differential forms are actually locally formula_87-exact (rather than just formula_88 or formula_85-exact, as implied by the above lemmas). Relative Poincaré lemma. The relative Poincaré lemma generalizes Poincaré lemma from a point to a submanifold (or some more general locally closed subset). It states: let "V" be a submanifold of a manifold "M" and "U" a tubular neighborhood of "V". If formula_89 is a closed "k"-form on "U", "k" ≥ 1, that vanishes on "V", then there exists a ("k"-1)-form formula_90 on "U" such that formula_91 and formula_90 vanishes on "V". The relative Poincaré lemma can be proved in the same way the original Poincaré lemma is proved. Indeed, since "U" is a tubular neighborhood, there is a smooth strong deformation retract from "U" to "V"; i.e., there is a smooth homotopy formula_92 from the projection formula_93 to the identity such that formula_39 is the identity on "V". Then we have the homotopy formula on "U": formula_94 where formula_34 is the homotopy operator given by either Lie derivatives or integration along fibers. Now, formula_95 and so formula_96. Since formula_97 and formula_98, we get formula_99; take formula_100. 
That formula_90 vanishes on "V" follows from the definition of "J" and the fact that formula_101. (So the proof actually goes through if "U" is not a tubular neighborhood but if "U" deformation-retracts to "V" with homotopy relative to "V".) formula_102 On singular spaces. The Poincaré lemma generally fails for singular spaces. For example, if one considers "algebraic" differential forms on a complex algebraic variety (in the Zariski topology), the lemma is not true for those differential forms. However, variants of the lemma may still hold for some singular spaces (the precise formulation and proof depend on the definitions of such spaces and of non-smooth differential forms on them). For example, Kontsevich and Soibelman claim the lemma holds for certain variants of differential forms (called PA forms) on their piecewise algebraic spaces. Footnote. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
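The explicit construction given in the two-dimensional case above can be checked symbolically. The following sketch is an illustration only: the 1-form used (ω = 2xy dx + x² dy) and the base point (a, c) = (0, 0) are arbitrary choices, not taken from the article or its references. It builds the primitive f from the stated integral formula and verifies that df = ω.

```python
import sympy as sp

x, y, t, s = sp.symbols('x y t s')

# An illustrative closed 1-form omega = p dx + q dy on a rectangle around the origin.
p = 2*x*y
q = x**2
assert sp.simplify(sp.diff(p, y) - sp.diff(q, x)) == 0   # p_y = q_x, so omega is closed

# Primitive from the two-dimensional proof, with base point (a, c) = (0, 0):
#   f(x, y) = integral_0^x p(t, y) dt + integral_0^y q(0, s) ds
f = (sp.integrate(p.subs(x, t), (t, 0, x))
     + sp.integrate(q.subs({x: 0, y: s}), (s, 0, y)))

# df = f_x dx + f_y dy should recover omega.
print(sp.simplify(sp.diff(f, x) - p))   # 0
print(sp.simplify(sp.diff(f, y) - q))   # 0
```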
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "M = \\mathbb{R}^n" }, { "math_id": 2, "text": "k \\ge 1" }, { "math_id": 3, "text": "\\mathbb{R}_M" }, { "math_id": 4, "text": "U \\subset \\mathbb{R}^n" }, { "math_id": 5, "text": "[0, 1]" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "U" }, { "math_id": 8, "text": "tx" }, { "math_id": 9, "text": "0 \\le t \\le 1" }, { "math_id": 10, "text": "U \\times [0, 1] \\subset \\mathbb{R}^{n+1}" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "\\pi_*" }, { "math_id": 13, "text": "U \\times [0, 1]" }, { "math_id": 14, "text": "\\pi_* \\left( \\sum_{i_1 < \\cdots < i_{k-1}} \\alpha_i dt \\wedge dx^i + \\sum_{j_1 < \\cdots < j_k} \\beta_j dx^j \\right) = \\left( \\int_0^1 \\alpha_i(\\cdot, t) \\, dt \\right) \\, dx^i" }, { "math_id": 15, "text": "dx^i = dx_{i_1} \\wedge \\cdots \\wedge dx_{i_k}" }, { "math_id": 16, "text": "\\alpha_i = \\alpha_{i_1, \\dots, i_k}" }, { "math_id": 17, "text": "dx^j" }, { "math_id": 18, "text": "\\beta_j" }, { "math_id": 19, "text": "\\alpha = f \\, dt \\wedge dx^i" }, { "math_id": 20, "text": "d \\alpha = - \\sum_l \\frac{\\partial f}{\\partial x_l} dt \\wedge dx_l \\wedge dx^i" }, { "math_id": 21, "text": "\\pi_*(d \\alpha) = -d(\\pi_* \\alpha) = \\alpha_1 - \\alpha_0 - d(\\pi_* \\alpha)" }, { "math_id": 22, "text": "\\alpha_0, \\alpha_1" }, { "math_id": 23, "text": "\\alpha" }, { "math_id": 24, "text": "t = 0, t = 1" }, { "math_id": 25, "text": "dt" }, { "math_id": 26, "text": "\\alpha = f \\, dx^j" }, { "math_id": 27, "text": "\\pi_*(d \\alpha) = \\alpha_1 - \\alpha_0 - d(\\pi_* \\alpha)" }, { "math_id": 28, "text": "k" }, { "math_id": 29, "text": "h(x, t) = tx" }, { "math_id": 30, "text": "J = \\pi_* \\circ h^*" }, { "math_id": 31, "text": "h_t = h(\\cdot, t)" }, { "math_id": 32, "text": "\\omega" }, { "math_id": 33, "text": "h_1^* \\omega - h_0^* \\omega = J d \\omega + d J \\omega," }, { "math_id": 34, "text": "J" }, { "math_id": 35, "text": "J d \\omega = 0" }, { "math_id": 36, "text": "h_1^* \\omega = \\omega" }, { "math_id": 37, "text": "h_0^* \\omega = 0" }, { "math_id": 38, "text": "\\omega = d J \\omega," }, { "math_id": 39, "text": "h_t" }, { "math_id": 40, "text": "h_1 = " }, { "math_id": 41, "text": "h_0(U) = " }, { "math_id": 42, "text": "\\pi : U \\times [0, 1] \\to U" }, { "math_id": 43, "text": "\\xi" }, { "math_id": 44, "text": "L_{\\xi} = d \\, i(\\xi) + i(\\xi) d" }, { "math_id": 45, "text": "i(\\xi)" }, { "math_id": 46, "text": "i(\\xi)\\omega = \\omega(\\xi, \\cdot)" }, { "math_id": 47, "text": "f_t : U \\to U" }, { "math_id": 48, "text": "f_t" }, { "math_id": 49, "text": "\\xi_t(x)" }, { "math_id": 50, "text": "f_t(x)" }, { "math_id": 51, "text": "\\frac{d}{dt}f_t(x) = \\xi_t(f_t(x))" }, { "math_id": 52, "text": "g_s = f_{t + s} \\circ f_t^{-1}" }, { "math_id": 53, "text": "g_0 = \\operatorname{id}, \\, \\frac{d}{ds}g_s|_{s=0}= \\xi_t" }, { "math_id": 54, "text": "(L_{\\xi_t} \\omega)(f_t(x)) = \\frac{d}{ds} g_s^* \\omega(f_t(x))|_{s = 0} = \\frac{d}{ds} f_{t+s}^* \\omega(x)|_{s = 0} = \\frac{d}{dt} f_t^* \\omega(x)" }, { "math_id": 55, "text": "\\frac{d}{dt} f_t^* \\omega = f_t^* L_{\\xi_t} \\omega." 
}, { "math_id": 56, "text": "I = [0, 1]" }, { "math_id": 57, "text": "0 < t_0 < t_1 < 1" }, { "math_id": 58, "text": "f_{t_1}^* \\omega - f_{t_0}^* \\omega = d \\int_{t_0}^{t_1} f_t^* i(\\xi_t) \\omega \\, dt + \\int_{t_0}^{t_1} f_t^* i(\\xi_t) d \\omega \\, dt" }, { "math_id": 59, "text": "t_0, t_1 \\to 0, 1" }, { "math_id": 60, "text": "f_1^* \\omega - f_0^* \\omega = d J \\omega + J d \\omega" }, { "math_id": 61, "text": "J \\omega = \\int_0^1 f_t^* i(\\xi_t) \\omega \\, dt." }, { "math_id": 62, "text": "x_0" }, { "math_id": 63, "text": "f_t(x) = t(x - x_0) + x_0" }, { "math_id": 64, "text": "\\omega = d J \\omega + J d \\omega" }, { "math_id": 65, "text": "g(x,y)=\\int_a^x p(t,y)\\, dt, " }, { "math_id": 66, "text": "h(x,y)=\\int_c^y q(a,s)\\, ds - g(a,y)=\\int_c^y q(a,s)\\, ds," }, { "math_id": 67, "text": "f(x,y)=\\int_a^x p(t,y)\\, dt + \\int_c^y q(a,s)\\, ds." }, { "math_id": 68, "text": "b(x,y)=\\int_a^x r(t,y) \\, dt. " }, { "math_id": 69, "text": "\\operatorname{H}_{dR}^k(U)" }, { "math_id": 70, "text": "\\operatorname{H}_{dR}^k(U) = \\{ \\textrm{ closed } \\, k\\text{-forms} \\, \\textrm { on } \\, U \\}/\\{ \\textrm{ exact } \\, k\\text{-forms} \\, \\textrm { on } \\, U \\}." }, { "math_id": 71, "text": "\\operatorname{H}_{dR}^k(U) = 0" }, { "math_id": 72, "text": "\\Omega^* : 0 \\to \\Omega^0 \\overset{d^0}\\to \\Omega^1 \\overset{d^1}\\to \\cdots \\to \\Omega^n \\to 0" }, { "math_id": 73, "text": "\\Omega^k" }, { "math_id": 74, "text": "\\Omega^k(U)" }, { "math_id": 75, "text": "0 \\to \\mathbb{R}_M \\overset{\\epsilon}\\to \\Omega^0 \\overset{d^0}\\to \\Omega^1 \\overset{d^1}\\to \\cdots \\to \\Omega^n \\to 0" }, { "math_id": 76, "text": "\\mathbb{R}" }, { "math_id": 77, "text": "\\epsilon" }, { "math_id": 78, "text": "d^0" }, { "math_id": 79, "text": "\\operatorname{H}_1(U; \\mathbb{R}) = 0" }, { "math_id": 80, "text": "\\operatorname{H}^k(U; \\mathbb{R})" }, { "math_id": 81, "text": "\\operatorname{H}_k(U; \\mathbb{R})" }, { "math_id": 82, "text": "\\operatorname{H}^1(U; \\mathbb{R}) = 0." }, { "math_id": 83, "text": "\\operatorname{H}^1(U; \\mathbb{R})" }, { "math_id": 84, "text": "\\partial" }, { "math_id": 85, "text": "\\bar \\partial" }, { "math_id": 86, "text": "d=\\partial + \\bar \\partial" }, { "math_id": 87, "text": "\\partial \\bar \\partial" }, { "math_id": 88, "text": "d" }, { "math_id": 89, "text": "\\sigma" }, { "math_id": 90, "text": "\\eta" }, { "math_id": 91, "text": "d \\eta = \\sigma" }, { "math_id": 92, "text": "h_t : U \\to U" }, { "math_id": 93, "text": "U \\to V" }, { "math_id": 94, "text": "h_1^* - h_0^* = d J + J d" }, { "math_id": 95, "text": "h_0 (U) \\subset V" }, { "math_id": 96, "text": "h_0^* \\sigma = 0" }, { "math_id": 97, "text": "d \\sigma = 0" }, { "math_id": 98, "text": "h_1^* \\sigma = \\sigma" }, { "math_id": 99, "text": "\\sigma = d J \\sigma" }, { "math_id": 100, "text": "\\eta = J \\sigma" }, { "math_id": 101, "text": "h_t(V) \\subset V" }, { "math_id": 102, "text": "\\square" } ]
https://en.wikipedia.org/wiki?curid=600030
6000460
Kelvin transform
The Kelvin transform is a device used in classical potential theory to extend the concept of a harmonic function, by allowing the definition of a function which is 'harmonic at infinity'. This technique is also used in the study of subharmonic and superharmonic functions. In order to define the Kelvin transform "f"* of a function "f", it is necessary to first consider the concept of inversion in a sphere in R"n" as follows. It is possible to use inversion in any sphere, but the ideas are clearest when considering a sphere with centre at the origin. Given a fixed sphere "S"(0, "R") with centre 0 and radius "R", the inversion of a point "x" in R"n" is defined to be formula_0 A useful effect of this inversion is that the origin 0 is the image of formula_1, and formula_1 is the image of 0. Under this inversion, spheres are transformed into spheres, and the exterior of a sphere is transformed to the interior, and vice versa. The Kelvin transform of a function is then defined by: If "D" is an open subset of R"n" which does not contain 0, then for any function "f" defined on "D", the Kelvin transform "f"* of "f" with respect to the sphere "S"(0, "R") is formula_2 One of the important properties of the Kelvin transform, and the main reason behind its creation, is the following result: Let "D" be an open subset in R"n" which does not contain the origin 0. Then a function "u" is harmonic, subharmonic or superharmonic in "D" if and only if the Kelvin transform "u"* with respect to the sphere "S"(0, "R") is harmonic, subharmonic or superharmonic in "D"*. This follows from the formula formula_3
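As an illustration of the property just stated, the harmonicity-preserving behaviour of the Kelvin transform can be checked symbolically in a concrete case. The sketch below is not part of the article: the choices n = 3, R = 1, and the harmonic function u(x) = x₁ are arbitrary examples used only for the check.

```python
import sympy as sp

# Work in R^3 (n = 3), with the unit sphere S(0, 1) as the sphere of inversion.
x, y, z = sp.symbols('x y z', positive=True)   # away from the origin; eases simplification
R = 1
r2 = x**2 + y**2 + z**2                        # |x*|^2

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

def u(a, b, c):
    return a          # an example harmonic function, u(x1, x2, x3) = x1

# Kelvin transform: u*(x*) = |x*|**-(n-2) * u(R**2 x* / |x*|**2), with n - 2 = 1 here
u_star = r2**sp.Rational(-1, 2) * u(R**2*x/r2, R**2*y/r2, R**2*z/r2)

print(sp.simplify(laplacian(u(x, y, z))))   # 0: u is harmonic
print(sp.simplify(laplacian(u_star)))       # 0: its Kelvin transform is harmonic away from 0
```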
[ { "math_id": 0, "text": "x^* = \\frac{R^2}{|x|^2} x." }, { "math_id": 1, "text": "\\infty" }, { "math_id": 2, "text": "f^*(x^*) = \\frac{|x|^{n-2}}{R^{2n-4}}f(x) = \\frac{1}{|x^*|^{n-2}}f(x) = \\frac{1}{|x^*|^{n-2}} f\\left(\\frac{R^2}{|x^*|^2} x^*\\right)." }, { "math_id": 3, "text": "\\Delta u^*(x^*) = \\frac{R^{4}}{|x^*|^{n+2}}(\\Delta u)\\left(\\frac{R^2}{|x^*|^2} x^*\\right)." } ]
https://en.wikipedia.org/wiki?curid=6000460
6000466
Tisserand's parameter
Named after Félix Tisserand Tisserand's parameter (or Tisserand's invariant) is a number calculated from several orbital elements (semi-major axis, orbital eccentricity, and inclination) of a relatively small object and a larger "perturbing body". It is used to distinguish different kinds of orbits. The parameter is named after the French astronomer Félix Tisserand, who derived it; it applies to restricted three-body problems in which the three objects all differ greatly in mass. Definition. For a small body with semi-major axis formula_0, orbital eccentricity formula_1, and orbital inclination formula_2, relative to the orbit of a perturbing larger body with semimajor axis formula_3, the parameter is defined as follows: formula_4 Tisserand invariant conservation. In the three-body problem, the quasi-conservation of Tisserand's invariant is derived as the limit of the Jacobi integral away from the main two bodies (usually the star and planet). Numerical simulations show that the Tisserand invariant of orbit-crossing bodies is conserved in the three-body problem on gigayear timescales. Applications. The Tisserand parameter's conservation was originally used by Tisserand to determine whether or not an observed orbiting body is the same as one previously observed. This is usually known as Tisserand's criterion. Orbit classification. The value of the Tisserand parameter with respect to the planet that most perturbs a small body in the solar system can be used to delineate groups of objects that may have similar origins. For example, with respect to Jupiter, asteroids typically have formula_5, while Jupiter-family comets typically have formula_6. Related notions. The parameter is derived from one of the so-called Delaunay standard variables, used to study the perturbed Hamiltonian in a three-body system. Ignoring higher-order perturbation terms, the following value is conserved: formula_7 Consequently, perturbations may lead to the resonance between the orbital inclination and eccentricity, known as the Kozai resonance. Near-circular, highly inclined orbits can thus become very eccentric in exchange for lower inclination. For example, such a mechanism can produce sungrazing comets, because a large eccentricity with a constant semimajor axis results in a small perihelion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
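The defining formula above is straightforward to evaluate numerically. The following Python helper is an illustration only; the orbital elements and the value of Jupiter's semi-major axis used in the example are approximate, hypothetical inputs rather than data from the article.

```python
import math

def tisserand_parameter(a, e, i_deg, a_p):
    """Tisserand's parameter of a small body with semi-major axis a,
    eccentricity e and inclination i_deg (degrees), relative to a perturber
    with semi-major axis a_p (same length unit as a)."""
    i = math.radians(i_deg)
    return a_p / a + 2.0 * math.cos(i) * math.sqrt((a / a_p) * (1.0 - e**2))

a_jupiter = 5.20                      # AU, approximate
# An illustrative comet-like orbit crossing Jupiter's orbit:
t_j = tisserand_parameter(a=3.5, e=0.6, i_deg=10.0, a_p=a_jupiter)
print(round(t_j, 2))                  # about 2.78, in the Jupiter-family range 2 < T_J < 3
```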
[ { "math_id": 0, "text": "a\\,\\!" }, { "math_id": 1, "text": "e\\,\\!" }, { "math_id": 2, "text": "i\\,\\!" }, { "math_id": 3, "text": "a_P" }, { "math_id": 4, "text": "T_P\\ = \\frac{a_P}{a} + 2\\cos i\\sqrt{\\frac{a}{a_P} (1-e^2)}" }, { "math_id": 5, "text": "T_J > 3" }, { "math_id": 6, "text": "2< T_J < 3" }, { "math_id": 7, "text": " \\sqrt{a (1-e^2)} \\cos i" } ]
https://en.wikipedia.org/wiki?curid=6000466
60008412
Greenberg's conjectures
Two unsolved conjectures in algebraic number theory Greenberg's conjecture is either of two conjectures in algebraic number theory proposed by Ralph Greenberg. Both are still unsolved as of 2021. Invariants conjecture. The first conjecture was proposed in 1976 and concerns Iwasawa invariants. This conjecture is related to Vandiver's conjecture, Leopoldt's conjecture, Birch–Tate conjecture, all of which are also unsolved. The conjecture, also referred to as Greenberg's invariants conjecture, firstly appeared in Greenberg's Princeton University thesis of 1971 and originally stated that, assuming that formula_0 is a totally real number field and that formula_1 is the cyclotomic formula_2-extension, formula_3, i.e. the power of formula_4 dividing the class number of formula_5 is bounded as formula_6. Note that if Leopoldt's conjecture holds for formula_0 and formula_4, the only formula_2-extension of formula_0 is the cyclotomic one (since it is totally real). In 1976, Greenberg expanded the conjecture by providing more examples for it and slightly reformulated it as follows: given that formula_7 is a finite extension of formula_8 and that formula_9 is a fixed prime, with consideration of subfields of cyclotomic extensions of formula_7, one can define a tower of number fields formula_10 such that formula_11 is a cyclic extension of formula_7 of degree formula_12. If formula_7 is totally real, is the power of formula_13 dividing the class number of formula_11 bounded as  formula_6? Now, if formula_7 is an arbitrary number field, then there exist integers formula_14, formula_15 and formula_16 such that the power of formula_9 dividing the class number of formula_11 is formula_17, where formula_18 for all sufficiently large formula_19. The integers formula_14, formula_15, formula_16 depend only on formula_7 and formula_9. Then, we ask: is formula_20 for formula_7 totally real? Simply speaking, the conjecture asks whether we have formula_21 for any totally real number field formula_7 and any prime number formula_9, or the conjecture can also be reformulated as asking whether both invariants "λ" and "μ" associated to the cyclotomic formula_22-extension of a totally real number field vanish. In 2001, Greenberg generalized the conjecture (thus making it known as Greenberg's pseudo-null conjecture or, sometimes, as Greenberg's generalized conjecture): Supposing that formula_0 is a totally real number field and that formula_4 is a prime, let formula_23 denote the compositum of all formula_2-extensions of formula_0. (Recall that if Leopoldt's conjecture holds for formula_0 and formula_4, then formula_24.) Let formula_25 denote the pro-formula_4 Hilbert class field of formula_23 and let formula_26, regarded as a module over the ring formula_27. Then formula_28 is a pseudo-null formula_29-module. A possible reformulation: Let formula_30 be the compositum of all the formula_2-extensions of formula_7 and let formula_31, then formula_32 is a pseudo-null formula_33-module. Another related conjecture (also unsolved as of yet) exists: We have formula_34 for any number field formula_7 and any prime number formula_9. This related conjecture was justified by Bruce Ferrero and Larry Washington, both of whom proved (see: Ferrero–Washington theorem) that formula_34 for any abelian extension formula_7 of the rational number field formula_35 and any prime number formula_9. "p"-rationality conjecture. 
Another conjecture, which can be referred to as Greenberg's conjecture, was proposed by Greenberg in 2016, and is known as Greenberg's formula_4-rationality conjecture. It states that for any odd prime formula_4 and for any formula_36, there exists a formula_4-rational field formula_37 such that formula_38. This conjecture is related to the Inverse Galois problem.
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "F_\\infty/F" }, { "math_id": 2, "text": "\\mathbb{Z}_p" }, { "math_id": 3, "text": "\\lambda(F_\\infty/F) = \\mu(F_\\infty/F) = 0" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "F_n" }, { "math_id": 6, "text": "n \\rightarrow \\infty" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "\\mathbf{Q}" }, { "math_id": 9, "text": "\\ell" }, { "math_id": 10, "text": "k = k_0 \\subset k_1 \\subset k_2 \\subset \\cdots \\subset k_n \\subset \\cdots" }, { "math_id": 11, "text": "k_n" }, { "math_id": 12, "text": "\\ell^n" }, { "math_id": 13, "text": "l" }, { "math_id": 14, "text": "\\lambda" }, { "math_id": 15, "text": "\\mu" }, { "math_id": 16, "text": "\\nu" }, { "math_id": 17, "text": "\\ell^{e_n}" }, { "math_id": 18, "text": "e_n = {\\lambda}n + \\mu^{\\ell_n} + \\nu" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "\\lambda = \\mu = 0" }, { "math_id": 21, "text": "\\mu_\\ell(k) = \\lambda_\\ell(k) = 0" }, { "math_id": 22, "text": "Z_p" }, { "math_id": 23, "text": "\\tilde{F}" }, { "math_id": 24, "text": "\\tilde F=F" }, { "math_id": 25, "text": "\\tilde{L}" }, { "math_id": 26, "text": "\\tilde{X} = \\operatorname{Gal}(\\tilde{L}/\\tilde{F})" }, { "math_id": 27, "text": "\\tilde{\\Lambda} = {\\mathbb{Z}_p}[[\\operatorname{Gal}(\\tilde{F}/F)]]" }, { "math_id": 28, "text": "\\tilde{X}" }, { "math_id": 29, "text": "\\tilde{\\Lambda}" }, { "math_id": 30, "text": "\\tilde{k}" }, { "math_id": 31, "text": "\\operatorname{Gal}(\\tilde{k}/k) \\simeq \\mathbb{Z}^n_p" }, { "math_id": 32, "text": "Y_\\tilde{k}" }, { "math_id": 33, "text": "\\Lambda_n" }, { "math_id": 34, "text": "\\mu_\\ell(k) = 0" }, { "math_id": 35, "text": "\\mathbb{Q}" }, { "math_id": 36, "text": "t" }, { "math_id": 37, "text": "K" }, { "math_id": 38, "text": "\\operatorname{Gal}(K/\\mathbb{Q}) \\cong (\\mathbb{Z}/\\mathbb{2Z})^t" }, { "math_id": 39, "text": "\\mu_p" } ]
https://en.wikipedia.org/wiki?curid=60008412
60012
Formal power series
Infinite sum that is considered independently from any notion of convergence In mathematics, a formal series is an infinite sum that is considered independently from any notion of convergence, and can be manipulated with the usual algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.). A formal power series is a special kind of formal series, of the form formula_0 where the formula_1, called "coefficients", are numbers or, more generally, elements of some ring, and the formula_2 are formal powers of the symbol formula_3 that is called an indeterminate or, commonly, a variable. Hence, power series can be viewed as a generalization of polynomials, where the number of terms is allowed to be infinite, and differ from usual power series by the absence of convergence requirements, which implies that a power series may not represent a function of its variable. Formal power series are in one-to-one correspondence with their sequences of coefficients, but the two concepts must not be confused, since the operations that can be applied are different. A formal power series with coefficients in a ring formula_4 is called a formal power series over formula_5 The formal power series over a ring formula_4 form a ring, commonly denoted formula_6 which is the ("x")-adic completion of the polynomial ring formula_7 in the same way as the p-adic integers are the p-adic completion of the ring of the integers. Formal power series in several indeterminates are defined similarly by replacing the powers of a single indeterminate by monomials in several indeterminates. Formal power series are widely used in combinatorics for representing sequences of integers as generating functions. In this context, a recurrence relation between the elements of a sequence may often be interpreted as a differential equation that the generating function satisfies. This allows using methods of complex analysis for combinatorial problems (see analytic combinatorics). Introduction. A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable "X" denotes any numerical value (not even an unknown value). For example, consider the series formula_8 If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, ... ] as coefficients, even though the corresponding power series diverges for any nonzero value of "X". Arithmetic on formal power series is carried out by simply pretending that the series are polynomials. For example, if formula_9 then we add "A" and "B" term by term: formula_10 We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product): formula_11 Notice that each coefficient in the product "AB" only depends on a "finite" number of coefficients of "A" and "B".
For example, the "X"5 term is given by formula_12 For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis. Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series "A" is a formal power series "C" such that "AC" = 1, provided that such a formal power series exists. It turns out that if "A" has a multiplicative inverse, it is unique, and we denote it by "A"−1. Now we can define division of formal power series by defining "B"/"A" to be the product "BA"−1, provided that the inverse of "A" exists. For example, one can use the definition of multiplication above to verify the familiar formula formula_13 An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator formula_14 applied to a formal power series formula_15 in one variable extracts the coefficient of the formula_16th power of the variable, so that formula_17 and formula_18. Other examples include formula_19 Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below. The ring of formal power series. If one considers the set of all formal power series in "X" with coefficients in a commutative ring "R", the elements of this set collectively constitute another ring which is written formula_20 and called the ring of formal power series in the variable "X" over "R". Definition of the formal power series ring. One can characterize formula_21 abstractly as the completion of the polynomial ring formula_22 equipped with a particular metric. This automatically gives formula_21 the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe formula_21 more explicitly, and define the ring structure and topological structure separately, as follows. Ring structure. As a set, formula_21 can be constructed as the set formula_23 of all infinite sequences of elements of formula_4, indexed by the natural numbers (taken to include 0). Designating a sequence whose term at index formula_16 is formula_24 by formula_25, one defines addition of two such sequences by formula_26 and multiplication by formula_27 This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, formula_23 becomes a commutative ring with zero element formula_28 and multiplicative identity formula_29. The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds formula_4 into formula_21 by sending any (constant) formula_30 to the sequence formula_31 and designates the sequence formula_32 by formula_33; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as formula_34 these are precisely the polynomials in formula_33. 
Given the preceding definitions, it is quite natural and convenient to designate a general sequence formula_35 by the formal expression formula_36, even though the latter "is not" an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulation of the above definitions as formula_37 and formula_38, which is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition. Topological structure. Having stipulated, as a convention, that the sequence formula_35 is designated by the formal sum formula_36 (an identity referred to below as (1), with the sequence on the left-hand side and the sum on the right-hand side), one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in formula_23 is defined and a topology on formula_23 is constructed. There are several equivalent ways to define the desired topology. Informally, two sequences formula_25 and formula_45 become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of formula_33 the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. This is clearly the case for the right hand side of (1), regardless of the values formula_24, since inclusion of the term for formula_46 gives the last (and in fact only) change to the coefficient of formula_47. It is also obvious that the limit of the sequence of partial sums is equal to the left hand side. This topological structure, together with the ring operations described above, forms a topological ring. This is called the ring of formal power series over formula_4 and is denoted by formula_21. The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of formula_33 occurs in only finitely many terms. The topological structure allows much more flexible usage of infinite summations. For instance, the rule for multiplication can be restated simply as formula_48 since only finitely many terms on the right affect any fixed formula_47. Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1 (in which case the product is nonzero) or infinitely many factors have no constant term (in which case the product is zero). Alternative topologies. The above topology is the finest topology for which formula_49 always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring formula_4 already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series. In the ring of formal power series formula_50, the topology of the above construction only relates to the indeterminate formula_51, since the topology that was put on formula_52 has been replaced by the discrete topology when defining the topology of the whole ring.
So formula_53 converges (and its sum can be written as formula_54); however formula_55 would be considered to be divergent, since every term affects the coefficient of formula_51. This asymmetry disappears if the power series ring in formula_51 is given the product topology where each copy of formula_52 is given its topology as a ring of formal power series rather than the discrete topology. With this topology, a sequence of elements of formula_50 converges if the coefficient of each power of formula_51 converges to a formal power series in formula_33, a weaker condition than stabilizing entirely. For instance, with this topology, in the second example given above, the coefficient of formula_51converges to formula_56, so the whole summation converges to formula_57. This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing formula_58 and here a sequence converges if and only if the coefficient of every monomial formula_59 stabilizes. This topology, which is also the formula_60-adic topology, where formula_61 is the ideal generated by formula_33 and formula_51, still enjoys the property that a summation converges if and only if its terms tend to 0. The same principle could be used to make other divergent limits converge. For instance in formula_62 the limit formula_63 does not exist, so in particular it does not converge to formula_64 This is because for formula_65 the coefficient formula_66 of formula_67 does not stabilize as formula_68. It does however converge in the usual topology of formula_69, and in fact to the coefficient formula_70 of formula_71. Therefore, if one would give formula_62 the product topology of formula_72 where the topology of formula_69 is the usual topology rather than the discrete one, then the above limit would converge to formula_71. This more permissive approach is not however the standard when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would "not" be the case that a summation converges if and only if its terms tend to 0. Universal property. The ring formula_21 may be characterized by the following universal property. If formula_73 is a commutative associative algebra over formula_4, if formula_60 is an ideal of formula_73 such that the formula_60-adic topology on formula_73 is complete, and if formula_3 is an element of formula_60, then there is a "unique" formula_74 with the following properties: Operations on formal power series. One can perform algebraic operations on power series to generate new power series. Besides the ring structure operations defined above, we have the following. Power series raised to powers. For any natural number "n", the nth power of a formal power series S is defined recursively by formula_77 If "m" and "a"0 are invertible in the ring of coefficients, one can prove formula_78 where formula_79 In the case of formal power series with complex coefficients, the complex powers are well defined for series "f" with constant term equal to 1. 
In this case, formula_80 can be defined either by composition with the binomial series (1+"x")"α", or by composition with the exponential and the logarithmic series, formula_81 or as the solution of the differential equation (in terms of series) formula_82 with constant term 1; the three definitions are equivalent. The rules of calculus formula_83 and formula_84 easily follow. Multiplicative inverse. The series formula_85 is invertible in formula_21 if and only if its constant coefficient formula_40 is invertible in formula_4. This condition is necessary, for the following reason: if we suppose that formula_15 has an inverse formula_86 then the constant term formula_87 of formula_88 is the constant term of the identity series, i.e. it is 1. This condition is also sufficient; we may compute the coefficients of the inverse series formula_89 via the explicit recursive formula formula_90 An important special case is that the geometric series formula is valid in formula_21: formula_91 If formula_92 is a field, then a series is invertible if and only if the constant term is non-zero, i.e. if and only if the series is not divisible by formula_33. This means that formula_93 is a discrete valuation ring with uniformizing parameter formula_33. Division. The computation of a quotient formula_94 formula_95 assuming the denominator is invertible (that is, formula_40 is invertible in the ring of scalars), can be performed as a product formula_96 and the inverse of formula_97, or directly equating the coefficients in formula_98: formula_99 Extracting coefficients. The coefficient extraction operator applied to a formal power series formula_100 in "X" is written formula_101 and extracts the coefficient of "Xm", so that formula_102 Composition. Given two formal power series formula_103 formula_104 such that formula_105 one may form the "composition" formula_106 where the coefficients "c""n" are determined by "expanding out" the powers of "f"("X"): formula_107 Here the sum is extended over all ("k", "j") with formula_108 and formula_109 with formula_110 Since formula_105 one must have formula_111 and formula_112 for every formula_113 This implies that the above sum is finite and that the coefficient formula_114 is the coefficient of formula_47 in the polynomial formula_115, where formula_116 and formula_117 are the polynomials obtained by truncating the series at formula_118 that is, by removing all terms involving a power of formula_33 higher than formula_119 A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0. Composition is only valid when formula_120 has "no constant term", so that each formula_114 depends on only a finite number of coefficients of formula_120 and formula_121. In other words, the series for formula_122 converges in the topology of formula_21. Example. Assume that the ring formula_4 has characteristic 0 and the nonzero integers are invertible in formula_4. If one denotes by formula_71 the formal power series formula_123 then the equality formula_124 makes perfect sense as a formal power series, since the constant coefficient of formula_125 is zero. Composition inverse. Whenever a formal series formula_126 has "f"0 = 0 and "f"1 being an invertible element of "R", there exists a series formula_127 that is the composition inverse of formula_96, meaning that composing formula_96 with formula_97 gives the series representing the identity function formula_128. 
The coefficients of formula_97 may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity "X" (that is 1 at degree 1 and 0 at every degree greater than 1). In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula (discussed below) provides a powerful tool to compute the coefficients of "g", as well as the coefficients of the (multiplicative) powers of "g". Formal differentiation. Given a formal power series formula_129 we define its formal derivative, denoted "Df" or "f" ′, by formula_130 The symbol "D" is called the formal differentiation operator. This definition simply mimics term-by-term differentiation of a polynomial. This operation is "R"-linear: formula_131 for any "a", "b" in "R" and any "f", "g" in formula_132 Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid: formula_133 and the chain rule works as well: formula_134 whenever the appropriate compositions of series are defined (see above under composition of series). Thus, in these respects formal power series behave like Taylor series. Indeed, for the "f" defined above, we find that formula_135 where "D""k" denotes the "k"th formal derivative (that is, the result of formally differentiating "k" times). Formal antidifferentiation. If formula_4 is a ring with characteristic zero and the nonzero integers are invertible in formula_4, then given a formal power series formula_129 we define its formal antiderivative or formal indefinite integral by formula_136 for any constant formula_137. This operation is "R"-linear: formula_138 for any "a", "b" in "R" and any "f", "g" in formula_132 Additionally, the formal antiderivative has many of the properties of the usual antiderivative of calculus. For example, the formal antiderivative is the right inverse of the formal derivative: formula_139 for any formula_140. Properties. Algebraic properties of the formal power series ring. formula_21 is an associative algebra over formula_4 which contains the ring formula_22 of polynomials over formula_4; the polynomials correspond to the sequences which end in zeros. The Jacobson radical of formula_21 is the ideal generated by formula_33 and the Jacobson radical of formula_4; this is implied by the element invertibility criterion discussed above. The maximal ideals of formula_21 all arise from those in formula_4 in the following manner: an ideal formula_141 of formula_21 is maximal if and only if formula_142 is a maximal ideal of formula_4 and formula_141 is generated as an ideal by formula_33 and formula_142. Several algebraic properties of formula_4 are inherited by formula_21: Topological properties of the formal power series ring. The metric space formula_144 is complete. The ring formula_21 is compact if and only if "R" is finite. This follows from Tychonoff's theorem and the characterisation of the topology on formula_21 as a product topology. Weierstrass preparation. The ring of formal power series with coefficients in a complete local ring satisfies the Weierstrass preparation theorem. Applications. Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions. 
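As a concrete illustration of the Fibonacci application mentioned above, the sketch below expands the generating function X/(1 - X - X^2) by combining the Cauchy product with the recursive formula for the coefficients of a multiplicative inverse from the "Multiplicative inverse" section. The code is an illustrative sketch (the truncation order is an arbitrary choice), not an implementation taken from the article.

```python
from fractions import Fraction

def inverse_coefficients(a, prec):
    """Coefficients of the multiplicative inverse of sum_n a[n] X^n, assuming a[0]
    is invertible: c_0 = 1/a_0 and c_n = -(1/a_0) * sum_{k=1}^{n} a_k c_{n-k}."""
    c = [Fraction(1) / a[0]]
    for n in range(1, prec):
        s = sum(a[k] * c[n - k] for k in range(1, min(n, len(a) - 1) + 1))
        c.append(-s / a[0])
    return c

def cauchy_product(a, b, prec):
    """Coefficients of the product, truncated at degree < prec."""
    return [sum(a[k] * b[n - k] for k in range(n + 1) if k < len(a) and n - k < len(b))
            for n in range(prec)]

prec = 10
denominator = [1, -1, -1]                        # 1 - X - X^2
inv = inverse_coefficients(denominator, prec)    # 1 + X + 2X^2 + 3X^3 + 5X^4 + ...
fib = cauchy_product([0, 1], inv, prec)          # multiply by X to get X/(1 - X - X^2)
print([int(c) for c in fib])                     # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```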
One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of formula_145: formula_146 formula_147 Then one can show that formula_148 formula_149 formula_150 The last one being valid in the ring formula_151 For "K" a field, the ring formula_152 is often used as the "standard, most general" complete local ring over "K" in algebra. Interpreting formal power series as functions. In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let formula_153 and suppose formula_73 is a commutative associative algebra over formula_4, formula_60 is an ideal in formula_73 such that the I-adic topology on formula_73 is complete, and formula_3 is an element of formula_60. Define: formula_154 This series is guaranteed to converge in formula_73 given the above assumptions on formula_3. Furthermore, we have formula_155 and formula_156 Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved. Since the topology on formula_21 is the formula_157-adic topology and formula_21 is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal formula_157): formula_158, formula_159 and formula_160 are all well defined for any formal power series formula_161 With this formalism, we can give an explicit formula for the multiplicative inverse of a power series formula_96 whose constant coefficient formula_162 is invertible in formula_4: formula_163 If the formal power series formula_97 with formula_164 is given implicitly by the equation formula_165 where formula_96 is a known power series with formula_166, then the coefficients of formula_97 can be explicitly computed using the Lagrange inversion formula. Generalizations. Formal Laurent series. The formal Laurent series over a ring formula_4 are defined in a similar way to a formal power series, except that we also allow finitely many terms of negative degree. That is, they are the series that can be written as formula_167 for some integer formula_168, so that there are only finitely many negative formula_16 with formula_169. (This is different from the classical Laurent series of complex analysis.) For a non-zero formal Laurent series, the minimal integer formula_16 such that formula_170 is called the "order" of formula_96 and is denoted formula_171 (The order of the zero series is formula_172.) Multiplication of such series can be defined. Indeed, similarly to the definition for formal power series, the coefficient of formula_173 of two series with respective sequences of coefficients formula_174 and formula_175 is formula_176 This sum has only finitely many nonzero terms because of the assumed vanishing of coefficients at sufficiently negative indices. The formal Laurent series form the ring of formal Laurent series over formula_4, denoted by formula_177. It is equal to the localization of the ring formula_21 of formal power series with respect to the set of positive powers of formula_33. If formula_92 is a field, then formula_178 is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain formula_93. 
As with formula_21, the ring formula_177 of formal Laurent series may be endowed with the structure of a topological ring by introducing the metric formula_179 One may define formal differentiation for formal Laurent series in the natural (term-by-term) way. Precisely, the formal derivative of the formal Laurent series formula_96 above is formula_180 which is again a formal Laurent series. If formula_96 is a non-constant formal Laurent series and with coefficients in a field of characteristic 0, then one has formula_181 However, in general this is not the case since the factor formula_16 for the lowest order term could be equal to 0 in formula_4. Formal residue. Assume that formula_143 is a field of characteristic 0. Then the map formula_182 defined above is a formula_143-derivation that satisfies formula_183 formula_184 The latter shows that the coefficient of formula_185 in formula_96 is of particular interest; it is called "formal residue of formula_96" and denoted formula_186. The map formula_187 is formula_143-linear, and by the above observation one has an exact sequence formula_188 Some rules of calculus. As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any formula_189 Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to formula_190. Property (iii): any formula_96 can be written in the form formula_191, with formula_192 and formula_193: then formula_194 formula_193 implies formula_97 is invertible in formula_195 whence formula_196 Property (iv): Since formula_197 we can write formula_198 with formula_199. Consequently, formula_200 and (iv) follows from (i) and (iii). Property (v) is clear from the definition. The Lagrange inversion formula. As mentioned above, any formal series formula_201 with "f"0 = 0 and "f"1 ≠ 0 has a composition inverse formula_202 The following relation between the coefficients of "gn" and "f"−"k" holds ("&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Lagrange inversion formula"): formula_203 In particular, for "n" = 1 and all "k" ≥ 1, formula_204 Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting it here. Noting formula_205, we can apply the rules of calculus above, crucially Rule (iv) substituting formula_206, to get: formula_207 Generalizations. One may observe that the above computation can be repeated plainly in more general settings than "K"(("X")): a generalization of the Lagrange inversion formula is already available working in the formula_208-modules formula_209 where α is a complex exponent. As a consequence, if "f" and "g" are as above, with formula_210, we can relate the complex powers of "f" / "X" and "g" / "X": precisely, if α and β are non-zero complex numbers with negative integer sum, formula_211 then formula_212 For instance, this way one finds the power series for complex powers of the Lambert function. Power series in several variables. Formal power series in any number of indeterminates (even infinitely many) can be defined. If "I" is an index set and "XI" is the set of indeterminates "Xi" for "i"∈"I", then a monomial "X""α" is any finite product of elements of "XI" (repetitions allowed); a formal power series in "XI" with coefficients in a ring "R" is determined by any mapping from the set of monomials "X""α" to a corresponding coefficient "c""α", and is denoted formula_213. 
The set of all such formal power series is denoted formula_214 and it is given a ring structure by defining formula_215 and formula_216. Topology. The topology on formula_217 is such that a sequence of its elements converges only if for each monomial "X"α the corresponding coefficient stabilizes. If "I" is finite, then this is the "J"-adic topology, where "J" is the ideal of formula_217 generated by all the indeterminates in "XI". This does not hold if "I" is infinite. For example, if formula_218 then the sequence formula_219 with formula_220 does not converge with respect to any "J"-adic topology on formula_217, but clearly for each monomial the corresponding coefficient stabilizes. As remarked above, the topology on a repeated formal power series ring like formula_221 is usually chosen in such a way that it becomes isomorphic as a topological ring to formula_222 Operations. All of the operations defined for series in one variable may be extended to the several variables case. In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other. Universal property. In the several variables case, the universal property characterizing formula_223 becomes the following. If "S" is a commutative associative algebra over "R", if "I" is an ideal of "S" such that the "I"-adic topology on "S" is complete, and if "x"1, …, "xr" are elements of "I", then there is a "unique" map formula_224 with the following properties: it is an "R"-algebra homomorphism, it is continuous, and it sends "Xi" to "xi" for "i" = 1, …, "r". Non-commuting variables. The several variable case can be further generalised by taking "non-commuting variables" "Xi" for "i" ∈ "I", where "I" is an index set and then a monomial "X"α is any word in the "XI"; a formal power series in "XI" with coefficients in a ring "R" is determined by any mapping from the set of monomials "X"α to a corresponding coefficient "c"α, and is denoted formula_225. The set of all such formal power series is denoted "R"«"XI"», and it is given a ring structure by defining addition pointwise formula_226 and multiplication by formula_227 where · denotes concatenation of words. These formal power series over "R" form the Magnus ring over "R". On a semiring. Given an alphabet formula_228 and a semiring formula_73, the set of formal power series over formula_73 supported on the language formula_229 is denoted by formula_230. It consists of all mappings formula_231, where formula_229 is the free monoid generated by the non-empty set formula_228. The elements of formula_230 can be written as formal sums formula_232 where formula_233 denotes the value of formula_234 at the word formula_235. The elements formula_236 are called the coefficients of formula_234. For formula_237 the support of formula_234 is the set formula_238 A series where every coefficient is either formula_239 or formula_240 is called the characteristic series of its support. The subset of formula_230 consisting of all series with a finite support is denoted by formula_241 and called polynomials. For formula_242 and formula_243, the sum formula_244 is defined by formula_245 The (Cauchy) product formula_246 is defined by formula_247 The Hadamard product formula_248 is defined by formula_249 The products by a scalar formula_250 and formula_251 are defined by formula_252 and formula_253, respectively. With these operations formula_254 and formula_255 are semirings, where formula_256 is the empty word in formula_229. 
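To make the Cauchy and Hadamard products concrete, here is a small Python sketch for polynomials (series with finite support) over the semiring of non-negative integers, represented as dictionaries from words to coefficients; the alphabet, the sample series and the function names are illustrative choices, not standard notation.

```python
from collections import defaultdict
from itertools import product

def cauchy_product(r1, r2):
    """(r1 · r2, w) = sum of (r1, w1)(r2, w2) over all factorizations w = w1 w2."""
    out = defaultdict(int)
    for (w1, c1), (w2, c2) in product(r1.items(), r2.items()):
        out[w1 + w2] += c1 * c2           # string concatenation = word concatenation
    return dict(out)

def hadamard_product(r1, r2):
    """(r1 ⊙ r2, w) = (r1, w)(r2, w), the coefficient-wise product."""
    return {w: r1[w] * r2[w] for w in r1.keys() & r2.keys()}

# characteristic series (as polynomials) of two finite languages over {a, b};
# "" stands for the empty word
r1 = {"a": 1, "ab": 1}
r2 = {"": 1, "b": 1}

print(cauchy_product(r1, r2))                                 # {'a': 1, 'ab': 2, 'abb': 1}
print(hadamard_product(cauchy_product(r1, r2), {"ab": 1}))    # {'ab': 2}
```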
In theoretical computer science, these formal power series are used to model the behavior of weighted automata, where the coefficient formula_233 of the series is taken to be the weight of a path with label formula_257 in the automaton. Replacing the index set by an ordered abelian group. Suppose formula_258 is an ordered abelian group, meaning an abelian group with a total ordering formula_259 respecting the group's addition, so that formula_260 if and only if formula_261 for all formula_262. Let I be a well-ordered subset of formula_258, meaning I contains no infinite descending chain. Consider the set consisting of formula_263 for all such I, with formula_264 in a commutative ring formula_4, where we assume that for any index set, if all of the formula_264 are zero then the sum is zero. Then formula_265 is the ring of formal power series on formula_258; because of the condition that the indexing set be well-ordered, the product is well-defined, and we of course assume that two elements which differ by zero are the same. Sometimes the notation formula_266 is used to denote formula_265. Various properties of formula_4 transfer to formula_265. If formula_4 is a field, then so is formula_265. If formula_4 is an ordered field, we can order formula_265 by giving any element the same sign as its leading coefficient, the coefficient attached to the least element of the index set I with a non-zero coefficient. Finally, if formula_258 is a divisible group and formula_4 is a real closed field, then formula_265 is a real closed field, and if formula_4 is algebraically closed, then so is formula_265. This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality. 
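Returning to series over a free monoid: as a small illustration of the weighted-automaton interpretation mentioned above, the following Python sketch (the two-state automaton, its alphabet and its weights are invented for the example) realizes the series over {a, b} whose coefficient at a word is the number of occurrences of the letter a in that word; the weight of a word is obtained by multiplying one transition matrix per letter between an initial and a final vector.

```python
import numpy as np

initial = np.array([1, 0])     # enter state 0 with weight 1
final = np.array([0, 1])       # read off the weight accumulated in state 1
transition = {                 # one weight matrix per letter of the alphabet
    "a": np.array([[1, 1],
                   [0, 1]]),
    "b": np.array([[1, 0],
                   [0, 1]]),
}

def weight(word):
    """Coefficient (r, word) of the series recognized by this toy automaton."""
    v = initial
    for letter in word:
        v = v @ transition[letter]
    return int(v @ final)

print([weight(w) for w in ["", "b", "a", "aba", "aabba"]])   # [0, 0, 1, 2, 3]
```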
[ { "math_id": 0, "text": "\\sum_{n=0}^\\infty a_nx^n=a_0+a_1x+ a_2x^2+\\cdots," }, { "math_id": 1, "text": "a_n," }, { "math_id": 2, "text": "x^n" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "R." }, { "math_id": 6, "text": "R[[x]]," }, { "math_id": 7, "text": "R[x]," }, { "math_id": 8, "text": "A = 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5 + \\cdots." }, { "math_id": 9, "text": "B = 2X + 4X^3 + 6X^5 + \\cdots," }, { "math_id": 10, "text": "A + B = 1 - X + 5X^2 - 3X^3 + 9X^4 - 5X^5 + \\cdots." }, { "math_id": 11, "text": "AB = 2X - 6X^2 + 14X^3 - 26X^4 + 44X^5 + \\cdots." }, { "math_id": 12, "text": "44X^5 = (1\\times 6X^5) + (5X^2 \\times 4X^3) + (9X^4 \\times 2X)." }, { "math_id": 13, "text": "\\frac{1}{1 + X} = 1 - X + X^2 - X^3 + X^4 - X^5 + \\cdots." }, { "math_id": 14, "text": "[X^n]" }, { "math_id": 15, "text": "A" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "[X^2]A=5" }, { "math_id": 18, "text": "[X^5]A=-11" }, { "math_id": 19, "text": "\\begin{align}\n\\left[X^3\\right] (B) &= 4, \\\\ \n\\left[X^2 \\right] (X + 3 X^2 Y^3 + 10 Y^6) &= 3Y^3, \\\\\n\\left[X^2Y^3 \\right] ( X + 3 X^2 Y^3 + 10 Y^6) &= 3, \\\\\n\\left[X^n \\right] \\left(\\frac{1}{1+X} \\right) &= (-1)^n, \\\\ \n\\left[X^n \\right] \\left(\\frac{X}{(1-X)^2} \\right) &= n.\n\\end{align}" }, { "math_id": 20, "text": "R[[X]]," }, { "math_id": 21, "text": "R[[X]]" }, { "math_id": 22, "text": "R[X]" }, { "math_id": 23, "text": "R^\\N" }, { "math_id": 24, "text": "a_n" }, { "math_id": 25, "text": "(a_n)" }, { "math_id": 26, "text": "(a_n)_{n\\in\\N} + (b_n)_{n\\in\\N} = \\left( a_n + b_n \\right)_{n\\in\\N}" }, { "math_id": 27, "text": "(a_n)_{n\\in\\N} \\times (b_n)_{n\\in\\N} = \\left( \\sum_{k=0}^n a_k b_{n-k} \\right)_{\\!n\\in\\N}." }, { "math_id": 28, "text": "(0,0,0,\\ldots)" }, { "math_id": 29, "text": "(1,0,0,\\ldots)" }, { "math_id": 30, "text": "a \\in R" }, { "math_id": 31, "text": "(a,0,0,\\ldots)" }, { "math_id": 32, "text": "(0,1,0,0,\\ldots)" }, { "math_id": 33, "text": "X" }, { "math_id": 34, "text": "(a_0, a_1, a_2, \\ldots, a_n, 0, 0, \\ldots) = a_0 + a_1 X + \\cdots + a_n X^n = \\sum_{i=0}^n a_i X^i;" }, { "math_id": 35, "text": "(a_n)_{n\\in\\N}" }, { "math_id": 36, "text": "\\textstyle\\sum_{i\\in\\N}a_i X^i" }, { "math_id": 37, "text": "\\left(\\sum_{i\\in\\N} a_i X^i\\right)+\\left(\\sum_{i\\in\\N} b_i X^i\\right) = \\sum_{i\\in\\N}(a_i+b_i) X^i" }, { "math_id": 38, "text": "\\left(\\sum_{i\\in\\N} a_i X^i\\right) \\times \\left(\\sum_{i\\in\\N} b_i X^i\\right) = \\sum_{n\\in\\N} \\left(\\sum_{k=0}^n a_k b_{n-k}\\right) X^n." 
}, { "math_id": 39, "text": "I=(X)" }, { "math_id": 40, "text": "a_0" }, { "math_id": 41, "text": "(a_n), (b_n) \\in R^{\\N}," }, { "math_id": 42, "text": "d((a_n), (b_n)) = 2^{-k}," }, { "math_id": 43, "text": "k" }, { "math_id": 44, "text": "a_k\\neq b_k" }, { "math_id": 45, "text": "(b_n)" }, { "math_id": 46, "text": "i=n" }, { "math_id": 47, "text": "X^n" }, { "math_id": 48, "text": "\\left(\\sum_{i\\in\\N} a_i X^i\\right) \\times \\left(\\sum_{i\\in\\N} b_i X^i\\right) = \\sum_{i,j\\in\\N} a_i b_j X^{i+j}," }, { "math_id": 49, "text": "\\sum_{i=0}^\\infty a_i X^i" }, { "math_id": 50, "text": "\\Z[[X]][[Y]]" }, { "math_id": 51, "text": "Y" }, { "math_id": 52, "text": "\\Z[[X]]" }, { "math_id": 53, "text": "\\sum_{i = 0}^\\infty XY^i" }, { "math_id": 54, "text": "\\tfrac{X}{1-Y}" }, { "math_id": 55, "text": "\\sum_{i = 0}^\\infty X^i Y" }, { "math_id": 56, "text": "\\tfrac{1}{1-X}" }, { "math_id": 57, "text": "\\tfrac{Y}{1-X}" }, { "math_id": 58, "text": "\\Z[[X,Y]]" }, { "math_id": 59, "text": "X^iY^j" }, { "math_id": 60, "text": "I" }, { "math_id": 61, "text": "I=(X,Y)" }, { "math_id": 62, "text": "\\R[[X]]" }, { "math_id": 63, "text": "\\lim_{n\\to\\infty}\\left(1+\\frac{X}{n}\\right)^{\\!n}" }, { "math_id": 64, "text": "\\exp(X) = \\sum_{n\\in\\N}\\frac{X^n}{n!}." }, { "math_id": 65, "text": "i\\geq 2" }, { "math_id": 66, "text": "\\tbinom{n}{i}/n^i" }, { "math_id": 67, "text": "X^i" }, { "math_id": 68, "text": "n\\to \\infty" }, { "math_id": 69, "text": "\\R" }, { "math_id": 70, "text": "\\tfrac{1}{i!}" }, { "math_id": 71, "text": "\\exp(X)" }, { "math_id": 72, "text": "\\R^\\N" }, { "math_id": 73, "text": "S" }, { "math_id": 74, "text": "\\Phi: R[[X]]\\to S" }, { "math_id": 75, "text": "\\Phi" }, { "math_id": 76, "text": "\\Phi(X)=x" }, { "math_id": 77, "text": "\\begin{align}S^1&=S\\\\\nS^n&=S\\cdot S^{n-1}\\quad\\text{for } n>1.\\end{align}" }, { "math_id": 78, "text": " \\left( \\sum_{k=0}^\\infty a_k X^k \\right)^{\\!n} =\\, \\sum_{m=0}^\\infty c_m X^m," }, { "math_id": 79, "text": "\\begin{align} \nc_0 &= a_0^n,\\\\\nc_m &= \\frac{1}{m a_0} \\sum_{k=1}^m (kn - m+k) a_{k} c_{m-k}, \\ \\ \\ m \\geq 1.\n\\end{align}" }, { "math_id": 80, "text": "f^{\\alpha}" }, { "math_id": 81, "text": "f^{\\alpha} = \\exp(\\alpha\\log(f))," }, { "math_id": 82, "text": " f(f^{\\alpha})' = \\alpha f^{\\alpha} f'" }, { "math_id": 83, "text": "(f^\\alpha)^\\beta = f^{\\alpha\\beta}" }, { "math_id": 84, "text": "f^\\alpha g^\\alpha = (fg)^\\alpha" }, { "math_id": 85, "text": "A = \\sum_{n=0}^\\infty a_n X^n \\in R[[X]]" }, { "math_id": 86, "text": "B = b_0 + b_1 x + \\cdots" }, { "math_id": 87, "text": "a_0b_0" }, { "math_id": 88, "text": "A \\cdot B" }, { "math_id": 89, "text": "B" }, { "math_id": 90, "text": "\\begin{align}\nb_0 &= \\frac{1}{a_0},\\\\\nb_n &= -\\frac{1}{a_0} \\sum_{i=1}^n a_i b_{n-i}, \\ \\ \\ n \\geq 1.\n\\end{align}" }, { "math_id": 91, "text": "(1 - X)^{-1} = \\sum_{n=0}^\\infty X^n." }, { "math_id": 92, "text": "R=K" }, { "math_id": 93, "text": "K[[X]]" }, { "math_id": 94, "text": "f/g=h" }, { "math_id": 95, "text": " \\frac{\\sum_{n=0}^\\infty b_n X^n }{\\sum_{n=0}^\\infty a_n X^n } =\\sum_{n=0}^\\infty c_n X^n, " }, { "math_id": 96, "text": "f" }, { "math_id": 97, "text": "g" }, { "math_id": 98, "text": "f=gh" }, { "math_id": 99, "text": "c_n = \\frac{1}{a_0}\\left(b_n - \\sum_{k=1}^n a_k c_{n-k}\\right)." 
}, { "math_id": 100, "text": "f(X) = \\sum_{n=0}^\\infty a_n X^n " }, { "math_id": 101, "text": " \\left[ X^m \\right] f(X) " }, { "math_id": 102, "text": " \\left[ X^m \\right] f(X) = \\left[ X^m \\right] \\sum_{n=0}^\\infty a_n X^n = a_m." }, { "math_id": 103, "text": "f(X) = \\sum_{n=1}^\\infty a_n X^n = a_1 X + a_2 X^2 + \\cdots" }, { "math_id": 104, "text": "g(X) = \\sum_{n=0}^\\infty b_n X^n = b_0 + b_1 X + b_2 X^2 + \\cdots" }, { "math_id": 105, "text": "a_0=0," }, { "math_id": 106, "text": "g(f(X)) = \\sum_{n=0}^\\infty b_n (f(X))^n = \\sum_{n=0}^\\infty c_n X^n," }, { "math_id": 107, "text": "c_n:=\\sum_{k\\in\\N, |j|=n} b_k a_{j_1} a_{j_2} \\cdots a_{j_k}." }, { "math_id": 108, "text": "k\\in\\N" }, { "math_id": 109, "text": "j\\in\\N_+^k" }, { "math_id": 110, "text": "|j|:=j_1+\\cdots+j_k=n." }, { "math_id": 111, "text": "k\\le n" }, { "math_id": 112, "text": "j_i\\le n" }, { "math_id": 113, "text": "i. " }, { "math_id": 114, "text": "c_n" }, { "math_id": 115, "text": "g_n(f_n(X))" }, { "math_id": 116, "text": "f_n" }, { "math_id": 117, "text": "g_n" }, { "math_id": 118, "text": "x^n," }, { "math_id": 119, "text": "n." }, { "math_id": 120, "text": "f(X)" }, { "math_id": 121, "text": "g(X)" }, { "math_id": 122, "text": "g(f(X))" }, { "math_id": 123, "text": "\\exp(X) = 1 + X + \\frac{X^2}{2!} + \\frac{X^3}{3!} + \\frac{X^4}{4!} + \\cdots," }, { "math_id": 124, "text": "\\exp(\\exp(X) - 1) = 1 + X + X^2 + \\frac{5X^3}6 + \\frac{5X^4}8 + \\cdots" }, { "math_id": 125, "text": "\\exp(X) - 1" }, { "math_id": 126, "text": "f(X)=\\sum_k f_k X^k \\in R[[X]]" }, { "math_id": 127, "text": "g(X)=\\sum_k g_k X^k" }, { "math_id": 128, "text": "x = 0 + 1x + 0x^2+ 0x^3+\\cdots" }, { "math_id": 129, "text": "f = \\sum_{n\\geq 0} a_n X^n \\in R[[X]]," }, { "math_id": 130, "text": " Df = f' = \\sum_{n \\geq 1} a_n n X^{n-1}." }, { "math_id": 131, "text": "D(af + bg) = a \\cdot Df + b \\cdot Dg" }, { "math_id": 132, "text": "R[[X]]." }, { "math_id": 133, "text": "D(fg) \\ =\\ f \\cdot (Dg) + (Df) \\cdot g," }, { "math_id": 134, "text": "D(f\\circ g ) = ( Df\\circ g ) \\cdot Dg," }, { "math_id": 135, "text": "(D^k f)(0) = k! a_k, " }, { "math_id": 136, "text": " D^{-1} f = \\int f\\ dX = C + \\sum_{n \\geq 0} a_n \\frac{X^{n+1}}{n+1}." }, { "math_id": 137, "text": "C \\in R" }, { "math_id": 138, "text": "D^{-1}(af + bg) = a \\cdot D^{-1}f + b \\cdot D^{-1}g" }, { "math_id": 139, "text": "D(D^{-1}(f)) = f" }, { "math_id": 140, "text": "f \\in R[[X]]" }, { "math_id": 141, "text": "M" }, { "math_id": 142, "text": "M\\cap R" }, { "math_id": 143, "text": "K" }, { "math_id": 144, "text": "(R[[X]], d)" }, { "math_id": 145, "text": "\\Q[[X]]" }, { "math_id": 146, "text": " \\sin(X) := \\sum_{n \\ge 0} \\frac{(-1)^n} {(2n+1)!} X^{2n+1} " }, { "math_id": 147, "text": " \\cos(X) := \\sum_{n \\ge 0} \\frac{(-1)^n} {(2n)!} X^{2n} " }, { "math_id": 148, "text": "\\sin^2(X) + \\cos^2(X) = 1," }, { "math_id": 149, "text": "\\frac{\\partial}{\\partial X} \\sin(X) = \\cos(X)," }, { "math_id": 150, "text": "\\sin (X+Y) = \\sin(X) \\cos(Y) + \\cos(X) \\sin(Y)." }, { "math_id": 151, "text": "\\Q[[X, Y]]." }, { "math_id": 152, "text": "K[[X_1, \\ldots, X_r]]" }, { "math_id": 153, "text": "f = \\sum a_n X^n \\in R[[X]]," }, { "math_id": 154, "text": "f(x) = \\sum_{n\\ge 0} a_n x^n." }, { "math_id": 155, "text": " (f+g)(x) = f(x) + g(x)" }, { "math_id": 156, "text": " (fg)(x) = f(x) g(x)." 
}, { "math_id": 157, "text": "(X)" }, { "math_id": 158, "text": "f(0)" }, { "math_id": 159, "text": "f(X^2-X)" }, { "math_id": 160, "text": "f((1-X)^{-1}-1)" }, { "math_id": 161, "text": "f \\in R[[X]]." }, { "math_id": 162, "text": "a=f(0)" }, { "math_id": 163, "text": "f^{-1} = \\sum_{n \\ge 0} a^{-n-1} (a-f)^n." }, { "math_id": 164, "text": "g(0)=0" }, { "math_id": 165, "text": "f(g) =X" }, { "math_id": 166, "text": "f(0)=0" }, { "math_id": 167, "text": "f = \\sum_{n = N}^\\infty a_n X^n" }, { "math_id": 168, "text": "N" }, { "math_id": 169, "text": "a_n \\neq 0" }, { "math_id": 170, "text": " a_n\\neq 0" }, { "math_id": 171, "text": "\\operatorname{ord}(f)." }, { "math_id": 172, "text": "+\\infty" }, { "math_id": 173, "text": "X^k" }, { "math_id": 174, "text": "\\{a_n\\}" }, { "math_id": 175, "text": "\\{b_n\\}" }, { "math_id": 176, "text": "\\sum_{i\\in\\Z}a_ib_{k-i}." }, { "math_id": 177, "text": "R((X))" }, { "math_id": 178, "text": "K((X))" }, { "math_id": 179, "text": "d(f,g)=2^{-\\operatorname{ord}(f-g)}." }, { "math_id": 180, "text": "f' = Df = \\sum_{n\\in\\Z} na_n X^{n-1}," }, { "math_id": 181, "text": "\\operatorname{ord}(f')= \\operatorname{ord}(f)-1." }, { "math_id": 182, "text": "D\\colon K((X))\\to K((X))" }, { "math_id": 183, "text": "\\ker D=K" }, { "math_id": 184, "text": "\\operatorname{im} D= \\left \\{f\\in K((X)) : [X^{-1}]f=0 \\right \\}." }, { "math_id": 185, "text": "X^{-1}" }, { "math_id": 186, "text": "\\operatorname{Res}(f)" }, { "math_id": 187, "text": "\\operatorname{Res} : K((X))\\to K" }, { "math_id": 188, "text": "0 \\to K \\to K((X)) \\overset{D}{\\longrightarrow} K((X)) \\;\\overset{\\operatorname{Res}}{\\longrightarrow}\\; K \\to 0." }, { "math_id": 189, "text": "f, g\\in K((X))" }, { "math_id": 190, "text": "(fg)'=f'g+fg'" }, { "math_id": 191, "text": "f=X^mg" }, { "math_id": 192, "text": "m=\\operatorname{ord}(f)" }, { "math_id": 193, "text": "\\operatorname{ord}(g)=0" }, { "math_id": 194, "text": "f'/f = mX^{-1}+g'/g." }, { "math_id": 195, "text": "K[[X]]\\subset \\operatorname{im}(D) = \\ker(\\operatorname{Res})," }, { "math_id": 196, "text": "\\operatorname{Res}(f'/f)=m." }, { "math_id": 197, "text": "\\operatorname{im}(D) = \\ker(\\operatorname{Res})," }, { "math_id": 198, "text": "g=g_{-1}X^{-1}+G'," }, { "math_id": 199, "text": "G \\in K((X))" }, { "math_id": 200, "text": "(g\\circ f)f'= g_{-1}f^{-1}f'+(G'\\circ f)f' = g_{-1}f'/f + (G \\circ f)'" }, { "math_id": 201, "text": "f \\in K[[X]]" }, { "math_id": 202, "text": "g \\in K[[X]]." }, { "math_id": 203, "text": "k[X^k] g^n=n[X^{-n}]f^{-k}." }, { "math_id": 204, "text": "[X^k] g=\\frac{1}{k} \\operatorname{Res}\\left( f^{-k}\\right)." 
}, { "math_id": 205, "text": "\\operatorname{ord}(f) =1 " }, { "math_id": 206, "text": "X \\rightsquigarrow f(X)" }, { "math_id": 207, "text": "\n\\begin{align}\nk[X^k] g^n & \n\\ \\stackrel{\\mathrm{(v)}}=\\ \nk\\operatorname{Res}\\left( g^n X^{-k-1} \\right)\n\\ \\stackrel{\\mathrm{(iv)}}=\\ \nk\\operatorname{Res}\\left(X^n f^{-k-1}f'\\right)\n\\ \\stackrel{\\mathrm{chain}}=\\ \n-\\operatorname{Res}\\left(X^n (f^{-k})'\\right) \\\\\n& \n\\ \\stackrel{\\mathrm{(ii)}}=\\ \n\\operatorname{Res}\\left(\\left(X^n\\right)' f^{-k}\\right)\n\\ \\stackrel{\\mathrm{chain}}=\\ \nn\\operatorname{Res}\\left(X^{n-1}f^{-k}\\right)\n\\ \\stackrel{\\mathrm{(v)}}=\\ \nn[X^{-n}]f^{-k}.\n\\end{align}\n" }, { "math_id": 208, "text": "\\Complex((X))" }, { "math_id": 209, "text": "X^{\\alpha}\\Complex((X))," }, { "math_id": 210, "text": "f_1=g_1=1" }, { "math_id": 211, "text": "m=-\\alpha-\\beta\\in\\N," }, { "math_id": 212, "text": "\\frac{1}{\\alpha}[X^m]\\left( \\frac{f}{X} \\right)^\\alpha=-\\frac{1}{\\beta}[X^m]\\left( \\frac{g}{X} \\right)^\\beta." }, { "math_id": 213, "text": "\\sum_\\alpha c_\\alpha X^\\alpha" }, { "math_id": 214, "text": "R[[X_I]]," }, { "math_id": 215, "text": "\\left(\\sum_\\alpha c_\\alpha X^\\alpha\\right)+\\left(\\sum_\\alpha d_\\alpha X^\\alpha \\right)= \\sum_\\alpha (c_\\alpha+d_\\alpha) X^\\alpha" }, { "math_id": 216, "text": "\\left(\\sum_\\alpha c_\\alpha X^\\alpha\\right)\\times\\left(\\sum_\\beta d_\\beta X^\\beta\\right)=\\sum_{\\alpha,\\beta} c_\\alpha d_\\beta X^{\\alpha+\\beta}" }, { "math_id": 217, "text": "R[[X_I]]" }, { "math_id": 218, "text": "I=\\N," }, { "math_id": 219, "text": "(f_n)_{n\\in \\N}" }, { "math_id": 220, "text": "f_n = X_n + X_{n+1} + X_{n+2} + \\cdots " }, { "math_id": 221, "text": "R[[X]][[Y]]" }, { "math_id": 222, "text": "R[[X,Y]]." }, { "math_id": 223, "text": "R[[X_1, \\ldots, X_r]]" }, { "math_id": 224, "text": "\\Phi: R[[X_1, \\ldots, X_r]] \\to S" }, { "math_id": 225, "text": "\\textstyle\\sum_\\alpha c_\\alpha X^\\alpha " }, { "math_id": 226, "text": "\\left(\\sum_\\alpha c_\\alpha X^\\alpha\\right)+\\left(\\sum_\\alpha d_\\alpha X^\\alpha\\right)=\\sum_\\alpha(c_\\alpha+d_\\alpha)X^\\alpha" }, { "math_id": 227, "text": "\\left(\\sum_\\alpha c_\\alpha X^\\alpha\\right)\\times\\left(\\sum_\\alpha d_\\alpha X^\\alpha\\right)=\\sum_{\\alpha,\\beta} c_\\alpha d_\\beta X^{\\alpha} \\cdot X^{\\beta}" }, { "math_id": 228, "text": "\\Sigma" }, { "math_id": 229, "text": "\\Sigma^*" }, { "math_id": 230, "text": "S\\langle\\langle \\Sigma^*\\rangle\\rangle" }, { "math_id": 231, "text": "r:\\Sigma^*\\to S" }, { "math_id": 232, "text": "r = \\sum_{w \\in \\Sigma^*} (r,w)w." 
}, { "math_id": 233, "text": "(r,w)" }, { "math_id": 234, "text": "r" }, { "math_id": 235, "text": "w\\in\\Sigma^*" }, { "math_id": 236, "text": "(r,w)\\in S" }, { "math_id": 237, "text": "r\\in S\\langle\\langle \\Sigma^*\\rangle\\rangle" }, { "math_id": 238, "text": "\\operatorname{supp}(r)=\\{w\\in\\Sigma^*|\\ (r,w)\\neq 0\\}" }, { "math_id": 239, "text": "0" }, { "math_id": 240, "text": "1" }, { "math_id": 241, "text": "S\\langle \\Sigma^*\\rangle" }, { "math_id": 242, "text": "r_1, r_2\\in S\\langle\\langle \\Sigma^*\\rangle\\rangle" }, { "math_id": 243, "text": "s\\in S" }, { "math_id": 244, "text": "r_1+r_2" }, { "math_id": 245, "text": "(r_1+r_2,w)=(r_1,w)+(r_2,w)" }, { "math_id": 246, "text": "r_1\\cdot r_2" }, { "math_id": 247, "text": "(r_1\\cdot r_2,w) = \\sum_{w_1w_2=w}(r_1,w_1)(r_2,w_2)" }, { "math_id": 248, "text": "r_1\\odot r_2" }, { "math_id": 249, "text": "(r_1\\odot r_2,w)=(r_1,w)(r_2,w)" }, { "math_id": 250, "text": "sr_1" }, { "math_id": 251, "text": "r_1s" }, { "math_id": 252, "text": "(sr_1,w)=s(r_1,w)" }, { "math_id": 253, "text": "(r_1s,w)=(r_1,w)s" }, { "math_id": 254, "text": "(S\\langle\\langle \\Sigma^*\\rangle\\rangle,+,\\cdot,0,\\varepsilon)" }, { "math_id": 255, "text": "(S\\langle \\Sigma^*\\rangle, +,\\cdot,0,\\varepsilon)" }, { "math_id": 256, "text": "\\varepsilon" }, { "math_id": 257, "text": "w" }, { "math_id": 258, "text": "G" }, { "math_id": 259, "text": "<" }, { "math_id": 260, "text": "a<b" }, { "math_id": 261, "text": "a+c<b+c" }, { "math_id": 262, "text": "c" }, { "math_id": 263, "text": "\\sum_{i \\in I} a_i X^i " }, { "math_id": 264, "text": "a_i" }, { "math_id": 265, "text": "R((G))" }, { "math_id": 266, "text": "[[R^G]]" } ]
https://en.wikipedia.org/wiki?curid=60012
60013689
Stanley sequence
Mathematical sequence involving arithmetic progressions In mathematics, a Stanley sequence is an integer sequence generated by a greedy algorithm that chooses the sequence members to avoid arithmetic progressions. If formula_0 is a finite set of non-negative integers on which no three elements form an arithmetic progression (that is, a Salem–Spencer set), then the Stanley sequence generated from formula_0 starts from the elements of formula_0, in sorted order, and then repeatedly chooses each successive element of the sequence to be a number that is larger than the already-chosen numbers and does not form any three-term arithmetic progression with them. These sequences are named after Richard P. Stanley. Binary–ternary sequence. The Stanley sequence starting from the empty set consists of those numbers whose ternary representations have only the digits 0 and 1. That is, when written in ternary, they look like binary numbers. These numbers are 0, 1, 3, 4, 9, 10, 12, 13, 27, 28, 30, 31, 36, 37, 39, 40, ... (sequence in the OEIS) By their construction as a Stanley sequence, this sequence is the lexicographically first arithmetic-progression-free sequence. Its elements are the sums of distinct powers of three, the numbers formula_1 such that the formula_1th central binomial coefficient is not divisible by three, and the numbers whose balanced ternary representation is the same as their ternary representation. The construction of this sequence from the ternary numbers is analogous to the construction of the Moser–de Bruijn sequence, the sequence of numbers whose base-4 representations have only the digits 0 and 1, and the construction of the Cantor set as the subset of real numbers in the interval formula_2 whose ternary representations use only the digits 0 and 2. More generally, this sequence is a 2-regular sequence, one of a class of integer sequences defined by a linear recurrence relation with multiplier 2. This sequence includes three powers of two: 1, 4, and 256 = 3^5 + 3^2 + 3 + 1. Paul Erdős conjectured that these are the only powers of two that it contains. Growth rate. Andrew Odlyzko and Richard P. Stanley observed that the number of elements up to some threshold formula_1 in the binary–ternary sequence, and in other Stanley sequences starting from formula_3 or formula_4, grows proportionally to formula_5. For other starting sets formula_6 the Stanley sequences that they considered appeared to grow more erratically but even more sparsely. For instance, the first irregular case is formula_7, which generates the sequence 0, 4, 5, 7, 11, 12, 16, 23, 26, 31, 33, 37, 38, 44, 49, 56, 73, 78, 80, 85, 95, 99, ... (sequence in the OEIS) Odlyzko and Stanley conjectured that in such cases the number of elements up to any threshold formula_1 is formula_8. That is, there is a dichotomy in the growth rate of Stanley sequences between the ones with similar growth to the binary–ternary sequence and others with a much smaller growth rate; according to this conjecture, there should be no Stanley sequences with intermediate growth. Moy proved that Stanley sequences cannot grow significantly more slowly than the conjectured bound for the sequences of slow growth. Every Stanley sequence has formula_9 elements up to formula_1. More precisely, Moy showed that, for every such sequence, every formula_10, and all sufficiently large formula_1, the number of elements is at least formula_11. 
Later authors improved the constant factor in this bound, and proved that for Stanley sequences that grow as formula_12 the constant factor in their growth rates can be any rational number whose denominator is a power of three. History. A variation of the binary–ternary sequence (with one added to each element) was considered in 1936 by Paul Erdős and Pál Turán, who observed that it has no three-term arithmetic progression and conjectured (incorrectly) that it was the densest possible sequence with no arithmetic progression. In unpublished work with Andrew Odlyzko in 1978, Richard P. Stanley experimented with the greedy algorithm to generate progression-free sequences. The sequences they studied were exactly the Stanley sequences for the initial sets formula_6. Stanley sequences were named, and generalized to starting sets other than formula_6, in a paper published in 1999 by Erdős (posthumously) with four other authors.
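The greedy construction described above is easy to implement directly; the following Python sketch (the function names and the number of generated terms are arbitrary choices) reproduces both the binary–ternary sequence and the irregular sequence generated from {0, 4}.

```python
def stanley_sequence(start, count):
    """Greedily extend `start` so that no three elements form an arithmetic
    progression, always taking the smallest admissible next integer."""
    seq = list(start)
    x = seq[-1] + 1 if seq else 0
    while len(seq) < count:
        # x would complete a progression a, b, x (with a < b < x) iff 2*b - a == x
        if all(2 * b - a != x for b in seq for a in seq if a < b):
            seq.append(x)
        x += 1
    return seq

def ternary(n):
    digits = ""
    while n:
        digits, n = str(n % 3) + digits, n // 3
    return digits or "0"

s = stanley_sequence([], 16)
print(s)   # [0, 1, 3, 4, 9, 10, 12, 13, 27, 28, 30, 31, 36, 37, 39, 40]
assert all("2" not in ternary(n) for n in s)   # only ternary digits 0 and 1

print(stanley_sequence([0, 4], 12))   # [0, 4, 5, 7, 11, 12, 16, 23, 26, 31, 33, 37]
```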
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "[0,1]" }, { "math_id": 3, "text": "\\{0,3^k\\}" }, { "math_id": 4, "text": "\\{0,2\\cdot 3^k\\}" }, { "math_id": 5, "text": "n^{\\log_3 2}\\approx n^{0.631}" }, { "math_id": 6, "text": "\\{0,s\\}" }, { "math_id": 7, "text": "s=4" }, { "math_id": 8, "text": "O\\bigl(\\sqrt{n\\log n}\\bigr)" }, { "math_id": 9, "text": "\\Omega\\bigl(\\sqrt{n}\\bigr)" }, { "math_id": 10, "text": "\\varepsilon>0" }, { "math_id": 11, "text": "(\\sqrt 2 - \\varepsilon)\\sqrt n" }, { "math_id": 12, "text": "n^{\\log_2 3}" } ]
https://en.wikipedia.org/wiki?curid=60013689
6002026
Symplectic sum
In mathematics, specifically in symplectic geometry, the symplectic sum is a geometric modification on symplectic manifolds, which glues two given manifolds into a single new one. It is a symplectic version of connected summation along a submanifold, often called a fiber sum. The symplectic sum is the inverse of the symplectic cut, which decomposes a given manifold into two pieces. Together the symplectic sum and cut may be viewed as a deformation of symplectic manifolds, analogous for example to deformation to the normal cone in algebraic geometry. The symplectic sum has been used to construct previously unknown families of symplectic manifolds, and to derive relationships among the Gromov–Witten invariants of symplectic manifolds. Definition. Let formula_0 and formula_1 be two symplectic formula_2-manifolds and formula_3 a symplectic formula_4-manifold, embedded as a submanifold into both formula_0 and formula_1 via formula_5 such that the Euler classes of the normal bundles are opposite: formula_6 In the 1995 paper that defined the symplectic sum, Robert Gompf proved that for any orientation-reversing isomorphism formula_7 there is a canonical isotopy class of symplectic structures on the connected sum formula_8 meeting several conditions of compatibility with the summands formula_9. In other words, the theorem defines a symplectic sum operation whose result is a symplectic manifold, unique up to isotopy. To produce a well-defined symplectic structure, the connected sum must be performed with special attention paid to the choices of various identifications. Loosely speaking, the isomorphism formula_10 is composed with an orientation-reversing symplectic involution of the normal bundles of formula_3 (or rather their corresponding punctured unit disk bundles); then this composition is used to glue formula_0 to formula_1 along the two copies of formula_3. Generalizations. In greater generality, the symplectic sum can be performed on a single symplectic manifold formula_11 containing two disjoint copies of formula_3, gluing the manifold to itself along the two copies. The preceding description of the sum of two manifolds then corresponds to the special case where formula_12 consists of two connected components, each containing a copy of formula_3. Additionally, the sum can be performed simultaneously on submanifolds formula_13 of equal dimension and meeting formula_3 transversally. Other generalizations also exist. However, it is not possible to remove the requirement that formula_3 be of codimension two in the formula_9, as the following argument shows. A symplectic sum along a submanifold of codimension formula_14 requires a symplectic involution of a formula_14-dimensional annulus. If this involution exists, it can be used to patch two formula_14-dimensional balls together to form a symplectic formula_14-dimensional sphere. Because the sphere is a compact manifold, a symplectic form formula_15 on it induces a nonzero cohomology class formula_16 But this second cohomology group is zero unless formula_17. So the symplectic sum is possible only along a submanifold of codimension two. Identity element. 
Given formula_11 with codimension-two symplectic submanifold formula_3, one may projectively complete the normal bundle of formula_3 in formula_11 to the formula_18-bundle formula_19 This formula_20 contains two canonical copies of formula_3: the zero-section formula_21, which has normal bundle equal to that of formula_3 in formula_11, and the infinity-section formula_22, which has opposite normal bundle. Therefore, one may symplectically sum formula_23 with formula_24; the result is again formula_11, with formula_21 now playing the role of formula_3: formula_25 So for any particular pair formula_23 there exists an identity element formula_20 for the symplectic sum. Such identity elements have been used both in establishing theory and in computations; see below. Symplectic sum and cut as deformation. It is sometimes profitable to view the symplectic sum as a family of manifolds. In this framework, the given data formula_0, formula_1, formula_3, formula_26, formula_27, formula_10 determine a unique smooth formula_28-dimensional symplectic manifold formula_29 and a fibration formula_30 in which the central fiber is the singular space formula_31 obtained by joining the summands formula_9 along formula_3, and the generic fiber formula_32 is a symplectic sum of the formula_9. (That is, the generic fibers are all members of the unique isotopy class of the symplectic sum.) Loosely speaking, one constructs this family as follows. Choose a nonvanishing holomorphic section formula_33 of the trivial complex line bundle formula_34 Then, in the direct sum formula_35 with formula_36 representing a normal vector to formula_3 in formula_9, consider the locus of the quadratic equation formula_37 for a chosen small formula_38. One can glue both formula_39 (the summands with formula_3 deleted) onto this locus; the result is the symplectic sum formula_32. As formula_40 varies, the sums formula_32 naturally form the family formula_41 described above. The central fiber formula_42 is the symplectic cut of the generic fiber. So the symplectic sum and cut can be viewed together as a quadratic deformation of symplectic manifolds. An important example occurs when one of the summands is an identity element formula_20. For then the generic fiber is a symplectic manifold formula_11 and the central fiber is formula_11 with the normal bundle of formula_3 "pinched off at infinity" to form the formula_18-bundle formula_20. This is analogous to deformation to the normal cone along a smooth divisor formula_3 in algebraic geometry. In fact, symplectic treatments of Gromov–Witten theory often use the symplectic sum/cut for "rescaling the target" arguments, while algebro-geometric treatments use deformation to the normal cone for these same arguments. However, the symplectic sum is not a complex operation in general. The sum of two Kähler manifolds need not be Kähler. History and applications. The symplectic sum was first clearly defined in 1995 by Robert Gompf. He used it to demonstrate that any finitely presented group appears as the fundamental group of a symplectic four-manifold. Thus the category of symplectic manifolds was shown to be much larger than the category of Kähler manifolds. Around the same time, Eugene Lerman proposed the symplectic cut as a generalization of symplectic blow up and used it to study the symplectic quotient and other operations on symplectic manifolds. 
A number of researchers have subsequently investigated the behavior of pseudoholomorphic curves under symplectic sums, proving various versions of a symplectic sum formula for Gromov–Witten invariants. Such a formula aids computation by allowing one to decompose a given manifold into simpler pieces, whose Gromov–Witten invariants should be easier to compute. Another approach is to use an identity element formula_20 to write the manifold formula_11 as a symplectic sum formula_43 A formula for the Gromov–Witten invariants of a symplectic sum then yields a recursive formula for the Gromov–Witten invariants of formula_11.
[ { "math_id": 0, "text": "M_1" }, { "math_id": 1, "text": "M_2" }, { "math_id": 2, "text": "2n" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "(2n - 2)" }, { "math_id": 5, "text": "j_i : V \\hookrightarrow M_i," }, { "math_id": 6, "text": "e(N_{M_1} V) = -e(N_{M_2} V)." }, { "math_id": 7, "text": "\\psi : N_{M_1} V \\to N_{M_2} V" }, { "math_id": 8, "text": "(M_1, V) \\# (M_2, V)" }, { "math_id": 9, "text": "M_i" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": "X" }, { "math_id": 13, "text": "X_i \\subseteq M_i" }, { "math_id": 14, "text": "2k" }, { "math_id": 15, "text": "\\omega" }, { "math_id": 16, "text": "[\\omega] \\in H^2(\\mathbb{S}^{2k}, \\mathbb{R})." }, { "math_id": 17, "text": "2k = 2" }, { "math_id": 18, "text": "\\mathbb{CP}^1" }, { "math_id": 19, "text": "P := \\mathbb{P}(N_M V \\oplus \\mathbb{C})." }, { "math_id": 20, "text": "P" }, { "math_id": 21, "text": "V_0" }, { "math_id": 22, "text": "V_\\infty" }, { "math_id": 23, "text": "(M, V)" }, { "math_id": 24, "text": "(P, V_\\infty)" }, { "math_id": 25, "text": "(M, V) = ((M, V) \\# (P, V_\\infty), V_0)." }, { "math_id": 26, "text": "j_1" }, { "math_id": 27, "text": "j_2" }, { "math_id": 28, "text": "(2n + 2)" }, { "math_id": 29, "text": "Z" }, { "math_id": 30, "text": "Z \\to D \\subseteq \\mathbb{C}" }, { "math_id": 31, "text": "Z_0 = M_1 \\cup_V M_2" }, { "math_id": 32, "text": "Z_\\epsilon" }, { "math_id": 33, "text": "\\eta" }, { "math_id": 34, "text": "N_{M_1} V \\otimes_\\mathbb{C} N_{M_2} V." }, { "math_id": 35, "text": "N_{M_1} V \\oplus N_{M_2} V," }, { "math_id": 36, "text": "v_i" }, { "math_id": 37, "text": "v_1 \\otimes v_2 = \\epsilon \\eta" }, { "math_id": 38, "text": "\\epsilon \\in \\mathbb{C}" }, { "math_id": 39, "text": "M_i \\setminus V" }, { "math_id": 40, "text": "\\epsilon" }, { "math_id": 41, "text": "Z \\to D" }, { "math_id": 42, "text": "Z_0" }, { "math_id": 43, "text": "(M, V) = (M, V) \\# (P, V_\\infty)." } ]
https://en.wikipedia.org/wiki?curid=6002026
60022
Fractal compression
Compression method for digital images Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes" which are used to recreate the encoded image. Iterated function systems. Fractal image representation may be described mathematically as an iterated function system (IFS). For binary images. We begin with the representation of a binary image, where the image may be thought of as a subset of formula_0. An IFS is a set of contraction mappings "ƒ"1...,"ƒN", formula_1 According to these mapping functions, the IFS describes a two-dimensional set "S" as the fixed point of the Hutchinson operator formula_2 That is, "H" is an operator mapping sets to sets, and "S" is the unique set satisfying "H"("S") = "S". The idea is to construct the IFS such that this set "S" is the input binary image. The set "S" can be recovered from the IFS by fixed point iteration: for any nonempty compact initial set "A"0, the iteration "A""k"+1 = "H"("Ak") converges to "S". The set "S" is self-similar because "H"("S") = "S" implies that "S" is a union of mapped copies of itself: formula_3 So we see the IFS is a fractal representation of "S". Extension to grayscale. IFS representation can be extended to a grayscale image by considering the image's graph as a subset of formula_4. For a grayscale image "u"("x","y"), consider the set "S" = {("x","y","u"("x","y"))}. Then, similar to the binary case, "S" is described by an IFS using a set of contraction mappings "ƒ"1...,"ƒN", but in formula_4, formula_5 Encoding. A challenging problem of ongoing research in fractal image representation is how to choose the "ƒ"1...,"ƒN" such that its fixed point approximates the input image, and how to do this efficiently. A simple approach for doing so is the following partitioned iterated function system (PIFS): first, partition the image domain into non-overlapping range blocks "Ri"; second, for each "Ri", search the image for a larger domain block "Di" that appears similar to "Ri" under a contractive (affine) transformation; third, record for each range block the chosen "Di" together with the parameters of that transformation; these data constitute the fractal code. In the second step, it is important to find a similar block so that the IFS accurately represents the input image, so a sufficient number of candidate blocks for "Di" need to be considered. On the other hand, a large search considering many blocks is computationally costly. This bottleneck of searching for similar blocks is why PIFS fractal encoding is much slower than for example DCT and wavelet based image representation. The initial square partitioning and brute-force search algorithm presented by Jacquin provides a starting point for further research and extensions in many possible directions—different ways of partitioning the image into range blocks of various sizes and shapes; fast techniques for quickly finding a close-enough matching domain block for each range block rather than brute-force searching, such as fast motion estimation algorithms; different ways of encoding the mapping from the domain block to the range block; etc. Other researchers attempt to find algorithms to automatically encode an arbitrary image as RIFS (recurrent iterated function systems) or global IFS, rather than PIFS; and algorithms for fractal video compression including motion compensation and three dimensional iterated function systems. Fractal image compression has many similarities to vector quantization image compression. Features. With fractal compression, encoding is extremely computationally expensive because of the search used to find the self-similarities. Decoding, however, is quite fast. 
While this asymmetry has so far made it impractical for real time applications, when video is archived for distribution from disk storage or file downloads fractal compression becomes more competitive. At common compression ratios, up to about 50:1, fractal compression provides similar results to DCT-based algorithms such as JPEG. At high compression ratios fractal compression may offer superior quality. For satellite imagery, ratios of over 170:1 have been achieved with acceptable results. Fractal video compression ratios of 25:1–244:1 have been achieved in reasonable compression times (2.4 to 66 sec/frame). Compression efficiency increases with higher image complexity and color depth, compared to simple grayscale images. Resolution independence and fractal scaling. An inherent feature of fractal compression is that images become resolution independent after being converted to fractal code. This is because the iterated function systems in the compressed file scale indefinitely. This indefinite scaling property of a fractal is known as "fractal scaling". Fractal interpolation. The resolution independence of a fractal-encoded image can be used to increase the display resolution of an image. This process is also known as "fractal interpolation". In fractal interpolation, an image is encoded into fractal codes via fractal compression, and subsequently decompressed at a higher resolution. The result is an up-sampled image in which iterated function systems have been used as the interpolant. Fractal interpolation maintains geometric detail very well compared to traditional interpolation methods like bilinear interpolation and bicubic interpolation. Since the interpolation cannot reverse Shannon entropy however, it ends up sharpening the image by adding random instead of meaningful detail. One cannot, for example, enlarge an image of a crowd where each person's face is one or two pixels and hope to identify them. History. Michael Barnsley led the development of fractal compression from 1985 at the Georgia Institute of Technology (where both Barnsley and Sloan were professors in the mathematics department). The work was sponsored by DARPA and the Georgia Tech Research Corporation. The project resulted in several patents from 1987. Barnsley's graduate student Arnaud Jacquin implemented the first automatic algorithm in software in 1992. All methods are based on the fractal transform using iterated function systems. Michael Barnsley and Alan Sloan formed Iterated Systems Inc. in 1987 which was granted over 20 additional patents related to fractal compression. A major breakthrough for Iterated Systems Inc. was the automatic fractal transform process which eliminated the need for human intervention during compression as was the case in early experimentation with fractal compression technology. In 1992, Iterated Systems Inc. received a US$2.1 million government grant to develop a prototype digital image storage and decompression chip using fractal transform image compression technology. Fractal image compression has been used in a number of commercial applications: onOne Software, developed under license from Iterated Systems Inc., Genuine Fractals 5 which is a Photoshop plugin capable of saving files in compressed FIF (Fractal Image Format). To date the most successful use of still fractal image compression is by Microsoft in its Encarta multimedia encyclopedia, also under license. Iterated Systems Inc. 
supplied a shareware encoder (Fractal Imager), a stand-alone decoder, a Netscape plug-in decoder and a development package for use under Windows. The redistribution of the "decompressor DLL" provided by the ColorBox III SDK was governed by restrictive per-disk or year-by-year licensing regimes for proprietary software vendors and by a discretionary scheme that entailed the promotion of the Iterated Systems products for certain classes of other users. ClearVideo – also known as RealVideo (Fractal) – and SoftVideo were early fractal video compression products. ClearFusion was Iterated's freely distributed streaming video plugin for web browsers. In 1994 SoftVideo was licensed to Spectrum Holobyte for use in its CD-ROM games including Falcon Gold and . In 1996, Iterated Systems Inc. announced an alliance with the Mitsubishi Corporation to market ClearVideo to their Japanese customers. The original ClearVideo 1.2 decoder driver is still supported by Microsoft in Windows Media Player although the encoder is no longer supported. Two firms, Total Multimedia Inc. and Dimension, both claim to own or have the exclusive licence to Iterated's video technology, but neither has yet released a working product. The technology basis appears to be Dimension's U.S. patents 8639053 and 8351509, which have been considerably analyzed. In summary, it is a simple quadtree block-copying system with neither the bandwidth efficiency nor PSNR quality of traditional DCT-based codecs. In January 2016, TMMI announced that it was abandoning fractal-based technology altogether. Research papers between 1997 and 2007 discussed possible solutions to improve fractal algorithms and encoding hardware. Implementations. A library called "Fiasco" was created by Ullrich Hafner. In 2001, "Fiasco" was covered in the "Linux Journal". According to the 2000-04 "Fiasco" manual, "Fiasco" can be used for video compression. The Netpbm library includes the "Fiasco" library. Femtosoft developed an implementation of fractal image compression in Object Pascal and Java.
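As an illustration of the iterated-function-system decoding step described earlier (this is a toy attractor computation, not a fractal image codec), the following Python sketch runs the fixed point iteration A_(k+1) = H(A_k) for an IFS of three contraction maps whose attractor is the Sierpinski triangle; the maps, the starting point, the grid resolution and the iteration count are arbitrary choices.

```python
def hutchinson(points):
    """Apply the Hutchinson operator H(A) = f1(A) ∪ f2(A) ∪ f3(A) to a point set."""
    maps = [
        lambda x, y: (0.5 * x, 0.5 * y),
        lambda x, y: (0.5 * x + 0.5, 0.5 * y),
        lambda x, y: (0.5 * x, 0.5 * y + 0.5),
    ]
    return {f(x, y) for (x, y) in points for f in maps}

# start from an arbitrary nonempty set; the iterates converge to the attractor S = H(S)
A = {(0.3, 0.7)}
for _ in range(8):
    A = hutchinson(A)

# crude rendering of the decoded binary image on a 32 x 32 grid
grid = [[" "] * 32 for _ in range(32)]
for x, y in A:
    grid[int(y * 31)][int(x * 31)] = "#"
print("\n".join("".join(row) for row in reversed(grid)))
```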
[ { "math_id": 0, "text": "\\mathbb{R}^2" }, { "math_id": 1, "text": "f_i:\\mathbb{R}^2\\to \\mathbb{R}^2." }, { "math_id": 2, "text": "H(A)=\\bigcup_{i=1}^N f_i(A), \\quad A \\subset \\mathbb{R}^2." }, { "math_id": 3, "text": "S=f_1(S)\\cup f_2(S) \\cup\\cdots\\cup f_N(S)" }, { "math_id": 4, "text": "\\mathbb{R}^3" }, { "math_id": 5, "text": "f_i:\\mathbb{R}^3\\to \\mathbb{R}^3." } ]
https://en.wikipedia.org/wiki?curid=60022
6002878
Slack variable
Mathematical concept In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality constraint. A non-negativity constraint on the slack variable is also added. Slack variables are used in particular in linear programming. As with the other variables in the augmented constraints, the slack variable cannot take on negative values, as the simplex algorithm requires them to be positive or zero. Slack variables are also used in the Big M method. Example. By introducing the slack variable formula_0, the inequality formula_1 can be converted to the equation formula_2. Embedding in orthant. Slack variables give an embedding of a polytope formula_3 into the standard "f"-orthant, where formula_4 is the number of constraints (facets of the polytope). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized), and is expressed in terms of the "constraints" (linear functionals, covectors). Slack variables are "dual" to generalized barycentric coordinates, and, dually to generalized barycentric coordinates (which are not unique but can all be realized), are uniquely determined, but cannot all be realized. Dually, generalized barycentric coordinates express a polytope with formula_5 vertices (dual to facets), regardless of dimension, as the "image" of the standard formula_6-simplex, which has formula_5 vertices – the map is onto: formula_7 and expresses points in terms of the "vertices" (points, vectors). The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; this corresponds to a point not having "unique" generalized barycentric coordinates.
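A minimal numerical sketch of the conversion above (the matrix, the right-hand side and the test point are arbitrary): every inequality row receives one slack variable, so Ax ≤ b becomes [A | I](x, s) = b with s ≥ 0.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])

m, _ = A.shape
A_eq = np.hstack([A, np.eye(m)])        # coefficient matrix of the variables (x, s)

x = np.array([0.0, 1.0])                # a feasible point for A x <= b
s = b - A @ x                           # the slack picked up by each constraint
assert np.all(s >= 0)                   # feasibility of x is equivalent to s >= 0
assert np.allclose(A_eq @ np.concatenate([x, s]), b)
print(s)                                # [2. 5.]
```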
[ { "math_id": 0, "text": "\\mathbf{s} \\ge \\mathbf{0}" }, { "math_id": 1, "text": "\\mathbf{A}\\mathbf{x} \\le \\mathbf{b}" }, { "math_id": 2, "text": "\\mathbf{A}\\mathbf{x} + \\mathbf{s} = \\mathbf{b}" }, { "math_id": 3, "text": "P \\hookrightarrow (\\mathbf{R}_{\\geq 0})^f" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "(n-1)" }, { "math_id": 7, "text": "\\Delta^{n-1} \\twoheadrightarrow P," } ]
https://en.wikipedia.org/wiki?curid=6002878
60034541
Miller's recurrence algorithm
Procedure for calculating a rapidly decreasing solution of a linear recurrence relation Miller's recurrence algorithm is a procedure for calculating a rapidly decreasing solution of a linear recurrence relation developed by J. C. P. Miller. It was originally developed to compute tables of the modified Bessel function but also applies to Bessel functions of the first kind and has other applications such as computation of the coefficients of Chebyshev expansions of other special functions. Many families of special functions satisfy a recurrence relation that relates the values of the functions of different orders with common argument formula_0. The modified Bessel functions of the first kind formula_1 satisfy the recurrence relation formula_2. However, the modified Bessel functions of the second kind formula_3 also satisfy the same recurrence relation formula_4. The first solution decreases rapidly with formula_5. The second solution increases rapidly with formula_5. Miller's algorithm provides a numerically stable procedure to obtain the decreasing solution. To compute the terms of a recurrence formula_6 through formula_7 according to Miller's algorithm, one first chooses a value formula_8 much larger than formula_9 and computes a trial solution, taking the initial condition formula_10 to be an arbitrary non-zero value (such as 1) and taking formula_11 and later terms to be zero. Then the recurrence relation is used to successively compute trial values for formula_12, formula_13 down to formula_6. Noting that a second sequence obtained from the trial sequence by multiplication by a constant normalizing factor will still satisfy the same recurrence relation, one can then apply a separate normalizing relationship to determine the normalizing factor that yields the actual solution. In the example of the modified Bessel functions, a suitable normalizing relation is a summation involving the even terms of the recurrence: formula_14 where the infinite summation becomes finite due to the approximation that formula_11 and later terms are zero. Finally, it is confirmed that the approximation error of the procedure is acceptable by repeating the procedure with a second choice of formula_8 larger than the initial choice and confirming that the second set of results for formula_6 through formula_7 agrees with the first set within the desired tolerance. Note that to obtain this agreement, the value of formula_8 must be large enough that the term formula_10 is small compared to the desired tolerance. In contrast to Miller's algorithm, attempts to apply the recurrence relation in the forward direction starting from known values of formula_15 and formula_16 obtained by other methods will fail, as rounding errors introduce components of the rapidly increasing solution. Olver and Gautschi analyse the error propagation of the algorithm in detail. For Bessel functions of the first kind, the equivalent recurrence relation and normalizing relationship are: formula_17 formula_18. The algorithm is particularly efficient in applications that require the values of the Bessel functions for all orders formula_19 for each value of formula_0 compared to direct independent computations of formula_20 separate functions.
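The procedure is short to implement; the following Python sketch (the starting order M = 2·n_max + 20 is only a heuristic, and the function name is illustrative) applies Miller's algorithm to the modified Bessel functions of the first kind, using the backward recurrence and the normalizing sum given above.

```python
def modified_bessel_first_kind(x, n_max, M=None):
    """Return approximations of I_0(x), ..., I_{n_max}(x) by Miller's algorithm."""
    if M is None:
        M = 2 * n_max + 20               # heuristic starting order; raise it to refine
    trial = [0.0] * (M + 2)              # trial[M + 1] = 0, trial[M] = arbitrary seed
    trial[M] = 1.0
    for n in range(M, 0, -1):            # backward recurrence I_{n-1} = (2n/x) I_n + I_{n+1}
        trial[n - 1] = (2.0 * n / x) * trial[n] + trial[n + 1]
    # normalization from I_0(x) + 2 * sum_{m>=1} (-1)^m I_{2m}(x) = 1
    norm = trial[0] + 2.0 * sum((-1) ** m * trial[2 * m] for m in range(1, M // 2 + 1))
    return [t / norm for t in trial[:n_max + 1]]

values = modified_bessel_first_kind(1.0, 4)
print(values[0], values[1])              # approximately 1.266066 and 0.565159
```

Repeating the computation with a larger value of M and comparing the two sets of results gives the accuracy check described above.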
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "I_n(x)" }, { "math_id": 2, "text": "I_{n-1}(x)=\\frac{2n}{x}I_n(x)+I_{n+1}(x)" }, { "math_id": 3, "text": "K_n(x)" }, { "math_id": 4, "text": "K_{n-1}(x)=\\frac{2n}{x}K_n(x)+K_{n+1}(x)" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "a_0" }, { "math_id": 7, "text": "a_N" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "a_M" }, { "math_id": 11, "text": "a_{M+1}" }, { "math_id": 12, "text": "a_{M-1}" }, { "math_id": 13, "text": "a_{M-2}" }, { "math_id": 14, "text": "I_0(x)+2\\sum_{m=1}^\\infty (-1)^mI_{2m}(x)=1" }, { "math_id": 15, "text": "I_0(x)" }, { "math_id": 16, "text": "I_1(x)" }, { "math_id": 17, "text": "J_{n-1}(x)=\\frac{2n}{x}J_{n}(x)-J_{n+1}(x)" }, { "math_id": 18, "text": "J_0(x)+2\\sum_{m=1}^\\infty J_{2m}(x)=1" }, { "math_id": 19, "text": "0 \\cdots N" }, { "math_id": 20, "text": "N+1" } ]
https://en.wikipedia.org/wiki?curid=60034541
6004113
History of string theory
The history of string theory spans several decades of intense research including two superstring revolutions. Through the combined efforts of many researchers, string theory has developed into a broad and varied subject with connections to quantum gravity, particle and condensed matter physics, cosmology, and pure mathematics. 1943–1959: S-matrix theory. String theory represents an outgrowth of S-matrix theory, a research program begun by Werner Heisenberg in 1943 following John Archibald Wheeler's 1937 introduction of the S-matrix. Many prominent theorists picked up and advocated S-matrix theory, starting in the late 1950s and throughout the 1960s. The field became marginalized and discarded in the mid-1970s and disappeared in the 1980s. Physicists neglected it because some of its mathematical methods were alien, and because quantum chromodynamics supplanted it as an experimentally better-qualified approach to the strong interactions. The theory presented a radical rethinking of the foundations of physical laws. By the 1940s it had become clear that the proton and the neutron were not pointlike particles like the electron. Their magnetic moment differed greatly from that of a pointlike spin-½ charged particle, too much to attribute the difference to a small perturbation. Their interactions were so strong that they scattered like a small sphere, not like a point. Heisenberg proposed that the strongly interacting particles were in fact extended objects, and because there are difficulties of principle with extended relativistic particles, he proposed that the notion of a space-time point broke down at nuclear scales. Without space and time, it becomes difficult to formulate a physical theory. Heisenberg proposed a solution to this problem: focusing on the observable quantities—those things measurable by experiments. An experiment only sees a microscopic quantity if it can be transferred by a series of events to the classical devices that surround the experimental chamber. The objects that fly to infinity are stable particles, in quantum superpositions of different momentum states. Heisenberg proposed that even when space and time are unreliable, the notion of momentum state, which is defined far away from the experimental chamber, still works. The physical quantity he proposed as fundamental is the quantum mechanical amplitude for a group of incoming particles to turn into a group of outgoing particles, and he did not admit that there were any steps in between. The S-matrix is the quantity that describes how a collection of incoming particles turn into outgoing ones. Heisenberg proposed to study the S-matrix directly, without any assumptions about space-time structure. But when transitions from the far-past to the far-future occur in one step with no intermediate steps, it becomes difficult to calculate anything. In quantum field theory, the intermediate steps are the fluctuations of fields or equivalently the fluctuations of virtual particles. In this proposed S-matrix theory, there are no local quantities at all. Heisenberg proposed to use unitarity to determine the S-matrix. In all conceivable situations, the sum of the squares of the amplitudes must equal 1. This property can determine the amplitude in a quantum field theory order by order in a perturbation series once the basic interactions are given, and in many quantum field theories the amplitudes grow too fast at high energies to make a unitary S-matrix. 
But without extra assumptions on the high-energy behavior, unitarity is not enough to determine the scattering, and the proposal was ignored for many years. Heisenberg's proposal was revived in 1956 when Murray Gell-Mann recognized that dispersion relations—like those discovered by Hendrik Kramers and Ralph Kronig in the 1920s (see Kramers–Kronig relations)—allow the formulation of a notion of causality, a notion that events in the future would not influence events in the past, even when the microscopic notion of past and future are not clearly defined. He also recognized that these relations might be useful in computing observables for the case of strong interaction physics. The dispersion relations were analytic properties of the S-matrix, and they imposed more stringent conditions than those that follow from unitarity alone. This development in S-matrix theory stemmed from Murray Gell-Mann and Marvin Leonard Goldberger's (1954) discovery of crossing symmetry, another condition that the S-matrix had to fulfil. Prominent advocates of the new "dispersion relations" approach included Stanley Mandelstam and Geoffrey Chew, both at UC Berkeley at the time. Mandelstam discovered the double dispersion relations, a new and powerful analytic form, in 1958, and believed that it would provide the key to progress in the intractable strong interactions. 1959–1968: Regge theory and bootstrap models. By the late 1950s, many strongly interacting particles of ever higher spins had been discovered, and it became clear that they were not all fundamental. While Japanese physicist Shoichi Sakata proposed that the particles could be understood as bound states of just three of them (the proton, the neutron and the Lambda; see Sakata model), Geoffrey Chew believed that none of these particles are fundamental (for details, see Bootstrap model). Sakata's approach was reworked in the 1960s into the quark model by Murray Gell-Mann and George Zweig by making the charges of the hypothetical constituents fractional and rejecting the idea that they were observed particles. At the time, Chew's approach was considered more mainstream because it did not introduce fractional charge values and because it focused on experimentally measurable S-matrix elements, not on hypothetical pointlike constituents. In 1959, Tullio Regge, a young theorist in Italy, discovered that bound states in quantum mechanics can be organized into families known as Regge trajectories, each family having distinctive angular momenta. This idea was generalized to relativistic quantum mechanics by Stanley Mandelstam, Vladimir Gribov and Marcel Froissart, using a mathematical method (the Sommerfeld–Watson representation) discovered decades earlier by Arnold Sommerfeld and Kenneth M. Watson: the result was dubbed the Froissart–Gribov formula. In 1961, Geoffrey Chew and Steven Frautschi recognized that mesons had straight line Regge trajectories (in their scheme, spin is plotted against mass squared on a so-called Chew–Frautschi plot), which implied that the scattering of these particles would have very strange behavior—it should fall off exponentially quickly at large angles. With this realization, theorists hoped to construct a theory of composite particles on Regge trajectories, whose scattering amplitudes had the asymptotic form demanded by Regge theory. 
In 1967, a notable step forward in the bootstrap approach was the principle of DHS duality introduced by Richard Dolen, David Horn, and Christoph Schmid in 1967, at Caltech (the original term for it was "average duality" or "finite energy sum rule (FESR) duality"). The three researchers noticed that Regge pole exchange (at high energy) and resonance (at low energy) descriptions offer multiple representations/approximations of one and the same physically observable process. 1968–1974: Dual resonance model. The first model in which hadronic particles essentially follow the Regge trajectories was the dual resonance model that was constructed by Gabriele Veneziano in 1968, who noted that the Euler beta function could be used to describe 4-particle scattering amplitude data for such particles. The Veneziano scattering amplitude (or Veneziano model) was quickly generalized to an "N"-particle amplitude by Ziro Koba and Holger Bech Nielsen (their approach was dubbed the Koba–Nielsen formalism), and to what are now recognized as closed strings by Miguel Virasoro and Joel A. Shapiro (their approach was dubbed the Shapiro–Virasoro model). In 1969, the Chan–Paton rules (proposed by Jack E. Paton and Hong-Mo Chan) enabled isospin factors to be added to the Veneziano model. In 1969–70, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind presented a physical interpretation of the Veneziano amplitude by representing nuclear forces as vibrating, one-dimensional strings. However, this string-based description of the strong force made many predictions that directly contradicted experimental findings. In 1971, Pierre Ramond and, independently, John H. Schwarz and André Neveu attempted to implement fermions into the dual model. This led to the concept of "spinning strings", and pointed the way to a method for removing the problematic tachyon (see RNS formalism). Dual resonance models for strong interactions were a relatively popular subject of study between 1968 and 1973. The scientific community lost interest in string theory as a theory of strong interactions in 1973 when quantum chromodynamics became the main focus of theoretical research (mainly due to the theoretical appeal of its asymptotic freedom). 1974–1984: Bosonic string theory and superstring theory. In 1974, John H. Schwarz and Joël Scherk, and independently Tamiaki Yoneya, studied the boson-like patterns of string vibration and found that their properties exactly matched those of the graviton, the gravitational force's hypothetical messenger particle. Schwarz and Scherk argued that string theory had failed to catch on because physicists had underestimated its scope. This led to the development of bosonic string theory. String theory is formulated in terms of the Polyakov action, which describes how strings move through space and time. Like springs, the strings tend to contract to minimize their potential energy, but conservation of energy prevents them from disappearing, and instead they oscillate. By applying the ideas of quantum mechanics to strings it is possible to deduce the different vibrational modes of strings, and that each vibrational state appears to be a different particle. The mass of each particle, and the fashion with which it can interact, are determined by the way the string vibrates—in essence, by the "note" the string "sounds." The scale of notes, each corresponding to a different kind of particle, is termed the "spectrum" of the theory. 
Early models included both "open" strings, which have two distinct endpoints, and "closed" strings, where the endpoints are joined to make a complete loop. The two types of string behave in slightly different ways, yielding two spectra. Not all modern string theories use both types; some incorporate only the closed variety. The earliest string model has several problems: it has a critical dimension "D" = 26, a feature that was originally discovered by Claud Lovelace in 1971; the theory has a fundamental instability, the presence of tachyons (see tachyon condensation); additionally, the spectrum of particles contains only bosons, particles like the photon that obey particular rules of behavior. While bosons are a critical ingredient of the Universe, they are not its only constituents. Investigating how a string theory may include fermions in its spectrum led to the invention of supersymmetry (in the West) in 1971, a mathematical transformation between bosons and fermions. String theories that include fermionic vibrations are now known as superstring theories. In 1977, the GSO projection (named after Ferdinando Gliozzi, Joël Scherk, and David I. Olive) led to a family of tachyon-free unitary free string theories, the first consistent superstring theories (see ). 1984–1994: First superstring revolution. The first superstring revolution is a period of important discoveries that began in 1984. It was realized that string theory was capable of describing all elementary particles as well as the interactions between them. Hundreds of physicists started to work on string theory as the most promising idea to unify physical theories. The revolution was started by a discovery of anomaly cancellation in type I string theory via the Green–Schwarz mechanism (named after Michael Green and John H. Schwarz) in 1984. The ground-breaking discovery of the heterotic string was made by David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm in 1985. It was also realized by Philip Candelas, Gary Horowitz, Andrew Strominger, and Edward Witten in 1985 that to obtain formula_0 supersymmetry, the six small extra dimensions (the "D" = 10 critical dimension of superstring theory had been originally discovered by John H. Schwarz in 1972) need to be compactified on a Calabi–Yau manifold. (In string theory, compactification is a generalization of Kaluza–Klein theory, which was first proposed in the 1920s.) By 1985, five separate superstring theories had been described: type I, type II (IIA and IIB), and heterotic (SO(32) and "E"8×"E"8). "Discover" magazine in the November 1986 issue (vol. 7, #11) featured a cover story written by Gary Taubes, "Everything's Now Tied to Strings", which explained string theory for a popular audience. In 1987, Eric Bergshoeff, Ergin Sezgin and Paul Townsend showed that there are no superstrings in eleven dimensions (the largest number of dimensions consistent with a single graviton in supergravity theories), but supermembranes. 1994–2003: Second superstring revolution. In the early 1990s, Edward Witten and others found strong evidence that the different superstring theories were different limits of an 11-dimensional theory that became known as M-theory (for details, see Introduction to M-theory). These discoveries sparked the second superstring revolution that took place approximately between 1994 and 1995. The different versions of superstring theory were unified, as long hoped, by new equivalences. 
These are known as S-duality, T-duality, U-duality, mirror symmetry, and conifold transitions. The different theories of strings were also related to M-theory. In 1995, Joseph Polchinski discovered that the theory requires the inclusion of higher-dimensional objects, called D-branes: these are the sources of electric and magnetic Ramond–Ramond fields that are required by string duality. D-branes added additional rich mathematical structure to the theory, and opened possibilities for constructing realistic cosmological models in the theory (for details, see Brane cosmology). In 1997–98, Juan Maldacena conjectured a relationship between type IIB string theory and "N" = 4 supersymmetric Yang–Mills theory, a gauge theory. This conjecture, called the AdS/CFT correspondence, has generated a great deal of interest in high energy physics. It is a realization of the holographic principle, which has far-reaching implications: the AdS/CFT correspondence has helped elucidate the mysteries of black holes suggested by Stephen Hawking's work and is believed to provide a resolution of the black hole information paradox. 2003–present. In 2003, Michael R. Douglas's discovery of the string theory landscape, which suggests that string theory has a large number of inequivalent false vacua, led to much discussion of what string theory might eventually be expected to predict, and how cosmology can be incorporated into the theory. A possible mechanism of string theory vacuum stabilization (the KKLT mechanism) was proposed in 2003 by Shamit Kachru, Renata Kallosh, Andrei Linde, and Sandip Trivedi. Much of the present-day research is focused on characterizing the "swampland" of theories incompatible with quantum gravity. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N=1" } ]
https://en.wikipedia.org/wiki?curid=6004113
6004559
R-factor (crystallography)
Component of crystallography In crystallography, the R-factor (sometimes called residual factor or reliability factor or the R-value or RWork) is a measure of the agreement between the crystallographic model and the experimental X-ray diffraction data. In other words, it is a measure of how well the refined structure predicts the observed data. The value is also sometimes called the discrepancy index, as it mathematically describes the difference between the experimental observations and the ideal calculated values. It is defined by the following equation: formula_0 where "F" is the so-called structure factor and the sum extends over all the reflections of X-rays measured and their calculated counterparts respectively. The structure factor is closely related to the intensity of the reflection it describes: formula_1. The minimum possible value is zero, indicating perfect agreement between experimental observations and the structure factors predicted from the model. There is no theoretical maximum, but in practice, values are considerably less than one even for poor models, provided the model includes a suitable scale factor. Random experimental errors in the data contribute to formula_2 even for a perfect model, and these have more leverage when the data are weak or few, such as for a low-resolution data set. Model inadequacies such as incorrect or missing parts and unmodeled disorder are the other main contributors to formula_2, making it useful to assess the progress and final result of a crystallographic model refinement. For large molecules, the R-factor usually ranges between 0.6 (when computed for a random model and against an experimental data set) and 0.2 (for example for a well refined macro-molecular model at a resolution of 2.5 Ångström). Small molecules (up to "ca". 1000 atoms) usually form better-ordered crystals than large molecules, and thus it is possible to attain lower R-factors. In the Cambridge Structural Database of small-molecule structures, more than 95% of the 500,000+ crystals have an R-factor lower than 0.15, and 9.5% have an R-factor lower than 0.03. Crystallographers also use the Free R-Factor (formula_3) to assess possible overmodeling of the data. formula_3 is computed according to the same formula given above, but on a small, random sample of data that are set aside for the purpose and never included in the refinement. formula_3 will always be greater than formula_2 because the model is not fitted to the reflections that contribute to formula_3, but the two statistics should be similar because a correct model should predict "all" the data with uniform accuracy. If the two statistics differ significantly then that indicates the model has been over-parameterized, so that to some extent it predicts not the ideal error-free data for the correct model, but rather the error-afflicted data actually observed. The quantities formula_4 and formula_5 are similarly used to describe the internal agreement of measurements in a crystallographic data set. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
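As an illustration of the definitions above, the following Python sketch (an assumed example, not part of any crystallographic package) computes an R-factor from arrays of observed and calculated structure-factor amplitudes, and evaluates it separately on a randomly set-aside subset in the spirit of the free R-factor; in a real refinement the free set would also be excluded from the fit, which this toy split does not attempt.

import numpy as np

def r_factor(f_obs, f_calc):
    # R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|) over the chosen reflections
    f_obs = np.abs(np.asarray(f_obs, dtype=float))
    f_calc = np.abs(np.asarray(f_calc, dtype=float))
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

# Hypothetical data: 1000 reflections whose calculated amplitudes deviate by about 10%.
rng = np.random.default_rng(0)
f_obs = rng.uniform(1.0, 100.0, size=1000)
f_calc = f_obs * (1.0 + 0.1 * rng.standard_normal(1000))

free = rng.random(1000) < 0.05          # roughly 5% test set, as for Rfree bookkeeping
print("Rwork:", round(r_factor(f_obs[~free], f_calc[~free]), 3))
print("Rfree:", round(r_factor(f_obs[free], f_calc[free]), 3))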
[ { "math_id": 0, "text": "R = \\frac{ \\sum{||F_\\text{obs}| - |F_\\text{calc}|| } }{ \\sum{ |F_\\text{obs}|}}," }, { "math_id": 1, "text": "I_{hkl} \\propto |F(hkl)|^2" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "R_{Free}" }, { "math_id": 4, "text": "R_\\text{sym}" }, { "math_id": 5, "text": "R_\\text{merge}" } ]
https://en.wikipedia.org/wiki?curid=6004559
60048114
Maria Heep-Altiner
German mathematician (b. 1959) Maria Heep-Altiner (born 29 December 1959 in Niederzeuzheim) is a German mathematician, actuary and university lecturer. Life. After graduating from the Prince Johann Ludwig School in Hadamar in 1978, Heep-Altiner studied mathematics and economics at the University of Bonn. In 1989 she earned her doctorate in mathematics on the number theory topic "Period relations for formula_0" under Günter Harder and Michael Rapoport. She then worked as an actuary for Gerling, before moving in 1994 to Allgemeine Versicherungs-AG, where she became the actuarial manager for property insurance. In 2006, she moved to Talanx, where she was responsible for setting up an internal holding model. In 2008, Heep-Altiner returned to academia as a professor at the Institute of Insurance at the Cologne University of Applied Sciences, where she is responsible for the area of financing within insurance companies. She is a member of the executive board of the German Actuarial Society. In addition, she has co-authored numerous publications on actuarial topics, in particular on the Solvency II Directive of 2009. Publications. For the following books Heep-Altiner was the main author or a significant part of the writing team:
[ { "math_id": 0, "text": "GL_2(f)" } ]
https://en.wikipedia.org/wiki?curid=60048114
60049406
Impartial culture
Impartial culture (IC) or the culture of indifference is a probabilistic model used in social choice theory for analyzing ranked voting rules. The model is understood to be unrealistic and not a good representation of real-world voting behavior; however, it is useful for mathematical comparisons of voting methods under reproducible, worst-case scenarios. The model assumes that each voter provides a complete strict ranking of all the candidates (with no equal rankings or blanks), drawn from the set of all possible rankings. For formula_0 candidates, there are formula_1 possible strict rankings (permutations). There are three variations of the model that use different subsets of the full set of possible rankings, so that different elections are drawn with different probabilities: Impartial Culture (IC). This model assumes that each voter's ranking is randomly selected from a uniform distribution. If the rankings are chosen by formula_2 voters, there are thus formula_3 possible elections ("preference profiles"). Impartial Anonymous Culture (IAC). This reduces the set of possible elections by eliminating those that are equivalent if the voter identities are unknown. For example, the two-candidate, three-voter election {A>B, A>B, B>A} is equivalent to the election where the second and third voters swap votes: {A>B, B>A, A>B}, and so all variations on this set of votes are only included once. The set of all such elections is called the anonymous equivalence class (AEC), and if the strict rankings are chosen by formula_2 voters, there are formula_4 possible elections. This is also referred to as the "Dirichlet" or "simplex" model. Impartial, Anonymous, and Neutral Culture (IANC). This reduces the set of possible elections further, by eliminating those that are equivalent if the candidate identities are unknown. For example, the two-candidate, three-voter election {A>B, A>B, B>A} is equivalent to the election where the two candidates are swapped: {B>A, B>A, A>B}.
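The counting statements above can be made concrete with a small simulation. The following Python sketch (candidate names, voter count and function names are arbitrary choices for illustration) draws a profile under the impartial culture assumption, reduces it to its anonymous equivalence class, and prints the sizes of the IC and IAC sample spaces.

import itertools
import random
from collections import Counter
from math import comb, factorial

def impartial_culture_profile(candidates, n_voters):
    # each voter's strict ranking is drawn uniformly from all m! permutations
    rankings = list(itertools.permutations(candidates))
    return [random.choice(rankings) for _ in range(n_voters)]

candidates = ["A", "B", "C"]
m, n = len(candidates), 5
profile = impartial_culture_profile(candidates, n)

anonymous_class = Counter(profile)      # only the multiset of rankings matters (IAC)

print("profile:", profile)
print("anonymous equivalence class:", dict(anonymous_class))
print("number of IC profiles: ", factorial(m) ** n)                  # m!^n
print("number of IAC profiles:", comb(n + factorial(m) - 1, factorial(m) - 1))

The last line uses the multiset ("stars and bars") count, which is where the binomial coefficient in the IAC formula comes from.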
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "m!" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "m!^n" }, { "math_id": 4, "text": "\\binom{n+m!-1}{m!-1}" } ]
https://en.wikipedia.org/wiki?curid=60049406
60055007
Lyapunov dimension
In the mathematics of dynamical systems, the concept of Lyapunov dimension was suggested by Kaplan and Yorke for estimating the Hausdorff dimension of attractors. The concept has since been developed and rigorously justified in a number of papers, and nowadays several different approaches to the definition of the Lyapunov dimension are in use. Attractors with noninteger Hausdorff dimension are called strange attractors. Since the direct numerical computation of the Hausdorff dimension of attractors is often a problem of high numerical complexity, estimates via the Lyapunov dimension have become widespread. The Lyapunov dimension was named after the Russian mathematician Aleksandr Lyapunov because of the close connection with the Lyapunov exponents. Definitions. Consider a dynamical system formula_0, where formula_1 is the shift operator along the solutions formula_2 of the ODE formula_3, formula_4, or of the difference equation formula_5, formula_6, with continuously differentiable vector function formula_7. Then formula_8 is the fundamental matrix of solutions of the linearized system, and formula_9 denotes its singular values, counted with algebraic multiplicity and ordered by decreasing value, for any formula_10 and formula_11. Definition via finite-time Lyapunov dimension. The concept of the finite-time Lyapunov dimension and the related definition of the Lyapunov dimension, developed in the works of N. Kuznetsov, is convenient for numerical experiments, where only finite time can be observed. Consider an analog of the Kaplan–Yorke formula for the finite-time Lyapunov exponents: formula_12 formula_13 with respect to the ordered set of "finite-time Lyapunov exponents" formula_14 at the point formula_10. The "finite-time Lyapunov dimension" of the dynamical system with respect to an invariant set formula_15 is defined as follows: formula_16 In this approach the use of the analog of the Kaplan–Yorke formula is rigorously justified by the Douady–Oesterlé theorem, which proves that for any fixed formula_17 the "finite-time Lyapunov dimension" for a closed bounded invariant set formula_15 is an upper estimate of the Hausdorff dimension: formula_18 Looking for the best such estimate, formula_19, "the Lyapunov dimension" is defined as follows: formula_20 The possibility of changing the order of the time limit and the supremum over the set is discussed, e.g., in the literature. Note that the Lyapunov dimension defined above is invariant under Lipschitz diffeomorphisms. Exact Lyapunov dimension. Let the Jacobian matrix formula_21 at one of the equilibria have simple real eigenvalues: formula_22, then formula_23 If the supremum of local Lyapunov dimensions on the global attractor, which involves all equilibria, is achieved at an equilibrium point, then this allows one to obtain an analytical formula for the exact Lyapunov dimension of the global attractor (see the corresponding Eden's conjecture). Definition via statistical physics approach and ergodicity. Following the statistical physics approach and assuming ergodicity, the Lyapunov dimension of an attractor is estimated by the limit value of the local Lyapunov dimension formula_24 of a "typical" trajectory belonging to the attractor. In this case formula_25 and formula_26. From a practical point of view, the rigorous use of the Oseledec ergodic theorem, the verification that the considered trajectory formula_27 is a "typical" trajectory, and the use of the corresponding Kaplan–Yorke formula is a challenging task (see, e.g., the discussions in the literature).
The exact limit values of the finite-time Lyapunov exponents, if they exist and are the same for all formula_28, are called the "absolute" ones, formula_29, and are used in the Kaplan–Yorke formula. Examples of the rigorous use of ergodic theory for the computation of the Lyapunov exponents and dimension can be found in the literature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
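As a small illustration of the Kaplan–Yorke construction used in the definitions above, the following Python sketch evaluates the formula for a given list of Lyapunov exponents; the three sample values are the commonly quoted approximate exponents of the classical Lorenz attractor and are included purely as an assumed example.

def kaplan_yorke_dimension(exponents):
    # d_KY = j + (LE_1 + ... + LE_j) / |LE_{j+1}|, where j is the largest index
    # for which the partial sum LE_1 + ... + LE_j is still non-negative
    les = sorted(exponents, reverse=True)
    partial, j = 0.0, 0
    for le in les:
        if partial + le < 0:
            break
        partial += le
        j += 1
    if j == 0:
        return 0.0                      # even the largest exponent is negative
    if j == len(les):
        return float(len(les))          # all partial sums are non-negative
    return j + partial / abs(les[j])

print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # roughly 2.06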
[ { "math_id": 0, "text": " \\big(\\{\\varphi^t\\}_{t\\geq0}, (U\\subseteq \\mathbb{R}^n, \\|\\cdot\\|)\\big) " }, { "math_id": 1, "text": "\\varphi^t" }, { "math_id": 2, "text": " \\varphi^t(u_0) = u(t,u_0)" }, { "math_id": 3, "text": "\\dot{u} = f({u})" }, { "math_id": 4, "text": " t \\leq 0" }, { "math_id": 5, "text": "{u}(t+1) = f({u}(t))" }, { "math_id": 6, "text": " t=0,1,..." }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "D\\varphi^t(u)" }, { "math_id": 9, "text": "\\sigma_i(t,u) = \\sigma_i(D\\varphi^t(u)), \\ i = 1...n" }, { "math_id": 10, "text": "u" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "\n d_{\\rm KY}(\\{ {\\rm LE}_i(t,u)\\}_{i=1}^n)=j(t,u) + \n \\frac{ {\\rm LE}_1(t,u) + \\cdots + {\\rm LE}_{j(t,u)}(t,u)}{| {\\rm LE}_{j(t,u)+1}(t,u)|},\n " }, { "math_id": 13, "text": "\nj(t,u) = \\max\\{m: \\sum_{i=1}^m {\\rm LE}_i(t,u) \\geq 0\\},\n" }, { "math_id": 14, "text": "\\{{\\rm LE}_i(t,u)\\}_{i=1}^n = \\{\\frac{1}{t}\\ln\\sigma_i(t,u)\\}_{i=1}^n" }, { "math_id": 15, "text": "K" }, { "math_id": 16, "text": "\n \\dim_{\\rm L}(t, K) = \\sup\\limits_{u \\in K}\n d_{\\rm KY}(\\{{\\rm LE}_i(t,u)\\}_{i=1}^n).\n" }, { "math_id": 17, "text": "t > 0" }, { "math_id": 18, "text": "\n \\dim_{\\rm H} K \\leq \\dim_{\\rm L}(t, K).\n" }, { "math_id": 19, "text": "\n\\inf_{t>0} \\dim_{\\rm L} (t, K)\n = \\liminf_{t \\to +\\infty}\\sup\\limits_{u \\in K} \\dim_{\\rm L}(t,u)\n" }, { "math_id": 20, "text": "\n \\dim_{\\rm L} K = \\liminf_{t \\to +\\infty}\\sup\\limits_{u \\in K} \\dim_{\\rm L}(t,u).\n" }, { "math_id": 21, "text": "Df(u_\\text{eq})" }, { "math_id": 22, "text": "\\{\\lambda_i(u_\\text{eq})\\}_{i=1}^n, \\lambda_{i}(u_\\text{eq}) \\geq \\lambda_{i+1}(u_\\text{eq})" }, { "math_id": 23, "text": "\n \\dim_{\\rm L}u_\\text{eq} = d_{\\rm KY}(\\{\\lambda_i(u_\\text{eq})\\}_{i=1}^n).\n" }, { "math_id": 24, "text": "\\lim_{t\\to+\\infty}\\dim_{\\rm L} (t, u_0)" }, { "math_id": 25, "text": "\\{\\lim\\limits_{t\\to+\\infty}{\\rm LE}_i(t,u_0)\\}_{i}^n = \\{ {\\rm LE}_i(u_0)\\}_1^n" }, { "math_id": 26, "text": "\\dim_{\\rm L}u_0= d_{\\rm KY}(\\{ {\\rm LE}_i(u_0)\\}_{i=1}^n)=j(u_0) + \\frac{ {\\rm LE}_1(u_0) + \\cdots + {\\rm LE}_{j(u_0)}(u_0)}{| {\\rm LE}_{j(u_0)+1}(u_0)|} " }, { "math_id": 27, "text": "u(t,u_0)" }, { "math_id": 28, "text": "u_0 \\in U" }, { "math_id": 29, "text": "\\{\\lim\\limits_{t\\to+\\infty}{\\rm LE}_i(t,u_0)\\}_{i}^n = \\{ {\\rm LE}_i(u_0)\\}_1^n \\equiv \\{ {\\rm LE}_i \\}_1^n" } ]
https://en.wikipedia.org/wiki?curid=60055007
6006062
Kummer sum
In mathematics, Kummer sum is the name given to certain cubic Gauss sums for a prime modulus "p", with "p" congruent to 1 modulo 3. They are named after Ernst Kummer, who made a conjecture about the statistical properties of their arguments, as complex numbers. These sums were known and used before Kummer, in the theory of cyclotomy. Definition. A Kummer sum is therefore a finite sum formula_0 taken over "r" modulo "p", where χ is a Dirichlet character taking values in the cube roots of unity, and where "e"("x") is the exponential function exp(2π"ix"). Given "p" of the required form, there are two such characters, together with the trivial character. The cubic exponential sum "K"("n","p") defined by formula_1 is easily seen to be a linear combination of the Kummer sums. In fact it is 3"P" where "P" is one of the Gaussian periods for the subgroup of index 3 in the residues mod "p", under multiplication, while the Gauss sums are linear combinations of the "P" with cube roots of unity as coefficients. However it is the Gauss sum for which the algebraic properties hold. Such cubic exponential sums are also now called Kummer sums. Statistical questions. It is known from the general theory of Gauss sums that formula_2 In fact the prime decomposition of "G"("χ") in the cyclotomic field it naturally lies in is known, giving a stronger form. What Kummer was concerned with was the argument formula_3 of "G"("χ"). Unlike the quadratic case, where the square of the Gauss sum is known and the precise square root was determined by Gauss, here the cube of "G"("χ") lies in the Eisenstein integers, but its argument is determined by that of the Eisenstein prime dividing "p", which splits in that field. Kummer made a statistical conjecture about "θ""p" and its distribution modulo 2π (in other words, on the argument of the Kummer sum on the unit circle). For that to make sense, one has to choose between the two possible χ: there is a distinguished choice, in fact, based on the cubic residue symbol. Kummer used available numerical data for "p" up to 500 (this is described in the 1892 book "Theory of Numbers" by George B. Mathews). There was, however, a 'law of small numbers' operating, meaning that Kummer's original conjecture, of a lack of uniform distribution, suffered from a small-number bias. In 1952 John von Neumann and Herman Goldstine extended Kummer's computations, on ENIAC. The calculations were programmed and coded by Hedvig Selberg but her work was only acknowledged at the end of the paper, similarly as with Mary Tsingou on the Fermi–Pasta–Ulam–Tsingou problem (formerly the Fermi–Pasta–Ulam problem). In the twentieth century, progress was finally made on this question, which had been left untouched for over 100 years. Building on work of Tomio Kubota, S. J. Patterson and Roger Heath-Brown in 1978 disproved Kummer conjecture and proved a modified form of Kummer conjecture. In fact they showed that there was equidistribution of the θ"p". This work involved automorphic forms for the metaplectic group, and Vaughan's lemma in analytic number theory. In 2000 further refinements were attained by Heath-Brown. Cassels' conjecture. A second conjecture on Kummer sums was made by J. W. S. Cassels, again building on previous ideas of Tomio Kubota. This was a product formula in terms of elliptic functions with complex multiplication by the Eisenstein integers. The conjecture was proved in 1978 by Charles Matthews. Patterson's conjecture. 
In 1978 Patterson conjectured that θ"p" was equidistributed with an error term asymptotically of order formula_4 instead of quadratic as with Gauss sums, which could explain the initial bias observed by Kummer. The following year his work with Heath-Brown disproving Kummer's conjecture showed that it was indeed equidistributed, but whether the order of the asymptotic error term was correct remained unknown. More than 20 years later, Heath-Brown returned to the problem, giving a new sieve method, and conjectured that it could be improved to obtain the predicted order. In 2021 the conjecture was established, conditionally on the generalized Riemann hypothesis, by Alexander Dunn and Maksym Radziwill, who also showed that Heath-Brown's sieve could not be improved as expected. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
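The sums discussed in this article are easy to evaluate numerically for small primes. The following Python sketch (the prime p = 13, the brute-force primitive-root search and all helper names are assumptions made for illustration) builds one of the two non-trivial cubic characters mod p, forms its Gauss sum (its modulus should come out equal to the square root of "p", as stated above) and also evaluates the cubic exponential sum K(1, p); note that the distinguished choice of character via the cubic residue symbol, needed for Kummer's θ"p", is not made here.

import cmath

def cubic_character(p):
    # chi(g^k) = exp(2*pi*i*k/3) for a primitive root g of p (p prime, p = 1 mod 3)
    def is_primitive_root(g):
        x, seen = 1, set()
        for _ in range(p - 1):
            x = x * g % p
            seen.add(x)
        return len(seen) == p - 1
    g = next(g for g in range(2, p) if is_primitive_root(g))
    dlog, x = {}, 1
    for k in range(p - 1):              # discrete logarithms with respect to g
        dlog[x] = k
        x = x * g % p
    omega = cmath.exp(2j * cmath.pi / 3)
    return lambda r: 0 if r % p == 0 else omega ** (dlog[r % p] % 3)

def gauss_sum(chi, p):
    # G(chi) = sum over r mod p of chi(r) * e(r/p)
    return sum(chi(r) * cmath.exp(2j * cmath.pi * r / p) for r in range(1, p))

def cubic_exponential_sum(n, p):
    # K(n, p) = sum over x mod p of e(n*x^3/p); real up to rounding error
    return sum(cmath.exp(2j * cmath.pi * n * pow(x, 3, p) / p) for x in range(p))

p = 13
chi = cubic_character(p)
G = gauss_sum(chi, p)
print(abs(G), p ** 0.5)                 # |G(chi)| matches sqrt(p)
print(cmath.phase(G))                   # the argument of this Gauss sum
print(cubic_exponential_sum(1, p))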
[ { "math_id": 0, "text": "\\sum \\chi(r)e(r/p) = G(\\chi)" }, { "math_id": 1, "text": "K(n,p)=\\sum_{x=1}^p e(nx^3/p)" }, { "math_id": 2, "text": " |G(\\chi)| = \\sqrt p. \\, " }, { "math_id": 3, "text": " \\theta_p \\, " }, { "math_id": 4, "text": "X^{\\frac{5}{6}}" } ]
https://en.wikipedia.org/wiki?curid=6006062
60061151
Day convolution
Convolution In mathematics, specifically in category theory, Day convolution is an operation on functors that can be seen as a categorified version of function convolution. It was first introduced by Brian Day in 1970 in the general context of enriched functor categories. Day convolution acts as a tensor product for a monoidal category structure on the category of functors formula_0 over some monoidal category formula_1. Definition. Let formula_2 be a monoidal category enriched over a symmetric monoidal closed category formula_3. Given two functors formula_4, we define their Day convolution as the following coend. formula_5 If formula_6 is symmetric, then formula_7 is also symmetric. We can show this defines an associative monoidal product. formula_8 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[\\mathbf{C},V]" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "(\\mathbf{C}, \\otimes_c)" }, { "math_id": 3, "text": "(V, \\otimes)" }, { "math_id": 4, "text": "F,G \\colon \\mathbf{C} \\to V" }, { "math_id": 5, "text": "F \\otimes_d G = \\int^{x,y \\in \\mathbf{C}} \\mathbf{C}(x \\otimes_c y , -) \\otimes Fx \\otimes Gy" }, { "math_id": 6, "text": "\\otimes_c" }, { "math_id": 7, "text": "\\otimes_d" }, { "math_id": 8, "text": "\\begin{aligned} & (F \\otimes_d G) \\otimes_d H \\\\[5pt]\n\\cong {} & \\int^{c_1,c_2} (F \\otimes_d G)c_1 \\otimes Hc_2 \\otimes \\mathbf{C}(c_1 \\otimes_c c_2, -) \\\\[5pt]\n\\cong {} & \\int^{c_1,c_2} \\left( \\int^{c_3,c_4} Fc_3 \\otimes Gc_4 \\otimes \\mathbf{C}(c_3 \\otimes_c c_4 , c_1) \\right) \\otimes Hc_2 \\otimes \\mathbf{C}(c_1 \\otimes_c c_2, -) \\\\[5pt]\n\\cong {} & \\int^{c_1,c_2,c_3,c_4} Fc_3 \\otimes Gc_4 \\otimes Hc_2 \\otimes \\mathbf{C}(c_3 \\otimes_c c_4 , c_1) \\otimes \\mathbf{C}(c_1 \\otimes_c c_2, -) \\\\[5pt]\n\\cong {} & \\int^{c_1,c_2,c_3,c_4} Fc_3 \\otimes Gc_4 \\otimes Hc_2 \\otimes \\mathbf{C}(c_3 \\otimes_c c_4 \\otimes_c c_2, -) \\\\[5pt]\n\\cong {} & \\int^{c_1,c_2,c_3,c_4} Fc_3 \\otimes Gc_4 \\otimes Hc_2 \\otimes \\mathbf{C}(c_2 \\otimes_c c_4 , c_1) \\otimes \\mathbf{C}(c_3 \\otimes_c c_1, -) \\\\[5pt]\n\\cong {} & \\int^{c_1,c_3} Fc_3 \\otimes (G \\otimes_d H)c_1 \\otimes \\mathbf{C}(c_3 \\otimes_c c_1, -) \\\\[5pt]\n\\cong {} & F \\otimes_d (G \\otimes_d H)\\end{aligned}" } ]
https://en.wikipedia.org/wiki?curid=60061151
60062148
Cyanoethylation
Cyanoethylation is a process for the attachment of a CH2CH2CN group to another organic substrate. The method is used in the synthesis of organic compounds. Cyanoethylation entails addition of protic nucleophiles to acrylonitrile. Typical protic nucleophiles are alcohols, thiols, and amines. Two new bonds form: C-H and C-X (X = carbon, nitrogen, sulfur, phosphorus, etc): formula_0 The β-carbon atom that is furthest from the nitrile group is positively polarized and therefore binds the heteroatom on the nucleophile. Acrylonitrile is a Michael acceptor. The reaction is normally catalyzed by a base. Cyanoethylation is used to prepare numerous commercial chemicals. Detailed laboratory procedures are available for several variants of this reaction. An alternative method for cyanoethylation entails alkylation of the substrate with 3-chloropropionitrile. De-cyanoethylation. Cyanoethyl is a protecting group. It is removed by treatment with base: RNuCH2CH2CN + OH− → RNu− + CH2=CHCN + H2O This methodology is popular in the synthesis of oligonucleotides.
[ { "math_id": 0, "text": " \\mathrm{YH + H_2C{=}CH{-}CN \\longrightarrow Y{-}CH_2{-}CH_2{-}CN}" } ]
https://en.wikipedia.org/wiki?curid=60062148
60063318
Nickel double salts
Class of chemical compounds Nickel is one of the metals that can form Tutton's salts. The singly charged ion can be any of the full range of potassium, rubidium, cesium, ammonium (NH4+), or thallium. As a mineral the ammonium nickel salt, (NH4)2Ni(SO4)2 · 6 H2O, can be called nickelboussingaultite. With sodium, the double sulfate is nickelblödite Na2Ni(SO4)2 · 4 H2O from the blödite family. Nickel can be substituted by other divalent metals of similar size to make mixtures that crystallise in the same form. Nickel also forms double salts with the Tutton's salt structure in which sulfate is replaced by tetrafluoroberyllate, with the range of cations ammonium, potassium, rubidium, cesium, and thallium. Anhydrous salts of the formula M2Ni2(SO4)3, which can be termed metal nickel trisulfates, belong to the family of langbeinites. The known salts include (NH4)2Ni2(SO4)3, K2Ni2(SO4)3 and Rb2Ni2(SO4)3, and those of Tl and Cs are predicted to exist. Some minerals are double salts, for example nickelzippeite Ni2(UO2)6(SO4)3(OH)10 · 16H2O, which is isomorphic to cobaltzippeite, magnesiozippeite and zinczippeite, part of the zippeite group. Double hydrides of nickel exist, such as Mg2NiH4. Double fluorides include the above-mentioned fluoroanion salts and fluoronickelates such as NiF4 and NiF6. Other examples include the apple-green coloured KNiF3·H2O and NaNiF3·H2O, aluminium nickel pentafluoride AlNiF5·7H2O, ceric nickelous decafluoride Ce2NiF10·7H2O, niobium nickel fluoride Ni3H4Nb2F20·19H2O, vanadium nickel pentafluoride VNiF5·7H2O, vanadyl nickel tetrafluoride VONiF4·7H2O, chromic nickelous pentafluoride CrNiF5·7H2O, molybdenum nickel dioxytetrafluoride NiMoO2F4·6H2O, tungsten nickel dioxytetrafluoride NiWO2F4·6H2O and NiWO2F4·10H2O, manganic nickel pentafluoride MnNiF4·7H2O, and nickelous ferric fluoride FeNiF5·7H2O. Nickel trichloride double salts exist which are polymers; in these, nickel is in octahedral coordination with double halogen bridges. Examples include RbNiCl3 and the pinkish-tan coloured H2NN(CH3)3NiCl3. Other double trichlorides include potassium nickel trichloride KNiCl3·5H2O, yellow cesium nickel trichloride CsNiCl3, lithium nickel trichloride LiNiCl3·3H2O, hydrazinium nickel tetrachloride, and nickel ammonium chloride hexahydrate NH4NiCl3·6H2O. The tetrachloronickelates contain a tetrahedral NiCl42− ion and are dark blue. Some salts of organic bases are ionic liquids at standard conditions. Tetramethylammonium nickel trichloride is pink and very insoluble. Other tetrachlorides include rubidium nickel tetrachloride, lithium nickel tetrachloride Li2NiCl4·4H2O (stable from 23 to 60°), and stannous nickel tetrachloride formula_0; stannic nickel hexachloride formula_1 is tetragonal. Lithium nickel hexachloride Li4NiCl6·10H2O is stable from 0 to 23°. Copper nickel dioxychloride 2CuO·NiCl2·6H2O and copper nickel trioxychloride 3CuO·NiCl2·4H2O are also known. Cadmium dinickel hexachloride, formula_2, crystallises in the hexagonal system; dicadmium dinickel hexachloride, formula_3, has rhombic crystals and is pleochroic, varying from light to dark green. Thallic nickel octochloride formula_4 is bright green. Double bromides include the tetrabromonickelates, and also caesium nickel tribromide CsNiBr3, copper nickel trioxybromide 3CuO·NiBr2·4H2O, and the mercuric nickel bromides Hg2NiBr6 and HgNiBr4.
Aqueous nickel bromide reacting with mercuric oxide yields mercuric nickel oxybromide, formula_5. Didymium nickel bromide, formula_6, is reddish brown (didymium being a mixture of praseodymium and neodymium). Lanthanum nickel bromide is formula_7. Nickel stannic bromide (or nickel bromostannate) NiSnBr6·8H2O is apple green. The tetraiodonickelates are blood-red coloured salts of the NiI4 ion with large cations. Known double iodides include mercuric nickel hexaiodide 2HgI2•NiI2 · 6 H2O, mercuric nickel tetraiodide HgI2•NiI2 · 6 H2O, and lead nickel hexaiodide PbI2•2NiI2 · 3 H2O. The diperiodatonickelates of nickel(IV) are strong oxidisers, and alkali monoperiodatonickelates are also known. Nickel forms double nitrates with the lighter rare earth elements. The solid crystals have the formula formula_8. The metals include La, Ce, Pr, Nd, Sm, Gd and the non-rare-earth Bi. Nickel can also be replaced by similar divalent ions: Mg, Mn, Co, Zn. For the nickel salts, melting temperatures range from 110.5° for La, 108.5° for Ce, 108° for Pr, 105.6° for Nd, and 92.2° for Sm down to 72.5° for Gd; the Bi salt melts at 69°. The crystal structure is hexagonal with Z=3. formula_9 becomes ferromagnetic below 0.393 K. These double nickel nitrates have been used to separate the rare earth elements by fractional crystallization. Nickel thorium nitrate has the formula NiTh(NO3)6 · 8 H2O. The nickel atoms can be substituted by other ions with radius 0.69 to 0.83 Å. The nitrate groups are coordinated to the thorium atom and the water molecules to the nickel. The enthalpy of solution of the octahydrate is 7 kJ/mol. The enthalpy of formation is -4360 kJ/mol. At 109° the octahydrate becomes formula_10, at 190° formula_11, and anhydrous at 215°. The hexahydrate has a cubic "Pa"3 structure. Various double amides containing nickel clusters have been made using liquid ammonia as a solvent. Substances made include red Li3Ni4(NH2)11·NH3 (Pna21; Z = 4; a = 16.344(3) Å; b = 12.310(2) Å; c = 8.113(2) Å; V = 1631; D = 1.942) and Cs2Ni(NH2)4•NH3 (P21/c; Z = 4; a = 9.553(3) Å; b = 8.734(3) Å; c = 14.243(3) Å; β = 129.96(3)°; V = 910; D = 2.960). These are called amidonickel compounds. Yet others include Li4Ni4(NH2)12·NH3, Na2Ni(NH2)4, orange-red Na2Ni(NH2)4•2NH3, Na2Ni(NH2)4•NH3, K2Ni(NH2)4•0.23KNH2, and Rb2Ni(NH2)4•0.23RbNH2. Nickel dihydrogen phosphide (Ni(PH2)2) can form orange, green or black double salts KNi(PH2)3 that crystallise from liquid ammonia. They are unstable above -78 °C, giving off ammonia, phosphine and hydrogen.
[ { "math_id": 0, "text": "\\ce{SnCl2.NiCl2.6H2O}" }, { "math_id": 1, "text": "\\ce{SnCl4.NiCl2.6H2O}" }, { "math_id": 2, "text": "\\ce{CdCl2.2NiCl2.12H2O}" }, { "math_id": 3, "text": "\\ce{2CdCl2.NiCl2.12H2O}" }, { "math_id": 4, "text": "\\ce{2TlCl3.NiCl2.8H2O}" }, { "math_id": 5, "text": "\\ce{6NiO.NiBr2.HgBr2.20H2O}" }, { "math_id": 6, "text": "\\ce{2(Pr,Nd)Br3.3NiBr2.18H2O}" }, { "math_id": 7, "text": "\\ce{2LaBr3.3NiBr2.18H2O}" }, { "math_id": 8, "text": "\\ce{Ni3Me2(NO3)12.24H2O}" }, { "math_id": 9, "text": "\\ce{Ni3La2(NO3)12.24H2O}" }, { "math_id": 10, "text": "\\ce{NiTh(NO3)6.6H2O}" }, { "math_id": 11, "text": "\\ce{NiTh(NO3)6.3H2O}" } ]
https://en.wikipedia.org/wiki?curid=60063318
60069168
K-Mirror (optics)
A K-mirror is a system of 3 plane mirrors mounted on a common motor axis which runs parallel to the chief ray of the system. When the system is viewed parallel to the mirror surfaces, so that only the edges of the mirrors remain visible, the middle mirror and the front and back mirrors look like the backbone and legs of a capital K; this illustrates the origin of the name. Beam rotation. The principal use of the element is to rotate a beam that hits the first mirror on some optical axis, hits the middle and exit mirror, and leaves the system on the same principal axis. A frequent implementation occurs in the derotation stages of optical telescopes, where a beam angle implied by the optical axis of the telescope is "undone" to keep its orientation aligned with some downstream optics. Because there is an odd number of mirrors, the overall effect also includes a flip of the image. The design refers to a nominal zero reference angle of the motor axis, where the first mirror deflects the beam "upward" to the middle mirror, which deflects the beam "downward" to the last mirror. The picture sketches the three mirrors outlined by magenta quadrangles, three colored rays entering from the right, an exit pupil as a green canvas, and where the rays end up in the exit pupil. If the mirrors are rotated by 20 degrees, an equivalent ray tracing shows that the rays hit the exit pupil at places rotated by 40 degrees away from the places of the nominal angle. Matrix optics. The overall effect on a ray that hits the first mirror in the laboratory frame, where x is the horizontal distance to the beam center and y the vertical distance, can be computed as a succession of a rotation by −(β+β0) into the rotated frame of the mirrors, a flip of the vertical coordinate representing the net effect of the three reflections, and a rotation by β+β0 back into the laboratory frame. The three matrices act on column vectors from the left, so in the product the matrix that is applied first appears on the right. β is the motor angle and β0 its offset in the laboratory reference frame: formula_0 The interesting point here is that the rotation of the mechanics by the angle β rotates the image by the angle 2β in the laboratory frame. Because the elements flip the image, the determinant of the matrix is negative.
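The matrix identity above can be checked numerically. The following Python sketch (the 20-degree test angle matches the ray-tracing example mentioned earlier; function names are ad hoc) multiplies the rotation–flip–rotation sequence with numpy and confirms that the product rotates the image by twice the mechanical angle while flipping it, so its determinant is −1.

import numpy as np

def rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def k_mirror_matrix(beta, beta0=0.0):
    # rotate into the mirror frame, flip the vertical coordinate, rotate back
    flip_y = np.diag([1.0, -1.0])
    return rotation(beta + beta0) @ flip_y @ rotation(-(beta + beta0))

beta = np.deg2rad(20.0)                 # mechanical rotation of the mirror assembly
M = k_mirror_matrix(beta)

expected = np.array([[np.cos(2 * beta),  np.sin(2 * beta)],
                     [np.sin(2 * beta), -np.cos(2 * beta)]])

print(np.allclose(M, expected))         # True: image rotated by 2*beta
print(np.linalg.det(M))                 # -1.0: the odd number of mirrors flips the image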
[ { "math_id": 0, "text": "\n\\left(\\begin{array}{cc}\n\\cos (\\beta+\\beta_0) & -\\sin(\\beta+\\beta_0) \\\\\n\\sin (\\beta+\\beta_0) & \\cos(\\beta+\\beta_0) \\\\\n\\end{array}\\right)\n\\left(\\begin{array}{cc}\n1 & 0 \\\\\n0 & -1 \\\\\n\\end{array}\\right)\n\\left(\\begin{array}{cc}\n\\cos (-\\beta-\\beta_0) & -\\sin(-\\beta-\\beta_0) \\\\\n\\sin (-\\beta-\\beta_0) & \\cos(-\\beta-\\beta_0) \\\\\n\\end{array}\\right)\n=\n\\left(\\begin{array}{cc}\n\\cos [2(\\beta+\\beta_0)] & \\sin[2(\\beta+\\beta_0)] \\\\\n\\sin [2(\\beta+\\beta_0)] & -\\cos[2(\\beta+\\beta_0)] \\\\\n\\end{array}\\right)\n" } ]
https://en.wikipedia.org/wiki?curid=60069168
60073928
Pinwheel scheduling
In mathematics and computer science, the pinwheel scheduling problem is a problem in real-time scheduling with repeating tasks of unit length and hard constraints on the time between repetitions. When a pinwheel scheduling problem has a solution, it has one in which the schedule repeats periodically. This repeating pattern resembles the repeating pattern of set and unset pins on the gears of a pinwheel cipher machine, justifying the name. If the fraction of time that is required by each task totals less than 3/4 of the total time, a solution always exists, but some pinwheel scheduling problems whose tasks use a total of slightly more than 5/6 of the total time do not have solutions. Certain formulations of the pinwheel scheduling problem are NP-hard. Definition. The input to pinwheel scheduling consists of a list of tasks, each of which is assumed to take unit time per instantiation. Each task has an associated positive integer value, its maximum repeat time (the maximum time from the start of one instantiation of the task to the next). Only one task can be performed at any given time. The desired output is an infinite sequence specifying which task to perform in each unit of time. Each input task should appear infinitely often in the sequence, with the largest gap between two consecutive instantiations of a task at most equal to the repeat time of the task. For example, the infinitely repeating sequence ABACABACABAC... would be a valid pinwheel schedule for three tasks A, B, and C with repeat times that are at least 2, 4, and 4 respectively. Density. If the task to be scheduled are numbered from formula_0 to formula_1, let formula_2 denote the repeat time for task formula_3. In any valid schedule, task formula_3 must use a formula_4 fraction of the total time, the amount that would be used in a schedule that repeats that task at exactly its specified repeat time. The "density" of a pinwheel scheduling problem is defined as the sum of these fractions, formula_5. For a solution to exist, the times devoted to each task cannot sum to more than the total available time, so it is necessary for the density to be at most formula_0. This condition on density is also sufficient for a schedule to exist in the special case that all repeat times are multiples of each other. For instance, this would be true when all repeat times are powers of two. In this case one can solve the problem using a disjoint covering system. Having density at most formula_0 is also sufficient when there are exactly two distinct repeat times. However, having density at most 1 is not sufficient in some other cases. In particular, there is no schedule for three items with repeat times formula_6, formula_7, and formula_8, no matter how large formula_8 may be, even though the density of this system is only formula_9. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: Does every pinwheel scheduling problem with density at most 5/6 have a solution? Every instance of pinwheel scheduling with density at most formula_10 has a solution, and it has been conjectured that every instance with density at most formula_11 has a solution. Every instance with three distinct repeat times and density at most formula_11 does have a solution. Additionally, case analysis has confirmed that every instance with at most 12 tasks and density at most formula_11 has a solution. Periodicity and complexity. 
When a solution exists, it can be assumed to be periodic, with a period at most equal to the product of the repeat times. However, it is not always possible to find a repeating schedule of sub-exponential length. With a compact input representation that specifies, for each distinct repeat time, the number of objects that have that repeat time, pinwheel scheduling is NP-hard. Algorithms. Despite the NP-hardness of the pinwheel scheduling problem for general inputs, some types of inputs can be scheduled efficiently. An example of this occurs for inputs where (when listed in sorted order) each repeat time evenly divides the next one, and the density is at most one. In this case, the problem can be solved by a greedy algorithm that schedules the tasks in sorted order, scheduling each task to repeat at exactly its repeat time. At each step in this algorithm, the time slots that have already been assigned form a repeating sequence, with period equal to the repeat time of the most recently-scheduled task. This pattern allows each successive task to be scheduled greedily, maintaining the same invariant. The same idea can be used for arbitrary instances with density at most 1/2, by rounding down each repeat time to a power of two that is less than or equal to it. This rounding process at most doubles the density, keeping it at most one. After rounding, all densities are multiples of each other, allowing the greedy algorithm to work. The resulting schedule repeats each task at its rounded repeat time; because these rounded times do not exceed the input times, the schedule is valid. Instead of rounding to powers of two, a greater density threshold can be achieved by rounding to other sequences of multiples, such as the numbers of the form formula_12 for a careful choice of the coefficient formula_13, or by rounding to two different geometric series and generalizing the idea that tasks with two distinct repeat times can be scheduled up to density one. Applications. The original work on pinwheel scheduling proposed it for an application in which a single base station must communicate with multiple satellites or remote sensors, one at a time, with distinct communications requirements. In this application, each satellite becomes a task in a pinwheel scheduling problem, with a repeat time chosen to give it adequate bandwidth. The resulting schedule is used to assign time slots for each satellite to communicate with the base station. Other applications of pinwheel scheduling include scheduling maintenance sessions for a collection of objects (such as oil changes for automobiles), the arrangement of repeated symbols on the print chains of line printers, computer processing of multimedia data, and contention resolution in real-time wireless computer networks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
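As a sketch of the rounding idea just described (and not of any published reference implementation), the following Python code rounds each repeat time down to a power of two, schedules the tasks greedily in sorted order, and verifies the resulting periodic schedule against the original repeat times; the task names and the example instance, whose density 1/5 + 1/6 + 1/8 is below 1/2, are assumptions made for illustration.

from math import lcm

def round_down_to_power_of_two(t):
    p = 1
    while p * 2 <= t:
        p *= 2
    return p

def pinwheel_schedule(repeat_times):
    # Greedy scheduling after rounding each repeat time down to a power of two;
    # guaranteed to succeed whenever the original density sum(1/t) is at most 1/2.
    rounded = {task: round_down_to_power_of_two(t) for task, t in repeat_times.items()}
    if sum(1.0 / r for r in rounded.values()) > 1.0:
        raise ValueError("rounded density exceeds 1")
    period = lcm(*rounded.values())
    schedule = [None] * period
    for task, r in sorted(rounded.items(), key=lambda kv: kv[1]):
        start = next(i for i in range(r) if schedule[i] is None)
        for i in range(start, period, r):
            schedule[i] = task          # repeat the task at exactly its rounded time
    return schedule

def is_valid(schedule, repeat_times):
    # in the infinite repetition of `schedule`, consecutive occurrences of each
    # task must never be more than its maximum repeat time apart
    doubled = schedule + schedule       # doubling exposes the wrap-around gaps
    for task, t in repeat_times.items():
        pos = [i for i, s in enumerate(doubled) if s == task]
        if not pos or any(b - a > t for a, b in zip(pos, pos[1:])):
            return False
    return True

tasks = {"A": 5, "B": 6, "C": 8}
schedule = pinwheel_schedule(tasks)
print(schedule)                         # ['A', 'B', 'C', None, 'A', 'B', None, None]
print(is_valid(schedule, tasks))        # True

The idle (None) slots reflect the fact that the rounded density is below one; a denser instance would leave fewer of them.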
[ { "math_id": 0, "text": "1" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "t_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "1/t_i" }, { "math_id": 5, "text": "\\textstyle\\sum 1/t_i" }, { "math_id": 6, "text": "t_1=2" }, { "math_id": 7, "text": "t_2=3" }, { "math_id": 8, "text": "t_3" }, { "math_id": 9, "text": "5/6 + 1/t_3" }, { "math_id": 10, "text": "3/4" }, { "math_id": 11, "text": "5/6" }, { "math_id": 12, "text": "x\\cdot 2^i" }, { "math_id": 13, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=60073928
60075
Unknot
Loop seen as a trivial knot In the mathematical theory of knots, the unknot, not knot, or trivial knot, is the least knotted of all knots. Intuitively, the unknot is a closed loop of rope without a knot tied into it, unknotted. To a knot theorist, an unknot is any embedded topological circle in the 3-sphere that is ambient isotopic (that is, deformable) to a geometrically round circle, the standard unknot. The unknot is the only knot that is the boundary of an embedded disk, which gives the characterization that only unknots have Seifert genus 0. Similarly, the unknot is the identity element with respect to the knot sum operation. Unknotting problem. Deciding if a particular knot is the unknot was a major driving force behind knot invariants, since it was thought this approach would possibly give an efficient algorithm to recognize the unknot from some presentation such as a knot diagram. Unknot recognition is known to be in both NP and co-NP. It is known that knot Floer homology and Khovanov homology detect the unknot, but these are not known to be efficiently computable for this purpose. It is not known whether the Jones polynomial or finite type invariants can detect the unknot. Examples. It can be difficult to find a way to untangle string even though the fact it started out untangled proves the task is possible. Thistlethwaite and Ochiai provided many examples of diagrams of unknots that have no obvious way to simplify them, requiring one to temporarily increase the diagram's crossing number. While rope is generally not in the form of a closed loop, sometimes there is a canonical way to imagine the ends being joined together. From this point of view, many useful practical knots are actually the unknot, including those that can be tied in a bight. Every tame knot can be represented as a linkage, which is a collection of rigid line segments connected by universal joints at their endpoints. The stick number is the minimal number of segments needed to represent a knot as a linkage, and a stuck unknot is a particular unknotted linkage that cannot be reconfigured into a flat convex polygon. Like crossing number, a linkage might need to be made more complex by subdividing its segments before it can be simplified. Invariants. The Alexander–Conway polynomial and Jones polynomial of the unknot are trivial: formula_0 No other knot with 10 or fewer crossings has trivial Alexander polynomial, but the Kinoshita–Terasaka knot and Conway knot (both of which have 11 crossings) have the same Alexander and Conway polynomials as the unknot. It is an open problem whether any non-trivial knot has the same Jones polynomial as the unknot. The unknot is the only knot whose knot group is an infinite cyclic group, and its knot complement is homeomorphic to a solid torus. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta(t) = 1,\\quad \\nabla(z) = 1,\\quad V(q) = 1." } ]
https://en.wikipedia.org/wiki?curid=60075
60084531
Bioche's rules
Aids the computation of indefinite integrals involving sines and cosines Bioche's rules, formulated by the French mathematician Charles Bioche (1859–1949), are rules to aid in the computation of certain indefinite integrals in which the integrand contains sines and cosines. In the following, formula_0 is a rational expression in formula_1 and formula_2. In order to calculate formula_3, consider the integrand formula_4. We consider the behavior of this entire integrand, including the formula_5, under translation and reflections of the "t" axis. The translations and reflections are ones that correspond to the symmetries and periodicities of the basic trigonometric functions. Bioche's rules state that: (1) if formula_6, a good change of variables is formula_7; (2) if formula_8, a good change of variables is formula_9; (3) if formula_10, a good change of variables is formula_11; (4) if two of the preceding relations both hold, a good change of variables is formula_12; (5) in all other cases, use formula_13. Because rules 1 and 2 involve flipping the "t" axis, they flip the sign of "dt", and therefore the behavior of "ω" under these transformations differs from that of "ƒ" by a sign. Although the rules could be stated in terms of "ƒ", stating them in terms of "ω" has a mnemonic advantage, which is that we choose the change of variables "u"("t") that has the same symmetry as "ω". These rules can, in fact, be stated as a theorem: one shows that the proposed change of variable reduces (if the rule applies and if "f" is actually of the form formula_14) to the integration of a rational function in a new variable, which can be calculated by partial fraction decomposition. Case of polynomials. To calculate the integral formula_15, Bioche's rules apply as well: if "p" and "q" are both odd, a good change of variables is formula_16; if "p" is odd and "q" is even, use formula_17; if "p" is even and "q" is odd, use formula_18; if "p" and "q" are both even, rule 3 applies, so one can use formula_11 or simply linearize the integrand with the double-angle formulas. Another version for hyperbolic functions. Suppose one is calculating formula_19. If Bioche's rules suggest calculating formula_20 by formula_17 (respectively, formula_21), in the case of hyperbolic sine and cosine, a good change of variable is formula_22 (respectively, formula_23). In every case, the change of variable formula_24 allows one to reduce to a rational function, this last change of variable being most interesting in the fourth case (formula_25). Examples. Example 1. As a trivial example, consider formula_26 Then formula_27 is an odd function, but under a reflection of the "t" axis about the origin, ω stays the same. That is, ω acts like an even function. This is the same as the symmetry of the cosine, which is an even function, so the mnemonic tells us to use the substitution formula_7 (rule 1). Under this substitution, the integral becomes formula_28. The integrand involving transcendental functions has been reduced to one involving a rational function (a constant). The result is formula_29, which is of course elementary and could have been done without Bioche's rules. Example 2. The integrand in formula_30 has the same symmetries as the one in example 1, so we use the same substitution formula_7. So formula_31 This transforms the integral into formula_32 which can be integrated using partial fractions, since formula_33. The result is that formula_34 Example 3. Consider formula_35 where formula_36. Although the function "f" is even, the integrand as a whole ω is odd, so it does not fall under rule 1. It also lacks the symmetries described in rules 2 and 3, so we fall back to the last-resort substitution formula_13. Using formula_37 and a second substitution formula_38 leads to the result formula_39
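As a quick numerical sanity check of Example 2 (an illustrative addition, not part of the original discussion), the following Python snippet differentiates the antiderivative obtained there by central differences and compares it with the original integrand at a few sample points in (0, π).

import math

def antiderivative(t):
    # result of Example 2: -(1/2) * ln((1 + cos t) / (1 - cos t)), valid on (0, pi)
    return -0.5 * math.log((1 + math.cos(t)) / (1 - math.cos(t)))

def integrand(t):
    return 1.0 / math.sin(t)

h = 1e-6
for t in (0.3, 1.0, 2.0, 2.8):
    derivative = (antiderivative(t + h) - antiderivative(t - h)) / (2 * h)
    print(t, derivative, integrand(t))  # the last two columns agree to high accuracy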
[ { "math_id": 0, "text": "f(t)" }, { "math_id": 1, "text": "\\sin t" }, { "math_id": 2, "text": "\\cos t" }, { "math_id": 3, "text": "\\int f(t)\\,dt" }, { "math_id": 4, "text": "\\omega(t)=f(t)\\,dt" }, { "math_id": 5, "text": " dt" }, { "math_id": 6, "text": "\\omega(-t)=\\omega(t)" }, { "math_id": 7, "text": "u=\\cos t" }, { "math_id": 8, "text": "\\omega(\\pi-t)=\\omega(t)" }, { "math_id": 9, "text": "u=\\sin t" }, { "math_id": 10, "text": "\\omega(\\pi+t)=\\omega(t)" }, { "math_id": 11, "text": "u=\\tan t" }, { "math_id": 12, "text": "u=\\cos 2t" }, { "math_id": 13, "text": "u=\\tan(t/2)" }, { "math_id": 14, "text": "f(t) = \\frac{P(\\sin t, \\cos t)}{Q(\\sin t, \\cos t)}" }, { "math_id": 15, "text": "\\int\\sin^p(t)\\cos^q(t)dt" }, { "math_id": 16, "text": "u = \\cos(2t)" }, { "math_id": 17, "text": "u = \\cos(t)" }, { "math_id": 18, "text": "u = \\sin(t)" }, { "math_id": 19, "text": "\\int g(\\cosh t, \\sinh t)dt" }, { "math_id": 20, "text": "\\int g(\\cos t, \\sin t)dt" }, { "math_id": 21, "text": "\\sin t, \\tan t, \\cos(2t), \\tan(t/2)" }, { "math_id": 22, "text": "u = \\cosh(t)" }, { "math_id": 23, "text": "\\sinh(t), \\tanh(t), \\cosh(2t), \\tanh(t/2)" }, { "math_id": 24, "text": "u = e^t" }, { "math_id": 25, "text": "u = \\tanh(t/2)" }, { "math_id": 26, "text": "\\int \\sin t \\,dt." }, { "math_id": 27, "text": "f(t)=\\sin t" }, { "math_id": 28, "text": "-\\int du" }, { "math_id": 29, "text": "-u+c=-\\cos t+c" }, { "math_id": 30, "text": "\\int \\frac{dt}{\\sin t}" }, { "math_id": 31, "text": "\\frac{dt}{\\sin t} = - \\frac{du}{\\sin^2 t} = - \\frac{du}{\\ 1-\\cos^2 t}. " }, { "math_id": 32, "text": "\\int - \\frac{du}{1 - u^2}," }, { "math_id": 33, "text": "\\frac {1}{1-u^2} = \\frac {1}{2} \\left( \\frac{1}{1+u}+\\frac{1}{1-u}\\right)" }, { "math_id": 34, "text": "\\int \\frac{dt}{\\sin t}=-\\frac{1}{2}\\ln\\frac{1+\\cos t}{1-\\cos t}+c." }, { "math_id": 35, "text": "\\int \\frac{dt}{1+\\beta\\cos t}," }, { "math_id": 36, "text": "\\beta^2<1" }, { "math_id": 37, "text": "\\cos t=\\frac{1-\\tan^2(t/2)}{1+\\tan^2(t/2)}" }, { "math_id": 38, "text": "v=\\sqrt{\\frac{1-\\beta}{1+\\beta}}u" }, { "math_id": 39, "text": "\\int \\frac{\\mathrm{d}t}{1+\\beta\\cos t} = \\frac{2}{\\sqrt{1-\\beta^2}}\\arctan\\left[\\sqrt{\\frac{1-\\beta}{1+\\beta}}\\tan \\frac{t}{2}\\right] + c." } ]
https://en.wikipedia.org/wiki?curid=60084531
600892
Arbitrary-precision arithmetic
Calculations where numbers' precision is only limited by computer memory In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are potentially limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision. Several modern programming languages have built-in support for bignums, and others have libraries available for arbitrary-precision integer and floating-point math. Rather than storing values as a fixed number of bits related to the size of the processor register, these implementations typically use variable-length arrays of digits. Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required. It should not be confused with the symbolic computation provided by many computer algebra systems, which represent numbers by expressions such as "π"·sin(2), and can thus "represent" any computable number with infinite precision. Applications. A common application is public-key cryptography, whose algorithms commonly employ arithmetic with integers having hundreds of digits. Another is in situations where artificial limits and overflows would be inappropriate. It is also useful for checking the results of fixed-precision calculations, and for determining optimal or near-optimal values for coefficients needed in formulae, for example the formula_0 that appears in Gaussian integration. Arbitrary precision arithmetic is also used to compute fundamental mathematical constants such as π to millions or more digits and to analyze the properties of the digit strings or more generally to investigate the precise behaviour of functions such as the Riemann zeta function where certain questions are difficult to explore via analytical methods. Another example is in rendering fractal images with an extremely high magnification, such as those found in the Mandelbrot set. Arbitrary-precision arithmetic can also be used to avoid overflow, which is an inherent limitation of fixed-precision arithmetic. Similar to a five-digit odometer's display which changes from 99999 to 00000, a fixed-precision integer may exhibit "wraparound" if numbers grow too large to represent at the fixed level of precision. Some processors can instead deal with overflow by "saturation," which means that if a result would be unrepresentable, it is replaced with the nearest representable value. (With 16-bit unsigned saturation, adding any positive amount to 65535 would yield 65535.) Some processors can generate an exception if an arithmetic result exceeds the available precision. Where necessary, the exception can be caught and recovered from—for instance, the operation could be restarted in software using arbitrary-precision arithmetic. In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow. Such guarantees may be based on pragmatic limits: a school attendance program may have a task limit of 4,000 students. A programmer may design the computation so that intermediate results stay within specified precision boundaries. 
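A small Python comparison (purely illustrative, with hypothetical helper names and a 16-bit width chosen to match the example above) makes the difference between wraparound, saturation, and arbitrary-precision behaviour concrete.

def add_uint16_wraparound(a, b):
    # 16-bit unsigned addition that wraps around, like the odometer example
    return (a + b) & 0xFFFF

def add_uint16_saturating(a, b):
    # 16-bit unsigned addition that clamps to the largest representable value
    return min(a + b, 0xFFFF)

print(add_uint16_wraparound(65535, 7))   # 6
print(add_uint16_saturating(65535, 7))   # 65535
print(65535 + 7)                         # 65542: Python integers never overflow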
Some programming languages such as Lisp, Python, Perl, Haskell, Ruby and Raku use, or have an option to use, arbitrary-precision numbers for "all" integer arithmetic. Although this reduces performance, it eliminates the possibility of incorrect results (or exceptions) due to simple overflow. It also makes it possible to guarantee that arithmetic results will be the same on all machines, regardless of any particular machine's word size. The exclusive use of arbitrary-precision numbers in a programming language also simplifies the language, because "a number is a number" and there is no need for multiple types to represent different levels of precision. Implementation issues. Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in hardware arithmetic whereas the former must be implemented in software. Even if the computer lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only. There are exceptions, as certain "variable word length" machines of the 1950s and 1960s, notably the IBM 1620, IBM 1401 and the Honeywell 200 series, could manipulate numbers bound only by available storage, with an extra bit that delimited the value. Numbers can be stored in a fixed-point format, or in a floating-point format as a significand multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the numerator and for the denominator. But even with the greatest common divisor divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 − 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900. The size of arbitrary-precision numbers is limited in practice by the total storage available, and computation time. Numerous algorithms have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that "N" digits are employed, algorithms have been designed to minimize the asymptotic complexity for large "N". The simplest algorithms are for addition and subtraction, where one simply adds or subtracts the digits in sequence, carrying as necessary, which yields an O("N") algorithm (see big O notation). Comparison is also very simple. Compare the high-order digits (or machine words) until a difference is found. Comparing the rest of the digits/words is not necessary. The worst case is formula_1("N"), but usually it will go much faster. For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require formula_1("N"2) operations, but multiplication algorithms that achieve O("N" log("N") log(log("N"))) complexity have been devised, such as the Schönhage–Strassen algorithm, based on fast Fourier transforms, and there are also algorithms with slightly worse complexity but with sometimes superior real-world performance for smaller "N". The Karatsuba multiplication is such an algorithm. For division, see division algorithm. 
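To make the digit-by-digit addition with carrying described above concrete, the following sketch shows one way it can be written (Python is used here purely for readability, and the function name, digit order and base are illustrative assumptions rather than part of any particular library):

def add_digits(a, b, base=10):
    # a and b are lists of digits, least-significant digit first
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(d % base)   # keep the low-order digit
        carry = d // base         # at most 1 is carried into the next position
    if carry:
        result.append(carry)
    return result

# 999 + 27 = 1026, with digits stored least-significant first
print(add_digits([9, 9, 9], [7, 2]))   # [6, 2, 0, 1]

Subtraction and comparison follow the same single pass over the digits, which is why both are O("N").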
For a list of algorithms along with complexity estimates, see computational complexity of mathematical operations. For examples in x86 assembly, see external links. Pre-set precision. In some languages such as REXX, the precision of all calculations must be set before doing a calculation. Other languages, such as Python and Ruby, extend the precision automatically to prevent overflow. Example. The calculation of factorials can easily produce very large numbers. This is not a problem for their usage in many formulas (such as Taylor series) because they appear along with other terms, so that—given careful attention to the order of evaluation—intermediate calculation values are not troublesome. If approximate values of factorial numbers are desired, Stirling's approximation gives good results using floating-point arithmetic. The largest representable value for a fixed-size integer variable may be exceeded even for relatively small arguments as shown in the table below. Even floating-point numbers are soon outranged, so it may help to recast the calculations in terms of the logarithm of the number. But if exact values for large factorials are desired, then special software is required, as in the pseudocode that follows, which implements the classic algorithm to calculate 1, 1×2, 1×2×3, 1×2×3×4, etc. the successive factorial numbers.
constants:
    Limit = 1000                  "% Sufficient digits."
    Base = 10                     "% The base of the simulated arithmetic."
    FactorialLimit = 365          "% Target number to solve, 365!"
    tdigit: Array[0:9] of character = ["0","1","2","3","4","5","6","7","8","9"]
variables:
    digit: Array[1:Limit] of 0..9       "% The big number."
    carry, d: Integer                   "% Assistants during multiplication."
    last: Integer                       "% Index into the big number's digits."
    text: Array[1:Limit] of character   "% Scratchpad for the output."

digit[*] := 0                  "% Clear the whole array."
last := 1                      "% The big number starts as a single-digit,"
digit[1] := 1                  "% its only digit is 1."
for n := 1 to FactorialLimit:  "% Step through producing 1!, 2!, 3!, 4!, etc."
    carry := 0                        "% Start a multiply by n."
    for i := 1 to last:               "% Step along every digit."
        d := digit[i] * n + carry     "% Multiply a single digit."
        digit[i] := d mod Base        "% Keep the low-order digit of the result."
        carry := d div Base           "% Carry over to the next digit."
    while carry > 0:                  "% Store the remaining carry in the big number."
        if last >= Limit: error("overflow")
        last := last + 1              "% One more digit."
        digit[last] := carry mod Base
        carry := carry div Base       "% Strip the last digit off the carry."
    text[*] := " "                    "% Now prepare the output."
    for i := 1 to last:               "% Translate from binary to text."
        text[Limit - i + 1] := tdigit[digit[i]]   "% Reversing the order."
    print text[Limit - last + 1:Limit], " = ", n, "!"
With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base. The second most important decision is in the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable d must be able to hold the result of a single-digit multiply "plus the carry" from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate as it allows up to 32767. However, this example cheats, in that the value of n is not itself limited to a single digit.
This has the consequence that the method will fail for "n" > 3200 or so. In a more general implementation, n would also use a multi-digit representation. A second consequence of the shortcut is that after the multi-digit multiply has been completed, the last value of "carry" may need to be carried into multiple higher-order digits, not just one. There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of array "digit", but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123") which may not be appreciated, so this implementation builds the representation in a space-padded text variable and then prints that. The first few results (with spacing every fifth digit and annotation added here) are: This implementation could make more effective use of the computer's built-in arithmetic. A simple escalation would be to use base 100 (with corresponding changes to the translation process for output), or, with sufficiently wide computer variables (such as 32-bit integers) we could use larger bases, such as 10,000. Working in a power-of-2 base closer to the computer's built-in integer operations offers advantages, although conversion to a decimal base for output becomes more difficult. On typical modern computers, additions and multiplications take constant time independent of the values of the operands (so long as the operands fit in single machine words), so there are large gains in packing as much of a bignumber as possible into each element of the digit array. The computer may also offer facilities for splitting a product into a digit and carry without requiring the two operations of "mod" and "div" as in the example, and nearly all arithmetic units provide a "carry flag" which can be exploited in multiple-precision addition and subtraction. This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run faster than the result of the compilation of a high-level language, which does not provide direct access to such facilities but instead maps the high-level statements to its model of the target machine using an optimizing compiler. For a single-digit multiply the working variables must be able to hold the value (base−1)² + carry, where the maximum value of the carry is (base−1). Similarly, the variables used to index the digit array are themselves limited in width. A simple way to extend the indices would be to deal with the bignumber's digits in blocks of some convenient size so that the addressing would be via (block "i", digit "j") where "i" and "j" would be small integers, or, one could escalate to employing bignumber techniques for the indexing variables. Ultimately, machine storage capacity and execution time impose limits on the problem size. History. IBM's first business computer, the IBM 702 (a vacuum-tube machine) of the mid-1950s, implemented integer arithmetic "entirely in hardware" on digit strings of any length from 1 to 511 digits. The earliest widespread software implementation of arbitrary-precision arithmetic was probably that in Maclisp. Later, around 1980, the operating systems VAX/VMS and VM/CMS offered bignum facilities as a collection of string functions in the one case and in the languages EXEC 2 and REXX in the other.
An early widespread implementation was available via the IBM 1620 of 1959–1970. The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (that used lookup tables) to perform integer arithmetic on digit strings of a length that could be from two to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only. The largest memory supplied offered 60 000 digits; however, Fortran compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory. Software libraries. Arbitrary-precision arithmetic in most computer software is implemented by calling an external library that provides data types and subroutines to store numbers with the requested precision and to perform computations. Different libraries have different ways of representing arbitrary-precision numbers: some work only with integers, while others store floating-point numbers in a variety of bases (decimal or binary powers). Rather than representing a number as a single value, some store numbers as a numerator/denominator pair (rationals), and some can fully represent computable numbers, though only up to some storage limit. Fundamentally, Turing machines cannot represent all real numbers, as the cardinality of formula_2 exceeds the cardinality of formula_3.
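The factorial example above can be retraced in a few lines of Python; the sketch below is purely illustrative (Python's own integers are already arbitrary precision, so the digit array is simulated deliberately) and packs nine decimal digits into each array element, in the spirit of the larger-base optimisation discussed earlier:

import math

BASE = 10 ** 9                       # nine decimal digits per array element ("limb")

def big_factorial(n_max):
    digits = [1]                     # the big number, least-significant limb first
    for n in range(2, n_max + 1):
        carry = 0
        for i in range(len(digits)):         # multiply every limb by n
            d = digits[i] * n + carry
            digits[i] = d % BASE             # keep the low-order limb
            carry = d // BASE                # carry into the next limb
        while carry:                         # append any remaining carry limbs
            digits.append(carry % BASE)
            carry //= BASE
    return digits

limbs = big_factorial(365)
text = str(limbs[-1]) + "".join(str(d).zfill(9) for d in reversed(limbs[:-1]))
print(text, "= 365!")
assert int(text) == math.factorial(365)      # cross-check against the standard library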
[ { "math_id": 0, "text": "\\sqrt{\\frac{1}{3}}" }, { "math_id": 1, "text": "\\Theta" }, { "math_id": 2, "text": "\\mathbb{R}" }, { "math_id": 3, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=600892
60089520
Spherical Bernstein's problem
The spherical Bernstein's problem is a possible generalization of the original Bernstein's problem in the field of global differential geometry, first proposed by Shiing-Shen Chern in 1969, and then later in 1970, during his plenary address at the International Congress of Mathematicians in Nice. The problem. Are the equators in formula_0 the only smooth embedded minimal hypersurfaces which are topological formula_1-dimensional spheres? Additionally, the spherical Bernstein's problem, while itself a generalization of the original Bernstein's problem, can in turn be generalized further by replacing the ambient space formula_0 by a simply-connected, compact symmetric space. Some results in this direction are due to the work of Wu-Chung Hsiang and Wu-Yi Hsiang. Alternative formulations. Below are two alternative ways to express the problem: The second formulation. Let the ("n" − 1) sphere be embedded as a minimal hypersurface in formula_2(1). Is it necessarily an equator? By the Almgren–Calabi theorem, it's true when "n" = 3 (or "n" = 2 for the 1st formulation). Wu-Chung Hsiang proved it for "n" ∈ {4, 5, 6, 7, 8, 10, 12, 14} (or "n" ∈ {3, 4, 5, 6, 7, 9, 11, 13}, respectively). In 1987, Per Tomter proved it for all even "n" (or all odd "n", respectively). Thus, it only remains unknown for all odd "n" ≥ 9 (or all even "n" ≥ 8, respectively). The third formulation. Is it true that an embedded, minimal hypersphere inside the Euclidean formula_1-sphere is necessarily an equator? Geometrically, the problem is analogous to the following problem: Is the local topology at an isolated singular point of a minimal hypersurface necessarily different from that of a disc? For example, the affirmative answer for the spherical Bernstein problem when "n" = 3 is equivalent to the fact that the local topology at an isolated singular point of any minimal hypersurface in an arbitrary Riemannian 4-manifold must be different from that of a disc.
[ { "math_id": 0, "text": "\\mathbb{S}^{n+1}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "S^n" } ]
https://en.wikipedia.org/wiki?curid=60089520
60091545
Moses effect
Magnetic effect In physics, the Moses effect is a phenomenon of deformation of the surface of a diamagnetic liquid by a magnetic field. The effect was named after the biblical figure Moses, inspired by the mythological "crossing of the Red Sea" in the Old Testament. The rapid progress in the development of neodymium magnets, supplying magnetic fields as high as c. 1 T, allows simple and inexpensive experiments related to the Moses effect and its visualization. The application of magnetic fields on the order of 0.5-1 T results in the formation of a near-surface "well" with a depth of dozens of micrometers. In contrast, the surface of a paramagnetic liquid is raised by the magnetic field. This effect is called the inverse Moses effect. It is usually implicitly assumed that the shape of the well arises from the interplay of magnetic force and gravity, and the shape of the near-surface well is then given by the following equation: formula_0 where "χ" and "ρ" are the magnetic susceptibility and density of the liquid respectively, B is the magnetic field, "g" is the gravitational acceleration, and "μ0" is the magnetic permeability of vacuum. In fact, the shape of the near-surface well also depends on the surface tension of the liquid. The Moses effect enables trapping of floating diamagnetic particles and formation of micro-patterns. The application of a magnetic field ("B"≅0.5 T) on diamagnetic liquid/vapor interfaces enables the driving of floating diamagnetic bodies and soap bubbles.
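A rough order-of-magnitude check of the expression above is easy to carry out; the numbers below are illustrative assumptions (the volume magnetic susceptibility of water, its density, and a surface field of 0.5 T), not values from a specific experiment:

mu0 = 4e-7 * 3.141592653589793   # vacuum magnetic permeability, T*m/A
chi = -9.0e-6                    # volume susceptibility of water (SI), assumed value
rho = 1000.0                     # density of water, kg/m^3
g = 9.81                         # gravitational acceleration, m/s^2
B = 0.5                          # magnetic field at the liquid surface, T (assumed)

h = chi * B ** 2 / (2 * rho * g * mu0)
print(f"surface depression h = {h * 1e6:.0f} micrometres")   # about -91 micrometres

The negative sign corresponds to a depression of the diamagnetic surface, and the magnitude of tens of micrometres is consistent with the depths quoted above.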
[ { "math_id": 0, "text": "h(r)= \\frac{\\chi |\\mathbf{B}(r)|^2}{2\\rho g\\mu_0} " } ]
https://en.wikipedia.org/wiki?curid=60091545
60091964
MTTFd
Mean Time to Dangerous Failure. In a safety system, MTTFD is the mean time to failure computed over the portion of failure modes that can lead to hazards to personnel, the environment or equipment. MTTFD is critical to the determination of the performance level of a safety system. ISO 13849 defines three levels of MTTFD: ISO 13849 prescribes three methods to determine the MTTFD of a safety channel: Mean Time to Failure (MTTF) is assumed constant during the useful life period of a component. The MTTF can be calculated according to: formula_0 where λ is the failure rate for the component. The relationship between MTBF and MTTF is expressed as: formula_1 where MTTR is the mean time to repair. The failure rate of a system is the sum of its safe and dangerous failure rates, so that 1/MTTF = 1/MTTFS + 1/MTTFD. To understand the relationship between MTTFS and MTTFD, consider the case of a switch that turns a motor on or off. The switch has two failure modes: the switch can fail stuck closed or the switch can fail stuck open. If the switch fails stuck open, the motor will never energize; as a result, the motor will not create any hazards due to its operation. In contrast, if the switch fails stuck closed, this failure can lead to a dangerous situation, for example when the operator needs to stop the motor, but the motor will not stop because the switch is stuck in the closed position. The failure mode where the switch is stuck in the open position is termed the safe failure mode, whereas the stuck-closed failure mode is termed the dangerous failure mode. The likelihood of occurrence of a dangerous or safe failure may differ and is a function of several variables in the construction and design of a component. A poorly designed switch may have a higher proportion of dangerous failures (thus a lower MTTFD), whereas switches rated for use in safety circuits may very well preclude the occurrence of stuck-closed failure modes (thus have infinite or very high MTTFD). Assessing the performance level of a safety system requires knowing the distribution of the dangerous vs. safe failure modes of its components and ultimately a determination of its MTTFD.
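A small numerical sketch of these relations is given below; the failure rate and the 50/50 split between safe and dangerous failures are invented for illustration and are not values taken from ISO 13849:

HOURS_PER_YEAR = 8760

lam = 2.0e-6                      # total failure rate, failures per hour (assumed)
dangerous_fraction = 0.5          # share of failures that are dangerous (assumed)

lam_d = lam * dangerous_fraction  # dangerous failure rate
lam_s = lam - lam_d               # safe failure rate
mttf = 1 / lam                    # mean time to (any) failure, hours
mttf_d = 1 / lam_d                # mean time to dangerous failure, hours
mttf_s = 1 / lam_s                # mean time to safe failure, hours

print(f"MTTF  = {mttf / HOURS_PER_YEAR:.1f} years")
print(f"MTTFd = {mttf_d / HOURS_PER_YEAR:.1f} years")
# the failure rates add, the mean times do not: 1/MTTF = 1/MTTFs + 1/MTTFd
assert abs(1 / mttf - (1 / mttf_s + 1 / mttf_d)) < 1e-12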
[ { "math_id": 0, "text": "\\text{MTTF} = \\frac{1}{\\lambda}[hours] \\!" }, { "math_id": 1, "text": "\\text{MTBF} = MTTF + MTTR \\!" } ]
https://en.wikipedia.org/wiki?curid=60091964
60094428
Chern's conjecture (affine geometry)
Chern's conjecture for affinely flat manifolds was proposed by Shiing-Shen Chern in 1955 in the field of affine geometry. As of 2018, it remains an unsolved mathematical problem. Chern's conjecture states that the Euler characteristic of a compact affine manifold vanishes. Details. In case the connection ∇ is the Levi-Civita connection of a Riemannian metric, the Chern–Gauss–Bonnet formula: formula_0 implies that the Euler characteristic is zero. However, not all flat torsion-free connections on formula_1 admit a compatible metric, and therefore, Chern–Weil theory cannot be used in general to write down the Euler class in terms of the curvature. History. The conjecture is known to hold in several special cases: Additionally obtained related results: For flat pseudo-Riemannian manifolds or complex affine manifolds, this follows from the Chern–Gauss–Bonnet theorem. Also, as proven by M.W. Hirsch and William Thurston in 1975 for incomplete affine manifolds, the conjecture holds if the holonomy group is a finite extension, a free product of amenable groups (however, their result applies to any flat bundles over manifolds). In 1977, John Smillie produced a manifold with the tangent bundle with nonzero-torsion flat connection and nonzero Euler characteristic, thus he disproved the strong version of the conjecture asking whether the Euler characteristic of a closed flat manifold vanishes. Later, Huyk Kim and Hyunkoo Lee proved for affine manifolds, and more generally projective manifolds developing into an affine space with amenable holonomy by a different technique using nonstandard polyhedral Gauss–Bonnet theorem developed by Ethan Bloch and Kim and Lee. In 2002, Suhyoung Choi slightly generalized the result of Hirsch and Thurston that if the holonomy of a closed affine manifold is isomorphic to amenable groups amalgamated or HNN-extended along finite groups, then the Euler characteristic of the manifold is 0. He showed that if an even-dimensional manifold is obtained from a connected sum operation from "K"("π", 1)s with amenable fundamental groups, then the manifold does not admit an affine structure (generalizing a result of Smillie). In 2008, after Smillie's simple examples of closed manifolds with flat tangent bundles (these would have affine connections with zero curvature, but possibly nonzero torsion), Bucher and Gelander obtained further results in this direction. In 2015, Mihail Cocos proposed a possible way to solve the conjecture and proved that the Euler characteristic of a closed even-dimensional affine manifold vanishes. In 2016, Huitao Feng () and , both of Nankai University, claimed to prove the conjecture in general case, but a serious flaw had been found, so the claim was thereafter retracted. After the correction, their current result is a formula that counts the Euler number of a flat vector bundle in terms of vertices of transversal open coverings. Notoriously, the intrinsic Chern–Gauss–Bonnet theorem proved by Chern that the Euler characteristic of a closed affine manifold is 0 applies only to orthogonal connections, not linear ones, hence why the conjecture remains open in this generality (affine manifolds are considerably more complicated than Riemannian manifolds, where metric completeness is equivalent to geodesic completeness). There also exists a related conjecture by Mikhail Leonidovich Gromov on the vanishing of bounded cohomology of affine manifolds. Related conjectures. 
The conjecture of Chern can be considered a particular case of the following conjecture: A closed aspherical manifold with nonzero Euler characteristic doesn't admit a flat structure. This conjecture was originally stated for general closed manifolds, not just for aspherical ones (but due to Smillie, there's a counterexample), and it can, in turn, be considered a special case of an even more general conjecture: A closed aspherical manifold with nonzero simplicial volume doesn't admit a flat structure. When generalized in these ways, Chern's conjecture on affine manifolds is known as the generalized Chern conjecture for manifolds that are locally a product of surfaces.
[ { "math_id": 0, "text": "\\chi(M) = \\left ( \\frac{1}{2\\pi} \\right )^n \\int_M \\operatorname{Pf}(K)" }, { "math_id": 1, "text": "T M" }, { "math_id": 2, "text": "(n, \\mathbb{R})" }, { "math_id": 3, "text": "_4" } ]
https://en.wikipedia.org/wiki?curid=60094428
60095433
Hicks equation
In fluid dynamics, Hicks equation, sometimes also referred as Bragg–Hawthorne equation or Squire–Long equation, is a partial differential equation that describes the distribution of stream function for axisymmetric inviscid fluid, named after William Mitchinson Hicks, who derived it first in 1898. The equation was also re-derived by Stephen Bragg and William Hawthorne in 1950 and by Robert R. Long in 1953 and by Herbert Squire in 1956. The Hicks equation without swirl was first introduced by George Gabriel Stokes in 1842. The Grad–Shafranov equation appearing in plasma physics also takes the same form as the Hicks equation. Representing formula_0 as coordinates in the sense of cylindrical coordinate system with corresponding flow velocity components denoted by formula_1, the stream function formula_2 that defines the meridional motion can be defined as formula_3 that satisfies the continuity equation for axisymmetric flows automatically. The Hicks equation is then given by formula_4 where formula_5 where formula_6 is the total head, c.f. Bernoulli's Principle. and formula_7 is the circulation, both of them being conserved along streamlines. Here, formula_8 is the pressure and formula_9 is the fluid density. The functions formula_6 and formula_10 are known functions, usually prescribed at one of the boundary; see the example below. If there are closed streamlines in the interior of the fluid domain, say, a recirculation region, then the functions formula_6 and formula_10 are typically unknown and therefore in those regions, Hicks equation is not useful; Prandtl–Batchelor theorem provides details about the closed streamline regions. Derivation. Consider the axisymmetric flow in cylindrical coordinate system formula_0 with velocity components formula_1 and vorticity components formula_11. Since formula_12 in axisymmetric flows, the vorticity components are formula_13. Continuity equation allows to define a stream function formula_14 such that formula_15 (Note that the vorticity components formula_16 and formula_17 are related to formula_18 in exactly the same way that formula_19 and formula_20 are related to formula_2). Therefore the azimuthal component of vorticity becomes formula_21 The inviscid momentum equations formula_22, where formula_23 is the Bernoulli constant, formula_8 is the fluid pressure and formula_9 is the fluid density, when written for the axisymmetric flow field, becomes formula_24 in which the second equation may also be written as formula_25, where formula_26 is the material derivative. This implies that the circulation formula_27 round a material curve in the form of a circle centered on formula_28-axis is constant. If the fluid motion is steady, the fluid particle moves along a streamline, in other words, it moves on the surface given by formula_29constant. It follows then that formula_30 and formula_31, where formula_32. Therefore the radial and the azimuthal component of vorticity are formula_33. The components of formula_34 and formula_35 are locally parallel. The above expressions can be substituted into either the radial or axial momentum equations (after removing the time derivative term) to solve for formula_36. For instance, substituting the above expression for formula_16 into the axial momentum equation leads to formula_37 But formula_36 can be expressed in terms of formula_2 as shown at the beginning of this derivation. When formula_36 is expressed in terms of formula_2, we get formula_38 This completes the required derivation. 
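The expression for the azimuthal vorticity used in this derivation can be checked symbolically. The short SymPy sketch below (SymPy being an assumed tool here, not part of the original derivation) confirms that ∂v_r/∂z − ∂v_z/∂r equals −(1/r)(∂²ψ/∂r² − (1/r)∂ψ/∂r + ∂²ψ/∂z²):

import sympy as sp

r, z = sp.symbols('r z', positive=True)
psi = sp.Function('psi')(r, z)

v_r = -sp.diff(psi, z) / r                 # r*v_r = -dpsi/dz
v_z = sp.diff(psi, r) / r                  # r*v_z =  dpsi/dr
omega_theta = sp.diff(v_r, z) - sp.diff(v_z, r)

claimed = -(sp.diff(psi, r, 2) - sp.diff(psi, r) / r + sp.diff(psi, z, 2)) / r
print(sp.simplify(omega_theta - claimed))  # prints 0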
Example: Fluid with uniform axial velocity and rigid-body rotation far upstream. Consider the problem where the fluid in the far stream exhibits uniform axial velocity formula_39 and rotates with angular velocity formula_40. This upstream motion corresponds to formula_41 From these, we obtain formula_42 indicating that in this case, formula_43 and formula_44 are simple linear functions of formula_2. The Hicks equation itself becomes formula_45 which upon introducing formula_46 becomes formula_47 where formula_48. Yih equation. For an incompressible flow formula_49, but with variable density, Chia-Shun Yih derived the necessary equation. The velocity field is first transformed using the Yih transformation formula_50 where formula_51 is some reference density, with corresponding Stokes streamfunction formula_52 defined such that formula_53 Let us include the gravitational force acting in the negative formula_28 direction. The Yih equation is then given by formula_54 where formula_55
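The reduction quoted in the uniform-flow example above (substituting ψ = Ur²/2 + r f into the Hicks equation to obtain the equation for f, with k = 2Ω/U) can be verified in the same illustrative way:

import sympy as sp

r, z, U, Omega = sp.symbols('r z U Omega', positive=True)
f = sp.Function('f')(r, z)
psi = U * r**2 / 2 + r * f

lhs = sp.diff(psi, r, 2) - sp.diff(psi, r) / r + sp.diff(psi, z, 2)
rhs = 2 * Omega**2 / U * r**2 - 4 * Omega**2 / U**2 * psi

k = 2 * Omega / U
f_eq = sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, z, 2) + (k**2 - 1 / r**2) * f
print(sp.simplify(lhs - rhs - r * f_eq))   # prints 0, so the Hicks equation reduces to f_eq = 0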
[ { "math_id": 0, "text": "(r,\\theta,z)" }, { "math_id": 1, "text": "(v_r,v_\\theta,v_z)" }, { "math_id": 2, "text": "\\psi" }, { "math_id": 3, "text": "rv_r = - \\frac{\\partial \\psi}{\\partial z}, \\quad rv_z = \\frac{\\partial \\psi}{\\partial r}" }, { "math_id": 4, "text": "\\frac{\\partial^2 \\psi}{\\partial r^2} - \\frac{1}{r} \\frac{\\partial \\psi}{\\partial r} + \\frac{\\partial^2 \\psi}{\\partial z^2} = r^2 \\frac{\\mathrm{d}H}{\\mathrm{d} \\psi} - \\Gamma\\frac{\\mathrm{d} \\Gamma}{\\mathrm{d}\\psi}" }, { "math_id": 5, "text": "H(\\psi) = \\frac{p}{\\rho} + \\frac{1}{2}(v_r^2+v_\\theta^2+v_z^2), \\quad \\Gamma(\\psi) = rv_\\theta" }, { "math_id": 6, "text": "H(\\psi)" }, { "math_id": 7, "text": "2\\pi\\Gamma" }, { "math_id": 8, "text": "p" }, { "math_id": 9, "text": "\\rho" }, { "math_id": 10, "text": "\\Gamma(\\psi)" }, { "math_id": 11, "text": "(\\omega_r,\\omega_\\theta,\\omega_z)" }, { "math_id": 12, "text": "\\partial/\\partial \\theta=0" }, { "math_id": 13, "text": "\\omega_r = -\\frac{\\partial v_\\theta}{\\partial z}, \\quad \\omega_\\theta= \\frac{\\partial v_r}{\\partial z} - \\frac{\\partial v_z}{\\partial r}, \\quad \\omega_z = \\frac{1}{r}\\frac{\\partial (rv_\\theta)}{\\partial r}" }, { "math_id": 14, "text": "\\psi(r,z)" }, { "math_id": 15, "text": "v_r=-\\frac{1}{r} \\frac{\\partial \\psi}{\\partial z}, \\quad v_z = \\frac{1}{r}\\frac{\\partial \\psi}{\\partial r}" }, { "math_id": 16, "text": "\\omega_r" }, { "math_id": 17, "text": "\\omega_z" }, { "math_id": 18, "text": "rv_\\theta" }, { "math_id": 19, "text": "v_r" }, { "math_id": 20, "text": "v_z" }, { "math_id": 21, "text": "\\omega_\\theta = - \\frac{1}{r}\\left(\\frac{\\partial^2\\psi }{\\partial r^2} - \\frac{1}{r}\\frac{\\partial \\psi}{\\partial r} + \\frac{\\partial^2\\psi }{\\partial z^2}\\right)." 
}, { "math_id": 22, "text": "\\partial\\boldsymbol{v}/\\partial t-\\boldsymbol{v}\\times\\boldsymbol{\\omega} = -\\nabla H" }, { "math_id": 23, "text": "H= \\frac{1}{2}(v_r^2+v_\\theta^2+v_z^2) + \\frac{p}{\\rho}" }, { "math_id": 24, "text": "\n\\begin{align}\nv_\\theta \\omega_z - v_z\\omega_\\theta - \\frac{\\partial v_r}{\\partial t} &= \\frac{\\partial H}{\\partial r},\\\\\nv_z\\omega_r - v_r \\omega_z - \\frac{\\partial v_\\theta}{\\partial t}&=0,\\\\\nv_r\\omega_\\theta - v_\\theta \\omega_r - \\frac{\\partial v_z}{\\partial t} &= \\frac{\\partial H}{\\partial z}\n\\end{align}\n" }, { "math_id": 25, "text": "D(rv_\\theta)/Dt=0" }, { "math_id": 26, "text": "D/Dt" }, { "math_id": 27, "text": "2\\pi rv_\\theta" }, { "math_id": 28, "text": "z" }, { "math_id": 29, "text": "\\psi=" }, { "math_id": 30, "text": "H=H(\\psi)" }, { "math_id": 31, "text": "\\Gamma=\\Gamma(\\psi)" }, { "math_id": 32, "text": "\\Gamma=rv_\\theta" }, { "math_id": 33, "text": "\\omega_r = v_r\\frac{\\mathrm d\\Gamma}{\\mathrm d\\psi}, \\quad \\omega_z = v_z\\frac{\\mathrm d\\Gamma}{\\mathrm d\\psi}" }, { "math_id": 34, "text": "\\boldsymbol{v}" }, { "math_id": 35, "text": "\\boldsymbol{\\omega}" }, { "math_id": 36, "text": "\\omega_\\theta" }, { "math_id": 37, "text": "\n\\begin{align}\n\\frac{\\omega_\\theta}{r}&= \\frac{v_\\theta \\omega_r}{r v_r} + \\frac{1}{rv_r}\\frac{\\mathrm dH}{\\mathrm d\\psi} \\frac{\\partial \\psi}{\\partial z}\\\\\n&= \\frac{\\Gamma}{r^2}\\frac{\\mathrm d\\Gamma}{\\mathrm d\\psi}-\\frac{\\mathrm dH}{\\mathrm d\\psi}.\n\\end{align}\n" }, { "math_id": 38, "text": "\\frac{\\partial^2 \\psi}{\\partial r^2} - \\frac{1}{r} \\frac{\\partial \\psi}{\\partial r} + \\frac{\\partial^2 \\psi}{\\partial z^2} = r^2 \\frac{\\mathrm{d}H}{\\mathrm{d} \\psi} - \\Gamma\\frac{\\mathrm{d} \\Gamma}{\\mathrm{d}\\psi}." }, { "math_id": 39, "text": "U" }, { "math_id": 40, "text": "\\Omega" }, { "math_id": 41, "text": "\\psi = \\frac{1}{2}Ur^2, \\quad \\Gamma = \\Omega r^2, \\quad H = \\frac{1}{2}U^2 + \\Omega^2 r^2." }, { "math_id": 42, "text": "H(\\psi) = \\frac{1}{2}U^2 + \\frac{2\\Omega^2}{U} \\psi, \\qquad \\Gamma(\\psi) = \\frac{2\\Omega}{U} \\psi" }, { "math_id": 43, "text": "H" }, { "math_id": 44, "text": "\\Gamma" }, { "math_id": 45, "text": "\\frac{\\partial^2 \\psi}{\\partial r^2} - \\frac{1}{r} \\frac{\\partial \\psi}{\\partial r} + \\frac{\\partial^2 \\psi}{\\partial z^2} = \\frac{2\\Omega^2}{U} r^2 - \\frac{4\\Omega^2}{U^2} \\psi" }, { "math_id": 46, "text": "\\psi(r,z) = Ur^2/2 + r f(r,z)" }, { "math_id": 47, "text": "\\frac{\\partial^2 f}{\\partial r^2} + \\frac{1}{r} \\frac{\\partial f}{\\partial r} + \\frac{\\partial^2 f}{\\partial z^2} + \\left(k^2-\\frac{1}{r^2}\\right) f= 0" }, { "math_id": 48, "text": "k=2\\Omega/U" }, { "math_id": 49, "text": "D\\rho/Dt=0" }, { "math_id": 50, "text": "(v_r',v_\\theta',v_z') = \\sqrt{\\frac{\\rho}{\\rho_0}}(v_r,v_\\theta,v_z)" }, { "math_id": 51, "text": "\\rho_0" }, { "math_id": 52, "text": "\\psi'" }, { "math_id": 53, "text": "rv_r' = - \\frac{\\partial \\psi'}{\\partial z}, \\quad rv_z' = \\frac{\\partial \\psi'}{\\partial r}." 
}, { "math_id": 54, "text": "\\frac{\\partial^2 \\psi'}{\\partial r^2} - \\frac{1}{r} \\frac{\\partial \\psi'}{\\partial r} + \\frac{\\partial^2 \\psi'}{\\partial z^2} = r^2 \\frac{\\mathrm{d}H}{\\mathrm{d} \\psi'} - r^2 \\frac{\\mathrm{d}\\rho}{\\mathrm{d}\\psi'}\\frac{g}{\\rho_0}z - \\Gamma\\frac{\\mathrm{d} \\Gamma}{\\mathrm{d}\\psi'} " }, { "math_id": 55, "text": "H(\\psi') = \\frac{p}{\\rho_0} + \\frac{\\rho}{2\\rho_0}(v_r'^2+v_\\theta'^2+v_z'^2) + \\frac{\\rho}{\\rho_0} g z, \\quad \\Gamma(\\psi') = rv_\\theta'" } ]
https://en.wikipedia.org/wiki?curid=60095433
60097
Topological ring
In mathematics, a topological ring is a ring formula_0 that is also a topological space such that both the addition and the multiplication are continuous as maps: formula_1 where formula_2 carries the product topology. That means formula_0 is an additive topological group and a multiplicative topological semigroup. Topological rings are fundamentally related to topological fields and arise naturally while studying them, since for example completion of a topological field may be a topological ring which is not a field. General comments. The group of units formula_3 of a topological ring formula_0 is a topological group when endowed with the topology coming from the embedding of formula_3 into the product formula_2 as formula_4 However, if the unit group is endowed with the subspace topology as a subspace of formula_5 it may not be a topological group, because inversion on formula_3 need not be continuous with respect to the subspace topology. An example of this situation is the adele ring of a global field; its unit group, called the idele group, is not a topological group in the subspace topology. If inversion on formula_3 is continuous in the subspace topology of formula_0 then these two topologies on formula_3 are the same. If one does not require a ring to have a unit, then one has to add the requirement of continuity of the additive inverse, or equivalently, to define the topological ring as a ring that is a topological group (for formula_6) in which multiplication is continuous, too. Examples. Topological rings occur in mathematical analysis, for example as rings of continuous real-valued functions on some topological space (where the topology is given by pointwise convergence), or as rings of continuous linear operators on some normed vector space; all Banach algebras are topological rings. The rational, real, complex and formula_7-adic numbers are also topological rings (even topological fields, see below) with their standard topologies. In the plane, split-complex numbers and dual numbers form alternative topological rings. See hypercomplex numbers for other low-dimensional examples. In commutative algebra, the following construction is common: given an ideal formula_8 in a commutative ring formula_5 the I-adic topology on formula_0 is defined as follows: a subset formula_9 of formula_0 is open if and only if for every formula_10 there exists a natural number formula_11 such that formula_12 This turns formula_0 into a topological ring. The formula_8-adic topology is Hausdorff if and only if the intersection of all powers of formula_8 is the zero ideal formula_13 The formula_7-adic topology on the integers is an example of an formula_8-adic topology (with formula_14). Completion. Every topological ring is a topological group (with respect to addition) and hence a uniform space in a natural manner. One can thus ask whether a given topological ring formula_0 is complete. 
If it is not, then it can be "completed": one can find an essentially unique complete topological ring formula_15 that contains formula_0 as a dense subring such that the given topology on formula_0 equals the subspace topology arising from formula_16 If the starting ring formula_0 is metric, the ring formula_15 can be constructed as a set of equivalence classes of Cauchy sequences in formula_5 and this equivalence relation makes the ring formula_15 Hausdorff; using constant sequences (which are Cauchy) one realizes a (uniformly) continuous morphism (CM in the sequel) formula_17 such that, for all CM formula_18 where formula_19 is Hausdorff and complete, there exists a unique CM formula_20 such that formula_21 If formula_0 is not metric (as, for instance, the ring of all rational-valued functions of a real variable, that is, all functions formula_22 endowed with the topology of pointwise convergence) the standard construction uses minimal Cauchy filters and satisfies the same universal property as above (see Bourbaki, General Topology, III.6.5). The rings of formal power series and the formula_7-adic integers are most naturally defined as completions of certain topological rings carrying formula_8-adic topologies. Topological fields. Some of the most important examples are topological fields. A topological field is a topological ring that is also a field, and such that inversion of nonzero elements is a continuous function. The most common examples are the complex numbers and all its subfields, and the valued fields, which include the formula_7-adic fields.
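A concrete feel for the formula_7-adic topology mentioned above can be had numerically. The helper below (an illustrative sketch with p = 5; the function names are ad hoc) measures p-adic distances and shows that the partial sums of 1 + p + p² + ⋯ form a Cauchy sequence whose limit in the completion behaves like 1/(1 − p):

def v_p(n, p):
    """p-adic valuation of a nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def dist_p(a, b, p):
    """p-adic distance |a - b|_p, with distance 0 when a == b."""
    return 0.0 if a == b else p ** (-v_p(a - b, p))

p = 5
partial = [sum(p ** k for k in range(n + 1)) for n in range(12)]

# consecutive partial sums get p-adically closer and closer: a Cauchy sequence
print([dist_p(partial[n], partial[n + 1], p) for n in range(5)])
# and (1 - p) * s_n is p-adically close to 1, i.e. the sums tend to 1/(1 - p) in the completion
print([dist_p((1 - p) * s, 1, p) for s in partial[:5]])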
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "R \\times R \\to R" }, { "math_id": 2, "text": "R \\times R" }, { "math_id": 3, "text": "R^\\times" }, { "math_id": 4, "text": "\\left(x, x^{-1}\\right)." }, { "math_id": 5, "text": "R," }, { "math_id": 6, "text": "+" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "I" }, { "math_id": 9, "text": "U" }, { "math_id": 10, "text": "x \\in U" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "x + I^n \\subseteq U." }, { "math_id": 13, "text": "(0)." }, { "math_id": 14, "text": "I = p\\Z" }, { "math_id": 15, "text": "S" }, { "math_id": 16, "text": "S." }, { "math_id": 17, "text": "c : R \\to S" }, { "math_id": 18, "text": "f : R \\to T" }, { "math_id": 19, "text": "T" }, { "math_id": 20, "text": "g : S \\to T" }, { "math_id": 21, "text": "f = g \\circ c." }, { "math_id": 22, "text": "f : \\R \\to \\Q" } ]
https://en.wikipedia.org/wiki?curid=60097
601025
Minkowski–Bouligand dimension
Method of determining fractal dimension In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set formula_0 in a Euclidean space formula_1, or more generally in a metric space formula_2. It is named after the Polish mathematician Hermann Minkowski and the French mathematician Georges Bouligand. To calculate this dimension for a fractal formula_0, imagine this fractal lying on an evenly spaced grid and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm. Suppose that formula_3 is the number of boxes of side length formula_4 required to cover the set. Then the box-counting dimension is defined as formula_5 Roughly speaking, this means that the dimension is the exponent formula_6 such that formula_7, which is what one would expect in the trivial case where formula_0 is a smooth space (a manifold) of integer dimension formula_6. If the above limit does not exist, one may still take the limit superior and limit inferior, which respectively define the upper box dimension and lower box dimension. The upper box dimension is sometimes called the entropy dimension, Kolmogorov dimension, Kolmogorov capacity, limit capacity or upper Minkowski dimension, while the lower box dimension is also called the lower Minkowski dimension. The upper and lower box dimensions are strongly related to the more popular Hausdorff dimension. Only in very special applications is it important to distinguish between the three (see below). Yet another measure of fractal dimension is the correlation dimension. Alternative definitions. It is possible to define the box dimensions using balls, with either the covering number or the packing number. The covering number formula_8 is the "minimal" number of open balls of radius formula_4 required to cover the fractal, or in other words, such that their union contains the fractal. We can also consider the intrinsic covering number formula_9, which is defined the same way but with the additional requirement that the centers of the open balls lie in the set "S". The packing number formula_10 is the "maximal" number of disjoint open balls of radius formula_4 one can situate such that their centers would be in the fractal. While formula_11, formula_12, formula_13 and formula_14 are not exactly identical, they are closely related to each other and give rise to identical definitions of the upper and lower box dimensions. This is easy to show once the following inequalities are proven: formula_15 These, in turn, follow either by definition or with little effort from the triangle inequality. The advantage of using balls rather than squares is that this definition generalizes to any metric space. In other words, the box definition is extrinsic — one assumes the fractal space "S" is contained in a Euclidean space, and defines boxes according to the external geometry of the containing space. However, the dimension of "S" should be intrinsic, independent of the environment into which "S" is placed, and the ball definition can be formulated intrinsically. One defines an internal ball as all points of "S" within a certain distance of a chosen center, and one counts such balls to get the dimension. (More precisely, the "N"covering definition is extrinsic, but the other two are intrinsic.) 
The advantage of using boxes is that in many cases "N"("ε") may be easily calculated explicitly, and that for boxes the covering and packing numbers (defined in an equivalent way) are equal. The logarithms of the packing and covering numbers are sometimes referred to as "entropy numbers" and are somewhat analogous to the concepts of thermodynamic entropy and information-theoretic entropy, in that they measure the amount of "disorder" in the metric space or fractal at scale "ε" and also measure how many bits or digits one would need to specify a point of the space to accuracy "ε". Another equivalent (extrinsic) definition for the box-counting dimension is given by the formula formula_16 where for each "r" > 0, the set formula_17 is defined to be the "r"-neighborhood of "S", i.e. the set of all points in formula_18 that are at distance less than "r" from "S" (or equivalently, formula_17 is the union of all the open balls of radius "r" which have a center that is a member of "S"). Properties. The upper box dimension is finitely stable, i.e. if {"A"1, ..., "A""n"} is a finite collection of sets, then formula_19 However, it is not countably stable, i.e. this equality does not hold for an "infinite" sequence of sets. For example, the box dimension of a single point is 0, but the collection of rational numbers in the interval [0, 1] has box dimension 1. The Hausdorff dimension, by comparison, is countably stable. The lower box dimension, on the other hand, is not even finitely stable. An interesting property of the upper box dimension not shared with either the lower box dimension or the Hausdorff dimension is the connection to set addition. If "A" and "B" are two sets in a Euclidean space, then "A" + "B" is formed by taking all the pairs of points "a", "b" where "a" is from "A" and "b" is from "B" and adding "a" + "b". One has formula_20 Relations to the Hausdorff dimension. The box-counting dimension is one of a number of definitions for dimension that can be applied to fractals. For many well behaved fractals all these dimensions are equal; in particular, these dimensions coincide whenever the fractal satisfies the open set condition (OSC). For example, the Hausdorff dimension, lower box dimension, and upper box dimension of the Cantor set are all equal to log(2)/log(3). However, the definitions are not equivalent. The box dimensions and the Hausdorff dimension are related by the inequality formula_21 In general, both inequalities may be strict. The upper box dimension may be bigger than the lower box dimension if the fractal has different behaviour at different scales. For example, examine the set of numbers in the interval [0, 1] satisfying the condition that for any "n", all the digits between the 2^(2"n")-th digit and the (2^(2"n"+1) − 1)-th digit are zero. The digits in the "odd place-intervals", i.e. between digits 2^(2"n"+1) and 2^(2"n"+2) − 1, are not restricted and may take any value. This fractal has upper box dimension 2/3 and lower box dimension 1/3, a fact which may be easily verified by calculating "N"("ε") for formula_22 and noting that their values behave differently for "n" even and odd. Another example: the set of rational numbers formula_23, a countable set with formula_24, has formula_25 because its closure, formula_26, has dimension 1. In fact, formula_27 These examples show that adding a countable set can change box dimension, demonstrating a kind of instability of this dimension.
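The box-counting definition is easy to try numerically. The sketch below (an illustration rather than a general-purpose estimator) covers a depth-14 approximation of the middle-thirds Cantor set with grids of side 3^(−"k") and prints log "N"("ε")/log(1/"ε"), which comes out at log(2)/log(3) ≈ 0.6309, the value quoted above:

import math

def cantor_numerators(depth):
    """Numerators m of the left endpoints m / 3**depth of the level-`depth`
    intervals of the middle-thirds Cantor set (base-3 digits 0 or 2 only)."""
    ms = [0]
    for i in range(depth):
        ms = ms + [m + 2 * 3 ** i for m in ms]
    return ms

depth = 14
ms = cantor_numerators(depth)                        # 2**depth points
for k in range(2, 11):
    # boxes of side 3**(-k); the box holding m / 3**depth has index m // 3**(depth - k)
    n_boxes = len({m // 3 ** (depth - k) for m in ms})
    print(k, n_boxes, math.log(n_boxes) / math.log(3 ** k))

Because the grid here is aligned with the ternary construction, the printed ratio equals log(2)/log(3) at every scale; for a general set it only approaches the dimension in the limit ε → 0.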
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "\\R^n" }, { "math_id": 2, "text": "(X,d)" }, { "math_id": 3, "text": "N(\\varepsilon)" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "\\dim_\\text{box}(S) := \\lim_{\\varepsilon \\to 0} \\frac {\\log N(\\varepsilon)}{\\log(1/\\varepsilon)}." }, { "math_id": 6, "text": "d" }, { "math_id": 7, "text": "N(\\varepsilon)\\approx C\\varepsilon^{-d}" }, { "math_id": 8, "text": "N_\\text{covering}(\\varepsilon)" }, { "math_id": 9, "text": "N'_\\text{covering}(\\varepsilon)" }, { "math_id": 10, "text": "N_\\text{packing}(\\varepsilon)" }, { "math_id": 11, "text": "N" }, { "math_id": 12, "text": "N_\\text{covering}" }, { "math_id": 13, "text": "N'_\\text{covering}" }, { "math_id": 14, "text": "N_\\text{packing}" }, { "math_id": 15, "text": "N_\\text{packing}(\\varepsilon) \\leq N'_\\text{covering}(\\varepsilon) \\leq N_\\text{covering}(\\varepsilon/2) \\leq N'_\\text{covering}(\\varepsilon/2) \\leq N_\\text{packing}(\\varepsilon/4)." }, { "math_id": 16, "text": "\\dim_\\text{box}(S) = n - \\lim_{r \\to 0} \\frac{\\log \\text{vol}(S_r)}{\\log r}," }, { "math_id": 17, "text": "S_r" }, { "math_id": 18, "text": "R^n" }, { "math_id": 19, "text": "\\dim_\\text{upper box}(A_1 \\cup \\dotsb \\cup A_n) = \\max\\{\\dim_\\text{upper box}(A_1), \\dots, \\dim_\\text{upper box}(A_n)\\}." }, { "math_id": 20, "text": "\\dim_\\text{upper box}(A + B) \\leq \\dim_\\text{upper box}(A) + \\dim_\\text{upper box}(B)." }, { "math_id": 21, "text": "\\dim_\\text{Haus} \\leq \\dim_\\text{lower box} \\leq \\dim_\\text{upper box}." }, { "math_id": 22, "text": "\\varepsilon = 10^{-2^n}" }, { "math_id": 23, "text": "\\mathbb{Q}" }, { "math_id": 24, "text": "\\dim_\\text{Haus} = 0" }, { "math_id": 25, "text": "\\dim_\\text{box} = 1" }, { "math_id": 26, "text": "\\mathbb{R}" }, { "math_id": 27, "text": "\\dim_\\text{box}\\left\\{0, 1, \\frac{1}{2}, \\frac{1}{3}, \\frac{1}{4}, \\ldots\\right\\} = \\frac{1}{2}." } ]
https://en.wikipedia.org/wiki?curid=601025
60103721
Akima spline
In applied mathematics, an Akima spline is a type of non-smoothing spline that gives good fits to curves where the second derivative is rapidly varying. The Akima spline was published by Hiroshi Akima in 1970, the result of his pursuit of a cubic spline curve that would appear more natural and smooth, akin to an intuitively hand-drawn curve. The Akima spline has become the algorithm of choice for several computer graphics applications. Its advantage over the cubic spline curve is its stability with respect to outliers. Method. Given a set of "knot" points formula_0, where the formula_1 are strictly increasing, the Akima spline will go through each of the given points. At those points, its slope, formula_2, is a function of the locations of the points formula_3 through formula_4. Specifically, if we define formula_5 as the slope of the line segment from formula_6 to formula_7, namely formula_8 then the spline slopes formula_2 are defined as the following weighted average of formula_9 and formula_5, formula_10 If the denominator equals zero, the slope is given as formula_11 The first two and the last two points need a special prescription, for example, formula_12 The spline is then defined as the piecewise cubic function whose value between formula_1 and formula_13 is the unique cubic polynomial formula_14, formula_15 where the coefficients of the polynomial are chosen such that the four conditions of continuity of the spline together with its first derivative are satisfied, formula_16 which gives formula_17 formula_18 formula_19 formula_20 Due to these conditions the Akima spline is a C1 differentiable function, that is, the function itself is continuous and the first derivative is also continuous. However, in general, the second derivative is not necessarily continuous. An advantage of the Akima spline is that it uses only values from neighboring knot points in the construction of the coefficients of the interpolation polynomial between any two knot points. This means that there is no large system of equations to solve and the Akima spline avoids unphysical wiggles in regions where the second derivative in the underlying curve is rapidly changing. A possible disadvantage of the Akima spline is that it has a discontinuous second derivative.
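A direct transcription of the method into Python is given below as an illustrative sketch (production code would normally call an existing routine, for example SciPy's Akima interpolator); it follows the slope formula and the boundary prescription stated above:

import numpy as np

def akima_slopes(x, y):
    """Spline slopes s_i as the weighted average of segment slopes m_i."""
    m = np.diff(y) / np.diff(x)                  # segment slopes m_1 .. m_{n-1}
    n = len(x)
    s = np.empty(n)
    s[0], s[-1] = m[0], m[-1]                    # simple end-point prescription
    s[1], s[-2] = 0.5 * (m[0] + m[1]), 0.5 * (m[-2] + m[-1])
    for i in range(2, n - 2):
        w1 = abs(m[i + 1] - m[i])                # |m_{i+1} - m_i|
        w2 = abs(m[i - 1] - m[i - 2])            # |m_{i-1} - m_{i-2}|
        s[i] = 0.5 * (m[i - 1] + m[i]) if w1 + w2 == 0 else (w1 * m[i - 1] + w2 * m[i]) / (w1 + w2)
    return s

def akima_eval(x, y, xq):
    """Evaluate the Akima spline through (x, y) at the points xq."""
    s = akima_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = xq - x[i]
    m = (y[i + 1] - y[i]) / h
    c = (3 * m - 2 * s[i] - s[i + 1]) / h            # c_i
    d = (s[i] + s[i + 1] - 2 * m) / h ** 2           # d_i
    return y[i] + s[i] * t + c * t ** 2 + d * t ** 3

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(akima_eval(xs, ys, np.array([2.5])))           # [0.5]: no overshoot near the step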
[ { "math_id": 0, "text": "(x_i, y_i)_{i=1\\dots n}" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "s_i" }, { "math_id": 3, "text": "(x_{i-2}, y_{i-2})" }, { "math_id": 4, "text": "(x_{i+2}, y_{i+2})" }, { "math_id": 5, "text": "m_i" }, { "math_id": 6, "text": "(x_i, y_i)" }, { "math_id": 7, "text": "(x_{i+1},y_{i+1})" }, { "math_id": 8, "text": "m_i=\\frac{y_{i+1} - y_i}{x_{i+1}-x_i} \\,," }, { "math_id": 9, "text": "m_{i-1}" }, { "math_id": 10, "text": "s_i = \\frac{|m_{i+1} - m_i|m_{i-1} + |m_{i-1} - m_{i-2}|m_i}{|m_{i+1} - m_i| + |m_{i-1} - m_{i-2}|} \\,." }, { "math_id": 11, "text": "s_i=\\frac{m_{i-1}+m_{i}}{2}\\,." }, { "math_id": 12, "text": "s_1=m_1 \\,, s_2=\\frac{m_{1}+m_{2}}{2} \\,, s_{n-1}=\\frac{m_{n-2}+m_{n-1}}{2} \\,, s_n=m_{n-1} \\,." }, { "math_id": 13, "text": "x_{i+1}" }, { "math_id": 14, "text": "P_i(x)" }, { "math_id": 15, "text": "P_i(x)=a_i+b_i(x-x_i)+c_i(x-x_i)^2+d_i(x-x_i)^3 \\,," }, { "math_id": 16, "text": "P(x_i) = y_i \\,, P(x_{i+1}) = y_{i+1} \\,, P'(x_i) = s_i \\,, P'(x_{i+1}) = s_{i+1} \\,." }, { "math_id": 17, "text": "a_i=y_i \\,," }, { "math_id": 18, "text": "b_i=s_i \\,," }, { "math_id": 19, "text": "c_i=\\frac{3m_i-2s_i-s_{i+1}}{x_{i+1}-x_i} \\,," }, { "math_id": 20, "text": "d_i=\\frac{s_i+s_{i+1}-2m_i}{(x_{i+1}-x_i)^2} \\,." } ]
https://en.wikipedia.org/wiki?curid=60103721
60105148
Deep reinforcement learning
Machine learning that combines deep learning and reinforcement learning Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score). Deep reinforcement learning has been used for a diverse set of applications including but not limited to robotics, video games, natural language processing, computer vision, education, transportation, finance and healthcare. Overview. Deep learning. Deep learning is a form of machine learning that utilizes an artificial neural network to transform a set of inputs into a set of outputs. Deep learning methods, often using supervised learning with labeled datasets, have been shown to solve tasks that involve handling complex, high-dimensional raw input data (such as images) with less manual feature engineering than prior methods, enabling significant progress in several fields including computer vision and natural language processing. In the past decade, deep RL has achieved remarkable results on a range of problems, from single and multiplayer games such as Go, Atari games, and "Dota 2" to robotics. Reinforcement learning. Reinforcement learning is a process in which an agent learns to make decisions through trial and error. This problem is often modeled mathematically as a Markov decision process (MDP), where an agent at every timestep is in a state formula_0, takes action formula_1, receives a scalar reward and transitions to the next state formula_2 according to environment dynamics formula_3. The agent attempts to learn a policy formula_4, or map from observations to actions, in order to maximize its returns (expected sum of rewards). In reinforcement learning (as opposed to optimal control) the algorithm only has access to the dynamics formula_3 through sampling. Deep reinforcement learning. In many practical decision-making problems, the states formula_0 of the MDP are high-dimensional (e.g., images from a camera or the raw sensor stream from a robot), and such problems cannot be solved by traditional RL algorithms. Deep reinforcement learning algorithms incorporate deep learning to solve such MDPs, often representing the policy formula_4 or other learned functions as a neural network and developing specialized algorithms that perform well in this setting. History. Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning, where a neural network is used in reinforcement learning to represent policies or value functions. Because in such a system the entire decision-making process from sensors to motors in a robot or agent involves a single neural network, it is also sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon. Four inputs were used for the number of pieces of a given color at a given location on the board, totaling 198 input signals.
With zero knowledge built in, the network learned to play the game at an intermediate level by self-play and TD(formula_5). Seminal textbooks by Sutton and Barto on reinforcement learning, Bertsekas and Tsitsiklis on neuro-dynamic programming, and others advanced knowledge and interest in the field. Katsunari Shibata's group showed that various functions emerge in this framework, including image recognition, color constancy, sensor motion (active recognition), hand-eye coordination and hand reaching movement, explanation of brain activities, knowledge transfer, memory, selective attention, prediction, and exploration. Starting around 2012, the so-called deep learning revolution led to an increased interest in using deep neural networks as function approximators across a variety of domains. This led to a renewed interest in researchers using deep neural networks to learn the policy, value, and/or Q functions present in existing reinforcement learning algorithms. Beginning around 2013, DeepMind showed impressive learning results using deep RL to play Atari video games. The computer player was a neural network trained using a deep RL algorithm, a deep version of Q-learning they termed deep Q-networks (DQN), with the game score as the reward. They used a deep convolutional neural network to process 4 frames of RGB pixels (84x84) as inputs. All 49 games were learned using the same network architecture and with minimal prior knowledge, outperforming competing methods on almost all the games and performing at a level comparable or superior to a professional human game tester. Deep reinforcement learning reached another milestone in 2015 when AlphaGo, a computer program trained with deep RL to play Go, became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. In a subsequent project in 2017, AlphaZero improved performance on Go while also demonstrating they could use the same algorithm to learn to play chess and shogi at a level competitive or superior to existing computer programs for those games, and again improved in 2019 with MuZero. Separately, another milestone was achieved by researchers from Carnegie Mellon University in 2019 developing Pluribus, a computer program to play poker that was the first to beat professionals at multiplayer games of no-limit Texas hold 'em. OpenAI Five, a program for playing five-on-five "Dota 2", beat the previous world champions in a demonstration match in 2019. Deep reinforcement learning has also been applied to many domains beyond games. In robotics, it has been used to let robots perform simple household tasks and solve a Rubik's cube with a robot hand. Deep RL has also found sustainability applications, used to reduce energy consumption at data centers. Deep RL for autonomous driving is an active area of research in academia and industry. Loon explored deep RL for autonomously navigating their high-altitude balloons. Algorithms. Various techniques exist to train policies to solve tasks with deep reinforcement learning algorithms, each having their own benefits. At the highest level, there is a distinction between model-based and model-free reinforcement learning, which refers to whether the algorithm attempts to learn a forward model of the environment dynamics. In model-based deep reinforcement learning algorithms, a forward model of the environment dynamics is estimated, usually by supervised learning using a neural network.
Then, actions are obtained by using model predictive control using the learned model. Since the true environment dynamics will usually diverge from the learned dynamics, the agent re-plans often when carrying out actions in the environment. The actions selected may be optimized using Monte Carlo methods such as the cross-entropy method, or a combination of model-learning with model-free methods. In model-free deep reinforcement learning algorithms, a policy formula_4 is learned without explicitly modeling the forward dynamics. A policy can be optimized to maximize returns by directly estimating the policy gradient but suffers from high variance, making it impractical for use with function approximation in deep RL. Subsequent algorithms have been developed for more stable learning and widely applied. Another class of model-free deep reinforcement learning algorithms rely on dynamic programming, inspired by temporal difference learning and Q-learning. In discrete action spaces, these algorithms usually learn a neural network Q-function formula_6 that estimates the future returns taking action formula_1 from state formula_0. In continuous spaces, these algorithms often learn both a value estimate and a policy. Research. Deep reinforcement learning is an active area of research, with several lines of inquiry. Exploration. An RL agent must balance the exploration/exploitation tradeoff: the problem of deciding whether to pursue actions that are already known to yield high rewards or explore other actions in order to discover higher rewards. RL agents usually collect data with some type of stochastic policy, such as a Boltzmann distribution in discrete action spaces or a Gaussian distribution in continuous action spaces, inducing basic exploration behavior. The idea behind novelty-based, or curiosity-driven, exploration is giving the agent a motive to explore unknown outcomes in order to find the best solutions. This is done by "modify[ing] the loss function (or even the network architecture) by adding terms to incentivize exploration". An agent may also be aided in exploration by utilizing demonstrations of successful trajectories, or reward-shaping, giving an agent intermediate rewards that are customized to fit the task it is attempting to complete. Off-policy reinforcement learning. An important distinction in RL is the difference between on-policy algorithms that require evaluating or improving the policy that collects data, and off-policy algorithms that can learn a policy from data generated by an arbitrary policy. Generally, value-function based methods such as Q-learning are better suited for off-policy learning and have better sample-efficiency - the amount of data required to learn a task is reduced because data is re-used for learning. At the extreme, offline (or "batch") RL considers learning a policy from a fixed dataset without additional interaction with the environment. Inverse reinforcement learning. Inverse RL refers to inferring the reward function of an agent given the agent's behavior. Inverse reinforcement learning can be used for learning from demonstrations (or apprenticeship learning) by inferring the demonstrator's reward and then optimizing a policy to maximize returns with RL. Deep learning approaches have been used for various forms of imitation learning and inverse RL. Goal-conditioned reinforcement learning. 
Another active area of research is in learning goal-conditioned policies, also called contextual or universal policies formula_7 that take in an additional goal formula_8 as input to communicate a desired aim to the agent. Hindsight experience replay is a method for goal-conditioned RL that involves storing and learning from previous failed attempts to complete a task. While a failed attempt may not have reached the intended goal, it can serve as a lesson for how to achieve the unintended result through hindsight relabeling. Multi-agent reinforcement learning. Many applications of reinforcement learning do not involve just a single agent, but rather a collection of agents that learn together and co-adapt. These agents may be competitive, as in many games, or cooperative, as in many real-world multi-agent systems. Multi-agent reinforcement learning studies the problems introduced in this setting. Generalization. The promise of using deep learning tools in reinforcement learning is generalization: the ability to operate correctly on previously unseen inputs. For instance, neural networks trained for image recognition can recognize that a picture contains a bird even if they have never seen that particular image or even that particular bird. Since deep RL allows raw data (e.g. pixels) as input, there is a reduced need to predefine the environment, allowing the model to be generalized to multiple applications. With this layer of abstraction, deep reinforcement learning algorithms can be designed in a way that allows them to be general, and the same model can be used for different tasks. One method of increasing the ability of policies trained with deep RL to generalize is to incorporate representation learning.
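As an illustration of the value-based, model-free approach described above, the following is a minimal tabular Q-learning sketch, written before any neural network enters the picture; it is not code from any of the systems mentioned in this article. The five-state chain environment, the reward of 1 at the goal and all hyperparameter values are assumptions chosen only for the example; a deep Q-network replaces the dictionary q below with a neural network over high-dimensional states.

```python
import random
from collections import defaultdict

# Toy chain MDP (assumed for this example): states 0..4, actions 0 = left,
# 1 = right; reaching state 4 yields reward 1 and ends the episode.
GOAL, ACTIONS = 4, (0, 1)

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

q = defaultdict(float)                 # tabular Q(s, a), initialised to 0
alpha, gamma, epsilon = 0.5, 0.95, 0.1

for _ in range(500):                   # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration, the basic stochastic policy of the text
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = r + (0.0 if done else gamma * max(q[(s2, act)] for act in ACTIONS))
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

print([max(ACTIONS, key=lambda act: q[(st, act)]) for st in range(GOAL)])
# expected: action 1 (move right, toward the goal) in every non-terminal state
```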
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "s'" }, { "math_id": 3, "text": "p(s'|s, a)" }, { "math_id": 4, "text": "\\pi(a|s)" }, { "math_id": 5, "text": "\\lambda" }, { "math_id": 6, "text": "Q(s, a)" }, { "math_id": 7, "text": "\\pi(a|s, g)" }, { "math_id": 8, "text": "g" } ]
https://en.wikipedia.org/wiki?curid=60105148
6010542
Period mapping
In mathematics, in the field of algebraic geometry, the period mapping relates families of Kähler manifolds to families of Hodge structures. Ehresmann's theorem. Let "f" : "X" → "B" be a holomorphic submersive morphism. For a point "b" of "B", we denote the fiber of "f" over "b" by "X""b". Fix a point 0 in "B". Ehresmann's theorem guarantees that there is a small open neighborhood "U" around 0 in which "f" becomes a fiber bundle. That is, "f"−1("U") is diffeomorphic to "X"0 × "U". In particular, the composite map formula_0 is a diffeomorphism. This diffeomorphism is not unique because it depends on the choice of trivialization. The trivialization is constructed from smooth paths in "U", and it can be shown that the homotopy class of the diffeomorphism depends only on the choice of a homotopy class of paths from "b" to 0. In particular, if "U" is contractible, there is a well-defined diffeomorphism up to homotopy. The diffeomorphism from "X""b" to "X"0 induces an isomorphism of cohomology groups formula_1 and since homotopic maps induce identical maps on cohomology, this isomorphism depends only on the homotopy class of the path from "b" to 0. Local unpolarized period mappings. Assume that "f" is proper and that "X"0 is a Kähler variety. The Kähler condition is open, so after possibly shrinking "U", "X""b" is compact and Kähler for all "b" in "U". After shrinking "U" further we may assume that it is contractible. Then there is a well-defined isomorphism between the cohomology groups of "X"0 and "X""b". These isomorphisms of cohomology groups will not in general preserve the Hodge structures of "X"0 and "X""b" because they are induced by diffeomorphisms, not biholomorphisms. Let "FpHk"("Xb", C) denote the "p"th step of the Hodge filtration. The Hodge numbers of "Xb" are the same as those of "X"0, so the number "b""p","k" = dim "FpHk"("Xb", C) is independent of "b". The period map is the map formula_2 where "F" is the flag variety of chains of subspaces of dimensions "b""p","k" for all "p", that sends formula_3 Because "Xb" is a Kähler manifold, the Hodge filtration satisfies the Hodge–Riemann bilinear relations. These imply that formula_4 Not all flags of subspaces satisfy this condition. The subset of the flag variety satisfying this condition is called the unpolarized local period domain and is denoted formula_5. formula_5 is an open subset of the flag variety "F". Local polarized period mappings. Assume now not just that each "X""b" is Kähler, but that there is a Kähler class that varies holomorphically in "b". In other words, assume there is a class ω in H2("X", Z) such that for every "b", the restriction ω"b" of ω to "X""b" is a Kähler class. ω"b" determines a bilinear form "Q" on "H""k"("X""b", C) by the rule formula_6 This form varies holomorphically in "b", and consequently the image of the period mapping satisfies additional constraints which again come from the Hodge–Riemann bilinear relations. These are: the subspace "FpHk"("Xb", C) is orthogonal to "Fk−p+1Hk"("Xb", C) with respect to "Q"; and, for "p" + "q" = "k", the restriction of formula_7 to the primitive classes of type ("p", "q") is positive definite. The polarized local period domain is the subset of the unpolarized local period domain whose flags satisfy these additional conditions. The first condition is a closed condition, and the second is an open condition, and consequently the polarized local period domain is a locally closed subset of the unpolarized local period domain and of the flag variety "F". The period mapping is defined in the same way as before. 
The polarized local period domain and the polarized period mapping are still denoted formula_5 and formula_8, respectively. Global period mappings. Focusing only on local period mappings ignores the information present in the topology of the base space "B". The global period mappings are constructed so that this information is still available. The difficulty in constructing global period mappings comes from the monodromy of "B": There is no longer a unique homotopy class of diffeomorphisms relating the fibers "Xb" and "X0". Instead, distinct homotopy classes of paths in "B" induce possibly distinct homotopy classes of diffeomorphisms and therefore possibly distinct isomorphisms of cohomology groups. Consequently there is no longer a well-defined flag for each fiber. Instead, the flag is defined only up to the action of the fundamental group. In the unpolarized case, define the "monodromy group" Γ to be the subgroup of GL("Hk"("X"0, Z)) consisting of all automorphisms induced by a homotopy class of curves in "B" as above. The flag variety is a quotient of a Lie group by a parabolic subgroup, and the monodromy group is an arithmetic subgroup of the Lie group. The global unpolarized period domain is the quotient of the local unpolarized period domain by the action of Γ (it is thus a collection of double cosets). In the polarized case, the elements of the monodromy group are required to also preserve the bilinear form "Q", and the global polarized period domain is constructed as a quotient by Γ in the same way. In both cases, the period mapping takes a point of "B" to the class of the Hodge filtration on "Xb". Properties. Griffiths proved that the period map is holomorphic. His transversality theorem limits the range of the period map. Period matrices. The Hodge filtration can be expressed in coordinates using period matrices. Choose a basis δ1, ..., δr for the torsion-free part of the "k"th integral homology group "H""k"("X", Z). Fix "p" and "q" with "p" + "q" = "k", and choose a basis ω1, ..., ωs for the harmonic forms of type ("p", "q"). The period matrix of "X"0 with respect to these bases is the matrix formula_9 The entries of the period matrix depend on the choice of basis and on the complex structure. The δs can be varied by a choice of a matrix Λ in SL("r", Z), and the ωs can be varied by a choice of a matrix "A" in GL("s", C). A period matrix is "equivalent" to Ω if it can be written as "A"ΩΛ for some choice of "A" and Λ. The case of elliptic curves. Consider the family of elliptic curves formula_10 where λ is any complex number not equal to zero or one. The Hodge filtration on the first cohomology group of a curve has two steps, "F"0 and "F"1. However, "F"0 is the entire cohomology group, so the only interesting term of the filtration is "F"1, which is "H"1,0, the space of holomorphic harmonic 1-forms. "H"1,0 is one-dimensional because the curve is elliptic, and for all λ, it is spanned by the differential form ω = "dx"/"y". To find explicit representatives of the homology group of the curve, note that the curve can be represented as the graph of the multivalued function formula_11 on the Riemann sphere. The branch points of this function are at zero, one, λ, and infinity. Make two branch cuts, one running from zero to one and the other running from λ to infinity. These exhaust the branch points of the function, so they cut the multi-valued function into two single-valued sheets. Fix a small ε &gt; 0. On one of these sheets, trace the curve γ("t") = 1/2 + (1/2 + ε)exp(2π"it"). 
For ε sufficiently small, this curve surrounds the branch cut [0, 1] and does not meet the branch cut [λ, ∞]. Now trace another curve δ("t") that begins in one sheet as δ("t") = 1 + 2(λ − 1)t for 0 ≤ t ≤ 1/2 and continues in the other sheet as δ("t") = λ + 2(1 − λ)(t − 1/2) for 1/2 ≤ t ≤ 1. Each half of this curve connects the points 1 and λ on the two sheets of the Riemann surface. From the Seifert–van Kampen theorem, the homology group of the curve is free of rank two. Because the curves meet in a single point, 1 + ε, neither of their homology classes is a proper multiple of some other homology class, and hence they form a basis of "H"1. The period matrix for this family is therefore formula_12 The first entry of this matrix we will abbreviate as "A", and the second as "B". The bilinear form √−1"Q" is positive definite because locally, we can always write ω as "f dz", hence formula_13 By Poincaré duality, γ and δ correspond to cohomology classes γ* and δ* which together are a basis for "H"1("X"0, Z). It follows that ω can be written as a linear combination of γ* and δ*. The coefficients are given by evaluating ω with respect to the dual basis elements γ and δ: formula_14 When we rewrite the positive definiteness of "Q" in these terms, we have formula_15 Since γ* and δ* are integral, they do not change under conjugation. Furthermore, since γ and δ intersect in a single point and a single point is a generator of "H"0, the cup product of γ* and δ* is the fundamental class of "X"0. Consequently this integral equals formula_16. The integral is strictly positive, so neither "A" nor "B" can be zero. After rescaling ω, we may assume that the period matrix equals (1 τ) for some complex number τ with strictly positive imaginary part. This removes the ambiguity coming from the GL(1, C) action. The action of SL(2, Z) is then the usual action of the modular group on the upper half-plane. Consequently, the period domain is the upper half-plane. This is the usual parameterization of an elliptic curve as a lattice. 
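The final normalization step can be made explicit. The following display is a short supplement, not part of the original text, written only in terms of the entries "A" and "B" of the period matrix and the positivity already established above.

```latex
% Rescaling \omega by 1/A replaces the period matrix (A, B)^T by (1, \tau)^T:
\[
  \tau = \frac{B}{A}, \qquad
  \operatorname{Im}\tau = \frac{\operatorname{Im}(\bar{A}B)}{|A|^{2}} > 0,
\]
% so the positivity of Q, which gave \operatorname{Im}(2\bar{A}B) > 0,
% is exactly the statement that \tau lies in the upper half-plane.
```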
[ { "math_id": 0, "text": "X_b \\hookrightarrow f^{-1}(U) \\cong X_0 \\times U \\twoheadrightarrow X_0" }, { "math_id": 1, "text": "H^k(X_b, \\mathbf{Z}) \\cong H^k(X_b \\times U, \\mathbf{Z}) \\cong H^k(X_0 \\times U, \\mathbf{Z}) \\cong H^k(X_0, \\mathbf{Z})," }, { "math_id": 2, "text": "\\mathcal{P} : U \\rarr F = F_{b_{1,k}, \\ldots, b_{k,k}}(H^k(X_0, \\mathbf{C}))," }, { "math_id": 3, "text": "b \\mapsto (F^pH^k(X_b, \\mathbf{C}))_p." }, { "math_id": 4, "text": "H^k(X_b, \\mathbf{C}) = F^pH^k(X_b, \\mathbf{C}) \\oplus \\overline{F^{k-p+1}H^k(X_b, \\mathbf{C})}." }, { "math_id": 5, "text": "\\mathcal{D}" }, { "math_id": 6, "text": "Q(\\xi, \\eta) = \\int \\omega_b^{n-k} \\wedge \\xi \\wedge \\eta." }, { "math_id": 7, "text": "\\textstyle (-1)^{k(k-1)/2}i^{p-q}Q" }, { "math_id": 8, "text": "\\mathcal{P}" }, { "math_id": 9, "text": "\\Omega = \\Big(\\int_{\\delta_i} \\omega_j\\Big)_{1 \\le i \\le r, 1 \\le j \\le s}." }, { "math_id": 10, "text": "y^2 = x(x - 1)(x - \\lambda)" }, { "math_id": 11, "text": "y = \\sqrt{x(x-1)(x-\\lambda)}" }, { "math_id": 12, "text": "\\begin{pmatrix} \\int_\\gamma \\omega \\\\ \\int_\\delta \\omega \\end{pmatrix}." }, { "math_id": 13, "text": "\\sqrt{-1}\\int_{X_0} \\omega \\wedge \\bar\\omega = \\sqrt{-1}\\int_{X_0} |f|^2\\,dz \\wedge d\\bar{z} > 0." }, { "math_id": 14, "text": "\\omega = A\\gamma^* + B\\delta^*." }, { "math_id": 15, "text": "\\sqrt{-1}\\int_{X_0} A\\bar{B}\\gamma^* \\wedge \\bar{\\delta}^* + \\bar{A}B\\bar{\\gamma}^* \\wedge \\delta^* = \\int_{X_0} \\operatorname{Im}\\,(2\\bar{A}B \\bar{\\gamma}^* \\wedge \\delta^*) > 0" }, { "math_id": 16, "text": "\\operatorname{Im}\\,2\\bar{A}B" }, { "math_id": 17, "text": "x^m + y^n = 1" } ]
https://en.wikipedia.org/wiki?curid=6010542
601060
Pedoe's inequality
Inequality applying to triangles In geometry, Pedoe's inequality (also Neuberg–Pedoe inequality), named after Daniel Pedoe (1910–1998) and Joseph Jean Baptiste Neuberg (1840–1926), states that if "a", "b", and "c" are the lengths of the sides of a triangle with area "ƒ", and "A", "B", and "C" are the lengths of the sides of a triangle with area "F", then formula_0 with equality if and only if the two triangles are similar with pairs of corresponding sides ("A, a"), ("B, b"), and ("C, c"). The expression on the left is not only symmetric under any of the six permutations of the set { ("A", "a"), ("B", "b"), ("C", "c") } of pairs, but also—perhaps not so obviously—remains the same if "a" is interchanged with "A" and "b" with "B" and "c" with "C". In other words, it is a symmetric function of the pair of triangles. Pedoe's inequality is a generalization of Weitzenböck's inequality, which is the case in which one of the triangles is equilateral. Pedoe discovered the inequality in 1941 and published it subsequently in several articles. Later he learned that the inequality was already known in the 19th century to Neuberg, who however did not prove that the equality implies the similarity of the two triangles. Proof. By Heron's formula, the area of the two triangles can be expressed as: formula_1 formula_2 and then, using Cauchy-Schwarz inequality we have, formula_3 formula_4 formula_5 So, formula_6 formula_7 and the proposition is proven. Equality holds if and only if formula_8, that is, the two triangles are similar.
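The inequality and its equality case lend themselves to a quick numerical sanity check. The sketch below is illustrative only and is not part of the original article; the helper names, sampling ranges and numerical tolerance are assumptions.

```python
import math, random

def area(a, b, c):
    """Heron's formula for the area of a triangle with side lengths a, b, c."""
    s = (a + b + c) / 2
    return math.sqrt(max(0.0, s * (s - a) * (s - b) * (s - c)))

def pedoe_gap(a, b, c, A, B, C):
    """Left-hand side minus right-hand side of Pedoe's inequality."""
    f, F = area(a, b, c), area(A, B, C)
    lhs = (A**2 * (b**2 + c**2 - a**2)
           + B**2 * (a**2 + c**2 - b**2)
           + C**2 * (a**2 + b**2 - c**2))
    return lhs - 16 * F * f

def random_triangle():
    # rejection-sample side lengths until the triangle inequality holds
    while True:
        a, b, c = (random.uniform(0.1, 10) for _ in range(3))
        if a + b > c and b + c > a and a + c > b:
            return a, b, c

random.seed(0)
for _ in range(10_000):
    t1, t2 = random_triangle(), random_triangle()
    assert pedoe_gap(*t1, *t2) >= -1e-9     # small tolerance for rounding

# equality case: two similar triangles (here both equilateral), gap is 0
print(pedoe_gap(1, 1, 1, 2, 2, 2))
```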
[ { "math_id": 0, "text": "A^2(b^2+c^2-a^2)+B^2(a^2+c^2-b^2)+C^2(a^2+b^2-c^2)\\geq 16Ff,\\," }, { "math_id": 1, "text": "16f^2=(a+b+c)(a+b-c)(a-b+c)(b+c-a)=(a^2+b^2+c^2)^2-2(a^4+b^4+c^4)" }, { "math_id": 2, "text": "16F^2=(A+B+C)(A+B-C)(A-B+C)(B+C-A)=(A^2+B^2+C^2)^2-2(A^4+B^4+C^4)" }, { "math_id": 3, "text": "16Ff+2a^2A^2+2b^2B^2+2c^2C^2" }, { "math_id": 4, "text": "\\leq \\sqrt{16f^2+2a^4+2b^4+2c^4}\\sqrt{16F^2+2A^4+2B^4+2C^4}" }, { "math_id": 5, "text": "= (a^2+b^2+c^2)(A^2+B^2+C^2) " }, { "math_id": 6, "text": "16Ff\\leq A^2(a^2+b^2+c^2)-2a^2A^2+B^2(a^2+b^2+c^2)-2b^2B^2+C^2(a^2+b^2+c^2)-2c^2C^2 " }, { "math_id": 7, "text": "=A^2(b^2+c^2-a^2)+B^2(a^2+c^2-b^2)+C^2(a^2+b^2-c^2)" }, { "math_id": 8, "text": "\\tfrac{a}{A}=\\tfrac{b}{B}=\\tfrac{c}{C}=\\sqrt{\\tfrac{f}{F}}" } ]
https://en.wikipedia.org/wiki?curid=601060
60106341
Timed word
In model checking, a subfield of computer science, a timed word is an extension of the notion of words, in a formal language, in which each letter is associated with a positive time tag. The sequence of time tags must be non-decreasing, which intuitively means that letters are received successively, as time advances. For example, a system receiving a word over a network may associate to each letter the time at which the letter is received. The non-decreasing condition here means that the letters are received in the correct order. A timed language is a set of timed words. Example. Consider an elevator. What is formally called a letter could be in fact information such as "someone pressed the button on the 2nd floor", or "the doors opened on the third floor". In this case, a timed word is a sequence of actions taken by the elevator and its users, with time stamps to recall those actions. The timed word can then be analyzed by formal methods to check whether a property such as "each time the elevator is called, it arrives in less than three minutes assuming that no one held the door for more than fifteen seconds" holds. A statement such as this one is usually expressed in metric temporal logic, an extension of linear temporal logic that allows the expression of time constraints. A timed word may be passed to a model, such as a timed automaton, which will decide, given the letters or actions that already occurred, what is the next action that should be done; in our example, to which floor the elevator must go. Then a program may test this timed automaton and check the above-mentioned property. That is, it will try to generate a timed word in which the door is never held open for more than fifteen seconds, and in which a user must wait more than three minutes after calling the elevator.
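The definition above can be mirrored almost directly in code. The following is a minimal sketch for illustration only; the type alias, the function names and the elevator-style event names in the trace are assumptions.

```python
from typing import List, Tuple

TimedWord = List[Tuple[str, float]]   # pairs (letter, time tag)

def is_timed_word(w: TimedWord) -> bool:
    """Check the defining condition: non-negative, non-decreasing time tags."""
    times = [t for _, t in w]
    return all(t >= 0 for t in times) and all(
        t1 <= t2 for t1, t2 in zip(times, times[1:]))

def untimed(w: TimedWord) -> List[str]:
    """Project a timed word onto its underlying (untimed) word."""
    return [letter for letter, _ in w]

# hypothetical elevator trace: letters are abstract event names
trace = [("call_floor_2", 0.0), ("doors_open_3", 12.5), ("doors_close_3", 20.0)]
assert is_timed_word(trace)
assert not is_timed_word([("a", 5.0), ("b", 1.0)])   # tags decrease: rejected
print(untimed(trace))
```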
[ { "math_id": 0, "text": "w=(a_0,t_0)(a_1,t_1)\\dots" }, { "math_id": 1, "text": "a_i\\in A" }, { "math_id": 2, "text": "t_i\\in\\mathbb R_+" }, { "math_id": 3, "text": "t_i\\le t_{i+1}" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "(t_0)(t_1)\\dots" }, { "math_id": 6, "text": "\\operatorname{untimed}(w)" }, { "math_id": 7, "text": "w" }, { "math_id": 8, "text": "a_0a_1\\dots" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "\\operatorname{untimed}(L)" }, { "math_id": 11, "text": "w\\in L" } ]
https://en.wikipedia.org/wiki?curid=60106341
60109204
Yau's conjecture on the first eigenvalue
In mathematics, Yau's conjecture on the first eigenvalue is, as of 2018, an unsolved conjecture proposed by Shing-Tung Yau in 1982. It asks: Is it true that the first eigenvalue for the Laplace–Beltrami operator on an embedded minimal hypersurface of formula_0 is formula_1? If true, it would imply that the area of embedded minimal hypersurfaces in formula_2 has an upper bound depending only on the genus. Some possible reformulations are as follows: Yau's conjecture has been verified in several special cases, but remains open in general. Shiing-Shen Chern conjectured that a closed, minimally immersed hypersurface in formula_0(1), whose second fundamental form has constant length, is isoparametric. If true, it would have established Yau's conjecture for minimal hypersurfaces whose second fundamental form has constant length. A possible generalization of Yau's conjecture: Let formula_5 be a closed minimal submanifold in the unit sphere formula_6(1) with dimension formula_7 of formula_5 satisfying formula_8. Is it true that the first eigenvalue of formula_5 is formula_7?
[ { "math_id": 0, "text": "S^{n+1}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "S^3" }, { "math_id": 3, "text": "M^n" }, { "math_id": 4, "text": "{\\sum}^n \\subset S^{n+1}" }, { "math_id": 5, "text": "M^d" }, { "math_id": 6, "text": "S^{N+1}" }, { "math_id": 7, "text": "d" }, { "math_id": 8, "text": "d \\ge \\frac{2}{3}n + 1" } ]
https://en.wikipedia.org/wiki?curid=60109204
6011
Chomsky hierarchy
Hierarchy of classes of formal grammars The Chomsky hierarchy, in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary (or alphabet) that are valid according to the language's syntax. The linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also completely generate the languages of all lower classes (the classes are set-inclusive). History. The general idea of a hierarchy of grammars was first described by Noam Chomsky in "Three models for the description of language". Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper "The algebraic theory of context free languages" describes the modern hierarchy, including context-free grammars. Independently, alongside linguists, mathematicians were developing models of computation (via automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models. The hierarchy. The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have. The classes are defined by the constraints on the production rules. Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1. Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular. Regular (Type-3) grammars. Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal, in which case the grammar is "right regular". Alternatively, all the rules can have their right-hand sides consist of a single terminal, possibly "preceded" by a single nonterminal ("left regular"). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule formula_8 is also allowed here if formula_9 does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite-state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages. For example, the regular language formula_4 is generated by the Type-3 grammar formula_10 with the productions formula_11 being the following. "S" → "aS" "S" → "a" Context-free (Type-2) grammars. Type-2 grammars generate the context-free languages. These are defined by rules of the form formula_5 with formula_0 being a nonterminal and formula_1 being a string of terminals and/or nonterminals. 
These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Context-free languages—or rather their subset of deterministic context-free languages—are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser. For example, the context-free language formula_6 is generated by the Type-2 grammar formula_10 with the productions formula_11 being the following. "S" → "aSb" "S" → "ab" The language is context-free but not regular (by the pumping lemma for regular languages). Context-sensitive (Type-1) grammars. Type-1 grammars generate context-sensitive languages. These grammars have rules of the form formula_12 with formula_0 a nonterminal and formula_1, formula_2 and formula_3 strings of terminals and/or nonterminals. The strings formula_1 and formula_2 may be empty, but formula_3 must be nonempty. The rule formula_13 is allowed if formula_9 does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input). For example, the context-sensitive language formula_7 is generated by the Type-1 grammar formula_14 with the productions formula_11 being the following. "S" → "aBC" "S" → "aSBC" "CB" → "CZ" "CZ" → "WZ" "WZ" → "WC" "WC" → "BC" "aB" → "ab" "bB" → "bb" "bC" → "bc" "cC" → "cc" The language is context-sensitive but not context-free (by the pumping lemma for context-free languages). A proof that this grammar generates formula_7 is sketched in the article on Context-sensitive grammars. Recursively enumerable (Type-0) grammars. Type-0 grammars include all formal grammars. There are no constraints on the production rules. They generate exactly all languages that can be recognized by a Turing machine; thus, any language that can be generated at all can be generated by a Type-0 grammar. These languages are also known as the "recursively enumerable" or "Turing-recognizable" languages. Note that this is different from the recursive languages, which can be "decided" by an always-halting Turing machine.
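The example grammars above can be simulated directly by string rewriting. The sketch below is illustrative only (a brute-force derivation search, not a parser); the function names and the length bound are assumptions. It derives all short words of the Type-2 example grammar "S" → "aSb", "S" → "ab" and checks that each has the promised form, a string of "a"s followed by the same number of "b"s.

```python
import re
from collections import deque

# Productions of the example Type-2 grammar above: S -> aSb | ab.
PRODUCTIONS = [("S", "aSb"), ("S", "ab")]

def generate(max_len):
    """All terminal strings of length <= max_len derivable from S."""
    queue, seen, terminal = deque(["S"]), {"S"}, set()
    while queue:
        form = queue.popleft()
        if "S" not in form:            # no nonterminal left: a terminal string
            terminal.add(form)
            continue
        for lhs, rhs in PRODUCTIONS:
            i = form.find(lhs)         # rewrite the (only) occurrence of S
            new = form[:i] + rhs + form[i + len(lhs):]
            if len(new) <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return terminal

words = generate(8)
print(sorted(words, key=len))          # ['ab', 'aabb', 'aaabbb', 'aaaabbbb']
# each derived word consists of n a's followed by n b's, as stated in the text
assert all(re.fullmatch(r"a+b+", w) and w.count("a") == w.count("b")
           for w in words)
```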
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\beta" }, { "math_id": 3, "text": "\\gamma" }, { "math_id": 4, "text": "L = \\{a^n|n > 0\\}" }, { "math_id": 5, "text": "A \\rightarrow \\alpha" }, { "math_id": 6, "text": "L = \\{a^nb^n|n > 0\\}" }, { "math_id": 7, "text": "L = \\{a^nb^nc^n|n > 0\\}" }, { "math_id": 8, "text": "S \\rightarrow \\varepsilon" }, { "math_id": 9, "text": "S" }, { "math_id": 10, "text": "G = (\\{S\\}, \\{a, b\\}, P, S)" }, { "math_id": 11, "text": "P" }, { "math_id": 12, "text": "\\alpha A\\beta \\rightarrow \\alpha\\gamma\\beta" }, { "math_id": 13, "text": "S \\rightarrow \\epsilon" }, { "math_id": 14, "text": "G = (\\{S,A,B,C,W,Z\\}, \\{a, b\\}, P, S)" } ]
https://en.wikipedia.org/wiki?curid=6011
60116911
Signal (model checking)
In model checking, a subfield of computer science, a signal or timed state sequence is an extension of the notion of words in a formal language, in which letters are continuously emitted. While a word is traditionally defined as a function from a set of non-negative integers to letters, a signal is a function from a set of real numbers to letters. This allows the use of formalisms similar to the ones of automata theory to deal with continuous signals. Example. Consider an elevator. What is formally called a letter could be in fact information such as "someone is pressing the button on the 2nd floor", or "the doors are currently open on the third floor". In this case, a signal indicates, at each time, the current state of the elevator and its buttons. The signal can then be analyzed using formal methods to check whether a property such as "each time the elevator is called, it arrives in less than three minutes, assuming that no one held the door for more than fifteen seconds" holds. A statement such as this one is usually expressed in metric temporal logic, an extension of linear temporal logic that allows the expression of time constraints. A signal may be passed to a model, such as a signal automaton, which will decide, given the letters or actions that already occurred, what is the next action that should be performed; in our example, to which floor the elevator must go. Then a program may test this signal and check the above-mentioned property. That is, it will try to generate a signal in which the door is never held open for more than fifteen seconds, and in which a user must wait more than three minutes after calling the elevator. Definition. Given an alphabet "A", a signal formula_0 is a sequence formula_1, finite or infinite, such that formula_2, each formula_3 are pairwise disjoint intervals, formula_4, and formula_5 is also an interval. Given formula_6 for some formula_7, formula_8 represents formula_9. Properties. Some authors restrict the kind of signals they consider. We list here some standard properties that a signal may or may not satisfy. Finite variability. Intuitively, a signal is said to be finitely variable, or to have the finite variability property, if during each bounded interval, the letter changes a finite number of times. In our previous elevator example, this property would mean that a user may only press a button a finite number of times during a finite time. And similarly, in a finite time, the elevator can only open and close its door a finite number of times. Formally, a signal is said to have the finite variability property unless the sequence is infinite and formula_10 is bounded. Intuitively, the finite variability property states that there is not an infinite number of changes in a finite time. Having the finite variability property is similar to the notion of being non-Zeno for a timed word. Bounded variability. The notion of bounded variability is a restriction of the notion of finite variability. A signal has the bounded variability property if there exists a lower bound on the time between the beginnings of two intervals with the same letter. Before giving a formal definition, we give an example of a signal which is finitely variable but not boundedly variable. Take the alphabet formula_11. Take the signal formula_0 which sends the reals of the form formula_12 with formula_13 and formula_14 to formula_15 and every other real to formula_16. During each finite time interval, the letter changes a finite number of times. 
Thus this signal is finitely variable. However, the distance between two successive occurrences of the letter formula_15 is arbitrarily small. Thus it does not have the bounded variability property. Let formula_17 be a sequence. If formula_18 for each integer formula_7, then the sequence is said to have the bounded variability property if there exists a real formula_19 such that, for each formula_20 with formula_21 such that there exists no formula_22 with formula_23 and formula_24, the difference between the lower bounds of formula_25 and of formula_26 is at least formula_27. Note that each sequence formula_0 is equivalent to a sequence formula_28 in which two successive letters are distinct. The sequence formula_0 is said to have the bounded variability property if and only if formula_28 has the bounded variability property. A set of signals is said to have the bounded variability property if the above-mentioned lower bound formula_27 can be chosen to be the same for each signal of the set. We now give the main reason to consider signals with bounded variability. Assume we need to create a system, such as a signal automaton, which needs to recall everything that occurred in the last time unit. If we know that the signal is boundedly variable, we can compute an upper bound on the number of actions which occurred during one time unit. Thus, we can create such a system and ensure that it only requires a finite memory. For example, for an arbitrary predicate formula_29, the signal stating whether the statement "formula_30 holds sometime in the next time unit" holds has the bounded variability property. Indeed, when this statement becomes true, it remains true for a full time unit. Thus the difference between two occurrences where this statement becomes true is greater than a time unit. Bipartite signal. A signal is said to be "bipartite" if the sequence of intervals starts with a singular interval – i.e. a closed interval whose lower and upper bound are equal, hence a set which is a singleton – and if the sequence alternates between singular intervals and open intervals. Each signal is equivalent to a bipartite signal. Indeed, any interval which is closed on the left is the union of a singular interval and of an interval open on the left, in this order. And similarly for intervals closed on the right. A signal automaton reading a bipartite signal has a special form. Its set of locations can be partitioned into locations for singular intervals and locations for open intervals. Each transition goes from a singular location to an open one and reciprocally.
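The definitions above can be illustrated with a small concrete representation. The sketch below is a simplification and not part of the original article: it only allows finitely many half-open intervals [t_i, t_{i+1}), whereas the definition allows arbitrary interval types; all class and method names are assumptions.

```python
from bisect import bisect_right

class Signal:
    """A simplified finitely-variable signal: letters[i] labels the
    half-open interval [breakpoints[i], breakpoints[i+1])."""
    def __init__(self, breakpoints, letters):
        assert breakpoints[0] == 0 and len(letters) == len(breakpoints) - 1
        assert all(a < b for a, b in zip(breakpoints, breakpoints[1:]))
        self.breakpoints, self.letters = breakpoints, letters

    def value(self, t):
        """gamma(t): the letter emitted at time t (t within the domain)."""
        i = bisect_right(self.breakpoints, t) - 1
        return self.letters[i]

    def changes_in(self, lo, hi):
        """Number of letter changes inside the bounded interval [lo, hi]."""
        return sum(1 for t in self.breakpoints[1:-1] if lo <= t <= hi)

# elevator-style example: 'i' = idle, 'o' = doors open
sig = Signal([0, 2.5, 4.0, 10.0], ["i", "o", "i"])
print(sig.value(3.0))          # 'o'
print(sig.changes_in(0, 10))   # 2: finitely many changes on this interval
```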
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "(I_0,a_0),(I_1,a_1),\\dots," }, { "math_id": 2, "text": "a_i\\in A" }, { "math_id": 3, "text": "I_i\\subseteq\\mathbb R_+" }, { "math_id": 4, "text": "0\\in I_0" }, { "math_id": 5, "text": "I_i\\cup I_{i+1}" }, { "math_id": 6, "text": "t\\in I_i" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "\\gamma(t)" }, { "math_id": 9, "text": "a_i" }, { "math_id": 10, "text": "\\bigcup_i I_i" }, { "math_id": 11, "text": "A=\\{a,b\\}" }, { "math_id": 12, "text": "n+\\frac{c}n" }, { "math_id": 13, "text": "n\\in\\mathbb N" }, { "math_id": 14, "text": "c<n" }, { "math_id": 15, "text": "a" }, { "math_id": 16, "text": "b" }, { "math_id": 17, "text": "(I_0,a_0),(I_1,a_1),\\dots" }, { "math_id": 18, "text": "a_i\\ne a_{i+1}" }, { "math_id": 19, "text": "r>0" }, { "math_id": 20, "text": "i<j" }, { "math_id": 21, "text": "a_i=a_j" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "i<j<k" }, { "math_id": 24, "text": "a_i=a_k" }, { "math_id": 25, "text": "I_j" }, { "math_id": 26, "text": "I_i" }, { "math_id": 27, "text": "r" }, { "math_id": 28, "text": "\\gamma'" }, { "math_id": 29, "text": "e" }, { "math_id": 30, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=60116911
60117345
Taylor–Culick flow
In fluid dynamics, Taylor–Culick flow describes the axisymmetric flow inside a long slender cylinder with one end closed, supplied by a constant flow injection through the sidewall. The flow is named after Geoffrey Ingram Taylor and F. E. C. Culick. In 1956, Taylor showed that when a fluid is forced into a porous sheet of a cone or wedge, a favorable longitudinal pressure gradient is set up in the direction of the flow inside the cone or wedge and the flow is rotational; this is in contrast to the opposite case, wherein the fluid is forced out of the cone or wedge sheet from the inside, in which case the flow is uniform inside the cone or wedge and is obviously potential. Taylor also obtained solutions for the velocity in the limiting case where the cone or the wedge degenerates into a circular tube or parallel plates. Later, in 1966, Culick found the solution corresponding to the tube problem in the context of solid-propellant rocket combustion. Here the thermal expansion of the gas due to combustion occurring at the inner surface of the combustion chamber (a long slender cylinder) generates a flow directed towards the axis. Flow description. The axisymmetric inviscid flow is governed by the Hicks equation, which reduces, when no swirl is present (i.e., zero circulation), to formula_0 where formula_1 is the stream function, formula_2 is the radial distance from the axis, and formula_3 is the axial distance measured from the closed end of the cylinder. The choice formula_4 is found to yield the correct solution. The solution satisfying the required boundary conditions is given by formula_5 where formula_6 is the radius of the cylinder and formula_7 is the injection velocity at the wall. Despite the simple-looking formula, the solution has been experimentally verified to be accurate. The solution is wrong for distances of order formula_8 since boundary layer separation at formula_9 is inevitable; that is, the Taylor–Culick profile is correct for formula_10. The Taylor–Culick profile with injection at the closed end of the cylinder can also be solved analytically. Although the solution is derived for the inviscid equation, it satisfies the no-slip condition at the wall since, as Taylor argued, any boundary layer at the sidewall will be blown off by flow injection. Hence, the flow is referred to as quasi-viscous.
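The velocity field corresponding to the stream function above can be evaluated numerically. The sketch below is illustrative only: it assumes the standard axisymmetric (Stokes) stream-function relations u_z = (1/r) ∂ψ/∂r and u_r = −(1/r) ∂ψ/∂z, which are not spelled out in the text, and the values of a and U are arbitrary.

```python
import numpy as np

a, U = 1.0, 1.0          # assumed cylinder radius and wall injection speed

def velocity(r, z, eps=1e-6):
    """(u_r, u_z) from the Taylor-Culick stream function
    psi = a * U * z * sin(pi r^2 / (2 a^2)), via finite differences and the
    Stokes relations u_z = (1/r) dpsi/dr, u_r = -(1/r) dpsi/dz."""
    psi = lambda r_, z_: a * U * z_ * np.sin(np.pi * r_**2 / (2 * a**2))
    dpsi_dr = (psi(r + eps, z) - psi(r - eps, z)) / (2 * eps)
    dpsi_dz = (psi(r, z + eps) - psi(r, z - eps)) / (2 * eps)
    return -dpsi_dz / r, dpsi_dr / r

# At the sidewall r = a, the radial velocity equals the injection velocity -U
u_r, u_z = velocity(a, z=3.0)
print(round(u_r, 4), round(u_z, 4))   # approximately (-1.0, 0.0)

# Near the axis the flow is purely axial and grows linearly with z (~ pi U z / a)
print([round(velocity(1e-4, z)[1], 3) for z in (1.0, 2.0, 3.0)])
```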
[ { "math_id": 0, "text": "\\frac{\\partial^2 \\psi}{\\partial r^2} - \\frac{1}{r} \\frac{\\partial \\psi}{\\partial r} + \\frac{\\partial^2 \\psi}{\\partial z^2} = -r^2 f(\\psi)," }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "z" }, { "math_id": 4, "text": "f(\\psi) = \\pi^2\\psi" }, { "math_id": 5, "text": "\\psi= aU z \\sin \\left(\\frac{\\pi r^2}{2 a^2}\\right)," }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "U" }, { "math_id": 8, "text": "z\\sim a" }, { "math_id": 9, "text": "z=0" }, { "math_id": 10, "text": "z\\gg 1" } ]
https://en.wikipedia.org/wiki?curid=60117345
6011769
Quantum mutual information
Measure in quantum information theory In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of a quantum state. It is the quantum mechanical analog of Shannon mutual information. Motivation. For simplicity, it will be assumed that all objects in the article are finite-dimensional. The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variables "p"("x", "y"), the two marginal distributions are formula_0 The classical mutual information "I"("X":"Y") is defined by formula_1 where "S"("q") denotes the Shannon entropy of the probability distribution "q". One can calculate directly formula_2 So the mutual information is formula_3 where the logarithm is taken in base 2 to obtain the mutual information in bits. But this is precisely the relative entropy between "p"("x", "y") and "p"("x")"p"("y"). In other words, if we assume the two variables "x" and "y" to be uncorrelated, mutual information is the "discrepancy in uncertainty" resulting from this (possibly erroneous) assumption. It follows from the property of relative entropy that "I"("X":"Y") ≥ 0 and equality holds if and only if "p"("x", "y") = "p"("x")"p"("y"). Definition. The quantum mechanical counterparts of classical probability distributions are modeled with density matrices. Consider a quantum system that can be divided into two parts, A and B, such that independent measurements can be made on either part. The state space of the entire quantum system is then the tensor product of the spaces for the two parts. formula_4 Let "ρ""AB" be a density matrix acting on states in "H""AB". The von Neumann entropy of a density matrix, S("ρ"), is the quantum mechanical analog of the Shannon entropy. formula_5 For a probability distribution "p"("x","y"), the marginal distributions are obtained by integrating away the variables "x" or "y". The corresponding operation for density matrices is the partial trace. So one can assign to "ρ" a state on the subsystem "A" by formula_6 where Tr"B" is the partial trace with respect to system "B". This is the reduced state of "ρAB" on system "A". The reduced von Neumann entropy of "ρAB" with respect to system "A" is formula_7 "S"("ρB") is defined in the same way. It can now be seen that the definition of quantum mutual information, corresponding to the classical definition, should be as follows. formula_8 Quantum mutual information can be interpreted the same way as in the classical case: it can be shown that formula_9 where formula_10 denotes quantum relative entropy. Note that there is an alternative generalization of mutual information to the quantum case. The difference between the two for a given state is called quantum discord, a measure for the quantum correlations of the state in question. Properties. When the state formula_11 is pure (and thus formula_12), the mutual information is twice the entanglement entropy of the state: formula_13 A positive quantum mutual information is not necessarily indicative of entanglement, however. A classical mixture of separable states will always have zero entanglement, but can have nonzero QMI, such as formula_14 formula_15 In this case, the state is merely a classically correlated state. Multiparty generalization. Suppose a system is composed of n subsystems formula_16; then: formula_17 where formula_18 and the sum is over all the distinct combinations of the subsystems without repetition. 
For example, take formula_19: formula_20 Now take formula_21: formula_22 Note that what we are actually doing is taking the partial trace over one subsystem at a time; in the formula_23 example, in the first term we are tracing over formula_24, in the second term the trace is over formula_25, and so on.
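The classically correlated two-qubit example above can be checked numerically. The sketch below is illustrative only; it uses base-2 logarithms, so the mutual information appears as 1 bit rather than log 2, and the helper function names are assumptions.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 log 0 is taken to be 0
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho_ab, keep):
    """Partial trace of a two-qubit density matrix; keep = 'A' or 'B'."""
    rho = rho_ab.reshape(2, 2, 2, 2)      # indices: a, b, a', b'
    if keep == "A":
        return np.trace(rho, axis1=1, axis2=3)   # trace out B
    return np.trace(rho, axis1=0, axis2=2)       # trace out A

# classically correlated example from the text:
# rho_AB = 1/2 (|00><00| + |11><11|)
rho_ab = np.zeros((4, 4))
rho_ab[0, 0] = rho_ab[3, 3] = 0.5

rho_a, rho_b = partial_trace(rho_ab, "A"), partial_trace(rho_ab, "B")
mutual_info = (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
               - von_neumann_entropy(rho_ab))
print(mutual_info)   # 1.0 bit, i.e. log 2, matching the calculation above
```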
[ { "math_id": 0, "text": "p(x) = \\sum_{y} p(x,y), \\qquad p(y) = \\sum_{x} p(x,y)." }, { "math_id": 1, "text": "I(X:Y) = S(p(x)) + S(p(y)) - S(p(x,y))" }, { "math_id": 2, "text": "\\begin{align}\nS(p(x)) + S(p(y)) &= - \\left (\\sum_x p_x \\log p(x) + \\sum_y p_y \\log p(y) \\right ) \\\\\n&= -\\left (\\sum_x \\left ( \\sum_{y'} p(x,y') \\log \\sum_{y'} p(x,y') \\right ) + \\sum_y \\left ( \\sum_{x'} p(x',y) \\log \\sum_{x'} p(x',y) \\right ) \\right ) \\\\\n&= -\\left (\\sum_{x,y} p(x,y) \\left (\\log \\sum_{y'} p(x,y') + \\log \\sum_{x'} p(x',y) \\right ) \\right )\\\\\n&= -\\sum_{x,y} p(x,y) \\log p(x) p(y)\n\\end{align}" }, { "math_id": 3, "text": "I(X:Y) = \\sum_{x,y} p(x,y) \\log \\frac{p(x,y)}{p(x) p(y)}," }, { "math_id": 4, "text": "H_{AB} := H_A \\otimes H_B." }, { "math_id": 5, "text": "S(\\rho) = - \\operatorname{Tr} \\rho \\log \\rho." }, { "math_id": 6, "text": "\\rho^A = \\operatorname{Tr}_B \\; \\rho^{AB}" }, { "math_id": 7, "text": "\\;S(\\rho^A)." }, { "math_id": 8, "text": "\\; I(A\\!:\\!B) := S(\\rho^A) + S(\\rho^B) - S(\\rho^{AB})." }, { "math_id": 9, "text": "I(A\\!:\\!B) = S(\\rho^{AB} \\| \\rho^A \\otimes \\rho^B)" }, { "math_id": 10, "text": "S(\\cdot \\| \\cdot)" }, { "math_id": 11, "text": "\\rho^{AB}" }, { "math_id": 12, "text": "S(\\rho^{AB})=0" }, { "math_id": 13, "text": "I(A\\!:\\!B) = S(\\rho^A) + S(\\rho^B) - S(\\rho^{AB}) = S(\\rho^A) + S(\\rho^B) = 2S(\\rho^A)" }, { "math_id": 14, "text": "\\rho^{AB} = \\frac{1}{2}\\left(|00\\rangle\\langle00| + |11\\rangle\\langle11|\\right)" }, { "math_id": 15, "text": "\n\\begin{aligned}\nI(A\\!:\\!B) &= S(\\rho^A) + S(\\rho^B) - S(\\rho^{AB})\\\\\n&= S\\left(\\frac{1}{2}(|0\\rangle\\langle0| + |1\\rangle\\langle1|)\\right) + S\\left(\\frac{1}{2}(|0\\rangle\\langle0| + |1\\rangle\\langle1|)\\right) - S\\left(\\frac{1}{2}(|00\\rangle\\langle00| + |11\\rangle\\langle11|)\\right)\\\\\n&= \\log 2 +\\log 2 - \\log 2= \\log 2\n\\end{aligned}\n" }, { "math_id": 16, "text": " A_1,\\dots,A_n " }, { "math_id": 17, "text": "I(A_1\\!:\\!A_2:\\dots:A_n) = \\sum [S(X_{k_1},\\,X_{k_2},\\dots,X_{k_{n-1}})]-(n-1)S(A_1,\\,A_2,\\,\\dots,\\,A_n)" }, { "math_id": 18, "text": "X_{k_i} \\in \\{A_1,\\,A_2,\\,\\dots,\\,A_n\\}" }, { "math_id": 19, "text": " n=3 " }, { "math_id": 20, "text": " I(A\\!:\\!B\\!:\\!C)=S(AB)+S(AC)+S(BC)-2S(ABC)" }, { "math_id": 21, "text": "n=4" }, { "math_id": 22, "text": " I(A_1\\!:\\!A_2\\!:\\!A_3\\!:\\!A_4)=S(A_1A_2A_3)+S(A_1A_2A_4)+S(A_1A_3A_4)+S(A_2A_3A_4)-3S(A_1A_2A_3A_4)" }, { "math_id": 23, "text": " n=4 " }, { "math_id": 24, "text": "A_4" }, { "math_id": 25, "text": "A_3" } ]
https://en.wikipedia.org/wiki?curid=6011769
60121551
Chern's conjecture for hypersurfaces in spheres
Chern's conjecture for hypersurfaces in spheres, unsolved as of 2018, is a conjecture proposed by Chern in the field of differential geometry. It originates from Chern's unanswered question: Consider closed minimal submanifolds formula_0 immersed in the unit sphere formula_1 with second fundamental form of constant length whose square is denoted by formula_2. Is the set of values for formula_2 discrete? What is the infimum of these values of formula_3? The first question, i.e., whether the set of values for "σ" is discrete, can be reformulated as follows: Let formula_0 be a closed minimal submanifold in formula_4 with the second fundamental form of constant length, denote by formula_5 the set of all the possible values for the squared length of the second fundamental form of formula_0, is formula_5 discrete? Its affirmative answer, more general than Chern's conjecture for hypersurfaces, is sometimes also referred to as Chern's conjecture and is still, as of 2018, unanswered even with "M" as a hypersurface (Chern proposed this special case to Shing-Tung Yau's open problems' list in differential geometry in 1982): Consider the set of all compact minimal hypersurfaces in formula_6 with constant scalar curvature. Think of the scalar curvature as a function on this set. Is the image of this function a discrete set of positive numbers? Formulated alternatively: Consider closed minimal hypersurfaces formula_7 with constant scalar curvature formula_8. Then for each formula_9 the set of all possible values for formula_8 (or equivalently formula_10) is discrete. This became known as Chern's conjecture for minimal hypersurfaces in spheres (or Chern's conjecture for minimal hypersurfaces in a sphere). This hypersurface case was later, thanks to progress in the study of isoparametric hypersurfaces, given a new formulation, now known as Chern's conjecture for isoparametric hypersurfaces in spheres (or Chern's conjecture for isoparametric hypersurfaces in a sphere): Let formula_0 be a closed, minimally immersed hypersurface of the unit sphere formula_11 with constant scalar curvature. Then formula_12 is isoparametric. Here, formula_11 refers to the (n+1)-dimensional sphere, and n ≥ 2. In 2008, Zhiqin Lu proposed a conjecture similar to that of Chern, but with formula_13 taken instead of formula_2: Let formula_0 be a closed, minimally immersed submanifold in the unit sphere formula_14 with constant formula_13. If formula_15, then there is a constant formula_16 such that formula_17 Here, formula_0 denotes an n-dimensional minimal submanifold; formula_18 denotes the second largest eigenvalue of the semi-positive symmetric matrix formula_19 where formula_20s (formula_21) are the shape operators of formula_12 with respect to a given (local) normal orthonormal frame. formula_2 is rewritable as formula_22. Another related conjecture was proposed by Robert Bryant (mathematician): A piece of a minimal hypersphere of formula_23 with constant scalar curvature is isoparametric of type formula_24. Formulated alternatively: Let formula_25 be a minimal hypersurface with constant scalar curvature. Then formula_12 is isoparametric. Chern's conjectures hierarchically. Put hierarchically and formulated in a single style, Chern's conjectures (without the conjectures of Lu and Bryant) can look like this: Let formula_12 be a compact minimal hypersurface in the unit sphere formula_26. 
If formula_12 has constant scalar curvature, then the possible values of the scalar curvature of formula_12 form a discrete set. If formula_12 has constant scalar curvature, then formula_12 is isoparametric. Denote by formula_10 the squared length of the second fundamental form of formula_12. Set formula_27, for formula_28. Then we have: for formula_29, if formula_30, then formula_31 or formula_32; and if formula_33, then formula_34. Or alternatively: Denote by formula_35 the squared length of the second fundamental form of formula_12. Set formula_27, for formula_28. Then we have: for formula_29, if formula_36, then formula_37 or formula_38; and if formula_39, then formula_40. One should pay attention to the so-called first and second pinching problems as special parts for Chern. Other related and still open problems. Besides the conjectures of Lu and Bryant, there are also others: In 1983, Chia-Kuei Peng and Chuu-Lian Terng proposed the following problem related to Chern's conjecture: Let formula_12 be a formula_9-dimensional closed minimal hypersurface in formula_41. Does there exist a positive constant formula_42 depending only on formula_9 such that if formula_43, then formula_44, i.e., formula_12 is one of the Clifford tori formula_45? In 2017, Li Lei, Hongwei Xu and Zhiyuan Xu proposed two Chern-related problems. The first one was inspired by Yau's conjecture on the first eigenvalue: Let formula_12 be an formula_9-dimensional compact minimal hypersurface in formula_26. Denote by formula_46 the first eigenvalue of the Laplace operator acting on functions over formula_12: The second is their own generalized Chern's conjecture for hypersurfaces with constant mean curvature: Let formula_12 be a closed hypersurface with constant mean curvature formula_49 in the unit sphere formula_26:
[ { "math_id": 0, "text": "M^n" }, { "math_id": 1, "text": "S^{n+m}" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "\\sigma > \\frac{n}{2-\\frac{1}{m}}" }, { "math_id": 4, "text": "\\mathbb{S}^{n+m}" }, { "math_id": 5, "text": "\\mathcal{A}_n" }, { "math_id": 6, "text": "S^N" }, { "math_id": 7, "text": "M \\subset \\mathbb{S}^{n+1}" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "S" }, { "math_id": 11, "text": "S^{n+1}" }, { "math_id": 12, "text": "M" }, { "math_id": 13, "text": "\\sigma + \\lambda_2" }, { "math_id": 14, "text": "\\mathbb{S}^{n+m} " }, { "math_id": 15, "text": "\\sigma + \\lambda_2 > n" }, { "math_id": 16, "text": "\\epsilon(n, m) > 0" }, { "math_id": 17, "text": "\\sigma + \\lambda_2 > n + \\epsilon(n, m)" }, { "math_id": 18, "text": "\\lambda_2" }, { "math_id": 19, "text": "S := (\\left \\langle A^\\alpha, B^\\beta \\right \\rangle)" }, { "math_id": 20, "text": "A^\\alpha" }, { "math_id": 21, "text": "\\alpha = 1, \\cdots, m" }, { "math_id": 22, "text": "{\\left \\Vert \\sigma \\right \\Vert}^2" }, { "math_id": 23, "text": "\\mathbb{S}^4" }, { "math_id": 24, "text": "g \\le 3" }, { "math_id": 25, "text": "M \\subset \\mathbb{S}^4" }, { "math_id": 26, "text": "\\mathbb{S}^{n+1}" }, { "math_id": 27, "text": "a_k = (k - \\operatorname{sgn}(5-k))n" }, { "math_id": 28, "text": "k \\in \\{ m \\in \\mathbb{Z}^+ ; 1 \\le m \\le 5 \\}" }, { "math_id": 29, "text": "k \\in \\{ m \\in \\mathbb{Z}^+ ; 1 \\le m \\le 4 \\}" }, { "math_id": 30, "text": "a_k \\le S \\le a_{k+1}" }, { "math_id": 31, "text": "S \\equiv a_k" }, { "math_id": 32, "text": "S \\equiv a_{k+1}" }, { "math_id": 33, "text": "S \\ge a_5" }, { "math_id": 34, "text": "S \\equiv a_5" }, { "math_id": 35, "text": "A" }, { "math_id": 36, "text": "a_k \\le {\\left \\vert A \\right \\vert}^2 \\le a_{k+1}" }, { "math_id": 37, "text": "{\\left \\vert A \\right \\vert}^2 \\equiv a_k" }, { "math_id": 38, "text": "{\\left \\vert A \\right \\vert}^2 \\equiv a_{k+1}" }, { "math_id": 39, "text": "{\\left \\vert A \\right \\vert}^2 \\ge a_5" }, { "math_id": 40, "text": "{\\left \\vert A \\right \\vert}^2 \\equiv a_5" }, { "math_id": 41, "text": "S^{n+1}, n \\ge 6" }, { "math_id": 42, "text": "\\delta(n)" }, { "math_id": 43, "text": "n \\le n + \\delta(n)" }, { "math_id": 44, "text": "S \\equiv n" }, { "math_id": 45, "text": "S^k\\left(\\sqrt{\\frac{k}{n}}\\right) \\times S^{n-k}\\left(\\sqrt{\\frac{n-k}{n}}\\right), k = 1, 2, \\ldots, n-1" }, { "math_id": 46, "text": "\\lambda_1(M)" }, { "math_id": 47, "text": "\\lambda_1(M) = n" }, { "math_id": 48, "text": "k \\in \\{ m \\in \\mathbb{Z}^+ ; 2 \\le m \\le 4 \\}" }, { "math_id": 49, "text": "H" }, { "math_id": 50, "text": "a \\le S \\le b" }, { "math_id": 51, "text": "a < b" }, { "math_id": 52, "text": "\\left [ a, b \\right ] \\cap I = \\left \\lbrace a, b \\right \\rbrace" }, { "math_id": 53, "text": "S \\equiv a" }, { "math_id": 54, "text": "S \\equiv b" }, { "math_id": 55, "text": "S \\le c" }, { "math_id": 56, "text": "c = \\sup_{t \\in I}{t}" }, { "math_id": 57, "text": "S \\equiv c" } ]
https://en.wikipedia.org/wiki?curid=60121551
60123
I-adic topology
Concept in commutative algebra In commutative algebra, the mathematical study of commutative rings, adic topologies are a family of topologies on the underlying set of a module, generalizing the p-adic topologies on the integers. Definition. Let R be a commutative ring and M an R-module. Then each ideal 𝔞 of R determines a topology on M called the 𝔞-adic topology, characterized by the pseudometric formula_0 The family formula_1 is a basis for this topology. An 𝔞-adic topology is a linear topology (a topology generated by some submodules). Properties. With respect to the topology, the module operations of addition and scalar multiplication are continuous, so that M becomes a topological module. However, M need not be Hausdorff; it is Hausdorff if and only if formula_2 so that d becomes a genuine metric. In keeping with the usual terminology in topology, where a Hausdorff space is also called separated, in that case the 𝔞-adic topology is called "separated". By Krull's intersection theorem, if R is a Noetherian ring which is an integral domain or a local ring, it holds that formula_3 for any proper ideal 𝔞 of R. Thus under these conditions, for any proper ideal 𝔞 of R and any R-module M, the 𝔞-adic topology on M is separated. For a submodule N of M, the canonical homomorphism to "M"/"N" induces a quotient topology which coincides with the 𝔞-adic topology. The analogous result is not necessarily true for the submodule N itself: the subspace topology need not be the 𝔞-adic topology. However, the two topologies coincide when R is Noetherian and M finitely generated. This follows from the Artin–Rees lemma. Completion. When M is Hausdorff, M can be completed as a metric space; the resulting space is denoted by formula_4 and has the module structure obtained by extending the module operations by continuity. It is also the same as (or canonically isomorphic to): formula_5 where the right-hand side is an inverse limit of quotient modules under natural projection. For example, let formula_6 be a polynomial ring over a field k and 𝔞 = ("x"1, ..., "x""n") the (unique) homogeneous maximal ideal. Then formula_7, the formal power series ring over k in n variables. Closed submodules. The 𝔞-adic closure of a submodule formula_8 is formula_9 This closure coincides with N whenever R is 𝔞-adically complete and M is finitely generated. R is called Zariski with respect to 𝔞 if every ideal in R is 𝔞-adically closed. There is a characterization: R is Zariski with respect to 𝔞 if and only if 𝔞 is contained in the Jacobson radical of R. In particular a Noetherian local ring is Zariski with respect to the maximal ideal.
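The defining pseudometric becomes very concrete in the special case R = M = Z with 𝔞 = ("p"), which recovers the p-adic topology mentioned in the lead. The sketch below is illustrative only; the choice p = 3 and the function name are assumptions.

```python
def adic_distance(x, y, p=3):
    """d(x, y) = 2^(-n), where n = sup{ k : x - y is divisible by p^k }.
    This is the pseudometric of the definition in the special case
    R = M = Z with ideal (p); d(x, y) = 0 exactly when x = y."""
    if x == y:
        return 0.0
    diff, n = abs(x - y), 0
    while diff % p == 0:
        diff //= p
        n += 1
    return 2.0 ** (-n)

# the powers of p converge to 0 in the (p)-adic topology
print([adic_distance(3**k, 0) for k in range(5)])   # 1, 1/2, 1/4, 1/8, 1/16
# 9 and 18 are (3)-adically closer to each other than 9 and 10 are
print(adic_distance(9, 18), adic_distance(9, 10))   # 0.25  1.0
```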
[ { "math_id": 0, "text": "d(x,y) = 2^{-\\sup{\\{n \\mid x-y\\in\\mathfrak{a}^nM\\}}}." }, { "math_id": 1, "text": "\\{x+\\mathfrak{a}^nM:x\\in M,n\\in\\mathbb{Z}^+\\}" }, { "math_id": 2, "text": "\\bigcap_{n > 0}{\\mathfrak{a}^nM} = 0\\text{,}" }, { "math_id": 3, "text": "\\bigcap_{n > 0}{\\mathfrak{a}^n} = 0" }, { "math_id": 4, "text": "\\widehat M" }, { "math_id": 5, "text": "\\widehat{M} = \\varprojlim M/\\mathfrak{a}^n M" }, { "math_id": 6, "text": "R = k[x_1, \\ldots, x_n]" }, { "math_id": 7, "text": "\\hat{R} = k[[x_1, \\ldots, x_n]]" }, { "math_id": 8, "text": "N \\subseteq M" }, { "math_id": 9, "text": "\\overline{N} = \\bigcap_{n > 0}{(N + \\mathfrak{a}^n M)}\\text{.}" } ]
https://en.wikipedia.org/wiki?curid=60123
60128175
Well-colored graph
In graph theory, a subfield of mathematics, a well-colored graph is an undirected graph for which greedy coloring uses the same number of colors regardless of the order in which colors are chosen for its vertices. That is, for these graphs, the chromatic number (minimum number of colors) and Grundy number (maximum number of greedily-chosen colors) are equal. Examples. The well-colored graphs include the complete graphs and odd-length cycle graphs (the graphs that form the exceptional cases to Brooks' theorem) as well as the complete bipartite graphs and complete multipartite graphs. The simplest example of a graph that is not well-colored is a four-vertex path. Coloring the vertices in path order uses two colors, the optimum for this graph. However, coloring the ends of the path first (using the same color for each end) causes the greedy coloring algorithm to use three colors for this graph. Because there exists a non-optimal vertex ordering, the path is not well-colored. Complexity. A graph is well-colored if and only if it does not have two vertex orderings for which the greedy coloring algorithm produces different numbers of colors. Therefore, recognizing non-well-colored graphs can be performed within the complexity class NP. On the other hand, a graph formula_0 has Grundy number formula_1 or more if and only if the graph obtained from formula_0 by adding a formula_2-vertex clique is well-colored. Therefore, by a reduction from the Grundy number problem, it is NP-complete to test whether these two orderings exist. It follows that it is co-NP-complete to test whether a given graph is well-colored. Related properties. A graph is hereditarily well-colored if every induced subgraph is well-colored. The hereditarily well-colored graphs are exactly the cographs, the graphs that do not have a four-vertex path as an induced subgraph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
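The four-vertex path example above can be checked directly with a short greedy-coloring sketch in Python. This is an illustrative sketch only; the adjacency-list representation and the function name are our own choices, not part of any standard library.

# Greedy coloring: each vertex receives the smallest color not already used
# by one of its colored neighbors. The 4-vertex path a-b-c-d shows why the
# path is not well-colored: the number of colors depends on the vertex order.
def greedy_coloring(adj, order):
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(max(greedy_coloring(path, ["a", "b", "c", "d"]).values()) + 1)  # 2 colors (optimal)
print(max(greedy_coloring(path, ["a", "d", "b", "c"]).values()) + 1)  # 3 colors (both ends colored first)

The second ordering colors the two endpoints first with the same color, which is exactly the non-optimal ordering described above.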
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "(k-1)" } ]
https://en.wikipedia.org/wiki?curid=60128175
6013248
Regular homotopy
In the mathematical field of topology, a regular homotopy refers to a special kind of homotopy between immersions of one manifold in another. The homotopy must be a 1-parameter family of immersions. Similar to homotopy classes, one defines two immersions to be in the same regular homotopy class if there exists a regular homotopy between them. Regular homotopy for immersions is similar to isotopy of embeddings: they are both restricted types of homotopies. Stated another way, two continuous functions formula_0 are homotopic if they represent points in the same path-component of the mapping space formula_1, given the compact-open topology. The space of immersions is the subspace of formula_1 consisting of immersions, denoted by formula_2. Two immersions formula_3 are regularly homotopic if they represent points in the same path-component of formula_4. Examples. Any two knots in 3-space are equivalent by regular homotopy, though not by isotopy. The Whitney–Graustein theorem classifies the regular homotopy classes of immersions of the circle into the plane; two immersions are regularly homotopic if and only if they have the same turning number – equivalently, the same total curvature – equivalently, if and only if their Gauss maps have the same degree/winding number. Stephen Smale classified the regular homotopy classes of a "k"-sphere immersed in formula_5 – they are classified by homotopy groups of Stiefel manifolds, which generalizes the Gauss map, here with the "k" partial derivatives not vanishing. More precisely, the set formula_6 of regular homotopy classes of immersions of the sphere formula_7 in formula_8 is in one-to-one correspondence with the elements of the group formula_9. In the case formula_10 we have formula_11. The group formula_12 is path connected, and formula_13; moreover, from the exact sequence formula_14 and the Bott periodicity theorem we have formula_15, and since formula_16, it follows that formula_17. Therefore all immersions of the spheres formula_18 and formula_19 in Euclidean spaces of one more dimension are regularly homotopic. In particular, spheres formula_20 embedded in formula_21 admit eversion if formula_22. A corollary of his work is that there is only one regular homotopy class of a "2"-sphere immersed in formula_23. In particular, this means that sphere eversions exist, i.e. one can turn the 2-sphere "inside-out". Both of these examples consist of reducing regular homotopy to homotopy; this has subsequently been substantially generalized in the homotopy principle (or "h"-principle) approach. Non-degenerate homotopy. For locally convex, closed space curves, one can also define non-degenerate homotopy. Here, the 1-parameter family of immersions must be non-degenerate (i.e. the curvature may never vanish). There are 2 distinct non-degenerate homotopy classes. Further restrictions of non-vanishing torsion lead to 4 distinct equivalence classes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
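The turning number appearing in the Whitney–Graustein theorem can be approximated numerically for a closed plane curve given by sample points. The following Python sketch is only an illustration; the function name, the sampling, and the two test curves are our own choices.

import math

# Approximate the turning number (the degree/winding number of the Gauss map)
# of a closed immersed plane curve from a list of sample points.
def turning_number(points):
    n = len(points)
    tangents = [(points[(i + 1) % n][0] - points[i][0],
                 points[(i + 1) % n][1] - points[i][1]) for i in range(n)]
    total = 0.0
    for i in range(n):
        x1, y1 = tangents[i]
        x2, y2 = tangents[(i + 1) % n]
        # signed angle from one discrete tangent to the next
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

ts = [2 * math.pi * k / 400 for k in range(400)]
circle = [(math.cos(t), math.sin(t)) for t in ts]
figure_eight = [(math.sin(2 * t), math.sin(t)) for t in ts]

print(turning_number(circle))        # 1: regularly homotopic to the standard circle
print(turning_number(figure_eight))  # 0: not regularly homotopic to the circle

By the Whitney–Graustein theorem, the two curves above lie in different regular homotopy classes because their turning numbers differ.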
[ { "math_id": 0, "text": "f,g : M \\to N" }, { "math_id": 1, "text": "C(M, N)" }, { "math_id": 2, "text": "\\operatorname{Imm}(M, N)" }, { "math_id": 3, "text": "f, g: M \\to N" }, { "math_id": 4, "text": "\\operatorname{Imm}(M,N)" }, { "math_id": 5, "text": "\\mathbb R^n" }, { "math_id": 6, "text": "I(n,k)" }, { "math_id": 7, "text": "S^k" }, { "math_id": 8, "text": "\\mathbb{R}^n" }, { "math_id": 9, "text": "\\pi_k\\left(V_k\\left(\\mathbb{R}^n\\right)\\right)" }, { "math_id": 10, "text": "k = n - 1" }, { "math_id": 11, "text": "V_{n-1}\\left(\\mathbb{R}^n\\right) \\cong SO(n)" }, { "math_id": 12, "text": "SO(1)" }, { "math_id": 13, "text": "\\pi_2(SO(3)) \\cong \\pi_2\\left(\\mathbb{R}P^3\\right) \\cong \\pi_2\\left(S^3\\right) \\cong 0" }, { "math_id": 14, "text": "\\pi_6(SO(6)) \\to \\pi_6(SO(7)) \\to \\pi_6\\left(S^6\\right) \\to \\pi_5(SO(6)) \\to \\pi_5(SO(7))" }, { "math_id": 15, "text": "\\pi_6(SO(6))\\cong \\pi_6(\\operatorname{Spin}(6))\\cong \\pi_6(SU(4))\\cong \\pi_6(U(4)) \\cong 0" }, { "math_id": 16, "text": "\\pi_5(SO(6)) \\cong \\mathbb{Z},\\ \\pi_5(SO(7)) \\cong 0" }, { "math_id": 17, "text": "\\pi_6(SO(7))\\cong 0" }, { "math_id": 18, "text": "S^0,\\ S^2" }, { "math_id": 19, "text": "S^6" }, { "math_id": 20, "text": "S^n" }, { "math_id": 21, "text": "\\mathbb{R}^{n+1}" }, { "math_id": 22, "text": "n = 0, 2, 6" }, { "math_id": 23, "text": "\\mathbb R^3" } ]
https://en.wikipedia.org/wiki?curid=6013248
6013654
Navier–Stokes existence and smoothness
Millennium Prize Problem The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations, a system of partial differential equations that describe the motion of a fluid in space. Solutions to the Navier–Stokes equations are used in many practical applications. However, theoretical understanding of the solutions to these equations is incomplete. In particular, solutions of the Navier–Stokes equations often include turbulence, which remains one of the greatest unsolved problems in physics, despite its immense importance in science and engineering. Even more basic (and seemingly intuitive) properties of the solutions to Navier–Stokes have never been proven. For the three-dimensional system of equations, and given some initial conditions, mathematicians have neither proved that smooth solutions always exist, nor found any counter-examples. This is called the "Navier–Stokes existence and smoothness" problem. Since understanding the Navier–Stokes equations is considered to be the first step to understanding the elusive phenomenon of turbulence, the Clay Mathematics Institute in May 2000 made this problem one of its seven Millennium Prize problems in mathematics. It offered a US$1,000,000 prize to the first person providing a solution for a specific statement of the problem: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"Prove or give a counter-example of the following statement:"&lt;br&gt; In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations. The Navier–Stokes equations. In mathematics, the Navier–Stokes equations are a system of nonlinear partial differential equations for abstract vector fields of any size. In physics and engineering, they are a system of equations that model the motion of liquids or non-rarefied gases (in which the mean free path is short enough so that it can be thought of as a continuum mean instead of a collection of particles) using continuum mechanics. The equations are a statement of Newton's second law, with the forces modeled according to those in a viscous Newtonian fluid—as the sum of contributions by pressure, viscous stress and an external body force. Since the setting of the problem proposed by the Clay Mathematics Institute is in three dimensions, for an incompressible and homogeneous fluid, only that case is considered below. Let formula_0 be a 3-dimensional vector field, the velocity of the fluid, and let formula_1 be the pressure of the fluid. The Navier–Stokes equations are: formula_2 where formula_3 is the kinematic viscosity, formula_4 the external volumetric force, formula_5 is the gradient operator and formula_6 is the Laplacian operator, which is also denoted by formula_7 or formula_8. Note that this is a vector equation, i.e. it has three scalar equations. Writing down the coordinates of the velocity and the external force formula_9 then for each formula_10 there is the corresponding scalar Navier–Stokes equation: formula_11 The unknowns are the velocity formula_0 and the pressure formula_1. Since in three dimensions, there are three equations and four unknowns (three scalar velocities and the pressure), then a supplementary equation is needed. 
This extra equation is the continuity equation for incompressible fluids that describes the conservation of mass of the fluid: formula_12 Due to this last property, the solutions for the Navier–Stokes equations are searched in the set of solenoidal ("divergence-free") functions. For this flow of a homogeneous medium, density and viscosity are constants. Since only its gradient appears, the pressure "p" can be eliminated by taking the curl of both sides of the Navier–Stokes equations. In this case the Navier–Stokes equations reduce to the vorticity-transport equations. The Navier–Stokes equations are nonlinear because the terms in the equations do not have a simple linear relationship with each other. This means that the equations cannot be solved using traditional linear techniques, and more advanced methods must be used instead. Nonlinearity is important in the Navier–Stokes equations because it allows the equations to describe a wide range of fluid dynamics phenomena, including the formation of shock waves and other complex flow patterns. However, the nonlinearity of the Navier–Stokes equations also makes them more difficult to solve, as traditional linear methods may not work. One way to understand the nonlinearity of the Navier–Stokes equations is to consider the term (v · ∇)v in the equations. This term represents the acceleration of the fluid, and it is a product of the velocity vector v and the gradient operator ∇. Because the gradient operator is a linear operator, the term (v · ∇)v is nonlinear in the velocity vector v. This means that the acceleration of the fluid depends on the magnitude and direction of the velocity, as well as the spatial distribution of the velocity within the fluid. The nonlinear nature of the Navier–Stokes equations can be seen in the term formula_13, which represents the acceleration of the fluid due to its own velocity. This term is nonlinear because it involves the product of two velocity vectors, and the resulting acceleration is therefore dependent on the magnitude and direction of both vectors. Another source of nonlinearity in the Navier–Stokes equations is the pressure term formula_14. The pressure in a fluid depends on the density and the gradient of the pressure, and this term is therefore nonlinear in the pressure. One example of the nonlinear nature of the Navier–Stokes equations can be seen in the case of a fluid flowing around a circular obstacle. In this case, the velocity of the fluid near the obstacle will be higher than the velocity of the fluid farther away from the obstacle. This results in a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. To see this more explicitly, consider the case of a circular obstacle of radius formula_15 placed in a uniform flow with velocity formula_16 and density formula_17. Let formula_18 be the velocity of the fluid at position formula_19 and time formula_20, and let formula_21 be the pressure at the same position and time. The Navier–Stokes equations in this case are: formula_22 formula_23 where formula_24 is the kinematic viscosity of the fluid. Assuming that the flow is steady (meaning that the velocity and pressure do not vary with time), we can set the time derivative terms equal to zero: formula_25 formula_23 We can now consider the flow near the circular obstacle. In this region, the velocity of the fluid will be higher than the uniform flow velocity formula_16 due to the presence of the obstacle. 
This results in a nonlinear term formula_26 in the Navier–Stokes equations that is proportional to the velocity of the fluid. At the same time, the presence of the obstacle will also result in a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. This can be seen by considering the continuity equation, which states that the mass flow rate through any surface must be constant. Since the velocity is higher near the obstacle, the mass flow rate through a surface near the obstacle will be higher than the mass flow rate through a surface farther away from the obstacle. This can be compensated for by a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. As a result of these nonlinear effects, the Navier–Stokes equations in this case become difficult to solve, and approximations or numerical methods must be used to find the velocity and pressure fields in the flow. Consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field formula_27 and a pressure field formula_28. We can use a finite element method to solve the Navier–Stokes equation for the velocity field: formula_29 To do this, we divide the domain into a series of smaller elements, and represent the velocity field as: formula_30 where formula_31 is the number of elements, and formula_32 are the shape functions associated with each element. Substituting this expression into the Navier–Stokes equation and applying the finite element method, we can derive a system of ordinary differential equations: formula_33 where formula_34 is the domain, and the integrals are over the domain. This system of ordinary differential equations can be solved using techniques such as the finite element method or spectral methods. Here, we will use the finite difference method. To do this, we can divide the time interval formula_35 into a series of smaller time steps, and approximate the derivative at each time step using a finite difference formula: formula_36 where formula_37 is the size of the time step, and formula_38 and formula_39 are the values of formula_38 and formula_20 at time step formula_40. Using this approximation, we can iterate through the time steps and compute the value of formula_38 at each time step. For example, starting at time step formula_40 and using the approximation above, we can compute the value of formula_38 at time step formula_41: formula_42 This process can be repeated until we reach the final time step formula_43. There are many other approaches to solving ordinary differential equations, each with its own advantages and disadvantages. The choice of approach depends on the specific equation being solved, and the desired accuracy and efficiency of the solution. Two settings: unbounded and periodic space. There are two different settings for the one-million-dollar-prize Navier–Stokes existence and smoothness problem. The original problem is in the whole space formula_44, which needs extra conditions on the growth behavior of the initial condition and the solutions. In order to rule out the problems at infinity, the Navier–Stokes equations can be set in a periodic framework, which implies that they are no longer working on the whole space formula_44 but in the 3-dimensional torus formula_45. Each case will be treated separately. Statement of the problem in the whole space. Hypotheses and growth conditions. 
The initial condition formula_46 is assumed to be a smooth and divergence-free function (see smooth function) such that, for every multi-index formula_47 (see multi-index notation) and any formula_48, there exists a constant formula_49 such that formula_50 for all formula_51 The external force formula_52 is assumed to be a smooth function as well, and satisfies a very analogous inequality (now the multi-index includes time derivatives as well): formula_53 for all formula_54 For physically reasonable conditions, the type of solutions expected are smooth functions that do not grow large as formula_55. More precisely, the following assumptions are made: Condition 1 implies that the functions are smooth and globally defined and condition 2 means that the kinetic energy of the solution is globally bounded. The Millennium Prize conjectures in the whole space. (A) Existence and smoothness of the Navier–Stokes solutions in formula_44 Let formula_60. For any initial condition formula_46 satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector formula_27 and a pressure formula_28 satisfying conditions 1 and 2 above. (B) Breakdown of the Navier–Stokes solutions in formula_44 There exists an initial condition formula_46 and an external force formula_52 such that there exists no solutions formula_27 and formula_28 satisfying conditions 1 and 2 above. The Millennium Prize conjectures are two mathematical problems that were chosen by the Clay Mathematics Institute as the most important unsolved problems in mathematics. The first conjecture, which is known as the "smoothness" conjecture, states that there should always exist smooth and globally defined solutions to the Navier–Stokes equations in three-dimensional space. The second conjecture, known as the "breakdown" conjecture, states that there should be at least one set of initial conditions and external forces for which there are no smooth solutions to the Navier–Stokes equations. The Navier–Stokes equations are a set of partial differential equations that describe the motion of fluids. They are given by: formula_61 formula_62 where formula_27 is the velocity field of the fluid, formula_28 is the pressure, formula_17 is the density, formula_24 is the kinematic viscosity, and formula_52 is an external force. The first equation is known as the momentum equation, and the second equation is known as the continuity equation. These equations are typically accompanied by boundary conditions, which describe the behavior of the fluid at the edges of the domain. For example, in the case of a fluid flowing through a pipe, the boundary conditions might specify that the velocity and pressure are fixed at the walls of the pipe. The Navier–Stokes equations are nonlinear and highly coupled, making them difficult to solve in general. In particular, the difficulty of solving these equations lies in the term formula_63, which represents the nonlinear advection of the velocity field by itself. This term makes the Navier–Stokes equations highly sensitive to initial conditions, and it is the main reason why the Millennium Prize conjectures are so challenging. In addition to the mathematical challenges of solving the Navier–Stokes equations, there are also many practical challenges in applying these equations to real-world situations. 
For example, the Navier–Stokes equations are often used to model fluid flows that are turbulent, which means that the fluid is highly chaotic and unpredictable. Turbulence is a difficult phenomenon to model and understand, and it adds another layer of complexity to the problem of solving the Navier–Stokes equations. To solve the Navier–Stokes equations, we need to find a velocity field formula_27 and a pressure field formula_28 that satisfy the equations and the given boundary conditions. This can be done using a variety of numerical techniques, such as finite element methods, spectral methods, or finite difference methods. For example, consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field formula_27 and a pressure field formula_28. The Navier–Stokes equations can be written as: formula_29 formula_64 formula_65 formula_66 where formula_17 is the density, formula_24 is the kinematic viscosity, and formula_67 is an external force. The boundary conditions might specify that the velocity is fixed at the walls of the domain, or that the pressure is fixed at certain points. The last identity occurs because the flow is solenoidal. To solve these equations numerically, we can divide the domain into a series of smaller elements, and solve the equations locally within each element. For example, using a finite element method, we might represent the velocity and pressure fields as: formula_30 formula_68 formula_69 where formula_31 is the number of elements, and formula_32 are the shape functions associated with each element. Substituting these expressions into the Navier–Stokes equations and applying the finite element method, we can derive a system of ordinary differential equations. Statement of the periodic problem. Hypotheses. The functions sought now are periodic in the space variables of period 1. More precisely, let formula_70 be the unit vector in the "i"-direction: formula_71 Then formula_27 is periodic in the space variables if, for any formula_10: formula_72 Notice that this is considering the coordinates mod 1. This allows working not on the whole space formula_44 but on the quotient space formula_73, which turns out to be the 3-dimensional torus: formula_74 Now the hypotheses can be stated properly. The initial condition formula_46 is assumed to be a smooth and divergence-free function, and the external force formula_52 is assumed to be a smooth function as well. The type of solutions that are physically relevant are those that satisfy these conditions: Just as in the previous case, condition 3 implies that the functions are smooth and globally defined and condition 4 means that the kinetic energy of the solution is globally bounded. The periodic Millennium Prize conjectures. (C) Existence and smoothness of the Navier–Stokes solutions in formula_75 Let formula_60. For any initial condition formula_46 satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector formula_27 and a pressure formula_28 satisfying conditions 3 and 4 above. (D) Breakdown of the Navier–Stokes solutions in formula_75 There exists an initial condition formula_46 and an external force formula_52 such that there exist no solutions formula_27 and formula_28 satisfying conditions 3 and 4 above. In popular culture. Unsolved problems have been used to indicate a rare mathematical talent in fiction. 
The Navier–Stokes problem features in "The Mathematician's Shiva" (2014), a book about a prestigious, deceased, fictional mathematician named Rachela Karnokovitch taking the proof to her grave in protest of academia. The movie "Gifted" (2017) referenced the Millennium Prize problems and dealt with the potential for a 7-year-old girl and her deceased mathematician mother for solving the Navier–Stokes problem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
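As a concrete, heavily simplified illustration of the explicit time stepping and of the nonlinear advection term discussed in the discretization sections above, the following Python sketch integrates the one-dimensional viscous Burgers equation du/dt + u du/dx = ν d²u/dx². This is not a solver for the three-dimensional Millennium problem; it only retains the nonlinear advection and viscous terms, and the grid size, viscosity, time step, and initial condition are arbitrary choices made for the example.

import numpy as np

# 1-D viscous Burgers equation on a periodic grid, advanced with the explicit
# forward-Euler update U_{n+1} = U_n + dt * (right-hand side). Central
# differences are used in space. All parameter values are assumed examples.
nu = 0.05                      # kinematic viscosity (assumed value)
N = 256                        # number of grid points
L = 2 * np.pi
dx = L / N
dt = 1e-4                      # small step, keeps the explicit scheme stable here
x = np.linspace(0.0, L, N, endpoint=False)
u = np.sin(x)                  # smooth, periodic initial velocity

for step in range(5000):       # integrate up to t = 0.5
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-u * dudx + nu * d2udx2)

print("max |u| at t = 0.5:", float(np.abs(u).max()))

Even in this one-dimensional analogue, the product of the velocity with its own derivative makes the update nonlinear, the same structural feature that makes the full equations hard to analyze.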
[ { "math_id": 0, "text": "\\mathbf{v}(\\boldsymbol{x},t)" }, { "math_id": 1, "text": "p(\\boldsymbol{x},t)" }, { "math_id": 2, "text": "\\frac{\\partial \\mathbf{v}}{\\partial t} + ( \\mathbf{v}\\cdot\\nabla ) \\mathbf{v} = -\\frac{1}{\\rho}\\nabla p + \\nu\\Delta \\mathbf{v} +\\mathbf{f}(\\boldsymbol{x},t)" }, { "math_id": 3, "text": "\\nu>0" }, { "math_id": 4, "text": "\\mathbf{f}(\\boldsymbol{x},t)" }, { "math_id": 5, "text": "\\nabla" }, { "math_id": 6, "text": "\\displaystyle \\Delta" }, { "math_id": 7, "text": "\\nabla\\cdot\\nabla" }, { "math_id": 8, "text": "\\nabla^2" }, { "math_id": 9, "text": "\\mathbf{v}(\\boldsymbol{x},t)=\\big(\\,v_1(\\boldsymbol{x},t),\\,v_2(\\boldsymbol{x},t),\\,v_3(\\boldsymbol{x},t)\\,\\big)\\,,\\qquad \\mathbf{f}(\\boldsymbol{x},t)=\\big(\\,f_1(\\boldsymbol{x},t),\\,f_2(\\boldsymbol{x},t),\\,f_3(\\boldsymbol{x},t)\\,\\big)" }, { "math_id": 10, "text": "i=1,2,3" }, { "math_id": 11, "text": "\\frac{\\partial v_i}{\\partial t} +\\sum_{j=1}^{3}\\frac{\\partial v_i}{\\partial x_j}v_j= -\\frac{1}{\\rho}\\frac{\\partial p}{\\partial x_i} + \\nu\\sum_{j=1}^{3}\\frac{\\partial^2 v_i}{\\partial x_j^2} +f_i(\\boldsymbol{x},t)." }, { "math_id": 12, "text": " \\nabla\\cdot \\mathbf{v} = 0." }, { "math_id": 13, "text": "(\\mathbf{v}\\cdot\\nabla ) \\mathbf{v}" }, { "math_id": 14, "text": "-\\frac{1}{\\rho}\\nabla p" }, { "math_id": 15, "text": "R" }, { "math_id": 16, "text": "\\mathbf{v_0}" }, { "math_id": 17, "text": "\\rho" }, { "math_id": 18, "text": "\\mathbf{v}(\\mathbf{x},t)" }, { "math_id": 19, "text": "\\mathbf{x}" }, { "math_id": 20, "text": "t" }, { "math_id": 21, "text": "p(\\mathbf{x},t)" }, { "math_id": 22, "text": "\\frac{\\partial \\mathbf{v}}{\\partial t} + ( \\mathbf{v}\\cdot\\nabla ) \\mathbf{v} = -\\frac{1}{\\rho}\\nabla p + \\nu\\Delta \\mathbf{v}" }, { "math_id": 23, "text": " \\nabla\\cdot \\mathbf{v} = 0" }, { "math_id": 24, "text": "\\nu" }, { "math_id": 25, "text": " ( \\mathbf{v}\\cdot\\nabla ) \\mathbf{v} = -\\frac{1}{\\rho}\\nabla p + \\nu\\Delta \\mathbf{v}" }, { "math_id": 26, "text": "( \\mathbf{v}\\cdot\\nabla ) \\mathbf{v}" }, { "math_id": 27, "text": "\\mathbf{v}(x,t)" }, { "math_id": 28, "text": "p(x,t)" }, { "math_id": 29, "text": "\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y} = -\\frac{1}{\\rho} \\frac{\\partial p}{\\partial x} + \\nu \\left( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\right) + f_x(x,y,t)" }, { "math_id": 30, "text": "u(x,y,t) = \\sum_{i=1}^N U_i(t) \\phi_i(x,y)" }, { "math_id": 31, "text": "N" }, { "math_id": 32, "text": "\\phi_i(x,y)" }, { "math_id": 33, "text": "\\frac{d U_i}{d t} = -\\frac{1}{\\rho} \\sum_{j=1}^N \\left( \\frac{\\partial p}{\\partial x} \\right) j\\int{\\Omega} \\phi_j \\frac{\\partial \\phi_i}{\\partial x} d\\Omega + \\nu \\sum_{j=1}^N \\int_{\\Omega} \\left( \\frac{\\partial^2 u}{\\partial x^2} \\right)\\phi_j \\frac{\\partial^2 \\phi_i}{\\partial x^2} d\\Omega + \\int{\\Omega} f_x \\phi_i d\\Omega" }, { "math_id": 34, "text": "\\Omega" }, { "math_id": 35, "text": "[t_0, t_f]" }, { "math_id": 36, "text": "\\frac{U_{i+1} - U_i}{\\Delta t} \\approx -\\frac{1}{\\rho} \\sum_{j=1}^N \\left( \\frac{\\partial p}{\\partial x} \\right)j \\int{\\Omega} \\phi_j \\frac{\\partial \\phi_i}{\\partial x} d\\Omega + \\nu \\sum_{j=1}^N \\int_{\\Omega} \\left( \\frac{\\partial^2 u}{\\partial x^2} \\right)j \\phi_j \\frac{\\partial^2 \\phi_i}{\\partial x^2} d\\Omega + \\int{\\Omega} f_x \\phi_i d\\Omega" }, { 
"math_id": 37, "text": "\\Delta t = t_{i+1} - t_i" }, { "math_id": 38, "text": "U_i" }, { "math_id": 39, "text": "t_i" }, { "math_id": 40, "text": "i" }, { "math_id": 41, "text": "i+1" }, { "math_id": 42, "text": "U_{i+1} = U_i + \\Delta t \\cdot \\left(-\\frac{1}{\\rho} \\sum_{j=1}^N \\left( \\frac{\\partial p}{\\partial x} \\right)j \\int{\\Omega} \\phi_j \\frac{\\partial \\phi_i}{\\partial x} d\\Omega + \\nu \\sum_{j=1}^N \\int_{\\Omega} \\left( \\frac{\\partial^2 u}{\\partial x^2} \\right)_j \\phi_j \\frac{\\partial^2 \\phi_i}{\\partial x^2} d\\Omega + \\int_{\\Omega} f_x \\phi_i d\\Omega \\right)" }, { "math_id": 43, "text": "t_f" }, { "math_id": 44, "text": "\\mathbb{R}^3" }, { "math_id": 45, "text": "\\mathbb{T}^3=\\mathbb{R}^3/\\mathbb{Z}^3" }, { "math_id": 46, "text": "\\mathbf{v}_0(x)" }, { "math_id": 47, "text": "\\alpha" }, { "math_id": 48, "text": "K>0" }, { "math_id": 49, "text": "C=C(\\alpha,K)>0" }, { "math_id": 50, "text": "\\vert \\partial^\\alpha \\mathbf{v_0}(x)\\vert\\le \\frac{C}{(1+\\vert x\\vert)^K}\\qquad" }, { "math_id": 51, "text": "\\qquad x\\in\\mathbb{R}^3." }, { "math_id": 52, "text": "\\mathbf{f}(x,t)" }, { "math_id": 53, "text": "\\vert \\partial^\\alpha \\mathbf{f}(x,t)\\vert\\le \\frac{C}{(1+\\vert x\\vert + t)^K}\\qquad" }, { "math_id": 54, "text": "\\qquad (x,t)\\in\\mathbb{R}^3\\times[0,\\infty)." }, { "math_id": 55, "text": "\\vert x\\vert\\to\\infty" }, { "math_id": 56, "text": "\\mathbf{v}(x,t)\\in C^\\infty(\\mathbb{R}^3\\times[0,\\infty)),\\qquad p(x,t)\\in C^\\infty(\\mathbb{R}^3\\times[0,\\infty))" }, { "math_id": 57, "text": "E\\in (0,\\infty)" }, { "math_id": 58, "text": "\\int_{\\mathbb{R}^3} \\vert \\mathbf{v}(x,t)\\vert^2 \\, dx <E" }, { "math_id": 59, "text": "t\\ge 0\\,." }, { "math_id": 60, "text": "\\mathbf{f}(x,t)\\equiv 0" }, { "math_id": 61, "text": "\\frac{\\partial \\mathbf{v}}{\\partial t} + (\\mathbf{v} \\cdot \\nabla) \\mathbf{v} = -\\frac{1}{\\rho} \\nabla p + \\nu \\nabla^2 \\mathbf{v} + \\mathbf{f}" }, { "math_id": 62, "text": "\\nabla \\cdot \\mathbf{v} = 0" }, { "math_id": 63, "text": "(\\mathbf{v} \\cdot \\nabla) \\mathbf{v}" }, { "math_id": 64, "text": "\\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} = -\\frac{1}{\\rho} \\frac{\\partial p}{\\partial y} + \\nu \\left( \\frac{\\partial^2 v}{\\partial x^2} + \\frac{\\partial^2 v}{\\partial y^2} \\right) + f_y(x,y,t)" }, { "math_id": 65, "text": "\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = 0" }, { "math_id": 66, "text": "\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y} = 0" }, { "math_id": 67, "text": "\\mathbf{f}(x,y,t) = (f_x(x,y,t),f_y(x,y,t))" }, { "math_id": 68, "text": "v(x,y,t) = \\sum_{i=1}^N V_i(t) \\phi_i(x,y)" }, { "math_id": 69, "text": "p(x,y,t) = \\sum_{i=1}^N P_i(t) \\phi_i(x,y)" }, { "math_id": 70, "text": "e_i" }, { "math_id": 71, "text": "e_1=(1,0,0)\\,,\\qquad e_2=(0,1,0)\\,,\\qquad e_3=(0,0,1)" }, { "math_id": 72, "text": "\\mathbf{v}(x+e_i,t)=\\mathbf{v}(x,t)\\text{ for all } (x,t) \\in \\mathbb{R}^3\\times[0,\\infty)." }, { "math_id": 73, "text": "\\mathbb{R}^3/\\mathbb{Z}^3" }, { "math_id": 74, "text": "\\mathbb{T}^3=\\{(\\theta_1,\\theta_2,\\theta_3): 0\\le \\theta_i<2\\pi\\,,\\quad i=1,2,3\\}." }, { "math_id": 75, "text": "\\mathbb{T}^3" }, { "math_id": 76, "text": "\\mathbb{R}^3\\times(0,T)" } ]
https://en.wikipedia.org/wiki?curid=6013654
60141466
Q-value (statistics)
Statistical hypothesis testing measure In statistical hypothesis testing, specifically multiple hypothesis testing, the "q"-value in the Storey procedure provides a means to control the positive false discovery rate (pFDR). Just as the "p"-value gives the expected false positive rate obtained by rejecting the null hypothesis for any result with an equal or smaller "p"-value, the "q"-value gives the expected pFDR obtained by rejecting the null hypothesis for any result with an equal or smaller "q"-value. History. In statistics, testing multiple hypotheses simultaneously using methods appropriate for testing single hypotheses tends to yield many false positives: the so-called multiple comparisons problem. For example, assume that one were to test 1,000 null hypotheses, all of which are true, and (as is conventional in single hypothesis testing) to reject null hypotheses with a significance level of 0.05; due to random chance, one would expect 5% of the results to appear significant ("P" &lt; 0.05), yielding 50 false positives (rejections of the null hypothesis). Since the 1950s, statisticians had been developing methods for multiple comparisons that reduced the number of false positives, such as controlling the family-wise error rate (FWER) using the Bonferroni correction, but these methods also increased the number of false negatives (i.e. reduced the statistical power). In 1995, Yoav Benjamini and Yosef Hochberg proposed controlling the false discovery rate (FDR) as a more statistically powerful alternative to controlling the FWER in multiple hypothesis testing. The pFDR and the "q-"value were introduced by John D. Storey in 2002 in order to improve upon a limitation of the FDR, namely that the FDR is not defined when there are no positive results. Definition. Let there be a null hypothesis formula_0 and an alternative hypothesis formula_1. Perform formula_2 hypothesis tests; let the test statistics be i.i.d. random variables formula_3 such that formula_4. That is, if formula_0 is true for test formula_5 (formula_6), then formula_7 follows the null distribution formula_8; while if formula_1 is true (formula_9), then formula_7 follows the alternative distribution formula_10. Let formula_11, that is, for each test, formula_1 is true with probability formula_12 and formula_0 is true with probability formula_13. Denote the critical region (the values of formula_7 for which formula_0 is rejected) at significance level formula_14 by formula_15. Let an experiment yield a value formula_16 for the test statistic. The "q"-value of formula_16 is formally defined as formula_17 That is, the "q"-value is the infimum of the pFDR if formula_0is rejected for test statistics with values formula_18. Equivalently, the "q"-value equals formula_19 which is the infimum of the probability that formula_0 is true given that formula_0 is rejected (the false discovery rate). Relationship to the "p"-value. The "p"-value is defined as formula_20 the infimum of the probability that formula_0 is rejected given that formula_0 is true (the false positive rate). Comparing the definitions of the "p"- and "q"-values, it can be seen that the "q"-value is the minimum posterior probability that formula_0 is true. Interpretation. The "q"-value can be interpreted as the false discovery rate (FDR): the proportion of false positives among all positive results. 
Given a set of test statistics and their associated "q"-values, rejecting the null hypothesis for all tests whose "q"-value is less than or equal to some threshold formula_14 ensures that the expected value of the false discovery rate is formula_14. Applications. Biology. Gene expression. Genome-wide analyses of differential gene expression involve simultaneously testing the expression of thousands of genes. Controlling the FWER (usually to 0.05) avoids excessive false positives (i.e. detecting differential expression in a gene that is not differentially expressed) but imposes a strict threshold for the "p"-value that results in many false negatives (many differentially expressed genes are overlooked). However, controlling the pFDR by selecting genes with significant "q"-values lowers the number of false negatives (increases the statistical power) while ensuring that the expected value of the proportion of false positives among all positive results is low (e.g. 5%). For example, suppose that among 10,000 genes tested, 1,000 are actually differentially expressed and 9,000 are not: Implementations. Note: the following is an incomplete list. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
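The thresholding rule described above (reject every hypothesis whose q-value is at most a chosen level) can be illustrated with a small sketch. Under the conservative assumption π0 = 1, Storey's q-values reduce to the Benjamini–Hochberg step-up adjustment computed below; the actual Storey procedure would additionally estimate π0 from the p-value distribution and scale these values down. The function name and the example p-values are our own.

# Sketch: q-value-like adjusted values from a list of p-values, assuming pi0 = 1.
def q_values(p_values):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        q[i] = running_min
    return q

p = [0.0001, 0.0004, 0.0019, 0.03, 0.07, 0.3, 0.45, 0.6, 0.85, 0.9]
for pv, qv in zip(p, q_values(p)):
    print(f"p = {pv:<7} q = {qv:.4f}")

# Rejecting every hypothesis with q <= 0.05 keeps the expected proportion of
# false discoveries among the rejections at roughly 5%.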
[ { "math_id": 0, "text": "H_0" }, { "math_id": 1, "text": "H_1" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "T_1, \\ldots, T_m" }, { "math_id": 4, "text": "T_i \\mid D_i \\sim (1 - D_i) \\cdot F_0 + D_i \\cdot F_1" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "D_i = 0" }, { "math_id": 7, "text": "T_i" }, { "math_id": 8, "text": "F_0" }, { "math_id": 9, "text": "D_i = 1" }, { "math_id": 10, "text": "F_1" }, { "math_id": 11, "text": "D_i \\sim \\operatorname{Bernoulli}(\\pi_1)" }, { "math_id": 12, "text": "\\pi_1" }, { "math_id": 13, "text": "\\pi_0 = 1 - \\pi_1" }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": "\\Gamma_\\alpha" }, { "math_id": 16, "text": "t" }, { "math_id": 17, "text": "\\inf_{\\{\\Gamma_\\alpha : t \\in \\Gamma_\\alpha\\}} \\operatorname{pFDR}(\\Gamma_\\alpha)" }, { "math_id": 18, "text": "\\ge t" }, { "math_id": 19, "text": "\\inf_{\\{\\Gamma_\\alpha : t \\in \\Gamma_\\alpha\\}}\\Pr(H = 0 \\mid T \\in \\Gamma_\\alpha)" }, { "math_id": 20, "text": "\\inf_{\\{\\Gamma_\\alpha : t \\in \\Gamma_\\alpha\\}} \\Pr(T \\in \\Gamma_\\alpha \\mid D = 0)" } ]
https://en.wikipedia.org/wiki?curid=60141466
6014225
Pressure drop
Difference in pressure between two points of a fluid Pressure drop (often abbreviated as "dP" or "ΔP") is defined as the difference in total pressure between two points of a fluid carrying network. A pressure drop occurs when frictional forces, caused by the resistance to flow, act on a fluid as it flows through a conduit (such as a channel, pipe, or tube). This friction converts some of the fluid’s hydraulic energy to thermal energy (i.e., internal energy). Since the thermal energy cannot be converted back to hydraulic energy, the fluid experiences a drop in pressure, as is required by conservation of energy. The main determinants of resistance to fluid flow are fluid velocity through the pipe and fluid viscosity. Pressure drop increases proportionally to the frictional shear forces within the piping network. A piping network containing a high relative roughness rating as well as many pipe fittings and joints, tube convergence, divergence, turns, surface roughness, and other physical properties will affect the pressure drop. High flow velocities or high fluid viscosities result in a larger pressure drop across a pipe section, valve, or elbow joint. Low velocity will result in less (or no) pressure drop. The fluid may also be biphasic as in pneumatic conveying with a gas and a solid; in this case, the friction of the solid must also be taken into consideration for calculating the pressure drop. Applications. Fluid in a system will always flow from a region of higher pressure to a region of lower pressure, assuming it has a path to do so. All things being equal, a higher pressure drop will lead to a higher flow (except in cases of choked flow). The pressure drop of a given system will determine the amount of energy needed to convey fluid through that system. For example, a larger pump could be required to move a set amount of water through smaller-diameter pipes (with higher velocity and thus higher pressure drop) as compared to a system with larger-diameter pipes (with lower velocity and thus lower pressure drop). Calculation of pressure drop. Pressure drop is related inversely to pipe diameter to the fifth power. For example, halving a pipe's diameter would increase the pressure drop by a factor of formula_0 (e.g. from 2 psi to 64 psi), assuming no change in flow. Pressure drop in piping is directly proportional to the length of the piping—for example, a pipe with twice the length will have twice the pressure drop, given the same flow rate. Piping fittings (such as elbow and tee joints) generally lead to greater pressure drop than straight pipe. As such, a number of correlations have been developed to calculate equivalent length of fittings. Certain valves are provided with an associated flow coefficient, commonly known as "C"v or "K"v. The flow coefficient relates pressure drop, flow rate, and specific gravity for a given valve. Many empirical calculations exist for calculation of pressure drop, including: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
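One widely used empirical relation for straight pipe runs is the Darcy–Weisbach equation, ΔP = f · (L/D) · (ρ v²/2). The Python sketch below uses it to reproduce the diameter and length scalings described above; holding the friction factor constant is a simplification (in practice it depends on the Reynolds number and relative roughness), and all numerical values are assumed examples rather than data from any particular system.

import math

# Darcy-Weisbach pressure drop for a straight pipe, with a constant friction
# factor for simplicity. All numbers are assumed example values.
def pressure_drop(flow_rate, diameter, length, density=1000.0, friction=0.02):
    """dP = f * (L/D) * rho * v**2 / 2, with v = Q / A."""
    area = math.pi * diameter**2 / 4.0
    velocity = flow_rate / area
    return friction * (length / diameter) * density * velocity**2 / 2.0

q = 0.005  # m^3/s of water
print(pressure_drop(q, diameter=0.05, length=10.0))    # baseline, in Pa
print(pressure_drop(q, diameter=0.025, length=10.0))   # halve D: about 2**5 = 32 times larger
print(pressure_drop(q, diameter=0.05, length=20.0))    # double L: 2 times larger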
[ { "math_id": 0, "text": "2^5=32" } ]
https://en.wikipedia.org/wiki?curid=6014225
601423
National Assembly (South Korea)
Legislature of South Korea The National Assembly of the Republic of Korea, often shortened to the National Assembly, is the unicameral national legislature of South Korea. Elections to the National Assembly are held every four years. The latest legislative election was held on 10 April 2024. The current National Assembly held its first meeting, and also began its current four-year term, on 30 May 2024. The new Speaker was elected on 5 June 2024. The National Assembly has 300 seats, with 253 constituency seats and 47 proportional representation seats; 30 of the PR seats are assigned by an additional member system, while 17 PR seats use the parallel voting method. The unicameral assembly consists of at least 200 members according to the South Korean constitution. In 1990 the assembly had 299 seats, 224 of which were directly elected from single-member districts in the general elections of April 1988. Under applicable laws, the remaining seventy-five representatives were elected from party lists. By law, candidates for election to the assembly must be at least thirty years of age. As part of a political compromise in 1987, an earlier requirement that candidates have at least five years' continuous residency in the country was dropped to allow Kim Dae-jung, who had spent several years in exile in Japan and the United States during the 1980s, to return to political life. The National Assembly's term is four years. In a change from the more authoritarian Fourth Republic and Fifth Republic (1972–81 and 1981–87, respectively), under the Sixth Republic, the assembly cannot be dissolved by the president. Building. The main building in Yeouido, Seoul, is a stone structure with seven stories above ground and one story below ground. The building has 24 columns, which symbolize the legislature's promise to listen to the people 24/7 throughout the year. Structure and appointment. Speaker. The constitution stipulates that the assembly is presided over by a Speaker and two Deputy Speakers, who are responsible for expediting the legislative process. The Speaker and Deputy Speakers are elected in a secret ballot by the members of the Assembly, and their term in office is restricted to two years. The Speaker is independent of party affiliation, and the Speaker and Deputy Speakers may not simultaneously be government ministers. Negotiation groups. Parties that hold at least 20 seats in the assembly form floor negotiation groups (Hanja: 交涉團體), which are entitled to a variety of rights that are denied to smaller parties. These include a greater amount of state funding and participation in the leaders' summits that determine the assembly's legislative agenda. In order to meet this 20-seat threshold, the United Liberal Democrats, who then held 17 seats, arranged to "rent" three legislators from the Millennium Democratic Party. The legislators returned to the MDP after the collapse of the ULD-MDP coalition in September 2001. Legislative process. For a legislator to introduce a bill, they must submit the proposal to the Speaker, accompanied by the signatures of at least ten other assembly members. A committee must then review the bill to verify that it employs precise and orderly language. Following this, the Assembly may either approve or reject the bill. Committees. There are 17 standing committees which examine bills and petitions falling under their respective jurisdictions, and perform other duties as prescribed by relevant laws. Election. 
The National Assembly has 300 seats, with 254 constituency seats under FPTP and 46 proportional representation seats. With the electoral reform of 2019, the method for apportioning the PR seats was changed from the previous parallel voting system to a variation of the additional member system. However, 17 seats were temporarily assigned under parallel voting in the 2020 South Korean legislative election. Per Article 189 of the Public Official Election Act, the PR seats are awarded to parties that have either obtained at least 3% of the total valid votes in the legislative election or won at least five constituency seats. The number of seats allocated to each eligible party is decided by the formula: formula_0 If the integer is less than 1, then "n"initial is set to 0 and the party does not get any seats. Then the sum of initially allocated seats is compared to the total seats for the additional member system and recalculated. formula_1 formula_2 Final seats are assigned through the largest remainder method, and if the remainders are equal, the winner is determined by lottery among the relevant political parties. The voting age was also lowered from 19 to 18 years old, expanding the electorate by over half a million voters. Legislative violence. From 2004 to 2009, the assembly gained notoriety as a frequent site for legislative violence. The Assembly first came to the world's attention during a violent dispute over impeachment proceedings against then-President Roh Moo-hyun, when open physical combat took place in the assembly. Since then, it has been interrupted by periodic conflagrations, piquing the world's curiosity once again in 2009 when members battled each other with sledgehammers and fire extinguishers. The National Assembly has since taken preventive measures against any further legislative violence. History. First Republic. Elections for the assembly were held under UN supervision on 10 May 1948. The First Republic of Korea was established on 17 July 1948, when the constitution of the First Republic was adopted by the Assembly. The Assembly also had the job of electing the president, and it elected the anti-communist Syngman Rhee as president in July 1948. Under the first constitution, the National Assembly was unicameral. Under the second and third constitutions, the National Assembly was to be bicameral and consist of the House of Representatives and the House of Councillors, but in practice, the legislature was unicameral because the House of Representatives was prevented from passing the law necessary to establish the House of Councillors. Third Republic. Since the reopening of the National Assembly in 1963, it has been unicameral. Sixth Republic. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
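The initial-allocation step given by the formula above can be written as a short Python sketch. The party names, vote shares, constituency counts, and the number of ineligible seats below are invented for illustration; tie-breaking, the largest-remainder step, and the two re-scaling branches of the statute are omitted, so this is not an implementation of the actual legal procedure.

import math

ASSEMBLY_SEATS = 300

# n_initial = floor(((assembly seats - ineligible seats) * PR vote share
#                    - constituency seats won + 1) / 2)
def initial_allocation(parties, ineligible_seats):
    """parties: {name: (pr_vote_share, constituency_seats_won)}"""
    alloc = {}
    for name, (share, constituencies) in parties.items():
        n = math.floor(((ASSEMBLY_SEATS - ineligible_seats) * share
                        - constituencies + 1) / 2)
        alloc[name] = max(n, 0)  # below 1, the party gets no compensatory seats
    return alloc

example = {
    "Party A": (0.40, 110),  # hypothetical: 40% of PR votes, 110 district seats
    "Party B": (0.35, 100),
    "Party C": (0.10, 5),
}
print(initial_allocation(example, ineligible_seats=20))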
[ { "math_id": 0, "text": "n_\\text{initial} = \\left\\lfloor \\frac{(n_{\\text{Assembly}} - n_{\\text{ineligibles}}) \\times \\text{PR votes ratio} - n_{\\text{obtained constituencies}} + 1}{2}\\right\\rfloor" }, { "math_id": 1, "text": "n_\\text{remainder} = \\left(n_\\text{ams}-\\sum n_\\text{initial}\\right) \\times \\text{PR votes ratio}" }, { "math_id": 2, "text": "n_\\text{final} =\n\\begin{cases}\nn_\\text{initial}+n_\\text{remainder}, & \\text{if }\\sum n_\\text{initial} < n_\\text{ams} \\\\\nn_\\text{ams} \\times \\dfrac{n_\\text{initial}}{\\sum n_\\text{initial}}, & \\text{if }\\sum n_\\text{initial} > n_\\text{ams}\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=601423
6014991
Prolonged sine
The law of the prolonged sine was observed when measuring the strength of the reaction of plant stems and roots in response to turning from their usual vertical orientation. Such organisms maintain their usual vertical growth and, if turned, start bending back toward the vertical. The prolonged sine law was observed when measuring the dependence of the bending speed on the angle of reorientation. The observed law. It was observed that deviation from the desired growth direction by more than 90 degrees causes a further increase of the bending speed. After being turned by 135 degrees, the reoriented plant or fungus "understands" that it has been placed head down and bends faster than if turned by just 45 degrees. Poul Larsen in 1962 proposed that the intensity of the gravitropic reaction (bending rate) is proportional to formula_0 where α is the angle of reorientation, g is the gravity vector, and the constants a and b are determined experimentally. Significance. According to the popular hypothesis about the mechanism of plant spatial orientation, the bending from the horizontal position is caused by small heavy particles that, after turning, put pressure on the side wall of the cell (statocyte), irritating some internal system and activating the bending process. The pressure of such a particle on the cell wall would be proportional to the sine of the reorientation angle, being maximal at a 90° reorientation. It would be equal for reorientations by both 45° and 135°. The prolonged sine law indicates that there are very significant deviations from such a predicted reaction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
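The difference between the plain-sine prediction and Larsen's prolonged sine can be seen numerically with a short sketch. The values of g, a and b below are arbitrary illustrative choices, not constants fitted to any experiment.

import math

g, a, b = 1.0, 1.0, 0.8

# plain sine model: reaction proportional to g * sin(alpha)
def plain_sine(alpha_deg):
    return g * math.sin(math.radians(alpha_deg))

# Larsen's prolonged sine: reaction proportional to g * a*sin(alpha) * (1 - b*cos(alpha))
def prolonged_sine(alpha_deg):
    alpha = math.radians(alpha_deg)
    return g * a * math.sin(alpha) * (1 - b * math.cos(alpha))

for angle in (45, 90, 135):
    print(angle, round(plain_sine(angle), 3), round(prolonged_sine(angle), 3))

The plain sine predicts equal reactions at 45° and 135°, while the prolonged sine predicts a markedly stronger reaction at 135°, matching the observation that a plant turned "head down" bends back faster.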
[ { "math_id": 0, "text": "g \\times a\\sin\\alpha \\times (1-b\\cos\\alpha)" } ]
https://en.wikipedia.org/wiki?curid=6014991
60151482
Metric interval temporal logic
Fragment of metric temporal logic In model checking, the Metric Interval Temporal Logic (MITL) is a fragment of Metric Temporal Logic (MTL). This fragment is often preferred to MTL because some problems that are undecidable for MTL become decidable for MITL. Definition. An MITL formula is an MTL formula in which each set of reals used as a subscript is an interval that is not a singleton and whose bounds are either natural numbers or infinite. Difference from MTL. MTL can express a statement such as the sentence S: "P held exactly ten time units ago". This is impossible in MITL. Instead, MITL can say T: "P held between 9 and 10 time units ago". Since MITL can express T but not S, in a sense, MITL is a restriction of MTL which allows only less precise statements. Problems that MITL avoids. One reason to want to avoid a statement such as S is that its truth value may change an arbitrary number of times in a single time unit. Indeed, the truth value of this statement may change as many times as the truth value of P changes, and P itself may change an arbitrary number of times in a single time unit. Let us now consider a system, such as a timed automaton or a signal automaton, which wants to know at each instant whether S holds or not. This system should recall everything that occurred in the last 10 time units. As seen above, this means that it must recall an arbitrarily large number of events. This cannot be implemented by a system with finite memory and clocks. Bounded variability. One of the main advantages of MITL is that each operator has the bounded variability property. Example: consider the statement T defined above. Each time the truth value of T switches from false to true, it remains true for at least one time unit. Proof: At a time t where T becomes true, P held at some instant between 10 and 9 time units before t. If P had held at an instant more than 9 but at most 10 time units before t, then T would already have been true shortly before t, contradicting the assumption that it becomes true at t. Hence, P was true exactly 9 time units ago. It follows that, for each formula_0, at time formula_1, P was true formula_2 time units ago. Since formula_3, at time formula_1, T holds. Consider a system that, at each instant, wants to know the value of T. Such a system must recall what occurred during the last ten time units. However, thanks to the bounded variability property, it must recall at most 10 instants at which T became true, and hence at most 11 instants at which T became false. Thus this system must recall at most 21 events, and hence can be implemented as a timed automaton or a signal automaton. Examples. Examples of MITL formulas: Fragments. Safety-MTL0,∞. The fragment Safety-MTL0,∞ is defined as the subset of MITL0,∞ containing only formulas in positive normal form where the interval of every until operator has a finite upper bound. For example, the formula formula_11, which states that each formula_12 is followed, within one time unit, by a formula_13, belongs to this logic. Open and closed MITL. The fragment Open-MITL contains the formulas in positive normal form such that: The fragment "Closed-MITL" contains the negations of formulas of "Open-MITL". Flat and Coflat MITL. The fragment Flat-MITL contains the formulas in positive normal form such that: The fragment Coflat-MITL contains the negations of formulas of "Flat-MITL". Non-strict variant. Given any fragment "L", the fragment "Lns" is the restriction of "L" in which only non-strict operators are used. MITL0,∞ and MITL0. Given any fragment "L", the fragment "L0,∞" is the subset of "L" where the lower bound of each interval is 0 or the upper bound is infinity. 
Similarly we denote by "L0" (respectively, "L∞") the subset of "L" such that the lower bound of each interval is 0 (respectively, the upper bound of each interval is ∞). Expressiveness over signals. Over signals, MITL0 is as expressive as MITL. This can be proven by applying the following rewriting rules to an MITL formula. Applying those rewriting rules exponentially increases the size of the formula. Indeed, the numbers formula_12 and formula_13 are traditionally written in binary, and those rules must be applied formula_30 times. Expressiveness over timed words. Contrary to the case of signals, MITL is strictly more expressive than MITL0,∞. The rewriting rules given above do not apply in the case of timed words because, in order to rewrite formula_31, it must be assumed that some event occurs between times 0 and formula_12, which is not necessarily the case. Satisfiability problem. The problem of deciding whether an MITL formula is satisfiable over a signal is EXPSPACE-complete. Further reading. R. Alur, T. Feder, and T.A. Henzinger. The Benefits of Relaxing Punctuality. Journal of the ACM, 43(1):116–146, 1996. R. Alur and T.A. Henzinger. Logics and Models of Real-Time: A Survey. In Proc. REX Workshop, Real-time: Theory in Practice, pages 74–106. LNCS 600, Springer, 1992. T.A. Henzinger. It's about Time: Real-time Logics Reviewed. In Proc. CONCUR'98, pages 439–454. LNCS 1466, Springer, 1998. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
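The statement T from the comparison with MTL above ("P held between 9 and 10 time units ago") can be checked on a finite signal with a small sketch. Representing the signal for P as a list of closed intervals on which P is true is an ad-hoc modeling choice made for this example, not part of any standard tool.

# Evaluate T = "P held between 9 and 10 time units ago" at time t, where the
# signal for P is given as a list of intervals (start, end) on which P holds.
def held_between(t, intervals, lower=9.0, upper=10.0):
    window = (t - upper, t - lower)
    return any(start <= window[1] and end >= window[0]
               for start, end in intervals)

p_true_on = [(0.0, 0.5), (4.0, 4.2)]

print(held_between(9.3, p_true_on))   # True: P held at time 0.3, i.e. 9 units earlier
print(held_between(20.0, p_true_on))  # False: P never holds in [10.0, 11.0]

Because the formula only constrains a one-unit-wide window in the past, its truth value has the bounded variability discussed above, unlike the punctual statement S.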
[ { "math_id": 0, "text": "t'\\in[0,1]" }, { "math_id": 1, "text": "t+t'" }, { "math_id": 2, "text": "9+t'" }, { "math_id": 3, "text": "9+t'\\in[9,10]" }, { "math_id": 4, "text": "t'" }, { "math_id": 5, "text": "\\square\\diamond_{(0,1)}p" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "\\triangleright_{\\{i\\}}p" }, { "math_id": 8, "text": "\\square_{(0,i)}\\neg p\\land\\diamond_{(0,i]}p" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "p\\land\\square(p\\implies\\triangleright_{\\{1\\}}p)" }, { "math_id": 11, "text": "\\Box(a\\implies\\Diamond_{(0,1]}b)" }, { "math_id": 12, "text": "a" }, { "math_id": 13, "text": "b" }, { "math_id": 14, "text": "\\mathcal U_I" }, { "math_id": 15, "text": "I" }, { "math_id": 16, "text": "\\mathcal R_I" }, { "math_id": 17, "text": "\\phi_1\\mathcal U_I\\phi_2" }, { "math_id": 18, "text": "\\phi_1" }, { "math_id": 19, "text": "\\phi_1\\mathcal R_I\\phi_2" }, { "math_id": 20, "text": "\\phi_2" }, { "math_id": 21, "text": "\\phi\\mathcal U_{(a,b)}\\psi" }, { "math_id": 22, "text": "\\Box_{(0,a]}\\phi\\mathcal U\\psi\\land\\Diamond_{(a,b)}\\psi" }, { "math_id": 23, "text": "\\Diamond_{(a,b)}\\phi" }, { "math_id": 24, "text": "\\Diamond_{(2a-b,a)}\\Box_{(0,b-a)}\\Diamond_{(0,b-a)}\\phi" }, { "math_id": 25, "text": "2a-b\\ge0" }, { "math_id": 26, "text": "\\Diamond_{(a,b-a)}\\Diamond_{(0,a)}\\phi" }, { "math_id": 27, "text": "a<b" }, { "math_id": 28, "text": "\\Diamond_{(a,+\\infty)}\\phi" }, { "math_id": 29, "text": "\\Box_{(0,a)}\\Diamond\\phi" }, { "math_id": 30, "text": "O\\left(\\frac{b}{b-a}\\right)" }, { "math_id": 31, "text": "\\Diamond_{(a,b)}" } ]
https://en.wikipedia.org/wiki?curid=60151482
6016181
Bounce rate
Internet marketing term in web traffic analysis Bounce rate is an Internet marketing term used in web traffic analysis. It represents the percentage of visitors who enter the site and then leave ("bounce") rather than continuing to view other pages within the same site. Bounce rate is calculated by counting the number of single page visits and dividing that by the total visits; the result is then expressed as a percentage. Bounce rate is a measure of "stickiness": the thinking is that an effective website will engage visitors more deeply, thus encouraging them to continue with their visit. It is expressed as a percentage and represents the proportion of single page visits to total visits. Bounce rate (%) = Visits that access only a single page (#) ÷ Total visits (#) to the website. Purpose. Bounce rates can be used to help determine the effectiveness or performance of an entry page at generating the interest of visitors. An entry page with a low bounce rate means that the page effectively causes visitors to view more pages and continue deeper into the website. High bounce rates typically indicate that the website is not doing a good job of attracting the continued interest of visitors; that is, visitors view only a single page, without looking at others or taking some form of action within the site, before the specified time period elapses. Interpretation of the bounce rate measure should be relevant to a website's business objectives and definitions of conversion, as having a high bounce rate is not always a sign of poor performance. On sites where an objective can be met without viewing more than one page, for example on websites sharing specific knowledge on some subject (dictionary entry, specific recipe), the bounce rate would not be as meaningful for determining conversion success. In contrast, the bounce rate of an e-commerce site could be interpreted in correlation with the purchase conversion rate, provided the bounces are considered representative of visits where no purchase was made. Typically, the bounce rate for e-commerce websites is in the range of 20% to 45%, with top performers operating at a 36% average bounce rate. Construction. A bounce occurs when a website visitor only views a single page on a website, that is, the visitor leaves a site without visiting any other pages before a specified session-timeout occurs. There is no industry standard minimum or maximum time by which a visitor must leave in order for a bounce to occur. Rather, this is determined by the session timeout of the analytics tracking software. formula_0 where *Rb = Bounce rate *Tv = Total number of visitors viewing one page only *Te = Total entries to page A visitor may bounce by: * Clicking on a link to a page on a different website * Closing an open window or tab * Typing a new URL * Clicking the "Back" button to leave the site * Session timeout There are two exceptions: 1) the site consists of a single page, or 2) the site's offline value proposition is so compelling that visitors can see just one single webpage, get all the information they need, and leave. A commonly used session timeout value is 30 minutes. In this case, if a visitor views a page, does not look at another page, and leaves the browser idle for longer than 30 minutes, the visit will register as a bounce. If the visitor continues to navigate after this delay, a new session will occur. 
The bounce rate for a single page is the number of visitors who enter the site at a page and leave within the specified timeout period without viewing another page, divided by the total number of visitors who entered the site at that page. In contrast, the bounce rate for a website is the number of website visitors who visit only a single page of a website per session divided by the total number of website visits. Caveats. While site-wide bounce rate can be a useful metric for sites with well-defined conversion steps requiring multiple page views, it may be of questionable value for sites where visitors are likely to find what they are looking for on the entry page. This type of behavior is common on web portals and referential content sites. For example, a visitor looking for the definition of a particular word may enter an online dictionary site on that word's definition page. Similarly, a visitor who wants to read about a specific news story may enter a news site on an article written for that story. These example entry pages could have a bounce rate above 80% (thereby increasing the site-wide average); however, they may still be considered successful. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R_b = \\frac{T_v}{T_e}" } ]
https://en.wikipedia.org/wiki?curid=6016181
60162
Tidal locking
Situation in which an astronomical object's orbital period matches its rotational period Tidal locking between a pair of co-orbiting astronomical bodies occurs when one of the objects reaches a state where there is no longer any net change in its rotation rate over the course of a complete orbit. In the case where a tidally locked body possesses synchronous rotation, the object takes just as long to rotate around its own axis as it does to revolve around its partner. For example, the same side of the Moon always faces Earth, although there is some variability because the Moon's orbit is not perfectly circular. Usually, only the satellite is tidally locked to the larger body. However, if both the difference in mass between the two bodies and the distance between them are relatively small, each may be tidally locked to the other; this is the case for Pluto and Charon, as well as for Eris and Dysnomia. Alternative names for the tidal locking process are gravitational locking, captured rotation, and spin–orbit locking. The effect arises between two bodies when their gravitational interaction slows a body's rotation until it becomes tidally locked. Over many millions of years, the interaction forces changes to their orbits and rotation rates as a result of energy exchange and heat dissipation. When one of the bodies reaches a state where there is no longer any net change in its rotation rate over the course of a complete orbit, it is said to be tidally locked. The object tends to stay in this state because leaving it would require adding energy back into the system. The object's orbit may migrate over time so as to undo the tidal lock, for example, if a giant planet perturbs the object. There is ambiguity in the use of the terms 'tidally locked' and 'tidal locking', in that some scientific sources use them to refer exclusively to 1:1 synchronous rotation (e.g. the Moon), while others include non-synchronous orbital resonances in which there is no further transfer of angular momentum over the course of one orbit (e.g. Mercury). In Mercury's case, the planet completes three rotations for every two revolutions around the Sun, a 3:2 spin–orbit resonance. In the special case where an orbit is nearly circular and the body's rotation axis is not significantly tilted, such as the Moon, tidal locking results in the same hemisphere of the revolving object constantly facing its partner. Regardless of which definition of tidal locking is used, the hemisphere that is visible changes slightly due to variations in the locked body's orbital velocity and the inclination of its rotation axis over time. Mechanism. Consider a pair of co-orbiting objects, A and B. The change in rotation rate necessary to tidally lock body B to the larger body A is caused by the torque applied by A's gravity on bulges it has induced on B by tidal forces. The gravitational force from object A upon B will vary with distance, being greatest at the nearest surface to A and least at the most distant. This creates a gravitational gradient across object B that will distort its equilibrium shape slightly. The body of object B will become elongated along the axis oriented toward A, and conversely, slightly reduced in dimension in directions orthogonal to this axis. The elongated distortions are known as tidal bulges. (For the solid Earth, these bulges can reach displacements of up to a few tenths of a metre.)
When B is not yet tidally locked, the bulges travel over its surface due to orbital motions, with one of the two "high" tidal bulges traveling close to the point where body A is overhead. For large astronomical bodies that are nearly spherical due to self-gravitation, the tidal distortion produces a slightly prolate spheroid, i.e. an axially symmetric ellipsoid that is elongated along its major axis. Smaller bodies also experience distortion, but this distortion is less regular. The material of B exerts resistance to this periodic reshaping caused by the tidal force. In effect, some time is required to reshape B to the gravitational equilibrium shape, by which time the forming bulges have already been carried some distance away from the A–B axis by B's rotation. Seen from a vantage point in space, the points of maximum bulge extension are displaced from the axis oriented toward A. If B's rotation period is shorter than its orbital period, the bulges are carried forward of the axis oriented toward A in the direction of rotation, whereas if B's rotation period is longer, the bulges instead lag behind. Because the bulges are now displaced from the A–B axis, A's gravitational pull on the mass in them exerts a torque on B. The torque on the A-facing bulge acts to bring B's rotation in line with its orbital period, whereas the "back" bulge, which faces away from A, acts in the opposite sense. However, the bulge on the A-facing side is closer to A than the back bulge by a distance of approximately B's diameter, and so experiences a slightly stronger gravitational force and torque. The net resulting torque from both bulges, then, is always in the direction that acts to synchronize B's rotation with its orbital period, leading eventually to tidal locking. Orbital changes. The angular momentum of the whole A–B system is conserved in this process, so that when B slows down and loses rotational angular momentum, its "orbital" angular momentum is boosted by a similar amount (there are also some smaller effects on A's rotation). This results in a raising of B's orbit about A in tandem with its rotational slowdown. For the other case where B starts off rotating too slowly, tidal locking both speeds up its rotation, and "lowers" its orbit. Locking of the larger body. The tidal locking effect is also experienced by the larger body A, but at a slower rate because B's gravitational effect is weaker due to B's smaller mass. For example, Earth's rotation is gradually being slowed by the Moon, by an amount that becomes noticeable over geological time as revealed in the fossil record. Current estimations are that this (together with the tidal influence of the Sun) has helped lengthen the Earth day from about 6 hours to the current 24 hours (over about 4.5 billion years). Currently, atomic clocks show that Earth's day lengthens, on average, by about 2.3 milliseconds per century. Given enough time, this would create a mutual tidal locking between Earth and the Moon. The length of Earth's day would increase and the length of a lunar month would also increase. Earth's sidereal day would eventually have the same length as the Moon's orbital period, about 47 times the length of the Earth day at present. However, Earth is not expected to become tidally locked to the Moon before the Sun becomes a red giant and engulfs Earth and the Moon. For bodies of similar size the effect may be of comparable size for both, and both may become tidally locked to each other on a much shorter timescale. 
An example is the dwarf planet Pluto and its satellite Charon. They have already reached a state where Charon is visible from only one hemisphere of Pluto and vice versa. Eccentric orbits. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A widely spread misapprehension is that a tidally locked body permanently turns one side to its host. For orbits that do not have an eccentricity close to zero, the rotation rate tends to become locked with the orbital speed when the body is at periapsis, which is the point of strongest tidal interaction between the two objects. If the orbiting object has a companion, this third body can cause the rotation rate of the parent object to vary in an oscillatory manner. This interaction can also drive an increase in orbital eccentricity of the orbiting object around the primary – an effect known as eccentricity pumping. In some cases where the orbit is eccentric and the tidal effect is relatively weak, the smaller body may end up in a so-called spin–orbit resonance, rather than being tidally locked. Here, the ratio of the rotation period of a body to its own orbital period is some simple fraction different from 1:1. A well known case is the rotation of Mercury, which is locked to its own orbit around the Sun in a 3:2 resonance. This results in the rotation speed roughly matching the orbital speed around perihelion. Many exoplanets (especially the close-in ones) are expected to be in spin–orbit resonances higher than 1:1. A Mercury-like terrestrial planet can, for example, become captured in a 3:2, 2:1, or 5:2 spin–orbit resonance, with the probability of each being dependent on the orbital eccentricity. Occurrence. Moons. All twenty known moons in the Solar System that are large enough to be round are tidally locked with their primaries, because they orbit very closely and tidal force increases rapidly (as a cubic function) with decreasing distance. On the other hand, most of the irregular outer satellites of the giant planets (e.g. Phoebe), which orbit much farther away than the large well-known moons, are not tidally locked. Pluto and Charon are an extreme example of a tidal lock. Charon is a relatively large moon in comparison to its primary and also has a very close orbit. This results in Pluto and Charon being mutually tidally locked. Pluto's other moons are not tidally locked; Styx, Nix, Kerberos, and Hydra all rotate chaotically due to the influence of Charon. Similarly, Eris and Dysnomia are mutually tidally locked. Orcus and Vanth might also be mutually tidally locked, but the data is not conclusive. The tidal locking situation for asteroid moons is largely unknown, but closely orbiting binaries are expected to be tidally locked, as well as contact binaries. Earth's Moon. Earth's Moon's rotation and orbital periods are tidally locked with each other, so no matter when the Moon is observed from Earth, the same hemisphere of the Moon is always seen. Most of the far side of the Moon was not seen until 1959, when photographs of most of the far side were transmitted from the Soviet spacecraft "Luna 3". When Earth is observed from the Moon, Earth does not appear to move across the sky. It remains in the same place while showing nearly all its surface as it rotates on its axis. Despite the Moon's rotational and orbital periods being exactly locked, about 59 percent of the Moon's total surface may be seen with repeated observations from Earth, due to the phenomena of libration and parallax. 
Librations are primarily caused by the Moon's varying orbital speed due to the eccentricity of its orbit: this allows up to about 6° more along its perimeter to be seen from Earth. Parallax is a geometric effect: at the surface of Earth observers are offset from the line through the centers of Earth and Moon; this accounts for about a 1° difference in the Moon's surface which can be seen around the sides of the Moon when comparing observations made during moonrise and moonset. Planets. It was thought for some time that Mercury was in synchronous rotation with the Sun. This was because whenever Mercury was best placed for observation, the same side faced inward. Radar observations in 1965 demonstrated instead that Mercury has a 3:2 spin–orbit resonance, rotating three times for every two revolutions around the Sun, which results in the same positioning at those observation points. Modeling has demonstrated that Mercury was captured into the 3:2 spin–orbit state very early in its history, probably within 10–20 million years after its formation. The 583.92-day interval between successive close approaches of Venus to Earth is equal to 5.001444 Venusian solar days, making approximately the same face visible from Earth at each close approach. Whether this relationship arose by chance or is the result of some kind of tidal locking with Earth is unknown. The exoplanet Proxima Centauri b, discovered in 2016, which orbits around Proxima Centauri, is almost certainly tidally locked, expressing either synchronized rotation or a 3:2 spin–orbit resonance like that of Mercury. One form of hypothetical tidally locked exoplanets is eyeball planets, which in turn are divided into "hot" and "cold" eyeball planets. Stars. Close binary stars throughout the universe are expected to be tidally locked with each other, and extrasolar planets that have been found to orbit their primaries extremely closely are also thought to be tidally locked to them. An unusual example, confirmed by MOST, may be Tau Boötis, a star that is probably tidally locked by its planet Tau Boötis b. If so, the tidal locking is almost certainly mutual. Timescale. An estimate of the time for a body to become tidally locked can be obtained using the following formula: formula_0 where formula_1 is the initial spin rate of the satellite (in radians per second), formula_2 is the semi-major axis of the motion of the satellite around the planet, formula_3 formula_4 is the moment of inertia of the satellite (with formula_5 the mass of the satellite and formula_6 its mean radius), formula_7 is the dissipation function of the satellite, formula_8 is the gravitational constant, formula_9 is the mass of the planet being orbited, and formula_10 is the tidal Love number of the satellite. formula_11 and formula_12 are generally very poorly known except for the Moon, which has formula_13. For a really rough estimate it is common to take formula_14 (perhaps conservatively, giving overestimated locking times), and formula_15 where formula_16 is the density of the satellite, formula_17 is its surface gravity, and formula_18 is its rigidity. Even knowing the size and density of the satellite leaves many parameters that must be estimated (especially "ω", "Q", and "μ"), so that any calculated locking times obtained are expected to be inaccurate, even to factors of ten. Further, during the tidal locking phase the semi-major axis formula_19 may have been significantly different from that observed nowadays due to subsequent tidal acceleration, and the locking time is extremely sensitive to this value. Because the uncertainty is so high, the above formulas can be simplified to give a somewhat less cumbersome one. By assuming that the satellite is spherical, that formula_20, and that one revolution every 12 hours is a sensible guess for the initial non-locked state (most asteroids have rotational periods between about 2 hours and about 2 days), this becomes formula_21 with masses in kilograms, distances in meters, and formula_22 in newtons per meter squared; formula_22 can be roughly taken as 3×10^10 N/m2 for rocky objects and 4×10^9 N/m2 for icy ones.
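As a numerical illustration of the simplified formula above, the following Python sketch evaluates it directly. The Moon–Earth parameter values are rough, order-of-magnitude inputs chosen here for illustration (they are not taken from the article), and, as stressed above, the result should be read only as an order-of-magnitude figure:

```python
def lock_time_years(a, R, mu, m_s, m_p):
    """Simplified tidal-locking timescale from the text:
    t_lock ~ 6 * a**6 * R * mu / (m_s * m_p**2) * 1e10  (years)
    a  : semi-major axis in metres
    R  : satellite mean radius in metres
    mu : rigidity in N/m^2 (~3e10 for rocky objects, ~4e9 for icy ones)
    m_s, m_p : satellite and planet masses in kilograms
    """
    return 6.0 * a**6 * R * mu / (m_s * m_p**2) * 1e10

# Rough Moon/Earth values (illustrative only); because of the a**6 dependence,
# the answer changes enormously if the early Moon's smaller orbital distance is used.
t = lock_time_years(a=3.84e8, R=1.74e6, mu=3e10, m_s=7.3e22, m_p=6.0e24)
print(f"{t:.2e} years")  # of order 1e6-1e7 years for these present-day inputs
```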
There is an extremely strong dependence on semi-major axis formula_19. For the locking of a primary body to its satellite as in the case of Pluto, the satellite and primary body parameters can be swapped. One conclusion is that, "other things being equal" (such as formula_11 and formula_22), a large moon will lock faster than a smaller moon at the same orbital distance from the planet because formula_23 grows as the cube of the satellite radius formula_6. A possible example of this is in the Saturn system, where Hyperion is not tidally locked, whereas the larger Iapetus, which orbits at a greater distance, is. However, this is not clear cut because Hyperion also experiences strong driving from the nearby Titan, which forces its rotation to be chaotic. The above formulae for the timescale of locking may be off by orders of magnitude, because they ignore the frequency dependence of formula_24. More importantly, they may be inapplicable to viscous binaries (double stars, or double asteroids that are rubble), because the spin–orbit dynamics of such bodies is defined mainly by their viscosity, not rigidity. List of known tidally locked bodies. Solar System. All the bodies below are tidally locked, and all but Mercury are moreover in synchronous rotation. (Mercury is tidally locked, but not in synchronous rotation.) Bodies likely to be locked. Solar System. Based on comparison between the likely time needed to lock a body to its primary, and the time it has been in its present orbit (comparable with the age of the Solar System for most planetary moons), a number of moons are thought to be locked. However their rotations are not known or not known enough. These are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nt_{\\text{lock}} \\approx \\frac{\\omega a^6 I Q}{3 G m_p^2 k_2 R^5}\n" }, { "math_id": 1, "text": "\\omega\\," }, { "math_id": 2, "text": "a\\," }, { "math_id": 3, "text": "I\\," }, { "math_id": 4, "text": "\\approx 0.4\\; m_s R^2" }, { "math_id": 5, "text": "m_s" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "Q\\," }, { "math_id": 8, "text": "G\\," }, { "math_id": 9, "text": "m_p\\," }, { "math_id": 10, "text": "k_2\\," }, { "math_id": 11, "text": "Q" }, { "math_id": 12, "text": "k_2" }, { "math_id": 13, "text": "k_2/Q=0.0011" }, { "math_id": 14, "text": "Q \\approx 100" }, { "math_id": 15, "text": "\nk_2 \\approx \\frac{1.5}{1+\\frac{19\\mu}{2\\rho g R}},\n" }, { "math_id": 16, "text": "\\rho\\," }, { "math_id": 17, "text": "g\\approx Gm_s/R^2" }, { "math_id": 18, "text": "\\mu\\," }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "k_2\\ll1\\, , Q = 100" }, { "math_id": 21, "text": "\nt_{\\text{lock}} \\approx 6\\ \\frac{a^6R\\mu}{m_sm_p^2} \\times 10^{10}\\ \\text{years},\n" }, { "math_id": 22, "text": "\\mu" }, { "math_id": 23, "text": "m_s\\," }, { "math_id": 24, "text": "k_2/Q" } ]
https://en.wikipedia.org/wiki?curid=60162
601621
Hubbert peak theory
One of the primary theories on peak oil The Hubbert peak theory says that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production tends to follow a bell-shaped curve. It is one of the primary theories on peak oil. Choosing a particular curve determines a point of maximum production based on discovery rates, production rates, and cumulative production. Early in the curve (pre-peak), the production rate increases due to the discovery rate and the addition of infrastructure. Late in the curve (post-peak), production declines because of resource depletion. The Hubbert peak theory is based on the observation that the amount of oil under the ground in any region is finite; therefore, the rate of discovery, which initially increases quickly, must reach a maximum and then decline. In the US, oil extraction followed the discovery curve after a time lag of 32 to 35 years. The theory is named after American geophysicist M. King Hubbert, who created a method of modeling the production curve given an assumed ultimate recovery volume. Hubbert's peak. "Hubbert's peak" can refer to the peaking of production in a particular area, which has now been observed for many fields and regions. Hubbert's peak was thought to have been achieved in the United States' contiguous 48 states (that is, excluding Alaska and Hawaii) in the early 1970s. Oil production peaked in 1970 and then declined over the subsequent 35 years in a pattern that closely followed the one predicted by Hubbert in the mid-1950s. However, beginning in the late 20th century, advances in extraction technology, particularly those that led to the extraction of tight oil and unconventional oil, resulted in a large increase in U.S. oil production, thus establishing a pattern that deviated drastically from the model predicted by Hubbert for the contiguous 48 states as a whole. Production from wells utilizing these advanced extraction techniques exhibits a rate of decline far greater than that of wells drilled by traditional means. In November 2017 the United States once again surpassed the 10-million-barrel-per-day mark for the first time since 1970. Peak oil as a proper noun, or "Hubbert's peak" applied more generally, refers to a predicted event: the peak of the entire planet's oil production. After peak oil, according to Hubbert peak theory, the rate of oil production on Earth would enter a terminal decline. Based on his theory, in a paper he presented to the American Petroleum Institute in 1956, Hubbert correctly predicted that production of oil from conventional sources would peak in the continental United States around 1965–1970. Hubbert further predicted a worldwide peak at "about half a century" from publication and approximately 12 gigabarrels (GB) a year in magnitude. In a 1976 TV interview Hubbert added that the actions of OPEC might flatten the global production curve but this would only delay the peak for perhaps 10 years. The development of new technologies has provided access to large quantities of unconventional resources, and the boost in production has largely discounted Hubbert's prediction. Hubbert's theory. Hubbert curve. In 1956, Hubbert proposed that fossil fuel production in a given region over time would follow a roughly bell-shaped curve without giving a precise formula; he later used the Hubbert curve, the derivative of the logistic curve, for estimating future production using past observed discoveries.
Hubbert assumed that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. At some point, a peak output is reached, and production begins declining until it approximates an exponential decline. The Hubbert curve satisfies these constraints. Furthermore, it is symmetrical, with the peak of production reached when half of the fossil fuel that will ultimately be produced has been produced. It also has a single peak. Given past oil discovery and production data, a Hubbert curve that attempts to approximate past discovery data may be constructed and used to provide estimates for future production. In particular, the date of peak oil production or the total amount of oil ultimately produced can be estimated that way. Cavallo defines the Hubbert curve used to predict the U.S. peak as the derivative of: formula_0 where formula_1max is the total resource available (ultimate recovery of crude oil), formula_2 the cumulative production, and formula_3 and formula_4 are constants. The year of maximum annual production (peak) is: formula_5 at which point the cumulative production formula_2 reaches half of the total available resource: formula_6 The Hubbert equation assumes that oil production is symmetrical about the peak. Others have used similar but non-symmetrical equations which may provide a better fit to empirical production data. Use of multiple curves. The sum of multiple Hubbert curves, a technique not developed by Hubbert himself, may be used in order to model more complicated real-life scenarios. When new production methods, namely hydraulic fracturing, were pioneered on the previously unproductive oil-bearing shale formations, the sudden, dramatic increase in production necessitated a distinct curve. Advances in technologies such as these are limited, but when a paradigm-shifting idea impacts production, a new curve must be added to the old curve, or the entire curve must be reworked. It is well documented that production from shale wells is unlike that of traditional wells. A traditional oil well's rate of decline is shallow, exhibiting a slow, predictable decline as the reservoir is drawn down (the "drinking of the milkshake"). Production from shale wells, by contrast, assuming successful fracturing, sees its peak at the moment the well is brought in, with a drastic rate of decline shortly thereafter. However, one revolutionary aspect of these types of production methods is the ability to refracture the well. Production may be brought back up to near-peak levels with a reapplication of the fracturing technology to the subject formation, once again releasing the hydrocarbons trapped tightly within the shale and allowing them to be drawn to the surface. This process allows for an outward manipulation of the curve, simply by purposefully neglecting to rework the well until the operator's desired market conditions are present. Reliability. Crude oil. Hubbert, in his 1956 paper, presented two scenarios for US crude oil production. Hubbert's upper-bound estimate, which he regarded as optimistic, accurately predicted that US oil production would peak in 1970, although the actual peak was 17% higher than Hubbert's curve.
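A minimal Python sketch of the logistic model defined above (cumulative production formula_0 and peak year formula_5); the parameter values are arbitrary illustrations, not fitted estimates for any real region:

```python
import math

def cumulative_production(t, q_max, a, b):
    """Logistic cumulative production Q(t) = Q_max / (1 + a * exp(-b*t))."""
    return q_max / (1.0 + a * math.exp(-b * t))

def production_rate(t, q_max, a, b):
    """Annual production, the derivative dQ/dt of the logistic above."""
    e = a * math.exp(-b * t)
    return q_max * b * e / (1.0 + e) ** 2

# Illustrative (made-up) parameters: ultimate recovery 200 units, a = 50, b = 0.08/yr.
q_max, a, b = 200.0, 50.0, 0.08
t_peak = math.log(a) / b            # year of maximum production, t_max = ln(a)/b
print(round(t_peak, 1))             # ~48.9 "years" after t = 0
print(round(cumulative_production(t_peak, q_max, a, b), 1))  # q_max/2 = 100.0 at the peak
```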
Production declined, as Hubbert had predicted, and stayed within 10 percent of Hubbert's predicted value from 1974 through 1994; since then, actual production has been significantly greater than the Hubbert curve. The development of new technologies has provided access to large quantities of unconventional resources, and the boost of production has largely discounted Hubbert's prediction. Hubbert's 1956 production curves depended on geological estimates of ultimate recoverable oil resources, but he was dissatisfied by the uncertainty this introduced, given the various estimates ranging from 110 billion to 590 billion barrels for the US. Starting in his 1962 publication, he made his calculations, including that of ultimate recovery, based only on mathematical analysis of production rates, proved reserves, and new discoveries, independent of any geological estimates of future discoveries. He concluded that the ultimate recoverable oil resource of the contiguous 48 states was 170 billion barrels, with a production peak in 1966 or 1967. He considered that, because his model incorporated past technical advances, any future advances would occur at the same rate and were also incorporated. Hubbert continued to defend his calculation of 170 billion barrels in his publications of 1965 and 1967, although by 1967 he had moved the peak forward slightly, to 1968 or 1969. A post-hoc analysis of peaked oil wells, fields, regions and nations found that Hubbert's model was the "most widely useful" (providing the best fit to the data), though many areas studied had a sharper "peak" than predicted. A 2007 study of oil depletion by the UK Energy Research Centre pointed out that there is no theoretical and no robust practical reason to assume that oil production will follow a logistic curve. Neither is there any reason to assume that the peak will occur when half the ultimate recoverable resource has been produced; and in fact, empirical evidence appears to contradict this idea. An analysis of 55 post-peak countries found that the average peak was at 25 percent of the ultimate recovery. Natural gas. Hubbert also predicted that natural gas production would follow a logistic curve similar to that of oil. The graph shows actual gas production in blue compared to his predicted gas production for the United States in red, published in 1962. Economics. Energy return on energy investment. The ratio of energy extracted to the energy expended in the process is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Should the EROEI drop to one, or equivalently should the net energy gain fall to zero, oil production is no longer a net energy source. There is a difference between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. Availability of cheap, but hard to transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca tar sands plants. Cheap natural gas has also led to ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods to measure EROEI are in debate.
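As a small illustration of the net-energy bookkeeping described above (the numbers are arbitrary):

```python
def eroei(energy_out, energy_in):
    """Energy returned on energy invested; net energy gain is energy_out - energy_in."""
    return energy_out / energy_in

print(eroei(50, 10))  # 5.0 -> a net energy source
print(eroei(10, 20))  # 0.5 -> below 1: not a net source on its own, though possibly
                      #        still worth exploiting, as noted above
```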
The assumption of inevitable declining volumes of oil and gas produced per unit of effort is contrary to recent experience in the US. In the United States, as of 2017, there has been an ongoing decade-long increase in the productivity of oil and gas drilling in all the major tight oil and gas plays. The US Energy Information Administration reports, for instance, that in the Bakken Shale production area of North Dakota, the volume of oil produced per day of drilling rig time in January 2017 was 4 times the oil volume per day of drilling five years previous, in January 2012, and nearly 10 times the oil volume per day of ten years previous, in January 2007. In the Marcellus gas region of the northeast, the volume of gas produced per day of drilling time in January 2017 was 3 times the gas volume per day of drilling five years previous, in January 2012, and 28 times the gas volume per day of drilling ten years previous, in January 2007. Growth-based economic models. Insofar as economic growth is driven by oil consumption growth, post-peak societies must adapt. Hubbert believed: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Our principal constraints are cultural. During the last two centuries, we have known nothing but exponential growth and in parallel, we have evolved what amounts to an exponential-growth culture, a culture so heavily dependent upon the continuance of exponential growth for its stability that it is incapable of reckoning with problems of non-growth. Some economists describe the problem as uneconomic growth or a false economy. On the political right, Fred Ikle has warned about "conservatives addicted to the Utopia of Perpetual Growth". Brief oil interruptions in 1973 and 1979 markedly slowed—but did not stop—the growth of world GDP. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their 2003 study "Food, Land, Population and the U.S. Economy", placed the maximum U.S. population for a sustainable economy at 200 million (actual population approx. 290m in 2003, 329m in 2019). To achieve a sustainable economy, world population will have to be reduced by two-thirds, says the study. Without population reduction, this study predicts an agricultural crisis beginning in 2020, becoming critical c. 2050. The peaking of global oil along with the decline in regional natural gas production may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before. Hubbert peaks. Although Hubbert's peak theory receives the most attention concerning peak oil production, it has also been applied to other natural resources. Natural gas. Doug Reynolds predicted in 2005 that the North American peak would occur in 2007. Bentley predicted a world "decline in conventional gas production from about 2020". Coal.
Although observers believe that peak coal is significantly further out than peak oil, Hubbert studied the specific example of anthracite in the US, a high-grade coal, whose production peaked in the 1920s. Hubbert found that anthracite matches a curve closely. Hubbert estimated recoverable coal reserves worldwide at 2.500 × 10^12 metric tons, with production peaking around 2150 (depending on usage). More recent estimates suggest an earlier peak. "Coal: Resources and Future Production" (PDF 630KB), published on April 5, 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years. Reporting on this, Richard Heinberg also notes that the date of peak annual energetic extraction from coal is likely to come earlier than the date of peak in quantity of coal (tons per year) extracted as the most energy-dense types of coal have been mined most extensively. A second study, "The Future of Coal" by B. Kavalov and S. D. Peteves of the Institute for Energy (IFE), prepared for the European Commission Joint Research Centre, reaches similar conclusions and states that "coal might not be so abundant, widely available and reliable as an energy source in the future". Work by David Rutledge of Caltech predicts that the total world coal production will amount to only about 450 gigatonnes. This implies that coal is running out faster than usually assumed. Fissionable materials. In a paper in 1956, after a review of US fissionable reserves, Hubbert notes of nuclear power: As of 2015, the identified resources of uranium are sufficient to provide more than 135 years of supply at the present rate of consumption. Technologies such as the thorium fuel cycle, reprocessing and fast breeders can, in theory, extend the life of uranium reserves from hundreds to thousands of years. Caltech physics professor David Goodstein stated in 2004 that Helium. Almost all helium on Earth is a result of radioactive decay of uranium and thorium. Helium is extracted by fractional distillation from natural gas, which contains up to 7% helium. The world's largest helium-rich natural gas fields are found in the United States, especially in the Hugoton and nearby gas fields in Kansas, Oklahoma, and Texas. The extracted helium is stored underground in the National Helium Reserve near Amarillo, Texas, the self-proclaimed "Helium Capital of the World". Helium production is expected to decline along with natural gas production in these areas. Helium, which is the second-lightest chemical element, will rise to the upper layers of Earth's atmosphere, where it can forever break free from Earth's gravitational attraction. Approximately 1,600 tons of helium are lost per year as a result of atmospheric escape mechanisms. Transition metals. Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that the peak production for metals such as copper, tin, lead, zinc and others would occur in the time frame of decades, and iron in the time frame of two centuries, like coal. The price of copper rose 500% between 2003 and 2007, a rise attributed by some to peak copper. Copper prices later fell, along with many other commodities and stock prices, as demand shrank from fear of a global recession. Lithium availability is a concern for a fleet of cars using Li-ion batteries, but a paper published in 1996 estimated that world reserves are adequate for at least 50 years.
A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled. Precious metals. In 2009, Aaron Regent president of the Canadian gold giant Barrick Gold said that global output has been falling by roughly one million ounces a year since the start of the decade. The total global mine supply has dropped by 10 percent as ore quality erodes, implying that the roaring bull market of the last eight years may have further to run. "There is a strong case to be made that we are already at 'peak gold'," he told The Daily Telegraph at the RBC's annual gold conference in London. "Production peaked around 2000 and it has been in decline ever since, and we forecast that decline to continue. It is increasingly difficult to find ore," he said. Ore grades have fallen from around 12 grams per tonne in 1950 to nearer 3 grams in the US, Canada, and Australia. South Africa's output has halved since peaking in 1970. Output fell a further 14 percent in South Africa in 2008 as companies were forced to dig ever deeper – at greater cost – to replace depleted reserves. World mined gold production has peaked four times since 1900: in 1912, 1940, 1971, and 2001, each peak being higher than previous peaks. The latest peak was in 2001 when production reached 2,600 metric tons, then declined for several years. Production started to increase again in 2009, spurred by high gold prices, and achieved record new highs each year in 2012, 2013, and 2014, when production reached 2,990 tonnes. Phosphorus. Phosphorus supplies are essential to farming and depletion of reserves is estimated at somewhere from 60 to 130 years. According to a 2008 study, the total reserves of phosphorus are estimated to be approximately 3,200 MT, with peak production at 28 MT/year in 2034. Individual countries' supplies vary widely; without a recycling initiative America's supply is estimated around 30 years. Phosphorus supplies affect agricultural output which in turn limits alternative fuels such as biodiesel and ethanol. Its increasing price and scarcity (the global price of rock phosphate rose 8-fold in the 2 years to mid-2008) could change global agricultural patterns. Lands, perceived as marginal because of remoteness, but with very high phosphorus content, such as the Gran Chaco may get more agricultural development, while other farming areas, where nutrients are a constraint, may drop below the line of profitability. Wood. Unlike fossil resources, forests keep growing, thus the Hubbert peak theory does not apply. There had been wood shortages in the past, called Holznot in German-speaking regions, but no global peak wood yet, despite the early 2021 "Lumber Crisis". Besides, deforestation may cause other problems, like erosion and drought by ending forests' Biotic pump effect. Water. Hubbert's original analysis did not apply to renewable resources. However, over-exploitation often results in a Hubbert peak nonetheless. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced. For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. These debates usually center around agriculture and suburban water usage but generation of electricity from nuclear energy or coal and tar sands mining mentioned above is also water resource intensive. 
The term fossil water is sometimes used to describe aquifers whose water is not being recharged. Fishing. At least one researcher has attempted to perform Hubbert linearization (Hubbert curve) on the whaling industry, as well as charting the transparently dependent price of caviar on sturgeon depletion. The Atlantic northwest cod fishery was a renewable resource, but the numbers of fish taken exceeded the fish's rate of recovery. The end of the cod fishery does match the exponential drop of the Hubbert bell curve. Another example is the cod of the North Sea. Air/oxygen. Half the world's oxygen is produced by phytoplankton. The plankton was once thought to have dropped by 40% since the 1950s. However, the authors reanalyzed their data with better calibrations and found plankton abundance dropped globally by only a few percent over this time interval (Boyce et al. 2014) Criticisms of peak oil. Economist Michael Lynch argues that the theory behind the Hubbert curve is simplistic and relies on an overly Malthusian point of view. Lynch claims that Campbell's predictions for world oil production are strongly biased towards underestimates, and that Campbell has repeatedly pushed back the date. Leonardo Maugeri, vice president of the Italian energy company Eni, argues that nearly all of peak estimates do not take into account unconventional oil even though the availability of these resources is significant and the costs of extraction and processing, while still very high, are falling because of improved technology. He also notes that the recovery rate from existing world oil fields has increased from about 22% in 1980 to 35% today because of new technology and predicts this trend will continue. The ratio between proven oil reserves and current production has constantly improved, passing from 20 years in 1948 to 35 years in 1972 and reaching about 40 years in 2003. These improvements occurred even with low investment in new exploration and upgrading technology because of the low oil prices during the last 20 years. However, Maugeri feels that encouraging more exploration will require relatively high oil prices. Edward Luttwak, an economist and historian, claims that unrest in countries such as Russia, Iran and Iraq has led to a massive underestimate of oil reserves. The Association for the Study of Peak Oil and Gas (ASPO) responds by claiming neither Russia nor Iran are troubled by unrest currently, but Iraq is. Cambridge Energy Research Associates authored a report that is critical of Hubbert-influenced predictions: CERA does not believe there will be an endless abundance of oil, but instead believes that global production will eventually follow an "undulating plateau" for one or more decades before declining slowly, and that production will reach 40 Mb/d by 2015. Alfred J. Cavallo, while predicting a conventional oil supply shortage by no later than 2015, does not think Hubbert's peak is the correct theory to apply to world production. Criticisms of peak element scenarios. Although M. King Hubbert himself made major distinctions between decline in petroleum production versus depletion (or relative lack of it) for elements such as fissionable uranium and thorium, some others have predicted peaks like peak uranium and peak phosphorus soon on the basis of published reserve figures compared to present and future production. According to some economists, though, the amount of proved reserves inventoried at a time may be considered "a poor indicator of the total future supply of a mineral resource." 
As some illustrations, for tin, copper, iron, lead, and zinc, both production from 1950 to 2000 and reserves in 2000 much exceeded world reserves in 1950, which would be impossible except for the fact that "proved reserves are like an inventory of cars to an auto dealer" at a given time, having little relationship to the actual total affordable to extract in the future. In the example of peak phosphorus, additional concentrations exist intermediate between 71,000 Mt of identified reserves (USGS) and the approximately 30,000,000,000 Mt of other phosphorus in Earth's crust, with the average rock being 0.1% phosphorus, so showing that a decline in human phosphorus production will occur soon would require far more than comparing the former figure to the 190 Mt/year of phosphorus extracted in mines (2011 figure). See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nQ(t) = {Q_{{\\rm max}}\\over {1 + ae^{-bt}}}\n" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "Q(t)" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "\nt_{{\\rm max}} = {1\\over b}\\ln \\left({a} \\right).\n" }, { "math_id": 6, "text": "\nQ(t) = Q_\\text{max}/2\n" } ]
https://en.wikipedia.org/wiki?curid=601621
6016645
Alpha max plus beta min algorithm
High-speed approximation of the square root of the sum of two squares The alpha max plus beta min algorithm is a high-speed approximation of the square root of the sum of two squares. The square root of the sum of two squares, also known as Pythagorean addition, is a useful function, because it finds the hypotenuse of a right triangle given the two side lengths, the norm of a 2-D vector, or the magnitude formula_0 of a complex number "z" = "a" + "bi" given the real and imaginary parts. The algorithm avoids performing the square and square-root operations, instead using simple operations such as comparison, multiplication, and addition. Some choices of the α and β parameters of the algorithm allow the multiplication operation to be reduced to a simple shift of binary digits that is particularly well suited to implementation in high-speed digital circuitry. The approximation is expressed as formula_1 where formula_2 is the maximum absolute value of "a" and "b", and formula_3 is the minimum absolute value of "a" and "b". For the closest approximation, the optimum values for formula_4 and formula_5 are formula_6 and formula_7, giving a maximum error of 3.96%. Improvements. When formula_9, formula_10 becomes smaller than formula_2 (which is geometrically impossible) near the axes where formula_3 is near 0. This can be remedied by replacing the result with formula_2 whenever that is greater, essentially splitting the line into two different segments. formula_11 Depending on the hardware, this improvement can be almost free. Using this improvement changes which parameter values are optimal, because they no longer need a close match for the entire interval. A lower formula_4 and higher formula_5 can therefore increase precision further. "Increasing precision:" When splitting the line in two like this, one could improve precision even more by replacing the first segment with a better estimate than formula_2, and adjusting formula_4 and formula_5 accordingly. formula_12 formula_13 formula_14 Beware, however, that a non-zero formula_8 would require at least one extra addition and some bit-shifts (or a multiplication), probably nearly doubling the cost and, depending on the hardware, possibly defeating the purpose of using an approximation in the first place. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
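A minimal Python sketch of the approximation and of the max-based improvement described above; the error scan over unit vectors is just an illustrative check of the quoted 3.96% figure:

```python
import math

ALPHA = 0.960433870103   # alpha_0 quoted above
BETA = 0.397824734759    # beta_0 quoted above

def alpha_max_beta_min(a, b, alpha=ALPHA, beta=BETA):
    """Approximate sqrt(a*a + b*b) as alpha*max(|a|, |b|) + beta*min(|a|, |b|)."""
    hi, lo = max(abs(a), abs(b)), min(abs(a), abs(b))
    return alpha * hi + beta * lo

def alpha_max_beta_min_clamped(a, b, alpha=ALPHA, beta=BETA):
    """Improved variant: never return less than max(|a|, |b|)."""
    hi, lo = max(abs(a), abs(b)), min(abs(a), abs(b))
    return max(hi, alpha * hi + beta * lo)

# Near an axis the clamped variant returns Max itself instead of a low estimate.
print(alpha_max_beta_min(1.0, 0.01), alpha_max_beta_min_clamped(1.0, 0.01))

# Scan unit vectors over the first quadrant: the exact length is 1, so the
# worst-case relative error is the largest deviation from 1 (about 3.96%).
worst = max(abs(alpha_max_beta_min(math.cos(t), math.sin(t)) - 1.0)
            for t in (i * math.pi / 1800 for i in range(901)))
print(f"max relative error: {worst:.4f}")   # ~0.0396
```

In fixed-point hardware, constants close to these optimum values are typically chosen as sums of a few power-of-two terms, so the multiplications reduce to shifts and adds, as noted above.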
[ { "math_id": 0, "text": "|z| = \\sqrt{a^2 + b^2}" }, { "math_id": 1, "text": "|z| = \\alpha\\,\\mathbf{Max} + \\beta\\,\\mathbf{Min}," }, { "math_id": 2, "text": "\\mathbf{Max}" }, { "math_id": 3, "text": "\\mathbf{Min}" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "\\alpha_0 = \\frac{2 \\cos \\frac{\\pi}{8}}{1 + \\cos \\frac{\\pi}{8}} = 0.960433870103..." }, { "math_id": 7, "text": "\\beta_0 = \\frac{2 \\sin \\frac{\\pi}{8}}{1 + \\cos \\frac{\\pi}{8}} = 0.397824734759..." }, { "math_id": 8, "text": "\\beta_0" }, { "math_id": 9, "text": "\\alpha < 1" }, { "math_id": 10, "text": "|z|" }, { "math_id": 11, "text": "|z| = \\max(\\mathbf{Max}, \\alpha\\,\\mathbf{Max} + \\beta\\,\\mathbf{Min})." }, { "math_id": 12, "text": "|z| = \\max\\big(|z_0|, |z_1|\\big)," }, { "math_id": 13, "text": "|z_0| = \\alpha_0\\,\\mathbf{Max} + \\beta_0\\,\\mathbf{Min}," }, { "math_id": 14, "text": "|z_1| = \\alpha_1\\,\\mathbf{Max} + \\beta_1\\,\\mathbf{Min}." } ]
https://en.wikipedia.org/wiki?curid=6016645
60167
Average
Number taken as representative of a list of numbers In ordinary language, an average is a single number or value that best represents a set of data. The type of average taken as most typically representative of a list of numbers is the arithmetic mean – the sum of the numbers divided by how many numbers are in the list. For example, the mean average of the numbers 2, 3, 4, 7, and 9 (summing to 25) is 5. Depending on the context, the most representative statistic to be taken as the average might be another measure of central tendency, such as the mid-range, median, mode or geometric mean. For example, the average personal income is often given as the median – the number below which are 50% of personal incomes and above which are 50% of personal incomes – because the mean would be higher by including personal incomes from a few billionaires. For this reason, it is recommended to avoid using the word "average" when discussing measures of central tendency and specify which type of measure of average is being used. General properties. If all numbers in a list are the same number, then their average is also equal to this number. This property is shared by each of the many types of average. Another universal property is monotonicity: if two lists of numbers "A" and "B" have the same length, and each entry of list "A" is at least as large as the corresponding entry on list "B", then the average of list "A" is at least that of list "B". Also, all averages satisfy linear homogeneity: if all numbers of a list are multiplied by the same positive number, then its average changes by the same factor. In some types of average, the items in the list are assigned different weights before the average is determined. These include the weighted arithmetic mean, the weighted geometric mean and the weighted median. Also, for some types of moving average, the weight of an item depends on its position in the list. Most types of average, however, satisfy permutation-insensitivity: all items count equally in determining their average value and their positions in the list are irrelevant; the average of (1, 2, 3, 4, 6) is the same as that of (3, 2, 6, 4, 1). Pythagorean means. The arithmetic mean, the geometric mean and the harmonic mean are known collectively as the "Pythagorean means". Statistical location. The mode, the median, and the mid-range are often used in addition to the mean as estimates of central tendency in descriptive statistics. These can all be seen as minimizing variation by some measure; see . Mode. The most frequently occurring number in a list is called the mode. For example, the mode of the list (1, 2, 2, 3, 3, 3, 4) is 3. It may happen that there are two or more numbers which occur equally often and more often than any other number. In this case there is no agreed definition of mode. Some authors say they are all modes and some say there is no mode. Median. The median is the middle number of the group when they are ranked in order. (If there are an even number of numbers, the mean of the middle two is taken.) Thus to find the median, order the list according to its elements' magnitude and then repeatedly remove the pair consisting of the highest and lowest values until either one or two values are left. If exactly one value is left, it is the median; if two values, the median is the arithmetic mean of these two. This method takes the list 1, 7, 3, 13 and orders it to read 1, 3, 7, 13. Then the 1 and 13 are removed to obtain the list 3, 7. 
Since there are two elements in this remaining list, the median is their arithmetic mean, (3 + 7)/2 = 5. Mid-range. The mid-range is the arithmetic mean of the highest and lowest values of a set. Summary of types. The table of mathematical symbols explains the symbols used below. Miscellaneous types. Other more sophisticated averages are: trimean, trimedian, and normalized mean, with their generalizations. One can create one's own average metric using the generalized "f"-mean: formula_0 where "f" is any invertible function. The harmonic mean is an example of this using "f"("x") = 1/"x", and the geometric mean is another, using "f"("x") = log "x". However, this method for generating means is not general enough to capture all averages. A more general method for defining an average takes any function "g"("x"1, "x"2, ..., "x""n") of a list of arguments that is continuous, strictly increasing in each argument, and symmetric (invariant under permutation of the arguments). The average "y" is then the value that, when replacing each member of the list, results in the same function value: "g"("y", "y", ..., "y") = "g"("x"1, "x"2, ..., "x""n"). This most general definition still captures the important property of all averages that the average of a list of identical elements is that element itself. The function "g"("x"1, "x"2, ..., "x""n") = "x"1+"x"2+ ··· + "x""n" provides the arithmetic mean. The function "g"("x"1, "x"2, ..., "x""n") = "x"1"x"2···"x""n" (where the list elements are positive numbers) provides the geometric mean. The function "g"("x"1, "x"2, ..., "x""n") = ("x"1^−1 + "x"2^−1 + ··· + "x""n"^−1)^−1 (where the list elements are positive numbers) provides the harmonic mean. Average percentage return and CAGR. A type of average used in finance is the average percentage return. It is an example of a geometric mean. When the returns are annual, it is called the Compound Annual Growth Rate (CAGR). For example, if we are considering a period of two years, and the investment return in the first year is −10% and the return in the second year is +60%, then the average percentage return or CAGR, "R", can be obtained by solving the equation: (1 − 10%) × (1 + 60%) = (1 − 0.1) × (1 + 0.6) = (1 + "R") × (1 + "R"). The value of "R" that makes this equation true is 0.2, or 20%. This means that the total return over the 2-year period is the same as if there had been 20% growth each year. The order of the years makes no difference – the average percentage return of +60% and −10% is the same as that for −10% and +60%. This method can be generalized to examples in which the periods are not equal. For example, consider a period of half a year for which the return is −23% and a period of two and a half years for which the return is +13%. The average percentage return for the combined period is the single year return, "R", that is the solution of the following equation: (1 − 0.23)^0.5 × (1 + 0.13)^2.5 = (1 + "R")^(0.5+2.5), giving an average return "R" of 0.0600 or 6.00%. Moving average. Given a time series, such as daily stock market prices or yearly temperatures, people often want to create a smoother series. This helps to show underlying trends or perhaps periodic behavior. An easy way to do this is the "moving average": one chooses a number "n" and creates a new series by taking the arithmetic mean of the first "n" values, then moving forward one place by dropping the oldest value and introducing a new value at the other end of the list, and so on. This is the simplest form of moving average.
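A short Python sketch of the generalized "f"-mean, of the average-return calculation worked through above, and of a simple moving average; the function names are ad hoc for this example:

```python
import math

def f_mean(values, f, f_inv):
    """Generalized f-mean: f_inv of the arithmetic mean of f(x) over the list."""
    return f_inv(sum(f(x) for x in values) / len(values))

data = [2.0, 3.0, 4.0, 7.0, 9.0]
print(f_mean(data, lambda x: x, lambda y: y))          # arithmetic mean: 5.0
print(f_mean(data, math.log, math.exp))                # geometric mean: ~4.32
print(f_mean(data, lambda x: 1 / x, lambda y: 1 / y))  # harmonic mean: ~3.74

# Average percentage return (CAGR) for the two-year example above:
# (1 - 0.10) * (1 + 0.60) = (1 + R)**2  =>  R = 0.20
print((0.9 * 1.6) ** 0.5 - 1)                          # ~0.20 (20%)

# Unequal periods: (1 - 0.23)**0.5 * (1 + 0.13)**2.5 = (1 + R)**3  =>  R ~ 0.06
print(((1 - 0.23) ** 0.5 * (1 + 0.13) ** 2.5) ** (1 / 3) - 1)

def simple_moving_average(series, n):
    """Simple (unweighted) moving average with window n, as described above."""
    return [sum(series[i:i + n]) / n for i in range(len(series) - n + 1)]

print(simple_moving_average([1, 2, 3, 4, 6], 3))       # [2.0, 3.0, 4.33...]
```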
More complicated forms involve using a weighted average. The weighting can be used to enhance or suppress various periodic behavior and there is very extensive analysis of what weightings to use in the literature on filtering. In digital signal processing the term "moving average" is used even when the sum of the weights is not 1.0 (so the output series is a scaled version of the averages). The reason for this is that the analyst is usually interested only in the trend or the periodic behavior. History. Origin. The first recorded time that the arithmetic mean was extended from 2 to n cases for the use of estimation was in the sixteenth century. From the late sixteenth century onwards, it gradually became a common method to use for reducing errors of measurement in various areas. At the time, astronomers wanted to know a real value from noisy measurement, such as the position of a planet or the diameter of the moon. Using the mean of several measured values, scientists assumed that the errors add up to a relatively small number when compared to the total of all measured values. The method of taking the mean for reducing observation errors was indeed mainly developed in astronomy. A possible precursor to the arithmetic mean is the mid-range (the mean of the two extreme values), used for example in Arabian astronomy of the ninth to eleventh centuries, but also in metallurgy and navigation. However, there are various older vague references to the use of the arithmetic mean (which are not as clear, but might reasonably have to do with our modern definition of the mean). In a text from the 4th century, it was written that (text in square brackets is a possible missing text that might clarify the meaning): In the first place, we must set out in a row the sequence of numbers from the monad up to nine: 1, 2, 3, 4, 5, 6, 7, 8, 9. Then we must add up the amount of all of them together, and since the row contains nine terms, we must look for the ninth part of the total to see if it is already naturally present among the numbers in the row; and we will find that the property of being [one] ninth [of the sum] only belongs to the [arithmetic] mean itself... Even older potential references exist. There are records that from about 700 BC, merchants and shippers agreed that damage to the cargo and ship (their "contribution" in case of damage by the sea) should be shared equally among themselves. This might have been calculated using the average, although there seem to be no direct record of the calculation. Etymology. The root is found in Arabic as عوار "ʿawār", a defect, or anything defective or damaged, including partially spoiled merchandise; and عواري "ʿawārī" (also عوارة "ʿawāra") = "of or relating to "ʿawār", a state of partial damage". Within the Western languages the word's history begins in medieval sea-commerce on the Mediterranean. 12th and 13th century Genoa Latin "avaria" meant "damage, loss and non-normal expenses arising in connection with a merchant sea voyage"; and the same meaning for "avaria" is in Marseille in 1210, Barcelona in 1258 and Florence in the late 13th. 15th-century French "avarie" had the same meaning, and it begot English "averay" (1491) and English "average" (1502) with the same meaning. Today, Italian "avaria", Catalan "avaria" and French "avarie" still have the primary meaning of "damage". 
The huge transformation of the meaning in English began with the practice in later medieval and early modern Western merchant-marine law contracts under which if the ship met a bad storm and some of the goods had to be thrown overboard to make the ship lighter and safer, then all merchants whose goods were on the ship were to suffer proportionately (and not whoever's goods were thrown overboard); and more generally there was to be proportionate distribution of any "avaria". From there the word was adopted by British insurers, creditors, and merchants for talking about their losses as being spread across their whole portfolio of assets and having a mean proportion. Today's meaning developed out of that and started in English in the mid-18th century. Marine damage is either "particular average", which is borne only by the owner of the damaged property, or general average, where the owner can claim a proportional contribution from all the parties to the marine venture. The type of calculations used in adjusting general average gave rise to the use of "average" to mean "arithmetic mean". A second English usage, documented as early as 1674 and sometimes spelled "averish", is as the residue and second growth of field crops, which were considered suited to consumption by draught animals ("avers"). There is an earlier (from at least the 11th century), unrelated use of the word. It appears to be an old legal term for a tenant's day labour obligation to a sheriff, probably anglicised from "avera" found in the English Domesday Book (1085). The Oxford English Dictionary, however, says that derivations from German "hafen" haven, and Arabic "ʿawâr" loss, damage, have been "quite disposed of" and the word has a Romance origin. Averages as a rhetorical tool. Due to the aforementioned colloquial nature of the term "average", the term can be used to obfuscate the true meaning of data and suggest varying answers to questions based on the averaging method (most frequently arithmetic mean, median, or mode) used. In his article "Framed for Lying: Statistics as In/Artistic Proof", University of Pittsburgh faculty member Daniel Libertz comments that statistical information is frequently dismissed from rhetorical arguments for this reason. However, due to their persuasive power, averages and other statistical values should not be discarded completely, but instead used and interpreted with caution. Libertz invites us to engage critically not only with statistical information such as averages, but also with the language used to describe the data and its uses, saying: "If statistics rely on interpretation, rhetors should invite their audience to interpret rather than insist on an interpretation." In many cases, data and specific calculations are provided to help facilitate this audience-based interpretation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y = f^{-1}\\left(\\frac{1}{n}\\left[f(x_1) + f(x_2) + \\cdots + f(x_n)\\right]\\right)" } ]
https://en.wikipedia.org/wiki?curid=60167
60170617
Ruby pressure scale
Method used in diamond anvil cells The ruby fluorescence pressure scale is an optical method to measure pressure within the sample chamber of a diamond anvil cell apparatus. Since it is an optical method, which fully makes use of the transparency of diamond anvils and only requires access to a small-scale laser source, it has become the most prevalent pressure-gauge method in high-pressure science. Principles. Ruby is chromium-doped corundum (Al2O3). The Cr3+ in corundum's lattice forms an octahedron with the surrounding oxygen ions. The octahedral crystal field, together with spin-orbit interaction, results in different energy levels. Once the 3d electrons in Cr3+ are energized by a laser, the excited electrons go to the 4T2 and 2T2 levels. They then relax to the 2E levels, and the R1 and R2 lines come from luminescence from the 2E levels to the 4A2 ground level. The energy difference between the 2E levels is 29 cm−1, corresponding to a splitting of the R1 and R2 lines of 1.39 nm. Development. The ruby fluorescence spectrum has two strong, sharp lines, R1 and R2. R1 refers to the line with stronger intensity and lower energy (longer wavelength) and is used to gauge pressure. Pressure is calculated as: formula_0, where λ0 is the R1 wavelength measured at 1 atm, and a and b are constants (e.g. a = 19.04, b = 5). Since first demonstrated by Forman and colleagues in 1972, many scientists have contributed to the establishment of an accurate ruby pressure scale under various experimental conditions.
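As a rough numerical illustration of the relation above (not from the original article), the following Python sketch evaluates the pressure for an assumed ambient R1 wavelength λ0 of about 694.25 nm and a measured wavelength of 700 nm; both numbers are example values rather than calibration data from any particular study.

def ruby_pressure_mbar(lam_nm, lam0_nm=694.25, a=19.04, b=5.0):
    # P(Mbar) = (a / b) * [ (lambda / lambda0)**b - 1 ], with a given in Mbar.
    return (a / b) * ((lam_nm / lam0_nm) ** b - 1.0)

print(ruby_pressure_mbar(700.0))  # roughly 0.16 Mbar, i.e. about 16 GPa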
[ { "math_id": 0, "text": "P(Mbar)=\\frac{a}{b}[\\left ( \\frac{\\lambda}{\\lambda_0} \\right )^b-1]" } ]
https://en.wikipedia.org/wiki?curid=60170617
6018334
Bending moment
Force tending to bend a structural element In solid mechanics, a bending moment is the reaction induced in a structural element when an external force or moment is applied to the element, causing the element to bend. The most common or simplest structural element subjected to bending moments is the beam. The diagram shows a beam which is simply supported (free to rotate and therefore lacking bending moments) at both ends; the ends can only react to the shear loads. Other beams can have both ends fixed (known as encastre beam); therefore each end support has both bending moments and shear reaction loads. Beams can also have one end fixed and one end simply supported. The simplest type of beam is the cantilever, which is fixed at one end and is free at the other end (neither simple nor fixed). In reality, beam supports are usually neither absolutely fixed nor absolutely rotating freely. The internal reaction loads in a cross-section of the structural element can be resolved into a resultant force and a resultant couple. For equilibrium, the moment created by external forces/moments must be balanced by the couple induced by the internal loads. The resultant internal couple is called the bending moment while the resultant internal force is called the "shear force" (if it is transverse to the plane of element) or the "normal force" (if it is along the plane of the element). Normal force is also termed as axial force. The bending moment at a section through a structural element may be defined as the sum of the moments about that section of all external forces acting to one side of that section. The forces and moments on either side of the section must be equal in order to counteract each other and maintain a state of equilibrium so the same bending moment will result from summing the moments, regardless of which side of the section is selected. If clockwise bending moments are taken as negative, then a negative bending moment within an element will cause "hogging", and a positive moment will cause "sagging". It is therefore clear that a point of zero bending moment within a beam is a point of contraflexure—that is, the point of transition from hogging to sagging or vice versa. Moments and torques are measured as a force multiplied by a distance so they have as unit newton-metres (N·m), or pound-foot (lb·ft). The concept of bending moment is very important in engineering (particularly in civil and mechanical engineering) and physics. Background. Tensile and compressive stresses increase proportionally with bending moment, but are also dependent on the second moment of area of the cross-section of a beam (that is, the shape of the cross-section, such as a circle, square or I-beam being common structural shapes). Failure in bending will occur when the bending moment is sufficient to induce tensile/compressive stresses greater than the yield stress of the material throughout the entire cross-section. In structural analysis, this bending failure is called a plastic hinge, since the full load carrying ability of the structural element is not reached until the full cross-section is past the yield stress. It is possible that failure of a structural element in shear may occur before failure in bending, however the mechanics of failure in shear and in bending are different. Moments are calculated by multiplying the external vector forces (loads or reactions) by the vector distance at which they are applied. 
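To make the preceding points concrete, here is a minimal Python sketch (not from the original article). It evaluates the bending moment produced by a point load at the fixed end of a cantilever and then the extreme-fibre bending stress of a rectangular cross-section using the standard flexure formula sigma = M*c/I with I = b*h^3/12, a result of beam theory that is not derived in this article; the load, dimensions, and yield stress are invented example values.

# Point load F at the free end of a cantilever of length L:
# the maximum bending moment, at the fixed end, has magnitude M = F * L.
F = 2_000.0        # N, example load
L = 1.5            # m, example beam length
M = F * L          # N·m

# Rectangular cross-section b x h: second moment of area I = b*h**3/12,
# extreme-fibre distance c = h/2, bending stress sigma = M*c/I.
b, h = 0.05, 0.10  # m, example cross-section
I = b * h**3 / 12
sigma = M * (h / 2) / I   # Pa

yield_stress = 250e6      # Pa, roughly that of a mild steel
print(f"M = {M:.0f} N·m, sigma = {sigma / 1e6:.1f} MPa,"
      f" exceeds yield: {sigma > yield_stress}")

With these example numbers the extreme-fibre stress is about 36 MPa, well below the assumed yield stress, so this particular section would not fail in bending.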
When analysing an entire element, it is sensible to calculate moments at both ends of the element, at the beginning, centre and end of any uniformly distributed loads, and directly underneath any point loads. Of course any "pin-joints" within a structure allow free rotation, and so zero moment occurs at these points as there is no way of transmitting turning forces from one side to the other. It is more common to use the convention that a clockwise bending moment to the left of the point under consideration is taken as positive. This then corresponds to the second derivative of a function which, when positive, indicates a curvature that is 'lower at the centre' i.e. sagging. When defining moments and curvatures in this way calculus can be more readily used to find slopes and deflections. Critical values within the beam are most commonly annotated using a bending moment diagram, where negative moments are plotted to scale above a horizontal line and positive below. Bending moment varies linearly over unloaded sections, and parabolically over uniformly loaded sections. Engineering descriptions of the computation of bending moments can be confusing because of unexplained sign conventions and implicit assumptions. The descriptions below use vector mechanics to compute moments of force and bending moments in an attempt to explain, from first principles, why particular sign conventions are chosen. Computing the moment of force. An important part of determining bending moments in practical problems is the computation of moments of force. Let formula_0 be a force vector acting at a point A in a body. The moment of this force about a reference point (O) is defined as formula_1 where formula_2 is the moment vector and formula_3 is the position vector from the reference point (O) to the point of application of the force (A). The formula_4 symbol indicates the vector cross product. For many problems, it is more convenient to compute the moment of force about an axis that passes through the reference point O. If the unit vector along the axis is formula_5, the moment of force about the axis is defined as formula_6 where formula_7 indicates the vector dot product. Example. The adjacent figure shows a beam that is acted upon by a force formula_8. If the coordinate system is defined by the three unit vectors formula_9, we have the following formula_10 Therefore, formula_11 The moment about the axis formula_12 is then formula_13 Sign conventions. The negative value suggests that a moment that tends to rotate a body clockwise around an axis should have a negative sign. However, the actual sign depends on the choice of the three axes formula_9. For instance, if we choose another right handed coordinate system with formula_14, we have formula_15 Then, formula_16 For this new choice of axes, a positive moment tends to rotate body clockwise around an axis. Computing the bending moment. In a rigid body or in an unconstrained deformable body, the application of a moment of force causes a pure rotation. But if a deformable body is constrained, it develops internal forces in response to the external force so that equilibrium is maintained. An example is shown in the figure below. These internal forces will cause local deformations in the body. For equilibrium, the sum of the internal force vectors is equal to the negative of the sum of the applied external forces, and the sum of the moment vectors created by the internal forces is equal to the negative of the moment of the external force. 
The internal force and moment vectors are oriented in such a way that the total force (internal + external) and moment (external + internal) of the system are zero. The internal moment vector is called the bending moment. Though bending moments have been used to determine the stress states in arbitrarily shaped structures, the physical interpretation of the computed stresses is problematic. However, bending moments in beams and plates have a straightforward interpretation as the stress resultants in a cross-section of the structural element. For example, in a beam in the figure, the bending moment vector due to stresses in the cross-section "A" perpendicular to the "x"-axis is given by formula_17 Expanding this expression, we have formula_18 We define the bending moment components as formula_19 The internal moments are computed about an origin that is at the neutral axis of the beam or plate and the integration is through the thickness (formula_20). Example. In the beam shown in the adjacent figure, the external forces are the applied force at point A (formula_21) and the reactions at the two support points O and B (formula_22 and formula_23). For this situation, the only non-zero component of the bending moment is formula_24 where formula_25 is the height in the formula_26 direction of the beam. The minus sign is included to satisfy the sign convention. In order to calculate formula_27, we begin by balancing the forces, which gives one equation with the two unknown reactions, formula_28 To obtain each reaction, a second equation is required. Balancing the moments about any arbitrary point X would give us a second equation we can use to solve for formula_29 and formula_30 in terms of formula_8. Balancing about the point O is simplest, but let's balance about point A just to illustrate the point, i.e. formula_31 If formula_32 is the length of the beam, we have formula_33 Evaluating the cross-products: formula_34 If we solve for the reactions we have formula_35 Now to obtain the internal bending moment at X we sum all the moments about the point X due to all the external forces to the right of X (on the positive formula_36 side), and there is only one contribution in this case, formula_37 We can check this answer by looking at the free body diagram and the part of the beam to the left of point X; the total moment due to these external forces is formula_38 If we compute the cross products, we have formula_39 Because of equilibrium, the internal bending moment due to the external forces to the left of X must be exactly balanced by the moment obtained by considering the part of the beam to the right of X, formula_40 which is clearly the case. Sign convention. In the above discussion, it is implicitly assumed that the bending moment is positive when the top of the beam is compressed. That can be seen if we consider a linear distribution of stress in the beam and find the resulting bending moment. Let the top of the beam be in compression with a stress formula_41 and let the bottom of the beam have a stress formula_42. Then the stress distribution in the beam is formula_43. The bending moment due to these stresses is formula_44 where formula_45 is the area moment of inertia of the cross-section of the beam. Therefore, the bending moment is positive when the top of the beam is in compression.
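The worked example above can be checked numerically. The short Python sketch below (not part of the original article) uses the reactions formula_35 and the internal bending moment formula_37 for a section X taken to the right of the applied load; the load, beam length, and positions are arbitrary example values.

import numpy as np

F, L, x_A, x = 100.0, 2.0, 0.5, 1.2   # example values, with x to the right of the load at x_A

# Reactions from the force and moment balance derived above.
R_O = (1 - x_A / L) * F
R_B = (x_A / L) * F

e_x, e_y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Internal bending moment at X from the external force to the right of X.
M_xz = np.cross((L - x) * e_x, R_B * e_y)

# Moment at X of the external forces to the left of X (applied load and R_O).
M_left = np.cross((x_A - x) * e_x, -F * e_y) + np.cross(-x * e_x, R_O * e_y)

print(R_O + R_B - F)   # 0.0: vertical force balance
print(M_left + M_xz)   # [0. 0. 0.]: the two moments cancel, as the text claims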
Many authors follow a different convention in which the stress resultant formula_46 is defined as formula_47 In that case, positive bending moments imply that the top of the beam is in tension. Of course, the definition of top depends on the coordinate system being used. In the examples above, the top is the location with the largest formula_48-coordinate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{F}" }, { "math_id": 1, "text": "\n \\mathbf{M} = \\mathbf{r} \\times \\mathbf{F}\n " }, { "math_id": 2, "text": "\\mathbf{M}" }, { "math_id": 3, "text": "\\mathbf{r}" }, { "math_id": 4, "text": "\\times" }, { "math_id": 5, "text": "\\mathbf{e}" }, { "math_id": 6, "text": "\n M = \\mathbf{e}\\cdot\\mathbf{M} = \\mathbf{e}\\cdot(\\mathbf{r} \\times \\mathbf{F})\n " }, { "math_id": 7, "text": "\\cdot" }, { "math_id": 8, "text": "F" }, { "math_id": 9, "text": "\\mathbf{e}_x, \\mathbf{e}_y, \\mathbf{e}_z" }, { "math_id": 10, "text": "\n \\mathbf{F} = 0\\,\\mathbf{e}_x - F\\,\\mathbf{e}_y + 0\\,\\mathbf{e}_z\n \\quad \\text{and} \\quad \\mathbf{r} = x\\,\\mathbf{e}_x + 0\\,\\mathbf{e}_y + 0\\,\\mathbf{e}_z \\,.\n " }, { "math_id": 11, "text": "\n \\mathbf{M} = \\mathbf{r}\\times\\mathbf{F} = \\left|\\begin{matrix}\\mathbf{e}_x & \\mathbf{e}_y & \\mathbf{e}_z \\\\ x & 0 & 0 \\\\ 0 & -F & 0 \n \\end{matrix}\\right| = -Fx\\,\\mathbf{e}_z \\,.\n " }, { "math_id": 12, "text": "\\mathbf{e}_z" }, { "math_id": 13, "text": "\n M_z = \\mathbf{e}_z\\cdot\\mathbf{M} = -Fx \\,.\n " }, { "math_id": 14, "text": "\\mathbf{E}_x = \\mathbf{e}_x, \\mathbf{E}_y = -\\mathbf{e}_z, \\mathbf{E}_z = \\mathbf{e}_y" }, { "math_id": 15, "text": "\n \\mathbf{F} = 0\\,\\mathbf{E}_x + 0\\,\\mathbf{E}_y -F\\,\\mathbf{E}_z\n \\quad \\text{and} \\quad \\mathbf{r} = x\\,\\mathbf{E}_x + 0\\,\\mathbf{E}_y + 0\\,\\mathbf{E}_z \\,.\n " }, { "math_id": 16, "text": "\n \\mathbf{M} = \\mathbf{r}\\times\\mathbf{F} = \\left|\\begin{matrix}\\mathbf{E}_x & \\mathbf{E}_y & \\mathbf{E}_z \\\\ x & 0 & 0 \\\\ 0 & 0 & -F \n \\end{matrix}\\right| = Fx\\,\\mathbf{E}_y \n \\quad \\text{and} \\quad M_y = \\mathbf{E}_y\\cdot\\mathbf{M} = Fx \\,.\n " }, { "math_id": 17, "text": "\n \\mathbf{M}_x = \\int_A \\mathbf{r} \\times (\\sigma_{xx} \\mathbf{e}_x + \\sigma_{xy} \\mathbf{e}_y + \\sigma_{xz} \\mathbf{e}_z)\\, dA \n \\quad \\text{where} \\quad \n \\mathbf{r} = y\\,\\mathbf{e}_y + z\\,\\mathbf{e}_z \\,.\n " }, { "math_id": 18, "text": "\n \\mathbf{M}_x = \\int_A \\left(-y\\sigma_{xx}\\mathbf{e}_z + y\\sigma_{xz}\\mathbf{e}_x + z\\sigma_{xx}\\mathbf{e}_y - z\\sigma_{xy}\\mathbf{e}_x\\right)dA =: M_{xx}\\,\\mathbf{e}_x + M_{xy}\\,\\mathbf{e}_y + M_{xz}\\,\\mathbf{e}_z\\,.\n " }, { "math_id": 19, "text": "\n \\begin{bmatrix} M_{xx} \\\\ M_{xy} \\\\M_{xz} \\end{bmatrix}\n := \\int_A \\begin{bmatrix} y\\sigma_{xz} - z\\sigma_{xy} \\\\ z\\sigma_{xx} \\\\ -y\\sigma_{xx} \\end{bmatrix}\\,dA \\,.\n " }, { "math_id": 20, "text": "h" }, { "math_id": 21, "text": "-F\\mathbf{e}_y" }, { "math_id": 22, "text": "\\mathbf{R}_O = R_O\\mathbf{e}_y" }, { "math_id": 23, "text": " \\mathbf{R}_B = R_B\\mathbf{e}_y" }, { "math_id": 24, "text": "\n \\mathbf{M}_{xz} = -\\left[\\int_z\\left[\\int_0^h y\\,\\sigma_{xx}\\,dy\\right]\\,dz\\right]\\mathbf{e}_z \\,.\n " }, { "math_id": 25, "text": " h" }, { "math_id": 26, "text": " y" }, { "math_id": 27, "text": "\\mathbf{M}_{xz}" }, { "math_id": 28, "text": "\n R_O + R_B - F = 0 \\,.\n " }, { "math_id": 29, "text": "R_0" }, { "math_id": 30, "text": "R_B" }, { "math_id": 31, "text": "\n -\\mathbf{r}_A\\times\\mathbf{R}_O + (\\mathbf{r}_B-\\mathbf{r}_A)\\times\\mathbf{R}_B = \\mathbf{0} \\,.\n " }, { "math_id": 32, "text": "L" }, { "math_id": 33, "text": "\n \\mathbf{r}_A = x_A\\mathbf{e}_x \\quad \\text{and} \\quad \\mathbf{r}_B = L\\mathbf{e}_x \\,.\n " }, { "math_id": 34, "text": "\n \\left|\\begin{matrix}\\mathbf{e}_x & \\mathbf{e}_y & \\mathbf{e}_z \\\\ -x_A & 0 & 0 \\\\ 0 & R_0 & 0 
\\end{matrix}\\right| +\n \\left|\\begin{matrix}\\mathbf{e}_x & \\mathbf{e}_y & \\mathbf{e}_z \\\\ L-x_A & 0 & 0 \\\\ 0 & R_B & 0 \\end{matrix}\\right|\n = -x_AR_0\\,\\mathbf{e}_z +(L-x_A)R_B\\,\\mathbf{e}_z = 0 \\,.\n " }, { "math_id": 35, "text": "\n R_O = \\left(1 - \\frac{x_A}{L}\\right) F \\quad \\text{and} \\quad R_B = \\frac{x_A}{L}\\,F \\,.\n " }, { "math_id": 36, "text": "x" }, { "math_id": 37, "text": "\n \\mathbf{M}_{xz}= (\\mathbf{r}_B-\\mathbf{r}_X)\\times\\mathbf{R}_B\n = \\left|\\begin{matrix}\\mathbf{e}_x & \\mathbf{e}_y & \\mathbf{e}_z \\\\ L - x & 0 & 0 \\\\ 0 & R_B & 0 \\end{matrix}\\right| \n= \\frac{F x_A}{L}(L-x)\\,\\mathbf{e}_z \\,.\n " }, { "math_id": 38, "text": "\n \\mathbf{M} = (\\mathbf{r}_A-\\mathbf{r}_X)\\times\\mathbf{F} + (-\\mathbf{r}_X)\\times\\mathbf{R}_O = \n \\left[(x_A-x)\\mathbf{e}_x\\right]\\times\\left(-F\\mathbf{e}_y\\right)\n + \\left(-x\\mathbf{e}_x\\right)\\times\\left(R_O\\mathbf{e}_y\\right) \\,.\n" }, { "math_id": 39, "text": "\n \\mathbf{M} \n = \\left|\\begin{matrix}\\mathbf{e}_x & \\mathbf{e}_y & \\mathbf{e}_z \\\\ x_A - x & 0 & 0 \\\\ 0 & -F & 0 \\end{matrix}\\right| +\n \\left|\\begin{matrix}\\mathbf{e}_x & \\mathbf{e}_y & \\mathbf{e}_z \\\\ -x & 0 & 0 \\\\ 0 & R_0 & 0 \\end{matrix}\\right|\n = F(x-x_A)\\,\\mathbf{e}_z -R_0x\\,\\mathbf{e}_z = -\\frac{F x_A}{L}(L-x)\\,\\mathbf{e}_z \\,.\n " }, { "math_id": 40, "text": "\n \\mathbf{M} + \\mathbf{M}_{xz} = \\mathbf{0} \\,.\n " }, { "math_id": 41, "text": "-\\sigma_0" }, { "math_id": 42, "text": "\\sigma_0" }, { "math_id": 43, "text": "\\sigma_{xx}(y) = -y\\sigma_0" }, { "math_id": 44, "text": "\n M_{xz} = -\\left[\\int_z\\int_{-h/2}^{h/2} y\\,(-y\\sigma_0)\\,dy\\,dz\\right] = \\sigma_0\\,I\n " }, { "math_id": 45, "text": "I" }, { "math_id": 46, "text": "M_{xz}" }, { "math_id": 47, "text": "\n \\mathbf{M}_{xz} = \\left[\\int_z\\int_{-h/2}^{h/2} y\\,\\sigma_{xx}\\,dy\\,dz\\right]\\mathbf{e}_z \\,.\n " }, { "math_id": 48, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=6018334
6018468
Form factor (quantum field theory)
Function approximating net physical effect In elementary particle physics and mathematical physics, in particular in effective field theory, a form factor is a function that encapsulates the properties of a certain particle interaction without including all of the underlying physics, but instead, providing the momentum dependence of suitable matrix elements. It is further measured experimentally in confirmation or specification of a theory—see experimental particle physics. Photon–nucleon example. For example, at low energies the interaction of a photon with a nucleon is a very complicated calculation involving interactions between the photon and a sea of quarks and gluons, and often the calculation cannot be fully performed from first principles. Often in this context, form factors are also called "structure functions", since they can be used to describe the structure of the nucleon. However, the generic Lorentz-invariant form of the matrix element for the electromagnetic current interaction is known, formula_0 where formula_1 represents the photon momentum (equal in magnitude to "E"/"c", where "E" is the energy of the photon). The three functions: formula_2 are associated to the electric and magnetic form factors for this interaction, and are routinely measured experimentally; these three effective vertices can then be used to check, or perform calculations that would otherwise be too difficult to perform from first principles. This matrix element then serves to determine the transition amplitude involved in the scattering interaction or the respective particle decay—cf. Fermi's golden rule. In general, the Fourier transforms of form factor components correspond to electric charge or magnetic profile space distributions (such as the charge radius) of the hadron involved. The analogous QCD structure functions are a probe of the quark and gluon distributions of nucleons.
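As an illustration of the last point (not from the original article), the sketch below uses the textbook dipole parameterization of the proton's electric form factor, G_E(Q^2) = (1 + Q^2/Λ^2)^−2 with Λ^2 ≈ 0.71 GeV^2, and extracts the corresponding charge radius from the slope at Q^2 = 0 via ⟨r^2⟩ = −6 dG_E/dQ^2. The parameterization and the numerical constants are standard approximations quoted here only for illustration; they are not taken from this article.

import math

HBARC_GEV_FM = 0.1973   # hbar*c in GeV*fm, used to convert GeV^-1 to fm
LAMBDA2 = 0.71          # GeV^2, approximate dipole mass parameter

def G_E(Q2, L2=LAMBDA2):
    # Dipole parameterization of the electric form factor.
    return (1.0 + Q2 / L2) ** -2

# <r^2> = -6 dG_E/dQ^2 at Q^2 = 0; for the dipole form this equals 12 / LAMBDA2 (in GeV^-2).
r2_gev = 12.0 / LAMBDA2
r_fm = math.sqrt(r2_gev) * HBARC_GEV_FM
print(f"G_E(0.1 GeV^2) = {G_E(0.1):.3f}, charge radius about {r_fm:.2f} fm")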
[ { "math_id": 0, "text": "\\varepsilon_\\mu \\bar{N}\\left(\\alpha(q^2) \\gamma^\\mu + \\beta(q^2) q^\\mu + \\kappa(q^2) \\sigma^{\\mu \\nu} q_\\nu \\right)N \\, " }, { "math_id": 1, "text": "q^\\mu" }, { "math_id": 2, "text": " \\alpha, \\beta , \\kappa " } ]
https://en.wikipedia.org/wiki?curid=6018468
60187666
Sums of three cubes
Problem in number theory &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Is there a number that is not 4 or 5 modulo 9 and that cannot be expressed as a sum of three cubes? In the mathematics of sums of powers, it is an open problem to characterize the numbers that can be expressed as a sum of three cubes of integers, allowing both positive and negative cubes in the sum. A necessary condition for an integer formula_1 to equal such a sum is that formula_1 cannot equal 4 or 5 modulo 9, because the cubes modulo 9 are 0, 1, and −1, and no three of these numbers can sum to 4 or 5 modulo 9. It is unknown whether this necessary condition is sufficient. Variations of the problem include sums of non-negative cubes and sums of rational cubes. All integers have a representation as a sum of rational cubes, but it is unknown whether the sums of non-negative cubes form a set with non-zero natural density. Small cases. A nontrivial representation of 0 as a sum of three cubes would give a counterexample to Fermat's Last Theorem for the exponent three, as one of the three cubes would have the opposite sign from the other two and its negation would equal the sum of the other two. Therefore, by Leonhard Euler's proof of that case of Fermat's last theorem, there are only the trivial solutions formula_2 For representations of 1 and 2, there are infinite families of solutions formula_3 (discovered by K. Mahler in 1936) and formula_4 (discovered by A.S. Verebrusov in 1908, quoted by L.J. Mordell). These can be scaled to obtain representations for any cube or any number that is twice a cube. There are also other known representations of 2 that are not given by these infinite families: formula_5 formula_6 formula_7 However, 1 and 2 are the only numbers with representations that can be parameterized by quartic polynomials as above. Even in the case of representations of 3, Louis J. Mordell wrote in 1953 "I do not know anything" more than its small solutions formula_8 and the fact that each of the three cubed numbers must be equal modulo 9. Computational results. Since 1955, and starting with the instigation of Mordell, many authors have implemented computational searches for these representations. One such search used a method of Noam Elkies (2000) involving lattice reduction to search for all solutions to the Diophantine equation formula_0 for positive formula_1 at most 1000 and for formula_9, leaving only 33, 42, 74, 114, 165, 390, 579, 627, 633, 732, 795, 906, 921, and 975 as open problems in 2009 for formula_10, while 192, 375, and 600 remained with no known primitive solutions (i.e. formula_11). After Timothy Browning covered the problem on Numberphile in 2016, Huisman extended these searches to formula_12, solving the case of 74 with the solution formula_13 Through these searches, it was discovered that all formula_14 that are unequal to 4 or 5 modulo 9 have a solution, with at most two exceptions, 33 and 42. However, in 2019, Andrew Booker settled the case formula_15 by discovering that formula_16 In order to achieve this, Booker exploited an alternative search strategy with running time proportional to formula_17 rather than to their maximum, an approach originally suggested by Heath-Brown et al. He also found that formula_18 and established that there are no solutions for formula_19 or any of the other unresolved formula_20 with formula_21.
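The searches described above rely on sophisticated algorithms: lattice reduction in the earlier work, and Booker's strategy of iterating over the smallest of the three variables. Purely as an illustration of the problem, and not of how those searches were actually implemented, the following Python sketch performs a naive bounded search for x^3 + y^3 + z^3 = n, first skipping values of n that are 4 or 5 modulo 9; the bound is an arbitrary small example value, and this approach does not scale to the record solutions quoted in this article.

def three_cubes(n, bound):
    # Naive search over |x|, |y| <= bound, solving for z by rounding a cube root.
    if n % 9 in (4, 5):
        return None                      # no representation can exist
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            r = n - x**3 - y**3
            z = round(abs(r) ** (1 / 3)) * (1 if r >= 0 else -1)
            for c in (z - 1, z, z + 1):  # guard against floating-point rounding
                if x**3 + y**3 + c**3 == n:
                    return (x, y, c)
    return None

print(three_cubes(29, 5))   # (-3, -2, 4): (-3)^3 + (-2)^3 + 4^3 = 29
print(three_cubes(13, 5))   # None: 13 is 4 modulo 9, so no representation exists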
Shortly thereafter, in September 2019, Booker and Andrew Sutherland finally settled the formula_19 case, using 1.3 million hours of computing on the Charity Engine global grid to discover that formula_22 as well as solutions for several other previously unknown cases including formula_23 and formula_24 for formula_20. Booker and Sutherland also found a third representation of 3 using a further 4 million compute-hours on Charity Engine: formula_25 This discovery settled a 65-year-old question of Louis J. Mordell that has stimulated much of the research on this problem. While presenting the third representation of 3 during his appearance in a video on the Youtube channel Numberphile, Booker also presented a representation for 906: formula_26 The only remaining unsolved cases up to 1,000 are the seven numbers 114, 390, 627, 633, 732, 921, and 975, and there are no known primitive solutions (i.e. formula_11) for 192, 375, and 600. Popular interest. The sums of three cubes problem has been popularized in recent years by Brady Haran, creator of the YouTube channel Numberphile, beginning with the 2015 video "The Uncracked Problem with 33" featuring an interview with Timothy Browning. This was followed six months later by the video "74 is Cracked" with Browning, discussing Huisman's 2016 discovery of a solution for 74. In 2019, Numberphile published three related videos, "42 is the new 33", "The mystery of 42 is solved", and "3 as the sum of 3 cubes", to commemorate the discovery of solutions for 33, 42, and the new solution for 3. Booker's solution for 33 was featured in articles appearing in "Quanta Magazine" and "New Scientist", as well as an article in "Newsweek" in which Booker's collaboration with Sutherland was announced: "...the mathematician is now working with Andrew Sutherland of MIT in an attempt to find the solution for the final unsolved number below a hundred: 42". The number 42 has additional popular interest due to its appearance in the 1979 Douglas Adams science fiction novel "The Hitchhiker's Guide to the Galaxy" as the answer to The Ultimate Question of Life, the Universe, and Everything. Booker and Sutherland's announcements of a solution for 42 received international press coverage, including articles in "New Scientist", "Scientific American", "Popular Mechanics", "The Register", "Die Zeit", "Der Tagesspiegel", "Helsingin Sanomat", "Der Spiegel", "New Zealand Herald", "Indian Express", "Der Standard", "Las Provincias", Nettavisen, Digi24, and BBC World Service. "Popular Mechanics" named the solution for 42 as one of the "10 Biggest Math Breakthroughs of 2019". The resolution of Mordell's question by Booker and Sutherland a few weeks later sparked another round of news coverage. In Booker's invited talk at the fourteenth Algorithmic Number Theory Symposium he discusses some of the popular interest in this problem and the public reaction to the announcement of solutions for 33 and 42. Solvability and decidability. In 1992, Roger Heath-Brown conjectured that every formula_1 unequal to 4 or 5 modulo 9 has infinitely many representations as sums of three cubes. The case formula_15 of this problem was used by Bjorn Poonen as the opening example in a survey on undecidable problems in number theory, of which Hilbert's tenth problem is the most famous example. Although this particular case has since been resolved, it is unknown whether representing numbers as sums of cubes is decidable. 
That is, it is not known whether an algorithm can, for every input, test in finite time whether a given number has such a representation. If Heath-Brown's conjecture is true, the problem is decidable. In this case, an algorithm could correctly solve the problem by computing formula_1 modulo 9, returning false when this is 4 or 5, and otherwise returning true. Heath-Brown's research also includes more precise conjectures on how far an algorithm would have to search to find an explicit representation rather than merely determining whether one exists. Variations. A variant of this problem related to Waring's problem asks for representations as sums of three cubes of non-negative integers. In the 19th century, Carl Gustav Jacob Jacobi and collaborators compiled tables of solutions to this problem. It is conjectured that the representable numbers have positive natural density. This remains unknown, but Trevor Wooley has shown that formula_27 of the numbers from formula_28 to formula_1 have such representations. The density is at most formula_29. Every integer can be represented as a sum of three cubes of rational numbers (rather than as a sum of cubes of integers). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x^3+y^3+z^3=n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "a^3 + (-a)^3 + 0^3 = 0." }, { "math_id": 3, "text": "(9b^4)^3+(3b-9b^4)^3+(1-9b^3)^3=1" }, { "math_id": 4, "text": "(1+6c^3)^3+(1-6c^3)^3+(-6c^2)^3=2" }, { "math_id": 5, "text": "1\\ 214\\ 928^3 + 3\\ 480\\ 205^3 + (-3\\ 528\\ 875)^3 = 2, " }, { "math_id": 6, "text": "37\\ 404\\ 275\\ 617^3 + (-25\\ 282\\ 289\\ 375)^3 + (-33\\ 071\\ 554\\ 596)^3 = 2," }, { "math_id": 7, "text": "3\\ 737\\ 830\\ 626\\ 090^3 + 1\\ 490\\ 220\\ 318\\ 001^3 + (-3\\ 815\\ 176\\ 160\\ 999)^3 = 2." }, { "math_id": 8, "text": "1^3+1^3+1^3=4^3+4^3+(-5)^3=3 " }, { "math_id": 9, "text": "\\max(|x|,|y|,|z|)<10^{14}" }, { "math_id": 10, "text": "n\\le 1000" }, { "math_id": 11, "text": "\\gcd(x,y,z)=1" }, { "math_id": 12, "text": "\\max(|x|,|y|,|z|)<10^{15}" }, { "math_id": 13, "text": "74=(-284\\ 650\\ 292\\ 555\\ 885)^3+66\\ 229\\ 832\\ 190\\ 556^3+283\\ 450\\ 105\\ 697\\ 727^3." }, { "math_id": 14, "text": "n < 100" }, { "math_id": 15, "text": "n=33" }, { "math_id": 16, "text": "33=8\\ 866\\ 128\\ 975\\ 287\\ 528^3+(-8\\ 778\\ 405\\ 442\\ 862\\ 239)^3+(-2\\ 736\\ 111\\ 468\\ 807\\ 040)^3." }, { "math_id": 17, "text": "\\min(|x|,|y|,|z|)" }, { "math_id": 18, "text": "795=(-14\\ 219\\ 049\\ 725\\ 358\\ 227)^3 + 14\\ 197\\ 965\\ 759\\ 741\\ 571^3 + 2\\ 337\\ 348\\ 783\\ 323\\ 923^3," }, { "math_id": 19, "text": "n=42" }, { "math_id": 20, "text": "n \\le 1000" }, { "math_id": 21, "text": "|z|\\le 10^{16}" }, { "math_id": 22, "text": "42=(-80\\ 538\\ 738\\ 812\\ 075\\ 974)^3 + 80\\ 435\\ 758\\ 145\\ 817\\ 515^3 + 12\\ 602\\ 123\\ 297\\ 335\\ 631^3," }, { "math_id": 23, "text": "n=165" }, { "math_id": 24, "text": "579" }, { "math_id": 25, "text": "3 = 569\\ 936\\ 821\\ 221\\ 962\\ 380\\ 720^3 + (-569\\ 936\\ 821\\ 113\\ 563\\ 493\\ 509)^3 + (-472\\ 715\\ 493\\ 453\\ 327\\ 032)^3." }, { "math_id": 26, "text": "906 = (-74\\ 924\\ 259\\ 395\\ 610\\ 397)^3 + 72\\ 054\\ 089\\ 679\\ 353\\ 378^3 + 35\\ 961\\ 979\\ 615\\ 356\\ 503^3." }, { "math_id": 27, "text": "\\Omega(n^{0.917})" }, { "math_id": 28, "text": "1" }, { "math_id": 29, "text": "\\Gamma(4/3)^3/6\\approx 0.119" } ]
https://en.wikipedia.org/wiki?curid=60187666
6019
Computational chemistry
Branch of chemistry Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, they can occasionally predict unobserved chemical phenomena. Overview. Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the use of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. Historically, computational chemistry has had two different aspects. These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms. History. Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 "Introduction to Quantum Mechanics – with Applications to Chemistry", Eyring, Walter and Kimball's 1944 "Quantum Chemistry", Heitler's 1945 "Elementary Wave Mechanics – with Applications to Quantum Chemistry", and later Coulson's 1952 textbook "Valence", each of which served as primary references for chemists in the decades to follow. With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first "ab initio" Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s.
The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of "ab initio" calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in "ab initio" theory have been published by Schaefer. In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. In the early 1970s, efficient "ab initio" computer programs such as ATMOL, Gaussian, IBMOL, and POLYAYTOM, began to be used to speed "ab initio" calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2 force field, were developed, primarily by Norman Allinger. One of the first mentions of the term "computational chemistry" can be found in the 1970 book "Computers and Their Role in the Physical Sciences" by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of "computational chemistry". The "Journal of Computational Chemistry" was first published in 1980. Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". Applications. There are several fields within computational chemistry. These fields can give rise to several applications as shown below. Catalysis. Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory has allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties. Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions. Drug development. Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. 
The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by giving predictions of which experiments would be best to do without conducting other experiments. Computational methods can also find values that are difficult to find experimentally, such as the pKa values of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs. Aside from drug synthesis, computational chemists also research nanomaterial drug carriers. Such work allows researchers to simulate environments to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them. Computational chemistry databases. Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data. Empirical data helps researchers with their methods and basis sets to have greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry. Databases can also use purely calculated data. Purely calculated data uses calculated values over experimental values for databases. Purely calculated data avoids the need to adjust for different experimental conditions and corrections, such as zero-point energy. These calculations can also avoid experimental errors for difficult-to-test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data. Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Several chemistry databases are publicly available. Methods. "Ab initio" method. The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called "ab initio methods". A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made). Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms.
These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. A common type of "ab initio" electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, also referred to as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones. In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface. Computational thermochemistry. A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods. Chemical dynamics. After the electronic and nuclear variables are separated within the Born–Oppenheimer representation, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. Among the most popular methods for propagating the wave packet associated with the molecular geometry is the split operator technique, described below. Split operator technique. How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one of these methods for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems.
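The idea of operator splitting that the next paragraphs describe can be illustrated on a small linear system du/dt = (A + B)u, where the exact propagator over a step h is exp(h(A + B)) and the split propagators exp(hA)exp(hB) (first order) and exp(hA/2)exp(hB)exp(hA/2) (second order) are cheaper to apply but carry a splitting error. The following Python sketch uses made-up matrices and is only schematic; it is not the actual split-operator propagation of a molecular wave packet, where the roles of A and B are typically played by the kinetic and potential parts of the Hamiltonian.

import numpy as np
from scipy.linalg import expm

# Two small, non-commuting "operators" acting on a state vector.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.5, 0.0]])
u0 = np.array([1.0, 0.0])
h = 0.1                                  # time step

exact  = expm(h * (A + B)) @ u0
first  = expm(h * A) @ expm(h * B) @ u0                         # first-order splitting
second = expm(h * A / 2) @ expm(h * B) @ expm(h * A / 2) @ u0   # second-order (Strang) splitting

print(np.linalg.norm(first - exact))    # splitting error of order h^2 per step
print(np.linalg.norm(second - exact))   # error of order h^3 per step, noticeably smaller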
Computational cost refers to how much time it takes for computers to calculate these chemical systems; more complex systems can take days. Quantum systems are difficult and time-consuming to solve by hand. Split operator methods help computers calculate these systems quickly by solving the subproblems of a quantum differential equation. The method does this by separating the differential equation into two simpler equations (or more, when there are more than two operators). Once solved, the split equations are combined into one equation again to give an easily calculable solution. This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, consider the following solution of a differential equation: formula_0 The equation can be split, but the resulting solutions will not be exact, only approximate. The following is an example of first-order splitting: formula_1 There are ways to reduce this error, which include taking an average of two split equations. Another way to increase accuracy is to use higher-order splitting. Usually, second-order splitting is the most that is done, because higher-order splitting requires much more time to calculate and is not worth the cost: higher-order methods become too difficult to implement and, despite their higher accuracy, are rarely useful for these problems. Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. These calculations remain a major challenge for many chemists trying to simulate molecules or chemical environments. Density functional methods. Density functional theory (DFT) methods are often considered to be "ab initio methods" for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods. Semi-empirical methods. Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Primitive semi-empirical methods were designed even earlier, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics. Molecular mechanics.
In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or "ab initio" calculations. The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance, proteins, would be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules. Molecular dynamics. Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of the particles vary with time. The phase point of a system described by the positions and momenta of all its particles at a previous time point determines the next phase point in time by integrating Newton's laws of motion. Monte Carlo. Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method, which makes use of the so-called "importance sampling". Importance sampling methods are able to generate low-energy states, as this enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms. Quantum mechanics/molecular mechanics (QM/MM). QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes. Quantum Computational Chemistry. Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators. Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponentially growing size of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions. Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.
While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations. Computational costs in chemistry algorithms. The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms/computational methods to use when solving chemical problems.This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains. In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately. Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems. Algorithmic complexity examples. The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry. Molecular dynamics. Algorithm. Solves Newton's equations of motion for atoms and molecules. Complexity. The standard pairwise interaction calculation in MD leads to an formula_2complexity for formula_3 particles. This is because each particle interacts with every other particle, resulting in formula_4 interactions. Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to formula_5 or even formula_6 by grouping distant particles and treating them as a single entity or using clever mathematical approximations. Quantum mechanics/molecular mechanics (QM/MM). Algorithm. Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment. Complexity. The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as formula_7, where formula_8 is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved. Hartree-Fock method. Algorithm. Finds a single Fock state that minimizes the energy. Complexity. NP-hard or NP-complete as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations. The Hartree-Fock method involves solving the Roothaan-Hall equations, which scales as formula_9 to formula_6 depending on implementation, with formula_3 being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals. 
This proof of NP-hardness or NP-completeness comes from embedding problems like the Ising model into the Hartree-Fock formalism. Density functional theory. Algorithm. Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases. Complexity. Traditional implementations of DFT typically scale as formula_9, mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements. Standard CCSD and CCSD(T) method. Algorithm. CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects. Complexity. CCSD. Scales as formula_10 where formula_8 is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation. CCSD(T). With the addition of perturbative triples, the complexity increases to formula_11. This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations. Linear-scaling CCSD(T) method. Algorithm. An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems. Complexity. Achieves linear scaling with the system size, a major improvement over the seventh-power scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy. Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems. For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations. Accuracy. Computational chemistry is not an "exact" description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. 
Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of full relativistic-inclusive methods. This complicates the study of molecules interacting with high atomic mass unit atoms, such as transition metals and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT). There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM/MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM). Software packages. Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in the literature.
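To make the scaling discussion in the algorithmic complexity section above more concrete, the following Python sketch evaluates a Lennard-Jones potential over every particle pair, which is the naive quadratic-cost approach mentioned for molecular dynamics. The function name, parameter values and random coordinates are illustrative assumptions, not taken from any particular software package.

```python
import numpy as np

def lennard_jones_energy(positions, epsilon=1.0, sigma=1.0):
    """Naive O(N^2) total Lennard-Jones energy of a set of particles.

    positions: (N, 3) array of Cartesian coordinates (reduced units).
    epsilon, sigma: illustrative potential parameters.
    """
    n = len(positions)
    energy = 0.0
    # Loop over all N*(N-1)/2 unique pairs -- this double loop is the source
    # of the quadratic scaling discussed above.
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            sr6 = (sigma / r) ** 6
            energy += 4.0 * epsilon * (sr6 ** 2 - sr6)
    return energy

# Example: 100 randomly placed particles in a 10 x 10 x 10 box.
rng = np.random.default_rng(0)
print(lennard_jones_energy(rng.uniform(0.0, 10.0, size=(100, 3))))
```

Methods such as Ewald summation or the fast multipole method avoid this explicit double loop by grouping distant particles, which is how the improved scalings quoted above are obtained.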
[ { "math_id": 0, "text": "e^{h(A+B)} " }, { "math_id": 1, "text": "e^{h(A+B)} \\approx e^{hA}e^{hB} " }, { "math_id": 2, "text": "\\mathcal{O}(N^2)" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "\\frac{N(N-1)}{2}" }, { "math_id": 5, "text": "\\mathcal{O}(N \\log N)" }, { "math_id": 6, "text": "\\mathcal{O}(N)" }, { "math_id": 7, "text": "\\mathcal{O}(M^2)" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "\\mathcal{O}(N^3)" }, { "math_id": 10, "text": "\\mathcal{O}(M^6)" }, { "math_id": 11, "text": "\\mathcal{O}(M^7)" } ]
https://en.wikipedia.org/wiki?curid=6019
6019404
Class formation
In mathematics, a class formation is a topological group acting on a module satisfying certain conditions. Class formations were introduced by Emil Artin and John Tate to organize the various Galois groups and modules that appear in class field theory. Definitions. A formation is a topological group "G" together with a topological "G"-module "A" on which "G" acts continuously. A layer "E"/"F" of a formation is a pair of open subgroups "E", "F" of "G" such that "F" is a finite index subgroup of "E". It is called a normal layer if "F" is a normal subgroup of "E", and a cyclic layer if in addition the quotient group is cyclic. If "E" is a subgroup of "G", then "A""E" is defined to be the elements of "A" fixed by "E". We write "H""n"("E"/"F") for the Tate cohomology group "H""n"("E"/"F", "A""F") whenever "E"/"F" is a normal layer. (Some authors think of "E" and "F" as fixed fields rather than subgroup of "G", so write "F"/"E" instead of "E"/"F".) In applications, "G" is often the absolute Galois group of a field, and in particular is profinite, and the open subgroups therefore correspond to the finite extensions of the field contained in some fixed separable closure. A class formation is a formation such that for every normal layer "E"/"F" "H"1("E"/"F") is trivial, and "H"2("E"/"F") is cyclic of order |"E"/"F"|. In practice, these cyclic groups come provided with canonical generators "u""E"/"F" ∈ "H"2("E"/"F"), called fundamental classes, that are compatible with each other in the sense that the restriction (of cohomology classes) of a fundamental class is another fundamental class. Often the fundamental classes are considered to be part of the structure of a class formation. A formation that satisfies just the condition "H"1("E"/"F")=1 is sometimes called a field formation. For example, if "G" is any finite group acting on a field "L" and "A=L×", then this is a field formation by Hilbert's theorem 90. Examples. The most important examples of class formations (arranged roughly in order of difficulty) are as follows: It is easy to verify the class formation property for the finite field case and the archimedean local field case, but the remaining cases are more difficult. Most of the hard work of class field theory consists of proving that these are indeed class formations. This is done in several steps, as described in the sections below. The first inequality. The "first inequality" of class field theory states that |"H"0("E"/"F")| ≥ |"E"/"F"| for cyclic layers "E"/"F". It is usually proved using properties of the Herbrand quotient, in the more precise form |"H"0("E"/"F")| = |"E"/"F"|×|"H"1("E"/"F")|. It is fairly straightforward to prove, because the Herbrand quotient is easy to work out, as it is multiplicative on short exact sequences, and is 1 for finite modules. Before about 1950, the first inequality was known as the second inequality, and vice versa. The second inequality. The second inequality of class field theory states that |"H"0("E"/"F")| ≤ |"E"/"F"| for all normal layers "E"/"F". For local fields, this inequality follows easily from Hilbert's theorem 90 together with the first inequality and some basic properties of group cohomology. The second inequality was first proved for global fields by Weber using properties of the L series of number fields, as follows. Suppose that the layer "E"/"F" corresponds to an extension "k"⊂"K" of global fields. 
By studying the Dedekind zeta function of "K" one shows that the degree 1 primes of "K" have Dirichlet density given by the order of the pole at "s"=1, which is 1. (When "K" is the rationals, this is essentially Euler's proof that there are infinitely many primes using the pole at "s"=1 of the Riemann zeta function.) As each prime in "k" that is a norm is the product of deg("K"/"k")= |"E"/"F"| distinct degree 1 primes of "K", this shows that the set of primes of "k" that are norms has density 1/|"E"/"F"|. On the other hand, by studying Dirichlet L-series of characters of the group "H"0("E"/"F"), one shows that the Dirichlet density of primes of "k" representing the trivial element of this group has density 1/|"H"0("E"/"F")|. (This part of the proof is a generalization of Dirichlet's proof that there are infinitely many primes in arithmetic progressions.) But a prime represents a trivial element of the group "H"0("E"/"F") if it is equal to a norm modulo principal ideals, so this set is at least as dense as the set of primes that are norms. So 1/|"H"0("E"/"F")| ≥ 1/|"E"/"F"|, which is the second inequality. In 1940 Chevalley found a purely algebraic proof of the second inequality, but it is longer and harder than Weber's original proof. Before about 1950, the second inequality was known as the first inequality; the name was changed because Chevalley's algebraic proof of it uses the first inequality. Takagi defined a class field to be one where equality holds in the second inequality. By the Artin isomorphism below, "H"0("E"/"F") is isomorphic to the abelianization of "E"/"F", so equality in the second inequality holds exactly for abelian extensions, and class fields are the same as abelian extensions. The first and second inequalities can be combined as follows. For cyclic layers, the two inequalities together prove that |"H"1("E"/"F")|×|"E"/"F"| = |"H"0("E"/"F")| ≤ |"E"/"F"|, so |"H"0("E"/"F")| = |"E"/"F"| and "H"1("E"/"F") = 1. Now a basic theorem about cohomology groups shows that since "H"1("E"/"F") = 1 for all cyclic layers, we have "H"1("E"/"F") = 1 for all normal layers (so in particular the formation is a field formation). This proof that "H"1("E"/"F") is always trivial is rather roundabout; no "direct" proof of it (whatever this means) for global fields is known. (For local fields the vanishing of "H"1("E"/"F") is just Hilbert's theorem 90.) For cyclic groups, "H"0 is the same as "H"2, so |"H"2("E"/"F")| = |"E"/"F"| for all cyclic layers. Another theorem of group cohomology shows that since "H"1("E"/"F") = 1 for all normal layers and |"H"2("E"/"F")| ≤ |"E"/"F"| for all cyclic layers, we have |"H"2("E"/"F")| ≤ |"E"/"F"| for all normal layers. (In fact, equality holds for all normal layers, but this takes more work; see the next section.) The Brauer group. The Brauer groups "H"2("E"/*) of a class formation are defined to be the direct limit of the groups "H"2("E"/"F") as "F" runs over all open subgroups of "E". An easy consequence of the vanishing of "H"1 for all layers is that the groups "H"2("E"/"F") are all subgroups of the Brauer group. In local class field theory the Brauer groups are the same as Brauer groups of fields, but in global class field theory the Brauer group of the formation is not the Brauer group of the corresponding global field (though they are related). The next step is to prove that "H"2("E"/"F") is cyclic of order exactly |"E"/"F"|; the previous section shows that it has at most this order, so it is sufficient to find some element of order |"E"/"F"| in "H"2("E"/"F"). 
The proof for arbitrary extensions uses a homomorphism from the group "G" onto the profinite completion of the integers with kernel "G"∞, or in other words a compatible sequence of homomorphisms of "G" onto the cyclic groups of order "n" for all "n", with kernels "G""n". These homomorphisms are constructed using cyclic cyclotomic extensions of fields; for finite fields they are given by the algebraic closure, for non-archimedean local fields they are given by the maximal unramified extensions, and for global fields they are slightly more complicated. As these extensions are given explicitly one can check that they have the property that H2("G"/"G""n") is cyclic of order "n", with a canonical generator. It follows from this that for any layer "E", the group H2("E"/"E"∩"G"∞) is canonically isomorphic to Q/Z. This idea of using roots of unity was introduced by Chebotarev in his proof of Chebotarev's density theorem, and used shortly afterwards by Artin to prove his reciprocity theorem. For general layers "E","F" there is an exact sequence formula_0 The last two groups in this sequence can both be identified with Q/Z and the map between them is then multiplication by |"E"/"F"|. So the first group is canonically isomorphic to Z/"n"Z. As "H"2("E"/"F") has order at most "n", it must be equal to Z/"n"Z (and in particular is contained in the middle group). This shows that the second cohomology group "H"2("E"/"F") of any layer is cyclic of order |"E"/"F"|, which completes the verification of the axioms of a class formation. With a little more care in the proofs, we get a canonical generator of "H"2("E"/"F"), called the fundamental class. It follows from this that the Brauer group "H"2("E"/*) is (canonically) isomorphic to the group Q/Z, except in the case of the archimedean local fields R and C when it has order 2 or 1. Tate's theorem and the Artin map. Tate's theorem in group cohomology is as follows. Suppose that "A" is a module over a finite group "G" and "a" is an element of "H"2("G","A"), such that for every subgroup "E" of "G" the group "H"1("E","A") is trivial and "H"2("E","A") is cyclic of order |"E"|, generated by the restriction of "a". Then cup product with "a" is an isomorphism from "H""n"("G",Z) to "H""n"+2("G","A"). If we apply the case "n"=−2 of Tate's theorem to a class formation, we find that there is an isomorphism from "H"−2("E"/"F",Z) to "H"0("E"/"F","A""F") for any normal layer "E"/"F". The group "H"−2("E"/"F",Z) is just the abelianization of "E"/"F", and the group "H"0("E"/"F","A""F") is "A""E" modulo the group of norms of "A""F". In other words, we have an explicit description of the abelianization of the Galois group "E"/"F" in terms of "A""E". Taking the inverse of this isomorphism gives a homomorphism "A""E" → abelianization of "E"/"F", and taking the limit over all open subgroups "F" gives a homomorphism "A""E" → abelianization of "E", called the Artin map. The Artin map is not necessarily surjective, but has dense image. By the existence theorem below its kernel is the connected component of "A""E" (for class field theory), which is trivial for class field theory of non-archimedean local fields and for function fields, but is non-trivial for archimedean local fields and number fields. The Takagi existence theorem. The main remaining theorem of class field theory is the Takagi existence theorem, which states that every finite index closed subgroup of the idele class group is the group of norms corresponding to some abelian extension. The classical way to prove this is to construct some extensions with small groups of norms, by first adding in many roots of unity, and then taking Kummer extensions and Artin–Schreier extensions. 
These extensions may be non-abelian (though they are extensions of abelian groups by abelian groups); however, this does not really matter, as the norm group of a non-abelian Galois extension is the same as that of its maximal abelian extension (this can be shown using what we already know about class fields). This gives enough (abelian) extensions to show that there is an abelian extension corresponding to any finite index subgroup of the idele class group. A consequence is that the kernel of the Artin map is the connected component of the identity of the idele class group, so that the abelianization of the Galois group of "F" is the profinite completion of the idele class group. For local class field theory, it is also possible to construct abelian extensions more explicitly using Lubin–Tate formal group laws. For global fields, the abelian extensions can be constructed explicitly in some cases: for example, the abelian extensions of the rationals can be constructed using roots of unity, and the abelian extensions of quadratic imaginary fields can be constructed using elliptic functions, but finding an analog of this for arbitrary global fields is an unsolved problem. Weil group. (The Weil group is not a Weyl group and has no connection with the Weil–Châtelet group or the Mordell–Weil group.) The Weil group of a class formation with fundamental classes "u""E"/"F" ∈ "H"2("E"/"F", "A""F") is a kind of modified Galois group, introduced by Weil and used in various formulations of class field theory, and in particular in the Langlands program. If "E"/"F" is a normal layer, then the Weil group "U" of "E"/"F" is the extension 1 → "A""F" → "U" → "E"/"F" → 1 corresponding to the fundamental class "u""E"/"F" in "H"2("E"/"F", "A""F"). The Weil group of the whole formation is defined to be the inverse limit of the Weil groups of all the layers "G"/"F", for "F" an open subgroup of "G". The reciprocity map of the class formation ("G", "A") induces an isomorphism from "A""G" to the abelianization of the Weil group.
[ { "math_id": 0, "text": "0\\rightarrow H^2(E/F)\\cap H^2(E/E\\cap G_\\infty) \\rightarrow H^2(E/E\\cap G_\\infty)\\rightarrow H^2(F/F\\cap G_\\infty)" } ]
https://en.wikipedia.org/wiki?curid=6019404
6019813
Operational transconductance amplifier
Electrical circuit The operational transconductance amplifier (OTA) is an amplifier that outputs a current proportional to its input voltage. Thus, it is a voltage controlled current source (VCCS). Three types of OTAs are single-input single-output, differential-input single-output, and differential-input differential-output (a.k.a. fully differential); however, this article focuses on differential-input single-output. There may be an additional input for a current to control the amplifier's transconductance. The first commercially available integrated circuit units were produced by RCA in 1969 (before being acquired by General Electric) in the form of the CA3080. Although most units are constructed with bipolar transistors, field effect transistor units are also produced. Like a standard operational amplifier, the OTA also has a high impedance differential input stage and may be used with negative feedback. But the OTA differs in that: These differences mean the vast majority of standard operational amplifier applications aren't directly implementable with OTAs. However, OTAs can implement voltage-controlled filters, voltage-controlled oscillators (e.g. variable frequency oscillators), voltage-controlled resistors, and voltage-controlled variable gain amplifiers. Basic operation. In the ideal OTA, the output current is a linear function of the differential input voltage, calculated as follows: formula_0 where "V"in+ is the voltage at the non-inverting input, "V"in− is the voltage at the inverting input and gm is the transconductance of the amplifier. If the load is just a resistance of formula_1 to ground, the OTA's output voltage is the product of its output current and its load resistance: formula_2 The voltage gain is then the output voltage divided by the differential input voltage: formula_3 The transconductance of the amplifier is usually controlled by an input current, denoted Iabc ("amplifier bias current"). The amplifier's transconductance is directly proportional to this current. This is the feature that makes it useful for electronic control of amplifier gain, etc. Non-ideal characteristics. As with the standard op-amp, practical OTAs have some non-ideal characteristics. These include: Subsequent improvements. Earlier versions of the OTA had neither the Ibias terminal (shown in the diagram) nor the diodes (shown adjacent to it). They were all added in later versions. As depicted in the diagram, the anodes of the diodes are attached together and the cathode of one is attached to the non-inverting input (Vin+) and the cathode of the other to the inverting input (Vin−). The diodes are biased at the anodes by a current (Ibias) that is injected into the Ibias terminal. These additions make two substantial improvements to the OTA. First, when used with input resistors, the diodes distort the differential input voltage to offset a significant amount of input stage non-linearity at higher differential input voltages. According to National Semiconductor, the addition of these diodes increases the linearity of the input stage by a factor of 4. That is, using the diodes, the signal distortion level at 80 mV of differential input is the same as that of the simple differential amplifier at a differential input of 20 mV. Second, the action of the biased diodes offsets much of the temperature sensitivity of the OTA's transconductance. A further improvement is the integration of an optional-use output buffer amplifier to the chip on which the OTA resides. 
This is actually a convenience to the circuit designer rather than an improvement to the OTA itself, dispensing with the need to employ a separate buffer. It also allows the OTA to be used as a traditional op-amp, if desired, by converting its output current to a voltage. An example of a chip combining both of these features is the National Semiconductor LM13600 and its successor, the LM13700.
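As a quick illustration of the ideal relations given in the Basic operation section (output current proportional to the differential input voltage, and voltage gain equal to the transconductance times the load resistance), here is a small Python sketch. The numerical values are invented for illustration and do not describe any particular device.

```python
def ota_output_current(v_in_plus, v_in_minus, g_m):
    """Ideal OTA: output current is proportional to the differential input voltage."""
    return g_m * (v_in_plus - v_in_minus)

def ota_voltage_gain(g_m, r_load):
    """Voltage gain when the output drives a resistive load to ground."""
    return g_m * r_load

# Illustrative numbers: 2 mS transconductance, 10 kOhm load, 5 mV differential input.
g_m = 2e-3       # siemens
r_load = 10e3    # ohms
i_out = ota_output_current(0.005, 0.0, g_m)          # 10 microamps
print(i_out, i_out * r_load, ota_voltage_gain(g_m, r_load))  # 1e-05 A, 0.1 V, gain of 20
```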
[ { "math_id": 0, "text": "I_\\mathrm{out} = (V_\\mathrm{in+} - V_\\mathrm{in-}) \\cdot g_\\mathrm{m}" }, { "math_id": 1, "text": "R_\\text{load}" }, { "math_id": 2, "text": "V_\\mathrm{out} = I_\\mathrm{out} \\cdot R_\\mathrm{load}" }, { "math_id": 3, "text": "G_\\mathrm{voltage} = {V_\\mathrm{out} \\over V_\\mathrm{in+} - V_\\mathrm{in-}} = R_\\mathrm{load} \\cdot g_\\mathrm{m}" } ]
https://en.wikipedia.org/wiki?curid=6019813
6020635
Hannay angle
Mechanics analogue of the whirling geometric phase In classical mechanics, the Hannay angle is a mechanics analogue of the whirling geometric phase (or Berry phase). It was named after John Hannay of the University of Bristol, UK. Hannay first described the angle in 1985, extending the ideas of the recently formalized Berry phase to classical mechanics. Consider a one-dimensional system moving in a cycle, like a pendulum. Now slowly vary a parameter formula_0, for example by pulling and pushing on the string of the pendulum. We can picture the motion of the system as having a fast oscillation and a slow oscillation. The fast oscillation is the motion of the pendulum, and the slow oscillation is the motion of our pulling on its string. If we picture the system in phase space, its motion sweeps out a torus. The adiabatic theorem in classical mechanics states that the action variable, which corresponds to the phase space area enclosed by the system's orbit, remains approximately constant. Thus, after one slow oscillation period, the fast oscillation is back on the same cycle, but its phase on the cycle has changed in the meantime. The phase change has two leading orders. The first order is the "dynamical angle", which is simply formula_1. This angle depends on the precise details of the motion, and it is of order formula_2. The second order is Hannay's angle, which surprisingly is independent of the precise details of formula_3. It depends on the trajectory of formula_0, but not on how fast or slowly it traverses the trajectory. It is of order formula_4. Hannay angle in classical mechanics. The Hannay angle is defined in the context of action-angle coordinates. In an initially time-invariant system, an action variable formula_5 is a constant. After introducing a periodic perturbation formula_6, the action variable formula_5 becomes an adiabatic invariant, and the Hannay angle formula_7 for its corresponding angle variable can be calculated according to the path integral that represents an evolution in which the perturbation formula_6 gets back to the original value: formula_8 where formula_9 and formula_10 are canonical variables of the Hamiltonian, and formula_11 is the symplectic Hamiltonian 2-form. Example. Foucault pendulum. The Foucault pendulum is an example from classical mechanics that is sometimes also used to illustrate the Berry phase. Below we study the Foucault pendulum using action-angle variables. For simplicity, we will avoid using the Hamilton–Jacobi equation, which is employed in the general protocol. We consider a plane pendulum with frequency formula_11 under the effect of Earth's rotation, whose angular velocity is formula_12 with magnitude denoted as formula_13. Here, the formula_14 direction points from the center of the Earth to the pendulum. The Lagrangian for the pendulum is formula_15 The corresponding equations of motion are formula_16 formula_17 We then introduce an auxiliary variable formula_18 that is in fact an angle variable. We now have an equation for formula_19: formula_20 From its characteristic equation formula_21 we obtain its characteristic root (we note that formula_22) formula_23 The solution is then formula_24 After the Earth completes one full rotation, that is formula_25, the phase change for formula_19 is formula_26 The first term is due to the dynamic effect of the pendulum and is termed the dynamic phase, while the second term represents a geometric phase that is essentially the Hannay angle formula_27 Rotation of a rigid body. 
A free rigid body tumbling in free space has two conserved quantities: energy and the angular momentum vector formula_28. Viewed from within the rigid body's frame, the angular momentum direction moves about, but its length is preserved. After a certain time formula_29, the angular momentum direction returns to its starting point. Viewed in the inertial frame, the body has undergone a rotation (since all elements in "SO(3)" are rotations). A classical result states that during time formula_29, the body has rotated by angle formula_30 where formula_31 is the solid angle swept by the angular momentum direction as viewed from within the rigid body's frame. Other examples. The heavy top. The orbit of the Earth, periodically perturbed by the orbit of Jupiter. The rotational transform associated with the magnetic surfaces of a toroidal magnetic field with a nonplanar axis.
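As a small numerical illustration of the Foucault-pendulum result above, the Hannay angle accumulated over one full rotation of the Earth is 2π·Ωz/Ω, which for a pendulum at geographic latitude φ reduces to 2π·sin φ. The following Python sketch evaluates this; the latitude used is an arbitrary illustrative choice.

```python
import math

def hannay_angle_foucault(latitude_deg):
    """Hannay (geometric) phase accumulated by a Foucault pendulum over one
    full rotation of the Earth: 2*pi * Omega_z / Omega = 2*pi * sin(latitude)."""
    return 2.0 * math.pi * math.sin(math.radians(latitude_deg))

# Example: a pendulum at latitude 48.85 degrees (roughly that of Paris).
angle = hannay_angle_foucault(48.85)
print(math.degrees(angle))  # roughly 271 degrees of precession per rotation of the Earth
```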
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "\\int_0^T \\omega(\\lambda) \\dot\\lambda dt" }, { "math_id": 2, "text": "O(T)" }, { "math_id": 3, "text": "\\dot \\lambda" }, { "math_id": 4, "text": "O(1)" }, { "math_id": 5, "text": "I_\\alpha" }, { "math_id": 6, "text": "\\lambda(t)" }, { "math_id": 7, "text": "\\theta^H_\\alpha" }, { "math_id": 8, "text": "\\theta^H_\\alpha = -\\frac{\\partial}{\\partial I_\\alpha}\\oint\\!\\boldsymbol{p} \\cdot \\frac{\\partial \\boldsymbol{q}}{\\partial \\lambda}\\mathrm{d}\\lambda = -\\partial_{I_\\alpha} \\iint \\omega" }, { "math_id": 9, "text": "\\boldsymbol{p}" }, { "math_id": 10, "text": "\\boldsymbol{q}" }, { "math_id": 11, "text": "\\omega" }, { "math_id": 12, "text": "\\vec{\\Omega}=(\\Omega_x,\\Omega_y,\\Omega_z)" }, { "math_id": 13, "text": "\\Omega=|\\vec{\\Omega}|" }, { "math_id": 14, "text": "z" }, { "math_id": 15, "text": "L=\\frac{1}{2}m(\\dot{x}^2+\\dot{y}^2)-\\frac{1}{2}m\\omega^2(x^2+y^2)+m\\Omega_z(x\\dot{y}-y\\dot{x})" }, { "math_id": 16, "text": "\\ddot{x}+\\omega^2x=2\\Omega_z\\dot{y}" }, { "math_id": 17, "text": "\\ddot{y}+\\omega^2y=-2\\Omega_z\\dot{x}" }, { "math_id": 18, "text": "\\varpi=x+iy" }, { "math_id": 19, "text": "\\varpi" }, { "math_id": 20, "text": "\n\\ddot{\\varpi}+\\omega^2\\varpi=-2i\\Omega_z\\dot{\\varpi}\n" }, { "math_id": 21, "text": "\n\\lambda^2+\\omega^2=-2i\\Omega_z\\lambda\n" }, { "math_id": 22, "text": "\\Omega \\ll \\omega" }, { "math_id": 23, "text": "\n\\lambda=-i\\Omega_z\\pm i\\sqrt{\\Omega_z^2+\\omega^2}\\approx-i\\Omega_z\\pm i\\omega\n" }, { "math_id": 24, "text": "\n\\varpi=e^{-i\\Omega_zt}(Ae^{i\\omega t}+Be^{-i\\omega t})\n" }, { "math_id": 25, "text": "T=2\\pi/\\Omega\\approx 24h" }, { "math_id": 26, "text": "\n\\Delta \\varphi=2\\pi\\frac{\\omega}{\\Omega}+2\\pi\\frac{\\Omega_z}{\\Omega}\n" }, { "math_id": 27, "text": "\n\\theta^H=2\\pi\\frac{\\Omega_z}{\\Omega}\n" }, { "math_id": 28, "text": "E, \\vec L" }, { "math_id": 29, "text": "T" }, { "math_id": 30, "text": "2ET/\\|\\vec L\\| - \\Omega" }, { "math_id": 31, "text": "\\Omega" } ]
https://en.wikipedia.org/wiki?curid=6020635
60207552
HEAAN
HEAAN (Homomorphic Encryption for Arithmetic of Approximate Numbers) is an open source homomorphic encryption (HE) library which implements an approximate HE scheme proposed by Cheon, Kim, Kim and Song (CKKS). The first version of HEAAN was published on GitHub on 15 May 2016, and later a new version of HEAAN with a bootstrapping algorithm was released. Currently, the latest version is Version 2.1. CKKS plaintext space. Unlike other HE schemes, the CKKS scheme supports approximate arithmetic over complex numbers (hence, real numbers). More precisely, the plaintext space of the CKKS scheme is formula_0 for some power-of-two integer formula_1. To deal with the complex plaintext vector efficiently, Cheon et al. proposed plaintext encoding/decoding methods which exploit a ring isomorphism formula_2. Encoding method. Given a plaintext vector formula_3 and a scaling factor formula_4, the plaintext vector is encoded as a polynomial formula_5 by computing formula_6 where formula_7 denotes the coefficient-wise rounding function. Decoding method. Given a message polynomial formula_8 and a scaling factor formula_4, the message polynomial is decoded to a complex vector formula_9 by computing formula_10. Here the scaling factor formula_11 enables us to control the encoding/decoding error which is introduced by the rounding process. Namely, one can obtain the approximate equation formula_12 by controlling formula_13, where formula_14 and formula_15 denote the encoding and decoding algorithm, respectively. From the ring-isomorphic property of the mapping formula_2, for formula_16 and formula_17, the following hold: formula_18 and formula_19, where formula_20 denotes the Hadamard product of same-length vectors. These properties guarantee the approximate correctness of the computations in the encoded state when the scaling factor formula_21 is chosen appropriately. Algorithms. The CKKS scheme basically consists of the following algorithms: key generation, encryption, decryption, homomorphic addition and multiplication, and rescaling. For a positive integer formula_22, let formula_23 be the quotient ring of formula_24 modulo formula_22. Let formula_25, formula_26 and formula_27 be distributions over formula_28 which output polynomials with small coefficients. These distributions, the initial modulus formula_29, and the ring dimension formula_30 are predetermined before the key generation phase. Key generation. The key generation algorithm is as follows: sample a secret polynomial formula_31; sample formula_32 and formula_33 uniformly at random from formula_34 and formula_35, respectively, and sample errors formula_36; output the secret key formula_37, the public key formula_38, and the evaluation key formula_39. Encryption. The encryption algorithm is as follows: sample an ephemeral secret polynomial formula_40 and errors e0, e1 from formula_27; for a given message polynomial formula_41, output the ciphertext formula_42. Decryption. The decryption algorithm is as follows: for a ciphertext formula_43, output the message formula_44 formula_45. The decryption outputs an approximate value of the original message, i.e., formula_46, and the approximation error is determined by the choice of distributions formula_47. When considering homomorphic operations, the evaluation errors are also included in the approximation error. Basic homomorphic operations, addition and multiplication, are done as follows. Homomorphic addition. The homomorphic addition algorithm is as follows: for two ciphertexts formula_48 and formula_49 in formula_50, output formula_51. The correctness holds as formula_52. Homomorphic multiplication. The homomorphic multiplication algorithm is as follows: for two ciphertexts formula_53 and formula_54, compute formula_55 formula_56 and output formula_57. The correctness holds as formula_58. Note that the approximation error (on the message) grows exponentially with the number of homomorphic multiplications. To overcome this problem, most HE schemes use a modulus-switching technique, which was introduced by Brakerski, Gentry and Vaikuntanathan. In the case of HEAAN, the modulus-switching procedure is called rescaling. 
The rescaling algorithm is very simple compared to Brakerski-Gentry-Vaikuntanathan's original algorithm. When the rescaling algorithm is applied after a homomorphic multiplication, the approximation error grows linearly rather than exponentially. Rescaling. The rescaling algorithm is as follows: for a ciphertext formula_59 and a lower modulus formula_60, output the rescaled ciphertext formula_61. The total procedure of the CKKS scheme is as follows: each plaintext vector formula_62 which consists of complex (or real) numbers is firstly encoded as a polynomial formula_8 by the encoding method, and then encrypted as a ciphertext formula_63. After several homomorphic operations, the resulting ciphertext is decrypted as a polynomial formula_64 and then decoded as a plaintext vector formula_65, which is the final output. Security. The IND-CPA security of the CKKS scheme is based on the hardness assumption of the ring learning with errors (RLWE) problem, the ring variant of the lattice-based learning with errors (LWE) problem. Currently the best known attacks for RLWE over a power-of-two cyclotomic ring are general LWE attacks such as the dual attack and the primal attack. The bit security of the CKKS scheme based on known attacks was estimated by Albrecht's LWE estimator. Library. Versions 1.0, 1.1 and 2.1 have been released so far. Version 1.0 is the first implementation of the CKKS scheme without bootstrapping. In the second version, the bootstrapping algorithm was added so that users are able to perform large-scale homomorphic computations. In Version 2.1, currently the latest version, the multiplication of ring elements in formula_66 was accelerated by utilizing a fast Fourier transform (FFT)-optimized number theoretic transform (NTT) implementation.
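The following toy Python sketch is not the HEAAN library API; it only illustrates, on plain unencrypted numbers, how the scaling factor of the encoding and the rescaling step interact: multiplying two encodings squares the scale, and rescaling divides it back down with rounding, which is why the error grows roughly linearly rather than exponentially. All names and parameter values are illustrative assumptions.

```python
# Toy illustration (not the HEAAN API) of the CKKS scaling factor and rescaling.
# Real CKKS works on polynomials in R_q and on ciphertexts; here we track only a
# single scaled value to show the arithmetic of the scale.

DELTA = 2 ** 20          # scaling factor (an illustrative choice)

def encode(x, delta=DELTA):
    """Encode a real number as a scaled integer (coefficient-wise rounding)."""
    return round(x * delta)

def decode(m, delta=DELTA):
    return m / delta

def multiply_and_rescale(m1, m2, delta=DELTA):
    """After multiplying two encodings the scale becomes delta**2;
    rescaling divides by delta (with rounding) to restore the original scale."""
    product = m1 * m2              # scale is now delta**2
    return round(product / delta)  # back to scale delta

a, b = 3.141592, 2.718281
m = multiply_and_rescale(encode(a), encode(b))
print(decode(m), a * b)   # approximately equal, up to rounding error
```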
[ { "math_id": 0, "text": " \\mathbb{C}^{n/2} " }, { "math_id": 1, "text": " n " }, { "math_id": 2, "text": " \\phi: \\mathbb{R}[X]/(X^n+1) \\rightarrow \\mathbb{C}^{n/2} " }, { "math_id": 3, "text": " \\vec z = (z_1,z_2,...,z_{n/2}) \\in \\mathbb{C}^{n/2} " }, { "math_id": 4, "text": " \\Delta > 1 " }, { "math_id": 5, "text": " m(X) \\in R:= \\mathbb{Z}[X]/(X^n+1) " }, { "math_id": 6, "text": " m(X) = \\lfloor \\Delta \\cdot \\phi^{-1}(\\vec z) \\rceil \\in R" }, { "math_id": 7, "text": " \\lfloor \\cdot \\rceil " }, { "math_id": 8, "text": " m(X) \\in R " }, { "math_id": 9, "text": " \\vec z \\in \\mathbb{C}^{n/2} " }, { "math_id": 10, "text": " \\vec z = \\Delta^{-1}\\cdot \\phi(m(X)) \\in \\mathbb{C}^{n/2}" }, { "math_id": 11, "text": " \\Delta > 1" }, { "math_id": 12, "text": " \\text{Dcd}(\\text{Ecd}(\\vec z; \\Delta); \\Delta) \\approx \\vec z " }, { "math_id": 13, "text": " \\Delta " }, { "math_id": 14, "text": " \\text{Ecd} " }, { "math_id": 15, "text": " \\text{Dcd} " }, { "math_id": 16, "text": " m_1 = \\text{Ecd}(\\vec z_1;\\Delta) " }, { "math_id": 17, "text": " m_2 = \\text{Ecd}(\\vec z_2;\\Delta) " }, { "math_id": 18, "text": " \\text{Dcd}(m_1 + m_2;\\Delta) \\approx \\vec z_1 + \\vec z_2 " }, { "math_id": 19, "text": " \\text{Dcd}(m_1\\cdot m_2;\\Delta) \\approx \\vec z_1 \\circ \\vec z_2 " }, { "math_id": 20, "text": "\\circ " }, { "math_id": 21, "text": "\\Delta " }, { "math_id": 22, "text": "q" }, { "math_id": 23, "text": "R_q := R/qR " }, { "math_id": 24, "text": " R " }, { "math_id": 25, "text": "\\chi_s" }, { "math_id": 26, "text": "\\chi_r" }, { "math_id": 27, "text": "\\chi_e" }, { "math_id": 28, "text": "R" }, { "math_id": 29, "text": " Q " }, { "math_id": 30, "text": "n " }, { "math_id": 31, "text": " s \\leftarrow \\chi_s " }, { "math_id": 32, "text": " a " }, { "math_id": 33, "text": "a' " }, { "math_id": 34, "text": " R_Q " }, { "math_id": 35, "text": "R_{PQ} " }, { "math_id": 36, "text": " e,e' \\leftarrow \\chi_e " }, { "math_id": 37, "text": " sk \\leftarrow (1, s)\\in R_Q^2 " }, { "math_id": 38, "text": " pk \\leftarrow (b = -a \\cdot s + e, a) \\in R_Q^2 " }, { "math_id": 39, "text": "evk \\leftarrow (b' = -a' \\cdot s + e' + P\\cdot s^2, a') \\in R_{PQ}^2" }, { "math_id": 40, "text": " r \\leftarrow \\chi_r " }, { "math_id": 41, "text": " m \\in R " }, { "math_id": 42, "text": " ct \\leftarrow (c_0 = r\\cdot b + e_0 + m, c_1 = r\\cdot a + e_1) \\in R_Q^2 " }, { "math_id": 43, "text": " ct \\in R_q^2 " }, { "math_id": 44, "text": " m' \\leftarrow \\langle ct, sk \\rangle " }, { "math_id": 45, "text": " (\\text{mod } q) " }, { "math_id": 46, "text": " \\text{Dec}(sk, \\text{Enc}(pk, m)) \\approx m" }, { "math_id": 47, "text": " \\chi_s, \\chi_e, \\chi_r " }, { "math_id": 48, "text": "ct " }, { "math_id": 49, "text": " ct'" }, { "math_id": 50, "text": " R_q^2" }, { "math_id": 51, "text": " ct_{\\text{add}} \\leftarrow ct + ct' \\in R_q^2" }, { "math_id": 52, "text": " \\text{Dec}(sk, ct_\\text{add}) \\approx \\text{Dec}(sk, ct) + \\text{Dec}(sk, ct') " }, { "math_id": 53, "text": " ct =(c_0, c_1) " }, { "math_id": 54, "text": " ct' =(c_0', c_1')" }, { "math_id": 55, "text": " (d_0, d_1, d_2) = (c_0c_0', c_0c_1'+c_1c_0', c_1c_1')" }, { "math_id": 56, "text": " (\\text{mod } q)" }, { "math_id": 57, "text": " ct_{\\text{mult}} \\leftarrow (d_0, d_1) + \\lfloor P^{-1}\\cdot d_2 \\cdot evk \\rceil \\in R_q^2" }, { "math_id": 58, "text": " \\text{Dec}(sk, ct_\\text{mult}) \\approx \\text{Dec}(sk, ct) \\cdot \\text{Dec}(sk, ct') " }, { "math_id": 59, "text": " ct 
\\in R_q^2" }, { "math_id": 60, "text": " q' < q" }, { "math_id": 61, "text": " ct_{\\text{rs}}\\leftarrow \\lfloor (q'/q)\\cdot ct\\rceil \\in R_{q'}^2" }, { "math_id": 62, "text": "\\vec z " }, { "math_id": 63, "text": "ct \\in R_q^2 " }, { "math_id": 64, "text": " m'(X) \\in R " }, { "math_id": 65, "text": " \\vec z' " }, { "math_id": 66, "text": " R_q " } ]
https://en.wikipedia.org/wiki?curid=60207552
60219968
Electron channelling contrast imaging
Microscope diffraction technique Electron channelling contrast imaging (ECCI) is a scanning electron microscope (SEM) diffraction technique used in the study of defects in materials. These can be dislocations or stacking faults that are close to the surface of the sample, low angle grain boundaries or atomic steps. Unlike the use of transmission electron microscopy (TEM) for the investigation of dislocations, the ECCI approach has been called a rapid and non-destructive characterisation technique. Mechanism. The word channelling in ECCI, and, similarly, in electron channelling patterns, refers to diffraction of the electron beam on its way into the sample. With enough spatial resolution, very small crystal imperfections would change the phase of the incident electron wave-function, and this, in turn, would be reflected in the backscattering probability, showing up as "contrast" (a sharp change in backscattered intensity) close to a dislocation. Background. While we now talk about ECCI being a SEM technique, there was a significant gap of about thirty years between the prediction that defects ought to show up as contrast in the backscattered electron micrographs of the SEM and the development of ECCI as an accessible technique for the user of a standard SEM. In the meantime the field emission gun (FEG) had to be developed and integrated into the commercial SEM in order to improve the spatial resolution for channelling applications. Shortly after, Wilkinson used ECCI to investigate clusters of misfit dislocations lying more than 1 μm underneath the surface at the interface of Si-Ge layers grown on Si. They noted that at this depth the spatial resolution is too low to resolve individual dislocations. However, they still concluded that the formula_0 invisibility criteria can be applied similarly to TEM. These pioneering ECCI investigations were made on highly tilted samples (formula_1), with side-mounted backscattered electron (BSE) detectors, similarly to the electron backscatter diffraction (EBSD) set up. When applied to metals, ECCI tended to be used in a low tilt (formula_2) configuration. This setup offers a number of advantages: the standard Si-diode detector can be mounted on the pole piece, offering a large BSE signal collection angle, and the interaction volume is minimised, granting higher spatial resolution. The downside to this geometry is the reduction in BSE signal, which, for metals, is less of an issue than for semiconductors due to higher atomic numbers. A comprehensive overview of the applications of ECCI for metallic materials has been made by Weidner and Biermann. From 2006, Trager-Cowan's group demonstrated the value of ECCI for the characterisation of nitrides. Since then ECCI has been used in the forescatter geometry to reveal extended defects and morphological features of GaN samples. Picard et al. also argued that the formula_3 dislocation type identification criterion can no longer be applied due to surface relaxation. Instead, they used simulations to determine the Burgers vectors of dislocations, laying the groundwork for a non-destructive dislocation characterisation method. The literature continues to call ECCI a new technique even though it has been around for almost forty years. There are a number of reasons for this, including the fact that it has resisted standardisation, such that every group has its own method of acquiring ECC-micrographs depending on the material studied, the SEM abilities and the available detectors. 
Different groups have proposed flavours of ECCI to distinguish between procedures. Gutierrez-Urrutia et al. and Zaefferer and Elhami coined the term "controlled ECCI" (cECCI) for a low tilt geometry ECCI aided by crystallographic information obtained from EBSD maps acquired at formula_4 tilt. Similarly, Mansour et al. used low tilt ECCI together with high resolution selected area channelling patterns to characterise dislocations in fine-grained Si steel and labelled it "accurate ECCI" (aECCI).
[ { "math_id": 0, "text": "\\mathbf{g} \\cdot \\mathbf{b} = \\mathbf{g} \\cdot \\mathbf{b} \\times \\mathbf{u} = 0" }, { "math_id": 1, "text": "40^{\\circ} - 70^{\\circ}" }, { "math_id": 2, "text": "<10^{\\circ}" }, { "math_id": 3, "text": "\\mathbf{g} \\cdot\\mathbf{b}" }, { "math_id": 4, "text": "70^{\\circ} " } ]
https://en.wikipedia.org/wiki?curid=60219968
60220238
Apportionment in the Hellenic Parliament
Apportionment in the Hellenic Parliament refers to those provisions of the Greek electoral law relating to the distribution of Greece's 300 parliamentary seats to the parliamentary constituencies, as well as to the method of seat allocation in Greek legislative elections for the various political parties. The electoral law was codified for the first time through a 2012 Presidential Decree. Articles 1, 2, and 3 deal with how the parliamentary seats are allocated to the various constituencies, while articles 99 and 100 legislate the method of parliamentary apportionment for political parties in an election. In both cases, Greece uses the largest remainder method. Up to and including the 2019 Greek legislative election, Greece will continue to employ a semi-proportional representation system with a 50-seat majority bonus. The next election will see the electoral system change to proportional representation, as the majority bonus will cease to be applied since it was abolished in 2016. This article is reflective of this method. The election after next will revert to semi-proportional representation with a sliding scale bonus after it was passed in parliament in 2020. Background. Greek parliamentary constituencies correspond to the former prefectures of Greece with the exception of constituencies in Thessaloniki and Attica, which are divided into two and eight constituencies respectively. Constituencies have generic names, with constituencies which are broken up receiving Greek numerals to differentiate each other, with Thessaloniki A () for example translating to "first (electoral district) of Thessaloniki". The break-up of Athens B introduced Arabic numerals to the names as well, for example Athens B1 (). As of December 2018, Greece is divided into 59 electoral constituencies of varying sizes. The smallest constituencies are single-seat, while the largest, Athens B3, is represented by 18 members of parliament. Prior to the 2018 reform, which saw Athens B broken up into three smaller constituencies (Athens B1, Athens B2, and Athens B3), Athens B was the largest constituency in the country with 44 MPs, dwarfing the second-largest constituency of Thessaloniki A with 16 MPs. The Constitution of Greece includes provisions relating both to the constituencies and the number of MPs. Article 51 sets the minimum number of MPs at 200 and the maximum at 300, while Article 53 regulates the way in which constituencies and the electoral law can be changed, while also specifying that up to one twentieth of the total number of MPs (5%, or between 10 and 15 depending on the size of the parliament) may be elected on a national level instead of in constituencies. 12 MPs (4%, or one twenty-fifth) are elected in this manner. Proportional representation (PR) was first introduced in Greece in the 1926 election, replacing the older approval voting system in use since 1864. Under that system, voters cast lead ballots in ballot boxes corresponding to the number of candidates running in a constituency, placing their lead ballot in the white partition for approval and the black partition for disapproval; the candidates with the highest approval counts were selected until all seats were filled. A majoritarian system was later adopted, with voters voting on party lists and the candidates with the most votes (even if below 50%) being elected. The introduction of PR created unstable governments and it has been re-established a number of times since, most recently in 1989, before being abolished. 
Reinforced proportional representation favouring the largest party, the type of electoral system used now, was first used in 1951. Overall, Greece has changed its electoral law regarding apportionment in the parliament on average once every 1.5 elections. The majority bonus was introduced in 2004 to replace the older method of reinforcing the proportional system. Under the previous system, seats were awarded in two or three stages, with increasing quota requirements for each stage. In its explanatory report regarding the law, the Hellenic Parliament committee responsible for that piece of legislation concluded that reinforcing the proportional system through a simple majority bonus would significantly lower the malapportionment factor of Greek elections, since it would ensure that all seats are awarded completely proportionally with the exception of the winning party, which would receive a boost equal to 13.33% of the total seats. In particular, political parties would at a minimum be awarded at least 87% of the seats they would be entitled to if the system was not reinforced, as opposed to 70% under the previous law. The bonus was increased to 50 seats in 2009 (a boost of 16.66%), and this was first applied in the May 2012 Greek legislative election. The majority bonus was abolished in 2016 but was still applied at the 2019 Greek legislative election, with the first elections held after that using proportional representation. As part of the 2019 revision to the Constitution of Greece, the Syriza-led government also wants to enshrine proportional representation in the Constitution. The Constitution would define 'proportional representation' as any system with a margin of error of less than 10% between percentage of the national vote received and percentage of seats awarded, and a maximum electoral threshold of 3% of the national vote. New Democracy, the main opposition party, claimed that proportional representation would result in weak governments like it did in the French Fourth Republic and the Weimar Republic. The 1864 Greek Constitution previously specified approval voting as the official electoral method, until the introduction of proportional representation in 1926; no constitution since then has specified the type of electoral system to be used. Apportionment of seats allocated to the constituencies. The number of seats per constituency is calculated by first figuring out the national quota by dividing the total legal population of Greece as recorded in the last census by the number of seats elected in constituencies (285). The integer of the division of the population of each constituency by the national quota, disregarding the decimals (marked formula_0 in the function below), is the number of seats allocated to that constituency; a constituency with a sum of 5.6 is awarded 5 seats. If there are seats left empty in the first round of allocations, all 59 constituencies are ranked in descending order of leftover decimals (formula_1) and a seat is awarded to any constituency with a formula_0 larger than or equal to formula_2, where formula_3 is the number of seats which remained empty in the first allocation; if there are 9 unassigned seats, the constituencies with the 9 highest leftover decimals are awarded a seat each. 
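A minimal Python sketch of this largest-remainder allocation is given below; the constituency names and population figures are invented purely for illustration and do not correspond to real constituencies.

```python
def allocate_constituency_seats(populations, total_seats=285):
    """Largest-remainder allocation of seats to constituencies.

    populations: dict mapping constituency name -> legal population.
    Returns a dict mapping constituency name -> number of seats.
    """
    quota = sum(populations.values()) / total_seats
    # First round: each constituency gets the integer part of population / quota.
    seats = {name: int(pop // quota) for name, pop in populations.items()}
    remainders = {name: (pop / quota) - seats[name] for name, pop in populations.items()}
    # Second round: hand the leftover seats to the largest remainders.
    leftover = total_seats - sum(seats.values())
    for name in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
        seats[name] += 1
    return seats

# Invented example with three constituencies sharing 10 seats.
print(allocate_constituency_seats({"A": 103_000, "B": 47_000, "C": 22_000}, total_seats=10))
# {'A': 6, 'B': 3, 'C': 1}
```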
The mathematical formula below is a summary of this allocation process: formula_4 The use of integers only to determine the number of seats in the first allocation means that there are always seats left empty for allocation in the second step, since the number of seats allocated by rounding down is always less than the number of seats a constituency is entitled to. In the 2023 apportionment of seats, 256 seats were awarded to constituencies in the first step and 29 in the second; Evrytania and Lefkada received no seats in the first step, but received one seat each due to their leftover decimals in the second step. Apportionment of seats allocated to parties in legislative elections. The first step in determining the result of a Greek legislative election is to find the electoral quota () of each party on a national level. This is done by taking the number of votes received by each party polling at least 3% nationally and multiplying it by a number as shown in formula, thereupon dividing that sum by the total number of votes cast for political parties which have received at least 3% of the national vote; the number of seats elected in constituencies, without the possible majority bonus. The integer of this calculation gives the number of seats that each party is awarded, in proportion to its electoral result. As with the allocation of seats to the constituencies, if there are any seats left vacant in the first allocation, the political parties are ranked in descending order of leftover decimals (formula_5) and a seat is awarded to any constituency with a formula_0 larger than or equal to formula_2, where formula_3 is the number of seats which remained empty in the first allocation. This formula below is a summary of this calculation process: formula_6 Because the number of votes is multiplied by a smaller number rather than 300 it ensures that there are always some seats left vacant for the majority bonus. The national quota is used later in order to 'correct' the results in the constituencies, ensuring that the seats of the majority bonus are left vacant as intended. This provision was abolished in 2016, along with the majority bonus, and in elections after 2019 the total number of votes received by a party will be multiplied by 300 instead of 250. The 15 MPs elected nationally. In accordance with the constitution, several seats in the parliament are elected on a national level. These MPs are elected through party-list proportional representation using the largest remainder method, with the whole of Greece acting as a single 15-seat constituency. The seats are allocated by first finding the national quota for these 15 seats, dividing the total number of votes cast for all parties which have received at least 3% of the vote nationally by 15, and then dividing the total number of votes for each party which has received at least 3% of the national vote by the national quota for the 15 seats. The integer of this calculation is the number of seats each party is awarded in this apportionment. If there are any seats left vacant in the first allocation, the political parties are ranked in descending order of leftover decimals (formula_5) and a seat is awarded to any party with a formula_0 larger than or equal to formula_2, where formula_3 is the number of seats which remained empty in the first allocation. This formula below is a summary of this calculation process: formula_7 The 9 MPs elected with First Past the Post. 
Nine constituencies of Greece (Cephalonia, Evrytania, Grevena, Kastoria, Kefalonia, Lefkada, Phocis, Samos, and Zakynthos) have only a single MP each. They are mostly islands, and their legal population ranges from 24,545 in Evrytania to 48,464 in Kastoria. Each seat is awarded to the party which has received the most valid votes in each of the single-seat constituencies, provided that that party has received at least 3% of the national vote. This means that if a party is very popular in a single-seat constituency but has not received 3% of the vote nationally, it is disqualified. There is no requirement for a party to reach 50% of the vote before being awarded the seat, as a simple plurality is enough. In this case, the system employed is first-past-the-post or plurality voting. The MPs who are elected proportionally. The apportionment of the MPs proportionally-elected in constituencies is by far the most complex step of all the processes, and involves a number of stages. In the first stage, the electoral quota of each constituency is calculated by dividing the total number of votes cast for all parties in the constituency (regardless of whether they achieved 3% of the national vote) by the number of seats in that constituency. The total number of votes cast for each party in the constituency is then divided by the constituency quota, and the integer of that calculation corresponds to the number of seats awarded to that party, so that a sum of 5.6 would award 5 seats. Any parties which have received more seats than they have candidates are awarded the same number of seats as their number of fielded candidates. To fill any seats left empty after the first stage, the difference between the number of seats each party is entitled to in accordance with the national quota established before the apportionment of seats began and the total number of seats that have been awarded to that party so far is calculated. The number of 'unused votes' for each party in the constituency is then calculated, by subtracting from its votes in the constituency the product of the number of seats it has been awarded there and the electoral quota established for that constituency. Empty seats in two- and three-member constituencies are awarded, in order and one by one, to the parties which have the highest number of unused votes in that constituency. If any party has been awarded more seats so far than it is entitled to in accordance with the national quota, one seat is removed from it in three-member (and if necessary two-member) constituencies, until it has the same number of allocated seats as it is entitled to. If there are still constituencies with empty seats, all constituencies with empty seats are ranked in descending order of the unused votes of the political party with the smallest number of valid votes on a national level (that has secured at least 3% of the national vote), and one seat is awarded to that party in those constituencies where it is showing the highest number of unused votes, until that party has reached the number of seats it is entitled to in accordance with the national quota. If there are still seats left empty, this procedure is followed for all other parties in ascending order of total valid votes (that have secured at least 3% of the national vote) until all seats have been allocated. The seats of the majority bonus. 
The seats of the majority bonus can either be awarded to the party which has achieved a plurality of votes or to an electoral coalition, provided that the average percentage of votes received by the members of the coalition is larger than the percentage of votes received by the political party which has a plurality of votes on a national level. The judgement on whether an organisation is a political party or a political coalition rests with the Supreme Civil and Criminal Court of Greece. If, in exceptional circumstances, the electoral arithmetic results in a situation where the largest party is awarded more seats than there are seats available including the bonus, as a result of the re-allocation of empty seats in the constituencies, then the majority bonus can be reduced so that the largest party can keep the seats awarded to it in the constituencies. Malapportionment. The Gallagher index (or Least Squares Index) was developed by Michael Gallagher as a means of measuring the electoral disproportionality, or malapportionment, between the percentage of votes parties receive in an election and the percentage of seats allocated to them. Greece's "notorious" 'reinforced proportionality' system produces Gallagher Indices more closely approximating those of first-past-the-post systems than those of proportional systems like Denmark or New Zealand. Taking as examples the Gallagher Indices of selected countries in their most recent elections, the September 2015 Greek election had an index of 9.69. The United Kingdom, Australia, and Canada, all first-past-the-post systems, scored 6.47, 11.48, and 12.01 respectively in their last elections (the United Kingdom's indices in the four elections prior to the last one averaged between 15 and 17). France's two-round system produced an index of 21.12, while Denmark and New Zealand, using proportional representation and mixed-member proportional representation, scored 0.79 and 2.73. Greece used proportional representation in the legislative elections of June 1989, November 1989, and 1990, which had lower Gallagher Indices of 4.37, 3.94, and 3.97. When the majority bonus was raised from 40 seats to 50 in the May 2012 election, the Gallagher Index nearly doubled from 7.29 in 2009 to 12.88.
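For reference, the Gallagher (least squares) index quoted throughout this section is the square root of half the sum of squared differences between each party's vote share and seat share. The following Python sketch computes it for an invented example; the vote and seat percentages are not real election data.

```python
import math

def gallagher_index(vote_shares, seat_shares):
    """Gallagher least squares index: sqrt(0.5 * sum((v_i - s_i)^2)),
    with vote and seat shares given in percent for each party."""
    return math.sqrt(0.5 * sum((v - s) ** 2 for v, s in zip(vote_shares, seat_shares)))

# Invented example: three parties' vote shares vs. seat shares (in percent).
votes = [42.0, 35.0, 23.0]
seats = [55.0, 30.0, 15.0]
print(round(gallagher_index(votes, seats), 2))  # about 11.36
```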
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "k_1 \\cdots k_{59}" }, { "math_id": 2, "text": "k_x" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "\\text{Constituency seats} = \\left \\lfloor {\\text{(total legal population of the constituency)} \\over \\text{(total legal population of Greece)}/285} \\right \\rfloor\n\\begin{cases}\n+1, & \\text{if } k_n \\geqslant k_x \\\\\n+0, & \\text{if } k_n < k_x\n\\end{cases}" }, { "math_id": 5, "text": "k_1 \\cdots k_n" }, { "math_id": 6, "text": "\\text{Quota} = \\left \\lfloor {\\text{(total number of votes for each party polling over 3}\\% \\text{ nationally)} \\times \\begin{cases}\n300, & \\text{if first party percent }< 25% \\\\\n280-2*(\\text{first party percent }-25 %), & 25% \\leqslant \\text{if first party percent }< 40 % \\\\\n250, & \\text{if first party percent }\\geqslant 40%\n\\end{cases} \\over \\text{(total number of votes cast for all parties polling over 3}\\% \\text{ nationally)}} \\right \\rfloor\n\\begin{cases}\n+1, & \\text{if } k_n \\geqslant k_x \\\\\n+0, & \\text{if } k_n < k_x\n\\end{cases}\n" }, { "math_id": 7, "text": "\\left \\lfloor {\\text{(total number of votes for each party polling over 3}\\% \\text{ nationally)} \\over \\text{(total number of votes cast for all parties polling over 3}\\% \\text{ nationally)} \\div 15} \\right \\rfloor\n\\begin{cases}\n+1, & \\text{if } k_n \\geqslant k_x \\\\\n+0, & \\text{if } k_n < k_x\n\\end{cases}\n" } ]
https://en.wikipedia.org/wiki?curid=60220238
602211
Bloom filter
Data structure for approximate set membership A Bloom filter is a space-efficient probabilistic data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not – in other words, a query returns either "possibly in set" or "definitely not in set". Elements can be added to the set, but not removed (though this can be addressed with the counting Bloom filter variant); the more items added, the larger the probability of false positives. Bloom proposed the technique for applications where the amount of source data would require an impractically large amount of memory if "conventional" error-free hashing techniques were applied. He gave the example of a hyphenation algorithm for a dictionary of 500,000 words, out of which 90% follow simple hyphenation rules, but the remaining 10% require expensive disk accesses to retrieve specific hyphenation patterns. With sufficient core memory, an error-free hash could be used to eliminate all unnecessary disk accesses; on the other hand, with limited core memory, Bloom's technique uses a smaller hash area but still eliminates most unnecessary accesses. For example, a hash area only 18% of the size needed by an ideal error-free hash still eliminates 87% of the disk accesses. More generally, fewer than 10 bits per element are required for a 1% false positive probability, independent of the size or number of elements in the set. Algorithm description. An "empty Bloom filter" is a bit array of m bits, all set to 0. It is equipped with k different hash functions, which map set elements to one of the m possible array positions. To be optimal, the hash functions should be uniformly distributed and independent. Typically, k is a small constant which depends on the desired false error rate ε, while m is proportional to k and the number of elements to be added. To "add" an element, feed it to each of the k hash functions to get k array positions. Set the bits at all these positions to 1. To "test" whether an element is in the set, feed it to each of the k hash functions to get k array positions. If "any" of the bits at these positions is 0, the element is definitely not in the set; if it were, then all the bits would have been set to 1 when it was inserted. If all are 1, then either the element is in the set, "or" the bits have by chance been set to 1 during the insertion of other elements, resulting in a false positive. In a simple Bloom filter, there is no way to distinguish between the two cases, but more advanced techniques can address this problem. The requirement of designing k different independent hash functions can be prohibitive for large k. For a good hash function with a wide output, there should be little if any correlation between different bit-fields of such a hash, so this type of hash can be used to generate multiple "different" hash functions by slicing its output into multiple bit fields. Alternatively, one can pass k different initial values (such as 0, 1, ..., k − 1) to a hash function that takes an initial value; or add (or append) these values to the key. For larger m and/or k, independence among the hash functions can be relaxed with negligible increase in false positive rate. (Specifically, it has been shown that the k indices can be derived effectively using enhanced double hashing and triple hashing, variants of double hashing that are effectively simple random number generators seeded with the two or three hash values.)
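A minimal sketch of the add/test operations described above, in Python. The parameters, the SHA-256-based hashing and the double-hashing scheme for deriving the k indices are illustrative choices under the relaxation just mentioned, not a reference implementation; a production filter would typically use faster non-cryptographic hashes.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array with k derived indices per element."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)      # m bits, all initially 0

    def _indices(self, item):
        # Derive k indices from two hash values (double hashing):
        # index_i = (h1 + i*h2) mod m for i = 0..k-1.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1   # force h2 to be odd
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)      # set the bit to 1

    def __contains__(self, item):
        # Any zero bit means "definitely not in set"; all ones means "possibly in set".
        return all((self.bits[idx // 8] >> (idx % 8)) & 1 for idx in self._indices(item))

bf = BloomFilter(m=10_000, k=7)
bf.add("hello")
print("hello" in bf)   # True (no false negatives)
print("world" in bf)   # almost certainly False; True here would be a false positive
```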
Removing an element from this simple Bloom filter is impossible because there is no way to tell which of the k bits it maps to should be cleared. Although setting any one of those k bits to zero suffices to remove the element, it would also remove any other elements that happen to map onto that bit. Since the simple algorithm provides no way to determine whether any other elements have been added that affect the bits for the element to be removed, clearing any of the bits would introduce the possibility of false negatives. One-time removal of an element from a Bloom filter can be simulated by having a second Bloom filter that contains items that have been removed. However, false positives in the second filter become false negatives in the composite filter, which may be undesirable. In this approach re-adding a previously removed item is not possible, as one would have to remove it from the "removed" filter. It is often the case that all the keys are available but are expensive to enumerate (for example, requiring many disk reads). When the false positive rate gets too high, the filter can be regenerated; this should be a relatively rare event. Space and time advantages. While risking false positives, Bloom filters have a substantial space advantage over other data structures for representing sets, such as self-balancing binary search trees, tries, hash tables, or simple arrays or linked lists of the entries. Most of these require storing at least the data items themselves, which can require anywhere from a small number of bits, for small integers, to an arbitrary number of bits, such as for strings (tries are an exception since they can share storage between elements with equal prefixes). However, Bloom filters do not store the data items at all, and a separate solution must be provided for the actual storage. Linked structures incur an additional linear space overhead for pointers. A Bloom filter with a 1% error and an optimal value of k, in contrast, requires only about 9.6 bits per element, regardless of the size of the elements. This advantage comes partly from its compactness, inherited from arrays, and partly from its probabilistic nature. The 1% false-positive rate can be reduced by a factor of ten by adding only about 4.8 bits per element. However, if the number of potential values is small and many of them can be in the set, the Bloom filter is easily surpassed by the deterministic bit array, which requires only one bit for each potential element. Hash tables gain a space and time advantage if they begin ignoring collisions and store only whether each bucket contains an entry; in this case, they have effectively become Bloom filters with k = 1. Bloom filters also have the unusual property that the time needed either to add items or to check whether an item is in the set is a fixed constant, O(k), completely independent of the number of items already in the set. No other constant-space set data structure has this property, but the average access time of sparse hash tables can make them faster in practice than some Bloom filters. In a hardware implementation, however, the Bloom filter shines because its k lookups are independent and can be parallelized. To understand its space efficiency, it is instructive to compare the general Bloom filter with its special case when k = 1. If k = 1, then in order to keep the false positive rate sufficiently low, a small fraction of bits should be set, which means the array must be very large and contain long runs of zeros.
The information content of the array relative to its size is low. The generalized Bloom filter (k greater than 1) allows many more bits to be set while still maintaining a low false positive rate; if the parameters (k and m) are chosen well, about half of the bits will be set, and these will be apparently random, minimizing redundancy and maximizing information content. Probability of false positives. Assume that a hash function selects each array position with equal probability. If "m" is the number of bits in the array, the probability that a certain bit is not set to 1 by a certain hash function during the insertion of an element is formula_0 If "k" is the number of hash functions and the hash functions have no significant correlation with one another, then the probability that the bit is not set to 1 by any of the hash functions is formula_1 We can use the well-known identity for "e"−1 formula_2 to conclude that, for large "m", formula_3 If we have inserted "n" elements, the probability that a certain bit is still 0 is formula_4 the probability that it is 1 is therefore formula_5 Now test membership of an element that is not in the set. Each of the "k" array positions computed by the hash functions is 1 with a probability as above. The probability of all of them being 1, which would cause the algorithm to erroneously claim that the element is in the set, is often given as formula_6 This is not strictly correct as it assumes independence for the probabilities of each bit being set. However, assuming it is a close approximation, we have that the probability of false positives decreases as "m" (the number of bits in the array) increases, and increases as "n" (the number of inserted elements) increases. The true probability of a false positive, without assuming independence, is formula_7 where the {braces} denote Stirling numbers of the second kind. An alternative analysis arriving at the same approximation without the assumption of independence is given by Mitzenmacher and Upfal. After all "n" items have been added to the Bloom filter, let "q" be the fraction of the "m" bits that are set to 0. (That is, the number of bits still set to 0 is "qm".) Then, when testing membership of an element not in the set, for the array position given by any of the "k" hash functions, the probability that the bit is found set to 1 is formula_8. So the probability that all "k" hash functions find their bit set to 1 is formula_9. Further, the expected value of "q" is the probability that a given array position is left untouched by each of the "k" hash functions for each of the "n" items, which is (as above) formula_10. It is possible to prove, without the independence assumption, that "q" is very strongly concentrated around its expected value. In particular, from the Azuma–Hoeffding inequality, they prove that formula_11 Because of this, we can say that the exact probability of false positives is formula_12 as before. Optimal number of hash functions. The number of hash functions, "k", must be a positive integer.
Putting this constraint aside, for a given "m" and "n", the value of "k" that minimizes the false positive probability is formula_13 The required number of bits, "m", given "n" (the number of inserted elements) and a desired false positive probability "ε" (and assuming the optimal value of "k" is used) can be computed by substituting the optimal value of "k" in the probability expression above: formula_14 which can be simplified to: formula_15 This results in: formula_16 So the optimal number of bits per element is formula_17 with the corresponding number of hash functions "k" (ignoring integrality): formula_18 This means that for a given false positive probability "ε", the length of a Bloom filter "m" is proportional to the number of elements being filtered "n" and the required number of hash functions only depends on the target false positive probability "ε". The formula formula_19 is approximate for three reasons. First, and of least concern, it approximates formula_20 as formula_21, which is a good asymptotic approximation (i.e., which holds as "m" →∞). Second, of more concern, it assumes that during the membership test the event that one tested bit is set to 1 is independent of the event that any other tested bit is set to 1. Third, of most concern, it assumes that formula_22 is fortuitously integral. Goel and Gupta, however, give a rigorous upper bound that makes no approximations and requires no assumptions. They show that the false positive probability for a finite Bloom filter with "m" bits (formula_23), "n" elements, and "k" hash functions is at most formula_24 This bound can be interpreted as saying that the approximate formula formula_25 can be applied at a penalty of at most half an extra element and at most one fewer bit. Approximating the number of items in a Bloom filter. The number of items in a Bloom filter can be approximated with the following formula, formula_26 where formula_27 is an estimate of the number of items in the filter, m is the length (size) of the filter, k is the number of hash functions, and X is the number of bits set to one. The union and intersection of sets. Bloom filters are a way of compactly representing a set of items. It is common to try to compute the size of the intersection or union between two sets. Bloom filters can be used to approximate the size of the intersection and union of two sets. For two Bloom filters of length m, their counts can be estimated, respectively, as formula_28 and formula_29 The size of their union can be estimated as formula_30 where formula_31 is the number of bits set to one in either of the two Bloom filters. Finally, the intersection can be estimated as formula_32 using the three formulas together. Alternatives. Classic Bloom filters use formula_33 bits of space per inserted key, where formula_34 is the false positive rate of the Bloom filter. However, the space that is strictly necessary for any data structure playing the same role as a Bloom filter is only formula_35 per key. Hence Bloom filters use 44% more space than an equivalent optimal data structure. Pagh et al. provide a data structure that uses formula_36 bits while supporting constant amortized expected-time operations. Their data structure is primarily theoretical, but it is closely related to the widely-used quotient filter, which can be parameterized to use formula_37 bits of space, for an arbitrary parameter formula_38, while supporting formula_39-time operations.
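The closed-form expressions above translate directly into code. The following sketch evaluates the approximate false-positive probability, the optimal k and m, and the estimators for the number of items and for union/intersection sizes; the chosen n and ε and all variable names are illustrative.

```python
from math import log, exp

def false_positive_rate(m, n, k):
    """Approximate FP probability (1 - e^(-kn/m))^k from the analysis above."""
    return (1.0 - exp(-k * n / m)) ** k

def optimal_parameters(n, eps):
    """Bits m and hash count k minimising space for n items at FP rate eps."""
    m = -n * log(eps) / (log(2) ** 2)        # m = -n ln(eps) / (ln 2)^2
    k = -log(eps) / log(2)                   # k = -ln(eps) / ln 2
    return m, k

def estimate_count(m, k, bits_set):
    """n* = -(m/k) ln(1 - X/m): approximate number of inserted items."""
    return -(m / k) * log(1.0 - bits_set / m)

def estimate_union(m, k, bits_set_in_either):
    """Estimated size of the union, from the bits set in either filter."""
    return estimate_count(m, k, bits_set_in_either)

def estimate_intersection(count_a, count_b, count_union):
    """|A ∩ B| ≈ |A| + |B| − |A ∪ B|, combining the three estimates above."""
    return count_a + count_b - count_union

n, eps = 1_000_000, 0.01                     # illustrative values
m, k = optimal_parameters(n, eps)
print(m / n, k)                              # about 9.59 bits per element, k about 6.6
print(false_positive_rate(m, n, round(k)))   # close to the requested 1%
```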
Advantages of the quotient filter, when compared to the Bloom filter, include its locality of reference and the ability to support deletions. Another alternative to the classic Bloom filter is the cuckoo filter, based on space-efficient variants of cuckoo hashing. In this case, a hash table is constructed, holding neither keys nor values, but short fingerprints (small hashes) of the keys. If looking up the key finds a matching fingerprint, then the key is probably in the set. Cuckoo filters support deletions and have better locality of reference than Bloom filters. Additionally, in some parameter regimes, cuckoo filters can be parameterized to offer nearly optimal space guarantees. Many alternatives to Bloom filters, including quotient filters and cuckoo filters, are based on the idea of hashing keys to random formula_40-bit fingerprints, and then storing those fingerprints in a compact hash table. This technique, which was first introduced by Carter et al. in 1978, relies on the fact that compact hash tables can be implemented to use roughly formula_41 bits less space than their non-compact counterparts. Using succinct hash tables, the space usage can be reduced to as little as formula_42 bits while supporting constant-time operations in a wide variety of parameter regimes. Variants of Bloom filters that are either faster or use less space than classic Bloom filters have also been studied. The basic idea of the fast variant is to locate the k hash values associated with each key into one or two blocks having the same size as the processor's memory cache blocks (usually 64 bytes). This will presumably improve performance by reducing the number of potential memory cache misses. The proposed variants have, however, the drawback of using about 32% more space than classic Bloom filters. The space-efficient variant relies on using a single hash function that generates for each key a value in the range formula_43 where formula_34 is the requested false positive rate. The sequence of values is then sorted and compressed using Golomb coding (or some other compression technique) to occupy a space close to formula_44 bits. To query the Bloom filter for a given key, it will suffice to check if its corresponding value is stored in the Bloom filter. Decompressing the whole Bloom filter for each query would make this variant totally unusable. To overcome this problem the sequence of values is divided into small blocks of equal size that are compressed separately. At query time only half a block will need to be decompressed on average. Because of decompression overhead, this variant may be slower than classic Bloom filters but this may be compensated by the fact that only a single hash function needs to be computed. Another approach, called an xor filter, stores fingerprints in a particular type of perfect hash table, producing a filter which is more memory efficient (formula_45 bits per key) and faster than Bloom or cuckoo filters. (The time saving comes from the fact that a lookup requires exactly three memory accesses, which can all execute in parallel.) However, filter creation is more complex than Bloom and cuckoo filters, and it is not possible to modify the set after creation. Extensions and applications. There are over 60 variants of Bloom filters, many surveys of the field, and a continuing churn of applications (see e.g. Luo "et al."). Some of the variants differ sufficiently from the original proposal to be breaches from or forks of the original data structure and its philosophy.
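The per-key space figures quoted above for classic Bloom filters, the information-theoretic lower bound, and xor filters can be compared numerically. The sketch below uses an illustrative 1% false-positive rate; it is a back-of-the-envelope comparison of the stated formulas, not a benchmark of real implementations.

```python
from math import log2

eps = 0.01                                   # illustrative target false-positive rate
lower_bound = log2(1 / eps)                  # about 6.64 bits/key: minimum for this role
classic_bloom = 1.44 * log2(1 / eps)         # about 9.57 bits/key: ~44% above the bound
xor_filter = 1.23 * log2(1 / eps)            # about 8.17 bits/key, per the figure above

for name, bits in [("lower bound", lower_bound),
                   ("classic Bloom filter", classic_bloom),
                   ("xor filter", xor_filter)]:
    print(f"{name}: {bits:.2f} bits per key")
```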
A treatment which unifies Bloom filters with other work on random projections, compressive sensing, and locality sensitive hashing remains to be done (though see Dasgupta "et al." for one attempt inspired by neuroscience). Cache filtering. Content delivery networks deploy web caches around the world to cache and serve web content to users with greater performance and reliability. A key application of Bloom filters is their use in efficiently determining which web objects to store in these web caches. Nearly three-quarters of the URLs accessed from a typical web cache are "one-hit-wonders" that are accessed by users only once and never again. It is clearly wasteful of disk resources to store one-hit-wonders in a web cache, since they will never be accessed again. To prevent caching one-hit-wonders, a Bloom filter is used to keep track of all URLs that are accessed by users. A web object is cached only when it has been accessed at least once before, i.e., the object is cached on its second request. The use of a Bloom filter in this fashion significantly reduces the disk write workload, since most one-hit-wonders are not written to the disk cache. Further, filtering out the one-hit-wonders also saves cache space on disk, increasing the cache hit rates. Avoiding false positives in a finite universe. Kiss "et al." described a new construction for the Bloom filter that avoids false positives in addition to the typical non-existence of false negatives. The construction applies to a finite universe from which set elements are taken. It relies on an existing non-adaptive combinatorial group testing scheme by Eppstein, Goodrich and Hirschberg. Unlike the typical Bloom filter, elements are hashed to a bit array through deterministic, fast and simple-to-calculate functions. The maximal set size for which false positives are completely avoided is a function of the universe size and is controlled by the amount of allocated memory. Alternatively, an initial Bloom filter can be constructed in the standard way and then, with a finite and tractably-enumerable domain, all false positives can be exhaustively found and then a second Bloom filter constructed from that list; false positives in the second filter are similarly handled by constructing a third, and so on. As the universe is finite and the set of false positives strictly shrinks with each step, this procedure results in a finite "cascade" of Bloom filters that (on this closed, finite domain) will produce only true positives and true negatives. To check for membership in the filter cascade, the initial filter is queried, and, if the result is positive, the second filter is then consulted, and so on. This construction is used in CRLite, a proposed certificate revocation status distribution mechanism for the Web PKI, and Certificate Transparency is exploited to close the set of extant certificates. Counting Bloom filters. Counting filters provide a way to implement a "delete" operation on a Bloom filter without recreating the filter afresh. In a counting filter, the array positions (buckets) are extended from being a single bit to being a multibit counter. In fact, regular Bloom filters can be considered as counting filters with a bucket size of one bit. Counting filters were introduced by Fan et al. (2000). The insert operation is extended to "increment" the value of the buckets, and the lookup operation checks that each of the required buckets is non-zero. The delete operation then consists of decrementing the value of each of the respective buckets.
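A counting filter can be sketched by swapping the bit array for an array of small counters, as just described. The toy Python version below uses unbounded Python integers for the counters, so it sidesteps the overflow issue discussed next; the hash derivation and parameters are illustrative choices, not a reference design.

```python
import hashlib

class CountingBloomFilter:
    """Toy counting Bloom filter: buckets are counters instead of single bits."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _indices(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indices(item):
            self.counters[idx] += 1            # increment instead of setting a bit

    def remove(self, item):
        # Only safe for items previously added; otherwise it can corrupt the filter.
        for idx in self._indices(item):
            self.counters[idx] -= 1

    def __contains__(self, item):
        return all(self.counters[idx] > 0 for idx in self._indices(item))

cbf = CountingBloomFilter(m=10_000, k=7)
cbf.add("hello")
print("hello" in cbf)   # True
cbf.remove("hello")
print("hello" in cbf)   # False again, since every associated counter was decremented
```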
Arithmetic overflow of the buckets is a problem and the buckets should be sufficiently large to make this case rare. If it does occur then the increment and decrement operations must leave the bucket set to the maximum possible value in order to retain the properties of a Bloom filter. The size of counters is usually 3 or 4 bits. Hence counting Bloom filters use 3 to 4 times more space than static Bloom filters. In contrast, some alternative data structures also allow deletions but use less space than a static Bloom filter. Another issue with counting filters is limited scalability. Because the counting Bloom filter table cannot be expanded, the maximal number of keys to be stored simultaneously in the filter must be known in advance. Once the designed capacity of the table is exceeded, the false positive rate will grow rapidly as more keys are inserted. A data structure based on d-left hashing has been introduced that is functionally equivalent but uses approximately half as much space as counting Bloom filters. The scalability issue does not occur in this data structure. Once the designed capacity is exceeded, the keys could be reinserted in a new hash table of double size. The space-efficient variant described above could also be used to implement counting filters by supporting insertions and deletions. A general method based on variable increments has also been introduced that significantly improves the false positive probability of counting Bloom filters and their variants, while still supporting deletions. Unlike counting Bloom filters, at each element insertion the hashed counters are incremented by a hashed variable increment instead of a unit increment. To query an element, the exact values of the counters are considered and not just their positiveness. If a sum represented by a counter value cannot be composed of the corresponding variable increment for the queried element, a negative answer can be returned to the query. Kim et al. (2019) show that the false positive rate of a counting Bloom filter decreases from k = 1 to a point defined by formula_46, and increases from formula_46 to positive infinity, and they derive formula_46 as a function of the count threshold. Decentralized aggregation. Bloom filters can be organized in distributed data structures to perform fully decentralized computations of aggregate functions. Decentralized aggregation makes collective measurements locally available in every node of a distributed network without involving a centralized computational entity for this purpose. Distributed Bloom filters. Parallel Bloom filters can be implemented to take advantage of the multiple processing elements (PEs) present in parallel shared-nothing machines. One of the main obstacles for a parallel Bloom filter is the organization and communication of the unordered data which is, in general, distributed evenly over all PEs at the initiation or at batch insertions. To order the data, two approaches can be used, either resulting in a Bloom filter over all data being stored on each PE, called a replicating Bloom filter, or the Bloom filter over all data being split into equal parts, each PE storing one part of it. For both approaches a "Single Shot" Bloom filter is used which only calculates one hash, resulting in one flipped bit per element, to reduce the communication volume. Distributed Bloom filters are initiated by first hashing all elements on their local PE and then sorting them by their hashes locally. This can be done in linear time using e.g. Bucket sort and also allows local duplicate detection.
The sorting is used to group the hashes with their assigned PE as separator to create a Bloom filter for each group. After encoding these Bloom filters using e.g. Golomb coding, each Bloom filter is sent as a packet to the PE responsible for the hash values that were inserted into it. A PE p is responsible for all hashes between the values formula_47 and formula_48, where s is the total size of the Bloom filter over all data. Because each element is only hashed once and therefore only a single bit is set, to check if an element was inserted into the Bloom filter only the PE responsible for the hash value of the element needs to be operated on. Single insertion operations can also be done efficiently because the Bloom filter of only one PE has to be changed, compared to replicating Bloom filters, where every PE would have to update its Bloom filter. By distributing the global Bloom filter over all PEs instead of storing it separately on each PE, the Bloom filter's size can be far larger, resulting in a larger capacity and lower false positive rate. Distributed Bloom filters can be used to improve duplicate detection algorithms by filtering out the most 'unique' elements. These can be calculated by communicating only the hashes of elements, not the elements themselves which are far larger in volume, and removing them from the set, reducing the workload for the duplicate detection algorithm used afterwards. During the communication of the hashes the PEs search for bits that are set in more than one of the receiving packets, as this would mean that two elements had the same hash and therefore could be duplicates. If this occurs, a message containing the index of the bit, which is also the hash of the element that could be a duplicate, is sent to the PEs which sent a packet with the set bit. If multiple indices are sent to the same PE by one sender, it can be advantageous to encode the indices as well. All elements that did not have their hash sent back are now guaranteed not to be duplicates and will not be evaluated further; for the remaining elements a repartitioning algorithm can be used. First, all the elements that had their hash value sent back are sent to the PE that is responsible for their hash. Any element and its duplicate is now guaranteed to be on the same PE. In the second step, each PE uses a sequential algorithm for duplicate detection on the received elements, which are only a fraction of the starting elements. By allowing a false positive rate for the duplicates, the communication volume can be reduced further as the PEs don't have to send elements with duplicated hashes at all and instead any element with a duplicated hash can simply be marked as a duplicate. As a result, the false positive rate for duplicate detection is the same as the false positive rate of the Bloom filter used. The process of filtering out the most 'unique' elements can also be repeated multiple times by changing the hash function in each filtering step. If only a single filtering step is used it has to achieve a small false positive rate; however, if the filtering step is repeated once, the first step can allow a higher false positive rate, while the later one also has a higher one but works on fewer elements, as many have already been removed by the earlier filtering step. While using more than two repetitions can reduce the communication volume further if the number of duplicates in a set is small, the payoff for the additional complications is low.
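The assignment of hash values to processing elements described above amounts to a simple range partition of the global single-shot Bloom filter. A short sketch of the mapping in both directions, with illustrative sizes:

```python
# Range partition of the global single-shot Bloom filter across PEs, following
# the responsibility intervals described above. Sizes are illustrative only.

s = 1_000_000          # total size of the Bloom filter over all data (bits)
num_pes = 8            # number of processing elements |PE|

def responsible_pe(hash_value):
    """PE p handles hashes in [p*(s/|PE|), (p+1)*(s/|PE|))."""
    return int(hash_value // (s / num_pes))

def responsibility_range(p):
    """Half-open interval of hash values owned by PE p."""
    return p * s // num_pes, (p + 1) * s // num_pes

print(responsible_pe(130_000))     # 1, since 125000 <= 130000 < 250000
print(responsibility_range(1))     # (125000, 250000)
```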
Replicating Bloom filters organize their data by using a well-known hypercube algorithm for gossiping. First, each PE calculates the Bloom filter over all local elements and stores it. By repeating a loop where in each step i the PEs send their local Bloom filter over dimension i and merge the Bloom filter they receive over the dimension with their local Bloom filter, it is possible to double the elements each Bloom filter contains in every iteration. After sending and receiving Bloom filters over all formula_49 dimensions, each PE contains the global Bloom filter over all elements. Replicating Bloom filters are more efficient when the number of queries is much larger than the number of elements that the Bloom filter contains; the break-even point compared to distributed Bloom filters is approximately after formula_50 accesses, with formula_51 as the false positive rate of the Bloom filter. Data synchronization. Bloom filters can be used for approximate data synchronization. Counting Bloom filters can be used to approximate the number of differences between two sets. Bloom filters for streaming data. Bloom filters can be adapted to the context of streaming data. For instance, stable Bloom filters have been proposed, which consist of a counting Bloom filter where insertion of a new element sets the associated counters to a value c, and then only a fixed number s of counters are decreased by 1, hence the memory mostly contains information about recent elements (intuitively, one could assume that the lifetime of an element inside a SBF of N counters is around formula_52). Another solution is the aging Bloom filter, which consists of two Bloom filters, each occupying half the total available memory: when one filter is full, the second filter is erased and newer elements are then added to this newly empty filter. However, it has been shown that no matter the filter, after n insertions, the sum of the false positive formula_53 and false negative formula_54 probabilities is bounded below by formula_55 where L is the number of all possible elements (the alphabet size) and m the memory size (in bits), assuming formula_56. This result shows that for L large enough and n going to infinity, the lower bound converges to formula_57, which is the characteristic relation of a random filter. Hence, after enough insertions, and if the alphabet is too big to be stored in memory (which is assumed in the context of probabilistic filters), it is impossible for a filter to perform better than randomness. This result can be leveraged by only expecting a filter to operate on a sliding window rather than the whole stream. In this case, the exponent n in the formula above is replaced by w, which gives a formula that might deviate from 1, if w is not too small. Bloomier filters. A generalization of Bloom filters has been designed that can associate a value with each element that has been inserted, implementing an associative array. Like Bloom filters, these structures achieve a small space overhead by accepting a small probability of false positives. In the case of "Bloomier filters", a "false positive" is defined as returning a result when the key is not in the map. The map will never return the wrong value for a key that "is" in the map. Compact approximators. A lattice-based generalization of Bloom filters has been proposed. A compact approximator associates to each key an element of a lattice (the standard Bloom filters being the case of the Boolean two-element lattice).
Instead of a bit array, they have an array of lattice elements. When adding a new association between a key and an element of the lattice, they compute the maximum of the current contents of the k array locations associated to the key with the lattice element. When reading the value associated to a key, they compute the minimum of the values found in the k locations associated to the key. The resulting value approximates from above the original value. Parallel-partitioned Bloom filters. This implementation uses a separate array for each hash function. This method allows for parallel hash calculations for both insertions and inquiries. Scalable Bloom filters. A variant of Bloom filters has been proposed that can adapt dynamically to the number of elements stored, while assuring a minimum false positive probability. The technique is based on sequences of standard Bloom filters with increasing capacity and tighter false positive probabilities, so as to ensure that a maximum false positive probability can be set beforehand, regardless of the number of elements to be inserted. Spatial Bloom filters. Spatial Bloom filters (SBF) were originally proposed as a data structure designed to store location information, especially in the context of cryptographic protocols for location privacy. However, the main characteristic of SBFs is their ability to store multiple sets in a single data structure, which makes them suitable for a number of different application scenarios. Membership of an element in a specific set can be queried, and the false positive probability depends on the set: the first sets to be entered into the filter during construction have higher false positive probabilities than sets entered at the end. This property allows a prioritization of the sets, where sets containing more "important" elements can be preserved. Layered Bloom filters. A layered Bloom filter consists of multiple Bloom filter layers. Layered Bloom filters allow keeping track of how many times an item was added to the Bloom filter by checking how many layers contain the item. With a layered Bloom filter a check operation will normally return the deepest layer number the item was found in. Attenuated Bloom filters. An attenuated Bloom filter of depth D can be viewed as an array of D normal Bloom filters. In the context of service discovery in a network, each node stores regular and attenuated Bloom filters locally. The regular or local Bloom filter indicates which services are offered by the node itself. The attenuated filter of level i indicates which services can be found on nodes that are i hops away from the current node. The i-th value is constructed by taking a union of local Bloom filters for nodes i hops away from the node. For example, consider a small network with nodes n1, n2 and n3. Say we are searching for a service A whose id hashes to bits 0, 1, and 3 (pattern 11010). Let node n1 be the starting point. First, we check whether service A is offered by n1 by checking its local filter. Since the patterns don't match, we check the attenuated Bloom filter in order to determine which node should be the next hop. We see that n2 doesn't offer service A but lies on the path to nodes that do. Hence, we move to n2 and repeat the same procedure. We quickly find that n3 offers the service, and hence the destination is located.
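The walk-through above can be mirrored with toy data. In the sketch below the filters are ordinary Python sets of bit positions rather than real Bloom filters, the attenuation levels reachable through each neighbour are collapsed into a single set, and the topology and filter contents are hypothetical.

```python
# Toy walk-through of attenuated-Bloom-filter service discovery.
# All filter contents and the topology are made up for illustration.

service_a = {0, 1, 3}                 # bit positions the service id hashes to

# local[n]: bits set by services offered at node n itself.
# via[n][nb]: union of filters of nodes reachable through neighbour nb.
local = {"n1": {2, 4}, "n2": {5}, "n3": {0, 1, 3}}
via = {"n1": {"n2": {0, 1, 3, 5}}, "n2": {"n3": {0, 1, 3}}, "n3": {}}

def next_hop(node, pattern):
    """Return the node itself if it offers the service, else a promising neighbour."""
    if pattern <= local[node]:                   # all bits present in the local filter
        return node
    for nb, bits in via[node].items():
        if pattern <= bits:                      # some node reachable via nb may offer it
            return nb
    return None

node = "n1"
while node is not None and not (service_a <= local[node]):
    node = next_hop(node, service_a)
print(node)   # "n3": found by following the attenuated filters, as in the example above
```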
By using attenuated Bloom filters consisting of multiple layers, services at more than one hop distance can be discovered while avoiding saturation of the Bloom filter by attenuating (shifting out) bits set by sources further away. Chemical structure searching. Bloom filters are often used to search large chemical structure databases (see chemical similarity). In the simplest case, the elements added to the filter (called a fingerprint in this field) are just the atomic numbers present in the molecule, or a hash based on the atomic number of each atom and the number and type of its bonds. This case is too simple to be useful. More advanced filters also encode atom counts, larger substructure features like carboxyl groups, and graph properties like the number of rings. In hash-based fingerprints, a hash function based on atom and bond properties is used to turn a subgraph into a PRNG seed, and the first output values are used to set bits in the Bloom filter. Molecular fingerprints started in the late 1940s as a way to search for chemical structures stored on punched cards. However, it wasn't until around 1990 that Daylight Chemical Information Systems, Inc. introduced a hash-based method to generate the bits, rather than use a precomputed table. Unlike the dictionary approach, the hash method can assign bits for substructures which hadn't previously been seen. In the early 1990s, the term "fingerprint" was considered different from "structural keys", but the term has since grown to encompass most molecular characteristics which can be used for a similarity comparison, including structural keys, sparse count fingerprints, and 3D fingerprints. Unlike Bloom filters, the Daylight hash method allows the number of bits assigned per feature to be a function of the feature size, but most implementations of Daylight-like fingerprints use a fixed number of bits per feature, which makes them a Bloom filter. The original Daylight fingerprints could be used for both similarity and screening purposes. Many other fingerprint types, like the popular ECFP2, can be used for similarity but not for screening because they include local environmental characteristics that introduce false negatives when used as a screen. Even if these are constructed with the same mechanism, these are not Bloom filters because they cannot be used to filter. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Works cited. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "1-\\frac{1}{m}." }, { "math_id": 1, "text": "\\left(1-\\frac{1}{m}\\right)^k." }, { "math_id": 2, "text": "\\lim_{m\\to\\infty} \\left(1 - \\frac{1}{m}\\right)^m = \\frac{1}{e}" }, { "math_id": 3, "text": "\\left(1-\\frac{1}{m}\\right)^k = \\left(\\left(1-\\frac{1}{m}\\right)^m\\right)^{k/m} \\approx e^{-k/m}." }, { "math_id": 4, "text": "\\left(1-\\frac{1}{m}\\right)^{kn} \\approx e^{-kn/m};" }, { "math_id": 5, "text": "1-\\left(1-\\frac{1}{m}\\right)^{kn} \\approx 1 - e^{-kn/m}." }, { "math_id": 6, "text": "\\varepsilon = \\left(1-\\left[1-\\frac{1}{m}\\right]^{kn}\\right)^k \\approx \\left( 1-e^{-kn/m} \\right)^k." }, { "math_id": 7, "text": "\\frac{1}{m^{k(n+1)}}\\sum_{i=1}^m i^k i! {m \\choose i} \\left\\{ {kn \\atop i}\\right\\}" }, { "math_id": 8, "text": "1-q" }, { "math_id": 9, "text": "(1 - q)^k" }, { "math_id": 10, "text": "E[q] = \\left(1 - \\frac{1}{m}\\right)^{kn}" }, { "math_id": 11, "text": " \\Pr(\\left|q - E[q]\\right| \\ge \\frac{\\lambda}{m}) \\le 2\\exp(-2\\lambda^2/kn) " }, { "math_id": 12, "text": " \\sum_{t} \\Pr(q = t) (1 - t)^k \\approx (1 - E[q])^k = \\left(1-\\left[1-\\frac{1}{m}\\right]^{kn}\\right)^k \\approx \\left( 1-e^{-kn/m} \\right)^k" }, { "math_id": 13, "text": "k = \\frac{m}{n} \\ln 2." }, { "math_id": 14, "text": "\\varepsilon = \\left( 1-e^{-(\\frac m n \\ln 2) \\frac n m} \\right)^{\\frac m n\\ln 2}=\\left(\\frac 1 2 \\right)^{\\frac m n\\ln 2}" }, { "math_id": 15, "text": "\\ln (\\varepsilon) = -\\frac{m}{n}\\ln(2)^2." }, { "math_id": 16, "text": "m=-\\frac{n\\ln(\\varepsilon)}{\\ln (2)^2}" }, { "math_id": 17, "text": "\\frac{m}{n}=-\\frac{\\ln(\\varepsilon)}{\\ln (2)^2}\\approx-2.08\\ln(\\varepsilon)" }, { "math_id": 18, "text": "k=-\\frac{\\ln (\\varepsilon)}{\\ln(2)}. " }, { "math_id": 19, "text": "m=-\\frac{n\\ln\\varepsilon}{(\\ln 2)^2}" }, { "math_id": 20, "text": "1 - \\frac{1}{m}" }, { "math_id": 21, "text": "e^{-\\frac{1}{m}}" }, { "math_id": 22, "text": "k = \\frac{m}{n} \\ln 2" }, { "math_id": 23, "text": " m > 1" }, { "math_id": 24, "text": "\\varepsilon \\leq \\left( 1-e^{-\\frac{k(n+0.5)}{m-1}} \\right)^k." }, { "math_id": 25, "text": "\\left( 1-e^{-\\frac{kn}m} \\right)^k" }, { "math_id": 26, "text": " n^* = -\\frac{m}{k} \\ln \\left[ 1 - \\frac{X}{m} \\right], " }, { "math_id": 27, "text": "n^*" }, { "math_id": 28, "text": " n(A^*) = -\\frac{m}{k} \\ln \\left[ 1 - \\frac{|A|} m \\right] " }, { "math_id": 29, "text": " n(B^*) = -\\frac{m}{k} \\ln \\left[ 1 - \\frac{|B|} m \\right]. 
" }, { "math_id": 30, "text": " n(A^*\\cup B^*) = -\\frac{m}{k} \\ln \\left[ 1 - \\frac{|A \\cup B|} m \\right], " }, { "math_id": 31, "text": "n(A \\cup B)" }, { "math_id": 32, "text": " n(A^*\\cap B^*) = n(A^*) + n(B^*) - n(A^* \\cup B^*)," }, { "math_id": 33, "text": "1.44\\log_2(1/\\varepsilon)" }, { "math_id": 34, "text": "\\varepsilon" }, { "math_id": 35, "text": "\\log_2(1/\\varepsilon)" }, { "math_id": 36, "text": "(1 + o(1)) n \\log_2 (1/\\epsilon) + O(n) " }, { "math_id": 37, "text": "(1 + \\delta) n \\log \\epsilon^{-1} + 3n" }, { "math_id": 38, "text": "\\delta > 0" }, { "math_id": 39, "text": "O(\\delta^{-2})" }, { "math_id": 40, "text": "(\\log n + \\log \\epsilon^{-1})\n" }, { "math_id": 41, "text": "n \\log n" }, { "math_id": 42, "text": "n \\log_2 (e/\\epsilon) + o(n) " }, { "math_id": 43, "text": "\\left[0,n/\\varepsilon\\right]" }, { "math_id": 44, "text": "n\\log_2(1/\\varepsilon)" }, { "math_id": 45, "text": "1.23\\log_2(1/\\varepsilon)" }, { "math_id": 46, "text": "k_{opt}" }, { "math_id": 47, "text": "p*(s/|\\text{PE}|)" }, { "math_id": 48, "text": "(p+1)*(s/|\\text{PE}|)" }, { "math_id": 49, "text": "\\log |\\text{PE}|" }, { "math_id": 50, "text": "|\\text{PE}| * |\\text{Elements}| / \\log_{f^\\text{+}} |\\text{PE}|" }, { "math_id": 51, "text": "f^\\text{+}" }, { "math_id": 52, "text": "c \\tfrac s N" }, { "math_id": 53, "text": "FP" }, { "math_id": 54, "text": "FN" }, { "math_id": 55, "text": " FP + FN \\geq 1 - \\frac{1 - \\left(1 - \\frac {1}{L} \\right)^m}{1 - \\left( 1 - \\frac {1}{L} \\right)^{n}}" }, { "math_id": 56, "text": "n > m" }, { "math_id": 57, "text": "FP + FN = 1" } ]
https://en.wikipedia.org/wiki?curid=602211
6022330
Leopold Gegenbauer
Austrian mathematician Leopold Bernhard Gegenbauer (2 February 1849, Asperhofen – 3 June 1903, Gießhübl) was an Austrian mathematician remembered best as an algebraist. Gegenbauer polynomials are named after him. Leopold Gegenbauer was the son of a doctor. He studied at the University of Vienna from 1869 until 1873. He then went to Berlin where he studied from 1873 to 1875, working under Weierstrass and Kronecker. After graduating from Berlin, Gegenbauer was appointed to the position of extraordinary professor at the University of Czernowitz in 1875. Czernowitz, on the upper Prut River in the Carpathian foothills, was at that time in the Austrian Empire; after World War I it belonged to Romania, and after 1944 it became Chernivtsi, Ukraine. Czernowitz University was founded in 1875 and Gegenbauer was the first professor of mathematics there. He remained in Czernowitz for three years before moving to the University of Innsbruck, where he worked with Otto Stolz. Again he held the position of extraordinary professor in Innsbruck. After three years teaching in Innsbruck, Gegenbauer was appointed full professor in 1881, and then full professor at the University of Vienna in 1893. During the session 1897–98 he was Dean of the university. He remained at Vienna until his death. Among the students who studied with him at Vienna were the Slovenian Josip Plemelj, the American James Pierpont, Ernst Fischer, and Lothar von Rechtenstamm. Gegenbauer had many mathematical interests such as number theory, complex analysis, and the theory of integration, but he was chiefly an algebraist. He is remembered for the Gegenbauer polynomials, a class of orthogonal polynomials. They are obtained from the hypergeometric series in certain cases where the series is in fact finite. The Gegenbauer polynomials are solutions to the Gegenbauer differential equation and are generalizations of the associated Legendre polynomials. Gegenbauer also gave his name to arithmetic functions studied in analytic number theory. The Gegenbauer functions Ρ and ρ (upper case and lower case rho) are defined as follows. formula_0 In 1973, a street in Vienna's district of Floridsdorf (21st district) was named the "Gegenbauerweg" in his honor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P_{a,r}(n):=\\sum_{d\\,\\mid\\,n; d^{1/r}\\in\\mathbb N}d^a=:n^a\\rho_{-a,r}(n)" } ]
https://en.wikipedia.org/wiki?curid=6022330
602264
LC circuit
Electrical "resonator" circuit, consisting of inductive and capacitive elements with no resistance An LC circuit, also called a resonant circuit, tank circuit, or tuned circuit, is an electric circuit consisting of an inductor, represented by the letter L, and a capacitor, represented by the letter C, connected together. The circuit can act as an electrical resonator, an electrical analogue of a tuning fork, storing energy oscillating at the circuit's resonant frequency. LC circuits are used either for generating signals at a particular frequency, or picking out a signal at a particular frequency from a more complex signal; this function is called a bandpass filter. They are key components in many electronic devices, particularly radio equipment, used in circuits such as oscillators, filters, tuners and frequency mixers. An LC circuit is an idealized model since it assumes there is no dissipation of energy due to resistance. Any practical implementation of an LC circuit will always include loss resulting from small but non-zero resistance within the components and connecting wires. The purpose of an LC circuit is usually to oscillate with minimal damping, so the resistance is made as low as possible. While no practical circuit is without losses, it is nonetheless instructive to study this ideal form of the circuit to gain understanding and physical intuition. For a circuit model incorporating resistance, see RLC circuit. Terminology. The two-element LC circuit described above is the simplest type of inductor-capacitor network (or LC network). It is also referred to as a "second order LC circuit" to distinguish it from more complicated (higher order) LC networks with more inductors and capacitors. Such LC networks with more than two reactances may have more than one resonant frequency. The order of the network is the order of the rational function describing the network in the complex frequency variable s. Generally, the order is equal to the number of L and C elements in the circuit and in any event cannot exceed this number. Operation. An LC circuit, oscillating at its natural resonant frequency, can store electrical energy. See the animation. A capacitor stores energy in the electric field (E) between its plates, depending on the voltage across it, and an inductor stores energy in its magnetic field (B), depending on the current through it. If an inductor is connected across a charged capacitor, the voltage across the capacitor will drive a current through the inductor, building up a magnetic field around it. The voltage across the capacitor falls to zero as the charge is used up by the current flow. At this point, the energy stored in the coil's magnetic field induces a voltage across the coil, because inductors oppose changes in current. This induced voltage causes a current to begin to recharge the capacitor with a voltage of opposite polarity to its original charge. Due to Faraday's law, the EMF which drives the current is caused by a decrease in the magnetic field, thus the energy required to charge the capacitor is extracted from the magnetic field. When the magnetic field is completely dissipated the current will stop and the charge will again be stored in the capacitor, with the opposite polarity as before. Then the cycle will begin again, with the current flowing in the opposite direction through the inductor. The charge flows back and forth between the plates of the capacitor, through the inductor. 
The energy oscillates back and forth between the capacitor and the inductor until (if not replenished from an external circuit) internal resistance makes the oscillations die out. The tuned circuit's action, known mathematically as a harmonic oscillator, is similar to a pendulum swinging back and forth, or water sloshing back and forth in a tank; for this reason the circuit is also called a "tank circuit". The natural frequency (that is, the frequency at which it will oscillate when isolated from any other system, as described above) is determined by the capacitance and inductance values. In most applications the tuned circuit is part of a larger circuit which applies alternating current to it, driving continuous oscillations. If the frequency of the applied current is the circuit's natural resonant frequency (natural frequency formula_0 below), resonance will occur, and a small driving current can excite large amplitude oscillating voltages and currents. In typical tuned circuits in electronic equipment the oscillations are very fast, from thousands to billions of times per second. Resonance effect. Resonance occurs when an LC circuit is driven from an external source at an angular frequency "ω"0 at which the inductive and capacitive reactances are equal in magnitude. The frequency at which this equality holds for the particular circuit is called the resonant frequency. The resonant frequency of the LC circuit is formula_1 where L is the inductance in henries, and C is the capacitance in farads. The angular frequency "ω"0 has units of radians per second. The equivalent frequency in units of hertz is formula_2 Applications. The resonance effect of the LC circuit has many important applications in signal processing and communications systems, where LC circuits behave as electronic resonators. Time domain solution. Kirchhoff's laws. By Kirchhoff's voltage law, the voltage VC across the capacitor plus the voltage VL across the inductor must equal zero: formula_3 Likewise, by Kirchhoff's current law, the current through the capacitor equals the current through the inductor: formula_4 From the constitutive relations for the circuit elements, we also know that formula_5 Differential equation. Rearranging and substituting gives the second order differential equation formula_6 The parameter "ω"0, the resonant angular frequency, is defined as formula_7 Using this definition, the differential equation simplifies to formula_8 The associated Laplace transform is formula_9 thus formula_10 where j is the imaginary unit. Solution. Thus, the complete solution to the differential equation is formula_11 and can be solved for A and B by considering the initial conditions. Since the exponential is complex, the solution represents a sinusoidal alternating current. Since the electric current I is a physical quantity, it must be real-valued. As a result, it can be shown that the constants A and B must be complex conjugates: formula_12 Now let formula_13 Therefore, formula_14 Next, we can use Euler's formula to obtain a real sinusoid with amplitude "I"0, angular frequency "ω"0, and phase angle formula_15. Thus, the resulting solution becomes formula_16 formula_17 Initial conditions. The initial conditions that would satisfy this result are formula_18 formula_19
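The resonance formula and the time-domain solution above are easy to evaluate numerically. In the sketch below the component values and initial conditions are arbitrary illustrations, not values taken from the article.

```python
from math import pi, sqrt, cos, atan2

L = 1e-3      # inductance in henries (illustrative)
C = 1e-9      # capacitance in farads (illustrative)

omega0 = 1 / sqrt(L * C)          # resonant angular frequency, rad/s
f0 = omega0 / (2 * pi)            # resonant frequency, Hz
print(f"f0 = {f0:.3e} Hz")        # about 159 kHz for these values

# Time-domain solution I(t) = I0*cos(omega0*t + phi), with I0 and phi
# recovered from the initial conditions I(0) and V_L(0) given above.
I_at_0 = 1e-3                     # initial current in amperes (illustrative)
VL_at_0 = 0.0                     # initial inductor voltage in volts (illustrative)

# From I(0) = I0*cos(phi) and V_L(0) = -omega0*L*I0*sin(phi):
phi = atan2(-VL_at_0 / (omega0 * L), I_at_0)
I0 = sqrt(I_at_0**2 + (VL_at_0 / (omega0 * L))**2)

def current(t):
    return I0 * cos(omega0 * t + phi)

print(current(0.0))               # recovers the initial current, 1e-3 A
```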
Series circuit. In the series configuration of the LC circuit, the inductor (L) and capacitor (C) are connected in series, as shown here. The total voltage V across the open terminals is simply the sum of the voltage across the inductor and the voltage across the capacitor. The current I into the positive terminal of the circuit is equal to the current through both the capacitor and the inductor. formula_20 Resonance. Inductive reactance formula_21 increases as frequency increases, while capacitive reactance formula_22 decreases with increase in frequency (defined here as a positive number). At one particular frequency, these two reactances are equal and the voltages across them are equal and opposite in sign; that frequency is called the resonant frequency "f"0 for the given circuit. Hence, at resonance, formula_23 Solving for ω, we have formula_24 which is defined as the resonant angular frequency of the circuit. Converting angular frequency (in radians per second) into frequency (in Hertz), one has formula_25 and formula_26 at formula_27. In a series configuration, XC and XL cancel each other out. In real, rather than idealised, components, the current is opposed, mostly by the resistance of the coil windings. Thus, the current supplied to a series resonant circuit is maximal at resonance. Impedance. In the series configuration, resonance occurs when the complex electrical impedance of the circuit approaches zero. First consider the impedance of the series LC circuit. The total impedance is given by the sum of the inductive and capacitive impedances: formula_28 Writing the inductive impedance as ZL = "jωL" and the capacitive impedance as ZC = 1/("jωC") and substituting gives formula_29 Writing this expression under a common denominator gives formula_30 Finally, defining the natural angular frequency as formula_31 the impedance becomes formula_32 where formula_33 gives the reactance of the inductor at resonance. The numerator implies that in the limit as "ω" → ±"ω"0, the total impedance Z will be zero and otherwise non-zero. Therefore the series LC circuit, when connected in series with a load, will act as a band-pass filter having zero impedance at the resonant frequency of the LC circuit. Parallel circuit. When the inductor (L) and capacitor (C) are connected in parallel as shown here, the voltage V across the open terminals is equal to both the voltage across the inductor and the voltage across the capacitor. The total current I flowing into the positive terminal of the circuit is equal to the sum of the current flowing through the inductor and the current flowing through the capacitor: formula_34 Resonance. When XL equals XC, the two branch currents are equal and opposite. They cancel each other out to give minimal current in the main line (in principle, zero current). However, there is a large current circulating between the capacitor and inductor. In principle, this circulating current is infinite, but in reality it is limited by resistance in the circuit, particularly resistance in the inductor windings. Since total current is minimal, in this state the total impedance is maximal. The resonant frequency is given by formula_35 The branch currents are not minimal at resonance, but each is given separately by dividing the source voltage (V) by the reactance (Z). Hence "I" = "V"/"Z", as per Ohm's law. Impedance. The same analysis may be applied to the parallel LC circuit. The total impedance is then given by formula_36 and after substitution of ZL = "jωL" and ZC = 1/("jωC") and simplification, gives formula_37
Thus, the parallel LC circuit connected in series with a load will act as a band-stop filter having infinite impedance at the resonant frequency of the LC circuit, while the parallel LC circuit connected in parallel with a load will act as a band-pass filter. Laplace solution. The LC circuit can be solved using the Laplace transform. We begin by defining the relation between current and voltage across the capacitor and inductor in the usual way: formula_41 formula_42 and formula_43 Then by application of Kirchhoff's laws, we may arrive at the system's governing differential equations formula_44 With initial conditions formula_45 and formula_46 Making the following definitions, formula_47 and formula_48 gives formula_49 Now we apply the Laplace transform. formula_50 formula_51 The Laplace transform has turned our differential equation into an algebraic equation. Solving for V in the s domain (frequency domain) is much simpler, viz. formula_52 formula_53 This can be transformed back to the time domain via the inverse Laplace transform: formula_54 formula_55 For the second summand, an equivalent fraction of formula_27 is needed: formula_56 Again, an equivalent fraction of formula_27 is needed: formula_57 formula_58 The final term is dependent on the exact form of the input voltage. Two common cases are the Heaviside step function and a sine wave. For a Heaviside step function we get formula_59 formula_60 formula_61 For the case of a sinusoidal function as input we get: formula_62 where formula_63 is the amplitude and formula_64 the frequency of the applied function. formula_65 Using the partial fraction method: formula_66 Simplifying both sides: formula_67 formula_68 formula_69 We solve the equation for A, B and C: formula_70 formula_71 formula_72 formula_73 formula_74 formula_75 formula_76 Substitute the values of A, B and C: formula_77 Isolating the constant and using equivalent fractions to adjust for the missing numerator: formula_78 Performing the inverse Laplace transform on each summand: formula_79 formula_80 formula_81 Using initial conditions in the Laplace solution: formula_82
In 1868, Scottish physicist James Clerk Maxwell calculated the effect of applying an alternating current to a circuit with inductance and capacitance, showing that the response is maximum at the resonant frequency. The first example of an electrical resonance curve was published in 1887 by German physicist Heinrich Hertz in his pioneering paper on the discovery of radio waves, showing the length of spark obtainable from his spark-gap LC resonator detectors as a function of frequency. One of the first demonstrations of resonance between tuned circuits was Lodge's "syntonic jars" experiment around 1889. He placed two resonant circuits next to each other, each consisting of a Leyden jar connected to an adjustable one-turn coil with a spark gap. When a high voltage from an induction coil was applied to one tuned circuit, creating sparks and thus oscillating currents, sparks were excited in the other tuned circuit only when the circuits were adjusted to resonance. Lodge and some English scientists preferred the term "syntony" for this effect, but the term "resonance" eventually stuck. The first practical use for LC circuits was in the 1890s in spark-gap radio transmitters to allow the receiver and transmitter to be tuned to the same frequency. The first patent for a radio system that allowed tuning was filed by Lodge in 1897, although the first practical systems were invented in 1900 by Italian radio pioneer Guglielmo Marconi. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
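As a quick consistency check of the Laplace solution derived above, the following SymPy sketch verifies that the quoted Heaviside step response satisfies the governing equation and its initial conditions; the symbols ω0, v0, v'0 and M are those of the Laplace-solution section, and the check itself is only an illustration.

# Symbolic check (SymPy) that the step-response solution
#   v(t) = v0*cos(w0 t) + (v0'/w0)*sin(w0 t) + M*(1 - cos(w0 t))
# satisfies v'' + w0**2 * v = w0**2 * M for t > 0, with v(0) = v0 and v'(0) = v0'.
import sympy as sp

t, w0, v0, v0p, M = sp.symbols("t omega_0 v_0 v_0p M", positive=True)

v = v0 * sp.cos(w0 * t) + (v0p / w0) * sp.sin(w0 * t) + M * (1 - sp.cos(w0 * t))

ode_residual = sp.simplify(sp.diff(v, t, 2) + w0**2 * v - w0**2 * M)
print(ode_residual)                                   # 0  -> the differential equation holds
print(sp.simplify(v.subs(t, 0) - v0))                 # 0  -> v(0) = v0
print(sp.simplify(sp.diff(v, t).subs(t, 0) - v0p))    # 0  -> v'(0) = v0'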
[ { "math_id": 0, "text": "f_0\\," }, { "math_id": 1, "text": "\\omega_0 = \\frac{1}{\\sqrt{LC}}," }, { "math_id": 2, "text": "f_0 = \\frac{\\omega_0}{2 \\pi} = \\frac{1}{2 \\pi \\sqrt{LC}}." }, { "math_id": 3, "text": "V_C + V_L = 0." }, { "math_id": 4, "text": "I_C = I_L." }, { "math_id": 5, "text": "\\begin{align}\n V_L(t) &= L \\frac{\\mathrm{d}I_L}{\\mathrm{d}t}, \\\\\n I_C(t) &= C \\frac{\\mathrm{d}V_C}{\\mathrm{d}t}.\n\\end{align}" }, { "math_id": 6, "text": "\\frac{\\mathrm{d}^2}{\\mathrm{d}t^2}I(t) + \\frac{1}{LC} I(t) = 0." }, { "math_id": 7, "text": "\\omega_0 = \\frac{1}\\sqrt{LC}." }, { "math_id": 8, "text": "\\frac{\\mathrm{d}^2}{\\mathrm{d}t^2}I(t) + \\omega_0^2 I(t) = 0." }, { "math_id": 9, "text": "s^2 + \\omega_0^2 = 0," }, { "math_id": 10, "text": "s = \\pm j \\omega_0," }, { "math_id": 11, "text": "I(t) = Ae^{+j \\omega_0 t} + Be^{-j \\omega_0 t}" }, { "math_id": 12, "text": "A = B^*." }, { "math_id": 13, "text": "A = \\frac{I_0}{2} e^{+j \\phi}." }, { "math_id": 14, "text": "B = \\frac{I_0}{2} e^{-j \\phi}." }, { "math_id": 15, "text": "\\phi" }, { "math_id": 16, "text": "I(t) = I_0 \\cos\\left(\\omega_0 t + \\phi \\right)," }, { "math_id": 17, "text": "V_L(t) = L \\frac{\\mathrm{d}I}{\\mathrm{d}t} = -\\omega_0 L I_0 \\sin\\left(\\omega_0 t + \\phi \\right)." }, { "math_id": 18, "text": "I(0) = I_0 \\cos \\phi," }, { "math_id": 19, "text": "V_L(0) = L \\frac{\\mathrm{d}I}{\\mathrm{d}t}\\Bigg|_{t=0} = -\\omega_0 L I_0 \\sin \\phi." }, { "math_id": 20, "text": "\\begin{align}\n V &= V_L + V_C, \\\\\n I &= I_L = I_C.\n\\end{align}" }, { "math_id": 21, "text": "\\ X_\\mathsf{L} = \\omega L\\ " }, { "math_id": 22, "text": "\\ X_\\mathsf{C} = \\frac{1}{\\ \\omega C\\ }\\ " }, { "math_id": 23, "text": "\\begin{align}\n X_\\mathsf{L} &= X_\\mathsf{C}\\ , \\\\\n \\omega L &= \\frac{ 1 }{\\ \\omega C\\ } ~.\n\\end{align}" }, { "math_id": 24, "text": "\\omega = \\omega_0 = \\frac{ 1 }{\\ \\sqrt{ L C\\;}\\ }\\ ," }, { "math_id": 25, "text": "f_0 = \\frac{ \\omega_0 }{\\ 2 \\pi\\ } = \\frac{ 1 }{\\ 2 \\pi \\sqrt{ L C\\;}\\ }\\ ," }, { "math_id": 26, "text": "X_{\\mathsf{L} 0} = X_{\\mathsf{C} 0} = \\sqrt{\\frac{\\ L\\ }{C}\\;}" }, { "math_id": 27, "text": "\\omega_0" }, { "math_id": 28, "text": " Z = Z_\\mathsf{L} + Z_\\mathsf{C} ~." }, { "math_id": 29, "text": " Z(\\omega) = j \\omega L + \\frac{ 1 }{\\ j\\omega C\\ } ~." }, { "math_id": 30, "text": " Z(\\omega) = j \\left( \\frac{\\ \\omega^2 L C - 1\\ }{\\omega C} \\right) ~." }, { "math_id": 31, "text": " \\omega_0 = \\frac{ 1 }{\\ \\sqrt{ L C \\;}\\ }\\ ," }, { "math_id": 32, "text": " Z(\\omega) = j\\ L\\ \\left( \\frac{\\ \\omega^2 - \\omega_0^2\\ }{ \\omega } \\right) = j\\ \\omega_0 L\\ \\left( \\frac{ \\omega }{\\ \\omega_0\\ } - \\frac{\\ \\omega_0\\ }{ \\omega } \\right) = j\\ \\frac{ 1 }{\\ \\omega_0 C\\ } \\left( \\frac{ \\omega }{\\ \\omega_0\\ } - \\frac{\\ \\omega_0\\ }{ \\omega } \\right)\\ ," }, { "math_id": 33, "text": "\\, \\omega_0 L\\ \\," }, { "math_id": 34, "text": "\\begin{align}\n V &= V_\\mathsf{L} = V_\\mathsf{C}\\ , \\\\\n I &= I_\\mathsf{L} + I_\\mathsf{C} ~.\n\\end{align}" }, { "math_id": 35, "text": "f_0 = \\frac{ \\omega_0 }{\\ 2 \\pi\\ } = \\frac{1}{\\ 2 \\pi \\sqrt{ L C\\;}\\ } ~." }, { "math_id": 36, "text": "Z = \\frac{\\ Z_\\mathsf{L} Z_\\mathsf{C}\\ }{ Z_\\mathsf{L} + Z_\\mathsf{C} }\\ ," }, { "math_id": 37, "text": "Z(\\omega) = -j \\cdot \\frac{ \\omega L }{\\ \\omega^2 L C - 1\\ } ~." 
}, { "math_id": 38, "text": "\\omega_0 = \\frac{1}{\\ \\sqrt{ L C\\;}\\ }\\ ," }, { "math_id": 39, "text": "Z(\\omega) = -j\\ \\left(\\frac{1}{\\ C\\ } \\right) \\left( \\frac{\\omega}{\\ \\omega^2 - \\omega_0^2\\ } \\right) = + j\\ \\frac{ 1 }{\\ \\omega_0 C \\left( \\tfrac{\\omega_0}{\\omega} - \\tfrac{\\omega}{\\omega_0} \\right)\\ } = + j\\ \\frac{ \\omega_0 L }{\\ \\left( \\tfrac{\\omega_0}{\\omega} - \\tfrac{\\omega}{\\omega_0} \\right)\\ } ~." }, { "math_id": 40, "text": "\\lim_{\\omega \\to \\omega_0} Z(\\omega) = \\infty\\ ," }, { "math_id": 41, "text": " v_\\mathrm{C}(t) = v(t)\\ , ~" }, { "math_id": 42, "text": " i(t) = C\\ \\frac{\\mathrm{d}\\ v_\\mathrm{C}}{\\mathrm{d}t}\\ , ~ " }, { "math_id": 43, "text": " ~ v_\\mathrm{L}(t) = L\\ \\frac{\\mathrm{d}\\ i}{\\mathrm{d}t} \\;." }, { "math_id": 44, "text": " v_{in} (t) = v_\\mathrm{L} (t) + v_\\mathrm{C}(t) = L\\ \\frac{ \\mathrm{d}\\ i }{\\mathrm{d}t} + v = L\\ C\\ \\frac{\\mathrm{d}^2\\ v}{\\mathrm{d}t^2} + v \\;." }, { "math_id": 45, "text": "\\ v(0) = v_0\\ " }, { "math_id": 46, "text": "\\ i(0) = i_0 = C \\cdot v'(0) = C \\cdot v'_0 \\;. " }, { "math_id": 47, "text": " \\omega_0 \\equiv \\frac{1}{\\ \\sqrt{L\\ C\\ } } ~" }, { "math_id": 48, "text": "~ f(t) \\equiv \\omega_0^2\\ v_\\mathrm{in} (t) " }, { "math_id": 49, "text": " f(t) = \\frac{\\ \\mathrm{d}^2\\ v\\ }{\\mathrm{d}t^2} + \\omega_0^2\\ v \\;." }, { "math_id": 50, "text": " \\operatorname\\mathcal{L} \\left[\\ f(t)\\ \\right] = \\operatorname\\mathcal{L} \\left[\\ \\frac{\\ \\mathrm{d}^2\\ v\\ }{ \\mathrm{d}t^2 } + \\omega_0^2\\ v\\ \\right] \\,," }, { "math_id": 51, "text": " F(s) = s^2\\ V(s) - s\\ v_0 - v'_0 + \\omega_0^2\\ V(s) \\;." }, { "math_id": 52, "text": " V(s) = \\frac{\\ s\\ v_0 + v'_0 + F(s)\\ }{ s^2 + \\omega_0^2 } " }, { "math_id": 53, "text": "\nV(s) = \\frac{\\ s\\ v_0 }{ s^2 + \\omega_0^2 } +\n\\frac{v'_0}{ s^2 + \\omega_0^2 } +\n\\frac{F(s)\\ }{ s^2 + \\omega_0^2 } \\,,\n" }, { "math_id": 54, "text": " v(t) = \\operatorname\\mathcal{L}^{-1} \\left[\\ V(s) \\ \\right]" }, { "math_id": 55, "text": " v(t) = \\operatorname\\mathcal{L}^{-1} \\left[\\ \\frac{\\ s\\ v_0 }{ s^2 + \\omega_0^2 } +\n\\frac{v'_0}{ s^2 + \\omega_0^2 } +\n\\frac{F(s)\\ }{ s^2 + \\omega_0^2 }\\ \\right],\n" }, { "math_id": 56, "text": " v(t) = v_0 \\operatorname\\mathcal{L}^{-1} \\left[\\ \\frac{ s }{ s^2 + \\omega_0^2 }\\ \\right] +\nv'_0 \\operatorname\\mathcal{L}^{-1}\\left[\\ \\frac{ \\omega_0 }{ \\omega_0 ( s^2 + \\omega_0^2 )}\\ \\right] +\n\\operatorname\\mathcal{L}^{-1} \\left[\\ \\frac{F(s)\\ }{ s^2 + \\omega_0^2 }\\ \\right],\n" }, { "math_id": 57, "text": " v(t) = v_0 \\operatorname\\mathcal{L}^{-1} \\left[\\ \\frac{ s }{ s^2 + \\omega_0^2 }\\ \\right] +\n\\frac{v'_0}{\\omega_0} \\operatorname\\mathcal{L}^{-1}\\left[\\ \\frac{ \\omega_0 }{ ( s^2 + \\omega_0^2 )}\\ \\right] +\n\\operatorname\\mathcal{L}^{-1} \\left[\\ \\frac{F(s)\\ }{ s^2 + \\omega_0^2 }\\ \\right],\n" }, { "math_id": 58, "text": " v(t) = v_0\\cos(\\omega_0\\ t)+ \\frac{ v'_0 }{\\ \\omega_0\\ }\\ \\sin(\\omega_0\\ t) + \\operatorname\\mathcal{L}^{-1} \\left[\\ \\frac{ F(s) }{\\ s^2 + \\omega_0^2\\ }\\ \\right] " }, { "math_id": 59, "text": " v_\\mathrm{in}(t) = M\\ u(t) \\,," }, { "math_id": 60, "text": " \\operatorname\\mathcal{L}^{-1}\\left[\\ \\omega_0^2 \\frac{ V_\\mathrm{in}(s) }{\\ s^2 + \\omega_0^2\\ }\\ \\right] ~=~ \\operatorname\\mathcal{L}^{-1}\\left[\\ \\omega_0^2\\ M\\ \\frac{1}{\\ s\\ (s^2 + \\omega_0^2)\\ }\\ \\right] ~=~ M\\ \\Bigl( 1 - \\cos(\\omega_0\\ t) \\Bigr) \\,," }, { 
"math_id": 61, "text": " v(t) = v_0 \\ \\cos(\\omega_0\\ t)+ \\frac{ v'_0 }{ \\omega_0 }\\ \\sin(\\omega_0\\ t) + M\\ \\Bigl(1-\\cos(\\omega_0\\ t)\\Bigr) \\;." }, { "math_id": 62, "text": "\nv_\\mathrm{in}(t) = U\\ \\sin(\\omega_\\mathrm{f}\\ t) \\Rightarrow V_\\mathrm{in}(s)= \\frac{\\ U\\ \\omega_\\mathrm{f}\\ }{\\ s^2 + \\omega_\\mathrm{f}^2 \\ } \\,\n" }, { "math_id": 63, "text": "U" }, { "math_id": 64, "text": "\\omega_f" }, { "math_id": 65, "text": "\n\\operatorname\\mathcal{L}^{-1}\\left[\\ \\omega_0^2\\ \\frac{1}{\\ s^2 + \\omega_0^2\\ }\\ \\frac{U\\ \\omega_\\mathrm{f}}{\\ s^2+\\omega_\\mathrm{f}^2\\ }\\ \\right]\n" }, { "math_id": 66, "text": "\n\\operatorname\\mathcal{L}^{-1}\\left[\\ \\omega_0^2\\ U\\ \\omega_\\mathrm{f} \\frac{1}{\\ s^2 + \\omega_0^2\\ }\\ \\frac{1}{\\ s^2+\\omega_\\mathrm{f}^2\\ }\\ \\right]\n=\n\\operatorname\\mathcal{L}^{-1}\\left[\\ \\omega_0^2\\ U\\ \\omega_\\mathrm{f} \\frac{A + Bs}{\\ s^2 + \\omega_0^2\\ }\\ + \\frac{C + Ds}{\\ s^2+\\omega_\\mathrm{f}^2\\ }\\ \\right]\n" }, { "math_id": 67, "text": "\n1 = (A + Bs)( \\ s^2 + \\omega_\\mathrm{f}^2\\ ) + (C + Ds)( \\ s^2 + \\omega_0^2\\ )\n" }, { "math_id": 68, "text": "\n1 = ( A\\ s^2 + \\ A\\ \\omega_\\mathrm{f}^2\\ + \\ B\\ s^3 + \\ B\\ \\omega_\\mathrm{f}^2\\ ) + \\ C\\ s^2 + \\ C\\ \\omega_0^2\\ + \\ D\\ s^3 + \\ D\\ s \\omega_0^2\\ )\n" }, { "math_id": 69, "text": "\n1 = s^3 (B\\ + \\ D\\ ) + s^2 (A\\ + \\ C) + s (B\\ \\omega_\\mathrm{f}^2 + \\ D\\ \\omega_0^2) + (A\\ \\omega_\\mathrm{f}^2\\ + \\ C\\ \\omega_0^2)\n" }, { "math_id": 70, "text": "\nA+C=0 \\Rightarrow C=-A\n" }, { "math_id": 71, "text": "\nA\\ \\omega_\\mathrm{f}^2\\ + \\ C\\ \\omega_0^2 = 1\n\\Rightarrow \nA\\ \\omega_\\mathrm{f}^2\\ - \\ A\\ \\omega_0^2 = 1\n" }, { "math_id": 72, "text": "\n\\Rightarrow \nA\\ = \\frac{1}{(\\omega_\\mathrm{f}^2\\ - \\omega_0^2) }\n" }, { "math_id": 73, "text": "\n\\Rightarrow \nC\\ = -\\frac{1}{(\\omega_\\mathrm{f}^2\\ - \\omega_0^2) }\n" }, { "math_id": 74, "text": "B + C = 0" }, { "math_id": 75, "text": "B\\ \\omega_\\mathrm{f}^2 + \\ D\\ \\omega_0^2 = 0\n\\Rightarrow \nB\\ \\omega_\\mathrm{f}^2 - \\ B\\ \\omega_0^2 = 0\n\\Rightarrow \nB\\ (\\omega_\\mathrm{f}^2 - \\omega_0^2) = 0\n" }, { "math_id": 76, "text": "\n\\Rightarrow \nB = 0 , \\ D = 0\n" }, { "math_id": 77, "text": "\n\\operatorname\\mathcal{L}^{-1}\\left[\\ \\omega_0^2\\ U\\ \\omega_\\mathrm{f} \\frac{\\frac{1}{(\\omega_\\mathrm{f}^2\\ - \\omega_0^2) }}{\\ s^2 + \\omega_0^2\\ } + \\frac{ -\\frac{1}{(\\omega_\\mathrm{f}^2\\ - \\omega_0^2) }}{\\ s^2+\\omega_\\mathrm{f}^2\\ }\\ \\right]\n" }, { "math_id": 78, "text": "\n\\frac{\\ \\omega_0^2\\ U\\omega_\\mathrm{f}\\ }{\\ \\omega_\\mathrm{f}^2-\\omega_0^2\\ } \\operatorname\\mathcal{L}^{-1}\\left[ \\left(\\frac{ \\omega_0 }{ \\omega_0 (s^2 + \\omega_0^2)} - \\frac{ \\omega_f }{ \\omega_f (s^2+\\omega_f^2)}\\right) \\right] \\,\n" }, { "math_id": 79, "text": "\n\\frac{\\ \\omega_0^2\\ U\\omega_\\mathrm{f}\\ }{\\ \\omega_\\mathrm{f}^2-\\omega_0^2\\ } \\left( \\operatorname\\mathcal{L}^{-1}\\left[\\ \\frac{1}{\\omega_0} \\frac{ \\omega_0 }{ (s^2 + \\omega_0^2)} \\right] - \\operatorname\\mathcal{L}^{-1}\\left[\\frac{1}{\\omega_\\mathrm{f}\\ } \\frac{ \\omega_\\mathrm{f}\\ }{ (s^2+\\omega_f^2)} \\right] \\right) \\,\n" }, { "math_id": 80, "text": "\n\\frac{\\ \\omega_0^2\\ U\\omega_\\mathrm{f}\\ }{\\ \\omega_\\mathrm{f}^2-\\omega_0^2\\ } \\left(\\frac{1}{\\omega_0} \\operatorname\\mathcal{L}^{-1}\\left[\\frac{ \\omega_0 }{ (s^2 + \\omega_0^2)} \\right] - \\frac{1}{\\omega_\\mathrm{f}\\ } 
\\operatorname\\mathcal{L}^{-1}\\left[\\frac{ \\omega_\\mathrm{f}\\ }{ (s^2+\\omega_f^2)} \\right] \\right) \\,\n" }, { "math_id": 81, "text": "\nv_\\mathrm{in}(t) = \\frac{\\ \\omega_0^2\\ U\\ \\omega_\\mathrm{f}\\ }{ \\omega_\\mathrm{f}^2 - \\omega_0^2 } \\left( \\frac{1}{\\omega_0}\\ \\sin(\\omega_0\\ t) - \\frac{1}{\\ \\omega_\\mathrm{f}\\ }\\ \\sin(\\omega_\\mathrm{f}\\ t) \\right) \\;,\n" }, { "math_id": 82, "text": "\nv(t) = v_0 \\cos(\\omega_0\\ t)+ \\frac{ v'_0}{ \\omega_0\\ }\\ \\sin(\\omega_0\\ t) + \\frac{ \\omega_0^2\\ U\\ \\omega_\\mathrm{f} }{\\ \\omega_\\mathrm{f}^2 - \\omega_0^2\\ }\\left(\\frac{1}{\\omega_0}\\ \\sin(\\omega_0\\ t) - \\frac{1}{\\ \\omega_\\mathrm{f}\\ }\\ \\sin(\\omega_\\mathrm{f}\\ t) \\right) \\;." } ]
https://en.wikipedia.org/wiki?curid=602264
6023946
Metamath
Formal language and associated computer program Metamath is a formal language and an associated computer program (a proof assistant) for archiving and verifying mathematical proofs. Several databases of proved theorems have been developed using Metamath covering standard results in logic, set theory, number theory, algebra, topology and analysis, among others. By 2023, Metamath had been used to prove 74 of the 100 theorems of the "Formalizing 100 Theorems" challenge. At least 19 proof verifiers use the Metamath format. The Metamath website provides a database of formalized theorems which can be browsed interactively. Metamath language. The Metamath language is a metalanguage for formal systems. The Metamath language has no specific logic embedded in it. Instead, it can be regarded as a way to prove that inference rules (asserted as axioms or proven later) can be applied. The largest database of proved theorems follows conventional first-order logic and ZFC set theory. The Metamath language design (employed to state the definitions, axioms, inference rules and theorems) is focused on simplicity. Proofs are checked using an algorithm based on variable substitution. The algorithm also has optional provisos for what variables must remain distinct after a substitution is made. Language basics. The set of symbols that can be used for constructing formulas is declared using codice_0 (constant symbols) and codice_1 (variable symbols) statements; for example: The grammar for formulas is specified using a combination of codice_2 (floating (variable-type) hypotheses) and codice_3 (axiomatic assertion) statements; for example: Axioms and rules of inference are specified with codice_3 statements along with codice_5 and codice_6 for block scoping and optional codice_7 (essential hypotheses) statements; for example: Using one construct, codice_3 statements, to capture syntactic rules, axiom schemas, and rules of inference is intended to provide a level of flexibility similar to higher order logical frameworks without a dependency on a complex type system. Proofs. Theorems (and derived rules of inference) are written with codice_9 statements; for example: Note the inclusion of the proof in the codice_9 statement. It abbreviates the following detailed proof: tt $f term t tze $a term 0 1,2 tpl $a term ( t + 0 ) 3,1 weq $a wff ( t + 0 ) = t 1,1 weq $a wff t = t 1 a2 $a |- ( t + 0 ) = t 1,2 tpl $a term ( t + 0 ) 7,1 weq $a wff ( t + 0 ) = t 1,2 tpl $a term ( t + 0 ) 9,1 weq $a wff ( t + 0 ) = t 1,1 weq $a wff t = t 10,11 wim $a wff ( ( t + 0 ) = t -&gt; t = t ) 1 a2 $a |- ( t + 0 ) = t 1,2 tpl $a term ( t + 0 ) 14,1,1 a1 $a |- ( ( t + 0 ) = t -&gt; ( ( t + 0 ) = t -&gt; t = t ) ) 8,12,13,15 mp $a |- ( ( t + 0 ) = t -&gt; t = t ) 4,5,6,16 mp $a |- t = t The "essential" form of the proof elides syntactic details, leaving a more conventional presentation: a2 $a |- ( t + 0 ) = t a2 $a |- ( t + 0 ) = t a1 $a |- ( ( t + 0 ) = t -&gt; ( ( t + 0 ) = t -&gt; t = t ) ) 2,3 mp $a |- ( ( t + 0 ) = t -&gt; t = t ) 1,4 mp $a |- t = t Substitution. All Metamath proof steps use a single substitution rule, which is just the simple replacement of a variable with an expression and not the proper substitution described in works on predicate calculus. Proper substitution, in Metamath databases that support it, is a derived construct instead of one built into the Metamath language itself. 
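As a rough illustration of this single rule (a sketch, not the actual Metamath implementation), the following Python fragment performs the simultaneous textual replacement of variables by symbol strings in a tokenized statement; the assignment used is the one that appears in the worked ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ) example discussed below.

# A minimal sketch of Metamath-style simple substitution: every variable is
# replaced simultaneously by the token string assigned to it.  This is plain
# textual replacement, not the "proper substitution" of predicate calculus.
def substitute(tokens, assignment):
    """tokens: list of symbols; assignment: dict mapping variable -> list of symbols."""
    result = []
    for tok in tokens:
        result.extend(assignment.get(tok, [tok]))
    return result

# Hypothetical example: the conclusion ( C F A ) = ( C F B ) under the assignment
# A := 2, B := ( 1 + 1 ), C := 2, F := +
conclusion = "( C F A ) = ( C F B )".split()
sigma = {"A": ["2"], "B": "( 1 + 1 )".split(), "C": ["2"], "F": ["+"]}
print(" ".join(substitute(conclusion, sigma)))
# prints: ( 2 + 2 ) = ( 2 + ( 1 + 1 ) )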
The substitution rule makes no assumption about the logic system in use and only requires that the substitutions of variables are correctly done. Here is a detailed example of how this algorithm works. Steps 1 and 2 of the theorem codice_11 in the Metamath Proof Explorer ("set.mm") are depicted left. Let's explain how Metamath uses its substitution algorithm to check that step 2 is the logical consequence of step 1 when you use the theorem codice_12. Step 2 states that ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). It is the conclusion of the theorem codice_12. The theorem codice_12 states that if "A" = "B", then ("C F A") = ("C F B"). This theorem would never appear under this cryptic form in a textbook but its literate formulation is banal: when two quantities are equal, one can replace one by the other in an operation. To check the proof Metamath attempts to unify ("C F A") = ("C F B") with ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). There is only one way to do so: unifying with , with +, with and with ( 1 + 1 ). So now Metamath uses the premise of codice_12. This premise states that "A" = "B". As a consequence of its previous computation, Metamath knows that should be substituted by and by ( 1 + 1 ). The premise "A" = "B" becomes 2=( 1 + 1 ) and thus step 1 is therefore generated. In its turn step 1 is unified with codice_16. codice_16 is the definition of the number codice_18 and states that codice_19. Here the unification is simply a matter of constants and is straightforward (no problem of variables to substitute). So the verification is finished and these two steps of the proof of codice_11 are correct. When Metamath unifies ( 2 + 2 ) with it has to check that the syntactical rules are respected. In fact has the type codice_21 thus Metamath has to check that ( 2 + 2 ) is also typed codice_21. Metamath proof checker. The Metamath program is the original program created to manipulate databases written using the Metamath language. It has a text (command line) interface and is written in C. It can read a Metamath database into memory, verify the proofs of a database, modify the database (in particular by adding proofs), and write them back out to storage. It has a "prove" command that enables users to enter a proof, along with mechanisms to search for existing proofs. The Metamath program can convert statements to HTML or TeX notation; for example, it can output the modus ponens axiom from set.mm as: formula_0 Many other programs can process Metamath databases, in particular, there are at least 19 proof verifiers for databases that use the Metamath format. Metamath databases. The Metamath website hosts several databases that store theorems derived from various axiomatic systems. Most databases (".mm" files) have an associated interface, called an "Explorer", which allows one to navigate the statements and proofs interactively on the website, in a user-friendly way. Most databases use a Hilbert system of formal deduction though this is not a requirement. Metamath Proof Explorer. The Metamath Proof Explorer (recorded in "set.mm") is the main database. It is based on classical first-order logic and ZFC set theory (with the addition of Tarski-Grothendieck set theory when needed, for example in category theory). The database has been maintained for over thirty years (the first proofs in "set.mm" are dated September 1992). 
The database contains developments, among other fields, of set theory (ordinals and cardinals, recursion, equivalents of the axiom of choice, the continuum hypothesis...), the construction of the real and complex number systems, order theory, graph theory, abstract algebra, linear algebra, general topology, real and complex analysis, Hilbert spaces, number theory, and elementary geometry. The Metamath Proof Explorer references many text books that can be used in conjunction with Metamath. Thus, people interested in studying mathematics can use Metamath in connection with these books and verify that the proved assertions match the literature. Intuitionistic Logic Explorer. This database develops mathematics from a constructive point of view, starting with the axioms of intuitionistic logic and continuing with axiom systems of constructive set theory. New Foundations Explorer. This database develops mathematics from Quine's New Foundations set theory. Higher-Order Logic Explorer. This database starts with higher-order logic and derives equivalents to axioms of first-order logic and of ZFC set theory. Databases without explorers. The Metamath website hosts a few other databases which are not associated with explorers but are nonetheless noteworthy. The database "peano.mm" written by Robert Solovay formalizes Peano arithmetic. The database "nat.mm" formalizes natural deduction. The database "miu.mm" formalizes the MU puzzle based on the formal system MIU presented in "Gödel, Escher, Bach". Older explorers. The Metamath website also hosts a few older databases which are not maintained anymore, such as the "Hilbert Space Explorer", which presents theorems pertaining to Hilbert space theory which have now been merged into the Metamath Proof Explorer, and the "Quantum Logic Explorer", which develops quantum logic starting with the theory of orthomodular lattices. Natural deduction. Because Metamath has a very generic concept of what a proof is (namely a tree of formulas connected by inference rules) and no specific logic is embedded in the software, Metamath can be used with species of logic as different as Hilbert-style logics or sequents-based logics or even with lambda calculus. However, Metamath provides no direct support for natural deduction systems. As noted earlier, the database "nat.mm" formalizes natural deduction. The Metamath Proof Explorer (with its database "set.mm") instead uses a set of conventions that allow the use of natural deduction approaches within a Hilbert-style logic. Other works connected to Metamath. Proof checkers. Using the design ideas implemented in Metamath, Raph Levien has implemented very small proof checker, "mmverify.py", at only 500 lines of Python code. Ghilbert is a similar though more elaborate language based on mmverify.py. Levien would like to implement a system where several people could collaborate and his work is emphasizing modularity and connection between small theories. Using Levien’s seminal work, many other implementations of the Metamath design principles have been implemented for a broad variety of languages. Juha Arpiainen has implemented his own proof checker in Common Lisp called Bourbaki and Marnix Klooster has coded a proof checker in Haskell called "Hmm". Although they all use the overall Metamath approach to formal system checker coding, they also implement new concepts of their own. Editors. Mel O'Cat designed a system called "Mmj2", which provides a graphic user interface for proof entry. 
The initial aim of Mel O'Cat was to allow the user to enter proofs by simply typing the formulas and letting "Mmj2" find the appropriate inference rules to connect them. In Metamath, by contrast, you may only enter theorem names; you may not enter the formulas directly. "Mmj2" also allows the proof to be entered forward or backward (Metamath only allows proofs to be entered backward). Moreover, "Mmj2" has a real grammar parser, unlike Metamath. This technical difference brings more comfort to the user: Metamath sometimes hesitates between several formulas it analyzes (most of them being meaningless) and asks the user to choose, whereas in "Mmj2" this limitation no longer exists. There is also a project by William Hale to add a graphical user interface to Metamath called "Mmide". Paul Chapman, in his turn, is working on a new proof browser, which has highlighting that allows you to see the referenced theorem before and after the substitution is made. Milpgame is a proof assistant and checker (it shows a message only if something has gone wrong) with a graphical user interface for the Metamath language (set.mm). Written by Filip Cernatescu, it is an open-source (MIT License) Java application (cross-platform: Windows, Linux, Mac OS). The user can enter the demonstration (proof) in two modes: forward and backward relative to the statement to prove. Milpgame checks whether a statement is well formed (it has a syntactic verifier) and can save unfinished proofs without the use of the dummylink theorem. The demonstration is shown as a tree, with the statements displayed using HTML definitions (defined in the typesetting chapter). Milpgame is distributed as a Java .jar (JRE version 6 update 24, written in the NetBeans IDE). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vdash \\varphi\\quad\\&\\quad \\vdash ( \\varphi \\rightarrow \\psi )\\quad\\Rightarrow\\quad \\vdash \\psi" } ]
https://en.wikipedia.org/wiki?curid=6023946
602480
Frobenius theorem (differential topology)
On finding a maximal set of solutions of a system of first-order homogeneous linear PDEs In mathematics, Frobenius' theorem gives necessary and sufficient conditions for finding a maximal set of independent solutions of an overdetermined system of first-order homogeneous linear partial differential equations. In modern geometric terms, given a family of vector fields, the theorem gives necessary and sufficient integrability conditions for the existence of a foliation by maximal integral manifolds whose tangent bundles are spanned by the given vector fields. The theorem generalizes the existence theorem for ordinary differential equations, which guarantees that a single vector field always gives rise to integral curves; Frobenius gives compatibility conditions under which the integral curves of "r" vector fields mesh into coordinate grids on "r"-dimensional integral manifolds. The theorem is foundational in differential topology and calculus on manifolds. Contact geometry studies 1-forms that maximally violates the assumptions of Frobenius' theorem. An example is shown on the right. Introduction. One-form version. Suppose we are to find the trajectory of a particle in a subset of 3D space, but we do not know its trajectory formula. Instead, we know only that its trajectory satisfies formula_0, where formula_1 are smooth functions of formula_2. Thus, our only certainty is that if at some moment in time the particle is at location formula_3, then its velocity at that moment is restricted within the plane with equation formula_4 In other words, we can draw a "local plane" at each point in 3D space, and we know that the particle's trajectory must be tangent to the local plane at all times. If we have two equationsformula_5then we can draw two local planes at each point, and their intersection is generically a line, allowing us to uniquely solve for the curve starting at any point. In other words, with two 1-forms, we can foliate the domain into curves. If we have only one equation formula_0, then we might be able to foliate formula_6 into surfaces, in which case, we can be sure that a curve starting at a certain surface must be restricted to wander within that surface. If not, then a curve starting at any point might end up at any other point in formula_6. One can imagine starting with a cloud of little planes, and quilting them together to form a full surface. The main danger is that, if we quilt the little planes two at a time, we might go on a cycle and return to where we began, but shifted by a small amount. If this happens, then we would not get a 2-dimensional surface, but a 3-dimensional blob. An example is shown in the diagram on the right. If the one-form is integrable, then loops exactly close upon themselves, and each surface would be 2-dimensional. Frobenius' theorem states that this happens precisely when formula_7 over all of the domain, where formula_8. The notation is defined in the article on one-forms. During his development of axiomatic thermodynamics, Carathéodory proved that if formula_9 is an integrable one-form on an open subset of formula_10, then formula_11 for some scalar functions formula_12 on the subset. This is usually called Carathéodory's theorem in axiomatic thermodynamics. One can prove this intuitively by first constructing the little planes according to formula_9, quilting them together into a foliation, then assigning each surface in the foliation with a scalar label. 
Now for each point formula_13, define formula_14 to be the scalar label of the surface containing point formula_13. Now, formula_15 is a one-form that has exactly the same planes as formula_9. However, it has "even thickness" everywhere, while formula_9 might have "uneven thickness". This can be fixed by a scalar scaling by formula_16, giving formula_11. This is illustrated on the right. Multiple one-forms. In its most elementary form, the theorem addresses the problem of finding a maximal set of independent solutions of a regular system of first-order linear homogeneous partial differential equations. Let formula_17 be a collection of "C"1 functions, with "r" &lt; "n", and such that the matrix ( "f" ) has rank "r" when evaluated at any point of R"n". Consider the following system of partial differential equations for a "C"2 function "u" : R"n" → R: formula_18 One seeks conditions on the existence of a collection of solutions "u"1, ..., "u""n"−"r" such that the gradients ∇"u"1, ..., ∇"u""n"−"r" are linearly independent. The Frobenius theorem asserts that this problem admits a solution locally if, and only if, the operators "Lk" satisfy a certain integrability condition known as "involutivity". Specifically, they must satisfy relations of the form formula_19 for 1 ≤ "i", "j" ≤ "r", and all "C"2 functions "u", and for some coefficients "c""k""ij"("x") that are allowed to depend on "x". In other words, the commutators ["Li", "Lj"] must lie in the linear span of the "Lk" at every point. The involutivity condition is a generalization of the commutativity of partial derivatives. In fact, the strategy of proof of the Frobenius theorem is to form linear combinations among the operators "Li" so that the resulting operators do commute, and then to show that there is a coordinate system "yi" for which these are precisely the partial derivatives with respect to "y"1, ..., "yr". From analysis to geometry. Even though the system is overdetermined there are typically infinitely many solutions. For example, the system of differential equations formula_20 clearly permits multiple solutions. Nevertheless, these solutions still have enough structure that they may be completely described. The first observation is that, even if "f"1 and "f"2 are two different solutions, the level surfaces of "f"1 and "f"2 must overlap. In fact, the level surfaces for this system are all planes in R3 of the form "x" − "y" + "z" "C", for C a constant. The second observation is that, once the level surfaces are known, all solutions can then be given in terms of an arbitrary function. Since the value of a solution "f" on a level surface is constant by definition, define a function "C"("t") by: formula_21 Conversely, if a function "C"("t") is given, then each function "f" given by this expression is a solution of the original equation. Thus, because of the existence of a family of level surfaces, solutions of the original equation are in a one-to-one correspondence with arbitrary functions of one variable. Frobenius' theorem allows one to establish a similar such correspondence for the more general case of solutions of (1). Suppose that "u"1, ..., "un−r" are solutions of the problem (1) satisfying the independence condition on the gradients. Consider the level sets of ("u"1, ..., "un−r") as functions with values in R"n−r". 
If "v"1, ..., "vn−r" is another such collection of solutions, one can show (using some linear algebra and the mean value theorem) that this has the same family of level sets but with a possibly different choice of constants for each set. Thus, even though the independent solutions of (1) are not unique, the equation (1) nonetheless determines a unique family of level sets. Just as in the case of the example, general solutions "u" of (1) are in a one-to-one correspondence with (continuously differentiable) functions on the family of level sets. The level sets corresponding to the maximal independent solution sets of (1) are called the "integral manifolds" because functions on the collection of all integral manifolds correspond in some sense to constants of integration. Once one of these constants of integration is known, then the corresponding solution is also known. Frobenius' theorem in modern language. The Frobenius theorem can be restated more economically in modern language. Frobenius' original version of the theorem was stated in terms of Pfaffian systems, which today can be translated into the language of differential forms. An alternative formulation, which is somewhat more intuitive, uses vector fields. Formulation using vector fields. In the vector field formulation, the theorem states that a subbundle of the tangent bundle of a manifold is integrable (or involutive) if and only if it arises from a regular foliation. In this context, the Frobenius theorem relates integrability to foliation; to state the theorem, both concepts must be clearly defined. One begins by noting that an arbitrary smooth vector field formula_22 on a manifold formula_23 defines a family of curves, its integral curves formula_24 (for intervals formula_25). These are the solutions of formula_26, which is a system of first-order ordinary differential equations, whose solvability is guaranteed by the Picard–Lindelöf theorem. If the vector field formula_22 is nowhere zero then it defines a one-dimensional subbundle of the tangent bundle of formula_23, and the integral curves form a regular foliation of formula_23. Thus, one-dimensional subbundles are always integrable. If the subbundle has dimension greater than one, a condition needs to be imposed. One says that a subbundle formula_27 of the tangent bundle formula_28 is integrable (or involutive), if, for any two vector fields formula_22 and formula_29 taking values in formula_30, the Lie bracket formula_31 takes values in formula_30 as well. This notion of integrability need only be defined locally; that is, the existence of the vector fields formula_22 and formula_29 and their integrability need only be defined on subsets of formula_23. Several definitions of foliation exist. Here we use the following: Definition. A "p"-dimensional, class "Cr" foliation of an "n"-dimensional manifold "M" is a decomposition of "M" into a union of disjoint connected submanifolds {"L"α}α∈"A", called the "leaves" of the foliation, with the following property: Every point in "M" has a neighborhood "U" and a system of local, class "Cr" coordinates "x"=("x"1, ⋅⋅⋅, "xn") : "U"→R"n" such that for each leaf "L"α, the components of "U" ∩ "L"α are described by the equations "x""p"+1=constant, ⋅⋅⋅, "xn"=constant. A foliation is denoted by formula_32={"L"α}α∈"A". Trivially, any foliation of formula_23 defines an integrable subbundle, since if formula_33 and formula_34 is the leaf of the foliation passing through formula_13 then formula_35 is integrable. 
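To make the involutivity condition concrete, the following SymPy sketch (the vector fields are illustrative choices, not taken from the text) computes the Lie bracket of two vector fields on R3 in components and tests whether it stays in their span; the first pair is involutive, while the second is the standard contact-type counterexample.

# Involutivity check for two vector fields on R^3, written in components:
#   [X, Y]^i = sum_j ( X^j d_j Y^i  -  Y^j d_j X^i ).
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)

def lie_bracket(X, Y):
    return [sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
                for j in range(3)) for i in range(3)]

def in_span(V, X, Y):
    """True iff X, Y, V are everywhere linearly dependent (assumes X, Y independent)."""
    return sp.simplify(sp.Matrix([X, Y, V]).det()) == 0

# Involutive pair: X = d/dx, Y = d/dy (tangent to the foliation by planes z = const).
X1, Y1 = [1, 0, 0], [0, 1, 0]
print(in_span(lie_bracket(X1, Y1), X1, Y1))                       # True

# Non-involutive pair: X = d/dx + y d/dz, Y = d/dy (a contact-type distribution).
X2, Y2 = [1, 0, y], [0, 1, 0]
print(lie_bracket(X2, Y2), in_span(lie_bracket(X2, Y2), X2, Y2))  # [0, 0, -1] False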
Frobenius' theorem states that the converse is also true: Given the above definitions, Frobenius' theorem states that a subbundle formula_30 is integrable if and only if the subbundle formula_30 arises from a regular foliation of formula_23. Differential forms formulation. Let "U" be an open set in a manifold M, Ω1("U") be the space of smooth, differentiable 1-forms on "U", and "F" be a submodule of Ω1("U") of rank "r", the rank being constant in value over "U". The Frobenius theorem states that "F" is integrable if and only if for every p in U the stalk "Fp" is generated by "r" exact differential forms. Geometrically, the theorem states that an integrable module of 1-forms of rank "r" is the same thing as a codimension-r foliation. The correspondence to the definition in terms of vector fields given in the introduction follows from the close relationship between differential forms and Lie derivatives. Frobenius' theorem is one of the basic tools for the study of vector fields and foliations. There are thus two forms of the theorem: one which operates with distributions, that is smooth subbundles "D" of the tangent bundle "TM"; and the other which operates with subbundles of the graded ring Ω("M") of all forms on "M". These two forms are related by duality. If "D" is a smooth tangent distribution on M, then the annihilator of "D", "I"("D") consists of all forms formula_36 (for any formula_37) such that formula_38 for all formula_39. The set "I"("D") forms a subring and, in fact, an ideal in Ω("M"). Furthermore, using the definition of the exterior derivative, it can be shown that "I"("D") is closed under exterior differentiation (it is a differential ideal) if and only if "D" is involutive. Consequently, the Frobenius theorem takes on the equivalent form that "I"("D") is closed under exterior differentiation if and only if "D" is integrable. Generalizations. The theorem may be generalized in a variety of ways. Infinite dimensions. One infinite-dimensional generalization is as follows. Let X and Y be Banach spaces, and "A" ⊂ "X", "B" ⊂ "Y" a pair of open sets. Let formula_40 be a continuously differentiable function of the Cartesian product (which inherits a differentiable structure from its inclusion into "X" × "Y") into the space "L"("X","Y") of continuous linear transformations of X into "Y". A differentiable mapping "u" : "A" → "B" is a solution of the differential equation formula_41 if formula_42 The equation (1) is completely integrable if for each formula_43, there is a neighborhood "U" of "x"0 such that (1) has a unique solution "u"("x") defined on "U" such that "u"("x"0)="y"0. The conditions of the Frobenius theorem depend on whether the underlying field is R or C. If it is R, then assume "F" is continuously differentiable. If it is C, then assume "F" is twice continuously differentiable. Then (1) is completely integrable at each point of "A" × "B" if and only if formula_44 for all "s"1, "s"2 ∈ "X". Here "D"1 (resp. "D"2) denotes the partial derivative with respect to the first (resp. second) variable; the dot product denotes the action of the linear operator "F"("x", "y") ∈ "L"("X", "Y"), as well as the actions of the operators "D"1"F"("x", "y") ∈ "L"("X", "L"("X", "Y")) and "D"2"F"("x", "y") ∈ "L"("Y", "L"("X", "Y")). Banach manifolds. The infinite-dimensional version of the Frobenius theorem also holds on Banach manifolds. The statement is essentially the same as the finite-dimensional version. Let M be a Banach manifold of class at least "C"2. 
Let E be a subbundle of the tangent bundle of M. The bundle E is involutive if, for each point "p" ∈ "M" and pair of sections X and "Y" of E defined in a neighborhood of "p", the Lie bracket of X and "Y" evaluated at "p", lies in "Ep": formula_45 On the other hand, E is integrable if, for each "p" ∈ "M", there is an immersed submanifold "φ" : "N" → "M" whose image contains "p", such that the differential of φ is an isomorphism of "TN" with "φ"−1"E". The Frobenius theorem states that a subbundle E is integrable if and only if it is involutive. Holomorphic forms. The statement of the theorem remains true for holomorphic 1-forms on complex manifolds — manifolds over C with biholomorphic transition functions. Specifically, if formula_46 are "r" linearly independent holomorphic 1-forms on an open set in C"n" such that formula_47 for some system of holomorphic 1-forms "ψ", 1 ≤ "i", "j" ≤ "r", then there exist holomorphic functions "f"ij and "gi" such that, on a possibly smaller domain, formula_48 This result holds locally in the same sense as the other versions of the Frobenius theorem. In particular, the fact that it has been stated for domains in C"n" is not restrictive. Higher degree forms. The statement does not generalize to higher degree forms, although there is a number of partial results such as Darboux's theorem and the Cartan-Kähler theorem. History. Despite being named for Ferdinand Georg Frobenius, the theorem was first proven by Alfred Clebsch and Feodor Deahna. Deahna was the first to establish the sufficient conditions for the theorem, and Clebsch developed the necessary conditions. Frobenius is responsible for applying the theorem to Pfaffian systems, thus paving the way for its usage in differential topology. Applications. Carathéodory's axiomatic thermodynamics. In classical thermodynamics, Frobenius' theorem can be used to construct entropy and temperature in Carathéodory's formalism. Specifically, Carathéodory considered a thermodynamic system (concretely one can imagine a piston of gas) that can interact with the outside world by either heat conduction (such as setting the piston on fire) or mechanical work (pushing on the piston). He then defined "adiabatic process" as any process that the system may undergo without heat conduction, and defined a relation of "adiabatic accessibility" thus: if the system can go from state A to state B after an adiabatic process, then formula_49 is adiabatically accessible from formula_50. Write it as formula_51. Now assume that Then, we can foliate the state space into subsets of states that are mutually adiabatically accessible. With mild assumptions on the smoothness of formula_54, each subset is a manifold of codimension 1. Call these manifolds "adiabatic surfaces". By the first law of thermodynamics, there exists a scalar function formula_55 ("internal energy") on the state space, such thatformula_56where formula_57 are the possible ways to perform mechanical work on the system. For example, if the system is a tank of ideal gas, then formula_58. Now, define the one-form on the state spaceformula_59Now, since the adiabatic surfaces are tangent to formula_9 at every point in state space, formula_9 is integrable, so by Carathéodory's theorem, there exists two scalar functions formula_60 on state space, such that formula_61. These are the temperature and entropy functions, up to a multiplicative constant. 
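A concrete check of this construction can be made with SymPy under the usual monatomic ideal-gas assumptions U = (3/2)nRT and p = nRT/V (these particular expressions are illustrative assumptions, not given in the text): the heat one-form dU + p dV is not closed, but dividing by T makes it exact, and integrating recovers the familiar ideal-gas entropy up to a constant.

# Check that w = dU + p dV (heat one-form of a monatomic ideal gas, in T,V coordinates)
# is not closed, while w/T is, so that w = T dS for some state function S.
import sympy as sp

T, V, n, R = sp.symbols("T V n R", positive=True)
U = sp.Rational(3, 2) * n * R * T        # internal energy (assumed monatomic ideal gas)
p = n * R * T / V                        # ideal gas law

# w = a(T,V) dT + b(T,V) dV  with  a = dU/dT,  b = dU/dV + p
a = sp.diff(U, T)
b = sp.diff(U, V) + p

closed = sp.simplify(sp.diff(a, V) - sp.diff(b, T))                  # nonzero: dQ is inexact
closed_over_T = sp.simplify(sp.diff(a / T, V) - sp.diff(b / T, T))   # zero: (1/T) dQ is exact
print(closed, closed_over_T)             # -n*R/V   0

# Integrate w/T to recover S up to a constant: S = (3/2) n R ln T + n R ln V + const.
S = sp.integrate(a / T, T) + sp.integrate(b / T, V)
print(sp.simplify(sp.diff(S, T) - a / T), sp.simplify(sp.diff(S, V) - b / T))  # 0 0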
By plugging in the ideal gas laws, and noting that Joule expansion is an (irreversible) adiabatic process, we can fix the sign of formula_62, and find that formula_51 means formula_63. That is, entropy is preserved in reversible adiabatic processes, and increases during irreversible adiabatic processes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "adx + bdy + cdz = 0" }, { "math_id": 1, "text": "a, b, c" }, { "math_id": 2, "text": "(x,y,z)" }, { "math_id": 3, "text": "(x_0, y_0, z_0)" }, { "math_id": 4, "text": "a(x_0, y_0, z_0)[x-x_0] + b(x_0, y_0, z_0)[y-y_0] + c(x_0, y_0, z_0)[z-z_0] = 0" }, { "math_id": 5, "text": "\\begin{cases}\nadx + bdy + cdz = 0 \\\\\na'dx + b'dy + c'dz = 0\n\\end{cases}" }, { "math_id": 6, "text": "\\R^3" }, { "math_id": 7, "text": "\\omega \\wedge d\\omega = 0" }, { "math_id": 8, "text": "\\omega := adx + bdy + cdz" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "\\R^n" }, { "math_id": 11, "text": "\\omega = f dg" }, { "math_id": 12, "text": "f, g" }, { "math_id": 13, "text": "p" }, { "math_id": 14, "text": "g(p)" }, { "math_id": 15, "text": "dg" }, { "math_id": 16, "text": "f" }, { "math_id": 17, "text": " \\left \\{ f_k^i : \\mathbf{R}^n \\to \\mathbf{R} \\ : \\ 1 \\leq i \\leq n, 1 \\leq k \\leq r \\right \\}" }, { "math_id": 18, "text": "(1) \\quad \\begin{cases}\n L_1u\\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_i f_1^i(x)\\frac{\\partial u}{\\partial x^i} = \\vec f_1 \\cdot \\nabla u = 0\\\\\n L_2u\\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_i f_2^i(x)\\frac{\\partial u}{\\partial x^i} = \\vec f_2 \\cdot \\nabla u = 0\\\\\n \\qquad \\cdots \\\\\n L_ru\\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_i f_r^i(x)\\frac{\\partial u}{\\partial x^i} = \\vec f_r \\cdot \\nabla u = 0\n\\end{cases}" }, { "math_id": 19, "text": "L_iL_ju(x)-L_jL_iu(x)=\\sum_k c_{ij}^k(x)L_ku(x)" }, { "math_id": 20, "text": "\\begin{cases} \\frac{\\partial f}{\\partial x} + \\frac{\\partial f}{\\partial y} =0\\\\ \\frac{\\partial f}{\\partial y}+ \\frac{\\partial f}{\\partial z}=0\n\\end{cases}" }, { "math_id": 21, "text": "f(x,y,z)=C(t) \\text{ whenever } x - y + z = t." }, { "math_id": 22, "text": "X" }, { "math_id": 23, "text": "M" }, { "math_id": 24, "text": "u:I\\to M" }, { "math_id": 25, "text": "I" }, { "math_id": 26, "text": "\\dot u(t) = X_{u(t)}" }, { "math_id": 27, "text": "E\\subset TM" }, { "math_id": 28, "text": "TM" }, { "math_id": 29, "text": "Y" }, { "math_id": 30, "text": "E" }, { "math_id": 31, "text": "[X,Y]" }, { "math_id": 32, "text": "\\mathcal{F}" }, { "math_id": 33, "text": "p\\in M" }, { "math_id": 34, "text": "N\\subset M" }, { "math_id": 35, "text": "E_p = T_pN" }, { "math_id": 36, "text": "\\alpha\\in\\Omega^k (M)" }, { "math_id": 37, "text": "k\\in \\{1,\\dots, \\operatorname{dim}M\\}" }, { "math_id": 38, "text": "\\alpha(v_1,\\dots,v_k) = 0" }, { "math_id": 39, "text": "v_1,\\dots,v_k\\in D" }, { "math_id": 40, "text": "F:A\\times B \\to L(X,Y)" }, { "math_id": 41, "text": "(1) \\quad y' = F(x,y) " }, { "math_id": 42, "text": "\\forall x \\in A: \\quad u'(x) = F(x, u(x))." }, { "math_id": 43, "text": "(x_0, y_0)\\in A\\times B" }, { "math_id": 44, "text": "D_1F(x,y)\\cdot(s_1,s_2) + D_2F(x,y)\\cdot(F(x,y)\\cdot s_1,s_2) = D_1F(x,y) \\cdot (s_2,s_1) + D_2F(x,y)\\cdot(F(x,y)\\cdot s_2,s_1)" }, { "math_id": 45, "text": " [X,Y]_p \\in E_p" }, { "math_id": 46, "text": "\\omega^1,\\dots,\\omega^r" }, { "math_id": 47, "text": "d\\omega^j = \\sum_{i=1}^r \\psi_i^j \\wedge \\omega^i" }, { "math_id": 48, "text": "\\omega^j = \\sum_{i=1}^r f_i^jdg^i." 
}, { "math_id": 49, "text": "B" }, { "math_id": 50, "text": "A" }, { "math_id": 51, "text": "A \\succeq B" }, { "math_id": 52, "text": "A, B" }, { "math_id": 53, "text": "B \\succeq A" }, { "math_id": 54, "text": "\\succeq" }, { "math_id": 55, "text": "U" }, { "math_id": 56, "text": "dU = \\delta W + \\delta Q = \\sum_i X_i dx_i + \\delta Q" }, { "math_id": 57, "text": "X_1 dx_1, ..., X_n dx_n" }, { "math_id": 58, "text": "\\delta W = -p dV" }, { "math_id": 59, "text": "\\omega := dU - \\sum_i X_i dx_i" }, { "math_id": 60, "text": "T, S" }, { "math_id": 61, "text": "\\omega = TdS" }, { "math_id": 62, "text": "dS" }, { "math_id": 63, "text": "S(A) \\leq S(B)" } ]
https://en.wikipedia.org/wiki?curid=602480
602490
Sober space
Topological space whose topology is fully captured by its lattice of open sets In mathematics, a sober space is a topological space "X" such that every (nonempty) irreducible closed subset of "X" is the closure of exactly one point of "X": that is, every nonempty irreducible closed subset has a unique generic point. Definitions. Sober spaces have a variety of cryptomorphic definitions, which are documented in this section. All except the definition in terms of nets are described in the literature. In each case below, replacing "unique" with "at most one" gives an equivalent formulation of the T0 axiom. Replacing it with "at least one" is equivalent to the property that the T0 quotient of the space is sober, which is sometimes referred to as having "enough points" in the literature. With irreducible closed sets. A closed set is irreducible if it cannot be written as the union of two proper closed subsets. A space is sober if every nonempty irreducible closed subset is the closure of a unique point. In terms of morphisms of frames and locales. A topological space "X" is sober if every map that preserves all joins and all finite meets from its partially ordered set of open subsets to formula_0 is the inverse image of a unique continuous function from the one-point space to "X". This may be viewed as a correspondence between the notion of a point in a locale and a point in a topological space, which is the motivating definition. Using completely prime filters. A filter "F" of open sets is said to be "completely prime" if for any family formula_1 of open sets such that formula_2, we have that formula_3 for some "i". A space X is sober if each completely prime filter is the neighbourhood filter of a unique point in X. In terms of nets. A net formula_4 is "self-convergent" if it converges to every point formula_5 in formula_4, or equivalently if its eventuality filter is completely prime. A net formula_4 that converges to formula_6 "converges strongly" if it can only converge to points in the closure of formula_6. A space is sober if every self-convergent net formula_4 converges strongly to a unique point formula_6. In particular, a space is T1 and sober precisely if every self-convergent net is constant. As a property of sheaves on the space. A space "X" is sober if every functor from the category of sheaves "Sh(X)" to "Set" that preserves all finite limits and all small colimits is the stalk functor of a unique point "x". Properties and examples. Any Hausdorff (T2) space is sober (the only irreducible subsets being points), and all sober spaces are Kolmogorov (T0), and both implications are strict. Sobriety is not comparable to the T1 condition: the Sierpiński space is sober but not T1, while the cofinite topology on an infinite set is T1 but not sober. Moreover T2 is stronger than T1 "and" sober, i.e., while every T2 space is at once T1 and sober, there exist spaces that are simultaneously T1 and sober, but not T2. One such example is the following: let X be the set of real numbers, with a new point p adjoined; the open sets being all real open sets, and all cofinite sets containing p. Sobriety of "X" is precisely a condition that forces the lattice of open subsets of "X" to determine "X" up to homeomorphism, which is relevant to pointless topology. Sobriety makes the specialization preorder a directed complete partial order. Every continuous directed complete poset equipped with the Scott topology is sober. Finite T0 spaces are sober. The prime spectrum Spec("R") of a commutative ring "R" with the Zariski topology is a compact sober space. In fact, every spectral space (i.e. 
a compact sober space for which the collection of compact open subsets is closed under finite intersections and forms a base for the topology) is homeomorphic to Spec("R") for some commutative ring "R". This is a theorem of Melvin Hochster. More generally, the underlying topological space of any scheme is a sober space. The subset of Spec("R") consisting only of the maximal ideals, where "R" is a commutative ring, is not sober in general. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
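To illustrate the irreducible-closed-set definition, and the statement above that finite T0 spaces are sober, here is a brute-force Python sketch for finite spaces given by their open sets (the Sierpiński space is used as the example; all function names are ad hoc, and the list of opens is assumed to contain the empty set and the whole space).

# Brute-force sobriety test for a finite topological space given by its open sets.
from itertools import combinations

def closure(s, points, opens):
    """Smallest closed set containing s (closed sets are complements of open sets)."""
    closed_sets = [points - o for o in opens]
    return set.intersection(*[c for c in closed_sets if s <= c])

def is_irreducible(c, points, opens):
    """A nonempty closed set is irreducible iff any two open sets meeting it meet it simultaneously."""
    meets = [o for o in opens if o & c]
    return bool(c) and all((o1 & o2) & c for o1, o2 in combinations(meets, 2))

def is_sober(points, opens):
    closed_sets = {frozenset(points - o) for o in opens}
    for c in map(set, closed_sets):
        if is_irreducible(c, points, opens):
            generic = [x for x in c if closure({x}, points, opens) == c]
            if len(generic) != 1:          # need exactly one generic point
                return False
    return True

# Sierpinski space: points {0, 1}, opens {}, {1}, {0, 1}.  It is T0 and sober but not T1.
points = {0, 1}
opens = [set(), {1}, {0, 1}]
print(is_sober(points, opens))   # True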
[ { "math_id": 0, "text": "\\{0,1\\}" }, { "math_id": 1, "text": "O_i" }, { "math_id": 2, "text": "\\bigcup_i O_i \\in F" }, { "math_id": 3, "text": "O_i \\in F" }, { "math_id": 4, "text": "x_{\\bullet}" }, { "math_id": 5, "text": "x_i" }, { "math_id": 6, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=602490
60249313
Butler matrix
A Butler matrix is a beamforming network used to feed a phased array of antenna elements. Its purpose is to control the direction of a beam, or beams, of radio transmission. It consists of an formula_0 matrix (formula_1 some power of two) with hybrid couplers and fixed-value phase shifters at the junctions. The device has formula_1 input ports (the beam ports) to which power is applied, and formula_1 output ports (the element ports) to which formula_1 antenna elements are connected. The Butler matrix feeds power to the elements with a progressive phase difference between elements such that the beam of radio transmission is in the desired direction. The beam direction is controlled by switching power to the desired beam port. More than one beam, or even all formula_1 of them can be activated simultaneously. The concept was first proposed by Butler and Lowe in 1961. It is a development of the work of Blass in 1960. Its advantage over other methods of angular beamforming is the simplicity of the hardware. It requires far fewer phase shifters than other methods and can be implemented in microstrip on a low-cost printed circuit board. Antenna elements. The antenna elements fed by a Butler matrix are typically horn antennae at the microwave frequencies at which Butler matrices are usually used. Horns have limited bandwidth and more complex antennae may be used if more than an octave is required. The elements are commonly arranged in a linear array. A Butler matrix can also feed a circular array giving 360° coverage. A further application with a circular antenna array is to produce formula_1 omnidirectional beams with orthogonal phase-modes so that multiple mobile stations can all simultaneously use the same frequency, each using a different phase-mode. A circular antenna array can be made to simultaneously produce an omnidirectional beam and multiple directional beams when fed through two Butler matrices back-to-back. Butler matrices can be used with both transmitters and receivers. Since they are passive and reciprocal, the same matrix can do both – in a transceiver for instance. They have the advantageous property that in transmit mode they deliver the full power of the transmitter to the beam, and in receive mode they collect signals from each of the beam directions with the full gain of the antenna array. Components. The essential components needed to build a Butler matrix are hybrid couplers and fixed-value phase shifters. Additionally, fine control of the beam direction can be provided with variable phase shifters in addition to the fixed phase shifters. By using the variable phase shifters in combination with switching the power to the beam ports, a continuous sweep of the beam can be produced. An additional component that can be used is a planar crossover distributed-element circuit. Microwave circuits are often manufactured in the planar format called microstrip. Lines that need to cross over each other are typically implemented as an air bridge. These are unsuitable for this application because there is unavoidably some coupling between the lines being crossed. An alternative which allows the Butler matrix to be implemented entirely in printed circuit form, and thus more economically, is a crossover in the form of a branch-line coupler. The crossover coupler is equivalent to two 90° hybrid couplers connected in cascade. 
This will add an additional 90° phase shift to the lines being crossed, but this can be compensated for by adding an equivalent amount to the phase shifters in lines not being crossed. An ideal branch-line crossover theoretically has no coupling between the two paths through it. In this kind of implementation, the phase shifters are constructed as delay lines of the appropriate length. This is just a meandering line on the printed circuit. Microstrip is cheap, but is not suitable for all applications. When there are a large number of antenna elements, the path through the Butler matrix goes through a large number of hybrids and phase shifters. The cumulative insertion loss from all these components in microstrip can make it impractical. The technology usually used to overcome this problem, especially at the higher frequencies, is waveguide which is much less lossy. Not only is this more expensive, it is also much more bulky and heavier, which is a major drawback for aircraft use. Another choice that is less bulky, but still less lossy than microstrip, is substrate-integrated waveguide. Applications. A typical use of Butler matrices is in the base stations of mobile networks to keep the beams pointing towards the mobile users. Linear antenna arrays driven by Butler matrices, or some other beam-forming network, to produce a scanning beam are used in direction finding applications. They are important for military warning systems and target location. They are especially useful in naval systems because of the wide angular coverage that can be obtained. Another feature that makes Butler matrices attractive for military applications is their speed over mechanical scanning systems. These need to allow settling time for the servos. Analysis. A linear antenna array will produce a beam perpendicular to the line of elements (broadside beam) if they are all fed in phase. If they are fed with a phase change between elements of formula_2 then a beam in the direction of the line (endfire beam) will be produced. Using an intermediate value of phase shift between elements will produce a beam at some angle intermediate between these two extremes. In a Butler matrix, the phase shift of each beam is made formula_3 and the angle between the outer beams is given by formula_4 The expression shows that formula_5 decreases with increasing frequency. This effect is called "beam squint". Both the Blass matrix and Butler matrix suffer from beam squint and the effect limits the bandwidth that can be achieved. Another undesirable effect is that the further a beam is off boresight (broadside beam) the lower is the beam peak field. The total number of circuit blocks required is formula_6 hybrids and, formula_7 fixed phase shifters. Since formula_1 is always a power of 2, we can let formula_8, then the required number of hybrids is formula_9 and phase shifters formula_10. formula_1 number of antenna elements, equal to number of beam ports formula_11 distance between antenna elements formula_12 index number of antenna port formula_13 wavelength formula_14 frequency formula_15 phase shift formula_5 angle formula_16 speed of light Orthogonality. To be orthogonal (that is, not interfere with each other) the beam shapes must meet the Nyquist ISI criterion, but with distance as the independent variable rather than time. Assuming a sinc function beam shape, the beams must be spaced so that their crossovers occur at formula_17 of their peak value (about 4 dB down). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
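A small numerical sketch (Python; the 8-element array, 10 GHz design frequency and half-wavelength element spacing are illustrative assumptions, not values from the text) of the beam-phase, beam-angle and component-count expressions above.

# Beam phases, outer-beam angle and component counts for an n-element Butler matrix.
import math

n = 8                        # number of elements / beam ports (a power of two)
f = 10e9                     # design frequency in Hz (illustrative)
c = 3.0e8                    # speed of light, m/s
lam = c / f
d = lam / 2                  # half-wavelength element spacing (a common choice)

# Progressive phase for beam k:  phi_k = (2k - 1) * pi / n
for k in range(1, n + 1):
    phi = (2 * k - 1) * math.pi / n
    print(f"beam {k}: inter-element phase = {math.degrees(phi):7.2f} deg")

# Angle between the two outer beams:  theta = 2 asin( (c / (2 d f)) * (1 - 1/n) )
theta = 2 * math.asin(c / (2 * d * f) * (1 - 1 / n))
print("outer-beam separation:", round(math.degrees(theta), 1), "deg")

# Hardware counts: n/2 * log2(n) hybrids and n/2 * (log2(n) - 1) fixed phase shifters.
m = int(math.log2(n))
print("hybrids:", n // 2 * m, " phase shifters:", n // 2 * (m - 1))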
[ { "math_id": 0, "text": "n \\times n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "2 \\pi d \\over \\lambda" }, { "math_id": 3, "text": "\\phi = \\frac {(2k-1) \\pi}{n} \\ ," }, { "math_id": 4, "text": "\\theta = 2 \\arcsin \\left[ \\frac{c}{2df} \\left ( 1 - {1 \\over n} \\right ) \\right ] \\ ." }, { "math_id": 5, "text": "\\theta" }, { "math_id": 6, "text": "{n \\over 2} \\log_2 n" }, { "math_id": 7, "text": "{n \\over 2} (\\log_2n - 1)" }, { "math_id": 8, "text": "n=2^m" }, { "math_id": 9, "text": "2^{m-1}m" }, { "math_id": 10, "text": "2^{m-1}(m-1)" }, { "math_id": 11, "text": "d" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "\\lambda" }, { "math_id": 14, "text": "f" }, { "math_id": 15, "text": "\\phi" }, { "math_id": 16, "text": "c" }, { "math_id": 17, "text": "2/\\pi" } ]
https://en.wikipedia.org/wiki?curid=60249313
60254398
Land equivalent ratio
The land equivalent ratio is a concept in agriculture that describes the relative land area required under sole cropping (monoculture) to produce the same yield as under intercropping (polyculture). Definition. The FAO defines the land equivalent ratio (LER) as "the ratio of the area under sole cropping to the area under intercropping needed to give equal amounts of yield at the same management level. It is the sum of the fractions of the intercropped yields divided by the sole-crop yields." For a scenario where a total of formula_0 crops are intercropped, the land equivalent ratio LER can be calculated as formula_1 where formula_0 is the number of different crops intercropped, formula_2 is the yield for the formula_3 crop under intercropping, and formula_4 is the yield for the formula_3 crop under a sole-crop regime on the same area. Example calculation. Consider a hypothetical scenario intercropping a grain crop with a fruit tree crop. Under intercropping (IY) the grain and fruit yields are 4,000 and 9,000 respectively, while under sole cropping (SY) they are 5,000 and 15,000 (in the same yield units). The "equivalent area" for each crop is the area of sole-cropping land required to achieve the same yield as 1 ha of intercropping, at the same management level. The land equivalent ratio can be calculated as formula_5 An interpretation of this result would be that a total of 1.4 ha of sole cropping area would be required to produce the same yields as 1 ha of the intercropped system. Applications. The land equivalent ratio can be used whenever more than one type of yield can be obtained from the same area. This can be intercropping of annual crops (e.g. sorghum and pigeonpea) or a combination of annual and perennial crops, e.g. in agroforestry systems (e.g. jackfruit and eggplant). It is also possible to calculate LERs for combinations of plant and non-plant yields, e.g. in agrivoltaic systems. Land equivalent ratios for a range of such crop combinations have been published in scientific journals.
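The calculation reduces to a one-line sum of yield fractions. The following Python sketch simply re-implements the formula given in the Definition section and reproduces the grain and fruit example; the yield figures are the hypothetical ones used above and carry no particular units.

def land_equivalent_ratio(intercrop_yields, sole_yields):
    # LER = sum over crops of (intercropped yield) / (sole-crop yield)
    return sum(iy / sy for iy, sy in zip(intercrop_yields, sole_yields))

# Hypothetical grain and fruit yields from the example calculation above.
ler = land_equivalent_ratio([4000, 9000], [5000, 15000])
print(ler)  # 0.8 + 0.6 = 1.4, i.e. 1.4 ha of sole cropping per 1 ha intercropped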
[ { "math_id": 0, "text": "\\textstyle m" }, { "math_id": 1, "text": "LER = \\sum_{i=1}^m {\\frac{IY_i}{SY_i}}" }, { "math_id": 2, "text": "\\textstyle IY_i" }, { "math_id": 3, "text": "\\textstyle i^{th}" }, { "math_id": 4, "text": "\\textstyle SY_i" }, { "math_id": 5, "text": "LER = \\sum_{i=1}^m {\\frac{IY_i}{SY_i}} = \\frac{IY_{grain}}{SY_{grain}} + \\frac{IY_{fruit}}{SY_{fruit}} = \\frac{4,000}{5,000} + \\frac{9,000}{15,000} = 0.8 + 0.6 = 1.4" } ]
https://en.wikipedia.org/wiki?curid=60254398
60255694
Orlicz sequence space
In mathematics, an Orlicz sequence space is any member of a certain class of linear spaces of scalar-valued sequences, endowed with a special norm, specified below, under which it forms a Banach space. Orlicz sequence spaces generalize the formula_0 spaces, and as such play an important role in functional analysis. Orlicz sequence spaces are particular examples of Orlicz spaces. Definition. Fix formula_1 so that formula_2 denotes either the real or complex scalar field. We say that a function formula_3 is an Orlicz function if it is continuous, nondecreasing, and (perhaps nonstrictly) convex, with formula_4 and formula_5. In the special case where there exists formula_6 with formula_7 for all formula_8 it is called degenerate. In what follows, unless otherwise stated we will assume all Orlicz functions are nondegenerate. This implies formula_9 for all formula_10. For each scalar sequence formula_11 set formula_12 We then define the Orlicz sequence space with respect to formula_13, denoted formula_14, as the linear space of all formula_11 such that formula_15 for some formula_16, endowed with the norm formula_17. Two other definitions will be important in the ensuing discussion. An Orlicz function formula_13 is said to satisfy the Δ2 condition at zero whenever formula_18 We denote by formula_19 the subspace of scalar sequences formula_20 such that formula_15 for all formula_16. Properties. The space formula_14 is a Banach space, and it generalizes the classical formula_0 spaces in the following precise sense: when formula_21, formula_22, then formula_17 coincides with the formula_0-norm, and hence formula_23; if formula_13 is a degenerate Orlicz function then formula_17 coincides with the formula_24-norm, and hence formula_25 in this special case, and formula_26 when formula_13 is degenerate. In general, the unit vectors may not form a basis for formula_14, and hence the following result is of considerable importance. Theorem 1. If formula_13 is an Orlicz function then the following conditions are equivalent: formula_13 satisfies the Δ2 condition at zero; the spaces formula_14 and formula_19 coincide; the unit vectors form a symmetric basis for formula_14; and formula_14 is separable. Two Orlicz functions formula_13 and formula_27 satisfying the Δ2 condition at zero are called equivalent whenever there exist positive constants formula_28 such that formula_29 for all formula_8. This is the case if and only if the unit vector bases of formula_14 and formula_30 are equivalent. formula_14 can be isomorphic to formula_30 without their unit vector bases being equivalent. (See the example below of an Orlicz sequence space with two nonequivalent symmetric bases.) Theorem 2. Let formula_13 be an Orlicz function. Then formula_14 is reflexive if and only if formula_31 and formula_32. Theorem 3 (K. J. Lindberg). Let formula_33 be an infinite-dimensional closed subspace of a separable Orlicz sequence space formula_14. Then formula_33 has a subspace formula_34 isomorphic to some Orlicz sequence space formula_30 for some Orlicz function formula_27 satisfying the Δ2 condition at zero. If furthermore formula_33 has an unconditional basis then formula_34 may be chosen to be complemented in formula_33, and if formula_33 has a symmetric basis then formula_33 itself is isomorphic to formula_30. Theorem 4 (Lindenstrauss/Tzafriri). Every separable Orlicz sequence space formula_14 contains a subspace isomorphic to formula_0 for some formula_22. Corollary. Every infinite-dimensional closed subspace of a separable Orlicz sequence space contains a further subspace isomorphic to formula_0 for some formula_22.
Note that in the above Theorem 4, the copy of formula_0 may not always be chosen to be complemented, as the following example shows. Example (Lindenstrauss/Tzafriri). There exists a separable and reflexive Orlicz sequence space formula_14 which fails to contain a complemented copy of formula_0 for any formula_35. This same space formula_14 contains at least two nonequivalent symmetric bases. Theorem 5 (K. J. Lindberg &amp; Lindenstrauss/Tzafriri). If formula_14 is an Orlicz sequence space satisfying formula_36 (i.e., the two-sided limit exists) then the following are all true. Example. For each formula_22, the Orlicz function formula_37 satisfies the conditions of Theorem 5 above, but is not equivalent to formula_38.
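The norm formula_17 defined above can be evaluated numerically for a finitely supported sequence. The following Python sketch is an illustration, not part of the source: it uses the fact that for a nondegenerate Orlicz function the quantity sum_n M(|a_n|/rho) is nonincreasing in rho, so the infimum can be bracketed and found by bisection. Taking M(t) = t^p recovers the ordinary formula_0 norm, which serves as a sanity check.

import math

def orlicz_norm(a, M, tol=1e-12):
    """Norm inf{rho > 0 : sum_n M(|a_n|/rho) <= 1} of a finite scalar sequence a."""
    if not any(a):
        return 0.0
    total = lambda rho: sum(M(abs(x) / rho) for x in a if x)
    hi = 1.0
    while total(hi) > 1:       # grow hi until the defining constraint is satisfied
        hi *= 2
    lo = hi
    while total(lo) <= 1:      # shrink lo until the constraint is violated
        lo /= 2
    while hi - lo > tol * hi:  # bisect on rho over the bracket [lo, hi]
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if total(mid) > 1 else (lo, mid)
    return hi

a = [3.0, 4.0]
print(orlicz_norm(a, lambda t: t ** 2))           # M(t) = t^2: the Euclidean norm, about 5.0
print(orlicz_norm(a, lambda t: math.exp(t) - 1))  # another nondegenerate Orlicz function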
[ { "math_id": 0, "text": "\\ell_p" }, { "math_id": 1, "text": "\\mathbb{K}\\in\\{\\mathbb{R},\\mathbb{C}\\}" }, { "math_id": 2, "text": "\\mathbb{K}" }, { "math_id": 3, "text": "M:[0,\\infty)\\to[0,\\infty)" }, { "math_id": 4, "text": "M(0)=0" }, { "math_id": 5, "text": "\\lim_{t\\to\\infty}M(t)=\\infty" }, { "math_id": 6, "text": "b>0" }, { "math_id": 7, "text": "M(t)=0" }, { "math_id": 8, "text": "t\\in[0,b]" }, { "math_id": 9, "text": "M(t)>0" }, { "math_id": 10, "text": "t>0" }, { "math_id": 11, "text": "(a_n)_{n=1}^\\infty\\in\\mathbb{K}^\\mathbb{N}" }, { "math_id": 12, "text": "\\left\\|(a_n)_{n=1}^\\infty\\right\\|_M=\\inf\\left\\{\\rho>0:\\sum_{n=1}^\\infty M(|a_n|/\\rho)\\leqslant 1\\right\\}." }, { "math_id": 13, "text": "M" }, { "math_id": 14, "text": "\\ell_M" }, { "math_id": 15, "text": "\\sum_{n=1}^\\infty M(|a_n|/\\rho)<\\infty" }, { "math_id": 16, "text": "\\rho>0" }, { "math_id": 17, "text": "\\|\\cdot\\|_M" }, { "math_id": 18, "text": "\\limsup_{t\\to 0}\\frac{M(2t)}{M(t)}<\\infty." }, { "math_id": 19, "text": "h_M" }, { "math_id": 20, "text": "(a_n)_{n=1}^\\infty\\in\\ell_M" }, { "math_id": 21, "text": "M(t)=t^p" }, { "math_id": 22, "text": "1\\leqslant p<\\infty" }, { "math_id": 23, "text": "\\ell_M=\\ell_p" }, { "math_id": 24, "text": "\\ell_\\infty" }, { "math_id": 25, "text": "\\ell_M=\\ell_\\infty" }, { "math_id": 26, "text": "h_M=c_0" }, { "math_id": 27, "text": "N" }, { "math_id": 28, "text": "A,B,b>0" }, { "math_id": 29, "text": "AN(t)\\leqslant M(t)\\leqslant BN(t)" }, { "math_id": 30, "text": "\\ell_N" }, { "math_id": 31, "text": "\\liminf_{t\\to 0}\\frac{tM'(t)}{M(t)}>1\\;\\;" }, { "math_id": 32, "text": "\\;\\;\\limsup_{t\\to 0}\\frac{tM'(t)}{M(t)}<\\infty" }, { "math_id": 33, "text": "X" }, { "math_id": 34, "text": "Y" }, { "math_id": 35, "text": "1\\leqslant p\\leqslant\\infty" }, { "math_id": 36, "text": "\\liminf_{t\\to 0}tM'(t)/M(t)=\\limsup_{t\\to 0}tM'(t)/M(t)" }, { "math_id": 37, "text": "M(t)=t^p/(1-\\log (t))" }, { "math_id": 38, "text": "t^p" } ]
https://en.wikipedia.org/wiki?curid=60255694
6025658
Weighted majority algorithm (machine learning)
Method of using a pool of algorithms In machine learning, the weighted majority algorithm (WMA) is a meta learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithm, classifiers, or even real human experts. The algorithm assumes that we have no prior knowledge about the accuracy of the algorithms in the pool, but there are sufficient reasons to believe that one or more will perform well. Assume that the problem is a binary decision problem. To construct the compound algorithm, a positive weight is given to each of the algorithms in the pool. The compound algorithm then collects weighted votes from all the algorithms in the pool and gives the prediction with the higher total weighted vote. If the compound algorithm makes a mistake, the algorithms in the pool that contributed to the wrong prediction will be discounted by a certain ratio β, where 0 < β < 1. It can be shown that the upper bound on the number of mistakes made in a given sequence of predictions from a pool of algorithms formula_0 is formula_1 if one algorithm in the pool makes at most formula_3 mistakes. There are many variations of the weighted majority algorithm to handle different situations, like shifting targets, infinite pools, or randomized predictions. The core mechanism remains similar, with the final performance of the compound algorithm bounded by a function of the performance of the specialist (the best-performing algorithm) in the pool.
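The following Python sketch illustrates the scheme described above for the binary case. It is a minimal illustration rather than a reference implementation: the discount factor β = 0.5, the toy experts and the data stream are arbitrary assumptions, and the weights of wrong experts are discounted only on rounds where the compound prediction itself is wrong, matching the description above (some presentations discount wrong experts on every round).

def weighted_majority(experts, stream, beta=0.5):
    """experts: list of functions mapping an instance to 0 or 1.
    stream: iterable of (instance, true_label) pairs.
    Returns the final weights and the number of compound mistakes."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, y in stream:
        votes = [e(x) for e in experts]
        weight_for_1 = sum(w for w, v in zip(weights, votes) if v == 1)
        weight_for_0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if weight_for_1 >= weight_for_0 else 0
        if prediction != y:
            mistakes += 1
            # discount every expert that voted for the wrong label on this round
            weights = [w * (beta if v != y else 1.0) for w, v in zip(weights, votes)]
    return weights, mistakes

# Toy pool: an always-1 expert, an always-0 expert, and a parity rule;
# the stream is labelled 1 throughout, so the first expert is the "specialist".
experts = [lambda x: 1, lambda x: 0, lambda x: x % 2]
stream = [(i, 1) for i in range(10)]
print(weighted_majority(experts, stream))  # the always-0 and parity experts lose weight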
[ { "math_id": 0, "text": " \\mathbf{A} " }, { "math_id": 1, "text": "\\mathbf{O(log|A|+m)}" }, { "math_id": 2, "text": " \\mathbf{x}_i " }, { "math_id": 3, "text": " \\mathbf{m} " } ]
https://en.wikipedia.org/wiki?curid=6025658
60257417
Optimal labor income taxation
Subarea of optimal tax theory Optimal labour income tax is a sub-area of optimal tax theory concerned with designing a tax on individual labour income such that a given economic criterion, such as social welfare, is optimized. Efficiency-equity tradeoff. The modern literature on optimal labour income taxation largely follows from James Mirrlees' "An Exploration in the Theory of Optimum Income Taxation". The approach is based on asymmetric information, as the government is assumed to be unable to observe the number of hours people work or how productive they are, but can observe individuals' incomes. This imposes incentive compatibility constraints that limit the taxes which the government is able to levy, and prevents it from taxing high-productivity people at higher rates than low-productivity people. The government seeks to maximise a utilitarian social welfare function subject to these constraints. It faces a tradeoff between efficiency and equity: higher marginal tax rates fund redistribution towards those with a higher marginal utility of income, but they also weaken the incentive to work and therefore shrink the tax base. Mechanical, behavioral and welfare effects. Emmanuel Saez, in his article "Using Elasticities to Derive Optimal Income Tax Rates", derives a formula for the optimal level of income tax using both the compensated and uncompensated elasticities. Saez writes that the tradeoff between equity and efficiency is a central consideration of optimal taxation, and that implementing a progressive tax allows the government to reallocate its resources to where they are needed most. However, this deters those on higher incomes from working at their optimal level. Saez decomposes the marginal effects of a small tax change into mechanical, behavioural and welfare effects: the mechanical effect is the extra revenue that would be raised if taxpayers did not change their behaviour; the behavioural effect is the revenue lost because taxpayers reduce their taxable income in response to the higher rate; and the welfare effect is the welfare loss of those who pay more tax, valued by their social welfare weight. The sum of these effects should be zero at the optimum. Stipulating this condition results in the following formula for the optimal top tax rate, if incomes are Pareto distributed: formula_0 where formula_1 is the optimal top marginal tax rate, formula_2 is the social marginal welfare weight placed on top earners (formula_3 gives the revenue-maximising rate), formula_4 is the uncompensated elasticity of income with respect to the net-of-tax rate, formula_5 is the corresponding compensated elasticity, and formula_6 is the Pareto parameter of the income distribution. Empirical estimation of the parameters of this equation suggests that the revenue-maximising top tax rate is between approximately 50% and 80%, although this estimate neglects long-run behavioural responses, which would imply higher elasticities and a lower optimal tax rate. Saez's analysis can also be generalised to tax rates other than the top rate. Arithmetic vs. economic effects. In the late 1970s, Arthur Laffer developed the Laffer curve, which demonstrates that there are two effects of changing tax rates: an arithmetic effect, whereby a higher rate mechanically raises more revenue from a given tax base, and an economic effect, whereby a higher rate discourages the taxed activity and so shrinks the base. These correspond to the mechanical and behavioural effects discussed by Saez. The Laffer curve illustrates that, for sufficiently high tax levels, the (negative) behavioural effect will outweigh the (positive) mechanical effect of a tax increase, and so increasing tax rates will reduce tax revenue. In fact, tax revenue with a tax rate of 100% is likely to be 0, since there is no remaining incentive to work at all. Therefore, the tax rate that maximises revenue collected will typically be below 100%; as estimated by Saez, the revenue-maximising top rate is between 50% and 80%. Family and gender effects. Since only economic actors who enter the labour market incur an income tax liability on their wages, people who are able to consume leisure or engage in household production outside the market, for example by providing household services themselves instead of hiring a maid, are taxed more lightly. With the "married filing jointly" tax unit in U.S. income tax law, the second earner's income is added to the first wage earner's taxable income and is thus taxed at the couple's highest marginal rate.
This type of tax creates a large distortion, discouraging women from participating in the labour force during the years when the couple has the greatest child care needs. Optimal linear income tax. Eytan Sheshinski has studied a simplified income-tax model, in which the tax is a linear function of the income: formula_7, where "y" is the income, "t(y)" is the tax paid by an individual with an income of "y", 1-"b" is the tax rate, and "a" is a lump-sum grant (so that -"a" is the lump-sum component of the tax). The goal is to find the values of "a" and "b" such that the social welfare (the sum of individual utilities) is maximized. In his model, all agents have the same utility function, which depends on consumption and labour: formula_8. The consumption "c" is determined by the after-tax income: formula_9. The before-tax income "y" is determined by the amount of labour "l" and an innate ability factor "n", where the relation is assumed to be linear too: formula_10. Each individual decides on the amount of labour "l" which maximizes his utility: formula_11. These decisions define the labour supply as a function of the tax parameters "a" and "b". Under certain natural assumptions, it is proved that the optimal linear tax has "a" > 0, i.e., it provides a positive lump-sum payment to individuals with zero income. This coincides with the idea of negative income tax. Additionally, the optimal tax rate is bounded above by a fraction that decreases with the minimum elasticity of the labour supply. Developments. The theory of optimal labour income taxation started with a simple model of optimal linear taxation. It then developed to consider optimal nonlinear income taxation. It has since considered various extensions of the standard model: tax avoidance, income shifting, international migration, rent-seeking, relative income concerns, couples and children, and non-cash transfers.
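As a small numerical illustration of the Saez top-rate expression formula_0 quoted earlier in the article, the Python sketch below evaluates it for one set of parameter values. The values are illustrative assumptions only, chosen to fall in the commonly discussed range, and are not estimates taken from the source.

def optimal_top_rate(g_bar, zeta_u, zeta_c, alpha):
    """Saez top-rate expression: tau = (1 - g) / (1 - g + zeta_u + zeta_c * (alpha - 1)).
    g_bar: social welfare weight on top earners; zeta_u, zeta_c: uncompensated and
    compensated elasticities; alpha: Pareto parameter of the income distribution."""
    return (1 - g_bar) / (1 - g_bar + zeta_u + zeta_c * (alpha - 1))

# Revenue-maximising case (g_bar = 0) with assumed elasticities and Pareto parameter.
print(optimal_top_rate(g_bar=0.0, zeta_u=0.2, zeta_c=0.3, alpha=2.0))  # 1/1.5, about 0.67
# A positive welfare weight on top earners lowers the optimal rate.
print(optimal_top_rate(g_bar=0.5, zeta_u=0.2, zeta_c=0.3, alpha=2.0))  # 0.5/1.0 = 0.5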
[ { "math_id": 0, "text": "\\tau = \\frac{1 - \\bar{g}}{1-\\bar{g}+\\bar{\\zeta}^u +\\bar{\\zeta}^c(\\alpha -1)}" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\bar{g}" }, { "math_id": 3, "text": "\\bar{g}=0" }, { "math_id": 4, "text": "\\bar{\\zeta}^u" }, { "math_id": 5, "text": "\\bar{\\zeta}^c" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "t(y) = -a + (1-b) y" }, { "math_id": 8, "text": "u = u(c,l)" }, { "math_id": 9, "text": "c(y) = y-t(y) = a + b y" }, { "math_id": 10, "text": "y = y(n,l) = n\\cdot l" }, { "math_id": 11, "text": "u(a+b n l , l)" } ]
https://en.wikipedia.org/wiki?curid=60257417