657958
Cooperative game theory
Game where groups of players may enforce cooperative behaviour. In game theory, a cooperative game (or coalitional game) is a game with groups of players who form binding “coalitions” with external enforcement of cooperative behavior (e.g. through contract law). This is different from non-cooperative games in which there is either no possibility to forge alliances or all agreements need to be self-enforcing (e.g. through credible threats). Cooperative games are analysed by focusing on the coalitions that can be formed, the joint actions that groups can take, and the resulting collective payoffs. Mathematical definition. A cooperative game is given by specifying a value for every coalition. Formally, the coalitional game consists of a finite set of players formula_0, called the "grand coalition", and a "characteristic function" formula_1 from the set of all possible coalitions of players to a set of payments that satisfies formula_2. The function describes how much collective payoff a set of players can gain by forming a coalition. Cooperative game theory definition. Cooperative game theory is a branch of game theory that deals with the study of games where players can form coalitions, cooperate with one another, and make binding agreements. The theory offers mathematical methods for analysing scenarios in which two or more players are required to make choices that will affect other players' wellbeing. Common interests: In cooperative games, players share a common interest in achieving a specific goal or outcome. The players must identify and agree on a common interest to establish the foundation and reasoning for cooperation. Once the players have a clear understanding of their shared interest, they can work together to achieve it. Necessary information exchange: Cooperation requires communication and information exchange among the players. Players must share information about their preferences, resources, and constraints to identify opportunities for mutual gain. By sharing information, players can better understand each other's goals and work towards achieving them together. Voluntariness, equality, and mutual benefit: In cooperative games, players voluntarily come together to form coalitions and make agreements. The players must be equal partners in the coalition, and any agreements must be mutually beneficial. Cooperation is only sustainable if all parties feel they are receiving a fair share of the benefits. Compulsory contract: In cooperative games, agreements between players are binding and mandatory. Once the players have agreed to a particular course of action, they have an obligation to follow through. The players must trust each other to keep their commitments, and there must be mechanisms in place to enforce the agreements. By making agreements binding and mandatory, players can ensure that they will achieve their shared goal. Subgames. Let formula_3 be a non-empty coalition of players. The "subgame" formula_4 on formula_5 is naturally defined as formula_6 In other words, we simply restrict our attention to coalitions contained in formula_5. Subgames are useful because they allow us to apply solution concepts defined for the grand coalition on smaller coalitions. Properties for characterization. Superadditivity. Characteristic functions are often assumed to be superadditive. This means that the value of a union of disjoint coalitions is no less than the sum of the coalitions' separate values: formula_7 whenever formula_8 satisfy formula_9.
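To make these definitions concrete, a characteristic function on a small player set can be represented as a map from coalitions (sets of players) to values, and superadditivity can be checked by enumerating disjoint pairs of coalitions. The sketch below is purely illustrative; the three-player game and its values are invented for the example and are not taken from the text above.

from itertools import combinations

def powerset(players):
    """All subsets of the player set, as frozensets."""
    return [frozenset(c) for r in range(len(players) + 1)
            for c in combinations(players, r)]

# A toy 3-player characteristic function v : 2^N -> R with v(empty set) = 0.
N = {1, 2, 3}
v = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
     frozenset(N): 6}

def is_superadditive(v, players):
    """Check v(S union T) >= v(S) + v(T) for all disjoint coalitions S, T."""
    subsets = powerset(players)
    return all(v[S | T] >= v[S] + v[T]
               for S in subsets for T in subsets if not (S & T))

print(is_superadditive(v, N))  # True for this toy game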
Monotonicity. Larger coalitions gain more: formula_10. This follows from superadditivity when all coalition values are non-negative, e.g. if payoffs are normalized so that singleton coalitions have zero value. Properties for simple games. A coalitional game v is considered simple if payoffs are either 1 or 0, i.e. coalitions are either "winning" or "losing". Equivalently, a simple game can be defined as a collection W of coalitions, where the members of W are called winning coalitions, and the others losing coalitions. It is sometimes assumed that a simple game is nonempty or that it does not contain the empty set. However, in other areas of mathematics, simple games are also called hypergraphs or Boolean functions (logic functions). A few relations among the conventional axioms for simple games (monotonicity, properness, strongness, and non-weakness) have been widely recognized (e.g., Peleg, 2002, Section 2.1). More generally, a complete investigation of the relation among these four axioms, finiteness, and algorithmic computability has been made (Kumabe and Mihara, 2011), whose results are summarized in the Table "Existence of Simple Games" below. The restrictions that various axioms for simple games impose on their Nakamura number were also studied extensively. In particular, a computable simple game without a veto player has a Nakamura number greater than 3 only if it is a "proper" and "non-strong" game. Relation with non-cooperative theory. Let "G" be a strategic (non-cooperative) game. Then, assuming that coalitions have the ability to enforce coordinated behaviour, there are several cooperative games associated with "G". These games are often referred to as "representations of G". The two standard representations are the "α-effective" game, in which each coalition is assigned the value it can guarantee for itself regardless of the actions of the remaining players, and the "β-effective" game, in which it is assigned the value it cannot be prevented from achieving. Solution concepts. The main assumption in cooperative game theory is that the grand coalition formula_0 will form. The challenge is then to allocate the payoff formula_21 among the players in some way. (This assumption is not restrictive, because even if players split off and form smaller coalitions, we can apply solution concepts to the subgames defined by whatever coalitions actually form.) A "solution concept" is a vector formula_22 (or a set of vectors) that represents the allocation to each player. Researchers have proposed different solution concepts based on different notions of fairness. Some properties commonly required of a solution concept are efficiency and individual rationality: an efficient payoff vector is called a "pre-imputation", and an individually rational pre-imputation is called an imputation. Most solution concepts are imputations. The stable set. The stable set of a game (also known as the "von Neumann-Morgenstern solution") was the first solution proposed for games with more than 2 players. Let formula_25 be a game and let formula_32, formula_40 be two imputations of formula_25. Then formula_32 "dominates" formula_40 if some coalition formula_41 satisfies formula_42 and formula_43. In other words, players in formula_5 prefer the payoffs from formula_32 to those from formula_40, and they can threaten to leave the grand coalition if formula_40 is used because the payoff they obtain on their own is at least as large as the allocation they receive under formula_32. A "stable set" is a set of imputations that satisfies two properties: "internal stability" (no imputation in the set is dominated by another imputation in the set) and "external stability" (every imputation outside the set is dominated by some imputation in the set). Von Neumann and Morgenstern saw the stable set as the collection of acceptable behaviours in a society: None is clearly preferred to any other, but for each unacceptable behaviour there is a preferred alternative.
The definition is very general, allowing the concept to be used in a wide variety of game formats. The core. Let formula_25 be a game. The "core" of formula_25 is the set of payoff vectors formula_46 In words, the core is the set of imputations under which no coalition has a value greater than the sum of its members' payoffs. Therefore, no coalition has incentive to leave the grand coalition and receive a larger payoff. The core of a simple game with respect to preferences. For simple games, there is another notion of the core, when each player is assumed to have preferences on a set formula_47 of alternatives. A "profile" is a list formula_48 of individual preferences formula_49 on formula_47. Here formula_50 means that individual formula_51 prefers alternative formula_52 to formula_53 at profile formula_54. Given a simple game formula_28 and a profile formula_54, a "dominance" relation formula_55 is defined on formula_47 by formula_56 if and only if there is a winning coalition formula_57 (i.e., formula_58) satisfying formula_50 for all formula_59. The "core" formula_60 of the simple game formula_28 with respect to the profile formula_54 of preferences is the set of alternatives undominated by formula_55 (the set of maximal elements of formula_47 with respect to formula_55): formula_61 if and only if there is no formula_62 such that formula_63. The "Nakamura number" of a simple game is the minimal number of winning coalitions with empty intersection. "Nakamura's theorem" states that the core formula_60 is nonempty for all profiles formula_54 of "acyclic" (alternatively, "transitive") preferences if and only if formula_47 is finite "and" the cardinal number (the number of elements) of formula_47 is less than the Nakamura number of formula_28. A variant by Kumabe and Mihara states that the core formula_60 is nonempty for all profiles formula_54 of preferences that have a "maximal element" if and only if the cardinal number of formula_47 is less than the Nakamura number of formula_28. (See Nakamura number for details.) The strong epsilon-core. Because the core may be empty, a generalization was introduced by Shapley and Shubik (1966). The "strong formula_64-core" for some number formula_65 is the set of payoff vectors formula_66 In economic terms, the strong formula_64-core is the set of pre-imputations where no coalition can improve its payoff by leaving the grand coalition, if it must pay a penalty of formula_64 for leaving. formula_64 may be negative, in which case it represents a bonus for leaving the grand coalition. Clearly, regardless of whether the core is empty, the strong formula_64-core will be non-empty for a large enough value of formula_64 and empty for a small enough (possibly negative) value of formula_64. Following this line of reasoning, the "least-core", introduced by Maschler, Peleg and Shapley (1979), is the intersection of all non-empty strong formula_64-cores. It can also be viewed as the strong formula_64-core for the smallest value of formula_64 that makes the set non-empty. The Shapley value. The "Shapley value" is the unique payoff vector that is efficient, symmetric, and satisfies monotonicity. It was introduced by Lloyd Shapley, who showed that it is the unique payoff vector that is efficient, symmetric, additive, and assigns zero payoffs to dummy players. The Shapley value of a superadditive game is individually rational, but this is not true in general. The kernel. Let formula_1 be a game, and let formula_22 be an efficient payoff vector.
The "maximum surplus" of player "i" over player "j" with respect to "x" is formula_67 the maximal amount player "i" can gain without the cooperation of player "j" by withdrawing from the grand coalition "N" under payoff vector "x", assuming that the other players in "i"'s withdrawing coalition are satisfied with their payoffs under "x". The maximum surplus is a way to measure one player's bargaining power over another. The "kernel" of formula_28 is the set of imputations "x" that satisfy for every pair of players "i" and "j". Intuitively, player "i" has more bargaining power than player "j" with respect to imputation "x" if formula_70, but player "j" is immune to player "i"'s threats if formula_71, because he can obtain this payoff on his own. The kernel contains all imputations where no player has this bargaining power over another. This solution concept was first introduced in . Harsanyi dividend. The "Harsanyi dividend" (named after John Harsanyi, who used it to generalize the Shapley value in 1963) identifies the surplus that is created by a coalition of players in a cooperative game. To specify this surplus, the worth of this coalition is corrected by the surplus that is already created by subcoalitions. To this end, the dividend formula_72 of coalition formula_57 in game formula_28 is recursively determined by formula_73 An explicit formula for the dividend is given by formula_74. The function formula_75 is also known as the Möbius inverse of formula_76. Indeed, we can recover formula_28 from formula_77 by help of the formula formula_78. Harsanyi dividends are useful for analyzing both games and solution concepts, e.g. the Shapley value is obtained by distributing the dividend of each coalition among its members, i.e., the Shapley value formula_79 of player formula_51 in game formula_28 is given by summing up a player's share of the dividends of all coalitions that she belongs to, formula_80. The nucleolus. Let formula_1 be a game, and let formula_22 be a payoff vector. The "excess" of formula_32 for a coalition formula_81 is the quantity formula_82; that is, the gain that players in coalition formula_5 can obtain if they withdraw from the grand coalition formula_0 under payoff formula_32 and instead take the payoff formula_83. The "nucleolus" of formula_25 is the imputation for which the vector of excesses of all coalitions (a vector in formula_84) is smallest in the leximin order. The nucleolus was introduced in . gave a more intuitive description: Starting with the least-core, record the coalitions for which the right-hand side of the inequality in the definition of formula_85 cannot be further reduced without making the set empty. Continue decreasing the right-hand side for the remaining coalitions, until it cannot be reduced without making the set empty. Record the new set of coalitions for which the inequalities hold at equality; continue decreasing the right-hand side of remaining coalitions and repeat this process as many times as necessary until all coalitions have been recorded. The resulting payoff vector is the nucleolus. <templatestyles src="Template:Visible anchor/styles.css" />Convex cooperative games. Introduced by Shapley in , convex cooperative games capture the intuitive property some games have of "snowballing". 
Specifically, a game is "convex" if its characteristic function formula_25 is supermodular: formula_86 It can be shown that the supermodularity of formula_25 is equivalent to formula_87 that is, "the incentives for joining a coalition increase as the coalition grows", leading to the aforementioned snowball effect. For cost games, the inequalities are reversed, so that we say the cost game is "convex" if the characteristic function is submodular. Properties. Convex cooperative games have many nice properties: they are superadditive, their core is non-empty, the marginal payoff vectors corresponding to player orderings are exactly the extreme points of the core, the Shapley value (their average) therefore lies in the core, and the core is the unique von Neumann-Morgenstern stable set. Similarities and differences with combinatorial optimization. Submodular and supermodular set functions are also studied in combinatorial optimization. Many of the results on convex games have analogues in combinatorial optimization, where submodular functions were first presented as generalizations of matroids. In this context, the core of a convex cost game is called the "base polyhedron", because its elements generalize base properties of matroids. However, the optimization community generally considers submodular functions to be the discrete analogues of convex functions, because the minimization of both types of functions is computationally tractable. Unfortunately, this conflicts directly with Shapley's original definition of supermodular functions as "convex". The relationship between cooperative game theory and the firm. Corporate strategic decisions can develop and create value through cooperative game theory. This means that cooperative game theory can become the strategic theory of the firm, and different CGT solutions can simulate different institutions.
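For small games, the solution concepts above can be computed by brute-force enumeration. The sketch below uses an invented three-player game, computes the Shapley value via the standard permutation (marginal-contribution) formula, which is not stated in the text above, and checks whether the resulting allocation lies in the core.

from itertools import permutations, combinations

# Toy 3-player game (hypothetical values, v(empty set) = 0).
N = (1, 2, 3)
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
     frozenset(N): 9}

def shapley_value(v, players):
    """Average marginal contribution of each player over all player orderings."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in players}

def in_core(v, players, x):
    """Efficiency plus: no coalition S has v(S) greater than the sum of x_i over S."""
    subsets = [frozenset(c) for r in range(len(players) + 1)
               for c in combinations(players, r)]
    efficient = abs(sum(x.values()) - v[frozenset(players)]) < 1e-9
    return efficient and all(sum(x[i] for i in S) >= v[S] - 1e-9 for S in subsets)

phi = shapley_value(v, N)
print(phi)                 # {1: 3.0, 2: 3.0, 3: 3.0} by symmetry
print(in_core(v, N, phi))  # True: the allocation lies in the core of this toy game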
[ { "math_id": 0, "text": " N " }, { "math_id": 1, "text": " v : 2^N \\to \\mathbb{R} " }, { "math_id": 2, "text": " v( \\emptyset ) = 0 " }, { "math_id": 3, "text": " S \\subsetneq N " }, { "math_id": 4, "text": " v_S : 2^S \\to \\mathbb{R} " }, { "math_id": 5, "text": " S " }, { "math_id": 6, "text": " v_S(T) = v(T), \\forall~ T \\subseteq S." }, { "math_id": 7, "text": " v ( S \\cup T ) \\geq v (S) + v (T) " }, { "math_id": 8, "text": " S, T \\subseteq N " }, { "math_id": 9, "text": " S \\cap T = \\emptyset " }, { "math_id": 10, "text": " S \\subseteq T \\Rightarrow v (S) \\le v (T) " }, { "math_id": 11, "text": "S \\in W" }, { "math_id": 12, "text": "S\\subseteq T" }, { "math_id": 13, "text": "T \\in W" }, { "math_id": 14, "text": "N\\setminus S \\notin W" }, { "math_id": 15, "text": "S \\notin W" }, { "math_id": 16, "text": "N\\setminus S \\in W" }, { "math_id": 17, "text": "v(S) = 1 - v(N \\setminus S)" }, { "math_id": 18, "text": "\\bigcap W := \\bigcap_{S\\in W} S" }, { "math_id": 19, "text": "T \\subseteq N" }, { "math_id": 20, "text": "S\\cap T \\in W" }, { "math_id": 21, "text": " v(N) " }, { "math_id": 22, "text": " x \\in \\mathbb{R}^N " }, { "math_id": 23, "text": " \\sum_{ i \\in N } x_i = v(N) " }, { "math_id": 24, "text": " x_i \\geq v(\\{i\\}), \\forall~ i \\in N " }, { "math_id": 25, "text": " v " }, { "math_id": 26, "text": " v( S \\cup \\{ i \\} ) = w( S \\cup \\{ i \\} ), \\forall~ S \\subseteq N \\setminus \\{ i \\} " }, { "math_id": 27, "text": " x_i " }, { "math_id": 28, "text": "v" }, { "math_id": 29, "text": "w" }, { "math_id": 30, "text": " v( S \\cup \\{ i \\} ) \\leq w( S \\cup \\{ i \\} ), \\forall~ S \\subseteq N \\setminus \\{ i \\} " }, { "math_id": 31, "text": " |N| " }, { "math_id": 32, "text": " x " }, { "math_id": 33, "text": " x_i = x_j " }, { "math_id": 34, "text": " i " }, { "math_id": 35, "text": " j " }, { "math_id": 36, "text": " v( S \\cup \\{ i \\} ) = v( S \\cup \\{ j \\} ), \\forall~ S \\subseteq N \\setminus \\{ i, j \\} " }, { "math_id": 37, "text": " \\omega " }, { "math_id": 38, "text": " ( v + \\omega ) " }, { "math_id": 39, "text": " v( S \\cup \\{ i \\} ) = v( S ), \\forall~ S \\subseteq N \\setminus \\{ i \\} " }, { "math_id": 40, "text": " y " }, { "math_id": 41, "text": " S \\neq \\emptyset " }, { "math_id": 42, "text": " x_i > y _i, \\forall~ i \\in S " }, { "math_id": 43, "text": " \\sum_{ i \\in S } x_i \\leq v(S) " }, { "math_id": 44, "text": "n-2" }, { "math_id": 45, "text": "n-3" }, { "math_id": 46, "text": " C( v ) = \\left\\{ x \\in \\mathbb{R}^N: \\sum_{ i \\in N } x_i = v(N); \\quad \\sum_{ i \\in S } x_i \\geq v(S), \\forall~ S \\subseteq N \\right\\}." 
}, { "math_id": 47, "text": "X" }, { "math_id": 48, "text": "p=(\\succ_i^p)_{i \\in N}" }, { "math_id": 49, "text": "\\succ_i^p" }, { "math_id": 50, "text": "x \\succ_i^p y" }, { "math_id": 51, "text": "i" }, { "math_id": 52, "text": "x" }, { "math_id": 53, "text": "y" }, { "math_id": 54, "text": "p" }, { "math_id": 55, "text": "\\succ^p_v" }, { "math_id": 56, "text": "x \\succ^p_v y" }, { "math_id": 57, "text": "S" }, { "math_id": 58, "text": "v(S)=1" }, { "math_id": 59, "text": "i \\in S" }, { "math_id": 60, "text": "C(v,p)" }, { "math_id": 61, "text": "x \\in C(v,p)" }, { "math_id": 62, "text": "y\\in X" }, { "math_id": 63, "text": "y \\succ^p_v x" }, { "math_id": 64, "text": " \\varepsilon " }, { "math_id": 65, "text": " \\varepsilon \\in \\mathbb{R} " }, { "math_id": 66, "text": " C_\\varepsilon( v ) = \\left\\{ x \\in \\mathbb{R}^N: \\sum_{ i \\in N } x_i = v(N); \\quad \\sum_{ i \\in S } x_i \\geq v(S) - \\varepsilon, \\forall~ S \\subseteq N \\right\\}. " }, { "math_id": 67, "text": " s_{ij}^v(x) = \\max \\left\\{ v(S) - \\sum_{ k \\in S } x_k : S \\subseteq N \\setminus \\{ j \\}, i \\in S \\right\\}, " }, { "math_id": 68, "text": " ( s_{ij}^v(x) - s_{ji}^v(x) ) \\times ( x_j - v(j) ) \\leq 0 " }, { "math_id": 69, "text": " ( s_{ji}^v(x) - s_{ij}^v(x) ) \\times ( x_i - v(i) ) \\leq 0 " }, { "math_id": 70, "text": "s_{ij}^v(x) > s_{ji}^v(x)" }, { "math_id": 71, "text": " x_j = v(j) " }, { "math_id": 72, "text": "d_v(S)" }, { "math_id": 73, "text": "\\begin{align}\nd_v(\\{i\\})&= v(\\{i\\}) \\\\\nd_v(\\{i,j\\})&= v(\\{i,j\\})-d_v(\\{i\\})-d_v(\\{j\\}) \\\\\nd_v(\\{i,j,k\\})&= v(\\{i,j,k\\})-d_v(\\{i,j\\})-d_v(\\{i,k\\})-d_v(\\{j,k\\})-d_v(\\{i\\})-d_v(\\{j\\})-d_v(\\{k\\})\\\\\n&\\vdots \\\\\nd_v(S) &= v(S) - \\sum_{T\\subsetneq S }d_v(T)\n\\end{align}" }, { "math_id": 74, "text": "d_v(S)=\\sum_{T\\subseteq S }(-1)^{|S\\setminus T|}v(T)" }, { "math_id": 75, "text": " d_v:2^N \\to \\mathbb{R}" }, { "math_id": 76, "text": " v:2^N \\to \\mathbb{R}" }, { "math_id": 77, "text": "d_v" }, { "math_id": 78, "text": "v(S) = d_v(S) + \\sum_{T\\subsetneq S }d_v(T)" }, { "math_id": 79, "text": "\\phi_i(v)" }, { "math_id": 80, "text": "\\phi_i(v)=\\sum_{S\\subset N: i \\in S }{d_v(S)}/{|S|}" }, { "math_id": 81, "text": " S \\subseteq N " }, { "math_id": 82, "text": " v(S) - \\sum_{ i \\in S } x_i " }, { "math_id": 83, "text": " v(S) " }, { "math_id": 84, "text": " \\mathbb{R}^{2^N} " }, { "math_id": 85, "text": " C_\\varepsilon( v ) " }, { "math_id": 86, "text": " v( S \\cup T ) + v( S \\cap T ) \\geq v(S) + v(T), \\forall~ S, T \\subseteq N." }, { "math_id": 87, "text": " v( S \\cup \\{ i \\} ) - v(S) \\leq v( T \\cup \\{ i \\} ) - v(T), \\forall~ S \\subseteq T \\subseteq N \\setminus \\{ i \\}, \\forall~ i \\in N;" }, { "math_id": 88, "text": " \\pi: N \\to N " }, { "math_id": 89, "text": " S_i = \\{ j \\in N: \\pi(j) \\leq i \\} " }, { "math_id": 90, "text": " 1 " }, { "math_id": 91, "text": " \\pi " }, { "math_id": 92, "text": " i = 0, \\ldots, n " }, { "math_id": 93, "text": " S_0 = \\emptyset " }, { "math_id": 94, "text": " x_i = v( S_{\\pi(i)} ) - v( S_{\\pi(i) - 1} ), \\forall~ i \\in N " } ]
https://en.wikipedia.org/wiki?curid=657958
65801270
Quantile-parameterized distribution
A quantile-parameterized distribution (QPD) is a probability distribution that is directly parameterized by data. QPDs were created to meet the need for easy-to-use continuous probability distributions flexible enough to represent a wide range of uncertainties, such as those commonly encountered in business, economics, engineering, and science. Because QPDs are directly parameterized by data, they have the practical advantage of avoiding the intermediate step of parameter estimation, a time-consuming process that typically requires non-linear iterative methods to estimate probability-distribution parameters from data. Some QPDs have virtually unlimited shape flexibility and closed-form moments as well. History. The development of quantile-parameterized distributions was inspired by the practical need for flexible continuous probability distributions that are easy to fit to data. Historically, the Pearson and Johnson families of distributions have been used when shape flexibility is needed. That is because both families can match the first four moments (mean, variance, skewness, and kurtosis) of any data set. In many cases, however, these distributions are either difficult to fit to data or not flexible enough to fit the data appropriately. For example, the beta distribution is a flexible Pearson distribution that is frequently used to model percentages of a population. However, if the characteristics of this population are such that the desired cumulative distribution function (CDF) should run through certain specific CDF points, there may be no beta distribution that meets this need. Because the beta distribution has only two shape parameters, it cannot, in general, match even three specified CDF points. Moreover, the beta parameters that best fit such data can be found only by nonlinear iterative methods. Practitioners of decision analysis, needing distributions easily parameterized by three or more CDF points (e.g., because such points were specified as the result of an expert-elicitation process), originally invented quantile-parameterized distributions for this purpose. Keelin and Powley (2011) provided the original definition. Subsequently, Keelin (2016) developed the metalog distributions, a family of quantile-parameterized distributions that has virtually unlimited shape flexibility, simple equations, and closed-form moments. Definition. Keelin and Powley define a quantile-parameterized distribution as one whose quantile function (inverse CDF) can be written in the form formula_0 where formula_1 and the functions formula_2 are continuously differentiable and linearly independent basis functions. Here, essentially, formula_3 and formula_4 are the lower and upper bounds (if they exist) of a random variable with quantile function formula_5. These distributions are called quantile-parameterized because for a given set of quantile pairs formula_6, where formula_7, and a set of formula_8 basis functions formula_2, the coefficients formula_9 can be determined by solving a set of linear equations. If one desires to use more quantile pairs than basis functions, then the coefficients formula_9 can be chosen to minimize the sum of squared errors between the stated quantiles formula_10 and formula_11.
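As a concrete illustration of the definition above, the coefficients of a QPD with a given set of basis functions can be obtained from quantile/probability pairs by solving a linear system, or by linear least squares when there are more pairs than basis functions. The sketch below uses the four basis functions of the Simple Q-Normal distribution described in the next paragraph; the quantile data are invented for the example, and feasibility (a strictly positive density) would still need to be checked in practice.

import numpy as np
from scipy.stats import norm

# Simple Q-Normal basis functions g_i(y):
# F^{-1}(y) = a1 + a2*z + a3*y*z + a4*y, where z = inverse normal CDF of y.
def basis(y):
    z = norm.ppf(y)
    return np.column_stack([np.ones_like(y), z, y * z, y])

# Hypothetical quantile/probability pairs (y_i, x_i) characterizing the CDF.
y = np.array([0.10, 0.50, 0.90, 0.99])
x = np.array([10.0, 14.0, 25.0, 40.0])

# With as many pairs as basis functions this solves exactly;
# with more pairs, lstsq gives the least-squares fit described above.
a, *_ = np.linalg.lstsq(basis(y), x, rcond=None)

# The fitted quantile function reproduces the input quantiles.
# Feasibility (positive PDF for all y in (0,1)) should still be checked.
print(basis(y) @ a)   # approximately [10, 14, 25, 40]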
Keelin and Powley illustrate this concept for a specific choice of basis functions that is a generalization of the quantile function of the normal distribution, formula_12, for which the mean formula_13 and standard deviation formula_14 are linear functions of cumulative probability formula_15: formula_16 formula_17 The result is a four-parameter distribution that can be fit to a set of four quantile/probability pairs exactly, or to any number of such pairs by linear least squares. Keelin and Powley call this the Simple Q-Normal distribution. Some skewed and symmetric Simple Q-Normal PDFs are shown in the figures below. Properties. QPDs that meet Keelin and Powley’s definition have the following properties. Probability density function. Differentiating formula_18 with respect to formula_15 yields formula_19. The reciprocal of this quantity, formula_20, is the probability density function (PDF) formula_21 where formula_22. Note that this PDF is expressed as a function of cumulative probability formula_15 rather than formula_23. To plot it, as shown in the figures, vary formula_24 parametrically. Plot formula_25 on the horizontal axis and formula_26 on the vertical axis. Feasibility. A function of the form of formula_27 is a feasible probability distribution if and only if formula_28 for all formula_29. This implies a feasibility constraint on the set of coefficients formula_30: formula_31 for all formula_29. In practical applications, feasibility must generally be checked rather than assumed. Convexity. A QPD’s set of feasible coefficients formula_32 for all formula_33 is convex. Because convex optimization requires convex feasible sets, this property simplifies optimization applications involving QPDs. Fitting to data. The coefficients formula_34 can be determined from data by linear least squares. Given formula_35 data points formula_36 that are intended to characterize the CDF of a QPD, and an formula_37 matrix formula_38 whose elements consist of formula_39, then, so long as formula_40 is invertible, the column vector of coefficients formula_34 can be determined as formula_41, where formula_42 and formula_43 is the column vector of data values. If formula_44, this equation reduces to formula_45, where the resulting CDF runs through all data points exactly. An alternate method, implemented as a linear program, determines the coefficients by minimizing the sum of absolute distances between the CDF and the data subject to feasibility constraints. Shape flexibility. A QPD with formula_8 terms, where formula_46, has formula_47 shape parameters. Thus, QPDs can be far more flexible than the Pearson distributions, which have at most two shape parameters. For example, ten-term metalog distributions parameterized by 105 CDF points from 30 traditional source distributions (including normal, Student-t, lognormal, gamma, beta, and extreme value) have been shown to approximate each such source distribution within a K–S distance of 0.001 or less. Transformations. QPD transformations are governed by a general property of quantile functions: for any quantile function formula_48 and increasing function formula_49 is a quantile function. For example, the quantile function of the normal distribution, formula_12, is a QPD by the Keelin and Powley definition. The natural logarithm, formula_50, is an increasing function, so formula_51 is the quantile function of the lognormal distribution with lower bound formula_52. Importantly, this transformation converts an unbounded QPD into a semi-bounded QPD.
Similarly, applying this log transformation to the unbounded metalog distribution yields the semi-bounded (log) metalog distribution; likewise, applying the logit transformation, formula_53, yields the bounded (logit) metalog distribution with lower and upper bounds formula_52 and formula_54, respectively. Moreover, by considering formula_55 to be formula_27 distributed, where formula_27 is any QPD that meets Keelin and Powley’s definition, the transformed variable maintains the above properties of feasibility, convexity, and fitting to data. Such transformed QPDs have greater shape flexibility than the underlying formula_27, which has formula_47 shape parameters; the log transformation has formula_56 shape parameters, and the logit transformation has formula_8 shape parameters. Moreover, such transformed QPDs share the same set of feasible coefficients as the underlying untransformed QPD. Moments. The formula_57 moment of a QPD is: formula_58 Whether such moments exist in closed form depends on the choice of QPD basis functions formula_59. The unbounded metalog distribution and polynomial QPDs are examples of QPDs for which moments exist in closed form as functions of the coefficients formula_9. Simulation. Since the quantile function formula_60 is expressed in closed form, Keelin and Powley QPDs facilitate Monte Carlo simulation. Substituting in uniformly distributed random samples of formula_15 produces random samples of formula_23 in closed form, thereby eliminating the need to invert a CDF expressed as formula_61. Related distributions. The following probability distributions are QPDs according to Keelin and Powley’s definition: Like the SPT metalog distributions, the Johnson Quantile-Parameterized Distributions (JQPDs) are parameterized by three quantiles. JQPDs do not meet Keelin and Powley’s QPD definition, but rather have their own properties. JQPDs are feasible for all SPT parameter sets that are consistent with the rules of probability. Applications. The original applications of QPDs were by decision analysts wishing to conveniently convert expert-assessed quantiles (e.g., 10th, 50th, and 90th quantiles) into smooth continuous probability distributions. QPDs have also been used to fit output data from simulations in order to represent those outputs (both CDFs and PDFs) as closed-form continuous distributions. Used in this way, they are typically more stable and smoother than histograms. Similarly, since QPDs can impose fewer shape constraints than traditional distributions, they have been used to fit a wide range of empirical data in order to represent those data sets as continuous distributions (e.g., reflecting bimodality that may exist in the data in a straightforward manner). Quantile parameterization enables a closed-form QPD representation of known distributions whose CDFs otherwise have no closed-form expression. Keelin et al. (2019) apply this to the sum of independent identically distributed lognormal distributions, where quantiles of the sum can be determined by a large number of simulations. Nine such quantiles are used to parameterize a semi-bounded metalog distribution that runs through each of these nine quantiles exactly. QPDs have also been applied to assess the risks of asteroid impact, cybersecurity, biases in projections of oil-field production when compared to observed production after the fact, and future Canadian population projections based on combining the probabilistic views of multiple experts. 
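As noted in the Simulation paragraph above, sampling from a QPD reduces to evaluating its closed-form quantile function at uniformly distributed random numbers (inverse-transform sampling). A minimal sketch, using hypothetical Simple Q-Normal coefficients (in practice these would come from a fit such as the one sketched earlier):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simple_q_normal_quantile(y, a):
    """F^{-1}(y) for the Simple Q-Normal with coefficients a = (a1, a2, a3, a4)."""
    z = norm.ppf(y)
    return a[0] + a[1] * z + a[2] * y * z + a[3] * y

# Inverse-transform sampling: uniform y -> x = F^{-1}(y); no CDF inversion needed.
a = np.array([14.0, 4.0, 1.0, 5.0])   # hypothetical coefficients, chosen to be feasible
u = rng.uniform(size=10_000)
samples = simple_q_normal_quantile(u, a)
print(samples.mean(), np.quantile(samples, [0.1, 0.5, 0.9]))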
See metalog distributions and Keelin (2016) for additional applications of the metalog distribution.
[ { "math_id": 0, "text": "\nF^{-1} (y)= \\left\\{\n\\begin{array}{cl}\nL_0 & \\text{for } y=0\\\\\n\\sum_{i=1}^n a_i g_i(y) & \\text{for } 0<y<1 \\\\\nL_1 & \\mbox{for } y=1\n\\end{array}\\right.\n" }, { "math_id": 1, "text": "\n\\begin{array}{rcl}\nL_0 &=& \\lim_{y\\rarr 0^+} F^{-1}(y) \\\\\nL_1 &=& \\lim_{y\\rarr 1^-} F^{-1}(y)\n\\end{array}\n" }, { "math_id": 2, "text": "g_i(y)" }, { "math_id": 3, "text": "L_0" }, { "math_id": 4, "text": "L_1" }, { "math_id": 5, "text": "F^{-1}(y)" }, { "math_id": 6, "text": "\\{(x_i, y_i) \\mid i=1,\\ldots,n\\}" }, { "math_id": 7, "text": "x_i=F^{-1}(y_i)" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "a_i" }, { "math_id": 10, "text": "x_i" }, { "math_id": 11, "text": "F^{-1}(y_i)" }, { "math_id": 12, "text": "x=\\mu+\\sigma \\Phi^{-1} (y)" }, { "math_id": 13, "text": "\\mu" }, { "math_id": 14, "text": "\\sigma" }, { "math_id": 15, "text": "y" }, { "math_id": 16, "text": "\\mu(y)=a_1+a_4 y" }, { "math_id": 17, "text": "\\sigma(y)=a_2+a_3 y" }, { "math_id": 18, "text": "x=F^{-1} (y)=\\sum_{i=1}^n a_i g_i (y)" }, { "math_id": 19, "text": "dx/dy" }, { "math_id": 20, "text": "dy/dx" }, { "math_id": 21, "text": "f(y) = \\left( \\sum_{i=1}^n\n a_i {{d g_i(y)}\\over{dy}} \\right)^{-1}" }, { "math_id": 22, "text": "0<y<1" }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": "y\\in(0,1)" }, { "math_id": 25, "text": "x=F^{-1} (y)" }, { "math_id": 26, "text": "f(y)" }, { "math_id": 27, "text": "F^{-1} (y)" }, { "math_id": 28, "text": "f(y)>0" }, { "math_id": 29, "text": "y \\in (0,1)" }, { "math_id": 30, "text": "\\boldsymbol a=(a_1,\\ldots,a_n) \\in \\R^n" }, { "math_id": 31, "text": "\\sum_{i=1}^n a_i {{d g_i(y)}\\over{dy}} >0" }, { "math_id": 32, "text": "S_\\boldsymbol a=\\{\\boldsymbol a\\in\\R^n \\mid \\sum_{i=1}^n a_i d g_i (y)/dy > 0" }, { "math_id": 33, "text": "y\\in (0,1)\\}" }, { "math_id": 34, "text": "\\boldsymbol a" }, { "math_id": 35, "text": "m" }, { "math_id": 36, "text": "(x_i,y_i)" }, { "math_id": 37, "text": "m \\times n" }, { "math_id": 38, "text": "\\boldsymbol Y" }, { "math_id": 39, "text": "g_j (y_i)" }, { "math_id": 40, "text": "\\boldsymbol Y^T \\boldsymbol Y" }, { "math_id": 41, "text": "\\boldsymbol a=(\\boldsymbol Y^T \\boldsymbol Y)^{-1} \\boldsymbol Y^T \\boldsymbol x" }, { "math_id": 42, "text": "m\\geq n" }, { "math_id": 43, "text": "\\boldsymbol x=(x_1,\\ldots,x_m)" }, { "math_id": 44, "text": "m=n" }, { "math_id": 45, "text": "\\boldsymbol a=\\boldsymbol Y^{-1} \\boldsymbol x" }, { "math_id": 46, "text": "n\\ge 2" }, { "math_id": 47, "text": "n-2" }, { "math_id": 48, "text": "x=Q(y)" }, { "math_id": 49, "text": "t(x), x=t^{-1} (Q(y))" }, { "math_id": 50, "text": "t(x)=\\ln(x-b_l)" }, { "math_id": 51, "text": "x=b_l+e^{\\mu+\\sigma \\Phi^{-1} (y)}" }, { "math_id": 52, "text": "b_l" }, { "math_id": 53, "text": "t(x)=\\ln((x-b_l)/(b_u-x))" }, { "math_id": 54, "text": "b_u" }, { "math_id": 55, "text": "t(x)" }, { "math_id": 56, "text": "n-1" }, { "math_id": 57, "text": "k^{th}" }, { "math_id": 58, "text": "E[x^k] = \\int_0^1 \\left( \\sum_{i=1}^n a_i g_i(y) \\right)^k dy" }, { "math_id": 59, "text": "g_i (y)" }, { "math_id": 60, "text": "x=F^{-1}(y)" }, { "math_id": 61, "text": "y=F(x)" }, { "math_id": 62, "text": "x=\\mu - \\beta \\ln(-\\ln(y))" }, { "math_id": 63, "text": "x=x_0+\\gamma \\tan[\\pi(y-0.5)]" }, { "math_id": 64, "text": "x=\\mu+s \\ln(y/(1-y) )" }, { "math_id": 65, "text": "s" } ]
https://en.wikipedia.org/wiki?curid=65801270
65804425
Tom Brown (mathematician)
American-Canadian mathematician. Thomas Craig Brown (born 1938) is an American-Canadian mathematician, Ramsey theorist, and Professor Emeritus at Simon Fraser University. Collaborations. As a mathematician, Brown’s primary research focus is the field of Ramsey theory. His Ph.D. thesis was 'On Semigroups which are Unions of Periodic Groups'. In 1963, as a graduate student, he showed that if the positive integers are finitely colored, then some color class is "piece-wise syndetic". In "A Density Version of a Geometric Ramsey Theorem", he and Joe P. Buhler show that “for every formula_0 there is an formula_1 such that if formula_2 then any subset of formula_3 with more than formula_4 elements must contain 3 collinear points” where formula_3 is an formula_5-dimensional affine space over the field with formula_6 elements, and formula_7. In "Descriptions of the characteristic sequence of an irrational", Brown discusses the following idea: Let formula_8 be a positive irrational real number. The characteristic sequence of formula_8 is formula_9, where formula_10. From here he discusses “the various descriptions of the characteristic sequence of α which have appeared in the literature” and refines this description to “obtain a very simple derivation of an arithmetic expression for formula_11.” He then gives some conclusions regarding the conditions for formula_11 which are equivalent to formula_12. He has collaborated with Paul Erdős on papers including "Quasi-Progressions and Descending Waves" and "Quantitative Forms of a Theorem of Hilbert".
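For illustration, the characteristic sequence described above can be computed directly with floor arithmetic. The short sketch below assumes the standard definition, in which the n-th term is the difference of the integer parts of (n+1)α and nα, and uses α = (√5 − 1)/2 purely as an example value.

import math

def characteristic_sequence(alpha, length):
    """f_n = floor((n+1)*alpha) - floor(n*alpha), for n = 1, 2, ..., length."""
    return [math.floor((n + 1) * alpha) - math.floor(n * alpha)
            for n in range(1, length + 1)]

alpha = (math.sqrt(5) - 1) / 2   # example irrational, about 0.618
print(characteristic_sequence(alpha, 20))
# For 0 < alpha < 1 this is a 0/1 (Sturmian) sequence.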
[ { "math_id": 0, "text": "\\varepsilon > 0" }, { "math_id": 1, "text": "n(\\varepsilon)" }, { "math_id": 2, "text": "n = dim(V) \\geq n(\\varepsilon)" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "\\varepsilon|V|" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "p^d" }, { "math_id": 7, "text": "p \\neq 2" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "f(\\alpha) = f_1 f_2 \\ldots " }, { "math_id": 10, "text": "f_n = [ ( n+1 )\\alpha] [\\alpha] " }, { "math_id": 11, "text": "[n\\alpha]" }, { "math_id": 12, "text": "f_n = 1" } ]
https://en.wikipedia.org/wiki?curid=65804425
658068
Infinitesimal transformation
Limiting form of small transformation. In mathematics, an infinitesimal transformation is a limiting form of "small" transformation. For example, one may talk about an infinitesimal rotation of a rigid body in three-dimensional space. This is conventionally represented by a 3×3 skew-symmetric matrix "A". It is not the matrix of an actual rotation in space; but for small real values of a parameter ε the transformation formula_0 is a small rotation, up to quantities of order ε². History. A comprehensive theory of infinitesimal transformations was first given by Sophus Lie. This was at the heart of his work on what are now called Lie groups and their accompanying Lie algebras; and the identification of their role in geometry and especially the theory of differential equations. The properties of an abstract Lie algebra are exactly those definitive of infinitesimal transformations, just as the axioms of group theory embody symmetry. The term "Lie algebra" was introduced in 1934 by Hermann Weyl, for what had until then been known as the "algebra of infinitesimal transformations" of a Lie group. Examples. For example, in the case of infinitesimal rotations, the Lie algebra structure is that provided by the cross product, once a skew-symmetric matrix has been identified with a 3-vector. This amounts to choosing an axis vector for the rotations; the defining Jacobi identity is a well-known property of cross products. The earliest example of an infinitesimal transformation that may have been recognised as such was in Euler's theorem on homogeneous functions. Here it is stated that a function "F" of "n" variables "x"1, ..., "x""n" that is homogeneous of degree "r" satisfies formula_1 with formula_2 the Theta operator. That is, from the property formula_3 it is possible to differentiate with respect to λ and then set λ equal to 1. This then becomes a necessary condition on a smooth function "F" to have the homogeneity property; it is also sufficient (by using Schwartz distributions one can reduce the mathematical analysis considerations here). This setting is typical, in that there is a one-parameter group of scalings operating; and the information is coded in an infinitesimal transformation that is a first-order differential operator. Operator version of Taylor's theorem. The operator equation formula_4 where formula_5 is an operator version of Taylor's theorem — and is therefore only valid under "caveats" about "f" being an analytic function. Concentrating on the operator part, it shows that "D" is an infinitesimal transformation, generating translations of the real line via the exponential. In Lie's theory, this is generalised a long way. Any connected Lie group can be built up by means of its infinitesimal generators (a basis for the Lie algebra of the group); with explicit if not always useful information given in the Baker–Campbell–Hausdorff formula.
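A short numerical check of the opening example, using a skew-symmetric matrix as an infinitesimal rotation: the difference between I + εA and the exact rotation exp(εA) shrinks like ε². This sketch uses an arbitrarily chosen axis vector and is purely illustrative.

import numpy as np
from scipy.linalg import expm

# Skew-symmetric matrix identified with the axis vector (1, 2, 3).
A = np.array([[0.0, -3.0,  2.0],
              [3.0,  0.0, -1.0],
              [-2.0, 1.0,  0.0]])

for eps in (1e-1, 1e-2, 1e-3):
    T = np.eye(3) + eps * A            # infinitesimal transformation I + eps*A
    R = expm(eps * A)                  # exact rotation generated by A
    print(eps, np.linalg.norm(T - R))  # error decreases roughly like eps**2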
[ { "math_id": 0, "text": "T=I+\\varepsilon A" }, { "math_id": 1, "text": "\\Theta F=rF \\, " }, { "math_id": 2, "text": "\\Theta=\\sum_i x_i{\\partial\\over\\partial x_i}," }, { "math_id": 3, "text": "F(\\lambda x_1,\\dots, \\lambda x_n)=\\lambda^r F(x_1,\\dots,x_n)\\," }, { "math_id": 4, "text": "e^{tD}f(x)=f(x+t)\\," }, { "math_id": 5, "text": "D={d\\over dx}" } ]
https://en.wikipedia.org/wiki?curid=658068
65807245
Black-box obfuscation
Proposed cryptographic primitive. In cryptography, black-box obfuscation was a proposed cryptographic primitive which would allow a computer program to be obfuscated in such a way that it was impossible to determine anything about it except its input and output behavior. Black-box obfuscation has been proven to be impossible, even in principle. Impossibility. The unobfuscatable programs. Barak et al. constructed a family of unobfuscatable programs, for which an efficient attacker can always learn more from "any" obfuscated code than from black-box access. Broadly, they start by engineering a special pair of programs that cannot be obfuscated together. For some randomly selected strings formula_0 of a fixed, pre-determined length formula_1, define one program to be one that computes formula_2 and the other program to be one that computes formula_3 If an efficient attacker only has black-box access, Barak et al. argued, then the attacker only has an exponentially small chance of guessing the password formula_5, and so cannot distinguish the pair of programs from a pair where formula_6 is replaced by some program formula_7 that always outputs "0". However, if the attacker has access to any obfuscated implementations formula_8 of formula_9, then the attacker will find formula_10 with probability 1, whereas the attacker will always find formula_11 unless formula_12 (which should happen with negligible probability). This means that the attacker can always distinguish the pair formula_13 from the pair formula_14 with obfuscated code access, but not black-box access. Since "no" obfuscator can prevent this attack, Barak et al. conclude that no black-box obfuscator for pairs of programs exists. To conclude the argument, Barak et al. define a third program to implement the functionality of the two previous: formula_15 Since equivalently efficient implementations of formula_9 can be recovered from one of formula_16 by hardwiring the value of formula_17, Barak et al. conclude that formula_16 cannot be obfuscated either, which concludes their argument. Impossible variants of black-box obfuscation and other types of unobfuscatable programs. In their paper, Barak et al. also prove several related impossibility results (conditional on appropriate cryptographic assumptions). Weaker variants. In their original paper exploring black-box obfuscation, Barak et al. defined two weaker notions of cryptographic obfuscation which they did not rule out: indistinguishability obfuscation and extractability obfuscation (which they called "differing-inputs obfuscation"). Informally, an indistinguishability obfuscator should convert input programs with the same functionality into output programs such that the outputs cannot be efficiently related to the inputs by a bounded attacker, and an extractability obfuscator should be an obfuscator such that if the efficient attacker "could" relate the outputs to the inputs for any two programs, then the attacker could also produce an input such that the two programs being obfuscated produce different outputs. (Note that an extractability obfuscator is necessarily an indistinguishability obfuscator.) As of 2020, a candidate implementation of indistinguishability obfuscation is under investigation. In 2013, Boyle et al. explored several candidate implementations of extractability obfuscation.
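The distinguishing attack described above can be made concrete with a toy sketch: given code access, an attacker can simply run D on an implementation of C. The Python below is purely illustrative (real candidates operate on circuits, and the polynomial running-time check in D is omitted); the strings alpha and beta are hypothetical stand-ins for the random values in the construction.

import secrets

K = 16                              # pre-determined length k (in bytes here)
alpha = secrets.token_bytes(K)      # random "password"
beta = secrets.token_bytes(K)       # random "payload"

def C(x):
    """C_{alpha,beta}: returns beta on the secret input alpha, else 0."""
    return beta if x == alpha else 0

def D(program):
    """D_{alpha,beta}: checks whether the given program maps alpha to beta."""
    return 1 if program(alpha) == beta else 0

def Z(x):
    """A program that always outputs 0; it differs from C only on the input alpha."""
    return 0

# With access to an implementation of C (obfuscated or not), the attacker
# can feed it to D and distinguish (C, D) from (Z, D) with certainty:
print(D(C), D(Z))   # 1 0
# With only black-box access, querying C on inputs other than alpha always
# returns 0, so guessing alpha succeeds only with negligible probability.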
[ { "math_id": 0, "text": "\\alpha, \\beta" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "C_{\\alpha, \\beta}(x) := \\begin{cases}\n\\beta & \\text{if }x = \\alpha\\\\\n0 & \\text{otherwise}\n\\end{cases}" }, { "math_id": 3, "text": "D_{\\alpha, \\beta}(X) := \\begin{cases}\n1 & \\text{if }X(\\alpha) = \\beta\\text{ and }X\\text{ runs in time}\\leq\\text{poly}(k)\\\\\n0 & \\text{otherwise}\n\\end{cases}." }, { "math_id": 4, "text": "D_{\\alpha, \\beta}" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "C_{\\alpha, \\beta}" }, { "math_id": 7, "text": "Z" }, { "math_id": 8, "text": "C'_{\\alpha, \\beta}, D'_{\\alpha, \\beta}" }, { "math_id": 9, "text": "C_{\\alpha, \\beta}, D_{\\alpha, \\beta}" }, { "math_id": 10, "text": "D'_{\\alpha, \\beta}(C'_{\\alpha, \\beta}) = 1" }, { "math_id": 11, "text": "D'_{\\alpha, \\beta}(Z) = 0" }, { "math_id": 12, "text": "\\beta = 0" }, { "math_id": 13, "text": "(C'_{\\alpha, \\beta}, D'_{\\alpha, \\beta})" }, { "math_id": 14, "text": "(Z, D'_{\\alpha, \\beta})" }, { "math_id": 15, "text": "F_{\\alpha, \\beta}(b, x):= \\begin{cases}\nC_{\\alpha, \\beta}(x) &\\text{if } b = 0\\\\\nD_{\\alpha, \\beta}(x) &\\text{if } b = 1\\\\\n\\end{cases}." }, { "math_id": 16, "text": "F_{\\alpha, \\beta}" }, { "math_id": 17, "text": "b" } ]
https://en.wikipedia.org/wiki?curid=65807245
65811071
Hunter Lab
Color space defined by Richard S. Hunter. Hunter Lab (also known as Hunter L,a,b) is a color space defined in 1948 by Richard S. Hunter. It was designed to be computed via simple formulas from the CIEXYZ space, but to be more perceptually uniform. Hunter named his coordinates "L", "a" and "b". Hunter Lab was a precursor to CIELAB, created in 1976 by the International Commission on Illumination (CIE), which named the coordinates for CIELAB as "L*", "a*", "b*" to distinguish them from Hunter's coordinates. Formulation. "L" is a correlate of lightness and is computed from the "Y" tristimulus value using Priest's approximation to Munsell value: formula_0 where "Y"n is the "Y" tristimulus value of a specified white object. For surface-color applications, the specified white object is usually (though not always) a hypothetical material with unit reflectance that follows Lambert's law. The resulting "L" will be scaled between 0 (black) and 100 (white); roughly ten times the Munsell value. Note that a medium lightness of 50 is produced by a luminance of 25, due to the square root proportionality. "a" and "b" are termed opponent color axes. "a" represents, roughly, redness (positive) versus greenness (negative). It is computed as: formula_1 where "K"a is a coefficient that depends upon the illuminant (for D65, "K"a is 172.30; see approximate formula below) and "X"n is the "X" tristimulus value of the specified white object. The other opponent color axis, "b", is positive for yellow colors and negative for blue colors. It is computed as: formula_2 where "K"b is a coefficient that depends upon the illuminant (for D65, "K"b is 67.20; see approximate formula below) and "Z"n is the "Z" tristimulus value of the specified white object. Both "a" and "b" will be zero for objects that have the same chromaticity coordinates as the specified white objects (i.e., achromatic, grey objects). Approximate formulas for "K"a and "K"b. In the previous version of the Hunter "Lab" color space, "K"a was 175 and "K"b was 70. Hunter Associates Lab discovered that better agreement could be obtained with other color difference metrics, such as CIELAB (see above), by allowing these coefficients to depend upon the illuminants. Approximate formulae are: formula_3 formula_4 which result in the original values for Illuminant "C", the original illuminant with which the "Lab" color space was used. As an Adams chromatic valence space. Adams chromatic valence color spaces are based on two elements: a (relatively) uniform lightness scale and a (relatively) uniform chromaticity scale. If we take as the uniform lightness scale Priest's approximation to the Munsell Value scale, which would be written in modern notation as: formula_5 and, as the uniform chromaticity coordinates: formula_6 formula_7 where "k"e is a tuning coefficient, we obtain the two chromatic axes: formula_8 and formula_9 which is identical to the Hunter "Lab" formulas given above if we select "K" = "K"a/100 and "k"e = "K"b/"K"a. Therefore, the Hunter Lab color space is an Adams chromatic valence color space.
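The formulas above translate directly into code. The following sketch converts CIE XYZ tristimulus values to Hunter L, a, b using the approximate Ka and Kb formulas; the D65 white point values (Xn, Yn, Zn) of roughly (95.047, 100, 108.883) are an assumption of the example and are not part of the text above.

import math

def xyz_to_hunter_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Convert XYZ to Hunter L, a, b for a given white point (default: assumed D65)."""
    Ka = (175.0 / 198.04) * (Xn + Yn)   # approx. 172 for D65
    Kb = (70.0 / 218.11) * (Yn + Zn)    # approx. 67 for D65
    yr = Y / Yn
    L = 100.0 * math.sqrt(yr)
    if yr == 0:                          # avoid division by zero at black
        return L, 0.0, 0.0
    a = Ka * ((X / Xn - yr) / math.sqrt(yr))
    b = Kb * ((yr - Z / Zn) / math.sqrt(yr))
    return L, a, b

# The white point itself maps to L = 100, a = b = 0 (an achromatic object).
print(xyz_to_hunter_lab(95.047, 100.0, 108.883))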
[ { "math_id": 0, "text": "L = 100\\sqrt\\frac{Y}{Y_\\mathrm{n}}" }, { "math_id": 1, "text": "a = K_{\\mathrm{a}}\\left(\\frac{\\frac{X}{X_\\mathrm{n}} - \\frac{Y}{Y_\\mathrm{n}}}{\\sqrt{\\frac{Y}{Y_\\mathrm{n}}}}\\right)" }, { "math_id": 2, "text": "b = K_{\\mathrm{b}}\\left(\\frac{\\frac{Y}{Y_\\mathrm{n}} - \\frac{Z}{Z_\\mathrm{n}}}{\\sqrt{\\frac{Y}{Y_\\mathrm{n}}}}\\right)" }, { "math_id": 3, "text": "K_{\\mathrm{a}} \\approx \\frac{175}{198.04}(X_{\\mathrm{n}} + Y_{\\mathrm{n}})" }, { "math_id": 4, "text": "K_{\\mathrm{b}} \\approx \\frac{70}{218.11}(Y_{\\mathrm{n}} + Z_{\\mathrm{n}})" }, { "math_id": 5, "text": "L = 100\\sqrt{\\frac{Y}{Y_\\mathrm{n}}}" }, { "math_id": 6, "text": "c_\\mathrm{a} = \\frac{\\frac{X}{X_\\mathrm{n}}}{\\frac{Y}{Y_\\mathrm{n}}} - 1 = \\frac{\\frac{X}{X_\\mathrm{n}} - \\frac{Y}{Y_\\mathrm{n}}}{\\frac{Y}{Y_\\mathrm{n}}}" }, { "math_id": 7, "text": "c_\\mathrm{b} = k_{\\mathrm{e}} \\left(1 - \\frac{\\frac{Z}{Z_\\mathrm{n}}}{\\frac{Y}{Y_\\mathrm{n}}}\\right) = k_\\mathrm{e}\\frac{\\frac{Y}{Y_\\mathrm{n}} - \\frac{Z}{Z_\\mathrm{n}}}{\\frac{Y}{Y_\\mathrm{n}}}" }, { "math_id": 8, "text": "a = K\\cdot L\\cdot c_\\mathrm{a} = K\\cdot 100\\frac{\\frac{X}{X_\\mathrm{n}} - \\frac{Y}{Y_\\mathrm{n}}}{\\sqrt{\\frac{Y}{Y_\\mathrm{n}}}}" }, { "math_id": 9, "text": "b = K\\cdot L\\cdot c_\\mathrm{b} = K\\cdot 100 k_\\mathrm{e} \\frac{\\frac{Y}{Y_\\mathrm{n}} - \\frac{Z}{Z_\\mathrm{n}}}{\\sqrt{\\frac{Y}{Y_\\mathrm{n}}}}" } ]
https://en.wikipedia.org/wiki?curid=65811071
65820239
Tau function (integrable systems)
Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota in his "direct method" approach to soliton equations, based on expressing them in an equivalent bilinear form. The term tau function, or formula_0-function, was first used systematically by Mikio Sato and his students in the specific context of the Kadomtsev–Petviashvili (or KP) equation and related integrable hierarchies. It is a central ingredient in the theory of solitons. In this setting, given any formula_1-function satisfying a Hirota-type system of bilinear equations (see below), the corresponding solutions of the equations of the integrable hierarchy are explicitly expressible in terms of it and its logarithmic derivatives up to a finite order. Tau functions also appear as matrix model partition functions in the spectral theory of random matrices, and may also serve as generating functions, in the sense of combinatorics and enumerative geometry, especially in relation to moduli spaces of Riemann surfaces, and enumeration of branched coverings, or so-called Hurwitz numbers. There are two notions of formula_2-functions, both introduced by the Sato school. The first is "isospectral formula_2-functions" of the "Sato–Segal–Wilson type" for integrable hierarchies, such as the KP hierarchy, which are parametrized by linear operators satisfying isospectral deformation equations of Lax type. The second is "isomonodromic formula_2-functions". Depending on the specific application, a formula_3-function may either be: 1) an analytic function of a finite or infinite number of independent, commuting flow variables, or deformation parameters; 2) a discrete function of a finite or infinite number of denumerable variables; 3) a formal power series expansion in a finite or infinite number of expansion variables, which need have no convergence domain, but serves as generating function for certain enumerative invariants appearing as the coefficients of the series; 4) a finite or infinite (Fredholm) determinant whose entries are either specific polynomial or quasi-polynomial functions, or parametric integrals, and their derivatives; 5) the Pfaffian of a skew symmetric matrix (either finite or infinite dimensional) with entries similarly of polynomial or quasi-polynomial type. Examples of all these types are given below. In the "Hamilton–Jacobi" approach to "Liouville integrable Hamiltonian systems", "Hamilton's principal function" plays a role similar to the formula_2-function, serving both as a generating function for the canonical transformation to linearizing canonical coordinates and, when evaluated on simultaneous level sets of a complete set of Poisson commuting invariants, as a complete solution of the Hamilton–Jacobi equation. Tau functions: isospectral and isomonodromic. A formula_0-function of isospectral type is defined as a solution of the Hirota bilinear equations (see below), from which the linear operator undergoing isospectral evolution can be uniquely reconstructed.
Geometrically, in the Sato and Segal-Wilson sense, it is the value of the determinant of a Fredholm integral operator, interpreted as the orthogonal projection of an element of a suitably defined (infinite dimensional) Grassmann manifold onto the "origin", as that element evolves under the linear exponential action of a maximal abelian subgroup of the general linear group. It typically arises as a partition function, in the sense of statistical mechanics, many-body quantum mechanics or quantum field theory, as the underlying measure undergoes a linear exponential deformation. "Isomonodromic formula_1-functions" for linear systems of Fuchsian type are defined below in the section on isomonodromic systems. For the more general case of linear ordinary differential equations with rational coefficients, including irregular singularities, they are developed in the references. Hirota bilinear residue relation for KP tau functions. A KP (Kadomtsev–Petviashvili) formula_1-function formula_4 is a function of an infinite collection formula_5 of variables (called "KP flow variables") that satisfies the bilinear formal residue equation (1), identically in the formula_6 variables, where formula_7 is the formula_8 coefficient in the formal Laurent expansion resulting from expanding all factors as Laurent series in formula_9, and formula_10 As explained below, every such formula_2-function determines a set of solutions to the equations of the KP hierarchy. Kadomtsev–Petviashvili equation. If formula_11 is a KP formula_2-function satisfying the Hirota residue equation (1) and we identify the first three flow variables as formula_12 it follows that the function formula_13 satisfies the formula_14 (spatial) formula_15 (time) dimensional nonlinear partial differential equation known as the "Kadomtsev-Petviashvili" (KP) "equation". This equation plays a prominent role in plasma physics and in shallow water ocean waves. Taking further logarithmic derivatives of formula_16 gives an infinite sequence of functions that satisfy further systems of nonlinear autonomous PDEs, each involving partial derivatives of finite order with respect to a finite number of the KP flow parameters formula_17. These are collectively known as the "KP hierarchy". Formal Baker–Akhiezer function and the KP hierarchy. If we define the (formal) Baker-Akhiezer function formula_18 by Sato's formula formula_19 and expand it as a formal series in powers of the variable formula_9, formula_20 then it satisfies an infinite sequence of compatible evolution equations (3), where formula_21 is a linear ordinary differential operator of degree formula_22 in the variable formula_23, with coefficients that are functions of the flow variables formula_24, defined as follows: formula_25 where formula_26 is the formal pseudo-differential operator formula_27 with formula_28, formula_29 is the "wave operator" and formula_30 denotes the projection to the part of formula_31 containing purely non-negative powers of formula_32; i.e. the differential operator part of formula_33.
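As an illustration of how a formula_1-function generates solutions, the following sketch verifies symbolically that a one-soliton tau function yields a solution of the KP equation. It assumes one common normalization, which is not necessarily identical to that of equation (2) above: u is taken to be twice the second x-derivative of log tau, with (x, y, t) identified with the first three flow variables, so that the KP equation reads 3*u_yy = d/dx(4*u_t - 6*u*u_x - u_xxx), and the one-soliton tau function is tau = 1 + exp((p - q)*x + (p^2 - q^2)*y + (p^3 - q^3)*t).

import sympy as sp

x, y, t, p, q = sp.symbols('x y t p q')

# One-soliton KP tau function (Sato-type normalization, assumed here).
xi = (p - q) * x + (p**2 - q**2) * y + (p**3 - q**3) * t
tau = 1 + sp.exp(xi)

# u = 2 * d^2/dx^2 log(tau)
u = 2 * sp.diff(sp.log(tau), x, 2)

# KP equation in this normalization: 3*u_yy - d/dx(4*u_t - 6*u*u_x - u_xxx) = 0
lhs = 3 * sp.diff(u, y, 2) - sp.diff(4 * sp.diff(u, t) - 6 * u * sp.diff(u, x)
                                     - sp.diff(u, x, 3), x)
print(sp.simplify(lhs))   # 0, confirming u solves the KP equation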
The pseudodifferential operator formula_26 satisfies the infinite system (4) of isospectral deformation equations, and the compatibility conditions for both systems (3) and (4) form a compatible infinite system of nonlinear partial differential equations, known as the "KP (Kadomtsev-Petviashvili) hierarchy", for the functions formula_34, with respect to the set formula_5 of independent variables, each of which contains only a finite number of formula_35's and derivatives only with respect to the three independent variables formula_36. The first nontrivial case of these is the Kadomtsev-Petviashvili equation (2). Thus, every KP formula_1-function provides a solution, at least in the formal sense, of this infinite system of nonlinear partial differential equations. Isomonodromic systems. Isomonodromic tau functions. Fuchsian isomonodromic systems. Schlesinger equations. Consider the overdetermined system (6), (7) of first order matrix partial differential equations, where formula_37 are a set of formula_38 formula_39 traceless matrices, formula_40 is a set of formula_38 complex parameters, formula_9 a complex variable, and formula_41 is an invertible formula_42 matrix valued function of formula_9 and formula_40. These are the necessary and sufficient conditions for the based monodromy representation of the fundamental group formula_43 of the Riemann sphere punctured at the points formula_40 corresponding to the rational covariant derivative operator formula_44 to be independent of the parameters formula_40; i.e. that changes in these parameters induce an isomonodromic deformation. The compatibility conditions for this system are the Schlesinger equations (8). Isomonodromic formula_1-function. Defining the formula_38 functions (9), the Schlesinger equations (8) imply that the differential form formula_45 on the space of parameters is closed: formula_46 and hence, locally exact. Therefore, at least locally, there exists a function formula_47 of the parameters, defined up to a multiplicative constant, such that formula_48 The function formula_47 is called the "isomonodromic formula_1-function" associated to the fundamental solution formula_49 of the system (6), (7). Hamiltonian structure of the Schlesinger equations. Defining the Lie Poisson brackets on the space of formula_38-tuples formula_50 of formula_51 matrices: formula_52 formula_53 and viewing the formula_38 functions formula_54 defined in (9) as Hamiltonian functions on this Poisson space, the Schlesinger equations (8) may be expressed in Hamiltonian form as formula_55 for any differentiable function formula_56. Reduction of the formula_57, formula_58 case to formula_59. The simplest nontrivial case of the Schlesinger equations is when formula_57 and formula_58. By applying a Möbius transformation to the variable formula_9, two of the finite poles may be chosen to be at formula_60 and formula_61, and the third viewed as the independent variable. Setting the sum formula_62 of the matrices appearing in (6), which is an invariant of the Schlesinger equations, equal to a constant, and quotienting by its stabilizer under formula_63 conjugation, we obtain a system equivalent to the most generic case formula_59 of the six Painlevé transcendent equations, for which many detailed classes of explicit solutions are known. Non-Fuchsian isomonodromic systems.
For non-Fuchsian systems, with higher order poles, the "generalized" monodromy data include "Stokes matrices and connection matrices", and there are further isomonodromic deformation parameters associated with the local asymptotics, but the "isomonodromic formula_0-functions" may be defined in a similar way, using differentials on the extended parameter space. There is similarly a Poisson bracket structure on the space of rational matrix-valued functions of the spectral parameter formula_9 and corresponding spectral invariant Hamiltonians that generate the isomonodromic deformation dynamics. Taking all possible confluences of the poles appearing in (6) for the formula_57 and formula_58 case, including the one at formula_64, and making the corresponding reductions, we obtain all other instances formula_65 of the Painlevé transcendents, for which numerous special solutions are also known. Fermionic VEV (vacuum expectation value) representations. The fermionic Fock space formula_66 is a semi-infinite exterior product space formula_67 defined on a (separable) Hilbert space formula_68 with basis elements formula_69 and dual basis elements formula_70 for formula_71. The free fermionic creation and annihilation operators formula_72 act as endomorphisms on formula_66 via exterior and interior multiplication by the basis elements formula_73 and satisfy the canonical anti-commutation relations formula_74 These generate the standard fermionic representation of the Clifford algebra on the direct sum formula_75, corresponding to the scalar product formula_76 with the Fock space formula_66 as irreducible module. Denote the vacuum state, in the zero fermionic charge sector formula_77, as formula_78, which corresponds to the "Dirac sea" of states along the real integer lattice in which all negative integer locations are occupied and all non-negative ones are empty. This is annihilated by the following operators: formula_79 The dual fermionic Fock space vacuum state, denoted formula_80, is annihilated by the adjoint operators, acting to the left: formula_81 Normal ordering formula_82 of a product of linear operators (i.e., finite or infinite linear combinations of creation and annihilation operators) is defined so that its vacuum expectation value (VEV) vanishes: formula_83 In particular, for a product formula_84 of a pair formula_85 of linear operators, one has formula_86 The "fermionic charge" operator formula_87 is defined as formula_88 The subspace formula_89 is the eigenspace of formula_90 consisting of all eigenvectors with eigenvalue formula_38: formula_91. The standard orthonormal basis formula_92 for the zero fermionic charge sector formula_77 is labelled by integer partitions formula_93, where formula_94 is a weakly decreasing sequence of formula_95 positive integers, which can equivalently be represented by a Young diagram, as in the case of the partition formula_96. An alternative notation for a partition formula_97 consists of the Frobenius indices formula_98, where formula_99 denotes the "arm length", i.e. the number formula_100 of boxes in the Young diagram to the right of the formula_22'th diagonal box, and formula_101 denotes the "leg length", i.e. the number of boxes in the Young diagram below the formula_22'th diagonal box, for formula_102, where formula_103 is the "Frobenius rank", which is the number of elements along the principal diagonal.
The basis element formula_104 is then given by acting on the vacuum with a product of formula_103 pairs of creation and annihilation operators, labelled by the Frobenius indices: formula_105 The integers formula_106 indicate, relative to the Dirac sea, the occupied non-negative sites on the integer lattice, while formula_107 indicate the unoccupied negative integer sites. The corresponding diagram, consisting of infinitely many occupied and unoccupied sites on the integer lattice that are a finite perturbation of the Dirac sea, is referred to as a "Maya diagram". The case of the trivial (empty) partition formula_108 gives the vacuum state, and the dual basis formula_109 is defined by formula_110 Any KP formula_1-function can be expressed as a sum of Schur functions, where formula_111 are the KP flow variables, formula_112 is the Schur function corresponding to the partition formula_97, viewed as a function of the normalized power sum variables formula_113 in terms of an auxiliary (finite or infinite) sequence of variables formula_114, and the constant coefficients formula_115 may be viewed as the Plücker coordinates of an element formula_116 of the infinite dimensional Grassmannian consisting of the orbit, under the action of the general linear group formula_117, of the subspace formula_118 of the Hilbert space formula_119. This corresponds, under the "Bose-Fermi correspondence", to a decomposable element formula_120 of the Fock space formula_77 which, up to projectivization, is the image of the Grassmannian element formula_116 under the Plücker map formula_121 where formula_122 is a basis for the subspace formula_123 and formula_124 denotes projectivization of an element of formula_66. The Plücker coordinates formula_125 satisfy an infinite set of bilinear relations, the Plücker relations, defining the image of the Plücker embedding into the projectivization formula_126 of the fermionic Fock space, which are equivalent to the Hirota bilinear residue relation (1). If formula_127 for a group element formula_128 with fermionic representation formula_129, then the formula_1-function formula_130 can be expressed as the fermionic vacuum state expectation value (VEV): formula_131 where formula_132 is the abelian subgroup of formula_117 that generates the KP flows, and formula_133 are the "current" components. Examples of solutions to the equations of the KP hierarchy. Schur functions. As indicated above, every KP formula_2-function can be represented (at least formally) as a linear combination of Schur functions, in which the coefficients formula_115 satisfy the bilinear set of Plücker relations corresponding to an element formula_134 of an infinite (or finite) Grassmann manifold. In fact, the simplest class of (polynomial) tau functions consists of the Schur functions formula_112 themselves, which correspond to the special element of the Grassmann manifold whose image under the Plücker map is formula_135. Multisoliton solutions. If we choose formula_136 complex constants formula_137 with the formula_138's all distinct, formula_139, and define the functions formula_140 we arrive at the Wronskian determinant formula formula_141, which gives the general formula_142-soliton formula_1-function. Theta function solutions associated to algebraic curves.
Let formula_143 be a compact Riemann surface of genus formula_144 and fix a canonical homology basis formula_145 of formula_146 with intersection numbers formula_147 Let formula_148 be a basis for the space formula_149 of holomorphic differentials satisfying the standard normalization conditions formula_150 where formula_151 is the "Riemann matrix" of periods. The matrix formula_151 belongs to the "Siegel upper half space" formula_152 The Riemann formula_153 function on formula_154 corresponding to the period matrix formula_151 is defined to be formula_155 Choose a point formula_156, a local parameter formula_157 in a neighbourhood of formula_158 with formula_159, and a positive divisor of degree formula_144: formula_160 For any positive integer formula_161 let formula_162 be the unique meromorphic differential of the second kind characterized by the following conditions: its only singularity is a pole of order formula_163, located at formula_164; its expansion in a neighbourhood of formula_165 has the form formula_166; and its integrals around the formula_167-cycles all vanish: formula_168 Denote by formula_169 the vector of formula_170-cycles of formula_162: formula_171 Denote the image of formula_172 under the Abel map formula_173 by formula_174 with arbitrary base point formula_175. Then the following is a KP formula_1-function: formula_176. Matrix model partition functions as KP formula_1-functions. Let formula_177 be the Lebesgue measure on the formula_178 dimensional space formula_179 of formula_180 complex Hermitian matrices. Let formula_181 be a conjugation invariant integrable density function formula_182 Define a deformation family of measures formula_183 for small formula_184, and let formula_185 be the partition function for this random matrix model. Then formula_186 satisfies the bilinear Hirota residue equation (1), and hence is a formula_2-function of the KP hierarchy. formula_1-functions of hypergeometric type. Generating function for Hurwitz numbers. Let formula_187 be a (doubly) infinite sequence of complex numbers. For any integer partition formula_93 define the "content product" coefficient formula_188, where the product is over all pairs formula_189 of positive integers that correspond to boxes of the Young diagram of the partition formula_190, viewed as positions of matrix elements of the corresponding formula_191 matrix. Then, for every pair of infinite sequences formula_192 and formula_193 of complex variables, viewed as (normalized) power sums formula_194 of the infinite sequences of auxiliary variables formula_195 and formula_196, defined by formula_197, the function (11) is a "double" KP formula_2-function, both in the formula_198 and the formula_199 variables, known as a formula_2-function of "hypergeometric type". In particular, choosing formula_200 for some small parameter formula_201, denoting the corresponding content product coefficient as formula_202 and setting formula_203, the resulting formula_0-function can be equivalently expanded as in (12), where formula_204 are the "simple Hurwitz numbers", which are formula_205 times the number of ways in which an element formula_206 of the symmetric group formula_207 on formula_208 elements, with cycle lengths equal to the parts of the partition formula_97, can be factorized as a product of formula_209 formula_210-cycles formula_211, and formula_212 is the power sum symmetric function. Equation (12) thus shows that the (formal) KP hypergeometric formula_2-function (11) corresponding to the content product coefficients formula_202 is a generating function, in the combinatorial sense, for simple Hurwitz numbers.
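The KP statements above can be probed concretely in the simplest cases. The following sketch (an illustration added here, not part of the source) checks symbolically that the formula_142 = 1 Wronskian formula_1-function, with the flow identification formula_12, yields a solution of the KP equation; it assumes one common normalization of equation (2), namely (−4u_t + u_xxx + 6 u u_x)_x + 3 u_yy = 0, and the parameter values are arbitrary sample choices.

```python
# Symbolic check (illustrative sketch): the N = 1 Wronskian tau-function
#   tau = exp(xi(alpha)) + gamma*exp(xi(beta)),   xi(z) = x*z + y*z**2 + t*z**3,
# gives u = 2*(log tau)_xx solving the KP equation in the normalization
#   ( -4*u_t + u_xxx + 6*u*u_x )_x + 3*u_yy = 0   (a convention assumed here).
import sympy as sp

x, y, t = sp.symbols('x y t')
alpha, beta, gamma = sp.Rational(2), sp.Rational(-1), sp.Rational(3)  # arbitrary sample values

xi = lambda z: x*z + y*z**2 + t*z**3              # truncated Sato exponent sum_i t_i z^i
tau = sp.exp(xi(alpha)) + gamma*sp.exp(xi(beta))  # 1-soliton Wronskian tau-function

u = 2*sp.diff(sp.log(tau), x, 2)
kp = sp.diff(-4*sp.diff(u, t) + sp.diff(u, x, 3) + 6*u*sp.diff(u, x), x) + 3*sp.diff(u, y, 2)
print(sp.simplify(kp))                            # expected output: 0
```

In the same spirit, the combinatorial content of the simple Hurwitz numbers entering (12) can be checked by brute force for very small cases. The sketch below fixes a permutation of given cycle type in the symmetric group and counts the tuples of transpositions whose product equals it, using the 1/n! normalization described above; the function names and example values are invented for the illustration.

```python
# Brute-force count (illustrative sketch): for a fixed permutation k_lambda in S_n with
# cycle type lambda, count the d-tuples of 2-cycles (transpositions) whose product is
# k_lambda, normalized by 1/n! as in the definition of the simple Hurwitz numbers.
from fractions import Fraction
from itertools import combinations, product
from math import factorial

def transpositions(n):
    """All 2-cycles of S_n, encoded as tuples p with p[i] = image of i."""
    result = []
    for a, b in combinations(range(n), 2):
        p = list(range(n))
        p[a], p[b] = p[b], p[a]
        result.append(tuple(p))
    return result

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def factorization_count(k_lambda, d):
    """1/n! times the number of d-tuples of transpositions whose product is k_lambda."""
    n = len(k_lambda)
    count = 0
    for taus in product(transpositions(n), repeat=d):
        prod = tuple(range(n))        # start from the identity permutation
        for transposition in taus:
            prod = compose(prod, transposition)
        if prod == k_lambda:
            count += 1
    return Fraction(count, factorial(n))

# Example: the 3-cycle (0 1 2) in S_3, i.e. cycle type lambda = (3), written as a
# product of d = 2 and d = 4 transpositions.
three_cycle = (1, 2, 0)
print(factorization_count(three_cycle, 2), factorization_count(three_cycle, 4))
```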
[ { "math_id": 0, "text": " \\tau " }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\tau " }, { "math_id": 3, "text": "\\tau " }, { "math_id": 4, "text": "\\tau(\\mathbf{t})" }, { "math_id": 5, "text": "\\mathbf{t}=(t_1, t_2, \\dots)" }, { "math_id": 6, "text": " \\delta t_j " }, { "math_id": 7, "text": "\\mathrm{res}_{z=0}" }, { "math_id": 8, "text": "z^{-1}" }, { "math_id": 9, "text": "z" }, { "math_id": 10, "text": " {\\bf s} := {\\bf t} + (\\delta t_1, \\delta t_2, \\cdots ), \\quad [z^{-1}] := (z^{-1}, \\tfrac{z^{-2}}{2}, \\cdots \\tfrac{z^{-j}}{j}, \\cdots). " }, { "math_id": 11, "text": "\\tau(t_1, t_2, t_3, \\dots\\dots) " }, { "math_id": 12, "text": "t_1 =x, \\quad t_2=y,\\quad t_3 =t, " }, { "math_id": 13, "text": "u(x,y,t):=2\\frac{\\partial^2}{\\partial x^2}\\log\\left(\\tau(x,y,t, t_4,\\dots)\\right) " }, { "math_id": 14, "text": "2 " }, { "math_id": 15, "text": " +1" }, { "math_id": 16, "text": "\\tau(t_1, t_2, t_3, \\dots\\dots)" }, { "math_id": 17, "text": "{\\bf t} =(t_1, t_2, \\dots )" }, { "math_id": 18, "text": "\\psi(z, \\mathbf{t})" }, { "math_id": 19, "text": " \\psi(z, \\mathbf{t}) :=\ne^{\\sum_{i=1}^\\infty t_i z^i} \n\\frac{\\tau(\\mathbf{t} - [z^{-1}])}{\\tau(\\mathbf{t})}" }, { "math_id": 20, "text": "\n\\psi(z, \\mathbf{t}) = e^{\\sum_{i=1}^\\infty t_i z^i} \n( 1 + \\sum_{j=1}^\\infty w_j(\\mathbf{t}) z^{-j}), " }, { "math_id": 21, "text": " \\mathcal{D}_i" }, { "math_id": 22, "text": "i" }, { "math_id": 23, "text": "x:= t_1" }, { "math_id": 24, "text": " \\mathbf{t}=(t_1, t_2, \\dots)" }, { "math_id": 25, "text": "\n\\mathcal{D}_i := \\big(\\mathcal{L}^i\\big)_+\n" }, { "math_id": 26, "text": "\\mathcal{L}" }, { "math_id": 27, "text": "\n\\mathcal{L} = \\partial + \\sum_{j=1}^\\infty u_j(\\mathbf{t}) \\partial^{-j}\n= \\mathcal{W} \\circ\\partial \\circ{\\mathcal{W}}^{-1}\n" }, { "math_id": 28, "text": " \\partial := \\frac{\\partial}{\\partial x} " }, { "math_id": 29, "text": "\n\\mathcal{W} := 1 +\\sum_{j=1}^\\infty w_j(\\mathbf{t}) \\partial^{-j}\n" }, { "math_id": 30, "text": "\\big(\\mathcal{L}^i\\big)_+" }, { "math_id": 31, "text": "\\mathcal{L}^i" }, { "math_id": 32, "text": " \\partial " }, { "math_id": 33, "text": "{\\mathcal{L}}^i" }, { "math_id": 34, "text": "\\{u_j(\\mathbf{t})\\}_{j\\in \\mathbf{N}}" }, { "math_id": 35, "text": "u_j" }, { "math_id": 36, "text": "(x, t_i, t_j)" }, { "math_id": 37, "text": " \\{N_i\\}_{i=1, \\dots, n}" }, { "math_id": 38, "text": "n" }, { "math_id": 39, "text": "r\\times r" }, { "math_id": 40, "text": "\\{\\alpha_i\\}_{i=1, \\dots, n}" }, { "math_id": 41, "text": "\\Psi(z, \\alpha_1, \\dots, \\alpha_m)" }, { "math_id": 42, "text": "r \\times r" }, { "math_id": 43, "text": "\\pi_1({\\bf P}^1\\backslash\\{\\alpha_i\\}_{i=1, \\dots, n})" }, { "math_id": 44, "text": "{\\partial \\over \\partial z}- \\sum_{i=1}^n {N_i \\over z - \\alpha_i}" }, { "math_id": 45, "text": "\\omega := \\sum_{i=1}^n H_i d\\alpha_i" }, { "math_id": 46, "text": "d\\omega = 0" }, { "math_id": 47, "text": "\\tau(\\alpha_1, \\dots, \\alpha_n)" }, { "math_id": 48, "text": "\\omega = d\\mathrm{ln}\\tau " }, { "math_id": 49, "text": "\\Psi" }, { "math_id": 50, "text": "\\{N_i\\}_{i=1, \\dots, n}" }, { "math_id": 51, "text": "r \\times r " }, { "math_id": 52, "text": "\\{(N_i)_{ab}, (N_j)_{c,d}\\} = \n\\delta_{ij}\\left((N_i)_{ad}\\delta_{bc} - (N_i)_{cb}\\delta_{ad}\\right)" }, { "math_id": 53, "text": " 1 \\le i,j \\le n, \\quad 1\\le a,b,c,d \\le r, " }, { "math_id": 54, "text": "\\{H_i\\}_{i=1, \\dots,n}" }, { "math_id": 55, "text": 
" \\frac{\\partial f(N_1, \\dots, N_n)}{\\partial \\alpha_i} = \\{f, H_i\\}, \\quad 1\\le i \\le n " }, { "math_id": 56, "text": "f(N_1, \\dots, N_n)" }, { "math_id": 57, "text": "r=2" }, { "math_id": 58, "text": "n=3" }, { "math_id": 59, "text": "P_{VI}" }, { "math_id": 60, "text": "0" }, { "math_id": 61, "text": "1" }, { "math_id": 62, "text": " \\sum_{i=1}^3 N_i" }, { "math_id": 63, "text": "Gl(2)" }, { "math_id": 64, "text": "z=\\infty" }, { "math_id": 65, "text": "P_{I} \\cdots P_V" }, { "math_id": 66, "text": "\\mathcal{F}" }, { "math_id": 67, "text": "\\mathcal{F} = \\Lambda^{\\infty/2}\\mathcal{H} = \\oplus_{n\\in \\mathbf{Z}}\\mathcal{F}_n " }, { "math_id": 68, "text": "\\mathcal{H} " }, { "math_id": 69, "text": "\\{e_i\\}_{i\\in \\mathbf{Z}}" }, { "math_id": 70, "text": "\\{e^i\\}_{i\\in \\mathbf{Z}}" }, { "math_id": 71, "text": "\\mathcal{H}^* " }, { "math_id": 72, "text": "\\{\\psi_j, \\psi^{\\dagger}_j\\}_{j \\in \\mathbf{Z}}" }, { "math_id": 73, "text": "\\psi_i := e_i \\wedge, \\quad \\psi^\\dagger_i := i_{e^i}, \\quad i \\in \\mathbf{Z}," }, { "math_id": 74, "text": " [\\psi_i,\\psi_k]_+ = [\\psi^\\dagger_i,\\psi^\\dagger_k]_+= 0, \\quad [\\psi_i,\\psi^\\dagger_k]_+= \\delta_{ij}." }, { "math_id": 75, "text": "\\mathcal{H} +\\mathcal{H}^* " }, { "math_id": 76, "text": "Q(u + \\mu, w + \\nu) := \\nu(u) + \\mu(v), \\quad u,v \\in \\mathcal{H},\\ \\mu, \\nu \\in \\mathcal{H}^* " }, { "math_id": 77, "text": "\\mathcal{F}_0" }, { "math_id": 78, "text": "|0\\rangle := e_{-1}\\wedge e_{-2} \\wedge \\cdots" }, { "math_id": 79, "text": " \\psi_{-j}|0 \\rangle = 0, \\quad \\psi^{\\dagger}_{j-1}|0 \\rangle = 0, \\quad j=0, 1, \\dots " }, { "math_id": 80, "text": "\\langle 0 |" }, { "math_id": 81, "text": " \\langle 0| \\psi^\\dagger_{-j} = 0, \\quad \\langle 0 | \\psi_{j-1}|0 = 0, \\quad j=0, 1, \\dots " }, { "math_id": 82, "text": ": L_1, \\cdots L_m:" }, { "math_id": 83, "text": " \\langle 0 |: L_1, \\cdots L_m:|0 \\rangle =0. " }, { "math_id": 84, "text": "L_1 L_2" }, { "math_id": 85, "text": "(L_1, L_2)" }, { "math_id": 86, "text": " {:L_1 L_2:} = L_1 L_2 - \\langle 0 | L_1 L_2|0 \\rangle. " }, { "math_id": 87, "text": " C " }, { "math_id": 88, "text": " C = \\sum_{i\\in \\mathbf{Z}} :\\psi_i \\psi^\\dagger_i: " }, { "math_id": 89, "text": "\\mathcal{F}_n \\subset \\mathcal{F}" }, { "math_id": 90, "text": " C" }, { "math_id": 91, "text": " C | v; n\\rangle = n | v; n\\rangle, \\quad \\forall | v; n\\rangle \\in \\mathcal{F}_n " }, { "math_id": 92, "text": "\\{|\\lambda\\rangle\\}" }, { "math_id": 93, "text": " \\lambda = (\\lambda_1, \\dots, \\lambda_{\\ell(\\lambda)})" }, { "math_id": 94, "text": "\\lambda_1\\ge \\cdots \\ge \\lambda_{\\ell(\\lambda)}" }, { "math_id": 95, "text": "\\ell(\\lambda)" }, { "math_id": 96, "text": "(5, 4, 1)" }, { "math_id": 97, "text": "\\lambda" }, { "math_id": 98, "text": "(\\alpha_1, \\dots \\alpha_r | \\beta_1, \\dots \\beta _r)" }, { "math_id": 99, "text": "\\alpha_i" }, { "math_id": 100, "text": "\\lambda_i -i" }, { "math_id": 101, "text": "\\beta_i" }, { "math_id": 102, "text": "i=1, \\dots, r" }, { "math_id": 103, "text": "r" }, { "math_id": 104, "text": "|\\lambda\\rangle" }, { "math_id": 105, "text": " |\\lambda\\rangle = (-1)^{\\sum_{j=1}^r \\beta_j} \n\\prod_{k=1}^r \\big(\\psi_{\\alpha_k} \\psi^\\dagger_{-\\beta_k-1}\\big)| 0 \\rangle. 
" }, { "math_id": 106, "text": "\\{\\alpha_i\\}_{i=1, \\dots, r}" }, { "math_id": 107, "text": "\\{-\\beta_i-1\\}_{i=1, \\dots, r}" }, { "math_id": 108, "text": "|\\emptyset\\rangle = |0 \\rangle" }, { "math_id": 109, "text": "\\{\\langle \\mu|\\}" }, { "math_id": 110, "text": "\\langle \\mu|\\lambda\\rangle = \\delta_{\\lambda, \\mu} " }, { "math_id": 111, "text": "\\mathbf{t} = (t_1, t_2, \\dots, \\dots)" }, { "math_id": 112, "text": "s_\\lambda(\\mathbf{t})" }, { "math_id": 113, "text": " t_i := [\\mathbf{x}]_i := \\frac{1}{i} \\sum_{a=1}^n x_a^i \\quad i = 1,2, \\dots " }, { "math_id": 114, "text": "\\mathbf{x}:=(x_1, \\dots, x_N)" }, { "math_id": 115, "text": "\\pi_\\lambda(w)" }, { "math_id": 116, "text": "w\\in \\mathrm{Gr}_{\\mathcal{H}_+}(\\mathcal{H}) " }, { "math_id": 117, "text": "\\mathrm{Gl}(\\mathcal{H})" }, { "math_id": 118, "text": "\\mathcal{H}_+ = \\mathrm{span}\\{e_{-i}\\}_{i \\in \\mathbf{N}} \\subset \\mathcal{H} " }, { "math_id": 119, "text": "\\mathcal{H}" }, { "math_id": 120, "text": " |\\tau_w\\rangle = \\sum_{\\lambda} \\pi_{\\lambda}(w) |\\lambda \\rangle " }, { "math_id": 121, "text": " \\mathcal{Pl}: \\mathrm{span}(w_1, w_2, \\dots ) \n\\longrightarrow [w_1 \\wedge w_2 \\wedge \\cdots ]= [|\\tau_w\\rangle], " }, { "math_id": 122, "text": "(w_1, w_2, \\dots )" }, { "math_id": 123, "text": "w\\subset \\mathcal{H}" }, { "math_id": 124, "text": "[ \\cdots]" }, { "math_id": 125, "text": "\\{\\pi_\\lambda(w)\\}" }, { "math_id": 126, "text": "\\mathbf{P}(\\mathcal{F})" }, { "math_id": 127, "text": " w = g(\\mathcal{H}_+)" }, { "math_id": 128, "text": " g \\in \\mathrm{Gl}(\\mathcal{H})" }, { "math_id": 129, "text": "\\hat{g}" }, { "math_id": 130, "text": "\\tau_w(\\mathbf{t})" }, { "math_id": 131, "text": "\\tau_w(\\mathbf{t}) = \\langle 0 | \\hat{\\gamma}_+(\\mathbf{t}) \\hat{g} | 0 \\rangle, " }, { "math_id": 132, "text": "\\Gamma_+ =\\{\\hat{\\gamma}_+(\\mathbf{t}) = e^{\\sum_{i=1}^\\infty t_i J_i}\\} \n\\subset \\mathrm{Gl}(\\mathcal{H}) " }, { "math_id": 133, "text": " J_i := \\sum_{j\\in \\mathbf{Z}} \\psi_j \\psi^\\dagger_{j+i}, \\quad i=1,2 \\dots " }, { "math_id": 134, "text": "w" }, { "math_id": 135, "text": "|\\lambda>" }, { "math_id": 136, "text": "3N" }, { "math_id": 137, "text": "\\{\\alpha_k, \\beta_k, \\gamma_k\\}_{k=1, \\dots, N}" }, { "math_id": 138, "text": "\\alpha_k, \\beta_k" }, { "math_id": 139, "text": "\\gamma_k \\ne 0" }, { "math_id": 140, "text": "\n y_k({\\bf t}) := e^{\\sum_{i=1}^\\infty t_i \\alpha_k^i} +\\gamma_k e^{\\sum_{i=1}^\\infty t_i \\beta_k^i} \\quad k=1,\\dots, N,\n" }, { "math_id": 141, "text": "\n\\tau^{(N)}_{\\vec\\alpha, \\vec\\beta, \\vec\\gamma}({\\bf t}):=\n\\begin{vmatrix}\n y_1({\\bf t})& y_2({\\bf t}) &\\cdots& y_N({\\bf t})\\\\\n y_1'({\\bf t})& y_2'({\\bf t}) &\\cdots& y_N'({\\bf t})\\\\\n\\vdots & \\vdots &\\ddots &\\vdots\\\\ \n y_1^{(N-1)}({\\bf t})& y_2^{(N-1)}({\\bf t}) &\\cdots& y_N^{(N-1)}({\\bf t})\\\\\n \\end{vmatrix},\n" }, { "math_id": 142, "text": "N" }, { "math_id": 143, "text": "X" }, { "math_id": 144, "text": "g" }, { "math_id": 145, "text": "a_1, \\dots, a_g, b_1, \\dots, b_g" }, { "math_id": 146, "text": "H_1(X,\\mathbf{Z})" }, { "math_id": 147, "text": "\na_i \\circ a_j = b_i \\circ b_j =0, \\quad a_i\\circ b_j =\\delta_{ij},\\quad 1\\leq i,j \\leq g.\n" }, { "math_id": 148, "text": "\\{\\omega_i\\}_{i=1, \\dots, g}" }, { "math_id": 149, "text": "H^1(X)" }, { "math_id": 150, "text": "\n\\oint_{a_i} \\omega_j =\\delta_{ij}, \\quad \\oint_{b_j }\\omega_j = B_{ij},\n" }, { "math_id": 151, "text": "B" 
}, { "math_id": 152, "text": "\n\\mathbf{S}_g=\\left\\{B \\in \\mathrm{Mat}_{g\\times g}(\\mathbf{C})\\ \\colon\\ B^T = B,\\ \\text{Im}(B) \n\\text{ is positive definite}\\right\\}.\n" }, { "math_id": 153, "text": "\\theta" }, { "math_id": 154, "text": "\\mathbf{C}^g" }, { "math_id": 155, "text": " \\theta(Z | B) := \\sum_{N\\in \\Z^g} e^{i\\pi (N, B N) + 2i\\pi (N, Z)}.\n" }, { "math_id": 156, "text": "p_\\infty \\in X" }, { "math_id": 157, "text": "\\zeta" }, { "math_id": 158, "text": "p_{\\infty}" }, { "math_id": 159, "text": "\\zeta(p_\\infty)=0" }, { "math_id": 160, "text": "\n\\mathcal{D}:= \\sum_{i=1}^g p_i,\\quad p_i \\in X.\n" }, { "math_id": 161, "text": "k\\in \\mathbf{N}^+" }, { "math_id": 162, "text": "\\Omega_k" }, { "math_id": 163, "text": "k+1" }, { "math_id": 164, "text": "p=p_\\infty" }, { "math_id": 165, "text": "p=p_{\\infty}" }, { "math_id": 166, "text": "\\Omega_k = d(\\zeta^{-k} ) + \\sum_{j=1}^\\infty Q_{ij} \\zeta^j d\\zeta" }, { "math_id": 167, "text": "a" }, { "math_id": 168, "text": " \\oint_{a_i }\\Omega_j =0. " }, { "math_id": 169, "text": "\\mathbf{U}_k \\in \\mathbf{C}^g" }, { "math_id": 170, "text": "b" }, { "math_id": 171, "text": "(\\mathbf{U}_k)_j := \\oint_{b_j} \\Omega_k." }, { "math_id": 172, "text": "{\\mathcal D}" }, { "math_id": 173, "text": "\\mathcal{A}: \\mathcal{S}^g(X) \\to \\mathbf{C}^g " }, { "math_id": 174, "text": " \\mathbf{E} := \\mathcal{A}(\\mathcal{D}) \\in \\mathbf{C}^g, \\quad \\mathbf{E}_j \n= \\mathcal{A}_j (\\mathcal{D}) := \\sum_{j=1}^g \\int_{p_0}^{p_i}\\omega_j " }, { "math_id": 175, "text": "p_0" }, { "math_id": 176, "text": " \\tau_{(X, \\mathcal{D}, p_\\infty, \\zeta)}(\\mathbf{t}):= e^{-{1\\over 2} \\sum_{ij} Q_{ij}t _i t_j}\n\\theta\\left(\\mathbf{E} +\\sum_{k=1}^\\infty t_k \\mathbf{U}_k \\Big|B\\right) " }, { "math_id": 177, "text": "d\\mu_0(M)" }, { "math_id": 178, "text": "N^2" }, { "math_id": 179, "text": "{\\mathbf H}^{N\\times N}" }, { "math_id": 180, "text": "N\\times N" }, { "math_id": 181, "text": "\\rho(M)" }, { "math_id": 182, "text": "\n\\rho(U M U^{\\dagger}) = \\rho(M), \\quad U\\in U(N).\n" }, { "math_id": 183, "text": "\nd\\mu_{N,\\rho}(\\mathbf{t}) := e^{\\text{ Tr }(\\sum_{i=1}^\\infty t_i M^i)} \\rho(M) d\\mu_0 (M)\n" }, { "math_id": 184, "text": "\\mathbf{t}= (t_1, t_2, \\cdots)" }, { "math_id": 185, "text": "\n\\tau_{N,\\rho}({\\bf t}):= \\int_{{\\mathbf H}^{N\\times N} }d\\mu_{N,\\rho}({\\bf t}).\n" }, { "math_id": 186, "text": "\\tau_{N,\\rho}(\\mathbf{t})" }, { "math_id": 187, "text": "\\{r_i\\}_{i\\in \\mathbf{Z}}" }, { "math_id": 188, "text": "r_{\\lambda} := \\prod_{(i,j)\\in \\lambda} r_{j-i}" }, { "math_id": 189, "text": "(i,j)" }, { "math_id": 190, "text": " \\lambda " }, { "math_id": 191, "text": "\\ell(\\lambda) \\times \\lambda_1" }, { "math_id": 192, "text": " \\mathbf{t} = (t_1, t_2, \\dots )" }, { "math_id": 193, "text": " \\mathbf{s} = (s_1, s_2, \\dots )" }, { "math_id": 194, "text": " \\mathbf{t} = [\\mathbf{x}], \\ \\mathbf{s} = [\\mathbf{y}]" }, { "math_id": 195, "text": " \\mathbf{x} = (x_1, x_2, \\dots )" }, { "math_id": 196, "text": " \\mathbf{y} = (y_1, y_2, \\dots )" }, { "math_id": 197, "text": " t_j := \\tfrac{1}{j}\\sum_{a=1}^\\infty x_a^j, \\quad s_j := \\tfrac{1}{j} \\sum_{j=1}^\\infty y_a^j" }, { "math_id": 198, "text": " \\mathbf{t}" }, { "math_id": 199, "text": " \\mathbf{s}" }, { "math_id": 200, "text": "r_j = r^{\\beta}_j := e^{j\\beta}" }, { "math_id": 201, "text": "\\beta" }, { "math_id": 202, "text": " r_\\lambda^\\beta" }, { "math_id": 203, "text": "\\mathbf{s} 
= (1, 0, \\dots)=: \\mathbf{t}_0" }, { "math_id": 204, "text": "\\{H_d(\\lambda)\\}" }, { "math_id": 205, "text": " \\frac{1}{n!}" }, { "math_id": 206, "text": "k_\\lambda \\in \\mathcal{S}_{n}" }, { "math_id": 207, "text": "\\mathcal{S}_{n}" }, { "math_id": 208, "text": "n=|\\lambda|" }, { "math_id": 209, "text": "d" }, { "math_id": 210, "text": "2" }, { "math_id": 211, "text": " k_\\lambda = (a_1 b_1)\\dots (a_d b_d)" }, { "math_id": 212, "text": " p_{\\lambda}(\\mathbf{t}) = \\prod_{i=1}^{\\ell(\\lambda)} p_{\\lambda_i}(\\mathbf{t}), \\ \\text{with}\\ p_i(\\mathbf{t}) := \\sum_{a=1}^\\infty x^i_a = i t_i " } ]
https://en.wikipedia.org/wiki?curid=65820239
6582436
Level set (data structures)
In computer science, a level set data structure is designed to represent discretely sampled dynamic level set functions. A common use of this form of data structure is in efficient image rendering. The underlying method constructs a signed distance field that extends from the boundary, and can be used to solve the motion of the boundary in this field. Chronological developments. The powerful level-set method is due to Osher and Sethian (1988). However, the straightforward implementation via a dense d-dimensional array of values results in both time and storage complexity of formula_0, where formula_1 is the cross-sectional resolution of the spatial extents of the domain and formula_2 is the number of spatial dimensions of the domain. Narrow band. The narrow band level set method, introduced in 1995 by Adalsteinsson and Sethian, restricted most computations to a thin band of active voxels immediately surrounding the interface, thus reducing the time complexity in three dimensions to formula_3 for most operations. Periodic updates of the narrowband structure, to rebuild the list of active voxels, were required, which entailed an formula_4 operation in which voxels over the entire volume were accessed. The storage complexity for this narrowband scheme was still formula_5 Differential constructions over the narrow band domain edge require careful interpolation and domain alteration schemes to stabilise the solution. Sparse field. This formula_4 time complexity was eliminated in the approximate "sparse field" level set method introduced by Whitaker in 1998. The sparse field level set method employs a set of linked lists to track the active voxels around the interface. This allows incremental extension of the active region as needed without incurring any significant overhead. While consistently formula_3 efficient in time, formula_4 storage space is still required by the sparse field level set method. Sparse block grid. The sparse block grid method, introduced by Bridson in 2003, divides the entire bounding volume of size formula_6 into small cubic blocks of formula_7 voxels each. A coarse grid of size formula_8 then stores pointers only to those blocks that intersect the narrow band of the level set. Block allocation and deallocation occur as the surface propagates to accommodate the deformations. This method has a suboptimal storage complexity of formula_9, but retains the constant-time access inherent to dense grids. Octree. The octree level set method, introduced by Strain in 1999 and refined by Losasso, Gibou and Fedkiw, and more recently by Min and Gibou, uses a tree of nested cubes of which the leaf nodes contain signed distance values. Octree level sets currently require uniform refinement along the interface (i.e. the narrow band) in order to obtain sufficient precision. This representation is efficient in terms of storage, formula_10 and relatively efficient in terms of access queries, formula_11 An advantage of the level set method on octree data structures is that one can solve the partial differential equations associated with typical free boundary problems that use the level set method. The CASL research group has developed this line of work in computational materials, computational fluid dynamics, electrokinetics, image-guided surgery and controls. Run-length encoded.
The run-length encoding (RLE) level set method, introduced in 2004, applies the RLE scheme to compress regions away from the narrow band to just their sign representation, while storing the narrow band with full precision. The sequential traversal of the narrow band is optimal and storage efficiency is further improved over the octree level set. The addition of an acceleration lookup table allows for fast formula_12 random access, where r is the number of runs per cross section. Additional efficiency is gained by applying the RLE scheme in a dimensionally recursive fashion, a technique introduced in Nielsen & Museth's similar DT-Grid. Hash Table Local Level Set. The Hash Table Local Level Set method, introduced in 2011 by Eyiyurekli and Breen and extended in 2012 by Brun, Guittet and Gibou, only computes the level set data in a band around the interface, as in the Narrow Band Level-Set Method, but also only stores the data in that same band. A hash table data structure is used, which provides formula_13 access to the data. However, Brun et al. conclude that their method, while being easier to implement, performs worse than a quadtree implementation. They find that "as it is, [...] a quadtree data structure seems more adapted than the hash table data structure for level-set algorithms", and they list three main reasons for the worse efficiency. Point-based. Corbett in 2005 introduced the point-based level set method. Instead of using a uniform sampling of the level set, the continuous level set function is reconstructed from a set of unorganized point samples via moving least squares.
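To make these trade-offs concrete, the following minimal sketch (an added illustration, not code from any of the works cited above) implements a hash-table-backed narrow band in 2D, in the spirit of the Hash Table Local Level Set method: signed distances are stored only for grid points inside a band around the interface, far points are represented by their sign alone, and every query is an expected formula_13 dictionary lookup. The grid size, band half-width, and circular interface are made-up example values.

```python
# Toy 2-D narrow-band level set backed by a Python dict (hash table): an
# illustrative sketch of the "store only the band" idea, not a production scheme.
import math

class HashTableNarrowBand:
    def __init__(self, n, band, phi):
        self.band = band
        self.sign_oracle = phi               # toy stand-in for a coarse far-field sign structure
        self.values = {(i, j): phi(i, j)     # keep signed distances only inside the band
                       for i in range(n) for j in range(n)
                       if abs(phi(i, j)) <= band}

    def query(self, i, j):
        """Signed distance inside the band, clamped to +/- band outside it."""
        if (i, j) in self.values:            # expected O(1) hash lookup
            return self.values[(i, j)]
        # Real implementations track the far-field sign (e.g. per block or by flood
        # fill) rather than re-evaluating the level set function as this toy does.
        return math.copysign(self.band, self.sign_oracle(i, j))

# Example: a circle of radius 20 centred in a 64 x 64 grid, band half-width 3.
phi = lambda i, j: math.hypot(i - 32.0, j - 32.0) - 20.0
ls = HashTableNarrowBand(64, band=3.0, phi=phi)
print(len(ls.values), ls.query(52, 32), ls.query(0, 0))
```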
[ { "math_id": 0, "text": "O(n^d)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "O(n^2)" }, { "math_id": 4, "text": "O(n^3)" }, { "math_id": 5, "text": "O(n^3)." }, { "math_id": 6, "text": "n^3" }, { "math_id": 7, "text": "m^3" }, { "math_id": 8, "text": "(n/m\n)^3" }, { "math_id": 9, "text": "O\\left((nm)3 + m^3n^2\\right)" }, { "math_id": 10, "text": "O(n^2)," }, { "math_id": 11, "text": "O(\\log\\, n)." }, { "math_id": 12, "text": "O(\\log r)" }, { "math_id": 13, "text": "O(1)" } ]
https://en.wikipedia.org/wiki?curid=6582436
6582659
Moving least squares
Moving least squares is a method of reconstructing continuous functions from a set of unorganized point samples via the calculation of a weighted least squares measure biased towards the region around the point at which the reconstructed value is requested. In computer graphics, the moving least squares method is useful for reconstructing a surface from a set of points. Often it is used to create a 3D surface from a point cloud through either downsampling or upsampling. In numerical analysis, moving least squares methods have also been used and generalized to solve PDEs on curved surfaces and other geometries for which it is difficult to obtain standard discretizations. This includes numerical methods developed for curved surfaces for solving scalar parabolic PDEs and vector-valued hydrodynamic PDEs. In machine learning, moving least squares methods have also been used to develop model classes and learning methods. This includes function regression methods and neural network function and operator regression approaches, such as GMLS-Nets. Definition. Consider a function formula_0 and a set of sample points formula_1. Then, the moving least squares approximation of degree formula_2 at the point formula_3 is formula_4 where formula_5 minimizes the weighted least-squares error formula_6 over all polynomials formula_7 of degree formula_2 in formula_8. Here formula_9 is the weight function, and it tends to zero as formula_10. In a typical example, formula_11; the smooth interpolator of "order 3" is then a quadratic interpolator.
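As a concrete illustration of the definition, the following sketch (an added example with invented sample data, not code from any reference) evaluates a one-dimensional moving least squares approximation of degree formula_2 at a query point, using the Gaussian weight formula_11 mentioned above: at each query point it solves a small weighted least-squares problem for a local polynomial and evaluates that polynomial there.

```python
# Illustrative sketch: degree-m moving least squares approximation at a point x,
# for scattered 1-D samples (x_i, f_i), with Gaussian weight theta(s) = exp(-s**2).
import numpy as np

def mls_approx(x, xi, fi, m=2):
    """Evaluate the degree-m moving least squares fit at the query point x."""
    w = np.exp(-(x - xi) ** 2)                 # theta(|x - x_i|), biased towards x
    V = np.vander(xi, m + 1, increasing=True)  # rows [1, x_i, x_i**2, ..., x_i**m]
    W = np.diag(w)
    # Weighted normal equations (V^T W V) c = V^T W f for the local polynomial
    # coefficients c, then evaluate the polynomial at x.
    c = np.linalg.solve(V.T @ W @ V, V.T @ W @ fi)
    return np.polyval(c[::-1], x)              # polyval wants highest degree first

# Example: reconstruct a smooth function from noisy, unorganized samples.
rng = np.random.default_rng(0)
xi = np.linspace(0.0, 2.0 * np.pi, 40)
fi = np.sin(xi) + 0.05 * rng.standard_normal(xi.size)
print(mls_approx(1.0, xi, fi, m=2), np.sin(1.0))
```

A design note: the polynomial coefficients depend on the query point through the weights, so the fit is recomputed for every evaluation point; this is what makes the reconstruction "moving" rather than a single global least-squares fit.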
[ { "math_id": 0, "text": "f: \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 1, "text": "S = \\{ (x_i,f_i) | f(x_i) = f_i \\} " }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "\\tilde{p}(x)" }, { "math_id": 5, "text": "\\tilde{p}" }, { "math_id": 6, "text": "\\sum_{i \\in I} (p(x_i)-f_i)^2\\theta(\\|x-x_i\\|)" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "\\mathbb{R}^n" }, { "math_id": 9, "text": "\\theta(s)" }, { "math_id": 10, "text": "s\\to \\infty" }, { "math_id": 11, "text": "\\theta(s) = e^{-s^2}" } ]
https://en.wikipedia.org/wiki?curid=6582659
65827
Silicone
Family of polymers of the repeating form [R2Si–O–SiR2] In organosilicon and polymer chemistry, a silicone or polysiloxane is a polymer composed of repeating siloxane units (–R2Si–O–, where R denotes an organic group). They are typically colorless oils or rubber-like substances. Silicones are used in sealants, adhesives, lubricants, medicine, cooking utensils, thermal insulation, and electrical insulation. Some common forms include silicone oil, grease, rubber, resin, and caulk. Chemistry. Alfred Stock and Carl Somiesky examined the hydrolysis of dichlorosilane, a reaction that was proposed initially to give the monomer H2SiO: formula_0 When the hydrolysis is performed by treating a solution of dichlorosilane in benzene with water, the product was determined to have the approximate formula H2SiO. Higher polymers were proposed to form with time. Most polysiloxanes feature organic substituents, e.g., methyl and phenyl groups. All polymerized siloxanes or polysiloxanes, i.e., silicones, consist of an inorganic silicon–oxygen backbone chain (–Si–O–Si–O–Si–O–) with two organic groups attached to each silicon center. The materials can be cyclic or polymeric. By varying the chain lengths, side groups, and crosslinking, silicones can be synthesized with a wide variety of properties and compositions. They can vary in consistency from liquid to gel to rubber to hard plastic. The most common siloxane is linear polydimethylsiloxane (PDMS), a silicone oil. The second-largest group of silicone materials is based on silicone resins, which are formed by branched and cage-like oligosiloxanes. Terminology and history. F. S. Kipping coined the word "silicone" in 1901 to describe the formula of polydiphenylsiloxane, Ph2SiO (Ph = phenyl, C6H5), by analogy with the formula of the ketone benzophenone, Ph2CO (his term was originally "silicoketone"). Kipping was well aware that polydiphenylsiloxane is polymeric whereas benzophenone is monomeric and noted the contrasting properties of Ph2SiO and Ph2CO. The discovery of the structural differences between Kipping's molecules and the ketones means that "silicone" is no longer the correct term (though it remains in common usage) and that the term "siloxane" is preferred according to the nomenclature of modern chemistry. James Franklin Hyde (born 11 March 1903) was an American chemist and inventor. He has been called the "Father of Silicones" and is credited with the launch of the silicone industry in the 1930s. His most notable contributions include his creation of silicone from silicon compounds and his method of making fused silica, a high-quality glass later used in aeronautics, advanced telecommunications, and computer chips. His work led to the formation of Dow Corning, an alliance between the Dow Chemical Company and Corning Glass Works that was specifically created to produce silicone products. Silicone is often confused with silicon, but they are distinct substances. Silicon is a chemical element, a hard dark-grey semiconducting metalloid, which in its crystalline form is used to make integrated circuits ("electronic chips") and solar cells. Silicones are compounds that contain silicon, carbon, hydrogen, oxygen, and perhaps other kinds of atoms as well, and have many very different physical and chemical properties. Compounds containing silicon–oxygen double bonds, now called silanones, but which could deserve the name "silicone", have long been identified as intermediates in gas-phase processes such as chemical vapor deposition in microelectronics production, and in the formation of ceramics by combustion. However, they have a strong tendency to polymerize into siloxanes.
The first stable silanone was obtained in 2014 by A. Filippou and others. Synthesis. Most common are materials based on polydimethylsiloxane, which is derived by hydrolysis of dimethyldichlorosilane. This dichloride reacts with water as follows: formula_1 The polymerization typically produces linear chains capped with Si–Cl or Si–OH (silanol) groups. Under different conditions, the product is cyclic rather than a linear chain. For consumer applications such as caulks, silyl acetates are used instead of silyl chlorides. The hydrolysis of the acetates produces the less dangerous acetic acid (the acid found in vinegar) as the reaction product of a much slower curing process. This chemistry is used in many consumer applications, such as silicone caulk and adhesives. formula_2 Branches or crosslinks in the polymer chain can be introduced by using organosilicone precursors with fewer alkyl groups, such as methyl trichlorosilane and methyltrimethoxysilane. Ideally, each molecule of such a compound becomes a branch point. This process can be used to produce hard silicone resins. Similarly, precursors with three methyl groups can be used to limit molecular weight, since each such molecule has only one reactive site and so forms the end of a siloxane chain. Combustion. When silicone is burned in air or oxygen, it forms solid silica (silicon dioxide, SiO2) as a white powder, char, and various gases. The readily dispersed powder is sometimes called silica fume. The pyrolysis of certain polysiloxanes under an inert atmosphere is a valuable pathway towards the production of amorphous silicon oxycarbide ceramics, also known as polymer-derived ceramics. Polysiloxanes terminated with functional ligands such as vinyl, mercapto or acrylate groups have been cross-linked to yield preceramic polymers, which can be photopolymerised for the additive manufacturing of polymer-derived ceramics by stereolithography techniques. Properties. Silicones exhibit many useful characteristics, including thermal stability, water repellency, and good electrical insulating properties. Silicone can be developed into rubber sheeting, where it has other properties, such as being FDA compliant. This extends the uses of silicone sheeting to industries that demand hygiene, for example, food and beverage, and pharmaceuticals. Applications. Silicones are used in many products. "Ullmann's Encyclopedia of Industrial Chemistry" lists the following major categories of application: Electrical (e.g. insulation), electronics (e.g., coatings), household (e.g., sealants and cooking utensils), automobile (e.g. gaskets), airplane (e.g., seals), office machines (e.g. keyboard pads), medicine and dentistry (e.g. tooth impression molds), textiles and paper (e.g. coatings). For these applications, an estimated 400,000 tonnes of silicones were produced in 1991. Specific examples, both large and small, are presented below. Automotive. In the automotive field, silicone grease is typically used as a lubricant for brake components since it is stable at high temperatures, is not water-soluble, and is far less likely than other lubricants to foul. DOT 5 brake fluids are based on liquid silicones. Automotive spark plug wires are insulated by multiple layers of silicone to prevent sparks from jumping to adjacent wires, causing misfires. Silicone tubing is sometimes used in automotive intake systems (especially for engines with forced induction). Sheet silicone is used to manufacture gaskets used in automotive engines, transmissions, and other applications.
Automotive body manufacturing plants and paint shops avoid silicones, as trace contamination may cause "fish eyes", which are small, circular craters that mar a smooth finish. Additionally, silicone compounds such as silicone rubber are used as coatings and sealants for airbags; the high strength of silicone rubber makes it an optimal adhesive and sealant for high-impact airbags. Silicones in combination with thermoplastics provide improvements in scratch and mar resistance and a lowered coefficient of friction. Aerospace. Silicone is a widely used material in the aerospace industry due to its sealing properties, stability across an extreme temperature range, durability, sound-dampening and anti-vibration qualities, and naturally flame-retardant properties. Maintaining extreme functionality is paramount for passenger safety in the aerospace industry, so each component on an aircraft requires high-performance materials. Specially developed aerospace grades of silicone are stable from −70 to 220 °C; these grades can be used in the construction of gaskets for windows and cabin doors. During operation, aircraft go through large temperature fluctuations in a relatively short period of time, from the ambient temperatures when on the ground in hot countries to sub-zero temperatures when flying at high altitude. Silicone rubber can be molded with tight tolerances, ensuring that gaskets form airtight seals both on the ground and in the air, where atmospheric pressure decreases. Silicone rubber's resistance to heat corrosion enables it to be used for gaskets in aircraft engines, where it will outlast other types of rubber, both improving aircraft safety and reducing maintenance costs. The silicone acts to seal instrument panels and other electrical systems in the cockpit, protecting printed circuit boards from the risks of extreme altitude such as moisture and extremely low temperature. Silicone can be used as a sheath to protect wires and electrical components from any dust or ice that may creep into a plane's inner workings. As the nature of air travel results in much noise and vibration, powerful engines, landings, and high speeds all need to be considered to ensure passenger comfort and safe operation of the aircraft. As silicone rubber has exceptional noise-reduction and anti-vibration properties, it can be formed into small components and fitted into small gaps, ensuring that equipment such as overhead lockers, vent ducts, hatches, entertainment system seals, and LED lighting systems is protected from unwanted vibration. Solid propellant. Polydimethylsiloxane (PDMS)-based binders, along with ammonium perchlorate (NH4ClO4), are used as fast-burning solid propellants in rockets. Building construction. The strength and reliability of silicone rubber are widely acknowledged in the construction industry. One-part silicone sealants and caulks are in common use to seal gaps, joints and crevices in buildings. One-part silicones cure by absorbing atmospheric moisture, which simplifies installation. In plumbing, silicone grease is typically applied to O-rings in brass taps and valves, preventing lime from sticking to the metal. Structural silicone has also been used in curtain wall building façades since 1974, when the Art Institute of Chicago became the first building to receive exterior glass fixed only with the material. Silicone membranes have been used to cover and restore industrial roofs, thanks to their extreme UV resistance and ability to maintain waterproof performance for decades. 3D printing.
Silicone rubber can be 3D printed (liquid deposition modelling, LDM) using pump-nozzle extrusion systems. Unfortunately, standard silicone formulations are optimized to be used by extrusion and injection moulding machines and are not applicable in LDM-based 3D printing. The rheological behavior and the pot life need to be adjusted. 3D printing also requires the use of a removable support material that is compatible with the silicone rubber. Coatings. Silicone films can be applied to such silica-based substrates as glass to form a covalently bonded hydrophobic coating. Such coatings were developed for use on aircraft windshields to repel water and to preserve visibility, without requiring mechanical windshield wipers, which are impractical at supersonic speeds. Similar treatments were eventually adapted to the automotive market in products marketed by Rain-X and others. Many fabrics can be coated or impregnated with silicone to form a strong, waterproof composite such as silnylon. A silicone polymer can be suspended in water by using stabilizing surfactants. This allows water-based formulations to be used to deliver many ingredients that would otherwise require a stronger solvent, or be too viscous to use effectively. For example, a waterborne formulation using a silane's reactivity and penetration ability into a mineral-based surface can be combined with water-beading properties from a siloxane to produce a more useful surface protection product. Cookware. As a low-taint, non-toxic material, silicone can be used where contact with food is required. Silicone is becoming an important product in the cookware industry, particularly bakeware and kitchen utensils. Silicone is used as an insulator in heat-resistant potholders and similar items; however, it is more conductive of heat than similar, less dense fiber-based products. Silicone oven mitts are able to withstand high temperatures, making it possible to reach into boiling water. Other products include molds for chocolate, ice, cookies, muffins, and various other foods; non-stick bakeware and reusable mats used on baking sheets; steamers, egg boilers or poachers; cookware lids, pot holders, trivets, and kitchen mats. Defoaming. Silicones are used as active compounds in defoamers due to their low water solubility and good spreading properties. Dry cleaning. Liquid silicone can be used as a dry cleaning solvent, providing an alternative to the traditional chlorine-containing perchloroethylene (perc) solvent. The use of silicones in dry cleaning reduces the environmental effect of a typically high-polluting industry. Electronics. Electronic components are sometimes encased in silicone to increase stability against mechanical and electrical shock, radiation and vibration, a process called "potting". Silicones are used where durability and high performance are demanded of components under extreme environmental conditions, such as in space (satellite technology). They are selected over polyurethane or epoxy encapsulation when a wide operating temperature range is required (−65 to 315 °C). Silicones also have the advantage of little exothermic heat rise during cure, low toxicity, good electrical properties, and high purity. Silicones are often components of thermal pastes used to improve heat transfer from power-dissipating electronic components to heat sinks. The use of silicones in electronics is not without problems, however. Silicones are relatively expensive and can be attacked by certain solvents.
Silicone easily migrates as either a liquid or vapor onto other components. Silicone contamination of electrical switch contacts can lead to failures by causing an increase in contact resistance, often late in the life of the contact, well after any testing is completed. Use of silicone-based spray products in electronic devices during maintenance or repairs can cause later failures. Firestops. Silicone foam has been used in North American buildings in an attempt to firestop openings within the fire-resistance-rated wall and floor assemblies to prevent the spread of flames and smoke from one room to another. When properly installed, silicone-foam firestops can be fabricated for building code compliance. Advantages include flexibility and high dielectric strength. Disadvantages include combustibility (hard to extinguish) and significant smoke development. Silicone-foam firestops have been the subject of controversy and press attention due to smoke development from pyrolysis of combustible components within the foam, hydrogen gas escape, shrinkage, and cracking. These problems have led to reportable events among licensees (operators of nuclear power plants) of the Nuclear Regulatory Commission (NRC). Silicone firestops are also used in aircraft. Jewelry. Silicone is a popular alternative to traditional metals (such as silver and gold) with jewelry, specifically rings. Silicone rings are commonly worn in professions where metal rings can lead to injuries, such as electrical conduction and ring avulsions. During the mid-2010's, some professional athletes began wearing silicone rings as an alternative during games. Lubricants. Silicone greases are used for many purposes, such as bicycle chains, airsoft gun parts, and a wide range of other mechanisms. Typically, a dry-set lubricant is delivered with a solvent carrier to penetrate the mechanism. The solvent then evaporates, leaving a clear film that lubricates but does not attract dirt and grit as much as an oil-based or other traditional "wet" lubricant. Silicone personal lubricants are also available for use in medical procedures or sexual activity. Medicine and cosmetic surgery. Silicone is used in microfluidics, seals, gaskets, shrouds, and other applications requiring high biocompatibility. Additionally, the gel form is used in bandages and dressings, breast implants, testicle implants, pectoral implants, contact lenses, and a variety of other medical uses. Scar treatment sheets are often made of medical grade silicone due to its durability and biocompatibility. Polydimethylsiloxane (PDMS) is often used for this purpose, since its specific crosslinking results in a flexible and soft silicone with high durability and tack. It has also been used as the hydrophobic block of amphiphilic synthetic block copolymers used to form the vesicle membrane of polymersomes. Illicit cosmetic silicone injections may induce chronic and definitive silicone blood diffusion with dermatologic complications. Ophthalmology uses many products such as silicone oil used to replace the vitreous humor following vitrectomy, silicone intraocular lenses following cataract extraction, silicone tubes to keep a nasolacrimal passage open following dacryocystorhinostomy, canalicular stents for canalicular stenosis, punctal plugs for punctal occlusion in dry eyes, silicone rubber and bands as an external tamponade in tractional retinal detachment, and anteriorly-located break in rhegmatogenous retinal detachment. Addition and condensation (e.g. 
polyvinyl siloxane) silicones find wide application as a dental impression material due to its hydrophobic property and thermal stability. Moldmaking. Two-part silicone systems are used as rubber molds to cast resins, foams, rubber, and low-temperature alloys. A silicone mold generally requires little or no mold-release or surface preparation, as most materials do not adhere to silicone. For experimental uses, ordinary one-part silicone can be used to make molds or to mold into shapes. If needed, common vegetable cooking oils or petroleum jelly can be used on mating surfaces as a mold-release agent. Silicone cooking molds used as bakeware do not require coating with cooking oil; in addition, the flexibility of the rubber allows the baked food to be easily removed from the mold after cooking. Personal care. Silicones are ingredients widely used in skincare, color cosmetic and hair care applications. Some silicones, notably the amine functionalized amodimethicones, are excellent hair conditioners, providing improved compatibility, feel, and softness, and lessening frizz. The phenyl dimethicones, in another silicone family, are used in reflection-enhancing and color-correcting hair products, where they increase shine and glossiness (and possibly impart subtle color changes). Phenyltrimethicones, unlike the conditioning amodimethicones, have refractive indices (typically 1.46) close to that of a human hair (1.54). However, if included in the same formulation, amodimethicone and phenyltrimethicone interact and dilute each other, making it difficult to achieve both high shine and excellent conditioning in the same product. Silicone rubber is commonly used in baby bottle nipples (teats) for its cleanliness, aesthetic appearance, and low extractable content. Silicones are used in shaving products and personal lubricants. Toys and hobbies. Silly Putty and similar materials are composed of silicones dimethyl siloxane, polydimethylsiloxane, and decamethyl cyclopentasiloxane, with other ingredients. This substance is noted for its unusual characteristics, e.g., that it bounces, but breaks when given a sharp blow; it will also flow like a liquid and form a puddle given enough time. Silicone "rubber bands" are a long-lasting popular replacement refill for real rubber bands in the 2013 fad "rubber band loom" toys at two to four times the price (in 2014). Silicone bands also come in bracelet sizes that can be custom embossed with a name or message. Large silicone bands are also sold as utility tie-downs. Formerol is a silicone rubber (marketed as Sugru) used as an arts-and-crafts material, as its plasticity allows it to be molded by hand like modeling clay. It hardens at room temperature and it is adhesive to various substances including glass and aluminum. Oogoo is an inexpensive silicone clay, which can be used as a substitute for Sugru. In making aquariums, manufacturers now commonly use 100% silicone sealant to join glass plates. Glass joints made with silicone sealant can withstand great pressure, making obsolete the original aquarium construction method of angle-iron and putty. This same silicone is used to make hinges in aquarium lids or for minor repairs. However, not all commercial silicones are safe for aquarium manufacture, nor is silicone used for the manufacture of acrylic aquariums as silicones do not have long-term adhesion to plastics. Special Effects. 
Silicone is used in special effects as a material for simulating realistic skin, either for prosthetic makeup, prop body parts, or rubber masks. Platinum silicones are ideal for simulating flesh and skin due to their strength, firmness, and translucency, creating a convincing effect. Silicone masks have an advantage over latex masks in that, because of the material properties, the mask hugs the wearer's face and moves in a realistic manner with the wearer's facial expressions. Silicone is often used as a hypoallergenic substitute for foam latex prosthetics. Marketing. The leading global manufacturers of silicone base materials belong to three regional organizations: the European Silicone Center (CES) in Brussels, Belgium; the Silicones Environmental, Health, and Safety Center (SEHSC) in Herndon, Virginia, US; and the Silicone Industry Association of Japan (SIAJ) in Tokyo, Japan. Dow Corning Silicones, Evonik Industries, Momentive Performance Materials, Milliken and Company (SiVance Specialty Silicones), Shin-Etsu Silicones, Wacker Chemie, Bluestar Silicones, JNC Corporation, Wacker Asahikasei Silicone, and Dow Corning Toray represent the collective membership of these organizations. A fourth organization, the Global Silicone Council (GSC), acts as an umbrella structure over the regional organizations. All four are non-profit, having no commercial role; their primary missions are to promote the safety of silicones from a health, safety, and environmental perspective. As the European chemical industry is preparing to implement the Registration, Evaluation, and Authorisation of Chemicals (REACH) legislation, CES is leading the formation of a consortium of silicones, silanes, and siloxanes producers and importers to facilitate data and cost-sharing. Safety and environmental considerations. Silicone compounds are pervasive in the environment. Particular silicone compounds, the cyclic siloxanes D4 and D5, are air and water pollutants and have negative health effects on test animals. They are used in various personal care products. The European Chemicals Agency found that "D4 is a persistent, bioaccumulative and toxic (PBT) substance and D5 is a very persistent, very bioaccumulative (vPvB) substance". Other silicones biodegrade readily, a process that is accelerated by a variety of catalysts, including clays. Cyclic silicones have been shown to give rise to silanols during biodegradation in mammals. The resulting silanediols and silanetriols are capable of inhibiting hydrolytic enzymes such as thermolysin and acetylcholinesterase. However, the doses required for inhibition are orders of magnitude higher than those resulting from the accumulated exposure to consumer products containing cyclomethicone. At moderately elevated temperatures in an oxygen-containing atmosphere, polydimethylsiloxane releases traces of formaldehyde (but lesser amounts than other common materials such as polyethylene). Under these conditions, silicones were found to have lower formaldehyde generation than mineral oil and plastics (less than 3 to 48 μg CH2O/(g·hr) for a high-consistency silicone rubber, versus around 400 μg CH2O/(g·hr) for plastics and mineral oil). At higher temperatures, copious amounts of formaldehyde have been found to be produced by all silicones (1,200 to 4,600 μg CH2O/(g·hr)).
[ { "math_id": 0, "text": "\\ce{SiH2Cl2 + H2O -> H2SiO + 2 HCl}" }, { "math_id": 1, "text": "n\\ \\ce{Si(CH3)2Cl2} + n\\ \\ce{H2O -> [Si(CH3)2O]}_n + 2n\\ \\ce{HCl}" }, { "math_id": 2, "text": "n\\ \\ce{Si(CH3)2(CH3COO)2} + n\\ \\ce{H2O -> [Si(CH3)2O]}_n + 2n\\ \\ce{CH3COOH}" } ]
https://en.wikipedia.org/wiki?curid=65827
6583010
Paley–Wiener integral
In mathematics, the Paley–Wiener integral is a simple stochastic integral. When applied to classical Wiener space, it is less general than the Itō integral, but the two agree when they are both defined. The integral is named after its discoverers, Raymond Paley and Norbert Wiener. Definition. Let formula_0 be an abstract Wiener space with abstract Wiener measure formula_1 on formula_2. Let formula_3 be the adjoint of formula_4. (We have abused notation slightly: strictly speaking, formula_5, but since formula_6 is a Hilbert space, it is isometrically isomorphic to its dual space formula_7, by the Riesz representation theorem.) It can be shown that formula_8 is an injective function and has dense image in formula_6. Furthermore, it can be shown that every linear functional formula_9 is also square-integrable: in fact, formula_10 This defines a natural linear map from formula_11 to formula_12, under which formula_13 goes to the equivalence class formula_14 of formula_15 in formula_12. This is well-defined since formula_8 is injective. This map is an isometry, so it is continuous. However, since a continuous linear map between Banach spaces such as formula_6 and formula_12 is uniquely determined by its values on any dense subspace of its domain, there is a unique continuous linear extension formula_16 of the above natural map formula_17 to the whole of formula_6. This isometry formula_16 is known as the Paley–Wiener map. formula_18, also denoted formula_19, is a function on formula_2 and is known as the Paley–Wiener integral (with respect to formula_20). It is important to note that the Paley–Wiener integral for a particular element formula_20 is a function on formula_2. The notation formula_19 does not really denote an inner product (since formula_21 and formula_22 belong to two different spaces), but is a convenient abuse of notation in view of the Cameron–Martin theorem. For this reason, many authors prefer to write formula_23 or formula_24 rather than using the more compact but potentially confusing formula_19 notation. See also. Other stochastic integrals:
[ { "math_id": 0, "text": "i : H \\to E" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "j : E^* \\to H" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "j : E^* \\to H^*" }, { "math_id": 6, "text": "H" }, { "math_id": 7, "text": "H^*" }, { "math_id": 8, "text": "j" }, { "math_id": 9, "text": "f \\in E^*" }, { "math_id": 10, "text": "\\| f \\|_{L^{2} (E, \\gamma; \\mathbb{R})} = \\| j(f) \\|_{H}" }, { "math_id": 11, "text": "j(E^*)" }, { "math_id": 12, "text": "L^2(E, \\gamma; \\mathbb{R})" }, { "math_id": 13, "text": "j(f) \\in j(E^*) \\subseteq H" }, { "math_id": 14, "text": "[f]" }, { "math_id": 15, "text": "f" }, { "math_id": 16, "text": "I : H \\to L^2(E, \\gamma; \\mathbb{R})" }, { "math_id": 17, "text": "j(E^*) \\to L^2(E, \\gamma; \\mathbb{R})" }, { "math_id": 18, "text": "I(h)" }, { "math_id": 19, "text": "\\langle h, x \\rangle^\\sim" }, { "math_id": 20, "text": "h \\in H" }, { "math_id": 21, "text": "h" }, { "math_id": 22, "text": "x" }, { "math_id": 23, "text": "\\langle h, - \\rangle^\\sim (x)" }, { "math_id": 24, "text": "I(h)(x)" } ]
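A standard concrete case, added here as an illustration rather than taken from the text above: on the classical Wiener space, E is the space of continuous paths on [0, T] starting at 0 with Wiener measure γ, and H is the Cameron–Martin space. In that setting the Paley–Wiener integral of an element h of H agrees with the Wiener integral of its derivative against the path, and the defining isometry becomes the familiar Itō isometry:
H = \Bigl\{\, h : [0,T] \to \mathbb{R} \;:\; h(0) = 0,\ h \text{ absolutely continuous},\ \dot h \in L^2[0,T] \,\Bigr\},
\qquad
\langle h, x \rangle^\sim = I(h)(x) = \int_0^T \dot h(t)\, \mathrm{d}x(t),
\qquad
\mathbb{E}_\gamma\bigl[\, I(h)^2 \,\bigr] = \| h \|_H^2 = \int_0^T \dot h(t)^2 \, \mathrm{d}t .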
https://en.wikipedia.org/wiki?curid=6583010
6583188
Banach–Mazur theorem
In functional analysis, a field of mathematics, the Banach–Mazur theorem is a theorem roughly stating that most well-behaved normed spaces are subspaces of the space of continuous paths. It is named after Stefan Banach and Stanisław Mazur. Statement. Every real, separable Banach space ("X", ||⋅||) is isometrically isomorphic to a closed subspace of C0([0, 1], R), the space of all continuous functions from the unit interval into the real line. Comments. On the one hand, the Banach–Mazur theorem seems to tell us that the seemingly vast collection of all separable Banach spaces is not that vast or difficult to work with, since a separable Banach space is "only" a collection of continuous paths. On the other hand, the theorem tells us that C0([0, 1], R) is a "really big" space, big enough to contain every possible separable Banach space. Non-separable Banach spaces cannot embed isometrically in the separable space C0([0, 1], R), but for every Banach space X, one can find a compact Hausdorff space K and an isometric linear embedding j of X into the space C("K") of scalar continuous functions on K. The simplest choice is to let K be the unit ball of the continuous dual "X" ′, equipped with the w*-topology. This unit ball K is then compact by the Banach–Alaoglu theorem. The embedding j is introduced by saying that for every "x" ∈ "X", the continuous function "j"("x") on K is defined by formula_0 The mapping j is linear, and it is isometric by the Hahn–Banach theorem. Another generalization was given by Kleiber and Pervin (1969): a metric space of density equal to an infinite cardinal α is isometric to a subspace of C0([0,1]α, R), the space of real continuous functions on the product of α copies of the unit interval. Stronger versions of the theorem. Let us write C"k"[0, 1] for C"k"([0, 1], R). In 1995, Luis Rodríguez-Piazza proved that the isometry "i" : "X" → C0[0, 1] can be chosen so that every non-zero function in the image "i"("X") is nowhere differentiable. Put another way, if "D" ⊂ C0[0, 1] consists of functions that are differentiable at at least one point of [0, 1], then i can be chosen so that "i"("X") ∩ "D" = {0}. This conclusion applies to the space C0[0, 1] itself, hence there exists a linear map "i" : C0[0, 1] → C0[0, 1] that is an isometry onto its image, such that the image under i of C1[0, 1] (the subspace consisting of functions that are everywhere differentiable with continuous derivative) intersects D only at 0: thus the space of smooth functions (with respect to the uniform distance) is isometrically isomorphic to a space of nowhere-differentiable functions. Note that the (metrically incomplete) space of smooth functions is dense in C0[0, 1].
[ { "math_id": 0, "text": " \\forall x' \\in K: \\qquad j(x)(x') = x'(x)." } ]
https://en.wikipedia.org/wiki?curid=6583188
65833164
Park effects
In sports, park effects are the unique factors of each stadium or arena that impact a game's outcome. These effects are broken down into different components and used in advanced statistical analysis. While most sports have regulation-sized fields, some sports and leagues, such as Major League Baseball (MLB) and NCAA hockey, allow for varying field-of-play dimensions. The most common example of a park effect is a baseball stadium's batting park factor, but other factors exist that impact all sports. Every stadium throughout the world has its own unique effects that impact the sports played there. Park Factors (Baseball). Because baseball allows for unique field dimensions, each stadium is prone to favoring certain outcomes, and thus can favor pitchers or hitters. This has become the most prominent park effect, known as the park factor (PF), which indicates the difference between a team's offense and defense in home and road games. These calculations generally exclude Coors Field, due to its higher altitude, and interleague games, due to the varying use of a designated hitter (DH) up until the 2020 season, in which the DH was introduced to the National League. Used as part of many statistical prediction models as well as utilized strategically in a variety of ways, park factors can explain the varying offensive outputs of specific players, teams and eras of baseball. These factors can be produced from a multitude of offensive statistics, but are generally, and most easily, calculated from team runs and home runs. Calculations. A general description of park factors involves comparing the number of runs scored and allowed by each individual team at home against the league average. Although the league average can be adjusted for each team to account for which stadiums each team actually played in, the difference is statistically marginal and thus ignored. Runs scored are compared to games, or outs (as there are 27 outs in a game), rather than plate appearances, because the number of outs per game stays constant while the number of plate appearances changes. The following formulas calculate park factors: formula_0 where The intermediate park factor (iPF) makes PF applicable to composite stats rather than just home stats. iPF is calculated as follows: formula_1 The final PF (fPF) uses weights to regress the data based on the year it came from. Although weights can be calculated in various ways, the general consensus is that any differences are marginal. fPF is calculated as follows: formula_2 where Xi is the weight for the ith year. These calculations give an fPF on a scale centered at one, meaning one is league average and every hundredth above or below one corresponds to one percent above or below league average. For example, an fPF of 1.20 means offense, based on runs, is expected to be increased by 20%; conversely, an fPF of 0.80 means offense is expected to be decreased by 20%. Application outside Major League Baseball. Park factors have more recently been applied to leagues outside MLB. In Minor League Baseball, there are 160 affiliate teams above the rookie complex level, across 14 leagues, all of which have their own park factors, determined by various offensive stats, most commonly runs and home runs. Further examples include the calculation of park factors in Nippon Professional Baseball, as well as prospective future calculations by FanGraphs in the Korea Baseball Organization. 
Park factors have even been calculated for non-professional leagues, such as the Cape Cod League, a top summer collegiate baseball league in the United States. Altitude. Altitude affects all sports in various ways. At higher altitudes, all physical activity becomes more difficult for a multitude of reasons, including lower oxygen levels. But beyond the impact on the athletes, who experience physiological effects in all sports, higher altitudes result in less air resistance on moving objects. In baseball, this lower air resistance produces more runs. While much of the research regarding the effects of altitude on the flight of baseballs relates to more home runs, altitude has been shown to increase offense in all aspects in which contact is made. The only MLB stadium at drastically high altitude is Coors Field; however, there are several other professional stadiums throughout the country at comparably high altitudes. MLB stadium altitudes range from the 5,211 feet above sea level of Coors Field to the 20 feet of Philadelphia's Citizens Bank Park. Yet, one must consider that much of the influence of altitude is de facto captured in a stadium's park factor, since the park factor reflects total offensive output by stadium. In basketball, just as in baseball, altitude affects the flight of the ball. Shooters tend to expand their shooting range, as the basketball experiences less air resistance when shot. This is why teams that play their home games at higher altitudes are generally expected to have a stronger home-court advantage; however, these same high-altitude teams have also been seen to produce weaker performances in away games. In football, these same effects on moving objects are in play, allowing the football to travel further with the same energy put into it. Thus kickers tend to kick from longer distances with ease, and quarterbacks throw further with the same ease. However, players have disagreed on how strong an influence the altitude of the venue has on the outcome of the game and the performance of the players. Soccer, being a worldwide sport, has more variance among its stadiums and their geography. This results in stadiums at over double the maximum American stadium altitude and, consequently, stronger effects. This has driven much controversy over the validity of certain stadiums, such as the Estadio Hernando Siles in Bolivia, which sits close to 12,000 feet above sea level. Due to the stadium's extremely high altitude, the Estadio Hernando Siles at one point faced a FIFA ban attributed to an unfair advantage, but the ban was revoked after about a year. Latin American countries and leaders argued that the advantage was not intentional but rather a lay of the land, and that the ban discriminated against many South American countries. Dome Effects (Basketball). One of the more noticeable park effects in basketball occurs in NCAA basketball. Shooting numbers are noticeably lower in domed stadiums, usually football stadiums temporarily converted into basketball arenas, compared to typical basketball arenas. Although the reason this is a recurring phenomenon is unclear, it has become so well accepted that even sports betting bookmakers have been known to lower point-based lines for games in domed stadiums. The most notable stadium that has wreaked havoc on shooting numbers is NRG Stadium, the usual home of the Houston Texans but at times the home of some college basketball games. However, there are studies which debunk this effect, labeling it as a myth. 
Using various measures of offense (points, free throws, multiple shooting percentages), it has been argued that domed stadiums do not average significantly lower offensive numbers, and in some instances offense is better. Rink Size (Hockey). While the NHL has a constant rink size throughout the league, Olympic hockey uses a larger rink, and NCAA hockey allows various rink sizes. Although not measured to quite the extent of baseball, varying rink sizes do strongly impact the outcome of the game and how specific players perform. Those who have played on rinks of various sizes tend to agree that the smaller the rink, the quicker the pace of the game; conversely, on larger rinks, where there is more space to play, players have more time to react and skate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "PF = \\frac{H * T}{(T-1)(R+H)} " }, { "math_id": 1, "text": "iPF = \\frac{PF +1}{2}" }, { "math_id": 2, "text": "fPF = 1-(1-iPF)X_i" } ]
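The following short sketch, added for illustration, turns the steps above into code. Because the article's general PF formula is quoted without its variable definitions, the first function uses the simpler and widely used runs-per-game ratio described in the prose (runs scored plus runs allowed per home game, divided by the same quantity on the road); the numbers and the weight value below are made up for the example.

def park_factor(home_rs, home_ra, home_g, road_rs, road_ra, road_g):
    """Simplified runs-based park factor: the run environment at home relative
    to the road (a stand-in for the article's PF formula, whose variable list
    is not reproduced above)."""
    home_rate = (home_rs + home_ra) / home_g
    road_rate = (road_rs + road_ra) / road_g
    return home_rate / road_rate

def intermediate_pf(pf):
    """iPF = (PF + 1) / 2, making the factor applicable to composite stats."""
    return (pf + 1) / 2

def final_pf(ipf, weight):
    """fPF = 1 - (1 - iPF) * X_i, regressing toward 1 with the year weight X_i."""
    return 1 - (1 - ipf) * weight

# Hypothetical season totals for a hitter-friendly park.
pf = park_factor(home_rs=420, home_ra=400, home_g=81,
                 road_rs=350, road_ra=360, road_g=81)
print(round(final_pf(intermediate_pf(pf), weight=0.6), 3))  # about 1.046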
https://en.wikipedia.org/wiki?curid=65833164
6583420
Adapted process
In the study of stochastic processes, a stochastic process is adapted (also referred to as a non-anticipating or non-anticipative process) if information about the value of the process at a given time is available at that same time. An informal interpretation is that "X" is adapted if and only if, for every realisation and every "n", "Xn" is known at time "n". The concept of an adapted process is essential, for instance, in the definition of the Itō integral, which only makes sense if the integrand is an adapted process. Definition. Let formula_0 be a probability space; let formula_1 be an index set with a total order formula_2 (often, this is formula_3, formula_4, formula_5 or formula_6); let formula_7 be a filtration of the sigma algebra formula_8; let formula_9 be a measurable space, the "state space"; and let formula_10 be a stochastic process. The stochastic process formula_11 is said to be adapted to the filtration formula_12 if the random variable formula_13 is a formula_14-measurable function for each formula_15. Examples. Consider a stochastic process "X" : [0, "T"] × Ω → R, and equip the real line R with its usual Borel sigma algebra generated by the open sets. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 1, "text": "I" }, { "math_id": 2, "text": "\\leq" }, { "math_id": 3, "text": "\\mathbb{N}" }, { "math_id": 4, "text": "\\mathbb{N}_0" }, { "math_id": 5, "text": "[0, T]" }, { "math_id": 6, "text": "[0, +\\infty)" }, { "math_id": 7, "text": "\\mathbb F = \\left(\\mathcal{F}_i\\right)_{i \\in I}" }, { "math_id": 8, "text": "\\mathcal{F}" }, { "math_id": 9, "text": "(S,\\Sigma)" }, { "math_id": 10, "text": "X_i: I \\times \\Omega \\to S" }, { "math_id": 11, "text": "(X_i)_{i\\in I}" }, { "math_id": 12, "text": "\\left(\\mathcal{F}_i\\right)_{i \\in I}" }, { "math_id": 13, "text": "X_i: \\Omega \\to S" }, { "math_id": 14, "text": "(\\mathcal{F}_i, \\Sigma)" }, { "math_id": 15, "text": "i \\in I" } ]
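As an added illustration of the definition (standard facts, not drawn from the text above): every process is adapted to the filtration it generates, while a process that looks ahead is not:
\mathcal{F}^X_t := \sigma\{\, X_s : s \le t \,\} \quad\Longrightarrow\quad (X_t)_{t \ge 0} \text{ is adapted to } (\mathcal{F}^X_t)_{t \ge 0};
Y_t := B_{t+1}, \text{ with } B \text{ a Brownian motion, is not adapted to } (\mathcal{F}^B_t)_{t \ge 0},
since Y_t depends on the future increment B_{t+1} - B_t, which is not measurable with respect to \mathcal{F}^B_t.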
https://en.wikipedia.org/wiki?curid=6583420
6583544
Progressively measurable process
In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies the stopped process is measurable. Being progressively measurable is a strictly stronger property than the notion of being an adapted process. Progressively measurable processes are important in the theory of Itô integrals. Definition. Let formula_0 be a probability space; let formula_1 be a measurable space, the "state space"; let formula_2 be a filtration of the sigma algebra formula_3; let formula_4 be a stochastic process (the index set could be formula_5 or formula_6 instead of formula_7); and let formula_8 denote the Borel sigma algebra on formula_9. The process formula_10 is said to be progressively measurable (or simply progressive) if, for every time formula_11, the map formula_12 defined by formula_13 is formula_14-measurable. This implies that formula_10 is formula_15-adapted. A subset formula_16 is said to be progressively measurable if the process formula_17 is progressively measurable in the sense defined above, where formula_18 is the indicator function of formula_19. The set of all such subsets formula_19 forms a sigma algebra on formula_20, denoted by formula_21, and a process formula_10 is progressively measurable in the sense of the previous paragraph if, and only if, it is formula_21-measurable. The space formula_22 of processes formula_23 for which the Itô integral formula_24 with respect to Brownian motion formula_25 is defined is the set of equivalence classes of formula_21-measurable processes in formula_26. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 1, "text": "(\\mathbb{X}, \\mathcal{A})" }, { "math_id": 2, "text": "\\{ \\mathcal{F}_{t} \\mid t \\geq 0 \\}" }, { "math_id": 3, "text": "\\mathcal{F}" }, { "math_id": 4, "text": "X : [0, \\infty) \\times \\Omega \\to \\mathbb{X}" }, { "math_id": 5, "text": "[0, T]" }, { "math_id": 6, "text": "\\mathbb{N}_{0}" }, { "math_id": 7, "text": "[0, \\infty)" }, { "math_id": 8, "text": "\\mathrm{Borel}([0, t])" }, { "math_id": 9, "text": "[0,t]" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "[0, t] \\times \\Omega \\to \\mathbb{X}" }, { "math_id": 13, "text": "(s, \\omega) \\mapsto X_{s} (\\omega)" }, { "math_id": 14, "text": "\\mathrm{Borel}([0, t]) \\otimes \\mathcal{F}_{t}" }, { "math_id": 15, "text": " \\mathcal{F}_{t} " }, { "math_id": 16, "text": "P \\subseteq [0, \\infty) \\times \\Omega" }, { "math_id": 17, "text": "X_{s} (\\omega) := \\chi_{P} (s, \\omega)" }, { "math_id": 18, "text": "\\chi_{P}" }, { "math_id": 19, "text": "P" }, { "math_id": 20, "text": "[0, \\infty) \\times \\Omega" }, { "math_id": 21, "text": "\\mathrm{Prog}" }, { "math_id": 22, "text": "L^2 (B)" }, { "math_id": 23, "text": "X : [0, T] \\times \\Omega \\to \\mathbb{R}^n" }, { "math_id": 24, "text": "\\int_0^T X_t \\, \\mathrm{d} B_t " }, { "math_id": 25, "text": "B" }, { "math_id": 26, "text": "L^2 ([0, T] \\times \\Omega; \\mathbb{R}^n)" } ]
https://en.wikipedia.org/wiki?curid=6583544
65836875
Nussinov algorithm
Nucleic acid structure prediction algorithm The Nussinov algorithm is a nucleic acid structure prediction algorithm used in computational biology to predict the folding of an RNA molecule; it makes use of dynamic programming principles. The algorithm was developed by Ruth Nussinov in the late 1970s. Background. RNA origami occurs when an RNA molecule "folds" and binds to itself. This folding often determines the function of the RNA molecule. RNA folds at different levels; this algorithm predicts the secondary structure of the RNA. Algorithm. Scoring. We score a solution by counting the total number of paired bases. Thus, maximizing the score amounts to maximizing the total number of bonds between bases. Motivation. Consider an RNA sequence formula_0 whose elements are taken from the set formula_1. Let us imagine we have an optimal solution to the subproblem of folding formula_2 to formula_3, and an optimal solution for folding formula_4 to formula_5, where formula_6. Now, to fold formula_2 to formula_7, we have two options: either leave formula_7 unpaired and keep the optimal solution for formula_2 to formula_3, or pair formula_7 with some base formula_8, where formula_9, and combine the optimal solutions for formula_2 to formula_10 and for formula_11 to formula_3. Algorithm. Consider an RNA sequence formula_0 of length formula_12 such that formula_13. Construct an formula_14 matrix formula_15. Initialize formula_15 such that formula_16 and formula_17 for formula_18. formula_19 will contain the maximum score for the subsequence formula_20. Now, fill in entries of formula_15 up and to the right, so that formula_21 where formula_22 After this step, we have a matrix formula_15 where formula_19 represents the optimal score of the folding of formula_20. To determine the structure of the folded RNA by traceback, we first create an empty list of pairs formula_23. We initialize with formula_24. Then, we follow one of three scenarios: if formula_25, the traceback of this subsequence is complete; if formula_26, we set formula_27 and continue the traceback; otherwise, we find some formula_28 such that formula_29 and formula_30 are complementary and formula_31, add the pair formula_32 to formula_23, and continue the traceback on both of the subproblems formula_33 and formula_34. (A runnable sketch of the fill and traceback steps is given after the formula list below.) When the traceback finishes, formula_23 contains all of the paired bases. Limitations. The Nussinov algorithm does not account for the three-dimensional shape of RNA, nor does it predict RNA pseudoknots. Furthermore, in its basic form, it does not account for a minimum stem-loop size. However, it is still useful as a fast algorithm for basic prediction of secondary structure.
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": " \\{A, U, C, G\\}" }, { "math_id": 2, "text": "S_i" }, { "math_id": 3, "text": "S_{j-1}" }, { "math_id": 4, "text": "S_u" }, { "math_id": 5, "text": "S_v" }, { "math_id": 6, "text": "i\\leq u\\leq v\\leq j-1" }, { "math_id": 7, "text": "S_{j}" }, { "math_id": 8, "text": "S_{k}" }, { "math_id": 9, "text": "i\\leq k<j" }, { "math_id": 10, "text": "S_{k-1}" }, { "math_id": 11, "text": "S_{k+1}" }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": "S_i\\in \\{A, U, C, G\\}" }, { "math_id": 14, "text": "n\\times n" }, { "math_id": 15, "text": "M" }, { "math_id": 16, "text": "M(i, i)=0" }, { "math_id": 17, "text": "M(i, i-1) = 0" }, { "math_id": 18, "text": "1\\leq i\\leq n" }, { "math_id": 19, "text": "M(i,j)" }, { "math_id": 20, "text": "S_i...S_j" }, { "math_id": 21, "text": "M(i,j) = \\max_{i\\leq k<j}\\begin{cases}M(i, k-1)+M(k+1, j-1)+\\text{Score}(S_k,S_j) \\\\ M(i, j-1)\\end{cases}" }, { "math_id": 22, "text": "\\text{Score}(S_k,S_j)=\\begin{cases}1,&S_k\\text{ and }S_j \\text{ complementary}\\\\\n0,&\\text{otherwise.}\\end{cases}" }, { "math_id": 23, "text": "P" }, { "math_id": 24, "text": "i=1,j=n" }, { "math_id": 25, "text": "j\\leq i" }, { "math_id": 26, "text": "M(i,j)=M(i,j-1)" }, { "math_id": 27, "text": "i=i,j=j-1" }, { "math_id": 28, "text": "k: i\\leq k<j" }, { "math_id": 29, "text": "S_k" }, { "math_id": 30, "text": "S_j" }, { "math_id": 31, "text": "M(i,j)=M(i,k-1)+M(k+1,j-1)+1" }, { "math_id": 32, "text": "(k,j)" }, { "math_id": 33, "text": "i=i,j=k-1" }, { "math_id": 34, "text": "i=k+1,j=j-1" } ]
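The sketch below, added for illustration, implements the fill and traceback steps described above in Python with 0-based indices. Only Watson–Crick pairs are scored as complementary here (adding G–U wobble pairs is a common variant), and no minimum loop size is enforced, matching the basic form of the algorithm.

def nussinov(seq):
    """Maximum base pairing by the Nussinov recurrence, with traceback."""
    comp = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}  # assumed pair set
    n = len(seq)
    M = [[0] * n for _ in range(n)]          # M[i][j]: best score for seq[i..j]
    for span in range(1, n):                 # fill by increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = M[i][j - 1]               # option 1: j is left unpaired
            for k in range(i, j):            # option 2: j pairs with k
                if (seq[k], seq[j]) in comp:
                    left = M[i][k - 1] if k > i else 0
                    right = M[k + 1][j - 1] if k + 1 < j else 0
                    best = max(best, left + right + 1)
            M[i][j] = best
    pairs, stack = [], [(0, n - 1)]          # traceback
    while stack:
        i, j = stack.pop()
        if i >= j:
            continue
        if M[i][j] == M[i][j - 1]:
            stack.append((i, j - 1))
            continue
        for k in range(i, j):
            if (seq[k], seq[j]) in comp:
                left = M[i][k - 1] if k > i else 0
                right = M[k + 1][j - 1] if k + 1 < j else 0
                if left + right + 1 == M[i][j]:
                    pairs.append((k, j))
                    if k > i:
                        stack.append((i, k - 1))
                    stack.append((k + 1, j - 1))
                    break
    return M[0][n - 1], pairs

print(nussinov("GGGAAAUCC"))  # score 3 for this toy sequence

The fill loop runs in O(n^3) time and uses O(n^2) space for the matrix, which is the standard cost of the basic algorithm.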
https://en.wikipedia.org/wiki?curid=65836875
658501
Savitch's theorem
Relation between deterministic and nondeterministic space complexity In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function formula_0, formula_1 In other words, if a nondeterministic Turing machine can solve a problem using formula_2 space, a deterministic Turing machine can solve the same problem in the square of that space bound. Although it seems that nondeterminism may produce exponential gains in time (as formalized in the unproven exponential time hypothesis), Savitch's theorem shows that it has a markedly more limited effect on space requirements. Proof. The proof relies on an algorithm for STCON, the problem of determining whether there is a path between two vertices in a directed graph, which runs in formula_3 space for formula_4 vertices. The basic idea of the algorithm is to solve recursively a somewhat more general problem, testing the existence of a path from a vertex formula_5 to another vertex formula_6 that uses at most formula_7 edges, for a parameter formula_7 given as input. STCON is a special case of this problem where formula_7 is set large enough to impose no restriction on the paths (for instance, equal to the total number of vertices in the graph, or any larger value). To test for a formula_7-edge path from formula_5 to formula_6, a deterministic algorithm can iterate through all vertices formula_8, and recursively search for paths of half the length from formula_5 to formula_8 and from formula_8 to formula_6. This algorithm can be expressed in pseudocode (in Python syntax) as follows: def stcon(s, t) -&gt; bool: """Test whether a path of any length exists from s to t""" return k_edge_path(s, t, n) # n is the number of vertices def k_edge_path(s, t, k) -&gt; bool: """Test whether a path of length at most k exists from s to t""" if k == 0: return s == t if k == 1: return (s, t) in edges for u in vertices: if k_edge_path(s, u, floor(k / 2)) and k_edge_path(u, t, ceil(k / 2)): return True return False Because each recursive call halves the parameter formula_7, the number of levels of recursion is formula_9. Each level requires formula_10 bits of storage for its function arguments and local variables: formula_7 and the vertices formula_5, formula_6, and formula_8 require formula_9 bits each. The total auxiliary space complexity is thus formula_3. The input graph is considered to be represented in a separate read-only memory and does not contribute to this auxiliary space bound. Alternatively, it may be represented as an implicit graph. Although described above in the form of a program in a high-level language, the same algorithm may be implemented with the same asymptotic space bound on a Turing machine. This algorithm can be applied to an implicit graph whose vertices represent the configurations of a nondeterministic Turing machine and its tape, running within a given space bound formula_2. The edges of this graph represent the nondeterministic transitions of the machine, formula_5 is set to the initial configuration of the machine, and formula_6 is set to a special vertex representing all accepting halting states. In this case, the algorithm returns true when the machine has a nondeterministic accepting path, and false otherwise. The number of configurations in this graph is formula_11, from which it follows that applying the algorithm to this implicit graph uses space formula_12. 
Thus by deciding connectivity in a graph representing nondeterministic Turing machine configurations, one can decide membership in the language recognized by that machine, in space proportional to the square of the space used by the Turing machine. Corollaries. Some important corollaries of the theorem include: That is, the languages that can be recognized by deterministic polynomial-space Turing machines and nondeterministic polynomial-space Turing machines are the same. This follows directly from the fact that the square of a polynomial function is still a polynomial function. It is believed that a similar relationship does not exist between the polynomial time complexity classes, P and NP, although this is still an open question. That is, all languages that can be solved nondeterministically in logarithmic space can be solved deterministically in the complexity class formula_13 This follows from the fact that STCON is NL-complete. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "f\\in\\Omega(\\log(n))" }, { "math_id": 1, "text": "\\mathsf{NSPACE}\\left(f\\left(n\\right)\\right) \\subseteq \\mathsf{DSPACE}\\left(f\\left(n\\right)^2\\right)." }, { "math_id": 2, "text": "f(n)" }, { "math_id": 3, "text": "O\\left((\\log n)^2\\right)" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "t" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "u" }, { "math_id": 9, "text": "\\lceil\\log_2 n\\rceil" }, { "math_id": 10, "text": "O(\\log n)" }, { "math_id": 11, "text": "O(2^{f(n)})" }, { "math_id": 12, "text": "O(f(n)^2)" }, { "math_id": 13, "text": "\\mathsf{\\color{Blue}L}^2 =\\mathsf{DSPACE}\\left(\\left(\\log n\\right)^2\\right)." } ]
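As an added illustration, the pseudocode above can be made directly runnable; the adjacency-set dictionary used to represent the graph is an assumption of this sketch rather than part of the article, and a Python call stack does not literally realize the O((log n)^2) space bound, but the recursion depth of O(log k) frames, each holding O(log n)-sized values, is exactly what the proof counts.

from math import ceil, floor

def k_edge_path(adj, s, t, k):
    """Deterministically test for a path of at most k edges from s to t."""
    if k == 0:
        return s == t
    if k == 1:
        return s == t or t in adj[s]   # a 0-edge path also has length <= 1
    return any(
        k_edge_path(adj, s, u, floor(k / 2)) and k_edge_path(adj, u, t, ceil(k / 2))
        for u in adj
    )

def stcon(adj, s, t):
    return k_edge_path(adj, s, t, len(adj))   # k = number of vertices suffices

# Toy graph: 0 -> 1 -> 2, and 3 -> 0.
adj = {0: {1}, 1: {2}, 2: set(), 3: {0}}
print(stcon(adj, 0, 2), stcon(adj, 0, 3))  # True False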
https://en.wikipedia.org/wiki?curid=658501
658518
NSPACE
In computational complexity theory, non-deterministic space or NSPACE is the computational resource describing the memory space for a non-deterministic Turing machine. It is the non-deterministic counterpart of DSPACE. Complexity classes. The measure NSPACE is used to define the complexity class whose solutions can be determined by a non-deterministic Turing machine. The complexity class NSPACE("f"("n")) is the set of decision problems that can be solved by a non-deterministic Turing machine, "M", using space "O"("f"("n")), where "n" is the length of the input. Several important complexity classes can be defined in terms of "NSPACE". These include: The Immerman–Szelepcsényi theorem states that NSPACE("s"("n")) is closed under complement for every function "s"("n") ≥ log "n". A further generalization is ASPACE, defined with alternating Turing machines. Relation with other complexity classes. DSPACE. NSPACE is the non-deterministic counterpart of DSPACE, the class of memory space on a deterministic Turing machine. First by definition, then by Savitch's theorem, we have that: formula_2 Time. NSPACE can also be used to determine the time complexity of a deterministic Turing machine by the following theorem: If a language "L" is decided in space "S"("n") (where "S"("n") ≥ log "n") by a non-deterministic TM, then there exists a constant "C" such that "L" is decided in time "O"("C""S"("n")) by a deterministic one. Limitations. The measure of space complexity in terms of DSPACE is useful because it represents the total amount of memory that an actual computer would need to solve a given computational problem with a given algorithm. The reason is that DSPACE describes the space complexity used by deterministic Turing machines, which can represent actual computers. On the other hand, NSPACE describes the space complexity of non-deterministic Turing machines, which are not useful when trying to represent actual computers. For this reason, NSPACE is limited in its usefulness to real-world applications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\bigcup_{k\\in\\mathbb{N}} \\mathsf{NSPACE}(n^k)" }, { "math_id": 1, "text": "\\bigcup_{k\\in\\mathbb{N}} \\mathsf{NSPACE}(2^{n^k})" }, { "math_id": 2, "text": "\\mathsf{DSPACE}[s(n)] \\subseteq \\mathsf{NSPACE}[s(n)] \\subseteq \\mathsf{DSPACE}[(s(n))^2]." } ]
https://en.wikipedia.org/wiki?curid=658518
658520
DSPACE
In computational complexity theory, DSPACE or SPACE is the computational resource describing the resource of memory space for a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm. Complexity classes. The measure DSPACE is used to define complexity classes, sets of all of the decision problems that can be solved using a certain amount of memory space. For each function "f"("n"), there is a complexity class SPACE("f"("n")), the set of decision problems that can be solved by a deterministic Turing machine using space "O"("f"("n")). There is no restriction on the amount of computation time that can be used, though there may be restrictions on some other complexity measures (like alternation). Several important complexity classes are defined in terms of DSPACE. These include: "Proof:" Suppose that there exists a non-regular language "L" ∈ DSPACE("s"("n")), for "s"("n") = "o"(log log "n"). Let "M" be a Turing machine deciding "L" in space "s"("n"). By our assumption "L" ∉ DSPACE("O"(1)); thus, for any arbitrary formula_0, there exists an input of "M" requiring more space than "k". Let "x" be an input of smallest size, denoted by n, that requires more space than "k", and formula_1 be the set of all configurations of "M" on input "x". Because "M" ∈ DSPACE("s"("n")), then formula_2, where "c" is a constant depending on "M". Let "S" denote the set of all possible crossing sequences of "M" on "x". Note that the length of a crossing sequence of "M" on "x" is at most formula_3: if it is longer than that, then some configuration will repeat, and "M" will go into an infinite loop. There are also at most formula_3 possibilities for every element of a crossing sequence, so the number of different crossing sequences of "M" on "x" is formula_4 According to pigeonhole principle, there exist indexes "i" &lt; "j" such that formula_5, where formula_6 and formula_7 are the crossing sequences at boundary "i" and "j", respectively. Let x' be the string obtained from x by removing all cells from "i" + 1 to "j". The machine M still behaves exactly the same way on input x' as on input x, so it needs the same space to compute x' as to compute x. However, |"x' "| &lt; |"x"|, contradicting the definition of x. Hence, there does not exist such a language L as assumed. □ The above theorem implies the necessity of the space-constructible function assumption in the space hierarchy theorem. Machine models. DSPACE is traditionally measured on a deterministic Turing machine. Several important space complexity classes are sublinear, that is, smaller than the size of the input. Thus, "charging" the algorithm for the size of the input, or for the size of the output, would not truly capture the memory space used. This is solved by defining the multi-tape Turing machine with input and output, which is a standard multi-tape Turing machine, except that the input tape may never be written-to, and the output tape may never be read from. This allows smaller space classes, such as L (logarithmic space), to be defined in terms of the amount of space used by all of the work tapes (excluding the special input and output tapes). Since many symbols might be packed into one by taking a suitable power of the alphabet, for all "c" ≥ 1 and "f" such that "f"("n") ≥ "1", the class of languages recognizable in "c f"("n") space is the same as the class of languages recognizable in "f"("n") space. 
This justifies usage of big O notation in the definition. Hierarchy theorem. The space hierarchy theorem shows that, for every space-constructible function formula_10, there exists some language L which is decidable in space formula_11 but not in space formula_12. Relation with other complexity classes. DSPACE is the deterministic counterpart of NSPACE, the class of memory space on a non-deterministic Turing machine. By Savitch's theorem, we have that formula_13 NTIME is related to DSPACE in the following way. For any time constructible function "t"("n"), we have formula_14. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k \\in \\mathbb{N}" }, { "math_id": 1, "text": "\\mathcal{C}" }, { "math_id": 2, "text": "|\\mathcal{C}| \\le 2^{c.s(n)} = o(\\log n)" }, { "math_id": 3, "text": "|\\mathcal{C}|" }, { "math_id": 4, "text": "|S|\\le|\\mathcal{C}|^{|\\mathcal{C}|} \\le (2^{c.s(n)})^{2^{c.s(n)}}= 2^{c.s(n).2^{c.s(n)}}< 2^{2^{2c.s(n)}}=2^{2^{o(\\log \\log n)}} = o(n) " }, { "math_id": 5, "text": "\\mathcal{C}_i(x)=\\mathcal{C}_j(x)" }, { "math_id": 6, "text": "\\mathcal{C}_i(x)" }, { "math_id": 7, "text": "\\mathcal{C}_j(x)" }, { "math_id": 8, "text": "\\bigcup_{k\\in\\mathbb{N}} \\mathsf{DSPACE}(n^k)" }, { "math_id": 9, "text": "\\bigcup_{k\\in\\mathbb{N}} \\mathsf{DSPACE}(2^{n^k})" }, { "math_id": 10, "text": "f: \\mathbb{N} \\to \\mathbb{N}" }, { "math_id": 11, "text": "O(f(n))" }, { "math_id": 12, "text": "o(f(n))" }, { "math_id": 13, "text": "\\mathsf{DSPACE}[s(n)] \\subseteq \\mathsf{NSPACE}[s(n)] \\subseteq \\mathsf{DSPACE}[(s(n))^2]." }, { "math_id": 14, "text": "\\mathsf{NTIME}(t(n)) \\subseteq \\mathsf{DSPACE}(t(n))" } ]
https://en.wikipedia.org/wiki?curid=658520
65853439
Piezospectroscopy
Analytical technique Piezospectroscopy (also known as photoluminescence piezospectroscopy) is an analytical technique that reveals internal stresses in alumina-containing materials, particularly thermal barrier coatings (TBCs). A typical procedure involves illuminating the sample with laser light of a known wavelength, causing the material to release its own radiation in response (see fluorescence). By measuring the emitted radiation and comparing the location of the peaks to a stress-free sample, stresses in the material can be revealed without any destructive interaction. Piezospectroscopy can be used on any material that exhibits fluorescence, but is almost exclusively used on samples containing alumina because of the presence of chromium ions, either as part of the composition or as an impurity, that greatly increase the fluorescent response. As opposed to other methods of stress measurement, such as powder diffraction or the use of a strain gauge, piezospectroscopy can measure internal stresses at higher resolution, on the order of 1 μm, and can measure very quickly, with most systems taking less than one second to acquire data. Theory. Piezospectroscopy takes advantage of both the microstructure and composition of TBCs to generate accurate results. A typical candidate for piezospectroscopy contains three layers: Coating failure is usually a result of spalling or cracking of the TGO layer. Because the TGO is buried beneath a thick layer of ceramic, subsurface stresses are generally difficult to detect. The use of an argon-ion laser makes this possible. The optical band gap (threshold for photon absorption) of the ceramic topcoat is much greater than the energy of argon laser light, effectively making the topcoat translucent and allowing for interaction with the TGO layer. Within the TGO, it is the chromium (Cr3+) ions that produce strong emission spectra and allow for piezospectroscopic analysis. At the subatomic level, the laser light of known wavelength (usually 5149Å) causes the outer electron in the Cr3+ ions to absorb the incoming radiation, which raises it to a higher energy level. Upon returning to a lower energy state, the electron releases its own radiation. Because the energy levels are discrete, the spectrum for stress-free aluminum oxide always exhibits two peaks at wavelengths 14,402 cm−1 and 14,432 cm−1. The wavelength and frequency are related through: formula_0 where v is the frequency, λ is the wavelength, and c is the speed of light. If the coating is under a compressive stress, the peaks will be shifted downward while a tensile stress will shift them upward. The frequency shift is given by the equation: formula_1 where formula_2 is the piezospectroscopic tensor and formula_3 is the residual stress within the coating. Instrumentation. In order to obtain accurate results, a few finely tuned instruments must work in tandem: Laser. A light source, such as a laser, is instrumental to piezospectroscopy. Narrow bandwidth lasers are preferred due to the increased resolution of the resulting spectrum. The fluorescent response is stronger at lower frequencies, but excessively low frequency light can cause sample degradation and interference with the ceramic surface of the coating. Microscope. A microscope is generally used to isolate a certain section of a sample. Because TBC failure can begin at microscopic scales, magnification is often essential to accurately detect stresses. Monochromator. 
A monochromator is used to filter out weakly scattered light and permit the strong emission peaks from the fluorescent response. In addition, notch or long-pass optical filters are used to filter the peak from the laser wavelength itself. Detector. Many types of detectors are used with piezospectroscopy, the two most common being dispersion through a spectrograph or an interferometer. The resulting signal can be analyzed through Fourier Transform (FT) methods. Array detectors such as CCDs are also common, with many different types being suited for different ranges of wavelengths. Applications. Piezospectroscopy is used in industry to ensure safe operation of TBCs. Quality control. It is critical that TBCs be applied properly in order to prevent premature microfractures, delamination, and other structural failure. Through piezospectroscopy, parts can be put into service with the assurance of a properly protected substrate. Nondestructive inspection/remaining lifetime assessment. Piezospectroscopy can accurately describe the extent of any discovered damage and provide accurate lifetime estimates in actual use. In addition, piezospectroscopy can be set up "in situ." This, along with its noninvasive nature, makes piezospectroscopy an efficient method of onsite damage assessment.
[ { "math_id": 0, "text": "v = {c \\over \\lambda}" }, { "math_id": 1, "text": "\\Delta v= {2 \\over 3} \\amalg_{ii} \\sigma_{av}" }, { "math_id": 2, "text": "\\amalg_{ii}" }, { "math_id": 3, "text": "\\sigma_{av}" } ]
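As an added numerical illustration of the stress relation above: inverting the frequency-shift equation gives the average residual stress directly from a measured peak shift. The trace value of roughly 7.6 cm−1/GPa used as the default below is an illustrative figure of the order reported in the literature for the Cr3+ R-lines in alumina, not a value taken from this article.

def residual_stress_gpa(peak_shift_cm1, pi_trace_cm1_per_gpa=7.6):
    """Invert  delta_nu = (2/3) * Pi_ii * sigma_av  for the average residual stress.
    Negative shifts (peaks moving to lower wavenumber) indicate compression.
    The default trace value is an assumed, illustrative coefficient."""
    return 3.0 * peak_shift_cm1 / (2.0 * pi_trace_cm1_per_gpa)

# A measured shift of -7.6 cm^-1 corresponds to about -1.5 GPa (compressive)
# under these illustrative assumptions.
print(residual_stress_gpa(-7.6))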
https://en.wikipedia.org/wiki?curid=65853439
658538
DTIME
In computational complexity theory, DTIME (or TIME) is the computational resource of computation time for a deterministic Turing machine. It represents the amount of time (or number of computation steps) that a "normal" physical computer would take to solve a certain computational problem using a certain algorithm. It is one of the most well-studied complexity resources, because it corresponds so closely to an important real-world resource (the amount of time it takes a computer to solve a problem). The resource DTIME is used to define complexity classes, sets of all of the decision problems which can be solved using a certain amount of computation time. If a problem of input size "n" can be solved in &amp;NoBreak;&amp;NoBreak;, we have a complexity class &amp;NoBreak;&amp;NoBreak; (or &amp;NoBreak;&amp;NoBreak;). There is no restriction on the amount of memory space used, but there may be restrictions on some other complexity resources (like alternation). Complexity classes in DTIME. Many important complexity classes are defined in terms of DTIME, containing all of the problems that can be solved in a certain amount of deterministic time. Any proper complexity function can be used to define a complexity class, but only certain classes are useful to study. In general, we desire our complexity classes to be robust against changes in the computational model, and to be closed under composition of subroutines. DTIME satisfies the time hierarchy theorem, meaning that asymptotically larger amounts of time always create strictly larger sets of problems. The well-known complexity class P comprises all of the problems which can be solved in a polynomial amount of DTIME. It can be defined formally as: formula_0 P is the smallest robust class which includes linear-time problems formula_1 (AMS 2004, Lecture 2.2, pg. 20). P is one of the largest complexity classes considered "computationally feasible". A much larger class using deterministic time is EXPTIME, which contains all of the problems solvable using a deterministic machine in exponential time. Formally, we have formula_2 Larger complexity classes can be defined similarly. Because of the time hierarchy theorem, these classes form a strict hierarchy; we know that formula_3, and on up. Machine model. For robust classes, such as P, the exact machine model used to define DTIME can vary without affecting the power of the resource. The Computational Complexity literature often defines DTIME based on multitape Turing machines, particularly when discussing very small time classes. A multitape deterministic Turing machine can never provide more than a quadratic time speedup over a singletape machine. Due to the Linear speedup theorem for Turing machines, multiplicative constants in the time bound do not affect the extent of DTIME classes; a constant multiplicative speedup can always be obtained by increasing the number of states in the finite state control and the size of the tape alphabet. In the statement of Papadimitriou, for a language L, Let formula_4. Then, for any formula_5, formula_6, where formula_7. Generalizations. Using a model other than a deterministic Turing machine, there are various generalizations and restrictions of DTIME. For example, if we use a nondeterministic Turing machine, we have the resource NTIME. The relationship between the expressive powers of DTIME and other computational resources are very poorly understood. One of the few known results is formula_8 for multitape machines. This was extended to formula_9 by Santhanam. 
If we use an alternating Turing machine, we have the resource ATIME. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathsf{P} = \\bigcup_{k\\in\\mathbb{N}} \\mathsf{DTIME}(n^k)" }, { "math_id": 1, "text": "\\mathsf{DTIME}\\left(n\\right)" }, { "math_id": 2, "text": " \\mathsf{EXPTIME} = \\bigcup_{k \\in \\mathbb{N} } \\mathsf{DTIME} \\left( 2^{ n^k } \\right) . " }, { "math_id": 3, "text": "\\mathsf{P} \\subsetneq \\mathsf{EXPTIME} " }, { "math_id": 4, "text": "L \\in \\mathsf{DTIME}(f(n))" }, { "math_id": 5, "text": "\\epsilon > 0" }, { "math_id": 6, "text": "L \\in \\mathsf{DTIME}(f'(n))" }, { "math_id": 7, "text": "f'(n) = \\epsilon f(n) + n + 2" }, { "math_id": 8, "text": "\\mathsf{DTIME}(O(n)) \\neq \\mathsf{NTIME}(O(n))" }, { "math_id": 9, "text": "\\mathsf{DTIME}(O(n\\sqrt{\\log^*n})) \\neq \\mathsf{NTIME}(O(n\\sqrt{\\log^*n}))" } ]
https://en.wikipedia.org/wiki?curid=658538
658539
NTIME
In computational complexity theory, the complexity class NTIME("f"("n")) is the set of decision problems that can be solved by a non-deterministic Turing machine which runs in time "O"("f"("n")). Here "O" is the big O notation, "f" is some function, and "n" is the size of the input (for which the problem is to be decided). Meaning. This means that there is a non-deterministic machine which, for a given input of size "n", will run in time "O"("f"("n")) (i.e. within a constant multiple of "f"("n"), for "n" greater than some value), and will always "reject" the input if the answer to the decision problem is "no" for that input, while if the answer is "yes" the machine will "accept" that input for at least one computation path. Equivalently, there is a deterministic Turing machine "M" that runs in time "O"("f"("n")) and is able to check an "O"("f"("n"))-length certificate for an input; if the input is a "yes" instance, then at least one certificate is accepted, if the input is a "no" instance, no certificate can make the machine accept. Space constraints. The space available to the machine is not limited, although it cannot exceed "O"("f"("n")), because the time available limits how much of the tape is reachable. Relation to other complexity classes. The well-known complexity class NP can be defined in terms of NTIME as follows: formula_0 Similarly, the class NEXP is defined in terms of NTIME: formula_1 The non-deterministic time hierarchy theorem says that nondeterministic machines can solve more problems in asymptotically more time. NTIME is also related to DSPACE in the following way. For any time constructible function "t"("n"), we have formula_2. A generalization of NTIME is ATIME, defined with alternating Turing machines. It turns out that formula_3. References. "Complexity Zoo": NTIME("f"("n")).
[ { "math_id": 0, "text": "\\mathsf{NP} = \\bigcup_{k\\in\\mathbb{N}} \\mathsf{NTIME}(n^k)" }, { "math_id": 1, "text": "\\mathsf{NEXP} = \\bigcup_{k\\in\\mathbb{N}} \\mathsf{NTIME}(2^{n^k})" }, { "math_id": 2, "text": "\\mathsf{NTIME}(t(n)) \\subseteq \\mathsf{DSPACE}(t(n))" }, { "math_id": 3, "text": "\\mathsf{NTIME}(t(n)) \\subseteq \\mathsf{ATIME}(t(n)) \\subseteq \\mathsf{DSPACE}(t(n))" } ]
https://en.wikipedia.org/wiki?curid=658539
658550
P (complexity)
Class of problems solvable in polynomial time In computational complexity theory, P, also known as PTIME or DTIME("n"O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time. Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb. Definition. A language "L" is in P if and only if there exists a deterministic Turing machine "M", such that P can also be viewed as a uniform family of Boolean circuits. A language "L" is in P if and only if there exists a polynomial-time uniform family of Boolean circuits formula_0, such that The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class. Notable problems in P. P is known to contain many natural problems, including the decision versions of linear programming, and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P. The related class of function problems is FP. Several natural problems are complete for P, including "st"-connectivity (or reachability) on alternating graphs. The article on P-complete problems lists further relevant problems in P. Relationships to other classes. A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial size certificate, and certificates can be checked by a polynomial time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP. P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset, although this belief (the formula_5 hypothesis) remains unproven. Another open problem is whether NP = co-NP; since P = co-P, a negative answer would imply formula_5. P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using formula_6 space cannot use more than formula_7 time, because this is the total number of possible configurations; thus, L is a subset of P. Another important problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines. P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. Again, whether P = PSPACE is an open problem. To summarize: formula_8 Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known: The most difficult problems in P are P-complete problems. Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. 
On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem. In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language that is P-complete, then L = P. P is contained in BQP, it is unknown whether the containment is strict. Properties. Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine. Languages in P are also closed under reversal, intersection, union, concatenation, Kleene closure, inverse homomorphism, and complementation. Pure existence proofs of polynomial-time algorithms. Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an O("n"3) algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem. Alternative characterizations. In descriptive complexity, P can be described as the problems expressible in FO(LFP), the first-order logic with a least fixed point operator added to it, on ordered structures. In Immerman's 1999 textbook on descriptive complexity, Immerman ascribes this result to Vardi and to Immerman. It was published in 2001 that PTIME corresponds to (positive) range concatenation grammars. P can also be defined as an algorithmic complexity class for problems that are not decision problems (even though, for example, finding the solution to a 2-satisfiability instance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case P is not a subset of NP, but P∩DEC is, where DEC is the class of decision problems. History. Kozen states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time," though Rabin also invented the notion independently and around the same time (Rabin's paper was in a 1967 proceedings of a 1966 conference, while Cobham's was in a 1965 proceedings of a 1964 conference and Edmonds's was published in a journal in 1965, though Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis. However, H. C. 
Pocklington, in a 1910 paper, analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that ran in (moderately) exponential time. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{C_n:n \\in \\mathbb{N}\\}" }, { "math_id": 1, "text": "n \\in \\mathbb{N}" }, { "math_id": 2, "text": "C_n" }, { "math_id": 3, "text": "C_{|x|}(x)=1" }, { "math_id": 4, "text": "C_{|x|}(x)=0" }, { "math_id": 5, "text": "\\mathsf{P} \\subsetneq \\mathsf{NP}" }, { "math_id": 6, "text": "O(\\log n)" }, { "math_id": 7, "text": "2^{O(\\log n)} = n^{O(1)}" }, { "math_id": 8, "text": "\\mathsf{L} \\subseteq \\mathsf{AL} = \\mathsf{P} \\subseteq \\mathsf{NP} \\subseteq \\mathsf{PSPACE} \\subseteq \\mathsf{EXPTIME}." } ]
https://en.wikipedia.org/wiki?curid=658550
658608
PH (complexity)
Class in computational complexity theory In computational complexity theory, the complexity class PH is the union of all complexity classes in the polynomial hierarchy: formula_0 PH was first defined by Larry Stockmeyer. It is a special case of hierarchy of bounded alternating Turing machine. It is contained in P#P = PPP and PSPACE. PH has a simple logical characterization: it is the set of languages expressible by second-order logic. Relationship to other classes. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: PH contains almost all well-known complexity classes inside PSPACE; in particular, it contains P, NP, and co-NP. It even contains probabilistic classes such as BPP (this is the Sipser–Lautemann theorem) and RP. However, there is some evidence that BQP, the class of problems solvable in polynomial time by a quantum computer, is not contained in PH. P = NP if and only if P = PH. This may simplify a potential proof of P ≠ NP, since it is only necessary to separate P from the more general class PH. PH is a subset of P#P = PPP by Toda's theorem; the class of problems that are decidable by a polynomial time Turing machine with access to a #P or equivalently PP oracle), and also in PSPACE. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{PH} = \\bigcup_{k\\in\\mathbb{N}} \\Delta_k^\\mathrm{P}" } ]
https://en.wikipedia.org/wiki?curid=658608
658651
Polynomial hierarchy
Computer science concept In computational complexity theory, the polynomial hierarchy (sometimes called the polynomial-time hierarchy) is a hierarchy of complexity classes that generalize the classes NP and co-NP. Each class in the hierarchy is contained within PSPACE. The hierarchy can be defined using oracle machines or alternating Turing machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. The union of the classes in the hierarchy is denoted PH. Classes within the hierarchy have complete problems (with respect to polynomial-time reductions) that ask if quantified Boolean formulae hold, for formulae with restrictions on the quantifier order. It is known that equality between classes on the same level or consecutive levels in the hierarchy would imply a "collapse" of the hierarchy to that level. Definitions. There are multiple equivalent definitions of the classes of the polynomial hierarchy. Oracle definition. For the oracle definition of the polynomial hierarchy, define formula_0 where P is the set of decision problems solvable in polynomial time. Then for i ≥ 0 define formula_1 formula_2 formula_3 where formula_4 is the set of decision problems solvable in polynomial time by a Turing machine augmented by an oracle for some complete problem in class A; the classes formula_5 and formula_6 are defined analogously. For example, formula_7, and formula_8 is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for some NP-complete problem. Quantified boolean formulae definition. For the existential/universal definition of the polynomial hierarchy, let L be a language (i.e. a decision problem, a subset of {0,1}*), let p be a polynomial, and define formula_9 where formula_10 is some standard encoding of the pair of binary strings "x" and "w" as a single binary string. The language "L" represents a set of ordered pairs of strings, where the first string "x" is a member of formula_11, and the second string "w" is a "short" (formula_12) witness testifying that "x" is a member of formula_11. In other words, formula_13 if and only if there exists a short witness "w" such that formula_14. Similarly, define formula_15 Note that De Morgan's laws hold: formula_16 and formula_17, where "L"c is the complement of "L". Let C be a class of languages. Extend these operators to work on whole classes of languages by the definition formula_18 formula_19 Again, De Morgan's laws hold: formula_20 and formula_21, where formula_22. The classes NP and co-NP can be defined as formula_23, and formula_24, where P is the class of all feasibly (polynomial-time) decidable languages. The polynomial hierarchy can be defined recursively as formula_25 formula_26 formula_27 Note that formula_28, and formula_29. This definition reflects the close connection between the polynomial hierarchy and the arithmetical hierarchy, where R and RE play roles analogous to P and NP, respectively. The analytic hierarchy is also defined in a similar way to give a hierarchy of subsets of the real numbers. Alternating Turing machines definition. An alternating Turing machine is a non-deterministic Turing machine with non-final states partitioned into existential and universal states. 
It is eventually accepting from its current configuration if: it is in an existential state and can transition into some eventually accepting configuration; or, it is in a universal state and every transition is into some eventually accepting configuration; or, it is in an accepting state. We define formula_30 to be the class of languages accepted by an alternating Turing machine in polynomial time such that the initial state is an existential state and every path the machine can take swaps at most "k" – 1 times between existential and universal states. We define formula_31 similarly, except that the initial state is a universal state. If we omit the requirement of at most "k" – 1 swaps between the existential and universal states, so that we only require that our alternating Turing machine runs in polynomial time, then we have the definition of the class AP, which is equal to PSPACE. Relations between classes in the polynomial hierarchy. The union of all classes in the polynomial hierarchy is the complexity class PH. The definitions imply the relations: formula_32 formula_33 formula_34 Unlike the arithmetic and analytic hierarchies, whose inclusions are known to be proper, it is an open question whether any of these inclusions are proper, though it is widely believed that they all are. If any formula_35, or if any formula_36, then the hierarchy "collapses to level k": for all formula_37, formula_38. In particular, unsolved separation problems correspond to possible collapses: the case in which NP = PH is also termed a "collapse" of PH to "the second level", and the case P = NP corresponds to a collapse of PH to P. The question of collapse to the first level is generally thought to be extremely difficult. Most researchers do not believe in a collapse, even to the second level. Relationships to other classes. The polynomial hierarchy is an analogue (at much lower complexity) of the exponential hierarchy and the arithmetical hierarchy. It is known that PH is contained within PSPACE, but it is not known whether the two classes are equal. One useful reformulation of this problem is that PH = PSPACE if and only if second-order logic over finite structures gains no additional power from the addition of a transitive closure operator over relations of relations (i.e., over the second-order variables). If the polynomial hierarchy has any complete problems, then it has only finitely many distinct levels. Since there are PSPACE-complete problems, we know that if PSPACE = PH, then the polynomial hierarchy must collapse, since a PSPACE-complete problem would be a formula_40-complete problem for some "k". Each class in the polynomial hierarchy contains formula_41-complete problems (problems complete under polynomial-time many-one reductions). Furthermore, each class in the polynomial hierarchy is "closed under formula_41-reductions": meaning that for a class C in the hierarchy and a language formula_42, if formula_43, then formula_44 as well. These two facts together imply that if formula_45 is a complete problem for formula_46, then formula_47, and formula_48. For instance, formula_49. In other words, if a language is defined based on some oracle in C, then we can assume that it is defined based on a complete problem for C. Complete problems therefore act as "representatives" of the class for which they are complete.
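The quantified-formula definition above can be illustrated with a toy sketch. The verifier predicate and witness length below are assumptions chosen for the example, and the brute-force enumeration is of course exponential; the code only demonstrates the shape of a Σ2 statement, "there exists a witness w1 such that for all witnesses w2 the relation holds", not an efficient algorithm.

from itertools import product

# Toy illustration of the Sigma_2 pattern
# "exists w1 in {0,1}^p(|x|), forall w2 in {0,1}^p(|x|): <x, w1, w2> in L".
# The verifier below is a hypothetical polynomial-time predicate; the
# enumeration of all witnesses is exponential, so this only mirrors the
# definition for tiny inputs.

def verifier(x, w1, w2):
    # Hypothetical poly-time relation: w1 XOR w2 differs from x in some bit.
    return any(a != (b ^ c) for a, b, c in zip(x, w1, w2))

def in_sigma2_language(x, p):
    """True iff there exists w1 such that for all w2 the verifier accepts."""
    m = p(len(x))
    witnesses = list(product((0, 1), repeat=m))
    return any(all(verifier(x, w1, w2) for w2 in witnesses) for w1 in witnesses)

if __name__ == "__main__":
    # For this particular verifier no witness w1 works (take w2 = w1 XOR x),
    # so the answer is False.
    print(in_sigma2_language((1, 0, 1), p=lambda n: n))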
The Sipser–Lautemann theorem states that the class BPP is contained in the second level of the polynomial hierarchy. Kannan's theorem states that for any "k", formula_50 is not contained in SIZE(nk). Toda's theorem states that the polynomial hierarchy is contained in P#P. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta_0^\\mathsf{P} := \\Sigma_0^\\mathsf{P} := \\Pi_0^\\mathsf{P} := \\mathsf{P}," }, { "math_id": 1, "text": "\\Delta_{i+1}^\\mathsf{P} := \\mathsf{P}^{\\Sigma_i^\\mathsf{P}}" }, { "math_id": 2, "text": "\\Sigma_{i+1}^\\mathsf{P} := \\mathsf{NP}^{\\Sigma_i^\\mathsf{P}}" }, { "math_id": 3, "text": "\\Pi_{i+1}^\\mathsf{P} := \\mathsf{coNP}^{\\Sigma_i^\\mathsf{P}}" }, { "math_id": 4, "text": "\\mathsf{P}^{\\rm A}" }, { "math_id": 5, "text": "\\mathsf{NP}^{\\rm A}" }, { "math_id": 6, "text": "\\mathsf{coNP}^{\\rm A}" }, { "math_id": 7, "text": " \\Sigma_1^\\mathsf{P} = \\mathsf{NP}, \\Pi_1^\\mathsf{P} = \\mathsf{coNP} " }, { "math_id": 8, "text": " \\Delta_2^\\mathsf{P} = \\mathsf{P^{NP}} " }, { "math_id": 9, "text": " \\exists^p L := \\left\\{ x \\in \\{0,1\\}^* \\ \\left| \\ \\left( \\exists w \\in \\{0,1\\}^{\\leq p(|x|)} \\right) \\langle x,w \\rangle \\in L \\right. \\right\\}, " }, { "math_id": 10, "text": "\\langle x,w \\rangle \\in \\{0,1\\}^*" }, { "math_id": 11, "text": "\\exists^p L" }, { "math_id": 12, "text": "|w| \\leq p(|x|) " }, { "math_id": 13, "text": "x \\in \\exists^p L" }, { "math_id": 14, "text": " \\langle x,w \\rangle \\in L " }, { "math_id": 15, "text": " \\forall^p L := \\left\\{ x \\in \\{0,1\\}^* \\ \\left| \\ \\left( \\forall w \\in \\{0,1\\}^{\\leq p(|x|)} \\right) \\langle x,w \\rangle \\in L \\right. \\right\\} " }, { "math_id": 16, "text": " \\left( \\exists^p L \\right)^{\\rm c} = \\forall^p L^{\\rm c} " }, { "math_id": 17, "text": " \\left( \\forall^p L \\right)^{\\rm c} = \\exists^p L^{\\rm c} " }, { "math_id": 18, "text": "\\exists^\\mathsf{P} \\mathcal{C} := \\left\\{\\exists^p L \\ | \\ p \\text{ is a polynomial and } L \\in \\mathcal{C} \\right\\}" }, { "math_id": 19, "text": "\\forall^\\mathsf{P} \\mathcal{C} := \\left\\{\\forall^p L \\ | \\ p \\text{ is a polynomial and } L \\in \\mathcal{C} \\right\\}" }, { "math_id": 20, "text": " \\mathsf{co} \\exists^\\mathsf{P} \\mathcal{C} = \\forall^\\mathsf{P} \\mathsf{co} \\mathcal{C} " }, { "math_id": 21, "text": " \\mathsf{co} \\forall^\\mathsf{P} \\mathcal{C} = \\exists^\\mathsf{P} \\mathsf{co} \\mathcal{C} " }, { "math_id": 22, "text": "\\mathsf{co}\\mathcal{C} = \\left\\{ L^c | L \\in \\mathcal{C} \\right\\}" }, { "math_id": 23, "text": " \\mathsf{NP} = \\exists^\\mathsf{P} \\mathsf{P} " }, { "math_id": 24, "text": " \\mathsf{coNP} = \\forall^\\mathsf{P} \\mathsf{P} " }, { "math_id": 25, "text": " \\Sigma_0^\\mathsf{P} := \\Pi_0^\\mathsf{P} := \\mathsf{P} " }, { "math_id": 26, "text": " \\Sigma_{k+1}^\\mathsf{P} := \\exists^\\mathsf{P} \\Pi_k^\\mathsf{P} " }, { "math_id": 27, "text": " \\Pi_{k+1}^\\mathsf{P} := \\forall^\\mathsf{P} \\Sigma_k^\\mathsf{P} " }, { "math_id": 28, "text": " \\mathsf{NP} = \\Sigma_1^\\mathsf{P} " }, { "math_id": 29, "text": " \\mathsf{coNP} = \\Pi_1^\\mathsf{P} " }, { "math_id": 30, "text": "\\Sigma_k^\\mathsf{P}" }, { "math_id": 31, "text": "\\Pi_k^\\mathsf{P}" }, { "math_id": 32, "text": "\\Sigma_i^\\mathsf{P} \\subseteq \\Delta_{i+1}^\\mathsf{P} \\subseteq \\Sigma_{i+1}^\\mathsf{P}" }, { "math_id": 33, "text": "\\Pi_i^\\mathsf{P} \\subseteq \\Delta_{i+1}^\\mathsf{P} \\subseteq \\Pi_{i+1}^\\mathsf{P}" }, { "math_id": 34, "text": "\\Sigma_i^\\mathsf{P} = \\mathsf{co}\\Pi_{i}^\\mathsf{P}" }, { "math_id": 35, "text": "\\Sigma_k^\\mathsf{P} = \\Sigma_{k+1}^\\mathsf{P}" }, { "math_id": 36, "text": "\\Sigma_k^\\mathsf{P} = \\Pi_{k}^\\mathsf{P}" }, { "math_id": 37, "text": "i > k" }, { "math_id": 38, "text": "\\Sigma_i^\\mathsf{P} = 
\\Sigma_k^\\mathsf{P}" }, { "math_id": 39, "text": "\\Pi_1^\\mathsf{P}" }, { "math_id": 40, "text": "\\Sigma_{k}^\\mathsf{P}" }, { "math_id": 41, "text": "\\leq_{\\rm m}^\\mathsf{P}" }, { "math_id": 42, "text": "L \\in \\mathcal{C}" }, { "math_id": 43, "text": "A \\leq_{\\rm m}^\\mathsf{P} L" }, { "math_id": 44, "text": "A \\in \\mathcal{C}" }, { "math_id": 45, "text": "K_i" }, { "math_id": 46, "text": "\\Sigma_{i}^\\mathsf{P}" }, { "math_id": 47, "text": "\\Sigma_{i+1}^\\mathsf{P} = \\mathsf{NP}^{K_i}" }, { "math_id": 48, "text": "\\Pi_{i+1}^\\mathsf{P} = \\mathsf{coNP}^{K_i}" }, { "math_id": 49, "text": "\\Sigma_{2}^\\mathsf{P} = \\mathsf{NP}^\\mathsf{SAT}" }, { "math_id": 50, "text": "\\Sigma_2" } ]
https://en.wikipedia.org/wiki?curid=658651
65866844
Rugate filter
Dielectric mirror that selectively reflects a particular wavelength range of light A rugate filter, also known as a gradient-index filter, is an optical filter based on a dielectric mirror that selectively reflects specific wavelength ranges of light. This effect is achieved by a periodic, continuous change of the refractive index of the dielectric coating. The word "rugate" is derived from corrugated structures found in nature, which also selectively reflect certain wavelength ranges of light, for example the wings of the Morpho butterfly. Characteristics. In rugate filters the refractive index varies periodically and continuously as a function of the depth of the mirror coating. This is similar to Bragg mirrors with the difference that the refractive index profile of a Bragg mirror is discontinuous. The refractive index profiles of a Rugate and a Bragg mirror are shown in the graph on the right. In Bragg mirrors, the discontinuous transitions are responsible for reflection of incident light, whereas in rugate filters, incident light is reflected throughout the thickness of the coating. According to the Fresnel equations, however, the reflection coefficient is greatest where the greatest change in refractive index occurs. For rugate filters, these are the inflection points in the refractive index profile. The theory of the Bragg mirror leads to a calculation of the wavelength at which the reflection of a rugate filter is greatest. For an alternating sequence in the Bragg mirror, the maximum reflection at a wavelength formula_0 is: formula_1 In this equation formula_2 and formula_3 stand for the high and low refractive indices of the Bragg mirror while formula_4 and formula_5 are the respective thicknesses of these layers. For the more general case that the refractive index changes continuously, the previous equation can be rewritten as: formula_6 On the left hand side is the integral over the refractive index over one period of the refractive index profile formula_7 divided by the period length formula_8. This term corresponds to the mean value of the refractive index profile. As a sanity check for the correctness of this equation, one can solve the integral for a discrete refractive index profile and substitute the period of a Bragg mirror formula_9. The figure on the right shows the reflection spectra calculated by the transfer-matrix method for the refractive index profiles of a Bragg and Rugate filter. It can be seen that both mirrors have their maximum reflectivity at 700 nm, whereas the rugate filter has a lower bandwidth. For this reason rugate filters are often used as optical notch filters. Furthermore, one can see a smaller peak in the spectrum of the rugate filter at formula_10. This peak is not present in the spectrum of the Bragg mirror because of its discrete layer system, which causes destructive interference at this wavelength. However, Bragg mirrors have secondary maxima at wavelengths of formula_11, which may be undesirable if you only want to filter out a certain wavelength. Rugate filters are better suited for this purpose because the sinusoidal refractive index profile has anti-reflection properties similar to those of black silicon. This reduces the intensity of the secondary maxima. Production. Rugate filters can be produced by sputtering and chemical vapor deposition. A special challenge is the creation of the continuous refractive index profile. 
To achieve this, the chemical composition of the mirror must also change continuously as a function of the layer thickness. This can be achieved by continuously changing the gas composition during the deposition process. Another possibility for the production of rugate filters is electrochemical porosification of silicon. Here, the current density during the etching process is selected so that the resulting porosity and thus the refractive index varies sinusoidally with the layer thickness.
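The relation between the mean refractive index and the reflected wavelength derived above can be checked numerically. The sketch below uses assumed, illustrative profile parameters (a mean index of 1.80 and a period chosen so that the result lands near 700 nm, as in the spectra discussed earlier); it averages a sinusoidal profile over one period and applies the half-wavelength condition.

import math

# Central reflection wavelength of a rugate filter from the mean refractive
# index over one period: <n> * d = lambda_0 / 2.
# The profile parameters below are assumed example values.

n_avg, delta_n = 1.80, 0.25          # mean index and modulation amplitude
d = 194.4e-9                          # period of the index profile in metres

def n(x):
    """Sinusoidal refractive index profile with period d."""
    return n_avg + delta_n * math.sin(2 * math.pi * x / d)

# Numerically average n(x) over one period.
steps = 10_000
mean_n = sum(n(i * d / steps) for i in range(steps)) / steps

lambda_0 = 2 * mean_n * d
print(f"<n> = {mean_n:.3f}, lambda_0 = {lambda_0 * 1e9:.0f} nm")   # about 700 nm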
[ { "math_id": 0, "text": "\\lambda_0" }, { "math_id": 1, "text": "n_{\\rm L}d_{\\rm L} + n_{\\rm H}d_{\\rm H} = \\frac{\\lambda_0}{2}" }, { "math_id": 2, "text": "n_{\\rm L}" }, { "math_id": 3, "text": "n_{\\rm H}" }, { "math_id": 4, "text": "d_{\\rm L}" }, { "math_id": 5, "text": "d_{\\rm H}" }, { "math_id": 6, "text": "\\frac{\\int_0^d{n(x)}\\mathrm{d}x}{d}d = \\left \\langle n \\right \\rangle d = \\frac{\\lambda_0}{2} " }, { "math_id": 7, "text": "n(x)" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "d = d_{\\rm H} + d_{\\rm L}" }, { "math_id": 10, "text": "\\lambda_0/2" }, { "math_id": 11, "text": "\\lambda_0/(2n-1)" } ]
https://en.wikipedia.org/wiki?curid=65866844
65869496
Joubert's theorem
In polynomial algebra and field theory, Joubert's theorem states that if formula_0 and formula_1 are fields, formula_1 is a separable field extension of formula_0 of degree 6, and the characteristic of formula_0 is not equal to 2, then formula_1 is generated over formula_0 by some element λ in formula_1, such that the minimal polynomial formula_2 of λ has the form formula_3 = formula_4, for some constants formula_5 in formula_0. The theorem is named in honor of Charles Joubert, a French mathematician, "lycée" professor, and Jesuit priest. In 1867 Joubert published his theorem in his paper "Sur l'équation du sixième degré" in "tome" 64 of "Comptes rendus hebdomadaires des séances de l'Académie des sciences". He seems to have made the assumption that the fields involved in the theorem are subfields of the complex field. Using arithmetic properties of hypersurfaces, Daniel F. Coray gave, in 1987, a proof of Joubert's theorem (with the assumption that the characteristic of formula_0 is neither 2 nor 3). In 2006 Hanspeter Kraft gave a proof of Joubert's theorem "based on an enhanced version of Joubert’s argument". In 2014 Zinovy Reichstein proved that the condition characteristic(formula_0) ≠ 2 is necessary in general to prove the theorem, but the theorem's conclusion can be proved in the characteristic 2 case with some additional assumptions on formula_0 and formula_1.
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "p(t)" }, { "math_id": 4, "text": "t^6 + c_4 t^4 + c_2 t^2 + c_1 t + c_0" }, { "math_id": 5, "text": "c_4, c_2, c_1, c_0" } ]
https://en.wikipedia.org/wiki?curid=65869496
65876372
Multiway number partitioning
In computer science, multiway number partitioning is the problem of partitioning a multiset of numbers into a fixed number of subsets, such that the sums of the subsets are as similar as possible. It was first presented by Ronald Graham in 1969 in the context of the identical-machines scheduling problem. The problem is parametrized by a positive integer "k", and called "k"-way number partitioning. The input to the problem is a multiset "S" of numbers (usually integers), whose sum is "k*T". The associated decision problem is to decide whether "S" can be partitioned into "k" subsets such that the sum of each subset is exactly "T". There is also an optimization problem: find a partition of "S" into "k" subsets, such that the "k" sums are "as near as possible". The exact optimization objective can be defined in several ways: These three objective functions are equivalent when "k"=2, but they are all different when "k"≥3. All these problems are NP-hard, but there are various algorithms that solve it efficiently in many cases. Some closely-related problems are: Approximation algorithms. There are various algorithms that obtain a guaranteed approximation of the optimal solution in polynomial time. There are different approximation algorithms for different objectives. Minimizing the largest sum. The "approximation ratio" in this context is the largest sum in the solution returned by the algorithm, divided by the largest sum in the optimal solution (the ratio is larger than 1). Most algorithms below were developed for identical-machines scheduling. Several polynomial-time approximation schemes (PTAS) have been developed: Maximizing the smallest sum. The approximation ratio in this context is the smallest sum in the solution returned by the algorithm, divided by the smallest sum in the optimal solution (the ratio is less than 1). Maximizing the sum of products. Jin studies a problem in which the goal is to maximize the sum, over every set "i" in 1...,"k", of the product of numbers in set "i". In a more general variant, each set "i" may have a weight "wi", and the goal is to maximize the "weighted" sum of products. This problem has an exact solution that runs in time O("n"2). A PTAS for general objective functions. Let "Ci" (for "i" between 1 and "k") be the sum of subset "i" in a given partition. Instead of minimizing the objective function max("Ci"), one can minimize the objective function max("f"("Ci")), where "f" is any fixed function. Similarly, one can minimize the objective function sum("f"("Ci")), or maximize min(f("Ci")), or maximize sum("f"("Ci")). Alon, Azar, Woeginger and Yadid presented general PTAS-s (generalizing the PTAS-s of Sanhi, Hochbaum and Shmoys, and Woeginger) for these four problems. Their algorithm works for any "f" which satisfies the following two conditions: The runtime of their PTAS-s is linear in "n" (the number of inputs), but exponential in the approximation precision. The PTAS for minimizing sum("f"("Ci")) is based on some combinatorial observations: The PTAS uses an "input rounding" technique. Given the input sequence "S =" ("v"1...,"vn") and a positive integer "d", the rounded sequence "S#"("d") is defined as follows: In "S#"("d"), all inputs are integer multiples of "L"/"d"2. Moreover, the above two observations hold for "S#"("d") too: Based on these observations, all inputs in "S#"("d") are of the form "hL"/"d"2, where "h" is an integer in the range formula_19. 
Therefore, the input can be represented as an integer vector formula_20, where formula_21 is the number of "hL"/"d"2 inputs in "S#"("d"). Moreover, each subset can be represented as an integer vector formula_22, where formula_23 is the number of "hL"/"d"2 inputs in the subset. The subset sum is then formula_24. Denote by formula_25, the set of vectors formula_26 with formula_27. Since the sum of elements in such a vector is at most 2"d", the total number of these elements is smaller than formula_28, so formula_29. There are two different ways to find an optimal solution to "S#"("d"). One way uses dynamic programming: its run-time is a polynomial whose exponent depends on "d". The other way uses Lenstra's algorithm for integer linear programming. Dynamic programming solution. Define formula_30 as the optimal (minimum) value of the objective function sum("f"("Ci")), when the input vector is formula_20 and it has to be partitioned into "k" subsets, among all partitions in which all subset sums are strictly between "L"#/2 and 2"L#." It can be solved by the following recurrence relation: For each "k" and n, the recurrence relation requires to check at most formula_38 vectors. The total number of vectors n to check is at most formula_39, where "n" is the original number of inputs. Therefore, the run-time of the dynamic programming algorithm is formula_40. It is linear in "n" for any fixed "d". Integer linear programming solution. For each vector t in "T", introduce a variable "x"t denoting the number of subsets with this configuration. Minimizing sum("f"("Ci")) can be attained by the solving the following ILP: The number of variables is at most formula_45, and the number of equations is formula_46 - both are constants independent of "n", "k". Therefore, Lenstra's algorithm can be used. Its run-time is exponential in the dimension (formula_45), but polynomial in the binary representation of the coefficients, which are in O(log("n")). Constructing the ILP itself takes time O("n"). Converting the solution from the rounded to the original instance. The following lemmas relate the partitions of the rounded instance "S#"("d") and the original instance "S". Given a desired approximation precision ε&gt;0, let δ&gt;0 be the constant corresponding to ε/3, whose existence is guaranteed by Condition F*. Let formula_49. It is possible to show that converted partition of "S" has a total cost of at most formula_50, so the approximation ratio is 1+ε. Non-existence of PTAS for some objective functions. In contrast to the above result, if we take f("x") = 2"x", or f("x")=("x"-1)2, then no PTAS for minimizing sum("f"("Ci")) exists unless P=NP. Note that these f("x") are convex, but they do not satisfy Condition F* above. The proof is by reduction from partition problem. Exact algorithms. There are exact algorithms, that always find the optimal partition. Since the problem is NP-hard, such algorithms might take exponential time in general, but may be practically usable in certain cases. Reduction to bin packing. The bin packing problem has many fast solvers. A BP solver can be used to find an optimal number partitioning. The idea is to use binary search to find the optimal makespan. To initialize the binary search, we need a lower bound and an upper bound: Given a lower and an upper bound, run the BP solver with bin size middle := (lower+upper)/2. Variants. 
In the balanced number partitioning problem, there are constraints on the number of items that can be allocated to each subset (these are called "cardinality constraints"). Another variant is the multidimensional number partitioning. Applications. One application of the partition problem is for manipulation of elections. Suppose there are three candidates (A, B and C). A single candidate should be elected using the veto voting rule, i.e., each voter vetoes a single candidate and the candidate with the fewest vetoes wins. If a coalition wants to ensure that C is elected, they should partition their vetoes among A and B so as to maximize the smallest number of vetoes each of them gets. If the votes are weighted, then the problem can be reduced to the partition problem, and thus it can be solved efficiently using CKK. For "k"=2, the same is true for any other voting rule that is based on scoring. However, for "k"&gt;2 and other voting rules, some other techniques are required. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
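The different objective functions can be compared directly on a small instance. The sketch below is a brute-force enumeration (exponential in the number of items, so usable only for tiny inputs, and the input values are assumed for illustration); it finds partitions optimising the "minimize largest sum" and "maximize smallest sum" objectives, which coincide for "k"=2 but are genuinely different criteria for "k"≥3.

from itertools import product

# Brute-force sketch: enumerate all assignments of items to k subsets and
# optimise two objectives -- minimise the largest subset sum, and maximise
# the smallest subset sum.  Exponential time; illustration only.

def partitions(items, k):
    for assignment in product(range(k), repeat=len(items)):
        bins = [[] for _ in range(k)]
        for value, b in zip(items, assignment):
            bins[b].append(value)
        yield bins

def sums(bins):
    return [sum(b) for b in bins]

def optimise(items, k):
    min_largest = min(partitions(items, k), key=lambda b: max(sums(b)))
    max_smallest = max(partitions(items, k), key=lambda b: min(sums(b)))
    return min_largest, max_smallest

if __name__ == "__main__":
    for k in (2, 3):
        lo, hi = optimise([5, 5, 4, 3, 3], k)
        print(k, "min-largest:", lo, sums(lo), "max-smallest:", hi, sums(hi))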
[ { "math_id": 0, "text": "O(n)" }, { "math_id": 1, "text": "2 - 1/k" }, { "math_id": 2, "text": "O(n\\log{n})" }, { "math_id": 3, "text": "\\frac{4}{3} - \\frac{1}{3k} = \\frac{4k-1}{3k}" }, { "math_id": 4, "text": "1 + O(\\log{\\log{n}}/n)" }, { "math_id": 5, "text": "1 + O(1/n)" }, { "math_id": 6, "text": "4/3 - 1/3k" }, { "math_id": 7, "text": "1 + 1/n^{\\Theta(\\log{n})}" }, { "math_id": 8, "text": "1 + \\frac{1-1/k}{1+\\lfloor r/k \\rfloor}" }, { "math_id": 9, "text": "O(2^r n\\log{n})" }, { "math_id": 10, "text": "O(n\\cdot (n^2 / \\epsilon)^{k-1})" }, { "math_id": 11, "text": "O(n^2 / \\epsilon)" }, { "math_id": 12, "text": "O(n(r+\\log{n}))" }, { "math_id": 13, "text": "O(n(r k^4+\\log{n}))" }, { "math_id": 14, "text": "O((n/\\varepsilon)^{(1/\\varepsilon^2)})" }, { "math_id": 15, "text": "\\frac{3k-1}{4k-2}" }, { "math_id": 16, "text": "1-{\\varepsilon}" }, { "math_id": 17, "text": "O(c_{\\varepsilon}n\\log{k})" }, { "math_id": 18, "text": "c_{\\varepsilon}" }, { "math_id": 19, "text": "(d,d+1,\\ldots,d^2)" }, { "math_id": 20, "text": "\\mathbf{n} = (n_d, n_{d+1}, \\ldots, n_{d^2})" }, { "math_id": 21, "text": "n_h" }, { "math_id": 22, "text": "\\mathbf{t} = (t_d, t_{d+1}, \\ldots, t_{d^2})" }, { "math_id": 23, "text": "x_h" }, { "math_id": 24, "text": "C(\\mathbf{t}) = \\sum_{h=d}^{d^2} t_h\\cdot (h L/d^2)" }, { "math_id": 25, "text": "T" }, { "math_id": 26, "text": "\\mathbf{t}" }, { "math_id": 27, "text": "L^{\\#}/2 < C(\\mathbf{t}) < 2 L^{\\#}" }, { "math_id": 28, "text": "{(d^2)}^{2d} = d^{4 d}" }, { "math_id": 29, "text": "|T|\\leq d^{4d}" }, { "math_id": 30, "text": "VAL(k, \\mathbf{n})" }, { "math_id": 31, "text": "VAL(0, \\mathbf{0}) = 0" }, { "math_id": 32, "text": "VAL(1, \\mathbf{n}) = f(C(\\mathbf{n}))" }, { "math_id": 33, "text": "L^{\\#}/2 < C(\\mathbf{n})) < 2 L^{\\#}" }, { "math_id": 34, "text": "C(\\mathbf{n})" }, { "math_id": 35, "text": "VAL(1, \\mathbf{n}) = \\infty" }, { "math_id": 36, "text": "VAL(k, \\mathbf{n}) = \\min_{\\mathbf{t}\\leq \\mathbf{n}, \\mathbf{t}\\in T} [f(C(\\mathbf{t})) + VAL(k-1, \\mathbf{n}-\\mathbf{t})]" }, { "math_id": 37, "text": "k\\geq 2" }, { "math_id": 38, "text": "|T|" }, { "math_id": 39, "text": "n^{d^2}" }, { "math_id": 40, "text": "O(k\\cdot n^{d^2}\\cdot d^{4d})" }, { "math_id": 41, "text": "\\sum_{\\mathbf{t}\\in T} x_{\\mathbf{t}}\\cdot f(C(\\mathbf{t}))" }, { "math_id": 42, "text": "\\sum_{\\mathbf{t}\\in T} x_{\\mathbf{t}} = k" }, { "math_id": 43, "text": "\\sum_{\\mathbf{t}\\in T} x_{\\mathbf{t}} \\cdot \\mathbf{t}= \\mathbf{n}" }, { "math_id": 44, "text": "x_{\\mathbf{t}} \\geq 0" }, { "math_id": 45, "text": "d^{4d}" }, { "math_id": 46, "text": "d^{4d}+d^2-d+2" }, { "math_id": 47, "text": "C_i - \\frac{L}{d} \\leq C_i^{\\#} \\leq \\frac{d+1}{d}C_i + \\frac{L}{d} " }, { "math_id": 48, "text": "\\frac{d}{d+1}C^{\\#}_i - 2 \\frac{L}{d} \\leq C_i \\leq C^{\\#}_i + \\frac{L}{d} " }, { "math_id": 49, "text": "d := \\lceil 5/\\delta \\rceil " }, { "math_id": 50, "text": "(1+\\frac{\\epsilon}{3})\\cdot OPT \\leq(1+ \\epsilon)\\cdot OPT " }, { "math_id": 51, "text": "k!" } ]
https://en.wikipedia.org/wiki?curid=65876372
65877107
Greedy number partitioning
In computer science, greedy number partitioning is a class of greedy algorithms for multiway number partitioning. The input to the algorithm is a set "S" of numbers, and a parameter "k". The required output is a partition of "S" into "k" subsets, such that the sums in the subsets are as nearly equal as possible. Greedy algorithms process the numbers sequentially, and insert the next number into a bin in which the sum of numbers is currently smallest. Approximate algorithms. The simplest greedy partitioning algorithm is called list scheduling. It just processes the inputs in any order they arrive. It always returns a partition in which the largest sum is at most formula_0 times the optimal (minimum) largest sum. This heuristic can be used as an online algorithm, when the order in which the items arrive cannot be controlled. An improved greedy algorithm is called LPT scheduling. It processes the inputs by descending order of value, from large to small. Since it needs to pre-order the inputs, it can be used only as an offline algorithm. It guarantees that the largest sum is at most formula_1 times the optimal (minimum) largest sum, and the smallest sum is at least formula_2 times the optimal (maximum) smallest sum. See LPT scheduling for more details. Complete greedy algorithm. The complete greedy algorithm (CGA) is an exact algorithm, i.e., it always finds an optimal solution. It works in the following way. After sorting the numbers in descending order (as in LPT), it constructs a "k"-ary tree. Each level corresponds to a number, and each of the "k" branches corresponds to a different set in which the current number can be put. Traversing the tree in depth-first order requires only O("n") space, but might take O("kn") time. The runtime can be improved by using the greedy heuristic: in each level, develop first the branch in which the current number is put in the set with the smallest sum. This algorithm finds the greedy (LPT) solution first, but then proceeds to look for better solutions. Several additional heuristics can be used to improve the runtime: Generalizations. In the fair item allocation problem, there are "n" items and "k" people, each of which assigns a possibly different value to each item. The goal is to partition the items among the people in as fair way as possible. The natural generalization of the greedy number partitioning algorithm is the envy-graph algorithm. It guarantees that the allocation is "envy-free up to at most one item" (EF1). Moreover, if the instance is "ordered" (- all agents rank the items in the same order), then the outcome is EFX, and guarantees to each agent at least formula_3 of his maximin share. If the items are "chores", then a similar algorithm guarantees formula_4 MMS. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
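A minimal sketch of the greedy rule described above, using a heap to track the subset whose sum is currently smallest; sorting the input in descending order first turns plain list scheduling into LPT. The input numbers are assumed for illustration.

import heapq

# Greedy number partitioning: each number goes into the subset whose current
# sum is smallest.  With lpt=True the numbers are processed in descending
# order (the LPT rule); with lpt=False this is online list scheduling.

def greedy_partition(numbers, k, lpt=True):
    if lpt:
        numbers = sorted(numbers, reverse=True)
    heap = [(0, i) for i in range(k)]          # (current sum, subset index)
    heapq.heapify(heap)
    subsets = [[] for _ in range(k)]
    for x in numbers:
        total, i = heapq.heappop(heap)         # subset with smallest sum
        subsets[i].append(x)
        heapq.heappush(heap, (total + x, i))
    return subsets

if __name__ == "__main__":
    parts = greedy_partition([8, 7, 6, 5, 4], k=2)
    print(parts, [sum(p) for p in parts])      # [[8, 5, 4], [7, 6]] -> [17, 13]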
[ { "math_id": 0, "text": "2-\\frac{1}{k}" }, { "math_id": 1, "text": "\\frac{4 k-1}{3 k}" }, { "math_id": 2, "text": "\\frac{3k-1}{4k-2}" }, { "math_id": 3, "text": "\\frac{2n}{3n-1}" }, { "math_id": 4, "text": "\\frac{4n-1}{3n}" } ]
https://en.wikipedia.org/wiki?curid=65877107
65877151
Pseudopolynomial time number partitioning
In computer science, pseudopolynomial time number partitioning is a pseudopolynomial time algorithm for solving the partition problem. The problem can be solved using dynamic programming when the size of the set and the size of the sum of the integers in the set are not too big to render the storage requirements infeasible. Suppose the input to the algorithm is a multiset formula_0 of cardinality formula_1: "S" = {"x"1, ..., "xN"}. Let "K" be the sum of all elements in "S". That is: "K" = "x"1 + ... + "xN". We will build an algorithm that determines whether there is a subset of "S" that sums to formula_2. If there is such a subset, then: if "K" is even, the rest of "S" also sums to formula_2; if "K" is odd, then the rest of "S" sums to formula_3. This is as good a solution as possible.
Example 1: "S" = {1, 2, 3, 5}, "K" = sum("S") = 11, "K"/2 = 5. Find a subset from "S" that is closest to "K"/2 -&gt; {2, 3} = 5, and 11 - 5 * 2 = 1.
Example 2: "S" = {1, 3, 7}, "K" = sum("S") = 11, "K"/2 = 5. Find a subset from "S" that is closest to "K"/2 -&gt; {1, 3} = 4, and 11 - 4 * 2 = 3.
Recurrence relation. We wish to determine if there is a subset of "S" that sums to formula_2. Let: "p"("i", "j") be "True" if a subset of { "x"1, ..., "xj" } sums to "i" and "False" otherwise. Then "p"(formula_2, "N") is "True" if and only if there is a subset of "S" that sums to formula_2. The goal of our algorithm will be to compute "p"(formula_2, "N"). In aid of this, we have the following recurrence relation: "p"("i", "j") is True if either "p"("i", "j" − 1) is True or "p"("i" − "xj", "j" − 1) is True; "p"("i", "j") is False otherwise. The reasoning for this is as follows: there is some subset of "S" that sums to "i" using numbers "x"1, ..., "xj" if and only if either of the following is true: there is a subset of { "x"1, ..., "xj"−1 } that sums to "i"; or there is a subset of { "x"1, ..., "xj"−1 } that sums to "i" − "xj", since "xj" plus that subset's sum equals "i".
The pseudo-polynomial algorithm. The algorithm consists of building up a table of size formula_2 by formula_1 containing the values of the recurrence. Remember that formula_4 is the sum of all formula_1 elements in formula_0. Once the entire table is filled in, we return formula_5. In a depiction of the table formula_6, an arrow from one block to another indicates that the value of the target block might depend on the value of the source block; this dependence is a property of the recurrence relation.
function can_be_partitioned_equally("S") is
    input: A list of integers "S".
    output: True if "S" can be partitioned into two subsets that have equal sum.
    "n" ← |"S"|
    "K" ← sum("S")
    "P" ← empty boolean table of size (formula_2 + 1) by ("n" + 1)
    initialize top row ("P"(0, "x")) of "P" to True
    initialize leftmost column ("P"("x", 0)) of "P", except for "P"(0, 0), to False
    for "i" from 1 to formula_2
        for "j" from 1 to "n"
            "x" ← "S"["j" - 1]
            if ("i" - "x") &gt;= 0 then
                "P"("i", "j") ← "P"("i", "j" - 1) or "P"("i" - "x", "j" - 1)
            else
                "P"("i", "j") ← "P"("i", "j" - 1)
    return "P"(formula_2, "n")
Example. The table "P" can be computed in this way for the example set used above, "S" = {3, 1, 1, 2, 2, 1}.
Analysis. This algorithm runs in time "O"("N" · "K"/2), where "N" is the number of elements in the input set and "K" is the sum of elements in the input set. The algorithm can be extended to the "k"-way multi-partitioning problem, but then takes "O"("n"("k" − 1)"m"^("k" − 1)) memory, where "m" is the largest number in the input, making it impractical even for "k" = 3 unless the inputs are very small numbers.
This algorithm can be generalized to a solution for the subset sum problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
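The pseudocode above translates directly into a runnable sketch. It builds the same (⌊K/2⌋+1) by (n+1) boolean table and returns the entry for the half-sum; the two test inputs are the example sets used in the text.

# Runnable version of the dynamic program above: P[i][j] is True iff some
# subset of the first j numbers sums to exactly i.  The answer returned is
# P[K // 2][n].

def can_be_partitioned_equally(numbers):
    n = len(numbers)
    half = sum(numbers) // 2                      # floor(K / 2)
    P = [[False] * (n + 1) for _ in range(half + 1)]
    for j in range(n + 1):                        # the empty subset sums to 0
        P[0][j] = True
    for i in range(1, half + 1):
        for j in range(1, n + 1):
            x = numbers[j - 1]
            P[i][j] = P[i][j - 1] or (i - x >= 0 and P[i - x][j - 1])
    return P[half][n]

if __name__ == "__main__":
    print(can_be_partitioned_equally([3, 1, 1, 2, 2, 1]))   # True: both halves sum to 5
    print(can_be_partitioned_equally([1, 3, 7]))             # False: no subset sums to 5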
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "\\lfloor K/2 \\rfloor " }, { "math_id": 3, "text": "\\lceil K/2 \\rceil " }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": "P(\\lfloor K/2 \\rfloor, N)" }, { "math_id": 6, "text": "P" } ]
https://en.wikipedia.org/wiki?curid=65877151
65877237
Largest differencing method
Algorithm for solving the partition problem In computer science, the largest differencing method is an algorithm for solving the partition problem and the multiway number partitioning. It is also called the Karmarkar–Karp algorithm after its inventors, Narendra Karmarkar and Richard M. Karp. It is often abbreviated as LDM. The algorithm. The input to the algorithm is a set "S" of numbers, and a parameter "k". The required output is a partition of "S" into "k" subsets, such that the sums in the subsets are as nearly equal as possible. The main steps of the algorithm are: Two-way partitioning. For "k"=2, the main step (2) works as follows. For example, if S = {8,7,6,5,4}, then the resulting difference-sets are {6,5,4,1} after taking out the largest two numbers {8,7} and inserting the difference 8-7=1 back; Repeat the steps and then we have {4,1,1}, then {3,1} then {2}. Step 3 constructs the subsets in the partition by backtracking. The last step corresponds to {2},{}. Then 2 is replaced by 3 in one set and 1 in the other set: {3},{1}, then {4},{1,1}, then {4,5}, {1,6}, then {4,7,5}, {8,6}, where the sum-difference is indeed 2. The runtime complexity of this algorithm is dominated by the step 1 (sorting), which takes O("n" log "n"). Note that this partition is not optimal: in the partition {8,7}, {6,5,4} the sum-difference is 0. However, there is evidence that it provides a "good" partition: Multi-way partitioning. For any "k" ≥ 2, the algorithm can be generalized in the following way. Examples: There is evidence for the good performance of LDM: Balanced two-way partitioning. Several variants of LDM were developed for the balanced number partitioning problem, in which all subsets must have the same cardinality (up to 1). PDM (Paired Differencing Method) works as follows. PDM has average properties worse than LDM. For two-way partitioning, when inputs are uniformly-distributed random variables, the expected difference between largest and smallest sum is formula_6. RLDM (Restricted Largest Differencing Method) works as follows. For two-way partitioning, when inputs are uniformly-distributed random variables, the expected difference between largest and smallest sum is formula_7. BLDM (Balanced Largest Differencing Method) works as follows. BLDM has average properties similar to LDM. For two-way partitioning, when inputs are uniformly-distributed random variables, the expected difference between largest and smallest sum is formula_5. For multi-way partitioning, when "c"=ceiling("n"/"k") and each of the "k" subsets must contain either ceiling("n"/"k") or floor("n"/"k") items, the approximation ratio of BLDM for the minimum largest sum is exactly 4/3 for "c"=3, 19/12 for "c"=4, 103/60 for "c"=5, 643/360 for "c"=6, and 4603/2520 for "c"=7. The ratios were found by solving a mixed integer linear program. In general (for any "c"), the approximation ratio is at least formula_8 and at most formula_9. The MILP results for 3,4,5,6,7 correspond to the lower bound. When the parameter is the number of subsets ("k"), the approximation ratio is exactly formula_10. Min-max subsequence problem. In the "min-max subsequence problem", the input is a multiset of "n" numbers and an integer parameter "k", and the goal is to order the numbers such that the largest sum of each block of adjacent "k" numbers is as small as possible. The problem occurs in the design of video servers. This problem can be solved in polytime for "k"=2, but it is strongly NP-hard for "k≥"3. 
A variant of the differencing method can be applied to this problem. An exact algorithm. The complete Karmarkar–Karp algorithm (CKK) finds an optimal solution by constructing a tree of degree formula_11. For "k"=2, CKK runs substantially faster than the Complete Greedy Algorithm (CGA) on random instances. This is due to two reasons: when an equal partition does not exist, CKK often allows more trimming than CGA; and when an equal partition does exist, CKK often finds it much faster and thus allows earlier termination. Korf reports that CKK can optimally partition 40 15-digit double-precision numbers in about 3 hours, while CGA requires about 9 hours. In practice, with "k"=2, problems of arbitrary size can be solved by CKK if the numbers have at most 12 significant digits; with "k"=3, at most 6 significant digits. CKK can also run as an anytime algorithm: it finds the KK solution first, and then finds progressively better solutions as time allows (possibly requiring exponential time to reach optimality, for the worst instances). Combining CKK with the balanced-LDM algorithm (BLDM) yields a complete anytime algorithm for solving the balanced partition problem. Previous mentions. An algorithm equivalent to the Karmarkar-Karp differencing heuristic is mentioned in ancient Jewish legal texts by Nachmanides and Joseph ibn Habib. The algorithm is used to combine different testimonies about the same loan. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
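The two-way differencing step is easy to sketch with a heap (Python's heapq is a min-heap, so values are negated to pop the largest first). The sketch returns only the final difference between the two subset sums; recovering the subsets themselves requires the backtracking step described above.

import heapq

# Two-way largest differencing method (Karmarkar-Karp): repeatedly replace
# the two largest numbers by their difference.  The last remaining number is
# the difference between the two subset sums of the implied partition.

def karmarkar_karp_difference(numbers):
    heap = [-x for x in numbers]               # negate to simulate a max-heap
    heapq.heapify(heap)
    while len(heap) > 1:
        largest = -heapq.heappop(heap)
        second = -heapq.heappop(heap)
        heapq.heappush(heap, -(largest - second))
    return -heap[0] if heap else 0

if __name__ == "__main__":
    print(karmarkar_karp_difference([8, 7, 6, 5, 4]))   # 2, as in the worked example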
[ { "math_id": 0, "text": "n^{-\\Theta(\\log(n)))}" }, { "math_id": 1, "text": "1+n^{-\\Theta(\\log(n)))}" }, { "math_id": 2, "text": "\\frac{4}{3}-\\frac{1}{3 (n-k-1)}" }, { "math_id": 3, "text": "\\frac{4}{3}-\\frac{1}{3 k}" }, { "math_id": 4, "text": "\\frac{4}{3}-\\frac{1}{3 (k-1)}" }, { "math_id": 5, "text": "n^{-\\Theta(\\log n)}" }, { "math_id": 6, "text": "\\Theta(1/n)" }, { "math_id": 7, "text": "O(\\log{n}/n^2)" }, { "math_id": 8, "text": "2-\\sum_{j=0}^{c-1}\\frac{j!}{c!}" }, { "math_id": 9, "text": "2-\\frac{1}{c-1}" }, { "math_id": 10, "text": "2-\\frac{1}{k}" }, { "math_id": 11, "text": "k!" } ]
https://en.wikipedia.org/wiki?curid=65877237
6587903
Tonewood
Type of wood used in musical instruments Tonewood refers to specific wood varieties used for woodwind or acoustic stringed instruments. The word implies that certain species exhibit qualities that enhance acoustic properties of the instruments, but other properties of the wood such as aesthetics and availability have always been considered in the selection of wood for musical instruments. According to "Mottola's Cyclopedic Dictionary of Lutherie Terms", tonewood is:Wood that is used to make stringed musical instruments. The term is often used to indicate wood species that are suitable for stringed musical instruments and, by exclusion, those that are not. But the list of species generally considered to be tonewoods changes constantly and has changed constantly throughout history. Varieties of tonewood. As a rough generalization it can be said that stiff-but-light softwoods (i.e. from coniferous trees) are favored for the soundboards or soundboard-like surface that transmits the vibrations of the strings to the ambient air. Hardwoods (i.e. from deciduous trees) are favored for the body or framing element of an instrument. Woods used for woodwind instruments include African blackwood, ("Dalbergia melanoxylon"), also known as grenadilla, used in modern clarinets and oboes. Bassoons are usually made of Maple, especially Norway maple ("Acer platanoides)". Wooden flutes, recorders, and baroque and classical period instruments may be made of various hardwoods, such as pear ("Pyrus" species), boxwood ("Buxus" species), or ebony ("Diospyros" species). Mechanical properties of tonewoods. Some of the mechanical properties of common tonewoods, sorted by density. See also Physical properties of wood. Carbon-fiber/Epoxy, glass, aluminum, and steel added for comparison, since they are sometimes used in musical instruments. Density is measured at 12% moisture content of the wood, i.e. air at 70 °F and 65% relative humidity. Most professional luthiers will build at 8% moisture content (45% relative humidity), and such wood would weigh less on average than that reported here, since it contains less water. Data comes from the Wood Database, except for 𝜈LR, Poisson's ratio, which comes from the Forest Product Laboratory, United States Forest Service, United States Department of Agriculture. The ratio displayed here is for deformation along the radial axis caused by stress along the longitudinal axis. The shrink volume percent shown here is the amount of shrinkage in all three dimensions as the wood goes from green to oven-dry. This can be used as a relative indicator of how much the dry wood will change as humidity changes, sometimes referred to as the instrument's "stability". However, the stability of tuning is primarily due to the length-wise shrinkage of the neck, which is typically only about 0.1% to 0.2% green to dry. The volume shrinkage is mostly due to the radial and tangential shrinkage. In the case of a neck (quarter-sawn), the radial shrinkage affects the thickness of the neck, and the tangential shrinkage affects the width of the neck. Given the dimensions involved, this shrinkage should be practically unnoticeable. The shrinkage of the length of the neck, as a percent, is quite a bit less, but given the dimension, it is enough to affect the pitch of the strings. The sound radiation coefficient is defined as: formula_0 where formula_1 is flexural modulus in Pascals (i.e. the number in the table multiplied by 109), and ρ is the density in kg/m3, as in the table. 
From this, it can be seen that the loudness of the top of a stringed instrument increases with stiffness, and decreases with density. The loudest wood tops, such as Sitka Spruce, are lightweight and stiff, while maintaining the necessary strength. Denser woods, for example Hard Maple, often used for necks, are stronger but not as loud (R = 6 vs. 12). When wood is used as the top of an acoustic instrument, it can be described using plate theory and plate vibrations. The flexural rigidity of an isotropic plate is "D" = "EH"^3 / [12(1 − "ν"^2)], where formula_1 is the flexural modulus for the material, formula_2 is the plate thickness, and formula_3 is Poisson's ratio for the material. Plate rigidity has units of Pascal·m3 (equivalent to N·m), since it refers to the moment per unit length per unit of curvature, and not the total moment. Of course, wood is not isotropic but orthotropic, so this equation describes the rigidity in one orientation. For example, if we use 𝜈LR, then we get the rigidity when bending on the longitudinal axis (with the grain), as would be usual for an instrument's top. This is typically 10 to 20 times the cross-grain rigidity for most species. The value for formula_4 shown in the table was calculated using this formula and a thickness formula_2 of 3.0 mm = 0.118″, or a little less than 1/8". When wood is used as the neck of an instrument, it can be described using beam theory. Flexural rigidity of a beam (defined as formula_5) varies along the length as a function of "x", as shown in the following equation: formula_6 where formula_1 is the flexural modulus for the material, formula_7 is the second moment of area (in m4), formula_8 is the transverse displacement of the beam at "x", and formula_9 is the bending moment at "x". Beam flexural rigidity has units of Pascal·m4 (equivalent to N·m²). The amount of deflection at the end of a cantilevered beam is: formula_10 where formula_11 is the point load at the end, and formula_12 is the length. So deflection is inversely proportional to formula_5. Given two necks of the same shape and dimensions, formula_7 becomes a constant, and deflection becomes inversely proportional to formula_1—in short, the higher this number for a given wood species, the less a neck will deflect under a given force (i.e. from the strings). Read more about mechanical properties in "Wood for Guitars." Selection of tonewoods. In addition to perceived differences in acoustic properties, a luthier may favor a particular tonewood for other reasons, such as its aesthetics or availability. Sources. Many tonewoods come from sustainable sources through specialist dealers. Spruce, for example, is very common, but large pieces with even grain represent a small proportion of total supply and can be expensive. Some tonewoods are particularly hard to find on the open market, and small-scale instrument makers often turn to reclamation, for instance from disused salmon traps in Alaska, various old construction in the U.S. Pacific Northwest, from trees that have blown down, or from specially permitted removals in conservation areas where logging is not generally permitted. Mass market instrument manufacturers have started using Asian and African woods, such as Bubinga ("Guibourtia" species) and Wenge ("Millettia laurentii"), as inexpensive alternatives to traditional tonewoods. The Fiemme Valley, in the Alps of Northern Italy, has long served as a source of high-quality spruce for musical instruments, dating from the violins of Antonio Stradivari to the piano soundboards of the contemporary maker Fazioli. Preparation.
Tonewood choices vary greatly among different instrument types. Guitar makers generally favor quartersawn wood because it provides added stiffness and dimensional stability. Soft woods, like spruce, may be split rather than sawn into boards so the board surface follows the grain as much as possible, thus limiting run-out. For most applications, wood must be dried before use, either in air or kilns. Some luthiers prefer further seasoning for several years. Wood for instruments is typically used at 8% moisture content (which is in equilibrium with air at 45% relative humidity). This is drier than usually produced by kilns, which is 12% moisture content (65% relative humidity). If an instrument is kept at a humidity that is significantly lower than that at which it was built, it may crack. Therefore, valuable instruments must be contained in controlled environments to prevent cracking, especially cracking of the top. Some guitar manufacturers subject the wood to rarefaction, which mimics the natural aging process of tonewoods. Torrefaction is also used for this purpose, but it often changes the cosmetic properties of the wood. Guitar builders using torrefied soundboards claim improved tone, similar to that of an aged instrument. Softwoods such as Spruce, Cedar, and Redwood, which are commonly used for guitar soundboards, are easier to torrefy than hardwoods, such as Maple. On inexpensive guitars, it is increasingly common to use a product called "Roseacer" for the fretboard, which mimics Rosewood, but is actually a thermally-modified Maple. "Roasted" Maple necks are increasingly popular as manufacturers claim increased stiffness and stability in changing conditions (heat and humidity). However, while engineering tests of the ThermoWood method indicated increased resistance to humidity, they also showed a significant reduction in strength (ultimate breaking point), while stiffness (flexural modulus) remained the same or was slightly reduced. Although the reduction in strength can be controlled by reducing the temperature of the process, the manufacturer recommends not using its product for structural purposes. However, it is perhaps possible to compensate for this loss of strength in guitars by using carbon-fiber stiffeners in necks and increased bracing in tops. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
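The sound radiation coefficient and plate rigidity defined earlier can be evaluated directly. The sketch below uses rough, assumed property values (approximately those of Sitka Spruce and Hard Maple; they are not taken from the article's table) and a 3.0 mm plate thickness, as mentioned in the text.

import math

# Sound radiation coefficient R = sqrt(E / rho^3) and isotropic plate
# rigidity D = E*H^3 / (12*(1 - nu^2)).  Property values below are rough,
# illustrative numbers, not the article's table.

def radiation_coefficient(E, rho):
    return math.sqrt(E / rho ** 3)

def plate_rigidity(E, H, nu):
    return E * H ** 3 / (12 * (1 - nu ** 2))

woods = {
    # name: (flexural modulus E in Pa, density rho in kg/m^3, Poisson's ratio nu_LR)
    "Sitka Spruce (approx.)": (11.0e9, 425, 0.37),
    "Hard Maple (approx.)":   (12.6e9, 705, 0.42),
}

H = 3.0e-3   # plate thickness of 3.0 mm
for name, (E, rho, nu) in woods.items():
    R = radiation_coefficient(E, rho)
    D = plate_rigidity(E, H, nu)
    print(f"{name}: R = {R:.1f}, D = {D:.1f} N·m")   # roughly R = 12 vs. 6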
[ { "math_id": 0, "text": "R = \\sqrt { \\cfrac {E}{{\\rho} ^ {3}}}" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "\\nu" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "EI" }, { "math_id": 6, "text": "\\ EI {dy \\over dx}\\ = \\int_{0}^{x} M(x) dx + C_1" }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "M(x)" }, { "math_id": 10, "text": "w_C = \\tfrac{PL^3}{3EI}" }, { "math_id": 11, "text": "P" }, { "math_id": 12, "text": "L" } ]
https://en.wikipedia.org/wiki?curid=6587903
65879198
Drift waves
In plasma physics, a drift wave is a type of collective excitation that is driven by a pressure gradient within a magnetised plasma, which can be destabilised by differences between ion and electron motion (then known as drift-wave instability or drift instability). The drift wave typically propagates across the pressure gradient and is perpendicular to the magnetic field. It can occur in relatively simple configurations such as in a column of plasma with a non-uniform density but a straight magnetic field. Drift wave turbulence is responsible for the transport of particles, energy and momentum across magnetic field lines. The characteristic frequency associated with drift waves involving electron flow is given by formula_0 where formula_1 is the wavenumber perpendicular to the pressure gradient of the plasma, formula_2 is the Boltzmann constant, formula_3 is the electron temperature, formula_4 is the elementary charge, formula_5 is the background magnetic field and formula_6 is the density gradient of the plasma. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
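A worked numerical sketch of the drift frequency follows. The parameter values are assumed, order-of-magnitude edge-plasma numbers, and the density gradient is expressed here through a gradient scale length, i.e. the formula's gradient factor is interpreted as the normalized gradient 1/"L"n with "L"n = "n"0/|∇"n"0|, which is a common convention; the sign of the expression is dropped.

# Illustrative evaluation of the electron drift-wave frequency
#   omega* ~ k_perp * (k_B T_e / (e B_0)) * (1 / L_n),
# with the density gradient expressed via its scale length L_n = n_0 / |dn_0/dx|.
# All parameter values are assumed, order-of-magnitude numbers.

k_B = 1.380649e-23      # Boltzmann constant, J/K
e   = 1.602176634e-19   # elementary charge, C

k_perp = 100.0          # perpendicular wavenumber, 1/m
T_e    = 50 * e / k_B   # electron temperature: 50 eV expressed in kelvin
B_0    = 2.0            # background magnetic field, T
L_n    = 0.02           # density gradient scale length, m

omega_star = k_perp * (k_B * T_e / (e * B_0)) * (1.0 / L_n)
print(f"omega* = {omega_star:.3e} rad/s")    # about 1e5 rad/s for these values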
[ { "math_id": 0, "text": "\\omega^* = k_\\perp \\left(-\\frac{k_BT_e}{eB_0}\\nabla n_0\\right) ," }, { "math_id": 1, "text": "k_\\perp" }, { "math_id": 2, "text": "k_B" }, { "math_id": 3, "text": "T_e" }, { "math_id": 4, "text": "e" }, { "math_id": 5, "text": "B_0" }, { "math_id": 6, "text": "\\nabla n_0" } ]
https://en.wikipedia.org/wiki?curid=65879198
658808
Material conditional
Logical connective The material conditional (also known as material implication) is an operation commonly used in logic. When the conditional symbol formula_0 is interpreted as material implication, a formula formula_1 is true unless formula_2 is true and formula_3 is false. Material implication can also be characterized inferentially by "modus ponens", "modus tollens", conditional proof, and classical "reductio ad absurdum". Material implication is used in all the basic systems of classical logic as well as some nonclassical logics. It is assumed as a model of correct conditional reasoning within mathematics and serves as the basis for commands in many programming languages. However, many logics replace material implication with other operators such as the strict conditional and the variably strict conditional. Due to the paradoxes of material implication and related problems, material implication is not generally considered a viable analysis of conditional sentences in natural language. Notation. In logic and related fields, the material conditional is customarily notated with an infix operator formula_4. The material conditional is also notated using the infixes formula_5 and formula_6. In the prefixed Polish notation, conditionals are notated as formula_7. In a conditional formula formula_8, the subformula formula_9 is referred to as the "antecedent" and formula_10 is termed the "consequent" of the conditional. Conditional statements may be nested such that the antecedent or the consequent may themselves be conditional statements, as in the formula formula_11. History. In "Arithmetices Principia: Nova Methodo Exposita" (1889), Peano expressed the proposition "If formula_12, then formula_13" as formula_12 Ɔ formula_13 with the symbol Ɔ, which is the opposite of C. He also expressed the proposition formula_14 as formula_12 Ɔ formula_13. Hilbert expressed the proposition "If "A", then "B"" as formula_15 in 1918. Russell followed Peano in his "Principia Mathematica" (1910–1913), in which he expressed the proposition "If "A", then "B"" as formula_14. Following Russell, Gentzen expressed the proposition "If "A", then "B"" as formula_14. Heyting expressed the proposition "If "A", then "B"" as formula_14 at first but later came to express it as formula_15 with a right-pointing arrow. Bourbaki expressed the proposition "If "A", then "B"" as formula_16 in 1954. Definitions. Semantics. From a classical semantic perspective, material implication is the binary truth functional operator which returns "true" unless its first argument is true and its second argument is false. This semantics can be shown graphically in a truth table such as the one below. One can also consider the equivalence formula_17. Truth table. The truth table of formula_18: The logical cases where the antecedent A is false and "A" → "B" is true, are called "vacuous truths". Examples are ... Deductive definition. Material implication can also be characterized deductively in terms of the following rules of inference. Unlike the semantic definition, this approach to logical connectives permits the examination of structurally identical propositional forms in various logical systems, where somewhat different properties may be demonstrated. For example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, formula_19 is not a propositional theorem, but the material conditional is used to define negation. Formal properties. 
When disjunction, conjunction and negation are classical, material implication validates the following equivalences: contraposition (formula_20), import-export (formula_21), negated conditionals (formula_22), or-and-if (formula_23), commutativity of antecedents (formula_24), and left distributivity (formula_25). Similarly, on classical interpretations of the other connectives, material implication validates the following entailments: antecedent strengthening (formula_26), vacuous conditional (formula_27), transitivity (formula_28), and simplification of disjunctive antecedents (formula_29). Tautologies involving material implication include: reflexivity (formula_30), totality (formula_31), and conditional excluded middle (formula_32). Discrepancies with natural language. Material implication does not closely match the usage of conditional sentences in natural language. For example, even though material conditionals with false antecedents are vacuously true, the natural language statement "If 8 is odd, then 3 is prime" is typically judged false. Similarly, any material conditional with a true consequent is itself true, but speakers typically reject sentences such as "If I have a penny in my pocket, then Paris is in France". These classic problems have been called the paradoxes of material implication. In addition to the paradoxes, a variety of other arguments have been given against a material implication analysis. For instance, counterfactual conditionals would all be vacuously true on such an account. In the mid-20th century, a number of researchers including H. P. Grice and Frank Jackson proposed that pragmatic principles could explain the discrepancies between natural language conditionals and the material conditional. On their accounts, conditionals denote material implication but end up conveying additional information when they interact with conversational norms such as Grice's maxims. Recent work in formal semantics and philosophy of language has generally eschewed material implication as an analysis for natural-language conditionals. In particular, such work has often rejected the assumption that natural-language conditionals are truth functional in the sense that the truth value of "If "P", then "Q"" is determined solely by the truth values of "P" and "Q". Thus semantic analyses of conditionals typically propose alternative interpretations built on foundations such as modal logic, relevance logic, probability theory, and causal models. Similar discrepancies have been observed by psychologists studying conditional reasoning, for instance, by the notorious Wason selection task study, where less than 10% of participants reasoned according to the material conditional. Some researchers have interpreted this result as a failure of the participants to conform to normative laws of reasoning, while others interpret the participants as reasoning normatively according to nonclassical laws. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
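The classical truth-functional behaviour described above can be checked mechanically. The following Python fragment is an editorial sketch (the helper name implies is ours, chosen only for the illustration); it enumerates the four valuations, printing the truth table, and confirms the equivalence formula_17 together with contraposition:

from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: false only when the antecedent is true and the consequent false.
    return (not a) or b

for a, b in product([False, True], repeat=2):
    assert implies(a, b) == (not (a and not b))    # A -> B  ==  not(A and not B)
    assert implies(a, b) == ((not a) or b)         # A -> B  ==  not A or B
    assert implies(a, b) == implies(not b, not a)  # contraposition
    print(a, b, "->", implies(a, b))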
[ { "math_id": 0, "text": "\\rightarrow" }, { "math_id": 1, "text": " P \\rightarrow Q" }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "\\to" }, { "math_id": 5, "text": "\\supset" }, { "math_id": 6, "text": "\\Rightarrow" }, { "math_id": 7, "text": "Cpq" }, { "math_id": 8, "text": "p\\to q" }, { "math_id": 9, "text": "p" }, { "math_id": 10, "text": "q" }, { "math_id": 11, "text": "(p\\to q)\\to(r\\to s)" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": "B" }, { "math_id": 14, "text": "A\\supset B" }, { "math_id": 15, "text": "A\\to B" }, { "math_id": 16, "text": "A\\Rightarrow B" }, { "math_id": 17, "text": "A \\to B \\equiv \\neg (A \\land \\neg B) \\equiv \\neg A \\lor B" }, { "math_id": 18, "text": "A \\rightarrow B" }, { "math_id": 19, "text": "(A \\to B) \\Rightarrow \\neg A \\lor B " }, { "math_id": 20, "text": "P \\to Q \\equiv \\neg Q \\to \\neg P" }, { "math_id": 21, "text": "P \\to (Q \\to R) \\equiv (P \\land Q) \\to R" }, { "math_id": 22, "text": "\\neg(P \\to Q) \\equiv P \\land \\neg Q" }, { "math_id": 23, "text": "P \\to Q \\equiv \\neg P \\lor Q" }, { "math_id": 24, "text": "\\big(P \\to (Q \\to R)\\big) \\equiv \\big(Q \\to (P \\to R)\\big)" }, { "math_id": 25, "text": "\\big(R \\to (P \\to Q)\\big) \\equiv \\big((R \\to P) \\to (R \\to Q)\\big)" }, { "math_id": 26, "text": "P \\to Q \\models (P \\land R) \\to Q" }, { "math_id": 27, "text": "\\neg P \\models P \\to Q " }, { "math_id": 28, "text": "(P \\to Q) \\land (Q \\to R) \\models P \\to R" }, { "math_id": 29, "text": "(P \\lor Q) \\to R \\models (P \\to R) \\land (Q \\to R)" }, { "math_id": 30, "text": "\\models P \\to P" }, { "math_id": 31, "text": "\\models (P \\to Q) \\lor (Q \\to P)" }, { "math_id": 32, "text": "\\models (P \\to Q) \\lor (P \\to \\neg Q)" } ]
https://en.wikipedia.org/wiki?curid=658808
65887746
Bin covering problem
Operations research problem of packing items into the largest number of bins In the bin covering problem, items of different sizes must be packed into a finite number of bins or containers, each of which must contain "at least" a certain given total size, in a way that maximizes the number of bins used. This problem is a dual of the bin packing problem: in bin covering, the bin sizes are bounded from below and the goal is to maximize their number; in bin packing, the bin sizes are bounded "from above" and the goal is to "minimize" their number. The problem is NP-hard, but there are various efficient approximation algorithms: The bidirectional bin-filling algorithm. Csirik, Frenk, Lebbe and Zhang present the following simple algorithm for 2/3 approximation. Suppose the bin size is 1 and there are "n" items. For any instance "I", denote by formula_1 the number of bins in the optimal solution, and by formula_2 the number of full bins in the bidirectional filling algorithm. Then formula_3, or equivalently, formula_4. Proof. For the proof, the following terminology is used. The sum of each bin formula_9 is at least 1, but if the final item is removed from it, then the remaining sum is smaller than 1. Each of the first "formula_10" bins formula_11 contains an initial item, possibly some middle items, and a final item. Each of the last formula_8 bins formula_12 contains only an initial item and a final item, since both of them are larger than 1/2 and their sum is already larger than 1. The proof considers two cases. The easy case is formula_13, that is, all final items are smaller than 1/2. Then, the sum of every filled formula_14 is at most 3/2, and the sum of remaining items is at most 1, so the sum of all items is at most formula_15. On the other hand, in the optimal solution the sum of every bin is at least 1, so the sum of all items is at least formula_1. Therefore, formula_16 as required. The hard case is formula_17, that is, some final items are larger than 1/2. We now prove an upper bound on formula_1 by presenting it as a sum formula_18 where: We focus first on the optimal bins in formula_22 and formula_23. We present a bijection between the items in each such bin to some items in formula_9 which are at least as valuable. We now focus on the optimal bins in formula_23 and formula_30. Tightness. The 2/3 factor is tight for BDF. Consider the following instance (where formula_36 is sufficiently small):formula_37BDF initializes the first bin with the largest item and fills it with the formula_38 smallest items. Then, the remaining formula_38 items can cover bins only in triplets, so all in all formula_39 bins are filled. But in OPT one can fill formula_40 bins, each of which contains two of the middle-sized items and two small items. Three-classes bin-filling algorithm. Csirik, Frenk, Lebbe and Zhang present another algorithm that attains a 3/4 approximation. The algorithm orders the items from large to small, and partitions them into three classes: The algorithm works in two phases. Phase 1: Phase 2: In the above example, showing the tightness of BDF, the sets are:formula_41TCF attains the optimal outcome, since it initializes all formula_40 bins with pairs of items from Y, and fills them with pairs of items from Z. For any instance "I", denote by formula_1 the number of bins in the optimal solution, and by formula_42 the number of full bins in the three-classes filling algorithm. Then formula_43. The 3/4 factor is tight for TCF. 
Consider the following instance (where formula_36 is sufficiently small): formula_44 TCF initializes the first bin with the largest two items, and fills it with the formula_45 smallest items. Then, the remaining formula_45 items can cover bins only in groups of four, so all in all formula_46 bins are filled. But in OPT one can fill formula_47 bins, each of which contains 3 middle-sized items and 3 small items. Polynomial-time approximation schemes. Csirik, Johnson and Kenyon present an asymptotic PTAS. It is an algorithm that, for every "ε"&gt;0, fills at least formula_48 bins if the sum of all items is more than formula_49, and at least formula_50 otherwise. It runs in time formula_51. The algorithm solves a variant of the configuration linear program, with formula_52variables and formula_53 constraints. This algorithm is only theoretically interesting, since in order to get better than 3/4 approximation, we must take formula_54, and then the number of variables is more than formula_55. They also present algorithms for the "online" version of the problem. In the online setting, it is not possible to get an asymptotic worst-case approximation factor better than 1/2. However, there are algorithms that perform well in the average case. Jansen and Solis-Oba present an asymptotic FPTAS. It is an algorithm that, for every "ε"&gt;0, fills at least formula_56 bins if the sum of all items is more than formula_57 (if the sum of items is less than that, then the optimum is at most formula_58 anyway). It runs in time formula_59, where formula_60 is the runtime complexity of the best available algorithm for matrix inversion (currently, around formula_61). This algorithm becomes better than the 3/4 approximation already when formula_62, and in this case the constants are reasonable - about formula_63. Performance with divisible item sizes. An important special case of bin covering is that the item sizes form a "divisible sequence" (also called "factored"). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, then some of the heuristic algorithms for bin covering find an optimal solution.Sec.5 Related problems. In the fair item allocation problem, there are different people each of whom attributes a different value to each item. The goal is to allocate to each person a "bin" full of items, such that the value of each bin is at least a certain constant, and as many people as possible receive a bin. Many techniques from bin covering are used in this problem too. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
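As an editorial illustration of the bidirectional filling rule sketched above (each bin is opened with the largest remaining item and topped up with the smallest remaining items until it is covered), the following Python fragment reproduces the tightness instance; the function is one plausible reading of the rule, not the authors' reference implementation:

from fractions import Fraction as F

def bidirectional_fill(sizes, capacity=1):
    # Open each bin with the largest remaining item, then add the smallest
    # remaining items until the bin total reaches the capacity.
    items = sorted(sizes, reverse=True)      # largest first
    bins = []
    while items:
        current = [items.pop(0)]             # initial item = largest remaining
        while sum(current) < capacity and items:
            current.append(items.pop())      # middle/final items = smallest remaining
        if sum(current) >= capacity:
            bins.append(current)             # this bin is covered
    return bins

# The tightness instance from the text, with k = 2 and epsilon = 1/100 (exact arithmetic):
k, eps = 2, F(1, 100)
instance = [1 - 6*k*eps] + [F(1, 2) - eps] * (6*k) + [eps] * (6*k)
print(len(bidirectional_fill(instance)))     # 2k+1 = 5 covered bins; the optimum covers 3k = 6

Increasing k widens the gap between the 2k+1 bins covered here and the 3k bins of the optimal solution, which is exactly the 2/3 tightness argument given above.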
[ { "math_id": 0, "text": "O(n), O(n \\log n), O(n {\\log}^2 n)" }, { "math_id": 1, "text": "\\mathrm{OPT}(I)" }, { "math_id": 2, "text": "\\mathrm{BDF}(I)" }, { "math_id": 3, "text": "\\mathrm{BDF}(I) \\geq (2/3) \\mathrm{OPT}(I) - (2/3)" }, { "math_id": 4, "text": "\\mathrm{OPT}(I) \\leq (3/2) \\mathrm{BDF}(I)+1" }, { "math_id": 5, "text": "t := \\mathrm{BDF}(I) = " }, { "math_id": 6, "text": "B_1,\\ldots,B_t := " }, { "math_id": 7, "text": "w " }, { "math_id": 8, "text": "t-w " }, { "math_id": 9, "text": "B_1,\\ldots,B_t " }, { "math_id": 10, "text": "w " }, { "math_id": 11, "text": "B_1,\\ldots,B_w " }, { "math_id": 12, "text": "B_{w+1},\\ldots,B_t " }, { "math_id": 13, "text": "w = t " }, { "math_id": 14, "text": "B_i " }, { "math_id": 15, "text": "3t/2+1" }, { "math_id": 16, "text": "\\mathrm{OPT}(I) \\leq 3t/2+1" }, { "math_id": 17, "text": "w < t " }, { "math_id": 18, "text": "\\mathrm{OPT}(I) = |K_0|+|K_1|+|K_2|" }, { "math_id": 19, "text": "K_0 := " }, { "math_id": 20, "text": "K_1 := " }, { "math_id": 21, "text": "K_2 := " }, { "math_id": 22, "text": "K_0 " }, { "math_id": 23, "text": "K_1 " }, { "math_id": 24, "text": "B_1,\\ldots,B_{|K_1|} " }, { "math_id": 25, "text": "B_1,\\ldots,B_{w} " }, { "math_id": 26, "text": "B_{|K_1|+1},\\ldots,B_{w} " }, { "math_id": 27, "text": "|K_1| + (w-|K_1|)/2 = (|K_1|+w)/2 " }, { "math_id": 28, "text": "|K_0| + |K_1| \\leq (|K_1|+w)/2 " }, { "math_id": 29, "text": "2 |K_0| + |K_1| \\leq w \\leq t " }, { "math_id": 30, "text": "K_2 " }, { "math_id": 31, "text": "|K_1| + 2|K_2| " }, { "math_id": 32, "text": "2 t " }, { "math_id": 33, "text": "|K_1| + 2|K_2|\\leq 2 t " }, { "math_id": 34, "text": "2 \\mathrm{OPT}(I) \\leq 3 t" }, { "math_id": 35, "text": "\\mathrm{OPT}(I) \\leq 3 t/2" }, { "math_id": 36, "text": "\\epsilon>0" }, { "math_id": 37, "text": "\\begin{align}\n1-6 k \\epsilon, ~&~ \\tfrac{1}{2}-\\epsilon, \\ldots, \\tfrac{1}{2}-\\epsilon, ~&~ \\epsilon,\\ldots,\\epsilon\n\\\\\n ~&~ \\{\\cdots 6k ~ \\text{units} \\cdots \\} ~&~ \\{ \\cdots 6k ~ \\text{units} \\cdots \\}\n\\end{align}" }, { "math_id": 38, "text": "6k" }, { "math_id": 39, "text": "2k+1" }, { "math_id": 40, "text": "3k" }, { "math_id": 41, "text": "\\begin{align}\n1-6 k \\epsilon, ~&~ \\tfrac{1}{2}-\\epsilon, \\ldots, \\tfrac{1}{2}-\\epsilon, ~&~ \\epsilon,\\ldots,\\epsilon\n\\\\\n \\{ |X|=1 \\} ~&~ \\{ \\cdots |Y|=6k \\cdots \\} ~&~ \\{ \\cdots |Z|=6k \\cdots \\}\n\\end{align}" }, { "math_id": 42, "text": "\\mathrm{TCF}(I)" }, { "math_id": 43, "text": "\\mathrm{TCF}(I) \\geq (3/4) (\\mathrm{OPT}(I) - 4)" }, { "math_id": 44, "text": "\\begin{align}\n\\tfrac{1}{2}-6 k \\epsilon, \\tfrac{1}{2}-6 k \\epsilon, ~&~ \\tfrac{1}{3}-\\epsilon, \\ldots, \\tfrac{1}{3}-\\epsilon, ~&~ \\epsilon,\\ldots,\\epsilon\n\\\\\n ~&~ \\{\\cdots 12k ~ \\text{units} \\cdots \\} ~&~ \\{ \\cdots 12k ~ \\text{units} \\cdots \\}\n\\end{align}" }, { "math_id": 45, "text": "12k" }, { "math_id": 46, "text": "3k+1" }, { "math_id": 47, "text": "4k" }, { "math_id": 48, "text": "(1 - 5 \\varepsilon)\\cdot \\mathrm{OPT}(I) - 4" }, { "math_id": 49, "text": "13 B/\\epsilon^3" }, { "math_id": 50, "text": "(1 - 2 \\varepsilon)\\cdot \\mathrm{OPT}(I) - 1" }, { "math_id": 51, "text": "O(n^{1/\\varepsilon^2})" }, { "math_id": 52, "text": "n^{1/\\varepsilon^2}" }, { "math_id": 53, "text": "1 + 1/\\varepsilon^2" }, { "math_id": 54, "text": "\\varepsilon < 1/20" }, { "math_id": 55, "text": "n^{400}" }, { "math_id": 56, "text": "(1 - \\varepsilon)\\cdot \\mathrm{OPT}(I) -1" }, { "math_id": 57, "text": "13B/\\epsilon^3" 
}, { "math_id": 58, "text": "13/\\epsilon^3\\in O(1/\\epsilon^3)" }, { "math_id": 59, "text": "O\\left(\n\\frac{1}{\\epsilon^5}\n\\cdot \\ln{\\frac{n}{\\varepsilon}}\n\\cdot \\max{(n^2,\\frac{1}{\\varepsilon}\\ln\\ln\\frac{1}{\\varepsilon^3})}\n+\n\\frac{1}{\\varepsilon^4}\\mathcal{T_M}(\\frac{1}{\\varepsilon^2})\n\\right)" }, { "math_id": 60, "text": "\\mathcal{T_M}(n)" }, { "math_id": 61, "text": "O(n^{2.38})" }, { "math_id": 62, "text": "\\varepsilon < 1/4" }, { "math_id": 63, "text": "2^{10} n^2 + 2^{18}" } ]
https://en.wikipedia.org/wiki?curid=65887746
65888
Electromagnetic induction
Production of voltage by a varying magnetic field Electromagnetic or magnetic induction is the production of an electromotive force (emf) across an electrical conductor in a changing magnetic field. Michael Faraday is generally credited with the discovery of induction in 1831, and James Clerk Maxwell mathematically described it as Faraday's law of induction. Lenz's law describes the direction of the induced field. Faraday's law was later generalized to become the Maxwell–Faraday equation, one of the four Maxwell equations in his theory of electromagnetism. Electromagnetic induction has found many applications, including electrical components such as inductors and transformers, and devices such as electric motors and generators. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; History. Electromagnetic induction was discovered by Michael Faraday, published in 1831. It was discovered independently by Joseph Henry in 1832. In Faraday's first experimental demonstration (August 29, 1831), he wrapped two wires around opposite sides of an iron ring or "torus" (an arrangement similar to a modern toroidal transformer). Based on his understanding of electromagnets, he expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. He saw a transient current, which he called a "wave of electricity", when he connected the wire to the battery and another when he disconnected it. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. Within two months, Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk"). Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's model, the time varying aspect of electromagnetic induction is expressed as a differential equation, which Oliver Heaviside referred to as Faraday's law even though it is slightly different from Faraday's original formulation and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations. In 1834 Heinrich Lenz formulated the law named after him to describe the "flux through the circuit". Lenz's law gives the direction of the induced emf and current resulting from electromagnetic induction. Theory. Faraday's law of induction and Lenz's law. Faraday's law of induction makes use of the magnetic flux ΦB through a region of space enclosed by a wire loop. The magnetic flux is defined by a surface integral: formula_0 where "dA is an element of the surface Σ enclosed by the wire loop, B is the magnetic field. The dot product B·"dA corresponds to an infinitesimal amount of magnetic flux. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop. 
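As an editorial illustration of the surface-integral definition above, the following Python sketch sums B · dA over small patches of a flat square loop for an assumed position-dependent field; the field profile, dimensions and grid resolution are chosen only for the example:

B0, L, n = 0.5, 0.2, 400          # tesla, metres, grid resolution (assumed values)
dA = (L / n) ** 2                 # area of one surface patch
flux = 0.0
for i in range(n):
    for j in range(n):
        x = (i + 0.5) * L / n     # centre of the (i, j) patch
        flux += B0 * (x / L) * dA # contribution B . dA of this patch (field along the normal)
print(flux)                       # ~ B0 * L**2 / 2 = 0.01 Wb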
When the flux through the surface changes, Faraday's law of induction says that the wire loop acquires an electromotive force (emf). The most widespread version of this law states that the induced electromotive force in any closed circuit is equal to the rate of change of the magnetic flux enclosed by the circuit: formula_2 where formula_1 is the emf and ΦB is the magnetic flux. The direction of the electromotive force is given by Lenz's law which states that an induced current will flow in the direction that will oppose the change which produced it. This is due to the negative sign in the previous equation. To increase the generated emf, a common approach is to exploit flux linkage by creating a tightly wound coil of wire, composed of "N" identical turns, each with the same magnetic flux going through them. The resulting emf is then "N" times that of one single wire. formula_3 Generating an emf through a variation of the magnetic flux through the surface of a wire loop can be achieved in several ways: Maxwell–Faraday equation. In general, the relation between the emf formula_4 in a wire loop encircling a surface Σ, and the electric field E in the wire is given by formula_5 where "d"ℓ is an element of contour of the surface Σ, combining this with the definition of flux formula_0 we can write the integral form of the Maxwell–Faraday equation formula_6 It is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. Faraday's law and relativity. Faraday's law describes two different phenomena: the "motional emf" generated by a magnetic force on a moving wire (see Lorentz force), and the "transformer emf" that is generated by an electric force due to a changing magnetic field (due to the differential form of the Maxwell–Faraday equation). James Clerk Maxwell drew attention to the separate physical phenomena in 1861. This is believed to be a unique example in physics of where such a fundamental law is invoked to explain two such different phenomena. Albert Einstein noticed that the two situations both corresponded to a relative movement between a conductor and a magnet, and the outcome was unaffected by which one was moving. This was one of the principal paths that led him to develop special relativity. Applications. The principles of electromagnetic induction are applied in many devices and systems, including: Electrical generator. The emf generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the "drum generator" is based upon the figure to the bottom-right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right. In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to the disc, causing a current to flow in the radial arm due to the Lorentz force. Mechanical work is necessary to drive this current. When the generated current flows through the conducting rim, a magnetic field is generated by this current through Ampère's circuital law (labelled "induced B" in the figure). 
The rim thus becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the far side of the figure, the return current flows from the rotating arm through the far side of the rim to the bottom brush. The B-field induced by this return current opposes the applied B-field, tending to "decrease" the flux through that side of the circuit, opposing the "increase" in flux due to rotation. On the near side of the figure, the return current flows from the rotating arm through the near side of the rim to the bottom brush. The induced B-field "increases" the flux on this side of the circuit, opposing the "decrease" in flux due to the rotation. The energy required to keep the disc moving, despite this reactive force, is exactly equal to the electrical energy generated (plus energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is common to all generators converting mechanical energy to electrical energy. Electrical transformer. When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire in reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux, formula_7. Therefore, an electromotive force is set up in the second loop, called the induced emf or transformer emf. If the two ends of this loop are connected through an electrical load, current will flow. Current clamp. A current clamp is a type of transformer with a split core which can be spread apart and clipped onto a wire or coil to either measure the current in it or, in reverse, to induce a voltage. Unlike conventional instruments, the clamp does not make electrical contact with the conductor or require it to be disconnected during attachment of the clamp. Magnetic flow meter. Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage ε generated in the magnetic field "B" due to a conductive liquid moving at velocity "v" is thus given by: formula_8 where ℓ is the distance between electrodes in the magnetic flow meter. Eddy currents. Electrical conductors moving through a steady magnetic field, or stationary conductors within a changing magnetic field, will have circular currents induced within them by induction, called eddy currents. Eddy currents flow in closed loops in planes perpendicular to the magnetic field. They have useful applications in eddy current brakes and induction heating systems. However, eddy currents induced in the metal magnetic cores of transformers and AC motors and generators are undesirable since they dissipate energy (called core losses) as heat in the resistance of the metal. Cores for these devices use a number of methods to reduce eddy currents: Electromagnet laminations. Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more magnetic lines of force than the inner portion; hence the induced electromotive force is not uniform; this tends to cause electric currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature. Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents.
In practical use, the number of laminations or punchings ranges from 40 to 66 per inch (16 to 26 per centimetre), and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust/oxide coating of the plates is enough to prevent current flow across the laminations. This is a rotor approximately 20 mm in diameter from a DC motor used in a CD player. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses. Parasitic induction within conductors. In this illustration, a solid copper bar conductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the copper bar. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar. High current power-frequency devices, such as electric motors, generators and transformers, use multiple small conductors in parallel to break up the eddy flows that can form within large solid conductors. The same principle is applied to transformers used at higher than power frequency, for example, those used in switch-mode power supplies and the intermediate frequency coupling transformers of radio receivers. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
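As an editorial illustration of the coil relation above, the following Python sketch computes the emf induced in an N-turn coil whose per-turn flux changes linearly in time; all numerical values are assumed for the example:

N = 200                            # number of identical turns
phi_start, phi_end = 0.0, 3.0e-3   # flux through one turn, in webers
dt = 0.05                          # duration of the change, in seconds
emf = -N * (phi_end - phi_start) / dt
print(emf)                         # -12.0 V; the sign reflects Lenz's law (opposition to the change)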
[ { "math_id": 0, "text": " \\Phi_\\mathrm{B} = \\int_{\\Sigma} \\mathbf{B} \\cdot d \\mathbf{A}\\, , " }, { "math_id": 1, "text": "\\mathcal{E}" }, { "math_id": 2, "text": "\\mathcal{E} = -\\frac{d\\Phi_\\mathrm{B}}{dt} \\, , " }, { "math_id": 3, "text": " \\mathcal{E} = -N \\frac{d\\Phi_\\mathrm{B}}{dt} " }, { "math_id": 4, "text": " \\mathcal{E}" }, { "math_id": 5, "text": " \\mathcal{E} = \\oint_{\\partial \\Sigma} \\mathbf{E} \\cdot d\\boldsymbol{\\ell} " }, { "math_id": 6, "text": " \\oint_{\\partial \\Sigma} \\mathbf{E} \\cdot d\\boldsymbol{\\ell} = -\\frac{d}{d t} { \\int_{\\Sigma} \\mathbf{B} \\cdot d\\mathbf{A}} " }, { "math_id": 7, "text": "\\frac{d \\Phi_B}{dt}" }, { "math_id": 8, "text": "\\mathcal{E}= - B \\ell v," } ]
https://en.wikipedia.org/wiki?curid=65888
65888580
Multifit algorithm
Optimization algorithm in computer science The multifit algorithm is an algorithm for multiway number partitioning, originally developed for the problem of identical-machines scheduling. It was developed by Coffman, Garey and Johnson. Its novelty comes from the fact that it uses an algorithm for another famous problem - the bin packing problem - as a subroutine. The algorithm. The input to the algorithm is a set "S" of numbers, and a parameter "n". The required output is a partition of "S" into "n" subsets, such that the largest subset sum (also called the makespan) is as small as possible. The algorithm uses as a subroutine, an algorithm called "first-fit-decreasing bin packing" (FFD). The FFD algorithm takes as input the same set "S" of numbers, and a bin-capacity "c". It heuristically packs numbers into bins such that the sum of numbers in each bin is at most "C", aiming to use as few bins as possible. Multifit runs FFD multiple times, each time with a different capacity "C", until it finds some "C" such that FFD with capacity "C" packs "S" into at most "n" bins. To find it, it uses binary search as follows. Performance. Multifit is a constant-factor approximation algorithm. It always finds a partition in which the makespan is at most a constant factor larger than the optimal makespan. To find this constant, we must first analyze FFD. While the standard analysis of FFD considers approximation w.r.t. "number of bins" when the capacity is constant, here we need to analyze approximation w.r.t. "capacity" when the number of bins is constant. Formally, for every input size S and integer n, let formula_0 be the smallest capacity such that "S" can be packed into "n" bins of this capacity. Note that formula_0 is the value of the optimal solution to the original scheduling instance. Let formula_1 be the smallest real number such that, for every input "S", FFD with capacity formula_2 uses at most "n" bins. Upper bounds. Coffman, Garey and Johnson prove the following upper bounds on formula_1: During the MultiFit algorithm, the lower bound "L" is always a capacity for which it is impossible to pack "S" into "n" bins. Therefore, formula_7. Initially, the difference formula_8 is at most sum("S") / "n", which is at most formula_0. After the MultiFit algorithm runs for "k" iterations, the difference shrinks "k" times by half, so formula_9. Therefore, formula_10. Therefore, the scheduling returned by MultiFit has makespan at most formula_11 times the optimal makespan. When formula_12 is sufficiently large, the approximation factor of MultiFit can be made arbitrarily close to formula_13, which is at most 1.22. Later papers performed a more detailed analysis of MultiFit, and proved that its approximation ratio is at most 6/5=1.2, and later, at most 13/11≈1.182. The original proof of this missed some cases; presented a complete and simpler proof. The 13/11 cannot be improved: see lower bound below. Lower bounds. For "n"=4: the following shows that formula_14, which is tight. The inputs are 9,7,6,5,5, 4,4,4,4,4,4,4,4,4. They can be packed into 4 bins of capacity 17 as follows: But if we run FFD with bin capacity smaller than 20, then the filled bins are: Note that the sum in each of the first 4 bins is 16, so we cannot put another 4 inside it. Therefore, 4 bins are not sufficient. For "n"=13: the following shows that formula_15, which is tight. 
The inputs can be packed into 13 bins of capacity 66 as follows: But if we run FFD with bin capacity smaller than 66*13/11 = 78, then the filled bins are: Note that the sum in each of the first 13 bins is 65, so we cannot put another 13 inside it. Therefore, 13 bins are not sufficient. Performance with uniform machines. MultiFit can also be used in the more general setting called uniform-machines scheduling, where machines may have different processing speeds. When there are two uniform machines, the approximation factor is formula_16. When MultiFit is combined with the LPT algorithm, the ratio improves to formula_17. Performance for maximizing the smallest sum. A dual goal to minimizing the largest sum (makespan) is maximizing the smallest sum. Deuermeyer, Friesen and Langston claim that MultiFit does not have a good approximation factor for this problem:"In the solution of the makespan problem using MULTIFIT, it is easy to construct examples where one processor is never used. Such a solution is tolerable for the makespan problem, but is totally unacceptable for our problem [since the smallest sum is 0]. Modifications of MULTIFIT can be devised which would be more suitable for our problem, but we could find none which produces a better worst-case bound than that of LPT." Proof idea. Minimal counterexamples. The upper bounds on formula_1 are proved by contradiction. For any integers "p" ≥ "q," if formula_18, then there exists a ("p"/"q")-counterexample, defined as an instance "S" and a number "n" of bins such that If there exists such a counterexample, then there also exists a "minimal (p/q)-counterexample", which is a ("p"/"q")-counterexample with a smallest number of items in "S" and a smallest number of bins "n". In a minimal "(p/q)"-counterexample, FFD packs all items in "S" except the last (smallest) one into "n" bins with capacity "p". Given a minimal "(p/q)"-counterexample, denote by P1...,P"n" the (incomplete) FFD packing into these "n" bins with capacity "p", by P"n+1" the bin containing the single smallest item, and by Q1...,Q"n" the (complete) optimal packing into "n" bins with capacity "q". The following lemmas can be proved: 5/4 Upper bound. From the above lemmas, it is already possible to prove a loose upper bound formula_25. "Proof". Let "S", "n" be a minimal (5/4)-counterexample. The above lemmas imply that - Structure of FFD packing. To prove tighter bounds, one needs to take a closer look at the FFD packing of the minimal ("p"/"q")-counterexample. The items and FFD bins P1...,P"n" are termed as follows: The following lemmas follow immediately from these definitions and the operation of FFD. In a minimal counterexample, there are no regular 1-bins (since each bin contains at least 2 items), so by the above lemmas, the FFD bins P1...,P"n" are ordered by type: 1.22 upper bound. The upper bound formula_31 is proved by assuming a minimal (122/100)-counterexample. Each item is given a "weight" based on its size and its bin in the FFD packing. The weights are determined such that the total weight in each FFD bin is at least "x", and the total weight in almost each optimal bin is at most "x" (for some predetermined "x"). This implies that the number of FFD bins is at most the number of optimal bins, which contradicts the assumption that it is a counterexample. By the lemmas above, we know that: If "D"&gt;4, the size of each item is larger than 26, so each optimal bin (with capacity 100) must contain at most 3 items. 
Each item is smaller than 56-2"D" and each FFD bin has a sum larger than 100-"D", so each FFD bin must contain at least 3 items. Therefore, there are at most "n" FFD bins - contradiction. So from now on, we assume "D" ≤ 4. The items are assigned types and weights as follows. Note that the weight of each item is at most its size (the weight can be seen as the size "rounded down"). Still, the total weight of items in every FFD bin is at least 100-"D": The total weight of items in most optimal bins is at most 100-"D": 13/11 upper bound. The upper bound formula_32 is proved by assuming a minimal ((120-3"d")/100)-counterexample, with some "d" &lt; 20/33, and deriving a contradiction. Non-monotonicity. MultiFit is not "monotone" in the following sense: it is possible that an input "decreases" while the max-sum in the partition returned by MultiFit "increases". As an example, suppose "n"=3 and the input numbers are: 44, 24, 24, 22, 21, 17, 8, 8, 6, 6. FFD packs these inputs into 3 bins of capacity 60 (which is optimal): But if the "17" becomes "16", then FFD with capacity 60 needs 4 bins: so MultiFit must increase the capacity, for example, to 62: This is in contrast to other number partitioning algorithms - List scheduling and Longest-processing-time-first scheduling - which are monotone. Generalization: fair allocation of chores. Multifit has been extended to the more general problem of maximin-share allocation of chores. In this problem, S is a set of chores, and there are "n" agents who assign potentially "different" valuations to the chores. The goal is to give to each agent a set of chores worth at most "r" times the maximum value in an optimal scheduling based on "i"'s valuations. A naive approach is to let each agent in turn use the MultiFit algorithm to calculate the threshold, and then use the algorithm where each agent uses his own threshold. If this approach worked, we would get an approximation of 13/11. However, this approach fails due to the non-monotonicity of FFD. Example. Here is an example. Suppose there are four agents, and they have valuations of two types: Both types can partition the chores into 4 parts of total value 75. Type A: Type B: If all four agents are of the same type, then FFD with threshold 75 fills the 4 optimal bins. But suppose there is one agent of type B, and the others are of type A. Then, in the first round, the agent of type B takes the bundle 51, 24 (the other agents cannot take it since for them the values are 51, 25, whose sum is more than 75). In the following rounds, the following bundles are filled for the type A agents: so the last two chores remain unallocated. Optimal value guarantee. Using a more sophisticated threshold calculation, it is possible to guarantee to each agent at most 11/9≈1.22 of his optimal value if the optimal value is known, and at most 5/4≈1.25 of his optimal value (using a polynomial time algorithm) if the optimal value is not known. Using more elaborate arguments, it is possible to guarantee to each agent the same ratio as MultiFit. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
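As an editorial illustration of how FFD and the binary search interact, the following Python sketch runs the procedure on the non-monotonicity instance above; the initial capacity bounds are a common choice and are not quoted from the cited papers:

def ffd(items, capacity):
    # First-fit decreasing: place each item, in decreasing order, into the first
    # bin with enough room; open a new bin when none fits.
    bins = []
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= capacity:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

def multifit(items, n, iterations=20):
    # Binary-search the capacity C, using FFD as the packing subroutine, and keep
    # the smallest C found for which FFD needs at most n bins.
    lo = max(sum(items) / n, max(items))      # assumed (commonly used) initial bounds
    hi = max(2 * sum(items) / n, max(items))
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if len(ffd(items, mid)) <= n:
            hi = mid                          # feasible: try a smaller capacity
        else:
            lo = mid                          # infeasible: the capacity must grow
    return hi, ffd(items, hi)

cap, bins = multifit([44, 24, 24, 22, 21, 17, 8, 8, 6, 6], 3)
print(round(cap), bins)                       # ~60 and a 3-bin packing, matching the text

Raising the iteration count shrinks the remaining capacity interval, mirroring the (1/2)^k term in the bound quoted above.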
[ { "math_id": 0, "text": "OPT(S,n)" }, { "math_id": 1, "text": "r_n" }, { "math_id": 2, "text": "r_n\\cdot OPT(S,n)" }, { "math_id": 3, "text": "r_n\\leq 8/7 \\approx 1.14" }, { "math_id": 4, "text": "r_n\\leq 15/13 \\approx 1.15" }, { "math_id": 5, "text": "r_n\\leq 20/17 \\approx 1.176" }, { "math_id": 6, "text": "r_n\\leq 122/100 = 1.22" }, { "math_id": 7, "text": "L < r_n\\cdot OPT(S,n)" }, { "math_id": 8, "text": "U-L" }, { "math_id": 9, "text": "U-L\\leq (1/2)^k\\cdot OPT(S,n)" }, { "math_id": 10, "text": "U\\leq (r_n + (1/2)^k)\\cdot OPT(S,n)" }, { "math_id": 11, "text": "r_n + (1/2)^k" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "r_n " }, { "math_id": 14, "text": "r_n\\geq 20/17" }, { "math_id": 15, "text": "r_n\\geq 13/11" }, { "math_id": 16, "text": "\\sqrt{6}/2" }, { "math_id": 17, "text": "\\sqrt{2}+1/2" }, { "math_id": 18, "text": "r_n > p/q" }, { "math_id": 19, "text": "s > \\frac{n}{n-1}(p-q)" }, { "math_id": 20, "text": "sum(P_i) + s > p" }, { "math_id": 21, "text": "\\sum_{i=1}^n sum(P_i) + n\\cdot s > n\\cdot p" }, { "math_id": 22, "text": "\\sum_{i=1}^n sum(P_i) + s \\leq n\\cdot q" }, { "math_id": 23, "text": "q - 2s" }, { "math_id": 24, "text": "p - s" }, { "math_id": 25, "text": "r_n\\leq 5/4 = 1.25" }, { "math_id": 26, "text": "s > \\frac{n}{n-1}(5-4) > 1" }, { "math_id": 27, "text": "4 - 2s" }, { "math_id": 28, "text": "5-s" }, { "math_id": 29, "text": "8 - 4s = 5 + (3-3s) - s < 5 - s" }, { "math_id": 30, "text": "\\frac{k}{k+1}\\cdot p" }, { "math_id": 31, "text": "r_n\\leq 1.22" }, { "math_id": 32, "text": "r_n\\leq 13/11\\approx 1.182" } ]
https://en.wikipedia.org/wiki?curid=65888580
65890
Magnetic flux
Surface integral of the magnetic field In physics, specifically electromagnetism, the magnetic flux through a surface is the surface integral of the normal component of the magnetic field B over that surface. It is usually denoted Φ or ΦB. The SI unit of magnetic flux is the weber (Wb; in derived units, volt–seconds), and the CGS unit is the maxwell. Magnetic flux is usually measured with a fluxmeter, which contains measuring coils, and it calculates the magnetic flux from the change of voltage on the coils. Description. The magnetic interaction is described in terms of a vector field, where each point in space is associated with a vector that determines what force a moving charge would experience at that point (see Lorentz force). Since a vector field is quite difficult to visualize, introductory physics instruction often uses field lines to visualize this field. The magnetic flux through some surface, in this simplified picture, is proportional to the number of field lines passing through that surface (in some contexts, the flux may be defined to be precisely the number of field lines passing through that surface; although technically misleading, this distinction is not important). The magnetic flux is the "net" number of field lines passing through that surface; that is, the number passing through in one direction minus the number passing through in the other direction (see below for deciding in which direction the field lines carry a positive sign and in which they carry a negative sign). More sophisticated physical models drop the field line analogy and define magnetic flux as the surface integral of the normal component of the magnetic field passing through a surface. If the magnetic field is constant, the magnetic flux passing through a surface of vector area S is formula_0 where "B" is the magnitude of the magnetic field (the magnetic flux density) having the unit of Wb/m2 (tesla), "S" is the area of the surface, and "θ" is the angle between the magnetic field lines and the normal (perpendicular) to S. For a varying magnetic field, we first consider the magnetic flux through an infinitesimal area element dS, where we may consider the field to be constant: formula_1 A generic surface, S, can then be broken into infinitesimal elements and the total magnetic flux through the surface is then the surface integral formula_2 From the definition of the magnetic vector potential A and the fundamental theorem of the curl the magnetic flux may also be defined as: formula_3 where the line integral is taken over the boundary of the surface S, which is denoted ∂"S". Magnetic flux through a closed surface. Gauss's law for magnetism, which is one of the four Maxwell's equations, states that the total magnetic flux through a closed surface is equal to zero. (A "closed surface" is a surface that completely encloses a volume(s) with no holes.) This law is a consequence of the empirical observation that magnetic monopoles have never been found. In other words, Gauss's law for magnetism is the statement: formula_4 formula_5 formula_6 for any closed surface "S". Magnetic flux through an open surface. While the magnetic flux through a closed surface is always zero, the magnetic flux through an open surface need not be zero and is an important quantity in electromagnetism. When determining the total magnetic flux through a surface only the boundary of the surface needs to be defined, the actual shape of the surface is irrelevant and the integral over any surface sharing the same boundary will be equal. 
This is a direct consequence of the closed surface flux being zero. Changing magnetic flux. For example, a change in the magnetic flux passing through a loop of conductive wire will cause an electromotive force, and therefore an electric current, in the loop. The relationship is given by Faraday's law: formula_7 where The two equations for the EMF are, firstly, the work per unit charge done against the Lorentz force in moving a test charge around the (possibly moving) surface boundary ∂Σ and, secondly, as the change of magnetic flux through the open surface Σ. This equation is the principle behind an electrical generator. Comparison with electric flux. By way of contrast, Gauss's law for electric fields, another of Maxwell's equations, is formula_9 formula_5 formula_10 where The flux of E through a closed surface is "not" always zero; this indicates the presence of "electric monopoles", that is, free positive or negative charges. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
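As an editorial illustration of the closed-surface statement above, the following Python sketch adds up B · n × area over the six faces of a unit cube placed in an assumed uniform field; the contributions of opposite faces cancel, so the net flux is zero, as Gauss's law for magnetism requires in this special case:

B = (0.2, -0.5, 0.3)                      # uniform field components in tesla (assumed)
faces = [                                 # outward unit normals of the six cube faces
    (1, 0, 0), (-1, 0, 0),
    (0, 1, 0), (0, -1, 0),
    (0, 0, 1), (0, 0, -1),
]
area = 1.0                                # each face of the unit cube has area 1 m^2
net_flux = sum(sum(b * n for b, n in zip(B, normal)) * area for normal in faces)
print(net_flux)                           # 0.0 Wb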
[ { "math_id": 0, "text": "\\Phi_B = \\mathbf{B} \\cdot \\mathbf{S} = BS \\cos \\theta," }, { "math_id": 1, "text": "d\\Phi_B = \\mathbf{B} \\cdot d\\mathbf{S}." }, { "math_id": 2, "text": "\\Phi_B = \\iint_S \\mathbf{B} \\cdot d\\mathbf S." }, { "math_id": 3, "text": "\\Phi_B = \\oint_{\\partial S} \\mathbf{A} \\cdot d\\boldsymbol{\\ell}," }, { "math_id": 4, "text": "\\Phi_B=\\,\\!" }, { "math_id": 5, "text": "\\scriptstyle S" }, { "math_id": 6, "text": "\\mathbf{B} \\cdot d\\mathbf S = 0" }, { "math_id": 7, "text": "\\mathcal{E} = \\oint_{\\partial \\Sigma} \\left( \\mathbf{E} + \\mathbf v \\times \\mathbf B\\right) \\cdot d\\boldsymbol{\\ell} = -\\frac{d\\Phi_B}{dt}," }, { "math_id": 8, "text": "\\mathcal{E}" }, { "math_id": 9, "text": "\\Phi_E =\\,\\!" }, { "math_id": 10, "text": "\\mathbf{E}\\cdot d\\mathbf{S} = \\frac{Q}{\\varepsilon_0}\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=65890
65891
Boyle's law
Relation between gas pressure and volume Boyle's law, also referred to as the Boyle–Mariotte law or Mariotte's law (especially in France), is an empirical gas law that describes the relationship between pressure and volume of a confined gas. Boyle's law has been stated as: The absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the volume it occupies if the temperature and amount of gas remain unchanged within a closed system. Mathematically, Boyle's law can be stated as P ∝ 1/V (pressure is inversely proportional to volume), or equivalently as PV = k, where P is the pressure of the gas, V is the volume of the gas, and k is a constant for a particular temperature and amount of gas. Boyle's law states that when the temperature of a given mass of confined gas is constant, the product of its pressure and volume is also constant. When comparing the same substance under two different sets of conditions, the law can be expressed as: formula_0 showing that as volume increases, the pressure of a gas decreases proportionally, and vice versa. Boyle's law is named after Robert Boyle, who published the original law in 1662. An equivalent law is Mariotte's law, named after French physicist Edme Mariotte. History. The relationship between pressure and volume was first noted by Richard Towneley and Henry Power in the 17th century. Robert Boyle confirmed their discovery through experiments and published the results. According to Robert Gunther and other authorities, it was Boyle's assistant, Robert Hooke, who built the experimental apparatus. Boyle's law is based on experiments with air, which he considered to be a fluid of particles at rest in between small invisible springs. Boyle may have begun experimenting with gases due to an interest in air as an essential element of life; for example, he published works on the growth of plants without air. Boyle used a closed J-shaped tube and, after pouring mercury from one side, he forced the air on the other side to contract under the pressure of mercury. After repeating the experiment several times and using different amounts of mercury, he found that under controlled conditions, the pressure of a gas is inversely proportional to the volume occupied by it. The French physicist Edme Mariotte (1620–1684) discovered the same law independently of Boyle in 1679, after Boyle had published it in 1662. Mariotte did, however, discover that air volume changes with temperature. Thus this law is sometimes referred to as Mariotte's law or the Boyle–Mariotte law. Later, in 1687 in the "Principia", Newton showed mathematically that in an elastic fluid consisting of particles at rest, between which are repulsive forces inversely proportional to their distance, the density would be directly proportional to the pressure, but this mathematical treatise does not involve the temperature dependence observed by Mariotte and is not the proper physical explanation for the observed relationship. Instead of a static theory, a kinetic theory is needed, which was developed over the next two centuries by Daniel Bernoulli (1738) and more fully by Rudolf Clausius (1857), Maxwell and Boltzmann. This law was the first physical law to be expressed in the form of an equation describing the dependence of two variable quantities. Definition. The law itself can be stated as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Boyle's law is a gas law, stating that the pressure and volume of a gas have an inverse relationship. If volume increases, then pressure decreases and vice versa, when the temperature is held constant.
Therefore, when the volume is halved, the pressure is doubled; and if the volume is doubled, the pressure is halved. Relation with kinetic theory and ideal gases. Boyle's law states that "at constant temperature" the volume of a given mass of a dry gas is inversely proportional to its pressure. Most gases behave like ideal gases at moderate pressures and temperatures. The technology of the 17th century could not produce very high pressures or very low temperatures. Hence, the law was not likely to have deviations at the time of publication. As improvements in technology permitted higher pressures and lower temperatures, deviations from the ideal gas behavior became noticeable, and the relationship between pressure and volume can only be accurately described employing real gas theory. The deviation is expressed as the compressibility factor. Boyle (and Mariotte) derived the law solely by experiment. The law can also be derived theoretically based on the presumed existence of atoms and molecules and assumptions about motion and perfectly elastic collisions (see kinetic theory of gases). These assumptions were met with enormous resistance in the positivist scientific community at the time, however, as they were seen as purely theoretical constructs for which there was not the slightest observational evidence. Daniel Bernoulli (in 1737–1738) derived Boyle's law by applying Newton's laws of motion at the molecular level. It remained ignored until around 1845, when John Waterston published a paper building the main precepts of kinetic theory; this was rejected by the Royal Society of England. Later works of James Prescott Joule, Rudolf Clausius and in particular Ludwig Boltzmann firmly established the kinetic theory of gases and brought attention to both the theories of Bernoulli and Waterston. The debate between proponents of energetics and atomism led Boltzmann to write a book in 1898, which endured criticism until his suicide in 1906. Albert Einstein in 1905 showed how kinetic theory applies to the Brownian motion of a fluid-suspended particle, which was confirmed in 1908 by Jean Perrin. Equation. The mathematical equation for Boyle's law is: formula_1 where P denotes the pressure of the system, V denotes the volume of the gas, k is a constant value representative of the temperature of the system and amount of gas. So long as temperature remains constant the same amount of energy given to the system persists throughout its operation and therefore, theoretically, the value of k will remain constant. However, due to the derivation of pressure as perpendicular applied force and the probabilistic likelihood of collisions with other particles through collision theory, the application of force to a surface may not be infinitely constant for such values of V, but will have a limit when differentiating such values over a given time. Forcing the volume V of the fixed quantity of gas to increase, keeping the gas at the initially measured temperature, the pressure P must decrease proportionally. Conversely, reducing the volume of the gas increases the pressure. Boyle's law is used to predict the result of introducing a change, in volume and pressure only, to the initial state of a fixed quantity of gas. 
The initial and final volumes and pressures of the fixed amount of gas, where the initial and final temperatures are the same (heating or cooling will be required to meet this condition), are related by the equation: formula_2 Here "P"1 and "V"1 represent the original pressure and volume, respectively, and "P"2 and "V"2 represent the second pressure and volume. Boyle's law, Charles's law, and Gay-Lussac's law form the combined gas law. The three gas laws in combination with Avogadro's law can be generalized by the ideal gas law. Human breathing system. Boyle's law is often used as part of an explanation on how the breathing system works in the human body. This commonly involves explaining how the lung volume may be increased or decreased and thereby cause a relatively lower or higher air pressure within them (in keeping with Boyle's law). This forms a pressure difference between the air inside the lungs and the environmental air pressure, which in turn precipitates either inhalation or exhalation as air moves from high to low pressure. See also. Related phenomena: Other gas laws: Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
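As an editorial illustration of the relation formula_2 above, the following Python sketch solves for the final pressure of a fixed amount of gas compressed at constant temperature; the numbers are assumed for the example:

P1, V1 = 100.0, 2.0      # initial pressure in kPa and initial volume in litres
V2 = 0.5                 # final volume in litres
P2 = P1 * V1 / V2        # Boyle's law: the product P*V stays constant
print(P2)                # 400.0 kPa: a quarter of the volume gives four times the pressure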
[ { "math_id": 0, "text": "P_1 V_1 = P_2 V_2." }, { "math_id": 1, "text": " PV = k " }, { "math_id": 2, "text": "P_1 V_1 = P_2 V_2. " } ]
https://en.wikipedia.org/wiki?curid=65891
65894
Electromotive force
Electrical action produced by a non-electrical source In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted formula_0) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical "transducers" provide an emf by converting other forms of energy into electrical energy. Other electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted formula_1). An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage). In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop. For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit. Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see ). Overview. Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electrical grid as the lines of the magnetic field are shifted about and cut across the conductors. In a battery, the charge separation that gives rise to a potential difference (voltage) between the terminals is accomplished by chemical reactions at the electrodes that convert chemical potential energy into electromagnetic potential energy. A voltaic cell can be thought of as having a "charge pump" of atomic dimensions at each electrode, that is: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A (chemical) source of emf can be thought of as a kind of "charge pump" that acts to move positive charges from a point of low potential through its interior to a point of high potential. … By chemical, mechanical or other means, the source of emf performs work formula_2 on that charge to move it to the high-potential terminal. The emf formula_3 of the source is defined as the work formula_2 done per charge formula_4. formula_5. In an electrical generator, a time-varying magnetic field inside the generator creates an electric field via electromagnetic induction, which creates a potential difference between the generator terminals. Charge separation takes place within the generator because electrons flow away from one terminal toward the other, until, in the open-circuit case, an electric field is developed that makes further charge separation impossible. The emf is countered by the electrical voltage due to charge separation. 
If a load is attached, this voltage can drive a current. The general principle governing the emf in such electrical machines is Faraday's law of induction. History. In 1801, Alessandro Volta introduced the term "force motrice électrique" to describe the active agent of a battery (which he had invented around 1798). This is called the "electromotive force" in English. Around 1830, Michael Faraday established that chemical reactions at each of two electrode–electrolyte interfaces provide the "seat of emf" for the voltaic cell. That is, these reactions drive the current and are not an endless source of energy as the earlier obsolete theory thought. In the open-circuit case, charge separation continues until the electrical field from the separated charges is sufficient to arrest the reactions. Years earlier, Alessandro Volta, who had measured a contact potential difference at the metal–metal (electrode–electrode) interface of his cells, held the incorrect opinion that contact alone (without taking into account a chemical reaction) was the origin of the emf. Notation and units of measurement. Electromotive force is often denoted by formula_0 or "ℰ". In a device without internal resistance, if an electric charge formula_6 passing through that device gains an energy formula_7 via work, the net emf for that device is the energy gained per unit charge: formula_8 Like other measures of energy per charge, emf uses the SI unit volt, which is equivalent to a joule (SI unit of energy) per coulomb (SI unit of charge). Electromotive force in electrostatic units is the statvolt (in the centimeter gram second system of units equal in amount to an erg per electrostatic unit of charge). Formal definitions. "Inside" a source of emf (such as a battery) that is open-circuited, a charge separation occurs between the negative terminal "N" and the positive terminal "P". This leads to an electrostatic field formula_9 that points from "P" to "N", whereas the emf of the source must be able to drive current from "N" to "P" when connected to a circuit. This led Max Abraham to introduce the concept of a nonelectrostatic field formula_10 that exists only inside the source of emf. In the open-circuit case, formula_11, while when the source is connected to a circuit the electric field formula_12 inside the source changes but formula_10 remains essentially the same. In the open-circuit case, the conservative electrostatic field created by separation of charge exactly cancels the forces producing the emf. Mathematically: formula_13 where formula_9 is the conservative electrostatic field created by the charge separation associated with the emf, formula_14 is an element of the path from terminal "N" to terminal "P", 'formula_15' denotes the vector dot product, and formula_16 is the electric scalar potential. This emf is the work done on a unit charge by the source's nonelectrostatic field formula_10 when the charge moves from "N" to "P". When the source is connected to a load, its emf is just formula_17 and no longer has a simple relation to the electric field formula_12 inside it. In the case of a closed path in the presence of a varying magnetic field, the integral of the electric field around the (stationary) closed loop formula_18 may be nonzero. 
Then, the "induced emf" (often called the "induced voltage") in the loop is: formula_19 where formula_12 is the entire electric field, conservative and non-conservative, and the integral is around an arbitrary, but stationary, closed curve formula_18 through which there is a time-varying magnetic flux formula_20, and formula_21 is the vector potential. The electrostatic field does not contribute to the net emf around a circuit because the electrostatic portion of the electric field is conservative (i.e., the work done against the field around a closed path is zero, see Kirchhoff's voltage law, which is valid, as long as the circuit elements remain at rest and radiation is ignored). That is, the "induced emf" (like the emf of a battery connected to a load) is not a "voltage" in the sense of a difference in the electric scalar potential. If the loop formula_18 is a conductor that carries current formula_22 in the direction of integration around the loop, and the magnetic flux is due to that current, we have that formula_23, where formula_24 is the self inductance of the loop. If in addition, the loop includes a coil that extends from point 1 to 2, such that the magnetic flux is largely localized to that region, it is customary to speak of that region as an inductor, and to consider that its emf is localized to that region. Then, we can consider a different loop formula_25 that consists of the coiled conductor from 1 to 2, and an imaginary line down the center of the coil from 2 back to 1. The magnetic flux, and emf, in loop formula_25 is essentially the same as that in loop formula_18:formula_26 For a good conductor, formula_27 is negligible, so we have, to a good approximation, formula_28 where formula_16 is the electric scalar potential along the centerline between points 1 and 2. Thus, we can associate an effective "voltage drop" formula_29 with an inductor (even though our basic understanding of induced emf is based on the vector potential rather than the scalar potential), and consider it as a load element in Kirchhoff's voltage law, formula_30 where now the induced emf is not considered to be a source emf. This definition can be extended to arbitrary sources of emf and paths "formula_18" moving with velocity formula_31 through the electric field formula_12 and magnetic field formula_32: formula_33 which is a conceptual equation mainly, because the determination of the "effective forces" is difficult. The term formula_34 is often called a "motional emf". In (electrochemical) thermodynamics. When multiplied by an amount of charge formula_35 the emf formula_0 yields a thermodynamic work term formula_36 that is used in the formalism for the change in Gibbs energy when charge is passed in a battery: formula_37 where formula_38 is the Gibbs free energy, formula_39 is the entropy, formula_16 is the system volume, formula_40 is its pressure and formula_41 is its absolute temperature. The combination formula_42 is an example of a conjugate pair of variables. At constant pressure the above relationship produces a Maxwell relation that links the change in open cell voltage with temperature "formula_41" (a measurable quantity) to the change in entropy "formula_39" when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. 
This Maxwell relation is: formula_43 If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is: formula_44 where formula_45 is the number of electrons/ion, and formula_46 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by: formula_47 where formula_48 is the enthalpy of reaction. The quantities on the right are all directly measurable. Assuming constant temperature and pressure: formula_49 which is used in the derivation of the Nernst equation. Distinction with potential difference. Although an electrical potential difference (voltage) is sometimes called an emf, they are formally distinct concepts: In the case of an open circuit, the electric charge that has been separated by the mechanism generating the emf creates an electric field opposing the separation mechanism. For example, the chemical reaction in a voltaic cell stops when the opposing electric field at each electrode is strong enough to arrest the reactions. A larger opposing field can reverse the reactions in what are called "reversible" cells. The electric charge that has been separated creates an electric potential difference that can (in many cases) be measured with a voltmeter between the terminals of the device, when not connected to a load. The magnitude of the emf for the battery (or other source) is the value of this open-circuit voltage. When the battery is charging or discharging, the emf itself cannot be measured directly using the external voltage because some voltage is lost inside the source. It can, however, be inferred from a measurement of the current formula_22 and potential difference formula_16, provided that the internal resistance formula_50 already has been measured: "formula_51" "Potential difference" is not the same as "induced emf" (often called "induced voltage"). The potential difference (difference in the electric scalar potential) between two points A and B is independent of the path we take from "A" to "B". If a voltmeter always measured the potential difference between "A" and "B", then the position of the voltmeter would make no difference. However, it is quite possible for the measurement by a voltmeter between points "A" and "B" to depend on the position of the voltmeter, if a time-dependent magnetic field is present. For example, consider an infinitely long solenoid using an AC current to generate a varying flux in the interior of the solenoid. Outside the solenoid we have two resistors connected in a ring around the solenoid. The resistor on the left is 100 Ω and the one on the right is 200 Ω, they are connected at the top and bottom at points "A" and "B". The induced voltage, by Faraday's law is formula_16, so the current formula_52 Therefore, the voltage across the 100 Ω resistor is formula_53 and the voltage across the 200 Ω resistor is formula_54, yet the two resistors are connected on both ends, but formula_55 measured with the voltmeter to the left of the solenoid is not the same as formula_55 measured with the voltmeter to the right of the solenoid. Generation. Chemical sources. The question of how batteries (galvanic cells) generate an emf occupied scientists for most of the 19th century. The "seat of the electromotive force" was eventually determined in 1889 by Walther Nernst to be primarily at the interfaces between the electrodes and the electrolyte. 
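The relation formula_51 quoted above gives a practical way to infer a source's emf from terminal measurements once the internal resistance is known, and the solenoid example shows why two voltmeter readings can differ in a time-varying field. The short Python sketch below illustrates both; the numerical values are illustrative assumptions, not figures from this article.

```python
# Minimal sketch (illustrative values): inferring a source's emf from
# terminal measurements using the relation  emf = V + I*R  quoted above.

def emf_from_measurements(terminal_voltage, current, internal_resistance):
    """Return the source emf in volts, given V (volts), I (amperes), R (ohms)."""
    return terminal_voltage + current * internal_resistance

# Hypothetical discharging battery: 11.8 V at the terminals while delivering
# 2.0 A through an internal resistance of 0.35 ohm.
emf = emf_from_measurements(11.8, 2.0, 0.35)
print(f"inferred emf = {emf:.2f} V")   # 12.50 V

# The solenoid/two-resistor example above: one induced voltage drives a
# current I = V / (100 + 200) ohms, so the two voltmeter readings differ.
V_induced = 3.0                       # assumed induced voltage around the ring
I = V_induced / (100 + 200)
print(f"reading near the 100-ohm resistor: {100 * I:.2f} V")  # 1.00 V
print(f"reading near the 200-ohm resistor: {200 * I:.2f} V")  # 2.00 V
```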
Atoms in molecules or solids are held together by chemical bonding, which stabilizes the molecule or solid (i.e. reduces its energy). When molecules or solids of relatively high energy are brought together, a spontaneous chemical reaction can occur that rearranges the bonding and reduces the (free) energy of the system. In batteries, coupled half-reactions, often involving metals and their ions, occur in tandem, with a gain of electrons (termed "reduction") by one conductive electrode and loss of electrons (termed "oxidation") by another (reduction-oxidation or redox reactions). The spontaneous overall reaction can only occur if electrons move through an external wire between the electrodes. The electrical energy given off is the free energy lost by the chemical reaction system. As an example, a Daniell cell consists of a zinc anode (an electron collector) that is oxidized as it dissolves into a zinc sulfate solution. The dissolving zinc leaving behind its electrons in the electrode according to the oxidation reaction ("s" = solid electrode; "aq" = aqueous solution): formula_56 The zinc sulfate is the electrolyte in that half cell. It is a solution which contains zinc cations formula_57, and sulfate anions formula_58 with charges that balance to zero. In the other half cell, the copper cations in a copper sulfate electrolyte move to the copper cathode to which they attach themselves as they adopt electrons from the copper electrode by the reduction reaction: formula_59 which leaves a deficit of electrons on the copper cathode. The difference of excess electrons on the anode and deficit of electrons on the cathode creates an electrical potential between the two electrodes. (A detailed discussion of the microscopic process of electron transfer between an electrode and the ions in an electrolyte may be found in Conway.) The electrical energy released by this reaction (213 kJ per 65.4 g of zinc) can be attributed mostly due to the 207 kJ weaker bonding (smaller magnitude of the cohesive energy) of zinc, which has filled 3d- and 4s-orbitals, compared to copper, which has an unfilled orbital available for bonding. If the cathode and anode are connected by an external conductor, electrons pass through that external circuit (light bulb in figure), while ions pass through the salt bridge to maintain charge balance until the anode and cathode reach electrical equilibrium of zero volts as chemical equilibrium is reached in the cell. In the process the zinc anode is dissolved while the copper electrode is plated with copper. The salt bridge has to close the electrical circuit while preventing the copper ions from moving to the zinc electrode and being reduced there without generating an external current. It is not made of salt but of material able to wick cations and anions (a dissociated salt) into the solutions. The flow of positively charged cations along the bridge is equivalent to the same number of negative charges flowing in the opposite direction. If the light bulb is removed (open circuit) the emf between the electrodes is opposed by the electric field due to the charge separation, and the reactions stop. For this particular cell chemistry, at 298 K (room temperature), the emf formula_0 = 1.0934 V, with a temperature coefficient of formula_60 = −4.53×10−4 V/K. Voltaic cells. Volta developed the voltaic cell about 1792, and presented his work March 20, 1800. 
Volta correctly identified the role of dissimilar electrodes in producing the voltage, but incorrectly dismissed any role for the electrolyte. Volta ordered the metals in a 'tension series', "that is to say in an order such that any one in the list becomes positive when in contact with any one that succeeds, but negative by contact with any one that precedes it." A typical symbolic convention in a schematic of this circuit ( –|– ) would have a long electrode 1 and a short electrode 2, to indicate that electrode 1 dominates. Volta's law about opposing electrode emfs implies that, given ten electrodes (for example, zinc and nine other materials), 45 unique combinations of voltaic cells (10 × 9/2) can be created. Typical values. The electromotive force produced by primary (single-use) and secondary (rechargeable) cells is usually of the order of a few volts. The figures quoted below are nominal, because emf varies according to the size of the load and the state of exhaustion of the cell. Other chemical sources. Other chemical sources include fuel cells. Electromagnetic induction. Electromagnetic induction is the production of a circulating electric field by a time-dependent magnetic field. A time-dependent magnetic field can be produced either by motion of a magnet relative to a circuit, by motion of a circuit relative to another circuit (at least one of these must be carrying an electric current), or by changing the electric current in a fixed circuit. The effect on the circuit itself, of changing the electric current, is known as self-induction; the effect on another circuit is known as mutual induction. For a given circuit, the electromagnetically induced emf is determined purely by the rate of change of the magnetic flux through the circuit according to Faraday's law of induction. An emf is induced in a coil or conductor whenever there is change in the flux linkages. Depending on the way in which the changes are brought about, there are two types: When the conductor is moved in a stationary magnetic field to procure a change in the flux linkage, the emf is "statically induced". The electromotive force generated by motion is often referred to as "motional emf". When the change in flux linkage arises from a change in the magnetic field around the stationary conductor, the emf is "dynamically induced." The electromotive force generated by a time-varying magnetic field is often referred to as "transformer emf". Contact potentials. When solids of two different materials are in contact, thermodynamic equilibrium requires that one of the solids assume a higher electrical potential than the other. This is called the "contact potential". Dissimilar metals in contact produce what is known also as a contact electromotive force or Galvani potential. The magnitude of this potential difference is often expressed as a difference in Fermi levels in the two solids when they are at charge neutrality, where the Fermi level (a name for the chemical potential of an electron system) describes the energy necessary to remove an electron from the body to some common point (such as ground). If there is an energy advantage in taking an electron from one body to the other, such a transfer will occur. The transfer causes a charge separation, with one body gaining electrons and the other losing electrons. This charge transfer causes a potential difference between the bodies, which partly cancels the potential originating from the contact, and eventually equilibrium is reached. 
At thermodynamic equilibrium, the Fermi levels are equal (the electron removal energy is identical) and there is now a built-in electrostatic potential between the bodies. The original difference in Fermi levels, before contact, is referred to as the emf. The contact potential cannot drive steady current through a load attached to its terminals because that current would involve a charge transfer. No mechanism exists to continue such transfer and, hence, maintain a current, once equilibrium is attained. One might inquire why the contact potential does not appear in Kirchhoff's law of voltages as one contribution to the sum of potential drops. The customary answer is that any circuit involves not only a particular diode or junction, but also all the contact potentials due to wiring and so forth around the entire circuit. The sum of "all" the contact potentials is zero, and so they may be ignored in Kirchhoff's law. Solar cell. Operation of a solar cell can be understood from its equivalent circuit. Photons with energy greater than the bandgap of the semiconductor create mobile electron–hole pairs. Charge separation occurs because of a pre-existing electric field associated with the p-n junction. This electric field is created from a built-in potential, which arises from the contact potential between the two different materials in the junction. The charge separation between positive holes and negative electrons across the p–n diode yields a "forward voltage", the "photo voltage", between the illuminated diode terminals, which drives current through any attached load. "Photo voltage" is sometimes referred to as the "photo emf", distinguishing between the effect and the cause. Solar cell current–voltage relationship. Two internal current losses formula_61 limit the total current formula_22 available to the external circuit. The light-induced charge separation eventually creates a forward current formula_62 through the cell's internal resistance formula_63 in the direction opposite the light-induced current formula_64. In addition, the induced voltage tends to forward bias the junction, which at high enough voltages will cause a recombination current formula_65 in the diode opposite the light-induced current. When the output is short-circuited, the output voltage is zeroed, and so the voltage across the diode is smallest. Thus, short-circuiting results in the smallest formula_61 losses and consequently the maximum output current, which for a high-quality solar cell is approximately equal to the light-induced current formula_66. Approximately this same current is obtained for forward voltages up to the point where the diode conduction becomes significant. The current delivered by the illuminated diode to the external circuit can be simplified (based on certain assumptions) to: formula_67 formula_68 is the reverse saturation current. Two parameters that depend on the solar cell construction and to some degree upon the voltage itself are the ideality factor "m" and the thermal voltage formula_69, which is about 26 millivolts at room temperature. Solar cell photo emf. Solving the illuminated diode's above simplified current–voltage relationship for output voltage yields: formula_70 which is plotted against formula_71 in the figure. 
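The simplified current–voltage relation above is easy to evaluate numerically. The sketch below is only an illustration: the light-induced current, saturation current, and ideality factor are assumed values, while the thermal voltage of about 26 mV at room temperature is as stated in the text.

```python
# Sketch of the simplified illuminated-diode relation quoted above:
#   I = I_L - I_0 * (exp(V / (m * V_T)) - 1)
# The cell parameters I_L, I_0 and m below are illustrative assumptions.
import math

I_L = 3.0       # light-induced current, amperes (assumed)
I_0 = 1e-9      # reverse saturation current, amperes (assumed)
m   = 1.0       # ideality factor (assumed)
V_T = 0.026     # thermal voltage at room temperature, volts

def cell_current(V):
    """Current delivered to the external circuit at output voltage V."""
    return I_L - I_0 * (math.exp(V / (m * V_T)) - 1.0)

for V in (0.0, 0.3, 0.5, 0.55):
    print(f"V = {V:.2f} V -> I = {cell_current(V):+.3f} A")
# At V = 0 (short circuit) the current is essentially I_L; it stays near I_L
# until diode conduction becomes significant, then drops steeply as V
# approaches the open-circuit value (about 0.57 V for these assumed numbers).
```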
The solar cell's "photo emf" formula_72 has the same value as the open-circuit voltage formula_73, which is determined by zeroing the output current formula_22: formula_74 It has a logarithmic dependence on the light-induced current formula_64 and is where the junction's forward bias voltage is just enough that the forward current completely balances the light-induced current. For silicon junctions, it is typically not much more than 0.5 volts. While for high-quality silicon panels it can exceed 0.7 volts in direct sunlight. When driving a resistive load, the output voltage can be determined using Ohm's law and will lie between the short-circuit value of zero volts and the open-circuit voltage formula_73. When that resistance is small enough such that formula_75 (the near-vertical part of the two illustrated curves), the solar cell acts more like a "current generator" rather than a voltage generator, since the current drawn is nearly fixed over a range of output voltages. This contrasts with batteries, which act more like voltage generators. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
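As a worked illustration of the electrochemical-thermodynamics relations given earlier, the sketch below evaluates the Gibbs energy and enthalpy of reaction for the Daniell cell from its emf and temperature coefficient, using the values quoted in this article (emf 1.0934 V and temperature coefficient −4.53×10⁻⁴ V/K at 298 K, with two electrons transferred per ion). The Faraday constant is the standard value, not a figure from the article.

```python
# Worked illustration of the relations quoted above,
#   dG = -n0 * F0 * emf                       (constant T and P)
#   dH = -n0 * F0 * (emf - T * d(emf)/dT),
# applied to the Daniell cell with the values given in the text.

F0 = 96485.0          # Faraday constant, C/mol (standard value)
n0 = 2                # electrons transferred per ion (Zn2+ / Cu2+)
T = 298.0             # K
emf = 1.0934          # V, from the article
demf_dT = -4.53e-4    # V/K, from the article

delta_G = -n0 * F0 * emf                      # J per mole of reaction
delta_H = -n0 * F0 * (emf - T * demf_dT)      # J per mole of reaction

print(f"delta G = {delta_G / 1000:.1f} kJ/mol")   # about -211 kJ/mol
print(f"delta H = {delta_H / 1000:.1f} kJ/mol")   # about -237 kJ/mol
# The roughly 211 kJ of electrical work per mole of zinc is consistent with
# the "213 kJ per 65.4 g of zinc" figure quoted earlier in the article.
```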
[ { "math_id": 0, "text": "\\mathcal{E}" }, { "math_id": 1, "text": "U_s" }, { "math_id": 2, "text": "\\mathit dW" }, { "math_id": 3, "text": "\\mathcal{E}" }, { "math_id": 4, "text": "dq" }, { "math_id": 5, "text": "\\mathcal{E} = \\frac{\\mathit dW}{\\mathit dq}" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "W" }, { "math_id": 8, "text": "\\tfrac{W}{Q}." }, { "math_id": 9, "text": "\\boldsymbol{E}_\\mathrm{open\\ circuit}" }, { "math_id": 10, "text": "\\boldsymbol{E}'" }, { "math_id": 11, "text": "\\boldsymbol{E}' = - \\boldsymbol{E}_\\mathrm{open\\ circuit}" }, { "math_id": 12, "text": "\\boldsymbol{E}" }, { "math_id": 13, "text": "\\mathcal{E}_\\mathrm{source} = \\int_{N}^{P} \\boldsymbol{E}' \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\n= - \\int_{N}^{P} \\boldsymbol{E}_\\mathrm{open\\ circuit} \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\n=V_P - V_N \\ ," }, { "math_id": 14, "text": "\\mathrm{d}\\boldsymbol{\\ell}" }, { "math_id": 15, "text": "\\cdot" }, { "math_id": 16, "text": "V" }, { "math_id": 17, "text": "\\mathcal{E}_\\mathrm{source} = \\int_{N}^{P} \\boldsymbol{E}' \\cdot \\mathrm{d} \\boldsymbol{ \\ell}\\ ," }, { "math_id": 18, "text": "C" }, { "math_id": 19, "text": "\\mathcal{E}_C = \\oint_{C} \\boldsymbol{E} \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\n= - \\frac{d\\Phi_C}{dt}\n= - \\frac{d}{dt} \\oint_{C} \\boldsymbol{A} \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\\ ,\n" }, { "math_id": 20, "text": "\\Phi_C" }, { "math_id": 21, "text": "\\boldsymbol{A}" }, { "math_id": 22, "text": "I" }, { "math_id": 23, "text": "\\Phi_B = L I" }, { "math_id": 24, "text": "L" }, { "math_id": 25, "text": "C'" }, { "math_id": 26, "text": "\\mathcal{E}_C = \\mathcal{E}_{C'}\n= - \\frac{d\\Phi_{C'}}{dt}\n= - L \\frac{d I}{dt}\n= \\oint_C \\boldsymbol{E} \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\n= \\int_1^2 \\boldsymbol{E}_\\mathrm{conductor} \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\n- \\int_1^2 \\boldsymbol{E}_\\mathrm{center\\ line} \\cdot \\mathrm{d} \\boldsymbol{ \\ell }\\ .\n" }, { "math_id": 27, "text": "\\boldsymbol{E}_\\mathrm{conductor}" }, { "math_id": 28, "text": "L \\frac{d I}{dt}\n= \\int_1^2 \\boldsymbol{E}_\\mathrm{center\\ line} \\cdot \\mathrm{d} \\boldsymbol{ \\ell } \n= V_1 - V_2\\ ,\n" }, { "math_id": 29, "text": "L\\ d I / d t" }, { "math_id": 30, "text": " \\sum \\mathcal{E}_\\mathrm{source} = \\sum_\\mathrm{load\\ elements} \\mathrm{voltage\\ drops},\n" }, { "math_id": 31, "text": "\\boldsymbol{v}" }, { "math_id": 32, "text": "\\boldsymbol{B}" }, { "math_id": 33, "text": "\\begin{align}\n\\mathcal{E} &= \\oint_{C} \\left[\\boldsymbol{E} + \\boldsymbol{v} \\times \\boldsymbol{B} \\right] \\cdot \\mathrm{d} \\boldsymbol{ \\ell } \\\\\n&\\qquad+\\frac{1}{q}\\oint_{C}\\mathrm {Effective \\ chemical \\ forces \\ \\cdot} \\ \\mathrm{d} \\boldsymbol{ \\ell } \\\\\n&\\qquad\\qquad+\\frac{1}{q}\\oint_{C}\\mathrm { Effective \\ thermal \\ forces\\ \\cdot}\\ \\mathrm{d} \\boldsymbol{ \\ell } \\ ,\n\\end{align} " }, { "math_id": 34, "text": " \\oint_{C} \\left[\\boldsymbol{E} + \\boldsymbol{v} \\times \\boldsymbol{B} \\right] \\cdot \\mathrm{d} \\boldsymbol{ \\ell } " }, { "math_id": 35, "text": "dQ" }, { "math_id": 36, "text": "\\mathcal{E}\\,dQ" }, { "math_id": 37, "text": "dG = -S\\,dT + V\\,dP + \\mathcal{E}\\,dQ\\ , " }, { "math_id": 38, "text": "G" }, { "math_id": 39, "text": "S" }, { "math_id": 40, "text": "P" }, { "math_id": 41, "text": "T" }, { "math_id": 42, "text": "(\\mathcal{E}, Q)" }, { "math_id": 43, "text": "\n\\left(\\frac{\\partial \\mathcal{E}}{\\partial 
T}\\right)_Q =\n-\\left(\\frac{\\partial S}{\\partial Q}\\right)_T\n" }, { "math_id": 44, "text": " \\Delta Q = -n_0F_0 \\ , " }, { "math_id": 45, "text": " n_0 " }, { "math_id": 46, "text": " F_0 " }, { "math_id": 47, "text": "\\Delta H = -n_0 F_0 \\left( \\mathcal{E} - T \\frac {d\\mathcal{E}}{dT}\\right) \\ , " }, { "math_id": 48, "text": " \\Delta H " }, { "math_id": 49, "text": "\\Delta G = -n_0 F_0\\mathcal{E}" }, { "math_id": 50, "text": "R" }, { "math_id": 51, "text": "\\mathcal{E} = V + IR \\ ." }, { "math_id": 52, "text": "I = V/(100+200)." }, { "math_id": 53, "text": "100 \\ I" }, { "math_id": 54, "text": "200 \\ I" }, { "math_id": 55, "text": "V_{AB}" }, { "math_id": 56, "text": "\\mathrm{Zn_{(s)} \\rightarrow Zn^{2+}_{(aq)} + 2 e ^- \\ } " }, { "math_id": 57, "text": "\\mathrm{Zn}^{2+}" }, { "math_id": 58, "text": "\\mathrm{SO}_4^{2-} " }, { "math_id": 59, "text": " \\mathrm{Cu^{2+}_{(aq)} + 2 e^- \\rightarrow Cu_{(s)}\\ } " }, { "math_id": 60, "text": "d\\mathcal{E}/dT" }, { "math_id": 61, "text": "I_{SH} + I_D" }, { "math_id": 62, "text": " I_{SH}" }, { "math_id": 63, "text": "R_{SH}" }, { "math_id": 64, "text": "I_L" }, { "math_id": 65, "text": " I_{D}" }, { "math_id": 66, "text": " I_{L}" }, { "math_id": 67, "text": "I = I_L -I_0 \\left( e^{\\frac{V}{m\\ V_\\mathrm{T}}} - 1 \\right) \\ . " }, { "math_id": 68, "text": "I_0" }, { "math_id": 69, "text": "V_\\mathrm{T} = \\tfrac{k T}{q} " }, { "math_id": 70, "text": "V = m\\ V_\\mathrm{T} \\ln \\left( \\frac{I_\\text{L} - I}{I_0}+1 \\right) \\ , " }, { "math_id": 71, "text": "I / I_0 " }, { "math_id": 72, "text": "\\mathcal{E}_\\mathrm{photo}" }, { "math_id": 73, "text": "V_{oc}" }, { "math_id": 74, "text": "\\mathcal{E}_\\mathrm{photo} = V_\\text{oc} = m\\ V_\\mathrm{T} \\ln \\left( \\frac{I_\\text{L}}{I_0}+1 \\right) \\ . " }, { "math_id": 75, "text": "I \\approx I_L" } ]
https://en.wikipedia.org/wiki?curid=65894
65894866
Katrin Leschke
German mathematician Katrin Leschke (born 1968) is a German mathematician specialising in differential geometry and known for her work on quaternionic analysis and Willmore surfaces. She works in England as a reader in mathematics at the University of Leicester, where she also heads the "Maths Meets Arts Tiger Team", an interdisciplinary group for the popularisation of mathematics, and led the "m:iv" project of international collaboration on minimal surfaces. Education and career. Leschke did her undergraduate studies at Technische Universität Berlin, and continued there for a PhD, which she completed in 1997. Her dissertation, "Homogeneity and Canonical Connections of Isoparametric Manifolds", was jointly supervised by Dirk Ferus and Ulrich Pinkall. She was a postdoctoral researcher at Technische Universität Berlin from 1997 to 2002, a visiting assistant professor at the University of Massachusetts Amherst from 2002 to 2005, and a researcher and temporary associate professor at the University of Augsburg from 2005 to 2007. At Augsburg, she completed her habilitation, working in the group of Katrin Wendland. She joined the University of Leicester as New Blood Lecturer in 2007 and became reader there in 2016. Book. Leschke is a coauthor of the book "Conformal Geometry of Surfaces in formula_0 and Quaternions" (Springer, 2002), developing the theory of quaternionic analysis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S^4" } ]
https://en.wikipedia.org/wiki?curid=65894866
65905
Ideal gas
Mathematical model which approximates the behavior of real gases An ideal gas is a theoretical gas composed of many randomly moving point particles that are not subject to interparticle interactions. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. The requirement of zero interaction can often be relaxed if, for example, the interaction is perfectly elastic or regarded as point-like collisions. Under various conditions of temperature and pressure, many real gases behave qualitatively like an ideal gas where the gas molecules (or atoms for monatomic gas) play the role of the ideal particles. Many gases such as nitrogen, oxygen, hydrogen, noble gases, some heavier gases like carbon dioxide and mixtures such as air, can be treated as ideal gases within reasonable tolerances over a considerable parameter range around standard temperature and pressure. Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure, as the potential energy due to intermolecular forces becomes less significant compared with the particles' kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them. One mole of an ideal gas has a volume of 22.710 954 64... litres (exact value based on 2019 redefinition of the SI base units) at standard temperature and pressure (a temperature of 273.15 K and an absolute pressure of exactly 105 Pa). The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size becomes important. It also fails for most heavy gases, such as many refrigerants, and for gases with strong intermolecular forces, notably water vapor. At high pressures, the volume of a real gas is often considerably larger than that of an ideal gas. At low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The deviation from the ideal gas behavior can be described by a dimensionless quantity, the compressibility factor, Z. The ideal gas model has been explored in both the Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics. If the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not change. (If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.) Types of ideal gas. There are three basic classes of ideal gas: The classical ideal gas can be separated into two types: The classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant. 
The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used in a number of cases including the Sackur–Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma. Classical thermodynamic ideal gas. The classical thermodynamic properties of an ideal gas can be described by two equations of state: Ideal gas law. The ideal gas law is the equation of state for an ideal gas, given by: formula_0 where The ideal gas law is an extension of experimentally discovered gas laws. It can also be derived from microscopic considerations. Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor. This equation is derived from After combining three laws we get formula_4 That is: formula_5 formula_6. Internal energy. The other equation of state of an ideal gas must express Joule's second law, that the internal energy of a fixed mass of ideal gas is a function only of its temperature, with formula_7. For the present purposes it is convenient to postulate an exemplary version of this law by writing: formula_8 where That U for an ideal gas depends only on temperature is a consequence of the ideal gas law, although in the general case ĉV depends on temperature and an integral is needed to compute U. Microscopic model. In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use formula_9 where The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution. The ideal gas model depends on the following assumptions: The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are very related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures. Heat capacity. The dimensionless heat capacity at constant volume is generally defined by formula_12 where S is the entropy. This quantity is generally a function of temperature due to intermolecular and intramolecular forces, but for moderate temperatures it is approximately constant. Specifically, the Equipartition Theorem predicts that the constant for a monatomic gas is "ĉV" =  while for a diatomic gas it is "ĉV" =  if vibrations are neglected (which is often an excellent approximation). Since the heat capacity depends on the atomic or molecular nature of the gas, macroscopic measurements on heat capacity provide useful information on the microscopic structure of the molecules. The dimensionless heat capacity at constant pressure of an ideal gas is: formula_13 where "H" "U" + "PV" is the enthalpy of the gas. 
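These relations are easy to check numerically. The sketch below recovers the molar volume at standard temperature and pressure quoted earlier and evaluates the internal energy, ĉP and the adiabatic index for a monatomic gas (ĉV = 3/2) and a diatomic gas with vibrations neglected (ĉV = 5/2); those ĉV values are the standard equipartition results.

```python
# Minimal sketch of the relations above: the ideal gas law P V = n R T and
# the internal energy U = cV_hat * n * R * T, with cP_hat = cV_hat + 1.
# cV_hat = 3/2 (monatomic) and 5/2 (diatomic, vibrations neglected) are the
# standard equipartition values.

R = 8.31446261815324   # J/(mol K), exact under the 2019 SI redefinition

# Molar volume at standard temperature and pressure (273.15 K, 1e5 Pa):
T, P, n = 273.15, 1.0e5, 1.0
V = n * R * T / P
print(f"molar volume at STP: {V * 1000:.3f} L")     # about 22.711 L

for name, cV_hat in (("monatomic", 1.5), ("diatomic", 2.5)):
    U = cV_hat * n * R * T          # internal energy of one mole at 273.15 K
    cP_hat = cV_hat + 1.0
    gamma = cP_hat / cV_hat         # adiabatic index
    print(f"{name}: U = {U / 1000:.2f} kJ, gamma = {gamma:.3f}")
# gamma = 5/3 for a monatomic gas and 7/5 for a diatomic gas such as air.
```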
Sometimes, a distinction is made between an ideal gas, where "ĉV" and "ĉP" could vary with temperature, and a perfect gas, for which this is not the case. The ratio of the constant volume and constant pressure heat capacity is the adiabatic index formula_14 For air, which is a mixture of gases that are mainly diatomic (nitrogen and oxygen), this ratio is often assumed to be 7/5, the value predicted by the classical Equipartition Theorem for diatomic gases. Entropy. Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of U (U is a thermodynamic potential), volume V and the number of particles N, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it. Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy S may be written as Δ"S" where: formula_15 where the reference variables may be functions of the number of particles N. Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second we have: formula_16 Expressing CV in terms of "ĉV" as developed in the above section, differentiating the ideal gas equation of state, and integrating yields: formula_17 which implies that the entropy may be expressed as: formula_18 where all constants have been incorporated into the logarithm as "f"("N") which is some function of the particle number N having the same dimensions as VTĉV in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters (V and N) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically: formula_19 From this we find an equation for the function "f"("N") formula_20 Differentiating this with respect to a, setting a equal to 1, and then solving the differential equation yields "f"("N"): formula_21 where Φ may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of "VTĉV"/"N". Substituting into the equation for the entropy: formula_22 and using the expression for the internal energy of an ideal gas, the entropy may be written: formula_23 Since this is an expression for entropy in terms of U, V, and N, it is a fundamental equation from which all other properties of the ideal gas may be derived. This is about as far as we can go using thermodynamics alone. Note that the above equation is flawed – as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above "ideal" development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity – the concept of an ideal gas breaks down at low values of . Nevertheless, there will be a "best" value of the constant in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. 
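As an illustration of the entropy expression just derived, the sketch below evaluates ΔS = ĉV Nk ln(T/T0) + Nk ln(V/V0), written per mole (Nk = nR), for two simple processes; the process values are illustrative.

```python
# Sketch of the entropy-change formula derived above,
#   dS = cV_hat * N k ln(T/T0) + N k ln(V/V0),
# written per mole (N k = n R). The process values below are illustrative.
import math

R = 8.314462618        # J/(mol K)

def delta_S(n, cV_hat, T0, T, V0, V):
    """Entropy change of n moles of ideal gas between states (T0, V0) and (T, V)."""
    return n * R * (cV_hat * math.log(T / T0) + math.log(V / V0))

# One mole of a monatomic gas (cV_hat = 3/2):
print(f"isothermal doubling of volume:        {delta_S(1, 1.5, 300, 300, 1, 2):.2f} J/K")
print(f"heating 300 K -> 600 K, fixed volume: {delta_S(1, 1.5, 300, 600, 1, 1):.2f} J/K")
# R ln 2 = 5.76 J/K and (3/2) R ln 2 = 8.64 J/K respectively.
```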
A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur–Tetrode equation which expresses the entropy of a monatomic ("ĉV" =) ideal gas. In the Sackur–Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur–Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures. An alternative way of expressing the change in entropy: formula_24 Thermodynamic potentials. Expressing the entropy as a function of T, V, and N: formula_25 The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential): formula_26 where G is the Gibbs free energy and is equal to "U" + "PV" − "TS" so that: formula_27 The chemical potential is usually referenced to the potential at some standard pressure "Po" so that, with formula_28: formula_29 For a mixture ("j"=1,2...) of ideal gases, each at partial pressure "Pj", it can be shown that the chemical potential "μj" will be given by the above expression with the pressure "P" replaced by "Pj". The thermodynamic potentials for an ideal gas can now be written as functions of T, V, and N as: where, as before, formula_30. The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system. In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are: formula_31 formula_32 formula_33 formula_34 In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details. Speed of sound. The speed of sound in an ideal gas is given by the Newton-Laplace formula: formula_35 where the isentropic Bulk modulus formula_36 For an isentropic process of an ideal gas, formula_37, therefore formula_38 Here, Ideal quantum gases. In the above-mentioned Sackur–Tetrode equation, the best choice of the entropy constant was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes zero is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur–Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.) Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature. Ideal Boltzmann gas. The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant Φ: formula_39 where Λ is the thermal de Broglie wavelength of the gas and g is the degeneracy of states. Ideal Bose and Fermi gases. An ideal gas of bosons (e.g. a photon gas) will be governed by Bose–Einstein statistics and the distribution of energy will be in the form of a Bose–Einstein distribution. 
An ideal gas of fermions will be governed by Fermi–Dirac statistics and the distribution of energy will be in the form of a Fermi–Dirac distribution. References.
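Returning to the classical results above, the Newton–Laplace expression for the speed of sound can be evaluated directly; in the sketch below the molar mass of air is an assumed representative value rather than a figure from this article.

```python
# Sketch of the Newton-Laplace result above: c = sqrt(gamma * R * T / M).
# The molar mass of air used here is an assumed representative value.
import math

R = 8.314462618          # J/(mol K)

def speed_of_sound(gamma, T, M):
    """Speed of sound in an ideal gas at temperature T (K), molar mass M (kg/mol)."""
    return math.sqrt(gamma * R * T / M)

# Air treated as a diatomic ideal gas (gamma = 7/5), M ~ 0.029 kg/mol, 20 C:
print(f"{speed_of_sound(1.4, 293.15, 0.029):.0f} m/s")   # about 343 m/s
```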
[ { "math_id": 0, "text": "PV = nRT" }, { "math_id": 1, "text": "V\\propto\\frac{1}{P}" }, { "math_id": 2, "text": "V\\propto T" }, { "math_id": 3, "text": " V \\propto n" }, { "math_id": 4, "text": "V \\propto \\frac{nT}{P}" }, { "math_id": 5, "text": "V = R\\left(\\frac{nT}{P}\\right)" }, { "math_id": 6, "text": "PV = nRT" }, { "math_id": 7, "text": "U = U(n,T)" }, { "math_id": 8, "text": "U = \\hat{c}_V nRT " }, { "math_id": 9, "text": "nR=N k_\\mathrm{B}" }, { "math_id": 10, "text": "N" }, { "math_id": 11, "text": "k_\\mathrm{B}" }, { "math_id": 12, "text": "\\hat{c}_V=\\frac{1}{n R}T\\left(\\frac{\\partial S}{\\partial T}\\right)_V=\\frac{1}{n R}\\left(\\frac{\\partial U}{\\partial T}\\right)_V " }, { "math_id": 13, "text": "\\hat{c}_P =\\frac{1}{n R}T\\left(\\frac{\\partial S}{\\partial T}\\right)_P= \\frac{1}{n R}\\left(\\frac{\\partial H}{\\partial T}\\right)_P = \\hat{c}_V+1" }, { "math_id": 14, "text": "\\gamma = \\frac{c_P}{c_V} " }, { "math_id": 15, "text": "\\Delta S = \\int_{S_0}^{S}dS\n=\\int_{T_0}^{T} \\left(\\frac{\\partial S}{\\partial T}\\right)_V\\!dT\n+\\int_{V_0}^{V} \\left(\\frac{\\partial S}{\\partial V}\\right)_T\\!dV\n" }, { "math_id": 16, "text": "\\Delta S\n=\\int_{T_0}^{T} \\frac{C_V}{T}\\,dT+\\int_{V_0}^{V}\\left(\\frac{\\partial P}{\\partial T}\\right)_VdV.\n" }, { "math_id": 17, "text": "\\Delta S\n= \\hat{c}_VNk\\ln\\left(\\frac{T}{T_0}\\right)+Nk\\ln\\left(\\frac{V}{V_0}\\right)\n" }, { "math_id": 18, "text": "S= Nk\\ln\\left(\\frac{VT^{\\hat{c}_V}}{f(N)}\\right)\n" }, { "math_id": 19, "text": "S(T,aV,aN)=a S(T,V,N)." }, { "math_id": 20, "text": "af(N)=f(aN)." }, { "math_id": 21, "text": "f(N)=\\Phi N" }, { "math_id": 22, "text": "\\frac{S}{Nk} = \\ln\\left(\\frac{VT^{\\hat{c}_V}}{N\\Phi}\\right)." }, { "math_id": 23, "text": "\\frac{S}{Nk} = \\ln\\left[\\frac{V}{N}\\,\\left(\\frac{U}{\\hat{c}_V k N}\\right)^{\\hat{c}_V}\\,\\frac{1}{\\Phi}\\right]" }, { "math_id": 24, "text": "\\frac {\\Delta S}{Nk\\hat{c}_V}\n= \\ln\\left(\\frac{P}{P_0}\\right)+\\gamma \\ln\\left(\\frac{V}{V_0}\\right) = \\ln\\left(\\frac{PV^\\gamma}{P_0V_0^\\gamma}\\right) \\implies PV^\\gamma=\\mathrm{const.} \\; \\text{for isentropic process}\n" }, { "math_id": 25, "text": "\\frac{S}{kN}=\\ln\\left( \\frac{VT^{\\hat{c}_V}}{N\\Phi}\\right)" }, { "math_id": 26, "text": "\\mu=\\left(\\frac{\\partial G}{\\partial N}\\right)_{T,P}" }, { "math_id": 27, "text": "\\mu(T,P)=kT\\left(\\hat{c}_P-\\ln\\left(\\frac{kT^{\\hat{c}_P}}{P\\Phi}\\right)\\right)" }, { "math_id": 28, "text": "\\mu^o(T)=\\mu(T,P^o)" }, { "math_id": 29, "text": "\\mu(T,P)=\\mu^o(T)+ kT\\ln\\left(\\frac{P}{Po}\\right)" }, { "math_id": 30, "text": "\\hat{c}_P=\\hat{c}_V+1" }, { "math_id": 31, "text": "U(S,V,N)=\\hat{c}_V N k \\left(\\frac{N\\Phi}{V}\\,e^{S/Nk}\\right)^{1/\\hat{c}_V}" }, { "math_id": 32, "text": "A(T,V,N)=NkT\\left(\\hat{c}_V-\\ln\\left(\\frac{VT^{\\hat{c}_V}}{N\\Phi}\\right)\\right)" }, { "math_id": 33, "text": "H(S,P,N)=\\hat{c}_P Nk\\left(\\frac{P\\Phi}{k}\\,e^{S/Nk}\\right)^{1/\\hat{c}_P}" }, { "math_id": 34, "text": "G(T,P,N)=NkT\\left(\\hat{c}_P-\\ln\\left(\\frac{kT^{\\hat{c}_P}}{P\\Phi}\\right)\\right)" }, { "math_id": 35, "text": "c_\\text{sound} = \\sqrt{\\frac{K_s}{\\rho}}=\\sqrt{\\left(\\frac{\\partial P}{\\partial \\rho}\\right)_{s}}, " }, { "math_id": 36, "text": "K_s=\\rho \\left(\\frac{\\partial P}{\\partial \\rho}\\right)_{s} ." 
}, { "math_id": 37, "text": "PV^\\gamma=\\mathrm{const} \\Rightarrow P \\propto \\left(\\frac{1}{V}\\right)^\\gamma\\propto \\rho ^\\gamma" }, { "math_id": 38, "text": "c_\\text{sound} = \\sqrt{\\left(\\frac{\\partial P}{\\partial \\rho}\\right)_{s}} = \\sqrt{\\frac{\\gamma P}{\\rho}}=\\sqrt{\\frac{\\gamma R T}{M}} " }, { "math_id": 39, "text": "\\Phi = \\frac{T^\\frac32 \\Lambda^3}{g}" } ]
https://en.wikipedia.org/wiki?curid=65905
65905792
Egalitarian item allocation
Fair item allocation problem Egalitarian item allocation, also called max-min item allocation is a fair item allocation problem, in which the fairness criterion follows the egalitarian rule. The goal is to maximize the minimum value of an agent. That is, among all possible allocations, the goal is to find an allocation in which the smallest value of an agent is as large as possible. In case there are two or more allocations with the same smallest value, then the goal is to select, from among these allocations, the one in which the second-smallest value is as large as possible, and so on (by the leximin order). Therefore, an egalitarian item allocation is sometimes called a leximin item allocation. The special case in which the value of each item "j" to each agent is either 0 or some constant "vj" is called the santa claus problem: santa claus has a fixed set of gifts, and wants to allocate them among children such that the least-happy child is as happy as possible. Some related problems are: Normalization. There are two variants of the egalitarian rule: The two rules are equivalent when the agents' valuations are already normalized, that is, all agents assign the same value to the set of all items. However, they may differ with non-normalized valuations. For example, if there are four items, Alice values them at 1,1,1,1 and George values them at 3,3,3,3, then the absolute-leximin rule would give three items to Alice and one item to George, since the utility profile in this case is (3,3), which is optimal. In contrast, the relative-leximin rule would give two items to each agent, since the normalized utility profile in this case, when the total value of both agents is normalized to 1, is (0.5,0.5), which is optimal. Exact algorithms. Although the general problem is NP-hard, small instances can be solved exactly by constraint programming techniques. Randomized algorithms. Demko and Hill present a randomized algorithm that attains an egalitarian item allocation in expectation. Approximation algorithms. Below, "n" is the number of agents and "m" is the number of items. For the special case of the santa claus problem: For the general case, for agents with additive valuations: For agents with submodular utility functions: Ordinally egalitarian allocations. The standard egalitarian rule requires that each agent assigns a numeric value to each object. Often, the agents only have ordinal utilities. There are two generalizations of the egalitarian rule to ordinal settings. 1. Suppose agents have an ordinal ranking over the set of "bundles". Given any discrete allocation, for any agent "i", define "r"("i") as the rank of agent i's bundle, so that r(i)=1 if "i" got his best bundle, r(i)=2 if "i" got his second-best bundle, etc. This r is a vector of size "n" (the number of agents). An ordinally-egalitarian allocation is one that minimizes the largest element in "r." The Decreasing Demand procedure finds an ordinally-egalitarian allocation for any number of agents with any ordering of bundles. 2. Suppose agents have an ordinal ranking over the set of "items". Given any discrete or fractional allocation, for any agent "i" and positive integer "k", define "t"("i","k") as the total fraction that agent "i" receives from his "k" topmost indifference classes. This t is a vector of size at most "n"*"m", where "n" is the number of agents and "m" is the number of items. An ordinally-egalitarian allocation is one that maximizes the vector t in the leximin order. 
The Simultaneous Eating algorithm with equal eating speeds is the unique rule that returns an ordinally-egalitarian allocation. Comparison to other fairness criteria. Proportionality. Whenever a proportional allocation exists, the relative-leximin allocation is proportional. This is because, in a proportional allocation, the smallest relative value of an agent is at least 1/"n", so the same must hold when we maximize the smallest relative value. However, the absolute-leximin allocation might not be proportional, as shown in the example above. Envy-freeness. 1. When all agents have identical valuations with nonzero marginal utilities, any relative-leximin allocation is PO and EFX. 2. For two agents with additive valuations, any relative-leximin allocation is EF1. However: 3. When all agents have valuations that are matroid rank functions (i.e., submodular with binary marginals), the set of absolute-leximin allocations is equivalent to the set of max-product allocations; all such allocations are max-sum and EF1. 4. In the context of indivisible allocation of "chores" (items with negative utilities), with 3 or 4 agents with additive valuations, any leximin-optimal allocation is PROP1 and PO; with "n" agents with general identical valuations, any leximin-optimal allocation is EFX. Maximin share. When all agents have identical valuations, the egalitarian allocation, by definition, gives each agent at least his/her maximin share. However, when agents have different valuations, the problems are different. The maximin-share allocation is a satisfaction problem: the goal is to guarantee that each agent receives a value above the identical-valuations threshold. In contrast, the egalitarian allocation is an optimization problem: the goal is to give each agent as high value as possible. In some instances, the resulting allocations might be very different. For example, suppose there are four goods and three agents who value them at {3,0,0,0}, {3-2"ε,ε,ε",0} and {1-2"ε",1,1,2"ε"} (where "ε" is a small positive constant). Note that the valuations are normalized (the total value is 3), so absolute-leximin and relative-leximin are equivalent. The example can be extended to 1-out-of-"k" MMS for any "k"&gt;3. There are "k"+1 goods and three agents who value them at {"k", 0, ..., 0}, {"k"-("k"-1)"ε", "ε," ..., "ε", 0} and {1-"kε", 1, 1, ..., k"ε"}. The leximin utility profile must be ("k", "kε, kε") while the 1-out-of-"k" MMS of agent 3 is 1. Real-world application. The leximin rule has been used for fairly allocating unused classrooms in public schools to charter schools. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
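The leximin rule itself can be computed exactly by exhaustive search on very small instances. The sketch below is such a brute-force illustration, not one of the approximation algorithms surveyed above; it reproduces the Alice/George example given earlier, for both the absolute and the relative (normalized) variants.

```python
# Brute-force sketch of the leximin (egalitarian) rule for small instances.
# Exhaustive search only; feasible for a handful of agents and items.
from itertools import product

def leximin_allocation(valuations, normalize=False):
    """valuations[i][j] = value of item j to agent i (additive valuations).
    Returns (assignment, profile); assignment[j] is the agent receiving item j,
    and profile is the sorted (ascending) value vector of the chosen allocation."""
    n, m = len(valuations), len(valuations[0])
    totals = [sum(v) for v in valuations]
    best, best_profile = None, None
    for assignment in product(range(n), repeat=m):
        values = [sum(valuations[i][j] for j in range(m) if assignment[j] == i)
                  for i in range(n)]
        if normalize:
            values = [v / t for v, t in zip(values, totals)]
        profile = tuple(sorted(values))        # ascending order: leximin comparison
        if best_profile is None or profile > best_profile:
            best, best_profile = assignment, profile
    return best, best_profile

# The Alice/George example from the article: four items,
# Alice values them at 1,1,1,1 and George at 3,3,3,3.
vals = [[1, 1, 1, 1], [3, 3, 3, 3]]
print(leximin_allocation(vals, normalize=False))  # absolute-leximin profile (3, 3)
print(leximin_allocation(vals, normalize=True))   # relative-leximin profile (0.5, 0.5)
```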
[ { "math_id": 0, "text": "O(\\log{\\log{n}}/\\log{\\log{\\log{n}}})" }, { "math_id": 1, "text": "(m - n + 1)" }, { "math_id": 2, "text": "O(\\sqrt{n} \\cdot \\log^3 n)" }, { "math_id": 3, "text": "O(m^{\\varepsilon})" }, { "math_id": 4, "text": "O(m^{1/\\varepsilon})" }, { "math_id": 5, "text": "\\varepsilon\\in \\Omega\\left(\\frac{\\log\\log m}{\\log m}\\right)" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "(1 - 1/k)" }, { "math_id": 8, "text": "OPT/k" }, { "math_id": 9, "text": "O(\\sqrt{n})" }, { "math_id": 10, "text": "O(\\sqrt{m} \\cdot n^{1/4} \\cdot \\log n\\cdot \\log^{3/2} m)" } ]
https://en.wikipedia.org/wiki?curid=65905792
65906702
Quantum clock model
The quantum clock model is a quantum lattice model. It is a generalisation of the transverse-field Ising model . It is defined on a lattice with formula_0 states on each site. The Hamiltonian of this model is formula_1 Here, the subscripts refer to lattice sites, and the sum formula_2 is done over pairs of nearest neighbour sites formula_3 and formula_4. The clock matrices formula_5 and formula_6 are formula_7 generalisations of the Pauli matrices satisfying formula_8 and formula_9 where formula_10 is 1 if formula_11 and formula_12 are the same site and zero otherwise. formula_13 is a prefactor with dimensions of energy, and formula_14 is another coupling coefficient that determines the relative strength of the external field compared to the nearest neighbor interaction. The model obeys a global formula_15 symmetry, which is generated by the unitary operator formula_16 where the product is over every site of the lattice. In other words, formula_17 commutes with the Hamiltonian. When formula_18 the quantum clock model is identical to the transverse-field Ising model. When formula_19 the quantum clock model is equivalent to the quantum three-state Potts model. When formula_20, the model is again equivalent to the Ising model. When formula_21, strong evidences have been found that the phase transitions exhibited in these models should be certain generalizations of Kosterlitz–Thouless transition, whose physical nature is still largely unknown. One-dimensional model. There are various analytical methods that can be used to study the quantum clock model specifically in one dimension. Kramers–Wannier duality. A nonlocal mapping of clock matrices known as the Kramers–Wannier duality transformation can be done as follows: formula_22 Then, in terms of the newly defined clock matrices with tildes, which obey the same algebraic relations as the original clock matrices, the Hamiltonian is simply formula_23. This indicates that the model with coupling parameter formula_14 is dual to the model with coupling parameter formula_24, and establishes a duality between the ordered phase and the disordered phase. Note that there are some subtle considerations at the boundaries of the one dimensional chain; as a result of these, the degeneracy and formula_25 symmetry properties of phases are changed under the Kramers–Wannier duality. A more careful analysis involves coupling the theory to a formula_15 gauge field; fixing the gauge reproduces the results of the Kramers Wannier transformation. Phase transition. For formula_26, there is a unique phase transition from the ordered phase to the disordered phase at formula_27. The model is said to be "self-dual" because Kramers–Wannier transformation transforms the Hamiltonian to itself. For formula_21, there are two phase transition points at formula_28 and formula_29. Strong evidences have been found that these phase transitions should be a class of generalizations of Kosterlitz–Thouless transition. The KT transition predicts that the free energy has an essential singularity that goes like formula_30, while perturbative study found that the essential singularity behaves as formula_31 where formula_32 goes from formula_33 to formula_34 as formula_0 increases from formula_35 to formula_36. The physical pictures of these phase transitions are still not clear. Jordan–Wigner transformation. Another nonlocal mapping known as the Jordan Wigner transformation can be used to express the theory in terms of parafermions. References. 
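The clock-matrix algebra and the Hamiltonian above are easy to realize numerically for small chains. The sketch below is an illustration only: it assumes open boundary conditions, small N and chain length, and illustrative values for the couplings J and g.

```python
# Numerical sketch of the N-state clock matrices and a small open-chain
# Hamiltonian as defined above. Illustrative parameters, dense matrices.
import numpy as np

def clock_matrices(N):
    """Return the N x N clock matrices X (cyclic shift) and Z (phase)."""
    omega = np.exp(2j * np.pi / N)
    Z = np.diag(omega ** np.arange(N))
    X = np.roll(np.eye(N), 1, axis=0)          # X |j> = |j+1 mod N>
    return X, Z

N = 3
X, Z = clock_matrices(N)
omega = np.exp(2j * np.pi / N)
assert np.allclose(Z @ X, omega * X @ Z)                     # Z X = e^{2 pi i / N} X Z
assert np.allclose(np.linalg.matrix_power(X, N), np.eye(N))  # X^N = 1
assert np.allclose(np.linalg.matrix_power(Z, N), np.eye(N))  # Z^N = 1

def site_op(op, j, L, N):
    """Embed a single-site operator at site j (0-based) of an L-site chain."""
    ops = [np.eye(N)] * L
    ops[j] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def clock_hamiltonian(L, N, J=1.0, g=0.5):
    """Dense open-chain quantum clock Hamiltonian as defined above."""
    X, Z = clock_matrices(N)
    H = np.zeros((N ** L, N ** L), dtype=complex)
    for j in range(L - 1):                     # nearest-neighbour coupling
        ZdZ = site_op(Z.conj().T, j, L, N) @ site_op(Z, j + 1, L, N)
        H -= J * (ZdZ + ZdZ.conj().T)
    for j in range(L):                         # transverse-field term
        Xj = site_op(X, j, L, N)
        H -= J * g * (Xj + Xj.conj().T)
    return H

H = clock_hamiltonian(L=3, N=3)
print("ground-state energy:", np.linalg.eigvalsh(H)[0])
```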
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "H = -J \\left( \\sum_{ \\langle i, j \\rangle} (Z^\\dagger_i Z_j + Z_i Z^\\dagger_j ) + g \\sum_j (X_j + X^\\dagger_j) \\right)" }, { "math_id": 2, "text": "\\sum_{\\langle i, j \\rangle}" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": "X_j" }, { "math_id": 6, "text": "Z_j" }, { "math_id": 7, "text": " N \\times N " }, { "math_id": 8, "text": " Z_j X_k = e^{\\frac{2\\pi i }{N}\\delta_{j,k}} X_k Z_j " }, { "math_id": 9, "text": " X_j^N = Z_j^N = 1 " }, { "math_id": 10, "text": " \\delta_{j,k} " }, { "math_id": 11, "text": " j " }, { "math_id": 12, "text": " k " }, { "math_id": 13, "text": "J" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": " \\mathbb{Z}_N " }, { "math_id": 16, "text": " U_X = \\prod_j X_j " }, { "math_id": 17, "text": " U_X" }, { "math_id": 18, "text": " N=2" }, { "math_id": 19, "text": " N=3" }, { "math_id": 20, "text": "N=4" }, { "math_id": 21, "text": "N>4" }, { "math_id": 22, "text": "\\begin{align}\\tilde{X_j} &= Z^\\dagger_j Z_{j+1} \\\\\n\\tilde{Z}^\\dagger_j \\tilde{Z}_{j+1} &= X_{j+1} \\end{align}\n" }, { "math_id": 23, "text": "H = -Jg \\sum_j ( \\tilde{Z}^\\dagger_j \\tilde{Z}_{j+1} + g^{-1}\\tilde{X}^\\dagger_{j} + \\textrm{h.c.} )" }, { "math_id": 24, "text": "g^{-1}" }, { "math_id": 25, "text": "\\mathbb{Z}_N\n" }, { "math_id": 26, "text": "N=2,3,4" }, { "math_id": 27, "text": "g=1" }, { "math_id": 28, "text": "g_1<1" }, { "math_id": 29, "text": "g_2=1/g_1>1" }, { "math_id": 30, "text": "e^{-\\tfrac{c}{\\sqrt{|g-g_c|}}}" }, { "math_id": 31, "text": "e^{-\\tfrac{c}{|g-g_c|^\\sigma}}" }, { "math_id": 32, "text": "\\sigma" }, { "math_id": 33, "text": "0.2" }, { "math_id": 34, "text": "0.5" }, { "math_id": 35, "text": "5" }, { "math_id": 36, "text": "9" } ]
https://en.wikipedia.org/wiki?curid=65906702
65907
Elastic collision
Collision in which kinetic energy is conserved In physics, an elastic collision is an encounter (collision) between two bodies in which the total kinetic energy of the two bodies remains the same. In an ideal, perfectly elastic collision, there is no net conversion of kinetic energy into other forms such as heat, noise, or potential energy. During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive or attractive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute). Collisions of atoms are elastic, for example Rutherford backscattering. A useful special case of elastic collision is when the two bodies have equal mass, in which case they will simply exchange their momenta. The "molecules"—as distinct from atoms—of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules’ translational motion and their internal degrees of freedom with each collision. At any instant, half the collisions are, to a varying extent, "inelastic collisions" (the pair possesses less kinetic energy in their translational motions after the collision than before), and half could be described as “super-elastic” (possessing "more" kinetic energy after the collision than before). Averaged across the entire sample, molecular collisions can be regarded as essentially elastic as long as Planck's law forbids energy from being carried away by black-body photons. In the case of macroscopic bodies, perfectly elastic collisions are an ideal never fully realized, but approximated by the interactions of objects such as billiard balls. When considering energies, possible rotational energy before and/or after a collision may also play a role. Equations. One-dimensional Newtonian. In any collision, momentum is conserved; but in an elastic collision, kinetic energy is also conserved. Consider particles A and B with masses "m"A, "m"B, and velocities "v"A1, "v"B1 before collision, "v"A2, "v"B2 after collision. The conservation of momentum before and after the collision is expressed by: formula_0 Likewise, the conservation of the total kinetic energy is expressed by: formula_1 These equations may be solved directly to find formula_2 when formula_3 are known: formula_4 Alternatively the final velocity of a particle, v2 (vA2 or vB2) is expressed by: formula_5 Where: If both masses are the same, we have a trivial solution: formula_6 This simply corresponds to the bodies exchanging their initial velocities with each other. As can be expected, the solution is invariant under adding a constant to all velocities (Galilean relativity), which is like using a frame of reference with constant translational velocity. Indeed, to derive the equations, one may first change the frame of reference so that one of the known velocities is zero, determine the unknown velocities in the new frame of reference, and convert back to the original frame of reference. Ball A: mass = 3 kg, velocity = 4 m/s Ball B: mass = 5 kg, velocity = 0 m/s Ball A: velocity = −1 m/s Ball B: velocity = 3 m/s Examples. Another situation: The following illustrate the case of equal mass, formula_7. 
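The one-dimensional Newtonian solution can be written out explicitly and checked against the worked example above (the 3 kg ball striking the 5 kg ball at rest). The sketch below implements the standard closed-form solution of the momentum and kinetic-energy conservation equations.

```python
# Sketch of the one-dimensional Newtonian result above: the closed-form
# solution of the momentum and kinetic-energy conservation equations.
def elastic_1d(mA, vA1, mB, vB1):
    """Final velocities (vA2, vB2) for a perfectly elastic 1D collision."""
    vA2 = ((mA - mB) * vA1 + 2 * mB * vB1) / (mA + mB)
    vB2 = ((mB - mA) * vB1 + 2 * mA * vA1) / (mA + mB)
    return vA2, vB2

# The worked example from the text: a 3 kg ball moving at 4 m/s strikes
# a 5 kg ball at rest.
vA2, vB2 = elastic_1d(3.0, 4.0, 5.0, 0.0)
print(vA2, vB2)                               # -1.0 m/s and 3.0 m/s, as in the text

# Conservation checks:
assert abs(3 * 4 + 5 * 0 - (3 * vA2 + 5 * vB2)) < 1e-12                       # momentum
assert abs(0.5 * 3 * 4**2 - (0.5 * 3 * vA2**2 + 0.5 * 5 * vB2**2)) < 1e-12    # kinetic energy
```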
In the limiting case where formula_8 is much larger than formula_9, such as a ping-pong paddle hitting a ping-pong ball or an SUV hitting a trash can, the heavier mass hardly changes velocity, while the lighter mass bounces off, reversing its velocity plus approximately twice that of the heavy one. In the case of a large formula_10, the value of formula_11 is small if the masses are approximately the same: hitting a much lighter particle does not change the velocity much, hitting a much heavier particle causes the fast particle to bounce back with high speed. This is why a neutron moderator (a medium which slows down fast neutrons, thereby turning them into thermal neutrons capable of sustaining a chain reaction) is a material full of atoms with light nuclei which do not easily absorb neutrons: the lightest nuclei have about the same mass as a neutron. Derivation of solution. To derive the above equations for formula_12 rearrange the kinetic energy and momentum equations: formula_13 Dividing each side of the top equation by each side of the bottom equation, and using formula_14 gives: formula_15 That is, the relative velocity of one particle with respect to the other is reversed by the collision. Now the above formulas follow from solving a system of linear equations for formula_12 regarding formula_16 as constants: formula_17 Once formula_11 is determined, formula_18 can be found by symmetry. Center of mass frame. With respect to the center of mass, both velocities are reversed by the collision: a heavy particle moves slowly toward the center of mass, and bounces back with the same low speed, and a light particle moves fast toward the center of mass, and bounces back with the same high speed. The velocity of the center of mass does not change by the collision. To see this, consider the center of mass at time formula_19 before collision and time formula_20 after collision: formula_21 Hence, the velocities of the center of mass before and after collision are: formula_22 The numerators of formula_23 and formula_24 are the total momenta before and after collision. Since momentum is conserved, we have formula_25 One-dimensional relativistic. According to special relativity, formula_26 where "p" denotes momentum of any particle with mass, "v" denotes velocity, and "c" is the speed of light. In the center of momentum frame where the total momentum equals zero, formula_27 Here formula_28 represent the rest masses of the two colliding bodies, formula_29 represent their velocities before collision, formula_30 their velocities after collision, formula_31 their momenta, formula_32 is the speed of light in vacuum, and formula_33 denotes the total energy, the sum of rest masses and kinetic energies of the two bodies. Since the total energy and momentum of the system are conserved and their rest masses do not change, it is shown that the momentum of the colliding body is decided by the rest masses of the colliding bodies, total energy and the total momentum. Relative to the center of momentum frame, the momentum of each colliding body does not change magnitude after collision, but reverses its direction of movement. Comparing with classical mechanics, which gives accurate results dealing with macroscopic objects moving much slower than the speed of light, total momentum of the two colliding bodies is frame-dependent. In the center of momentum frame, according to classical mechanics, formula_34 This agrees with the relativistic calculation formula_35 despite other differences. 
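Two consequences of the derivation above are easy to verify numerically for arbitrary inputs: the relative velocity is reversed by the collision, and the velocity of the center of mass is unchanged. A minimal sketch (reusing the same textbook formulas; the masses and velocities are arbitrary test values):

```python
# Sketch: check that a 1D elastic collision reverses the relative velocity
# and leaves the centre-of-mass velocity unchanged.
def elastic_1d(m_a, m_b, v_a1, v_b1):
    v_a2 = (m_a - m_b) / (m_a + m_b) * v_a1 + 2 * m_b / (m_a + m_b) * v_b1
    v_b2 = 2 * m_a / (m_a + m_b) * v_a1 + (m_b - m_a) / (m_a + m_b) * v_b1
    return v_a2, v_b2

m_a, m_b, v_a1, v_b1 = 1.5, 4.0, 3.0, -2.0
v_a2, v_b2 = elastic_1d(m_a, m_b, v_a1, v_b1)

# relative velocity is reversed: v_a2 - v_b2 = -(v_a1 - v_b1)
print(v_a2 - v_b2, -(v_a1 - v_b1))

# centre-of-mass velocity is the same before and after the collision
print((m_a * v_a1 + m_b * v_b1) / (m_a + m_b),
      (m_a * v_a2 + m_b * v_b2) / (m_a + m_b))
```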
One of the postulates in Special Relativity states that the laws of physics, such as conservation of momentum, should be invariant in all inertial frames of reference. In a general inertial frame where the total momentum could be arbitrary, formula_36 We can look at the two moving bodies as one system of which the total momentum is formula_37 the total energy is formula_33 and its velocity formula_38 is the velocity of its center of mass. Relative to the center of momentum frame the total momentum equals zero. It can be shown that formula_38 is given by: formula_39 Now the velocities before the collision in the center of momentum frame formula_40 and formula_41 are: formula_42 When formula_43 and formula_44 formula_45 Therefore, the classical calculation holds true when the speed of both colliding bodies is much lower than the speed of light (~300,000 kilometres per second). Relativistic derivation using hyperbolic functions. Using the so-called "parameter of velocity" formula_46 (usually called the rapidity), formula_47 we get formula_48 Relativistic energy and momentum are expressed as follows: formula_49 Equations sum of energy and momentum colliding masses formula_50 and formula_51 (velocities formula_52 correspond to the velocity parameters formula_53), after dividing by adequate power formula_32 are as follows: formula_54 and dependent equation, the sum of above equations: formula_55 subtract squares both sides equations "momentum" from "energy" and use the identity formula_56 after simplifying we get: formula_57 for non-zero mass, using the hyperbolic trigonometric identity formula_58 we get: formula_59 as functions formula_60 is even we get two solutions: formula_61 from the last equation, leading to a non-trivial solution, we solve formula_62 and substitute into the dependent equation, we obtain formula_63 and then formula_64 we have: formula_65 It is a solution to the problem, but expressed by the parameters of velocity. Return substitution to get the solution for velocities is: formula_66 Substitute the previous solutions and replace: formula_67 and formula_68 after long transformation, with substituting: formula_69 we get: formula_70 Two-dimensional. For the case of two non-spinning colliding bodies in two dimensions, the motion of the bodies is determined by the three conservation laws of momentum, kinetic energy and angular momentum. The overall velocity of each body must be split into two perpendicular velocities: one tangent to the common normal surfaces of the colliding bodies at the point of contact, the other along the line of collision. Since the collision only imparts force along the line of collision, the velocities that are tangent to the point of collision do not change. The velocities along the line of collision can then be used in the same equations as a one-dimensional collision. The final velocities can then be calculated from the two new component velocities and will depend on the point of collision. Studies of two-dimensional collisions are conducted for many bodies in the framework of a two-dimensional gas. In a center of momentum frame at any time the velocities of the two bodies are in opposite directions, with magnitudes inversely proportional to the masses. In an elastic collision these magnitudes do not change. The directions may change depending on the shapes of the bodies and the point of impact. For example, in the case of spheres the angle depends on the distance between the (parallel) paths of the centers of the two bodies. 
Any non-zero change of direction is possible: if this distance is zero the velocities are reversed in the collision; if it is close to the sum of the radii of the spheres the two bodies are only slightly deflected. Assuming that the second particle is at rest before the collision, the angles of deflection of the two particles, formula_71 and formula_72, are related to the angle of deflection formula_73 in the system of the center of mass by formula_74 The magnitudes of the velocities of the particles after the collision are: formula_75 Two-dimensional collision with two moving objects. The final x and y velocities components of the first ball can be calculated as: formula_76 where "v"1 and "v"2 are the scalar sizes of the two original speeds of the objects, "m"1 and "m"2 are their masses, "θ"1 and "θ"2 are their movement angles, that is, formula_77 (meaning moving directly down to the right is either a −45° angle, or a 315° angle), and lowercase phi (φ) is the contact angle. (To get the x and y velocities of the second ball, one needs to swap all the '1' subscripts with '2' subscripts.) This equation is derived from the fact that the interaction between the two bodies is easily calculated along the contact angle, meaning the velocities of the objects can be calculated in one dimension by rotating the x and y axis to be parallel with the contact angle of the objects, and then rotated back to the original orientation to get the true x and y components of the velocities. In an angle-free representation, the changed velocities are computed using the centers x1 and x2 at the time of contact as formula_78 where the angle brackets indicate the inner product (or dot product) of two vectors. Other conserved quantities. In the particular case of particles having equal masses, it can be verified by direct computation from the result above that the scalar product of the velocities before and after the collision are the same, that is formula_79 Although this product is not an additive invariant in the same way that momentum and kinetic energy are for elastic collisions, it seems that preservation of this quantity can nonetheless be used to derive higher-order conservation laws. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
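As a concrete illustration of the angle-free (vector) representation given above, the sketch below applies the update for two smooth discs and checks that momentum and kinetic energy are both conserved. It assumes NumPy; the masses, velocities and contact geometry are arbitrary test values.

```python
# Sketch of the angle-free two-dimensional update: only the velocity component
# along the line joining the centres at the moment of contact changes.
import numpy as np

def elastic_2d(m1, m2, v1, v2, x1, x2):
    """Post-collision velocities of two smooth discs with centres x1, x2 at contact."""
    d = x1 - x2
    d2 = d @ d
    v1p = v1 - (2 * m2 / (m1 + m2)) * ((v1 - v2) @ d) / d2 * d
    v2p = v2 - (2 * m1 / (m1 + m2)) * ((v2 - v1) @ (x2 - x1)) / d2 * (x2 - x1)
    return v1p, v2p

m1, m2 = 1.0, 3.0
v1, v2 = np.array([2.0, 0.0]), np.array([-1.0, 0.5])
x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.3])   # centres at the moment of contact

v1p, v2p = elastic_2d(m1, m2, v1, v2, x1, x2)

# total momentum and total kinetic energy are unchanged
print(m1 * v1 + m2 * v2, m1 * v1p + m2 * v2p)
print(0.5 * m1 * v1 @ v1 + 0.5 * m2 * v2 @ v2,
      0.5 * m1 * v1p @ v1p + 0.5 * m2 * v2p @ v2p)
```

The printed sums match before and after the collision, since the component of each velocity tangent to the contact is left untouched.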
[ { "math_id": 0, "text": " m_{A}v_{A1}+m_{B}v_{B1} \\ =\\ m_{A}v_{A2} + m_{B}v_{B2}." }, { "math_id": 1, "text": "\\tfrac12 m_{A}v_{A1}^2+\\tfrac12 m_{B}v_{B1}^2 \\ =\\ \\tfrac12 m_{A}v_{A2}^2 +\\tfrac12 m_{B}v_{B2}^2." }, { "math_id": 2, "text": "v_{A2},v_{B2}" }, { "math_id": 3, "text": "v_{A1},v_{B1}" }, { "math_id": 4, "text": "\n\\begin{array}{ccc}\nv_{A2} &=& \\dfrac{m_A-m_B}{m_A+m_B} v_{A1} + \\dfrac{2m_B}{m_A+m_B} v_{B1} \\\\[.5em]\nv_{B2} &=& \\dfrac{2m_A}{m_A+m_B} v_{A1} + \\dfrac{m_B-m_A}{m_A+m_B} v_{B1}.\n\\end{array}\n" }, { "math_id": 5, "text": "v = (1+e)v_{CoM}-eu, v_{CoM} = \\dfrac{m_Av_{A1} + m_Bv_{B1}}{m_A+m_B} " }, { "math_id": 6, "text": "\n\\begin{align}\nv_{A2} &= v_{B1} \\\\\nv_{B2} &= v_{A1}.\n\\end{align}" }, { "math_id": 7, "text": "m_A=m_B" }, { "math_id": 8, "text": "m_{A}" }, { "math_id": 9, "text": "m_{B}" }, { "math_id": 10, "text": "v_{A1}" }, { "math_id": 11, "text": "v_{A2}" }, { "math_id": 12, "text": "v_{A2},v_{B2}," }, { "math_id": 13, "text": "\\begin{align}\nm_A(v_{A2}^2-v_{A1}^2) &= m_B(v_{B1}^2-v_{B2}^2) \\\\\nm_A(v_{A2}-v_{A1}) &= m_B(v_{B1}-v_{B2})\n\\end{align}" }, { "math_id": 14, "text": "\\tfrac{a^2-b^2}{(a-b)} = a+b," }, { "math_id": 15, "text": " v_{A2}+v_{A1}=v_{B1}+v_{B2} \\quad\\Rightarrow\\quad v_{A2}-v_{B2} = v_{B1}-v_{A1}" }, { "math_id": 16, "text": "m_A,m_B,v_{A1},v_{B1}" }, { "math_id": 17, "text": "\\left\\{\\begin{array}{rcrcc}\nv_{A2} & - & v_{B2} &=& v_{B1}-v_{A1} \\\\\nm_Av_{A1}&+&m_Bv_{B1} &=& m_Av_{A2}+m_Bv_{B2}.\n\\end{array}\\right." }, { "math_id": 18, "text": "v_{B2}" }, { "math_id": 19, "text": " t " }, { "math_id": 20, "text": " t' " }, { "math_id": 21, "text": "\\begin{align}\n\\bar{x}(t) &= \\frac{m_{A} x_{A}(t)+m_{B} x_{B}(t)}{m_{A}+m_{B}} \\\\\n\\bar{x}(t') &= \\frac{m_{A} x_{A}(t')+m_{B} x_{B}(t')}{m_{A}+m_{B}}.\n\\end{align}" }, { "math_id": 22, "text": "\\begin{align}\nv_{ \\bar{x} } &= \\frac{m_{A}v_{A1}+m_{B}v_{B1}}{m_{A}+m_{B}} \\\\\nv_{ \\bar{x} }' &= \\frac{m_{A}v_{A2}+m_{B}v_{B2}}{m_{A}+m_{B}}.\n\\end{align}" }, { "math_id": 23, "text": " v_{ \\bar{x} } " }, { "math_id": 24, "text": " v_{ \\bar{x} }' " }, { "math_id": 25, "text": " v_{ \\bar{x} } = v_{ \\bar{x} }' \\,." 
}, { "math_id": 26, "text": "p = \\frac{mv}{\\sqrt{1-\\frac{v^2}{c^2}}}" }, { "math_id": 27, "text": "\\begin{align}\np_1 &= - p_2 \\\\\np_1^2 &= p_2^2 \\\\\nE &= \\sqrt {m_1^2c^4 + p_1^2c^2} + \\sqrt {m_2^2c^4 + p_2^2c^2} = E \\\\\np_1 &= \\pm \\frac{\\sqrt{E^4 - 2E^2m_1^2c^4 - 2E^2m_2^2c^4 + m_1^4c^8 - 2m_1^2m_2^2c^8 + m_2^4c^8}}{2cE} \\\\\nu_1 &= -v_1.\n\\end{align}" }, { "math_id": 28, "text": "m_1, m_2" }, { "math_id": 29, "text": "u_1, u_2" }, { "math_id": 30, "text": "v_1, v_2" }, { "math_id": 31, "text": "p_1, p_2" }, { "math_id": 32, "text": "c" }, { "math_id": 33, "text": "E" }, { "math_id": 34, "text": "\\begin{align}\nm_1u_1 + m_2u_2 &= m_1v_1 + m_2v_2 = 0 \\\\\nm_1u_1^2 + m_2u_2^2 &= m_1v_1^2 + m_2v_2^2 \\\\\n\\frac{(m_2u_2)^2}{2m_1} + \\frac{(m_2u_2)^2}{2m_2} &= \\frac{(m_2v_2)^2}{2m_1} + \\frac{(m_2v_2)^2}{2m_2} \\\\\n(m_1 + m_2)(m_2u_2)^2 &= (m_1 + m_2)(m_2v_2)^2 \\\\\nu_2 &= -v_2 \\\\\n\\frac{(m_1u_1)^2}{2m_1} + \\frac{(m_1u_1)^2}{2m_2} &= \n\\frac{(m_1v_1)^2}{2m_1} + \\frac{(m_1v_1)^2}{2m_2} \\\\\n(m_1 + m_2)(m_1u_1)^2 &= (m_1 + m_2)(m_1v_1)^2 \\\\\nu_1 &= -v_1\\,.\n\\end{align}\n" }, { "math_id": 35, "text": "u_1 = -v_1," }, { "math_id": 36, "text": "\\begin{align}\n\\frac{m_1\\;u_1}{\\sqrt{1-u_1^2/c^2}} +\n\\frac{m_2\\;u_2}{\\sqrt{1-u_2^2/c^2}} &= \n\\frac{m_1\\;v_1}{\\sqrt{1-v_1^2/c^2}} +\n\\frac{m_2\\;v_2}{\\sqrt{1-v_2^2/c^2}}=p_T \\\\\n\\frac{m_1c^2}{\\sqrt{1-u_1^2/c^2}} +\n\\frac{m_2c^2}{\\sqrt{1-u_2^2/c^2}} &=\n\\frac{m_1c^2}{\\sqrt{1-v_1^2/c^2}} +\n\\frac{m_2c^2}{\\sqrt{1-v_2^2/c^2}}=E\n\\end{align}" }, { "math_id": 37, "text": "p_T," }, { "math_id": 38, "text": "v_c" }, { "math_id": 39, "text": "v_c = \\frac{p_T c^2}{E}" }, { "math_id": 40, "text": "u_1 '" }, { "math_id": 41, "text": "u_2 '" }, { "math_id": 42, "text": "\\begin{align}\nu_1' &= \\frac{u_1 - v_c}{1- \\frac{u_1 v_c}{c^2}} \\\\\nu_2' &= \\frac{u_2 - v_c}{1- \\frac{u_2 v_c}{c^2}} \\\\\nv_1' &= -u_1' \\\\\nv_2' &= -u_2' \\\\\nv_1 &= \\frac{v_1' + v_c}{1+ \\frac{v_1' v_c}{c^2}} \\\\\nv_2 &= \\frac{v_2' + v_c}{1+ \\frac{v_2' v_c}{c^2}}\n\\end{align}" }, { "math_id": 43, "text": "u_1 \\ll c" }, { "math_id": 44, "text": "u_2 \\ll c\\,, " }, { "math_id": 45, "text": "\\begin{align}\np_T &\\approx m_1 u_1 + m_2 u_2 \\\\\nv_c &\\approx \\frac{m_1 u_1 + m_2 u_2}{m_1 + m_2} \\\\\nu_1' &\\approx u_1 - v_c \\approx \\frac {m_1 u_1 + m_2 u_1 - m_1 u_1 - m_2 u_2}{m_1 + m_2} = \\frac {m_2 (u_1 - u_2)}{m_1 + m_2} \\\\\nu_2' &\\approx \\frac {m_1 (u_2 - u_1)}{m_1 + m_2} \\\\\nv_1' &\\approx \\frac {m_2 (u_2 - u_1)}{m_1 + m_2} \\\\\nv_2' &\\approx \\frac {m_1 (u_1 - u_2)}{m_1 + m_2} \\\\\nv_1 &\\approx v_1' + v_c \\approx \\frac {m_2 u_2 - m_2 u_1 + m_1 u_1 + m_2 u_2}{m_1 + m_2} = \\frac{u_1 (m_1 - m_2) + 2m_2 u_2}{m_1 + m_2} \\\\\nv_2 &\\approx \\frac{u_2 (m_2 - m_1) + 2m_1 u_1}{m_1 + m_2}\n\\end{align}" }, { "math_id": 46, "text": "s" }, { "math_id": 47, "text": "\\frac{v}{c}=\\tanh(s)," }, { "math_id": 48, "text": "\\sqrt{1-\\frac{v^2}{c^2}}=\\operatorname{sech}(s)." 
}, { "math_id": 49, "text": "\\begin{align}\nE &= \\frac{mc^2}{\\sqrt{1-\\frac{v^2}{c^2}}} = m c^2 \\cosh(s) \\\\\np &= \\frac{mv}{\\sqrt{1-\\frac{v^2}{c^2}}}=m c \\sinh(s)\n\\end{align}" }, { "math_id": 50, "text": "m_1" }, { "math_id": 51, "text": "m_2," }, { "math_id": 52, "text": "v_1, v_2, u_1, u_2" }, { "math_id": 53, "text": "s_1, s_2, s_3, s_4" }, { "math_id": 54, "text": "\\begin{align}\nm_1 \\cosh(s_1)+m_2 \\cosh(s_2) &= m_1 \\cosh(s_3)+m_2 \\cosh(s_4) \\\\\nm_1 \\sinh(s_1)+m_2 \\sinh(s_2) &= m_1 \\sinh(s_3)+m_2 \\sinh(s_4)\n\\end{align}" }, { "math_id": 55, "text": "m_1 e^{s_1}+m_2 e^{s_2}=m_1 e^{s_3}+m_2 e^{s_4}" }, { "math_id": 56, "text": "\\cosh^2(s)-\\sinh^2(s)=1," }, { "math_id": 57, "text": "2 m_1 m_2 (\\cosh(s_1) \\cosh(s_2)-\\sinh(s_2) \\sinh(s_1)) = 2 m_1 m_2 (\\cosh(s_3) \\cosh(s_4)-\\sinh(s_4) \\sinh(s_3))" }, { "math_id": 58, "text": "\\cosh(a-b)=\\cosh(a)\\cosh(b)-\\sinh(b)\\sinh(a)," }, { "math_id": 59, "text": "\\cosh(s_1-s_2) = \\cosh(s_3-s_4)" }, { "math_id": 60, "text": "\\cosh(s)" }, { "math_id": 61, "text": "\\begin{align}\ns_1-s_2 &= s_3-s_4 \\\\\ns_1-s_2 &= -s_3+s_4\n\\end{align}" }, { "math_id": 62, "text": "s_2" }, { "math_id": 63, "text": "e^{s_1}" }, { "math_id": 64, "text": "e^{s_2}," }, { "math_id": 65, "text": "\\begin{align}\ne^{s_1} &= e^{s_4}{\\frac{m_1 e^{s_3}+m_2 e^{s_4}} {m_1 e^{s_4}+m_2 e^{s_3}}} \\\\\ne^{s_2} &= e^{s_3}{\\frac{m_1 e^{s_3}+m_2 e^{s_4}} {m_1 e^{s_4}+m_2 e^{s_3}}}\n\\end{align}" }, { "math_id": 66, "text": "\\begin{align}\nv_1/c &= \\tanh(s_1) = {\\frac{e^{s_1}-e^{-s_1}} {e^{s_1}+e^{-s_1}}} \\\\\nv_2/c &= \\tanh(s_2) = {\\frac{e^{s_2}-e^{-s_2}} {e^{s_2}+e^{-s_2}}}\n\\end{align}" }, { "math_id": 67, "text": "e^{s_3}=\\sqrt{\\frac{c+u_1} {c-u_1}}" }, { "math_id": 68, "text": "e^{s_4}=\\sqrt{\\frac{c+u_2}{c-u_2}}, " }, { "math_id": 69, "text": " Z=\\sqrt{\\left(1-u_1^2/c^2\\right) \\left(1-u_2^2/c^2\\right)} " }, { "math_id": 70, "text": "\\begin{align}\nv_1 &= \\frac{2 m_1 m_2 c^2 u_2 Z+2 m_2^2 c^2 u_2-(m_1^2+m_2^2) u_1 u_2^2+(m_1^2-m_2^2) c^2 u_1} {2 m_1 m_2 c^2 Z-2 m_2^2 u_1 u_2-(m_1^2-m_2^2) u_2^2+(m_1^2+m_2^2) c^2} \\\\\nv_2 &= \\frac{2 m_1 m_2 c^2 u_1 Z+2 m_1^2 c^2 u_1-(m_1^2+m_2^2) u_1^2 u_2+(m_2^2-m_1^2) c^2 u_2} {2 m_1 m_2 c^2 Z-2 m_1^2 u_1 u_2-(m_2^2-m_1^2) u_1^2+(m_1^2+m_2^2) c^2}\\,.\n\\end{align}" }, { "math_id": 71, "text": "\\theta_1" }, { "math_id": 72, "text": "\\theta_2" }, { "math_id": 73, "text": "\\theta" }, { "math_id": 74, "text": "\\tan \\theta_1=\\frac{m_2 \\sin \\theta}{m_1+m_2 \\cos \\theta},\\qquad\n\\theta_2=\\frac{{\\pi}-{\\theta}}{2}." 
}, { "math_id": 75, "text": "\\begin{align}\nv'_1 &= v_1\\frac{\\sqrt{m_1^2+m_2^2+2m_1m_2\\cos \\theta}}{m_1+m_2} \\\\\nv'_2 &= v_1\\frac{2m_1}{m_1+m_2}\\sin \\frac{\\theta}{2}.\n\\end{align}" }, { "math_id": 76, "text": "\\begin{align}\nv'_{1x} &= \\frac{v_{1}\\cos(\\theta_1-\\varphi)(m_1-m_2)+2m_2v_{2}\\cos(\\theta_2-\\varphi)}{m_1+m_2}\\cos(\\varphi)+v_{1}\\sin(\\theta_1-\\varphi)\\cos(\\varphi + \\tfrac{\\pi}{2})\n\\\\[0.8em]\nv'_{1y} &= \\frac{v_{1}\\cos(\\theta_1-\\varphi)(m_1-m_2)+2m_2v_{2}\\cos(\\theta_2-\\varphi)}{m_1+m_2}\\sin(\\varphi)+v_{1}\\sin(\\theta_1-\\varphi)\\sin(\\varphi + \\tfrac{\\pi}{2}),\n\\end{align}" }, { "math_id": 77, "text": "v_{1x} = v_1\\cos\\theta_1,\\; v_{1y}=v_1\\sin\\theta_1" }, { "math_id": 78, "text": "\\begin{align}\n\\mathbf{v}'_1 &= \\mathbf{v}_1-\\frac{2 m_2}{m_1+m_2} \\ \\frac{\\langle \\mathbf{v}_1-\\mathbf{v}_2,\\,\\mathbf{x}_1-\\mathbf{x}_2\\rangle}{\\|\\mathbf{x}_1-\\mathbf{x}_2\\|^2} \\ (\\mathbf{x}_1-\\mathbf{x}_2),\n\\\\\n\\mathbf{v}'_2 &= \\mathbf{v}_2-\\frac{2 m_1}{m_1+m_2} \\ \\frac{\\langle \\mathbf{v}_2-\\mathbf{v}_1,\\,\\mathbf{x}_2-\\mathbf{x}_1\\rangle}{\\|\\mathbf{x}_2-\\mathbf{x}_1\\|^2} \\ (\\mathbf{x}_2-\\mathbf{x}_1)\n\\end{align}" }, { "math_id": 79, "text": "\\langle \\mathbf{v}'_1,\\mathbf{v}'_2 \\rangle = \\langle \\mathbf{v}_1,\\mathbf{v}_2 \\rangle." } ]
https://en.wikipedia.org/wiki?curid=65907
65907625
Critical three-state Potts model
Two dimensional conformal field theory The three-state Potts CFT, also known as the formula_0 parafermion CFT, is a conformal field theory in two dimensions. It is a minimal model with central charge formula_1. It is considered to be the simplest minimal model with a non-diagonal partition function in Virasoro characters, as well as the simplest non-trivial CFT with the W-algebra as a symmetry. Properties. The critical three-state Potts model has a central charge of formula_2, and thus belongs to the discrete family of unitary minimal models with central charge less than one. These conformal field theories are fully classified and for the most part well-understood. The modular partition function of the critical three-state Potts model is given by formula_3 Here formula_4 refers to the Virasoro character, found by taking the trace over the Verma module generated from the Virasoro primary operator labeled by integers formula_5. The labeling formula_6 is a standard convention for primary operators of the formula_7 minimal models. Furthermore, the critical three-state Potts model is symmetric not only under the Virasoro algebra, but also under an enlarged algebra called the W-algebra that includes the Virasoro algebra as well as some spin-3 currents. The local holomorphic W primaries are given by formula_8. The local antiholomorphic W primaries similarly are given by formula_9 with the same scaling dimensions. Each field in the theory is either a combination of a holomorphic and antiholomorphic W-algebra primary field, or a descendant of such a field generated by acting with W-algebra generators. Some primaries of the Virasoro algebra, such as the formula_10 primary, are not primaries of the W algebra. The partition function is diagonal when expressed in terms of W-algebra characters (where traces are taken over irreducible representations of the W algebra, instead of over irreducible representations of the Virasoro algebra). Since formula_11 and formula_12, we can write formula_13 The operators formula_14 are charged under the action of a global formula_0 symmetry. That is, under a global global formula_0 transformation, they pick up phases formula_15 and formula_16 for formula_17. The fusion rules governing the operator product expansions involving these fields respect the action of this formula_0 transformation. There is also a charge conjugation symmetry that interchanges formula_18. Sometimes the notation formula_19 is used in the literature instead of formula_14. The critical three-state Potts model is one of the two modularly invariant conformal field theories that exist with central charge formula_20. The other such theory is the tetracritical Ising model, which has a diagonal partition function in terms of Virasoro characters. It is possible to obtain the critical three-state Potts model from the tetracritical Ising model by applying a formula_21 orbifold transformation to the latter. Lattice Hamiltonians. The critical three-state Potts conformal field theory can be realised as the low energy effective theory at the phase transition of the one-dimensional quantum three-state Potts model. The Hamiltonian of the quantum three-state Potts model is given by formula_22 Here formula_23 and formula_24 are positive parameters. The first term couples degrees of freedom on nearest neighbour sites in the lattice. formula_25 and formula_26 are formula_27 clock matrices satisfying formula_28 and same-site commutation relation formula_29 where formula_30. 
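The clock-matrix algebra and the lattice Hamiltonian above are easy to check numerically on a very small chain. The sketch below assumes NumPy; the basis ordering of the clock matrices and the open boundary condition are my own choices, not prescribed by the article.

```python
# Sketch: 3x3 clock matrices and the quantum three-state Potts Hamiltonian
# on a short open chain, checking the relations quoted above.
import numpy as np

w = np.exp(2j * np.pi / 3)
Z = np.diag([1, w, w**2])
X = np.roll(np.eye(3), 1, axis=0)          # X|n> = |n+1 mod 3>

# algebra stated in the text: X^3 = Z^3 = 1 and Z X = w X Z
print(np.allclose(np.linalg.matrix_power(X, 3), np.eye(3)),
      np.allclose(np.linalg.matrix_power(Z, 3), np.eye(3)),
      np.allclose(Z @ X, w * X @ Z))

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else np.eye(3))
    return out

def potts_hamiltonian(L, J=1.0, g=1.0):
    """H = -J ( sum over bonds of (Z_i^dag Z_j + h.c.) + g sum_j (X_j + X_j^dag) ), open chain."""
    H = np.zeros((3**L, 3**L), dtype=complex)
    for j in range(L - 1):                  # nearest-neighbour coupling
        ZdZ = site_op(Z.conj().T, j, L) @ site_op(Z, j + 1, L)
        H -= J * (ZdZ + ZdZ.conj().T)
    for j in range(L):                      # transverse (X) term
        H -= J * g * (site_op(X, j, L) + site_op(X, j, L).conj().T)
    return H

H = potts_hamiltonian(L=3, g=1.0)
print(np.allclose(H, H.conj().T))           # the Hamiltonian is Hermitian
```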
This Hamiltonian is symmetric under any permutation of the three formula_31 eigenstates on each site, as long as the same permutation is done on every site. Thus it is said to have a global formula_32 symmetry. A formula_0 subgroup of this symmetry is generated by the unitary operator formula_33. In one dimension, the model has two gapped phases, the ordered phase and the disordered phase. The ordered phase occurs at formula_34 and is characterised by a nonzero ground state expectation value of the order parameter formula_35 at any site formula_36. The ground state in this phase explicitly breaks the global formula_32 symmetry and is thus three-fold degenerate. The disordered phase occurs at formula_37 and is characterised by a single ground state. In between these two phases is a phase transition at formula_38. At this particular value of formula_24, the Hamiltonian is gapless with a ground state energy of formula_39, where formula_40 is the length of the chain. In other words, in the limit of an infinitely long chain, the lowest energy eigenvalues of the Hamiltonian are spaced infinitesimally close to each other. As is the case for most one dimensional gapless theories, it is possible to describe the low energy physics of the 3-state Potts model using a 1+1 dimensional conformal field theory; in this particular lattice model that conformal field theory is none other than the critical three-state Potts model. Lattice operator correspondence. Under the flow of renormalisation group, lattice operators in the quantum three-state Potts model flow to fields in the conformal field theory. In general, understanding which operators flow to what fields is difficult and not obvious. Analytical and numerical arguments suggest a correspondence between a few lattice operators and CFT fields as follows. Lattice indices formula_36 map to the corresponding field positions formula_41 in space-time, and non-universal real number prefactors are ignored. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
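A companion sketch (same conventions and helpers as above, repeated so the snippet is self-contained) diagonalizes a tiny chain to illustrate two statements from this section: the Hamiltonian commutes with the global Z3 generator built from the X matrices, and the ground state is three-fold degenerate in the ordered limit g = 0 but unique deep in the disordered phase. The chain length and the value g = 5 are arbitrary illustrative choices.

```python
# Sketch: exact diagonalisation of a tiny open chain, checking the Z3 symmetry
# and the ordered (g = 0) versus disordered (g >> 1) ground-state structure.
import numpy as np

w = np.exp(2j * np.pi / 3)
Z = np.diag([1, w, w**2])
X = np.roll(np.eye(3), 1, axis=0)

def site_op(op, j, L):
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else np.eye(3))
    return out

def potts_hamiltonian(L, J=1.0, g=1.0):
    H = np.zeros((3**L, 3**L), dtype=complex)
    for j in range(L - 1):
        ZdZ = site_op(Z.conj().T, j, L) @ site_op(Z, j + 1, L)
        H -= J * (ZdZ + ZdZ.conj().T)
    for j in range(L):
        H -= J * g * (site_op(X, j, L) + site_op(X, j, L).conj().T)
    return H

L = 4
UX = site_op(X, 0, L)
for j in range(1, L):
    UX = UX @ site_op(X, j, L)               # global Z3 generator, product of X_j

print(np.allclose(potts_hamiltonian(L) @ UX, UX @ potts_hamiltonian(L)))

for g in (0.0, 5.0):
    evals = np.linalg.eigvalsh(potts_hamiltonian(L, g=g))
    print(f"g = {g}: four lowest energies {np.round(evals[:4], 3)}")
# g = 0: the three lowest energies coincide (ordered, symmetry-broken limit);
# g = 5: a single lowest energy, separated from the rest by a clear gap.
```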
[ { "math_id": 0, "text": " \\mathbb{Z}_3 " }, { "math_id": 1, "text": " c=4/5 " }, { "math_id": 2, "text": " c = 4/5 " }, { "math_id": 3, "text": " Z = |\\chi_{1,1} + \\chi_{4,1}|^2 + |\\chi_{2,1} + \\chi_{3,1}|^2 + 2|\\chi_{4,3}|^2+2|\\chi_{3,3}|^2" }, { "math_id": 4, "text": " \\chi_{r,s} (q) \\equiv \\textrm{Tr}_{(r,s)} (q^{L_0-c/24}) " }, { "math_id": 5, "text": " r, s " }, { "math_id": 6, "text": " (r, s) " }, { "math_id": 7, "text": " c<1 " }, { "math_id": 8, "text": " 1, \\epsilon, \\sigma_1, \\sigma_2, \\psi_1, \\psi_2 " }, { "math_id": 9, "text": " 1, \\bar{\\epsilon}, \\bar{\\sigma}_1, \\bar{\\sigma}_2, \\bar{\\psi}_1, \\bar{\\psi}_2 " }, { "math_id": 10, "text": " (3,1)" }, { "math_id": 11, "text": " \\chi_1 = \\chi_{1,1} + \\chi_{4,1} " }, { "math_id": 12, "text": " \\chi_{\\epsilon} = \\chi_{2,1} + \\chi_{3,1} " }, { "math_id": 13, "text": " Z = |\\chi_{1}|^2 + |\\chi_{\\epsilon}|^2 + |\\chi_{\\psi_1}|^2+|\\chi_{\\psi_2}|^2+|\\chi_{\\sigma_1}|^2+|\\chi_{\\sigma_2}|^2" }, { "math_id": 14, "text": " \\sigma_1, \\sigma_2, \\psi_1, \\psi_2 " }, { "math_id": 15, "text": " \\sigma_a \\to e^{2\\pi i a/3} \\sigma_a " }, { "math_id": 16, "text": " \\psi_a \\to e^{2\\pi i a/3} \\psi_a " }, { "math_id": 17, "text": " a = 1,2" }, { "math_id": 18, "text": " \\sigma_1 \\leftrightarrow \\sigma_2, \\psi_1 \\leftrightarrow \\psi_2 " }, { "math_id": 19, "text": " \\sigma, \\sigma^\\dagger, \\psi, \\psi^\\dagger " }, { "math_id": 20, "text": " c= 4/5 " }, { "math_id": 21, "text": " \\mathbb{Z}_2 " }, { "math_id": 22, "text": "H = -J(\\sum_{ \\langle i, j \\rangle} (Z^\\dagger_i Z_{j}+ Z_i Z^{\\dagger}_{j}) + g \\sum_j (X_j + X^\\dagger_j) )" }, { "math_id": 23, "text": " J " }, { "math_id": 24, "text": " g " }, { "math_id": 25, "text": " X " }, { "math_id": 26, "text": " Z " }, { "math_id": 27, "text": " 3 \\times 3 " }, { "math_id": 28, "text": " X^3 = Z^3 = 1 " }, { "math_id": 29, "text": " ZX = \\omega XZ " }, { "math_id": 30, "text": " \\omega = -\\frac{1}{2} + i \\frac{\\sqrt{3}}{2}" }, { "math_id": 31, "text": "Z" }, { "math_id": 32, "text": " S_3 " }, { "math_id": 33, "text": " \\prod_j X_j " }, { "math_id": 34, "text": " 0 <g <1 " }, { "math_id": 35, "text": " Z_j " }, { "math_id": 36, "text": " j " }, { "math_id": 37, "text": " g>1" }, { "math_id": 38, "text": " g= 1 " }, { "math_id": 39, "text": " E_0 = -(\\frac{4}{3}+ \\frac{2\\sqrt{3}}{\\pi})J L" }, { "math_id": 40, "text": " L " }, { "math_id": 41, "text": " z , \\bar{z} " }, { "math_id": 42, "text": " Z_j \\sim \\Phi_{\\sigma_1, \\bar{\\sigma}_1} (z,\\bar z)" }, { "math_id": 43, "text": " \\frac{2}{15} " }, { "math_id": 44, "text": " \\sigma_1(z) " }, { "math_id": 45, "text": " \\bar \\sigma_1(\\bar z) " }, { "math_id": 46, "text": " Z_j^\\dagger \\sim \\Phi_{\\sigma_2, \\bar{\\sigma}_2} (z,\\bar z)" }, { "math_id": 47, "text": " Z_j Z_{j+1}^\\dagger - \\frac{1}{2}(X_j + X_{j+1}) + \\textrm{h.c.} \\sim \\Phi_{\\epsilon, \\bar{\\epsilon}}(z,\\bar z) " }, { "math_id": 48, "text": " -Z_j Z_{j+1}^\\dagger - \\frac{1}{2}(X_j + X_{j+1}) + \\textrm{h.c.} + \\frac{4}{3}+ \\frac{2\\sqrt{3}}{\\pi} \\sim T(z) + \\bar T(\\bar z) " }, { "math_id": 49, "text": " Z_j(2-3\\omega^2 X_j - 3\\omega X_j^2) -2Z_j^\\dagger (Z_{j-1}^\\dagger +Z_{j+1}^\\dagger) \\sim \\psi_1(z) \\bar \\psi_1(\\bar z)" } ]
https://en.wikipedia.org/wiki?curid=65907625
65908
Inelastic collision
Collision in which energy is lost to heat An inelastic collision, in contrast to an elastic collision, is a collision in which kinetic energy is not conserved due to the action of internal friction. In collisions of macroscopic bodies, some kinetic energy is turned into vibrational energy of the atoms, causing a heating effect, and the bodies are deformed. The molecules of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules' translational motion and their internal degrees of freedom with each collision. At any one instant, half the collisions are – to a varying extent – inelastic (the pair possesses less kinetic energy after the collision than before), and half could be described as “super-elastic” (possessing "more" kinetic energy after the collision than before). Averaged across an entire sample, molecular collisions are elastic. Although inelastic collisions do not conserve kinetic energy, they do obey conservation of momentum. Simple ballistic pendulum problems obey the conservation of kinetic energy "only" when the block swings to its largest angle. In nuclear physics, an inelastic collision is one in which the incoming particle causes the nucleus it strikes to become excited or to break up. Deep inelastic scattering is a method of probing the structure of subatomic particles in much the same way as Rutherford probed the inside of the atom (see Rutherford scattering). Such experiments were performed on protons in the late 1960s using high-energy electrons at the Stanford Linear Accelerator (SLAC). As in Rutherford scattering, deep inelastic scattering of electrons by proton targets revealed that most of the incident electrons interact very little and pass straight through, with only a small number bouncing back. This indicates that the charge in the proton is concentrated in small lumps, reminiscent of Rutherford's discovery that the positive charge in an atom is concentrated at the nucleus. However, in the case of the proton, the evidence suggested three distinct concentrations of charge (quarks) and not one. Formula. The formula for the velocities after a one-dimensional collision is: formula_0 where In a center of momentum frame the formulas reduce to: formula_1 For two- and three-dimensional collisions the velocities in these formulas are the components perpendicular to the tangent line/plane at the point of contact. If assuming the objects are not rotating before or after the collision, the normal impulse is: formula_2 where formula_3 is the normal vector. Assuming no friction, this gives the velocity updates: formula_4 Perfectly inelastic collision. A perfectly inelastic collision occurs when the maximum amount of kinetic energy of a system is lost. In a perfectly inelastic collision, i.e., a zero coefficient of restitution, the colliding particles stick together. In such a collision, kinetic energy is lost by bonding the two bodies together. This bonding energy usually results in a maximum kinetic energy loss of the system. It is necessary to consider conservation of momentum: (Note: In the sliding block example above, momentum of the two body system is only conserved if the surface has zero friction. With friction, momentum of the two bodies is transferred to the surface that the two bodies are sliding upon. Similarly, if there is air resistance, the momentum of the bodies can be transferred to the air.) The equation below holds true for the two-body (Body A, Body B) system collision in the example above. 
In this example, momentum of the system is conserved because there is no friction between the sliding bodies and the surface. formula_5 where "v" is the final velocity, which is hence given by formula_6The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how relative this is. The change in kinetic energy is hence: formula_7 where μ is the reduced mass and urel is the relative velocity of the bodies before collision. With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile, or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation). Partially inelastic collisions. Partially inelastic collisions are the most common form of collisions in the real world. In this type of collision, the objects involved in the collisions do not stick, but some kinetic energy is still lost. Friction, sound and heat are some ways the kinetic energy can be lost through partial inelastic collisions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
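The restitution formula quoted earlier in this article and the kinetic-energy loss derived above can be checked together in a few lines. In the sketch below (helper names and test values are mine), C_R is the coefficient of restitution: C_R = 1 recovers the elastic case and C_R = 0 the perfectly inelastic case.

```python
# Sketch: 1D collision with a coefficient of restitution C_R.
def collide_1d(m_a, m_b, u_a, u_b, c_r):
    v_a = (c_r * m_b * (u_b - u_a) + m_a * u_a + m_b * u_b) / (m_a + m_b)
    v_b = (c_r * m_a * (u_a - u_b) + m_a * u_a + m_b * u_b) / (m_a + m_b)
    return v_a, v_b

def kinetic_energy(m_a, m_b, v_a, v_b):
    return 0.5 * m_a * v_a**2 + 0.5 * m_b * v_b**2

m_a, m_b, u_a, u_b = 2.0, 3.0, 5.0, -1.0

# perfectly inelastic (C_R = 0): both bodies move with the common velocity v
v_a, v_b = collide_1d(m_a, m_b, u_a, u_b, c_r=0.0)
print(v_a, v_b, (m_a * u_a + m_b * u_b) / (m_a + m_b))

# the kinetic energy lost equals (1/2) * mu * |u_a - u_b|^2
mu = m_a * m_b / (m_a + m_b)
lost = kinetic_energy(m_a, m_b, u_a, u_b) - kinetic_energy(m_a, m_b, v_a, v_b)
print(lost, 0.5 * mu * (u_a - u_b)**2)

# elastic limit (C_R = 1): kinetic energy is unchanged
v_a, v_b = collide_1d(m_a, m_b, u_a, u_b, c_r=1.0)
print(kinetic_energy(m_a, m_b, u_a, u_b), kinetic_energy(m_a, m_b, v_a, v_b))
```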
[ { "math_id": 0, "text": "\n\\begin{align}\nv_a &= \\frac{C_R m_b (u_b - u_a) + m_a u_a + m_b u_b} {m_a+m_b} \\\\\nv_b &= \\frac{C_R m_a (u_a - u_b) + m_a u_a + m_b u_b} {m_a+m_b}\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\nv_a &= -C_R u_a \\\\\nv_b &= -C_R u_b\n\\end{align}\n" }, { "math_id": 2, "text": "J_{n} = \\frac{m_{a} m_{b}}{m_{a} + m_{b}} (1 + C_R) (\\vec{u_{b}} - \\vec{u_{a}}) \\cdot \\vec{n}" }, { "math_id": 3, "text": "\\vec{n}" }, { "math_id": 4, "text": "\n\\begin{align}\n\\Delta \\vec{v_{a}} &= \\frac{J_{n}}{m_{a}} \\vec{n} \\\\\n\\Delta \\vec{v_{b}} &= -\\frac{J_{n}}{m_{b}} \\vec{n}\n\\end{align}\n" }, { "math_id": 5, "text": "m_a u_a + m_b u_b = \\left( m_a + m_b \\right) v " }, { "math_id": 6, "text": " v=\\frac{m_a u_a + m_b u_b}{m_a + m_b}" }, { "math_id": 7, "text": " \\Delta KE = {1\\over 2}\\mu u^2_{\\rm rel} = \\frac{1}{2}\\frac{m_a m_b}{m_a + m_b}|u_a - u_b|^2 " } ]
https://en.wikipedia.org/wiki?curid=65908
65913
Equations of motion
Equations that describe the behavior of a physical system &lt;templatestyles src="Hlist/styles.css"/&gt; In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice are generalized coordinates which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions for the differential equations describing the motion of the dynamics. Types. There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term "dynamics" refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations. However, kinematics is simpler. It concerns only variables derived from the positions of objects and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement ("s"), initial velocity ("u"), final velocity ("v"), acceleration ("a"), and time ("t"). A differential equation of motion, usually identified as some physical law (for example, F = ma) and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants. To state this formally, in general an equation of motion "M" is a function of the position r of the object, its velocity (the first time derivative of r, v ), and its acceleration (the second derivative of r, a ), and time "t". Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second-order ordinary differential equation (ODE) in r, formula_0 where "t" is time, and each overdot denotes one time derivative. The initial conditions are given by the "constant" values at "t" 0, formula_1 The solution r("t") to the equation of motion, with specified initial values, describes the system for all times "t" after "t" 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time "t" is by far the most sought-after quantity. Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how "sensitive" the system is to the initial conditions. History. 
Kinematics, dynamics and the mathematical models of the universe developed incrementally over three millennia, thanks to many thinkers, only some of whose names we know. In antiquity, priests, astrologers and astronomers predicted solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon. But they had nothing other than a set of algorithms to guide them. Equations of motion were not written down for another thousand years. Medieval scholars in the thirteenth century — for example at the relatively new universities in Oxford and Paris — drew on ancient mathematicians (Euclid and Archimedes) and philosophers (Aristotle) to develop a new body of knowledge, now called physics. At Oxford, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, who were of similar stature to the intellectuals at the University of Paris. Thomas Bradwardine extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion. For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's "Physics" published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) – the word velocity was not used – as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are remarkably correct regarding the definitions of acceleration (acceleration was a rate of change of motion (velocity) in time) and the observation that acceleration would be negative during ascent. Discourses such as these spread throughout Europe, shaping the work of Galileo Galilei and others, and helped in laying the foundation of kinematics. Galileo deduced the equation "s" "gt"2 in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics. Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in "Discourses" that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis on projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution. The term "inertia" was used by Kepler who applied it to bodies at rest. 
(The first law of motion is now often called the law of inertia.) Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope. Galileo also was interested by the laws of the pendulum, his first observations of which were as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp lighted and left swinging, referencing his own pulse for time keeping. To him the period appeared the same, even after the motion had greatly diminished, discovering the isochronism of the pendulum. More careful experiments carried out by him later, and described in his Discourses, revealed the period of oscillation varies with the square root of length but is independent of the mass the pendulum. Thus we arrive at René Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones. Later the equations of motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations. However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields. Kinematic equations for one particle. Kinematic quantities. From the instantaneous position r r("t"), instantaneous meaning at an instant value of time "t", the instantaneous velocity v v("t") and acceleration a a("t") have the general, coordinate-independent definitions; formula_2 Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature. The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ θ("t"), angular velocity ω ω("t"), and angular acceleration α α("t"): formula_3 where n̂ is a unit vector in the direction of the axis of rotation, and "θ" is the angle the object turns through about the axis. The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω: formula_4 where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. 
For a rotating continuum rigid body, these relations hold for each point in the rigid body. Uniform acceleration. The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below. Constant translational acceleration in a straight line. These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line) – only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one. formula_5 where: &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Derivation Equations [1] and [2] are from integrating the definitions of velocity and acceleration, subject to the initial conditions r("t"0) r0 and v("t"0) v0; formula_6 in magnitudes, formula_7 Equation [3] involves the average velocity . Intuitively, the velocity increases linearly, so the average velocity multiplied by time is the distance traveled while increasing the velocity from v0 to v, as can be illustrated graphically by plotting velocity against time as a straight line graph. Algebraically, it follows from solving [1] for formula_8 and substituting into [2] formula_9 then simplifying to get formula_10 or in magnitudes formula_11 From [3], formula_12 substituting for t in [1]: formula_13 From [3], formula_14 substituting into [2]: formula_15 Usually only the first 4 are needed, the fifth is optional. Here "a" is "constant" acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity "g" is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two. In elementary physics the same formulae are frequently written in different notation as: formula_16 where "u" has replaced "v"0, "s" replaces "r" - "r"0. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: "s" = displacement, "u" = initial velocity, "v" = final velocity, "a" = acceleration, "t" = time. Constant linear acceleration in any direction. The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations of motion take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case, formula_17 although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows: formula_18 formula_19 formula_20 Applications. Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial velocity "u", one can calculate how high the ball will travel before it begins to fall. The acceleration is local acceleration of gravity "g". While these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as unidirectional vectors. Choosing "s" to measure up from the ground, the acceleration "a" must be in fact "−g", since the force of gravity acts downwards and therefore also the acceleration on the ball due to it. 
At the highest point, the ball will be at rest: therefore "v" 0. Using equation [4] in the set above, we have: formula_21 Substituting and cancelling minus signs gives: formula_22 Constant circular acceleration. The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary, formula_23 where "α" is the constant angular acceleration, "ω" is the angular velocity, "ω"0 is the initial angular velocity, "θ" is the angle turned through (angular displacement), "θ"0 is the initial angle, and "t" is the time taken to rotate from the initial state to the final state. General planar motion. These are the kinematic equations for a particle traversing a path in a plane, described by position r r("t"). They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity "ω" and angular acceleration "α". These are instantaneous quantities which change with time. The position of the particle is formula_24 where ê"r" and ê"θ" are the polar unit vectors. Differentiating with respect to time gives the velocity formula_25 with radial component and an additional component "rω" due to the rotation. Differentiating with respect to time again obtains the acceleration formula_26 which breaks into the radial acceleration , centripetal acceleration –"rω"2, Coriolis acceleration 2"ω", and angular acceleration "rα". Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration. General 3D motions. In 3D space, the equations in spherical coordinates ("r", "θ", "φ") with corresponding unit vectors ê"r", ê"θ" and ê"φ", the position, velocity, and acceleration generalize respectively to formula_27 In the case of a constant "φ" this reduces to the planar equations above. Dynamic equations of motion. Newtonian mechanics. The first general equation of motion developed was Newton's second law of motion. In its most general form it states the rate of change of momentum p = p("t") = "m"v("t") of an object equals the force F = F(x("t"), v("t"), "t") acting on it,1112 formula_28 The force in the equation is "not" the force the object exerts. Replacing momentum by mass times velocity, the law is also written more famously as formula_29 since "m" is a constant in Newtonian mechanics. Newton's second law applies to point-like particles, and to all points in a rigid body. They also apply to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. In the case the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system. It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. 
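As a concrete link between the dynamic and kinematic descriptions, the constant-force case of Newton's second law can be integrated numerically step by step and checked against the closed-form uniform-acceleration result for the thrown ball discussed earlier (maximum height u^2/(2g)). The step size and update scheme in this sketch are my own choices; for a non-constant force the same loop applies with a = F(r, v, t)/m.

```python
# Sketch: integrate the equation of motion a = -g for a ball thrown straight up
# with speed u, and compare the height at the apex with u**2 / (2 * g).
g = 9.81     # m/s^2
u = 12.0     # initial upward speed, m/s
dt = 1e-3    # time step, s

s, v = 0.0, u
while v > 0.0:                       # stop at the apex, where v = 0
    s += v * dt - 0.5 * g * dt**2    # exact position update for constant acceleration
    v -= g * dt

print(s, u**2 / (2 * g))             # both are approximately 7.34 m
```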
In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex. The momentum form is preferable since this is readily generalized to more complex systems, such as special and general relativity (see four-momentum).112 It can also be used with the momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces. For a number of particles (see many body problem), the equation of motion for one particle "i" influenced by other particles is formula_30 where p"i" is the momentum of particle "i", F"ij" is the force on particle "i" by particle "j", and F"E" is the resultant external force due to any agent not part of system. Particle "i" does not exert a force on itself. Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation. Newton's second law for rotation takes a similar form to the translational case, formula_31 by equating the torque acting on the body to the rate of change of its angular momentum L. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity, formula_32 Again, these equations apply to point like particles, or at each point of a rigid body. Likewise, for a number of particles, the equation of motion for one particle "i" is formula_33 where L"i" is the angular momentum of particle "i", τ"ij" the torque on particle "i" by particle "j", and τ"E" is resultant external torque (due to any agent not part of system). Particle "i" does not exert a torque on itself. Applications. Some examples of Newton's law include describing the motion of a simple pendulum, formula_34 and a damped, sinusoidally driven harmonic oscillator, formula_35 For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For two examples, a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R R(r, "t"), formula_36 where "G" is the gravitational constant, "M" the mass of the Earth, and A is the acceleration of the projectile due to the air currents at position r and time t. The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs, formula_37 where "i" 1, 2, ..., "N" labels the quantities (mass, position, etc.) associated with each particle. Analytical mechanics. Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has "N" degrees of freedom, then one can use a set of "N" generalized coordinates q("t") ["q"1("t"), "q"2("t") ... "qN"("t")], to define the configuration of the system. They can be in the form of arc lengths or angles. They are a considerable simplification to describe motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. 
The time derivatives of the generalized coordinates are the "generalized velocities" formula_38 The Euler–Lagrange equations are formula_39 where the "Lagrangian" is a function of the configuration q and its time rate of change (and possibly time "t") formula_40 Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled "N" second order ODEs in the coordinates are obtained. Hamilton's equations are formula_41 where the Hamiltonian formula_42 is a function of the configuration q and conjugate ""generalized" momenta" formula_43 in which (, , …, ) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly time "t", Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled 2"N" first order ODEs in the coordinates "qi" and momenta "pi" are obtained. The Hamilton–Jacobi equation is formula_44 where formula_45 is "Hamilton's principal function", also called the "classical action" is a functional of "L". In this case, the momenta are given by formula_46 Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order "non-linear" PDE, in "N" + 1 variables. The action "S" allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether. All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action formula_47 stating the path the system takes through the configuration space is the one with the least action "S". Electrodynamics. In electrodynamics, the force on a charged particle of charge "q" is the Lorentz force: formula_48 Combining with Newton's second law gives a first order differential equation of motion, in terms of position of the particle: formula_49 or its momentum: formula_50 The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass "m" and charge "q": formula_51 where A and "ϕ" are the electromagnetic scalar and vector potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by: formula_52 instead of just "m"v, implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation. Alternatively the Hamiltonian (and substituting into the equations): formula_53 can derive the Lorentz force equation. General relativity. Geodesic equation of motion. The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this is generalized and replaced by a "geodesic" of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor "g", the metric provides the notion of arc length (see line element for details). The differential arc length is given by:1199 formula_54 and the geodesic equation is a second-order differential equation in the coordinates. 
The general solution is a family of geodesics: formula_55 where "Γμαβ" is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system). Given the mass-energy distribution provided by the stress–energy tensor "Tαβ", the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field, because gravity is a fictitious force. The "relative acceleration" of one geodesic to another in curved spacetime is given by the "geodesic deviation equation": formula_56 where ξ"α" = "x"2"α" − "x"1"α" is the separation vector between two geodesics, D/"ds" ("not" just d/"ds") is the covariant derivative, and "Rαβγδ" is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field. For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has straight lines as solutions. This is also the limiting case when masses move according to Newton's law of gravity. Spinning objects. In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field. Analogues for waves and fields. Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified. Sometimes in the following contexts, the wave or field equations are also called "equations of motion". Field equations. Equations that describe the spatial dependence and time evolution of fields are called "field equations". These include Maxwell's equations for the electromagnetic field, Poisson's equation for the Newtonian gravitational or electrostatic potential, and the Einstein field equations for gravitation. This terminology is not universal: for example, although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead. Wave equations. Equations of wave motion are called "wave equations". The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine whether the solutions describe traveling waves or standing waves. From classical equations of motion and field equations, mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is: formula_57 where "X" = "X"(r, "t") is any mechanical or electromagnetic field amplitude (for example a displacement, a pressure fluctuation, or a component of an electric or magnetic field), and "v" is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing "v" by "v"("X"). There are other linear and nonlinear wave equations for very specific applications; see for example the Korteweg–de Vries equation.
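As an illustration of the linear wave equation just described, restricted to one spatial dimension, the following sketch advances a plucked-string profile with a standard explicit centred finite-difference scheme. The grid spacing, time step, wave speed and initial shape are illustrative assumptions, not values taken from the text.

```python
import math

# 1D linear wave equation: (1/v^2) d^2X/dt^2 = d^2X/dx^2,
# solved with the explicit centred finite-difference scheme.
v = 1.0                       # phase velocity (assumed value)
nx, dx = 101, 0.01            # spatial grid covering x in [0, 1]
dt = 0.005                    # time step; v*dt/dx = 0.5 satisfies the CFL condition
c2 = (v * dt / dx) ** 2

# Initial condition: a "plucked" triangular profile, fixed ends, zero initial velocity.
X = [max(0.0, 1.0 - abs(i * dx - 0.5) / 0.2) for i in range(nx)]
X_old = X[:]                  # zero initial velocity => previous step equals current

for step in range(400):
    X_new = X[:]
    for i in range(1, nx - 1):
        X_new[i] = 2 * X[i] - X_old[i] + c2 * (X[i + 1] - 2 * X[i] + X[i - 1])
    X_old, X = X, X_new       # fixed (zero) boundary values are never updated

print(f"amplitude at the midpoint after 400 steps: {X[nx // 2]:.3f}")
```

The ratio "v"·d"t"/d"x" is kept at 0.5 so that the scheme satisfies the usual Courant–Friedrichs–Lewy stability condition.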
Quantum theory. In quantum theory, the wave and field concepts both appear. In quantum mechanics the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form: formula_58 where "Ψ" is the wavefunction of the system, "Ĥ" is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and "ħ" is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation whose solution is the wavefunction as a function of space and time. In accordance with the correspondence principle, the Schrödinger equation itself reduces to the Hamilton–Jacobi equation in the limit that "ħ" becomes zero. To compare to measurements, operators for observables must be applied to the quantum wavefunction according to the experiment performed, leading to either wave-like or particle-like results. Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance the Heisenberg equation of motion, which evolves the operators rather than the wavefunction, and Feynman's path integral formulation. See also. References.
[ { "math_id": 0, "text": "M\\left[\\mathbf{r}(t),\\mathbf{\\dot{r}}(t),\\mathbf{\\ddot{r}}(t),t\\right]=0\\,," }, { "math_id": 1, "text": " \\mathbf{r}(0) \\,, \\quad \\mathbf{\\dot{r}}(0) \\,. " }, { "math_id": 2, "text": " \\mathbf{v} = \\frac{d \\mathbf{r}}{d t} \\,, \\quad \\mathbf{a} = \\frac{d \\mathbf{v}}{d t} = \\frac{d^2 \\mathbf{r}}{d t^2} " }, { "math_id": 3, "text": "\\boldsymbol{\\theta} = \\theta \\hat{\\mathbf{n}} \\,,\\quad \\boldsymbol{\\omega} = \\frac{d \\boldsymbol{\\theta}}{d t} \\,, \\quad \\boldsymbol{\\alpha}= \\frac{d \\boldsymbol{\\omega}}{d t} \\,," }, { "math_id": 4, "text": " \\mathbf{v} = \\boldsymbol{\\omega}\\times \\mathbf{r} " }, { "math_id": 5, "text": "\\begin{align}\nv & = at+v_0 & [1]\\\\\nr & = r_0 + v_0 t + \\tfrac12 {a}t^2 & [2]\\\\\nr & = r_0 + \\tfrac12 \\left( v+v_0 \\right )t & [3]\\\\\nv^2 & = v_0^2 + 2a\\left( r - r_0 \\right) & [4]\\\\\nr & = r_0 + vt - \\tfrac12 {a}t^2 & [5]\\\\\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align}\n\\mathbf{v} & = \\int \\mathbf{a} dt = \\mathbf{a}t+\\mathbf{v}_0 \\,, & [1] \\\\ \n\\mathbf{r} & = \\int (\\mathbf{a}t+\\mathbf{v}_0) dt = \\frac{\\mathbf{a}t^2}{2}+\\mathbf{v}_0t +\\mathbf{r}_0 \\,, & [2] \\\\\n\\end{align}" }, { "math_id": 7, "text": "\\begin{align}\nv & = at+v_0 \\,, & [1] \\\\ \nr & = \\frac{{a}t^2}{2}+v_0t +r_0 \\,. & [2] \\\\\n\\end{align}" }, { "math_id": 8, "text": " \\mathbf{a} = \\frac{(\\mathbf{v} - \\mathbf{v}_0)}{t} " }, { "math_id": 9, "text": " \\mathbf{r} = \\mathbf{r}_0 + \\mathbf{v}_0 t + \\frac{t}{2}(\\mathbf{v} - \\mathbf{v}_0) \\,, " }, { "math_id": 10, "text": " \\mathbf{r} = \\mathbf{r}_0 + \\frac{t}{2}(\\mathbf{v} + \\mathbf{v}_0) " }, { "math_id": 11, "text": " r = r_0 + \\left( \\frac{v+v_0}{2} \\right )t \\quad [3] " }, { "math_id": 12, "text": "t = \\left( r - r_0 \\right)\\left( \\frac{2}{v+v_0} \\right )" }, { "math_id": 13, "text": "\\begin{align}\nv & = a\\left( r - r_0 \\right)\\left( \\frac{2}{v+v_0} \\right )+v_0 \\\\\nv\\left( v+v_0 \\right ) & = 2a\\left( r - r_0 \\right)+v_0\\left( v+v_0 \\right ) \\\\\nv^2+vv_0 & = 2a\\left( r - r_0 \\right)+v_0v+v_0^2 \\\\\nv^2 & = v_0^2 + 2a\\left( r - r_0 \\right) & [4] \\\\\n\\end{align}" }, { "math_id": 14, "text": " 2\\left(r - r_0\\right) - vt = v_0 t " }, { "math_id": 15, "text": " \\begin{align}\nr & = \\frac{{a}t^2}{2} + 2r - 2r_0 - vt + r_0 \\\\\n0 & = \\frac{{a}t^2}{2}+r - r_0 - vt \\\\\nr & = r_0 + vt - \\frac{{a}t^2}{2} & [5] \n\\end{align}" }, { "math_id": 16, "text": "\\begin{align}\nv & = u + at & [1] \\\\\ns & = ut + \\tfrac12 at^2 & [2] \\\\\ns & = \\tfrac{1}{2}(u + v)t & [3] \\\\\nv^2 & = u^2 + 2as & [4] \\\\\ns & = vt - \\tfrac12 at^2 & [5] \\\\\n\\end{align}" }, { "math_id": 17, "text": "\\begin{align}\n\\mathbf{v} & = \\mathbf{a}t+\\mathbf{v}_0 & [1]\\\\\n\\mathbf{r} & = \\mathbf{r}_0 + \\mathbf{v}_0 t + \\tfrac12\\mathbf{a}t^2 & [2]\\\\\n\\mathbf{r} & = \\mathbf{r}_0 + \\tfrac12 \\left(\\mathbf{v}+\\mathbf{v}_0\\right) t & [3]\\\\\n\\mathbf{v}^2 & = \\mathbf{v}_0^2 + 2\\mathbf{a}\\cdot\\left( \\mathbf{r} - \\mathbf{r}_0 \\right) & [4]\\\\\n\\mathbf{r} & = \\mathbf{r}_0 + \\mathbf{v}t - \\tfrac12\\mathbf{a}t^2 & [5]\\\\\n\\end{align}" }, { "math_id": 18, "text": "v^{2} = \\mathbf{v}\\cdot\\mathbf{v} = (\\mathbf{v}_0+\\mathbf{a}t)\\cdot(\\mathbf{v}_0+\\mathbf{a}t) = v_0^{2}+2t(\\mathbf{a}\\cdot\\mathbf{v}_0)+a^{2}t^{2}" }, { "math_id": 19, "text": "(2\\mathbf{a})\\cdot(\\mathbf{r}-\\mathbf{r}_0) = 
(2\\mathbf{a})\\cdot\\left(\\mathbf{v}_0t+\\tfrac{1}{2}\\mathbf{a}t^{2}\\right)=2t(\\mathbf{a}\\cdot\\mathbf{v}_0)+a^{2}t^{2} = v^{2} - v_0^{2}" }, { "math_id": 20, "text": "\\therefore v^{2} = v_0^{2} + 2(\\mathbf{a}\\cdot(\\mathbf{r}-\\mathbf{r}_0))" }, { "math_id": 21, "text": "s= \\frac{v^2 - u^2}{-2g}." }, { "math_id": 22, "text": "s = \\frac{u^2}{2g}." }, { "math_id": 23, "text": "\\begin{align}\n\\omega & = \\omega_0 + \\alpha t \\\\\n\\theta &= \\theta_0 + \\omega_0t + \\tfrac12\\alpha t^2 \\\\\n\\theta & = \\theta_0 + \\tfrac12(\\omega_0 + \\omega)t \\\\\n\\omega^2 & = \\omega_0^2 + 2\\alpha(\\theta - \\theta_0) \\\\\n\\theta & = \\theta_0 + \\omega t - \\tfrac12\\alpha t^2 \\\\\n\\end{align}" }, { "math_id": 24, "text": " \\mathbf{r} =\\mathbf{r}\\left ( r(t),\\theta(t) \\right ) = r \\mathbf{\\hat{e}}_r " }, { "math_id": 25, "text": "\\mathbf{v} = \\mathbf{\\hat{e}}_r \\frac{d r}{dt} + r \\omega \\mathbf{\\hat{e}}_\\theta " }, { "math_id": 26, "text": "\\mathbf{a} =\\left ( \\frac{d^2 r}{dt^2} - r\\omega^2\\right )\\mathbf{\\hat{e}}_r + \\left ( r \\alpha + 2 \\omega \\frac{dr}{dt} \\right )\\mathbf{\\hat{e}}_\\theta " }, { "math_id": 27, "text": " \\begin{align}\n\\mathbf{r} & =\\mathbf{r}\\left ( t \\right ) = r \\mathbf{\\hat{e}}_r\\\\\n\\mathbf{v} & = v \\mathbf{\\hat{e}}_r + r\\,\\frac{d\\theta}{dt}\\mathbf{\\hat{e}}_\\theta + r\\,\\frac{d\\varphi}{dt}\\,\\sin\\theta \\mathbf{\\hat{e}}_\\varphi \\\\\n\\mathbf{a} & = \\left( a - r\\left(\\frac{d\\theta}{dt}\\right)^2 - r\\left(\\frac{d\\varphi}{dt}\\right)^2\\sin^2\\theta \\right)\\mathbf{\\hat{e}}_r \\\\\n & + \\left( r \\frac{d^2 \\theta}{dt^2 } + 2v\\frac{d\\theta}{dt} - r\\left(\\frac{d\\varphi}{dt}\\right)^2\\sin\\theta\\cos\\theta \\right) \\mathbf{\\hat{e}}_\\theta \\\\\n & + \\left( r\\frac{d^2 \\varphi}{dt^2 }\\,\\sin\\theta + 2v\\,\\frac{d\\varphi}{dt}\\,\\sin\\theta + 2 r\\,\\frac{d\\theta}{dt}\\,\\frac{d\\varphi}{dt}\\,\\cos\\theta \\right) \\mathbf{\\hat{e}}_\\varphi\n\\end{align} \\,\\!" }, { "math_id": 28, "text": " \\mathbf{F} = \\frac{d\\mathbf{p}}{dt} " }, { "math_id": 29, "text": " \\mathbf{F} = m\\mathbf{a} " }, { "math_id": 30, "text": " \\frac{d\\mathbf{p}_i}{dt} = \\mathbf{F}_{E} + \\sum_{i \\neq j} \\mathbf{F}_{ij} " }, { "math_id": 31, "text": "\\boldsymbol{\\tau} = \\frac{d\\mathbf{L}}{dt} \\,, " }, { "math_id": 32, "text": " \\boldsymbol{\\tau} = \\mathbf{I} \\boldsymbol{\\alpha}." }, { "math_id": 33, "text": " \\frac{d\\mathbf{L}_i}{dt} = \\boldsymbol{\\tau}_E + \\sum_{i \\neq j} \\boldsymbol{\\tau}_{ij} \\,," }, { "math_id": 34, "text": " - mg\\sin\\theta = m\\frac{d^2 (\\ell\\theta)}{dt^2} \\quad \\Rightarrow \\quad \\frac{d^2 \\theta}{dt^2} = - \\frac{g}{\\ell}\\sin\\theta \\,," }, { "math_id": 35, "text": " F_0 \\sin(\\omega t) = m\\left(\\frac{d^2x}{dt^2} + 2\\zeta\\omega_0\\frac{dx}{dt} + \\omega_0^2 x \\right)\\,." }, { "math_id": 36, "text": " - \\frac{GmM}{|\\mathbf{r}|^2} \\mathbf{\\hat{e}}_r + \\mathbf{R} = m\\frac{d^2 \\mathbf{r}}{d t^2} + 0 \\quad \\Rightarrow \\quad \\frac{d^2 \\mathbf{r}}{d t^2} = - \\frac{GM}{|\\mathbf{r}|^2} \\mathbf{\\hat{e}}_r + \\mathbf{A} " }, { "math_id": 37, "text": "\\frac{d^2\\mathbf{r}_i}{dt^2} = G\\sum_{i\\neq j}\\frac{m_j}{|\\mathbf{r}_j - \\mathbf{r}_i|^3} (\\mathbf{r}_j - \\mathbf{r}_i)" }, { "math_id": 38, "text": "\\mathbf{\\dot{q}} = \\frac{d\\mathbf{q}}{dt} \\,." 
}, { "math_id": 39, "text": " \\frac{d}{d t} \\left ( \\frac{\\partial L}{\\partial \\mathbf{\\dot{q}} } \\right ) = \\frac{\\partial L}{\\partial \\mathbf{q}} \\,, " }, { "math_id": 40, "text": "L = L\\left [ \\mathbf{q}(t), \\mathbf{\\dot{q}}(t), t \\right ] \\,. " }, { "math_id": 41, "text": "\\mathbf{\\dot{p}} = -\\frac{\\partial H}{\\partial \\mathbf{q}} \\,, \\quad \\mathbf{\\dot{q}} = + \\frac{\\partial H}{\\partial \\mathbf{p}} \\,," }, { "math_id": 42, "text": "H = H\\left [ \\mathbf{q}(t), \\mathbf{p}(t), t \\right ] \\,," }, { "math_id": 43, "text": "\\mathbf{p} = \\frac{\\partial L}{\\partial \\mathbf{\\dot{q}}} \\,," }, { "math_id": 44, "text": " - \\frac{\\partial S(\\mathbf{q},t)}{\\partial t} = H\\left(\\mathbf{q}, \\mathbf{p}, t \\right) \\,. " }, { "math_id": 45, "text": "S[\\mathbf{q},t] = \\int_{t_1}^{t_2}L(\\mathbf{q}, \\mathbf{\\dot{q}}, t)\\,dt \\,," }, { "math_id": 46, "text": "\\mathbf{p} = \\frac{\\partial S }{\\partial \\mathbf{q}}\\,." }, { "math_id": 47, "text": "\\delta S = 0 \\,, " }, { "math_id": 48, "text": "\\mathbf{F} = q\\left(\\mathbf{E} + \\mathbf{v} \\times \\mathbf{B}\\right) " }, { "math_id": 49, "text": "m\\frac{d^2 \\mathbf{r}}{dt^2} = q\\left(\\mathbf{E} + \\frac{d \\mathbf{r}}{dt} \\times \\mathbf{B}\\right) " }, { "math_id": 50, "text": "\\frac{d\\mathbf{p}}{dt} = q\\left(\\mathbf{E} + \\frac{\\mathbf{p} \\times \\mathbf{B}}{m}\\right) " }, { "math_id": 51, "text": "L = \\tfrac 1 2 m \\mathbf{\\dot{r}}\\cdot\\mathbf{\\dot{r}}+q\\mathbf{A}\\cdot\\dot{\\mathbf{r}} - q\\phi" }, { "math_id": 52, "text": " \\mathbf{P} = \\frac{\\partial L}{\\partial \\dot{\\mathbf{r}}} = m \\dot{\\mathbf{r}} + q \\mathbf{A}" }, { "math_id": 53, "text": " H = \\frac{\\left(\\mathbf{P} - q \\mathbf{A}\\right)^2}{2m} + q\\phi " }, { "math_id": 54, "text": "ds = \\sqrt{g_{\\alpha\\beta} d x^\\alpha dx^\\beta}" }, { "math_id": 55, "text": "\\frac{d^2 x^\\mu}{ds^2} = - \\Gamma^\\mu{}_{\\alpha\\beta}\\frac{d x^\\alpha}{ds}\\frac{d x^\\beta}{ds}" }, { "math_id": 56, "text": "\\frac{D^2\\xi^\\alpha}{ds^2} = -R^\\alpha{}_{\\beta\\gamma\\delta}\\frac{dx^\\alpha}{ds}\\xi^\\gamma\\frac{dx^\\delta}{ds} " }, { "math_id": 57, "text": "\\frac{1}{v^2}\\frac{\\partial^2 X}{\\partial t^2} = \\nabla^2 X " }, { "math_id": 58, "text": "i\\hbar\\frac{\\partial\\Psi}{\\partial t} = \\hat{H}\\Psi \\,," } ]
https://en.wikipedia.org/wiki?curid=65913
65914
Kinematics
Branch of physics describing the motion of objects without considering forces Kinematics is a subfield of physics and mathematics, developed in classical mechanics, that describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of both applied and pure mathematics since it can be studied without considering the mass of a body or the forces acting upon it. A kinematics problem begins by describing the geometry of the system and declaring the initial conditions of any known values of position, velocity and/or acceleration of points within the system. Then, using arguments from geometry, the position, velocity and acceleration of any unknown parts of the system can be determined. The study of how forces act on bodies falls within kinetics, not kinematics. For further details, see analytical dynamics. Kinematics is used in astrophysics to describe the motion of celestial bodies and collections of such bodies. In mechanical engineering, robotics, and biomechanics, kinematics is used to describe the motion of systems composed of joined parts (multi-link systems) such as an engine, a robotic arm or the human skeleton. Geometric transformations, also called rigid transformations, are used to describe the movement of components in a mechanical system, simplifying the derivation of the equations of motion. They are also central to dynamic analysis. Kinematic analysis is the process of measuring the kinematic quantities used to describe motion. In engineering, for instance, kinematic analysis may be used to find the range of movement for a given mechanism and, working in reverse, using kinematic synthesis to design a mechanism for a desired range of motion. In addition, kinematics applies algebraic geometry to the study of the mechanical advantage of a mechanical system or mechanism. Etymology. The term kinematic is the English version of A.M. Ampère's "cinématique", which he constructed from the Greek "kinema" ("movement, motion"), itself derived from "kinein" ("to move"). Kinematic and cinématique are related to the French word cinéma, but neither is directly derived from it. However, they do share a root word in common, as cinéma came from the shortened form of cinématographe, "motion picture projector and camera", once again from the Greek word for movement and from the Greek "grapho" ("to write"). Kinematics of a particle trajectory in a non-rotating frame of reference. Particle kinematics is the study of the trajectory of particles. The position of a particle is defined as the coordinate vector from the origin of a coordinate frame to the particle. For example, consider a tower 50 m south of your home, where the coordinate frame is centered at your home, such that east is in the direction of the "x"-axis and north is in the direction of the "y"-axis; then the coordinate vector to the base of the tower is r = (0 m, −50 m, 0 m). If the tower is 50 m high, and this height is measured along the "z"-axis, then the coordinate vector to the top of the tower is r = (0 m, −50 m, 50 m).
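A short computational restatement of the tower example above, using only the coordinate conventions just described (the code itself is an illustrative sketch, not part of the original text):

```python
import math

# Coordinate frame centred at the house: x points east, y points north, z points up.
r_base = (0.0, -50.0, 0.0)   # base of the tower, 50 m to the south
r_top  = (0.0, -50.0, 50.0)  # top of the 50 m tower

def magnitude(r):
    """Distance from the origin to the point with coordinate vector r."""
    return math.sqrt(sum(c * c for c in r))

print(f"distance to the base: {magnitude(r_base):.1f} m")   # 50.0 m
print(f"distance to the top:  {magnitude(r_top):.1f} m")    # about 70.7 m
```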
In the most general case, a three-dimensional coordinate system is used to define the position of a particle. However, if the particle is constrained to move within a plane, a two-dimensional coordinate system is sufficient. All observations in physics are incomplete without being described with respect to a reference frame. The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. In three dimensions, the position vector formula_0 can be expressed as formula_1 where formula_2, formula_3, and formula_4 are the Cartesian coordinates and formula_5, formula_6 and formula_7 are the unit vectors along the formula_2, formula_3, and formula_4 coordinate axes, respectively. The magnitude of the position vector formula_8 gives the distance between the point formula_9 and the origin. formula_10 The direction cosines of the position vector provide a quantitative measure of direction. In general, an object's position vector will depend on the frame of reference; different frames will lead to different values for the position vector. The "trajectory" of a particle is a vector function of time, formula_11, which defines the curve traced by the moving particle, given by formula_12 where formula_13, formula_14, and formula_15 describe each coordinate of the particle's position as a function of time. Velocity and speed. The velocity of a particle is a vector quantity that describes the "direction" as well as the magnitude of motion of the particle. More mathematically, the rate of change of the position vector of a point with respect to time is the velocity of the point. Consider the ratio formed by dividing the difference of two positions of a particle (displacement) by the time interval. This ratio is called the average velocity over that time interval and is defined as formula_16 where formula_17 is the displacement vector during the time interval formula_18. In the limit that the time interval formula_18 approaches zero, the average velocity approaches the instantaneous velocity, defined as the time derivative of the position vector, formula_19 Thus, a particle's velocity is the time rate of change of its position. Furthermore, this velocity is tangent to the particle's trajectory at every position along its path. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered, as their directions and magnitudes are constants. The speed of an object is the magnitude of its velocity. It is a scalar quantity: formula_20 where formula_21 is the arc-length measured along the trajectory of the particle. This arc-length must always increase as the particle moves. Hence, formula_22 is non-negative, which implies that speed is also non-negative.
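The limiting process just described can be imitated numerically: for a known trajectory, the average velocity over smaller and smaller time intervals approaches the instantaneous velocity. The helical trajectory used below is an arbitrary illustrative choice, not one taken from the text.

```python
import math

# An illustrative trajectory r(t) = (cos t, sin t, t): a helix of unit radius.
def r(t):
    return (math.cos(t), math.sin(t), t)

def average_velocity(t, dt):
    """Displacement divided by the time interval, component by component."""
    r1, r2 = r(t), r(t + dt)
    return tuple((b - a) / dt for a, b in zip(r1, r2))

t = 1.0
for dt in (0.1, 0.01, 0.001):
    vx, vy, vz = average_velocity(t, dt)
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    print(f"dt = {dt:6}: v = ({vx:.4f}, {vy:.4f}, {vz:.4f}), speed = {speed:.4f}")

# The exact velocity is (-sin t, cos t, 1), with constant speed sqrt(2) ~ 1.4142.
```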
Acceleration. The velocity vector can change in magnitude and in direction or both at once. Hence, the acceleration accounts for both the rate of change of the magnitude of the velocity vector and the rate of change of direction of that vector. The same reasoning used with respect to the position of a particle to define velocity can be applied to the velocity to define acceleration. The acceleration of a particle is the vector defined by the rate of change of the velocity vector. The average acceleration of a particle over a time interval is defined as the ratio formula_23 where Δv is the average velocity and Δ"t" is the time interval. The acceleration of the particle is the limit of the average acceleration as the time interval approaches zero, which is the time derivative, formula_24 Alternatively, formula_25 Thus, acceleration is the first derivative of the velocity vector and the second derivative of the position vector of that particle. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered, as their directions and magnitudes are constants. The magnitude of the acceleration of an object is the magnitude |a| of its acceleration vector. It is a scalar quantity: formula_26 Relative position vector. A relative position vector is a vector that defines the position of one point relative to another. It is the difference in position of the two points. The position of one point "A" relative to another point "B" is simply the difference between their positions formula_27 which is the difference between the components of their position vectors. If point "A" has position components formula_28 and point "B" has position components formula_29 then the position of point "A" relative to point "B" is the difference between their components: formula_30 Relative velocity. The velocity of one point relative to another is simply the difference between their velocities formula_31 which is the difference between the components of their velocities. If point "A" has velocity components formula_32 and point "B" has velocity components formula_33 then the velocity of point "A" relative to point "B" is the difference between their components: formula_34 Alternatively, this same result could be obtained by computing the time derivative of the relative position vector rA/B. Relative acceleration. The acceleration of one point "C" relative to another point "B" is simply the difference between their accelerations formula_35 which is the difference between the components of their accelerations. If point "C" has acceleration components formula_36 and point "B" has acceleration components formula_37 then the acceleration of point "C" relative to point "B" is the difference between their components: formula_38 Alternatively, this same result could be obtained by computing the second time derivative of the relative position vector rC/B. Particle trajectories under constant acceleration. For the special case of constant acceleration, the equation of motion can be integrated directly. Assuming that the initial conditions of the position, formula_39, and velocity formula_40 at time formula_41 are known, the first integration yields the velocity of the particle as a function of time. formula_42 A second integration yields its path (trajectory), formula_43 Additional relations between displacement, velocity, acceleration, and time can be derived. Since the acceleration is constant, formula_44 can be substituted into the above equation to give: formula_45 A relationship between velocity, position and acceleration without explicit time dependence can be had by solving the average acceleration for time and substituting and simplifying formula_46 formula_47 where formula_48 denotes the dot product, which is appropriate as the products are scalars rather than vectors.
formula_49 The dot product can be replaced by the cosine of the angle α between the vectors (see Geometric interpretation of the dot product for more details) and the vectors by their magnitudes, in which case: formula_50 In the case where the acceleration is always in the direction of the motion, the angle (α) between the vectors is 0, so formula_51, and formula_52 This can be simplified using the notation for the magnitudes of the vectors formula_53 where formula_54 can be measured along any curvilinear path, as the constant tangential acceleration is applied along that path, so formula_55 This reduces the parametric equations of motion of the particle to a Cartesian relationship of speed versus position. This relation is useful when time is unknown. We also know that formula_56 or formula_54 is the area under a velocity–time graph. We can find formula_54 by adding the top area and the bottom area. The bottom area is a rectangle, and the area of a rectangle is formula_57 where formula_58 is the width and formula_59 is the height. In this case formula_60 and formula_61 (the formula_58 here is different from the acceleration formula_62). This means that the bottom area is formula_63. The top area is a triangle, and the area of a triangle is formula_64 where formula_59 is the base and formula_65 is the height. In this case, formula_66 and formula_67 or formula_68. Adding formula_69 and formula_70 gives formula_54, resulting in the equation formula_71. This equation is applicable when the final velocity v is unknown.
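A short numerical check of these constant-acceleration relations, with arbitrary illustrative values that are not taken from the text:

```python
# Constant-acceleration (one-dimensional) check of the relations derived above.
v0 = 3.0   # initial velocity, m/s (assumed value)
a  = 2.0   # constant acceleration, m/s^2 (assumed value)
t  = 4.0   # elapsed time, s (assumed value)

v    = v0 + a * t                    # final velocity
dr_1 = v0 * t + 0.5 * a * t * t      # displacement from Delta r = v0 t + a t^2 / 2
dr_2 = (v * v - v0 * v0) / (2 * a)   # displacement from v^2 = v0^2 + 2 a Delta r
dr_3 = 0.5 * (v0 + v) * t            # displacement from the average-velocity form

print(v, dr_1, dr_2, dr_3)   # 11.0 and three identical displacements of 28.0
```

All three expressions for the displacement agree, as the algebra above requires.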
Particle trajectories in cylindrical-polar coordinates. It is often convenient to formulate the trajectory of a particle r("t") = ("x"("t"), "y"("t"), "z"("t")) using polar coordinates in the "X"–"Y" plane. In this case, its velocity and acceleration take a convenient form. Recall that the trajectory of a particle "P" is defined by its coordinate vector r measured in a fixed reference frame "F". As the particle moves, its coordinate vector r("t") traces its trajectory, which is a curve in space, given by: formula_12 where x̂, ŷ, and ẑ are the unit vectors along the "x", "y" and "z" axes of the reference frame "F", respectively. Consider a particle "P" that moves only on the surface of a circular cylinder "r"("t") = constant; it is then possible to align the "z" axis of the fixed frame "F" with the axis of the cylinder. Then, the angle "θ" around this axis in the "x"–"y" plane can be used to define the trajectory as, formula_72 where the constant distance from the center is denoted as "r", and "θ"("t") is a function of time. The cylindrical coordinates for r("t") can be simplified by introducing the radial and tangential unit vectors, formula_73 and their time derivatives from elementary calculus: formula_74 formula_75 formula_76 formula_77 Using this notation, r("t") takes the form, formula_78 In general, the trajectory r("t") is not constrained to lie on a circular cylinder, so the radius "R" varies with time and the trajectory of the particle in cylindrical-polar coordinates becomes: formula_79 where "r", "θ", and "z" might be continuously differentiable functions of time and the function notation is dropped for simplicity. The velocity vector v"P" is the time derivative of the trajectory r("t"), which yields: formula_80 Similarly, the acceleration a"P", which is the time derivative of the velocity v"P", is given by: formula_81 The term formula_82, which acts toward the center of curvature of the path at that point on the path, is commonly called the centripetal acceleration. The term formula_83 is called the Coriolis acceleration. Constant radius. If the trajectory of the particle is constrained to lie on a cylinder, then the radius "r" is constant and the velocity and acceleration vectors simplify. The velocity v"P" is the time derivative of the trajectory r("t"), formula_84 Planar circular trajectories. A special case of a particle trajectory on a circular cylinder occurs when there is no movement along the "z" axis: formula_85 where "r" and "z"0 are constants. In this case, the velocity v"P" is given by: formula_86 where formula_87 is the angular velocity of the unit vector θ^ around the "z" axis of the cylinder. The acceleration a"P" of the particle "P" is now given by: formula_88 The components formula_89 are called, respectively, the "radial" and "tangential components" of acceleration. The notation for angular velocity and angular acceleration is often defined as formula_90 so the radial and tangential acceleration components for circular trajectories are also written as formula_91 Point trajectories in a body moving in the plane. The movement of components of a mechanical system is analyzed by attaching a reference frame to each part and determining how the various reference frames move relative to each other. If the structural stiffness of the parts is sufficient, then their deformation can be neglected and rigid transformations can be used to define this relative movement. This reduces the description of the motion of the various parts of a complicated mechanical system to a problem of describing the geometry of each part and the geometric association of each part relative to other parts. Geometry is the study of the properties of figures that remain the same while the space is transformed in various ways; more technically, it is the study of invariants under a set of transformations. These transformations can cause the displacement of a triangle in the plane, while leaving the vertex angles and the distances between vertices unchanged. Kinematics is often described as applied geometry, where the movement of a mechanical system is described using the rigid transformations of Euclidean geometry. The coordinates of points in a plane are two-dimensional vectors in R2 (two-dimensional space). Rigid transformations are those that preserve the distance between any two points. The set of rigid transformations in an "n"-dimensional space is called the special Euclidean group on R"n", and denoted SE("n"). Displacements and motion. The position of one component of a mechanical system relative to another is defined by introducing a reference frame, say "M", on one that moves relative to a fixed frame, "F," on the other. The rigid transformation, or displacement, of "M" relative to "F" defines the relative position of the two components. A displacement consists of the combination of a rotation and a translation. The set of all displacements of "M" relative to "F" is called the configuration space of "M." A smooth curve from one position to another in this configuration space is a continuous set of displacements, called the motion of "M" relative to "F."
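Such a displacement, a rotation followed by a translation, can be sketched numerically; the homogeneous-matrix form of the same operation is developed in the next subsection. The rotation angle, translation vector and triangle vertices below are arbitrary illustrative choices.

```python
import math

def displace(point, phi, d):
    """Apply a planar rigid displacement: rotate by phi about the origin, then translate by d."""
    x, y = point
    dx, dy = d
    xr = math.cos(phi) * x - math.sin(phi) * y
    yr = math.sin(phi) * x + math.cos(phi) * y
    return (xr + dx, yr + dy)

# Displace the vertices of a triangle; the side lengths are unchanged.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
moved = [displace(p, math.radians(90.0), (2.0, 3.0)) for p in triangle]

def side(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

print([f"({x:.2f}, {y:.2f})" for x, y in moved])
print(side(triangle[0], triangle[1]), side(moved[0], moved[1]))   # both 1.0
```

The printed side lengths are equal before and after the displacement, illustrating that rigid transformations preserve distances between points.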
The motion of a body consists of a continuous set of rotations and translations. Matrix representation. The combination of a rotation and translation in the plane R2 can be represented by a certain type of 3×3 matrix known as a homogeneous transform. The 3×3 homogeneous transform is constructed from a 2×2 rotation matrix "A"("φ") and the 2×1 translation vector d = ("dx", "dy"), as: formula_92 These homogeneous transforms perform rigid transformations on the points in the plane "z" = 1, that is, on points with coordinates r = ("x", "y", 1). In particular, let r define the coordinates of points in a reference frame "M" coincident with a fixed frame "F". Then, when the origin of "M" is displaced by the translation vector d relative to the origin of "F" and rotated by the angle φ relative to the x-axis of "F", the new coordinates in "F" of points in "M" are given by: formula_93 Homogeneous transforms represent affine transformations. This formulation is necessary because a translation is not a linear transformation of R2. However, using projective geometry, so that R2 is considered a subset of R3, translations become affine linear transformations. Pure translation. If a rigid body moves so that its reference frame "M" does not rotate ("θ" = 0) relative to the fixed frame "F", the motion is called pure translation. In this case, the trajectory of every point in the body is an offset of the trajectory d("t") of the origin of "M," that is: formula_94 Thus, for bodies in pure translation, the velocity and acceleration of every point "P" in the body are given by: formula_95 where the dot denotes the derivative with respect to time and v"O" and a"O" are the velocity and acceleration, respectively, of the origin of the moving frame "M". Recall the coordinate vector p in "M" is constant, so its derivative is zero. Rotation of a body around a fixed axis. Rotational or angular kinematics is the description of the rotation of an object. In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The "z"-axis has been chosen for convenience. Position. This allows the description of a rotation as the angular position of a planar reference frame "M" relative to a fixed "F" about this shared "z"-axis. Coordinates p = ("x", "y") in "M" are related to coordinates P = (X, Y) in "F" by the matrix equation: formula_96 where formula_97 is the rotation matrix that defines the angular position of "M" relative to "F" as a function of time. Velocity. If the point p does not move in "M", its velocity in "F" is given by formula_98 It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P("t"), formula_99 where the matrix formula_100 is known as the angular velocity matrix of "M" relative to "F". The parameter "ω" is the time derivative of the angle "θ", that is: formula_101 Acceleration. 
The acceleration of P("t") in "F" is obtained as the time derivative of the velocity, formula_102 which becomes formula_103 where formula_104 is the angular acceleration matrix of "M" on "F", and formula_105 The description of rotation then involves these three quantities: the angular position "θ", the angular velocity formula_106, and the angular acceleration formula_107. The equations of translational kinematics can easily be extended to planar rotational kinematics for constant angular acceleration with simple variable exchanges: formula_108 formula_109 formula_110 formula_111 Here "θ"i and "θ"f are, respectively, the initial and final angular positions, "ω"i and "ω"f are, respectively, the initial and final angular velocities, and "α" is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector. Point trajectories in a body moving in three dimensions. Important formulas in kinematics define the velocity and acceleration of points in a moving body as they trace trajectories in three-dimensional space. This is particularly important for the center of mass of a body, which is used to derive equations of motion using either Newton's second law or Lagrange's equations. Position. In order to define these formulas, the movement of a component "B" of a mechanical system is defined by the set of rotations [A("t")] and translations d("t") assembled into the homogeneous transformation [T("t")]=[A("t"), d("t")]. If p gives the coordinates of a point "P" in "B" measured in the moving reference frame "M", then the trajectory of this point traced in "F" is given by: formula_112 This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which should be clear in context. This equation for the trajectory of "P" can be inverted to compute the coordinate vector p in "M" as: formula_113 This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is: formula_114 Velocity. The velocity of the point "P" along its trajectory P("t") is obtained as the time derivative of this position vector, formula_115 The dot denotes the derivative with respect to time; because p is constant, its derivative is zero. This formula can be modified to obtain the velocity of "P" by operating on its trajectory P("t") measured in the fixed frame "F". Substituting the inverse transform for p into the velocity equation yields: formula_116 The matrix ["S"] is given by: formula_117 where formula_118 is the angular velocity matrix. Multiplying by the operator ["S"], the formula for the velocity vP takes the form: formula_119 where the vector "ω" is the angular velocity vector obtained from the components of the matrix [Ω]; the vector formula_120 is the position of "P" relative to the origin "O" of the moving frame "M"; and formula_121 is the velocity of the origin "O". Acceleration. The acceleration of a point "P" in a moving body "B" is obtained as the time derivative of its velocity vector: formula_122 This equation can be expanded first by computing formula_123 and formula_124 The formula for the acceleration A"P" can now be obtained as: formula_125 or formula_126 where "α" is the angular acceleration vector obtained from the derivative of the angular velocity matrix; formula_127 is the relative position vector (the position of "P" relative to the origin "O" of the moving frame "M"); and formula_128 is the acceleration of the origin of the moving frame "M".
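A minimal numerical sketch of the rigid-body velocity formula above for a point on a moving body; the angular velocity, relative position and origin velocity are arbitrary illustrative values, not ones taken from the text.

```python
# Velocity of a point P on a moving rigid body: v_P = omega x R_{P/O} + v_O.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

omega = (0.0, 0.0, 2.0)       # angular velocity vector, rad/s (assumed value)
R_PO  = (1.0, 0.0, 0.0)       # position of P relative to the moving origin O, m
v_O   = (0.5, 0.0, 0.0)       # velocity of the origin O, m/s (assumed value)

v_P = tuple(c + v for c, v in zip(cross(omega, R_PO), v_O))
print(v_P)    # (0.5, 2.0, 0.0): the rotation contributes 2 m/s tangentially
```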
Kinematic constraints. Kinematic constraints are constraints on the movement of components of a mechanical system. Kinematic constraints can be considered to have two basic forms, (i) constraints that arise from hinges, sliders and cam joints that define the construction of the system, called holonomic constraints, and (ii) constraints imposed on the velocity of the system such as the knife-edge constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact with a plane, which are called non-holonomic constraints. The following are some common examples. Kinematic coupling. A kinematic coupling exactly constrains all 6 degrees of freedom. Rolling without slipping. An object that rolls against a surface without slipping obeys the condition that the velocity of its center of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the center of mass: formula_129 For the case of an object that does not tip or turn, this reduces to formula_130. Inextensible cord. This is the case where bodies are connected by an idealized cord that remains in tension and cannot change length. The constraint is that the sum of lengths of all segments of the cord is the total length, and accordingly the time derivative of this sum is zero. A dynamic problem of this type is the pendulum. Another example is a drum turned by the pull of gravity upon a falling weight attached to the rim by the inextensible cord. An "equilibrium" problem (i.e. not kinematic) of this type is the catenary. Kinematic pairs. Reuleaux called the ideal connections between components that form a machine kinematic pairs. He distinguished between higher pairs, which were said to have line contact between the two links, and lower pairs, which have area contact between the links. J. Phillips shows that there are many ways to construct pairs that do not fit this simple classification. Lower pair. A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point, line or plane in a moving solid (three-dimensional) body and a corresponding point, line or plane in the fixed solid body. There are the following cases: the revolute (hinged) joint, the prismatic (sliding) joint, the screw (helical) joint, the cylindrical joint, the spherical (ball) joint, and the planar joint. Higher pairs. Generally speaking, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a "cam joint". Similarly, the contacts between the involute curves that form the meshing teeth of two gears are cam joints. Kinematic chains. Rigid bodies ("links") connected by kinematic pairs ("joints") are known as "kinematic chains". Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic chain is computed from the number of links and the number and type of joints using the mobility formula. This formula can also be used to enumerate the topologies of kinematic chains that have a given degree of freedom, which is known as "type synthesis" in machine design. Examples. The planar one degree-of-freedom linkages assembled from "N" links and "j" hinges or sliding joints include the lever ("N" = 2, "j" = 1), the four-bar linkage ("N" = 4, "j" = 4), and the six-bar linkage ("N" = 6, "j" = 7). For larger chains and their linkage topologies, see R. P. Sunkari and L. C. Schmidt, "Structural synthesis of planar kinematic chains by adapting a Mckay-type algorithm", "Mechanism and Machine Theory" #41, pp. 1021–1030 (2006). See also. References.
[ { "math_id": 0, "text": "{\\bf r}" }, { "math_id": 1, "text": "\\mathbf r = (x,y,z) = x\\hat\\mathbf x + y\\hat\\mathbf y + z\\hat\\mathbf z," }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "z" }, { "math_id": 5, "text": " \\hat\\mathbf x" }, { "math_id": 6, "text": "\\hat\\mathbf y" }, { "math_id": 7, "text": "\\hat\\mathbf z" }, { "math_id": 8, "text": "\\left|\\mathbf r\\right|" }, { "math_id": 9, "text": "\\mathbf r" }, { "math_id": 10, "text": "|\\mathbf{r}| = \\sqrt{x^2 + y^2 + z^2}." }, { "math_id": 11, "text": "\\mathbf{r}(t)" }, { "math_id": 12, "text": " \\mathbf r(t) = x(t)\\hat\\mathbf x + y(t) \\hat\\mathbf y +z(t) \\hat\\mathbf z," }, { "math_id": 13, "text": "x(t)" }, { "math_id": 14, "text": "y(t)" }, { "math_id": 15, "text": "z(t)" }, { "math_id": 16, "text": " \\mathbf\\bar v = \\frac{\\Delta \\mathbf r}{\\Delta t} = \\frac{\\Delta x}{\\Delta t}\\hat\\mathbf x + \\frac{\\Delta y}{\\Delta t}\\hat\\mathbf y + \\frac{\\Delta z}{\\Delta t}\\hat\\mathbf z =\\bar v_x\\hat\\mathbf x + \\bar v_y\\hat\\mathbf y + \\bar v_z \\hat\\mathbf z \\," }, { "math_id": 17, "text": "\\Delta \\mathbf{r}" }, { "math_id": 18, "text": "\\Delta t" }, { "math_id": 19, "text": " \\mathbf v\n= \\lim_{\\Delta t\\to 0}\\frac{\\Delta\\mathbf{r}}{\\Delta t}\n= \\frac{\\text{d}\\mathbf r}{\\text{d}t}\n= v_x\\hat\\mathbf x + v_y\\hat\\mathbf y + v_z \\hat\\mathbf z ." }, { "math_id": 20, "text": " v=|\\mathbf{v}|= \\frac {\\text{d}s}{\\text{d}t}," }, { "math_id": 21, "text": "s" }, { "math_id": 22, "text": "\\text{d}s/\\text{d}t" }, { "math_id": 23, "text": " \\mathbf\\bar a = \\frac{\\Delta \\mathbf\\bar v}{\\Delta t} = \\frac{\\Delta \\bar v_x}{\\Delta t}\\hat\\mathbf x + \\frac{\\Delta \\bar v_y}{\\Delta t}\\hat\\mathbf y + \\frac{\\Delta \\bar v_z}{\\Delta t}\\hat\\mathbf z =\\bar a_x\\hat\\mathbf x + \\bar a_y\\hat\\mathbf y + \\bar a_z \\hat\\mathbf z \\," }, { "math_id": 24, "text": " \\mathbf a\n= \\lim_{\\Delta t\\to 0}\\frac{\\Delta\\mathbf{v}}{\\Delta t}\n=\\frac{\\text{d}\\mathbf v}{\\text{d}t}\n= a_x\\hat\\mathbf x + a_y\\hat\\mathbf y + a_z \\hat\\mathbf z . " }, { "math_id": 25, "text": " \\mathbf a\n= \\lim_{(\\Delta t)^2 \\to 0}\\frac{\\Delta\\mathbf{r}}{(\\Delta t)^2}\n= \\frac{\\text{d}^2\\mathbf r}{\\text{d}t^2}\n= a_x\\hat\\mathbf x + a_y\\hat\\mathbf y + a_z \\hat\\mathbf z . " }, { "math_id": 26, "text": " |\\mathbf{a}| = |\\dot{\\mathbf{v}} | = \\frac{\\text{d}v}{\\text{d}t}." 
}, { "math_id": 27, "text": "\\mathbf{r}_{A/B} = \\mathbf{r}_{A} - \\mathbf{r}_{B} " }, { "math_id": 28, "text": "\\mathbf{r}_{A} = \\left( x_{A}, y_{A}, z_{A} \\right) " }, { "math_id": 29, "text": "\\mathbf{r}_{B} = \\left( x_{B}, y_{B}, z_{B} \\right) " }, { "math_id": 30, "text": "\\mathbf{r}_{A/B} = \\mathbf{r}_{A} - \\mathbf{r}_{B} = \\left( x_{A} - x_{B}, y_{A} - y_{B}, z_{A} - z_{B} \\right) " }, { "math_id": 31, "text": "\\mathbf{v}_{A/B} = \\mathbf{v}_{A} - \\mathbf{v}_{B} " }, { "math_id": 32, "text": "\\mathbf{v}_{A} = \\left( v_{A_x}, v_{A_y}, v_{A_z} \\right) " }, { "math_id": 33, "text": "\\mathbf{v}_{B} = \\left( v_{B_x}, v_{B_y}, v_{B_z} \\right) " }, { "math_id": 34, "text": "\\mathbf{v}_{A/B} = \\mathbf{v}_{A} - \\mathbf{v}_{B} = \\left( v_{A_x} - v_{B_x}, v_{A_y} - v_{B_{y}}, v_{A_z} - v_{B_z} \\right) " }, { "math_id": 35, "text": "\\mathbf{a}_{C/B} = \\mathbf{a}_{C} - \\mathbf{a}_{B} " }, { "math_id": 36, "text": "\\mathbf{a}_{C} = \\left( a_{C_x}, a_{C_y}, a_{C_z} \\right) " }, { "math_id": 37, "text": "\\mathbf{a}_{B} = \\left( a_{B_x}, a_{B_y}, a_{B_z} \\right) " }, { "math_id": 38, "text": "\\mathbf{a}_{C/B} = \\mathbf{a}_{C} - \\mathbf{a}_{B} = \\left( a_{C_x} - a_{B_x} , a_{C_y} - a_{B_y} , a_{C_z} - a_{B_z} \\right) " }, { "math_id": 39, "text": "\\mathbf{r}_0" }, { "math_id": 40, "text": "\\mathbf{v}_0" }, { "math_id": 41, "text": "t = 0" }, { "math_id": 42, "text": "\\mathbf{v}(t) = \\mathbf{v}_0 + \\int_0^t \\mathbf{a} \\, \\text{d}\\tau = \\mathbf{v}_0 + \\mathbf{a}t." }, { "math_id": 43, "text": "\\mathbf{r}(t)\n= \\mathbf{r}_0 + \\int_0^t \\mathbf{v}(\\tau) \\, \\text{d} \\tau\n= \\mathbf{r}_0 + \\int_0^t \\left(\\mathbf{v}_0 + \\mathbf{a}\\tau \\right) \\text{d} \\tau\n= \\mathbf{r}_0 + \\mathbf{v}_0 t + \\tfrac{1}{2} \\mathbf{a} t^2." }, { "math_id": 44, "text": "\\mathbf{a} = \\frac{\\Delta\\mathbf{v}}{\\Delta t} = \\frac{\\mathbf{v}-\\mathbf{v}_0}{ t } " }, { "math_id": 45, "text": "\\mathbf{r}(t) = \\mathbf{r}_0 + \\left(\\frac{\\mathbf{v} + \\mathbf{v}_0}{2}\\right) t ." }, { "math_id": 46, "text": " t = \\frac{\\mathbf{v}-\\mathbf{v}_0}{ \\mathbf{a} } " }, { "math_id": 47, "text": " \\left(\\mathbf{r} - \\mathbf{r}_0\\right) \\cdot \\mathbf{a} = \\left( \\mathbf{v} - \\mathbf{v}_0 \\right) \\cdot \\frac{\\mathbf{v} + \\mathbf{v}_0}{2} \\ , " }, { "math_id": 48, "text": " \\cdot " }, { "math_id": 49, "text": "2 \\left(\\mathbf{r} - \\mathbf{r}_0\\right) \\cdot \\mathbf{a} = |\\mathbf{v}|^2 - |\\mathbf{v}_0|^2." }, { "math_id": 50, "text": "2 \\left|\\mathbf{r} - \\mathbf{r}_0\\right| \\left|\\mathbf{a}\\right| \\cos \\alpha = |\\mathbf{v}|^2 - |\\mathbf{v}_0|^2." }, { "math_id": 51, "text": "\\cos 0 = 1" }, { "math_id": 52, "text": " |\\mathbf{v}|^2= |\\mathbf{v}_0|^2 + 2 \\left|\\mathbf{a}\\right| \\left|\\mathbf{r}-\\mathbf{r}_0\\right|." }, { "math_id": 53, "text": "|\\mathbf{a}|=a, |\\mathbf{v}|=v, |\\mathbf{r}-\\mathbf{r}_0| = \\Delta r " }, { "math_id": 54, "text": "\\Delta r" }, { "math_id": 55, "text": " v^2= v_0^2 + 2a \\Delta r." 
}, { "math_id": 56, "text": "\\Delta r = \\int v \\, \\text{d}t" }, { "math_id": 57, "text": "A \\cdot B" }, { "math_id": 58, "text": "A" }, { "math_id": 59, "text": "B" }, { "math_id": 60, "text": "A = t" }, { "math_id": 61, "text": "B = v_0" }, { "math_id": 62, "text": "a" }, { "math_id": 63, "text": "tv_0" }, { "math_id": 64, "text": "\\frac{1}{2} BH" }, { "math_id": 65, "text": "H" }, { "math_id": 66, "text": "B = t" }, { "math_id": 67, "text": "H = at" }, { "math_id": 68, "text": "A = \\frac{1}{2} BH = \\frac{1}{2} att = \\frac{1}{2} at^2 = \\frac{at^2}{2}" }, { "math_id": 69, "text": "v_0 t" }, { "math_id": 70, "text": "\\frac{at^2}{2}" }, { "math_id": 71, "text": "\\Delta r = v_0 t + \\frac{at^2}{2}" }, { "math_id": 72, "text": " \\mathbf{r}(t) = r\\cos(\\theta(t))\\hat\\mathbf x + r\\sin(\\theta(t))\\hat\\mathbf y + z(t)\\hat\\mathbf z, " }, { "math_id": 73, "text": " \\hat\\mathbf r = \\cos(\\theta(t))\\hat\\mathbf x + \\sin(\\theta(t))\\hat\\mathbf y,\n\\quad\n\\hat\\mathbf\\theta = -\\sin(\\theta(t))\\hat\\mathbf x + \\cos(\\theta(t))\\hat\\mathbf y ." }, { "math_id": 74, "text": " \\frac{\\text{d}\\hat\\mathbf r}{\\text{d}t} = \\omega\\hat\\mathbf\\theta . " }, { "math_id": 75, "text": " \\frac{\\text{d}^2\\hat\\mathbf r}{\\text{d}t^2} = \\frac{\\text{d}(\\omega\\hat\\mathbf\\theta)}{\\text{d}t} = \\alpha\\hat\\mathbf\\theta - \\omega\\hat\\mathbf r . " }, { "math_id": 76, "text": " \\frac{\\text{d}\\hat\\mathbf\\theta}{\\text{d}t} = -\\theta\\hat\\mathbf r . " }, { "math_id": 77, "text": " \\frac{\\text{d}^2\\hat\\mathbf\\theta}{\\text{d}t^2} = \\frac{\\text{d}(-\\theta\\hat\\mathbf r)}{\\text{d}t} = -\\alpha\\hat\\mathbf r - \\omega^2\\hat\\mathbf\\theta. " }, { "math_id": 78, "text": " \\mathbf{r}(t) = r\\hat\\mathbf r + z(t)\\hat\\mathbf z ." }, { "math_id": 79, "text": " \\mathbf{r}(t) = r(t)\\hat\\mathbf r + z(t)\\hat\\mathbf z ." }, { "math_id": 80, "text": " \\mathbf v_P\n= \\frac{\\text{d}}{\\text{d}t} \\left(r\\hat\\mathbf r + z \\hat\\mathbf z \\right)\n= v\\hat\\mathbf r + r\\mathbf\\omega\\hat\\mathbf\\theta + v_z\\hat\\mathbf z = v(\\hat\\mathbf r + \\hat\\mathbf\\theta) + v_z\\hat\\mathbf z . " }, { "math_id": 81, "text": " \\mathbf{a}_P\n= \\frac{\\text{d}}{\\text{d}t} \\left(v\\hat\\mathbf r + v\\hat\\mathbf\\theta + v_z\\hat\\mathbf z\\right) =(a - v\\theta)\\hat\\mathbf r + (a + v\\omega)\\hat\\mathbf\\theta + a_z\\hat\\mathbf z . " }, { "math_id": 82, "text": " -v\\theta\\hat\\mathbf r" }, { "math_id": 83, "text": " v\\omega\\hat\\mathbf\\theta" }, { "math_id": 84, "text": " \\mathbf v_P\n= \\frac{\\text{d}}{\\text{d}t} \\left(r\\hat\\mathbf r + z \\hat\\mathbf z \\right)\n= r\\omega\\hat\\mathbf\\theta + v_z\\hat\\mathbf z = v\\hat\\mathbf\\theta + v_z\\hat\\mathbf z ." }, { "math_id": 85, "text": " \\mathbf r(t) = r\\hat\\mathbf r + z \\hat\\mathbf z," }, { "math_id": 86, "text": " \\mathbf{v}_P\n= \\frac{\\text{d}}{\\text{d}t} \\left(r\\hat\\mathbf r + z \\hat\\mathbf z\\right)\n= r\\omega\\hat\\mathbf\\theta = v\\hat\\mathbf\\theta," }, { "math_id": 87, "text": " \\omega" }, { "math_id": 88, "text": " \\mathbf{a}_P = \\frac{\\text{d}(v\\hat\\mathbf\\theta)}{\\text{d}t} = a\\hat\\mathbf\\theta - v\\theta\\hat\\mathbf r." }, { "math_id": 89, "text": " a_r = - v\\theta, \\quad a_{\\theta} = a," }, { "math_id": 90, "text": "\\omega = \\dot{\\theta}, \\quad \\alpha = \\ddot{\\theta}, " }, { "math_id": 91, "text": " a_r = - r\\omega^2, \\quad a_{\\theta} = r\\alpha." 
}, { "math_id": 92, "text": " [T(\\phi, \\mathbf{d})]\n= \\begin{bmatrix} A(\\phi) & \\mathbf{d} \\\\ \\mathbf 0 & 1\\end{bmatrix}\n= \\begin{bmatrix} \\cos\\phi & -\\sin\\phi & d_x \\\\ \\sin\\phi & \\cos\\phi & d_y \\\\ 0 & 0 & 1\\end{bmatrix}." }, { "math_id": 93, "text": " \\mathbf{P} = [T(\\phi, \\mathbf{d})]\\mathbf{r}\n= \\begin{bmatrix} \\cos\\phi & -\\sin\\phi & d_x \\\\ \\sin\\phi & \\cos\\phi & d_y \\\\ 0 & 0 & 1\\end{bmatrix} \\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix}." }, { "math_id": 94, "text": " \\mathbf{r}(t)=[T(0,\\mathbf{d}(t))] \\mathbf{p} = \\mathbf{d}(t) + \\mathbf{p}." }, { "math_id": 95, "text": " \\mathbf{v}_P=\\dot{\\mathbf{r}}(t) = \\dot{\\mathbf{d}}(t)=\\mathbf{v}_O,\n\\quad\n\\mathbf{a}_P=\\ddot{\\mathbf{r}}(t) = \\ddot{\\mathbf{d}}(t) = \\mathbf{a}_O," }, { "math_id": 96, "text": " \\mathbf{P}(t) = [A(t)]\\mathbf{p}, " }, { "math_id": 97, "text": " [A(t)] = \\begin{bmatrix}\n \\cos(\\theta(t)) & -\\sin(\\theta(t)) \\\\\n \\sin(\\theta(t)) & \\cos(\\theta(t))\n\\end{bmatrix}, " }, { "math_id": 98, "text": " \\mathbf{v}_P = \\dot{\\mathbf{P}} = [\\dot{A}(t)]\\mathbf{p}. " }, { "math_id": 99, "text": " \\mathbf{v}_P = [\\dot{A}(t)][A(t)^{-1}]\\mathbf{P} = [\\Omega]\\mathbf{P}, " }, { "math_id": 100, "text": " [\\Omega] = \\begin{bmatrix} 0 & -\\omega \\\\ \\omega & 0 \\end{bmatrix}, " }, { "math_id": 101, "text": " \\omega = \\frac{\\text{d}\\theta}{\\text{d}t}. " }, { "math_id": 102, "text": " \\mathbf{A}_P = \\ddot{P}(t) = [\\dot{\\Omega}]\\mathbf{P} + [\\Omega]\\dot{\\mathbf{P}}, " }, { "math_id": 103, "text": " \\mathbf{A}_P = [\\dot{\\Omega}]\\mathbf{P} + [\\Omega][\\Omega]\\mathbf{P}, " }, { "math_id": 104, "text": " [\\dot{\\Omega}] = \\begin{bmatrix} 0 & -\\alpha \\\\ \\alpha & 0 \\end{bmatrix}, " }, { "math_id": 105, "text": " \\alpha = \\frac{\\text{d}^2\\theta}{\\text{d}t^2}. " }, { "math_id": 106, "text": "\\omega = \\frac {\\text{d}\\theta}{\\text{d}t}" }, { "math_id": 107, "text": "\\alpha = \\frac {\\text{d}\\omega}{\\text{d}t}" }, { "math_id": 108, "text": "\\omega_{\\mathrm{f}} = \\omega_{\\mathrm{i}} + \\alpha t\\!" }, { "math_id": 109, "text": "\\theta_{\\mathrm{f}} - \\theta_{\\mathrm{i}} = \\omega_{\\mathrm{i}} t + \\tfrac{1}{2} \\alpha t^2" }, { "math_id": 110, "text": "\\theta_{\\mathrm{f}} - \\theta_{\\mathrm{i}} = \\tfrac{1}{2} (\\omega_{\\mathrm{f}} + \\omega_{\\mathrm{i}})t" }, { "math_id": 111, "text": "\\omega_{\\mathrm{f}}^2 = \\omega_{\\mathrm{i}}^2 + 2 \\alpha (\\theta_{\\mathrm{f}} - \\theta_{\\mathrm{i}})." }, { "math_id": 112, "text": " \\mathbf{P}(t) = [T(t)] \\mathbf{p}\n= \\begin{bmatrix} \\mathbf{P} \\\\ 1\\end{bmatrix}\n=\\begin{bmatrix} A(t) & \\mathbf{d}(t) \\\\ 0 & 1\\end{bmatrix}\n\\begin{bmatrix} \\mathbf{p} \\\\ 1\\end{bmatrix}." }, { "math_id": 113, "text": " \\mathbf{p} = [T(t)]^{-1}\\mathbf{P}(t)\n= \\begin{bmatrix} \\mathbf{p} \\\\ 1\\end{bmatrix}\n=\\begin{bmatrix} A(t)^\\text{T} & -A(t)^\\text{T}\\mathbf{d}(t) \\\\ 0 & 1\\end{bmatrix}\n\\begin{bmatrix} \\mathbf{P}(t) \\\\ 1\\end{bmatrix}." }, { "math_id": 114, "text": " [A(t)]^\\text{T}[A(t)]=I.\\!" }, { "math_id": 115, "text": " \\mathbf{v}_P = [\\dot{T}(t)]\\mathbf{p}\n=\\begin{bmatrix} \\mathbf{v}_P \\\\ 0\\end{bmatrix}\n= \\left(\\frac{d}{dt}{\\begin{bmatrix} A(t) & \\mathbf{d}(t) \\\\ 0 & 1 \\end{bmatrix}}\\right)\n\\begin{bmatrix} \\mathbf{p} \\\\ 1\\end{bmatrix}\n= \\begin{bmatrix} \\dot{A}(t) & \\dot{\\mathbf{d}}(t) \\\\ 0 & 0 \\end{bmatrix}\n\\begin{bmatrix} \\mathbf{p} \\\\ 1\\end{bmatrix}." 
}, { "math_id": 116, "text": "\\begin{align}\n\\mathbf{v}_P & = [\\dot{T}(t)][T(t)]^{-1}\\mathbf{P}(t) \\\\[4pt]\n& =\n\\begin{bmatrix} \\mathbf{v}_P \\\\ 0 \\end{bmatrix}\n =\n\\begin{bmatrix} \\dot{A} & \\dot{\\mathbf{d}} \\\\ 0 & 0 \\end{bmatrix}\n\\begin{bmatrix} A & \\mathbf{d} \\\\ 0 & 1 \\end{bmatrix}^{-1}\n\\begin{bmatrix} \\mathbf{P}(t) \\\\ 1\\end{bmatrix} \\\\[4pt]\n& =\n\\begin{bmatrix} \\dot{A} & \\dot{\\mathbf{d}} \\\\ 0 & 0 \\end{bmatrix}\nA^{-1}\\begin{bmatrix} 1 & -\\mathbf{d} \\\\ 0 & A \\end{bmatrix}\n\\begin{bmatrix} \\mathbf{P}(t) \\\\ 1\\end{bmatrix} \\\\[4pt]\n& =\n\\begin{bmatrix} \\dot{A}A^{-1} & -\\dot{A}A^{-1}\\mathbf{d} + \\dot{\\mathbf{d}} \\\\ 0 & 0 \\end{bmatrix}\n\\begin{bmatrix} \\mathbf{P}(t) \\\\ 1\\end{bmatrix} \\\\[4pt]\n&=\n\\begin{bmatrix} \\dot{A}A^\\text{T} & -\\dot{A}A^\\text{T}\\mathbf{d} + \\dot{\\mathbf{d}} \\\\ 0 & 0 \\end{bmatrix}\n\\begin{bmatrix} \\mathbf{P}(t) \\\\ 1\\end{bmatrix} \\\\[6pt]\n\\mathbf{v}_P &= [S]\\mathbf{P}.\n\\end{align}" }, { "math_id": 117, "text": " [S] = \\begin{bmatrix} \\Omega & -\\Omega\\mathbf{d} + \\dot{\\mathbf{d}} \\\\ 0 & 0 \\end{bmatrix}" }, { "math_id": 118, "text": " [\\Omega] = \\dot{A}A^\\text{T}," }, { "math_id": 119, "text": "\\mathbf{v}_P = [\\Omega](\\mathbf{P}-\\mathbf{d}) + \\dot{\\mathbf{d}} = \\omega\\times \\mathbf{R}_{P/O} + \\mathbf{v}_O," }, { "math_id": 120, "text": " \\mathbf{R}_{P/O}=\\mathbf{P}-\\mathbf{d}," }, { "math_id": 121, "text": "\\mathbf{v}_O=\\dot{\\mathbf{d}}," }, { "math_id": 122, "text": "\\mathbf{A}_P = \\frac{d}{dt}\\mathbf{v}_P\n= \\frac{d}{dt}\\left([S]\\mathbf{P}\\right)\n= [\\dot{S}] \\mathbf{P} + [S] \\dot{\\mathbf{P}}\n= [\\dot{S}]\\mathbf{P} + [S][S]\\mathbf{P} ." }, { "math_id": 123, "text": " [\\dot{S}] = \\begin{bmatrix} \\dot{\\Omega} & -\\dot{\\Omega}\\mathbf{d} -\\Omega\\dot{\\mathbf{d}} + \\ddot{\\mathbf{d}} \\\\ 0 & 0 \\end{bmatrix} = \\begin{bmatrix} \\dot{\\Omega} & -\\dot{\\Omega}\\mathbf{d} -\\Omega\\mathbf{v}_O + \\mathbf{A}_O \\\\ 0 & 0 \\end{bmatrix}" }, { "math_id": 124, "text": " [S]^2 = \\begin{bmatrix} \\Omega & -\\Omega\\mathbf{d} + \\mathbf{v}_O \\\\ 0 & 0 \\end{bmatrix}^2 = \\begin{bmatrix} \\Omega^2 & -\\Omega^2\\mathbf{d} + \\Omega\\mathbf{v}_O \\\\ 0 & 0 \\end{bmatrix}." }, { "math_id": 125, "text": " \\mathbf{A}_P = \\dot{\\Omega}(\\mathbf{P} - \\mathbf{d}) + \\mathbf{A}_O + \\Omega^2(\\mathbf{P}-\\mathbf{d})," }, { "math_id": 126, "text": " \\mathbf{A}_P = \\alpha\\times\\mathbf{R}_{P/O} + \\omega\\times\\omega\\times\\mathbf{R}_{P/O} + \\mathbf{A}_O," }, { "math_id": 127, "text": "\\mathbf{R}_{P/O}=\\mathbf{P}-\\mathbf{d}," }, { "math_id": 128, "text": "\\mathbf{A}_O = \\ddot{\\mathbf{d}}" }, { "math_id": 129, "text": " \\boldsymbol{ v}_G(t) = \\boldsymbol{\\Omega} \\times \\boldsymbol{ r}_{G/O}." }, { "math_id": 130, "text": " v = r \\omega" } ]
https://en.wikipedia.org/wiki?curid=65914
65914231
Thomas J. Osler
American mathematician (1940–2023) Thomas Joseph Osler (April 26, 1940 – March 26, 2023) was an American mathematician, national champion distance runner, and author. Early life and education. Born in 1940 in Camden, New Jersey, Osler was a graduate of Camden High School in 1957 and then studied physics at Drexel University, graduating in 1962. He completed his PhD at the Courant Institute of Mathematical Sciences of New York University in 1970. His dissertation, "Leibniz Rule, the Chain Rule, and Taylor's Theorem for Fractional Derivatives", was supervised by Samuel Karp. Career. Osler taught at Saint Joseph's University and the Rensselaer Polytechnic Institute before joining the mathematics department at Rowan University in New Jersey in 1972; he was a full professor at Rowan University until his death. In mathematics, Osler is best known for his work on fractional calculus. He also gave a series of product formulas for formula_0 that interpolate between the formula of Viète and that of Wallis. In 2009, the New Jersey Section of the Mathematical Association of America gave him their Distinguished Teaching Award. A mathematics conference was held at Rowan University in honor of his 70th birthday in 2010. Running. Osler won three national Amateur Athletic Union championships: at 25 km (1965), and at 30 km and 50 mi (1967). Osler won the 1965 Philadelphia Marathon, finishing the race in freezing-cold weather in a time of 2:34:07. Osler was involved in the creation of the Road Runners Club of America with Olympian Browning Ross; together they were elected as co-secretaries in 1959 and were among the first four officially elected officers of the newly formed club. He served on the Amateur Athletic Union Standards Committee in 1979. He has been credited with helping to popularize the idea of walk breaks among US marathon runners. In 1980, Osler was inducted into the Road Runners Club of America Hall of Fame. Running publications. Osler was the author of several books and booklets on running. Personal life and death. Osler was a resident of Glassboro, New Jersey. Osler died on March 26, 2023, at the age of 82. References.
[ { "math_id": 0, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=65914231
6591796
Support (measure theory)
Concept in mathematics In mathematics, the support (sometimes topological support or spectrum) of a measure formula_0 on a measurable topological space formula_1 is a precise notion of where in the space formula_2 the measure "lives". It is defined to be the largest (closed) subset of formula_2 for which every open neighbourhood of every point of the set has positive measure. Motivation. A (non-negative) measure formula_0 on a measurable space formula_3 is really a function formula_4 Therefore, in terms of the usual definition of support, the support of formula_0 is a subset of the σ-algebra formula_5 formula_6 where the overbar denotes set closure. However, this definition is somewhat unsatisfactory: we use the notion of closure, but we do not even have a topology on formula_7 What we really want to know is where in the space formula_2 the measure formula_0 is non-zero. Consider two examples: In light of these two examples, we can reject the following candidate definitions in favour of the one in the next section: However, the idea of "local strict positivity" is not too far from a workable definition. Definition. Let formula_20 be a topological space; let formula_21 denote the Borel σ-algebra on formula_22 i.e. the smallest sigma algebra on formula_2 that contains all open sets formula_23 Let formula_0 be a measure on formula_24 Then the support (or spectrum) of formula_0 is defined as the set of all points formula_25 in formula_2 for which every open neighbourhood formula_26 of formula_25 has positive measure: formula_27 Some authors prefer to take the closure of the above set. However, this is not necessary: see "Properties" below. An equivalent definition of support is as the largest formula_28 (with respect to inclusion) such that every open set which has non-empty intersection with formula_29 has positive measure, i.e. the largest formula_29 such that: formula_30 Signed and complex measures. This definition can be extended to signed and complex measures. Suppose that formula_31 is a signed measure. Use the Hahn decomposition theorem to write formula_32 where formula_33 are both non-negative measures. Then the support of formula_0 is defined to be formula_34 Similarly, if formula_35 is a complex measure, the support of formula_0 is defined to be the union of the supports of its real and imaginary parts. Properties. formula_36 holds. A measure formula_0 on formula_2 is strictly positive if and only if it has support formula_37 If formula_0 is strictly positive and formula_38 is arbitrary, then any open neighbourhood of formula_39 since it is an open set, has positive measure; hence, formula_40 so formula_37 Conversely, if formula_41 then every non-empty open set (being an open neighbourhood of some point in its interior, which is also a point of the support) has positive measure; hence, formula_0 is strictly positive. The support of a measure is closed in formula_22as its complement is the union of the open sets of measure formula_42 In general the support of a nonzero measure may be empty: see the examples below. However, if formula_2 is a Hausdorff topological space and formula_0 is a Radon measure, a Borel set formula_43 outside the support has measure zero: formula_44 The converse is true if formula_43 is open, but it is not true in general: it fails if there exists a point formula_45 such that formula_46 (e.g. Lebesgue measure). 
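A short way to see the forward implication above (an informal outline, using only the definition of the support and inner regularity): every point of a Borel set $A \subseteq X \setminus \operatorname{supp}(\mu)$ has an open neighbourhood of measure zero, so any compact $K \subseteq A$ is covered by finitely many such neighbourhoods and satisfies $\mu(K) = 0$; inner regularity of the Radon measure then gives $\mu(A) = \sup\{\mu(K) : K \subseteq A \text{ compact}\} = 0$.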
Thus, one does not need to "integrate outside the support": for any measurable function formula_47 or formula_48 formula_49 The concept of "support" of a measure and that of spectrum of a self-adjoint linear operator on a Hilbert space are closely related. Indeed, if formula_0 is a regular Borel measure on the line formula_50 then the multiplication operator formula_51 is self-adjoint on its natural domain formula_52 and its spectrum coincides with the essential range of the identity function formula_53 which is precisely the support of formula_54 Examples. Lebesgue measure. In the case of Lebesgue measure formula_8 on the real line formula_55 consider an arbitrary point formula_56 Then any open neighbourhood formula_26 of formula_25 must contain some open interval formula_57 for some formula_58 This interval has Lebesgue measure formula_59 so formula_60 Since formula_61 was arbitrary, formula_62 Dirac measure. In the case of Dirac measure formula_14 let formula_61 and consider two cases: We conclude that formula_68 is the closure of the singleton set formula_69 which is formula_70 itself. In fact, a measure formula_0 on the real line is a Dirac measure formula_10 for some point formula_71 if and only if the support of formula_0 is the singleton set formula_72 Consequently, Dirac measure on the real line is the unique measure with zero variance (provided that the measure has variance at all). A uniform distribution. Consider the measure formula_0 on the real line formula_73 defined by formula_74 i.e. a uniform measure on the open interval formula_75 A similar argument to the Dirac measure example shows that formula_76 Note that the boundary points 0 and 1 lie in the support: any open set containing 0 (or 1) contains an open interval about 0 (or 1), which must intersect formula_77 and so must have positive formula_0-measure. A nontrivial measure whose support is empty. The space of all countable ordinals with the topology generated by "open intervals" is a locally compact Hausdorff space. The measure ("Dieudonné measure") that assigns measure 1 to Borel sets containing an unbounded closed subset and assigns 0 to other Borel sets is a Borel probability measure whose support is empty. A nontrivial measure whose support has measure zero. On a compact Hausdorff space the support of a non-zero measure is always non-empty, but may have measure formula_42 An example of this is given by adding the first uncountable ordinal formula_78 to the previous example: the support of the measure is the single point formula_79 which has measure formula_42 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": "(X, \\operatorname{Borel}(X))" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "(X, \\Sigma)" }, { "math_id": 4, "text": "\\mu : \\Sigma \\to [0, +\\infty]." }, { "math_id": 5, "text": "\\Sigma:" }, { "math_id": 6, "text": "\\operatorname{supp} (\\mu) := \\overline{\\{A \\in \\Sigma \\,\\vert\\, \\mu(A) \\neq 0\\}}," }, { "math_id": 7, "text": "\\Sigma." }, { "math_id": 8, "text": "\\lambda" }, { "math_id": 9, "text": "\\Reals." }, { "math_id": 10, "text": "\\delta_p" }, { "math_id": 11, "text": "p \\in \\Reals." }, { "math_id": 12, "text": "p," }, { "math_id": 13, "text": "X \\setminus \\{x \\in X \\mid \\mu(\\{x\\}) = 0\\}." }, { "math_id": 14, "text": "\\delta_p," }, { "math_id": 15, "text": "\\lambda:" }, { "math_id": 16, "text": "\\{x \\in X \\mid \\exists N_x \\text{ open} \\text{ such that } (x \\in N_x \\text{ and } \\mu(N_x) > 0)\\}" }, { "math_id": 17, "text": "N_x = X" }, { "math_id": 18, "text": " x \\in X," }, { "math_id": 19, "text": "X." }, { "math_id": 20, "text": "(X, T)" }, { "math_id": 21, "text": "B(T)" }, { "math_id": 22, "text": "X," }, { "math_id": 23, "text": "U \\in T." }, { "math_id": 24, "text": "(X, B(T))" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": "N_x" }, { "math_id": 27, "text": "\\operatorname{supp} (\\mu) := \\{x \\in X \\mid \\forall N_x \\in T \\colon (x \\in N_x \\Rightarrow \\mu (N_x) > 0)\\}." }, { "math_id": 28, "text": "C \\in B(T)" }, { "math_id": 29, "text": "C" }, { "math_id": 30, "text": "(\\forall U \\in T)(U \\cap C \\neq \\varnothing \\implies \\mu (U \\cap C) > 0)." }, { "math_id": 31, "text": "\\mu : \\Sigma \\to [-\\infty, +\\infty]" }, { "math_id": 32, "text": "\\mu = \\mu^+ - \\mu^-," }, { "math_id": 33, "text": "\\mu^\\pm" }, { "math_id": 34, "text": "\\operatorname{supp} (\\mu) := \\operatorname{supp} (\\mu^+) \\cup \\operatorname{supp} (\\mu^-)." }, { "math_id": 35, "text": "\\mu : \\Sigma \\to \\Complex" }, { "math_id": 36, "text": "\\operatorname{supp} (\\mu_1 + \\mu_2) = \\operatorname{supp} (\\mu_1) \\cup \\operatorname{supp} (\\mu_2)" }, { "math_id": 37, "text": "\\operatorname{supp}(\\mu) = X." }, { "math_id": 38, "text": "x \\in X" }, { "math_id": 39, "text": "x," }, { "math_id": 40, "text": "x \\in \\operatorname{supp}(\\mu)," }, { "math_id": 41, "text": "\\operatorname{supp}(\\mu) = X," }, { "math_id": 42, "text": "0." }, { "math_id": 43, "text": "A" }, { "math_id": 44, "text": "A \\subseteq X \\setminus \\operatorname{supp} (\\mu) \\implies \\mu (A) = 0." }, { "math_id": 45, "text": "x \\in \\operatorname{supp}(\\mu)" }, { "math_id": 46, "text": "\\mu(\\{x\\}) = 0" }, { "math_id": 47, "text": "f : X \\to \\Reals" }, { "math_id": 48, "text": "\\Complex," }, { "math_id": 49, "text": "\\int_X f(x) \\, \\mathrm{d} \\mu (x) = \\int_{\\operatorname{supp} (\\mu)} f(x) \\, \\mathrm{d} \\mu (x)." }, { "math_id": 50, "text": "\\mathbb{R}," }, { "math_id": 51, "text": "(Af)(x) = xf(x)" }, { "math_id": 52, "text": "D(A) = \\{f \\in L^2(\\Reals, d\\mu) \\mid xf(x) \\in L^2(\\Reals, d\\mu)\\}" }, { "math_id": 53, "text": "x \\mapsto x," }, { "math_id": 54, "text": "\\mu." }, { "math_id": 55, "text": "\\Reals," }, { "math_id": 56, "text": "x \\in \\Reals." }, { "math_id": 57, "text": "(x - \\epsilon, x + \\epsilon)" }, { "math_id": 58, "text": "\\epsilon > 0." }, { "math_id": 59, "text": "2 \\epsilon > 0," }, { "math_id": 60, "text": "\\lambda(N_x) \\geq 2 \\epsilon > 0." 
}, { "math_id": 61, "text": "x \\in \\Reals" }, { "math_id": 62, "text": "\\operatorname{supp}(\\lambda) = \\Reals." }, { "math_id": 63, "text": "x = p," }, { "math_id": 64, "text": "\\delta_p(N_x) = 1 > 0." }, { "math_id": 65, "text": "x \\neq p," }, { "math_id": 66, "text": "B" }, { "math_id": 67, "text": "\\delta_p(B) = 0." }, { "math_id": 68, "text": "\\operatorname{supp}(\\delta_p)" }, { "math_id": 69, "text": "\\{p\\}," }, { "math_id": 70, "text": "\\{p\\}" }, { "math_id": 71, "text": "p" }, { "math_id": 72, "text": "\\{p\\}." }, { "math_id": 73, "text": "\\Reals" }, { "math_id": 74, "text": "\\mu(A) := \\lambda(A \\cap (0, 1))" }, { "math_id": 75, "text": "(0, 1)." }, { "math_id": 76, "text": "\\operatorname{supp}(\\mu) = [0, 1]." }, { "math_id": 77, "text": "(0, 1)," }, { "math_id": 78, "text": "\\Omega" }, { "math_id": 79, "text": "\\Omega," } ]
https://en.wikipedia.org/wiki?curid=6591796
65921358
Hexagonal Efficient Coordinate System
The Hexagonal Efficient Coordinate System (HECS), formerly known as Array Set Addressing (ASA), is a coordinate system for hexagonal grids that allows hexagonally sampled images to be efficiently stored and processed on digital systems. HECS represents the hexagonal grid as a set of two interleaved rectangular sub-arrays, which can be addressed by normal integer row and column coordinates and are distinguished with a single binary coordinate. Hexagonal sampling is the optimal approach for isotropically band-limited two-dimensional signals and its use provides a sampling efficiency improvement of 13.4% over rectangular sampling. The HECS system enables the use of hexagonal sampling for digital imaging applications without requiring significant additional processing to address the hexagonal array. Background. The advantages of sampling on a hexagonal grid instead of the standard rectangular grid for digital imaging applications include: more efficient sampling, consistent connectivity, equidistant neighboring pixels, greater angular resolution, and higher circular symmetry. Sometimes, more than one of these advantages compound together, thereby increasing the efficiency by 50% in terms of computation and storage when compared to rectangular sampling. Researchers have shown that the hexagonal grid is the optimal sampling lattice and its use provides a sampling efficiency improvement of 13.4% over rectangular sampling for isotropically band-limited two-dimensional signals. Despite all of these advantages of hexagonal sampling over rectangular sampling, application prior to the introduction of HECS was limited because of the lack of an efficient coordinate system. Hexagonal Efficient Coordinate System. Description. The Hexagonal Efficient Coordinate System (HECS) is based on the idea of representing the hexagonal grid as a set of two rectangular arrays which can be individually indexed using familiar integer-valued row and column indices. The arrays are distinguished using a single binary coordinate so that a full address for any point on the hexagonal grid is uniquely represented by three coordinates. formula_0 where the coordinates represent the array, row, and column, respectively. The hexagonal grid is separated into rectangular arrays by taking every other row as one array and the remaining rows as the other array, as shown in the figure. Nearest neighbors. The addresses of the nearest neighbors of a pixel (or grid point) are easily determined by simple expressions which are functions of the pixel's coordinates, as shown. Convert to Cartesian. Converting coordinates in HECS to their Cartesian counterparts is done with a simple matrix multiplication formula_1 Operators. Preliminaries. Let the set of all possible HECS addresses be formula_2 formula_3 formula_4 formula_5 Addition. A binary addition operator formula_6 has been defined as formula_7 where formula_8 is the logical XOR operator and formula_9 is the logical AND operator. Negation. A unary negation operator formula_10 has been defined as formula_11 Subtraction. A binary subtraction operator formula_10 has been defined by combining the negation and addition operations as formula_12 Scalar multiplication. Scalar multiplication has been defined for non-negative integer scalar multipliers as formula_13 and formula_14 Separable Fourier kernel. The hexagonal discrete Fourier transform (HDFT) was developed by Mersereau and was converted to an HECS representation by Rummelt. 
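As a rough illustration of the conversion and the operators defined above (a sketch only; the function names and the sample point are not taken from the cited literature), a HECS address can be handled as a plain (a, r, c) tuple:

```python
import math

def hecs_to_cartesian(p):
    """Convert a HECS address (a, r, c) to Cartesian (x, y)."""
    a, r, c = p
    return (a / 2 + c, math.sqrt(3) * (a / 2 + r))

def hecs_add(p1, p2):
    """Binary addition operator on HECS addresses."""
    a1, r1, c1 = p1
    a2, r2, c2 = p2
    carry = a1 & a2                  # logical AND of the binary array coordinates
    return (a1 ^ a2, r1 + r2 + carry, c1 + c2 + carry)

def hecs_neg(p):
    """Unary negation operator."""
    a, r, c = p
    return (a, -r - a, -c - a)

def hecs_sub(p1, p2):
    """Subtraction, defined as addition of the negation."""
    return hecs_add(p1, hecs_neg(p2))

p = (1, 2, 3)
print(hecs_to_cartesian(p))            # (3.5, 4.330...)
print(hecs_to_cartesian(hecs_neg(p)))  # (-3.5, -4.330...)
print(hecs_sub(p, p))                  # (0, 0, 0), the origin
```

A quick check with the conversion function shows that this addition behaves like ordinary vector addition of the corresponding Cartesian points.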
Let formula_15 be a two-dimensional hexagonally sampled signal and let both arrays be of size formula_16. Let formula_17 be the Fourier transform of "x." The HDFT equation for the forward transform is given by formula_18 Notice that the HECS representation of the HDFT overcomes Mersereau's "insurmountable difficulty" since it is a separable kernel, which led to the development of the hexagonal fast Fourier transform. Alternative addressing schemes. There have been several attempts to develop efficient coordinate systems for the hexagonal grid. Snyder describes a coordinate system based on non-orthogonal bases which is referred to as the "h2" system. Her developed an interesting three-coordinate system that uses an oblique plane in three dimensional space. For various reasons, both of these approaches require cumbersome machine representations that lead to inefficient image processing operations. Generalized balanced ternary (GBT) is based on a hierarchy of cells, where at every level the cells are each aggregates of cells from the previous level. In two-dimensions, GBT can be used to address the hexagonal grid where each grid point is addressed with a string of base-7 digits and each digit indicates the hexagon's position within that level of the hierarchy. The use of GBT and slightly modified versions of GBT such as HIP and Spiral Architecture for addressing hexagonal grids in two dimensions are abundant in the literature. While these approaches have some interesting mathematical properties, they fail to be convenient or efficient for image processing. Other 2D hexagonal grid applications. Although HECS was developed mainly for digital image processing of hexagonally sampled images, its benefits extend to other applications such as finding the shortest path distance and shortest path routing between points in hexagonal interconnection networks. Other addressing approaches have been developed for such applications but they suffer the same drawbacks as the ones described above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " (a,r,c) \\in \\{ 0,1 \\} \\times \\mathbb Z\\times\\mathbb Z" }, { "math_id": 1, "text": "\\begin{bmatrix}\nx\\\\y\n\\end{bmatrix} = \\begin{bmatrix}\n\\frac{1}{2} & 0 & 1 \\\\ \\frac{\\sqrt{3}}{2} & \\sqrt{3} & 0\n\\end{bmatrix}\\begin{bmatrix}\na\\\\r\\\\c\n\\end{bmatrix} = \\begin{bmatrix}\n\\frac{a}{2} + c \\\\(\\sqrt{3})(\\frac{a}{2} + r)\n\\end{bmatrix}." }, { "math_id": 2, "text": " \\text{HECS} = \\{ 0,1 \\} \\times \\mathbb Z \\times\\mathbb Z." }, { "math_id": 3, "text": "\\mathbf{p}_i =\\begin{bmatrix}\na_i\\\\r_i\\\\c_i\n\\end{bmatrix}\n\\in \\text{HECS}." }, { "math_id": 4, "text": "\\mathbf{p} =\\begin{bmatrix}\na\\\\r\\\\c\n\\end{bmatrix}\n\\in \\text{HECS}." }, { "math_id": 5, "text": " k \\in \\mathbb N." }, { "math_id": 6, "text": "(\\text{HECS},+)" }, { "math_id": 7, "text": "\\mathbf{p}_1 + \\mathbf{p}_2=\\begin{bmatrix}\na_1 \\oplus a_2\\\\r_1 + r_2 + (a_1 \\land a_2)\\\\c_1 + c_2 + (a_1 \\land a_2)\n\\end{bmatrix},\n" }, { "math_id": 8, "text": "\\oplus" }, { "math_id": 9, "text": "\\land" }, { "math_id": 10, "text": "(\\text{HECS},-)" }, { "math_id": 11, "text": "-\\mathbf{p} =\\begin{bmatrix}\na\\\\-r-a\\\\-c-a\n\\end{bmatrix}.\n" }, { "math_id": 12, "text": "\\mathbf{p}_1-\\mathbf{p}_2 = \\mathbf{p}_1 + (-\\mathbf{p}_2)." }, { "math_id": 13, "text": "k\\mathbf{p}=\\begin{bmatrix}\n(ak) \\bmod 2\\\\kr + (a)\\left\\lfloor \\frac{k}{2} \\right\\rfloor \\\\kc + (a)\\left\\lfloor \\frac{k}{2} \\right\\rfloor\n\\end{bmatrix},\n" }, { "math_id": 14, "text": "-k\\mathbf{p}=k(-\\mathbf{p})." }, { "math_id": 15, "text": "x(a, r, c)" }, { "math_id": 16, "text": "n\\times m" }, { "math_id": 17, "text": "X(b, s, d)" }, { "math_id": 18, "text": "X(b, s, d) = \\sum_{a} \\sum_r \\sum_c x(a, r, c) \\exp\\left[-j\\pi\\left(\\frac{(a+2c)(b+2d)}{2m} + \\frac{(a+2r)(b+2s)}{n} \\right) \\right] " } ]
https://en.wikipedia.org/wiki?curid=65921358
65923399
Tandem rolling mill
A tandem rolling mill is a rolling mill used to produce wire and sheet metal. It is composed of two or more close-coupled stands, and uses tension between the stands as well as compressive force from work rolls to reduce the thickness of steel. It was first patented by Richard Ford in 1766 in England. Each stand of a tandem mill is set up for rolling using the mill-stand's spring curve and the compressive curve of the metal so that both the rolling force and the exit thickness of each stand are determined. For mills rolling thinner strip, bridles may be added either at the entry and/or the exit to increase the strip tension near the adjacent stands, further increasing their reduction capability. History. The first mention of a tandem rolling mill is Richard Ford's 1766 English patent for the hot rolling of wire. In 1798 he received another patent, this time for the hot rolling of plates and sheets using a tandem mill. The tandem mill's main advantage was increased production: only a single pass was required, saving time; and greater tensions were possible between the stands, increasing the reduction in the stands for the same roll force. One disadvantage was its high capital cost compared to that of a single-stand reversing mill. The development of transfer bar casting, also called thin slab casting meant that slab roughing mills were no-longer required. Thin strip casting with a thickness of 2 mm has bypassed the tandem hot mill; and further reduction in the casting thickness to produce strip steel the same as annealed cold rolled strip will bypass the tandem cold mill and the annealing process. The need for tandem rolling mills, and rolling mills in general, is being reduced by the use of continuous casters. Mill stand characteristics. The mill stand spring curve is obtained by pressing the work rolls together with increasing force. This causes the work rolls to bend, the screw-downs to compress and the mill housings to stretch. To reduce work roll bending, a much larger roll is positioned above the top work roll and another is placed below the bottom work roll. This arrangement is called a 4-high mill, as shown in sketch 1. Calculating the screw-down position. The red line in graph 1 is the linear approximation "F" = "F""d" – "M" ⋅ ("S" – "S""d") or conversely, the screw-down position where "M" is called the mill modulus and is the slope of the spring curve in the area of the datum point ( "S""d", "F""d"). For most mills "M" is approximately 4 MN/mm. Larger values would require much thicker mill housings and screw-downs. A datum is performed by lowering the screws below face until the measured force equals the required datum force "F""d", at which point the screw-down position is set so that it equals the datum screw position "S""d". At BlueScope Steel's No. 2 temper mill the datum point was 5 mm at a force of 7 MN. Wood and Ivacheff analysed the information obtained when measuring the mill modulus by pressing the work rolls together until a typical rolling force was reached, and then they continued to measure the force and screw-down position as the rolls were lifted. The shape of the plotted figures (overlaid, looped, or a figure eight) was found to give good indication of the mill stand's condition. The datum point is chosen so that the screw-down position "S" is never negative. 
This was necessary with the control computers of the 1960s, such as the GE/PAC 4020 installed at the then Australian Iron &amp; Steel (now BlueScope) Port Kembla plate mill, which used an assembler language that did not like negative numbers. Also, a datum point is used rather than trying to measure the point at which the force just becomes zero. The exact equation used to calculate the required screw-down setting for a required force is: where: "k" is the value to best suit the measured values and "S""a" is an adapter which corrects for the thermal expansion of the mill housing and rolls as they warm up during rolling. It is set to zero after a work roll change, when the datum is performed with the new rolls at room temperature. Using the measured values of "F" and "S" during the rolling of one piece of metal, allows the adaptor "S""a" to be calculated for use at the start of the next piece. Roll force measurement. Load cells are used to measure the force exerted onto the work rolls by the product. To obtain the true roll force acting on the work rolls the position of the load cells is important; are they with the filler plates under the bottom backup-roll bearings, or on top of the top backup-roll bearings. Both positions are shown in sketch 2. Another thing that must be considered (if they are present) is the roll balance cylinders. The roll balance cylinders act to separate the work rolls (no force between them) when the screw-downs are raised; that is, the force of the balance cylinders "F"bal is just greater than the weight of the top roll set, ("Wt"bu + "Wt"wr). The above roll weights "Wt"bu and "Wt"wr are only nominal values; the actual values will vary a little depending on how many times the rolls have been ground down between campaigns. Since the roll weights are only nominal values, any residual error is slowly zeroed out whenever the roll balance is on and the screws are raised sufficiently. Steel characteristics. A useful formula for the compression curve of steel is: where "k"0 moves the curve vertically, i.e. it sets the initial yield stress; "k"3 changes the slope, i.e. the metal's work hardening rate. The initial steep section in graph 2 is elastic compression. The effective height of this is reduced by the entry and exit tensions when present, as in a tandem mill. Notice that the curve becomes steeper as the thickness approaches zero, i.e. it would take infinite force to make the steel infinitely thin. The slope of the plastic region around the operating point is normally represented by the letter "Q". Mathematical modelling. For a rolling mill to operate, the work roll gap is set prior to the product entering the mill. Originally this setting was empirical; that is, set by the operators according to their experiences of that product's initial dimensions and the required finished thickness. With a reversing mill, the profile of intermediate thicknesses was also empirical. To obtain greater consistency, attempts were made to characterize the rolling process. In 1948, Bland and Ford were one of the first to publish such a mathematical model. Essentially such mathematical models represent the mill (its spring curve) and the compressive behavior of the product to calculate the mill's "setup". Mill setup calculation. The term "setup" is used for the calculation of the actuator settings required by each mill stand to roll the product. These settings include the initial screw-down position, the main drive speed, and the entry and exit tension references where applicable. 
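As a rough sketch of the first of these settings, the initial screw-down position, the linear spring-curve approximation from the earlier section can be inverted for a required rolling force. The numbers are the illustrative No. 2 temper mill values quoted above; the exact equation described earlier adds a correction term k and the thermal adapter S_a, which are omitted here.

```python
# A minimal sketch, not the mill's actual setup code: invert the linear
# spring curve F = F_d - M*(S - S_d) to get the screw-down position for a
# required rolling force.  Values are the illustrative ones quoted above
# (datum 5 mm at 7 MN, mill modulus about 4 MN/mm).

MILL_MODULUS = 4.0   # M, MN of force per mm of screw movement
S_DATUM = 5.0        # S_d, screw position recorded at the datum, mm
F_DATUM = 7.0        # F_d, force applied when the datum is taken, MN

def screw_position_for_force(required_force_mn):
    """Screw-down setting S that the linear spring curve predicts for a force."""
    return S_DATUM - (required_force_mn - F_DATUM) / MILL_MODULUS

def force_at_screw_position(screw_mm):
    """Spring-curve force F = F_d - M*(S - S_d), floored at zero (rolls apart)."""
    return max(0.0, F_DATUM - MILL_MODULUS * (screw_mm - S_DATUM))

# A required force of 15 MN needs the screws run down to 3 mm, and the
# rolls just touch (zero force) at S_d + F_d / M = 6.75 mm.
print(screw_position_for_force(15.0))   # 3.0
print(screw_position_for_force(0.0))    # 6.75
print(force_at_screw_position(3.0))     # 15.0
```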
This setup calculation is normally performed either in a lower-level computer or a PLC that controls a rolling mill stand(s). A graphical representation of a mill model can be obtained by plotting the mill stand spring curve and the compression curve for the strip against the same distance axes; then the intersection point gives the solution of expected rolling force "F", and final Strip Thickness "h", and also the required initial screw-down position "So". See graph 3. In its simplest form This equation is known as the BISRA equation. It is also known as the gaugemeter equation because measurements of "S" and "F" can be used to calculate the exit thickness as measured by an instrument called a thickness gauge. If the work rolls are initially pressed together by the screw-downs, then there will be a force "F""o" acting between the top and bottom work rolls before the strip is present. In this situation, the Mill is said to be set "below face", as shown in graph 3. This is often the case with thin strip. However, if there is an actual gap before the metal enters the mill, then "F""o" will be zero, and (from equation 1) "S""o" must be greater than "S""d" + "F""d" / "M" The calculation is repeated for the following stands with the exit thickness "h" of the one stand becoming the entry thickness "H" of the next stand. Note that the compression curve has a greater or lesser elastic region depending on the entry and exit tension stresses of that next stand. Interstand tensions. One could say the steel is compressed by the force of the work rolls, equivalent to forging; however, if there are tensions present, then it could be said that the steel is stretched by the tension pulling it through the rotating work rolls, as in extruding through a die. See sketch 3. The tensions reduce the effective elasticity of the product by an amount equal to the induced tension strain. This tension effect is represented in graphs 2 and 3 by drawing the steel compression curve with the elastic region reduced accordingly. The relationship of the rolling force to the entry and exit strip tensions is important in determining the finished strip flatness. Too much force produces strip with edge wave (often called "pressure wave"). Too much tension, that is too little force, can cause center buckle (depending on the crown of the rolls). The tension stress is 30% to 50% of the yield stress for cold mills and often higher in hot mills (which can result in heavy necking and even strip breaks). In sketch 4, observe that the force is offset from the work roll centers because the strip is thicker at the entry than at the exit; this is one component of the torque that the main drives must supply. The other component is the difference in the tension forces. If the exit tension force is much greater than the entry tension force, then the tension torque may be larger than the torque due to the rolling-force and the main drives will generate power. The neutral point, or no-slip point is the point within the roll bite where the work rolls and the strip are doing the same speed. The position of the neutral point is influenced by the entry and exit tensions. Shudder occurs when the neutral point is at an edge of the roll bite; that is the work rolls are alternately grabbing the strip and letting it slip. Forward slip (1+"f") is the ratio of the exit strip speed to the work rolls peripheral speed. Backward slip (1−"b") is the ratio of the entry strip speed to the work rolls peripheral speed. Roll wear. 
As the strip slides through the work rolls it polishes them and the strip. This changes the friction coefficient of the strip-to-roll surface. So, to predict the forces and the power required to drive the work rolls, the mill modelling estimates this roll wear based on the length of strip rolled. To reduce the friction in the roll bite, a warm oil-water emulsion is sprayed at the entry side of the roll bite in cold rolling mills. The work rolls of all the stands in a tandem mill are normally changed at the same time. The new work rolls will have been ground to restore their desired crown and roughness. When this is done, the roll wear is reset to zero in the modelling. The heat generated in the roll bite of a cold mill warms both the strip and the rolls. Since the cold mill rolls have no coolant applied, just a small amount of warm oil-water emulsion, the work rolls in a cold mill become hotter than those in a hot mill, where copious amounts of cold water are sprayed at the exit side of the roll bite. Back-up roll bearings speed effect. The back-up roll bearings are usually white-metal bearings which rely on a film of oil between the shaft and the white-metal to reduce the friction; as seen in sketch 5. As the speed increases more oil is dragged into the active region of the bearing and this increases the thickness of the oil film in this region. This pushes the top work roll down and the bottom work roll up, which reduces the roll gap in the same manner as running the screws down. To compensate for this, most screw-down control loops include a feed-forward parameter derived from either; an equation of rolling speed, or a value extracted from a lookup table using linear interpolation. To ensure an oil film exists even at zero speed; pumps are often used to force oil through very tiny holes into the bearing's active region; this is referred to as hydrostatics. In Chart 1, the scale for the screw-downs position (mauve trace) was 0.2mm per division; this was too coarse. Consequently Chart 2 was created from a similar coil, but with a screw-down position scale of 0.06mm per division; that is, from 5.8mm to 6.4mm. In the chart recordings, notice that the force (light green trace) has been held constant by the automatic control, which has raised the screw-downs (mauve trace) as the speed (red trace) has increased. This increase in screw position is a measure of the white metal bearing speed effect. For a more accurate measurement, the force of each mill stand is measured as it is run through its speed range without strip present. The values measured from Chart 2 were plotted in an Excel spreadsheet. The equation that was used to match the measured points was 680×POWER((speed/1200),0.225)-285. Note that the use of oil hydrostatics can hold the oil film nearly constant up to about 20% of full speed; hence no screw-down movement would be required in that low speed range (this is shown as the red line in the graph of the measured points). Now recall the gaugemeter equation in its simplest form: &lt;templatestyles src="Block indent/styles.css"/&gt;"h" = "S" – "S""d" – ("F" – "F""d") / "M" This equation is modified to include the backup roll bearing speed effect "S""v" especially when rolling product which has a thickness similar to the speed effect (~400 μm at some temper mills). Thus, Discontinuity in the stress verses strain of annealed steel. The discontinuity in the stress/strain of annealed steel makes it impossible to create round tinned steel-cans. 
Wherever the steel bends first is where most of the bending will occur, rather than uniformly. The discontinuity is shown within the red circle in graph 4. It is the reason the strip is given a light reduction (~1.3%) normally referred to as an elongation or extension. Since it is referred to as an elongation and not a reduction, this strip is said to have been reduced only once (at the cold mill prior to annealing); hence the term single reduced (SR). After the elongation, the discontinuity is no longer present. Alternatively; after annealing, the steel strip can be reduced a second time (by up to 30%) to make it both thinner and work hardened. When this is done, the strip is said to have been reduced twice; that is, doubled reduced (DR). Grade adaption. While a tandem mill is rolling, the "setup" computer collects the following information: It also has available the coil's schedule information: The actual rolling forces are compared with the forces predicted by the mill model given the information obtained. Any differences adjust the calculated forces by trimming the force adaptors, "F"a . Thus equation 5 becomes &lt;templatestyles src="Block indent/styles.css"/&gt;"h" = "S" – "S""d" – "S""v" – ("F" + "F"a – "F""d") / "M" Recall that equation 3 gave the compressive strength of steel at BlueScope steel's 5 stand cold mill &lt;templatestyles src="Block indent/styles.css"/&gt;"K" = "k"0 + "k"1 [ "k"2 + ln("H"/"h") ]"k"3 The average value of the force adaptors trimmed the value of "k"0 for the actual grade being rolled. Also the slope of the force adaptors corrected the work hardening rate, "k"3 for the same grade of steel. The value of "k"3 for super-strapping was approximately twice that for normal tin-plate. This made switching between grades from coil-to-coil much smoother. Threading. A few difficulties arise when threading any tandem cold mill. One way to minimize these problems is to use "open-gap" threading. With open-gap threading, the next stand to be threaded has a roll gap greater than the thickness of the strip. Once threaded, the top work roll is lowered onto the strip and then the strip moves on. Open-gap threading ensures that the head-end does not mark the work rolls as it enters the roll-gap. And having the strip stopped as the screw-downs are lowered avoids skidding as the work roll just touch the strip. For "closed-gap" threading of a tandem mill, it is important that the head-end of the strip remains flat so that it enters the next stand easily. Immediately the strip enters a stand, there is no tension on either side of it; this means that the force would be greater than during rolling, so the roll gap (screw-downs) initially needs to be increased a little with respect to that required during rolling in order to prevent excessive edge-wave. The closed-gap screw-down setting is calculated using the mill model for thread speed and with no tensions. Another issue with closed-gap threading is the speed of the stand being threaded. It needs to be faster than the proceeding stand, so that the strip doesn't build up between the stands; but not so fast that it pulls the strip taught too quickly and breaks the strip. In all cases, the strip's head-end will remain thicker because of the lack of tensions as it is threaded; consequently there will be a sizeable amount of head-end off-gauge strip that must be scrapped later. Notice in the gif simulation, that the head-end speed remained constant when moving. This was the practice at BlueScope Steel's 5 stand cold mill. Control issues. 
The control of a tandem rolling mill is multi-layered. Two examples are shown for BlueScope Steel's No.2 temper mill with the exit stand configured for strip shape (flatness). At the lowest level is the current/voltage control of the DC electrical drive motors. At this level the bridles and reels are in open-loop tension mode; this means they run with a voltage which is related to the strip speed and a current controlled according to the tension required in the nearby strip. The tension reel has a motoring current to pull the strip taught, and the payoff reel generates to pull back against the strip. To keep these tensions constant during acceleration/deceleration of the mill, an additional current must be applied to the reels and bridles to produce the extra torque required to accelerate/decelerate them, especially when there is a large part of a coil on a reel. This is referred to as “inertia compensation”. Above the direct motor controls, is the last stand's force control which sets the strip flatness. The rolls heat up slowly while processing a coil, and this would close the roll gap and increase the rolling force; however, to prevent this increase, the force control raises the screw-downs occasionally, as required. Also at this level is the inter-stand tension control. It can act into either of the adjacent stands, but as in the diagrams shown, it acted into both stands in the proportions shown. The thickness/elongation controls and speed profile sit above all of the other control loops. The speed profile is determined in the mathematic modelling according to the desired reduction. It is multiplied by the master ramp's speed as set by the operator using his/her inputs, which are: thread (go to thread speed); run (accelerate to top speed); hold (stop acceleration/deceleration); stop (steady deceleration to zero speed); and emergency stop (uses maximum deceleration possible). Back-up roll eccentricity. With hot rolled slabs and plates, the thickness varies mainly due to the changes in the temperature along the length. The colder sections are a result of the supports in the re-heat furnace. When cold rolling, virtually all of the strip thickness variation is the result of the eccentricity and out-of-roundness of the back-up rolls from about stand three of the hot strip mill through to the finished product. The back-up roll eccentricity can be up to 100 μm in magnitude per stack. The eccentricity can be measured off-line by plotting the force variation against time with the mill on creep, no strip present, and the mill stand below face. A modified fourier analysis was employed by the five stand cold mill at Bluescope Steel, Port Kembla from 1986 until that cold mill ceased production in 2009. Within each coil, the exit thickness deviation times 10 for every meter of strip was stored in a file. This file was analyzed separately for each frequency/wavelength from 5m to 60m in steps of 0.1m. To improve the accuracy, care was taken to use a full multiple of each wavelength (100*). The calculate amplitudes were plotted against the wavelength, so that the spikes could be compared to the expected wavelengths created by the backup rolls of each stand. If a mill stand is fitted with hydraulic pistons in series with, or instead of the electrically driven mechanical screws, then it is possible to eliminate the effect of that stands back-up roll eccentricity. 
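The off-line wavelength scan described above can be sketched as follows (assumptions: the thickness-deviation file holds one value per metre of strip, and a simple single-frequency correlation stands in for the modified Fourier analysis, whose exact details are not given here):

```python
import math

def amplitude_at_wavelength(deviations_um, wavelength_m):
    """Amplitude of the thickness-deviation component at one wavelength."""
    n_periods = int(len(deviations_um) // wavelength_m)
    if n_periods < 1:
        return 0.0
    n = int(round(n_periods * wavelength_m))   # use a whole number of periods
    w = 2.0 * math.pi / wavelength_m
    s = sum(d * math.sin(w * i) for i, d in enumerate(deviations_um[:n]))
    c = sum(d * math.cos(w * i) for i, d in enumerate(deviations_um[:n]))
    return 2.0 * math.hypot(s, c) / n

def wavelength_scan(deviations_um, start=5.0, stop=60.0, step=0.1):
    """(wavelength, amplitude) pairs from 5 m to 60 m in 0.1 m steps;
    amplitude spikes are compared with each stand's back-up roll circumference."""
    n_steps = int(round((stop - start) / step))
    return [(round(start + k * step, 1),
             amplitude_at_wavelength(deviations_um, start + k * step))
            for k in range(n_steps + 1)]
```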
While rolling, the eccentricity of each back-up roll is determined by sampling the roll force and assigning it to the corresponding portion of each back-up roll's rotational position. These recordings are then used to operate the hydraulic piston so as to neutralize the eccentricities. Sensitivities and their uses. In a tandem rolling mill, the gearing of the screw-downs is normally large enough that the work rolls can be moved during rolling. With such a ratio the worm gear is said to be self-locking; that is, the rolling force is unable to push through the worm drive and rotate the electrical drive motor. This means that no brake is attached to the electrical motor. If during rolling, it is necessary to move the screw-downs to correct either the rolling force or the exit strip thickness, then consider the triangle, shown circled in the graph 5 and enlarged in sketch 6, created when the screw-downs are moved down from the purple line to the green line. The strip becomes thinner and the rolling force increases. Δ"S" = Δ"h" + "a" with Δ"H" = 0, but the slope "Q" = Δ"F" / Δ"h", and the slope "M" = Δ"F" / "a" Therefore, Δ"S" = Δ"F" / "Q" + Δ"F" / "M" Which gives This term is used to ensure that the control of the rolling force using the screws is independent of the metal being rolled. Using Δ"F" = "Q" ⋅ Δ"h" gives This factor is used to guarantee that the control of the exit thickness by the screws is independent of the metal being rolled. The process sensitivities are highly product dependent, so to obtain reasonable values they are calculated off-line in the setup computer, and then incorporated in the real-time control systems. Mass flow. A rolling mill does not create nor destroy steel during normal steady state rolling. That is, the same mass of steel leaves the mill as entered it. And so; expressing the entry volume as "H" .⋅ "W""n". ⋅"ℓ", and the exit volume "h" ⋅. "W""x" .⋅ "L" But the entry length "ℓ" "v".⋅ "t" and the exit length "L" "V t" where "t" is the total rolling time. Therefore "ρ" ⋅. "H" ⋅. "W""n" ⋅. "v" ⋅. "t" "ρ" ⋅. "h" ⋅. "W""x" ⋅. "V" ⋅. "t" The density "ρ" is unaffected by the rolling process and can be cancelled out. The width may change, but it does so by an insignificant amount (only a fraction of the strip thickness), and so the change may be ignored when rolling thin (&lt;1mm) strip. The roll force tends to widen the strip, while the entry and exit tensions (when present) tend to make the strip narrower. So, cancelling the density "ρ", the width "W", and the time "t", gives This can be used in a rolling mill to calculate the exit thickness "h" that the X-ray gauge will measure when the corresponding portion of the strip finally reaches the gauge. By assuming all of the cold mill head-end off-gauge has been completely removed by the previous continuous annealing line; the scheduled entry thickness can be substituted in place of the actual entry thickness, "H". Then the entry bridle and exit bridle speeds can be used as the measurements of entry speed, "v" and exit speed, "V" respectively. The resulting calculated thickness deviation can be seen as the light blue trace in chart recording 3. Notice that the thickness control was working at thread speed (red trace). In the block diagram, the calculated gauge (thickness) is q62 and the thickness error is q66. Note the use of the sensitivity factor dS/dh as q2. There are two other interesting factors with this control: A bumpless PI control. 
Initially the control appears to be a PD control with q18 containing a P term equal to q16 times the gain q4, plus a D term being the constant q10 times the change in q16. However, since q20 is effectively added to itself, this summation converts the P term into an integral, and the D term becomes a proportional term. This arrangement has the advantage that the gains q4 and q10 can be changed while the control loop is active without causing a step in the output q20; that is, it's a bumpless control. The overall maximum/minimum limit is designed to prevent the equivalent of integral windup. An NIC trim into the inter-stand tension control. Normally moving the screw-downs to correct the strip thickness would perturb the inter-stand tension; its control would then need to trim the speed of the appropriate stand to restore the tension. So, what is required, is a compensating trim applied to the tension control at the same time as the thickness trim goes to the screw-downs. This is referred to as a non-interactive control; that is, the thickness correction no-longer disturbs the tension. In the block diagram, the screw trim q20 is converted into a compensating IS tension trim using the sensitivity factor dT/dS (the value of this was measured by applying a small step change to the thickness reference and looking for any change in the IS tension). For the coil in chart recording 1 above, the cold mill Head-end off-gauge was not fully removed at the CA line; this can be seen as the difference between the X-ray deviation (green trace) and the calculated thickness deviation (light blue trace). Bridle rolls. Bridle rolls are used to increase or decrease the strip tension in a processing line or rolling mill. The bridle rolls normally come in a set of two, three, or four rolls of equal diameter, with each roll individually powered by an electric motor/generator. The drives of the entry bridle generate power as they pull back and increase the strip tension after them. This power is partly provided by the exit bridle which pulls on the strip before it, and so decreases the strip tension after it. To assist with threading there are normally guides and even a pinch roll or rolls, as shown for the two roll bridle in sketch 7. To determine the size of the electrical drives, it is necessary to calculate the values of the intermediate tension or tensions. The maximum tension difference Δ"T" across a single bridle roll is determined by the wrap angle "α" (in radians) of the strip around that roll, and the roll-to-strip sliding friction "μ", i.e. The power required to drive such a bridle is ("T""2" – "T""1") ⋅ ("R" + "h"/2) ⋅ "ω", i.e. ("T""2" – "T""1") ⋅ "v" where *"v" is the strip speed in m/sec *"h" is the strip thickness in meters *"R" is the radius of the Bridle Rolls in meters and *"ω" is the Bridle's angular speed in radians per second. The electrical power required by the drive motor = volts ⋅ amps. The voltage can be regulated according to the strip speed, leaving the current to be proportional to the required tension change. To prevent slippage, the bridle rolls within a set are operated at only a fraction "p" of the maximum tension difference, so the actual tension difference across each bridle roll will be "e" "p"⋅"μ"⋅"α". That is, a lower value of friction is used in the calculations. Consider the simplest case: A two roll bridle set with both rolls having the same wrap angle "α". Then "T""2" = "T""1" ⋅ "e" "p"⋅"μ"⋅"α", and "T""3" = "T""2" ⋅ "e" "p"⋅"μ"⋅"α". 
Therefore "T""2" / "T""1" = "T""3" / "T""2" which gives, "T""2"2 = "T""1" ⋅ "T""3" And so Now consider an example: &lt;templatestyles src="Block indent/styles.css"/&gt;Let "T"3 = 2.0 "T"1, then, "T"2 = 1.4142 "T"1 so the tension across the first bridle will be (1.4142–1.0) "T""1" = 0.4142 "T""1" and the tension across the second bridle will be (2.0–1.4142) "T""1" = 0.5858 "T""1" And so, the second bridle requires just over 40% more motor power compared to the first. If one wishes to reduce the number of spares; then it is desirable to have motors of the same power. To do that, the wrap angle on the first bridle must be increased so that the tension difference across both bridles is the same; "T""3" − "T""2" = "T""2" − "T""1". That is, Let the wrap angle of bridle roll 1 be ("α"+Δ), where "α" is the wrap angle of bridle roll 2. That is "T""2" / "T""1" = "e" "p"⋅"μ"⋅("α"+Δ) = 1.5 Taking logarithms of both sides gives For bridle roll 2: "T""3" / "T""2" = "e" "p"⋅"μ"⋅"α" = 2.0 / 1.5 Again, taking logarithms gives From equations B4 and B5: Now consider a four roll bridle with tensions "T""1" through to "T""5". Normally in such a bridle set, all of the rolls have the same strip wrap angle, as shown in sketch 7. The wrap angle from "T""1" to "T""3" is the same as that from "T""3" to "T""5". Therefore, using equation B2 Similarly, So if we let "T""5" = 4.0 "T""1" then "T""3" = 2.0 "T""1" which gives "T""2" = 1.4142 "T""1" And finally Inter-stand tension control. Consider a violin string: &lt;templatestyles src="Block indent/styles.css"/&gt; The tension force "F" established in the string of length "ℓ" is related to "e", the amount the string is stretched: &lt;templatestyles src="Block indent/styles.css"/&gt;( "F" ⋅ "ℓ" ) / ( "e" ⋅ "A" ) = "E" where and the area "A" is equal to the strings cross-sectional area. Now consider the strip between the stands of a multi-stand mill: &lt;templatestyles src="Block indent/styles.css"/&gt;In time Δ"t" an extra length of "ℓ" = "V"⋅Δ"t" leaves the previous stand, where "V" is the strip's exit speed. To establish a tension "T" in this piece of strip, requires it to be stretched by an amount equal to "T"⋅"V"⋅Δ"t" / ("E"⋅"A") in the time interval of Δ"t"; that is, a speed of "T"⋅"V" / "E"⋅"A" where the strip's cross-sectional area "A" is equal to its width "W" times its thickness "h", If the speed difference between the next stand's entry and the previous stand's exit is different to this, then the strip of length "L" between the two stands will stretch or relax by Δ"e", the integral of speed difference, and this will change the actual tension as shown in the block diagram. The overall transfer function from speed-difference to tension is: &lt;templatestyles src="Block indent/styles.css"/&gt;"E"⋅"W"⋅"h" / ("sL"+"V") Therefore, the control loop divides the tension error by the strip width "W" and the strip thickness "h" in order to have a consistent response. In chart recording 4, the two components necessary to change the tension (brown trace) can be seen in the speed trim (light blue trace). There is the extra speed difference necessary to stretch the strip already in the inter-stand gap (large trim); and the slight increase in speed difference to maintain the new level of tension. Strip shape. Strip shape is one of the important quality factors of a finished strip, along with the thickness and the mechanical properties. Poor shape is revealed when the strip fails to lie flat when placed unrestrained on a flat surface. 
To perform this test, a sample of strip is taken at least 3 wraps in from the end of the finished coil; this is called a "run-out". Shape errors occur when the strip has not been rolled uniformly across its width. The problem is, that strip shape cannot be seen while the strip is being rolled because it is under tension; hence the need to do a run-out. The main shape defects are: An error in shape is quoted in I-units. If part of 100m of strip is rolled 1mm longer than the rest, then it is in error by one I-unit. There are a few ways in which the shape can be influenced: Coil collapse. If a coil of thin strip is wound with low tension, then it may not have the strength to support itself and may collapse, especially if roughly handled, see figure 1(a). A solution is to provide chocks to support the coils more appropriately. If a large coil is wound with a high tension, then the tension stress builds up on the inner wraps and can cause them to kink, as seen in figure 1(b). An early solution was to place a steel sleeve onto the tension reel mandrel before the coil started. However, loading the sleeve slowed production, and the handling of the sleeves back from the following production lines added an extra cost. Figure 1(c) shows a coil with a sleeve sitting on chocks. Industrial Automation Services devised a solution. The early wraps are wound with a high tension to create a pseudo sleeve, then the body of the coil is wound with a moderate tension supported by that pseudo sleeve; as shown in figure 1(d). Computers and HMI. In the 1980's it became possible for digital displays to replace hard-wired headboard-mimics and operator control/display panels. The descriptions below are based on the upgrade of BlueScope Steel's 5 stand cold mill in 1985. There are normally three levels of computers directly associated with a tandem rolling mill, as shown in sketch 11. Level 1 real-time control. At the lowest level is a Programable Logic Controller, PLC or minicomputer. The PLC or minicomputer contains the control loops that run the rolling mill. It receives inputs into these dynamic controls directly from the hard-wired desk and gets the targets for the exit thickness, the tensions and the screw-down positions from the setup computer. The desk controls include the speed requests (the thread, run and stop push buttons), any screw-down movements (each joystick has raise, lower, tilt left, and tilt right), and the tension trims (increase / decrease toggle switches). Level 2 batch processing. At level 2 is a keyboard and screen having a menu-based interface into the setup computer's mill model. This operator interface is normally described as the Human Machine Interface, HMI. Through this interface the operator trims the setup for the next coil. He/she can trim the individual stand reductions, the inter-stand tensions, and the top speed as required. When the last stand has force control, then the rolling force can also be trimmed. At this level there are a few TV monitors. At BlueScope Steel's cold mill these monitors included: Actually these displays can be connected to either level 1 or level 2; for example, after the more recent (in 1997) BlueScope Steel upgrade of their 6 stand hot strip mill, the operator displays are driven by the level 1 PLC's. Level 3 supervision. The setup computer gets each coil's primary data from a scheduling computer. 
This scheduling computer usually receives the product's data from the previous production unit and will pass on the results of this mill's rolling to the next unit. The primary data sent by the scheduling computer consists of the nominal entry thickness and width, the aim thickness, and if rolling a plate the aim width. The scheduler assembles the coils or plates to be processed within each campaign using his/her Human Computer Interface, HCI terminal. A campaign begins with a scheduled roll change; that is, when all of the work rolls of the tandem mill are changed together. For a cold tandem mill, the campaign has a coffin-shaped width profile. The first few coils are about 3/4 of the full width. Gradually the coils become wider until the maximum product width is reached. This allows the thermal camber of the rolls to develop before rolling the full-width product. From then on the product becomes narrower, to avoid the excessive work roll wear corresponding to the strip edges. Rolling mill definitions. These definition only apply to the rolling of slabs, plates and strip in a rolling mill. Reduction. Reduction, "r" is defined as the per-unit change in thickness with respect to the entry thickness "H", and so formula_0"r" ("H" – "h") / "H" formula_0where "h" is the exit thickness. As the material is reduced, its length becomes proportionately longer; this can be seen in the attached GIF movie. There are many other definitions of the word reduction; such as in chemistry, medicine, surgery, safety, investment, and in a more general sense, such as in cooking and waste reduction, etc. Elongation. When the reduction is small (&lt;2%), it is normally referred to as an elongation or an extension. Elongation, "e" is defined as the per-unit increase in length due to a decrease in area with respect to the entry, regardless of shape. Given an entry length "ℓ", then formula_0"e" ("L" – "ℓ") / "ℓ" formula_0where "L" is the final length. If the width is unaffected (as is the case when rolling thin strip &lt;2mm, see sketch 12), then the mass flow concept, gives formula_0 "H" . "ℓ" "h" . "L" Thus elongation, "e" ("H" – "h") / "h" When the elongation is large it is normally measured as reduction, "r" which is defined as the per-unit change in thickness with respect to the entry thickness "H"; and so if "h" is the exit thickness, Reduction, "r" ("H" – "h") / "H" Note that the thickness difference ("H" – "h") is divided by the exit thickness "h" for elongation and by the entry thickness "H" with reduction; so they are not identical. An elongation of typically 1.3% is performed to eliminate the discontinuity (seen at the yield point in graph 6) in the stress verses strain reaction of thin steel strip before it is tinned ready for making cans intended for containing preserved foods. There are many other definitions of the word elongation; such as in astronomy, plasma physics, genetics, and in a more general sense, such as referring to the lengthening of an elastic band. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. /ref&gt; and the Davy-Mckee Vidimon air bearing shapemeter.
[ { "math_id": 0, "text": "\\quad" } ]
https://en.wikipedia.org/wiki?curid=65923399
65926
Angular displacement
Displacement measured as an angle when a body is in circular or rotational motion The angular displacement (symbol θ, ϑ, or φ), also called angle of rotation, rotational displacement, or rotary displacement, of a physical body is the angle (in units of radians, degrees, turns, etc.) through which the body rotates (revolves or spins) around a centre or axis of rotation. Angular displacement may be signed, indicating the sense of rotation (e.g., clockwise); it may also be greater (in absolute value) than a full turn. Context. When a body rotates about its axis, the motion cannot simply be analyzed as that of a particle, since in circular motion it undergoes a changing velocity and acceleration at any time. When dealing with the rotation of a body, it is simpler to treat the body itself as rigid. A body is generally considered rigid when the separations between all its particles remain constant throughout the body's motion, so that, for example, no part of its mass flies off. In reality all bodies are deformable, but this deformation is minimal and negligible for the present analysis. Example. In the example illustrated to the right (or above in some mobile versions), a particle or body P is at a fixed distance "r" from the origin, "O", rotating counterclockwise. It is then convenient to represent the position of particle P in terms of its polar coordinates ("r", "θ"). In this particular example, the value of "θ" changes, while the value of the radius remains the same. (In rectangular coordinates ("x", "y") both "x" and "y" vary with time.) As the particle moves along the circle, it travels an arc length "s", which is related to the angular position through the relationship: formula_0 Definition and units. Angular displacement may be expressed in radians or degrees. Using radians provides a very simple relationship between the distance traveled around the circle ("circular arc length") and the distance "r" from the centre ("radius"): formula_1 For example, if a body rotates 360° around a circle of radius "r", the angular displacement is given by the distance traveled around the circumference (which is 2π"r") divided by the radius: formula_2 which simplifies to: formula_3. Therefore, 1 revolution is formula_4 radians. The above definition is part of the International System of Quantities (ISQ), formalized in the international standard ISO 80000-3 (Space and time), and adopted in the International System of Units (SI). In the ISQ/SI, angular displacement is used to define the "number of revolutions", "N" = θ/(2π rad), a ratio-type quantity of dimension one. In three dimensions. In three dimensions, angular displacement is an entity with a direction and a magnitude. The direction specifies the axis of rotation, which always exists by virtue of Euler's rotation theorem; the magnitude specifies the rotation in radians about that axis (using the right-hand rule to determine direction). This entity is called an axis-angle. Despite having direction and magnitude, angular displacement is not a vector because it does not obey the commutative law for addition. Nevertheless, when dealing with infinitesimal rotations, second-order infinitesimals can be discarded and in this case commutativity appears. Rotation matrices. Several ways to describe rotations exist, such as rotation matrices or Euler angles.
See charts on SO(3) for others. Given that any frame in space can be described by a rotation matrix, the displacement between two frames can also be described by a rotation matrix. If formula_5 and formula_6 are two such rotation matrices, the angular displacement matrix between them can be obtained as formula_7. When this product is computed for two frames that differ only very slightly, the result is a matrix close to the identity. In the limit, we obtain an infinitesimal rotation matrix.
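The angular displacement matrix formula_7 is easy to verify numerically. Below is a minimal Python sketch (using NumPy; the helper rot_z and the two sample angles are illustrative choices, not taken from the article) that forms the displacement matrix between two nearby frames and reads the rotation angle off its trace.

import numpy as np

def rot_z(theta):
    # Rotation matrix for a rotation by theta about the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

A0 = rot_z(0.30)                      # initial frame
Af = rot_z(0.35)                      # final frame
dA = Af @ np.linalg.inv(A0)           # angular displacement matrix, Delta A = A_f A_0^(-1)

angle = np.arccos((np.trace(dA) - 1.0) / 2.0)   # rotation angle encoded in dA
print(angle)                          # ~0.05 rad, the difference between the two rotations

print(np.round(dA - np.eye(3), 3))    # small entries: for nearby frames dA is close to the identity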
[ { "math_id": 0, "text": "s = r\\theta ." }, { "math_id": 1, "text": "\\theta = \\frac{s}{r} \\mathrm{rad}" }, { "math_id": 2, "text": "\\theta= \\frac{2\\pi r}r" }, { "math_id": 3, "text": "\\theta=2\\pi" }, { "math_id": 4, "text": "2\\pi" }, { "math_id": 5, "text": "A_0" }, { "math_id": 6, "text": "A_f" }, { "math_id": 7, "text": "\\Delta A = A_f A_0^{-1}" } ]
https://en.wikipedia.org/wiki?curid=65926
65927
Angular velocity
Direction and rate of rotation In physics, angular velocity (symbol ω or formula_0, the lowercase Greek letter omega), also known as angular frequency vector, is a pseudovector representation of how the angular position or orientation of an object changes with time, i.e. how quickly an object rotates (spins or revolves) around an axis of rotation and how fast the axis itself changes direction. The magnitude of the pseudovector, formula_1, represents the "angular speed" (or "angular frequency"), the angular rate at which the object rotates (spins or revolves). The pseudovector direction formula_2 is normal to the instantaneous plane of rotation or angular displacement. There are two types of angular velocity: orbital angular velocity, the rate at which a point particle revolves about a fixed origin, and spin angular velocity, the rate at which a rigid body rotates about its own center of rotation. Angular velocity has dimension of angle per unit time; this is analogous to linear velocity, with angle replacing distance, with time in common. The SI unit of angular velocity is radians per second, although degrees per second (°/s) is also common. The radian is a dimensionless quantity, thus the SI units of angular velocity are dimensionally equivalent to reciprocal seconds, s−1, although rad/s is preferable to avoid confusion with rotation velocity in units of hertz (also equivalent to s−1). The sense of angular velocity is conventionally specified by the right-hand rule, implying clockwise rotations (as viewed on the plane of rotation); negation (multiplication by −1) leaves the magnitude unchanged but flips the axis in the opposite direction. For example, a geostationary satellite completes one orbit per day above the equator (360 degrees per 24 hours), and so has angular velocity magnitude (angular speed) "ω" = 360°/24 h = 15°/h (or 2π rad/24 h ≈ 0.26 rad/h) and angular velocity direction (a unit vector) parallel to Earth's rotation axis (formula_3, in the geocentric coordinate system). If angle is measured in radians, the linear velocity is the radius times the angular velocity, formula_4. With orbital radius 42,000 km from the Earth's center, the satellite's tangential speed through space is thus "v" = 42,000 km × 0.26/h ≈ 11,000 km/h. The angular velocity is positive since the satellite travels prograde with the Earth's rotation (the same direction as the rotation of Earth). Orbital angular velocity of a point particle. Particle in two dimensions. In the simplest case of circular motion at radius formula_5, with position given by the angular displacement formula_6 from the x-axis, the orbital angular velocity is the rate of change of angle with respect to time: formula_7. If formula_8 is measured in radians, the arc-length from the positive x-axis around the circle to the particle is formula_9, and the linear velocity is formula_10, so that formula_11. In the general case of a particle moving in the plane, the orbital angular velocity is the rate at which the position vector relative to a chosen origin "sweeps out" angle. The diagram shows the position vector formula_12 from the origin formula_13 to a particle formula_14, with its polar coordinates formula_15. (All variables are functions of time formula_16.) The particle has linear velocity splitting as formula_17, with the radial component formula_18 parallel to the radius, and the cross-radial (or tangential) component formula_19 perpendicular to the radius. When there is no radial component, the particle moves around the origin in a circle; but when there is no cross-radial component, it moves in a straight line from the origin. 
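The geostationary-satellite figures quoted above can be reproduced in a few lines of Python; this is only an arithmetic check of the worked example (the 24-hour period and the 42,000 km orbital radius are the values used in the text).

import math

omega = 2 * math.pi / 24.0     # angular speed in rad/h: one full turn per 24 h
print(round(omega, 2))         # ~0.26 rad/h, i.e. 15 degrees per hour

r = 42_000.0                   # orbital radius in km, as in the text
v = r * omega                  # linear (tangential) speed, v = r * omega
print(round(v, -3))            # ~11,000 km/h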
Since radial motion leaves the angle unchanged, only the cross-radial component of linear velocity contributes to angular velocity. The angular velocity "ω" is the rate of change of angular position with respect to time, which can be computed from the cross-radial velocity as: formula_20 Here the cross-radial speed formula_21 is the signed magnitude of formula_19, positive for counter-clockwise motion, negative for clockwise. Taking polar coordinates for the linear velocity formula_22 gives magnitude formula_23 (linear speed) and angle formula_24 relative to the radius vector; in these terms, formula_25, so that formula_26 These formulas may be derived doing formula_27, being formula_5 a function of the distance to the origin with respect to time, and formula_28 a function of the angle between the vector and the x axis. Then: formula_29 which is equal to: formula_30 (see Unit vector in cylindrical coordinates). Knowing formula_31, we conclude that the radial component of the velocity is given by formula_32, because formula_33 is a radial unit vector; and the perpendicular component is given by formula_34 because formula_35 is a perpendicular unit vector. In two dimensions, angular velocity is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the radius vector turns counter-clockwise, and negative if clockwise. Angular velocity then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes. Particle in three dimensions. In three-dimensional space, we again have the position vector r of a moving particle. Here, orbital angular velocity is a pseudovector whose magnitude is the rate at which r sweeps out angle (in radians per unit of time), and whose direction is perpendicular to the instantaneous plane in which r sweeps out angle (i.e. the plane spanned by r and v). However, as there are "two" directions perpendicular to any plane, an additional condition is necessary to uniquely specify the direction of the angular velocity; conventionally, the right-hand rule is used. Let the pseudovector formula_36 be the unit vector perpendicular to the plane spanned by r and v, so that the right-hand rule is satisfied (i.e. the instantaneous direction of angular displacement is counter-clockwise looking from the top of formula_36). Taking polar coordinates formula_37 in this plane, as in the two-dimensional case above, one may define the orbital angular velocity vector as: formula_38 where "θ" is the angle between r and v. In terms of the cross product, this is: formula_39 From the above equation, one can recover the tangential velocity as: formula_40 Spin angular velocity of a rigid body or reference frame. Given a rotating frame of three unit coordinate vectors, all the three must have the same angular speed at each instant. In such a frame, each vector may be considered as a moving particle with constant scalar radius. The rotating frame appears in the context of rigid bodies, and special tools have been developed for it: the spin angular velocity may be described as a vector or equivalently as a tensor. Consistent with the general definition, the spin angular velocity of a frame is defined as the orbital angular velocity of any of the three vectors (same for all) with respect to its own center of rotation. 
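The three-dimensional relations formula_39 and formula_40 can be checked with a short NumPy sketch (the sample position and velocity vectors are arbitrary illustrative values):

import numpy as np

r = np.array([3.0, 0.0, 0.0])    # position vector (arbitrary example)
v = np.array([1.0, 2.0, 0.0])    # velocity vector (arbitrary example)

omega = np.cross(r, v) / np.dot(r, r)   # orbital angular velocity: omega = (r x v) / r^2
print(omega)                            # [0, 0, 0.667]: rotation about the z-axis

v_perp = np.cross(omega, r)             # tangential velocity recovered as omega x r
print(v_perp)                           # [0, 2, 0], the component of v perpendicular to r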
The addition of angular velocity vectors for frames is also defined by the usual vector addition (composition of linear movements), and can be useful to decompose the rotation as in a gimbal. All components of the vector can be calculated as derivatives of the parameters defining the moving frames (Euler angles or rotation matrices). As in the general case, addition is commutative: formula_41. By Euler's rotation theorem, any rotating frame possesses an instantaneous axis of rotation, which is the direction of the angular velocity vector, and the magnitude of the angular velocity is consistent with the two-dimensional case. If we choose a reference point formula_42 fixed in the rigid body, the velocity formula_43 of any point in the body is given by formula_44 Components from the basis vectors of a body-fixed frame. Consider a rigid body rotating about a fixed point O. Construct a reference frame in the body consisting of an orthonormal set of vectors formula_45 fixed to the body and with their common origin at O. The spin angular velocity vector of both frame and body about O is then formula_46 where formula_47 is the time rate of change of the frame vector formula_48 due to the rotation. This formula is incompatible with the expression for "orbital" angular velocity formula_49 as that formula defines angular velocity for a "single point" about O, while the formula in this section applies to a frame or rigid body. In the case of a rigid body a "single" formula_50 has to account for the motion of "all" particles in the body. Components from Euler angles. The components of the spin angular velocity pseudovector were first calculated by Leonhard Euler using his Euler angles and the use of an intermediate frame: Euler proved that the projections of the angular velocity pseudovector on each of these three axes is the derivative of its associated angle (which is equivalent to decomposing the instantaneous rotation into three instantaneous Euler rotations). Therefore: formula_51 This basis is not orthonormal and it is difficult to use, but now the velocity vector can be changed to the fixed frame or to the moving frame with just a change of bases. For example, changing to the mobile frame: formula_52 where formula_53 are unit vectors for the frame fixed in the moving body. This example has been made using the Z-X-Z convention for Euler angles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
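The Z-X-Z expression for the body-frame components of the spin angular velocity quoted above can be evaluated directly. A minimal NumPy sketch (the Euler angles and their time derivatives are arbitrary sample values):

import numpy as np

alpha, beta, gamma = 0.3, 0.7, 1.1           # Z-X-Z Euler angles (rad), sample values
d_alpha, d_beta, d_gamma = 0.2, -0.1, 0.4    # their time derivatives (rad/s), sample values

sb, cb = np.sin(beta), np.cos(beta)
sg, cg = np.sin(gamma), np.cos(gamma)

# Components of the spin angular velocity in the moving (body-fixed) frame.
omega_body = np.array([
    d_alpha * sb * sg + d_beta * cg,
    d_alpha * sb * cg - d_beta * sg,
    d_alpha * cb + d_gamma,
])
print(omega_body)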
[ { "math_id": 0, "text": "\\vec{\\omega}" }, { "math_id": 1, "text": "\\omega=\\|\\boldsymbol{\\omega}\\|" }, { "math_id": 2, "text": "\\hat\\boldsymbol{\\omega}=\\boldsymbol{\\omega}/\\omega" }, { "math_id": 3, "text": "\\hat\\omega=\\hat{Z}" }, { "math_id": 4, "text": "\\boldsymbol v = r\\boldsymbol\\omega" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "\\phi(t)" }, { "math_id": 7, "text": "\\omega = \\frac{d\\phi}{dt}" }, { "math_id": 8, "text": "\\phi" }, { "math_id": 9, "text": "\\ell=r\\phi" }, { "math_id": 10, "text": "v(t) = \\frac{d\\ell}{dt} = r\\omega(t)" }, { "math_id": 11, "text": "\\omega = \\frac{v}{r}" }, { "math_id": 12, "text": "\\mathbf{r}" }, { "math_id": 13, "text": "O" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "(r, \\phi)" }, { "math_id": 16, "text": "t" }, { "math_id": 17, "text": "\\mathbf{v} = \\mathbf{v}_\\|+\\mathbf{v}_\\perp" }, { "math_id": 18, "text": "\\mathbf{v}_\\|" }, { "math_id": 19, "text": "\\mathbf{v}_\\perp" }, { "math_id": 20, "text": "\\omega = \\frac{d\\phi}{dt} = \\frac{v_\\perp}{r}." }, { "math_id": 21, "text": "v_\\perp" }, { "math_id": 22, "text": "\\mathbf{v}" }, { "math_id": 23, "text": "v" }, { "math_id": 24, "text": "\\theta" }, { "math_id": 25, "text": "v_\\perp = v\\sin(\\theta)" }, { "math_id": 26, "text": "\\omega = \\frac{v\\sin(\\theta)}{r}." }, { "math_id": 27, "text": "\\mathbf{r}=(r\\cos(\\varphi),r\\sin(\\varphi))" }, { "math_id": 28, "text": "\\varphi" }, { "math_id": 29, "text": "\\frac{d\\mathbf{r}}{dt} = (\\dot{r}\\cos(\\varphi) - r\\dot{\\varphi}\\sin(\\varphi), \\dot{r}\\sin(\\varphi) + r\\dot{\\varphi}\\cos(\\varphi))," }, { "math_id": 30, "text": "\\dot{r}(\\cos(\\varphi), \\sin(\\varphi)) + r\\dot{\\varphi}(-\\sin(\\varphi), \\cos(\\varphi)) = \\dot{r}\\hat{r} + r\\dot{\\varphi}\\hat{\\varphi}" }, { "math_id": 31, "text": "\\frac{d\\mathbf{r}}{dt} = \\mathbf{v}" }, { "math_id": 32, "text": "\\dot{r}" }, { "math_id": 33, "text": "\\hat{r}" }, { "math_id": 34, "text": "r\\dot{\\varphi}" }, { "math_id": 35, "text": "\\hat{\\varphi}" }, { "math_id": 36, "text": "\\mathbf{u}" }, { "math_id": 37, "text": "(r,\\phi)" }, { "math_id": 38, "text": "\\boldsymbol\\omega =\\omega \\mathbf u = \\frac{d\\phi}{dt}\\mathbf u=\\frac{v \\sin(\\theta)}{r}\\mathbf u," }, { "math_id": 39, "text": "\\boldsymbol\\omega\n=\\frac{\\mathbf r\\times\\mathbf v}{r^2}." 
}, { "math_id": 40, "text": "\\mathbf{v}_{\\perp} =\\boldsymbol{\\omega} \\times\\mathbf{r}" }, { "math_id": 41, "text": "\\omega_1 + \\omega_2 = \\omega_2 + \\omega_1" }, { "math_id": 42, "text": "{\\boldsymbol r_0}" }, { "math_id": 43, "text": " \\dot {\\boldsymbol r}" }, { "math_id": 44, "text": " \\dot {\\boldsymbol r}= \\dot {\\boldsymbol r_0}+ {\\boldsymbol\\omega}\\times({\\boldsymbol r}-{\\boldsymbol r_0})\n" }, { "math_id": 45, "text": "\\mathbf{e}_1, \\mathbf{e}_2, \\mathbf{e}_3 " }, { "math_id": 46, "text": "\\boldsymbol\\omega = \\left(\\dot \\mathbf{e}_1\\cdot\\mathbf{e}_2\\right) \\mathbf{e}_3 + \\left(\\dot \\mathbf{e}_2\\cdot\\mathbf{e}_3\\right) \\mathbf{e}_1 + \\left(\\dot \\mathbf{e}_3\\cdot\\mathbf{e}_1\\right) \\mathbf{e}_2,\n" }, { "math_id": 47, "text": " \\dot \\mathbf{e}_i= \\frac{d \\mathbf{e}_i}{dt} " }, { "math_id": 48, "text": " \\mathbf{e}_i, i=1,2,3," }, { "math_id": 49, "text": "\\boldsymbol\\omega\n=\\frac{\\boldsymbol{r}\\times\\boldsymbol{v}}{r^2}," }, { "math_id": 50, "text": " \\boldsymbol\\omega" }, { "math_id": 51, "text": "\\boldsymbol\\omega = \\dot\\alpha\\mathbf u_1+\\dot\\beta\\mathbf u_2+\\dot\\gamma \\mathbf u_3" }, { "math_id": 52, "text": "\\boldsymbol\\omega =\n(\\dot\\alpha \\sin\\beta \\sin\\gamma + \\dot\\beta\\cos\\gamma) \\hat\\mathbf i+\n(\\dot\\alpha \\sin\\beta \\cos\\gamma - \\dot\\beta\\sin\\gamma) \\hat\\mathbf j +\n(\\dot\\alpha \\cos\\beta + \\dot\\gamma) \\hat\\mathbf k" }, { "math_id": 53, "text": "\\hat\\mathbf i, \\hat\\mathbf j, \\hat\\mathbf k" } ]
https://en.wikipedia.org/wiki?curid=65927
65927172
Smith–Wilson method
The Smith–Wilson method is a method for extrapolating forward rates. It is recommended by EIOPA to extrapolate interest rates. It was introduced in 2000 by A. Smith and T. Wilson for Bacon & Woodrow. Mathematical formulation. Let UFR be some ultimate forward rate and formula_0 be the time to the i-th maturity. Then formula_1, the price of a zero-coupon bond with maturity t, is given by formula_2 where formula_3 and the symmetric W matrix is formula_4 with formula_5, formula_6 and formula_7. References.
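A minimal NumPy sketch of the formulation above (the ultimate forward rate, the parameter α, and the observed maturities and bond prices are illustrative placeholders, not calibrated values):

import numpy as np

def wilson(t, u, ufr, alpha):
    # Wilson kernel W(t, u) as defined above.
    tmin, tmax = min(t, u), max(t, u)
    return np.exp(-ufr * (t + u)) * (
        alpha * tmin
        - 0.5 * np.exp(-alpha * tmax) * (np.exp(alpha * tmin) - np.exp(-alpha * tmin))
    )

def smith_wilson_price(t, maturities, prices, ufr, alpha):
    u = np.asarray(maturities, dtype=float)
    p = np.asarray(prices, dtype=float)
    W = np.array([[wilson(ui, uj, ufr, alpha) for uj in u] for ui in u])
    mu = np.exp(-ufr * u)
    xi = np.linalg.solve(W, p - mu)                        # xi = W^(-1) (p - mu)
    w_t = np.array([wilson(t, uj, ufr, alpha) for uj in u])
    return np.exp(-ufr * t) + w_t @ xi                     # P(t)

# Illustrative inputs: observed zero-coupon bond prices for four maturities.
u = [1.0, 2.0, 3.0, 5.0]
p = [0.97, 0.94, 0.91, 0.85]
print(smith_wilson_price(10.0, u, p, ufr=0.042, alpha=0.1))   # extrapolated price for t = 10

At the observed maturities the function reproduces the input prices exactly, and for large t the implied forward rate tends to the assumed UFR.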
[ { "math_id": 0, "text": "u_i" }, { "math_id": 1, "text": "P(t)" }, { "math_id": 2, "text": "P(t) = e^{-UFR\\cdot t} + \\sum_{j=1}^N \\xi_j \\cdot W(t, u_j)" }, { "math_id": 3, "text": "W(t, u_j) = e^{-UFR\\cdot (t+u_j)} \\cdot (\\alpha\\cdot \\min(t, u_j) - 0.5e^{-\\alpha\\cdot \\max(t, u_j)}\\cdot (e^{\\alpha\\cdot \\min(t, u_j)} - e^{-\\alpha\\cdot \\min(t, u_j)}))" }, { "math_id": 4, "text": "W = (W(u_i, u_j))_{i=1,...,N:j=1,...,N}" }, { "math_id": 5, "text": "p = (P(u_1), ..., P(u_N))^T" }, { "math_id": 6, "text": "\\mu = (e^{-UFR\\cdot u_1}, ..., e^{-UFR\\cdot u_N})^T" }, { "math_id": 7, "text": "\\xi = W^{-1}(p-\\mu)" } ]
https://en.wikipedia.org/wiki?curid=65927172
6592746
Abstract Wiener space
Mathematical construction relating to infinite-dimensional spaces. The concept of an abstract Wiener space is a mathematical construction developed by Leonard Gross to understand the structure of Gaussian measures on infinite-dimensional spaces. The construction emphasizes the fundamental role played by the Cameron–Martin space. The classical Wiener space is the prototypical example. The structure theorem for Gaussian measures states that all Gaussian measures can be represented by the abstract Wiener space construction. Motivation. Let formula_0 be a real Hilbert space, assumed to be infinite dimensional and separable. In the physics literature, one frequently encounters integrals of the form formula_1 where formula_2 is supposed to be a normalization constant and where formula_3 is supposed to be the non-existent Lebesgue measure on formula_4. Such integrals arise, notably, in the context of the Euclidean path-integral formulation of quantum field theory. At a mathematical level, such an integral cannot be interpreted as integration against a measure on the original Hilbert space formula_4. On the other hand, suppose formula_5 is a Banach space that contains formula_4 as a dense subspace. If formula_5 is "sufficiently larger" than formula_4, then the above integral can be interpreted as integration against a well-defined (Gaussian) measure on formula_5. In that case, the pair formula_6 is referred to as an abstract Wiener space. The prototypical example is the classical Wiener space, in which formula_4 is the Hilbert space of real-valued functions formula_7 on an interval formula_8 having first derivative in formula_9 and satisfying formula_10, with the norm being given by formula_11 In that case, formula_5 may be taken to be the Banach space of continuous functions on formula_8 with the supremum norm. In this case, the measure on formula_5 is the Wiener measure describing Brownian motion starting at the origin. The original subspace formula_12 is called the Cameron–Martin space, which forms a set of measure zero with respect to the Wiener measure. What the preceding example means is that we have a "formal" expression for the Wiener measure given by formula_13 Although this formal expression "suggests" that the Wiener measure should live on the space of paths for which formula_14, this is not actually the case. (Brownian paths are known to be nowhere differentiable with probability one.) Gross's abstract Wiener space construction abstracts the situation for the classical Wiener space and provides a necessary and sufficient (if sometimes difficult to check) condition for the Gaussian measure to exist on formula_5. Although the Gaussian measure formula_15 lives on formula_5 rather than formula_4, it is the geometry of formula_4 rather than formula_5 that controls the properties of formula_15. As Gross himself puts it (adapted to our notation), "However, it only became apparent with the work of I.E. Segal dealing with the normal distribution on a real Hilbert space, that the role of the Hilbert space formula_4 was indeed central, and that in so far as analysis on formula_5 is concerned, the role of formula_5 itself was auxiliary for many of Cameron and Martin's theorems, and in some instances even unnecessary." One of the appealing features of Gross's abstract Wiener space construction is that it takes formula_4 as the starting point and treats formula_5 as an auxiliary object. 
Although the formal expressions for formula_15 appearing earlier in this section are purely formal, physics-style expressions, they are very useful in helping to understand properties of formula_15. Notably, one can easily use these expressions to derive the (correct!) formula for the density of the translated measure formula_16 relative to formula_17, for formula_18. (See the Cameron–Martin theorem.) Mathematical description. Cylinder set measure on H. Let formula_4 be a Hilbert space defined over the real numbers, assumed to be infinite dimensional and separable. A cylinder set in formula_4 is a set defined in terms of the values of a finite collection of linear functionals on formula_4. Specifically, suppose formula_19 are continuous linear functionals on formula_4 and formula_20 is a Borel set in formula_21. Then we can consider the set formula_22 Any set of this type is called a cylinder set. The collection of all cylinder sets forms an algebra of sets in formula_4 but it is not a formula_23-algebra. There is a natural way of defining a "measure" on cylinder sets, as follows. By the Riesz representation theorem, the linear functionals formula_24 are given as the inner product with vectors formula_25 in formula_4. In light of the Gram–Schmidt procedure, it is harmless to assume that formula_25 are orthonormal. In that case, we can associate to the above-defined cylinder set formula_26 the measure of formula_20 with respect to the standard Gaussian measure on formula_27. That is, we define formula_28 where formula_29 is the standard Lebesgue measure on formula_21. Because of the product structure of the standard Gaussian measure on formula_21, it is not hard to show that formula_15 is well defined. That is, although the same set formula_26 can be represented as a cylinder set in more than one way, the value of formula_30 is always the same. Nonexistence of the measure on H. The set functional formula_15 is called the standard Gaussian cylinder set measure on formula_4. Assuming (as we do) that formula_4 is infinite dimensional, formula_15 "does not" extend to a countably additive measure on the formula_23-algebra generated by the collection of cylinder sets in formula_4. One can understand the difficulty by considering the behavior of the standard Gaussian measure on formula_31 given by formula_32 The expectation value of the squared norm with respect to this measure is computed as an elementary Gaussian integral as formula_33 That is, the typical distance from the origin of a vector chosen randomly according to the standard Gaussian measure on formula_21 is formula_34 As formula_35 tends to infinity, this typical distance tends to infinity, indicating that there is no well-defined "standard Gaussian" measure on formula_4. (The typical distance from the origin would be infinite, so that the measure would not actually live on the space formula_4.) Existence of the measure on B. Now suppose that formula_5 is a separable Banach space and that formula_36 is an injective continuous linear map whose image is dense in formula_5. It is then harmless (and convenient) to identify formula_4 with its image inside formula_5 and thus regard formula_4 as a dense subset of formula_5. We may then construct a cylinder set measure on formula_5 by defining the measure of a cylinder set formula_37 to be the previously defined cylinder set measure of formula_38, which is a cylinder set in formula_4. 
The idea of the abstract Wiener space construction is that if formula_5 is sufficiently bigger than formula_4, then the cylinder set measure on formula_5, unlike the cylinder set measure on formula_4, will extend to a countably additive measure on the generated formula_23-algebra. The original paper of Gross gives a necessary and sufficient condition on formula_5 for this to be the case. The measure on formula_5 is called a Gaussian measure and the subspace formula_12 is called the Cameron–Martin space. It is important to emphasize that formula_4 forms a set of measure zero inside formula_5, emphasizing that the Gaussian measure lives only on formula_5 and not on formula_4. The upshot of this whole discussion is that Gaussian integrals of the sort described in the motivation section do have a rigorous mathematical interpretation, but they do not live on the space whose norm occurs in the exponent of the formal expression. Rather, they live on some larger space. Universality of the construction. The abstract Wiener space construction is not simply one method of building Gaussian measures. Rather, "every" Gaussian measure on an infinite-dimensional Banach space occurs in this way. (See the structure theorem for Gaussian measures.) That is, given a Gaussian measure formula_15 on an infinite-dimensional, separable Banach space (over formula_39), one can identify a Cameron–Martin subspace formula_12, at which point the pair formula_6 becomes an abstract Wiener space and formula_15 is the associated Gaussian measure. Example: Classical Wiener space. The prototypical example of an abstract Wiener space is the space of continuous paths, and is known as classical Wiener space. This is the abstract Wiener space in which formula_4 is given by formula_43 with inner product given by formula_44 and formula_5 is the space of continuous maps of formula_8 into formula_27 starting at 0, with the uniform norm. In this case, the Gaussian measure formula_15 is the Wiener measure, which describes Brownian motion in formula_27, starting from the origin. The general result that formula_4 forms a set of measure zero with respect to formula_15 in this case reflects the roughness of the typical Brownian path, which is known to be nowhere differentiable. This contrasts with the assumed differentiability of the paths in formula_4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
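The statement that the Cameron–Martin space has measure zero can be illustrated numerically for the classical Wiener space: the H-norm of a sampled Brownian path, computed from its increments, blows up as the time discretization is refined. A small NumPy simulation (the step counts and random seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
T = 1.0

for n in [100, 1_000, 10_000, 100_000]:
    dt = T / n
    db = rng.normal(0.0, np.sqrt(dt), size=n)    # Brownian increments over [0, T]
    # Discretized Cameron-Martin energy: sum over steps of (db/dt)^2 * dt = sum(db^2) / dt
    energy = np.sum(db**2) / dt
    print(n, round(energy))

The printed energy grows roughly like the number of steps n, so the integral of b'(t)^2 diverges for a typical Brownian path: such paths lie outside the Cameron–Martin space almost surely.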
[ { "math_id": 0, "text": " H" }, { "math_id": 1, "text": "\\frac{1}{Z}\\int_H f(v) e^{-\\frac{1}{2} \\Vert v\\Vert^2} Dv," }, { "math_id": 2, "text": "Z" }, { "math_id": 3, "text": "Dv" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "(H,B)" }, { "math_id": 7, "text": "b" }, { "math_id": 8, "text": "[0,T]" }, { "math_id": 9, "text": "L^2" }, { "math_id": 10, "text": "b(0) = 0" }, { "math_id": 11, "text": "\\left\\Vert b\\right\\Vert^2 = \\int_0^T b'(t)^2\\,dt." }, { "math_id": 12, "text": "H\\subset B" }, { "math_id": 13, "text": "d\\mu(b)=\\frac{1}{Z} \\exp\\left\\{-\\frac{1}{2}\\int_0^T b'(t)^2\\,dt\\right\\}\\,Db." }, { "math_id": 14, "text": "\\int_0^T b'(t)^2\\,dt < \\infty" }, { "math_id": 15, "text": "\\mu" }, { "math_id": 16, "text": "d\\mu(b+h)" }, { "math_id": 17, "text": "d\\mu(b)" }, { "math_id": 18, "text": "h\\in H" }, { "math_id": 19, "text": "\\phi_1,\\ldots,\\phi_n" }, { "math_id": 20, "text": "E" }, { "math_id": 21, "text": "\\R^n" }, { "math_id": 22, "text": "C = \\left\\{v\\in H \\mid (\\phi_1(v),\\ldots,\\phi_n(v)) \\in E \\right\\}." }, { "math_id": 23, "text": "\\sigma" }, { "math_id": 24, "text": "\\phi_1, \\ldots, \\phi_n" }, { "math_id": 25, "text": "v_1, \\ldots, v_n" }, { "math_id": 26, "text": "C" }, { "math_id": 27, "text": "\\mathbb R^n" }, { "math_id": 28, "text": "\\mu(C)=(2\\pi)^{-n/2}\\int_{E \\subset \\R^n}e^{-\\Vert x\\Vert^2/2}\\,dx," }, { "math_id": 29, "text": "dx" }, { "math_id": 30, "text": "\\mu(C)" }, { "math_id": 31, "text": "\\R^n," }, { "math_id": 32, "text": "(2\\pi)^{-n/2} e^{-\\Vert x\\Vert^2/2}\\,dx." }, { "math_id": 33, "text": "(2\\pi)^{-n/2} \\int_{\\R^n} \\Vert x\\Vert^2 e^{-\\Vert x\\Vert^2/2} \\,dx = (2\\pi)^{-n/2} \\sum_{i=1}^n \\int_\\R x_i^2 e^{-x_i^2/2} \\, dx_i = n." }, { "math_id": 34, "text": "\\sqrt n." }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": "i:H\\rightarrow B" }, { "math_id": 37, "text": "C\\subset B" }, { "math_id": 38, "text": "C\\cap H" }, { "math_id": 39, "text": "\\mathbb R" }, { "math_id": 40, "text": "\\gamma_{12}=\\gamma_1\\otimes\\gamma_2" }, { "math_id": 41, "text": "(i_1 \\times i_2)_* (\\mu^{H_1 \\times H_2}) = (i_1)_* \\left( \\mu^{H_1} \\right) \\otimes (i_2)_* \\left( \\mu^{H_2} \\right)," }, { "math_id": 42, "text": "\\mu_{12}" }, { "math_id": 43, "text": "H := L_{0}^{2, 1} ([0, T]; \\mathbb{R}^{n}) := \\{ \\text{Absolutely continuous paths starting at 0 with square-integrable first derivative}\\}" }, { "math_id": 44, "text": "\\langle \\sigma_1, \\sigma_2 \\rangle_{L_0^{2,1}} := \\int_0^T \\langle \\dot{\\sigma}_1 (t), \\dot{\\sigma}_2 (t) \\rangle_{\\R^{n}} \\, dt, " } ]
https://en.wikipedia.org/wiki?curid=6592746
6592812
Thermal contact conductance
The study of heat conduction between solid bodies in thermal contact In physics, thermal contact conductance is the study of heat conduction between solid or liquid bodies in thermal contact. The thermal contact conductance coefficient, formula_0, is a property indicating the thermal conductivity, or ability to conduct heat, between two bodies in contact. The inverse of this property is termed thermal contact resistance. Definition. When two solid bodies come in contact, such as A and B in Figure 1, heat flows from the hotter body to the colder body. From experience, the temperature profile along the two bodies varies, approximately, as shown in the figure. A temperature drop is observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a "thermal contact resistance" existing between the contacting surfaces. Thermal contact resistance is defined as the ratio between this temperature drop and the average heat flow across the interface. According to Fourier's law, the heat flow between the bodies is found by the relation: where formula_1 is the heat flow, formula_2 is the thermal conductivity, formula_3 is the cross sectional area and formula_4 is the temperature gradient in the direction of flow. From considerations of energy conservation, the heat flow between the two bodies in contact, bodies A and B, is found as: One may observe that the heat flow is directly related to the thermal conductivities of the bodies in contact, formula_5 and formula_6, the contact area formula_3, and the thermal contact resistance, formula_7, which, as previously noted, is the inverse of the thermal conductance coefficient, formula_0. Importance. Most experimentally determined values of the thermal contact resistance fall between 0.000005 and 0.0005 m2 K/W (the corresponding range of thermal contact conductance is 200,000 to 2000 W/m2 K). To know whether the thermal contact resistance is significant or not, magnitudes of the thermal resistances of the layers are compared with typical values of thermal contact resistance. Thermal contact resistance is significant and may dominate for good heat conductors such as metals but can be neglected for poor heat conductors such as insulators. Thermal contact conductance is an important factor in a variety of applications, largely because many physical systems contain a mechanical combination of two materials. Some of the fields where contact conductance is of importance are: Factors influencing contact conductance. Thermal contact conductance is a complicated phenomenon, influenced by many factors. Experience shows that the most important ones are as follows: Contact pressure. For thermal transport between two contacting bodies, such as particles in a granular medium, the contact pressure is the factor of most influence on overall contact conductance. As contact pressure grows, true contact area increases and contact conductance grows (contact resistance becomes smaller). Since the contact pressure is the most important factor, most studies, correlations and mathematical models for measurement of contact conductance are done as a function of this factor. The thermal contact resistance of certain sandwich kinds of materials that are manufactured by rolling under high temperatures may sometimes be ignored because the decrease in thermal conductivity between them is negligible. Interstitial materials. No truly smooth surfaces really exist, and surface imperfections are visible under a microscope. 
As a result, when two bodies are pressed together, contact is only performed in a finite number of points, separated by relatively large gaps, as can be shown in Fig. 2. Since the actual contact area is reduced, another resistance for heat flow exists. The gases/fluids filling these gaps may largely influence the total heat flow across the interface. The thermal conductivity of the interstitial material and its pressure, examined through reference to the Knudsen number, are the two properties governing its influence on contact conductance, and thermal transport in heterogeneous materials in general. In the absence of interstitial materials, as in a vacuum, the contact resistance will be much larger, since flow through the intimate contact points is dominant. Surface roughness, waviness and flatness. One can characterise a surface that has undergone certain finishing operations by three main properties of: roughness, waviness, and fractal dimension. Among these, roughness and fractality are of most importance, with roughness often indicated in terms of a rms value, formula_8 and surface fractality denoted generally by "Df". The effect of surface structures on thermal conductivity at interfaces is analogous to the concept of electrical contact resistance, also known as ECR, involving contact patch restricted transport of phonons rather than electrons. Surface deformations. When the two bodies come in contact, surface deformation may occur on both bodies. This deformation may either be plastic or elastic, depending on the material properties and the contact pressure. When a surface undergoes plastic deformation, contact resistance is lowered, since the deformation causes the actual contact area to increase Surface cleanliness. The presence of dust particles, acids, etc., can also influence the contact conductance. Measurement of thermal contact conductance. Going back to Formula 2, calculation of the thermal contact conductance may prove difficult, even impossible, due to the difficulty in measuring the contact area, formula_3 (A product of surface characteristics, as explained earlier). Because of this, contact conductance/resistance is usually found experimentally, by using a standard apparatus. The results of such experiments are usually published in Engineering literature, on journals such as "Journal of Heat Transfer", "International Journal of Heat and Mass Transfer", etc. Unfortunately, a centralized database of contact conductance coefficients does not exist, a situation which sometimes causes companies to use outdated, irrelevant data, or not taking contact conductance as a consideration at all. "CoCoE" (Contact Conductance Estimator), a project founded to solve this problem and create a centralized database of contact conductance data and a computer program that uses it, was started in 2006. Thermal boundary conductance. While a finite thermal contact conductance is due to voids at the interface, surface waviness, and surface roughness, etc., a finite conductance exists even at near ideal interfaces as well. This conductance, known as thermal boundary conductance, is due to the differences in electronic and vibrational properties between the contacting materials. This conductance is generally much higher than thermal contact conductance, but becomes important in nanoscale material systems.
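To see when a contact resistance matters, it helps to compare it with the bulk resistances of the contacting layers in a simple series thermal circuit. The sketch below is a minimal illustration with assumed example values (thicknesses, conductivities, contact conductance and temperature difference are not data from the article):

# One-dimensional series thermal circuit: slab A | contact | slab B (assumed values)
k_A, L_A = 200.0, 0.01     # slab A: thermal conductivity W/(m K), thickness m
k_B, L_B = 400.0, 0.01     # slab B
h_c = 10_000.0             # thermal contact conductance, W/(m^2 K)
A = 1.0                    # cross-sectional area, m^2
dT = 50.0                  # overall temperature difference, K

R_A = L_A / (k_A * A)      # bulk resistance of slab A, K/W
R_B = L_B / (k_B * A)      # bulk resistance of slab B, K/W
R_c = 1.0 / (h_c * A)      # contact resistance, K/W (inverse of h_c over the area A)

q = dT / (R_A + R_c + R_B) # heat flow through the stack, W
print(R_A, R_c, R_B)       # here the contact resistance exceeds either bulk resistance
print(round(q))

With good conductors such as these metals the contact resistance dominates the series sum, which is why it cannot be neglected there, whereas for insulating layers the bulk terms would dwarf it.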
[ { "math_id": 0, "text": "h_c" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "dT/dx" }, { "math_id": 5, "text": "k_A" }, { "math_id": 6, "text": "k_B" }, { "math_id": 7, "text": "1/h_c" }, { "math_id": 8, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=6592812
65929
Angular acceleration
Physical quantity &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; &lt;templatestyles src="Hlist/styles.css"/&gt; In physics, angular acceleration (symbol α, alpha) is the time rate of change of angular velocity. Following the two types of angular velocity, "spin angular velocity" and "orbital angular velocity", the respective types of angular acceleration are: spin angular acceleration, involving a rigid body about an axis of rotation intersecting the body's centroid; and orbital angular acceleration, involving a point particle and an external axis. Angular acceleration has physical dimensions of angle per time squared, measured in SI units of radians per second squared (rad ⋅ s-2). In two dimensions, angular acceleration is a pseudoscalar whose sign is taken to be positive if the angular speed increases counterclockwise or decreases clockwise, and is taken to be negative if the angular speed increases clockwise or decreases counterclockwise. In three dimensions, angular acceleration is a pseudovector. For rigid bodies, angular acceleration must be caused by a net external torque. However, this is not so for non-rigid bodies: For example, a figure skater can speed up their rotation (thereby obtaining an angular acceleration) simply by contracting their arms and legs inwards, which involves no "external" torque. Orbital angular acceleration of a point particle. Particle in two dimensions. In two dimensions, the orbital angular acceleration is the rate at which the two-dimensional orbital angular velocity of the particle about the origin changes. The instantaneous angular velocity "ω" at any point in time is given by formula_0 where formula_1 is the distance from the origin and formula_2 is the cross-radial component of the instantaneous velocity (i.e. the component perpendicular to the position vector), which by convention is positive for counter-clockwise motion and negative for clockwise motion. Therefore, the instantaneous angular acceleration "α" of the particle is given by formula_3 Expanding the right-hand-side using the product rule from differential calculus, this becomes formula_4 In the special case where the particle undergoes circular motion about the origin, formula_5 becomes just the tangential acceleration formula_6, and formula_7 vanishes (since the distance from the origin stays constant), so the above equation simplifies to formula_8 In two dimensions, angular acceleration is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the angular speed increases in the counter-clockwise direction or decreases in the clockwise direction, and the sign is taken negative if the angular speed increases in the clockwise direction or decreases in the counter-clockwise direction. Angular acceleration then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes. Particle in three dimensions. In three dimensions, the orbital angular acceleration is the rate at which three-dimensional orbital angular velocity vector changes with time. The instantaneous angular velocity vector formula_9 at any point in time is given by formula_10 where formula_11 is the particle's position vector, formula_1 its distance from the origin, and formula_12 its velocity vector. 
Therefore, the orbital angular acceleration is the vector formula_13 defined by formula_14 Expanding this derivative using the product rule for cross-products and the ordinary quotient rule, one gets: formula_15 Since formula_16 is just formula_17, the second term may be rewritten as formula_18. In the case where the distance formula_1 of the particle from the origin does not change with time (which includes circular motion as a subcase), the second term vanishes and the above formula simplifies to formula_19 From the above equation, one can recover the cross-radial acceleration in this special case as: formula_20 Unlike in two dimensions, the angular acceleration in three dimensions need not be associated with a change in the angular "speed" formula_21: If the particle's position vector "twists" in space, changing its instantaneous plane of angular displacement, the change in the "direction" of the angular velocity formula_22 will still produce a nonzero angular acceleration. This cannot happen if the position vector is restricted to a fixed plane, in which case formula_22 has a fixed direction perpendicular to the plane. The angular acceleration vector is more properly called a pseudovector: It has three components which transform under rotations in the same way as the Cartesian coordinates of a point do, but which do not transform like Cartesian coordinates under reflections. Relation to torque. The net "torque" on a point particle is defined to be the pseudovector formula_23 where formula_24 is the net force on the particle. Torque is the rotational analogue of force: it induces change in the rotational state of a system, just as force induces change in the translational state of a system. As force on a particle is connected to acceleration by the equation formula_25, one may write a similar equation connecting torque on a particle to angular acceleration, though this relation is necessarily more complicated. First, substituting formula_26 into the above equation for torque, one gets formula_27 From the previous section: formula_28 where formula_29 is orbital angular acceleration and formula_22 is orbital angular velocity. Therefore: formula_30 In the special case of constant distance formula_1 of the particle from the origin (formula_31), the second term in the above equation vanishes and the above equation simplifies to formula_32 which can be interpreted as a "rotational analogue" to formula_26, where the quantity formula_33 (known as the moment of inertia of the particle) plays the role of the mass formula_34. However, unlike formula_26, this equation does "not" apply to an arbitrary trajectory, only to a trajectory contained within a spherical shell about the origin. References.
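For the constant-distance case, formula_19 and the torque relation formula_32 can be verified at a single instant of circular motion. A minimal NumPy sketch (the mass, radius and kinematic values are arbitrary sample numbers):

import numpy as np

m, R = 2.0, 1.5                 # mass and constant orbital radius (sample values)
w, alpha_true = 3.0, 0.8        # instantaneous angular speed and angular acceleration about z

r = np.array([R, 0.0, 0.0])                          # position at the chosen instant
a = np.array([-w**2 * R, alpha_true * R, 0.0])       # centripetal + tangential acceleration

alpha = np.cross(r, a) / np.dot(r, r)                # alpha = (r x a) / r^2
print(alpha)                                         # [0, 0, 0.8]: recovers alpha_true about z

tau = m * np.cross(r, a)                             # torque on the particle, tau = m (r x a)
print(np.allclose(tau, m * R**2 * alpha))            # True: tau = m r^2 alpha in this special case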
[ { "math_id": 0, "text": "\\omega = \\frac{v_{\\perp}}{r}," }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "v_{\\perp}" }, { "math_id": 3, "text": "\\alpha = \\frac{d}{dt} \\left(\\frac{v_{\\perp}}{r}\\right)." }, { "math_id": 4, "text": "\\alpha = \\frac{1}{r} \\frac{dv_\\perp}{dt} - \\frac{v_\\perp}{r^2} \\frac{dr}{dt}." }, { "math_id": 5, "text": "\\frac{dv_{\\perp}}{dt}" }, { "math_id": 6, "text": "a_{\\perp}" }, { "math_id": 7, "text": "\\frac{dr}{dt}" }, { "math_id": 8, "text": "\\alpha = \\frac{a_{\\perp}}{r}. " }, { "math_id": 9, "text": "\\boldsymbol\\omega" }, { "math_id": 10, "text": "\\boldsymbol\\omega =\\frac{\\mathbf r \\times \\mathbf v}{r^2} ," }, { "math_id": 11, "text": "\\mathbf r" }, { "math_id": 12, "text": "\\mathbf v" }, { "math_id": 13, "text": "\\boldsymbol\\alpha" }, { "math_id": 14, "text": "\\boldsymbol\\alpha = \\frac{d}{dt} \\left(\\frac{\\mathbf r \\times \\mathbf v}{r^2}\\right)." }, { "math_id": 15, "text": "\\begin{align}\n\\boldsymbol\\alpha &= \\frac{1}{r^2} \\left(\\mathbf r\\times \\frac{d\\mathbf v}{dt} + \\frac{d\\mathbf r}{dt} \\times \\mathbf v\\right) - \\frac{2}{r^3}\\frac{dr}{dt} \\left(\\mathbf r\\times\\mathbf v\\right)\\\\\n\\\\ \n&= \\frac{1}{r^2}\\left(\\mathbf r\\times \\mathbf a + \\mathbf v\\times \\mathbf v\\right) - \\frac{2}{r^3}\\frac{dr}{dt} \\left(\\mathbf r\\times\\mathbf v\\right)\\\\\n\\\\\n&= \\frac{\\mathbf r\\times \\mathbf a}{r^2} - \\frac{2}{r^3}\\frac{dr}{dt}\\left(\\mathbf r\\times\\mathbf v\\right).\n\\end{align}" }, { "math_id": 16, "text": "\\mathbf r\\times\\mathbf v" }, { "math_id": 17, "text": "r^2\\boldsymbol{\\omega}" }, { "math_id": 18, "text": "-\\frac{2}{r}\\frac{dr}{dt} \\boldsymbol{\\omega}" }, { "math_id": 19, "text": " \\boldsymbol\\alpha = \\frac{\\mathbf r\\times \\mathbf a}{r^2}." }, { "math_id": 20, "text": "\\mathbf{a}_{\\perp} = \\boldsymbol{\\alpha} \\times\\mathbf{r}." }, { "math_id": 21, "text": "\\omega = |\\boldsymbol{\\omega}|" }, { "math_id": 22, "text": "\\boldsymbol{\\omega}" }, { "math_id": 23, "text": "\\boldsymbol{\\tau} = \\mathbf r \\times \\mathbf F," }, { "math_id": 24, "text": "\\mathbf F" }, { "math_id": 25, "text": "\\mathbf F = m\\mathbf a" }, { "math_id": 26, "text": "\\mathbf F = m\\mathbf a" }, { "math_id": 27, "text": "\\boldsymbol{\\tau} = m\\left(\\mathbf r\\times \\mathbf a\\right) = mr^2 \\left(\\frac{\\mathbf r\\times \\mathbf a}{r^2}\\right)." }, { "math_id": 28, "text": "\\boldsymbol{\\alpha}=\\frac{\\mathbf r\\times \\mathbf a}{r^2}-\\frac{2}{r} \\frac{dr}{dt}\\boldsymbol{\\omega}," }, { "math_id": 29, "text": "\\boldsymbol{\\alpha}" }, { "math_id": 30, "text": "\\boldsymbol{\\tau} = mr^2 \\left(\\boldsymbol{\\alpha}+\\frac{2}{r} \\frac{dr}{dt}\\boldsymbol{\\omega}\\right) \n=mr^2 \\boldsymbol{\\alpha}+2mr\\frac{dr}{dt}\\boldsymbol{\\omega}. " }, { "math_id": 31, "text": "\\tfrac{ dr } {dt} = 0" }, { "math_id": 32, "text": "\\boldsymbol{\\tau} = mr^2\\boldsymbol{\\alpha}," }, { "math_id": 33, "text": "mr^2" }, { "math_id": 34, "text": "m" } ]
https://en.wikipedia.org/wiki?curid=65929
6592994
Kirsch equations
The Kirsch equations describe the elastic stresses around a hole in an infinite plate under uniaxial (one-directional) tension. They are named after Ernst Gustav Kirsch. Result. For an infinite plate containing a circular hole of radius "a" and loaded by a remote uniaxial stress "σ", the resulting stress field in polar coordinates is (the angle θ is measured from the direction in which the stress is applied): formula_0 formula_1 formula_2
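A direct consequence of these expressions is the factor-of-three stress concentration at the edge of the hole. A minimal Python sketch (the function name and the numerical inputs are mine) evaluating the three stress components:

import math

def kirsch(sigma, a, r, theta):
    # Stresses around a circular hole of radius a in an infinite plate under a
    # remote uniaxial stress sigma; theta is measured from the loading direction.
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    s_rr = sigma / 2 * (1 - a**2 / r**2) + sigma / 2 * (1 + 3 * a**4 / r**4 - 4 * a**2 / r**2) * c2
    s_tt = sigma / 2 * (1 + a**2 / r**2) - sigma / 2 * (1 + 3 * a**4 / r**4) * c2
    s_rt = -sigma / 2 * (1 - 3 * a**4 / r**4 + 2 * a**2 / r**2) * s2
    return s_rr, s_tt, s_rt

# Hoop stress at the hole boundary (r = a), 90 degrees from the loading direction:
print(kirsch(sigma=100.0, a=1.0, r=1.0, theta=math.pi / 2))
# -> (0.0, 300.0, ~0.0): the hoop stress is 3*sigma, the classic stress concentration factor.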
[ { "math_id": 0, "text": "\n\\sigma_{rr} = \\frac{\\sigma}{2}\\left(1 - \\frac{a^2}{r^2}\\right) + \\frac{\\sigma}{2}\\left(1 + 3\\frac{a^4}{r^4} - 4\\frac{a^2}{r^2}\\right)\\cos 2\\theta\n" }, { "math_id": 1, "text": "\n\\sigma_{\\theta\\theta} = \\frac{\\sigma}{2}\\left(1 + \\frac{a^2}{r^2}\\right) - \\frac{\\sigma}{2}\\left(1 + 3\\frac{a^4}{r^4}\\right)\\cos 2\\theta\n" }, { "math_id": 2, "text": "\n\\sigma_{r\\theta} = - \\frac{\\sigma}{2}\\left(1 - 3\\frac{a^4}{r^4} + 2\\frac{a^2}{r^2}\\right)\\sin 2\\theta\n" } ]
https://en.wikipedia.org/wiki?curid=6592994
659322
PP (complexity)
Class of problems in computer science In complexity theory, PP, or PPT is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977. If a decision problem is in PP, then there is an algorithm for it that is allowed to flip coins and make random decisions. It is guaranteed to run in polynomial time. If the answer is YES, the algorithm will answer YES with probability more than 1/2. If the answer is NO, the algorithm will answer YES with probability less than 1/2. In more practical terms, it is the class of problems that can be solved to any fixed degree of accuracy by running a randomized, polynomial-time algorithm a sufficient (but bounded) number of times. Turing machines that are polynomially-bound and probabilistic are characterized as PPT, which stands for probabilistic polynomial-time machines. This characterization of Turing machines does not require a bounded error probability. Hence, PP is the complexity class containing all problems solvable by a PPT machine with an error probability of less than 1/2. An alternative characterization of PP is the set of problems that can be solved by a nondeterministic Turing machine in polynomial time where the acceptance condition is that a majority (more than half) of computation paths accept. Because of this some authors have suggested the alternative name "Majority-P". Definition. A language "L" is in PP if and only if there exists a probabilistic Turing machine "M", such that Alternatively, PP can be defined using only deterministic Turing machines. A language "L" is in PP if and only if there exists a polynomial "p" and deterministic Turing machine "M", such that In both definitions, "less than" can be changed to "less than or equal to" (see below), and the threshold 1/2 can be replaced by any fixed rational number in (0,1), without changing the class. PP vs BPP. BPP is a subset of PP; it can be seen as the subset for which there are efficient probabilistic algorithms. The distinction is in the error probability that is allowed: in BPP, an algorithm must give correct answer (YES or NO) with probability exceeding some fixed constant c &gt; 1/2, such as 2/3 or 501/1000. If this is the case, then we can run the algorithm a number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. This number of repeats increases if "c" becomes closer to 1/2, but it does not depend on the input size "n". More generally, if "c" can depend on the input size formula_0 polynomially, as formula_1, then we can rerun the algorithm for formula_2 and take the majority vote. By Hoeffding's inequality, this gives us a BPP algorithm. The important thing is that this constant "c" is not allowed to depend on the input. On the other hand, a PP algorithm is permitted to do something like the following: Because these two probabilities are "exponentially" close together, even if we run it for a "polynomial" number of times it is very difficult to tell whether we are operating on a YES instance or a NO instance. Attempting to achieve a fixed desired probability level using a majority vote and the Chernoff bound requires a number of repetitions that is exponential in "n". PP compared to other complexity classes. 
PP includes BPP, since probabilistic algorithms described in the definition of BPP form a subset of those in the definition of PP. PP also includes NP. To prove this, we show that the NP-complete satisfiability problem belongs to PP. Consider a probabilistic algorithm that, given a formula "F"("x"1, "x"2, ..., "x""n") chooses an assignment "x"1, "x"2, ..., "x""n" uniformly at random. Then, the algorithm checks if the assignment makes the formula "F" true. If yes, it outputs YES. Otherwise, it outputs YES with probability formula_3 and NO with probability formula_4. If the formula is unsatisfiable, the algorithm will always output YES with probability formula_5. If there exists a satisfying assignment, it will output YES with probability at least formula_6 (exactly 1/2 if it picked an unsatisfying assignment and 1 if it picked a satisfying assignment, averaging to some number greater than 1/2). Thus, this algorithm puts satisfiability in PP. As SAT is NP-complete, and we can prefix any deterministic polynomial-time many-one reduction onto the PP algorithm, NP is included in PP. Because PP is closed under complement, it also includes co-NP. Furthermore, PP includes MA, which subsumes the previous two inclusions. PP also includes BQP, the class of decision problems solvable by efficient polynomial time quantum computers. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly. The class of polynomial time on quantum computers with postselection, PostBQP, is equal to PP (see #PostBQP below). Furthermore, PP includes QMA, which subsumes inclusions of MA and BQP. A polynomial time Turing machine with a PP oracle (PPP) can solve all problems in PH, the entire polynomial hierarchy. This result was shown by Seinosuke Toda in 1989 and is known as Toda's theorem. This is evidence of how hard it is to solve problems in PP. The class #P is in some sense about as hard, since P#P = PPP and therefore P#P includes PH as well. PP strictly includes uniform TC0, the class of constant-depth, unbounded-fan-in boolean circuits with majority gates that are uniform (generated by a polynomial-time algorithm). PP is included in PSPACE. This can be easily shown by exhibiting a polynomial-space algorithm for MAJSAT, defined below; simply try all assignments and count the number of satisfying ones. PP is not included in SIZE(nk) for any k, by Kannan's theorem. Complete problems and other properties. Unlike BPP, PP is a syntactic rather than semantic class. Any polynomial-time probabilistic machine recognizes some language in PP. In contrast, given a description of a polynomial-time probabilistic machine, it is undecidable in general to determine if it recognizes a language in BPP. PP has natural complete problems, for example, MAJSAT. MAJSAT is a decision problem in which one is given a Boolean formula F. The answer must be YES if more than half of all assignments "x"1, "x"2, ..., "x""n" make F true and NO otherwise. Proof that PP is closed under complement. Let "L" be a language in PP. Let formula_7 denote the complement of "L". By the definition of PP there is a polynomial-time probabilistic algorithm "A" with the property that formula_8 We claim that without loss of generality, the latter inequality is always strict; the theorem can be deduced from this claim: let formula_9 denote the machine which is the same as "A" except that formula_9 accepts when "A" would reject, and vice versa. Then formula_10 which implies that formula_7 is in PP. 
Now we justify our without loss of generality assumption. Let formula_11 be the polynomial upper bound on the running time of "A" on input "x". Thus "A" makes at most formula_11 random coin flips during its execution. In particular the probability of acceptance is an integer multiple of formula_12 and we have: formula_13 Define a machine "A"′ as follows: on input "x", "A"′ runs "A" as a subroutine, and rejects if "A" would reject; otherwise, if "A" would accept, "A"′ flips formula_14 coins and rejects if they are all heads, and accepts otherwise. Then formula_15 This justifies the assumption (since "A"′ is still a polynomial-time probabilistic algorithm) and completes the proof. David Russo proved in his 1985 Ph.D. thesis that PP is closed under symmetric difference. It was an open problem for 14 years whether PP was closed under union and intersection; this was settled in the affirmative by Beigel, Reingold, and Spielman. Alternate proofs were later given by Li and Aaronson (see #PostBQP below). Other equivalent complexity classes. PostBQP. The quantum complexity class BQP is the class of problems solvable in polynomial time on a quantum Turing machine. By adding postselection, a larger class called PostBQP is obtained. Informally, postselection gives the computer the following power: whenever some event (such as measuring a qubit in a certain state) has nonzero probability, you are allowed to assume that it takes place. Scott Aaronson showed in 2004 that PostBQP is equal to PP. This reformulation of PP makes it easier to show certain results, such as that PP is closed under intersection (and hence, under union), that BQP is low for PP, and that QMA is included in PP. PQP. PP is also equal to another quantum complexity class known as PQP, which is the unbounded error analog of BQP. It denotes the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of less than 1/2 for all instances. Even if all amplitudes used for PQP-computation are drawn from algebraic numbers, still PQP coincides with PP. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
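The randomized procedure used above to place satisfiability in PP can be made concrete on a toy formula. Because the acceptance gap is exponentially small, the Python sketch below (the two example formulas are my own) computes the acceptance probability exactly by enumerating assignments instead of sampling:

from itertools import product

def accept_probability(formula, n):
    # formula: callable on a tuple of n booleans. Algorithm from the text: draw a
    # uniformly random assignment; if it satisfies the formula answer YES, otherwise
    # answer YES with probability 1/2 - 1/2^(n+1).
    p_yes_on_unsat = 0.5 - 1.0 / 2 ** (n + 1)
    total = 0.0
    for bits in product([False, True], repeat=n):
        total += 1.0 if formula(bits) else p_yes_on_unsat
    return total / 2 ** n

n = 3
satisfiable = lambda b: (b[0] or b[1]) and not b[2]     # has satisfying assignments
unsatisfiable = lambda b: b[0] and not b[0]             # never true

print(accept_probability(satisfiable, n) > 0.5)      # True: acceptance probability exceeds 1/2
print(accept_probability(unsatisfiable, n) > 0.5)    # False: acceptance probability stays below 1/2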
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "c = O(n^{-k}) " }, { "math_id": 2, "text": "O(n^{2k})" }, { "math_id": 3, "text": "\\frac12 - \\frac1{2^{n + 1}}" }, { "math_id": 4, "text": "\\frac12 + \\frac1{2^{n + 1}}" }, { "math_id": 5, "text": "\\frac12 - \\frac1{2^{n + 1}} < \\frac12" }, { "math_id": 6, "text": "\\left(\\frac12 - \\frac1{2^{n + 1}}\\right)\\cdot \\left(1 - \\frac1{2^n}\\right) + 1\\cdot\\frac1{2^n} = \\frac12 + \\frac1{2^{2n + 1}} > \\frac12" }, { "math_id": 7, "text": "L^c" }, { "math_id": 8, "text": "x \\in L \\Rightarrow \\Pr[A \\text{ accepts } x] > \\frac{1}{2} \\quad \\text{and} \\quad x \\not\\in L \\Rightarrow \\Pr[A \\text{ accepts } x] \\le \\frac{1}{2}." }, { "math_id": 9, "text": "A^c" }, { "math_id": 10, "text": "x \\in L^c \\Rightarrow \\Pr[A^c \\text{ accepts } x] > \\frac{1}{2} \\quad \\text{and} \\quad x \\not\\in L^c \\Rightarrow \\Pr[A^c \\text{ accepts } x] < \\frac{1}{2}," }, { "math_id": 11, "text": "f(|x|)" }, { "math_id": 12, "text": "2^{-f(|x|)}" }, { "math_id": 13, "text": "x \\in L \\Rightarrow \\Pr[A \\text{ accepts } x] \\ge \\frac{1}{2} + \\frac{1}{2^{f(|x|)}}." }, { "math_id": 14, "text": "f(|x|)+1" }, { "math_id": 15, "text": "x \\not\\in L \\Rightarrow \\Pr[A' \\text{ accepts } x] \\le \\frac{1}{2} \\cdot \\left (1- \\frac{1}{2^{f(|x|)+1}} \\right ) < \\frac{1}{2} \\quad \\text{and} \\quad x \\in L \\Rightarrow \\Pr[A' \\text{ accepts } x] \\ge \\left (\\frac{1}{2}+\\frac{1}{2^{f(|x|)}} \\right )\\cdot \\left ( 1-\\frac{1}{2^{f(|x|)+1}} \\right ) > \\frac{1}{2}." } ]
https://en.wikipedia.org/wiki?curid=659322
6593254
Radonifying function
In measure theory, a radonifying function (ultimately named after Johann Radon) between measurable spaces is one that takes a cylinder set measure (CSM) on the first space to a true measure on the second space. It acquired its name because the pushforward measure on the second space was historically thought of as a Radon measure. Definition. Given two separable Banach spaces formula_0 and formula_1, a CSM formula_2 on formula_0 and a continuous linear map formula_3, we say that formula_4 is "radonifying" if the push forward CSM (see below) formula_5 on formula_1 "is" a measure, i.e. there is a measure formula_6 on formula_1 such that formula_7 for each formula_8, where formula_9 is the usual push forward of the measure formula_6 by the linear map formula_10. Push forward of a CSM. Because the definition of a CSM on formula_1 requires that the maps in formula_11 be surjective, the definition of the push forward for a CSM requires careful attention. The CSM formula_5 is defined by formula_12 if the composition formula_13 is surjective. If formula_14 is not surjective, let formula_15 be the image of formula_14, let formula_16 be the inclusion map, and define formula_17, where formula_18 (so formula_19) is such that formula_20. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\{ \\mu_{T} | T \\in \\mathcal{A} (E) \\}" }, { "math_id": 3, "text": "\\theta \\in \\mathrm{Lin} (E; G)" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "\\left\\{ \\left. \\left( \\theta_{*} (\\mu_{\\cdot}) \\right)_{S} \\right| S \\in \\mathcal{A} (G) \\right\\}" }, { "math_id": 6, "text": "\\nu" }, { "math_id": 7, "text": "\\left( \\theta_{*} (\\mu_{\\cdot}) \\right)_{S} = S_{*} (\\nu)" }, { "math_id": 8, "text": "S \\in \\mathcal{A} (G)" }, { "math_id": 9, "text": "S_{*} (\\nu)" }, { "math_id": 10, "text": "S : G \\to F_{S}" }, { "math_id": 11, "text": "\\mathcal{A} (G)" }, { "math_id": 12, "text": "\\left( \\theta_{*} (\\mu_{\\cdot}) \\right)_{S} = \\mu_{S \\circ \\theta}" }, { "math_id": 13, "text": "S \\circ \\theta : E \\to F_{S}" }, { "math_id": 14, "text": "S \\circ \\theta" }, { "math_id": 15, "text": "\\tilde{F}" }, { "math_id": 16, "text": "i : \\tilde{F} \\to F_{S}" }, { "math_id": 17, "text": "\\left( \\theta_{*} (\\mu_{\\cdot}) \\right)_{S} = i_{*} \\left( \\mu_{\\Sigma} \\right)" }, { "math_id": 18, "text": "\\Sigma : E \\to \\tilde{F}" }, { "math_id": 19, "text": "\\Sigma \\in \\mathcal{A} (E)" }, { "math_id": 20, "text": "i \\circ \\Sigma = S \\circ \\theta" } ]
https://en.wikipedia.org/wiki?curid=6593254
65937796
Illustrative model of greenhouse effect on climate change
Pedagogical illustration of how the greenhouse effect causes global warming There is a strong scientific consensus that the greenhouse effect due to carbon dioxide is a main driver of climate change. The following is an illustrative model, intended for pedagogical purposes, showing the main physical determinants of the effect. Under this understanding, global warming is determined by a simple energy budget: In the long run, Earth emits radiation in the same amount as it receives from the sun. However, the amount emitted depends both on Earth's temperature and on its albedo: The more reflective the Earth is at a certain wavelength, the less radiation it would both receive and emit at this wavelength; the warmer the Earth, the more radiation it emits. Thus changes in the albedo may have an effect on Earth's temperature, and the effect can be calculated by assuming a new steady state would be arrived at. In most of the electromagnetic spectrum, atmospheric carbon dioxide either blocks the radiation emitted from the ground almost completely, or is almost transparent, so that increasing the amount of carbon dioxide in the atmosphere, e.g. doubling the amount, will have negligible effects. However, in some narrow parts of the spectrum this is not so; doubling the amount of atmospheric carbon dioxide will make Earth's atmosphere relatively opaque in these wavelengths, which would result in Earth emitting light in these wavelengths from the upper layers of the atmosphere, rather than from lower layers or from the ground. Since the upper layers are colder, the amount emitted would be lower, leading to warming of Earth until the reduction in emission is compensated by the rise in temperature. Furthermore, such warming may cause a feedback mechanism due to other changes in Earth's albedo, e.g. due to ice melting. Structure of the atmosphere. Most of the air—including ~88% of the CO2—is located in the lower part of the atmosphere known as the troposphere. The troposphere is thicker at the equator and thinner at the poles, but the global mean of its thickness is around 11 km. Inside the troposphere, the temperature drops approximately linearly at a rate of 6.5 Celsius degrees per km, from a global mean of 288 Kelvin (15 Celsius) on the ground to 220 K (-53 Celsius). At higher altitudes, up to 20 km, the temperature is approximately constant; this layer is called the tropopause. The troposphere and tropopause together contain ~99% of the atmospheric CO2. Inside the troposphere, the CO2 density drops approximately exponentially with altitude, with a typical length of 6.3 km; this means that the density at height y is approximately proportional to exp(-y/6.3 km), and it goes down to 37% at 6.3 km, and to 17% at 11 km. Higher up, through the tropopause, the density continues to drop exponentially, albeit faster, with a typical length of 4.2 km. Effect of carbon dioxide on the Earth's energy budget. Earth constantly absorbs energy from sunlight and emits thermal radiation as infrared light. In the long run, Earth radiates the same amount of energy per second as it absorbs, because the amount of thermal radiation emitted depends upon temperature: If Earth absorbs more energy per second than it radiates, Earth heats up and the thermal radiation will increase, until balance is restored; if Earth absorbs less energy than it radiates, it cools down and the thermal radiation will decrease, again until balance is restored.
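The temperature and CO2 profiles described in the structure-of-the-atmosphere section above, which the calculations below rely on, can be summarized in a short numerical sketch (Python; the function names are illustrative, and the numbers are the ones quoted above):

import math

def temperature_K(y_km):
    # Linear lapse of 6.5 K/km from 288 K at the ground,
    # levelling off at the ~220 K quoted for the tropopause.
    return max(288.0 - 6.5 * y_km, 220.0)

def co2_relative_density(y_km):
    # Density relative to the ground: exp(-y/6.3 km) in the troposphere,
    # then a faster exponential drop (scale 4.2 km) above 11 km.
    if y_km <= 11.0:
        return math.exp(-y_km / 6.3)
    return math.exp(-11.0 / 6.3) * math.exp(-(y_km - 11.0) / 4.2)

for y in (0.0, 6.3, 11.0, 20.0):
    print(y, round(temperature_K(y), 1), round(co2_relative_density(y), 2))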
Atmospheric CO2 absorbs some of the energy radiated by the ground, but it itself emits thermal radiation: For example, in some wavelengths the atmosphere is totally opaque due to absorption by CO2; at these wavelengths, looking at Earth from outer space one would not see the ground, but the atmospheric CO2, and hence its thermal radiation—rather than the ground's thermal radiation. Had the atmosphere been at the same temperature as the ground, this would not change Earth's energy budget; but since the radiation is emitted from atmosphere layers that are cooler than the ground, less radiation is emitted. As the CO2 content of the atmosphere increases due to human activity, this process intensifies, and the total radiation emitted by Earth diminishes; therefore, Earth heats up until the balance is restored. Radiation absorption by carbon dioxide. CO2 absorbs the ground's thermal radiation mainly at wavelengths between 13 and 17 microns. In this wavelength range, it is almost solely responsible for the attenuation of radiation from the ground. The amount of ground radiation that is transmitted through the atmosphere at each wavelength is related to the optical depth of the atmosphere at this wavelength, OD, by: formula_0 The optical depth itself is given by the Beer–Lambert law: formula_1 where σ is the absorption cross section of a single CO2 molecule, and n(y) is the number density of these molecules at altitude y. Due to the strong dependence of the cross section on wavelength, the OD changes from around 0.1 at 13 microns to ~10 at 14 microns, and to even more than 100 at 15 microns, then dropping off to ~10 at 16 microns, ~1 at 17 microns and below 0.1 at 18 microns. Note that the OD depends on the total number of molecules per unit area in the atmosphere, and therefore rises linearly with its CO2 content. Looking from outer space into the atmosphere at a specific wavelength, one would see to different degrees different layers of the atmosphere, but on average one would see down to an altitude such that the part of the atmosphere from this altitude and up has an optical depth of ~1. Earth will therefore radiate at this wavelength approximately according to the temperature of that altitude. Increasing the atmospheric CO2 content means that the optical depth increases, so that the altitude seen from outer space increases; as long as it increases within the troposphere, the radiation temperature drops and the radiation decreases. When it reaches the tropopause, any further increase in CO2 levels will have no noticeable effect, since the temperature there no longer depends on the altitude. At wavelengths of 14 to 16 microns, even the tropopause, having ~0.12 of the amount of CO2 of the whole atmosphere, has OD > 1. Therefore, at these wavelengths Earth radiates mainly at the tropopause temperature, and addition of CO2 does not change this. At wavelengths shorter than 13 microns or longer than 18 microns, the atmospheric absorption is negligible, and addition of CO2 hardly changes this. Therefore, the effect of a CO2 increase on radiation is relevant at wavelengths of 13–14 and 16–18 microns, and addition of CO2 mainly contributes to the opacity of the troposphere, changing the altitude that is effectively seen from outer space within the troposphere. Calculating the effect on radiation. One layer model. We now turn to calculating the effect of CO2 on radiation, using a one-layer model, i.e.
we treat the whole troposphere as a single layer: Looking at a particular wavelength λ up to λ+dλ, the whole atmosphere has an optical depth OD, while the tropopause has an optical depth 0.12*OD; the troposphere has an optical depth of 0.88*OD. Thus, formula_2 of the radiation from below the tropopause is transmitted out, but this includes formula_3 of the radiation that originates from the ground. Thus, the weight of the troposphere in determining the radiation that is emitted to outer space is: formula_4 A relative increase in the CO2 concentration means an equal relative increase in the total CO2 content of the atmosphere, dN/N, where N is the number of CO2 molecules. Adding a minute number of such molecules dN will increase the troposphere's weight in determining the radiation for the relevant wavelengths, approximately by the relative amount dN/N, and thus by: formula_5 Since CO2 hardly influences sunlight absorption by Earth, the radiative forcing due to an increase in CO2 content is equal to the difference in the flux radiated by Earth due to such an increase. To calculate this, one must multiply the above by the difference in radiation due to the difference in temperature. According to Planck's law, this is: formula_6 The ground is at temperature T0 = 288 K, and for the troposphere we will take a typical temperature, the one at the average height of molecules, 6.3 km, where the temperature is T1 = 247 K. Therefore dI, the change in Earth's emitted radiation, is, in a rough approximation: formula_7 Since dN/N = d(ln N), this can be written as: formula_8 formula_9 The function formula_10 is maximal for x = 2.41, with a maximal value of 0.66, and it drops to half this value at x = 0.5 and x = 9.2. Thus we look at wavelengths for which the OD is between 0.5 and 9.2: This gives a wavelength band with a width of approximately 1 micron around 17 microns, and less than 1 micron around 13.5 microns. We therefore take: λ = 13.5 microns and again 17 microns (summing contributions from both); dλ = 0.5 micron for the 13.5 microns band, and 1 micron for the 17 microns band; and formula_11 This gives -2.3 W/m2 for the 13.5 microns band, and -2.7 W/m2 for the 17 microns band, for a total of 5 W/m2. A 2-fold increase in CO2 content changes the wavelength ranges only slightly, and so this derivative is approximately constant along such an increase. Thus, a 2-fold increase in CO2 content will reduce the radiation emitted by Earth by approximately: ln(2)*5 W/m2 = 3.4 W/m2. More generally, an increase by a factor c/c0 gives: ln(c/c0)*5 W/m2. These results are close to the approximation of a more elaborate yet simplified model giving ln(c/c0)*5.35 W/m2, and to the radiative forcing due to CO2 doubling obtained with much more complicated models, 3.1 W/m2. Emission Layer Displacement Model. We may make a more elaborate calculation by treating the atmosphere as composed of many thin layers. For each such layer, at height y and thickness dy, the weight of this layer in determining the radiation temperature seen from outer space is a generalization of the expression arrived at earlier for the troposphere. It is: formula_12 where OD(y) is the optical depth of the part of the atmosphere from y upwards. The total effect of CO2 on the radiation at wavelengths λ to λ+dλ is therefore: formula_13 formula_14 where B is the expression for radiation according to Planck's law presented above: formula_15 and the infinity here can actually be taken as the top of the tropopause.
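Before carrying the layered calculation further, the one-layer estimate above can be reproduced numerically. The following Python sketch evaluates dI/d(ln N) for the two bands, with the weight e^(-0.12*OD) - e^(-OD) replaced by the value 0.5 used above (all numbers are the ones quoted in this section):

import math

h, c, k = 6.626e-34, 3.0e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant
T0, T1 = 288.0, 247.0                   # ground and mean-troposphere temperatures (K)

def band_contribution(lam_um, dlam_um, weight=0.5):
    # Wien-approximated Planck difference between T1 and T0, times the band width and the weight.
    lam, dlam = lam_um * 1e-6, dlam_um * 1e-6
    planck_diff = (2 * math.pi * h * c**2 * dlam / lam**5) * (
        math.exp(-h * c / (lam * k * T1)) - math.exp(-h * c / (lam * k * T0)))
    return planck_diff * weight

bands = [(13.5, 0.5), (17.0, 1.0)]
per_band = [band_contribution(lam, dlam) for lam, dlam in bands]
print([round(x, 1) for x in per_band])             # close to the -2.3 and -2.7 W/m2 above
print(round(math.log(2) * abs(sum(per_band)), 1))  # close to the 3.4 W/m2 quoted for doubling

With this check in hand, we return to the layered calculation.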
Thus the effect of a relative change in CO2 concentration, dN/N = dn0/n0 (where n0 is the number density near the ground), would be (noting that dN/N = d(ln N) = d(ln n0)): formula_16 formula_17 where we have used integration by parts. Because B does not depend on N, and because formula_18, we have: formula_19 Now, formula_20 is constant in the troposphere and zero in the tropopause. We denote the height of the border between them as U. So: formula_21 The optical depth is proportional to the integral of the number density over y, as is the pressure. Therefore, OD(y) is proportional to the pressure p(y), which within the troposphere (height 0 to U) falls exponentially with decay constant 1/Hp (Hp~5.6 km for CO2), thus: formula_22 Since formula_23 + constant, viewed as a function of both y and N, we have: formula_24 And therefore differentiating with respect to ln N is the same as differentiating with respect to y, times a factor of formula_25. We arrive at: formula_26. Since the temperature only changes by ~25% within the troposphere, one may take a (rough) linear approximation of B with T at the relevant wavelengths, and get: formula_27 Due to the linear approximation of B we have: formula_28 with T1 taken at Hp, so that totally: formula_29 giving the same result as in the one-layer model presented above, as well as the logarithmic dependence on N, except that now we see that T1 is taken at 5.6 km (the pressure drop height scale), rather than 6.3 km (the density drop height scale). Comparison to the total radiation emitted by Earth. The total average energy per unit time radiated by Earth is equal to the average energy flux "j" times the surface area 4πR2, where R is Earth's radius. On the other hand, the average energy flux absorbed from sunlight is the solar constant S0 times Earth's cross section of πR2, times the fraction absorbed by Earth, which is one minus Earth's albedo "a". The average energy per unit time radiated out is equal to the average energy per unit time absorbed from sunlight, so: formula_30 giving: formula_31 Based on the value of 3.1 W/m2 quoted above in the section on the one layer model, the radiative forcing due to CO2 relative to the average radiated flux is therefore: formula_32 An exact calculation using the MODTRAN model, over all wavelengths and including the methane and ozone greenhouse gases, gives, for tropical latitudes, an outgoing flux formula_33 298.645 W/m2 for current CO2 levels and formula_33 295.286 W/m2 after CO2 doubling, i.e. a radiative forcing of 1.1%, under clear sky conditions, as well as a ground temperature of 299.7 K (26.6 °C). The radiative forcing is largely similar at different latitudes and under different weather conditions. Effect on global warming. On average, the total power of the thermal radiation emitted by Earth is equal to the power absorbed from sunlight. As CO2 levels rise, the emitted radiation can maintain this equilibrium only if the temperature increases, so that the total emitted radiation is unchanged (averaged over enough time, on the order of a few years, so that diurnal and annual periods are averaged over). According to the Stefan–Boltzmann law, the total power emitted by Earth per unit area is: formula_34 where σB is the Stefan–Boltzmann constant and ε is the emissivity in the relevant wavelengths. T is some average temperature representing the effective radiation temperature.
CO2 content changes the effective T, but instead one may take T to be a typical ground or lower-atmosphere temperature (same as T0 or close to it) and consider the CO2 content as changing the emissivity ε. We thus re-interpret ε in the above equation as an effective emissivity that includes the CO2 effect, and take T = T0. A change in CO2 content thus causes a change dε in this effective emissivity, so that formula_35 is the radiative forcing, divided by the total energy flux radiated by Earth. The relative change in the total radiated energy flux due to changes in emissivity and temperature is: formula_36 Thus, if the total emitted power is to remain unchanged, a radiative forcing that is a given fraction of the total energy flux radiated by Earth causes a relative change in temperature equal to 1/4 of that fraction. Thus: formula_37 Ice–albedo feedback. Since warming of Earth means less ice on the ground on average, it would cause a lower albedo and more sunlight absorbed, hence further increasing Earth's temperature. As a rough estimate, we note that average temperatures on most of Earth are between -20 and +30 °C; a good guess is that 2% of its surface is between -1 and 0 °C, and thus an equivalent area of its surface will be changed from ice-covered (or snow-covered) to either ocean or forest. For comparison, in the northern hemisphere, the arctic sea ice has shrunk between 1979 and 2015 by 1.43×10^12 m2 at maxima and 2.52×10^12 m2 at minima, for an average of almost 2×10^12 m2, which is 0.4% of Earth's total surface of 510×10^12 m2. At this time the global temperature rose by ~0.6 °C. The areas of inland glaciers combined (not including the antarctic ice sheet), the antarctic sea ice, and the arctic sea ice are all comparable, so one may expect the change in the arctic sea ice to be roughly a third of the total change, giving 1.2% of the Earth surface turned from ice to ocean or bare ground per 0.6 °C, or equivalently 2% per 1 °C. The antarctic ice cap size oscillates, and it is hard to predict its future course, with factors such as relative thermal insulation and constraints due to the Antarctic Circumpolar Current probably playing a part. As the difference in albedo between ice and e.g. ocean is around 2/3, this means that due to a 1 °C rise, the albedo will drop by 2%*2/3 = 4/3%. However, this will mainly happen in northern and southern latitudes, around 60 degrees off the equator, and so the effective area is actually 2% * cos(60°) = 1%, and the global albedo drop would be 2/3%. Since a change in radiation of 1.3% causes a direct change of 1 degree Celsius (without feedback), as calculated above, and this causes another change of 2/3% in radiation due to positive feedback, which is half the original change, this means the total factor caused by this feedback mechanism would be: formula_38 Thus, this feedback would double the effect of the change in radiation, causing a change of ~ 2 K in the global temperature, which is indeed the commonly accepted short-term value. For the long-term value, including further feedback mechanisms, ~3 K is considered more probable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
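As a closing numerical check, the arithmetic of the last two sections can be collected in a few lines of Python (a sketch using the rounded numbers quoted in the text):

S0, albedo = 1360.0, 0.3
j = 0.25 * (1 - albedo) * S0            # mean emitted flux, ~240 W/m2
forcing = 3.1                           # radiative forcing for CO2 doubling (W/m2)
T0 = 288.0
dT_direct = 0.25 * T0 * (forcing / j)   # ~0.9 K without feedback
feedback_gain = 1.0 / (1.0 - 0.5)       # geometric series 1 + 1/2 + 1/4 + ... = 2
print(round(j), round(dT_direct, 2), round(dT_direct * feedback_gain, 1))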
[ { "math_id": 0, "text": "T = e^{-OD}" }, { "math_id": 1, "text": " OD(y) = \\sigma \\int_y^{\\infty} n(y^{\\prime})dy^{\\prime}" }, { "math_id": 2, "text": "e^{-0.12\\cdot OD}" }, { "math_id": 3, "text": "e^{-0.88\\cdot OD}" }, { "math_id": 4, "text": "e^{-0.12\\cdot OD}\\cdot (1- e^{-0.88\\cdot OD}) =e^{-0.12\\cdot OD} - e^{-OD} " }, { "math_id": 5, "text": "\\frac{dN}{N}\\cdot (e^{-0.12\\cdot OD} - e^{-OD})" }, { "math_id": 6, "text": "\\frac{2\\pi h c^2}{\\lambda^5} \\frac{d\\lambda}{e^{hc/\\lambda kT}-1}\\approx \\frac{2\\pi h c^2 d\\lambda }{\\lambda^5} e^{-hc/\\lambda kT}" }, { "math_id": 7, "text": " dI = \\frac{dN}{N}\\frac{2\\pi h c^2 d\\lambda }{\\lambda^5} (e^{-hc/\\lambda kT_1}-e^{-hc/\\lambda kT_0})(e^{-0.12\\cdot OD} - e^{-OD}) " }, { "math_id": 8, "text": " \\frac{dI}{d ln N} = \\frac{2\\pi h c^2 d\\lambda }{\\lambda^5} (e^{-hc/\\lambda kT_1}-e^{-hc/\\lambda kT_0})(e^{-0.12\\cdot OD} - e^{-OD}) " }, { "math_id": 9, "text": "= 3.7\\cdot 10^{8} (W/m^2) \\cdot \\frac {d\\lambda}{\\lambda^5}*{\\mu}m^4 [exp(-58.3 {\\mu}m/\\lambda) - exp(-50 {\\mu}m/\\lambda)](e^{-0.12\\cdot OD} - e^{-OD})" }, { "math_id": 10, "text": "e^{-0.12\\cdot x} - e^{-x}" }, { "math_id": 11, "text": "e^{-0.12\\cdot OD} - e^{-OD} \\approx 0.5" }, { "math_id": 12, "text": "e^{OD(y+dy)} - e^{-OD(y)} = -\\frac{d}{dy} \\frac{d OD(y)}{y} e^{-OD(y)}" }, { "math_id": 13, "text": "-\\int_0^{\\infty} \\frac{d}{dy} \\frac{d OD(y)}{y} e^{-OD(y)} [B(\\lambda, d\\lambda,T(y))-B(\\lambda, d\\lambda,T_0)]dy " }, { "math_id": 14, "text": " = \\int_0^{\\infty} \\frac{d}{dy} \\frac{d e^{-OD(y)}}{y} [B(\\lambda, d\\lambda,T(y))-B(\\lambda, d\\lambda,T_0)]dy" }, { "math_id": 15, "text": "B(\\lambda, d\\lambda,T) = \\frac{2\\pi h c^2}{\\lambda^5} \\frac{d\\lambda}{e^{hc/\\lambda kT}-1}\\approx \\frac{2\\pi h c^2 d\\lambda }{\\lambda^5} e^{-hc/\\lambda kT}" }, { "math_id": 16, "text": "\\frac{dI}{d ln N} = \\frac{d}{d ln N} \\{\\int_0^{\\infty} \\frac{d}{dy} \\frac{d e^{-OD(y)}}{y} [B(\\lambda, d\\lambda,T(y))-B(\\lambda, d\\lambda,T(_0))]dy\\} " }, { "math_id": 17, "text": " = \\frac{d}{d ln N} \\{e^{-OD(\\infty)}[B(\\lambda, d\\lambda,T(\\infty)) -B(\\lambda, d\\lambda,T(_0))] - \\int_0^{\\infty} e^{-OD(y)} \\frac{d}{dy} B(\\lambda, d\\lambda,T(y))dy\\}" }, { "math_id": 18, "text": "e^{-OD(\\infty)} = 1" }, { "math_id": 19, "text": "\\frac{dI}{d ln N} = -\\int_0^{\\infty} \\frac{d}{d ln N}e^{-OD(y)} \\frac{d}{dy} B(\\lambda, d\\lambda,T(y))dy = -\\int_0^{\\infty} \\frac{d}{d ln N}e^{-OD(y)} \\frac{d}{dT} B(\\lambda, d\\lambda,T(y)) \\frac{dT}{dy} dy" }, { "math_id": 20, "text": "\\frac{dT}{dy}" }, { "math_id": 21, "text": "\\frac{dI}{d ln N} = - \\frac{dT}{dy} \\int_0^U \\frac{d}{d ln N}e^{-OD(y)} \\frac{d}{dT} B(\\lambda, d\\lambda,T(y)) dy" }, { "math_id": 22, "text": " OD(y) = \\sigma \\int_y^{\\infty} n(y^{\\prime})dy^{\\prime} = \\sigma n_0 e^{-y/H_p} = \\sigma e^{-[y-H_p ln(n_0)]/H_p}" }, { "math_id": 23, "text": "ln(n_0) = ln(N)" }, { "math_id": 24, "text": " OD(y) = OD(y-H_p\\cdot ln N)" }, { "math_id": 25, "text": "-H_p" }, { "math_id": 26, "text": "\\frac{dI}{d ln N} = H_p\\cdot\\frac{dT}{dy} \\int_0^U \\frac{d}{d y}e^{-OD(y)} \\frac{d}{dT} B(\\lambda, d\\lambda,T(y)) dy" }, { "math_id": 27, "text": "\\frac{dI}{d ln N} \\approx H_p\\cdot\\frac{d}{dT} B(\\lambda, d\\lambda,T) \\frac{dT}{dy} \\int_0^U \\frac{d}{d y}e^{-OD(y)} dy = H_p\\cdot\\frac{d}{dT} B(\\lambda, d\\lambda,T) \\frac{dT}{dy} (e^{-OD(U)} - e^{-OD(0)})" }, { "math_id": 28, "text": "H_p\\cdot\\frac{d}{dT} B(\\lambda, d\\lambda,T) \\frac{dT}{dy} = 
[T(H_p)-T_0]\\cdot\\frac{d}{dT} B(\\lambda, d\\lambda,T) = B(\\lambda, d\\lambda,T_1) - B(\\lambda, d\\lambda,T_0)" }, { "math_id": 29, "text": "\\frac{dI}{d ln N} \\approx [B(\\lambda, d\\lambda,T_1)-B(\\lambda, d\\lambda,T_0)] (e^{-OD(U)} - e^{-OD(0)})" }, { "math_id": 30, "text": "4\\pi R^2 \\cdot j = \\pi R^2 \\cdot (1-a) \\cdot S_0 " }, { "math_id": 31, "text": "j = \\frac{1}{4} \\cdot (1-a) \\cdot S_0 = \\frac{1}{4} \\cdot (1-0.3) \\cdot 1360 W/m^2 = 240 W/m^2" }, { "math_id": 32, "text": "3.1 (W/m^2) / 240 (W/m^2) = 1.3%" }, { "math_id": 33, "text": "j =" }, { "math_id": 34, "text": " j = \\epsilon \\sigma_B\\cdot T^4" }, { "math_id": 35, "text": "\\frac{d\\epsilon}{\\epsilon}" }, { "math_id": 36, "text": " \\frac{dj}{j} = \\frac{d\\epsilon}{\\epsilon} + 4\\frac{dT}{T} " }, { "math_id": 37, "text": "\\Delta T = \\frac{1}{4} T \\cdot \\frac{\\Delta j}{j} = \\frac{1}{4}\\cdot 288 K \\cdot 1.3% = 0.94 K" }, { "math_id": 38, "text": " 1 + 1/2 + (1/2)^2 + (1/2)^3 ... = 2 " } ]
https://en.wikipedia.org/wiki?curid=65937796
65938707
Decision curve analysis
Type of probability threshold analysis Decision curve analysis evaluates a predictor for an event as a probability threshold is varied, typically by showing a graphical plot of net benefit against threshold probability. By convention, the default strategies of assuming that all or no observations are positive are also plotted. Decision curve analysis is distinguished from other statistical methods like receiver operating characteristic (ROC) curves by the ability to assess the clinical value of a predictor. Applying decision curve analysis can determine whether using a predictor to make clinical decisions like performing biopsy will provide benefit over alternative decision criteria, given a specified threshold probability. Threshold probability is defined as the minimum probability of an event at which a decision-maker would take a given action, for instance, the probability of cancer at which a doctor would order a biopsy. A lower threshold probability implies a greater concern about the event (e.g. a patient worried about cancer), while a higher threshold implies greater concern about the action to be taken (e.g. a patient averse to the biopsy procedure). Net benefit is a weighted combination of true and false positives, where the weight is derived from the threshold probability. The predictor could be a binary classifier, or a percentage risk from a prediction model, in which case a positive classification is defined by whether predicted probability is at least as great as the threshold probability. Theory. The threshold probability compares the relative harm of unnecessary treatment (false positives) to the benefit of indicated treatment (true positives). The use of threshold probability to weight true and false positives derives from decision theory, in which the expected value of a decision can be calculated from the utilities and probabilities associated with decision outcomes. In the case of predicting an event, there are four possible outcomes: true positive, true negative, false positive and false negative. This means that to conduct a decision analysis, the analyst must specify four different utilities, which is often challenging. In decision curve analysis, the strategy of considering all observations as negative is defined as having a value of zero. This means that only true positives (event identified and appropriately managed) and false positives (unnecessary action) are considered. Furthermore, it is easily shown that the ratio of the utility of a true positive vs. the utility of avoiding a false positive is the odds at the threshold probability. For instance, a doctor whose threshold probability to order a biopsy for cancer is 10% believes that the utility of finding cancer early is 9 times greater than that of avoiding the harm of unnecessary biopsy. Similarly to the calculation of expected value, weighting false positive outcomes by the threshold probability yields an estimate of net benefit that incorporates decision consequences and preferences. Interpretation. A decision curve analysis graph is drawn by plotting threshold probability on the x-axis and net benefit on y-axis, illustrating the trade-offs between benefit (true positives) and harm (false positives) as the threshold probability (preference) is varied across a range of reasonable threshold probabilities. The calculation of net benefit from true positives and false positives is analogous to profit. Consider a wine importer who pays €1m to buy wine in France and sells it for $1.5m in the United States. 
To calculate the profit, an exchange rate between euros and dollars must be used to put cost and revenue on the same scale. Similarly, the costs (false positives) and revenue (true positives) of the predictor must be compared on the same scale to calculate net benefit. The factor formula_0 expresses the relative harms and benefits of the different clinical consequences of a decision and is therefore used as the exchange rate in net benefit. The figure gives a hypothetical example of biopsy for cancer. Given the relative benefits and harms of cancer early detection and avoidable biopsy, we would consider it unreasonable to opt for a biopsy if the risk of cancer was less than 5% or, alternatively, to refuse biopsy if given a risk of more than 25%. Hence the best strategy is that with the highest net benefit across the range of threshold probabilities between 5 – 25%, in this case, model A. If no strategy has highest net benefit across the full range, that is, if the decision curves cross, then the decision curve analysis is equivocal. The default strategies of assuming all or no observations are positive are often interpreted as “Treat all” (or “Intervention for all”) and “Treat none” (or “Intervention for none”) respectively. The curve for “Treat none” is fixed at a net benefit of 0. The curve for “Treat all” crosses the y-axis and “Treat none” at the event prevalence. Net benefit on the y-axis is expressed in units of true positives per person. For instance, a difference in net benefit of 0.025 at a given threshold probability between two predictors of cancer, Model A and Model B, could be interpreted as “using Model A instead of Model B to order biopsies increases the number of cancers detected by 25 per 1000 patients, without changing the number of unnecessary biopsies.” Further reading. Additional resources and a complete tutorial for decision curve analysis are available at decisioncurveanalysis.org. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
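As a concrete illustration of how net benefit weights true and false positives by the threshold odds, the following Python sketch computes net benefit for two hypothetical predictors and for the default strategies (all counts are invented for illustration):

def net_benefit(tp, fp, n, pt):
    # True positives minus false positives weighted by the threshold odds, both per person.
    return tp / n - (fp / n) * (pt / (1 - pt))

n, pt = 1000, 0.10          # cohort size; 10% threshold (1 cancer found ~ 9 avoidable biopsies)
model_a = net_benefit(tp=80, fp=200, n=n, pt=pt)
model_b = net_benefit(tp=55, fp=150, n=n, pt=pt)
treat_all = net_benefit(tp=100, fp=900, n=n, pt=pt)   # prevalence 10%, everyone positive
treat_none = 0.0
print(round(model_a, 3), round(model_b, 3), round(treat_all, 3), treat_none)

At a threshold equal to the prevalence, the "treat all" strategy has (up to rounding) net benefit zero, matching the description of the default curves above.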
[ { "math_id": 0, "text": "{p_t} \\over {1-p_t}" } ]
https://en.wikipedia.org/wiki?curid=65938707
6594303
Finite-dimensional distribution
Mathematics concept In mathematics, finite-dimensional distributions are a tool in the study of measures and stochastic processes. A lot of information can be gained by studying the "projection" of a measure (or process) onto a finite-dimensional vector space (or finite collection of times). Finite-dimensional distributions of a measure. Let formula_0 be a measure space. The finite-dimensional distributions of formula_1 are the pushforward measures formula_2, where formula_3, formula_4, is any measurable function. Finite-dimensional distributions of a stochastic process. Let formula_5 be a probability space and let formula_6 be a stochastic process. The finite-dimensional distributions of formula_7 are the push forward measures formula_8 on the product space formula_9 for formula_4 defined by formula_10 Very often, this condition is stated in terms of measurable rectangles: formula_11 The definition of the finite-dimensional distributions of a process formula_7 is related to the definition for a measure formula_1 in the following way: recall that the law formula_12 of formula_7 is a measure on the collection formula_13 of all functions from formula_14 into formula_15. In general, this is an infinite-dimensional space. The finite dimensional distributions of formula_7 are the push forward measures formula_16 on the finite-dimensional product space formula_9, where formula_17 is the natural "evaluate at times formula_18" function. Relation to tightness. It can be shown that if a sequence of probability measures formula_19 is tight and all the finite-dimensional distributions of the formula_20 converge weakly to the corresponding finite-dimensional distributions of some probability measure formula_1, then formula_20 converges weakly to formula_1.
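The "evaluate at times" map used above can be made concrete by simulation. The following Python sketch draws sample paths of a simple random walk and collects samples from one of its finite-dimensional distributions (the process and the chosen times are illustrative only):

import random
from collections import Counter

def sample_path(n_steps):
    # One path of a simple +/-1 random walk, indexed by times 0..n_steps.
    path, s = [0], 0
    for _ in range(n_steps):
        s += random.choice((-1, 1))
        path.append(s)
    return path

times = (3, 7)   # evaluate each path at these two times
samples = [tuple(sample_path(10)[t] for t in times) for _ in range(10000)]

# Empirical version of the push forward of the law of the process
# under the map sigma -> (sigma(t1), sigma(t2)).
print(Counter(samples).most_common(3))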
[ { "math_id": 0, "text": "(X, \\mathcal{F}, \\mu)" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "f_{*} (\\mu)" }, { "math_id": 3, "text": "f : X \\to \\mathbb{R}^{k}" }, { "math_id": 4, "text": "k \\in \\mathbb{N}" }, { "math_id": 5, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 6, "text": "X : I \\times \\Omega \\to \\mathbb{X}" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "\\mathbb{P}_{i_{1} \\dots i_{k}}^{X}" }, { "math_id": 9, "text": "\\mathbb{X}^{k}" }, { "math_id": 10, "text": "\\mathbb{P}_{i_{1} \\dots i_{k}}^{X} (S) := \\mathbb{P} \\left\\{ \\omega \\in \\Omega \\left| \\left( X_{i_{1}} (\\omega), \\dots, X_{i_{k}} (\\omega) \\right) \\in S \\right. \\right\\}." }, { "math_id": 11, "text": "\\mathbb{P}_{i_{1} \\dots i_{k}}^{X} (A_{1} \\times \\cdots \\times A_{k}) := \\mathbb{P} \\left\\{ \\omega \\in \\Omega \\left| X_{i_{j}} (\\omega) \\in A_{j} \\mathrm{\\,for\\,} 1 \\leq j \\leq k \\right. \\right\\}." }, { "math_id": 12, "text": "\\mathcal{L}_{X}" }, { "math_id": 13, "text": "\\mathbb{X}^{I}" }, { "math_id": 14, "text": "I" }, { "math_id": 15, "text": "\\mathbb{X}" }, { "math_id": 16, "text": "f_{*} \\left( \\mathcal{L}_{X} \\right)" }, { "math_id": 17, "text": "f : \\mathbb{X}^{I} \\to \\mathbb{X}^{k} : \\sigma \\mapsto \\left( \\sigma (t_{1}), \\dots, \\sigma (t_{k}) \\right)" }, { "math_id": 18, "text": "t_{1}, \\dots, t_{k}" }, { "math_id": 19, "text": "(\\mu_{n})_{n = 1}^{\\infty}" }, { "math_id": 20, "text": "\\mu_{n}" } ]
https://en.wikipedia.org/wiki?curid=6594303
65944
Plankalkül
Programming language designed 1942 to 1945 Plankalkül () is a programming language designed for engineering purposes by Konrad Zuse between 1942 and 1945. It was the first high-level programming language to be designed for a computer. "Kalkül" (from Latin "calculus") is the German term for a formal system—as in "Hilbert-Kalkül", the original name for the Hilbert-style deduction system—so "Plankalkül" refers to a formal system for planning. History of programming. In the domain of creating computing machines, Zuse was self-taught, and developed them without knowledge about other mechanical computing machines that existed already – although later on (building the Z3) being inspired by Hilbert's and Ackermann's book on elementary mathematical logic (see Principles of Mathematical Logic). To describe logical circuits, Zuse invented his own diagram and notation system, which he called "combinatorics of conditionals" (). After finishing the Z1 in 1938, Zuse discovered that the calculus he had independently devised already existed and was known as propositional calculus.3 What Zuse had in mind, however, needed to be much more powerful (propositional calculus is not Turing-complete and is not able to describe even simple arithmetic calculations). In May 1939, he described his plans for the development of what would become Plankalkül. He wrote the following in his notebook: &lt;templatestyles src="Verse translation/styles.css" /&gt; While working on his doctoral dissertation, Zuse developed the first known formal system of algorithm notation9 capable of handling branches and loops.1856 In 1942 he began writing a chess program in Plankalkül. In 1944, Zuse met with the German logician and philosopher Heinrich Scholz, who expressed appreciation for Zuse's utilization of logical calculus. In 1945, Zuse described Plankalkül in an unpublished book. The collapse of Nazi Germany, however, prevented him from submitting his manuscript.18 At that time the only two working computers in the world were ENIAC and Harvard Mark I, neither of which used a compiler, and ENIAC needed to be reprogrammed for each task by changing how the wires were connected.3 Although most of his computers were destroyed by Allied bombs, Zuse was able to rescue one machine, the Z4, and move it to the Alpine village of Hinterstein8 (part of Bad Hindelang). Unable to continue building computers – which was also forbidden by the Allied Powers – Zuse devoted his time to the development of a higher-level programming model and language.18 In 1948, he published a paper in the "Archiv der Mathematik" and presented at the Annual Meeting of the GAMM.89 His work failed to attract much attention. In a 1957 lecture, Zuse expressed his hope that Plankalkül, "after some time as a Sleeping Beauty, will yet come to life."3 He expressed disappointment that the designers of ALGOL 58 never acknowledged the influence of Plankalkül on their own work.1815 Plankalkül was republished with commentary in 1972. The first compiler for Plankalkül was implemented by Joachim Hohmann in his 1975 dissertation. Other independent implementations followed in 1998 and 2000 at the Free University of Berlin.2 Description. Plankalkül has drawn comparisons to the language APL, and to relational algebra. It includes assignment statements, subroutines, conditional statements, iteration, floating-point arithmetic, arrays, hierarchical record structures, assertions, exception handling, and other advanced features such as goal-directed execution. 
The Plankalkül provides a data structure called "generalized graph" (), which can be used to represent geometrical structures. Many features of the Plankalkül reappear in later programming languages; an exception is its idiosyncratic two-dimensional notation using multiple lines. Some features of the Plankalkül:217 Data types. The only primitive data type in the Plankalkül is a single bit or Boolean ( – yes-no value in Zuse's terminology). It is denoted by the identifier formula_0. All the further data types are composite, and build up from primitive by means of "arrays" and "records".679 So, a sequence of eight bits (which in modern computing could be regarded as byte) is denoted by formula_1, and Boolean matrix of size formula_2 by formula_3  is described by formula_4. There also exists a shortened notation, so one could write formula_5 instead of formula_6.679 Type formula_0 could have two possible values formula_7 and formula_8. So 4-bit sequence could be written like L00L, but in cases where such a sequence represents a number, the programmer could use the decimal representation 9.679 Record of two components formula_9 and formula_10 is written as formula_11.679 Type () in Plankalkül consists of 3 elements: structured value (), pragmatic meaning () and possible restriction on possible values ().679 User defined types are identified by letter A with number, like formula_12 – first user defined type. Examples. Zuse used a lot of examples from chess theory:680 Identifiers. Identifiers are alphanumeric characters with a number.679 There are the following kinds of identifiers for variables:10 Particular variable of some kind is identified by number, written under the kind.679 For example: formula_14, formula_15, formula_16 etc. Programs and subprograms are marked with a letter P, followed by a program (and optionally a subprogram) number. For example formula_17, formula_18.679 Output value of program formula_17 saved there in variable formula_19 is available for other subprograms under the identifier formula_20, and reading value of that variable also means executing related subprogram.680 Accessing elements by index. Plankalkül allows access for separate elements of variable by using "component index" (). When, for example, program receives input in variable formula_14 of type formula_13 (game state), then formula_21 — gives board state, formula_22 — piece on square number i, and formula_23 bit number j of that piece.680 In modern programming languages, that would be described by notation similar to codice_0, codice_1, codice_2 (although to access a single bit in modern programming languages a bitmask is typically used). Two-dimensional syntax. Because indexes of variables are written vertically, each Plankalkül instruction requires multiple rows to write down. First row contains variable kind, then variable number marked with letter V (), then indexes of variable subcomponents marked with K (), and then () marked with S, which describes variable type. Type is not required, but Zuse notes that this helps with reading and understanding the program.681 In the line formula_24 types formula_0 and formula_25 could be shortened to formula_7 and formula_26.681 Examples: Indexes could be not only constants. Variables could be used as indexes for other variables, and that is marked with a line, which shows in which component index would value of variable be used: Assignment operation. Zuse introduced in his calculus an assignment operator, unknown in mathematics before him. 
He marked it with «formula_27», and called it yields-sign (). Use of concept of assignment is one of the key differences between mathematics and computer science.14 Zuse wrote that the expression: formula_28 is analogous to the more traditional mathematical equation: formula_29 There are claims that Konrad Zuse initially used the glyph as a sign for assignment, and started to use formula_27 under the influence of Heinz Rutishauser.681 Knuth and Pardo believe that Zuse always wrote formula_27, and that was introduced by publishers of «Über den allgemeinen Plankalkül als Mittel zur Formulierung schematisch-kombinativer Aufgaben» in 1948.14 In the ALGOL 58 conference in Zürich, European participants proposed to use the assignment character introduced by Zuse, but the American delegation insisted on codice_3.681 The variable that stores the result of an assignment (l-value) is written to the right side of assignment operator.14 The first assignment to the variable is considered to be a declaration.681 The left side of assignment operator is used for an expression (), that defines which value will be assigned to the variable. Expressions could use arithmetic operators, Boolean operators, and comparison operators (formula_30 etc.).682 The exponentiation operation is written similarly to the indexing operation - using lines in the two-dimensional notation:45 Control flow. Boolean values were represented as integers with and . Conditional control flow took the form of a guarded statement , which executed the block if was true. There was also an iteration operator, of the form } which repeats until all guards are false. Terminology. Zuse called a single program a "Rechenplan" ("computation plan"). He envisioned what he called a "Planfertigungsgerät" ("plan assembly device"), which would automatically translate the mathematical formulation of a program into machine-readable punched film stock, something today would be called a translator or compiler. Example. The original notation was two-dimensional. For a later implementation in the 1990s, a linear notation was developed. The following example defines a function codice_4 (in a linear transcription) that calculates the maximum of three variables: P1 max3 (V0[:8.0],V1[:8.0],V2[:8.0]) → R0[:8.0] max(V0[:8.0],V1[:8.0]) → Z1[:8.0] max(Z1[:8.0],V2[:8.0]) → R0[:8.0] END P2 max (V0[:8.0],V1[:8.0]) → R0[:8.0] V0[:8.0] → Z1[:8.0] (Z1[:8.0] &lt; V1[:8.0]) → V1[:8.0] → Z1[:8.0] Z1[:8.0] → R0[:8.0] END References. &lt;templatestyles src="Reflist/styles.css" /&gt;
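For comparison, the two computation plans above can be transcribed into a modern language. The following Python sketch mirrors the structure of the linear transcription (the V/Z/R variable roles are kept as names, and the 8-bit type annotations are dropped):

def p2_max(v0, v1):
    z1 = v0          # V0[:8.0] -> Z1[:8.0]
    if z1 < v1:      # (Z1[:8.0] < V1[:8.0]) -> ...
        z1 = v1      # V1[:8.0] -> Z1[:8.0]
    return z1        # Z1[:8.0] -> R0[:8.0]

def p1_max3(v0, v1, v2):
    z1 = p2_max(v0, v1)      # max(V0, V1) -> Z1
    return p2_max(z1, v2)    # max(Z1, V2) -> R0

print(p1_max3(3, 9, 5))      # 9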
[ { "math_id": 0, "text": "S0" }, { "math_id": 1, "text": "8 \\times S0" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "m \\times n \\times S0" }, { "math_id": 5, "text": "S1 \\cdot n" }, { "math_id": 6, "text": "n \\times S0" }, { "math_id": 7, "text": "0" }, { "math_id": 8, "text": "L" }, { "math_id": 9, "text": "\\sigma" }, { "math_id": 10, "text": "\\tau" }, { "math_id": 11, "text": "(\\sigma, \\tau)" }, { "math_id": 12, "text": "A1" }, { "math_id": 13, "text": "A10" }, { "math_id": 14, "text": "\\begin{matrix} V \\\\ 0 \\end{matrix}" }, { "math_id": 15, "text": "\\begin{matrix} Z \\\\ 2 \\end{matrix}" }, { "math_id": 16, "text": "\\begin{matrix} C \\\\ 31 \\end{matrix}" }, { "math_id": 17, "text": "P13" }, { "math_id": 18, "text": "P5 \\cdot 7" }, { "math_id": 19, "text": "\\begin{matrix} R \\\\ 0 \\end{matrix}" }, { "math_id": 20, "text": "\\begin{matrix} R17 \\\\ 0 \\end{matrix}" }, { "math_id": 21, "text": "\\begin{matrix} V \\\\ 0 \\\\ 0 \\end{matrix}" }, { "math_id": 22, "text": "\\begin{matrix} V \\\\ 0 \\\\ 0 \\cdot i \\end{matrix}" }, { "math_id": 23, "text": "\\begin{matrix} V \\\\ 0 \\\\ 0 \\cdot i \\cdot j \\end{matrix}" }, { "math_id": 24, "text": "S" }, { "math_id": 25, "text": "S1" }, { "math_id": 26, "text": "1" }, { "math_id": 27, "text": "\\Rightarrow" }, { "math_id": 28, "text": "\\begin{array}{r|lll}\n & Z + 1 & \\Rightarrow & Z\\\\ \n V & 1 & & 1\\\\\n \\end{array}" }, { "math_id": 29, "text": "\\begin{array}{r|lll}\n & Z + 1 & = & Z\\\\ \n V & 1 & & 1\\\\\n K & i & & i + 1\\\\\n \\end{array}" }, { "math_id": 30, "text": "=, \\neq, \\leq" } ]
https://en.wikipedia.org/wiki?curid=65944
6595367
Hewitt–Savage zero–one law
The Hewitt–Savage zero–one law is a theorem in probability theory, similar to Kolmogorov's zero–one law and the Borel–Cantelli lemma, that specifies that a certain type of event will either almost surely happen or almost surely not happen. It is sometimes known as the Savage-Hewitt law for symmetric events. It is named after Edwin Hewitt and Leonard Jimmie Savage. Statement of the Hewitt-Savage zero-one law. Let formula_0 be a sequence of independent and identically-distributed random variables taking values in a set formula_1. The Hewitt-Savage zero–one law says that any event whose occurrence or non-occurrence is determined by the values of these random variables and whose occurrence or non-occurrence is unchanged by finite permutations of the indices, has probability either 0 or 1 (a "finite" permutation is one that leaves all but finitely many of the indices fixed). Somewhat more abstractly, define the exchangeable sigma algebra or "sigma algebra of symmetric events" formula_2 to be the set of events (depending on the sequence of variables formula_0) which are invariant under permutations of the indices in the sequence formula_0. Then formula_3. Since any finite permutation can be written as a product of transpositions, if we wish to check whether or not an event formula_4 is symmetric (lies in formula_2), it is enough to check if its occurrence is unchanged by an arbitrary transposition formula_5, formula_6. Examples. Example 1. Let the sequence formula_0 of independent and identically distributed random variables take values in formula_7. Then the event that the series formula_8 converges (to a finite value) is a symmetric event in formula_2, since its occurrence is unchanged under transpositions (for a finite re-ordering, the convergence or divergence of the series—and, indeed, the numerical value of the sum itself—is independent of the order in which we add up the terms). Thus, the series either converges almost surely or diverges almost surely. If we assume in addition that the common expected value formula_9 (which essentially means that formula_10 because of the random variables' non-negativity), we may conclude that formula_11 i.e. the series diverges almost surely. This is a particularly simple application of the Hewitt–Savage zero–one law. In many situations, it can be easy to apply the Hewitt–Savage zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine "which" of these two extreme values is the correct one. Example 2. Continuing with the previous example, define formula_12 which is the position at step "N" of a random walk with the iid increments "X""n". The event { "S""N" = 0 infinitely often } is invariant under finite permutations. Therefore, the zero–one law is applicable and one infers that the probability of a random walk with real iid increments visiting the origin infinitely often is either one or zero. Visiting the origin infinitely often is a tail event with respect to the sequence ("S""N"), but "S""N" are not independent and therefore the Kolmogorov's zero–one law is not directly applicable here. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
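Example 2 can be illustrated numerically for the special case of a simple symmetric ±1 random walk: the fraction of simulated paths that return to the origin within "N" steps grows toward 1 as "N" increases, consistent with the event having probability one. A Python sketch (the horizons and trial counts are arbitrary, and a finite simulation can only suggest, not prove, the zero–one behaviour):

import random

def returns_to_origin(n_steps):
    # True if a simple +/-1 random walk hits 0 again within n_steps.
    s = 0
    for _ in range(n_steps):
        s += random.choice((-1, 1))
        if s == 0:
            return True
    return False

for n in (10, 100, 1000):
    trials = 2000
    frac = sum(returns_to_origin(n) for _ in range(trials)) / trials
    print(n, frac)   # increases toward 1 as the horizon grows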
[ { "math_id": 0, "text": "\\left\\{ X_n \\right\\}_{n = 1}^\\infty" }, { "math_id": 1, "text": "\\mathbb{X}" }, { "math_id": 2, "text": "\\mathcal{E}" }, { "math_id": 3, "text": "A \\in \\mathcal{E} \\implies \\mathbb{P} (A) \\in \\{ 0, 1 \\}" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "(i, j)" }, { "math_id": 6, "text": "i, j \\in \\mathbb{N}" }, { "math_id": 7, "text": "[0, \\infty)" }, { "math_id": 8, "text": "\\sum_{n = 1}^\\infty X_n" }, { "math_id": 9, "text": "\\mathbb{E}[X_n] > 0" }, { "math_id": 10, "text": "\\mathbb{P}(X_n = 0 ) < 1 " }, { "math_id": 11, "text": "\\mathbb{P} \\left( \\sum_{n = 1}^\\infty X_n = + \\infty \\right) = 1," }, { "math_id": 12, "text": "S_N= \\sum_{n = 1}^N X_n," } ]
https://en.wikipedia.org/wiki?curid=6595367
65961732
Substrate inhibition in bioreactors
Substrate inhibition in bioreactors occurs when the concentration of substrate (such as glucose, salts, or phenols) exceeds the optimal parameters and reduces the growth rate of the cells within the bioreactor. This is often confused with substrate limitation, which describes environments in which cell growth is limited due to low substrate concentration. Limited conditions can be modeled with the Monod equation; however, the Monod equation is no longer suitable in substrate inhibiting conditions. A Monod deviation, such as the Haldane (Andrew) equation, is more suitable for substrate inhibiting conditions. These cell growth models are analogous to equations that describe enzyme kinetics, although, unlike enzyme kinetics parameters, cell growth parameters are generally empirically estimated. General Principles. Cell growth in bioreactors depends on a wide range of environmental and physiological conditions such as substrate concentration. With regard to bioreactor cell growth, substrate refers to the nutrients that the cells consume and is contained within the bioreactor medium. Cell growth can either be substrate limited or inhibited depending on whether the substrate concentration is too low or too high, respectively. The Monod equation accurately describes limiting conditions, but substrate inhibition models are more complex. Substrate inhibition occurs when the rate of microbial growth lessens due to a high concentration of substrate. Higher substrate concentrations usually inhibit growth through osmotic issues, increased viscosity, or inefficient oxygen transport. By slowly adding substrate into the medium, fed-batch bioreactor systems can help alleviate substrate inhibition. Substrate inhibition is also closely related to enzyme kinetics, which is commonly modeled by the Michaelis–Menten equation. If an enzyme that is part of a rate-limiting step of microbial growth is substrate inhibited, then the cell growth will be inhibited in the same manner. However, the mechanisms are often more complex, and parameters for a model equation need to be estimated from experimental data. Additionally, information on inhibitory effects caused by mixtures of compounds is limited because most studies have been performed with single-substrate systems. Types of Inhibition. Enzyme Kinetics Overview. One of the most well known equations to describe single-substrate enzyme kinetics is the Michaelis-Menten equation. This equation relates the initial rate of reaction to the concentration of substrate present, and deviations of the model can be used to predict competitive inhibition and non-competitive inhibition. The model takes the form of the following equation: formula_0 ("Michaelis-Menten equation") Where formula_1 is the Michaelis constant, formula_2 is the initial reaction rate, and formula_3 is the maximum reaction rate. If the inhibitor is different from the substrate, then competitive inhibition will increase Km while Vmax remains the same, and non-competitive inhibition will decrease Vmax while Km remains the same. However, under substrate inhibiting effects, where two of the same substrate molecules bind to the active and inhibitory sites, the reaction rate will reach a peak value before decreasing. The reaction rate will either decrease to zero under complete inhibition, or it will decrease to a non-zero asymptote during partial inhibition.
This can be described by the Haldane (or Andrew) equation, which is a common deviation of the Michaelis-Menten equation, and takes the following form: formula_4 ("Haldane equation for single-substrate inhibition of enzymatic reaction rate") Where formula_5 is the inhibition constant. Cell Growth in Bioreactors. Bioreactor cell growth kinetics is analogous to the equations presented in enzyme kinetics. Under non-inhibiting single-substrate conditions, the specific growth rate of biomass can be modeled by the well-known Monod equation. The Monod equation models the growth of organisms under substrate limiting conditions, and its parameters are determined through experimental observation. The Monod equation is based on a single substrate-consuming enzyme system that follows the Michaelis-Menten equation. The Monod equation takes the following familiar form: formula_6 ("Monod equation") Where: formula_7 is the saturation constant, formula_8 is the specific growth rate, and formula_9 is the maximum specific growth rate. Under single-substrate inhibiting conditions, the Monod equation is no longer suitable, and the most common Monod derivative is once again in the form of the Haldane equation. As in enzyme kinetics, the growth rate will initially increase as substrate is increased, before reaching a peak and decreasing at high substrate concentrations. Reasons for substrate inhibition in bioreactor cell growth include osmotic issues, viscosity, or inefficient oxygen transport due to overly concentrated substrate in the bioreactor medium. Substrates that are known to cause inhibition include glucose, NaCl, and phenols, among others. Substrate inhibition is also a concern in wastewater treatment, where some of the most studied biodegradation substrates are the toxic phenols. Due to their toxicity, there is a large interest in bioremediation of phenols, and it is well known that phenol inhibition can be modeled by the following Haldane equation: formula_10 ("Haldane equation for single-substrate inhibition of cell growth") Where: formula_5 is the inhibition constant. There are several equations that have been developed to describe substrate inhibition. Two equations, listed below, are referred to as non-competitive substrate inhibition and competitive substrate inhibition models, respectively, by Shuler and Kargi in "Bioprocess Engineering: Basic Concepts." Note that the Haldane equation above is a special case of the following non-competitive substrate inhibition model, where KI » Ks. formula_11 ("non-competitive single-substrate inhibition") formula_12 ("competitive single-substrate inhibition") These equations also have enzymatic counterparts, where the equations commonly describe the interactions between substrate and inhibitors at the active and inhibitory sites. The concept of competitive and non-competitive substrate inhibition is better defined in enzyme kinetics, but these analogous equations also apply to cell growth models. Overcoming Substrate Inhibition in Bioreactors. Substrate inhibition can be characterized by a high substrate concentration and decreased growth rate, resulting in decreased bioreactor outputs. The most common solution is to change the growth from a batch process to a fed-batch process. Other methods to overcome substrate inhibition include adding another substrate type in order to develop alternative metabolic pathways, immobilizing the cells, or increasing the biomass concentration. Utilizing Fed-Batch. 
A fed-batch process is the most common way to decrease the effects of substrate inhibition. Fed-batch processes are characterized by the continuous addition of bioreactor media (which includes the substrate) into the inoculum (cellular solution). The addition of media will increase the overall volume within the reactor along with substrate and other growth materials. A fed-batch process will also have an output flow rate of the substrate/cell/product mixture which can be collected to retrieve the desired product. Fed-batch is a good way to overcome substrate inhibition because the amount of substrate can be changed at various points in the growth process. This allows the bioreactor technician to provide the cells with the amount of substrate they need, rather than providing them too much or too little. Other methods. Other methods to overcome substrate inhibition include the use of Two Phase Partitioning Bioreactors, the immobilization of cells, and increasing the biomass concentration in the bioreactor. Two Phase Partitioning Bioreactors are able to reduce the aqueous phase substrate concentration by storing substrate in an alternative phase, which can be re-released into the biomass based on metabolic demand. The cell immobilization method works by encapsulating the cells in a material that makes the removal of inhibitory compounds easier, thus reducing inhibition by creating a matrix with the cells which can act as a protective barrier against the inhibitory effects of toxic materials. The method of increasing cell concentration is done by supporting the cellular material on a scaffold to create a biofilm. Biofilms allow for extremely high cell concentrations while preventing the overgrowth of inhibitory substrates. Impact On Product Production. The impact on product production depends on how the product is created. Substrate inhibition will affect products produced by enzymatic reactions differently than growth associated product formation. Substrate inhibition of enzymatic product production will inhibit the enzyme's activity, which will lower the reaction rate and reduce the rate of product formation. However, if a product is being produced by cells, then substrate inhibition will reduce product formation by limiting the growth of cells. Growth Associated Products. There are multiple relationships that may exist between the rate of product formation, the specific rate of substrate consumption, and the specific growth rate. The following equations demonstrate the relationship between cell growth and product production for growth associated production. The parameters formula_13 and formula_14 (specific rate of product formation and specific growth rate, respectively) are defined below. formula_15 ("specific rate of product formation") formula_16 ("specific growth rate") Where formula_17 is the cell concentration, and formula_18 is the product concentration. The product formation and cell growth are both directly linked to the amount of substrate consumed through the yield coefficients, formula_19 and formula_20, respectively. These coefficients can be combined to define a yield coefficient, formula_21, that relates the product production to cell growth. 
formula_22 This yield coefficient can be further used to directly relate the rate of change of product to the rate of change of cell growth: formula_23 Rearranging this equation gives the following relationship between the specific rate of product formation and the specific growth rate of the cells for growth associated products. formula_24 The above relationships demonstrate that for growth associated products, the specific growth rate is directly proportional to the specific rate of product formation. Furthermore, substrate inhibition limits the specific growth rate, which reduces the final biomass concentration. Increasing the substrate concentration may increase the viscosity of the media, lower the rate of oxygen diffusion, and affect the osmolarity of the system. These effects can be detrimental to cell growth, and by extension, the yield of product. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
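To make the comparison between the Monod and Haldane growth models concrete, the following minimal Python sketch evaluates both models over a range of substrate concentrations and locates the concentration at which the Haldane rate peaks. The numerical values of the maximum specific growth rate, the saturation constant and the inhibition constant are assumptions chosen only for demonstration, not data for any particular organism.

```python
# Illustrative comparison of Monod and Haldane (substrate-inhibited) growth kinetics.
# Parameter values are assumptions for demonstration only.
import math

mu_max = 0.5   # maximum specific growth rate (1/h), assumed
K_S = 0.2      # saturation constant (g/L), assumed
K_I = 5.0      # inhibition constant (g/L), assumed

def mu_monod(S):
    """Monod specific growth rate: no substrate inhibition."""
    return mu_max * S / (K_S + S)

def mu_haldane(S):
    """Haldane specific growth rate: inhibited at high substrate concentration."""
    return mu_max * S / (K_S + S + S**2 / K_I)

# The Haldane rate is maximal at S* = sqrt(K_S * K_I); beyond S*, adding
# more substrate lowers the specific growth rate instead of raising it.
S_opt = math.sqrt(K_S * K_I)

for S in [0.1, 0.5, 1.0, S_opt, 2.0, 5.0, 20.0]:
    print(f"S = {S:6.2f} g/L   Monod mu = {mu_monod(S):.3f}   Haldane mu = {mu_haldane(S):.3f}")
```

The printout shows the two models agreeing at low substrate concentrations and diverging once the quadratic inhibition term dominates, which is the behaviour described in the text.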
[ { "math_id": 0, "text": "\\nu=\\frac{V_m[S]}{K_M+[S]}" }, { "math_id": 1, "text": "K_M" }, { "math_id": 2, "text": "\\nu" }, { "math_id": 3, "text": "V_m" }, { "math_id": 4, "text": "\\nu=\\frac{V_m[S]}{K_M+[S]+\\frac{[S]^2}{K_I}}" }, { "math_id": 5, "text": "K_I" }, { "math_id": 6, "text": "\\mu=\\frac{\\mu_m[S]}{K_S+[S]}" }, { "math_id": 7, "text": "K_S" }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": "\\mu_m" }, { "math_id": 10, "text": "\\mu=\\frac{\\mu_m[S]}{K_S+[S]+\\frac{[S]^2}{K_I}}" }, { "math_id": 11, "text": "\\mu=\\frac{\\mu_m}{(1+\\frac{K_S}{[S]})(1+\\frac{[S]}{K_I})}" }, { "math_id": 12, "text": "\\mu=\\frac{\\mu_m[S]}{K_S(1+\\frac{[S]}{K_I})+[S]}" }, { "math_id": 13, "text": " q_p " }, { "math_id": 14, "text": " \\mu " }, { "math_id": 15, "text": "q_p = \\frac{1}{X} \\frac{dP}{dt}" }, { "math_id": 16, "text": "\\mu = \\frac{1}{X}\\frac{dX}{dt}" }, { "math_id": 17, "text": " X " }, { "math_id": 18, "text": " P " }, { "math_id": 19, "text": "Y_{\\frac{P}{S}} " }, { "math_id": 20, "text": "Y_{\\frac{X}{S}}" }, { "math_id": 21, "text": "Y_{\\frac{P}{X}} " }, { "math_id": 22, "text": "\\frac{Y_{\\frac{P}{S}}}{Y_{\\frac{X}{S}}}= Y_{\\frac{P}{X}} = \\frac{\\Delta P}{\\Delta X}" }, { "math_id": 23, "text": "\\frac{dP}{dt} = Y_{\\frac{P}{X}}\\frac{dX}{dt} \\rightarrow \\frac{dP}{dt} = Y_{\\frac{P}{X}}X\\frac{1}{X}\\frac{dX}{dt}" }, { "math_id": 24, "text": "q_P = Y_{\\frac{P}{X}}\\mu" } ]
https://en.wikipedia.org/wiki?curid=65961732
65962534
Uniswap
Decentralized cryptocurrency exchange Uniswap is a decentralized cryptocurrency exchange that uses a set of smart contracts to create liquidity pools for the execution of trades. It is an open-source project and falls into the category of a DeFi (decentralized finance) product because it uses smart contracts to facilitate trades instead of a centralized exchange. The protocol facilitates automated transactions between cryptocurrency tokens on the Ethereum blockchain through the use of smart contracts. As of October 2020, Uniswap was estimated to be the largest decentralized exchange and the fourth-largest cryptocurrency exchange overall by daily trading volume. History. Uniswap was created on November 2, 2018 by Hayden Adams, a former mechanical engineer at Siemens. The Uniswap company received investments from business angel Ric Burton and venture capital firms, including Andreessen Horowitz, Paradigm Venture Capital, Union Square Ventures LLC and ParaFi. Uniswap’s average daily trading volume was US$220 million in October 2020. Traders and investors have utilized Uniswap because of its usage in decentralized finance (DeFi). Overview. Uniswap is a decentralized finance protocol that is used to exchange cryptocurrencies and tokens; it is provided on blockchain networks that run open-source software. This is in contrast to cryptocurrency exchanges that are run by centralized companies. Changes to the protocol are voted on by the owners of a native cryptocurrency and governance token called UNI, and then implemented by a team of developers. Uniswap launched without the UNI token, and the token is not needed to trade on the exchange. Tokens were initially distributed to early users of the protocol. Protocol. Uniswap acts as an automated market maker and uses liquidity pools to fulfill orders, instead of relying on a traditional market maker, with an aim to create more efficient markets. Individuals and bots—termed "liquidity providers"—provide liquidity to the exchange by adding a pair of tokens to a smart contract which can be bought and sold by other users according to the constant-product rule formula_0. In return, liquidity providers are given a percentage of the trading fees earned for that trading pair. For each trade, a certain amount of tokens is removed from the pool for an amount of the other token, thereby changing the price. No fees are required to list tokens, which allows a large number of Ethereum tokens to be accessible, and no registration is required for users. As open-source software, Uniswap's code can also be forked to create new exchanges.
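As a rough sketch of how the constant-product rule behaves, the following Python snippet computes the output of a single swap against a liquidity pool. The reserve sizes and the 0.3% fee rate are illustrative assumptions rather than values taken from the article or from any specific pool.

```python
# Minimal constant-product AMM swap sketch, following x * y = k.
# Reserve sizes and the fee rate are illustrative assumptions.

def swap_output(reserve_in, reserve_out, amount_in, fee=0.003):
    """Return the amount of the output token received for amount_in of the input token."""
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out                      # invariant before the trade
    new_reserve_in = reserve_in + amount_in_after_fee
    new_reserve_out = k / new_reserve_in              # keep x * y = k on the fee-adjusted input
    return reserve_out - new_reserve_out

# Example: a pool assumed to hold 1,000 units of token A and 500 units of token B.
out = swap_output(reserve_in=1_000.0, reserve_out=500.0, amount_in=10.0)
print(f"Swapping 10 A returns about {out:.4f} B")
# Larger trades move the price more: as A is added, the pool's price of A in terms of B falls.
```

The key design point is that the trade price is not quoted by a counterparty; it emerges from keeping the product of the two reserves constant, which is exactly the rule described in the Protocol section.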
[ { "math_id": 0, "text": "\\phi(x, y) = xy" } ]
https://en.wikipedia.org/wiki?curid=65962534
65968923
Entanglement depth
In quantum physics, entanglement depth characterizes the strength of multiparticle entanglement. An entanglement depth formula_0 means that the quantum state of a particle ensemble cannot be described under the assumption that particles interacted with each other only in groups having fewer than formula_0 particles. It has been used to characterize the quantum states created in experiments with cold gases. Definition. Entanglement depth appeared in the context of spin squeezing. It turned out that to achieve larger and larger spin squeezing, and thus larger and larger precision in parameter estimation, a larger and larger entanglement depth is needed. Later it was formalized in terms of convex sets of quantum states, independently of spin squeezing, as follows. Let us consider a pure state that is the tensor product of multi-particle quantum states formula_1 The pure state formula_2 is said to be formula_3-producible if all formula_4 are states of at most formula_3 particles. A mixed state is called formula_3-producible if it is a mixture of pure states that are all at most formula_3-producible. The formula_3-producible mixed states form a convex set. A quantum state contains multiparticle entanglement of at least formula_5 particles if it is not formula_3-producible. A formula_6-particle state with formula_6-entanglement is called genuine multipartite entangled. Finally, a quantum state has an entanglement depth formula_3 if it is formula_3-producible, but not formula_7-producible. Based on this definition, it was possible to detect the entanglement depth close to states different from spin-squeezed states. Since there is no general method to detect multipartite entanglement, these methods had to be tailored to experiments with various relevant quantum states. Thus, entanglement criteria have been developed to detect entanglement close to symmetric "Dicke states" with formula_8 They are very different from spin-squeezed states, since they do not have a large spin polarization. They can provide Heisenberg-limited metrology, while they are more robust to particle loss than Greenberger-Horne-Zeilinger (GHZ) states. There are also criteria for detecting the entanglement depth in "planar-squeezed states". Planar squeezed states are quantum states that can be used to estimate a rotation angle that is not expected to be small. Finally, multipartite entanglement can be detected based on the metrological usefulness of the quantum state. The criteria applied are based on bounds on the quantum Fisher information. Experiments. The spin-squeezing-based entanglement criterion discussed above has been used in many experiments with cold gases in spin-squeezed states. There have also been experiments in cold gases for detecting multipartite entanglement in symmetric Dicke states. There have also been experiments with Dicke states that detected entanglement based on metrological usefulness in cold gases and in photons. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
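As a simple illustration of the definition, consider four particles: a product of two Bell pairs is 2-producible but not 1-producible, so it has entanglement depth 2, while a four-particle Greenberger-Horne-Zeilinger state is genuine multipartite entangled and has entanglement depth 4.

```latex
% Illustrative four-particle states
% Two Bell pairs: 2-producible, not 1-producible, hence entanglement depth 2
\left|\Psi_{\mathrm{pairs}}\right\rangle
  = \tfrac{1}{2}\bigl(|00\rangle+|11\rangle\bigr)_{12}\otimes\bigl(|00\rangle+|11\rangle\bigr)_{34}

% Four-particle GHZ state: not 3-producible, hence entanglement depth 4
\left|\mathrm{GHZ}_4\right\rangle
  = \tfrac{1}{\sqrt{2}}\bigl(|0000\rangle+|1111\rangle\bigr)
```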
[ { "math_id": 0, "text": " k " }, { "math_id": 1, "text": "\n|\\Psi\\rangle=|\\phi_1\\rangle\\otimes|\\phi_2\\rangle\\otimes ... \\otimes|\\phi_n\\rangle.\n" }, { "math_id": 2, "text": "|\\Psi\\rangle" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "\\phi_i" }, { "math_id": 5, "text": "k+1" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "(k-1)" }, { "math_id": 8, "text": "\\langle J_z\\rangle=0." } ]
https://en.wikipedia.org/wiki?curid=65968923
6596935
Security market line
Representation of the capital asset pricing model Security market line (SML) is the representation of the capital asset pricing model. It displays the expected rate of return of an individual security as a function of systematic, non-diversifiable risk. The risk of an individual risky security reflects the volatility of the return from the security rather than the return of the market portfolio. The risk in these individual risky securities reflects the systematic risk. Formula. The Y-intercept of the SML is equal to the risk-free interest rate. The slope of the SML is equal to the market risk premium and reflects the risk-return tradeoff at a given time: formula_0 where: "E"("R""i") is an expected return on security "E"("R""M") is an expected return on market portfolio M "β" is a nondiversifiable or systematic risk "R""M" is a market rate of return "R""f" is a risk-free rate When used in portfolio management, the SML represents the investment's opportunity cost (investing in a combination of the market portfolio and the risk-free asset). All the correctly priced securities are plotted on the SML. The assets above the line are undervalued because for a given amount of risk (beta), they yield a higher return. The assets below the line are overvalued because for a given amount of risk, they yield a lower return. In a market in perfect equilibrium, all securities would fall on the SML. There is a question about what the SML looks like when beta is negative. A rational investor will accept these assets even though they yield sub-risk-free returns, because they will provide "recession insurance" as part of a well-diversified portfolio. Therefore, the SML continues in a straight line whether beta is positive or negative. A different way of thinking about this is that the absolute value of beta represents the amount of risk associated with the asset, while the sign explains when the risk occurs. Security Market Line, Treynor ratio and Alpha. All of the portfolios on the SML have the same Treynor ratio as does the market portfolio, i.e. formula_1 In fact, the slope of the SML is the Treynor ratio of the market portfolio since formula_2. A stock-picking rule of thumb for assets with positive beta is to buy if the Treynor ratio will be above the SML and sell if it will be below (see figure above). Indeed, from the efficient market hypothesis, it follows that we cannot beat the market. Therefore, all assets should have a Treynor ratio less than or equal to that of the market. In consequence, if there is an asset whose Treynor ratio will be bigger than the market's, then this asset gives more return per unit of systematic risk (i.e. beta), which contradicts the efficient market hypothesis. This "abnormal" extra return above the market's return at a given level of risk is what is called the alpha. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
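A small numerical sketch of the SML relation and the Treynor-ratio rule of thumb is given below. The risk-free rate, expected market return, betas and forecast returns are all illustrative assumptions, not market data.

```python
# Security market line: E(R_i) = R_f + beta_i * (E(R_M) - R_f)
# All numbers below are illustrative assumptions.

R_f = 0.03        # risk-free rate, assumed
E_R_M = 0.08      # expected market return, assumed

def sml_expected_return(beta):
    return R_f + beta * (E_R_M - R_f)

def treynor_ratio(expected_return, beta):
    return (expected_return - R_f) / beta

assets = {            # name: (beta, analyst's forecast return), assumed values
    "Asset A": (1.2, 0.10),
    "Asset B": (0.8, 0.06),
}

for name, (beta, forecast) in assets.items():
    fair = sml_expected_return(beta)
    verdict = "undervalued (plots above the SML)" if forecast > fair else "overvalued (plots below the SML)"
    print(f"{name}: SML return {fair:.3f}, forecast {forecast:.3f} -> {verdict}, "
          f"Treynor {treynor_ratio(forecast, beta):.3f} vs market {E_R_M - R_f:.3f}")
```

In this assumed example the asset whose forecast return lies above the SML also has a Treynor ratio above the market's, matching the buy rule described in the text.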
[ { "math_id": 0, "text": "\\mathrm{SML} : E(R_i) = R_f + \\beta_{i}[E(R_M) - R_f]\\," }, { "math_id": 1, "text": "\\frac{E(R_i) - R_f}{\\beta_i} =E(R_M) - R_f." }, { "math_id": 2, "text": "\\beta_M=1" } ]
https://en.wikipedia.org/wiki?curid=6596935
6597900
Identity theorem for Riemann surfaces
In mathematics, the identity theorem for Riemann surfaces is a theorem that states that a holomorphic function is completely determined by its values on any subset of its domain that has a limit point. Statement of the theorem. Let formula_0 and formula_1 be Riemann surfaces, let formula_0 be connected, and let formula_2 be holomorphic. Suppose that formula_3 for some subset formula_4 that has a limit point, where formula_5 denotes the restriction of formula_6 to formula_7. Then formula_8 (on the whole of formula_0).
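As a concrete illustration, take both surfaces to be the complex plane, which is a connected Riemann surface: a holomorphic function is then determined by its values on any set with a limit point, such as the reciprocals of the positive integers.

```latex
% Illustration on X = Y = \mathbb{C}
A=\left\{\tfrac{1}{n}: n\in\mathbb{N}\right\}\subset\mathbb{C},\qquad
0 \text{ is a limit point of } A,\qquad
f\!\left(\tfrac{1}{n}\right)=\tfrac{1}{n^{2}}\ \ \forall n
\ \Longrightarrow\ f(z)=z^{2}\ \text{on all of }\mathbb{C}.
```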
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "f, g : X \\to Y" }, { "math_id": 3, "text": "f|_{A} = g|_{A}" }, { "math_id": 4, "text": "A \\subseteq X" }, { "math_id": 5, "text": "f|_{A} : A \\to Y" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "f = g" } ]
https://en.wikipedia.org/wiki?curid=6597900
6598775
Branching theorem
In mathematics, the branching theorem is a theorem about Riemann surfaces. Intuitively, it states that, in suitable local coordinates, every non-constant holomorphic map looks like the power map formula_10 Statement of the theorem. Let formula_0 and formula_1 be Riemann surfaces, and let formula_2 be a non-constant holomorphic map. Fix a point formula_3 and set formula_4. Then there exist formula_5 and charts formula_6 on formula_0 and formula_7 on formula_1 such that formula_8, and the map formula_9 is formula_10 This theorem gives rise to several definitions: the number formula_11 is called the multiplicity (or branching order) of formula_12 at formula_13, commonly denoted formula_14; if formula_15, then formula_13 is called a branch point of formula_12.
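As a simple illustration of these definitions, consider the squaring map on the complex plane: it has multiplicity 2 at the origin, which is therefore a branch point, and multiplicity 1 at every other point.

```latex
% Illustration: f(z) = z^2 on X = Y = \mathbb{C}
f(z)=z^{2}:\qquad \nu(f,0)=2\ \ (\text{branch point at } 0),
\qquad \nu(f,a)=1\ \ \text{for every } a\neq 0 .
```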
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "f : X \\to Y" }, { "math_id": 3, "text": "a \\in X" }, { "math_id": 4, "text": "b := f(a) \\in Y" }, { "math_id": 5, "text": "k \\in \\N" }, { "math_id": 6, "text": "\\psi_{1} : U_{1} \\to V_{1}" }, { "math_id": 7, "text": "\\psi_{2} : U_{2} \\to V_{2}" }, { "math_id": 8, "text": "\\psi_{1} (a) = \\psi_{2} (b) = 0" }, { "math_id": 9, "text": "\\psi_{2} \\circ f \\circ \\psi_{1}^{-1} : V_{1} \\to V_{2}" }, { "math_id": 10, "text": "z \\mapsto z^{k}." }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "\\nu (f, a)" }, { "math_id": 15, "text": "k > 1" } ]
https://en.wikipedia.org/wiki?curid=6598775
659902
Acid–base titration
Method of chemical quantitative analysis An acid–base titration is a method of quantitative analysis for determining the concentration of a Brønsted-Lowry acid or base (the titrate) by neutralizing it using a solution of known concentration (titrant). A pH indicator is used to monitor the progress of the acid–base reaction and a titration curve can be constructed. This differs from other modern types of titration, such as oxidation-reduction titrations, precipitation titrations, and complexometric titrations. Although these types of titrations are also used to determine unknown amounts of substances, these substances vary from ions to metals. Acid–base titration finds extensive applications in various scientific fields, such as pharmaceuticals, environmental monitoring, and quality control in industries. This method's precision and simplicity make it an important tool in quantitative chemical analysis, contributing significantly to the general understanding of solution chemistry. History. The history of acid-base titration dates back to the 19th century, when advancements in analytical chemistry fostered the development of systematic techniques for quantitative analysis. The origins of titration methods can be linked to the work of chemists such as Karl Friedrich Mohr in the mid-1800s. His contributions laid the groundwork for understanding titrations involving acids and bases. Theoretical progress came with the research of Swedish chemist Svante Arrhenius, who, in the late 19th century, introduced the Arrhenius theory, providing a theoretical framework for acid-base reactions. This theoretical foundation, along with ongoing experimental refinements, contributed to the evolution of acid-base titration as a precise and widely applicable analytical method. Over time, the method has undergone further refinements and adaptations, establishing itself as an essential tool in laboratories across various scientific disciplines. Alkalimetry and acidimetry. Alkalimetry and acidimetry are types of volumetric analyses in which the fundamental reaction is a neutralization reaction. They involve the controlled addition of either an acid or a base (titrant) of known concentration to a solution of unknown concentration (titrate) until the reaction reaches its stoichiometric equivalence point. At this point, stoichiometrically equivalent amounts of acid and base have been combined, neutralizing each other: acid + base → salt + water For example: HCl + NaOH → NaCl + H2O Acidimetry is the specialized analytical use of acid-base titration to determine the concentration of a basic (alkaline) substance using standard acid. This can be used for weak bases and strong bases. An example of an acidimetric titration involving a strong base is as follows: Ba(OH)2 + 2 H+ → Ba2+ + 2 H2O In this case, the strong base (Ba(OH)2) is neutralized by the acid until all of the base has reacted. This allows the analyst to calculate the concentration of the base from the volume of the standard acid that is used. Alkalimetry uses the same concept of specialized analytical acid-base titration, but determines the concentration of an acidic substance using standard base. An example of an alkalimetric titration involving a strong acid is as follows: H2SO4 + 2 OH− → SO42- + 2 H2O In this case, the strong acid (H2SO4) is neutralized by the base until all of the acid has reacted. This allows the analyst to calculate the concentration of the acid from the volume of the standard base that is used. 
The standard solution (titrant) is stored in the burette, while the solution of unknown concentration (analyte/titrate) is placed in the Erlenmeyer flask below it with an indicator. Indicator choice. A suitable pH indicator must be chosen in order to detect the end point of the titration. The colour change or other effect should occur close to the equivalence point of the reaction so that the experimenter can accurately determine when that point is reached. The pH of the equivalence point can be estimated using the following rules: a strong acid titrated with a strong base gives an equivalence point near pH 7; a weak acid titrated with a strong base gives an equivalence point above pH 7; and a weak base titrated with a strong acid gives an equivalence point below pH 7. These indicators are essential tools in chemistry and biology, aiding in the determination of a solution's acidity or alkalinity through the observation of colour transitions. The table below serves as a reference guide for these indicator choices, offering insights into the pH ranges and colour transformations associated with specific indicators: Phenolphthalein is widely recognized as one of the most commonly used acid-base indicators in chemistry. Its popularity is due to its effectiveness over a broad pH range and its distinct colour transitions. Its sharp and easily detectable colour changes make phenolphthalein a valuable tool for determining the endpoint of acid-base titrations, as a precise pH change signifies the completion of the reaction. When a weak acid reacts with a weak base, the equivalence point solution will be basic if the base is stronger and acidic if the acid is stronger. If both are of equal strength, then the equivalence pH will be neutral. However, weak acids are not often titrated against weak bases because the colour change shown with the indicator is often quick, and therefore very difficult for the observer to see. The point at which the indicator changes colour is called the "endpoint". A suitable indicator should be chosen, preferably one that will experience a change in colour (an endpoint) close to the equivalence point of the reaction. In addition to the wide variety of indicator solutions, pH papers, crafted from paper or plastic infused with combinations of these indicators, serve as a practical alternative. The pH of a solution can be estimated by immersing a strip of pH paper into it and matching the observed colour to the reference standards provided on the container. Overshot titration. Overshot titrations are a common phenomenon and refer to a situation where the volume of titrant added during a chemical titration exceeds the amount required to reach the equivalence point. This excess titrant leads to an outcome where the solution becomes slightly more alkaline or over-acidified. Overshooting the equivalence point can occur due to various factors, such as errors in burette readings, imperfect reaction stoichiometry, or issues with endpoint detection. The consequences of overshot titrations can affect the accuracy of the analytical results, particularly in quantitative analysis. Researchers and analysts often employ corrective measures, such as back-titration and using more precise titration techniques, to mitigate the impact of overshooting and obtain reliable and precise measurements. Understanding the causes, consequences, and solutions related to overshot titrations is crucial in achieving accurate and reproducible results in the field of chemistry. Mathematical analysis: titration of weak acid. For calculating concentrations, an ICE table can be used. ICE stands for "initial", "change", and "equilibrium". 
The pH of a weak acid solution being titrated with a strong base solution can be found at different points along the way. These points fall into one of four categories: 1. The initial pH is approximated for a weak acid solution in water using the equation: formula_0 where [H3O+]0 is the initial concentration of the hydronium ion. 2. The pH before the equivalence point depends on the amount of weak acid remaining and the amount of conjugate base formed. The pH can be calculated approximately by the Henderson–Hasselbalch equation: formula_1 where Ka is the acid dissociation constant. 3. The pH at the equivalence point depends on how much of the weak acid has been converted into its conjugate base. Note that when an acid neutralizes a base, the pH may or may not be neutral (pH = 7). The pH depends on the strengths of the acid and base. In the case of a weak acid and strong base titration, the pH is greater than 7 at the equivalence point. Thus the pH can be calculated using the following formula: formula_2 Where [OH−] is the concentration of the hydroxide ion. The concentration of the hydroxide ion is calculated from the concentration of the hydronium ion using the following relationship: formula_3 Where Kb is the base dissociation constant and Kw is the water dissociation constant. 4. The pH after the equivalence point depends on the concentration of the conjugate base of the weak acid and the strong base of the titrant. However, the base of the titrant is stronger than the conjugate base of the acid. Therefore, the pH in this region is controlled by the strong base. As such the pH can be found using the following: formula_4 where formula_5 is the concentration of the strong base that is added, formula_6 is the volume of base that has been added, formula_7 is the concentration of the acid initially present, and formula_8 is the initial volume of the acid. Single formula. More accurately, a single formula that describes the titration of a weak acid with a strong base from start to finish is given below: formula_9 where φ = fraction of completion of the titration (φ < 1 is before the equivalence point, φ = 1 is the equivalence point, and φ > 1 is after the equivalence point) formula_10 = the concentrations of the acid and base respectively formula_11 = the volumes of the acid and base respectively Graphical methods. Identifying the pH associated with any stage in the titration process is relatively simple for monoprotic acids and bases. A monoprotic acid is an acid that donates one proton. A monoprotic base is a base that accepts one proton. A monoprotic acid or base only has one equivalence point on a titration curve. A diprotic acid donates two protons and a diprotic base accepts two protons. The titration curve for a diprotic solution has two equivalence points. A polyprotic substance has multiple equivalence points. All titration reactions contain small buffer regions that appear horizontal on the graph. These regions contain comparable concentrations of acid and base, preventing sudden changes in pH when additional acid or base is added. Pharmaceutical applications. In the pharmaceutical industry, acid-base titration serves as a fundamental analytical technique with diverse applications. One primary use involves the determination of the concentration of Active Pharmaceutical Ingredients (APIs) in drug formulations, ensuring product quality and compliance with regulatory standards. 
Acid–base titration is particularly valuable in quantifying acidic or basic functional groups in pharmaceutical compounds. Additionally, the method is employed for the analysis of additives or ingredients, making it easier to adjust and control how a product is made. Quality control laboratories utilize acid-base titration to assess the purity of raw materials and to monitor various stages of drug manufacturing processes. The technique's reliability and simplicity make it an integral tool in pharmaceutical research and development, contributing to the production of safe and effective medications. Environmental monitoring applications. Acid–base titration plays a crucial role in environmental monitoring by providing a quantitative analytical method for assessing the acidity or alkalinity of water samples. The measurement of parameters such as pH, total alkalinity, and acidity is essential in evaluating the environmental impact of industrial discharges, agricultural runoff, and other sources of water contamination. Acid–base titration allows for the determination of the buffering capacity of natural water systems, aiding in the assessment of their ability to resist changes in pH. Monitoring pH levels is important for preserving aquatic ecosystems and ensuring compliance with environmental regulations. Acid–base titration is also utilized in the analysis of acid rain effects on soil and water bodies, contributing to the overall understanding and management of environmental quality. The method's precision and reliability make it a valuable tool in safeguarding ecosystems and assessing the impact of human activities on natural water resources. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
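To connect the four regions of the weak-acid titration analysis above with numbers, the following Python sketch computes approximate pH values for an assumed example, nominally 25 mL of 0.10 M acetic acid (Ka ≈ 1.8 × 10−5) titrated with 0.10 M NaOH. The concentrations and the Ka value are illustrative assumptions, and the usual textbook approximations are applied in each region.

```python
# Approximate pH along a weak acid / strong base titration, using the four
# regions described in the text. The acid, its Ka and all concentrations are
# illustrative assumptions (nominally 0.10 M acetic acid vs 0.10 M NaOH).
import math

Ka = 1.8e-5            # acid dissociation constant, assumed
Kw = 1.0e-14
Ca, Va = 0.10, 0.025   # acid concentration (M) and volume (L), assumed
Cb = 0.10              # base concentration (M), assumed
Vb_eq = Ca * Va / Cb   # base volume at the equivalence point

def pH(Vb):
    n_acid, n_base = Ca * Va, Cb * Vb
    Vtot = Va + Vb
    if n_base == 0:                                   # 1. initial point: [H3O+] ~ sqrt(Ka * Ca)
        return -math.log10(math.sqrt(Ka * Ca))
    if n_base < n_acid:                               # 2. buffer region: Henderson-Hasselbalch
        return -math.log10(Ka) + math.log10(n_base / (n_acid - n_base))
    if abs(n_base - n_acid) < 1e-12:                  # 3. equivalence point: conjugate-base hydrolysis
        Kb = Kw / Ka
        OH = math.sqrt(Kb * n_acid / Vtot)
        return 14 + math.log10(OH)
    OH = (n_base - n_acid) / Vtot                     # 4. excess strong base controls the pH
    return 14 + math.log10(OH)

print(f"equivalence point at {Vb_eq * 1000:.1f} mL of base")
for Vb_mL in [0, 10, 12.5, 20, 25, 30]:
    print(f"base added = {Vb_mL:5.1f} mL   pH = {pH(Vb_mL / 1000):5.2f}")
```

The output traces the familiar titration curve: an acidic start, a buffer plateau around pKa at half-equivalence, a basic equivalence point (above pH 7, as the text notes for a weak acid and strong base), and a region dominated by excess strong base.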
[ { "math_id": 0, "text": "\\ce{pH} =-\\log[\\ce{H3O+}]_0" }, { "math_id": 1, "text": " \\ce{pH} = -\\log K_a +\\log \\frac\\text{[Conjugate Base]}\\text{[Weak Acid]} " }, { "math_id": 2, "text": " \\ce{pH}_{eq}=-\\log[\\ce{H3O+}]_{eq}=14+\\log[\\ce{OH-}]_{eq} " }, { "math_id": 3, "text": " K_a K_b=K_w=10^{-14} " }, { "math_id": 4, "text": " \\ce{pH} = 14+\\log[\\ce{OH^-}]= 14 + \\log \\frac {(C_bV_b)-(C_aV_a)} { V_a + V_b } " }, { "math_id": 5, "text": " C_{b} " }, { "math_id": 6, "text": " V_{b} " }, { "math_id": 7, "text": " C_{a} " }, { "math_id": 8, "text": " V_{a} " }, { "math_id": 9, "text": "\\phi = \\frac{C_b V_b }{C_a V_a}" }, { "math_id": 10, "text": "C_a, C_b" }, { "math_id": 11, "text": "V_a, V_b" } ]
https://en.wikipedia.org/wiki?curid=659902
6599348
Clark–Ocone theorem
In mathematics, the Clark–Ocone theorem (also known as the Clark–Ocone–Haussmann theorem or formula) is a theorem of stochastic analysis. It expresses the value of some function "F" defined on the classical Wiener space of continuous paths starting at the origin as the sum of its mean value and an Itô integral with respect to that path. It is named after the contributions of mathematicians J.M.C. Clark (1970), Daniel Ocone (1984) and U.G. Haussmann (1978). Statement of the theorem. Let "C"0([0, "T"]; R) (or simply "C"0 for short) be classical Wiener space with Wiener measure "γ". Let "F" : "C"0 → R be a BC1 function, i.e. "F" is bounded and Fréchet differentiable with bounded derivative D"F" : "C"0 → Lin("C"0; R). Then formula_0 In the above, formula_1 is the expected value of "F" over the whole of Wiener space "C"0; formula_2 is an Itô integral. More generally, the conclusion holds for any "F" in "L"2("C"0; R) that is differentiable in the sense of Malliavin. Integration by parts on Wiener space. The Clark–Ocone theorem gives rise to an integration by parts formula on classical Wiener space, and to a way of writing Itô integrals as divergences: Let "B" be a standard Brownian motion, and let "L"02,1 be the Cameron–Martin space for "C"0 (see abstract Wiener space). Let "V" : "C"0 → "L"02,1 be a vector field such that formula_3 is in "L"2("B") (i.e. is Itô integrable, and hence is an adapted process). Let "F" : "C"0 → R be BC1 as above. Then formula_4 i.e. formula_5 or, writing the integrals over "C"0 as expectations: formula_6 where the "divergence" div("V") : "C"0 → R is defined by formula_7 The interpretation of stochastic integrals as divergences leads to concepts such as the Skorokhod integral and the tools of the Malliavin calculus.
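A standard worked example: take the functional to be the square of the path's endpoint. Its mean value is T, the derivative appearing in the integrand is twice the endpoint, and conditioning on the path up to time t replaces the endpoint by the current value, so the Clark–Ocone formula recovers the familiar Itô identity shown below.

```latex
% Worked example: F(\sigma) = \sigma_T^2
F(\sigma)=\sigma_T^{2}, \qquad
\frac{\partial}{\partial t}\nabla_{H}F = 2\sigma_T, \qquad
\mathbf{E}\!\left[\left.\frac{\partial}{\partial t}\nabla_{H}F\,\right|\,\Sigma_t\right](\sigma)=2\sigma_t,
\qquad
\int_{C_0} F\,\mathrm{d}\gamma = T,
\qquad\Longrightarrow\qquad
\sigma_T^{2} \;=\; T+\int_0^T 2\sigma_t\,\mathrm{d}\sigma_t .
```

This is consistent with Itô's formula applied to the square of Brownian motion.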
[ { "math_id": 0, "text": "F(\\sigma) = \\int_{C_{0}} F(p) \\, \\mathrm{d} \\gamma(p) + \\int_{0}^{T} \\mathbf{E} \\left[ \\left. \\frac{\\partial}{\\partial t} \\nabla_{H} F (-) \\right| \\Sigma_{t} \\right] (\\sigma) \\, \\mathrm{d} \\sigma_{t}." }, { "math_id": 1, "text": "\\int_{C_{0}} F(p) \\, \\mathrm{d} \\gamma(p) = \\mathbf{E}[F]" }, { "math_id": 2, "text": "\\int_0^T \\cdots \\, \\mathrm{d} \\sigma (t)" }, { "math_id": 3, "text": "\\dot{V} = \\frac{\\partial V}{\\partial t} : [0, T] \\times C_{0} \\to \\mathbb{R}" }, { "math_id": 4, "text": "\\int_{C_{0}} \\mathrm{D} F (\\sigma) (V(\\sigma)) \\, \\mathrm{d} \\gamma (\\sigma) = \\int_{C_{0}} F (\\sigma) \\left( \\int_{0}^{T} \\dot{V}_{t} (\\sigma) \\, \\mathrm{d} \\sigma_{t} \\right) \\, \\mathrm{d} \\gamma (\\sigma)," }, { "math_id": 5, "text": "\\int_{C_{0}} \\left\\langle \\nabla_{H} F (\\sigma), V (\\sigma) \\right\\rangle_{L_{0}^{2, 1}} \\, \\mathrm{d} \\gamma (\\sigma) = - \\int_{C_{0}} F (\\sigma) \\operatorname{div}(V) (\\sigma) \\, \\mathrm{d} \\gamma (\\sigma)" }, { "math_id": 6, "text": "\\mathbb{E} \\big[ \\langle \\nabla_{H} F, V \\rangle \\big] = - \\mathbb{E} \\big[ F \\operatorname{div} V \\big]," }, { "math_id": 7, "text": "\\operatorname{div} (V) (\\sigma) := - \\int_{0}^{T} \\dot{V}_{t} (\\sigma) \\, \\mathrm{d} \\sigma_{t}." } ]
https://en.wikipedia.org/wiki?curid=6599348
659939
Square
Regular quadrilateral In Euclidean geometry, a square is a regular quadrilateral, which means that it has four sides of equal length and four equal angles (90-degree angles, π/2 radian angles, or right angles). It can also be defined as a rectangle with two equal-length adjacent sides. It is the only regular polygon whose internal angle, central angle, and external angle are all equal (90°), and whose diagonals are all equal in length. A square with vertices "ABCD" would be denoted &lt;math&gt;\square&lt;/math&gt; "ABCD". Characterizations. A quadrilateral is a square if and only if it is any one of the following: Properties. A square is a special case of a rhombus (equal sides, opposite equal angles), a kite (two pairs of adjacent equal sides), a trapezoid (one pair of opposite sides parallel), a parallelogram (all opposite sides parallel), a quadrilateral or tetragon (four-sided polygon), and a rectangle (opposite sides equal, right-angles), and therefore has all the properties of all these shapes, namely: A square has Schläfli symbol {4}. A truncated square, t{4}, is an octagon, {8}. An alternated square, h{4}, is a digon, {2}. The square is the "n" = 2 case of the families of "n"-hypercubes and "n"-orthoplexes. Perimeter and area. The perimeter of a square whose four sides have length formula_1 is formula_2 and the area "A" is formula_3 Since four squared equals sixteen, a four by four square has an area equal to its perimeter. The only other quadrilateral with such a property is that of a three by six rectangle. In classical times, the second power was described in terms of the area of a square, as in the above formula. This led to the use of the term "square" to mean raising to the second power. The area can also be calculated using the diagonal "d" according to formula_4 In terms of the circumradius "R", the area of a square is formula_5 since the area of the circle is formula_6 the square fills formula_7 of its circumscribed circle. In terms of the inradius "r", the area of the square is formula_8 hence the area of the inscribed circle is formula_9 of that of the square. Because it is a regular polygon, a square is the quadrilateral of least perimeter enclosing a given area. Dually, a square is the quadrilateral containing the largest area within a given perimeter. Indeed, if "A" and "P" are the area and perimeter enclosed by a quadrilateral, then the following isoperimetric inequality holds: formula_10 with equality if and only if the quadrilateral is a square. formula_12 formula_15 formula_17 and formula_18 where formula_14 is the circumradius of the square. Coordinates and equations. The coordinates for the vertices of a square with vertical and horizontal sides, centered at the origin and with side length 2 are (±1, ±1), while the interior of this square consists of all points ("x"i, "y"i) with −1 &lt; "x""i" &lt; 1 and −1 &lt; "y""i" &lt; 1. The equation formula_19 specifies the boundary of this square. This equation means ""x"2 or "y"2, whichever is larger, equals 1." The circumradius of this square (the radius of a circle drawn through the square's vertices) is half the square's diagonal, and is equal to formula_20 Then the circumcircle has the equation formula_21 Alternatively the equation formula_22 can also be used to describe the boundary of a square with center coordinates ("a", "b"), and a horizontal or vertical radius of "r". The square is therefore the shape of a topological ball according to the L1 distance metric. Construction. 
The following animations show how to construct a square using a compass and straightedge. This is possible as 4 = 22, a power of two. Symmetry. The "square" has Dih4 symmetry, order 8. There are 2 dihedral subgroups: Dih2, Dih1, and 3 cyclic subgroups: Z4, Z2, and Z1. A square is a special case of many lower symmetry quadrilaterals: These 6 symmetries express 8 distinct symmetries on a square. John Conway labels these by a letter and group order. Each subgroup symmetry allows one or more degrees of freedom for irregular quadrilaterals. r8 is full symmetry of the square, and a1 is no symmetry. d4 is the symmetry of a rectangle, and p4 is the symmetry of a rhombus. These two forms are duals of each other, and have half the symmetry order of the square. d2 is the symmetry of an isosceles trapezoid, and p2 is the symmetry of a kite. g2 defines the geometry of a parallelogram. Only the g4 subgroup has no degrees of freedom, but can be seen as a square with directed edges. Squares inscribed in triangles. Every acute triangle has three inscribed squares (squares in its interior such that all four of a square's vertices lie on a side of the triangle, so two of them lie on the same side and hence one side of the square coincides with part of a side of the triangle). In a right triangle two of the squares coincide and have a vertex at the triangle's right angle, so a right triangle has only two "distinct" inscribed squares. An obtuse triangle has only one inscribed square, with a side coinciding with part of the triangle's longest side. The fraction of the triangle's area that is filled by the square is no more than 1/2. Squaring the circle. Squaring the circle, proposed by ancient geometers, is the problem of constructing a square with the same area as a given circle, by using only a finite number of steps with compass and straightedge. In 1882, the task was proven to be impossible as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Non-Euclidean geometry. In non-Euclidean geometry, squares are more generally polygons with 4 equal sides and equal angles. In spherical geometry, a square is a polygon whose edges are great circle arcs of equal distance, which meet at equal angles. Unlike the square of plane geometry, the angles of such a square are larger than a right angle. Larger spherical squares have larger angles. In hyperbolic geometry, squares with right angles do not exist. Rather, squares in hyperbolic geometry have angles of less than right angles. Larger hyperbolic squares have smaller angles. Examples: Crossed square. A crossed square is a faceting of the square, a self-intersecting polygon created by removing two opposite edges of a square and reconnecting by its two diagonals. It has half the symmetry of the square, Dih2, order 4. It has the same vertex arrangement as the square, and is vertex-transitive. It appears as two 45-45-90 triangles with a common vertex, but the geometric intersection is not considered a vertex. A crossed square is sometimes likened to a bow tie or butterfly. the crossed rectangle is related, as a faceting of the rectangle, both special cases of crossed quadrilaterals. The interior of a crossed square can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise. 
A square and a crossed square have the following properties in common: The crossed square appears in the vertex figure of a uniform star polyhedron, the tetrahemihexahedron. Graphs. The K4 complete graph is often drawn as a square with all 6 possible edges connected, hence appearing as a square with both diagonals drawn. This graph also represents an orthographic projection of the 4 vertices and 6 edges of the regular 3-simplex (tetrahedron). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
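The area formulas given earlier can be cross-checked numerically; the short Python snippet below verifies, for an arbitrarily chosen side length, that the expressions in terms of the side, the diagonal, the circumradius and the inradius agree, and confirms the four-by-four perimeter-equals-area observation. The chosen side length is an arbitrary illustrative value.

```python
# Cross-check of the square's area formulas for an arbitrary side length.
import math

l = 3.7                      # side length, arbitrary illustrative value
d = l * math.sqrt(2)         # diagonal
R = d / 2                    # circumradius
r = l / 2                    # inradius

areas = {"from side": l**2,
         "from diagonal": d**2 / 2,
         "from circumradius": 2 * R**2,
         "from inradius": 4 * r**2}
assert all(math.isclose(a, l**2) for a in areas.values())
print(areas)

# The square fills 2/pi of its circumscribed circle.
print("fill ratio:", (l**2) / (math.pi * R**2), "= 2/pi =", 2 / math.pi)

# A 4-by-4 square has area equal to its perimeter (16 = 16).
print("4x4 square, area equals perimeter:", 4**2 == 4 * 4)
```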
[ { "math_id": 0, "text": "A= \\tfrac{1}{2}(a^2+c^2)=\\tfrac{1}{2}(b^2+d^2)." }, { "math_id": 1, "text": "\\ell " }, { "math_id": 2, "text": "P=4\\ell " }, { "math_id": 3, "text": "A=\\ell^2." }, { "math_id": 4, "text": "A=\\frac{d^2}{2}." }, { "math_id": 5, "text": "A=2R^2;" }, { "math_id": 6, "text": "\\pi R^2," }, { "math_id": 7, "text": "2/\\pi \\approx 0.6366" }, { "math_id": 8, "text": "A=4r^2;" }, { "math_id": 9, "text": " \\pi/4 \\approx 0.7854" }, { "math_id": 10, "text": "16A\\le P^2" }, { "math_id": 11, "text": "\\sqrt{2}" }, { "math_id": 12, "text": " 2(PH^2-PE^2) = PD^2-PB^2." }, { "math_id": 13, "text": "d_i" }, { "math_id": 14, "text": "R" }, { "math_id": 15, "text": "\\frac{d_1^4+d_2^4+d_3^4+d_4^4}{4} + 3R^4 = \\left(\\frac{d_1^2+d_2^2+d_3^2+d_4^2}{4} + R^2\\right)^2." }, { "math_id": 16, "text": "L" }, { "math_id": 17, "text": "d_1^2 + d_3^2 = d_2^2 + d_4^2 = 2(R^2+L^2)" }, { "math_id": 18, "text": " d_1^2d_3^2 + d_2^2d_4^2 = 2(R^4+L^4), " }, { "math_id": 19, "text": "\\max(x^2, y^2) = 1" }, { "math_id": 20, "text": "\\sqrt{2}." }, { "math_id": 21, "text": "x^2 + y^2 = 2." }, { "math_id": 22, "text": "\\left|x - a\\right| + \\left|y - b\\right| = r." } ]
https://en.wikipedia.org/wiki?curid=659939
659942
Square (algebra)
Product of a number by itself In mathematics, a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 32, which is the number 9. In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations "x"^2 (caret) or "x"**2 may be used in place of "x"2. The adjective which corresponds to squaring is "quadratic". The square of an integer may also be called a "square number" or a "perfect square". In algebra, the operation of squaring is often generalized to polynomials, other expressions, or values in systems of mathematical values other than the numbers. For instance, the square of the linear polynomial "x" + 1 is the quadratic polynomial ("x" + 1)2 = "x"2 + 2"x" + 1. One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that (for all numbers x), the square of x is the same as the square of its additive inverse −"x". That is, the square function satisfies the identity "x"2 = (−"x")2. This can also be expressed by saying that the square function is an even function. In real numbers. The squaring operation defines a real function called the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;square function or the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;squaring function. Its domain is the whole real line, and its image is the set of nonnegative real numbers. The square function preserves the order of positive numbers: larger numbers have larger squares. In other words, the square is a monotonic function on the interval [0, +∞). On the negative numbers, numbers with greater absolute value have greater squares, so the square is a monotonically decreasing function on (−∞,0]. Hence, zero is the (global) minimum of the square function. The square "x"2 of a number "x" is less than x (that is "x"2 &lt; "x") if and only if 0 &lt; "x" &lt; 1, that is, if x belongs to the open interval (0,1). This implies that the square of an integer is never less than the original number x. Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative. Zero is the square of only one number, itself. For this reason, it is possible to define the square root function, which associates with a non-negative real number the non-negative number whose square is the original number. No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. The lack of real square roots for the negative numbers can be used to expand the real number system to the complex numbers, by postulating the imaginary unit i, which is one of the square roots of −1. The property "every non-negative real number is a square" has been generalized to the notion of a real closed field, which is an ordered field such that every non-negative element is a square and every polynomial of odd degree has a root. 
The real closed fields cannot be distinguished from the field of real numbers by their algebraic properties: every property of the real numbers, which may be expressed in first-order logic (that is expressed by a formula in which the variables that are quantified by ∀ or ∃ represent elements, not sets), is true for every real closed field, and conversely every property of the first-order logic, which is true for a specific real closed field is also true for the real numbers. In geometry. There are several major uses of the square function in geometry. The name of the square function shows its importance in the definition of the area: it comes from the fact that the area of a square with sides of length  l is equal to "l"2. The area depends quadratically on the size: the area of a shape n times larger is "n"2 times greater. This holds for areas in three dimensions as well as in the plane: for instance, the surface area of a sphere is proportional to the square of its radius, a fact that is manifested physically by the inverse-square law describing how the strength of physical forces such as gravity varies according to distance. The square function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function: the three-dimensional graph of distance from a fixed point forms a cone, with a non-smooth point at the tip of the cone. However, the square of the distance (denoted "d"2 or "r"2), which has a paraboloid as its graph, is a smooth and analytic function. The dot product of a Euclidean vector with itself is equal to the square of its length: v⋅v = v2. This is further generalised to quadratic forms in linear spaces via the inner product. The inertia tensor in mechanics is an example of a quadratic form. It demonstrates a quadratic relation of the moment of inertia to the size (length). There are infinitely many Pythagorean triples, sets of three positive integers such that the sum of the squares of the first two equals the square of the third. Each of these triples gives the integer sides of a right triangle. In abstract algebra and number theory. The square function is defined in any field or ring. An element in the image of this function is called a "square", and the inverse images of a square are called "square roots". The notion of squaring is particularly important in the finite fields Z/"pZ formed by the numbers modulo an odd prime number p. A non-zero element of this field is called a quadratic residue if it is a square in Z/"pZ, and otherwise, it is called a quadratic non-residue. Zero, while a square, is not considered to be a quadratic residue. Every finite field of this type has exactly ("p" − 1)/2 quadratic residues and exactly ("p" − 1)/2 quadratic non-residues. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory. More generally, in rings, the square function may have different properties that are sometimes used to classify rings. Zero may be the square of some non-zero elements. A commutative ring such that the square of a non zero element is never zero is called a reduced ring. More generally, in a commutative ring, a radical ideal is an ideal I such that formula_0 implies formula_1. Both notions are important in algebraic geometry, because of Hilbert's Nullstellensatz. An element of a ring that is equal to its own square is called an idempotent. In any ring, 0 and 1 are idempotents. 
There are no other idempotents in fields and more generally in integral domains. However, the ring of the integers modulo n has 2"k" idempotents, where k is the number of distinct prime factors of n. A commutative ring in which every element is equal to its square (every element is idempotent) is called a Boolean ring; an example from computer science is the ring whose elements are binary numbers, with bitwise AND as the multiplication operation and bitwise XOR as the addition operation. In a totally ordered ring, "x"2 ≥ 0 for any x. Moreover, "x"2 = 0 if and only if "x" = 0. In a supercommutative algebra where 2 is invertible, the square of any "odd" element equals zero. If "A" is a commutative semigroup, then one has formula_2 In the language of quadratic forms, this equality says that the square function is a "form permitting composition". In fact, the square function is the foundation upon which other quadratic forms are constructed which also permit composition. The procedure was introduced by L. E. Dickson to produce the octonions out of quaternions by doubling. The doubling method was formalized by A. A. Albert who started with the real number field formula_3 and the square function, doubling it to obtain the complex number field with quadratic form "x"2 + "y"2, and then doubling again to obtain quaternions. The doubling procedure is called the Cayley–Dickson construction, and has been generalized to form algebras of dimension 2n over a field "F" with involution. The square function "z"2 is the "norm" of the composition algebra formula_4, where the identity function forms a trivial involution to begin the Cayley–Dickson constructions leading to bicomplex, biquaternion, and bioctonion composition algebras. In complex numbers. On complex numbers, the square function formula_5 is a twofold cover in the sense that each non-zero complex number has exactly two square roots. The square of the absolute value of a complex number is called its absolute square, squared modulus, or squared magnitude. It is the product of the complex number with its complex conjugate, and equals the sum of the squares of the real and imaginary parts of the complex number. The absolute square of a complex number is always a nonnegative real number, that is zero if and only if the complex number is zero. It is easier to compute than the absolute value (no square root), and is a smooth real-valued function. Because of these two properties, the absolute square is often preferred to the absolute value for explicit computations and when methods of mathematical analysis are involved (for example optimization or integration). For complex vectors, the dot product can be defined involving the conjugate transpose, leading to the "squared norm". Other uses. Squares are ubiquitous in algebra, more generally, in almost every branch of mathematics, and also in physics where many units are defined using squares and inverse squares: see below. Least squares is the standard method used with overdetermined systems. Squaring is used in statistics and probability theory in determining the standard deviation of a set of values, or a random variable. The deviation of each value xi from the mean formula_6 of the set is defined as the difference formula_7. These deviations are squared, then a mean is taken of the new set of numbers (each of which is positive). This mean is the variance, and its square root is the standard deviation. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
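The counting statements above, on quadratic residues modulo an odd prime and on idempotents modulo n, are easy to verify computationally; in the following Python snippet the moduli are arbitrary illustrative choices.

```python
# Quadratic residues modulo an odd prime, and idempotents modulo n.
# The moduli below are arbitrary illustrative choices.

def quadratic_residues(p):
    """Non-zero squares modulo an odd prime p; there are exactly (p - 1) // 2 of them."""
    return sorted({(x * x) % p for x in range(1, p)})

def idempotents(n):
    """Elements e of Z/nZ with e*e = e; there are 2**k of them, where k is the
    number of distinct prime factors of n."""
    return [e for e in range(n) if (e * e) % n == e]

def distinct_prime_factors(n):
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            k += 1
            while n % d == 0:
                n //= d
        d += 1
    return k + (1 if n > 1 else 0)

p = 11
qr = quadratic_residues(p)
print(f"quadratic residues mod {p}: {qr}  (count {len(qr)} = (p-1)/2 = {(p - 1) // 2})")

n = 60  # 60 = 2^2 * 3 * 5 has 3 distinct prime factors, so 2^3 = 8 idempotents
ids = idempotents(n)
print(f"idempotents mod {n}: {ids}  (count {len(ids)} = 2^{distinct_prime_factors(n)})")
```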
[ { "math_id": 0, "text": "x^2 \\in I" }, { "math_id": 1, "text": "x \\in I" }, { "math_id": 2, "text": "\\forall x, y \\isin A \\quad (xy)^2 = xy xy = xx yy = x^2 y^2 ." }, { "math_id": 3, "text": "\\mathbb{R}" }, { "math_id": 4, "text": "\\mathbb{C}" }, { "math_id": 5, "text": "z\\to z^2" }, { "math_id": 6, "text": "\\overline{x}" }, { "math_id": 7, "text": "x_i - \\overline{x}" } ]
https://en.wikipedia.org/wiki?curid=659942
65994329
Hidden Matching Problem
Computation complexity problem The Hidden Matching Problem is a computation complexity problem that can be solved using quantum protocols: Let formula_0 be a positive even integer. In the Hidden Matching Problem, Alice is given formula_1 and Bob is given formula_2 (formula_3 denotes the family of all possible perfect matchings on formula_0 nodes). Their goal is to output a tuple formula_4 such that the edge formula_5 belongs to the matching formula_6 and formula_7. It has been used to find quantum communication problems that demonstrate a super-polynomial advantage over classical ones. Background. Communication complexity is a model of computation first introduced by Yao in 1979. Two parties (normally called Alice and Bob) each hold a piece of data and want to solve some computational task that jointly depends on their data. Alice knows only information formula_8 and Bob knows only information formula_9, and they want to solve some function formula_10. In order to do so, they will need to communicate between themselves, and their goal is to solve the problem with minimal communication obeying the restrictions of a specific communication model. There are two key communication models that can be considered: one-way communication, in which Alice sends a single message to Bob, who must then produce the output; and two-way (interactive) communication, in which the parties may exchange messages in both directions. Communication tasks can be either functional, meaning that there is exactly one correct answer corresponding to every possible input, or relational, when multiple correct answers are allowed. History. The Hidden Matching Problem was first defined in 2004 by Bar-Yossef, Jayram and Kerenidis. Through its definition, they were able to provide the first exponential separation between quantum and bounded-error randomized one-way communication complexity. They proved that the quantum one-way communication complexity of the Hidden Matching Problem is formula_11, yet any randomized one-way protocol with bounded error must use formula_12 bits of communication. The Hidden Matching Problem is a relational problem. Alice sends a superposition formula_13 to Bob. Bob uses his perfect matching to project this quantum state onto one of n/2 orthogonal 2D projectors, with a projector onto the space spanned by formula_14 for each pair (i, j) in the matching. After measurement, the quantum state is specified by the measured projector. The bit b determines whether the resulting state is formula_15. With a classical message, Alice has to send on the order of formula_16 bits of information specifying the value of x for that many nodes. By the birthday problem, the probability is close to 1 that at least two nodes in that subset are connected by an edge of the matching. In the same paper, the authors proposed a Boolean version of the problem, the Boolean Hidden Matching problem, and conjectured that the same quantum-classical gap holds for it as well. This was later proven to be true by Gavinsky et al. In 2008, Gavinsky further improved on Bar-Yossef et al.’s result by showing an exponential separation between one-way quantum communication and two-way classical communication. Applications. The Hidden Matching Problem was used as the basis of Gavinsky's 2012 Quantum Coin Scheme. Bob has been given a coin as payment for some goods or services. This coin consists of a quantum register containing multiple qubits. Bob wishes to verify that the coin is legitimate. In a classical scenario, a digital coin is made up of a unique string of classical bits; a coin holder sends this string to the bank, and the bank compares it to a static database of valid strings. If the string exists in the database, then the bank confirms that the coin is valid. 
However, this leaves the potential for an adversary to masquerade as the bank and steal a coin holder's coin under the pretence of verifying it. Using the Hidden Matching Problem, the coin holder can send the relevant information to the bank, and the bank can verify that the coin is legitimate, but an adversary masquerading as a bank will not learn enough to be able to reproduce the coin. In the protocol, Bob provides values formula_17 to the bank. These values formula_17 are obtained by Bob measuring certain quantum registers in his coin. The bank holds the values formula_18 (the classical bit strings) and formula_19. If formula_20, then the bank can verify that Bob does in fact hold a valid coin corresponding to the classical values formula_18. For formula_21 and formula_22, we say that formula_23 if formula_24 Here formula_25 refers to the four-bit version of the Hidden Matching Problem.
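The one-way quantum protocol described in the History section can be simulated classically for small n; the NumPy sketch below prepares Alice's state for a random input, measures it in the basis determined by a randomly chosen perfect matching on eight nodes, and checks that the announced bit always equals the XOR of the two matched input bits. The particular input string and matching are random illustrative choices.

```python
# Simulation of the Hidden Matching quantum protocol (illustrative, n = 8).
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.integers(0, 2, size=n)                      # Alice's input string, random

# Alice's message: |psi> = (1/sqrt(n)) * sum_i (-1)^{x_i} |i>
psi = ((-1.0) ** x) / np.sqrt(n)

# Bob's perfect matching: a random pairing of the n nodes (illustrative choice).
perm = rng.permutation(n)
matching = [(perm[2 * k], perm[2 * k + 1]) for k in range(n // 2)]

# Bob measures in the basis { (|i> + |j>)/sqrt(2), (|i> - |j>)/sqrt(2) : (i, j) in M }.
outcomes, probs = [], []
for (i, j) in matching:
    for sign, b in [(+1, 0), (-1, 1)]:              # '+' outcome means x_i XOR x_j = 0, '-' means 1
        amp = (psi[i] + sign * psi[j]) / np.sqrt(2)
        outcomes.append((i, j, b))
        probs.append(amp ** 2)

probs = np.array(probs)
assert np.isclose(probs.sum(), 1.0)

# Sample one measurement outcome and verify that Bob's announced bit is correct.
i, j, b = outcomes[rng.choice(len(outcomes), p=probs)]
print(f"x = {x.tolist()}, measured pair ({i}, {j}), announced b = {b}, "
      f"correct: {b == (x[i] ^ x[j])}")

# Every outcome that occurs with non-zero probability satisfies b = x_i XOR x_j.
assert all(b == (x[i] ^ x[j]) for (i, j, b), p in zip(outcomes, probs) if p > 1e-12)
```

Because amplitudes cancel whenever the announced bit would be wrong, the simulation confirms that a single O(log n)-qubit message suffices for Bob to answer correctly for some pair of his matching, which is the source of the quantum advantage discussed above.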
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "x \\in \\{0,1\\}^n" }, { "math_id": 2, "text": "M \\in \\mathcal{M}_n" }, { "math_id": 3, "text": "\\mathcal{M}_n" }, { "math_id": 4, "text": "\\langle i, j, b \\rangle" }, { "math_id": 5, "text": "(i,j)" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "b = x_i \\oplus x_j" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "f(x,y)" }, { "math_id": 11, "text": "\\mathcal{O}(\\log n)" }, { "math_id": 12, "text": "\\Omega(\\sqrt{n})" }, { "math_id": 13, "text": "\\frac {1}{\\sqrt n} \\sum_{i=1}^n (-1)^{x_i}|i\\rangle " }, { "math_id": 14, "text": "\\{|i\\rangle, |j\\rangle\\}" }, { "math_id": 15, "text": "\\frac {1}{\\sqrt 2}\\left( |i\\rangle \\pm |j\\rangle\\right)" }, { "math_id": 16, "text": "\\mathcal{O}\\left({\\sqrt n}\\right)" }, { "math_id": 17, "text": "(a_i,b_i)" }, { "math_id": 18, "text": "x_i" }, { "math_id": 19, "text": "m_i" }, { "math_id": 20, "text": "\\forall i \\; (x_i,m_i,a_i,b_i)\\in \\textit{HMP}_4" }, { "math_id": 21, "text": "x \\in \\{0,1\\}^4" }, { "math_id": 22, "text": "m,a,b \\in \\{0,1\\}" }, { "math_id": 23, "text": "(x,m,a,b) \\in \\textit{HMP}_4" }, { "math_id": 24, "text": "b= \\begin{cases}x_1 \\otimes x_{2+m},\\quad \\text{if } a= 0\\\\x_{3-m} \\otimes x_4, \\quad \\text{if } a=1,\n\\end{cases}" }, { "math_id": 25, "text": "\\textit{HMP}_4" } ]
https://en.wikipedia.org/wiki?curid=65994329
6599701
H-derivative
In mathematics, the "H"-derivative is a notion of derivative in the study of abstract Wiener spaces and the Malliavin calculus. Definition. Let formula_0 be an abstract Wiener space, and suppose that formula_1 is differentiable. Then the Fréchet derivative is a map formula_2; i.e., for formula_3, formula_4 is an element of formula_5, the dual space to formula_6. Therefore, define the formula_7-derivative formula_8 at formula_3 by formula_9, a continuous linear map on formula_7. Define the formula_7-gradient formula_10 by formula_11. That is, if formula_12 denotes the adjoint of formula_0, we have formula_13. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i : H \\to E" }, { "math_id": 1, "text": "F : E \\to \\mathbb{R}" }, { "math_id": 2, "text": "\\mathrm{D} F : E \\to \\mathrm{Lin} (E; \\mathbb{R})" }, { "math_id": 3, "text": "x \\in E" }, { "math_id": 4, "text": "\\mathrm{D} F (x)" }, { "math_id": 5, "text": "E^{*}" }, { "math_id": 6, "text": "E" }, { "math_id": 7, "text": "H" }, { "math_id": 8, "text": "\\mathrm{D}_{H} F" }, { "math_id": 9, "text": "\\mathrm{D}_{H} F (x) := \\mathrm{D} F (x) \\circ i : H \\to \\R" }, { "math_id": 10, "text": "\\nabla_{H} F : E \\to H" }, { "math_id": 11, "text": "\\langle \\nabla_{H} F (x), h \\rangle_{H} = \\left( \\mathrm{D}_{H} F \\right) (x) (h) = \\lim_{t \\to 0} \\frac{F (x + t i(h)) - F(x)}{t}" }, { "math_id": 12, "text": "j : E^{*} \\to H" }, { "math_id": 13, "text": "\\nabla_{H} F (x) := j \\left( \\mathrm{D} F (x) \\right)" } ]
https://en.wikipedia.org/wiki?curid=6599701
6600
Currying
Transforming a function in such a way that it only takes a single argument In mathematics and computer science, currying is the technique of translating a function that takes multiple arguments into a sequence of families of functions, each taking a single argument. In the prototypical example, one begins with a function formula_0 that takes two arguments, one from formula_1 and one from formula_2 and produces objects in formula_3 The curried form of this function treats the first argument as a parameter, so as to create a family of functions formula_4 The family is arranged so that for each object formula_5 in formula_6 there is exactly one function formula_7 In this example, formula_8 itself becomes a function, that takes formula_9 as an argument, and returns a function that maps each formula_5 to formula_7 The proper notation for expressing this is verbose. The function formula_9 belongs to the set of functions formula_10 Meanwhile, formula_11 belongs to the set of functions formula_12 Thus, something that maps formula_5 to formula_11 will be of the type formula_13 With this notation, formula_8 is a function that takes objects from the first set, and returns objects in the second set, and so one writes formula_14 This is a somewhat informal example; more precise definitions of what is meant by "object" and "function" are given below. These definitions vary from context to context, and take different forms, depending on the theory that one is working in. Currying is related to, but not the same as, partial application. The example above can be used to illustrate partial application; it is quite similar. Partial application is the function formula_15 that takes the pair formula_9 and formula_5 together as arguments, and returns formula_7 Using the same notation as above, partial application has the signature formula_16 Written this way, application can be seen to be adjoint to currying. The currying of a function with more than two arguments can be defined by induction. Currying is useful in both practical and theoretical settings. In functional programming languages, and many others, it provides a way of automatically managing how arguments are passed to functions and exceptions. In theoretical computer science, it provides a way to study functions with multiple arguments in simpler theoretical models which provide only one argument. The most general setting for the strict notion of currying and uncurrying is in the closed monoidal categories, which underpins a vast generalization of the Curry–Howard correspondence of proofs and programs to a correspondence with many other structures, including quantum mechanics, cobordisms and string theory. The concept of currying was introduced by Gottlob Frege, developed by Moses Schönfinkel, and further developed by Haskell Curry. Uncurrying is the dual transformation to currying, and can be seen as a form of defunctionalization. It takes a function formula_9 whose return value is another function formula_17, and yields a new function formula_18 that takes as parameters the arguments for both formula_9 and formula_17, and returns, as a result, the application of formula_9 and subsequently, formula_17, to those arguments. The process can be iterated. Motivation. Currying provides a way for working with functions that take multiple arguments, and using them in frameworks where functions might take only one argument. For example, some analytical techniques can only be applied to functions with a single argument. 
Practical functions frequently take more arguments than this. Frege showed that it was sufficient to provide solutions for the single argument case, as it was possible to transform a function with multiple arguments into a chain of single-argument functions instead. This transformation is the process now known as currying. All "ordinary" functions that might typically be encountered in mathematical analysis or in computer programming can be curried. However, there are categories in which currying is not possible; the most general categories which allow currying are the closed monoidal categories. Some programming languages almost always use curried functions to achieve multiple arguments; notable examples are ML and Haskell, where in both cases all functions have exactly one argument. This property is inherited from lambda calculus, where multi-argument functions are usually represented in curried form. Currying is related to, but not the same as partial application. In practice, the programming technique of closures can be used to perform partial application and a kind of currying, by hiding arguments in an environment that travels with the curried function. History. The "Curry" in "Currying" is a reference to logician Haskell Curry, who used the concept extensively, but Moses Schönfinkel had the idea six years before Curry. The alternative name "Schönfinkelisation" has been proposed. In the mathematical context, the principle can be traced back to work in 1893 by Frege. The originator of the word "currying" is not clear. David Turner says the word was coined by Christopher Strachey in his 1967 lecture notes Fundamental Concepts in Programming Languages, but that source introduces the concept as "a device originated by Schönfinkel", and the term "currying" is not used, while Curry is mentioned later in the context of higher-order functions. John C. Reynolds defined "currying" in a 1972 paper, but did not claim to have coined the term. Definition. Currying is most easily understood by starting with an informal definition, which can then be molded to fit many different domains. First, there is some notation to be established. The notation formula_19 denotes all functions from formula_1 to formula_20. If formula_9 is such a function, we write formula_21. Let formula_22 denote the ordered pairs of the elements of formula_1 and formula_20 respectively, that is, the Cartesian product of formula_1 and formula_20. Here, formula_1 and formula_20 may be sets, or they may be types, or they may be other kinds of objects, as explored below. Given a function formula_23, currying constructs a new function formula_24. That is, formula_17 takes an argument of type formula_1 and returns a function of type formula_25. It is defined by formula_26 for formula_5 of type formula_1 and formula_27 of type formula_20. We then also write formula_28 Uncurrying is the reverse transformation, and is most easily understood in terms of its right adjoint, the function formula_29 Set theory. In set theory, the notation formula_30 is used to denote the set of functions from the set formula_1 to the set formula_20. Currying is the natural bijection between the set formula_31 of functions from formula_32 to formula_33, and the set formula_34 of functions from formula_35 to the set of functions from formula_36 to formula_33. In symbols: formula_37 Indeed, it is this natural bijection that justifies the exponential notation for the set of functions. 
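A minimal Python sketch of this definition (currying a two-argument function and undoing it), with purely illustrative names:

```python
def curry(f):
    """curry(f)(x)(y) == f(x, y) for a two-argument function f."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """uncurry(g)(x, y) == g(x)(y); the reverse transformation."""
    return lambda x, y: g(x)(y)

def add(x, y):
    return x + y

add_curried = curry(add)
print(add_curried(2)(3))           # 5, i.e. g(x)(y) = f(x, y)
print(uncurry(add_curried)(2, 3))  # 5, the round trip recovers f
```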
As is the case in all instances of currying, the formula above describes an adjoint pair of functors: for every fixed set formula_36, the functor formula_38 is left adjoint to the functor formula_39. In the category of sets, the object formula_30 is called the exponential object. Function spaces. In the theory of function spaces, such as in functional analysis or homotopy theory, one is commonly interested in continuous functions between topological spaces. One writes formula_40 (the Hom functor) for the set of "all" functions from formula_1 to formula_20, and uses the notation formula_30 to denote the subset of continuous functions. Here, formula_41 is the bijection formula_42 while uncurrying is the inverse map. If the set formula_30 of continuous functions from formula_1 to formula_20 is given the compact-open topology, and if the space formula_20 is locally compact Hausdorff, then formula_43 is a homeomorphism. This is also the case when formula_1, formula_20 and formula_30 are compactly generated,chapter 5 although there are more cases. One useful corollary is that a function is continuous if and only if its curried form is continuous. Another important result is that the application map, usually called "evaluation" in this context, is continuous (note that eval is a strictly different concept in computer science.) That is, formula_44 is continuous when formula_30 is compact-open and formula_20 locally compact Hausdorff. These two results are central for establishing the continuity of homotopy, i.e. when formula_1 is the unit interval formula_45, so that formula_46 can be thought of as either a homotopy of two functions from formula_20 to formula_47, or, equivalently, a single (continuous) path in formula_48. Algebraic topology. In algebraic topology, currying serves as an example of Eckmann–Hilton duality, and, as such, plays an important role in a variety of different settings. For example, loop space is adjoint to reduced suspensions; this is commonly written as formula_49 where formula_50 is the set of homotopy classes of maps formula_51, and formula_52 is the suspension of "A", and formula_53 is the loop space of "A". In essence, the suspension formula_54 can be seen as the cartesian product of formula_1 with the unit interval, modulo an equivalence relation to turn the interval into a loop. The curried form then maps the space formula_1 to the space of functions from loops into formula_47, that is, from formula_1 into formula_55. Then formula_41 is the adjoint functor that maps suspensions to loop spaces, and uncurrying is the dual. The duality between the mapping cone and the mapping fiber (cofibration and fibration)chapters 6,7 can be understood as a form of currying, which in turn leads to the duality of the long exact and coexact Puppe sequences. In homological algebra, the relationship between currying and uncurrying is known as tensor-hom adjunction. Here, an interesting twist arises: the Hom functor and the tensor product functor might not lift to an exact sequence; this leads to the definition of the Ext functor and the Tor functor. Domain theory. In order theory, that is, the theory of lattices of partially ordered sets, formula_41 is a continuous function when the lattice is given the Scott topology. Scott-continuous functions were first investigated in the attempt to provide a semantics for lambda calculus (as ordinary set theory is inadequate to do this). 
More generally, Scott-continuous functions are now studied in domain theory, which encompasses the study of denotational semantics of computer algorithms. Note that the Scott topology is quite different than many common topologies one might encounter in the category of topological spaces; the Scott topology is typically finer, and is not sober. The notion of continuity makes its appearance in homotopy type theory, where, roughly speaking, two computer programs can be considered to be homotopic, i.e. compute the same results, if they can be "continuously" refactored from one to the other. Lambda calculi. In theoretical computer science, currying provides a way to study functions with multiple arguments in very simple theoretical models, such as the lambda calculus, in which functions only take a single argument. Consider a function formula_56 taking two arguments, and having the type formula_57, which should be understood to mean that "x" must have the type formula_1, "y" must have the type formula_20, and the function itself returns the type formula_47. The curried form of "f" is defined as formula_58 where formula_59 is the abstractor of lambda calculus. Since curry takes, as input, functions with the type formula_60, one concludes that the type of curry itself is formula_61 The → operator is often considered right-associative, so the curried function type formula_62 is often written as formula_63. Conversely, function application is considered to be left-associative, so that formula_64 is equivalent to formula_65. That is, the parenthesis are not required to disambiguate the order of the application. Curried functions may be used in any programming language that supports closures; however, uncurried functions are generally preferred for efficiency reasons, since the overhead of partial application and closure creation can then be avoided for most function calls. Type theory. In type theory, the general idea of a type system in computer science is formalized into a specific algebra of types. For example, when writing formula_21, the intent is that formula_1 and formula_20 are types, while the arrow formula_66 is a type constructor, specifically, the function type or arrow type. Similarly, the Cartesian product formula_22 of types is constructed by the product type constructor formula_67. The type-theoretical approach is expressed in programming languages such as ML and the languages derived from and inspired by it: Caml, Haskell, and F#. The type-theoretical approach provides a natural complement to the language of category theory, as discussed below. This is because categories, and specifically, monoidal categories, have an internal language, with simply-typed lambda calculus being the most prominent example of such a language. It is important in this context, because it can be built from a single type constructor, the arrow type. Currying then endows the language with a natural product type. The correspondence between objects in categories and types then allows programming languages to be re-interpreted as logics (via Curry–Howard correspondence), and as other types of mathematical systems, as explored further, below. Logic. Under the Curry–Howard correspondence, the existence of currying and uncurrying is equivalent to the logical theorem formula_68 (also known as exportation), as tuples (product type) corresponds to conjunction in logic, and function type corresponds to implication. 
The exponential object formula_69 in the category of Heyting algebras is normally written as material implication formula_70. Distributive Heyting algebras are Boolean algebras, and the exponential object has the explicit form formula_71, thus making it clear that the exponential object really is material implication. Category theory. The above notions of currying and uncurrying find their most general, abstract statement in category theory. Currying is a universal property of an exponential object, and gives rise to an adjunction in cartesian closed categories. That is, there is a natural isomorphism between the morphisms from a binary product formula_23 and the morphisms to an exponential object formula_72. This generalizes to a broader result in closed monoidal categories: Currying is the statement that the tensor product and the internal Hom are adjoint functors; that is, for every object formula_35 there is a natural isomorphism: formula_73 Here, "Hom" denotes the (external) Hom-functor of all morphisms in the category, while formula_74 denotes the internal hom functor in the closed monoidal category. For the category of sets, the two are the same. When the product is the cartesian product, then the internal hom formula_74 becomes the exponential object formula_75. Currying can break down in one of two ways. One is if a category is not closed, and thus lacks an internal hom functor (possibly because there is more than one choice for such a functor). Another way is if it is not monoidal, and thus lacks a product (that is, lacks a way of writing down pairs of objects). Categories that do have both products and internal homs are exactly the closed monoidal categories. The setting of cartesian closed categories is sufficient for the discussion of classical logic; the more general setting of closed monoidal categories is suitable for quantum computation. The difference between these two is that the product for cartesian categories (such as the category of sets, complete partial orders or Heyting algebras) is just the Cartesian product; it is interpreted as an ordered pair of items (or a list). Simply typed lambda calculus is the internal language of cartesian closed categories; and it is for this reason that pairs and lists are the primary types in the type theory of LISP, Scheme and many functional programming languages. By contrast, the product for monoidal categories (such as Hilbert space and the vector spaces of functional analysis) is the tensor product. The internal language of such categories is linear logic, a form of quantum logic; the corresponding type system is the linear type system. Such categories are suitable for describing entangled quantum states, and, more generally, allow a vast generalization of the Curry–Howard correspondence to quantum mechanics, to cobordisms in algebraic topology, and to string theory. The linear type system, and linear logic are useful for describing synchronization primitives, such as mutual exclusion locks, and the operation of vending machines. Contrast with partial function application. Currying and partial function application are often conflated. One of the significant differences between the two is that a call to a partially applied function returns the result right away, not another function down the currying chain; this distinction can be illustrated clearly for functions whose arity is greater than two. Given a function of type formula_76, currying produces formula_77. 
That is, while an evaluation of the first function might be represented as formula_78, evaluation of the curried function would be represented as formula_79, applying each argument in turn to a single-argument function returned by the previous invocation. Note that after calling formula_80, we are left with a function that takes a single argument and returns another function, not a function that takes two arguments. In contrast, partial function application refers to the process of fixing a number of arguments to a function, producing another function of smaller arity. Given the definition of formula_9 above, we might fix (or 'bind') the first argument, producing a function of type formula_81. Evaluation of this function might be represented as formula_82. Note that the result of partial function application in this case is a function that takes two arguments. Intuitively, partial function application says "if you fix the first argument of the function, you get a function of the remaining arguments". For example, if function "div" stands for the division operation "x"/"y", then "div" with the parameter "x" fixed at 1 (i.e., "div" 1) is another function: the same as the function "inv" that returns the multiplicative inverse of its argument, defined by "inv"("y") = 1/"y". The practical motivation for partial application is that very often the functions obtained by supplying some but not all of the arguments to a function are useful; for example, many languages have a function or operator similar to codice_0. Partial application makes it easy to define these functions, for example by creating a function that represents the addition operator with 1 bound as its first argument. Partial application can be seen as evaluating a curried function at a fixed point, e.g. given formula_76 and formula_83 then formula_84 or simply formula_85 where formula_86 curries f's first parameter. Thus, partial application is reduced to a curried function at a fixed point. Further, a curried function at a fixed point is (trivially), a partial application. For further evidence, note that, given any function formula_56, a function formula_87 may be defined such that formula_88. Thus, any partial application may be reduced to a single curry operation. As such, curry is more suitably defined as an operation which, in many theoretical cases, is often applied recursively, but which is theoretically indistinguishable (when considered as an operation) from a partial application. So, a partial application can be defined as the objective result of a single application of the curry operator on some ordering of the inputs of some function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
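To make the contrast between currying and partial application concrete, here is a short Python sketch using functools.partial; the three-argument function and its name are illustrative only.

```python
from functools import partial

def f(x, y, z):
    return (x, y, z)

# Currying: a chain of single-argument functions, applied one at a time.
def curry3(f):
    return lambda x: lambda y: lambda z: f(x, y, z)

f_curried = curry3(f)
print(f_curried(1)(2)(3))   # (1, 2, 3)

# Partial application: fix the first argument and keep a function of the rest.
f_partial = partial(f, 1)
print(f_partial(2, 3))      # (1, 2, 3)

# After one argument, the curried form still expects the remaining arguments
# one at a time, whereas the partially applied form takes them together.
```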
[ { "math_id": 0, "text": "f:(X\\times Y)\\to Z" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "Y," }, { "math_id": 3, "text": "Z." }, { "math_id": 4, "text": "f_x :Y\\to Z." }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "X," }, { "math_id": 7, "text": "f_x." }, { "math_id": 8, "text": "\\mbox{curry}" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "(X\\times Y)\\to Z." }, { "math_id": 11, "text": "f_x" }, { "math_id": 12, "text": "Y\\to Z." }, { "math_id": 13, "text": "X\\to [Y\\to Z]." }, { "math_id": 14, "text": "\\mbox{curry}:[(X\\times Y)\\to Z]\\to (X\\to [Y\\to Z])." }, { "math_id": 15, "text": "\\mbox{apply}" }, { "math_id": 16, "text": "\\mbox{apply}:([(X\\times Y)\\to Z] \\times X) \\to [Y\\to Z]." }, { "math_id": 17, "text": "g" }, { "math_id": 18, "text": "f'" }, { "math_id": 19, "text": "X \\to Y " }, { "math_id": 20, "text": "Y" }, { "math_id": 21, "text": "f \\colon X \\to Y " }, { "math_id": 22, "text": "X \\times Y" }, { "math_id": 23, "text": "f \\colon (X \\times Y) \\to Z " }, { "math_id": 24, "text": "g \\colon X \\to (Y \\to Z) " }, { "math_id": 25, "text": "Y\\to Z" }, { "math_id": 26, "text": "g(x)(y)=f(x,y)" }, { "math_id": 27, "text": "y" }, { "math_id": 28, "text": "\\text{curry}(f)=g." }, { "math_id": 29, "text": "\\operatorname{apply}." }, { "math_id": 30, "text": "Y^X" }, { "math_id": 31, "text": "A^{B\\times C}" }, { "math_id": 32, "text": "B\\times C" }, { "math_id": 33, "text": "A" }, { "math_id": 34, "text": "(A^C)^B" }, { "math_id": 35, "text": "B" }, { "math_id": 36, "text": "C" }, { "math_id": 37, "text": "A^{B\\times C}\\cong (A^C)^B" }, { "math_id": 38, "text": "B\\mapsto B\\times C" }, { "math_id": 39, "text": "A \\mapsto A^C" }, { "math_id": 40, "text": "\\text{Hom}(X,Y)" }, { "math_id": 41, "text": "\\text{curry}" }, { "math_id": 42, "text": "\\text{curry}:\\text{Hom}(X\\times Y, Z) \\to \\text{Hom}(X, \\text{Hom}(Y,Z)) ," }, { "math_id": 43, "text": "\\text{curry} : Z^{X\\times Y}\\to (Z^Y)^X" }, { "math_id": 44, "text": "\\begin{align} &&\\text{eval}:Y^X \\times X \\to Y \\\\\n && (f,x) \\mapsto f(x) \\end{align}" }, { "math_id": 45, "text": "I" }, { "math_id": 46, "text": "Z^{I\\times Y} \\cong (Z^Y)^I" }, { "math_id": 47, "text": "Z" }, { "math_id": 48, "text": "Z^Y" }, { "math_id": 49, "text": "[\\Sigma X,Z] \\approxeq [X, \\Omega Z]" }, { "math_id": 50, "text": "[A,B]" }, { "math_id": 51, "text": "A \\rightarrow B" }, { "math_id": 52, "text": "\\Sigma A" }, { "math_id": 53, "text": "\\Omega A" }, { "math_id": 54, "text": "\\Sigma X" }, { "math_id": 55, "text": "\\Omega Z" }, { "math_id": 56, "text": "f(x,y)" }, { "math_id": 57, "text": "(X \\times Y)\\to Z" }, { "math_id": 58, "text": "\\text{curry}(f) = \\lambda x.(\\lambda y.(f(x,y)))" }, { "math_id": 59, "text": "\\lambda" }, { "math_id": 60, "text": "(X\\times Y)\\to Z" }, { "math_id": 61, "text": "\\text{curry}:((X \\times Y)\\to Z) \\to (X \\to (Y \\to Z))" }, { "math_id": 62, "text": "X \\to (Y \\to Z)" }, { "math_id": 63, "text": "X \\to Y \\to Z" }, { "math_id": 64, "text": "f(x, y)" }, { "math_id": 65, "text": "((\\text{curry}(f) \\; x) \\;y) = \\text{curry}(f) \\; x \\;y" }, { "math_id": 66, "text": "\\to" }, { "math_id": 67, "text": "\\times" }, { "math_id": 68, "text": "((A \\land B) \\to C) \\Leftrightarrow (A \\to (B \\to C))" }, { "math_id": 69, "text": "Q^P" }, { "math_id": 70, "text": "P\\to Q" }, { "math_id": 71, "text": "\\neg P \\lor Q" }, { "math_id": 72, "text": "g \\colon X \\to Z^Y " }, { "math_id": 73, "text": " 
\\mathrm{Hom}(A\\otimes B, C) \\cong \\mathrm{Hom}(A, B\\Rightarrow C) ." }, { "math_id": 74, "text": "B\\Rightarrow C" }, { "math_id": 75, "text": "C^B" }, { "math_id": 76, "text": "f \\colon (X \\times Y \\times Z) \\to N " }, { "math_id": 77, "text": "\\text{curry}(f) \\colon X \\to (Y \\to (Z \\to N)) " }, { "math_id": 78, "text": "f(1, 2, 3)" }, { "math_id": 79, "text": "f_\\text{curried}(1)(2)(3)" }, { "math_id": 80, "text": "f_\\text{curried}(1)" }, { "math_id": 81, "text": "\\text{partial}(f) \\colon (Y \\times Z) \\to N" }, { "math_id": 82, "text": "f_\\text{partial}(2, 3)" }, { "math_id": 83, "text": "a \\in X" }, { "math_id": 84, "text": "\\text{curry}(\\text{partial}(f)_a)(y)(z) = \\text{curry}(f)(a)(y)(z) " }, { "math_id": 85, "text": "\\text{partial}(f)_a = \\text{curry}_1(f)(a) " }, { "math_id": 86, "text": "\\text{curry}_1" }, { "math_id": 87, "text": "g(y,x)" }, { "math_id": 88, "text": "g(y,x) = f(x,y)" } ]
https://en.wikipedia.org/wiki?curid=6600
66001552
Attention (machine learning)
Machine learning technique Attention is a machine learning method that determines the relative importance of each component in a sequence relative to the other components in that sequence. In natural language processing, importance is represented by "soft" weights assigned to each word in a sentence. More generally, attention encodes vectors called token embeddings across a fixed-width sequence that can range from tens to millions of tokens in size. Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serial recurrent neural network language translation system, but the later transformer design removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme. Inspired by ideas about attention in humans, the attention mechanism was developed to address the weaknesses of leveraging information from the hidden layers of recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to be attenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state. History. Academic reviews of the history of the attention mechanism are provided in Niu et al. and Soydaner. Predecessors. Selective attention in humans had been well studied in neuroscience and cognitive psychology. In 1953, Colin Cherry studied selective attention in the context of audition, known as the cocktail party effect. In 1958, Donald Broadbent proposed the filter model of attention. Selective attention in vision was studied in the 1960s by George Sperling's partial report paradigm. It was also noticed that saccade control is modulated by cognitive processes, insofar as the eye moves preferentially towards areas of high salience. As the fovea of the eye is small, the eye cannot sharply resolve the entire visual field at once. The use of saccade control allows the eye to quickly scan important features of a scene. These research developments inspired algorithms such as the Neocognitron and its variants. Meanwhile, developments in neural networks had inspired circuit models of biological visual attention. One well-cited network from 1998, for example, was inspired by the low-level primate visual system. It produced saliency maps of images using handcrafted (not learned) features, which were then used to guide a second neural network in processing patches of the image in order of decreasing saliency. A key aspect of the attention mechanism can be written (schematically) as formula_0 where the angled brackets denote dot product. This shows that it involves a multiplicative operation. Multiplicative operations within neural networks had been studied under the names of "higher-order neural networks", "multiplication units", "sigma-pi units", "fast weight controllers", and "hyper-networks". Recurrent attention. During the deep learning era, the attention mechanism was developed to solve similar problems in encoding-decoding. In machine translation, the seq2seq model, as it was proposed in 2014, would encode an input text into a fixed-length vector, which would then be decoded into an output text.
If the input text is long, the fixed-length vector would be unable to carry enough information for accurate decoding. An attention mechanism was proposed to solve this problem. An image captioning model was proposed in 2015, citing inspiration from the seq2seq model, that would encode an input image into a fixed-length vector. (Xu et al 2015), citing (Bahdanau et al 2014), applied the attention mechanism as used in the seq2seq model to image captioning. Transformer. One problem with seq2seq models was their use of recurrent neural networks, which are not parallelizable as both the encoder and the decoder must process the sequence token-by-token. "Decomposable attention" attempted to solve this problem by processing the input sequence in parallel, before computing a "soft alignment matrix" ("alignment" is the terminology used by Bahdanau et al.) in order to allow for parallel processing. The idea of using the attention mechanism for self-attention, instead of in an encoder-decoder (cross-attention), was also proposed during this period, such as in differentiable neural computers and neural Turing machines. It was termed "intra-attention" where an LSTM is augmented with a memory network as it encodes an input sequence. These strands of development were brought together in 2017 with the Transformer architecture, published in the "Attention Is All You Need" paper. Machine translation. In neural machine translation, the seq2seq method developed in the early 2010s uses two neural networks. An encoder network encodes an input sentence into numerical vectors, which a decoder network decodes into an output sentence in another language. During the evolution of seq2seq in the 2014-2017 period, the attention mechanism was refined, until it appeared in the Transformer in 2017. seq2seq machine translation. Consider the seq2seq language English-to-French translation task. To be concrete, let us consider the translation of "the zone of international control &lt;end&gt;", which should translate to "la zone de contrôle international &lt;end&gt;". Here, we use the special &lt;end&gt; token as a control character to delimit the end of input for both the encoder and the decoder. An input sequence of text formula_1 is processed by a neural network (which can be an LSTM, a Transformer encoder, or some other network) into a sequence of real-valued vectors formula_2, where formula_3 stands for "hidden vector". After the encoder has finished processing, the decoder starts operating over the hidden vectors, to produce an output sequence formula_4, autoregressively. That is, it always takes as input both the hidden vectors produced by the encoder, and what the decoder itself has produced before, to produce the next output word. Here, we use the special &lt;start&gt; token as a control character to delimit the start of input for the decoder. The decoding terminates as soon as "&lt;end&gt;" appears in the decoder output. Attention weights. In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. For example, in translating "I love you" to the French "je t'aime", the second word "love" is aligned with the third word "aime". Stacking soft row vectors together for "je", "t'", and "aime" yields an alignment matrix. Sometimes, alignment can be multiple-to-multiple. For example, the English phrase "look it up" corresponds to "cherchez-le".
Thus, "soft" attention weights work better than "hard" attention weights (setting one attention weight to 1, and the others to 0), as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector. This view of the attention weights addresses some of the neural network explainability problem. Networks that perform verbatim translation without regard to word order would show the highest scores along the (dominant) diagonal of the matrix. The off-diagonal dominance shows that the attention mechanism is more nuanced. On the first pass through the decoder, 94% of the attention weight is on the first English word "I", so the network offers the word "je". On the second pass of the decoder, 88% of the attention weight is on the third English word "you", so it offers "t"'. On the last pass, 95% of the attention weight is on the second English word "love", so it offers "aime". Attention weights. As hand-crafting weights defeats the purpose of machine learning, the model must compute the attention weights on its own. Taking analogy from the language of database queries, we make the model construct a triple of vectors: key, query, and value. The rough idea is that we have a "database" in the form of a list of key-value pairs. The decoder send in a query, and obtain a reply in the form of a weighted sum of the values, where the weight is proportional to how closely the query resembles each key. The decoder first processes the "&lt;start&gt;" input partially, to obtain an intermediate vector formula_5, the 0th hidden vector of decoder. Then, the intermediate vector is transformed by a linear map formula_6 into a query vector formula_7. Meanwhile, the hidden vectors outputted by the encoder are transformed by another linear map formula_8 into key vectors formula_9. The linear maps are useful for providing the model with enough freedom to find the best way to represent the data. Now, the query and keys are compared by taking dot products: formula_10. Ideally, the model should have learned to compute the keys and values, such that formula_11 is large, formula_12 is small, and the rest are very small. This can be interpreted as saying that the attention weight should be mostly applied to the 0th hidden vector of the encoder, a little to the 1st, and essentially none to the rest. In order to make a properly weighted sum, we need to transform this list of dot products into a probability distribution over formula_13. This can be accomplished by the softmax function, thus giving us the attention weights:formula_14This is then used to compute the context vector:formula_15where formula_16 are the value vectors, linearly transformed by another matrix to provide the model with freedom to find the best way to represent values. Without the matrices formula_17, the model would be forced to use the same hidden vector for both key and value, which might not be appropriate, as these two tasks are not the same. This is the dot-attention mechanism. The particular version described in this section is "decoder cross-attention", as the output context vector is used by the decoder, and the input keys and values come from the encoder, but the query comes from the decoder, thus "cross-attention". More succinctly, we can write it asformula_18where the matrix formula_19 is the matrix whose rows are formula_20. Note that the querying vector, formula_21, is not necessarily the same as the key-value vector formula_22. 
In fact, it is theoretically possible for query, key, and value vectors to all be different, though that is rarely done in practice. Self-attention. Self-attention is essentially the same as cross-attention, except that query, key, and value vectors all come from the same model. Both encoder and decoder can use self-attention, but with subtle differences. For encoder self-attention, we can start with a simple encoder without self-attention, such as an "embedding layer", which simply converts each input word into a vector by a fixed lookup table. This gives a sequence of hidden vectors formula_20. These can then be applied to a dot-product attention mechanism, to obtain formula_23 or, more succinctly, formula_24. This can be applied repeatedly, to obtain a multilayered encoder. This is the "encoder self-attention", sometimes called the "all-to-all attention", as the vector at every position can attend to every other. Masking. For decoder self-attention, all-to-all attention is inappropriate, because during the autoregressive decoding process, the decoder cannot attend to future outputs that have yet to be decoded. This can be solved by forcing the attention weights formula_25 for all formula_26, called "causal masking". This attention mechanism is the "causally masked self-attention". General attention. In general, the attention unit consists of dot products, with 3 trained, fully-connected neural network layers called query, key, and value. The attention network was designed to identify patterns of high correlation amongst words in a given sentence, assuming that it has learned word correlation patterns from the training data. This correlation is captured as neuronal weights learned during training with backpropagation. The diagram shows the Attention forward pass calculating correlations of the word "that" with other words in "See that girl run." Given the right weights from training, the network should be able to identify "girl" as a highly correlated word. Schematically, the calculation can be written as formula_27 This attention scheme has been compared to the Query-Key analogy of relational databases. That comparison suggests an asymmetric role for the Query and Key vectors, where one item of interest (the Query vector "that") is matched against all possible items (the Key vectors of each word in the sentence). However, Attention's parallel calculations match all words of a sentence with each other; therefore the roles of these vectors are symmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding Attention further by studying its role in focused settings, such as in-context learning, masked language tasks, stripped down transformers, bigram statistics, N-gram statistics, pairwise convolutions, and arithmetic factoring. Variants. Many variants of attention implement soft weights. For convolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention, channel attention, or combinations. These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in the Core Calculations section above. Mathematical representation. Standard Scaled Dot-Product Attention.
For matrices: formula_28 and formula_29, the scaled dot-product, or QKV attention is defined as: formula_30 where formula_31 denotes transpose and the softmax function is applied independently to every row of its argument. The matrix formula_32 contains formula_33 queries, while matrices formula_34 jointly contain an "unordered" set of formula_35 key-value pairs. Value vectors in matrix formula_36 are weighted using the weights resulting from the softmax operation, so that the rows of the formula_33-by-formula_37 output matrix are confined to the convex hull of the points in formula_38 given by the rows of formula_36. To understand the permutation invariance and permutation equivariance properties of QKV attention, let formula_39 and formula_40 be permutation matrices; and formula_41 an arbitrary matrix. The softmax function is permutation equivariant in the sense that: formula_42 By noting that the transpose of a permutation matrix is also its inverse, it follows that: formula_43 which shows that QKV attention is equivariant with respect to re-ordering the queries (rows of formula_32); and invariant to re-ordering of the key-value pairs in formula_44. These properties are inherited when applying linear transforms to the inputs and outputs of QKV attention blocks. For example, a simple self-attention function defined as: formula_45 is permutation equivariant with respect to re-ordering the rows of the input matrix formula_46 in a non-trivial way, because every row of the output is a function of all the rows of the input. Similar properties hold for "multi-head attention", which is defined below. Multi-Head Attention. Multi-head attention formula_47 where each head is computed with QKV attention as: formula_48 and formula_49, and formula_50 are parameter matrices. The permutation properties of QKV attention apply here also. For permutation matrices, formula_51: formula_52 from which we also see that multi-head self-attention: formula_53 is equivariant with respect to re-ordering of the rows of input matrix formula_46. Bahdanau (Additive) Attention. formula_54 where formula_55 and formula_56 and formula_57 are learnable weight matrices. Luong Attention (General). formula_58 where formula_59 is a learnable weight matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
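A short NumPy sketch of the scaled dot-product formula above, with the optional causal mask from the Masking section; the input shapes are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=False):
    """softmax(Q K^T / sqrt(d_k)) V, optionally with a causal mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if causal:
        # forbid attending to later positions (w_ij = 0 for i < j)
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))             # six positions, d_k = d_v = 4
out = scaled_dot_product_attention(X, X, X, causal=True)
print(out.shape)                        # (6, 4)
```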
[ { "math_id": 0, "text": "\\sum_i \\langle(\\text{query})_i, (\\text{key})_i\\rangle (\\text{value})_i" }, { "math_id": 1, "text": "x_0, x_1, \\dots" }, { "math_id": 2, "text": "h_0, h_1, \\dots" }, { "math_id": 3, "text": "h" }, { "math_id": 4, "text": "y_0, y_1, \\dots" }, { "math_id": 5, "text": "h^d_0" }, { "math_id": 6, "text": "W^Q" }, { "math_id": 7, "text": "q_0 = h_0^d W^Q " }, { "math_id": 8, "text": "W^K" }, { "math_id": 9, "text": "k_0 = h_0 W^K, k_1 = h_1 W^K, \\dots " }, { "math_id": 10, "text": "q_0 k_0^T, q_0 k_1^T, \\dots" }, { "math_id": 11, "text": "q_0 k_0^T" }, { "math_id": 12, "text": "q_0 k_1^T " }, { "math_id": 13, "text": "0, 1, \\dots " }, { "math_id": 14, "text": "(w_{00}, w_{01}, \\dots) = \\mathrm{softmax}(q_0 k_0^T, q_0 k_1^T, \\dots) " }, { "math_id": 15, "text": "c_0 = w_{00} v_0 + w_{01} v_1 + \\cdots " }, { "math_id": 16, "text": "v_0 = h_0 W^V, v_1 = h_1 W^V , \\dots " }, { "math_id": 17, "text": "W^Q, W^K, W^V " }, { "math_id": 18, "text": "c_0 = \\mathrm{Attention}(h_0^d W^Q, HW^K, H W^V) = \\mathrm{softmax}((h_0^d W^Q) \\; (H W^K)^T) (H W^V) " }, { "math_id": 19, "text": "H " }, { "math_id": 20, "text": "h_0, h_1, \\dots " }, { "math_id": 21, "text": "h_0^d" }, { "math_id": 22, "text": "h_0" }, { "math_id": 23, "text": "\\begin{aligned}\nh_0' &= \\mathrm{Attention}(h_0 W^Q, HW^K, H W^V) \\\\ \nh_1' &= \\mathrm{Attention}(h_1 W^Q, HW^K, H W^V) \\\\\n&\\cdots\n\\end{aligned} " }, { "math_id": 24, "text": "H' = \\mathrm{Attention}(H W^Q, HW^K, H W^V) " }, { "math_id": 25, "text": "w_{ij} = 0 " }, { "math_id": 26, "text": "i < j " }, { "math_id": 27, "text": "\\begin{align} (XW_v)^T * {[ (W_k X^T) * { (\\underline{x}W_q)^T } ]_{sm} } \\end{align}" }, { "math_id": 28, "text": "\\mathbf{Q}\\in\\mathbb{R^{m\\times d_k}}, \\mathbf{K}\\in\\mathbb{R^{n\\times d_k}}" }, { "math_id": 29, "text": "\\mathbf{V}\\in\\mathbb{R^{n\\times d_v}}" }, { "math_id": 30, "text": "\n \\text{Attention}(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}) = \\text{softmax}\\left(\\frac{\\mathbf{Q}\\mathbf{K}^T}{\\sqrt{d_k}}\\right)\\mathbf{V}\\in\\mathbb{R}^{m\\times d_v}\n" }, { "math_id": 31, "text": "{}^T" }, { "math_id": 32, "text": "\\mathbf{Q}" }, { "math_id": 33, "text": "m" }, { "math_id": 34, "text": "\\mathbf{K}, \\mathbf{V}" }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": "\\mathbf{V}" }, { "math_id": 37, "text": "d_v" }, { "math_id": 38, "text": "\\mathbb{R}^{d_v}" }, { "math_id": 39, "text": "\\mathbf{A}\\in\\mathbb{R}^{m\\times m}" }, { "math_id": 40, "text": "\\mathbf{B}\\in\\mathbb{R}^{n\\times n}" }, { "math_id": 41, "text": "\\mathbf{D}\\in\\mathbb{R}^{m\\times n}" }, { "math_id": 42, "text": "\n\\text{softmax}(\\mathbf{A}\\mathbf{D}\\mathbf{B}) = \\mathbf{A}\\,\\text{softmax}(\\mathbf{D})\\mathbf{B}\n" }, { "math_id": 43, "text": "\n\\text{Attention}(\\mathbf{A}\\mathbf{Q}, \\mathbf{B}\\mathbf{K}, \\mathbf{B}\\mathbf{V}) = \\mathbf{A}\\,\\text{Attention}(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V})\n" }, { "math_id": 44, "text": "\\mathbf{K},\\mathbf{V}" }, { "math_id": 45, "text": "\n\\mathbf{X}\\mapsto\\text{Attention}(\\mathbf{X}\\mathbf{T}_q, \\mathbf{X}\\mathbf{T}_k, \\mathbf{X}\\mathbf{T}_v)\n" }, { "math_id": 46, "text": "X" }, { "math_id": 47, "text": "\n \\text{MultiHead}(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}) = \\text{Concat}(\\text{head}_1, ..., \\text{head}_h)\\mathbf{W}^O\n" }, { "math_id": 48, "text": "\n \\text{head}_i = \\text{Attention}(\\mathbf{Q}\\mathbf{W}_i^Q, \\mathbf{K}\\mathbf{W}_i^K, \\mathbf{V}\\mathbf{W}_i^V)\n" }, { "math_id": 49, "text": 
"\\mathbf{W}_i^Q, \\mathbf{W}_i^K, \\mathbf{W}_i^V" }, { "math_id": 50, "text": "\\mathbf{W}^O" }, { "math_id": 51, "text": "\\mathbf{A}, \\mathbf{B}" }, { "math_id": 52, "text": "\n\\text{MultiHead}(\\mathbf{A}\\mathbf{Q}, \\mathbf{B}\\mathbf{K}, \\mathbf{B}\\mathbf{V}) = \\mathbf{A}\\,\\text{MultiHead}(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V})\n" }, { "math_id": 53, "text": "\n\\mathbf{X}\\mapsto\\text{MultiHead}(\\mathbf{X}\\mathbf{T}_q, \\mathbf{X}\\mathbf{T}_k, \\mathbf{X}\\mathbf{T}_v)\n" }, { "math_id": 54, "text": "\n \\text{Attention}(Q, K, V) = \\text{softmax}(e)V\n" }, { "math_id": 55, "text": "e = \\tanh(W_QQ + W_KK)" }, { "math_id": 56, "text": "W_Q" }, { "math_id": 57, "text": "W_K" }, { "math_id": 58, "text": "\n \\text{Attention}(Q, K, V) = \\text{softmax}(QW_aK^T)V\n" }, { "math_id": 59, "text": "W_a" } ]
https://en.wikipedia.org/wiki?curid=66001552
660019
Convex polygon
Polygon that is the boundary of a convex set In geometry, a convex polygon is a polygon that is the boundary of a convex set. This means that the line segment between two points of the polygon is contained in the union of the interior and the boundary of the polygon. In particular, it is a simple polygon (not self-intersecting). Equivalently, a polygon is convex if every line that does not contain any edge intersects the polygon in at most two points. A strictly convex polygon is a convex polygon such that no line contains two of its edges. In a convex polygon, all interior angles are less than or equal to 180 degrees, while in a strictly convex polygon all interior angles are strictly less than 180 degrees. Properties. The following properties of a simple polygon are all equivalent to convexity: Additional properties of convex polygons include: Every polygon inscribed in a circle (such that all vertices of the polygon touch the circle), if not self-intersecting, is convex. However, not every convex polygon can be inscribed in a circle. Strict convexity. The following properties of a simple polygon are all equivalent to strict convexity: Every non-degenerate triangle is strictly convex.
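One property listed above, that the interior angles of a convex polygon never exceed 180 degrees, translates into a simple computational test: for a simple polygon traversed in order, the cross products of consecutive edge vectors never take both signs. The following Python sketch assumes the vertices are given in order around a simple (non-self-intersecting) polygon.

```python
def is_convex(vertices):
    """Return True if the simple polygon with the given ordered vertices is convex.

    The z-components of the cross products of consecutive edge vectors must
    not take both signs; zeros correspond to straight angles, which are
    allowed for convex but not for strictly convex polygons.
    """
    n = len(vertices)
    signs = set()
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        x2, y2 = vertices[(i + 2) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

print(is_convex([(0, 0), (2, 0), (2, 2), (0, 2)]))          # True (square)
print(is_convex([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # False (has a reflex vertex)
```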
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "2A" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "0.5 \\text{ × Area}(R) \\leq \\text{Area}(C) \\leq 2 \\text{ × Area}(r)" }, { "math_id": 6, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=660019
660026
Decibel watt
SI measurement of signal strength and intensity The decibel watt (dBW) is a unit for the measurement of the strength of a signal expressed in decibels relative to one watt. It is used because of its capability to express both very large and very small values of power in a short range of numbers; e.g., 1 milliwatt = −30 dBW, 1 watt = 0 dBW, 10 watts = 10 dBW, 100 watts = 20 dBW, and 1,000,000 W = 60 dBW. formula_0 and also formula_1 Compare dBW to dBm, which is referenced to one milliwatt (0.001 W). A given dBW value expressed in dBm is always 30 more because 1 watt is 1,000 milliwatts, and a ratio of 1,000 (in power) is 30 dB; e.g., 10 dBm (10 mW) is equal to −20 dBW (0.01 W). In the SI system, the non-SI modifier decibel (dB) is not permitted for use directly alongside SI units, so the dBW is not itself a permitted unit; however, 10 dBW may be written as 10 dB (1 W). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
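A small Python sketch of the two formulas above (watts to dBW and back), together with the 30 dB offset to dBm mentioned in the text; it is purely illustrative.

```python
import math

def watts_to_dbw(p_watts):
    return 10 * math.log10(p_watts / 1.0)  # reference level: 1 W

def dbw_to_watts(p_dbw):
    return 10 ** (p_dbw / 10)

def dbw_to_dbm(p_dbw):
    return p_dbw + 30                      # 1 W = 1,000 mW, and a factor of 1,000 is 30 dB

print(watts_to_dbw(0.001))      # about -30 (1 milliwatt)
print(watts_to_dbw(1_000_000))  # about 60 (1,000,000 W)
print(dbw_to_watts(20))         # 100.0 (20 dBW is 100 W)
print(dbw_to_dbm(-20))          # 10, i.e. -20 dBW (0.01 W) is 10 dBm
```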
[ { "math_id": 0, "text": "\\mbox{Power in dBW} = 10 \\log_{10}\\frac{\\mbox{Power}}{ 1 \\mathrm{W}} " }, { "math_id": 1, "text": "\\mbox{Power in W} = 10^{\\frac{\\mbox{Power in dBW}}{10}} " } ]
https://en.wikipedia.org/wiki?curid=660026
66003120
Soiling (solar energy)
Accumulation of material on solar energy collectors Soiling is the accumulation of material on light-collecting surfaces in solar power systems. The accumulated material blocks or scatters incident light, which leads to a loss in power output. Typical soiling materials include mineral dust, bird droppings, fungi, lichen, pollen, engine exhaust, and agricultural emissions. Soiling affects conventional photovoltaic systems, concentrated photovoltaics, and concentrated solar (thermal) power. However, the consequences of soiling are higher for concentrating systems than for non-concentrating systems. Note that soiling refers to both the process of accumulation and the accumulated material itself. There are several ways to reduce the effect of soiling. Anti-soiling coatings are regarded as the most important solution for solar power projects, but water cleaning is the most widely used technique so far, largely because anti-soiling coatings were not available in the past. Soiling losses vary greatly from region to region, and within regions. Average soiling-induced power losses can be below one percent in regions with frequent rain. As of 2018, the estimated global average annual power loss due to soiling is 5% to 10%. The estimated soiling-induced revenue loss is 3–5 billion euros. Physics of soiling. Soiling is typically caused by the deposition of airborne particles, including, but not limited to, mineral dust (silica, metal oxides, salts), pollen, and soot. However, soiling also includes snow, ice, frost, various kinds of industry pollution, sulfuric acid particulates, bird droppings, falling leaves, agricultural feed dust, and the growth of algae, moss, fungi, lichen, or biofilms of bacteria. Which of these soiling mechanisms is most prominent depends on the location. Soiling either blocks the light completely (hard shading), or it lets through some sunlight (soft shading). With soft shading, part of the transmitted light is scattered. Scattering makes the light diffuse, i.e. the rays go in many different directions. While conventional photovoltaics works well with diffuse light, concentrated solar power and concentrated photovoltaics rely only on the (collimated) light coming "directly" from the sun. For this reason, concentrated solar power is more sensitive to soiling than conventional photovoltaics. Typical soiling-induced power losses are 8–14 times higher for concentrated solar power than for photovoltaics. Influence of geography and meteorology. Soiling losses vary greatly from region to region, and within regions. The rate at which soiling deposits depends on geographical factors such as proximity to deserts, agriculture, industry, and roads, as these are likely to be sources of airborne particles. If a location is close to a source of airborne particles, the risk of soiling losses is high. The soiling rate (see definition below) varies from season to season and from location to location, but is typically between 0%/day and 1%/day. However, average deposition rates as high as 2.5%/day have been observed for conventional photovoltaics in China. For concentrated solar power, soiling rates as high as 5%/day have been observed. In regions with high soiling rates, soiling can become a significant contributor to power losses. As an extreme example, the total losses due to soiling of a photovoltaic system in the city of Helwan (Egypt) were observed to reach 66% at one point. The soiling in Helwan was attributed to dust from a nearby desert and local industry pollution.
Several initiatives to map out the soiling risk of different regions of the world exist. Soiling losses also depend on meteorological parameters such as rain, temperature, wind, humidity, and cloud cover. The most important meteorological factor is the average frequency of rain, since rain can wash soiling off of the solar panels/mirrors. If there is consistent rain throughout the whole year at a given site, the soiling losses are likely to be small. However, light rain and dew can also lead to increased particle adhesion, increasing the soiling losses. Some climates are favorable for the growth of biological soiling, but it is not known what the decisive factors are. The dependence of soiling on climate and weather is a complex matter. As of 2019, it is not possible to accurately predict soiling rates based on meteorological parameters. Quantifying soiling losses. The level of soiling in a photovoltaic system can be expressed with the "soiling ratio" ("SR"), defined in the technical standard IEC 61724-1 as: formula_0 Hence, if formula_1 there is no soiling, and if formula_2, there is so much soiling that there is no production in the photovoltaic system. An alternative metric is the "soiling loss" ("SL"), which is defined as formula_3. The soiling loss represents the fraction of energy lost due to soiling. The "soiling deposition rate" (or "soiling rate") is the rate of change of the soiling loss, typically given in %/day. Note that most sources define the soiling rate to be positive in the case of increasing soiling losses, but some sources use the opposite sign. A procedure for measuring the soiling ratio at photovoltaic systems is given in IEC 61724-1. This standard proposes that two photovoltaic devices are used, where one is left to accumulate soil, and the other is held clean. The soiling ratio is estimated by the ratio of the power output of the soiled device to its expected power output if it were clean. The expected power output is calculated using calibration values and the measured short-circuit current of the clean device. This setup is also referred to as a "soiling measurement station", or just "soiling station". Methods that estimate soiling ratios and soiling deposition rates of photovoltaic systems without the use of dedicated soiling stations have been proposed, including methods for systems using bifacial solar cells, which introduce new variables and challenges to soiling estimation that monofacial systems don't have. These procedures infer soiling ratios based on the performance of the photovoltaic systems. A project for mapping out the soiling losses throughout the United States was started in 2017. This project is based on data from both soiling stations and photovoltaic systems, and uses the method proposed in the literature to extract soiling ratios and soiling rates. Mitigation techniques. There are many different options for mitigating soiling losses, ranging from site selection to cleaning to electrodynamic dust removal. The optimal mitigation technique depends on soiling type, deposition rate, water availability, accessibility of the site, and system type. For instance, conventional photovoltaics involve different concerns than concentrated solar power, large-scale systems call for different concerns than smaller rooftop systems, and systems with fixed inclination involve different concerns than systems with solar trackers. The most common mitigation techniques are described below.
A combination of wet-chemically etched nanowires and a hydrophobic coating on the surface has been shown to allow water droplets to remove 98% of the dust particles. This approach requires a higher capital investment, but involves lower cost of labor than manual cleaning. Fully automatic cleaning involves the use of robots that clean the solar panels at night. This approach requires the highest capital cost, but involves no manual labor except for maintenance of the robots. All three methods may or may not use water. Typically, water makes the cleaning more efficient. However, if water is a scarce or expensive resource at the given site, dry cleaning may be preferred. See Economic consequences for typical costs of cleaning. The coating can be applied to the panels/mirrors during production or retrofitted after they have been installed. As of 2019, no particular anti-soiling technology had been widely adopted, mostly due to a lack of durability. Economic consequences. The cost of cleaning depends on what cleaning technique is used and the cost of labor at the given location. Furthermore, there is a difference between large-scale power station and rooftop systems. The cost of cleaning of large-scale systems varies from 0.015 euro/m2 in the cheapest countries to 0.9 euro/m2 in the Netherlands. The cost of cleaning of rooftop systems has been reported to be as low as 0.06 euro/m2 in China, and as high as 8 euro/m2 in the Netherlands. Soiling leads to reduced power production in the affected solar power equipment. Whether or not money is spent on mitigating soiling losses, soiling leads to reduced revenue for the owners of the system. The magnitude of the revenue loss depends mostly on the cost of soiling mitigation, the soiling deposition rate, and the frequency of rain at the given location. Ilse et al. estimated the global average annual soiling loss to be between 3% and 4% in 2018. This estimate was made under the assumption that all solar power systems are cleaned with an optimal fixed frequency. Based on this estimate, the total cost of soiling (including power losses and mitigation costs) in 2018 was estimated to be between 3 and 5 billion euros. This could grow to between 4 and 7 billion euros by 2023. A method to obtain the power loss, energy loss, and economic loss due to soiling directly from PV remote monitoring system time-series data has also been discussed; it can help PV asset owners clean the panels in a timely manner. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
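To illustrate the quantities defined in the section on quantifying soiling losses, here is a short Python sketch that computes the soiling ratio, the soiling loss, and an average soiling rate from daily data; the numbers are invented for illustration.

```python
def soiling_ratio(actual_power, expected_clean_power):
    """SR: actual output divided by the expected output if clean (IEC 61724-1)."""
    return actual_power / expected_clean_power

def soiling_loss(sr):
    """SL = 1 - SR, the fraction of energy lost due to soiling."""
    return 1.0 - sr

# Hypothetical daily measurements over a dry week (kW): soiled system vs. clean reference.
actual   = [98.0, 97.2, 96.5, 95.9, 95.1, 94.4, 93.8]
expected = [100.0] * 7

losses = [soiling_loss(soiling_ratio(a, e)) for a, e in zip(actual, expected)]
rate = (losses[-1] - losses[0]) / (len(losses) - 1)  # average change in SL per day

print([round(l, 3) for l in losses])                 # soiling loss grows day by day
print(f"average soiling rate: {rate * 100:.2f} %/day")
```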
[ { "math_id": 0, "text": " SR = \\frac{\\text{Actual power output}}{\\text{Expected power output if clean}}." }, { "math_id": 1, "text": "SR = 1" }, { "math_id": 2, "text": "SR = 0" }, { "math_id": 3, "text": " SL = 1-SR " } ]
https://en.wikipedia.org/wiki?curid=66003120
6601335
Partial specific volume
The partial specific volume formula_0 expresses the variation of the extensive volume of a mixture with respect to the composition of the masses. It is the partial derivative of volume with respect to the mass of the component of interest. formula_1 where formula_2 is the partial specific volume of a component formula_3 defined as: formula_4 The PSV is usually measured in milliliters (mL) per gram (g); proteins &gt; 30 kDa can be assumed to have a partial specific volume of 0.708 mL/g. Experimental determination is possible by measuring the natural frequency of a U-shaped tube filled successively with air, buffer and protein solution. Properties. The weighted sum of the partial specific volumes of a mixture or solution is the inverse of the density of the mixture, namely the specific volume of the mixture. formula_5 formula_6
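As a numerical illustration of the weighted-sum relation above, the following sketch computes the specific volume and density of a dilute protein solution from assumed mass fractions and partial specific volumes (the component values are invented for the example).

```python
# Specific volume and density of a mixture from partial specific volumes,
# using v = sum_i w_i * v_i and rho = 1 / v.  Component values are assumed.
mass_fractions = {"water": 0.95, "protein": 0.05}               # w_i, must sum to 1
partial_specific_volumes = {"water": 1.002, "protein": 0.708}   # v_i in mL/g

v_mixture = sum(mass_fractions[c] * partial_specific_volumes[c] for c in mass_fractions)
density_g_per_ml = 1.0 / v_mixture

print(f"specific volume of the mixture: {v_mixture:.4f} mL/g")
print(f"density of the mixture:         {density_g_per_ml:.4f} g/mL")
```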
[ { "math_id": 0, "text": "\\bar{v_i}," }, { "math_id": 1, "text": "V=\\sum _{i=1}^n m_i \\bar{v_i}," }, { "math_id": 2, "text": "\\bar{v_i}" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "\\bar{v_i}=\\left( \\frac{\\partial V}{\\partial m_i} \\right)_{T,P,m_{j\\neq i}}." }, { "math_id": 5, "text": "v = \\sum_i w_i\\cdot \\bar{v_i} = \\frac {1}{\\rho}" }, { "math_id": 6, "text": "\\sum_i \\rho_i \\cdot \\bar{v_i} = 1" } ]
https://en.wikipedia.org/wiki?curid=6601335
66014
Pascal (unit)
SI derived unit of pressure &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The pascal (symbol: Pa) is the unit of pressure in the International System of Units (SI). It is also used to quantify internal pressure, stress, Young's modulus, and ultimate tensile strength. The unit, named after Blaise Pascal, is an SI coherent derived unit defined as one newton per square metre (N/m2). It is also equivalent to 10 barye (10 Ba) in the CGS system. Common multiple units of the pascal are the hectopascal (1 hPa = 100 Pa), which is equal to one millibar, and the kilopascal (1 kPa = 1000 Pa), which is equal to one centibar. The unit of measurement called "standard atmosphere (atm)" is defined as 101,325 Pa. Meteorological observations typically report atmospheric pressure in hectopascals per the recommendation of the World Meteorological Organization, thus a standard atmosphere (atm) or typical sea-level air pressure is about 1013 hPa. Reports in the United States typically use inches of mercury or millibars (hectopascals). In Canada these reports are given in kilopascals. Etymology. The unit is named after Blaise Pascal, noted for his contributions to hydrodynamics and hydrostatics, and experiments with a barometer. The name "pascal" was adopted for the SI unit newton per square metre (N/m2) by the 14th General Conference on Weights and Measures in 1971. Definition. The pascal can be expressed using SI derived units, or alternatively solely SI base units, as: formula_0 where N is the newton, m is the metre, kg is the kilogram, s is the second, and J is the joule. One pascal is the pressure exerted by a force of magnitude one newton perpendicularly upon an area of one square metre. Standard units. The unit of measurement called an atmosphere or a standard atmosphere (atm) is 101,325 Pa. This value is often used as a reference pressure and specified as such in some national and international standards, such as the International Organization for Standardization's ISO 2787 (pneumatic tools and compressors), ISO 2533 (aerospace) and ISO 5024 (petroleum). In contrast, the International Union of Pure and Applied Chemistry (IUPAC) recommends the use of 100 kPa as a standard pressure when reporting the properties of substances. Unicode has dedicated code-points for the pascal and the kilopascal in the CJK Compatibility block, but these exist only for backward-compatibility with some older ideographic character-sets and are therefore deprecated. Uses. The pascal (Pa) or kilopascal (kPa) as a unit of pressure measurement is widely used throughout the world and has largely replaced the pounds per square inch (psi) unit, except in some countries that still use the imperial measurement system or the US customary system, including the United States. Geophysicists use the gigapascal (GPa) in measuring or calculating tectonic stresses and pressures within the Earth. Medical elastography measures tissue stiffness non-invasively with ultrasound or magnetic resonance imaging, and often displays the Young's modulus or shear modulus of tissue in kilopascals. In materials science and engineering, the pascal measures the stiffness, tensile strength and compressive strength of materials. In engineering the megapascal (MPa) is the preferred unit for these uses, because the pascal represents a very small quantity. The pascal is also equivalent to the SI unit of energy density, the joule per cubic metre. This applies not only to the thermodynamics of pressurised gases, but also to the energy density of electric, magnetic, and gravitational fields.
The pascal is used to measure sound pressure. Loudness is the subjective experience of sound pressure and is measured as a sound pressure level (SPL) on a logarithmic scale of the sound pressure relative to some reference pressure. For sound in air, a pressure of 20 μPa is considered to be at the threshold of hearing for humans and is a common reference pressure, so that its SPL is zero. The airtightness of buildings is measured at 50 Pa. In medicine, blood pressure is measured in millimeters of mercury (mmHg, very close to one Torr). The normal adult blood pressure is less than 120 mmHg systolic BP (SBP) and less than 80 mmHg diastolic BP (DBP). Convert mmHg to SI units as follows: 1 mmHg = 0.13332 kPa. Hence normal blood pressure in SI units is less than 16.0 kPa SBP and less than 10.7 kPa DBP. These values are similar to the pressure of a water column of average human height, so blood pressure has to be measured on the arm roughly at the level of the heart. Hectopascal and millibar units. The units of atmospheric pressure commonly used in meteorology were formerly the bar (100,000 Pa), which is close to the average air pressure on Earth, and the millibar. Since the introduction of SI units, meteorologists generally measure pressures in the hectopascal (hPa) unit, equal to 100 pascals or 1 millibar. Exceptions include Canada, which uses kilopascals (kPa). In many other fields of science, prefixes that are a power of 1000 are preferred, which excludes the hectopascal from use. Many countries also use millibars. In practically all other fields, the kilopascal is used instead. Multiples and submultiples. Decimal multiples and submultiples are formed using standard SI prefixes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
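The conversions quoted above are easy to check numerically. The short sketch below expresses a standard atmosphere in hectopascals and converts the stated blood pressure limits from mmHg to kPa using the factor given in the text.

```python
# Quick check of the unit relationships quoted above.
PA_PER_ATM = 101_325     # standard atmosphere in pascals
PA_PER_HPA = 100         # one hectopascal in pascals
KPA_PER_MMHG = 0.13332   # conversion factor given in the text

print(f"1 atm = {PA_PER_ATM / PA_PER_HPA:.2f} hPa")   # about 1013 hPa

for label, mmhg in [("systolic limit", 120), ("diastolic limit", 80)]:
    print(f"{label}: {mmhg} mmHg = {mmhg * KPA_PER_MMHG:.1f} kPa")
```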
[ { "math_id": 0, "text": "{\\rm 1~Pa = 1~\\frac{N}{m^2} = 1~\\frac{kg}{m {\\cdot} s^2} = 1~\\frac{J}{m^3} }" } ]
https://en.wikipedia.org/wiki?curid=66014
66033889
L-Aspartic-4-semialdehyde
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound L-Aspartic-4-semialdehyde is an α-amino acid derivative of aspartate. It is an important intermediate in the aspartate pathway, which is a metabolic pathway present in bacteria and plants. The aspartate pathway leads to the biosynthesis of a variety of amino acids from aspartate, including lysine, methionine, and threonine. Aspartate pathway. The aspartate pathway is an amino acid metabolic pathway present in bacteria and plants that deals with converting aspartate to other amino acids through a series of reactions and intermediates. L-Aspartate-4-semialdehyde serves as one of the first intermediates in the pathway and as an important step of differentiation in the pathway. L-Aspartate-4-semialdehyde is synthesized by the enzyme aspartate semialdehyde dehydrogenase, which catalyzes the following reversible chemical reaction: L-4-Aspartyl phosphate + NADPH + H+ formula_0 L-aspartate-4-semialdehyde + NADP+ + phosphate Once L-aspartate-4-semialdehyde is synthesized, the molecule can then progress down a number of pathways. One possible pathway requires L-aspartate-4-semialdehyde to undergo a reaction catalyzed by the enzyme dihydrodipicolinate synthase in order to form the molecule dihydrodipicolinate. This reversible chemical reaction is shown below: L-Aspartate-4-semialdehyde + pyruvate formula_0 dihydrodipicolinate + H2O Once dihydrodipicolinate is synthesized, it can continue down the metabolic pathway leading to the synthesis of lysine. Other than the lysine biosynthetic pathway, L-aspartate-4-semialdehyde can also undergo a reversible reaction catalyzed by the enzyme homoserine dehydrogenase. This reaction, which turns L-aspartate-4-semialdehyde into homoserine, is shown below: L-Aspartate-4-semialdehyde + NAD(P)H + H+ formula_0 homoserine + NAD(P)+ Homoserine represents another branch in the aspartate pathway, as it can progress down one of two pathways to eventually become one of two amino acids: threonine or methionine. This aspartate pathway is present in plants and bacteria, allowing them to synthesize lysine, methionine, and threonine. This pathway is not present in humans or other animals, however. The lack of this pathway means that humans need to take in these amino acids through their diet, which is why they are called essential amino acids. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=66033889
6603892
Fiber (mathematics)
Set of all points in a function's domain that all map to some single given point In mathematics, the fiber (US English) or fibre (British English) of an element formula_0 under a function formula_1 is the preimage of the singleton set formula_2, that is formula_3 As an example of abuse of notation, this set is often denoted as formula_4, which is technically incorrect since the inverse relation formula_5 of formula_1 is not necessarily a function. Properties and applications. In naive set theory. If formula_6 and formula_7 are the domain and image of formula_1, respectively, then the fibers of formula_1 are the sets in formula_8 which is a partition of the domain set formula_6. Note that formula_0 must be restricted to the image set formula_7 of formula_1, since otherwise formula_4 would be the empty set, which is not allowed in a partition. The fiber containing an element formula_9 is the set formula_10 For example, let formula_1 be the function from formula_11 to formula_12 that sends point formula_13 to formula_14. The fiber of 5 under formula_1 consists of all the points on the straight line with equation formula_15. The fibers of formula_1 are that line and all the straight lines parallel to it, which form a partition of the plane formula_11. More generally, if formula_1 is a linear map from some linear vector space formula_6 to some other linear space formula_7, the fibers of formula_1 are affine subspaces of formula_6, which are all the translated copies of the null space of formula_1. If formula_1 is a real-valued function of several real variables, the fibers of the function are the level sets of formula_1. If formula_1 is also a continuous function and formula_16 is in the image of formula_17 the level set formula_4 will typically be a curve in 2D, a surface in 3D, and, more generally, a hypersurface in the domain of formula_18 The fibers of formula_1 are the equivalence classes of the equivalence relation formula_19 defined on the domain formula_6 such that formula_20 if and only if formula_21. In topology. In point set topology, one generally considers functions from topological spaces to topological spaces. If formula_1 is a continuous function and if formula_7 (or more generally, the image set formula_22) is a T1 space then every fiber is a closed subset of formula_23 In particular, if formula_1 is a local homeomorphism from formula_6 to formula_7, each fiber of formula_1 is a discrete subspace of formula_6. A function between topological spaces is called monotone if every fiber is a connected subspace of its domain. A function formula_24 is monotone in this topological sense if and only if it is non-increasing or non-decreasing, which is the usual meaning of "monotone function" in real analysis. A function between topological spaces is (sometimes) called a proper map if every fiber is a compact subspace of its domain. However, many authors use other non-equivalent competing definitions of "proper map" so it is advisable to always check how a particular author defines this term. A continuous closed surjective function whose fibers are all compact is called a perfect map. A fiber bundle is a function formula_1 between topological spaces formula_6 and formula_7 whose fibers have certain special properties related to the topology of those spaces. In algebraic geometry. In algebraic geometry, if formula_25 is a morphism of schemes, the fiber of a point formula_26 in formula_7 is the fiber product of schemes formula_27 where formula_28 is the residue field at formula_29 See also.
&lt;templatestyles src="Div col/styles.css"/&gt;
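For a function on a finite domain, the fibers can be computed directly by grouping domain elements by their image, which makes the partition property easy to see. The sketch below does this for a finite analogue of the plane example above, f(a, b) = a + b on a small grid; the domain and grid size are chosen arbitrarily for illustration.

```python
# Fibers of a function on a finite domain: group every x by its value f(x).
from collections import defaultdict

domain = [(a, b) for a in range(4) for b in range(4)]
f = lambda p: p[0] + p[1]

fibers = defaultdict(set)
for x in domain:
    fibers[f(x)].add(x)          # the fiber of y collects every x with f(x) = y

# Together the fibers cover the whole domain (and they are disjoint by construction).
assert sum(len(s) for s in fibers.values()) == len(domain)

for y in sorted(fibers):
    print(f"fiber of {y}: {sorted(fibers[y])}")
```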
[ { "math_id": 0, "text": "y" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "\\{ y \\}" }, { "math_id": 3, "text": "f^{-1}(\\{y\\}) = \\{ x \\mathrel{:} f(x) = y \\}" }, { "math_id": 4, "text": "f^{-1}(y)" }, { "math_id": 5, "text": "f^{-1}" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "Y" }, { "math_id": 8, "text": "\\left\\{ f^{-1}(y) \\mathrel{:} y \\in Y \\right\\}\\quad=\\quad \\left\\{\\left\\{ x\\in X \\mathrel{:} f(x) = y \\right\\} \\mathrel{:} y \\in Y\\right\\}" }, { "math_id": 9, "text": "x\\in X" }, { "math_id": 10, "text": "f^{-1}(f(x))." }, { "math_id": 11, "text": "\\R^2" }, { "math_id": 12, "text": "\\R" }, { "math_id": 13, "text": "(a,b)" }, { "math_id": 14, "text": "a+b" }, { "math_id": 15, "text": "a+b=5" }, { "math_id": 16, "text": "y\\in\\R" }, { "math_id": 17, "text": "f," }, { "math_id": 18, "text": "f." }, { "math_id": 19, "text": "\\equiv_f" }, { "math_id": 20, "text": "x'\\equiv_f x''" }, { "math_id": 21, "text": "f(x') = f(x'')" }, { "math_id": 22, "text": "f(X)" }, { "math_id": 23, "text": "X." }, { "math_id": 24, "text": "f : \\R \\to \\R" }, { "math_id": 25, "text": "f : X \\to Y" }, { "math_id": 26, "text": "p" }, { "math_id": 27, "text": "X \\times_Y \\operatorname{Spec} k(p)" }, { "math_id": 28, "text": "k(p)" }, { "math_id": 29, "text": "p." } ]
https://en.wikipedia.org/wiki?curid=6603892
66038980
Endaze
Ottoman unit of length Endaze is a defunct measurement unit of length used in the Ottoman Empire. Endaze means pace, but it is shorter than an actual pace. It was equal to 65.25 cm. It was usually used in the silk trade. Its subunit was the rubu, with formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1\\quad \\text{rubu}= \\frac {1}{8}\\quad \\text{endaze}" } ]
https://en.wikipedia.org/wiki?curid=66038980
6604
Rendering (computer graphics)
Process of generating an image from a model Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as a rendering. Multiple models can be defined in a "scene file" containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, textures, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output. A software application or component that performs rendering is called a rendering engine, render engine, , graphics engine, or simply a renderer. Rendering is one of the major sub-topics of 3D computer graphics, and in practice it is always connected to the others. It is the last major step in the graphics pipeline, giving models and animation their final appearance. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject. Rendering has uses in architecture, video games, simulators, movie and TV visual effects, and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available for use. Some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on multiple disciplines, including light physics, visual perception, mathematics, and software development. Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image on a screen from a 3D representation stored in a scene file are handled by the graphics pipeline in a rendering device such as a GPU. A GPU is a purpose-built device that assists a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software must solve the rendering equation. The rendering equation does not account for all lighting phenomena, but instead acts as a general lighting model for computer-generated imagery. In the case of 3D graphics, scenes can be pre-rendered or generated in realtime. Pre-rendering is a slow, computationally intensive process that is typically used for movie creation, where scenes can be generated ahead of time, while real-time rendering is often done for 3D video games and other applications that must dynamically create scenes. 3D hardware accelerators can improve realtime rendering performance. Features. A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together. Inputs. Before a 3D scene or 2D image can be rendered, it must be described in a way that the rendering software can understand. Historically, inputs for both 2D and 3D rendering were usually text files, which are easier than binary files for humans to edit and understand. 
For 3D graphics, text formats have largely been supplanted by more efficient binary formats, and by APIs which allow interactive applications to communicate directly with a rendering component without generating a file on disk (although a scene description is usually still created in memory prior to rendering).1.2, 3.2.6, 3.3.1, 3.3.7 Traditional rendering algorithms use geometric descriptions of 3D scenes or 2D images. Applications and algorithms that render visualizations of data scanned from the real world, or scientific simulations, may require different types of input data. The PostScript format (which is often credited with the rise of desktop publishing) provides a standardized, interoperable way to describe 2D graphics and page layout. The Scalable Vector Graphics (SVG) format is also text-based, and the PDF format uses the PostScript language internally. In contrast, although there have been many attempts at standardization of 3D graphics file formats (including text-based formats such as VRML and X3D), different rendering applications typically use formats tailored to their needs, and this has led to a proliferation of formats, with binary files being more common.3.2.3, 3.2.5, 3.3.7vii16.5.2. 2D vector graphics. A vector graphics image description may include: 3D geometry. A geometric scene description may include:Ch. 4-7, 8.7 Many file formats exist for storing individual 3D objects or "models". These can be imported into a larger scene, or loaded on-demand by rendering software or games. A realistic scene may require hundreds of items like household objects, vehicles, and trees, and 3D artists often utilize large libraries of models. In game production, these models (along with other data such as textures, audio files, and animations) are referred to as "assets".Ch. 4 Volumetric data. Scientific and engineering visualization often requires rendering volumetric data generated by 3D scans or simulations. Perhaps the most common source of such data is medical CT and MRI scans, which need to be rendered for diagnosis. Volumetric data can be extremely large, and requires specialized data formats to store it efficiently, particularly if the volume is "sparse" (with empty regions that do not contain data).14.3.1 Before rendering, level sets for volumetric data can be extracted and converted into a mesh of triangles, e.g. by using the marching cubes algorithm. Algorithms have also been developed that work directly with volumetric data, for example to render realistic depictions of the way light is scattered and absorbed by clouds and smoke, and this type of volumetric rendering is used extensively in visual effects for movies. When rendering lower-resolution volumetric data without interpolation, the individual cubes or "voxels" may be visible, an effect sometimes used deliberately for game graphics.4.613.10, Ch. 14, 16.1 Photogrammetry and scanning. Photographs of real world objects can be incorporated into a rendered scene by using them as textures for 3D objects. Photos of a scene can also be stitched together to create panoramic images or environment maps, which allow the scene to be rendered very efficiently but only from a single viewpoint. Scanning of real objects and scenes using structured light or lidar produces point clouds consisting of the coordinates of millions of individual points in space, sometimes along with color information. These point clouds may either be rendered directly or converted into meshes before rendering. 
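As a small illustration of the level-set extraction mentioned above, the sketch below builds a synthetic voxel volume (a signed distance field of a sphere, invented for the example) and extracts a triangle mesh of its zero level set with the marching cubes algorithm. It assumes NumPy and scikit-image are available in the environment.

```python
# Extracting a level set from volumetric data with marching cubes
# (assumes numpy and scikit-image are installed).
import numpy as np
from skimage import measure

# Synthetic voxel volume: signed distance to a sphere of radius 0.6.
n = 64
coords = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
volume = np.sqrt(x**2 + y**2 + z**2) - 0.6     # negative inside the sphere

# Triangulate the zero level set (the sphere's surface).
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(f"extracted mesh: {len(verts)} vertices, {len(faces)} triangles")
```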
(Note: "point cloud" sometimes also refers to a minimalist rendering style that can be used for any 3D geometry, similar to wireframe rendering.)13.3, 13.91.3 Neural approximations and light fields. A more recent, experimental approach is description of scenes using radiance fields which define the color, intensity, and direction of incoming light at each point in space. (This is conceptually similar to, but not identical to, the light field recorded by a hologram.) For any useful resolution, the amount of data in a radiance field is so large that it is impractical to represent it directly as volumetric data, and an approximation function must be found. Neural networks are typically used to generate and evaluate these approximations, sometimes using video frames, or a collection of photographs of a scene taken at different angles, as "training data". Algorithms related to neural networks have recently been used to find approximations of a scene as 3D Gaussians. The resulting representation is similar to a point cloud, except that it uses fuzzy, partially-transparent blobs of varying dimensions and orientations instead of points. As with neural radiance fields, these approximations are often generated from photographs or video frames. Outputs. The output of rendering may be displayed immediately on the screen (many times a second, in the case of real-time rendering such as games) or saved in a raster graphics file format such as JPEG or PNG. High-end rendering applications commonly use the OpenEXR file format, which can represent finer gradations of colors and high dynamic range lighting, allowing tone mapping or other adjustments to be applied afterwards without loss of quality.Ch. 14, Ap. B Quickly rendered animations can be saved directly as video files, but for high-quality rendering, individual frames (which may be rendered by different computers in a cluster or "render farm" and may take hours or even days to render) are output as separate files and combined later into a video clip.1.5, 3.11, 8.11 The output of a renderer sometimes includes more than just RGB color values. For example, the spectrum can be sampled using multiple wavelengths of light, or additional information such as depth (distance from camera) or the material of each point in the image can be included (this data can be used during compositing or when generating texture maps for real-time rendering, or used to assist in removing noise from a path-traced image). Transparency information can be included, allowing rendered foreground objects to be composited with photographs or video. It is also sometimes useful to store the contributions of different lights, or of specular and diffuse lighting, as separate channels, so lighting can be adjusted after rendering. The OpenEXR format allows storing many channels of data in a single file.Ch. 14, Ap. B Techniques. Choosing how to render a 3D scene usually involves trade-offs between speed, memory usage, and realism (although realism is not always desired). The &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;algorithms developed over the years follow a loose progression, with more advanced methods becoming practical as computing power and memory capacity increased. Multiple techniques may be used for a single final image. An important distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. 
For simple scenes, object order is usually more efficient, as there are fewer objects than pixels.Ch. 4 The vector displays of the 1960s-1970s used deflection of an electron beam to draw line segments directly on the screen. Nowadays, vector graphics are rendered by rasterization algorithms that also support filled shapes. In principle, any 2D vector graphics renderer can be used to render 3D objects by first projecting them onto a 2D image plane. 93, 431, 505, 553 Rasterization: Adapts 2D rasterization algorithms so they can be used more efficiently for 3D rendering, handling hidden surface removal via scanline or z-buffer techniques. Different realistic or stylized effects can be obtained by coloring the pixels covered by the objects in different ways. Surfaces are typically divided into meshes of triangles before being rasterized. Rasterization is usually synonymous with "object order" rendering (as described above).560-561, 575-5908.5Ch. 9 Ray casting: Uses geometric formulas to compute the first object that a ray intersects.8 It can be used to implement "image order" rendering by casting a ray for each pixel, and finding a corresponding point in the scene. Ray casting is a fundamental operation used for both graphical and non-graphical purposes,6 e.g. determining whether a point is in shadow, or checking what an enemy can see in a game. Ray tracing: Simulates the bouncing paths of light caused by specular reflection and refraction, requiring a varying number of ray casting operations for each path. Advanced forms use Monte Carlo techniques to render effects such as area lights, depth of field, blurry reflections, and soft shadows, but computing global illumination is usually in the domain of path tracing.9-13 Radiosity: A finite element analysis approach that breaks surfaces in the scene into pieces, and estimates the amount of light that each piece receives from light sources, or indirectly from other surfaces. Once the irradiance of each surface is known, the scene can be rendered using rasterization or ray tracing.888-890, 1044-1045 Path tracing: Uses Monte Carlo integration with a simplified form of ray tracing, computing the average brightness of a sample of the possible paths that a photon could take when traveling from a light source to the camera (for some images, thousands of paths need to be sampled per pixel8). It was introduced as a statistically unbiased way to solve the rendering equation, giving ray tracing a rigorous mathematical foundation.11-13 Each of the above approaches has many variations, and there is some overlap. Path tracing may be considered either a distinct technique or a particular type of ray tracing.846, 1021 Note that the usage of terminology related to ray tracing and path tracing has changed significantly over time.7 Ray marching is a family of algorithms, used by ray casting, for finding intersections between a ray and a complex object, such as a volumetric dataset or a surface defined by a signed distance function. It is not, by itself, a rendering method, but it can be incorporated into ray tracing and path tracing, and is used by rasterization to implement screen-space reflection and other effects.13 A technique called photon mapping or "photon tracing" uses "forward ray tracing" (also called "particle tracing"), tracing paths of photons from a light source to an object, rather than backward from the camera.
The additional data collected by this process is used together with conventional backward ray tracing or path tracing.1037-1039 Rendering a scene using only forward ray tracing is impractical, even though it corresponds more closely to reality, because a huge number of photons would need to be simulated, only a tiny fraction of which actually hit the camera.7-9587 Real-time rendering, including video game graphics, typically uses rasterization, but increasingly combines it with ray tracing and path tracing.2 To enable realistic global illumination, real-time rendering often relies on pre-rendered ("baked") lighting for stationary objects. For moving objects, it may use a technique called "light probes", in which lighting is recorded by rendering omnidirectional views of the scene at chosen points in space (often points on a grid to allow easier interpolation). These are similar to environment maps, but typically use a very low resolution or an approximation such as spherical harmonics. (Note: Blender uses the term 'light probes' for a more general class of pre-recorded lighting data, including reflection maps.) Scanline rendering and rasterization. A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives. If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loop through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards. Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization. The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). 
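As an aside, the following toy sketch shows the per-vertex approach just described: each covered pixel is found with edge functions and its color is a blend of the three vertex colors, weighted by barycentric coordinates. It is plain Python with an ASCII preview, a sketch of what a GPU rasterizer does for one triangle rather than a usable renderer.

```python
# Toy rasterizer: fill one triangle and blend per-vertex colors with barycentric weights.
WIDTH, HEIGHT = 24, 16
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

verts = [(2.0, 2.0), (21.0, 5.0), (10.0, 14.0)]       # triangle vertices (x, y)
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]      # one color per vertex

def edge(a, b, p):
    """Signed edge function: proportional to the area of triangle (a, b, p)."""
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

area = edge(verts[0], verts[1], verts[2])             # twice the signed triangle area

for y in range(HEIGHT):
    for x in range(WIDTH):
        p = (x + 0.5, y + 0.5)                        # sample at the pixel center
        w0 = edge(verts[1], verts[2], p) / area
        w1 = edge(verts[2], verts[0], p) / area
        w2 = edge(verts[0], verts[1], p) / area
        if w0 >= 0 and w1 >= 0 and w2 >= 0:           # pixel center lies inside the triangle
            framebuffer[y][x] = tuple(
                int(w0 * c0 + w1 * c1 + w2 * c2)      # interpolate each color channel
                for c0, c1, c2 in zip(*colors)
            )

# Crude preview: '#' where the triangle covers the pixel.
print("\n".join("".join("#" if px != (0, 0, 0) else "." for px in row) for row in framebuffer))
```

A hardware rasterizer performs the same coverage test and interpolation, but for many pixels and triangles in parallel.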
This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect. Ray casting. In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the color value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged. Ray casting involves calculating the "view direction" (from camera position), and incrementally following along that "ray cast" through "solid 3d objects" in the scene, while accumulating the resulting value from each point in 3D space. This is related and similar to "ray tracing" except that the raycast is usually not "bounced" off surfaces (where the "ray tracing" indicates that it is tracing out the lights path including bounces). "Ray casting" implies that the light ray is following a straight path (which may include traveling through semi-transparent objects). The ray cast is a vector that can originate from the camera or from the scene endpoint ("back to front", or "front to back"). Sometimes the final light value is derived from a "transfer function" and sometimes it's used directly. Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two. Ray tracing and path tracing. Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, or Metropolis light transport, but also semi realistic methods are in use, like Whitted Style Ray Tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects. In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness. 
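As another aside, here is a minimal sketch of the ray casting procedure described above: one ray per pixel from the camera, a nearest-hit search against the scene, and a simple illumination factor at the hit point. The two-sphere scene, the light direction and the ASCII output are all invented for the example.

```python
# Minimal ray caster: one ray per pixel, nearest sphere hit, simple diffuse shading.
import math

spheres = [                      # (center, radius, brightness), an invented scene
    ((0.0, 0.0, 3.0), 1.0, 1.0),
    ((1.5, 0.5, 4.0), 0.7, 0.6),
]
light_dir = (0.577, 0.577, -0.577)   # unit vector pointing towards the light

def hit_sphere(origin, direction, center, radius):
    """Smallest positive ray parameter t of the intersection, or None (direction is unit length)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

WIDTH, HEIGHT = 40, 20
for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Direction of the ray through this pixel on an image plane at z = 1.
        x = (i + 0.5) / WIDTH * 2.0 - 1.0
        y = 1.0 - (j + 0.5) / HEIGHT * 2.0
        norm = math.sqrt(x * x + y * y + 1.0)
        d = (x / norm, y / norm, 1.0 / norm)
        nearest = None
        for center, radius, brightness in spheres:
            t = hit_sphere((0.0, 0.0, 0.0), d, center, radius)
            if t is not None and (nearest is None or t < nearest[0]):
                nearest = (t, center, radius, brightness)
        if nearest is None:
            row += " "                              # ray missed every object
        else:
            t, center, radius, brightness = nearest
            p = tuple(t * di for di in d)           # hit point
            n = tuple((pi - ci) / radius for pi, ci in zip(p, center))   # surface normal
            shade = max(0.0, sum(ni * li for ni, li in zip(n, light_dir))) * brightness
            row += ".:-=+*#%@"[min(8, int(shade * 9))]
    print(row)
```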
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel. In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments. As part of the approach known as physically based rendering, path tracing has become the dominant technique for rendering realistic scenes, including effects for movies. For example, the popular open source 3D software Blender uses path tracing in its Cycles renderer. Images produced using path tracing for global illumination are generally noisier than when using radiosity (the main competing algorithm), but radiosity can be difficult to apply to complex scenes and is prone to artifacts that arise from using a tessellated representation of irradiance.975-976, 1045 Path tracing's relative simplicity and its nature as a Monte Carlo method (sampling hundreds or thousands of paths per pixel) make it attractive to implement on a GPU, especially on recent GPUs that support ray tracing acceleration technology such as Nvidia's RTX and OptiX. Many techniques have been developed to denoise the output of path tracing, reducing the number of paths required to achieve acceptable quality, at the risk of losing some detail or introducing small-scale artifacts that are more objectionable than noise; neural networks are now widely used for this purpose. Advances in GPU technology have made real-time ray tracing possible in games, although it is currently almost always used in combination with rasterization.2 This enables visual effects that are difficult with only rasterization, including reflection from curved surfaces and interreflective objects,305 and shadows that are accurate over a wide range of distances and surface orientations.159-160 Ray tracing support is included in recent versions of the graphics APIs used by games, such as DirectX, Metal, and Vulkan. Neural rendering. Neural rendering is a rendering method using artificial neural networks. Neural rendering includes image-based rendering methods that are used to reconstruct 3D models from 2-dimensional images.One of these methods are photogrammetry, which is a method in which a collection of images from multiple angles of an object are turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings by notably Nvidia, Google and various other companies. Radiosity. Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is a way that shadows 'hug' the corners of rooms. The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it. The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. 
However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes. In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model. Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some digital artists used a technique referred to loosely as false radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity – or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture. Radiosity calculations are viewpoint independent which increases the computations involved, but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without seriously impacting the overall rendering time-per-frame. Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films. Sampling and filtering. One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel. If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing. Optimization. Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. 
It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed. For real-time, it is appropriate to simplify one or more common approximations, and tune to the exact parameters of the scenery in question, which is also tuned to the agreed parameters to get the most 'bang for the buck'. Academic core. The implementation of a realistic renderer always has some basic element of physical simulation or emulation – some computation which resembles or abstracts a real physical process. The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community. The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques. Rendering research is concerned with both the adaptation of scientific models and their efficient application. The rendering equation. This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation. formula_0 Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light being the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' – all the movement of light – in a scene. The bidirectional reflectance distribution function. The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows: formula_1 Light interaction is often approximated by the even simpler models: diffuse reflection and specular reflection, although both can ALSO be BRDFs. Geometric optics. Rendering is practically exclusively concerned with the particle aspect of light physics – known as geometrical optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model. Visual perception. Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate a wide range of light brightness and color, but current displays – movie screen, computer monitor, etc. – cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. 
This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties will not be noticeable. This related subject is tone mapping. Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods. Chronology of concepts. &lt;templatestyles src="Div col/styles.css"/&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Div col/styles.css"/&gt;
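To connect the rendering equation above with the Monte Carlo methods discussed under path tracing, the sketch below estimates the reflected-light integral for the simplest possible case: a Lambertian (constant) BRDF under uniform incoming radiance, sampled with uniform directions over the hemisphere. In this special case the exact answer is the albedo times the incoming radiance, so the estimate can be checked as the sample count grows; it is a toy of the estimator only, not a renderer.

```python
# Monte Carlo estimate of the reflected term of the rendering equation for a
# Lambertian BRDF (f_r = albedo / pi) under uniform incoming radiance L_i.
import math
import random

albedo = 0.8
L_i = 2.0                          # uniform incoming radiance from every direction
f_r = albedo / math.pi             # Lambertian BRDF is a constant
pdf = 1.0 / (2.0 * math.pi)        # density of uniform hemisphere sampling

def sample_hemisphere_uniform():
    """Uniform direction on the unit hemisphere around the normal (0, 0, 1)."""
    u1, u2 = random.random(), random.random()
    z = u1                                         # cos(theta) is uniform in [0, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

for n in (16, 256, 4096, 65536):
    total = 0.0
    for _ in range(n):
        wx, wy, wz = sample_hemisphere_uniform()
        cos_theta = wz                             # dot(w', n) with n = (0, 0, 1)
        total += L_i * f_r * cos_theta / pdf       # one-sample estimate of the integral
    print(f"{n:6d} samples: L_o estimate = {total / n:.4f} (exact {albedo * L_i:.4f})")
```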
[ { "math_id": 0, "text": "L_o(x, \\omega) = L_e(x, \\omega) + \\int_\\Omega L_i(x, \\omega') f_r(x, \\omega', \\omega) (\\omega' \\cdot n) \\, \\mathrm d \\omega'" }, { "math_id": 1, "text": "f_r(x, \\omega', \\omega) = \\frac{\\mathrm d L_r(x, \\omega)}{L_i(x, \\omega')(\\omega' \\cdot \\vec n) \\mathrm d \\omega'}" } ]
https://en.wikipedia.org/wiki?curid=6604
66044705
Rothalpy
Fluid mechanical property Rothalpy (or trothalpy) formula_0, a short name for rotational stagnation enthalpy, is a fluid mechanical property of importance in the study of flow within rotating systems. Concept. Consider an inertial frame of reference formula_1 and a rotating frame of reference formula_2 which share a common origin formula_3. Assume that frame formula_2 is rotating around a fixed axis with angular velocity formula_4. Now, taking the fluid velocity to be formula_5 and the fluid velocity relative to the rotating frame of reference to be formula_6, the rothalpy of a fluid point formula_7 can be defined as formula_8 where formula_9 and formula_10 and formula_11 is the stagnation enthalpy of fluid point formula_7 relative to the rotating frame of reference formula_2, which is given by formula_12 and is known as the relative stagnation enthalpy. Rothalpy can also be defined in terms of the absolute stagnation enthalpy: formula_13 where formula_14 is the tangential component of the fluid velocity formula_5. Applications. Rothalpy has applications in turbomachinery and the study of relative flows in rotating systems. One such application is that for steady, adiabatic and irreversible flow in a turbomachine, the value of rothalpy across a blade remains constant along a flow streamline: formula_15 so the Euler equation of turbomachinery can be written in terms of rothalpy. This form of the Euler work equation shows that, for rotating blade rows, the relative stagnation enthalpy is constant through the blades provided the blade speed is constant. In other words, formula_16, if the radius of a streamline passing through the blades stays the same. This result is important for analyzing turbomachinery flows in the relative frame of reference. Naming. The function formula_0 was first introduced by Wu (1952) and has acquired the widely used name rothalpy. This quantity is commonly called rothalpy, a compound word combining the terms rotation and enthalpy. However, its construction does not conform to the established rules for formation of new words in the English language, namely, that the roots of the new word originate from the same language. The word trothalpy satisfies this requirement, as trohos is the Greek root for wheel and enthalpy means to put heat in, whereas rotation is derived from the Latin rotare.
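The equivalence of the two forms of the definition above is easy to verify numerically. The sketch below uses invented values for one point on a streamline in a rotor and checks that the relative form, h0,rel minus u squared over two, equals the absolute form, h0 minus u times the tangential velocity component.

```python
# Numerical check of the two rothalpy expressions, with invented values
# for one point on a streamline in a rotor.
h = 300e3            # static enthalpy, J/kg (assumed)
V_theta = 180.0      # tangential component of the absolute velocity, m/s (assumed)
V_axial = 120.0      # axial component of the absolute velocity, m/s (assumed)
U = 250.0            # blade speed u = omega * r at this radius, m/s (assumed)

h0 = h + (V_theta**2 + V_axial**2) / 2.0       # absolute stagnation enthalpy

w_theta = V_theta - U                          # relative velocity w = V - u (u is tangential)
h0_rel = h + (w_theta**2 + V_axial**2) / 2.0   # relative stagnation enthalpy

I_from_relative = h0_rel - U**2 / 2.0          # I = h0,rel - u^2 / 2
I_from_absolute = h0 - U * V_theta             # I = h0 - u * V_theta

print(f"I (relative form) = {I_from_relative:,.1f} J/kg")
print(f"I (absolute form) = {I_from_absolute:,.1f} J/kg")
assert abs(I_from_relative - I_from_absolute) < 1e-6
```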
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "XYZ" }, { "math_id": 2, "text": "xyz" }, { "math_id": 3, "text": "O" }, { "math_id": 4, "text": "\\mathbf {\\omega}" }, { "math_id": 5, "text": "\\mathbf {V}" }, { "math_id": 6, "text": "\\mathbf {w}=\\mathbf {V}-\\mathbf {u}" }, { "math_id": 7, "text": "P" }, { "math_id": 8, "text": "I=h_{0,rel}-\\frac{u^2}{2}" }, { "math_id": 9, "text": "\\mathbf {u}=\\mathbf {\\omega}\\times\\mathbf {r}" }, { "math_id": 10, "text": "\\mathbf {r}=\\vec{OP}" }, { "math_id": 11, "text": "h_{0,rel}" }, { "math_id": 12, "text": "h_{0,rel}=h+\\frac{w^2}{2}" }, { "math_id": 13, "text": "I=h_0-uV_\\theta" }, { "math_id": 14, "text": "V_\\theta" }, { "math_id": 15, "text": "I=const." }, { "math_id": 16, "text": "h_{0,rel}=const." } ]
https://en.wikipedia.org/wiki?curid=66044705
66051495
Prostaglandin F synthase
Monomeric wild-type protein In enzymology, a prostaglandin-F synthase (PGFS; EC 1.1.1.188) is an enzyme that catalyzes the chemical reaction: (5"Z",13"E")-(15"S")-9alpha,11alpha,15-trihydroxyprosta-5,13-dienoate + NADP+ formula_0 (5"Z",13"E")-(15"S")-9alpha,15-dihydroxy-11-oxoprosta-5,13-dienoate + NADPH + H+ Thus, the two products of this enzyme are 9α,11β–PGF2 and NADP+, whereas its three substrates are Prostaglandin D2, NADPH, and H+. PGFS is a monomeric wild-type protein that was first purified from bovine lung (PDB ID: 2F38). This enzyme belongs to the family of aldo-keto reductase (AKR) based on its high substrate specificity, its high molecular weight (38055.48 Da) and amino acid sequence. In addition, it is categorized as C3 (AKR1C3) because it is an isoform of 3α-hydroxysteroid dehydrogenase. The function of PGFS is to catalyze the reduction of aldehydes and ketones to their corresponding alcohols. In humans, these reactions take place mostly in the lungs and in the liver. More specifically, PGFS catalyzes the reduction of PGD2 to 9α,11β–PGF2 and PGH2 to PGF2α by using NADPH as cofactor. Nomenclature. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (5Z,13E)-(15S)-9alpha,11alpha,15-trihydroxyprosta-5,13-dienoate:NADP+ 11-oxidoreductase. Other names in common use include prostaglandin-D2 11-reductase, reductase, 15-hydroxy-11-oxoprostaglandin, PGD2 11-ketoreductase, PGF2α synthetase, prostaglandin 11-ketoreductase, prostaglandin D2-ketoreductase, prostaglandin F synthase, prostaglandin F synthetase, synthetase, prostaglandin F2α, prostaglandin-D2 11-reductase, PGF synthetase, NADPH-dependent prostaglandin D2 11-keto reductase, and prostaglandin 11-keto reductase. This enzyme participates in arachidonic acid metabolism. Structure. As of late 2007[ [update]], 7 structures have been solved for this class of enzymes, with PDB accession codes 1RY0, 1RY8, 1VBJ, 1XF0, 1ZQ5, 2F38, and 2FGB. The primary structure of prostaglandin F synthase consists of 323 amino acid residues. The secondary structure consists of 17 α-helices which contain 130 residues and 18 β-strands which contain 55 residues as well as many random coils. The tertiary structure is a single subunit. The active site of the enzyme is referred to as an (α/β)8 barrel because it consists of 8 α-helices and 8 β-strands. More specifically, the eight α-helices surround the eight β-strands which form the cylindrical core of the active site. In addition, the active site of the enzyme contains also three random coils which help to connect the helices and strands together. The size of the active site of the enzyme is large enough not only to bind NADPH cofactor but also to bind the substrates PGD2 or PGH2. Reaction. In order for the PGFS enzyme to catalyze the reduction of the substrates PGH2 or PGD2, the cofactor NADPH must be present in the active site. This cofactor is present deep within the cavity of the enzyme and forms a hydrogen bond with it, whereas the substrate is located closer to the mouth of the cavity which limits its interaction with PGFS. The rate determining step of the catalysis is the binding of NADPH cofactor in the active site of the enzyme. This is because the binding of NADPH occurs before the binding of the substrate. NADPH is an important cofactor because it is involved in the hydride transfer which is necessary for the reduction to take place. 
More specifically, in order for the hydride transfer to occur, the substrate (PGD2) has to bind to the active site of the enzyme PGFS. The substrate binds to the active site through hydrogen bonding between the carbonyl group of PGD2 and the hydroxyl group of tyrosine (Y55) as well as one of the imidazole nitrogen of histidine (H117). The hydride shift from NADPH reduces the carbonyl group of PGD2 and forms a new sp3 hydroxyl group (9α,11β–PGF2). The protonation of the carbonyl oxygen is facilitated at low pH when histidine is used and at high pH when tyrosine is used for hydrogen bonding with the substrate. On the one hand, histidine is an ideal proton donor at low pH because of its pKa value (6.00), which means that it is protonated at a pH below 6.00. On the other hand, tyrosine is an ideal proton donor at higher pH because of its pKa value (10.1). The type of amino acid that is used for protonation depends on the substrate. For example, reduction of PGD2 in the human body occurs at a pH range of 6-9, which makes histidine an ideal proton donor. The hydride that is transferred to the carbonyl oxygen of PGD2 causes weakening of the hydrogen bond between the substrate and the enzyme. This has as a result the cleavage of the product (9α,11β–PGF2) from the active site of the enzyme. Use. In general, prostaglandins are molecules that are used for inflammation, muscle contraction and blood clotting. Prostaglandin F synthase (PGFS) is very important enzyme because it catalyzes the formation of 9α,11β–PGF2 and PGF2α which are critical for the contraction of bronchial, vascular and arterial smooth muscle. Also, this enzyme can be used in cancer research. Recent studies have shown that there is a correlation between high levels of PGFS in gastrointestinal tumors and the effectiveness of non-steroidal anti-inflammatory drugs (NSAID). The inhibition of PGFS by NSAID could turn out to be a very important medicinal field in the development of anti-cancer medication.     Inhibition. Prostaglandin F synthase can be inhibited not only by NSAIDs such as indometacin and suprofen but also by a molecule known as bimatoprost (BMP). BMP, an analogue of PGD2 is an ocular hypotensive agent that binds to the active site of the PGFS enzyme. This means that it inhibits the action of PGFS to catalyze the conversion of PGD2 to 9α,11β–PGF2 and PGH2 to PGF2α because it inhibits the substrate to bind to the active site of the enzyme. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=66051495
66053979
Gate capacitance
Capacitance of the gate terminal of a field-effect transistor In electronics, gate capacitance is the capacitance of the gate terminal of a field-effect transistor (FET). It can be expressed as the absolute capacitance of the gate of a transistor, or as the capacitance per unit area of an integrated circuit technology, or as the capacitance per unit width of minimum-length transistors in a technology. In generations of approximately Dennard scaling of metal-oxide-semiconductor FETs (MOSFETs), the capacitance per unit area has increased inversely with device dimensions. Since the gate area has gone down by the square of device dimensions, the gate capacitance of a transistor has gone down in direct proportion with device dimensions. With Dennard scaling, the capacitance per unit of gate width has remained approximately constant; this measurement can include gate–source and gate–drain overlap capacitances. Other scalings are not uncommon; the voltages and gate oxide thicknesses have not always decreased as rapidly as device dimensions, so the gate capacitance per unit area has not increased as fast, and the capacitance per transistor width has sometimes decreased over generations. The intrinsic gate capacitance (that is, ignoring fringing fields and other details) for a silicon-dioxide-insulated gate can be calculated from thin-oxide capacitance per unit area as: formula_0 where A_G is the gate area and the thin-oxide capacitance per unit area is formula_1 with ε_SiO2 the relative permittivity of silicon dioxide, ε_0 the permittivity of free space, and t_ox the gate oxide thickness. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
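As a numerical illustration of the formula above, the sketch below computes the oxide capacitance per unit area and the resulting intrinsic gate capacitance for hypothetical device dimensions (the oxide thickness, gate length and gate width are assumptions chosen only for the example).

```python
# Intrinsic gate capacitance from the thin-oxide formula, for assumed dimensions.
EPS_0 = 8.854e-12        # vacuum permittivity, F/m
EPS_R_SIO2 = 3.9         # relative permittivity of SiO2

t_ox = 2e-9              # assumed gate oxide thickness, m (2 nm)
gate_length = 100e-9     # assumed gate length, m
gate_width = 1e-6        # assumed gate width, m

c_ox = EPS_R_SIO2 * EPS_0 / t_ox        # capacitance per unit area, F/m^2
gate_area = gate_length * gate_width    # m^2
c_gate = c_ox * gate_area               # intrinsic gate capacitance, F

print(f"C_ox   = {c_ox * 1e3:.1f} fF/um^2")                 # 1 F/m^2 = 1000 fF/um^2
print(f"C_gate = {c_gate * 1e15:.2f} fF for this geometry")
```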
[ { "math_id": 0, "text": " C_\\mathrm{G} = A_\\mathrm{G}C_\\mathrm{ox}" }, { "math_id": 1, "text": " C_\\mathrm{ox} = \\frac{\\varepsilon_\\mathrm{SiO_2} \\varepsilon_0}{t_\\mathrm{ox}} " } ]
https://en.wikipedia.org/wiki?curid=66053979
660651
Laws of Form
1969 non-fiction book by G. Spencer-Brown Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary between mathematics and philosophy. "LoF" describes three distinct logical systems: "Boundary algebra" is Meguire's (2011) term for the union of the primary algebra and the primary arithmetic. "Laws of Form" sometimes loosely refers to the "primary algebra" as well as to "LoF". The book. The preface states that the work was first explored in 1959, and Spencer Brown cites Bertrand Russell as being supportive of his endeavour. He also thanks J. C. P. Miller of University College London for helping with the proof reading and offering other guidance. In 1963 Spencer Brown was invited by Harry Frost, staff lecturer in the physical sciences at the department of Extra-Mural Studies of the University of London, to deliver a course on the mathematics of logic. "LoF" emerged from work in electronic engineering its author did around 1960, and from subsequent lectures on mathematical logic he gave under the auspices of the University of London's extension program. "LoF" has appeared in several editions. The second series of editions appeared in 1972 with the "Preface to the First American Edition", which emphasised the use of self-referential paradoxes, and the most recent being a 1997 German translation. "LoF" has never gone out of print. "LoF"'s mystical and declamatory prose and its love of paradox make it a challenging read for all. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. "LoF" also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead. The work has had curious effects on some classes of its readership; for example, on obscure grounds, it has been claimed that the entire book is written in an operational way, giving instructions to the reader instead of telling them what "is", and that in accordance with G. Spencer-Brown's interest in paradoxes, the only sentence that makes a statement that something "is", is the statement which says no such statements are used in this book. Furthermore, the claim asserts that except for this one sentence the book can be seen as an example of E-Prime. What prompted such a claim, is obscure, either in terms of incentive, logical merit, or as a matter of fact, because the book routinely and naturally uses the verb "to be" throughout, and in all its grammatical forms, as may be seen both in the original and in quotes shown below. Reception. Ostensibly a work of formal mathematics and philosophy, "LoF" became something of a cult classic: it was praised by Heinz von Foerster when he reviewed it for the "Whole Earth Catalog". Those who agree point to "LoF" as embodying an enigmatic "mathematics of consciousness", its algebraic symbolism capturing an (perhaps even "the") implicit root of cognition: the ability to "distinguish". "LoF" argues that primary algebra reveals striking connections among logic, Boolean algebra, and arithmetic, and the philosophy of language and mind. Stafford Beer wrote in a review for "Nature", "When one thinks of all that Russell went through sixty years ago, to write the "Principia", and all we his readers underwent in wrestling with those three vast volumes, it is almost sad". Banaschewski (1977) argues that the primary algebra is nothing but new notation for Boolean algebra. Indeed, the two-element Boolean algebra 2 can be seen as the intended interpretation of the primary algebra. 
Yet the notation of the primary algebra: Moreover, the syntax of the primary algebra can be extended to formal systems other than 2 and sentential logic, resulting in boundary mathematics (see below). "LoF" has influenced, among others, Heinz von Foerster, Louis Kauffman, Niklas Luhmann, Humberto Maturana, Francisco Varela and William Bricken. Some of these authors have modified the primary algebra in a variety of interesting ways. "LoF" claimed that certain well-known mathematical conjectures of very long standing, such as the four color theorem, Fermat's Last Theorem, and the Goldbach conjecture, are provable using extensions of the primary algebra. Spencer-Brown eventually circulated a purported proof of the four color theorem, but it met with skepticism. The form (Chapter 1). The symbol: Also called the "mark" or "cross", is the essential feature of the Laws of Form. In Spencer-Brown's inimitable and enigmatic fashion, the Mark symbolizes the root of cognition, i.e., the dualistic Mark indicates the capability of differentiating a "this" from "everything else "but" this". In "LoF", a Cross denotes the drawing of a "distinction", and can be thought of as signifying the following, all at once: All three ways imply an action on the part of the cognitive entity (e.g., person) making the distinction. As "LoF" puts it: "The first command: can well be expressed in such ways as: Or: The counterpoint to the Marked state is the Unmarked state, which is simply nothing, the void, or the un-expressable infinite represented by a blank space. It is simply the absence of a Cross. No distinction has been made and nothing has been crossed. The Marked state and the void are the two primitive values of the Laws of Form. The Cross can be seen as denoting the distinction between two states, one "considered as a symbol" and another not so considered. From this fact arises a curious resonance with some theories of consciousness and language. Paradoxically, the Form is at once Observer and Observed, and is also the creative act of making an observation. "LoF" (excluding back matter) closes with the words: ...the first distinction, the Mark and the observer are not only interchangeable, but, in the form, identical. C. S. Peirce came to a related insight in the 1890s; see . The primary arithmetic (Chapter 4). The syntax of the primary arithmetic goes as follows. There are just two atomic expressions: There are two inductive rules: The semantics of the primary arithmetic are perhaps nothing more than the sole explicit definition in "LoF": "Distinction is perfect continence". Let the "unmarked state" be a synonym for the void. Let an empty Cross denote the "marked state". To cross is to move from one value, the unmarked or marked state, to the other. We can now state the "arithmetical" axioms A1 and A2, which ground the primary arithmetic (and hence all of the Laws of Form): "A1. The law of Calling". Calling twice from a state is indistinguishable from calling once. To make a distinction twice has the same effect as making it once. For example, saying "Let there be light" and then saying "Let there be light" again, is the same as saying it once. Formally: formula_0 "A2. The law of Crossing". After crossing from the unmarked to the marked state, crossing again ("recrossing") starting from the marked state returns one to the unmarked state. Hence recrossing annuls crossing. Formally: formula_0 In both A1 and A2, the expression to the right of '=' has fewer symbols than the expression to the left of '='. 
This suggests that every primary arithmetic expression can, by repeated application of A1 and A2, be "simplified" to one of two states: the marked or the unmarked state. This is indeed the case, and the result is the expression's "simplification". The two fundamental metatheorems of the primary arithmetic state that: Thus the relation of logical equivalence partitions all primary arithmetic expressions into two equivalence classes: those that simplify to the Cross, and those that simplify to the void. A1 and A2 have loose analogs in the properties of series and parallel electrical circuits, and in other ways of diagramming processes, including flowcharting. A1 corresponds to a parallel connection and A2 to a series connection, with the understanding that making a distinction corresponds to changing how two points in a circuit are connected, and not simply to adding wiring. The primary arithmetic is analogous to the following formal languages from mathematics and computer science: The phrase "calculus of indications" in "LoF" is a synonym for "primary arithmetic". The notion of canon. A concept peculiar to "LoF" is that of "canon". While "LoF" does not formally define canon, the following two excerpts from the Notes to chpt. 2 are apt: The more important structures of command are sometimes called "canons". They are the ways in which the guiding injunctions appear to group themselves in constellations, and are thus by no means independent of each other. A canon bears the distinction of being outside (i.e., describing) the system under construction, but a command to construct (e.g., 'draw a distinction'), even though it may be of central importance, is not a canon. A canon is an order, or set of orders, to permit or allow, but not to construct or create. ...the primary form of mathematical communication is not description but injunction... Music is a similar art form, the composer does not even attempt to describe the set of sounds he has in mind, much less the set of feelings occasioned through them, but writes down a set of commands which, if they are obeyed by the performer, can result in a reproduction, to the listener, of the composer's original experience. These excerpts relate to the distinction in metalogic between the object language, the formal language of the logical system under discussion, and the metalanguage, a language (often a natural language) distinct from the object language, employed to exposit and discuss the object language. The first quote seems to assert that the "canons" are part of the metalanguage. The second quote seems to assert that statements in the object language are essentially commands addressed to the reader by the author. Neither assertion holds in standard metalogic. The primary algebra (Chapter 6). Syntax. Given any valid primary arithmetic expression, insert into one or more locations any number of Latin letters bearing optional numerical subscripts; the result is a primary algebra formula. Letters so employed in mathematics and logic are called variables. A primary algebra variable indicates a location where one can write the primitive value or its complement . Multiple instances of the same variable denote multiple locations of the same primitive value. Rules governing logical equivalence. The sign '=' may link two logically equivalent expressions; the result is an equation. By "logically equivalent" is meant that the two expressions have the same simplification. 
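Since both the metatheorems of the primary arithmetic and this notion of logical equivalence rest on simplification, a short executable sketch may make the procedure concrete. It is not taken from "LoF": rendering the Cross as a pair of parentheses is an assumption forced by plain text (the book's two-dimensional notation cannot be typed here), juxtaposition becomes string concatenation, and evaluation applies A1 and A2 recursively.

```python
def parse(expr: str):
    """Split a balanced-parenthesis string into the contents of its top-level Crosses."""
    children, depth, start = [], 0, 0
    for i, ch in enumerate(expr):
        if ch == "(":
            if depth == 0:
                start = i
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                raise ValueError("unbalanced expression")
            if depth == 0:
                children.append(expr[start + 1:i])  # contents of one Cross
    if depth != 0:
        raise ValueError("unbalanced expression")
    return children

def is_marked(expr: str) -> bool:
    """Evaluate an expression to True (marked) or False (unmarked).

    A space is marked iff at least one of its Crosses is marked (A1 lets repeated
    marks condense to one); a Cross inverts the value of its contents (A2).
    """
    return any(not is_marked(contents) for contents in parse(expr))

def simplify(expr: str) -> str:
    return "()" if is_marked(expr) else ""

# Every expression simplifies to exactly one of the two primitive values:
assert simplify("(())") == ""      # A2: crossing and then recrossing annuls
assert simplify("()()") == "()"    # A1: calling twice is the same as calling once
assert simplify("((())())") == ""  # a nested example
```

The assertions at the end check that each sample expression simplifies to exactly one of the two primitive values, as the metatheorems above require.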
Logical equivalence is an equivalence relation over the set of primary algebra formulas, governed by the rules R1 and R2. Let "C" and "D" be formulae each containing at least one instance of the subformula "A": R2 is employed very frequently in "primary algebra" demonstrations (see below), almost always silently. These rules are routinely invoked in logic and most of mathematics, nearly always unconsciously. The "primary algebra" consists of equations, i.e., pairs of formulae linked by an infix operator '='. R1 and R2 enable transforming one equation into another. Hence the "primary algebra" is an "equational" formal system, like the many algebraic structures, including Boolean algebra, that are varieties. Equational logic was common before "Principia Mathematica" (e.g., Peirce,1,2,3 Johnson 1892), and has present-day advocates (Gries and Schneider 1993). Conventional mathematical logic consists of tautological formulae, signalled by a prefixed turnstile. To denote that the "primary algebra" formula "A" is a tautology, simply write ""A" = ". If one replaces '=' in R1 and R2 with the biconditional, the resulting rules hold in conventional logic. However, conventional logic relies mainly on the rule modus ponens; thus conventional logic is "ponential". The equational-ponential dichotomy distills much of what distinguishes mathematical logic from the rest of mathematics. Initials. An "initial" is a "primary algebra" equation verifiable by a decision procedure and as such is "not" an axiom. "LoF" lays down the initials: The absence of anything to the right of the "=" above, is deliberate. J2 is the familiar distributive law of sentential logic and Boolean algebra. Another set of initials, friendlier to calculations, is: It is thanks to C2 that the "primary algebra" is a lattice. By virtue of J1a, it is a complemented lattice whose upper bound is . By J0, is the corresponding lower bound and identity element. J0 is also an algebraic version of A2 and makes clear the sense in which aliases with the blank page. T13 in "LoF" generalizes C2 as follows. Any "primary algebra" (or sentential logic) formula "B" can be viewed as an ordered tree with "branches". Then: T13: A subformula "A" can be copied at will into any depth of "B" greater than that of "A", as long as "A" and its copy are in the same branch of "B". Also, given multiple instances of "A" in the same branch of "B", all instances but the shallowest are redundant. While a proof of T13 would require induction, the intuition underlying it should be clear. C2 or its equivalent is named: Perhaps the first instance of an axiom or rule with the power of C2 was the "Rule of (De)Iteration", combining T13 and "AA=A", of C. S. Peirce's existential graphs. "LoF" asserts that concatenation can be read as commuting and associating by default and hence need not be explicitly assumed or demonstrated. (Peirce made a similar assertion about his existential graphs.) Let a period be a temporary notation to establish grouping. That concatenation commutes and associates may then be demonstrated from the: Having demonstrated associativity, the period can be discarded. The initials in Meguire (2011) are "AC.D"="CD.A", called B1; B2, J0 above; B3, J1a above; and B4, C2. By design, these initials are very similar to the axioms for an abelian group, G1-G3 below. Proof theory. 
The "primary algebra" contains three kinds of proved assertions: The distinction between consequence and theorem holds for all formal systems, including mathematics and logic, but is usually not made explicit. A demonstration or decision procedure can be carried out and verified by computer. The proof of a theorem cannot be. Let "A" and "B" be "primary algebra" formulas. A demonstration of "A"="B" may proceed in either of two ways: Once "A"="B" has been demonstrated, "A"="B" can be invoked to justify steps in subsequent demonstrations. "primary algebra" demonstrations and calculations often require no more than J1a, J2, C2, and the consequences (C3 in "LoF"), (C1), and "AA"="A" (C5). The consequence , C7 in "LoF", enables an algorithm, sketched in "LoF"'s proof of T14, that transforms an arbitrary "primary algebra" formula to an equivalent formula whose depth does not exceed two. The result is a "normal form", the "primary algebra" analog of the conjunctive normal form. "LoF" (T14–15) proves the "primary algebra" analog of the well-known Boolean algebra theorem that every formula has a normal form. Let "A" be a subformula of some formula "B". When paired with C3, J1a can be viewed as the closure condition for calculations: "B" is a tautology if and only if "A" and ("A") both appear in depth 0 of "B". A related condition appears in some versions of natural deduction. A demonstration by calculation is often little more than: The last step of a calculation always invokes J1a. "LoF" includes elegant new proofs of the following standard metatheory: That sentential logic is complete is taught in every first university course in mathematical logic. But university courses in Boolean algebra seldom mention the completeness of 2. Interpretations. If the Marked and Unmarked states are read as the Boolean values 1 and 0 (or True and False), the "primary algebra" interprets 2 (or sentential logic). "LoF" shows how the "primary algebra" can interpret the syllogism. Each of these interpretations is discussed in a subsection below. Extending the "primary algebra" so that it could interpret standard first-order logic has yet to be done, but Peirce's "beta" existential graphs suggest that this extension is feasible. Two-element Boolean algebra 2. The "primary algebra" is an elegant minimalist notation for the two-element Boolean algebra 2. Let: If join (meet) interprets "AC", then meet (join) interprets formula_1. Hence the "primary algebra" and 2 are isomorphic but for one detail: "primary algebra" complementation can be nullary, in which case it denotes a primitive value. Modulo this detail, 2 is a model of the primary algebra. The primary arithmetic suggests the following arithmetic axiomatization of 2: 1+1=1+0=0+1=1=~0, and 0+0=0=~1. The set formula_2 formula_3 formula_4 is the Boolean domain or "carrier". In the language of universal algebra, the "primary algebra" is the algebraic structure formula_5 of type formula_6. The expressive adequacy of the Sheffer stroke points to the "primary algebra" also being a formula_7 algebra of type formula_8. In both cases, the identities are J1a, J0, C2, and "ACD=CDA". Since the "primary algebra" and 2 are isomorphic, 2 can be seen as a formula_9 algebra of type formula_6. This description of 2 is simpler than the conventional one, namely an formula_10 algebra of type formula_11. The two possible interpretations are dual to each other in the Boolean sense. (In Boolean algebra, exchanging AND ↔ OR and 1 ↔ 0 throughout an equation yields an equally valid equation.) 
The identities remain invariant regardless of which interpretation is chosen, so the transformations or modes of calculation remain the same; only the interpretation of each form would be different. Example: J1a is the Cross over "A", juxtaposed with "A", equalling the empty Cross. Interpreting juxtaposition as OR and the empty Cross as 1, this translates to formula_12 which is true. Interpreting juxtaposition as AND and the empty Cross as 0, this translates to formula_13 which is true as well (and the dual of formula_12). Operator-operand duality. The marked state, the empty Cross, is both an operator (e.g., the complement) and operand (e.g., the value 1). This can be summarized neatly by defining two functions formula_14 and formula_15 for the marked and unmarked state, respectively: let formula_16 and formula_17, where formula_18 is a (possibly empty) set of Boolean values. This reveals that formula_19 is either the value 0 or the OR operator, while formula_20 is either the value 1 or the NOR operator, depending on whether formula_18 is the empty set or not. As noted above, there is a dual form of these functions exchanging AND ↔ OR and 1 ↔ 0. Sentential logic. Let the blank page denote False, and let a Cross be read as Not. Then the primary arithmetic has the following sentential reading: the blank expression reads as False; the empty Cross reads as True, i.e., Not False; and the twice-crossed Cross reads as Not True, i.e., False. The "primary algebra" interprets sentential logic as follows. A letter represents any given sentential expression. Thus: the Cross over "A" interprets Not A; the juxtaposition "AB" interprets A Or B; the Cross over "A", juxtaposed with "B", interprets Not A Or B, or equivalently If A Then B; and the Cross over (the Cross over "A" juxtaposed with the Cross over "B") interprets Not (Not A Or Not B), or Not (If A Then Not B), or A And B. Thus any expression in sentential logic has a "primary algebra" translation. Equivalently, the "primary algebra" interprets sentential logic. Given an assignment of every variable to the Marked or Unmarked states, this "primary algebra" translation reduces to a primary arithmetic expression, which can be simplified. Repeating this exercise for all possible assignments of the two primitive values to each variable reveals whether the original expression is tautological or satisfiable. This is an example of a decision procedure, one more or less in the spirit of conventional truth tables. Given some "primary algebra" formula containing "N" variables, this decision procedure requires simplifying 2^"N" primary arithmetic formulae, one for each possible assignment. For a less tedious decision procedure more in the spirit of Quine's "truth value analysis", see Meguire (2003). Schwartz (1981) proved that the "primary algebra" is equivalent — syntactically, semantically, and proof theoretically — with the classical propositional calculus. Likewise, it can be shown that the "primary algebra" is syntactically equivalent with expressions built up in the usual way from the classical truth values true and false, the logical connectives NOT, OR, and AND, and parentheses. Interpreting the Unmarked State as False is wholly arbitrary; that state can equally well be read as True. All that is required is that the interpretation of concatenation change from OR to AND. IF A THEN B now translates as the Cross over ("A" juxtaposed with the Cross over "B") instead of the Cross over "A" juxtaposed with "B". More generally, the "primary algebra" is "self-dual", meaning that any "primary algebra" formula has two sentential or Boolean readings, each the dual of the other. Another consequence of self-duality is the irrelevance of De Morgan's laws; those laws are built into the syntax of the "primary algebra" from the outset. The true nature of the distinction between the "primary algebra" on the one hand, and 2 and sentential logic on the other, now emerges. In the latter formalisms, complementation/negation operating on "nothing" is not well-formed.
But an empty Cross is a well-formed "primary algebra" expression, denoting the Marked state, a primitive value. Hence a nonempty Cross is an operator, while an empty Cross is an operand because it denotes a primitive value. Thus the "primary algebra" reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action, the making of a distinction. Syllogisms. Appendix 2 of "LoF" shows how to translate traditional syllogisms and sorites into the "primary algebra". A valid syllogism is simply one whose "primary algebra" translation simplifies to an empty Cross. Let "A"* denote a "literal", i.e., either "A" or formula_21, indifferently. Then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of Barbara whose "primary algebra" equivalent is formula_22. These 24 possible permutations include the 19 syllogistic forms deemed valid in Aristotelian and medieval logic. This "primary algebra" translation of syllogistic logic also suggests that the "primary algebra" can interpret monadic and term logic, and that the "primary algebra" has affinities to the Boolean term schemata of Quine (1982: Part II). An example of calculation. The following calculation of Leibniz's nontrivial "Praeclarum Theorema" exemplifies the demonstrative power of the "primary algebra". Let C1 be formula_23 ="A", C2 be formula_24, C3 be formula_25, J1a be formula_26, and let OI mean that variables and subformulae have been reordered in a way that commutativity and associativity permit. Relation to magmas. The "primary algebra" embodies a point noted by Huntington in 1933: Boolean algebra requires, in addition to one unary operation, one, and not two, binary operations. Hence the seldom-noted fact that Boolean algebras are magmas. (Magmas were called groupoids until the latter term was appropriated by category theory.) To see this, note that the "primary algebra" is a commutative: Groups also require a unary operation, called inverse, the group counterpart of Boolean complementation. Let denote the inverse of "a". Let denote the group identity element. Then groups and the "primary algebra" have the same signatures, namely they are both formula_27 algebras of type 〈2,1,0〉. Hence the "primary algebra" is a boundary algebra. The axioms for an abelian group, in boundary notation, are: From G1 and G2, the commutativity and associativity of concatenation may be derived, as above. Note that G3 and J1a are identical. G2 and J0 would be identical if    =    replaced A2. This is the defining arithmetical identity of group theory, in boundary notation. The "primary algebra" differs from an abelian group in two ways: Both A2 and C2 follow from "B"'s being an ordered set. Equations of the second degree (Chapter 11). Chapter 11 of "LoF" introduces "equations of the second degree", composed of recursive formulae that can be seen as having "infinite" depth. Some recursive formulae simplify to the marked or unmarked state. Others "oscillate" indefinitely between the two states depending on whether a given depth is even or odd. Specifically, certain recursive formulae can be interpreted as oscillating between true and false over successive intervals of time, in which case a formula is deemed to have an "imaginary" truth value. Thus the flow of time may be introduced into the "primary algebra". 
Turney (1986) shows how these recursive formulae can be interpreted via Alonzo Church's Restricted Recursive Arithmetic (RRA). Church introduced RRA in 1955 as an axiomatic formalization of finite automata. Turney (1986) presents a general method for translating equations of the second degree into Church's RRA, illustrating his method using the formulae E1, E2, and E4 in chapter 11 of "LoF". This translation into RRA sheds light on the names Spencer-Brown gave to E1 and E4, namely "memory" and "counter". RRA thus formalizes and clarifies "LoF"'s notion of an imaginary truth value. Related work. Gottfried Leibniz, in memoranda not published before the late 19th and early 20th centuries, invented Boolean logic. His notation was isomorphic to that of "LoF": concatenation read as conjunction, and "non-("X")" read as the complement of "X". Recognition of Leibniz's pioneering role in algebraic logic was foreshadowed by Lewis (1918) and Rescher (1954). But a full appreciation of Leibniz's accomplishments had to await the work of Wolfgang Lenzen, published in the 1980s and reviewed in Lenzen (2004). Charles Sanders Peirce (1839–1914) anticipated the "primary algebra" in three veins of work: Ironically, "LoF" cites vol. 4 of Peirce's "Collected Papers," the source for the formalisms in (2) and (3) above. (1)-(3) were virtually unknown at the time when (1960s) and in the place where (UK) "LoF" was written. Peirce's semiotics, about which "LoF" is silent, may yet shed light on the philosophical aspects of "LoF". Kauffman (2001) discusses another notation similar to that of "LoF", that of a 1917 article by Jean Nicod, who was a disciple of Bertrand Russell's. The above formalisms are, like the "primary algebra", all instances of "boundary mathematics", i.e., mathematics whose syntax is limited to letters and brackets (enclosing devices). A minimalist syntax of this nature is a "boundary notation". Boundary notation is free of infix operators, prefix, or postfix operator symbols. The very well known curly braces ('{', '}') of set theory can be seen as a boundary notation. The work of Leibniz, Peirce, and Nicod is innocent of metatheory, as they wrote before Emil Post's landmark 1920 paper (which "LoF" cites), proving that sentential logic is complete, and before Hilbert and Łukasiewicz showed how to prove axiom independence using models. Craig (1979) argued that the world, and how humans perceive and interact with that world, has a rich Boolean structure. Craig was an orthodox logician and an authority on algebraic logic. Second-generation cognitive science emerged in the 1970s, after "LoF" was written. On cognitive science and its relevance to Boolean algebra, logic, and set theory, see Lakoff (1987) (see index entries under "Image schema examples: container") and Lakoff and Núñez (2001). Neither book cites "LoF". The biologists and cognitive scientists Humberto Maturana and his student Francisco Varela both discuss "LoF" in their writings, which identify "distinction" as the fundamental cognitive act. The Berkeley psychologist and cognitive scientist Eleanor Rosch has written extensively on the closely related notion of categorization. Other formal systems with possible affinities to the primary algebra include: The primary arithmetic and algebra are a minimalist formalism for sentential logic and Boolean algebra. Other minimalist formalisms having the power of set theory include: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
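As a companion to the 2^"N" decision procedure and the "Praeclarum Theorema" calculation discussed above, the following sketch works directly in the two-element Boolean interpretation (rather than in the Cross notation, which cannot be typeset here) and brute-forces every assignment to confirm that Leibniz's theorem is a tautology. It is an illustration, not material from "LoF".

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

def praeclarum(a: bool, b: bool, c: bool, d: bool) -> bool:
    # ((A implies B) and (C implies D)) implies ((A and C) implies (B and D))
    return implies(implies(a, b) and implies(c, d), implies(a and c, b and d))

# The decision procedure: evaluate the formula under every assignment of the
# two primitive values to its four variables (2**4 = 16 cases).
assignments = list(product([False, True], repeat=4))
assert all(praeclarum(*v) for v in assignments)
print(f"tautology confirmed over all {len(assignments)} assignments")
```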
[ { "math_id": 0, "text": "\\ =" }, { "math_id": 1, "text": "\\overline{\\overline{A |} \\ \\ \\overline{C |} \\Big|}" }, { "math_id": 2, "text": "\\ B=\\{" }, { "math_id": 3, "text": "," }, { "math_id": 4, "text": "\\ \\}" }, { "math_id": 5, "text": "\\lang B,-\\ -,\\overline{- \\ |},\\overline{\\ \\ |} \\rang" }, { "math_id": 6, "text": "\\lang 2,1,0 \\rang" }, { "math_id": 7, "text": "\\lang B,\\overline{-\\ - \\ |},\\overline{\\ \\ |}\\rang" }, { "math_id": 8, "text": "\\lang 2,0 \\rang" }, { "math_id": 9, "text": "\\lang B,+,\\lnot,1 \\rang" }, { "math_id": 10, "text": "\\lang B,+,\\times,\\lnot,1,0 \\rang" }, { "math_id": 11, "text": "\\lang 2,2,1,0,0 \\rang" }, { "math_id": 12, "text": "\\neg A \\lor A = 1" }, { "math_id": 13, "text": "\\neg A \\land A = 0" }, { "math_id": 14, "text": "m(x)" }, { "math_id": 15, "text": "u(x)" }, { "math_id": 16, "text": "m(x) = 1-\\max(\\{0\\}\\cup x)" }, { "math_id": 17, "text": "u(x) = \\max(\\{0\\} \\cup x)" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "u" }, { "math_id": 20, "text": "m" }, { "math_id": 21, "text": "\\overline{A |}" }, { "math_id": 22, "text": "\\overline{A^* \\ B |} \\ \\ \\overline{\\overline{B |} \\ C^* \\Big|} \\ A^* \\ C^* " }, { "math_id": 23, "text": "\\overline{\\overline{A |} \\Big|}" }, { "math_id": 24, "text": "A \\ \\overline{A \\ B |} = A \\ \\overline{B |}" }, { "math_id": 25, "text": "\\overline{\\ \\ |} \\ A = \\overline{\\ \\ |}" }, { "math_id": 26, "text": "\\overline{A |} \\ A = \\overline{\\ \\ |}" }, { "math_id": 27, "text": "\\lang - \\ -, \\overline{- \\ |}, \\overline{\\ \\ |} \\rang" }, { "math_id": 28, "text": "\\overline{\\ \\overline{\\ \\overline{\\ \\ |} \\ \\Big|} \\ \\Bigg|} = \\overline{\\ \\ |}" }, { "math_id": 29, "text": "\\overline{A \\ \\ B \\ |}" }, { "math_id": 30, "text": "\\overline{A |} \\ \\ \\overline{B |}" } ]
https://en.wikipedia.org/wiki?curid=660651
660657
Rankine cycle
Model that is used to predict the performance of steam turbine systems The Rankine cycle is an idealized thermodynamic cycle describing the process by which certain heat engines, such as steam turbines or reciprocating steam engines, allow mechanical work to be extracted from a fluid as it moves between a heat source and heat sink. The Rankine cycle is named after William John Macquorn Rankine, a Scottish polymath professor at Glasgow University. Heat energy is supplied to the system via a boiler where the working fluid (typically water) is converted to a high-pressure gaseous state (steam) in order to turn a turbine. After passing over the turbine the fluid is allowed to condense back into a liquid state as waste heat energy is rejected before being returned to boiler, completing the cycle. Friction losses throughout the system are often neglected for the purpose of simplifying calculations as such losses are usually much less significant than thermodynamic losses, especially in larger systems. Description. The Rankine cycle closely describes the process by which steam engines commonly found in thermal power generation plants harness the thermal energy of a fuel or other heat source to generate electricity. Possible heat sources include combustion of fossil fuels such as coal, natural gas, and oil, use of mined resources for nuclear fission, renewable fuels like biomass and ethanol, and energy capture of natural sources such as concentrated solar power and geothermal energy. Common heat sinks include ambient air above or around a facility and bodies of water such as rivers, ponds, and oceans. The ability of a Rankine engine to harness energy depends on the relative temperature difference between the heat source and heat sink. The greater the differential, the more mechanical power can be efficiently extracted out of heat energy, as per Carnot's theorem. The efficiency of the Rankine cycle is limited by the high heat of vaporization of the working fluid. Unless the pressure and temperature reach supercritical levels in the boiler, the temperature range over which the cycle can operate is quite small. As of 2022, most supercritical power plants adopt a steam inlet pressure of 24.1 MPa and inlet temperature between 538°C and 566°C, which results in plant efficiency of 40%. However, if pressure is further increased to 31 MPa the power plant is referred to as ultra-supercritical, and one can increase the steam inlet temperature to 600°C, thus achieving a thermal efficiency of 42%. This low steam turbine entry temperature (compared to a gas turbine) is why the Rankine (steam) cycle is often used as a bottoming cycle to recover otherwise rejected heat in combined-cycle gas turbine power stations. The idea is that very hot combustion products are first expanded in a gas turbine, and then the exhaust gases, which are still relatively hot, are used as a heat source for the Rankine cycle, thus reducing the temperature difference between the heat source and the working fluid and therefore reducing the amount of entropy generated by irreversibility. Rankine engines generally operate in a closed loop in which the working fluid is reused. The water vapor with condensed droplets often seen billowing from power stations is created by the cooling systems (not directly from the closed-loop Rankine power cycle). This "exhaust" heat is represented by the "Qout" flowing out of the lower side of the cycle shown in the T–s diagram below. 
Cooling towers operate as large heat exchangers by absorbing the latent heat of vaporization of the working fluid and simultaneously evaporating cooling water to the atmosphere. While many substances can be used as the working fluid, water is usually chosen for its simple chemistry, relative abundance, low cost, and thermodynamic properties. By condensing the working steam vapor to a liquid, the pressure at the turbine outlet is lowered, and the feed pump consumes only 1% to 3% of the turbine output power. These factors contribute to a higher efficiency for the cycle. The benefit of this is offset by the low temperatures of steam admitted to the turbine(s). Gas turbines, for instance, have turbine entry temperatures approaching 1500 °C. However, the thermal efficiencies of actual large steam power stations and large modern gas turbine stations are similar. The four processes in the Rankine cycle. There are four processes in the Rankine cycle. The states are identified by numbers (in brown) in the T–s diagram. In an ideal Rankine cycle the pump and turbine would be isentropic: i.e., the pump and turbine would generate no entropy and would hence maximize the net work output. Processes 1–2 and 3–4 would be represented by vertical lines on the T–s diagram and more closely resemble those of the Carnot cycle. The Rankine cycle shown here prevents the state of the working fluid from ending up in the superheated vapor region after the expansion in the turbine, which reduces the energy removed by the condensers. The actual vapor power cycle differs from the ideal Rankine cycle because of irreversibilities in the inherent components caused by fluid friction and heat loss to the surroundings; fluid friction causes pressure drops in the boiler, the condenser, and the piping between the components, and as a result the steam leaves the boiler at a lower pressure; heat loss reduces the net work output, thus heat addition to the steam in the boiler is required to maintain the same level of net work output. Equations. The thermodynamic efficiency of the cycle, formula_0, is defined as the ratio of net power output to heat input. As the work required by the pump is often around 1% of the turbine work output, it can be simplified: formula_1 Each of the next four equations is derived from the energy and mass balance for a control volume. formula_2 formula_3 formula_4 formula_5 When dealing with the efficiencies of the turbines and pumps, an adjustment to the work terms must be made: formula_6 formula_7 Real Rankine cycle (non-ideal). In a real power-plant cycle (the name "Rankine" cycle is used only for the ideal cycle), the compression by the pump and the expansion in the turbine are not isentropic. In other words, these processes are non-reversible, and entropy is increased during the two processes. This somewhat increases the power required by the pump and decreases the power generated by the turbine. In particular, the efficiency of the steam turbine will be limited by water-droplet formation. As the water condenses, water droplets hit the turbine blades at high speed, causing pitting and erosion, gradually decreasing the life of turbine blades and efficiency of the turbine. The easiest way to overcome this problem is by superheating the steam. On the T–s diagram above, state 3 is at a border of the two-phase region of steam and water, so after expansion the steam will be very wet.
By superheating, state 3 will move to the right (and up) in the diagram and hence produce a drier steam after expansion. Variations of the basic Rankine cycle. The overall thermodynamic efficiency can be increased by raising the average heat input temperature formula_8 of that cycle. Increasing the temperature of the steam into the superheat region is a simple way of doing this. There are also variations of the basic Rankine cycle designed to raise the thermal efficiency of the cycle in this way; two of these are described below. Rankine cycle with reheat. The purpose of a reheating cycle is to remove the moisture carried by the steam at the final stages of the expansion process. In this variation, two turbines work in series. The first accepts vapor from the boiler at high pressure. After the vapor has passed through the first turbine, it re-enters the boiler and is reheated before passing through a second, lower-pressure, turbine. The reheat temperatures are very close or equal to the inlet temperatures, whereas the optimal reheat pressure needed is only one fourth of the original boiler pressure. Among other advantages, this prevents the vapor from condensing during its expansion and thereby reducing the damage in the turbine blades, and improves the efficiency of the cycle, because more of the heat flow into the cycle occurs at higher temperature. The reheat cycle was first introduced in the 1920s, but was not operational for long due to technical difficulties. In the 1940s, it was reintroduced with the increasing manufacture of high-pressure boilers, and eventually double reheating was introduced in the 1950s. The idea behind double reheating is to increase the average temperature. It was observed that more than two stages of reheating are generally unnecessary, since the next stage increases the cycle efficiency only half as much as the preceding stage. Today, double reheating is commonly used in power plants that operate under supercritical pressure. Regenerative Rankine cycle. The regenerative Rankine cycle is so named because after emerging from the condenser (possibly as a subcooled liquid) the working fluid is heated by steam tapped from the hot portion of the cycle. On the diagram shown, the fluid at 2 is mixed with the fluid at 4 (both at the same pressure) to end up with the saturated liquid at 7. This is called "direct-contact heating". The Regenerative Rankine cycle (with minor variants) is commonly used in real power stations. Another variation sends "bleed steam" from between turbine stages to feedwater heaters to preheat the water on its way from the condenser to the boiler. These heaters do not mix the input steam and condensate, function as an ordinary tubular heat exchanger, and are named "closed feedwater heaters". Regeneration increases the cycle heat input temperature by eliminating the addition of heat from the boiler/fuel source at the relatively low feedwater temperatures that would exist without regenerative feedwater heating. This improves the efficiency of the cycle, as more of the heat flow into the cycle occurs at higher temperature. Organic Rankine cycle. The organic Rankine cycle (ORC) uses an organic fluid such as n-pentane or toluene in place of water and steam. This allows use of lower-temperature heat sources, such as solar ponds, which typically operate at around 70 –90 °C. 
The efficiency of the cycle is much lower as a result of the lower temperature range, but this can be worthwhile because of the lower cost involved in gathering heat at this lower temperature. Alternatively, fluids can be used that have boiling points above water, and this may have thermodynamic benefits (See, for example, mercury vapour turbine). The properties of the actual working fluid have great influence on the quality of steam (vapour) after the expansion step, influencing the design of the whole cycle. The Rankine cycle does not restrict the working fluid in its definition, so the name "organic cycle" is simply a marketing concept and the cycle should not be regarded as a separate thermodynamic cycle. Supercritical Rankine cycle. The Rankine cycle applied using a supercritical fluid combines the concepts of heat regeneration and supercritical Rankine cycle into a unified process called the regenerative supercritical cycle (RGSC). It is optimised for temperature sources 125–450 °C. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
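To make the Equations section above concrete, the following sketch evaluates the cycle's thermal efficiency from the four specific enthalpies, including the simple pump- and turbine-efficiency adjustments. The enthalpy values are invented placeholders rather than steam-table data for any real plant; in practice they would be read from steam tables at the chosen pressures and temperatures.

```python
# Illustrative sketch of the Rankine-cycle energy balances above.
# h1..h4 are specific enthalpies (kJ/kg) at the four cycle states;
# the numbers below are placeholder values, not steam-table data.

def rankine_efficiency(h1, h2, h3, h4, eta_pump=1.0, eta_turbine=1.0):
    """Thermal efficiency = (turbine work - pump work) / heat input, per kg of fluid."""
    w_pump    = (h2 - h1) / eta_pump     # work consumed by the feed pump
    w_turbine = (h3 - h4) * eta_turbine  # work delivered by the turbine
    q_in      = h3 - h2                  # heat added in the boiler
    return (w_turbine - w_pump) / q_in

# Hypothetical enthalpies: condensate, pump exit, superheated steam, turbine exhaust
h1, h2, h3, h4 = 192.0, 200.0, 3350.0, 2250.0

print(f"ideal cycle:      {rankine_efficiency(h1, h2, h3, h4):.1%}")
print(f"with 85% devices: {rankine_efficiency(h1, h2, h3, h4, 0.85, 0.85):.1%}")
```

The second line simply scales the pump and turbine work terms as in the two adjustment equations above; a full non-ideal analysis would also recompute the downstream state enthalpies.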
[ { "math_id": 0, "text": "\\eta_\\text{therm}" }, { "math_id": 1, "text": " \\eta_\\text{therm} = \\frac{\\dot{W}_\\text{turb} - \\dot{W}_\\text{pump}}{\\dot{Q}_\\text{in}} \\approx \\frac{\\dot{W}_\\text{turb}}{\\dot{Q}_\\text{in}}" }, { "math_id": 2, "text": "\\frac{\\dot{Q}_\\text{in}}{\\dot{m}} = h_3 - h_2," }, { "math_id": 3, "text": "\\frac{\\dot{Q}_\\text{out}}{\\dot{m}} = h_4 - h_1," }, { "math_id": 4, "text": "\\frac{\\dot{W}_\\text{pump}}{\\dot{m}} = h_2 - h_1," }, { "math_id": 5, "text": "\\frac{\\dot{W}_\\text{turbine}}{\\dot{m}} = h_3 - h_4." }, { "math_id": 6, "text": " \\frac{\\dot{W}_\\text{pump}}{\\dot{m}} = h_2 - h_1 \\approx \\frac{v_1 \\Delta p}{\\eta_\\text{pump}} = \\frac{v_1 (p_2 - p_1)}{\\eta_\\text{pump}}," }, { "math_id": 7, "text": " \\frac{\\dot{W}_\\text{turbine}}{\\dot{m}} = h_3-h_4 \\approx (h_3 - h_4) \\eta_\\text{turbine}." }, { "math_id": 8, "text": "\\bar{T}_\\text{in} = \\frac{\\int_2^3 T\\,dQ}{Q_\\text{in}}" } ]
https://en.wikipedia.org/wiki?curid=660657
6607051
Strictly positive measure
In mathematics, strict positivity is a concept in measure theory. Intuitively, a strictly positive measure is one that is "nowhere zero", or that is zero "only on points". Definition. Let formula_0 be a Hausdorff topological space and let formula_1 be a formula_2-algebra on formula_3 that contains the topology formula_4 (so that every open set is a measurable set, and formula_1 is at least as fine as the Borel formula_2-algebra on formula_3). Then a measure formula_5 on formula_6 is called strictly positive if every non-empty open subset of formula_3 has strictly positive measure. More concisely, formula_5 is strictly positive if and only if for all formula_7 such that formula_8 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(X, T)" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "(X, \\Sigma)" }, { "math_id": 7, "text": "U \\in T" }, { "math_id": 8, "text": "U \\neq \\varnothing, \\mu (U) > 0." }, { "math_id": 9, "text": "\\delta_0" }, { "math_id": 10, "text": "\\R" }, { "math_id": 11, "text": "T = \\{\\varnothing, \\R\\}," }, { "math_id": 12, "text": "\\R^n" }, { "math_id": 13, "text": "\\nu" }, { "math_id": 14, "text": "(X, \\Sigma)," }, { "math_id": 15, "text": "\\nu," }, { "math_id": 16, "text": "U \\subseteq X" }, { "math_id": 17, "text": "\\mu(U) > 0;" }, { "math_id": 18, "text": "\\nu(U) > 0" } ]
https://en.wikipedia.org/wiki?curid=6607051
6607484
Infinite-dimensional Lebesgue measure
Mathematical folklore An infinite-dimensional Lebesgue measure is a measure defined on infinite-dimensional normed vector spaces, such as Banach spaces, which resembles the Lebesgue measure used in finite-dimensional spaces. However, the traditional Lebesgue measure cannot be straightforwardly extended to all infinite-dimensional spaces due to a key limitation: any translation-invariant Borel measure on an infinite-dimensional separable Banach space either assigns infinite measure to every open set or is zero for all sets. Despite this, certain forms of infinite-dimensional Lebesgue-like measures can exist in specific contexts. These include compact spaces such as the Hilbert cube, or scenarios where some typical properties of finite-dimensional Lebesgue measures are modified or omitted. Motivation. The Lebesgue measure formula_0 on the Euclidean space formula_1 is locally finite, strictly positive, and translation-invariant. That is: every point formula_2 of formula_1 has an open neighborhood formula_3 of finite measure, formula_4 every non-empty open set formula_5 has strictly positive measure, formula_6 and translating any Lebesgue-measurable set formula_7 of formula_8 by any vector formula_9 leaves its measure unchanged: formula_10 Motivated by their geometrical significance, the construction of measures satisfying the above properties for infinite-dimensional spaces such as the formula_11 spaces or path spaces is still an open and active area of research. Non-existence theorem in separable Banach spaces. Statement of the theorem. Let formula_12 be an infinite-dimensional, separable Banach space. Then the only locally finite and translation-invariant Borel measure formula_13 on formula_12 is the trivial measure. Equivalently, there is no locally finite, strictly positive, and translation-invariant measure on formula_12. More generally: on a Polish group formula_14 that is not locally compact, there cannot exist a σ-finite and left-invariant Borel measure. This theorem implies that on an infinite-dimensional separable Banach space (which cannot be locally compact) a measure that perfectly matches the properties of a finite-dimensional Lebesgue measure does not exist. Proof. Let formula_12 be an infinite-dimensional, separable Banach space equipped with a locally finite translation-invariant measure formula_13. To prove that formula_13 is the trivial measure, it is enough to show that formula_15 Like every separable metric space, formula_12 is a Lindelöf space, which means that every open cover of formula_12 has a countable subcover. It is therefore enough to exhibit an open cover of formula_12 by null sets: choosing a countable subcover, the σ-subadditivity of formula_13 then implies formula_15 By local finiteness of the measure formula_13, for some formula_16 the open ball formula_17 of radius formula_18 has finite formula_13-measure. Since formula_12 is infinite-dimensional, by Riesz's lemma there is an infinite sequence of pairwise disjoint open balls formula_19 formula_20, of radius formula_21 with all the smaller balls formula_22 contained within formula_23 By translation invariance these balls all have the same formula_13-measure; since they are disjoint and all contained in a ball of finite measure, the sum of their measures is finite, so this common value must be zero. By translation invariance again, every open ball of radius formula_21 wherever it is centred, has formula_13-measure zero, and the set of all such balls is an open cover of formula_12 by null sets, which completes the proof that formula_24. Nontrivial measures. Here are some examples of infinite-dimensional Lebesgue measures that can exist if the conditions of the above theorem are relaxed.
One example for an entirely separable Banach space is the abstract Wiener space construction, similar to a product of Gaussian measures (which are not translation invariant). Another approach is to consider a Lebesgue measure of finite-dimensional subspaces within the larger space and look at prevalent and shy sets. The Hilbert cube carries the product Lebesgue measure and the compact topological group given by the Tychonoff product of an infinite number of copies of the circle group is infinite-dimensional and carries a Haar measure that is translation-invariant. These two spaces can be mapped onto each other in a measure-preserving way by unwrapping the circles into intervals. The infinite product of the additive real numbers has the analogous product Haar measure, which is precisely the infinite-dimensional analog of the Lebesgue measure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "\\Reals^n" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "N_x" }, { "math_id": 4, "text": "\\lambda(N_x) < + \\infty;" }, { "math_id": 5, "text": "U" }, { "math_id": 6, "text": "\\lambda(U) > 0;" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "\\Reals^n," }, { "math_id": 9, "text": "h" }, { "math_id": 10, "text": "\\lambda(A+h) = \\lambda(A)." }, { "math_id": 11, "text": "L^p" }, { "math_id": 12, "text": "X" }, { "math_id": 13, "text": "\\mu" }, { "math_id": 14, "text": "G" }, { "math_id": 15, "text": "\\mu(X) = 0." }, { "math_id": 16, "text": "r > 0," }, { "math_id": 17, "text": "B(r)" }, { "math_id": 18, "text": "r" }, { "math_id": 19, "text": "B_n(r/4)," }, { "math_id": 20, "text": "n \\in \\N" }, { "math_id": 21, "text": "r/4," }, { "math_id": 22, "text": "B_n(r/4)" }, { "math_id": 23, "text": "B(r)." }, { "math_id": 24, "text": "\\mu(X) = 0" } ]
https://en.wikipedia.org/wiki?curid=6607484
66082725
Activity-driven model
In network science, the activity-driven model is a temporal network model in which each node has a randomly-assigned "activity potential", which governs how it links to other nodes over time. Each node formula_0 (out of formula_1 total) has its activity potential formula_2 drawn from a given distribution formula_3. A sequence of timesteps unfolds, and in each timestep each node formula_4 forms ties to formula_5 random other nodes at rate formula_6 (more precisely, it does so with probability formula_7 per timestep). All links are then deleted after each timestep. Properties of time-aggregated network snapshots are able to be studied in terms of formula_3. For example, since each node formula_4 after formula_8 timesteps will have on average formula_9 outgoing links, the degree distribution after formula_8 timesteps in the time-aggregated network will be related to the activity-potential distribution by formula_10 Spreading behavior according to the SIS epidemic model was investigated on activity-driven networks, and the following condition was derived for large-scale outbreaks to be possible: formula_11 where formula_12 is the per-contact transmission probability, formula_13 is the per-timestep recovery probability, and (formula_14, formula_15) are the first and second moments of the random activity-rate formula_16. Extensions. A variety of extensions to the activity-driven model have been studied. One example is activity-driven networks with attractiveness, in which the links that a given node forms do not attach to other nodes at random, but rather with a probability proportional to a variable encoding nodewise attractiveness. Another example is activity-driven networks with memory, in which activity-levels change according to a self-excitation mechanism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
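The generative process described above is straightforward to simulate. The sketch below is purely illustrative: the uniform activity distribution, the parameter values and the random seed are arbitrary choices rather than values from the literature, and self-links are not filtered out.

```python
import random

# Illustrative parameters (not taken from the model's defining papers)
N   = 500     # number of nodes
m   = 2       # links formed per activation
eta = 0.1     # rescaling factor; keeps activation probabilities below 1
T   = 1000    # number of timesteps
random.seed(0)

# Activity potentials x_i drawn from a chosen distribution F(x); uniform here.
x = [random.uniform(0.01, 1.0) for _ in range(N)]

out_links = [0] * N  # outgoing links accumulated in the time-aggregated network

for _ in range(T):
    # Each timestep starts from an empty graph: all previous links are deleted.
    for i in range(N):
        if random.random() < eta * x[i]:          # node i activates this timestep
            targets = random.sample(range(N), m)  # m random nodes (self not excluded, for brevity)
            out_links[i] += len(targets)

# Compare the simulated out-degree of the most active node with m * eta * x_i * T.
i = max(range(N), key=lambda j: x[j])
print(f"node {i}: simulated {out_links[i]} out-links, expected about {m * eta * x[i] * T:.0f}")
```

For the most active node the simulated count should land near the average of m η x_i T outgoing links stated above, up to statistical fluctuations.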
[ { "math_id": 0, "text": " j " }, { "math_id": 1, "text": " N " }, { "math_id": 2, "text": " x_i " }, { "math_id": 3, "text": " F(x) " }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": " m " }, { "math_id": 6, "text": " a_i=\\eta x_i" }, { "math_id": 7, "text": " a_i \\, \\Delta t " }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "m\\eta x_i T" }, { "math_id": 10, "text": " P_T(k) \\propto F\\left(\\frac{k}{m\\eta T}\\right). " }, { "math_id": 11, "text": " \\frac{\\beta}{\\lambda} > \\frac{2\\langle a\\rangle}{\\langle a\\rangle + \\sqrt{\\langle a^2\\rangle}}, " }, { "math_id": 12, "text": "\\beta" }, { "math_id": 13, "text": "\\lambda" }, { "math_id": 14, "text": " \\langle a\\rangle " }, { "math_id": 15, "text": " \\langle a^2\\rangle" }, { "math_id": 16, "text": " a_j" } ]
https://en.wikipedia.org/wiki?curid=66082725
660870
Doxycycline
Tetracycline-class antibiotic Doxycycline is a broad-spectrum antibiotic of the tetracycline class used in the treatment of infections caused by bacteria and certain parasites. It is used to treat bacterial pneumonia, acne, chlamydia infections, Lyme disease, cholera, typhus, and syphilis. It is also used to prevent malaria. Doxycycline may be taken by mouth or by injection into a vein. Common side effects include diarrhea, nausea, vomiting, abdominal pain, and an increased risk of sunburn. Use during pregnancy is not recommended. Like other agents of the tetracycline class, it either slows or kills bacteria by inhibiting protein production. It kills the malaria parasite by targeting a plastid organelle, the apicoplast. Doxycycline was patented in 1957 and came into commercial use in 1967. It is on the World Health Organization's List of Essential Medicines. Doxycycline is available as a generic medicine. In 2021, it was among the most commonly prescribed medications in the United States, with more than 8 million prescriptions. Medical uses. In addition to the general indications for all members of the tetracycline antibiotics group, doxycycline is frequently used to treat Lyme disease, chronic prostatitis, sinusitis, pelvic inflammatory disease, severe acne, rosacea, and rickettsial infections. The efficacy of oral doxycycline for treating papulopustular rosacea and adult acne is not solely based on its antibiotic properties, but also on its anti-inflammatory and anti-angiogenic properties. In Canada, in 2004, doxycycline was considered a first-line treatment for chlamydia and non-gonococcal urethritis and, with cefixime, for uncomplicated gonorrhea. Antibacterial. General indications. Doxycycline is a broad-spectrum antibiotic that is employed in the treatment of numerous bacterial infections. It is effective against bacteria such as "Moraxella catarrhalis", "Brucella melitensis", "Chlamydia pneumoniae", and "Mycoplasma pneumoniae". Additionally, doxycycline is used in the prevention and treatment of serious conditions like anthrax, leptospirosis, bubonic plague, and Lyme disease. However, some bacteria, including "Haemophilus" spp., "Mycoplasma hominis", and "Pseudomonas aeruginosa", have shown resistance to doxycycline. It is also effective against "Yersinia pestis" (the infectious agent of bubonic plague), and is prescribed for the treatment of Lyme disease, ehrlichiosis, and Rocky Mountain spotted fever. Specifically, doxycycline is indicated for treatment of the following diseases: Gram-negative bacteria specific indications. When bacteriologic testing indicates appropriate susceptibility to the drug, doxycycline may be used to treat these infections caused by Gram-negative bacteria: Gram-positive bacteria specific indications. Some Gram-positive bacteria have developed resistance to doxycycline. Up to 44% of "Streptococcus pyogenes" and up to 74% of "S. faecalis" specimens have developed resistance to the tetracycline group of antibiotics. Up to 57% of "P. acnes" strains developed resistance to doxycycline. When bacteriologic testing indicates appropriate susceptibility to the drug, doxycycline may be used to treat these infections caused by Gram-positive bacteria: Specific applications of doxycycline when penicillin is contraindicated. When penicillin is contraindicated, doxycycline can be used to treat: Use as adjunctive therapy. Doxycycline may also be used as adjunctive therapy for severe acne.
Subantimicrobial-dose doxycycline (SDD) is widely used as an adjunctive treatment to scaling and root planing for periodontitis. Significant differences were observed for all investigated clinical parameters of periodontitis in favor of the scaling and root planing + SDD group where SDD dosage regimens is 20 mg twice daily for three months in a meta-analysis published in 2011. SDD is also used to treat skin conditions such as acne and rosacea, including ocular rosacea. In ocular rosacea, treatment period is 2 to 3 months. After discontinuation of doxycycline, recurrences may occur within three months; therefore, many studies recommend either slow tapering or treatment with a lower dose over a longer period of time. Doxycycline is used as an adjunctive therapy for acute intestinal amebiasis. Doxycycline is also used as an adjunctive therapy for chancroid. As prophylaxis against sexually transmitted infections. Doxycycline is used for post-exposure prophylaxis (PEP) to reduce the incidence of sexually transmitted bacterial infections (STIs), but it has been associated with tetracycline resistance in associated species, in particular, in Neisseria gonorrhoeae. For this reason, the Australian consensus statement mentions that doxycycline for PEP particularly in gay, bisexual, and other men who have sex with men (GBMSM) should be considered only for the prevention of syphilis in GBMSM, and that the risk of increasing antimicrobial resistance outweighed any potential benefit from reductions in other bacterial STIs in GBMSM. Appropriate use of doxycycline for PEP is supported by guidelines from the US Centers for Disease Control and Prevention (CDC) and the Australasian Society for HIV Medicine. Use in combination. The first-line treatment for brucellosis is a combination of doxycycline and streptomycin and the second-line is a combination of doxycycline and rifampicin (rifampin). Antimalarial. Doxycycline is active against the erythrocytic stages of "Plasmodium falciparum" but not against the gametocytes of "P. falciparum". It is used to prevent malaria. It is not recommended alone for initial treatment of malaria, even when the parasite is doxycycline-sensitive, because the antimalarial effect of doxycycline is delayed. Doxycycline blocks protein production in apicoplast (an organelle) of "P. falciparum"—such blocking leads to two main effects: it disrupts the parasite's ability to produce fatty acids, which are essential for its growth, and it impairs the production of heme, a cofactor. These effects occur late in the parasite's life cycle when it is in the blood stage, causing the symptoms of malaria. By blocking important processes in the parasite, doxycycline both inhibits the growth and prevents the multiplication of "P. falciparum". It does not directly kill the living organisms of "P. falciparum" but creates conditions that prevent their growth and replication. The World Health Organization (WHO) guidelines state that the combination of doxycycline with either artesunate or quinine may be used for the treatment of uncomplicated malaria due to "P. falciparum" or following intravenous treatment of severe malaria. Antihelminthic. Doxycycline kills the symbiotic "Wolbachia" bacteria in the reproductive tracts of parasitic filarial nematodes, making the nematodes sterile, and thus reducing transmission of diseases such as onchocerciasis and elephantiasis. Field trials in 2005 showed an eight-week course of doxycycline almost eliminates the release of microfilariae. Spectrum of susceptibility. 
Doxycycline has been used successfully to treat sexually transmitted, respiratory, and ophthalmic infections. Representative pathogenic genera include "Chlamydia, Streptococcus, Ureaplasma, Mycoplasma", and others. The following represents minimum inhibitory concentration susceptibility data for a few medically significant microorganisms. Sclerotherapy. Doxycycline is also used for sclerotherapy in slow-flow vascular malformations, namely venous and lymphatic malformations, as well as post-operative lymphoceles. Off-label use. Doxycycline has found off-label use in the treatment of transthyretin amyloidosis (ATTR). Together with tauroursodeoxycholic acid, doxycycline appears to be a promising combination capable of disrupting transthyretin (TTR) fibrils in existing amyloid deposits of ATTR patients. Routes of administration. Doxycycline can be administered via oral or intravenous routes. The combination of doxycycline with dairy, antacids, calcium supplements, iron products, laxatives containing magnesium, or bile acid sequestrants is not inherently dangerous, but any of these foods and supplements may decrease the absorption of doxycycline. Doxycycline has a high oral bioavailability, as it is almost completely absorbed in the stomach and proximal small intestine. Unlike other tetracyclines, its absorption is not significantly affected by food or dairy intake, although co-administration of dairy products reduces the serum concentration of doxycycline by 20%. Doxycycline absorption is also inhibited by divalent and trivalent cations, such as iron, bismuth, aluminum, calcium, and magnesium. Doxycycline forms unstable complexes with metal ions in the acidic gastric environment, which dissociate in the small intestine, allowing the drug to be absorbed. However, some doxycycline remains complexed with metal ions in the duodenum, resulting in a slight decrease in absorption. Contraindications. Severe liver disease or concomitant use of isotretinoin or other retinoids are contraindications, as both tetracyclines and retinoids can cause intracranial hypertension (increased pressure around the brain) in rare cases. Pregnancy and lactation. Doxycycline is categorized by the FDA as a class D drug in pregnancy. Doxycycline crosses into breastmilk. Other tetracycline antibiotics are contraindicated in pregnancy and up to eight years of age, due to the potential for disrupting bone and tooth development. They include a class warning about staining of teeth and decreased development of dental enamel in children exposed to tetracyclines in utero, during breastfeeding, or during young childhood. However, the FDA has acknowledged that the actual risk of dental staining of primary teeth is undetermined for doxycycline specifically. The best available evidence indicates that doxycycline has little or no effect on hypoplasia of dental enamel or on staining of teeth; the CDC recommends the use of doxycycline for the treatment of Q fever and for tick-borne rickettsial diseases in young children, and others advocate for its use in malaria. Adverse effects. Adverse effects are similar to those of other members of the tetracycline antibiotic group. Doxycycline can cause gastrointestinal upset. Oral doxycycline can cause pill esophagitis, particularly when it is swallowed without adequate fluid, or by persons with difficulty swallowing or impaired mobility. Doxycycline is less likely than other antibiotic drugs to cause "Clostridium difficile" colitis.
An erythematous rash in sun-exposed parts of the body has been reported to occur in 7.3–21.2% of persons taking doxycycline for malaria prophylaxis. One study examined the tolerability of various malaria prophylactic regimens and found that doxycycline did not cause a significantly higher percentage of all skin events (photosensitivity not specified) when compared with other antimalarials. The rash resolves upon discontinuation of the drug. Unlike some other members of the tetracycline group, it may be used in those with renal impairment. Doxycycline use has been associated with an increased risk of inflammatory bowel disease. In one large retrospective study, patients who were prescribed doxycycline for their acne had a 2.25-fold greater risk of developing Crohn's disease. Interactions. Previously, doxycycline was believed to impair the effectiveness of many types of hormonal contraception due to CYP450 induction. Research has shown no significant loss of effectiveness in oral contraceptives while using most tetracycline antibiotics (including doxycycline), although many physicians still recommend the use of barrier contraception for people taking the drug to prevent unwanted pregnancy. Pharmacology. Doxycycline, like other tetracycline antibiotics, is bacteriostatic. It works by preventing bacteria from reproducing through the inhibition of protein synthesis. Doxycycline is highly lipophilic, so it can easily enter cells; this means the drug is easily absorbed after oral administration and has a large volume of distribution. It can also be reabsorbed in the renal tubules and gastrointestinal tract due to its high lipophilicity, giving it a long elimination half-life, and it is prevented from accumulating in the kidneys of patients with kidney failure by compensatory excretion in faeces. Doxycycline–metal ion complexes are unstable at acidic pH; therefore, more doxycycline enters the duodenum for absorption than was the case with earlier tetracycline compounds. In addition, food has less effect on its absorption than on the absorption of earlier drugs, with doxycycline serum concentrations being reduced by about 20% by test meals, compared with 50% for tetracycline. Mechanism of action. Doxycycline is a broad-spectrum bacteriostatic antibiotic. It inhibits the synthesis of bacterial proteins by binding to the 30S ribosomal subunit, which is only found in bacteria. This prevents the binding of transfer RNA to messenger RNA at the ribosomal subunit, meaning amino acids cannot be added to polypeptide chains and new proteins cannot be made. This stops bacterial growth, giving the immune system time to kill and remove the bacteria. Pharmacokinetics. The substance is almost completely absorbed from the upper part of the small intestine. It reaches its highest concentrations in the blood plasma after one to two hours and has a high plasma protein binding rate of about 80–90%. Doxycycline penetrates into almost all tissues and body fluids. Very high concentrations are found in the gallbladder, liver, kidneys, lung, breast milk, bone, and genitals; low ones in saliva, aqueous humor, cerebrospinal fluid (CSF), and especially in inflamed meninges. By comparison, the tetracycline antibiotic minocycline penetrates significantly better into the CSF and meninges. Doxycycline metabolism is negligible. It is actively excreted into the gut (in part via the gallbladder, in part directly from blood vessels), where some of it is inactivated by forming chelates.
About 40% is eliminated via the kidneys, and much less in people with end-stage kidney disease. The biological half-life is 18 to 22 hours (16 ± 6 hours according to another source) in healthy people, slightly longer in those with end-stage kidney disease, and significantly longer in those with liver disease. Chemistry. Expired tetracyclines or tetracyclines allowed to stand at a pH less than 2 are reported to be nephrotoxic due to the formation of a degradation product, anhydro-4-epitetracycline, which causes Fanconi syndrome. In the case of doxycycline, the absence of a hydroxyl group at C-6 prevents the formation of the nephrotoxic compound. Nevertheless, tetracyclines and doxycycline itself have to be taken with caution in patients with kidney injury, as they can worsen azotemia due to catabolic effects. Chemical properties. Doxycycline, doxycycline monohydrate, and doxycycline hyclate are yellow, crystalline powders with a bitter taste. The latter smells faintly of ethanol, a 1% aqueous solution has a pH of 2–3, and the specific rotation is formula_0 −110° cm3/dm·g in 0.01 N methanolic hydrochloric acid. History. After penicillin revolutionized the treatment of bacterial infections in World War II, many chemical companies moved into the field of discovering antibiotics by bioprospecting. American Cyanamid was one of these, and in the late 1940s chemists there discovered chlortetracycline, the first member of the tetracycline class of antibiotics. Shortly thereafter, scientists at Pfizer discovered oxytetracycline, and it was brought to market. Both compounds, like penicillin, were natural products, and it was commonly believed that nature had perfected them and that further chemical changes could only degrade their effectiveness. Scientists at Pfizer led by Lloyd Conover modified these compounds, which led to the invention of tetracycline itself, the first semi-synthetic antibiotic. Charlie Stephens' group at Pfizer worked on further analogs and created one with greatly improved stability and pharmacological efficacy: doxycycline. It was clinically developed in the early 1960s and approved by the FDA in 1967. As the patent neared expiration in the early 1970s, it became the subject of a lawsuit between Pfizer and International Rectifier that was not resolved until 1983; at the time, it was the largest litigated patent case in US history. Instead of a cash payment for infringement, Pfizer took the veterinary and feed-additive businesses of International Rectifier's subsidiary, Rachelle Laboratories. In January 2013, the FDA reported shortages of some, but not all, forms of doxycycline "caused by increased demand and manufacturing issues". Companies involved included an unnamed major generics manufacturer that ceased production in February 2013, Teva (which ceased production in May 2013), Mylan, Actavis, and Hikma Pharmaceuticals. The shortage came at a particularly bad time, since there were also concurrent shortages of an alternative antibiotic, tetracycline. The market price for doxycycline dramatically increased in the United States in 2013 and early 2014 (from $20 to over $1800 for a bottle of 500 tablets), before decreasing again. Society and culture. Doxycycline is available worldwide under many brand names. Doxycycline is available as a generic medicine. Research. Research areas on the application of doxycycline include the following medical conditions: Anti-inflammatory agent.
Some studies suggest that doxycycline possesses anti-inflammatory properties, acting by inhibiting proinflammatory cytokines such as interleukin-1 (IL-1), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), and matrix metalloproteinases (MMPs), while increasing the production of anti-inflammatory cytokines such as interleukin-10 (IL-10). Cytokines are small proteins that are secreted by immune cells and play a key role in the immune response. Some studies suggest that doxycycline can suppress the activation of the nuclear factor-kappa B (NF-κB) pathway, which is responsible for upregulating several inflammatory mediators in various cells, including neurons; therefore, it is studied as a potential agent for treating neuroinflammation. A potential explanation of doxycycline's anti-inflammatory properties is its inhibition of matrix metalloproteinases (MMPs), which are a group of proteases known to regulate the turnover of the extracellular matrix (ECM) and are thus suggested to be important in several diseases associated with tissue remodeling and inflammation. Doxycycline has been shown to inhibit MMPs, including matrilysin (MMP7), by interacting with the structural zinc atom and/or calcium atoms within the structural metal center of the protein. Doxycycline also inhibits kallikrein-related peptidase 5 (KLK5). The inhibition of MMPs and KLK5 enzymes subsequently suppresses the expression of LL-37, a cathelicidin antimicrobial peptide that, when overexpressed, can trigger inflammatory cascades. By inhibiting LL-37 expression, doxycycline helps to mitigate these downstream inflammatory cascades, thereby reducing inflammation and the symptoms of inflammatory conditions. Doxycycline is used to treat acne vulgaris and rosacea. However, it is not clear which contributes more to its therapeutic effectiveness against these skin conditions: the bacteriostatic properties of doxycycline, which affect bacteria (such as "Propionibacterium acnes") on the surface of sebaceous glands even at lower doses called "submicrobial" or "subantimicrobial", or its anti-inflammatory effects, which reduce inflammation in acne vulgaris and rosacea, including ocular rosacea. Subantimicrobial-dose doxycycline (SDD) can still have a bacteriostatic effect, especially when taken for extended periods, such as several months in treating acne and rosacea. While SDD is believed to act through anti-inflammatory rather than solely antibacterial effects, it has been shown to work by reducing inflammation associated with acne and rosacea; still, the exact mechanisms have yet to be fully discovered. One probable mechanism is doxycycline's ability to decrease the amount of reactive oxygen species (ROS). Inflammation in rosacea may be associated with increased production of ROS by inflammatory cells; these ROS contribute to exacerbating symptoms. Doxycycline may reduce ROS levels and induce antioxidant activity because it directly scavenges hydroxyl radicals and singlet oxygen, helping to minimize tissue damage caused by highly oxidative and inflammatory conditions. Studies have shown that SDD can effectively improve acne and rosacea symptoms, probably without inducing antibiotic resistance. Doxycycline has been observed to exert its anti-inflammatory effects by inhibiting neutrophil chemotaxis and oxidative bursts, which are common mechanisms involved in inflammation and ROS activity in rosacea and acne.
Doxycycline's dual benefits as an antibacterial and an anti-inflammatory make it a helpful treatment option for diseases involving inflammation, not only skin conditions such as rosacea and acne, but also conditions such as osteoarthritis and periodontitis. Nevertheless, current results are inconclusive, and the evidence for doxycycline's anti-inflammatory properties remains limited, given conflicting reports from animal models so far. Doxycycline has been studied in various immunological disorders, including rheumatoid arthritis, lupus, and periodontitis. In these conditions, doxycycline has been investigated for anti-inflammatory and immunomodulatory effects that could be beneficial in treatment; however, no solid conclusion has yet been reached. Doxycycline is also studied for its neuroprotective properties, which are associated with antioxidant, anti-apoptotic, and anti-inflammatory mechanisms. In this context, it is important to note that doxycycline is able to cross the blood–brain barrier. Several studies have shown that doxycycline inhibits dopaminergic neurodegeneration through the upregulation of axonal and synaptic proteins. Axonal degeneration and synaptic loss are key events at the early stages of neurodegeneration and precede neuronal death in neurodegenerative diseases, including Parkinson's disease (PD). Therefore, the regeneration of the axonal and synaptic network might be beneficial in PD. It has been demonstrated that doxycycline mimics nerve growth factor (NGF) signaling in PC12 cells. However, the involvement of this mechanism in the neuroprotective effect of doxycycline is unknown. Doxycycline is also being studied for reverting inflammatory changes related to depression. While there is some research on the use of doxycycline for treating major depressive disorder, the results are mixed. After a large-scale trial showed no benefit of using doxycycline in treating COVID‑19, the UK's National Institute for Health and Care Excellence (NICE) updated its guidance to not recommend the medication for the treatment of COVID‑19. Doxycycline was expected to possess anti-inflammatory properties that could lessen the cytokine storm associated with a SARS-CoV-2 infection, but the trials did not demonstrate the expected benefit. Researchers also believed that doxycycline possesses anti-inflammatory and immunomodulatory effects that could reduce the production of cytokines in COVID-19, but these supposed effects failed to improve the outcome of COVID-19 treatment. Wound healing. Research on novel drug formulations for the delivery of doxycycline in wound treatment is expanding, focusing on overcoming stability limitations for long-term storage and on developing consumer-friendly, parenteral antibiotic delivery systems. The most common and practical form of doxycycline delivery is through wound dressings, which have evolved from mono- to three-layered systems to maximize healing effectiveness. Research directions on the use of doxycycline in wound healing include the continuous stabilization of doxycycline, scaling up technology and industrial production, and exploring non-contact wound treatment methods such as sprays and aerosols for use in emergencies and when medical care is not readily accessible. Research reagent.
Doxycycline and other members of the tetracycline class of antibiotics are often used as research reagents in "in vitro" and "in vivo" biomedical research experiments involving bacteria, as well as in experiments in eukaryotic cells and organisms with inducible protein expression systems using tetracycline-controlled transcriptional activation. The mechanism of action for the antibacterial effect of tetracyclines relies on disrupting protein translation in bacteria, thereby damaging the ability of microbes to grow and repair; however, protein translation is also disrupted in eukaryotic mitochondria, impairing metabolism and leading to effects that can confound experimental results. Doxycycline is also used in "tet-on" (gene expression activated by doxycycline) and "tet-off" (gene expression inactivated by doxycycline) tetracycline-controlled transcriptional activation to regulate transgene expression in organisms and cell cultures. Doxycycline is more stable than tetracycline for this purpose. At subantimicrobial doses, doxycycline is an inhibitor of matrix metalloproteases and has been used in various experimental systems for this purpose, such as for recalcitrant recurrent corneal erosions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[\\alpha]_D^{25}" } ]
https://en.wikipedia.org/wiki?curid=660870
6609054
Spectral slope
In astrophysics and planetary science, spectral slope, also called spectral gradient, is a measure of the dependence of the reflectance on wavelength. In digital signal processing, it is a measure of how quickly the spectrum of an audio sound tails off towards the high frequencies, calculated using a linear regression. Spectral slope in astrophysics and planetary science. The visible and infrared spectrum of reflected sunlight is used to infer physical and chemical properties of the surface of a body. Some objects are brighter (reflect more) at longer (redder) wavelengths. Consequently, in visible light they will appear redder than objects showing no dependence of reflectance on wavelength. The slope (spectral gradient) is defined as: formula_0 where formula_1 are the reflectances measured with filters F0, F1 having the central wavelengths λ0 and λ1, respectively. The slope is typically expressed as a percentage increase of reflectance (i.e. reflectivity) per unit of wavelength: %/100 nm (or %/1000 Å). The slope is mostly used in the near-infrared part of the spectrum, while colour indices are commonly used in the visible part of the spectrum. The trans-Neptunian object Sedna is a typical example of a body showing a steep red slope (20%/100 nm), while Orcus' spectrum appears flat in the near infrared. Spectral slope in audio. The spectral "slope" of many natural audio signals (their tendency to have less energy at high frequencies) has been known for many years, as has the fact that this slope is related to the nature of the sound source. One way to quantify this is by applying linear regression to the Fourier magnitude spectrum of the signal, which produces a single number indicating the slope of the line of best fit through the spectral data. Alternative ways to characterise a sound signal's distribution of energy versus frequency include spectral rolloff and spectral centroid. Animals that can sense spectral slope. Dung beetles can see the spectral gradient of the sky and polarised light, and they use this to navigate. Desert ants of the genus "Cataglyphis" use the polarization and spectral gradients of the skylight to navigate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
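To make the two definitions above concrete, the sketch below computes the reflectance gradient from two filter measurements and the audio spectral slope by linear regression over the Fourier magnitude spectrum. It is an illustrative sketch rather than part of the article: the function names and example numbers are hypothetical, and normalising the gradient by the reflectance in the first filter (so that the result comes out in %/100 nm) is an assumed convention not spelled out in the text.
```python
# Illustrative sketch; names, normalisation, and example numbers are assumptions.
import numpy as np

def spectral_gradient_percent_per_100nm(r0, r1, lambda0_nm, lambda1_nm):
    """Reflectance slope S = (R_F1 - R_F0) / (lambda1 - lambda0),
    expressed as a percentage change per 100 nm, normalised here by R_F0
    (the normalisation convention is an assumption)."""
    s = (r1 - r0) / (lambda1_nm - lambda0_nm)   # reflectance change per nm
    return 100.0 * (s / r0) * 100.0             # percent per 100 nm

def audio_spectral_slope(signal, sample_rate):
    """Slope of the line of best fit through the Fourier magnitude spectrum,
    obtained by linear regression (degree-1 polynomial fit)."""
    magnitudes = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    slope, _intercept = np.polyfit(freqs, magnitudes, 1)
    return slope

# A Sedna-like red slope: reflectance rising from 0.10 at 550 nm to 0.12 at
# 650 nm gives about 20 %/100 nm (the numbers are hypothetical).
print(spectral_gradient_percent_per_100nm(0.10, 0.12, 550.0, 650.0))

# White noise has a roughly flat spectrum, so its fitted slope is near zero.
rng = np.random.default_rng(0)
print(audio_spectral_slope(rng.normal(size=44100), 44100))
```
In audio practice the regression is often performed on a log-magnitude or band-limited spectrum; the plain fit over the raw magnitude spectrum above simply follows the description given in the text.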
[ { "math_id": 0, "text": "S = \\frac{R_{F_1}-R_{F_0}}{\\lambda_1 - \\lambda_0}" }, { "math_id": 1, "text": "R_{F_0}, R_{F_1}" } ]
https://en.wikipedia.org/wiki?curid=6609054