954329
Schuler tuning
Inertial navigation design principle Schuler tuning is a design principle for inertial navigation systems that accounts for the curvature of the Earth. An inertial navigation system, used in submarines, ships, aircraft, and other vehicles to keep track of position, determines directions with respect to three axes pointing "north", "east", and "down". To detect the vehicle's orientation, the system contains an "inertial platform" mounted on gimbals, with gyroscopes that detect motion connected to a servo system to keep it pointing in a fixed orientation in space. However, the directions "north", "east" and "down" change as the vehicle moves on the curved surface of the Earth. Schuler tuning describes the conditions necessary for an inertial navigation system to keep the inertial platform always pointing "north", "east" and "down", so it gives correct directions on the near-spherical Earth. It is widely used in electronic control systems. Principle. As first explained by German engineer Maximilian Schuler in a 1923 paper, a pendulum that has a period that equals the orbital period of a hypothetical satellite orbiting at the surface of Earth (about 84.4 minutes) will tend to remain pointing at the center of Earth when its support is suddenly displaced. Such a pendulum (sometimes called a "Schuler pendulum") would have a length equal to the radius of Earth. Consider a simple gravity pendulum, whose length to its center of gravity equals the radius of Earth, suspended in a uniform gravitational field of the same strength as that experienced at Earth's surface. If suspended from the surface of Earth, the center of gravity of the pendulum bob would be at the center of Earth. If it is hanging motionless and its support is moved sideways, the bob tends to remain motionless, so the pendulum always points at the center of Earth. If such a pendulum were attached to the inertial platform of an inertial navigation system, the platform would remain level, facing "north", "east" and "down", as it was moved about on the surface of the Earth. The Schuler period can be derived from the classic formula for the period of a pendulum: formula_0 where L is the mean radius of Earth in meters and g is the local acceleration of gravity in metres per second per second. Application. A pendulum the length of the Earth's radius is impractical, so Schuler tuning doesn't use physical pendulums. Instead, the electronic control system of the inertial navigation system is modified to make the platform behave as if it were attached to a pendulum. The inertial platform is mounted on gimbals, and an electronic control system keeps it pointed in a constant direction with respect to the three axes. As the vehicle moves, the gyroscopes detect changes in orientation, and a feedback loop applies signals to torquers to rotate the platform on its gimbals to keep it pointed along the axes. To implement Schuler tuning, the feedback loop is modified to tilt the platform as the vehicle moves in the north–south and east–west directions, to keep the platform facing "down". To do this, the torquers that rotate the platform are fed a signal proportional to the vehicle's north–south and east–west velocity. The turning rate of the torquers is equal to the velocity divided by the radius of Earth "R": formula_1 So: formula_2 The acceleration a is a combination of the actual vehicle acceleration and the acceleration due to gravity acting on the tilting inertial platform. 
It can be measured by an accelerometer mounted horizontally on the platform, in either the north–south or east–west direction. So this equation can be seen as a version of the equation for a simple gravity pendulum with a length equal to the radius of Earth. The inertial platform acts as if it were attached to such a pendulum. An inertial navigation system is tuned by letting it sit motionless for one Schuler period. If its coordinates deviate too much during the period, or if it does not return to its original coordinates at the end of it, the system must be adjusted to the correct coordinates. Schuler's time constant appears in other contexts. Suppose a tunnel is dug from one end of the Earth to the other straight through its center. A stone dropped into such a tunnel oscillates harmonically with Schuler's time constant. It can also be shown that the period is the same for a tunnel that does not pass through the center of Earth; such a tunnel has to be an Earth-centered ellipse, the same shape as the path of the stone. These thought experiments (or rather the results of the corresponding calculations) rely on an assumption of uniform density throughout the Earth. Since the density is not actually uniform, the "true" periods would deviate from Schuler's time constant. References.
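The Schuler period given by formula_0 is easy to check numerically. The following is a minimal Python sketch using the Earth-radius and gravity values quoted above; it is only an illustration of the formula, not part of any navigation system.

```python
import math

# Schuler period T = 2*pi*sqrt(R/g) (formula_0 above), using the mean Earth
# radius and local gravity values quoted in the text.
R_EARTH = 6_371_000.0   # metres
G_LOCAL = 9.81          # metres per second squared

period_s = 2 * math.pi * math.sqrt(R_EARTH / G_LOCAL)
print(f"Schuler period: {period_s:.0f} s (about {period_s / 60:.1f} minutes)")
# Expected: roughly 5063 s, i.e. about 84.4 minutes.
```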
[ { "math_id": 0, "text": "T = 2\\pi \\sqrt\\frac{L}{g} \\approx 2\\pi \\sqrt\\frac{6371000}{9.81} \\approx 5063 \\ \\text{seconds} \\approx 84.4 \\ \\text{minutes}" }, { "math_id": 1, "text": "\\dot{\\theta} = v/R" }, { "math_id": 2, "text": "\\ddot{\\theta} = a/R" } ]
https://en.wikipedia.org/wiki?curid=954329
954333
Box topology
In topology, the cartesian product of topological spaces can be given several different topologies. One of the more natural choices is the box topology, where a base is given by the Cartesian products of open sets in the component spaces. Another possibility is the product topology, where a base is given by the Cartesian products of open sets in the component spaces, only finitely many of which can be unequal to the entire component space. While the box topology has a somewhat more intuitive definition than the product topology, it satisfies fewer desirable properties. In particular, if all the component spaces are compact, the box topology on their Cartesian product will not necessarily be compact, although the product topology on their Cartesian product will always be compact. In general, the box topology is finer than the product topology, although the two agree in the case of finite direct products (or when all but finitely many of the factors are trivial). Definition. Given formula_0 such that formula_1 or the (possibly infinite) Cartesian product of the topological spaces formula_2, indexed by formula_3, the box topology on formula_0 is generated by the base formula_4 The name "box" comes from the case of R"n", in which the basis sets look like boxes. The set formula_5 endowed with the box topology is sometimes denoted by formula_6 Properties. Box topology on R"ω": Example — failure of continuity. The following example is based on the Hilbert cube. Let R"ω" denote the countable cartesian product of R with itself, i.e. the set of all sequences in R. Equip R with the standard topology and R"ω" with the box topology. Define: formula_7 So all the component functions are the identity and hence continuous, however we will show "f" is not continuous. To see this, consider the open set formula_8 Suppose "f" were continuous. Then, since: formula_9 there should exist formula_10 such that formula_11 But this would imply that formula_12 which is false since formula_13 for formula_14 Thus "f" is not continuous even though all its component functions are. Example — failure of compactness. Consider the countable product formula_15 where for each "i", formula_16 with the discrete topology. The box topology on formula_0 will also be the discrete topology. Since discrete spaces are compact if and only if they are finite, we immediately see that formula_0 is not compact, even though its component spaces are. formula_0 is not sequentially compact either: consider the sequence formula_17 given by formula_18 Since no two points in the sequence are the same, the sequence has no limit point, and therefore formula_0 is not sequentially compact. Convergence in the box topology. Topologies are often best understood by describing how sequences converge. In general, a Cartesian product of a space formula_0 with itself over an indexing set formula_19 is precisely the space of functions from formula_19 to formula_0"," denoted formula_20. The product topology yields the topology of pointwise convergence; sequences of functions converge if and only if they converge at every point of formula_19. Because the box topology is finer than the product topology, convergence of a sequence in the box topology is a more stringent condition. 
Assuming formula_0 is Hausdorff, a sequence formula_21 of functions in formula_22 converges in the box topology to a function formula_23 if and only if it converges pointwise to formula_24 and there is a finite subset formula_25 and there is an formula_26 such that for all formula_27 the sequence formula_28 in formula_0 is constant for all formula_29. In other words, the sequence formula_28 is eventually constant for nearly all formula_30 and in a uniform way. Comparison with product topology. The basis sets in the product topology have almost the same definition as the above, "except" with the qualification that "all but finitely many" "Ui" are equal to the component space "Xi". The product topology satisfies a very desirable property for maps "fi" : "Y" → "Xi" into the component spaces: the product map "f": "Y" → "X" defined by the component functions "fi" is continuous if and only if all the "fi" are continuous. As shown above, this does not always hold in the box topology. This actually makes the box topology very useful for providing counterexamples—many qualities such as compactness, connectedness, metrizability, etc., if possessed by the factor spaces, are not in general preserved in the product with this topology. Notes. <templatestyles src="Reflist/styles.css" />
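The continuity failure described above can be made concrete with a small numerical sketch. The code below is illustrative only (the helper name is made up); for a given ε it finds a coordinate n at which f(ε/2) = (ε/2, ε/2, …) falls outside the box-open set U = ∏(−1/n, 1/n).

```python
import math

def first_escaping_coordinate(eps: float) -> int:
    """Smallest n with eps/2 >= 1/n, i.e. a coordinate where f(eps/2) lies outside U."""
    return math.ceil(2.0 / eps)

for eps in (1.0, 0.1, 0.001):
    n = first_escaping_coordinate(eps)
    print(f"eps = {eps}: coordinate n = {n}, since eps/2 = {eps/2} >= 1/n = {1/n:.6f}")
# No interval (-eps, eps) maps into U, so f is not continuous in the box topology.
```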
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "X := \\prod_{i \\in I} X_i," }, { "math_id": 2, "text": "X_i" }, { "math_id": 3, "text": "i \\in I" }, { "math_id": 4, "text": "\\mathcal{B} = \\left\\{ \\prod_{i \\in I} U_i \\mid U_i \\text{ open in } X_i \\right\\}." }, { "math_id": 5, "text": "\\prod_{i \\in I} X_i" }, { "math_id": 6, "text": "\\underset{i \\in I}{\\square} X_i." }, { "math_id": 7, "text": "\\begin{cases} f : \\mathbf{R} \\to \\mathbf{R}^\\omega \\\\ x \\mapsto (x,x,x, \\ldots) \\end{cases}" }, { "math_id": 8, "text": " U = \\prod_{n=1}^{\\infty} \\left ( -\\tfrac{1}{n}, \\tfrac{1}{n} \\right )." }, { "math_id": 9, "text": "f(0) = (0,0,0, \\ldots ) \\in U," }, { "math_id": 10, "text": "\\varepsilon > 0" }, { "math_id": 11, "text": "(-\\varepsilon, \\varepsilon) \\subset f^{-1}(U)." }, { "math_id": 12, "text": " f\\left (\\tfrac{\\varepsilon}{2} \\right ) = \\left ( \\tfrac{\\varepsilon}{2}, \\tfrac{\\varepsilon}{2}, \\tfrac{\\varepsilon}{2}, \\ldots \\right ) \\in U," }, { "math_id": 13, "text": "\\tfrac{\\varepsilon}{2} > \\tfrac{1}{n}" }, { "math_id": 14, "text": "n > \\tfrac{2}{\\varepsilon}." }, { "math_id": 15, "text": "X = \\prod_{i \\in \\N} X_i" }, { "math_id": 16, "text": "X_i = \\{0,1\\}" }, { "math_id": 17, "text": "\\{x_n\\}_{n=1}^\\infty" }, { "math_id": 18, "text": "(x_n)_m=\\begin{cases}\n 0 & m < n \\\\\n 1 & m \\ge n \n\\end{cases}" }, { "math_id": 19, "text": "S" }, { "math_id": 20, "text": "\\prod_{s \\in S} X = X^S" }, { "math_id": 21, "text": "(f_n)_n" }, { "math_id": 22, "text": "X^S" }, { "math_id": 23, "text": "f\\in X^S" }, { "math_id": 24, "text": "f" }, { "math_id": 25, "text": "S_0\\subset S" }, { "math_id": 26, "text": "N" }, { "math_id": 27, "text": "n>N" }, { "math_id": 28, "text": "(f_n(s))_n" }, { "math_id": 29, "text": "s\\in S\\setminus S_0" }, { "math_id": 30, "text": "s" } ]
https://en.wikipedia.org/wiki?curid=954333
9544343
Self-diffusion
According to the IUPAC definition, the self-diffusion coefficient is the diffusion coefficient formula_0 of species formula_1 when the chemical potential gradient equals zero. It is linked to the diffusion coefficient formula_2 by the equation: formula_3 Here, formula_4 is the activity of the species formula_1 in the solution and formula_5 is the concentration of formula_1. The self-diffusion coefficient is commonly assumed to be equal to the tracer diffusion coefficient, determined by following the movement of an isotope in the material of interest. References.
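As a rough illustration of the relation formula_3, the sketch below evaluates the thermodynamic factor ∂ln a_i/∂ln c_i numerically for a made-up activity model a(c) = c·exp(b·c); the parameter values are arbitrary assumptions, not data for any real system.

```python
import math

# Illustrative only: activity model a(c) = c * exp(b*c), with assumed values.
b = 0.5        # made-up interaction parameter (L/mol)
D = 1.0e-9     # assumed (chemical) diffusion coefficient, m^2/s
c = 0.2        # concentration, mol/L

h = 1e-6       # step in ln(c) for a central finite difference
ln_a = lambda ln_c: ln_c + b * math.exp(ln_c)          # ln a = ln c + b*c
dlna_dlnc = (ln_a(math.log(c) + h) - ln_a(math.log(c) - h)) / (2 * h)

# D_i* = D_i * d(ln c_i)/d(ln a_i) = D_i / (d(ln a_i)/d(ln c_i))
D_self = D / dlna_dlnc
print(f"thermodynamic factor d(ln a)/d(ln c) = {dlna_dlnc:.4f}")   # 1 + b*c = 1.10
print(f"self-diffusion coefficient D* = {D_self:.3e} m^2/s")
```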
[ { "math_id": 0, "text": "D_i^*" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "D_i" }, { "math_id": 3, "text": "D_i^*=D_i\\frac{\\partial\\ln c_i}{\\partial\\ln a_i}." }, { "math_id": 4, "text": "a_i" }, { "math_id": 5, "text": "c_i" } ]
https://en.wikipedia.org/wiki?curid=9544343
9544968
Coulomb damping
Damping mechanism in which kinetic energy is dissipated by sliding friction Coulomb damping is a type of constant mechanical damping in which the system's kinetic energy is absorbed via sliding friction (the friction generated by the relative motion of two surfaces that press against each other). Coulomb damping is a common damping mechanism that occurs in machinery. History. Coulomb damping was so named because Charles-Augustin de Coulomb carried out research in mechanics. He later published a work on friction in 1781 entitled "Theory of Simple Machines" for an Academy of Sciences contest. Coulomb then gained much fame for his work with electricity and magnetism. Modes of Coulombian friction. Coulomb damping absorbs energy with friction, which converts that kinetic energy into thermal energy, i.e. heat. Coulomb friction considers this under two distinct modes: either static, or kinetic. Static friction occurs when two objects are not in relative motion, e.g. if both are stationary. The force "F"s exerted between the objects does not exceed, in magnitude, the product of the normal force N and the "coefficient of static friction" "μ"s: formula_0. Kinetic friction, on the other hand, occurs when two objects are undergoing relative motion, as they slide against each other. The force "F"k exerted between the moving objects is equal in magnitude to the product of the normal force N and the "coefficient of kinetic friction" "μ"k: formula_1. Regardless of the mode, friction always acts to oppose the objects' relative motion. The normal force is taken perpendicularly to the direction of relative motion; under the influence of gravity, and in the common case of an object supported by a horizontal surface, the normal force is just the weight of the object itself. As there is no relative motion under static friction, no work is done, and hence no energy can be dissipated. An oscillating system is (by definition) only damped via kinetic friction. Illustration. Consider a block of mass formula_2 that slides over a rough horizontal surface under the restraint of a spring with a spring constant formula_3. The spring is attached to the block and mounted to an immobile object on the other end, allowing the block to be moved by the force of the spring formula_4, where formula_5 is the horizontal displacement of the block from when the spring is unstretched. On a horizontal surface, the normal force is constant and, by the balance of vertical forces, equal to the weight of the block, i.e. formula_6. As stated earlier, formula_7 acts to oppose the motion of the block. Once in motion, the block will oscillate horizontally back and forth around the equilibrium. Newton's second law states that the equation of motion of the block is formula_8. Above, formula_9 and formula_10 respectively denote the velocity and acceleration of the block. Note that the sign of the kinetic friction term depends on formula_11—the "direction" the block is travelling in—but not the "speed". A real-life example of Coulomb damping occurs in large structures with non-welded joints such as airplane wings. Theory. Coulomb damping dissipates energy constantly because of sliding friction. The magnitude of sliding friction is a constant value, independent of surface area, displacement or position, and velocity. The system undergoing Coulomb damping is periodic or oscillating and restrained by the sliding friction. Essentially, the object in the system is vibrating back and forth around an equilibrium point. 
A system acted upon by Coulomb damping is nonlinear because, as stated earlier, the frictional force always opposes the direction of motion of the system. Because friction is present, the amplitude of the motion decreases or decays with time. Under the influence of Coulomb damping, the amplitude decays linearly with a slope of formula_12 where "ω"n is the natural frequency, the frequency at which the system would oscillate if it were undamped. The frequency and the period of vibration do not change when the damping is constant, as in the case of Coulomb damping. The period "τ" is the amount of time between the repetition of phases during vibration. As time progresses, the sliding object slows and the distance it travels during these oscillations becomes smaller, until it reaches zero, the equilibrium point. The position where the object stops, or its final equilibrium position, could be at a completely different position than where it was initially at rest, because the system is nonlinear; linear systems have only a single equilibrium point.
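A short simulation makes the linear decay visible. The sketch below integrates the equation of motion formula_8 with arbitrary assumed parameter values and records the positive displacement peaks; it is a rough illustration, not a validated solver.

```python
import math

# Minimal simulation sketch of m*x'' = -k*x - sgn(x')*mu*m*g (formula_8 above).
# Parameter values are arbitrary assumptions chosen only to show the decay.
m, k, mu, g = 1.0, 100.0, 0.1, 9.81    # kg, N/m, -, m/s^2
x, v = 0.5, 0.0                        # initial displacement (m) and velocity (m/s)
dt, t = 1e-4, 0.0

peaks = []
prev_v = v
while t < 3.0:
    a = (-k * x - math.copysign(mu * m * g, v)) / m if v != 0 else (-k * x) / m
    v += a * dt
    x += v * dt
    t += dt
    if prev_v > 0 >= v:                # velocity sign change marks a positive displacement peak
        peaks.append((round(t, 3), round(x, 4)))
    prev_v = v

print(peaks[:5])
# Successive positive peaks drop by roughly 4*mu*m*g/k = 0.039 m per full cycle,
# consistent with the linear (not exponential) decay described above.
```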
[ { "math_id": 0, "text": "|F_{\\rm s}| < \\mu_{\\rm s} N" }, { "math_id": 1, "text": "|F_{\\rm k}| = \\mu_{\\rm k} N" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "F = k x" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "N = mg" }, { "math_id": 7, "text": "F_{\\rm k}" }, { "math_id": 8, "text": "m \\ddot x \\ = -F - (\\sgn{\\dot x}) F_k = -k x - (\\sgn{\\dot x}) \\mu_{\\rm k}mg" }, { "math_id": 9, "text": "\\dot x" }, { "math_id": 10, "text": "\\ddot x" }, { "math_id": 11, "text": "\\sgn{\\dot x}" }, { "math_id": 12, "text": "\\pm 2\\mu mg\\omega_{\\rm n}/(k\\pi)" } ]
https://en.wikipedia.org/wiki?curid=9544968
954539
Dini test
In mathematics, the Dini and Dini–Lipschitz tests are highly precise tests that can be used to prove that the Fourier series of a function converges at a given point. These tests are named after Ulisse Dini and Rudolf Lipschitz. Definition. Let f be a function on [0,2π], let t be some point and let δ be a positive number. We define the local modulus of continuity at the point t by formula_0 Notice that we consider here f to be a periodic function, e.g. if "t" = 0 and ε is negative then we define "f"("ε") = "f"(2π + "ε"). The global modulus of continuity (or simply the modulus of continuity) is defined by formula_1 With these definitions we may state the main results: Theorem (Dini's test): Assume a function f satisfies at a point t that formula_2 Then the Fourier series of f converges at t to "f"("t"). For example, the theorem holds with "ωf"("δ") = log−2(1/"δ") but does not hold with log−1(1/"δ"). Theorem (the Dini–Lipschitz test): Assume a function f satisfies formula_3 Then the Fourier series of f converges uniformly to f. In particular, any function that obeys a Hölder condition satisfies the Dini–Lipschitz test. Precision. Both tests are the best of their kind. For the Dini–Lipschitz test, it is possible to construct a function f with its modulus of continuity satisfying the test with O instead of o, i.e. formula_4 and the Fourier series of f diverges. For the Dini test, the statement of precision is slightly longer: it says that for any function Ω such that formula_5 there exists a function f such that formula_6 and the Fourier series of f diverges at 0. References.
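A short calculation makes the log−2 versus log−1 comparison concrete. Only the behaviour as δ → 0 matters for Dini's condition formula_2, so the integrals below are taken over (0, 1/2) for convenience; the substitution u = log(1/δ), du = −dδ/δ gives:

```latex
\int_0^{1/2} \frac{\log^{-2}(1/\delta)}{\delta}\,\mathrm{d}\delta
  = \int_{\log 2}^{\infty} \frac{\mathrm{d}u}{u^{2}}
  = \frac{1}{\log 2} < \infty,
\qquad
\int_0^{1/2} \frac{\log^{-1}(1/\delta)}{\delta}\,\mathrm{d}\delta
  = \int_{\log 2}^{\infty} \frac{\mathrm{d}u}{u}
  = \infty.
```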
[ { "math_id": 0, "text": "\\left.\\right.\\omega_f(\\delta;t)=\\max_{|\\varepsilon| \\le \\delta} |f(t)-f(t+\\varepsilon)|" }, { "math_id": 1, "text": "\\omega_f(\\delta) = \\max_t \\omega_f(\\delta;t)" }, { "math_id": 2, "text": "\\int_0^\\pi \\frac{1}{\\delta}\\omega_f(\\delta;t)\\,\\mathrm{d}\\delta < \\infty." }, { "math_id": 3, "text": "\\omega_f(\\delta)=o\\left(\\log\\frac{1}{\\delta}\\right)^{-1}." }, { "math_id": 4, "text": "\\omega_f(\\delta)=O\\left(\\log\\frac{1}{\\delta}\\right)^{-1}." }, { "math_id": 5, "text": "\\int_0^\\pi \\frac{1}{\\delta}\\Omega(\\delta)\\,\\mathrm{d}\\delta = \\infty" }, { "math_id": 6, "text": "\\omega_f(\\delta;0) < \\Omega(\\delta)" } ]
https://en.wikipedia.org/wiki?curid=954539
95465
Stirling number
Important sequences in combinatorics In mathematics, Stirling numbers arise in a variety of analytic and combinatorial problems. They are named after James Stirling, who introduced them in a purely algebraic setting in his book "Methodus differentialis" (1730). They were rediscovered and given a combinatorial meaning by Masanobu Saka in 1782. Two different sets of numbers bear this name: the Stirling numbers of the first kind and the Stirling numbers of the second kind. Additionally, Lah numbers are sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them. A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions of "n" elements into "k" non-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear). Notation. Several different notations for Stirling numbers are in use. Ordinary (signed) Stirling numbers of the first kind are commonly denoted: formula_0 Unsigned Stirling numbers of the first kind, which count the number of permutations of "n" elements with "k" disjoint cycles, are denoted: formula_1 Stirling numbers of the second kind, which count the number of ways to partition a set of "n" elements into "k" nonempty subsets: formula_2 Abramowitz and Stegun use an uppercase formula_3 and a blackletter formula_4, respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy to binomial coefficients, was introduced in 1935 by Jovan Karamata and promoted later by Donald Knuth. (The bracket notation conflicts with a common notation for Gaussian coefficients.) The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page for Stirling numbers and exponential generating functions. Another infrequent notation is formula_5 and formula_6. Expansions of falling and rising factorials. Stirling numbers express coefficients in expansions of falling and rising factorials (also known as the Pochhammer symbol) as polynomials. That is, the falling factorial, defined as formula_7 is a polynomial in x of degree n whose expansion is formula_8 with (signed) Stirling numbers of the first kind as coefficients. Note that formula_9 by convention, because it is an empty product. The notations formula_10 for the falling factorial and formula_11 for the rising factorial are also often used. (Confusingly, the Pochhammer symbol that many use for "falling" factorials is used in special functions for "rising" factorials.) Similarly, the rising factorial, defined as formula_12 is a polynomial in x of degree n whose expansion is formula_13 with unsigned Stirling numbers of the first kind as coefficients. One of these expansions can be derived from the other by observing that formula_14 Stirling numbers of the second kind express the reverse relations: formula_15 and formula_16 As change of basis coefficients. Considering the set of polynomials in the (indeterminate) variable "x" as a vector space, each of the three sequences formula_17 is a basis. That is, every polynomial in "x" can be written as a sum formula_18 for some unique coefficients formula_19 (similarly for the other two bases). 
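These expansions are easy to verify for small arguments; the Python sketch below (helper names are illustrative, not from any particular library) checks the rising-factorial expansion formula_13 using the standard recurrence for the unsigned Stirling numbers of the first kind.

```python
from math import prod

def stirling1_unsigned(n: int, k: int) -> int:
    """[n, k]: number of permutations of n elements with k disjoint cycles."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    # Recurrence: [n, k] = [n-1, k-1] + (n-1) * [n-1, k]
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

def rising_factorial(x: int, n: int) -> int:
    return prod(x + i for i in range(n))   # empty product = 1 for n = 0

for x in range(-3, 4):
    for n in range(6):
        assert rising_factorial(x, n) == sum(
            stirling1_unsigned(n, k) * x**k for k in range(n + 1)
        )
print("rising-factorial expansion verified for n <= 5")
```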
The above relations then express the change of basis between them, as summarized in the following commutative diagram: The coefficients for the two bottom changes are described by the Lah numbers below. Since coefficients in any basis are unique, one can define Stirling numbers this way, as the coefficients expressing polynomials of one basis in terms of another, that is, the unique numbers relating formula_20 with falling and rising factorials as above. Falling factorials define, up to scaling, the same polynomials as binomial coefficients: formula_21. The changes between the standard basis formula_22 and the basis formula_23 are thus described by similar formulas: formula_24. Example. Expressing a polynomial in the basis of falling factorials is useful for calculating sums of the polynomial evaluated at consecutive integers. Indeed, the sum of falling factorials with fixed "k" can expressed as another falling factorial (for formula_25) formula_26 This can be proved by induction. For example, the sum of fourth powers of integers up to "n" (this time with "n" included), is: formula_27 Here the Stirling numbers can be computed from their definition as the number of partitions of 4 elements into "k" non-empty unlabeled subsets. In contrast, the sum formula_28 in the standard basis is given by Faulhaber's formula, which in general is more complicated. As inverse matrices. The Stirling numbers of the first and second kinds can be considered inverses of one another: formula_29 and formula_30 where formula_31 is the Kronecker delta. These two relationships may be understood to be matrix inverse relationships. That is, let "s" be the lower triangular matrix of Stirling numbers of the first kind, whose matrix elements formula_32 The inverse of this matrix is "S", the lower triangular matrix of Stirling numbers of the second kind, whose entries are formula_33 Symbolically, this is written formula_34 Although "s" and "S" are infinite, so calculating a product entry involves an infinite sum, the matrix multiplications work because these matrices are lower triangular, so only a finite number of terms in the sum are nonzero. Lah numbers. The Lah numbers formula_35 are sometimes called Stirling numbers of the third kind. By convention, formula_36 and formula_37 if formula_38 or formula_39. These numbers are coefficients expressing falling factorials in terms of rising factorials and vice versa: formula_40 and formula_41 As above, this means they express the change of basis between the bases formula_42 and formula_43, completing the diagram. In particular, one formula is the inverse of the other, thus: formula_44 Similarly, composing the change of basis from formula_45 to formula_20 with the change of basis from formula_20 to formula_46 gives the change of basis directly from formula_45 to formula_46: formula_47 and similarly for other compositions. In terms of matrices, if formula_48 denotes the matrix with entries formula_49 and formula_50 denotes the matrix with entries formula_51, then one is the inverse of the other: formula_52. Composing the matrix of unsigned Stirling numbers of the first kind with the matrix of Stirling numbers of the second kind gives the Lah numbers: formula_53. Enumeratively, formula_54 can be defined as the number of partitions of "n" elements into "k" non-empty unlabeled subsets, where each subset is endowed with no order, a cyclic order, or a linear order, respectively. In particular, this implies the inequalities: formula_55 Inversion relations and the Stirling transform. 
For any pair of sequences, formula_56 and formula_57, related by a finite sum Stirling number formula given by formula_58 for all integers formula_59, we have a corresponding inversion formula for formula_60 given by formula_61 The lower indices could be any integer between formula_62 and formula_63. These inversion relations between the two sequences translate into functional equations between the sequence exponential generating functions given by the Stirling (generating function) transform as formula_64 and formula_65 For formula_66, the differential operators formula_67 and formula_68 are related by the following formulas for all integers formula_59: formula_69 Another pair of "inversion" relations involving the Stirling numbers relate the forward differences and the ordinary formula_70 derivatives of a function, formula_71, which is analytic for all formula_72 by the formulas formula_73 formula_74 Similar properties. See the specific articles for details. Symmetric formulae. Abramowitz and Stegun give the following symmetric formulae that relate the Stirling numbers of the first and second kind. formula_75 and formula_76 Stirling numbers with negative integral values. The Stirling numbers can be extended to negative integral values, but not all authors do so in the same way. Regardless of the approach taken, it is worth noting that Stirling numbers of first and second kind are connected by the relations: formula_77 when "n" and "k" are nonnegative integers. So we have the following table for formula_78: Donald Knuth defined the more general Stirling numbers by extending a recurrence relation to all integers. In this approach, formula_79 and formula_80 are zero if "n" is negative and "k" is nonnegative, or if "n" is nonnegative and "k" is negative, and so we have, for "any" integers "n" and "k", formula_81 On the other hand, for positive integers "n" and "k", David Branson defined formula_82 formula_83 formula_84 and formula_85 (but not formula_86 or formula_87). In this approach, one has the following extension of the recurrence relation of the Stirling numbers of the first kind: formula_88, For example, formula_89 This leads to the following table of values of formula_90 for negative integral "n". In this case formula_91 where formula_92 is a Bell number, and so one may define the negative Bell numbers by formula_93. For example, this produces formula_94, generally formula_95. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
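The inverse relationship between the two kinds, described in the section on inverse matrices above, can also be checked directly. The following Python sketch (again with illustrative helper names) builds both triangular arrays from their recurrences and verifies that their product is the identity for small sizes.

```python
# Check that sum_j s(n, j) * S(j, k) = delta(n, k), using the standard recurrences.

def s_signed(n, k):
    """Signed Stirling numbers of the first kind: s(n,k) = s(n-1,k-1) - (n-1)*s(n-1,k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return s_signed(n - 1, k - 1) - (n - 1) * s_signed(n - 1, k)

def S2(n, k):
    """Stirling numbers of the second kind: S(n,k) = S(n-1,k-1) + k*S(n-1,k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return S2(n - 1, k - 1) + k * S2(n - 1, k)

N = 7
for n in range(N):
    for k in range(N):
        total = sum(s_signed(n, j) * S2(j, k) for j in range(N))
        assert total == (1 if n == k else 0), (n, k, total)
print("s and S are inverse lower-triangular matrices up to size", N)
```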
[ { "math_id": 0, "text": " s(n,k)\\,." }, { "math_id": 1, "text": " \\biggl[{n \\atop k}\\biggr] =c(n,k)=|s(n,k)|=(-1)^{n-k} s(n,k)\\," }, { "math_id": 2, "text": " \\biggl\\{{\\!n\\! \\atop \\!k\\!}\\biggr\\} = S(n,k) = S_n^{(k)} \\," }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "\\mathfrak S" }, { "math_id": 5, "text": "s_1(n,k)" }, { "math_id": 6, "text": "s_2(n,k)" }, { "math_id": 7, "text": "\\ (x)_{n} = x(x-1)\\ \\cdots(x-n+1)\\ ," }, { "math_id": 8, "text": "(x)_{n}\\ =\\ \\sum_{k=0}^n\\ s(n,k)\\ x^k\\ " }, { "math_id": 9, "text": "\\ (x)_0 \\equiv 1\\ ," }, { "math_id": 10, "text": "\\ x^{\\underline{n}}\\ " }, { "math_id": 11, "text": "\\ x^{\\overline{n}}\\ " }, { "math_id": 12, "text": "\\ x^{(n)}\\ =\\ x(x+1)\\ \\cdots(x+n-1)\\ ," }, { "math_id": 13, "text": " x^{(n)}\\ =\\ \\sum_{k=0}^n\\ \\biggl[{n \\atop k}\\biggr]\\ x^k\\ =\\ \\sum_{k=0}^n\\ (-1)^{n-k}\\ s(n,k)\\ x^k\\ ," }, { "math_id": 14, "text": "\\ x^{(n)} = (-1)^n (-x)_{n} ~." }, { "math_id": 15, "text": "\\ x^n\\ =\\ \\sum_{k=0}^n\\ S(n,k)\\ (x)_k\\ " }, { "math_id": 16, "text": "\\ x^n\\ =\\ \\sum_{k=0}^n\\ (-1)^{n-k}\\ S(n,k)\\ x^{(k)} ~." }, { "math_id": 17, "text": "x^0,x^1,x^2,x^3,\\dots \\quad (x)_0,(x)_1,(x)_2,\\dots \\quad x^{(0)},x^{(1)},x^{(2)},\\dots" }, { "math_id": 18, "text": "a_0 x^{(0)} + a_1 x^{(1)} + \\dots + a_n x^{(n)}" }, { "math_id": 19, "text": "a_i" }, { "math_id": 20, "text": "x^n" }, { "math_id": 21, "text": "\\binom{x}{k} = (x)_k/k!" }, { "math_id": 22, "text": "\\textstyle x^0, x^1, x^2, \\dots" }, { "math_id": 23, "text": "\\binom{x}{0}, \\binom{x}{1}, \\binom{x}{2}, \\dots" }, { "math_id": 24, "text": "x^n=\\sum_{k=0}^n \\biggl\\{{\\!n\\! \\atop \\!k\\!}\\biggr\\} k! \\binom{x}{k} \\quad \\text{and} \\quad \\binom{x}{n}=\\sum_{k=0}^n \\frac{s(n,k)}{n!} x^k" }, { "math_id": 25, "text": "k\\ne-1" }, { "math_id": 26, "text": "\\sum_{0\\leq i < n} (i)_k = \\frac{(n)_{k+1}}{k+1} " }, { "math_id": 27, "text": "\\begin{align}\n\\sum_{i=0}^{n} i^4 & = \\sum_{i=0}^n \\sum_{k=0}^4 \\biggl\\{{\\!4\\! \\atop \\!k\\!}\\biggr\\} (i)_k = \\sum_{k=0}^4 \\biggl\\{{\\!4\\! \\atop \\!k\\!}\\biggr\\} \\sum_{i=0}^n (i)_k = \\sum_{k=0}^4 \\biggl\\{{\\!4\\! \\atop \\!k\\!}\\biggr\\} \\frac{(n{+}1)_{k+1}}{k{+}1} \\\\[10mu]\n& = \\biggl\\{{\\!4\\! \\atop \\!1\\!}\\biggr\\} \\frac{(n{+}1)_{2}}2\n + \\biggl\\{{\\!4\\! \\atop \\!2\\!}\\biggr\\} \\frac{(n{+}1)_{3}}3\n + \\biggl\\{{\\!4\\! \\atop \\!3\\!}\\biggr\\} \\frac{(n{+}1)_{4}}4\n + \\biggl\\{{\\!4\\! \\atop \\!4\\!}\\biggr\\} \\frac{(n{+}1)_{5}}5 \\\\[8mu]\n& = \\frac12 (n{+}1)_{2} + \\frac73 (n{+}1)_{3} + \\frac64 (n{+}1)_{4} + \\frac15 (n{+}1)_{5}\\,.\n\\end{align}" }, { "math_id": 28, "text": "\\sum_{i=0}^n i^k" }, { "math_id": 29, "text": "\\sum_{j=k}^n s(n,j) S(j,k) = \\sum_{j=k}^n (-1)^{n-j} \\biggl[{n \\atop j}\\biggr] \\biggl\\{{\\!j\\! \\atop \\!k\\!}\\biggr\\} = \\delta_{n,k}" }, { "math_id": 30, "text": "\\sum_{j=k}^n S(n,j) s(j,k) = \\sum_{j=k}^n (-1)^{j-k} \\biggl\\{{\\!n\\! \\atop \\!j\\!}\\biggr\\} \\biggl[{j \\atop k}\\biggr]= \\delta_{n,k}," }, { "math_id": 31, "text": "\\delta_{nk}" }, { "math_id": 32, "text": "s_{nk}=s(n,k).\\," }, { "math_id": 33, "text": "S_{nk}=S(n,k)." 
}, { "math_id": 34, "text": "s^{-1} = S\\," }, { "math_id": 35, "text": "L(n,k) = {n-1 \\choose k-1} \\frac{n!}{k!}" }, { "math_id": 36, "text": "L(0,0)=1" }, { "math_id": 37, "text": "L(n,k)=0" }, { "math_id": 38, "text": "n<k" }, { "math_id": 39, "text": "k = 0 < n" }, { "math_id": 40, "text": "x^{(n)} = \\sum_{k=0}^n L(n,k) (x)_k\\quad" }, { "math_id": 41, "text": "\\quad(x)_n = \\sum_{k=0}^n (-1)^{n-k} L(n,k)x^{(k)}." }, { "math_id": 42, "text": "(x)_0,(x)_1,(x)_2,\\cdots" }, { "math_id": 43, "text": "x^{(0)},x^{(1)},x^{(2)},\\cdots" }, { "math_id": 44, "text": "\\sum_{j=k}^n (-1)^{j-k} L(n,j) L(j,k) = \\delta_{n,k}." }, { "math_id": 45, "text": "x^{(n)}" }, { "math_id": 46, "text": "(x)_{n}" }, { "math_id": 47, "text": " L(n,k) = \\sum_{j=k}^n \\biggl[{n \\atop j}\\biggr] \\biggl\\{{\\!j\\! \\atop \\!k\\!}\\biggr\\} ," }, { "math_id": 48, "text": "L" }, { "math_id": 49, "text": "L_{nk}=L(n,k)" }, { "math_id": 50, "text": "L^{-}" }, { "math_id": 51, "text": "L^{-}_{nk}=(-1)^{n-k}L(n,k)" }, { "math_id": 52, "text": " L^{-} = L^{-1}" }, { "math_id": 53, "text": "L = |s| \\cdot S" }, { "math_id": 54, "text": "\\left\\{{\\!n\\! \\atop \\!k\\!}\\right\\}, \\left[{n \\atop k}\\right] , L(n,k)" }, { "math_id": 55, "text": "\\biggl\\{{\\!n\\! \\atop \\!k\\!}\\biggr\\} \\leq \\biggl[{n \\atop k}\\biggr] \\leq L(n,k)." }, { "math_id": 56, "text": "\\{f_n\\}" }, { "math_id": 57, "text": "\\{g_n\\}" }, { "math_id": 58, "text": "g_n = \\sum_{k=0}^{n} \\left\\{\\begin{matrix} n \\\\ k \\end{matrix} \\right\\} f_k, " }, { "math_id": 59, "text": "n \\geq 0" }, { "math_id": 60, "text": "f_n" }, { "math_id": 61, "text": "f_n = \\sum_{k=0}^{n} \\left[\\begin{matrix} n \\\\ k \\end{matrix} \\right] (-1)^{n-k} g_k. " }, { "math_id": 62, "text": "0" }, { "math_id": 63, "text": "n" }, { "math_id": 64, "text": "\\widehat{G}(z) = \\widehat{F}\\left(e^z-1\\right)" }, { "math_id": 65, "text": "\\widehat{F}(z) = \\widehat{G}\\left(\\log(1+z)\\right). " }, { "math_id": 66, "text": "D = d/dx" }, { "math_id": 67, "text": "x^nD^n" }, { "math_id": 68, "text": "(xD)^n" }, { "math_id": 69, "text": "\n\\begin{align}\n(xD)^n &= \\sum_{k=0}^n S(n, k) x^k D^k \\\\\nx^n D^n &= \\sum_{k=0}^n s(n, k) (xD)^k = (xD)_n = xD(xD - 1)\\ldots (xD - n + 1)\n\\end{align}\n" }, { "math_id": 70, "text": "n^{th}" }, { "math_id": 71, "text": "f(x)" }, { "math_id": 72, "text": "x" }, { "math_id": 73, "text": "\\frac{1}{k!} \\frac{d^k}{dx^k} f(x) = \\sum_{n=k}^{\\infty} \\frac{s(n, k)}{n!} \\Delta^n f(x)" }, { "math_id": 74, "text": "\\frac{1}{k!} \\Delta^k f(x) = \\sum_{n=k}^{\\infty} \\frac{S(n, k)}{n!} \\frac{d^n}{dx^n} f(x). " }, { "math_id": 75, "text": " \\left[{ n \\atop k } \\right] = \\sum_{j=n}^{2n-k} (-1)^{j-k} \\binom{j-1}{k-1} \\binom{2n-k}{j} \\left\\{{ j-k \\atop j-n} \\right\\} " }, { "math_id": 76, "text": "\n\\left\\{{n \\atop k}\\right\\} = \\sum_{j=n}^{2n-k} (-1)^{j-k} \\binom{j-1}{k-1} \\binom{2n-k}{j} \\left[{j-k \\atop j-n } \\right] \n" }, { "math_id": 77, "text": "\\biggl[{n \\atop k}\\biggr] = \\biggl\\{{\\!-k\\! \\atop \\!-n\\!}\\biggr\\} \\quad \\text{and} \\quad\n\\biggl\\{{\\!n\\! \\atop \\!k\\!}\\biggr\\} = \\biggl[{-k \\atop -n}\\biggr]" }, { "math_id": 78, "text": "\\left[{-n \\atop -k}\\right]" }, { "math_id": 79, "text": " \\left[{n \\atop k}\\right]" }, { "math_id": 80, "text": "\\left\\{{\\!n\\! \\atop \\!k\\!}\\right\\}" }, { "math_id": 81, "text": "\\biggl[{n \\atop k}\\biggr] = \\biggl\\{{\\!-k\\! \\atop \\!-n\\!}\\biggr\\} \\quad \\text{and} \\quad\n\\biggl\\{{\\!n\\! 
\\atop \\!k\\!}\\biggr\\} = \\biggl[{-k \\atop -n}\\biggr]." }, { "math_id": 82, "text": " \\left[{-n \\atop -k}\\right]\\!," }, { "math_id": 83, "text": "\\left\\{{\\!-n\\! \\atop \\!-k\\!}\\right\\}\\!," }, { "math_id": 84, "text": " \\left[{-n \\atop k}\\right]\\!," }, { "math_id": 85, "text": "\\left\\{{\\!-n\\! \\atop \\!k\\!}\\right\\}" }, { "math_id": 86, "text": " \\left[{n \\atop -k}\\right]" }, { "math_id": 87, "text": "\\left\\{{\\!n\\! \\atop \\!-k\\!}\\right\\}" }, { "math_id": 88, "text": " \\biggl[{-n \\atop k}\\biggr]\n= \\frac{(-1)^{n+1}}{n!}\\sum_{i=1}^{n}\\frac{(-1)^{i+1}}{i^k} \\binom ni " }, { "math_id": 89, "text": "\\left[{-5 \\atop k}\\right] = \\frac1{120}\\Bigl(5-\\frac{10}{2^k}+\\frac{10}{3^k}-\\frac 5{4^k}+\\frac 1{5^k}\\Bigr)." }, { "math_id": 90, "text": "\\left[{n \\atop k}\\right]" }, { "math_id": 91, "text": "\\sum_{n=1}^{\\infty}\\left[{-n \\atop -k}\\right]=B_{k} " }, { "math_id": 92, "text": "B_{k}" }, { "math_id": 93, "text": "\\sum_{n=1}^{\\infty}\\left[{-n \\atop k}\\right]=:B_{-k}" }, { "math_id": 94, "text": "\\sum_{n=1}^{\\infty}\\left[{-n \\atop 1}\\right]=B_{-1}=\\frac 1e\\sum_{j=1}^\\infty\\frac1{j\\cdot j!}=\\frac 1e\\int_0^1\\frac{e^t-1}{t}dt=0.4848291\\dots" }, { "math_id": 95, "text": "B_{-k}=\\frac 1e\\sum_{j=1}^\\infty\\frac1{j^kj!} " } ]
https://en.wikipedia.org/wiki?curid=95465
9550
Electricity
Phenomena related to electric charge Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others. The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts. Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force behind the Second Industrial Revolution, with electricity's versatility driving transformations in both industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society. History. Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature. 
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote "De Magnete", in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word "electricus" ("of amber" or "like amber", from ἤλεκτρον, "elektron", the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's "Pseudodoxia Epidemica" of 1646. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges. In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.148 While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life. In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". 
The photoelectric effect is also employed in photocells such as can be found in solar panels. The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor. Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. Concepts. Electric charge. By modern convention, the charge carried by electrons is defined as negative, and that by protons is positive. Before these particles were discovered, Benjamin Franklin had defined a positive charge as being the charge acquired by a glass rod when it is rubbed with a silk cloth. A proton by definition carries a charge of exactly 1.602176634 × 10^−19 coulombs. This value is also defined as the elementary charge. No object can have a charge smaller than the elementary charge, and any amount of charge an object may carry is a multiple of the elementary charge. An electron has an equal negative charge, i.e. −1.602176634 × 10^−19 coulombs. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle. The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: "like-charged objects repel and opposite-charged objects attract". The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10^42 times that of the gravitational attraction pulling them together. Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. 
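The 10^42 figure can be reproduced with a few lines of Python; the constants below are standard rounded values, and the ratio is independent of the separation because both forces follow an inverse-square law.

```python
# Quick check of the electron-electron force ratio quoted above.
k_e = 8.9875517923e9      # Coulomb constant, N m^2 / C^2
G   = 6.67430e-11         # gravitational constant, N m^2 / kg^2
e   = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg

# Both forces scale as 1/r^2, so the ratio does not depend on the distance.
ratio = (k_e * e**2) / (G * m_e**2)
print(f"electric / gravitational force between two electrons ~ {ratio:.2e}")
# ~ 4.2e42, i.e. about 10^42, as stated in the text.
```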
Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other. Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer. Electric current. The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator. By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the "opposite" direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation. The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires. Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass.370 He had discovered electromagnetism, a fundamental interaction between electricity and magnetics. 
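The claim above that charge carriers drift at only fractions of a millimetre per second while signals travel near light speed can be illustrated with a standard estimate: for a current I in a wire of cross-section A carrying n free electrons per cubic metre, the drift velocity is v = I / (n·e·A). The values below are typical assumed figures for a copper wire, not measurements.

```python
# Rough drift-velocity estimate v = I / (n * e * A); all values are assumed.
I = 1.0          # current, amperes
A = 1.0e-6       # cross-sectional area, m^2 (a 1 mm^2 wire)
n = 8.5e28       # free electrons per m^3, typical figure for copper
e = 1.602e-19    # elementary charge, coulombs

v_drift = I / (n * e * A)
print(f"drift velocity ~ {v_drift * 1000:.3f} mm/s")   # on the order of 0.07 mm/s
```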
The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment. In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised. Electric field. The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker. An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field. The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves. 
A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body. This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects. The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to more than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV, with discharge energies as great as 250 kWh. The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect. Electric potential. The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, which is the energy required to move a unit charge between two specified points. An electric field has the special property that it is "conservative", which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage. For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable. Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. 
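The lightning figures quoted above can be tied together with the definition of potential (work per unit charge). The sketch below treats the whole 250 kWh as being delivered at the full 100 MV, which is a crude simplification, so the resulting charge is only an order-of-magnitude estimate.

```python
# Order-of-magnitude estimate only; real lightning strokes vary widely.
voltage = 100e6              # volts (figure quoted in the text)
energy  = 250 * 3.6e6        # 250 kWh converted to joules
charge  = energy / voltage   # from W = q * V

print(f"discharge energy: {energy:.1e} J")
print(f"implied charge transferred: {charge:.0f} C")   # about 9 C
```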
As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface. The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together. Electromagnets. Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too. Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires carrying currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere. This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained. Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work. Electric circuits. 
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task. The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli. The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one ampere. The capacitor is a development of the Leyden jar: a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol "F": one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it. The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one. Electric power. Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second. Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter "P". The term "wattage" is used colloquially to mean "electric power in watts." 
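As a small numerical sketch tying Ohm's law to the watt, the current through a resistor follows from V = IR and the power it dissipates from P = IV; the supply voltage and resistance below are illustrative assumptions only.

    # Ohm's law and resistive power dissipation for assumed illustrative values.
    V = 230.0        # assumed supply voltage in volts
    R = 529.0        # assumed resistance in ohms
    I = V / R        # Ohm's law gives the current in amperes
    P = V * I        # power dissipated in watts; equivalently I**2 * R
    print(f"I = {I:.3f} A, P = {P:.1f} W")   # about 0.435 A and 100.0 W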
The electric power in watts produced by an electric current "I" consisting of a charge of "Q" coulombs every "t" seconds passing through an electric potential (voltage) difference of "V" is formula_0 where "Q" is electric charge in coulombs, "t" is time in seconds, "I" is electric current in amperes, and "V" is electric potential or voltage in volts. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ), which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency. Electronics. Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system. Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering. Electromagnetic wave. Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge, are one of the great milestones of theoretical physics. The work of many researchers enabled the use of electronics to convert signals into high-frequency oscillating currents which, via suitably shaped conductors, permit the transmission and reception of these signals via radio waves over very long distances. Production, storage and uses. Generation and transmission. In the 6th century BC the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. 
The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity. Electrical power is usually generated by electro-mechanical generators. These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect. Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China. Environmental concerns with electricity generation, in particular the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels. Transmission and storage. The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed. Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage such as hydrogen, thermal storage, and mechanical storage (such as pumped hydropower). Applications. Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector. The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. 
While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps). The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership. Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process. Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square. Electricity and the natural world. Physiological effects. A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century. Electrical phenomena in nature. Electricity is not a human invention, and may be observed in several forms in nature, notably lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. 
The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. This phenomenon is known as piezoelectricity, from the Greek "piezein" (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly. Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; such electric fish are found in several different orders. Fish of the order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants. Cultural perception. It is said that in the 1850s, British politician William Ewart Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, "One day sir, you may tax it." However, according to Snopes.com, "the anecdote should be considered apocryphal because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death." In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored "Frankenstein" (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films. 
As the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem "Sons of Martha". Electrically powered vehicles of every sort featured prominently in adventure stories such as those of Jules Verne and the "Tom Swift" books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers. With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it requires particular attention from popular culture only when it "stops" flowing, an event that usually signals disaster. The people who "keep" it flowing, such as the nameless hero of Jimmy Webb's song "Wichita Lineman" (1968), are still often cast as heroic, wizard-like figures. Notes.
[ { "math_id": 0, "text": "P = \\text{work done per unit time} = \\frac {QV}{t} = IV \\," } ]
https://en.wikipedia.org/wiki?curid=9550
9550030
History of algebra
Algebra can essentially be considered as doing computations similar to those of arithmetic but with non-numerical mathematical objects. However, until the 19th century, algebra consisted essentially of the theory of equations. For example, the fundamental theorem of algebra belongs to the theory of equations and is not, nowadays, considered as belonging to algebra (in fact, every proof must use the completeness of the real numbers, which is not an algebraic property). This article describes the history of the theory of equations, called here "algebra", from the origins to the emergence of algebra as a separate area of mathematics. Etymology. The word "algebra" is derived from the Arabic word "al-jabr", and this comes from the treatise written in the year 830 by the medieval Persian mathematician Al-Khwārizmī, the Arabic title of which, "Kitāb al-muḫtaṣar fī ḥisāb al-ğabr wa-l-muqābala", can be translated as "The Compendious Book on Calculation by Completion and Balancing". The treatise provided for the systematic solution of linear and quadratic equations. According to one history, "[i]t is not certain just what the terms "al-jabr" and "muqabalah" mean, but the usual interpretation is similar to that implied in the previous translation. The word 'al-jabr' presumably meant something like 'restoration' or 'completion' and seems to refer to the transposition of subtracted terms to the other side of an equation; the word 'muqabalah' is said to refer to 'reduction' or 'balancing'—that is, the cancellation of like terms on opposite sides of the equation. Arabic influence in Spain long after the time of al-Khwarizmi is found in "Don Quixote", where the word 'algebrista' is used for a bone-setter, that is, a 'restorer'." The term is used by al-Khwarizmi to describe the operations that he introduced, "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. Stages of algebra. Algebraic expression. Algebra did not always make use of the symbolism that is now ubiquitous in mathematics; instead, it went through three distinct stages. The stages in the development of symbolic algebra are approximately as follows: rhetorical algebra, in which equations are written out in full sentences; syncopated algebra, in which some shorthand abbreviations are used; and symbolic algebra, in which equations are expressed entirely in symbols. Just as important as the use or lack of symbolism in algebra was the degree of the equations that were addressed. Quadratic equations played an important role in early algebra; and throughout most of history, until the early modern period, all quadratic equations were classified as belonging to one of three categories: formula_1, formula_2, or formula_3, where formula_4 and formula_5 are positive. This trichotomy comes about because quadratic equations of the form formula_6, with formula_4 and formula_5 positive, have no positive roots. In between the rhetorical and syncopated stages of symbolic algebra, a geometric constructive algebra was developed by classical Greek and Vedic Indian mathematicians in which algebraic equations were solved through geometry. For instance, an equation of the form formula_7 was solved by finding the side of a square of area formula_8 Conceptual stages. In addition to the three stages of expressing algebraic ideas, some authors recognized four conceptual stages in the development of algebra that occurred alongside the changes in expression. These four stages were approximately as follows: a geometric stage, in which the concepts of algebra are largely geometric; a static equation-solving stage, in which the objective is to find numbers satisfying certain relationships; a dynamic function stage, in which motion and variable quantities become an underlying idea; and an abstract stage, in which mathematical structure plays the central role. Babylon. The origins of algebra can be traced to the ancient Babylonians, who developed a positional number system that greatly aided them in solving their rhetorical algebraic equations. 
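The first of the three classical forms above, formula_1 (that is, x^2 + px = q with p and q positive), always has exactly one positive root. The short sketch below finds it by completing the square; it is an illustration in modern notation, not a reconstruction of any ancient procedure.

    # One positive root of x**2 + p*x = q (p, q > 0), found by completing the square:
    # (x + p/2)**2 = q + (p/2)**2, hence x = sqrt(q + (p/2)**2) - p/2.
    import math

    def positive_root(p, q):
        # With p and q positive the square root exceeds p/2, so the result is positive.
        return math.sqrt(q + (p / 2) ** 2) - p / 2

    print(positive_root(10, 39))   # 3.0, the positive root of x**2 + 10x = 39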
The Babylonians were not interested in exact solutions, but rather approximations, and so they would commonly use linear interpolation to approximate intermediate values. One of the most famous tablets is the Plimpton 322 tablet, created around 1900–1600 BC, which gives a table of Pythagorean triples and represents some of the most advanced mathematics prior to Greek mathematics. Babylonian algebra was much more advanced than the Egyptian algebra of the time; whereas the Egyptians were mainly concerned with linear equations, the Babylonians were more concerned with quadratic and cubic equations. The Babylonians had developed flexible algebraic operations with which they were able to add equals to equals and multiply both sides of an equation by like quantities so as to eliminate fractions and factors. They were familiar with many simple forms of factoring, three-term quadratic equations with positive roots, and many cubic equations, although it is not known if they were able to reduce the general cubic equation. Ancient Egypt. Ancient Egyptian algebra dealt mainly with linear equations, while the Babylonians found these equations too elementary, and developed mathematics to a higher level than the Egyptians. The Rhind Papyrus, also known as the Ahmes Papyrus, is an ancient Egyptian papyrus written c. 1650 BC by Ahmes, who transcribed it from an earlier work that he dated to between 2000 and 1800 BC. It is the most extensive ancient Egyptian mathematical document known to historians. The Rhind Papyrus contains problems where linear equations of the form formula_9 and formula_10 are solved, where formula_11 and formula_12 are known and formula_13, which is referred to as "aha" or heap, is the unknown. The solutions were possibly, but not likely, arrived at by using the "method of false position", or "regula falsi", where first a specific value is substituted into the left hand side of the equation, then the required arithmetic calculations are done, thirdly the result is compared to the right hand side of the equation, and finally the correct answer is found through the use of proportions. In some of the problems the author "checks" his solution, thereby writing one of the earliest known simple proofs. Greek mathematics. It is sometimes alleged that the Greeks had no algebra, but this is disputed. By the time of Plato, Greek mathematics had undergone a drastic change. The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them, and with this new form of algebra they were able to find solutions to equations by using a process that they invented, known as "the application of areas". "The application of areas" is only a part of geometric algebra and it is thoroughly covered in Euclid's "Elements". An example of geometric algebra would be solving the linear equation formula_17 The ancient Greeks would solve this equation by looking at it as an equality of areas rather than as an equality between the ratios formula_14 and formula_15 The Greeks would construct a rectangle with sides of length formula_18 and formula_19 then extend a side of the rectangle to length formula_20 and finally they would complete the extended rectangle so as to find the side of the rectangle that is the solution. Bloom of Thymaridas. Iamblichus in "Introductio arithmetica" says that Thymaridas (c. 400 BC – c. 350 BC) worked with simultaneous linear equations. 
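For a linear problem, the Egyptian false-position recipe described above amounts to a single proportional rescaling of an initial guess. A minimal sketch follows; the equation x + x/4 = 15 is an illustrative assumption in the spirit of the papyrus problems, not a quotation from them.

    # Method of false position for a linear "aha" problem, here x + x/4 = 15:
    # substitute a convenient guess, evaluate the left-hand side, then rescale in proportion.
    def false_position(lhs, target, guess):
        return guess * target / lhs(guess)

    lhs = lambda x: x + x / 4          # "a quantity and its quarter"
    print(false_position(lhs, 15, 4))  # the guess 4 gives 5; scaling by 15/5 yields 12.0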
In particular, he created the then famous rule that was known as the "bloom of Thymaridas" or as the "flower of Thymaridas", which states that: If the sum of formula_21 quantities be given, and also the sum of every pair containing a particular quantity, then this particular quantity is equal to formula_22 of the difference between the sums of these pairs and the first given sum. or using modern notation, the solution of the following system of formula_21 linear equations in formula_21 unknowns, formula_23 formula_24 formula_25 formula_26 formula_27 is formula_28 Iamblichus goes on to describe how some systems of linear equations that are not in this form can be placed into this form. Euclid of Alexandria. Euclid was a Greek mathematician who flourished in Alexandria, Egypt, almost certainly during the reign of Ptolemy I (323–283 BC). Neither the year nor place of his birth has been established, nor the circumstances of his death. Euclid is regarded as the "father of geometry". His "Elements" is the most successful textbook in the history of mathematics. Although he is one of the most famous mathematicians in history, there are no new discoveries attributed to him; rather, he is remembered for his great explanatory skills. The "Elements" is not, as is sometimes thought, a collection of all Greek mathematical knowledge to its date; rather, it is an elementary introduction to it. "Elements". The geometric work of the Greeks, typified in Euclid's "Elements", provided the framework for generalizing formulae beyond the solution of particular problems into more general systems of stating and solving equations. Book II of the "Elements" contains fourteen propositions, which in Euclid's time were extremely significant for doing geometric algebra. These propositions and their results are the geometric equivalents of our modern symbolic algebra and trigonometry. Today, using modern symbolic algebra, we let symbols represent known and unknown magnitudes (i.e. numbers) and then apply algebraic operations on them, while in Euclid's time magnitudes were viewed as line segments and then results were deduced using the axioms or theorems of geometry. Many basic laws of addition and multiplication are included or proved geometrically in the "Elements". For instance, proposition 1 of Book II states: If there be two straight lines, and one of them be cut into any number of segments whatever, the rectangle contained by the two straight lines is equal to the rectangles contained by the uncut straight line and each of the segments. But this is nothing more than the geometric version of the (left) distributive law, formula_29; and in Books V and VII of the "Elements" the commutative and associative laws for multiplication are demonstrated. Many basic equations were also proved geometrically. For instance, proposition 5 in Book II proves that formula_30 and proposition 4 in Book II proves that formula_31 Furthermore, there are also geometric solutions given to many equations. For instance, proposition 6 of Book II gives the solution to the quadratic equation formula_32 and proposition 11 of Book II gives a solution to formula_33 "Data". "Data" is a work written by Euclid for use at the schools of Alexandria and it was meant to be used as a companion volume to the first six books of the "Elements". The book contains some fifteen definitions and ninety-five statements, of which there are about two dozen statements that serve as algebraic rules or formulas. 
Some of these statements are geometric equivalents to solutions of quadratic equations. For instance, "Data" contains the solutions to the equations formula_34 and the familiar Babylonian equation formula_35 Conic sections. A conic section is a curve that results from the intersection of a cone with a plane. There are three primary types of conic sections: ellipses (including circles), parabolas, and hyperbolas. The conic sections are reputed to have been discovered by Menaechmus (c. 380 BC – c. 320 BC) and since dealing with conic sections is equivalent to dealing with their respective equations, they played geometric roles equivalent to cubic equations and other higher order equations. Menaechmus knew that in a parabola, the equation formula_36 holds, where formula_37 is a constant called the latus rectum, although he was not aware of the fact that any equation in two unknowns determines a curve. He apparently derived these properties of conic sections and others as well. Using this information it was now possible to find a solution to the problem of the duplication of the cube by solving for the points at which two parabolas intersect, a solution equivalent to solving a cubic equation. We are informed by Eutocius that the method used to solve the cubic equation was due to Dionysodorus (250 BC – 190 BC). Dionysodorus solved the cubic by means of the intersection of a rectangular hyperbola and a parabola. This was related to a problem in Archimedes' "On the Sphere and Cylinder". Conic sections would be studied and used for thousands of years by Greek, and later Islamic and European, mathematicians. In particular Apollonius of Perga's famous "Conics" deals with conic sections, among other topics. China. Chinese mathematics dates to at least 300 BC with the "Zhoubi Suanjing", generally considered to be one of the oldest Chinese mathematical documents. "Nine Chapters on the Mathematical Art". "Chiu-chang suan-shu" or "The Nine Chapters on the Mathematical Art", written around 250 BC, is one of the most influential of all Chinese math books and it is composed of some 246 problems. Chapter eight deals with solving determinate and indeterminate simultaneous linear equations using positive and negative numbers, with one problem dealing with solving four equations in five unknowns. "Sea-Mirror of the Circle Measurements". "Ts'e-yuan hai-ching", or "Sea-Mirror of the Circle Measurements", is a collection of some 170 problems written by Li Zhi (or Li Ye) (1192 – 1279 AD). He used "fan fa", or Horner's method, to solve equations of degree as high as six, although he did not describe his method of solving equations. "Mathematical Treatise in Nine Sections". "Shu-shu chiu-chang", or "Mathematical Treatise in Nine Sections", was written by the wealthy governor and minister Ch'in Chiu-shao (c. 1202 – c. 1261). With the introduction of a method for solving simultaneous congruences, now called the Chinese remainder theorem, it marks the high point in Chinese indeterminate analysis. Magic squares. The earliest known magic squares appeared in China. In "Nine Chapters" the author solves a system of simultaneous linear equations by placing the coefficients and constant terms of the linear equations into a magic square (i.e. a matrix) and performing column reducing operations on the magic square. The earliest known magic squares of order greater than three are attributed to Yang Hui (fl. c. 1261 – 1275), who worked with magic squares of order as high as ten. "Precious Mirror of the Four Elements". 
"Ssy-yüan yü-chien"《四元玉鑒》, or "Precious Mirror of the Four Elements", was written by Chu Shih-chieh in 1303 and it marks the peak in the development of Chinese algebra. The four elements, called heaven, earth, man and matter, represented the four unknown quantities in his algebraic equations. The "Ssy-yüan yü-chien" deals with simultaneous equations and with equations of degrees as high as fourteen. The author uses the method of "fan fa", today called Horner's method, to solve these equations. The "Precious Mirror" opens with a diagram of the arithmetic triangle (Pascal's triangle) using a round zero symbol, but Chu Shih-chieh denies credit for it. A similar triangle appears in Yang Hui's work, but without the zero symbol. There are many summation equations given without proof in the "Precious Mirror". A few of the summations are: formula_38 formula_39 Diophantus. Diophantus was a Hellenistic mathematician who lived c. 250 AD, but the uncertainty of this date is so great that it may be off by more than a century. He is known for having written "Arithmetica", a treatise that was originally thirteen books but of which only the first six have survived. "Arithmetica" is the earliest extant work that solves arithmetic problems by algebra. Diophantus, however, did not invent the method of algebra, which existed before him. Algebra was practiced and diffused orally by practitioners, with Diophantus picking up techniques to solve problems in arithmetic. In modern algebra a polynomial is a linear combination of powers of a variable x, built from exponentiation, scalar multiplication, addition, and subtraction. The algebra of Diophantus, similar to medieval Arabic algebra, is an aggregation of objects of different types with no operations present. For example, in Diophantus the polynomial "6 4′ inverse Powers, 25 Powers lacking 9 units", which in modern notation is formula_40, is a collection of formula_41 objects of one kind together with 25 objects of a second kind, lacking 9 objects of a third kind, with no operation present. As in medieval Arabic algebra, Diophantus uses three stages to solve a problem by algebra: 1) an unknown is named and an equation is set up; 2) the equation is simplified to a standard form (al-jabr and al-muqābala in Arabic); 3) the simplified equation is solved. Diophantus does not give a classification of equations into six types, as Al-Khwarizmi does, in the extant parts of "Arithmetica". He does say that he would give the solution of three-term equations later, so this part of the work is possibly lost. In "Arithmetica", Diophantus is the first to use symbols for unknown numbers as well as abbreviations for powers of numbers, relationships, and operations; thus he used what is now known as "syncopated" algebra. The main difference between Diophantine syncopated algebra and modern algebraic notation is that the former lacked special symbols for operations, relations, and exponentials. So, for example, what we would write as formula_42 which can be rewritten as formula_43 would be written in Diophantus's syncopated notation as formula_44 formula_45 where the symbols denote the powers of the unknown, the units, subtraction, and equality. Unlike in modern notation, the coefficients come after the variables and addition is represented by the juxtaposition of terms. 
A literal symbol-for-symbol translation of Diophantus's syncopated equation into a modern symbolic equation would be the following: formula_46 where, to clarify, if modern parentheses and plus signs are used then the above equation can be rewritten as formula_47 However, the distinction between "rhetorical algebra", "syncopated algebra" and "symbolic algebra" is considered outdated by Jeffrey Oaks and Jean Christianidis. The problems were solved on a dust-board using some notation, while in books solutions were written in "rhetorical style". "Arithmetica" also makes use of several algebraic identities. India. The Indian mathematicians were active in studying number systems. The earliest known Indian mathematical documents are dated to around the middle of the first millennium BC (around the 6th century BC). The recurring themes in Indian mathematics are, among others, determinate and indeterminate linear and quadratic equations, simple mensuration, and Pythagorean triples. "Aryabhata". Aryabhata (476–550) was an Indian mathematician who authored "Aryabhatiya". In it he gave the rules formula_48 and formula_49 "Brahma Sphuta Siddhanta". Brahmagupta (fl. 628) was an Indian mathematician who authored "Brahma Sphuta Siddhanta". In his work Brahmagupta solves the general quadratic equation for both positive and negative roots. In indeterminate analysis Brahmagupta gives the Pythagorean triads formula_50 formula_51 but this is a modified form of an old Babylonian rule that Brahmagupta may have been familiar with. He was the first to give a general solution to the linear Diophantine equation formula_52 where formula_11 and formula_12 are integers. Unlike Diophantus, who only gave one solution to an indeterminate equation, Brahmagupta gave "all" integer solutions; but the fact that Brahmagupta used some of the same examples as Diophantus has led some historians to consider the possibility of a Greek influence on Brahmagupta's work, or at least a common Babylonian source. Like the algebra of Diophantus, the algebra of Brahmagupta was syncopated. Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend, and division by placing the divisor below the dividend, similar to our modern notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms. The extent of Greek influence on this syncopation, if any, is not known and it is possible that both Greek and Indian syncopation may be derived from a common Babylonian source. Bhāskara II. Bhāskara II (1114 – c. 1185) was the leading mathematician of the 12th century. In algebra, he gave the general solution of Pell's equation. He is the author of "Lilavati" and "Vija-Ganita", which contain problems dealing with determinate and indeterminate linear and quadratic equations, and Pythagorean triples; he fails to distinguish between exact and approximate statements. Many of the problems in "Lilavati" and "Vija-Ganita" are derived from other Hindu sources, and so Bhaskara is at his best in dealing with indeterminate analysis. Bhaskara uses the initial symbols of the names for colors as the symbols of unknown variables. So, for example, what we would write today as formula_53 Bhaskara would have written as "ya" 1 "ru" 1 (with dots over both numbers), "ya" 2 "ru" 8 (with a dot over the 8), Sum: "ya" 1 "ru" 9 (with a dot over the 9), where "ya" indicates the first syllable of the word for "black", and "ru" is taken from the word "species". The dots over the numbers indicate subtraction. Islamic world. 
The first century of the Islamic Arab Empire saw almost no scientific or mathematical achievements since the Arabs, with their newly conquered empire, had not yet gained any intellectual drive, and research in other parts of the world had faded. In the second half of the 8th century, Islam had a cultural awakening, and research in mathematics and the sciences increased. The Muslim Abbasid caliph al-Mamun (809–833) is said to have had a dream where Aristotle appeared to him, and as a consequence al-Mamun ordered that Arabic translations be made of as many Greek works as possible, including Ptolemy's "Almagest" and Euclid's "Elements". Greek works would be given to the Muslims by the Byzantine Empire in exchange for treaties, as the two empires held an uneasy peace. Many of these Greek works were translated by Thabit ibn Qurra (826–901), who translated books written by Euclid, Archimedes, Apollonius, Ptolemy, and Eutocius. Arabic mathematicians established algebra as an independent discipline, and gave it the name "algebra" ("al-jabr"). They were the first to teach algebra in an elementary form and for its own sake. There are three theories about the origins of Arabic algebra. The first emphasizes Hindu influence, the second emphasizes Mesopotamian or Persian-Syriac influence and the third emphasizes Greek influence. Many scholars believe that it is the result of a combination of all three sources. Throughout their time in power, the Arabs used a fully rhetorical algebra, where often even the numbers were spelled out in words. The Arabs would eventually replace spelled out numbers (e.g. twenty-two) with Arabic numerals (e.g. 22), but the Arabs did not adopt or develop a syncopated or symbolic algebra until the work of Ibn al-Banna, who developed a symbolic algebra in the 13th century, followed by Abū al-Hasan ibn Alī al-Qalasādī in the 15th century. "Al-jabr wa'l muqabalah". The Muslim Persian mathematician Muhammad ibn Mūsā al-Khwārizmī, described as the father or founder of algebra, was a faculty member of the "House of Wisdom" ("Bait al-Hikma") in Baghdad, which was established by Al-Mamun. Al-Khwarizmi, who died around 850 AD, wrote more than half a dozen mathematical and astronomical works. One of al-Khwarizmi's most famous books is entitled "Al-jabr wa'l muqabalah" or "The Compendious Book on Calculation by Completion and Balancing", and it gives an exhaustive account of solving polynomials up to the second degree. The book also introduced the fundamental concept of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as "al-jabr". The name "algebra" comes from the "al-jabr" in the title of his book. R. Rashed and Angela Armstrong write: "Al-Khwarizmi's text can be seen to be distinct not only from the Babylonian tablets, but also from Diophantus' "Arithmetica". It no longer concerns a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study. 
On the other hand, the idea of an equation for its own sake appears from the beginning and, one could say, in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems." "Al-Jabr" is divided into six chapters, each of which deals with a different type of formula. The first chapter of "Al-Jabr" deals with equations whose squares equal their roots formula_54 the second chapter deals with squares equal to a number formula_55 the third chapter deals with roots equal to a number formula_56 the fourth chapter deals with squares and roots equal to a number formula_57 the fifth chapter deals with squares and a number equal to roots formula_58 and the sixth and final chapter deals with roots and a number equal to squares formula_59 In "Al-Jabr", al-Khwarizmi uses geometric proofs; he does not recognize the root formula_60 and he only deals with positive roots. He also recognizes that the discriminant must be positive and describes the method of completing the square, though he does not justify the procedure. The Greek influence is shown by "Al-Jabr"'s geometric foundations and by one problem taken from Heron. He makes use of lettered diagrams, but all of the coefficients in all of his equations are specific numbers, since he had no way of expressing with parameters what he could express geometrically, although generality of method is intended. Al-Khwarizmi most likely did not know of Diophantus's "Arithmetica", which became known to the Arabs sometime before the 10th century. And even though al-Khwarizmi most likely knew of Brahmagupta's work, "Al-Jabr" is fully rhetorical with the numbers even being spelled out in words. So, for example, what we would write as formula_61 Diophantus would have written as formula_62 formula_63 and al-Khwarizmi would have written as: One square and ten roots of the same amount to thirty-nine "dirhems"; that is to say, what must be the square which, when increased by ten of its own roots, amounts to thirty-nine? "Logical Necessities in Mixed Equations". 'Abd al-Hamīd ibn Turk authored a manuscript entitled "Logical Necessities in Mixed Equations", which is very similar to al-Khwarizmi's "Al-Jabr" and was published at around the same time as, or even possibly earlier than, "Al-Jabr". The manuscript gives exactly the same geometric demonstration as is found in "Al-Jabr", and in one case the same example as found in "Al-Jabr", and even goes beyond "Al-Jabr" by giving a geometric proof that if the discriminant is negative then the quadratic equation has no solution. The similarity between these two works has led some historians to conclude that Arabic algebra may have been well developed by the time of al-Khwarizmi and 'Abd al-Hamid. Abu Kamil and al-Karaji. Arabic mathematicians treated irrational numbers as algebraic objects. The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers in the form of a square root or fourth root as solutions to quadratic equations or as coefficients in an equation. He was also the first to solve three non-linear simultaneous equations with three unknown variables. Al-Karaji (953–1029), also known as Al-Karkhi, was the successor of Abū al-Wafā' al-Būzjānī (940–998) and he discovered the first numerical solution to equations of the form formula_64 Al-Karaji only considered positive roots. 
He is also regarded as the first person to free algebra from geometrical operations and replace them with the type of arithmetic operations which are at the core of algebra today. His work on algebra and polynomials gave the rules for arithmetic operations to manipulate polynomials. The historian of mathematics F. Woepcke, in "Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi" (Paris, 1853), praised Al-Karaji for being "the first who introduced the theory of algebraic calculus". Stemming from this, Al-Karaji investigated binomial coefficients and Pascal's triangle. Omar Khayyám, Sharaf al-Dīn al-Tusi, and al-Kashi. Omar Khayyám (c. 1050 – 1123) wrote a book on algebra that went beyond "Al-Jabr" to include equations of the third degree. Omar Khayyám provided both arithmetic and geometric solutions for quadratic equations, but he only gave geometric solutions for general cubic equations since he mistakenly believed that arithmetic solutions were impossible. His method of solving cubic equations by using intersecting conics had been used by Menaechmus, Archimedes, and Ibn al-Haytham (Alhazen), but Omar Khayyám generalized the method to cover all cubic equations with positive roots. He only considered positive roots and he did not go past the third degree. He also saw a strong relationship between geometry and algebra. In the 12th century, Sharaf al-Dīn al-Tūsī (1135–1213) wrote the "Al-Mu'adalat" ("Treatise on Equations"), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the "Ruffini-Horner method" to numerically approximate the root of a cubic equation. He also developed the concepts of the maxima and minima of curves in order to solve cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation and used an early version of Cardano's formula to find algebraic solutions to certain types of cubic equations. Some scholars, such as Roshdi Rashed, argue that Sharaf al-Din discovered the derivative of cubic polynomials and realized its significance, while other scholars connect his solution to the ideas of Euclid and Archimedes. Sharaf al-Din also developed the concept of a function. In his analysis of the equation formula_65, for example, he begins by changing the equation's form to formula_66. He then states that the question of whether the equation has a solution depends on whether or not the "function" on the left side reaches the value formula_67. To determine this, he finds a maximum value for the function. He proves that the maximum value occurs when formula_68, which gives the functional value formula_69. Sharaf al-Din then states that if this value is less than formula_67, there are no positive solutions; if it is equal to formula_67, then there is one solution at formula_68; and if it is greater than formula_67, then there are two solutions, one between formula_70 and formula_71 and one between formula_71 and formula_18. In the early 15th century, Jamshīd al-Kāshī developed an early form of Newton's method to numerically solve the equation formula_72 to find roots of formula_73. Al-Kāshī also developed decimal fractions and claimed to have discovered them himself. However, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. 
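On the standard reading of the passage on Sharaf al-Dīn above, the equation in question is the cubic x^3 + d = b x^2, recast as x^2 (b - x) = d, so that positive solutions exist only when d does not exceed the maximum of the left-hand side, attained at x = 2b/3 with value 4b^3/27. The following is a minimal numerical check of that criterion; the value of b is an arbitrary illustrative assumption.

    # For the cubic x**3 + d = b*x**2, define f(x) = x**2 * (b - x); positive roots exist
    # only when d does not exceed the maximum of f on (0, b), which is 4*b**3/27 at x = 2*b/3.
    b = 6.0                                 # assumed coefficient, for illustration only
    f = lambda x: x * x * (b - x)

    x_star = 2 * b / 3
    print(f(x_star), 4 * b ** 3 / 27)       # both equal 32.0
    samples = [f(b * k / 1000) for k in range(1, 1000)]
    print(max(samples) <= 4 * b ** 3 / 27)  # True: no sampled value exceeds the maximum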
Al-Hassār, Ibn al-Banna, and al-Qalasadi. Al-Hassār, a mathematician from Morocco specializing in Islamic inheritance jurisprudence during the 12th century, developed the modern symbolic mathematical notation for fractions, where the numerator and denominator are separated by a horizontal bar. This same fractional notation appeared soon after in the work of Fibonacci in the 13th century. Abū al-Hasan ibn Alī al-Qalasādī (1412–1486) was the last major medieval Arab algebraist, who made the first attempt at creating an algebraic notation since Ibn al-Banna two centuries earlier, who was himself the first to make such an attempt since Diophantus and Brahmagupta in ancient times. The syncopated notations of his predecessors, however, lacked symbols for mathematical operations. Al-Qalasadi "took the first steps toward the introduction of algebraic symbolism by using letters in place of numbers" and by "using short Arabic words, or just their initial letters, as mathematical symbols." Europe and the Mediterranean region. Just as the death of Hypatia signals the close of the Library of Alexandria as a mathematical center, so does the death of Boethius signal the end of mathematics in the Western Roman Empire. Although there was some work being done at Athens, it came to a close when in 529 the Byzantine emperor Justinian closed the pagan philosophical schools. The year 529 is now taken to be the beginning of the medieval period. Scholars fled the West towards the more hospitable East, particularly towards Persia, where they found haven under King Chosroes and established what might be termed an "Athenian Academy in Exile". Under a treaty with Justinian, Chosroes would eventually return the scholars to the Eastern Empire. During the Dark Ages, European mathematics was at its nadir, with mathematical research consisting mainly of commentaries on ancient treatises, and most of this research was centered in the Byzantine Empire. The end of the medieval period is set as the fall of Constantinople to the Turks in 1453. Late Middle Ages. The 12th century saw a flood of translations from Arabic into Latin, and by the 13th century, European mathematics was beginning to rival the mathematics of other lands. In the 13th century, the solution of a cubic equation by Fibonacci is representative of the beginning of a revival in European algebra. As the Islamic world was declining after the 15th century, the European world was ascending, and it is here that algebra was further developed. Symbolic algebra. Modern notation for arithmetic operations was introduced between the end of the 15th century and the beginning of the 16th century by Johannes Widmann and Michael Stifel. At the end of the 16th century, François Viète introduced symbols, now called variables, for representing indeterminate or unknown numbers. This created a new algebra consisting of computing with symbolic expressions as if they were numbers. Another key event in the further development of algebra was the general algebraic solution of the cubic and quartic equations, developed in the mid-16th century. The idea of a determinant was developed by Japanese mathematician Kowa Seki in the 17th century, followed by Gottfried Leibniz ten years later, for the purpose of solving systems of simultaneous linear equations using matrices. Gabriel Cramer also did some work on matrices and determinants in the 18th century. The symbol "x". 
By tradition, the first unknown variable in an algebraic problem is nowadays represented by the symbol formula_74, and if there is a second or a third unknown, these are labeled formula_75 and formula_76 respectively. Algebraic formula_16 is conventionally printed in italic type to distinguish it from the sign of multiplication. Mathematical historians generally agree that the use of formula_16 in algebra was introduced by René Descartes and was first published in his treatise "La Géométrie" (1637). In that work, he used letters from the beginning of the alphabet formula_77 for known quantities, and letters from the end of the alphabet formula_78 for unknowns. It has been suggested that he later settled on formula_16 (in place of formula_79) for the first unknown because of its relatively greater abundance in the French and Latin typographical fonts of the time. Three alternative theories of the origin of algebraic formula_16 were suggested in the 19th century: (1) a symbol used by German algebraists and thought to be derived from a cursive letter formula_80 mistaken for formula_16; (2) the numeral "1" with oblique strikethrough; and (3) an Arabic/Spanish source (see below). But the Swiss-American historian of mathematics Florian Cajori examined these and found all three lacking in concrete evidence; Cajori credited Descartes as the originator, and described his formula_82 and formula_79 as "free from tradition[,] and their choice purely arbitrary." Nevertheless, the Hispano-Arabic hypothesis continues to have a presence in popular culture today. It is the claim that algebraic formula_16 is the abbreviation of a supposed loanword from Arabic in Old Spanish. The theory originated in 1884 with the German orientalist Paul de Lagarde, shortly after he published his edition of a 1505 Spanish/Arabic bilingual glossary in which the Spanish word "cosa" ("thing") was paired with its Arabic equivalent ("shayʔ"), transcribed as "xei". (The "sh" sound in Old Spanish was routinely spelled formula_81) Evidently Lagarde was aware that Arab mathematicians, in the "rhetorical" stage of algebra's development, often used that word to represent the unknown quantity. He surmised that "nothing could be more natural" ("Nichts war also natürlicher...") than for the initial of the Arabic word—romanized as the Old Spanish formula_16—to be adopted for use in algebra. A later reader reinterpreted Lagarde's conjecture as having "proven" the point. Lagarde was unaware that early Spanish mathematicians used, not a "transcription" of the Arabic word, but rather its "translation" in their own language, "cosa". There is no instance of "xei" or similar forms in several compiled historical vocabularies of Spanish. Gottfried Leibniz. Although the mathematical notion of function was implicit in trigonometric and logarithmic tables, which existed in his day, Gottfried Leibniz was the first, in 1692 and 1694, to employ it explicitly, to denote any of several geometric concepts derived from a curve, such as abscissa, ordinate, tangent, chord, and the perpendicular. In the 18th century, "function" lost these geometrical associations. Leibniz realized that the coefficients of a system of linear equations could be arranged into an array, now called a matrix, which can be manipulated to find the solution of the system, if any. This method was later called Gaussian elimination. Leibniz also discovered Boolean algebra and symbolic logic, also relevant to algebra. Abstract algebra. 
The ability to do algebra is a skill cultivated in mathematics education. As explained by Andrew Warwick, Cambridge University students in the early 19th century practiced "mixed mathematics", doing exercises based on physical variables such as space, time, and weight. Over time the association of variables with physical quantities faded away as mathematical technique grew. Eventually mathematics was concerned completely with abstract polynomials, complex numbers, hypercomplex numbers and other concepts. Application to physical situations was then called applied mathematics or mathematical physics, and the field of mathematics expanded to include abstract algebra. For instance, the issue of constructible numbers showed some mathematical limitations, and the field of Galois theory was developed. The father of algebra. The title of "the father of algebra" is frequently credited to the Persian mathematician Al-Khwarizmi, supported by historians of mathematics, such as Carl Benjamin Boyer, Solomon Gandz and Bartel Leendert van der Waerden. However, the point is debatable and the title is sometimes credited to the Hellenistic mathematician Diophantus. Those who support Diophantus point to the algebra found in "Al-Jabr" being more elementary than the algebra found in "Arithmetica", and "Arithmetica" being syncopated while "Al-Jabr" is fully rhetorical. However, the mathematics historian Kurt Vogel argues against Diophantus holding this title, as his mathematics was not much more algebraic than that of the ancient Babylonians. Those who support Al-Khwarizmi point to the fact that he gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots, and was the first to teach algebra in an elementary form and for its own sake, whereas Diophantus was primarily concerned with the theory of numbers. Al-Khwarizmi also introduced the fundamental concept of "reduction" and "balancing" (which he originally used the term "al-jabr" to refer to), referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. Other supporters of Al-Khwarizmi point to his algebra no longer being concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." They also point to his treatment of an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems". Victor J. Katz regards "Al-Jabr" as the first true algebra text that is still extant. Pre-modern algebra was developed and used by merchants and surveyors as part of what Jens Høyrup called "subscientific" tradition. Diophantus used this method of algebra in his book, in particular for indeterminate problems, while Al-Khwarizmi wrote one of the first books in Arabic about the general method for solving equations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x + 1 = 2" }, { "math_id": 1, "text": "x^2 + px = q" }, { "math_id": 2, "text": "x^2 = px + q" }, { "math_id": 3, "text": "x^2 + q = px" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "q" }, { "math_id": 6, "text": "x^2 + px + q = 0," }, { "math_id": 7, "text": "x^2 = A" }, { "math_id": 8, "text": "A." }, { "math_id": 9, "text": "x + ax = b" }, { "math_id": 10, "text": "x + ax + bx = c" }, { "math_id": 11, "text": "a, b," }, { "math_id": 12, "text": "c" }, { "math_id": 13, "text": "x," }, { "math_id": 14, "text": "a : b" }, { "math_id": 15, "text": "c : x." }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "ax = bc." }, { "math_id": 18, "text": "b" }, { "math_id": 19, "text": "c," }, { "math_id": 20, "text": "a," }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "1/(n - 2)" }, { "math_id": 23, "text": "x + x_1 + x_2 + \\cdots + x_{n-1} = s" }, { "math_id": 24, "text": "x + x_1 = m_1" }, { "math_id": 25, "text": "x + x_2 = m_2" }, { "math_id": 26, "text": "\\vdots" }, { "math_id": 27, "text": "x + x_{n-1} = m_{n-1}" }, { "math_id": 28, "text": "x = \\cfrac{(m_1 + m_2 + ... + m_{n-1}) - s}{n-2} = \\cfrac{ (\\sum_{i=1}^{n-1} m_i) - s}{n-2}." }, { "math_id": 29, "text": "a(b + c + d) = ab + ac + ad" }, { "math_id": 30, "text": "a^2 - b^2 = (a + b)(a - b)," }, { "math_id": 31, "text": "(a + b)^2 = a^2 + 2ab + b^2." }, { "math_id": 32, "text": "ax + x^2 = b^2," }, { "math_id": 33, "text": "ax + x^2 = a^2." }, { "math_id": 34, "text": "d x^2 - adx + b^2c = 0" }, { "math_id": 35, "text": "xy = a^2, x \\pm y = b." }, { "math_id": 36, "text": "y^2 = l x" }, { "math_id": 37, "text": "l" }, { "math_id": 38, "text": "1^2 + 2^2 + 3^2 + \\cdots + n^2 = {n(n + 1)(2n + 1)\\over 3!}" }, { "math_id": 39, "text": "1 + 8 + 30 + 80 + \\cdots + {n^2(n + 1)(n + 2)\\over 3!} = {n(n + 1)(n + 2)(n + 3)(4n + 1)\\over 5!}" }, { "math_id": 40, "text": "6\\tfrac14x^{-1} +25x^2 - 9" }, { "math_id": 41, "text": "6\\tfrac14" }, { "math_id": 42, "text": "x^3 - 2x^2 + 10x -1 = 5," }, { "math_id": 43, "text": "\\left({x^3}1 + {x}10\\right) - \\left({x^2}2 + {x^0}1\\right) = {x^0}5," }, { "math_id": 44, "text": "\\Kappa^{\\upsilon} \\overline{\\alpha} \\; \\zeta \\overline{\\iota} \\;\\, \\pitchfork \\;\\, \\Delta^{\\upsilon} \\overline{\\beta} \\; \\Mu \\overline{\\alpha} \\,\\;" }, { "math_id": 45, "text": "\\sigma\\;\\, \\Mu \\overline{\\varepsilon}" }, { "math_id": 46, "text": "{x^3}1 {x}10 - {x^2}2 {x^0}1 = {x^0}5" }, { "math_id": 47, "text": "\\left({x^3}1 + {x}10\\right) - \\left({x^2}2 + {x^0}1\\right) = {x^0}5" }, { "math_id": 48, "text": "1^2 + 2^2 + \\cdots + n^2 = {n(n + 1)(2n + 1) \\over 6}" }, { "math_id": 49, "text": "1^3 + 2^3 + \\cdots + n^3 = (1 + 2 + \\cdots + n)^2" }, { "math_id": 50, "text": "m, \\frac{1}{2}\\left({m^2\\over n} - n\\right)," }, { "math_id": 51, "text": "\\frac{1}{2}\\left({m^2\\over n} + n\\right)," }, { "math_id": 52, "text": "ax + by = c," }, { "math_id": 53, "text": "( -x - 1 ) + ( 2x - 8 ) = x - 9" }, { "math_id": 54, "text": "\\left(ax^2 = bx\\right)," }, { "math_id": 55, "text": "\\left(ax^2 = c\\right)," }, { "math_id": 56, "text": "\\left(bx = c\\right)," }, { "math_id": 57, "text": "\\left(ax^2 + bx = c\\right)," }, { "math_id": 58, "text": "\\left(ax^2 + c = bx\\right)," }, { "math_id": 59, "text": "\\left(bx + c = ax^2\\right)." 
}, { "math_id": 60, "text": "x = 0," }, { "math_id": 61, "text": "x^2 + 10x = 39" }, { "math_id": 62, "text": "\\Delta^{\\Upsilon} \\overline{\\alpha} \\varsigma \\overline{\\iota} \\,\\;" }, { "math_id": 63, "text": "\\sigma\\;\\, \\Mu \\lambda \\overline{\\theta}" }, { "math_id": 64, "text": "ax^{2n} + bx^n = c." }, { "math_id": 65, "text": "x^3 + d = bx^2" }, { "math_id": 66, "text": "x^2(b - x) = d" }, { "math_id": 67, "text": "d" }, { "math_id": 68, "text": "x = \\frac{2b}{3}" }, { "math_id": 69, "text": "\\frac{4b^3}{27}" }, { "math_id": 70, "text": "0" }, { "math_id": 71, "text": "\\frac{2b}{3}" }, { "math_id": 72, "text": "x^P - N = 0" }, { "math_id": 73, "text": "N" }, { "math_id": 74, "text": "\\mathit{x}" }, { "math_id": 75, "text": "\\mathit{y}" }, { "math_id": 76, "text": "\\mathit{z}" }, { "math_id": 77, "text": "(a, b, c, \\ldots)" }, { "math_id": 78, "text": "(z, y, x, \\ldots)" }, { "math_id": 79, "text": "z" }, { "math_id": 80, "text": "r," }, { "math_id": 81, "text": "x." }, { "math_id": 82, "text": "x, y," } ]
https://en.wikipedia.org/wiki?curid=9550030
9550415
Generator (category theory)
In mathematics, specifically category theory, a family of generators (or family of separators) of a category formula_0 is a collection formula_1 of objects in formula_0, such that for any two "distinct" morphisms formula_2 in formula_3, that is with formula_4, there is some formula_5 in formula_6 and some morphism formula_7 such that formula_8 If the collection consists of a single object formula_5, we say it is a generator (or separator). Generators are central to the definition of Grothendieck categories. The dual concept is called a cogenerator or coseparator. Examples. In the category of abelian groups, the group of integers formula_9 is a generator: if formula_2 are distinct morphisms, then there is some element formula_10 with formula_11, and the morphism formula_12 formula_13 then satisfies formula_8
[ { "math_id": 0, "text": "\\mathcal C" }, { "math_id": 1, "text": "\\mathcal G \\subseteq Ob(\\mathcal C)" }, { "math_id": 2, "text": "f, g: X \\to Y" }, { "math_id": 3, "text": "\\mathcal{C}" }, { "math_id": 4, "text": "f \\neq g" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "\\mathcal G" }, { "math_id": 7, "text": "h : G \\to X" }, { "math_id": 8, "text": "f \\circ h \\neq g \\circ h." }, { "math_id": 9, "text": "\\mathbf Z" }, { "math_id": 10, "text": "x \\in X" }, { "math_id": 11, "text": "f(x) \\neq g(x)" }, { "math_id": 12, "text": "\\mathbf Z \\rightarrow X," }, { "math_id": 13, "text": "n \\mapsto n \\cdot x" } ]
https://en.wikipedia.org/wiki?curid=9550415
955164
Geometric group theory
Area in mathematics devoted to the study of finitely generated groups Geometric group theory is an area in mathematics devoted to the study of finitely generated groups via exploring the connections between algebraic properties of such groups and topological and geometric properties of spaces on which these groups can act non-trivially (that is, when the groups in question are realized as geometric symmetries or continuous transformations of some spaces). Another important idea in geometric group theory is to consider finitely generated groups themselves as geometric objects. This is usually done by studying the Cayley graphs of groups, which, in addition to the graph structure, are endowed with the structure of a metric space, given by the so-called word metric. Geometric group theory, as a distinct area, is relatively new, and became a clearly identifiable branch of mathematics in the late 1980s and early 1990s. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry, algebraic topology, computational group theory and differential geometry. There are also substantial connections with complexity theory, mathematical logic, the study of Lie groups and their discrete subgroups, dynamical systems, probability theory, K-theory, and other areas of mathematics. In the introduction to his book "Topics in Geometric Group Theory", Pierre de la Harpe wrote: "One of my personal beliefs is that fascination with symmetries and groups is one way of coping with frustrations of life's limitations: we like to recognize symmetries which allow us to recognize more than what we can see. In this sense the study of geometric group theory is a part of culture, and reminds me of several things that Georges de Rham practiced on many occasions, such as teaching mathematics, reciting Mallarmé, or greeting a friend". History. Geometric group theory grew out of combinatorial group theory that largely studied properties of discrete groups via analyzing group presentations, which describe groups as quotients of free groups; this field was first systematically studied by Walther von Dyck, student of Felix Klein, in the early 1880s, while an early form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. Currently combinatorial group theory as an area is largely subsumed by geometric group theory. Moreover, the term "geometric group theory" came to often include studying discrete groups using probabilistic, measure-theoretic, arithmetic, analytic and other approaches that lie outside of the traditional combinatorial group theory arsenal. In the first half of the 20th century, pioneering work of Max Dehn, Jakob Nielsen, Kurt Reidemeister and Otto Schreier, J. H. C. Whitehead, Egbert van Kampen, amongst others, introduced some topological and geometric ideas into the study of discrete groups. Other precursors of geometric group theory include small cancellation theory and Bass–Serre theory. Small cancellation theory was introduced by Martin Grindlinger in the 1960s and further developed by Roger Lyndon and Paul Schupp. It studies van Kampen diagrams, corresponding to finite group presentations, via combinatorial curvature conditions and derives algebraic and algorithmic properties of groups from such analysis. Bass–Serre theory, introduced in the 1977 book of Serre, derives structural algebraic information about groups by studying group actions on simplicial trees. 
External precursors of geometric group theory include the study of lattices in Lie groups, especially Mostow's rigidity theorem, the study of Kleinian groups, and the progress achieved in low-dimensional topology and hyperbolic geometry in the 1970s and early 1980s, spurred, in particular, by William Thurston's Geometrization program. The emergence of geometric group theory as a distinct area of mathematics is usually traced to the late 1980s and early 1990s. It was spurred by the 1987 monograph of Mikhail Gromov "Hyperbolic groups" that introduced the notion of a hyperbolic group (also known as "word-hyperbolic" or "Gromov-hyperbolic" or "negatively curved" group), which captures the idea of a finitely generated group having large-scale negative curvature, and by his subsequent monograph "Asymptotic Invariants of Infinite Groups", that outlined Gromov's program of understanding discrete groups up to quasi-isometry. The work of Gromov had a transformative effect on the study of discrete groups and the phrase "geometric group theory" started appearing soon afterwards. (see e.g.). Modern themes and developments. Notable themes and developments in geometric group theory in 1990s and 2000s include: A particularly influential broad theme in the area is Gromov's program of classifying finitely generated groups according to their large scale geometry. Formally, this means classifying finitely generated groups with their word metric up to quasi-isometry. This program involves: #The study of properties that are invariant under quasi-isometry. Examples of such properties of finitely generated groups include: the growth rate of a finitely generated group; the isoperimetric function or Dehn function of a finitely presented group; the number of ends of a group; hyperbolicity of a group; the homeomorphism type of the Gromov boundary of a hyperbolic group; asymptotic cones of finitely generated groups (see e.g.); amenability of a finitely generated group; being virtually abelian (that is, having an abelian subgroup of finite index); being virtually nilpotent; being virtually free; being finitely presentable; being a finitely presentable group with solvable Word Problem; and others. #Theorems which use quasi-isometry invariants to prove algebraic results about groups, for example: Gromov's polynomial growth theorem; Stallings' ends theorem; Mostow rigidity theorem. #Quasi-isometric rigidity theorems, in which one classifies algebraically all groups that are quasi-isometric to some given group or metric space. This direction was initiated by the work of Schwartz on quasi-isometric rigidity of rank-one lattices and the work of Benson Farb and Lee Mosher on quasi-isometric rigidity of Baumslag–Solitar groups. Examples. The following examples are often studied in geometric group theory: &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Books and monographs. These texts cover geometric group theory and related topics.
[ { "math_id": 0, "text": "\\mathbb R" }, { "math_id": 1, "text": "SL(n, \\mathbb R)" } ]
https://en.wikipedia.org/wiki?curid=955164
9552096
Support polygon
For a rigid object in contact with a fixed environment and acted upon by gravity in the vertical direction, its support polygon is a horizontal region over which the center of mass must lie to achieve static stability. For example, for an object resting on a horizontal surface (e.g. a table), the support polygon is the convex hull of its "footprint" on the table. The support polygon succinctly represents the conditions necessary for an object to be at equilibrium under gravity. That is, if the object's center of mass lies over the support polygon, then there exists a set of forces over the region of contact that exactly counteracts the forces of gravity. Note that this is a "necessary" condition for stability, but "not a sufficient" one. Derivation. Let the object be in contact at a finite number of points formula_0. At each point formula_1, let formula_2 be the set of forces that can be applied on the object at that point. Here, formula_2 is known as the "friction cone", and for the Coulomb model of friction, is actually a cone with apex at the origin, extending to infinity in the normal direction of the contact. Let formula_3 be the (unspecified) forces at the contact points. To balance the object in static equilibrium, the following Newton-Euler equations must be met on formula_3: formula_4 (force balance), formula_5 (moment balance), and formula_6 for each contact formula_7 (validity of the contact forces), where formula_8 is the force of gravity on the object, and formula_9 is its center of mass. The first two equations are the Newton-Euler equations, and the third requires all forces to be valid. If there is no set of forces formula_3 that meet all these conditions, the object will not be in equilibrium. The second equation has no dependence on the vertical component of the center of mass, and thus if a solution exists for one formula_9, the same solution works for all formula_10. Therefore, the set of all formula_9 that have solutions to the above conditions is a set that extends infinitely in the up and down directions. The support polygon is simply the projection of this set on the horizontal plane. These results can easily be extended to different friction models and an infinite number of contact points (i.e. a region of contact). Properties. Even though the word "polygon" is used to describe this region, in general it can be any convex shape with curved edges. The support polygon is invariant under translations and rotations about the gravity vector (that is, if the contact points and friction cones were translated and rotated about the gravity vector, the support polygon is simply translated and rotated). If the friction cones are convex cones (as they typically are), the support polygon is always a convex region. It is also invariant to the mass of the object (provided it is nonzero). If all contacts lie on a (not necessarily horizontal) plane, and the friction cones at all contacts contain the negative gravity vector formula_11, then the support polygon is the convex hull of the contact points projected onto the horizontal plane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
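The special case noted above—contacts in a plane whose friction cones contain the negative gravity vector—reduces the computation to a convex hull of projected contact points. The following is a minimal sketch of that case (not part of the original article); the foot coordinates, the center-of-mass location, and the use of SciPy's ConvexHull and Delaunay routines are illustrative assumptions.

```python
# Sketch: support polygon as the convex hull of the contact points projected
# onto the horizontal (x, y) plane, for the special case described above
# (planar contacts, friction cones containing the negative gravity vector).
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def support_polygon(contacts):
    """contacts: (N, 3) array of contact points; returns the hull of their (x, y) projections."""
    xy = np.asarray(contacts)[:, :2]       # drop the vertical coordinate
    return ConvexHull(xy)

def center_of_mass_over_polygon(contacts, center_of_mass):
    """Necessary (not sufficient) condition: the center of mass projects inside the hull."""
    xy = np.asarray(contacts)[:, :2]
    return Delaunay(xy).find_simplex(center_of_mass[:2]) >= 0

# Hypothetical example: a table with four feet (coordinates in metres)
feet = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.6, 0.0], [0.0, 0.6, 0.0]])
com = np.array([0.5, 0.3, 0.8])
print(support_polygon(feet).volume)              # area of the footprint: 0.6
print(center_of_mass_over_polygon(feet, com))    # True
```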
[ { "math_id": 0, "text": "C_1,\\ldots,C_N" }, { "math_id": 1, "text": "C_k" }, { "math_id": 2, "text": "FC_k" }, { "math_id": 3, "text": "f_1,\\ldots,f_N" }, { "math_id": 4, "text": "\\sum_{k=1}^N f_k + G = 0" }, { "math_id": 5, "text": "\\sum_{k=1}^N f_k \\times C_k + G \\times CM = 0" }, { "math_id": 6, "text": "f_k \\in FC_k" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "G" }, { "math_id": 9, "text": "CM" }, { "math_id": 10, "text": "CM+\\alpha G" }, { "math_id": 11, "text": "-G" } ]
https://en.wikipedia.org/wiki?curid=9552096
9552145
1 + 2 + 4 + 8 + ⋯
Infinite series In mathematics, 1 + 2 + 4 + 8 + ⋯ is the infinite series whose terms are the successive powers of two. As a geometric series, it is characterized by its first term, 1, and its common ratio, 2. As a series of real numbers it diverges to infinity, so the sum of this series is infinity. However, it can be manipulated to yield a number of mathematically interesting results. For example, many summation methods are used in mathematics to assign numerical values even to a divergent series. For example, the Ramanujan summation of this series is −1, which is the limit of the series using the 2-adic metric. Summation. The partial sums of formula_0 are formula_1 since these diverge to infinity, so does the series. formula_2 It is written as formula_3 Therefore, any totally regular summation method gives a sum of infinity, including the Cesàro sum and Abel sum. On the other hand, there is at least one generally useful method that sums formula_0 to the finite value of −1. The associated power series formula_4 has a radius of convergence around 0 of only formula_5 so it does not converge at formula_6 Nonetheless, the so-defined function formula_7 has a unique analytic continuation to the complex plane with the point formula_8 deleted, and it is given by the same rule formula_9 Since formula_10 the original series formula_0 is said to be summable (E) to −1, and −1 is the (E) sum of the series. (The notation is due to G. H. Hardy in reference to Leonhard Euler's approach to divergent series.) An almost identical approach (the one taken by Euler himself) is to consider the power series whose coefficients are all 1, that is, formula_11 and plugging in formula_12 These two series are related by the substitution formula_13 The fact that (E) summation assigns a finite value to formula_0 shows that the general method is not totally regular. On the other hand, it possesses some other desirable qualities for a summation method, including stability and linearity. These latter two axioms actually force the sum to be −1, since they make the following manipulation valid: formula_14 In a useful sense, formula_15 is a root of the equation formula_16 (For example, formula_17 is one of the two fixed points of the Möbius transformation formula_18 on the Riemann sphere). If some summation method is known to return an ordinary number for formula_19; that is, not formula_20 then it is easily determined. In this case formula_19 may be subtracted from both sides of the equation, yielding formula_21 so formula_22 The above manipulation might be called on to produce −1 outside the context of a sufficiently powerful summation procedure. For the most well-known and straightforward sum concepts, including the fundamental convergent one, it is absurd that a series of positive terms could have a negative value. A similar phenomenon occurs with the divergent geometric series formula_23 (Grandi's series), where a series of integers appears to have the non-integer sum formula_24 These examples illustrate the potential danger in applying similar arguments to the series implied by such recurring decimals as formula_25 and most notably formula_26. The arguments are ultimately justified for these convergent series, implying that formula_27 and formula_28 but the underlying proofs demand careful thinking about the interpretation of endless sums. It is also possible to view this series as convergent in a number system different from the real numbers, namely, the 2-adic numbers. 
As a series of 2-adic numbers this series converges to the same sum, −1, as was derived above by analytic continuation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
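The 2-adic convergence can be illustrated numerically. The sketch below (added here for illustration, not part of the original text) uses the closed form of the partial sums, formula_2: the k-th partial sum differs from −1 by exactly 2^(k+1), so its 2-adic distance from −1 is 2^(−(k+1)), which tends to zero.

```python
# Sketch: the partial sums of 1 + 2 + 4 + 8 + ... approach -1 in the 2-adic metric.
# The 2-adic absolute value of a nonzero integer n is 2**(-v), where v is the
# exponent of the largest power of 2 dividing n.

def two_adic_abs(n):
    if n == 0:
        return 0.0
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return 2.0 ** (-v)

for k in range(1, 11):
    s_k = 2 ** (k + 1) - 1                     # partial sum 2^0 + 2^1 + ... + 2^k
    print(k, s_k, two_adic_abs(s_k - (-1)))    # distance to -1 is 2**-(k+1)
```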
[ { "math_id": 0, "text": "1 + 2 + 4 + 8 + \\cdots" }, { "math_id": 1, "text": "1, 3, 7, 15, \\ldots;" }, { "math_id": 2, "text": "2^0+2^1 + \\cdots + 2^k = 2^{k+1}-1" }, { "math_id": 3, "text": "\n\\sum_{n=0}^\\infty 2^n\n" }, { "math_id": 4, "text": "f(x) = 1 + 2x + 4x^2 + 8x^3+ \\cdots + 2^n{}x^n + \\cdots = \\frac{1}{1-2x}" }, { "math_id": 5, "text": "\\frac{1}{2}" }, { "math_id": 6, "text": "x = 1." }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "x = \\frac{1}{2}" }, { "math_id": 9, "text": "f(x) = \\frac{1}{1 - 2 x}." }, { "math_id": 10, "text": "f(1) = -1," }, { "math_id": 11, "text": "1 + y + y^2 + y^3 + \\cdots = \\frac{1}{1-y}" }, { "math_id": 12, "text": "y = 2." }, { "math_id": 13, "text": "y = 2 x." }, { "math_id": 14, "text": "\\begin{array}{rcl}\ns & = &\\displaystyle 1+2+4+8+16+\\cdots \\\\\n & = &\\displaystyle 1+2(1+2+4+8+\\cdots) \\\\\n & = &\\displaystyle 1+2s\n\\end{array}" }, { "math_id": 15, "text": "s = \\infty" }, { "math_id": 16, "text": "s = 1 + 2 s." }, { "math_id": 17, "text": "\\infty" }, { "math_id": 18, "text": "z \\mapsto 1 + 2 z" }, { "math_id": 19, "text": "s" }, { "math_id": 20, "text": "\\infty," }, { "math_id": 21, "text": "0 = 1 + s," }, { "math_id": 22, "text": "s = -1." }, { "math_id": 23, "text": "1 - 1 + 1 - 1 + \\cdots" }, { "math_id": 24, "text": "\\frac{1}{2}." }, { "math_id": 25, "text": "0.111\\ldots" }, { "math_id": 26, "text": "0.999\\ldots" }, { "math_id": 27, "text": "0.111\\ldots = \\frac{1}{9}" }, { "math_id": 28, "text": "0.999\\ldots = 1," } ]
https://en.wikipedia.org/wiki?curid=9552145
9553738
Ensemble Kalman filter
Recursive filter The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter (in this context, a particle is the same thing as an ensemble member) but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter. Introduction. The ensemble Kalman filter (EnKF) is a Monte Carlo implementation of the Bayesian update problem: given a probability density function (PDF) of the state of the modeled system (the "prior", called often the forecast in geosciences) and the data likelihood, Bayes' theorem is used to obtain the PDF after the data likelihood has been taken into account (the "posterior", often called the analysis). This is called a Bayesian update. The Bayesian update is combined with advancing the model in time, incorporating new data from time to time. The original Kalman filter, introduced in 1960, assumes that all PDFs are Gaussian (the Gaussian assumption) and provides algebraic formulas for the change of the mean and the covariance matrix by the Bayesian update, as well as a formula for advancing the mean and covariance in time provided the system is linear. However, maintaining the covariance matrix is not feasible computationally for high-dimensional systems. For this reason, EnKFs were developed. EnKFs represent the distribution of the system state using a collection of state vectors, called an ensemble, and replace the covariance matrix by the sample covariance computed from the ensemble. The ensemble is operated with as if it were a random sample, but the ensemble members are really not independent, as they all share the EnKF. One advantage of EnKFs is that advancing the PDF in time is achieved by simply advancing each member of the ensemble. Derivation. Kalman filter. Let formula_0 denote the formula_1-dimensional state vector of a model, and assume that it has Gaussian probability distribution with mean formula_2 and covariance formula_3, i.e., its PDF is formula_4 Here and below, formula_5 means proportional; a PDF is always scaled so that its integral over the whole space is one. This formula_6, called the "prior", was evolved in time by running the model and now is to be updated to account for new data. It is natural to assume that the error distribution of the data is known; data have to come with an error estimate, otherwise they are meaningless. Here, the data formula_7 is assumed to have Gaussian PDF with covariance formula_8 and mean formula_9, where formula_10 is the so-called observation matrix. The covariance matrix formula_8 describes the estimate of the error of the data; if the random errors in the entries of the data vector formula_7 are independent, formula_8 is diagonal and its diagonal entries are the squares of the standard deviation (“error size”) of the error of the corresponding entries of the data vector formula_7. The value formula_9 is what the value of the data would be for the state formula_0 in the absence of data errors. 
Then the probability density formula_11 of the data formula_7 conditional of the system state formula_0, called the data likelihood, is formula_12 The PDF of the state and the data likelihood are combined to give the new probability density of the system state formula_0 conditional on the value of the data formula_7 (the "posterior") by the Bayes theorem, formula_13 The data formula_7 is fixed once it is received, so denote the posterior state by formula_14 instead of formula_15 and the posterior PDF by formula_16. It can be shown by algebraic manipulations that the posterior PDF is also Gaussian, formula_17 with the posterior mean formula_18 and covariance formula_19 given by the Kalman update formulas formula_20 where formula_21 is the so-called Kalman gain matrix. Ensemble Kalman Filter. The EnKF is a Monte Carlo approximation of the Kalman filter, which avoids evolving the covariance matrix of the PDF of the state vector formula_0. Instead, the PDF is represented by an ensemble formula_22 formula_23 is an formula_24 matrix whose columns are the ensemble members, and it is called the "prior ensemble". Ideally, ensemble members would form a sample from the prior distribution. However, the ensemble members are not in general independent except in the initial ensemble, since every EnKF step ties them together. They are deemed to be approximately independent, and all calculations proceed as if they actually were independent. Replicate the data formula_7 into an formula_25 matrix formula_26 so that each column formula_27 consists of the data vector formula_7 plus a random vector from the formula_28-dimensional normal distribution formula_29. If, in addition, the columns of formula_23 are a sample from the prior probability distribution, then the columns of formula_30 form a sample from the posterior probability distribution. To see this in the scalar case with formula_31: Let formula_32, and formula_33 Then formula_34. The first sum is the posterior mean, and the second sum, in view of the independence, has a variance formula_35, which is the posterior variance. The EnKF is now obtained simply by replacing the state covariance formula_3 in Kalman gain matrix formula_36 by the sample covariance formula_37 computed from the ensemble members (called the "ensemble covariance"), that is: formula_38 Implementation. Basic formulation. Here we follow. Suppose the ensemble matrix formula_23 and the data matrix formula_39 are as above. The ensemble mean and the covariance are formula_40 where formula_41 and formula_42 denotes the matrix of all ones of the indicated size. The posterior ensemble formula_43 is then given by formula_44 where the perturbed data matrix formula_39 is as above. Note that since formula_8 is a covariance matrix, it is always positive semidefinite and usually positive definite, so the inverse above exists and the formula can be implemented by the Cholesky decomposition. In, formula_8 is replaced by the sample covariance formula_45 where formula_46and the inverse is replaced by a pseudoinverse, computed using the singular-value decomposition (SVD) . Since these formulas are matrix operations with dominant Level 3 operations, they are suitable for efficient implementation using software packages such as LAPACK (on serial and shared memory computers) and ScaLAPACK (on distributed memory computers). 
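The analysis step just described amounts to a few dense matrix operations. Below is a minimal NumPy sketch of the basic formulation with perturbed observations; the ensemble size, observation matrix formula_10, error covariance formula_8 and data values are illustrative assumptions, and a linear solve is used rather than an explicit matrix inverse.

```python
# Sketch of the basic EnKF analysis step with perturbed observations, following
# the formulas above: X_p = X + C H^T (H C H^T + R)^{-1} (D - H X).
import numpy as np

def enkf_analysis(X, H, R, d, rng):
    """X: n x N prior ensemble, H: m x n observation matrix,
    R: m x m data error covariance, d: length-m observation vector."""
    N = X.shape[1]                                    # ensemble size
    m = d.size                                        # number of observations
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    C = A @ A.T / (N - 1)                             # ensemble (sample) covariance
    D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed data
    S = H @ C @ H.T + R
    return X + C @ H.T @ np.linalg.solve(S, D - H @ X)    # gain applied via a linear solve

# Toy problem with assumed sizes: n = 3 state variables, N = 20 members, m = 2 observations
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 20))
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
d = np.array([0.5, -0.3])
print(enkf_analysis(X, H, R, d, rng).shape)           # (3, 20) posterior ensemble
```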
Instead of computing the inverse of a matrix and multiplying by it, it is much better (several times cheaper and also more accurate) to compute the Cholesky decomposition of the matrix and treat the multiplication by the inverse as solution of a linear system with many simultaneous right-hand sides. Observation matrix-free implementation. Since we have replaced the covariance matrix with ensemble covariance, this leads to a simpler formula where ensemble observations are directly used without explicitly specifying the matrix formula_10. More specifically, define a function formula_47 of the form formula_48 The function formula_49 is called the "observation function" or, in the inverse problems context, the "forward operator". The value of formula_47 is what the value of the data would be for the state formula_0 assuming the measurement is exact. Then the posterior ensemble can be rewritten as formula_50 where formula_51 and formula_52 with formula_53 Consequently, the ensemble update can be computed by evaluating the observation function formula_49 on each ensemble member once and the matrix formula_10 does not need to be known explicitly. This formula holds also for an observation function formula_54 with a fixed offset formula_55, which also does not need to be known explicitly. The above formula has been commonly used for a nonlinear observation function formula_49, such as the position of a hurricane vortex. In that case, the observation function is essentially approximated by a linear function from its values at ensemble members. Implementation for a large number of data points. For a large number formula_28 of data points, the multiplication by formula_56 becomes a bottleneck. The following alternative formula is advantageous when the number of data points formula_28 is large (such as when assimilating gridded or pixel data) and the data error covariance matrix formula_8 is diagonal (which is the case when the data errors are uncorrelated), or cheap to decompose (such as banded due to limited covariance distance). Using the Sherman–Morrison–Woodbury formula formula_57 with formula_58 gives formula_59 which requires only the solution of systems with the matrix formula_8 (assumed to be cheap) and of a system of size formula_60 with formula_28 right-hand sides. See for operation counts. Further extensions. The EnKF version described here involves randomization of data. For filters without randomization of data, see. Since the ensemble covariance is rank deficient (there are many more state variables, typically millions, than the ensemble members, typically less than a hundred), it has large terms for pairs of points that are spatially distant. Since in reality the values of physical fields at distant locations are not that much correlated, the covariance matrix is tapered off artificially based on the distance, which gives rise to localized EnKF algorithms. These methods modify the covariance matrix used in the computations and, consequently, the posterior ensemble is no longer made only of linear combinations of the prior ensemble. For nonlinear problems, EnKF can create posterior ensemble with non-physical states. This can be alleviated by regularization, such as penalization of states with large spatial gradients. For problems with coherent features, such as hurricanes, thunderstorms, firelines, squall lines, and rain fronts, there is a need to adjust the numerical model state by deforming the state in space (its grid) as well as by correcting the state amplitudes additively. 
In 2007, Ravela et al. introduce the joint position-amplitude adjustment model using ensembles, and systematically derive a sequential approximation which can be applied to both EnKF and other formulations. Their method does not make the assumption that amplitudes and position errors are independent or jointly Gaussian, as others do. The morphing EnKF employs intermediate states, obtained by techniques borrowed from image registration and morphing, instead of linear combinations of states. Formally, EnKFs rely on the Gaussian assumption. In practice they can also be used for nonlinear problems, where the Gaussian assumption may not be satisfied. Related filters attempting to relax the Gaussian assumption in EnKF while preserving its advantages include filters that fit the state PDF with multiple Gaussian kernels, filters that approximate the state PDF by Gaussian mixtures, a variant of the particle filter with computation of particle weights by density estimation, and a variant of the particle filter with thick tailed data PDF to alleviate particle filter degeneracy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{x}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\mathbf{\\mu}" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": " p(\\mathbf{x})\\propto\\exp\\left( -\\frac{1}{2}(\\mathbf{x}-\\mathbf{\\mu })^{\\mathrm{T}}Q^{-1}(\\mathbf{x}-\\mathbf{\\mu})\\right) . " }, { "math_id": 5, "text": "\\propto" }, { "math_id": 6, "text": "p(\\mathbf{x})" }, { "math_id": 7, "text": "\\mathbf{d}" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "H\\mathbf{x}" }, { "math_id": 10, "text": "H" }, { "math_id": 11, "text": "p(\\mathbf{d}|\\mathbf{x})" }, { "math_id": 12, "text": " p\\left( \\mathbf{d}|\\mathbf{x}\\right) \\propto\\exp\\left( -\\frac{1}{2}(\\mathbf{d}-H\\mathbf{x})^{\\mathrm{T}}R^{-1}(\\mathbf{d}-H\\mathbf{x})\\right) . " }, { "math_id": 13, "text": " p\\left( \\mathbf{x}|\\mathbf{d}\\right) \\propto p\\left( \\mathbf{d}|\\mathbf{x}\\right) p(\\mathbf{x}). " }, { "math_id": 14, "text": "\\mathbf{\\hat{x}}" }, { "math_id": 15, "text": "\\mathbf{x}|\\mathbf{d}" }, { "math_id": 16, "text": "p\\left( \\mathbf{\\hat{x}}\\right) " }, { "math_id": 17, "text": " p\\left( \\mathbf{\\hat{x}}\\right) \\propto\\exp\\left( -\\frac{1}{2}(\\mathbf{\\hat{x}}-\\mathbf{\\hat{\\mu}})^{\\mathrm{T}}\\hat{Q}^{-1}(\\mathbf{\\hat{x}}-\\mathbf{\\hat{\\mu}})\\right) , " }, { "math_id": 18, "text": "\\mathbf{\\hat{\\mu}}" }, { "math_id": 19, "text": "\\hat{Q}" }, { "math_id": 20, "text": " \\mathbf{\\hat{\\mu}}=\\mathbf{\\mu}+K\\left( \\mathbf{d}-H\\mathbf{\\mu}\\right) ,\\quad\\hat{Q}=\\left( I-KH\\right) Q, " }, { "math_id": 21, "text": " K=QH^{\\mathrm{T}}\\left( HQH^{\\mathrm{T}}+R\\right) ^{-1}" }, { "math_id": 22, "text": " X=\\left[ \\mathbf{x}_{1},\\ldots,\\mathbf{x}_{N}\\right] =\\left[ \\mathbf{x}_{i}\\right]. " }, { "math_id": 23, "text": "X" }, { "math_id": 24, "text": "n\\times N" }, { "math_id": 25, "text": "m\\times N" }, { "math_id": 26, "text": " D=\\left[ \\mathbf{d}_{1},\\ldots,\\mathbf{d}_{N}\\right] =\\left[ \\mathbf{d}_{i}\\right], \\quad \\mathbf{d}_{i}=\\mathbf{d}+\\mathbf{\\epsilon_{i}}, \\quad \\mathbf{\\epsilon_{i}} \\sim N(0,R), " }, { "math_id": 27, "text": "\\mathbf{d}_{i}" }, { "math_id": 28, "text": "m" }, { "math_id": 29, "text": "N(0,R)" }, { "math_id": 30, "text": " \\hat{X}=X+K(D-HX) " }, { "math_id": 31, "text": "H=1" }, { "math_id": 32, "text": "x_i = \\mu + \\xi_i, \\; \\xi_i \\sim N(0, \\sigma_x^2)" }, { "math_id": 33, "text": "d_i = d + \\epsilon_i, \\; \\epsilon_i \\sim N(0, \\sigma_d^2)." 
}, { "math_id": 34, "text": "\\hat{x}_i = \\left(\\frac{1/\\sigma_x^2}{1/\\sigma_x^2 + 1/\\sigma_d^2} \\mu + \\frac{1/\\sigma_d^2}{1/\\sigma_x^2 + 1/\\sigma_d^2} d \\right)+ \\left(\\frac{1/\\sigma_x^2}{1/\\sigma_x^2 + 1/\\sigma_d^2} \\xi_i + \\frac{1/\\sigma_d^2}{1/\\sigma_x^2 + 1/\\sigma_d^2} \\epsilon_i \\right) " }, { "math_id": 35, "text": "\\left(\\frac{1/\\sigma_x^2}{1/\\sigma_x^2 + 1/\\sigma_d^2}\\right)^2 \\sigma_x^2 + \\left(\\frac{1/\\sigma_d^2}{1/\\sigma_x^2 + 1/\\sigma_d^2}\\right)^2 \\sigma_d^2 = \\frac{1}{1/\\sigma_x^2 + 1/\\sigma_d^2}" }, { "math_id": 36, "text": "K" }, { "math_id": 37, "text": "C" }, { "math_id": 38, "text": "K=CH^{\\mathrm{T}}\\left( HCH^{\\mathrm{T}}+R\\right) ^{-1}" }, { "math_id": 39, "text": "D" }, { "math_id": 40, "text": " E\\left( X\\right) =\\frac{1}{N}\\sum_{k=1}^{N}\\mathbf{x}_{k},\\quad C=\\frac{AA^{T}}{N-1}, " }, { "math_id": 41, "text": " A=X-E\\left( X\\right) \\mathbf{e}_{1\\times N} =X-\\frac{1}{N}\\left( X\\mathbf{e}_{N\\times1}\\right) \\mathbf{e}_{1\\times N}, " }, { "math_id": 42, "text": "\\mathbf{e}" }, { "math_id": 43, "text": "X^{p}" }, { "math_id": 44, "text": " X^{p}=X+CH^{T}\\left( HCH^{T}+R\\right) ^{-1}(D-HX), " }, { "math_id": 45, "text": "\\tilde{D} \\tilde{D}^{T}/\\left( N-1\\right) " }, { "math_id": 46, "text": "\\tilde{D} = D - \\frac{1}{N} d \\, \\mathbf{e}_{1\\times N}" }, { "math_id": 47, "text": "h(\\mathbf{x})" }, { "math_id": 48, "text": " h(\\mathbf{x})=H\\mathbf{x}. " }, { "math_id": 49, "text": "h" }, { "math_id": 50, "text": " X^{p}=X+\\frac{1}{N-1}A\\left( HA\\right) ^{T}P^{-1}(D-HX) " }, { "math_id": 51, "text": " HA=HX-\\frac{1}{N}\\left( \\left( HX\\right) \\mathbf{e}_{N\\times1}\\right) \\mathbf{e}_{1\\times N}, " }, { "math_id": 52, "text": " P=\\frac{1}{N-1}HA\\left( HA\\right) ^{T}+R, " }, { "math_id": 53, "text": "\\left[ HA\\right] _{i} =H\\mathbf{x}_{i}-H\\frac{1}{N}\\sum_{j=1}^{N}\\mathbf{x}_{j}\\ =h\\left( \\mathbf{x}_{i}\\right) -\\frac{1}{N}\\sum_{j=1}^{N}h\\left( \\mathbf{x}_{j}\\right) . " }, { "math_id": 54, "text": "h(\\mathbf{x})=H\\mathbf{x+f}" }, { "math_id": 55, "text": "\\mathbf{f}" }, { "math_id": 56, "text": "P^{-1}" }, { "math_id": 57, "text": " (R+UV^{T})^{-1}=R^{-1}-R^{-1}U(I+V^{T}R^{-1}U)^{-1}V^{T}R^{-1}, " }, { "math_id": 58, "text": " U=\\frac{1}{N-1}HA,\\quad V=HA, " }, { "math_id": 59, "text": "\\begin{align} P^{-1} & =\\left( R+\\frac{1}{N-1}HA\\left( HA\\right) ^{T}\\right) ^{-1}\\ = \\\\\n& =R^{-1}\\left[ I-\\frac{1}{N-1}\\left( HA\\right) \\left( I+\\left( HA\\right) ^{T}R^{-1}\\frac{1}{N-1}\\left( HA\\right) \\right) ^{-1}\\left( HA\\right) ^{T}R^{-1}\\right] , \\end{align}" }, { "math_id": 60, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=9553738
9553854
Artin–Rees lemma
In mathematics, the Artin–Rees lemma is a basic result about modules over a Noetherian ring, along with results such as the Hilbert basis theorem. It was proved in the 1950s in independent works by the mathematicians Emil Artin and David Rees; a special case was known to Oscar Zariski prior to their work. An intuitive characterization of the lemma involves the notion that a submodule "N" of a module "M" over some ring "A" with specified ideal "I" holds "a priori" two topologies: one induced by the topology on "M," and the other when considered with the "I-adic" topology over "A." Then Artin–Rees dictates that these topologies actually coincide, at least when "A" is Noetherian and "M" finitely-generated. One consequence of the lemma is the Krull intersection theorem. The result is also used to prove the exactness property of completion. The lemma also plays a key role in the study of ℓ-adic sheaves. Statement. Let "I" be an ideal in a Noetherian ring "R"; let "M" be a finitely generated "R"-module and let "N" be a submodule of "M". Then there exists an integer "k" ≥ 1 so that, for "n" ≥ "k", formula_0 Proof. The lemma immediately follows from the fact that "R" is Noetherian once necessary notions and notations are set up. For any ring "R" and an ideal "I" in "R", we set formula_1 ("B" for blow-up.) We say a decreasing sequence of submodules formula_2 is an "I"-filtration if formula_3; moreover, it is stable if formula_4 for sufficiently large "n". If "M" is given an "I"-filtration, we set formula_5; it is a graded module over formula_6. Now, let "M" be an "R"-module with the "I"-filtration formula_7 by finitely generated "R"-modules. We make an observation: formula_8 is a finitely generated module over formula_6 if and only if the filtration is "I"-stable. Indeed, if the filtration is "I"-stable, then formula_8 is generated by the first formula_9 terms formula_10 and those terms are finitely generated; thus, formula_8 is finitely generated. Conversely, if it is finitely generated, say, by some homogeneous elements in formula_11, then, for formula_12, each "f" in formula_13 can be written as formula_14 with the generators formula_15 in formula_16. That is, formula_17. We can now prove the lemma, assuming "R" is Noetherian. Let formula_18. Then the formula_13 form an "I"-stable filtration. Thus, by the observation, formula_8 is finitely generated over formula_6. But formula_19 is a Noetherian ring since "R" is. (The ring formula_20 is called the Rees algebra.) Thus, formula_8 is a Noetherian module and any submodule is finitely generated over formula_6; in particular, formula_21 is finitely generated when "N" is given the induced filtration; i.e., formula_22. Then the induced filtration is "I"-stable again by the observation. Krull's intersection theorem. Besides its use in the completion of a ring, a typical application of the lemma is the proof of Krull's intersection theorem, which says: formula_23 for a proper ideal "I" in a commutative Noetherian ring that is either a local ring or an integral domain. By the lemma applied to the intersection formula_24, we find "k" such that for formula_12, formula_25 Taking formula_26, this means formula_27 or formula_28. Thus, if "A" is local, formula_29 by Nakayama's lemma.
If "A" is an integral domain, then one uses the determinant trick (that is a variant of the Cayley–Hamilton theorem and yields Nakayama's lemma): &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  Let "u" be an endomorphism of an "A"-module "N" generated by "n" elements and "I" an ideal of "A" such that formula_30. Then there is a relation: formula_31 In the setup here, take "u" to be the identity operator on "N"; that will yield a nonzero element "x" in "A" such that formula_32, which implies formula_29, as formula_33 is a nonzerodivisor. For both a local ring and an integral domain, the "Noetherian" cannot be dropped from the assumption: for the local ring case, see local ring#Commutative case. For the integral domain case, take formula_34 to be the ring of algebraic integers (i.e., the integral closure of formula_35 in formula_36). If formula_37 is a prime ideal of "A", then we have: formula_38 for every integer formula_39. Indeed, if formula_40, then formula_41 for some complex number formula_42. Now, formula_42 is integral over formula_35; thus in formula_34 and then in formula_43, proving the claim. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I^{n} M \\cap N = I^{n - k} (I^{k} M \\cap N)." }, { "math_id": 1, "text": "B_I R = \\bigoplus_{n=0}^\\infty I^n" }, { "math_id": 2, "text": "M = M_0 \\supset M_1 \\supset M_2 \\supset \\cdots" }, { "math_id": 3, "text": "I M_n \\subset M_{n+1}" }, { "math_id": 4, "text": "I M_n = M_{n+1}" }, { "math_id": 5, "text": "B_I M = \\bigoplus_{n=0}^\\infty M_n" }, { "math_id": 6, "text": "B_I R" }, { "math_id": 7, "text": "M_i" }, { "math_id": 8, "text": "B_I M" }, { "math_id": 9, "text": "k+1" }, { "math_id": 10, "text": "M_0, \\dots, M_k" }, { "math_id": 11, "text": "\\bigoplus_{j=0}^k M_j" }, { "math_id": 12, "text": "n \\ge k" }, { "math_id": 13, "text": "M_n" }, { "math_id": 14, "text": "f = \\sum a_{j} g_{j}, \\quad a_{j} \\in I^{n-j}" }, { "math_id": 15, "text": "g_{j}" }, { "math_id": 16, "text": "M_j, j \\le k" }, { "math_id": 17, "text": "f \\in I^{n-k} M_k" }, { "math_id": 18, "text": "M_n = I^n M" }, { "math_id": 19, "text": "B_I R \\simeq R[It]" }, { "math_id": 20, "text": "R[It]" }, { "math_id": 21, "text": "B_I N" }, { "math_id": 22, "text": "N_n = M_n \\cap N" }, { "math_id": 23, "text": "\\bigcap_{n=1}^\\infty I^n = 0" }, { "math_id": 24, "text": "N" }, { "math_id": 25, "text": "I^{n} \\cap N = I^{n - k} (I^{k} \\cap N)." }, { "math_id": 26, "text": "n = k+1" }, { "math_id": 27, "text": "I^{k+1}\\cap N = I(I^{k}\\cap N)" }, { "math_id": 28, "text": "N = IN" }, { "math_id": 29, "text": "N = 0" }, { "math_id": 30, "text": "u(N) \\subset IN" }, { "math_id": 31, "text": "u^n + a_1 u^{n-1} + \\cdots + a_{n-1} u + a_n = 0, \\, a_i \\in I^i." }, { "math_id": 32, "text": "x N = 0" }, { "math_id": 33, "text": "x" }, { "math_id": 34, "text": "A" }, { "math_id": 35, "text": "\\mathbb{Z}" }, { "math_id": 36, "text": "\\mathbb{C}" }, { "math_id": 37, "text": "\\mathfrak p" }, { "math_id": 38, "text": "\\mathfrak{p}^n = \\mathfrak{p}" }, { "math_id": 39, "text": "n > 0" }, { "math_id": 40, "text": "y \\in \\mathfrak p" }, { "math_id": 41, "text": "y = \\alpha^n" }, { "math_id": 42, "text": "\\alpha" }, { "math_id": 43, "text": "\\mathfrak{p}" } ]
https://en.wikipedia.org/wiki?curid=9553854
9558678
Exponential utility
In economics and finance, exponential utility is a specific form of the utility function, used in some contexts because of its convenience when risk (sometimes referred to as uncertainty) is present, in which case expected utility is maximized. Formally, exponential utility is given by: formula_0 formula_1 is a variable that the economic decision-maker prefers more of, such as consumption, and formula_2 is a constant that represents the degree of risk preference (formula_3 for risk aversion, formula_4 for risk-neutrality, or formula_5 for risk-seeking). In situations where only risk aversion is allowed, the formula is often simplified to formula_6. Note that the additive term 1 in the above function is mathematically irrelevant and is (sometimes) included only for the aesthetic feature that it keeps the range of the function between zero and one over the domain of non-negative values for "c". The reason for its irrelevance is that maximizing the expected value of utility formula_7 gives the same result for the choice variable as does maximizing the expected value of formula_8; since expected values of utility (as opposed to the utility function itself) are interpreted ordinally instead of cardinally, the range and sign of the expected utility values are of no significance. The exponential utility function is a special case of the hyperbolic absolute risk aversion utility functions. Risk aversion characteristic. Exponential utility implies constant absolute risk aversion (CARA), with coefficient of absolute risk aversion equal to a constant: formula_9 In the standard model of one risky asset and one risk-free asset, for example, this feature implies that the optimal holding of the risky asset is independent of the level of initial wealth; thus on the margin any additional wealth would be allocated totally to additional holdings of the risk-free asset. This feature explains why the exponential utility function is considered unrealistic. Mathematical tractability. Though isoelastic utility, exhibiting constant "relative" risk aversion (CRRA), is considered more plausible (as are other utility functions exhibiting decreasing absolute risk aversion), exponential utility is particularly convenient for many calculations. Consumption example. For example, suppose that consumption "c" is a function of labor supply "x" and a random term formula_10: "c" = "c"("x") + formula_10. Then under exponential utility, expected utility is given by: formula_11 where E is the expectation operator. With normally distributed noise, i.e., formula_12 E("u"("c")) can be calculated easily using the fact that formula_13 Thus formula_14 Multi-asset portfolio example. Consider the portfolio allocation problem of maximizing expected exponential utility formula_15 of final wealth "W" subject to formula_16 where the prime sign indicates a vector transpose and where formula_17 is initial wealth, "x" is a column vector of quantities placed in the "n" risky assets, "r" is a random vector of stochastic returns on the "n" assets, "k" is a vector of ones (so formula_18 is the quantity placed in the risk-free asset), and "r""f" is the known scalar return on the risk-free asset. Suppose further that the stochastic vector "r" is jointly normally distributed. Then expected utility can be written as formula_19 where formula_20 is the mean vector of the vector "r" and formula_21 is the variance of final wealth. 
Maximizing this is equivalent to minimizing formula_22 which in turn is equivalent to maximizing formula_23 Denoting the covariance matrix of "r" as "V", the variance formula_24 of final wealth can be written as formula_25. Thus we wish to maximize the following with respect to the choice vector "x" of quantities to be placed in the risky assets: formula_26 This is an easy problem in matrix calculus, and its solution is formula_27 From this it can be seen that (1) the holdings "x"* of the risky assets are unaffected by initial wealth "W"0, an unrealistic property, and (2) the holding of each risky asset is smaller the larger is the risk aversion parameter "a" (as would be intuitively expected). This portfolio example shows the two key features of exponential utility: tractability under joint normality, and lack of realism due to its feature of constant absolute risk aversion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
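As a numerical illustration of the closed-form solution above, the sketch below evaluates formula_27 for assumed parameter values; it also exhibits the two features just noted, independence from initial wealth and smaller holdings for larger risk aversion.

```python
# Sketch: optimal risky-asset holdings under exponential (CARA) utility with
# jointly normal returns, x* = (1/a) V^{-1} (mu - r_f * k), as derived above.
import numpy as np

def cara_portfolio(a, mu, V, r_f):
    """a: absolute risk aversion, mu: mean gross returns, V: return covariance, r_f: risk-free gross return."""
    k = np.ones_like(mu)
    return np.linalg.solve(V, mu - r_f * k) / a

# Hypothetical inputs for two risky assets
a = 2.0
mu = np.array([1.08, 1.12])
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])
r_f = 1.03

x_star = cara_portfolio(a, mu, V, r_f)
print(x_star)                              # does not depend on initial wealth W_0
print(cara_portfolio(2 * a, mu, V, r_f))   # doubling a halves every holding
```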
[ { "math_id": 0, "text": "u(c) = \\begin{cases} (1-e^{-a c})/a & a \\neq 0 \\\\ c & a = 0 \\\\ \\end{cases} " }, { "math_id": 1, "text": "c" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "a>0" }, { "math_id": 4, "text": "a=0" }, { "math_id": 5, "text": "a<0" }, { "math_id": 6, "text": "u(c)=1-e^{-a c}" }, { "math_id": 7, "text": "u(c)=(1-e^{-a c})/a" }, { "math_id": 8, "text": "u(c)=-e^{-a c}/a" }, { "math_id": 9, "text": "\\frac{-u''(c)}{u'(c)}=a." }, { "math_id": 10, "text": "\\epsilon" }, { "math_id": 11, "text": "\\text{E}(u(c))=\\text{E}[1-e^{-a (c(x)+ \\epsilon)}]," }, { "math_id": 12, "text": "\\varepsilon \\sim N(\\mu, \\sigma^2),\\!" }, { "math_id": 13, "text": "\\text{E}[e^{-a \\varepsilon}]=e^{-a \\mu + \\frac{a^2}{2}\\sigma^2}." }, { "math_id": 14, "text": "\\text{E}(u(c))=\\text{E}[1-e^{-a (c(x)+ \\epsilon)}] = \\text{E}[1-e^{-a c(x)}e^{-a \\varepsilon}] = 1 - e^{-ac(x)}\\text{E}[e^{-a \\epsilon}] = 1 - e^{-ac(x)}e^{-a \\mu + \\frac{a^2}{2}\\sigma^2}." }, { "math_id": 15, "text": "\\text{E}[-e^{-aW}]" }, { "math_id": 16, "text": "W = x'r + (W_0 - x'k) \\cdot r_f" }, { "math_id": 17, "text": "W_0" }, { "math_id": 18, "text": "W_0 - x'k" }, { "math_id": 19, "text": "\\text{E}[-e^{-aW}] = - \\text{E}[e^{-a [x'r + (W_0 - x'k) \\cdot r_f]}] = - e^{-a[(W_0 - x'k)r_f]}\\text{E}[e^{-a \\cdot x'r}] = - e^{-a[(W_0 - x'k)r_f]}e^{-a \\cdot x'\\mu + \\frac{a^2}{2}\\sigma^2}" }, { "math_id": 20, "text": "\\mu" }, { "math_id": 21, "text": "\\sigma^2" }, { "math_id": 22, "text": "e^{ar_f (x'k)-a \\cdot x'\\mu + \\frac{a^2}{2}\\sigma^2}," }, { "math_id": 23, "text": "x'(\\mu - r_f \\cdot k) - \\frac{a}{2}\\sigma^2." }, { "math_id": 24, "text": "\\sigma ^2" }, { "math_id": 25, "text": "x'Vx" }, { "math_id": 26, "text": "x'(\\mu - r_f \\cdot k) - \\frac{a}{2} \\cdot x'Vx." }, { "math_id": 27, "text": "x^* = \\frac{1}{a}V^{-1} (\\mu - r_f \\cdot k). " } ]
https://en.wikipedia.org/wiki?curid=9558678
9559289
Cold-air damming
Cold air damming, or CAD, is a meteorological phenomenon that involves a high-pressure system (anticyclone) accelerating equatorward east of a north-south oriented mountain range due to the formation of a barrier jet behind a cold front associated with the poleward portion of a split upper level trough. Initially, a high-pressure system moves poleward of a north-south mountain range. Once it sloshes over poleward and eastward of the range, the flow around the high banks up against the mountains, forming a barrier jet which funnels cool air down a stretch of land east of the mountains. The higher the mountain chain, the deeper the cold air mass becomes lodged to its east, and the greater impediment it is within the flow pattern and the more resistant it becomes to intrusions of milder air. As the equatorward portion of the system approaches the cold air wedge, persistent low cloudiness, such as stratus, and precipitation such as drizzle develop, which can linger for long periods of time; as long as ten days. The precipitation itself can create or enhance a damming signature, if the poleward high is relatively weak. If such events accelerate through mountain passes, dangerously accelerated mountain-gap winds can result, such as the Tehuantepecer and Santa Ana winds. These events are seen commonly in the northern Hemisphere across central and eastern North America, south of the Alps in Italy, and near Taiwan and Korea in Asia. Events in the southern Hemisphere have been noted in South America east of the Andes. Location. Cold air damming typically happens in the mid-latitudes as this region lies within the Westerlies, an area where frontal intrusions are common. When the Arctic oscillation is negative and pressures are higher over the poles, the flow is more meridional, blowing from the direction of the pole towards the equator, which brings cold air into the mid-latitudes. Cold air damming is observed in the southern hemisphere to the east of the Andes, with cool incursions seen as far equatorward as the 10th parallel south. In the northern hemisphere, common situations occur along the east side of ranges within the Rocky Mountains system over the western portions of the Great Plains, as well as various other mountain ranges (such as the Cascades) along the west coast of the United States. The initial is caused by the poleward portion of a split upper level trough, with the damming preceding the arrival of the more equatorward portion. Some of the cold air damming events which occur east of the Rockies continue southward to the east of the Sierra Madre Oriental through the coastal plain of Mexico through the Isthmus of Tehuantepec. Further funneling of cool air occurs within the Isthmus, which can lead to winds of gale and hurricane-force, referred to as a Tehuantepecer. Other common instances of cold air damming take place on the coastal plain of east-central North America, between the Appalachian Mountains and Atlantic Ocean. In Europe, areas south of the Alps can be prone to cold air damming. In Asia, cold air damming has been documented near Taiwan and the Korean Peninsula. The cold surges on the eastern slopes of the Rocky Mountains, Iceland, New Zealand, and eastern Asia differ from the cold air damming east of the Appalachians due to the wider mountain ranges, sloping terrain, and lack of an eastern body of warm water. Development. The usual development of CAD is when a cool high-pressure area wedges in east of a north-south oriented mountain chain. 
As a system approaches from the west, a persistent cloud deck with associated precipitation forms and lingers across the region for prolonged periods of time. Temperature differences between the warmer coast and inland sections east of the terrain can exceed 36 degrees Fahrenheit (20 degrees Celsius), with rain near the coast and frozen precipitation, such as snow, sleet, and freezing rain, falling inland during colder times of the year. In the Northern Hemisphere, two-thirds of such events occur between October and April, with summer events preceded by the passage of a backdoor cold front. In the Southern Hemisphere, they have been documented to occur between June and November. Cold air damming events which occur when the parent surface high-pressure system is relatively weak, with a central pressure below , or remaining a progressive feature (move consistently eastward), can be significantly enhanced by cloudiness and precipitation itself. Clouds and precipitation act to increase sea level pressure in the area by 1.5 to 2.0 mb ( 0.04 to 0.06 inHg). When the surface high moves offshore, the precipitation itself can cause the CAD event. formula_0 Detection. Detection algorithm. This algorithm is used to identify the specific type of CAD events based on the surface pressure ridge, its associated cold dome, and ageostrophic northeasterly flow which flows at a significant angle to the isobaric pattern. These values are calculated using hourly data from surface weather observations. The Laplacian of sea level pressure or potential temperature in the mountain-normal—perpendicular to the mountain chain—direction provides a quantitative measure of the intensity of a pressure ridge or associated cold dome. The detection algorithm is based upon Laplacians (formula_1) evaluated for three mountain-normal lines constructed from surface observations in and around the area affected by the cold air damming—the damming region. The "x" denotes either sea level pressure or potential temperature (θ) and the subscripts 1–3 denote stations running from west to east along the line, while the "d" represents the distance between two stations. Negative Laplacian values are typically associated with pressure maxima at the center station, while positive Laplacian values usually correspond to colder temperatures in the center of the section. Effects. When cold air damming occurs, it allows for cold air to surge toward the equator in the affected area. In calm, non-stormy situations, the cold air will advance unhindered until the high-pressure area can no longer exert any influence because of a lack of size or its leaving the area. The effects of cold air damming become more prominent (and also more complicated) when a storm system interacts with the spreading cold air. The effects of cold air damming east of the Cascades in Washington are strengthened by the bowl or basin-like topography of Eastern Washington. Cold Arctic air flowing south from British Columbia through the Okanogan River valley fills the basin, blocked to the south by the Blue Mountains. Cold air damming causes the cold air to bank up along the eastern Cascade slopes, especially into the lower passes, such as Snoqualmie Pass and Stevens Pass. Milder, Pacific-influenced air moving east over the Cascades is often forced aloft by the cold air in the passes, held in place by cold air damming east of the Cascades. As a result, the passes often receive more snow than higher areas in the Cascades, which supports skiing at Snoqualmie and Stevens passes. 
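As a rough illustration of the detection algorithm described in the Detection section above, the following Python sketch evaluates the mountain-normal Laplacian for a three-station line; the station values and spacings are hypothetical and are not taken from any operational scheme.

def mountain_normal_laplacian(x1, x2, x3, d12, d23):
    """Finite-difference Laplacian of a quantity x (sea level pressure or
    potential temperature) along a mountain-normal line of three stations,
    following the form used for CAD detection. A negative value for pressure
    indicates a ridge at the centre station; a positive value for potential
    temperature indicates a cold dome there."""
    return ((x3 - x2) / d23 - (x2 - x1) / d12) / (0.5 * (d23 + d12))

# Hypothetical hourly sea level pressures (hPa), west to east across the
# damming region, with station spacings in kilometres.
ridge = mountain_normal_laplacian(x1=1016.0, x2=1021.5, x3=1018.0,
                                  d12=150.0, d23=150.0)
print(f"pressure Laplacian: {ridge:+.4f} hPa km^-2")  # negative, so a ridge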
The situation during Tehuantepecers and Santa Ana wind events are more complicated, as they occur when air rushing southward due to cold air damming east of the Sierra Madre Oriental and Sierra Nevada respectively, is accelerated when it moves through gaps in the terrain. The Santa Ana is further complicated by down-sloped air, or foehn winds, drying out and warming up in the lee of the Sierra Nevada and coastal ranges, leading to a dangerous wildfire situation. The wedge. The effect known as "the wedge" is the most widely known example of cold air damming. In this scenario, the more equatorward storm system will bring warmer air with it above the surface (at around ). This warmer air will ride over the cooler air at the surface, which is being held in place by the poleward high-pressure system. This temperature profile, known as a temperature inversion, will lead to the development of drizzle, rain, freezing rain, sleet, or snow. When it is above freezing at the surface, drizzle or rain could result. Sleet, or Ice pellets, form when a layer of above-freezing air exists with sub-freezing air both above and below it. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too small, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A thicker or stronger cold layer, where the warm layer aloft does not significantly warm above the melting point, will lead to snow. Blocking. Blocking occurs when a well-established poleward high-pressure system lies near or within the path of the advancing storm system. The thicker the cold air mass is, the more effectively it can block an invading milder air mass. The depth of the cold air mass is normally shallower than the mountain barrier which created the CAD. Some events across the Intermountain West can last for ten days. Pollutants and smoke can remain suspended within the stable air mass of a cold air dam. Erosion. It is often more difficult to forecast the erosion of a CAD event than its development. Numerical models tend to underestimate the event's duration. The bulk Richardson number, Ri, calculates vertical wind shear to help forecast erosion. The numerator corresponds to the strength of the inversion layer separating the CAD cold dome and the immediate atmosphere above. The denominator expresses the square of the vertical wind shear across the inversion layer. Small values of the Richardson number result in turbulent mixing that can weaken the inversion layer and aid the deterioration of the cold dome, leading to the end of the CAD event. formula_2 Cold advection aloft. One of the most effective erosion mechanisms is the import of colder air—also known as cold air advection—aloft. With cold advection maximized above the inversion layer, cooling aloft can weaken in the inversion layer, which allows for mixing and the demise of CAD. The Richardson number is reduced by the weakening inversion layer. Cold advection favors subsidence and drying, which supports solar heating beneath the inversion. Solar heating. Solar heating has the ability to erode a CAD event by heating the surface in the absence of a thick overcast. However, even a shallow stratus layer during the cold season can render solar heating ineffective. 
During breaks of overcast for the warm season, absorption of solar radiation at the surface warms the cold dome, once again lowering the Richardson number and promoting mixing. Near-surface divergence. In the United States, as a high-pressure system moves eastward out to the Atlantic, northerly winds are reduced along the southeast coast. If northeasterly winds persist in the southern damming region, net divergence is implied. Near-surface divergence reduces the depth of the cold dome as well as aid the sinking of air, which can reduce cloud cover. The reduction of cloud cover permits solar heating to effectively warm the cold dome from the surface up. Shear-induced mixing. The strong static stability of a CAD inversion layer usually inhibits turbulent mixing, even in the presence of vertical wind shear. However, if the shear strengthens in addition to a weakening of the inversion, the cold dome becomes vulnerable to shear-induced mixing. Unlike solar heating, this CAD event erosion happens from the top down. Mixing occurs when the depth of the northeasterly flow becomes increasingly shallow and strong southerly flow makes a downward progression resulting in high shear. Frontal advance. Erosion of a cold dome will typically first occur near the fringes where the layer is relatively shallow. As mixing progresses and the cold dome erodes, the boundary of the cold air – often indicated as a coastal or warm front – will move inland, diminishing the width of the cold dome. Classifying Southeastern United States events. An objective scheme has been developed to classify certain types of CAD events in the Southeastern United States. Each scheme is based on the strength and location of the parent high-pressure system. Classical. Classical CAD events are characterized by dry synoptic forcing, partial diabatic contribution, and a strong parent anticyclone (high-pressure system) located to the north of the Appalachian damming region. A strong high-pressure system usually is defined as having a central pressure over . The northeastern United States is the most favorable location for the high-pressure system in classical CAD events. For diabatically enhanced classical events, at 24 hours prior to the onset of CAD, a prominent 250-mb jet extends from southwest to northeast across eastern North America. A general area of troughing is present at the 500- and 250-mb levels west of the jet. The parent high-pressure system is centered over the upper Midwest beneath the 250-mb jet entrance region, setting up conditions for CAD east of the Rocky Mountains. For dry onset classical events, the 250-mb jet is weaker and centered farther east relative to the diabatically enhanced classical events. The jet also does not extend as far southwest compared to diabatically enhanced classical CAD events. The center of the high-pressure system is farther east, so ridging extends southward into the south-central eastern United States. Although both types of classical events begin differently, their results are very similar. Hybrid. When the parent anticyclone is weaker or not ideally located, the diabatic process must start to contribute in order to develop CAD. In scenarios where there is an equal contribution from dry synoptic forcing and diabatic processes, it is considered a hybrid damming event. The 250-mb jet is weaker and slightly farther south relative to a classical composite 24 hours prior to CAD onset. 
With the surface parent high farther west, it builds eastward into the northern Great Plains and western Great Lakes region, located beneath a region of confluent flow from the 250-mb jet. In-situ. In-situ events are the weakest and often the shortest-lived of the CAD event types. These events occur in the absence of ideal synoptic conditions, when the anticyclone is unfavorably positioned, located well offshore. In some in-situ cases, the barrier pressure gradient is largely due to a cyclone to the southwest rather than the anticyclone to the northeast. Diabatic processes lead to the stabilization of an air mass approaching the Appalachians. Diabatic processes are essential for in-situ events. These events often lead to weak, narrow damming. Prediction. Overview. Weather forecasts during CAD events are especially prone to inaccuracies. Precipitation type and daily high temperatures are especially difficult to predict. Numerical weather models tend to be more accurate in predicting the development of a CAD event, and less accurate in predicting its erosion. Manual forecasting can provide more accurate forecasts. An experienced human forecaster will use numerical models as a guide, but account for the models' inaccuracies and shortcomings. Example case. The Appalachian CAD event of October 2002 illustrates some shortcomings of short-term weather models for predicting a CAD event. This event was characterized by a stable saturated layer of cold air from the surface up to the 700 mb pressure level over the states of Virginia, North Carolina, and South Carolina. This mass of cold air was blocked by the Appalachians and did not dissipate even as a coastal cyclone to the east strengthened. During this event, short-term weather models predicted this cold air mass clearing, leading to fairer weather for the region, such as warmer conditions and the absence of a stratus cloud layer. However, the models performed poorly because they did not account for excessive solar radiation transmission through the cloud layers and shallow mixing promoted by the models' convective parameterization scheme. While these errors have since been corrected in updated models, at the time they resulted in an inaccurate forecast. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
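The bulk Richardson number used in the Erosion section to diagnose shear-induced mixing can be sketched in the same way; the layer values below are hypothetical, and the critical value of about 0.25 quoted in the comment is the conventional threshold rather than one stated in this article.

G = 9.81  # gravitational acceleration, m s^-2

def bulk_richardson(theta_v_bottom, theta_v_top, du, dv, dz):
    """Bulk Richardson number across a CAD inversion layer:
    Ri = (g * d(theta_v) / theta_v) / (((dU)**2 + (dV)**2) / dZ).
    Small values (conventionally below about 0.25) indicate that wind shear
    can overcome the stratification and mix out the top of the cold dome."""
    theta_v_mean = 0.5 * (theta_v_bottom + theta_v_top)
    buoyancy = G * (theta_v_top - theta_v_bottom) / theta_v_mean
    shear = (du ** 2 + dv ** 2) / dz
    return buoyancy / shear

# Hypothetical inversion: a 6 K increase of virtual potential temperature over
# 500 m, with 15 m/s of southerly shear across the layer.
print(bulk_richardson(280.0, 286.0, du=2.0, dv=15.0, dz=500.0))  # ~0.45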
[ { "math_id": 0, "text": "\n\\nabla ^2 x = \\frac{ \\frac{x_{3}-x_{2} }{d_{2-3}} - \\frac{x_2 - x_1}{d_{1-2}}}\n{\\frac{1}{2}(d_{2-3}+d_{1-2})} \n" }, { "math_id": 1, "text": "\n\\nabla ^2 x " }, { "math_id": 2, "text": "\nRi = \\frac{g\\Delta \\theta _{v}/\\theta _{v}}{[(\\Delta U)^2 + (\\Delta V)^{2}]/\\Delta Z}\n" } ]
https://en.wikipedia.org/wiki?curid=9559289
9560178
Hyperpolarizability
The hyperpolarizability, a nonlinear-optical property of a molecule, is the second order electric susceptibility per unit volume. The hyperpolarizability can be calculated using quantum chemical calculations developed in several software packages. See nonlinear optics. Definition and higher orders. The linear electric polarizability formula_0 in isotropic media is defined as the ratio of the induced dipole moment formula_1 of an atom to the electric field formula_2 that produces this dipole moment. Therefore, the dipole moment is: formula_3 In an isotropic medium formula_1 is in the same direction as formula_2, i.e. formula_0 is a scalar. In an anisotropic medium formula_1 and formula_2 can be in different directions and the polarisability is now a tensor. The total density of induced polarization is the product of the number density of molecules multiplied by the dipole moment of each molecule, i.e.: formula_4 where formula_5 is the concentration, formula_6 is the vacuum permittivity, and formula_7 is the electric susceptibility. In a nonlinear optical medium, the polarization density is written as a series expansion in powers of the applied electric field, and the coefficients are termed the non-linear susceptibility: formula_8 where the coefficients χ("n") are the "n"-th-order susceptibilities of the medium, and the presence of such a term is generally referred to as an "n"-th-order nonlinearity. In isotropic media formula_9 is zero for even "n", and is a scalar for odd n. In general, χ("n") is an ("n" + 1)-th-rank tensor. It is natural to perform the same expansion for the non-linear molecular dipole moment: formula_10 i.e. the "n"-th-order susceptibility for an ensemble of molecules is simply related to the "n"-th-order hyperpolarizability for a single molecule by: formula_11 With this definition formula_12 is equal to formula_0 defined above for the linear polarizability. Often formula_13 is given the symbol formula_14 and formula_15 is given the symbol formula_16. However, care is needed because some authors take out the factor formula_6 from formula_17, so that formula_18 and hence formula_19, which is convenient because then the (hyper-)polarizability may be accurately called the (nonlinear-)susceptibility per molecule, but at the same time inconvenient because of the inconsistency with the usual linear polarisability definition above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
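As a minimal numerical sketch of the molecule-to-bulk relation formula_11, the snippet below converts an assumed second-order susceptibility and number density into a per-molecule first hyperpolarizability; both input values are illustrative placeholders, not measured data, and SI units are used throughout.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F m^-1

def molecular_from_bulk(chi_n, number_density):
    """alpha^(n) = (epsilon_0 / rho) * chi^(n): the n-th order molecular
    (hyper)polarizability from the n-th order bulk susceptibility and the
    molecular number density rho (molecules per cubic metre)."""
    return EPS0 * chi_n / number_density

beta = molecular_from_bulk(chi_n=2.0e-12, number_density=5.0e27)  # assumed values
print(f"beta ~ {beta:.3e} (SI)")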
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\mathbf{p}" }, { "math_id": 2, "text": "\\mathbf{E}" }, { "math_id": 3, "text": "\\mathbf{p}=\\alpha \\mathbf{E}" }, { "math_id": 4, "text": "\\mathbf{P} = \\rho \\mathbf{p} = \\rho \\alpha \\mathbf{E} = \\varepsilon_0 \\chi \\mathbf{E}," }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\varepsilon_0" }, { "math_id": 7, "text": "\\chi" }, { "math_id": 8, "text": "\\mathbf{P}(t) = \\varepsilon_0 \\left( \\chi^{(1)} \\mathbf{E}(t) + \\chi^{(2)} \\mathbf{E}^2(t) + \\chi^{(3)} \\mathbf{E}^3(t) + \\ldots \\right)," }, { "math_id": 9, "text": "\\chi^{(n)}" }, { "math_id": 10, "text": "\\mathbf{p}(t) = \\alpha^{(1)} \\mathbf{E}(t) + \\alpha^{(2)} \\mathbf{E}^2(t) + \\alpha^{(3)} \\mathbf{E}^3(t) + \\ldots ," }, { "math_id": 11, "text": "\\alpha^{(n)}=\\frac{\\varepsilon_0}{\\rho} \\chi^{(n)} ." }, { "math_id": 12, "text": "\\alpha^{(1)}" }, { "math_id": 13, "text": "\\alpha^{(2)}" }, { "math_id": 14, "text": "\\beta" }, { "math_id": 15, "text": "\\alpha^{(3)}" }, { "math_id": 16, "text": "\\gamma" }, { "math_id": 17, "text": "\\alpha^{(n)}" }, { "math_id": 18, "text": "\\mathbf{p}=\\varepsilon_0\\sum_n\\alpha^{(n)} \\mathbf{E}^n" }, { "math_id": 19, "text": "\\alpha^{(n)}=\\chi^{(n)}/\\rho" } ]
https://en.wikipedia.org/wiki?curid=9560178
95646
Dioptre
Unit of measurement of optical power A dioptre (British spelling) or diopter (American spelling), symbol dpt, is a unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 dpt = 1 m−1. It is normally used to express the optical power of a lens or curved mirror, which is a physical quantity equal to the reciprocal of the focal length, expressed in metres. For example, a 3-dioptre lens brings parallel rays of light to focus at 1⁄3 metre. A flat window has an optical power of zero dioptres, as it does not cause light to converge or diverge. Dioptres are also sometimes used for other reciprocals of distance, particularly radii of curvature and the vergence of optical beams. The main benefit of using optical power rather than focal length is that the thin lens formula has the object distance, image distance, and focal length all as reciprocals. Additionally, when relatively thin lenses are placed close together their powers approximately add. Thus, a thin 2.0-dioptre lens placed close to a thin 0.5-dioptre lens yields almost the same focal length as a single 2.5-dioptre lens. Though the dioptre is based on the SI-metric system, it has not been included in the standard, so that there is no international name or symbol for this unit of measurement—within the international system of units, this unit for optical power would need to be specified explicitly as the inverse metre (m−1). However, most languages have borrowed the original name and some national standardization bodies like DIN specify a unit name (dioptrie, dioptria, etc.). In vision care the symbol D is frequently used. The idea of numbering lenses based on the reciprocal of their focal length in metres was first suggested by Albrecht Nagel in 1866. The term "dioptre" was proposed by French ophthalmologist Ferdinand Monoyer in 1872, based on earlier use of the term "dioptrice" by Johannes Kepler. In vision correction. The fact that optical powers are approximately additive enables an eye care professional to prescribe corrective lenses as a simple correction to the eye's optical power, rather than doing a detailed analysis of the entire optical system (the eye and the lens). Optical power can also be used to adjust a basic prescription for reading. Thus an eye care professional, having determined that a myopic (nearsighted) person requires a basic correction of, say, −2 dioptres to restore normal distance vision, might then make a further prescription of 'add 1' for reading, to make up for lack of accommodation (ability to alter focus). This is the same as saying that −1 dioptre lenses are prescribed for reading. In humans, the total optical power of the relaxed eye is approximately 60 dioptres. The cornea accounts for approximately two-thirds of this refractive power (about 40 dioptres) and the crystalline lens contributes the remaining one-third (about 20 dioptres). In focusing, the ciliary muscle contracts to reduce the tension or stress transferred to the lens by the suspensory ligaments. This results in increased convexity of the lens, which in turn increases the optical power of the eye. The amplitude of accommodation is about 11 to 16 dioptres at age 15, decreasing to about 10 dioptres at age 25, and to around 1 dioptre above age 60.
Convex lenses have positive dioptric value and are generally used to correct hyperopia (farsightedness) or to allow people with presbyopia (the limited accommodation of advancing age) to read at close range. Over the counter reading glasses are rated at +1.00 to +4.00 dioptres. Concave lenses have negative dioptric value and generally correct myopia (nearsightedness). Typical glasses for mild myopia have a power of −0.50 to −3.00 dioptres. Optometrists usually measure refractive error using lenses graded in steps of 0.25 dioptres. Curvature. The dioptre can also be used as a measurement of curvature equal to the reciprocal of the radius measured in metres. For example, a circle with a radius of 1/2 metre has a curvature of 2 dioptres. If the curvature of a surface of a lens is "C" and the index of refraction is "n", the optical power is φ = ("n" − 1)"C". If both surfaces of the lens are curved, consider their curvatures as positive toward the lens and add them. This gives approximately the right result, as long as the thickness of the lens is much less than the radius of curvature of one of the surfaces. For a mirror the optical power is φ = 2"C". Relation to magnifying power. The "magnifying power" V of a simple magnifying glass is related to its optical power φ by formula_0. This is approximately the magnification observed when a person with normal vision holds the magnifying glass close to his or her eye. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
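A short Python sketch of the arithmetic described above: optical power as the reciprocal of focal length, the approximate additivity of thin-lens powers, the single-surface formula φ = (n − 1)C, and the magnifying-power relation formula_0. All numerical values are illustrative.

def power_from_focal_length(f_metres):
    """Optical power in dioptres is the reciprocal of the focal length in metres."""
    return 1.0 / f_metres

def focal_length_from_power(phi_dioptres):
    return 1.0 / phi_dioptres

# Thin lenses in contact: powers approximately add.
combined = 2.0 + 0.5                        # dioptres
print(focal_length_from_power(combined))    # ~0.4 m, close to a single 2.5 D lens

def surface_power(n, curvature):
    """Power contributed by one lens surface of curvature C (in reciprocal
    metres) for refractive index n: phi = (n - 1) * C."""
    return (n - 1.0) * curvature

print(surface_power(n=1.5, curvature=4.0))  # 2.0 D for a 0.25 m radius of curvature

def magnifying_power(phi_dioptres):
    """V = 0.25 m x phi + 1 for a simple magnifier held close to the eye."""
    return 0.25 * phi_dioptres + 1.0

print(magnifying_power(power_from_focal_length(0.1)))  # a 10 D lens gives about 3.5x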
[ { "math_id": 0, "text": "V = 0.25\\ \\mathrm{m} \\times \\varphi + 1" } ]
https://en.wikipedia.org/wiki?curid=95646
9565831
Kauffman polynomial
In knot theory, the Kauffman polynomial is a 2-variable knot polynomial due to Louis Kauffman. It is initially defined on a link diagram as formula_0, where formula_1 is the writhe of the link diagram and formula_2 is a polynomial in "a" and "z" defined on link diagrams by the following properties: formula_3 (where "O" is the standard diagram of the unknot), and formula_4. Here formula_5 is a strand and formula_6 (resp. formula_7) is the same strand with a right-handed (resp. left-handed) curl added (using a type I Reidemeister move). Additionally "L" must satisfy Kauffman's skein relation, which is stated pictorially in terms of the "L" polynomials of diagrams that differ inside a disc but are identical outside it. Kauffman showed that "L" exists and is a regular isotopy invariant of unoriented links. It follows easily that "F" is an ambient isotopy invariant of oriented links. The Jones polynomial is a special case of the Kauffman polynomial, as the "L" polynomial specializes to the bracket polynomial. The Kauffman polynomial is related to Chern–Simons gauge theories for SO(N) in the same way that the HOMFLY polynomial is related to Chern–Simons gauge theories for SU(N).
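The way the writhe normalisation interacts with the curl rules can be checked symbolically. The sketch below, which assumes the SymPy library is available and treats "L" of a diagram as an opaque symbol, verifies that adding a right-handed curl (a type I Reidemeister move) leaves formula_0 unchanged, since the factor "a" gained by "L" is cancelled by the increase of the writhe by one.

import sympy as sp

a, w = sp.symbols('a w')
L = sp.Function('L')      # the regular isotopy invariant, left undefined
K = sp.Symbol('K')        # placeholder for an unoriented link diagram

def F(L_value, writhe):
    """F(K) = a**(-w(K)) * L(K)."""
    return a ** (-writhe) * L_value

before = F(L(K), w)
after = F(a * L(K), w + 1)   # right-handed curl: L -> a*L, writhe -> w + 1
print(sp.simplify(before - after) == 0)   # True: F is unchanged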
[ { "math_id": 0, "text": "F(K)(a,z)=a^{-w(K)}L(K)\\," }, { "math_id": 1, "text": "w(K)" }, { "math_id": 2, "text": "L(K)" }, { "math_id": 3, "text": "L(O) = 1" }, { "math_id": 4, "text": "L(s_r)=aL(s), \\qquad L(s_\\ell)=a^{-1}L(s)." }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "s_r" }, { "math_id": 7, "text": "s_\\ell" } ]
https://en.wikipedia.org/wiki?curid=9565831
9566
Empty set
Mathematical set containing no elements In mathematics, the empty set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Any set other than the empty set is called non-empty. In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty). Notation. Common notations for the empty set include "{ }", "formula_0", and "∅". The latter two symbols were introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø () in the Danish and Norwegian alphabets. In the past, "0" (the numeral zero) was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation. The symbol ∅ is available at Unicode point . It can be coded in HTML as and as or as . It can be coded in LaTeX as . The symbol formula_0 is coded in LaTeX as . When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead. Properties. In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements (that is, neither of them has an element not in the other). As a result, there can be only one set with no elements, hence the usage of "the empty set" rather than "an empty set". The only subset of the empty set is the empty set itself; equivalently, the power set of the empty set is the set containing only the empty set. The number of elements of the empty set (i.e., its cardinality) is zero. The empty set is the only set with either of these properties. For any set "A": For any property "P": Conversely, if for some property "P" and some set "V", the following two statements hold: then formula_2 By the definition of subset, the empty set is a subset of any set "A". That is, every element "x" of formula_1 belongs to "A". Indeed, if it were not true that every element of formula_1 is in "A", then there would be at least one element of formula_1 that is not present in "A". Since there are no elements of formula_1 at all, there is no element of formula_1 that is not in "A". Any statement that begins "for every element of formula_1" is not making any substantive claim; it is a vacuous truth. This is often paraphrased as "everything is true of the elements of the empty set." In the usual set-theoretic definition of natural numbers, zero is modelled by the empty set. Operations on the empty set. When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set (the empty sum) is zero. The reason for this is that zero is the identity element for addition. Similarly, the product of the elements of the empty set (the empty product) should be considered to be one, since one is the identity element for multiplication. A derangement is a permutation of a set without fixed points. 
The empty set can be considered a derangement of itself, because it has only one permutation (formula_3), and it is vacuously true that no element (of the empty set) can be found that retains its original position. In other areas of mathematics. Extended real numbers. Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers (namely negative infinity, denoted formula_4 which is defined to be less than every other extended real number, and positive infinity, denoted formula_5 which is defined to be greater than every other extended real number), we have that: formula_6 and formula_7 That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators. Topology. In any topological space "X", the empty set is open by definition, as is "X". Since the complement of an open set is closed and the empty set and "X" are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact. The closure of the empty set is empty. This is known as "preservation of nullary unions." Category theory. If formula_8 is a set, then there exists precisely one function formula_9 from formula_1 to formula_10 the empty function. As a result, the empty set is the unique initial object of the category of sets and functions. The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set. Set theory. In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal is defined as formula_11. Thus, we have formula_12, formula_13, formula_14, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, formula_15, such that the Peano axioms of arithmetic are satisfied. Questioned existence. Historical issues. In the context of sets of real numbers, Cantor used formula_16 to denote "formula_17 contains no single point". This formula_18 notation was utilized in definitions; for example, Cantor defined two sets as being disjoint if their intersection has an absence of points; however, it is debatable whether Cantor viewed formula_19 as an existent set on its own, or if Cantor merely used formula_18 as an emptiness predicate. Zermelo accepted formula_19 itself as a set, but considered it an "improper set". Axiomatic set theory. 
In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways: Philosophical issues. While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians. The empty set is not the same thing as nothing; rather, it is a set with nothing inside it and a set is always something. This issue can be overcome by viewing a set as a bag—an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king." The popular syllogism Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is formula_1" and the latter to "The set {ham sandwich} is better than the set formula_1". The first compares elements of sets, while the second compares the sets themselves. Jonathan Lowe argues that while the empty set was undoubtedly an important landmark in the history of mathematics, … we should not assume that its utility in calculation is dependent upon its actually denoting some object. it is also the case that: "All that we are ever informed about the empty set is that it (1) is a set, (2) has no members, and (3) is unique amongst sets in having no members. However, there are very many things that 'have no members', in the set-theoretical sense—namely, all non-sets. It is perfectly clear why these things have no members, for they are not sets. What is unclear is how there can be, uniquely amongst sets, a set which has no members. We cannot conjure such an entity into existence by mere stipulation." George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
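The empty-sum and empty-product conventions and the von Neumann encoding described above can be illustrated with a few lines of Python; frozenset is used here only as a convenient stand-in for hereditarily finite sets.

import math

# Empty sum and empty product: the identity elements for + and *.
assert sum([]) == 0
assert math.prod([]) == 1

# Von Neumann ordinals: 0 is the empty set and S(n) = n U {n}.
def successor(ordinal):
    return ordinal | frozenset({ordinal})

zero = frozenset()       # 0 = {}
one = successor(zero)    # 1 = {0}
two = successor(one)     # 2 = {0, 1}
three = successor(two)   # 3 = {0, 1, 2}

assert len(three) == 3                 # each ordinal has as many elements as its value
assert zero < one < two < three        # and is a proper subset of every later one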
[ { "math_id": 0, "text": "\\emptyset" }, { "math_id": 1, "text": "\\varnothing" }, { "math_id": 2, "text": "V = \\varnothing." }, { "math_id": 3, "text": "0!=1" }, { "math_id": 4, "text": "-\\infty\\!\\,," }, { "math_id": 5, "text": "+\\infty\\!\\,," }, { "math_id": 6, "text": "\\sup\\varnothing=\\min(\\{-\\infty, +\\infty \\} \\cup \\mathbb{R})=-\\infty," }, { "math_id": 7, "text": "\\inf\\varnothing=\\max(\\{-\\infty, +\\infty \\} \\cup \\mathbb{R})=+\\infty." }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "A," }, { "math_id": 11, "text": "S(\\alpha)=\\alpha\\cup\\{\\alpha\\}" }, { "math_id": 12, "text": "0=\\varnothing" }, { "math_id": 13, "text": "1 = 0\\cup\\{0\\}=\\{\\varnothing\\}" }, { "math_id": 14, "text": "2=1\\cup\\{1\\}=\\{\\varnothing,\\{\\varnothing\\}\\}" }, { "math_id": 15, "text": "\\N_0" }, { "math_id": 16, "text": "P\\equiv O" }, { "math_id": 17, "text": "P" }, { "math_id": 18, "text": "\\equiv O" }, { "math_id": 19, "text": "O" } ]
https://en.wikipedia.org/wiki?curid=9566
956704
Stopwatch
Handheld timepiece measuring an amount of time A stopwatch is a timepiece designed to measure the amount of time that elapses between its activation and deactivation. A large digital version of a stopwatch designed for viewing at a distance, as in a sports stadium, is called a stop clock. In manual timing, the clock is started and stopped by a person pressing a button. In fully automatic time, both starting and stopping are triggered automatically, by sensors. The timing functions are traditionally controlled by two buttons on the case. Pressing the top button starts the timer running, and pressing the button a second time stops it, leaving the elapsed time displayed. A press of the second button then resets the stopwatch to zero. The second button is also used to record "split times" or "lap times". When the split time button is pressed while the watch is running it allows the elapsed time to that point to be read, but the watch mechanism continues running to record total elapsed time. Pressing the split button a second time allows the watch to resume display of total time. Mechanical stopwatches are powered by a mainspring, which must be wound up by turning the knurled knob at the top of the stopwatch. Digital electronic stopwatches are available which, due to their crystal oscillator timing element, are much more accurate than mechanical timepieces. Because they contain a microchip, they often include date and time-of-day functions as well. Some may have a connector for external sensors, allowing the stopwatch to be triggered by external events, thus measuring elapsed time far more accurately than is possible by pressing the buttons with one's finger. The first digital timer used in organized sports was the Digitimer, developed by Cox Electronic Systems, Inc. of Salt Lake City Utah (1962). It utilized a Nixie-tube readout and provided a resolution of 1/1000 second. Its first use was in ski racing but was later used by the World University Games in Moscow, Russia, the U.S. NCAA, and in the Olympic trials. The device is used when time periods must be measured precisely and with a minimum of complications. Laboratory experiments and sporting events like sprints are good examples. The stopwatch function is also present as an additional function of many electronic devices such as wristwatches, cell phones, portable music players, and computers. Humans are prone to make mistakes every time they use one. Normally, humans will take about 180–200 milliseconds to detect and respond to visual stimulus. However, in most situations where a stopwatch is used, there are indicators that the timing event is about to happen, and the manual action of starting/stopping the timer can be much more accurate. The average measurement error using manual timing was evaluated to be around 0.04 s when compared to electronic timing, in this case for a running sprint. To get more accurate results, most researchers use the propagation of uncertainty equation in order to reduce any error in experiments. For example: If the result from measuring the width of a window is 1.50 ± 0.05 m, 1.50 will be formula_4 and 0.05 will be formula_5. Unit. In most science experiments, researchers will normally use SI or the International System of Units on any of their experiments. For stopwatches, the units of time that are generally used when observing a stopwatch are minutes, seconds, and 'one-hundredth of a second'. Many mechanical stopwatches are of the 'decimal minute' type. These split one minute into 100 units of 0.6s each. 
This makes addition and subtraction of times easier than using regular seconds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
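A minimal sketch of the propagation-of-uncertainty idea mentioned in the accuracy discussion above, assuming two independently hand-timed intervals; the readings are hypothetical, and the 0.04 s figure echoes the manual-timing error quoted earlier.

import math

def combined_uncertainty(*sigmas):
    """For a sum of independent measurements, the combined uncertainty is the
    root-sum-square of the individual uncertainties:
    sigma_Q = sqrt(sigma_a**2 + sigma_b**2 + ...)."""
    return math.sqrt(sum(s * s for s in sigmas))

lap1, sigma1 = 62.31, 0.04   # seconds (hypothetical readings)
lap2, sigma2 = 61.87, 0.04

total = lap1 + lap2
sigma_total = combined_uncertainty(sigma1, sigma2)
print(f"total time = {total:.2f} +/- {sigma_total:.2f} s")   # 124.18 +/- 0.06 s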
[ { "math_id": 0, "text": "\\sigma_Q = \\sqrt{\\sigma_a^2 + \\sigma_b^2}" }, { "math_id": 1, "text": "\\sigma_Q" }, { "math_id": 2, "text": "\\sigma_a^2" }, { "math_id": 3, "text": "\\sigma_b^2" }, { "math_id": 4, "text": "\\sigma_a" }, { "math_id": 5, "text": "\\sigma_b" } ]
https://en.wikipedia.org/wiki?curid=956704
9567916
Mechanical screening
Separating granulated ore by particle size Mechanical screening, often just called screening, is the practice of taking granulated or crushed ore material and separating it into multiple grades by particle size. This practice occurs in a variety of industries such as mining and mineral processing, agriculture, pharmaceutical, food, plastics, and recycling. A method of separating solid particles according to size alone is called screening. General categories. Screening falls under two general categories: dry screening, and wet screening. From these categories, screening separates a flow of material into grades, these grades are then either further processed to an intermediary product or a finished product. Additionally, the machines can be categorized into a moving screen and static screen machines, as well as by whether the screens are horizontal or inclined. Applications. The mining and mineral processing industry uses screening for a variety of processing applications. For example, after mining the minerals, the material is transported to a primary crusher. Before crushing large boulder are scalped on a shaker with thick shielding screening. Further down stream after crushing the material can pass through screens with openings or slots that continue to become smaller. Finally, screening is used to make a final separation to produce saleable products based on a grade or a size range. Process. A screening machine consist of a drive that induces vibration, a screen media that causes particle separation, and a deck which holds the screen media and the drive and is the mode of transport for the vibration. There are physical factors that makes screening practical. For example, vibration, g force, bed density, and material shape all facilitate the rate or cut. Electrostatic forces can also hinder screening efficiency in way of water attraction causing sticking or plugging, or very dry material generate a charge that causes it to attract to the screen itself. As with any industrial process there is a group of terms that identify and define what screening is. Terms like blinding, contamination, frequency, amplitude, and others describe the basic characteristics of screening, and those characteristics in turn shape the overall method of dry or wet screening. In addition, the way a deck is vibrated differentiates screens. Different types of motion have their advantages and disadvantages. In addition media types also have their different properties that lead to advantages and disadvantages. Finally, there are issues and problems associated with screening. Screen tearing, contamination, blinding, and dampening all affect screening efficiency. Screening terminology. Like any mechanical and physical entity there are scientific, industrial, and layman terminology. The following is a partial list of terms that are associated with mechanical screening. Types of mechanical screening. There are a number of types of mechanical screening equipment that cause segregation. These types are based on the motion of the machine through its motor drive. Tumbler screening technique. An improvement on vibration, vibratory, and linear screeners, a tumbler screener uses elliptical action which aids in screening of even very fine material. As like panning for gold, the fine particles tend to stay towards the center and the larger go to the outside. It allows for segregation and unloads the screen surface so that it can effectively do its job. 
With the addition of multiple decks and ball cleaning decks, even difficult products can be screened at high capacity to very fine separations. Circle-throw vibrating equipment. Circle-Throw Vibrating Equipment is a shaker or a series of shakers as to where the drive causes the whole structure to move. The structure extends to a maximum throw or length and then contracts to a base state. A pattern of springs are situated below the structure to where there is vibration and shock absorption as the structure returns to the base state. This type of equipment is used for very large particles, sizes that range from pebble size on up to boulder size material. It is also designed for high volume output. As a scalper, this shaker will allow oversize material to pass over and fall into a crusher such a cone crusher, jaw crusher, or hammer mill. The material that passes the screen by-passes the crusher and is conveyed and combined with the crush material. Also this equipment is used in washing processes, as material passes under spray bars, finer material and foreign material is washed through the screen. This is one example of wet screening. High frequency vibrating equipment. High-frequency vibrating screening equipment is a shaker whose frame is fixed and the drive vibrates only the screen cloth. High frequency vibration equipment is for particles that are in this particle size range of an 1/8 in (3 mm) down to a +150 mesh. Traditional shaker screeners have a difficult time making separations at sizes like 44 microns. At the same time, other high energy sieves like the Elcan Industries' advanced screening technology allow for much finer separations down to as fine as 10um and 5um, respectively. These shakers usually make a secondary cut for further processing or make a finished product cut. These shakers are usually set at a steep angle relative to the horizontal level plane. Angles range from 25 to 45 degrees relative to the horizontal level plane. Gyratory equipment. This type of equipment has an eccentric drive or weights that causes the shaker to travel in an orbital path. The material rolls over the screen and falls with the induction of gravity and directional shifts. Rubber balls and trays provide an additional mechanical means to cause the material to fall through. The balls also provide a throwing action for the material to find an open slot to fall through. The shaker is set a shallow angle relative to the horizontal level plane. Usually, no more than 2 to 5 degrees relative to the horizontal level plane. These types of shakers are used for very clean cuts. Generally, a final material cut will not contain any oversize or any fines contamination. These shakers are designed for the highest attainable quality at the cost of a reduced feed rate. Trommel screens. Trommel screens have a rotating drum on a shallow angle with screen panels around the diameter of the drum. The feed material always sits at the bottom of the drum and, as the drum rotates, always comes into contact with clean screen. The oversize travels to the end of the drum as it does not pass through the screen, while the undersize passes through the screen into a launder below. Screen Media Attachment Systems. There are many ways to install screen media into a screen box deck (shaker deck). Also, the type of attachment system has an influence on the dimensions of the media. Tensioned screen media. 
Tensioned screen cloth is typically 4 feet by the width or the length of the screening machine depending on whether the deck is side or end tensioned. Screen cloth for tensioned decks can be made with hooks and are attached with clamp rails bolted on both sides of the screen box. When the clamp rail bolts are tightened, the cloth is tensioned or even stretched in the case of some types of self-cleaning screen media. To ensure that the center of the cloth does not tap repeatedly on the deck due to the vibrating shaker and that the cloth stays tensioned, support bars are positioned at different heights on the deck to create a crown curve from hook to hook on the cloth. Tensioned screen cloth is available in various materials: stainless steel, high carbon steel and oil tempered steel wires, as well as moulded rubber or polyurethane and hybrid screens (a self-cleaning screen cloth made of rubber or polyurethane and metal wires). Commonly, vibratory-type screening equipment employs rigid, circular sieve frames to which woven wire mesh is attached. Conventional methods of producing tensioned meshed screens has given way in recent years to bonding, whereby the mesh is no longer tensioned and trapped between a sieve frame body and clamping ring; instead, developments in modern adhesive technologies has allowed the industry to adopt high strength structural adhesives to bond tensioned mesh directly to frames. Modular screen media. Modular screen media is typically 1 foot large by 1 or 2 feet long (4 feet long for ISEPREN WS 85 ) steel reinforced polyurethane or rubber panels. They are installed on a flat deck (no crown) that normally has a larger surface than a tensioned deck. This larger surface design compensates for the fact that rubber and polyurethane modular screen media offers less open area than wire cloth. Over the years, numerous ways have been developed to attach modular panels to the screen deck stringers (girders). Some of these attachment systems have been or are currently patented. Self-cleaning screen media is also available on this modular system. Types of Screen Media. There are several types of screen media manufactured with different types of material that use the two common types of screen media attachment systems, tensioned and modular. Woven Wire Cloth (Mesh). Woven wire cloth, typically produced from stainless steel, is commonly employed as a filtration medium for sieving in a wide range of industries. Most often woven with a plain weave, or a twill weave for the lightest of meshes, apertures can be produced from a few microns upwards (e.g. 25 microns), employing wires with diameters from as little as 25 microns. A twill weave allows a mesh to be woven when the wire diameter is too thick in proportion to the aperture. Other, less commonplace, weaves, such as Dutch/Hollander, allow the production of meshes that are stronger and/or having smaller apertures. Today wire cloth is woven to strict international standards, e.g. ISO1944:1999, which dictates acceptable tolerance regarding nominal mesh count and blemishes. The nominal mesh count, to which mesh is generally defined is a measure of the number of openings per lineal inch, determined by counting the number of openings from the centre of one wire to the centre of another wire one lineal inch away. For example, a 2 mesh woven with a wire of 1.6mm wire diameter has an aperture of 11.1mm (see picture below of a 2 mesh with an intermediate crimp). 
The formula for calculating the aperture of a mesh, with a known mesh count and wire diameter, is as follows: formula_0 where a = aperture, b = mesh count and c = wire diameter. Other calculations regarding woven wire cloth/mesh can be made including weight and open area determination. Of note, wire diameters are often referred to by their standard wire gauge (swg); e.g. a 1.6mm wire is a 16 swg. Traditionally, screen cloth was made with metal wires woven with a loom. Today, woven cloth is still widely used primarily because they are less expensive than other types of screen media. Over the years, different weaving techniques have been developed; either to increase the open area percentage or add wear-life. Slotted opening woven cloth is used where product shape is not a priority and where users need a higher open area percentage. Flat-top woven cloth is used when the consumer wants to increase wear-life. On regular woven wire, the crimps (knuckles on woven wires) wear out faster than the rest of the cloth resulting in premature breakage. On flat-top woven wire, the cloth wears out equally until half of the wire diameter is worn, resulting in a longer wear life. Unfortunately flat-top woven wire cloth is not widely used because of the lack of crimps that causes a pronounced reduction of passing fines resulting in premature wear of con crushers. Perforated &amp; Punch Plate. On a crushing and screening plant, punch plates or perforated plates are mostly used on scalper vibrating screens, after raw products pass on grizzly bars. Most likely installed on a tensioned deck, punch plates offer excellent wear life for high-impact and high material flow applications. Synthetic screen media (typically rubber or polyurethane). Synthetic screen media is used where wear life is an issue. Large producers such as mines or huge quarries use them to reduce the frequency of having to stop the plant for screen deck maintenance. Rubber is also used as a very resistant high-impact screen media material used on the top deck of a scalper screen. To compete with rubber screen media fabrication, polyurethane manufacturers developed screen media with lower Shore Hardness. To compete with self-cleaning screen media that is still primarily available in tensioned cloth, synthetic screen media manufacturers also developed membrane screen panels, slotted opening panels and diamond opening panels. Due to the 7-degree demoulding angle, polyurethane screen media users can experience granulometry changes of product during the wear life of the panel. Self-Cleaning Screen Media. Self-cleaning screen media was initially engineered to resolve screen cloth blinding, clogging and pegging problems. The idea was to place crimped wires side by side on a flat surface, creating openings and then, in some way, holding them together over the support bars (crown bars or bucker bars). This would allow the wires to be free to vibrate between the support bars, preventing blinding, clogging and pegging of the cloth. Initially, crimped longitudinal wires on self-cleaning cloth were held together over support bars with woven wire. In the 50s, some manufacturers started to cover the woven cross wires with caulking or rubber to prevent premature wear of the crimps (knuckles on woven wires). One of the pioneer products in this category was ONDAP GOMME made by the French manufacturer Giron. 
During the mid 90s, Major Wire Industries Ltd., a Quebec manufacturer, developed a “hybrid” self-cleaning screen cloth called Flex-Mat, without woven cross wires. In this product, the crimped longitudinal wires are held in place by polyurethane strips. Rather than locking (impeding) vibration over the support bars due to woven cross wires, polyurethane strips reduce vibration of longitudinal wires over the support bars, thus allowing vibration from hook to hook. Major Wire quickly started to promote this product as a high-performance screen that helped producers screen more in-specification material for less cost and not simply a problem solver. They claimed that the independent vibrating wires helped produce more product compared to a woven wire cloth with the same opening (aperture) and wire diameter. This higher throughput would be a direct result of the higher vibration frequency of each independent wire of the screen cloth (calculated in hertz) compared to the shaker vibration (calculated in RPM), accelerating the stratification of the material bed. Another benefit that helped the throughput increase is that hybrid self-cleaning screen media offered a better open area percentage than woven wire screen media. Due to its flat surface (no knuckles), hybrid self-cleaning screen media can use a smaller wire diameter for the same aperture than woven wire and still lasts as long, resulting in a greater opening percentage. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
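The aperture formula quoted in the woven wire cloth section can be turned into a short calculation; the sketch below reproduces the 2-mesh example from that section and adds an approximate open-area estimate, which is a standard plain-weave approximation rather than a formula given in this article.

def aperture_mm(mesh_count, wire_diameter_mm):
    """Aperture of a woven wire cloth: a = (25.4 / b) - c, where b is the mesh
    count (openings per lineal inch) and c is the wire diameter in mm."""
    return 25.4 / mesh_count - wire_diameter_mm

def open_area_fraction(mesh_count, wire_diameter_mm):
    """Approximate open area of a plain-weave cloth: (aperture / pitch)**2,
    where the pitch (centre-to-centre wire spacing) is 25.4 / mesh_count mm."""
    pitch = 25.4 / mesh_count
    return (aperture_mm(mesh_count, wire_diameter_mm) / pitch) ** 2

print(round(aperture_mm(2, 1.6), 2))         # 11.1 mm, matching the 2 mesh example above
print(round(open_area_fraction(2, 1.6), 2))  # ~0.76, i.e. roughly 76% open area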
[ { "math_id": 0, "text": "a = \\left ( \\frac{25.4}{b} \\right ) - c" } ]
https://en.wikipedia.org/wiki?curid=9567916
9569
Endomorphism
Self-self morphism In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space "V" is a linear map "f": "V" → "V", and an endomorphism of a group "G" is a group homomorphism "f": "G" → "G". In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set "S" to itself. In any category, the composition of any two endomorphisms of "X" is again an endomorphism of "X". It follows that the set of all endomorphisms of "X" forms a monoid, the full transformation monoid, which is denoted End("X") (or End"C"("X") to emphasize the category "C"). Automorphisms. An invertible endomorphism of "X" is called an automorphism. The set of all automorphisms is a subset of End("X") with a group structure, called the automorphism group of "X" and denoted Aut("X"). Endomorphism rings. Any two endomorphisms of an abelian group, "A", can be added together by the rule ("f" + "g")("a") = "f"("a") + "g"("a"). Under this addition, and with multiplication being defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of formula_0 is the ring of all "n" × "n" matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however, there are rings that are not the endomorphism ring of any abelian group. Operator theory. In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing the notion of element orbits to be defined, etc. Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details can be found in the article about operator theory. Endofunctions. An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism. Let "S" be an arbitrary set. Among endofunctions on "S" one finds permutations of "S" and constant functions associating to every "x" in "S" the same element "c" in "S". Every permutation of "S" has its codomain equal to its domain and is bijective and invertible. If "S" has more than one element, a constant function on "S" has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number "n" the floor of "n"/2 has its image equal to its codomain and is not invertible. Finite endofunctions are equivalent to directed pseudoforests. For sets of size "n" there are "n""n" (that is, "n" raised to the power "n") endofunctions on the set. Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
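A small Python sketch of the endofunction counting and the monoid structure discussed above: the endofunctions of a finite set can be enumerated directly, composition of two of them is again an endofunction, and an n-element set has n**n endofunctions. The set chosen below is arbitrary.

from itertools import product

def endofunctions(S):
    """All functions from the finite set S to itself, represented as dicts."""
    elements = list(S)
    for values in product(elements, repeat=len(elements)):
        yield dict(zip(elements, values))

def compose(f, g):
    """(f o g)(x) = f(g(x)); composing two endofunctions gives another one."""
    return {x: f[g[x]] for x in g}

S = {0, 1, 2}
all_maps = list(endofunctions(S))
assert len(all_maps) == len(S) ** len(S)   # 3**3 = 27 endofunctions

# The identity map is a two-sided unit, so End(S) is a monoid under composition.
identity = {x: x for x in S}
assert all(compose(f, identity) == f == compose(identity, f) for f in all_maps)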
[ { "math_id": 0, "text": "\\mathbb{Z}^n" } ]
https://en.wikipedia.org/wiki?curid=9569
9569479
Maclaurin's inequality
In mathematics, Maclaurin's inequality, named after Colin Maclaurin, is a refinement of the inequality of arithmetic and geometric means. Let formula_0 be non-negative real numbers, and for formula_1, define the averages formula_2 as follows: formula_3 The numerator of this fraction is the elementary symmetric polynomial of degree formula_4 in the formula_5 variables formula_0, that is, the sum of all products of formula_4 of the numbers formula_0 with the indices in increasing order. The denominator is the number of terms in the numerator, the binomial coefficient formula_6 Maclaurin's inequality is the following chain of inequalities: formula_7 with equality if and only if all the formula_8 are equal. For formula_9, this gives the usual inequality of arithmetic and geometric means of two non-negative numbers. Maclaurin's inequality is well illustrated by the case formula_10: formula_11 Maclaurin's inequality can be proved using Newton's inequalities or generalised Bernoulli's inequality. References. "This article incorporates material from MacLaurin's Inequality on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "a_1, a_2,\\ldots,a_n" }, { "math_id": 1, "text": "k=1,2,\\ldots,n" }, { "math_id": 2, "text": "S_k" }, { "math_id": 3, "text": "\nS_k = \\frac{\\displaystyle \\sum_{ 1\\leq i_1 < \\cdots < i_k \\leq n}a_{i_1} a_{i_2} \\cdots a_{i_k}}{\\displaystyle {n \\choose k}}.\n" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\tbinom n k." }, { "math_id": 7, "text": "\nS_1 \\geq \\sqrt{S_2} \\geq \\sqrt[3]{S_3} \\geq \\cdots \\geq \\sqrt[n]{S_n}\n" }, { "math_id": 8, "text": "a_i" }, { "math_id": 9, "text": "n=2" }, { "math_id": 10, "text": "n=4" }, { "math_id": 11, "text": "\\begin{align}\n& {} \\quad \\frac{a_1+a_2+a_3+a_4}{4} \\\\[8pt]\n& {} \\ge \\sqrt{\\frac{a_1a_2+a_1a_3+a_1a_4+a_2a_3+a_2a_4+a_3a_4}{6}} \\\\[8pt]\n& {} \\ge \\sqrt[3]{\\frac{a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4}{4}} \\\\[8pt]\n& {} \\ge \\sqrt[4]{a_1a_2a_3a_4}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=9569479
9569619
Vapor quality
Mass fraction of a saturated mixture which is vapor In thermodynamics, vapor quality is the mass fraction in a saturated mixture that is vapor; in other words, saturated vapor has a "quality" of 100%, and saturated liquid has a "quality" of 0%. Vapor quality is an intensive property which can be used in conjunction with other independent intensive properties to specify the thermodynamic state of the working fluid of a thermodynamic system. It has no meaning for substances which are not saturated mixtures (for example, compressed liquids or superheated fluids). Vapor quality is an important quantity during the adiabatic expansion step in various thermodynamic cycles (like Organic Rankine cycle, Rankine cycle, etc.). Working fluids can be classified by using the appearance of droplets in the vapor during the expansion step. Quality χ can be calculated by dividing the mass of the vapor by the mass of the total mixture: formula_0 where m indicates mass. Another definition used in chemical engineering defines quality (q) of a fluid as the fraction that is saturated liquid. By this definition, a saturated liquid has "q" = 0. A saturated vapor has "q" = 1. An alternative definition is the 'equilibrium thermodynamic quality'. It can be used only for single-component mixtures (e.g. water with steam), and can take values &lt; 0 (for sub-cooled fluids) and &gt; 1 (for super-heated vapors): formula_1 where h is the mixture specific enthalpy, defined as: formula_2 Subscripts f and g refer to saturated liquid and saturated gas respectively, and fg refers to vaporization. Calculation. The above expression for vapor quality can be expressed as: formula_3 where formula_4 is equal to either specific enthalpy, specific entropy, specific volume or specific internal energy, formula_5 is the value of the specific property of saturated liquid state and formula_6 is the value of the specific property of the substance in dome zone, which we can find both liquid formula_5 and vapor formula_7. Another expression of the same concept is: formula_8 where formula_9 is the vapor mass and formula_10 is the liquid mass. Steam quality and work. The origin of the idea of vapor quality was derived from the origins of thermodynamics, where an important application was the steam engine. Low quality steam would contain a high moisture percentage and therefore damage components more easily. High quality steam would not corrode the steam engine. Steam engines use water vapor (steam) to push pistons or turbines, and that movement creates work. The quantitatively described "steam quality" (steam dryness) is the proportion of saturated steam in a saturated water/steam mixture. In other words, a steam quality of 0 indicates 100% liquid, while a steam quality of 1 (or 100%) indicates 100% steam. The quality of steam on which steam whistles are blown is variable and may affect frequency. Steam quality determines the velocity of sound, which declines with decreasing dryness due to the inertia of the liquid phase. Also, the specific volume of steam for a given temperature decreases with decreasing dryness. Steam quality is very useful in determining enthalpy of saturated water/steam mixtures, since the enthalpy of steam (gaseous state) is many orders of magnitude higher than the enthalpy of water (liquid state).
[ { "math_id": 0, "text": "\\chi = \\frac{m_\\text{vapor}}{m_\\text{total}}" }, { "math_id": 1, "text": "\\chi_{eq} = \\frac{h-h_{f}}{h_{fg}}" }, { "math_id": 2, "text": "h = \\frac{m_f \\cdot h_f + m_g \\cdot h_g}{m_f+m_g}." }, { "math_id": 3, "text": "\\chi = \\frac{y - y_f}{y_g - y_f}" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "y_f" }, { "math_id": 6, "text": "y_g - y_f" }, { "math_id": 7, "text": "y_g" }, { "math_id": 8, "text": "\\chi=\\frac{m_v}{m_l + m_v}" }, { "math_id": 9, "text": "m_v" }, { "math_id": 10, "text": "m_l" } ]
https://en.wikipedia.org/wiki?curid=9569619
957598
Linda (coordination language)
In computer science, Linda is a coordination model that aids communication in parallel computing environments. Developed by David Gelernter, it is meant to be used alongside a full-fledged computation language like Fortran or C, where Linda's role is to "create computational activities and to support communication among them". History. David Gelernter wrote the first version of Linda as a Ph.D. candidate in 1979, naming it after Linda Lovelace, who appeared in the pornographic film "Deep Throat". At the time, the main language for parallel processing was Ada, developed by the U.S. Department of Defense and a tribute to Ada Lovelace, which Gelernter considered an "inelegant and bulky" language. Linda was widely released in 1986, when Gelernter, along with his Yale colleague Nicholas Carriero and Sudhir Ahuja at AT&amp;T Bell Laboratories, published "Linda and Friends" in an IEEE journal. By the early 1990s, Linda was widely used by corporations to more efficiently conduct big data analyses, including Wall Street brokerages as well as AT&amp;T, Boeing, and United Technologies. There were even companies that specialized in creating parallel computing applications based on Linda, the largest of which was Scientific Computing Associates, a New Haven-based company founded by several Yale computer scientists (Gelernter occasionally consulted for them but did not work there). Interest in Linda dropped in the mid-1990s, only to make a comeback in the late 1990s with several corporations implementing Linda in Java, including Sun Microsystems and IBM. Overview. Model. The Linda model provides a distributed shared memory, known as a tuple space because its basic addressable unit is a tuple, an ordered sequence of typed data objects; specifically in Linda, a tuple is "a sequence of up to 16 typed fields enclosed in parentheses". The tuple space is "logically shared by processes", which are referred to as workers that store and retrieve tuples. Operations. One of Linda's main advantages is that it is simple, with only six operations that workers perform on tuples to access tuplespace: out, which writes a tuple into the tuplespace; in, which atomically withdraws a tuple matching a given template; rd, which reads a matching tuple without withdrawing it; eval, which creates a new process to evaluate a tuple and deposits the result in the tuplespace; and the non-blocking variants inp and rdp, which return immediately rather than waiting when no matching tuple is present. Comparison. Compared to other parallel-processing models, Linda is more orthogonal in treating process coordination as a separate activity from computation, and it is more general in being able to subsume various levels of concurrency (uniprocessor, multi-threaded multiprocessor, or networked) under a single model. Its orthogonality allows processes computing in different languages and platforms to interoperate using the same primitives. Its generality allows a multi-threaded Linda system to be distributed across multiple computers without change. Whereas message-passing models require tightly coupled processes sending messages to each other in some sequence or protocol, Linda processes are decoupled from other processes, communicating only through the tuplespace; a process need have no notion of other processes except for the kinds of tuples consumed or produced. Criticisms of Linda from the multiprocessing community tend to focus on the decreased speed of operations in Linda systems as compared to Message Passing Interface (MPI) systems. While not without justification, these claims were largely refuted for an important class of problems. Detailed criticisms of the Linda model can also be found in Steven Ericsson-Zenith's book "Process Interaction Models". 
Researchers have proposed more primitives to support different types of communication and co-ordination between (open distributed) computer systems, and to solve particular problems arising from various uses of the model. Researchers have also experimented with various means of implementing the virtual shared memory for this model. Many of these researchers proposed larger modifications to the original Linda model, developing a family of systems known as Linda-like systems and implemented as orthogonal technology (unlike original version). An example of this is the language Ease designed by Steven Ericsson-Zenith. Linda's approach has also been compared to that of flow-based programming. Linda-calculus. The Linda-calculus is a formalisation of the above model with the difference that in the following formula_0 subsumes both out and eval operations. Syntax. We abstract the concrete representation of tuples. We just assume that we have a set of tuples formula_1 and we are allowed to form and apply a substitution function formula_2 on tuples substituting variables for terms that yields a tuple. For example, given we have a tuple formula_3, then applying a substitution formula_4 on formula_5 yields formula_6 The Linda-calculus processes are defined by the following grammar. formula_7 The syntax includes the aftermentioned Linda operations, non-deterministic choice, and recursion. The substitution function is extended to processes recursively. Semantics. A tuple space is represented as a multiset of the processes. We write formula_8 for formula_9 where formula_10 is a multiset, formula_11 a singleton multiset, and formula_12 is the multiset union operation. The semantics is then defined as a reduction relation on a multiset formula_13 as follows. formula_14 Note that (input) consumes the tuple formula_15 from the tuple space whereas (read) only reads it. The resulting operational semantics is synchronous. Implementations. Linda was originally implemented in C and Fortran, but has since been implemented in many programming languages, including: Some of the more notable Linda implementations include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{outeval}" }, { "math_id": 1, "text": "t \\in \\mathcal{T}" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "t = (\\mathit{hello}, x)" }, { "math_id": 4, "text": "\\sigma = \\{\\mathit{world}/x\\}" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "(\\mathit{hello}, \\mathit{world})" }, { "math_id": 7, "text": "P,Q ::= t \n\\;|\\; \\mathrm{outeval}(P).Q \n\\;|\\; \\mathrm{rd}(t).P\n\\;|\\; \\mathrm{in}(t).P\n\\;|\\; P + Q\n\\;|\\; \\mathbf{rec} X. P\n\\;|\\; X\n" }, { "math_id": 8, "text": "M \\,|\\, P" }, { "math_id": 9, "text": "M \\uplus \\{P\\}" }, { "math_id": 10, "text": "M" }, { "math_id": 11, "text": "\\{P\\}" }, { "math_id": 12, "text": "\\uplus" }, { "math_id": 13, "text": "M \\rightarrow M'" }, { "math_id": 14, "text": "\n\\begin{array}{lrcll}\n\\text{(outeval)} & M \\,|\\, \\mathrm{outeval}(P).Q &\\rightarrow& M \\,|\\, P \\,|\\, Q & \\\\\n\\text{(read)} & M \\,|\\, \\mathrm{rd}(t).P \\,|\\, t' &\\rightarrow& M \\,|\\, P\\sigma \\,|\\, t' & \\text{if exists } \\sigma \\text{ such that } t\\sigma = t' \\\\\n\\text{(input)} & M \\,|\\, \\mathrm{in}(t).P \\,|\\, t' &\\rightarrow& M \\,|\\, P\\sigma & \\text{if exists } \\sigma \\text{ such that } t\\sigma = t' \\\\\n\\text{(left choice)} & M \\,|\\, P + Q &\\rightarrow& M \\,|\\, P & \\\\\n\\text{(right choice)} & M \\,|\\, P + Q &\\rightarrow& M \\,|\\, Q & \\\\\n\\text{(recursion)} & M \\,|\\, \\mathbf{rec} X. P &\\rightarrow& M \\,|\\, P\\{\\mathbf{rec} X. P/X\\} & \\\\\n\\end{array}\n" }, { "math_id": 15, "text": "t'" } ]
https://en.wikipedia.org/wiki?curid=957598
9576297
Frobenius matrix
A Frobenius matrix is a special kind of square matrix from numerical analysis. A matrix is a Frobenius matrix if it has the following three properties: all entries on the main diagonal are ones; the entries below the main diagonal of at most one column are arbitrary; every other entry is zero. The following matrix is an example. formula_0 Frobenius matrices are invertible. The inverse of a Frobenius matrix is again a Frobenius matrix, equal to the original matrix with changed signs outside the main diagonal. The inverse of the example above is therefore: formula_1 Frobenius matrices are named after Ferdinand Georg Frobenius. The term Frobenius matrix may also be used for an alternative matrix form that differs from an identity matrix only in the elements of a single row preceding the diagonal entry of that row (as opposed to the above definition, which has the matrix differing from the identity matrix in a single column below the diagonal). The following matrix is an example of this alternative form showing a 4-by-4 matrix with its 3rd row differing from the identity matrix. formula_2 An alternative name for this latter form of Frobenius matrices is Gauss transformation matrix, after Carl Friedrich Gauss. They are used in the process of Gaussian elimination to represent the Gaussian transformations. If a matrix is multiplied from the left (left multiplied) with a Gauss transformation matrix, a linear combination of the preceding rows is added to the given row of the matrix (in the example shown above, a linear combination of rows 1 and 2 will be added to row 3). Multiplication with the inverse matrix subtracts the corresponding linear combination from the given row. This corresponds to one of the elementary operations of Gaussian elimination (besides the operations of swapping rows and multiplying a row by a nonzero scalar).
[ { "math_id": 0, "text": "A=\\begin{pmatrix}\n 1 & 0 & 0 & \\cdots & 0 \\\\\n 0 & 1 & 0 & \\cdots & 0 \\\\\n 0 & a_{32} & 1 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & a_{n2} & 0 & \\cdots & 1\n\\end{pmatrix}" }, { "math_id": 1, "text": "A^{-1}=\\begin{pmatrix}\n 1 & 0 & 0 & \\cdots & 0 \\\\\n 0 & 1 & 0 & \\cdots & 0 \\\\\n 0 & -a_{32} & 1 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & -a_{n2} & 0 & \\cdots & 1\n\\end{pmatrix}" }, { "math_id": 2, "text": "A=\\begin{pmatrix}\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\na_{31} & a_{32} & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=9576297
9578059
Green chemistry metrics
Green chemistry metrics describe aspects of a chemical process relating to the principles of green chemistry. The metrics serve to quantify the efficiency or environmental performance of chemical processes, and allow changes in performance to be measured. The motivation for using metrics is the expectation that quantifying technical and environmental improvements can make the benefits of new technologies more tangible, perceptible, or understandable. This, in turn, is likely to aid the communication of research and potentially facilitate the wider adoption of green chemistry technologies in industry. For a non-chemist, an understandable method of describing the improvement might be "a decrease of X unit cost per kilogram of compound Y". This, however, might be an over-simplification. For example, it would not allow a chemist to visualize the improvement made or to understand changes in material toxicity and process hazards. For yield improvements and selectivity increases, simple percentages are suitable, but this simplistic approach may not always be appropriate. For example, when a highly pyrophoric reagent is replaced by a benign one, a numerical value is difficult to assign but the improvement is obvious, if all other factors are similar. Numerous metrics have been formulated over time. A general problem is that the more accurate and universally applicable the metric devised, the more complex and unemployable it becomes. A good metric must be clearly defined, simple, measurable, objective rather than subjective and must ultimately drive the desired behavior. Mass-based versus impact-based metrics. The fundamental purpose of metrics is to allow comparisons. If there are several economically viable ways to make a product, which one causes the least environmental harm (i.e. which is the greenest)? The metrics that have been developed to achieve that purpose fall into two groups: mass-based metrics and impact-based metrics. The simplest metrics are based upon the mass of materials rather than their impact. Atom economy, E-factor, yield, reaction mass efficiency and effective mass efficiency are all metrics that compare the mass of desired product to the mass of waste. They do not differentiate between more harmful and less harmful wastes. A process that produces less waste may appear to be greener than the alternatives according to mass-based metrics but may in fact be less green if the waste produced is particularly harmful to the environment. This serious limitation means that mass-based metrics can not be used to determine which synthetic method is greener. However, mass-based metrics have the great advantage of simplicity: they can be calculated from readily available data with few assumptions. For companies that produce thousands of products, mass-based metrics may be the only viable choice for monitoring company-wide reductions in environmental harm. In contrast, impact-based metrics such as those used in life-cycle assessment evaluate environmental impact as well as mass, making them much more suitable for selecting the greenest of several options or synthetic pathways. Some of them, such as those for acidification, ozone depletion, and resource depletion, are just as easy to calculate as mass-based metrics but require emissions data that may not be readily available. Others, such as those for inhalation toxicity, ingestion toxicity, and various forms of aquatic eco toxicity, are more complex to calculate in addition to requiring emissions data. Atom economy. 
Atom economy was designed by Barry Trost as a framework by which organic chemists would pursue “greener” chemistry. The atom economy number is how much of the reactants remain in the final product. formula_0 For a generic multi-stage reaction used for producing R: A + B → P + X P + C → Q + Y Q + D → R + Z The atom economy is calculated by formula_1 The conservation of mass principle dictates that the total mass of the reactants is the same as the total mass of the products. In the above example, the sum of molecular masses of A, B, C and D should be equal to that of R, X, Y and Z. As only R is the useful product, the atoms of X, Y and Z are said to be wasted as by-products. Economic and environmental costs of disposal of these waste make a reaction with low atom economy to be "less green". A further simplified version of this is the carbon economy. It is how much carbon ends up in the useful product compared to how much carbon was used to create the product. formula_2 This metric is a good simplification for use in the pharmaceutical industry as it takes into account the stoichiometry of reactants and products. Furthermore, this metric is of interest to the pharmaceutical industry where development of carbon skeletons is key to their work. The atom economy calculation is a simple representation of the “greenness” of a reaction as it can be carried out without the need for experimental results. Nevertheless, it can be useful in the process synthesis early stage design. The drawback of this type of analysis is that assumptions have to be made. In an ideal chemical process, the amount of starting materials or reactants equals the amount of all products generated and no atom is lost. However, in most processes, some of the consumed reactant atoms do not become part of the products, but remain as unreacted reactants, or are lost in some side reactions. Besides, solvents and energy used for the reaction are ignored in this calculation, but they may have non-negligible impacts to the environment. Percentage yield. Percentage yield is calculated by dividing the amount of the obtained desired product by the theoretical yield. In a chemical process, the reaction is usually reversible, thus reactants are not completely converted into products; some reactants are also lost by undesired side reaction. To evaluate these losses of chemicals, actual yield has to be measured experimentally. formula_3 As percentage yield is affected by chemical equilibrium, allowing one or more reactants to be in great excess can increase the yield. However, this may not be considered as a "greener" method, as it implies a greater amount of the excess reactant remain unreacted and therefore wasted. To evaluate the use of excess reactants, the excess reactant factor can be calculated. formula_4 If this value is far greater than 1, then the excess reactants may be a large waste of chemicals and costs. This can be a concern when raw materials have high economic costs or environmental costs in extraction. In addition, increasing the temperature can also increase the yield of some endothermic reactions, but at the expense of consuming more energy. Hence this may not be attractive methods as well. Reaction mass efficiency. The reaction mass efficiency is the percentage of actual mass of desire product to the mass of all reactants used. It takes into account both atom economy and chemical yield. 
formula_5 formula_6 Reaction mass efficiency, together with all metrics mentioned above, shows the “greenness” of a reaction but not of a process. Neither metric takes into account all waste produced. For example, these metrics could present a rearrangement as “very green” but fail to address any solvent, work-up, and energy issues that make the process less attractive. Effective mass efficiency. A metric similar to reaction mass efficiency is the effective mass efficiency, as suggested by Hudlicky "et al". It is defined as the percentage of the mass of the desired product relative to the mass of all non-benign reagents used in its synthesis. The reagents here may include any used reactant, solvent or catalyst. formula_7 Note that when most reagents are benign, the effective mass efficiency can be greater than 100%. This metric requires further definition of a benign substance. Hudlicky defines it as “those by-products, reagents or solvents that have no environmental risk associated with them, for example, water, low-concentration saline, dilute ethanol, autoclaved cell mass, etc.”. This definition leaves the metric open to criticism, as nothing is absolutely benign (which is a subjective term), and even the substances listed in the definition have some environmental impact associated with them. The formula also fails to address the level of toxicity associated with a process. Until all toxicology data is available for all chemicals and a term dealing with these levels of “benign” reagents is written into the formula, the effective mass efficiency is not the best metric for chemistry. Environmental factor. The first general metric for green chemistry remains one of the most flexible and popular ones. Roger A. Sheldon’s environmental factor (E-factor) can be made as complex and thorough or as simple as desired and useful. The E-factor of a process is the ratio of the mass of waste per mass of product: formula_8 As examples, Sheldon calculated E-factors of various industries: It highlights the waste produced in the process as opposed to the reaction, thus helping those who try to fulfil one of the twelve principles of green chemistry to avoid waste production. E-factors can be combined to assess multi-step reactions step by step or in one calculation. E-factors ignore recyclable factors such as recycled solvents and re-used catalysts, which obviously increases the accuracy but ignores the energy involved in the recovery (these are often included theoretically by assuming 90% solvent recovery). The main difficulty with E-factors is the need to define system boundaries, for example, which stages of the production or product life-cycle to consider before calculations can be made. This metric is simple to apply industrially, as a production facility can measure how much material enters the site and how much leaves as product and waste, thereby directly giving an accurate global E-factor for the site. Sheldon's analyses (see table) demonstrate that oil companies produce less waste than pharmaceuticals as a percentage of material processed. This reflects the fact that the profit margins in the oil industry require them to minimise waste and find uses for products which would normally be discarded as waste. By contrast the pharmaceutical sector is more focused on molecule manufacture and quality. 
The (currently) high profit margins within the sector mean that there is less concern about the comparatively large amounts of waste that are produced (especially considering the volumes used) although it has to be noted that, despite the percentage waste and E-factor being high, the pharmaceutical section produces much lower tonnage of waste than any other sector. This table encouraged a number of large pharmaceutical companies to commence “green” chemistry programs. The EcoScale. The EcoScale metric was proposed in an article in the Beilstein Journal of Organic Chemistry in 2006 for evaluation of the effectiveness of a synthetic reaction. It is characterized by simplicity and general applicability. Like the yield-based scale, the EcoScale gives a score from 0 to 100, but also takes into account cost, safety, technical set-up, energy and purification aspects. It is obtained by assigning a value of 100 to an ideal reaction defined as "Compound A (substrate) undergoes a reaction with (or in the presence of)inexpensive compound(s) B to give the desired compound C in 100% yield at room temperature with a minimal risk for the operator and a minimal impact on the environment", and then subtracting penalty points for non-ideal conditions. These penalty points take into account both the advantages and disadvantages of specific reagents, set-ups and technologies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Atom economy} = \\frac{\\text{molecular mass of desire product}}{\\text{molecular masses of reactants}} \\times 100\\%" }, { "math_id": 1, "text": "\\text{Atom economy} = \\frac{\\text{molecular mass of R}}{\\text{molecular masses of A, B, C and D}} \\times 100\\%" }, { "math_id": 2, "text": "\\text{Carbon economy} = \\frac{\\text{number of carbon atoms in desire product}}{\\text{number of carbon atoms in reactants}} \\times 100\\%" }, { "math_id": 3, "text": "\\text{Percentage yield} = \\frac{\\text{actual mass of product}}{\\text{theoretical mass of product}} \\times 100\\%" }, { "math_id": 4, "text": "\\text{Excess reactant factor} = \\frac{\\text{stoichiometric mass of reactants} + \\text{excess mass of reactant(s)}}{\\text{stoichiometric mass of reactants}}" }, { "math_id": 5, "text": "\\text{Reaction mass efficiency} = \\frac{\\text{actual mass of desired product}}{\\text{mass of reactants}} \\times 100\\%" }, { "math_id": 6, "text": "\\text{Reaction mass efficiency} = \\frac{\\text{atom economy} \\times \\text{percentage yield}}{\\text{excess reactant factor}}" }, { "math_id": 7, "text": "\\text{Effective mass efficiency} = \\frac{\\text{actual mass of desire products}}{\\text{mass of non-benign reagents}} \\times 100\\%" }, { "math_id": 8, "text": "\\text{E-factor}=\\frac{\\text{mass of total waste}}{\\text{mass of product}}" } ]
https://en.wikipedia.org/wiki?curid=9578059
9578492
Rhodopsin kinase
Protein-coding gene in the species Homo sapiens Rhodopsin kinase (EC 2.7.11.14, "rod opsin kinase", "G-protein-coupled receptor kinase 1", "GPCR kinase 1", "GRK1", "opsin kinase", "opsin kinase (phosphorylating)", "rhodopsin kinase (phosphorylating)", "RK", "STK14") is a serine/threonine-specific protein kinase involved in phototransduction. This enzyme catalyses the following chemical reaction: ATP + rhodopsin formula_0 ADP + phospho-rhodopsin Mutations in rhodopsin kinase are associated with a form of night blindness called Oguchi disease. Function and mechanism of action. Rhodopsin kinase is a member of the family of G protein-coupled receptor kinases, and is officially named G protein-coupled receptor kinase 1, or GRK1. Rhodopsin kinase is found primarily in mammalian retinal rod cells, where it phosphorylates light-activated rhodopsin, a member of the family of G protein-coupled receptors that recognizes light. Phosphorylated, light-activated rhodopsin binds to the protein arrestin to terminate the light-activated signaling cascade. The related GRK7, also known as cone opsin kinase, serves a similar function in retinal cone cells subserving high-acuity color vision in the fovea. The post-translational modification of GRK1 by farnesylation and α-carboxyl methylation is important for regulating the ability of the enzyme to recognize rhodopsin in rod outer segment disk membranes. Arrestin-1 bound to rhodopsin prevents rhodopsin activation of the transducin protein to turn off photo-transduction completely. Rhodopsin kinase is inhibited by the calcium-binding protein recoverin in a graded manner that maintains rhodopsin sensitivity to light despite large changes in ambient light conditions. That is, in retinas exposed to only dim light, calcium levels are high in retinal rod cells and recoverin is bound to and inhibits rhodopsin kinase, leaving rhodopsin exquisitely sensitive to photons to mediate low-light, low-acuity vision; in bright light, rod cell calcium levels are low so recoverin cannot bind or inhibit rhodopsin kinase, resulting in greater rhodopsin kinase/arrestin inhibition of rhodopsin signaling at baseline to preserve visual sensitivity. According to a proposed model, the N-terminus of rhodopsin kinase is involved in its own activation. It's suggested that an activated rhodopsin binds to the N-terminus, which is also involved in the stabilization of the kinase domain to induce an active conformation. Eye disease. Mutation in rhodopsin kinase can result in diseases such as Oguchi disease and retinal degeneration. Oguchi disease is a form of congenital stationary night blindness (CSNB). Congenital stationary night blindness is caused by the inability to send a signal from outer retina to the inner retina by signaling molecules. Oguchi disease is a genetic disorder so an individual can be inherited from his or her parents. Genes that are responsible for Oguchi disease are SAG (which encodes arrestin) and GRK1 genes. Rhodopsin kinase is encoded from the GRK1 gene, so a mutation in GRK1 can result in Oguchi disease. Retinal degeneration is a form of the retinal disease caused by the death of photoreceptor cells that present in the back of the eye, retina. Rhodopsin kinase directly participates in the rhodopsin to activate the visual phototransduction. Studies have shown that lack of rhodopsin kinase will result in photoreceptor cell death. When photoreceptors cells die, they will be detached from the retina and result in retinal degeneration. References. 
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=9578492
957911
Nonlinear autoregressive exogenous model
In time series modeling, a nonlinear autoregressive exogenous model (NARX) is a nonlinear autoregressive model which has exogenous inputs. This means that the model relates the current value of a time series to both: In addition, the model contains an error term which relates to the fact that knowledge of other terms will not enable the current value of the time series to be predicted exactly. Such a model can be stated algebraically as formula_0 Here "y" is the variable of interest, and "u" is the externally determined variable. In this scheme, information about "u" helps predict "y", as do previous values of "y" itself. Here "ε" is the error term (sometimes called noise). For example, "y" may be air temperature at noon, and "u" may be the day of the year (day-number within year). The function "F" is some nonlinear function, such as a polynomial. "F" can be a neural network, a wavelet network, a sigmoid network and so on. To test for non-linearity in a time series, the BDS test (Brock-Dechert-Scheinkman test) developed for econometrics can be used.
[ { "math_id": 0, "text": " y_t = F(y_{t-1}, y_{t-2}, y_{t-3}, \\ldots, u_{t}, u_{t-1}, u_{t-2}, u_{t-3}, \\ldots) + \\varepsilon_t " } ]
https://en.wikipedia.org/wiki?curid=957911
9579379
Base (geometry)
Bottom of a geometric figure In geometry, a base is a side of a polygon or a face of a polyhedron, particularly one oriented perpendicular to the direction in which height is measured, or on what is considered to be the "bottom" of the figure. This term is commonly applied in plane geometry to triangles, parallelograms, trapezoids, and in solid geometry to cylinders, cones, pyramids, parallelepipeds, prisms, and frustums. The side or point opposite the base is often called the "apex" or "summit" of the shape. Of a triangle. In a triangle, any arbitrary side can be considered the "base". The two endpoints of the base are called "base vertices" and the corresponding angles are called "base angles". The third vertex opposite the base is called the "apex". The extended base of a triangle (a particular case of an extended side) is the line that contains the base. When the triangle is obtuse and the base is chosen to be one of the sides adjacent to the obtuse angle, then the altitude dropped perpendicularly from the apex to the base intersects the extended base outside of the triangle. The area of a triangle is half the product of a base and the corresponding height (the length of the altitude to that base). For a triangle formula_0 with opposite sides formula_1 if the three altitudes of the triangle are called formula_2 the area is: formula_3 Given a fixed base side and a fixed area for a triangle, the locus of apex points is a straight line parallel to the base. Of a trapezoid or parallelogram. Any of the sides of a parallelogram, or either (but typically the longer) of the parallel sides of a trapezoid, can be considered its "base". Sometimes the parallel opposite side is also called a "base", or sometimes it is called a "top", "apex", or "summit". The other two edges can be called the "sides". Role in area and volume calculation. Bases are commonly used (together with heights) to calculate the areas and volumes of figures. In speaking about these processes, the measure (length or area) of a figure's base is often referred to as its "base." By this usage, the area of a parallelogram or the volume of a prism or cylinder can be calculated by multiplying its "base" by its height; likewise, the areas of triangles and the volumes of cones and pyramids are fractions of the products of their bases and heights. Some figures have two parallel bases (such as trapezoids and frustums), both of which are used to calculate the extent of the figures. References.
[ { "math_id": 0, "text": "\\triangle ABC" }, { "math_id": 1, "text": "a, b, c," }, { "math_id": 2, "text": "h_a, h_b, h_c," }, { "math_id": 3, "text": "\\Delta = \\tfrac12 a h_a = \\tfrac12 b h_b = \\tfrac12 c h_c." } ]
https://en.wikipedia.org/wiki?curid=9579379
9579767
Profitability index
Mathematical economic formula Profitability index (PI), also known as profit investment ratio (PIR) and value investment ratio (VIR), is the ratio of payoff to investment of a proposed project. It is a useful tool for ranking projects because it quantifies the amount of value created per unit of investment. Under capital rationing, the PI method is suitable because it gives a relative figure (a ratio) rather than an absolute figure. The ratio is calculated as follows: formula_0 Assuming that the cash flow calculated does not include the investment made in the project, a profitability index of 1 indicates break-even. Any value lower than one would indicate that the project's present value (PV) is less than the initial investment. As the value of the profitability index increases, so does the financial attractiveness of the proposed project. The PI is similar to the Return on Investment (ROI), except that the net profit is discounted. Example. We assume an investment of 40000 with the following cash flows after tax (CFAT):
Year 1: 18000
Year 2: 12000
Year 3: 10000
Year 4: 9000
Year 5: 6000
Calculate the net present value at 10% and the PI:
Year 1: 18000 × 0.909 = 16362
Year 2: 12000 × 0.827 = 9924
Year 3: 10000 × 0.752 = 7520
Year 4: 9000 × 0.683 = 6147
Year 5: 6000 × 0.621 = 3726
Total present value = 43679
(-) Investment = 40000
NPV = 3679
PI = 43679/40000 = 1.092 &gt; 1 ⇒ Accept the project
External links. Use explained in the business book: Pursuing the Competitive Edge, Hayes, Pisano, Upton and Wheelwright. Wiley, 2005. pg. 264
[ { "math_id": 0, "text": "\\text{Profitability index} = \\frac{\\text{PV of future cash flows}}{\\text{Initial investment}} = 1 + \\frac{\\text{NPV}}{\\text{Initial investment}}" } ]
https://en.wikipedia.org/wiki?curid=9579767
958031
Compartmental models in epidemiology
Type of mathematical model used for infectious diseases Compartmental models are a very general modelling technique. They are often applied to the mathematical modelling of infectious diseases. The population is assigned to compartments with labels – for example, S, I, or R, (Susceptible, Infectious, or Recovered). People may progress between compartments. The order of the labels usually shows the flow patterns between the compartments; for example SEIS means susceptible, exposed, infectious, then susceptible again. The origin of such models is the early 20th century, with important works being that of Ross in 1916, Ross and Hudson in 1917, Kermack and McKendrick in 1927, and Kendall in 1956. The Reed–Frost model was also a significant and widely overlooked ancestor of modern epidemiological modelling approaches. The models are most often run with ordinary differential equations (which are deterministic), but can also be used with a stochastic (random) framework, which is more realistic but much more complicated to analyze. These models are used to analyze the disease dynamics and to estimate the total number of infected people, the total number of recovered people, and to estimate epidemiological parameters such as the basic reproduction number or effective reproduction number. Such models can show how different public health interventions may affect the outcome of the epidemic. The SIR model. The SIR model is one of the simplest compartmental models, and many models are derivatives of this basic form. The model consists of three compartments: S: The number of susceptible individuals. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease and transitions to the infectious compartment. I: The number of infectious individuals. These are individuals who have been infected and are capable of infecting susceptible individuals. R for the number of removed (and immune) or deceased individuals. These are individuals who have been infected and have either recovered from the disease and entered the removed compartment, or died. It is assumed that the number of deaths is negligible with respect to the total population. This compartment may also be called "recovered" or "resistant". This model is reasonably predictive for infectious diseases that are transmitted from human to human, and where recovery confers lasting resistance, such as measles, mumps, and rubella. These variables (S, I, and R) represent the number of people in each compartment at a particular time. To represent that the number of susceptible, infectious, and removed individuals may vary over time (even if the total population size remains constant), we make the precise numbers a function of "t" (time): S("t"), I("t"), and R("t"). For a specific disease in a specific population, these functions may be worked out in order to predict possible outbreaks and bring them under control. Note that in the SIR model, formula_0 and formula_1 are different quantities – the former describes the number of recovered at "t" = 0 whereas the latter describes the ratio between the frequency of contacts to the frequency of recovery. As implied by the variable function of "t", the model is dynamic in that the numbers in each compartment may fluctuate over time. The importance of this dynamic aspect is most obvious in an endemic disease with a short infectious period, such as measles in the UK prior to the introduction of a vaccine in 1968. 
Such diseases tend to occur in cycles of outbreaks due to the variation in number of susceptibles (S("t")) over time. During an epidemic, the number of susceptible individuals falls rapidly as more of them are infected and thus enter the infectious and removed compartments. The disease cannot break out again until the number of susceptibles has built back up, e.g. as a result of offspring being born into the susceptible compartment. Each member of the population typically progresses from susceptible to infectious to recovered. This can be shown as a flow diagram in which the boxes represent the different compartments and the arrows the transition between compartments (see diagram). Transition rates. For the full specification of the model, the arrows should be labeled with the transition rates between compartments. Between "S" and "I", the transition rate is assumed to be formula_2, where formula_3 is the total population, formula_4 is the average number of contacts per person per time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject, and formula_5 is the fraction of those contacts between an infectious and susceptible individual which result in the susceptible person becoming infected. (This is mathematically similar to the law of mass action in chemistry in which random collisions between molecules result in a chemical reaction and the fractional rate is proportional to the concentration of the two reactants.) Between "I" and "R", the transition rate is assumed to be proportional to the number of infectious individuals which is formula_6. If an individual is infectious for an average time period formula_7, then formula_8. This is also equivalent to the assumption that the length of time spent by an individual in the infectious state is a random variable with an exponential distribution. The "classical" SIR model may be modified by using more complex and realistic distributions for the I-R transition rate (e.g. the Erlang distribution). For the special case in which there is no removal from the infectious compartment (formula_9), the SIR model reduces to a very simple SI model, which has a logistic solution, in which every individual eventually becomes infected. The SIR model without birth and death. The dynamics of an epidemic, for example, the flu, are often much faster than the dynamics of birth and death, therefore, birth and death are often omitted in simple compartmental models. The SIR system without so-called vital dynamics (birth and death, sometimes called demography) described above can be expressed by the following system of ordinary differential equations: formula_10 where formula_11 is the stock of susceptible population, formula_12 is the stock of infected, formula_13 is the stock of removed population (either by death or recovery), and formula_14 is the sum of these three. This model was for the first time proposed by William Ogilvy Kermack and Anderson Gray McKendrick as a special case of what we now call Kermack–McKendrick theory, and followed work McKendrick had done with Ronald Ross. This system is non-linear, however it is possible to derive its analytic solution in implicit form. Firstly note that from: formula_15 it follows that: formula_16 expressing in mathematical terms the constancy of population formula_3. Note that the above relationship implies that one need only study the equation for two of the three variables. 
Secondly, we note that the dynamics of the infectious class depends on the following ratio: formula_17 the so-called basic reproduction number (also called basic reproduction ratio). This ratio is derived as the expected number of new infections (these new infections are sometimes called secondary infections) from a single infection in a population where all subjects are susceptible. This idea can probably be more readily seen if we say that the typical time between contacts is formula_18, and the typical time until removal is formula_19. From here it follows that, on average, the number of contacts by an infectious individual with others "before" the infectious has been removed is: formula_20 By dividing the first differential equation by the third, separating the variables and integrating we get formula_21 where formula_22 and formula_0 are the initial numbers of, respectively, susceptible and removed subjects. Writing formula_23 for the initial proportion of susceptible individuals, and formula_24 and formula_25 for the proportion of susceptible and removed individuals respectively in the limit formula_26 one has formula_27 (note that the infectious compartment empties in this limit). This transcendental equation has a solution in terms of the Lambert W function, namely formula_28 This shows that at the end of an epidemic that conforms to the simple assumptions of the SIR model, unless formula_29, not all individuals of the population have been removed, so some must remain susceptible. A driving force leading to the end of an epidemic is a decline in the number of infectious individuals. The epidemic does not typically end because of a complete lack of susceptible individuals. The role of both the basic reproduction number and the initial susceptibility are extremely important. In fact, upon rewriting the equation for infectious individuals as follows: formula_30 it yields that if: formula_31 then: formula_32 i.e., there will be a proper epidemic outbreak with an increase of the number of the infectious (which can reach a considerable fraction of the population). On the contrary, if formula_33 then formula_34 i.e., independently from the initial size of the susceptible population the disease can never cause a proper epidemic outbreak. As a consequence, it is clear that both the basic reproduction number and the initial susceptibility are extremely important. The force of infection. Note that in the above model the function: formula_35 models the transition rate from the compartment of susceptible individuals to the compartment of infectious individuals, so that it is called the force of infection. However, for large classes of communicable diseases it is more realistic to consider a force of infection that does not depend on the absolute number of infectious subjects, but on their fraction (with respect to the total constant population formula_14): formula_36 Capasso and, afterwards, other authors have proposed nonlinear forces of infection to model more realistically the contagion process. Exact analytical solutions to the SIR model. In 2014, Harko and coauthors derived an exact so-called analytical solution (involving an integral that can only be calculated numerically) to the SIR model. In the case without vital dynamics setup, for formula_37, etc., it corresponds to the following time parametrization formula_38 formula_39 formula_40 for formula_41 with initial conditions formula_42 where formula_43 satisfies formula_44. 
By the transcendental equation for formula_45 above, it follows that formula_46, if formula_47 and formula_48. An equivalent so-called analytical solution (involving an integral that can only be calculated numerically) found by Miller yields formula_49 Here formula_50 can be interpreted as the expected number of transmissions an individual has received by time formula_51. The two solutions are related by formula_52. Effectively the same result can be found in the original work by Kermack and McKendrick. These solutions may be easily understood by noting that all of the terms on the right-hand sides of the original differential equations are proportional to formula_12. The equations may thus be divided through by formula_12, and the time rescaled so that the differential operator on the left-hand side becomes simply formula_53, where formula_54, i.e. formula_55. The differential equations are now all linear, and the third equation, of the form formula_56 const., shows that formula_57 and formula_13 (and formula_58 above) are simply linearly related. A highly accurate analytic approximant of the SIR model as well as exact analytic expressions for the final values formula_59, formula_60, and formula_45 were provided by Kröger and Schlickeiser, so that there is no need to perform a numerical integration to solve the SIR model (a simplified example practice on COVID-19 numerical simulation using Microsoft Excel can be found here ), to obtain its parameters from existing data, or to predict the future dynamics of an epidemics modeled by the SIR model. The approximant involves the Lambert W function which is part of all basic data visualization software such as Microsoft Excel, MATLAB, and Mathematica. While Kendall considered the so-called all-time SIR model where the initial conditions formula_22, formula_61, and formula_0 are coupled through the above relations, Kermack and McKendrick proposed to study the more general semi-time case, for which formula_22 and formula_61 are both arbitrary. This latter version, denoted as semi-time SIR model, makes predictions only for future times formula_62. An analytic approximant and exact expressions for the final values are available for the semi-time SIR model as well. Numerical solutions to the SIR model with approximations. Numerical solutions to the SIR model can be found in the literature. An example is using the model to analyze COVID-19 spreading data. Three reproduction numbers can be pulled out from the data analyzed with numerical approximation, the basic reproduction number: formula_63 the real-time reproduction number: formula_64 and the real-time effective reproduction number: formula_65 formula_1 represents the speed of reproduction rate at the beginning of the spreading when all populations are assumed susceptible, e.g. if formula_66 and formula_67 meaning one infectious person on average infects 0.4 susceptible people per day and recovers in 1/0.2=5 days. Thus when this person recovered, there are two people still infectious directly got from this person and formula_68, i.e. the number of infectious people doubled in one cycle of 5 days. The data simulated by the model with formula_68 or real data fitted will yield a doubling of the number of infectious people faster than 5 days because the two infected people are infecting people. 
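As a numerical illustration of the system discussed above, the following sketch integrates the three SIR equations with SciPy for invented parameter values (N = 1000 people, β = 0.4 per day, γ = 0.2 per day, so the basic reproduction number is 2). The parameter values and function name are assumptions for demonstration only.

```python
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, N):
    """Right-hand side of the SIR model without births and deaths:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Illustrative parameters and one initial infectious individual.
N, beta, gamma = 1000.0, 0.4, 0.2
y0 = [N - 1.0, 1.0, 0.0]

sol = solve_ivp(sir, (0, 160), y0, args=(beta, gamma, N))
S_end, I_end, R_end = sol.y[:, -1]
print(S_end, I_end, R_end)   # a sizeable susceptible pool remains when I has died out
```

Consistent with the final-size analysis above, the epidemic in this sketch ends because the infectious compartment empties, not because the susceptible pool is exhausted; increasing the ratio β/γ shrinks, but never eliminates, the remaining susceptible fraction.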
From the SIR model, we can tell that formula_69 is determined by the nature of the disease and also a function of the interactive frequency between the infectious person formula_12 with the susceptible people formula_11 and also the intensity/duration of the interaction like how close they interact for how long and whether or not they both wear masks, thus, it changes over time when the average behavior of the carriers and susceptible people changes. The model use formula_70 to represent these factors but it indeed is referenced to the initial stage when no action is taken to prevent the spread and all population is susceptible, thus all changes are absorbed by the change of formula_69. formula_71 is usually more stable over time assuming when the infectious person shows symptoms, she/he will seek medical attention or be self-isolated. So if we find formula_72 changes, most probably the behaviors of people in the community have changed from their normal patterns before the outbreak, or the disease has mutated to a new form. Costive massive detection and isolation of susceptible close contacts have effects on reducing formula_73 but whose efficiencies are under debate. This debate is largely on the uncertainty of the number of days reduced from after infectious or detectable whichever comes first to before a symptom shows up for an infected susceptible person. If the person is infectious after symptoms show up, or detection only works for a person with symptoms, then these prevention methods are not necessary, and self-isolation and/or medical attention is the best way to cut the formula_73 values. The typical onset of the COVID-19 infectious period is in the order of one day from the symptoms showing up, making massive detection with typical frequency in a few days useless. formula_72 does not tell us whether or not the spreading will speed up or slow down in the latter stages when the fraction of susceptible people in the community has dropped significantly after recovery or vaccination. formula_74 corrects this dilution effect by multiplying the fraction of the susceptible population over the total population. It corrects the effective/transmissible interaction between an infectious person and the rest of the community when many of the interaction is immune in the middle to late stages of the disease spreading. Thus, when formula_75, we will see an exponential-like outbreak; when formula_76, a steady state reached and no number of infectious people changes over time; and when formula_77, the disease decays and fades away over time. Using the differential equations of the SIR model and converting them to numerical discrete forms, one can set up the recursive equations and calculate the S, I, and R populations with any given initial conditions but accumulate errors over a long calculation time from the reference point. Sometimes a convergence test is needed to estimate the errors. Given a set of initial conditions and the disease-spreading data, one can also fit the data with the SIR model and pull out the three reproduction numbers when the errors are usually negligible due to the short time step from the reference point. Any point of the time can be used as the initial condition to predict the future after it using this numerical model with assumption of time-evolved parameters such as population, formula_72, and formula_71. However, away from this reference point, errors will accumulate over time thus convergence test is needed to find an optimal time step for more accurate results. 
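A sketch of the recursive discrete form just described: a forward-Euler update of S, I, and R that also records the real-time effective reproduction number (β/γ)·S/N at the start of each day. The step size, parameter values, and function name are illustrative assumptions; halving the step and comparing runs is the simplest form of the convergence test mentioned above.

```python
def simulate_sir_discrete(N, beta, gamma, I0, days, dt=0.1):
    """Forward-Euler discretisation of the SIR equations.

    Returns one record per day: (day, S, I, R, R_e) where
    R_e = (beta/gamma) * S/N is the real-time effective reproduction number.
    """
    S, I, R = N - I0, I0, 0.0
    history = []
    steps_per_day = int(round(1.0 / dt))
    for day in range(days):
        history.append((day, S, I, R, (beta / gamma) * S / N))
        for _ in range(steps_per_day):
            new_infections = beta * S * I / N * dt
            new_recoveries = gamma * I * dt
            S -= new_infections
            I += new_infections - new_recoveries
            R += new_recoveries
    return history

# Illustrative run: the number of infectious people peaks when R_e falls through 1.
for day, S, I, R, R_eff in simulate_sir_discrete(N=1000, beta=0.4, gamma=0.2, I0=1, days=160)[::20]:
    print(f"day {day:3d}  S={S:7.1f}  I={I:7.1f}  R={R:7.1f}  R_e={R_eff:4.2f}")
```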
Among these three reproduction numbers, formula_1 is very useful to judge the control pressure, e.g., a large value meaning the disease will spread very fast and is very difficult to control. formula_72 is most useful in predicting future trends, for example, if we know the social interactions have reduced 50% frequently from that before the outbreak and the interaction intensities among people are the same, then we can set formula_78. If social distancing and masks add another 50% cut in infection efficiency, we can set formula_79. formula_74 will perfectly correlate with the waves of the spreading and whenever formula_80, the spreading accelerates, and when formula_81, the spreading slows down thus useful to set a prediction on the short-term trends. Also, it can be used to directly calculate the threshold population of vaccination/immunization for the herd immunity stage by setting formula_82, and formula_83, i.e. formula_84. The SIR model with vital dynamics and constant population. Consider a population characterized by a death rate formula_85 and birth rate formula_86, and where a communicable disease is spreading. The model with mass-action transmission is: formula_87 for which the disease-free equilibrium (DFE) is: formula_88 In this case, we can derive a basic reproduction number: formula_89 which has threshold properties. In fact, independently from biologically meaningful initial values, one can show that: formula_90 formula_91 The point EE is called the Endemic Equilibrium (the disease is not totally eradicated and remains in the population). With heuristic arguments, one may show that formula_92 may be read as the average number of infections caused by a single infectious subject in a wholly susceptible population, the above relationship biologically means that if this number is less than or equal to one the disease goes extinct, whereas if this number is greater than one the disease will remain permanently endemic in the population. The SIR model. In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, formula_93; infected, formula_94; and recovered, formula_95. The compartments used for this model consist of three classes: The flow of this model may be considered as follows: formula_96 Using a fixed population, formula_97 in the three functions resolves that the value formula_14 should remain constant within the simulation, if a simulation is used to solve the SIR model. Alternatively, the analytic approximant can be used without performing a simulation. The model is started with values of formula_98, formula_99 and formula_100. These are the number of people in the susceptible, infected and removed categories at time equals zero. If the SIR model is assumed to hold at all times, these initial conditions are not independent. Subsequently, the flow model updates the three variables for every time point with set values for formula_69 and formula_71. The simulation first updates the infected from the susceptible and then the removed category is updated from the infected category for the next time point (t=1). This describes the flow persons between the three categories. During an epidemic the susceptible category is not shifted with this model, formula_69 changes over the course of the epidemic and so does formula_71. These variables determine the length of the epidemic and would have to be updated with each cycle. 
formula_101 formula_102 formula_103 Several assumptions were made in the formulation of these equations: First, each individual in the population is considered to have the same probability as every other individual of contracting the disease, with a rate formula_104, and to make contact with the same fraction formula_105 of people per unit time. Then, let formula_69 be the product of formula_104 and formula_105. This is the transmission probability times the contact rate. Moreover, an infected individual makes contact with formula_105 persons per unit time, of whom only a fraction formula_106 are susceptible. Thus, every infective can infect formula_107 susceptible persons, and therefore the whole number of susceptibles infected by infectives per unit time is formula_108. For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, a number equal to the fraction formula_71 (which represents the mean recovery/death rate, or formula_73, the mean infective period) of infectives leave this class per unit time to enter the removed class. These processes, which occur simultaneously, are referred to as the Law of Mass Action, a widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned. Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths, and therefore these factors are ignored in this model. Steady-state solutions. The only steady-state solution to the classic SIR model as defined by the differential equations above is I=0; S and R can then take any values. The model can be changed, while retaining three compartments, to give a steady-state endemic solution by adding some input to the S compartment. For example, one may postulate that the expected duration of susceptibility will be formula_109 where formula_110 reflects the time alive (life expectancy) and formula_111 reflects the time in the susceptible state before becoming infected, which can be simplified to: formula_112 such that the number of susceptible persons is the number entering the susceptible compartment formula_113 times the duration of susceptibility: formula_114 Analogously, the steady-state number of infected persons is the number entering the infected state from the susceptible state (number susceptible, times rate of infection) formula_115 times the duration of infectiousness formula_116: formula_117 Other compartmental models. There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease during which the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR). Compartmental models can also be used to model multiple risk groups, and even the interaction of multiple pathogens. Variations on the basic SIR model. The SIS model. Some infections, for example those from the common cold and influenza, do not confer any long-lasting immunity. Such infections may give temporary resistance but do not give long-term immunity upon recovery from infection, and individuals become susceptible again. We have the model: formula_118 Note that denoting with "N" the total population it holds that: formula_119. It follows that: formula_120, i.e.
the dynamics of the infectious class is ruled by a logistic function, so that formula_121: formula_122 It is possible to find an analytical solution to this model (by making a transformation of variables: formula_123 and substituting this into the mean-field equations), valid when the basic reproduction rate is greater than unity. The solution is given as formula_124, where formula_125 is the endemic infectious population, formula_126, and formula_127. As the system is assumed to be closed, the susceptible population is then formula_128. Whenever the integer nature of the number of agents is evident (populations with fewer than tens of thousands of individuals), inherent fluctuations in the disease-spreading process caused by discrete agents result in uncertainties. In this scenario, the evolution of the disease predicted by compartmental equations deviates significantly from the observed results. These uncertainties may even cause the epidemic to end earlier than predicted by the compartmental equations. As a special case, one obtains the usual logistic function by assuming formula_129. This can also be considered in the SIR model with formula_130, i.e. no removal will take place. That is the "SI model". The differential equation system using formula_131 thus reduces to: formula_132 In the long run, in the SI model, all individuals will become infected. The SIRD model. The "Susceptible-Infectious-Recovered-Deceased model" differentiates between "Recovered" (meaning specifically individuals having survived the disease and now immune) and "Deceased". The SIRD model has semi-analytical solutions based on the four parts method. This model uses the following system of differential equations: formula_133 where formula_134 are the rates of infection, recovery, and mortality, respectively. The SIRV model. The "Susceptible-Infectious-Recovered-Vaccinated model" is an extended SIR model that accounts for vaccination of the susceptible population. This model uses the following system of differential equations: formula_135 where formula_136 are the rates of infection, recovery, and vaccination, respectively. For the semi-time initial conditions formula_137, formula_138, formula_139 and constant ratios formula_140 and formula_141 the model has been solved approximately. The occurrence of a pandemic outbreak requires formula_142, and there is a critical reduced vaccination rate formula_143 beyond which the steady-state size formula_144 of the susceptible compartment remains relatively close to formula_22. Arbitrary initial conditions satisfying formula_145 can be mapped to the solved special case with formula_139. The numerical solution of this model to calculate the real-time reproduction number formula_72 of COVID-19 can be carried out based on information from the different populations in a community. Numerical solution is a commonly used method to analyze complicated kinetic networks when the analytical solution is difficult to obtain or limited by requirements such as boundary conditions or special parameters. It uses recursive equations to calculate the next step by converting the numerical integration into a Riemann sum over discrete time steps, e.g., using yesterday's principal and interest rate to calculate today's interest, which assumes the interest rate is fixed during the day. The calculation contains projected errors if the analytical corrections on the numerical step size are not included, e.g.
when the interest rate of an annual collection is simplified to 12 times the monthly rate, a projection error is introduced. Thus the calculated results carry accumulated errors when the time step is far away from the reference point, and a convergence test is needed to estimate the error. However, this error is usually acceptable for data fitting. When fitting a set of data with a close time step, the error is relatively small because the reference point is nearby, compared to when predicting over a long period of time after a reference point. Once the real-time formula_72 is extracted, one can compare it to the basic reproduction number formula_1. Before vaccination, formula_72 gives the policy maker and the general public a measure of the efficiency of social mitigation activities such as social distancing and face masking, simply by computing formula_146. Under mass vaccination, the goal of disease control is to reduce the effective reproduction number so that formula_147, where formula_11 is the number of susceptible people at the time and formula_14 is the total population. When formula_77, the spreading decays and daily infected cases go down. The SIRVD model. The "susceptible-infected-recovered-vaccinated-deceased" (SIRVD) epidemic compartment model extends the SIR model to include the effects of vaccination campaigns and time-dependent fatality rates on epidemic outbreaks. It encompasses the SIR, SIRV, SIRD, and SI models as special cases, with individual time-dependent rates governing the transitions between the different fractions. This model uses the following system of differential equations for the population fractions formula_148: formula_149 where formula_150 are the infection, vaccination, recovery, and fatality rates, respectively. For the semi-time initial conditions formula_151, formula_152, formula_153 and constant ratios formula_154, formula_155, and formula_156 the model has been solved approximately, and exactly for some special cases, irrespective of the functional form of formula_157. This is achieved upon rewriting the above SIRVD model equations in the equivalent, but reduced, form formula_158 where formula_159 is a reduced, dimensionless time. The temporal dependence of the infected fraction formula_160 and of the rate of new infections formula_161 differ when considering the effects of vaccinations and when the real-time dependence of fatality and recovery rates diverge. These differences have been highlighted for stationary ratios and gradually decreasing fatality rates. The case of stationary ratios allows one to construct a diagnostics method to extract analytically all SIRVD model parameters from measured COVID-19 data of a completed pandemic wave. The MSIR model. For many infections, including measles, babies are not born into the susceptible compartment but are immune to the disease for the first few months of life due to protection from maternal antibodies (passed across the placenta and additionally through colostrum). This is called passive immunity. This added detail can be shown by including an M class (for maternally derived immunity) at the beginning of the model. To indicate this mathematically, an additional compartment is added, "M"("t"). This results in the following differential equations: formula_162 Carrier state. Some people who have had an infectious disease such as tuberculosis never completely recover and continue to carry the infection, whilst not suffering the disease themselves.
They may then move back into the infectious compartment and suffer symptoms (as in tuberculosis), or they may continue to infect others in their carrier state while not suffering symptoms. The most famous example of this is probably Mary Mallon, who infected 22 people with typhoid fever. The carrier compartment is labelled C. The SEIR model. For many important infections, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment "E" (for exposed). Assuming that the latency period is a random variable with exponential distribution with parameter formula_104 (i.e. the average latency period is formula_163), and also assuming the presence of vital dynamics with birth rate formula_86 equal to death rate formula_164 (so that the total number formula_165 is constant), we have the model: formula_166 We have formula_167 but this is only constant because of the simplifying assumption that birth and death rates are equal; in general formula_14 is a variable. For this model, the basic reproduction number is: formula_168 Similarly to the SIR model, in this case we also have a Disease-Free Equilibrium ("N",0,0,0) and an Endemic Equilibrium EE, and one can show that, independently of biologically meaningful initial conditions formula_169 it holds that: formula_170 formula_171 In the case of a periodically varying contact rate formula_172 the condition for the global attractiveness of the DFE is that the following linear system with periodic coefficients: formula_173 is stable (i.e. its Floquet eigenvalues lie inside the unit circle in the complex plane). The SEIS model. The SEIS model is like the SEIR model (above) except that no immunity is acquired at the end. formula_174 In this model an infection does not leave any immunity, so individuals that have recovered return to being susceptible, moving back into the "S"("t") compartment. The following differential equations describe this model: formula_175 The MSEIR model. For the case of a disease with the factors of passive immunity and a latency period, there is the MSEIR model. formula_176 formula_177 The MSEIRS model. An MSEIRS model is similar to the MSEIR, but the immunity in the R class would be temporary, so that individuals would regain their susceptibility when the temporary immunity ends. formula_178 Variable contact rates. It is well known that the probability of getting a disease is not constant in time. As a pandemic progresses, reactions to the pandemic may change the contact rates, which are assumed constant in the simpler models. Counter-measures such as masks, social distancing, and lockdown will alter the contact rate in a way that reduces the speed of the pandemic. In addition, some diseases are seasonal, such as the common cold viruses, which are more prevalent during winter. With childhood diseases, such as measles, mumps, and rubella, there is a strong correlation with the school calendar, so that during the school holidays the probability of getting such a disease dramatically decreases. As a consequence, for many classes of diseases, one should consider a force of infection with a periodically ('seasonally') varying contact rate formula_179 with period T equal to one year. Thus, our model becomes formula_180 (the dynamics of the recovered class easily follows from formula_181), i.e. a nonlinear set of differential equations with periodically varying parameters.
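A minimal numerical sketch of this seasonally forced system is given below; the sinusoidal form chosen for formula_179, its amplitude, and all rate and population values are illustrative assumptions rather than values from the literature.

```python
import math

# Forward-Euler integration of the seasonally forced SIR model above:
#   dS/dt = mu*N - mu*S - beta(t)*I*S/N,   dI/dt = beta(t)*I*S/N - (gamma + mu)*I,
# with beta(t) = beta0 * (1 + eps*cos(2*pi*t/T)), period T = 1 year.
# All numerical values are illustrative assumptions.
def seasonal_sir(beta0=0.4, eps=0.3, gamma=0.2, mu=1.0 / (70 * 365),
                 N=1e6, T=365.0, dt=0.1, years=20):
    S, I = 0.9 * N, 1.0
    steps_per_year = int(round(T / dt))
    yearly = []
    for k in range(years * steps_per_year):
        t = k * dt
        beta = beta0 * (1.0 + eps * math.cos(2.0 * math.pi * t / T))
        dS = mu * N - mu * S - beta * I * S / N
        dI = beta * I * S / N - (gamma + mu) * I
        S, I = S + dS * dt, I + dI * dt
        if (k + 1) % steps_per_year == 0:   # record I at the end of each year
            yearly.append(I)
    return yearly

if __name__ == "__main__":
    for year, infected in enumerate(seasonal_sir(), start=1):
        print(f"end of year {year:2d}: I = {infected:10.2f}")
```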
It is well known that this class of dynamical systems may undergo very interesting and complex phenomena of nonlinear parametric resonance. It is easy to see that if: formula_182 whereas, if the integral is greater than one, the disease will not die out and there may be such resonances. For example, considering the periodically varying contact rate as the 'input' of the system, one has that the output is a periodic function whose period is a multiple of the period of the input. This has helped to explain the poly-annual (typically biennial) epidemic outbreaks of some infectious diseases as an interplay between the period of the contact-rate oscillations and the pseudo-period of the damped oscillations near the endemic equilibrium. Remarkably, in some cases, the behavior may also be quasi-periodic or even chaotic. SIR model with diffusion. Spatiotemporal compartmental models describe not the total number, but the density of susceptible/infective/recovered persons. Consequently, they also allow one to model the distribution of infected persons in space. In most cases, this is done by combining the SIR model with a diffusion equation formula_183 where formula_184, formula_185 and formula_186 are diffusion constants. Thereby, one obtains a reaction-diffusion equation. (Note that, for dimensional reasons, the parameter formula_69 has to be changed compared to the simple SIR model.) Early models of this type have been used to model the spread of the Black Death in Europe. Extensions of this model have been used to incorporate, e.g., effects of nonpharmaceutical interventions such as social distancing. Interacting Subpopulation SEIR Model. As social contacts, disease severity and lethality, as well as the efficacy of prophylactic measures may differ substantially between interacting subpopulations, e.g., the elderly versus the young, separate SEIR models for each subgroup may be used that are mutually connected through interaction links. Such Interacting Subpopulation SEIR models have been used for modeling the COVID-19 pandemic at continent scale to develop personalized, accelerated, subpopulation-targeted vaccination strategies that promise a shortening of the pandemic and a reduction of case and death counts in the setting of limited access to vaccines during a wave of virus Variants of Concern. SIR Model on Networks. The SIR model has been studied on networks of various kinds in order to model a more realistic form of connection than the homogeneous mixing condition which is usually required. A simple model for epidemics on networks, in which an individual has a probability p of being infected by each of his infected neighbors in a given time step, leads to results similar to giant component formation on Erdős–Rényi random graphs. SIRSS model - combination of SIR with modelling of social stress. The dynamics of epidemics depend on how people's behavior changes in time. For example, at the beginning of the epidemic, people are ignorant and careless; then, after the outbreak of the epidemic and the alarm, they begin to comply with the various restrictions and the spreading of the epidemic may decline. Over time, some people get tired of or frustrated by the restrictions and stop following them (exhaustion), especially if the number of new cases drops. After resting for some time, they can follow the restrictions again. But during this pause a second wave can come and become even stronger than the first one. Social dynamics should therefore be considered.
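Before describing the social-stress model, a brief return to the reaction-diffusion formulation above: a minimal 1-D finite-difference sketch is given below; the grid, diffusion constants, rates, and time step are all illustrative assumptions.

```python
import numpy as np

# Minimal 1-D finite-difference sketch of the reaction-diffusion SIR system above:
#   dS/dt = D_S * d^2S/dx^2 - beta*S*I/N,   dI/dt = D_I * d^2I/dx^2 + beta*S*I/N - gamma*I.
# Grid, diffusion constants, rates, and time step are illustrative assumptions.
L, nx = 100.0, 201                  # domain length and number of grid points
dx = L / (nx - 1)
D_S = D_I = 1.0                     # diffusion constants
beta, gamma = 0.5, 0.2              # local transmission and recovery rates
dt = 0.2 * dx**2 / max(D_S, D_I)    # step size respecting the explicit diffusion limit

S = np.full(nx, 100.0)              # susceptible density
I = np.zeros(nx)
I[nx // 2] = 1.0                    # localized seed of infection in the middle
N = S + I                           # local population density (R starts at zero)

def laplacian(u):
    """Second difference; the two boundary points get no diffusion term here."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return lap

for _ in range(int(30.0 / dt)):     # integrate up to t = 30 (arbitrary horizon)
    infection = beta * S * I / N
    S = S + dt * (D_S * laplacian(S) - infection)
    I = I + dt * (D_I * laplacian(I) + infection - gamma * I)

print("infected density every 20 grid points:", np.round(I[::20], 2))
```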
The social physics models of social stress complement the classical epidemic models. The simplest SIR-social stress (SIRSS) model is organised as follows. The susceptible individuals (S) can be split into three subgroups by type of behavior: ignorant or unaware of the epidemic (Sign), rationally resistant (Sres), and exhausted (Sexh), who do not react to the external stimuli (this is a sort of refractory period). In other words: S(t) = Sign(t) + Sres(t) + Sexh(t). Symbolically, the social stress model can be presented by the "reaction scheme" (where I denotes the infected individuals): formula_187 (mobilization of ignorant susceptible individuals under the influence of the infected), formula_188 (exhaustion of the rationally resistant), formula_189 (return of exhausted individuals to the ignorant state), and formula_190 (the main SIR epidemic reaction for each of the susceptible subgroups). The main "SIR epidemic reaction" has different reaction rate constants formula_69 for Sign, Sres, and Sexh. Presumably, for Sres, formula_69 is lower than for Sign and Sexh. The differences between countries are concentrated in two kinetic constants: the rate of mobilization and the rate of exhaustion, calculated for the COVID-19 epidemic in 13 countries. These constants for this epidemic in all countries can be extracted by fitting the SIRSS model to publicly available data. The KdV-SIR equation. Based on the classical SIR model, a Korteweg-de Vries (KdV)–SIR equation and its analytical solution have been proposed to illustrate the fundamental dynamics of an epidemic wave, the dependence of solutions on parameters, and the dependence of predictability horizons on various types of solutions. The KdV-SIR equation is written as follows: formula_191. Here, formula_192, formula_193, and formula_194. formula_195 indicates the initial value of the state variable formula_11. Parameters formula_196 (σ-naught) and formula_197 (R-naught) are the time-independent relative growth rate and basic reproduction number, respectively. formula_198 represents the maximum of the state variable formula_12 (the number of infected persons). An analytical solution to the KdV-SIR equation is written as follows: formula_199, which represents a solitary wave solution. Modelling vaccination. The SIR model can be modified to model vaccination. Typically these modifications introduce an additional compartment to the SIR model, formula_200, for vaccinated individuals. Below are some examples. Vaccinating newborns. In the presence of a communicable disease, one of the main tasks is that of eradicating it via prevention measures and, if possible, via the establishment of a mass vaccination program. Consider a disease for which newborns are vaccinated (with a vaccine giving lifelong immunity) at a rate formula_201: formula_202 where formula_200 is the class of vaccinated subjects. It is immediate to show that: formula_203 thus we shall deal with the long-term behavior of formula_11 and formula_12, for which it holds that: formula_204 formula_205 In other words, if formula_206 the vaccination program is not successful in eradicating the disease; on the contrary, it will remain endemic, although at lower levels than in the case of absence of vaccinations. This means that the mathematical model suggests that for a disease whose basic reproduction number may be as high as 18 one should vaccinate at least 94.4% of newborns in order to eradicate the disease. Vaccination and information. Modern societies are facing the challenge of "rational" exemption, i.e. the family's decision not to vaccinate children as a consequence of a "rational" comparison between the perceived risk from infection and that of getting damage from the vaccine. In order to assess whether this behavior is really rational, i.e.
if it can equally lead to the eradication of the disease, one may simply assume that the vaccination rate is an increasing function of the number of infectious subjects: formula_207 In such a case the eradication condition becomes: formula_208 i.e. the baseline vaccination rate should be greater than the "mandatory vaccination" threshold, which, in the case of exemption, cannot hold. Thus, "rational" exemption might be myopic, since it is based only on the current low incidence due to high vaccine coverage, instead of taking into account a future resurgence of infection due to coverage decline. Vaccination of non-newborns. In case there are also vaccinations of non-newborns at a rate ρ, the equations for the susceptible and vaccinated subjects have to be modified as follows: formula_209 leading to the following eradication condition: formula_210 Pulse vaccination strategy. This strategy repeatedly vaccinates a defined age cohort (such as young children or the elderly) in a susceptible population over time. Using this strategy, the block of susceptible individuals is then immediately removed, making it possible to eliminate an infectious disease (such as measles) from the entire population. Every T time units a constant fraction p of susceptible subjects is vaccinated in a time that is relatively short with respect to the dynamics of the disease. This leads to the following impulsive differential equations for the susceptible and vaccinated subjects: formula_211 It is easy to see that by setting "I" = 0 one obtains that the dynamics of the susceptible subjects is given by: formula_212 and that the eradication condition is: formula_213 The influence of age: age-structured models. Age has a deep influence on the disease spread rate in a population, especially the contact rate. This rate summarizes the effectiveness of contacts between susceptible and infectious subjects. Taking into account the ages of the epidemic classes formula_214 (to limit ourselves to the susceptible-infectious-removed scheme) such that: formula_215 formula_216 formula_217 (where formula_218 is the maximum admissible age), their dynamics are not described, as one might think, by "simple" partial differential equations, but by integro-differential equations: formula_219 formula_220 formula_221 where: formula_222 is the force of infection, which, of course, will depend, through the contact kernel formula_223, on the interactions between the ages. Complexity is added by the initial conditions for newborns (i.e. for a=0), which are straightforward for the infectious and removed classes: formula_224 but which are nonlocal for the density of susceptible newborns: formula_225 where formula_226 are the fertilities of the adults. Moreover, defining now the density of the total population formula_227 one obtains: formula_228 In the simplest case of equal fertilities in the three epidemic classes, in order to have demographic equilibrium the following necessary and sufficient condition linking the fertility formula_229 with the mortality formula_230 must hold: formula_231 and the demographic equilibrium is formula_232 automatically ensuring the existence of the disease-free solution: formula_233 A basic reproduction number can be calculated as the spectral radius of an appropriate functional operator. Next-generation method. One way to calculate formula_234 is to average the expected number of new infections over all possible infected types.
The next-generation method is a general method of deriving formula_234 when more than one class of infectives is involved. This method, originally introduced by Diekmann "et al." (1990), can be used for models with underlying age structure or spatial structure, among other possibilities. In this picture, the spectral radius of the next-generation matrix formula_235 gives the basic reproduction number, formula_236 Consider a sexually transmitted disease in a naive population where almost everyone is susceptible apart from the infection seed. If a single infected individual of gender 2 is expected to infect formula_237 individuals of gender 1, and a single infected individual of gender 1 is expected to infect formula_238 individuals of gender 2, we can work out how many would be infected in the next generation, such that the "next-generation matrix" formula_235 can be written as: formula_239 where each element formula_240 is the expected number of secondary infections of gender formula_241 caused by a single infected individual of gender formula_242, assuming that the population of gender formula_241 is entirely susceptible. Diagonal elements are zero because people of the same gender cannot transmit the disease to each other; the off-diagonal elements formula_237 and formula_238 give the average numbers of cross-gender transmissions. Thus each element formula_240 is a reproduction number, but one where who infects whom is accounted for. If generation formula_243 is represented with formula_244 then the next generation formula_245 would be formula_246. The spectral radius of the next-generation matrix is the basic reproduction number, formula_247, which here is the geometric mean of the expected numbers of infections of the two genders in the next generation. Note that the multiplication factors formula_237 and formula_238 alternate because the infection has to 'pass through' a second gender before it can enter a new host of the first gender. In other words, it takes two generations to get back to the same type, and every two generations the numbers are multiplied by formula_238×formula_237. The average per-generation multiplication factor is therefore formula_248. Note that formula_235 is a non-negative matrix, so it has a single, unique, positive, real eigenvalue which is greater than or equal in modulus to all the others. Next-generation matrix for compartmental models. In the mathematical modelling of infectious disease, the dynamics of spreading is usually described through a set of non-linear ordinary differential equations (ODE). So there are always formula_249 coupled equations of the form formula_250 which show how the number of people in compartment formula_251 changes over time. For example, in a SIR model, formula_252, formula_253, and formula_254. Compartmental models have a disease-free equilibrium (DFE), meaning that it is possible to find an equilibrium while setting the number of infected people to zero, formula_255. In other words, as a rule, there is an infection-free steady state of the system. There is another fixed point known as an Endemic Equilibrium (EE), where the disease is not totally eradicated and remains in the population. Mathematically, formula_234 is a threshold for the stability of the disease-free equilibrium such that: formula_256 formula_257 To calculate formula_234, the first step is to linearise around the disease-free equilibrium (DFE), but only for the infected subsystem of non-linear ODEs, which describes the production of new infections and changes in state among infected individuals.
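Before continuing with the linearisation, the two-gender example above can be checked numerically; the values chosen for formula_237 and formula_238 below are placeholders, intended only to illustrate that the spectral radius of formula_235 equals formula_248.

```python
import numpy as np

# Next-generation matrix for the two-gender example above.
# f and m are placeholder values for the expected numbers of secondary infections.
f, m = 4.0, 2.25
G = np.array([[0.0, f],
              [m, 0.0]])

R0 = max(abs(np.linalg.eigvals(G)))   # spectral radius of G
print(f"spectral radius = {R0:.3f}, sqrt(m*f) = {np.sqrt(m * f):.3f}")  # both 3.000
```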
Epidemiologically, the linearisation reflects that formula_234 characterizes the potential for initial spread of an infectious person in a naive population, assuming the change in the susceptible population is negligible during the initial spread. A linear system of ODEs can always be described by a matrix. So, the next step is to construct a linear positive operator that provides the next generation of infected people when applied to the present generation. Note that this operator (matrix) is responsible for the number of infected people, not for all the compartments. Iteration of this operator describes the initial progression of infection within the heterogeneous population. So comparing the spectral radius of this operator to unity determines whether the generations of infected people grow or not. formula_234 can be written as a product of the infection rate near the disease-free equilibrium and the average duration of infectiousness. It is used to find the peak and final size of an epidemic. The SEIR model with vital dynamics and constant population. As described in the example above, many epidemic processes can be described with a SIR model. However, for many important infections, such as COVID-19, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment "E" (for exposed). Here, the formation of the next-generation matrix from the SEIR model involves separating the compartments into infected and non-infected ones, since it is the infected compartments that spread the infection. So we only need to model the exposed, E, and infected, I, compartments. Consider a population characterized by a death rate formula_85 and birth rate formula_258, where a communicable disease is spreading. As in the previous example, we can use the per-capita transition rates between the compartments, such that formula_69 is the infection rate, formula_71 is the recovery rate, and formula_259 is the rate at which a latent individual becomes infectious. Then, we can define the model dynamics using the following equations: formula_260 Here we have 4 compartments and we can define the vector formula_261 where formula_262 denotes the number or proportion of individuals in the "formula_263"-th compartment. Let formula_264 be the rate of appearance of new infections in compartment "formula_263", such that it includes only infections that are newly arising and does not include terms which describe the transfer of infectious individuals from one infected compartment to another. Then if formula_265 is the rate of transfer of individuals into compartment "formula_263" by all other means and formula_266 is the rate of transfer of individuals out of the "formula_263"-th compartment, then the difference formula_267 gives the rate of change of formula_262, where formula_268. We can now make matrices of partial derivatives of "formula_269" and "formula_270" such that formula_271 and formula_272, where formula_273 is the disease-free equilibrium. We can now form the next-generation matrix (operator) formula_274. Basically, formula_275 is a non-negative matrix which represents the infection rates near the equilibrium, and formula_276 is an M-matrix for the linear transition terms, making formula_277 a matrix which represents the average duration of infectiousness.
Therefore, formula_278 gives the rate at which infected individuals in "formula_279" produce new infections in "formula_262", times the average length of time an individual spends in a single visit to compartment "formula_280" Finally, for this SEIR process we can have: formula_281 and formula_282 and so formula_283 Estimation methods. The basic reproduction number can be estimated through examining detailed transmission chains or through genomic sequencing. However, it is most frequently calculated using epidemiological models. During an epidemic, typically the number of diagnosed infections formula_284 over time formula_51 is known. In the early stages of an epidemic, growth is exponential, with a logarithmic growth rate formula_285 For exponential growth, formula_14 can be interpreted as the cumulative number of diagnoses (including individuals who have recovered) or the present number of infection cases; the logarithmic growth rate is the same for either definition. In order to estimate formula_1, assumptions are necessary about the time delay between infection and diagnosis and the time between infection and starting to be infectious. In exponential growth, formula_286 is related to the doubling time formula_287 as formula_288 Simple model. If an individual, after getting infected, infects exactly formula_1 new individuals only after exactly a time formula_57 (the serial interval) has passed, then the number of infectious individuals over time grows as formula_289 or formula_290 The underlying matching differential equation is formula_291 or, equivalently, formula_292 In this case, formula_293 or formula_294. For example, with formula_295 and formula_296, we would find formula_297. If formula_1 is time dependent, formula_298 showing that it may be important to keep formula_299 below 0, time-averaged, to avoid exponential growth. Latent infectious period, isolation after diagnosis. In this model, an individual infection has the following stages: the individual is first exposed (infected but not yet infectious) for an average time formula_300, then latent infectious (infectious but not yet diagnosed) for an average time formula_301, during which he or she infects formula_1 other individuals, and is finally isolated after diagnosis so that no further infections occur. This is a SEIR model and formula_1 may be written in the following form formula_302 This estimation method has been applied to COVID-19 and SARS. It follows from the differential equation for the number of exposed individuals formula_303 and the number of latent infectious individuals formula_304, formula_305 The largest eigenvalue of the matrix is the logarithmic growth rate formula_286, which can be solved for formula_1. In the special case formula_306, this model results in formula_307, which is different from the simple model above (formula_308). For example, with the same values formula_295 and formula_296, we would find formula_309, rather than the true value of formula_310. The difference is due to a subtle difference in the underlying growth model; the matrix equation above assumes that newly infected patients are currently already contributing to infections, while in fact infections only occur due to the number infected formula_300 ago. A more correct treatment would require the use of delay differential equations. The latent period is the transition time between the contagion event and disease manifestation. In cases of diseases with varying latent periods, the basic reproduction number can be calculated as the sum of the reproduction numbers for each transition time into the disease. An example of this is tuberculosis (TB).
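Before turning to the TB example, a quick numerical check of the growth-rate formulas above, using the example values formula_295 and formula_296 quoted in the text; the 3-day/2-day split of the delay used for the general SEIR-based formula is an arbitrary illustrative assumption.

```python
import math

# Check the two R0 estimates discussed above for K = 0.183 /day and tau = 5 days.
K, tau = 0.183, 5.0

R0_simple = math.exp(K * tau)        # simple model: R0 = exp(K*tau)        -> ~2.5
R0_seir_special = 1.0 + K * tau      # SEIR-based estimate with tau_I = 0   -> ~1.9

print(f"simple model:      R0 = {R0_simple:.2f}")
print(f"SEIR (tau_I = 0):  R0 = {R0_seir_special:.2f}")

# General SEIR-based estimate R0 = 1 + K*(tau_E + tau_I) + K^2*tau_E*tau_I,
# here with an assumed split tau_E = 3 d, tau_I = 2 d of the same total delay:
tau_E, tau_I = 3.0, 2.0
R0_seir = 1.0 + K * (tau_E + tau_I) + K**2 * tau_E * tau_I
print(f"SEIR (3 d + 2 d):  R0 = {R0_seir:.2f}")
```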
Blower and coauthors calculated from a simple model of TB the following reproduction number: formula_311 In their model, it is assumed that infected individuals can develop active TB either by direct progression (the disease develops immediately after infection), referred to above as FAST tuberculosis, or by endogenous reactivation (the disease develops years after the infection), referred to above as SLOW tuberculosis. Other considerations within compartmental epidemic models. Vertical transmission. In the case of some diseases such as AIDS and hepatitis B, it is possible for the offspring of infected parents to be born infected. This transmission of the disease down from the mother is referred to as vertical transmission. The influx of additional members into the infected category can be considered within the model by including a fraction of the newborn members in the infected compartment. Vector transmission. Diseases transmitted from human to human indirectly, e.g. malaria spread by way of mosquitoes, are transmitted through a vector. In these cases, the infection transfers from human to insect, and an epidemic model must include both species, generally requiring many more compartments than a model for direct transmission. Others. Other occurrences which may need to be considered when modeling an epidemic include things such as the following: Deterministic versus stochastic epidemic models. The deterministic models presented here are valid only in the case of sufficiently large populations, and as such should be used cautiously. These models are only valid in the thermodynamic limit, where the population is effectively infinite. In stochastic models, the long-time endemic equilibrium derived above does not hold, as there is a finite probability that the number of infected individuals drops below one in a system. In a true system, then, the pathogen may not propagate, as no host will be infected. But in deterministic mean-field models, the number of infected can take on real, namely non-integer, values of infected hosts, and the number of hosts in the model can be less than one but more than zero, thereby allowing the pathogen in the model to propagate. The reliability of compartmental models is therefore limited to applications where these assumptions hold. One of the possible extensions of mean-field models considers the spreading of epidemics on a network, based on percolation theory concepts. Stochastic epidemic models have been studied on different networks and more recently applied to the COVID-19 pandemic. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R(0)" }, { "math_id": 1, "text": "R_0" }, { "math_id": 2, "text": " d(S/N)/dt = -\\beta SI/N^2 " }, { "math_id": 3, "text": " N " }, { "math_id": 4, "text": " \\beta " }, { "math_id": 5, "text": " SI/N^2 " }, { "math_id": 6, "text": " \\gamma I " }, { "math_id": 7, "text": " D " }, { "math_id": 8, "text": " \\gamma = 1 / D " }, { "math_id": 9, "text": " \\gamma=0 " }, { "math_id": 10, "text": "\n\\left\\{\\begin{aligned}\n& \\frac{dS}{dt} = - \\beta I S, \\\\[6pt]\n& \\frac{dI}{dt} = \\beta I S - \\gamma I, \\\\[6pt]\n& \\frac{dR}{dt} = \\gamma I,\n\\end{aligned}\\right.\n" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "I" }, { "math_id": 13, "text": "R" }, { "math_id": 14, "text": "N" }, { "math_id": 15, "text": " \\frac{dS}{dt} + \\frac{dI}{dt} + \\frac{dR}{dt} = 0," }, { "math_id": 16, "text": " S(t) + I(t) + R(t) = \\text{constant} = N," }, { "math_id": 17, "text": " R_0 = \\frac{\\beta}{\\gamma}," }, { "math_id": 18, "text": "T_{c} = \\beta^{-1}" }, { "math_id": 19, "text": "T_{r} = \\gamma^{-1}" }, { "math_id": 20, "text": "T_{r}/T_{c}." }, { "math_id": 21, "text": " S(t) = S(0) e^{-R_0(R(t) - R(0))/N}, " }, { "math_id": 22, "text": "S(0)" }, { "math_id": 23, "text": "s_0 = S(0) / N" }, { "math_id": 24, "text": "s_\\infty = S(\\infty) / N" }, { "math_id": 25, "text": "r_\\infty = R(\\infty) / N" }, { "math_id": 26, "text": "t \\to \\infty," }, { "math_id": 27, "text": "s_\\infty = 1 - r_\\infty = s_0 e^{-R_0(r_\\infty - r_0)}" }, { "math_id": 28, "text": "s_\\infty = 1-r_\\infty = - R_0^{-1}\\, W(-s_0 R_0 e^{-R_0(1-r_0)})." }, { "math_id": 29, "text": "s_0=0" }, { "math_id": 30, "text": " \\frac{dI}{dt} = \\left(R_0 \\frac{S}{N} - 1\\right) \\gamma I," }, { "math_id": 31, "text": " R_{0} \\cdot S(0) > N," }, { "math_id": 32, "text": " \\frac{dI}{dt}(0) >0 ," }, { "math_id": 33, "text": " R_{0} \\cdot S(0) < N," }, { "math_id": 34, "text": " \\frac{dI}{dt}(0) <0 ," }, { "math_id": 35, "text": " F = \\beta I," }, { "math_id": 36, "text": " F = \\beta \\frac{I}{N} ." 
}, { "math_id": 37, "text": "\\mathcal{S}(u)=S(t)" }, { "math_id": 38, "text": "\\mathcal{S}(u)= S(0)u " }, { "math_id": 39, "text": "\\mathcal{I}(u)= N -\\mathcal{R}(u)-\\mathcal{S}(u) " }, { "math_id": 40, "text": "\\mathcal{R}(u)=R(0) -\\rho \\ln(u)" }, { "math_id": 41, "text": "t= \\frac{N}{\\beta}\\int_u^1 \\frac{du^*}{u^*\\mathcal{I}(u^*)} , \\quad \\rho=\\frac{\\gamma N}{\\beta}," }, { "math_id": 42, "text": "(\\mathcal{S}(1),\\mathcal{I}(1),\\mathcal{R}(1))=(S(0),N -R(0)-S(0),R(0)), \\quad u_T<u<1," }, { "math_id": 43, "text": "u_T" }, { "math_id": 44, "text": "\\mathcal{I}(u_T)=0" }, { "math_id": 45, "text": "R_{\\infty}" }, { "math_id": 46, "text": "u_T=e^{-(R_{\\infty}-R(0))/\\rho}(=S_{\\infty}/S(0)" }, { "math_id": 47, "text": "S(0) \\neq 0)" }, { "math_id": 48, "text": "I_{\\infty}=0" }, { "math_id": 49, "text": "\n\\begin{align}\nS(t) & = S(0) e^{-\\xi(t)} \\\\[8pt]\nI(t) & = N-S(t)-R(t) \\\\[8pt]\nR(t) & = R(0) + \\rho \\xi(t) \\\\[8pt]\n\\xi(t) & = \\frac{\\beta}{N}\\int_0^t I(t^*) \\, dt^*\n\\end{align}\n" }, { "math_id": 50, "text": "\\xi(t)" }, { "math_id": 51, "text": "t" }, { "math_id": 52, "text": "e^{-\\xi(t)} = u" }, { "math_id": 53, "text": "d/d\\tau" }, { "math_id": 54, "text": "d\\tau=I dt" }, { "math_id": 55, "text": "\\tau=\\int I dt" }, { "math_id": 56, "text": "dR/d\\tau =" }, { "math_id": 57, "text": "\\tau" }, { "math_id": 58, "text": "\\xi" }, { "math_id": 59, "text": "S_{\\infty}" }, { "math_id": 60, "text": "I_{\\infty}" }, { "math_id": 61, "text": "I(0)" }, { "math_id": 62, "text": "t>0" }, { "math_id": 63, "text": "R_0=\\frac{\\beta_0}{\\gamma_0}" }, { "math_id": 64, "text": "R_t=\\frac{\\beta_t}{\\gamma_t}" }, { "math_id": 65, "text": "R_e=\\frac{\\beta_tS}{\\gamma_tN}" }, { "math_id": 66, "text": "\\beta_0 = 0.4 day^{-1}" }, { "math_id": 67, "text": "\\gamma_0 = 0.2 day^{-1}" }, { "math_id": 68, "text": "R_0 = 2" }, { "math_id": 69, "text": "\\beta" }, { "math_id": 70, "text": "SI" }, { "math_id": 71, "text": "\\gamma" }, { "math_id": 72, "text": "R_t" }, { "math_id": 73, "text": "1/\\gamma" }, { "math_id": 74, "text": "R_e" }, { "math_id": 75, "text": "R_e > 1" }, { "math_id": 76, "text": "R_e = 1" }, { "math_id": 77, "text": "R_e < 1" }, { "math_id": 78, "text": "R_t = 0.5R_0" }, { "math_id": 79, "text": "R_t = 0.25R_0" }, { "math_id": 80, "text": "R_e>1" }, { "math_id": 81, "text": "R_e<1" }, { "math_id": 82, "text": "R_t = R_0" }, { "math_id": 83, "text": "R_E = 1" }, { "math_id": 84, "text": "S = N/R_0" }, { "math_id": 85, "text": "\\mu" }, { "math_id": 86, "text": "\\Lambda" }, { "math_id": 87, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\Lambda - \\mu S - \\frac{\\beta I S}{N} \\\\[8pt]\n\\frac{dI}{dt} & = \\frac{\\beta I S}{N} - \\gamma I -\\mu I \\\\[8pt]\n\\frac{dR}{dt} & = \\gamma I - \\mu R\n\\end{align}\n" }, { "math_id": 88, "text": "\\left(S(t),I(t),R(t)\\right) =\\left(\\frac{\\Lambda}{\\mu},0,0\\right)." }, { "math_id": 89, "text": " R_0 = \\frac{ \\beta}{\\mu+\\gamma}, " }, { "math_id": 90, "text": " R_0 \\le 1 \\Rightarrow \\lim_{t \\to \\infty} (S(t),I(t),R(t)) = \\textrm{DFE} = \\left(\\frac{\\Lambda}{\\mu},0,0\\right) " }, { "math_id": 91, "text": " R_0 > 1 , I(0)> 0 \\Rightarrow \\lim_{t \\to \\infty} (S(t),I(t),R(t)) = \\textrm{EE} = \\left(\\frac{\\gamma+\\mu}{\\beta},\\frac{\\mu}{\\beta}\\left(R_0-1\\right), \\frac{\\gamma}{\\beta} \\left(R_0-1\\right)\\right). 
" }, { "math_id": 92, "text": "R_{0}" }, { "math_id": 93, "text": "S(t)" }, { "math_id": 94, "text": "I(t)" }, { "math_id": 95, "text": "R(t)" }, { "math_id": 96, "text": "{\\color{blue}{\\mathcal{S} \\rightarrow \\mathcal{I} \\rightarrow \\mathcal{R}}}" }, { "math_id": 97, "text": "N = S(t) + I(t) + R(t)" }, { "math_id": 98, "text": "S(t=0)" }, { "math_id": 99, "text": "I(t=0)" }, { "math_id": 100, "text": "R(t=0)" }, { "math_id": 101, "text": "\\frac{dS}{dt} = - \\beta S I " }, { "math_id": 102, "text": "\\frac{dI}{dt} = \\beta S I - \\gamma I " }, { "math_id": 103, "text": "\\frac{dR}{dt} = \\gamma I " }, { "math_id": 104, "text": "a" }, { "math_id": 105, "text": "b" }, { "math_id": 106, "text": "S/N" }, { "math_id": 107, "text": "a b S = \\beta S" }, { "math_id": 108, "text": "\\beta S I " }, { "math_id": 109, "text": "\\operatorname E[\\min(T_L\\mid T_S)]" }, { "math_id": 110, "text": "T_L" }, { "math_id": 111, "text": "T_S" }, { "math_id": 112, "text": " \\operatorname E[\\min(T_L\\mid T_S)]=\\int_0^\\infty e^{-(\\mu+\\delta) x} \\, dx= \\frac{1}{\\mu+\\delta}," }, { "math_id": 113, "text": "\\mu N" }, { "math_id": 114, "text": "S = \\frac{\\mu N}{\\mu + \\lambda}." }, { "math_id": 115, "text": "\\lambda = \\tfrac{\\beta I}{N}," }, { "math_id": 116, "text": "\\tfrac{1}{\\mu+v}" }, { "math_id": 117, "text": "I = \\frac{\\mu N}{\\mu + \\lambda} \\lambda \\frac{1}{\\mu + v}. " }, { "math_id": 118, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = - \\frac{\\beta S I}{N} + \\gamma I \\\\[6pt]\n\\frac{dI}{dt} & = \\frac{\\beta S I}{N} - \\gamma I\n\\end{align}\n" }, { "math_id": 119, "text": "\\frac{dS}{dt} + \\frac{dI}{dt} = 0 \\Rightarrow S(t)+I(t) = N" }, { "math_id": 120, "text": " \\frac{dI}{dt} = (\\beta - \\gamma) I - \\frac{\\beta}{N} I^2 " }, { "math_id": 121, "text": "\\forall I(0) > 0" }, { "math_id": 122, "text": "\n\\begin{align}\n& \\frac{\\beta}{\\gamma} \\le 1 \\Rightarrow \\lim_{t \\to +\\infty}I(t)=0, \\\\[6pt]\n& \\frac{\\beta}{\\gamma} > 1 \\Rightarrow \\lim_{t \\to +\\infty}I(t) = \\left(1 - \\frac{\\gamma}{\\beta} \\right) N.\n\\end{align}\n" }, { "math_id": 123, "text": "I = y^{-1}" }, { "math_id": 124, "text": "I(t) = \\frac{I_\\infty}{1+V e^{-\\chi t}}" }, { "math_id": 125, "text": "I_\\infty = (1 -\\gamma/\\beta)N" }, { "math_id": 126, "text": "\\chi = \\beta-\\gamma" }, { "math_id": 127, "text": "V = I_\\infty/I_0 - 1" }, { "math_id": 128, "text": "S(t) = N - I(t)" }, { "math_id": 129, "text": "\\gamma=0" }, { "math_id": 130, "text": "R=0" }, { "math_id": 131, "text": "S=N-I" }, { "math_id": 132, "text": "\n\\frac{dI}{dt} \\propto I\\cdot (N-I).\n" }, { "math_id": 133, "text": "\n\\begin{align}\n& \\frac{dS}{dt} = - \\frac{\\beta I S}{N}, \\\\[6pt]\n& \\frac{dI}{dt} = \\frac{\\beta I S}{N} - \\gamma I - \\mu I, \\\\[6pt]\n& \\frac{dR}{dt} = \\gamma I, \\\\[6pt]\n& \\frac{dD}{dt} = \\mu I,\n\\end{align}\n" }, { "math_id": 134, "text": "\\beta, \\gamma, \\mu" }, { "math_id": 135, "text": "\n\\begin{align}\n& \\frac{dS}{dt} = - \\frac{\\beta(t) I S}{N} - v(t) S, \\\\[6pt]\n& \\frac{dI}{dt} = \\frac{\\beta(t) I S}{N} - \\gamma(t) I, \\\\[6pt]\n& \\frac{dR}{dt} = \\gamma(t) I, \\\\[6pt]\n& \\frac{dV}{dt} = v(t) S,\n\\end{align}\n" }, { "math_id": 136, "text": "\\beta, \\gamma, v" }, { "math_id": 137, "text": "S(0)=(1-\\eta)N" }, { "math_id": 138, "text": "I(0)=\\eta N" }, { "math_id": 139, "text": "R(0)=V(0)=0" }, { "math_id": 140, "text": "k=\\gamma(t)/\\beta(t)" }, { "math_id": 141, "text": "b=v(t)/\\beta(t)" }, { "math_id": 142, "text": "k+b<1-2\\eta" }, { "math_id": 
143, "text": "b_c" }, { "math_id": 144, "text": "S_\\infty" }, { "math_id": 145, "text": "S(0)+I(0)+R(0)+V(0)=N" }, { "math_id": 146, "text": "\\frac{R_t}{R_0}" }, { "math_id": 147, "text": "R_e = \\frac{R_tS}{N} < 1" }, { "math_id": 148, "text": "S, I, R, V, D" }, { "math_id": 149, "text": "\n\\begin{align}\n& \\frac{dS}{dt} = - a(t) S I - v(t) S, \\\\[6pt]\n& \\frac{dI}{dt} = a(t) S I - \\mu(t) I - \\psi(t) I, \\\\[6pt]\n& \\frac{dR}{dt} = \\mu(t) I, \\\\[6pt]\n& \\frac{dV}{dt} = v(t) S,\\\\[6pt]\n& \\frac{dD}{dt} = \\psi(t) I\n\\end{align}\n" }, { "math_id": 150, "text": "a(t), v(t), \\mu(t), \\psi(t)" }, { "math_id": 151, "text": "S(0)=1-\\eta" }, { "math_id": 152, "text": "I(0)=\\eta" }, { "math_id": 153, "text": "R(0)=V(0)=D(0)=0" }, { "math_id": 154, "text": "k=\\mu(t)/a(t)" }, { "math_id": 155, "text": "b=v(t)/a(t)" }, { "math_id": 156, "text": "q=\\psi(t)/a(t)" }, { "math_id": 157, "text": "a(t)" }, { "math_id": 158, "text": "\n\\begin{align}\n& \\frac{dS}{d\\tau} = - S I - b(\\tau) S, \\\\[6pt]\n& \\frac{dI}{d\\tau} = S I - [k(\\tau)+q(\\tau)] I, \\\\[6pt]\n& \\frac{dR}{d\\tau} = k(\\tau) I, \\\\[6pt]\n& \\frac{dV}{d\\tau} = b(\\tau) S,\\\\[6pt]\n& \\frac{dD}{d\\tau} = q(\\tau) S\n\\end{align}\n" }, { "math_id": 159, "text": "\n\\tau(t) = \\int_0^t a(\\xi) d\\xi\n" }, { "math_id": 160, "text": "I(\\tau)" }, { "math_id": 161, "text": "j(\\tau)=S(\\tau)I(\\tau)" }, { "math_id": 162, "text": "\n\\begin{align}\n\\frac{dM}{dt} & = \\Lambda - \\delta M - \\mu M\\\\[8pt]\n\\frac{dS}{dt} & = \\delta M - \\frac{\\beta SI}{N} - \\mu S\\\\[8pt]\n\\frac{dI}{dt} & = \\frac{\\beta SI}{N} - \\gamma I - \\mu I\\\\[8pt]\n\\frac{dR}{dt} & = \\gamma I - \\mu R\n\\end{align}\n" }, { "math_id": 163, "text": "a^{-1}" }, { "math_id": 164, "text": "N\\mu" }, { "math_id": 165, "text": " N" }, { "math_id": 166, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\mu N - \\mu S - \\frac{\\beta I S}{N} \\\\[8pt]\n\\frac{dE}{dt} & = \\frac{\\beta I S}{N} - (\\mu + a ) E \\\\[8pt]\n\\frac{dI}{dt} & = a E - (\\gamma +\\mu ) I \\\\[8pt]\n\\frac{dR}{dt} & = \\gamma I - \\mu R.\n\\end{align}\n" }, { "math_id": 167, "text": "S+E+I+R=N," }, { "math_id": 168, "text": "R_0 = \\frac{a}{\\mu+a}\\frac{\\beta}{\\mu+\\gamma}." }, { "math_id": 169, "text": " \\left(S(0),E(0),I(0),R(0)\\right) \\in \\left\\{(S,E,I,R)\\in [0,N]^4 : S \\ge 0, E \\ge 0, I\\ge 0, R\\ge 0, S+E+I+R = N \\right\\} " }, { "math_id": 170, "text": " R_0 \\le 1 \\Rightarrow \\lim_{t \\to +\\infty} \\left(S(t),E(t),I(t),R(t)\\right) = DFE = (N,0,0,0)," }, { "math_id": 171, "text": " R_0 > 1 , I(0)> 0 \\Rightarrow \\lim_{t \\to +\\infty} \\left(S(t),E(t),I(t),R(t)\\right) = EE. 
" }, { "math_id": 172, "text": "\\beta(t)" }, { "math_id": 173, "text": "\n\\begin{align}\n\\frac{dE_1}{dt} & = \\beta(t) I_1 - (\\gamma +a ) E_1 \\\\[8pt]\n\\frac{dI_1}{dt} & = a E_1 - (\\gamma +\\mu ) I_1\n\\end{align}\n" }, { "math_id": 174, "text": "{\\color{blue}{\\mathcal{S} \\to \\mathcal{E} \\to \\mathcal{I} \\to \\mathcal{S}}}" }, { "math_id": 175, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\Lambda - \\frac{\\beta SI}{N} - \\mu S + \\gamma I \\\\[6pt]\n\\frac{dE}{dt} & = \\frac{\\beta SI}{N} - (\\epsilon + \\mu)E \\\\[6pt]\n\\frac{dI}{dt} & = \\varepsilon E - (\\gamma + \\mu)I\n\\end{align}\n" }, { "math_id": 176, "text": " \\color{blue}{\\mathcal{M} \\to \\mathcal{S} \\to \\mathcal{E} \\to \\mathcal{I} \\to \\mathcal{R}} " }, { "math_id": 177, "text": "\n\\begin{align}\n\\frac{dM}{dt} & = \\Lambda - \\delta M - \\mu M \\\\[6pt]\n\\frac{dS}{dt} & = \\delta M - \\frac{\\beta SI}{N} - \\mu S \\\\[6pt]\n\\frac{dE}{dt} & = \\frac{\\beta SI}{N} - (\\varepsilon + \\mu)E \\\\[6pt]\n\\frac{dI}{dt} & = \\varepsilon E - (\\gamma + \\mu)I \\\\[6pt]\n\\frac{dR}{dt} & = \\gamma I - \\mu R\n\\end{align}\n" }, { "math_id": 178, "text": "{\\color{blue}{\\mathcal{M} \\to \\mathcal{S} \\to \\mathcal{E} \\to \\mathcal{I} \\to \\mathcal{R} \\to \\mathcal{S}}}" }, { "math_id": 179, "text": " F = \\beta(t) \\frac{I}{N} , \\quad \\beta(t+T)=\\beta(t)" }, { "math_id": 180, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\mu N - \\mu S - \\beta(t) \\frac{I}{N} S \\\\[8pt]\n\\frac{dI}{dt} & = \\beta(t) \\frac{I}{N} S - (\\gamma +\\mu ) I\n\\end{align}\n" }, { "math_id": 181, "text": "R=N-S-I" }, { "math_id": 182, "text": "\\frac 1 T \\int_0^T \\frac{\\beta(t)}{\\mu+\\gamma} \\, dt < 1 \\Rightarrow \\lim_{t \\to +\\infty} (S(t),I(t)) = DFE = (N,0), " }, { "math_id": 183, "text": "\n\\begin{align}\n& \\partial_t S = D_S \\nabla^2 S - \\frac{\\beta I S}{N}, \\\\[6pt]\n& \\partial_t I = D_I \\nabla^2 I + \\frac{\\beta I S}{N}- \\gamma I, \\\\[6pt]\n& \\partial_t R = D_R \\nabla^2 R + \\gamma I,\n\\end{align}\n" }, { "math_id": 184, "text": "D_S" }, { "math_id": 185, "text": "D_I" }, { "math_id": 186, "text": "D_R" }, { "math_id": 187, "text": "{\\color{blue}{\\mathcal{S_{ign}}+2\\mathcal{I} \\to \\mathcal{S_{res}}+2\\mathcal{I}}} " }, { "math_id": 188, "text": "{\\color{blue}{\\mathcal{S_{res}} \\to \\mathcal{S_{exh}}}}" }, { "math_id": 189, "text": "{\\color{blue}{\\mathcal{S_{exh}} \\to \\mathcal{S_{ign}}}} " }, { "math_id": 190, "text": "{\\color{blue}{\\mathcal{S_{...}}+\\mathcal{I} \\to \\mathcal{2I}}} " }, { "math_id": 191, "text": "\\frac{d^2I}{dt} -\\sigma_o^2 I + \\frac{3}{2} \\frac{\\sigma_o^2} {I_{max}} I^2 = 0 " }, { "math_id": 192, "text": "\\sigma_o = \\gamma (R_o -1) " }, { "math_id": 193, "text": "R_o = \\frac{\\beta}{\\gamma} \\frac{S_o}{N} " }, { "math_id": 194, "text": "I_{max}=\\frac{S_o}{2}\\frac{(R_o-1)^2}{R_o^2} " }, { "math_id": 195, "text": "S_o" }, { "math_id": 196, "text": "\\sigma_o" }, { "math_id": 197, "text": "R_o" }, { "math_id": 198, "text": "I_{max}" }, { "math_id": 199, "text": "I=I_{max}sech^2 \\left( \\frac{\\sigma_o}{2}t \\right) " }, { "math_id": 200, "text": "V" }, { "math_id": 201, "text": "P \\in (0,1)" }, { "math_id": 202, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\nu N (1-P) - \\mu S - \\beta \\frac{I}{N} S \\\\[8pt]\n\\frac{dI}{dt} & = \\beta \\frac{I}{N} S - (\\mu+\\gamma) I \\\\[8pt]\n\\frac{dV}{dt} & = \\nu N P - \\mu V\n\\end{align}\n" }, { "math_id": 203, "text": " \\lim_{t \\to +\\infty} V(t)= N P," }, { "math_id": 204, "text": " R_0 (1-P) \\le 1 
\\Rightarrow \\lim_{t \\to +\\infty} \\left(S(t),I(t)\\right) = DFE = \\left(N \\left(1-P\\right),0\\right) " }, { "math_id": 205, "text": " R_0 (1-P) > 1 , \\quad I(0)> 0 \\Rightarrow \\lim_{t \\to +\\infty} \\left(S(t),I(t)\\right) = EE = \\left(\\frac{N}{R_0(1-P)},N \\left(R_0 (1-P)-1\\right)\\right). " }, { "math_id": 206, "text": " P < P^{*}= 1-\\frac{1}{R_0} " }, { "math_id": 207, "text": " P=P(I), \\quad P'(I)>0." }, { "math_id": 208, "text": " P(0) \\ge P^{*}," }, { "math_id": 209, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\mu N (1-P) - \\mu S - \\rho S - \\beta \\frac{I}{N} S \\\\[8pt]\n\\frac{dV}{dt} & = \\mu N P + \\rho S - \\mu V\n\\end{align}\n" }, { "math_id": 210, "text": " P \\ge 1- \\left(1+\\frac{\\rho}{\\mu}\\right)\\frac{1}{R_0} " }, { "math_id": 211, "text": "\n\\begin{align}\n\\frac{dS}{dt} & = \\mu N - \\mu S - \\beta \\frac{I}{N} S, \\quad S(n T^+) = (1-p) S(n T^-), & & n=0,1,2,\\ldots \\\\[8pt]\n\\frac{dV}{dt} & = - \\mu V, \\quad V(n T^+) = V(n T^-) + p S(n T^-), & & n=0,1,2,\\ldots\n\\end{align}\n" }, { "math_id": 212, "text": " S^*(t) = 1- \\frac{p}{1-(1-p)E^{-\\mu T}} E^{-\\mu MOD(t,T)} " }, { "math_id": 213, "text": " R_0 \\int_0^T S^*(t) \\, dt < 1 " }, { "math_id": 214, "text": "s(t,a),i(t,a),r(t,a)" }, { "math_id": 215, "text": "S(t)=\\int_0^{a_M} s(t,a)\\,da " }, { "math_id": 216, "text": "I(t)=\\int_0^{a_M} i(t,a)\\,da" }, { "math_id": 217, "text": "R(t)=\\int_0^{a_M} r(t,a)\\,da" }, { "math_id": 218, "text": "a_M \\le +\\infty" }, { "math_id": 219, "text": "\\partial_t s(t,a) + \\partial_a s(t,a) = -\\mu(a) s(a,t) - s(a,t)\\int_0^{a_M} k(a,a_1;t)i(a_1,t)\\,da_1 " }, { "math_id": 220, "text": "\\partial_t i(t,a) + \\partial_a i(t,a) = s(a,t)\\int_{0}^{a_M}{k(a,a_1;t)i(a_1,t)da_1} -\\mu(a) i(a,t) - \\gamma(a)i(a,t) " }, { "math_id": 221, "text": "\\partial_t r(t,a) + \\partial_a r(t,a) = -\\mu(a) r(a,t) + \\gamma(a)i(a,t) " }, { "math_id": 222, "text": "F(a,t,i(\\cdot,\\cdot))=\\int_0^{a_M} k(a,a_1;t)i(a_1,t) \\, da_1 " }, { "math_id": 223, "text": " k(a,a_1;t) " }, { "math_id": 224, "text": "i(t,0)=r(t,0)=0" }, { "math_id": 225, "text": "s(t,0)= \\int_0^{a_M} \\left(\\varphi_s(a) s(a,t)+\\varphi_i(a) i(a,t)+\\varphi_r(a) r(a,t)\\right) \\, da " }, { "math_id": 226, "text": "\\varphi_j(a), j=s,i,r" }, { "math_id": 227, "text": "n(t,a)=s(t,a)+i(t,a)+r(t,a)" }, { "math_id": 228, "text": "\\partial_t n(t,a) + \\partial_a n(t,a) = -\\mu(a) n(a,t) " }, { "math_id": 229, "text": "\\varphi(.)" }, { "math_id": 230, "text": "\\mu(a)" }, { "math_id": 231, "text": " 1 = \\int_0^{a_M} \\varphi(a) \\exp\\left(- \\int_0^a{\\mu(q)dq} \\right) \\, da " }, { "math_id": 232, "text": "n^*(a)=C \\exp\\left(- \\int_0^a \\mu(q) \\, dq \\right)," }, { "math_id": 233, "text": "DFS(a)= (n^*(a),0,0)." 
}, { "math_id": 234, "text": "R_0\n" }, { "math_id": 235, "text": "G\n" }, { "math_id": 236, "text": "R_0 = \\rho(G).\n" }, { "math_id": 237, "text": "f\n" }, { "math_id": 238, "text": "m\n" }, { "math_id": 239, "text": "G = \\begin{pmatrix} 0 & f \\\\ m & 0 \\end{pmatrix},\n" }, { "math_id": 240, "text": "g_{ij}\n" }, { "math_id": 241, "text": "i\n" }, { "math_id": 242, "text": "j\n" }, { "math_id": 243, "text": "a\n" }, { "math_id": 244, "text": "\\phi_a\n" }, { "math_id": 245, "text": "\\phi_{a+1}\n" }, { "math_id": 246, "text": "G\\phi_a\n" }, { "math_id": 247, "text": "R_0 = \\rho(G) = \\sqrt{mf}" }, { "math_id": 248, "text": "\\sqrt{mf}\n" }, { "math_id": 249, "text": "n\n" }, { "math_id": 250, "text": "\\dot{C_i}={\\operatorname{d}\\!C_i\\over\\operatorname{d}\\!t} = f(C_1, C_2, ..., C_n)" }, { "math_id": 251, "text": "C_i" }, { "math_id": 252, "text": "C_1 = S" }, { "math_id": 253, "text": "C_2 = I\n" }, { "math_id": 254, "text": "C_3 = R\n" }, { "math_id": 255, "text": "I=0\n" }, { "math_id": 256, "text": " R_0 \\le 1 \\Rightarrow \\lim_{t \\to \\infty} (C_1(t),C_2(t),\\cdots, C_n(t)) = \\textrm{DFE} " }, { "math_id": 257, "text": " R_0 > 1 , I(0)> 0 \\Rightarrow \\lim_{t \\to \\infty} (C_1(t),C_2(t),\\cdots, C_n(t)) = \\textrm{EE}. " }, { "math_id": 258, "text": "\\lambda\n\n" }, { "math_id": 259, "text": "\\kappa" }, { "math_id": 260, "text": "\\begin{cases} \\dot{S} = \\lambda - \\mu S - \\beta SI, \\\\\\\\ \\dot{E} = \\beta SI - (\\mu+\\kappa)E, \\\\\\\\ \\dot{I} = \\kappa E - (\\mu+\\gamma)I, \\\\\\\\ \\dot{R} = \\gamma I - \\mu R. \\end{cases}\n" }, { "math_id": 261, "text": "\\mathrm{x} = (S, E, I, R)\n\n" }, { "math_id": 262, "text": "\\mathrm{x}_i\n\n" }, { "math_id": 263, "text": "i\n\n" }, { "math_id": 264, "text": "F_i(\\mathrm{x})\n\n" }, { "math_id": 265, "text": "V_i^+\n\n" }, { "math_id": 266, "text": "V_i^-\n\n" }, { "math_id": 267, "text": "F_i(\\mathrm{x}) - V_i(\\mathrm{x})\n\n" }, { "math_id": 268, "text": "V_i(\\mathrm{x}) = V_i^-(\\mathrm{x}) - V_i^+\n(\\mathrm{x})\n\n" }, { "math_id": 269, "text": "F\n\n" }, { "math_id": 270, "text": "V\n\n" }, { "math_id": 271, "text": "F_{ij} = {\\partial\\! \\ F_i(\\mathrm{x}^*)\\over\\partial \\! \\ \\mathrm{x}_j}\n\n" }, { "math_id": 272, "text": "V_{ij} = {\\partial\\! \\ V_i(\\mathrm{x}^*)\\over\\partial \\! \\ \\mathrm{x}_j}\n\n" }, { "math_id": 273, "text": "\\mathrm{x}^* = (S^*, E^*, I^*, R^*) = (\\lambda/\\mu, 0, 0, 0)\n\n" }, { "math_id": 274, "text": "G = FV^{-1}\n\n" }, { "math_id": 275, "text": "F\n" }, { "math_id": 276, "text": "V\n" }, { "math_id": 277, "text": "V^{-1}\n" }, { "math_id": 278, "text": "G_{ij}\n\n" }, { "math_id": 279, "text": "\\mathrm{x}_j\n\n" }, { "math_id": 280, "text": "j.\n\n" }, { "math_id": 281, "text": "F = \\begin{pmatrix} 0 & \\beta S^* \\\\ 0 & 0 \\end{pmatrix}\n\n" }, { "math_id": 282, "text": "V = \\begin{pmatrix} \\mu + \\kappa & 0 \\\\ -\\kappa & \\gamma + \\mu \\end{pmatrix}\n\n" }, { "math_id": 283, "text": "R_0 = \\rho( FV^{-1}) = \\frac{\\kappa\\beta S^*}{(\\mu+\\kappa)(\\mu+\\gamma)}.\n" }, { "math_id": 284, "text": "N(t)" }, { "math_id": 285, "text": "K := \\frac{d\\ln(N)}{dt}." }, { "math_id": 286, "text": "K" }, { "math_id": 287, "text": "T_d" }, { "math_id": 288, "text": "K=\\frac{\\ln(2)}{T_d}." }, { "math_id": 289, "text": "n_E(t) = n_E(0)\\, R_0^{t/\\tau} = n_E(0)\\,e^{Kt}" }, { "math_id": 290, "text": "\\ln(n_E(t)) = \\ln(n_E(0))+\\ln(R_0)t/\\tau." }, { "math_id": 291, "text": "\\frac{dn_E(t)}{dt} =n_E(t)\\frac {\\ln(R_0)}{\\tau}." 
}, { "math_id": 292, "text": "\\frac{d\\ln(n_E(t))}{dt} =\\frac {\\ln(R_0)}{\\tau}." }, { "math_id": 293, "text": "R_0 = e^{K \\tau}" }, { "math_id": 294, "text": "K = \\frac{\\ln R_0}{\\tau}" }, { "math_id": 295, "text": "\\tau=5~\\mathrm{d}" }, { "math_id": 296, "text": "K=0.183~\\mathrm{d}^{-1}" }, { "math_id": 297, "text": "R_0=2.5" }, { "math_id": 298, "text": "\\ln(n_E(t)) = \\ln(n_E(0))+\\frac{1}{\\tau}\\int\\limits_{0}^{t}\\ln(R_0(t))dt" }, { "math_id": 299, "text": "\\ln(R_0)" }, { "math_id": 300, "text": "\\tau_E" }, { "math_id": 301, "text": "\\tau_I" }, { "math_id": 302, "text": "R_0 = 1 + K(\\tau_E+\\tau_I) + K^2\\tau_E\\tau_I." }, { "math_id": 303, "text": "n_E" }, { "math_id": 304, "text": "n_I" }, { "math_id": 305, "text": "\\frac{d}{dt} \\begin{pmatrix} n_E \\\\ n_I \\end{pmatrix} = \\begin{pmatrix} -1/\\tau_E & R_0/\\tau_I \\\\ 1/\\tau_E & -1/\\tau_I \\end{pmatrix} \\begin{pmatrix} n_E \\\\ n_I \\end{pmatrix}." }, { "math_id": 306, "text": "\\tau_I = 0" }, { "math_id": 307, "text": "R_0=1+K\\tau_E" }, { "math_id": 308, "text": "R_0=\\exp(K\\tau_E)" }, { "math_id": 309, "text": "R_0=1.9" }, { "math_id": 310, "text": "2.5" }, { "math_id": 311, "text": "R_0 = R_0^{\\text{FAST}} + R_0^{\\text{SLOW}}" } ]
https://en.wikipedia.org/wiki?curid=958031
9581197
Quark–lepton complementarity
The quark–lepton complementarity (QLC) is a possible fundamental symmetry between quarks and leptons. First proposed in 1990 by Foot and Lew, it assumes that leptons, like quarks, come in three "colors". Such a theory may reproduce the Standard Model at low energies, and hence quark–lepton symmetry may be realized in nature. Possible evidence for QLC. Recent neutrino experiments confirm that the Pontecorvo–Maki–Nakagawa–Sakata matrix UPMNS contains large mixing angles. For example, atmospheric measurements of particle decay yield "θ"23 ≈ 45°, while solar experiments yield "θ"12 ≈ 34°. Compare these results with "θ"13 ≈ 9°, which is clearly smaller, and with the quark mixing angles in the Cabibbo–Kobayashi–Maskawa matrix UCKM. The disparity that nature indicates between quark and lepton mixing angles has been viewed in terms of a "quark–lepton complementarity", which can be expressed in the relations formula_0 formula_1 Possible consequences of QLC have been investigated in the literature; in particular, a simple correspondence between the PMNS and CKM matrices has been proposed and analyzed in terms of a correlation matrix. The correlation matrix VM is roughly defined as the product of the CKM and PMNS matrices: formula_2 Unitarity implies: formula_3 Open questions. One may ask where the large lepton mixings come from, and whether this information is implicit in the form of the VM matrix. This question has been widely investigated in the literature, but its answer is still open. Furthermore, in some Grand Unification Theories (GUTs) the direct QLC correlation between the CKM and the PMNS mixing matrices can be obtained. In this class of models, the VM matrix is determined by the heavy Majorana neutrino mass matrix. Despite the naïve relations between the PMNS and CKM angles, a detailed analysis shows that the correlation matrix is phenomenologically compatible with a tribimaximal pattern, and only marginally with a bimaximal pattern. It is possible to include bimaximal forms of the correlation matrix VM in models with renormalization effects, which are relevant, however, only in particular cases with formula_4 and with quasi-degenerate neutrino masses.
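A minimal numerical sketch of the complementarity relations above, written in Python. The mixing angles used are approximate, illustrative values (assumed here, not taken from this article), and CP phases as well as 1–3 mixing are neglected, so the product below is only a toy 1–2-sector analogue of the correlation matrix VM.

```python
import numpy as np

def rotation_12(theta):
    """2x2 rotation in the 1-2 plane (CP phases and 1-3/2-3 mixing neglected)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

# Approximate, illustrative mixing angles in degrees (assumed values):
theta12_ckm = np.radians(13.0)    # Cabibbo angle
theta12_pmns = np.radians(34.0)   # solar angle
theta23_ckm = np.radians(2.4)
theta23_pmns = np.radians(45.0)   # atmospheric angle

# The QLC relations say these sums should be close to 45 degrees.
print(np.degrees(theta12_ckm + theta12_pmns))   # ~47
print(np.degrees(theta23_ckm + theta23_pmns))   # ~47.4

# Toy 1-2-sector correlation matrix V_M = U_CKM . U_PMNS: the product of two
# plane rotations is a rotation by the sum of the angles, so exact
# complementarity would make V_M a maximal (45-degree) rotation.
V_M = rotation_12(theta12_ckm) @ rotation_12(theta12_pmns)
print(V_M)
```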
[ { "math_id": 0, "text": " \\theta_{12}^\\text{PMNS}+\\theta_{12}^\\text{CKM} \\approx 45^\\circ \\,," }, { "math_id": 1, "text": " \\theta_{23}^\\text{PMNS}+\\theta_{23}^\\text{CKM} \\approx 45^\\circ \\,." }, { "math_id": 2, "text": " V_\\text{M} = U_\\text{CKM} \\cdot U_\\text{PMNS} \\, ," }, { "math_id": 3, "text": " U_\\text{PMNS} = U^{\\dagger}_\\text{CKM} V_\\text{M} \\, ." }, { "math_id": 4, "text": "\\ \\tan \\beta > 40\\ " } ]
https://en.wikipedia.org/wiki?curid=9581197
9582652
Lift (data mining)
In data mining and association rule learning, lift is a measure of the performance of a targeting model (association rule) at predicting or classifying cases as having an enhanced response (with respect to the population as a whole), measured against a random choice targeting model. A targeting model is doing a good job if the response within the target (formula_0) is much better than the baseline (formula_1) average for the population as a whole. Lift is simply the ratio of these values: target response divided by average response. Mathematically, formula_2 For example, suppose a population has an average response rate of 5%, but a certain model (or rule) has identified a segment with a response rate of 20%. Then that segment would have a lift of 4.0 (20%/5%). Applications. Typically, the modeller seeks to divide the population into quantiles, and rank the quantiles by lift. Organizations can then consider each quantile, and by weighing the predicted response rate (and associated financial benefit) against the cost, they can decide whether to market to that quantile or not. The lift curve can also be considered a variation on the receiver operating characteristic (ROC) curve, and is also known in econometrics as the Lorenz or power curve. Example. Assume the data set being mined consists of seven records of antecedent–consequent pairs, in which antecedent A appears with consequent 0 three times and with consequent 1 once, while antecedent B appears with consequent 1 twice and with consequent 0 once; here the antecedent is the input variable that we can control, and the consequent is the variable we are trying to predict. Real mining problems would typically have more complex antecedents, but usually focus on single-value consequents. Most mining algorithms would determine the following rules (targeting models): Rule 1, "A implies 0", and Rule 2, "B implies 1", because these are simply the most common patterns found in the data. A simple review of the data above should make these rules obvious. The "support" for Rule 1 is 3/7 because that is the proportion of items in the dataset in which the antecedent is A and the consequent 0. The support for Rule 2 is 2/7 because two of the seven records meet the antecedent of B and the consequent of 1. The supports can be written as: formula_3 formula_4 The "confidence" for Rule 1 is 3/4 because three of the four records that meet the antecedent of A meet the consequent of 0. The confidence for Rule 2 is 2/3 because two of the three records that meet the antecedent of B meet the consequent of 1. The confidences can be written as: formula_5 formula_6 Lift can be found by dividing the confidence by the unconditional probability of the consequent, or by dividing the support by the probability of the antecedent times the probability of the consequent, so: formula_7 formula_8 If some rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events. If the lift is > 1, as it is here for Rules 1 and 2, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets. Observe that even though Rule 1 has higher confidence, it has lower lift. Intuitively, it would seem that Rule 1 is more valuable because of its higher confidence—it seems more accurate (better supported). But accuracy of the rule independent of the data set can be misleading. The value of lift is that it considers both the confidence of the rule and the overall data set.
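A minimal Python sketch of the computation above, using the seven-record toy data set just described (the exact ordering of the records is immaterial); it reproduces the support, confidence, and lift values quoted in the text.

```python
from collections import Counter

# The seven (antecedent, consequent) records described above.
records = [("A", 0), ("A", 0), ("A", 0), ("A", 1),
           ("B", 1), ("B", 1), ("B", 0)]

n = len(records)
pair_counts = Counter(records)
antecedent_counts = Counter(a for a, _ in records)
consequent_counts = Counter(c for _, c in records)

def support(a, c):
    return pair_counts[(a, c)] / n

def confidence(a, c):
    return pair_counts[(a, c)] / antecedent_counts[a]

def lift(a, c):
    # Confidence divided by the unconditional probability of the consequent;
    # equivalently support(a, c) / (P(a) * P(c)).
    return confidence(a, c) / (consequent_counts[c] / n)

for a, c in [("A", 0), ("B", 1)]:
    print(a, "=>", c, round(support(a, c), 3),
          round(confidence(a, c), 3), round(lift(a, c), 3))
# A => 0: support 0.429, confidence 0.75, lift 1.312
# B => 1: support 0.286, confidence 0.667, lift 1.556
```

Note how the output matches the observation in the text: Rule 1 has the higher confidence, but Rule 2 has the higher lift.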
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": " \\operatorname{lift} = \\frac{P(T \\mid B)}{P(T)} = \\frac{P(T\\wedge B)}{P(T)P(B)}" }, { "math_id": 3, "text": " \\operatorname{supp}(A \\Rightarrow 0) = P(A \\land 0) = P(A)P(0\\mid A) = P(0)P(A\\mid 0)" }, { "math_id": 4, "text": " \\operatorname{supp}(B \\Rightarrow 1) = P(B \\land 1) = P(B)P(1\\mid B) = P(1)P(B\\mid 1)" }, { "math_id": 5, "text": " \\operatorname{conf}(A \\Rightarrow 0) = P(0\\mid A)" }, { "math_id": 6, "text": " \\operatorname{conf}(B \\Rightarrow 1) = P(1\\mid B)" }, { "math_id": 7, "text": " \\operatorname{lift}(A \\Rightarrow 0) = \\frac{P(0\\mid A)}{P(0)} = \\frac{P(A \\land 0)}{P(A)P(0)}" }, { "math_id": 8, "text": " \\operatorname{lift}(B \\Rightarrow 1) = \\frac{P(1\\mid B)}{P(1)} = \\frac{P(B \\land 1)}{P(B)P(1)}" } ]
https://en.wikipedia.org/wiki?curid=9582652
958449
Hyperbolic orthogonality
Relation of space and time in relativity theory In geometry, the relation of hyperbolic orthogonality between two lines separated by the asymptotes of a hyperbola is a concept used in special relativity to define simultaneous events. Two events will be simultaneous when they are on a line hyperbolically orthogonal to a particular timeline. This dependence on a certain timeline is determined by velocity, and is the basis for the relativity of simultaneity. Geometry. Two lines are hyperbolic orthogonal when they are reflections of each other over the asymptote of a given hyperbola. Two particular hyperbolas are frequently used in the plane: (A) xy = 1, with y = 0 as asymptote, and (B) x² − y² = 1, with y = x as asymptote. The relation of hyperbolic orthogonality actually applies to classes of parallel lines in the plane, where any particular line can represent the class. Thus, for a given hyperbola and asymptote "A", a pair of lines ("a", "b") are hyperbolic orthogonal if there is a pair ("c", "d") such that formula_0, and "c" is the reflection of "d" across "A". Similar to the perpendicularity of a circle radius to the tangent, a radius to a hyperbola is hyperbolic orthogonal to a tangent to the hyperbola. A bilinear form is used to describe orthogonality in analytic geometry, with two elements orthogonal when their bilinear form vanishes. In the plane of complex numbers formula_1, the bilinear form is formula_2, while in the plane of hyperbolic numbers formula_3 the bilinear form is formula_4 The vectors "z"1 and "z"2 in the complex number plane, and "w"1 and "w"2 in the hyperbolic number plane, are said to be respectively "Euclidean orthogonal" or "hyperbolic orthogonal" if their respective inner products [bilinear forms] are zero. The bilinear form may be computed as the real part of the complex product of one number with the conjugate of the other. Then formula_5 entails perpendicularity in the complex plane, while formula_6 implies the "w"'s are hyperbolic orthogonal. The notion of hyperbolic orthogonality arose in analytic geometry in consideration of conjugate diameters of ellipses and hyperbolas. If "g" and "g"′ represent the slopes of the conjugate diameters, then formula_7 in the case of an ellipse and formula_8 in the case of a hyperbola. When "a" = "b" the ellipse is a circle and the conjugate diameters are perpendicular, while the hyperbola is rectangular and the conjugate diameters are hyperbolic-orthogonal. In the terminology of projective geometry, the operation of taking the hyperbolic orthogonal line is an involution. Suppose the slope of a vertical line is denoted ∞ so that all lines have a slope in the projectively extended real line. Then whichever hyperbola (A) or (B) is used, the operation is an example of a hyperbolic involution where the asymptote is invariant. Hyperbolically orthogonal lines lie in different sectors of the plane, determined by the asymptotes of the hyperbola; thus the relation of hyperbolic orthogonality is a heterogeneous relation on sets of lines in the plane. Simultaneity. Since Hermann Minkowski's foundation for spacetime study in 1908, the concept of points in a spacetime plane being hyperbolic-orthogonal to a timeline (tangent to a world line) has been used to define simultaneity of events relative to the timeline, or relativity of simultaneity. In Minkowski's development the hyperbola of type (B) above is in use. Two vectors (x1, y1, z1, t1) and (x2, y2, z2, t2) are "normal" (meaning hyperbolic orthogonal) when formula_9 When c = 1 and the ys and zs are zero, x1 ≠ 0, t2 ≠ 0, then formula_10. 
Given a hyperbola with asymptote "A", its reflection in "A" produces the conjugate hyperbola. Any diameter of the original hyperbola is reflected to a conjugate diameter. The directions indicated by conjugate diameters are taken for space and time axes in relativity. As E. T. Whittaker wrote in 1910, "[the] hyperbola is unaltered when any pair of conjugate diameters are taken as new axes, and a new unit of length is taken proportional to the length of either of these diameters." On this principle of relativity, he then wrote the Lorentz transformation in the modern form using rapidity. Edwin Bidwell Wilson and Gilbert N. Lewis developed the concept within synthetic geometry in 1912. They note "in our plane no pair of perpendicular [hyperbolic-orthogonal] lines is better suited to serve as coordinate axes than any other pair".
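A small Python sketch of the split-complex bilinear form xu − yv discussed above (the function name and the sample numbers are illustrative, not from the article). It checks that a radius of the hyperbola x² − y² = 1 is hyperbolic orthogonal to the tangent at its endpoint, and that reflection across the asymptote y = x pairs a line of slope m with one of slope 1/m.

```python
import math

def hyperbolic_form(w1, w2):
    """Bilinear form x*u - y*v for w1 = (u, v) and w2 = (x, y); the two
    directions are hyperbolic orthogonal exactly when this vanishes."""
    (u, v), (x, y) = w1, w2
    return x * u - y * v

# A radius of x^2 - y^2 = 1 ends at (cosh a, sinh a); the tangent direction
# there is (sinh a, cosh a), so the form vanishes: radius and tangent are
# hyperbolic orthogonal, mirroring the circle's radius/tangent perpendicularity.
a = 0.7
radius = (math.cosh(a), math.sinh(a))
tangent = (math.sinh(a), math.cosh(a))
print(hyperbolic_form(radius, tangent))        # 0.0

# Reflection across the asymptote y = x swaps the coordinates, so a line of
# slope m is paired with a line of slope 1/m; direction vectors (1, m) and
# (m, 1) are hyperbolic orthogonal for every m.
m = 0.25
print(hyperbolic_form((1.0, m), (m, 1.0)))     # 0.0
```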
[ { "math_id": 0, "text": "a \\rVert c ,\\ b \\rVert d " }, { "math_id": 1, "text": "z_1 =u + iv, \\quad z_2 = x + iy" }, { "math_id": 2, "text": "xu + yv" }, { "math_id": 3, "text": "w_1 = u + jv,\\quad w_2 = x +jy," }, { "math_id": 4, "text": "xu - yv ." }, { "math_id": 5, "text": "z_1 z_2^* + z_1^* z_2 = 0" }, { "math_id": 6, "text": "w_1 w_2^* + w_1^* w_2 = 0" }, { "math_id": 7, "text": "g g' = - \\frac{b^2}{a^2}" }, { "math_id": 8, "text": "g g' = \\frac{b^2}{a^2}" }, { "math_id": 9, "text": "c^{2} \\ t_1 \\ t_2 - x_1 \\ x_2 - y_1 \\ y_2 - z_1 \\ z_2 = 0." }, { "math_id": 10, "text": "\\frac{c \\ t_1}{x_1} = \\frac{x_2}{c \\ t_2}" } ]
https://en.wikipedia.org/wiki?curid=958449
9584635
Bornology
Mathematical generalization of boundedness In mathematics, especially functional analysis, a bornology on a set "X" is a collection of subsets of "X" satisfying axioms that generalize the notion of boundedness. One of the key motivations behind bornologies and bornological analysis is the fact that bornological spaces provide a convenient setting for homological algebra in functional analysis. This is because the category of bornological spaces is additive, complete, cocomplete, and has a tensor product adjoint to an internal hom, all necessary components for homological algebra. History. Bornology originates from functional analysis. There are two natural ways of studying the problems of functional analysis: one way is to study notions related to topologies (vector topologies, continuous operators, open/compact subsets, etc.) and the other is to study notions related to boundedness (vector bornologies, bounded operators, bounded subsets, etc.). For normed spaces, from which functional analysis arose, topological and bornological notions are distinct but complementary and closely related. For example, the unit ball centered at the origin is both a neighborhood of the origin and a bounded subset. Furthermore, a subset of a normed space is "a neighborhood of the origin" (respectively, is "a bounded set") exactly when it "contains" (respectively, it "is contained in") a non-zero scalar multiple of this ball; so this is one instance where the topological and bornological notions are distinct but complementary (in the sense that their definitions differ only by which of formula_0 and formula_1 is used). Other times, the distinction between topological and bornological notions may even be unnecessary. For example, for linear maps between normed spaces, being continuous (a topological notion) is equivalent to being bounded (a bornological notion). Although the distinction between topology and bornology is often blurred or unnecessary for normed spaces, it becomes more important when studying generalizations of normed spaces. Nevertheless, bornology and topology can still be thought of as two necessary, distinct, and complementary aspects of one and the same reality. The general theory of topological vector spaces arose first from the theory of normed spaces and then bornology emerged from this general theory of topological vector spaces, although bornology has since become recognized as a fundamental notion in functional analysis. Born from the work of George Mackey (after whom Mackey spaces are named), the importance of bounded subsets first became apparent in duality theory, especially because of the Mackey–Arens theorem and the Mackey topology. Starting around the 1950s, it became apparent that topological vector spaces were inadequate for the study of certain major problems. For example, the multiplication operation of some important topological algebras was not continuous, although it was often bounded. Other major problems for which TVSs were found to be inadequate included developing a more general theory of differential calculus, generalizing distributions from (the usual) scalar-valued distributions to vector or operator-valued distributions, and extending the holomorphic functional calculus of Gelfand (which is primarily concerned with Banach algebras or locally convex algebras) to a broader class of operators, including those whose spectra are not compact. 
Bornology has been found to be a useful tool for investigating these problems and others, including problems in algebraic geometry and general topology. Definitions. A bornology on a set is a cover of the set that is closed under finite unions and taking subsets. Elements of a bornology are called bounded sets. Explicitly, a bornology or boundedness on a set formula_2 is a family formula_3 of subsets of formula_2 such that (1) every subset of a member of the family is again a member, (2) the family covers formula_2, and (3) the union of finitely many members is again a member, in which case the pair formula_7 is called a bounded structure or a bornological set. Thus a bornology can equivalently be defined as a downward closed cover that is closed under binary unions. A non-empty family of sets that is closed under finite unions and taking subsets (properties (1) and (3)) is called an ideal (because it is an ideal in the Boolean algebra/field of sets consisting of all subsets). A bornology on a set formula_2 can thus be equivalently defined as an ideal that covers formula_8 Elements of formula_4 are called formula_4-bounded sets or simply bounded sets, if formula_4 is understood. Properties (1) and (2) imply that every singleton subset of formula_2 is an element of every bornology on formula_9 property (3), in turn, guarantees that the same is true of every finite subset of formula_8 In other words, points and finite subsets are always bounded in every bornology. In particular, the empty set is always bounded. If formula_7 is a bounded structure and formula_10 then the set of complements formula_11 is a (proper) filter called the filter at infinity; it is always a free filter, which by definition means that it has empty intersection/kernel, because formula_12 for every formula_13 Bases and subbases. If formula_14 and formula_4 are bornologies on formula_2 then formula_4 is said to be finer or stronger than formula_14 and also formula_14 is said to be coarser or weaker than formula_4 if formula_15 A family of sets formula_14 is called a base or fundamental system of a bornology formula_4 if formula_16 and for every formula_6 there exists an formula_17 such that formula_18 A family of sets formula_19 is called a subbase of a bornology formula_4 if formula_20 and the collection of all finite unions of sets in formula_19 forms a base for formula_5 Every base for a bornology is also a subbase for it. Generated bornology. The intersection of any collection of (one or more) bornologies on formula_2 is once again a bornology on formula_8 Such an intersection of bornologies will cover formula_2 because every bornology on formula_2 contains every finite subset of formula_2 (that is, if formula_4 is a bornology on formula_2 and formula_21 is finite then formula_22). It is readily verified that such an intersection will also be closed under (subset) inclusion and finite unions and thus will be a bornology on formula_8 Given a collection formula_19 of subsets of formula_23 the smallest bornology on formula_2 containing formula_19 is called the bornology generated by formula_19. It is equal to the intersection of all bornologies on formula_2 that contain formula_19 as a subset. This intersection is well-defined because the power set formula_24 of formula_2 is always a bornology on formula_23 so every family formula_19 of subsets of formula_2 is always contained in at least one bornology on formula_8 Bounded maps. Suppose that formula_25 and formula_26 are bounded structures. A map formula_27 is called a locally bounded map, or just a bounded map, if the image under formula_28 of every formula_14-bounded set is a formula_4-bounded set; that is, if for every formula_29 formula_30 Since the composition of two locally bounded maps is again locally bounded, it is clear that the class of all bounded structures forms a category whose morphisms are bounded maps. 
An isomorphism in this category is called a bornological isomorphism; it is a bijective locally bounded map whose inverse is also locally bounded. Examples of bounded maps. If formula_27 is a continuous linear operator between two topological vector spaces (not necessarily Hausdorff), then it is a bounded linear operator when formula_2 and formula_31 have their von Neumann bornologies, where a set is bounded precisely when it is absorbed by all neighbourhoods of origin (these are the subsets of a TVS that are normally called bounded when no other bornology is explicitly mentioned). The converse is in general false. A sequentially continuous map formula_27 between two TVSs is necessarily locally bounded. General constructions. Discrete bornology. For any set formula_23 the power set formula_24 of formula_2 is a bornology on formula_2 called the discrete bornology. Since every bornology on formula_2 is a subset of formula_32 the discrete bornology is the finest bornology on formula_8 If formula_7 is a bounded structure then (because bornologies are downward closed) formula_4 is the discrete bornology if and only if formula_33 Indiscrete bornology. For any set formula_23 the set of all finite subsets of formula_2 is a bornology on formula_2 called the indiscrete bornology. It is the coarsest bornology on formula_23 meaning that it is a subset of every bornology on formula_8 Sets of bounded cardinality. The set of all countable subsets of formula_2 is a bornology on formula_8 More generally, for any infinite cardinal formula_34 the set of all subsets of formula_2 having cardinality at most formula_35 is a bornology on formula_8 Inverse image bornology. If formula_36 is a map and formula_4 is a bornology on formula_23 then formula_37 denotes the bornology generated by formula_38 which is called the inverse image bornology or the initial bornology induced by formula_28 on formula_39 Let formula_40 be a set, formula_41 be an formula_42-indexed family of bounded structures, and let formula_43 be an formula_42-indexed family of maps where formula_44 for every formula_45 The inverse image bornology formula_14 on formula_40 determined by these maps is the strongest bornology on formula_40 making each formula_46 locally bounded. This bornology is equal to formula_47 Direct image bornology. Let formula_40 be a set, formula_41 be an formula_42-indexed family of bounded structures, and let formula_43 be an formula_42-indexed family of maps where formula_48 for every formula_45 The direct image bornology formula_14 on formula_40 determined by these maps is the weakest bornology on formula_40 making each formula_49 locally bounded. If for each formula_50 formula_51 denotes the bornology generated by formula_52 then this bornology is equal to the collection of all subsets formula_53 of formula_40 of the form formula_54 where each formula_55 and all but finitely many formula_56 are empty. Subspace bornology. Suppose that formula_7 is a bounded structure and formula_40 is a subset of formula_8 The subspace bornology formula_14 on formula_40 is the finest bornology on formula_40 making the inclusion map formula_57 of formula_40 into formula_2 (defined by formula_58) locally bounded. Product bornology. Let formula_59 be an formula_42-indexed family of bounded structures, let formula_60 and for each formula_50 let formula_61 denote the canonical projection. The product bornology on formula_2 is the inverse image bornology determined by the canonical projections formula_62 That is, it is the strongest bornology on formula_2 making each of the canonical projections locally bounded. 
A base for the product bornology is given by formula_63 Topological constructions. Compact bornology. A subset of a topological space formula_2 is called relatively compact if its closure is a compact subspace of formula_8 For any topological space formula_2 in which singleton subsets are relatively compact (such as a T1 space), the set of all relatively compact subsets of formula_2 forms a bornology on formula_2 called the compact bornology on formula_8 Every continuous map between T1 spaces is bounded with respect to their compact bornologies. The set of relatively compact subsets of formula_64 forms a bornology on formula_65 A base for this bornology is given by all closed intervals of the form formula_66 for formula_67 Metric bornology. Given a metric space formula_68 the metric bornology consists of all subsets formula_69 such that the supremum formula_70 is finite. Similarly, given a measure space formula_71 the family of all measurable subsets formula_72 of finite measure (meaning formula_73) forms a bornology on formula_8 Closure and interior bornologies. Suppose that formula_2 is a topological space and formula_4 is a bornology on formula_8 The bornology generated by the set of all topological interiors of sets in formula_4 (that is, generated by formula_74) is called the interior of formula_4 and is denoted by formula_75 The bornology formula_4 is called open if formula_76 The bornology generated by the set of all topological closures of sets in formula_4 (that is, generated by formula_77) is called the closure of formula_4 and is denoted by formula_78 We necessarily have formula_79 The bornology formula_4 is called closed if the closure of every set in formula_4 again belongs to formula_4. The bornology formula_4 is called if formula_4 is both open and closed. The topological space formula_2 is called a locally bounded space, or just locally bounded, if every formula_80 has a neighborhood that belongs to formula_5 Every compact subset of a locally bounded topological space is bounded. Bornology of a topological vector space. If formula_2 is a topological vector space (TVS) then the set of all bounded subsets of formula_2 forms a bornology (indeed, even a vector bornology) on formula_2 called the von Neumann bornology of formula_2, the usual bornology, or simply the bornology of formula_2. In any locally convex TVS formula_23 the set of all closed bounded disks forms a base for the usual bornology of formula_8 A linear map between two bornological spaces is continuous if and only if it is bounded (with respect to the usual bornologies). Topological rings. Suppose that formula_2 is a commutative topological ring. A subset formula_40 of formula_2 is called a bounded set if for each neighborhood formula_81 of the origin in formula_23 there exists a neighborhood formula_82 of the origin in formula_2 such that formula_83
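A minimal Python sketch of the base idea above, for the bornology of bounded subsets of the real line (the function name is illustrative): the closed intervals [−n, n] form a base, so every bounded interval is contained in some [−n, n], finite unions of bounded sets stay bounded, and unbounded sets simply do not belong to the bornology.

```python
import math

def smallest_base_element(lo: float, hi: float):
    """For the usual (metric) bornology on the real line, the intervals
    [-n, n], n = 1, 2, 3, ..., form a base: every bounded set is contained
    in one of them.  Return the smallest such n for the interval [lo, hi],
    or None if the interval is unbounded (hence not in the bornology)."""
    if not (math.isfinite(lo) and math.isfinite(hi)):
        return None
    return max(1, math.ceil(max(abs(lo), abs(hi))))

print(smallest_base_element(-2.5, 7.1))          # 8: [-8, 8] contains [-2.5, 7.1]
print(smallest_base_element(0.0, math.inf))      # None: an unbounded set is not bounded

# Axiom check in miniature: a finite union of bounded intervals is bounded;
# the union of [-2.5, 7.1] and [3.0, 12.0] sits inside [-12, 12].
print(smallest_base_element(min(-2.5, 3.0), max(7.1, 12.0)))   # 12
```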
[ { "math_id": 0, "text": "\\,\\subseteq\\," }, { "math_id": 1, "text": "\\,\\supseteq\\," }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\mathcal{B} \\neq \\varnothing" }, { "math_id": 4, "text": "\\mathcal{B}" }, { "math_id": 5, "text": "\\mathcal{B}." }, { "math_id": 6, "text": "B \\in \\mathcal{B}," }, { "math_id": 7, "text": "(X, \\mathcal{B})" }, { "math_id": 8, "text": "X." }, { "math_id": 9, "text": "X;" }, { "math_id": 10, "text": "X \\notin \\mathcal{B}," }, { "math_id": 11, "text": "\\{X \\setminus B : B \\in \\mathcal{B}\\}" }, { "math_id": 12, "text": "\\{x\\} \\in \\mathcal{B}" }, { "math_id": 13, "text": "x \\in X." }, { "math_id": 14, "text": "\\mathcal{A}" }, { "math_id": 15, "text": "\\mathcal{A} \\subseteq \\mathcal{B}." }, { "math_id": 16, "text": "\\mathcal{A} \\subseteq \\mathcal{B}" }, { "math_id": 17, "text": "A \\in \\mathcal{A}" }, { "math_id": 18, "text": "B \\subseteq A." }, { "math_id": 19, "text": "\\mathcal{S}" }, { "math_id": 20, "text": "\\mathcal{S} \\subseteq \\mathcal{B}" }, { "math_id": 21, "text": "F \\subseteq X" }, { "math_id": 22, "text": "F \\in \\mathcal{B}" }, { "math_id": 23, "text": "X," }, { "math_id": 24, "text": "\\wp(X)" }, { "math_id": 25, "text": "(X, \\mathcal{A})" }, { "math_id": 26, "text": "(Y, \\mathcal{B})" }, { "math_id": 27, "text": "f : X \\to Y" }, { "math_id": 28, "text": "f" }, { "math_id": 29, "text": "A \\in \\mathcal{A}," }, { "math_id": 30, "text": "f(A) \\in \\mathcal{B}." }, { "math_id": 31, "text": "Y" }, { "math_id": 32, "text": "\\wp(X)," }, { "math_id": 33, "text": "X \\in \\mathcal{B}." }, { "math_id": 34, "text": "\\kappa," }, { "math_id": 35, "text": "\\kappa" }, { "math_id": 36, "text": "f : S \\to X" }, { "math_id": 37, "text": "\\left[f^{-1}(\\mathcal{B})\\right]" }, { "math_id": 38, "text": "f^{-1}(\\mathcal{B}) := \\left\\{f^{-1}(B) : B \\in \\mathcal{B}\\right\\}," }, { "math_id": 39, "text": "S." }, { "math_id": 40, "text": "S" }, { "math_id": 41, "text": "\\left(T_i, \\mathcal{B}_i\\right)_{i \\in I}" }, { "math_id": 42, "text": "I" }, { "math_id": 43, "text": "\\left(f_i\\right)_{i \\in I}" }, { "math_id": 44, "text": "f_i : S \\to T_i" }, { "math_id": 45, "text": "i \\in I." }, { "math_id": 46, "text": "f_i : (S, \\mathcal{A}) \\to \\left(T_i, \\mathcal{B}_i\\right)" }, { "math_id": 47, "text": "{\\textstyle \\bigcap\\limits_{i \\in I} \\left[f^{-1}\\left(\\mathcal{B}_i\\right)\\right]}." }, { "math_id": 48, "text": "f_i : T_i \\to S" }, { "math_id": 49, "text": "f_i : \\left(T_i, \\mathcal{B}_i\\right) \\to (S, \\mathcal{A})" }, { "math_id": 50, "text": "i \\in I," }, { "math_id": 51, "text": "\\mathcal{A}_i" }, { "math_id": 52, "text": "f\\left(\\mathcal{B}_i\\right)," }, { "math_id": 53, "text": "A" }, { "math_id": 54, "text": "\\cup_{i \\in I} A_i" }, { "math_id": 55, "text": "A_i \\in \\mathcal{A}_i" }, { "math_id": 56, "text": "A_i" }, { "math_id": 57, "text": "(S, \\mathcal{A}) \\to (X, \\mathcal{B})" }, { "math_id": 58, "text": "s \\mapsto s" }, { "math_id": 59, "text": "\\left(X_i, \\mathcal{B}_i\\right)_{i \\in I}" }, { "math_id": 60, "text": "X = {\\textstyle \\prod\\limits_{i \\in I} X_i}," }, { "math_id": 61, "text": "f_i : X \\to X_i" }, { "math_id": 62, "text": "f_i : X \\to X_i." }, { "math_id": 63, "text": "{\\textstyle \\left\\{\\prod\\limits_{i \\in I} B_i ~:~ B_i \\in \\mathcal{B}_i \\text{ for all } i \\in I\\right\\}}." }, { "math_id": 64, "text": "\\R" }, { "math_id": 65, "text": "\\R." 
}, { "math_id": 66, "text": "[-n, n]" }, { "math_id": 67, "text": "n = 1, 2, 3, \\ldots." }, { "math_id": 68, "text": "(X, d)," }, { "math_id": 69, "text": "S \\subseteq X" }, { "math_id": 70, "text": "\\sup_{s, t \\in S} d(s, t) < \\infty" }, { "math_id": 71, "text": "(X, \\Omega, \\mu)," }, { "math_id": 72, "text": "S \\in \\Omega" }, { "math_id": 73, "text": "\\mu(S) < \\infty" }, { "math_id": 74, "text": "\\{\\operatorname{int} B : B \\in \\mathcal{B}\\}" }, { "math_id": 75, "text": "\\operatorname{int} \\mathcal{B}." }, { "math_id": 76, "text": "\\mathcal{B} = \\operatorname{int} \\mathcal{B}." }, { "math_id": 77, "text": "\\{\\operatorname{cl} B : B \\in \\mathcal{B}\\}" }, { "math_id": 78, "text": "\\operatorname{cl} \\mathcal{B}." }, { "math_id": 79, "text": "\\operatorname{int} \\mathcal{B} \\subseteq \\mathcal{B} \\subseteq \\operatorname{cl} \\mathcal{B}." }, { "math_id": 80, "text": "x \\in X" }, { "math_id": 81, "text": "U" }, { "math_id": 82, "text": "V" }, { "math_id": 83, "text": "S V \\subseteq U." } ]
https://en.wikipedia.org/wiki?curid=9584635
9585894
Kohn anomaly
A Kohn anomaly or the Kohn effect is an anomaly in the dispersion relation of a phonon branch in a metal. The anomaly is named for Walter Kohn, who first proposed it in 1959. Description. In condensed matter physics, a Kohn anomaly (also called the Kohn effect) is an anomaly in the dispersion relation of a phonon branch in a metal. For a specific wavevector, the frequency (and thus the energy) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative. In extreme cases (which can happen in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface, that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms this vector would be formula_0). The electron–phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born–Oppenheimer approximation, since the electrons no longer follow the ionic motion adiabatically. In the phonon spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that occurs at certain high symmetry points of the first Brillouin zone, produced by the abrupt change in the screening of lattice vibrations by conduction electrons. Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part formula_1 of the reciprocal space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at formula_2, where formula_3 is the Fermi wavevector. Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes a strong oscillation of formula_4 in the proximity of the singularity mentioned above. In the context of phonon dispersion relations, these oscillations appear as a vertical tangent in the plot of formula_5, called the Kohn anomalies. Many different systems exhibit Kohn anomalies, including graphene, bulk metals, and many low-dimensional systems (the reason involves the condition formula_6, which depends on the topology of the Fermi surface). However, it is important to emphasize that only materials showing metallic behaviour can exhibit a Kohn anomaly, since the model emerges from a homogeneous electron gas approximation. History. The anomaly is named for Walter Kohn, who first proposed it in 1959.
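A short Python sketch of the mechanism described above, using the standard static Lindhard function F(x) of the three-dimensional free electron gas with x = q/2kF (prefactors and the full dielectric function are omitted, and the function name is illustrative). The numerically estimated slope of F grows without bound in magnitude as q approaches 2kF, which is the logarithmic singularity responsible for the kink in the screened phonon dispersion.

```python
import math

def lindhard_static(x: float) -> float:
    """Static Lindhard function F(x), x = q / (2 k_F), for the 3D free
    electron gas.  F(1) = 1/2, and dF/dx diverges logarithmically at x = 1,
    i.e. at q = 2 k_F."""
    if abs(x - 1.0) < 1e-12:
        return 0.5
    return 0.5 + (1.0 - x * x) / (4.0 * x) * math.log(abs((1.0 + x) / (1.0 - x)))

# Central-difference slope of F around x = 1: its magnitude grows like
# log|x - 1| as the step shrinks, signalling the Kohn anomaly at q = 2 k_F.
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    slope = (lindhard_static(1.0 + eps) - lindhard_static(1.0 - eps)) / (2.0 * eps)
    print(f"|x - 1| = {eps:.0e}   dF/dx ~ {slope:.3f}")
```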
[ { "math_id": 0, "text": "2k_{\\rm F}" }, { "math_id": 1, "text": " \\operatorname{Re}(\\varepsilon (\\mathbf{q}, \\omega)) " }, { "math_id": 2, "text": " \\mathbf{q} = 2\\mathbf{k}_{\\rm F} " }, { "math_id": 3, "text": " \\mathbf{k}_{\\rm F} " }, { "math_id": 4, "text": " \\operatorname{Re}(\\varepsilon (\\mathbf{r}, \\omega)) " }, { "math_id": 5, "text": " \\omega ^2(\\mathbf{q}) " }, { "math_id": 6, "text": " \\mathbf{q} = 2 \\mathbf{k}_{\\rm F} " } ]
https://en.wikipedia.org/wiki?curid=9585894
9588
Extraterrestrial life
Life not on Earth Unsolved problem in astronomy: Could life have arisen elsewhere? What are the requirements for life? Are there exoplanets like Earth? How likely is the evolution of intelligent life? Extraterrestrial life, alien life, or colloquially aliens, is life which does not originate from Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a "plurality of worlds" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds "throughout the boundless immensity of space" in "The City of God". Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. Nicholas of Cusa wrote in 1440 that Earth is "a brilliant star" like other celestial objects visible in space, which would appear similar to the Sun, from an exterior perspective, due to a layer of "fiery brightness" in the outer layer of the atmosphere. He theorised all extraterrestrial bodies could be inhabited by men, plants, and animals, including the Sun. Descartes wrote that there was no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation. When considering the atmospheric composition and ecosystems hosted by extraterrestrial bodies, extraterrestrial life can seem more speculation than reality due to the harsh conditions and disparate chemical composition of their atmospheres when compared to the life-abundant Earth. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the settings in which life on Earth originated. Hydrothermal vents, acidic hot springs, and volcanic lakes are examples of environments where life formed under difficult circumstances; they provide parallels to the extreme environments on other planets and support the possibility of extraterrestrial life. Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit communications. The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One shared space is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. 
Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue it may be dangerous to actively draw attention to Earth. Context. If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. According to the Big Bang, the universe was initially too hot to allow life. 15 million years later, it cooled to temperate levels, but the elements that make up living things did not exist yet. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. Although Earth was in a molten state after its birth and may have burned any organics that fell in it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread – by meteoroids, for example – between habitable planets in a process called panspermia. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", where water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as a rock. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, not even to actually have such liquid water. Venus is located in the habitable zone of the Solar System but does not have liquid water because of the conditions of its atmosphere. Jovian planets or Gas Giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution. Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it, even the most hostile ones. As a result, it is inferred that life in other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation, and may have stricter requirements. A planet or moon may not have any life on it, even if it was habitable. Likelihood of existence. It is unclear if life and intelligent life are ubiquitous in the cosmos or rare. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. 
The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the chemical ingredients that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life "not" to exist somewhere else other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that all such requirements are simultaneously met by another planet. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data. In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilisations in the Milky Way galaxy. The Drake equation is: formula_0 where: "N" = the number of Milky Way galaxy civilisations already capable of communicating across interplanetary space and "R"* = the average rate of star formation in our galaxy "f""p" = the fraction of those stars that have planets "n""e" = the average number of planets that can potentially support life "f""l" = the fraction of planets that actually support life "f""i" = the fraction of planets with life that evolves to become intelligent life (civilisations) "f""c" = the fraction of civilisations that develop a technology to broadcast detectable signs of their existence into space "L" = the length of time over which such civilisations broadcast detectable signals into space Drake's proposed estimates are as follows, but numbers on the right side of the equation are agreed as speculative and open to substitution: formula_1 The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten per cent of all Sun-like stars have a system of planets, i.e. there is an enormous number of stars with planets orbiting them in the observable universe. 
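A minimal numerical sketch of the Drake equation given above, in Python. Only the multiplicative structure comes from the article; the parameter values below are purely illustrative placeholders, not estimates, and swapping them changes N by many orders of magnitude, which is exactly the point of the criticism quoted above.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * f_p * n_e * f_l * f_i * f_c * L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative placeholder inputs (assumed values, not measurements).
N = drake(R_star=1.0,   # average star formation rate (stars per year)
          f_p=0.5,      # fraction of stars with planets
          n_e=2.0,      # potentially habitable planets per such system
          f_l=1.0,      # fraction of those on which life appears
          f_i=0.01,     # fraction that go on to develop intelligence
          f_c=0.01,     # fraction that become detectable civilisations
          L=10_000)     # years such a civilisation remains detectable
print(N)                # 1.0 communicative civilisation for these inputs
```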
Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the "Kepler" spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation to the Fermi paradox. Biochemical basis. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, such as for life on Earth, which depends on the energy of the sun. However, there are other alternative energy sources, such as volcanos, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds; two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. On Earth's crust the most abundant of those elements is silicon, in the Hydrosphere it is carbon and in the atmosphere, it is carbon and nitrogen. 
Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem, the difficulty to kickstart a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976 considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage/decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still be stuck using RNA, or evolve into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed it from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesisers. 
The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Harsh environmental conditions on Earth harboring life. Conditions on the other planets of the Solar System, and on worlds beyond it, are very harsh and appear too extreme to harbor any life. Such planets can combine intense UV radiation with extreme temperatures, a lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would at first seem unlikely to have harbored life. Fossil evidence and long-standing theories, supported by years of research, point to environments such as hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth. These environments are extreme when compared with the typical ecosystems that most life on Earth now inhabits: hydrothermal vents are scorching hot because magma escaping from Earth's mantle meets much colder ocean water. Even today, diverse populations of bacteria inhabit the areas surrounding these vents, suggesting that some form of life can be supported even in the harshest of environments, such as those found on other planets. What makes these harsh environments suitable for the origin of life on Earth, and possibly for the creation of life on other planets, is that the relevant chemical reactions occur spontaneously. For example, the hydrothermal vents found on the ocean floor support many chemosynthetic processes that allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in environments with relatively little oxygen while maintaining enough energy to support themselves. The environment of the early Earth was reducing, and these carbon-fixing compounds were therefore necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of other planets, in the Milky Way galaxy and beyond, those atmospheres are most likely reducing or very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on such planets, the same reduced, carbon-fixing chemistry that occurs around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System. 
The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth, and no intelligent species other than humans is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. It has a greenhouse effect, the hottest planetary surface in the Solar System, clouds of sulfuric acid, no remaining liquid water on the surface, and a thick carbon-dioxide atmosphere with huge pressure. Comparing the two planets helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions against life on Venus, there are suspicions that microbial lifeforms may still survive in high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient lifeforms may still have left fossilised remains, and microbes may still survive deep underground. As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on moons orbiting these planets. Europa, from the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because water is sandwiched between layers of solid ice. Europa's ocean, in contrast, is in contact with the moon's rocky interior, which helps the chemical reactions. It may be difficult to dig deep enough to study these oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled into, as it releases water into space in eruption columns. The space probe "Cassini" flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, "Cassini" detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculations about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it is of such a great depth that it would be very difficult to access it for study. Scientific search. 
The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continued existence. This helps to determine what to look for when searching for life on other celestial bodies. This is a complex area of study, and it uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of 2017, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Search for basic life. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the "Curiosity" rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of photosynthesizing plants. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. 
The studies ruled out pollution of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary "IRAS 16293-2422", which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first time discovery, in the plumes of Enceladus, moon of the planet Saturn, of hydrogen cyanide, a possible chemical essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Search for extraterrestrial intelligences. Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures, effects on the native planet that may not be caused by natural causes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would be in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. 
Fossil fuels may well be generated and used on such worlds too. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would produce excess infrared radiation that telescopes may notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light-spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Extrasolar planets. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (at one recent count, 7,026 planets in 4,949 planetary systems, including 1,007 multiple planetary systems). The extrasolar planets discovered so far range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located about 4.2 light-years from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. 
One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact. Cosmic pluralism. The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable and rejected explanations based on supernatural incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not developed the scientific method yet and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as that explanations had to be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model that considered that the sun and all other celestial bodies revolve around Earth. However, they did not consider them as worlds. In Greek understanding, the world was composed by both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from "apeiron", a substance that created the world, and that the world would eventually return to the cosmos. Eventually two groups emerged, the "atomists" that thought that matter at both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the "Aristotelians" who thought that those elements were exclusive of Earth and that the cosmos was made of a fifth one, the "aether". Atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the center of the universe, and that would made it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only in the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, "Bharat Kshetra", "Mahavideh Kshetra", "Airavat Kshetra", and "Hari kshetra". Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem "The House of Fame" engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talk about other worlds, they talk about places located at the center of their own systems, and with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. 
The Great Library of Alexandria compiled information about it, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge expanded through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. Early modern period. By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which worked now almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, clarified the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but just a planet orbiting around a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic church. Galileo was trialed for the heliocentric model, which was considered heretical, and forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which trialed and executed him. The heliocentric model was further strengthened by the postulation of the theory of gravity by Sir Isaac Newton. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitely discarded. By this time, the use of the scientific method had become a standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. 
Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There was very little actual discussion about extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not mere lights but physical objects. The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical way to investigate it. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. 19th century. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book "Mars," followed by "Mars and its Canals" in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought given to the conditions on each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System remained strong until Mariner 4 and Mariner 9 provided close-up images of Mars, which definitively debunked the idea of the existence of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced scientists to investigate the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named at the time, developed during the late 19th century. The growth of extraterrestrials as a theme in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later, more powerful telescopes revealed all such discoveries to be natural features. A famous case is the Cydonia region of Mars, first imaged by the "Viking 1" orbiter. 
The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took more detailed photos showing that there was nothing special about the site. Recent history. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as "exobiology", this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology, not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or peculiar to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial lifeforms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century brought great technological advances, speculation about future hypothetical technologies, and increased basic scientific knowledge among the general population thanks to science popularization through the mass media. Public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers about the existence of aliens. Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people of those eras failed to recognize them as such. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. By the 21st century, it was accepted that, within the Solar System, multicellular life exists only on Earth, but interest in extraterrestrial life increased regardless. This is a result of the advances in several sciences. Knowledge of planetary habitability allows the likelihood of finding life on each specific celestial body to be considered in scientific terms, as it is known which features are beneficial and which are harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. 
At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, from which he has emerged by chance". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled "Rare Earth: Why Complex Life is Uncommon in the Universe". In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". In fiction. Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This was changed by the 1859 book "On the Origin of Species" by Charles Darwin, which proposed the theory of evolution. Now with the notion that evolution on other planets may take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A common way to do that was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination, and this influences works of fiction. 
For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype once used in works of fiction. Government responses. The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs had in 1977 discussed for a year strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility of existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "non-identified aero spatial phenomena". The agency is maintaining a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied. In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "N = R_{\\ast} \\cdot f_p \\cdot n_e \\cdot f_{\\ell} \\cdot f_i \\cdot f_c \\cdot L" }, { "math_id": 1, "text": "10{,}000 = 5 \\cdot 0.5 \\cdot 2 \\cdot 1 \\cdot 0.2 \\cdot 1 \\cdot 10{,}000" } ]
https://en.wikipedia.org/wiki?curid=9588
9588527
Engine power
Engine power is the power that an engine can put out. It can be expressed in power units, most commonly kilowatt, pferdestärke (metric horsepower), or horsepower. In terms of internal combustion engines, the engine power usually describes the "rated power", which is a power output that the engine can maintain over a long period of time according to a certain testing method, for example ISO 1585. In general, though, an internal combustion engine has a power take-off shaft (the crankshaft); therefore, the rule for shaft power applies to internal combustion engines: engine power is the product of the engine torque and the crankshaft's angular velocity. Definition. Power is the product of torque and angular velocity: Let: Power is then: formula_5 In internal combustion engines, the crankshaft speed formula_6 is a more common figure than formula_7, so we can use formula_8 instead, which is equivalent to formula_7: formula_9 Note that formula_6 is given per second (s−1). If we want to use the more common per minute (min−1) instead, we have to divide formula_6 by 60: formula_10 Usage. Numerical value equations. The approximate numerical value equations for engine power from torque and crankshaft speed are: International unit system (SI). Let: Then: formula_11 Technical unit system (MKS). Then: formula_12 Imperial/U.S. Customary unit system. Then: formula_13 Example. "The power curve (orange) can be derived from the torque curve (blue) by multiplying by the crankshaft speed and dividing by 9550" A diesel engine produces a torque formula_14 of 234 N·m at formula_6 4200 min−1, which is the engine's rated speed. Let: Then: formula_17 or using the numerical value equation: formula_18 The engine's rated power output is 103 kW. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
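The relationship above lends itself to a short worked computation. The following is a minimal Python sketch, not part of the original article, that implements both the exact shaft-power formula (P = M·2πn) and the SI numerical value equation (P ≈ M·n/9550) and reproduces the diesel-engine example; the function names are illustrative only.

import math

def engine_power_watts(torque_nm, speed_rpm):
    # Exact shaft power P = M * 2*pi * n, with n converted from min^-1 to s^-1
    n_per_second = speed_rpm / 60.0
    return torque_nm * 2.0 * math.pi * n_per_second

def engine_power_kw_si(torque_nm, speed_rpm):
    # SI numerical value equation: P [kW] is approximately M [N·m] * n [min^-1] / 9550
    return torque_nm * speed_rpm / 9550.0

# Worked example from the article: 234 N·m at 4200 min^-1
print(round(engine_power_watts(234, 4200)))   # 102919 W
print(round(engine_power_kw_si(234, 4200)))   # 103 kW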
[ { "math_id": 0, "text": "P=" }, { "math_id": 1, "text": "M=" }, { "math_id": 2, "text": "n=" }, { "math_id": 3, "text": "\\omega=" }, { "math_id": 4, "text": "2\\pi n" }, { "math_id": 5, "text": "P= M \\cdot \\omega" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "\\omega" }, { "math_id": 8, "text": "2 \\pi n" }, { "math_id": 9, "text": "P= M \\cdot 2 \\pi \\cdot n" }, { "math_id": 10, "text": "P= M \\cdot 2 \\pi \\cdot {n \\over 60}" }, { "math_id": 11, "text": "P= {M \\cdot n \\over 9550}" }, { "math_id": 12, "text": "P= {M \\cdot n \\over 716}" }, { "math_id": 13, "text": "P= {M \\cdot n \\over 5252}" }, { "math_id": 14, "text": "M" }, { "math_id": 15, "text": "M= 234 \\, N \\cdot m" }, { "math_id": 16, "text": "n= 4200 \\, {min}^{-1} = 70 \\, s^{-1}" }, { "math_id": 17, "text": "234 \\, N \\cdot m \\cdot 2 \\pi \\cdot 70 \\, s^{-1} = 102,919 \\, N \\cdot m \\cdot s^{-1} \\approx 103 \\, kW" }, { "math_id": 18, "text": "{234 \\cdot 4200 \\over 9550} = 102.91 \\approx 103" } ]
https://en.wikipedia.org/wiki?curid=9588527
95925
Maxima (software)
Computer algebra system Maxima () is a powerful software package for performing computer algebra calculations in mathematics and the physical sciences. It is written in Common Lisp and runs on all POSIX platforms such as macOS, Unix, BSD, and Linux, as well as under Microsoft Windows and Android. It is free software released under the terms of the GNU General Public License (GPL). History. Maxima is based on a 1982 version of Macsyma, which was developed at MIT with funding from the United States Department of Energy and other government agencies. A version of Macsyma was maintained by Bill Schelter from 1982 until his death in 2001. In 1998, Schelter obtained permission from the Department of Energy to release his version under the GPL. That version, now called Maxima, is maintained by an independent group of users and developers. Maxima does not include any of the many modifications and enhancements made to the commercial version of Macsyma during 1982–1999. Though the core functionality remains similar, code depending on these enhancements may not work on Maxima, and bugs which were fixed in Macsyma may still be present in Maxima, and vice versa. Maxima participated in Google Summer of Code in 2019 under International Neuroinformatics Coordinating Facility. Symbolic calculations. Like most computer algebra systems, Maxima supports a variety of ways of reorganizing symbolic algebraic expressions, such as polynomial factorization, polynomial greatest common divisor calculation, expansion, separation into real and imaginary parts, and transformation of trigonometric functions to exponential and vice versa. It has a variety of techniques for simplifying algebraic expressions involving trigonometric functions, roots, and exponential functions. It can calculate symbolic antiderivatives ("indefinite integrals"), definite integrals, and limits. It can derive closed-form series expansions as well as terms of Taylor-Maclaurin-Laurent series. It can perform matrix manipulations with symbolic entries. Maxima is a general-purpose system, and special-case calculations such as factorization of large numbers, manipulation of extremely large polynomials, etc. are sometimes better done in specialized systems. Numeric calculations. Maxima specializes in symbolic operations, but it also offers numerical capabilities such as arbitrary-precision integer, rational number, and floating-point numbers, limited only by space and time constraints. Programming. Maxima includes a complete programming language with ALGOL-like syntax but Lisp-like semantics. It is written in Common Lisp and can be accessed programmatically and extended, as the underlying Lisp can be called from Maxima. It uses gnuplot for drawing. For calculations using floating point and arrays heavily, Maxima has translators from the Maxima language to other programming languages (notably Fortran), which may execute more efficiently. Interfaces. Various graphical user interfaces (GUIs) are available for Maxima: Examples of Maxima code. Basic operations. Arbitrary-precision arithmetic. bfloat(sqrt(2)), fpprec=40; formula_0 Function. f(x):=x^3$ f(4); formula_1 Expand. expand((a-b)^3); formula_2 Factor. factor(x^2-1); formula_3 Solving equations. formula_4 solve(x^2 + a*x + 1, x); formula_5 Solving equations numerically. formula_6 find_root(cos(x) = x, x, 0, 1); formula_7 bf_find_root(cos(x) = x, x, 0, 1), fpprec = 50; formula_8 Indefinite integral. formula_9 integrate(x^2 + cos(x), x); formula_10 Definite integral. 
formula_11 integrate(1/(x^3 + 1), x, 0, 1), ratsimp; formula_12 Numerical integral. formula_13 quad_qags(sin(sin(x)), x, 0, 2)[1]; formula_14 Derivative. formula_15 diff(cos(x)^2, x, 3); formula_16 Limit. formula_17 limit((1+sinh(x))/exp(x), x, inf); formula_18 Number theory. primes(10, 20); formula_19 fib(10); formula_20 Series. formula_21 sum(1/x^2, x, 1, inf), simpsum; formula_22 Series expansion. taylor(sin(x), x, 0, 9); formula_23 niceindices(powerseries(cos(x), x, 0)); formula_24 Special functions. bessel_j(0, 4.5); formula_25 airy_ai(1.5); formula_26 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1.41421356237309504880168872420969807857 \\cdot 10^0" }, { "math_id": 1, "text": "64" }, { "math_id": 2, "text": "-b^3+3 ab^2-3a^2b+a^3" }, { "math_id": 3, "text": "(x-1)(x+1)" }, { "math_id": 4, "text": "x^2+a \\ x + 1=0" }, { "math_id": 5, "text": "[x=-\\Biggl(\\frac{\\sqrt{a^2-4}+a}{2}\\Biggr),x=\\frac{\\sqrt{a^2-4}-a}{2}]" }, { "math_id": 6, "text": "\\cos x = x" }, { "math_id": 7, "text": "0.7390851332151607" }, { "math_id": 8, "text": "7.3908513321516064165531208767387340401341175890076 \\cdot 10^{-1}" }, { "math_id": 9, "text": "\\int x^2+\\cos x\\ d x" }, { "math_id": 10, "text": "\\sin x + \\frac{x^3}{3}" }, { "math_id": 11, "text": "\\int_{0}^{1}\\frac{1}{x^3+1}\\, dx" }, { "math_id": 12, "text": "\\frac{\\sqrt3 \\log2+\\pi}{3^{\\frac{3}{2}}}" }, { "math_id": 13, "text": "\\int_{0}^{2} \\sin(\\sin (x)) \\, dx" }, { "math_id": 14, "text": "1.247056058244003" }, { "math_id": 15, "text": "{d^3 \\over d x^3} \\cos^2 x" }, { "math_id": 16, "text": "8 \\cos{x}\\sin{x}" }, { "math_id": 17, "text": "\\lim_{x \\to \\infty} \\frac{1+\\sinh{x}}{e^{x}}" }, { "math_id": 18, "text": "\\frac{1}{2}" }, { "math_id": 19, "text": "[11,13,17,19]" }, { "math_id": 20, "text": "55" }, { "math_id": 21, "text": "\\sum_{x=1}^\\infty \\frac{1}{x^2} " }, { "math_id": 22, "text": "\\frac{\\pi^2}{6}" }, { "math_id": 23, "text": "x-\\frac{x^3}{6}+\\frac{x^5}{120}-\\frac{x^7}{5040}+\\frac{x^9}{362880}" }, { "math_id": 24, "text": "\\sum_{i=0}^\\infty \\frac{(-1)^ix^{2i}}{(2i)!}" }, { "math_id": 25, "text": "-0.3205425089851214" }, { "math_id": 26, "text": "0.07174949700810543" } ]
https://en.wikipedia.org/wiki?curid=95925
9593786
Dihydroorotate dehydrogenase
Class of enzymes Dihydroorotate dehydrogenase (DHODH) is an enzyme that in humans is encoded by the "DHODH" gene on chromosome 16. The protein encoded by this gene catalyzes the fourth enzymatic step, the ubiquinone-mediated oxidation of dihydroorotate to orotate, in "de novo" pyrimidine biosynthesis. This protein is a mitochondrial protein located on the outer surface of the inner mitochondrial membrane (IMM). Inhibitors of this enzyme are used to treat autoimmune diseases such as rheumatoid arthritis. Structure. DHODH can vary in cofactor content, oligomeric state, subcellular localization, and membrane association. An overall sequence alignment of these DHODH variants presents two classes of DHODHs: the cytosolic Class 1 and the membrane-bound Class 2. In Class 1 DHODH, a basic cysteine residue catalyzes the oxidation reaction, whereas in Class 2, the serine serves this catalytic function. Structurally, Class 1 DHODHs can also be divided into two subclasses, one of which forms homodimers and uses fumarate as its electron acceptor, and the other which forms heterotetramers and uses NAD+ as its electron acceptor. This second subclass contains an addition subunit (PyrK) containing an iron-sulfur cluster and a flavin adenine dinucleotide (FAD). Meanwhile, Class 2 DHODHs use coenzyme Q/ubiquinones for their oxidant. In higher eukaryotes, this class of DHODH contains an N-terminal bipartite signal comprising a cationic, amphipathic mitochondrial targeting sequence of about 30 residues and a hydrophobic transmembrane sequence. The targeting sequence is responsible for this protein's localization to the IMM, possibly from recruiting the import apparatus and mediating ΔΨ-driven transport across the inner and outer mitochondrial membranes, while the transmembrane sequence is essential for its insertion into the IMM. This sequence is adjacent to a pair of α-helices, α1 and α2, which are connected by a short loop. Together, this pair forms a hydrophobic funnel that is suggested to serve as the insertion site for ubiquinone, in conjunction with the FMN binding cavity at the C-terminal. The two terminal domains are directly connected by an extended loop. The C-terminal domain is the larger of the two and folds into a conserved α/β-barrel structure with a core of eight parallel β-strands surrounded by eight α helices. Function. Human DHODH is a ubiquitous FMN flavoprotein. In bacteria (gene pyrD), it is located on the inner side of the cytosolic membrane. In some yeasts, such as in "Saccharomyces cerevisiae" (gene URA1), it is a cytosolic protein, whereas, in other eukaryotes, it is found in the mitochondria. It is also the only enzyme in the pyrimidine biosynthesis pathway located in the mitochondria rather than the cytosol. As an enzyme associated with the electron transport chain, DHODH links mitochondrial bioenergetics, cell proliferation, ROS production, and apoptosis in certain cell types. DHODH depletion also resulted in increased ROS production, decreased membrane potential and cell growth retardation. Also, due to its role in DNA synthesis, inhibition of DHODH may provide a means to regulate transcriptional elongation. Mechanism. 
In mammalian species, DHODH catalyzes the fourth step in de novo pyrimidine biosynthesis, which involves the ubiquinone-mediated oxidation of dihydroorotate to orotate and the reduction of FMN to dihydroflavin mononucleotide (FMNH2): (S)-dihydroorotate + O2 formula_0 orotate + H2O2 The particular mechanism for the dehydrogenation of dihydroorotic acid by DHODH differs between the two classes of DHODH. Class 1 DHODHs follow a concerted mechanism, in which the two C–H bonds of dihydroorotic acid break in concert. Class 2 DHODHs follow a stepwise mechanism, in which the breaking of the C–H bonds precedes the equilibration of iminium into orotic acid. Clinical significance. The immunomodulatory drugs teriflunomide and leflunomide have been shown to inhibit DHODH. Human DHODH has two domains: an alpha/beta-barrel domain containing the active site and an alpha-helical domain that forms the opening of a tunnel leading to the active site. Leflunomide has been shown to bind in this tunnel. Leflunomide is being used for treatment of rheumatoid and psoriatic arthritis, as well as multiple sclerosis. Its immunosuppressive effects have been attributed to the depletion of the pyrimidine supply for T cells or to more complex interferon or interleukin-mediated pathways, but nonetheless require further research. Additionally, DHODH may play a role in retinoid N-(4-hydroxyphenyl)retinamide (4HPR)-mediated cancer suppression. Inhibition of DHODH activity with teriflunomide or expression with RNA interference resulted in reduced ROS generation in, and thus apoptosis of, transformed skin and prostate epithelial cells. Mutations in this gene have been shown to cause Miller syndrome, also known as Genee-Wiedemann syndrome, Wildervanck-Smith syndrome or post-axial acrofacial dystosis. Interactions. DHODH binds to its FMN cofactor in conjunction with ubiquinone to catalyze the oxidation of dihydroorotate to orotate. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=9593786
959465
Lamé's special quartic
Lamé's special quartic, named after Gabriel Lamé, is the graph of the equation formula_0 where formula_1. It looks like a rounded square with "sides" of length formula_2 and centered on the origin. This curve is a squircle centered on the origin, and it is a special case of a superellipse. Because of Pierre de Fermat's only surviving proof, that of the "n" = 4 case of Fermat's Last Theorem, if "r" is rational there is no non-trivial rational point ("x", "y") on this curve (that is, no point for which both "x" and "y" are non-zero rational numbers). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
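As a small numerical illustration, not part of the original article, the following Python sketch samples first-quadrant points of the curve by solving x^4 + y^4 = r^4 for the non-negative y at evenly spaced x values; the function name is illustrative.

def lame_quartic_points(r, steps=4):
    # Sample (x, y) on the first-quadrant arc of x^4 + y^4 = r^4
    points = []
    for i in range(steps + 1):
        x = r * i / steps
        y = (r**4 - x**4) ** 0.25   # the non-negative root
        points.append((x, y))
    return points

for x, y in lame_quartic_points(1.0):
    print(f"x={x:.2f}  y={y:.4f}  x^4+y^4={x**4 + y**4:.6f}")   # last column stays at 1.000000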
[ { "math_id": 0, "text": "x^4 + y^4 = r^4" }, { "math_id": 1, "text": "r > 0" }, { "math_id": 2, "text": "2r" } ]
https://en.wikipedia.org/wiki?curid=959465
9594699
Risk difference
The risk difference (RD), excess risk, or attributable risk is the difference between the risk of an outcome in the exposed group and the unexposed group. It is computed as formula_0, where formula_1 is the incidence in the exposed group, and formula_2 is the incidence in the unexposed group. If the risk of an outcome is increased by the exposure, the term absolute risk increase (ARI) is used, and computed as formula_0. Equivalently, if the risk of an outcome is decreased by the exposure, the term absolute risk reduction (ARR) is used, and computed as formula_3. The inverse of the absolute risk reduction is the number needed to treat, and the inverse of the absolute risk increase is the number needed to harm. Usage in reporting. It is recommended to use absolute measurements, such as the risk difference, alongside relative measurements when presenting the results of randomized controlled trials. Their utility can be illustrated by the following example of a hypothetical drug which reduces the risk of colon cancer from 1 case in 5000 to 1 case in 10,000 over one year. The relative risk reduction is 0.5 (50%), while the absolute risk reduction is 0.0001 (0.01%). The absolute risk reduction reflects the low probability of getting colon cancer in the first place, while reporting only the relative risk reduction would risk readers exaggerating the effectiveness of the drug. Authors such as Ben Goldacre believe that the risk difference is best presented as natural numbers: the drug reduces 2 cases of colon cancer to 1 case if you treat 10,000 people. Natural numbers, which are used in the number needed to treat approach, are easily understood by non-experts. Inference. The risk difference can be estimated from a 2x2 contingency table, whose cells give the number of subjects with and without the event in the exposed group (EE and EN) and in the unexposed control group (CE and CN): The point estimate of the risk difference is formula_4 The sampling distribution of RD is approximately normal, with standard error formula_5 The formula_6 confidence interval for the RD is then formula_7 where formula_8 is the standard score for the chosen level of significance. Bayesian interpretation. Denote a disease by formula_9 and no disease by formula_10, and exposure by formula_11 and no exposure by formula_12. The risk difference can then be written as formula_13 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
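To make the estimation procedure concrete, here is a minimal Python sketch, not part of the original article, that computes the point estimate, its standard error, and the Wald-type confidence interval directly from the four cell counts of the 2x2 table; the cell counts in the example call are made up for illustration.

import math

def risk_difference(ee, en, ce, cn, z=1.96):
    # ee/en: events and non-events in the exposed group; ce/cn: the same for the unexposed group
    risk_exposed = ee / (ee + en)
    risk_unexposed = ce / (ce + cn)
    rd = risk_exposed - risk_unexposed
    # Standard error as given by the formula above
    se = math.sqrt(ee * en / (ee + en) ** 3 + ce * cn / (ce + cn) ** 3)
    # z defaults to 1.96, the standard score for a 95% confidence interval
    return rd, se, (rd - z * se, rd + z * se)

# Hypothetical example: 15 events among 100 exposed and 10 events among 100 unexposed subjects
rd, se, ci = risk_difference(15, 85, 10, 90)
print(rd, se, ci)   # RD = 0.05 with its approximate 95% confidence interval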
[ { "math_id": 0, "text": "I_e - I_u" }, { "math_id": 1, "text": "I_e" }, { "math_id": 2, "text": "I_u" }, { "math_id": 3, "text": "I_u - I_e" }, { "math_id": 4, "text": "RD = \\frac{EE}{EE + EN} - \\frac{CE}{CE + CN}." }, { "math_id": 5, "text": "SE(RD) = \\sqrt{\\frac{EE\\cdot EN}{(EE + EN)^3} + \\frac{CE\\cdot CN}{(CE + CN)^3}}." }, { "math_id": 6, "text": "1 - \\alpha" }, { "math_id": 7, "text": "CI_{1 - \\alpha}(RD) = RD\\pm SE(RD)\\cdot z_\\alpha," }, { "math_id": 8, "text": "z_\\alpha" }, { "math_id": 9, "text": "D" }, { "math_id": 10, "text": "\\neg D" }, { "math_id": 11, "text": "E" }, { "math_id": 12, "text": "\\neg E" }, { "math_id": 13, "text": "RD = P(D\\mid E)-P(D\\mid \\neg E)." } ]
https://en.wikipedia.org/wiki?curid=9594699
9596
Ellipsis
Triple-dot punctuation mark The ellipsis (), rendered ..., alternatively described as suspension points/dots, or points/periods of ellipsis, or colloquially, dot-dot-dot, is a punctuation mark consisting of a series of three dots. An ellipsis can be used in many ways, including for intentional omission of text or to imply a concept without using words. The plural is ellipses. The term originates from the , meaning 'leave out'. Opinions differ on how to render an ellipsis in printed material and are to some extent based on the technology used for rendering. Many style guides are still influenced by the typewriter. According to "The Chicago Manual of Style", it should consist of three periods, each separated from its neighbor by a non-breaking space: . . .. According to the "AP Stylebook", the periods should be rendered with no space between them: ... A third option – available in electronic text – is to use the precomposed character U+2026 … HORIZONTAL ELLIPSIS. Style. The most common forms of an ellipsis include a row of three periods (i.e., "dots" or "full points"), as characters ... or as a precomposed triple-dot glyph, the horizontal ellipsis …. Style guides often include rules governing ellipsis use. For example, "The Chicago Manual of Style" ("Chicago" style) recommends that an ellipsis be formed by typing three periods, each with a non-breaking space on both sides  . . . , while the "Associated Press Stylebook" ("AP" style) puts the dots together, but retains a space before and after the group, thus: . When a sentence ends with ellipsis, some style guides indicate there should be four dots; three for ellipsis and a period. "Chicago" advises it, as does the "Publication Manual of the American Psychological Association" (APA style), while some other style guides do not; the "Merriam-Webster Dictionary" and related works treat this style as optional, saying that it "may" be used. When text is omitted following a sentence, a period (full stop) terminates the sentence, and a subsequent ellipsis indicates one or more omitted sentences before continuing a longer quotation. "Business Insider" magazine suggests this style and it is also used in many academic journals. The "Associated Press Stylebook" favors this approach. In writing. In her book on the ellipsis, "Ellipsis in English Literature: Signs of Omission", Anne Toner suggests that the first use of the punctuation in the English language dates to a 1588 translation of Terence's "Andria", by Maurice Kyffin. In this case, however, the ellipsis consists not of dots but of short dashes. "Subpuncting" of medieval manuscripts also denotes omitted meaning and may be related. Occasionally, it would be used in pulp fiction and other works of early 20th-century fiction to denote expletives that would otherwise have been censored. An ellipsis may also imply an unstated alternative indicated by context. For example, "I never drink wine ..." implies that the speaker does drink something else—such as vodka. In reported speech, the ellipsis can be used to represent an intentional silence. In poetry, an ellipsis is used as a thought-pause or line break at the caesura or this is used to highlight sarcasm or make the reader think about the last points in the poem. In news reporting, often put inside square brackets, it is used to indicate that a quotation has been condensed for space, brevity or relevance, as in "The President said that [...] 
he would not be satisfied", where the exact quotation was "The President said that, for as long as this situation continued, he would not be satisfied". Herb Caen, Pulitzer-prize-winning columnist for the "San Francisco Chronicle", became famous for his "three-dot journalism". Depending on context, ellipsis can indicate an unfinished thought, a leading statement, a slight pause, an echoing voice, or a nervous or awkward silence. Aposiopesis is the use of an ellipsis to trail off into silence—for example: "But I thought he was..." When placed at the end of a sentence, an ellipsis may be used to suggest melancholy or longing. In different languages. In English. American English. "The Chicago Manual of Style" suggests the use of an ellipsis for any omitted word, phrase, line, or paragraph from within but not at the end of a quoted passage. There are two commonly used methods of using ellipses: one uses three dots for any omission, while the second one makes a distinction between omissions within a sentence (using three dots: . . .) and omissions between sentences (using a period and a space followed by three dots: . ...). The "Chicago Style" Q&amp;A recommends that writers avoid using the precomposed … (U+2026) character in manuscripts and to place three periods plus two nonbreaking spaces (. . .) instead, leaving the editor, publisher, or typographer to replace them later. The Modern Language Association (MLA) used to indicate that an ellipsis must include spaces before and after each dot in all uses. If an ellipsis is meant to represent an omission, square brackets must surround the ellipsis to make it clear that there was no pause in the original quote: [ . . . ]. Currently, the MLA has removed the requirement of brackets in its style handbooks. However, some maintain that the use of brackets is still correct because it clears confusion. The MLA now indicates that a three-dot, spaced ellipsis  . . .  should be used for removing material from within one sentence within a quote. When crossing sentences (when the omitted text contains a period, so that omitting the end of a sentence counts), a four-dot, spaced (except for before the first dot) ellipsis . . . .  should be used. When ellipsis points are used in the original text, ellipsis points that are not in the original text should be distinguished by enclosing them in square brackets (e.g. text [...] text). According to the Associated Press, the ellipsis should be used to condense quotations. It is less commonly used to indicate a pause in speech or an unfinished thought or to separate items in material such as show business gossip. The stylebook indicates that if the shortened sentence before the mark can stand as a sentence, it should do so, with an ellipsis placed after the period or other ending punctuation. When material is omitted at the end of a paragraph and also immediately following it, an ellipsis goes both at the end of that paragraph and at the beginning of the next, according to this style. According to Robert Bringhurst's "Elements of Typographic Style", the details of typesetting ellipses depend on the character and size of the font being set and the typographer's preference. Bringhurst writes that a full space between each dot is "another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide"—he recommends using flush dots (with a normal word space before and after), or "thin"-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character . 
Bringhurst suggests that normally an ellipsis should be spaced fore-and-aft to separate it from the text, but when it combines with other punctuation, the leading space disappears and the other punctuation follows. This is the usual practice in typesetting. He provides the following examples: In legal writing in the United States, Rule 5.3 in the "Bluebook" citation guide governs the use of ellipses and requires a space before the first dot and between the two subsequent dots. If an ellipsis ends the sentence, then there are three dots, each separated by a space, followed by the final punctuation (e.g. Hah . . . ?). In some legal writing, an ellipsis is written as three asterisks, *** or * * *, to make it obvious that text has been omitted or to signal that the omitted text extends beyond the end of the paragraph. British English. "The Oxford Style Guide" recommends setting the ellipsis as a single character … or as a series of three (narrow) spaced dots surrounded by spaces, thus: . If there is an ellipsis at the end of an incomplete sentence, the final full stop is omitted. However, it is retained if the following ellipsis represents an omission between two complete sentences. &lt;poem&gt;The … fox jumps … The quick brown fox jumps over the lazy dog. … And if they have not died, they are still alive today. It is not cold … it is freezing cold.&lt;/poem&gt; Contrary to "The Oxford Style Guide", the "University of Oxford Style Guide" demands an ellipsis not to be surrounded by spaces, except when it stands for a pause; then, a space has to be set after the ellipsis (but not before). An ellipsis is never preceded or followed by a full stop. &lt;poem&gt;The...fox jumps... The quick brown fox jumps over the lazy dog...And if they have not died, they are still alive today. It is not cold... it is freezing cold.&lt;/poem&gt; In Polish. When applied in Polish syntax, the ellipsis is called , literally 'multidot'. The word "wielokropek" distinguishes the ellipsis of Polish syntax from that of mathematical notation, in which it is known as an . When an ellipsis replaces a fragment omitted from a quotation, the ellipsis is enclosed in parentheses or square brackets. An unbracketed ellipsis indicates an interruption or pause in speech. The syntactic rules for ellipses are standardized by the 1983 Polska Norma document PN-83/P-55366, (Rules for Setting Texts in Polish). In Russian. The combination "ellipsis+period" is replaced by the ellipsis. The combinations "ellipsis+exclamation mark" and "ellipsis+question mark" are written in this way: !.. ?.. In Japanese. The most common character corresponding to an ellipsis is called "3"-ten rīdā (""3"-dot leaders", ). 2-ten rīdā exists as a character, but it is used less commonly. In writing, the ellipsis consists usually of six dots (two "3"-ten rīdā characters, ). Three dots (one "3"-ten rīdā character) may be used where space is limited, such as in a header. However, variations in the number of dots exist. In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line), as in the standard Japanese Windows fonts; in vertically written text the dots are always centered horizontally. As the Japanese word for dot is pronounced ", the dots are colloquially called " (, akin to the English "dot dot dot"). In text in Japanese media, such as in manga or video games, ellipses are much more frequent than in English, and are often changed to another punctuation sign in translation. 
The ellipsis by itself represents speechlessness, or a "pregnant pause". Depending on the context, this could be anything from an admission of guilt to an expression of being dumbfounded at another person's words or actions. As a device, the "ten-ten-ten" is intended to focus the reader on a character while allowing the character to not speak any dialogue. This conveys to the reader a focus of the narrative "camera" on the silent subject, implying an expectation of some motion or action. It is not unheard of to see inanimate objects "speaking" the ellipsis. In Chinese. In Chinese, the ellipsis is six dots (in two groups of three dots, occupying the same horizontal or vertical space as two characters). In horizontally written text the dots are commonly vertically centered along the midline (halfway between the Roman descent and Roman ascent, or equivalently halfway between the Roman baseline and the capital height, i.e. ). This is generally true of Traditional Chinese, while Simplified Chinese tends to have the ellipses aligned with the baseline ; in vertically written text the dots are always centered horizontally (i.e. ). Also note that Taiwan and Mainland China have different punctuation standards. In Spanish. In Spanish, the ellipsis is commonly used as a substitute of "et cetera" at the end of unfinished lists. So it means "and so forth" or "and other things". Other use is the suspension of a part of a text, or a paragraph, or a phrase or a part of a word because it is obvious, or unnecessary, or implied. For instance, sometimes the ellipsis is used to avoid the complete use of expletives. When the ellipsis is placed alone into a parenthesis (...) or—less often—between brackets [...], which is what happens usually within a text transcription, it means the original text had more contents on the same position but are not useful to our target in the transcription. When the suppressed text is at the beginning or at the end of a text, the ellipsis does not need to be placed in a parenthesis. The number of dots is three and only three. In French. In French, the ellipsis is commonly used at the end of lists to represent . In French typography, the ellipsis is written immediately after the preceding word, but has a space after it, for example: . If, exceptionally, it begins a sentence, there is a space before and after, for example: . However, any omitted word, phrase or line at the end of a quoted passage would be indicated as follows: [...] (space before and after the square brackets but not inside), for example: . In German. In German, the ellipsis in general is surrounded by spaces, if it stands for one or more omitted words. On the other side there is no space between a letter or (part of) a word and an ellipsis, if it stands for one or more omitted letters, that should stick to the written letter or letters. Example for both cases, using German style: "The first el...is stands for omitted letters, the second ... for an omitted word." If the ellipsis is at the end of a sentence, the final full stop is omitted. Example: "I think that ..." In Italian. The suggests the use of an ellipsis () to indicate a pause longer than a period and, when placed between brackets, the omission of letters, words or phrases. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;(Gabriele D’Annunzio, "Il piacere") In mathematical notation. An ellipsis is used in mathematics to mean "and so forth"; usually indicating the omission of terms that follow an obvious pattern as indicated by included terms. 
The whole numbers from 1 to 100 can be shown as: formula_0 The positive whole numbers, an infinite list, can be shown as: formula_1 To indicate omitted terms in a repeated operation, an ellipsis is sometimes raised from the baseline, as: formula_2 But, this raised formatting is not standard. For example, Russian mathematical texts use the baseline format. The ellipsis is not a formally defined mathematical symbol. Repeated summations or products may be more formally denoted using capital sigma and capital pi notation, respectively: formula_3 (see termial) formula_4 (see factorial) Ellipsis is sometimes used where the pattern is not clear. For example, indicating the indefinite continuation of an irrational number such as: formula_5 It can be useful to display a formula compactly, for example: formula_6 In set notation, the ellipsis is used as horizontal, vertical and diagonal for indicating missing matrix terms, such as the size-"n" identity matrix: formula_7 In computer programming. Some programming languages use ellipsis to indicate a range or for a variable argument list. The CSS codice_0 property can be set to codice_1, which cuts off text with an ellipsis when it overflows the content area. In computer user interface. More. An ellipsis is sometimes used as the label for a button to access user interface that has been omitted – probably due to space limitations – particularly in mobile apps running on small screen devices. This may be described as a "more button". Similar functionality may be accessible via a button with a hamburger icon (≡) or a narrow version called the kebab icon which is a vertical ellipsis (⋮). More info needed. According to some style guides, a menu item or button labeled with a trailing ellipsis requests an operation that cannot be completed without additional information and selecting it will prompt the user for input. Without an ellipsis, selecting the item or button will perform an action without user input. For example, the menu item "Save" overwrites an existing file whereas "Save as..." prompts the user for save options before saving. Busy/progress. Ellipsis is commonly used to indicate that a longer-lasting operation is in progress like "Loading...", "Saving...". Sometimes progress is animated with an ellipse-like construct of repeatedly adding dots to a label. In texting. In text-based communications, the ellipsis may indicate: Although an ellipsis is complete with three periods (...), an ellipsis-like construct with more dots is used to indicate "trailing-off" or "silence". The extent of repetition in itself might serve as an additional contextualization or paralinguistic cue; one paper wrote that they "extend the lexical meaning of the words, add character to the sentences, and allow fine-tuning and personalisation of the message". While composing a text message, some environments show others in the conversation a typing awareness indicator ellipsis to indicate remote activity. Computer representations. In computing, several ellipsis characters have been codified. Unicode. Unicode defines the following ellipsis characters: Unicode recognizes a series of three period characters () as compatibility equivalent (though not canonical) to the horizontal ellipsis character. HTML. In HTML, the horizontal ellipsis character may be represented by the entity reference codice_2 (since HTML 4.0), and the vertical ellipsis character by the entity reference codice_3 (since HTML 5.0). 
Alternatively, in HTML, XML, and SGML, a numeric character reference such as codice_4 or codice_5 can be used. TeX. In the TeX typesetting system, the following types of ellipsis are available: In LaTeX, the reverse orientation of codice_6 can be achieved with codice_7 provided by the codice_8 package: codice_9 yields . With the codice_10 package from AMS-LaTeX, more specific ellipses are provided for math mode. Other. The horizontal ellipsis character also appears in older character maps: Note that ISO/IEC 8859 encoding series provides no code point for ellipsis. As with all characters, especially those outside the ASCII range, the author, sender and receiver of an encoded ellipsis must be in agreement upon what bytes are being used to represent the character. Naive text processing software may improperly assume that a particular encoding is being used, resulting in mojibake. Input. In Windows, the horizontal ellipsis can be inserted with , using the numeric keypad. In macOS, it can be inserted with (on an English language keyboard). In some Linux distributions, it can be inserted with (this produces an interpunct on other systems), or . In Android, ellipsis is a long-press key. If Gboard is in alphanumeric layout, change to numeric and special characters layout by pressing from alphanumeric layout. Once in numeric and special characters layout, long press key to insert an ellipsis. This is a single symbol without spaces in between the three dots ( ). In Chinese and sometimes in Japanese, ellipsis characters are made by entering two consecutive "horizontal ellipses", each with Unicode code point U+2026. In vertical texts, the application should rotate the symbol accordingly. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
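The Unicode and HTML representations described above can be checked directly with a short script. The following is a minimal Python sketch (not part of the original article) using only the standard unicodedata and html modules; the variable name ELLIPSIS is just an illustrative choice.

```python
import unicodedata
import html

ELLIPSIS = "\u2026"  # U+2026 HORIZONTAL ELLIPSIS

# Unicode name and UTF-8 encoding of the precomposed character
print(unicodedata.name(ELLIPSIS))              # HORIZONTAL ELLIPSIS
print(ELLIPSIS.encode("utf-8"))                # b'\xe2\x80\xa6'

# Compatibility (not canonical) equivalence to a run of three full stops
print(unicodedata.normalize("NFKC", ELLIPSIS) == "...")   # True

# HTML named and numeric character references both decode to U+2026
print(html.unescape("&hellip;") == ELLIPSIS)   # True
print(html.unescape("&#8230;") == ELLIPSIS)    # True (8230 = 0x2026)
```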
[ { "math_id": 0, "text": "1,2,3,\\ldots,100" }, { "math_id": 1, "text": "1,2,3,\\ldots" }, { "math_id": 2, "text": "1+2+3+\\cdots+100" }, { "math_id": 3, "text": "1+2+3+\\cdots+100\\ = \\sum_{n=1}^{100} n = 100?" }, { "math_id": 4, "text": "1 \\times 2 \\times 3 \\times \\cdots \\times 100\\ = \\prod_{n=1}^{100} n = 100!" }, { "math_id": 5, "text": "\\pi=3.14159265\\ldots" }, { "math_id": 6, "text": "1+4+9+\\cdots+n^2+\\cdots+400" }, { "math_id": 7, "text": "I_n = \\begin{bmatrix}1 & 0 & \\cdots & 0 \\\\0 & 1 & \\cdots & 0 \\\\\\vdots & \\vdots & \\ddots & \\vdots \\\\0 & 0 & \\cdots & 1 \\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=9596
9597100
Fair coin
Statistical concept In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin. In theoretical studies, the assumption that a coin is fair is often made by referring to an ideal coin. John Edmund Kerrich performed experiments in coin flipping and found that a coin made from a wooden disk about the size of a crown and coated on one side with lead landed heads (wooden side up) 679 times out of 1000. In this experiment the coin was tossed by balancing it on the forefinger, flipping it using the thumb so that it spun through the air for about a foot before landing on a flat cloth spread over a table. Edwin Thompson Jaynes claimed that when a coin is caught in the hand, instead of being allowed to bounce, the physical bias in the coin is insignificant compared to the method of the toss, where with sufficient practice a coin can be made to land heads 100% of the time. Exploring the problem of checking whether a coin is fair is a well-established pedagogical tool in teaching statistics. Probability space definition. In probability theory, a fair coin is defined as a probability space formula_0, which is in turn defined by the sample space, event space, and probability measure. Using formula_1 for heads and formula_2 for tails, the sample space of a coin is defined as: formula_3 The event space for a coin includes all sets of outcomes from the sample space which can be assigned a probability, which is the full power set formula_4. Thus, the event space is defined as: formula_5 formula_6 is the event where neither outcome happens (which is impossible and can therefore be assigned 0 probability), and formula_7 is the event where either outcome happens (which is guaranteed and can be assigned 1 probability). Because the coin is fair, each single outcome has probability 1/2. The probability measure is therefore defined by assigning probability 0 to the empty event, probability 1/2 to each of the single-outcome events {heads} and {tails}, and probability 1 to the event that either outcome happens. So the full probability space which defines a fair coin is the triplet formula_0 as defined above. Note that this is not a random variable, because heads and tails do not have inherent numerical values in the way the faces of a die do. A random variable adds the additional structure of assigning a numerical value to each outcome. Common choices are formula_8 or formula_9. Role in statistical teaching and theory. The probabilistic and statistical properties of coin-tossing games are often used as examples in both introductory and advanced textbooks, and these are mainly based on assuming that a coin is fair or "ideal". For example, Feller uses this basis to introduce both the idea of random walks and to develop tests for homogeneity within a sequence of observations by looking at the properties of the runs of identical values within a sequence. The latter leads on to a runs test. A time series consisting of the results from tossing a fair coin is called a Bernoulli process. Fair results from a biased coin. If a cheat has altered a coin to prefer one side over another (a biased coin), the coin can still be used for fair results by changing the game slightly. John von Neumann gave the following procedure: 1. Toss the coin twice. 2. If the results match, discard both results and return to step 1. 3. If the results differ, use the first result. The reason this process produces a fair result is that the probability of getting heads and then tails must be the same as the probability of getting tails and then heads, as the coin is not changing its bias between flips and the two flips are independent.
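Von Neumann's procedure is straightforward to simulate. The following is a minimal Python sketch, not taken from the article; the helper biased_flip and the bias value 0.8 are illustrative assumptions, with random.random() standing in for a toss of the biased coin.

```python
import random

def biased_flip(p_heads: float) -> str:
    """Illustrative helper: one toss of a coin that lands heads with probability p_heads."""
    return "H" if random.random() < p_heads else "T"

def von_neumann_fair_flip(p_heads: float) -> str:
    """Extract a fair result from a biased coin: toss twice,
    discard matching pairs, and use the first toss of a differing pair."""
    while True:
        first, second = biased_flip(p_heads), biased_flip(p_heads)
        if first != second:
            return first  # HT and TH are equally likely

# Quick check: even with a heavily biased coin the output is close to 50/50.
results = [von_neumann_fair_flip(0.8) for _ in range(100_000)]
print(results.count("H") / len(results))  # ~0.5
```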
The procedure works only if getting one result on a trial does not change the bias on subsequent trials, which is the case for most non-malleable coins (but "not" for processes such as the Pólya urn). Because the procedure discards the two-heads and two-tails outcomes and repeats, the coin flipper is left with only the two remaining outcomes, which have equal probability. The procedure "only" works if the tosses are paired properly; if part of a pair is reused in another pair, the fairness may be ruined. Also, the coin must not be so biased that one side has a probability of zero. This method may be extended by also considering sequences of four tosses. That is, if the coin is flipped twice but the results match, and the coin is flipped twice again but the results now match for the opposite side, then the first result can be used. This is because HHTT and TTHH are equally likely. This can be extended to any multiple of 2. The expected number of flips in the "n"th game, formula_10, is not hard to calculate. First notice that in step 3, whichever of the events formula_11 or formula_12 occurs, the coin has been flipped twice, so formula_13. In step 2 (formula_14 or formula_15), however, the procedure must start over, so the game costs two flips plus the expected number of flips of the next game; that is, formula_16. Because the procedure restarts from the same situation, the expected value of the next game equals that of the current game (or of any other game), so it does not depend on "n"; thus formula_17. (This can also be understood by viewing the process as a martingale formula_18: taking the expectation again gives formula_19, and by the law of total expectation formula_20.) Hence we have: formula_21 formula_22 The more biased the coin is, the more trials are expected before a fair result is obtained. A better algorithm when P(H) is known. Suppose that the bias formula_23 is known. In this section, we provide a simple algorithm that improves the expected number of coin tosses. The algorithm maintains an auxiliary "ideal" probability formula_24. We first consider an algorithm to generate a coin with an arbitrary target bias formula_25; to get a fair coin, the algorithm first sets formula_26 and then executes the following steps. 1. Toss the biased coin and let formula_27 be the result. 2. If formula_28: return formula_29 if formula_30; otherwise set formula_31 and go back to step 1. 3. If formula_32: return formula_33 if formula_34; otherwise set formula_35 and go back to step 1. Note that the above algorithm does not reach the optimal expected number of coin tosses, which is formula_36, where formula_37 is the binary entropy function. There are algorithms that reach this optimal value in expectation, but they are more sophisticated than the one shown above. The above algorithm has an expected number of biased coin flips of formula_38, which is exactly half that of von Neumann's trick. Analysis. Proving the correctness of the above algorithm is a good exercise in conditional expectation. We now analyze the expected number of coin flips. Given the bias formula_39 and the current value of formula_24, one can define a function formula_40 that represents the expected number of coin tosses before a result is returned. The recurrence relation for formula_40 can be described as follows: formula_41 This solves to the following closed form: formula_42 When formula_43, the expected number of coin flips is formula_44, as desired. Remark. The idea of this algorithm can be extended to generating any biased coin with a specified probability.
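A direct transcription of these steps into code might look as follows. This is only a sketch under the assumptions already noted (a known bias b, an illustrative helper biased_flip standing in for the physical coin); starting from p = 0.5 yields a fair flip.

```python
import random

def biased_flip(b: float) -> str:
    """Illustrative helper: toss a coin whose known probability of heads is b."""
    return "H" if random.random() < b else "T"

def simulate_coin(p: float, b: float) -> str:
    """Simulate a coin with target bias p using a coin of known bias b.
    Setting p = 0.5 gives a fair coin flip."""
    while True:
        x = biased_flip(b)
        if p >= b:
            if x == "H":
                return "H"
            p = (p - b) / (1 - b)   # rescale the remaining probability interval
        else:
            if x == "T":
                return "T"
            p = p / b               # rescale the remaining probability interval

# Fair flips from a coin with b = 0.8; expected cost 1/(2*0.8*0.2) = 3.125 flips each.
flips = [simulate_coin(0.5, 0.8) for _ in range(100_000)]
print(flips.count("H") / len(flips))   # ~0.5
```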
[ { "math_id": 0, "text": "(\\Omega, \\mathcal{F}, P)" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "\\Omega = \\{H, T\\}" }, { "math_id": 4, "text": "2^{\\Omega}" }, { "math_id": 5, "text": "\\mathcal{F} = \\{\\{\\}, \\{H\\}, \\{T\\}, \\{H, T\\}\\}" }, { "math_id": 6, "text": "\\{\\}" }, { "math_id": 7, "text": "\\{H, T\\}" }, { "math_id": 8, "text": "(H,T)\\to(1,0)" }, { "math_id": 9, "text": "(H,T)\\to(1,-1)" }, { "math_id": 10, "text": "E(F_n)" }, { "math_id": 11, "text": "HT" }, { "math_id": 12, "text": "TH" }, { "math_id": 13, "text": "E(F_n|HT,TH)=2" }, { "math_id": 14, "text": "TT" }, { "math_id": 15, "text": "HH" }, { "math_id": 16, "text": "E(F_n|TT,HH)=2+E(F_{n+1})" }, { "math_id": 17, "text": "E(F)=E(F_n)=E(F_{n+1})" }, { "math_id": 18, "text": "E(F_{n+1}|F_n,...,F_1)=F_n" }, { "math_id": 19, "text": "E(E(F_{n+1}|F_n,...,X_1))=E(F_n)" }, { "math_id": 20, "text": "E(F_{n+1})=E(E(F_{n+1}|F_n,...,F_1))=E(F_n)" }, { "math_id": 21, "text": "\\begin{align} \nE(F)\n&=E(F_n)\\\\\n&=E(F_n|TT,HH)P(TT,HH)+E(F_n|HT,TH)P(HT,TH)\\\\\n&=(2+E(F_{n+1}))P(TT,HH)+2P(HT,TH)\\\\\n&=(2+E(F))P(TT,HH))+2P(HT,TH)\\\\\n&=(2+E(F))(P(TT)+P(HH))+2(P(HT)+P(TH))\\\\\n&=(2+E(F))(P(T)^2+P(H)^2)+4P(H)P(T)\\\\\n&=(2+E(F))(1-2P(H)P(T))+4P(H)P(T)\\\\\n&=2+E(F)-2P(H)P(T)E(F)\\\\\n\\end{align}" }, { "math_id": 22, "text": "\\therefore E(F)=2+E(F)-2P(H)P(T)E(F)\\Rightarrow E(F)=\\frac{1}{P(H)P(T)}=\\frac{1}{P(H)(1-P(H))}" }, { "math_id": 23, "text": "b:=P(\\mathtt{H})" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "b" }, { "math_id": 26, "text": "p = 0.5" }, { "math_id": 27, "text": "X \\in \\{\\mathtt{H}, \\mathtt{T}\\}" }, { "math_id": 28, "text": "p \\ge b" }, { "math_id": 29, "text": "\\mathtt{H}" }, { "math_id": 30, "text": "X=\\mathtt{H}" }, { "math_id": 31, "text": "p\\gets \\frac{p-b}{1-b}" }, { "math_id": 32, "text": "p < b" }, { "math_id": 33, "text": "\\mathtt{T}" }, { "math_id": 34, "text": "X=\\mathtt{T}" }, { "math_id": 35, "text": "p\\gets \\frac{p}{b}" }, { "math_id": 36, "text": "1/H(b)" }, { "math_id": 37, "text": "H(b)" }, { "math_id": 38, "text": "\\frac{1}{2 b(1-b)}" }, { "math_id": 39, "text": "b=P(H)" }, { "math_id": 40, "text": "f_b(p)" }, { "math_id": 41, "text": "\nf_b(p) = \\begin{cases}\n1 + b\\cdot f_b\\left( \\frac{p}{b} \\right) & \\text{if } p < b \\\\\n1 + (1-b)\\cdot f_b\\left( \\frac{p-b}{1-b} \\right) & \\text{if } p \\ge b\n\\end{cases}\n" }, { "math_id": 42, "text": "\nf_b(p) = \\frac{b + (1-2b)p}{b(1-b)}\n" }, { "math_id": 43, "text": "p=0.5" }, { "math_id": 44, "text": "f_b(0.5) = \\frac{1}{2b(1-b)}" } ]
https://en.wikipedia.org/wiki?curid=9597100
9598
Electronvolt
Unit of energy In physics, an electronvolt (symbol eV), also written electron-volt and electron volt, is the measure of an amount of kinetic energy gained by a single electron accelerating through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equal to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to the exact value 1.602176634×10−19 J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge "q" gains an energy "E" = "qV" after passing through a voltage of "V". Definition and use. An electronvolt is the amount of energy gained or lost by a single electron when it moves through an electric potential difference of one volt. Hence, it has a value of one volt, which is 1 J/C, multiplied by the elementary charge "e" = 1.602176634×10−19 C. Therefore, one electronvolt is equal to 1.602176634×10−19 J. The electronvolt (eV) is a unit of energy, but is not an SI unit. It is a commonly used unit of energy within physics, widely used in solid state, atomic, nuclear and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes "milli-" (10−3), "kilo-" (103), "mega-" (106), "giga-" (109), "tera-" (1012), "peta-" (1015) or "exa-" (1018), the respective symbols being meV, keV, MeV, GeV, TeV, PeV and EeV. The SI unit of energy is the joule (J). In some older documents, and in the name "Bevatron", the symbol "BeV" is used, where the "B" stands for "billion". The symbol "BeV" is therefore equivalent to "GeV", though neither is an SI unit. Relation to other physical properties and units. In the fields of physics in which the electronvolt is used, other quantities are typically measured using units derived from the electronvolt as a product with fundamental constants of importance in the theory. Mass. By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/"c"2, where "c" is the speed of light in vacuum (from "E" = "mc"2). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with "c" set to 1. The kilogram equivalent of 1 eV/"c"2 is: formula_0 For example, an electron and a positron, each with a mass of 0.511 MeV/"c"2, can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/"c"2. In general, the masses of all hadrons are of the order of 1 GeV/"c"2, which makes the GeV/"c"2 a convenient unit of mass for particle physics: &lt;templatestyles src="Block indent/styles.css"/&gt;1 GeV/"c"2 = 1.78266192×10−27 kg. The atomic mass constant ("m"u), one twelfth of the mass of a carbon-12 atom, is close to the mass of a proton. To convert to electronvolt mass-equivalent, use the formula: &lt;templatestyles src="Block indent/styles.css"/&gt;"m"u = 1 Da = 931.494 MeV/"c"2 = 0.931494 GeV/"c"2. Momentum. By dividing a particle's kinetic energy in electronvolts by the fundamental constant "c" (the speed of light), one can describe the particle's momentum in units of eV/"c". In natural units in which the fundamental velocity constant "c" is numerically 1, the "c" may informally be omitted to express momentum using the unit electronvolt. The energy–momentum relation formula_1 in natural units (with formula_2) formula_3 is a Pythagorean equation.
When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as formula_4 in high-energy physics, so that an applied energy expressed in the unit eV conveniently results in a numerically approximately equivalent change of momentum when expressed in the unit eV/"c". The dimension of momentum is M L T−1. The dimension of energy is M L2 T−2. Dividing a unit of energy (such as eV) by a fundamental constant (such as the speed of light) that has the dimension of velocity (L T−1) facilitates the required conversion for using a unit of energy to quantify momentum. For example, if the momentum "p" of an electron is 1 GeV/"c", then the conversion to the MKS system of units can be achieved by: formula_5 Distance. In particle physics, a system of natural units in which the speed of light in vacuum "c" and the reduced Planck constant "ħ" are dimensionless and equal to unity is widely used: "c" = "ħ" = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented using a unit of inverse particle mass. Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: formula_6 The above relations also allow expressing the mean lifetime "τ" of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = "ħ"/"τ". For example, a meson with a lifetime of 1.530(9) picoseconds has a mean decay length of "cτ" = 459 μm and a decay width of about 4.3×10−4 eV. Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds. Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: formula_7 Temperature. In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: formula_8 where "k"B is the Boltzmann constant. The division by "k"B is assumed when using the electronvolt to express temperature; for example, a typical magnetic confinement fusion plasma temperature is 15 keV (kiloelectronvolt), which is equal to 174 MK (megakelvin). As an approximation: "k"B"T" is about 0.025 eV at a temperature of 20 °C (293 K). Wavelength. The energy "E", frequency "ν", and wavelength "λ" of a photon are related by formula_9 where "h" is the Planck constant and "c" is the speed of light. This reduces to formula_10 A photon of green light, with a wavelength of roughly 530 nm, has an energy of approximately 2.3 eV. Similarly, a 1 eV photon lies in the infrared, with a wavelength of about 1240 nm and a frequency of about 242 THz. Scattering experiments. In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material. Energy comparisons. Molar energy.
One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy – this corresponds to the Faraday constant ("F" ≈ 96485 C/mol), where the energy in joules of "n" moles of particles each with energy "E" eV is equal to "E"·"F"·"n". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
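The conversion factors discussed in this article can be collected in a short script. The following Python sketch is illustrative only; the constants are the exact SI values quoted above, and the example inputs (a 15 keV plasma temperature, a 532 nm photon) are arbitrary illustrations.

```python
# Handy electronvolt conversions (2019 SI exact values).
E_CHARGE = 1.602176634e-19      # C; 1 eV is this many joules (exact)
K_B      = 1.380649e-23         # Boltzmann constant, J/K (exact)
H_PLANCK = 6.62607015e-34       # Planck constant, J*s (exact)
C_LIGHT  = 299_792_458          # speed of light, m/s (exact)
N_A      = 6.02214076e23        # Avogadro constant, 1/mol (exact)

eV = E_CHARGE                    # 1 eV in joules

print("1 eV in joules:          ", eV)
print("1 eV/c^2 in kilograms:   ", eV / C_LIGHT**2)                  # ~1.783e-36 kg
print("1 eV as a temperature (K):", eV / K_B)                        # ~11604.5 K
print("15 keV plasma in MK:     ", 15e3 * eV / K_B / 1e6)            # ~174 MK
print("532 nm photon in eV:     ", H_PLANCK * C_LIGHT / 532e-9 / eV) # ~2.33 eV
print("1 eV per particle, per mole (kJ):", eV * N_A / 1e3)           # ~96.5 kJ
```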
[ { "math_id": 0, "text": "1\\; \\text{eV}/c^2 = \\frac{(1.602\\ 176\\ 634 \\times 10^{-19} \\, \\text{C}) \\times 1 \\, \\text{V}}{(299\\ 792\\ 458\\; \\mathrm{m/s})^2} = 1.782\\ 661\\ 92 \\times 10^{-36}\\; \\text{kg}." }, { "math_id": 1, "text": "E^2 = p^2 c^2 + m_0^2 c^4" }, { "math_id": 2, "text": "c=1" }, { "math_id": 3, "text": "E^2 = p^2 + m_0^2" }, { "math_id": 4, "text": "E \\simeq p" }, { "math_id": 5, "text": "p = 1\\; \\text{GeV}/c = \\frac{(1 \\times 10^9) \\times (1.602\\ 176\\ 634 \\times 10^{-19} \\; \\text{C}) \\times (1 \\; \\text{V})}{2.99\\ 792\\ 458 \\times 10^8\\; \\text{m}/\\text{s}} = 5.344\\ 286 \\times 10^{-19}\\; \\text{kg} {\\cdot} \\text{m}/\\text{s}." }, { "math_id": 6, "text": "\\hbar = 1.054\\ 571\\ 817\\ 646\\times 10^{-34}\\ \\mathrm{J{\\cdot}s} = 6.582\\ 119\\ 569\\ 509\\times 10^{-16}\\ \\mathrm{eV{\\cdot}s}." }, { "math_id": 7, "text": "\\frac{1\\; \\text{eV}}{hc} = \\frac{1.602\\ 176\\ 634 \\times 10^{-19} \\; \\text{J}}{(2.99\\ 792\\ 458 \\times 10^{10}\\; \\text{cm}/\\text{s}) \\times (6.62\\ 607\\ 015 \\times 10^{-34}\\; \\text{J} {\\cdot} \\text{s})} \\thickapprox 8065.5439 \\; \\text{cm}^{-1}." }, { "math_id": 8, "text": "{1 \\,\\mathrm{eV} / k_{\\text{B}}} = {1.602\\ 176\\ 634 \\times 10^{-19} \\text{ J} \\over 1.380\\ 649 \\times 10^{-23} \\text{ J/K}} = 11\\ 604.518\\ 12 \\text{ K}," }, { "math_id": 9, "text": "E = h\\nu = \\frac{hc}{\\lambda}\n= \\frac{\\mathrm{4.135\\ 667\\ 696 \\times 10^{-15}\\;eV/Hz} \\times \\mathrm{299\\, 792\\, 458\\;m/s}}{\\lambda}" }, { "math_id": 10, "text": "\\begin{align}\nE\n&= 4.135\\ 667\\ 696 \\times 10^{-15}\\;\\mathrm{eV/Hz}\\times\\nu \\\\[4pt]\n&=\\frac{1\\ 239.841\\ 98\\;\\mathrm{eV{\\cdot}nm}}{\\lambda}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=9598
9599471
Higgs prime
A Higgs prime, named after Denis Higgs, is a prime number with a totient (one less than the prime) that evenly divides the square of the product of the smaller Higgs primes. (This can be generalized to cubes, fourth powers, etc.) To put it algebraically, given an exponent "a", a Higgs prime "Hp""n" satisfies formula_0 where φ("x") is Euler's totient function. For squares, the first few Higgs primes are 2, 3, 5, 7, 11, 13, 19, 23, 29, 31, 37, 43, 47, ... (sequence in the OEIS). So, for example, 13 is a Higgs prime because the square of the product of the smaller Higgs primes is 5336100, and divided by 12 this is 444675. But 17 is not a Higgs prime because the square of the product of the smaller Higgs primes is 901800900, which leaves a remainder of 4 when divided by 16. From observation of the first few Higgs primes for squares through seventh powers, it would seem more compact to list those primes that are not Higgs primes. Observation further reveals that a Fermat prime formula_1 cannot be a Higgs prime for the "a"th power if "a" is less than 2"n". It is not known whether there are infinitely many Higgs primes for any exponent "a" greater than 1. The situation is quite different for "a" = 1: there are only four of them, 2, 3, 7 and 43 (a sequence suspiciously similar to Sylvester's sequence). A computer search found that about a fifth of the primes below a million are Higgs primes for squares, and its authors concluded that even if the sequence of Higgs primes for squares is finite, "a computer enumeration is not feasible."
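The definition lends itself to a short search program. The following Python sketch (not from the article) reproduces the list of Higgs primes for squares given above; changing the exponent a explores cubes, fourth powers, and so on.

```python
def is_prime(n: int) -> bool:
    """Simple trial-division primality test (adequate for small limits)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def higgs_primes(limit: int, a: int = 2):
    """Primes p <= limit such that p - 1 divides the a-th power of the
    product of all smaller Higgs primes (for the same exponent a)."""
    found = []
    product_pow = 1          # (product of Higgs primes found so far) ** a
    for p in range(2, limit + 1):
        if is_prime(p) and product_pow % (p - 1) == 0:
            found.append(p)
            product_pow *= p ** a
    return found

print(higgs_primes(50))          # [2, 3, 5, 7, 11, 13, 19, 23, 29, 31, 37, 43, 47]
print(17 in higgs_primes(100))   # False: 901800900 % 16 == 4
```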
[ { "math_id": 0, "text": "\\phi(Hp_n)|\\prod_{i = 1}^{n - 1} {Hp_i}^a\\mbox{ and }Hp_n > Hp_{n - 1}" }, { "math_id": 1, "text": "2^{2^n} + 1" } ]
https://en.wikipedia.org/wiki?curid=9599471
9601
Electrochemistry
Branch of chemistry Electrochemistry is the branch of physical chemistry concerned with the relationship between electrical potential difference and identifiable chemical change. These reactions involve electrons moving via an electronically-conducting phase (typically an external electrical circuit, but not necessarily, as in electroless plating) between electrodes separated by an ionically conducting and electronically insulating electrolyte (or ionic species in a solution). When a chemical reaction is driven by an electrical potential difference, as in electrolysis, or if a potential difference results from a chemical reaction as in an electric battery or fuel cell, it is called an "electrochemical" reaction. Unlike in other chemical reactions, in electrochemical reactions electrons are not transferred directly between atoms, ions, or molecules, but via the aforementioned electronically-conducting circuit. This phenomenon is what distinguishes an electrochemical reaction from a conventional chemical reaction. History. 16th–18th century. Understanding of electrical matters began in the sixteenth century. During this century, the English scientist William Gilbert spent 17 years experimenting with magnetism and, to a lesser extent, electricity. For his work on magnets, Gilbert became known as the "Father of Magnetism." He discovered various methods for producing and strengthening magnets. In 1663, the German physicist Otto von Guericke created the first electric generator, which produced static electricity by applying friction in the machine. The generator was made of a large sulfur ball cast inside a glass globe, mounted on a shaft. The ball was rotated by means of a crank and an electric spark was produced when a pad was rubbed against the ball as it rotated. The globe could be removed and used as source for experiments with electricity. By the mid-18th century the French chemist Charles François de Cisternay du Fay had discovered two types of static electricity, and that like charges repel each other whilst unlike charges attract. Du Fay announced that electricity consisted of two fluids: "vitreous" (from the Latin for "glass"), or positive, electricity; and "resinous," or negative, electricity. This was the "two-fluid theory" of electricity, which was to be opposed by Benjamin Franklin's "one-fluid theory" later in the century. In 1785, Charles-Augustin de Coulomb developed the law of electrostatic attraction as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley in England. In the late 18th century the Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between chemical reactions and electricity on his essay "De Viribus Electricitatis in Motu Musculari Commentarius" (Latin for Commentary on the Effect of Electricity on Muscular Motion) in 1791 where he proposed a "nerveo-electrical substance" on biological life forms. In his essay Galvani concluded that animal tissue contained a here-to-fore neglected innate, vital force, which he termed "animal electricity," which activated nerves and muscles spanned by metal probes. He believed that this new force was a form of electricity in addition to the "natural" form produced by lightning or by the electric eel and torpedo ray as well as the "artificial" form produced by friction (i.e., static electricity). 
Galvani's scientific colleagues generally accepted his views, but Alessandro Volta rejected the idea of an "animal electric fluid," replying that the frog's legs responded to differences in metal temper, composition, and bulk. Galvani refuted this by obtaining muscular action with two pieces of the same material. Nevertheless, Volta's experimentation led him to develop the first practical battery, which took advantage of the relatively high energy (weak bonding) of zinc and could deliver an electrical current for much longer than any other device known at the time. 19th century. In 1800, William Nicholson and Johann Wilhelm Ritter succeeded in decomposing water into hydrogen and oxygen by electrolysis using Volta's battery. Soon thereafter Ritter discovered the process of electroplating. He also observed that the amount of metal deposited and the amount of oxygen produced during an electrolytic process depended on the distance between the electrodes. By 1801, Ritter observed thermoelectric currents and anticipated the discovery of thermoelectricity by Thomas Johann Seebeck. By the 1810s, William Hyde Wollaston made improvements to the galvanic cell. Sir Humphry Davy's work with electrolysis led to the conclusion that the production of electricity in simple electrolytic cells resulted from chemical action and that chemical combination occurred between substances of opposite charge. This work led directly to the isolation of metallic sodium and potassium by electrolysis of their molten salts, and of the alkaline earth metals from theirs, in 1808. Hans Christian Ørsted's discovery of the magnetic effect of electric currents in 1820 was immediately recognized as an epoch-making advance, although he left further work on electromagnetism to others. André-Marie Ampère quickly repeated Ørsted's experiment, and formulated them mathematically. In 1821, Estonian-German physicist Thomas Johann Seebeck demonstrated the electrical potential between the juncture points of two dissimilar metals when there is a temperature difference between the joints. In 1827, the German scientist Georg Ohm expressed his law in this famous book "Die galvanische Kette, mathematisch bearbeitet" (The Galvanic Circuit Investigated Mathematically) in which he gave his complete theory of electricity. In 1832, Michael Faraday's experiments led him to state his two laws of electrochemistry. In 1836, John Daniell invented a primary cell which solved the problem of polarization by introducing copper ions into the solution near the positive electrode and thus eliminating hydrogen gas generation. Later results revealed that at the other electrode, amalgamated zinc (i.e., zinc alloyed with mercury) would produce a higher voltage. William Grove produced the first fuel cell in 1839. In 1846, Wilhelm Weber developed the electrodynamometer. In 1868, Georges Leclanché patented a new cell which eventually became the forerunner to the world's first widely used battery, the zinc–carbon cell. Svante Arrhenius published his thesis in 1884 on "Recherches sur la conductibilité galvanique des électrolytes" (Investigations on the galvanic conductivity of electrolytes). From his results the author concluded that electrolytes, when dissolved in water, become to varying degrees split or dissociated into electrically opposite positive and negative ions. In 1886, Paul Héroult and Charles M. Hall developed an efficient method (the Hall–Héroult process) to obtain aluminium using electrolysis of molten alumina. 
In 1894, Friedrich Ostwald concluded important studies of the conductivity and electrolytic dissociation of organic acids. Walther Hermann Nernst developed the theory of the electromotive force of the voltaic cell in 1888. In 1889, he showed how the characteristics of the voltage produced could be used to calculate the free energy change in the chemical reaction producing the voltage. He constructed an equation, known as Nernst equation, which related the voltage of a cell to its properties. In 1898, Fritz Haber showed that definite reduction products can result from electrolytic processes if the potential at the cathode is kept constant. In 1898, he explained the reduction of nitrobenzene in stages at the cathode and this became the model for other similar reduction processes. 20th century. In 1902, The Electrochemical Society (ECS) was founded. In 1909, Robert Andrews Millikan began a series of experiments (see oil drop experiment) to determine the electric charge carried by a single electron. In 1911, Harvey Fletcher, working with Millikan, was successful in measuring the charge on the electron, by replacing the water droplets used by Millikan, which quickly evaporated, with oil droplets. Within one day Fletcher measured the charge of an electron within several decimal places. In 1923, Johannes Nicolaus Brønsted and Martin Lowry published essentially the same theory about how acids and bases behave, using an electrochemical basis. In 1937, Arne Tiselius developed the first sophisticated electrophoretic apparatus. Some years later, he was awarded the 1948 Nobel Prize for his work in protein electrophoresis. A year later, in 1949, the International Society of Electrochemistry (ISE) was founded. By the 1960s–1970s quantum electrochemistry was developed by Revaz Dogonadze and his students. Principles. Oxidation and reduction. The term "redox" stands for reduction-oxidation. It refers to electrochemical processes involving electron transfer to or from a molecule or ion, changing its oxidation state. This reaction can occur through the application of an external voltage or through the release of chemical energy. Oxidation and reduction describe the change of oxidation state that takes place in the atoms, ions or molecules involved in an electrochemical reaction. Formally, oxidation state is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic. An atom or ion that gives up an electron to another atom or ion has its oxidation state increase, and the recipient of the negatively charged electron has its oxidation state decrease. For example, when atomic sodium reacts with atomic chlorine, sodium donates one electron and attains an oxidation state of +1. Chlorine accepts the electron and its oxidation state is reduced to −1. The sign of the oxidation state (positive/negative) actually corresponds to the value of each ion's electronic charge. The attraction of the differently charged sodium and chlorine ions is the reason they then form an ionic bond. The loss of electrons from an atom or molecule is called oxidation, and the gain of electrons is reduction. This can be easily remembered through the use of mnemonic devices. Two of the most popular are "OIL RIG" (Oxidation Is Loss, Reduction Is Gain) and "LEO" the lion says "GER" (Lose Electrons: Oxidation, Gain Electrons: Reduction). Oxidation and reduction always occur in a paired fashion such that one species is oxidized when another is reduced. 
For cases where electrons are shared (covalent bonds) between atoms with large differences in electronegativity, the electron is assigned to the atom with the largest electronegativity in determining the oxidation state. The atom or molecule which loses electrons is known as the "reducing agent", or "reductant", and the substance which accepts the electrons is called the "oxidizing agent", or "oxidant". Thus, the oxidizing agent is always being reduced in a reaction; the reducing agent is always being oxidized. Oxygen is a common oxidizing agent, but not the only one. Despite the name, an oxidation reaction does not necessarily need to involve oxygen. In fact, a fire can be fed by an oxidant other than oxygen; fluorine fires are often unquenchable, as fluorine is an even stronger oxidant (it has a weaker bond and higher electronegativity, and thus accepts electrons even better) than oxygen. For reactions involving oxygen, the gain of oxygen implies the oxidation of the atom or molecule to which the oxygen is added (and the oxygen is reduced). In organic compounds, such as butane or ethanol, the loss of hydrogen implies oxidation of the molecule from which it is lost (and the hydrogen is reduced). This follows because the hydrogen donates its electron in covalent bonds with non-metals but it takes the electron along when it is lost. Conversely, loss of oxygen or gain of hydrogen implies reduction. Balancing redox reactions. Electrochemical reactions in water are better analyzed by using the ion-electron method, where H+, OH− ion, H2O and electrons (to compensate the oxidation changes) are added to the cell's half-reactions for oxidation and reduction. Acidic medium. In acidic medium, H+ ions and water are added to balance each half-reaction. For example, when manganese reacts with sodium bismuthate. "Unbalanced reaction": Mn2+(aq) + NaBiO3(s) → Bi3+(aq) + MnO4−(aq) "Oxidation": 4 H2O(l) + Mn2+(aq) → MnO4−(aq) + 8 H+(aq) + 5 e− "Reduction": 2 e− + 6 H+(aq) + BiO3−(s) → Bi3+(aq) + 3 H2O(l) Finally, the reaction is balanced by multiplying the stoichiometric coefficients so the numbers of electrons in both half reactions match 8 H2O(l) + 2 Mn2+(aq) → 2 MnO4−(aq) + 16 H+(aq) + 10 e− 10 e− + 30 H+(aq) + 5 BiO3−(s) → 5 Bi3+(aq) + 15 H2O(l) and adding the resulting half reactions to give the balanced reaction: 14 H+(aq) + 2 Mn2+(aq) + 5 NaBiO3(s) → 7 H2O(l) + 2 MnO4−(aq) + 5 Bi3+(aq) + 5 Na+(aq) Basic medium. In basic medium, OH− ions and water are added to balance each half-reaction. For example, in a reaction between potassium permanganate and sodium sulfite: "Unbalanced reaction": KMnO4 + Na2SO3 + H2O → MnO2 + Na2SO4 + KOH "Reduction": 3 e− + 2 H2O + MnO4− → MnO2 + 4 OH− "Oxidation": 2 OH− + SO32− → SO42− + H2O + 2 e− Here, 'spectator ions' (K+, Na+) were omitted from the half-reactions. By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match: 6 e− + 4 H2O + 2 MnO4− → 2 MnO2 + 8 OH− 6 OH− + 3 SO32− → 3 SO42− + 3 H2O + 6 e− the balanced overall reaction is obtained: 2 KMnO4 + 3 Na2SO3 + H2O → 2 MnO2 + 3 Na2SO4 + 2 KOH Neutral medium. 
The same procedure as used in acidic medium can be applied, for example, to balance the complete combustion of propane: "Unbalanced reaction": C3H8 + O2 → CO2 + H2O "Reduction": 4 H+ + O2 + 4 e− → 2 H2O "Oxidation": 6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+ By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match: 20 H+ + 5 O2 + 20 e− → 10 H2O 6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+ the balanced equation is obtained: C3H8 + 5 O2 → 3 CO2 + 4 H2O Electrochemical cells. An electrochemical cell is a device that produces an electric current from energy released by a spontaneous redox reaction. This kind of cell includes the Galvanic cell or Voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted experiments on chemical reactions and electric current during the late 18th century. Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move. The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. This flow of electrons is an electric current that can be used to do work, such as turn a motor or power a light. A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell. The half reactions in a Daniell cell are as follows: Zinc electrode (anode): Zn(s) → Zn2+(aq) + 2 e− Copper electrode (cathode): Cu2+(aq) + 2 e− → Cu(s) In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and the ions deposit at the copper cathode as an electrodeposit. This cell forms a simple battery as it will spontaneously generate a flow of electric current from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the anode and formation of copper ions at the cathode. To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while minimizing electrolyte mixing. 
To further minimize mixing of the electrolytes, a salt bridge can be used which consists of an electrolyte saturated gel in an inverted U-tube. As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte. A voltmeter is capable of measuring the change of electrical potential between the anode and the cathode. The electrochemical cell voltage is also referred to as electromotive force or emf. A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell: Zn(s) | Zn2+ (1 M) || Cu2+ (1 M) | Cu(s) First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the limit between the phases (oxidation changes). The double vertical lines represent the saline bridge on the cell. Finally, the oxidized form of the metal to be reduced at the cathode, is written, separated from its reduced form by the vertical line. The electrolyte concentration is given as it is an important variable in determining the exact cell potential. Standard electrode potential. To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). The standard hydrogen electrode undergoes the reaction 2 H+(aq) + 2 e− → H2 which is shown as a reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term standard in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H+ activity equal to 1 (usually assumed to be [H+] = 1 mol/liter, i.e. pH = 0). The SHE electrode can be connected to any other electrode by a salt bridge and an external circuit to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, then that means it is a strongly reducing electrode which forces the SHE to be the anode (an example is Cu in aqueous CuSO4 with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode is more oxidizing than the SHE (such as Zn in ZnSO4 where the standard electrode potential is −0.76 V). Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). The one that is smaller will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode. 
"E"°cell = "E"°red (cathode) – "E"°red (anode) = "E"°red (cathode) + "E"°oxi (anode) For example, the standard electrode potential for a copper electrode is: "Cell diagram" Pt(s) | H2 (1 atm) | H+ (1 M) || Cu2+ (1 M) | Cu(s) "E"°cell = "E"°red (cathode) – "E"°red (anode) At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode giving "E"cell = "E"°(Cu2+/Cu) – "E"°(H+/H2) Or, "E"°(Cu2+/Cu) = 0.34 V Changes in the stoichiometric coefficients of a balanced cell equation will not change the "E"°red value because the standard electrode potential is an intensive property. Spontaneity of redox reaction. During operation of an electrochemical cell, chemical energy is transformed into electrical energy. This can be expressed mathematically as the product of the cell's emf "E"cell measured in volts (V) and the electric charge "Q"ele,trans transferred through the external circuit. Electrical energy = "E"cell"Q"ele,trans "Q"ele,trans is the cell current integrated over time and measured in coulombs (C); it can also be determined by multiplying the total number "n"e of electrons transferred (measured in moles) times Faraday's constant ("F"). The emf of the cell at zero current is the maximum possible emf. It can be used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation: formula_0, where work is defined as positive when it increases the energy of the system. Since the free energy is the maximum amount of work that can be extracted from a system, one can write: formula_1 A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell production of an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis. A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy. Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example. The relation between the equilibrium constant, "K", and the Gibbs free energy for an electrochemical cell is expressed as follows: formula_2. Rearranging to express the relation between standard potential and equilibrium constant yields formula_3. At "T" = 298 K, the previous equation can be rewritten using the Briggsian logarithm as follows: formula_4 Cell emf dependency on changes in concentration. Nernst equation. The standard potential of an electrochemical cell requires standard conditions (Δ"G"°) for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. In the 20th century German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential. 
In the late 19th century, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy formula_5 Here Δ"G" is change in Gibbs free energy, Δ"G"° is the cell potential when "Q" is equal to 1, "T" is absolute temperature (Kelvin), "R" is the gas constant and "Q" is the reaction quotient, which can be calculated by dividing concentrations of products by those of reactants, each raised to the power of its stoichiometric coefficient, using only those products and reactants that are aqueous or gaseous. Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity. Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes formula_6 Here "ne" is the number of electrons (in moles), "F" is the Faraday constant (in coulombs/mole), and Δ"E" is the cell potential (in volts). Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name: formula_7 Assuming standard conditions ("T" = 298 K or 25 °C) and "R" = 8.3145 J/(K·mol), the equation above can be expressed on base-10 logarithm as shown below: formula_8 Note that "" is also known as the thermal voltage "V"T and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10. Concentration cells. A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes on the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells. An example is an electrochemical cell, where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both can undergo the same chemistry (although the reaction proceeds in reverse at the anode) Cu2+(aq) + 2 e− → Cu(s) Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu2+ ions increases. Reduction will take place in the cell's compartment where the concentration is higher and oxidation will occur on the more dilute side. The following cell diagram describes the concentration cell mentioned above: Cu(s) | Cu2+ (0.05 M) || Cu2+ (2.0 M) | Cu(s) where the half cell reactions for oxidation and reduction are: Oxidation: Cu(s) → Cu2+ (0.05 M) + 2 e− Reduction: Cu2+ (2.0 M) + 2 e− → Cu(s) Overall reaction: Cu2+ (2.0 M) → Cu2+ (0.05 M) The cell's emf is calculated through the Nernst equation as follows: formula_9 The value of "E"° in this kind of cell is zero, as electrodes and ions are the same in both half-cells. After replacing values from the case mentioned, it is possible to calculate cell's potential: formula_10 or by: formula_11 However, this value is only approximate, as reaction quotient is defined in terms of ion activities which can be approximated with the concentrations as calculated here. The Nernst equation plays an important role in understanding electrical effects in cells and organelles. 
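As a numerical check of the concentration-cell example above, the following sketch evaluates the Nernst equation at 298 K, approximating activities by the stated concentrations as the text does; the function name nernst is just an illustrative choice.

```python
import math

R, T, F = 8.3145, 298.15, 96485.0   # J/(K*mol), K, C/mol
n = 2                                # electrons transferred (Cu2+ + 2e- -> Cu)

def nernst(E_standard: float, Q: float) -> float:
    """Cell potential from the Nernst equation, E = E_standard - (RT/nF) ln Q."""
    return E_standard - (R * T / (n * F)) * math.log(Q)

# Copper concentration cell: E_standard = 0, Q = [Cu2+]_dilute / [Cu2+]_concentrated
E = nernst(0.0, 0.05 / 2.0)
print(f"E = {E:.4f} V")   # about 0.047 V
```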
Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell. Battery. Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc-manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery using zinc and mercuric oxide provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells. The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is if left uncharged acid will crystallize within the lead plates of the battery rendering it useless. These batteries last an average of 3 years with daily use but it is not unheard of for a lead acid battery to still be functional after 7–10 years. Lead-acid cells continue to be widely used in automobiles. All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low temperature performance. The lithium metal battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices. The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen and oxygen directly into electrical energy with a much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system. Corrosion. Corrosion is an electrochemical process, which reveals itself as rust or tarnish on metals like iron or copper and their respective alloys, steel and brass. Iron corrosion. For iron rust to occur the metal has to be in contact with oxygen and water. The chemical reactions for this process are relatively complex and not all of them are completely understood. It is believed the causes are the following: Electron transfer (reduction-oxidation) One area on the surface of the metal acts as the anode, which is where the oxidation (corrosion) occurs. At the anode, the metal gives up electrons. Fe(s) → Fe2+(aq) + 2 e− Electrons are transferred from iron, reducing oxygen in the atmosphere into water on the cathode, which is placed in another region of the metal. O2(g) + 4 H+(aq) + 4 e− → 2 H2O(l) Global reaction for the process: 2 Fe(s) + O2(g) + 4 H+(aq) → 2 Fe2+(aq) + 2 H2O(l) Standard emf for iron rusting: "E"° = "E"° (cathode) − "E"° (anode) "E"° = 1.23V − (−0.44 V) = 1.67 V Iron corrosion takes place in an acid medium; H+ ions come from reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. 
Fe2+ ions oxidize further, following this equation: 4 Fe2+(aq) + O2(g) + (4+2x) H2O(l) → 2 Fe2O3·xH2O + 8 H+(aq) Iron(III) oxide hydrate is known as rust. The concentration of water associated with iron oxide varies, thus the chemical formula is represented by Fe2O3·xH2O. An electric circuit is formed as passage of electrons and ions occurs; thus if an electrolyte is present it will facilitate oxidation, explaining why rusting is quicker in salt water. Corrosion of common metals. Coinage metals, such as copper and silver, slowly corrode through use. A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high sulfur foods such as eggs or the low levels of sulfur species in the air develop a layer of black silver sulfide. Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia. Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface, which bonds with the underlying metal. This thin oxide layer protects the underlying bulk of the metal from the air preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized. Prevention of corrosion. Attempts to save a metal from becoming anodic are of two general types. Anodic regions dissolve and destroy the structural integrity of the metal. While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur. Coating. Metals can be coated with paint or other less conductive metals ("passivation"). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion. The region under the coating adjacent to the scratch acts as the anode of the reaction. Sacrificial anodes. A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, thus spared corrosion. It is called "sacrificial" because the anode dissolves and has to be replaced periodically. Zinc bars are attached to various locations on steel ship hulls to render the ship hull cathodic. The zinc bars are replaced periodically. Other metals, such as magnesium, would work very well but zinc is the least expensive useful metal. To protect pipelines, an ingot of buried or exposed magnesium (or zinc) is buried beside the pipeline and is connected electrically to the pipe above ground. The pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed. At intervals new ingots are buried to replace those dissolved. Electrolysis. The spontaneous redox reactions of a conventional battery produce electricity through the different reduction potentials of the cathode and anode in the electrolyte. 
However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell. Electrolysis of molten sodium chloride. When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine. Industrially this process takes place in a special cell named Downs cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell. Reactions that take place in a Downs cell are the following: Anode (oxidation): 2 Cl−(l) → Cl2(g) + 2 e− Cathode (reduction): 2 Na+(l) + 2 e− → 2 Na(l) Overall reaction: 2 Na+(l) + 2 Cl−(l) → 2 Na(l) + Cl2(g) This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in mineral dressing and metallurgy industries. The emf for this process is approximately −4 V indicating a (very) non-spontaneous process. In order for this reaction to occur the power supply should provide at least a potential difference of 4 V. However, larger voltages must be used for this reaction to occur at a high rate. Electrolysis of water. Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. Water does not decompose into hydrogen and oxygen spontaneously as the Gibbs free energy change for the process at standard conditions is very positive, about 474.4 kJ. The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell. In it, a pair of inert electrodes usually made of platinum immersed in water act as anode and cathode in the electrolytic process. The electrolysis starts with the application of an external voltage between the electrodes. This process will not occur except at extremely high voltages without an electrolyte such as sodium chloride or sulfuric acid (most used 0.1 M). Bubbles from the gases will be seen near both electrodes. The following half reactions describe the process mentioned above: Anode (oxidation): 2 H2O(l) → O2(g) + 4 H+(aq) + 4 e− Cathode (reduction): 2 H2O(g) + 2 e− → H2(g) + 2 OH−(aq) Overall reaction: 2 H2O(l) → 2 H2(g) + O2(g) Although strong acids may be used in the apparatus, the reaction will not net consume the acid. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively low voltages (~2 V depending on the pH). Electrolysis of aqueous solutions. Electrolysis in an aqueous solution is a similar process as mentioned in electrolysis of water. However, it is considered to be a complex process because the contents in solution have to be analyzed in half reactions, whether reduced or oxidized. Electrolysis of a solution of sodium chloride. The presence of water in a solution of sodium chloride must be examined in respect to its reduction and oxidation in both electrodes. Usually, water is electrolysed as mentioned above in electrolysis of water yielding "gaseous oxygen in the anode" and gaseous hydrogen in the cathode. On the other hand, sodium chloride in water dissociates in Na+ and Cl− ions. The cation, which is the positive ion, will be attracted to the cathode (−), thus reducing the sodium ion. The chloride anion will then be attracted to the anode (+), where it is oxidized to chlorine gas. 
The following half reactions should be considered in the process mentioned: Reaction 1 is discarded as it has the most negative value on standard reduction potential thus making it less thermodynamically favorable in the process. When comparing the reduction potentials in reactions 2 and 4, the oxidation of chloride ion is favored over oxidation of water, thus chlorine gas is produced at the anode and not oxygen gas. Although the initial analysis is correct, there is another effect, known as the overvoltage effect. Additional voltage is sometimes required, beyond the voltage predicted by the "E"°cell. This may be due to kinetic rather than thermodynamic considerations. In fact, it has been proven that the activation energy for the chloride ion is very low, hence favorable in kinetic terms. In other words, although the voltage applied is thermodynamically sufficient to drive electrolysis, the rate is so slow that to make the process proceed in a reasonable time frame, the voltage of the external source has to be increased (hence, overvoltage). The overall reaction for the process according to the analysis is the following: Anode (oxidation): 2 Cl−(aq) → Cl2(g) + 2 e− Cathode (reduction): 2 H2O(l) + 2 e− → H2(g) + 2 OH−(aq) Overall reaction: 2 H2O + 2 Cl−(aq) → H2(g) + Cl2(g) + 2 OH−(aq) As the overall reaction indicates, the concentration of chloride ions is reduced in comparison to OH− ions (whose concentration increases). The reaction also shows the production of gaseous hydrogen, chlorine and aqueous sodium hydroxide. Quantitative electrolysis and Faraday's laws. Quantitative aspects of electrolysis were originally developed by Michael Faraday in 1834. Faraday is also credited to have coined the terms "electrolyte", electrolysis, among many others while he studied quantitative analysis of electrochemical reactions. Also he was an advocate of the law of conservation of energy. First law. Faraday concluded after several experiments on electric current in a non-spontaneous process that the mass of the products yielded on the electrodes was proportional to the value of current supplied to the cell, the length of time the current existed, and the molar mass of the substance analyzed. In other words, the amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell. Below is a simplified equation of Faraday's first law: formula_12 where "m" is the mass of the substance produced at the electrode (in grams), "Q" is the total electric charge that passed through the solution (in coulombs), "n" is the valence number of the substance as an ion in solution (electrons per ion), "M" is the molar mass of the substance (in grams per mole), "F" is Faraday's constant (96485 coulombs per mole). Second law. Faraday devised the laws of chemical electrodeposition of metals from solutions in 1857. He formulated the second law of electrolysis stating "the amounts of bodies which are equivalent to each other in their ordinary chemical action have equal quantities of electricity naturally associated with them." In other words, the quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights. An important aspect of the second law of electrolysis is electroplating, which together with the first law of electrolysis has a significant number of applications in industry, as when used to protectively coat metals to avoid corrosion. Applications. 
There are various important electrochemical processes in both nature and industry, like the coating of objects with metals or metal oxides through electrodeposition, the addition (electroplating) or removal (electropolishing) of thin layers of metal from an object's surface, and the detection of alcohol in drunk drivers through the redox reaction of ethanol. The generation of chemical energy through photosynthesis is inherently an electrochemical process, as is production of metals like aluminum and titanium from their ores. Certain diabetes blood sugar meters measure the amount of glucose in the blood through its redox potential. In addition to established electrochemical technologies (like deep cycle lead acid batteries) there is also a wide range of new emerging technologies such as fuel cells, large format lithium-ion batteries, electrochemical reactors and super-capacitors that are becoming increasingly commercial. Electrochemical or coulometric titrations were introduced for quantitative analysis of minute quantities in 1938 by the Hungarian chemists László Szebellédy and Zoltan Somogyi. Electrochemistry also has important applications in the food industry, like the assessment of food/package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, or the determination of free acidity in olive oil. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "W_\\mathrm{max} = W_\\mathrm{electrical} = -n_eFE_\\mathrm{cell} " }, { "math_id": 1, "text": "\\Delta G = -n_eFE_\\mathrm{cell} " }, { "math_id": 2, "text": "\\Delta G^\\circ = -RT \\ln K = -nFE^{\\circ}_\\mathrm{cell} " }, { "math_id": 3, "text": "E^{\\circ}_{cell} = \\frac{RT}{nF}\\ln K" }, { "math_id": 4, "text": "E^{\\circ}_{cell} = \\frac{0.05916\\,\\mathrm{V}}{n} \\log K" }, { "math_id": 5, "text": "\\Delta G = \\Delta G^\\circ + RT \\ln Q " }, { "math_id": 6, "text": "n_eF\\Delta E = n_e F\\Delta E^\\circ - RT \\ln Q" }, { "math_id": 7, "text": "\\Delta E = \\Delta E^\\circ - \\frac{RT}{nF} \\ln Q " }, { "math_id": 8, "text": "\\Delta E = \\Delta E^\\circ- \\frac{0.05916 \\,\\mathrm{V}}{n} \\log Q " }, { "math_id": 9, "text": "E = E^\\circ - \\frac{0.05916 \\,\\mathrm{V}}{2} \\log \\frac{[\\mathrm{Cu^{2+}}]_\\mathrm{diluted}}{[\\mathrm{Cu^{2+}}]_\\mathrm{concentrated}} " }, { "math_id": 10, "text": "E = 0 - \\frac{0.05916 \\,\\mathrm{V}}{2} \\log \\frac{0.05}{2.0} = 0.0474\\,\\mathrm{V} " }, { "math_id": 11, "text": "E = 0 - \\frac{0.0257 \\,\\mathrm{V}}{2} \\ln \\frac{0.05}{2.0}= 0.0474\\,\\mathrm{V} " }, { "math_id": 12, "text": "m = \\frac{1}{F} \\cdot \\frac{QM}{n} " } ]
https://en.wikipedia.org/wiki?curid=9601
960585
Semantic security
Cryptography method In cryptography, a semantically secure cryptosystem is one where only negligible information about the plaintext can be feasibly extracted from the ciphertext. Specifically, any probabilistic, polynomial-time algorithm (PPTA) that is given the ciphertext of a certain message formula_0 (taken from any distribution of messages), and the message's length, cannot determine any partial information on the message with probability non-negligibly higher than all other PPTA's that only have access to the message length (and not the ciphertext). This concept is the computational complexity analogue to Shannon's concept of perfect secrecy. Perfect secrecy means that the ciphertext reveals no information at all about the plaintext, whereas semantic security implies that any information revealed cannot be feasibly extracted. History. The notion of semantic security was first put forward by Goldwasser and Micali in 1982. However, the definition they initially proposed offered no straightforward means to prove the security of practical cryptosystems. Goldwasser/Micali subsequently demonstrated that semantic security is equivalent to another definition of security called ciphertext indistinguishability under chosen-plaintext attack. This latter definition is more common than the original definition of semantic security because it better facilitates proving the security of practical cryptosystems. Symmetric-key cryptography. In the case of symmetric-key algorithm cryptosystems, an adversary must not be able to compute any information about a plaintext from its ciphertext. This may be posited as an adversary, given two plaintexts of equal length and their two respective ciphertexts, cannot determine which ciphertext belongs to which plaintext. Public-key cryptography. For an asymmetric key encryption algorithm cryptosystem to be semantically secure, it must be infeasible for a computationally bounded adversary to derive significant information about a message (plaintext) when given only its ciphertext and the corresponding public encryption key. Semantic security considers only the case of a "passive" attacker, i.e., one who generates and observes ciphertexts using the public key and plaintexts of their choice. Unlike other security definitions, semantic security does not consider the case of chosen ciphertext attack (CCA), where an attacker is able to request the decryption of chosen ciphertexts, and many semantically secure encryption schemes are demonstrably insecure against chosen ciphertext attack. Consequently, semantic security is now considered an insufficient condition for securing a general-purpose encryption scheme. Indistinguishability under Chosen Plaintext Attack (IND-CPA) is commonly defined by the following experiment: The underlying cryptosystem is IND-CPA (and thus semantically secure under chosen plaintext attack) if the adversary cannot determine which of the two messages was chosen by the oracle, with probability significantly greater than formula_9 (the success rate of random guessing). Variants of this definition define indistinguishability under chosen ciphertext attack and adaptive chosen ciphertext attack (IND-CCA, IND-CCA2). 
Because the adversary possesses the public encryption key in the above game, a semantically secure encryption scheme must by definition be probabilistic, possessing a component of randomness; if this were not the case, the adversary could simply compute the deterministic encryption of formula_4 and formula_5 and compare these encryptions with the returned ciphertext formula_8 to successfully guess the oracle's choice. Semantically secure encryption algorithms include Goldwasser-Micali, ElGamal and Paillier. These schemes are considered provably secure, as their semantic security can be reduced to solving some hard mathematical problem (e.g., Decisional Diffie-Hellman or the Quadratic Residuosity Problem). Other, semantically insecure algorithms such as RSA, can be made semantically secure (under stronger assumptions) through the use of random encryption padding schemes such as Optimal Asymmetric Encryption Padding (OAEP).
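As an illustration of the game described above, the following toy sketch (the scheme and all names are invented for demonstration; it is not a real cryptosystem) plays the IND-CPA experiment against a deterministic encryption function. Because the ciphertext of a given message never changes, the adversary can simply re-encrypt both candidate messages and compare, winning every time; this is why a semantically secure scheme must be probabilistic, as noted above.

```python
import secrets

# A deliberately bad *deterministic* toy "encryption": the same key and
# message always produce the same ciphertext.  Illustration only.
def keygen() -> bytes:
    return secrets.token_bytes(16)                     # toy public key / pad

def encrypt(pk: bytes, m: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(m, pk))         # XOR with a fixed pad

def ind_cpa_round() -> bool:
    pk = keygen()
    m0, m1 = b"attack at dawn!!", b"retreat at dusk!"  # equal-length messages
    b = secrets.randbelow(2)                           # challenger's hidden bit
    c = encrypt(pk, m1 if b else m0)                   # challenge ciphertext

    # Adversary: encryption is deterministic and uses only public data,
    # so it can encrypt both candidates itself and compare with c.
    guess = 0 if c == encrypt(pk, m0) else 1
    return guess == b

# The adversary wins every round, so the scheme is not IND-CPA and hence
# not semantically secure; a secure scheme must randomize its ciphertexts.
print(all(ind_cpa_round() for _ in range(1000)))       # True
```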
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "(pk,sk)" }, { "math_id": 2, "text": "Gen(1^n)" }, { "math_id": 3, "text": "pk" }, { "math_id": 4, "text": "m_0" }, { "math_id": 5, "text": "m_1" }, { "math_id": 6, "text": "b \\in \\{0,1\\}" }, { "math_id": 7, "text": "m_b" }, { "math_id": 8, "text": "c" }, { "math_id": 9, "text": "1/2" } ]
https://en.wikipedia.org/wiki?curid=960585
9606881
68–95–99.7 rule
Shorthand used in statistics In statistics, the 68–95–99.7 rule, also known as the empirical rule, and sometimes abbreviated 3sr, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: approximately 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively. In mathematical notation, these facts can be expressed as follows, where Pr() is the probability function, Χ is an observation from a normally distributed random variable, μ (mu) is the mean of the distribution, and σ (sigma) is its standard deviation: formula_0 The usefulness of this heuristic especially depends on the question under consideration. In the empirical sciences, the so-called three-sigma rule of thumb (or 3σ rule) expresses a conventional heuristic that nearly all values are taken to lie within three standard deviations of the mean, and thus it is empirically useful to treat 99.7% probability as near certainty. In the social sciences, a result may be considered "significant" if its confidence level is of the order of a two-sigma effect (95%), while in particle physics, there is a convention of a five-sigma effect (99.99994% confidence) being required to qualify as a discovery. A weaker three-sigma rule can be derived from Chebyshev's inequality, stating that even for non-normally distributed variables, at least 88.8% of cases should fall within properly calculated three-sigma intervals. For unimodal distributions, the probability of being within the interval is at least 95% by the Vysochanskij–Petunin inequality. There may be certain assumptions for a distribution that force this probability to be at least 98%. Proof. We have that formula_1 Doing the change of variable in terms of the standard score formula_2, we have formula_3 and this integral is independent of formula_4 and formula_5. We only need to calculate each integral for the cases formula_6. formula_7 Cumulative distribution function. These numerical values "68%, 95%, 99.7%" come from the cumulative distribution function of the normal distribution. The prediction interval for any standard score "z" corresponds numerically to (1 − (1 − Φ(z)) · 2). For example, Φ(2) ≈ 0.9772, or Pr("X" ≤ "μ" + 2"σ") ≈ 0.9772, corresponding to a prediction interval of (1 − (1 − 0.97725)·2) = 0.9545 = 95.45%. This is not a symmetrical interval – this is merely the probability that an observation is less than "μ" + 2"σ". To compute the probability that an observation is within two standard deviations of the mean (small differences due to rounding): formula_8 This is related to the confidence interval as used in statistics: formula_9 is approximately a 95% confidence interval when formula_10 is the average of a sample of size formula_11. Normality tests. The "68–95–99.7 rule" is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed to be normal. It is also used as a simple test for outliers if the population is assumed normal, and as a normality test if the population is potentially not normal. To pass from a sample to a number of standard deviations, one first computes the deviation, either the error or residual depending on whether one knows the population mean or only estimates it. The next step is standardizing (dividing by the population standard deviation), if the population parameters are known, or studentizing (dividing by an estimate of the standard deviation), if the parameters are unknown and only estimated. 
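The percentages above can be reproduced directly from the cumulative distribution function. The short sketch below (standard library only, function name ours) evaluates Pr("μ" − "n""σ" ≤ "X" ≤ "μ" + "n""σ") = Φ("n") − Φ(−"n") = erf("n"/√2) for "n" = 1, …, 6.

```python
from math import erf, sqrt

def within_n_sigma(n: float) -> float:
    """Pr(mu - n*sigma <= X <= mu + n*sigma) for a normal distribution."""
    return erf(n / sqrt(2))        # Phi(n) - Phi(-n), independent of mu, sigma

for n in range(1, 7):
    print(f"{n} sigma: {within_n_sigma(n):.7%}")
# 1 sigma: ~68.27%, 2 sigma: ~95.45%, 3 sigma: ~99.73%, ...
```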
To use as a test for outliers or a normality test, one computes the size of deviations in terms of standard deviations, and compares this to expected frequency. Given a sample set, one can compute the studentized residuals and compare these to the expected frequency: points that fall more than 3 standard deviations from the norm are likely outliers (unless the sample size is significantly large, by which point one expects a sample this extreme), and if there are many points more than 3 standard deviations from the norm, one likely has reason to question the assumed normality of the distribution. This holds ever more strongly for moves of 4 or more standard deviations. One can compute more precisely, approximating the number of extreme moves of a given magnitude or greater by a Poisson distribution, but simply, if one has multiple 4 standard deviation moves in a sample of size 1,000, one has strong reason to consider these outliers or question the assumed normality of the distribution. For example, a 6"σ" event corresponds to a chance of about two parts per billion. For illustration, if events are taken to occur daily, this would correspond to an event expected every 1.4 million years. This gives a simple normality test: if one witnesses a 6"σ" event in daily data and significantly fewer than 1 million years have passed, then a normal distribution most likely does not provide a good model for the magnitude or frequency of large deviations in this respect. In "The Black Swan", Nassim Nicholas Taleb gives the example of risk models according to which the Black Monday crash would correspond to a 36-"σ" event: the occurrence of such an event should instantly suggest that the model is flawed, i.e. that the process under consideration is not satisfactorily modeled by a normal distribution. Refined models should then be considered, e.g. by the introduction of stochastic volatility. In such discussions it is important to be aware of the problem of the gambler's fallacy, which states that a single observation of a rare event does not contradict that the event is in fact rare. It is the observation of a plurality of purportedly rare events that increasingly undermines the hypothesis that they are rare, i.e. the validity of the assumed model. A proper modelling of this process of gradual loss of confidence in a hypothesis would involve the designation of prior probability not just to the hypothesis itself but to all possible alternative hypotheses. For this reason, statistical hypothesis testing works not so much by confirming a hypothesis considered to be likely, but by refuting hypotheses considered unlikely. Table of numerical values. Because of the exponentially decreasing tails of the normal distribution, odds of higher deviations decrease very quickly. From the rules for normally distributed data for a daily event: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
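The outlier check described in this section is straightforward to carry out in code. The sketch below is illustrative only (the simulated data, seed and helper names are ours): it studentizes a sample with its estimated mean and standard deviation, counts the observations beyond three standard deviations, and compares the count with the roughly 0.27% expected under normality.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(10_000)]   # simulated sample

mean = statistics.fmean(data)
sd = statistics.stdev(data)              # estimated, so these are studentized
z = [(x - mean) / sd for x in data]

extreme = sum(abs(v) > 3 for v in z)
expected = (1 - 0.9973) * len(data)      # about 0.27% of points under normality
print(f"observed {extreme} beyond 3 sigma, expected about {expected:.0f}")
```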
[ { "math_id": 0, "text": "\\begin{align}\n \\Pr(\\mu-1\\sigma \\le X \\le \\mu+1\\sigma) & \\approx 68.27\\% \\\\\n \\Pr(\\mu-2\\sigma \\le X \\le \\mu+2\\sigma) & \\approx 95.45\\% \\\\\n \\Pr(\\mu-3\\sigma \\le X \\le \\mu+3\\sigma) & \\approx 99.73\\%\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\\Pr(\\mu -n\\sigma \\leq X \\leq \\mu + n\\sigma) = \\int_{\\mu-n\\sigma}^{\\mu + n\\sigma} \\frac{1}{\\sqrt{2\\pi} \\sigma} e^{-\\frac{1}{2} \\left(\\frac{x-\\mu}{\\sigma}\\right)^2} dx, \\end{align}" }, { "math_id": 2, "text": " z = \\frac{x - \\mu}{\\sigma}" }, { "math_id": 3, "text": "\\begin{align}\\frac{1}{\\sqrt{2\\pi}} \\int_{-n}^{n} e^{-\\frac{z^2}{2}}dz\\end{align}," }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\sigma" }, { "math_id": 6, "text": "n = 1,2,3" }, { "math_id": 7, "text": "\\begin{align}\n\\Pr(\\mu -1\\sigma \\leq X \\leq \\mu + 1\\sigma) &= \\frac{1}{\\sqrt{2\\pi}} \\int_{-1}^{1} e^{-\\frac{z^2}{2}}dz \\approx 0.6827 \\\\\n\\Pr(\\mu -2\\sigma \\leq X \\leq \\mu + 2\\sigma) &= \\frac{1}{\\sqrt{2\\pi}}\\int_{-2}^{2} e^{-\\frac{z^2}{2}}dz \\approx 0.9545 \\\\\n\\Pr(\\mu -3\\sigma \\leq X \\leq \\mu + 3\\sigma) &= \\frac{1}{\\sqrt{2\\pi}}\\int_{-3}^{3} e^{-\\frac{z^2}{2}}dz \\approx 0.9973.\n\\end{align}" }, { "math_id": 8, "text": "\\Pr(\\mu-2\\sigma \\le X \\le \\mu+2\\sigma)\n = \\Phi(2) - \\Phi(-2)\n \\approx 0.9772 - (1 - 0.9772)\n \\approx 0.9545\n" }, { "math_id": 9, "text": "\\bar{X} \\pm 2\\frac{\\sigma}{\\sqrt{n}}" }, { "math_id": 10, "text": "\\bar{X}" }, { "math_id": 11, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=9606881
9607393
Retention uniformity
Retention uniformity, or RU, is a concept in thin layer chromatography. It is designed for the quantitative measurement of "equal-spreading" of the spots on the chromatographic plate and is one of the Chromatographic response functions. Formula. Retention uniformity is calculated from the following formula: formula_0 where "n" is the number of compounds separated, "Rf (1...n)" are the Retention factor of the compounds sorted in non-descending order. Theoretical considerations. The coefficient lies always in range &lt;0,1&gt; and 0 indicates worst case of separation (all Rf values equal to 0 or 1), value 1 indicates ideal equal-spreading of the spots, for example (0.25,0.5,0.75) for three solutes, or (0.2,0.4,0.6,0.8) for four solutes. This coefficient was proposed as an alternative to earlier approaches, such as "D" (separation response), "Ip" (performance index) or "Sm" (informational entropy). Besides its stable range, the advantage is a stable distribution as a random variable, regardless of compounds investigated. In contrast to the similar concept called Retention distance, "Ru" is insensitive to "Rf" values close to 0 or 1, or close to themselves. If two values are not separated, it still indicates some "uniformity" of chromatographic system. For example, the "Rf" values (0,0.2,0.2,0.3) (two compounds not separated at 0.2 and one at the start ) result in "RU" equal to 0.3609.
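A direct implementation of the formula is straightforward; the sketch below (function name ours) reproduces the worked example quoted above.

```python
from math import sqrt

def retention_uniformity(rf):
    """RU for a list of Rf values (sorted into non-descending order)."""
    rf = sorted(rf)
    n = len(rf)
    s = sum((r - i / (n + 1)) ** 2 for i, r in enumerate(rf, start=1))
    return 1 - sqrt(6 * (n + 1) / (n * (2 * n + 1)) * s)

print(retention_uniformity([0.25, 0.50, 0.75]))   # 1.0 -- ideal equal-spreading
print(retention_uniformity([0, 0.2, 0.2, 0.3]))   # ~0.361, the 0.3609 example above
```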
[ { "math_id": 0, "text": "\nR_{U} = 1 - \\sqrt{\\frac{6(n+1)}{n(2n+1)}\\sum_{i=1}^{n}{\\left(R_{Fi}-\\frac{i}{n+1}\\right)^2}} \n" } ]
https://en.wikipedia.org/wiki?curid=9607393
9607629
Retention distance
Concept in chromatography Retention distance, or "RD", is a concept in thin layer chromatography, designed for quantitative measurement of "equal-spreading" of the spots on the chromatographic plate and one of the Chromatographic response functions. It is calculated from the following formula: formula_0 where "n" is the number of compounds separated, "Rf (1...n)" are the Retention factor of the compounds sorted in non-descending order, "Rf0" = 0 and "Rf(n+1)" = 1. Theoretical considerations. The coefficient lies always in range &lt;0,1&gt; and 0 indicates worst case of separation (all Rf values equal to 0 or 1), value 1 indicates ideal equal-spreading of the spots, for example (0.25,0.5,0.75) for three solutes, or (0.2,0.4,0.6,0.8) for four solutes. This coefficient was proposed as an alternative to earlier approaches, such as delta-Rf, delta-Rf product or MRF (Multispot Response Function). Besides its stable range, the advantage is a stable distribution as a random variable, regardless of compounds investigated. In contrast to the similar concept called Retention uniformity, "Rd" is sensitive to "Rf" values close to 0 or 1, or close to themselves. If two values are not separated, it is equal to 0. For example, the "Rf" values (0,0.2,0.2,0.3) (two compounds not separated at 0.2 and one at the start ) result in "RD" equal to 0, but "RU" equal to 0.3609. When some distance from 0 and spots occurs, the value is larger, for example "Rf" values (0.1,0.2,0.25,0.3) give "RD" = 0.4835, "RU" = 0.4066.
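A direct implementation of the formula (function name ours) reproduces the examples quoted above:

```python
def retention_distance(rf):
    """RD for Rf values sorted non-descending, with Rf0 = 0 and Rf(n+1) = 1 added."""
    rf = [0.0] + sorted(rf) + [1.0]
    n = len(rf) - 2
    product = 1.0
    for a, b in zip(rf, rf[1:]):
        product *= b - a
    return ((n + 1) ** (n + 1) * product) ** (1.0 / n)

print(retention_distance([0.1, 0.2, 0.25, 0.3]))   # ~0.4836 (quoted as 0.4835 above)
print(retention_distance([0, 0.2, 0.2, 0.3]))      # 0.0 -- unseparated compounds
```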
[ { "math_id": 0, "text": "\nR_D = \\Bigg[(n+1)^{(n+1)} \\prod^n_{i=0}{(R_{F(i+1)}-R_{Fi})\\Bigg]^{\\frac{1}{n}}} \n" } ]
https://en.wikipedia.org/wiki?curid=9607629
9607933
Handshaking lemma
Every graph has evenly many odd vertices In graph theory, a branch of mathematics, the handshaking lemma is the statement that, in every finite undirected graph, the number of vertices that touch an odd number of edges is even. For example, if there is a party of people who shake hands, the number of people who shake an odd number of other people's hands is even. The handshaking lemma is a consequence of the degree sum formula, also sometimes called the handshaking lemma, according to which the sum of the degrees (the numbers of times each vertex is touched) equals twice the number of edges in the graph. Both results were proven by Leonhard Euler (1736) in his famous paper on the Seven Bridges of Königsberg that began the study of graph theory. Beyond the Seven Bridges of Königsberg Problem, which subsequently formalized Eulerian Tours, other applications of the degree sum formula include proofs of certain combinatorial structures. For example, in the proofs of Sperner's lemma and the mountain climbing problem the geometric properties of the formula commonly arise. The complexity class PPA encapsulates the difficulty of finding a second odd vertex, given one such vertex in a large implicitly-defined graph. Definitions and statement. An undirected graph consists of a system of vertices, and edges connecting unordered pairs of vertices. In any graph, the degree formula_0 of a vertex formula_1 is defined as the number of edges that have formula_1 as an endpoint. For graphs that are allowed to contain loops connecting a vertex to itself, a loop should be counted as contributing two units to the degree of its endpoint for the purposes of the handshaking lemma. Then, the handshaking lemma states that, in every finite graph, there must be an even number of vertices for which formula_0 is an odd number. The vertices of odd degree in a graph are sometimes called odd nodes (or odd vertices); in this terminology, the handshaking lemma can be rephrased as the statement that every graph has an even number of odd nodes. The degree sum formula states that formula_2 where formula_3 is the set of nodes (or vertices) in the graph and formula_4 is the set of edges in the graph. That is, the sum of the vertex degrees equals twice the number of edges. In directed graphs, another form of the degree-sum formula states that the sum of in-degrees of all vertices, and the sum of out-degrees, both equal the number of edges. Here, the in-degree is the number of incoming edges, and the out-degree is the number of outgoing edges. A version of the degree sum formula also applies to finite families of sets or, equivalently, multigraphs: the sum of the degrees of the elements (where the degree equals the number of sets containing it) always equals the sum of the cardinalities of the sets. Both results also apply to any subgraph of the given graph and in particular to its connected components. A consequence is that, for any odd vertex, there must exist a path connecting it to another odd vertex. Applications. Euler paths and tours. Leonhard Euler first proved the handshaking lemma in his work on the Seven Bridges of Königsberg, asking for a walking tour of the city of Königsberg (now Kaliningrad) crossing each of its seven bridges once. 
This can be translated into graph-theoretic terms as asking for an Euler path or Euler tour of a connected graph representing the city and its bridges: a walk through the graph that traverses each edge once, either ending at a different vertex than it starts in the case of an Euler path or returning to its starting point in the case of an Euler tour. Euler stated the fundamental results for this problem in terms of the number of odd vertices in the graph, which the handshaking lemma restricts to be an even number. If this number is zero, an Euler tour exists, and if it is two, an Euler path exists. Otherwise, the problem cannot be solved. In the case of the Seven Bridges of Königsberg, the graph representing the problem has four odd vertices, and has neither an Euler path nor an Euler tour. It was therefore impossible to tour all seven bridges in Königsberg without repeating a bridge. In the Christofides–Serdyukov algorithm for approximating the traveling salesperson problem, the geometric implications of the degree sum formula plays a vital role, allowing the algorithm to connect vertices in pairs in order to construct a graph on which an Euler tour forms an approximate TSP tour. Combinatorial enumeration. Several combinatorial structures may be shown to be even in number by relating them to the odd vertices in an appropriate "exchange graph". For instance, as C. A. B. Smith proved, in any cubic graph formula_5 there must be an even number of Hamiltonian cycles through any fixed edge formula_6; these are cycles that pass through each vertex exactly once. used a proof based on the handshaking lemma to extend this result to graphs in which all vertices have odd degree. Thomason defines an exchange graph formula_7, the vertices of which are in one-to-one correspondence with the Hamiltonian paths in formula_5 beginning at formula_8 and continuing through edge formula_6. Two such paths formula_9 and formula_10 are defined as being connected by an edge in formula_7 if one may obtain formula_10 by adding a new edge to the end of formula_9 and removing another edge from the middle of formula_9. This operation is reversible, forming a symmetric relation, so formula_7 is an undirected graph. If path formula_11 ends at vertex formula_12, then the vertex corresponding to formula_11 in formula_7 has degree equal to the number of ways that formula_11 may be extended by an edge that does not connect back to formula_8; that is, the degree of this vertex in formula_7 is either formula_13 (an even number) if formula_11 does not form part of a Hamiltonian cycle through formula_6, or formula_14 (an odd number) if formula_11 is part of a Hamiltonian cycle through formula_6. Since formula_7 has an even number of odd vertices, formula_5 must have an even number of Hamiltonian cycles through formula_6. Other applications. The handshaking lemma (or degree sum formula) are also used in proofs of several other results in mathematics. These include the following: Proof. Euler's proof of the degree sum formula uses the technique of double counting: he counts the number of incident pairs formula_15 where formula_16 is an edge and vertex formula_1 is one of its endpoints, in two different ways. Vertex formula_1 belongs to formula_0 pairs, where formula_0 (the degree of formula_1) is the number of edges incident to it. Therefore, the number of incident pairs is the sum of the degrees. 
However, each edge in the graph belongs to exactly two incident pairs, one for each of its endpoints; therefore, the number of incident pairs is formula_17. Since these two formulas count the same set of objects, they must have equal values. The same proof can be interpreted as summing the entries of the incidence matrix of the graph in two ways, by rows to get the sum of degrees and by columns to get twice the number of edges. For graphs, the handshaking lemma follows as a corollary of the degree sum formula. In a sum of integers, the parity of the sum is not affected by the even terms in the sum; the overall sum is even when there is an even number of odd terms, and odd when there is an odd number of odd terms. Since one side of the degree sum formula is the even number formula_17, the sum on the other side must have an even number of odd terms; that is, there must be an even number of odd-degree vertices. Alternatively, it is possible to use mathematical induction to prove the degree sum formula, or to prove directly that the number of odd-degree vertices is even, by removing one edge at a time from a given graph and using a case analysis on the degrees of its endpoints to determine the effect of this removal on the parity of the number of odd-degree vertices. In special classes of graphs. Regular graphs. The degree sum formula implies that every formula_18-regular graph with formula_19 vertices has formula_20 edges. Because the number of edges must be an integer, it follows that when formula_18 is odd the number of vertices must be even. Additionally, for odd values of formula_18, the number of edges must be divisible by formula_18. Bipartite and biregular graphs. A bipartite graph has its vertices split into two subsets, with each edge having one endpoint in each subset. It follows from the same double counting argument that, in each subset, the sum of degrees equals the number of edges in the graph. In particular, both subsets have equal degree sums. For biregular graphs, with a partition of the vertices into subsets formula_21 and formula_22 with every vertex in a subset formula_23 having degree formula_24, it must be the case that formula_25; both equal the number of edges. Infinite graphs. The handshaking lemma does not apply in its usual form to infinite graphs, even when they have only a finite number of odd-degree vertices. For instance, an infinite path graph with one endpoint has only a single odd-degree vertex rather than having an even number of such vertices. However, it is possible to formulate a version of the handshaking lemma using the concept of an end, an equivalence class of semi-infinite paths ("rays") considering two rays as equivalent when there exists a third ray that uses infinitely many vertices from each of them. The degree of an end is the maximum number of edge-disjoint rays that it contains, and an end is odd if its degree is finite and odd. More generally, it is possible to define an end as being odd or even, regardless of whether it has infinite degree, in graphs for which all vertices have finite degree. Then, in such graphs, the number of odd vertices and odd ends, added together, is either even or infinite. Subgraphs. By a theorem of Gallai the vertices of any graph can be decomposed as formula_26 where formula_27 has all degree even and formula_28 has all degree odd with formula_29 even by the handshaking lemma. In 1994 Yair Caro proved that formula_30 and in 2021 a preprint by Ferber Asaf and Michael Krivelevich showed that formula_31. 
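The degree sum formula and the parity of the odd vertices are easy to verify computationally for any explicit graph. The sketch below is illustrative (the edge-list representation and names are ours) and uses the Seven Bridges of Königsberg multigraph discussed earlier; it confirms that the degree sum is twice the number of edges and that the number of odd-degree vertices is even (here four, which is why no Euler path or tour exists).

```python
from collections import Counter

def degree_check(edges):
    """edges: iterable of (u, v) pairs of an undirected graph (loops allowed)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1            # a loop (u == v) contributes 2 to deg[u]
    odd = [v for v, d in deg.items() if d % 2 == 1]
    assert sum(deg.values()) == 2 * len(edges)   # degree sum formula
    assert len(odd) % 2 == 0                     # handshaking lemma
    return deg, odd

# The Seven Bridges of Königsberg multigraph: 4 land masses, 7 bridges.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
deg, odd = degree_check(bridges)
print(dict(deg))   # degrees 5, 3, 3, 3: four odd vertices, an even number
print(len(odd))    # 4, so neither an Euler path nor an Euler tour exists
```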
Computational complexity. In connection with the exchange graph method for proving the existence of combinatorial structures, it is of interest to ask how efficiently these structures may be found. For instance, suppose one is given as input a Hamiltonian cycle in a cubic graph; it follows from Smith's theorem that there exists a second cycle. How quickly can this second cycle be found? investigated the computational complexity of questions such as this, or more generally of finding a second odd-degree vertex when one is given a single odd vertex in a large implicitly-defined graph. He defined the complexity class PPA to encapsulate problems such as this one; a closely related class defined on directed graphs, PPAD, has attracted significant attention in algorithmic game theory because computing a Nash equilibrium is computationally equivalent to the hardest problems in this class. Computational problems proven to be complete for the complexity class PPA include computational tasks related to Sperner's lemma and to fair subdivision of resources according to the Hobby–Rice theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\deg(v)" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "\\sum_{v\\in V} \\deg v = 2|E|," }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "uv" }, { "math_id": 7, "text": "H" }, { "math_id": 8, "text": "u" }, { "math_id": 9, "text": "p_1" }, { "math_id": 10, "text": "p_2" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "w" }, { "math_id": 13, "text": "\\deg(w)-1" }, { "math_id": 14, "text": "\\deg(w)-2" }, { "math_id": 15, "text": "(v,e)" }, { "math_id": 16, "text": "e" }, { "math_id": 17, "text": "2|E|" }, { "math_id": 18, "text": "r" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "nr/2" }, { "math_id": 21, "text": "V_1" }, { "math_id": 22, "text": "V_2" }, { "math_id": 23, "text": "V_i" }, { "math_id": 24, "text": "r_i" }, { "math_id": 25, "text": "|V_1|r_1=|V_2|r_2" }, { "math_id": 26, "text": "V=V_e\\cup V_o" }, { "math_id": 27, "text": "G(V_e)" }, { "math_id": 28, "text": "G(V_o)" }, { "math_id": 29, "text": "|V_o|" }, { "math_id": 30, "text": "|V_o|/|V|>1/\\sqrt{n}" }, { "math_id": 31, "text": "|V_o|/|V|>1/10000" } ]
https://en.wikipedia.org/wiki?curid=9607933
9608295
Reed–Muller expansion
In Boolean logic, a Reed–Muller expansion (or Davio expansion) is a decomposition of a Boolean function. For a Boolean function formula_0 we call formula_1 the positive and negative cofactors of formula_2 with respect to formula_3, and formula_4 the boolean derivation of formula_2 with respect to formula_3, where formula_5 denotes the XOR operator. Then we have for the Reed–Muller or positive Davio expansion: formula_6 Description. This equation is written in a way that it resembles a Taylor expansion of formula_2 about formula_7. There is a similar decomposition corresponding to an expansion about formula_8 (negative Davio expansion): formula_9 Repeated application of the Reed–Muller expansion results in an XOR polynomial in formula_10: formula_11 This representation is unique and sometimes also called Reed–Muller expansion. E.g. for formula_12 the result would be formula_13 where formula_14. For formula_15 the result would be formula_16 where formula_17. Geometric interpretation. This formula_15 case can be given a cubical geometric interpretation (or a graph-theoretic interpretation) as follows: when moving along the edge from formula_18 to formula_19, XOR up the functions of the two end-vertices of the edge in order to obtain the coefficient of formula_20. To move from formula_18 to formula_21 there are two shortest paths: one is a two-edge path passing through formula_19 and the other one a two-edge path passing through formula_22. These two paths encompass four vertices of a square, and XORing up the functions of these four vertices yields the coefficient of formula_23. Finally, to move from formula_18 to formula_24 there are six shortest paths which are three-edge paths, and these six paths encompass all the vertices of the cube, therefore the coefficient of formula_24 can be obtained by XORing up the functions of all eight of the vertices. (The other, unmentioned coefficients can be obtained by symmetry.) Paths. The shortest paths all involve monotonic changes to the values of the variables, whereas non-shortest paths all involve non-monotonic changes of such variables; or, to put it another way, the shortest paths all have lengths equal to the Hamming distance between the starting and destination vertices. This means that it should be easy to generalize an algorithm for obtaining coefficients from a truth table by XORing up values of the function from appropriate rows of a truth table, even for hyperdimensional cases (formula_25 and above). Between the starting and destination rows of a truth table, some variables have their values remaining fixed: find all the rows of the truth table such that those variables likewise remain fixed at those given values, then XOR up their functions and the result should be the coefficient for the monomial corresponding to the destination row. (In such monomial, include any variable whose value is 1 (at that row) and exclude any variable whose value is 0 (at that row), instead of including the negation of the variable whose value is 0, as in the minterm style.) Similar to binary decision diagrams (BDDs), where nodes represent Shannon expansion with respect to the according variable, we can define a decision diagram based on the Reed–Muller expansion. These decision diagrams are called functional BDDs (FBDDs). Derivations. 
The Reed–Muller expansion can be derived from the XOR-form of the Shannon decomposition, using the identity formula_26: formula_27 Derivation of the expansion for formula_28: formula_29 Derivation of the second-order boolean derivative: formula_30 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
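Repeated application of the positive Davio expansion amounts to a butterfly-style XOR transform of the truth table, in line with the truth-table procedure sketched in the geometric interpretation above. The following illustrative code (function name ours) computes the coefficients of the XOR polynomial for a Boolean function given as a truth table; the OR example at the end is an assumption of ours, not taken from the text.

```python
def reed_muller_coefficients(truth_table):
    """Positive-polarity Reed-Muller (XOR polynomial) coefficients.

    truth_table[m] is f evaluated with x_{i+1} set to bit i of m; the result is
    indexed the same way, entry m being the coefficient of the monomial formed
    by the variables whose bits are set in m.
    """
    a = list(truth_table)
    n = len(a).bit_length() - 1
    assert len(a) == 1 << n, "truth table length must be a power of two"
    for i in range(n):                       # one pass per variable
        for m in range(len(a)):
            if m & (1 << i):
                a[m] ^= a[m ^ (1 << i)]      # XOR in the cofactor with x_{i+1} = 0
    return a

# f(x1, x2) = x1 OR x2; table indices 0..3 encode (x2, x1) = 00, 01, 10, 11.
print(reed_muller_coefficients([0, 1, 1, 1]))   # [0, 1, 1, 1]: f = x1 xor x2 xor x1*x2
```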
[ { "math_id": 0, "text": "f(x_1,\\ldots,x_n) : \\mathbb{B}^n \\to \\mathbb{B}" }, { "math_id": 1, "text": "\n\\begin{align}\nf_{{x_i}}(x) & = f(x_1,\\ldots,x_{i-1},1,x_{i+1},\\ldots,x_n) \\\\\nf_{\\overline{x}_i}(x)& = f(x_1,\\ldots,x_{i-1},0,x_{i+1},\\ldots,x_n)\n\\end{align}\n" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "\n\\begin{align}\n\\frac{\\partial f}{\\partial x_i} & = f_{x_i}(x) \\oplus f_{\\overline{x}_i}(x)\n\\end{align}\n" }, { "math_id": 5, "text": "{\\oplus}" }, { "math_id": 6, "text": "\nf = f_{\\overline{x}_i} \\oplus x_i \\frac{\\partial f}{\\partial x_i}.\n" }, { "math_id": 7, "text": "x_i=0" }, { "math_id": 8, "text": "x_i=1" }, { "math_id": 9, "text": "\nf = f_{x_i} \\oplus \\overline{x}_i \\frac{\\partial f}{\\partial x_i}.\n" }, { "math_id": 10, "text": "x_1,\\ldots,x_n" }, { "math_id": 11, "text": "\nf = a_1 \\oplus a_2 x_1 \\oplus a_3 x_2 \\oplus a_4 x_1 x_2 \\oplus \\ldots \\oplus a_{2^n} x_1\\cdots x_n\n" }, { "math_id": 12, "text": "n=2" }, { "math_id": 13, "text": "\nf(x_1, x_2) = f_{\\overline{x}_1 \\overline{x}_2} \\oplus \\frac{\\partial f_{\\overline{x}_2}}{\\partial x_1} x_1 \\oplus \\frac{\\partial f_{\\overline{x}_1}}{\\partial x_2} x_2 \\oplus \\frac{\\partial^2 f}{\\partial x_1 \\partial x_2} x_1 x_2\n" }, { "math_id": 14, "text": " {\\partial^2 f \\over \\partial x_1 \\partial x_2} = f_{\\bar x_1 \\bar x_2} \\oplus f_{\\bar x_1 x_2} \\oplus f_{x_1 \\bar x_2} \\oplus f_{x_1 x_2} " }, { "math_id": 15, "text": "n = 3" }, { "math_id": 16, "text": " f(x_1, x_2, x_3) = f_{\\bar x_1 \\bar x_2 \\bar x_3} \\oplus {\\partial f_{\\bar x_2 \\bar x_3} \\over \\partial x_1} x_1 \\oplus {\\partial f_{\\bar x_1 \\bar x_3} \\over \\partial x_2} x_2 \\oplus {\\partial f_{\\bar x_1 \\bar x_2} \\over \\partial x_3} x_3 \\oplus {\\partial^2 f_{\\bar x_3} \\over \\partial x_1 \\partial x_2} x_1 x_2 \\oplus {\\partial^2 f_{\\bar x_2} \\over \\partial x_1 \\partial x_3} x_1 x_3 \\oplus {\\partial^2 f_{\\bar x_1} \\over \\partial x_2 \\partial x_3} x_2 x_3 \\oplus {\\partial^3 f \\over \\partial x_1 \\partial x_2 \\partial x_3} x_1 x_2 x_3 " }, { "math_id": 17, "text": " {\\partial^3 f \\over \\partial x_1 \\partial x_2 \\partial x_3} = f_{\\bar x_1 \\bar x_2 \\bar x_3} \\oplus f_{\\bar x_1 \\bar x_2 x_3} \\oplus f_{\\bar x_1 x_2 \\bar x_3} \\oplus f_{\\bar x_1 x_2 x_3} \\oplus f_{x_1 \\bar x_2 \\bar x_3} \\oplus f_{x_1 \\bar x_2 x_3} \\oplus f_{x_1 x_2 \\bar x_3} \\oplus f_{x_1 x_2 x_3} " }, { "math_id": 18, "text": "\\bar x_1 \\bar x_2 \\bar x_3" }, { "math_id": 19, "text": "x_1 \\bar x_2 \\bar x_3" }, { "math_id": 20, "text": "x_1" }, { "math_id": 21, "text": "x_1 x_2 \\bar x_3" }, { "math_id": 22, "text": "\\bar x_1 x_2 \\bar x_3" }, { "math_id": 23, "text": "x_1 x_2" }, { "math_id": 24, "text": "x_1 x_2 x_3" }, { "math_id": 25, "text": "n = 4" }, { "math_id": 26, "text": "\\overline{x} = 1 \\oplus x" }, { "math_id": 27, "text": "\n\\begin{align}\nf & = x_i f_{x_i} \\oplus \\overline{x}_i f_{\\overline{x}_i} \\\\\n & = x_i f_{x_i} \\oplus (1 \\oplus x_i) f_{\\overline{x}_i} \\\\\n & = x_i f_{x_i} \\oplus f_{\\overline{x}_i} \\oplus x_i f_{\\overline{x}_i} \\\\\n & = f_{\\overline{x}_i} \\oplus x_i \\frac{\\partial f}{\\partial x_i}.\n\\end{align}\n" }, { "math_id": 28, "text": "n = 2" }, { "math_id": 29, "text": "\\begin{align}\n f & = f_{\\bar x_1} \\oplus x_1 {\\partial f \\over \\partial x_1} \\\\\n & = \\Big( f_{\\bar x_2} \\oplus x_2 {\\partial f \\over \\partial x_2} \\Big)_{\\bar x_1} \\oplus x_1 {\\partial 
\\Big(f_{\\bar x_2} \\oplus x_2 {\\partial f \\over \\partial x_2} \\Big) \\over \\partial x_1} \\\\\n & = f_{\\bar x_1 \\bar x_2} \\oplus x_2 {\\partial f_{\\bar x_1} \\over \\partial x_2} \\oplus x_1 \\Big({\\partial f_{\\bar x_2} \\over \\partial x_1} \\oplus x_2 {\\partial^2 f \\over \\partial x_1 \\partial x_2}\\Big) \\\\\n & = f_{\\bar x_1 \\bar x_2} \\oplus x_2 {\\partial f_{\\bar x_1} \\over \\partial x_2} \\oplus x_1 {\\partial f_{\\bar x_2} \\over \\partial x_1} \\oplus x_1 x_2 {\\partial^2 f \\over \\partial x_1 \\partial x_2}. \n \\end{align} " }, { "math_id": 30, "text": " \\begin{align}\n{\\partial^2 f \\over \\partial x_1 \\partial x_2} & = {\\partial \\over \\partial x_1} \\Big( {\\partial f \\over \\partial x_2} \\Big) = {\\partial \\over \\partial x_1} (f_{\\bar x_2} \\oplus f_{x_2}) \\\\\n & = (f_{\\bar x_2} \\oplus f_{x_2})_{\\bar x_1} \\oplus (f_{\\bar x_2} \\oplus f_{x_2})_{x_1} \\\\\n & = f_{\\bar x_1 \\bar x_2} \\oplus f_{\\bar x_1 x_2} \\oplus f_{x_1 \\bar x_2} \\oplus f_{x_1 x_2}.\n \\end{align}" } ]
https://en.wikipedia.org/wiki?curid=9608295
960929
Slerp
Spherical linear interpolation in computer graphics In computer graphics, slerp is shorthand for spherical linear interpolation, introduced by Ken Shoemake in the context of quaternion interpolation for the purpose of animating 3D rotation. It refers to constant-speed motion along a unit-radius great circle arc, given the ends and an interpolation parameter between 0 and 1. Geometric slerp. Slerp has a geometric formula independent of quaternions, and independent of the dimension of the space in which the arc is embedded. This formula, a symmetric weighted sum credited to Glenn Davis, is based on the fact that any point on the curve must be a linear combination of the ends. Let "p"0 and "p"1 be the first and last points of the arc, and let "t" be the parameter, 0 ≤ "t" ≤ 1. Compute Ω as the angle subtended by the arc, so that cos Ω = "p"0 ⋅ "p"1, the "n"-dimensional dot product of the unit vectors from the origin to the ends. The geometric formula is then formula_0 The symmetry lies in the fact that slerp("p"0, "p"1; "t") = slerp("p"1, "p"0; 1 − "t"). In the limit as Ω → 0, this formula reduces to the corresponding symmetric formula for linear interpolation, formula_1 A slerp path is, in fact, the spherical geometry equivalent of a path along a line segment in the plane; a great circle is a spherical geodesic. More familiar than the general slerp formula is the case when the end vectors are perpendicular, in which case the formula is "p"0cos "θ" + "p"1sin "θ". Letting "θ" = "t"π/2, and applying the trigonometric identity cos "θ" = sin(π/2 − "θ"), this becomes the slerp formula. The factor of 1/sin Ω in the general formula is a normalization, since a vector "p"1 at an angle of Ω to "p"0 projects onto the perpendicular ⊥"p"0 with a length of only sin Ω. Some special cases of slerp admit more efficient calculation. When a circular arc is to be drawn into a raster image, the preferred method is some variation of Bresenham's circle algorithm. Evaluation at the special parameter values 0 and 1 trivially yields "p"0 and "p"1, respectively; and bisection, evaluation at "t" = 1/2, simplifies to ("p"0 + "p"1)/2, normalized. Another special case, common in animation, is evaluation with fixed ends and equal parametric steps. If "p""k"−1 and "p""k" are two consecutive values, and if "c" is twice their dot product (constant for all steps), then the next value, "p""k"+1, is the reflection "p""k"+1 = "cp""k" − "p""k"−1. Quaternion slerp. When slerp is applied to unit quaternions, the quaternion path maps to a path through 3D rotations in a standard way. The effect is a rotation with uniform angular velocity around a fixed rotation axis. When the initial end point is the identity quaternion, slerp gives a segment of a one-parameter subgroup of both the Lie group of 3D rotations, SO(3), and its universal covering group of unit quaternions, "S"3. Slerp gives a straightest and shortest path between its quaternion end points, and maps to a rotation through an angle of 2Ω. However, because the covering is double ("q" and −"q" map to the same rotation), the rotation path may turn either the "short way" (less than 180°) or the "long way" (more than 180°). Long paths can be prevented by negating one end if the dot product, cos Ω, is negative, thus ensuring that −90° ≤ Ω ≤ 90°. Slerp also has expressions in terms of quaternion algebra, all using exponentiation. 
Real powers of a quaternion are defined in terms of the quaternion exponential function, written as "e""q" and given by the power series equally familiar from calculus, complex analysis and matrix algebra: formula_2 Writing a unit quaternion "q" in versor form, cos Ω + v sin Ω, with v a unit 3-vector, and noting that the quaternion square v2 equals −1 (implying a quaternion version of Euler's formula), we have "e"vΩ = "q", and "q""t" = cos "t"Ω + v sin "t"Ω. The identification of interest is "q" = "q"1"q"0−1, so that the real part of "q" is cos Ω, the same as the geometric dot product used above. Here are four equivalent quaternion expressions for slerp. formula_3 The derivative of slerp("q"0, "q"1; "t") with respect to "t", assuming the ends are fixed, is log("q"1"q"0−1) times the function value, where the quaternion natural logarithm in this case yields half the 3D angular velocity vector. The initial tangent vector is parallel transported to each tangent along the curve; thus the curve is, indeed, a geodesic. In the tangent space at any point on a quaternion slerp curve, the inverse of the exponential map transforms the curve into a line segment. Slerp curves not extending through a point fail to transform into lines in that point's tangent space. Quaternion slerps are commonly used to construct smooth animation curves by mimicking affine constructions like the de Casteljau algorithm for Bézier curves. Since the sphere is not an affine space, familiar properties of affine constructions may fail, though the constructed curves may otherwise be entirely satisfactory. For example, the de Casteljau algorithm may be used to split a curve in affine space; this does not work on a sphere. The two-valued slerp can be extended to interpolate among many unit quaternions, but the extension loses the fixed execution-time of the slerp algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;templatestyles src="Refbegin/styles.css" /&gt;
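The geometric formula and the shortest-arc rule above translate directly into code. The following is a minimal NumPy sketch (the function names and the (w, x, y, z) quaternion layout are illustrative choices, not taken from the sources above): it implements the sin-weighted formula for unit vectors in any dimension, and the dot-product sign flip that keeps quaternion interpolation on the short arc.

```python
import numpy as np

def slerp(p0, p1, t):
    """Geometric slerp between unit vectors p0 and p1 (any dimension):
    sin((1-t)*Omega)/sin(Omega) * p0 + sin(t*Omega)/sin(Omega) * p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    dot = np.clip(np.dot(p0, p1), -1.0, 1.0)
    omega = np.arccos(dot)                      # angle subtended by the arc
    if np.isclose(np.sin(omega), 0.0):          # nearly parallel (or antipodal) ends: fall back to lerp
        return (1.0 - t) * p0 + t * p1
    return (np.sin((1.0 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

def quaternion_slerp(q0, q1, t):
    """Slerp for unit quaternions stored as 4-vectors (w, x, y, z).
    Negates one end when the dot product is negative so the rotation takes the short way."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    if np.dot(q0, q1) < 0.0:
        q1 = -q1
    return slerp(q0, q1, t)

# example: interpolate halfway between the identity and a 90-degree rotation about z
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(quaternion_slerp(q_id, q_z90, 0.5))       # ~ [cos(pi/8), 0, 0, sin(pi/8)], a 45-degree rotation about z
```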
[ { "math_id": 0, "text": " \\operatorname{slerp}(p_0,p_1; t) = \\frac{\\sin {[(1-t)\\Omega}]}{\\sin \\Omega} p_0 + \\frac{\\sin [t\\Omega]}{\\sin \\Omega} p_1." }, { "math_id": 1, "text": " \\operatorname{lerp}(p_0,p_1; t) = (1-t) p_0 + t p_1." }, { "math_id": 2, "text": " e^q = 1 + q + \\frac{q^2}{2} + \\frac{q^3}{6} + \\cdots + \\frac{q^n}{n!} + \\cdots ." }, { "math_id": 3, "text": "\n\\begin{align}\n\\operatorname{slerp}(q_0, q_1, t) & = q_0 (q_0^{-1} q_1)^t \\\\[6pt]\n& = q_1 (q_1^{-1} q_0)^{1-t} \\\\[6pt]\n& = (q_0 q_1^{-1})^{1-t} q_1 \\\\[6pt]\n& = (q_1 q_0^{-1})^t q_0\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=960929
9610107
Uniform integrability
Mathematical concept In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales. Measure-theoretic definition. Uniform integrability is an extension to the notion of a family of functions being dominated in formula_0 which is central in dominated convergence. Several textbooks on real analysis and measure theory use the following definition: Definition A: Let formula_1 be a positive measure space. A set formula_2 is called uniformly integrable if formula_3, and to each formula_4 there corresponds a formula_5 such that formula_6 whenever formula_7 and formula_8 Definition A is rather restrictive for infinite measure spaces. A more general definition of uniform integrability that works well in general measure spaces was introduced by G. A. Hunt. Definition H: Let formula_9 be a positive measure space. A set formula_10 is called uniformly integrable if and only if formula_11 where formula_12. Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics. The following result provides another equivalent notion to Hunt's. This equivalence is sometimes given as a definition of uniform integrability. Theorem 1: If formula_9 is a (positive) finite measure space, then a set formula_10 is uniformly integrable if and only if formula_13 If in addition formula_14, then uniform integrability is equivalent to either of the following conditions: 1. formula_15. 2. formula_16 When the underlying space formula_17 is formula_18-finite, Hunt's definition is equivalent to the following: Theorem 2: Let formula_9 be a formula_18-finite measure space, and formula_19 be such that formula_20 almost everywhere. A set formula_10 is uniformly integrable if and only if formula_21, and for any formula_4, there exists formula_5 such that formula_22 whenever formula_23. A consequence of Theorems 1 and 2 is the equivalence of Definitions A and H for finite measures. Indeed, the statement in Definition A is obtained by taking formula_24 in Theorem 2. Probability definition. In the theory of probability, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, using the notation of expectation of random variables; that is, 1. A class formula_25 of random variables is called uniformly integrable if there exists a finite formula_26 such that formula_28 for every formula_27 in formula_25, and for every formula_29 there exists formula_30 such that formula_33 for every measurable formula_31 with formula_32 and every formula_27 in formula_25; or alternatively 2. A class formula_25 of random variables is called uniformly integrable (UI) if for every formula_29 there exists formula_34 such that formula_35, where formula_36 is the indicator function formula_37. Tightness and uniform integrability. One consequence of uniform integrability of a class formula_25 of random variables is that the family of laws or distributions formula_38 is tight. That is, for each formula_30, there exists formula_39 such that formula_40 for all formula_41. This, however, does not mean that the family of measures formula_42 is tight. (In any case, tightness would require a topology on formula_43 in order to be defined.) Uniform absolute continuity. There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory, and which does not require random variables to have a finite integral. Definition: Suppose formula_44 is a probability space. 
A class formula_25 of random variables is uniformly absolutely continuous with respect to formula_45 if for any formula_46, there is formula_47 such that formula_48 whenever formula_49. It is equivalent to uniform integrability if the measure is finite and has no atoms. The term "uniform absolute continuity" is not standard, but is used by some authors. Related corollaries. The following results apply to the probabilistic definition. Relevant theorems. In the following we use the probabilistic framework, but the results remain valid regardless of the finiteness of the measure, provided the boundedness condition is added on the chosen subset of formula_69. Relation to convergence of random variables. A sequence formula_66 converges to formula_27 in the formula_75 norm if and only if it converges in measure to formula_27 and it is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in the mean if and only if the sequence is uniformly integrable. This is a generalization of Lebesgue's dominated convergence theorem; see Vitali convergence theorem. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
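A concrete illustration of how uniform integrability can fail: under the uniform measure on [0, 1], the family taking the value "n" on (0, 1/"n") and 0 elsewhere (the example spelled out in the formulas below) is bounded in L1 but not uniformly integrable, because the tail expectation never decreases. A short Monte Carlo sketch (sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, size=500_000)      # omega drawn from the uniform measure on [0, 1]

def X(n, u):
    """X_n(omega) = n on (0, 1/n) and 0 elsewhere -- the classic non-UI family."""
    return np.where(u < 1.0 / n, float(n), 0.0)

K = 10.0
for n in (10, 100, 1000):
    x = X(n, U)
    mean = x.mean()                          # estimates E|X_n| = 1 (L1-bounded)
    tail = np.where(x >= K, x, 0.0).mean()   # estimates E[|X_n| 1_{|X_n| >= K}]
    print(n, round(mean, 3), round(tail, 3))
# every row prints values near 1.0: sup_n E[|X_n| 1_{|X_n| >= K}] does not tend to 0
# as K grows, so the family is not uniformly integrable.
```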
[ { "math_id": 0, "text": " L_1" }, { "math_id": 1, "text": " (X,\\mathfrak{M}, \\mu)" }, { "math_id": 2, "text": "\\Phi\\subset L^1(\\mu)" }, { "math_id": 3, "text": "\\sup_{f\\in\\Phi}\\|f\\|_{L_1(\\mu)}<\\infty" }, { "math_id": 4, "text": " \\varepsilon>0 " }, { "math_id": 5, "text": " \\delta>0 " }, { "math_id": 6, "text": " \\int_E |f| \\, d\\mu < \\varepsilon " }, { "math_id": 7, "text": "f \\in \\Phi " }, { "math_id": 8, "text": "\\mu(E)<\\delta." }, { "math_id": 9, "text": " (X,\\mathfrak{M},\\mu)" }, { "math_id": 10, "text": " \\Phi\\subset L^1(\\mu)" }, { "math_id": 11, "text": " \\inf_{g\\in L^1_+(\\mu)}\\sup_{f\\in\\Phi}\\int_{\\{|f|>g\\}}|f|\\, d\\mu=0 " }, { "math_id": 12, "text": " L^1_+(\\mu)=\\{g\\in L^1(\\mu): g\\geq0\\} " }, { "math_id": 13, "text": " \\inf_{g\\in L^1_+(\\mu)}\\sup_{f\\in\\Phi}\\int (|f|- g)^+ \\, d\\mu=0 " }, { "math_id": 14, "text": "\\mu(X)<\\infty" }, { "math_id": 15, "text": "\\inf_{a>0}\\sup_{f\\in \\Phi}\\int(|f|-a)_+\\,d\\mu =0" }, { "math_id": 16, "text": "\\inf_{a>0}\\sup_{f\\in \\Phi}\\int_{\\{|f|>a\\}}|f|\\,d\\mu=0" }, { "math_id": 17, "text": " (X,\\mathfrak{M},\\mu) " }, { "math_id": 18, "text": " \\sigma " }, { "math_id": 19, "text": " h\\in L^1(\\mu) " }, { "math_id": 20, "text": " h>0 " }, { "math_id": 21, "text": " \\sup_{f\\in\\Phi}\\|f\\|_{L_1(\\mu)}<\\infty " }, { "math_id": 22, "text": " \\sup_{f\\in\\Phi}\\int_A|f|\\, d\\mu <\\varepsilon " }, { "math_id": 23, "text": " \\int_A h\\,d\\mu <\\delta " }, { "math_id": 24, "text": " h\\equiv1" }, { "math_id": 25, "text": "\\mathcal{C}" }, { "math_id": 26, "text": "M" }, { "math_id": 27, "text": "X" }, { "math_id": 28, "text": "\\operatorname E(|X|)\\leq M" }, { "math_id": 29, "text": "\\varepsilon > 0" }, { "math_id": 30, "text": "\\delta > 0" }, { "math_id": 31, "text": "A" }, { "math_id": 32, "text": "P(A)\\leq \\delta" }, { "math_id": 33, "text": "\\operatorname E(|X|I_A)\\leq\\varepsilon" }, { "math_id": 34, "text": "K\\in[0,\\infty)" }, { "math_id": 35, "text": "\\operatorname E(|X|I_{|X|\\geq K})\\le\\varepsilon\\ \\text{ for all } X \\in \\mathcal{C}" }, { "math_id": 36, "text": " I_{|X|\\geq K} " }, { "math_id": 37, "text": " I_{|X|\\geq K} = \\begin{cases} 1 &\\text{if } |X|\\geq K, \\\\ 0 &\\text{if } |X| < K. \\end{cases}" }, { "math_id": 38, "text": " \\{P\\circ|X|^{-1}(\\cdot):X\\in\\mathcal{C}\\}" }, { "math_id": 39, "text": "a > 0" }, { "math_id": 40, "text": " P(|X|>a) \\leq \\delta " }, { "math_id": 41, "text": "X\\in\\mathcal{C}" }, { "math_id": 42, "text": "\\mathcal{V}_{\\mathcal{C}}:=\\Big\\{\\mu_X:A\\mapsto\\int_A|X|\\,dP,\\,X\\in\\mathcal{C}\\Big\\}" }, { "math_id": 43, "text": "\\Omega" }, { "math_id": 44, "text": "(\\Omega,\\mathcal{F},P)" }, { "math_id": 45, "text": "P" }, { "math_id": 46, "text": "\\varepsilon>0" }, { "math_id": 47, "text": "\\delta>0" }, { "math_id": 48, "text": " E[|X|I_A]<\\varepsilon" }, { "math_id": 49, "text": " P(A)<\\delta" }, { "math_id": 50, "text": "\\lim_{K \\to \\infty} \\sup_{X \\in \\mathcal{C}} \\operatorname E(|X|\\,I_{|X|\\geq K})=0." 
}, { "math_id": 51, "text": "\\Omega = [0,1] \\subset \\mathbb{R}" }, { "math_id": 52, "text": "X_n(\\omega) = \\begin{cases}\n n, & \\omega\\in (0,1/n), \\\\\n 0 , & \\text{otherwise.} \\end{cases}" }, { "math_id": 53, "text": "X_n\\in L^1" }, { "math_id": 54, "text": "\\operatorname E(|X_n|)=1\\ ," }, { "math_id": 55, "text": "\\operatorname E(|X_n| I_{\\{|X_n|\\ge K \\}})= 1\\ \\text{ for all } n \\ge K," }, { "math_id": 56, "text": "L^1" }, { "math_id": 57, "text": "X_n" }, { "math_id": 58, "text": "\\delta " }, { "math_id": 59, "text": " (0, 1/n)" }, { "math_id": 60, "text": "\\delta" }, { "math_id": 61, "text": "E[|X_m|: (0, 1/n)] =1 " }, { "math_id": 62, "text": "m \\ge n " }, { "math_id": 63, "text": "\\operatorname E(|X|) = \\operatorname E(|X| I_{\\{|X| \\geq K \\}})+\\operatorname E(|X| I_{\\{|X| < K \\}})" }, { "math_id": 64, "text": "Y" }, { "math_id": 65, "text": " |X_n(\\omega)| \\le Y(\\omega),\\ Y(\\omega)\\ge 0,\\ \\operatorname E(Y) < \\infty," }, { "math_id": 66, "text": "\\{X_n\\}" }, { "math_id": 67, "text": "L^p" }, { "math_id": 68, "text": "p > 1" }, { "math_id": 69, "text": " L^1(\\mu)" }, { "math_id": 70, "text": "X_n \\subset L^1(\\mu)" }, { "math_id": 71, "text": "\\sigma(L^1,L^\\infty)" }, { "math_id": 72, "text": "\\{X_{\\alpha}\\}_{\\alpha\\in\\Alpha} \\subset L^1(\\mu)" }, { "math_id": 73, "text": "G(t)" }, { "math_id": 74, "text": "\\lim_{t \\to \\infty} \\frac{G(t)} t = \\infty \\text{ and } \\sup_\\alpha \\operatorname E(G(|X_{\\alpha}|)) < \\infty." }, { "math_id": 75, "text": "L_1" } ]
https://en.wikipedia.org/wiki?curid=9610107
9610491
Circumconic and inconic
Conic section that passes through the vertices of a triangle or is tangent to its sides In Euclidean geometry, a circumconic is a conic section that passes through the three vertices of a triangle, and an inconic is a conic section inscribed in the sides, possibly extended, of a triangle. Suppose A, B, C are distinct non-collinear points, and let △"ABC" denote the triangle whose vertices are A, B, C. Following common practice, A denotes not only the vertex but also the angle ∠"BAC" at vertex A, and similarly for B and C as angles in △"ABC". Let formula_0 the sidelengths of △"ABC". In trilinear coordinates, the general circumconic is the locus of a variable point formula_1 satisfying an equation formula_2 for some point "u" : "v" : "w". The isogonal conjugate of each point X on the circumconic, other than A, B, C, is a point on the line formula_3 This line meets the circumcircle of △"ABC" in 0,1, or 2 points according as the circumconic is an ellipse, parabola, or hyperbola. The "general inconic" is tangent to the three sidelines of △"ABC" and is given by the equation formula_4 Centers and tangent lines. Circumconic. The center of the general circumconic is the point formula_5 The lines tangent to the general circumconic at the vertices A, B, C are, respectively, formula_6 Inconic. The center of the general inconic is the point formula_7 The lines tangent to the general inconic are the sidelines of △"ABC", given by the equations "x" = 0, "y" = 0, "z" = 0. formula_8 formula_10 formula_11 and to a rectangular hyperbola if and only if formula_12 formula_13 in which case it is tangent externally to one of the sides of the triangle and is tangent to the extensions of the other two sides. formula_14 As the parameter t ranges through the real numbers, the locus of X is a line. Define formula_15 The locus of "X"2 is the inconic, necessarily an ellipse, given by the equation formula_16 where formula_17 formula_18 which is maximized by the centroid's barycentric coordinates "α" = "β" = "γ" = ⅓. Extension to quadrilaterals. All the centers of inellipses of a given quadrilateral fall on the line segment connecting the midpoints of the diagonals of the quadrilateral. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
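The claim that the inellipse-to-triangle area ratio is maximized at the centroid can be checked numerically. A small sketch, assuming (as the formula suggests) that "α", "β", "γ" are the barycentric coordinates of the inconic's center with "α" + "β" + "γ" = 1; the grid search is purely illustrative:

```python
import numpy as np

def area_ratio(alpha, beta, gamma):
    """Area of inellipse / area of triangle, from the formula in the article."""
    prod = (1 - 2 * alpha) * (1 - 2 * beta) * (1 - 2 * gamma)
    return np.pi * np.sqrt(prod) if prod >= 0 else np.nan

best = (-1.0, None)
ts = np.linspace(0.01, 0.49, 199)
for a in ts:
    for b in ts:
        c = 1.0 - a - b                       # barycentric coordinates sum to 1
        if c <= 0:
            continue
        r = area_ratio(a, b, c)
        if r > best[0]:
            best = (r, (a, b, c))

print(best)                                   # maximum attained near (1/3, 1/3, 1/3)
print(area_ratio(1/3, 1/3, 1/3))              # pi / sqrt(27) ~ 0.6046
```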
[ { "math_id": 0, "text": "a= |BC|, b=|CA|, c=|AB|," }, { "math_id": 1, "text": "X = x:y:z" }, { "math_id": 2, "text": "uyz + vzx + wxy = 0," }, { "math_id": 3, "text": "ux + vy + wz = 0." }, { "math_id": 4, "text": "u^2x^2 + v^2y^2 + w^2z^2 - 2vwyz - 2wuzx - 2uvxy = 0." }, { "math_id": 5, "text": "u(-au+bv+cw) : v(au-bv+cw) : w(au+bv-cw)." }, { "math_id": 6, "text": "\\begin{align} \nwv+vz &= 0, \\\\\nuz+wx &= 0, \\\\\nvx+uy &= 0.\n\\end{align}" }, { "math_id": 7, "text": "cv+bw : aw+cu : bu+av." }, { "math_id": 8, "text": "(cx-az)(ay-bx) : (ay-bx)(bz-cy) : (bz-cy)(cx-az)" }, { "math_id": 9, "text": "P = p:q:r" }, { "math_id": 10, "text": "(vr+wq)x + (wp+ur)y + (uq+vp)z = 0." }, { "math_id": 11, "text": "u^2a^2 + v^2b^2 + w^2c^2 - 2vwbc - 2wuca - 2uvab = 0," }, { "math_id": 12, "text": "u\\cos A + v\\cos B + w\\cos C = 0." }, { "math_id": 13, "text": "ubc + vca + wab = 0," }, { "math_id": 14, "text": "X = (p_1+p_2 t) : (q_1+q_2 t) : (r_1+r_2 t)." }, { "math_id": 15, "text": "X^2 = (p_1+p_2 t)^2 : (q_1+q_2 t)^2 : (r_1+r_2 t)^2." }, { "math_id": 16, "text": "L^4x^2 + M^4y^2 + N^4z^2 - 2M^2N^2yz - 2N^2L^2zx - 2L^2M^2xy = 0," }, { "math_id": 17, "text": "\\begin{align}\nL &= q_1r_2 - r_1q_2, \\\\\nM &= r_1p_2 - p_1r_2, \\\\\nN &= p_1q_2 - q_1p_2.\n\\end{align}" }, { "math_id": 18, "text": "\\frac{\\text{Area of inellipse}}{\\text{Area of triangle}}= \\pi \\sqrt{(1-2\\alpha)(1-2\\beta)(1-2\\gamma)}," } ]
https://en.wikipedia.org/wiki?curid=9610491
9610679
Morse–Palais lemma
In mathematics, the Morse–Palais lemma is a result in the calculus of variations and theory of Hilbert spaces. Roughly speaking, it states that a smooth enough function near a critical point can be expressed as a quadratic form after a suitable change of coordinates. The Morse–Palais lemma was originally proved in the finite-dimensional case by the American mathematician Marston Morse, using the Gram–Schmidt orthogonalization process. This result plays a crucial role in Morse theory. The generalization to Hilbert spaces is due to Richard Palais and Stephen Smale. Statement of the lemma. Let formula_0 be a real Hilbert space, and let formula_1 be an open neighbourhood of the origin in formula_2 Let formula_3 be a formula_4-times continuously differentiable function with formula_5 that is, formula_6 Assume that formula_7 and that formula_8 is a non-degenerate critical point of formula_9 that is, the second derivative formula_10 defines an isomorphism of formula_11 with its continuous dual space formula_12 by formula_13 Then there exists a subneighbourhood formula_14 of formula_8 in formula_15 a diffeomorphism formula_16 that is formula_17 with formula_17 inverse, and an invertible symmetric operator formula_18 such that formula_19 Corollary. Let formula_3 be formula_20 such that formula_8 is a non-degenerate critical point. Then there exists a formula_17-with-formula_17-inverse diffeomorphism formula_21 and an orthogonal decomposition formula_22 such that, if one writes formula_23 then formula_24
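For a concrete, one-dimensional illustration of the lemma (a sketch of my own, not taken from the sources): take the Hilbert space to be the real line and consider a function with a non-degenerate critical point at the origin.

```latex
% Sketch (H = \mathbb{R}): f(x) = x^2 + x^3 has a non-degenerate critical point at 0.
% Writing f(x) = x^2 (1 + x), the change of coordinates
%     \varphi(x) = x \sqrt{1 + x}
% is smooth near 0 with \varphi(0) = 0 and \varphi'(0) = 1, hence a local
% diffeomorphism, and with the invertible symmetric operator A = \mathrm{id} we obtain
\[
  f(x) \;=\; x^2 + x^3 \;=\; \bigl( x\sqrt{1+x} \bigr)^2
        \;=\; \langle A\,\varphi(x), \varphi(x) \rangle ,
  \qquad A = \operatorname{id},\quad \varphi(x) = x\sqrt{1+x},
\]
% which is exactly the normal form asserted by the lemma.
```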
[ { "math_id": 0, "text": "(H, \\langle \\cdot ,\\cdot \\rangle)" }, { "math_id": 1, "text": "U" }, { "math_id": 2, "text": "H." }, { "math_id": 3, "text": "f : U \\to \\R" }, { "math_id": 4, "text": "(k+2)" }, { "math_id": 5, "text": "k \\geq 1;" }, { "math_id": 6, "text": "f \\in C^{k+2}(U; \\R)." }, { "math_id": 7, "text": "f(0) = 0" }, { "math_id": 8, "text": "0" }, { "math_id": 9, "text": "f;" }, { "math_id": 10, "text": "D^2 f(0)" }, { "math_id": 11, "text": "H" }, { "math_id": 12, "text": "H^*" }, { "math_id": 13, "text": "H \\ni x \\mapsto \\mathrm{D}^2 f(0) (x, -) \\in H^*." }, { "math_id": 14, "text": "V" }, { "math_id": 15, "text": "U," }, { "math_id": 16, "text": "\\varphi : V \\to V" }, { "math_id": 17, "text": "C^k" }, { "math_id": 18, "text": "A : H \\to H," }, { "math_id": 19, "text": "f(x) = \\langle A \\varphi(x), \\varphi(x) \\rangle \\quad \\text{ for all } x \\in V." }, { "math_id": 20, "text": "f \\in C^{k+2}" }, { "math_id": 21, "text": "\\psi : V \\to V" }, { "math_id": 22, "text": "H = G \\oplus G^{\\perp}," }, { "math_id": 23, "text": "\\psi (x) = y + z \\quad \\mbox{ with } y \\in G, z \\in G^{\\perp}," }, { "math_id": 24, "text": "f (\\psi(x)) = \\langle y, y \\rangle - \\langle z, z \\rangle \\quad \\text{ for all } x \\in V." } ]
https://en.wikipedia.org/wiki?curid=9610679
9612488
Misiurewicz point
Parameter in the Mandelbrot set In mathematics, a Misiurewicz point is a parameter value in the Mandelbrot set (the parameter space of complex quadratic maps) and also in real quadratic maps of the interval for which the critical point is strictly pre-periodic (i.e., it becomes periodic after finitely many iterations but is not periodic itself). By analogy, the term "Misiurewicz point" is also used for parameters in a multibrot set where the unique critical point is strictly pre-periodic. This term makes less sense for maps in greater generality that have more than one free critical point because some critical points might be periodic and others not. These points are named after the Polish-American mathematician Michał Misiurewicz, who was the first to study them. Mathematical notation. A parameter formula_0 is a Misiurewicz point formula_1 if it satisfies the equations: formula_2 and: formula_3 so: formula_4 where: Name. The term "Misiurewicz point" is used ambiguously: Misiurewicz originally investigated maps in which all critical points were non-recurrent; that is, in which there exists a neighbourhood for every critical point that is not visited by the orbit of this critical point. This meaning is firmly established in the context of the dynamics of iterated interval maps. Only in very special cases does a quadratic polynomial have a strictly periodic and unique critical point. In this restricted sense, the term is used in complex dynamics; a more appropriate one would be Misiurewicz–Thurston points (after William Thurston, who investigated post-critically finite rational maps). Quadratic maps. A complex quadratic polynomial has only one critical point. By a suitable conjugation any quadratic polynomial can be transformed into a map of the form formula_10 which has a single critical point at formula_11. The Misiurewicz points of this family of maps are roots of the equations: formula_12 Subject to the condition that the critical point is not periodic, where: For example, the Misiurewicz points with "k"= 2 and "n"= 1, denoted by "M"2,1, are roots of: formula_16 The root "c"= 0 is not a Misiurewicz point because the critical point is a fixed point when "c"= 0, and so is periodic rather than pre-periodic. This leaves a single Misiurewicz point "M"2,1 at "c" = −2. Properties of Misiurewicz points of complex quadratic mapping. Misiurewicz points belong to, and are dense in, the boundary of the Mandelbrot set. If formula_0 is a Misiurewicz point, then the associated filled Julia set is equal to the Julia set and means the filled Julia set has no interior. If formula_0 is a Misiurewicz point, then in the corresponding Julia set all periodic cycles are repelling (in particular the cycle that the critical orbit falls onto). The Mandelbrot set and Julia set formula_17 are locally asymptotically self-similar around Misiurewicz points. Types. Misiurewicz points in the context of the Mandelbrot set can be classified based on several criteria. One such criterion is the number of external rays that converge on such a point. Branch points, which can divide the Mandelbrot set into two or more sub-regions, have three or more external arguments (or angles). Non-branch points have exactly two external rays (these correspond to points lying on arcs within the Mandelbrot set). These non-branch points are generally more subtle and challenging to identify in visual representations. End points, or branch tips, have only one external ray converging on them. 
Another criterion for classifying Misiurewicz points is their appearance within a plot of a subset of the Mandelbrot set. Misiurewicz points can be found at the centers of spirals as well as at points where two or more branches meet. According to the Branch Theorem of the Mandelbrot set, all branch points of the Mandelbrot set are Misiurewicz points. Most Misiurewicz parameters within the Mandelbrot set exhibit a "center of a spiral". This occurs due to the behavior at a Misiurewicz parameter where the critical value jumps onto a repelling periodic cycle after a finite number of iterations. At each point during the cycle, the Julia set exhibits asymptotic self-similarity through complex multiplication by the derivative of this cycle. If the derivative is non-real, it implies that the Julia set near the periodic cycle has a spiral structure. Consequently, a similar spiral structure occurs in the Julia set near the critical value, and by Tan Lei's theorem, also in the Mandelbrot set near any Misiurewicz parameter for which the repelling orbit has a non-real multiplier. The visibility of the spiral shape depends on the value of this multiplier. The number of arms in the spiral corresponds to the number of branches at the Misiurewicz parameter, which in turn equals the number of branches at the critical value in the Julia set. Even the principal Misiurewicz point in the 1/3-limb, located at the end of the parameter rays at angles 9/56, 11/56, and 15/56, is asymptotically a spiral with infinitely many turns, although this is difficult to discern without magnification. External arguments. External arguments of Misiurewicz points, measured in turns are: where: a and b are positive integers and b is odd, subscript number shows base of numeral system. Examples of Misiurewicz points of complex quadratic mapping. End points. Point formula_22 is considered an end point as it is a tip of a filament, and the landing point of the external ray for the angle 1/6. Its critical orbit is formula_23. Point formula_24 is considered an end point as it is the endpoint of the main antenna of the Mandelbrot set. and the landing point of only one external ray (parameter ray) of angle 1/2. It is also considered an end point because its critical orbit is formula_25, following the Symbolic sequence = C L R R R ... with a pre-period of 2 and period of 1. Branch points. Point formula_26 is considered a branch point because it is a principal Misiurewicz point of the 1/3 limb and has 3 external rays: 9/56, 11/56 and 15/56. Other points. These are points which are not-branch and not-end points. Point formula_27 is near a Misiurewicz point formula_28. This can be seen because it is a center of a two-arms spiral, the landing point of 2 external rays with angles: formula_29 and formula_30 where the denominator is formula_31, and has a preperiodic point with pre-period formula_32 and period formula_33 Point formula_34 is near a Misiurewicz point formula_35, as it is the landing point for pair of rays: formula_36, formula_37 and has pre-period formula_38 and period formula_39. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
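The computation of "M"2,1 described above is easy to reproduce numerically: find the roots of "c"4 + 2"c"3 = 0 and discard parameters whose critical orbit is actually periodic rather than strictly pre-periodic. A short Python sketch (the brute-force orbit check and all names are illustrative):

```python
import numpy as np

def orbit(c, length=8):
    """First few iterates of the critical orbit z -> z^2 + c, starting from z = 0."""
    z, out = 0j, []
    for _ in range(length):
        z = z * z + c
        out.append(z)
    return out

# P_c^(2)(0) = P_c^(3)(0)  <=>  c^4 + 2c^3 = 0  (from the article)
candidates = np.roots([1, 2, 0, 0, 0])

for c in candidates:
    orb = orbit(complex(c))
    # strictly pre-periodic with k = 2, n = 1 means P_c^(2)(0) == P_c^(3)(0)
    # but P_c^(1)(0) != P_c^(2)(0); c = 0 fails this because the critical point is fixed.
    strictly_preperiodic = np.isclose(orb[1], orb[2]) and not np.isclose(orb[0], orb[1])
    print(c, "Misiurewicz point M_{2,1}" if strictly_preperiodic else "rejected (periodic)")
# only c = -2 survives, matching the text.
```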
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "M_{k,n}" }, { "math_id": 2, "text": "f_c^{(k)}(z_{cr}) = f_c^{(k+n)}(z_{cr})" }, { "math_id": 3, "text": "f_c^{(k-1)}(z_{cr}) \\neq f_c^{(k+n-1)}(z_{cr})" }, { "math_id": 4, "text": "M_{k,n} = c : f_c^{(k)}(z_{cr}) = f_c^{(k+n)}(z_{cr})" }, { "math_id": 5, "text": "z_{cr}" }, { "math_id": 6, "text": "f_c" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "f_c^{(k)}" }, { "math_id": 10, "text": "P_c(z)=z^2+c" }, { "math_id": 11, "text": "z = 0" }, { "math_id": 12, "text": "P_c^{(k)}(0) = P_c^{(k+n)}(0)," }, { "math_id": 13, "text": "P_c^{(n)} = P_c ( P_c^{(n-1)})" }, { "math_id": 14, "text": "P_c(z)= z^2+c" }, { "math_id": 15, "text": "P_c" }, { "math_id": 16, "text": "\\begin{align}\n& P_c^{(2)}(0) = P_c^{(3)}(0)\\\\\n\\Rightarrow {} & c^2+c = (c^2+c)^2+c \\\\\n\\Rightarrow {} & c^4+2c^3 = 0.\n\\end{align}" }, { "math_id": 17, "text": "J_c" }, { "math_id": 18, "text": "= 2^b" }, { "math_id": 19, "text": "\\frac{1}{2}_{10} = 0.5_{10} = 0.1_2" }, { "math_id": 20, "text": "= a \\cdot 2^b" }, { "math_id": 21, "text": "\\frac{1}{6}_{10} = {\\frac{1}{2 \\times 3}}_{10}=0.16666..._{10} = 0.0(01)..._2." }, { "math_id": 22, "text": "c = M_{2,2} = i" }, { "math_id": 23, "text": "\\{0, i, i-1, -i, i-1, -i...\\}" }, { "math_id": 24, "text": "c = M_{2,1} = -2" }, { "math_id": 25, "text": "\\{ 0 , -2, 2, 2, 2, ... \\}" }, { "math_id": 26, "text": "c = -0.10109636384562... + i \\, 0.95628651080914... = M_{4,1}" }, { "math_id": 27, "text": "c = -0.77568377 + i \\, 0.13646737" }, { "math_id": 28, "text": "M_{23,2}" }, { "math_id": 29, "text": "\\frac{8388611}{25165824}" }, { "math_id": 30, "text": "\\frac{8388613}{25165824}" }, { "math_id": 31, "text": "3*2^{23}" }, { "math_id": 32, "text": "k = 23" }, { "math_id": 33, "text": "n = 2" }, { "math_id": 34, "text": " c = -1.54368901269109" }, { "math_id": 35, "text": "M_{3,1}" }, { "math_id": 36, "text": "\\frac{5}{12}" }, { "math_id": 37, "text": "\\frac{7}{12}" }, { "math_id": 38, "text": "k = 3" }, { "math_id": 39, "text": "n = 1" } ]
https://en.wikipedia.org/wiki?curid=9612488
9613
Euler's formula
Complex exponential in terms of sine and cosine Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has formula_0 where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis "x" ("cosine plus "i" sine"). The formula is still valid if x is a complex number, and is also called "Euler's formula" in this more general case. Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics". When "x" = "π", Euler's formula may be rewritten as "eiπ" + 1 = 0 or "eiπ" = −1, which is known as Euler's identity. History. In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of formula_1) as: formula_2 Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of 2"πi". Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work "Introductio in analysin infinitorum". Johann Bernoulli had found that formula_3 And since formula_4 the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral. Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values. The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel. Definitions of complex exponentiation. The exponential function "ex" for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of "ez" for complex values of z simply by substituting z in place of x and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of "ex" to the complex plane. Differential equation definition. The exponential function formula_5 is the unique differentiable function of a complex variable for which the derivative equals the function formula_6 and formula_7 Power series definition. For complex z formula_8 Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines "ez" for all complex z. Limit definition. For complex z formula_9 Here, n is restricted to positive integers, so there is no question about what the power with exponent n means. Proofs. Various proofs of the formula are possible. Using differentiation. 
This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted). Consider the function "f"("θ") formula_10 for real θ. Differentiating gives by the product rule formula_11 Thus, "f"("θ") is a constant. Since "f"(0) = 1, then "f"("θ") = 1 for all real θ, and thus formula_12 Using power series. Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of i: formula_13 Using now the power-series definition from above, we see that for real values of x formula_14 where in the last step we recognize the two terms are the Maclaurin series for cos "x" and sin "x". The rearrangement of terms is justified because each series is absolutely convergent. Using polar coordinates. Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, for some r and θ depending on x, formula_15 No assumptions are being made about r and θ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of "e""ix" is "ie""ix". Therefore, differentiating both sides gives formula_16 Substituting "r"(cos "θ" + "i" sin "θ") for "eix" and equating real and imaginary parts in this formula gives "dr"/"dx" = 0 and "dθ"/"dx" = 1. Thus, r is a constant, and θ is "x" + "C" for some constant C. The initial values "r"(0) = 1 and "θ"(0) = 0 come from "e"0"i" = 1, giving "r" = 1 and "θ" = "x". This proves the formula formula_17 Applications. Applications in complex number theory. Interpretation of the formula. This formula can be interpreted as saying that the function "e""iφ" is a unit complex number, i.e., it traces out the unit circle in the complex plane as φ ranges through the real numbers. Here φ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians. The original proof is based on the Taylor series expansions of the exponential function "e""z" (where z is a complex number) and of sin "x" and cos "x" for real numbers x (see above). In fact, the same proof shows that Euler's formula is even valid for all "complex" numbers x. A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number "z" = "x" + "iy", and its complex conjugate, "z̄" = "x" − "iy", can be written as formula_18 where φ is the argument of z, i.e., the angle between the "x" axis and the vector "z" measured counterclockwise in radians, which is defined up to addition of 2"π". Many texts write "φ" = tan−1("y"/"x") instead of "φ" = atan2("y", "x"), but the first equation needs adjustment when "x" ≤ 0. This is because for any real x and y, not both zero, the angles of the vectors ("x", "y") and (−"x", −"y") differ by π radians, but have the identical value of tan "φ" = "y"/"x". Use of the formula to define the logarithm of complex numbers. Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation): formula_19 and that formula_20 both valid for any complex numbers a and b. 
Therefore, one can write: formula_21 for any "z" ≠ 0. Taking the logarithm of both sides shows that formula_22 and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued. Finally, the other exponential law formula_23 which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula. Relationship to trigonometry. Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. It provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function: formula_24 The two equations above can be derived by adding or subtracting Euler's formulas: formula_25 and solving for either cosine or sine. These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting "x" = "iy", we have: formula_26 In addition formula_27 Complex exponentials can simplify trigonometry, because they are easier to manipulate than their sinusoidal components. One technique is simply to convert sinusoids into equivalent expressions in terms of exponentials. After the manipulations, the simplified result is still real-valued. For example: formula_28 Another technique is to represent the sinusoids in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example: formula_29 This formula is used for recursive generation of cos "nx" for integer values of n and arbitrary x (in radians). Considering cos "x" a parameter in equation above yields recursive formula for Chebyshev polynomials of the first kind. Topological interpretation. In the language of topology, Euler's formula states that the imaginary exponential function formula_30 is a (surjective) morphism of topological groups from the real line formula_31 to the unit circle formula_32. In fact, this exhibits formula_31 as a covering space of formula_32. Similarly, Euler's identity says that the kernel of this map is formula_33, where formula_34. These observations may be combined and summarized in the commutative diagram below: Other applications. In differential equations, the function "eix" is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation. In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor. In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point r on this sphere, and x a real number, Euler's formula applies: formula_35 and the element is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space. Other special cases. 
The special cases that evaluate to units illustrate rotation around the complex unit circle: The special case at "x" = "τ" (where "τ" = 2"π", one turn) yields "eiτ" = 1 + 0. This is also argued to link five fundamental constants with three basic arithmetic operations, but, unlike Euler's identity, without rearranging the addends from the general case: formula_36 An interpretation of the simplified form "eiτ" = 1 is that rotating by a full turn is an identity function.
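The power-series definition and the sine/cosine identities derived above are easy to verify numerically. A small sketch (the truncation order, sample points and tolerances are arbitrary choices):

```python
import cmath, math

def exp_series(z, terms=30):
    """Truncated power series sum z^n / n! from the power-series definition."""
    return sum(z**n / math.factorial(n) for n in range(terms))

for x in (0.5, math.pi / 3, 2.0):
    lhs = exp_series(1j * x)
    rhs = complex(math.cos(x), math.sin(x))          # cos x + i sin x
    assert abs(lhs - rhs) < 1e-12                    # Euler's formula

    # cos x = (e^{ix} + e^{-ix}) / 2 and sin x = (e^{ix} - e^{-ix}) / (2i)
    assert abs((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2 - math.cos(x)) < 1e-12
    assert abs((cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j - math.sin(x)) < 1e-12

    # cos(iy) = cosh y, one of the complex-argument identities above
    assert abs(cmath.cos(1j * x) - math.cosh(x)) < 1e-12

print("Euler's formula and the derived identities check out numerically.")
```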
[ { "math_id": 0, "text": "e^{i x} = \\cos x + i \\sin x, " }, { "math_id": 1, "text": "\\sqrt{-1}" }, { "math_id": 2, "text": "ix = \\ln(\\cos x + i\\sin x)." }, { "math_id": 3, "text": "\\frac{1}{1 + x^2} = \\frac 1 2 \\left( \\frac{1}{1 - ix} + \\frac{1}{1 + ix}\\right)." }, { "math_id": 4, "text": "\\int \\frac{dx}{1 + ax} = \\frac{1}{a} \\ln(1 + ax) + C," }, { "math_id": 5, "text": "f(z) = e^z" }, { "math_id": 6, "text": "\\frac{df}{dz} = f" }, { "math_id": 7, "text": "f(0) = 1." }, { "math_id": 8, "text": "e^z = 1 + \\frac{z}{1!} + \\frac{z^2}{2!} + \\frac{z^3}{3!} + \\cdots = \\sum_{n=0}^{\\infty} \\frac{z^n}{n!}." }, { "math_id": 9, "text": "e^z = \\lim_{n \\to \\infty} \\left(1+\\frac{z}{n}\\right)^n." }, { "math_id": 10, "text": " f(\\theta) = \\frac{\\cos\\theta + i\\sin\\theta}{e^{i\\theta}} = e^{-i\\theta} \\left(\\cos\\theta + i \\sin\\theta\\right) " }, { "math_id": 11, "text": " f'(\\theta) = e^{-i\\theta} \\left(i\\cos\\theta - \\sin\\theta\\right) - ie^{-i\\theta} \\left(\\cos\\theta + i\\sin\\theta\\right) = 0" }, { "math_id": 12, "text": "e^{i\\theta} = \\cos\\theta + i\\sin\\theta." }, { "math_id": 13, "text": "\\begin{align}\ni^0 &= 1, & i^1 &= i, & i^2 &= -1, & i^3 &= -i, \\\\\ni^4 &= 1, & i^5 &= i, & i^6 &= -1, & i^7 &= -i \\\\\n&\\vdots & &\\vdots & &\\vdots & &\\vdots\n\\end{align}" }, { "math_id": 14, "text": "\\begin{align}\n e^{ix} &= 1 + ix + \\frac{(ix)^2}{2!} + \\frac{(ix)^3}{3!} + \\frac{(ix)^4}{4!} + \\frac{(ix)^5}{5!} + \\frac{(ix)^6}{6!} + \\frac{(ix)^7}{7!} + \\frac{(ix)^8}{8!} + \\cdots \\\\[8pt]\n &= 1 + ix - \\frac{x^2}{2!} - \\frac{ix^3}{3!} + \\frac{x^4}{4!} + \\frac{ix^5}{5!} - \\frac{x^6}{6!} - \\frac{ix^7}{7!} + \\frac{x^8}{8!} + \\cdots \\\\[8pt]\n &= \\left( 1 - \\frac{x^2}{2!} + \\frac{x^4}{4!} - \\frac{x^6}{6!} + \\frac{x^8}{8!} - \\cdots \\right) + i\\left( x - \\frac{x^3}{3!} + \\frac{x^5}{5!} - \\frac{x^7}{7!} + \\cdots \\right) \\\\[8pt]\n &= \\cos x + i\\sin x ,\n\\end{align}" }, { "math_id": 15, "text": "e^{i x} = r \\left(\\cos \\theta + i \\sin \\theta\\right)." }, { "math_id": 16, "text": "i e ^{ix} = \\left(\\cos \\theta + i \\sin \\theta\\right) \\frac{dr}{dx} + r \\left(-\\sin \\theta + i \\cos \\theta\\right) \\frac{d\\theta}{dx}." }, { "math_id": 17, "text": "e^{i \\theta} = 1(\\cos \\theta +i \\sin \\theta) = \\cos \\theta + i \\sin \\theta." 
}, { "math_id": 18, "text": "\\begin{align}\nz &= x + iy = |z| (\\cos \\varphi + i\\sin \\varphi) = r e^{i \\varphi}, \\\\\n\\bar{z} &= x - iy = |z| (\\cos \\varphi - i\\sin \\varphi) = r e^{-i \\varphi},\n\\end{align}" }, { "math_id": 19, "text": "a = e^{\\ln a}, " }, { "math_id": 20, "text": "e^a e^b = e^{a + b}, " }, { "math_id": 21, "text": "z = \\left|z\\right| e^{i \\varphi} = e^{\\ln\\left|z\\right|} e^{i \\varphi} = e^{\\ln\\left|z\\right| + i \\varphi}" }, { "math_id": 22, "text": "\\ln z = \\ln \\left|z\\right| + i \\varphi," }, { "math_id": 23, "text": "\\left(e^a\\right)^k = e^{a k}," }, { "math_id": 24, "text": "\\begin{align}\n\\cos x &= \\operatorname{Re} \\left(e^{ix}\\right) =\\frac{e^{ix} + e^{-ix}}{2}, \\\\\n\\sin x &= \\operatorname{Im} \\left(e^{ix}\\right) =\\frac{e^{ix} - e^{-ix}}{2i}.\n\\end{align}" }, { "math_id": 25, "text": "\\begin{align}\ne^{ix} &= \\cos x + i \\sin x, \\\\\ne^{-ix} &= \\cos(- x) + i \\sin(- x) = \\cos x - i \\sin x\n\\end{align}" }, { "math_id": 26, "text": "\\begin{align}\n\\cos iy &= \\frac{e^{-y} + e^y}{2} = \\cosh y, \\\\\n\\sin iy &= \\frac{e^{-y} - e^y}{2i} = \\frac{e^y - e^{-y}}{2}i = i\\sinh y.\n\\end{align}" }, { "math_id": 27, "text": "\\begin{align}\n\\cosh ix &= \\frac{e^{ix} + e^{-ix}}{2} = \\cos x, \\\\\n\\sinh ix &= \\frac{e^{ix} - e^{-ix}}{2} = i\\sin x.\n\\end{align}" }, { "math_id": 28, "text": "\\begin{align}\n\\cos x \\cos y &= \\frac{e^{ix}+e^{-ix}}{2} \\cdot \\frac{e^{iy}+e^{-iy}}{2} \\\\\n &= \\frac 1 2 \\cdot \\frac{e^{i(x+y)}+e^{i(x-y)}+e^{i(-x+y)}+e^{i(-x-y)}}{2} \\\\\n &= \\frac 1 2 \\bigg( \\frac{e^{i(x+y)} + e^{-i(x+y)}}{2} + \\frac{e^{i(x-y)} + e^{-i(x-y)}}{2} \\bigg)\\\\\n &= \\frac 1 2 \\left( \\cos(x+y) + \\cos(x-y) \\right).\n\\end{align}\n" }, { "math_id": 29, "text": "\\begin{align}\n\\cos nx &= \\operatorname{Re} \\left(e^{inx}\\right) \\\\\n &= \\operatorname{Re} \\left( e^{i(n-1)x}\\cdot e^{ix} \\right) \\\\\n &= \\operatorname{Re} \\Big( e^{i(n-1)x}\\cdot \\big(\\underbrace{e^{ix} + e^{-ix}}_{2\\cos x } - e^{-ix}\\big) \\Big) \\\\\n &= \\operatorname{Re} \\left( e^{i(n-1)x}\\cdot 2\\cos x - e^{i(n-2)x} \\right) \\\\\n &= \\cos[(n-1)x] \\cdot [2 \\cos x] - \\cos[(n-2)x].\n\\end{align}" }, { "math_id": 30, "text": "t \\mapsto e^{it}" }, { "math_id": 31, "text": "\\mathbb R" }, { "math_id": 32, "text": "\\mathbb S^1" }, { "math_id": 33, "text": "\\tau \\mathbb Z" }, { "math_id": 34, "text": "\\tau = 2\\pi" }, { "math_id": 35, "text": "\\exp xr = \\cos x + r \\sin x," }, { "math_id": 36, "text": "\\begin{align}\ne^{i\\tau} &= \\cos \\tau + i \\sin \\tau \\\\\n&= 1 + 0\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=9613
961463
151 (number)
Natural number 151 (one hundred [and] fifty-one) is a natural number. It follows 150 and precedes 152. In mathematics. 151 is the 36th prime number, the previous is 149, with which it comprises a twin prime. 151 is also a palindromic prime, a centered decagonal number, and a lucky number. 151 appears in the Padovan sequence, preceded by the terms 65, 86, 114; it is the sum of the first two of these. 151 is a unique prime in base 2, since it is the only prime with period 15 in base 2. There are 151 4-uniform tilings, such that the symmetry of tilings with regular polygons have four orbits of vertices. 151 is the number of uniform paracompact honeycombs with infinite facets and vertex figures in the third dimension, which stem from 23 different Coxeter groups. Split into two whole numbers, 151 is the sum of 75 and 76, both relevant numbers in Euclidean and hyperbolic 3-space: While 151 is the 36th indexed prime, its twin prime 149 has a reciprocal whose repeating decimal expansion has a digit sum of 666, which is the magic constant in a formula_0 prime reciprocal magic square equal to the sum of the first 36 non-zero integers, or equivalently the 36th triangular number. Furthermore, the sum between twin primes (149, 151) is 300, which in turn is the 24th triangular number. In other fields. 151 is also: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
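Several of the arithmetic claims above can be verified mechanically. A short sketch in plain Python (the centered decagonal index "n" = 6 and the formula 5"n"("n" − 1) + 1 are my own choice of witness):

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

assert is_prime(151) and is_prime(149)        # twin primes (149, 151)
assert str(151) == str(151)[::-1]             # palindromic in base 10
assert 65 + 86 == 151                         # sum of the two preceding Padovan terms 65 and 86
assert 5 * 6 * (6 - 1) + 1 == 151             # centered decagonal number (index 6, assuming 5n(n-1)+1)
assert 75 + 76 == 151                         # split into two consecutive whole numbers
assert 149 + 151 == 300 == 24 * 25 // 2       # twin-prime sum = 24th triangular number

# period of 1/151 in base 2 = multiplicative order of 2 modulo 151
order = next(k for k in range(1, 151) if pow(2, k, 151) == 1)
assert order == 15
print("all checks pass")
```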
[ { "math_id": 0, "text": "\\tfrac{1}{149}" } ]
https://en.wikipedia.org/wiki?curid=961463
9614993
Chromatographic response function
Chromatographic response function, often abbreviated to CRF, is a coefficient which measures the quality of the separation achieved in a chromatographic analysis. The CRF concept was created during the development of separation optimization, to compare the quality of many simulated or real chromatographic separations. Many CRFs have been proposed and discussed. In high performance liquid chromatography the CRF is calculated from various parameters of the solute peaks, such as width, retention time and symmetry. In TLC the CRFs are based on the placement of the spots, measured as RF values. Examples in thin layer chromatography. The CRFs in thin layer chromatography characterize the "equal-spreading" of the spots. The ideal case, in which the RF values of the spots are uniformly distributed over the 0–1 range (for example 0.25, 0.5 and 0.75 for three solutes), should be characterized as the best situation possible. The simplest criteria are formula_0 and the formula_0 product (Wang et al., 1996): the smallest difference between sorted RF values, and the product of such differences, respectively. Another function is the multispot response function (MRF) developed by De Spiegeleer et al. (Analytical Chemistry (1987): 59(1), 62–64). It is also based on a product of differences. This function always lies between 0 and 1. When two RF values are equal, it is equal to 0; when all RF values are equally spread, it is equal to 1. The L and U values – the lower and upper limits of RF – make it possible to avoid the band region. formula_1 The last example of a coefficient sensitive to the minimal distance between spots is the retention distance (Komsta et al., 2007) formula_2 The second group consists of criteria insensitive to the minimal difference between RF values (if two compounds are not separated, such CRF functions will not indicate it). They are equal to zero in the equal-spread state and increase as the situation gets worse. They are: Separation response (Bayne et al., 1987) formula_3 Performance index (Gocan et al., 1991) formula_4 Informational entropy (Gocan et al., 1991, second reference) formula_5 Retention uniformity (Komsta et al., 2007) formula_6 In all the above formulas, "n" is the number of compounds separated, "Rf (1...n)" are the retention factors of the compounds sorted in non-descending order, "Rf0" = 0 and "Rf(n+1)" = 1.
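As an illustration, the two criteria of Komsta et al. (2007) quoted above can be computed directly from their formulas. A small sketch (the function names and example RF values are my own):

```python
import numpy as np

def retention_distance(rf):
    """R_D = [ (n+1)^(n+1) * prod_{i=0..n} (Rf_{i+1} - Rf_i) ]^(1/n), with Rf_0 = 0 and Rf_{n+1} = 1."""
    rf = np.sort(np.asarray(rf, float))
    n = len(rf)
    padded = np.concatenate([[0.0], rf, [1.0]])
    return ((n + 1) ** (n + 1) * np.prod(np.diff(padded))) ** (1.0 / n)

def retention_uniformity(rf):
    """R_U = 1 - sqrt( 6(n+1) / (n(2n+1)) * sum_i (Rf_i - i/(n+1))^2 )."""
    rf = np.sort(np.asarray(rf, float))
    n = len(rf)
    ideal = np.arange(1, n + 1) / (n + 1)
    return 1.0 - np.sqrt(6.0 * (n + 1) / (n * (2 * n + 1)) * np.sum((rf - ideal) ** 2))

for spots in ([0.25, 0.50, 0.75], [0.10, 0.12, 0.90]):
    print(spots, round(retention_distance(spots), 3), round(retention_uniformity(spots), 3))
# equally spread spots score 1.0 on both criteria; the pair of nearly overlapping spots
# drags both down, with R_D (distance-sensitive) penalized much more strongly than R_U.
```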
[ { "math_id": 0, "text": "\\Delta R_F" }, { "math_id": 1, "text": "\nMRF = \\frac {(U - hR_{Fn})(hR_{F1} - L)\\prod^{n-1}_{i=1}(hR_{Fi+1} - hR_{Fi})} {[(U - L)/(n+1)]^{n+1}}\n" }, { "math_id": 2, "text": "\nR_D = \\Bigg[(n+1)^{(n+1)} \\prod^n_{i=0}{(R_{F(i+1)}-R_{Fi})\\Bigg]^{\\frac{1}{n}}} \n" }, { "math_id": 3, "text": "\nD = \\sqrt{\\sum^n_{i=1}\\left(R_{Fi} - \\frac{i-1}{n-1}\\right)}\n" }, { "math_id": 4, "text": "\nI_p = \\sqrt{\\frac{\\sum(\\Delta hR_{Fi} - \\Delta hR_{Ft})^2}{n(n+1)}}\n" }, { "math_id": 5, "text": "\ns_m = \\sqrt{\\frac{\\sum(\\Delta hR_{Fi} - \\Delta hR_{Ft})^2}{n+1}}\n" }, { "math_id": 6, "text": "\nR_{U} = 1 - \\sqrt{\\frac{6(n+1)}{n(2n+1)}\\sum_{i=1}^{n}{\\left(R_{Fi}-\\frac{i}{n+1}\\right)^2}} \n" } ]
https://en.wikipedia.org/wiki?curid=9614993
9616434
LPBoost
Linear Programming Boosting (LPBoost) is a supervised classifier from the boosting family of classifiers. LPBoost maximizes a "margin" between training samples of different classes and hence also belongs to the class of margin-maximizing supervised classification algorithms. Consider a classification function formula_0 which classifies samples from a space formula_1 into one of two classes, labelled 1 and -1, respectively. LPBoost is an algorithm to "learn" such a classification function given a set of training examples with known class labels. LPBoost is a machine learning technique and especially suited for applications of joint classification and feature selection in structured domains. LPBoost overview. As in all boosting classifiers, the final classification function is of the form formula_2 where formula_3 are non-negative weightings for "weak" classifiers formula_4. Each individual weak classifier formula_5 may be just a little bit better than random, but the resulting linear combination of many weak classifiers can perform very well. LPBoost constructs formula_6 by starting with an empty set of weak classifiers. Iteratively, a single weak classifier to add to the set of considered weak classifiers is selected, added and all the weights formula_7 for the current set of weak classifiers are adjusted. This is repeated until no weak classifiers to add remain. The property that all classifier weights are adjusted in each iteration is known as "totally-corrective" property. Early boosting methods, such as AdaBoost do not have this property and converge slower. Linear program. More generally, let formula_8 be the possibly infinite set of weak classifiers, also termed "hypotheses". One way to write down the problem LPBoost solves is as a linear program with infinitely many variables. The primal linear program of LPBoost, optimizing over the non-negative weight vector formula_7, the non-negative vector formula_9 of slack variables and the "margin" formula_10 is the following. formula_11 Note the effects of slack variables formula_12: their one-norm is penalized in the objective function by a constant factor formula_13, which—if small enough—always leads to a primal feasible linear program. Here we adopted the notation of a parameter space formula_14, such that for a choice formula_15 the weak classifier formula_16 is uniquely defined. When the above linear program was first written down in early publications about boosting methods it was disregarded as intractable due to the large number of variables formula_7. Only later it was discovered that such linear programs can indeed be solved efficiently using the classic technique of column generation. Column generation for LPBoost. In a linear program a "column" corresponds to a primal variable. Column generation is a technique to solve large linear programs. It typically works in a restricted problem, dealing only with a subset of variables. By generating primal variables iteratively and on-demand, eventually the original unrestricted problem with all variables is recovered. By cleverly choosing the columns to generate the problem can be solved such that while still guaranteeing the obtained solution to be optimal for the original full problem, only a small fraction of columns has to be created. LPBoost dual problem. Columns in the primal linear program corresponds to rows in the dual linear program. The equivalent dual linear program of LPBoost is the following linear program. 
formula_17 For linear programs, the optimal values of the primal and dual problem are equal. For the above primal and dual problems, the optimal value is equal to the negative 'soft margin'. The soft margin is the size of the margin separating positive from negative training instances minus positive slack variables that carry penalties for margin-violating samples. Thus, the soft margin may be positive although not all samples are linearly separated by the classification function. The latter is called the 'hard margin' or 'realized margin'. Convergence criterion. Consider a subset of the satisfied constraints in the dual problem. For any finite subset we can solve the linear program and thus satisfy all constraints. If we can show that of all the constraints which we did not add to the dual problem no single constraint is violated, we will have shown that solving our restricted problem is equivalent to solving the original problem. More formally, let formula_18 be the optimal objective function value for any restricted instance. Then, we can formulate a search problem for the 'most violated constraint' in the original problem space, namely finding formula_19 as formula_20 That is, we search the space formula_21 for a single decision stump formula_22 maximizing the left hand side of the dual constraint. If the constraint cannot be violated by any choice of decision stump, none of the corresponding constraints can be active in the original problem and the restricted problem is equivalent. Penalization constant formula_13. The positive value of the penalization constant formula_13 has to be found using model selection techniques. However, if we choose formula_23, where formula_24 is the number of training samples and formula_25, then the new parameter formula_26 has the following properties. Algorithm. Note that if the convergence threshold is set to formula_45 the solution obtained is the global optimal solution of the above linear program. In practice, formula_46 is set to a small positive value in order to obtain a good solution quickly. Realized margin. The actual margin separating the training samples is termed the "realized margin" and is defined as formula_47 The realized margin can and will usually be negative in the first iterations. For a hypothesis space that permits singling out of any single sample, as is commonly the case, the realized margin will eventually converge to some positive value. Convergence guarantee. While the above algorithm is proven to converge, in contrast to other boosting formulations, such as AdaBoost and TotalBoost, there are no known convergence bounds for LPBoost. In practice, however, LPBoost is known to converge quickly, often faster than other formulations. Base learners. LPBoost is an ensemble learning method and thus does not dictate the choice of base learners, the space of hypotheses formula_21. Demiriz et al. showed that under mild assumptions, any base learner can be used. If the base learners are particularly simple, they are often referred to as "decision stumps". The number of base learners commonly used with Boosting in the literature is large. For example, if formula_48, a base learner could be a linear soft margin support vector machine. Or, even simpler, a stump of the form formula_49 The above decision stump looks only along a single dimension formula_50 of the input space and simply thresholds the respective column of the sample using a constant threshold formula_51. 
Then, it can decide in either direction, depending on formula_52 for a positive or negative class. Given weights for the training samples, constructing the optimal decision stump of the above form simply involves searching along all sample columns and determining formula_50, formula_51 and formula_52 in order to optimize the gain function.
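A compact sketch of the column-generation loop described above, using SciPy's linear-programming solver and axis-aligned decision stumps as base learners. Everything here (function names, the HiGHS backend, the tie-handling shortcut in the stump search) is an illustrative choice, not a canonical implementation; the dual LP is re-solved after each added stump (the totally-corrective step), and the restricted primal LP is solved once at the end to read off the classifier weights.

```python
import numpy as np
from scipy.optimize import linprog

def best_stump(X, y, lam):
    """Weak-learner search: axis-aligned stump maximizing sum_n y_n h(x_n) lam_n.
    Ties between equal feature values are ignored for brevity."""
    n, d = X.shape
    best_gain, best = -np.inf, None
    for p in range(d):
        order = np.argsort(X[:, p])
        xs, w = X[order, p], (y * lam)[order]
        cum, total = np.cumsum(w), np.sum(w)
        for i in range(n):                      # threshold t = xs[i]
            for s in (1.0, -1.0):               # h(x) = s if x_p <= t else -s
                gain = s * (2.0 * cum[i] - total)
                if gain > best_gain:
                    best_gain, best = gain, (p, xs[i], s)
    return best_gain, best

def stump_predict(stump, X):
    p, t, s = stump
    return np.where(X[:, p] <= t, s, -s)

def lpboost(X, y, nu=0.2, theta=1e-6, max_iter=100):
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    D = 1.0 / (n * nu)                          # penalization constant
    lam, gamma = np.full(n, 1.0 / n), 0.0
    stumps, H = [], []                          # H[j][i] = y_i * h_j(x_i)
    for _ in range(max_iter):
        gain, stump = best_stump(X, y, lam)
        if gain + gamma <= theta:               # no sufficiently violated dual constraint left
            break
        stumps.append(stump)
        H.append(y * stump_predict(stump, X))
        # restricted dual: max gamma  s.t.  H_j . lam + gamma <= 0,  0 <= lam <= D,  sum lam = 1
        J = len(H)
        c = np.zeros(n + 1)
        c[-1] = -1.0                            # minimize -gamma
        A_ub = np.hstack([np.array(H), np.ones((J, 1))])
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(J), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0.0, D)] * n + [(None, None)], method="highs")
        lam, gamma = res.x[:n], res.x[-1]
    # recover the classifier weights alpha from the final restricted primal LP
    # (assumes at least one stump was selected)
    J = len(H)
    Hm = np.array(H)                            # shape (J, n)
    c = np.concatenate([np.zeros(J), D * np.ones(n), [-1.0]])          # -rho + D * sum(xi)
    A_ub = np.hstack([-Hm.T, -np.eye(n), np.ones((n, 1))])             # rho - H.T alpha - xi <= 0
    A_eq = np.concatenate([np.ones(J), np.zeros(n), [0.0]])[None, :]   # sum(alpha) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0.0, None)] * (J + n) + [(None, None)], method="highs")
    return stumps, res.x[:J]                    # final classifier: sign(sum_j alpha_j h_j(x))
```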
[ { "math_id": 0, "text": "\nf: \\mathcal{X} \\to \\{ -1, 1 \\},\n" }, { "math_id": 1, "text": "\\mathcal{X}" }, { "math_id": 2, "text": "f(\\boldsymbol{x}) = \\sum_{j=1}^{J} \\alpha_j h_j(\\boldsymbol{x})," }, { "math_id": 3, "text": "\\alpha_j" }, { "math_id": 4, "text": "h_j: \\mathcal{X} \\to \\{-1,1\\}" }, { "math_id": 5, "text": "h_j" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "\\boldsymbol{\\alpha}" }, { "math_id": 8, "text": "\\mathcal{H}=\\{h(\\cdot;\\omega) | \\omega \\in \\Omega\\}" }, { "math_id": 9, "text": "\\boldsymbol{\\xi}" }, { "math_id": 10, "text": "\\rho" }, { "math_id": 11, "text": "\\begin{array}{cl}\n \\underset{\\boldsymbol{\\alpha},\\boldsymbol{\\xi},\\rho}{\\min} & -\\rho + D \\sum_{n=1}^{\\ell} \\xi_n\\\\\n \\textrm{sb.t.} & \\sum_{\\omega \\in \\Omega} y_n \\alpha_{\\omega} h(\\boldsymbol{x}_n ; \\omega) + \\xi_n \\geq \\rho,\\qquad n=1,\\dots,\\ell,\\\\\n & \\sum_{\\omega \\in \\Omega} \\alpha_{\\omega} = 1,\\\\\n & \\xi_n \\geq 0,\\qquad n=1,\\dots,\\ell,\\\\\n & \\alpha_{\\omega} \\geq 0,\\qquad \\omega \\in \\Omega,\\\\\n & \\rho \\in {\\mathbb R}.\n\\end{array}" }, { "math_id": 12, "text": "\\boldsymbol{\\xi} \\geq 0" }, { "math_id": 13, "text": "D" }, { "math_id": 14, "text": "\\Omega" }, { "math_id": 15, "text": "\\omega \\in \\Omega" }, { "math_id": 16, "text": "h(\\cdot ; \\omega): \\mathcal{X} \\to \\{-1,1\\}" }, { "math_id": 17, "text": "\\begin{array}{cl}\n\\underset{\\boldsymbol{\\lambda},\\gamma}{\\max} & \\gamma\\\\\n\\textrm{sb.t.} & \\sum_{n=1}^{\\ell} y_n h(\\boldsymbol{x}_n ; \\omega) \\lambda_n + \\gamma \\leq 0,\\qquad \\omega \\in \\Omega,\\\\\n& 0 \\leq \\lambda_n \\leq D,\\qquad n=1,\\dots,\\ell,\\\\\n& \\sum_{n=1}^{\\ell} \\lambda_n = 1,\\\\\n& \\gamma \\in \\mathbb{R}.\n\\end{array}" }, { "math_id": 18, "text": "\\gamma^*" }, { "math_id": 19, "text": "\\omega^* \\in \\Omega" }, { "math_id": 20, "text": "\\omega^* = \\underset{\\omega \\in \\Omega}{\\textrm{argmax}} \\sum_{n=1}^{\\ell} y_n h(\\boldsymbol{x}_n;\\omega) \\lambda_n." 
}, { "math_id": 21, "text": "\\mathcal{H}" }, { "math_id": 22, "text": "h(\\cdot;\\omega^*)" }, { "math_id": 23, "text": "D=\\frac{1}{\\ell \\nu}" }, { "math_id": 24, "text": "\\ell" }, { "math_id": 25, "text": "0 < \\nu < 1" }, { "math_id": 26, "text": "\\nu" }, { "math_id": 27, "text": "k" }, { "math_id": 28, "text": "\\frac{k}{\\ell} \\leq \\nu" }, { "math_id": 29, "text": "X = \\{\\boldsymbol{x}_1, \\dots, \\boldsymbol{x}_{\\ell}\\}" }, { "math_id": 30, "text": "\\boldsymbol{x}_i \\in \\mathcal{X}" }, { "math_id": 31, "text": "Y = \\{y_1,\\dots,y_{\\ell}\\}" }, { "math_id": 32, "text": "y_i \\in \\{-1,1\\}" }, { "math_id": 33, "text": "\\theta \\geq 0" }, { "math_id": 34, "text": "f: \\mathcal{X} \\to \\{-1,1\\}" }, { "math_id": 35, "text": "\\lambda_n \\leftarrow \\frac{1}{\\ell},\\quad n=1,\\dots,\\ell" }, { "math_id": 36, "text": "\\gamma \\leftarrow 0" }, { "math_id": 37, "text": "J \\leftarrow 1" }, { "math_id": 38, "text": "\\hat h \\leftarrow \\underset{\\omega \\in \\Omega}{\\textrm{argmax}} \\sum_{n=1}^{\\ell} y_n h(\\boldsymbol{x}_n;\\omega) \\lambda_n" }, { "math_id": 39, "text": "\\sum_{n=1}^{\\ell} y_n \\hat h(\\boldsymbol{x}_n) \\lambda_n + \\gamma \\leq \\theta" }, { "math_id": 40, "text": "h_J \\leftarrow \\hat h" }, { "math_id": 41, "text": "J \\leftarrow J + 1" }, { "math_id": 42, "text": "(\\boldsymbol{\\lambda},\\gamma) \\leftarrow" }, { "math_id": 43, "text": "\\boldsymbol{\\alpha} \\leftarrow" }, { "math_id": 44, "text": "f(\\boldsymbol{x}) := \\textrm{sign} \\left(\\sum_{j=1}^J \\alpha_j h_j (\\boldsymbol{x})\\right)" }, { "math_id": 45, "text": "\\theta = 0" }, { "math_id": 46, "text": "\\theta" }, { "math_id": 47, "text": "\\rho(\\boldsymbol{\\alpha}) := \\min_{n=1,\\dots,\\ell} y_n \\sum_{\\alpha_{\\omega} \\in \\Omega} \\alpha_{\\omega} h(\\boldsymbol{x}_n ; \\omega)." }, { "math_id": 48, "text": "\\mathcal{X} \\subseteq {\\mathbb R}^n" }, { "math_id": 49, "text": "h(\\boldsymbol{x} ; \\omega \\in \\{1,-1\\}, p \\in \\{1,\\dots,n\\}, t \\in {\\mathbb R}) :=\n \\left\\{\\begin{array}{cl} \\omega & \\textrm{if~} \\boldsymbol{x}_p \\leq t\\\\\n -\\omega & \\textrm{otherwise}\\end{array}\\right.." }, { "math_id": 50, "text": "p" }, { "math_id": 51, "text": "t" }, { "math_id": 52, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=9616434
9617564
Oja's rule
Model of how neurons in the brain or artificial neural networks learn over time Oja's learning rule, or simply Oja's rule, named after Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the standard Hebb's Rule (see Hebbian learning) that, through multiplicative normalization, solves all stability problems and generates an algorithm for principal components analysis. This is a computational form of an effect which is believed to happen in biological neurons. Theory. Oja's rule requires a number of simplifications to derive, but in its final form it is demonstrably stable, unlike Hebb's rule. It is a single-neuron special case of the Generalized Hebbian Algorithm. However, Oja's rule can also be generalized in other ways to varying degrees of stability and success. Formula. Consider a simplified model of a neuron formula_0 that returns a linear combination of its inputs x using presynaptic weights w: formula_1 Oja's rule defines the change in presynaptic weights w given the output response formula_0 of a neuron to its inputs x to be formula_2 where "η" is the "learning rate", which can also change with time. Note that the bold symbols are vectors and "n" defines a discrete time iteration. The rule can also be written for continuous time as formula_3 Derivation. The simplest learning rule known is Hebb's rule, which states in conceptual terms that "neurons that fire together, wire together". In component form as a difference equation, it is written formula_4, or in scalar form with implicit "n"-dependence, formula_5, where "y"(x"n") is again the output, this time explicitly dependent on its input vector x. Hebb's rule has synaptic weights approaching infinity with a positive learning rate. We can stop this by normalizing the weights so that each weight's magnitude is restricted between 0, corresponding to no weight, and 1, corresponding to being the only input neuron with any weight. We do this by normalizing the weight vector to be of length one: formula_6. Note that in Oja's original paper, "p" = 2, corresponding to quadrature (root sum of squares), which is the familiar Cartesian normalization rule. However, any type of normalization, even linear, will give the same result without loss of generality. For a small learning rate formula_7 the equation can be expanded as a power series in formula_8. formula_9. For small "η", our higher-order terms "O"("η"2) go to zero. We again make the specification of a linear neuron, that is, the output of the neuron is equal to the sum of the product of each input and its synaptic weight to the power of p-1, which in the case of "p" = 2 is the synaptic weight itself, or formula_10. We also specify that our weights normalize to 1, which will be a necessary condition for stability, so formula_11, which, when substituted into our expansion, gives Oja's rule, or formula_12. Stability and PCA. In analyzing the convergence of a single neuron evolving by Oja's rule, one extracts the first "principal component", or feature, of a data set. Furthermore, with extensions using the Generalized Hebbian Algorithm, one can create a multi-Oja neural network that can extract as many features as desired, allowing for principal components analysis. A principal component "a""j" is extracted from a dataset x through some associated vector q"j", or "a""j" = q"j"⋅x, and we can restore our original dataset by taking formula_13. 
In the case of a single neuron trained by Oja's rule, we find the weight vector converges to q1, or the first principal component, as time or number of iterations approaches infinity. We can also define, given a set of input vectors "X""i", that its correlation matrix "R""ij" = "X""i""X""j" has an associated eigenvector given by q"j" with eigenvalue "λ""j". The variance of outputs of our Oja neuron σ2("n") = ⟨y2("n")⟩ then converges with time iterations to the principal eigenvalue, or formula_14. These results are derived using Lyapunov function analysis, and they show that Oja's neuron necessarily converges on strictly the first principal component if certain conditions are met in our original learning rule. Most importantly, our learning rate "η" is allowed to vary with time, but only such that its sum is "divergent" but its power sum is "convergent", that is formula_15. Our output activation function "y"(x("n")) is also allowed to be nonlinear and nonstatic, but it must be continuously differentiable in both x and w and have derivatives bounded in time. Applications. Oja's rule was originally described in Oja's 1982 paper, but the principle of self-organization to which it is applied was first attributed to Alan Turing in 1952. PCA has also had a long history of use before Oja's rule formalized its use in network computation in 1989. The model can thus be applied to any problem of self-organizing mapping, in particular those in which feature extraction is of primary interest. Therefore, Oja's rule has an important place in image and speech processing. It is also useful as it expands easily to higher dimensions of processing, thus being able to integrate multiple outputs quickly. A canonical example is its use in binocular vision. Biology and Oja's subspace rule. There is clear evidence for both long-term potentiation and long-term depression in biological neural networks, along with a normalization effect in both input weights and neuron outputs. However, while there is no direct experimental evidence yet of Oja's rule active in a biological neural network, a biophysical derivation of a generalization of the rule is possible. Such a derivation requires retrograde signalling from the postsynaptic neuron, which is biologically plausible (see neural backpropagation), and takes the form of formula_16 where as before "w""ij" is the synaptic weight between the "i"th input and "j"th output neurons, "x" is the input, "y" is the postsynaptic output, and we define "ε" to be a constant analogous to the learning rate, and "c"pre and "c"post are presynaptic and postsynaptic functions that model the weakening of signals over time. Note that the angle brackets denote the average and the ∗ operator is a convolution. By taking the pre- and post-synaptic functions into frequency space and combining integration terms with the convolution, we find that this gives an arbitrary-dimensional generalization of Oja's rule known as Oja's Subspace, namely formula_17 References.
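As a numerical illustration of the discrete-time update formula_2 above, the following is a minimal sketch assuming NumPy; the data, learning-rate schedule and random seed are illustrative choices, not part of the original description. It trains a single Oja neuron on zero-mean correlated data and compares the learned weight vector with the leading eigenvector of the data's correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-mean, correlated 2-D data so that the first principal component is well defined.
cov = np.array([[3.0, 1.2],
                [1.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

# Single linear neuron y = w.x trained with Oja's rule: w <- w + eta_n * y * (x - y*w).
# The learning rate eta_n = 1/(100 + n) has a divergent sum but a convergent sum of
# squares, matching the convergence condition quoted above.
w = rng.normal(size=2)
n = 0
for epoch in range(20):
    for x in X:
        eta = 1.0 / (100.0 + n)
        y = w @ x
        w += eta * y * (x - y * w)
        n += 1

# Compare with the leading eigenvector of the correlation matrix C = <x x^T>.
C = (X.T @ X) / len(X)
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, np.argmax(eigvals)]

print("Oja weight vector (normalized):", w / np.linalg.norm(w))
print("First principal component:     ", pc1)
print("|cosine| between them:         ", abs(w @ pc1) / np.linalg.norm(w))
```

Up to an overall sign, the printed weight vector should align with the first principal component, with the cosine magnitude close to 1.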
[ { "math_id": 0, "text": "y" }, { "math_id": 1, "text": "\\,y(\\mathbf{x}) ~ = ~ \\sum_{j=1}^m x_j w_j " }, { "math_id": 2, "text": "\\,\\Delta \\mathbf{w} ~ = ~ \\mathbf{w}_{n+1}-\\mathbf{w}_{n} ~ = ~ \\eta \\, y_{n} (\\mathbf{x}_{n} - y_{n}\\mathbf{w}_{n})," }, { "math_id": 3, "text": "\\,\\frac{d\\mathbf{w}}{d t} ~ = ~ \\eta \\, y(t) (\\mathbf{x}(t) - y(t)\\mathbf{w}(t))." }, { "math_id": 4, "text": "\\,\\Delta\\mathbf{w} ~ = ~ \\eta\\, y(\\mathbf{x}_n) \\mathbf{x}_{n}" }, { "math_id": 5, "text": "\\,w_{i}(n+1) ~ = ~ w_{i}(n) + \\eta\\, y(\\mathbf{x}) x_{i}" }, { "math_id": 6, "text": "\\,w_i (n+1) ~ = ~ \\frac{w_i(n) + \\eta\\, y(\\mathbf{x}) x_i}{\\left(\\sum_{j=1}^m [w_j(n) + \\eta\\, y(\\mathbf{x}) x_j]^p \\right)^{1/p}}" }, { "math_id": 7, "text": "| \\eta | \\ll 1" }, { "math_id": 8, "text": "\\eta" }, { "math_id": 9, "text": "\\,w_i (n+1) ~ = ~ \\frac{w_i(n)}{\\left( \\sum_j w_j^p(n) \\right)^{1/p}} ~ + ~ \\eta \\left( \\frac{y x_i}{\\left(\\sum_j w_j^p(n) \\right)^{1/p}} - \\frac{w_i(n) \\sum_j y x_j w_j^{p-1}(n)}{\\left(\\sum_j w_j^p(n) \\right)^{(1 + 1/p)}} \\right) ~ + ~ O(\\eta^2)" }, { "math_id": 10, "text": "\\,y(\\mathbf{x}) ~ = ~ \\sum_{j=1}^m x_j w_j^{p-1} " }, { "math_id": 11, "text": "\\,| \\mathbf{w} | ~ = ~ \\left( \\sum_{j=1}^m w_j^p \\right)^{1/p} ~ = ~ 1" }, { "math_id": 12, "text": "\\,w_i (n+1) ~ = ~ w_i(n) + \\eta\\, y(x_i - w_i(n) y)" }, { "math_id": 13, "text": "\\mathbf{x} ~ = ~ \\sum_j a_j \\mathbf{q}_j" }, { "math_id": 14, "text": "\\lim_{n\\rightarrow\\infty} \\sigma^2(n) ~ = ~ \\lambda_1" }, { "math_id": 15, "text": "\\sum_{n=1}^\\infty \\eta(n) = \\infty, ~~~ \\sum_{n=1}^\\infty \\eta(n)^p < \\infty, ~~~ p > 1" }, { "math_id": 16, "text": "\\Delta w_{ij} ~ \\propto ~ \\langle x_i y_j \\rangle - \\epsilon \\left\\langle \\left(c_\\mathrm{pre} * \\sum_k w_{ik} y_k \\right) \\cdot \\left(c_\\mathrm{post} * y_j \\right) \\right\\rangle," }, { "math_id": 17, "text": "\\Delta w ~ = ~ C x\\cdot w - w\\cdot C y." } ]
https://en.wikipedia.org/wiki?curid=9617564
9619738
Valley of stability
Characterization of nuclide stability In nuclear physics, the valley of stability (also called the belt of stability, nuclear valley, energy valley, or beta stability valley) is a characterization of the stability of nuclides to radioactivity based on their binding energy. Nuclides are composed of protons and neutrons. The shape of the valley refers to the profile of binding energy as a function of the numbers of neutrons and protons, with the lowest part of the valley corresponding to the region of most stable nuclei. The line of stable nuclides down the center of the valley of stability is known as the line of beta stability. The sides of the valley correspond to increasing instability to beta decay (β− or β+). The decay of a nuclide becomes more energetically favorable the further it is from the line of beta stability. The boundaries of the valley correspond to the nuclear drip lines, where nuclides become so unstable they emit single protons or single neutrons. Regions of instability within the valley at high atomic number also include radioactive decay by alpha radiation or spontaneous fission. The shape of the valley is roughly an elongated paraboloid corresponding to the nuclide binding energies as a function of neutron and atomic numbers. The nuclides within the valley of stability encompass the entire table of nuclides. The chart of those nuclides is also known as a Segrè chart, after the physicist Emilio Segrè. The Segrè chart may be considered a map of the nuclear valley. The region of proton and neutron combinations outside of the valley of stability is referred to as the sea of instability. Scientists have long searched for long-lived heavy isotopes outside of the valley of stability, hypothesized by Glenn T. Seaborg in the late 1960s. These relatively stable nuclides are expected to have particular configurations of "magic" atomic and neutron numbers, and form a so-called island of stability. Description. All atomic nuclei are composed of protons and neutrons bound together by the nuclear force. There are 286 primordial nuclides that occur naturally on Earth, each corresponding to a unique combination of a number of protons, called the atomic number, "Z", and a number of neutrons, called the neutron number, "N". The mass number, "A", of a nuclide is the sum of atomic and neutron numbers, "A" = "Z" + "N". Not all nuclides are stable, however. According to Byrne, stable nuclides are defined as those having a half-life greater than 10^18 years, and there are many combinations of protons and neutrons that form nuclides that are unstable. A common example of an unstable nuclide is carbon-14, which decays by beta decay into nitrogen-14 with a half-life of about 5,730 years: 14C → 14N + e− + ν̄e In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation and a beta particle and an electron antineutrino are emitted. An essential property of this and all nuclide decays is that the total energy of the decay product is less than that of the original nuclide. The difference between the initial and final nuclide binding energies is carried away by the kinetic energies of the decay products, often the beta particle and its associated neutrino. The concept of the valley of stability is a way of organizing all of the nuclides according to binding energy as a function of neutron and proton numbers. Most stable nuclides have roughly equal numbers of protons and neutrons, so the line for which "Z" = "N" forms a rough initial line defining stable nuclides. 
The greater the number of protons, the more neutrons are required to stabilize a nuclide; nuclides with larger values for "Z" require an even larger number of neutrons, "N" &gt; "Z", to be stable. The valley of stability is formed by the negative of binding energy, the binding energy being the energy required to break apart the nuclide into its proton and neutron components. The stable nuclides have high binding energy, and these nuclides lie along the bottom of the valley of stability. Nuclides with weaker binding energy have combinations of "N" and "Z" that lie off of the line of stability and further up the sides of the valley of stability. Unstable nuclides can be formed in nuclear reactors or supernovas, for example. Such nuclides often decay in sequences of reactions called decay chains that take the resulting nuclides sequentially down the slopes of the valley of stability. The sequence of decays take nuclides toward greater binding energies, and the nuclides terminating the chain are stable. The valley of stability provides both a conceptual approach for how to organize the myriad stable and unstable nuclides into a coherent picture and an intuitive way to understand how and why sequences of radioactive decay occur. The role of neutrons. The protons and neutrons that comprise an atomic nucleus behave almost identically within the nucleus. The approximate symmetry of isospin treats these particles as identical, but in a different quantum state. This symmetry is only approximate, however, and the nuclear force that binds nucleons together is a complicated function depending on nucleon type, spin state, electric charge, momentum, etc. and with contributions from non-central forces. The nuclear force is not a fundamental force of nature, but a consequence of the residual effects of the strong force that surround the nucleons. One consequence of these complications is that although deuterium, a bound state of a proton (p) and a neutron (n) is stable, exotic nuclides such as diproton or dineutron are unbound. The nuclear force is not sufficiently strong to form either p-p or n-n bound states, or equivalently, the nuclear force does not form a potential well deep enough to bind these identical nucleons. Stable nuclides require approximately equal numbers of protons and neutrons. The stable nuclide carbon-12 (12C) is composed of six neutrons and six protons, for example. Protons have a positive charge, hence within a nuclide with many protons there are large repulsive forces between protons arising from the Coulomb force. By acting to separate protons from one another, the neutrons within a nuclide play an essential role in stabilizing nuclides. With increasing atomic number, even greater numbers of neutrons are required to obtain stability. The heaviest stable element, lead (Pb), has many more neutrons than protons. The stable nuclide 206Pb has "Z" = 82 and "N" = 124, for example. For this reason, the valley of stability does not follow the line "Z" = "N" for A larger than 40 ("Z" = 20 is the element calcium). Neutron number increases along the line of beta stability at a faster rate than atomic number. The line of beta stability follows a particular curve of neutron–proton ratio, corresponding to the most stable nuclides. On one side of the valley of stability, this ratio is small, corresponding to an excess of protons over neutrons in the nuclides. These nuclides tend to be unstable to β+ decay or electron capture, since such decay converts a proton to a neutron. 
The decay serves to move the nuclides toward a more stable neutron-proton ratio. On the other side of the valley of stability, this ratio is large, corresponding to an excess of neutrons over protons in the nuclides. These nuclides tend to be unstable to β− decay, since such decay converts neutrons to protons. On this side of the valley of stability, β− decay also serves to move nuclides toward a more stable neutron-proton ratio. Neutrons, protons, and binding energy. The mass of an atomic nucleus is given by formula_0 where formula_1 and formula_2 are the rest masses of a proton and a neutron, respectively, and formula_3 is the total binding energy of the nucleus. The mass–energy equivalence is used here. The binding energy is subtracted from the sum of the proton and neutron masses because the mass of the nucleus is "less" than that sum. This property, called the mass defect, is necessary for a stable nucleus; within a nucleus, the nucleons are trapped by a potential well. A semi-empirical mass formula states that the binding energy will take the form formula_4 The difference between the mass of a nucleus and the sum of the masses of the neutrons and protons that comprise it is known as the mass defect. EB is often divided by the mass number to obtain binding energy per nucleon for comparisons of binding energies between nuclides. Each of the terms in this formula has a theoretical basis. The coefficients formula_5, formula_6, formula_7, formula_8 and a coefficient that appears in the formula for formula_9 are determined empirically. The binding energy expression gives a quantitative estimate for the neutron-proton ratio. The energy is a quadratic expression in Z that is minimized when the neutron-proton ratio is formula_10. This equation for the neutron-proton ratio shows that in stable nuclides the number of neutrons is greater than the number of protons by a factor that scales as formula_11. The figure at right shows the average binding energy per nucleon as a function of atomic mass number along the line of beta stability, that is, along the bottom of the valley of stability. For very small atomic mass number (H, He, Li), binding energy per nucleon is small, and this energy increases rapidly with atomic mass number. Nickel-62 (28 protons, 34 neutrons) has the highest mean binding energy per nucleon of all nuclides, while iron-58 (26 protons, 32 neutrons) and iron-56 (26 protons, 30 neutrons) are a close second and third. These nuclides lie at the very bottom of the valley of stability. From this bottom, the average binding energy per nucleon slowly decreases with increasing atomic mass number. The heavy nuclide 238U is not stable, but is slow to decay with a half-life of 4.5 billion years. It has relatively small binding energy per nucleon. For β− decay, nuclear reactions have the generic form X → X′ + e− + ν̄e where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final nuclides, respectively; the daughter nuclide X′ has atomic number Z + 1 and the same mass number A. For β+ decay, the generic form is X → X′ + e+ + νe, where the daughter nuclide has atomic number Z − 1. These reactions correspond to the decay of a neutron to a proton, or the decay of a proton to a neutron, within the nucleus, respectively. These reactions begin on one side or the other of the valley of stability, and the directions of the reactions are to move the initial nuclides down the valley walls towards a region of greater stability, that is, toward greater binding energy. The figure at right shows the average binding energy per nucleon across the valley of stability for nuclides with mass number "A" = 125. 
At the bottom of this curve is tellurium (52Te), which is stable. Nuclides to the left of 52Te are unstable with an excess of neutrons, while those on the right are unstable with an excess of protons. A nuclide on the left therefore undergoes β− decay, which converts a neutron to a proton, hence shifts the nuclide to the right and toward greater stability. A nuclide on the right similarly undergoes β+ decay, which shifts the nuclide to the left and toward greater stability. Heavy nuclides are susceptible to α decay, and these nuclear reactions have the generic form, X → X′ + He As in β decay, the decay product X′ has greater binding energy and it is closer to the middle of the valley of stability. The α particle carries away two neutrons and two protons, leaving a lighter nuclide. Since heavy nuclides have many more neutrons than protons, α decay increases a nuclide's neutron-proton ratio. Proton and neutron drip lines. The boundaries of the valley of stability, that is, the upper limits of the valley walls, are the neutron drip line on the neutron-rich side, and the proton drip line on the proton-rich side. The nucleon drip lines are at the extremes of the neutron-proton ratio. At neutron–proton ratios beyond the drip lines, no nuclei can exist. The location of the neutron drip line is not well known for most of the Segrè chart, whereas the proton and alpha drip lines have been measured for a wide range of elements. Drip lines are defined for protons, neutrons, and alpha particles, and these all play important roles in nuclear physics. The difference in binding energy between neighboring nuclides increases as the sides of the valley of stability are ascended, and correspondingly the nuclide half-lives decrease, as indicated in the figure above. If one were to add nucleons one at a time to a given nuclide, the process will eventually lead to a newly formed nuclide that is so unstable that it promptly decays by emitting a proton (or neutron). Colloquially speaking, the nucleon has 'leaked' or 'dripped' out of the nucleus, hence giving rise to the term "drip line". Proton emission is not seen in naturally occurring nuclides. Proton emitters can be produced via nuclear reactions, usually utilizing linear particle accelerators (linac). Although prompt (i.e. not beta-delayed) proton emission was observed from an isomer in cobalt-53 as early as 1969, no other proton-emitting states were found until 1981, when the proton radioactive ground states of lutetium-151 and thulium-147 were observed at experiments at the GSI in West Germany. Research in the field flourished after this breakthrough, and to date more than 25 nuclides have been found to exhibit proton emission. The study of proton emission has aided the understanding of nuclear deformation, masses and structure, and it is an example of quantum tunneling. Two examples of nuclides that emit neutrons are beryllium-13 (mean life ) and helium-5 (). Since only a neutron is lost in this process, the atom does not gain or lose any protons, and so it does not become an atom of a different element. Instead, the atom will become a new isotope of the original element, such as beryllium-13 becoming beryllium-12 after emitting one of its neutrons. In nuclear engineering, a prompt neutron is a neutron immediately emitted by a nuclear fission event. Prompt neutrons emerge from the fission of an unstable fissionable or fissile heavy nucleus almost instantaneously. 
Delayed neutron decay can occur within the same context, emitted after beta decay of one of the fission products. Delayed neutron decay can occur at times from a few milliseconds to a few minutes. The U.S. Nuclear Regulatory Commission defines a prompt neutron as a neutron emerging from fission within 10−14 seconds. Island of stability. The island of stability is a region outside the valley of stability where it is predicted that a set of heavy isotopes with near magic numbers of protons and neutrons will locally reverse the trend of decreasing stability in elements heavier than uranium. The hypothesis for the island of stability is based upon the nuclear shell model, which implies that the atomic nucleus is built up in "shells" in a manner similar to the structure of the much larger electron shells in atoms. In both cases, shells are just groups of quantum energy levels that are relatively close to each other. Energy levels from quantum states in two different shells will be separated by a relatively large energy gap. So when the number of neutrons and protons completely fills the energy levels of a given shell in the nucleus, the binding energy per nucleon will reach a local maximum and thus that particular configuration will have a longer lifetime than nearby isotopes that do not possess filled shells. A filled shell would have "magic numbers" of neutrons and protons. One possible magic number of neutrons for spherical nuclei is 184, and some possible matching proton numbers are 114, 120 and 126. These configurations imply that the most stable spherical isotopes would be flerovium-298, unbinilium-304 and unbihexium-310. Of particular note is 298Fl, which would be "doubly magic" (both its proton number of 114 and neutron number of 184 are thought to be magic). This doubly magic configuration is the most likely to have a very long half-life. The next lighter doubly magic spherical nucleus is lead-208, the heaviest known stable nucleus and most stable heavy metal. Discussion. The valley of stability can be helpful in interpreting and understanding properties of nuclear decay processes such as decay chains and nuclear fission. Radioactive decay often proceeds via a sequence of steps known as a decay chain. For example, 238U decays to 234Th which decays to 234mPa and so on, eventually reaching 206Pb: formula_12 With each step of this sequence of reactions, energy is released and the decay products move further down the valley of stability towards the line of beta stability. 206Pb is stable and lies on the line of beta stability. The fission processes that occur within nuclear reactors are accompanied by the release of neutrons that sustain the chain reaction. Fission occurs when a heavy nuclide such as uranium-235 absorbs a neutron and breaks into nuclides of lighter elements such as barium or krypton, usually with the release of additional neutrons. Like all nuclides with a high atomic number, these uranium nuclei require many neutrons to bolster their stability, so they have a large neutron-proton ratio ("N"/"Z"). The nuclei resulting from a fission (fission products) inherit a similar "N"/"Z", but have atomic numbers that are approximately half that of uranium. 
Isotopes with the atomic number of the fission products and an "N"/"Z" near that of uranium or other fissionable nuclei have too many neutrons to be stable; this neutron excess is why multiple free neutrons but no free protons are usually emitted in the fission process, and it is also why many fission product nuclei undergo a long chain of β− decays, each of which converts a nucleus "N"/"Z" to ("N" − 1)/("Z" + 1), where "N" and "Z" are, respectively, the numbers of neutrons and protons contained in the nucleus. When fission reactions are sustained at a given rate, such as in a liquid-cooled or solid fuel nuclear reactor, the nuclear fuel in the system produces many antineutrinos for each fission that has occurred. These antineutrinos come from the decay of fission products that, as their nuclei progress down a β− decay chain toward the valley of stability, emit an antineutrino along with each β− particle. In 1956, Reines and Cowan exploited the (anticipated) intense flux of antineutrinos from a nuclear reactor in the design of an experiment to detect and confirm the existence of these elusive particles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
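As a concrete complement to the "Neutrons, protons, and binding energy" section, the following is a minimal sketch assuming NumPy that evaluates the semi-empirical binding energy formula_4 and finds, for a given mass number, the proton number with the largest binding energy, i.e. the bottom of the valley. The coefficient values are one commonly quoted empirical fit in MeV and are an assumption of this sketch, not values given in the article.

```python
import numpy as np

# One commonly quoted empirical fit of the semi-empirical mass formula (MeV).
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(A, Z):
    """Semi-empirical binding energy E_B(A, Z) in MeV."""
    pairing = 0.0
    if A % 2 == 0:
        # Pairing term: + for even-even nuclei, - for odd-odd, 0 for odd A.
        pairing = A_P / np.sqrt(A) if Z % 2 == 0 else -A_P / np.sqrt(A)
    return (A_V * A
            - A_S * A ** (2.0 / 3.0)
            - A_C * Z ** 2 / A ** (1.0 / 3.0)
            - A_A * (A - 2 * Z) ** 2 / A
            + pairing)

def most_stable_Z(A):
    """Proton number maximizing E_B for fixed A (bottom of the valley)."""
    return max(range(1, A), key=lambda Z: binding_energy(A, Z))

for A in (56, 125, 206, 238):
    Z = most_stable_Z(A)
    N = A - Z
    print(f"A={A:3d}: most bound at Z={Z}, N={N}, "
          f"N/Z={N/Z:.2f}, E_B/A={binding_energy(A, Z)/A:.2f} MeV")
```

With this simple fit, the most-bound proton number lands on or within one unit of the stable isobars discussed in the article (iron near A = 56, tellurium at A = 125, lead at A = 206), and the N/Z ratio grows with A in the way formula_10 describes; small offsets are expected because the sketch ignores the neutron–proton mass difference and shell effects.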
[ { "math_id": 0, "text": "m = Z m_{p} + N m_{n} - \\frac{E_{B}}{c^{2}}" }, { "math_id": 1, "text": "m_{p}" }, { "math_id": 2, "text": "m_{n}" }, { "math_id": 3, "text": "E_{B}" }, { "math_id": 4, "text": "E_{B} = a_{V} A - a_{S} A^{2/3} - a_{C} \\frac{Z^2}{A^{1/3}} - a_{A} \\frac{(A - 2Z)^{2}}{A} \\pm \\delta(A,Z)" }, { "math_id": 5, "text": "a_{V}" }, { "math_id": 6, "text": "a_{S}" }, { "math_id": 7, "text": "a_{C}" }, { "math_id": 8, "text": "a_{A}" }, { "math_id": 9, "text": "\\delta(A,Z)" }, { "math_id": 10, "text": "N/Z \\approx 1 + \\frac{a_C}{2a_A} A^{2/3} " }, { "math_id": 11, "text": "A^{2/3}" }, { "math_id": 12, "text": "\\begin{array}{l}{}\\\\\n\\ce{^{238}_{92}U->[\\alpha][4.5 \\times 10^9 \\ \\ce y] {^{234}_{90}Th} ->[\\beta^-][24 \\ \\ce d] {^{234\\!m}_{91}Pa}}\n\\ce{->[\\beta^-][1 \\ \\ce{min}]}\n\\ce{^{234}_{92}U ->[\\alpha][2.4 \\times 10^5 \\ \\ce y] {^{230}_{90}Th} ->[\\alpha][7.7 \\times 10^4 \\ \\ce y] }\n\\\\\n\\ce{^{226}_{88}Ra ->[\\alpha][1600 \\ y] {^{222}_{86}Rn} ->[\\alpha][3.8 \\ \\ce d] {^{218}_{84}Po} ->[\\alpha][3 \\ \\ce{min}] {^{214}_{82}Pb} ->[\\beta^-][27 \\ \\ce{min}] {^{214}_{83}Bi} ->[\\beta^-][20 \\ \\ce{min}]}\n\\\\\n\\ce{^{214}_{84}Po ->[\\alpha][164 \\ \\mu\\ce{s}] {^{210}_{82}Pb} ->[\\beta^-][22 \\ \\ce y] {^{210}_{83}Bi} ->[\\beta^-][5 \\ \\ce d] {^{210}_{84}Po} ->[\\alpha][138 \\ \\ce d] {^{206}_{82}Pb}}\\\\{}\n\\end{array}\n" } ]
https://en.wikipedia.org/wiki?curid=9619738
9620079
Cupriavidus necator
Species of bacterium Cupriavidus necator is a Gram-negative soil bacterium of the class Betaproteobacteria. Taxonomy. "Cupriavidus necator" has gone through a series of name changes. In the first half of the 20th century, many micro-organisms were isolated for their ability to use hydrogen. Hydrogen-metabolizing chemolithotrophic organisms were clustered into the group "Hydrogenomonas". "C. necator" was originally named "Hydrogenomonas eutrophus" because it fell under the "Hydrogenomonas" classification and was "well nourished and robust". Some of the original "H. eutrophus" cultures were isolated by Bovell and Wilde. After characterizing cell morphology, metabolism and GC content, the "Hydrogenomonas" nomenclature was disbanded because it comprised many species of microorganisms. "H. eutrophus" was then renamed "Alcaligenes eutropha" because it was a micro-organism with degenerated peritrichous flagellation. Investigating phenotype, lipid composition, fatty acid composition and 16S rRNA analysis, "A. eutropha" was found to belong to the genus "Ralstonia" and named "Ralstonia eutropha". Upon further study of the genus, "Ralstonia" was found to comprise two phenotypically distinct clusters. The new genus "Wautersia" was created from one of these clusters, which included "R. eutropha". In turn "R. eutropha" was renamed "Wautersia eutropha". Looking at DNA–DNA hybridization and phenotype comparison with "Cupriavidus necator", "W. eutropha" was found to be the same species as the previously described "C. necator". Because "C. necator" was named in 1987, far before the name change to "R. eutropha" and "W. eutropha", the name "C. necator" was assigned to "R. eutropha" according to Rule 23a of the International Code of Nomenclature of Bacteria. Metabolism. "Cupriavidus necator" is a hydrogen-oxidizing bacterium ("knallgas" bacterium) capable of growing at the interface of anaerobic and aerobic environments. It can easily adapt between heterotrophic and autotrophic lifestyles. Both organic compounds and hydrogen can be used as a source of energy. "C. necator" can perform aerobic or anaerobic respiration by denitrification of nitrate and/or nitrite to nitrogen gas. When growing under autotrophic conditions, "C. necator" fixes carbon through the reductive pentose phosphate pathway. It is known to produce and sequester polyhydroxyalkanoate (PHA) plastics when exposed to excess amounts of sugar substrate. PHA can accumulate to levels around 90% of the cell's dry weight. To better characterize the lifestyle of "C. necator", the genomes of two strains have been sequenced. Hydrogenases. "Cupriavidus necator" can use hydrogen gas as a source of energy when growing under autotrophic conditions. It contains four different hydrogenases that have [Ni-Fe] active sites and all perform this reaction: H2 formula_0 2H+ + 2e− The hydrogenases of "C. necator" are like other typical [Ni-Fe] hydrogenases because they are made up of a large and a small subunit. The large subunit is where the [Ni-Fe] active site resides and the small subunit is composed of [Fe-S] clusters. However, the hydrogenases of "C. necator" are different from typical [Ni-Fe] hydrogenases because they are tolerant to oxygen and are not inhibited by CO. While the four hydrogenases perform the same reaction in the cell, each hydrogenase is linked to a different cellular process. 
The differences between the regulatory hydrogenase, membrane-bound hydrogenase, soluble hydrogenase and actinobacterial hydrogenase in "C. necator" are described below. Regulatory hydrogenase. The first hydrogenase is a regulatory hydrogenase (RH) that signals to the cell hydrogen is present. The RH is a protein containing large and small [Ni-Fe] hydrogenase subunits attached to a histidine protein kinase subunit. The hydrogen gas is oxidized at the [Ni-Fe] center in the large subunit and in turn reduces the [Fe-S] clusters in the small subunit. It is unknown whether the electrons are transferred from the [Fe-S] clusters to the protein kinase domain. The histidine protein kinase activates a response regulator. The response regulator is active in the dephosphorylated form. The dephosphorylated response regulator promotes the transcription of the membrane bound hydrogenase and soluble hydrogenase. Membrane-bound hydrogenase. The membrane-bound hydrogenase (MBH) is linked to the respiratory chain through a specific cytochrome b-related protein in "C. necator". Hydrogen gas is oxidized at the [Ni-Fe] active site in the large subunit and the electrons are shuttled through the [Fe-S] clusters in the small subunit to the cytochrome b-like protein. The MBH is located on the outer cytoplasmic membrane. It recovers energy for the cell by funneling electrons into the respiratory chain and by increasing the proton gradient. The MBH in "C. necator" is not inhibited by CO and is tolerant to oxygen. NAD+-reducing hydrogenase. The NAD+-reducing hydrogenase (soluble hydrogenase, SH) creates a NADH-reducing equivalence by oxidizing hydrogen gas. The SH is a heterohexameric protein with two subunits making up the large and small subunits of the [Ni-Fe] hydrogenase and the other two subunits comprising a reductase module similar to the one of Complex I. The [Ni-Fe] active site oxidized hydrogen gas which transfers electrons to a FMN-a cofactor, then to a [Fe-S] cluster relay of the small hydrogenase subunit and the reductase module, then to another FMN-b cofactor and finally to NAD+. The reducing equivalences are then used for fixing carbon dioxide when "C. necator" is growing autotrophically. The active site of the SH of "C. necator" H16 has been extensively studied because "C. necator" H16 can be produced in large amounts, can be genetically manipulated, and can be analyzed with spectrographic techniques. However, no crystal structure is currently available for the "C. necator" H16 soluble hydrogenase in the presence of oxygen to determine the interactions of the active site with the rest of the protein. Typical anaerobic [Ni-Fe] hydrogenases. The [Ni-Fe] hydrogenase from "Desulfovibrio vulgaris" and "D. gigas" have similar protein structures to each other and represent typical [Ni-Fe] hydrogenases. The large subunit contains the [Ni-Fe] active site buried deep in the core of the protein and the small subunit contains [Fe-S] clusters. The Ni atom is coordinated to the Desulfovibrio hydrogenase by 4 cysteine ligands. Two of these same cysteine ligands also bridge the Fe of the [Ni-Fe] active site. The Fe atom also contains three ligands, one CO and two CN that complete the active site. These additional ligands might contribute to the reactivity or help stabilize the Fe atom in the low spin +2 oxidation state. Typical [NiFe] hydrogenases like those of "D. vulgaris" and "D. gigas" are poisoned by oxygen because an oxygen atom binds strongly to the NiFe active site. "C. necator" oxygen-tolerant SH. 
The SH in "C. necator" is unlike that of most other organisms because it is oxygen-tolerant. The active site of the SH has been studied to learn why this protein is tolerant to oxygen. A recent study showed that oxygen tolerance as implemented in the SH is based on a continuous, catalytically driven detoxification of O2. The genes encoding this SH can be up-regulated under heterotrophic growth conditions using glycerol in the growth media, and this enables aerobic production and purification of the same enzyme. Applications. The oxygen-tolerant hydrogenases of "C. necator" have been studied for diverse purposes. "C. necator" was studied as an attractive organism to help support life in space. It can fix carbon dioxide as a carbon source, use the urea in urine as a nitrogen source, and use hydrogen as an energy source to create dense cultures that could be used as a source of protein. Electrolysis of water is one way of creating an oxygenic atmosphere in space, and "C. necator" was investigated for recycling the hydrogen produced during this process. Oxygen-tolerant hydrogenases are being used to investigate biofuels. Hydrogenases from "C. necator" have been used to coat electrode surfaces to create hydrogen fuel cells tolerant to oxygen and carbon monoxide and to design hydrogen-producing light complexes. In addition, the hydrogenases from "C. necator" have been used to create hydrogen sensors. Genetically modified "C. necator" can produce isobutanol from CO2 that can directly substitute for, or be blended with, gasoline. The organism emits the isobutanol without having to be destroyed to obtain it. Industrial uses. Researchers at UCLA have genetically modified a strain of the species "C. necator" (formerly known as "R. eutropha" H16) to produce isobutanol from CO2 feedstock using electricity produced by a solar cell. The project, funded by the U.S. Dept. of Energy, aims at a potential high-energy-density electrofuel that could use existing infrastructure to replace oil as a transportation fuel. Chemical and biomolecular engineers at the Korea Advanced Institute of Science and Technology have presented a scalable way to convert CO2 in the air into a polyester by means of "C. necator". References.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=9620079
962035
Seismic refraction
Geophysical principle Seismic refraction is a geophysical principle governed by Snell's Law of refraction. The seismic refraction method utilizes the refraction of seismic waves by rock or soil layers to characterize the subsurface geologic conditions and geologic structure. Seismic refraction is exploited in engineering geology, geotechnical engineering and exploration geophysics. Seismic refraction traverses (seismic lines) are performed using an array of seismographs or geophones and an energy source. The methods depend on the fact that seismic waves have differing velocities in different types of soil or rock. The waves are refracted when they cross the boundary between different types (or conditions) of soil or rock. The methods enable the general soil types and the approximate depth to strata boundaries, or to bedrock, to be determined. P-wave refraction. P-wave refraction evaluates the compression wave generated by the seismic source located at a known distance from the array. The wave is generated by vertically striking a striker plate with a sledgehammer, shooting a seismic shotgun into the ground, or detonating an explosive charge in the ground. Since the compression wave is the fastest of the seismic waves, it is sometimes referred to as the primary wave and is usually more readily identifiable within the seismic recording than the other seismic waves. S-wave refraction. S-wave refraction evaluates the shear wave generated by the seismic source located at a known distance from the array. The wave is generated by horizontally striking an object on the ground surface to induce the shear wave. Since the shear wave is the second fastest wave, it is sometimes referred to as the secondary wave. The shear wave travels at roughly one-half the velocity of the compression wave, although this ratio varies significantly with the medium. Two horizontal layers. In the following relations, ic0 is the critical angle, V0 the velocity of the first layer, V1 the velocity of the second layer, h0 the thickness of the first layer, and T01 the intercept time: formula_0 formula_1 formula_2 formula_3 formula_4 Applications. Seismic refraction has been successfully applied to tailings characterisation through P- and S-wave travel time tomographic inversions. References.
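The following is a minimal numerical sketch of the two-horizontal-layer relations above, assuming NumPy; the layer velocities and thickness are made-up illustration values, not data from the article. It computes the critical angle (formula_0), the direct and refracted travel-time branches (formula_1), and recovers the depth of the first layer both from the intercept time (formula_2) and from the crossover distance (formula_3).

```python
import numpy as np

# Assumed illustration values: slower soil layer over a faster refractor.
V0 = 600.0    # velocity of the first layer (m/s)
V1 = 1800.0   # velocity of the second layer (m/s)
h0 = 10.0     # true thickness of the first layer (m)

# Critical angle: i_c = asin(V0 / V1)
i_c = np.arcsin(V0 / V1)

# Intercept time of the refracted (head-wave) arrival: T0 = 2 h0 cos(i_c) / V0
T0 = 2.0 * h0 * np.cos(i_c) / V0

# Travel-time branches as a function of source-geophone offset X.
X = np.linspace(0.0, 120.0, 1201)
T_direct = X / V0
T_refracted = T0 + X / V1

# Depth recovered from the intercept time: h0 = T0 V0 / (2 cos(i_c))
h_from_intercept = T0 * V0 / (2.0 * np.cos(i_c))

# Crossover distance (where both branches give the same travel time) and the
# corresponding depth estimate: h0 = (X_cross / 2) sqrt((V1 - V0)/(V1 + V0))
X_cross = X[np.argmin(np.abs(T_direct - T_refracted))]
h_from_crossover = (X_cross / 2.0) * np.sqrt((V1 - V0) / (V1 + V0))

print(f"critical angle       : {np.degrees(i_c):.1f} degrees")
print(f"intercept time       : {1000.0 * T0:.1f} ms")
print(f"crossover distance   : {X_cross:.1f} m")
print(f"depth from intercept : {h_from_intercept:.2f} m")
print(f"depth from crossover : {h_from_crossover:.2f} m")
```

Both depth estimates should reproduce the assumed 10 m thickness, which is how the relations are used in practice once the velocities and the travel-time curve have been picked from field records.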
[ { "math_id": 0, "text": "i_{c_{0}} = asin \\left( {V_{0} \\over V_{1}} \\right) " }, { "math_id": 1, "text": " T = {2h_{0}cos(i_{c_{0}}) \\over V_{0}} + {X \\over V_{1}} = T0_{1} + {X \\over V_{1}}" }, { "math_id": 2, "text": " h_{0}= {T0_{1}V_{0} \\over 2cos(i_{c}) }" }, { "math_id": 3, "text": " h_{0} = {X_{cross_{1}} \\over 2} \\sqrt{{V_{1}-V_{0} \\over V_{1} + V_{0}}}" }, { "math_id": 4, "text": " h_{n} = {V_{n} \\over cos(i_{n})} \\left( {T0_{n+1} \\over 2} - \\sum_{j=0}^{n-1}{h_{j}\\sqrt{{1 \\over V_{j}^2} - {1 \\over V_{j+1}^2}}} \\right) " } ]
https://en.wikipedia.org/wiki?curid=962035
962171
Stern–Gerlach experiment
1922 physical experiment demonstrating that atomic spin is quantized In quantum physics, the Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially-varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment were deflected, owing to the magnetic field gradient, from a straight path. The screen revealed discrete points of accumulation, rather than a continuous distribution, owing to their quantized spin. Historically, this experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems. After its conception by Otto Stern in 1921, the experiment was first successfully conducted with Walther Gerlach in early 1922. Description. The Stern–Gerlach experiment involves sending silver atoms through an inhomogeneous magnetic field and observing their deflection. Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an inhomogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the inhomogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate. The results show that particles possess an intrinsic angular momentum that is closely analogous to the angular momentum of a classically spinning object, but that takes only certain quantized values. Another important result is that only one component of a particle's spin can be measured at one time, meaning that the measurement of the spin along the z-axis destroys information about a particle's spin along the x and y axis. The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate. If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole (see torque-induced precession). If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous then the force on one end of the dipole will be slightly greater than the opposing force on the other end, so that there is a net force which deflects the particle's trajectory. If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to the dot product of its magnetic moment with the external field gradient, producing some density distribution on the detector screen. Instead, the particles passing through the Stern–Gerlach apparatus are deflected either up or down by a specific amount. 
This was a measurement of the quantum observable now known as spin angular momentum, which demonstrated possible outcomes of a measurement where the observable has a discrete set of values or point spectrum. Although some discrete quantum phenomena, such as atomic spectra, were observed much earlier, the Stern–Gerlach experiment allowed scientists to directly observe separation between discrete quantum states for the first time in the history of science. Theoretically, quantum angular momentum "of any kind" has a discrete spectrum, which is sometimes briefly expressed as "angular momentum is quantized". Experiment using particles with +1/2 or −1/2 spin. If the experiment is conducted using charged particles like electrons, there will be a Lorentz force that tends to bend the trajectory in a circle. This force can be cancelled by an electric field of appropriate magnitude oriented transverse to the charged particle's path. Electrons are spin-1/2 particles. These have only two possible spin angular momentum values measured along any axis, formula_0 or formula_1, a purely quantum mechanical phenomenon. Because its value is always the same, it is regarded as an intrinsic property of electrons, and is sometimes known as "intrinsic angular momentum" (to distinguish it from orbital angular momentum, which can vary and depends on the presence of other particles). If one measures the spin along a vertical axis, electrons are described as "spin up" or "spin down", based on the magnetic moment pointing up or down, respectively. To mathematically describe the experiment with spin formula_2 particles, it is easiest to use Dirac's bra–ket notation. As the particles pass through the Stern–Gerlach device, they are deflected either up or down, and observed by the detector which resolves to either spin up or spin down. These are described by the angular momentum quantum number formula_3, which can take on one of the two possible allowed values, either formula_0 or formula_1. The act of observing (measuring) the momentum along the formula_4 axis corresponds to the operator formula_5. In mathematical terms, the initial state of the particles is formula_6 where constants formula_7 and formula_8 are complex numbers. This initial state spin can point in any direction. The squares of the absolute values formula_9 and formula_10 are respectively the probabilities for a system in the state formula_11 to be found in formula_12 and formula_13 after the measurement along formula_4 axis is made. The constants formula_7 and formula_8 must also be normalized in order that the probability of finding either one of the values be unity, that is we must ensure that formula_14. However, this information is not sufficient to determine the values of formula_7 and formula_8, because they are complex numbers. Therefore, the measurement yields only the squared magnitudes of the constants, which are interpreted as probabilities. Sequential experiments. If we link multiple Stern–Gerlach apparatuses (the rectangles containing "S-G"), we can clearly see that they do not act as simple selectors, i.e. filtering out particles with one of the states (pre-existing to the measurement) and blocking the others. Instead they alter the state by observing it (as in light polarization). In the figure below, x and z name the directions of the (inhomogenous) magnetic field, with the x-z-plane being orthogonal to the particle beam. In the three S-G systems shown below, the cross-hatched squares denote the blocking of a given output, i.e. 
each of the S-G systems with a blocker allows only particles with one of two states to enter the next S-G apparatus in the sequence. Experiment 1. The top illustration shows that when a second, identical, S-G apparatus is placed at the exit of the first apparatus, only z+ is seen in the output of the second apparatus. This result is expected since all particles at this point are expected to have z+ spin, as only the z+ beam from the first apparatus entered the second apparatus. Experiment 2. The middle system shows what happens when a different S-G apparatus is placed at the exit of the z+ beam resulting of the first apparatus, the second apparatus measuring the deflection of the beams on the x axis instead of the z axis. The second apparatus produces x+ and x- outputs. Now classically we would expect to have one beam with the x characteristic oriented + and the z characteristic oriented +, and another with the x characteristic oriented - and the z characteristic oriented +. Experiment 3. The bottom system contradicts that expectation. The output of the third apparatus which measures the deflection on the z axis again shows an output of z- as well as z+. Given that the input to the second S-G apparatus consisted only of z+, it can be inferred that a S-G apparatus must be altering the states of the particles that pass through it. This experiment can be interpreted to exhibit the uncertainty principle: since the angular momentum cannot be measured on two perpendicular directions at the same time, the measurement of the angular momentum on the x direction destroys the previous determination of the angular momentum in the z direction. That's why the third apparatus measures renewed z+ and z- beams like the x measurement really made a clean slate of the z+ output. History. The Stern–Gerlach experiment was conceived by Otto Stern in 1921 and performed by him and Walther Gerlach in Frankfurt in 1922. At the time of the experiment, the most prevalent model for describing the atom was the Bohr-Sommerfeld model, which described electrons as going around the positively charged nucleus only in certain discrete atomic orbitals or energy levels. Since the electron was quantized to be only in certain positions in space, the separation into distinct orbits was referred to as space quantization. The Stern–Gerlach experiment was meant to test the Bohr–Sommerfeld hypothesis that the direction of the angular momentum of a silver atom is quantized. The experiment was first performed with an electromagnet that allowed the non-uniform magnetic field to be turned on gradually from a null value. When the field was null, the silver atoms were deposited as a single band on the detecting glass slide. When the field was made stronger, the middle of the band began to widen and eventually to split into two, so that the glass-slide image looked like a lip-print, with an opening in the middle, and closure at either end. In the middle, where the magnetic field was strong enough to split the beam into two, statistically half of the silver atoms had been deflected by the non-uniformity of the field. Note that the experiment was performed several years before George Uhlenbeck and Samuel Goudsmit formulated their hypothesis about the existence of electron spin in 1925. Even though the result of the Stern−Gerlach experiment has later turned out to be in agreement with the predictions of quantum mechanics for a spin-1/2 particle, the experimental result was also consistent with the Bohr–Sommerfeld theory. In 1927, T.E. 
Phipps and J.B. Taylor reproduced the effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms. However, in 1926 the non-relativistic scalar Schrödinger equation had incorrectly predicted the magnetic moment of hydrogen to be zero in its ground state. To correct this problem, Wolfgang Pauli considered a spin-1/2 version of the Schrödinger equation using the three Pauli matrices which now bear his name, which was later shown by Paul Dirac in 1928 to be a consequence of his relativistic Dirac equation. In the early 1930s Stern, together with Otto Robert Frisch and Immanuel Estermann, improved the molecular beam apparatus sufficiently to measure the magnetic moment of the proton, a value nearly 2000 times smaller than the electron moment. In 1931, theoretical analysis by Gregory Breit and Isidor Isaac Rabi showed that this apparatus could be used to measure nuclear spin whenever the electronic configuration of the atom was known. The concept was applied by Rabi and Victor W. Cohen in 1934 to determine the formula_15 spin of Na atoms. In 1938 Rabi and coworkers inserted an oscillating magnetic field element into their apparatus, inventing nuclear magnetic resonance spectroscopy. By tuning the frequency of the oscillator to the frequency of the nuclear precessions they could selectively tune into each quantum level of the material under study. Rabi was awarded the Nobel Prize in 1944 for this work. Importance. The Stern–Gerlach experiment strongly influenced later developments in modern physics: References.
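The sequential experiments described above can be reproduced numerically with the standard spin-1/2 formalism. The following is a minimal sketch, assuming NumPy and written here for illustration rather than taken from the article: it prepares a z+ state, performs a projective measurement along x, keeps the x+ beam, and then measures along z again, recovering the 50/50 split of Experiment 3.

```python
import numpy as np

# Pauli matrices; the spin observables along x and z are (hbar/2) * sigma.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def measure(state, observable):
    """Projective measurement: return (outcome, probability, post-measurement state)."""
    eigvals, eigvecs = np.linalg.eigh(observable)
    results = []
    for k, val in enumerate(eigvals):
        v = eigvecs[:, k]
        amp = np.vdot(v, state)          # <v|psi>
        p = abs(amp) ** 2
        post = v if p > 0 else None      # collapsed state (up to a phase)
        results.append((val, p, post))
    return results

# Start in the z+ state produced by the first apparatus.
z_plus = np.array([1.0, 0.0], dtype=complex)

# Second apparatus: measure S_x and keep the x+ beam.
x_outcomes = measure(z_plus, sigma_x)
x_val, x_prob, x_plus_state = max(x_outcomes, key=lambda r: r[0])
print(f"P(x+ | z+) = {x_prob:.2f}")                 # 0.50

# Third apparatus: measure S_z on the x+ beam.
for z_val, p, _ in measure(x_plus_state, sigma_z):
    label = "z+" if z_val > 0 else "z-"
    print(f"P({label} | x+) = {p:.2f}")              # both 0.50
```

The renewed appearance of the z− outcome after the intermediate x measurement is exactly the behaviour the third apparatus shows in the bottom illustration.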
[ { "math_id": 0, "text": "+\\frac{\\hbar}{2}" }, { "math_id": 1, "text": "-\\frac{\\hbar}{2}" }, { "math_id": 2, "text": "+\\frac{1}{2}" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "z" }, { "math_id": 5, "text": "J_z" }, { "math_id": 6, "text": "|\\psi\\rangle = c_1\\left|\\psi_{j = +\\frac{\\hbar}{2}}\\right\\rangle + c_2\\left|\\psi_{j = -\\frac{\\hbar}{2}}\\right\\rangle" }, { "math_id": 7, "text": "c_1" }, { "math_id": 8, "text": "c_2" }, { "math_id": 9, "text": "|c_1|^2" }, { "math_id": 10, "text": "|c_2|^2" }, { "math_id": 11, "text": "|\\psi\\rangle" }, { "math_id": 12, "text": "\\left|\\psi_{j = +\\frac{\\hbar}{2}}\\right\\rangle" }, { "math_id": 13, "text": "\\left|\\psi_{j = -\\frac{\\hbar}{2}}\\right\\rangle" }, { "math_id": 14, "text": "|c_1|^2 + |c_2|^2 = 1" }, { "math_id": 15, "text": "3/2" } ]
https://en.wikipedia.org/wiki?curid=962171
9622185
Hodges–Lehmann estimator
Robust and nonparametric estimator of a population's location parameter In statistics, the Hodges–Lehmann estimator is a robust and nonparametric estimator of a population's location parameter. For populations that are symmetric about one median, such as the Gaussian or normal distribution or the Student "t"-distribution, the Hodges–Lehmann estimator is a consistent and median-unbiased estimate of the population median. For non-symmetric populations, the Hodges–Lehmann estimator estimates the "pseudo–median", which is closely related to the population median. The Hodges–Lehmann estimator was proposed originally for estimating the location parameter of one-dimensional populations, but it has been used for many more purposes. It has been used to estimate the differences between the members of two populations. It has been generalized from univariate populations to multivariate populations, which produce samples of vectors. It is based on the Wilcoxon signed-rank statistic. In statistical theory, it was an early example of a rank-based estimator, an important class of estimators both in nonparametric statistics and in robust statistics. The Hodges–Lehmann estimator was proposed in 1963 independently by Pranab Kumar Sen and by Joseph Hodges and Erich Lehmann, and so it is also called the "Hodges–Lehmann–Sen estimator". Definition. In the simplest case, the "Hodges–Lehmann" statistic estimates the location parameter for a univariate population. Its computation can be described quickly. For a dataset with "n" measurements, consider the set of all pairs of its elements formula_0 such that formula_1 ≤ formula_2 (i.e. specifically including the self-pairs, a detail that many secondary sources incorrectly omit); this set has "n"("n" + 1)/2 elements. For each such pair, the mean is computed; finally, the median of these "n"("n" + 1)/2 averages is defined to be the Hodges–Lehmann estimator of location. The Hodges–Lehmann statistic also estimates the difference between two populations. For two sets of data with "m" and "n" observations, the set of two-element sets made of them is their Cartesian product, which contains "m" × "n" pairs of points (one from each set); each such pair defines one difference of values. The Hodges–Lehmann statistic is the median of the "m" × "n" differences. Estimating the population median of a symmetric population. For a population that is symmetric, the Hodges–Lehmann statistic estimates the population's median. It is a robust statistic that has a breakdown point of 0.29, which means that the statistic remains bounded even if nearly 30 percent of the data have been contaminated. This robustness is an important advantage over the sample mean, which has a zero breakdown point: it depends linearly on every observation and so is liable to being misled by even one outlier. The sample median is even more robust, having a breakdown point of 0.50. The Hodges–Lehmann estimator is also much better than the sample mean when estimating mixtures of normal distributions. For symmetric distributions, the Hodges–Lehmann statistic has greater efficiency than does the sample median. For the normal distribution, the Hodges–Lehmann statistic is nearly as efficient as the sample mean. For the Cauchy distribution (Student t-distribution with one degree of freedom), the Hodges–Lehmann estimator is infinitely more efficient than the sample mean, which is not a consistent estimator of the median. 
For non-symmetric populations, the Hodges–Lehmann statistic estimates the population's "pseudo–median", a location parameter that is closely related to the median. The difference between the median and pseudo–median is relatively small, and so this distinction is neglected in elementary discussions. Like the spatial median, the pseudo–median is well defined for all distributions of random variables having dimension two or greater; for one-dimensional distributions, a pseudo–median always exists, but it need not be unique. Like the median, the pseudo–median is defined even for heavy-tailed distributions that lack any (finite) mean. The one-sample Hodges–Lehmann statistic need not estimate any population mean, which for many distributions does not exist. The two-sample Hodges–Lehmann estimator need not estimate the difference of two means or the difference of two (pseudo-)medians; rather, it estimates the median of the differences between pairs of random variables drawn respectively from the two populations. In general statistics. The Hodges–Lehmann "univariate" statistics have several generalizations in "multivariate" statistics: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(z_i,z_j)" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "j" } ]
https://en.wikipedia.org/wiki?curid=9622185
9623828
P–P plot
Probability plot which compares two cumulative distribution functions In statistics, a P–P plot (probability–probability plot or percent–percent plot or P value plot) is a probability plot for assessing how closely two data sets agree, or for assessing how closely a dataset fits a particular model. It works by plotting the two cumulative distribution functions against each other; if they are similar, the plotted points will lie close to a straight line. This behavior is similar to that of the more widely used Q–Q plot, with which it is often confused. Definition. A P–P plot plots two cumulative distribution functions (cdfs) against each other: given two probability distributions, with cdfs "F" and "G", it plots formula_0 as "z" ranges from formula_1 to formula_2 As a cdf has range [0,1], the domain of this parametric graph is formula_3 and the range is the unit square formula_4 Thus for input "z" the output is the pair of numbers giving what "percentage" of "F" and what "percentage" of "G" fall at or below "z." The comparison line is the 45° line from (0,0) to (1,1), and the distributions are equal if and only if the plot falls on this line. The degree of deviation makes it easy to visually identify how different the distributions are, but because of sampling error, even samples drawn from identical distributions will not appear identical. Example. As an example, if the two distributions do not overlap, say "F" is below "G," then the P–P plot will move from left to right along the bottom of the square – as "z" moves through the support of "F," the cdf of "F" goes from 0 to 1, while the cdf of "G" stays at 0 – and then moves up the right side of the square – the cdf of "F" is now 1, as all points of "F" lie below all points of "G," and now the cdf of "G" moves from 0 to 1 as "z" moves through the support of "G." Use. As the above example illustrates, if two distributions are separated in space, the P–P plot will give very little data – it is only useful for comparing probability distributions that have nearby or equal location. Notably, it will pass through the point (1/2, 1/2) if and only if the two distributions have the same median. P–P plots are sometimes limited to comparisons between two samples, rather than comparison of a sample to a theoretical model distribution. However, they are of general use, particularly where observations are not all modelled with the same distribution. The P–P plot has also found some use in comparing a sample distribution to a "known" theoretical distribution: given "n" samples, plotting the continuous theoretical cdf against the empirical cdf would yield a stairstep (a step as "z" hits a sample), and would hit the top of the square when the last data point was hit. Instead one only plots points, plotting the "k"th observed point (in order: formally, the "k"th order statistic) against the "k"/("n" + 1) quantile of the theoretical distribution. This choice of "plotting position" (choice of quantile of the theoretical distribution) has occasioned less controversy than the choice for Q–Q plots. The resulting goodness of fit of the 45° line gives a measure of the difference between a sample set and the theoretical distribution. A P–P plot can be used as a graphical adjunct to a test of the fit of probability distributions, with additional lines being included on the plot to indicate either specific acceptance regions or the range of expected departure from the 1:1 line. 
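One common way to draw such a plot for a sample against a fitted model can be sketched in a few lines of Python, assuming NumPy, SciPy and Matplotlib are available; the fitted normal model and the "k"/("n" + 1) plotting positions are choices made for the example rather than part of the definition above.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.5, size=200)

x = np.sort(sample)
n = len(x)
empirical_p = np.arange(1, n + 1) / (n + 1)        # plotting positions k/(n+1)
theoretical_p = stats.norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))

plt.plot(theoretical_p, empirical_p, "o", markersize=3)
plt.plot([0, 1], [0, 1], "k--", label="45° comparison line")
plt.xlabel("theoretical cumulative probability")
plt.ylabel("empirical cumulative probability")
plt.legend()
plt.show()
```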
An improved version of the P–P plot, called the SP or S–P plot, is available, which makes use of a variance-stabilizing transformation to create a plot on which the variations about the 1:1 line should be the same at all locations. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "(F(z),G(z))" }, { "math_id": 1, "text": "-\\infty" }, { "math_id": 2, "text": "\\infty." }, { "math_id": 3, "text": "(-\\infty,\\infty)" }, { "math_id": 4, "text": "[0,1]\\times [0,1]." } ]
https://en.wikipedia.org/wiki?curid=9623828
9626451
Muller automaton
In automata theory, a Muller automaton is a type of ω-automaton. The acceptance condition separates a Muller automaton from other ω-automata. The Muller automaton is defined using a Muller acceptance condition, i.e. the set of all states visited infinitely often must be an element of the acceptance set. Both deterministic and non-deterministic Muller automata recognize the ω-regular languages. They are named after David E. Muller, an American mathematician and computer scientist, who invented them in 1963. Formal definition. Formally, a deterministic Muller automaton is a tuple "A" = ("Q",Σ,δ,"q"0,F) that consists of the following information: "Q" is a finite set of states, Σ is a finite input alphabet, δ: "Q" × Σ → "Q" is the transition function, "q"0 is the initial state, and F ⊆ P("Q") is the set of accepting sets of states, i.e. the Muller acceptance condition. A run is accepting if and only if the set of states it visits infinitely often is an element of F. In a non-deterministic Muller automaton, the transition function δ is replaced with a transition relation Δ that returns a set of states and the initial state "q"0 is replaced by a set of initial states "Q"0. Generally, 'Muller automaton' refers to a non-deterministic Muller automaton. For a more comprehensive formalisation, see ω-automaton. Equivalence with other ω-automata. Muller automata are as expressive as parity automata, Rabin automata, Streett automata, and non-deterministic Büchi automata, to mention some, and strictly more expressive than deterministic Büchi automata. The equivalence of the above automata and non-deterministic Muller automata can be shown very easily as the accepting conditions of these automata can be emulated using the acceptance condition of Muller automata and vice versa. McNaughton's theorem demonstrates the equivalence of non-deterministic Büchi automaton and deterministic Muller automaton. Thus, deterministic and non-deterministic Muller automata are equivalent in terms of the languages they can accept. Transformation to non-deterministic Muller automata. Following is a list of automaton constructions, each of which transforms a type of ω-automaton into a non-deterministic Muller automaton. If formula_0 is the set of final states in a Büchi automaton with the set of states formula_1, we can construct a Muller automaton with the same set of states, transition function and initial state with the Muller accepting condition as F = { "X" | "X" ∈ P("Q") ∧ "X" ∩ "B" ≠ formula_2}. Similarly, the Rabin conditions formula_3 can be emulated by constructing the acceptance set in the Muller automaton as all sets formula_4 that satisfy formula_5 and formula_6, for some "j". Note that this covers the case of parity automata too, as the parity acceptance condition can be expressed as a Rabin acceptance condition easily. The Streett conditions formula_3 can be emulated by constructing the acceptance set in the Muller automaton as all sets formula_4 that satisfy formula_7, for all "j". Transformation to deterministic Muller automata. McNaughton's theorem provides a procedure to transform any non-deterministic Büchi automaton into a deterministic Muller automaton. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
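The acceptance condition and the Büchi-to-Muller construction described above can be illustrated with a small Python sketch for ultimately periodic ("lasso") runs; the data representation and function names are assumptions made for the example, and the subset enumeration is exponential in |"Q"|, so it is only suitable for tiny automata.

```python
from itertools import chain, combinations

def muller_accepts(prefix, cycle, acceptance_sets):
    """A lasso run prefix·cycle^ω visits exactly the states of `cycle`
    infinitely often; it is accepted iff that set is in F. The prefix is
    irrelevant: Muller acceptance depends only on the infinity set."""
    return frozenset(cycle) in acceptance_sets

def buchi_to_muller_condition(states, buchi_final):
    """F = { X ⊆ Q | X ∩ B ≠ ∅ }: every non-empty subset of Q that meets
    the Büchi final states B becomes a Muller acceptance set."""
    subsets = chain.from_iterable(combinations(states, r) for r in range(1, len(states) + 1))
    return {frozenset(s) for s in subsets if set(s) & set(buchi_final)}

Q = {"q0", "q1", "q2"}
B = {"q1"}                                        # Büchi final states
F = buchi_to_muller_condition(Q, B)
print(muller_accepts(["q0"], ["q1", "q2"], F))    # True: q1 recurs forever
print(muller_accepts(["q0", "q1"], ["q2"], F))    # False: only q2 recurs
```

Note that the prefix plays no role in the acceptance test, reflecting the fact that the Muller condition depends only on the states visited infinitely often.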
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "\\emptyset" }, { "math_id": 3, "text": "(E_j,F_j)" }, { "math_id": 4, "text": "F \\subseteq Q" }, { "math_id": 5, "text": "F \\cap E_j = \\emptyset " }, { "math_id": 6, "text": "F \\cap F_j \\neq \\emptyset" }, { "math_id": 7, "text": "F \\cap F_j = \\emptyset \\implies F \\cap E_j = \\emptyset" } ]
https://en.wikipedia.org/wiki?curid=9626451
9626951
Exact couple
In mathematics, an exact couple, due to William S. Massey (1952), is a general source of spectral sequences. It is common especially in algebraic topology; for example, the Serre spectral sequence can be constructed by first constructing an exact couple. For the definition of an exact couple and the construction of a spectral sequence from it (which is immediate), see . For a basic example, see Bockstein spectral sequence. The present article covers additional material. Exact couple of a filtered complex. Let "R" be a ring, which is fixed throughout the discussion. Note that if "R" is formula_0, then modules over "R" are the same thing as abelian groups. Each filtered chain complex of modules determines an exact couple, which in turn determines a spectral sequence, as follows. Let "C" be a chain complex graded by integers and suppose it is given an increasing filtration: for each integer "p", there is an inclusion of complexes: formula_1 From the filtration one can form the associated graded complex: formula_2 which is doubly-graded and which is the zero-th page of the spectral sequence: formula_3 To get the first page, for each fixed "p", we look at the short exact sequence of complexes: formula_4 from which we obtain a long exact sequence of homologies: ("p" is still fixed) formula_5 With the notation formula_6, the above reads: formula_7 which is precisely an exact couple and formula_8 is a complex with the differential formula_9. The derived couple of this exact couple gives the second page and we iterate. In the end, one obtains the complexes formula_10 with the differential "d": formula_11 The next lemma gives a more explicit formula for the spectral sequence; in particular, it shows that the spectral sequence constructed above is the same as the one obtained from the more traditional direct construction, in which one uses the formula below as the definition (cf. Spectral sequence#The spectral sequence of a filtered complex). Sketch of proof: Remembering formula_9, it is easy to see: formula_12 where they are viewed as subcomplexes of formula_8. We will write the bar for the quotient map formula_13. Now, if formula_14, then formula_15 for some formula_16. On the other hand, remembering "k" is a connecting homomorphism, formula_17 where "x" is a representative living in formula_18. Thus, we can write: formula_19 for some formula_20. Hence, formula_21 modulo formula_22, yielding formula_23. Next, we note that a class in formula_24 is represented by a cycle "x" such that formula_25. Hence, since "j" is induced by formula_26, formula_27. We conclude: since formula_28, formula_29 Proof: See the last section of May. formula_30 Exact couple of a double complex. A double complex determines two exact couples; whence, the two spectral sequences, as follows. (Some authors call the two spectral sequences horizontal and vertical.) Let formula_31 be a double complex. With the notation formula_32, for each fixed "p", we have the exact sequence of cochain complexes: formula_33 Taking cohomology of it gives rise to an exact couple: formula_34 By symmetry, that is, by switching first and second indexes, one also obtains the other exact couple. Example: Serre spectral sequence. The Serre spectral sequence arises from a fibration: formula_35 For the sake of transparency, we only consider the case when the spaces are CW complexes, "F" is connected and "B" is simply connected; the general case involves more technicality (namely, a local coefficient system). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
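For reference, the derived couple used above ("the derived couple of this exact couple gives the second page") has the following standard description; this is the textbook formulation rather than a quotation from the present article. Given an exact couple ("D", "E", "i", "j", "k") with "d" = "j" ∘ "k":

```latex
% Derived couple of an exact couple (D, E, i, j, k) with d = j \circ k
% (standard formulation; notation chosen to match the maps i, j, k above):
\[
\begin{aligned}
  D' &= i(D), \qquad E' = \ker d / \operatorname{im} d, \\
  i' &= i|_{D'}, \qquad j'(i(x)) = [\,j(x)\,], \qquad k'([e]) = k(e).
\end{aligned}
\]
% The couple (D', E', i', j', k') is again exact; iterating the construction
% on the exact couple of a filtered complex yields the pages E^2, E^3, ...,
% with the differential on each page induced by j and k as in the text.
```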
[ { "math_id": 0, "text": "\\Z" }, { "math_id": 1, "text": "F_{p-1} C \\subset F_p C." }, { "math_id": 2, "text": "\\operatorname{gr} C = \\bigoplus_{-\\infty}^\\infty F_p C/F_{p-1} C," }, { "math_id": 3, "text": "E^0_{p, q} = (\\operatorname{gr} C)_{p, q} = (F_p C / F_{p-1} C)_{p+q}." }, { "math_id": 4, "text": "0 \\to F_{p-1} C \\to F_p C \\to (\\operatorname{gr}C)_p \\to 0" }, { "math_id": 5, "text": "\\cdots \\to H_n(F_{p-1} C) \\overset{i}\\to H_n(F_p C) \\overset{j} \\to H_n(\\operatorname{gr}(C)_p) \\overset{k}\\to H_{n-1}(F_{p-1} C) \\to \\cdots" }, { "math_id": 6, "text": "D_{p, q} = H_{p+q} (F_p C), \\, E^1_{p, q} = H_{p + q} (\\operatorname{gr}(C)_p)" }, { "math_id": 7, "text": "\\cdots \\to D_{p - 1, q + 1} \\overset{i}\\to D_{p, q} \\overset{j} \\to E^1_{p, q} \\overset{k}\\to D_{p - 1, q} \\to \\cdots," }, { "math_id": 8, "text": "E^1" }, { "math_id": 9, "text": "d = j \\circ k" }, { "math_id": 10, "text": "E^r_{*, *}" }, { "math_id": 11, "text": "E^r_{p, q} \\overset{k}\\to D^r_{p - 1, q} \\overset{{}^r j}\\to E^r_{p - r, q + r - 1}." }, { "math_id": 12, "text": "Z^r= k^{-1} (\\operatorname{im} i^r), \\, B^r = j (\\operatorname{ker} i^r)," }, { "math_id": 13, "text": "F_p C \\to F_p C / F_{p-1} C" }, { "math_id": 14, "text": "[\\overline{x}] \\in Z^{r-1}_{p, q} \\subset E^1_{p, q}" }, { "math_id": 15, "text": "k([\\overline{x}]) = i^{r-1}([y])" }, { "math_id": 16, "text": "[y] \\in D_{p - r, q + r - 1} = H_{p+q-1}(F_p C)" }, { "math_id": 17, "text": "k([\\overline{x}]) = [d(x)]" }, { "math_id": 18, "text": "(F_p C)_{p + q}" }, { "math_id": 19, "text": "d(x) - i^{r-1}(y) = d(x')" }, { "math_id": 20, "text": "x' \\in F_{p-1}C" }, { "math_id": 21, "text": "[\\overline{x}] \\in Z^r_p \\Leftrightarrow x \\in A^r_p" }, { "math_id": 22, "text": "F_{p-1} C" }, { "math_id": 23, "text": "Z_p^r \\simeq (A^r_p + F_{p-1}C)/F_{p-1} C" }, { "math_id": 24, "text": "\\operatorname{ker}(i^{r-1}: H_{p+q}(F_pC) \\to H_{p+q}(F_{p + r - 1} C))" }, { "math_id": 25, "text": "x \\in d(F_{p+r-1} C)" }, { "math_id": 26, "text": "\\overline{\\cdot}" }, { "math_id": 27, "text": "B^{r-1}_p = j (\\operatorname{ker} i^{r-1}) \\simeq (d(A^{r-1}_{p+r-1}) + F_{p-1} C)/F_{p-1} C" }, { "math_id": 28, "text": "A^r_p \\cap F_{p-1} C = A^{r-1}_{p-1}" }, { "math_id": 29, "text": "E^r_{p, *} = {Z^{r-1}_p \\over B^{r-1}_p} \\simeq {A^r_p + F_{p-1} C \\over d(A^{r-1}_{p+r-1}) + F_{p-1}C} \\simeq {A^r_p \\over d(A^{r-1}_{p+r-1}) + A^{r-1}_{p-1}}. \\qquad \\square" }, { "math_id": 30, "text": "\\square" }, { "math_id": 31, "text": "K^{p,q}" }, { "math_id": 32, "text": "G^p = \\bigoplus_{i \\ge p} K^{i, *}" }, { "math_id": 33, "text": "0 \\to G^{p+1} \\to G^p \\to K^{p, *} \\to 0." }, { "math_id": 34, "text": "\\cdots \\to D^{p, q} \\overset{j}\\to E_1^{p, q} \\overset{k}\\to \\cdots" }, { "math_id": 35, "text": "F \\to E \\to B." } ]
https://en.wikipedia.org/wiki?curid=9626951
9628193
Darwin–Radau equation
In astrophysics, the Darwin–Radau equation (named after Rodolphe Radau and Charles Galton Darwin) gives an approximate relation between the moment of inertia factor of a planetary body and its rotational speed and shape. The moment of inertia factor is directly related to the largest principal moment of inertia, "C". It is assumed that the rotating body is in hydrostatic equilibrium and is an ellipsoid of revolution. The Darwin–Radau equation states formula_0 where "M" and "Re" represent the mass and mean equatorial radius of the body. Here λ is known as d'Alembert's parameter and the Radau parameter η is defined as formula_1 where "q" is the geodynamical constant formula_2 and ε is the geometrical flattening formula_3 where "Rp" is the mean polar radius and "Re" is the mean equatorial radius. For Earth, formula_4 and formula_5, which yields formula_6, a good approximation to the measured value of 0.3307. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
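The Earth figures quoted above can be checked with a short Python sketch; the function name is illustrative, and the constants are the ones given in the text.

```python
from math import sqrt

def moment_of_inertia_factor(q, epsilon):
    """Darwin–Radau relation: C/(M R_e^2) = (2/3) * (1 - (2/5) * sqrt(1 + eta)),
    with the Radau parameter eta = 5*q/(2*epsilon) - 2."""
    eta = 5.0 * q / (2.0 * epsilon) - 2.0
    return (2.0 / 3.0) * (1.0 - (2.0 / 5.0) * sqrt(1.0 + eta))

q_earth = 3.461391e-3            # geodynamical constant q
epsilon_earth = 1.0 / 298.257    # geometrical flattening
print(moment_of_inertia_factor(q_earth, epsilon_earth))  # about 0.331, close to the quoted 0.3313
```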
[ { "math_id": 0, "text": "\n\\frac{C}{MR_{e}^{2}} = \\frac{2}{3\\lambda} = \\frac{2}{3} \\left( 1 - \\frac{2}{5} \\sqrt{1 + \\eta} \\right)\n" }, { "math_id": 1, "text": "\n\\eta = \\frac{5q}{2\\epsilon} - 2\n" }, { "math_id": 2, "text": "\nq = \\frac{\\omega^{2} R_{e}^{3}}{GM}\n" }, { "math_id": 3, "text": "\n\\epsilon = \\frac{R_{e} - R_{p}}{R_{e}}\n" }, { "math_id": 4, "text": "q \\approx 3.461391 \\times 10^{-3}" }, { "math_id": 5, "text": "\\epsilon \\approx 1/298.257" }, { "math_id": 6, "text": "\\frac{C}{MR_{e}^{2}} \\approx 0.3313" } ]
https://en.wikipedia.org/wiki?curid=9628193
9628924
Intravascular volume status
In medicine, intravascular volume status refers to the volume of blood in a patient's circulatory system, and is essentially the blood plasma component of the overall volume status of the body, which otherwise includes both intracellular fluid and extracellular fluid. Still, the intravascular component is usually of primary interest, and "volume status" is sometimes used synonymously with "intravascular volume status". It is related to the patient's state of hydration, but is not identical to it. For instance, intravascular volume depletion can exist in an adequately hydrated person if there is loss of water into interstitial tissue (e.g. due to hyponatremia or liver failure). Clinical assessment. Intravascular volume depletion. Volume contraction of intravascular fluid (blood plasma) is termed hypovolemia, and its signs include, in order of severity: Intravascular volume overload. Signs of intravascular volume overload (high blood volume) include: Intravascular blood volume correlation to a patient's ideal height and weight. For the clinical assessment of intravascular blood volume, the BVA-100, a semi-automated blood volume analyzer device that has FDA approval, determines the status of a patient’s blood volume based on the Ideal Height and Weight Method. Using a patient’s ideal weight and actual weight, the percent deviation from the desirable weight is found using the following equation: formula_0 Using the deviation from desirable weight, the BV ratio (ml/kg), i.e. the ideal blood volume, can be determined. The machine was tested in clinical studies for the treatment of a broad range of medical conditions related to intravascular volume status, such as anemia, congestive heart failure, sepsis, CFS, hyponatremia, and syncope, among others. This tool for measuring blood volume may foster improved patient care, as both a stand-alone and a complementary diagnostic tool, with a statistically significant increase in patient survival having been reported. Pathophysiology. Intravascular volume depletion. The most common cause of hypovolemia is diarrhea or vomiting. The other causes are usually divided into "renal" and "extrarenal" causes. Renal causes include overuse of diuretics, or trauma or disease of the kidney. Extrarenal causes include bleeding, burns, and any causes of edema (e.g. congestive heart failure, liver failure). Intravascular volume depletion is divided into three types based on the blood sodium level: Intravascular volume overload. Intravascular volume overload can occur during surgery, if water rather than isotonic saline is used to wash the incision. It can also occur if there is inadequate urination, e.g. with certain kidney diseases. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
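The percent deviation formula above amounts to the following small Python function; the example weights are hypothetical.

```python
def percent_desirable_weight_deviation(actual_weight_kg, desirable_weight_kg):
    """Signed percent deviation of actual weight from desirable weight."""
    return (actual_weight_kg - desirable_weight_kg) / desirable_weight_kg * 100.0

# A patient weighing 90 kg with a desirable weight of 75 kg deviates by +20%.
print(percent_desirable_weight_deviation(90.0, 75.0))   # 20.0
```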
[ { "math_id": 0, "text": "\\pm\\%\\text{ Desirable Weight} = \\frac{\\text{Actual Weight}-\\text{Desirable Weight}}{\\text{Desirable Weight}}\\times100" } ]
https://en.wikipedia.org/wiki?curid=9628924
9630
Ecology
Study of organisms and their environment Ecology (from Ancient Greek "οἶκος" (oîkos) 'house' and "-λογία" (-logía) 'study of')[A] is the natural science of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history. Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes. Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology). The word "ecology" (German: "Ökologie") was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory. Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value. Levels, scope, and scale of organization. The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to a planetary scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. 
Some ecological principles, however, do exhibit collective properties where the sum of the components explain the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame. The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes. Hierarchy. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; System behaviors must first be arrayed into different levels of the organization. Behaviors corresponding to higher levels occur at slow rates. Conversely, lower organizational levels exhibit rapid rates. For example, individual tree leaves respond rapidly to momentary changes in light intensity, CO2 concentration, and the like. The growth of the tree responds more slowly and integrates these short-term changes. O'Neill et al. (1986) The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open about broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales. To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties." Biodiversity. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Biodiversity refers to the variety of life and its processes. It includes the variety of living organisms, the genetic differences among them, the communities and ecosystems in which they occur, and the ecological and evolutionary processes that keep them functioning, yet ever-changing and adapting. Noss &amp; Carpenter (1994) Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. 
Natural capital that supports populations is critical for maintaining ecosystem services and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry. Habitat. The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard ("Tropidurus hispidus") has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevasses where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment. Niche. Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the "fundamental" and the "realized" niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose "dimensions" are defined as environmental variables and whose "size" is a function of the number of values that the environmental values may assume for which an organism has "positive fitness"." Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. 
The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species. Niche construction. Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats." The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time. Biome. Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans. Biosphere. 
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance. Population ecology. Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat. A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration. An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by: formula_0 where "N" is the total number of individuals in the population, "b" and "d" are the per capita rates of birth and death respectively, and "r" is the per capita rate of population change. Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst: formula_1 where "N(t)" is the number of individuals measured as biomass density as a function of time, "t", "r" is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and formula_2 is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (formula_3) will grow to approach equilibrium, where (formula_4), when the rates of increase and crowding are balanced, formula_5. A common, analogous model fixes the equilibrium, formula_5 as "K", which is known as the "carrying capacity." Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data." Metapopulations and migration. 
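To make the growth models above concrete, the following Python sketch numerically integrates one common form of the logistic model, dN/dt = rN − αN², in which the equilibrium (carrying capacity) is K = r/α; the parameter values and the simple Euler scheme are illustrative assumptions.

```python
def simulate_logistic(r, alpha, n0, dt=0.01, steps=2000):
    """Euler integration of dN/dt = r*N - alpha*N**2."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += (r * n - alpha * n * n) * dt
        trajectory.append(n)
    return trajectory

r, alpha = 0.5, 0.005            # carrying capacity K = r/alpha = 100
traj = simulate_logistic(r, alpha, n0=2.0)
# prints the initial size, the mid-run size (about 75), and the final size (about 100)
print(traj[0], round(traj[len(traj) // 2], 1), round(traj[-1], 1))
```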
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population. In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure. Community ecology. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Community ecology examines how interactions among species and their environment affect the abundance, distribution and diversity of species within communities. Johnson &amp; Stinchcomb (2007) Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals. Ecosystem ecology. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; These ecosystems, as we may call them, are of the most various kinds and sizes. They form one category of the multitudinous physical systems of the universe, which range from the universe as a whole down to the atom. 
Tansley (1935) Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria). The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity. Food webs. A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. A simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called a food chain. Food chains in an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows. Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigations, e.g. into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems. Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and eventually can illustrate a "complete" web of life. The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras. Trophic levels. A trophic level (from Greek τροφή, "trophē", meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. 
Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'. Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and Detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because compared to herbivores, they are relatively inefficient at grazing. Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores." Keystone species. A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds means that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed "trophic cascades") that alters trophic dynamics, other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability. Sea otters ("Enhydra lutris") are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow ("Hydrodamalis gigas"). 
While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied. Complexity. Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy, and matter is integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small scale patterns do not necessarily explain large scale phenomena, otherwise captured in the expression (coined by Aristotle) 'the sum is greater than the parts'.[E] "Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960. Holism. Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed." Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. 
The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells. Relation to evolution. Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal "Trends in Ecology and Evolution". There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation. Behavioural ecology. All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba. Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increases reproductive fitness. Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. 
For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk." Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors. Cognitive ecology. Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...". Social ecology. Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats where eusocialism has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group; whereby, it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members. Coevolution. Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. 
If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients. Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure. Biogeography. Biogeography (an amalgamation of "biology" and "geography") is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The "Journal of Biogeography" was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory. Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming. r/K selection theory. A population ecology concept is r/K selection theory,[D] one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, the density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. 
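The growth model that underlies r/K selection can be sketched numerically. The following Python snippet is only an illustrative sketch (the values of r, K, the founding population, and the time step are arbitrary assumptions, not data from the ecological literature); it integrates the density-independent exponential equation dN/dt = rN alongside its density-dependent logistic form dN/dt = rN(K - N)/K, where r is the intrinsic rate of increase and K is the carrying capacity introduced in the next paragraph:

# Minimal sketch of density-independent (exponential) versus
# density-dependent (logistic) population growth.
# All parameter values below are illustrative assumptions.
r = 0.5                 # intrinsic rate of natural increase (per unit time)
K = 1000.0              # carrying capacity
N_exp = N_log = 10.0    # small founding population after colonization
dt = 0.1                # Euler integration time step
for step in range(1, 201):
    N_exp += r * N_exp * dt                      # dN/dt = r*N
    N_log += r * N_log * (K - N_log) / K * dt    # dN/dt = r*N*(K - N)/K
    if step % 50 == 0:
        print(f"t={step * dt:5.1f}  exponential={N_exp:12.1f}  logistic={N_log:7.1f}")

The exponential trajectory grows without bound, while the logistic trajectory levels off near K, mirroring the shift from r-selection to K-selection described below.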
These early phases of population growth experience "density-independent" forces of natural selection, which is called "r"-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called "K"-selection. In the "r/K"-selection model, the first variable "r" is the intrinsic rate of natural increase in population size and the second variable "K" is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An "r"-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in "r"-selected species. Many kinds of insects and invasive species exhibit "r"-selected characteristics. In contrast, a "K"-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting "K"-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring. Molecular ecology. The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication "Molecular Ecology" in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, "Molecular Markers, Natural History and Evolution". Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography. Human ecology. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; The history of life on Earth has been a history of interaction between living things and their surroundings. To a large extent, the physical form and the habits of the earth's vegetation and its animal life have been molded by the environment. Considering the whole span of earthly time, the opposite effect, in which life actually modifies its surroundings, has been relatively slight. Only within the moment of time represented by the present century has one species man acquired significant power to alter the nature of his world. 
Rachel Carson, "Silent Spring" Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century. The ecological complexities human beings are facing through the technological transformation of the planetary biome has brought on the Anthropocene. The unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions and the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity and beneficial to human health (cognitive and physiological), economies, and they even provide an information or reference function as a living library giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology as they are the ultimate base foundation of global economics as every commodity, and the capacity for exchange ultimately stems from the ecosystems on Earth. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Ecosystem management is not just about science nor is it simply an extension of traditional resource management; it offers a fundamental reframing of how humans may work with nature. Grumbine (1994) Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed in the industrial investment of restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of their wetland environments by implementing soil amendments that will improve groundwater storage and flow, and trimming or removal of vegetation that could cause harm to water quality. 
Ecological science is used in the methods of sustainable harvesting, disease, and fire outbreak management, in fisheries stock management, for integrating land-use with protected areas and communities, and conservation in complex geo-political landscapes. Relation to the environment. The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat. The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. After the effective environmental components are understood through reference to their causes; however, they conceptually link back together as an integrated whole, or "holocoenotic" system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem. Disturbance and resilience. A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances. The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades. Metabolism and the early atmosphere. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Metabolism – the rate at which energy and material resources are taken up from the environment, transformed within an organism, and allocated to maintenance, growth and reproduction – is a fundamental physiological trait. Ernest et al. The Earth was formed approximately 4.5 billion years ago. 
As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gasses that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved. Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + h"v" → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the "Great Oxidation") did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior. Radiation: heat, temperature and light. The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy. There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds. Physical environments. Water. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Wetland conditions such as shallow water, high plant productivity, and anaerobic substrates provide a suitable environment for important physical, biological, and chemical processes. Because of these processes, wetlands play a vital role in global nutrient and element cycles. 
Cronk &amp; Fennessy (2001) Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduces the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water. Gravity. The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra). Pressure. Climatic and osmotic pressure places physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. 
Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations. Wind and turbulence. Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems. Fire. Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s. Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., "Pinus halepensis") cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems. Soils. Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. 
It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed on and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils, they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by, and feed back into, the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from the geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils. Biogeochemistry and climate. Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplifies and ultimately regulates the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing on understanding global biogeochemistry. The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, during the early-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm. In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. 
This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands, producing significant climate feedbacks and globally altered biogeochemical cycles. History. Early beginnings. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; By ecology, we mean the whole science of the relations of the organism to the environment including, in the broad sense, all the "conditions of existence". Thus, the theory of evolution explains the housekeeping relations of organisms mechanistically as the necessary consequences of effectual causes; and so forms the monistic groundwork of ecology. Ernst Haeckel (1866) [B] Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts with the modern understanding of ecological theory, where varieties are viewed as the real phenomena of interest and as having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature, can be traced to Herodotus (died "c". 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche. 
&lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Nowhere can one see more clearly illustrated what may be called the sensibility of such an organic complex, – expressed by the fact that whatever affects any species belonging to it, must speedily have its influence of some sort upon the whole assemblage. He will thus be made to see the impossibility of studying any form completely, out of relation to the other forms, – the necessity for taking a comprehensive survey of the whole as a condition to a satisfactory understanding of any part. Stephen Forbes (1887) Ernst Haeckel (left) and Eugenius Warming (right), two founders of ecology Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" () was coined by Ernst Haeckel in his book "Generelle Morphologie der Organismen" (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy. Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the "economy or polity of nature" in "The Origin of Species". Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous. From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to "The Origin of Species", there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication "Natural History of Selborne" by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in "The Origin of Species". Evolutionary theory changed the way that researchers approached the ecological sciences. Since 1900. 
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892. In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of "scientific natural history". Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations. The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept.[C] Around the same time, Charles Elton pioneered the concept of food chains in his classical book "Animal Ecology". Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology. In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; This whole chain of poisoning, then, seems to rest on a base of minute plants which must have been the original concentrators. But what of the opposite end of the food chain—the human being who, in probable ignorance of all this sequence of events, has rigged his fishing tackle, caught a string of fish from the waters of Clear Lake, and taken them home to fry for his supper? Rachel Carson (1962) Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. 
The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s. In 1962, marine biologist and ecologist Rachel Carson's book "Silent Spring" helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management. See also. &lt;templatestyles src="Div col/styles.css"/&gt; &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\operatorname{d}N(t)}{\\operatorname{d}t} = bN(t) - dN(t) = (b - d)N(t) = rN(t), " }, { "math_id": 1, "text": "\\frac{\\operatorname{d}N(t)}{\\operatorname{d}t} = rN(t) - \\alpha N(t)^2 = rN(t)\\left(\\frac{K - N(t)}{K}\\right)," }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\mathrm{d}N(t)/\\mathrm{d}t" }, { "math_id": 4, "text": "\\mathrm{d}N(t)/\\mathrm{d}t = 0" }, { "math_id": 5, "text": "r/\\alpha" } ]
https://en.wikipedia.org/wiki?curid=9630
963042
Finitely generated group
In algebra, a finitely generated group is a group "G" that has some finite generating set "S" so that every element of "G" can be written as the combination (under the group operation) of finitely many elements of "S" and of inverses of such elements. By definition, every finite group is finitely generated, since "S" can be taken to be "G" itself. Every infinite finitely generated group must be countable but countable groups need not be finitely generated. The additive group of rational numbers Q is an example of a countable group that is not finitely generated. Finitely generated abelian groups. Every abelian group can be seen as a module over the ring of integers Z, and in a finitely generated abelian group with generators "x"1, ..., "x""n", every group element "x" can be written as a linear combination of these generators, "x" = "α"1⋅"x"1 + "α"2⋅"x"2 + ... + "α""n"⋅"x""n" with integers "α"1, ..., "α""n". Subgroups of a finitely generated abelian group are themselves finitely generated. The fundamental theorem of finitely generated abelian groups states that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of which are unique up to isomorphism. Subgroups. A subgroup of a finitely generated group need not be finitely generated. The commutator subgroup of the free group formula_0 on two generators is an example of a subgroup of a finitely generated group that is not finitely generated. On the other hand, all subgroups of a finitely generated abelian group are finitely generated. A subgroup of finite index in a finitely generated group is always finitely generated, and the Schreier index formula gives a bound on the number of generators required. In 1954, Albert G. Howson showed that the intersection of two finitely generated subgroups of a free group is again finitely generated. Furthermore, if formula_1 and formula_2 are the numbers of generators of the two finitely generated subgroups then their intersection is generated by at most formula_3 generators. This upper bound was then significantly improved by Hanna Neumann to formula_4; see Hanna Neumann conjecture. The lattice of subgroups of a group satisfies the ascending chain condition if and only if all subgroups of the group are finitely generated. A group such that all its subgroups are finitely generated is called Noetherian. A group such that every finitely generated subgroup is finite is called locally finite. Every locally finite group is periodic, i.e., every element has finite order. Conversely, every periodic abelian group is locally finite. Applications. Geometric group theory studies the connections between algebraic properties of finitely generated groups and topological and geometric properties of spaces on which these groups act. Related notions. The word problem for a finitely generated group is the decision problem of whether two words in the generators of the group represent the same element. The word problem for a given finitely generated group is solvable if and only if the group can be embedded in every algebraically closed group. The rank of a group is often defined to be the smallest cardinality of a generating set for the group. By definition, the rank of a finitely generated group is finite. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
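The claim above that the additive group of rational numbers Q is not finitely generated can be illustrated computationally. The following Python snippet is an illustrative sketch (the chosen generators are arbitrary): every integer combination of finitely many rationals has a denominator dividing the least common multiple L of the generators' denominators, so a fraction such as 1/(2L) can never be reached.

from fractions import Fraction
from math import lcm
from random import randint

# An arbitrary candidate finite "generating set" for (Q, +).
gens = [Fraction(3, 4), Fraction(-5, 6), Fraction(7, 10)]
L = lcm(*(g.denominator for g in gens))          # here L = 60

# Spot-check the invariant: any integer combination has a denominator dividing L.
for _ in range(1000):
    combo = sum(randint(-50, 50) * g for g in gens)
    assert L % combo.denominator == 0

# Hence 1/(2L) is not an integer combination of the generators,
# so this finite set cannot generate all of Q.
witness = Fraction(1, 2 * L)
print(f"L = {L}; the element {witness} lies outside the generated subgroup")

The same argument applies to any finite set of rationals, which is why Q, although countable, is not finitely generated.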
[ { "math_id": 0, "text": "F_2" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "2mn - m - n + 1" }, { "math_id": 4, "text": "2(m-1)(n-1) + 1" } ]
https://en.wikipedia.org/wiki?curid=963042
963084
Register-transfer level
Digital circuit design abstraction In digital circuit design, register-transfer level (RTL) is a design abstraction which models a synchronous digital circuit in terms of the flow of digital signals (data) between hardware registers, and the logical operations performed on those signals. Register-transfer-level abstraction is used in hardware description languages (HDLs) like Verilog and VHDL to create high-level representations of a circuit, from which lower-level representations and ultimately actual wiring can be derived. Design at the RTL level is typical practice in modern digital design. Unlike in software compiler design, where the register-transfer level is an intermediate representation at the lowest level, in circuit design the RTL is the usual input that circuit designers operate on. In fact, in circuit synthesis, an intermediate language between the input register-transfer-level representation and the target netlist is sometimes used. Unlike a netlist, such an intermediate language offers constructs such as cells, functions, and multi-bit registers. Examples include FIRRTL and RTLIL. Transaction-level modeling is a higher level of electronic system design. RTL description. A synchronous circuit consists of two kinds of elements: registers (sequential logic) and combinational logic. Registers (usually implemented as D flip-flops) synchronize the circuit's operation to the edges of the clock signal, and are the only elements in the circuit that have memory properties. Combinational logic performs all the logical functions in the circuit and it typically consists of logic gates. For example, a very simple synchronous circuit is shown in the figure. The inverter is connected from the output, Q, of a register to the register's input, D, to create a circuit that changes its state on each rising edge of the clock, clk. In this circuit, the combinational logic consists of the inverter. When designing digital integrated circuits with a hardware description language (HDL), the designs are usually engineered at a higher level of abstraction than transistor level (logic families) or logic gate level. In HDLs the designer declares the registers (which roughly correspond to variables in computer programming languages), and describes the combinational logic by using constructs that are familiar from programming languages such as if-then-else and arithmetic operations. This level is called "register-transfer level". The term refers to the fact that RTL focuses on describing the flow of signals between registers. As an example, the circuit mentioned above can be described in VHDL as follows: D <= not Q; process(clk) begin if rising_edge(clk) then Q <= D; end if; end process; Using an EDA tool for synthesis, this description can usually be directly translated to an equivalent hardware implementation file for an ASIC or an FPGA. The synthesis tool also performs logic optimization. At the register-transfer level, some types of circuits can be recognized. If there is a cyclic path of logic from a register's output to its input (or from a set of registers' outputs to their inputs), the circuit is called a state machine or can be said to be sequential logic. If there are logic paths from one register to another without a cycle, it is called a pipeline. RTL in the circuit design cycle. RTL is used in the logic design phase of the integrated circuit design cycle. An RTL description is usually converted to a gate-level description of the circuit by a logic synthesis tool. 
The synthesis results are then used by placement and routing tools to create a physical layout. Logic simulation tools may use a design's RTL description to verify its correctness. Power estimation techniques for RTL. The most accurate power analysis tools are available at the circuit level, but unfortunately, even with switch- rather than device-level modelling, circuit-level tools have disadvantages: they are either too slow or require too much memory, which prevents them from handling large chips. The majority of these are simulators like SPICE, which designers have used for many years as performance analysis tools. Due to these disadvantages, gate-level power estimation tools have begun to gain some acceptance, and faster, probabilistic techniques have gained a foothold. This comes with a trade-off, however, as the speedup is achieved at the cost of accuracy, especially in the presence of correlated signals. Over the years it has been realized that the biggest wins in low power design cannot come from circuit- and gate-level optimizations, whereas architecture, system, and algorithm optimizations tend to have the largest impact on power consumption. Therefore, tool developers have shifted their attention towards high-level analysis and optimization tools for power. Motivation. It is well known that more significant power reductions are possible if optimizations are made on levels of abstraction, like the architectural and algorithmic level, which are higher than the circuit or gate level. This provides the required motivation for the developers to focus on the development of new architectural-level power analysis tools. This in no way implies that lower-level tools are unimportant. Instead, each layer of tools provides a foundation upon which the next level can be built. The abstractions of the estimation techniques at a lower level can be used on a higher level with slight modifications. Gate Equivalents. This is a technique based on the concept of gate equivalents. The complexity of a chip architecture can be described approximately in terms of gate equivalents, where the gate-equivalent count specifies the average number of reference gates that are required to implement the particular function. The total power required for the particular function is estimated by multiplying the approximated number of gate equivalents by the average power consumed per gate. The reference gate can be any gate, e.g., a 2-input NAND gate. Steps: # Identify the functional blocks such as counters, decoders, multipliers, memories, etc. # Assign a complexity in terms of Gate Equivalents. The number of GEs for each unit type is either taken directly as an input from the user or fed from a library. formula_0 where Etyp is the assumed average energy dissipated by a gate equivalent when active. The activity factor, Aint, denotes the average percentage of gates switching per clock cycle and is allowed to vary from function to function. The capacitive load, CL, is a combination of fan-out loading as well as wiring. An estimate of the average wire length can be used to calculate the wiring capacitance. This is provided by the user and cross-checked by using a derivative of Rent’s Rule. Assumptions: # A single reference gate is taken as the basis for all the power estimates, not taking into consideration different circuit styles, clocking strategies, or layout techniques. 
# The percentage of gates switching per clock cycle, denoted by the activity factor, is assumed to be fixed regardless of the input patterns. # Typical gate switching energy is characterized by a completely random uniform white noise (UWN) distribution of the input data. This implies that the power estimate is the same whether the circuit is idle or at maximum load, because the UWN model ignores how different input distributions affect the power consumption of gates and modules. formula_1 where Cwire denotes the bit line wiring capacitance per unit length and Ccell denotes the loading due to a single cell hanging off the bit line. The clock capacitance is based on the assumption of an H-tree distribution network. Activity is modelled using a UWN model. As can be seen from the equation, the power consumption of each component is related to the number of columns (Ncol) and rows (Nrow) in the memory array. Disadvantages: # The circuit activities are not modelled accurately, as an overall activity factor is assumed for the entire chip and is supplied by the user, so it cannot always be trusted. In fact, activity factors vary throughout the chip, so this approach is inaccurate and prone to error. This leads to the problem that even if the model gives a correct estimate for the total power consumption of the chip, the module-wise power distribution is fairly inaccurate. # The chosen activity factor gives the correct total power, but the breakdown of power into logic, clock, memory, etc. is less accurate. Therefore this tool is not much different from or improved in comparison with CES. Precharacterized Cell Libraries. This technique further customizes the power estimation of various functional blocks by having separate power models for logic, memory, and interconnect, suggesting a power factor approximation (PFA) method for individually characterizing an entire library of functional blocks such as multipliers, adders, etc. instead of a single gate-equivalent model for “logic” blocks. The power over the entire chip is approximated by the expression: formula_2 where Ki is the PFA proportionality constant that characterizes the ith functional element, formula_3 is the measure of hardware complexity, and formula_4 denotes the activation frequency. Example. Gi, denoting the hardware complexity of the multiplier, is related to the square of the input word length, i.e., N², where N is the word length. The activation frequency is the rate at which multiplications are performed by the algorithm, denoted by formula_5, and the PFA constant, formula_6, is extracted empirically from past multiplier designs and shown to be about 15 fW/bit²-Hz for a 1.2 μm technology at 5 V. The resulting power model for the multiplier on the basis of the above assumptions is: formula_7 Weakness: The estimation error (relative to switch-level simulation) for a 16×16 multiplier has been measured, and it is observed that when the dynamic range of the inputs does not fully occupy the word length of the multiplier, the UWN model becomes extremely inaccurate. Granted, good designers attempt to maximize word length utilization. Still, errors in the range of 50–100% are not uncommon. This clearly suggests a flaw in the UWN model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
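As a concrete illustration of the two estimation styles described in the section above, the following Python snippet evaluates the gate-equivalent expression and the PFA multiplier model. It is only a sketch: the block list, gate-equivalent counts, capacitive loads, activity factors, clock rate, and multiplication rate are invented placeholder values, while the PFA constant of 15 fW/bit²-Hz is the figure quoted above for a 1.2 μm technology at 5 V.

# Illustrative sketch of two RTL-level power estimates described above.
# All block parameters below are placeholder assumptions, not real library data.
V_DD  = 5.0        # supply voltage (V)
F_CLK = 25e6       # clock frequency (Hz)
E_TYP = 0.5e-12    # assumed energy per reference-gate transition (J)

# Gate-equivalent method: P = sum_i GE_i * (E_typ + C_L * Vdd^2) * f * A_int_i
blocks = {
    # name:        (GE count, load cap per gate in F, activity factor)
    "counter":     (250,  15e-15, 0.20),
    "decoder":     (120,  10e-15, 0.10),
    "multiplier":  (4000, 20e-15, 0.15),
}
p_ge = sum(ge * (E_TYP + cl * V_DD**2) * F_CLK * a
           for ge, cl, a in blocks.values())
print(f"Gate-equivalent estimate: {p_ge * 1e3:.2f} mW")

# PFA model for a multiplier: P_mult = K_mult * N^2 * f_mult
K_MULT = 15e-15    # 15 fW/bit^2-Hz, the value quoted in the text
N      = 16        # input word length (bits)
F_MULT = 10e6      # multiplications per second (assumed)
p_mult = K_MULT * N**2 * F_MULT
print(f"PFA multiplier estimate:  {p_mult * 1e3:.4f} mW")

Such back-of-the-envelope estimates are exactly what the RTL-level techniques above aim to provide before a gate-level netlist exists.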
[ { "math_id": 0, "text": "\\displaystyle P = \\sum_{i \\in \\text{fns}} \\textit{GE}_i (E_\\text{typ} + C_L^i V_\\text{dd}^2) f A_\\text{int}^i" }, { "math_id": 1, "text": "P_\\text{bitlines} = \\dfrac{N_\\text{col}}{2} \\cdot (L_\\text{col} C_\\text{wire} + N_\\text{row} C_\\text{cell}) V_\\text{dd} V_\\text{swing}" }, { "math_id": 2, "text": "\\displaystyle P = \\sum_{i \\in \\text{all blocks}} K_i G_i f_i" }, { "math_id": 3, "text": "G_i" }, { "math_id": 4, "text": "f_i" }, { "math_id": 5, "text": "f_{mult}" }, { "math_id": 6, "text": "K_{mult}" }, { "math_id": 7, "text": "\\displaystyle P_\\text{mult} = K_\\text{mult} N^2 f_\\text{mult}" } ]
https://en.wikipedia.org/wiki?curid=963084
9632150
Einstein–de Haas effect
Consequence of the conservation of angular momentum The Einstein–de Haas effect is a physical phenomenon in which a change in the magnetic moment of a free body causes this body to rotate. The effect is a consequence of the conservation of angular momentum. It is strong enough to be observable in ferromagnetic materials. The experimental observation and accurate measurement of the effect demonstrated that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. These measurements also allow the separation of the two contributions to the magnetization: that associated with the spin and that associated with the orbital motion of the electrons. The effect also demonstrated the close relation between the notions of angular momentum in classical and in quantum physics. The effect was predicted by O. W. Richardson in 1908. It is named after Albert Einstein and Wander Johannes de Haas, who published two papers in 1915 claiming the first experimental observation of the effect. Description. The orbital motion of an electron (or any charged particle) around a certain axis produces a magnetic dipole with the magnetic moment of formula_0 where formula_1 and formula_2 are the charge and the mass of the particle, while formula_3 is the angular momentum of the motion (SI units are used). In contrast, the intrinsic magnetic moment of the electron is related to its intrinsic angular momentum (spin) as formula_4 (see Landé "g"-factor and anomalous magnetic dipole moment). If a number of electrons in a unit volume of the material have a total orbital angular momentum of formula_5 with respect to a certain axis, their magnetic moments would produce the magnetization of formula_6. For the spin contribution the relation would be formula_7. A change in magnetization, formula_8, implies a proportional change in the angular momentum, formula_9, of the electrons involved. Provided that there is no external torque along the magnetization axis applied to the body in the process, the rest of the body (practically all its mass) should acquire an angular momentum formula_10 due to the law of conservation of angular momentum. Experimental setup. The experiments involve a cylinder of a ferromagnetic material suspended with the aid of a thin string inside a cylindrical coil, which is used to provide an axial magnetic field that magnetizes the cylinder along its axis. A change in the electric current in the coil changes the magnetic field the coil produces, which changes the magnetization of the ferromagnetic cylinder and, due to the effect described, its angular momentum. A change in the angular momentum causes a change in the rotational speed of the cylinder, monitored using optical devices. The external field formula_11 interacting with a magnetic dipole formula_12 cannot produce any torque (formula_13) along the field direction. In these experiments the magnetization happens along the direction of the field produced by the magnetizing coil; therefore, in the absence of other external fields, the angular momentum along this axis must be conserved. In spite of the simplicity of such a layout, the experiments are not easy. The magnetization can be measured accurately with the help of a pickup coil around the cylinder, but the associated change in the angular momentum is small. Furthermore, ambient magnetic fields, such as the Earth's field, can exert a mechanical impact on the magnetized cylinder that is 10⁷–10⁸ times larger. 
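An order-of-magnitude estimate shows just how small the mechanical response is. The following Python snippet is a rough sketch under assumed values: the cylinder geometry, the saturation magnetization of iron, and a g-factor of 2 are illustrative inputs rather than parameters of the historical experiments. Reversing the magnetization transfers to the body an angular momentum equal to twice the electrons' total angular momentum Ms·V·2me/(g·e), and the cylinder's angular velocity changes by that amount divided by its moment of inertia.

import math

# Rough order-of-magnitude sketch of the Einstein-de Haas effect for an iron
# cylinder whose magnetization is reversed. All numbers below are assumed,
# illustrative values, not parameters of the historical experiments.
e   = 1.602e-19      # elementary charge (C)
m_e = 9.109e-31      # electron mass (kg)
g   = 2.0            # assumed g-factor (spin-dominated magnetization)

M_s  = 1.7e6         # approximate saturation magnetization of iron (A/m)
rho  = 7.87e3        # density of iron (kg/m^3)
r, h = 1e-3, 5e-2    # cylinder radius and length (m), assumed geometry

V = math.pi * r**2 * h
mass = rho * V
I = 0.5 * mass * r**2                        # moment of inertia about the axis

L_electrons = M_s * V * 2 * m_e / (g * e)    # electrons' angular momentum at saturation
dL = 2 * L_electrons                         # change upon full magnetization reversal
d_omega = dL / I                             # angular velocity picked up by the body

print(f"angular momentum transferred: {dL:.2e} kg m^2/s")
print(f"change in angular velocity:   {d_omega:.2e} rad/s")

With these assumptions the cylinder picks up only a few milliradians per second of rotation, which is why, as noted above, torques from stray ambient fields can easily swamp the effect.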
The later accurate experiments were done in a specially constructed demagnetized environment with active compensation of the ambient fields. The measurement methods typically use the properties of the torsion pendulum, providing periodic current to the magnetization coil at frequencies close to the pendulum's resonance. The experiments directly measure the ratio: formula_14 and derive the dimensionless gyromagnetic factor formula_15 of the material from the definition: formula_16. The quantity formula_17 is called the gyromagnetic ratio. History. The expected effect and a possible experimental approach were first described by Owen Willans Richardson in a paper published in 1908. The electron spin was discovered in 1925; before that, only the orbital motion of electrons was considered. Richardson derived the expected relation formula_18. The paper mentioned the ongoing attempts to observe the effect at Princeton University. In that historical context the idea of the orbital motion of electrons in atoms contradicted classical physics. This contradiction was addressed in the Bohr model in 1913, and later was removed with the development of quantum mechanics. Samuel Jackson Barnett, motivated by Richardson's paper, realized that the opposite effect should also happen – a change in rotation should cause a magnetization (the Barnett effect). He published the idea in 1909, after which he pursued experimental studies of the effect. Einstein and de Haas published two papers in April 1915 containing a description of the expected effect and the experimental results. In the paper "Experimental proof of the existence of Ampere's molecular currents" they described in detail the experimental apparatus and the measurements performed. Their result for the ratio of the angular momentum of the sample to its magnetic moment (the authors called it formula_19) was very close (within 3%) to the expected value of formula_20. It was later realized that their result, with the quoted uncertainty of 10%, was not consistent with the correct value, which is close to formula_21. Apparently, the authors underestimated the experimental uncertainties. Barnett reported the results of his measurements at several scientific conferences in 1914. In October 1915 he published the first observation of the Barnett effect in a paper titled "Magnetization by Rotation". His result for formula_19 was close to the right value of formula_21, which was unexpected at that time. In 1918 John Quincy Stewart published the results of his measurements confirming Barnett's result. In his paper he called the phenomenon the 'Richardson effect'. The following experiments demonstrated that the gyromagnetic ratio for iron is indeed close to formula_22 rather than formula_23. This phenomenon, dubbed the "gyromagnetic anomaly", was finally explained after the discovery of the spin and the introduction of the Dirac equation in 1928. The experimental equipment was later donated by Geertruida de Haas-Lorentz, wife of de Haas and daughter of Lorentz, to the Ampère Museum in Lyon, France, in 1961. It was lost and was only rediscovered in 2023. Literature about the effect and its discovery. Detailed accounts of the historical context and explanations of the effect can be found in the literature. Commenting on the papers by Einstein, Calaprice in "The Einstein Almanac" writes: 52. "Experimental Proof of Ampère's Molecular Currents" (Experimenteller Nachweis der Ampereschen Molekularströme) (with Wander J. de Haas). 
"Deutsche Physikalische Gesellschaft, Verhandlungen" 17 (1915): 152–170. Considering [André-Marie] Ampère's hypothesis that magnetism is caused by the microscopic circular motions of electric charges, the authors proposed a design to test [Hendrik] Lorentz's theory that the rotating particles are electrons. The aim of the experiment was to measure the torque generated by a reversal of the magnetisation of an iron cylinder. Calaprice further writes: 53. "Experimental Proof of the Existence of Ampère's Molecular Currents" (with Wander J. de Haas) (in English). "Koninklijke Akademie van Wetenschappen te Amsterdam, Proceedings" 18 (1915–16). Einstein wrote three papers with Wander J. de Haas on experimental work they did together on Ampère's molecular currents, known as the Einstein–De Haas effect. He immediately wrote a correction to paper 52 (above) when Dutch physicist H. A. Lorentz pointed out an error. In addition to the two papers above [that is 52 and 53] Einstein and de Haas cowrote a "Comment" on paper 53 later in the year for the same journal. This topic was only indirectly related to Einstein's interest in physics, but, as he wrote to his friend Michele Besso, "In my old age I am developing a passion for experimentation." The second paper by Einstein and de Haas was communicated to the "Proceedings of the Royal Netherlands Academy of Arts and Sciences" by Hendrik Lorentz who was the father-in-law of de Haas. According to Viktor Frenkel, Einstein wrote in a report to the German Physical Society: "In the past three months I have performed experiments jointly with de Haas–Lorentz in the Imperial Physicotechnical Institute that have firmly established the existence of Ampère molecular currents." Probably, he attributed the hyphenated name to de Haas, not meaning both de Haas and H. A. Lorentz. Later measurements and applications. The effect was used to measure the properties of various ferromagnetic elements and alloys. The key to more accurate measurements was better magnetic shielding, while the methods were essentially similar to those of the first experiments. The experiments measure the value of the "g"-factor formula_24 (here we use the projections of the pseudovectors formula_25 and formula_26 onto the magnetization axis and omit the formula_27 sign). The magnetization and the angular momentum consist of the contributions from the spin and the orbital angular momentum: formula_28, formula_29. Using the known relations formula_30, and formula_31, where formula_32 is the g-factor for the anomalous magnetic moment of the electron, one can derive the relative spin contribution to magnetization as: formula_33. For pure iron the measured value is formula_34, and formula_35. Therefore, in pure iron 96% of the magnetization is provided by the polarization of the electrons' spins, while the remaining 4% is provided by the polarization of their orbital angular momenta. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\boldsymbol{\\mu} = e/2m \\cdot \\mathbf{j}," }, { "math_id": 1, "text": "e" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "\\mathbf{j}" }, { "math_id": 4, "text": "\\boldsymbol{\\mu} \\approx{} 2\\cdot{}e/2m \\cdot \\mathbf{j}" }, { "math_id": 5, "text": "\\mathbf{J}_\\text{o}" }, { "math_id": 6, "text": "\\mathbf{M}_\\text{o} = e/2m \\cdot \\mathbf{J}_\\text{o}" }, { "math_id": 7, "text": "\\mathbf{M}_\\text{s} \\approx e/m \\cdot \\mathbf{J}_\\text{s}" }, { "math_id": 8, "text": "\\Delta\\mathbf{M}," }, { "math_id": 9, "text": "\\Delta\\mathbf{J}\\propto{}\\Delta\\mathbf{M}," }, { "math_id": 10, "text": "-\\Delta\\mathbf{J}" }, { "math_id": 11, "text": "\\mathbf{B}" }, { "math_id": 12, "text": "\\boldsymbol{\\mu}" }, { "math_id": 13, "text": "\\boldsymbol{\\tau} = \\boldsymbol{\\mu} \\times \\mathbf{B}" }, { "math_id": 14, "text": "\\lambda =\\Delta\\mathbf{J}/\\Delta\\mathbf{M}" }, { "math_id": 15, "text": "g'" }, { "math_id": 16, "text": "g' \\equiv{} \\frac{2m}{e}\\frac{1}{\\lambda}" }, { "math_id": 17, "text": "\\gamma \\equiv \\frac{1}{\\lambda} \\equiv \\frac{e}{2m}g'" }, { "math_id": 18, "text": "\\mathbf{M} = e/2m \\cdot \\mathbf{J}" }, { "math_id": 19, "text": "\\lambda" }, { "math_id": 20, "text": "2m/e" }, { "math_id": 21, "text": "m/e" }, { "math_id": 22, "text": "e/m" }, { "math_id": 23, "text": "e/2m" }, { "math_id": 24, "text": "g' =\\frac{2m}{e}\\frac{M}{J}" }, { "math_id": 25, "text": "\\mathbf{M}" }, { "math_id": 26, "text": "\\mathbf{J}" }, { "math_id": 27, "text": "\\Delta" }, { "math_id": 28, "text": "M=M_\\text{s}+M_\\text{o}" }, { "math_id": 29, "text": "J=J_\\text{s}+J_\\text{o}" }, { "math_id": 30, "text": "M_\\text{o}=\\frac{e}{2m}J_\\text{o}" }, { "math_id": 31, "text": "M_\\text{s}=g\\cdot{}\\frac{e}{2m}J_\\text{s}" }, { "math_id": 32, "text": "g\\approx{}2.002" }, { "math_id": 33, "text": "\\frac{M_\\text{s}}{M}=\\frac{(g'-1)g}{(g-1)g'}" }, { "math_id": 34, "text": "g'=1.919\\pm{}0.002" }, { "math_id": 35, "text": "\\frac{M_\\text{s}}{M}\\approx{}0.96" } ]
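As a quick arithmetic check of the numbers quoted in the article for pure iron, the relative spin contribution formula_33 can be evaluated directly from the measured formula_34 and the electron g-factor formula_32. A minimal Python sketch (an added illustration, not part of the article):

```python
# Spin fraction of the magnetization of pure iron from the measured g'-factor,
# using M_s/M = (g' - 1) g / ((g - 1) g') as given in the article.

g_prime = 1.919   # measured g' for pure iron (value quoted in the article)
g = 2.002         # g-factor of the electron's anomalous magnetic moment

spin_fraction = (g_prime - 1.0) * g / ((g - 1.0) * g_prime)
print(f"M_s / M = {spin_fraction:.3f}")   # ~0.957, i.e. roughly 96% of the magnetization comes from spin
```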
https://en.wikipedia.org/wiki?curid=9632150
9632204
De Vaucouleurs's law
de Vaucouleurs's law, also known as the de Vaucouleurs profile or de Vaucouleurs model, describes how the surface brightness formula_0 of an elliptical galaxy varies as a function of apparent distance formula_1 from the center of the galaxy: formula_2 By defining "Re" as the radius of the isophote containing half of the total luminosity of the galaxy, the half-light radius, the de Vaucouleurs profile may be expressed as: formula_3 or formula_4 where "Ie" is the surface brightness at "Re". This can be confirmed by noting that formula_5 The de Vaucouleurs model is a special case of Sérsic's model, with a Sérsic index of "n" = 4. A number of (internal) density profiles that approximately reproduce de Vaucouleurs's law after projection onto the plane of the sky include Jaffe's model and Dehnen's model. The model is named after Gérard de Vaucouleurs, who first formulated it in 1948. Although an empirical model rather than a law of physics, it was so entrenched in astronomy during the 20th century that it was referred to as a "law". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
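Since the profile is given in closed form, it is easy to evaluate numerically and to verify the defining property of "Re". The sketch below (Python; the values of "Ie" and "Re" are arbitrary illustrative choices, not data from the article) integrates the profile and checks that the radius "Re" encloses about half of the total light.

```python
import numpy as np

# de Vaucouleurs profile I(R) = I_e * exp(-7.669 * ((R/R_e)**0.25 - 1)); check that
# R_e encloses roughly half of the total luminosity 2*pi*integral(I(r)*r dr).

I_e, R_e = 1.0, 1.0     # arbitrary normalization and half-light radius

def de_vaucouleurs(R):
    return I_e * np.exp(-7.669 * ((R / R_e) ** 0.25 - 1.0))

def enclosed_light(R_max, step=1e-3):
    r = np.arange(0.0, R_max + step, step)
    f = 2.0 * np.pi * r * de_vaucouleurs(r)
    return np.sum(0.5 * (f[1:] + f[:-1])) * step      # trapezoidal rule

fraction = enclosed_light(R_e) / enclosed_light(2000.0 * R_e)   # 2000*R_e stands in for infinity
print(f"fraction of light inside R_e ≈ {fraction:.3f}")          # ≈ 0.5
```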
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "\n\\ln I(R) = \\ln I_{0} - k R^{1/4}.\n" }, { "math_id": 3, "text": "\n\\ln I(R) = \\ln I_{e} + 7.669 \\left[ 1 - \\left( \\frac{R}{R_{e}} \\right)^{1/4} \\right]\n" }, { "math_id": 4, "text": "\nI(R) = I_{e} e^{-7.669 \\left[ \\left(\\frac{R}{R_{e}}\\right)^{1/4} - 1 \\right]}\n" }, { "math_id": 5, "text": "\n\\int^{R_e}_0 I(r)2\\pi r \\, dr = \\frac{1}{2} \\int^{\\infty}_0 I(r)2\\pi r \\, dr .\n" } ]
https://en.wikipedia.org/wiki?curid=9632204
9632448
Fermi coordinates
Local coordinates that are adapted to a geodesic In the mathematical theory of Riemannian geometry, there are two uses of the term Fermi coordinates. In one use they are local coordinates that are adapted to a geodesic. In a second, more general one, they are local coordinates that are adapted to any world line, even one that is not a geodesic. Take a future-directed timelike curve formula_0, formula_1 being the proper time along formula_2 in the spacetime formula_3. Assume that formula_4 is the initial point of formula_2. Fermi coordinates adapted to formula_2 are constructed as follows. Consider an orthonormal basis of formula_5 with formula_6 parallel to formula_7. Transport the basis formula_8 along formula_9 making use of Fermi–Walker's transport. The basis formula_10 at each point formula_9 is still orthonormal with formula_11 parallel to formula_7 and is non-rotated (in a precise sense related to the decomposition of Lorentz transformations into pure transformations and rotations) with respect to the initial basis; this is the physical meaning of Fermi–Walker's transport. Finally, construct a coordinate system in an open tube formula_12, a neighbourhood of formula_2, by emitting all spacelike geodesics through formula_9 with initial tangent vector formula_13, for every formula_1. A point formula_14 has coordinates formula_15 where formula_16 is the unique vector whose associated geodesic reaches formula_17 at the value formula_18 of its parameter, and formula_19 is the unique time along formula_2 for which this geodesic reaching formula_17 exists. If formula_2 itself is a geodesic, then Fermi–Walker's transport becomes the standard parallel transport and Fermi's coordinates become standard Riemannian coordinates adapted to formula_2. In this case, using these coordinates in a neighbourhood formula_12 of formula_2, we have formula_20; that is, all Christoffel symbols vanish exactly on formula_2. This property does not hold for Fermi's coordinates, however, when formula_2 is not a geodesic. Such coordinates are called Fermi coordinates and are named after the Italian physicist Enrico Fermi. The above properties are only valid on the geodesic. Fermi coordinates adapted to a null geodesic are provided by Mattias Blau, Denis Frank, and Sebastian Weiss. Notice that, if all Christoffel symbols vanish near formula_21, then the manifold is flat near formula_21. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gamma=\\gamma(\\tau)" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "p=\\gamma(0)" }, { "math_id": 5, "text": "TM" }, { "math_id": 6, "text": "e_0" }, { "math_id": 7, "text": "\\dot\\gamma" }, { "math_id": 8, "text": "\\{e_a\\}_{a=0,1,2,3}" }, { "math_id": 9, "text": "\\gamma(\\tau)" }, { "math_id": 10, "text": "\\{e_a(\\tau)\\}_{a=0,1,2,3}" }, { "math_id": 11, "text": "e_0(\\tau)" }, { "math_id": 12, "text": "T" }, { "math_id": 13, "text": "\\sum_{i=1}^3 v^i e_i(\\tau)" }, { "math_id": 14, "text": " q\\in T" }, { "math_id": 15, "text": " \\tau(q),v^1(q),v^2(q),v^3(q)" }, { "math_id": 16, "text": "\\sum_{i=1}^3 v^i e_i(\\tau(q))" }, { "math_id": 17, "text": "q" }, { "math_id": 18, "text": "s=1" }, { "math_id": 19, "text": "\\tau(q)" }, { "math_id": 20, "text": "\\Gamma^a_{bc}=0" }, { "math_id": 21, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=9632448
9633
E (mathematical constant)
2.71828..., base of natural logarithms Constant value used in mathematics The number e is a mathematical constant approximately equal to 2.71828 that can be characterized in many ways. It is the base of the natural logarithm function. It is the limit of formula_0 as n tends to infinity, an expression that arises in the computation of compound interest. It is the value at 1 of the (natural) exponential function, commonly denoted formula_1 It is also the sum of the infinite series formula_2 There are various other characterizations; see the sections on definitions and representations below. The number e is sometimes called Euler's number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler's constant, a different constant typically denoted formula_3. Alternatively, e can be called Napier's constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest. The number e is of great importance in mathematics, alongside 0, 1, π, and i. All five appear in one formulation of Euler's identity formula_4 and play important and recurring roles across mathematics. Like the constant π, e is irrational, meaning that it cannot be represented as a ratio of integers, and moreover it is transcendental, meaning that it is not a root of any non-zero polynomial with rational coefficients. To 30 decimal places, the value of e is: &lt;templatestyles src="Block indent/styles.css"/&gt; 2.718281828459045235360287471352... Definitions. The number e is the limit formula_5 an expression that arises in the computation of compound interest. It is the sum of the infinite series formula_2 It is the unique positive number a such that the graph of the function "y" = "a"^"x" has a slope of 1 at "x" = 0. One has formula_6 where formula_7 is the (natural) exponential function, the unique function that equals its own derivative and satisfies the equation formula_8 Since the exponential function is commonly denoted as formula_9 one has also formula_10 The logarithm to base b can be defined as the inverse function of the function formula_11 Since formula_12 one has formula_13 The equation formula_14 therefore implies that e is the base of the natural logarithm. The number e can also be characterized in terms of an integral: formula_15 For other characterizations, see below. History. The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base formula_16. It is assumed that the table was written by William Oughtred. In 1661, Christiaan Huygens studied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm of e, but he did not recognize e itself as a quantity of interest. The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant e occurs as the limit formula_17 where n represents the number of intervals in a year on which the compound interest is evaluated (for example, formula_18 for monthly compounding). The first symbol used for this constant was the letter b, by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691. Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. 
The first appearance of e in a printed publication was in Euler's "Mechanica" (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard. Euler proved that e is the sum of the infinite series formula_19 where "n"! is the factorial of n. The equivalence of the two characterizations using the limit and the infinite series can be proved via the binomial theorem. Applications. Compound interest. Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;An account starts with $1.00 and pays 100 percent interest per year. If the interest is credited once, at the end of the year, the value of the account at year-end will be $2.00. What happens if the interest is computed and credited more frequently during the year? If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5^2 = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25^4 = $2.44140625, and compounding monthly yields $1.00 × (1 + 1/12)^12 = $2.613035... If there are n compounding intervals, the interest for each interval will be 100%/"n" and the value at the end of the year will be $1.00 × (1 + 1/"n")^"n". Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals. Compounding weekly ("n" = 52) yields $2.692596..., while compounding daily ("n" = 365) yields $2.714567... (approximately two cents more). The limit as n grows large is the number that came to be known as e. That is, with "continuous" compounding, the account value will reach $2.718281828... More generally, an account that starts at $1 and offers an annual interest rate of R will, after t years, yield "e"^"Rt" dollars with continuous compounding. Here, R is the decimal equivalent of the rate of interest expressed as a "percentage", so for 5% interest, "R" = 5/100 = 0.05. Bernoulli trials. The number e itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in n and plays it n times. As n increases, the probability that the gambler will lose all n bets approaches 1/"e". For "n" = 20, this is already approximately 1/2.789509... This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in n chance of winning. Playing n times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. The probability of winning k times out of n trials is: formula_20 In particular, the probability of winning zero times ("k" = 0) is formula_21 The limit of the above expression, as n tends to infinity, is precisely 1/"e". 
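Both limits described in this section are easy to examine numerically. The short Python sketch below (an illustration added here, not part of the article) tabulates the compounded value (1 + 1/"n")^"n" and the losing probability (1 − 1/"n")^"n" for growing n.

```python
import math

# (1 + 1/n)^n approaches e (compound interest), while (1 - 1/n)^n approaches 1/e
# (probability of losing all n one-in-n bets).

for n in (12, 20, 52, 365, 10_000, 1_000_000):
    compound = (1 + 1 / n) ** n
    lose_all = (1 - 1 / n) ** n
    print(f"n = {n:>9}: (1+1/n)^n = {compound:.9f}   (1-1/n)^n = {lose_all:.9f}")

print(f"      e = {math.e:.9f}        1/e = {1 / math.e:.9f}")
# The n = 12 and n = 365 rows reproduce the $2.613035... and $2.714567... figures
# quoted above, and the n = 20 row gives 0.358486 ≈ 1/2.789509.
```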
Exponential growth and decay. Exponential growth is a process that increases a quantity over time at an ever-increasing rate. It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. The law of exponential growth can be written in different but mathematically equivalent forms, by using a different base, for which the number e is a common and convenient choice: formula_22 Here, formula_23 denotes the initial value of the quantity x, k is the growth constant, and formula_24 is the time it takes the quantity to grow by a factor of e. Standard normal distribution. The normal distribution with zero mean and unit standard deviation is known as the "standard normal distribution", given by the probability density function formula_25 The constraint of unit standard deviation (and thus also unit variance) results in the 1/2 in the exponent, and the constraint of unit total area under the curve formula_26 results in the factor formula_27. This function is symmetric around "x" = 0, where it attains its maximum value formula_27, and has inflection points at "x" = ±1. Derangements. Another application of e, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the "hat check problem": n guests are invited to a party and, at the door, the guests all check their hats with the butler, who in turn places the hats into n boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so puts the hats into boxes selected at random. The problem of de Montmort is to find the probability that "none" of the hats gets put into the right box. This probability, denoted by formula_28, is: formula_29 As n tends to infinity, "p""n" approaches 1/"e". Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats are in the right box is "n"!/"e", rounded to the nearest integer, for every positive n. Optimal planning problems. The maximum value of formula_30 occurs at formula_31. Equivalently, for any value of the base "b" &gt; 1, it is the case that the maximum value of formula_32 occurs at formula_31 (Steiner's problem, discussed below). This is useful in the problem of a stick of length L that is broken into n equal parts. The value of n that maximizes the product of the lengths is then either formula_33 or formula_34 The quantity formula_32 is also a measure of information gleaned from an event occurring with probability formula_35 (approximately formula_36 when formula_37), so that essentially the same optimal division appears in optimal planning problems like the secretary problem. Asymptotics. The number e occurs naturally in connection with many problems involving asymptotics. An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π appear: formula_38 As a consequence, formula_39 Properties. Calculus. The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms. A general exponential function has a derivative, given by a limit: formula_40 The parenthesized limit on the right is independent of the variable "x". Its value turns out to be the logarithm of a to base e. Thus, when the value of a is set to e, this limit is equal to 1, and so one arrives at the following simple identity: formula_41 Consequently, the exponential function with base e is particularly suited to doing calculus. 
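The parenthesized limit can be checked numerically for a few bases; the Python fragment below (illustrative, not from the article) shows that ("a"^"h" − 1)/"h" approaches ln "a", and equals 1 precisely when "a" = "e".

```python
import math

# (a**h - 1)/h tends to ln(a) as h -> 0; the limit equals 1 exactly when a = e.
h = 1e-8
for a in (2.0, math.e, 10.0):
    approx = (a ** h - 1.0) / h
    print(f"a = {a:.6f}:  (a^h - 1)/h ≈ {approx:.6f}   ln(a) = {math.log(a):.6f}")
```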
Choosing e (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler. Another motivation comes from considering the derivative of the base-a logarithm (i.e., log"a" "x"), for "x" &gt; 0: formula_42 where the substitution "u" = "h"/"x" was made. The base-a logarithm of e is 1 if a equals e. So symbolically, formula_43 The logarithm with this special base is called the natural logarithm, and is usually denoted as ln; it behaves well under differentiation since there is no undetermined limit to carry through the calculations. Thus, there are two ways of selecting such special numbers a. One way is to set the derivative of the exponential function "a"^"x" equal to "a"^"x", and solve for a. The other way is to set the derivative of the base-a logarithm to 1/"x" and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually "the same": the number e. The Taylor series for the exponential function can be deduced from the facts that the exponential function is its own derivative and that it equals 1 when evaluated at 0: formula_44 Setting formula_45 recovers the definition of e as the sum of an infinite series. The natural logarithm function can be defined as the integral from 1 to formula_46 of formula_47, and the exponential function can then be defined as the inverse function of the natural logarithm. The number e is the value of the exponential function evaluated at formula_45, or equivalently, the number whose natural logarithm is 1. It follows that e is the unique positive real number such that formula_48 Because "e"^"x" is the unique function (up to multiplication by a constant K) that is equal to its own derivative, formula_49 it is therefore its own antiderivative as well: formula_50 Equivalently, the family of functions formula_51 where K is any real or complex number, is the full solution to the differential equation formula_52 Inequalities. The number e is the unique real number such that formula_53 for all positive x. Also, we have the inequality formula_54 for all real x, with equality if and only if "x" = 0. Furthermore, e is the unique base of the exponential for which the inequality "a"^"x" ≥ "x" + 1 holds for all x. This is a limiting case of Bernoulli's inequality. Exponential-like functions. Steiner's problem asks to find the global maximum for the function formula_55 This maximum occurs precisely at "x" = "e". (One can check that the derivative of ln "f"("x") is zero only for this value of x.) Similarly, "x" = 1/"e" is where the global minimum occurs for the function formula_56 The infinite tetration formula_57 or formula_58 converges if and only if "x" ∈ [(1/"e")^"e", "e"^(1/"e")] ≈ [0.06599, 1.4447], shown by a theorem of Leonhard Euler. 
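Steiner's problem from this section is simple to explore numerically; the following Python fragment (an added illustration) evaluates "x"^(1/"x") near "x" = "e" and shows that the largest value occurs there.

```python
import math

# f(x) = x**(1/x) attains its global maximum at x = e, where f(e) = e**(1/e) ≈ 1.444668.
for x in (2.0, 2.5, math.e, 3.0, 3.5):
    print(f"x = {x:.5f}:  x^(1/x) = {x ** (1.0 / x):.6f}")
```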
Number theory. The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion does not terminate. (See also Fourier's proof that e is irrational.) Furthermore, by the Lindemann–Weierstrass theorem, e is transcendental, meaning that it is not a solution of any non-zero polynomial equation with rational coefficients. It was the first number to be proved transcendental without having been specifically constructed for this purpose (compare with Liouville number); the proof was given by Charles Hermite in 1873. It is conjectured that e is normal, meaning that when e is expressed in any base the possible digits in that base are uniformly distributed (occur with equal probability in any sequence of given length). In algebraic geometry, a "period" is a number that can be expressed as an integral of an algebraic function over an algebraic domain. The constant π is a period, but it is conjectured that e is not. Complex numbers. The exponential function "e"^"x" may be written as a Taylor series formula_59 Because this series is convergent for every complex value of x, it is commonly used to extend the definition of "e"^"x" to the complex numbers. This, together with the Taylor series for sin "x" and cos "x", allows one to derive Euler's formula: formula_60 which holds for every complex x. The special case with "x" = π is Euler's identity: formula_61 which is considered to be an exemplar of mathematical beauty as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that π is transcendental, which implies the impossibility of squaring the circle. Moreover, the identity implies that, in the principal branch of the logarithm, formula_62 Furthermore, using the laws for exponentiation, formula_63 for any integer n, which is de Moivre's formula. The expressions of cos "x" and sin "x" in terms of the exponential function can be deduced from the Taylor series: formula_64 The expression formula_65 is sometimes abbreviated as cis("x"). Representations. The number e can be represented in a variety of ways: as an infinite series, an infinite product, a continued fraction, or a limit of a sequence. In addition to the limit and the series given above, there is also the continued fraction formula_66 which written out looks like formula_67 The following infinite product evaluates to e: formula_68 Many other series, sequence, continued fraction, and infinite product representations of e have been proved. Stochastic representations. In addition to exact analytical expressions for the representation of e, there are stochastic techniques for estimating e. One such approach begins with an infinite sequence of independent random variables "X"1, "X"2, ..., drawn from the uniform distribution on [0, 1]. Let V be the least number n such that the sum of the first n observations exceeds 1: formula_69 Then the expected value of V is e: E("V") = "e". Known digits. The number of known digits of e has increased substantially in recent decades. This is due both to the increased performance of computers and to algorithmic improvements. Since around 2010, the proliferation of modern high-speed desktop computers has made it feasible for amateurs to compute trillions of digits of e within acceptable amounts of time. On December 5, 2020, a record-setting calculation was made, giving e to 31,415,926,535,897 (approximately π×10^13) digits. Computing the digits. One way to compute the digits of e is with the series formula_70 A faster method involves two recursive functions formula_71 and formula_72. The functions are defined as formula_73 The expression formula_74 produces the nth partial sum of the series above. This method uses binary splitting to compute e with fewer single-digit arithmetic operations and thus reduced bit complexity. Combining this with fast Fourier transform-based methods of multiplying integers makes computing the digits very fast. 
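The binary-splitting recursion just described is short to implement with exact integer arithmetic. The Python sketch below is an illustration added here (the function names, the choice of "n" = 40 terms and the 30-digit output are arbitrary choices for the example, not prescribed by the article):

```python
# Binary splitting for e = sum 1/k!: p(a, b) and q(a, b) satisfy
# 1 + p(0, n)/q(0, n) = n-th partial sum of the series.

def pq(a, b):
    if b == a + 1:
        return 1, b                      # base case: p = 1, q = b
    m = (a + b) // 2
    p_am, q_am = pq(a, m)
    p_mb, q_mb = pq(m, b)
    return p_am * q_mb + p_mb, q_am * q_mb

def e_digits(prec=30, n=40):
    # n = 40 terms is ample for 30 digits, since 40! is far larger than 10**30.
    p, q = pq(0, n)
    scaled = (q + p) * 10 ** prec // q   # floor of (1 + p/q) * 10**prec
    s = str(scaled)
    return s[0] + "." + s[1:]

print(e_digits())   # 2.718281828459045235360287471352
```

For serious digit counts the same recursion is combined with fast integer multiplication, as noted above.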
In computer culture. During the emergence of internet culture, individuals and organizations sometimes paid homage to the number e. In an early example, the computer scientist Donald Knuth let the version numbers of his program Metafont approach e. The versions are 2, 2.7, 2.71, 2.718, and so forth. In another instance, in the IPO filing for Google in 2004, rather than a typical round-number amount of money, the company announced its intention to raise 2,718,281,828 USD, which is e billion dollars rounded to the nearest dollar. Google was also responsible for a billboard that appeared in the heart of Silicon Valley, and later in Cambridge, Massachusetts; Seattle, Washington; and Austin, Texas. It read "{first 10-digit prime found in consecutive digits of e}.com". The first 10-digit prime in e is 7427466391, which starts at the 99th digit. Solving this problem and visiting the advertised (now defunct) website led to an even more difficult problem to solve, which consisted of finding the fifth term in the sequence 7182818284, 8182845904, 8747135266, 7427466391. It turned out that the sequence consisted of 10-digit numbers found in consecutive digits of e whose digits summed to 49. The fifth term in the sequence is 5966290435, which starts at the 127th digit. Solving this second problem finally led to a Google Labs webpage where the visitor was invited to submit a résumé. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(1+1/n)^n" }, { "math_id": 1, "text": "e^x." }, { "math_id": 2, "text": "e = \\sum\\limits_{n = 0}^{\\infty} \\frac{1}{n!} = 1 + \\frac{1}{1} + \\frac{1}{1\\cdot 2} + \\frac{1}{1\\cdot 2\\cdot 3} + \\cdots." }, { "math_id": 3, "text": "\\gamma" }, { "math_id": 4, "text": "e^{i\\pi}+1=0" }, { "math_id": 5, "text": "\\lim_{n\\to \\infty}\\left(1+\\frac 1n\\right)^n," }, { "math_id": 6, "text": "e=\\exp(1)," }, { "math_id": 7, "text": "\\exp" }, { "math_id": 8, "text": "\\exp(0)=1." }, { "math_id": 9, "text": "x\\mapsto e^x," }, { "math_id": 10, "text": "e=e^1." }, { "math_id": 11, "text": "x\\mapsto b^x." }, { "math_id": 12, "text": "b=b^1," }, { "math_id": 13, "text": "\\log_b b= 1." }, { "math_id": 14, "text": "e=e^1" }, { "math_id": 15, "text": "\\int_1^e \\frac {dx}x =1." }, { "math_id": 16, "text": "e" }, { "math_id": 17, "text": "\\lim_{n\\to \\infty} \\left( 1 + \\frac{1}{n} \\right)^n," }, { "math_id": 18, "text": "n=12" }, { "math_id": 19, "text": "e = \\sum_{n = 0}^\\infty \\frac{1}{n!} = \\frac{1}{0!} + \\frac{1}{1!} + \\frac{1}{2!} + \\frac{1}{3!} + \\frac{1}{4!} + \\cdots ," }, { "math_id": 20, "text": "\\Pr[k~\\mathrm{wins~of}~n] = \\binom{n}{k} \\left(\\frac{1}{n}\\right)^k\\left(1 - \\frac{1}{n}\\right)^{n-k}." }, { "math_id": 21, "text": "\\Pr[0~\\mathrm{wins~of}~n] = \\left(1 - \\frac{1}{n}\\right)^{n}." }, { "math_id": 22, "text": "x(t) = x_0\\cdot e^{kt} = x_0\\cdot e^{t/\\tau}." }, { "math_id": 23, "text": "x_0" }, { "math_id": 24, "text": "\\tau" }, { "math_id": 25, "text": " \\phi(x) = \\frac{1}{\\sqrt{2\\pi}} e^{-\\frac{1}{2} x^2}. " }, { "math_id": 26, "text": "\\phi(x)" }, { "math_id": 27, "text": "\\textstyle 1/\\sqrt{2\\pi}" }, { "math_id": 28, "text": "p_n\\!" }, { "math_id": 29, "text": "p_n = 1 - \\frac{1}{1!} + \\frac{1}{2!} - \\frac{1}{3!} + \\cdots + \\frac{(-1)^n}{n!} = \\sum_{k = 0}^n \\frac{(-1)^k}{k!}." }, { "math_id": 30, "text": "\\sqrt[x]{x}" }, { "math_id": 31, "text": "x = e" }, { "math_id": 32, "text": "x^{-1}\\log_b x" }, { "math_id": 33, "text": "n = \\left\\lfloor \\frac{L}{e} \\right\\rfloor" }, { "math_id": 34, "text": "\\left\\lceil \\frac{L}{e} \\right\\rceil." }, { "math_id": 35, "text": "1/x" }, { "math_id": 36, "text": "36.8\\%" }, { "math_id": 37, "text": "x=e" }, { "math_id": 38, "text": "n! \\sim \\sqrt{2\\pi n} \\left(\\frac{n}{e}\\right)^n." }, { "math_id": 39, "text": "e = \\lim_{n\\to\\infty} \\frac{n}{\\sqrt[n]{n!}} ." }, { "math_id": 40, "text": "\\begin{align}\n \\frac{d}{dx}a^x\n &= \\lim_{h\\to 0}\\frac{a^{x+h} - a^x}{h} = \\lim_{h\\to 0}\\frac{a^x a^h - a^x}{h} \\\\\n &= a^x \\cdot \\left(\\lim_{h\\to 0}\\frac{a^h - 1}{h}\\right).\n\\end{align}" }, { "math_id": 41, "text": "\\frac{d}{dx}e^x = e^x." }, { "math_id": 42, "text": "\\begin{align}\n \\frac{d}{dx}\\log_a x\n &= \\lim_{h\\to 0}\\frac{\\log_a(x + h) - \\log_a(x)}{h} \\\\\n &= \\lim_{h\\to 0}\\frac{\\log_a(1 + h/x)}{x\\cdot h/x} \\\\\n &= \\frac{1}{x}\\log_a\\left(\\lim_{u\\to 0}(1 + u)^\\frac{1}{u}\\right) \\\\\n &= \\frac{1}{x}\\log_a e,\n\\end{align}" }, { "math_id": 43, "text": "\\frac{d}{dx}\\log_e x = \\frac{1}{x}." }, { "math_id": 44, "text": "e^x = \\sum_{n=0}^\\infty \\frac{x^n}{n!}." }, { "math_id": 45, "text": "x = 1" }, { "math_id": 46, "text": "x" }, { "math_id": 47, "text": "1/t" }, { "math_id": 48, "text": "\\int_1^e \\frac{1}{t} \\, dt = 1." }, { "math_id": 49, "text": "\\frac{d}{dx}Ke^x = Ke^x," }, { "math_id": 50, "text": "\\int Ke^x\\,dx = Ke^x + C ." 
}, { "math_id": 51, "text": "y(x) = Ke^x" }, { "math_id": 52, "text": "y' = y ." }, { "math_id": 53, "text": "\\left(1 + \\frac{1}{x}\\right)^x < e < \\left(1 + \\frac{1}{x}\\right)^{x+1}" }, { "math_id": 54, "text": "e^x \\ge x + 1" }, { "math_id": 55, "text": " f(x) = x^\\frac{1}{x} ." }, { "math_id": 56, "text": " f(x) = x^x ." }, { "math_id": 57, "text": " x^{x^{x^{\\cdot^{\\cdot^{\\cdot}}}}} " }, { "math_id": 58, "text": "{^\\infty}x" }, { "math_id": 59, "text": " e^{x} = 1 + {x \\over 1!} + {x^{2} \\over 2!} + {x^{3} \\over 3!} + \\cdots = \\sum_{n=0}^{\\infty} \\frac{x^n}{n!}." }, { "math_id": 60, "text": "e^{ix} = \\cos x + i\\sin x ," }, { "math_id": 61, "text": "e^{i\\pi} + 1 = 0 ," }, { "math_id": 62, "text": "\\ln (-1) = i\\pi ." }, { "math_id": 63, "text": "(\\cos x + i\\sin x)^n = \\left(e^{ix}\\right)^n = e^{inx} = \\cos nx + i \\sin nx" }, { "math_id": 64, "text": "\n \\cos x = \\frac{e^{ix} + e^{-ix}}{2} , \\qquad\n \\sin x = \\frac{e^{ix} - e^{-ix}}{2i}.\n" }, { "math_id": 65, "text": "\\cos x + i \\sin x" }, { "math_id": 66, "text": "\n e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, ..., 1, 2n, 1, ...],\n" }, { "math_id": 67, "text": "e = 2 +\n\\cfrac{1}\n {1 + \\cfrac{1}\n {2 + \\cfrac{1}\n {1 + \\cfrac{1}\n {1 + \\cfrac{1}\n {4 + \\cfrac{1}\n {1 + \\cfrac{1}\n {1 + \\ddots}\n }\n }\n }\n }\n }\n }\n.\n" }, { "math_id": 68, "text": "e = \\frac{2}{1} \\left(\\frac{4}{3}\\right)^{1/2} \\left(\\frac{6 \\cdot 8}{5 \\cdot 7}\\right)^{1/4} \\left(\\frac{10 \\cdot 12 \\cdot 14 \\cdot 16}{9 \\cdot 11 \\cdot 13 \\cdot 15}\\right)^{1/8} \\cdots." }, { "math_id": 69, "text": "V = \\min\\left\\{ n \\mid X_1 + X_2 + \\cdots + X_n > 1 \\right\\}." }, { "math_id": 70, "text": "e=\\sum_{k=0}^\\infty \\frac{1}{k!}." }, { "math_id": 71, "text": "p(a,b)" }, { "math_id": 72, "text": "q(a,b)" }, { "math_id": 73, "text": "\\binom{p(a,b)}{q(a,b)}= \\begin{cases} \\binom{1}{b}, & \\text{if }b=a+1\\text{,} \\\\ \\binom{p(a,m)q(m,b)+p(m,b)}{q(a,m)q(m,b)}, & \\text{otherwise, where }m=\\lfloor(a+b)/2\\rfloor .\\end{cases}" }, { "math_id": 74, "text": "1+\\frac{p(0,n)}{q(0,n)}" } ]
https://en.wikipedia.org/wiki?curid=9633
9633335
Gauss–Codazzi equations
Fundamental formulas linking the metric and curvature tensor of a manifold In Riemannian geometry and pseudo-Riemannian geometry, the Gauss–Codazzi equations (also called the Gauss–Codazzi–Weingarten–Mainardi equations or Gauss–Peterson–Codazzi formulas) are fundamental formulas that link together the induced metric and second fundamental form of a submanifold of (or immersion into) a Riemannian or pseudo-Riemannian manifold. The equations were originally discovered in the context of surfaces in three-dimensional Euclidean space. In this context, the first equation, often called the Gauss equation (after its discoverer Carl Friedrich Gauss), says that the Gauss curvature of the surface, at any given point, is dictated by the derivatives of the Gauss map at that point, as encoded by the second fundamental form. The second equation, called the Codazzi equation or Codazzi–Mainardi equation, states that the covariant derivative of the second fundamental form is fully symmetric. It is named for Gaspare Mainardi (1856) and Delfino Codazzi (1868–1869), who independently derived the result, although it was discovered earlier by Karl Mikhailovich Peterson. Formal statement. Let formula_0 be an "n"-dimensional embedded submanifold of a Riemannian manifold "P" of dimension formula_1. There is a natural inclusion of the tangent bundle of "M" into that of "P" by the pushforward, and the cokernel is the normal bundle of "M": formula_2 The metric splits this short exact sequence, and so formula_3 Relative to this splitting, the Levi-Civita connection formula_4 of "P" decomposes into tangential and normal components. For each formula_5 and vector field "Y" on "M", formula_6 Let formula_7 The Gauss formula now asserts that formula_8 is the Levi-Civita connection for "M", and formula_9 is a "symmetric" vector-valued form with values in the normal bundle. It is often referred to as the second fundamental form. An immediate corollary is the Gauss equation for the curvature tensor. For formula_10, formula_11 where formula_12 is the Riemann curvature tensor of "P" and "R" is that of "M". The Weingarten equation is an analog of the Gauss formula for a connection in the normal bundle. Let formula_13 and let formula_14 be a normal vector field. Then decompose the ambient covariant derivative of formula_14 along "X" into tangential and normal components: formula_15 Then the Weingarten equation asserts that formula_16. There are thus a pair of connections: ∇, defined on the tangent bundle of "M"; and "D", defined on the normal bundle of "M". These combine to form a connection on any tensor product of copies of T"M" and T⊥"M". In particular, they define the covariant derivative of formula_9: formula_17 The Codazzi–Mainardi equation is formula_18 Since every immersion is, in particular, a local embedding, the above formulas also hold for immersions. Gauss–Codazzi equations in classical differential geometry. Statement of classical equations. In classical differential geometry of surfaces, the Codazzi–Mainardi equations are expressed via the second fundamental form ("L", "M", "N"): formula_19 formula_20 The Gauss formula, depending on how one chooses to define the Gaussian curvature, may be a tautology. It can be stated as formula_21 where ("e", "f", "g") are the components of the first fundamental form. Derivation of classical equations. Consider a parametric surface in Euclidean 3-space, formula_22 where the three component functions depend smoothly on ordered pairs ("u","v") in some open domain "U" in the "uv"-plane. 
Assume that this surface is regular, meaning that the vectors r"u" and r"v" are linearly independent. Complete this to a basis {ru,rv,n}, by selecting a unit vector n normal to the surface. It is possible to express the second partial derivatives of r (vectors of formula_23) with the Christoffel symbols and the elements of the second fundamental form. We choose the first two components of the basis as they are intrinsic to the surface and intend to prove intrinsic property of the Gaussian curvature. The last term in the basis is extrinsic. formula_24 formula_25 formula_26 Clairaut's theorem states that partial derivatives commute: formula_27 If we differentiate ruu with respect to "v" and ruv with respect to "u", we get: formula_28formula_29 Now substitute the above expressions for the second derivatives and equate the coefficients of n: formula_30 Rearranging this equation gives the first Codazzi–Mainardi equation. The second equation may be derived similarly. Mean curvature. Let "M" be a smooth "m"-dimensional manifold immersed in the ("m" + "k")-dimensional smooth manifold "P". Let formula_31 be a local orthonormal frame of vector fields normal to "M". Then we can write, formula_32 If, now, formula_33 is a local orthonormal frame (of tangent vector fields) on the same open subset of "M", then we can define the mean curvatures of the immersion by formula_34 In particular, if "M" is a hypersurface of "P", i.e. formula_35, then there is only one mean curvature to speak of. The immersion is called minimal if all the formula_36 are identically zero. Observe that the mean curvature is a trace, or average, of the second fundamental form, for any given component. Sometimes mean curvature is defined by multiplying the sum on the right-hand side by formula_37. We can now write the Gauss–Codazzi equations as formula_38 Contracting the formula_39 components gives us formula_40 When "M" is a hypersurface, this simplifies to formula_41 where formula_42 formula_43 and formula_44. In that case, one more contraction yields, formula_45 where formula_12 and formula_46 are the scalar curvatures of "P" and "M" respectively, and formula_47 If formula_48, the scalar curvature equation might be more complicated. We can already use these equations to draw some conclusions. For example, any minimal immersion into the round sphere formula_49 must be of the form formula_50 where formula_51 runs from 1 to formula_52 and formula_53 is the Laplacian on "M", and formula_54 is a positive constant. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Historical references Textbooks Articles
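The classical Gauss formula from the statement above can be verified symbolically on a concrete surface. The following sketch (an added illustration using the SymPy library; the sphere parametrization is a standard choice, not taken from the article) computes the two fundamental forms of a sphere of radius "r" and recovers the expected Gaussian curvature 1/"r"^2 from "K" = ("LN" − "M"^2)/("eg" − "f"^2).

```python
import sympy as sp

u, v, r = sp.symbols('u v r', real=True, positive=True)

# Sphere of radius r as a parametric surface r(u, v).
R = sp.Matrix([r * sp.sin(u) * sp.cos(v),
               r * sp.sin(u) * sp.sin(v),
               r * sp.cos(u)])

Ru, Rv = R.diff(u), R.diff(v)
cross = Ru.cross(Rv)
n = cross / sp.sqrt(cross.dot(cross))                  # unit normal

e_, f_, g_ = Ru.dot(Ru), Ru.dot(Rv), Rv.dot(Rv)        # first fundamental form
L_ = R.diff(u, 2).dot(n)                               # second fundamental form
M_ = R.diff(u).diff(v).dot(n)
N_ = R.diff(v, 2).dot(n)

K = sp.simplify((L_ * N_ - M_**2) / (e_ * g_ - f_**2))
print(K)   # r**(-2): the Gaussian curvature of a sphere, as expected
```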
[ { "math_id": 0, "text": "i \\colon M \\subset P" }, { "math_id": 1, "text": "n+p" }, { "math_id": 2, "text": "0 \\rightarrow T_xM \\rightarrow T_xP|_M \\rightarrow T_x^\\perp M \\rightarrow 0." }, { "math_id": 3, "text": "TP|_M = TM\\oplus T^\\perp M." }, { "math_id": 4, "text": "\\nabla'" }, { "math_id": 5, "text": "X\\in TM" }, { "math_id": 6, "text": "\\nabla'_X Y = \\top\\left(\\nabla'_X Y\\right) + \\bot\\left(\\nabla'_X Y\\right)." }, { "math_id": 7, "text": "\\nabla_X Y = \\top\\left(\\nabla'_X Y\\right),\\quad \\alpha(X, Y) = \\bot\\left(\\nabla'_X Y\\right)." }, { "math_id": 8, "text": "\\nabla_X" }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "X, Y, Z, W \\in TM" }, { "math_id": 11, "text": "\\langle R'(X, Y)Z, W\\rangle = \\langle R(X, Y)Z, W\\rangle + \\langle \\alpha(X, Z), \\alpha(Y, W)\\rangle - \\langle \\alpha(Y, Z), \\alpha(X, W)\\rangle " }, { "math_id": 12, "text": "R'" }, { "math_id": 13, "text": "X \\in TM" }, { "math_id": 14, "text": "\\xi" }, { "math_id": 15, "text": "\\nabla'_X\\xi = \\top \\left(\\nabla'_X\\xi\\right) + \\bot\\left(\\nabla'_X\\xi\\right) = -A_\\xi(X) + D_X(\\xi)." }, { "math_id": 16, "text": "\\langle A_\\xi X, Y\\rangle = \\langle \\alpha(X, Y), \\xi\\rangle" }, { "math_id": 17, "text": "\\left(\\tilde{\\nabla}_X \\alpha\\right)(Y, Z) = D_X\\left(\\alpha(Y, Z)\\right) - \\alpha\\left(\\nabla_X Y, Z\\right) - \\alpha\\left(Y, \\nabla_X Z\\right)." }, { "math_id": 18, "text": "\\bot\\left(R'(X, Y)Z\\right) = \\left(\\tilde{\\nabla}_X\\alpha\\right)(Y, Z) - \\left(\\tilde{\\nabla}_Y\\alpha\\right)(X, Z)." }, { "math_id": 19, "text": "L_v-M_u = L\\Gamma^1{}_{12} + M\\left({\\Gamma^2}_{12} - {\\Gamma^1}_{11}\\right) - N{\\Gamma^2}_{11}" }, { "math_id": 20, "text": "M_v-N_u = L\\Gamma^1{}_{22} + M\\left({\\Gamma^2}_{22} - {\\Gamma^1}_{12}\\right) - N{\\Gamma^2}_{12}" }, { "math_id": 21, "text": "K = \\frac{LN - M^2}{eg - f^2}," }, { "math_id": 22, "text": "\\mathbf{r}(u,v) = (x(u,v),y(u,v),z(u,v))" }, { "math_id": 23, "text": "\\mathbb{R^3}" }, { "math_id": 24, "text": "\\mathbf{r}_{uu} = {\\Gamma^1}_{11} \\mathbf{r}_u + {\\Gamma^2}_{11} \\mathbf{r}_v + L \\mathbf{n}" }, { "math_id": 25, "text": "\\mathbf{r}_{uv} = {\\Gamma^1}_{12} \\mathbf{r}_u + {\\Gamma^2}_{12} \\mathbf{r}_v + M \\mathbf{n}" }, { "math_id": 26, "text": "\\mathbf{r}_{vv} = {\\Gamma^1}_{22} \\mathbf{r}_u + {\\Gamma^2}_{22} \\mathbf{r}_v + N \\mathbf{n}" }, { "math_id": 27, "text": "\\left(\\mathbf{r}_{uu}\\right)_v = \\left(\\mathbf{r}_{uv}\\right)_u" }, { "math_id": 28, "text": "\\left({\\Gamma^1}_{11}\\right)_v \\mathbf{r}_u + {\\Gamma^1}_{11} \\mathbf{r}_{uv} + \\left({\\Gamma^2}_{11}\\right)_v \\mathbf{r}_v + {\\Gamma^2}_{11} \\mathbf{r}_{vv} + L_v \\mathbf{n} + L \\mathbf{n}_v " }, { "math_id": 29, "text": " = \\left({\\Gamma^1}_{12}\\right)_u \\mathbf{r}_u + {\\Gamma^1}_{12} \\mathbf{r}_{uu} + \\left(\\Gamma_{12}^2\\right)_u \\mathbf{r}_v + {\\Gamma^2}_{12} \\mathbf{r}_{uv} + M_u \\mathbf{n} + M \\mathbf{n}_u" }, { "math_id": 30, "text": " M {\\Gamma^1}_{11} + N {\\Gamma^2}_{11} + L_v = L {\\Gamma^1}_{12} + M {\\Gamma^2}_{12} + M_u " }, { "math_id": 31, "text": "e_1, e_2, \\ldots, e_k" }, { "math_id": 32, "text": "\\alpha(X, Y) = \\sum_{j=1}^k\\alpha_j(X, Y)e_j." }, { "math_id": 33, "text": "E_1, E_2, \\ldots, E_m" }, { "math_id": 34, "text": "H_j=\\sum_{i=1}^m\\alpha_j(E_i, E_i)." 
}, { "math_id": 35, "text": "k=1" }, { "math_id": 36, "text": "H_j" }, { "math_id": 37, "text": "1/m" }, { "math_id": 38, "text": "\\langle R'(X, Y)Z, W \\rangle = \\langle R(X,Y)Z, W \\rangle + \\sum_{j=1}^k \\left(\\alpha_j(X,Z) \\alpha_j(Y, W) - \\alpha_j(Y, Z) \\alpha_j(X, W)\\right). " }, { "math_id": 39, "text": "Y, Z" }, { "math_id": 40, "text": "\\operatorname{Ric}'(X, W) = \\operatorname{Ric}(X,W) + \\sum_{j=1}^k \\langle R'(X, e_j)e_j, W\\rangle + \\sum_{j=1}^k \\left(\\sum_{i=1}^m\\alpha_j(X, E_i) \\alpha_j(E_i, W)- H_j \\alpha_j(X, W)\\right)." }, { "math_id": 41, "text": "\\operatorname{Ric}'(X, W) = \\operatorname{Ric}(X, W) + \\langle R'(X, n)n, W \\rangle + \\sum_{i=1}^mh(X, E_i) h(E_i, W) - H h(X, W)" }, { "math_id": 42, "text": "n = e_1," }, { "math_id": 43, "text": "h = \\alpha_1" }, { "math_id": 44, "text": "H = H_1" }, { "math_id": 45, "text": "R' = R + 2 \\operatorname{Ric}'(n, n) + \\|h\\|^2 - H^2" }, { "math_id": 46, "text": "R" }, { "math_id": 47, "text": "\\|h\\|^2 = \\sum_{i,j=1}^m h(E_i, E_j)^2." }, { "math_id": 48, "text": "k>1" }, { "math_id": 49, "text": " x_1^2 + x_2^2 + \\cdots + x_{m+k+1}^2 = 1 " }, { "math_id": 50, "text": "\\Delta x_j + \\lambda x_j = 0" }, { "math_id": 51, "text": "j" }, { "math_id": 52, "text": "m + k + 1" }, { "math_id": 53, "text": "\\Delta = \\sum_{i=1}^m \\nabla_{E_i}\\nabla_{E_i}" }, { "math_id": 54, "text": "\\lambda > 0" } ]
https://en.wikipedia.org/wiki?curid=9633335
9636455
Logarithmic number system
A logarithmic number system (LNS) is an arithmetic system used for representing real numbers in computer and digital hardware, especially for digital signal processing. Overview. A number, formula_0, is represented in an LNS by two components: the logarithm (formula_1) of its absolute value (as a binary word usually in two's complement), and its sign bit (formula_2): formula_3 An LNS can be considered as a floating-point number with the significand always equal to 1 and a non-integer exponent. This formulation simplifies the operations of multiplication, division, powers and roots, since they are reduced to addition, subtraction, multiplication, and division, respectively. On the other hand, the operations of addition and subtraction are more complicated; they are calculated by the formulae: formula_4 formula_5 where the "sum" function is defined by formula_6, and the "difference" function by formula_7. These functions formula_8 and formula_9 are also known as Gaussian logarithms. The simplification of multiplication, division, roots, and powers is counterbalanced by the cost of evaluating these functions for addition and subtraction. This added cost of evaluation may not be critical when using an LNS primarily for increasing the precision of floating-point math operations. History. Logarithmic number systems have been independently invented and published at least three times as an alternative to fixed-point and floating-point number systems. Nicholas Kingsbury and Peter Rayner introduced "logarithmic arithmetic" for digital signal processing (DSP) in 1971. A similar LNS named "signed logarithmic number system" (SLNS) was described in 1975 by Earl Swartzlander and Aristides Alexopoulos; rather than use two's complement notation for the logarithms, they offset them (scale the numbers being represented) to avoid negative logs. Samuel Lee and Albert Edgar described a similar system, which they called the "Focus" number system, in 1977. The mathematical foundations for addition and subtraction in an LNS trace back to Zecchini Leonelli and Carl Friedrich Gauss in the early 1800s. Applications. In the late 1800s, the Spanish engineer Leonardo Torres Quevedo conceived a series of mechanical analogue calculating machines and developed one that could solve algebraic equations with eight terms, finding the roots, including the complex ones. One part of this machine, called an "endless spindle", allowed the mechanical expression of the relation formula_10, with the aim of extracting the logarithm of a sum as a sum of logarithms. An LNS has been used in the Gravity Pipe (GRAPE-5) special-purpose supercomputer that won the Gordon Bell Prize in 1999. A substantial effort to explore the applicability of LNSs as a viable alternative to floating point for general-purpose processing of single-precision real numbers is described in the context of the "European Logarithmic Microprocessor" (ELM). A fabricated prototype of the processor, which has a 32-bit cotransformation-based LNS arithmetic logic unit (ALU), demonstrated LNSs as a "more accurate alternative to floating-point", with improved speed. Further improvement of the LNS design based on the ELM architecture has shown its capability to offer significantly higher speed and accuracy than floating-point as well. LNSs are sometimes used in FPGA-based applications where most arithmetic operations are multiplication or division. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
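The arithmetic rules above are easy to prototype in software before committing them to hardware. The Python sketch below is only an illustration (it is not the ELM design or any particular hardware format): a value is stored as a sign bit together with "x" = log2|X|, multiplication becomes an addition of logarithms, and same-sign addition uses the "sum" function formula_6 with base "b" = 2.

```python
import math

# A value X is stored as (sign, x) with x = log2|X|.

def encode(X):
    return (0 if X > 0 else 1, math.log2(abs(X)))

def decode(sign, x):
    return (-1.0 if sign else 1.0) * 2.0 ** x

def lns_mul(a, b):
    sa, xa = a
    sb, xb = b
    return (sa ^ sb, xa + xb)            # multiply: XOR the signs, add the logarithms

def lns_add_same_sign(a, b):
    sa, xa = a
    sb, xb = b
    assert sa == sb, "this sketch only handles addition of same-sign values"
    return (sa, xa + math.log2(1.0 + 2.0 ** (xb - xa)))   # x + s_2(y - x)

A, B = encode(6.0), encode(7.0)
print(decode(*lns_mul(A, B)))            # 42.0 (up to floating-point rounding)
print(decode(*lns_add_same_sign(A, B)))  # 13.0 (up to floating-point rounding)
```

In hardware the sum and difference functions are typically tabulated or interpolated rather than evaluated with a logarithm instruction, which is exactly the evaluation cost the article refers to.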
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "X\\rightarrow\\begin{cases}\nx=\\log_b\\big|X\\big| \\, , \\\\\ns=\\begin{cases}\n0\\text{ if } X>0 \\, , \\\\\n1\\text{ if } X<0 \\, .\n\\end{cases}\n\\end{cases}" }, { "math_id": 4, "text": "\\log_b(|X|+|Y|)=x+s_b(y-x) \\, ," }, { "math_id": 5, "text": " \\log_b\\bigg||X|-|Y|\\bigg|=x+d_b(y-x) \\, ," }, { "math_id": 6, "text": "s_b(z)=\\log_b(1+b^z)" }, { "math_id": 7, "text": "d_b(z)=\\log_b\\big|1-b^z\\big|" }, { "math_id": 8, "text": "s_b(z)" }, { "math_id": 9, "text": "d_b(z)" }, { "math_id": 10, "text": " y=\\log(1+10^x)" } ]
https://en.wikipedia.org/wiki?curid=9636455
9637
Euler–Maclaurin formula
Summation formula In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence. The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula. The formula. If m and n are natural numbers and "f"("x") is a real- or complex-valued continuous function for real numbers x in the interval ["m","n"], then the integral formula_0 can be approximated by the sum (or vice versa) formula_1 (see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives "f"("k")("x") evaluated at the endpoints of the interval, that is to say "x" = "m" and "x" = "n". Explicitly, for p a positive integer and a function "f"("x") that is p times continuously differentiable on the interval ["m","n"], we have formula_2 where Bk is the kth Bernoulli number (with "B"1 = 1/2) and Rp is an error term which depends on n, m, p, and f and is usually small for suitable values of p. The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for "B"1. In this case we have formula_3 or alternatively formula_4 The remainder term. The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals ["r", "r" + 1] for "r" = "m", "m" + 1, …, "n" − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term. The remainder term has an exact expression in terms of the periodized Bernoulli functions "Pk"("x"). The Bernoulli polynomials may be defined recursively by "B"0("x") = 1 and, for "k" ≥ 1, formula_5 The periodized Bernoulli functions are defined as formula_6 where ⌊"x"⌋ denotes the largest integer less than or equal to x, so that "x" − ⌊"x"⌋ always lies in the interval [0,1). With this notation, the remainder term Rp equals formula_7 When "k" &gt; 0, it can be shown that for 0 ≤ "x" ≤ 1, formula_8 where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials "Bk"("x"). The bound is achieved for even k when x is zero. The term "ζ"("k") may be omitted for odd k but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as formula_9 Low-order cases. The Bernoulli numbers from "B"1 to "B"7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore, the low-order cases of the Euler–Maclaurin formula are: formula_10 Applications. The Basel problem. The Basel problem is to determine the sum formula_11 Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π^2/6, which he proved in the same year. 
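Euler's use of the formula on the Basel problem can be reproduced with a few lines of code. In the sketch below (an added illustration; the cutoff "N" = 10 and the number of correction terms are arbitrary choices), the tail of the sum is replaced by the integral plus the low-order Euler–Maclaurin corrections with "B"2 = 1/6, "B"4 = −1/30 and "B"6 = 1/42.

```python
import math

# Sum of 1/i^2: add the first N-1 terms directly, then approximate the tail
# sum_{i>=N} 1/i^2 by 1/N + 1/(2N^2) + B_2/N^3 + B_4/N^5 + B_6/N^7.

N = 10
partial = sum(1.0 / i**2 for i in range(1, N))
tail = 1.0/N + 1.0/(2*N**2) + (1.0/6)/N**3 - (1.0/30)/N**5 + (1.0/42)/N**7

print(f"estimate  = {partial + tail:.12f}")
print(f"pi^2 / 6  = {math.pi**2 / 6:.12f}")   # agreement to roughly ten decimal places
```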
Sums involving a polynomial. If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if "f"("x") = "x"^3, we can choose "p" = 2 to obtain, after simplification, formula_12 Approximation of integrals. The formula provides a means of approximating a finite integral. Let "a" &lt; "b" be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by "h" = ("b" − "a")/("N" − 1). Set "xi" = "a" + ("i" − 1)"h", so that "x"1 = "a" and "xN" = "b". Then: formula_13 This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention. The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation. Asymptotic expansion of sums. In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is formula_14 where a and b are integers. Often the expansion remains valid even after taking the limits "a" → −∞ or "b" → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example, formula_15 Here the left-hand side is equal to "ψ"(1)("z"), namely the first-order polygamma function defined by formula_16 the gamma function Γ("z") is equal to ("z" − 1)! when z is a positive integer. This results in an asymptotic expansion for "ψ"(1)("z"). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function. Examples. If s is an integer greater than 1 we have: formula_17 Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion: formula_18 For s equal to 2 this simplifies to formula_19 or formula_20 When "s" = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers: formula_21 where "γ" ≈ 0.5772... is the Euler–Mascheroni constant. 
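The harmonic-number expansion formula_21 converges on the true value remarkably fast, which is easy to see numerically. The Python fragment below is an added illustration (the value of "γ" is hard-coded and the number of correction terms is an arbitrary choice):

```python
import math

gamma = 0.5772156649015329    # Euler–Mascheroni constant

def harmonic_direct(n):
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_asymptotic(n):
    # H_n ≈ ln n + gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4)
    return math.log(n) + gamma + 1.0/(2*n) - 1.0/(12*n**2) + 1.0/(120*n**4)

for n in (10, 100, 1000):
    print(n, harmonic_direct(n), harmonic_asymptotic(n))
# Already at n = 10 the two values agree to within about 4e-9.
```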
Thus, formula_25 Let "k" be an integer, and consider the integral formula_26 where formula_27 Integrating by parts, we get formula_28 Using "B"1(0) = −1/2, "B"1(1) = 1/2, and summing the above from "k" = 0 to "k" = "n" − 1, we get formula_29 Adding ("f"("n") − "f"(0))/2 to both sides and rearranging, we have formula_30 This is the "p" = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term: formula_31 where formula_32 The result of integrating by parts is formula_33 Summing from "k" = 0 to "k" = "n" − 1 and substituting this for the lower order error term results in the "p" = 2 case of the formula, formula_34 This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
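As an illustration of the expansions above, the following short Python sketch (an editorial illustration, not part of the original article; the cutoff n = 10 and the variable names are arbitrary choices) mirrors Euler's Basel-problem computation: a ten-term partial sum of Σ 1/k², corrected by the first few Euler–Maclaurin terms quoted above for s = 2, already pins down π²/6 to roughly nine decimal places.

```python
import math

# Estimate pi^2/6 from a short partial sum of 1/k^2 plus the first
# Euler-Maclaurin correction terms (the asymptotic expansion quoted above for s = 2).
n = 10
partial_sum = sum(1.0 / k**2 for k in range(1, n + 1))

# zeta(2) ~ partial_sum + 1/n - 1/(2n^2) + 1/(6n^3) - 1/(30n^5) + 1/(42n^7)
estimate = (partial_sum
            + 1.0 / n
            - 1.0 / (2 * n**2)
            + 1.0 / (6 * n**3)
            - 1.0 / (30 * n**5)
            + 1.0 / (42 * n**7))

exact = math.pi**2 / 6
print(f"partial sum (n={n}) : {partial_sum:.12f}")
print(f"corrected estimate  : {estimate:.12f}")
print(f"exact pi^2/6        : {exact:.12f}")
print(f"error               : {abs(estimate - exact):.2e}")
```

Adding further correction terms improves the estimate only up to a point, in line with the remark above that the expansion is asymptotic rather than convergent.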
[ { "math_id": 0, "text": "I = \\int_m^n f(x)\\,dx" }, { "math_id": 1, "text": "S = f(m + 1) + \\cdots + f(n - 1) + f(n)" }, { "math_id": 2, "text": "S - I = \\sum_{k=1}^p {\\frac{B_k}{k!} \\left(f^{(k - 1)}(n) - f^{(k - 1)}(m)\\right)} + R_p," }, { "math_id": 3, "text": "\\sum_{i=m}^n f(i) =\n \\int^n_m f(x)\\,dx + \\frac{f(n) + f(m)}{2} +\n \\sum_{k=1}^{\\left\\lfloor \\frac{p}{2}\\right\\rfloor} \\frac{B_{2k}}{(2k)!} \\left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\\right) + R_p,\n" }, { "math_id": 4, "text": "\\sum_{i=m+1}^n f(i) =\n \\int^n_m f(x)\\,dx + \\frac{f(n) - f(m)}{2} +\n \\sum_{k=1}^{\\left\\lfloor \\frac{p}{2}\\right\\rfloor} \\frac{B_{2k}}{(2k)!} \\left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\\right) + R_p.\n" }, { "math_id": 5, "text": "\\begin{align}\n B_k'(x) &= kB_{k - 1}(x), \\\\\n \\int_0^1 B_k(x)\\,dx &= 0.\n\\end{align}" }, { "math_id": 6, "text": "P_k(x) = B_k\\bigl(x - \\lfloor x\\rfloor\\bigr)," }, { "math_id": 7, "text": "R_{p} = (-1)^{p+1}\\int_m^n f^{(p)}(x) \\frac{P_p(x)}{p!}\\,dx. " }, { "math_id": 8, "text": "\\bigl|B_k(x)\\bigr| \\le \\frac{2 \\cdot k!}{(2\\pi)^k}\\zeta(k)," }, { "math_id": 9, "text": "\\left|R_p\\right| \\leq \\frac{2 \\zeta(p)}{(2\\pi)^p}\\int_m^n \\left|f^{(p)}(x)\\right|\\,dx." }, { "math_id": 10, "text": "\\begin{align}\n\\sum_{i=m}^n f(i) - \\int_m^n f(x)\\,dx &= \\frac{f(m)+f(n)}{2} + \\int_m^n f'(x)P_1(x)\\,dx \\\\\n&=\\frac{f(m)+f(n)}{2} + \\frac{1}{6}\\frac{f'(n) - f'(m)}{2!} - \\int_m^n f''(x)\\frac{P_2(x)}{2!}\\,dx \\\\\n&=\\frac{f(m)+f(n)}{2} + \\frac{1}{6}\\frac{f'(n) - f'(m)}{2!} + \\int_m^n f'''(x)\\frac{P_3(x)}{3!}\\,dx \\\\\n&=\\frac{f(m)+f(n)}{2} + \\frac{1}{6}\\frac{f'(n) - f'(m)}{2!} - \\frac{1}{30}\\frac{f'''(n) - f'''(m)}{4!}-\\int_m^n f^{(4)}(x) \\frac{P_4(x)}{4!}\\, dx \\\\\n&=\\frac{f(m)+f(n)}{2} + \\frac{1}{6}\\frac{f'(n) - f'(m)}{2!} - \\frac{1}{30}\\frac{f'''(n) - f'''(m)}{4!} + \\int_m^n f^{(5)}(x)\\frac{P_5(x)}{5!}\\,dx \\\\\n&=\\frac{f(m)+f(n)}{2} + \\frac{1}{6}\\frac{f'(n) - f'(m)}{2!} - \\frac{1}{30}\\frac{f'''(n) - f'''(m)}{4!} + \\frac{1}{42}\\frac{f^{(5)}(n) - f^{(5)}(m)}{6!} - \\int_m^n f^{(6)}(x)\\frac{P_6(x)}{6!}\\,dx \\\\\n&=\\frac{f(m)+f(n)}{2} + \\frac{1}{6}\\frac{f'(n) - f'(m)}{2!} - \\frac{1}{30}\\frac{f'''(n) - f'''(m)}{4!} + \\frac{1}{42}\\frac{f^{(5)}(n) - f^{(5)}(m)}{6!} + \\int_m^n f^{(7)}(x)\\frac{P_7(x)}{7!}\\,dx.\n\\end{align}" }, { "math_id": 11, "text": " 1 + \\frac14 + \\frac19 + \\frac1{16} + \\frac1{25} + \\cdots = \\sum_{n=1}^\\infty \\frac{1}{n^2}. " }, { "math_id": 12, "text": "\\sum_{i=0}^n i^3 = \\left(\\frac{n(n + 1)}{2}\\right)^2." }, { "math_id": 13, "text": "\n\\begin{align}\nI & = \\int_a^b f(x)\\,dx \\\\\n&\\sim h\\left(\\frac{f(x_1)}{2} + f(x_2) + \\cdots + f(x_{N-1}) + \\frac{f(x_N)}{2}\\right) + \\frac{h^2}{12}\\bigl[f'(x_1) - f'(x_N)\\bigr] - \\frac{h^4}{720}\\bigl[f'''(x_1) - f'''(x_N)\\bigr] + \\cdots\n\\end{align}\n" }, { "math_id": 14, "text": "\\sum_{n=a}^b f(n) \\sim \\int_a^b f(x)\\,dx + \\frac{f(b) + f(a)}{2} + \\sum_{k=1}^\\infty \\,\\frac{B_{2k}}{(2k)!} \\left(f^{(2k - 1)}(b) - f^{(2k - 1)}(a)\\right)," }, { "math_id": 15, "text": "\\sum_{k=0}^\\infty \\frac{1}{(z + k)^2} \\sim \\underbrace{\\int_0^\\infty\\frac{1}{(z + k)^2}\\,dk}_{= \\dfrac{1}{z}} + \\frac{1}{2z^2} + \\sum_{t = 1}^\\infty \\frac{B_{2t}}{z^{2t + 1}}." 
}, { "math_id": 16, "text": "\\psi^{(1)}(z) = \\frac{d^2}{dz^2}\\log \\Gamma(z);" }, { "math_id": 17, "text": "\\sum_{k=1}^n \\frac{1}{k^s} \\approx \\frac 1{s-1}+\\frac 12-\\frac 1{(s-1)n^{s-1}}+\\frac 1{2n^s}+\\sum_{i=1}\\frac{B_{2i}}{(2i)!}\\left[\\frac{(s+2i-2)!}{(s-1)!}-\\frac{(s+2i-2)!}{(s-1)!n^{s+2i-1}}\\right]." }, { "math_id": 18, "text": "\\sum_{k=1}^n \\frac{1}{k^s} \\sim\\zeta(s)-\\frac 1{(s-1)n^{s-1}}+\\frac 1{2n^s}-\\sum_{i=1}\\frac{B_{2i}}{(2i)!}\\frac{(s+2i-2)!}{(s-1)!n^{s+2i-1}}." }, { "math_id": 19, "text": "\\sum_{k=1}^n \\frac{1}{k^2} \\sim\\zeta(2)-\\frac 1n+\\frac 1{2n^2}-\\sum_{i=1}\\frac{B_{2i}}{n^{2i+1}}," }, { "math_id": 20, "text": "\\sum_{k=1}^n \\frac{1}{k^2} \\sim \\frac{\\pi^2}{6} -\\frac{1}{n} +\\frac{1}{2n^2} -\\frac{1}{6n^3}+\\frac{1}{30n^5}-\\frac{1}{42n^7} + \\cdots." }, { "math_id": 21, "text": "\\sum_{k=1}^n \\frac{1}{k} \\sim \\gamma + \\log n + \\frac{1}{2n} - \\sum_{k=1}^\\infty \\frac{B_{2k}}{2kn^{2k}}," }, { "math_id": 22, "text": "\\begin{align}\n B_0(x) &= 1, \\\\\n B_1(x) &= x - \\tfrac{1}{2}, \\\\\n B_2(x) &= x^2 - x + \\tfrac{1}{6}, \\\\\n B_3(x) &= x^3 - \\tfrac{3}{2}x^2 + \\tfrac{1}{2}x, \\\\\n B_4(x) &= x^4 - 2x^3 + x^2 - \\tfrac{1}{30}, \\\\\n &\\,\\,\\,\\vdots\n\\end{align}" }, { "math_id": 23, "text": "B_n = B_n(1) = B_n(0)," }, { "math_id": 24, "text": "B_1 = B_1(1) = -B_1(0)." }, { "math_id": 25, "text": " P_n(0) = P_n(1) = B_n \\quad \\text{for }n \\neq 1." }, { "math_id": 26, "text": " \\int_k^{k + 1} f(x)\\,dx = \\int_k^{k + 1} u\\,dv," }, { "math_id": 27, "text": "\\begin{align}\n u &= f(x), \\\\\n du &= f'(x)\\,dx, \\\\\n dv &= P_0(x)\\,dx & \\text{since }P_0(x) &= 1, \\\\\n v &= P_1(x).\n\\end{align}" }, { "math_id": 28, "text": "\\begin{align}\n \\int_k^{k + 1} f(x)\\,dx &= \\bigl[uv\\bigr]_k^{k + 1} - \\int_k^{k + 1} v\\,du \\\\\n &= \\bigl[f(x)P_1(x)\\bigr]_k^{k + 1} - \\int_k^{k+1} f'(x)P_1(x)\\,dx \\\\\n &= B_1(1)f(k+1)-B_1(0)f(k) - \\int_k^{k+1} f'(x)P_1(x)\\,dx.\n\\end{align}" }, { "math_id": 29, "text": "\\begin{align}\n\\int_0^n f(x)\\, dx &= \\int_0^1 f(x)\\,dx + \\cdots + \\int_{n-1}^n f(x)\\,dx \\\\\n&= \\frac{f(0)}{2}+ f(1) + \\dotsb + f(n-1) + \\frac{f(n)}{2} - \\int_0^n f'(x) P_1(x)\\,dx.\n\\end{align}" }, { "math_id": 30, "text": " \\sum_{k=1}^n f(k) = \\int_0^n f(x)\\,dx + \\frac{f(n) - f(0)}{2} + \\int_0^n f'(x) P_1(x)\\,dx." }, { "math_id": 31, "text": "\\int_k^{k+1} f'(x)P_1(x)\\,dx = \\int_k^{k + 1} u\\,dv," }, { "math_id": 32, "text": "\\begin{align}\n u &= f'(x), \\\\\n du &= f''(x)\\,dx, \\\\\n dv &= P_1(x)\\,dx, \\\\\n v &= \\tfrac{1}{2}P_2(x).\n\\end{align}" }, { "math_id": 33, "text": "\\begin{align}\n \\bigl[uv\\bigr]_k^{k + 1} - \\int_k^{k + 1} v\\,du &= \\left[\\frac{f'(x)P_2(x)}{2} \\right]_k^{k+1} - \\frac{1}{2}\\int_k^{k+1} f''(x)P_2(x)\\,dx \\\\\n &= \\frac{B_2}{2}(f'(k + 1) - f'(k)) - \\frac{1}{2}\\int_k^{k + 1} f''(x)P_2(x)\\,dx.\n\\end{align}" }, { "math_id": 34, "text": "\\sum_{k=1}^n f(k) = \\int_0^n f(x)\\,dx + \\frac{f(n) - f(0)}{2} + \\frac{B_2}{2}\\bigl(f'(n) - f'(0)\\bigr) - \\frac{1}{2}\\int_0^n f''(x)P_2(x)\\,dx." } ]
https://en.wikipedia.org/wiki?curid=9637
9638200
Boltzmann brain
Philosophical thought experiment The Boltzmann brain thought experiment suggests that it might be more likely for a single brain to spontaneously form in space, complete with a memory of having existed in our universe, rather than for the entire universe to come about in the manner cosmologists think it actually did. Physicists use the Boltzmann brain thought experiment as a "reductio ad absurdum" argument for evaluating competing scientific theories. In contrast to brain in a vat thought experiments, which are about perception and thought, Boltzmann brains are used in cosmology to test our assumptions about thermodynamics and the development of the universe. Over a sufficiently long time, random fluctuations could cause particles to spontaneously form literally any structure of any degree of complexity, including a functioning human brain. The scenario initially involved only a single brain with false memories, but physicist Sean M. Carroll pointed out that, in a fluctuating universe, the scenario works just as well with entire bodies, even entire galaxies. The idea is named after the physicist Ludwig Boltzmann (1844–1906), who, in 1896, published a theory that tried to account for the fact that the universe is not as chaotic as the budding field of thermodynamics seemed to predict. He offered several explanations, one of them being that the universe, even after it had progressed to its most likely spread-out and featureless state of thermal equilibrium, would spontaneously fluctuate to a more ordered (or low-entropy) state such as the universe in which we find ourselves. Boltzmann brains were first proposed as a "reductio ad absurdum" response to this explanation by Boltzmann for the low-entropy state of our universe. The Boltzmann brain gained new relevance around 2002, when some cosmologists started to become concerned that, in many theories about the universe, human brains are vastly more likely to arise from random fluctuations; this leads to the conclusion that, statistically, humans are likely to be wrong about their memories of the past and in fact are Boltzmann brains. When applied to more recent theories about the multiverse, Boltzmann brain arguments are part of the unsolved measure problem of cosmology. "Boltzmann universe". In 1896, the mathematician Ernst Zermelo advanced a theory that the second law of thermodynamics was absolute rather than statistical. Zermelo bolstered his theory by pointing out that the Poincaré recurrence theorem shows statistical entropy in a closed system must eventually be a periodic function; therefore, the Second Law, which is always observed to increase entropy, is unlikely to be statistical. To counter Zermelo's argument, Boltzmann advanced two theories. The first theory, now believed to be the correct one, is that the universe started for some unknown reason in a low-entropy state. The second and alternative theory, published in 1896 but attributed in 1895 to Boltzmann's assistant Ignaz Schütz, is the "Boltzmann universe" scenario. In this scenario, the universe spends the vast majority of eternity in a featureless state of heat death; however, over enough eons, eventually a very rare thermal fluctuation will occur where atoms bounce off each other in exactly such a way as to form a substructure equivalent to our entire observable universe. 
Boltzmann argues that, while most of the universe is featureless, humans do not see those regions because they are devoid of intelligent life; to Boltzmann, it is unremarkable that humanity views solely the interior of its Boltzmann universe, as that is the only place where intelligent life lives. (This may be the first use in modern science of the anthropic principle). In 1931, astronomer Arthur Eddington pointed out that, because a large fluctuation is exponentially less probable than a small fluctuation, observers in Boltzmann universes will be vastly outnumbered by observers in smaller fluctuations. Physicist Richard Feynman published a similar counterargument within his widely read "Feynman Lectures on Physics". By 2004, physicists had pushed Eddington's observation to its logical conclusion: the most numerous observers in an eternity of thermal fluctuations would be minimal "Boltzmann brains" popping up in an otherwise featureless universe. Spontaneous formation. In the universe's eventual state of ergodic "heat death", given enough time, every possible structure (including every possible brain) will presumably get formed via random fluctuation, the timescale of which is related to the Poincaré recurrence time. A Boltzmann brain (or body or world) need not fluctuate suddenly into existence, argue Anthony Aguirre, Sean M. Carroll, and Matthew C. Johnson. Rather, it would form in a sequence of smaller fluctuations that would look like the brain's decay path run in reverse. Boltzmann-style thought experiments generally focus on structures like human brains that are presumably self-aware observers. However, smaller structures that minimally meet the criteria are vastly and exponentially more common than larger structures; a rough analogy is how the odds of a single real English word showing up when one shakes a box of "Scrabble" letters are greater than the odds that a whole English sentence or paragraph will form. The average timescale required for the formation of a Boltzmann brain is vastly greater than the current age of the universe. In modern physics, Boltzmann brains can be formed either by quantum fluctuation, or by a thermal fluctuation generally involving nucleation. Via quantum fluctuation. By one calculation, a Boltzmann brain would appear as a quantum fluctuation in the vacuum after a time interval of formula_0 years. This fluctuation can occur even in a true Minkowski vacuum (a flat spacetime vacuum lacking vacuum energy). Quantum mechanics heavily favors smaller fluctuations that "borrow" the least amount of energy from the vacuum. Typically, a quantum Boltzmann brain would suddenly appear from the vacuum (alongside an equivalent amount of virtual antimatter), remain only long enough to have a single coherent thought or observation, and then disappear into the vacuum as suddenly as it appeared. Such a brain is completely self-contained, and can never radiate energy out to infinity. Via nucleation. Current evidence suggests that the vacuum permeating the observable universe is not a Minkowski space, but rather a de Sitter space with a positive cosmological constant. In a de Sitter vacuum (but not in a Minkowski vacuum), a Boltzmann brain can form via nucleation of non-virtual particles gradually assembled by chance from the Hawking radiation emitted from the de Sitter space's bounded cosmological horizon. One estimate for the average time required until nucleation is around formula_1 years. 
A typical nucleated Boltzmann brain will cool off to absolute zero and eventually completely decay, as any isolated object would in the vacuum of space. Unlike the quantum fluctuation case, the Boltzmann brain will radiate energy out to infinity. In nucleation, the most common fluctuations are as close to thermal equilibrium overall as possible given whatever arbitrary criteria are provided for labeling a fluctuation a "Boltzmann brain". Theoretically a Boltzmann brain can also form, albeit again with a tiny probability, at any time during the matter-dominated early universe. Modern reactions to the Boltzmann brain problem. The consensus amongst cosmologists is that some yet-to-be-revealed error is hinted at by the surprising calculation that Boltzmann brains should vastly outnumber normal human brains. Sean Carroll states "We're not arguing that Boltzmann Brains exist—we're trying to avoid them." Carroll has stated that the hypothesis of being a Boltzmann brain results in "cognitive instability". Because, he argues, it would take longer than the current age of the universe for a brain to form, and yet it thinks that it observes that it exists in a younger universe, and thus this shows that memories and reasoning processes would be untrustworthy if it were indeed a Boltzmann brain. Seth Lloyd has stated, "They fail the Monty Python test: Stop that! That's too silly!" A "New Scientist" journalist summarizes that "The starting point for our understanding of the universe and its behavior is that humans, not disembodied brains, are typical observers". Some argue that brains produced via quantum fluctuation, and maybe even brains produced via nucleation in the de Sitter vacuum, do not count as observers. Quantum fluctuations are easier to exclude than nucleated brains, as quantum fluctuations can more easily be targeted by straightforward criteria (such as their lack of interaction with the environment at infinity). Carroll believes that a better understanding of the measurement problem in quantum mechanics would show that some vacuum states have no dynamical evolution and cannot support nucleated brains, nor any other type of observer. Some cosmologists believe that a better understanding of the degrees of freedom in the quantum vacuum of holographic string theory can solve the Boltzmann brain problem. American theoretical physicist and mathematician Brian Greene states: "I am confident that I am not a Boltzmann brain. However, we want our theories to similarly concur that we are not Boltzmann brains, but so far it has proved surprisingly difficult for them to do so". In single-universe scenarios. In a single de Sitter universe with a cosmological constant, and starting from any finite spatial slice, the number of "normal" observers is finite and bounded by the heat death of the universe. If the universe lasts forever, the number of nucleated Boltzmann brains is, in most models, infinite; cosmologists such as Alan Guth worry that this would make it seem "infinitely unlikely for us to be normal brains". One caveat is that if the universe is a false vacuum that locally decays into a Minkowski or a Big Crunch-bound anti-de Sitter space in less than 20 billion years, then infinite Boltzmann nucleation is avoided. (If the average local false vacuum decay rate is over 20 billion years, Boltzmann brain nucleation is still infinite, as the universe increases in size faster than local vacuum collapses destroy the portions of the universe within the collapses' future light cones). 
Proposed hypothetical mechanisms to destroy the universe within that timeframe range from superheavy gravitinos to a heavier-than-observed top quark triggering "death by Higgs". If no cosmological constant exists, and if the presently observed vacuum energy is from quintessence that will eventually completely dissipate, then infinite Boltzmann nucleation is also avoided. In eternal inflation. One class of solutions to the Boltzmann brain problem makes use of differing approaches to the measure problem in cosmology: in infinite multiverse theories, the ratio of normal observers to Boltzmann brains depends on how infinite limits are taken. Measures might be chosen to avoid appreciable fractions of Boltzmann brains. Unlike the single-universe case, one challenge in finding a global solution in eternal inflation is that all possible string landscapes must be summed over; in some measures, having even a small fraction of universes permeated with Boltzmann brains causes the measure of the multiverse as a whole to be dominated by Boltzmann brains. The measurement problem in cosmology also grapples with the ratio of normal observers to abnormally early observers. In measures such as the proper time measure that suffer from an extreme "youngness" problem, the typical observer is a "Boltzmann baby" formed by rare fluctuation in an extremely hot, early universe. Identifying whether oneself is a Boltzmann observer. In Boltzmann brain scenarios, the ratio of Boltzmann brains to "normal observers" is astronomically large. Almost any relevant subset of Boltzmann brains, such as "brains embedded within functioning bodies", "observers who believe they are perceiving 3 K microwave background radiation through telescopes", "observers who have a memory of coherent experiences", or "observers who have the same series of experiences as me", also vastly outnumber "normal observers". Therefore, under most models of consciousness, it is unclear that one can reliably conclude that oneself is not such a "Boltzmann observer", in a case where Boltzmann brains dominate the universe. Even under "content externalism" models of consciousness, Boltzmann observers living in a consistent Earth-sized fluctuation over the course of the past several years outnumber the "normal observers" spawned before a universe's "heat death". As stated earlier, most Boltzmann brains have "abnormal" experiences; Feynman has pointed out that, if one knows oneself to be a typical Boltzmann brain, one does not expect "normal" observations to continue in the future. In other words, in a Boltzmann-dominated universe, most Boltzmann brains have "abnormal" experiences, but most observers with only "normal" experiences are Boltzmann brains, due to the overwhelming vastness of the population of Boltzmann brains in such a universe. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "10^{10^{50}}" }, { "math_id": 1, "text": "10^{10^{69}}" } ]
https://en.wikipedia.org/wiki?curid=9638200
964161
Modulus of continuity
In mathematical analysis, a modulus of continuity is a function ω : [0, ∞] → [0, ∞] used to measure quantitatively the uniform continuity of functions. So, a function "f" : "I" → R admits ω as a modulus of continuity if formula_0 for all "x" and "y" in the domain of "f". Since moduli of continuity are required to be infinitesimal at 0, a function turns out to be uniformly continuous if and only if it admits a modulus of continuity. Moreover, relevance to the notion is given by the fact that sets of functions sharing the same modulus of continuity are exactly equicontinuous families. For instance, the modulus ω("t") := "kt" describes the k-Lipschitz functions, the moduli ω("t") := "kt"α describe the Hölder continuity, the modulus ω("t") := "kt"(|log "t"|+1) describes the almost Lipschitz class, and so on. In general, the role of ω is to fix some explicit functional dependence of ε on δ in the (ε, δ) definition of uniform continuity. The same notions generalize naturally to functions between metric spaces. Moreover, a suitable local version of these notions allows to describe quantitatively the continuity at a point in terms of moduli of continuity. A special role is played by concave moduli of continuity, especially in connection with extension properties, and with approximation of uniformly continuous functions. For a function between metric spaces, it is equivalent to admit a modulus of continuity that is either concave, or subadditive, or uniformly continuous, or sublinear (in the sense of growth). Actually, the existence of such special moduli of continuity for a uniformly continuous function is always ensured whenever the domain is either a compact, or a convex subset of a normed space. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios formula_1 are uniformly bounded for all pairs ("x", "x"′) bounded away from the diagonal of "X x X". The functions with the latter property constitute a special subclass of the uniformly continuous functions, that in the following we refer to as the "special uniformly continuous" functions. Real-valued special uniformly continuous functions on the metric space "X" can also be characterized as the set of all functions that are restrictions to "X" of uniformly continuous functions over any normed space isometrically containing "X". Also, it can be characterized as the uniform closure of the Lipschitz functions on "X". Formal definition. Formally, a modulus of continuity is any increasing real-extended valued function ω : [0, ∞] → [0, ∞], vanishing at 0 and continuous at 0, that is formula_2 Moduli of continuity are mainly used to give a quantitative account both of the continuity at a point, and of the uniform continuity, for functions between metric spaces, according to the following definitions. A function "f" : ("X", "dX") → ("Y", "dY") admits ω as (local) modulus of continuity at the point "x" in "X" if and only if, formula_3 Also, "f" admits ω as (global) modulus of continuity if and only if, formula_4 One equivalently says that ω is a modulus of continuity (resp., at "x") for "f", or shortly, "f" is ω-continuous (resp., at "x"). Here, we mainly treat the global notion. Special moduli of continuity. Special moduli of continuity also reflect certain global properties of functions such as extendibility and uniform approximation. In this section we mainly deal with moduli of continuity that are concave, or subadditive, or uniformly continuous, or sublinear. 
These properties are essentially equivalent in that, for a modulus ω (more precisely, its restriction on [0, ∞)) each of the following implies the next: ω is concave; ω is subadditive; ω is uniformly continuous; ω is sublinear, that is, there are constants "a" and "b" such that ω("t") ≤ "at" + "b" for all "t"; ω is dominated by a concave modulus of continuity formula_15, in the sense that formula_16 for all "t". Thus, for a function "f" between metric spaces it is equivalent to admit a modulus of continuity which is either concave, or subadditive, or uniformly continuous, or sublinear. In this case, the function "f" is sometimes called a "special uniformly continuous" map. This is always true in case of either compact or convex domains. Indeed, a uniformly continuous map "f" : "C" → "Y" defined on a convex set "C" of a normed space "E" always admits a subadditive modulus of continuity; in particular, real-valued as a function ω : [0, ∞) → [0, ∞). Indeed, it is immediate to check that the optimal modulus of continuity ω"f" defined above is subadditive if the domain of "f" is convex: we have, for all "s" and "t": formula_17 Note that as an immediate consequence, any uniformly continuous function on a convex subset of a normed space has a sublinear growth: there are constants "a" and "b" such that |"f"("x")| ≤ "a"|"x"|+"b" for all "x". However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios formula_18 are uniformly bounded for all pairs ("x", "x"′) with distance bounded away from zero; this condition is certainly satisfied by any bounded uniformly continuous function; hence in particular, by any continuous function on a compact metric space. Sublinear moduli, and bounded perturbations from Lipschitz. A sublinear modulus of continuity can easily be found for any uniformly continuous function which is a bounded perturbation of a Lipschitz function: if "f" is a uniformly continuous function with modulus of continuity ω, and "g" is a "k" Lipschitz function with uniform distance "r" from "f", then "f" admits the sublinear modulus of continuity min{ω("t"), 2"r"+"kt"}. Conversely, at least for real-valued functions, any special uniformly continuous function is a bounded, uniformly continuous perturbation of some Lipschitz function; indeed more is true as shown below (Lipschitz approximation). Subadditive moduli, and extendibility. The above property for uniformly continuous functions on convex domains admits a sort of converse at least in the case of real-valued functions: that is, every special uniformly continuous real-valued function "f" : "X" → R defined on a metric space "X", which is a metric subspace of a normed space "E", admits extensions over "E" that preserve any subadditive modulus ω of "f". The least and the greatest of such extensions are respectively: formula_19 As remarked, any subadditive modulus of continuity is uniformly continuous: in fact, it admits itself as a modulus of continuity. Therefore, "f"∗ and "f*" are respectively inferior and superior envelopes of ω-continuous families; hence still ω-continuous. Incidentally, by the Kuratowski embedding any metric space is isometric to a subset of a normed space. Hence, special uniformly continuous real-valued functions are essentially the restrictions of uniformly continuous functions on normed spaces. In particular, this construction provides a quick proof of the Tietze extension theorem on compact metric spaces. However, for mappings with values in more general Banach spaces than R, the situation is quite more complicated; the first non-trivial result in this direction is the Kirszbraun theorem. Concave moduli and Lipschitz approximation. 
Every special uniformly continuous real-valued function "f" : "X" → R defined on the metric space "X" is uniformly approximable by means of Lipschitz functions. Moreover, the speed of convergence in terms of the Lipschitz constants of the approximations is strictly related to the modulus of continuity of "f". Precisely, let ω be the minimal concave modulus of continuity of "f", which is formula_20 Let δ("s") be the uniform distance between the function "f" and the set Lip"s" of all Lipschitz real-valued functions on "C" having Lipschitz constant "s" : formula_21 Then the functions ω("t") and δ("s") can be related with each other via a Legendre transformation: more precisely, the functions 2δ("s") and −ω(−"t") (suitably extended to +∞ outside their domains of finiteness) are a pair of conjugated convex functions, for formula_22 formula_23 Since ω("t") = o(1) for "t" → 0+, it follows that δ("s") = o(1) for "s" → +∞, that exactly means that "f" is uniformly approximable by Lipschitz functions. Correspondingly, an optimal approximation is given by the functions formula_24 each function "fs" has Lipschitz constant "s" and formula_25 in fact, it is the greatest "s"-Lipschitz function that realize the distance δ("s"). For example, the α-Hölder real-valued functions on a metric space are characterized as those functions that can be uniformly approximated by "s"-Lipschitz functions with speed of convergence formula_26 while the almost Lipschitz functions are characterized by an exponential speed of convergence formula_27 History. Steffens (2006, p. 160) attributes the first usage of omega for the modulus of continuity to Lebesgue (1909, p. 309/p. 75) where omega refers to the oscillation of a Fourier transform. De la Vallée Poussin (1919, pp. 7-8) mentions both names (1) "modulus of continuity" and (2) "modulus of oscillation" and then concludes "but we choose (1) to draw attention to the usage we will make of it". The translation group of "Lp" functions, and moduli of continuity "Lp".. Let 1 ≤ "p"; let "f" : R"n" → R a function of class "Lp", and let "h" ∈ R"n". The "h"-translation of "f", the function defined by (τ"h""f")("x") := "f"("x"−"h"), belongs to the "Lp" class; moreover, if 1 ≤ "p" &lt; ∞, then as ǁ"h"ǁ → 0 we have: formula_30 Therefore, since translations are in fact linear isometries, also formula_31 as ǁ"h"ǁ → 0, uniformly on "v" ∈ R"n". In other words, the map "h" → τ"h" defines a strongly continuous group of linear isometries of "Lp". In the case "p" = ∞ the above property does not hold in general: actually, it exactly reduces to the uniform continuity, and defines the uniform continuous functions. This leads to the following definition, that generalizes the notion of a modulus of continuity of the uniformly continuous functions: a modulus of continuity "Lp" for a measurable function "f" : "X" → R is a modulus of continuity ω : [0, ∞] → [0, ∞] such that formula_32 This way, moduli of continuity also give a quantitative account of the continuity property shared by all "Lp" functions. Modulus of continuity of higher orders. It can be seen that formal definition of the modulus uses notion of finite difference of first order: formula_33 If we replace that difference with a difference of order "n", we get a modulus of continuity of order "n": formula_34 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
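The Lipschitz approximation described above can be checked numerically. The following Python sketch (an illustration added here, not part of the article; the test function f(x) = √x, the grid and the constant s = 4 are arbitrary choices) builds the greatest s-Lipschitz minorant g_s(x) = inf_y {f(y) + s|x − y|} on a grid and compares sup(f − g_s) with the Legendre-type prediction sup_t(ω(t) − st) = 1/(4s) for the concave modulus ω(t) = √t.

```python
import numpy as np

# f(x) = sqrt(x) on [0, 1] has (optimal concave) modulus of continuity w(t) = sqrt(t).
xs = np.linspace(0.0, 1.0, 2001)
f = np.sqrt(xs)
s = 4.0  # Lipschitz constant of the approximant

# Greatest s-Lipschitz function below f:  g_s(x) = inf_y ( f(y) + s|x - y| )
g = np.array([np.min(f + s * np.abs(x - xs)) for x in xs])

two_delta = np.max(f - g)        # should approach sup_t ( w(t) - s*t ) = 1/(4s)
f_s = g + two_delta / 2          # optimal s-Lipschitz approximation described above

print("numerical  2*delta(s):", two_delta)
print("predicted  2*delta(s):", 1 / (4 * s))
print("uniform distance ||f - f_s||:", np.max(np.abs(f - f_s)))  # ~ delta(s) = 1/(8s)
```

Shifting the minorant up by δ(s) gives the optimal s-Lipschitz approximation f_s of the text, and the printed uniform distance matches δ(s) up to the discretization error of the grid.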
[ { "math_id": 0, "text": "|f(x)-f(y)|\\leq\\omega(|x-y|)," }, { "math_id": 1, "text": "\\frac{d_Y(f(x),f(x'))}{d_X(x,x')}" }, { "math_id": 2, "text": "\\lim_{t\\to0}\\omega(t)=\\omega(0)=0." }, { "math_id": 3, "text": "\\forall x'\\in X: d_Y(f(x),f(x'))\\leq\\omega(d_X(x,x'))." }, { "math_id": 4, "text": "\\forall x,x'\\in X: d_Y(f(x),f(x'))\\leq\\omega(d_X(x,x'))." }, { "math_id": 5, "text": "g\\circ f:X\\to Z" }, { "math_id": 6, "text": "\\omega_2\\circ\\omega_1" }, { "math_id": 7, "text": "\\|g\\|_\\infty\\omega_1+\\|f\\|_\\infty \\omega_2" }, { "math_id": 8, "text": "\\{f_\\lambda\\}_{\\lambda\\in\\Lambda}" }, { "math_id": 9, "text": "\\inf_{\\lambda\\in\\Lambda}f_\\lambda" }, { "math_id": 10, "text": "\\sup_{\\lambda\\in\\Lambda}f_\\lambda" }, { "math_id": 11, "text": "\\omega_1(t) := \\sup_{s\\leq t}\\omega(s)" }, { "math_id": 12, "text": "\\omega_2(t):=\\frac{1}{t} \\int_t^{2t}\\omega_1(s)ds" }, { "math_id": 13, "text": "\\omega_f(t) := \\sup\\{ d_Y(f(x),f(x')):x\\in X,x'\\in X,d_X(x,x')\\le t \\} ,\\quad\\forall t\\geq0." }, { "math_id": 14, "text": "\\omega_f(t;x):=\\sup\\{ d_Y(f(x),f(x')): x'\\in X,d_X(x,x')\\le t \\},\\quad\\forall t\\geq0." }, { "math_id": 15, "text": "\\tilde\\omega" }, { "math_id": 16, "text": "\\omega(t)\\leq \\tilde\\omega(t)" }, { "math_id": 17, "text": "\\begin{align}\n\\omega_f(s+t) &=\\sup_{|x-x'|\\le t+s} d_Y(f(x),f(x')) \\\\\n&\\leq \\sup_{|x-x'|\\le t+s}\\left\\{d_Y\\left( f(x), f\\left(x-t\\frac{x-x'}{|x-x'|}\\right)\\right) + d_Y\\left( f\\left(x-t\\frac{x-x'}{|x-x'|}\\right), f(x')\\right )\\right\\} \\\\\n&\\leq \\omega_f(t)+\\omega_f(s).\n\\end{align}" }, { "math_id": 18, "text": "d_Y(f(x),f(x'))/d_X(x,x')" }, { "math_id": 19, "text": "\\begin{align}\nf_*(x) &:=\\sup_{y\\in X}\\left\\{f(y)-\\omega(|x-y|)\\right\\}, \\\\\nf^*(x) &:=\\inf_{y\\in X}\\left\\{f(y)+\\omega(|x-y|)\\right\\}.\n\\end{align}" }, { "math_id": 20, "text": "\\omega(t)=\\inf\\big\\{at+b\\, :\\, a>0,\\, b>0,\\, \\forall x\\in X,\\, \\forall x'\\in X\\,\\, |f(x)-f(x')|\\leq ad(x,x')+b\\big\\}." }, { "math_id": 21, "text": "\\delta(s):=\\inf\\big\\{\\|f-u\\|_{\\infty,X}\\,:\\, u\\in \\mathrm{Lip}_s\\big\\}\\leq+\\infty." }, { "math_id": 22, "text": "2\\delta(s)=\\sup_{t\\geq0}\\left\\{\\omega(t)-st\\right\\}," }, { "math_id": 23, "text": "\\omega(t)=\\inf_{s\\geq0}\\left\\{2\\delta(s)+st\\right\\}." }, { "math_id": 24, "text": "f_s:=\\delta(s)+\\inf_{y\\in X}\\{f(y)+sd(x,y)\\}, \\quad \\mathrm{for} \\ s\\in\\mathrm{dom}(\\delta):" }, { "math_id": 25, "text": "\\|f-f_s\\|_{\\infty,X}=\\delta(s);" }, { "math_id": 26, "text": "O(s^{-\\frac{\\alpha}{1-\\alpha}})," }, { "math_id": 27, "text": "O(e^{-as})." }, { "math_id": 28, "text": " |P| := \\max_{0\\le i<n} (t_{i+1}-t_i) " }, { "math_id": 29, "text": "S^*(f;P) - S_*(f;P) \\leq (b-a) \\omega(|P|)." }, { "math_id": 30, "text": "\\|\\tau_h f - f\\|_p=o(1)." }, { "math_id": 31, "text": "\\|\\tau_{v+h} f - \\tau_v f\\|_p=o(1)," }, { "math_id": 32, "text": "\\|\\tau_h f - f\\|_p\\leq \\omega(h)." }, { "math_id": 33, "text": "\\omega_f(\\delta)=\\omega(f, \\delta)=\\sup\\limits_{x; |h|<\\delta;}\\left|\\Delta_h(f,x)\\right|." }, { "math_id": 34, "text": "\\omega_n(f, \\delta)=\\sup\\limits_{x; |h|<\\delta;}\\left|\\Delta^n_h(f,x)\\right|." } ]
https://en.wikipedia.org/wiki?curid=964161
964177
Maurer–Cartan form
Mathematical concept In mathematics, the Maurer–Cartan form for a Lie group "G" is a distinguished differential one-form on "G" that carries the basic infinitesimal information about the structure of "G". It was much used by Élie Cartan as a basic ingredient of his method of moving frames, and bears his name together with that of Ludwig Maurer. As a one-form, the Maurer–Cartan form is peculiar in that it takes its values in the Lie algebra associated to the Lie group "G". The Lie algebra is identified with the tangent space of "G" at the identity, denoted T"e""G". The Maurer–Cartan form "ω" is thus a one-form defined globally on "G" which is a linear mapping of the tangent space T"g""G" at each "g" ∈ "G" into T"e""G". It is given as the pushforward of a vector in T"g""G" along the left-translation in the group: formula_0 Motivation and interpretation. A Lie group acts on itself by multiplication under the mapping formula_1 A question of importance to Cartan and his contemporaries was how to identify a principal homogeneous space of "G". That is, a manifold "P" identical to the group "G", but without a fixed choice of unit element. This motivation came, in part, from Felix Klein's Erlangen programme where one was interested in a notion of symmetry on a space, where the symmetries of the space were transformations forming a Lie group. The geometries of interest were homogeneous spaces "G"/"H", but usually without a fixed choice of origin corresponding to the coset "eH". A principal homogeneous space of "G" is a manifold "P" abstractly characterized by having a free and transitive action of "G" on "P". The Maurer–Cartan form gives an appropriate "infinitesimal" characterization of the principal homogeneous space. It is a one-form defined on "P" satisfying an integrability condition known as the Maurer–Cartan equation. Using this integrability condition, it is possible to define the exponential map of the Lie algebra and in this way obtain, locally, a group action on "P". Construction. Intrinsic construction. Let g ≅ T"e""G" be the tangent space of a Lie group "G" at the identity (its Lie algebra). "G" acts on itself by left translation formula_2 such that for a given "g" ∈ "G" we have formula_3 and this induces a map of the tangent bundle to itself: formula_4 A left-invariant vector field is a section "X" of T"G" such that formula_5 The Maurer–Cartan form "ω" is a g-valued one-form on "G" defined on vectors "v" ∈ T"g""G" by the formula formula_6 Extrinsic construction. If "G" is embedded in GL("n") by a matrix valued mapping "g" = ("g""ij"), then one can write "ω" explicitly as formula_7 In this sense, the Maurer–Cartan form is always the left logarithmic derivative of the identity map of "G". Characterization as a connection. If we regard the Lie group "G" as a principal bundle over a manifold consisting of a single point then the Maurer–Cartan form can also be characterized abstractly as the unique principal connection on the principal bundle "G". Indeed, it is the unique g ≅ T"e""G" valued 1-form on "G" satisfying (1) formula_8 (2) formula_9 where "R""h"* is the pullback of forms along the right-translation in the group and Ad("h") is the adjoint action on the Lie algebra. Properties. If "X" is a left-invariant vector field on "G", then "ω"("X") is constant on "G". Furthermore, if "X" and "Y" are both left-invariant, then formula_10 where the bracket on the left-hand side is the Lie bracket of vector fields, and the bracket on the right-hand side is the bracket on the Lie algebra g. 
(This may be used as the definition of the bracket on g.) These facts may be used to establish an isomorphism of Lie algebras formula_11 By the definition of the exterior derivative, if "X" and "Y" are arbitrary vector fields then formula_12 Here "ω"("Y") is the g-valued function obtained by duality from pairing the one-form "ω" with the vector field "Y", and "X"("ω"("Y")) is the Lie derivative of this function along "X". Similarly "Y"("ω"("X")) is the Lie derivative along "Y" of the g-valued function "ω"("X"). In particular, if "X" and "Y" are left-invariant, then formula_13 so formula_14 but the left-invariant fields span the tangent space at any point (the push-forward of a basis in T"e""G" under a diffeomorphism is still a basis), so the equation is true for any pair of vector fields "X" and "Y". This is known as the Maurer–Cartan equation. It is often written as formula_15 Here [ω, ω] denotes the bracket of Lie algebra-valued forms. Maurer–Cartan frame. One can also view the Maurer–Cartan form as being constructed from a Maurer–Cartan frame. Let "E"i be a basis of sections of T"G" consisting of left-invariant vector fields, and "θ""j" be the dual basis of sections of T*"G" such that "θ""j"("E""i") = "δ""i""j", the Kronecker delta. Then "E""i" is a Maurer–Cartan frame, and "θ""i" is a Maurer–Cartan coframe. Since "E""i" is left-invariant, applying the Maurer–Cartan form to it simply returns the value of "E""i" at the identity. Thus "ω"("E""i") = "E""i"("e") ∈ g. Thus, the Maurer–Cartan form can be written (1) "ω" = Σ"i" "E""i"("e") ⊗ "θ""i". Suppose that the Lie brackets of the vector fields "E""i" are given by formula_16 The quantities "c""ij""k" are the structure constants of the Lie algebra (relative to the basis "E""i"). A simple calculation, using the definition of the exterior derivative "d", yields formula_17 so that by duality (2) d"θ""i" = −(1/2) Σ"jk" "c""jk""i" "θ""j" ∧ "θ""k". This equation is also often called the Maurer–Cartan equation. To relate it to the previous definition, which only involved the Maurer–Cartan form "ω", take the exterior derivative of (1): formula_18 The frame components are given by formula_19 which establishes the equivalence of the two forms of the Maurer–Cartan equation. On a homogeneous space. Maurer–Cartan forms play an important role in Cartan's method of moving frames. In this context, one may view the Maurer–Cartan form as a 1-form defined on the tautological principal bundle associated with a homogeneous space. If "H" is a closed subgroup of "G", then "G"/"H" is a smooth manifold of dimension dim "G" − dim "H". The quotient map "G" → "G"/"H" induces the structure of an "H"-principal bundle over "G"/"H". The Maurer–Cartan form on the Lie group "G" yields a flat Cartan connection for this principal bundle. In particular, if "H" = {"e"}, then this Cartan connection is an ordinary connection form, and we have formula_20 which is the condition for the vanishing of the curvature. In the method of moving frames, one sometimes considers a local section of the tautological bundle, say "s" : "G"/"H" → "G". (If working on a submanifold of the homogeneous space, then "s" need only be a local section over the submanifold.) The pullback of the Maurer–Cartan form along "s" defines a non-degenerate g-valued 1-form "θ" = "s"*"ω" over the base. 
The Maurer–Cartan equation implies that formula_21 Moreover, if "s""U" and "s""V" are a pair of local sections defined, respectively, over open sets "U" and "V", then they are related by an element of "H" in each fibre of the bundle: formula_22 The differential of "h" gives a compatibility condition relating the two sections on the overlap region: formula_23 where "ω""H" is the Maurer–Cartan form on the group "H". A system of non-degenerate g-valued 1-forms "θ""U" defined on open sets in a manifold "M", satisfying the Maurer–Cartan structural equations and the compatibility conditions endows the manifold "M" locally with the structure of the homogeneous space "G"/"H". In other words, there is locally a diffeomorphism of "M" into the homogeneous space, such that "θ""U" is the pullback of the Maurer–Cartan form along some section of the tautological bundle. This is a consequence of the existence of primitives of the Darboux derivative. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
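For a matrix group, the extrinsic formula ω = g⁻¹ dg can be checked directly. The following numerical sketch (an added illustration, not from the article; the generators and the curve are arbitrary choices, and a finite difference stands in for the exact derivative) takes a curve in SL(2, R), whose Lie algebra consists of traceless matrices, and verifies that g⁻¹ġ is numerically traceless, i.e. that the Maurer–Cartan form carries tangent vectors into the Lie algebra.

```python
import numpy as np
from scipy.linalg import expm

# Two traceless generators of sl(2, R).
A = np.array([[0.3, 1.1], [0.7, -0.3]])
B = np.array([[0.0, -0.9], [0.9, 0.0]])

def g(t):
    # An arbitrary smooth curve in SL(2, R): products of exponentials of sl(2) elements.
    return expm(t * A) @ expm(t**2 * B)

t, h = 0.8, 1e-6
gdot = (g(t + h) - g(t - h)) / (2 * h)        # finite-difference tangent vector at g(t)
omega_of_gdot = np.linalg.inv(g(t)) @ gdot    # extrinsic Maurer-Cartan form: g^{-1} dg

print("det g(t)              :", np.linalg.det(g(t)))      # ~ 1, so g(t) lies in SL(2, R)
print("trace of g^{-1} g'(t) :", np.trace(omega_of_gdot))  # ~ 0, so it lies in sl(2, R)
```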
[ { "math_id": 0, "text": "\\omega(v) = (L_{g^{-1}})_* v,\\quad v\\in T_gG." }, { "math_id": 1, "text": "G\\times G \\ni (g,h) \\mapsto gh \\in G." }, { "math_id": 2, "text": " L : G \\times G \\to G" }, { "math_id": 3, "text": " L_g : G \\to G \\quad \\mbox{where} \\quad L_g(h) = gh," }, { "math_id": 4, "text": "(L_g)_*:T_hG\\to T_{gh}G." }, { "math_id": 5, "text": "(L_g)_{*}X = X \\quad \\forall g \\in G." }, { "math_id": 6, "text": "\\omega_g(v)=(L_{g^{-1}})_*v." }, { "math_id": 7, "text": "\\omega_g = g^{-1} \\,dg." }, { "math_id": 8, "text": "\\omega_e = \\mathrm{id} : T_eG\\rightarrow {\\mathfrak g},\\text{ and}" }, { "math_id": 9, "text": "\\forall g \\in G \\quad \\omega_g = \\mathrm{Ad}(h)(R_h^*\\omega_e),\\text{ where }h=g^{-1}," }, { "math_id": 10, "text": "\\omega([X,Y])=[\\omega(X),\\omega(Y)]" }, { "math_id": 11, "text": "\\mathfrak{g}=T_eG\\cong \\{\\hbox{left-invariant vector fields on G}\\}." }, { "math_id": 12, "text": "d\\omega(X,Y)=X(\\omega(Y))-Y(\\omega(X))-\\omega([X,Y])." }, { "math_id": 13, "text": "X(\\omega(Y))=Y(\\omega(X))=0," }, { "math_id": 14, "text": "d\\omega(X,Y)+[\\omega(X),\\omega(Y)]=0" }, { "math_id": 15, "text": "d\\omega + \\frac{1}{2}[\\omega,\\omega]=0." }, { "math_id": 16, "text": "[E_i,E_j]=\\sum_k{c_{ij}}^kE_k." }, { "math_id": 17, "text": "d\\theta^i(E_j,E_k) = -\\theta^i([E_j,E_k]) = -\\sum_r {c_{jk}}^r\\theta^i(E_r) = -{c_{jk}}^i = -\\frac{1}{2}({c_{jk}}^i - {c_{kj}}^i)," }, { "math_id": 18, "text": "d\\omega = \\sum_i E_i(e)\\otimes d\\theta^i\\,=\\,-\\frac12 \\sum_{ijk}{c_{jk}}^iE_i(e)\\otimes\\theta^j\\wedge\\theta^k." }, { "math_id": 19, "text": "d\\omega(E_j,E_k) = -\\sum_i {c_{jk}}^iE_i(e) = -[E_j(e),E_k(e)]=-[\\omega(E_j),\\omega(E_k)]," }, { "math_id": 20, "text": "d\\omega+\\omega\\wedge\\omega=0" }, { "math_id": 21, "text": "d\\theta + \\frac{1}{2}[\\theta,\\theta]=0." }, { "math_id": 22, "text": "h_{UV}(x) = s_V\\circ s_U^{-1}(x),\\quad x \\in U \\cap V." }, { "math_id": 23, "text": "\\theta_V = \\operatorname{Ad}(h^{-1}_{UV})\\theta_U + (h_{UV})^* \\omega_H " } ]
https://en.wikipedia.org/wiki?curid=964177
964312
12AX7
Miniature high-gain dual triode vacuum tube 12AX7 (also known as ECC83) is a miniature dual-triode vacuum tube with high voltage gain. Developed around 1946 by RCA engineers in Camden, New Jersey, under developmental number A-4522, it was released for public sale under the 12AX7 identifier on September 15, 1947. The 12AX7 was originally intended as replacement for the 6SL7 family of dual-triode amplifier tubes for audio applications. As a popular choice for guitar tube amplifiers, its ongoing use in such equipment makes it one of the few small-signal vacuum tubes in continuous production since it was introduced. History. The 12AX7 is a twin triode basically composed of two of the triodes from a 6AV6, a double diode triode. The 6AV6 is a miniature repackaging (with just a single cathode) of the triode and twin diodes from the octal 6SQ7 (a double-diode triode used in AM radios), which itself is very similar to the older type 75 triode-diode dating from 1930. Application. The 12AX7 is a high-gain (typical amplification factor 100), low-plate-current triode best suited for low-level audio voltage amplification. In this role it is widely used for the preamplifier (input and mid-level) stages of audio amplifiers. It has relatively high Miller capacitance, making it unsuitable for radio-frequency use. Typically a 12AX7 triode is configured with a high-value plate resistor, 100 kohms in most guitar amps and 220 kΩ or more in high-fidelity equipment. Grid bias is most often provided by a cathode resistor. If the cathode resistor is unbypassed, negative feedback is introduced and each half of a 12AX7 provides a typical voltage gain of about 30; the amplification factor is basically twice the maximum stage gain, as the plate impedance must be matched. Thus half the voltage is across the tube at rest, half across the load resistor. The cathode resistor can be bypassed to reduce or eliminate AC negative feedback and thereby increase gain; maximum gain is about 60 times with a 100k plate load, and a center biased and bypassed cathode, and higher with a larger plate load. formula_0 Where formula_1 = voltage gain, formula_2 is the amplification factor of the valve, formula_3 is the internal plate resistance, formula_4 is the cathode resistor and formula_5 is the parallel combination of formula_6 (external plate resistor) and formula_7. If the cathode resistor is bypassed, use formula_8. The initial “12” in the designator implies a 12-volt heater requirement; however, the tube has a center-tapped heater so it can be used in either 6.3-V or 12.6-V heater circuits. Similar twin-triode designs. The 12AX7 is the most common member of what eventually became a large family of twin-triode vacuum tubes, manufactured all over the world, all sharing the same pinout (EIA 9A). Most use heaters which can be optionally wired in series (12.6V, 150 mA) or parallel (6.3V, 300 mA). Other tubes, which in some cases can be used interchangeably in an emergency or for different performance characteristics, include the 12AT7, 12AU7, 12AV7, 12AY7, and the low-voltage 12U7, plus many four-digit EIA series dual triodes. They span a wide range of voltage gain and transconductance. Different versions of each were designed for enhanced ruggedness, low microphonics, stability, lifespan, etc. Those other designs offer lower voltage gain (traded off for higher plate current) than the 12AX7 (which has a voltage gain or formula_1 of 100), and are more suitable for high-frequency applications. 
Some American designs similar to the 12AX7: Although commonly known in Europe by its Mullard–Philips tube designation of ECC83, other European variations also exist including the low-noise versions 12AX7A, 12AD7, 6681, 7025, and 7729; European versions B339, B759, CV492, CV4004, CV8156, CV8222, ECC803, ECC803S, E2164, and M8137; and the lower-gain low-noise versions 5751 and 6851, intended for avionics equipment. In European usage special-quality valves of some sort were often indicated by exchanging letters and digits in the name: the E83CC was a special-quality ECC83. In the US a "W" in the designation, as in 12AX7WA, designates the tube as complying with military grade, higher reliability specifications. The 'E' in the European designation classifies this as having a 6.3 volt heater, whereas the American designation of 12AX7 classifies it as having a 12.6 volt heater. It can, of course, be wired for operation off either voltage. Manufacturers. As of 2022, versions of the 12AX7/ECC83 are available from the following manufacturers: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
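The stage-gain formula given earlier can be evaluated for a typical preamplifier stage. In the Python sketch below (an added illustration; the 62.5 kΩ plate resistance, 1.5 kΩ cathode resistor and 470 kΩ following-stage load are assumed typical values, not figures from this article), the formula reproduces the "about 30" unbypassed and "about 60" bypassed gains mentioned above.

```python
# Voltage gain of a triode stage, A_v = mu * R_tot / (r_p + R_tot + R_k * (mu + 1)),
# where R_tot is the parallel combination of the plate resistor and the following load.
# The operating-point values below are illustrative assumptions for a 12AX7 stage.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def stage_gain(mu, r_p, R_plate, R_load, R_k):
    R_tot = parallel(R_plate, R_load)
    return mu * R_tot / (r_p + R_tot + R_k * (mu + 1))

mu  = 100.0     # amplification factor of a 12AX7
r_p = 62.5e3    # assumed internal plate resistance, ohms
R_p = 100e3     # plate (anode) resistor
R_l = 470e3     # assumed input impedance of the following stage
R_k = 1.5e3     # assumed cathode resistor

print("gain, unbypassed cathode:", round(stage_gain(mu, r_p, R_p, R_l, R_k), 1))  # ~ 28
print("gain, bypassed cathode  :", round(stage_gain(mu, r_p, R_p, R_l, 0.0), 1))  # ~ 57
```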
[ { "math_id": 0, "text": "A_v = \\mu \\times R_{tot} /(r_P + R_{tot} + (R_K \\times (\\mu + 1))" }, { "math_id": 1, "text": "A_v" }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "r_P" }, { "math_id": 4, "text": "R_K" }, { "math_id": 5, "text": "R_{tot}" }, { "math_id": 6, "text": "R_P" }, { "math_id": 7, "text": "R_{load}" }, { "math_id": 8, "text": "R_K = 0" } ]
https://en.wikipedia.org/wiki?curid=964312
964356
Cash and cash equivalents
Highly liquid, short-term assets Cash and cash equivalents (CCE) are the most liquid current assets found on a business's balance sheet. Cash equivalents are short-term commitments "with temporarily idle cash and easily convertible into a known cash amount". An investment normally counts as a cash equivalent when it has a short maturity period of 90 days or less, and can be included in the cash and cash equivalents balance from the date of acquisition when it carries an insignificant risk of changes in the asset value. If it has a maturity of more than 90 days, it is not considered a cash equivalent. Equity investments mostly are excluded from cash equivalents, unless they are essentially cash equivalents (e.g., preferred shares with a short maturity period and a specified recovery date). One of the company's crucial health indicators is its ability to generate cash and cash equivalents. So, a company with relatively high net assets but significantly less cash and cash equivalents is often taken as an indication of poor liquidity. For investors and companies, cash and cash equivalents are generally regarded as "low risk and low return" investments, and analysts can sometimes estimate a company's ability to pay its bills in a short period of time by comparing CCE and current liabilities. Nevertheless, this can happen only if there are receivables that can be converted into cash immediately. However, companies with a big value of cash and cash equivalents are targets for takeovers (by other companies), since their excess cash helps buyers to finance their acquisition. High cash reserves can also indicate that the company is not effective at deploying its CCE resources, whereas for big companies it might be a sign of preparation for substantial purchases. The opportunity cost of saving up CCE is the return on equity that the company could earn by investing in a new product or service or in the expansion of its business. Calculation of cash and cash equivalents. Cash and cash equivalents are listed on the balance sheet as "current assets", and their value changes as transactions occur. These changes are called "cash flows" and they are recorded in the accounting ledger. For instance, if a company spends $300 on purchasing goods, this is recorded as a $300 increase in its supplies and a $300 decrease in the value of CCE. These are a few formulas that are used by analysts to calculate transactions related to cash and cash equivalents: "Change in CCE = End of Year Cash and Cash Equivalents - Beginning of Year Cash and Cash Equivalents". Value of Cash and Cash Equivalents at the end of the period = Net Cash Flow + Value of CCE at the beginning of the period Restricted cash. Restricted cash is the amount of cash and cash equivalent items which are restricted for withdrawal and usage. The restrictions might include legally restricted deposits, which are held as compensating balances against short-term borrowings, contracts entered into with others or entity statements of intention with regard to specific deposits; nevertheless, time deposits and short-term certificates of deposit are excluded from legally restricted deposits. Restricted cash can also be set aside for other purposes such as expansion of the entity, dividend funds or "retirement of long-term debt". Depending on its materiality, restricted cash may be recorded as "cash" in the financial statement or it might be classified based on the date of availability of disbursements. 
Moreover, if cash is expected to be used within one year after the balance sheet date it can be classified as a "current asset", but over a longer period it is classified as a non-current asset. For example, a large machine-manufacturing company receives an advance payment (deposit) from its customer for a machine that should be produced and shipped to another country within 2 months. Based on the customer contract, the manufacturer should put the deposit into a separate bank account and not withdraw or use the money until the equipment is shipped and delivered. This is restricted cash, since the manufacturer holds the deposit but cannot use it for operations until the equipment is shipped. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
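The calculations described above amount to simple arithmetic on balance-sheet figures. The short Python sketch below (an added illustration with invented numbers; the current-ratio and quick-ratio formulas are the standard liquidity ratios listed with this article) computes the change in CCE and the ratios that compare CCE with current liabilities.

```python
# Worked example with made-up balance-sheet figures (not data from the article).
beginning_cce = 120_000   # cash and cash equivalents at the start of the year
net_cash_flow = 35_000    # net cash flow for the year
end_cce       = beginning_cce + net_cash_flow   # value of CCE at the end of the period
change_in_cce = end_cce - beginning_cce         # equals the net cash flow

current_assets      = 400_000
inventories         = 150_000
current_liabilities = 250_000

current_ratio = current_assets / current_liabilities
quick_ratio   = (current_assets - inventories) / current_liabilities
cash_ratio    = end_cce / current_liabilities   # compares CCE with current liabilities

print("end-of-year CCE :", end_cce)
print("change in CCE   :", change_in_cce)
print("current ratio   :", round(current_ratio, 2))
print("quick ratio     :", round(quick_ratio, 2))
print("cash ratio      :", round(cash_ratio, 2))
```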
[ { "math_id": 0, "text": "\\mbox{Current Ratio} = {\\mbox{Current Assets}\\over \\mbox{Current Liabilities}}" }, { "math_id": 1, "text": "\\mbox{Quick Ratio} = {{\\mbox{Current Assets} - \\mbox{Inventories}}\\over \\mbox{Current Liabilities}}" }, { "math_id": 2, "text": "\\mbox{Cash Ratio} = {\\mbox{Cash and Cash Equivalent}\\over \\mbox{Current Liabilities}}" } ]
https://en.wikipedia.org/wiki?curid=964356
9644681
Geometric programming
A geometric program (GP) is an optimization problem of the form formula_0 where formula_1 are posynomials and formula_2 are monomials. In the context of geometric programming (unlike standard mathematics), a monomial is a function from formula_3 to formula_4 defined as formula_5 where formula_6 and formula_7. A posynomial is any sum of monomials. Geometric programming is closely related to convex optimization: any GP can be made convex by means of a change of variables. GPs have numerous applications, including component sizing in IC design, aircraft design, maximum likelihood estimation for logistic regression in statistics, and parameter tuning of positive linear systems in control theory. Convex form. Geometric programs are not in general convex optimization problems, but they can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions. In particular, after performing the change of variables formula_8 and taking the log of the objective and constraint functions, the functions formula_9, i.e., the posynomials, are transformed into log-sum-exp functions, which are convex, and the functions formula_10, i.e., the monomials, become affine. Hence, this transformation transforms every GP into an equivalent convex program. In fact, this log-log transformation can be used to convert a larger class of problems, known as log-log convex programming (LLCP), into an equivalent convex form. Software. Several software packages exist to assist with formulating and solving geometric programs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
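The change of variables described above can be carried out explicitly on a toy problem. The Python sketch below (an added illustration, not from the article; it uses scipy's general-purpose SLSQP solver rather than a dedicated GP package, and the toy problem is chosen here for illustration) minimizes 1/(xy) subject to the posynomial constraint 0.5x + 0.5y ≤ 1 by solving the transformed convex problem in u = log x, v = log y.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny GP:  minimize 1/(x*y)  subject to  0.5*x + 0.5*y <= 1,  x, y > 0.
# After the change of variables u = log x, v = log y the objective becomes the
# affine function -u - v and the posynomial constraint becomes the convex
# log-sum-exp condition  log(0.5*e^u + 0.5*e^v) <= 0.

def objective(w):
    u, v = w
    return -(u + v)

def constraint(w):                      # scipy's 'ineq' convention: must be >= 0
    u, v = w
    return -np.log(0.5 * np.exp(u) + 0.5 * np.exp(v))

res = minimize(objective, x0=[-1.0, -0.5], method="SLSQP",
               constraints=[{"type": "ineq", "fun": constraint}])

u, v = res.x
x, y = np.exp(u), np.exp(v)
print("optimal x, y      :", x, y)         # both ~ 1
print("objective 1/(x*y) :", 1 / (x * y))  # ~ 1
```

Dedicated modelling tools automate this transformation, but the sketch makes the log-sum-exp structure of the convex form visible.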
[ { "math_id": 0, "text": "\n\\begin{array}{ll}\n\\mbox{minimize} & f_0(x) \\\\\n\\mbox{subject to} & f_i(x) \\leq 1, \\quad i=1, \\ldots, m\\\\\n& g_i(x) = 1, \\quad i=1, \\ldots, p,\n\\end{array}\n" }, { "math_id": 1, "text": "f_0,\\dots,f_m" }, { "math_id": 2, "text": "g_1,\\dots,g_p" }, { "math_id": 3, "text": "\\mathbb{R}_{++}^n" }, { "math_id": 4, "text": "\\mathbb{R}" }, { "math_id": 5, "text": "x \\mapsto c x_1^{a_1} x_2^{a_2} \\cdots x_n^{a_n} " }, { "math_id": 6, "text": " c > 0 \\ " }, { "math_id": 7, "text": "a_i \\in \\mathbb{R} " }, { "math_id": 8, "text": "y_i = \\log(x_i)" }, { "math_id": 9, "text": "f_i" }, { "math_id": 10, "text": "g_i" } ]
https://en.wikipedia.org/wiki?curid=9644681
9644721
Posynomial
A posynomial, also known as a posinomial in some literature, is a function of the form formula_0 where all the coordinates formula_1 and coefficients formula_2 are positive real numbers, and the exponents formula_3 are real numbers. Posynomials are closed under addition, multiplication, and nonnegative scaling. For example, formula_4 is a posynomial. Posynomials are not the same as polynomials in several independent variables. A polynomial's exponents must be non-negative integers, but its independent variables and coefficients can be arbitrary real numbers; on the other hand, a posynomial's exponents can be arbitrary real numbers, but its independent variables and coefficients must be positive real numbers. This terminology was introduced by Richard J. Duffin, Elmor L. Peterson, and Clarence Zener in their seminal book on geometric programming. Posynomials are a special case of signomials, the latter not having the restriction that the formula_2 be positive.
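A posynomial is straightforward to evaluate once its coefficients and exponent vectors are given. The small Python helper below (an added illustration; the function name and the data layout are choices made here) evaluates the example posynomial from the text at a couple of positive points.

```python
import numpy as np

def posynomial(coeffs, exponents, x):
    """Evaluate sum_k c_k * prod_i x_i**a_ik for positive x (exponents indexed [k][i])."""
    x = np.asarray(x, dtype=float)
    return sum(c * np.prod(x ** np.asarray(a)) for c, a in zip(coeffs, exponents))

# The example from the text: f(x1,x2,x3) = 2.7*x1^2*x2^(-1/3)*x3^0.7 + 2*x1^(-4)*x3^(2/5)
coeffs    = [2.7, 2.0]
exponents = [[2, -1/3, 0.7], [-4, 0, 2/5]]

print(posynomial(coeffs, exponents, [1.0, 1.0, 1.0]))   # 2.7 + 2.0 = 4.7
print(posynomial(coeffs, exponents, [2.0, 8.0, 1.0]))   # 2.7*4*0.5 + 2*2**-4 = 5.525
```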
[ { "math_id": 0, "text": "f(x_1, x_2, \\dots, x_n) = \\sum_{k=1}^K c_k x_1^{a_{1k}} \\cdots x_n^{a_{nk}}" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "c_k" }, { "math_id": 3, "text": "a_{ik}" }, { "math_id": 4, "text": "f(x_1, x_2, x_3) = 2.7 x_1^2x_2^{-1/3}x_3^{0.7} + 2x_1^{-4}x_3^{2/5}" } ]
https://en.wikipedia.org/wiki?curid=9644721
9644792
Hilbert projection theorem
On closed convex subsets in Hilbert space In mathematics, the Hilbert projection theorem is a famous result of convex analysis that says that for every vector formula_0 in a Hilbert space formula_1 and every nonempty closed convex formula_2 there exists a unique vector formula_3 for which formula_4 is minimized over the vectors formula_5; that is, such that formula_6 for every formula_7 Finite dimensional case. Some intuition for the theorem can be obtained by considering the first order condition of the optimization problem. Consider a finite dimensional real Hilbert space formula_1 with a subspace formula_8 and a point formula_9 If formula_3 is a minimizer or minimum point of the function formula_10 defined by formula_11 (which is the same as the minimum point of formula_12), then the derivative must be zero at formula_13 In matrix derivative notation, formula_14 Since formula_15 is a vector in formula_8 that represents an arbitrary tangent direction, it follows that formula_16 must be orthogonal to every vector in formula_17 Statement. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Hilbert projection theorem — For every vector formula_0 in a Hilbert space formula_1 and every nonempty closed convex formula_2 there exists a unique vector formula_3 for which formula_18 is equal to formula_19 If the closed subset formula_8 is also a vector subspace of formula_1 then this minimizer formula_20 is the unique element in formula_8 such that formula_21 is orthogonal to formula_17 Detailed elementary proof. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof that a minimum point formula_22 exists Let formula_23 be the distance between formula_0 and formula_24 and let formula_25 be a sequence in formula_8 such that the distance squared between formula_0 and formula_26 is less than or equal to formula_27 Let formula_28 and formula_20 be two integers; then the following equalities are true: formula_29 and formula_30 Therefore formula_31 (This equation is the same as the formula formula_32 for the length formula_33 of a median in a triangle with sides of length formula_34 and formula_35 where, specifically, the triangle's vertices are formula_36). By giving an upper bound to the first two terms of the equality and by noticing that the midpoint of formula_26 and formula_37 belongs to formula_8 and therefore has a distance greater than or equal to formula_38 from formula_39 it follows that: formula_40 The last inequality proves that formula_25 is a Cauchy sequence. Since formula_8 is complete, the sequence is therefore convergent to a point formula_41 whose distance from formula_0 is minimal. formula_42 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof that formula_20 is unique Let formula_43 and formula_44 be two minimum points. Then: formula_45 Since formula_46 belongs to formula_24 we have formula_47 and therefore formula_48 Hence formula_49 which proves uniqueness. formula_42 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof of characterization of minimum point when formula_8 is a closed vector subspace Assume that formula_8 is a closed vector subspace of formula_50 It must be shown that the minimizer formula_20 is the unique element in formula_8 such that formula_51 for every formula_7 Proof that the condition is sufficient: Let formula_52 be such that formula_53 for all formula_7 If formula_5 then formula_54 and so formula_55 which implies that formula_56 Because formula_5 was arbitrary, this proves that formula_57 and so formula_58 is a minimum point. 
Proof that the condition is necessary: Let formula_3 be the minimum point. Let formula_5 and formula_59 Because formula_60 the minimality of formula_20 guarantees that formula_61 Thus formula_62 is always non-negative and formula_63 must be a real number. If formula_64 then the map formula_65 has a minimum at formula_66 and moreover, formula_67 which is a contradiction. Thus formula_68 formula_42 Proof by reduction to a special case. It suffices to prove the theorem in the case of formula_69 because the general case follows from the statement below by replacing formula_8 with formula_70 &lt;templatestyles src="Math_theorem/styles.css" /&gt; Hilbert projection theorem (case formula_69) — For every nonempty closed convex subset formula_71 of a Hilbert space formula_72 there exists a unique vector formula_3 such that formula_73 Furthermore, letting formula_74 if formula_25 is any sequence in formula_8 such that formula_75 in formula_76 then formula_77 in formula_50 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Consequences. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Proposition — If formula_8 is a closed vector subspace of a Hilbert space formula_1 then formula_79 Properties. Expression as a global minimum. The statement and conclusion of the Hilbert projection theorem can be expressed in terms of global minima of the following functions. Their notation will also be used to simplify certain statements. Given a non-empty subset formula_71 and some formula_80 define a function formula_81 A global minimum point of formula_82 if one exists, is any point formula_20 in formula_83 such that formula_84 in which case formula_85 is equal to the global minimum value of the function formula_86 which is: formula_87 Effects of translations and scalings. When this global minimum point formula_20 exists and is unique then denote it by formula_88 explicitly, the defining properties of formula_89 (if it exists) are: formula_90 The Hilbert projection theorem guarantees that this unique minimum point exists whenever formula_8 is a non-empty closed and convex subset of a Hilbert space. However, such a minimum point can also exist in some non-convex or non-closed subsets; for instance, as long as formula_8 is non-empty, if formula_78 then formula_91 If formula_71 is a non-empty subset, formula_92 is any scalar, and formula_93 are any vectors then formula_94 which implies: formula_95 formula_96 formula_97 Examples. The following counter-example demonstrates a continuous linear isomorphism formula_98 for which formula_99 Endow formula_100 with the dot product, let formula_101 and for every real formula_102 let formula_103 be the line of slope formula_92 through the origin, where it is readily verified that formula_104 Pick a real number formula_105 and define formula_106 by formula_107 (so this map scales the formula_108coordinate by formula_109 while leaving the formula_110coordinate unchanged). Then formula_106 is an invertible continuous linear operator that satisfies formula_111 and formula_112 so that formula_113 and formula_114 Consequently, if formula_115 with formula_116 and if formula_117 then formula_118 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "C \\subseteq H," }, { "math_id": 3, "text": "m \\in C" }, { "math_id": 4, "text": "\\|c - x\\|" }, { "math_id": 5, "text": "c \\in C" }, { "math_id": 6, "text": "\\|m - x\\| \\leq \\|c - x\\|" }, { "math_id": 7, "text": "c \\in C." }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "x." }, { "math_id": 10, "text": "N : C \\to \\R" }, { "math_id": 11, "text": "N(c) := \\|c - x\\|" }, { "math_id": 12, "text": "c \\mapsto \\|c - x\\|^2" }, { "math_id": 13, "text": "m." }, { "math_id": 14, "text": "\\begin{aligned}\n\\partial \\lVert x - c \\rVert^2 &= \\partial \\langle c - x, c - x \\rangle \\\\\n&= 2 \\langle c - x, \\partial c\\rangle\n\\end{aligned}" }, { "math_id": 15, "text": "\\partial c" }, { "math_id": 16, "text": "m - x" }, { "math_id": 17, "text": "C." }, { "math_id": 18, "text": "\\lVert x - m \\rVert" }, { "math_id": 19, "text": "\\delta := \\inf_{c \\in C} \\|x - c\\|." }, { "math_id": 20, "text": "m" }, { "math_id": 21, "text": "x - m" }, { "math_id": 22, "text": "y" }, { "math_id": 23, "text": "\\delta := \\inf_{c \\in C} \\|x - c\\|" }, { "math_id": 24, "text": "C," }, { "math_id": 25, "text": "\\left(c_n\\right)_{n=1}^{\\infty}" }, { "math_id": 26, "text": "c_n" }, { "math_id": 27, "text": "\\delta^2 + 1/n." }, { "math_id": 28, "text": "n" }, { "math_id": 29, "text": "\\left\\|c_n - c_m\\right\\|^2 = \\left\\|c_n - x\\right\\|^2 + \\left\\|c_m - x\\right\\|^2 - 2 \\left\\langle c_n - x \\, , \\, c_m - x\\right\\rangle" }, { "math_id": 30, "text": "4 \\left\\|\\frac{c_n + c_m}2 - x\\right\\|^2 = \\left\\|c_n - x\\right\\|^2 + \\left\\|c_m - x\\right\\|^2 + 2 \\left\\langle c_n - x \\, , \\, c_m - x\\right\\rangle" }, { "math_id": 31, "text": "\\left\\|c_n - c_m\\right\\|^2 = 2 \\left\\|c_n - x\\right\\|^2 + 2\\left\\|c_m - x\\right\\|^2 - 4\\left\\|\\frac{c_n + c_m}2 - x\\right\\|^2" }, { "math_id": 32, "text": "a^2 = 2 b^2 + 2 c^2 - 4 M_a^2" }, { "math_id": 33, "text": "M_a" }, { "math_id": 34, "text": "a, b," }, { "math_id": 35, "text": "c," }, { "math_id": 36, "text": "x, c_m, c_n" }, { "math_id": 37, "text": "c_m" }, { "math_id": 38, "text": "\\delta" }, { "math_id": 39, "text": "x," }, { "math_id": 40, "text": "\\|c_n - c_m\\|^2 \\; \\leq \\; 2\\left(\\delta^2 + \\frac{1}{n}\\right) + 2\\left(\\delta^2 + \\frac{1}{m}\\right) - 4\\delta^2 = 2\\left(\\frac{1}{n} + \\frac{1}{m}\\right)" }, { "math_id": 41, "text": "m \\in C," }, { "math_id": 42, "text": "\\blacksquare" }, { "math_id": 43, "text": "m_1" }, { "math_id": 44, "text": "m_2" }, { "math_id": 45, "text": "\\|m_2 - m_1\\|^2 = 2\\|m_1 - x\\|^2 + 2\\|m_2 - x\\|^2 - 4 \\left\\|\\frac{m_1 + m_2}2 - x\\right\\|^2" }, { "math_id": 46, "text": "\\frac{m_1 + m_2}2" }, { "math_id": 47, "text": "\\left\\|\\frac{m_1 + m_2} 2 - x\\right\\|^2 \\geq \\delta^2" }, { "math_id": 48, "text": "\\|m_2 - m_1\\|^2 \\leq 2 \\delta^2 + 2 \\delta^2 - 4 \\delta^2 = 0." }, { "math_id": 49, "text": "m_1 = m_2," }, { "math_id": 50, "text": "H." }, { "math_id": 51, "text": "\\langle m - x, c \\rangle = 0" }, { "math_id": 52, "text": "z \\in C" }, { "math_id": 53, "text": "\\langle z - x, c \\rangle = 0" }, { "math_id": 54, "text": "c - z \\in C" }, { "math_id": 55, "text": "\\|c-x\\|^2 = \\|(z-x) + (c-z)\\|^2 = \\|z-x\\|^2 + \\|c-z\\|^2 + 2 \\langle z-x, c-z \\rangle = \\|z-x\\|^2 + \\|c-z\\|^2" }, { "math_id": 56, "text": "\\|z-x\\|^2 \\leq \\|c-x\\|^2." 
}, { "math_id": 57, "text": "\\|z-x\\| = \\inf_{c \\in C} \\|c - x\\|" }, { "math_id": 58, "text": "z" }, { "math_id": 59, "text": "t \\in \\R." }, { "math_id": 60, "text": "m + t c \\in C," }, { "math_id": 61, "text": "\\|m-x\\| \\leq \\|(m + t c) - x\\|." }, { "math_id": 62, "text": "\\|(m + t c) - x\\|^2 - \\|m-x\\|^2 = 2t\\langle m-x, c\\rangle + t^2 \\|c\\|^2" }, { "math_id": 63, "text": "\\langle m-x, c\\rangle" }, { "math_id": 64, "text": "\\langle m - x, c\\rangle \\neq 0" }, { "math_id": 65, "text": "f(t) := 2t\\langle m - x, c\\rangle + t^2 \\|c\\|^2" }, { "math_id": 66, "text": "t_0 := - \\frac{\\langle m - x, c\\rangle}{\\|c\\|^2}" }, { "math_id": 67, "text": "f\\left(t_0\\right) < 0," }, { "math_id": 68, "text": "\\langle m - x, c\\rangle = 0." }, { "math_id": 69, "text": "x = 0" }, { "math_id": 70, "text": "C - x." }, { "math_id": 71, "text": "C \\subseteq H" }, { "math_id": 72, "text": "H," }, { "math_id": 73, "text": "\\inf_{c \\in C} \\| c \\| = \\| m \\|." }, { "math_id": 74, "text": "d := \\inf_{c \\in C} \\| c \\|," }, { "math_id": 75, "text": "\\lim_{n \\to \\infty} \\left\\|c_n\\right\\| = d" }, { "math_id": 76, "text": "\\R" }, { "math_id": 77, "text": "\\lim_{n \\to \\infty} c_n = m" }, { "math_id": 78, "text": "x \\in C" }, { "math_id": 79, "text": "H = C \\oplus C^{\\bot}." }, { "math_id": 80, "text": "x \\in H," }, { "math_id": 81, "text": "d_{C,x} : C \\to [0, \\infty) \\quad \\text{ by } c \\mapsto \\|x - c\\|." }, { "math_id": 82, "text": "d_{C,x}," }, { "math_id": 83, "text": "\\,\\operatorname{domain} d_{C,x} = C\\," }, { "math_id": 84, "text": "d_{C,x}(m) \\,\\leq\\, d_{C,x}(c) \\quad \\text{ for all } c \\in C," }, { "math_id": 85, "text": "d_{C,x}(m) = \\|m - x\\|" }, { "math_id": 86, "text": "d_{C, x}," }, { "math_id": 87, "text": "\\inf_{c \\in C} d_{C,x}(c) = \\inf_{c \\in C} \\|x - c\\|." }, { "math_id": 88, "text": "\\min(C, x);" }, { "math_id": 89, "text": "\\min(C, x)" }, { "math_id": 90, "text": "\\min(C, x) \\in C \\quad \\text { and } \\quad \\left\\|x - \\min(C, x)\\right\\| \\leq \\|x - c\\| \\quad \\text{ for all } c \\in C." }, { "math_id": 91, "text": "\\min(C, x) = x." }, { "math_id": 92, "text": "s" }, { "math_id": 93, "text": "x, x_0 \\in H" }, { "math_id": 94, "text": "\\,\\min\\left(s C + x_0, s x + x_0\\right) = s \\min(C, x) + x_0" }, { "math_id": 95, "text": "\\begin{alignat}{6}\n\\min&(s C, s x) &&= s &&\\min(C, x) \\\\\n\\min&(- C, - x) &&= - &&\\min(C, x) \\\\\n\\end{alignat}" }, { "math_id": 96, "text": "\\begin{alignat}{6}\n\\min\\left(C + x_0, x + x_0\\right) &= \\min(C, x) + x_0 \\\\\n\\min\\left(C - x_0, x - x_0\\right) &= \\min(C, x) - x_0 \\\\\n\\end{alignat}" }, { "math_id": 97, "text": "\\begin{alignat}{6}\n\\min&(C, - x) {} &&= \\min(C + x, 0) - x \\\\\n\\min&(C, 0) \\;+\\; x\\;\\;\\;\\; &&= \\min(C + x, x) \\\\\n\\min&(C - x, 0) {} &&= \\min(C, x) - x \\\\\n\\end{alignat}" }, { "math_id": 98, "text": "A : H \\to H" }, { "math_id": 99, "text": "\\,\\min(A(C), A(x)) \\neq A(\\min(C, x))." }, { "math_id": 100, "text": "H := \\R^2" }, { "math_id": 101, "text": "x_0 := (0, 1)," }, { "math_id": 102, "text": "s \\in \\R," }, { "math_id": 103, "text": "L_s := \\{ (x, s x) : x \\in \\R \\}" }, { "math_id": 104, "text": "\\min\\left(L_s, x_0\\right) = \\frac{s}{1+s^2}(1, s)." 
}, { "math_id": 105, "text": "r \\neq 0" }, { "math_id": 106, "text": "A : \\R^2 \\to \\R^2" }, { "math_id": 107, "text": "A(x, y) := (r x, y)" }, { "math_id": 108, "text": "x-" }, { "math_id": 109, "text": "r" }, { "math_id": 110, "text": "y-" }, { "math_id": 111, "text": "A\\left(L_s\\right) = L_{s/r}" }, { "math_id": 112, "text": "A\\left(x_0\\right) = x_0," }, { "math_id": 113, "text": "\\,\\min\\left(A\\left(L_s\\right), A\\left(x_0\\right)\\right) = \\frac{s}{r^2 + s^2} (1, s)" }, { "math_id": 114, "text": "A\\left(\\min\\left(L_s, x_0\\right)\\right) = \\frac{s}{1 + s^2} \\left(r, s\\right)." }, { "math_id": 115, "text": "C := L_s" }, { "math_id": 116, "text": "s \\neq 0" }, { "math_id": 117, "text": "(r, s) \\neq (\\pm 1, 1)" }, { "math_id": 118, "text": "\\,\\min(A(C), A\\left(x_0\\right)) \\neq A\\left(\\min\\left(C, x_0\\right)\\right)." } ]
https://en.wikipedia.org/wiki?curid=9644792
9647039
Strip algebra
Strip Algebra is a set of elements and operators for the description of carbon nanotube structures, considered as a subclass of polyhedra, and more precisely, of polyhedra with vertices formed by three edges. This restriction is imposed on the polyhedra because carbon nanotubes are formed of sp2 carbon atoms. Strip Algebra was developed initially for the determination of the structure connecting two arbitrary nanotubes, but has also been extended to the connection of three identical nanotubes. Background. Graphitic systems are molecules and crystals formed of carbon atoms in sp2 hybridization. Thus, the atoms are arranged on a hexagonal grid. Graphite, nanotubes, and fullerenes are examples of graphitic systems. All of them share the property that each atom is bonded to three others (3-valent). The relation between the number of vertices, edges and faces of any finite polyhedron is given by Euler's polyhedron formula: formula_0 where "e", "f" and "v" are the number of edges, faces and vertices, respectively, and "g" is the genus of the polyhedron, i.e., the number of "holes" in the surface. For example, a sphere is a surface of genus 0, while a torus is of genus 1. Nomenclature. A substrip is identified by a pair of natural numbers in parentheses giving the position of the last ring, together with the turns induced by the defect ring. The number of edges of the defect can be extracted from these. formula_1 Elements. A Strip is defined as a set of consecutive rings that can be joined with others by sharing a side of its first or last ring. Numerous complex structures can be formed with strips. As noted above, a strip has two connections, one at the beginning and one at the end. With strips alone, only two such structures can be formed. Operators. Given the definition of a strip, a set of operations may be defined. These are needed to determine the combined result of a set of contiguous strips. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\ne - f - v = 2 (g -1),\\,\n" }, { "math_id": 1, "text": "(n,m) [T_+,T_-]" } ]
https://en.wikipedia.org/wiki?curid=9647039
964733
Mazda Wankel engine
The Mazda Wankel engines are a family of Wankel rotary combustion car engines produced by Mazda. Wankel engines were invented in the 1950s by Felix Wankel, a German engineer. Over the years, displacement has been increased and turbocharging has been added. Mazda rotary engines have a reputation for being relatively small and powerful at the expense of fuel efficiency. The engines became popular with kit car builders, hot rodders and in light aircraft because of their light weight, compact size, tuning potential and inherently high power-to-weight ratio—as is true for all Wankel-type engines. Since the end of production of the Mazda RX-8 in 2012, the engine was produced only for single-seater racing, with the one-make Star Mazda Championship being contested with a Wankel engine until 2017; the series' transition to using a Mazda-branded piston engine in 2018 temporarily ended the production of the engine. In 2023, Mazda reintroduced the engine as a generator for the 2023 MX-30 e-Skyactiv R-EV plug-in hybrid. Displacement. Wankel engines can be classified by their geometric size in terms of radius (rotor center to tip distance, also the median stator radius), depth (rotor thickness), and offset (crank throw or eccentricity, also 1/4 the difference between the stator's major and minor axes). These metrics function similarly to the bore and stroke measurements of a piston engine. The displacement of a rotor can be calculated as formula_0 Note that this only counts a single face of each rotor as the entire rotor's displacement, because with the eccentric shaft – crankshaft – spinning at three times the rate of the rotor, only one power stroke is created per output revolution; thus only one face of the rotor is actually working per "crankshaft" revolution, roughly equivalent to a 2-stroke engine of similar displacement to a single rotor face. Nearly all Mazda production Wankel engines share a single rotor radius, , with a crankshaft offset. The only engine to diverge from this formula was the rare 13A, which used a rotor radius and crankshaft offset. As Wankel engines became commonplace in motorsport, the problem of correctly representing their displacement for the purposes of competition arose. Rather than force participants who drove vehicles with piston engines, who were the majority, to halve their quoted displacement, most racing organizations decided to double the quoted displacement of Wankel engines. The key for comparing the displacement between the 4-cycle engine and the rotary engine is in studying the number of rotations for a thermodynamic cycle to occur. For a 4-cycle engine to complete a thermodynamic cycle, the engine must rotate two complete revolutions of the crankshaft, or 720°. By contrast, in a Wankel engine, the engine rotor rotates at one-third the speed of the crankshaft. Each rotation of the engine (360°) will bring two faces through the combustion cycle (the torque input to the eccentric shaft). This said, it takes three complete revolutions of the crankshaft, or 1080°, to complete the entire thermodynamic cycle. To get a relatable number to compare to a 4-stroke engine, compare the events that occur in two rotations of a two-rotor engine. For every 360° of rotation, two faces of the engine complete a combustion cycle. Thus, for two whole rotations, four faces will complete their cycle. If the displacement per face is , then four faces can be seen as equivalent to . 
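A rough numerical sketch of the displacement formula above. The rotor dimensions used (radius 105 mm, offset 15 mm, depth 80 mm) are commonly cited figures for the 13B, supplied here for illustration; they are not taken from this article's omitted measurements.

```python
import math

def chamber_displacement_cc(radius_mm, offset_mm, depth_mm):
    # Single-face displacement: 3*sqrt(3)*Depth*Radius^2*(Offset/Radius)/1000 (cc).
    return 3 * math.sqrt(3) * depth_mm * radius_mm**2 * (offset_mm / radius_mm) / 1000

per_face = chamber_displacement_cc(105, 15, 80)   # assumed 13B-like dimensions
print(round(per_face))        # ~655 cc per rotor face (the 13B is usually quoted at 654 cc)
print(round(2 * per_face))    # ~1309 cc for two rotors, one face each (quoted 1308 cc)
print(round(4 * per_face))    # ~2619 cc, the doubled figure many racing bodies quote
```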
Extrapolating to the case where three whole rotations make up a complete thermodynamic cycle of the engine, with a total of six faces completing a cycle, per face for six faces yields . 40A. Mazda's first prototype Wankel was the 40A, a single-rotor engine very much like the NSU KKM400. Although never produced in volume, the 40A was a valuable testbed for Mazda engineers, and quickly demonstrated two serious challenges to the feasibility of the design: "chatter marks" in the housing, and heavy oil consumption. The chatter marks, nicknamed "devil's fingernails", were caused by the tip-seal vibrating at its natural frequency. The oil consumption problem was addressed with heat-resistant rubber oil seals at the sides of the rotors. This early engine had a rotor radius of , an offset of , and a depth of . L8A. The very first Mazda Cosmo prototype used an L8A two-rotor Wankel. The engine and car were both shown at the 1963 Tokyo Motor Show. Hollow cast iron apex seals reduced vibration by changing their resonance frequency and thus eliminated chatter marks. It used dry-sump lubrication. Rotor radius was up from the 40A to , but depth dropped to . One-, three-, and four-rotor derivatives of the L8A were also created for experimentation. 10A. The 10A series was Mazda's first production Wankel, appearing in 1965. It was a two-rotor design, with each chamber displacing so two chambers (one per rotor) would displace ; the series name reflects this value ("10" suggesting 1.0 litres). These engines featured the mainstream rotor dimensions with a depth. The rotor housing was made of sand-cast aluminium plated with chrome, while the aluminium sides were sprayed with molten carbon steel for strength. Cast iron was used for the rotors themselves, and their eccentric shafts were of expensive chrome-molybdenum steel. The addition of aluminium/carbon apex seals addressed the chatter mark problem. 0810. The first 10A engine was the 0810, used in the "Series I" Cosmo from May 1965 to July 1968. These cars, and their revolutionary engine, were often called L10A models. Gross output was at 7000 rpm and at 3500 rpm, but both numbers were probably optimistic (rpm of the crankshaft). The 10A featured twin side intake ports per rotor, each fed by one of four carburetor barrels. Only one port per rotor was used under low loads for added fuel economy. A single peripheral exhaust port routed hot gas through the coolest parts of the housing, and engine coolant flowed axially rather than radially as in the NSU design. A bit of oil was mixed with the intake charge for lubrication. The 0810 was modified for the racing Cosmos used at Nürburgring. These engines had both side- and peripheral-located intake ports, switched with a butterfly valve for low- and high-RPM use (respectively). Applications: 0813. The improved 0813 engine appeared in July 1968 in the "Series II/L10B" Cosmo. Its construction was very similar to the 0810. Japanese-spec gross output was at 7000 rpm and at 3500 rpm. The use of less-expensive components increased the mass of the engine from . Applications: 0866. The final member of the 10A family was the 1971 0866. This variant featured a cast-iron thermal reactor to reduce exhaust emissions and re-tuned exhaust ports. The new approach to reducing emissions was partly a result of Japanese Government emission control legislation in 1968, with implementation starting in 1975. Mazda called their technology REAPS (Rotary Engine Anti Pollution System). 
The die-cast rotor housing was now coated using a new process: the Transplant Coating Process (TCP) featured sprayed-on steel which was then coated with chrome. Gross output was at 7000 rpm and at 3500 rpm. Applications: 3A. Mazda began development of a single-rotor engine displacing , designed for "kei car" use in the upcoming Mazda Chantez, but it was never placed into production. It was a slimmed-down derivative of the 10A engine as fitted to the R100. A prototype engine is on display at the Mazda Museum in Hiroshima, Japan. 13A. The 13A was designed especially for front-wheel drive applications. It was a two-rotor design, with each chamber displacing so two chambers (one per rotor) would displace ; continuing earlier practice, the series name reflects this value ("13" suggesting 1.3 litres). This was the only production Mazda Wankel with different rotor dimensions: radius was and offset was , but depth remained the same as the 10A at . Another major difference from the previous engines was the integrated water-cooled oil cooler. The 13A was used only in the 1969–1972 R130 Luce, where it produced and . This was the end of the line for this engine design: the next Luce was rear-wheel drive and Mazda never again made a front-wheel drive rotary vehicle. Applications: 12A. The 12A is an "elongated" version of the 10A: the rotor radius was the same, but the depth was increased by to . It continued the two-rotor design; with the depth increase each chamber displaced so two chambers (one per rotor) would displace ; the series name continues earlier practice and reflects this value ("12" suggesting 1.2 litres). The 12A series was produced for 15 years, from May 1970 through 1985. In 1974, a 12A became the first engine built outside of western Europe or the U.S. to finish the 24 Hours of Le Mans (and in 1991 Mazda won the race outright with the 4-rotor R26B engine). In 1974, a new process was used to harden the rotor housing. The Sheet-metal Insert Process (SIP) used a sheet of steel much like a conventional piston engine cylinder liner with a chrome plated surface. The side housing coating was also changed to eliminate the troublesome sprayed metal. The new "REST" process created such a strong housing that the old carbon seals could be abandoned in favour of conventional cast iron. Early 12A engines also featured a thermal reactor, similar to the 0866 10A, and some used an exhaust port insert to reduce exhaust noise. A lean-burn version was introduced in 1979 (in Japan) and 1980 (in America) which substituted a more-conventional catalytic converter for this "afterburner". A major modification of the 12A architecture was the 6PI, which featured variable induction ports. Applications: Turbo. The ultimate 12A engine was the electronically fuel-injected engine used in the Japan-spec HB series Cosmo, Luce, and SA series RX-7. In 1982 a 12A turbo-powered Cosmo coupe was officially the fastest production car in Japan. It featured "semi-direct injection" into both rotors at once. A passive knock sensor was used to eliminate knocking, and later models featured a specially-designed smaller and lighter "Impact Turbo" which was tweaked for the unique exhaust signature of the Wankel engine for a 5-horsepower increase. The engine continued until 1989 in the HB Cosmo series, but by that stage it had grown a reputation as a thirsty engine. Applications: 12B. The 12B was an improved version of the 12A and was quietly introduced for the 1974 Mazda RX-2 and RX-3. 
It had increased reliability over the previous series and also used a single distributor for the first time: the earlier 12A and 10A were both twin-distributor engines. Applications: 13B. The 13B is the most widely produced rotary engine. It was the basis for all future Mazda Wankel engines, and was produced for over 30 years. The 13B has no relation to the 13A. Instead, it is a lengthened version of the 12A, having thick rotors. It was a two-rotor design, with each chamber displacing so two chambers (one per rotor) would displace ; the series name reflects this value ("13" suggesting 1.3 litres), as with the 13A of the same displacement but different proportions. In the United States, the 13B was available from 1974 to 1978 and was then retired from sedans but continued in the 1984–1985 RX-7 GSL-SE. It was then used from 1985 to 1992 in the RX-7 FC, in naturally aspirated or turbocharged options, then once again in the RX-7 FD in a twin-turbocharged form from 1992. It disappeared from the US market again in 1995, when the last US-spec RX-7s were sold. The engine was used continuously in Japan from 1972's Mazda Luce/RX-4 through 2002's RX-7. AP. The 13B was designed with both high performance and low emissions in mind. Early vehicles using this engine used the AP name. Applications: 13B-RESI. A tuned intake manifold was used in a Wankel engine for the first time with the 13B-RESI. RESI = Rotary Engine Super Injection. The so-called Dynamic Effect Intake featured a two-level intake box which derived a supercharger-like effect from the Helmholtz resonance of the opening and closing intake ports. The RESI engine also featured Bosch L-Jetronic fuel injection. Output was much improved at and . Applications: 13B-DEI. Like the 12A-SIP, the second-generation RX-7 bowed with a variable-intake system. Dubbed DEI, the engine features both the 6PI and DEI systems, as well as four-injector electronic fuel injection. Total output is up to at 6500 rpm and at 3500 rpm. The 13B-T was turbocharged in 1986. It features the newer four-injector fuel injection of the 6PI engine, but lacks that engine's eponymous 6PI variable intake system. Mazda went back to a 4-port intake design similar to what was used in the '74–'78 13B. In '86–'88 engines the twin-scroll turbocharger is fed using a two-stage mechanically actuated valve; however, on '89–'91 engines a better turbo design was used, with a divided manifold powering the twin-scroll configuration. For engines manufactured between '86 and '88, output is rated at at 6500 rpm and at 3500 rpm. Applications: 13B-RE. The 13B-RE from the JC Cosmo series was a similar motor to the 13B-REW but had a few key differences, namely the largest side ports of any later-model rotary engine. Injector sizes = PRI + SEC. Approximately 5000 13B-RE optioned JC Cosmos were sold, making this engine almost as hard to source as its rarer 20B-REW big brother. Applications: 13B-REW. A sequentially turbocharged version of the 13B, the 13B-REW, became famous for its high output and low weight. The turbos were operated sequentially, with only the primary providing boost until 4,500 rpm, and the secondary additionally coming online afterwards. Notably, this was the world's first volume-production sequential turbocharger system. Output eventually reached, and may have exceeded, Japan's unofficial maximum of DIN for the final revision used in the Series 8 Mazda RX-7. Applications: 13G/20B. In Le Mans racing, the first three-rotor engine used in the 757 was named the 13G. 
The main difference between the 13G and 20B is that the 13G uses a factory peripheral intake port (used for racing) and the 20B (production vehicle) uses side intake ports. It was renamed 20B after Mazda's naming convention for the 767 in November 1987. As a three-rotor design, with each chamber displacing , three chambers (one per rotor) would displace , and so the new series name reflected this value ("20" suggesting 2.0 litres). The three-rotor 20B-REW was only used in the 1990–1995 Eunos Cosmo. The Cosmo was offered in both "13B-RE" and 20B-REW form. It displaced per set of three chambers (counting only one chamber per rotor) and used of boost pressure from twin sequential turbochargers to produce a claimed and . A version of the 20B known as the "R20B Renesis 3 Rotor Engine" was built by Racing Beat in the US for the Furai concept car, which was released on 27 December 2007. The engine was tuned to run powerfully on 100% ethanol (E100) fuel, produced in partnership with BP. During a "Top Gear" photo shoot in 2008, a fire in the engine bay, combined with a delay in informing the fire crews, left the car engulfed and completely destroyed. This information was withheld until made public in 2013. 13J. The first Mazda racing four-rotor engine was the 13J-M used in the 1988 and 1989 (13J-MM with two-step induction pipe) 767 Le Mans Group C racers. This motor was replaced by the 26B. R26B. The most prominent 4-rotor engine from Mazda, the 26B, was used only in various Mazda-built sports prototype cars including the 767, 787B and the RX-792P, as a replacement for the older 13J. In 1991 the 26B-powered Mazda 787B became the first Japanese car and the first car with anything other than a reciprocating piston engine to win the 24 Hours of Le Mans race outright. The 26B engine displaced per set of four chambers (counting only one chamber for each of the four rotors) – thus the "26" in the series name suggesting 2.6 litres – and developed at 9000 rpm. The engine design uses peripheral intake ports, continuously variable geometry intakes, and an additional (third) spark plug per rotor. 13B-MSP Renesis. The Renesis engine – also 13B-MSP (Multi-Side Port) – which first appeared in production in the 2004 model-year Mazda RX-8, is an evolution of the previous 13B. It was designed to reduce exhaust emissions and improve fuel economy, which were two of the most recurrent drawbacks of Wankel rotary engines. It is naturally aspirated, unlike its most recent predecessors from the 13B range, and therefore slightly less powerful than the Mazda RX-7's twin-turbocharged 13B-REW which develops . The Renesis design features two major changes from its predecessors. First, the exhaust ports are not peripheral but are located on the side of the housing, which eliminates overlap and allows redesign of the intake port area. This produced noticeably more power thanks to an increased effective compression ratio; however, Mazda engineers discovered that when changing the exhaust port to the side housing, a buildup of carbon in the exhaust port would stop the engine from running. To remedy this, Mazda engineers added a water jacket passage into the side housing. Secondly, the rotors are sealed differently through the use of redesigned side seals, low-height apex seals and the addition of a second cut-off ring. Mazda engineers had originally used apex seals identical to the older design of seal. Mazda changed the apex seal design to reduce friction and push the new engine closer to its limits. 
These and other innovative technologies allow the Renesis to achieve 49% higher output and reduced fuel consumption and emissions. Regarding hydrocarbon (HC) emission characteristics of the RENESIS, the use of the side exhaust port allowed for about 35–50% HC reduction compared to the 13B-REW with the peripheral exhaust port. With this reduction, the RENESIS vehicle meets USA LEV-II (LEV). The Renesis won the International Engine of the Year and Best New Engine awards in 2003 and also holds the "2.5 to 3 liter" size award for 2003 and 2004 (note that the engine is designated as a 1.3-litre by Mazda), where it is considered a 2.6 L engine, but only for the purpose of the award categories. This is because although a 2-rotor Wankel with chambers displaces the same volume in one output shaft rotation as that of a 1.3 L four-stroke piston engine, the Wankel will complete 2 full combustion cycles in the same amount of time that it takes the four-stroke piston engine to complete 1 combustion cycle. Finally, it was on the Ward's 10 Best Engines list for 2004 and 2005. The Renesis has also been adapted for dual-fuel use, allowing it to run on petrol or hydrogen in cars like the Mazda Premacy Hydrogen RE Hybrid and Mazda RX-8 Hydrogen RE. All the Mazda rotary engines have been praised for their light weight. The unmodified 13B-MSP Renesis Engine has a weight of , including all standard attachments (except the airbox, alternator, starter motor, cover, etc.), but without engine fluids (such as coolant, oil, etc.), known to make . 16X. Also known as the Renesis II, it made its first and only appearance in the Mazda Taiki concept car at the 2007 Tokyo Auto Show, but has not been seen since. It features up to , a lengthened stroke, reduced-width rotor housing, direct injection, and aluminium side housings. 8C. The 8C engine is used as a generator for the 2023 MX-30 e-Skyactiv R-EV plug-in hybrid. The 8C is a single rotor with a radius of 120 mm, a width of 76 mm, using 2.5 mm apex seals, and displacing 830 cc, making up to 75 hp (55 kW) at 4700 rpm and 116 Nm (85 lb-ft) at 4000 rpm. It has a higher compression ratio of 11.9:1 and the first instance of gasoline direct injection in a production rotary engine, which improves fuel economy by as much as 25%. Various other technologies have been integrated to increase the efficiency of the engine further, including exhaust gas recirculation (EGR) to reduce the combustion chamber temperatures and plasma spray coatings on the insides of the housings to reduce the friction on the rotor. Changes have also been made to decrease the weight of the unit, such as using aluminium side housings, which saved . Sales. Mazda was fully committed to the Wankel engine just as the energy crisis of the 1970s struck. The company had all but eliminated piston engines from its products in 1974, a decision that nearly led to the company's collapse. A switch to a three-prong approach (piston-gasoline, piston-Diesel, and Wankel) for the 1980s relegated the Wankel to sports car use (in the RX-7 and Cosmo), severely limiting production volume. But the company had continued production continuously since the mid-1960s, and was the only maker of Wankel-powered cars when the RX-8 was discontinued from production in June 2012, with 2000 RX-8 Spirit R models being made for the JDM (RHD) market. Though not reflected in the graph at right, the RX-8 was a higher-volume car than its predecessors. Sales of the RX-8 peaked in 2004 at 23,690, then declined through 2011, when fewer than 1000 were produced. 
On 16 November 2011, Mazda CEO Takashi Yamanouchi announced that the company was still committed to producing the rotary engine, saying, "So long as I remain involved with this company... there will be a rotary engine offering or multiple offerings in the lineup." Currently, the engine is produced for SCCA Formula Mazda and for its professional counterpart, the Pro Mazda Championship sanctioned by Indy Racing League LLC dba INDYCAR. Future expectations. Mazda last built a production street car powered by a rotary engine in 2012, the RX-8, but had to abandon it largely due to poor fuel efficiency and emissions. It has continued to work on the technology, however, as it is one of the company's signature features. Mazda officials have previously suggested that if they can get it to perform as well as a reciprocating engine they will bring it back, to power a conventional sports car. On 17 November 2016, Senior managing executive officer of Mazda research and development Kiyoshi Fujiwara told journalists at the Los Angeles motor show that the company was developing its first EV for 2019, and that it was likely to incorporate a rotary engine, but that the details were still "a big secret." He did say, however, that the car is likely to use a new-generation rotary engine as a range extender, similar in concept to the BMW i3. In 2013, Mazda had displayed a Mazda2 RE prototype car, using a similar rotary range-extender EV system. On 27 October 2017, Senior managing executive officer and R&amp;D Chief Kiyoshi Fujiwara told journalists that they were still working on a rotary engine for a sports car, which in some markets would potentially be paired with a hybrid drivetrain, but that both would have powertrains distinct from Mazda's first EV, to be released in 2019/20. "...some cities will ban combustion, therefore we need some additional portion of electrification because the driver can't use this rotary sports car. Some of the regions we don't need this small electrification, therefore we can utilise pure rotary engines." In 2021, Mazda announced that the upcoming plug-in hybrid variant of the MX-30 will feature a new rotary engine that acts as a range extender to recharge the batteries, but not to power the wheels. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{3\\cdot \\sqrt{3} \\cdot{\\mathrm{Depth\\cdot Radius^2\\cdot (Offset/Radius)}}}{1000}" } ]
https://en.wikipedia.org/wiki?curid=964733
9649
Energy
Physical quantity Energy (from Ancient Greek "ἐνέργεια" (energeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed; matter and energy may also be converted to one another. The unit of measurement for energy in the International System of Units (SI) is the joule (J). Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. All living organisms constantly take in and release energy. The Earth's climate and ecosystem processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, renewable energy, and geothermal energy. Forms. The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms of their own. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples. History. The word "energy" derives from the Ancient Greek "ἐνέργεια" (energeia), which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the "vis viva", or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total "vis viva" was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from "vis viva" only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's "Principia Mathematica", which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy". In 1807, Thomas Young was possibly the first to use the term "energy" instead of "vis viva", in its modern sense. 
Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. Units of measure. In 1843, James Prescott Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units (SI), the unit of energy is the joule, named after Joule. It is a derived unit that is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce. Scientific use. Classical mechanics. &lt;templatestyles src="Hlist/styles.css"/&gt; In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept. Work, a function of energy, is force times distance. formula_0 This says that the work (formula_1) is equal to the line integral of the force F along a path "C"; for details see the mechanical work article. Work, and thus energy, is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. 
The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy "minus" the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. Chemistry. In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction sometimes have more but usually have less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The "speed" of a chemical reaction (at a given temperature "T") is related to the activation energy "E" by Boltzmann's population factor e−"E"/"kT"; that is, the probability that a molecule has energy greater than or equal to "E" at a given temperature "T". This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy. Biology. In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for the growth and development of a biological cell or organelle. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. 
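A trivial sketch (not part of the article) of the human-equivalent conversion just described, using the stated basal rate of 80 watts.

```python
def human_equivalent(power_watts, basal_rate_watts=80.0):
    # Express a power as a multiple of the 80 W average human metabolic rate.
    return power_watts / basal_rate_watts

print(human_equivalent(100))   # 1.25 H-e, the light-bulb example above
print(human_equivalent(746))   # ~9.3 H-e for one official horsepower
```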
For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy. Sunlight's radiant energy is also captured by plants as "chemical potential energy" in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action. All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria &lt;chem display="block"&gt;C6H12O6 + 6O2 -&gt; 6CO2 + 6H2O&lt;/chem&gt; &lt;chem display="block"&gt;C57H110O6 + (81 1/2) O2 -&gt; 57CO2 + 55H2O&lt;/chem&gt; and some of the energy is used to convert ADP into ATP: &lt;templatestyles src="Block indent/styles.css"/&gt;ADP + HPO42− → ATP + H2O The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work: the gain in kinetic energy of a sprinter during a 100 m race is about 4 kJ and the gain in gravitational potential energy of a 150 kg weight lifted through 2 metres is about 3 kJ, compared with a daily food intake of a normal adult of 6–8 MJ. It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. 
As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat. Earth sciences. In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy. Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement. In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms). Cosmology. In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Quantum mechanics. In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. 
The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: formula_2 (where formula_3 is the Planck constant and formula_4 the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons. Relativity. When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: formula_5 where formula_6 is the rest energy of the body, "m"0 is its (rest) mass, and "c" is the speed of light in a vacuum. For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons. In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts). Transformation. Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
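The dam in the list of transducers above can serve as a small worked example: the rate at which falling water releases gravitational potential energy is the mass flow rate times g times the head, and only part of it emerges as electrical energy. All numbers in the sketch below (head, flow rate, conversion efficiency) are assumed round values for illustration, not data from the text.

```python
# A rough sketch of the dam transducer mentioned above: gravitational potential
# energy of stored water converted to electrical energy.
g = 9.81            # m/s^2
head_m = 100.0      # height the water falls (assumption)
flow_kg_s = 1000.0  # mass flow rate, about 1 m^3 of water per second (assumption)
efficiency = 0.9    # combined turbine/generator efficiency (assumption)

potential_power_w = flow_kg_s * g * head_m            # rate of PE release, ~0.98 MW
electrical_power_w = efficiency * potential_power_w   # ~0.88 MW delivered as electricity

print(f"Gravitational power released: {potential_power_w/1e6:.2f} MW")
print(f"Electrical output at 90% efficiency: {electrical_power_w/1e6:.2f} MW")
```

The difference between the two figures ends up as heat in the water, the turbine and the generator, which is the kind of loss the efficiency discussion below refers to.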
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever. Energy is also transferred from potential energy (formula_7) to kinetic energy (formula_8) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: the sum of the initial potential and kinetic energy equals the sum of the final potential and kinetic energy. The equation can then be simplified further since formula_9 (mass times acceleration due to gravity times the height) and formula_10 (half mass times velocity squared). Then the total amount of energy can be found by adding formula_11.
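The pendulum example lends itself to a short numerical sketch. Assuming an idealized, frictionless pendulum with made-up values for the mass, length and release angle, the code below integrates the equation of motion directly and then evaluates formula_9 and formula_10 at several times to show that their sum formula_11 stays essentially constant throughout the swing.

```python
# A minimal sketch of the pendulum example: the equation of motion is integrated
# with a semi-implicit Euler step, and the potential energy m*g*h and kinetic
# energy (1/2)*m*v^2 are printed to show their sum is (nearly) conserved.
# Mass, length and release angle are assumed illustrative values.
import math

g, m, L = 9.81, 1.0, 2.0               # SI units; m and L are arbitrary choices
theta, omega = math.radians(30), 0.0   # released from rest at 30 degrees
dt = 0.001

for step in range(4001):
    if step % 1000 == 0:
        h = L * (1 - math.cos(theta))          # height of the bob above its lowest point
        E_p = m * g * h
        E_k = 0.5 * m * (L * omega) ** 2
        print(f"t={step*dt:4.1f} s  E_p={E_p:6.3f} J  E_k={E_k:6.3f} J  total={E_p+E_k:6.3f} J")
    omega += -(g / L) * math.sin(theta) * dt   # angular acceleration of a simple pendulum
    theta += omega * dt
```

The printed total changes only in the last decimal places, which is the numerical counterpart of the statement that the initial and final energies are equal.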
Conservation of energy and mass in transformation. Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula "E" = "mc"², derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information). Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since formula_12 is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~formula_13 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws. Reversible and non-reversible transformations. Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal). As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe.
In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease. Conservation of energy. The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant. While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system. Richard Feynman said during a 1961 lecture: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There is a fact, or if you wish, a "law", governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the "conservation of energy". It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. 
This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured. Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it. In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by formula_14 which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since "H" and "t" are not dynamically conjugate variables, neither in classical nor in quantum mechanics). In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena. Energy transfer. Closed systems. Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law, E = W + Q, where formula_15 is the amount of energy transferred, formula_1 represents the work done on or by the system, and formula_16 represents the heat flow into or out of the system. As a simplification, the heat term, formula_16, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes, E = W. This simplified equation is the one used to define the joule, for example. Open systems. Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by formula_17, one may write E = W + Q + formula_17. Thermodynamics. Internal energy.
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone. First law of thermodynamics. The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a "gain" in energy signified by a positive quantity) is given as formula_18, where the first term on the right is the heat transferred into the system, expressed in terms of temperature "T" and entropy "S" (in which entropy increases and its change d"S" is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is "P" and volume "V" (the negative sign results since compression of the system requires work to be done on it and so the volume change, d"V", is negative when work is done on the system). This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and "PV"-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a "closed" system is expressed in a general form by formula_19 where formula_20 is the heat supplied to the system and formula_21 is the work applied to the system. Equipartition of energy. The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average. This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. 
It states that nonequilibrium systems behave in such a way as to maximize their entropy production. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " W = \\int_C \\mathbf{F} \\cdot \\mathrm{d} \\mathbf{s}" }, { "math_id": 1, "text": "W" }, { "math_id": 2, "text": "E = h\\nu" }, { "math_id": 3, "text": "h" }, { "math_id": 4, "text": "\\nu" }, { "math_id": 5, "text": " E_0 = m_0 c^2 ," }, { "math_id": 6, "text": "E_0" }, { "math_id": 7, "text": "E_p" }, { "math_id": 8, "text": "E_k" }, { "math_id": 9, "text": "E_p = mgh" }, { "math_id": 10, "text": "E_k = \\frac{1}{2} mv^2" }, { "math_id": 11, "text": "E_p + E_k = E_\\text{total}" }, { "math_id": 12, "text": "c^2" }, { "math_id": 13, "text": "9\\times 10^{16}" }, { "math_id": 14, "text": "\\Delta E \\Delta t \\ge \\frac { \\hbar } {2 } " }, { "math_id": 15, "text": "E" }, { "math_id": 16, "text": "Q" }, { "math_id": 17, "text": "E_\\text{matter}" }, { "math_id": 18, "text": "\\mathrm{d}E = T\\mathrm{d}S - P\\mathrm{d}V\\," }, { "math_id": 19, "text": "\\mathrm{d}E=\\delta Q+\\delta W" }, { "math_id": 20, "text": "\\delta Q" }, { "math_id": 21, "text": "\\delta W" } ]
https://en.wikipedia.org/wiki?curid=9649
9649365
Stochastic oscillator
Market momentum indicator Stochastic oscillator is a momentum indicator within technical analysis that uses support and resistance levels as an oscillator. George Lane developed this indicator in the late 1950s. The term "stochastic" refers to the point of a current price in relation to its price range over a period of time. This method attempts to predict price turning points by comparing the closing price of a security to its price range. The 5-period stochastic oscillator in a daily timeframe is defined as follows: formula_0 formula_1 where formula_2 and formula_3 are the highest and lowest prices in the last 5 days respectively, while %"D" is the "N"-day moving average of %"K" (the last "N" values of %"K"). Usually this is a simple moving average, but can be an exponential moving average for a less standardized weighting for more recent values. There is only one valid signal in working with %"D" alone — a divergence between %"D" and the analyzed security. Calculation. The calculation above finds the range between an asset's high and low price during a given period of time. The current security's price is then expressed as a percentage of this range with 0% indicating the bottom of the range and 100% indicating the upper limits of the range over the time period covered. The idea behind this indicator is that prices tend to close near the extremes of the recent range before turning points. The Stochastic oscillator is calculated: formula_4 formula_5 "Where" formula_6 is the last closing price formula_7 is the lowest price over the last "N" periods formula_8 is the highest price over the last "N" periods formula_9 is a 3-period simple moving average of %"K", formula_10. formula_11 is a 3-period simple moving average of %"D", formula_12. A 3-line Stochastics will give an anticipatory signal in %"K", a signal in the turnaround of %"D" at or before a bottom, and a confirmation of the turnaround in %"D"-Slow. Typical values for "N" are 5, 9, or 14 periods. Smoothing the indicator over 3 periods is standard. According to George Lane, the Stochastics indicator is to be used with cycles, Elliott Wave Theory and Fibonacci retracement for timing. In low margin, calendar futures spreads, one might use Wilders parabolic as a trailing stop after a stochastics entry. A centerpiece of his teaching is the divergence and convergence of trendlines drawn on stochastics, as diverging/converging to trendlines drawn on price cycles. Stochastics predicts tops and bottoms. Interpretation. The signal to act is when there is a divergence-convergence, in an extreme area, with a crossover on the right hand side, of a cycle bottom. As plain crossovers can occur frequently, one typically waits for crossovers occurring together with an extreme pullback, after a peak or trough in the %D line. If price volatility is high, an exponential moving average of the %D indicator may be taken, which tends to smooth out rapid fluctuations in price. Stochastics attempts to predict turning points by comparing the closing price of a security to its price range. Prices tend to close near the extremes of the recent range just before turning points. In the case of an uptrend, prices tend to make higher highs, and the settlement price usually tends to be in the upper end of that time period's trading range. When the momentum starts to slow, the settlement prices will start to retreat from the upper boundaries of the range, causing the stochastic indicator to turn down at or before the final price high. 
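As a concrete illustration of the %K and %D formulas above, the following Python sketch computes both series from plain lists of highs, lows and closes. The price data are made up for demonstration, and the helper function name is arbitrary.

```python
# A minimal sketch of the %K / %D calculation described above, using plain
# Python lists of daily high, low and closing prices (made-up sample data).
def stochastic_oscillator(highs, lows, closes, n=5, d_periods=3):
    """Return (%K, %D) series; entries are None until enough data is available."""
    k_values = []
    for i in range(len(closes)):
        if i < n - 1:
            k_values.append(None)           # not enough history for an n-period range yet
            continue
        highest = max(highs[i - n + 1:i + 1])
        lowest = min(lows[i - n + 1:i + 1])
        k_values.append(100.0 * (closes[i] - lowest) / (highest - lowest))

    d_values = []
    for i in range(len(k_values)):
        window = [k for k in k_values[max(0, i - d_periods + 1):i + 1] if k is not None]
        d_values.append(sum(window) / d_periods if len(window) == d_periods else None)
    return k_values, d_values

# Example with arbitrary sample prices
highs  = [10.5, 10.8, 11.0, 11.2, 11.1, 11.4, 11.6, 11.5]
lows   = [10.0, 10.2, 10.5, 10.7, 10.6, 10.9, 11.1, 11.0]
closes = [10.3, 10.7, 10.9, 11.0, 10.9, 11.3, 11.5, 11.2]
k, d = stochastic_oscillator(highs, lows, closes)
print(k)
print(d)
```

The first valid %K appears once a full 5-day window is available, and %D follows once three %K values exist, mirroring the definitions given above.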
An alert or set-up is present when the %D line is in an extreme area and diverging from the price action. The actual signal takes place when the faster % K line crosses the % D line. Divergence-convergence is an indication that the momentum in the market is waning and a reversal may be in the making. The chart below illustrates an example of where a divergence in stochastics, relative to price, forecasts a reversal in the price's direction. An event known as "stochastic pop" occurs when prices break out and keep going. This is interpreted as a signal to increase the current position, or liquidate if the direction is against the current position. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\%K = 100\\times\\frac{\\mathrm{Price}-\\mathrm{Low}_5}{\\mathrm{High}_5-\\mathrm{Low}_5}" }, { "math_id": 1, "text": " \\%D_N = \\frac {\\%K_1+\\%K_2+\\%K_3+...\\%K_N}{N}" }, { "math_id": 2, "text": "\\mathrm{High}_5" }, { "math_id": 3, "text": "\\mathrm{Low}_5" }, { "math_id": 4, "text": " \\%K = \\frac {\\mathrm{Price}-\\mathrm{Low}_N}{\\mathrm{High}_N-\\mathrm{Low}_N}\\times 100" }, { "math_id": 5, "text": " \\%D = \\frac {\\%K_1+\\%K_2+\\%K_3}{3}" }, { "math_id": 6, "text": "\\mathrm{Price}" }, { "math_id": 7, "text": "\\mathrm{Low}_N" }, { "math_id": 8, "text": "\\mathrm{High}_N" }, { "math_id": 9, "text": "\\%D" }, { "math_id": 10, "text": "\\mathrm{SMA}_3(\\%K)" }, { "math_id": 11, "text": "\\%D\\mathrm{-Slow}" }, { "math_id": 12, "text": "\\mathrm{SMA}_3(\\%D)" } ]
https://en.wikipedia.org/wiki?curid=9649365
9651443
Radial basis function network
Type of artificial neural network that uses radial basis functions as activation functions In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment. Network architecture. Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers formula_0. The output of the network is then a scalar function of the input vector, formula_1, and is given by formula_2 where formula_3 is the number of neurons in the hidden layer, formula_4 is the center vector for neuron formula_5, and formula_6 is the weight of neuron formula_5 in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition) and the radial basis function is commonly taken to be Gaussian formula_7. The Gaussian basis functions are local to the center vector in the sense that formula_8 i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron. Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of formula_9. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision. The parameters formula_10, formula_11, and formula_12 are determined in a manner that optimizes the fit between formula_13 and the data. Normalized. Normalized architecture. In addition to the above "unnormalized" architecture, RBF networks can be "normalized". In this case the mapping is formula_14 where formula_15 is known as a "normalized radial basis function". Theoretical motivation for normalization. There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density formula_16 where the weights formula_17 and formula_18 are exemplars from the data and we require the kernels to be normalized formula_19 and formula_20. The probability densities in the input and output spaces are formula_21 and, by the analogous integration over the inputs, P(y) = (1/N) Σi σ(|y − ei|). The expectation of y given an input formula_22 is formula_23 where formula_24 is the conditional probability of y given formula_25. The conditional probability is related to the joint probability through Bayes theorem formula_26 which yields formula_27. This becomes formula_28 when the integrations are performed. Local linear models. It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order, formula_29 and formula_30 in the unnormalized and normalized cases, respectively. Here formula_31 are weights to be determined.
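To make the basic architecture concrete, here is a minimal Python sketch of the unnormalized network output formula_2 with Gaussian basis functions: each hidden neuron evaluates exp(−βi‖x − ci‖²) and the output layer forms the weighted sum. The centers, widths and weights are arbitrary example values rather than a trained model, and NumPy is used only for brevity.

```python
# A minimal sketch of the unnormalized Gaussian RBF network described above:
# phi(x) = sum_i a_i * exp(-beta_i * ||x - c_i||^2)
import numpy as np

def rbf_forward(x, centers, betas, weights):
    """Evaluate the RBF network output for a single input vector x."""
    x = np.asarray(x, dtype=float)
    sq_dists = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 for each hidden neuron
    activations = np.exp(-betas * sq_dists)         # Gaussian radial basis functions
    return float(np.dot(weights, activations))      # linear output layer

# Example: 3 hidden neurons on 2-dimensional inputs (all values illustrative)
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
betas   = np.array([1.0, 2.0, 0.5])
weights = np.array([0.7, -0.3, 1.2])

print(rbf_forward([0.5, 0.5], centers, betas, weights))
```

The normalized variant would simply divide the activations by their sum before taking the weighted combination, as in the normalized mapping given above.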
Higher order linear terms are also possible. This result can be written formula_32 where formula_33 and formula_34 in the unnormalized case and formula_35 in the normalized case. Here formula_36 is a Kronecker delta function defined as formula_37. Training. RBF networks are typically trained from pairs of input and target values formula_38, formula_39 by a two-step algorithm. In the first step, the center vectors formula_4 of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using k-means clustering. Note that this step is unsupervised. The second step simply fits a linear model with coefficients formula_40 to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function: formula_41 where formula_42. We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit. There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as formula_43 where formula_44 and formula_45 where optimization of S maximizes smoothness and formula_46 is known as a regularization parameter. A third optional backpropagation step can be performed to fine-tune all of the RBF net's parameters. Interpolation. RBF networks can be used to interpolate a function formula_47 when the values of that function are known on a finite number of points: formula_48. Taking the known points formula_49 to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points formula_50 the weights can be solved from the equation formula_51 It can be shown that the interpolation matrix in the above equation is non-singular, if the points formula_49 are distinct, and thus the weights formula_52 can be solved by simple linear algebra: formula_53 where formula_54. Function approximation. If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the width and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron. Training the basis function centers. Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers. The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers. Pseudoinverse solution for the linear weights. After the centers formula_55 have been fixed, the weights that minimize the error at the output can be computed with a linear pseudoinverse solution: formula_56, where the entries of "G" are the values of the radial basis functions evaluated at the points formula_57: formula_58. The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed). Gradient descent training of the linear weights.
Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found), formula_59 where formula_60 is a "learning parameter." For the case of training the linear weights, formula_61, the algorithm becomes formula_62 in the unnormalized case and formula_63 in the normalized case. For local-linear-architectures gradient-descent training is formula_64 Projection operator training of the linear weights. For the case of training the linear weights, formula_61 and formula_65, the algorithm becomes formula_66 in the unnormalized case and formula_67 in the normalized case and formula_68 in the local-linear case. For one basis function, projection operator training reduces to Newton's method. Examples. Logistic map. The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by formula_69 where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map. Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem; identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate formula_70 for f. Function approximation. Unnormalized radial basis functions. The architecture is formula_71 where formula_72. Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N=5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight formula_73 is taken to be a constant equal to 5. The weights formula_74 are five exemplars from the time series. The weights formula_10 are trained with projection operator training: formula_75 where the learning rate formula_76 is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15. Normalized radial basis functions. The normalized RBF architecture is formula_14 where formula_77. Again: formula_78. Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight formula_73 is taken to be a constant equal to 6. The weights formula_74 are five exemplars from the time series. The weights formula_10 are trained with projection operator training: formula_79 where the learning rate formula_76 is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases. Time series prediction. 
Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration: formula_80 formula_81 formula_82. A comparison of the actual and estimated time series is displayed in the figure. The estimated times series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps. Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series. This is a property of the sensitive dependence on initial conditions common to chaotic time series. A small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent. Control of a chaotic time series. We assume the output of the logistic map can be manipulated through a control parameter formula_83 such that formula_84. The goal is to choose the control parameter in such a way as to drive the time series to a desired output formula_85. This can be done if we choose the control parameter to be formula_86 where formula_87 is an approximation to the underlying natural dynamics of the system. The learning algorithm is given by formula_88 where formula_89. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{x} \\in \\mathbb{R}^n" }, { "math_id": 1, "text": " \\varphi : \\mathbb{R}^n \\to \\mathbb{R} " }, { "math_id": 2, "text": "\\varphi(\\mathbf{x}) = \\sum_{i=1}^N a_i \\rho(||\\mathbf{x}-\\mathbf{c}_i||)" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "\\mathbf c_i" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "a_i" }, { "math_id": 7, "text": " \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) = \\exp \\left[ -\\beta_i \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert ^2 \\right] " }, { "math_id": 8, "text": "\\lim_{||x|| \\to \\infty}\\rho(\\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert) = 0" }, { "math_id": 9, "text": "\\mathbb{R}^n" }, { "math_id": 10, "text": " a_i " }, { "math_id": 11, "text": " \\mathbf{c}_i " }, { "math_id": 12, "text": " \\beta_i " }, { "math_id": 13, "text": " \\varphi " }, { "math_id": 14, "text": " \\varphi ( \\mathbf{x} ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac { \\sum_{i=1}^N a_i \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } { \\sum_{i=1}^N \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } = \\sum_{i=1}^N a_i u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 15, "text": " u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac { \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } { \\sum_{j=1}^N \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_j \\right \\Vert \\big ) } " }, { "math_id": 16, "text": " P\\left ( \\mathbf{x} \\land y \\right ) = {1 \\over N} \\sum_{i=1}^N \\, \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) \\, \\sigma \\big ( \\left \\vert y - e_i \\right \\vert \\big )" }, { "math_id": 17, "text": " \\mathbf{c}_i " }, { "math_id": 18, "text": " e_i " }, { "math_id": 19, "text": " \\int \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) \\, d^n\\mathbf{x} =1" }, { "math_id": 20, "text": " \\int \\sigma \\big ( \\left \\vert y - e_i \\right \\vert \\big ) \\, dy =1" }, { "math_id": 21, "text": " P \\left ( \\mathbf{x} \\right ) = \\int P \\left ( \\mathbf{x} \\land y \\right ) \\, dy = {1 \\over N} \\sum_{i=1}^N \\, \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big )" }, { "math_id": 22, "text": " \\mathbf{x} " }, { "math_id": 23, "text": " \\varphi \\left ( \\mathbf{x} \\right ) \\ \\stackrel{\\mathrm{def}}{=}\\ E\\left ( y \\mid \\mathbf{x} \\right ) = \\int y \\, P\\left ( y \\mid \\mathbf{x} \\right ) dy " }, { "math_id": 24, "text": " P\\left ( y \\mid \\mathbf{x} \\right ) " }, { "math_id": 25, "text": " \\mathbf{x} " }, { "math_id": 26, "text": " P\\left ( y \\mid \\mathbf{x} \\right ) = \\frac {P \\left ( \\mathbf{x} \\land y \\right )} {P \\left ( \\mathbf{x} \\right )} " }, { "math_id": 27, "text": " \\varphi \\left ( \\mathbf{x} \\right ) = \\int y \\, \\frac {P \\left ( \\mathbf{x} \\land y \\right )} {P \\left ( \\mathbf{x} \\right )} \\, dy " }, { "math_id": 28, "text": " \\varphi \\left ( \\mathbf{x} \\right ) = \\frac { \\sum_{i=1}^N e_i \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } { \\sum_{i=1}^N \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } = \\sum_{i=1}^N e_i u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 29, "text": " 
\\varphi \\left ( \\mathbf{x} \\right ) = \\sum_{i=1}^N \\left ( a_i + \\mathbf{b}_i \\cdot \\left ( \\mathbf{x} - \\mathbf{c}_i \\right ) \\right )\\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 30, "text": " \\varphi \\left ( \\mathbf{x} \\right ) = \\sum_{i=1}^N \\left ( a_i + \\mathbf{b}_i \\cdot \\left ( \\mathbf{x} - \\mathbf{c}_i \\right ) \\right )u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 31, "text": " \\mathbf{b}_i " }, { "math_id": 32, "text": " \\varphi \\left ( \\mathbf{x} \\right ) = \\sum_{i=1}^{2N} \\sum_{j=1}^n e_{ij} v_{ij} \\big ( \\mathbf{x} - \\mathbf{c}_i \\big ) " }, { "math_id": 33, "text": " e_{ij} = \\begin{cases} a_i, & \\mbox{if } i \\in [1,N] \\\\ b_{ij}, & \\mbox{if }i \\in [N+1,2N] \\end{cases} " }, { "math_id": 34, "text": " v_{ij}\\big ( \\mathbf{x} - \\mathbf{c}_i \\big ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\begin{cases} \\delta_{ij} \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) , & \\mbox{if } i \\in [1,N] \\\\ \\left ( x_{ij} - c_{ij} \\right ) \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) , & \\mbox{if }i \\in [N+1,2N] \\end{cases} " }, { "math_id": 35, "text": " v_{ij}\\big ( \\mathbf{x} - \\mathbf{c}_i \\big ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\begin{cases} \\delta_{ij} u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) , & \\mbox{if } i \\in [1,N] \\\\ \\left ( x_{ij} - c_{ij} \\right ) u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) , & \\mbox{if }i \\in [N+1,2N] \\end{cases} " }, { "math_id": 36, "text": " \\delta_{ij} " }, { "math_id": 37, "text": " \\delta_{ij} = \\begin{cases} 1, & \\mbox{if }i = j \\\\ 0, & \\mbox{if }i \\ne j \\end{cases} " }, { "math_id": 38, "text": "\\mathbf{x}(t), y(t)" }, { "math_id": 39, "text": "t = 1, \\dots, T" }, { "math_id": 40, "text": "w_i" }, { "math_id": 41, "text": " K( \\mathbf{w} ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_{t=1}^T K_t( \\mathbf{w} ) " }, { "math_id": 42, "text": " K_t( \\mathbf{w} ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ]^2 " }, { "math_id": 43, "text": " H( \\mathbf{w} ) \\ \\stackrel{\\mathrm{def}}{=}\\ K( \\mathbf{w} ) + \\lambda S( \\mathbf{w} ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_{t=1}^T H_t( \\mathbf{w} ) " }, { "math_id": 44, "text": " S( \\mathbf{w} ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_{t=1}^T S_t( \\mathbf{w} ) " }, { "math_id": 45, "text": " H_t( \\mathbf{w} ) \\ \\stackrel{\\mathrm{def}}{=}\\ K_t ( \\mathbf{w} ) + \\lambda S_t ( \\mathbf{w} ) " }, { "math_id": 46, "text": " \\lambda " }, { "math_id": 47, "text": "y: \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 48, "text": "y(\\mathbf x_i) = b_i, i=1, \\ldots, N" }, { "math_id": 49, "text": "\\mathbf x_i" }, { "math_id": 50, "text": "g_{ij} = \\rho(|| \\mathbf x_j - \\mathbf x_i ||)" }, { "math_id": 51, "text": "\\left[ \\begin{matrix}\ng_{11} & g_{12} & \\cdots & g_{1N} \\\\\ng_{21} & g_{22} & \\cdots & g_{2N} \\\\\n\\vdots & & \\ddots & \\vdots \\\\\ng_{N1} & g_{N2} & \\cdots & g_{NN}\n\\end{matrix}\\right] \\left[ \\begin{matrix}\nw_1 \\\\\nw_2 \\\\\n\\vdots \\\\\nw_N\n\\end{matrix} \\right] = \\left[ \\begin{matrix}\nb_1 \\\\\nb_2 \\\\\n\\vdots \\\\\nb_N\n\\end{matrix} \\right]" }, { "math_id": 52, "text": "w" }, { "math_id": 53, "text": "\\mathbf{w} = \\mathbf{G}^{-1} \\mathbf{b}" }, { "math_id": 54, "text": "G = (g_{ij})" }, { 
"math_id": 55, "text": "c_i" }, { "math_id": 56, "text": "\\mathbf{w} = \\mathbf{G}^+ \\mathbf{b}" }, { "math_id": 57, "text": "x_i" }, { "math_id": 58, "text": "g_{ji} = \\rho(||x_j-c_i||)" }, { "math_id": 59, "text": " \\mathbf{w}(t+1) = \\mathbf{w}(t) - \\nu \\frac {d} {d\\mathbf{w}} H_t(\\mathbf{w}) " }, { "math_id": 60, "text": " \\nu " }, { "math_id": 61, "text": " a_i " }, { "math_id": 62, "text": " a_i (t+1) = a_i(t) + \\nu \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] \\rho \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 63, "text": " a_i (t+1) = a_i(t) + \\nu \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] u \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 64, "text": " e_{ij} (t+1) = e_{ij}(t) + \\nu \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] v_{ij} \\big ( \\mathbf{x}(t) - \\mathbf{c}_i \\big ) " }, { "math_id": 65, "text": " e_{ij} " }, { "math_id": 66, "text": " a_i (t+1) = a_i(t) + \\nu \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] \\frac {\\rho \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} {\\sum_{i=1}^N \\rho^2 \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} " }, { "math_id": 67, "text": " a_i (t+1) = a_i(t) + \\nu \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] \\frac {u \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} {\\sum_{i=1}^N u^2 \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} " }, { "math_id": 68, "text": " e_{ij} (t+1) = e_{ij}(t) + \\nu \\big [ y(t) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] \\frac { v_{ij} \\big ( \\mathbf{x}(t) - \\mathbf{c}_i \\big ) } {\\sum_{i=1}^N \\sum_{j=1}^n v_{ij}^2 \\big ( \\mathbf{x}(t) - \\mathbf{c}_i \\big ) } " }, { "math_id": 69, "text": " x(t+1)\\ \\stackrel{\\mathrm{def}}{=}\\ f\\left [ x(t)\\right ] = 4 x(t) \\left [ 1-x(t) \\right ] " }, { "math_id": 70, "text": " x(t+1) = f \\left [ x(t) \\right ] \\approx \\varphi(t) = \\varphi \\left [ x(t)\\right ] " }, { "math_id": 71, "text": " \\varphi ( \\mathbf{x} ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_{i=1}^N a_i \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) " }, { "math_id": 72, "text": " \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) = \\exp \\left[ -\\beta_i \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert ^2 \\right] = \\exp \\left[ -\\beta_i \\left ( x(t) - c_i \\right ) ^2 \\right] " }, { "math_id": 73, "text": " \\beta " }, { "math_id": 74, "text": " c_i " }, { "math_id": 75, "text": " a_i (t+1) = a_i(t) + \\nu \\big [ x(t+1) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] \\frac {\\rho \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} {\\sum_{i=1}^N \\rho^2 \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} " }, { "math_id": 76, "text": " \\nu " }, { "math_id": 77, "text": " u \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac { \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } { \\sum_{i=1}^N \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert \\big ) } " }, { "math_id": 78, "text": " \\rho \\big ( \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right 
\\Vert \\big ) = \\exp \\left[ -\\beta \\left \\Vert \\mathbf{x} - \\mathbf{c}_i \\right \\Vert ^2 \\right] = \\exp \\left[ -\\beta \\left ( x(t) - c_i \\right ) ^2 \\right] " }, { "math_id": 79, "text": " a_i (t+1) = a_i(t) + \\nu \\big [ x(t+1) - \\varphi \\big ( \\mathbf{x}(t), \\mathbf{w} \\big ) \\big ] \\frac {u \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} {\\sum_{i=1}^N u^2 \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} " }, { "math_id": 80, "text": " \\varphi(0) = x(1)" }, { "math_id": 81, "text": " {x}(t) \\approx \\varphi(t-1) " }, { "math_id": 82, "text": " {x}(t+1) \\approx \\varphi(t)=\\varphi [\\varphi(t-1)]" }, { "math_id": 83, "text": " c[ x(t),t] " }, { "math_id": 84, "text": " {x}^{ }_{ }(t+1) = 4 x(t) [1-x(t)] +c[x(t),t] " }, { "math_id": 85, "text": " d(t) " }, { "math_id": 86, "text": " c^{ }_{ }[x(t),t] \\ \\stackrel{\\mathrm{def}}{=}\\ -\\varphi [x(t)] + d(t+1) " }, { "math_id": 87, "text": " y[x(t)] \\approx f[x(t)] = x(t+1)- c[x(t),t] " }, { "math_id": 88, "text": " a_i (t+1) = a_i(t) + \\nu \\varepsilon \\frac {u \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} {\\sum_{i=1}^N u^2 \\big ( \\left \\Vert \\mathbf{x}(t) - \\mathbf{c}_i \\right \\Vert \\big )} " }, { "math_id": 89, "text": " \\varepsilon \\ \\stackrel{\\mathrm{def}}{=}\\ f[x(t)] - \\varphi [x(t)] = x(t+1)- c[x(t),t] - \\varphi [x(t)] = x(t+1) - d(t+1) " } ]
https://en.wikipedia.org/wiki?curid=9651443
9653
Expected value
Average value of a random variable In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would "expect" to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E("X"), E["X"], or E"X", with E also often stylized as formula_0 or "E". &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; History. The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes "in a fair way" between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it. In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability. 
In the foreword to his treatise, Huygens wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs. In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables. Etymology. Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2. More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage "mathematical hope." Notations. The use of the letter E to denote "expected value" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. In German, E stands for "Erwartungswert", in Spanish for "esperanza matemática", and in French for "espérance mathématique." When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as E (upright), E (italic), or formula_0 (in blackboard bold), while a variety of bracket notations (such as E("X"), E["X"], and E"X") are all used. Another popular notation is μ"X", whereas ⟨"X"⟩, ⟨"X"⟩av, and formula_1 are commonly used in physics, and M("X") in Russian-language literature. Definition. As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. 
All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E["X"]"i" = E["X""i"]. Similarly, one may define the expected value of a random matrix X with components "X""ij" by E["X"]"ij" = E["X""ij"]. Random variables with finitely many outcomes. Consider a random variable X with a "finite" list "x"1, ..., "x""k" of possible outcomes, each of which (respectively) has probability "p"1, ..., "p""k" of occurring. The expectation of X is defined as formula_2 Since the probabilities must satisfy "p"1 + ⋅⋅⋅ + "p""k" = 1, it is natural to interpret E["X"] as a weighted average of the "x""i" values, with weights given by their probabilities "p""i". In the special case that all possible outcomes are equiprobable (that is, "p"1 = ⋅⋅⋅ = "p""k"), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. Random variables with countably infinitely many outcomes. Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that formula_7 where "x"1, "x"2, ... are the possible outcomes of the random variable X and "p"1, "p"2, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context. However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable "does not have finite expectation." Random variables with density. Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral formula_13 A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of "absolutely continuous random variables" is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to consider only the standard Riemann integration. 
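To make these two definitions concrete, the following is a minimal sketch in Python (not part of the original article); the fair six-sided die and the exponential density used below are assumptions chosen purely for illustration. The discrete expectation is computed as a probability-weighted sum, and the continuous expectation is approximated by numerically integrating x times the density.

import math

# Discrete case: E[X] is the probability-weighted sum of the outcomes.
# Illustrative example (assumed): a fair six-sided die, so E[X] = 3.5.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
e_discrete = sum(x * p for x, p in zip(outcomes, probs))

# Continuous case: E[X] is the integral of x * f(x) dx for a density f.
# Illustrative example (assumed): the exponential density f(x) = exp(-x)
# on [0, infinity), whose expectation is 1. The improper integral is
# truncated at x = 50 and approximated by a midpoint Riemann sum.
def f(x):
    return math.exp(-x)

n, upper = 200_000, 50.0
h = upper / n
e_continuous = sum((i + 0.5) * h * f((i + 0.5) * h) * h for i in range(n))

print(e_discrete)     # 3.5
print(e_continuous)   # approximately 1.0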
Sometimes "continuous random variables" are defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that "f"("x") = ("x"2 + π2)−1. It is straightforward to compute in this case that formula_14 The limit of this expression as "a" → −∞ and "b" → ∞ does not exist: if the limits are taken so that "a" = −"b", then the limit is zero, while if the constraint 2"a" = −"b" is taken, then the limit is ln(2). To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E["X"] left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of E["X"] for more general random variables X. Arbitrary real-valued random variables. All definitions of the expected value may be expressed in the language of measure theory. In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E["X"], is defined as the Lebesgue integral formula_15 Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of "approximations" of X which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be "absolutely continuous" if any of the following conditions are satisfied: there is a nonnegative measurable function f on the real line such that formula_16 for any Borel set A; the cumulative distribution function of X is absolutely continuous; or, for any Borel set A of real numbers with Lebesgue measure equal to zero, the probability of X taking a value in A is also equal to zero. These conditions are all equivalent, although this is nontrivial to establish. In this definition, f is called the "probability density function" of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that formula_17 for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variable formula_3 can also be defined on the graph of its cumulative distribution function formula_18 by a nearby equality of areas. In fact, formula_19 with a real number formula_20 if and only if the two surfaces in the formula_21-formula_22-plane, described by formula_23 respectively, have the same finite area, i.e. if formula_24 and both improper Riemann integrals converge. Finally, this is equivalent to the representation formula_25 also with convergent integrals. Infinite expected values. Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. 
Petersburg paradox, in which one considers a random variable with possible outcomes "x""i" = 2"i", with associated probabilities "p""i" = 2−"i", for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has formula_26 It is natural to say that the expected value equals +∞. There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any "nonnegative" random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by "X" + = max("X", 0) and "X" − = −min("X", 0). These are nonnegative random variables, and it can be directly checked that "X" = "X" + − "X" −. Since E["X" +] and E["X" −] are both then defined as either nonnegative numbers or +∞, it is then natural to define: formula_27 According to this definition, E["X"] exists and is finite if and only if E["X" +] and E["X" −] are both finite. Due to the formula |"X"| = "X" + + "X" −, this is the case if and only if E|"X"| is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. In the St. Petersburg example above, for instance, one has E["X" −] = 0 and so E["X"] = +∞ as desired. On the other hand, for a random variable whose positive and negative outcomes each contribute sums diverging like the harmonic series, one has E["X" +] = ∞ and E["X" −] = ∞ (see Harmonic series). Hence, in this case the expectation of X is undefined. Expected values of common distributions. The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references. Properties. The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely", a central property of the Lebesgue integral. Basically, one says that an inequality like formula_28 is true almost surely, when the probability measure attributes zero-mass to the complementary event formula_29 Inequalities. Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a "nonnegative" random variable X and any positive number a, it states that formula_68 If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable |"X"−E["X"]|2 to obtain Chebyshev's inequality formula_69 where Var is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. 
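To illustrate numerically how loose these bounds can be, here is a small Python sketch (not part of the original article); the unit-rate exponential distribution, the threshold, and the sample size are assumptions chosen only for illustration. It compares an empirical tail probability with the corresponding Markov and Chebyshev bounds.

import random

# For an exponential random variable with rate 1 (so E[X] = 1 and Var[X] = 1),
# compare the empirical tail probability P(X >= a) with the Markov bound E[X]/a
# and the Chebyshev bound Var[X]/(a - E[X])^2 applied to the event {|X - E[X]| >= a - E[X]}.
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]

a = 3.0
empirical_tail = sum(x >= a for x in samples) / n   # about exp(-3), roughly 0.05
markov_bound = 1.0 / a                              # 0.333...
chebyshev_bound = 1.0 / (a - 1.0) ** 2              # 0.25

print(empirical_tail, markov_bound, chebyshev_bound)

Both bounds hold, but they sit well above the true tail probability, in line with the remark above.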
For example, in the case of an unweighted die, Chebyshev's inequality says that the odds of rolling between 1 and 6 are at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables. The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory. Jensen's inequality: for a convex function "f" and a random variable X with finite expectation, formula_70 A consequence, for exponents 0 < "s" < "t", is Lyapunov's inequality formula_71 Hölder's inequality: if "p" and "q" are real numbers with "p", "q" > 1 satisfying 1/"p" + 1/"q" = 1, then formula_72 for any random variables X and Y. The special case of "p" = "q" = 2 is called the Cauchy–Schwarz inequality, and is particularly well-known. Minkowski's inequality: for any real number "p" ≥ 1, formula_73 The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces. Expectations under convergence of random variables. In general, it is not the case that formula_74 even if formula_75 pointwise. Thus, one cannot interchange limits and expectation without additional conditions on the random variables. To see this, let formula_76 be a random variable distributed uniformly on formula_77 For formula_78 define a sequence of random variables formula_79 with formula_80 being the indicator function of the event formula_81 Then, it follows that formula_82 pointwise. But, formula_83 for each formula_84 Hence, formula_85 Analogously, for a general sequence of random variables formula_86 the expected value operator is not formula_87-additive, i.e. formula_88 An example is easily obtained by setting formula_89 and formula_90 for formula_91 where formula_92 is as in the previous example. A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below. Monotone convergence theorem: let formula_93 be a sequence of random variables, with formula_94 (a.s.) for every formula_95 Furthermore, let formula_96 pointwise. Then, the monotone convergence theorem states that formula_97 Using the monotone convergence theorem, one can show that expectation satisfies countable additivity for nonnegative random variables: if formula_98 are nonnegative random variables, then formula_99 Fatou's lemma: let formula_100 be a sequence of nonnegative random variables. Fatou's lemma states that formula_101 As a corollary, let formula_102 with formula_103 for all formula_95 If formula_96 (a.s.), then, since formula_105 (a.s.), Fatou's lemma gives formula_104 Dominated convergence theorem: let formula_106 be a sequence of random variables. If formula_96 pointwise (a.s.) and formula_107 (a.s.) with formula_108 Then, according to the dominated convergence theorem, X is integrable with formula_109 and moreover formula_110 and formula_111 Uniform integrability: in some cases, the equality formula_112 holds when the sequence formula_113 is uniformly integrable. Relationship with characteristic function. The probability density function formula_114 of a scalar random variable formula_3 is related to its characteristic function formula_115 by the inversion formula: formula_116 For the expected value of formula_117 (where formula_118 is a Borel function), we can use this inversion formula to obtain formula_119 If formula_120 is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem, formula_121 where formula_122 is the Fourier transform of formula_123 The expression for formula_120 also follows directly from the Plancherel theorem. Uses and applications. The expectation of a random variable plays an important role in a variety of contexts. In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter. For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies. 
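The following Python sketch (not part of the original article) illustrates this use of the sample mean and of indicator functions; the standard normal distribution, the particular event, and the sample size are assumptions chosen only for illustration. The sample mean of the draws estimates E[X], and the mean of the indicator values estimates the probability of the event, as justified by the law of large numbers.

import random

# Monte Carlo estimation for a standard normal random variable X:
# the sample mean estimates E[X] = 0, and the mean of the indicator
# of the event {X > 1} estimates P(X > 1), which is about 0.159.
random.seed(1)
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

estimated_mean = sum(samples) / n
estimated_prob = sum(x > 1.0 for x in samples) / n

print(estimated_mean, estimated_prob)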
The expected values of the powers of "X" are called the moments of "X"; the moments about the mean of "X" are expected values of powers of "X" − E["X"]. The moments of some random variables can be used to specify their distributions, via their moment generating functions. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller. This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. formula_124 where formula_125 is the indicator function of the set formula_126 In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose "X" is a discrete random variable with values "xi" and corresponding probabilities "pi." Now consider a weightless rod on which are placed weights, at locations "xi" along the rod and having masses "pi" (whose sum is one). The point at which the rod balances is E["X"]. Expected values can also be used to compute the variance, by means of the computational formula for the variance formula_127 A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator formula_128 operating on a quantum state vector formula_129 is written as formula_130 The uncertainty in formula_128 can be calculated by the formula formula_131.
[ { "math_id": 0, "text": "\\mathbb{E}" }, { "math_id": 1, "text": "\\overline{X}" }, { "math_id": 2, "text": "\\operatorname{E}[X] =x_1p_1 + x_2p_2 + \\cdots + x_kp_k." }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": " \\operatorname{E}[X] = 1 \\cdot \\frac{1}{6} + 2 \\cdot \\frac{1}{6} + 3\\cdot\\frac{1}{6} + 4\\cdot\\frac{1}{6} + 5\\cdot\\frac{1}{6} + 6\\cdot\\frac{1}{6} = 3.5." }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": " \\operatorname{E}[\\,\\text{gain from }\\$1\\text{ bet}\\,] = -\\$1 \\cdot \\frac{37}{38} + \\$35 \\cdot \\frac{1}{38} = -\\$\\frac{1}{19}." }, { "math_id": 7, "text": "\\operatorname{E}[X] = \\sum_{i=1}^\\infty x_i\\, p_i," }, { "math_id": 8, "text": "x_i = i" }, { "math_id": 9, "text": "p_i = \\tfrac{c}{i \\cdot 2^i}" }, { "math_id": 10, "text": "i = 1, 2, 3, \\ldots," }, { "math_id": 11, "text": "c = \\tfrac{1}{\\ln 2}" }, { "math_id": 12, "text": "\\operatorname{E}[X] \\,= \\sum_i x_i p_i = 1(\\tfrac{c}{2})\n+ 2(\\tfrac{c}{8}) + 3 (\\tfrac{c}{24}) + \\cdots\n \\,= \\, \\tfrac{c}{2} + \\tfrac{c}{4} + \\tfrac{c}{8} + \\cdots \\,=\\, c \\,=\\, \\tfrac{1}{\\ln 2}." }, { "math_id": 13, "text": "\\operatorname{E}[X] = \\int_{-\\infty}^\\infty x f(x)\\, dx." }, { "math_id": 14, "text": "\\int_a^b xf(x)\\,dx=\\int_a^b \\frac{x}{x^2+\\pi^2}\\,dx=\\frac{1}{2}\\ln\\frac{b^2+\\pi^2}{a^2+\\pi^2}." }, { "math_id": 15, "text": "\\operatorname{E} [X] = \\int_\\Omega X\\,d\\operatorname{P}." }, { "math_id": 16, "text": "\\operatorname{P}(X \\in A) = \\int_A f(x) \\, dx," }, { "math_id": 17, "text": "\\operatorname{E}[X] \\equiv \\int_\\Omega X\\,d\\operatorname{P} = \\int_\\Reals x f(x)\\, dx" }, { "math_id": 18, "text": "F" }, { "math_id": 19, "text": "\\operatorname{E}[X] = \\mu" }, { "math_id": 20, "text": "\\mu" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "y" }, { "math_id": 23, "text": "\nx \\le \\mu, \\;\\, 0\\le y \\le F(x) \\quad\\text{or}\\quad x \\ge \\mu, \\;\\, F(x) \\le y \\le 1\n" }, { "math_id": 24, "text": "\n\\int_{-\\infty}^\\mu F(x)\\,dx = \\int_\\mu^\\infty \\big(1 - F(x)\\big)\\,dx\n" }, { "math_id": 25, "text": "\n\\operatorname{E}[X]\n= \\int_0^\\infty \\bigl(1 - F(x)\\bigr) \\, dx - \\int_{-\\infty}^0 F(x) \\, dx,\n" }, { "math_id": 26, "text": " \\operatorname{E}[X]= \\sum_{i=1}^\\infty x_i\\,p_i = 2\\cdot \\frac{1}{2}+4\\cdot\\frac{1}{4} + 8\\cdot\\frac{1}{8}+ 16\\cdot\\frac{1}{16}+ \\cdots = 1 + 1 + 1 + 1 + \\cdots." }, { "math_id": 27, "text": "\n\\operatorname{E}[X] = \\begin{cases}\n\\operatorname{E}[X^+] - \\operatorname{E}[X^-] & \\text{if } \\operatorname{E}[X^+] < \\infty \\text{ and } \\operatorname{E}[X^-] < \\infty;\\\\\n+\\infty & \\text{if } \\operatorname{E}[X^+] = \\infty \\text{ and } \\operatorname{E}[X^-] < \\infty;\\\\\n-\\infty & \\text{if } \\operatorname{E}[X^+] < \\infty \\text{ and } \\operatorname{E}[X^-] = \\infty;\\\\\n\\text{undefined} & \\text{if } \\operatorname{E}[X^+] = \\infty \\text{ and } \\operatorname{E}[X^-] = \\infty.\n\\end{cases}\n" }, { "math_id": 28, "text": "X \\geq 0" }, { "math_id": 29, "text": "\\left\\{ X < 0 \\right\\}." }, { "math_id": 30, "text": "\\operatorname{E}[X] \\geq 0." 
}, { "math_id": 31, "text": "\\operatorname{E}[\\cdot]" }, { "math_id": 32, "text": "Y," }, { "math_id": 33, "text": "a," }, { "math_id": 34, "text": "\\begin{align}\n \\operatorname{E}[X + Y] &= \\operatorname{E}[X] + \\operatorname{E}[Y], \\\\\n \\operatorname{E}[aX] &= a \\operatorname{E}[X],\n\\end{align}\n" }, { "math_id": 35, "text": "N" }, { "math_id": 36, "text": "X_{i}" }, { "math_id": 37, "text": "a_{i} (1\\leq i \\leq N)," }, { "math_id": 38, "text": " \\operatorname{E}\\left[\\sum_{i=1}^{N}a_{i}X_{i}\\right] = \\sum_{i=1}^{N}a_{i}\\operatorname{E}[X_{i}]." }, { "math_id": 39, "text": "X\\leq Y" }, { "math_id": 40, "text": "\\operatorname{E}[X]" }, { "math_id": 41, "text": "\\operatorname{E}[Y]" }, { "math_id": 42, "text": "\\operatorname{E}[X]\\leq\\operatorname{E}[Y]." }, { "math_id": 43, "text": "Z=Y-X," }, { "math_id": 44, "text": "Z\\geq 0" }, { "math_id": 45, "text": "\\operatorname{E}[|X|]=0," }, { "math_id": 46, "text": "X=0" }, { "math_id": 47, "text": "X = Y" }, { "math_id": 48, "text": "\\operatorname{E}[X] = \\operatorname{E}[ Y]." }, { "math_id": 49, "text": "X = c" }, { "math_id": 50, "text": "\\operatorname{E}[X] = c." }, { "math_id": 51, "text": "\\operatorname{E}[\\operatorname{E}[X]] = \\operatorname{E}[X]." }, { "math_id": 52, "text": "|\\operatorname{E}[X]| \\leq \\operatorname{E}|X|." }, { "math_id": 53, "text": "F(x)" }, { "math_id": 54, "text": "\\operatorname{E}[X] = \\int_{-\\infty}^\\infty x\\,dF(x)," }, { "math_id": 55, "text": " \\operatorname{E}[X] = \\int_0^\\infty (1-F(x))\\,dx - \\int^0_{-\\infty} F(x)\\,dx," }, { "math_id": 56, "text": " \\operatorname{E}[X] = \\sum _{n=0}^\\infty \\Pr(X>n), " }, { "math_id": 57, "text": "\\operatorname{E}[XY]" }, { "math_id": 58, "text": "\\operatorname{E}[X]\\cdot \\operatorname{E}[Y]." }, { "math_id": 59, "text": "Y" }, { "math_id": 60, "text": "\\operatorname{E}[XY]=\\operatorname{E}[X] \\operatorname{E}[Y]." }, { "math_id": 61, "text": "\\operatorname{E}[XY] \\neq \\operatorname{E}[X] \\operatorname{E}[Y]," }, { "math_id": 62, "text": "X," }, { "math_id": 63, "text": "g(X)," }, { "math_id": 64, "text": "f(x)," }, { "math_id": 65, "text": "f" }, { "math_id": 66, "text": "g" }, { "math_id": 67, "text": "\\operatorname{E}[g(X)] = \\int_{\\R} g(x) f(x)\\, dx ." }, { "math_id": 68, "text": "\n\\operatorname{P}(X\\geq a)\\leq\\frac{\\operatorname{E}[X]}{a}.\n" }, { "math_id": 69, "text": "\n\\operatorname{P}(|X-\\text{E}[X]|\\geq a)\\leq\\frac{\\operatorname{Var}[X]}{a^2},\n" }, { "math_id": 70, "text": "\nf(\\operatorname{E}(X)) \\leq \\operatorname{E} (f(X)).\n" }, { "math_id": 71, "text": "\n\\left(\\operatorname{E}|X|^s\\right)^{1/s} \\leq \\left(\\operatorname{E}|X|^t\\right)^{1/t}.\n" }, { "math_id": 72, "text": "\n\\operatorname{E}|XY|\\leq(\\operatorname{E}|X|^p)^{1/p}(\\operatorname{E}|Y|^q)^{1/q}.\n" }, { "math_id": 73, "text": "\n\\Bigl(\\operatorname{E}|X+Y|^p\\Bigr)^{1/p}\\leq\\Bigl(\\operatorname{E}|X|^p\\Bigr)^{1/p}+\\Bigl(\\operatorname{E}|Y|^p\\Bigr)^{1/p}.\n" }, { "math_id": 74, "text": "\\operatorname{E}[X_n] \\to \\operatorname{E}[X]" }, { "math_id": 75, "text": "X_n\\to X" }, { "math_id": 76, "text": "U" }, { "math_id": 77, "text": "[0,1]." }, { "math_id": 78, "text": "n\\geq 1," }, { "math_id": 79, "text": "X_n = n \\cdot \\mathbf{1}\\left\\{ U \\in \\left(0,\\tfrac{1}{n}\\right)\\right\\}," }, { "math_id": 80, "text": "\\mathbf{1}\\{A\\}" }, { "math_id": 81, "text": "A." 
}, { "math_id": 82, "text": "X_n \\to 0" }, { "math_id": 83, "text": "\\operatorname{E}[X_n] = n \\cdot \\Pr\\left(U \\in \\left[ 0, \\tfrac{1}{n}\\right] \\right) = n \\cdot \\tfrac{1}{n} = 1" }, { "math_id": 84, "text": "n." }, { "math_id": 85, "text": "\\lim_{n \\to \\infty} \\operatorname{E}[X_n] = 1 \\neq 0 = \\operatorname{E}\\left[ \\lim_{n \\to \\infty} X_n \\right]." }, { "math_id": 86, "text": "\\{ Y_n : n \\geq 0\\}," }, { "math_id": 87, "text": "\\sigma" }, { "math_id": 88, "text": "\\operatorname{E}\\left[\\sum^\\infty_{n=0} Y_n\\right] \\neq \\sum^\\infty_{n=0}\\operatorname{E}[Y_n]." }, { "math_id": 89, "text": "Y_0 = X_1" }, { "math_id": 90, "text": "Y_n = X_{n+1} - X_n" }, { "math_id": 91, "text": "n \\geq 1," }, { "math_id": 92, "text": "X_n" }, { "math_id": 93, "text": "\\{X_n : n \\geq 0\\}" }, { "math_id": 94, "text": "0 \\leq X_n \\leq X_{n+1}" }, { "math_id": 95, "text": "n \\geq 0." }, { "math_id": 96, "text": "X_n \\to X" }, { "math_id": 97, "text": "\\lim_n\\operatorname{E}[X_n]=\\operatorname{E}[X]." }, { "math_id": 98, "text": "\\{X_i\\}_{i=0}^\\infty" }, { "math_id": 99, "text": "\n\\operatorname{E}\\left[\\sum^\\infty_{i=0}X_i\\right] = \\sum^\\infty_{i=0}\\operatorname{E}[X_i].\n" }, { "math_id": 100, "text": "\\{ X_n \\geq 0 : n \\geq 0\\}" }, { "math_id": 101, "text": "\\operatorname{E}[\\liminf_n X_n] \\leq \\liminf_n \\operatorname{E}[X_n]." }, { "math_id": 102, "text": "X_n \\geq 0" }, { "math_id": 103, "text": "\\operatorname{E}[X_n] \\leq C" }, { "math_id": 104, "text": "\\operatorname{E}[X] \\leq C." }, { "math_id": 105, "text": " X = \\liminf_n X_n" }, { "math_id": 106, "text": "\\{X_n : n \\geq 0 \\}" }, { "math_id": 107, "text": "|X_n|\\leq Y \\leq +\\infty" }, { "math_id": 108, "text": "\\operatorname{E}[Y]<\\infty." }, { "math_id": 109, "text": "\\operatorname{E}|X| \\leq \\operatorname{E}[Y] <\\infty" }, { "math_id": 110, "text": "\\lim_n\\operatorname{E}[X_n]=\\operatorname{E}[X]" }, { "math_id": 111, "text": "\\lim_n\\operatorname{E}|X_n - X| = 0." }, { "math_id": 112, "text": "\\lim_n\\operatorname{E}[X_n]=\\operatorname{E}[\\lim_n X_n]" }, { "math_id": 113, "text": "\\{X_n\\}" }, { "math_id": 114, "text": "f_X" }, { "math_id": 115, "text": "\\varphi_X" }, { "math_id": 116, "text": "f_X(x) = \\frac{1}{2\\pi}\\int_{\\mathbb{R}} e^{-itx}\\varphi_X(t) \\, dt." }, { "math_id": 117, "text": "g(X)" }, { "math_id": 118, "text": "g:{\\mathbb R}\\to{\\mathbb R}" }, { "math_id": 119, "text": "\\operatorname{E}[g(X)] = \\frac{1}{2\\pi} \\int_\\Reals g(x) \\left[ \\int_\\Reals e^{-itx}\\varphi_X(t) \\, dt \\right] dx." }, { "math_id": 120, "text": "\\operatorname{E}[g(X)]" }, { "math_id": 121, "text": "\\operatorname{E}[g(X)] = \\frac{1}{2\\pi} \\int_\\Reals G(t) \\varphi_X(t) \\, dt," }, { "math_id": 122, "text": "G(t) = \\int_\\Reals g(x) e^{-itx} \\, dx" }, { "math_id": 123, "text": "g(x)." }, { "math_id": 124, "text": "\\operatorname{P}({X \\in \\mathcal{A}}) = \\operatorname{E}[{\\mathbf 1}_{\\mathcal{A}}]," }, { "math_id": 125, "text": "{\\mathbf 1}_{\\mathcal{A}}" }, { "math_id": 126, "text": "\\mathcal{A}." }, { "math_id": 127, "text": "\\operatorname{Var}(X)= \\operatorname{E}[X^2] - (\\operatorname{E}[X])^2." }, { "math_id": 128, "text": "\\hat{A}" }, { "math_id": 129, "text": "|\\psi\\rangle" }, { "math_id": 130, "text": "\\langle\\hat{A}\\rangle = \\langle\\psi|\\hat{A}|\\psi\\rangle." }, { "math_id": 131, "text": "(\\Delta A)^2 = \\langle\\hat{A}^2\\rangle - \\langle \\hat{A} \\rangle^2" }, { "math_id": 132, "text": "m " } ]
https://en.wikipedia.org/wiki?curid=9653
965348
Functional calculus
Theory allowing one to apply mathematical functions to mathematical operators In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. It is now a branch (more accurately, several related areas) of the field of functional analysis, connected with spectral theory. (Historically, the term was also used synonymously with calculus of variations; this usage is obsolete, except for functional derivative. Sometimes it is used in relation to types of functional equations, or in logic for systems of predicate calculus.) If formula_0 is a function, say a numerical function of a real number, and formula_1 is an operator, there is no particular reason why the expression formula_2 should make sense. If it does, then we are no longer using formula_0 on its original function domain. In the tradition of operational calculus, algebraic expressions in operators are handled irrespective of their meaning. This passes nearly unnoticed if we talk about 'squaring a matrix', though, which is the case of formula_3 and formula_1 an formula_4 matrix. The idea of a functional calculus is to create a "principled" approach to this kind of overloading of the notation. The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed. In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials which annihilates an operator formula_5. This family is an ideal in the ring of polynomials. Furthermore, it is a nontrivial ideal: let formula_6 be the finite dimension of the algebra of matrices; then formula_7 is linearly dependent. So formula_8 for some scalars formula_9, not all equal to 0. This implies that the polynomial formula_10 lies in the ideal. Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial formula_11. Multiplying by a unit if necessary, we can choose formula_11 to be monic. When this is done, the polynomial formula_11 is precisely the minimal polynomial of formula_5. This polynomial gives deep information about formula_5. For instance, a scalar formula_12 is an eigenvalue of formula_5 if and only if formula_12 is a root of formula_11. Also, sometimes formula_11 can be used to calculate the exponential of formula_5 efficiently. The polynomial calculus is not as informative in the infinite-dimensional case. Consider the unilateral shift with the polynomial calculus; the ideal defined above is now trivial. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator, it is rather clear what the definitions should be.
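To make the finite-dimensional discussion concrete, here is a minimal Python/SymPy sketch (not part of the original article); the 2×2 matrix is an arbitrary illustrative choice. Mirroring the linear-dependence argument above, it finds the first linear dependence among the powers of the matrix, which yields the monic minimal polynomial, and then checks that evaluating this polynomial at the matrix gives the zero matrix and that its roots are exactly the eigenvalues.

from sympy import Matrix, Poly, eye, roots, symbols, zeros

A = Matrix([[2, 1], [0, 3]])   # illustrative matrix (assumed)
n = A.shape[0]

# Stack vectorized powers I, A, A^2, ... until they become linearly dependent.
# The first dependency c_0 I + c_1 A + ... + c_k A^k = 0, made monic, gives the
# minimal polynomial of A, exactly as in the argument above.
powers = [eye(n)]
while True:
    powers.append(powers[-1] * A)
    M = Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
    null = M.nullspace()
    if null:
        coeffs = null[0]
        break

x = symbols('x')
k = len(powers) - 1
min_poly = Poly(sum(coeffs[j] * x**j for j in range(k + 1)), x).monic()
print(min_poly)   # minimal polynomial: x**2 - 5*x + 6 = (x - 2)*(x - 3)

# Evaluating the minimal polynomial at A gives the zero matrix, and its roots
# are exactly the eigenvalues of A.
c = min_poly.all_coeffs()[::-1]   # coefficients ordered from degree 0 upward
p_of_A = sum((c[j] * A**j for j in range(k + 1)), zeros(n, n))
print(p_of_A)                                           # the zero matrix
print(sorted(roots(min_poly)), sorted(A.eigenvals()))   # [2, 3] [2, 3]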
[ { "math_id": 0, "text": " f " }, { "math_id": 1, "text": " M " }, { "math_id": 2, "text": " f(M) " }, { "math_id": 3, "text": " f(x) = x^2 " }, { "math_id": 4, "text": " n\\times n " }, { "math_id": 5, "text": " T " }, { "math_id": 6, "text": " n " }, { "math_id": 7, "text": " \\{I, T, T^2, \\ldots, T^n \\} " }, { "math_id": 8, "text": " \\sum_{i=0}^n \\alpha_i T^i = 0 " }, { "math_id": 9, "text": " \\alpha_i " }, { "math_id": 10, "text": " \\sum_{i=0}^n \\alpha_i x^i " }, { "math_id": 11, "text": " m " }, { "math_id": 12, "text": " \\alpha " } ]
https://en.wikipedia.org/wiki?curid=965348