89489
Inequality (mathematics)
Mathematical relation expressed with < or ≤ In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. The main types of inequality are less than and greater than. Notation. There are several different notations used to represent different kinds of inequalities: In either case, "a" is not equal to "b". These relations are known as strict inequalities, meaning that "a" is strictly less than or strictly greater than "b". Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict: In the 17th and 18th centuries, personal notations or typewriting signs were used to signal inequalities. For example, In 1670, John Wallis used a single horizontal bar "above" rather than below the < and >. Later in 1734, ≦ and ≧, known as "less than (greater-than) over equal to" or "less than (greater than) or equal to with double horizontal bars", first appeared in Pierre Bouguer's work . After that, mathematicians simplified Bouguer's symbol to "less than (greater than) or equal to with one horizontal bar" (≤), or "less than (greater than) or slanted equal to" (⩽). The relation not greater than can also be represented by formula_0 the symbol for "greater than" bisected by a slash, "not". The same is true for not less than, formula_1 The notation "a" ≠ "b" means that "a" is not equal to "b"; this "inequation" sometimes is considered a form of strict inequality. It does not say that one is greater than the other; it does not even require "a" and "b" to be member of an ordered set. In engineering sciences, less formal use of the notation is to state that one quantity is "much greater" than another, normally by several orders of magnitude. This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of ultrarelativistic limit in physics). In all of the cases above, any two symbols mirroring each other are symmetrical; "a" < "b" and "b" > "a" are equivalent, etc. Properties on the number line. Inequalities are governed by the following properties. All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and — in the case of applying a function — monotonic functions are limited to "strictly" monotonic functions. Converse. The relations ≤ and ≥ are each other's converse, meaning that for any real numbers "a" and "b": <templatestyles src="Block indent/styles.css"/>"a" ≤ "b" and "b" ≥ "a" are equivalent. Transitivity. The transitive property of inequality states that for any real numbers "a", "b", "c": <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b" and "b" ≤ "c", then "a" ≤ "c". If "either" of the premises is a strict inequality, then the conclusion is a strict inequality: <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b" and "b" < "c", then "a" < "c". <templatestyles src="Block indent/styles.css"/>If "a" < "b" and "b" ≤ "c", then "a" < "c". Addition and subtraction. A common constant "c" may be added to or subtracted from both sides of an inequality. So, for any real numbers "a", "b", "c": <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b", then "a" + "c" ≤ "b" + "c" and "a" − "c" ≤ "b" − "c". 
In other words, the inequality relation is preserved under addition (or subtraction) and the real numbers are an ordered group under addition. Multiplication and division. The properties that deal with multiplication and division state that for any real numbers "a", "b" and non-zero "c": <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b" and "c" > 0, then "ac" ≤ "bc" and "a"/"c" ≤ "b"/"c". <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b" and "c" < 0, then "ac" ≥ "bc" and "a"/"c" ≥ "b"/"c". In other words, the inequality relation is preserved under multiplication and division with a positive constant, but is reversed when a negative constant is involved. More generally, this applies for an ordered field. For more information, see "§ Ordered fields". Additive inverse. The property for the additive inverse states that for any real numbers "a" and "b": <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b", then −"a" ≥ −"b". Multiplicative inverse. If both numbers are positive, then the inequality relation between the multiplicative inverses is the opposite of that between the original numbers. More specifically, for any non-zero real numbers "a" and "b" that are both positive (or both negative): <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b", then 1/"a" ≥ 1/"b". All of the cases for the signs of "a" and "b" can also be written in chained notation, as follows: <templatestyles src="Block indent/styles.css"/>If 0 < "a" ≤ "b", then 1/"a" ≥ 1/"b" > 0. <templatestyles src="Block indent/styles.css"/>If "a" ≤ "b" < 0, then 0 > 1/"a" ≥ 1/"b". <templatestyles src="Block indent/styles.css"/>If "a" < 0 < "b", then 1/"a" < 0 < 1/"b". Applying a function to both sides. Any monotonically increasing function, by its definition, may be applied to both sides of an inequality without breaking the inequality relation (provided that both expressions are in the domain of that function). However, applying a monotonically decreasing function to both sides of an inequality means that the inequality relation is reversed. The rules for the additive inverse, and the multiplicative inverse for positive numbers, are both examples of applying a monotonically decreasing function. If the inequality is strict ("a" < "b", "a" > "b") "and" the function is strictly monotonic, then the inequality remains strict. If only one of these conditions is strict, then the resultant inequality is non-strict. In fact, the rules for additive and multiplicative inverses are both examples of applying a "strictly" monotonically decreasing function. A few examples of this rule are: Formal definitions and generalizations. A (non-strict) partial order is a binary relation ≤ over a set "P" which is reflexive, antisymmetric, and transitive. That is, for all "a", "b", and "c" in "P", it must satisfy the three following clauses: "a" ≤ "a" (reflexivity); if "a" ≤ "b" and "b" ≤ "a", then "a" = "b" (antisymmetry); and if "a" ≤ "b" and "b" ≤ "c", then "a" ≤ "c" (transitivity). A set with a partial order is called a partially ordered set. Those are the very basic axioms that every kind of order has to satisfy. A strict partial order (<) would have to satisfy: Other axioms that exist for other definitions of orders on a set "P" include: Ordered fields. If ("F", +, ×) is a field and ≤ is a total order on "F", then ("F", +, ×, ≤) is called an ordered field if and only if: Both the rational numbers Q and the real numbers R are ordered fields, but ≤ cannot be defined in order to make the complex numbers C an ordered field, because −1 is the square of "i" and would therefore be positive. Besides being an ordered field, R also has the Least-upper-bound property. In fact, R can be defined as the only ordered field with that quality. Chained notation. 
The notation "a" < "b" < "c" stands for ""a" < "b" and "b" < "c"", from which, by the transitivity property above, it also follows that "a" < "c". By the above laws, one can add or subtract the same number to all three terms, or multiply or divide all three terms by the same nonzero number and reverse all inequalities if that number is negative. Hence, for example, "a" < "b" + "e" < "c" is equivalent to "a" − "e" < "b" < "c" − "e". This notation can be generalized to any number of terms: for instance, "a"1 ≤ "a"2 ≤ ... ≤ "a""n" means that "a""i" ≤ "a""i"+1 for "i" = 1, 2, ..., "n" − 1. By transitivity, this condition is equivalent to "a""i" ≤ "a""j" for any 1 ≤ "i" ≤ "j" ≤ "n". When solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms independently. For instance, to solve the inequality 4"x" < 2"x" + 1 ≤ 3"x" + 2, it is not possible to isolate "x" in any one part of the inequality through addition or subtraction. Instead, the inequalities must be solved independently, yielding "x" < 1/2 and "x" ≥ −1 respectively, which can be combined into the final solution −1 ≤ "x" < 1/2. Occasionally, chained notation is used with inequalities in different directions, in which case the meaning is the logical conjunction of the inequalities between adjacent terms. For example, the defining condition of a zigzag poset is written as "a"1 < "a"2 > "a"3 < "a"4 > "a"5 < "a"6 > ... . Mixed chained notation is used more often with compatible relations, like <, =, ≤. For instance, "a" < "b" = "c" ≤ "d" means that "a" < "b", "b" = "c", and "c" ≤ "d". This notation exists in a few programming languages such as Python. In contrast, in programming languages that provide an ordering on the type of comparison results, such as C, even homogeneous chains may have a completely different meaning. The cylindrical algebraic decomposition is an algorithm that allows testing whether a system of polynomial equations and inequalities has solutions, and, if solutions exist, describing them. The complexity of this algorithm is doubly exponential in the number of variables. It is an active research domain to design algorithms that are more efficient in specific cases.
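The last two paragraphs invite a quick check in Python, which the text notes supports chained comparisons natively. A minimal sketch follows; the sample grid and helper names are illustrative choices, not from the article. It confirms that a chained expression means the conjunction of its adjacent comparisons, that 4"x" < 2"x" + 1 ≤ 3"x" + 2 reduces to −1 ≤ "x" < 1/2, and that multiplying by a negative constant reverses an inequality.

```python
# Chained comparisons in Python are the conjunction of adjacent comparisons,
# so the expression below means (4*x < 2*x + 1) and (2*x + 1 <= 3*x + 2).

def satisfies_chain(x):
    return 4*x < 2*x + 1 <= 3*x + 2

def satisfies_solution(x):
    # The combined solution worked out in the text: -1 <= x < 1/2.
    return -1 <= x < 0.5

# The two predicates agree on a grid of sample points.
samples = [i / 100 for i in range(-300, 301)]
assert all(satisfies_chain(x) == satisfies_solution(x) for x in samples)

# Multiplying both sides by a negative constant reverses the relation.
a, b, c = 2, 5, -3
assert a <= b and a * c >= b * c   # 2 <= 5 but -6 >= -15
print("chained-notation checks passed")
```

In C, by contrast, 4*x < 2*x + 1 <= 3*x + 2 parses as (4*x < 2*x + 1) <= 3*x + 2, so a 0-or-1 comparison result is itself compared with 3*x + 2; this is the "completely different meaning" of homogeneous chains mentioned above.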
[ { "math_id": 0, "text": "a \\ngtr b," }, { "math_id": 1, "text": "a \\nless b." } ]
https://en.wikipedia.org/wiki?curid=89489
894921
Active cooling
Cooling methods that expend energy to cool a system or component Active cooling is a heat-reducing mechanism that is typically implemented in electronic devices and indoor buildings to ensure proper heat transfer and circulation from within. Unlike its counterpart passive cooling, active cooling is entirely dependent on energy consumption in order to operate. It uses various mechanical systems that consume energy to dissipate heat. It is commonly implemented in systems that are unable to maintain their temperature through passive means. Active cooling systems are usually powered by electricity or thermal energy, but it is possible for some systems to be powered by solar or even hydroelectric energy. They need to be well maintained and sustainable in order to perform their tasks reliably; otherwise damage to the cooled equipment can occur. Various applications of commercial active cooling systems include indoor air conditioners, computer fans, and heat pumps. Building usage. Many buildings have high cooling demands, and as many as 27 of the 50 largest metropolitan areas around the world are located in regions of hot or tropical weather. Engineers therefore have to establish the heat balance in order to ensure proper ventilation throughout the structure. The heat balance equation is given as: formula_0 where formula_1 is the air density, formula_2 is the specific heat capacity of air at constant pressure, V is the air volume, formula_3 is the rate of change of the air temperature, formula_4 is the internal heat gains, formula_5 is the heat transfer through the envelope, formula_6 is the heat gain/loss between indoor and outdoor air, and formula_7 is the mechanical heat transfer. Using this, the amount of cooling required within the infrastructure can be determined. There are three active cooling systems commonly used in the residential sector: Fans. A fan consists of three to four blades rotated by an electric motor at a constant speed. The rotation produces airflow, which cools the surroundings through forced convection heat transfer. Because of its relatively low price, it is the most frequently used of the three active cooling systems in the residential sector. Heat pumps. A heat pump uses electricity to extract heat from a cool area and move it to a warm area, causing the cool area to drop in temperature and the warm area to rise in temperature. There are two types of heat pumps: Compression heat pumps. Being the more popular variant of the two, compression heat pumps operate through the refrigerant cycle. The refrigerant vapor is compressed, increasing in temperature and creating a superheated vapor. The vapor then passes through a condenser and converts into liquid form, dispelling more heat in the process. Traveling through the expansion valve, the liquid refrigerant forms a mixture of liquid and vapor. As it passes through the evaporator, the refrigerant absorbs heat and returns to vapor, repeating the refrigerant cycle. Absorption heat pumps. The absorption heat pump works similarly to the compression variant, with the main difference being the use of an absorber instead of a compressor. The absorber takes in the refrigerant vapor and converts it into liquid form, which then travels into a liquid pump before being turned back into superheated vapor. The absorption heat pump uses both electricity and heat for its operation, whereas the compression heat pump uses only electricity. Evaporative coolers. 
An evaporative cooler draws in outside air and passes it through water-saturated pads, lowering the temperature of the air through water evaporation. It can be divided into two types: Direct. This method evaporates water directly into the air stream, adding a small amount of humidity. It usually requires a fair amount of water consumption in order to properly lower the temperature of the surrounding area. Indirect. This method evaporates water into a secondary air stream, which is then passed through a heat exchanger, lowering the temperature of the main air stream without adding any humidity. Compared to direct evaporative coolers, it requires much less water consumption to operate and lower the temperature. Other applications. Besides normal commercial usage of active cooling, researchers are also looking for ways to improve the implementation of active cooling in various technologies. Thermoelectric generator (TEG). The thermoelectric generator, or TEG, is a power source that has recently been experimented with to test its viability in maintaining active cooling. It is a device that makes use of the Seebeck effect to convert heat energy into electrical energy. Applications of the power source are more commonly found in technologies requiring high power. Examples include space probes, aircraft, and automobiles. In a 2019 study, the viability of TEG active cooling was tested. The test used a Raspberry Pi 3, a small single-board computer, equipped with a fan powered by a TEG, and compared it with another unit fitted with a commercial passive cooler. Throughout the study, the voltage, power, and temperature of both Raspberry Pis were observed and recorded. The data showed that throughout the benchmark test, the TEG-powered Raspberry Pi 3 stabilized at a temperature a few degrees Celsius lower than the passively cooled Raspberry Pi 3. The power produced by the TEG was also analyzed to assess whether the fan could be self-sustaining. Currently, using only the TEG to power the fan is not enough to be completely self-sustaining, because it lacks enough energy for the initial startup of the fan; with the implementation of an energy accumulator, however, it would be possible. The power generation of the TEG is described by the feedback loop formula_8 where formula_9 is the power generated by the TEG, formula_10 is the thermal resistance, and formula_11 is the temperature difference across the TEG. Based on the results, thermoelectric generator active cooling has been shown to effectively decrease and maintain temperatures at a level comparable to commercial passive coolers. Near Immersion Active Cooling (NIAC). Near Immersion Active Cooling, or NIAC, is a thermal management technique that has recently been researched in an effort to reduce the heat accumulation generated by Wire + Arc Additive Manufacturing, or WAAM (a metal 3-D printing technology). NIAC uses a cooling liquid that surrounds the WAAM workpiece within a work tank and rises in level as metal is deposited. The direct contact with the liquid allows heat to be withdrawn quickly from the workpiece, decreasing its temperature by a significant amount. In a 2020 experiment, researchers wanted to determine the feasibility of using NIAC and to test its cooling capabilities. The experiment compared the effectiveness of natural cooling, passive cooling, and near immersion active cooling in mitigating the heat generated by the WAAM process. 
Natural cooling used air, passive cooling used a cooling liquid kept at a fixed level, and NIAC used a cooling liquid whose level rises with the deposition process. Several tests were used to measure the feasibility of using NIAC, and the researchers concluded that NIAC is viable and comparable to conventional cooling methods such as passive and natural cooling. Comparison with passive cooling. Active cooling is usually compared with passive cooling in various situations to determine which provides the better and more efficient way of cooling. Both are viable in many situations, but depending on several factors, one can be more advantageous than the other. Advantages. Active cooling systems are usually better at decreasing temperature than passive cooling systems. Passive cooling does not use much energy for its operation but instead takes advantage of natural cooling, which lowers temperature more slowly. Active cooling systems are generally preferred over passive cooling in hot or tropical climates because of their effectiveness in lowering temperature within a short time interval. In technological applications, active cooling helps maintain proper thermal conditions, reducing the risk of damage or overheating of the core operating systems. It is better able to balance out the heat generated by the device, maintaining its temperature in a consistent manner. Some active cooling systems also have the potential to be self-sustaining, as shown in the thermoelectric generator application, whereas passive cooling is highly dependent on natural means to operate. Disadvantages. The issues with active cooling compared to passive cooling are mainly the financial costs and energy consumption. Active cooling's high energy requirement makes it much less energy efficient as well as less cost efficient. In a residential setting, active cooling usually consumes a large amount of energy in order to provide enough cooling throughout the entire building, which increases the financial costs. Engineers of the building also need to take into account that an increase in energy consumption plays a part in negatively affecting the global climate. Compared to active cooling, passive cooling is more often used in places with average or low temperatures. References. <templatestyles src="Reflist/styles.css" />
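As a worked illustration of the heat-balance relation formula_0 quoted in the "Building usage" section above, the sketch below estimates the mechanical cooling term needed to hold an indoor air temperature steady. The room size, gain figures, constants and function names are illustrative assumptions, not values from the article.

```python
# A minimal sketch of the heat balance quoted above,
#   rho * c_p * V * dT/dt = E_int + E_conv + E_vent + E_AC,
# used to estimate the mechanical cooling E_AC (in watts) that holds the
# indoor air temperature steady (dT/dt = 0). All numbers are illustrative.

RHO_AIR = 1.2    # kg/m^3, approximate density of room air
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air at constant pressure

def required_cooling(e_internal, e_envelope, e_ventilation):
    """E_AC (W) that makes dT/dt = 0; a negative value means heat removal."""
    return -(e_internal + e_envelope + e_ventilation)

def temperature_rate(volume, e_internal, e_envelope, e_ventilation, e_ac):
    """dT/dt (K/s) for the room air alone (real rooms add thermal mass)."""
    net_gain = e_internal + e_envelope + e_ventilation + e_ac
    return net_gain / (RHO_AIR * CP_AIR * volume)

# Example: 75 m^3 of room air, 800 W internal gains, 1200 W through the
# envelope, 300 W brought in by ventilation.
e_ac = required_cooling(800.0, 1200.0, 300.0)
print(f"E_AC to hold temperature steady: {e_ac:.0f} W")
rate = temperature_rate(75.0, 800.0, 1200.0, 300.0, 0.0)
print(f"warming rate with no cooling: {rate * 60:.2f} K per minute")
```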
[ { "math_id": 0, "text": "p \\cdot c_p\\cdot V \\cdot dT/dt = E_{int} + E_{Conv} + E_{Vent} + E_{AC}" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "c_p" }, { "math_id": 3, "text": "dT/dt" }, { "math_id": 4, "text": "E_{int}" }, { "math_id": 5, "text": "E_{Conv}" }, { "math_id": 6, "text": "E_{Vent}" }, { "math_id": 7, "text": "E_{AC}" }, { "math_id": 8, "text": "P_{TEG}\\rightarrow {fan air flow\\over fan power}\\rightarrow\\sum R_{thermal}\\rightarrow\\bigtriangleup T_{TEG}\\rightarrow P_{TEG}" }, { "math_id": 9, "text": "P_{TEG}" }, { "math_id": 10, "text": "R_{thermal}" }, { "math_id": 11, "text": "T_{TEG}" } ]
https://en.wikipedia.org/wiki?curid=894921
8950361
Triple modular redundancy
Method for increasing reliability In computing, triple modular redundancy (TMR), sometimes called triple-mode redundancy, is a fault-tolerant form of N-modular redundancy, in which three systems perform a process and their result is processed by a majority-voting system to produce a single output. If any one of the three systems fails, the other two systems can correct and mask the fault. The TMR concept can be applied to many forms of redundancy, such as software redundancy in the form of N-version programming, and is commonly found in fault-tolerant computer systems. Space satellite systems often use TMR, although satellite RAM usually uses Hamming error correction. Some ECC memory uses triple modular redundancy hardware (rather than the more common Hamming code), because triple modular redundancy hardware is faster than Hamming error correction hardware. Some communication systems use N-modular redundancy, called a repetition code, as a simple form of forward error correction. For example, 5-modular redundancy communication systems (such as FlexRay) use the majority of 5 samples – if any 2 of the 5 results are erroneous, the other 3 results can correct and mask the fault. Modular redundancy is a basic concept, dating to antiquity, while the first use of TMR in a computer was the Czechoslovak computer SAPO, in the 1950s. General case. The general case of TMR is called N-modular redundancy, in which any positive number of replications of the same action is used. The number is typically taken to be at least three, so that error correction by majority vote can take place; it is also usually taken to be odd, so that no ties may happen. Majority logic gate. 3-input majority gate. The 3-input majority gate output is 1 if two or more of the inputs of the majority gate are 1; the output is 0 if two or more of the majority gate's inputs are 0. Thus, the majority gate is the carry output of a full adder, i.e., the majority gate is a voting machine. The 3-input majority gate can be represented by the following Boolean equation: formula_0 In TMR, three identical logic circuits (logic gates) are used to compute the same specified Boolean function. If there are no circuit failures, the outputs of the three circuits are identical. But due to circuit failures, the outputs of the three circuits may be different. TMR operation. Assuming the Boolean function computed by the three identical logic gates has value 1, then: (a) if no circuit has failed, all three circuits produce an output of value 1, and the majority gate output has value 1. (b) if one circuit fails and produces an output of 0, while the other two are working correctly and produce an output of 1, the majority gate output is 1, i.e., it still has the correct value. And similarly for the case when the Boolean function computed by the three identical circuits has value 0. Thus, the majority gate output is guaranteed to be correct as long as no more than one of the three identical logic circuits has failed. For a TMR system with a single voter of reliability (probability of working) Rv and three components of reliability Rm, the probability of it being correct can be shown to be RTMR = Rv(3Rm² − 2Rm³). TMR systems should use data scrubbing – rewrite flip-flops periodically – in order to avoid accumulation of errors. Voter. The majority gate itself could fail. This can be protected against by applying triple redundancy to the voters themselves. 
In a few TMR systems, such as the Saturn Launch Vehicle Digital Computer and functional triple modular redundancy (FTMR) systems, the voters are also triplicated. Three voters are used – one for each copy of the next stage of TMR logic. In such systems there is no single point of failure. Even though only using a single voter brings a single point of failure – a failed voter will bring down the entire system – most TMR systems do not use triplicated voters. This is because the majority gates are much less complex than the systems that they guard against, so they are much more reliable. By using the reliability calculations, it is possible to find the minimum reliability of the voter for TMR to be a win. Chronometers. To use triple modular redundancy, a ship must have at least three chronometers; two chronometers provided dual modular redundancy, allowing a backup if one should cease to work, but not allowing any error correction if the two displayed a different time, since in case of contradiction between the two chronometers, it would be impossible to know which one was wrong (the error detection obtained would be the same of having only one chronometer and checking it periodically). Three chronometers provided triple modular redundancy, allowing error correction if one of the three was wrong, so the pilot would take the average of the two with closer reading (vote for average precision). There is an old adage to this effect, stating: "Never go to sea with two chronometers; take one or three." Mainly this means that if two chronometers contradict, how do you know which one is correct? At one time this observation or rule was an expensive one as the cost of three sufficiently accurate chronometers was more than the cost of many types of smaller merchant vessels. Some vessels carried more than three chronometers – for example, HMS Beagle carried 22 chronometers. However, such a large number was usually only carried on ships undertaking survey work as was the case with the "Beagle". In the modern era, ships at sea use GNSS navigation receivers (with GPS, GLONASS & WAAS etc. support) – mostly running with WAAS or EGNOS support so as to provide accurate time (and location). References. <templatestyles src="Reflist/styles.css" />
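As a small, self-contained illustration of the voting logic and the reliability expression above, the sketch below implements the 3-input majority gate Q = AB + BC + AC, prints its truth table, and compares the closed form RTMR = Rv(3Rm² − 2Rm³) against a Monte Carlo estimate. The function names and simulation parameters are illustrative, not taken from any particular TMR implementation.

```python
# Majority voting and TMR reliability, a minimal sketch.
import random

def majority(a: int, b: int, c: int) -> int:
    """Carry output of a full adder: 1 iff at least two of the three inputs are 1."""
    return (a & b) | (b & c) | (a & c)

# Truth table of the 3-input majority gate.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", majority(a, b, c))

def tmr_reliability(rm: float, rv: float = 1.0) -> float:
    """Probability that a TMR system with module reliability rm and voter reliability rv is correct."""
    return rv * (3 * rm**2 - 2 * rm**3)

def simulate(rm: float, trials: int = 200_000) -> float:
    """Monte Carlo estimate: each module independently outputs the wrong bit with probability 1 - rm."""
    correct, ok = 1, 0
    for _ in range(trials):
        outputs = [correct if random.random() < rm else 1 - correct for _ in range(3)]
        ok += majority(*outputs) == correct
    return ok / trials

rm = 0.9
print(f"closed form: {tmr_reliability(rm):.4f}")  # 3*0.81 - 2*0.729 = 0.972
print(f"simulated:   {simulate(rm):.4f}")
```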
[ { "math_id": 0, "text": "Q = AB \\lor BC \\lor AC " } ]
https://en.wikipedia.org/wiki?curid=8950361
8950551
Operational calculus
Technique to solve differential equations Operational calculus, also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation. History. The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied. This approach was further developed by Francois-Joseph Servois, who developed convenient notations. Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bownin, Carmichael, Doukin, Graves, Murphy, William Spottiswoode and Sylvester. Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 and by Boole in 1859. This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy. Guided greatly by intuition and his wealth of knowledge on the physics behind his circuit studies, [Heaviside] developed the operational calculus now ascribed to his name. At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians. Operational calculus first found applications in electrical engineering problems, for the calculation of transients in linear circuits after 1910, under the impulse of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush. A rigorous mathematical justification of Heaviside's operational methods came only after the work of Bromwich that related operational calculus with Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition). Other ways of justifying the operational methods of Heaviside were introduced in the mid-1920s using integral equation techniques (as done by Carson) or Fourier transformation (as done by Norbert Wiener). A different approach to operational calculus was developed in the 1930s by Polish mathematician Jan Mikusiński, using algebraic reasoning. Norbert Wiener laid the foundations for operator theory in his review of the existential status of the operational calculus in 1926: The brilliant work of Heaviside is purely heuristic, devoid of even the pretense to mathematical rigor. Its operators apply to electric voltages and currents, which may be discontinuous and certainly need not be analytic. For example, the favorite "corpus vile" on which he tries out his operators is a function which vanishes to the left of the origin and is 1 to the right. This excludes any direct application of the methods of Pincherle… Although Heaviside’s developments have not been justified by the present state of the purely mathematical theory of operators, there is a great deal of what we may call experimental evidence of their validity, and they are very valuable to the electrical engineers. There are cases, however, where they lead to ambiguous or contradictory results. Principle. The key element of the operational calculus is to consider differentiation as an operator p = d/dt acting on functions. Linear differential equations can then be recast in the form of "functions" "F"(p) of the operator p acting on the unknown function equaling the known function. 
Here, "F" is defining something that takes in an operator p and returns another operator "F"(p). Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols: the operator p, and the unit function 1. The operator in its use probably is more mathematical than physical, the unit function more physical than mathematical. The operator p in the Heaviside calculus initially is to represent the time differentiator . Further, it is desired for this operator to bear the reciprocal relation such that p−1 denotes the operation of integration. In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step: Heaviside step function: "H"("t") such that "H"("t") = 0 if "t" < 0 and "H"("t") = 1 if "t" > 0. The simplest example of application of the operational calculus is to solve: p "y" = "H"("t"), which gives formula_0 From this example, one sees that formula_1 represents integration. Furthermore n iterated integrations is represented by formula_2 so that formula_3 Continuing to treat p as if it were a variable, formula_4 which can be rewritten by using a geometric series expansion: formula_5 Using partial fraction decomposition, one can define any fraction in the operator p and compute its action on "H"("t"). Moreover, if the function 1/"F"(p) has a series expansion of the form formula_6 it is straightforward to find formula_7 Applying this rule, solving any linear differential equation is reduced to a purely algebraic problem. Heaviside went further and defined fractional power of p, thus establishing a connection between operational calculus and fractional calculus. Using the Taylor expansion, one can also verify the Lagrange–Boole translation formula, "e""a" p "f"("t") = "f"("t" + "a"), so the operational calculus is also applicable to finite-difference equations and to electrical engineering problems with delayed signals. References. <templatestyles src="Reflist/styles.css" /> Further sources. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "y = \\operatorname{p}^{-1} H = \\int_0^t H(u) \\, du = t\\,H(t)." }, { "math_id": 1, "text": "\\operatorname{p}^{-1}" }, { "math_id": 2, "text": "\\operatorname{p}^{-n}," }, { "math_id": 3, "text": "\\operatorname{p}^{-n} H(t) = \\frac{t^n}{n!} H(t)." }, { "math_id": 4, "text": "\\frac{\\operatorname{p}}{\\operatorname{p} - a} H(t) = \\frac{1}{1 - \\frac{a}{\\operatorname{p}}}\\,H(t)," }, { "math_id": 5, "text": "\n \\frac{1}{1 - \\frac{a}{\\operatorname{p}}} H(t) = \\sum_{n=0}^\\infty a^n \\operatorname{p}^{-n} H(t) = \\sum_{n=0}^\\infty \\frac{a^n t^n}{n!} H(t) = e^{at} H(t).\n" }, { "math_id": 6, "text": "\\frac{1}{F(\\operatorname{p})} = \\sum_{n=0}^\\infty a_n \\operatorname{p}^{-n}," }, { "math_id": 7, "text": "\\frac{1}{F(\\operatorname{p})} H(t) = \\sum_{n=0}^\\infty a_n \\frac{t^n}{n!} H(t)." } ]
https://en.wikipedia.org/wiki?curid=8950551
8951286
Peter Hilton
British mathematician (1923–2010) Peter John Hilton (7 April 1923 – 6 November 2010) was a British mathematician, noted for his contributions to homotopy theory and for code-breaking during World War II. Early life. He was born in Brondesbury, London, the son Mortimer Jacob Hilton, a Jewish physician who was in general practice in Peckham, and his wife Elizabeth Amelia Freedman, and was brought up in Kilburn. The physiologist Sidney Montague Hilton (1921–2011) of the University of Birmingham Medical School was his elder brother. Hilton was educated at St Paul's School, London. He went to The Queen's College, Oxford in 1940 to read mathematics, on an open scholarship, where the mathematics tutor was Ughtred Haslam-Jones. Bletchley Park. A wartime undergraduate in wartime Oxford, on a shortened course, Hilton was obliged to train with the Royal Artillery, and faced scheduled conscription in summer 1942. After four terms, he took the advice of his tutor, and followed up a civil service recruitment contact. He had an interview for mathematicians with knowledge of German, and was offered a position in the Foreign Office without being told the nature of the work. The team was, in fact, recruiting on behalf of the Government Code and Cypher School. Aged 18, he arrived at the codebreaking station Bletchley Park on 12 January 1942. Hilton worked with several of the Bletchley Park deciphering groups. He was initially assigned to Naval Enigma in Hut 8. Hilton commented on his experience working with Alan Turing, whom he knew well for the last 12 years of his life, in his "Reminiscences of Bletchley Park" from "A Century of Mathematics in America:"<templatestyles src="Template:Blockquote/styles.css" />It is a rare experience to meet an authentic genius. Those of us privileged to inhabit the world of scholarship are familiar with the intellectual stimulation furnished by talented colleagues. We can admire the ideas they share with us and are usually able to understand their source; we may even often believe that we ourselves could have created such concepts and originated such thoughts. However, the experience of sharing the intellectual life of a genius is entirely different; one realizes that one is in the presence of an intelligence, a sensibility of such profundity and originality that one is filled with wonder and excitement. Hilton echoed similar thoughts in the Nova PBS documentary "Decoding Nazi Secrets" (UK "Station X", Channel 4, 1999). In late 1942, Hilton transferred to work on German teleprinter ciphers. A special section known as the "Testery" had been formed in July 1942 to work on one such cipher, codenamed "Tunny", and Hilton was one of the early members of the group. His role was to devise ways to deal with changes in Tunny, and to liaise with another section working on Tunny, the "Newmanry", which complemented the hand-methods of the Testery with specialised codebreaking machinery. Hilton has been counted as a member of the Newmanry, possibly on a part-time basis. Recreational. A convivial pub drinker at Bletchley Park, Hilton also spent time with Turing working on chess problems and palindromes. He there constructed a 51-letter palindrome: "Doc note, I dissent. A fast never prevents a fatness. I diet on cod." Mathematics. Hilton obtained his DPhil in 1949 from Oxford University under the supervision of John Henry Whitehead. His dissertation was "Calculation of the homotopy groups of formula_0-polyhedra". 
His principal research interests were in algebraic topology, homological algebra, categorical algebra and mathematics education. He published 15 books and over 600 articles in these areas, some jointly with colleagues. Hilton's theorem (1955) is on the homotopy groups of a wedge of spheres. It addresses an issue that comes up in the theory of "homotopy operations". Turing, at the Victoria University of Manchester, in 1948 invited Hilton to see the Manchester Mark 1 machine. Around 1950, Hilton took a position at the university maths department. He was there in 1949, when Turing engaged in a discussion that introduced him to the word problem for groups. Hilton worked with Walter Lederman. Another colleague there was Hugh Dowker, who in 1951 drew his attention to the Serre spectral sequence. In 1952, Hilton moved to DPMMS in Cambridge, England, where he ran a topology seminar attended by John Frank Adams, Michael Atiyah, David B. A. Epstein, Terry Wall and Christopher Zeeman. Via Hilton, Atiyah became aware of Jean-Pierre Serre's coherent sheaf proof of the Riemann–Roch theorem for curves, and found his first research direction in sheaf methods for ruled surfaces. In 1955, Hilton started work with Beno Eckmann on what became known as Eckmann-Hilton duality for the homotopy category. Through Eckmann, he became editor of the "Ergebnisse der Mathematik und ihrer Grenzgebiete", a position he held from 1964 to 1983. Hilton returned to Manchester as Professor, in 1956. In 1958, he became the Mason Professor of Pure Mathematics at the University of Birmingham. He moved to the United States in 1962 to be Professor of Mathematics at Cornell University, a post he held until 1971. From 1971 to 1973, he held a joint appointment as Fellow of the Battelle Seattle Research Center and Professor of Mathematics at the University of Washington. On 1 September 1972, he was appointed Louis D. Beaumont University Professor at Case Western Reserve University; on 1 September 1973, he took up the appointment. In 1982, he was appointed Distinguished Professor of Mathematics at Binghamton University, becoming Emeritus in 2003. Latterly, he spent each spring semester as Distinguished Professor of Mathematics at the University of Central Florida. Hilton is featured in the book "Mathematical People". Death and family. Peter Hilton died on 6 November 2010 in Binghamton, New York, at age 87. He left behind his wife, Margaret Mostyn (born 1925), whom he married in 1949, and their two sons, who were adopted. Margaret, a schoolteacher, had an acting career as Margaret Hilton in the US, in summer stock theatre. She also played television roles. She died in Seattle in 2020. In popular culture. Hilton is portrayed by actor Matthew Beard in the 2014 film "The Imitation Game", which tells the tale of Alan Turing and the cracking of Nazi Germany's Enigma code. Hilton's former PhD students. According to the Mathematics Genealogy Project site, Hilton supervised at least 27 doctoral students, including Paul Kainen at Cornell University. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "A_n^2" } ]
https://en.wikipedia.org/wiki?curid=8951286
8953468
Equiareal map
Transformation that preserves area measure of regions In differential geometry, an equiareal map, sometimes called an authalic map, is a smooth map from one surface to another that preserves the areas of figures. Properties. If "M" and "N" are two Riemannian (or pseudo-Riemannian) surfaces, then an equiareal map "f" from "M" to "N" can be characterized by any of several equivalent conditions, such as formula_0 where formula_1 denotes the Euclidean wedge product of vectors and "df" denotes the pushforward along "f". Example. An example of an equiareal map, due to Archimedes of Syracuse, is the projection from the unit sphere "x"² + "y"² + "z"² = 1 to the unit cylinder "x"² + "y"² = 1 outward from their common axis. An explicit formula is formula_2 for ("x", "y", "z") a point on the unit sphere. Linear transformations. Every Euclidean isometry of the Euclidean plane is equiareal, but the converse is not true. In fact, shear mapping and squeeze mapping are counterexamples to the converse. Shear mapping takes a rectangle to a parallelogram of the same area. Written in matrix form, a shear mapping along the x-axis is formula_3 Squeeze mapping lengthens and contracts the sides of a rectangle in a reciprocal manner so that the area is preserved. Written in matrix form, with λ > 1 the squeeze reads formula_4 A linear transformation formula_5 multiplies areas by the absolute value of its determinant, |"ad" − "bc"|. Gaussian elimination shows that every equiareal linear transformation (rotations included) can be obtained by composing at most two shears along the axes, a squeeze and (if the determinant is negative) a reflection. In map projections. In the context of geographic maps, a map projection is called equal-area, equivalent, authalic, equiareal, or area-preserving, if areas are preserved up to a constant factor; embedding the target map, usually considered a subset of R², in the obvious way in R³, the requirement above then is weakened to: formula_6 for some "κ" > 0 not depending on formula_7 and formula_8. For examples of such projections, see equal-area map projection.
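A quick numerical check of two statements above, that the shear and squeeze matrices have determinant of absolute value 1 and that Archimedes' projection formula_2 preserves the local area element, is sketched below. The parametrization by longitude and height, the finite-difference step and the tolerance are our choices, not from the article.

```python
# Numerical sketch: |det| = 1 for shear and squeeze, and the sphere-to-cylinder
# projection preserves the local area element |d_phi r x d_z r|.
import math
import random

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

shear = [[1.0, 0.7], [0.0, 1.0]]          # shear amount v = 0.7 (arbitrary)
squeeze = [[2.5, 0.0], [0.0, 1.0 / 2.5]]  # squeeze parameter lambda = 2.5
assert abs(abs(det2(shear)) - 1.0) < 1e-12
assert abs(abs(det2(squeeze)) - 1.0) < 1e-12

def sphere(phi, z):
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def cylinder(phi, z):
    # Image of sphere(phi, z) under f(x, y, z) = (x/sqrt(x^2+y^2), y/sqrt(x^2+y^2), z).
    return (math.cos(phi), math.sin(phi), z)

def area_element(surf, phi, z, h=1e-6):
    """|d_phi surf x d_z surf| estimated by central differences."""
    dp = [(p - q) / (2 * h) for p, q in zip(surf(phi + h, z), surf(phi - h, z))]
    dz = [(p - q) / (2 * h) for p, q in zip(surf(phi, z + h), surf(phi, z - h))]
    cross = (dp[1] * dz[2] - dp[2] * dz[1],
             dp[2] * dz[0] - dp[0] * dz[2],
             dp[0] * dz[1] - dp[1] * dz[0])
    return math.sqrt(sum(c * c for c in cross))

for _ in range(1000):
    phi, z = random.uniform(0.0, 2.0 * math.pi), random.uniform(-0.9, 0.9)
    assert abs(area_element(sphere, phi, z) - area_element(cylinder, phi, z)) < 1e-4
print("area elements agree; shear and squeeze have |det| = 1")
```

Restricting z to (−0.9, 0.9) keeps the sample points away from the poles, where the square-root parametrization is not differentiable.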
[ { "math_id": 0, "text": "\\bigl|df_p(v)\\wedge df_p(w)\\bigr| = |v\\wedge w|\\," }, { "math_id": 1, "text": "\\wedge" }, { "math_id": 2, "text": "f(x,y,z) = \\left(\\frac{x}{\\sqrt{x^2+y^2}}, \\frac{y}{\\sqrt{x^2+y^2}}, z\\right)" }, { "math_id": 3, "text": "\\begin{pmatrix}1 & v \\\\ 0 & 1 \\end{pmatrix} \\,\\begin{pmatrix}x\\\\y \\end{pmatrix} = \\begin{pmatrix}x+vy\\\\y \\end{pmatrix}." }, { "math_id": 4, "text": "\\begin{pmatrix}\\lambda & 0 \\\\ 0 & 1/\\lambda \\end{pmatrix}\\,\\begin{pmatrix}x\\\\y \\end{pmatrix} = \\begin{pmatrix}\\lambda x\\\\ y/\\lambda.\\end{pmatrix}" }, { "math_id": 5, "text": "\\begin{pmatrix}a & b \\\\ c & d \\end{pmatrix}" }, { "math_id": 6, "text": "|df_p(v)\\times df_p(w)|=\\kappa|v\\times w|" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "w" } ]
https://en.wikipedia.org/wiki?curid=8953468
8953682
Flatness (systems theory)
Flatness in systems theory is a system property that extends the notion of controllability from linear systems to nonlinear dynamical systems. A system that has the flatness property is called a "flat system". Flat systems have a (fictitious) "flat output", which can be used to explicitly express all states and inputs in terms of the flat output and a finite number of its derivatives. Definition. A nonlinear system formula_0 is flat, if there exists an output formula_1 that satisfies the following conditions: If these conditions are satisfied at least locally, then the (possibly fictitious) output is called "flat output", and the system is "flat". Relation to controllability of linear systems. A linear system formula_10 with the same signal dimensions for formula_11 as the nonlinear system is flat, if and only if it is controllable. For linear systems both properties are equivalent, hence exchangeable. Significance. The flatness property is useful for both the analysis of and controller synthesis for nonlinear dynamical systems. It is particularly advantageous for solving trajectory planning problems and asymptotical setpoint following control.
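The definition above is abstract, so a standard textbook illustration (not taken from this article) may help: the double integrator x1' = x2, x2' = u is flat with flat output y = x1, since x1 = y, x2 = y' and u = y''. The sketch below assumes the sympy library is available; it plans a rest-to-rest trajectory purely in terms of the flat output and checks it against the dynamics.

```python
# Flatness on the double integrator x1' = x2, x2' = u with flat output y = x1:
# all states and the input follow from y and its derivatives, so trajectory
# planning reduces to choosing a smooth y(t). (Standard example, not from the article.)
import sympy as sp

t, T = sp.symbols("t T", positive=True)

# Rest-to-rest transition y(0) = 0 -> y(T) = 1 with zero start and end velocity:
# a cubic in s = t/T satisfying y(0) = 0, y'(0) = 0, y(T) = 1, y'(T) = 0.
s = t / T
y = 3 * s**2 - 2 * s**3

x1 = y                # first state equals the flat output
x2 = sp.diff(y, t)    # second state is the first derivative of the flat output
u = sp.diff(y, t, 2)  # the input is the second derivative

# Consistency with the dynamics and the boundary conditions.
assert sp.simplify(sp.diff(x1, t) - x2) == 0
assert sp.simplify(sp.diff(x2, t) - u) == 0
assert y.subs(t, 0) == 0 and sp.simplify(y.subs(t, T) - 1) == 0
assert x2.subs(t, 0) == 0 and sp.simplify(x2.subs(t, T)) == 0
print("flat-output trajectory plan is consistent")
```

Because every state and the input are functions of y and finitely many of its derivatives, any sufficiently smooth y(t) meeting the boundary conditions yields a feasible open-loop plan, which is the trajectory-planning advantage mentioned above.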
[ { "math_id": 0, "text": "\\dot{\\mathbf{x}}(t) = \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t)), \\quad \\mathbf{x}(0) = \\mathbf{x}_0, \\quad \\mathbf{u}(t) \\in R^m, \\quad \\mathbf{x}(t) \\in R^n, \\text{Rank} \\frac{\\partial\\mathbf{f}(\\mathbf{x},\\mathbf{u})}{\\partial\\mathbf{u}} = m" }, { "math_id": 1, "text": "\\mathbf{y}(t) = (y_1(t),...,y_m(t))" }, { "math_id": 2, "text": "y_i,i=1,...,m" }, { "math_id": 3, "text": "x_i,i=1,...,n" }, { "math_id": 4, "text": "u_i,i=1,...,m" }, { "math_id": 5, "text": "u_i^{(k)}, k=1,...,\\alpha_i" }, { "math_id": 6, "text": "\\mathbf{y} = \\Phi(\\mathbf{x},\\mathbf{u},\\dot{\\mathbf{u}},...,\\mathbf{u}^{(\\alpha)})" }, { "math_id": 7, "text": "y_i^{(k)}, i=1,...,m" }, { "math_id": 8, "text": "\\mathbf{y}" }, { "math_id": 9, "text": "\\phi(\\mathbf{y},\\dot{\\mathbf{y}},\\mathbf{y}^{(\\gamma)}) = \\mathbf{0}" }, { "math_id": 10, "text": "\\dot{\\mathbf{x}}(t) = \\mathbf{A}\\mathbf{x}(t) + \\mathbf{B}\\mathbf{u}(t), \\quad \\mathbf{x}(0) = \\mathbf{x}_0" }, { "math_id": 11, "text": "\\mathbf{x},\\mathbf{u}" } ]
https://en.wikipedia.org/wiki?curid=8953682
89547
Water vapor
Gaseous phase of water Water vapor, water vapour or aqueous vapor is the gaseous phase of water. It is one state of water within the hydrosphere. Water vapor can be produced from the evaporation or boiling of liquid water or from the sublimation of ice. Water vapor is transparent, like most constituents of the atmosphere. Under typical atmospheric conditions, water vapor is continuously generated by evaporation and removed by condensation. It is less dense than most of the other constituents of air and triggers convection currents that can lead to clouds and fog. Being a component of Earth's hydrosphere and hydrologic cycle, it is particularly abundant in Earth's atmosphere, where it acts as a greenhouse gas and warming feedback, contributing more to total greenhouse effect than non-condensable gases such as carbon dioxide and methane. Use of water vapor, as steam, has been important for cooking, and as a major component in energy production and transport systems since the industrial revolution. Water vapor is a relatively common atmospheric constituent, present even in the solar atmosphere as well as every planet in the Solar System and many astronomical objects including natural satellites, comets and even large asteroids. Likewise the detection of extrasolar water vapor would indicate a similar distribution in other planetary systems. Water vapor can also be indirect evidence supporting the presence of extraterrestrial liquid water in the case of some planetary mass objects. Water vapor, which reacts to temperature changes, is referred to as a 'feedback', because it amplifies the effect of forces that initially cause the warming. So, it is a greenhouse gas. Properties. Evaporation. Whenever a water molecule leaves a surface and diffuses into a surrounding gas, it is said to have evaporated. Each individual water molecule which transitions between a more associated (liquid) and a less associated (vapor/gas) state does so through the absorption or release of kinetic energy. The aggregate measurement of this kinetic energy transfer is defined as thermal energy and occurs only when there is differential in the temperature of the water molecules. Liquid water that becomes water vapor takes a parcel of heat with it, in a process called evaporative cooling. The amount of water vapor in the air determines how frequently molecules will return to the surface. When a net evaporation occurs, the body of water will undergo a net cooling directly related to the loss of water. In the US, the National Weather Service measures the actual rate of evaporation from a standardized "pan" open water surface outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over 120 inches per year. Formulas can be used for calculating the rate of evaporation from a water surface such as a swimming pool. In some countries, the evaporation rate far exceeds the precipitation rate. Evaporative cooling is restricted by atmospheric conditions. Humidity is the amount of water vapor in the air. The vapor content of air is measured with devices known as hygrometers. The measurements are usually expressed as specific humidity or percent relative humidity. The temperatures of the atmosphere and the water surface determine the equilibrium vapor pressure; 100% relative humidity occurs when the partial pressure of water vapor is equal to the equilibrium vapor pressure. 
This condition is often referred to as complete saturation. Humidity ranges from 0 grams per cubic metre in dry air to 30 grams per cubic metre (0.03 ounce per cubic foot) when the vapor is saturated at 30 °C. Sublimation. Sublimation is the process by which water molecules directly leave the surface of ice without first becoming liquid water. Sublimation accounts for the slow mid-winter disappearance of ice and snow at temperatures too low to cause melting. Antarctica shows this effect to a unique degree because it is by far the continent with the lowest rate of precipitation on Earth. As a result, there are large areas where millennial layers of snow have sublimed, leaving behind whatever non-volatile materials they had contained. This is extremely valuable to certain scientific disciplines, a dramatic example being the collection of meteorites that are left exposed in unparalleled numbers and excellent states of preservation. Sublimation is important in the preparation of certain classes of biological specimens for scanning electron microscopy. Typically the specimens are prepared by cryofixation and freeze-fracture, after which the broken surface is freeze-etched, being eroded by exposure to vacuum until it shows the required level of detail. This technique can display protein molecules, organelle structures and lipid bilayers with very low degrees of distortion. Condensation. Water vapor will only condense onto another surface when that surface is cooler than the dew point temperature, or when the water vapor equilibrium in air has been exceeded. When water vapor condenses onto a surface, a net warming occurs on that surface. The water molecule brings heat energy with it. In turn, the temperature of the atmosphere drops slightly. In the atmosphere, condensation produces clouds, fog and precipitation (usually only when facilitated by cloud condensation nuclei). The dew point of an air parcel is the temperature to which it must cool before water vapor in the air begins to condense. Condensation in the atmosphere forms cloud droplets. Also, a net condensation of water vapor occurs on surfaces when the temperature of the surface is at or below the dew point temperature of the atmosphere. Deposition is a phase transition separate from condensation which leads to the direct formation of ice from water vapor. Frost and snow are examples of deposition. There are several mechanisms of cooling by which condensation occurs: 1) Direct loss of heat by conduction or radiation. 2) Cooling from the drop in air pressure which occurs with uplift of air, also known as adiabatic cooling. Air can be lifted by mountains, which deflect the air upward, by convection, and by cold and warm fronts. 3) Advective cooling - cooling due to horizontal movement of air. Chemical reactions. A number of chemical reactions have water as a product. If the reactions take place at temperatures higher than the dew point of the surrounding air the water will be formed as vapor and increase the local humidity, if below the dew point local condensation will occur. Typical reactions that result in water formation are the burning of hydrogen or hydrocarbons in air or other oxygen containing gas mixtures, or as a result of reactions with oxidizers. 
In a similar fashion other chemical or physical reactions can take place in the presence of water vapor resulting in new chemicals forming such as rust on iron or steel, polymerization occurring (certain polyurethane foams and cyanoacrylate glues cure with exposure to atmospheric humidity) or forms changing such as where anhydrous chemicals may absorb enough vapor to form a crystalline structure or alter an existing one, sometimes resulting in characteristic color changes that can be used for measurement. Measurement. Measuring the quantity of water vapor in a medium can be done directly or remotely with varying degrees of accuracy. Remote methods such electromagnetic absorption are possible from satellites above planetary atmospheres. Direct methods may use electronic transducers, moistened thermometers or hygroscopic materials measuring changes in physical properties or dimensions. Impact on air density. Water vapor is lighter or less dense than dry air. At equivalent temperatures it is buoyant with respect to dry air, whereby the density of dry air at standard temperature and pressure (273.15 K, 101.325 kPa) is 1.27 g/L and water vapor at standard temperature has a vapor pressure of 0.6 kPa and the much lower density of 0.0048 g/L. Calculations. Water vapor and dry air density calculations at 0 °C: At equal temperatures. At the same temperature, a column of dry air will be denser or heavier than a column of air containing any water vapor, the molar mass of diatomic nitrogen and diatomic oxygen both being greater than the molar mass of water. Thus, any volume of dry air will sink if placed in a larger volume of moist air. Also, a volume of moist air will rise or be buoyant if placed in a larger region of dry air. As the temperature rises the proportion of water vapor in the air increases, and its buoyancy will increase. The increase in buoyancy can have a significant atmospheric impact, giving rise to powerful, moisture rich, upward air currents when the air temperature and sea temperature reaches 25 °C or above. This phenomenon provides a significant driving force for cyclonic and anticyclonic weather systems (typhoons and hurricanes). Respiration and breathing. Water vapor is a by-product of respiration in plants and animals. Its contribution to the pressure, increases as its concentration increases. Its partial pressure contribution to air pressure increases, lowering the partial pressure contribution of the other atmospheric gases (Dalton's Law). The total air pressure must remain constant. The presence of water vapor in the air naturally dilutes or displaces the other air components as its concentration increases. This can have an effect on respiration. In very warm air (35 °C) the proportion of water vapor is large enough to give rise to the stuffiness that can be experienced in humid jungle conditions or in poorly ventilated buildings. Lifting gas. Water vapor has lower density than that of air and is therefore buoyant in air but has lower vapor pressure than that of air. When water vapor is used as a lifting gas by a thermal airship the water vapor is heated to form steam so that its vapor pressure is greater than the surrounding air pressure in order to maintain the shape of a theoretical "steam balloon", which yields approximately 60% the lift of helium and twice that of hot air. General discussion. The amount of water vapor in an atmosphere is constrained by the restrictions of partial pressures and temperature. 
Dew point temperature and relative humidity act as guidelines for the process of water vapor in the water cycle. Energy input, such as sunlight, can trigger more evaporation on an ocean surface or more sublimation on a chunk of ice on top of a mountain. The "balance" between condensation and evaporation gives the quantity called vapor partial pressure. The maximum partial pressure ("saturation pressure") of water vapor in air varies with temperature of the air and water vapor mixture. A variety of empirical formulas exist for this quantity; the most used reference formula is the Goff-Gratch equation for the SVP over liquid water below zero degrees Celsius: formula_0 where T, temperature of the moist air, is given in units of kelvin, and p is given in units of millibars (hectopascals). The formula is valid from about −50 to 102 °C; however there are a very limited number of measurements of the vapor pressure of water over supercooled liquid water. There are a number of other formulae which can be used. Under certain conditions, such as when the boiling temperature of water is reached, a net evaporation will always occur during standard atmospheric conditions regardless of the percent of relative humidity. This immediate process will dispel massive amounts of water vapor into a cooler atmosphere. Exhaled air is almost fully at equilibrium with water vapor at the body temperature. In the cold air the exhaled vapor quickly condenses, thus showing up as a fog or mist of water droplets and as condensation or frost on surfaces. Forcibly condensing these water droplets from exhaled breath is the basis of exhaled breath condensate, an evolving medical diagnostic test. Controlling water vapor in air is a key concern in the heating, ventilating, and air-conditioning (HVAC) industry. Thermal comfort depends on the moist air conditions. Non-human comfort situations are called refrigeration, and also are affected by water vapor. For example, many food stores, like supermarkets, utilize open chiller cabinets, or "food cases", which can significantly lower the water vapor pressure (lowering humidity). This practice delivers several benefits as well as problems. In Earth's atmosphere. Gaseous water represents a small but environmentally significant constituent of the atmosphere. The percentage of water vapor in surface air varies from 0.01% at -42 °C (-44 °F) to 4.24% when the dew point is 30 °C (86 °F). Over 99% of atmospheric water is in the form of vapour, rather than liquid water or ice, and approximately 99.13% of the water vapour is contained in the troposphere. The condensation of water vapor to the liquid or ice phase is responsible for clouds, rain, snow, and other precipitation, all of which count among the most significant elements of what we experience as weather. Less obviously, the latent heat of vaporization, which is released to the atmosphere whenever condensation occurs, is one of the most important terms in the atmospheric energy budget on both local and global scales. For example, latent heat release in atmospheric convection is directly responsible for powering destructive storms such as tropical cyclones and severe thunderstorms. Water vapor is an important greenhouse gas owing to the presence of the hydroxyl bond which strongly absorbs in the infra-red. Water vapor is the "working medium" of the atmospheric thermodynamic engine which transforms heat energy from sun irradiation into mechanical energy in the form of winds. 
Transforming thermal energy into mechanical energy requires an upper and a lower temperature level, as well as a working medium which shuttles forth and back between both. The upper temperature level is given by the soil or water surface of the Earth, which absorbs the incoming sun radiation and warms up, evaporating water. The moist and warm air at the ground is lighter than its surroundings and rises up to the upper limit of the troposphere. There the water molecules radiate their thermal energy into outer space, cooling down the surrounding air. The upper atmosphere constitutes the lower temperature level of the atmospheric thermodynamic engine. The water vapor in the now cold air condenses out and falls down to the ground in the form of rain or snow. The now heavier cold and dry air sinks down to ground as well; the atmospheric thermodynamic engine thus establishes a vertical convection, which transports heat from the ground into the upper atmosphere, where the water molecules can radiate it to outer space. Due to the Earth's rotation and the resulting Coriolis forces, this vertical atmospheric convection is also converted into a horizontal convection, in the form of cyclones and anticyclones, which transport the water evaporated over the oceans into the interior of the continents, enabling vegetation to grow. Water in Earth's atmosphere is not merely below its boiling point (100 °C), but at altitude it goes below its freezing point (0 °C), due to water's highly polar attraction. When combined with its quantity, water vapor then has a relevant dew point and frost point, unlike e. g., carbon dioxide and methane. Water vapor thus has a scale height a fraction of that of the bulk atmosphere, as the water condenses and exits, primarily in the troposphere, the lowest layer of the atmosphere. Carbon dioxide (CO2) and methane, being well-mixed in the atmosphere, tend to rise above water vapour. The absorption and emission of both compounds contribute to Earth's emission to space, and thus the planetary greenhouse effect. This greenhouse forcing is directly observable, via distinct spectral features versus water vapor, and observed to be rising with rising CO2 levels. Conversely, adding water vapor at high altitudes has a disproportionate impact, which is why jet traffic has a disproportionately high warming effect. Oxidation of methane is also a major source of water vapour in the stratosphere, and adds about 15% to methane's global warming effect. In the absence of other greenhouse gases, Earth's water vapor would condense to the surface; this has likely happened, possibly more than once. Scientists thus distinguish between non-condensable (driving) and condensable (driven) greenhouse gases, i.e., the above water vapor feedback. Fog and clouds form through condensation around cloud condensation nuclei. In the absence of nuclei, condensation will only occur at much lower temperatures. Under persistent condensation or deposition, cloud droplets or snowflakes form, which precipitate when they reach a critical mass. Atmospheric concentration of water vapour is highly variable between locations and times, from 10 ppmv in the coldest air to 5% (50 000 ppmv) in humid tropical air, and can be measured with a combination of land observations, weather balloons and satellites. The water content of the atmosphere as a whole is constantly depleted by precipitation. At the same time it is constantly replenished by evaporation, most prominently from oceans, lakes, rivers, and moist earth. 
Other sources of atmospheric water include combustion, respiration, volcanic eruptions, the transpiration of plants, and various other biological and geological processes. At any given time there is about 1.29 x 1016 litres (3.4 x 1015 gal.) of water in the atmosphere. The atmosphere holds 1 part in 2500 of the fresh water, and 1 part in 100,000 of the total water on Earth. The mean global content of water vapor in the atmosphere is roughly sufficient to cover the surface of the planet with a layer of liquid water about 25 mm deep. The mean annual precipitation for the planet is about 1 metre, a comparison which implies a rapid turnover of water in the air – on average, the residence time of a water molecule in the troposphere is about 9 to 10 days. Global mean water vapour is about 0.25% of the atmosphere by mass and also varies seasonally, in terms of contribution to atmospheric pressure between 2.62 hPa in July and 2.33 hPa in December. IPCC AR6 expresses medium confidence in increase of total water vapour at about 1-2% per decade; it is expected to increase by around 7% per °C of warming. Episodes of surface geothermal activity, such as volcanic eruptions and geysers, release variable amounts of water vapor into the atmosphere. Such eruptions may be large in human terms, and major explosive eruptions may inject exceptionally large masses of water exceptionally high into the atmosphere, but as a percentage of total atmospheric water, the role of such processes is trivial. The relative concentrations of the various gases emitted by volcanoes varies considerably according to the site and according to the particular event at any one site. However, water vapor is consistently the commonest volcanic gas; as a rule, it comprises more than 60% of total emissions during a subaerial eruption. Atmospheric water vapor content is expressed using various measures. These include vapor pressure, specific humidity, mixing ratio, dew point temperature, and relative humidity. Radar and satellite imaging. Because water molecules absorb microwaves and other radio wave frequencies, water in the atmosphere attenuates radar signals. In addition, atmospheric water will reflect and refract signals to an extent that depends on whether it is vapor, liquid or solid. Generally, radar signals lose strength progressively the farther they travel through the troposphere. Different frequencies attenuate at different rates, such that some components of air are opaque to some frequencies and transparent to others. Radio waves used for broadcasting and other communication experience the same effect. Water vapor reflects radar to a lesser extent than do water's other two phases. In the form of drops and ice crystals, water acts as a prism, which it does not do as an individual molecule; however, the existence of water vapor in the atmosphere causes the atmosphere to act as a giant prism. A comparison of GOES-12 satellite images shows the distribution of atmospheric water vapor relative to the oceans, clouds and continents of the Earth. Vapor surrounds the planet but is unevenly distributed. The image loop on the right shows monthly average of water vapor content with the units are given in centimeters, which is the precipitable water or equivalent amount of water that could be produced if all the water vapor in the column were to condense. The lowest amounts of water vapor (0 centimeters) appear in yellow, and the highest amounts (6 centimeters) appear in dark blue. Areas of missing data appear in shades of gray. 
The maps are based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on NASA's Aqua satellite. The most noticeable pattern in the time series is the influence of seasonal temperature changes and incoming sunlight on water vapor. In the tropics, a band of extremely humid air wobbles north and south of the equator as the seasons change. This band of humidity is part of the Intertropical Convergence Zone, where the easterly trade winds from each hemisphere converge and produce near-daily thunderstorms and clouds. Farther from the equator, water vapor concentrations are high in the hemisphere experiencing summer and low in the one experiencing winter. Another pattern that shows up in the time series is that water vapor amounts over land areas decrease more in winter months than adjacent ocean areas do. This is largely because air temperatures over land drop more in the winter than temperatures over the ocean. Water vapor condenses more rapidly in colder air. As water vapor absorbs light in the visible spectral range, its absorption can be used in spectroscopic applications (such as DOAS) to determine the amount of water vapor in the atmosphere. This is done operationally, e.g. from the Global Ozone Monitoring Experiment (GOME) spectrometers on ERS (GOME) and MetOp (GOME-2). The weaker water vapor absorption lines in the blue spectral range and further into the UV up to its dissociation limit around 243 nm are mostly based on quantum mechanical calculations and are only partly confirmed by experiments. Lightning generation. Water vapor plays a key role in lightning production in the atmosphere. From cloud physics, usually clouds are the real generators of static charge as found in Earth's atmosphere. The ability of clouds to hold massive amounts of electrical energy is directly related to the amount of water vapor present in the local system. The amount of water vapor directly controls the permittivity of the air. During times of low humidity, static discharge is quick and easy. During times of higher humidity, fewer static discharges occur. Permittivity and capacitance work hand in hand to produce the megawatt outputs of lightning. After a cloud, for instance, has started its way to becoming a lightning generator, atmospheric water vapor acts as a substance (or insulator) that decreases the ability of the cloud to discharge its electrical energy. Over a certain amount of time, if the cloud continues to generate and store more static electricity, the barrier that was created by the atmospheric water vapor will ultimately break down from the stored electrical potential energy. This energy will be released to a local oppositely charged region, in the form of lightning. The strength of each discharge is directly related to the atmospheric permittivity, capacitance, and the source's charge generating ability. Extraterrestrial. Water vapor is common in the Solar System and by extension, other planetary systems. Its signature has been detected in the atmospheres of the Sun, occurring in sunspots. The presence of water vapor has been detected in the atmospheres of all seven extraterrestrial planets in the Solar System, the Earth's Moon, and the moons of other planets, although typically in only trace amounts. Geological formations such as cryogeysers are thought to exist on the surface of several icy moons ejecting water vapor due to tidal heating and may indicate the presence of substantial quantities of subsurface water. 
Plumes of water vapor have been detected on Jupiter's moon Europa and are similar to plumes of water vapor detected on Saturn's moon Enceladus. Traces of water vapor have also been detected in the stratosphere of Titan. Water vapor has been found to be a major constituent of the atmosphere of dwarf planet, Ceres, largest object in the asteroid belt The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes." According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." Scientists studying Mars hypothesize that if water moves about the planet, it does so as vapor. The brilliance of comet tails comes largely from water vapor. On approach to the Sun, the ice many comets carry sublimes to vapor. Knowing a comet's distance from the sun, astronomers may deduce the comet's water content from its brilliance. Water vapor has also been confirmed outside the Solar System. Spectroscopic analysis of HD 209458 b, an extrasolar planet in the constellation Pegasus, provides the first evidence of atmospheric water vapor beyond the Solar System. A star called CW Leonis was found to have a ring of vast quantities of water vapor circling the aging, massive star. A NASA satellite designed to study chemicals in interstellar gas clouds, made the discovery with an onboard spectrometer. Most likely, "the water vapor was vaporized from the surfaces of orbiting comets." Other exoplanets with evidence of water vapor include HAT-P-11b and K2-18b. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" /> Bibliography. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\begin{align}\n\\log_{10} \\left( p \\right) =\n& -7.90298 \\left( \\frac{373.16}{T}-1 \\right) + 5.02808 \\log_{10} \\frac{373.16}{T} \\\\\n& - 1.3816 \\times 10^{-7} \\left( 10^{11.344 \\left( 1-\\frac{T}{373.16} \\right)} -1 \\right) \\\\\n& + 8.1328 \\times 10^{-3} \\left( 10^{-3.49149 \\left( \\frac{373.16}{T}-1 \\right)} -1 \\right) \\\\\n& + \\log_{10} \\left( 1013.246 \\right)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=89547
8955537
Gravitation of the Moon
The acceleration due to gravity on the surface of the Moon is approximately 1.625 m/s2, about 16.6% that on Earth's surface or 0.166 "ɡ". Over the entire surface, the variation in gravitational acceleration is about 0.0253 m/s2 (1.6% of the acceleration due to gravity). Because weight is directly dependent upon gravitational acceleration, things on the Moon will weigh only 16.6% (= 1/6) of what they weigh on the Earth. Gravitational field. The gravitational field of the Moon has been measured by tracking the radio signals emitted by orbiting spacecraft. The principle used depends on the Doppler effect, whereby the line-of-sight spacecraft acceleration can be measured by small shifts in frequency of the radio signal, and the measurement of the distance from the spacecraft to a station on Earth. Since the gravitational field of the Moon affects the orbit of a spacecraft, one can use this tracking data to detect gravity anomalies. Most low lunar orbits are unstable. Detailed data collected has shown that for low lunar orbit the only "stable" orbits are at inclinations near 27°, 50°, 76°, and 86°. Because of the Moon's synchronous rotation it is not possible to track spacecraft from Earth much beyond the limbs of the Moon, so until the recent Gravity Recovery and Interior Laboratory (GRAIL) mission the far-side gravity field was not well mapped. The missions with accurate Doppler tracking that have been used for deriving gravity fields are in the accompanying table. The table gives the mission spacecraft name, a brief designation, the number of mission spacecraft with accurate tracking, the country of origin, and the time span of the Doppler data. Apollos 15 and 16 released subsatellites. The Kaguya/SELENE mission had tracking between 3 satellites to get far-side tracking. GRAIL had very accurate tracking between 2 spacecraft and tracking from Earth. The accompanying table below lists lunar gravity fields. The table lists the designation of the gravity field, the highest degree and order, a list of mission IDs that were analyzed together, and a citation. Mission ID LO includes all 5 Lunar Orbiter missions. The GRAIL fields are very accurate; other missions are not combined with GRAIL. A major feature of the Moon's gravitational field is the presence of mascons, which are large positive gravity anomalies associated with some of the giant impact basins. These anomalies significantly influence the orbit of spacecraft around the Moon, and an accurate gravitational model is necessary in the planning of both crewed and uncrewed missions. They were initially discovered by the analysis of Lunar Orbiter tracking data: navigation tests prior to the Apollo program showed positioning errors much larger than mission specifications. Mascons are in part due to the presence of dense mare basaltic lava flows that fill some of the impact basins. However, lava flows by themselves cannot fully explain the gravitational variations, and uplift of the crust-mantle interface is required as well. Based on Lunar Prospector gravitational models, it has been suggested that some mascons exist that do not show evidence for mare basaltic volcanism. The huge expanse of mare basaltic volcanism associated with Oceanus Procellarum does not cause a positive gravity anomaly. The center of gravity of the Moon does not coincide exactly with its geometric center, but is displaced toward the Earth by about 2 kilometers. Mass of Moon. 
The gravitational constant "G" is less accurate than the product of "G" and masses for Earth and Moon. Consequently, it is conventional to express the lunar mass "M" multiplied by the gravitational constant "G". The lunar "GM" = 4902.8001 km3/s2 from GRAIL analyses. The mass of the Moon is "M" = 7.3458 × 1022 kg and the mean density is 3346 kg/m3. The lunar "GM" is 1/81.30057 of the Earth's "GM". Theory. For the lunar gravity field, it is conventional to use an equatorial radius of "R" = 1738.0 km. The gravity potential is written with a series of spherical harmonic functions "P""nm". The gravitational potential "V" at an external point is conventionally expressed as positive in astronomy and geophysics, but negative in physics. Then, with the former sign, formula_0 where "r" is the radius to an external point with r ≥ "R", "φ" is the latitude of the external point, and λ is the east longitude of the external point. Note that the spherical harmonic functions "Pnm" can be normalized or unnormalized affecting the gravity coefficients "Jn", "Cnm", and "Snm". Here we will use unnormalized functions and compatible coefficients. The "Pn0" are called Legendre polynomials and the "Pnm" with "m"≠0 are called the Associated Legendre polynomials, where subscript "n" is the degree, "m" is the order, and "m" ≤ "n". The sums start at "n" = 2. The unnormalized degree-2 functions are formula_1 Note that of the three functions, only "P"20(±1)=1 is finite at the poles. More generally, only "P"n0(±1)=1 are finite at the poles. The gravitational acceleration of vector position r is formula_2 where er, eφ, and eλ are unit vectors in the three directions. Gravity coefficients. The unnormalized gravity coefficients of degree 2 and 3 that were determined by the GRAIL mission are given in Table 1. The zero values of "C"21, "S"21, and "S"22 are because a principal axis frame is being used. There are no degree-1 coefficients when the three axes are centered on the center of mass. The "J"2 coefficient for an oblate shape to the gravity field is affected by rotation and solid-body tides whereas "C"22 is affected by solid-body tides. Both are larger than their equilibrium values showing that the upper layers of the Moon are strong enough to support elastic stress. The "C"31 coefficient is large. Simulating lunar gravity. In January 2022 China was reported by the "South China Morning Post" to have built a small (60 centimeters in diameter) research facility to simulate low lunar gravity with the help of magnets. The facility was reportedly partly inspired by the work of Andre Geim (who later shared the 2010 Nobel Prize in Physics for his research on graphene) and Michael Berry, who both shared the Ig Nobel Prize in Physics in 2000 for the magnetic levitation of a frog. References. <templatestyles src="Reflist/styles.css" />
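The quantities quoted in this section can be cross-checked with a few lines of code. The gravitational constant used below is the standard CODATA value and is an assumption on our part (the article deliberately works with "GM" instead of "G" and "M" separately); the degree-2 Legendre functions are transcribed from the expressions above.

```python
import math

GM_MOON = 4902.8001e9   # m^3/s^2, lunar GM quoted above (4902.8001 km^3/s^2)
R_REF   = 1738.0e3      # m, reference equatorial radius used for the gravity field
G       = 6.674e-11     # m^3/(kg s^2), standard CODATA value (assumption, not from this article)

# Point-mass (degree-0) surface gravity and implied lunar mass.
g0 = GM_MOON / R_REF**2   # ~1.62 m/s^2, close to the 1.625 m/s^2 quoted for the surface
mass = GM_MOON / G        # ~7.35e22 kg, consistent with the quoted lunar mass

# Unnormalized degree-2 Legendre functions exactly as given in the article.
def P20(phi): return 1.5 * math.sin(phi)**2 - 0.5
def P21(phi): return 3.0 * math.sin(phi) * math.cos(phi)
def P22(phi): return 3.0 * math.cos(phi)**2

print(f"surface gravity from GM/R^2 : {g0:.3f} m/s^2")
print(f"lunar mass from GM/G        : {mass:.3e} kg")
print(f"P20 at the pole             : {P20(math.pi / 2):.1f}")  # = 1, finite at the poles
```

The small difference between 1.62 and 1.625 m/s^2 comes from evaluating at the 1738 km reference radius rather than at the slightly smaller mean radius.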
[ { "math_id": 0, "text": " V = \\left ( \\frac{GM}{r} \\right )- \\left ( \\frac{GM}{r} \\right ) \\sum \\left ( \\frac{R}{r} \\right )^n J_n P_{n,0} (\\sin \\phi) + \\left ( \\frac{GM}{r} \\right ) \\sum \\left ( \\frac{R}{r} \\right )^n [C_{n,m} P_{n,m}(\\sin \\phi) \\cos(m \\lambda) + S_{n,m}P_{n,m} (\\sin \\phi) \\sin(m \\lambda) ] " }, { "math_id": 1, "text": " \\begin{align}\nP_{2,0} &= \\frac{3}{2} \\sin^2 \\!\\phi - \\frac{1}{2} \\\\[1ex]\nP_{2,1} &= 3 \\sin \\phi \\cos \\phi \\\\[1ex]\nP_{2,2} &= 3 \\cos^2 \\phi\n\\end{align} " }, { "math_id": 2, "text": "\\begin{align}\n\\frac{d^2 r}{dt^2} &= \\nabla V \\\\[1ex]\n&= {\\partial V \\over \\partial r} e_r + \\frac{1}{r} {\\partial V \\over \\partial \\phi} e_\\phi + \\frac{1}{r \\cos \\phi} {\\partial V \\over \\partial \\lambda} {e_\\lambda} \n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=8955537
8958586
CDHS experiment
CDHS was a neutrino experiment at CERN taking data from 1976 until 1984. The experiment was officially referred to as "WA1". CDHS was a collaboration of groups from CERN, Dortmund, Heidelberg, Saclay and later Warsaw. The collaboration was led by Jack Steinberger. The experiment was designed to study deep inelastic neutrino interactions in iron. Experimental setup. The core of the detector consisted of 19 (later 20) magnetized iron modules. In the spacings between these, drift chambers for track reconstruction were installed. Additionally, plastic scintillators were inserted into the iron. Each iron module therefore served successively as an interaction target, where the neutrinos hit and produced hadron showers, a calorimeter that measured those hadrons' energy and a spectrometer, determining the momenta of produced muons via magnetic deflection. At the time of its completion in 1976, the overall detector was 20 m long and weighed approximately 1250 tons. The experiment was located in CERN's West Area, in building 182. The neutrinos (and antineutrinos) were produced by protons from the Super Proton Synchrotron (SPS) at energies of around 400 GeV, which were shot onto a beryllium target. History. The experiment was first proposed in July 1973 by a group led by Jack Steinberger as a two-piece detector. The front should serve as the neutrino target and hadronic shower detector, the following second part should detect the muon traces. It was planned that the four proposing groups from Saclay, Dortmund, Heidelberg and CERN would contribute with complementary expertise and manpower. For example, Saclay was assigned to be in charge of the drift chambers, whereas CERN should handle the iron core magnets. It were also these four groups that gave the experiment its name: "CERN Dortmund Heidelberg Saclay (CDHS)". Approximately 30 people should form the final experiment group. After prolonged discussions with the SPS Committee, that was in charge of approving the proposals and distributing available money, an updated proposal for the new detector was submitted in March 1974. The suggested detector was a modular setup consisting of magnetized iron modules in combination with drift chambers and plastic scintillators. This new proposal was approved by the committee in April 1974. Construction started soon after and was completed in 1976. The experiment's official name was "WA1", since it was the first approved experiment at CERN's West Area. The estimated cost of the detector ranged between 6 and 8 million CHF. In 1979, an upgrade of the experimental setup was proposed. The main reason for this upgrade was the comparably low resolution of eight of the 19 detector modules. This situation should be improved by inserting twelve new and better modules, resulting in a slightly longer and significantly more accurate machine. The proposal also included the suggestion for a group from Warsaw University, led by Adam Para, to join the project. Starting with the long shutdown of the Super Proton Synchrotron (SPS) from summer 1980 on, the requested changes were implemented. Eventually, half of the experiment's target calorimeters got replaced and the total number of detector modules was increased from 19 to 20. This led to four times higher spatial resolution of the produced particles as well as 25% more accurate measurements of the deposited hadronic energy. Additionally, four new drift chambers were installed, improving the reconstruction of muon tracks. 
Later, a liquid hydrogen tank was added in front of the detector as a target to measure the structure function of protons. CDHS took data with neutrinos delivered by the SPS from late 1976 until September 1984. Results and discoveries. The scientific goal of the CDHS experiment was to study high energy neutrino interactions. When the incoming neutrinos (or antineutrinos) interacted with the target iron, either charged current (ν_μ + Fe → μ⁻ + anything) or neutral current (ν_μ + Fe → ν_μ + anything) events could be produced. One of the main objectives of the experiment was to determine the ratio between the neutral and the charged inclusive neutrino cross sections, from which the Weinberg angle could be inferred (a schematic numerical illustration appears below). Neutral currents had previously been discovered by the Gargamelle experiment, which had also provided first estimates of the Weinberg angle. CDHS confirmed these results and measured the angle with much higher precision, making it possible to predict the mass of the top quark to a precision of roughly ±40 GeV before its discovery at the Tevatron. Other measurements regarding the electroweak interaction within the standard model included events containing more than one muon, i.e. dimuon and trimuon events. Results obtained at CDHS provided experimental validation of the standard model, at a time when this model was still in the testing phase. An important step in this regard was the falsification of the alleged "high-y anomaly". The value y characterizes the inelasticity of neutrino collisions, i.e. it measures the amount of energy that an incoming neutrino transfers to the hadrons during their collision. Experiments at Fermilab had reported this anomaly, which challenged the standard model, but results from CDHS disproved those findings, strengthening the standard model. CDHS examined the nucleon structure functions, which enabled scientists to confirm the theory of quantum chromodynamics (QCD). This work included the determination of the QCD coupling constant formula_0, verification of the spin of the quark (s = 1/2) and of the gluon (s = 1), as well as the falsification of both abelian theories of strong interactions and theories based on scalar gluons. Additionally, the experiments provided insights into the structure of the nucleon, examining the distribution of gluons, quarks and antiquarks within it. Results from CDHS were in line with the quark-parton model, which treats quarks as point-like partons. In this context, it was also confirmed that the number of valence quarks in a nucleon is 3. Finally, the CDHS results made it possible to determine the momentum distribution of strange quarks and antiquarks within a nucleon. During its last years of operation, the CDHS collaboration engaged in the search for neutrino oscillations. Although this phenomenon could not be confirmed using CERN's high-energy neutrino beam, the attempt influenced the later experiments that eventually discovered neutrino oscillations. References. <templatestyles src="Reflist/styles.css" />
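The text notes that the Weinberg angle was inferred from the ratio of neutral-current to charged-current cross sections. The sketch below shows schematically how such an inversion can be done. The relation used is the leading-order Llewellyn Smith parametrization for an isoscalar target in the naive quark-parton limit; both that formula and the example ratio of 0.31 are assumptions introduced purely for illustration and are not taken from this article or from the published CDHS analysis chain.

```python
# Hedged sketch: inferring sin^2(theta_W) from a measured NC/CC ratio.
# The relation below is the leading-order Llewellyn Smith form in the naive
# quark-parton limit (no sea quarks); treat its coefficients and the example
# ratio as assumptions for illustration only.

def r_nc_cc(sin2_theta_w):
    """Predicted NC/CC cross-section ratio for neutrinos on an isoscalar target."""
    s2 = sin2_theta_w
    return 0.5 - s2 + (20.0 / 27.0) * s2**2

def infer_sin2_theta_w(measured_ratio, lo=0.0, hi=0.5, steps=60):
    """Bisection on the interval where r_nc_cc is monotonically decreasing."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if r_nc_cc(mid) > measured_ratio:
            lo = mid   # predicted ratio too high -> need a larger sin^2(theta_W)
        else:
            hi = mid
    return 0.5 * (lo + hi)

example_ratio = 0.31   # illustrative value only
print(infer_sin2_theta_w(example_ratio))   # ~0.23 for this example
```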
[ { "math_id": 0, "text": " \\alpha_\\text{s}" } ]
https://en.wikipedia.org/wiki?curid=8958586
896174
Incompressible surface
In mathematics, an incompressible surface is a surface properly embedded in a 3-manifold, which, in intuitive terms, is a "nontrivial" surface that cannot be simplified. In non-mathematical terms, the surface of a suitcase is compressible, because we could cut the handle and shrink it into the surface. But a Conway sphere (a sphere with four holes) is incompressible, because there are essential parts of a knot or link both inside and out, so there is no way to move the entire knot or link to one side of the punctured sphere. The mathematical definition is as follows. There are two cases to consider. A sphere is incompressible if both inside and outside the sphere there are some obstructions that prevent the sphere from shrinking to a point and also prevent the sphere from expanding to encompass all of space. A surface other than a sphere is incompressible if any disk with its boundary on the surface spans a disk in the surface. Incompressible surfaces are used for decomposition of Haken manifolds, in normal surface theory, and in the study of the fundamental groups of 3-manifolds. Formal definition. Let "S" be a compact surface properly embedded in a smooth or PL 3-manifold "M". A compressing disk "D" is a disk embedded in "M" such that formula_0 and the intersection is transverse. If the curve ∂"D" does not bound a disk inside of "S", then "D" is called a nontrivial compressing disk. If "S" has a nontrivial compressing disk, then we call "S" a compressible surface in "M". If "S"is neither the 2-sphere nor a compressible surface, then we call the surface (geometrically) incompressible. Note that 2-spheres are excluded since they have no nontrivial compressing disks by the Jordan-Schoenflies theorem, and 3-manifolds have abundant embedded 2-spheres. Sometimes one alters the definition so that an incompressible sphere is a 2-sphere embedded in a 3-manifold that does not bound an embedded 3-ball. Such spheres arise exactly when a 3-manifold is not irreducible. Since this notion of incompressibility for a sphere is quite different from the above definition for surfaces, often an incompressible sphere is instead referred to as an essential sphere or a reducing sphere. Compression. Given a compressible surface "S" with a compressing disk "D" that we may assume lies in the interior of "M" and intersects "S" transversely, one may perform embedded 1-surgery on "S" to get a surface that is obtained by compressing "S" along "D". There is a tubular neighborhood of "D" whose closure is an embedding of "D" × [-1,1] with "D" × 0 being identified with "D" and with formula_1 Then formula_2 is a new properly embedded surface obtained by compressing "S" along "D". A non-negative complexity measure on compact surfaces without 2-sphere components is "b"0("S") − "χ"("S"), where "b"0("S") is the zeroth Betti number (the number of connected components) and "χ"("S") is the Euler characteristic of "S". When compressing a compressible surface along a nontrivial compressing disk, the Euler characteristic increases by two, while "b"0 might remain the same or increase by 1. Thus, every properly embedded compact surface without 2-sphere components is related to an incompressible surface through a sequence of compressions. Sometimes we drop the condition that "S" be compressible. If "D" were to bound a disk inside "S" (which is always the case if "S" is incompressible, for example), then compressing "S" along "D" would result in a disjoint union of a sphere and a surface homeomorphic to "S". 
The resulting surface with the sphere deleted might or might not be isotopic to "S", and it will be if "S" is incompressible and "M" is irreducible. Algebraically incompressible surfaces. There is also an algebraic version of incompressibility. Suppose formula_3 is a proper embedding of a compact surface in a 3-manifold. Then "S" is "π"1-injective (or algebraically incompressible) if the induced map formula_4 on fundamental groups is injective. In general, every "π"1-injective surface is incompressible, but the reverse implication is not always true. For instance, the Lens space "L"(4,1) contains an incompressible Klein bottle that is not "π"1-injective. However, if "S" is two-sided, the loop theorem implies Kneser's lemma, that if "S" is incompressible, then it is "π"1-injective. Seifert surfaces. A Seifert surface "S" for an oriented link "L" is an oriented surface whose boundary is "L" with the same induced orientation. If "S" is not "π"1-injective in "S"3 − "N"("L"), where "N"("L") is a tubular neighborhood of "L", then the loop theorem gives a compressing disk that one may use to compress "S" along, providing another Seifert surface of reduced complexity. Hence, there are incompressible Seifert surfaces. Every Seifert surface of a link is related to one another through compressions in the sense that the equivalence relation generated by compression has one equivalence class. The inverse of a compression is sometimes called embedded arc surgery (an embedded 0-surgery). The genus of a link is the minimal genus of all Seifert surfaces of a link. A Seifert surface of minimal genus is incompressible. However, it is not in general the case that an incompressible Seifert surface is of minimal genus, so "π"1 alone cannot certify the genus of a link. David Gabai proved in particular that a genus-minimizing Seifert surface is a leaf of some taut, transversely oriented foliation of the knot complement, which can be certified with a taut sutured manifold hierarchy. Given an incompressible Seifert surface "S"' for a knot "K", then the fundamental group of "S"3 − "N"("K") splits as an HNN extension over "π"1("S"), which is a free group. The two maps from "π"1("S") into "π"1("S"3 − "N"("S")) given by pushing loops off the surface to the positive or negative side of "N"("S") are both injections. References. <templatestyles src="Reflist/styles.css" />
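As a worked illustration of the complexity measure b0("S") − "χ"("S") discussed in the Compression section, the LaTeX fragment below tracks one compression of a closed orientable genus-2 surface. The choice of surface is ours and purely illustrative, and whether a compressing disk with a given boundary curve actually exists depends on the embedding in the ambient 3-manifold.

```latex
% Requires amsmath. Illustrative example: one compression of a closed
% orientable genus-2 surface S properly embedded in a 3-manifold.
\begin{align*}
\chi(S) &= 2 - 2g = -2, & b_0(S) &= 1, & b_0(S) - \chi(S) &= 3.\\
\intertext{Compressing along a disk bounded by a \emph{non-separating} curve
(if such a compressing disk exists) reduces the genus by one, giving a torus:}
\chi(S') &= 0, & b_0(S') &= 1, & b_0(S') - \chi(S') &= 1.\\
\intertext{Compressing instead along a \emph{separating} curve yields two tori:}
\chi(S'') &= 0 + 0 = 0, & b_0(S'') &= 2, & b_0(S'') - \chi(S'') &= 2.
\end{align*}
% In both cases \chi increases by two and the complexity strictly decreases,
% so a sequence of nontrivial compressions must terminate.
```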
[ { "math_id": 0, "text": "D \\cap S = \\partial D" }, { "math_id": 1, "text": "(D\\times [-1,1])\\cap S=\\partial D\\times [-1,1]." }, { "math_id": 2, "text": "(S-\\partial D\\times(-1,1))\\cup (D\\times \\{-1,1\\})" }, { "math_id": 3, "text": "\\iota: S \\rightarrow M" }, { "math_id": 4, "text": "\\iota_\\star: \\pi_1(S) \\rightarrow \\pi_1(M)" } ]
https://en.wikipedia.org/wiki?curid=896174
896221
Lipps–Meyer law
The Lipps–Meyer law, named for Theodor Lipps (1851–1914) and Max Friedrich Meyer (1873–1967), hypothesizes that the closure of melodic intervals is determined by "whether or not the end tone of the interval can be represented by the number two or a power of two", in the frequency ratio between notes (see octave). "The 'Lipps–Meyer' Law predicts an 'effect of finality' for a melodic interval that ends on a tone which, in terms of an idealized frequency ratio, can be represented as a power of two." Thus the interval order matters: a perfect fifth, for instance (C,G), ordered ⟨C,G⟩, 2:3, gives an "effect of indicated continuation", while ⟨G,C⟩, 3:2, gives an "effect of finality". This is a measure of interval strength or stability and finality. Notice that it is similar to the more common measure of interval strength, which is determined by its approximation to a lower (stronger) or higher (weaker) position in the harmonic series. The reason for the effect of finality of such interval ratios may be seen as follows. If formula_0 is the interval ratio under consideration, where formula_1 is a positive integer and formula_2 is the higher harmonic number of the ratio, then the size of the interval in semitones can be determined from the base-2 logarithm, formula_3 (this gives 7.02 semitones for the ratio 3/2 and 4.98 for 4/3). The difference of the two terms on the right-hand side is the harmonic-series representation of the interval in question (using harmonic numbers); its bottom note, formula_4 semitones above the tonic, is a transposition of the tonic by "n" octaves. This suggests why descending interval ratios whose denominator is a power of two sound final, and a similar argument applies when the term in the numerator is a power of two. Sources. <templatestyles src="Reflist/styles.css" />
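The semitone values quoted above follow directly from the base-2 logarithm formula; a minimal sketch is given below. The helper-function names are ours, introduced only for illustration.

```python
import math

def interval_semitones(ratio):
    """Equal-tempered size of a frequency ratio: 12 * log2(ratio)."""
    return 12.0 * math.log2(ratio)

def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

def effect_of_finality(start_term, end_term):
    """Lipps-Meyer criterion as stated above: an interval sounds final when the
    number attached to its END tone is a power of two."""
    return is_power_of_two(end_term)

print(f"ratio 3/2 spans {interval_semitones(3 / 2):.2f} semitones")   # 7.02
print(f"ratio 4/3 spans {interval_semitones(4 / 3):.2f} semitones")   # 4.98
print("<C,G> as 2:3 ->", "finality" if effect_of_finality(2, 3) else "continuation")
print("<G,C> as 3:2 ->", "finality" if effect_of_finality(3, 2) else "continuation")
```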
[ { "math_id": 0, "text": "F = h_2/2^n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "h_2" }, { "math_id": 3, "text": "I=12\\log_2(h_2/2^n)=12\\log_2(h_2) - 12n" }, { "math_id": 4, "text": "12n" } ]
https://en.wikipedia.org/wiki?curid=896221
8963700
Ambisonic decoding
This page focusses on decoding of classic first-order Ambisonics. Other relevant information is available on the Ambisonic reproduction systems page. The Ambisonic B-format WXYZ signals define what the listener should hear. How these signals are presented to the listener by the speakers for best results, depends on the number of speakers and their location. Ambisonics treats directions where no speakers are placed with as much importance as speaker positions. It is undesirable for the listener to be conscious that the sound is coming from a discrete number of speakers. Some simple decoding equations are known to give good results for common speaker arrangements. But Ambisonic Speaker Decoders can use much more information about the position of speakers, including their exact position and distance from the listener. Because human beings use different mechanisms to locate sound, Classic Ambisonic Decoders it is desirable to modify the speaker feeds at each frequency to present the best information using Shelf Filters. Some views on the complexities of Shelf Filters and Distance Compensation are explained in "Ambisonic Surround Decoders" and "SHELF FILTERS for Ambisonic Decoders". There are specialised decoders for large audiences in large spaces. Hardware decoders have been commercially available since the late 1970s; currently, Ambisonics is standard in surround products offered by Meridian Audio, Ltd. Ad hoc software decoders are also available. There are five main types of decoder: Diametric decoders. This design is intended for a domestic, small room setting, and allows speakers to be arranged in diametrically opposed pairs. Regular Polygon decoders. This design is intended for a domestic, small room setting. The speakers are equidistant from the listener and lie equally spaced on the circumference of a circle. The simplest Regular Polygon decoder is a Square with the listener in the centre. At least four speakers are required. Triangles do not work, exhibiting large "holes" between the speakers. Regular Hexagons perform better than Squares especially to the sides. For the simplest (two dimensional) case (no height information), and spacing the loudspeakers equally in a circle, we derive the loudspeaker signals from the B-format W, X and Y channels: formula_0 where formula_1 is the direction of the speaker under consideration. The most useful of these is the Square 4.0 decoder. The coordinate system used in Ambisonics follows the right hand rule convention with positive X pointing forwards, positive Y pointing to the left and positive Z pointing upwards. Horizontal angles run anticlockwise from due front and vertical angles are positive above the horizontal, negative below. Auditorium decoders. This design is intended for a large, public space setting. "Vienna" decoders. These are so named because the paper introducing deriving Ambisonic Decoders for irregular loudspeaker layouts was presented at the 1992 AES conference held in Vienna. The design was covered by a 1998 patent. from Trifield Productions. The technology provides one approach to the decoding of Ambisonic signals to irregular loudspeaker arrays (such as ITU) commonly used for 5.1 surround sound replay. A slight flaw in the 1992 published papers decoder coefficients, and the use of heuristic search algorithms in order to solve the set of non-linear simultaneous equations needed to generate the decoders was published by Wiggins et al. in 2003, and later extended to higher order irregular decoders in 2004 Parametric decoders. 
The idea behind parametric decoding is to treat the sound's direction of incidence as a parameter that can be estimated through time–frequency analysis. A large body of research into human spatial hearing suggests that our auditory cortex applies similar techniques in its auditory scene analysis, which explains why these methods work. The major benefits of parametric decoding is a greatly increased angular resolution and the separation of analysis and synthesis into separate processing steps. This separation allows B-format recordings to be rendered using any panning technique, including delay panning, VBAP and HRTF-based synthesis. Parametric decoding was pioneered by Lake DSP in the late 1990s and independently suggested by Farina and Ugolotti in 1999. Later work in this domain includes the DirAC method and the Harpex method. Irregular layout decoders. The Rapture3D decoder from Blue Ripple Sound supports this and is already used in a number of computer games using OpenAL. References. <templatestyles src="Reflist/styles.css" />
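The regular-polygon decoding equation given above, P_n = W + X cos θ_n + Y sin θ_n, is easy to exercise numerically. In the sketch below the encoding step uses the common first-order B-format convention in which W carries a 1/√2 gain; that convention is an assumption on our part and is not stated in this article. The decode itself is exactly the article's equation for a square rig.

```python
import math

def encode_bformat(signal, azimuth_rad):
    """First-order horizontal B-format panning of a mono sample.
    The 1/sqrt(2) gain on W is the classic convention (assumption)."""
    w = signal / math.sqrt(2.0)
    x = signal * math.cos(azimuth_rad)
    y = signal * math.sin(azimuth_rad)
    return w, x, y

def decode_square(w, x, y):
    """Feeds for a square rig at 45, 135, 225, 315 degrees, using the
    regular-polygon decode P_n = W + X*cos(theta_n) + Y*sin(theta_n)."""
    speaker_azimuths = [math.radians(a) for a in (45.0, 135.0, 225.0, 315.0)]
    return [w + x * math.cos(t) + y * math.sin(t) for t in speaker_azimuths]

# A source panned to due front (0 degrees, along +X) drives the two front
# speakers equally, while the rear pair cancels with this simple decode.
w, x, y = encode_bformat(1.0, math.radians(0.0))
for angle, feed in zip((45, 135, 225, 315), decode_square(w, x, y)):
    print(f"speaker at {angle:3d} deg: {feed:+.3f}")
```

In practice a decoder would also apply overall gain normalization and the shelf filtering discussed above; those refinements are omitted here.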
[ { "math_id": 0, "text": "P_n = W + X \\cos\\theta_n + Y \\sin\\theta_n" }, { "math_id": 1, "text": "\\theta_n" } ]
https://en.wikipedia.org/wiki?curid=8963700
8964423
Transfer-matrix method
Mathematical technique In statistical mechanics, the transfer-matrix method is a mathematical technique which is used to write the partition function into a simpler form. It was introduced in 1941 by Hans Kramers and Gregory Wannier. In many one dimensional lattice models, the partition function is first written as an "n"-fold summation over each possible microstate, and also contains an additional summation of each component's contribution to the energy of the system within each microstate. Overview. Higher-dimensional models contain even more summations. For systems with more than a few particles, such expressions can quickly become too complex to work out directly, even by computer. Instead, the partition function can be rewritten in an equivalent way. The basic idea is to write the partition function in the form formula_0 where v0 and v"N"+1 are vectors of dimension "p" and the "p" × "p" matrices W"k" are the so-called transfer matrices. In some cases, particularly for systems with periodic boundary conditions, the partition function may be written more simply as formula_1 where "tr" denotes the matrix trace. In either case, the partition function may be solved exactly using eigenanalysis. If the matrices are all the same matrix W, the partition function may be approximated as the "N"th power of the largest eigenvalue of W, since the trace is the sum of the eigenvalues and the eigenvalues of the product of two diagonal matrices equals the product of their individual eigenvalues. The transfer-matrix method is used when the total system can be broken into a "sequence" of subsystems that interact only with adjacent subsystems. For example, a three-dimensional cubical lattice of spins in an Ising model can be decomposed into a sequence of two-dimensional planar lattices of spins that interact only adjacently. The dimension "p" of the "p" × "p" transfer matrix equals the number of states the subsystem may have; the transfer matrix itself W"k" encodes the statistical weight associated with a particular state of subsystem "k" − 1 being next to another state of subsystem "k". Importantly, transfer matrix methods allow to tackle probabilistic lattice models from an algebraic perspective, allowing for instance the use of results from representation theory. As an example of observables that can be calculated from this method, the probability of a particular state formula_2 occurring at position "x" is given by: formula_3 Where formula_4 is the projection matrix for state formula_2, having elements formula_5 Transfer-matrix methods have been critical for many exact solutions of problems in statistical mechanics, including the Zimm–Bragg and Lifson–Roig models of the helix-coil transition, transfer matrix models for protein-DNA binding, as well as the famous exact solution of the two-dimensional Ising model by Lars Onsager. References. <templatestyles src="Reflist/styles.css" />
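As a concrete instance of the method, the sketch below builds the 2 × 2 transfer matrix of the one-dimensional Ising model with periodic boundary conditions and checks that tr(W^N) reproduces the brute-force sum over microstates. The Ising model is used here only as the standard textbook example of the technique, and the coupling, field and chain length are arbitrary illustrative values.

```python
import itertools
import math

import numpy as np

# Illustrative parameters (arbitrary values, not from the article).
J, h, beta, N = 1.0, 0.3, 0.7, 8
spins = (+1, -1)

# 2x2 transfer matrix: W[s, s'] = exp(beta * (J*s*s' + h*(s + s')/2)).
W = np.array([[math.exp(beta * (J * s + h / 2.0) * sp + beta * h * s / 2.0)
               for sp in spins] for s in spins])
# Equivalent, more readable form of the same matrix element:
W = np.array([[math.exp(beta * (J * s * sp + h * (s + sp) / 2.0))
               for sp in spins] for s in spins])

# Partition function from the trace of W^N (periodic boundary conditions).
Z_transfer = np.trace(np.linalg.matrix_power(W, N))

# Equivalently, the sum of the N-th powers of the eigenvalues; for large N the
# largest eigenvalue dominates.
eigvals = np.linalg.eigvalsh(W)     # W is symmetric with this splitting of h
Z_eigen = np.sum(eigvals**N)

# Brute-force check: explicit sum over all 2^N microstates.
Z_brute = 0.0
for config in itertools.product(spins, repeat=N):
    energy = -J * sum(config[i] * config[(i + 1) % N] for i in range(N))
    energy -= h * sum(config)
    Z_brute += math.exp(-beta * energy)

print(Z_transfer, Z_eigen, Z_brute)   # all three values agree
```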
[ { "math_id": 0, "text": "\n\\mathcal{Z} = \\mathbf{v}_0 \\cdot \\left\\{ \\prod_{k=1}^N \\mathbf{W}_k \\right\\} \\cdot \\mathbf{v}_{N+1}\n" }, { "math_id": 1, "text": "\n\\mathcal{Z} = \\operatorname{tr} \\left\\{ \\prod_{k=1}^N \\mathbf{W}_k \\right\\}\n" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "\n\\mathrm{Pr}_m(x) = \\frac{\\operatorname{tr} \\left[ \\prod_{k=1}^x \\mathbf{W}_k \\mathbf{Pj} \\prod_{k'=x+1}^N \\mathbf{W}_{k'} \\right]} { \\operatorname{tr} \\left[ \\prod_{k=1}^N \\mathbf{W}_k \\right] }\n" }, { "math_id": 4, "text": "Pj" }, { "math_id": 5, "text": "Pj_{\\mu\\nu} = \\delta_{\\mu\\nu}\\delta_{\\mu m}" } ]
https://en.wikipedia.org/wiki?curid=8964423
8964665
Category utility
Measure of "category goodness" Category utility is a measure of "category goodness" defined in and . It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" (; ) and "collocation index" . It provides a normative information-theoretic measure of the "predictive advantage" gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does "not" possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, is provided in . Probability-theoretic definition of category utility. The probability-theoretic definition of category utility given in and is as follows: formula_0 where formula_1 is a size-formula_2 set of formula_3-ary features, and formula_4 is a set of formula_5 categories. The term formula_6 designates the marginal probability that feature formula_7 takes on value formula_8, and the term formula_9 designates the category-conditional probability that feature formula_7 takes on value formula_8 "given" that the object in question belongs to category formula_10. The motivation and development of this expression for category utility, and the role of the multiplicand formula_11 as a crude overfitting control, is given in the above sources. Loosely , the term formula_12 is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while formula_13 is the expected number of attribute values that can be correctly guessed by an observer the same strategy but without any knowledge of the category labels. Their difference therefore reflects the relative advantage accruing to the observer by having knowledge of the category structure. Information-theoretic definition of category utility. The information-theoretic definition of category utility for a set of entities with size-formula_2 binary feature set formula_1, and a binary category formula_14 is given in as follows: formula_15 where formula_16 is the prior probability of an entity belonging to the positive category formula_17 (in the absence of any feature information), formula_18 is the conditional probability of an entity having feature formula_7 given that the entity belongs to category formula_17, formula_19 is likewise the conditional probability of an entity having feature formula_7 given that the entity belongs to category formula_20, and formula_21 is the prior probability of an entity possessing feature formula_7 (in the absence of any category information). The intuition behind the above expression is as follows: The term formula_22 represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category formula_17. 
Similarly, the term formula_23 represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category formula_20. The sum of these two terms in the brackets is therefore the weighted average of these two costs. The final term, formula_24, represents the cost (in bits) of optimally encoding (or transmitting) feature information when no category information is available. The value of the category utility will, in the above formulation, be non-negative. Category utility and mutual information. and mention that the category utility is equivalent to the mutual information. Here is a simple demonstration of the nature of this equivalence. Assume a set of entities each having the same formula_25 features, i.e., feature set formula_1, with each feature variable having cardinality formula_26. That is, each feature has the capacity to adopt any of formula_26 distinct values (which need "not" be ordered; all variables can be nominal); for the special case formula_27 these features would be considered "binary", but more generally, for any formula_26, the features are simply "m-ary". For the purposes of this demonstration, without loss of generality, feature set formula_28 can be replaced with a single aggregate variable formula_29 that has cardinality formula_30, and adopts a unique value formula_31 corresponding to each feature combination in the Cartesian product formula_32. (Ordinality does "not" matter, because the mutual information is not sensitive to ordinality.) In what follows, a term such as formula_33 or simply formula_34 refers to the probability with which formula_29 adopts the particular value formula_35. (Using the aggregate feature variable formula_29 replaces multiple summations, and simplifies the presentation to follow.) For this demonstration, also assume a single category variable formula_36, which has cardinality formula_37. This is equivalent to a classification system in which there are formula_37 non-intersecting categories. In the special case of formula_38 there are the two-category case discussed above. From the definition of mutual information for discrete variables, the mutual information formula_39 between the aggregate feature variable formula_29 and the category variable formula_36 is given by: formula_40 where formula_34 is the prior probability of feature variable formula_29 adopting value formula_35, formula_41 is the marginal probability of category variable formula_36 adopting value formula_42, and formula_43 is the joint probability of variables formula_29 and formula_36 simultaneously adopting those respective values. In terms of the conditional probabilities this can be re-written (or defined) as formula_44 If the original definition of the category utility from above is rewritten with formula_14, formula_45 This equation clearly has the same form as the (blue) equation expressing the mutual information between the feature set and the category variable; the difference is that the sum formula_46 in the category utility equation runs over independent binary variables formula_1, whereas the sum formula_47 in the mutual information runs over "values" of the single formula_30-ary variable formula_29. The two measures are actually equivalent then "only" when the features formula_48, are "independent" (and assuming that terms in the sum corresponding to formula_49 are also added). Insensitivity of category utility to ordinality. 
Like the mutual information, the category utility is not sensitive to any "ordering" in the feature or category variable values. That is, as far as the category utility is concerned, the category set codice_0 is not qualitatively different from the category set codice_1 since the formulation of the category utility does not account for any ordering of the class variable. Similarly, a feature variable adopting values codice_2 is not qualitatively different from a feature variable adopting values codice_3. As far as the category utility or "mutual information" are concerned, "all" category and feature variables are "nominal variables." For this reason, category utility does not reflect any "gestalt" aspects of "category goodness" that might be based on such ordering effects. One possible adjustment for this insensitivity to ordinality is given by the weighting scheme described in the article for mutual information. Category "goodness": models and philosophy. This section provides some background on the origins of, and need for, formal measures of "category goodness" such as the category utility, and some of the history that lead to the development of this particular metric. What makes a good category? At least since the time of Aristotle there has been a tremendous fascination in philosophy with the nature of concepts and universals. What kind of "entity" is a concept such as "horse"? Such abstractions do not designate any particular individual in the world, and yet we can scarcely imagine being able to comprehend the world without their use. Does the concept "horse" therefore have an independent existence outside of the mind? If it does, then what is the locus of this independent existence? The question of locus was an important issue on which the classical schools of Plato and Aristotle famously differed. However, they remained in agreement that universals "did" indeed have a mind-independent existence. There was, therefore, always a "fact to the matter" about which concepts and universals exist in the world. In the late Middle Ages (perhaps beginning with Occam, although Porphyry also makes a much earlier remark indicating a certain discomfort with the status quo), however, the certainty that existed on this issue began to erode, and it became acceptable among the so-called nominalists and empiricists to consider concepts and universals as strictly mental entities or conventions of language. On this view of concepts—that they are purely representational constructs—a new question then comes to the fore: "Why do we possess one set of concepts rather than another?" What makes one set of concepts "good" and another set of concepts "bad"? This is a question that modern philosophers, and subsequently machine learning theorists and cognitive scientists, have struggled with for many decades. What purpose do concepts serve? One approach to answering such questions is to investigate the "role" or "purpose" of concepts in cognition. Thus the answer to "What are concepts good for in the first place?" by and many others is that classification (conception) is a precursor to "induction": By imposing a particular categorization on the universe, an organism gains the ability to deal with physically non-identical objects or situations in an identical fashion, thereby gaining substantial predictive leverage (; ). As J.S. Mill puts it , <templatestyles src="Template:Blockquote/styles.css" />The general problem of classification... 
[is] to provide that things shall be thought of in such groups, and those groups in such an order, as will best conduce to the remembrance and to the ascertainment of their laws... [and] one of the uses of such a classification that by drawing attention to the properties on which it is founded, and which, if the classification be good, are marks of many others, it facilitates the discovery of those others. From this base, Mill reaches the following conclusion, which foreshadows much subsequent thinking about category goodness, including the notion of category utility: <templatestyles src="Template:Blockquote/styles.css" />The ends of scientific classification are best answered when the objects are formed into groups respecting which a greater number of general propositions can be made, and those propositions more important, than could be made respecting any other groups into which the same things could be distributed. The properties, therefore, according to which objects are classified should, if possible, be those which are causes of many other properties; or, at any rate, which are sure marks of them. One may compare this to the "category utility hypothesis" proposed by : "A category is useful to the extent that it can be expected to improve the ability of a person to accurately predict the features of instances of that category." Mill here seems to be suggesting that the best category structure is one in which object features (properties) are maximally informative about the object's class, and, simultaneously, the object class is maximally informative about the object's features. In other words, a useful classification scheme is one in which category knowledge can be used to accurately infer object properties, and property knowledge can be used to accurately infer object classes. One may also compare this idea to Aristotle's criterion of "counter-predication" for definitional predicates, as well as to the notion of concepts described in formal concept analysis. Attempts at formalization. A variety of different measures have been suggested with an aim of formally capturing this notion of "category goodness," the best known of which is probably the "cue validity". Cue validity of a feature formula_7 with respect to category formula_10 is defined as the conditional probability of the category given the feature (;;), formula_50, or as the deviation of the conditional probability from the category base rate (;), formula_51. Clearly, these measures quantify only inference from feature to category (i.e., "cue validity"), but not from category to feature, i.e., the "category validity" formula_52. Also, while the cue validity was originally intended to account for the demonstrable appearance of "basic categories" in human cognition—categories of a particular level of generality that are evidently preferred by human learners—a number of major flaws in the cue validity quickly emerged in this regard (;;, and others). One attempt to address both problems by simultaneously maximizing both feature validity and category validity was made by in defining the "collocation index" as the product formula_53, but this construction was fairly ad hoc (see ). The category utility was introduced as a more sophisticated refinement of the cue validity, which attempts to more rigorously quantify the full inferential power of a class structure. As shown above, on a certain view the category utility is equivalent to the mutual information between the feature variable and the category variable. 
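To make the equivalence just mentioned concrete, the sketch below evaluates the information-theoretic category utility for a single binary feature (including both the "has feature" and "lacks feature" terms, as the earlier section requires) and compares it with the mutual information between the feature and the category. The joint distribution is a toy example invented for illustration.

```python
import math

# Toy joint distribution p(category, feature_value); values are arbitrary.
joint = {
    ("c", 1): 0.30, ("c", 0): 0.10,
    ("not_c", 1): 0.15, ("not_c", 0): 0.45,
}

def marginal(var_index, value):
    """Marginal probability of one variable taking a given value."""
    return sum(p for key, p in joint.items() if key[var_index] == value)

def category_utility():
    """Information-theoretic CU in bits, with both feature-value terms included."""
    cu = 0.0
    for c in ("c", "not_c"):
        pc = marginal(0, c)
        for f in (1, 0):
            p_f_given_c = joint[(c, f)] / pc
            cu += pc * p_f_given_c * math.log2(p_f_given_c)
    for f in (1, 0):
        pf = marginal(1, f)
        cu -= pf * math.log2(pf)
    return cu

def mutual_information():
    """I(F;C) in bits for the same joint distribution."""
    return sum(p * math.log2(p / (marginal(0, c) * marginal(1, f)))
               for (c, f), p in joint.items())

print(category_utility(), mutual_information())   # the two values coincide
```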
It has been suggested that the categories with the greatest overall category utility are not only those that are "best" in a normative sense, but also those that human learners prefer to use, e.g., "basic" categories. Other related measures of category goodness are "cohesion" and "salience". References. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\nCU(C,F) = \\tfrac{1}{p} \\sum_{c_j \\in C} p(c_j) \\left [\\sum_{f_i \\in F} \\sum_{k=1}^m p(f_{ik}|c_j)^2 - \\sum_{f_i \\in F} \\sum_{k=1}^m p(f_{ik})^2\\right ]\n" }, { "math_id": 1, "text": "F = \\{f_i\\}, \\ i=1 \\ldots n" }, { "math_id": 2, "text": "n\\ " }, { "math_id": 3, "text": "m\\ " }, { "math_id": 4, "text": "C = \\{c_j\\} \\ j=1 \\ldots p" }, { "math_id": 5, "text": "p\\ " }, { "math_id": 6, "text": "p(f_{ik})\\ " }, { "math_id": 7, "text": "f_i\\ " }, { "math_id": 8, "text": "k\\ " }, { "math_id": 9, "text": "p(f_{ik}|c_j)\\ " }, { "math_id": 10, "text": "c_j\\ " }, { "math_id": 11, "text": "\\textstyle \\tfrac{1}{p}" }, { "math_id": 12, "text": "\\textstyle p(c_j) \\sum_{f_i \\in F} \\sum_{k=1}^m p(f_{ik}|c_j)^2" }, { "math_id": 13, "text": "\\textstyle p(c_j) \\sum_{f_i \\in F} \\sum_{k=1}^m p(f_{ik})^2" }, { "math_id": 14, "text": "C = \\{c,\\bar{c}\\}" }, { "math_id": 15, "text": "\nCU(C,F) = \\left [p(c) \\sum_{i=1}^n p(f_i|c)\\log p(f_i|c) + p(\\bar{c}) \\sum_{i=1}^n p(f_i|\\bar{c})\\log p(f_i|\\bar{c}) \\right ] - \\sum_{i=1}^n p(f_i)\\log p(f_i)\n" }, { "math_id": 16, "text": "p(c)\\ " }, { "math_id": 17, "text": "c\\ " }, { "math_id": 18, "text": "p(f_i|c)\\ " }, { "math_id": 19, "text": "p(f_i|\\bar{c})" }, { "math_id": 20, "text": "\\bar{c}" }, { "math_id": 21, "text": "p(f_i)\\ " }, { "math_id": 22, "text": "p(c)\\textstyle \\sum_{i=1}^n p(f_i|c)\\log p(f_i|c)" }, { "math_id": 23, "text": "p(\\bar{c})\\textstyle \\sum_{i=1}^n p(f_i|\\bar{c})\\log p(f_i|\\bar{c})" }, { "math_id": 24, "text": "\\textstyle \\sum_{i=1}^n p(f_i)\\log p(f_i)" }, { "math_id": 25, "text": "n" }, { "math_id": 26, "text": "m" }, { "math_id": 27, "text": "m=2" }, { "math_id": 28, "text": "F" }, { "math_id": 29, "text": "F_a" }, { "math_id": 30, "text": "m^n" }, { "math_id": 31, "text": "v_i, \\ i=1 \\ldots m^n" }, { "math_id": 32, "text": "\\otimes F" }, { "math_id": 33, "text": "p(F_a=v_i)" }, { "math_id": 34, "text": "p(v_i)" }, { "math_id": 35, "text": "v_i" }, { "math_id": 36, "text": "C" }, { "math_id": 37, "text": "p" }, { "math_id": 38, "text": "p=2" }, { "math_id": 39, "text": "I(F_a;C)" }, { "math_id": 40, "text": " \nI(F_a;C) = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i,c_j) \\log \\frac{p(v_i,c_j)}{p(v_i)\\,p(c_j)}\n" }, { "math_id": 41, "text": "p(c_j)" }, { "math_id": 42, "text": "c_j" }, { "math_id": 43, "text": "p(v_i,c_j)" }, { "math_id": 44, "text": " \n\\begin{align}\nI(F_a;C) & = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i,c_j) \\log \\frac{p(v_i|c_j)}{p(v_i)} \\\\\n & = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i|c_j)p(c_j) \\left [\\log p(v_i|c_j)- \\log p(v_i) \\right ] \\\\\n & = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i|c_j)p(c_j) \\log p(v_i|c_j)- \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i|c_j)p(c_j) \\log p(v_i) \\\\\n & = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i|c_j)p(c_j) \\log p(v_i|c_j)- \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i,c_j) \\log p(v_i) \\\\\n & = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i|c_j)p(c_j) \\log p(v_i|c_j)- \\sum_{v_i \\in F_a} \\log p(v_i) \\sum_{c_j \\in C} p(v_i,c_j) \\\\\n & = \\sum_{v_i \\in F_a} \\sum_{c_j \\in C} p(v_i|c_j)p(c_j) \\log p(v_i|c_j)- \\sum_{v_i \\in F_a} p(v_i) \\log p(v_i) \\\\\n\\end{align}\n" }, { "math_id": 45, "text": "\nCU(C,F) = \\sum_{f_i \\in F} \\sum_{c_j \\in C} p(f_i|c_j) p(c_j) \\log p(f_i|c_j) - \\sum_{f_i \\in F} p(f_i) \\log p(f_i)\n" }, { "math_id": 46, "text": "\\textstyle \\sum_{f_i \\in F}" }, { "math_id": 47, "text": "\\textstyle \\sum_{v_i \\in 
F_a}" }, { "math_id": 48, "text": "\\{f_i\\}" }, { "math_id": 49, "text": "p(\\bar{f_i})" }, { "math_id": 50, "text": "p(c_j|f_i)\\ " }, { "math_id": 51, "text": "p(c_j|f_i)-p(c_j)\\ " }, { "math_id": 52, "text": "p(f_i|c_j)\\ " }, { "math_id": 53, "text": "p(c_j|f_i) p(f_i|c_j)\\ " } ]
https://en.wikipedia.org/wiki?curid=8964665
896574
Throughput accounting
Principle of management accounting Throughput accounting (TA) is a principle-based and simplified management accounting approach that provides managers with decision support information for enterprise profitability improvement. TA is relatively new in management accounting. It is an approach that identifies the factors that limit an organization from reaching its goal, and then focuses on simple measures that drive behavior in key areas towards reaching organizational goals. TA was proposed by Eliyahu M. Goldratt as an alternative to traditional cost accounting. As such, Throughput Accounting is neither cost accounting nor costing, because it is cash focused and does not allocate all costs (variable and fixed expenses, including overheads) to products and services sold or provided by an enterprise. Considering the laws of variation, only costs that vary totally with units of output (see the definition of T below for TVC), e.g. raw materials, are allocated to products and services; these costs are deducted from sales to determine Throughput. Throughput Accounting is a management accounting technique used as the performance measure in the Theory of Constraints (TOC). It is the business intelligence used for maximizing profits. However, unlike cost accounting, which primarily focuses on 'cutting costs' and reducing expenses to make a profit, Throughput Accounting primarily focuses on generating more throughput. Conceptually, Throughput Accounting seeks to increase the speed or rate at which throughput (see the definition of T below) is generated by products and services with respect to an organization's constraint, whether the constraint is internal or external to the organization. Throughput Accounting is the only management accounting methodology that considers constraints as factors limiting the performance of organizations. Management accounting is an organization's internal set of techniques and methods used to maximize shareholder wealth. Throughput Accounting is thus part of the management accountants' toolkit, ensuring efficiency where it matters as well as the overall effectiveness of the organization. It is an internal reporting tool. Parties external to a business depend on accounting reports prepared by financial (public) accountants, who apply Generally Accepted Accounting Principles (GAAP) issued by the Financial Accounting Standards Board (FASB) and enforced by the U.S. Securities and Exchange Commission (SEC) and other local and international regulatory agencies and bodies, or standards such as the International Financial Reporting Standards (IFRS). Throughput Accounting improves profit performance with better management decisions by using measurements that more closely reflect the effect of decisions on three critical monetary variables (throughput, investment (also known as inventory), and operating expense, all defined below). History. When cost accounting was developed in the 1890s, labor was the largest fraction of product cost and could be considered a variable cost. Workers often did not know how many hours they would work in a week when they reported on Monday morning, because time-keeping systems were rudimentary. Cost accountants therefore concentrated on how efficiently managers used labor, since it was their most important variable resource. Now, however, workers who come to work on Monday morning almost always work 40 hours or more; their cost is fixed rather than variable. 
However, today, many managers are still evaluated on their labor efficiencies, and many "downsizing," "rightsizing," and other labor reduction campaigns are based on them. Goldratt argues that, under current conditions, labor efficiencies lead to decisions that harm rather than help organizations. Throughput Accounting, therefore, removes standard cost accounting's reliance on efficiencies in general, and labor efficiency in particular, from management practice. Many cost and financial accountants agree with Goldratt's critique, but they have not agreed on a replacement of their own, and there is enormous inertia in the installed base of people trained to work with existing practices. Constraints accounting, which is a development in the Throughput Accounting field, emphasizes the role of the constraint (referred to as the Archimedean constraint) in decision making. The concepts of Throughput Accounting. Goldratt's alternative begins with the idea that each organization has a goal and that better decisions increase its value. The goal for a profit-maximizing firm is stated as increasing net profit now and in the future. Profit maximization, seen from a Throughput Accounting viewpoint, is about maximizing a system's profit mix without Cost Accounting's traditional allocation of total costs. Throughput Accounting actions include obtaining the maximum net profit in the minimum time period, given limited resource capacities and capabilities. These resources include machines, capital (own or borrowed), people, processes, technology, time, materials, markets, etc. Throughput Accounting applies to not-for-profit organizations too, which develop goals that make sense in their individual cases; these goals are commonly measured in goal units. Throughput Accounting also pays particular attention to the concept of 'bottleneck' (referred to as "constraint" in the Theory of Constraints) in the manufacturing or servicing processes. Throughput Accounting uses three measures of income and expense: throughput (T), the rate at which the system produces goal units through sales, normally sales revenue less totally variable costs; investment (I), the money tied up in the system in things it intends to sell; and operating expense (OE), the money spent turning investment into throughput. Organizations that wish to increase their attainment of "The Goal" should therefore require managers to test proposed decisions against three questions. Will the proposed change: increase throughput, reduce investment, or reduce operating expense? The answers to these questions determine the effect of proposed changes on system-wide measurements such as net profit (T minus OE), return on investment ((T minus OE) divided by I) and productivity (T divided by OE). These relationships between financial ratios, as illustrated by Goldratt, are very similar to a set of relationships defined by DuPont and General Motors financial executive Donaldson Brown about 1920. Brown did not advocate changes in management accounting methods, but instead used the ratios to evaluate traditional financial accounting data. formula_0 formula_1 Explanation. For example: The railway coach company was offered a contract to make 15 open-topped streetcars each month, using a design that included ornate brass foundry work, but very little of the metalwork needed to produce a covered rail coach. The buyer offered to pay $280 per streetcar. The company had a firm order for 40 rail coaches each month for $350 per unit. The cost accountant determined that the cost of operating the foundry vs. the metalwork shop each month was as follows: The company was at full capacity making 40 rail coaches each month. And since the foundry was expensive to operate, and purchasing brass as a raw material for the streetcars was expensive, the accountant determined that the company would lose money on any streetcars it built. 
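The two formulas above can be illustrated with a minimal sketch before turning to the analyses described next. All cost and hour figures in this snippet are hypothetical (the article gives only the selling prices and the resulting throughput per constraint hour); it simply shows how Throughput Accounting ranks products by throughput per hour of the constraint, here the metal shop:

```python
# Hypothetical figures for illustration only.
products = {
    #             selling price, totally variable cost, metal-shop hours per unit
    "rail coach": {"price": 350, "tvc": 100, "constraint_hours": 40.0},
    "streetcar":  {"price": 280, "tvc": 120, "constraint_hours": 8.0},
}

def throughput_per_unit(p):
    # T = Sales revenue - Totally Variable Costs
    return p["price"] - p["tvc"]

def throughput_per_constraint_hour(p):
    # The decision metric: throughput generated per hour of the constraint.
    return throughput_per_unit(p) / p["constraint_hours"]

for name, p in sorted(products.items(),
                      key=lambda item: throughput_per_constraint_hour(item[1]),
                      reverse=True):
    print(name, throughput_per_unit(p),
          round(throughput_per_constraint_hour(p), 2))
```

A product that looks unprofitable once fixed overhead is allocated to it can still raise total profit if it generates throughput on capacity the constraint would otherwise leave unused, which is the point of the analysis that follows.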
He showed an analysis of the estimated product costs based on standard cost accounting and recommended that the company decline to build any streetcars. However, the company's operations manager knew that recent investment in automated foundry equipment had created idle time for workers in that department. The constraint on production of the railcoaches was the metalwork shop. She made an analysis of profit and loss if the company took the contract using throughput accounting to determine the profitability of products by calculating "throughput" (revenue less variable cost) in the metal shop. After the presentations from the company accountant and the operations manager, the president understood that the metal shop capacity was limiting the company's profitability. The company could make only 40 rail coaches per month. But by taking the contract for the streetcars, the company could make nearly all the railway coaches ordered, and also meet all the demand for streetcars. The result would increase throughput in the metal shop from $6.25 to $10.38 per hour of available time, and increase profitability by 66 percent. Relevance. One of the most important aspects of Throughput Accounting is the relevance of the information it produces. Throughput Accounting reports what currently happens in business functions such as operations, distribution and marketing. It does not rely solely on GAAP's financial accounting reports (that still need to be verified by external auditors) and is thus relevant to current decisions made by management that affect the business now and in the future. Throughput Accounting is used in Critical Chain Project Management (CCPM), Drum Buffer Rope (DBR)—in businesses that are internally constrained, in Simplified Drum Buffer Rope (S-DBR) —in businesses that are externally constrained (particularly where the lack of customer orders denotes a market constraint), as well as in strategy, planning and tactics, etc. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\text{Throughput} = \\text{Sales revenue – Total Variable Costs}" }, { "math_id": 1, "text": "\\text{Throughput accounting Ratio} = \\text{Return per factory hour} / \\text{Cost per factory hour}" } ]
https://en.wikipedia.org/wiki?curid=896574
8966592
Cosine similarity
Similarity measure for number sequences In data analysis, cosine similarity is a measure of similarity between two non-zero vectors defined in an inner product space. Cosine similarity is the cosine of the angle between the vectors; that is, it is the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval formula_0 For example, two proportional vectors have a cosine similarity of 1, two orthogonal vectors have a similarity of 0, and two opposite vectors have a similarity of -1. In some contexts, the component values of the vectors cannot be negative, in which case the cosine similarity is bounded in formula_1. For example, in information retrieval and text mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents. The technique is also used to measure cohesion within clusters in the field of data mining. One advantage of cosine similarity is its low complexity, especially for sparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity include Orchini similarity and Tucker coefficient of congruence; the Otsuka–Ochiai similarity (see below) is cosine similarity applied to binary data. Definition. The cosine of two non-zero vectors can be derived by using the Euclidean dot product formula: formula_2 Given two "n"-dimensional vectors of attributes, A and B, the cosine similarity, cos(θ), is represented using a dot product and magnitude as formula_3 where formula_4 and formula_5 are the formula_6th components of vectors formula_7 and formula_8, respectively. The resulting similarity ranges from -1 meaning exactly opposite, to 1 meaning exactly the same, with 0 indicating orthogonality or decorrelation, while in-between values indicate intermediate similarity or dissimilarity. For text matching, the attribute vectors "A" and "B" are usually the term frequency vectors of the documents. Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from formula_9, since the term frequencies cannot be negative. This remains true when using TF-IDF weights. The angle between two term frequency vectors cannot be greater than 90°. If the attribute vectors are normalized by subtracting the vector means (e.g., formula_10), the measure is called the centered cosine similarity and is equivalent to the Pearson correlation coefficient. For an example of centering, formula_11 formula_12 Cosine distance. When the distance between two unit-length vectors is defined to be the length of their vector difference then formula_13 Nonetheless the cosine distance is often defined without the square root or factor of 2: formula_14 It is important to note that, by virtue of being proportional to squared Euclidean distance, the cosine distance is not a true distance metric; it does not exhibit the triangle inequality property — or, more formally, the Schwarz inequality — and it violates the coincidence axiom. 
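As a minimal sketch of the definition above (not tied to any particular library), cosine similarity and the cosine distance can be computed directly from the dot product and the vector norms:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the Euclidean norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)   # undefined if either vector is all zeros

def cosine_distance(a, b):
    return 1.0 - cosine_similarity(a, b)

# Term-frequency vectors of two toy documents over the same vocabulary.
doc1 = [3, 0, 1, 2]
doc2 = [1, 1, 0, 1]
print(round(cosine_similarity(doc1, doc2), 3))                      # 0.772
print(round(cosine_similarity(doc1, [10 * x for x in doc1]), 6))    # 1.0: invariant to magnitude
```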
To repair the triangle inequality property while maintaining the same ordering, one can convert to Euclidean distance formula_15 or angular distance "θ" = arccos("S""C"("A", "B")). Alternatively, the triangular inequality that does work for angular distances can be expressed directly in terms of the cosines; see below. Angular distance and similarity. The normalized angle, referred to as angular distance, between any two vectors formula_16 and formula_17 is a formal distance metric and can be calculated from the cosine similarity. The complement of the angular distance metric can then be used to define angular similarity function bounded between 0 and 1, inclusive. When the vector elements may be positive or negative: formula_18 formula_19 Or, if the vector elements are always positive: formula_20 formula_21 Unfortunately, computing the inverse cosine (arccos) function is slow, making the use of the angular distance more computationally expensive than using the more common (but not metric) cosine distance above. L2-normalized Euclidean distance. Another effective proxy for cosine distance can be obtained by formula_22 normalisation of the vectors, followed by the application of normal Euclidean distance. Using this technique each term in each vector is first divided by the magnitude of the vector, yielding a vector of unit length. Then the Euclidean distance over the end-points of any two vectors is a proper metric which gives the same ordering as the cosine distance (a monotonic transformation of Euclidean distance; see below) for any comparison of vectors, and furthermore avoids the potentially expensive trigonometric operations required to yield a proper metric. Once the normalisation has occurred, the vector space can be used with the full range of techniques available to any Euclidean space, notably standard dimensionality reduction techniques. This normalised form distance is often used within many deep learning algorithms. Otsuka–Ochiai coefficient. In biology, there is a similar concept known as the Otsuka–Ochiai coefficient named after Yanosuke Otsuka (also spelled as Ōtsuka, Ootsuka or Otuka, ) and Akira Ochiai (), also known as the Ochiai–Barkman or Ochiai coefficient, which can be represented as: formula_23 Here, formula_16 and formula_17 are sets, and formula_24 is the number of elements in formula_16. If sets are represented as bit vectors, the Otsuka–Ochiai coefficient can be seen to be the same as the cosine similarity. It is identical to the score introduced by Godfrey Thomson. In a recent book, the coefficient is tentatively misattributed to another Japanese researcher with the family name Otsuka. The confusion arises because in 1957 Akira Ochiai attributes the coefficient only to Otsuka (no first name mentioned) by citing an article by Ikuso Hamai (), who in turn cites the original 1936 article by Yanosuke Otsuka. Properties. The most noteworthy property of cosine similarity is that it reflects a relative, rather than absolute, comparison of the individual vector dimensions. For any positive constant formula_25 and vector formula_26, the vectors formula_26 and formula_27 are maximally similar. The measure is thus most appropriate for data where frequency is more important than absolute values; notably, term frequency in documents. However more recent metrics with a grounding in information theory, such as Jensen–Shannon, SED, and triangular divergence have been shown to have improved semantics in at least some contexts. 
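The identity mentioned above is easy to check numerically: the Otsuka–Ochiai coefficient of two sets equals the cosine similarity of their 0/1 indicator vectors. A small sketch with made-up sets:

```python
import math

def ochiai(a, b):
    # |A intersect B| / sqrt(|A| * |B|)
    return len(a & b) / math.sqrt(len(a) * len(b))

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v)))

A = {"cat", "dog", "fish"}
B = {"dog", "fish", "bird", "ant"}
vocabulary = sorted(A | B)
a_vec = [1 if w in A else 0 for w in vocabulary]
b_vec = [1 if w in B else 0 for w in vocabulary]

print(ochiai(A, B))          # 2 / sqrt(12), about 0.577
print(cosine(a_vec, b_vec))  # same value
```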
Cosine similarity is related to Euclidean distance as follows. Denote Euclidean distance by the usual formula_28, and observe that formula_29 (polarization identity) by expansion. When A and B are normalized to unit length, formula_30 so this expression is equal to formula_31 In short, the cosine distance can be expressed in terms of Euclidean distance as formula_32. The Euclidean distance is called the "chord distance" (because it is the length of the chord on the unit circle) and it is the Euclidean distance between the vectors which were normalized to unit sum of squared values within them. Null distribution: For data which can be negative as well as positive, the null distribution for cosine similarity is the distribution of the dot product of two independent random unit vectors. This distribution has a mean of zero and a variance of formula_33 (where formula_34 is the number of dimensions), and although the distribution is bounded between -1 and +1, as formula_34 grows large the distribution is increasingly well-approximated by the normal distribution. Other types of data such as bitstreams, which only take the values 0 or 1, the null distribution takes a different form and may have a nonzero mean. Triangle inequality for cosine similarity. The ordinary triangle inequality for angles (i.e., arc lengths on a unit hypersphere) gives us that formula_35 Because the cosine function decreases as an angle in radians increases, the sense of these inequalities is reversed when we take the cosine of each value: formula_36 Using the cosine addition and subtraction formulas, these two inequalities can be written in terms of the original cosines, formula_37 formula_38 This form of the triangle inequality can be used to bound the minimum and maximum similarity of two objects A and B if the similarities to a reference object C is already known. This is used for example in metric data indexing, but has also been used to accelerate spherical k-means clustering the same way the Euclidean triangle inequality has been used to accelerate regular k-means. Soft cosine measure. A soft cosine or ("soft" similarity) between two vectors considers similarities between pairs of features. The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which help generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity. For example, in the field of natural language processing (NLP) the similarity among features is quite intuitive. Features such as words, "n"-grams, or syntactic "n"-grams can be quite similar, though formally they are considered as different features in the VSM. For example, words “play” and “game” are different words and thus mapped to different points in VSM; yet they are semantically related. In case of "n"-grams or syntactic "n"-grams, Levenshtein distance can be applied (in fact, Levenshtein distance can be applied to words as well). For calculating soft cosine, the matrix s is used to indicate similarity between features. It can be calculated through Levenshtein distance, WordNet similarity, or other similarity measures. Then we just multiply by this matrix. Given two "N"-dimension vectors formula_25 and formula_39, the soft cosine similarity is calculated as follows: formula_40 where "sij" similarity(feature"i", feature"j"). 
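A minimal sketch of the soft cosine above, with a small hand-built feature-similarity matrix s (in practice s might be derived from Levenshtein distance or WordNet similarity, as noted):

```python
import math

def soft_cosine(a, b, s):
    # Bilinear form sum_ij s[i][j] * u[i] * v[j], used for the numerator and both norms.
    def bilinear(u, v):
        return sum(s[i][j] * u[i] * v[j]
                   for i in range(len(u)) for j in range(len(v)))
    return bilinear(a, b) / (math.sqrt(bilinear(a, a)) * math.sqrt(bilinear(b, b)))

# Features: "play", "game", "weather"; "play" and "game" are treated as related.
s = [[1.0, 0.6, 0.0],
     [0.6, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
a = [1, 0, 0]   # document mentioning only "play"
b = [0, 1, 0]   # document mentioning only "game"

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(soft_cosine(a, b, s))         # 0.6: related features give nonzero similarity
print(soft_cosine(a, b, identity))  # 0.0: with s = I this is the plain cosine
```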
If there is no similarity between features ("sii" = 1, "sij" = 0 for "i" ≠ "j"), the given equation is equivalent to the conventional cosine similarity formula. The time complexity of this measure is quadratic, which makes it applicable to real-world tasks. Note that the complexity can be reduced to subquadratic. An efficient implementation of such soft cosine similarity is included in the Gensim open source library. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "[-1, 1]." }, { "math_id": 1, "text": "[0,1]" }, { "math_id": 2, "text": "\\mathbf{A}\\cdot\\mathbf{B}\n=\\left\\|\\mathbf{A}\\right\\|\\left\\|\\mathbf{B}\\right\\|\\cos\\theta" }, { "math_id": 3, "text": "\\text{cosine similarity} =S_C (A,B):= \\cos(\\theta) = {\\mathbf{A} \\cdot \\mathbf{B} \\over \\|\\mathbf{A}\\| \\|\\mathbf{B}\\|} = \\frac{ \\sum\\limits_{i=1}^{n}{A_i B_i} }{ \\sqrt{\\sum\\limits_{i=1}^{n}{A_i^2}} \\cdot \\sqrt{\\sum\\limits_{i=1}^{n}{B_i^2}} }," }, { "math_id": 4, "text": "A_i" }, { "math_id": 5, "text": "B_i" }, { "math_id": 6, "text": "i" }, { "math_id": 7, "text": "\\mathbf{A}" }, { "math_id": 8, "text": "\\mathbf{B}" }, { "math_id": 9, "text": "0 \\to 1" }, { "math_id": 10, "text": "A - \\bar{A}" }, { "math_id": 11, "text": "\n\\text{if}\\, A = [A_1, A_2]^T, \\text{ then } \\bar{A} = \\left[\\frac{(A_1+A_2)}{2},\\frac{(A_1+A_2)}{2}\\right]^T,\n" }, { "math_id": 12, "text": "\n \\text{ so } A-\\bar{A}= \\left[\\frac{(A_1-A_2)}{2},\\frac{(-A_1+A_2)}{2}\\right]^T.\n" }, { "math_id": 13, "text": "\\operatorname{dist}(\\mathbf A, \\mathbf B)\n= \\sqrt{(\\mathbf A - \\mathbf B) \\cdot (\\mathbf A - \\mathbf B)}\n= \\sqrt{\\mathbf A \\cdot \\mathbf A -2(\\mathbf A \\cdot \\mathbf B) + \\mathbf B \\cdot \\mathbf B}\n= \\sqrt{2(1-S_C(\\mathbf A, \\mathbf B))}\\,.\n" }, { "math_id": 14, "text": " \\text{cosine distance} = D_C(A,B) := 1 - S_C(A,B)\\,." }, { "math_id": 15, "text": "\\sqrt{2(1- S_C(A,B))}" }, { "math_id": 16, "text": "A" }, { "math_id": 17, "text": "B" }, { "math_id": 18, "text": "\\text{angular distance} = D_{\\theta} := \\frac{ \\arccos( \\text{cosine similarity} ) }{ \\pi } = \\frac{\\theta}{\\pi}" }, { "math_id": 19, "text": "\\text{angular similarity} = S_{\\theta} := 1 - \\text{angular distance} = 1 - \\frac{\\theta}{\\pi}" }, { "math_id": 20, "text": "\\text{angular distance} = D_{\\theta} := \\frac{ 2 \\cdot \\arccos( \\text{cosine similarity} ) }{ \\pi } = \\frac{2\\theta}{\\pi}" }, { "math_id": 21, "text": "\\text{angular similarity} = S_{\\theta} := 1 - \\text{angular distance} = 1 - \\frac{2\\theta}{\\pi}" }, { "math_id": 22, "text": "L_2" }, { "math_id": 23, "text": "K =\\frac{|A \\cap B|}{\\sqrt{|A| \\times |B|}}" }, { "math_id": 24, "text": "|A|" }, { "math_id": 25, "text": "a" }, { "math_id": 26, "text": "V" }, { "math_id": 27, "text": "aV" }, { "math_id": 28, "text": "\\|A - B\\|" }, { "math_id": 29, "text": "\\|A - B\\|^2 = (A - B) \\cdot (A - B) = \\|A\\|^2 + \\|B\\|^2 - 2 (A \\cdot B)\\ " }, { "math_id": 30, "text": "\\|A\\|^2 = \\|B\\|^2 = 1" }, { "math_id": 31, "text": "2 (1 - \\cos(A, B))." }, { "math_id": 32, "text": "D_C(A, B) = \\frac{\\|A - B\\|^2}{2}\\quad\\mathrm{when}\\quad\\|A\\|^2 = \\|B\\|^2 = 1" }, { "math_id": 33, "text": "1/n" }, { "math_id": 34, "text": "n" }, { "math_id": 35, "text": "|~\\angle{AC} - \\angle{CB}~| \\le ~\\angle{AB}~ \\le ~\\angle{AC}~ + ~\\angle{CB}~." }, { "math_id": 36, "text": "\\cos(\\angle{AC} - \\angle{CB}) \\ge \\cos(\\angle{AB}) \\ge \\cos(\\angle{AC} + \\angle{CB})." }, { "math_id": 37, "text": "\\cos(A,C) \\cdot \\cos(C,B) + \\sqrt{\\left(1-\\cos(A,C)^2\\right)\\cdot\\left(1-\\cos(C,B)^2\\right)} \\geq \\cos(A,B)," }, { "math_id": 38, "text": "\\cos(A,B) \\geq \\cos(A,C) \\cdot \\cos(C,B) - \\sqrt{\\left(1-\\cos(A,C)^2\\right)\\cdot\\left(1-\\cos(C,B)^2\\right)}." 
}, { "math_id": 39, "text": "b" }, { "math_id": 40, "text": "\\begin{align}\n \\operatorname{soft\\_cosine}_1(a,b)=\n \\frac{\\sum\\nolimits_{i,j}^N s_{ij}a_ib_j}{\\sqrt{\\sum\\nolimits_{i,j}^N s_{ij}a_ia_j}\\sqrt{\\sum\\nolimits_{i,j}^N s_{ij}b_ib_j}},\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=8966592
8966784
Relative permeability
Dimensionless measure of a porous material's permeability In multiphase flow in porous media, the relative permeability of a phase is a dimensionless measure of the effective permeability of that phase. It is the ratio of the effective permeability of that phase to the absolute permeability. It can be viewed as an adaptation of Darcy's law to multiphase flow. For two-phase flow in porous media given steady-state conditions, we can write formula_0 where formula_1 is the flux, formula_2 is the pressure drop, and formula_3 is the viscosity. The subscript formula_4 indicates that the parameters are for phase formula_4. formula_5 is here the phase permeability (i.e., the effective permeability of phase formula_4), as observed through the equation above. Relative permeability, formula_6, for phase formula_4 is then defined from formula_7, as formula_8 where formula_9 is the permeability of the porous medium in single-phase flow, i.e., the absolute permeability. Relative permeability must be between zero and one. In applications, relative permeability is often represented as a function of water saturation; however, owing to capillary hysteresis one often resorts to one function or curve measured under drainage and another measured under imbibition. Under this approach, the flow of each phase is inhibited by the presence of the other phases. Thus the sum of relative permeabilities over all phases is less than 1. However, apparent relative permeabilities larger than 1 have been obtained, since the Darcean approach disregards the viscous coupling effects derived from momentum transfer between the phases (see assumptions below). This coupling could enhance the flow instead of inhibiting it. This has been observed in heavy oil petroleum reservoirs when the gas phase flows as bubbles or patches (disconnected). Modelling assumptions. The above form of Darcy's law is sometimes also called Darcy's extended law, formulated for horizontal, one-dimensional, immiscible multiphase flow in homogeneous and isotropic porous media. The interactions between the fluids are neglected, so this model assumes that the solid porous medium and the other fluids form a new porous matrix through which a phase can flow, implying that the fluid-fluid interfaces remain static in steady-state flow. This is not true, but the approximation has proven useful anyway. Each of the phase saturations must be larger than the irreducible saturation, and each phase is assumed continuous within the porous medium. Based on data from special core analysis laboratory (SCAL) experiments, simplified models of relative permeability as a function of saturation (e.g. water saturation) can be constructed. This article will focus on an oil-water system. Saturation scaling. Water saturation formula_10 is the fraction of pore volume that is filled with water, and similarly for oil saturation formula_11. Thus, saturations are in themselves scaled properties or variables. This gives the constraint formula_12 The model functions or correlations for relative permeabilities in an oil-water system are therefore usually written as functions of water saturation only, and this makes it natural to select water saturation as the horizontal axis in graphical presentations. Let formula_13 (also denoted formula_14 and sometimes formula_15) be the irreducible (or minimal or connate) water saturation, and let formula_16 be the residual (minimal) oil saturation after water flooding (imbibition). 
The flowing water saturation window in a water invasion / injection / imbibition process is bounded by a minimum value formula_13 and a maximum value formula_17. In mathematical terms the flowing saturation window is written as formula_18 By scaling the water saturation to the flowing saturation window, we get a normalized water saturation value formula_19 and a normalized oil saturation value formula_20 Endpoints. Let formula_21 be oil relative permeability, and let formula_22 be water relative permeability. There are two ways of scaling phase permeability (i.e. the effective permeability of the phase). If we scale phase permeability w.r.t. absolute water permeability (i.e. formula_23), we get an endpoint parameter for both oil and water relative permeability. If we scale phase permeability w.r.t. oil permeability with irreducible water saturation present, the formula_21 endpoint is one, and we are left with only the formula_22 endpoint parameter. In order to satisfy both options in the mathematical model, it is common to use two endpoint symbols in the model for two-phase relative permeability. The endpoints / endpoint parameters of oil and water relative permeabilities are formula_24 These symbols have their merits and limits. The symbol formula_25 emphasizes that it represents the top point of formula_21. It occurs at irreducible water saturation, and it is the largest value of formula_21 that can occur for initial water saturation. The competing endpoint symbol formula_26 occurs in imbibition flow in oil-gas systems. If the permeability basis is oil with irreducible water present, then formula_27. The symbol formula_28 emphasizes that it occurs at the residual oil saturation. An alternative symbol to formula_28 is formula_29, which emphasizes that the reference permeability is oil permeability with irreducible water formula_13 present. The oil and water relative permeability models are then written as formula_30 The functions formula_31 and formula_32 are called normalised relative permeabilities or shape functions for oil and water, respectively. The endpoint parameters formula_25 and formula_28 (which is a simplification of formula_33) are physical properties that are obtained either before or together with the optimization of shape parameters present in the shape functions. Articles that discuss relative permeability models and modelling often use many different symbols. Busy core analysts, reservoir engineers and scientists often skip tedious and time-consuming subscripts and write e.g. Krow, krow or simply oil relative permeability instead of formula_21 or formula_34. A variety of symbols are therefore to be expected, and accepted as long as they are explained or defined. The effects that slip or no-slip boundary conditions in pore flow have on endpoint parameters are discussed by Berg et al. Corey-model. An often used approximation of relative permeability is the Corey correlation, which is a power law in saturation. The Corey correlations of the relative permeability for oil and water are then formula_36 formula_37 If the permeability basis is oil with irreducible water present, then formula_27. The empirical parameters formula_35 and formula_38 are called curve shape parameters or simply shape parameters, and they can be obtained from measured data either by analytical interpretation, or by optimization using a core flow numerical simulator to match the experiment (often called history matching). 
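A small sketch of the Corey correlations above; the endpoint and shape parameter values used here are purely illustrative:

```python
def s_wn(s_w, s_wir=0.15, s_orw=0.20):
    """Normalized water saturation over the flowing window [S_wir, 1 - S_orw]."""
    return (s_w - s_wir) / (1.0 - s_wir - s_orw)

def krw_corey(s_w, k_rwr=0.35, n_w=2.0, **window):
    """Water relative permeability, Krw = Krwr * Swn^Nw."""
    return k_rwr * s_wn(s_w, **window) ** n_w

def krow_corey(s_w, k_rot=1.0, n_o=2.0, **window):
    """Oil relative permeability, Krow = Krot * (1 - Swn)^No."""
    return k_rot * (1.0 - s_wn(s_w, **window)) ** n_o

for s_w in (0.15, 0.30, 0.50, 0.80):
    print(s_w, round(krw_corey(s_w), 3), round(krow_corey(s_w), 3))
# At S_w = S_wir the water curve is zero and the oil curve equals its endpoint;
# at S_w = 1 - S_orw the oil curve is zero and the water curve reaches K_rwr.
```

The LET correlation described next replaces the single exponent of each curve with three shape parameters.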
The choice formula_39 is sometimes appropriate. The physical properties formula_25 and formula_28 are obtained either before or together with the optimization of formula_35 and formula_38. For gas-water or gas-oil systems there are Corey correlations similar to the oil-water relative permeability correlations shown above. LET-model. The Corey-correlation or Corey model has only one degree of freedom for the shape of each relative permeability curve, the shape parameter N. The LET-correlation adds more degrees of freedom in order to accommodate the shape of relative permeability curves in SCAL experiments and in 3D reservoir models that are adjusted to match historic production. These adjustments frequently include relative permeability curves and endpoints. The LET-type approximation is described by 3 parameters L, E, T. The correlation for water and oil relative permeability with water injection is thus formula_40 and formula_41 written using the same formula_42 normalization as for Corey. Only formula_13, formula_16, formula_25, and formula_28 have direct physical meaning, while the parameters "L", "E" and "T" are empirical. The parameter "L" describes the lower part of the curve, and by similarity and experience the "L"-values are comparable to the appropriate Corey parameter. The parameter "T" describes the upper part (or the top part) of the curve in a similar way that the "L"-parameter describes the lower part of the curve. The parameter "E" describes the position of the slope (or the elevation) of the curve. A value of one is a neutral value, and the position of the slope is governed by the "L"- and "T"-parameters. Increasing the value of the "E"-parameter pushes the slope towards the high end of the curve. Decreasing the value of the "E"-parameter pushes the slope towards the lower end of the curve. Experience using the LET correlation indicates the following reasonable ranges for the parameters "L", "E", and "T": "L" ≥ 0.1, "E" > 0 and "T" ≥ 0.1. For gas-water or gas-oil systems there are LET correlations similar to the oil-water relative permeability correlations shown above. Evaluations. After Morris Muskat and others established the concept of relative permeability in the late 1930s, the number of correlations, i.e. models, for relative permeability has steadily increased. This creates a need for evaluation of the most common correlations. Two of the latest (as of 2019) and most thorough evaluations were done by Moghadasi et al. and by Sakhaei et al. Moghadasi et al. evaluated Corey, Chierici and LET correlations for oil/water relative permeability using a sophisticated method that takes into account the number of uncertain model parameters. They found that LET, with the largest number (three) of uncertain parameters, was clearly the best one for both oil and water relative permeability. Sakhaei et al. evaluated 10 common and widely used relative permeability correlations for gas/oil and gas/condensate systems, and found that LET showed the best agreement with experimental values for both gas and oil/condensate relative permeability. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "q_i = -\\frac{k_i}{\\mu_i} \\nabla P_i \\qquad \\text{for} \\quad i=1,2" }, { "math_id": 1, "text": "q_i" }, { "math_id": 2, "text": "\\nabla P_i" }, { "math_id": 3, "text": "\\mu_i" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "k_i" }, { "math_id": 6, "text": "k_{\\mathit{ri}}" }, { "math_id": 7, "text": "k_i = k_{\\mathit{ri}}k" }, { "math_id": 8, "text": "k_{\\mathit{ri}} = k_i / k" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "S_\\mathit{w}" }, { "math_id": 11, "text": "S_\\mathit{o}" }, { "math_id": 12, "text": " S_\\mathit{w} + S_\\mathit{o} = 1 \\Leftrightarrow S_\\mathit{w} = 1 - S_\\mathit{o}" }, { "math_id": 13, "text": "S_\\mathit{wir}" }, { "math_id": 14, "text": "S_\\mathit{wc}" }, { "math_id": 15, "text": "S_\\mathit{wr}" }, { "math_id": 16, "text": "S_\\mathit{orw}" }, { "math_id": 17, "text": "S_\\mathit{wor} = 1 - S_\\mathit{orw}" }, { "math_id": 18, "text": " S_\\mathit{wir} \\leq S_\\mathit{w} \\leq S_\\mathit{wor} = 1 - S_\\mathit{orw}" }, { "math_id": 19, "text": " S_\\mathit{wn} = S_\\mathit{wn}(S_w) = \\frac{S_w - S_\\mathit{wir}}{1-S_\\mathit{wir} - S_\\mathit{orw}}\n= \\frac{S_w - S_\\mathit{wir}}{S_\\mathit{wor} - S_\\mathit{wir}}" }, { "math_id": 20, "text": " S_\\mathit{on} = 1-S_\\mathit{wn} = \\frac{S_o - S_\\mathit{orw}}{1-S_\\mathit{wir} - S_\\mathit{orw}}" }, { "math_id": 21, "text": "K_\\mathit{row}" }, { "math_id": 22, "text": "K_\\mathit{rw}" }, { "math_id": 23, "text": "S_\\mathit{w} = 1" }, { "math_id": 24, "text": "\\begin{align}\nK_\\mathit{row}(S_\\mathit{wir}) = K_\\mathit{rot} && \\text{and} &&\nK_\\mathit{rw}(S_\\mathit{wor}) = K_\\mathit{rwr}\n\\end{align}" }, { "math_id": 25, "text": "K_\\mathit{rot}" }, { "math_id": 26, "text": "K_\\mathit{ror}" }, { "math_id": 27, "text": "K_\\mathit{rot} = 1" }, { "math_id": 28, "text": "K_\\mathit{rwr}" }, { "math_id": 29, "text": "K_\\mathit{rw}^o" }, { "math_id": 30, "text": "\\begin{align}\nK_\\mathit{row} = K_\\mathit{rot} \\cdot K_\\mathit{rown}(S_\\mathit{wn}) && \\text{and} &&\nK_\\mathit{rw} = K_\\mathit{rwr} \\cdot K_\\mathit{rwn}(S_\\mathit{wn}) \n\\end{align}" }, { "math_id": 31, "text": "K_\\mathit{rown}" }, { "math_id": 32, "text": "K_\\mathit{rwn}" }, { "math_id": 33, "text": "K_\\mathit{rwor}" }, { "math_id": 34, "text": "k_\\mathit{row}" }, { "math_id": 35, "text": "N_\\mathit{o}" }, { "math_id": 36, "text": "K_\\mathit{row}(S_{w}) = K{_\\mathit{rot}}(1-S_\\mathit{wn})^{N_\\mathit{o}}" }, { "math_id": 37, "text": " K_\\mathit{rw}(S_{w}) = K{_\\mathit{rwr}}S_\\mathit{wn}^{N_\\mathit{w}}" }, { "math_id": 38, "text": "N_\\mathit{w}" }, { "math_id": 39, "text": "N_\\mathit{o} = N_\\mathit{w} = 2" }, { "math_id": 40, "text": "K_\\mathit{rw}=\\frac{{K_\\mathit{rwr}}S_\\mathit{wn}^{L_\\mathit{w}}}{{S_\\mathit{wn}}^{L_\\mathit{w}}+{E_\\mathit{w}}{(1-S_\\mathit{wn})}^{T_\\mathit{w}}}" }, { "math_id": 41, "text": "K_\\mathit{row}=\\frac{K_\\mathit{rot}(1-S_\\mathit{wn})^{L_o}}{{(1-S_\\mathit{wn})^{L_o}}+{E_\\mathit{o}}S_\\mathit{wn}^{T_\\mathit{o}}}" }, { "math_id": 42, "text": "S_w" } ]
https://en.wikipedia.org/wiki?curid=8966784
8967165
Multibody system
Tool to study dynamic behavior of interconnected rigid or flexible bodies Multibody system is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements. Introduction. The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics. The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms were derived, only to mention Lagrange’s formalisms based on minimal coordinates and a second formulation that introduces constraints. Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum. Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies. Applications. While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods within the following areas: Example. The following example shows a typical multibody system. It is usually denoted as slider-crank mechanism. The mechanism is used to transform rotational motion into translational motion by means of a rotating driving beam, a connection rod and a sliding body. In the present example, a flexible body is used for the connection rod. The sliding mass is not allowed to rotate and three revolute joints are used to connect the bodies. While each body has six degrees of freedom in space, the kinematical conditions lead to one degree of freedom for the whole system. The motion of the mechanism can be viewed in the following gif animation: Concept. A body is usually considered to be a rigid or flexible part of a mechanical system (not to be confused with the human body). An example of a body is the arm of a robot, a wheel or axle in a car or the human forearm. A link is the connection of two or more bodies, or a body with the ground. The link is defined by certain (kinematical) constraints that restrict the relative motion of the bodies. Typical constraints are: There are two important terms in multibody systems: degree of freedom and constraint condition. Degree of freedom. The degrees of freedom denote the number of independent kinematical possibilities to move. In other words, degrees of freedom are the minimum number of parameters required to completely define the position of an entity in space. A rigid body has six degrees of freedom in the case of general spatial motion, three of them translational degrees of freedom and three rotational degrees of freedom. In the case of planar motion, a body has only three degrees of freedom with only one rotational and two translational degrees of freedom. The degrees of freedom in planar motion can be easily demonstrated using a computer mouse. The degrees of freedom are: left-right, forward-backward and the rotation about the vertical axis. Constraint condition. 
A constraint condition implies a restriction in the kinematical degrees of freedom of one or more bodies. The classical constraint is usually an algebraic equation that defines the relative translation or rotation between two bodies. There are furthermore possibilities to constrain the relative velocity between two bodies or a body and the ground. This is for example the case of a rolling disc, where the point of the disc that contacts the ground has always zero relative velocity with respect to the ground. In the case that the velocity constraint condition cannot be integrated in time in order to form a position constraint, it is called non-holonomic. This is the case for the general rolling constraint. In addition to that there are non-classical constraints that might even introduce a new unknown coordinate, such as a sliding joint, where a point of a body is allowed to move along the surface of another body. In the case of contact, the constraint condition is based on inequalities and therefore such a constraint does not permanently restrict the degrees of freedom of bodies. Equations of motion. The equations of motion are used to describe the dynamic behavior of a multibody system. Each multibody system formulation may lead to a different mathematical appearance of the equations of motion while the physics behind is the same. The motion of the constrained bodies is described by means of equations that result basically from Newton’s second law. The equations are written for general motion of the single bodies with the addition of constraint conditions. Usually the equations of motions are derived from the Newton-Euler equations or Lagrange’s equations. The motion of rigid bodies is described by means of formula_0 (1) formula_1 (2) These types of equations of motion are based on so-called redundant coordinates, because the equations use more coordinates than degrees of freedom of the underlying system. The generalized coordinates are denoted by formula_2, the mass matrix is represented by formula_3 which may depend on the generalized coordinates. formula_4 represents the constraint conditions and the matrix formula_5 (sometimes termed the Jacobian) is the derivative of the constraint conditions with respect to the coordinates. This matrix is used to apply constraint forces formula_6 to the according equations of the bodies. The components of the vector formula_6 are also denoted as Lagrange multipliers. In a rigid body, possible coordinates could be split into two parts, formula_7 where formula_8 represents translations and formula_9 describes the rotations. Quadratic velocity vector. In the case of rigid bodies, the so-called quadratic velocity vector formula_10 is used to describe Coriolis and centrifugal terms in the equations of motion. The name is because formula_10 includes quadratic terms of velocities and it results due to partial derivatives of the kinetic energy of the body. Lagrange multipliers. The Lagrange multiplier formula_11 is related to a constraint condition formula_12 and usually represents a force or a moment, which acts in “direction” of the constraint degree of freedom. The Lagrange multipliers do no "work" as compared to external forces that change the potential energy of a body. Minimal coordinates. The equations of motion (1,2) are represented by means of redundant coordinates, meaning that the coordinates are not independent. 
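Before the slider-crank example is discussed further, the structure of equations (1) and (2) can be illustrated with the simplest possible constrained system: a planar point-mass pendulum in redundant coordinates q = (x, y) with the single constraint C(q) = (x^2 + y^2 - L^2)/2 = 0. The sketch below is a minimal illustration (it is not the slider-crank, and it assumes NumPy); it differentiates the constraint twice and solves for the accelerations and the Lagrange multiplier in one linear system:

```python
import numpy as np

m, g, L = 1.0, 9.81, 1.0          # mass, gravity, rod length
q = np.array([L, 0.0])            # position: rod pointing horizontally
q_dot = np.array([0.0, 0.0])      # momentarily at rest

M = m * np.eye(2)                 # mass matrix M(q)
F = np.array([0.0, -m * g])       # applied force (gravity); Q_v = 0 here
Cq = q.reshape(1, 2)              # constraint Jacobian dC/dq = [x, y]

# Equation (1): M q_dd + Cq^T * lambda = F, and the twice-differentiated
# constraint (2): Cq q_dd = -(x_dot^2 + y_dot^2). Assemble and solve both at once.
A = np.block([[M, Cq.T],
              [Cq, np.zeros((1, 1))]])
b = np.concatenate([F, [-q_dot @ q_dot]])
sol = np.linalg.solve(A, b)
q_ddot, lam = sol[:2], sol[2]

print(q_ddot)   # [0, -9.81]: the mass initially falls straight down
print(lam)      # 0.0: the rod force vanishes at this configuration
```

Here three unknowns (two redundant accelerations and one multiplier) describe a system with a single degree of freedom, which is exactly the redundancy discussed next for the slider-crank.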
This can be exemplified by the slider-crank mechanism shown above, where each body has six degrees of freedom while most of the coordinates are dependent on the motion of the other bodies. For example, 18 coordinates and 17 constraints could be used to describe the motion of the slider-crank with rigid bodies. However, as there is only one degree of freedom, the equation of motion could also be represented by means of one equation and one degree of freedom, using e.g. the angle of the driving link as the degree of freedom. The latter formulation then has the minimum number of coordinates required to describe the motion of the system and can thus be called a minimal coordinates formulation. The transformation of redundant coordinates to minimal coordinates is sometimes cumbersome and only possible in the case of holonomic constraints and without kinematical loops. Several algorithms have been developed for the derivation of minimal coordinate equations of motion, one example being the so-called recursive formulation. The resulting equations are easier to solve because, in the absence of constraint conditions, standard time integration methods can be used to integrate the equations of motion in time. While the reduced system might be solved more efficiently, the transformation of the coordinates might be computationally expensive. In very general multibody system formulations and software systems, redundant coordinates are used in order to make the systems user-friendly and flexible. Flexible multibody. There are several cases in which it is necessary to consider the flexibility of the bodies, for example when flexibility plays a fundamental role in the kinematics or in compliant mechanisms. Flexibility can be taken into account in different ways; there are three main approaches. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbf{M(q)} \\ddot{\\mathbf{q}} - \\mathbf{Q}_v + \\mathbf{C_q}^T \\mathbf{\\lambda} = \\mathbf{F}," }, { "math_id": 1, "text": "\\mathbf{C}(\\mathbf{q},\\dot{\\mathbf{q}}) = 0" }, { "math_id": 2, "text": "\\mathbf{q}" }, { "math_id": 3, "text": "\\mathbf{M}(\\mathbf{q})" }, { "math_id": 4, "text": "\\mathbf{C}" }, { "math_id": 5, "text": "\\mathbf{C_q}" }, { "math_id": 6, "text": "\\mathbf{\\lambda}" }, { "math_id": 7, "text": "\\mathbf{q} = \\left[ \\mathbf{u} \\quad \\mathbf{\\Psi} \\right]^T " }, { "math_id": 8, "text": "\\mathbf{u}" }, { "math_id": 9, "text": "\\mathbf{\\Psi}" }, { "math_id": 10, "text": "\\mathbf{Q}_v" }, { "math_id": 11, "text": "\\lambda_i" }, { "math_id": 12, "text": "C_i=0" } ]
https://en.wikipedia.org/wiki?curid=8967165
8967469
Geodetic effect
Precession of satellite orbits due to a celestial body's presence affecting spacetime The geodetic effect (also known as geodetic precession, de Sitter precession or de Sitter effect) represents the effect of the curvature of spacetime, predicted by general relativity, on a vector carried along with an orbiting body. For example, the vector could be the angular momentum of a gyroscope orbiting the Earth, as carried out by the Gravity Probe B experiment. The geodetic effect was first predicted by Willem de Sitter in 1916, who provided relativistic corrections to the Earth–Moon system's motion. De Sitter's work was extended in 1918 by Jan Schouten and in 1920 by Adriaan Fokker. It can also be applied to a particular secular precession of astronomical orbits, equivalent to the rotation of the Laplace–Runge–Lenz vector. The term geodetic effect has two slightly different meanings as the moving body may be spinning or non-spinning. Non-spinning bodies move in geodesics, whereas spinning bodies move in slightly different orbits. The difference between de Sitter precession and Lense–Thirring precession (frame dragging) is that the de Sitter effect is due simply to the presence of a central mass, whereas Lense–Thirring precession is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession. Experimental confirmation. The geodetic effect was verified to a precision of better than 0.5% percent by Gravity Probe B, an experiment which measures the tilting of the spin axis of gyroscopes in orbit about the Earth. The first results were announced on April 14, 2007 at the meeting of the American Physical Society. Formulae. To derive the precession, assume the system is in a rotating Schwarzschild metric. The nonrotating metric is formula_0 where "c" = "G" = 1. We introduce a rotating coordinate system, with an angular velocity formula_1, such that a satellite in a circular orbit in the θ = π/2 plane remains at rest. This gives us formula_2 In this coordinate system, an observer at radial position "r" sees a vector positioned at "r" as rotating with angular frequency ω. This observer, however, sees a vector positioned at some other value of "r" as rotating at a different rate, due to relativistic time dilation. Transforming the Schwarzschild metric into the rotating frame, and assuming that formula_3 is a constant, we find formula_4 with formula_5. For a body orbiting in the θ = π/2 plane, we will have β = 1, and the body's world-line will maintain constant spatial coordinates for all time. Now, the metric is in the canonical form formula_6 From this canonical form, we can easily determine the rotational rate of a gyroscope in proper time formula_7 where the last equality is true only for free falling observers for which there is no acceleration, and thus formula_8. This leads to formula_9 Solving this equation for ω yields formula_10 This is essentially Kepler's law of periods, which happens to be relativistically exact when expressed in terms of the time coordinate "t" of this particular rotating coordinate system. In the rotating frame, the satellite remains at rest, but an observer aboard the satellite sees the gyroscope's angular momentum vector precessing at the rate ω. This observer also sees the distant stars as rotating, but they rotate at a slightly different rate due to time dilation. Let τ be the gyroscope's proper time. 
Then formula_11 The −2"m"/"r" term is interpreted as the gravitational time dilation, while the additional −"m"/"r" is due to the rotation of this frame of reference. Let α' be the accumulated precession in the rotating frame. Since formula_12, the precession over the course of one orbit, relative to the distant stars, is given by: formula_13 With a first-order Taylor series we find formula_14 Thomas precession. One can attempt to break down the de Sitter precession into a kinematic effect called Thomas precession combined with a geometric effect caused by gravitationally curved spacetime. At least one author does describe it this way, but others state that "The Thomas precession comes into play for a gyroscope on the surface of the Earth ..., but not for a gyroscope in a freely moving satellite." An objection to the former interpretation is that the Thomas precession required has the wrong sign. The Fermi-Walker transport equation gives both the geodetic effect and Thomas precession and describes the transport of the spin 4-vector for accelerated motion in curved spacetime. The spin 4-vector is orthogonal to the velocity 4-vector. Fermi-Walker transport preserves this relation. If there is no acceleration, Fermi-Walker transport is just parallel transport along a geodesic and gives the spin precession due to the geodetic effect. For the acceleration due to uniform circular motion in flat Minkowski spacetime, Fermi Walker transport gives the Thomas precession. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "ds^2 = dt^2 \\left(1-\\frac{2m}{r}\\right) - dr^2 \\left(1 - \\frac{2m}{r}\\right)^{-1} - r^2 (d\\theta^2 + \\sin^2 \\theta \\, d\\phi'^2) ,\n" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "d\\phi = d\\phi' - \\omega \\, dt." }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "\n\\begin{align}\nds^2 & = \\left(1-\\frac{2m}{r}-r^2 \\beta\\omega^2 \\right)\\left(dt-\\frac{r^2 \\beta\\omega}{1-2m/r-r^2 \\beta\\omega^2} \\, d\\phi\\right)^2 - \\\\\n & - dr^2 \\left(1-\\frac{2m}{r}\\right)^{-1} - \\frac{r^2 \\beta - 2mr\\beta}{1-2m/r - r^2 \\beta\\omega^2} \\, d\\phi^2,\n\\end{align}\n" }, { "math_id": 5, "text": "\\beta = \\sin^2(\\theta)" }, { "math_id": 6, "text": "ds^2 = e^{2\\Phi}\\left(dt - w_i \\, dx^i \\right)^2 - k_{ij} \\, dx^i \\, dx^j." }, { "math_id": 7, "text": "\n\\begin{align}\n\\Omega & = \\frac{\\sqrt{2}}{4} e^\\Phi [k^{ik}k^{jl}(\\omega_{i,j}-\\omega_{j,i})(\\omega_{k,l} - \\omega_{l,k})]^{1/2} = \\\\\n & = \\frac{ \\sqrt{\\beta} \\omega (r -3 m) }{ r- 2 m - \\beta \\omega^2 r^3 } = \\sqrt{\\beta}\\omega.\n\\end{align}\n" }, { "math_id": 8, "text": " \\Phi,_{i} = 0" }, { "math_id": 9, "text": "\n\\Phi,_i = \\frac{2m/r^2 - 2r\\beta\\omega^2}{2(1-2m/r-r^2 \\beta\\omega^2)} = 0. \n" }, { "math_id": 10, "text": "\n\\omega^2 = \\frac{m}{r^3 \\beta}.\n" }, { "math_id": 11, "text": "\n\\Delta \\tau = \\left(1-\\frac{2m}{r} - r^2 \\beta\\omega^2 \\right)^{1/2} \\, dt = \\left(1-\\frac{3m}{r}\\right)^{1/2} \\, dt.\n" }, { "math_id": 12, "text": "\\alpha' = \\Omega \\Delta \\tau" }, { "math_id": 13, "text": "\n\\alpha = \\alpha' + 2\\pi = -2 \\pi \\sqrt{\\beta}\\Bigg( \\left(1-\\frac{3m}{r} \\right)^{1/2} - 1 \\Bigg).\n" }, { "math_id": 14, "text": "\n\\alpha \\approx \\frac{3\\pi m}{r}\\sqrt{\\beta} = \\frac{3\\pi m}{r}\\sin(\\theta).\n" } ]
https://en.wikipedia.org/wiki?curid=8967469
8969021
Lacunary function
In analysis, a lacunary function, also known as a lacunary series, is an analytic function that cannot be analytically continued anywhere outside the radius of convergence within which it is defined by a power series. The word "lacunary" is derived from lacuna ("pl." lacunae), meaning gap, or vacancy. The first known examples of lacunary functions involved Taylor series with large gaps, or lacunae, between the non-zero coefficients of their expansions. More recent investigations have also focused attention on Fourier series with similar gaps between non-zero coefficients. There is a slight ambiguity in the modern usage of the term lacunary series, which may refer to either Taylor series or Fourier series. A simple example. Pick an integer formula_0. Consider the following function defined by a simple power series: formula_1 The power series converges locally uniformly on the open unit disk |"z"| < 1. This can be proved by comparing "f" with the geometric series, which is absolutely convergent when |"z"| < 1. So "f" is analytic on the open unit disk. Nevertheless, "f" has a singularity at every point on the unit circle, and cannot be analytically continued outside of the open unit disk, as the following argument demonstrates. Clearly "f" has a singularity at "z" = 1, because formula_2 is a divergent series. Problems also arise if "z" is allowed to be non-real: since formula_3 we can see that "f" has a singularity at a point "z" when "z"a = 1, and also when "z"a2 = 1. By the induction suggested by the above equations, "f" must have a singularity at each of the "a""n"-th roots of unity for all natural numbers "n." The set of all such points is dense on the unit circle, hence by continuous extension every point on the unit circle must be a singularity of "f." An elementary result. Evidently the argument advanced in the simple example shows that certain series can be constructed to define lacunary functions. What is not so evident is that the gaps between the powers of "z" can expand much more slowly, and the resulting series will still define a lacunary function. To make this notion more precise some additional notation is needed. We write formula_4 where "b""n" = "a""k" when "n" = λ"k", and "b""n" = 0 otherwise. The stretches where the coefficients "b""n" in the second series are all zero are the "lacunae" in the coefficients. The monotonically increasing sequence of positive natural numbers {λ"k"} specifies the powers of "z" which are in the power series for "f"("z"). Now a theorem of Hadamard can be stated. If formula_5 for all "k", where "δ" > 0 is an arbitrary positive constant, then "f"("z") is a lacunary function that cannot be continued outside its circle of convergence. In other words, the sequence {λ"k"} doesn't have to grow as fast as 2"k" for "f"("z") to be a lacunary function – it just has to grow as fast as some geometric progression (1 + δ)"k". A series for which λ"k" grows this quickly is said to contain Hadamard gaps. See Ostrowski–Hadamard gap theorem. Lacunary trigonometric series. Mathematicians have also investigated the properties of lacunary trigonometric series formula_6 for which the λ"k" are far apart. Here the coefficients "a""k" are real numbers. In this context, attention has been focused on criteria sufficient to guarantee convergence of the trigonometric series almost everywhere (that is, for almost every value of the angle "θ" and of the distortion factor "ω"). For example, Kolmogorov showed that if the sequence {λ"k"} contains Hadamard gaps, then the series "S"((λ"k")"k", "θ", "ω") converges (diverges) almost everywhere, according as formula_7 converges (diverges). A unified view.
Greater insight into the underlying question that motivates the investigation of lacunary power series and lacunary trigonometric series can be gained by re-examining the simple example above. In that example we used the geometric series formula_8 and the Weierstrass M-test to demonstrate that the simple example defines an analytic function on the open unit disk. The geometric series itself defines an analytic function "g"("z") = "z"/(1 − "z") that extends to every point of the "closed" unit disk except "z" = 1, where "g"("z") has a simple pole. And, since "z" = "e""iθ" for points on the unit circle, the geometric series becomes formula_9 at a particular "z", |"z"| = 1. From this perspective, then, mathematicians who investigate lacunary series are asking the question: How much does the geometric series have to be distorted – by chopping big sections out, and by introducing coefficients "a""k" ≠ 1 – before the resulting mathematical object is transformed from a nice smooth meromorphic function into something that exhibits a primitive form of chaotic behavior?
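To make the contrast drawn in this section concrete, the short Python sketch below (an illustration only; the truncation length is an arbitrary choice) compares the limit of the geometric series with partial sums of the lacunary series from the simple example, with a = 2, as z approaches the boundary point i radially. Since i is a 4th root of unity it is one of the singular points of f, while g(z) = z/(1 − z) remains bounded along that approach.

def lacunary_partial(z, terms=40):
    # Partial sum of f(z) = z + z**2 + z**4 + z**8 + ... (the case a = 2).
    return sum(z**(2**n) for n in range(terms))

for r in (0.9, 0.99, 0.999, 0.9999):
    z = r * 1j                      # approach the boundary point i radially
    g = z / (1 - z)                 # the geometric series sums to z/(1 - z) on |z| < 1
    print(r, abs(g), abs(lacunary_partial(z)))

# |g(z)| stays near 0.707 while the lacunary partial sums keep growing,
# reflecting the singularity of f at the 4th root of unity z = i.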
[ { "math_id": 0, "text": "a \\geq 2" }, { "math_id": 1, "text": "\nf(z) = \\sum_{n=0}^\\infty z^{a^n} = z + z^a + z^{a^2} + z^{a^3} + z^{a^4} + \\cdots\\,\n" }, { "math_id": 2, "text": "\nf(1) = 1 + 1 + 1 + \\cdots\\,\n" }, { "math_id": 3, "text": "\nf\\left(z^a\\right) = f(z) - z \\qquad f\\left(z^{a^2}\\right) = f(z^a) - z^a \\qquad f\\left(z^{a^3}\\right) = f\\left(z^{a^2}\\right) - z^{a^2} \\qquad \\cdots \\qquad f\\left(z^{a^{n+1}}\\right) = f\\left(z^{a^n}\\right)-z^{a^n}\n" }, { "math_id": 4, "text": "\nf(z) = \\sum_{k=1}^\\infty a_kz^{\\lambda_k} = \\sum_{n=1}^\\infty b_n z^n\\,\n" }, { "math_id": 5, "text": "\n\\frac{\\lambda_k}{\\lambda_{k-1}} > 1 + \\delta \\,\n" }, { "math_id": 6, "text": "\nS((\\lambda_k)_k,\\theta) = \\sum_{k=1}^\\infty a_k \\cos(\\lambda_k\\theta) \\qquad \nS((\\lambda_k)_k,\\theta,\\omega) = \\sum_{k=1}^\\infty a_k \\cos(\\lambda_k\\theta + \\omega) \\,\n" }, { "math_id": 7, "text": "\n\\sum_{k=1}^\\infty a_k^2\n" }, { "math_id": 8, "text": "\ng(z) = \\sum_{n=1}^\\infty z^n \\,\n" }, { "math_id": 9, "text": "\ng(z) = \\sum_{n=1}^\\infty e^{in\\theta} = \\sum_{n=1}^\\infty \\left(\\cos n\\theta + i\\sin n\\theta\\right) \\,\n" } ]
https://en.wikipedia.org/wiki?curid=8969021
897064
Topological sorting
Node ordering for directed acyclic graphs In computer science, a topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge "(u,v)" from vertex "u" to vertex "v", "u" comes before "v" in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. Precisely, a topological sort is a graph traversal in which each node "v" is visited only after all its dependencies are visited. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG). Any DAG has at least one topological ordering, and algorithms are known for constructing a topological ordering of any DAG in linear time. Topological sorting has many applications, especially in ranking problems such as feedback arc set. Topological sorting is possible even when the DAG has disconnected components. Examples. The canonical application of topological sorting is in scheduling a sequence of jobs or tasks based on their dependencies. The jobs are represented by vertices, and there is an edge from "x" to "y" if job "x" must be completed before job "y" can be started (for example, when washing clothes, the washing machine must finish before we put the clothes in the dryer). Then, a topological sort gives an order in which to perform the jobs. A closely related application of topological sorting algorithms was first studied in the early 1960s in the context of the PERT technique for scheduling in project management. In this application, the vertices of a graph represent the milestones of a project, and the edges represent tasks that must be performed between one milestone and another. Topological sorting forms the basis of linear-time algorithms for finding the critical path of the project, a sequence of milestones and tasks that controls the length of the overall project schedule. In computer science, applications of this type arise in instruction scheduling, ordering of formula cell evaluation when recomputing formula values in spreadsheets, logic synthesis, determining the order of compilation tasks to perform in makefiles, data serialization, and resolving symbol dependencies in linkers. It is also used to decide in which order to load tables with foreign keys in databases. Algorithms. The usual algorithms for topological sorting have running time linear in the number of nodes plus the number of edges, asymptotically, formula_0 Kahn's algorithm. One of these algorithms, first described by Kahn (1962), works by choosing vertices in the same order as the eventual topological sort. First, find a list of "start nodes" that have no incoming edges and insert them into a set S; at least one such node must exist in a non-empty (finite) acyclic graph. Then: "L" ← Empty list that will contain the sorted elements "S" ← Set of all nodes with no incoming edge while "S" is not empty do remove a node "n" from "S" add "n" to "L" for each node "m" with an edge "e" from "n" to "m" do remove edge "e" from the graph if "m" has no other incoming edges then insert "m" into "S" if "graph" has edges then return error "(graph has at least one cycle)" else return "L" "(a topologically sorted order)" If the graph is a DAG, a solution will be contained in the list L (although the solution is not necessarily unique); otherwise, the graph must have at least one cycle and therefore a topological sort is impossible.
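The procedure just described translates directly into code. Below is a brief Python sketch (an illustration, not a canonical implementation); the graph is assumed to be given as a dictionary mapping each node to a collection of its successors.

from collections import deque

def kahn_topological_sort(graph):
    # graph: dict mapping each node to an iterable of its successors.
    # Count incoming edges for every node, including nodes that only appear as successors.
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] = indegree.get(v, 0) + 1

    queue = deque(u for u, d in indegree.items() if d == 0)   # S: nodes with no incoming edge
    order = []                                                # L: the sorted elements

    while queue:
        n = queue.popleft()
        order.append(n)
        for m in graph.get(n, ()):      # "remove" each edge n -> m
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)

    if len(order) != len(indegree):
        raise ValueError("graph has at least one cycle")
    return order

# Example: washing must precede drying, and drying must precede folding.
print(kahn_topological_sort({"wash": ["dry"], "dry": ["fold"], "fold": []}))
# ['wash', 'dry', 'fold']

Here a deque makes S behave as a queue; as noted below, a stack or an unordered set would work just as well and may produce a different, equally valid ordering.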
Reflecting the non-uniqueness of the resulting sort, the structure S can be simply a set or a queue or a stack. Depending on the order in which nodes n are removed from set S, a different solution is created. A variation of Kahn's algorithm that breaks ties lexicographically forms a key component of the Coffman–Graham algorithm for parallel scheduling and layered graph drawing. Depth-first search. An alternative algorithm for topological sorting is based on depth-first search. The algorithm loops through each node of the graph, in an arbitrary order, initiating a depth-first search that terminates when it hits any node that has already been visited since the beginning of the topological sort or the node has no outgoing edges (i.e., a leaf node): "L" ← Empty list that will contain the sorted nodes while exists nodes without a permanent mark do select an unmarked node "n" visit("n") function visit(node "n") if "n" has a permanent mark then return if "n" has a temporary mark then stop "(graph has at least one cycle)" mark "n" with a temporary mark for each node "m" with an edge from "n" to "m" do visit("m") mark "n" with a permanent mark add "n" to head of "L" Each node "n" gets "prepended" to the output list L only after considering all other nodes that depend on "n" (all descendants of "n" in the graph). Specifically, when the algorithm adds node "n", we are guaranteed that all nodes that depend on "n" are already in the output list L: they were added to L either by the recursive call to visit() that ended before the call to visit "n", or by a call to visit() that started even before the call to visit "n". Since each edge and node is visited once, the algorithm runs in linear time. This depth-first-search-based algorithm is the one described by Cormen et al. (2001); it seems to have been first described in print by Tarjan in 1976. Parallel algorithms. On a parallel random-access machine, a topological ordering can be constructed in "O"((log "n")²) time using a polynomial number of processors, putting the problem into the complexity class NC². One method for doing this is to repeatedly square the adjacency matrix of the given graph, logarithmically many times, using min-plus matrix multiplication with maximization in place of minimization. The resulting matrix describes the longest path distances in the graph. Sorting the vertices by the lengths of their longest incoming paths produces a topological ordering. An algorithm for parallel topological sorting on distributed memory machines parallelizes the algorithm of Kahn for a DAG formula_1. On a high level, the algorithm of Kahn repeatedly removes the vertices of indegree 0 and adds them to the topological sorting in the order in which they were removed. Since the outgoing edges of the removed vertices are also removed, there will be a new set of vertices of indegree 0, where the procedure is repeated until no vertices are left. This algorithm performs formula_2 iterations, where D is the longest path in G. Each iteration can be parallelized, which is the idea of the following algorithm. In the following, it is assumed that the graph partition is stored on p processing elements (PE), which are labeled formula_3. Each PE i initializes a set of local vertices formula_4 with indegree 0, where the upper index represents the current iteration.
Since all vertices in the local sets formula_5 have indegree 0, i.e., they are not adjacent, they can be given in an arbitrary order for a valid topological sorting. To assign a global index to each vertex, a prefix sum is calculated over the sizes of formula_5. So, in each step, there are formula_6 vertices added to the topological sorting. In the first step, PE j assigns the indices formula_7 to the local vertices in formula_8. These vertices in formula_8 are removed, together with their corresponding outgoing edges. For each outgoing edge formula_9 with endpoint v in another PE formula_10, the message formula_9 is posted to PE l. After all vertices in formula_8 are removed, the posted messages are sent to their corresponding PE. Each message formula_9 received updates the indegree of the local vertex v. If the indegree drops to zero, v is added to formula_11. Then the next iteration starts. In step k, PE j assigns the indices formula_12, where formula_13 is the total number of processed vertices after step "k" − 1. This procedure repeats until there are no vertices left to process, hence formula_14. Below is a high level, single program, multiple data pseudo-code overview of this algorithm. Note that the prefix sum for the local offsets formula_12 can be efficiently calculated in parallel. p processing elements with IDs from 0 to "p"-1 Input: G = (V, E) DAG, distributed to PEs, PE index j = 0, ..., p - 1 Output: topological sorting of G function traverseDAGDistributed δ incoming degree of local vertices "V" "Q" = {"v" ∈ "V" | δ["v"] = 0} // All vertices with indegree 0 nrOfVerticesProcessed = 0 do global build prefix sum over size of "Q" // get offsets and total number of vertices in this step offset = nrOfVerticesProcessed + sum(|Qi|, i = 0 to j - 1) // "j" is the processor index foreach u in Q localOrder[u] = index++; foreach (u,v) in E do post message ("u, v") to PE owning vertex "v" nrOfVerticesProcessed += sum(|Qi|, i = 0 to p - 1) deliver all messages to neighbors of vertices in Q receive messages for local vertices V remove all vertices in Q foreach message ("u, v") received: if --δ[v] = 0 add "v" to "Q" while global size of "Q" > 0 return localOrder The communication cost depends heavily on the given graph partition. As for runtime, on a CRCW-PRAM model that allows fetch-and-decrement in constant time, this algorithm runs in formula_15, where D is again the longest path in G and Δ the maximum degree. Application to shortest path finding. The topological ordering can also be used to quickly compute shortest paths through a weighted directed acyclic graph. Let V be the list of vertices in such a graph, in topological order. Then the following algorithm computes the shortest path from some source vertex s to all other vertices: let d be an array of the same length as V, holding the tentative shortest-path distances from s; initialize d[s] = 0 and d[v] = +∞ for every other vertex v; then process the vertices u in the order given by V, and for each outgoing edge (u, v) with weight w(u, v), update d[v] to min(d[v], d[u] + w(u, v)). A short Python sketch of this procedure is given at the end of the article. On a graph of n vertices and m edges, this algorithm takes Θ("n" + "m"), i.e., linear, time. Uniqueness. If a topological sort has the property that all pairs of consecutive vertices in the sorted order are connected by edges, then these edges form a directed Hamiltonian path in the DAG. If a Hamiltonian path exists, the topological sort order is unique; no other order respects the edges of the path.
Conversely, if a topological sort does not form a Hamiltonian path, the DAG will have two or more valid topological orderings, for in this case it is always possible to form a second valid ordering by swapping two consecutive vertices that are not connected by an edge to each other. Therefore, it is possible to test in linear time whether a unique ordering exists, and whether a Hamiltonian path exists, despite the NP-hardness of the Hamiltonian path problem for more general directed graphs (i.e., cyclic directed graphs). Relation to partial orders. Topological orderings are also closely related to the concept of a linear extension of a partial order in mathematics. A partially ordered set is just a set of objects together with a definition of the "≤" inequality relation, satisfying the axioms of reflexivity ("x" ≤ "x"), antisymmetry (if "x" ≤ "y" and "y" ≤ "x" then "x" = "y") and transitivity (if "x" ≤ "y" and "y" ≤ "z", then "x" ≤ "z"). A total order is a partial order in which, for every two objects "x" and "y" in the set, either "x" ≤ "y" or "y" ≤ "x". Total orders are familiar in computer science as the comparison operators needed to perform comparison sorting algorithms. For finite sets, total orders may be identified with linear sequences of objects, where the "≤" relation is true whenever the first object precedes the second object in the order; a comparison sorting algorithm may be used to convert a total order into a sequence in this way. A linear extension of a partial order is a total order that is compatible with it, in the sense that, if "x" ≤ "y" in the partial order, then "x" ≤ "y" in the total order as well. One can define a partial ordering from any DAG by letting the set of objects be the vertices of the DAG, and defining "x" ≤ "y" to be true, for any two vertices "x" and "y", whenever there exists a directed path from "x" to "y"; that is, whenever "y" is reachable from "x". With these definitions, a topological ordering of the DAG is the same thing as a linear extension of this partial order. Conversely, any partial ordering may be defined as the reachability relation in a DAG. One way of doing this is to define a DAG that has a vertex for every object in the partially ordered set, and an edge "xy" for every pair of objects for which "x" ≤ "y". An alternative way of doing this is to use the transitive reduction of the partial ordering; in general, this produces DAGs with fewer edges, but the reachability relation in these DAGs is still the same partial order. By using these constructions, one can use topological ordering algorithms to find linear extensions of partial orders. Relation to scheduling optimisation. By definition, the solution of a scheduling problem that includes a precedence graph is a valid solution to topological sort (irrespective of the number of machines), however, topological sort in itself is "not" enough to optimally solve a scheduling optimisation problem. Hu's algorithm is a popular method used to solve scheduling problems that require a precedence graph and involve processing times (where the goal is to minimise the largest completion time amongst all the jobs). Like topological sort, Hu's algorithm is not unique and can be solved using DFS (by finding the largest path length and then assigning the jobs). References. <templatestyles src="Reflist/styles.css" />
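As a follow-up to the shortest-path application described above, here is a brief Python sketch of that relaxation procedure; the input conventions (a precomputed topological order and an edge list of (u, v, weight) triples) are choices made for this illustration.

import math

def dag_shortest_paths(order, edges, source):
    # order: the vertices in topological order; edges: (u, v, weight) triples.
    outgoing = {u: [] for u in order}
    for u, v, w in edges:
        outgoing[u].append((v, w))

    dist = {v: math.inf for v in order}
    dist[source] = 0
    for u in order:                       # scan vertices in topological order
        if dist[u] == math.inf:
            continue                      # u is not reachable from the source
        for v, w in outgoing[u]:
            if dist[u] + w < dist[v]:     # relax the edge (u, v)
                dist[v] = dist[u] + w
    return dist

# Tiny example: a diamond-shaped DAG.
order = ["s", "a", "b", "t"]
edges = [("s", "a", 2), ("s", "b", 5), ("a", "b", 1), ("a", "t", 6), ("b", "t", 1)]
print(dag_shortest_paths(order, edges, "s"))    # {'s': 0, 'a': 2, 'b': 3, 't': 4}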
[ { "math_id": 0, "text": "O(\\left|{V}\\right| + \\left|{E}\\right|)." }, { "math_id": 1, "text": "G = (V, E)" }, { "math_id": 2, "text": "D+1" }, { "math_id": 3, "text": "0, \\dots, p-1" }, { "math_id": 4, "text": "Q_i^1" }, { "math_id": 5, "text": "Q_0^1, \\dots, Q_{p-1}^1" }, { "math_id": 6, "text": "\\sum_{i=0}^{p-1} |Q_i|" }, { "math_id": 7, "text": "\\sum_{i=0}^{j-1} |Q_i^1|, \\dots, \\left(\\sum_{i=0}^{j} |Q_i^1|\\right) - 1" }, { "math_id": 8, "text": "Q_j^1" }, { "math_id": 9, "text": "(u, v)" }, { "math_id": 10, "text": "l, j \\neq l" }, { "math_id": 11, "text": "Q_j^2" }, { "math_id": 12, "text": "a_{k-1} + \\sum_{i=0}^{j-1} |Q_i^k|, \\dots, a_{k-1} + \\left(\\sum_{i=0}^{j} |Q_i^k|\\right) - 1" }, { "math_id": 13, "text": "a_{k-1}" }, { "math_id": 14, "text": "\\sum_{i=0}^{p-1} |Q_i^{D+1}| = 0" }, { "math_id": 15, "text": "\\mathcal{O} \\left(\\frac{m + n}{p} + D (\\Delta + \\log n)\\right)" } ]
https://en.wikipedia.org/wiki?curid=897064
897136
Life-support system
Technology that allows survival in hostile environments A life-support system is the combination of equipment that allows survival in an environment or situation that would not support that life in its absence. It is generally applied to systems supporting human life in situations where the outside environment is hostile, such as outer space or underwater, or medical situations where the health of the person is compromised to the extent that the risk of death would be high without the function of the equipment. In human spaceflight, a life-support system is a group of devices that allow a human being to survive in outer space. US government space agency NASA, and private spaceflight companies use the phrase "environmental control and life-support system" or the acronym "ECLSS" when describing these systems. The life-support system may supply air, water and food. It must also maintain the correct body temperature, an acceptable pressure on the body and deal with the body's waste products. Shielding against harmful external influences such as radiation and micro-meteorites may also be necessary. Components of the life-support system are life-critical, and are designed and constructed using safety engineering techniques. In underwater diving, the breathing apparatus is considered to be life support equipment, and a saturation diving system is considered a life-support system – the personnel who are responsible for operating it are called life support technicians. The concept can also be extended to submarines, crewed submersibles and atmospheric diving suits, where the breathing gas requires treatment to remain respirable, and the occupants are isolated from the outside ambient pressure and temperature. Medical life-support systems include heart-lung machines, medical ventilators and dialysis equipment. Human physiological and metabolic needs. A crewmember of typical size requires approximately of food, water, and oxygen per day to perform standard activities on a space mission, and outputs a similar amount in the form of waste solids, waste liquids, and carbon dioxide. The mass breakdown of these metabolic parameters is as follows: of oxygen, of food, and of water consumed, converted through the body's physiological processes to of solid wastes, of liquid wastes, and of carbon dioxide produced. These levels can vary due to activity level of a specific mission assignment, but must obey the principle of mass balance. Actual water use during space missions is typically double the given value, mainly due to non-biological use (e.g. showering). Additionally, the volume and variety of waste products varies with mission duration to include hair, finger nails, skin flaking, and other biological wastes in missions exceeding one week in length. Other environmental considerations such as radiation, gravity, noise, vibration, and lighting also factor into human physiological response in outer space, though not with the more immediate effect that the metabolic parameters have. Atmosphere. Outer space life-support systems maintain atmospheres composed, at a minimum, of oxygen, water vapor and carbon dioxide. The partial pressure of each component gas adds to the overall barometric pressure. However, the elimination of diluent gases substantially increases fire risks, especially in ground operations when for structural reasons the total cabin pressure must exceed the external atmospheric pressure; see Apollo 1. Furthermore, oxygen toxicity becomes a factor at high oxygen concentrations. 
For this reason, most modern crewed spacecraft use conventional air (nitrogen/oxygen) atmospheres and use pure oxygen only in pressure suits during extravehicular activity where acceptable suit flexibility mandates the lowest inflation pressure possible. Water. Water is consumed by crew members for drinking, cleaning activities, EVA thermal control, and emergency uses. It must be stored, used, and reclaimed (from waste water and exhaled water vapor) efficiently since no on-site sources currently exist for the environments reached in the course of human space exploration. Future lunar missions may utilize water sourced from polar ices; Mars missions may utilize water from the atmosphere or ice deposits. Food. All space missions to date have used supplied food. Life-support systems could include a plant cultivation system which allows food to be grown within buildings or vessels. This would also regenerate water and oxygen. However, no such system has flown in outer space as yet. Such a system could be designed so that it reuses most (otherwise lost) nutrients. This is done, for example, by composting toilets which reintegrate waste material (excrement) back into the system, allowing the nutrients to be taken up by the food crops. The food coming from the crops is then consumed again by the system's users and the cycle continues. The logistics and area requirements involved however have been prohibitive in implementing such a system to date. Gravity. Depending on the length of the mission, astronauts may need artificial gravity to reduce the effects of space adaptation syndrome, body fluid redistribution, and loss of bone and muscle mass. Two methods of generating artificial weight in outer space exist. Linear acceleration. If a spacecraft's engines could produce thrust continuously on the outbound trip with a thrust level equal to the mass of the ship, it would continuously accelerate at the rate of per second, and the crew would experience a pull toward the ship's aft bulkhead at normal Earth gravity (one g). The effect is proportional to the rate of acceleration. When the ship reaches the halfway point, it would turn around and produce thrust in the retrograde direction to slow down. Rotation. Alternatively, if the ship's cabin is designed with a large cylindrical wall, or with a long beam extending another cabin section or counterweight, spinning it at an appropriate speed will cause centrifugal force to simulate the effect of gravity. If "ω" is the angular velocity of the ship's spin, then the acceleration at a radius "r" is: formula_0 Notice the magnitude of this effect varies with the radius of rotation, which crewmembers might find inconvenient depending on the cabin design. Also, the effects of Coriolis force (a force imparted at right angles to motion within the cabin) must be dealt with. And there is concern that rotation could aggravate the effects of vestibular disruption. Space vehicle systems. Gemini, Mercury, and Apollo. American Mercury, Gemini and Apollo spacecraft contained 100% oxygen atmospheres, suitable for short duration missions, to minimize weight and complexity. Space Shuttle. The Space Shuttle was the first American spacecraft to have an Earth-like atmospheric mixture, comprising 22% oxygen and 78% nitrogen. For the Space Shuttle, NASA includes in the ECLSS category systems that provide both life support for the crew and environmental control for payloads. 
The "Shuttle Reference Manual" contains ECLSS sections on: Crew Compartment Cabin Pressurization, Cabin Air Revitalization, Water Coolant Loop System, Active Thermal Control System, Supply and Waste Water, Waste Collection System, Waste Water Tank, Airlock Support, Extravehicular Mobility Units, Crew Altitude Protection System, and Radioisotope Thermoelectric Generator Cooling and Gaseous Nitrogen Purge for Payloads. Soyuz. The life-support system on the Soyuz spacecraft is called the Kompleks Sredstv Obespecheniya Zhiznideyatelnosti (KSOZh) (). Vostok, Voshkod and Soyuz contained air-like mixtures at approximately 101kPa (14.7 psi). The life support system provides a nitrogen/oxygen atmosphere at sea level partial pressures. The atmosphere is then regenerated through KO2 cylinders, which absorb most of the CO2 and water produced by the crew biologically and regenerates the oxygen, the LiOH cylinders then absorb the leftover CO2. Plug and play. The Paragon Space Development Corporation is developing a plug and play ECLSS called "commercial crew transport-air revitalization system" (CCT-ARS) for future spacecraft partially paid for using NASA's Commercial Crew Development (CCDev) funding. The CCT-ARS provides seven primary spacecraft life support functions in a highly integrated and reliable system: Air temperature control, Humidity removal, Carbon dioxide removal, Trace contaminant removal, Post-fire atmospheric recovery, Air filtration, and Cabin air circulation. Space station systems. Space station systems include technology that enables humans to live in outer space for a prolonged period of time. Such technology includes filtration systems for human waste disposal and air production. Skylab. Skylab used 72% oxygen and 28% nitrogen at a total pressure of 5 psi. Salyut and Mir. The Salyut and Mir space stations contained an air-like Oxygen and Nitrogen mixture at approximately sea-level pressures of 93.1 kPa (13.5psi) to 129 kPa (18.8 psi) with an Oxygen content of 21% to 40%. Bigelow commercial space station. The life-support system for the Bigelow Commercial Space Station is being designed by Bigelow Aerospace in Las Vegas, Nevada. The space station will be constructed of habitable Sundancer and BA 330 expandable spacecraft modules. As of October 2010,[ [update]] "human-in-the-loop testing of the environmental control and life-support system (ECLSS)" for "Sundancer" has begun. Natural systems. Natural LSS like the Biosphere 2 in Arizona have been tested for future space travel or colonization. These systems are also known as closed ecological systems. They have the advantage of using solar energy as primary energy only and being independent from logistical support with fuel. Natural systems have the highest degree of efficiency due to integration of multiple functions. They also provide the proper ambience for humans which is necessary for a longer stay in outer space. Underwater and saturation diving habitats. Underwater habitats and surface saturation accommodation facilities provide life-support for their occupants over periods of days to weeks. The occupants are constrained from immediate return to surface atmospheric pressure by decompression obligations of up to several weeks. The life support system of a surface saturation accommodation facility provides breathing gas and other services to support life for the personnel under pressure. 
Underwater habitats differ in that the ambient external pressure is the same as internal pressure, so some engineering problems are simplified. Underwater habitats balance internal pressure with the ambient external pressure, allowing the occupants free access to the ambient environment within a specific depth range, while saturation divers accommodated in surface systems are transferred under pressure to the working depth in a closed diving bell. The life support system for the bell provides and monitors the main supply of breathing gas, and the control station monitors the deployment and communications with the divers. Primary gas supply, power and communications to the bell are through a bell umbilical, made up of a number of hoses and electrical cables twisted together and deployed as a unit. This is extended to the divers through the diver umbilicals. The accommodation life support system maintains the chamber environment within the acceptable range for health and comfort of the occupants. Temperature, humidity, breathing gas quality, sanitation systems and equipment function are monitored and controlled. Experimental life-support systems. MELiSSA. Micro-Ecological Life Support System Alternative (MELiSSA) is a European Space Agency-led initiative, conceived as an ecosystem based on micro-organisms and higher plants, intended as a tool to gain understanding of the behaviour of artificial ecosystems, and for the development of the technology for a future regenerative life-support system for long-term crewed space missions. CyBLiSS. CyBLiSS ("Cyanobacterium-Based Life Support Systems") is a concept developed by researchers from several space agencies (NASA, the German Aerospace Center and the Italian Space Agency) which would use cyanobacteria to process resources available on Mars directly into useful products, and into substrates for other key organisms of a bioregenerative life support system (BLSS). The goal is to make future human-occupied outposts on Mars as independent of Earth as possible (explorers living "off the land"), to reduce mission costs and increase safety. Even though developed independently, CyBLiSS would be complementary to other BLSS projects (such as MELiSSA) as it can connect them to materials found on Mars, thereby making them sustainable and expandable there. Instead of relying on a closed loop, new elements found on site can be brought into the system. Footnotes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "g = \\omega^2 r" } ]
https://en.wikipedia.org/wiki?curid=897136
8971904
Coercive function
In mathematics, a coercive function is a function that "grows rapidly" at the extremes of the space on which it is defined. Depending on the context, different exact definitions of this idea are in use. Coercive vector fields. A vector field "f" : R"n" → R"n" is called coercive if formula_0 where "formula_1" denotes the usual dot product and formula_2 denotes the usual Euclidean norm of the vector "x". A coercive vector field is in particular norm-coercive since formula_3 for formula_4, by the Cauchy–Schwarz inequality. However, a norm-coercive mapping "f" : R"n" → R"n" is not necessarily a coercive vector field. For instance, the rotation "f" : R"2" → R"2", "f"("x") = (−"x"2, "x"1) by 90° is a norm-coercive mapping which fails to be a coercive vector field since formula_5 for every formula_6. Coercive operators and forms. A self-adjoint operator formula_7 where formula_8 is a real Hilbert space, is called coercive if there exists a constant formula_9 such that formula_10 for all formula_11 in formula_12 A bilinear form formula_13 is called coercive if there exists a constant formula_9 such that formula_14 for all formula_11 in formula_12 It follows from the Riesz representation theorem that any symmetric (defined as formula_15 for all formula_16 in formula_8), continuous (formula_17 for all formula_16 in formula_8 and some constant formula_18) and coercive bilinear form formula_19 has the representation formula_20 for some self-adjoint operator formula_7 which then turns out to be a coercive operator. Also, given a coercive self-adjoint operator formula_21 the bilinear form formula_19 defined as above is coercive. If formula_22 is a coercive operator then it is a coercive mapping (in the sense of coercivity of a vector field, where one has to replace the dot product with the more general inner product). Indeed, formula_23 for big formula_2 (if formula_2 is bounded, then it readily follows); then replacing formula_11 by formula_24 we get that formula_25 is a coercive operator. One can also show that the converse holds true if formula_25 is self-adjoint. The definitions of coercivity for vector fields, operators, and bilinear forms are closely related and compatible. Norm-coercive mappings. A mapping formula_26 between two normed vector spaces formula_27 and formula_28 is called norm-coercive if and only if formula_29 More generally, a function formula_26 between two topological spaces formula_30 and formula_31 is called coercive if for every compact subset formula_32 of formula_31 there exists a compact subset formula_33 of formula_30 such that formula_34 The composition of a bijective proper map followed by a coercive map is coercive. (Extended valued) coercive functions. An (extended valued) function formula_35 is called coercive if formula_36 A real valued coercive function formula_37 is, in particular, norm-coercive. However, a norm-coercive function formula_37 is not necessarily coercive. For instance, the identity function on formula_38 is norm-coercive but not coercive. References. "This article incorporates material from Coercive Function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License."
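The distinction drawn above between a coercive vector field and a merely norm-coercive mapping can be checked numerically. The following small Python sketch (an illustration only) evaluates the quantity f(x)·x/‖x‖ for the identity field, which is coercive, and for the 90° rotation from the example, which is norm-coercive but not coercive.

import math

def ratio(f, x):
    # f(x) . x / ||x||, the quantity that must tend to +infinity for a coercive vector field.
    fx = f(x)
    dot = sum(a * b for a, b in zip(fx, x))
    return dot / math.sqrt(sum(a * a for a in x))

identity = lambda x: x                      # coercive: the ratio equals ||x||
rotation = lambda x: (-x[1], x[0])          # 90-degree rotation: f(x) . x = 0 everywhere

for scale in (1.0, 10.0, 100.0, 1000.0):
    x = (scale, scale)
    print(scale, ratio(identity, x), ratio(rotation, x))

# The identity column grows without bound; the rotation column stays at 0,
# even though ||rotation(x)|| = ||x|| grows (norm-coercive but not coercive).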
[ { "math_id": 0, "text": "\\frac{f(x) \\cdot x}{\\| x \\|} \\to + \\infty \\text{ as } \\| x \\| \\to + \\infty," }, { "math_id": 1, "text": "\\cdot" }, { "math_id": 2, "text": "\\|x\\|" }, { "math_id": 3, "text": "\\|f(x)\\| \\geq (f(x) \\cdot x) / \\| x \\|" }, { "math_id": 4, "text": "x \\in \\mathbb{R}^n \\setminus \\{0\\} " }, { "math_id": 5, "text": "f(x) \\cdot x = 0" }, { "math_id": 6, "text": "x \\in \\mathbb{R}^2" }, { "math_id": 7, "text": "A:H\\to H," }, { "math_id": 8, "text": "H" }, { "math_id": 9, "text": "c>0" }, { "math_id": 10, "text": "\\langle Ax, x\\rangle \\ge c\\|x\\|^2" }, { "math_id": 11, "text": "x" }, { "math_id": 12, "text": "H." }, { "math_id": 13, "text": "a:H\\times H\\to \\mathbb R" }, { "math_id": 14, "text": "a(x, x)\\ge c\\|x\\|^2" }, { "math_id": 15, "text": "a(x, y)=a(y, x)" }, { "math_id": 16, "text": "x, y" }, { "math_id": 17, "text": "|a(x, y)|\\le k\\|x\\|\\,\\|y\\|" }, { "math_id": 18, "text": "k>0" }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "a(x, y)=\\langle Ax, y\\rangle" }, { "math_id": 21, "text": "A," }, { "math_id": 22, "text": "A:H\\to H" }, { "math_id": 23, "text": "\\langle Ax, x\\rangle \\ge C\\|x\\|" }, { "math_id": 24, "text": "x\\|x\\|^{-2}" }, { "math_id": 25, "text": "A" }, { "math_id": 26, "text": "f : X \\to X' " }, { "math_id": 27, "text": "(X, \\| \\cdot \\|)" }, { "math_id": 28, "text": "(X', \\| \\cdot \\|')" }, { "math_id": 29, "text": " \\|f(x)\\|' \\to + \\infty \\mbox{ as } \\|x\\| \\to +\\infty ." }, { "math_id": 30, "text": "X" }, { "math_id": 31, "text": "X'" }, { "math_id": 32, "text": "K'" }, { "math_id": 33, "text": "K" }, { "math_id": 34, "text": "f (X \\setminus K) \\subseteq X' \\setminus K'." }, { "math_id": 35, "text": "f: \\mathbb{R}^n \\to \\mathbb{R} \\cup \\{- \\infty, + \\infty\\}" }, { "math_id": 36, "text": " f(x) \\to + \\infty \\mbox{ as } \\| x \\| \\to + \\infty." }, { "math_id": 37, "text": "f:\\mathbb{R}^n \\to \\mathbb{R} " }, { "math_id": 38, "text": " \\mathbb{R} " } ]
https://en.wikipedia.org/wiki?curid=8971904
8974925
Apdex
Open standard for software performance Apdex (Application Performance Index) is an open standard developed by an alliance of companies for measuring performance of software applications in computing. Its purpose is to convert measurements into insights about user satisfaction, by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations. It is based on counts of "satisfied", "tolerating", and "frustrated" users, given a maximum satisfactory response time of "t", a maximum tolerable response time of "4t", and the assumption that users are frustrated above "4t". The score is equivalent to a weighted average of these user counts with weights 1, 0.5, and 0, respectively. Problems addressed. When engaging in application performance management, for example in the course of website monitoring, enterprises collect many measurements of the performance of information technology applications. However, this measurement data may not provide a clear and simple picture of how well those applications are performing from a business point of view, a characteristic desired in metrics that are used as key performance indicators. Reporting several different kinds of data can be confusing. Reducing measurement data to a single well understood metric is a convenient way to track and report on quality of experience. Measurements of application response times, in particular, may be difficult to evaluate. The Apdex method seeks to address these problems. Apdex method. Proponents of the Apdex standard believe that it offers a better way to "measure what matters". The Apdex method converts many measurements into one number on a uniform scale of 0 to 1 (0 = no users satisfied, 1 = all users satisfied). The resulting Apdex score is a numerical measure of user satisfaction with the performance of enterprise applications. This metric can be used to report on any source of end-user performance measurements for which a performance objective has been defined. The Apdex formula is the number of satisfied samples plus half of the tolerating samples plus none of the frustrated samples, divided by all the samples: formula_0 where the subscript t is the target time, and the tolerable time is assumed to be 4 times the target time. So it is easy to see how this ratio is always directly related to users' perceptions of satisfactory application responsiveness. Example: assuming a performance objective of 3 seconds or better, and a tolerable standard of 12 seconds or better, given a dataset with 100 samples where 60 are below 3 seconds, 30 are between 3 and 12 seconds, and the remaining 10 are above 12 seconds, the Apdex score is: formula_1 Apdex Alliance. The Apdex Alliance, headquartered in Charlottesville, Virginia, was founded in 2004 by Peter Sevcik, President of NetForecast, Inc. The Alliance is a group of companies that are collaborating to establish the Apdex standard. These companies have perceived the need for a simple and uniform way to report on application performance, are adopting the Apdex method in their internal operations or software products, and are participating in the work of refining and extending the definition of the Apdex specifications.
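Returning to the formula and the worked example above, the following short Python sketch (illustrative only; the function name and input format are choices made here) computes an Apdex score from a list of response-time samples and a target time t.

def apdex(samples, t):
    # Satisfied: at or below t; tolerating: above t but at or below 4t; frustrated: above 4t.
    satisfied = sum(1 for s in samples if s <= t)
    tolerating = sum(1 for s in samples if t < s <= 4 * t)
    return (satisfied + 0.5 * tolerating) / len(samples)

# The example from the text: 100 samples, 60 below 3 s, 30 between 3 s and 12 s, 10 above 12 s.
samples = [2.0] * 60 + [8.0] * 30 + [20.0] * 10
print(apdex(samples, t=3.0))   # 0.75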
Alliance contributing members who incorporate the standard into their products may use the Apdex name or logo where the Alliance has certified them as compliant. In January 2007, the Alliance comprised 11 contributing member companies, and over 200 individual members. While the number of contributing companies has remained relatively stable, individual membership grew to over 800 by December 2008, and reached 2000 in 2010. In 2008 the Alliance began publishing a blog, the Apdex Exchange, and in 2010, began offering educational Webinars. These activities address performance management topics, with an emphasis on how to apply the Apdex methodology.
[ { "math_id": 0, "text": "\\mathrm{Apdex}_\\mathrm{t} = \\frac{\\mathrm{Satisfied Count} + ({0.5\\cdot\\mathrm{Tolerating Count}}) + (0\\cdot\\mathrm{FrustratedCount})}{\\mathrm{Total Samples}}" }, { "math_id": 1, "text": "\\mathrm{Apdex}_3 = \\frac{60 + (0.5 \\cdot 30) + (0 \\cdot 10)}{100} = 0.75" } ]
https://en.wikipedia.org/wiki?curid=8974925
897529
Fermat curve
Algebraic curve In mathematics, the Fermat curve is the algebraic curve in the complex projective plane defined in homogeneous coordinates ("X":"Y":"Z") by the Fermat equation: formula_0 Therefore, in terms of the affine plane its equation is: formula_1 An integer solution to the Fermat equation would correspond to a nonzero rational number solution to the affine equation, and vice versa. But by Fermat's Last Theorem it is now known that (for "n" > 2) there are no nontrivial integer solutions to the Fermat equation; therefore, the Fermat curve has no nontrivial rational points. The Fermat curve is non-singular and has genus: formula_2 This means genus 0 for the case "n" = 2 (a conic) and genus 1 only for "n" = 3 (an elliptic curve). The Jacobian variety of the Fermat curve has been studied in depth. It is isogenous to a product of simple abelian varieties with complex multiplication. The Fermat curve also has gonality: formula_3 Fermat varieties. Fermat-style equations in more variables define the Fermat varieties as projective varieties.
[ { "math_id": 0, "text": "X^n + Y^n = Z^n.\\ " }, { "math_id": 1, "text": "x^n + y^n = 1.\\ " }, { "math_id": 2, "text": "(n - 1)(n - 2)/2.\\ " }, { "math_id": 3, "text": "n-1.\\ " } ]
https://en.wikipedia.org/wiki?curid=897529
897539
Hamilton–Jacobi equation
Formulation of classical mechanics based on the calculus of variations In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics. The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, Schrödinger's equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy. In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming. Overview. The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation formula_0 for a system of particles at coordinates "q". The function formula_1 is the system's Hamiltonian giving the system's energy. The solution of the equation is the "action functional", "S", called "Hamilton's principal function" in older textbooks. The solution can be related to the system Lagrangian formula_2 by an indefinite integral of the form used in the principle of least action: formula_3 Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics. Mathematical formulation. Notation. Boldface variables such as formula_4 represent a list of formula_5 generalized coordinates, formula_6 A dot over a variable or list signifies the time derivative (see Newton's notation). For example, formula_7 The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as formula_8 The action functional (a.k.a. Hamilton's principal function). Definition. Let the Hessian matrix formula_9 be invertible. The relation formula_10 shows that the Euler–Lagrange equations form a formula_11 system of second-order ordinary differential equations. Inverting the matrix formula_12 transforms this system into formula_13 Let a time instant formula_14 and a point formula_15 in the configuration space be fixed.
The existence and uniqueness theorems guarantee that, for every formula_16 the initial value problem with the conditions formula_17 and formula_18 has a locally unique solution formula_19 Additionally, let there be a sufficiently small time interval formula_20 such that extremals with different initial velocities formula_21 would not intersect in formula_22 The latter means that, for any formula_23 and any formula_24 there can be at most one extremal formula_25 for which formula_17 and formula_26 Substituting formula_25 into the action functional results in the Hamilton's principal function (HPF) formula_27. Formula for the momenta. The momenta are defined as the quantities formula_30 This section shows that the dependency of formula_31 on formula_32 disappears, once the HPF is known. Indeed, let a time instant formula_14 and a point formula_33 in the configuration space be fixed. For every time instant formula_34 and a point formula_35 let formula_25 be the (unique) extremal from the definition of the Hamilton's principal function. Call formula_36 the velocity of this extremal at time formula_34. Then formula_37 <templatestyles src="Math_proof/styles.css" />Proof While the proof below assumes the configuration space to be an open subset of formula_38 the underlying technique applies equally to arbitrary spaces. In the context of this proof, the calligraphic letter formula_39 denotes the action functional, and the italic formula_40 the Hamilton's principal function. Step 1. Let formula_41 be a path in the configuration space, and formula_42 a vector field along formula_43. (For each formula_44 the vector formula_45 is called "perturbation", "infinitesimal variation" or "virtual displacement" of the mechanical system at the point formula_46). Recall that the variation formula_47 of the action formula_39 at the point formula_43 in the direction formula_48 is given by the formula formula_49 where one should substitute formula_50 and formula_51 after calculating the partial derivatives on the right-hand side. (This formula follows from the definition of the Gateaux derivative via integration by parts). Assume that formula_43 is an extremal. Since formula_43 now satisfies the Euler–Lagrange equations, the integral term vanishes. If formula_43's starting point formula_33 is fixed, then, by the same logic that was used to derive the Euler–Lagrange equations, formula_52 Thus, formula_53 Step 2. Let formula_54 be the (unique) extremal from the definition of HPF, formula_55 a vector field along formula_56 and formula_57 a variation of formula_58 "compatible" with formula_59 In precise terms, formula_60 formula_61 formula_62 By definition of HPF and Gateaux derivative, formula_63 Here, we took into account that formula_64 and dropped formula_14 for compactness. Step 3. We now substitute formula_65 and formula_66 into the expression for formula_67 from Step 1 and compare the result with the formula derived in Step 2. The fact that, for formula_68 the vector field formula_69 was chosen arbitrarily completes the proof. Formula.
Given the Hamiltonian formula_70 of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for the Hamilton's principal function formula_40, formula_71 <templatestyles src="Math_proof/styles.css" />Derivation For an extremal formula_72 where formula_73 is the initial speed (see discussion preceding the definition of HPF), formula_74 From the formula for formula_75 and the coordinate-based definition of the Hamiltonian formula_76 with formula_77 satisfying the (uniquely solvable for formula_78) equation formula_79 we obtain formula_80 where formula_81 and formula_82 Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating formula_83 as the generating function for a canonical transformation of the classical Hamiltonian formula_84 The conjugate momenta correspond to the first derivatives of formula_40 with respect to the generalized coordinates formula_85 As a solution to the Hamilton–Jacobi equation, the principal function contains formula_86 undetermined constants, the first formula_5 of them denoted as formula_87, and the last one coming from the integration of formula_88. The relationship between formula_89 and formula_4 then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities formula_90 are also constants of motion, and these equations can be inverted to find formula_4 as a function of all the formula_91 and formula_92 constants and time. Comparison with other formulations of mechanics. The Hamilton–Jacobi equation is a "single", first-order partial differential equation for the function of the formula_5 generalized coordinates formula_93 and the time formula_34. The generalized momenta do not appear, except as derivatives of formula_40, the classical action. For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a "system" of formula_5, generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another "system" of 2"N" first-order equations for the time evolution of the generalized coordinates and their conjugate momenta formula_94. Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in this case the HJE becomes computationally useful. Derivation using a canonical transformation. Any canonical transformation involving a type-2 generating function formula_95 leads to the relations formula_96 and Hamilton's equations in terms of the new variables formula_97 and new Hamiltonian formula_98 have the same form: formula_99 To derive the HJE, a generating function formula_95 is chosen in such a way that it will make the new Hamiltonian formula_100.
Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial formula_101 so the new generalized coordinates and momenta are "constants" of motion. As they are constants, in this context the new generalized momenta formula_102 are usually denoted formula_87, i.e. formula_103 and the new generalized coordinates formula_104 are typically denoted as formula_105, so formula_106. Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant formula_107: formula_108 the HJE automatically arises formula_109 When solved for formula_110, these also give us the useful equations formula_111 or written in components for clarity formula_112 Ideally, these "N" equations can be inverted to find the original generalized coordinates formula_113 as a function of the constants formula_114 and formula_115, thus solving the original problem. Separation of variables. When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time "t" can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative formula_116 in the HJE must be a constant, usually denoted (formula_117), giving the separated solution formula_118 where the time-independent function formula_119 is sometimes called the abbreviated action or Hamilton's characteristic function and sometimes written formula_120 (see action principle names). The reduced Hamilton–Jacobi equation can then be written formula_121 To illustrate separability for other variables, a certain generalized coordinate formula_122 and its derivative formula_123 are assumed to appear together as a single function formula_124 in the Hamiltonian formula_125 In that case, the function "S" can be partitioned into two functions, one that depends only on "qk" and another that depends only on the remaining generalized coordinates formula_126 Substitution of these formulae into the Hamilton–Jacobi equation shows that the function "ψ" must be a constant (denoted here as formula_127), yielding a first-order ordinary differential equation for formula_128 formula_129 In fortunate cases, the function formula_130 can be separated completely into formula_131 functions formula_132 formula_133 In such a case, the problem devolves to formula_131 ordinary differential equations. The separability of "S" depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, formula_130 will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections. Examples in various coordinate systems. Spherical coordinates. 
In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential "U" can be written formula_134 The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions formula_135 such that formula_136 can be written in the analogous form formula_137 Substitution of the completely separated solution formula_138 into the HJE yields formula_139 This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for formula_140 formula_141 where formula_142 is a constant of the motion that eliminates the formula_140 dependence from the Hamilton–Jacobi equation formula_143 The next ordinary differential equation involves the formula_144 generalized coordinate formula_145 where formula_146 is again a constant of the motion that eliminates the formula_144 dependence and reduces the HJE to the final ordinary differential equation formula_147 whose integration completes the solution for formula_40. Elliptic cylindrical coordinates. The Hamiltonian in elliptic cylindrical coordinates can be written formula_148 where the foci of the ellipses are located at formula_149 on the formula_150-axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that formula_136 has an analogous form formula_151 where formula_152, formula_153 and formula_154 are arbitrary functions. Substitution of the completely separated solution formula_155 into the HJE yields formula_156 Separating the first ordinary differential equation formula_157 yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) formula_158 which itself may be separated into two independent ordinary differential equations formula_159 formula_160 that, when solved, provide a complete solution for formula_40. Parabolic cylindrical coordinates. The Hamiltonian in parabolic cylindrical coordinates can be written formula_161 The Hamilton–Jacobi equation is completely separable in these coordinates provided that formula_136 has an analogous form formula_162 where formula_163, formula_164, and formula_154 are arbitrary functions. Substitution of the completely separated solution formula_165 into the HJE yields formula_166 Separating the first ordinary differential equation formula_167 yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) formula_168 which itself may be separated into two independent ordinary differential equations formula_169 formula_170 that, when solved, provide a complete solution for formula_40. Waves and particles. Optical wave fronts and trajectories. The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface formula_171 that the light emitted at time formula_172 has reached at time formula_173. Light rays and wave fronts are dual: if one is known, the other can be deduced. More precisely, geometrical optics is a variational problem where the “action” is the travel time formula_174 along a path,formula_175 where formula_176 is the medium's index of refraction and formula_177 is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other. 
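The mechanical counterpart of this picture can be checked symbolically in the simplest case. The short sketch below is an illustration added here (it is not taken from the article or its sources): using Python with sympy, it verifies that the separated principal function of a one-dimensional free particle solves the Hamilton–Jacobi equation, and that the constant conjugate to the separation constant recovers the trajectory, which is exactly the sense in which knowing formula_40 gives the motion.

    import sympy as sp

    q, t, beta = sp.symbols('q t beta', real=True)
    m, E = sp.symbols('m E', positive=True)

    # Separated principal function for a free particle in one dimension:
    # S(q, t) = W(q) - E*t with dW/dq = sqrt(2*m*E).
    S = sp.sqrt(2*m*E)*q - E*t

    # Hamilton-Jacobi equation for H = p**2/(2*m):  dS/dt + (dS/dq)**2/(2*m) = 0.
    hje = sp.diff(S, t) + sp.diff(S, q)**2 / (2*m)
    print(sp.simplify(hje))                 # prints 0, so S solves the HJE

    # The constant conjugate to E is beta = dS/dE; setting it constant and
    # solving for q recovers uniform motion, q = sqrt(2*E/m) * (t + beta).
    trajectory = sp.solve(sp.Eq(sp.diff(S, E), beta), q)[0]
    print(sp.simplify(trajectory))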
The above duality is very general and applies to "all" systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation. The wave front at time formula_173, for a system initially at formula_178 at time formula_179, is defined as the collection of points formula_180 such that formula_181. If formula_182 is known, the momentum is immediately deduced.formula_183 Once formula_184 is known, tangents to the trajectories formula_185 are computed by solving the equationformula_186for formula_185, where formula_187 is the Lagrangian. The trajectories are then recovered from the knowledge of formula_185. Relationship to the Schrödinger equation. The isosurfaces of the function formula_188 can be determined at any time "t". The motion of an formula_40-isosurface as a function of time is defined by the motions of the particles beginning at the points formula_4 on the isosurface. The motion of such an isosurface can be thought of as a "wave" moving through formula_4-space, although it does not obey the wave equation exactly. To show this, let "S" represent the phase of a wave formula_189 where formula_190 is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having formula_40 be a complex number. The Hamilton–Jacobi equation is then rewritten as formula_191 which is the Schrödinger equation. Conversely, starting with the Schrödinger equation and our ansatz for formula_192, it can be deduced that formula_193 The classical limit (formula_194) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation, formula_195 Applications. HJE in a gravitational field. Using the energy–momentum relation in the form formula_196 for a particle of rest mass formula_197 travelling in curved space, where formula_198 are the contravariant coordinates of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and formula_199 is the speed of light. Setting the four-momentum formula_200 equal to the four-gradient of the action formula_130, formula_201 gives the Hamilton–Jacobi equation in the geometry determined by the metric formula_202: formula_203 in other words, in a gravitational field. HJE in electromagnetic fields. For a particle of rest mass formula_204 and electric charge formula_205 moving in electromagnetic field with four-potential formula_206 in vacuum, the Hamilton–Jacobi equation in geometry determined by the metric tensor formula_207 has a form formula_208 and can be solved for the Hamilton principal action function formula_40 to obtain further solution for the particle trajectory and momentum: formula_209 formula_210 formula_211 formula_212 formula_213 formula_214 formula_215 where formula_216 and formula_217 with formula_218 the cycle average of the vector potential. A circularly polarized wave. In the case of circular polarization, formula_219 formula_220 Hence formula_221 formula_222 formula_223 formula_224 where formula_225, implying the particle moving along a circular trajectory with a permanent radius formula_226 and an invariable value of momentum formula_227 directed along a magnetic field vector. A monochromatic linearly polarized plane wave. 
For a plane, monochromatic, linearly polarized wave with a field formula_228 directed along the axis formula_229 formula_230 formula_231 hence formula_232 formula_233 formula_234 formula_235 formula_236 formula_237 formula_238 formula_239 implying that the particle moves along a figure-8 trajectory whose long axis is oriented along the electric field vector formula_228. An electromagnetic wave with a solenoidal magnetic field. For an electromagnetic wave with an axial (solenoidal) magnetic field: formula_240 formula_241 hence formula_242 formula_243 formula_244 formula_245 formula_246 formula_247 formula_236 formula_248 formula_249 formula_250 where formula_251 is the magnetic field magnitude in a solenoid with effective radius formula_252, inductance formula_253, number of windings formula_254, and electric current magnitude formula_255 through the solenoid windings. The particle moves along a figure-8 trajectory in the formula_256 plane, set perpendicular to the solenoid axis at an arbitrary azimuth angle formula_257 owing to the axial symmetry of the solenoidal magnetic field. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " - \\frac{\\partial S}{\\partial t} = H\\!\\!\\left(\\mathbf{q},\\frac{\\partial S}{\\partial \\mathbf{q}},t \\right)." }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "\\ \\mathcal{L}\\ " }, { "math_id": 3, "text": "\\ S = \\int \\mathcal{L}\\ \\operatorname{d}t + ~\\mathsf{ some\\ constant}~" }, { "math_id": 4, "text": "\\mathbf{q}" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "\\mathbf{q} = (q_1, q_2, \\ldots, q_{N-1}, q_N)" }, { "math_id": 7, "text": "\\dot{\\mathbf{q}} = \\frac{d\\mathbf{q}}{dt}." }, { "math_id": 8, "text": "\\mathbf{p} \\cdot \\mathbf{q} = \\sum_{k=1}^N p_k q_k." }, { "math_id": 9, "text": "H_{\\cal L}(\\mathbf{q},\\mathbf{\\dot q},t) = \\left\\{\\partial^2 {\\cal L}/\\partial {\\dot q}^i\\partial {\\dot q}^j\\right\\}_{ij}" }, { "math_id": 10, "text": "\n\\frac{d}{dt}\\frac{\\partial {\\cal L}}{\\partial{\\dot q}^i} =\n\\sum^n_{j=1}\\left(\\frac{\\partial^2 {\\cal L}}{\\partial{\\dot q}^i\\partial{\\dot q}^j} {\\ddot q}^j\n+ \\frac{\\partial^2 {\\cal L}}{\\partial{\\dot q}^i\\partial{q}^j}{\\dot q}^j \\right)\n+\\frac{\\partial^2 {\\cal L}}{\\partial{\\dot q}^i \\partial t},\\qquad i=1,\\ldots,n,\n" }, { "math_id": 11, "text": "n \\times n" }, { "math_id": 12, "text": "H_{\\cal L}" }, { "math_id": 13, "text": "\\ddot q^i = F_i(\\mathbf{q},\\mathbf{\\dot q},t),\\ i=1,\\ldots, n." }, { "math_id": 14, "text": "t_0" }, { "math_id": 15, "text": "\\mathbf{q}_0 \\in M" }, { "math_id": 16, "text": "\\mathbf{v}_0," }, { "math_id": 17, "text": "\\gamma|_{\\tau=t_0} = \\mathbf{q}_0" }, { "math_id": 18, "text": "{\\dot \\gamma}|_{\\tau=t_0} = \\mathbf{v}_0" }, { "math_id": 19, "text": "\\gamma = \\gamma(\\tau; t_0,\\mathbf{q}_0,\\mathbf{v}_0)." }, { "math_id": 20, "text": " (t_0,t_1) " }, { "math_id": 21, "text": "\\mathbf{v}_0" }, { "math_id": 22, "text": "M \\times (t_0,t_1)." }, { "math_id": 23, "text": "\\mathbf{q} \\in M" }, { "math_id": 24, "text": "t \\in (t_0,t_1)," }, { "math_id": 25, "text": "\\gamma=\\gamma(\\tau;t,t_0,\\mathbf{q},\\mathbf{q}_0)" }, { "math_id": 26, "text": "\\gamma|_{\\tau=t} = \\mathbf{q}." }, { "math_id": 27, "text": "\nS(\\mathbf{q},t;\\mathbf{q}_0,t_0) \\ \\stackrel{\\text{def}}{=} \\int^t_{t_0} \\mathcal{L}(\\gamma(\\tau;\\cdot),\\dot\\gamma(\\tau;\\cdot),\\tau)\\,d\\tau," }, { "math_id": 28, "text": "\\gamma=\\gamma(\\tau;t,t_0,\\mathbf{q},\\mathbf{q}_0)," }, { "math_id": 29, "text": "\\gamma|_{\\tau=t_0} = \\mathbf{q}_0," }, { "math_id": 30, "text": " p_i(\\mathbf{q},\\mathbf{\\dot q},t) = \\partial {\\cal L}/\\partial \\dot q^i." }, { "math_id": 31, "text": "p_i" }, { "math_id": 32, "text": "\\mathbf{\\dot q}" }, { "math_id": 33, "text": "\\mathbf{q}_0" }, { "math_id": 34, "text": "t" }, { "math_id": 35, "text": "\\mathbf{q}," }, { "math_id": 36, "text": "\\mathbf{v}\\, \\stackrel{\\text{def}}{=}\\, \\dot \\gamma(\\tau;t,t_0,\\mathbf{q},\\mathbf{q}_0)|_{\\tau=t}" }, { "math_id": 37, "text": "\\frac{\\partial S}{\\partial q^i} = \\left.\\frac{\\partial {\\cal L}}{\\partial \\dot q^i}\\right|_{\\mathbf{\\dot q} = \\mathbf{v}}\\!\\!\\!\\!\\!\\!\\!, \\quad i=1,\\ldots,n." 
}, { "math_id": 38, "text": "\\R^n," }, { "math_id": 39, "text": "{\\cal S}" }, { "math_id": 40, "text": "S" }, { "math_id": 41, "text": "\\xi = \\xi(t)" }, { "math_id": 42, "text": "\\delta \\xi = \\delta \\xi(t)" }, { "math_id": 43, "text": "\\xi" }, { "math_id": 44, "text": "t," }, { "math_id": 45, "text": "\\delta\\xi(t)" }, { "math_id": 46, "text": "\\xi(t)" }, { "math_id": 47, "text": "\\delta {\\cal S}_{\\delta \\xi}[\\gamma,t_1,t_0]" }, { "math_id": 48, "text": "\\delta \\xi" }, { "math_id": 49, "text": "\n\\delta {\\cal S}_{\\delta \\xi}[\\xi,t_1,t_0] = \\int^{t_1}_{t_0} \\left(\\frac{\\partial {\\cal L}}{\\partial\\mathbf{q}} - \\frac{d}{dt}\\frac{\\partial {\\cal L}}{\\partial\\mathbf{\\dot q}}\\right)\\delta \\xi\\,dt + \\frac{\\partial {\\cal L}}{\\partial \\mathbf{\\dot q}}\\,\\delta \\xi\\Biggl|^{t_1}_{t_0},\n" }, { "math_id": 50, "text": "q^i=\\xi^i(t)" }, { "math_id": 51, "text": "\\dot q^i = \\dot \\xi^i(t)" }, { "math_id": 52, "text": "\\delta \\xi(t_0) = 0." }, { "math_id": 53, "text": "\n\\delta{\\cal S}_{\\delta \\xi}[\\xi,t;t_0]\n= \\left. \\frac{\\partial {\\cal L}}{\\partial \\mathbf{\\dot q}} \\right|^{\\mathbf{q}=\\xi(t)}_{\\mathbf{\\dot q}\n= \\dot \\xi(t)}\\, \\delta \\xi(t).\n" }, { "math_id": 54, "text": "\\gamma=\\gamma(\\tau;\\mathbf{q},\\mathbf{q}_0,t,t_0)" }, { "math_id": 55, "text": "\\delta\\gamma = \\delta\\gamma(\\tau)" }, { "math_id": 56, "text": "\\gamma," }, { "math_id": 57, "text": "\\gamma_\\varepsilon=\\gamma_\\varepsilon(\\tau;\\mathbf{q}_\\varepsilon,\\mathbf{q}_0,t,t_0)" }, { "math_id": 58, "text": "\\gamma" }, { "math_id": 59, "text": "\\delta \\gamma." }, { "math_id": 60, "text": "\\gamma_\\varepsilon|_{\\varepsilon = 0} = \\gamma," }, { "math_id": 61, "text": "\\dot\\gamma_\\varepsilon|_{\\varepsilon = 0} = \\delta\\gamma," }, { "math_id": 62, "text": "\\gamma_\\varepsilon|_{\\tau=t_0} = \\gamma|_{\\tau=t_0} = \\mathbf{q}_0." }, { "math_id": 63, "text": "\n\\delta{\\cal S}_{\\delta \\gamma}[\\gamma,t]\n\\overset{\\text{def}}{{}={}} \\left. \\frac{d{\\cal S}[\\gamma_\\varepsilon,t]}{d\\varepsilon} \\right|_{\\varepsilon=0}\n= \\left. \\frac{dS(\\gamma_\\varepsilon(t),t)}{d\\varepsilon} \\right|_{\\varepsilon=0}\n= \\frac{\\partial S}{\\mathbf{\\partial q}} \\, \\delta\\gamma(t).\n" }, { "math_id": 64, "text": "\\mathbf{q}=\\gamma(t;\\mathbf{q},\\mathbf{q}_0,t,t_0)" }, { "math_id": 65, "text": "\\xi = \\gamma" }, { "math_id": 66, "text": "\\delta\\xi = \\delta\\gamma" }, { "math_id": 67, "text": "\\delta{\\cal S}_{\\delta \\xi}[\\xi,t;t_0]" }, { "math_id": 68, "text": "t > t_0," }, { "math_id": 69, "text": "\\delta \\gamma" }, { "math_id": 70, "text": "H(\\mathbf{q},\\mathbf{p},t)" }, { "math_id": 71, "text": " - \\frac{\\partial S}{\\partial t} = H\\left(\\mathbf{q},\\frac{\\partial S}{\\partial \\mathbf{q}},t \\right)." 
}, { "math_id": 72, "text": "\\xi=\\xi(t;t_0,\\mathbf{q}_0,\\mathbf{v}_0)," }, { "math_id": 73, "text": "\\mathbf{v}_0 = \\dot\\xi|_{t=t_0}" }, { "math_id": 74, "text": "\n{\\cal L}(\\xi(t),\\dot\\xi(t),t)\n= \\frac{dS(\\xi(t),t)}{dt}\n= \\left[\\frac{\\partial S}{\\partial \\mathbf{q}}\\mathbf{\\dot q} + \\frac{\\partial S}{\\partial t}\\right]^{\\mathbf{q}=\\xi(t)}_{\\mathbf{\\dot q} = \\dot\\xi(t)}.\n" }, { "math_id": 75, "text": "p_i=p_i(\\mathbf{q},t)" }, { "math_id": 76, "text": "H(\\mathbf{q},\\mathbf{p},t) = \\mathbf{p}\\mathbf{\\dot q} - {\\cal L}(\\mathbf{q},\\mathbf{\\dot q},t)," }, { "math_id": 77, "text": "\\mathbf{\\dot q}(\\mathbf{p},\\mathbf{q},t)" }, { "math_id": 78, "text": "\\mathbf{\\dot q})" }, { "math_id": 79, "text": " \\mathbf{p} = \\frac{\\partial {\\cal L}(\\mathbf{q},\\mathbf{\\dot q},t)}{\\partial \\mathbf{\\dot q}}," }, { "math_id": 80, "text": "\n\\frac{\\partial S}{\\partial t} = {\\cal L}(\\mathbf{q},\\mathbf{\\dot q},t) - \\frac{\\partial S}{\\mathbf{\\partial q}}\\mathbf{\\dot q} = -H\\left(\\mathbf{q},\\frac{\\partial S}{\\partial \\mathbf{q}},t\\right),\n" }, { "math_id": 81, "text": "\\mathbf{q} = \\xi(t)" }, { "math_id": 82, "text": "\\mathbf{\\dot q} = \\dot\\xi(t)." }, { "math_id": 83, "text": " S" }, { "math_id": 84, "text": "H = H(q_1,q_2,\\ldots, q_N;p_1,p_2,\\ldots, p_N;t)." }, { "math_id": 85, "text": "p_k = \\frac{\\partial S}{\\partial q_k}." }, { "math_id": 86, "text": "N+1" }, { "math_id": 87, "text": "\\alpha_1,\\, \\alpha_2, \\dots , \\alpha_N" }, { "math_id": 88, "text": "\\frac{\\partial S}{\\partial t}" }, { "math_id": 89, "text": "\\mathbf{p}" }, { "math_id": 90, "text": "\\beta_k=\\frac{\\partial S}{\\partial\\alpha_k},\\quad k=1,2, \\ldots, N " }, { "math_id": 91, "text": "\\alpha" }, { "math_id": 92, "text": "\\beta" }, { "math_id": 93, "text": "q_1,\\, q_2, \\dots , q_N" }, { "math_id": 94, "text": "p_1,\\, p_2, \\dots , p_N" }, { "math_id": 95, "text": "G_2 (\\mathbf{q}, \\mathbf{P}, t)" }, { "math_id": 96, "text": "\n\\mathbf{p} = {\\partial G_2 \\over \\partial \\mathbf{q}}, \\quad\n\\mathbf{Q} = {\\partial G_2 \\over \\partial \\mathbf{P}}, \\quad\nK(\\mathbf{Q},\\mathbf{P},t) = H(\\mathbf{q},\\mathbf{p},t) + {\\partial G_2 \\over \\partial t}\n" }, { "math_id": 97, "text": "\\mathbf{P}, \\,\\mathbf{Q}" }, { "math_id": 98, "text": "K" }, { "math_id": 99, "text": " \\dot{\\mathbf{P}} = -{\\partial K \\over \\partial \\mathbf{Q}},\n\\quad \\dot{\\mathbf{Q}} = +{\\partial K \\over \\partial \\mathbf{P}}. " }, { "math_id": 100, "text": "K=0" }, { "math_id": 101, "text": "\\dot{\\mathbf{P}} = \\dot{\\mathbf{Q}} = 0" }, { "math_id": 102, "text": "\\mathbf{P}" }, { "math_id": 103, "text": "P_m =\\alpha_m" }, { "math_id": 104, "text": "\\mathbf{Q}" }, { "math_id": 105, "text": "\\beta_1,\\, \\beta_2, \\dots , \\beta_N" }, { "math_id": 106, "text": "Q_m =\\beta_m" }, { "math_id": 107, "text": "A" }, { "math_id": 108, "text": "G_2(\\mathbf{q},\\boldsymbol{\\alpha},t)=S(\\mathbf{q},t)+A, " }, { "math_id": 109, "text": "\\mathbf{p}=\\frac{\\partial G_2}{\\partial \\mathbf{q}}=\\frac{\\partial S}{\\partial \\mathbf{q}} \\, \\rightarrow \\,\nH(\\mathbf{q},\\mathbf{p},t) + {\\partial G_2 \\over \\partial t}=0 \\, \\rightarrow \\,\nH\\left(\\mathbf{q},\\frac{\\partial S}{\\partial \\mathbf{q}},t\\right) + {\\partial S \\over \\partial t}=0. 
" }, { "math_id": 110, "text": " S(\\mathbf{q},\\boldsymbol\\alpha, t) " }, { "math_id": 111, "text": "\\mathbf{Q} = \\boldsymbol\\beta = {\\partial S \\over \\partial \\boldsymbol\\alpha}," }, { "math_id": 112, "text": " Q_{m} = \\beta_{m} = \\frac{\\partial S(\\mathbf{q},\\boldsymbol\\alpha, t)}{\\partial \\alpha_{m}}. " }, { "math_id": 113, "text": " \\mathbf{q} " }, { "math_id": 114, "text": " \\boldsymbol\\alpha, \\,\\boldsymbol\\beta, " }, { "math_id": 115, "text": " t " }, { "math_id": 116, "text": "\\frac{\\partial S}{\\partial t} " }, { "math_id": 117, "text": "-E " }, { "math_id": 118, "text": " S = W(q_1,q_2, \\ldots, q_N) - Et " }, { "math_id": 119, "text": "W(\\mathbf{q}) " }, { "math_id": 120, "text": "S_0" }, { "math_id": 121, "text": " H\\left(\\mathbf{q},\\frac{\\partial S}{\\partial \\mathbf{q}} \\right) = E. " }, { "math_id": 122, "text": "q_k " }, { "math_id": 123, "text": "\\frac{\\partial S}{\\partial q_k} " }, { "math_id": 124, "text": "\\psi \\left(q_k, \\frac{\\partial S}{\\partial q_k} \\right)" }, { "math_id": 125, "text": " H = H(q_1,q_2,\\ldots, q_{k-1}, q_{k+1},\\ldots, q_N; p_1,p_2,\\ldots, p_{k-1}, p_{k+1},\\ldots, p_N; \\psi; t). " }, { "math_id": 126, "text": "S = S_k(q_k) + S_\\text{rem}(q_1,\\ldots, q_{k-1}, q_{k+1}, \\ldots, q_N, t). " }, { "math_id": 127, "text": "\\Gamma_k " }, { "math_id": 128, "text": "S_k (q_k), " }, { "math_id": 129, "text": " \\psi \\left(q_k, \\frac{ d S_k}{ d q_k} \\right) = \\Gamma_k. " }, { "math_id": 130, "text": "S " }, { "math_id": 131, "text": "N " }, { "math_id": 132, "text": "S_m (q_m), " }, { "math_id": 133, "text": " S=S_1(q_1)+S_2(q_2)+\\cdots+S_N(q_N)-Et. " }, { "math_id": 134, "text": " H = \\frac{1}{2m} \\left[ p_{r}^{2} + \\frac{p_{\\theta}^{2}}{r^{2}} + \\frac{p_{\\phi}^{2}}{r^{2} \\sin^{2} \\theta} \\right] + U(r, \\theta, \\phi). " }, { "math_id": 135, "text": " U_{r}(r), U_{\\theta}(\\theta), U_{\\phi}(\\phi) " }, { "math_id": 136, "text": "U" }, { "math_id": 137, "text": " U(r, \\theta, \\phi) = U_{r}(r) + \\frac{U_{\\theta}(\\theta)}{r^{2}} + \\frac{U_{\\phi}(\\phi)}{r^{2}\\sin^{2}\\theta} . " }, { "math_id": 138, "text": "S = S_{r}(r) + S_{\\theta}(\\theta) + S_{\\phi}(\\phi) - Et" }, { "math_id": 139, "text": "\n\\frac{1}{2m} \\left( \\frac{ dS_{r}}{ dr} \\right)^{2} + U_{r}(r) +\n\\frac{1}{2m r^{2}} \\left[ \\left( \\frac{ dS_{\\theta}}{ d\\theta} \\right)^{2} + 2m U_{\\theta}(\\theta) \\right] +\n\\frac{1}{2m r^{2}\\sin^{2}\\theta} \\left[ \\left( \\frac{ dS_{\\phi}}{ d\\phi} \\right)^{2} + 2m U_{\\phi}(\\phi) \\right] = E.\n" }, { "math_id": 140, "text": "\\phi" }, { "math_id": 141, "text": " \\left( \\frac{ dS_{\\phi}}{ d\\phi} \\right)^{2} + 2m U_{\\phi}(\\phi) = \\Gamma_{\\phi} " }, { "math_id": 142, "text": "\\Gamma_\\phi" }, { "math_id": 143, "text": " \\frac{1}{2m} \\left( \\frac{ dS_{r}}{ dr} \\right)^{2} + U_{r}(r) + \\frac{1}{2m r^{2}} \\left[ \\left( \\frac{ dS_{\\theta}}{ d\\theta} \\right)^{2} + 2m U_{\\theta}(\\theta) + \\frac{\\Gamma_{\\phi}}{\\sin^{2}\\theta} \\right] = E. 
" }, { "math_id": 144, "text": "\\theta" }, { "math_id": 145, "text": " \\left( \\frac{ dS_{\\theta}}{ d\\theta} \\right)^{2} + 2m U_{\\theta}(\\theta) + \\frac{\\Gamma_{\\phi}}{\\sin^{2}\\theta} = \\Gamma_{\\theta} " }, { "math_id": 146, "text": "\\Gamma_\\theta" }, { "math_id": 147, "text": " \\frac{1}{2m} \\left( \\frac{ dS_{r}}{ dr} \\right)^{2} + U_{r}(r) + \\frac{\\Gamma_{\\theta}}{2m r^{2}} = E " }, { "math_id": 148, "text": " H = \\frac{p_{\\mu}^{2} + p_{\\nu}^{2}}{2ma^{2} \\left( \\sinh^{2} \\mu + \\sin^{2} \\nu\\right)} + \\frac{p_{z}^{2}}{2m} + U(\\mu, \\nu, z) " }, { "math_id": 149, "text": "\\pm a" }, { "math_id": 150, "text": "x" }, { "math_id": 151, "text": " U(\\mu, \\nu, z) = \\frac{U_{\\mu}(\\mu) + U_{\\nu}(\\nu)}{\\sinh^{2} \\mu + \\sin^{2} \\nu} + U_{z}(z) " }, { "math_id": 152, "text": " U_\\mu(\\mu)" }, { "math_id": 153, "text": "U_\\nu(\\nu)" }, { "math_id": 154, "text": "U_z(z)" }, { "math_id": 155, "text": "S = S_{\\mu}(\\mu) + S_{\\nu}(\\nu) + S_{z}(z) - Et" }, { "math_id": 156, "text": "\n\\frac{1}{2m} \\left( \\frac{ dS_{z}}{ dz} \\right)^{2} + U_{z}(z) +\n\\frac{1}{2ma^{2} \\left( \\sinh^{2} \\mu + \\sin^{2} \\nu\\right)} \\left[ \\left( \\frac{ dS_{\\mu}}{ d\\mu} \\right)^{2} + \\left( \\frac{ dS_{\\nu}}{ d\\nu} \\right)^{2} + 2m a^{2} U_{\\mu}(\\mu) + 2m a^{2} U_{\\nu}(\\nu)\\right] = E.\n" }, { "math_id": 157, "text": " \\frac{1}{2m} \\left( \\frac{ dS_{z}}{ dz} \\right)^{2} + U_{z}(z) = \\Gamma_{z} " }, { "math_id": 158, "text": " \\left( \\frac{ dS_{\\mu}}{ d\\mu} \\right)^{2} + \\left( \\frac{ dS_{\\nu}}{ d\\nu} \\right)^{2} + 2m a^{2} U_{\\mu}(\\mu) + 2m a^{2} U_{\\nu}(\\nu) = 2ma^{2} \\left( \\sinh^{2} \\mu + \\sin^{2} \\nu\\right) \\left( E - \\Gamma_{z} \\right) " }, { "math_id": 159, "text": " \\left( \\frac{ dS_{\\mu}}{ d\\mu} \\right)^{2} + 2m a^{2} U_{\\mu}(\\mu) + 2ma^{2} \\left(\\Gamma_{z} - E \\right) \\sinh^{2} \\mu = \\Gamma_{\\mu} " }, { "math_id": 160, "text": " \\left( \\frac{ dS_{\\nu}}{ d\\nu} \\right)^{2} + 2m a^{2} U_{\\nu}(\\nu) + 2ma^{2} \\left(\\Gamma_{z} - E \\right) \\sin^{2} \\nu = \\Gamma_{\\nu} " }, { "math_id": 161, "text": " H = \\frac{p_{\\sigma}^{2} + p_{\\tau}^{2}}{2m \\left( \\sigma^{2} + \\tau^{2}\\right)} + \\frac{p_{z}^{2}}{2m} + U(\\sigma, \\tau, z). 
" }, { "math_id": 162, "text": " U(\\sigma, \\tau, z) = \\frac{U_{\\sigma}(\\sigma) + U_{\\tau}(\\tau)}{\\sigma^{2} + \\tau^{2}} + U_{z}(z) " }, { "math_id": 163, "text": "U_\\sigma (\\sigma)" }, { "math_id": 164, "text": "U_\\tau (\\tau)" }, { "math_id": 165, "text": "S = S_{\\sigma}(\\sigma) + S_{\\tau}(\\tau) + S_{z}(z) - Et + \\text{constant}" }, { "math_id": 166, "text": "\n\\frac{1}{2m} \\left( \\frac{ dS_{z}}{ dz} \\right)^{2} + U_{z}(z) +\n\\frac{1}{2m \\left( \\sigma^{2} + \\tau^{2} \\right)} \\left[ \\left( \\frac{ dS_{\\sigma}}{ d\\sigma} \\right)^{2} + \\left( \\frac{ dS_{\\tau}}{ d\\tau} \\right)^{2} + 2m U_{\\sigma}(\\sigma) + 2m U_{\\tau}(\\tau)\\right] = E.\n" }, { "math_id": 167, "text": "\\frac{1}{2m} \\left( \\frac{ dS_{z}}{ dz} \\right)^{2} + U_{z}(z) = \\Gamma_{z}" }, { "math_id": 168, "text": "\\left( \\frac{ dS_{\\sigma}}{ d\\sigma} \\right)^{2} + \\left( \\frac{ dS_{\\tau}}{ d\\tau} \\right)^{2} + 2m U_{\\sigma}(\\sigma) + 2m U_{\\tau}(\\tau) = 2m \\left( \\sigma^{2} + \\tau^{2} \\right) \\left( E - \\Gamma_{z} \\right)" }, { "math_id": 169, "text": "\\left( \\frac{ dS_{\\sigma}}{ d\\sigma} \\right)^{2} + 2m U_{\\sigma}(\\sigma) + 2m\\sigma^{2} \\left(\\Gamma_{z} - E \\right) = \\Gamma_{\\sigma}" }, { "math_id": 170, "text": "\\left( \\frac{ dS_{\\tau}}{ d\\tau} \\right)^{2} + 2m U_{\\tau}(\\tau) + 2m \\tau^{2} \\left(\\Gamma_{z} - E \\right) = \\Gamma_{\\tau}" }, { "math_id": 171, "text": "{\\cal C}_{t}" }, { "math_id": 172, "text": "t=0" }, { "math_id": 173, "text": "t" }, { "math_id": 174, "text": "T" }, { "math_id": 175, "text": "T = \\frac{1}{c}\\int_{A}^{B} n \\, ds" }, { "math_id": 176, "text": "n" }, { "math_id": 177, "text": "ds" }, { "math_id": 178, "text": "\\mathbf{q}_{0}" }, { "math_id": 179, "text": "t_{0}" }, { "math_id": 180, "text": "\\mathbf{q}" }, { "math_id": 181, "text": "S(\\mathbf{q},t)=\\text{const}" }, { "math_id": 182, "text": "S(\\mathbf{q},t)" }, { "math_id": 183, "text": "\\mathbf{p}=\\frac{\\partial S}{\\partial\\mathbf{q}}." }, { "math_id": 184, "text": "\\mathbf{p}" }, { "math_id": 185, "text": "\\dot{\\mathbf{q}}" }, { "math_id": 186, "text": "\\frac{\\partial{\\cal L}}{\\partial\\dot{ \\mathbf{q}}}=\\boldsymbol{p}" }, { "math_id": 187, "text": "{\\cal L}" }, { "math_id": 188, "text": "S(\\mathbf{q}, t)" }, { "math_id": 189, "text": " \\psi = \\psi_{0} e^{iS/\\hbar} " }, { "math_id": 190, "text": "\\hbar" }, { "math_id": 191, "text": " \\frac{\\hbar^{2}}{2m} \\nabla^2 \\psi - U\\psi = \\frac{\\hbar}{i} \\frac{\\partial \\psi}{\\partial t} " }, { "math_id": 192, "text": "\\psi" }, { "math_id": 193, "text": " \\frac{1}{2m} \\left( \\nabla S \\right)^{2} + U + \\frac{\\partial S}{\\partial t} = \\frac{i\\hbar}{2m} \\nabla^{2} S. " }, { "math_id": 194, "text": "\\hbar \\rightarrow 0" }, { "math_id": 195, "text": " \\frac{1}{2m} \\left( \\nabla S \\right)^{2} + U + \\frac{\\partial S}{\\partial t} = 0. 
" }, { "math_id": 196, "text": "g^{\\alpha\\beta}P_\\alpha P_\\beta - (mc)^2 = 0 " }, { "math_id": 197, "text": "m " }, { "math_id": 198, "text": "g^{\\alpha \\beta}" }, { "math_id": 199, "text": "c" }, { "math_id": 200, "text": "P_\\alpha" }, { "math_id": 201, "text": "P_\\alpha =-\\frac{\\partial S}{\\partial x^\\alpha}" }, { "math_id": 202, "text": "g " }, { "math_id": 203, "text": "g^{\\alpha\\beta}\\frac{\\partial S}{\\partial x^\\alpha}\\frac{\\partial S}{\\partial x^\\beta} -(mc)^2 = 0," }, { "math_id": 204, "text": "m" }, { "math_id": 205, "text": "e" }, { "math_id": 206, "text": "A_i = (\\phi,\\Alpha)" }, { "math_id": 207, "text": "g^{ik} = g_{ik}" }, { "math_id": 208, "text": "g^{ik}\\left ( \\frac{\\partial S}{\\partial x^i} + \\frac {e}{c}A_i \\right ) \\left ( \\frac{\\partial S}{\\partial x^k} + \\frac {e}{c}A_k \\right ) = m^2 c^2" }, { "math_id": 209, "text": "x = - \\frac {e}{c \\gamma}\\int A_z \\,d\\xi," }, { "math_id": 210, "text": "y = - \\frac {e}{c \\gamma} \\int A_y \\,d\\xi," }, { "math_id": 211, "text": "z = - \\frac {e^2}{2c^2 \\gamma^2}\\int (\\Alpha^2 - \\overline {\\Alpha^2 }) \\, d \\xi," }, { "math_id": 212, "text": "\\xi = ct - \\frac{e^2}{2 \\gamma^2 c^2}\\int (\\Alpha^2 - \\overline {\\Alpha^2}) \\, d \\xi, " }, { "math_id": 213, "text": "p_x = - \\frac{e}{c}A_x, \\quad p_y = - \\frac{e}{c}A_y," }, { "math_id": 214, "text": "p_z = \\frac{e^2}{2\\gamma c}(\\Alpha^2 - \\overline {\\Alpha^2})," }, { "math_id": 215, "text": "\\mathcal{E} = c\\gamma + \\frac{e^2}{2 \\gamma c}(\\Alpha^2 - \\overline {\\Alpha^2})," }, { "math_id": 216, "text": "\\xi = ct - z" }, { "math_id": 217, "text": "\\gamma^2 = m^2 c^2 + \\frac{e^2}{c^2} \\overline{A}^2 " }, { "math_id": 218, "text": "\\overline{\\mathbf{A}}" }, { "math_id": 219, "text": "E_x = E_0 \\sin \\omega \\xi_1, \\quad E_y = E_0 \\cos \\omega \\xi_1, " }, { "math_id": 220, "text": "A_x = \\frac{ cE_0 }{\\omega} \\cos \\omega \\xi_1, \\quad A_y = - \\frac{ cE_0 }{\\omega} \\sin \\omega \\xi_1. 
" }, { "math_id": 221, "text": "x = - \\frac{ecE_0} \\omega \\sin \\omega \\xi_1, " }, { "math_id": 222, "text": "y = - \\frac{ecE_0} \\omega \\cos \\omega \\xi_1, " }, { "math_id": 223, "text": "p_x = - \\frac{eE_0} \\omega \\cos \\omega \\xi_1, " }, { "math_id": 224, "text": "p_y = \\frac{eE_0}{\\omega} \\sin \\omega \\xi_1, " }, { "math_id": 225, "text": "\\xi_1 = \\xi /c " }, { "math_id": 226, "text": "e cE_0 / \\gamma \\omega^2 " }, { "math_id": 227, "text": "e E_0 / \\omega^2 " }, { "math_id": 228, "text": "E" }, { "math_id": 229, "text": "y" }, { "math_id": 230, "text": "E_y = E_0 \\cos \\omega \\xi_1," }, { "math_id": 231, "text": "A_y = - \\frac {cE_0}{\\omega} \\sin \\omega \\xi_1," }, { "math_id": 232, "text": "x = \\text{const}," }, { "math_id": 233, "text": "y_0 = -\\frac{ecE_0}{\\gamma \\omega^2}," }, { "math_id": 234, "text": "y = y_0 \\cos \\omega \\xi_1, \\quad z = C_z y_0 \\sin 2\\omega \\xi_1," }, { "math_id": 235, "text": "C_z = \\frac{eE_0}{8\\gamma \\omega}, \\quad \\gamma^2 = m^2 c^2 + \\frac{e^2 E_0^2}{2 \\omega^2}, " }, { "math_id": 236, "text": "p_x = 0," }, { "math_id": 237, "text": "p_{y,0} = \\frac{eE_0}{\\omega}," }, { "math_id": 238, "text": "p_y = p_{y,0} \\sin \\omega \\xi_1, " }, { "math_id": 239, "text": "p_z = - 2C_z p_{y,0} \\cos 2\\omega \\xi_1 " }, { "math_id": 240, "text": "E = E_\\phi = \\frac{\\omega \\rho_0}{c} B_0 \\cos \\omega \\xi_1, " }, { "math_id": 241, "text": "A_\\phi = - \\rho_0 B_0 \\sin \\omega \\xi_1 = - \\frac{L_s}{\\pi \\rho_0 N_s} I_0 \\sin \\omega \\xi_1," }, { "math_id": 242, "text": "x = \\text{constant}," }, { "math_id": 243, "text": "y_0 = -\\frac{e \\rho_0 B_0}{\\gamma \\omega}," }, { "math_id": 244, "text": "y = y_0 \\cos \\omega \\xi_1," }, { "math_id": 245, "text": "z = C_z y_0 \\sin 2\\omega \\xi_1," }, { "math_id": 246, "text": "C_z = \\frac{e \\rho_0 B_0}{8c \\gamma}," }, { "math_id": 247, "text": "\\gamma^2 = m^2 c^2 + \\frac{e^2 \\rho_0^2 B_0^2}{2c^2}," }, { "math_id": 248, "text": "p_{y,0} = \\frac{e \\rho_0 B_0}{c}," }, { "math_id": 249, "text": "p_y = p_{y,0} \\sin \\omega \\xi_1," }, { "math_id": 250, "text": "p_z = - 2C_z p_{y,0} \\cos 2 \\omega \\xi_1," }, { "math_id": 251, "text": "B_0" }, { "math_id": 252, "text": "\\rho_0" }, { "math_id": 253, "text": "L_s" }, { "math_id": 254, "text": "N_s" }, { "math_id": 255, "text": "I_0" }, { "math_id": 256, "text": "yz" }, { "math_id": 257, "text": "\\varphi" } ]
https://en.wikipedia.org/wiki?curid=897539
897554
Packrat parser
Type of parser The Packrat parser is a type of parser that shares similarities with the recursive descent parser in its construction. However, it differs because it takes parsing expression grammars (PEGs) as input rather than LL grammars. In 1970, Alexander Birman laid the groundwork for packrat parsing by introducing the "TMG recognition scheme" (TS), and "generalized TS" (gTS). TS was based upon Robert M. McClure's TMG compiler-compiler, and gTS was based upon Dewey Val Schorre's META compiler-compiler. Birman's work was later refined by Aho and Ullman, and renamed Top-Down Parsing Language (TDPL) and Generalized TDPL (GTDPL), respectively. These algorithms were the first of their kind to employ deterministic top-down parsing with backtracking. Bryan Ford developed PEGs as an expansion of GTDPL and TS. Unlike CFGs, PEGs are unambiguous and can match well with machine-oriented languages. PEGs, similar to GTDPL and TS, can also express all LL(k) and LR(k) languages. Ford also introduced Packrat as a parser that uses memoization techniques on top of a simple PEG parser. This was done because PEGs have unlimited lookahead capability, resulting in a parser with exponential time performance in the worst case. Packrat keeps track of the intermediate results for all mutually recursive parsing functions, so that each parsing function is only called once at a specific input position. In some instances of packrat implementation, if there is insufficient memory, certain parsing functions may need to be called multiple times at the same input position, causing the parser to take longer than linear time. Syntax. The packrat parser takes as input the same syntax as PEGs: a simple PEG is composed of terminal and nonterminal symbols, possibly interleaved with operators that compose one or several derivation rules. Rules. A derivation rule is composed of a nonterminal symbol and an expression: formula_5. A special expression formula_6 is the starting point of the grammar. If no formula_6 is specified, the first expression of the first rule is used. An input string is considered accepted by the parser if formula_7 is recognized. As a side-effect, a string formula_8 can be recognized by the parser even if it was not fully consumed. An extreme case of this rule is that the grammar formula_9 matches any string. This can be avoided by rewriting the grammar as formula_10 Example. formula_11 This grammar recognizes a palindrome over the alphabet formula_12, with an optional digit in the middle. Example strings accepted by the grammar include formula_13 and formula_14. Left recursion. Left recursion happens when a grammar production refers to itself as its left-most element, either directly or indirectly. Since Packrat is a recursive descent parser, it cannot handle left recursion directly. During the early stages of development, it was found that a production that is left-recursive can be transformed into a right-recursive production. This modification significantly simplifies the task of a Packrat parser. Nonetheless, if there is an indirect left recursion involved, the process of rewriting can be quite complex and challenging. If the time complexity requirements are loosened from linear to superlinear, it is possible to modify the memoization table of a Packrat parser to permit left recursion, without altering the input grammar. Iterative combinator. The iterative combinators formula_3 and formula_4 need special attention when used in a Packrat parser. 
In fact, the use of iterative combinators introduces a "secret" recursion that does not record intermediate results in the outcome matrix. This can lead to the parser exhibiting superlinear behaviour. The problem can be resolved by rewriting each iterative expression as an equivalent right-recursive rule: for instance, formula_4 can be replaced by a new nonterminal "E" with the rule "E" → α"E" / ε, so that each repetition step becomes an ordinary rule invocation. With this transformation, the intermediate results can be properly memoized. Memoization technique. Memoization is an optimization technique in computing that aims to speed up programs by storing the results of expensive function calls. This technique essentially works by caching the results so that when the same inputs occur again, the cached result is simply returned, thus avoiding the time-consuming process of re-computing. When using packrat parsing and memoization, it is noteworthy that the parsing function for each nonterminal is solely based on the input string; it does not depend on any information gathered during the parsing process. Essentially, memoization table entries do not affect or rely on the parser's specific state at any given time. Packrat parsing stores results in a matrix or similar data structure that allows for quick look-ups and insertions. When a production is encountered, the matrix is checked to see if it has already occurred. If it has, the result is retrieved from the matrix. If not, the production is evaluated, the result is inserted into the matrix, and then returned. Evaluating the entire formula_15 matrix in a tabular approach would require formula_16 space. Here, formula_17 represents the number of nonterminals, and formula_18 represents the input string size. In a naïve implementation, the entire table can be derived from the input string starting from the end of the string. The Packrat parser can be improved to update only the necessary cells in the matrix through a depth-first visit of each subexpression tree. Consequently, using a matrix with dimensions of formula_15 is often wasteful, as most entries would remain empty. These cells are linked to the input string, not to the nonterminals of the grammar. This means that increasing the input string size will always increase memory consumption, while the number of parsing rules affects only the worst-case space complexity. Cut operator. Another operator called "cut" has been introduced to Packrat to reduce its average space complexity even further. This operator exploits the formal structure of many programming languages to eliminate impossible derivations. For instance, the control statements of a standard programming language are mutually exclusive and can be distinguished by their first recognized token, e.g., formula_19. When a Packrat parser uses cut operators, it effectively clears its backtracking stack. This is because a cut operator reduces the number of possible alternatives in an ordered choice. By adding cut operators in the right places in a grammar's definition, the resulting Packrat parser needs only a nearly constant amount of space for memoization. The algorithm. The following is a sketch of an implementation of the Packrat algorithm in Lua-like pseudocode. 
INPUT(n)    -- return the character at position n

RULE(R : Rule, P : Position)
    entry = GET_MEMO(R, P)    -- the number of elements previously matched by rule R at position P, or nil
    if entry == nil then
        return EVAL(R, P)
    end
    return entry

EVAL(R : Rule, P : Position)
    start = P
    for choice in R.choices do                  -- iterate over the ordered choices of R
        acc = 0
        for symbol in choice do                 -- iterate over the symbols of the choice, terminal and nonterminal
            if symbol.is_terminal then
                if INPUT(start + acc) == symbol.terminal then
                    acc = acc + 1               -- found the expected terminal, skip past it
                else
                    break
                end
            else
                res = RULE(symbol.nonterminal, start + acc)     -- try to recognize the nonterminal at position start+acc
                SET_MEMO(symbol.nonterminal, start + acc, res)  -- memoize the result, including failure (special value fail)
                if res == fail then
                    break
                end
                acc = acc + res
            end
            if symbol == choice.last then       -- the last symbol of this choice has been matched
                return acc
            end
        end
    end
    return fail                                 -- no choice matched

Example. Given the following context-free grammar that recognizes simple arithmetic expressions composed of single digits interleaved by sums, multiplications, and parentheses: formula_20 Denoting the line terminator by formula_21, we can apply the "packrat algorithm". References. <templatestyles src="Reflist/styles.css" />
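As a concrete, runnable counterpart to the Lua-like sketch above, the following Python program is an illustrative addition (it is not taken from the cited sources, and the function and variable names are chosen freely). It implements a small packrat parser for the arithmetic grammar of the example, memoizing one result, success or failure, per (nonterminal, position) pair, and returning the number of characters recognized.

    FAIL = None

    GRAMMAR = {                      # ordered choices; each choice is a list of symbols
        'S': [['A']],
        'A': [['M', '+', 'A'], ['M']],
        'M': [['P', '*', 'M'], ['P']],
        'P': [['(', 'A', ')'], ['D']],
        'D': [[d] for d in '0123456789'],
    }

    def parse(text, start='S'):
        memo = {}                                    # (nonterminal, position) -> consumed length or FAIL

        def rule(name, pos):
            key = (name, pos)
            if key not in memo:
                memo[key] = eval_rule(name, pos)     # memoize successes and failures alike
            return memo[key]

        def eval_rule(name, pos):
            for choice in GRAMMAR[name]:             # ordered choice: the first match wins
                acc = 0
                for symbol in choice:
                    if symbol in GRAMMAR:            # nonterminal
                        res = rule(symbol, pos + acc)
                        if res is FAIL:
                            break
                        acc += res
                    else:                            # terminal: a single character
                        if pos + acc < len(text) and text[pos + acc] == symbol:
                            acc += 1
                        else:
                            break
                else:                                # every symbol of the choice matched
                    return acc
            return FAIL

        return rule(start, 0)

    print(parse('1+2*(3+4)'))   # 9: the whole input is recognized
    print(parse('1+*2'))        # 1: only the prefix '1' is recognized

On the input '1+2*(3+4)' the parser recognizes all nine characters; on '1+*2' it recognizes only the prefix '1', illustrating the earlier remark that a string can be recognized without being fully consumed.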
[ { "math_id": 0, "text": "\\{S, E, F, D\\}" }, { "math_id": 1, "text": "\\{a,b,z,e,g \\}" }, { "math_id": 2, "text": "\\{\\alpha,\\beta,\\gamma,\\omega,\\tau\\}" }, { "math_id": 3, "text": "\\alpha +" }, { "math_id": 4, "text": "\\alpha *" }, { "math_id": 5, "text": "S \\rightarrow \\alpha" }, { "math_id": 6, "text": "\\alpha_s" }, { "math_id": 7, "text": " \\alpha_s " }, { "math_id": 8, "text": " x " }, { "math_id": 9, "text": " S \\rightarrow x* " }, { "math_id": 10, "text": " S \\rightarrow x*!. " }, { "math_id": 11, "text": "\\begin{cases}\n S \\rightarrow A/B/D \\\\\n A \\rightarrow \\textbf{'a'}\\ S \\ \\textbf{'a'} \\\\\n B \\rightarrow \\textbf{'b'}\\ S \\ \\textbf{'b'} \\\\\n D \\rightarrow (\\textbf{'0'}-\\textbf{'9'})?\n\\end{cases}" }, { "math_id": 12, "text": " \\{ a,b \\} " }, { "math_id": 13, "text": " \\textbf{'aa'} " }, { "math_id": 14, "text": " \\textbf{'aba3aba'} " }, { "math_id": 15, "text": "m*n" }, { "math_id": 16, "text": "\\Theta(mn)" }, { "math_id": 17, "text": "m" }, { "math_id": 18, "text": "n" }, { "math_id": 19, "text": "\\{if, do, while, switch\\}\n " }, { "math_id": 20, "text": "\\begin{cases}\n S \\rightarrow A \\\\\n A \\rightarrow M\\ \\textbf{'+'}\\ A \\ / \\ M \\\\\n M \\rightarrow P\\ \\textbf{'*'}\\ M \\ / \\ P \\\\\n P \\rightarrow \\textbf{'('}\\ A\\ \\textbf{')'}\\ / \\ D \\\\\n D \\rightarrow (\\textbf{'0'}-\\textbf{'9'})\n\\end{cases}" }, { "math_id": 21, "text": "\\dashv" } ]
https://en.wikipedia.org/wiki?curid=897554
897558
Beta (finance)
Financial Metric In finance, the beta (β or market beta or beta coefficient) is a statistic that measures the expected increase or decrease of an individual stock price in proportion to movements of the stock market as a whole. Beta can be used to indicate the contribution of an individual asset to the market risk of a portfolio when it is added in small quantity. It refers to an asset's non-diversifiable risk, systematic risk, or market risk. Beta is not a measure of idiosyncratic risk. Beta is the hedge ratio of an investment with respect to the stock market. For example, to hedge out the market-risk of a stock with a market beta of 2.0, an investor would short $2,000 in the stock market for every $1,000 invested in the stock. Thus insured, movements of the overall stock market no longer influence the combined position on average. Beta measures the contribution of an individual investment to the risk of the market portfolio that was not reduced by diversification. It does not measure the risk when an investment is held on a stand-alone basis. The beta of an asset is compared to the market as a whole, usually the S&P 500. By definition, the value-weighted average of all market-betas of all investable assets with respect to the value-weighted market index is 1. If an asset has a beta above 1, it indicates that its return moves more than 1-to-1 with the return of the market-portfolio, on average; that is, it is more volatile than the market. In practice, few stocks have negative betas (tending to go up when the market goes down). Most stocks have betas between 0 and 3. Most fixed income instruments and commodities tend to have low or zero betas; call options tend to have high betas; and put options and short positions and some inverse ETFs tend to have negative betas. Technical aspects. Mathematical definition. The market beta formula_0 of an asset formula_1, observed on formula_2 occasions, is defined by (and best obtained via) a linear regression of the rate of return formula_3 of asset formula_1 on the rate of return formula_4 of the (typically value-weighted) stock-market index formula_5: formula_6 where formula_7 is an unbiased error term whose squared error should be minimized. The coefficient formula_8 is often referred to as the alpha. The ordinary least squares solution is: formula_9 where formula_10 and formula_11 are the covariance and variance operators. Betas with respect to different market indexes are not comparable. Relationship between own risk and beta risk. By using the relationship between standard deviation and variance, formula_12 and the definition of correlation formula_13, market beta can also be written as formula_14, where formula_15 is the correlation of the two returns, and formula_16, formula_17 are the respective volatilities. This equation shows that the idiosyncratic risk (formula_16) is related to but often very different to market beta. If the idiosyncratic risk is 0 (i.e., the stock returns do not move), so is the market-beta. The reverse is not the case: A coin toss bet has a zero beta but not zero risk. Attempts have been made to estimate the three ingredient components separately, but this has not led to better estimates of market-betas. Adding an asset to the market portfolio. Suppose an investor has all his money in the market formula_5 and wishes to move a small amount into asset class formula_1. 
The new portfolio is defined by formula_18 The variance can be computed as formula_19 For small values of formula_20, the terms in formula_21 can be ignored, formula_22 Using the definition of formula_23 this is formula_24 This suggests that an asset with formula_25 greater than 1 increases the portfolio variance, while an asset with formula_25 less than 1 decreases it "if" added in a small amount. Beta as a linear operator. Market-beta can be weighted, averaged, added, etc. That is, if a portfolio consists of 80% asset A and 20% asset B, then the beta of the portfolio is 80% times the beta of asset A and 20% times the beta of asset B. formula_26 Financial analysis. In practice, the choice of index makes relatively little difference in the market betas of individual assets, because broad value-weighted market indexes tend to move closely together. Academics tend to prefer to work with a value-weighted market portfolio due to its attractive aggregation properties and its close link with the capital asset pricing model (CAPM). Practitioners tend to prefer to work with the S&P 500 due to its easy in-time availability and availability to hedge with stock index futures. In the idealized CAPM, beta risk is the only kind of risk for which investors should receive an expected return higher than the risk-free rate of interest. When used within the context of the CAPM, beta becomes a measure of the appropriate expected rate of return. Due to the fact that the overall rate of return on the firm is weighted rate of return on its debt and its equity, the market-beta of the overall unlevered firm is the weighted average of the firm's debt beta (often close to 0) and its levered equity beta. In fund management, adjusting for exposure to the market separates out the component that fund managers should have received given that they had their specific exposure to the market. For example, if the stock market went up by 20% in a given year, and a manager had a portfolio with a market-beta of 2.0, this portfolio should have returned 40% in the absence of specific stock picking skills. This is measured by the alpha in the market-model, holding beta constant. Occasionally, other betas than market-betas are used. The arbitrage pricing theory (APT) has multiple factors in its model and thus requires multiple betas. (The CAPM has only one risk factor, namely the overall market, and thus works only with the plain beta.) For example, a beta with respect to oil price changes would sometimes be called an "oil-beta" rather than "market-beta" to clarify the difference. Betas commonly quoted in mutual fund analyses often measure the exposure to a specific fund benchmark, rather than to the overall stock market. Such a beta would measure the risk from adding a specific fund to a holder of the mutual fund benchmark portfolio, rather than the risk of adding the fund to a portfolio of the market. Special cases. commonly show up as examples of low beta. These have some similarity to bonds, in that they tend to pay consistent dividends, and their prospects are not strongly dependent on economic cycles. They are still stocks, so the market price will be affected by overall stock market trends, even if this does not make sense. Foreign stocks may provide some diversification. World benchmarks such as S&P Global 100 have slightly lower betas than comparable US-only benchmarks such as S&P 100. However, this effect is not as good as it used to be; the various markets are now fairly correlated, especially the US and Western Europe. 
Derivatives are examples of non-linear assets. Whereas Beta relies on a linear model, an out of the money option will have a distinctly non-linear payoff. In these cases, then, the change in price of an option relative to the change in the price of its underlying asset is not constant. (True also - but here, far less pronounced - for volatility, time to expiration, and other factors.) Thus "beta" here, calculated traditionally, would vary constantly as the price of the underlying changed. Accommodating this, mathematical finance defines a specific volatility beta. Here, analogous to the above, this beta represents the covariance between the derivative's return and changes in the value of the underlying asset, with, additionally, a correction for instantaneous underlying changes. See volatility (finance), volatility risk, . Empirical estimation. A true beta (which defines the true expected relationship between the rate of return on assets and the market) differs from a realized beta that is based on historical rates of returns and represents just one specific history out of the set of possible stock return realizations. The true market-beta is essentially the average outcome if infinitely many draws could be observed. On average, the best forecast of the realized market-beta is also the best forecast of the true market-beta. Estimators of market-beta have to wrestle with two important problems. First, the underlying market betas are known to move over time. Second, investors are interested in the best forecast of the true prevailing beta most indicative of the most likely "future beta" realization and not in the "historical market-beta". Despite these problems, a historical beta estimator remains an obvious benchmark predictor. It is obtained as the slope of the fitted line from the linear least-squares estimator. The OLS regression can be estimated on 1–5 years worth of daily, weekly or monthly stock returns. The choice depends on the trade off between accuracy of beta measurement (longer periodic measurement times and more years give more accurate results) and historic firm beta changes over time (for example, due to changing sales products or clients). Improved estimators. Other beta estimators reflect the tendency of betas (like rates of return) for regression toward the mean, induced not only by measurement error but also by underlying changes in the true beta and/or historical randomness. (Intuitively, one would not suggest a company with high return [e.g., a drug discovery] last year also to have as high a return next year.) Such estimators include the Blume/Bloomberg beta (used prominently on many financial websites), the Vasicek beta, the Scholes–Williams beta, the Dimson beta, and the Welch beta. These estimators attempt to uncover the instant prevailing market-beta. When long-term market-betas are required, further regression toward the mean over long horizons should be considered. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
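As a minimal illustration of the historical (OLS) beta estimator described above, the sketch below computes beta as the slope of a least-squares regression of an asset's returns on the market's returns (the ratio of their covariance to the market variance), and then uses the linearity of beta to form a portfolio beta. The return series are invented for illustration and are not data from any source cited here.

    import numpy as np

    # Hypothetical periodic returns (e.g. monthly), purely illustrative.
    r_market = np.array([0.021, -0.013, 0.034, 0.008, -0.027, 0.015, 0.041, -0.009])
    r_asset  = np.array([0.030, -0.021, 0.055, 0.010, -0.044, 0.020, 0.065, -0.018])

    # Market beta as the OLS slope: Cov(r_i, r_m) / Var(r_m).
    cov = np.cov(r_asset, r_market, ddof=1)      # 2x2 sample covariance matrix
    beta = cov[0, 1] / cov[1, 1]

    # The regression intercept is the (periodic) alpha.
    alpha = r_asset.mean() - beta * r_market.mean()

    print(f"beta  = {beta:.3f}")
    print(f"alpha = {alpha:.5f} per period")

    # Betas combine linearly: a portfolio of 80% in the asset and 20% in the market index.
    beta_portfolio = 0.8 * beta + 0.2 * 1.0
    print(f"portfolio beta = {beta_portfolio:.3f}")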
[ { "math_id": 0, "text": "\\beta_{i}" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "r_{i,t}" }, { "math_id": 4, "text": "r_{m,t}" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "r_{i,t} = \\alpha_{i} + \\beta_{i} \\cdot r_{m,t} + \\varepsilon_{t}" }, { "math_id": 7, "text": "\\varepsilon_{t}" }, { "math_id": 8, "text": "\\alpha_{i}" }, { "math_id": 9, "text": "\\beta_{i} = \\frac {\\operatorname{Cov}(r_{i},r_{m})}{\\operatorname{Var}(r_{m})}," }, { "math_id": 10, "text": "\\operatorname{Cov}" }, { "math_id": 11, "text": "\\operatorname{Var}" }, { "math_id": 12, "text": "\\sigma \\equiv \\sqrt{\\operatorname{Var}(r)}" }, { "math_id": 13, "text": "\\rho_{a,b} \\equiv \\frac{\\operatorname{Cov}(r_{a}, r_{b})}{\\sqrt{ \\operatorname{Var}(r_{a}) \\operatorname{Var}(r_{b}) }}" }, { "math_id": 14, "text": "\\beta_{i} = \\rho_{i,m}\\frac{\\sigma_{i}}{\\sigma_{m}}" }, { "math_id": 15, "text": "\\rho_{i,m}" }, { "math_id": 16, "text": "\\sigma_{i}" }, { "math_id": 17, "text": "\\sigma_{m}" }, { "math_id": 18, "text": "r_{p} = (1 - \\delta) r_{m} + \\delta r_{i}." }, { "math_id": 19, "text": "\\operatorname{Var}(r_{p}) = (1 - \\delta)^{2} \\operatorname{Var}(r_{m}) + 2 \\delta (1 - \\delta) \\operatorname{Cov}(r_{m}, r_{i}) + \\delta^{2} \\operatorname{Var}(r_{i}) ." }, { "math_id": 20, "text": "\\delta" }, { "math_id": 21, "text": "\\delta^{2}" }, { "math_id": 22, "text": "\\operatorname{Var}(r_{p}) \\approx (1 - 2 \\delta) \\operatorname{Var}(r_{m}) + 2 \\delta \\operatorname{Cov}(r_{m},r_{i})." }, { "math_id": 23, "text": "\\beta_{i} = \\operatorname{Cov}(r_{m}, r_{i}) / \\operatorname{Var}(r_{m})," }, { "math_id": 24, "text": "\\operatorname{Var}(r_{p}) / \\operatorname{Var}(r_{m}) \\approx 1 + 2 \\delta (\\beta_{i} - 1)." }, { "math_id": 25, "text": "\\beta" }, { "math_id": 26, "text": "r_{p} = w_{a} \\cdot r_{a} + w_{b} \\cdot r_{b} \\Rightarrow \\beta_{p,m} = w_{a} \\cdot \\beta_{a,m} + w_{b} \\cdot \\beta_{b,m} ." }, { "math_id": 27, "text": "\\text{rsw}_{i,d} \\in (-2 \\cdot r_{m,d}, 4 \\cdot r_{m,d}) " } ]
https://en.wikipedia.org/wiki?curid=897558
8975663
Coding gain
In coding theory, telecommunications engineering and other related engineering problems, coding gain is the measure in the difference between the signal-to-noise ratio (SNR) levels between the uncoded system and coded system required to reach the same bit error rate (BER) levels when used with the error correcting code (ECC). Example. If the uncoded BPSK system in AWGN environment has a bit error rate (BER) of 10−2 at the SNR level 4 dB, and the corresponding coded (e.g., BCH) system has the same BER at an SNR of 2.5 dB, then we say the "coding gain" = 4 dB − 2.5 dB = 1.5 dB, due to the code used (in this case BCH). Power-limited regime. In the "power-limited regime" (where the nominal spectral efficiency formula_0 [b/2D or b/s/Hz], "i.e." the domain of binary signaling), the effective coding gain formula_1 of a signal set formula_2 at a given target error probability per bit formula_3 is defined as the difference in dB between the formula_4 required to achieve the target formula_3 with formula_2 and the formula_4 required to achieve the target formula_3 with 2-PAM or (2×2)-QAM ("i.e." no coding). The nominal coding gain formula_5 is defined as formula_6 This definition is normalized so that formula_7 for 2-PAM or (2×2)-QAM. If the average number of nearest neighbors per transmitted bit formula_8 is equal to one, the effective coding gain formula_1 is approximately equal to the nominal coding gain formula_5. However, if formula_9, the effective coding gain formula_1 is less than the nominal coding gain formula_5 by an amount which depends on the steepness of the formula_3 "vs." formula_4 curve at the target formula_3. This curve can be plotted using the union bound estimate (UBE) formula_10 where "Q" is the Gaussian probability-of-error function. For the special case of a binary linear block code formula_11 with parameters formula_12, the nominal spectral efficiency is formula_13 and the nominal coding gain is "kd"/"n". Example. The table below lists the nominal spectral efficiency, nominal coding gain and effective coding gain at formula_14 for Reed–Muller codes of length formula_15: Bandwidth-limited regime. In the "bandwidth-limited regime" (formula_16, "i.e." the domain of non-binary signaling), the effective coding gain formula_1 of a signal set formula_2 at a given target error rate formula_17 is defined as the difference in dB between the formula_18 required to achieve the target formula_17 with formula_2 and the formula_18 required to achieve the target formula_17 with M-PAM or (M×M)-QAM ("i.e." no coding). The nominal coding gain formula_5 is defined as formula_19 This definition is normalized so that formula_7 for M-PAM or ("M"×"M")-QAM. The UBE becomes formula_20 where formula_21 is the average number of nearest neighbors per two dimensions. References. MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes sections 5.3, 5.5, 6.3, 6.4
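The quantities defined above are straightforward to compute. The following sketch is an illustrative addition (the code, the chosen example code parameters, and the nearest-neighbour count are assumptions for demonstration, not values taken from the cited lecture notes): it evaluates the nominal coding gain "kd"/"n" of a binary linear block code in dB and the union bound estimate of the bit error probability in the power-limited regime.

    import math

    def q_function(x):
        # Gaussian tail probability Q(x)
        return 0.5 * math.erfc(x / math.sqrt(2))

    def nominal_coding_gain_db(n, k, d):
        # Nominal coding gain k*d/n of an (n, k, d) binary linear block code, in dB
        return 10 * math.log10(k * d / n)

    def ube_bit_error_prob(gamma_c_db, ebn0_db, k_b=1.0):
        # Union bound estimate P_b(E) ~ K_b * Q(sqrt(2 * gamma_c * Eb/N0))
        gamma_c = 10 ** (gamma_c_db / 10)
        ebn0 = 10 ** (ebn0_db / 10)
        return k_b * q_function(math.sqrt(2 * gamma_c * ebn0))

    # Example: an (8, 4, 4) binary code, chosen here purely for illustration.
    n, k, d = 8, 4, 4
    gc_db = nominal_coding_gain_db(n, k, d)
    print(f"nominal coding gain = {gc_db:.2f} dB")      # 10*log10(2), about 3.01 dB

    # Compare the UBE at Eb/N0 = 6 dB with uncoded 2-PAM (gamma_c = 1, i.e. 0 dB).
    print(f"uncoded UBE: {ube_bit_error_prob(0.0, 6.0):.2e}")
    print(f"coded   UBE: {ube_bit_error_prob(gc_db, 6.0, k_b=3.5):.2e}")   # k_b assumed for illustration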
[ { "math_id": 0, "text": "\\rho \\le 2" }, { "math_id": 1, "text": "\\gamma_\\mathrm{eff}(A)" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "P_b(E)" }, { "math_id": 4, "text": "E_b/N_0" }, { "math_id": 5, "text": "\\gamma_c(A)" }, { "math_id": 6, "text": "\\gamma_c(A) = \\frac{d^2_{\\min}(A)}{4E_b}." }, { "math_id": 7, "text": "\\gamma_c(A) = 1" }, { "math_id": 8, "text": "K_b(A)" }, { "math_id": 9, "text": "K_b(A)>1" }, { "math_id": 10, "text": "P_b(E) \\approx K_b(A)Q\\left(\\sqrt{\\frac{2\\gamma_c(A)E_b}{N_0}}\\right)," }, { "math_id": 11, "text": "C" }, { "math_id": 12, "text": "(n,k,d)" }, { "math_id": 13, "text": "\\rho = 2k/n " }, { "math_id": 14, "text": "P_b(E) \\approx 10^{-5}" }, { "math_id": 15, "text": "n \\le 64" }, { "math_id": 16, "text": "\\rho > 2~b/2D" }, { "math_id": 17, "text": "P_s(E)" }, { "math_id": 18, "text": "SNR_\\mathrm{norm}" }, { "math_id": 19, "text": "\\gamma_c(A) = {(2^\\rho - 1)d^2_{\\min} (A) \\over 6E_s}." }, { "math_id": 20, "text": "P_s(E) \\approx K_s(A)Q\\sqrt{3\\gamma_c(A)SNR_\\mathrm{norm}}," }, { "math_id": 21, "text": "K_s(A)" } ]
https://en.wikipedia.org/wiki?curid=8975663
897658
Derivation (differential algebra)
Algebraic generalization of the derivative In mathematics, a derivation is a function on an algebra that generalizes certain features of the derivative operator. Specifically, given an algebra "A" over a ring or a field "K", a "K"-derivation is a "K"-linear map "D" : "A" → "A" that satisfies Leibniz's law: formula_0 More generally, if "M" is an "A"-bimodule, a "K"-linear map "D" : "A" → "M" that satisfies the Leibniz law is also called a derivation. The collection of all "K"-derivations of "A" to itself is denoted by Der"K"("A"). The collection of "K"-derivations of "A" into an "A"-module "M" is denoted by Der"K"("A", "M"). Derivations occur in many different contexts in diverse areas of mathematics. The partial derivative with respect to a variable is an R-derivation on the algebra of real-valued differentiable functions on R"n". The Lie derivative with respect to a vector field is an R-derivation on the algebra of differentiable functions on a differentiable manifold; more generally it is a derivation on the tensor algebra of a manifold. It follows that the adjoint representation of a Lie algebra is a derivation on that algebra. The Pincherle derivative is an example of a derivation in abstract algebra. If the algebra "A" is noncommutative, then the commutator with respect to an element of the algebra "A" defines a linear endomorphism of "A" to itself, which is a derivation over "K". That is, formula_1 where formula_2 is the commutator with respect to formula_3. An algebra "A" equipped with a distinguished derivation "d" forms a differential algebra, and is itself a significant object of study in areas such as differential Galois theory. Properties. If "A" is a "K"-algebra, for "K" a ring, and "D": "A" → "A" is a "K"-derivation, then the Leibniz law extends by induction to products of several factors: formula_4 which is formula_5 if for all "i", "D"("xi") commutes with formula_6. For "n" > 1, the iterate "D""n" is not itself a derivation; instead it satisfies a higher-order Leibniz rule: formula_7 Moreover, if "M" is an "A"-bimodule, write formula_8 for the set of "K"-derivations from "A" to "M". The "K"-derivations of "A" to itself form a Lie algebra under the commutator bracket formula_9 since it is readily verified that the commutator of two derivations is again a derivation. When "A" is commutative, every "K"-derivation "D" from "A" to an "A"-module "M" factors through the module of Kähler differentials via its universal derivation "d": formula_10 The correspondence formula_11 is an isomorphism of "A"-modules: formula_12 If "k" is a subring of "K", then "A" inherits a "k"-algebra structure, so there is an inclusion formula_13 since any "K"-derivation is "a fortiori" a "k"-derivation. Graded derivations. Given a graded algebra "A" and a homogeneous linear map "D" of grade |"D"| on "A", "D" is a homogeneous derivation if formula_14 for every homogeneous element "a" and every element "b" of "A", for a commutator factor "ε" = ±1. A graded derivation is a sum of homogeneous derivations with the same "ε". If "ε" = 1, this definition reduces to the usual case. If "ε" = −1, however, then formula_15 for odd |"D"|, and "D" is called an anti-derivation. Examples of anti-derivations include the exterior derivative and the interior product acting on differential forms. Graded derivations of superalgebras (i.e. Z2-graded algebras) are often called superderivations. Related notions. Hasse–Schmidt derivations are "K"-algebra homomorphisms formula_16 Composing further with the map which sends a formal power series formula_17 to the coefficient formula_18 gives a derivation.
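As a small computational illustration of these definitions, the sketch below is an addition (not part of the original text); it uses Python with sympy to check Leibniz's law and the higher-order Leibniz rule for the ordinary derivative "d"/"dx", which is an R-derivation on the polynomial algebra R["x"].

    import sympy as sp

    x = sp.symbols('x')
    D = lambda f: sp.diff(f, x)          # D = d/dx, an R-derivation on R[x]

    u = 3*x**2 + x + 1
    v = x**3 - 2*x

    # Leibniz's law: D(uv) = u*D(v) + D(u)*v
    print(sp.expand(D(u*v) - (u*D(v) + D(u)*v)))        # prints 0

    # Higher-order Leibniz rule: D^n(uv) = sum_k C(n,k) * D^(n-k)(u) * D^k(v)
    n = 3
    lhs = sp.diff(u*v, x, n)
    rhs = sum(sp.binomial(n, k) * sp.diff(u, x, n - k) * sp.diff(v, x, k) for k in range(n + 1))
    print(sp.expand(lhs - rhs))                          # prints 0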
[ { "math_id": 0, "text": " D(ab) = a D(b) + D(a) b." }, { "math_id": 1, "text": "[FG,N]=[F,N]G+F[G,N]," }, { "math_id": 2, "text": "[\\cdot,N]" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "D(x_1x_2\\cdots x_n) = \\sum_i x_1\\cdots x_{i-1}D(x_i)x_{i+1}\\cdots x_n " }, { "math_id": 5, "text": "\\sum_i D(x_i)\\prod_{j\\neq i}x_j" }, { "math_id": 6, "text": "x_1,x_2,\\ldots, x_{i-1}" }, { "math_id": 7, "text": "D^n(uv) = \\sum_{k=0}^n \\binom{n}{k} \\cdot D^{n-k}(u)\\cdot D^k(v)." }, { "math_id": 8, "text": " \\operatorname{Der}_K(A,M)" }, { "math_id": 9, "text": "[D_1,D_2] = D_1\\circ D_2 - D_2\\circ D_1." }, { "math_id": 10, "text": " D: A\\stackrel{d}{\\longrightarrow} \\Omega_{A/K}\\stackrel{\\varphi}{\\longrightarrow} M " }, { "math_id": 11, "text": " D\\leftrightarrow \\varphi" }, { "math_id": 12, "text": " \\operatorname{Der}_K(A,M)\\simeq \\operatorname{Hom}_{A}(\\Omega_{A/K},M)" }, { "math_id": 13, "text": "\\operatorname{Der}_K(A,M)\\subset \\operatorname{Der}_k(A,M) ," }, { "math_id": 14, "text": "{D(ab)=D(a)b+\\varepsilon^{|a||D|}aD(b)}" }, { "math_id": 15, "text": "{D(ab)=D(a)b+(-1)^{|a|}aD(b)}" }, { "math_id": 16, "text": "A \\to A[[t]]." }, { "math_id": 17, "text": "\\sum a_n t^n" }, { "math_id": 18, "text": "a_1" } ]
https://en.wikipedia.org/wiki?curid=897658
897724
Schreier's lemma
In mathematics, Schreier's lemma is a theorem in group theory used in the Schreier–Sims algorithm and also for finding a presentation of a subgroup. Statement. Suppose formula_0 is a subgroup of formula_1, which is finitely generated with generating set formula_2, that is, formula_3. Let formula_4 be a right transversal of formula_0 in formula_1. In other words, formula_4 is (the image of) a section of the quotient map formula_5, where formula_6 denotes the set of right cosets of formula_0 in formula_1. Given formula_7, let formula_8 denote the chosen representative in the transversal formula_4 of the coset formula_9; that is, formula_10 Then formula_0 is generated by the set formula_11 Hence, in particular, Schreier's lemma implies that every subgroup of finite index of a finitely generated group is again finitely generated. Example. The group Z3 = Z/3Z is cyclic. Via Cayley's theorem, Z3 is a subgroup of the symmetric group "S"3. Now, formula_12 formula_13 where formula_14 is the identity permutation. Note "S"3 = formula_15{ "s"1=(1 2), "s"2 = (1 2 3) }formula_16. Z3 has just two cosets, Z3 and "S"3 \ Z3, so we select the transversal { "t"1 = "e", "t"2=(1 2) }, and we have formula_17 Finally, formula_18 formula_19 formula_20 formula_21 Thus, by Schreier's subgroup lemma, { e, (1 2 3) } generates Z3, but having the identity in the generating set is redundant, so it can be removed to obtain another generating set for Z3, { (1 2 3) } (as expected).
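The computation in the example can be mechanized. The sketch below is a small self-contained Python illustration of the lemma for H = Z3 inside S3; the tuple representation of permutations and the left-to-right composition convention are choices made for this sketch, so the individual Schreier generators it prints may be written differently from those in the worked example above, but they still generate H.

```python
# Permutations of {0, 1, 2} stored as tuples p with p[i] the image of i.
def mul(p, q):                       # (p * q)(i) = q(p(i)): apply p, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e  = (0, 1, 2)
s1 = (1, 0, 2)                       # the transposition (1 2)
s2 = (1, 2, 0)                       # the 3-cycle (1 2 3)
H  = {e, (1, 2, 0), (2, 0, 1)}       # Z_3, the subgroup generated by the 3-cycle
R  = [e, s1]                         # a right transversal of H in S_3

def rep(g):
    """Chosen representative in R of the right coset H g."""
    return next(t for t in R if mul(g, inv(t)) in H)

schreier = {mul(mul(r, s), inv(rep(mul(r, s)))) for r in R for s in (s1, s2)}
print(schreier)                      # Schreier generators of H (identity included)

# Sanity check: the subgroup generated by the Schreier generators is all of H.
gen = {e}
while True:
    new = {mul(a, b) for a in gen for b in gen | schreier} | gen
    if new == gen:
        break
    gen = new
assert gen == H
```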
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "G = \\langle S\\rangle" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "G \\to H\\backslash G" }, { "math_id": 6, "text": "H\\backslash G" }, { "math_id": 7, "text": "g\\in G" }, { "math_id": 8, "text": "\\overline{g}" }, { "math_id": 9, "text": "Hg" }, { "math_id": 10, "text": "g\\in H\\overline{g}." }, { "math_id": 11, "text": "\\{rs(\\overline{rs})^{-1}|r\\in R, s\\in S\\}." }, { "math_id": 12, "text": "\\mathbb{Z}_3=\\{ e, (1\\ 2\\ 3), (1\\ 3\\ 2) \\}" }, { "math_id": 13, "text": "S_3= \\{ e, (1\\ 2), (1\\ 3), (2\\ 3), (1\\ 2\\ 3), (1\\ 3\\ 2) \\}" }, { "math_id": 14, "text": "e" }, { "math_id": 15, "text": "\\scriptstyle\\langle" }, { "math_id": 16, "text": "\\scriptstyle\\rangle" }, { "math_id": 17, "text": "\\begin{matrix}\nt_1s_1 = (1\\ 2),&\\quad\\text{so}\\quad&\\overline{t_1s_1} = (1\\ 2)\\\\\nt_1s_2 = (1\\ 2\\ 3) ,&\\quad\\text{so}\\quad& \\overline{t_1s_2} = e\\\\\nt_2s_1 = e ,&\\quad\\text{so}\\quad& \\overline{t_2s_1} = e\\\\\nt_2s_2 = (2\\ 3) ,&\\quad\\text{so}\\quad& \\overline{t_2s_2} = (1\\ 2). \\\\\n\\end{matrix}" }, { "math_id": 18, "text": "t_1s_1\\overline{t_1s_1}^{-1} = e" }, { "math_id": 19, "text": "t_1s_2\\overline{t_1s_2}^{-1} = (1\\ 2\\ 3)" }, { "math_id": 20, "text": "t_2s_1\\overline{t_2s_1}^{-1} = e " }, { "math_id": 21, "text": "t_2s_2\\overline{t_2s_2}^{-1} = (1\\ 2\\ 3)." } ]
https://en.wikipedia.org/wiki?curid=897724
897733
Transversal (combinatorics)
Set that intersects every one of a family of sets In mathematics, particularly in combinatorics, given a family of sets, here called a collection "C", a transversal (also called a cross-section) is a set containing exactly one element from each member of the collection. When the sets of the collection are mutually disjoint, each element of the transversal corresponds to exactly one member of "C" (the set it is a member of). If the original sets are not disjoint, there are two possibilities for the definition of a transversal: in one, each set is assigned a representative and distinct sets receive distinct representatives, and the transversal is called a system of distinct representatives (SDR); in the other, the representatives are not required to be distinct, so the same element may represent several sets (a system of not-necessarily-distinct representatives). In computer science, computing transversals is useful in several application domains, with the input family of sets often being described as a hypergraph. Existence and number. A fundamental question in the study of SDRs is whether or not an SDR exists. Hall's marriage theorem gives necessary and sufficient conditions for a finite collection of sets, some possibly overlapping, to have a transversal. The condition is that, for every integer "k", every collection of "k" sets must collectively contain at least "k" different elements. The following refinement by H. J. Ryser gives lower bounds on the number of such SDRs. "Theorem". Let "S"1, "S"2, ..., "S""m" be a collection of sets such that formula_0 contains at least "k" elements for "k" = 1, 2, ..., "m" and for all "k"-combinations {formula_1} of the integers 1, 2, ..., "m" and suppose that each of these sets contains at least "t" elements. If "t" ≤ "m" then the collection has at least "t" ! SDRs, and if "t" > "m" then the collection has at least "t" ! / ("t" - "m")! SDRs. Relation to matching and covering. One can construct a bipartite graph in which the vertices on one side are the sets, the vertices on the other side are the elements, and the edges connect a set to the elements it contains. Then, a transversal (defined as a system of "distinct" representatives) is equivalent to a matching in this graph that covers every set-vertex. One can construct a hypergraph in which the vertices are the elements, and the hyperedges are the sets. Then, a transversal (defined as a system of "not-necessarily-distinct" representatives) is a vertex cover of this hypergraph. Examples. In group theory, given a subgroup "H" of a group "G", a right (respectively left) transversal is a set containing exactly one element from each right (respectively left) coset of "H". In this case, the "sets" (cosets) are mutually disjoint, i.e. the cosets form a partition of the group. As a particular case of the previous example, given a direct product of groups formula_2, "H" is a transversal for the cosets of "K". In general, since any equivalence relation on an arbitrary set gives rise to a partition, picking any representative from each equivalence class results in a transversal. Another instance of a partition-based transversal occurs when one considers the equivalence relation known as the (set-theoretic) kernel of a function, defined for a function formula_3 with domain "X" as the partition of the domain formula_4, which partitions the domain of "f" into equivalence classes such that all elements in a class map via "f" to the same value. If "f" is injective, there is only one transversal of formula_5. For a not-necessarily-injective "f", fixing a transversal "T" of formula_5 induces a one-to-one correspondence between "T" and the image of "f", henceforth denoted by formula_6.
Consequently, a function formula_7 is well defined by the property that for all "z" in formula_8 where "x" is the unique element in "T" such that formula_9; furthermore, "g" can be extended (not necessarily in a unique manner) so that it is defined on the whole codomain of "f" by picking arbitrary values for "g(z)" when "z" is outside the image of "f". It is a simple calculation to verify that "g" thus defined has the property that formula_10, which is the proof (when the domain and codomain of "f" are the same set) that the full transformation semigroup is a regular semigroup. formula_11 acts as a (not necessarily unique) quasi-inverse for "f"; within semigroup theory this is simply called an inverse. Note however that for an arbitrary "g" with the aforementioned property the "dual" equation formula_12 may not hold. However if we denote by formula_13, then "f" is a quasi-inverse of "h", i.e. formula_14. Common transversals. A common transversal of the collections "A" and "B" (where formula_15) is a set that is a transversal of both "A" and "B". The collections "A" and "B" have a common transversal if and only if, for all formula_16, formula_17 Generalizations. A partial transversal is a set containing at most one element from each member of the collection, or (in the stricter form of the concept) a set with an injection from the set to "C". The transversals of a finite collection "C" of finite sets form the basis sets of a matroid, the transversal matroid of "C". The independent sets of the transversal matroid are the partial transversals of "C". An independent transversal (also called a rainbow-independent set or independent system of representatives) is a transversal which is also an independent set of a given graph. To explain the difference in figurative terms, consider a faculty with "m" departments, where the faculty dean wants to construct a committee of "m" members, one member per department. Such a committee is a transversal. But now, suppose that some faculty members dislike each other and do not agree to sit in the committee together. In this case, the committee must be an independent transversal, where the underlying graph describes the "dislike" relations. Another generalization of the concept of a transversal would be a set that just has a non-empty intersection with each member of "C". An example of the latter would be a Bernstein set, which is defined as a set that has a non-empty intersection with each set of "C", but contains no set of "C", where "C" is the collection of all perfect sets of a topological Polish space. As another example, let "C" consist of all the lines of a projective plane, then a blocking set in this plane is a set of points which intersects each line but contains no line. Category theory. In the language of category theory, a transversal of a collection of mutually disjoint sets is a section of the quotient map induced by the collection. Computational complexity. The computational complexity of computing all transversals of an input family of sets has been studied, in particular in the framework of enumeration algorithms. References. <templatestyles src="Reflist/styles.css" />
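The connection to bipartite matching described above also gives a practical way to compute a system of distinct representatives. The following is a minimal sketch using the standard augmenting-path (Kuhn) approach; the example families are arbitrary illustrations, not taken from the article.

```python
def find_sdr(sets):
    """Return a list of distinct representatives, one per set, or None."""
    match = {}                       # element -> index of the set it represents

    def try_assign(i, seen):
        for x in sets[i]:
            if x in seen:
                continue
            seen.add(x)
            # x is free, or the set currently using x can be re-represented
            if x not in match or try_assign(match[x], seen):
                match[x] = i
                return True
        return False

    for i in range(len(sets)):
        if not try_assign(i, set()):
            return None              # Hall's condition fails for some subfamily
    reps = [None] * len(sets)
    for x, i in match.items():
        reps[i] = x
    return reps

family = [{1, 2}, {2, 3}, {1, 3}, {3, 4}]
print(find_sdr(family))                    # one valid SDR, e.g. [2, 3, 1, 4]
print(find_sdr([{1, 2}, {1, 2}, {1, 2}]))  # None: three sets, only two elements
```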
[ { "math_id": 0, "text": "S_{i_1} \\cup S_{i_2} \\cup \\dots \\cup S_{i_k}" }, { "math_id": 1, "text": "i_1, i_2, \\ldots, i_k" }, { "math_id": 2, "text": "G = H \\times K" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\operatorname{ker} f := \\left\\{\\, \\left\\{\\, y \\in X \\mid f(x)=f(y) \\,\\right\\} \\mid x \\in X \\,\\right\\}" }, { "math_id": 5, "text": "\\operatorname{ker} f" }, { "math_id": 6, "text": "\\operatorname{Im}f" }, { "math_id": 7, "text": "g: (\\operatorname{Im} f) \\to T" }, { "math_id": 8, "text": "\\operatorname{Im} f, g(z)=x" }, { "math_id": 9, "text": "f(x)=z" }, { "math_id": 10, "text": "f\\circ g \\circ f = f" }, { "math_id": 11, "text": "g" }, { "math_id": 12, "text": "g \\circ f \\circ g= g" }, { "math_id": 13, "text": "h= g \\circ f \\circ g" }, { "math_id": 14, "text": "h \\circ f \\circ h = h" }, { "math_id": 15, "text": "|A| = |B| = n" }, { "math_id": 16, "text": "I, J \\subset \\{1,...,n\\}" }, { "math_id": 17, "text": "|(\\bigcup_{i \\in I}A_i) \\cap (\\bigcup_{j \\in J}B_j)| \\geq |I|+|J|-n" } ]
https://en.wikipedia.org/wiki?curid=897733
8978774
Quantum nonlocality
Deviations from local realism In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not allow an interpretation with local realism. Quantum nonlocality has been experimentally verified under a variety of physical assumptions. Any physical theory that aims at superseding or replacing quantum theory should account for such experiments and therefore cannot fulfill local realism; quantum nonlocality is a property of the universe that is independent of our description of nature. Quantum nonlocality does not allow for faster-than-light communication, and hence is compatible with special relativity and its universal speed limit of objects. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. Still, it prompts many of the foundational discussions concerning quantum theory. History. Einstein, Podolsky and Rosen. In the 1935 EPR paper, Albert Einstein, Boris Podolsky and Nathan Rosen described "two spatially separated particles which have both perfectly correlated positions and momenta" as a direct consequence of quantum theory. They intended to use the classical principle of locality to challenge the idea that the quantum wavefunction was a complete description of reality, but instead they sparked a debate on the nature of reality. Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. The quantum state of the two particles prior to measurement can be written as formula_0 where formula_1. Here, subscripts “A” and “B” distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of measurements performed by the experimentalists. Alice, for example, will measure her particle to be spin-up in an average of fifty percent of measurements. However, according to the Copenhagen interpretation, Alice's measurement causes the state of the two particles to collapse, so that if Alice performs a measurement of spin in the z-direction, that is with respect to the basis formula_2, then Bob's system will be left in one of the states formula_3. Likewise, if Alice performs a measurement of spin in the x-direction, that is, with respect to the basis formula_4, then Bob's system will be left in one of the states formula_5. Schrödinger referred to this phenomenon as "steering". This steering occurs in such a way that no signal can be sent by performing such a state update; quantum nonlocality cannot be used to send messages instantaneously and is therefore not in direct conflict with causality concerns in special relativity. In the Copenhagen view of this experiment, Alice's measurement—and particularly her measurement choice—has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states formula_6 or formula_7, since Alice can make a measurement that concludes with one of those states being the quantum description of his system. 
At the same time, it must also be compatible with one of the quantum states formula_8 or formula_9 for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system. Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concludes: <templatestyles src="Template:Blockquote/styles.css" />While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible. Although various authors (most notably Niels Bohr) criticised the ambiguous terminology of the EPR paper, the thought experiment nevertheless generated a great deal of interest. Their notion of a "complete description" was later formalised by the suggestion of hidden variables that determine the statistics of measurement results, but to which an observer does not have access. Bohmian mechanics provides such a completion of quantum mechanics, with the introduction of hidden variables; however the theory is explicitly nonlocal. The interpretation therefore does not give an answer to Einstein's question, which was whether or not a complete description of quantum mechanics could be given in terms of local hidden variables in keeping with the "Principle of Local Action". Bell inequality. In 1964 John Bell answered Einstein's question by showing that such local hidden variables can never reproduce the full range of statistical outcomes predicted by quantum theory. Bell showed that a local hidden variable hypothesis leads to restrictions on the strength of correlations of measurement results. If the Bell inequalities are violated experimentally as predicted by quantum mechanics, then reality cannot be described by local hidden variables and the mystery of quantum nonlocal causation remains. However, Bell notes that the non-local hidden variable model of Bohm are different: <templatestyles src="Template:Blockquote/styles.css" />This [grossly nonlocal structure] is characteristic ... of any such theory which reproduces exactly the quantum mechanical predictions. Clauser, Horne, Shimony and Holt (CHSH) reformulated these inequalities in a manner that was more conducive to experimental testing (see CHSH inequality). In the scenario proposed by Bell (a Bell scenario), two experimentalists, Alice and Bob, conduct experiments in separate labs. At each run, Alice (Bob) conducts an experiment formula_10 formula_11 in her (his) lab, obtaining outcome formula_12 formula_13. If Alice and Bob repeat their experiments several times, then they can estimate the probabilities formula_14, namely, the probability that Alice and Bob respectively observe the results formula_15 when they respectively conduct the experiments x,y. In the following, each such set of probabilities formula_16 will be denoted by just formula_14. In the quantum nonlocality slang, formula_14 is termed a box. Bell formalized the idea of a hidden variable by introducing the parameter formula_17 to locally characterize measurement results on each system: "It is a matter of indifference ... whether λ denotes a single variable or a set ... and whether the variables are discrete or continuous". 
However, it is equivalent (and more intuitive) to think of formula_17 as a local "strategy" or "message" that occurs with some probability formula_18 when Alice and Bob reboot their experimental setup. Bell's assumption of local causality then stipulates that each local strategy defines the distributions of independent outcomes if Alice conducts experiment x and Bob conducts experiment formula_19: formula_20 Here formula_21 (formula_22) denotes the probability that Alice (Bob) obtains the result formula_12 formula_23 when she (he) conducts experiment formula_24 formula_11 and the local variable describing her (his) experiment has value formula_25 (formula_26). Suppose that formula_27 can take values from some set formula_28. If each pair of values formula_29 has an associated probability formula_30 of being selected (shared randomness is allowed, i.e., formula_27 can be correlated), then one can average over this distribution to obtain a formula for the joint probability of each measurement result: formula_31 A box admitting such a decomposition is called a Bell local or a classical box. Fixing the number of possible values which formula_32 can each take, one can represent each box formula_14 as a finite vector with entries formula_33. In that representation, the set of all classical boxes forms a convex polytope. In the Bell scenario studied by CHSH, where formula_32 can take values within formula_34, any Bell local box formula_35 must satisfy the CHSH inequality: formula_36 where formula_37 The above considerations apply to model a quantum experiment. Consider two parties conducting local polarization measurements on a bipartite photonic state. The measurement result for the polarization of a photon can take one of two values (informally, whether the photon is polarized in that direction, or in the orthogonal direction). If each party is allowed to choose between just two different polarization directions, the experiment fits within the CHSH scenario. As noted by CHSH, there exist a quantum state and polarization directions which generate a box formula_35 with formula_38 equal to formula_39. This demonstrates an explicit way in which a theory with ontological states that are local, with local measurements and only local actions cannot match the probabilistic predictions of quantum theory, disproving Einstein's hypothesis. Experimentalists such as Alain Aspect have verified the quantum violation of the CHSH inequality as well as other formulations of Bell's inequality, to invalidate the local hidden variables hypothesis and confirm that reality is indeed nonlocal in the EPR sense. Possibilistic nonlocality. Bell's demonstration is probabilistic in the sense that it shows that the precise probabilities predicted by quantum mechanics for some entangled scenarios cannot be met by a local hidden variable theory. (For short, here and henceforth "local theory" means "local hidden variables theory".) However, quantum mechanics permits an even stronger violation of local theories: a possibilistic one, in which local theories cannot even agree with quantum mechanics on which events are possible or impossible in an entangled scenario. The first proof of this kind was due to Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1993 The state involved is often called the GHZ state. In 1993, Lucien Hardy demonstrated a logical proof of quantum nonlocality that, like the GHZ proof is a possibilistic proof. 
It starts with the observation that the state formula_40 defined below can be written in a few suggestive ways: formula_41 where, as above, formula_42. The experiment consists of this entangled state being shared between two experimenters, each of whom has the ability to measure either with respect to the basis formula_43 or formula_44. We see that if they each measure with respect to formula_43, then they never see the outcome formula_45. If one measures with respect to formula_43 and the other formula_44, they never see the outcomes formula_46 formula_47 However, sometimes they see the outcome formula_48 when measuring with respect to formula_44, since formula_49 This leads to the paradox: having the outcome formula_50 we conclude that if one of the experimenters had measured with respect to the formula_43 basis instead, the outcome must have been formula_51 or formula_52, since formula_53 and formula_54 are impossible. But then, if they had both measured with respect to the formula_43 basis, by locality the result must have been formula_45, which is also impossible. Nonlocal hidden variable models with a finite propagation speed. The work of Bancal et al. generalizes Bell's result by proving that correlations achievable in quantum theory are also incompatible with a large class of superluminal hidden variable models. In this framework, faster-than-light signaling is precluded. However, the choice of settings of one party can influence hidden variables at another party's distant location, if there is enough time for a superluminal influence (of finite, but otherwise unknown speed) to propagate from one point to the other. In this scenario, any bipartite experiment revealing Bell nonlocality can just provide lower bounds on the hidden influence's propagation speed. Quantum experiments with three or more parties can, nonetheless, disprove all such non-local hidden variable models. Analogs of Bell’s theorem in more complicated causal structures. The random variables measured in a general experiment can depend on each other in complicated ways. In the field of causal inference, such dependencies are represented via Bayesian networks: directed acyclic graphs where each node represents a variable and an edge from a variable to another signifies that the former influences the latter and not otherwise, see the figure. In a standard bipartite Bell experiment, Alice's (Bob's) setting formula_24 (formula_55), together with her (his) local variable formula_25 (formula_26), influence her (his) local outcome formula_12 (formula_56). Bell's theorem can thus be interpreted as a separation between the quantum and classical predictions in a type of causal structures with just one hidden node formula_57. Similar separations have been established in other types of causal structures. The characterization of the boundaries for classical correlations in such extended Bell scenarios is challenging, but there exist complete practical computational methods to achieve it. Entanglement and nonlocality. Quantum nonlocality is sometimes understood as being equivalent to entanglement. However, this is not the case. Quantum entanglement can be defined only within the formalism of quantum mechanics, i.e., it is a model-dependent property. In contrast, nonlocality refers to the impossibility of a description of observed statistics in terms of a local hidden variable model, so it is independent of the physical model used to describe the experiment. 
It is true that for any pure entangled state there exists a choice of measurements that produce Bell nonlocal correlations, but the situation is more complex for mixed states. While any Bell nonlocal state must be entangled, there exist (mixed) entangled states which do not produce Bell nonlocal correlations (although, operating on several copies of some such states, or carrying out local post-selections, it is possible to witness nonlocal effects). Moreover, while there are catalysts for entanglement, there are none for nonlocality. Finally, reasonably simple examples of Bell inequalities have been found for which the quantum state giving the largest violation is never a maximally entangled state, showing that entanglement is, in some sense, not even proportional to nonlocality. Quantum correlations. As shown, the statistics achievable by two or more parties conducting experiments in a classical system are constrained in a non-trivial way. Analogously, the statistics achievable by separate observers in a quantum theory also happen to be restricted. The first derivation of a non-trivial statistical limit on the set of quantum correlations, due to B. Tsirelson, is known as Tsirelson's bound. Consider the CHSH Bell scenario detailed before, but this time assume that, in their experiments, Alice and Bob are preparing and measuring quantum systems. In that case, the CHSH parameter can be shown to be bounded by formula_58 The sets of quantum correlations and Tsirelson’s problem. Mathematically, a box formula_35 admits a quantum realization if and only if there exists a pair of Hilbert spaces formula_59, a normalized vector formula_60 and projection operators formula_61 such that, for all settings formula_62, the sets of projectors formula_63 define complete measurements, i.e. formula_64, and the box statistics are recovered as formula_65. In the following, the set of such boxes will be called formula_66. Contrary to the classical set of correlations, when viewed in probability space, formula_66 is not a polytope. On the contrary, it contains both straight and curved boundaries. In addition, formula_66 is not closed: this means that there exist boxes formula_35 which can be arbitrarily well approximated by quantum systems but are themselves not quantum. In the above definition, the space-like separation of the two parties conducting the Bell experiment was modeled by imposing that their associated operator algebras act on different factors formula_59 of the overall Hilbert space formula_67 describing the experiment. Alternatively, one could model space-like separation by imposing that these two algebras commute. This leads to a different definition: formula_35 admits a field quantum realization if and only if there exists a Hilbert space formula_68, a normalized vector formula_69 and projection operators formula_70 such that the measurements are complete, formula_71, the statistics are given by formula_72, and Alice's and Bob's operators commute, formula_73, for all formula_74. Call formula_75 the set of all such correlations formula_35. How does this new set relate to the more conventional formula_66 defined above? It can be proven that formula_75 is closed. Moreover, formula_76, where formula_77 denotes the closure of formula_66. Tsirelson's problem consists in deciding whether the inclusion relation formula_76 is strict, i.e., whether or not formula_78. This problem only appears in infinite dimensions: when the Hilbert space formula_68 in the definition of formula_75 is constrained to be finite-dimensional, the closure of the corresponding set equals formula_77. In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed a result in quantum complexity theory that would imply that formula_79, thus solving Tsirelson's problem.
Tsirelson's problem can be shown equivalent to the Connes embedding problem, a famous conjecture in the theory of operator algebras. Characterization of quantum correlations. Since the dimensions of formula_80 and formula_81 are, in principle, unbounded, determining whether a given box formula_35 admits a quantum realization is a complicated problem. In fact, the dual problem of establishing whether a quantum box can have a perfect score at a non-local game is known to be undecidable. Moreover, the problem of deciding whether formula_35 can be approximated by a quantum system with precision formula_82 is NP-hard. Characterizing quantum boxes is equivalent to characterizing the cone of completely positive semidefinite matrices under a set of linear constraints. For small fixed dimensions formula_83, one can explore, using variational methods, whether formula_35 can be realized in a bipartite quantum system formula_84, with formula_85, formula_86. That method, however, can just be used to prove the realizability of formula_35, and not its unrealizability with quantum systems. To prove unrealizability, the best-known method is the Navascués–Pironio–Acín (NPA) hierarchy. This is an infinite decreasing sequence of sets of correlations formula_87 with the properties that if formula_88, then formula_89 for all formula_90; and that if formula_91, then there exists a formula_90 such that formula_92. Moreover, for any fixed formula_90, deciding whether formula_89 holds can be cast as a semidefinite program. The NPA hierarchy thus provides a computational characterization, not of formula_66, but of formula_75. If formula_93 (as claimed by Ji, Natarajan, Vidick, Wright, and Yuen), then a new method to detect the non-realizability of the correlations in formula_94 is needed. If Tsirelson's problem were solved in the affirmative, namely, formula_95, then the above two methods would provide a practical characterization of formula_77. The physics of supra-quantum correlations. The works listed above describe what the quantum set of correlations looks like, but they do not explain why. Are quantum correlations unavoidable, even in post-quantum physical theories, or on the contrary, could there exist correlations outside formula_77 which nonetheless do not lead to any unphysical operational behavior? In their seminal 1994 paper, Popescu and Rohrlich explore whether quantum correlations can be explained by appealing to relativistic causality alone. Namely, whether any hypothetical box formula_96 would allow building a device capable of transmitting information faster than the speed of light. At the level of correlations between two parties, Einstein's causality translates into the requirement that Alice's measurement choice should not affect Bob's statistics, and vice versa. Otherwise, Alice (Bob) could signal Bob (Alice) instantaneously by choosing her (his) measurement setting formula_24 formula_97 appropriately. Mathematically, Popescu and Rohrlich's no-signalling conditions are: formula_98 formula_99 Like the set of classical boxes, when represented in probability space, the set of no-signalling boxes forms a polytope. Popescu and Rohrlich identified a box formula_35 that, while complying with the no-signalling conditions, violates Tsirelson's bound, and is thus unrealizable in quantum physics. Dubbed the PR-box, it can be written as: formula_100 Here formula_32 take values in formula_34, and formula_101 denotes the sum modulo two. It can be verified that the CHSH value of this box is 4 (as opposed to the Tsirelson bound of formula_39). This box had been identified earlier, by Rastall, and by Khalfin and Tsirelson.
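The CHSH values of the boxes discussed above can be checked directly. The sketch below, assuming NumPy, evaluates the CHSH expression for a deterministic local box, for the quantum box obtained from the singlet state with one standard choice of measurement angles, and for the PR box; the specific angles and the deterministic strategy are illustrative choices.

```python
import numpy as np

def chsh(P):
    """CHSH value of a box P[x][y][a][b] with a, b, x, y in {0, 1}."""
    E = lambda x, y: sum((-1) ** (a + b) * P[x][y][a][b] for a in (0, 1) for b in (0, 1))
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# 1) A deterministic local box: both parties always output 0.
classical = np.zeros((2, 2, 2, 2))
classical[:, :, 0, 0] = 1.0
print(abs(chsh(classical)))          # 2.0, within the Bell local bound

# 2) Quantum box: singlet state, spin measurements in the x-z plane.
ket = np.array([0, 1, -1, 0]) / np.sqrt(2)          # (|01> - |10>)/sqrt(2)
Z, X = np.diag([1, -1]), np.array([[0, 1], [1, 0]])
def proj(theta, outcome):                            # projector onto eigenvalue (-1)^outcome
    O = np.cos(theta) * Z + np.sin(theta) * X
    return (np.eye(2) + (-1) ** outcome * O) / 2
alice, bob = [0, np.pi / 2], [np.pi / 4, -np.pi / 4]
quantum = np.array([[[[ket @ np.kron(proj(alice[x], a), proj(bob[y], b)) @ ket
                       for b in (0, 1)] for a in (0, 1)]
                     for y in (0, 1)] for x in (0, 1)])
print(abs(chsh(quantum)))            # ~2.828, Tsirelson's bound 2*sqrt(2)

# 3) The PR box: P(a, b | x, y) = 1/2 if a XOR b == x*y, else 0.
pr = np.array([[[[0.5 if (a ^ b) == (x & y) else 0.0
                  for b in (0, 1)] for a in (0, 1)]
                for y in (0, 1)] for x in (0, 1)])
print(abs(chsh(pr)))                 # 4.0, beyond the quantum bound
```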
In view of this mismatch, Popescu and Rohrlich pose the problem of identifying a physical principle, stronger than the no-signalling conditions, that allows deriving the set of quantum correlations. Several proposals followed, among them Non-Trivial Communication Complexity (NTCC), No Advantage for Nonlocal Computation (NANLC), Information Causality (IC), Macroscopic Locality (ML) and Local Orthogonality (LO). All these principles can be experimentally falsified under the assumption that we can decide if two or more events are space-like separated. This sets this research program apart from the axiomatic reconstruction of quantum mechanics via Generalized Probabilistic Theories. The works above rely on the implicit assumption that any physical set of correlations must be closed under wirings. This means that any effective box built by combining the inputs and outputs of a number of boxes within the considered set must also belong to the set. Closure under wirings does not seem to enforce any limit on the maximum value of CHSH. However, it is not a void principle: on the contrary, it has been shown that many simple, intuitive families of sets of correlations in probability space happen to violate it. Originally, it was unknown whether any of these principles (or a subset thereof) was strong enough to derive all the constraints defining formula_77. This state of affairs continued for some years until the construction of the almost quantum set formula_118. formula_118 is a set of correlations that is closed under wirings and can be characterized via semidefinite programming. It contains all correlations in formula_119, but also some non-quantum boxes formula_91. Remarkably, all boxes within the almost quantum set are shown to be compatible with the principles of NTCC, NANLC, ML and LO. There is also numerical evidence that almost-quantum boxes comply with IC. It seems, therefore, that, even when the above principles are taken together, they do not suffice to single out the quantum set in the simplest Bell scenario of two parties, two inputs and two outputs. Device independent protocols. Nonlocality can be exploited to conduct quantum information tasks which do not rely on the knowledge of the inner workings of the preparation and measurement apparatuses involved in the experiment. The security or reliability of any such protocol just depends on the strength of the experimentally measured correlations formula_35. These protocols are termed device-independent. Device-independent quantum key distribution. The first device-independent protocol proposed was device-independent quantum key distribution (QKD). In this primitive, two distant parties, Alice and Bob, share an entangled quantum state, which they probe, thus obtaining the statistics formula_35. Based on how non-local the box formula_35 happens to be, Alice and Bob estimate how much knowledge an external quantum adversary Eve (the eavesdropper) could possess about the values of Alice and Bob's outputs. This estimation allows them to devise a reconciliation protocol at the end of which Alice and Bob share a perfectly correlated one-time pad of which Eve has no information whatsoever. The one-time pad can then be used to transmit a secret message through a public channel. Although the first security analyses on device-independent QKD relied on Eve carrying out a specific family of attacks, all such protocols have been recently proven unconditionally secure. Device-independent randomness certification, expansion and amplification. Nonlocality can be used to certify that the outcomes of one of the parties in a Bell experiment are partially unknown to an external adversary.
By feeding a partially random seed to several non-local boxes, and, after processing the outputs, one can end up with a longer (potentially unbounded) string of comparable randomness or with a shorter but more random string. This last primitive can be proven impossible in a classical setting. Device-independent (DI) randomness certification, expansion, and amplification are techniques used to generate high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography, where high-quality random numbers are essential for ensuring the security of cryptographic protocols. Randomness certification is the process of verifying that the output of a random number generator is truly random and has not been tampered with by an adversary. DI randomness certification does this verification without making assumptions about the underlying devices that generate random numbers. Instead, randomness is certified by observing correlations between the outputs of different devices that are generated using the same physical process. Recent research has demonstrated the feasibility of DI randomness certification using entangled quantum systems, such as photons or electrons. Randomness expansion is taking a small amount of initial random seed and expanding it into a much larger sequence of random numbers. In DI randomness expansion, the expansion is done using measurements of quantum systems that are prepared in a highly entangled state. The security of the expansion is guaranteed by the laws of quantum mechanics, which make it impossible for an adversary to predict the expansion output. Recent research has shown that DI randomness expansion can be achieved using entangled photon pairs and measurement devices that violate a Bell inequality. Randomness amplification is the process of taking a small amount of initial random seed and increasing its randomness by using a cryptographic algorithm. In DI randomness amplification, this process is done using entanglement properties and quantum mechanics. The security of the amplification is guaranteed by the fact that any attempt by an adversary to manipulate the algorithm's output will inevitably introduce errors that can be detected and corrected. Recent research has demonstrated the feasibility of DI randomness amplification using quantum entanglement and the violation of a Bell inequality. DI randomness certification, expansion, and amplification are powerful techniques for generating high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography and are likely to become increasingly crucial as quantum computing technology advances. In addition, a milder approach called semi-DI exists where random numbers can be generated with some assumptions on the working principle of the devices, environment, dimension, energy, etc., in which it benefits from ease-of-implementation and high generation rate. Self-testing. Sometimes, the box formula_35 shared by Alice and Bob is such that it only admits a unique quantum realization. This means that there exist measurement operators formula_120 and a quantum state formula_121 giving rise to formula_35 such that any other physical realization formula_122 of formula_35 is connected to formula_123 via local unitary transformations. 
This phenomenon, that can be interpreted as an instance of device-independent quantum tomography, was first pointed out by Tsirelson and named self-testing by Mayers and Yao. Self-testing is known to be robust against systematic noise, i.e., if the experimentally measured statistics are close enough to formula_35, one can still determine the underlying state and measurement operators up to error bars. Dimension witnesses. The degree of non-locality of a quantum box formula_35 can also provide lower bounds on the Hilbert space dimension of the local systems accessible to Alice and Bob. This problem is equivalent to deciding the existence of a matrix with low completely positive semidefinite rank. Finding lower bounds on the Hilbert space dimension based on statistics happens to be a hard task, and current general methods only provide very low estimates. However, a Bell scenario with five inputs and three outputs suffices to provide arbitrarily high lower bounds on the underlying Hilbert space dimension. Quantum communication protocols which assume a knowledge of the local dimension of Alice and Bob's systems, but otherwise do not make claims on the mathematical description of the preparation and measuring devices involved are termed semi-device independent protocols. Currently, there exist semi-device independent protocols for quantum key distribution and randomness expansion. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\left|\\psi_{AB}\\right\\rang =\\frac{1}{\\sqrt{2}} \\left(\\left|0\\right\\rang_A \\left|1\\right\\rang_B -\n\\left|1\\right\\rang_A \\left|0\\right\\rang_B \\right)\n=\\frac{1}{\\sqrt{2}} \\left(\\left|-\\right\\rang_A \\left|+\\right\\rang_B -\n\\left|+\\right\\rang_A \\left|-\\right\\rang_B \\right) " }, { "math_id": 1, "text": "\\left|\\pm\\right\\rangle=\\frac{1}{\\sqrt{2}}\\left(\\left|0\\right\\rangle\\pm\\left|1\\right\\rangle\\right)" }, { "math_id": 2, "text": "\\{\\left|0\\right\\rang_A, \\left|1\\right\\rang_A\\} " }, { "math_id": 3, "text": "\\{\\left|0\\right\\rang_B, \\left|1\\right\\rang_B\\} " }, { "math_id": 4, "text": "\\{\\left|+\\right\\rang_A, \\left|-\\right\\rang_A\\} " }, { "math_id": 5, "text": "\\{\\left|+\\right\\rang_B, \\left|-\\right\\rang_B\\} " }, { "math_id": 6, "text": "\\left|\\uparrow\\right\\rang_B" }, { "math_id": 7, "text": "\\left|\\downarrow\\right\\rang_B " }, { "math_id": 8, "text": "\\left|\\leftarrow\\right\\rang_B" }, { "math_id": 9, "text": "\\left|\\rightarrow\\right\\rang_B " }, { "math_id": 10, "text": "x " }, { "math_id": 11, "text": " (y) " }, { "math_id": 12, "text": "a" }, { "math_id": 13, "text": "(b) " }, { "math_id": 14, "text": "P(a,b|x,y) " }, { "math_id": 15, "text": "a, b" }, { "math_id": 16, "text": "\\{P(a,b|x,y):a,b,x,y\\}" }, { "math_id": 17, "text": "\\lambda " }, { "math_id": 18, "text": "\\rho(\\lambda) " }, { "math_id": 19, "text": "y " }, { "math_id": 20, "text": " P(a,b|x,y,\\lambda_A,\\lambda_B)=P_A(a|x,\\lambda_A) P_B(b|y,\\lambda_B)" }, { "math_id": 21, "text": "P_A(a|x, \\lambda_A)" }, { "math_id": 22, "text": "P_B(b|y, \\lambda_B)" }, { "math_id": 23, "text": " (b) " }, { "math_id": 24, "text": "x" }, { "math_id": 25, "text": "\\lambda_A" }, { "math_id": 26, "text": "\\lambda_B" }, { "math_id": 27, "text": "\\lambda_A,\\lambda_B" }, { "math_id": 28, "text": "\\Lambda" }, { "math_id": 29, "text": "\\lambda_A,\\lambda_B\\in\\Lambda" }, { "math_id": 30, "text": "\\rho(\\lambda_A,\\lambda_B)" }, { "math_id": 31, "text": "P(a,b|x,y) =\\sum_{\\lambda_A,\\lambda_B\\in\\Lambda}\\rho(\\lambda_A,\\lambda_B)P_A(a|x,\\lambda_A) P_B(b|y,\\lambda_B) " }, { "math_id": 32, "text": "a,b,x,y" }, { "math_id": 33, "text": "\\left(P(a,b|x,y)\\right)_{a,b,x,y}" }, { "math_id": 34, "text": "{0,1}" }, { "math_id": 35, "text": "P(a,b|x,y)" }, { "math_id": 36, "text": "S_{\\rm CHSH}\\equiv E(0,0)+E(1,0)+E(0,1)-E(1,1)\\leq 2," }, { "math_id": 37, "text": "E(x,y)\\equiv\\sum_{a,b=0,1}(-1)^{a+b}P(a,b|x,y)." 
}, { "math_id": 38, "text": "S_{\\rm CHSH}" }, { "math_id": 39, "text": "2\\sqrt{2}\\approx 2.828" }, { "math_id": 40, "text": "\\left| \\psi\\right\\rangle " }, { "math_id": 41, "text": "\\left|\\psi\\right\\rangle=\\frac{1}{\\sqrt{3}}\\left(\\left|00\\right\\rangle+\\left|01\\right\\rangle+\\left|10\\right\\rangle\\right)=\n\\frac{1}\\sqrt{3}\\left(\\sqrt{2}\\left|+0\\right\\rangle+\\frac{1}{\\sqrt{2}}\\left(\\left|+1\\right\\rangle+\\left|-1\\right\\rangle\\right)\\right)=\n\\frac{1}\\sqrt{3}\\left(\\sqrt{2}\\left|0+\\right\\rangle+\\frac{1}{\\sqrt{2}}\\left(\\left|1+\\right\\rangle+\\left|1-\\right\\rangle\\right)\\right)" }, { "math_id": 42, "text": "|\\pm\\rangle=\\tfrac{1}{\\sqrt{2}}(\\left|0\\right\\rangle\\pm\\left|1\\right\\rangle)" }, { "math_id": 43, "text": "\\{\\left|0\\right\\rangle,\\left|1\\right\\rangle\\}" }, { "math_id": 44, "text": "\\{\\left|+\\right\\rangle,\\left|-\\right\\rangle\\}" }, { "math_id": 45, "text": "\\left|11\\right\\rangle" }, { "math_id": 46, "text": "\\left|-0\\right\\rangle," }, { "math_id": 47, "text": "\\left|0-\\right\\rangle." }, { "math_id": 48, "text": "\\left|--\\right\\rangle" }, { "math_id": 49, "text": "\\langle--|\\psi\\rangle = -\\tfrac{1}{2\\sqrt3} \\ne 0." }, { "math_id": 50, "text": "|--\\rangle" }, { "math_id": 51, "text": "|{-}1\\rangle" }, { "math_id": 52, "text": "|1-\\rangle" }, { "math_id": 53, "text": "|{-}0\\rangle" }, { "math_id": 54, "text": "|0-\\rangle" }, { "math_id": 55, "text": "y" }, { "math_id": 56, "text": "b" }, { "math_id": 57, "text": "(\\lambda_A,\\lambda_B)" }, { "math_id": 58, "text": "-2\\sqrt{2}\\leq \\mathrm{CHSH}\\leq 2\\sqrt{2}." }, { "math_id": 59, "text": "H_A, H_B" }, { "math_id": 60, "text": "\\left|\\psi\\right\\rangle\\in H_A\\otimes H_B" }, { "math_id": 61, "text": "E^x_a:H_A\\to H_A, F^y_b:H_B\\to H_B" }, { "math_id": 62, "text": "x,y" }, { "math_id": 63, "text": "\\{E^x_a\\}_a,\\{F^y_b\\}_b" }, { "math_id": 64, "text": "\\sum_aE^x_a={\\mathbb I}_A, \\sum_bF^y_b={\\mathbb I}_B" }, { "math_id": 65, "text": "P(a,b|x,y) =\\left\\langle\\psi\\right|E^x_a\\otimes F^y_b\\left|\\psi\\right\\rangle" }, { "math_id": 66, "text": "Q" }, { "math_id": 67, "text": "H=H_A\\otimes H_B" }, { "math_id": 68, "text": "H" }, { "math_id": 69, "text": "\\left|\\psi\\right\\rangle\\in H" }, { "math_id": 70, "text": "E^x_a:H\\to H, F^y_b:H\\to H" }, { "math_id": 71, "text": "\\sum_aE^x_a={\\mathbb I}, \\sum_bF^y_b={\\mathbb I} " }, { "math_id": 72, "text": "P(a,b|x,y) =\\left\\langle\\psi\\right|E^x_a F^y_b\\left|\\psi\\right\\rangle" }, { "math_id": 73, "text": "[E^x_a, F^y_b]=0" }, { "math_id": 74, "text": " a,b,x,y" }, { "math_id": 75, "text": "Q_c" }, { "math_id": 76, "text": " \\bar{Q} \\subseteq Q_c" }, { "math_id": 77, "text": "\\bar{Q}" }, { "math_id": 78, "text": " \\bar{Q} = Q_c" }, { "math_id": 79, "text": "\\bar{Q} \\neq Q_c " }, { "math_id": 80, "text": "H_A" }, { "math_id": 81, "text": "H_B" }, { "math_id": 82, "text": "1/\\operatorname{poly}(|X||Y|)" }, { "math_id": 83, "text": "d_A, d_B" }, { "math_id": 84, "text": "H_A\\otimes H_B" }, { "math_id": 85, "text": "\\dim(H_A)=d_A" }, { "math_id": 86, "text": "\\dim(H_B)=d_B" }, { "math_id": 87, "text": "Q^1\\supset Q^2\\supset Q^3\\supset..." 
}, { "math_id": 88, "text": "P(a,b|x,y)\\in Q_c" }, { "math_id": 89, "text": "P(a,b|x,y)\\in Q^k" }, { "math_id": 90, "text": "k" }, { "math_id": 91, "text": "P(a,b|x,y)\\not\\in Q_c" }, { "math_id": 92, "text": "P(a,b|x,y)\\not\\in Q^k" }, { "math_id": 93, "text": "\\bar{Q}\\not=Q_c" }, { "math_id": 94, "text": "Q_c- \\bar{Q}" }, { "math_id": 95, "text": "\\bar{Q}=Q_c" }, { "math_id": 96, "text": "P(a,b|x,y)\\not\\in\\bar{Q}" }, { "math_id": 97, "text": "(y)" }, { "math_id": 98, "text": " \\sum_a P(a,b|x,y)= \\sum_a P(a,b|x^\\prime,y)=:P_B(b|y)," }, { "math_id": 99, "text": "\\sum_b P(a,b|x,y)= \\sum_b P(a,b|x,y^\\prime)=:P_A(a|x). " }, { "math_id": 100, "text": "P(a,b|x,y)=\\frac{1}{2}\\delta_{xy,a\\oplus b}." }, { "math_id": 101, "text": "a\\oplus b" }, { "math_id": 102, "text": "p>1/2" }, { "math_id": 103, "text": "2\\sqrt{2}\\left(\\frac{2}{\\sqrt{3}}-1\\right)\\approx 0.4377" }, { "math_id": 104, "text": " f_{0,1}^n\\to 1" }, { "math_id": 105, "text": "n" }, { "math_id": 106, "text": "a,b" }, { "math_id": 107, "text": "f(x\\oplus y)" }, { "math_id": 108, "text": "k\\in\\{1,...,n\\}" }, { "math_id": 109, "text": "x_k" }, { "math_id": 110, "text": "s" }, { "math_id": 111, "text": "x_1,...,x_n" }, { "math_id": 112, "text": "a_1,...,a_n" }, { "math_id": 113, "text": "(\\bar{a}|\\bar{x})" }, { "math_id": 114, "text": "(\\bar{a}^\\prime|\\bar{x}^\\prime)" }, { "math_id": 115, "text": "x_k=x_k^\\prime " }, { "math_id": 116, "text": "a_k\\not=a_k^\\prime " }, { "math_id": 117, "text": "0.052" }, { "math_id": 118, "text": "\\tilde{Q}" }, { "math_id": 119, "text": "Q_c\\supset \\bar{Q}" }, { "math_id": 120, "text": "E^x_a, F^y_b" }, { "math_id": 121, "text": "\\left|\\psi\\right\\rangle" }, { "math_id": 122, "text": " \\tilde{E}^x_a, \\tilde{F}^y_b ,\\left|\\tilde{\\psi}\\right\\rangle" }, { "math_id": 123, "text": " E^x_a, F^y_b ,\\left|\\psi\\right\\rangle" } ]
https://en.wikipedia.org/wiki?curid=8978774
8978968
Graceful labeling
Type of graph vertex labeling In graph theory, a graceful labeling of a graph with m edges is a labeling of its vertices with some subset of the integers from 0 to m inclusive, such that no two vertices share a label, and each edge is uniquely identified by the absolute difference between the labels of its endpoints, with this magnitude lying between 1 and m inclusive. A graph which admits a graceful labeling is called a graceful graph. The name "graceful labeling" is due to Solomon W. Golomb; this type of labeling was originally given the name β-labeling by Alexander Rosa in a 1967 paper on graph labelings. A major open problem in graph theory is the graceful tree conjecture or Ringel–Kotzig conjecture, named after Gerhard Ringel and Anton Kotzig, and sometimes abbreviated GTC (not to be confused with Kotzig's conjecture on regularly path connected graphs). It hypothesizes that all trees are graceful. It is still an open conjecture, although a related but weaker conjecture known as "Ringel's conjecture" was partially proven in 2020. Kotzig once called the effort to prove the conjecture a "disease". Another weaker version of graceful labelling is near-graceful labeling, in which the vertices can be labeled using some subset of the integers in [0, "m" + 1] such that no two vertices share a label, and each edge is uniquely identified by the absolute difference between the labels of its endpoints (this magnitude lies in [1, "m" + 1]). Another conjecture in graph theory is Rosa's conjecture, named after Alexander Rosa, which says that all triangular cacti are graceful or nearly-graceful. A graceful graph with "m" edges is conjectured to have no fewer than formula_0 vertices, due to sparse ruler results. This conjecture has been verified for all graphs with 213 or fewer edges.
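Checking whether a given labeling is graceful is straightforward. The sketch below verifies the definition for two standard examples, a path with three edges and the complete graph K4; the vertex names and the particular labels are just one known graceful labeling of each graph, chosen for illustration.

```python
def is_graceful(edges, label):
    """Vertex labels must be distinct values in {0, ..., m} and the edge
    labels |label(u) - label(v)| must be exactly {1, ..., m}."""
    m = len(edges)
    vertex_labels = set(label.values())
    if len(vertex_labels) != len(label) or not vertex_labels <= set(range(m + 1)):
        return False
    edge_labels = sorted(abs(label[u] - label[v]) for u, v in edges)
    return edge_labels == list(range(1, m + 1))

# Path on 4 vertices (a tree with 3 edges), labeled gracefully:
path = [("a", "b"), ("b", "c"), ("c", "d")]
print(is_graceful(path, {"a": 0, "b": 3, "c": 1, "d": 2}))   # True

# Complete graph K4 (6 edges) with the classical labels 0, 1, 4, 6:
k4 = [(u, v) for i, u in enumerate("pqrs") for v in "pqrs"[i + 1:]]
print(is_graceful(k4, {"p": 0, "q": 1, "r": 4, "s": 6}))     # True
```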
[ { "math_id": 0, "text": " \\left\\lceil \\sqrt{3 m+\\tfrac{9}{4}} \\right\\rfloor " } ]
https://en.wikipedia.org/wiki?curid=8978968
8979437
Stochastic approximation
Family of iterative methods Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations. In a nutshell, stochastic approximation algorithms deal with a function of the form formula_0 which is the expected value of a function depending on a random variable formula_1. The goal is to recover properties of such a function formula_2 without evaluating it directly. Instead, stochastic approximation algorithms use random samples of formula_3 to efficiently approximate properties of formula_2 such as zeros or extrema. Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with big data. These applications range from stochastic optimization methods and algorithms, to online forms of the EM algorithm, reinforcement learning via temporal differences, deep learning, and others. Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory. The earliest, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms introduced respectively in 1951 and 1952. Robbins–Monro algorithm. The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a function formula_4, and a constant formula_5, such that the equation formula_6 has a unique root at formula_7. It is assumed that while we cannot directly observe the function formula_4, we can instead obtain measurements of the random variable formula_8 where formula_9. The structure of the algorithm is to then generate iterates of the form: formula_10 Here, formula_11 is a sequence of positive step sizes. Robbins and Monro proved (in their Theorem 2) that formula_12 converges in formula_13 (and hence also in probability) to formula_14, and Blum later proved that the convergence is actually with probability one, provided that: formula_17 A particular sequence of step sizes satisfying these conditions, suggested by Robbins and Monro, has the form: formula_18, for formula_19. Other series are possible, but in order to average out the noise in formula_8, the above condition must be met.
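A minimal sketch of the Robbins–Monro recursion just described: it estimates the 95% quantile of a standard normal distribution using only noisy indicator observations, with step sizes of the classical form c/n. The target level, the step-size constant and the iteration count are illustrative assumptions, not part of the original algorithm specification.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, c = 0.95, 10.0                    # target level and step-size constant
theta = 0.0                              # arbitrary starting point
for n in range(1, 200_001):
    # N(theta_n) = 1{X <= theta_n}, whose expectation is M(theta_n) = P(X <= theta_n)
    noisy = float(rng.standard_normal() <= theta)
    theta -= (c / n) * (noisy - alpha)   # Robbins-Monro update toward M(theta) = alpha
print(theta)                             # should be close to the true quantile, about 1.645
```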
Subsequent developments and Polyak–Ruppert averaging. While the Robbins–Monro algorithm is theoretically able to achieve formula_26 under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily due to the fact that the algorithm is very sensitive to the choice of the step size sequence, and the supposed asymptotically optimal step size policy can be quite harmful in the beginning. Chung (1954) and Fabian (1968) showed that we would achieve optimal convergence rate formula_25 with formula_27 (or formula_28). Lai and Robbins designed adaptive procedures to estimate formula_15 such that formula_29 has minimal asymptotic variance. However, the application of such optimal methods requires much a priori information which is hard to obtain in most situations. To overcome this shortfall, Polyak (1991) and Ruppert (1988) independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps, and averaging of the iterates. The algorithm would have the following structure: formula_30 The convergence of formula_31 to the unique root formula_14 relies on the condition that the step sequence formula_32 decreases sufficiently slowly. That is, A1) formula_33 Therefore, the sequence formula_34 with formula_35 satisfies this restriction, but formula_36 does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rate formula_25 yet with a more robust step size policy. Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate formula_25. A more general result is given in Chapter 11 of Kushner and Yin by defining interpolated time formula_37, interpolated process formula_38 and interpolated normalized process formula_39 as formula_40 Let the iterate average be formula_41 and the associated normalized error be formula_42. Suppose assumption A1) holds together with the following assumption: A2) "There is a Hurwitz matrix formula_43 and a symmetric and positive-definite matrix formula_44 such that formula_45 converges weakly to formula_46, where formula_46 is the stationary solution to formula_47 where formula_48 is a standard Wiener process." Define "formula_49". Then for each "formula_50", "formula_51" The success of the averaging idea is because of the time scale separation of the original sequence "formula_52" and the averaged sequence "formula_53", with the time scale of the former one being faster. Application in stochastic optimization. Suppose we want to solve the following stochastic optimization problem formula_54 where formula_55 is differentiable and convex; then this problem is equivalent to finding the root formula_14 of formula_56. Here formula_57 can be interpreted as some "observed" cost as a function of the chosen formula_58 and random effects formula_59. In practice, it might be hard to get an analytical form of formula_60. The Robbins–Monro method manages to generate a sequence formula_61 to approximate formula_14 if one can generate formula_62, in which the conditional expectation of formula_63 given formula_64 is exactly formula_65, i.e. formula_66 is simulated from a conditional distribution defined by formula_67 Here formula_68 is an unbiased estimator of formula_60. If formula_59 depends on formula_58, there is in general no natural way of generating a random outcome formula_68 that is an unbiased estimator of the gradient. In some special cases, when either IPA or likelihood ratio methods are applicable, one is able to obtain an unbiased gradient estimator formula_68.
If formula_59 is viewed as some "fundamental" underlying random process that is generated "independently" of formula_58, and under some regularization conditions for derivative-integral interchange operations so that formula_69, then formula_70 gives the fundamental gradient unbiased estimate. However, for some applications we have to use finite-difference methods in which formula_68 has a conditional expectation close to formula_60 but not exactly equal to it. We then define a recursion analogously to Newton's Method in the deterministic algorithm: formula_71 Convergence of the algorithm. The following result gives sufficient conditions on formula_72 for the algorithm to converge: C1) formula_73 C2) formula_74 C3) formula_75 C4) formula_76 C5) formula_77 formula_78 Then formula_79 converges to formula_80 almost surely. Here are some intuitive explanations about these conditions. Suppose formula_81 is a uniformly bounded random variables. If C2) is not satisfied, i.e. formula_82 , thenformula_83is a bounded sequence, so the iteration cannot converge to formula_80 if the initial guess formula_84 is too far away from formula_80. As for C3) note that if formula_79 converges to formula_80 then formula_85 so we must have formula_86 ,and the condition C3) ensures it. A natural choice would be formula_87. Condition C5) is a fairly stringent condition on the shape of formula_88; it gives the search direction of the algorithm. Example (where the stochastic gradient method is appropriate). Suppose formula_89, where formula_90 is differentiable and formula_91 is a random variable independent of formula_58. Then formula_92 depends on the mean of formula_59, and the stochastic gradient method would be appropriate in this problem. We can choose formula_93 Kiefer–Wolfowitz algorithm. The Kiefer–Wolfowitz algorithm was introduced in 1952 by Jacob Wolfowitz and Jack Kiefer, and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function. Let formula_94 be a function which has a maximum at the point formula_95. It is assumed that formula_96 is unknown; however, certain observations formula_97, where formula_98, can be made at any point formula_99. The structure of the algorithm follows a gradient-like method, with the iterates being generated as formula_100 where formula_101 and formula_102 are independent. At every step, the gradient of formula_96 is approximated akin to a central difference method with formula_103. So the sequence formula_104 specifies the sequence of finite difference widths used for the gradient approximation, while the sequence formula_32 specifies a sequence of positive step sizes taken along that direction. Kiefer and Wolfowitz proved that, if formula_96 satisfied certain regularity conditions, then formula_105 will converge to formula_58 in probability as formula_106, and later Blum in 1954 showed formula_105 converges to formula_58 almost surely, provided that: A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would be formula_123 and formula_124. Further developments. An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on. 
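For illustration, the Kiefer–Wolfowitz recursion with the recommended sequences a_n = 1/n and c_n = n^(-1/3) can be written as follows. The toy objective M(x) = -(x - 1)^2, the noise level and the function names are assumptions for the example only.

```python
import random

def kiefer_wolfowitz(observe, x0, n_steps=20_000):
    """Kiefer-Wolfowitz recursion:
    x_{n+1} = x_n + a_n * (N(x_n + c_n) - N(x_n - c_n)) / (2 c_n),
    with the sequences a_n = 1/n and c_n = n**(-1/3) recommended in the article."""
    x = x0
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n
        c_n = n ** (-1.0 / 3.0)
        grad_est = (observe(x + c_n) - observe(x - c_n)) / (2.0 * c_n)
        x += a_n * grad_est
    return x

# Toy problem: M(x) = -(x - 1)^2 has its maximum at x = 1; only noisy values N(x) are observed.
def noisy_objective(x):
    return -(x - 1.0) ** 2 + random.gauss(0.0, 0.1)

if __name__ == "__main__":
    random.seed(2)
    x_hat = kiefer_wolfowitz(noisy_objective, x0=-2.0)
    print(f"estimated maximizer: {x_hat:.3f} (true maximizer: 1.0)")
```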
These methods are also applied in control theory, in which case the unknown function to be optimized or whose zero is sought may vary in time. In this case, the step size formula_128 should not converge to zero but should be chosen so as to track the function. C. Johan Masreliez and R. Douglas Martin were the first to apply stochastic approximation to robust estimation. The main tool for analyzing stochastic approximation algorithms (including the Robbins–Monro and the Kiefer–Wolfowitz algorithms) is a theorem by Aryeh Dvoretzky published in 1956. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " f(\\theta) = \\operatorname E_{\\xi} [F(\\theta,\\xi)] " }, { "math_id": 1, "text": "\\xi " }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "F(\\theta,\\xi)" }, { "math_id": 4, "text": "M(\\theta)" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "M(\\theta) = \\alpha" }, { "math_id": 7, "text": "\\theta^*" }, { "math_id": 8, "text": "N(\\theta)" }, { "math_id": 9, "text": "\\operatorname E[N(\\theta)] = M(\\theta)" }, { "math_id": 10, "text": "\\theta_{n+1}=\\theta_n - a_n(N(\\theta_n) - \\alpha)" }, { "math_id": 11, "text": "a_1, a_2, \\dots" }, { "math_id": 12, "text": "\\theta_n" }, { "math_id": 13, "text": "L^2" }, { "math_id": 14, "text": "\\theta^*" }, { "math_id": 15, "text": "M'(\\theta^*)" }, { "math_id": 16, "text": "a_n" }, { "math_id": 17, "text": "\\qquad \\sum^{\\infty}_{n=0}a_n = \\infty \\quad \\mbox{ and } \\quad \\sum^{\\infty}_{n=0}a^2_n < \\infty \\quad " }, { "math_id": 18, "text": "a_n=a/n" }, { "math_id": 19, "text": " a > 0 " }, { "math_id": 20, "text": "f(\\theta)" }, { "math_id": 21, "text": "\\Theta" }, { "math_id": 22, "text": "\\operatorname E[f(\\theta_n) - f^*] = O(1/n)" }, { "math_id": 23, "text": "f^*" }, { "math_id": 24, "text": "\\theta \\in \\Theta" }, { "math_id": 25, "text": "O(1/\\sqrt{n})" }, { "math_id": 26, "text": " O(1/n)" }, { "math_id": 27, "text": "a_n=\\bigtriangledown^2f(\\theta^*)^{-1}/n" }, { "math_id": 28, "text": "a_n=\\frac{1}{(nM'(\\theta^*))}" }, { "math_id": 29, "text": "\\theta_n" }, { "math_id": 30, "text": " \\theta_{n+1} - \\theta_n = a_n(\\alpha - N(\\theta_n)), \\qquad \\bar{\\theta}_n = \\frac{1}{n} \\sum^{n-1}_{i=0} \\theta_i " }, { "math_id": 31, "text": " \\bar{\\theta}_n " }, { "math_id": 32, "text": "\\{a_n\\}" }, { "math_id": 33, "text": " a_n \\rightarrow 0, \\qquad \\frac{a_n - a_{n+1}}{a_n} = o(a_n)" }, { "math_id": 34, "text": "a_n = n^{-\\alpha}" }, { "math_id": 35, "text": "0 < \\alpha < 1" }, { "math_id": 36, "text": "\\alpha = 1" }, { "math_id": 37, "text": "t_n=\\sum_{i=0}^{n-1}a_i" }, { "math_id": 38, "text": "\\theta^n(\\cdot)" }, { "math_id": 39, "text": "U^n(\\cdot)" }, { "math_id": 40, "text": "\\theta^n(t)=\\theta_{n+i},\\quad U^n(t)=(\\theta_{n+i}-\\theta^*)/\\sqrt{a_{n+i}}\\quad\\mbox{for}\\quad t\\in[t_{n+i}-t_n,t_{n+i+1}-t_n),i\\ge0" }, { "math_id": 41, "text": "\\Theta_n=\\frac{a_n}{t}\\sum_{i=n}^{n+t/a_n-1}\\theta_i" }, { "math_id": 42, "text": "\\hat{U}^n(t)=\\frac{\\sqrt{a_n}}{t}\\sum_{i=n}^{n+t/a_n-1}(\\theta_i-\\theta^*)" }, { "math_id": 43, "text": "A" }, { "math_id": 44, "text": "\\Sigma" }, { "math_id": 45, "text": "\\{U^n(\\cdot)\\}" }, { "math_id": 46, "text": "U(\\cdot)" }, { "math_id": 47, "text": "dU = AU \\, dt +\\Sigma^{1/2} \\, dw" }, { "math_id": 48, "text": "w(\\cdot)" }, { "math_id": 49, "text": "\\bar{V}=(A^{-1})'\\Sigma(A')^{-1}" }, { "math_id": 50, "text": "t" }, { "math_id": 51, "text": "\\hat{U}^n(t)\\stackrel{\\mathcal{D}}{\\longrightarrow}\\mathcal{N}(0,V_t),\\quad \\text{where}\\quad V_t=\\bar{V}/t+O(1/t^2)." 
}, { "math_id": 52, "text": "\\{\\theta_n\\}" }, { "math_id": 53, "text": "\\{\\Theta_n\\}" }, { "math_id": 54, "text": "g(\\theta^*) = \\min_{\\theta\\in\\Theta}\\operatorname{E}[Q(\\theta,X)]," }, { "math_id": 55, "text": "g(\\theta) = \\operatorname{E}[Q(\\theta,X)]" }, { "math_id": 56, "text": "\\nabla g(\\theta) = 0" }, { "math_id": 57, "text": "Q(\\theta,X)" }, { "math_id": 58, "text": "\\theta" }, { "math_id": 59, "text": "X" }, { "math_id": 60, "text": "\\nabla g(\\theta)" }, { "math_id": 61, "text": "(\\theta_n)_{n\\geq 0}" }, { "math_id": 62, "text": "(X_n)_{n\\geq 0}\n" }, { "math_id": 63, "text": "X_n\n\n" }, { "math_id": 64, "text": "\\theta_n\n" }, { "math_id": 65, "text": "\\nabla g(\\theta_n)" }, { "math_id": 66, "text": "X_n" }, { "math_id": 67, "text": "\\operatorname{E}[H(\\theta,X)|\\theta = \\theta_n] = \\nabla g(\\theta_n)." }, { "math_id": 68, "text": "H(\\theta, X)" }, { "math_id": 69, "text": "\\operatorname{E}\\Big[\\frac{\\partial}{\\partial\\theta}Q(\\theta,X)\\Big] = \\nabla g(\\theta)" }, { "math_id": 70, "text": "H(\\theta, X) = \\frac{\\partial}{\\partial \\theta}Q(\\theta, X)" }, { "math_id": 71, "text": "\\theta_{n+1} = \\theta_n - \\varepsilon_n H(\\theta_n,X_{n+1})." }, { "math_id": 72, "text": "\\theta_n\n\n " }, { "math_id": 73, "text": "\\varepsilon_n \\geq 0, \\forall\\; n\\geq 0. " }, { "math_id": 74, "text": "\\sum_{n=0}^\\infty \\varepsilon_n = \\infty " }, { "math_id": 75, "text": "\\sum_{n=0}^{\\infty}\\varepsilon_n^2 <\\infty " }, { "math_id": 76, "text": "|X_n| \\leq B, \\text{ for a fixed bound } B. " }, { "math_id": 77, "text": "g(\\theta) \\text{ is strictly convex, i.e.} " }, { "math_id": 78, "text": "\\inf_{\\delta\\leq |\\theta - \\theta^*|\\leq 1/\\delta}\\langle\\theta-\\theta^*, \\nabla g(\\theta)\\rangle > 0,\\text{ for every } 0< \\delta < 1.\n" }, { "math_id": 79, "text": "\\theta_n " }, { "math_id": 80, "text": "\\theta^* " }, { "math_id": 81, "text": "H(\\theta_n, X_{n+1})" }, { "math_id": 82, "text": "\\sum_{n=0}^\\infty \\varepsilon_n < \\infty\n " }, { "math_id": 83, "text": "\\theta_n - \\theta_0 = -\\sum_{i=0}^{n-1} \\varepsilon_i H(\\theta_i, X_{i+1})\n " }, { "math_id": 84, "text": "\\theta_0\n " }, { "math_id": 85, "text": "\\theta_{n+1} - \\theta_n = -\\varepsilon_n H(\\theta_n, X_{n+1}) \\rightarrow 0, \\text{ as } n\\rightarrow \\infty." }, { "math_id": 86, "text": "\\varepsilon_n \\downarrow 0 " }, { "math_id": 87, "text": "\\varepsilon_n = 1/n " }, { "math_id": 88, "text": "g(\\theta)" }, { "math_id": 89, "text": "Q(\\theta, X) = f(\\theta) + \\theta^T X" }, { "math_id": 90, "text": "f" }, { "math_id": 91, "text": "X\\in \\mathbb{R}^p" }, { "math_id": 92, "text": "g(\\theta)=\\operatorname{E}[Q(\\theta,X)] = f(\\theta)+\\theta^T\\operatorname{E}X" }, { "math_id": 93, "text": "H(\\theta, X) = \\frac{\\partial}{\\partial\\theta}Q(\\theta,X) = \\frac{\\partial}{\\partial\\theta}f(\\theta) + X." 
}, { "math_id": 94, "text": "M(x) " }, { "math_id": 95, "text": "\\theta " }, { "math_id": 96, "text": "M(x)" }, { "math_id": 97, "text": "N(x)" }, { "math_id": 98, "text": "\\operatorname E[N(x)] = M(x)" }, { "math_id": 99, "text": "x" }, { "math_id": 100, "text": " x_{n+1} = x_n + a_n\\cdot\\left(\\frac{N(x_n + c_n) - N(x_n -c_n)}{2 c_n} \\right) " }, { "math_id": 101, "text": "N(x_n+c_n)" }, { "math_id": 102, "text": "N(x_n-c_n)" }, { "math_id": 103, "text": "h=2c_n" }, { "math_id": 104, "text": "\\{c_n\\}" }, { "math_id": 105, "text": "x_n" }, { "math_id": 106, "text": "n\\to\\infty\n " }, { "math_id": 107, "text": "\\operatorname{Var}(N(x))\\le S<\\infty" }, { "math_id": 108, "text": "M(\\cdot)" }, { "math_id": 109, "text": "C_0 \\subset \\mathbb R^d" }, { "math_id": 110, "text": "\\beta>0" }, { "math_id": 111, "text": "B>0" }, { "math_id": 112, "text": "|x'-\\theta|+|x''-\\theta|<\\beta \\quad \\Longrightarrow \\quad |M(x')-M(x'')|<B|x'-x''|" }, { "math_id": 113, "text": " \\rho>0 " }, { "math_id": 114, "text": " R>0 " }, { "math_id": 115, "text": "|x'-x''|<\\rho \\quad \\Longrightarrow \\quad |M(x')-M(x'')|<R" }, { "math_id": 116, "text": " \\delta>0 " }, { "math_id": 117, "text": " \\pi(\\delta)>0 " }, { "math_id": 118, "text": "|z-\\theta|>\\delta \\quad \\Longrightarrow \\quad \\inf_{\\delta/2>\\varepsilon>0}\\frac{|M(z+\\varepsilon)-M(z-\\varepsilon)|}{\\varepsilon}>\\pi(\\delta)" }, { "math_id": 119, "text": "\\quad c_n \\rightarrow 0\\quad \\text{as}\\quad n\\to\\infty " }, { "math_id": 120, "text": " \\sum^\\infty_{n=0} a_n =\\infty " }, { "math_id": 121, "text": " \\sum^\\infty_{n=0} a_nc_n <\\infty " }, { "math_id": 122, "text": " \\sum^\\infty_{n=0} a_n^2c_n^{-2} <\\infty " }, { "math_id": 123, "text": "a_n = 1/n" }, { "math_id": 124, "text": "c_n = n^{-1/3}" }, { "math_id": 125, "text": "d+1" }, { "math_id": 126, "text": "d " }, { "math_id": 127, "text": "d" }, { "math_id": 128, "text": "a_n" } ]
https://en.wikipedia.org/wiki?curid=8979437
898010
Hodge cycle
Kind of homology class in differential geometry In differential geometry, a Hodge cycle or Hodge class is a particular kind of homology class defined on a complex algebraic variety "V", or more generally on a Kähler manifold. A homology class "x" in a homology group formula_0 where "V" is a non-singular complex algebraic variety or Kähler manifold is a Hodge cycle, provided it satisfies two conditions. Firstly, "k" is an even integer formula_1, and in the direct sum decomposition of "H" shown to exist in Hodge theory, "x" is purely of type formula_2. Secondly, "x" is a rational class, in the sense that it lies in the image of the abelian group homomorphism formula_3 defined in algebraic topology (as a special case of the universal coefficient theorem). The conventional term Hodge "cycle" therefore is slightly inaccurate, in that "x" is considered as a "class" (modulo boundaries); but this is normal usage. The importance of Hodge cycles lies primarily in the Hodge conjecture, to the effect that Hodge cycles should always be algebraic cycles, for "V" a complete algebraic variety. This is an unsolved problem, one of the Millennium Prize Problems. It is known that being a Hodge cycle is a necessary condition to be an algebraic cycle that is rational, and numerous particular cases of the conjecture are known.
[ { "math_id": 0, "text": "H_k(V, \\Complex) = H" }, { "math_id": 1, "text": "2p" }, { "math_id": 2, "text": "(p,p)" }, { "math_id": 3, "text": "H_k(V, \\Q) \\to H" } ]
https://en.wikipedia.org/wiki?curid=898010
8980593
Nonlinear conjugate gradient method
In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function formula_0 formula_1 the minimum of formula_2 is obtained when the gradient is 0: formula_3. Whereas linear conjugate gradient seeks a solution to the linear equation formula_4, the nonlinear conjugate gradient method is generally used to find the local minimum of a nonlinear function using its gradient formula_5 alone. It works when the function is approximately quadratic near the minimum, which is the case when the function is twice differentiable at the minimum and the second derivative is non-singular there. Given a function formula_0 of formula_6 variables to minimize, its gradient formula_5 indicates the direction of maximum increase. One simply starts in the opposite (steepest descent) direction: formula_7 with an adjustable step length formula_8 and performs a line search in this direction until it reaches the minimum of formula_9: formula_10, formula_11 After this first iteration in the steepest direction formula_12, the following steps constitute one iteration of moving along a subsequent conjugate direction formula_13, where formula_14: With a pure quadratic function the minimum is reached within "N" iterations (excepting roundoff error), but a non-quadratic function will make slower progress. Subsequent search directions lose conjugacy requiring the search direction to be reset to the steepest descent direction at least every "N" iterations, or sooner if progress stops. However, resetting every iteration turns the method into steepest descent. The algorithm stops when it finds the minimum, determined when no progress is made after a direction reset (i.e. in the steepest descent direction), or when some tolerance criterion is reached. Within a linear approximation, the parameters formula_8 and formula_20 are the same as in the linear conjugate gradient method but have been obtained with line searches. The conjugate gradient method can follow narrow (ill-conditioned) valleys, where the steepest descent method slows down and follows a criss-cross pattern. Four of the best known formulas for formula_16 are named after their developers: formula_21 formula_22 formula_23 formula_24. These formulas are equivalent for a quadratic function, but for nonlinear optimization the preferred formula is a matter of heuristics or taste. A popular choice is formula_25, which provides a direction reset automatically. Algorithms based on Newton's method potentially converge much faster. There, both step direction and length are computed from the gradient as the solution of a linear system of equations, with the coefficient matrix being the exact Hessian matrix (for Newton's method proper) or an estimate thereof (in the quasi-Newton methods, where the observed change in the gradient during the iterations is used to update the Hessian estimate). For high-dimensional problems, the exact computation of the Hessian is usually prohibitively expensive, and even its storage can be problematic, requiring formula_26 memory (but see the limited-memory L-BFGS quasi-Newton method). The conjugate gradient method can also be derived using optimal control theory. In this accelerated optimization theory, the conjugate gradient method falls out as a nonlinear optimal feedback controller, formula_27for the double integrator system, formula_28 The quantities formula_29 and formula_30 are variable feedback gains. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
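As a rough illustration of the iteration described in this article, the sketch below implements nonlinear conjugate gradient with the beta = max{0, beta_PR} choice, which provides automatic resets. It is not a reference implementation: the Rosenbrock test function, the crude backtracking (Armijo) line search used in place of an exact line search, and all parameter values are assumptions made for the example.

```python
import numpy as np

def rosenbrock(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
        200.0 * (x[1] - x[0] ** 2),
    ])

def armijo_step(f, x, d, g, alpha=1.0, shrink=0.5, c=1e-4):
    # Crude backtracking line search standing in for the exact minimization over alpha.
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d) and alpha > 1e-12:
        alpha *= shrink
    return alpha

def nonlinear_cg(f, grad_f, x0, max_iter=5000, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:                     # safeguard: keep d a descent direction
            d = -g
        alpha = armijo_step(f, x, d, g)
        x = x + alpha * d
        g_new = grad_f(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # beta = max{0, beta_PR}
        d = -g_new + beta * d
        g = g_new
    return x

if __name__ == "__main__":
    print("minimizer ~", np.round(nonlinear_cg(rosenbrock, rosenbrock_grad, [-1.2, 1.0]), 4))
    # The Rosenbrock function has its minimum at (1, 1).
```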
[ { "math_id": 0, "text": "\\displaystyle f(x)" }, { "math_id": 1, "text": "\\displaystyle f(x)=\\|Ax-b\\|^2," }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\\nabla_x f=2 A^T(Ax-b)=0" }, { "math_id": 4, "text": "\\displaystyle A^T Ax=A^T b" }, { "math_id": 5, "text": "\\nabla_x f" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "\\Delta x_0=-\\nabla_x f (x_0) " }, { "math_id": 8, "text": "\\displaystyle \\alpha" }, { "math_id": 9, "text": "\\displaystyle f" }, { "math_id": 10, "text": "\\displaystyle \\alpha_0:= \\arg \\min_\\alpha f(x_0+\\alpha \\Delta x_0)" }, { "math_id": 11, "text": "\\displaystyle x_1=x_0+\\alpha_0 \\Delta x_0" }, { "math_id": 12, "text": "\\displaystyle \\Delta x_0" }, { "math_id": 13, "text": "\\displaystyle s_n" }, { "math_id": 14, "text": "\\displaystyle s_0=\\Delta x_0" }, { "math_id": 15, "text": "\\Delta x_n=-\\nabla_x f (x_n) " }, { "math_id": 16, "text": "\\displaystyle \\beta_n" }, { "math_id": 17, "text": "\\displaystyle s_n=\\Delta x_n+\\beta_n s_{n-1}" }, { "math_id": 18, "text": "\\displaystyle \\alpha_n=\\arg \\min_{\\alpha} f(x_n+\\alpha s_n)" }, { "math_id": 19, "text": "\\displaystyle x_{n+1}=x_{n}+\\alpha_{n} s_{n}" }, { "math_id": 20, "text": "\\displaystyle \\beta" }, { "math_id": 21, "text": "\\beta_{n}^{FR} = \\frac{\\Delta x_n^T \\Delta x_n}\n{\\Delta x_{n-1}^T \\Delta x_{n-1}}.\n" }, { "math_id": 22, "text": "\\beta_{n}^{PR} = \\frac{\\Delta x_n^T (\\Delta x_n-\\Delta x_{n-1})}\n{\\Delta x_{n-1}^T \\Delta x_{n-1}}.\n" }, { "math_id": 23, "text": "\\beta_n^{HS} = \\frac{\\Delta x_n^T (\\Delta x_n-\\Delta x_{n-1})}\n{-s_{n-1}^T (\\Delta x_n-\\Delta x_{n-1})}.\n" }, { "math_id": 24, "text": "\\beta_{n}^{DY} = \\frac{\\Delta x_n^T \\Delta x_n}\n{-s_{n-1}^T (\\Delta x_n-\\Delta x_{n-1})}.\n" }, { "math_id": 25, "text": "\\displaystyle \\beta=\\max\\{0, \\beta^{PR}\\}" }, { "math_id": 26, "text": "O(N^2)" }, { "math_id": 27, "text": "u = k(x, \\dot x):= -\\gamma_a \\nabla_x f(x) - \\gamma_b \\dot x " }, { "math_id": 28, "text": "\\ddot x = u" }, { "math_id": 29, "text": "\\gamma_a > 0" }, { "math_id": 30, "text": "\\gamma_b > 0" } ]
https://en.wikipedia.org/wiki?curid=8980593
8981301
Vegard's law
In crystallography, materials science and metallurgy, Vegard's law is an empirical finding (heuristic approach) resembling the rule of mixtures. In 1921, Lars Vegard discovered that the lattice parameter of a solid solution of two constituents is approximately a weighted mean of the two constituents' lattice parameters at the same temperature: formula_0 "e.g.", in the case of a mixed oxide of uranium and plutonium as used in the fabrication of MOX nuclear fuel: formula_1 Vegard's law assumes that both components A and B in their pure form ("i.e.", before mixing) have the same crystal structure. Here, "a"A(1-"x")B"x" is the lattice parameter of the solid solution, "a"A and "a"B are the lattice parameters of the pure constituents, and "x" is the molar fraction of B in the solid solution. Vegard's law is seldom perfectly obeyed; often deviations from the linear behavior are observed. A detailed study of such deviations was conducted by King. However, it is often used in practice to obtain rough estimates when experimental data are not available for the lattice parameter for the system of interest. For systems known to approximately obey Vegard's law, the approximation may also be used to estimate the composition of a solution from knowledge of its lattice parameters, which are easily obtained from diffraction data. For example, consider the semiconductor compound InP"x"As(1-"x"). A relation exists between the constituent elements and their associated lattice parameters, "a", such that: formula_2 When variations in lattice parameter are very small across the entire composition range, Vegard's law becomes equivalent to Amagat's law. Relationship to band gaps in semiconductors. In many binary semiconducting systems, the band gap in semiconductors is approximately a linear function of the lattice parameter. Therefore, if the lattice parameter of a semiconducting system follows Vegard's law, one can also write a linear relationship between the band gap and composition. Using InP"x"As(1-"x") as before, the band gap energy, formula_3, can be written as: formula_4 Sometimes, the linear interpolation between the band gap energies is not accurate enough, and a second term to account for the curvature of the band gap energies as a function of composition is added. This curvature correction is characterized by the bowing parameter, "b": formula_5 Mineralogy. The following excerpt from Takashi Fujii (1960) summarises well the limits of the Vegard’s law in the context of mineralogy and also makes the link with the Gladstone–Dale equation: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; See also. When considering the empirical correlation of some physical properties and the chemical composition of solid compounds, other relationships, rules, or laws, also closely resembles the Vegard's law, and in fact the more general rule of mixtures: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
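A minimal numerical illustration of Vegard's law and of the band-gap interpolation with a bowing term is given below. The lattice constants, band-gap energies and bowing parameter used in the example are approximate, illustrative values only, and the function names are not standard.

```python
def vegard_lattice_parameter(a_A, a_B, x):
    # Vegard's law: a_{A(1-x)B(x)} = (1 - x) * a_A + x * a_B
    return (1.0 - x) * a_A + x * a_B

def band_gap_with_bowing(eg_InP, eg_InAs, x, b=0.0):
    # E_g(InP_x As_(1-x)) = x * E_g(InP) + (1 - x) * E_g(InAs) - b * x * (1 - x)
    return x * eg_InP + (1.0 - x) * eg_InAs - b * x * (1.0 - x)

if __name__ == "__main__":
    # Approximate room-temperature lattice constants in angstroms (illustrative values only).
    a_InAs, a_InP = 6.06, 5.87
    x = 0.5  # molar fraction of InP
    a_mix = vegard_lattice_parameter(a_InAs, a_InP, x)
    print(f"estimated lattice parameter of InP(0.5)As(0.5): {a_mix:.3f} angstrom")

    # Illustrative band-gap values in eV and an assumed bowing parameter.
    eg_mix = band_gap_with_bowing(eg_InP=1.34, eg_InAs=0.35, x=x, b=0.10)
    print(f"estimated band gap with bowing: {eg_mix:.2f} eV")
```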
[ { "math_id": 0, "text": "a_{\\mathrm{A}_{(1-x)}\\mathrm{B}_{x}} = (1-x)\\ a_\\mathrm{A} + x\\ a_\\mathrm{B}" }, { "math_id": 1, "text": "a_\\mathrm{U_{0.93}Pu_{0.07}O_{2}} = 0.93\\ a_\\mathrm{UO_2} + 0.07\\ a_\\mathrm{PuO_2}" }, { "math_id": 2, "text": "a_{\\mathrm{InP}_{x}\\mathrm{As}_{(1-x)}} = x\\ a_\\mathrm{InP} + (1-x)\\ a_\\mathrm{InAs}" }, { "math_id": 3, "text": "E_g" }, { "math_id": 4, "text": "E_{g,\\mathrm{InPAs}} = x\\ E_{g,\\mathrm{InP}}+(1-x)\\ E_{g,\\mathrm{InAs}}" }, { "math_id": 5, "text": "E_{g,\\mathrm{InPAs}} = x\\ E_{g,\\mathrm{InP}}+(1-x)\\ E_{g,\\mathrm{InAs}}-bx\\ (1-x)" } ]
https://en.wikipedia.org/wiki?curid=8981301
8983001
The Foundations of Arithmetic
Book by Gottlob Frege The Foundations of Arithmetic () is a book by Gottlob Frege, published in 1884, which investigates the philosophical foundations of arithmetic. Frege refutes other idealist and materialist theories of number and develops his own platonist theory of numbers. The "Grundlagen" also helped to motivate Frege's later works in logicism. The book was also seminal in the philosophy of language. Michael Dummett traces the linguistic turn to Frege's "Grundlagen" and his context principle. The book was not well received and was not read widely when it was published. It did, however, draw the attentions of Bertrand Russell and Ludwig Wittgenstein, who were both heavily influenced by Frege's philosophy. An English translation was published (Oxford, 1950) by J. L. Austin, with a second edition in 1960. In the enquiry that follows, I have kept to three fundamental principles: always to separate sharply the psychological from the logical, the subjective from the objective; never to ask for the meaning of a word in isolation, but only in the context of a proposition never to lose sight of the distinction between concept and object. Linguistic turn. In order to answer a Kantian question about numbers, "How are numbers given to us, granted that we have no idea or intuition of them?" Frege invokes his "context principle", stated at the beginning of the book, that only in the context of a proposition do words have meaning, and thus finds the solution to be in defining "the sense of a proposition in which a number word occurs." Thus an ontological and epistemological problem, traditionally solved along idealist lines, is instead solved along linguistic ones. Criticisms of predecessors. Psychologistic accounts of mathematics. Frege objects to any account of mathematics based on psychologism, that is, the view that mathematics and numbers are relative to the subjective thoughts of the people who think of them. According to Frege, psychological accounts appeal to what is subjective, while mathematics is purely objective: mathematics is completely independent from human thought. Mathematical entities, according to Frege, have objective properties regardless of humans thinking of them: it is not possible to think of mathematical statements as something that evolved naturally through human history and evolution. He sees a fundamental distinction between logic (and its extension, according to Frege, math) and psychology. Logic explains necessary facts, whereas psychology studies certain thought processes in individual minds. Ideas are private, so idealism about mathematics implies there is "my two" and "your two" rather than simply the number two. Kant. Frege greatly appreciates the work of Immanuel Kant. However, he criticizes him mainly on the grounds that numerical statements are not synthetic-a priori, but rather analytic-a priori. Kant claims that 7+5=12 is an unprovable synthetic statement. No matter how much we analyze the idea of 7+5 we will not find there the idea of 12. We must arrive at the idea of 12 by application to objects in the intuition. Kant points out that this becomes all the more clear with bigger numbers. Frege, on this point precisely, argues towards the opposite direction. Kant wrongly assumes that in a proposition containing "big" numbers we must count points or some such thing to assert their truth value. 
Frege argues that without ever having any intuition toward any of the numbers in the following equation: 654,768+436,382=1,091,150 we nevertheless can assert it is true. This is provided as evidence that such a proposition is analytic. While Frege agrees that geometry is indeed synthetic a priori, arithmetic must be analytic. Mill. Frege roundly criticizes the empiricism of John Stuart Mill. He claims that Mill's idea that numbers correspond to the various ways of splitting collections of objects into subcollections is inconsistent with confidence in calculations involving large numbers. He further quips, "thank goodness everything is not nailed down!" Frege also denies that Mill's philosophy deals adequately with the concept of zero. He goes on to argue that the operation of addition cannot be understood as referring to physical quantities, and that Mill's confusion on this point is a symptom of a larger problem of confounding the applications of arithmetic with arithmetic itself. Frege uses the example of a deck of cards to show numbers do not inhere in objects. Asking "how many" is nonsense without the further clarification of cards or suits or what, showing numbers belong to concepts, not to objects. Julius Caesar problem. The book contains Frege's famous anti-structuralist Julius Caesar problem. Frege contends a proper theory of mathematics would explain why Julius Caesar is not a number. Development of Frege's own view of a number. Frege makes a distinction between particular numerical statements such as 1+1=2, and general statements such as a+b=b+a. The latter are statements true of numbers just as well as the former. Therefore, it is necessary to ask for a definition of the concept of number itself. Frege investigates the possibility that number is determined in external things. He demonstrates how numbers function in natural language just as adjectives. "This desk has 5 drawers" is similar in form to "This desk has green drawers". The drawers being green is an objective fact, grounded in the external world. But this is not the case with 5. Frege argues that each drawer is on its own green, but not every drawer is 5. Frege urges us to remember that from this it does not follow that numbers may be subjective. Indeed, numbers are similar to colors at least in that both are wholly objective. Frege tells us that we can convert number statements where number words appear adjectivally (e.g., 'there are four horses') into statements where number terms appear as singular terms ('the number of horses is four'). Frege recommends such translations because he takes numbers to be objects. It makes no sense to ask whether any objects fall under 4. After Frege gives some reasons for thinking that numbers are objects, he concludes that statements of numbers are assertions about concepts. Frege takes this observation to be the fundamental thought of "Grundlagen". For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concept "horse in the barn". Frege attempts to explain our grasp of numbers through a contextual definition of the cardinality operation ('the number of...', or formula_0). He attempts to construct the content of a judgment involving numerical identity by relying on Hume's principle (which states that the number of Fs equals the number of Gs if and only if F and G are equinumerous, i.e. in one-one correspondence). 
He rejects this definition because it doesn't fix the truth value of identity statements when a singular term not of the form 'the number of Fs' flanks the identity sign. Frege goes on to give an explicit definition of number in terms of extensions of concepts, but expresses some hesitation. Frege's definition of a number. Frege argues that numbers are objects and assert something about a concept. Frege defines numbers as extensions of concepts. 'The number of F's' is defined as the extension of the concept "G is a concept that is equinumerous to F". The concept in question leads to an equivalence class of all concepts that have the number of F (including F). Frege defines 0 as the extension of the concept "being non self-identical". So, the number of this concept is the extension of the concept of all concepts that have no objects falling under them. The number 1 is the extension of being identical with 0. Legacy. The book was fundamental in the development of two main disciplines, the foundations of mathematics and philosophy. Although Bertrand Russell later found a major flaw in Frege's Basic Law V (this flaw is known as Russell's paradox, which is resolved by axiomatic set theory), the book was influential in subsequent developments, such as "Principia Mathematica". The book can also be considered the starting point in analytic philosophy, since it revolves mainly around the analysis of language, with the goal of clarifying the concept of number. Frege's views on mathematics are also a starting point on the philosophy of mathematics, since it introduces an innovative account on the epistemology of numbers and mathematics in general, known as logicism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " Nx: Fx " } ]
https://en.wikipedia.org/wiki?curid=8983001
898316
Conchoid of de Sluze
Family of algebraic curves of the form r = sec(θ) + a*cos(θ) In algebraic geometry, the conchoids of de Sluze are a family of plane curves studied in 1662 by Walloon mathematician René François Walter, baron de Sluze. The curves are defined by the polar equation formula_0 In cartesian coordinates, the curves satisfy the implicit equation formula_1 except that for "a" = 0 the implicit form has an acnode (0,0) not present in polar form. They are rational, circular, cubic plane curves. These expressions have an asymptote "x" = 1 (for "a" ≠ 0). The point most distant from the asymptote is (1 + "a", 0). (0,0) is a crunode for "a" &lt; −1. The area between the curve and the asymptote is, for "a" ≥ −1, formula_2 while for "a" &lt; −1, the area is formula_3 If "a" &lt; −1, the curve will have a loop. The area of the loop is formula_4 Four of the family have names of their own: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
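As a quick numerical check of the two forms given above, the sketch below generates points from the polar equation r = sec(theta) + a cos(theta) and evaluates the residual of the Cartesian form (x - 1)(x^2 + y^2) - a x^2, which should vanish on the curve. The parameter value and function names are illustrative assumptions.

```python
import math

def conchoid_point(theta, a):
    """Cartesian point on the conchoid of de Sluze, r = sec(theta) + a*cos(theta)."""
    r = 1.0 / math.cos(theta) + a * math.cos(theta)
    return r * math.cos(theta), r * math.sin(theta)

def implicit_residual(x, y, a):
    """Residual of the Cartesian form (x - 1)(x^2 + y^2) - a*x^2; zero on the curve."""
    return (x - 1.0) * (x * x + y * y) - a * x * x

if __name__ == "__main__":
    a = -2.0
    for theta in (0.3, 0.7, 1.1, -0.9):
        x, y = conchoid_point(theta, a)
        res = implicit_residual(x, y, a)
        print(f"theta={theta:+.1f}  (x, y)=({x:+.4f}, {y:+.4f})  residual={res:+.2e}")
```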
[ { "math_id": 0, "text": "r=\\sec\\theta+a\\cos\\theta \\,." }, { "math_id": 1, "text": "(x-1)(x^2+y^2)=ax^2 \\," }, { "math_id": 2, "text": "|a|(1+a/4)\\pi \\," }, { "math_id": 3, "text": "\\left(1-\\frac a2\\right)\\sqrt{-(a+1)}-a\\left(2+\\frac a2\\right)\\arcsin\\frac1{\\sqrt{-a}}." }, { "math_id": 4, "text": "\\left(2+\\frac a2\\right)a\\arccos\\frac1{\\sqrt{-a}} + \\left(1-\\frac a2\\right)\\sqrt{-(a+1)}." } ]
https://en.wikipedia.org/wiki?curid=898316
8983708
Lamé parameters
Material property in strain-stress relationship In continuum mechanics, Lamé parameters (also called the Lamé coefficients, Lamé constants or Lamé moduli) are two material-dependent quantities denoted by "λ" and "μ" that arise in strain-stress relationships. In general, "λ" and "μ" are individually referred to as "Lamé's first parameter" and "Lamé's second parameter", respectively. Other names are sometimes employed for one or both parameters, depending on context. For example, the parameter "μ" is referred to in fluid dynamics as the dynamic viscosity of a fluid (not expressed in the same units); whereas in the context of elasticity, "μ" is called the shear modulus, and is sometimes denoted by "G" instead of "μ". Typically the notation "G" is paired with the use of Young's modulus "E", while the notation "μ" is paired with the use of "λ". In homogeneous and isotropic materials, these define Hooke's law in 3D, formula_0 where σ is the stress tensor, ε the strain tensor, "I" the identity matrix and tr the trace function. Hooke's law may be written in terms of tensor components using index notation as formula_1 where δij is the Kronecker delta. The two parameters together constitute a parameterization of the elastic moduli for homogeneous isotropic media, popular in the mathematical literature, and are thus related to the other elastic moduli; for instance, the bulk modulus can be expressed as "K" = "λ" + 2"μ"/3. Relations for other moduli are found in the ("λ", "G") row of the conversions table at the end of this article. Although the shear modulus, "μ", must be positive, Lamé's first parameter, "λ", can in principle be negative; however, for most materials it is also positive. The parameters are named after Gabriel Lamé. They have the same dimension as stress and are usually given in the SI unit of stress, the pascal [Pa]. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
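For illustration, the sketch below converts Young's modulus and Poisson's ratio to the Lamé parameters and evaluates the isotropic Hooke's law sigma = 2 mu epsilon + lambda tr(epsilon) I for a simple strain state. The numerical values (roughly those of structural steel) and the function names are assumptions for the example, not reference data.

```python
import numpy as np

def lame_from_young_poisson(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu to the Lame parameters (lambda, mu)."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def hooke_stress(strain, lam, mu):
    """Isotropic Hooke's law: sigma = 2*mu*eps + lam*tr(eps)*I."""
    strain = np.asarray(strain, dtype=float)
    return 2.0 * mu * strain + lam * np.trace(strain) * np.eye(3)

if __name__ == "__main__":
    # Illustrative values roughly in the range quoted for structural steel (E in GPa).
    lam, mu = lame_from_young_poisson(E=200.0, nu=0.30)
    print(f"lambda = {lam:.1f} GPa, mu (shear modulus G) = {mu:.1f} GPa")
    print(f"bulk modulus K = lambda + 2*mu/3 = {lam + 2.0 * mu / 3.0:.1f} GPa")

    eps = np.diag([1e-3, 0.0, 0.0])          # uniaxial strain state
    print("stress tensor (GPa):\n", hooke_stress(eps, lam, mu))
```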
[ { "math_id": 0, "text": "\\boldsymbol{\\sigma} = 2\\mu \\boldsymbol{\\varepsilon} + \\lambda \\; \\operatorname{tr}(\\boldsymbol{\\varepsilon}) I," }, { "math_id": 1, "text": "\\sigma_{ij} = 2 \\mu \\varepsilon_{ij} + \\lambda \\delta_{ij} \\varepsilon_{kk}," } ]
https://en.wikipedia.org/wiki?curid=8983708
8984724
Dirichlet algebra
In mathematics, a Dirichlet algebra is a particular type of algebra associated to a compact Hausdorff space "X". It is a closed subalgebra of "C"("X"), the uniform algebra of bounded continuous functions on "X", whose real parts are dense in the algebra of bounded continuous real functions on "X". The concept was introduced by Andrew Gleason (1957). Example. Let formula_0 be the set of all rational functions that are continuous on formula_1; in other words functions that have no poles in formula_1. Then formula_2 is a *-subalgebra of formula_3, and of formula_4. If formula_5 is dense in formula_4, we say formula_0 is a Dirichlet algebra. It can be shown that if an operator formula_6 has formula_1 as a spectral set, and formula_0 is a Dirichlet algebra, then formula_6 has a normal boundary dilation. This generalises Sz.-Nagy's dilation theorem, which can be seen as a consequence of this by letting formula_7
[ { "math_id": 0, "text": "\\mathcal{R}(X)" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\mathcal{S} = \\mathcal{R}(X) + \\overline{\\mathcal{R}(X)}" }, { "math_id": 3, "text": "C(X)" }, { "math_id": 4, "text": "C\\left(\\partial X\\right)" }, { "math_id": 5, "text": "\\mathcal{S}" }, { "math_id": 6, "text": "T" }, { "math_id": 7, "text": "X=\\mathbb{D}." } ]
https://en.wikipedia.org/wiki?curid=8984724
898483
Semistable abelian variety
In algebraic geometry, a semistable abelian variety is an abelian variety defined over a global or local field, which is characterized by how it reduces at the primes of the field. For an abelian variety formula_0 defined over a field formula_1 with ring of integers formula_2, consider the Néron model of formula_0, which is a 'best possible' model of formula_0 defined over formula_2. This model may be represented as a scheme over formula_3 (cf. spectrum of a ring) for which the generic fibre constructed by means of the morphism formula_4 gives back formula_0. The Néron model is a smooth group scheme, so we can consider formula_5, the connected component of the Néron model which contains the identity for the group law. This is an open subgroup scheme of the Néron model. For a residue field formula_6, formula_7 is a group variety over formula_6, hence an extension of an abelian variety by a linear group. If this linear group is an algebraic torus, so that formula_7 is a semiabelian variety, then formula_0 has "semistable reduction" at the prime corresponding to formula_6. If formula_1 is a global field, then formula_0 is semistable if it has good or semistable reduction at all primes. The fundamental semistable reduction theorem of Alexander Grothendieck states that an abelian variety acquires semistable reduction over a finite extension of formula_1. Semistable elliptic curve. A semistable elliptic curve may be described more concretely as an elliptic curve that has bad reduction only of multiplicative type. Suppose "E" is an elliptic curve defined over the rational number field formula_8. It is known that there is a finite, non-empty set "S" of prime numbers "p" for which "E" has "bad reduction" "modulo" "p". The latter means that the curve formula_9 obtained by reduction of "E" to the prime field with "p" elements has a singular point. Roughly speaking, the condition of multiplicative reduction amounts to saying that the singular point is a double point, rather than a cusp. Deciding whether this condition holds is effectively computable by Tate's algorithm. Therefore in a given case it is decidable whether or not the reduction is semistable, namely multiplicative reduction at worst. The semistable reduction theorem for "E" may also be made explicit: "E" acquires semistable reduction over the extension of "F" generated by the coordinates of the points of order 12. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "\\mathrm{Spec}(R)" }, { "math_id": 4, "text": "\\mathrm{Spec}(F) \\to \\mathrm{Spec}(R) " }, { "math_id": 5, "text": "A^0" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "A^0_k" }, { "math_id": 8, "text": "\\mathbb{Q}" }, { "math_id": 9, "text": "E_p" } ]
https://en.wikipedia.org/wiki?curid=898483
8984861
Spectral set
Set on Banach space In operator theory, a set formula_0 is said to be a spectral set for a (possibly unbounded) linear operator formula_1 on a Banach space if the spectrum of formula_1 is in formula_2 and von-Neumann's inequality holds for formula_1 on formula_2 - i.e. for all rational functions formula_3 with no poles on formula_2 formula_4 This concept is related to the topic of analytic functional calculus of operators. In general, one wants to get more details about the operators constructed from functions with the original operator as the variable. For a detailed discussion of spectral sets and von Neumann's inequality, see.
[ { "math_id": 0, "text": "X\\subseteq\\mathbb{C}" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "r(x)" }, { "math_id": 4, "text": "\\left\\Vert r(T) \\right\\Vert \\leq \\left\\Vert r \\right\\Vert_{X} = \\sup \\left\\{\\left\\vert r(x) \\right\\vert : x\\in X \\right\\}" } ]
https://en.wikipedia.org/wiki?curid=8984861
898605
Rain sensor
Device to detect water from rain A rain sensor or "rain switch" is a switching device activated by rainfall. There are two main applications for rain sensors. The first is a water conservation device connected to an automatic irrigation system that causes the system to shut down in the event of rainfall. The second is a device used to protect the interior of an automobile from rain and to support the automatic mode of windscreen wipers. Principle of operation. The rain sensor works on the principle of total internal reflection. An infrared light shone at a 45-degree angle on a clear area of the windshield is reflected and is sensed by the sensor inside the car. When it rains, the wet glass causes the light to scatter and a lesser amount of light gets reflected back to the sensor. An additional application in professional satellite communications antennas is to trigger a rain blower on the aperture of the antenna feed, to remove water droplets from the mylar cover that keeps pressurized and dry air inside the wave-guides. Irrigation sensors. Rain sensors for irrigation systems are available in both wireless and hard-wired versions, most employing hygroscopic disks that swell in the presence of rain and shrink back down again as they dry out — an electrical switch is in turn depressed or released by the hygroscopic disk stack, and the rate of drying is typically adjusted by controlling the ventilation reaching the stack. However, some electrical type sensors are also marketed that use tipping bucket or conductance type probes to measure rainfall. Wireless and wired versions both use similar mechanisms to temporarily suspend watering by the irrigation controller specifically they are connected to the irrigation controller's sensor terminals, or are installed in series with the solenoid valve common circuit such that they prevent the opening of any valves when rain has been sensed. Some irrigation rain sensors also contain a freeze sensor to keep the system from operating in freezing temperatures, particularly where irrigation systems are still used over the winter. Some type of sensor is required on new lawn sprinkler systems in Florida, New Jersey, Minnesota, Connecticut and most parts of Texas. Automotive sensors. In 1958, the Cadillac Motor Car Division of General Motors experimented with a water-sensitive switch that triggered various electric motors to close the convertible top and raise the open windows of a specially-built Eldorado Biarritz model, in case of rain. The first such device appears to have been used for that same purpose in a concept vehicle designated Le Sabre and built around 1950–51. General Motors' automatic rain sensor for convertible tops was available as a dealer-installed option during the 1950s for vehicles such as the Chevrolet Bel Air. For the 1996 model year, Cadillac once again equipped cars with an automatic rain sensor; this time to automatically trigger the windshield wipers and adjust their speed to conditions as necessary. In December 2017 Tesla started rolling out an OTA update (2017.52.3) enabling their AP2.x cars to utilize the onboard cameras to passively detect rain without the use of a dedicated sensor. Most vehicles with this feature have an "auto" position on the control column. Physics of rain sensor. The most common modern rain sensors are based on the principle of total internal reflection. At all times, an infrared light is beamed at a 45-degree angle into the windshield from the interior. 
If the glass is dry, the critical angle for total internal reflection is around 42°. This value is obtained from the critical-angle formula formula_0, where formula_1 is the approximate value of air's refractive index for infrared and formula_2 is the approximate value of the glass refractive index, also for infrared. In that case, since the incident angle of the light is 45°, all the light is reflected and the detector receives maximum intensity. If the glass is wet, the critical angle changes to around 60° because the refractive index of water is higher than that of air (formula_3). In that case, because the incident angle is still 45°, total internal reflection is not obtained. Part of the light beam is transmitted through the glass, the intensity measured at the detector is lower, and the system detects water and turns the wipers on.
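The dry/wet decision described above can be reproduced numerically from the stated refractive indices. In the sketch below, the indices (1.0 for air, 1.3 for water, 1.5 for glass) are the approximate values quoted in this article, the 45 degree incidence angle is the one assumed for the sensor, and the function names are illustrative.

```python
import math

def critical_angle_deg(n_outside, n_glass):
    """Critical angle for total internal reflection at a glass/outside-medium interface,
    from sin(theta_c) = n_outside / n_glass (light travelling inside the glass)."""
    return math.degrees(math.asin(n_outside / n_glass))

if __name__ == "__main__":
    n_glass = 1.5          # approximate refractive index of windshield glass for infrared
    for medium, n in (("air", 1.0), ("water", 1.3)):
        theta_c = critical_angle_deg(n, n_glass)
        reflected = 45.0 > theta_c   # the sensor's beam hits the glass at 45 degrees
        state = "total internal reflection (dry signal)" if reflected else "light escapes (wet signal)"
        print(f"{medium}: critical angle = {theta_c:.1f} deg -> {state}")
```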
[ { "math_id": 0, "text": "\\sin (\\theta_\\text{c}) = \\frac{n_1}{n_2}" }, { "math_id": 1, "text": "n_1 = 1" }, { "math_id": 2, "text": "n_2 = 1.5" }, { "math_id": 3, "text": "n_1 = 1.3" } ]
https://en.wikipedia.org/wiki?curid=898605
8987340
Variational Monte Carlo
In computational physics, variational Monte Carlo (VMC) is a quantum Monte Carlo method that applies the variational method to approximate the ground state of a quantum system. The basic building block is a generic wave function formula_0 depending on some parameters formula_1. The optimal values of the parameters formula_1 is then found upon minimizing the total energy of the system. In particular, given the Hamiltonian formula_2, and denoting with formula_3 a many-body configuration, the expectation value of the energy can be written as: formula_4 Following the Monte Carlo method for evaluating integrals, we can interpret formula_5 as a probability distribution function, sample it, and evaluate the energy expectation value formula_6 as the average of the so-called local energy formula_7. Once formula_6 is known for a given set of variational parameters formula_1, then optimization is performed in order to minimize the energy and obtain the best possible representation of the ground-state wave-function. VMC is no different from any other variational method, except that the many-dimensional integrals are evaluated numerically. Monte Carlo integration is particularly crucial in this problem since the dimension of the many-body Hilbert space, comprising all the possible values of the configurations formula_3, typically grows exponentially with the size of the physical system. Other approaches to the numerical evaluation of the energy expectation values would therefore, in general, limit applications to much smaller systems than those analyzable thanks to the Monte Carlo approach. The accuracy of the method then largely depends on the choice of the variational state. The simplest choice typically corresponds to a mean-field form, where the state formula_8 is written as a factorization over the Hilbert space. This particularly simple form is typically not very accurate since it neglects many-body effects. One of the largest gains in accuracy over writing the wave function separably comes from the introduction of the so-called Jastrow factor. In this case the wave function is written as formula_9, where formula_10 is the distance between a pair of quantum particles and formula_11 is a variational function to be determined. With this factor, we can explicitly account for particle-particle correlation, but the many-body integral becomes unseparable, so Monte Carlo is the only way to evaluate it efficiently. In chemical systems, slightly more sophisticated versions of this factor can obtain 80–90% of the correlation energy (see electronic correlation) with less than 30 parameters. In comparison, a configuration interaction calculation may require around 50,000 parameters to reach that accuracy, although it depends greatly on the particular case being considered. In addition, VMC usually scales as a small power of the number of particles in the simulation, usually something like "N"2−4 for calculation of the energy expectation value, depending on the form of the wave function. Wave function optimization in VMC. QMC calculations crucially depend on the quality of the trial-function, and so it is essential to have an optimized wave-function as close as possible to the ground state. The problem of function optimization is a very important research topic in numerical simulation. 
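A toy example may help make the sampling of the local energy concrete. The hedged sketch below applies VMC to a one-dimensional harmonic oscillator H = -1/2 d^2/dx^2 + 1/2 x^2 with the Gaussian trial wave function psi_a(x) = exp(-a x^2); the Hamiltonian, the trial form, the Metropolis step size and the function names are assumptions chosen for the illustration, not part of a general VMC code.

```python
import math
import random

def local_energy(x, a):
    # E_loc(x) = (H psi_a)(x) / psi_a(x) for psi_a(x) = exp(-a x^2), H = -1/2 d^2/dx^2 + 1/2 x^2
    return a + (0.5 - 2.0 * a * a) * x * x

def vmc_energy(a, n_samples=200_000, step=1.0, seed=0):
    # Metropolis sampling of |psi_a(x)|^2, accumulating the average local energy E(a).
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        log_ratio = -2.0 * a * (x_new * x_new - x * x)   # log(|psi(x_new)|^2 / |psi(x)|^2)
        if log_ratio >= 0.0 or rng.random() < math.exp(log_ratio):
            x = x_new
        total += local_energy(x, a)
    return total / n_samples

if __name__ == "__main__":
    for a in (0.3, 0.5, 0.8):
        print(f"a = {a:.1f}  ->  E(a) ~ {vmc_energy(a):.4f}")
    # Analytically E(a) = a/2 + 1/(8a), minimized at a = 0.5 where E = 0.5 (the exact ground state).
```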
In QMC, in addition to the usual difficulty of finding the minimum of a multidimensional parametric function, statistical noise is present in the estimate of the cost function (usually the energy) and in its derivatives, which are required for an efficient optimization. Different cost functions and different strategies have been used to optimize a many-body trial function. Usually three cost functions are used in QMC optimization: the energy, the variance, or a linear combination of the two. The variance optimization method has the advantage that the variance of the exact wavefunction is known: because the exact wavefunction is an eigenfunction of the Hamiltonian, the variance of the local energy is zero. This means that variance optimization is ideal in that it is bounded from below, it is positive definite, and its minimum is known. Energy minimization may ultimately prove more effective, however, as different authors have shown that energy optimization is more effective than variance optimization. There are different motivations for this: first, one is usually interested in the lowest energy rather than in the lowest variance in both variational and diffusion Monte Carlo; second, variance optimization takes many iterations to optimize determinant parameters, often gets stuck in multiple local minima, and suffers from the "false convergence" problem; third, energy-minimized wave functions on average yield more accurate values of other expectation values than variance-minimized wave functions do. The optimization strategies can be divided into three categories. The first strategy is based on correlated sampling together with deterministic optimization methods. Even though this idea yielded very accurate results for first-row atoms, the procedure can have problems if parameters affect the nodes, and moreover the density ratio of the current and initial trial functions increases exponentially with the size of the system. In the second strategy one uses a large bin to evaluate the cost function and its derivatives in such a way that the noise can be neglected and deterministic methods can be used. The third approach is based on iterative techniques that handle noisy functions directly. The first example of these methods is the so-called Stochastic Gradient Approximation (SGA), which has also been used for structure optimization. More recently, an improved and faster approach of this kind was proposed, the so-called Stochastic Reconfiguration (SR) method. VMC and deep learning. In 2017, Giuseppe Carleo and Matthias Troyer used a VMC objective function to train an artificial neural network to find the ground state of a quantum mechanical system. More generally, artificial neural networks are being used as a wave function ansatz (known as neural network quantum states) in VMC frameworks for finding ground states of quantum mechanical systems. The use of neural network ansatzes for VMC has been extended to fermions, enabling electronic structure calculations that are significantly more accurate than VMC calculations which do not use neural networks.
[ { "math_id": 0, "text": "| \\Psi(a) \\rangle " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\mathcal{H} " }, { "math_id": 3, "text": " X " }, { "math_id": 4, "text": " E(a) = \\frac{\\langle \\Psi(a) | \\mathcal{H} | \\Psi(a) \\rangle} {\\langle \\Psi(a) | \\Psi(a) \\rangle } = \\frac{\\int | \\Psi(X,a) | ^2 \\frac{\\mathcal{H}\\Psi(X,a)}{\\Psi(X,a)} \\, dX} { \\int | \\Psi(X,a)|^2 \\, dX}. " }, { "math_id": 5, "text": " \\frac{ | \\Psi(X,a) | ^2 } { \\int | \\Psi(X,a) | ^2 \\, dX } " }, { "math_id": 6, "text": " E(a) " }, { "math_id": 7, "text": "E_{\\textrm{loc}}(X) = \\frac{\\mathcal{H}\\Psi(X,a)}{\\Psi(X,a)} " }, { "math_id": 8, "text": " \\Psi " }, { "math_id": 9, "text": " \\Psi(X) = \\exp(\\sum{u(r_{ij})})" }, { "math_id": 10, "text": " r_{ij} " }, { "math_id": 11, "text": " u(r) " } ]
https://en.wikipedia.org/wiki?curid=8987340
8987495
Diffusion Monte Carlo
Diffusion Monte Carlo (DMC) or diffusion quantum Monte Carlo is a quantum Monte Carlo method that uses a Green's function to calculate low-lying energies of a quantum many-body Hamiltonian. Introduction and motivation of the algorithm. Diffusion Monte Carlo has the potential to be numerically exact, meaning that it can find the exact ground state energy for any quantum system within a given error, but approximations must often be made and their impact must be assessed in particular cases. When actually attempting the calculation, one finds that for bosons, the algorithm scales as a polynomial with the system size, but for fermions, DMC scales exponentially with the system size. This makes exact large-scale DMC simulations for fermions impossible; however, DMC employing a clever approximation known as the fixed-node approximation can still yield very accurate results. To motivate the algorithm, let's look at the Schrödinger equation for a particle in some potential in one dimension: formula_0 We can condense the notation a bit by writing it in terms of an "operator" equation, with formula_1 where formula_2 is the Hamiltonian operator. So then we have formula_3 where we have to keep in mind that formula_2 is an operator, not a simple number or function. There are special functions, called eigenfunctions, for which formula_4, where formula_5 is a number. These functions are special because no matter where we evaluate the action of the formula_2operator on the wave function, we always get the same number formula_5. These functions are called stationary states, because the time derivative at any point formula_6 is always the same, so the amplitude of the wave function never changes in time. Since the overall phase of a wave function is not measurable, the system does not change in time. We are usually interested in the wave function with the lowest energy eigenvalue, the ground state. We're going to write a slightly different version of the Schrödinger equation that will have the same energy eigenvalue, but, instead of being oscillatory, it will be convergent. Here it is: formula_7. We've removed the imaginary number from the time derivative and added in a constant offset of formula_8, which is the ground state energy. We don't actually know the ground state energy, but there will be a way to determine it self-consistently which we'll introduce later. Our modified equation (some people call it the imaginary-time Schrödinger equation) has some nice properties. The first thing to notice is that if we happen to guess the ground state wave function, then formula_9 and the time derivative is zero. Now suppose that we start with another wave function(formula_10), which is not the ground state but is not orthogonal to it. Then we can write it as a linear sum of eigenfunctions: formula_11 Since this is a linear differential equation, we can look at the action of each part separately. We already determined that formula_12 is stationary. Suppose we take formula_13. Since formula_12 is the lowest-energy eigenfunction, the associate eigenvalue of formula_13 satisfies the property formula_14. Thus the time derivative of formula_15 is negative, and will eventually go to zero, leaving us with only the ground state. This observation also gives us a way to determine formula_8. We watch the amplitude of the wave function as we propagate through time. If it increases, then decrease the estimation of the offset energy. If the amplitude decreases, then increase the estimate of the offset energy. 
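A deliberately simplified sketch of the walker picture behind the imaginary-time equation is given below: walkers diffuse, are replicated or killed according to the local potential, and a reference energy is adjusted so that the population stays roughly constant, after which its running value settles near the ground-state energy. The harmonic potential, the absence of importance sampling, the time step and the feedback constant are all illustrative assumptions, not part of the algorithm as originally formulated.

```python
import math
import random

def toy_dmc(potential, n_target=500, n_steps=4000, tau=0.01, seed=0):
    # Walkers diffuse with variance tau (kinetic term) and branch with weight
    # exp(-tau * (V(x) - E_ref)); E_ref is nudged to keep the population near n_target,
    # and its running average settles near the ground-state energy.
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(n_target)]
    e_ref = sum(potential(x) for x in walkers) / len(walkers)
    history = []
    for _ in range(n_steps):
        new_walkers = []
        for x in walkers:
            x_new = x + rng.gauss(0.0, math.sqrt(tau))            # diffusion move
            weight = math.exp(-tau * (potential(x_new) - e_ref))
            copies = int(weight + rng.random())                   # stochastic branching
            new_walkers.extend([x_new] * copies)
        walkers = new_walkers or [rng.uniform(-1.0, 1.0) for _ in range(n_target)]
        e_ref += 0.05 * math.log(n_target / len(walkers))         # crude population control
        history.append(e_ref)
    tail = history[n_steps // 2:]
    return sum(tail) / len(tail)

if __name__ == "__main__":
    harmonic = lambda x: 0.5 * x * x
    print(f"estimated ground-state energy: {toy_dmc(harmonic):.3f} (exact value: 0.5)")
```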
Stochastic implementation and the Green's function. Now we have an equation that, as we propagate it forward in time and adjust formula_8 appropriately, we find the ground state of any given Hamiltonian. This is still a harder problem than classical mechanics, though, because instead of propagating single positions of particles, we must propagate entire functions. In classical mechanics, we could simulate the motion of the particles by setting formula_16, if we assume that the force is constant over the time span of formula_17. For the imaginary time Schrödinger equation, instead, we propagate forward in time using a convolution integral with a special function called a Green's function. So we get formula_18. Similarly to classical mechanics, we can only propagate for small slices of time; otherwise the Green's function is inaccurate. As the number of particles increases, the dimensionality of the integral increases as well, since we have to integrate over all coordinates of all particles. We can do these integrals by Monte Carlo integration. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i\\frac{\\partial \\Psi(x,t)}{\\partial t}=-\\frac{1}{2}\\frac{\\partial^2 \\Psi(x,t)}{\\partial x^2} + V(x)\\Psi(x,t)." }, { "math_id": 1, "text": "H=-\\frac{1}{2}\\frac{\\partial^2 }{\\partial x^2} + V(x)" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "i\\frac{\\partial\\Psi(x,t)}{\\partial t}=H\\Psi(x,t)," }, { "math_id": 4, "text": "H\\Psi(x)=E\\Psi(x)" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "-\\frac{\\partial\\Psi(x,t)}{\\partial t}=(H-E_0)\\Psi(x,t)" }, { "math_id": 8, "text": "E_0" }, { "math_id": 9, "text": "H\\Phi_0(x)=E_0\\Phi_0(x)" }, { "math_id": 10, "text": "\\Psi" }, { "math_id": 11, "text": "\\Psi=c_0\\Phi_0+\\sum_{i=1}^\\infty c_i\\Phi_i" }, { "math_id": 12, "text": "\\Phi_0" }, { "math_id": 13, "text": "\\Phi_1" }, { "math_id": 14, "text": "E_1 > E_0" }, { "math_id": 15, "text": "c_1" }, { "math_id": 16, "text": "x(t+\\tau)=x(t)+\\tau v(t)+0.5 F(t)\\tau^2" }, { "math_id": 17, "text": "\\tau" }, { "math_id": 18, "text": " \\Psi(x,t+\\tau)=\\int G(x,x',\\tau) \\Psi(x',t) dx' " } ]
https://en.wikipedia.org/wiki?curid=8987495
898784
Quotient space (linear algebra)
Vector space consisting of affine subsets In linear algebra, the quotient of a vector space formula_0 by a subspace formula_1 is a vector space obtained by "collapsing" formula_1 to zero. The space obtained is called a quotient space and is denoted formula_2 (read "formula_0 mod formula_1" or "formula_0 by formula_1"). Definition. Formally, the construction is as follows. Let formula_0 be a vector space over a field formula_3, and let formula_1 be a subspace of formula_0. We define an equivalence relation formula_4 on formula_0 by stating that formula_5 iff formula_6. That is, formula_7 is related to formula_8 if and only if one can be obtained from the other by adding an element of formula_1. From this definition, one can deduce that any element of formula_1 is related to the zero vector; more precisely, all the vectors in formula_1 get mapped into the equivalence class of the zero vector. The equivalence class – or, in this case, the coset – of formula_7 is often denoted formula_9 since it is given by formula_10. The quotient space formula_2 is then defined as formula_11, the set of all equivalence classes induced by formula_4 on formula_0. Scalar multiplication and addition are defined on the equivalence classes by formula_12 for all formula_13, and formula_14. It is not hard to check that these operations are well-defined (i.e. do not depend on the choice of representatives). These operations turn the quotient space formula_2 into a vector space over formula_3 with formula_1 being the zero class, formula_15. The mapping that associates to formula_16 the equivalence class formula_17 is known as the quotient map. Alternatively phrased, the quotient space formula_2 is the set of all affine subsets of formula_0 which are parallel to formula_1. Examples. Lines in Cartesian Plane. Let "X" = R2 be the standard Cartesian plane, and let "Y" be a line through the origin in "X". Then the quotient space "X"/"Y" can be identified with the space of all lines in "X" which are parallel to "Y". That is to say, the elements of the set "X"/"Y" are lines in "X" parallel to "Y". Note that the points along any one such line will satisfy the equivalence relation because their difference vectors belong to "Y". This gives a way to visualize quotient spaces geometrically. (By re-parameterising these lines, the quotient space can more conventionally be represented as the space of all points along a line through the origin that is not parallel to "Y". Similarly, the quotient space for R3 by a line through the origin can again be represented as the set of all co-parallel lines, or alternatively be represented as the vector space consisting of a plane which only intersects the line at the origin.) Subspaces of Cartesian Space. Another example is the quotient of R"n" by the subspace spanned by the first "m" standard basis vectors. The space R"n" consists of all "n"-tuples of real numbers ("x"1, ..., "x""n"). The subspace, identified with R"m", consists of all "n"-tuples such that the last "n" − "m" entries are zero: ("x"1, ..., "x""m", 0, 0, ..., 0). Two vectors of R"n" are in the same equivalence class modulo the subspace if and only if they are identical in the last "n" − "m" coordinates. The quotient space R"n"/R"m" is isomorphic to R"n"−"m" in an obvious manner. Polynomial Vector Space. Let formula_18 be the vector space of all cubic polynomials over the real numbers. Then formula_19 is a quotient space, where each element is the set corresponding to polynomials that differ by a quadratic term only.
For example, one element of the quotient space is formula_20, while another element of the quotient space is formula_21. General Subspaces. More generally, if "V" is an (internal) direct sum of subspaces "U" and "W," formula_22 then the quotient space "V"/"U" is naturally isomorphic to "W". Lebesgue Integrals. An important example of a functional quotient space is an L"p" space. Properties. There is a natural epimorphism from "V" to the quotient space "V"/"U" given by sending "x" to its equivalence class ["x"]. The kernel (or nullspace) of this epimorphism is the subspace "U". This relationship is neatly summarized by the short exact sequence formula_23 If "U" is a subspace of "V", the dimension of "V"/"U" is called the codimension of "U" in "V". Since a basis of "V" may be constructed from a basis "A" of "U" and a basis "B" of "V"/"U" by adding a representative of each element of "B" to "A", the dimension of "V" is the sum of the dimensions of "U" and "V"/"U". If "V" is finite-dimensional, it follows that the codimension of "U" in "V" is the difference between the dimensions of "V" and "U": formula_24 Let "T" : "V" → "W" be a linear operator. The kernel of "T", denoted ker("T"), is the set of all "x" in "V" such that "Tx" = 0. The kernel is a subspace of "V". The first isomorphism theorem for vector spaces says that the quotient space "V"/ker("T") is isomorphic to the image of "V" in "W". An immediate corollary, for finite-dimensional spaces, is the rank–nullity theorem: the dimension of "V" is equal to the dimension of the kernel (the nullity of "T") plus the dimension of the image (the rank of "T"). The cokernel of a linear operator "T" : "V" → "W" is defined to be the quotient space "W"/im("T"). Quotient of a Banach space by a subspace. If "X" is a Banach space and "M" is a closed subspace of "X", then the quotient "X"/"M" is again a Banach space. The quotient space is already endowed with a vector space structure by the construction of the previous section. We define a norm on "X"/"M" by formula_25 Examples. Let "C"[0,1] denote the Banach space of continuous real-valued functions on the interval [0,1] with the sup norm. Denote the subspace of all functions "f" ∈ "C"[0,1] with "f"(0) = 0 by "M". Then the equivalence class of some function "g" is determined by its value at 0, and the quotient space "C"[0,1]/"M" is isomorphic to R. If "X" is a Hilbert space, then the quotient space "X"/"M" is isomorphic to the orthogonal complement of "M". Generalization to locally convex spaces. The quotient of a locally convex space by a closed subspace is again locally convex. Indeed, suppose that "X" is locally convex so that the topology on "X" is generated by a family of seminorms {"p"α | α ∈ "A"} where "A" is an index set. Let "M" be a closed subspace, and define seminorms "q"α on "X"/"M" by formula_26 Then "X"/"M" is a locally convex space, and the topology on it is the quotient topology. If, furthermore, "X" is metrizable, then so is "X"/"M". If "X" is a Fréchet space, then so is "X"/"M".
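As a small numerical illustration of the R"n"/R"m" example above, the following sketch (the function name and the use of an inner product via numpy's QR factorization are conveniences chosen for this example; the quotient construction itself does not depend on any inner product) represents each coset "x" + "U" by the component of "x" orthogonal to "U", so that two vectors get the same representative exactly when they lie in the same coset, and checks the codimension formula dim("V"/"U") = dim("V") − dim("U").

```python
import numpy as np

def coset_representative(x, U_basis):
    """Represent the coset x + U by x minus its orthogonal projection onto
    U = span(U_basis); two vectors share a representative iff x - y lies in U."""
    Q, _ = np.linalg.qr(np.asarray(U_basis, dtype=float).T)   # orthonormal basis of U
    x = np.asarray(x, dtype=float)
    return x - Q @ (Q.T @ x)

# V = R^4 and U = span(e1, e2): cosets are determined by the last two coordinates.
U_basis = [[1, 0, 0, 0],
           [0, 1, 0, 0]]
x = np.array([3.0, -1.0, 2.0, 5.0])
y = np.array([7.0, 10.0, 2.0, 5.0])      # differs from x only by an element of U

print(coset_representative(x, U_basis))  # [0. 0. 2. 5.]
print(np.allclose(coset_representative(x, U_basis),
                  coset_representative(y, U_basis)))          # True: same coset

# Codimension check: dim(V/U) = dim(V) - dim(U) = 4 - 2 = 2.
print(4 - np.linalg.matrix_rank(np.array(U_basis, dtype=float)))
```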
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "V/N" }, { "math_id": 3, "text": "\\mathbb{K}" }, { "math_id": 4, "text": "\\sim" }, { "math_id": 5, "text": "x \\sim y" }, { "math_id": 6, "text": "x - y \\in N" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "[x] = x + N" }, { "math_id": 10, "text": "[x] = \\{ x + n: n \\in N \\}" }, { "math_id": 11, "text": "V/_\\sim" }, { "math_id": 12, "text": "\\alpha [x] = [\\alpha x]" }, { "math_id": 13, "text": "\\alpha \\in \\mathbb{K}" }, { "math_id": 14, "text": "[x] + [y] = [x+y]" }, { "math_id": 15, "text": "[0]" }, { "math_id": 16, "text": "v \\in V" }, { "math_id": 17, "text": "[v]" }, { "math_id": 18, "text": "\\mathcal{P}_3(\\mathbb{R})" }, { "math_id": 19, "text": "\\mathcal{P}_3(\\mathbb{R}) / \\langle x^2 \\rangle " }, { "math_id": 20, "text": "\\{x^3 + a x^2 - 2x + 3 : a \\in \\mathbb{R}\\}" }, { "math_id": 21, "text": "\\{a x^2 + 2.7 x : a \\in \\mathbb{R}\\}" }, { "math_id": 22, "text": "V=U\\oplus W" }, { "math_id": 23, "text": "0\\to U\\to V\\to V/U\\to 0.\\," }, { "math_id": 24, "text": "\\mathrm{codim}(U) = \\dim(V/U) = \\dim(V) - \\dim(U)." }, { "math_id": 25, "text": " \\| [x] \\|_{X/M} = \\inf_{m \\in M} \\|x-m\\|_X = \\inf_{m \\in M} \\|x+m\\|_X = \\inf_{y\\in [x]}\\|y\\|_X. " }, { "math_id": 26, "text": "q_\\alpha([x]) = \\inf_{v\\in [x]} p_\\alpha(v)." } ]
https://en.wikipedia.org/wiki?curid=898784
898792
Euler's equations (rigid body dynamics)
Quasilinear first-order ordinary differential equation &lt;templatestyles src="Hlist/styles.css"/&gt; In classical mechanics, Euler's rotation equations are a vectorial quasilinear first-order ordinary differential equation describing the rotation of a rigid body, using a rotating reference frame with angular velocity ω whose axes are fixed to the body. Their general vector form is formula_0 where "M" is the vector of applied torques and "I" is the inertia matrix. The vector formula_1 is the angular acceleration. Note that all quantities are defined in the rotating reference frame. In orthogonal principal axes of inertia coordinates the equations become formula_2 where "Mk" are the components of the applied torques, "Ik" are the principal moments of inertia and ω"k" are the components of the angular velocity. In the absence of applied torques, one obtains the Euler top. When the torques are due to gravity, there are special cases when the motion of the top is integrable. Derivation. In an inertial frame of reference (subscripted "in"), Euler's second law states that the time derivative of the angular momentum L equals the applied torque: formula_3 For point particles such that the internal forces are central forces, this may be derived using Newton's second law. For a rigid body, one has the relation between angular momentum and the moment of inertia Iin given as formula_4 In the inertial frame, the differential equation is not always helpful in solving for the motion of a general rotating rigid body, as both Iin and ω can change during the motion. One may instead change to a coordinate frame fixed in the rotating body, in which the moment of inertia tensor is constant. Using a reference frame such as that at the center of mass, the frame's position drops out of the equations. In any rotating reference frame, the time derivative must be replaced by its expression relative to the rotating frame, so that the equation becomes formula_5 and the cross product arises; see time derivative in rotating reference frame. The vector components of the torque in the inertial and the rotating frames are related by formula_6 where formula_7 is the rotation tensor (not rotation matrix), an orthogonal tensor related to the angular velocity vector by formula_8 for any vector u. Now formula_9 is substituted and the time derivatives are taken in the rotating frame, while realizing that the particle positions and the inertia tensor do not depend on time. This leads to the general vector form of Euler's equations, which are valid in such a frame: formula_0 The equations are also derived from Newton's laws in the discussion of the resultant torque. More generally, by the tensor transform rules, any rank-2 tensor formula_10 has a time-derivative formula_11 such that for any vector formula_12, one has formula_13. This yields Euler's equations by plugging in formula_14 Principal axes form. When choosing a frame so that its axes are aligned with the principal axes of the inertia tensor, its component matrix is diagonal, which further simplifies calculations. As described in the moment of inertia article, the angular momentum L can then be written formula_15 In some frames not tied to the body it may also be possible to obtain such simple (diagonal tensor) equations for the rate of change of the angular momentum. Then ω must be the angular velocity of the rotation of that frame's axes instead of the rotation of the body. It is, however, still required that the chosen axes are principal axes of inertia.
The resulting form of the Euler rotation equations is useful for rotation-symmetric objects that allow some of the principal axes of rotation to be chosen freely. Special case solutions. Torque-free precessions. Torque-free precessions are non-trivial solutions for the situation where the torque on the right-hand side is zero. When I is not constant in the external reference frame (i.e. the body is moving and its inertia tensor is not constantly diagonal) then I cannot be pulled through the derivative operator acting on L. In this case I("t") and ω("t") do change together in such a way that the derivative of their product is still zero. This motion can be visualized by Poinsot's construction. Generalized Euler equations. The Euler equations can be generalized to any simple Lie algebra. The original Euler equations come from fixing the Lie algebra to be formula_16, with generators formula_17 satisfying the relation formula_18. Then if formula_19 (where formula_20 is a time coordinate, not to be confused with basis vectors formula_21) is an formula_16-valued function of time, and formula_22 (with respect to the Lie algebra basis), then the (untorqued) original Euler equations can be written formula_23 To define formula_24 in a basis-independent way, it must be a self-adjoint map on the Lie algebra formula_25 with respect to the invariant bilinear form on formula_25. This expression generalizes readily to an arbitrary simple Lie algebra, say in the standard classification of simple Lie algebras. This can also be viewed as a Lax pair formulation of the generalized Euler equations, suggesting their integrability. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
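The principal-axes equations quoted above are straightforward to integrate numerically in the torque-free case. The following sketch does so with an off-the-shelf integrator; the moments of inertia, initial angular velocity and integration settings are illustrative choices, and the conservation of the rotational kinetic energy and of |L|² is used only as a sanity check on the integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([1.0, 2.0, 3.0])          # illustrative principal moments of inertia

def euler_rhs(t, w):
    """Torque-free Euler equations in the principal-axis frame."""
    w1, w2, w3 = w
    return [(I[1] - I[2]) * w2 * w3 / I[0],
            (I[2] - I[0]) * w3 * w1 / I[1],
            (I[0] - I[1]) * w1 * w2 / I[2]]

w0 = [0.1, 1.0, 0.1]                    # mostly about the unstable intermediate axis
sol = solve_ivp(euler_rhs, (0.0, 50.0), w0, max_step=0.01)

w = sol.y
energy = 0.5 * np.sum(I[:, None] * w**2, axis=0)   # rotational kinetic energy
L2 = np.sum((I[:, None] * w)**2, axis=0)           # |L|^2 in the body frame

print("energy drift:", energy.max() - energy.min())
print("|L|^2 drift: ", L2.max() - L2.min())
```

With these initial conditions the solution exhibits the familiar tumbling about the intermediate axis, while the two conserved quantities should stay nearly constant within the integrator's tolerance.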
[ { "math_id": 0, "text": "\n\\mathbf{I} \\dot{\\boldsymbol\\omega} + \\boldsymbol\\omega \\times \\left( \\mathbf{I} \\boldsymbol\\omega \\right) = \\mathbf{M}.\n" }, { "math_id": 1, "text": "\\dot{\\boldsymbol\\omega}" }, { "math_id": 2, "text": "\n\\begin{align}\nI_1\\,\\dot{\\omega}_{1} + (I_3-I_2)\\,\\omega_2\\,\\omega_3 &= M_{1}\\\\\nI_2\\,\\dot{\\omega}_{2} + (I_1-I_3)\\,\\omega_3\\,\\omega_1 &= M_{2}\\\\\nI_3\\,\\dot{\\omega}_{3} + (I_2-I_1)\\,\\omega_1\\,\\omega_2 &= M_{3}\n\\end{align}\n" }, { "math_id": 3, "text": "\n\\frac{d\\mathbf{L}_{\\text{in}}}{dt} = \\mathbf{M}_{\\text{in}}\n" }, { "math_id": 4, "text": "\\mathbf{L}_{\\text{in}} = \\mathbf{I}_{\\text{in}} \\boldsymbol\\omega" }, { "math_id": 5, "text": "\n\\left(\\frac{d\\mathbf{L}}{dt}\\right)_\\mathrm{rot} + \\boldsymbol\\omega\\times\\mathbf{L} = \\mathbf{M}\n" }, { "math_id": 6, "text": "\n\\mathbf{M}_{\\text{in}} = \\mathbf{Q}\\mathbf{M},\n" }, { "math_id": 7, "text": "\\mathbf{Q}" }, { "math_id": 8, "text": "\\boldsymbol\\omega \\times \\boldsymbol{u} = \\dot{\\mathbf{Q}} \\mathbf{Q}^{-1}\\boldsymbol{u}" }, { "math_id": 9, "text": "\\mathbf{L} = \\mathbf{I} \\boldsymbol\\omega" }, { "math_id": 10, "text": "\\mathbf{T}" }, { "math_id": 11, "text": "\\mathbf{\\dot T} " }, { "math_id": 12, "text": " \\mathbf{u}" }, { "math_id": 13, "text": "\\mathbf{\\dot T} \\mathbf{u} = \\boldsymbol{\\omega}\\times (\\mathbf{T} \\mathbf{u}) - \\mathbf{T}(\\boldsymbol{\\omega}\\times \\mathbf{u})" }, { "math_id": 14, "text": "\\frac{d}{dt} \\left( \\mathbf{I} \\boldsymbol\\omega \\right) = \\mathbf{M}." }, { "math_id": 15, "text": "\n\\mathbf{L} = L_{1}\\mathbf{e}_{1} + L_{2}\\mathbf{e}_{2} + L_{3}\\mathbf{e}_{3} = \\sum_{i=1}^3 I_{i}\\omega_{i}\\mathbf{e}_{i}\n" }, { "math_id": 16, "text": "\\mathfrak{so}(3)" }, { "math_id": 17, "text": "{t_1, t_2, t_3}" }, { "math_id": 18, "text": "[t_a, t_b] = \\epsilon_{abc}t_c" }, { "math_id": 19, "text": "\\boldsymbol\\omega(t) = \\sum_a \\omega_a(t)t_a" }, { "math_id": 20, "text": "t" }, { "math_id": 21, "text": "t_a" }, { "math_id": 22, "text": "\\mathbf{I} = \\mathrm{diag}(I_1, I_2, I_3)" }, { "math_id": 23, "text": "\\mathbf{I}\\dot\\boldsymbol\\omega = [\\mathbf{I}\\boldsymbol\\omega, \\boldsymbol\\omega]." }, { "math_id": 24, "text": "\\mathbf{I}" }, { "math_id": 25, "text": "\\mathfrak{g}" } ]
https://en.wikipedia.org/wiki?curid=898792
8988211
Forensic geology
Forensic geology is the study of evidence relating to materials found in the Earth used to answer questions raised by the legal system. In 1975, Ray Murray and fellow Rutgers University professor John Tedrow published "Forensic Geology". The main use of forensic geology as it is applied today concerns trace evidence. By examining soil and sediment particles, forensic geologists can potentially link a suspect to a particular crime or a particular crime scene. Forensic geologists work with many other scientific disciplines such as medicine, biology, geography, and engineering, among others. In 2008, Alastair Ruffell and Jennifer McKinley, both of Queen's University Belfast, published "Geoforensics", a book that focuses more on the use of geomorphology and geophysics for searches. In 2010, forensic soil scientist Lorna Dawson of the James Hutton Institute co-edited and contributed chapters to the textbook "Criminal and Environmental Soil Forensics". In 2012, Elisa Bergslien, at SUNY Buffalo State, published a general textbook on the topic, "An Introduction to Forensic Geoscience." Early use of forensic geology. According to Murray, forensic geology began with the Sherlock Holmes writer Sir Arthur Conan Doyle. The character Sherlock Holmes claimed to be able to identify where an individual had been by various methods, including his having memorized the exposed geology of London to such a degree that detecting certain clays on a person's shoe would give away a locale. Georg Popp, of Frankfurt, Germany, may have been the first to use soil analysis for linking suspects to a crime scene. In 1891, Hans Gross used microscopic analysis of soils and other materials from a suspect's shoes to link him to the crime scene. Physical description. Colour. Colour is one of the most important physical characteristics associated with soil samples. One technique used is comparing the soil to the Munsell soil chart. In most parts of the world, determining the soil colour is a required part of a forensic investigation. This analysis can be carried out in the field itself by comparing the soil against the Munsell soil chart by eye. Colour is, however, a subjective judgement: two people can perceive the same colour quite differently and may therefore match it to different entries in the Munsell soil chart, affecting the accuracy of this method. To avoid the errors of relying on human perception alone and to obtain objective results, computer-controlled spectrophotometry can be used. One computerized method uses the CIELAB colour space, in which an electronic spectrophotometer and colorimeter are used to plot colour in three dimensions. Of the three coordinates, L* represents lightness, a* the red/green axis and b* the yellow/blue axis. The coordinates are derived mathematically so as to give an approximately uniform colour space for analysis, and the technique provides numerical values for colour that can then be related to the Munsell soil chart. The most commonly used technique for determining the colour of a soil sample has been to measure the colour once the sample has been dried. To obtain a more thorough analysis, however, some case studies have also taken colour measurements while the sample is moist, after allowing organic materials to decompose, after removing iron oxides, and after crushing and heating the sample. Density. Another physical characteristic used is density, measured in units of formula_0 or formula_1.
The measurement can be made in terms of "particle density" or "material density"; the value obtained will vary depending on the specific type of material measured. To determine the weight of the sample in question, a simple scale is used. To determine the volume, a rock sample can be placed in water and measured by the displacement. For a soil sample the same technique is used, although the soil is first wrapped in cling film to avoid disintegration. The main use of the density of a sample in forensic geology is to obtain the best possible description of the sample in question. Particle size distributions. One of the most discriminating physical characteristics is particle size, usually characterized as a "particle size frequency distribution". This can be expressed in terms of the material's weight, weight %, number of particles present, or volume. Depending on the sample, different methods can be used, such as examination under a microscope, laser diffraction, dry/wet sieving, computer program analysis and many more. Chemical description. pH. pH is a measure of the hydrogen ion activity present; it is determined from the degree of dissociation of the hydrogen ions. A pH value classifies a sample as acidic, basic or neutral, but more can be inferred from it, such as aspects of the elemental composition and the level of essential nutrients and toxicity. It can indicate the presence of many elements such as P, Zn, B, Cu, Fe etc., and can be used to estimate lime requirement. In recent years there has been much improvement in the portable pH meters used in the field. Decades ago, portable devices had numerous malfunctions involving the electrodes. Nowadays, the use of microcircuitry and plastic in pH meters not only reduces the cost of these devices but also allows better overall protection of the unit. Further studies are attempting to produce a device that can measure microsite pH in various soil systems by using plant cells via microprocedures; this would also make it possible to distinguish the different pH values present within the soil matrix. Evidence collection. In the application of forensic geology there are two distinct types of soil samples. The first is the questioned sample, a sample of unknown origin; such samples can be taken from someone's shoe, for example. The other type is the control sample, which the forensic geologist can choose; the most common control sample is soil taken from the crime scene. The questioned and control samples are then compared to find similarities or distinctions between the two. Questioned samples are most likely acquired by accident, such as when a suspect picks up soil or rocks in their shoes or pants. The forensic geologist therefore does not choose the size of the questioned sample, and it most likely will not be comparable in size to the control sample. Depending on the questioned sample, the forensic geologist will have to use professional judgement about the optimal technique for comparing it to the control sample. In some situations only loose particles are available for comparison. If the questioned sample is a lump of soil, the lump needs to be obtained in its entirety in order to preserve the different layers of soil within it as well as to keep the particles intact. Methods such as using adhesive tape, vacuuming, and shaking items over a tarp are also used in the field.
Control samples fall into two subcategories: samples from the scene itself and samples from an alibi location. Soil samples can differ over a very small distance, which is why the questioned sample should be examined first to establish particle size, colour or any other distinguishing factors, so that a location at the scene can then be carefully chosen for a comparison sample. Samples obtained from these scenes can also be submitted to the forensic lab with other physical evidence. Each lab has its own instructions for the collector on how a sample is to be collected and submitted. When samples are being taken from the ground, it is recommended to take samples from different layers of the soil, such as a soil horizon or the bed layer. It is important to obtain samples that vary in colour, mineral composition and texture. The tools used to collect evidence depend on the type of sample, questioned or control, as well as its structure and size. Smaller quantities of soil can be retrieved using forceps, tweezers, palette knives, etc. When attempting to remove soil that is stuck to a surface, an ice pick, razor blade or anything with a flat surface is sufficient. Control samples are normally larger, thus requiring a larger tool such as a garden shovel. When taking a soil sample from the ground, normally just the surface is sampled. The samples should be allowed to dry before collection; in moist samples, continuing biological activity will modify the material and change the overall composition of the sample. Dry samples can be placed in plastic cartons, vials and leak-proof containers. To avoid continued microbial activity in samples taken from damp areas, they are refrigerated beforehand. Geophysical instruments. Ground penetrating radar. The principal use of a ground-penetrating radar device in forensic geology is to find buried bodies. This instrument has been most useful in resolving missing person cases. As well as improving the recovery of the body by indicating the general area in which it is buried, it also decreases excavation time. Studies conducted using this device demonstrate its capability to discover hollow and forensically relevant locations as well as specific geometries. The method works by transmitting a signal into the ground and recording reflections from buried objects whose electrical properties differ from the surrounding material. The device itself consists of a radio transmitter and receiver connected to antennas that are coupled to the ground. Seismograph. The seismograph works on the principle that waves travel differently through different rocks and can be recorded as vibrations. For it to function properly there must be a shock produced and a means of detecting the resulting waves. Depending on the rocks beneath the surface, the waves will travel at different velocities; this is measured through the time it takes for the waves to travel from the source of the shock to the detector. In forensic applications, the device is used to characterize explosions and differentiate them from natural occurrences such as earthquakes, because the surface wave magnitude varies significantly between the two kinds of event. Natural disasters emit longer and higher-intensity energy than man-made events such as chemical explosions, nuclear explosions, and plane or train crashes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "g/cm^3" }, { "math_id": 1, "text": "kg/m^3 \n" } ]
https://en.wikipedia.org/wiki?curid=8988211
8988283
Reptation Monte Carlo
Reptation Monte Carlo is a quantum Monte Carlo method. It is similar to diffusion Monte Carlo, except that it works with paths rather than points. This has some advantages in calculating certain properties of the system under study that diffusion Monte Carlo has difficulty with. In both diffusion Monte Carlo and reptation Monte Carlo, the method first aims to solve the time-dependent Schrödinger equation in the imaginary time direction. When you propagate the Schrödinger equation in time, you get the dynamics of the system under study. When you propagate it in imaginary time, you get a system that tends towards the ground state of the system. When substituting formula_0 in place of formula_1, the Schrödinger equation becomes identical to a diffusion equation. Diffusion equations can be solved by imagining a huge population of particles (sometimes called "walkers"), each diffusing in a way that solves the original equation. This is how diffusion Monte Carlo works. Reptation Monte Carlo works in a very similar way, but is focused on the paths that the walkers take, rather than the density of walkers. In particular, a path may be mutated using a Metropolis algorithm which tries a change (normally at one end of the path) and then accepts or rejects the change based on a probability calculation. The update step in diffusion Monte Carlo would be moving the walkers slightly, and then duplicating and removing some of them. By contrast, the update step in reptation Monte Carlo mutates a path, and then accepts or rejects the mutation. References.
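A minimal sketch of such an end-move ("reptation") update is given below for a single particle in the illustrative potential V(x) = x²/2, using the primitive approximation for the imaginary-time path weight. The proposal draws the new end bead from the free-particle Green's function, so the Metropolis acceptance test only involves the potential terms of the link that is added and the link that is dropped; the time step, path length and the virial-theorem energy estimate at the end are assumptions made for this example, not part of the method's definition.

```python
import numpy as np

def reptation_mc(beta=10.0, tau=0.05, n_moves=200_000, seed=1):
    """Minimal reptation Monte Carlo sketch for one particle in V(x) = x^2/2."""
    rng = np.random.default_rng(seed)
    V = lambda x: 0.5 * x**2
    M = int(beta / tau)                 # number of imaginary-time links
    path = np.zeros(M + 1)              # initial path: all beads at x = 0
    v_mid = []

    for step in range(n_moves):
        if rng.random() < 0.5:          # pick which end grows; reverse so the
            path = path[::-1].copy()    # same code handles both directions
        y = path[-1] + rng.normal(0.0, np.sqrt(tau))     # propose a new head bead
        # Kinetic factors cancel against the proposal, leaving only the potentials
        # of the added link (path[-1], y) and the removed link (path[0], path[1]).
        log_acc = -0.5 * tau * (V(y) + V(path[-1]) - V(path[0]) - V(path[1]))
        if np.log(rng.random()) < log_acc:
            path = np.append(path[1:], y)
        if step % 50 == 0:
            v_mid.append(V(path[M // 2]))   # sample the central bead

    # For the harmonic oscillator the virial theorem gives E0 = 2 <V>, exactly 0.5.
    return 2.0 * np.mean(v_mid[len(v_mid) // 10:])

if __name__ == "__main__":
    print("estimated ground-state energy ~", round(reptation_mc(), 3), "(exact 0.5)")
```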
[ { "math_id": 0, "text": "it" }, { "math_id": 1, "text": "t" } ]
https://en.wikipedia.org/wiki?curid=8988283
898856
Conchoid (mathematics)
Curve traced by a line as it slides along another curve about a fixed point In geometry, a conchoid is a curve derived from a fixed point O, another curve, and a length d. It was invented by the ancient Greek mathematician Nicomedes. Description. For every line through O that intersects the given curve at A, the two points on the line which are at distance d from A are on the conchoid. The conchoid is, therefore, the cissoid of the given curve and a circle of radius d and center O. They are called "conchoids" because the shape of their outer branches resembles conch shells. The simplest expression uses polar coordinates with O at the origin. If formula_0 expresses the given curve, then formula_1 expresses the conchoid. If the curve is a line, then the conchoid is the "conchoid of Nicomedes". For instance, if the curve is the line "x" = "a", then the line's polar form is "r" = "a" sec "θ" and therefore the conchoid can be expressed parametrically as formula_2 A limaçon is a conchoid with a circle as the given curve. The so-called conchoid of de Sluze and conchoid of Dürer are not actually conchoids. The former is a strict cissoid and the latter a construction more general yet. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
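The parametric form quoted above is easy to evaluate numerically. The following sketch samples both branches of the conchoid of Nicomedes for the line "x" = "a"; the values of "a" and "d" and the range of θ are illustrative choices.

```python
import numpy as np

def conchoid_of_nicomedes(a=1.0, d=2.0, n=400):
    """Sample both branches of the conchoid of the line x = a with pole O at the
    origin, using x = a +/- d*cos(theta), y = a*tan(theta) +/- d*sin(theta)."""
    theta = np.linspace(-1.3, 1.3, n)      # keep clear of theta = +/- pi/2
    branches = []
    for sign in (+1.0, -1.0):
        x = a + sign * d * np.cos(theta)
        y = a * np.tan(theta) + sign * d * np.sin(theta)
        branches.append(np.column_stack([x, y]))
    return branches

outer, inner = conchoid_of_nicomedes()
print(outer[:3])    # a few points on one branch
```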
[ { "math_id": 0, "text": "r=\\alpha(\\theta)" }, { "math_id": 1, "text": "r=\\alpha(\\theta)\\pm d " }, { "math_id": 2, "text": "x=a \\pm d \\cos \\theta,\\, y=a \\tan \\theta \\pm d \\sin \\theta." } ]
https://en.wikipedia.org/wiki?curid=898856
899085
Cissoid
Plane curve constructed from two other curves and a fixed point In geometry, a cissoid (from Ancient Greek "κισσοειδής" (kissoeidēs) 'ivy-shaped') is a plane curve generated from two given curves "C"1, "C"2 and a point O (the pole). Let L be a variable line passing through O and intersecting "C"1 at "P"1 and "C"2 at "P"2. Let P be the point on L so that formula_0 (There are actually two such points but P is chosen so that P is in the same direction from O as "P"2 is from "P"1.) Then the locus of such points P is defined to be the cissoid of the curves "C"1, "C"2 relative to O. Slightly different but essentially equivalent definitions are used by different authors. For example, P may be defined to be the point so that formula_1 This is equivalent to the other definition if "C"1 is replaced by its reflection through O. Or P may be defined as the midpoint of "P"1 and "P"2; this produces the curve generated by the previous curve scaled by a factor of 1/2. Equations. If "C"1 and "C"2 are given in polar coordinates by formula_2 and formula_3 respectively, then the equation formula_4 describes the cissoid of "C"1 and "C"2 relative to the origin. However, because a point may be represented in multiple ways in polar coordinates, there may be other branches of the cissoid which have a different equation. Specifically, "C"1 is also given by formula_5 So the cissoid is actually the union of the curves given by the equations formula_6 Which of these equations can be eliminated due to duplication can be determined on an individual basis, depending on the periods of "f"1 and "f"2. For example, let "C"1 and "C"2 both be the ellipse formula_7 The first branch of the cissoid is given by formula_8 which is simply the origin. The ellipse is also given by formula_9 so a second branch of the cissoid is given by formula_10 which is an oval-shaped curve. If "C"1 and "C"2 are given by the parametric equations formula_11 and formula_12 then the cissoid relative to the origin is given by formula_13 Specific cases. When "C"1 is a circle with center O then the cissoid is the conchoid of "C"2. When "C"1 and "C"2 are parallel lines then the cissoid is a third line parallel to the given lines. Hyperbolas. Let "C"1 and "C"2 be two non-parallel lines and let O be the origin. Let the polar equations of "C"1 and "C"2 be formula_14 and formula_15 By rotation through angle formula_16 we can assume that formula_17 Then the cissoid of "C"1 and "C"2 relative to the origin is given by formula_18 Combining constants gives formula_19 which in Cartesian coordinates is formula_20 This is a hyperbola passing through the origin. So the cissoid of two non-parallel lines is a hyperbola containing the pole. A similar derivation shows that, conversely, any hyperbola is the cissoid of two non-parallel lines relative to any point on it. Cissoids of Zahradnik. A cissoid of Zahradnik (named after Karel Zahradnik) is defined as the cissoid of a conic section and a line relative to any point on the conic. This is a broad family of rational cubic curves containing several well-known examples. Specifically: formula_21 is the cissoid of the circle formula_22 and the line formula_23 relative to the origin. formula_24 is the cissoid of the circle formula_22 and the line formula_25 relative to the origin. formula_26 is the cissoid of the circle formula_22 and the line formula_27 relative to the origin. This is, in fact, the curve for which the family is named and some authors refer to it simply as the cissoid.
The folium of Descartes formula_29 is the cissoid of the ellipse formula_30 and the line formula_31 relative to the origin. To see this, note that the line can be written formula_32 and the ellipse can be written formula_33 So the cissoid is given by formula_34 which is a parametric form of the folium.
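As a small numerical illustration of the polar recipe given in the Equations section, the sketch below builds a cissoid as "r" = "f"2(θ) − "f"1(θ) and checks it against a known identity: taking "C"1 to be a circle through the pole and "C"2 its tangent line produces the cissoid of Diocles. The radius, the range of θ and the residual check are choices made for this example.

```python
import numpy as np

def cissoid_polar(f1, f2, theta):
    """One branch of the cissoid of r = f1(theta) and r = f2(theta) relative to
    the origin: r = f2(theta) - f1(theta), returned in Cartesian coordinates."""
    r = f2(theta) - f1(theta)
    return r * np.cos(theta), r * np.sin(theta)

a = 1.0
circle = lambda t: a * np.cos(t)     # circle of diameter a passing through the pole
line = lambda t: a / np.cos(t)       # its tangent line x = a
theta = np.linspace(-1.2, 1.2, 400)
x, y = cissoid_polar(circle, line, theta)

# The points should satisfy the cissoid-of-Diocles equation y^2 (a - x) = x^3,
# up to floating-point error.
print(np.max(np.abs(y**2 * (a - x) - x**3)))
```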
[ { "math_id": 0, "text": "\\overline{OP} = \\overline{P_1 P_2}." }, { "math_id": 1, "text": "\\overline{OP} = \\overline{OP_1} + \\overline{OP_2}." }, { "math_id": 2, "text": "r=f_1(\\theta)" }, { "math_id": 3, "text": "r=f_2(\\theta)" }, { "math_id": 4, "text": "r=f_2(\\theta)-f_1(\\theta)" }, { "math_id": 5, "text": " \\begin{align}\n& r=-f_1(\\theta+\\pi) \\\\\n& r=-f_1(\\theta-\\pi) \\\\\n& r=f_1(\\theta+2\\pi) \\\\\n& r=f_1(\\theta-2\\pi) \\\\\n& \\qquad \\qquad \\vdots\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align}\n& r=f_2(\\theta)-f_1(\\theta) \\\\\n& r=f_2(\\theta)+f_1(\\theta+\\pi) \\\\ \n&r=f_2(\\theta)+f_1(\\theta-\\pi) \\\\\n& r=f_2(\\theta)-f_1(\\theta+2\\pi) \\\\\n& r=f_2(\\theta)-f_1(\\theta-2\\pi) \\\\\n& \\qquad \\qquad \\vdots\n\\end{align}" }, { "math_id": 7, "text": "r=\\frac{1}{2-\\cos \\theta}." }, { "math_id": 8, "text": "r=\\frac{1}{2-\\cos \\theta}-\\frac{1}{2-\\cos \\theta}=0," }, { "math_id": 9, "text": "r=\\frac{-1}{2+\\cos \\theta}," }, { "math_id": 10, "text": "r=\\frac{1}{2-\\cos \\theta}+\\frac{1}{2+\\cos \\theta}" }, { "math_id": 11, "text": "x = f_1(p),\\ y = px" }, { "math_id": 12, "text": "x = f_2(p),\\ y = px," }, { "math_id": 13, "text": "x = f_2(p)-f_1(p),\\ y = px." }, { "math_id": 14, "text": "r=\\frac{a_1}{\\cos (\\theta-\\alpha_1)}" }, { "math_id": 15, "text": "r=\\frac{a_2}{\\cos (\\theta-\\alpha_2)}." }, { "math_id": 16, "text": "\\tfrac{\\alpha_1-\\alpha_2}{2}," }, { "math_id": 17, "text": "\\alpha_1 = \\alpha,\\ \\alpha_2 = -\\alpha." }, { "math_id": 18, "text": "\\begin{align}\nr & = \\frac{a_2}{\\cos (\\theta+\\alpha)} - \\frac{a_1}{\\cos (\\theta-\\alpha)} \\\\\n& =\\frac{a_2\\cos (\\theta-\\alpha)-a_1\\cos (\\theta+\\alpha)}{\\cos (\\theta+\\alpha)\\cos (\\theta-\\alpha)} \\\\\n& =\\frac{(a_2\\cos\\alpha-a_1\\cos\\alpha)\\cos\\theta-(a_2\\sin\\alpha+a_1\\sin\\alpha)\\sin\\theta}{\\cos^2\\alpha\\ \\cos^2\\theta-\\sin^2\\alpha\\ \\sin^2\\theta}.\n\\end{align}" }, { "math_id": 19, "text": "r=\\frac{b\\cos\\theta+c\\sin\\theta}{\\cos^2\\theta-m^2\\sin^2\\theta}" }, { "math_id": 20, "text": "x^2-m^2y^2=bx+cy." }, { "math_id": 21, "text": "2x(x^2+y^2)=a(3x^2-y^2)" }, { "math_id": 22, "text": "(x+a)^2+y^2 = a^2" }, { "math_id": 23, "text": "x=-\\tfrac{a}{2}" }, { "math_id": 24, "text": "y^2(a+x) = x^2(a-x)" }, { "math_id": 25, "text": "x=-a" }, { "math_id": 26, "text": "x(x^2+y^2)+2ay^2=0" }, { "math_id": 27, "text": "x=-2a" }, { "math_id": 28, "text": "x=ka," }, { "math_id": 29, "text": "x^3+y^3=3axy" }, { "math_id": 30, "text": "x^2-xy+y^2 = -a(x+y)" }, { "math_id": 31, "text": "x+y=-a" }, { "math_id": 32, "text": "x=-\\frac{a}{1+p},\\ y=px" }, { "math_id": 33, "text": "x=-\\frac{a(1+p)}{1-p+p^2},\\ y=px." }, { "math_id": 34, "text": "x=-\\frac{a}{1+p}+\\frac{a(1+p)}{1-p+p^2} = \\frac{3ap}{1+p^3},\\ y=px" } ]
https://en.wikipedia.org/wiki?curid=899085
899115
Lunar distance
Distance from center of Earth to center of Moon &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The instantaneous Earth–Moon distance, or distance to the Moon, is the distance from the center of Earth to the center of the Moon. Lunar distance (LD or formula_0), or Earth–Moon characteristic distance, is a unit of measure in astronomy. More technically, it is the semi-major axis of the geocentric lunar orbit. The lunar distance is on average approximately , or 1.28 light-seconds; this is roughly 30 times Earth's diameter or 9.5 times Earth's circumference. Around 389 lunar distances make up an astronomical unit (AU; roughly the distance from Earth to the Sun). Lunar distance is commonly used to express the distance to near-Earth object encounters. The lunar semi-major axis is an important astronomical datum; the few-millimeter precision of the range measurements determines the semi-major axis to a few decimeters; it has implications for testing gravitational theories such as general relativity, and for refining other astronomical values, such as the mass, radius, and rotation of Earth. The measurement is also useful in characterizing the lunar radius, as well as the mass of and distance to the Sun. Millimeter-precision measurements of the lunar distance are made by measuring the time taken for laser beam light to travel between stations on Earth and retroreflectors placed on the Moon. The Moon is spiraling away from Earth at an average rate of per year, as detected by the Lunar Laser Ranging experiment. Value. Because of the influence of the Sun and other perturbations, the Moon does not travel on a true ellipse around the Earth. Different methods have nevertheless been used to define a semi-major axis. Ernest William Brown provided a formula for the parallax of the Moon as viewed from opposite sides of the Earth, involving trigonometric terms. This is equivalent to a formula for the inverse of the distance, and the average value of this is the inverse of . On the other hand, the time-averaged distance (rather than the inverse of the average inverse distance) between the centers of Earth and the Moon is . One can also model the orbit as an ellipse that is constantly changing, and in this case one can find a formula for the semi-major axis, again involving trigonometric terms. The average value by this method is 383,397 km. The actual distance varies over the course of the orbit of the Moon. Values at closest approach (perigee) or at farthest (apogee) are rarer the more extreme they are. The graph at right shows the distribution of perigee and apogee over six thousand years. Jean Meeus gives the following extreme values for 1500 BC to AD 8000: LD (or LDEO) Variation. The instantaneous lunar distance is constantly changing. The actual distance between the Moon and Earth can change as quickly as , or more than in just 6 hours, due to its non-circular orbit. There are other effects that also influence the lunar distance. Some factors include: The formula of Chapront and Touzé for the distance in kilometres begins with the terms: formula_1 where formula_2 is the mean anomaly (more or less how far the Moon has moved from perigee) and formula_3 is the mean elongation (more or less how far it has moved from conjunction with the Sun at new moon). They can be calculated from GM = 134.963 411 38° + 13.064 992 953 630°/d · t and D = 297.850 204 20° + 12.190 749 117 502°/d · t, where t is the time (in days) since January 1, 2000 (see Epoch (astronomy)).
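The truncated series quoted above is simple to evaluate directly. The following sketch computes an approximate Earth–Moon distance from it; since only the four periodic terms listed here are included, the result is only good to a few hundred kilometres, and the example date is arbitrary.

```python
import math

def lunar_distance_km(days_since_j2000):
    """Approximate Earth-Moon distance from the leading terms of the
    Chapront-Touze series quoted above."""
    t = days_since_j2000
    gm = math.radians(134.96341138 + 13.064992953630 * t)   # mean anomaly G_M
    d = math.radians(297.85020420 + 12.190749117502 * t)    # mean elongation D
    return (385000.5584
            - 20905.3550 * math.cos(gm)
            - 3699.1109 * math.cos(2 * d - gm)
            - 2955.9676 * math.cos(2 * d)
            - 569.9251 * math.cos(2 * gm))

# Example: roughly 10 000 days after 1 January 2000.
print(round(lunar_distance_km(10_000)), "km")
```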
This shows that the smallest perigee occurs at either new moon or full moon (ca 356870 km), as does the greatest apogee (ca 406079 km), whereas the greatest perigee will be around half-moon (ca 370180 km), as will be the smallest apogee (ca 404593 km). The exact values will be slightly different due to other terms. Twice in every full moon cycle of about 411 days there will be a minimal perigee and a maximal apogee, separated by two weeks, and a maximal perigee and a minimal apogee, also separated by two weeks. Perturbations and eccentricity. The distance to the Moon can be measured to an accuracy of over a 1-hour sampling period, which results in an overall uncertainty of a decimeter for the semi-major axis. However, due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by the gravitational effects of various astronomical bodies – most significantly the Sun and less so Venus and Jupiter. Other forces responsible for minute perturbations are: gravitational attraction to other planets in the Solar System and to asteroids; tidal forces; and relativistic effects. The effect of radiation pressure from the Sun contributes an amount of ± to the lunar distance. Although the instantaneous uncertainty is a few millimeters, the measured lunar distance can change by more than from the mean value throughout a typical month. These perturbations are well understood and the lunar distance can be accurately modeled over thousands of years. Tidal dissipation. Through the action of tidal forces, the angular momentum of Earth's rotation is slowly being transferred to the Moon's orbit. The result is that Earth's rate of spin is gradually decreasing (at a rate of ), and the lunar orbit is gradually expanding. The rate of recession is . However, it is believed that this rate has recently increased, as a rate of would imply that the Moon is only 1.5 billion years old, whereas scientific consensus supports an age of about 4 billion years. It is also believed that this anomalously high rate of recession may continue to accelerate. Theoretically, the lunar distance will continue to increase until the Earth and Moon become tidally locked, as are Pluto and Charon. This would occur when the duration of the lunar orbital period equals the rotational period of Earth, which is estimated to be 47 Earth days. The two bodies would then be at equilibrium, and no further rotational energy would be exchanged. However, models predict that 50 billion years would be required to achieve this configuration, which is significantly longer than the expected lifetime of the Solar System. Orbital history. Laser measurements show that the average lunar distance is increasing, which implies that the Moon was closer in the past, and that Earth's days were shorter. Fossil studies of mollusk shells from the Campanian era (80 million years ago) show that there were 372 days (of 23 h 33 min) per year during that time, which implies that the lunar distance was about 60.05 R🜨 (383,000 km or 238,000 mi). There is geological evidence that the average lunar distance was about 52 R🜨 (332,000 km or 205,000 mi) during the Precambrian Era; 2500 million years BP. The widely accepted giant impact hypothesis states that the Moon was created as a result of a catastrophic impact between Earth and another planet, resulting in a re-accumulation of fragments at an initial distance of 3.8 R🜨 (24,000 km or 15,000 mi). 
This theory assumes the initial impact to have occurred 4.5 billion years ago. History of measurement. Until the late 1950s all measurements of lunar distance were based on optical angular measurements: the earliest accurate measurement was by Hipparchus in the 2nd century BC. The space age marked a turning point when the precision of this value was much improved. During the 1950s and 1960s, there were experiments using radar, lasers, and spacecraft, conducted with the benefit of computer processing and modeling. Some historically significant or otherwise interesting methods of determining the lunar distance: Parallax. The oldest method of determining the lunar distance involved measuring the angle between the Moon and a chosen reference point from multiple locations, simultaneously. The synchronization can be coordinated by making measurements at a pre-determined time, or during an event which is observable to all parties. Before accurate mechanical chronometers, the synchronization event was typically a lunar eclipse, or the moment when the Moon crossed the meridian (if the observers shared the same longitude). This measurement technique is known as lunar parallax. For increased accuracy, the measured angle can be adjusted to account for refraction and distortion of light passing through the atmosphere. Lunar eclipse. Early attempts to measure the distance to the Moon exploited observations of a lunar eclipse combined with knowledge of Earth's radius and an understanding that the Sun is much further than the Moon. By observing the geometry of a lunar eclipse, the lunar distance can be calculated using trigonometry. The earliest accounts of attempts to measure the lunar distance using this technique were by Greek astronomer and mathematician Aristarchus of Samos in the 4th century BC and later by Hipparchus, whose calculations produced a result of 59–67 R🜨 ( or ). This method later found its way into the work of Ptolemy, who produced a result of R🜨 ( or ) at its farthest point. Meridian crossing. An expedition by French astronomer A.C.D. Crommelin observed lunar meridian transits on the same night from two different locations. Careful measurements from 1905 to 1910 measured the angle of elevation at the moment when a specific lunar crater (Mösting A) crossed the local meridian, from stations at Greenwich and at Cape of Good Hope. A distance was calculated with an uncertainty of , and this remained the definitive lunar distance value for the next half century. Occultations. By recording the instant when the Moon occults a background star, (or similarly, measuring the angle between the Moon and a background star at a predetermined moment) the lunar distance can be determined, as long as the measurements are taken from multiple locations of known separation. Astronomers O'Keefe and Anderson calculated the lunar distance by observing four occultations from nine locations in 1952. They calculated a semi-major axis of ( ± ). This value was refined in 1962 by Irene Fischer, who incorporated updated geodetic data to produce a value of ( ± ). Radar. The distance to the moon was measured by means of radar first in 1946 as part of Project Diana. Later, an experiment was conducted in 1957 at the U.S. Naval Research Laboratory that used the echo from radar signals to determine the Earth-Moon distance. Radar pulses lasting were broadcast from a diameter radio dish. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. 
From that measurement, the distance could be calculated. In practice, however, the signal-to-noise ratio was so low that an accurate measurement could not be reliably produced. The experiment was repeated in 1958 at the Royal Radar Establishment, in England. Radar pulses lasting were transmitted with a peak power of 2 megawatts, at a repetition rate of 260 pulses per second. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. Multiple signals were added together to obtain a reliable signal by superimposing oscilloscope traces onto photographic film. From the measurements, the distance was calculated with an uncertainty of . These initial experiments were intended to be proof-of-concept experiments and only lasted one day. Follow-on experiments lasting one month produced a semi-major axis of ( ± ), which was the most precise measurement of the lunar distance at the time. Laser ranging. An experiment which measured the round-trip time of flight of laser pulses reflected directly off the surface of the Moon was performed in 1962, by a team from Massachusetts Institute of Technology, and a Soviet team at the Crimean Astrophysical Observatory. During the Apollo missions in 1969, astronauts placed retroreflectors on the surface of the Moon for the purpose of refining the accuracy and precision of this technique. The measurements are ongoing and involve multiple laser facilities. The instantaneous precision of the Lunar Laser Ranging experiments can achieve small millimeter resolution, and is the most reliable method of determining the lunar distance. The semi-major axis is determined to be 384,399.0 km. Amateur astronomers and citizen scientists. Due to the modern accessibility of accurate timing devices, high resolution digital cameras, GPS receivers, powerful computers and near-instantaneous communication, it has become possible for amateur astronomers to make high accuracy measurements of the lunar distance. On May 23, 2007, digital photographs of the Moon during a near-occultation of Regulus were taken from two locations, in Greece and England. By measuring the parallax between the Moon and the chosen background star, the lunar distance was calculated. A more ambitious project called the "Aristarchus Campaign" was conducted during the lunar eclipse of 15 April 2014. During this event, participants were invited to record a series of five digital photographs from moonrise until culmination (the point of greatest altitude). The method took advantage of the fact that the Moon is actually closest to an observer when it is at its highest point in the sky, compared to when it is on the horizon. Although it appears that the Moon is biggest when it is near the horizon, the opposite is true. This phenomenon is known as the Moon illusion. The reason for the difference in distance is that the distance from the center of the Moon to the center of the Earth is nearly constant throughout the night, but an observer on the surface of Earth is actually 1 Earth radius from the center of Earth. This offset brings them closest to the Moon when it is overhead. Modern cameras have achieved a resolution capable of capturing the Moon with enough precision to detect and measure this tiny variation in apparent size. The results of this experiment were calculated as LD = R🜨. The accepted value for that night was 60.61 R🜨, which implied a accuracy. 
The benefit of this method is that the only measuring equipment needed is a modern digital camera (equipped with an accurate clock, and a GPS receiver). Other experimental methods of measuring the lunar distance that can be performed by amateur astronomers involve: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
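The geometry behind the "closest at culmination" method described above can be turned into a one-line estimate: if the Moon appears larger by a factor "k" at culmination than at moonrise, then, to a good approximation, the geocentric distance is "k"/("k" − 1) Earth radii. The sketch below applies this; the 1.7% size change used in the example is a hypothetical measurement, not a value from the campaign.

```python
def distance_in_earth_radii(size_at_culmination, size_at_moonrise):
    """Estimate the geocentric lunar distance (in Earth radii) from the apparent
    size of the Moon at moonrise and at culmination.

    At moonrise the observer is roughly one geocentric distance d from the Moon,
    while at culmination the observer is about d - R (one Earth radius closer),
    so the size ratio k satisfies k ~ d / (d - R), i.e. d / R = k / (k - 1).
    """
    k = size_at_culmination / size_at_moonrise
    return k / (k - 1.0)

# Hypothetical measurement: the Moon appears about 1.7% larger at culmination.
print(round(distance_in_earth_radii(1.017, 1.000), 1), "Earth radii")
```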
[ { "math_id": 0, "text": " \\Delta_{\\oplus L}" }, { "math_id": 1, "text": "\n\\begin{alignat}{3}\n\\frac{d}{\\mathrm{km}} = 385000.5584 & \\ -\\ 20905.3550 \\cdot \\cos(G_M) \\\\\n & \\ - \\ 3699.1109 \\cdot \\cos(2D - G_M) \\\\\n & \\ - \\ 2955.9676 \\cdot \\cos(2D) \\\\\n & \\ - \\ 569.9251 \\cdot \\cos(2G_M) \\\\\n & \\ \\pm \\ \\dotsc\n\\end{alignat}\n" }, { "math_id": 2, "text": "G_M" }, { "math_id": 3, "text": "D" } ]
https://en.wikipedia.org/wiki?curid=899115
899159
Statically indeterminate
When a structure's static equilibrium equations have no unique solution In statics and structural mechanics, a structure is statically indeterminate when the equilibrium equations – force and moment equilibrium conditions – are insufficient for determining the internal forces and reactions on that structure. Mathematics. Based on Newton's laws of motion, the equilibrium equations available for a two-dimensional body are: formula_0 the vectorial sum of the forces acting on the body equals zero. This translates to: formula_1 the sum of the horizontal components of the forces equals zero; formula_2 the sum of the vertical components of forces equals zero; formula_3 the sum of the moments (about an arbitrary point) of all forces equals zero. In the beam construction on the right, the four unknown reactions are V"A", V"B", V"C", and H"A". The equilibrium equations are: formula_4 Since there are four unknown forces (or variables) (V"A", V"B", V"C", and H"A") but only three equilibrium equations, this system of simultaneous equations does not have a unique solution. The structure is therefore classified as "statically indeterminate". To solve statically indeterminate systems (determine the various moment and force reactions within it), one considers the material properties and compatibility in deformations. Statically determinate. If the support at B is removed, the reaction V"B" cannot occur, and the system becomes statically determinate (or isostatic). Note that the system is "completely constrained" here. The system becomes an exact constraint kinematic coupling. The solution to the problem is: formula_5 If, in addition, the support at A is changed to a roller support, the number of reactions are reduced to three (without H"A"), but the beam can now be moved horizontally; the system becomes "unstable" or "partly constrained"—a mechanism rather than a structure. In order to distinguish between this and the situation when a system under equilibrium is perturbed and becomes unstable, it is preferable to use the phrase "partly constrained" here. In this case, the two unknowns V"A" and V"C" can be determined by resolving the vertical force equation and the moment equation simultaneously. The solution yields the same results as previously obtained. However, it is not possible to satisfy the horizontal force equation unless F"h" = 0. Statical determinacy. Descriptively, a statically determinate structure can be defined as a structure where, if it is possible to find internal actions in equilibrium with external loads, those internal actions are unique. The structure has no possible states of self-stress, i.e. internal forces in equilibrium with zero external loads are not possible. Statical indeterminacy, however, is the existence of a non-trivial (non-zero) solution to the homogeneous system of equilibrium equations. It indicates the possibility of self-stress (stress in the absence of an external load) that may be induced by mechanical or thermal action. Mathematically, this requires a stiffness matrix to have full rank. A statically indeterminate structure can only be analyzed by including further information like material properties and deflections. Numerically, this can be achieved by using matrix structural analyses, finite element method (FEM) or the moment distribution method (Hardy Cross) . Practically, a structure is called 'statically overdetermined' when it comprises more mechanical constraints – like walls, columns or bolts – than absolutely necessary for stability. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
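For the statically determinate case described above (support B removed), the three equilibrium equations can be solved directly as a linear system. The following sketch does this numerically; the load and the beam segment lengths are illustrative values, and the horizontal equation is taken as H"A" = F"h", matching the quoted solution. Adding V"B" back as a fourth unknown would give a 3×4 system of rank 3, which is exactly the statically indeterminate situation in which the equilibrium equations alone have no unique solution.

```python
import numpy as np

def determinate_beam_reactions(Fv, Fh, a, b, c):
    """Reactions (V_A, V_C, H_A) for the determinate beam (support B removed).

    Equilibrium equations:
      sum V:   V_A + V_C = Fv
      sum H:   H_A       = Fh
      sum M_A: (a + b + c) * V_C = Fv * a
    """
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, a + b + c, 0.0]])
    rhs = np.array([Fv, Fh, Fv * a])
    return np.linalg.solve(A, rhs)

# Illustrative numbers: a 10 kN vertical load applied 2 m from A on a 2+3+1 m beam.
V_A, V_C, H_A = determinate_beam_reactions(Fv=10.0, Fh=0.0, a=2.0, b=3.0, c=1.0)
print(V_A, V_C, H_A)   # 6.67, 3.33, 0.0 -- matching V_C = Fv*a/(a+b+c)
```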
[ { "math_id": 0, "text": " \\sum \\mathbf F = 0 :" }, { "math_id": 1, "text": " \\sum \\mathbf H = 0 :" }, { "math_id": 2, "text": " \\sum \\mathbf V = 0 :" }, { "math_id": 3, "text": " \\sum \\mathbf M = 0 :" }, { "math_id": 4, "text": "\\begin{align}\n\\sum \\mathbf V = 0 \\quad & \\implies \\quad \\mathbf V_A - \\mathbf F_v + \\mathbf V_B + \\mathbf V_C = 0 \\\\\n\\sum \\mathbf H = 0 \\quad & \\implies \\quad \\mathbf H_A = 0 \\\\\n \\sum \\mathbf M_A = 0 \\quad & \\implies \\quad \\mathbf F_v \\cdot a - \\mathbf V_B \\cdot (a + b) - \\mathbf V_C \\cdot (a + b + c) = 0 \n\\end{align}" }, { "math_id": 5, "text": "\\begin{align}\n \\mathbf H_A &= \\mathbf F_h \\\\\n \\mathbf V_C &= \\frac{\\mathbf F_v \\cdot a}{a + b + c} \\\\\n \\mathbf V_A &= \\mathbf F_v - \\mathbf V_C\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=899159
8992759
Kan fibration
Map between simplicial sets with lifting property In mathematics, Kan complexes and Kan fibrations are part of the theory of simplicial sets. Kan fibrations are the fibrations of the standard model category structure on simplicial sets and are therefore of fundamental importance. Kan complexes are the fibrant objects in this model category. The name is in honor of Daniel Kan. Definitions. Definition of the standard n-simplex. For each "n" ≥ 0, recall that the standard formula_0-simplex, formula_1, is the representable simplicial set formula_2 Applying the geometric realization functor to this simplicial set gives a space homeomorphic to the topological standard formula_0-simplex: the convex subspace of formula_3 consisting of all points formula_4 such that the coordinates are non-negative and sum to 1. Definition of a horn. For each "k" ≤ "n", this has a subcomplex formula_5, the "k"-th horn inside formula_1, corresponding to the boundary of the "n"-simplex, with the "k"-th face removed. This may be formally defined in various ways, as for instance the union of the images of the "n" maps formula_6 corresponding to all the other faces of formula_1. Horns of the form formula_7 sitting inside formula_8 look like the black V at the top of the adjacent image. If formula_9 is a simplicial set, then maps formula_10 correspond to collections of formula_0 formula_11-simplices satisfying a compatibility condition, one for each formula_12. Explicitly, this condition can be written as follows. Write the formula_11-simplices as a list formula_13 and require that formula_14 for all formula_15 with formula_16. These conditions are satisfied for the formula_11-simplices of formula_17 sitting inside formula_1. Definition of a Kan fibration. A map of simplicial sets formula_18 is a Kan fibration if, for any formula_19 and formula_20, and for any maps formula_21 and formula_22 such that formula_23 (where formula_24 is the inclusion of formula_5 in formula_1), there exists a map formula_25 such that formula_26 and formula_27. Stated this way, the definition is very similar to that of fibrations in topology (see also homotopy lifting property), whence the name "fibration". Technical remarks. Using the correspondence between formula_0-simplices of a simplicial set formula_9 and morphisms formula_28 (a consequence of the Yoneda lemma), this definition can be written in terms of simplices. The image of the map formula_29 can be thought of as a horn as described above. Asking that formula_30 factors through formula_31 corresponds to requiring that there is an formula_0-simplex in formula_32 whose faces make up the horn from formula_30 (together with one other face). Then the required map formula_33 corresponds to a simplex in formula_9 whose faces include the horn from formula_34. The diagram to the right is an example in two dimensions. Since the black V in the lower diagram is filled in by the blue formula_35-simplex, if the black V above maps down to it then the striped blue formula_35-simplex has to exist, along with the dotted blue formula_36-simplex, mapping down in the obvious way. Kan complexes defined from Kan fibrations. A simplicial set formula_9 is called a Kan complex if the map from formula_37, the one-point simplicial set, is a Kan fibration. In the model category for simplicial sets, formula_38 is the terminal object and so a Kan complex is exactly the same as a fibrant object. 
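As a small concrete check of the combinatorial compatibility condition just described (taken here in its usual form, "d""i"("x""j") = "d""j"−1("x""i") for all "i" < "j" with "i", "j" ≠ "k"; this standard indexing is an assumption, since the formulas are not spelled out above), the sketch below represents simplices of formula_1 by tuples of vertices, with the "i"-th face deleting the vertex in position "i", and verifies that the faces of an "n"-simplex with the "k"-th face omitted do assemble into a horn. The choice "n" = 3, "k" = 1 is illustrative.

```python
def face(simplex, i):
    """i-th face of a simplex written as a tuple of vertices: delete position i."""
    return simplex[:i] + simplex[i + 1:]

def is_horn(faces, n, k):
    """Check the compatibility condition face(x_j, i) == face(x_i, j-1) for all
    i < j with i != k and j != k, where faces[j] plays the role of x_j."""
    return all(face(faces[j], i) == face(faces[i], j - 1)
               for j in range(n + 1) if j != k
               for i in range(j) if i != k)

# The faces of the standard 3-simplex (vertices 0..3), omitting the k-th one,
# satisfy the condition, as they must: together they form the horn.
n, k = 3, 1
simplex = tuple(range(n + 1))
faces = {j: face(simplex, j) for j in range(n + 1) if j != k}
print(is_horn(faces, n, k))   # True
```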
Equivalently, this could be stated as: if every map formula_39 from a horn has an extension to formula_1, meaning there is a lift formula_40 such that formula_41 for the inclusion map formula_42, then formula_9 is a Kan complex. Conversely, every Kan complex has this property, so this gives a simple technical criterion for being a Kan complex. Examples. Simplicial sets from singular homology. An important example comes from the construction of singular simplices used to define singular homology, called the singular functor (pg. 7) formula_43. Given a space formula_9, define a singular formula_0-simplex of X to be a continuous map from the standard topological formula_0-simplex (as described above) to formula_9, formula_44 Taking the set of these maps for all non-negative formula_0 gives a graded set, formula_45. To make this into a simplicial set, define face maps formula_46 by formula_47 and degeneracy maps formula_48 by formula_49. Since the union of any formula_50 faces of formula_51 is a strong deformation retract of formula_51, any continuous function defined on these faces can be extended to formula_51, which shows that formula_52 is a Kan complex. Relation with geometric realization. It is worth noting that the singular functor is right adjoint to the geometric realization functor formula_53 giving the isomorphism formula_54 Simplicial sets underlying simplicial groups. It can be shown that the simplicial set underlying a simplicial group is always fibrant (pg. 12). In particular, for a simplicial abelian group, its geometric realization is homotopy equivalent to a product of Eilenberg–MacLane spaces formula_55 This includes, for example, classifying spaces. So the spaces formula_56, formula_57, and the infinite lens spaces formula_58 correspond to Kan complexes of some simplicial set. In fact, this set can be constructed explicitly using the Dold–Kan correspondence of a chain complex and taking the underlying simplicial set of the simplicial abelian group. Geometric realizations of small groupoids. Another important class of examples consists of the simplicial sets associated to a small groupoid formula_59. This is defined as the geometric realization of the simplicial set formula_60 and is typically denoted formula_61. We could have also replaced formula_59 with an infinity groupoid. It is conjectured that the homotopy category of geometric realizations of infinity groupoids is equivalent to the homotopy category of homotopy types. This is called the homotopy hypothesis. Non-example: standard n-simplex. It turns out the standard formula_0-simplex formula_1 is not a Kan complex (pg. 38). A counterexample can be found by looking at a low-dimensional example, say formula_62. Taking the map formula_63 sending formula_64 gives a counterexample, since it cannot be extended to a map formula_65 because the maps have to be order preserving. If there were such a map, it would have to send formula_66 but this is not a map of simplicial sets. Categorical properties. Simplicial enrichment and function complexes. For simplicial sets formula_67 there is an associated simplicial set called the function complex formula_68, where the simplices are defined as formula_69 and for an ordinal map formula_70 there is an induced map formula_71 (since the first factor of Hom is contravariant) defined by sending a map formula_72 to the composition formula_73 Exponential law. 
This complex has the following exponential law of simplicial sets formula_74 which sends a map formula_75 to the composite map formula_76 where formula_77 for formula_78 lifted to the n-simplex formula_1. Kan fibrations and pull-backs. Given a (Kan) fibration formula_79 and an inclusion of simplicial sets formula_80, there is a fibration (pg. 21) formula_81 (where formula_82 is the function complex in the category of simplicial sets) induced from the commutative diagram formula_83 where formula_84 is the pull-back map given by pre-composition and formula_85 is the pushforward map given by post-composition. In particular, the previous fibration implies that formula_86 and formula_87 are fibrations. Applications. Homotopy groups of Kan complexes. The homotopy groups of a fibrant simplicial set may be defined combinatorially, using horns, in a way that agrees with the homotopy groups of the topological space which realizes it. For a Kan complex formula_9 and a vertex formula_88, as a set formula_89 is defined as the set of maps formula_90 of simplicial sets fitting into a certain commutative diagram: formula_91 Notice that the requirement that formula_92 is mapped to a point is equivalent to the definition of the sphere formula_93 as the quotient formula_94 for the standard unit ball formula_95 Defining the group structure requires a little more work. Essentially, given two maps formula_96 there is an associated formula_97-simplex formula_98 such that formula_99 gives their addition. This map is well-defined up to simplicial homotopy classes of maps, giving the group structure. Moreover, the groups formula_89 are abelian for formula_100. For formula_101, it is defined as the homotopy classes formula_102 of vertex maps formula_88. Homotopy groups of simplicial sets. Using model categories, any simplicial set formula_9 has a fibrant replacement formula_103 which is homotopy equivalent to formula_9 in the homotopy category of simplicial sets. Then, the homotopy groups of formula_9 can be defined as formula_104 where formula_105 is a lift of formula_88 to formula_103. These fibrant replacements can be thought of as a topological analogue of resolutions of a chain complex (such as a projective resolution or a flat resolution).
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\Delta^n" }, { "math_id": 2, "text": "\\Delta^n(i) = \\mathrm{Hom}_{\\mathbf{\\Delta}} ([i], [n])" }, { "math_id": 3, "text": "\\mathbb{R}^{n+1}" }, { "math_id": 4, "text": "(t_0,\\dots,t_n)" }, { "math_id": 5, "text": "\\Lambda^n_k" }, { "math_id": 6, "text": "\\Delta^{n-1} \\rightarrow \\Delta^n" }, { "math_id": 7, "text": "\\Lambda_k^2" }, { "math_id": 8, "text": "\\Delta^2" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "s: \\Lambda_k^n \\to X" }, { "math_id": 11, "text": "(n-1)" }, { "math_id": 12, "text": "0 \\leq k \\leq n-1" }, { "math_id": 13, "text": "(s_0,\\dots,s_{k-1},s_{k+1},\\dots,s_{n})" }, { "math_id": 14, "text": "d_i s_j = d_{j-1} s_i\\," }, { "math_id": 15, "text": "i < j" }, { "math_id": 16, "text": "i,j \\neq k" }, { "math_id": 17, "text": "\\Lambda_k^n" }, { "math_id": 18, "text": "f: X\\rightarrow Y" }, { "math_id": 19, "text": "n\\ge 1" }, { "math_id": 20, "text": "0\\le k\\le n" }, { "math_id": 21, "text": "s:\\Lambda^n_k\\rightarrow X" }, { "math_id": 22, "text": "y:\\Delta^n\\rightarrow Y\\," }, { "math_id": 23, "text": "f \\circ s=y \\circ i" }, { "math_id": 24, "text": "i" }, { "math_id": 25, "text": "x:\\Delta^n \\rightarrow X" }, { "math_id": 26, "text": "s=x \\circ i" }, { "math_id": 27, "text": "y=f \\circ x" }, { "math_id": 28, "text": "\\Delta^n \\to X" }, { "math_id": 29, "text": "fs: \\Lambda_k^n \\to Y" }, { "math_id": 30, "text": "fs" }, { "math_id": 31, "text": "yi" }, { "math_id": 32, "text": "Y" }, { "math_id": 33, "text": "x: \\Delta^n\\to X" }, { "math_id": 34, "text": "s" }, { "math_id": 35, "text": "2" }, { "math_id": 36, "text": "1" }, { "math_id": 37, "text": "X \\to \\{*\\}" }, { "math_id": 38, "text": "\\{*\\}" }, { "math_id": 39, "text": "\\alpha: \\Lambda^n_k \\to X" }, { "math_id": 40, "text": "\\tilde{\\alpha}: \\Delta^n \\to X" }, { "math_id": 41, "text": "\\alpha = \\tilde{\\alpha}\\circ \\iota" }, { "math_id": 42, "text": "\\iota: \\Lambda^n_k \\hookrightarrow \\Delta^n" }, { "math_id": 43, "text": "S: \\text{Top} \\to s\\text{Sets}" }, { "math_id": 44, "text": "f: \\Delta_n \\to X" }, { "math_id": 45, "text": "S(X) = \\coprod_n S_n(X)" }, { "math_id": 46, "text": "d_i: S_n(X)\\to S_{n-1}(X)" }, { "math_id": 47, "text": "(d_i f)(t_0,\\dots,t_{n-1}) = f(t_0,\\dots,t_{i-1},0,t_i,\\dots,t_{n-1})\\," }, { "math_id": 48, "text": "s_i: S_n(X)\\to S_{n+1}(X)" }, { "math_id": 49, "text": "(s_i f)(t_0,\\dots,t_{n+1}) = f(t_0,\\dots,t_{i-1},t_i + t_{i+1},t_{i+2},\\dots,t_{n+1})\\," }, { "math_id": 50, "text": "n+1" }, { "math_id": 51, "text": "\\Delta_{n+1}" }, { "math_id": 52, "text": "S(X)" }, { "math_id": 53, "text": "|\\cdot|:s\\text{Sets} \\to \\text{Top}" }, { "math_id": 54, "text": "\\text{Hom}_{\\text{Top}}(|X|,Y) \\cong \\text{Hom}_{s\\text{Sets}}(X,S(Y))" }, { "math_id": 55, "text": "\\prod_{i \\in I} K(A_i,n_i)" }, { "math_id": 56, "text": "S^1 \\simeq K(\\mathbb{Z},1)" }, { "math_id": 57, "text": "\\mathbb{CP}^\\infty \\simeq K(\\mathbb{Z},2)" }, { "math_id": 58, "text": "L^\\infty_q \\simeq K(\\mathbb{Z}/q, 2)" }, { "math_id": 59, "text": "\\mathcal{G}" }, { "math_id": 60, "text": "[\\Delta^{op},\\mathcal{G}]" }, { "math_id": 61, "text": "B\\mathcal{G}" }, { "math_id": 62, "text": "\\Delta^1" }, { "math_id": 63, "text": "\\Lambda_0^2 \\to \\Delta^1" }, { "math_id": 64, "text": "\\begin{matrix}\n[0,2] \\mapsto [0,0] & \n[0,1] \\mapsto [0,1]\n\\end{matrix}" }, { "math_id": 65, "text": "\\Delta^2 \\to \\Delta^1" }, { "math_id": 66, "text": "\\begin{align}\n0 
\\mapsto 0 \\\\\n1 \\mapsto 1 \\\\\n2 \\mapsto 0\n\\end{align}" }, { "math_id": 67, "text": "X,Y" }, { "math_id": 68, "text": "\\textbf{Hom}(X,Y)" }, { "math_id": 69, "text": "\\textbf{Hom}_n(X,Y) = \\text{Hom}_{s\\text{Sets}}(X\\times\\Delta^n, Y)" }, { "math_id": 70, "text": "\\theta : [m] \\to [n]" }, { "math_id": 71, "text": "\\theta^*: \\textbf{Hom}(X,Y)_n \\to \\textbf{Hom}(X,Y)_m" }, { "math_id": 72, "text": "f:X\\times\\Delta^n \\to Y" }, { "math_id": 73, "text": "X\\times\\Delta^m \\xrightarrow{1\\times \\theta}X\\times\\Delta^n \\xrightarrow{f} Y" }, { "math_id": 74, "text": "\\text{ev}_*:\\text{Hom}_{s\\text{Sets}}(K, \\textbf{Hom}(X,Y)) \\to \\text{Hom}_{s\\text{Sets}}(X\\times K, Y)" }, { "math_id": 75, "text": "f: K \\to \\textbf{Hom}(X,Y)" }, { "math_id": 76, "text": "X\\times K \\xrightarrow{1\\times g}X\\times\\textbf{Hom}(X,Y) \\xrightarrow{ev} Y" }, { "math_id": 77, "text": "ev(x,f) = f(x,\\iota_n)" }, { "math_id": 78, "text": "\\iota_n \\in \\text{Hom}_\\Delta([n],[n])" }, { "math_id": 79, "text": "p:X \\to Y" }, { "math_id": 80, "text": "i: K \\hookrightarrow L" }, { "math_id": 81, "text": "\\textbf{Hom}(L,X) \\xrightarrow{(i^*,p_*)}\\textbf{Hom}(K,X)\\times_{\\textbf{Hom}(K,Y)}\\textbf{Hom}(L, Y)" }, { "math_id": 82, "text": "\\textbf{Hom}" }, { "math_id": 83, "text": "\\begin{matrix}\n\\textbf{Hom}(L,X) & \\xrightarrow{p_*} & \\textbf{Hom}(L,Y) \\\\\ni^* \\downarrow & & \\downarrow i^* \\\\\n\\textbf{Hom}(K,X) & \\xrightarrow{p_*} & \\textbf{Hom}(K,Y)\n\\end{matrix}" }, { "math_id": 84, "text": "i^*" }, { "math_id": 85, "text": "p_*" }, { "math_id": 86, "text": "p_*:\\textbf{Hom}(L,X) \\to \\textbf{Hom}(L,Y)" }, { "math_id": 87, "text": "i^*:\\textbf{Hom}(L,Y) \\to \\textbf{Hom}(K,Y)" }, { "math_id": 88, "text": "x:\\Delta^0 \\to X" }, { "math_id": 89, "text": "\\pi_n(X,x)" }, { "math_id": 90, "text": "\\alpha:\\Delta^n \\to X" }, { "math_id": 91, "text": "\\pi_n(X,x) =\n\\left\\{\n\\alpha: \\Delta^n \\to X :\n\\begin{matrix}\n\\Delta^n & \\overset{\\alpha}{\\to} & X \\\\\n\\uparrow & & \\uparrow x \\\\\n\\partial \\Delta^n & \\to & \\Delta^0\n\\end{matrix} \\right\\}" }, { "math_id": 92, "text": "\\partial\\Delta^n" }, { "math_id": 93, "text": "S^n" }, { "math_id": 94, "text": "B^n / \\partial B^n" }, { "math_id": 95, "text": "B^n = \\{x \\in \\mathbb{R}^n : ||x||_{eu} \\leq 1 \\}" }, { "math_id": 96, "text": "\\alpha,\\beta:\\Delta^n \\to X" }, { "math_id": 97, "text": "(n+1)" }, { "math_id": 98, "text": "\\omega:\\Delta^{n+1} \\to X" }, { "math_id": 99, "text": "d_n\\omega:\\Delta^n \\to X" }, { "math_id": 100, "text": "n \\geq 2" }, { "math_id": 101, "text": "\\pi_0(X)" }, { "math_id": 102, "text": "[x ]" }, { "math_id": 103, "text": "\\hat{X}" }, { "math_id": 104, "text": "\\pi_n(X,x) := \\pi_n(\\hat{X},\\hat{x})" }, { "math_id": 105, "text": "\\hat{x}" } ]
https://en.wikipedia.org/wiki?curid=8992759
899382
Compactly generated space
Property of topological spaces In topology, a topological space formula_0 is called a compactly generated space or k-space if its topology is determined by compact spaces in a manner made precise below. There is in fact no commonly agreed upon definition for such spaces, as different authors use variations of the definition that are not exactly equivalent to each other. Also, some authors include some separation axiom (like Hausdorff space or weak Hausdorff space) in the definition of one or both terms, and others don't. In the simplest definition, a "compactly generated space" is a space that is coherent with the family of its compact subspaces, meaning that for every set formula_1 formula_2 is open in formula_0 if and only if formula_3 is open in formula_4 for every compact subspace formula_5 Other definitions use a family of continuous maps from compact spaces to formula_0 and declare formula_0 to be compactly generated if its topology coincides with the final topology with respect to this family of maps. And other variations of the definition replace compact spaces with compact Hausdorff spaces. Compactly generated spaces were developed to remedy some of the shortcomings of the category of topological spaces. In particular, under some of the definitions, they form a cartesian closed category while still containing the typical spaces of interest, which makes them convenient for use in algebraic topology. Definitions. General framework for the definitions. Let formula_6 be a topological space, where formula_7 is the topology, that is, the collection of all open sets in formula_8 There are multiple (non-equivalent) definitions of "compactly generated space" or "k-space" in the literature. These definitions share a common structure, starting with a suitably specified family formula_9 of continuous maps from some compact spaces to formula_8 The various definitions differ in their choice of the family formula_10 as detailed below. The final topology formula_11 on formula_0 with respect to the family formula_9 is called the k-ification of formula_12 Since all the functions in formula_9 were continuous into formula_13 the k-ification of formula_7 is finer than (or equal to) the original topology formula_7. The open sets in the k-ification are called the k-open sets in formula_14 they are the sets formula_15 such that formula_16 is open in formula_4 for every formula_17 in formula_18 Similarly, the k-closed sets in formula_0 are the closed sets in its k-ification, with a corresponding characterization. In the space formula_19 every open set is k-open and every closed set is k-closed. The space formula_0 together with the new topology formula_11 is usually denoted formula_20 The space formula_0 is called compactly generated or a k-space (with respect to the family formula_9) if its topology is determined by all maps in formula_9, in the sense that the topology on formula_0 is equal to its k-ification; equivalently, if every k-open set is open in formula_19 or if every k-closed set is closed in formula_14 or in short, if formula_21 As for the different choices for the family formula_9, one can take all the inclusion maps from certain subspaces of formula_19 for example all compact subspaces, or all compact Hausdorff subspaces. 
This corresponds to choosing a set formula_22 of subspaces of formula_8 The space formula_0 is then "compactly generated" exactly when its topology is coherent with that family of subspaces; namely, a set formula_23 is open (resp. closed) in formula_0 exactly when the intersection formula_24 is open (resp. closed) in formula_4 for every formula_25 Another choice is to take the family of all continuous maps from arbitrary spaces of a certain type into formula_19 for example all such maps from arbitrary compact spaces, or from arbitrary compact Hausdorff spaces. These different choices for the family of continuous maps into formula_0 lead to different definitions of "compactly generated space". Additionally, some authors require formula_0 to satisfy a separation axiom (like Hausdorff or weak Hausdorff) as part of the definition, while others don't. The definitions in this article will not comprise any such separation axiom. As an additional general note, a sufficient condition that can be useful to show that a space formula_0 is compactly generated (with respect to formula_9) is to find a subfamily formula_26 such that formula_0 is compactly generated with respect to formula_27 For coherent spaces, that corresponds to showing that the space is coherent with a subfamily of the family of subspaces. For example, this provides one way to show that locally compact spaces are compactly generated. Below are some of the more commonly used definitions in more detail, in increasing order of specificity. For Hausdorff spaces, all three definitions are equivalent. So the terminology compactly generated Hausdorff space is unambiguous and refers to a compactly generated space (in any of the definitions) that is also Hausdorff. Definition 1. Informally, a space whose topology is determined by its compact subspaces, or equivalently in this case, by all continuous maps from arbitrary compact spaces. A topological space formula_0 is called compactly-generated or a k-space if it satisfies any of the following equivalent conditions: (1) The topology on formula_0 is coherent with the family of its compact subspaces; namely, it satisfies the property: a set formula_23 is open (resp. closed) in formula_0 exactly when the intersection formula_24 is open (resp. closed) in formula_4 for every compact subspace formula_28 (2) The topology on formula_0 coincides with the final topology with respect to the family of all continuous maps formula_17 from all compact spaces formula_29 (3) formula_0 is a quotient space of a topological sum of compact spaces. (4) formula_0 is a quotient space of a weakly locally compact space. As explained in the final topology article, condition (2) is well-defined, even though the family of continuous maps from arbitrary compact spaces is not a set but a proper class. The equivalence between conditions (1) and (2) follows from the fact that every inclusion from a subspace is a continuous map; and on the other hand, every continuous map formula_17 from a compact space formula_4 has a compact image formula_30 and thus factors through the inclusion of the compact subspace formula_30 into formula_8 Definition 2. Informally, a space whose topology is determined by all continuous maps from arbitrary compact Hausdorff spaces. 
A topological space formula_0 is called compactly-generated or a k-space if it satisfies any of the following equivalent conditions: (1) The topology on formula_0 coincides with the final topology with respect to the family of all continuous maps formula_17 from all compact Hausdorff spaces formula_29 In other words, it satisfies the condition: a set formula_23 is open (resp. closed) in formula_0 exactly when formula_31 is open (resp. closed) in formula_4 for every compact Hausdorff space formula_4 and every continuous map formula_32 (2) formula_0 is a quotient space of a topological sum of compact Hausdorff spaces. (3) formula_0 is a quotient space of a locally compact Hausdorff space. As explained in the final topology article, condition (1) is well-defined, even though the family of continuous maps from arbitrary compact Hausdorff spaces is not a set but a proper class. Every space satisfying Definition 2 also satisfies Definition 1. The converse is not true. For example, the one-point compactification of the Arens-Fort space is compact and hence satisfies Definition 1, but it does not satisfy Definition 2. Definition 2 is the one more commonly used in algebraic topology. This definition is often paired with the weak Hausdorff property to form the category CGWH of compactly generated weak Hausdorff spaces. Definition 3. Informally, a space whose topology is determined by its compact Hausdorff subspaces. A topological space formula_0 is called compactly-generated or a k-space if its topology is coherent with the family of its compact Hausdorff subspaces; namely, it satisfies the property: a set formula_23 is open (resp. closed) in formula_0 exactly when the intersection formula_24 is open (resp. closed) in formula_4 for every compact Hausdorff subspace formula_28 Every space satisfying Definition 3 also satisfies Definition 2. The converse is not true. For example, the Sierpiński space formula_33 with topology formula_34 does not satisfy Definition 3, because its compact Hausdorff subspaces are the singletons formula_35 and formula_36, and the coherent topology they induce would be the discrete topology instead. On the other hand, it satisfies Definition 2 because it is homeomorphic to the quotient space of the compact interval formula_37 obtained by identifying all the points in formula_38 By itself, Definition 3 is not quite as useful as the other two definitions as it lacks some of the properties implied by the others. For example, every quotient space of a space satisfying Definition 1 or Definition 2 is a space of the same kind. But that does not hold for Definition 3. However, for weak Hausdorff spaces Definitions 2 and 3 are equivalent. Thus the category CGWH can also be defined by pairing the weak Hausdorff property with Definition 3, which may be easier to state and work with than Definition 2. Motivation. Compactly generated spaces were originally called k-spaces, after the German word "kompakt". They were studied by Hurewicz, and can be found in General Topology by Kelley, Topology by Dugundji, Rational Homotopy Theory by Félix, Halperin, and Thomas. The motivation for their deeper study came in the 1960s from well-known deficiencies of the usual category of topological spaces. This fails to be a cartesian closed category, the usual cartesian product of identification maps is not always an identification map, and the usual product of CW-complexes need not be a CW-complex. 
By contrast, the category of simplicial sets had many convenient properties, including being cartesian closed. The history of the study of repairing this situation is given in the nLab article on convenient categories of spaces. The first suggestion (1962) to remedy this situation was to restrict oneself to the full subcategory of compactly generated Hausdorff spaces, which is in fact cartesian closed. These ideas extend on the de Vries duality theorem. A definition of the exponential object is given below. Another suggestion (1964) was to consider the usual Hausdorff spaces but use functions continuous on compact subsets. These ideas generalize to the non-Hausdorff case; i.e. with a different definition of compactly generated spaces. This is useful since identification spaces of Hausdorff spaces need not be Hausdorff. In modern-day algebraic topology, this property is most commonly coupled with the weak Hausdorff property, so that one works in the category CGWH of compactly generated weak Hausdorff spaces. Examples. As explained in the Definitions section, there is no universally accepted definition in the literature for compactly generated spaces; but Definitions 1, 2, 3 from that section are some of the more commonly used. In order to express results in a more concise way, this section will make use of the abbreviations CG-1, CG-2, CG-3 to denote each of the three definitions unambiguously: CG-1, CG-2 and CG-3 refer to Definition 1, Definition 2 and Definition 3 respectively (see the Definitions section for other equivalent conditions for each). For Hausdorff spaces the properties CG-1, CG-2, CG-3 are equivalent. Such spaces can be called "compactly generated Hausdorff" without ambiguity. Every CG-3 space is CG-2 and every CG-2 space is CG-1. The converse implications do not hold in general, as shown by some of the examples below. For weak Hausdorff spaces the properties CG-2 and CG-3 are equivalent. Sequential spaces are CG-2. This includes first-countable spaces, Alexandrov-discrete spaces, and finite spaces. Every CG-3 space is a T1 space (because given a singleton formula_39 its intersection with every compact Hausdorff subspace formula_40 is the empty set or a single point, which is closed in formula_41 hence the singleton is closed in formula_0). Finite T1 spaces have the discrete topology. So among the finite spaces, which are all CG-2, the CG-3 spaces are the ones with the discrete topology. Any finite non-discrete space, like the Sierpiński space, is an example of a CG-2 space that is not CG-3. Compact spaces and weakly locally compact spaces are CG-1, but not necessarily CG-2 (see examples below). Compactly generated Hausdorff spaces include the Hausdorff version of the various classes of spaces mentioned above as CG-1 or CG-2, namely Hausdorff sequential spaces, Hausdorff first countable spaces, locally compact Hausdorff spaces, etc. In particular, metric spaces and topological manifolds are compactly generated. CW complexes are also Hausdorff compactly generated. To provide examples of spaces that are not compactly generated, it is useful to examine "anticompact" spaces, that is, spaces whose compact subspaces are all finite. If a space formula_0 is anticompact and T1, every compact subspace of formula_0 has the discrete topology and the corresponding k-ification of formula_0 is the discrete topology. Therefore, any anticompact T1 non-discrete space is not CG-1. 
Examples include: Other examples of (Hausdorff) spaces that are not compactly generated include: For examples of spaces that are CG-1 and not CG-2, one can start with any space formula_44 that is not CG-1 (for example the Arens-Fort space or an uncountable product of copies of formula_42) and let formula_0 be the one-point compactification of formula_45 The space formula_0 is compact, hence CG-1. But it is not CG-2 because open subspaces inherit the CG-2 property and formula_44 is an open subspace of formula_0 that is not CG-2. Properties. Subspaces. Subspaces of a compactly generated space are not compactly generated in general, even in the Hausdorff case. For example, the ordinal space formula_46 where formula_47 is the first uncountable ordinal is compact Hausdorff, hence compactly generated. Its subspace with all limit ordinals except formula_47 removed is isomorphic to the Fortissimo space, which is not compactly generated (as mentioned in the Examples section, it is anticompact and non-discrete). Another example is the Arens space, which is sequential Hausdorff, hence compactly generated. It contains as a subspace the Arens-Fort space, which is not compactly generated. In a CG-1 space, every closed set is CG-1. The same does not hold for open sets. For instance, as shown in the Examples section, there are many spaces that are not CG-1, but they are open in their one-point compactification, which is CG-1. In a CG-2 space formula_19 every closed set is CG-2; and so is every open set (because there is a quotient map formula_48 for some locally compact Hausdorff space formula_44 and for an open set formula_15 the restriction of formula_49 to formula_50 is also a quotient map on a locally compact Hausdorff space). The same is true more generally for every locally closed set, that is, the intersection of an open set and a closed set. In a CG-3 space, every closed set is CG-3. Quotients. The disjoint union formula_51 of a family formula_52 of topological spaces is CG-1 if and only if each space formula_53 is CG-1. The corresponding statements also hold for CG-2 and CG-3. A quotient space of a CG-1 space is CG-1. In particular, every quotient space of a weakly locally compact space is CG-1. Conversely, every CG-1 space formula_0 is the quotient space of a weakly locally compact space, which can be taken as the disjoint union of the compact subspaces of formula_8 A quotient space of a CG-2 space is CG-2. In particular, every quotient space of a locally compact Hausdorff space is CG-2. Conversely, every CG-2 space is the quotient space of a locally compact Hausdorff space. A quotient space of a CG-3 space is not CG-3 in general. In fact, every CG-2 space is a quotient space of a CG-3 space (namely, some locally compact Hausdorff space); but there are CG-2 spaces that are not CG-3. For a concrete example, the Sierpiński space is not CG-3, but is homeomorphic to the quotient of the compact interval formula_37 obtained by identifying formula_54 to a point. More generally, any final topology on a set induced by a family of functions from CG-1 spaces is also CG-1. And the same holds for CG-2. This follows by combining the results above for disjoint unions and quotient spaces, together with the behavior of final topologies under composition of functions. A wedge sum of CG-1 spaces is CG-1. The same holds for CG-2. This is also an application of the results above for disjoint unions and quotient spaces. Products. 
The product of two compactly generated spaces need not be compactly generated, even if both spaces are Hausdorff and sequential. For example, the space formula_55 with the subspace topology from the real line is first countable; the space formula_56 with the quotient topology from the real line with the positive integers identified to a point is sequential. Both spaces are compactly generated Hausdorff, but their product formula_57 is not compactly generated. However, in some cases the product of two compactly generated spaces is compactly generated: When working in a category of compactly generated spaces (like all CG-1 spaces or all CG-2 spaces), the usual product topology on formula_57 is not compactly generated in general, so cannot serve as a categorical product. But its k-ification formula_58 does belong to the expected category and is the categorical product. Continuity of functions. The continuous functions on compactly generated spaces are those that behave well on compact subsets. More precisely, let formula_59 be a function from a topological space to another and suppose the domain formula_0 is compactly generated according to one of the definitions in this article. Since compactly generated spaces are defined in terms of a final topology, one can express the continuity of formula_60 in terms of the continuity of the composition of formula_60 with the various maps in the family used to define the final topology. The specifics are as follows. If formula_0 is CG-1, the function formula_60 is continuous if and only if the restriction formula_61 is continuous for each compact formula_28 If formula_0 is CG-2, the function formula_60 is continuous if and only if the composition formula_62 is continuous for each compact Hausdorff space formula_4 and continuous map formula_63 If formula_0 is CG-3, the function formula_60 is continuous if and only if the restriction formula_61 is continuous for each compact Hausdorff formula_28 Miscellaneous. For topological spaces formula_0 and formula_64 let formula_65 denote the space of all continuous maps from formula_0 to formula_44 topologized by the compact-open topology. If formula_0 is CG-1, the path components of formula_65 are precisely the homotopy classes of maps from formula_0 to formula_44 K-ification. Given any topological space formula_0 we can define a possibly finer topology on formula_0 that is compactly generated, sometimes called the k-ification of the topology. Let formula_66 denote the family of compact subsets of formula_8 We define the new topology on formula_0 by declaring a subset formula_2 to be closed if and only if formula_67 is closed in formula_68 for each index formula_69 Denote this new space by formula_20 One can show that the compact subsets of formula_70 and formula_0 coincide, and the induced topologies on compact subsets are the same. It follows that formula_70 is compactly generated. If formula_0 was compactly generated to start with, then formula_71 Otherwise the topology on formula_70 is strictly finer than formula_0 (i.e., there are more open sets). This construction is functorial. We denote by formula_72 the full subcategory of formula_73 with objects the compactly generated spaces, and by formula_74 the full subcategory of formula_72 with objects the Hausdorff spaces. 
The functor from formula_73 to formula_72 that takes formula_0 to formula_70 is right adjoint to the inclusion functor formula_75 The exponential object in formula_74 is given by formula_76 where formula_77 is the space of continuous maps from formula_0 to formula_44 with the compact-open topology. These ideas can be generalized to the non-Hausdorff case. This is useful since identification spaces of Hausdorff spaces need not be Hausdorff.
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "A \\subseteq X," }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "A \\cap K" }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": "K \\subseteq X." }, { "math_id": 6, "text": "(X,T)" }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": "X." }, { "math_id": 9, "text": "\\mathcal F" }, { "math_id": 10, "text": "\\mathcal F," }, { "math_id": 11, "text": "T_{\\mathcal F}" }, { "math_id": 12, "text": "T." }, { "math_id": 13, "text": "(X,T)," }, { "math_id": 14, "text": "X;" }, { "math_id": 15, "text": "U\\subseteq X" }, { "math_id": 16, "text": "f^{-1}(U)" }, { "math_id": 17, "text": "f:K\\to X" }, { "math_id": 18, "text": "\\mathcal F." }, { "math_id": 19, "text": "X," }, { "math_id": 20, "text": "kX." }, { "math_id": 21, "text": "kX=X." }, { "math_id": 22, "text": "\\mathcal C" }, { "math_id": 23, "text": "A\\subseteq X" }, { "math_id": 24, "text": "A\\cap K" }, { "math_id": 25, "text": "K\\in\\mathcal C." }, { "math_id": 26, "text": "\\mathcal G\\subseteq\\mathcal F" }, { "math_id": 27, "text": "\\mathcal G." }, { "math_id": 28, "text": "K\\subseteq X." }, { "math_id": 29, "text": "K." }, { "math_id": 30, "text": "f(K)" }, { "math_id": 31, "text": "f^{-1}(A)" }, { "math_id": 32, "text": "f:K\\to X." }, { "math_id": 33, "text": "X=\\{0,1\\}" }, { "math_id": 34, "text": "\\{\\emptyset,\\{1\\},X\\}" }, { "math_id": 35, "text": "\\{0\\}" }, { "math_id": 36, "text": "\\{1\\}" }, { "math_id": 37, "text": "[0,1]" }, { "math_id": 38, "text": "(0,1]." }, { "math_id": 39, "text": "\\{x\\}\\subseteq X," }, { "math_id": 40, "text": "K\\subseteq X" }, { "math_id": 41, "text": "K;" }, { "math_id": 42, "text": "\\mathbb R" }, { "math_id": 43, "text": "\\mathbb Z" }, { "math_id": 44, "text": "Y" }, { "math_id": 45, "text": "Y." }, { "math_id": 46, "text": "\\omega_1+1=[0,\\omega_1]" }, { "math_id": 47, "text": "\\omega_1" }, { "math_id": 48, "text": "q:Y\\to X" }, { "math_id": 49, "text": "q" }, { "math_id": 50, "text": "q^{-1}(U)" }, { "math_id": 51, "text": "{\\coprod}_i X_i" }, { "math_id": 52, "text": "(X_i)_{i\\in I}" }, { "math_id": 53, "text": "X_i" }, { "math_id": 54, "text": "(0,1]" }, { "math_id": 55, "text": "X=\\Reals \\setminus \\{1, 1/2, 1/3, \\ldots\\}" }, { "math_id": 56, "text": "Y=\\Reals / \\{1,2,3,\\ldots\\}" }, { "math_id": 57, "text": "X\\times Y" }, { "math_id": 58, "text": "k(X\\times Y)" }, { "math_id": 59, "text": "f:X\\to Y" }, { "math_id": 60, "text": "f" }, { "math_id": 61, "text": "f\\vert_K:K\\to Y" }, { "math_id": 62, "text": "f\\circ u:K\\to Y" }, { "math_id": 63, "text": "u:K\\to X." }, { "math_id": 64, "text": "Y," }, { "math_id": 65, "text": "C(X,Y)" }, { "math_id": 66, "text": "\\{K_\\alpha\\}" }, { "math_id": 67, "text": "A \\cap K_\\alpha" }, { "math_id": 68, "text": "K_\\alpha" }, { "math_id": 69, "text": "\\alpha." }, { "math_id": 70, "text": "kX" }, { "math_id": 71, "text": "kX = X." }, { "math_id": 72, "text": "\\mathbf{CGTop}" }, { "math_id": 73, "text": "\\mathbf{Top}" }, { "math_id": 74, "text": "\\mathbf{CGHaus}" }, { "math_id": 75, "text": "\\mathbf{CGTop} \\to \\mathbf{Top}." }, { "math_id": 76, "text": "k(Y^X)" }, { "math_id": 77, "text": "Y^X" } ]
https://en.wikipedia.org/wiki?curid=899382
8994161
Egyptian Mathematical Leather Roll
Ancient Egyptian text The Egyptian Mathematical Leather Roll (EMLR) is a 10 × 17 in (25 × 43 cm) leather roll purchased by Alexander Henry Rhind in 1858. It was sent to the British Museum in 1864, along with the Rhind Mathematical Papyrus, but it was not chemically softened and unrolled until 1927 (Scott, Hall 1927). The writing consists of Middle Kingdom hieratic characters written right to left. Scholars date the EMLR to the 17th century BCE. Mathematical content. This leather roll is an aid for computing Egyptian fractions. It contains 26 sums of unit fractions which equal another unit fraction. The sums appear in two columns, and are followed by two more columns which contain exactly the same sums. Of the 26 sums listed, ten are Eye of Horus numbers: 1/2, 1/4 (twice), 1/8 (thrice), 1/16 (twice), 1/32, 1/64 converted from Egyptian fractions. There are seven other sums having even denominators converted from Egyptian fractions: 1/6 (listed twice–but wrong once), 1/10, 1/12, 1/14, 1/20 and 1/30. By way of example, the three 1/8 conversions followed one or two scaling factors as alternatives: 1. 1/8 x 3/3 = 3/24 = (2 + 1)/24 = 1/12 + 1/24 2. 1/8 x 5/5 = 5/40 = (4 + 1)/40 = 1/10 + 1/40 3. 1/8 x 25/25 = 25/200 = (8 + 17)/200 = 1/25 + (17/200 x 6/6) = 1/25 + 102/1200 = 1/25 + (80 + 16 + 6)/1200 = 1/25 + 1/15 + 1/75 + 1/200 Finally, there were nine sums, having odd denominators, converted from Egyptian fractions: 2/3, 1/3 (twice), 1/5, 1/7, 1/9, 1/11, 1/13 and 1/15. The British Museum examiners found no introduction or description to how or why the equivalent unit fraction series were computed. Equivalent unit fraction series are associated with fractions 1/3, 1/4, 1/8 and 1/16. There was a trivial error associated with the final 1/15 unit fraction series. The 1/15 series was listed as equal to 1/6. Another serious error was associated with 1/13, an issue that the examiners of 1927 did not attempt to resolve. Modern analysis. The original mathematical texts never explain where the procedures and formulas came from. This holds true for the EMLR as well. Scholars have attempted to deduce what techniques the ancient Egyptians may have used to construct both the unit fraction tables of the EMLR and the 2/n tables known from the Rhind Mathematical Papyrus and the Lahun Mathematical Papyri. Both types of tables were used to aid in computations dealing with fractions, and for the conversion of measuring units. It has been noted that there are groups of unit fraction decompositions in the EMLR which are very similar. For instance lines 5 and 6 easily combine into the equation 1/3 + 1/6 = 1/2. It is easy to derive lines 11, 13, 24, 20, 21, 19, 23, 22, 25 and 26 by dividing this equation by 3, 4, 5, 6, 7, 8, 10, 15, 16 and 32 respectively. Some of the problems would lend themselves to a solution via an algorithm which involves multiplying both the numerator and the denominator by the same term and then further reducing the resulting equation: formula_0 This method leads to a solution for the fraction 1/8 as appears in the EMLR when using N=25 (using modern mathematical notation): formula_1 formula_2 Modern conclusions. The EMLR has been considered a student scribe test document since 1927, the year that the text was unrolled at the British Museum. The scribe practiced conversions of rational numbers 1/p and 1/pq to alternative unit fraction series. 
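As an illustration of the multiplier method described in the Modern analysis section above, here is a minimal Python sketch (not part of the original text; the function name and the brute-force search are illustrative assumptions). It scales 1/n by a chosen factor m/m and then looks for a way to write m as a sum of two or more distinct divisors of m·n, which reproduces the first two 1/8 conversions listed above; the EMLR's 1/25-based series for 1/8 needs the further scaling step shown in the text.

```python
from fractions import Fraction
from itertools import combinations

def emlr_decomposition(n, m):
    """Split 1/n into unit fractions by scaling: 1/n = m/(m*n), then write m
    as a sum of distinct divisors of m*n. Returns the denominators, or None."""
    mn = m * n
    divisors = [d for d in range(1, m + 1) if mn % d == 0]
    for size in range(2, len(divisors) + 1):          # skip the trivial one-part answer
        for combo in combinations(divisors, size):
            if sum(combo) == m:
                denoms = sorted(mn // d for d in combo)
                # sanity check: the parts really add up to 1/n
                assert sum(Fraction(1, q) for q in denoms) == Fraction(1, n)
                return denoms
    return None

print(emlr_decomposition(8, 3))   # [12, 24]: 1/8 x 3/3 = (2 + 1)/24 = 1/12 + 1/24
print(emlr_decomposition(8, 5))   # [10, 40]: 1/8 x 5/5 = (4 + 1)/40 = 1/10 + 1/40
```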
Reading the available Middle Kingdom mathematical records, the RMP 2/"n" table being one, modern students of Egyptian arithmetic may see that trained scribes improved conversions of 2/"n" and "n"/"p" to concise unit fraction series by applying both algorithmic and non-algorithmic methods. Chronology. The following chronology shows several milestones that marked the recent progress toward reporting a clearer understanding of the EMLR's contents, related to the RMP 2/"n" table.
[ { "math_id": 0, "text": "\\frac{1}{pq} = \\frac{1}{N}\\times\\frac{N}{pq} " }, { "math_id": 1, "text": "1/8 = 1/25 \\times 25/8 = 1/5 \\times 25/40 = 1/5 \\times (3/5 + 1/40) " }, { "math_id": 2, "text": "= 1/5 \\times (1/5 + 2/5 + 1/40) = 1/5 \\times (1/5 + 1/3 + 1/15 + 1/40) = 1/25 + 1/15 + 1/75 + 1/200" } ]
https://en.wikipedia.org/wiki?curid=8994161
899452
Lanchester's laws
Formulae for relative strengths of military forces Lanchester's laws are mathematical formulas for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B, with the rates of change depending only on A and B. In 1915 and 1916 during World War I, M. Osipov and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. Among these are what are known as "Lanchester's linear law" (for ancient combat) and "Lanchester's square law" (for modern combat with long-range weapons such as firearms). As of 2017 modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army’s combat simulations, and in 2016 a RAND Corporation report used these laws to examine the probable outcome of a Russian invasion of the Baltic nations of Estonia, Latvia, and Lithuania. Lanchester's linear law. For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons. The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force. Lanchester's square law. Lanchester's square law is also known as the N-square law. Description. With firearms engaging each other directly with aimed shooting from a distance, they can attack multiple targets and can receive fire from multiple directions. The rate of attrition now depends only on the number of weapons shooting. Lanchester determined that the power of such a force is proportional not to the number of units it has, but to the square of the number of units. This is known as Lanchester's square law. More precisely, the law specifies the casualties a shooting force will inflict over a period of time, relative to those inflicted by the opposing force. In its basic form, the law is only useful to predict outcomes and casualties by attrition. It does not apply to whole armies, where tactical deployment means not all troops will be engaged all the time. It only works where each unit (soldier, ship, etc.) can kill only one equivalent unit at a time. For this reason, the law does not apply to machine guns, artillery with unguided munitions, or nuclear weapons. The law requires an assumption that casualties accumulate over time: it does not work in situations in which opposing troops kill each other instantly, either by shooting simultaneously or by one side getting off the first shot and inflicting multiple casualties. Note that Lanchester's square law does not apply to technological force, only numerical force; so it requires an N-squared-fold increase in quality to compensate for an N-fold decrease in quantity. Example equations. 
Suppose that two armies, Red and Blue, are engaging each other in combat. Red is shooting a continuous stream of bullets at Blue. Meanwhile, Blue is shooting a continuous stream of bullets at Red. Let symbol "A" represent the number of soldiers in the Red force. Each one has "offensive firepower α", which is the number of enemy soldiers it can incapacitate (e.g., kill or injure) per unit time. Likewise, Blue has "B" soldiers, each with offensive firepower "β". Lanchester's square law calculates the number of soldiers lost on each side using the following pair of equations. Here, "dA/dt" represents the rate at which the number of Red soldiers is changing at a particular instant. A negative value indicates the loss of soldiers. Similarly, "dB/dt" represents the rate of change of the number of Blue soldiers. formula_0 formula_1 The solution to these equations shows that: if the two sides have equal per-soldier firepower ("α" = "β"), the side that starts with more soldiers wins; if the two sides start with equal numbers of soldiers, the side with the greater per-soldier firepower wins; a side with both more soldiers and greater per-soldier firepower always wins; and, in general, the winner is the side with the larger value of "α""A"² (respectively "β""B"²) at the start, so the fighting strength of a force is proportional to the square of the number of its units, and a numerical disadvantage by a factor of "N" can only be offset by an "N"²-fold advantage in per-unit firepower. The first three of these conclusions are obvious. The final one is the origin of the name "square law". Relation to the salvo combat model. Lanchester's equations are related to the more recent salvo combat model equations, with two main differences. First, Lanchester's original equations form a continuous time model, whereas the basic salvo equations form a discrete time model. In a gun battle, bullets or shells are typically fired in large quantities. Each round has a relatively low chance of hitting its target, and does a relatively small amount of damage. Therefore, Lanchester's equations model gunfire as a stream of firepower that continuously weakens the enemy force over time. By comparison, cruise missiles typically are fired in relatively small quantities. Each one has a high probability of hitting its target, and carries a relatively powerful warhead. Therefore, it makes more sense to model them as a discrete pulse (or salvo) of firepower in a discrete time model. Second, Lanchester's equations include only offensive firepower, whereas the salvo equations also include defensive firepower. Given their small size and large number, it is not practical to intercept bullets and shells in a gun battle. By comparison, cruise missiles can be intercepted (shot down) by surface-to-air missiles and anti-aircraft guns. So it is important to include such active defenses in a missile combat model. Lanchester's law in use. Lanchester's laws have been used to model historical battles for research purposes. Examples include Pickett's Charge of Confederate infantry against Union infantry during the 1863 Battle of Gettysburg, the 1940 Battle of Britain between the British and German air forces, and the Battle of Kursk. In modern warfare, to take into account that to some extent both the linear and the square law often apply, an exponent of 1.5 is used. Lanchester's laws have also been used to model guerrilla warfare. Attempts have been made to apply Lanchester's laws to conflicts between animal groups. Examples include tests with chimpanzees and ants. The chimpanzee application was relatively successful. A study of Australian meat ants and Argentine ants confirmed the square law, while a study of fire ants did not. Helmbold Parameters. 
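As a rough numerical illustration of the pair of equations above (a sketch only: the force sizes and firepower coefficients are made-up values, and simple Euler integration is used), the following Python snippet integrates the square-law equations and compares the outcome with the linear-law expectation that the survivors equal the simple difference in numbers.

```python
import math

def square_law(A, B, alpha, beta, dt=1e-3):
    """Euler integration of dA/dt = -beta*B, dB/dt = -alpha*A until one side is wiped out."""
    while A > 0 and B > 0:
        A, B = A - beta * B * dt, B - alpha * A * dt   # simultaneous update
    return max(A, 0.0), max(B, 0.0)

A0, B0, alpha, beta = 1000.0, 700.0, 0.05, 0.05        # equal per-soldier firepower
A_end, B_end = square_law(A0, B0, alpha, beta)

print(round(A_end), round(B_end))              # roughly 714 and 0
print(round(math.sqrt(A0**2 - B0**2)))         # 714: alpha*A^2 - beta*B^2 is conserved
print(A0 - B0)                                 # 300.0: what the linear law would predict
```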
While their definition is modeled after a solution of the Lanchester Square Law's differential equations, their numerical values are based entirely on the initial and final strengths of the opponents and in no way depend upon the validity of Lanchester's Square Law as a model of attrition during the course of a battle. The solution of Lanchester's Square Law used here can be written as: formula_2 where formula_3 is the time since the battle began, formula_4 and formula_5 are the surviving fractions of the attacker's and defender's forces at time formula_3, formula_6 is the Helmbold intensity parameter, formula_7 is the Helmbold defender's advantage parameter, formula_8 is the duration of the battle, and formula_9 is the Helmbold bitterness parameter. If the initial and final strengths of the two sides are known, it is possible to solve for the parameters formula_10, formula_11, formula_7, and formula_9. If the battle duration formula_8 is also known, then it is possible to solve for formula_6. If, as is normally the case, formula_9 is small enough that the hyperbolic functions can, without any significant error, be replaced by their series expansion up to terms in the first power of formula_9, and if we adopt the following abbreviations for the casualty fractions formula_12 then the following approximate relations hold: formula_13 That formula_9 is a kind of "average" (specifically, the geometric mean) of the casualty fractions justifies using it as an index of the bitterness of the battle. We note here that for statistical work it is better to use the natural logarithms of the Helmbold Parameters. We will call them, in an obvious notation, formula_14, formula_15, and formula_16. Major findings. See Helmbold (2021): Some observers have noticed a similar post-WWII decline in casualties at the level of wars instead of battles.
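A small Python sketch of the approximate relations above (illustrative only: the strength figures are invented and are not data from any historical battle).

```python
import math

def helmbold_parameters(a0, a1, d0, d1):
    """Approximate Helmbold bitterness (epsilon) and defender's-advantage (mu)
    parameters from initial/final strengths of attacker (a) and defender (d),
    using the small-epsilon relations epsilon = sqrt(F_A*F_D), mu = F_A/F_D."""
    F_A = 1 - a1 / a0          # attacker's casualty fraction
    F_D = 1 - d1 / d0          # defender's casualty fraction
    return math.sqrt(F_A * F_D), F_A / F_D

eps, mu = helmbold_parameters(a0=10000, a1=8500, d0=8000, d1=7400)
print(round(eps, 3), round(mu, 2))   # 0.106 2.0: the attacker lost proportionally twice as much
```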
[ { "math_id": 0, "text": "\\frac{\\mathrm{d}A}{\\mathrm{d}t}=-\\beta B" }, { "math_id": 1, "text": "\\frac{\\mathrm{d}B}{\\mathrm{d}t}=-\\alpha A" }, { "math_id": 2, "text": "\\begin{aligned}\na(t) &= \\cosh(\\lambda t) - \\mu \\sinh(\\lambda t) \\\\\nd(t) &= \\cosh(\\lambda t) - \\mu^{-1}\\sinh(\\lambda t) \\\\\n\\varepsilon &= \\lambda T\n\\end{aligned}" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "a(t)" }, { "math_id": 5, "text": "d(t)" }, { "math_id": 6, "text": "\\lambda" }, { "math_id": 7, "text": "\\mu" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "\\varepsilon" }, { "math_id": 10, "text": "a(T)" }, { "math_id": 11, "text": "d(T)" }, { "math_id": 12, "text": "\\begin{aligned}\nF_{A} &= 1-a(T) \\\\\nF_{D} &= 1-d(T)\n\\end{aligned}" }, { "math_id": 13, "text": "\\begin{aligned}\n\\varepsilon &= \\sqrt{F_{A}F_{D}} \\\\\n\\mu &= F_{A}/F_{D}\n\\end{aligned}" }, { "math_id": 14, "text": "\\log\\mu" }, { "math_id": 15, "text": "\\log\\varepsilon" }, { "math_id": 16, "text": "\\log\\lambda" }, { "math_id": 17, "text": "F_{A}" }, { "math_id": 18, "text": "F_{D}" } ]
https://en.wikipedia.org/wiki?curid=899452
8995919
Hypsometric equation
Atmospheric equation in meteorology The hypsometric equation, also known as the thickness equation, relates an atmospheric pressure ratio to the equivalent thickness of an atmospheric layer considering the layer mean of virtual temperature, gravity, and occasionally wind. It is derived from the hydrostatic equation and the ideal gas law. Formulation. The hypsometric equation is expressed as: formula_0 where: formula_1 is the thickness of the layer [m], formula_2 is the geometric height [m], formula_3 is the specific gas constant for dry air, formula_4 is the mean virtual temperature of the layer [K], formula_5 is the gravitational acceleration [m/s2], and formula_6 is pressure [Pa]. In meteorology, formula_7 and formula_8 are isobaric surfaces. In radiosonde observation, the hypsometric equation can be used to compute the height of a pressure level given the height of a reference pressure level and the mean virtual temperature in between. Then, the newly computed height can be used as a new reference level to compute the height of the next level given the mean virtual temperature in between, and so on. Derivation. The hydrostatic equation: formula_9 where formula_10 is the density [kg/m3], is used to generate the equation for hydrostatic equilibrium, written in differential form: formula_11 This is combined with the ideal gas law: formula_12 to eliminate formula_10: formula_13 This is integrated from formula_14 to formula_15: formula_16 "R" and "g" are constant with respect to "z", so they can be brought outside the integral. If temperature varies linearly with "z" (e.g., given a small change in "z"), it can also be brought outside the integral when replaced with formula_4, the average virtual temperature between formula_14 and formula_15. formula_17 Integration gives formula_18 simplifying to formula_19 Rearranging: formula_20 or, eliminating the natural log: formula_21 Correction. The Eötvös effect can be taken into account as a correction to the hypsometric equation. Physically, using a frame of reference that rotates with Earth, an air mass moving eastward effectively weighs less, which corresponds to an increase in thickness between pressure levels, and vice versa. The corrected hypsometric equation follows: formula_22 where the correction due to the Eötvös effect, A, can be expressed as follows: formula_23 where formula_24 is the angular velocity of Earth, formula_25 is the latitude, formula_26 is the distance from Earth's center to the air mass (approximately Earth's radius), formula_27 is the mean eastward (zonal) component of the wind velocity, and formula_28 is the mean northward (meridional) component of the wind velocity. This correction is considerable in tropical large-scale atmospheric motion.
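For example, a minimal Python implementation of the formulation above (the numerical constants for R and g are standard assumed values, and the example inputs are illustrative):

```python
import math

R_DRY = 287.05   # specific gas constant for dry air [J/(kg*K)]
G0 = 9.80665     # standard gravitational acceleration [m/s^2]

def thickness(p1, p2, mean_virtual_temp_k):
    """Hypsometric thickness h = z2 - z1 = (R * Tv_bar / g) * ln(p1/p2),
    with p1 the lower (higher-pressure) level and p2 the upper level."""
    return (R_DRY * mean_virtual_temp_k / G0) * math.log(p1 / p2)

# 1000-500 hPa thickness for a layer-mean virtual temperature of 266.5 K
print(round(thickness(1000e2, 500e2, 266.5)))   # about 5407 m
```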
[ { "math_id": 0, "text": "h = z_2 - z_1 = \\frac{R \\cdot \\overline{T_v}}{g} \\, \\ln \\left(\\frac{p_1}{p_2}\\right),\n" }, { "math_id": 1, "text": "h" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "\\overline{T_v}" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "p_1" }, { "math_id": 8, "text": "p_2" }, { "math_id": 9, "text": "p = \\rho \\cdot g \\cdot z," }, { "math_id": 10, "text": "\\rho" }, { "math_id": 11, "text": "dp = - \\rho \\cdot g \\cdot dz." }, { "math_id": 12, "text": "p = \\rho \\cdot R \\cdot T_v" }, { "math_id": 13, "text": "\\frac{\\mathrm{d}p}{p} = \\frac{-g}{R \\cdot T_v} \\, \\mathrm{d}z." }, { "math_id": 14, "text": "z_1" }, { "math_id": 15, "text": "z_2" }, { "math_id": 16, "text": "\\int_{p(z_1)}^{p(z_2)} \\frac{\\mathrm{d}p}{p} = \\int_{z_1}^{z_2}\\frac{-g}{R \\cdot T_v} \\, \\mathrm{d}z." }, { "math_id": 17, "text": "\\int_{p(z_1)}^{p(z_2)} \\frac{\\mathrm{d}p}{p} = \\frac{-g}{R \\cdot \\overline{T_v}}\\int_{z_1}^{z_2} \\, \\mathrm{d}z." }, { "math_id": 18, "text": "\\ln \\left( \\frac{p(z_2)}{p(z_1)} \\right) = \\frac{-g}{R \\cdot \\overline{T_v}} (z_2 - z_1), " }, { "math_id": 19, "text": "\\ln \\left( \\frac{p_1}{p_2} \\right) = \\frac{g}{R \\cdot \\overline{T_v}} (z_2 - z_1). " }, { "math_id": 20, "text": "z_2 - z_1 = \\frac{R \\cdot \\overline{T_v}}{g} \\ln \\left( \\frac{p_1}{p_2} \\right), " }, { "math_id": 21, "text": " \\frac{p_1}{p_2} = e^{\\frac{g}{R \\cdot \\overline{T_v}} \\cdot (z_2 - z_1)}." }, { "math_id": 22, "text": "h = z_2 - z_1 = \\frac{R \\cdot \\overline{T_v}}{g(1+A)} \\cdot \\ln \\left(\\frac{p_1}{p_2}\\right),\n" }, { "math_id": 23, "text": "A = -\\frac{1}{g} \\left(2 \\Omega \\overline{u} \\cos \\phi + \\frac{\\overline{u}^2 + \\overline{v}^2}{r}\\right),\n" }, { "math_id": 24, "text": "\\Omega" }, { "math_id": 25, "text": "\\phi" }, { "math_id": 26, "text": "r" }, { "math_id": 27, "text": "\\overline{u}" }, { "math_id": 28, "text": "\\overline{v}" } ]
https://en.wikipedia.org/wiki?curid=8995919
8997770
List of graphs
This partial list of graphs contains definitions of graphs and graph families. For collected definitions of graph theory terms that do not refer to individual graph types, such as "vertex" and "path", see Glossary of graph theory. For links to existing articles about particular kinds of graphs, see . Some of the finite structures considered in graph theory have names, sometimes inspired by the graph's topology, and sometimes after their discoverer. A famous example is the Petersen graph, a concrete graph on 10 vertices that appears as a minimal example or counterexample in many different contexts. Highly symmetric graphs. Strongly regular graphs. The strongly regular graph on "v" vertices and rank "k" is usually denoted srg("v,k",λ,μ). Symmetric graphs. A symmetric graph is one in which there is a symmetry (graph automorphism) taking any ordered pair of adjacent vertices to any other ordered pair; the Foster census lists all small symmetric 3-regular graphs. Every strongly regular graph is symmetric, but not vice versa. Graph families. Complete graphs. The complete graph on formula_0 vertices is often called the "formula_0-clique" and usually denoted formula_1, from German "komplett". Complete bipartite graphs. The complete bipartite graph is usually denoted formula_2. For formula_3 see the section on star graphs. The graph formula_4 equals the 4-cycle formula_5 (the square) introduced below. Cycles. The cycle graph on formula_0 vertices is called the "n-cycle" and usually denoted formula_6. It is also called a "cyclic graph", a "polygon" or the "n-gon". Special cases are the "triangle" formula_7, the "square" formula_5, and then several with Greek naming "pentagon" formula_8, "hexagon" formula_9, etc. Friendship graphs. The friendship graph "Fn" can be constructed by joining "n" copies of the cycle graph "C"3 with a common vertex. Fullerene graphs. In graph theory, the term fullerene refers to any 3-regular, planar graph with all faces of size 5 or 6 (including the external face). It follows from Euler's polyhedron formula, "V" – "E" + "F" = 2 (where "V", "E", "F" indicate the number of vertices, edges, and faces), that there are exactly 12 pentagons in a fullerene and "h" = "V"/2 – 10 hexagons. Therefore "V" = 20 + 2"h"; "E" = 30 + 3"h". Fullerene graphs are the Schlegel representations of the corresponding fullerene compounds. An algorithm to generate all the non-isomorphic fullerenes with a given number of hexagonal faces has been developed by G. Brinkmann and A. Dress. G. Brinkmann also provided a freely available implementation, called fullgen. Platonic solids. The complete graph on four vertices forms the skeleton of the tetrahedron, and more generally the complete graphs form skeletons of simplices. The hypercube graphs are also skeletons of higher-dimensional regular polytopes. Snarks. A snark is a bridgeless cubic graph that requires four colors in any proper edge coloring. The smallest snark is the Petersen graph, already listed above. Star. A star "S""k" is the complete bipartite graph "K"1,"k". The star "S"3 is called the claw graph. Wheel graphs. The wheel graph "Wn" is a graph on "n" vertices constructed by connecting a single vertex to every vertex in an ("n" − 1)-cycle. Other graphs. This partial list contains definitions of graphs and graph families which are known by particular names, but do not have a Wikipedia article of their own. Gear. 
A gear graph, denoted "G""n", is a graph obtained by inserting an extra vertex between each pair of adjacent vertices on the perimeter of a wheel graph "W""n". Thus, "G""n" has 2"n"+1 vertices and 3"n" edges. Gear graphs are examples of squaregraphs, and play a key role in the forbidden graph characterization of squaregraphs. Gear graphs are also known as cogwheels and bipartite wheels. Helm. A helm graph, denoted Hn, is a graph obtained by attaching a single edge and node to each node of the outer circuit of a wheel graph Wn. Lobster. A lobster graph is a tree in which all the vertices are within distance 2 of a central path. Compare "caterpillar". Web. The web graph "W""n","r" is a graph consisting of "r" concentric copies of the cycle graph "C""n", with corresponding vertices connected by "spokes". Thus "W""n",1 is the same graph as "C""n", and "W"n,2 is a prism. A web graph has also been defined as a prism graph "Y""n"+1, 3, with the edges of the outer cycle removed.
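A small check of the counts quoted above for the gear graph (2"n"+1 vertices and 3"n" edges) can be made by building the graph directly. The sketch below is plain Python with no graph library; the function name make_gear_graph is purely illustrative. It places a hub, an n-vertex rim, and one subdividing vertex on each rim edge, exactly as in the definition above.

    # Build the gear graph G_n and verify it has 2n+1 vertices and 3n edges.
    def make_gear_graph(n):
        # Vertex 0 is the hub; 1..n are the rim vertices of the underlying wheel;
        # n+1..2n are the extra vertices inserted between consecutive rim vertices.
        edges = set()
        for i in range(1, n + 1):
            edges.add((0, i))                      # spoke from the hub to a rim vertex
            mid = n + i                            # subdividing vertex after rim vertex i
            nxt = 1 if i == n else i + 1           # next rim vertex around the cycle
            edges.add((min(i, mid), max(i, mid)))  # one half of the subdivided rim edge
            edges.add((min(mid, nxt), max(mid, nxt)))
        return 2 * n + 1, edges

    for n in (3, 4, 5, 6):
        v, e = make_gear_graph(n)
        assert v == 2 * n + 1 and len(e) == 3 * n
        print(f"G_{n}: {v} vertices, {len(e)} edges")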
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "K_n" }, { "math_id": 2, "text": "K_{n,m}" }, { "math_id": 3, "text": "n=1" }, { "math_id": 4, "text": "K_{2,2}" }, { "math_id": 5, "text": "C_4" }, { "math_id": 6, "text": "C_n" }, { "math_id": 7, "text": "C_3" }, { "math_id": 8, "text": "C_5" }, { "math_id": 9, "text": "C_6" } ]
https://en.wikipedia.org/wiki?curid=8997770
899908
Chess piece relative value
Point-based valuation system for chess pieces In chess, a relative value (or point value) is a standard value conventionally assigned to each piece. Piece valuations have no role in the rules of chess but are useful as an aid to assessing a position. The best known system assigns 1 point to a pawn, 3 points to a knight or bishop, 5 points to a rook and 9 points to a queen. However, valuation systems provide only a rough guide and the true value of a piece is very position dependent. &lt;templatestyles src="Template:TOC_left/styles.css" /&gt; Standard valuations. Piece values exist because calculating all the way to checkmate in most positions is beyond the reach even of top computers. Thus players aim primarily to create a material advantage, and to chase this goal it is necessary to quantitatively approximate the strength of an army of pieces. Such piece values are valid for, and conceptually averaged over, tactically "quiet" positions where immediate tactical gain of material will not happen. The following table is the most common assignment of point values. The oldest derivation of the standard values is due to the Modenese School (Ercole del Rio, Giambattista Lolli, and Domenico Lorenzo Ponziani) in the 18th century and is partially based on the earlier work of Pietro Carrera. The value of the king is undefined as it cannot be captured, let alone traded, during the course of the game. Chess engines usually assign the king an arbitrary large value such as 200 points or more to indicate that the inevitable loss of the king due to checkmate trumps all other considerations. The endgame is a different story, as there is less danger of checkmate, allowing the king to take a more active role. The king is good at attacking and defending nearby pieces and pawns. It is better at defending such pieces than the knight is, and it is better at attacking them than the bishop is. Overall, this makes it more powerful than a minor piece but less powerful than a rook, so its fighting value is worth about four points. This system has some shortcomings. Combinations of pieces do not always equal the sum of their parts; for instance, two bishops on opposite colors are usually worth slightly more than a bishop plus a knight, and three &lt;dfn id=""&gt;minor pieces&lt;/dfn&gt; (nine points) are often slightly stronger than two rooks (ten points) or a queen (nine points). Chess-variant theorist Ralph Betza identified the 'leveling effect', which causes reduction of the value of stronger pieces in the presence of opponent weaker pieces, due to the latter interdicting access to part of the board for the former in order to prevent the value difference from evaporating by 1-for-1 trading. This effect causes 3 queens to badly lose against 7 knights (when both start behind a wall of pawns), even though the added piece values predict that the knights player is two knights short of equality. In a less exotic case it explains why trading rooks in the presence of a queen-vs-3-minors imbalance favors the queen player, as the rooks hinder the queen, but not so much the minors. Adding piece values thus is a first approximation, because one must also consider how well pieces cooperate with each other (e.g. opposite-coloured bishops cooperate very well), and how fast the piece travels (e.g. a short-range piece far away from the action on a large board is almost worthless). The evaluation of the pieces depends on many parameters. 
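As a concrete illustration of the standard scale, the Python sketch below tallies material from the board field of a FEN string using the 1-3-3-5-9 values. The function name and the example position are illustrative only, and, as stressed above, such a static sum is merely a rough guide that ignores position and piece cooperation.

    # Rough material count using the standard 1/3/3/5/9 scale.
    PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

    def material_balance(fen_board):
        """Return (white points, black points) from the board field of a FEN string."""
        white = black = 0
        for ch in fen_board:
            if ch.lower() in PIECE_VALUES:
                if ch.isupper():
                    white += PIECE_VALUES[ch.lower()]
                else:
                    black += PIECE_VALUES[ch]
        return white, black

    # Starting position: each side has 8*1 + 2*3 + 2*3 + 2*5 + 9 = 39 points.
    start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
    print(material_balance(start))   # (39, 39)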
Edward Lasker said, "It is difficult to compare the relative value of different pieces, as so much depends on the peculiarities of the position...". Nevertheless, he said that the bishop and knight (&lt;dfn id=""&gt;minor pieces&lt;/dfn&gt;) are equal, the rook is worth a minor piece plus one or two pawns, and the queen is worth three minor pieces or two rooks. Larry Kaufman suggests the following values in the middlegame: The &lt;dfn id=""&gt;bishop pair&lt;/dfn&gt; is worth 7.5 pawns – half a pawn more than the individual values of its constituent bishops combined. (Although it would be a very theoretical situation, there is no such bonus for a pair of same-coloured bishops. Per investigations by H. G. Muller, three light-squared bishops and one dark-squared one would receive only a 0.5-point bonus, while two on each colour would receive a 1-point bonus. Thus, one could rather think of it as penalising the absence of a piece, though more imbalanced combinations like 3:0 or 4:0 were not tested.) The position of the pieces also makes a significant difference, e.g. pawns near the edges are worth less than those near the centre, pawns close to promotion are worth far more, pieces controlling the centre are worth more than average, trapped pieces (such as &lt;dfn id=""&gt;bad bishops&lt;/dfn&gt;) are worth less, etc. Alternative valuations. Although the 1-3-3-5-9 system of point totals is the most commonly given, many other systems of valuing pieces have been proposed. Several systems have the bishop as usually being slightly more powerful than a knight. "Note:" Where a value for the king is given, this is used when considering piece development, its power in the endgame, etc. Larry Kaufman's 2021 system. Larry Kaufman in 2021 gives a more detailed system based on his experience working with chess engines, depending on the presence or absence of queens. He uses "middlegame" to mean positions where both queens are on the board, "threshold" for positions where there is an imbalance (one queen versus none, or two queens versus one), and "endgame" for positions without queens. (Kaufman did not give the queen's value in the middlegame or endgame cases, since in these cases both sides have the same number of queens and it cancels out.) The file of a pawn is also important, because this cannot change except by capture. According to Kaufman, the difference is small in the endgame (when queens are absent), but in the middlegame (when queens are present) the difference is substantial: In conclusion: In the endgame: In the threshold case (queen versus other pieces): In the middlegame case: The above is written for around ten pawns on the board (a normal number); the value of the rooks goes down as pawns are added, and goes up as pawns are removed. Finally, Kaufman proposes a simplified version that avoids decimals: use the traditional values P = 1, N = 3, B = 3+, and R = 5 with queens off the board, but use P = 1, N = 4, B = 4+, R = 6, Q = 11 when at least one player has a queen. The point is to show that two minor pieces equal rook and two pawns with queens on the board, but only rook and one pawn without queens. Hans Berliner's system. 
World Correspondence Chess Champion Hans Berliner gives the following valuations, based on experience and computer experiments: There are adjustments for the &lt;dfn id=""&gt;rank&lt;/dfn&gt; and &lt;dfn id=""&gt;file&lt;/dfn&gt; of a pawn and adjustments for the pieces depending on how &lt;dfn id=""&gt;open&lt;/dfn&gt; or &lt;dfn id=""&gt;closed&lt;/dfn&gt; the position is. Bishops, rooks, and queens gain up to 10 percent more value in open positions and lose up to 20 percent in closed positions. Knights gain up to 50 percent in closed positions and lose up to 30 percent in the corners and edges of the board. The value of a &lt;dfn id=""&gt;good bishop&lt;/dfn&gt; may be at least 10 percent higher than that of a &lt;dfn id=""&gt;bad bishop&lt;/dfn&gt;. There are different types of doubled pawns; see the diagram. White's doubled pawns on the b-file are the best situation in the diagram, since advancing the pawns and exchanging can get them un-doubled and mobile. The doubled b-pawn is worth 0.75 points. If the black pawn on a6 were on c6, it would not be possible to dissolve the doubled pawn, and it would be worth only 0.5 points. The doubled pawn on f2 is worth about 0.5 points. The second white pawn on the h-file is worth only 0.33 points, and additional pawns on the file would be worth only 0.2 points. Changing valuations in the endgame. As already noted when the standard values were first formulated, the relative strength of the pieces will change as a game progresses to the endgame. Pawns gain value as their path towards promotion becomes clear, and strategy begins to revolve around either defending or capturing them before they can promote. Knights lose value as their unique mobility becomes a detriment to crossing an empty board. Rooks and (to a lesser extent) bishops gain value as their lines of movement and attack are less obstructed. Queens slightly lose value as their high mobility becomes less proportionally useful when there are fewer pieces to attack and defend. Some examples follow. C.J.S. Purdy gave &lt;dfn id=""&gt;minor pieces&lt;/dfn&gt; a value of &lt;templatestyles src="Fraction/styles.css" /&gt;3+1⁄2 points in the opening and middlegame but 3 points in the endgame. Shortcomings of piece valuation systems. There are shortcomings of giving each type of piece a single, static value. Two minor pieces plus two pawns are sometimes as good as a queen. Two rooks are sometimes better than a queen and pawn. Many of the systems have a 2-point difference between the rook and a &lt;dfn id=""&gt;minor piece&lt;/dfn&gt;, but most theorists put that difference at about &lt;templatestyles src="Fraction/styles.css" /&gt;1+1⁄2 points (see ). In some open positions, a rook plus a pair of bishops are stronger than two rooks plus a knight. Example 1. Positions in which a bishop and knight can be exchanged for a rook and pawn are fairly common (see diagram). In this position, White should not do that, e.g.: 1. Nxf7? Rxf7 2. Bxf7+ Kxf7 This seems like an even exchange (6 points for 6 points), but it is not, as two minor pieces are better than a rook and pawn in the middlegame. In most openings, two minor pieces are better than a rook and pawn and are usually at least as good as a rook and two pawns until the position is greatly simplified (i.e. late middlegame or endgame). Minor pieces get into play earlier than rooks, and they coordinate better, especially when there are many pieces and pawns on the board. On the other hand, rooks are usually blocked by pawns until later in the game. 
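The point of Example 1 can be restated numerically. Under the traditional scale the trade looks even (3 + 3 against 5 + 1), while Kaufman's simplified middlegame values quoted earlier (pawn 1, knight and bishop 4, rook 6 when queens are on the board) already put the two minor pieces a full point ahead of rook and pawn. The dictionary names below are illustrative, and the bishop's extra half-point nuance is ignored.

    # Comparing the Example 1 trade (bishop + knight for rook + pawn) under two scales.
    TRADITIONAL = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
    KAUFMAN_SIMPLIFIED = {"P": 1, "N": 4, "B": 4, "R": 6, "Q": 11}  # queens still on the board

    def trade_margin(values):
        minors = values["B"] + values["N"]
        rook_and_pawn = values["R"] + values["P"]
        return minors - rook_and_pawn

    print("traditional scale:  ", trade_margin(TRADITIONAL))         # 0, looks even
    print("Kaufman simplified: ", trade_margin(KAUFMAN_SIMPLIFIED))  # 1, minors a pawn ahead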
Pachman also notes that the &lt;dfn id=""&gt;bishop pair&lt;/dfn&gt; is almost always better than a rook and pawn. Example 2. In this position, White has exchanged a queen and a pawn (10 points) for three minor pieces (9 points). White is better because three minor pieces are usually better than a queen because of their greater mobility, and Black's extra pawn is not important enough to change the situation. Three minor pieces are almost as strong as two rooks. Example 3. In this position, Black is ahead in material, but White is better. White's queenside is completely defended, and Black's additional queen has no target; additionally, White is much more active than Black and can gradually build up pressure on Black's weak kingside. Fairy pieces. In general, the approximate value formula_0 in centipawns of a short-range leaper with formula_1 moves on an 8 × 8 board is formula_2. The quadratic term reflects the possibility of cooperation between moves. If pieces are asymmetrical, moves going forward are about twice as valuable as moves going sideways or backward, presumably because enemy pieces can generally be found in the forward direction. Similarly, capturing moves are usually twice as valuable as noncapturing moves (of relevance for pieces that do not capture the same way they move). There also seems to be significant value in reaching different squares (e.g. ignoring the board edges, a king and knight both have 8 moves, but in one or two moves a knight can reach 40 squares whereas a king can only reach 24). It is also valuable for a piece to have moves to squares that are orthogonally adjacent, as this enables it to wipe out lone passed pawns (and also checkmate the king, but this is less important as usually enough pawns survive to the late endgame to allow checkmate to be achieved via promotion). As many games are decided by promotion, the effectiveness of a piece in opposing or supporting pawns is a major part of its value. An unexpected result from empirical computer studies is that the princess (a bishop-knight compound) and empress (a rook-knight compound) have almost exactly the same value, even though the lone rook is two pawns stronger than the lone bishop. The empress is about 50 centipawns weaker than the queen, and the princess (also called the cardinal) about 75 centipawns weaker than the queen. This does not appear to have much to do with the bishop's colourboundedness being masked in the compound, because adding a non-capturing backward step turns out to benefit the bishop about as much as the knight; and it also does not have much to do with the bishop's lack of mating potential being so masked, because adding a backward step (capturing and non-capturing) to the bishop benefits it about as much as adding such a step to the knight as well. A more likely explanation seems to be the large number of orthogonal contacts in the move pattern of the princess, with 16 such contacts for the princess compared to 8 for the empress and queen each: such orthogonal contacts would explain why even in cylindrical chess, the rook is still stronger than the bishop even though they now have the same mobility. This makes the princess extremely good at annihilating pawn chains, because it can attack a pawn as well as the square in front of it. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography
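Reading the leaper approximation above literally (about 33N + 0.7N^2 centipawns for a short-range leaper with N moves on an 8 × 8 board), a few sample move counts can be tabulated. The figures below are only as good as the approximation itself, and the piece labels are merely illustrative.

    # Sample values from the short-range-leaper approximation quoted above.
    def leaper_value(n_moves):
        """Approximate value in centipawns for a leaper with n_moves moves on an 8x8 board."""
        return 33 * n_moves + 0.7 * n_moves ** 2

    for label, n in [("4-move leaper (e.g. a ferz or wazir)", 4),
                     ("8-move leaper (knight- or king-like)", 8),
                     ("16-move leaper", 16)]:
        print(f"{label}: about {leaper_value(n):.0f} centipawns")
    # The 8-move case gives 33*8 + 0.7*64 = 308.8, close to the knight's traditional 3 pawns.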
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "V = 33N + 0.7{N}^2" } ]
https://en.wikipedia.org/wiki?curid=899908
900
Americium
Chemical element with atomic number 95 (Am) Americium is a synthetic chemical element; it has symbol Am and atomic number 95. It is radioactive and a transuranic member of the actinide series in the periodic table, located under the lanthanide element europium and was thus named after the Americas by analogy. Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer. Americium is a relatively soft radioactive metal with silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples. History. Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series." The new element was isolated from its oxides in a complex, multi-step process. First plutonium-239 nitrate (239PuNO3) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. 
Further separation was carried out by ion exchange, yielding a certain isotope of curium. The separation of curium and americium was so painstaking that those elements were initially called by the Berkeley group as "pandemonium" (from Greek for "all demons" or "hell") and "delirium" (from Latin for "madness"). Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of a α-particle to 237Np; the half-life of this decay was first determined as years but then corrected to 432.2 years. formula_0 The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h. formula_1 The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children "Quiz Kids" five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C. Occurrence. The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am), though the quantities would be tiny and this has not been confirmed. Extraterrestrial long-lived 247Cm is probably also deposited on Earth and has 243Am as one of its intermediate decay products, but again this has not been confirmed. Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. 
In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils. Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for the disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that have theoretically been detected in Przybylski's Star. Synthesis and extraction. Isotope nucleosynthesis. Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about of 241Am, remains almost unchanged owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a higher cost of the order . Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process: &lt;chem&gt;^{238}_{92}U -&gt;[\ce{(n,\gamma)}] ^{239}_{92}U -&gt;[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np -&gt;[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu&lt;/chem&gt; The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am: &lt;chem&gt;^{239}_{94}Pu -&gt;[\ce{2(n,\gamma)}] ^{241}_{94}Pu -&gt;[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am&lt;/chem&gt; The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm: formula_2 Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: &lt;chem&gt;^{239}_{94}Pu -&gt;[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu -&gt;[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am&lt;/chem&gt; Metal generation. Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. 
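Returning briefly to the in-growth figures quoted in the nucleosynthesis paragraphs above, the claim that half of the 241Pu is gone after about 15 years and that the 241Am inventory peaks after roughly 70 years follows from the standard two-member decay (Bateman) solution. The sketch below uses only the two half-lives given in the text; the variable names are illustrative.

    # Check of the Pu-241 -> Am-241 in-growth figures using the two-member Bateman solution,
    # N_Am(t) proportional to exp(-l2*t) - exp(-l1*t) for an initially pure Pu-241 sample.
    import math

    T_PU241 = 14.35   # half-life of Pu-241 in years (from the text)
    T_AM241 = 432.2   # half-life of Am-241 in years (from the text)

    l1 = math.log(2) / T_PU241   # decay constant of the parent
    l2 = math.log(2) / T_AM241   # decay constant of the daughter

    # The Am-241 inventory peaks where the time derivative of the Bateman solution vanishes.
    t_max = math.log(l1 / l2) / (l1 - l2)
    print(f"half of the Pu-241 has decayed after {T_PU241:.1f} years")
    print(f"the Am-241 inventory peaks after about {t_max:.0f} years")  # about 73 years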
In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A "bis"-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. formula_3 An alternative is the reduction of americium dioxide by metallic lanthanum or thorium: formula_4 Physical properties. In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3); but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than for curium (1340 °C). At ambient conditions, americium is present in its most stable α form which has a hexagonal crystal symmetry, and a space group P63/mmc with cell parameters "a" = 346.8 pm and "c" = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic ("fcc") symmetry, space group Fm3m and lattice constant "a" = 489 pm. This "fcc" structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for an appearance of a monoclinic phase at pressures between 10 and 15 GPa. 
There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an "fcc" phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structure defects is relatively low, by broadening of X-ray diffraction peaks. This effect makes somewhat uncertain the temperature of americium and some of its properties, such as electrical resistivity. So for americium-241, the resistivity at 4.2 K increases with time from about 2 μOhm·cm to 10 μOhm·cm after 40 hours, and saturates at about 16 μOhm·cm after 140 hours. This effect is less pronounced at room temperature, due to annihilation of radiation defects; also heating to room temperature the sample which was kept for hours at low temperatures restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 μOhm·cm at liquid helium to 69 μOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but is different from plutonium and curium which show a rapid rise up to 60 K followed by saturation. The room temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium. Americium is paramagnetic in a wide temperature range, from that of liquid helium, to room temperature and above. This behavior is markedly different from that of its neighbor curium which exhibits antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic and amounts to along the shorter "a" axis and for the longer "c" hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions is , from which the standard enthalpy change of formation (Δf"H"°) of aqueous Am3+ ion is . The standard potential Am3+/Am0 is . Chemical properties. Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states +2, +4, +5, +6 and +7 have also been studied. This is the widest range that has been observed with actinide elements. The color of americium compounds in aqueous solution is as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), ; (yellow), (brown) and (dark green). The absorption spectra have sharp peaks, due to "f"-"f" transitions' in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. 
Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion () in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the ion is unstable with respect to disproportionation. The reaction is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, compounds like and are comparable to uranates and the ion is comparable to the uranyl ion, . Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds. Oxygen compounds. Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide was prepared in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium which is used in nearly all its applications. As most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C, the decomposition completes at about 470 °C. The initial oxalate dissolves in nitric acid with the maximum solubility of 0.25 g/L. Halides. Halides of americium are known for the oxidation states +2, +3 and +4, where the +3 is most stable, especially in solutions. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. Specific lattice constants are: &lt;chem&gt;{Am} + \underset{mercury\ halide}{HgX2} -&gt;[{} \atop 400 - 500 ^\circ \ce C] {AmX2} + {Hg}&lt;/chem&gt; Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions: &lt;chem&gt;Am^3+ + 3F^- -&gt; AmF3(v)&lt;/chem&gt; The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: &lt;chem&gt;2AmF3 + F2 -&gt; 2AmF4&lt;/chem&gt; Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F with the americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differed from other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction, however a slow reduction was observed to Am(III) and assigned to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals with slight variation of the color and exact structure between the halogens. So, chloride (AmCl3) is reddish and has a structure isotypic to uranium(III) chloride (space group P63/m) and the melting point of 715 °C. 
The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R3). The bromide is an exception with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium(III) chloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic and have yellow-reddish color and a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis: AmCl3 + H2O -&gt; AmOCl + 2HCl Chalcogenides and pnictides. The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. Silicides and borides. Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with: 1.87 &lt; x &lt; 2.0) were obtained by reduction of americium(III) fluoride with elementary silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi, it has an orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group "I"41/amd), it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or inert atmosphere. Organoamericium compounds. Analogous to uranocene, americium forms the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3. Formation of the complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and therefore are useful in its selective separation from lanthanides and another actinides. Biological aspects. Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. Thus, Enterobacteriaceae of the genus "Citrobacter" precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. Fission. The isotope 242mAm (half-life 141 years) has the largest cross sections for absorption of thermal neutrons (5,700 barns), that results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such small critical mass is favorable for portable nuclear weapons, but those based on 242mAm are not known yet, probably because of its scarcity and high price. 
The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price yet hinder application of americium as a nuclear fuel in nuclear reactors. There are proposals of very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals. Isotopes. About 18 isotopes and 11 nuclear isomers are known for americium, having mass numbers 229, 230, and 232 through 247. There are two long-lived alpha-emitters; 243Am has a half-life of 7,370 years and is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am; it has a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with odd number of neutrons have relatively high rate of nuclear fission and low critical mass. Americium-241 decays to 237Np emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with the discrete energies between 26.3 and 158.5 keV. Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U. Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U. Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle. Applications. Ionization-type smoke detector. Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation. The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms. Radionuclide. As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 
390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes. Another proposed space-related application of americium is a fuel for space ships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. Small thickness avoids the problem of self-absorption of emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha-particles. The fission products of 242mAm can either directly propel the spaceship or they can heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator. One more proposal which utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the emitted by americium alpha particles, but on their charge, that is the americium acts as the self-sustaining "cathode". A single 3.2 kg 242mAm charge of such battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer. In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function. Neutron source. The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction: &lt;chem&gt;^{241}_{95}Am -&gt; ^{237}_{93}Np + ^{4}_{2}He + \gamma&lt;/chem&gt; &lt;chem&gt;^{9}_{4}Be + ^{4}_{2}He -&gt; ^{12}_{6}C + ^{1}_{0}n + \gamma&lt;/chem&gt; The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations. Production of other elements. Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm: &lt;chem&gt;^{243}_{95}Am -&gt;[\ce{(n,\gamma)}] ^{244}_{95}Am -&gt;[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm&lt;/chem&gt; Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (243Bk isotope) had been first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. Besides, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O. Spectrometer. 
Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and negligible Compton continuum (at least three orders of magnitude lower intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is however obsolete. Health concerns. As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth. If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity. Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
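As a numerical footnote to the smoke-detector section above, the quoted neptunium fractions (about 3% after 19 years and about 5% after 32 years) follow from simple exponential decay of 241Am with its 432.2-year half-life. The function name below is illustrative.

    # Neptunium in-growth in an ageing americium source, from simple exponential decay.
    HALF_LIFE_AM241 = 432.2   # years

    def neptunium_fraction(years):
        """Fraction of the original Am-241 that has decayed to Np-237 after the given time."""
        return 1.0 - 2.0 ** (-years / HALF_LIFE_AM241)

    for t in (19, 32):
        print(f"after {t} years: about {neptunium_fraction(t) * 100:.1f}% neptunium")
    # Prints roughly 3.0% and 5.0%, matching the figures in the smoke-detector section.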
[ { "math_id": 0, "text": "\\ce{^{239}_{94}Pu ->[\\ce{(n,\\gamma)}] ^{240}_{94}Pu ->[\\ce{(n,\\gamma)}] ^{241}_{94}Pu ->[\\beta^-][14.35\\ \\ce{yr}] ^{241}_{95}Am}\\ \\left( \\ce{->[\\alpha][432.2\\ \\ce{yr}] ^{237}_{93}Np} \\right)" }, { "math_id": 1, "text": "\\ce{^{241}_{95}Am ->[\\ce{(n,\\gamma)}] ^{242}_{95}Am}\\ \\left(\\ce{->[\\beta^-][16.02\\ \\ce{h}] ^{242}_{96}Cm} \\right)" }, { "math_id": 2, "text": "\\begin{cases}\n79\\%: & \\ce{^{241}_{95}Am ->[\\ce{(n,\\gamma)}] ^{242}_{95}Am}\n\\\\\n10\\%: & \\ce{^{241}_{95}Am ->[\\ce{(n,\\gamma)}] ^{242 m}_{95}Am}\n\\end{cases}" }, { "math_id": 3, "text": "\\mathrm{2\\ AmF_3\\ +\\ 3\\ Ba\\ \\longrightarrow \\ 2\\ Am\\ +\\ 3\\ BaF_2}" }, { "math_id": 4, "text": "\\mathrm{3\\ AmO_2\\ +\\ 4\\ La\\ \\longrightarrow \\ 3\\ Am\\ +\\ 2\\ La_2O_3}" } ]
https://en.wikipedia.org/wiki?curid=900
9000484
Quantum amplifier
In physics, a quantum amplifier is an amplifier that uses quantum mechanical methods to amplify a signal; examples include the active elements of lasers and optical amplifiers. The main properties of the quantum amplifier are its amplification coefficient and uncertainty. These parameters are not independent; the higher the amplification coefficient, the higher the uncertainty (noise). In the case of lasers, the uncertainty corresponds to the amplified spontaneous emission of the active medium. The unavoidable noise of quantum amplifiers is one of the reasons for the use of digital signals in optical communications and can be deduced from the fundamentals of quantum mechanics. Introduction. An amplifier increases the amplitude of whatever goes through it. While classical amplifiers take in classical signals, quantum amplifiers take in quantum signals, such as coherent states. This does not necessarily mean that the output is a coherent state; indeed, typically it is not. The form of the output depends on the specific amplifier design. Besides amplifying the intensity of the input, quantum amplifiers can also increase the quantum noise present in the signal. Exposition. The physical electric field in a paraxial single-mode pulse can be approximated as a superposition of modes; the electric field formula_0 of a single mode can be described as formula_1 where formula_2 denotes the spatial coordinates, formula_3 is the polarization vector, formula_4 is the wavenumber, formula_5 is the annihilation operator of the mode, and formula_6 is the mode function describing the spatial profile of the pulse. The analysis of the noise in the system is made with respect to the mean value of the annihilation operator. To obtain the noise, one solves for the real and imaginary parts of the projection of the field onto a given mode formula_6. Spatial coordinates do not appear in the solution. Assume that the mean value of the initial field is formula_7. Physically, the initial state corresponds to the coherent pulse at the input of the optical amplifier; the final state corresponds to the output pulse. The amplitude-phase behavior of the pulse must be known, although only the quantum state of the corresponding mode is important. The pulse may be treated in terms of a single-mode field. A quantum amplifier is a unitary transform formula_8, acting on the initial state formula_9 and producing the amplified state formula_10, as follows: formula_11 This equation describes the quantum amplifier in the Schrödinger representation. The amplification depends on the mean value formula_12 of the field operator formula_5 and its dispersion formula_13. A coherent state is a state with minimal uncertainty; when the state is transformed, the uncertainty may increase. This increase can be interpreted as noise in the amplifier. The gain formula_14 can be defined as follows: formula_15 The amplification can also be described in the Heisenberg representation; the changes are then attributed to the amplification of the field operator. Thus, the evolution of the operator "A" is given by formula_16, while the state vector remains unchanged. The gain is given by formula_17 In general, the gain formula_14 may be complex, and it may depend on the initial state. For laser applications, the amplification of coherent states is important. Therefore, it is usually assumed that the initial state is a coherent state characterized by a complex-valued initial parameter formula_18 such that formula_19. Even with such a restriction, the gain may depend on the amplitude or phase of the initial field. In the following, the Heisenberg representation is used; all brackets are assumed to be evaluated with respect to the initial coherent state.
formula_20 The expectation values are assumed to be evaluated with respect to the initial coherent state. This quantity characterizes the increase of the uncertainty of the field due to amplification. As the uncertainty of the field operator does not depend on its parameter, the quantity above shows how much the output field differs from a coherent state. Linear phase-invariant amplifiers. Linear phase-invariant amplifiers may be described as follows. Assume that the unitary operator formula_21 amplifies in such a way that the input formula_5 and the output formula_22 are related by a linear equation formula_23 where formula_24 and formula_25 are c-numbers and formula_26 is a creation operator characterizing the amplifier. Without loss of generality, it may be assumed that formula_24 and formula_25 are real. The commutator of the field operators is invariant under unitary transformation formula_27: formula_28 From the unitarity of formula_27, it follows that formula_29 satisfies the canonical commutation relations for operators with Bose statistics: formula_30 The c-numbers are then formula_31 Hence, the phase-invariant amplifier acts by introducing an additional mode to the field, with a large amount of stored energy, behaving as a boson. Calculating the gain and the noise of this amplifier, one finds formula_32 and formula_33 The coefficient formula_34 is sometimes called the "intensity amplification coefficient". The noise of the linear phase-invariant amplifier is given by formula_35. The gain can be reduced by splitting the beam; the estimate above gives the minimal possible noise of the linear phase-invariant amplifier. The linear amplifier has an advantage over the multi-mode amplifier: if several modes of a linear amplifier are amplified by the same factor, the noise in each mode is determined independently; that is, modes in a linear quantum amplifier are independent. To obtain a large amplification coefficient with minimal noise, one may use homodyne detection, constructing a field state with known amplitude and phase, corresponding to the linear phase-invariant amplifier. The uncertainty principle sets the lower bound of quantum noise in an amplifier. In particular, the output of a laser system and the output of an optical generator are not coherent states. Nonlinear amplifiers. Nonlinear amplifiers do not have a linear relation between their input and output. The noise of a nonlinear amplifier cannot be much smaller than that of an idealized linear amplifier. This limit is determined by the derivatives of the mapping function; a larger derivative implies an amplifier with greater uncertainty. Examples include most lasers, which contain near-linear amplifiers operating close to their threshold, and thus exhibit large uncertainty and nonlinear operation. As with the linear amplifiers, they may preserve the phase and keep the uncertainty low, but there are exceptions. These include parametric oscillators, which amplify while shifting the phase of the input. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
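The relations above for the linear phase-invariant amplifier are easy to tabulate. The sketch below takes an intensity gain g, recovers the coefficients c = sqrt(g) and s = sqrt(g - 1), checks that the bosonic condition c^2 - s^2 = 1 holds, and prints the added noise g - 1 for a coherent-state input; the function name is illustrative and this is not a simulation of any particular device.

    # Gain and added noise of an ideal linear phase-invariant amplifier,
    # A = c*a + s*b^dagger with c^2 - s^2 = 1, gain G = c, added noise = c^2 - 1 = g - 1.
    import math

    def amplifier_parameters(intensity_gain):
        c = math.sqrt(intensity_gain)         # amplitude gain G
        s = math.sqrt(intensity_gain - 1.0)   # coupling to the auxiliary bosonic mode b
        noise = c ** 2 - 1.0                  # added noise, as defined in the text
        assert abs((c ** 2 - s ** 2) - 1.0) < 1e-9   # commutation relation preserved
        return c, s, noise

    for g in (1.0, 2.0, 10.0, 100.0):
        c, s, noise = amplifier_parameters(g)
        print(f"g = {g:6.1f}:  G = {c:6.3f},  s = {s:6.3f},  added noise = {noise:6.1f}")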
[ { "math_id": 0, "text": "~E_{\\rm phys}~" }, { "math_id": 1, "text": " \\vec E_{\\rm phys}(\\vec x)~=~ \\vec e~ \\hat a~ M(\\vec x)~\\exp(ikz-{\\rm i}\\omega t) ~+~ {\\rm Hermitian~conjugate}~" }, { "math_id": 2, "text": "~\\vec x =\\{x_1,x_2,z \\}~" }, { "math_id": 3, "text": "~\\vec e ~" }, { "math_id": 4, "text": "~k~" }, { "math_id": 5, "text": "~\\hat a~" }, { "math_id": 6, "text": "~ M(\\vec x) ~" }, { "math_id": 7, "text": "~{\\left\\langle\\hat a\\right\\rangle_{\\rm initial}}~" }, { "math_id": 8, "text": " \\hat U " }, { "math_id": 9, "text": "~|{\\rm initial}\\rangle~" }, { "math_id": 10, "text": "~|{\\rm final}\\rangle~" }, { "math_id": 11, "text": "~|{\\rm final}\\rangle = U |\\rm initial \\rangle " }, { "math_id": 12, "text": "~\\langle \\hat a\\rangle ~" }, { "math_id": 13, "text": "~\\langle \\hat a^\\dagger \\hat a\\rangle - \\langle \\hat a^\\dagger \\rangle \\langle \\hat a\\rangle~" }, { "math_id": 14, "text": "~G~" }, { "math_id": 15, "text": " G= \\frac{\\left\\langle\\hat a\\right\\rangle _{\\rm final}}{\\left\\langle\\hat a\\right\\rangle _{\\rm initial}} " }, { "math_id": 16, "text": "~ \\hat A =\\hat U^\\dagger \\hat a \\hat U~ " }, { "math_id": 17, "text": "~ G= \\frac{\\left\\langle\\hat A\\right\\rangle _{\\rm initial}}{\\left\\langle\\hat a\\right\\rangle _{\\rm initial}}~" }, { "math_id": 18, "text": "~\\alpha~" }, { "math_id": 19, "text": "~~|{\\rm initial}\\rangle=|\\alpha\\rangle~" }, { "math_id": 20, "text": "{\\rm noise}= \\langle \\hat A^\\dagger \\hat A\\rangle -\\langle \\hat A^\\dagger \\rangle\\langle \\hat A\\rangle - \\left(\\langle \\hat a^\\dagger \\hat a\\rangle -\\langle \\hat a^\\dagger \\rangle\\langle \\hat a\\rangle\\right)" }, { "math_id": 21, "text": "~\\hat U~" }, { "math_id": 22, "text": "~\\hat A={\\hat U}^\\dagger \\hat a \\hat U~" }, { "math_id": 23, "text": "~\\hat A = c \\hat a + s \\hat b^\\dagger," }, { "math_id": 24, "text": "~c~" }, { "math_id": 25, "text": "~s~" }, { "math_id": 26, "text": "~\\hat b^\\dagger~" }, { "math_id": 27, "text": "~\\hat U~ " }, { "math_id": 28, "text": "\\hat A\\hat A^\\dagger -\\hat A^\\dagger\\hat A =\\hat a\\hat a^\\dagger -\\hat a^\\dagger \\hat a=1." }, { "math_id": 29, "text": "~ \\hat b~ " }, { "math_id": 30, "text": " ~\\hat b\\hat b^\\dagger -\\hat b^\\dagger \\hat b=1~" }, { "math_id": 31, "text": " ~c^2 \\!-\\! s^2=1~." }, { "math_id": 32, "text": "~~G\\!=\\!c~~" }, { "math_id": 33, "text": "~~{\\rm noise} =c^2\\!-\\!1." }, { "math_id": 34, "text": "~~ g\\!=\\!|G|^2~~" }, { "math_id": 35, "text": "g-1" } ]
https://en.wikipedia.org/wiki?curid=9000484
900125
Begriffsschrift
1879 book on logic by Gottlob Frege Begriffsschrift (German for, roughly, "concept-writing") is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book. "Begriffsschrift" is usually translated as "concept writing" or "concept notation"; the full title of the book identifies it as "a formula language, modeled on that of arithmetic, for pure thought." Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his "calculus ratiocinator" (despite that, in the foreword Frege clearly denies that he achieved this aim, and also that his main aim would be constructing an ideal language like Leibniz's, which Frege declares to be a quite hard and idealistic—though not impossible—task). Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter-century. This is the first work in Analytical Philosophy, a field that future British and Anglo philosophers such as Bertrand Russell further developed. Notation and the system. The calculus contains the first appearance of quantified variables, and is essentially classical bivalent second-order logic with identity. It is bivalent in that sentences or formulas denote either True or False; second order because it includes relation variables in addition to object variables and allows quantification over both. The modifier "with identity" specifies that the language includes the identity relation, =. Frege stated that his book was his version of a characteristica universalis, a Leibnizian concept that would be applied in mathematics. Frege presents his calculus using idiosyncratic two-dimensional notation: connectives and quantifiers are written using lines connecting formulas, rather than the symbols ¬, ∧, and ∀ in use today. For example, that judgement "B" materially implies judgement "A", i.e. formula_0 is written as . In the first chapter, Frege defines basic ideas and notation, like proposition ("judgement"), the universal quantifier ("the generality"), the conditional, negation and the "sign for identity of content" formula_1 (which he used to indicate both material equivalence and identity proper); in the second chapter he declares nine formalized propositions as axioms. In chapter 1, §5, Frege defines the conditional as follows: "Let A and B refer to judgeable contents, then the four possibilities are: Let signify that the third of those possibilities does not obtain, but one of the three others does. So if we negate , that means the third possibility is valid, i.e. we negate A and assert B." The calculus in Frege's work. Frege declared nine of his propositions to be axioms, and justified them by arguing informally that, given their intended meanings, they express self-evident truths. Re-expressed in contemporary notation, these axioms are: These are propositions 1, 2, 8, 28, 31, 41, 52, 54, and 58 in the "Begriffschrifft". (1)–(3) govern material implication, (4)–(6) negation, (7) and (8) identity, and (9) the universal quantifier. (7) expresses Leibniz's indiscernibility of identicals, and (8) asserts that identity is a reflexive relation. All other propositions are deduced from (1)–(9) by invoking any of the following inference rules: The main results of the third chapter, titled "Parts from a general series theory," concern what is now called the ancestral of a relation "R". ""a" is an "R"-ancestor of "b" is written "aR"*"b". 
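The ancestral R* of a relation R just described can be illustrated in modern terms as a transitive closure. The sketch below is not part of Frege's system; it uses a small, hypothetical finite relation purely to show what "a is an R-ancestor of b" amounts to.

```python
def ancestral(relation):
    """Return the ancestral (transitive closure) R* of a finite relation R,
    given as a set of (x, y) pairs: a R* b holds when b can be reached
    from a by one or more R-steps."""
    closure = set(relation)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Take R to be the successor relation y = x + 1 on a small initial segment
# of the natural numbers (a finite stand-in for Frege's example).
R = {(x, x + 1) for x in range(5)}
R_star = ancestral(R)

print((0, 3) in R_star)  # True: 0 R* 3, i.e. 3 is reached from 0 by successor steps
print((3, 0) in R_star)  # False: the ancestral is not symmetric
```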
Frege applied the results from the "Begriffsschrifft", including those on the ancestral of a relation, in his later work "The Foundations of Arithmetic". Thus, if we take "xRy" to be the relation "y" = "x" + 1, then 0"R"*"y" is the predicate "y" is a natural number." (133) says that if "x", "y", and "z" are natural numbers, then one of the following must hold: "x" &lt; "y", "x" = "y", or "y" &lt; "x". This is the so-called "law of trichotomy". Influence on other works. For a careful recent study of how the "Begriffsschrift" was reviewed in the German mathematical literature, see Vilko (1998). Some reviewers, especially Ernst Schröder, were on the whole favorable. All work in formal logic subsequent to the "Begriffsschrift" is indebted to it, because its second-order logic was the first formal logic capable of representing a fair bit of mathematics and natural language. Some vestige of Frege's notation survives in the "turnstile" symbol formula_16 derived from his "Urteilsstrich" ("judging/inferring stroke") │ and "Inhaltsstrich" (i.e. "content stroke") ──. Frege used these symbols in the "Begriffsschrift" in the unified form ├─ for declaring that a proposition is true. In his later "Grundgesetze" he revises slightly his interpretation of the ├─ symbol. In "Begriffsschrift" the "Definitionsdoppelstrich" (i.e. "definition double stroke") │├─ indicates that a proposition is a definition. Furthermore, the negation sign formula_17 can be read as a combination of the horizontal "Inhaltsstrich" with a vertical negation stroke. This negation symbol was reintroduced by Arend Heyting in 1930 to distinguish intuitionistic from classical negation. It also appears in Gerhard Gentzen's doctoral dissertation. In the "Tractatus Logico Philosophicus", Ludwig Wittgenstein pays homage to Frege by employing the term "Begriffsschrift" as a synonym for logical formalism. Frege's 1892 essay, "On Sense and Reference," recants some of the conclusions of the "Begriffsschrifft" about identity (denoted in mathematics by the "=" sign). In particular, he rejects the "Begriffsschrift" view that the identity predicate expresses a relationship between names, in favor of the conclusion that it expresses a relationship between the objects that are denoted by those names. Editions. Translations: References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " B \\rightarrow A " }, { "math_id": 1, "text": " \\equiv " }, { "math_id": 2, "text": " \\vdash \\ \\ A \\rightarrow \\left( B \\rightarrow A \\right) " }, { "math_id": 3, "text": " \\vdash \\ \\ \\left[ \\ A \\rightarrow \\left( B \\rightarrow C \\right) \\ \\right] \\ \\rightarrow \\ \\left[ \\ \\left( A \\rightarrow B \\right) \\rightarrow \\left( A \\rightarrow C \\right) \\ \\right] " }, { "math_id": 4, "text": " \\vdash \\ \\ \\left[ \\ D \\rightarrow \\left( B \\rightarrow A \\right) \\ \\right] \\ \\rightarrow \\ \\left[ \\ B \\rightarrow \\left( D \\rightarrow A \\right) \\ \\right] " }, { "math_id": 5, "text": " \\vdash \\ \\ \\left( B \\rightarrow A \\right) \\ \\rightarrow \\ \\left( \\lnot A \\rightarrow \\lnot B \\right) " }, { "math_id": 6, "text": " \\vdash \\ \\ \\lnot \\lnot A \\rightarrow A " }, { "math_id": 7, "text": " \\vdash \\ \\ A \\rightarrow \\lnot\\lnot A " }, { "math_id": 8, "text": " \\vdash \\ \\ \\left( c=d \\right) \\rightarrow \\left( f\\left(c\\right) = f\\left(d\\right) \\right) " }, { "math_id": 9, "text": " \\vdash \\ \\ c = c " }, { "math_id": 10, "text": " \\vdash \\ \\ \\forall a \\ f(a) \\rightarrow \\ f(c) " }, { "math_id": 11, "text": "\\vdash B" }, { "math_id": 12, "text": "\\vdash A \\to B" }, { "math_id": 13, "text": "\\vdash A" }, { "math_id": 14, "text": "\\vdash P \\to \\forall x A(x)" }, { "math_id": 15, "text": "\\vdash P \\to A(x)" }, { "math_id": 16, "text": "\\vdash" }, { "math_id": 17, "text": "\\neg" } ]
https://en.wikipedia.org/wiki?curid=900125
900160
Internal wave
Type of gravity waves that oscillate within a fluid medium Internal waves are gravity waves that oscillate within a fluid medium, rather than on its surface. To exist, the fluid must be stratified: the density must change (continuously or discontinuously) with depth/height due to changes, for example, in temperature and/or salinity. If the density changes over a small vertical distance (as in the case of the thermocline in lakes and oceans or an atmospheric inversion), the waves propagate horizontally like surface waves, but do so at slower speeds as determined by the density difference of the fluid below and above the interface. If the density changes continuously, the waves can propagate vertically as well as horizontally through the fluid. Internal waves, also called internal gravity waves, go by many other names depending upon the fluid stratification, generation mechanism, amplitude, and influence of external forces. If propagating horizontally along an interface where the density rapidly decreases with height, they are specifically called interfacial (internal) waves. If the interfacial waves are large amplitude they are called internal solitary waves or internal solitons. If moving vertically through the atmosphere where substantial changes in air density influences their dynamics, they are called anelastic (internal) waves. If generated by flow over topography, they are called Lee waves or mountain waves. If the mountain waves break aloft, they can result in strong warm winds at the ground known as Chinook winds (in North America) or Foehn winds (in Europe). If generated in the ocean by tidal flow over submarine ridges or the continental shelf, they are called internal tides. If they evolve slowly compared to the Earth's rotational frequency so that their dynamics are influenced by the Coriolis effect, they are called inertia gravity waves or, simply, inertial waves. Internal waves are usually distinguished from Rossby waves, which are influenced by the change of Coriolis frequency with latitude. Visualization of internal waves. An internal wave can readily be observed in the kitchen by slowly tilting back and forth a bottle of salad dressing - the waves exist at the interface between oil and vinegar. Atmospheric internal waves can be visualized by wave clouds: at the wave crests air rises and cools in the relatively lower pressure, which can result in water vapor condensation if the relative humidity is close to 100%. Clouds that reveal internal waves launched by flow over hills are called lenticular clouds because of their lens-like appearance. Less dramatically, a train of internal waves can be visualized by rippled cloud patterns described as herringbone sky or mackerel sky. The outflow of cold air from a thunderstorm can launch large amplitude internal solitary waves at an atmospheric inversion. In northern Australia, these result in Morning Glory clouds, used by some daredevils to glide along like a surfer riding an ocean wave. Satellites over Australia and elsewhere reveal these waves can span many hundreds of kilometers. Undulations of the oceanic thermocline can be visualized by satellite because the waves increase the surface roughness where the horizontal flow converges, and this increases the scattering of sunlight (as in the image at the top of this page showing of waves generated by tidal flow through the Strait of Gibraltar). Buoyancy, reduced gravity and buoyancy frequency. 
According to Archimedes principle, the weight of an immersed object is reduced by the weight of fluid it displaces. This holds for a fluid parcel of density formula_0 surrounded by an ambient fluid of density formula_1. Its weight per unit volume is formula_2, in which formula_3 is the acceleration of gravity. Dividing by a characteristic density, formula_4, gives the definition of the reduced gravity: formula_5 If formula_6, formula_7 is positive though generally much smaller than formula_3. Because water is much more dense than air, the displacement of water by air from a surface gravity wave feels nearly the full force of gravity (formula_8). The displacement of the thermocline of a lake, which separates warmer surface from cooler deep water, feels the buoyancy force expressed through the reduced gravity. For example, the density difference between ice water and room temperature water is 0.002 the characteristic density of water. So the reduced gravity is 0.2% that of gravity. It is for this reason that internal waves move in slow-motion relative to surface waves. Whereas the reduced gravity is the key variable describing buoyancy for interfacial internal waves, a different quantity is used to describe buoyancy in continuously stratified fluid whose density varies with height as formula_9. Suppose a water column is in hydrostatic equilibrium and a small parcel of fluid with density formula_10 is displaced vertically by a small distance formula_11. The buoyant restoring force results in a vertical acceleration, given by formula_12 This is the spring equation whose solution predicts oscillatory vertical displacement about formula_13 in time about with frequency given by the buoyancy frequency: formula_14 The above argument can be generalized to predict the frequency, formula_15, of a fluid parcel that oscillates along a line at an angle formula_16 to the vertical: formula_17. This is one way to write the dispersion relation for internal waves whose lines of constant phase lie at an angle formula_16 to the vertical. In particular, this shows that the buoyancy frequency is an upper limit of allowed internal wave frequencies. Mathematical modeling of internal waves. The theory for internal waves differs in the description of interfacial waves and vertically propagating internal waves. These are treated separately below. Interfacial waves. In the simplest case, one considers a two-layer fluid in which a slab of fluid with uniform density formula_18 overlies a slab of fluid with uniform density formula_19. Arbitrarily the interface between the two layers is taken to be situated at formula_20 The fluid in the upper and lower layers are assumed to be irrotational. So the velocity in each layer is given by the gradient of a velocity potential, formula_21 and the potential itself satisfies Laplace's equation: formula_22 Assuming the domain is unbounded and two-dimensional (in the formula_23 plane), and assuming the wave is periodic in formula_24 with wavenumber formula_25 the equations in each layer reduces to a second-order ordinary differential equation in formula_26. Insisting on bounded solutions the velocity potential in each layer is formula_27 and formula_28 with formula_29 the amplitude of the wave and formula_15 its angular frequency. In deriving this structure, matching conditions have been used at the interface requiring continuity of mass and pressure. 
These conditions also give the dispersion relation: formula_30 in which the reduced gravity formula_7 is based on the density difference between the upper and lower layers: formula_31 with formula_3 the Earth's gravity. Note that the dispersion relation is the same as that for deep water surface waves by setting formula_32 Internal waves in uniformly stratified fluid. The structure and dispersion relation of internal waves in a uniformly stratified fluid is found through the solution of the linearized conservation of mass, momentum, and internal energy equations assuming the fluid is incompressible and the background density varies by a small amount (the Boussinesq approximation). Assuming the waves are two dimensional in the x-z plane, the respective equations are formula_33 formula_34 formula_35 formula_36 in which formula_0 is the perturbation density, formula_37 is the pressure, and formula_38 is the velocity. The ambient density changes linearly with height as given by formula_9 and formula_4, a constant, is the characteristic ambient density. Solving the four equations in four unknowns for a wave of the form formula_39 gives the dispersion relation formula_40 in which formula_41 is the buoyancy frequency and formula_42 is the angle of the wavenumber vector to the horizontal, which is also the angle formed by lines of constant phase to the vertical. The phase velocity and group velocity found from the dispersion relation predict the unusual property that they are perpendicular and that the vertical components of the phase and group velocities have opposite sign: if a wavepacket moves upward to the right, the crests move downward to the right. Internal waves in the ocean. Most people think of waves as a surface phenomenon, which acts between water (as in lakes or oceans) and the air. Where low density water overlies high density water in the ocean, internal waves propagate along the boundary. They are especially common over the continental shelf regions of the world oceans and where brackish water overlies salt water at the outlet of large rivers. There is typically little surface expression of the waves, aside from slick bands that can form over the trough of the waves. Internal waves are the source of a curious phenomenon called dead water, first reported in 1893 by the Norwegian oceanographer Fridtjof Nansen, in which a boat may experience strong resistance to forward motion in apparently calm conditions. This occurs when the ship is sailing on a layer of relatively fresh water whose depth is comparable to the ship's draft. This causes a wake of internal waves that dissipates a huge amount of energy. Properties of internal waves. Internal waves typically have much lower frequencies and higher amplitudes than surface gravity waves because the density differences (and therefore the restoring forces) within a fluid are usually much smaller. Wavelengths vary from centimetres to kilometres with periods of seconds to hours respectively. The atmosphere and ocean are continuously stratified: potential density generally increases steadily downward. Internal waves in a continuously stratified medium may propagate vertically as well as horizontally. The dispersion relation for such waves is curious: For a freely-propagating internal wave packet, the direction of propagation of energy (group velocity) is perpendicular to the direction of propagation of wave crests and troughs (phase velocity). 
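The curious property just mentioned, that group velocity is perpendicular to phase velocity, follows directly from the dispersion relation ω² = N² k²/(k² + m²). The sketch below uses an arbitrarily chosen buoyancy frequency and wavenumber vector purely to illustrate that geometry numerically.

```python
import numpy as np

N = 0.01            # buoyancy frequency in rad/s (typical ocean order of magnitude; assumed)
k, m = 0.01, 0.02   # horizontal and vertical wavenumbers in rad/m (arbitrary example)

omega = N * k / np.hypot(k, m)          # dispersion relation: omega = N k / sqrt(k^2 + m^2)

# Phase velocity points along the wavenumber vector (k, m).
c_phase = omega / (k**2 + m**2) * np.array([k, m])

# Group velocity is the gradient of omega with respect to (k, m).
c_group = (N * m / (k**2 + m**2)**1.5) * np.array([m, -k])

print("omega          =", omega)
print("phase velocity =", c_phase)
print("group velocity =", c_group)
print("dot product    =", np.dot(c_phase, c_group))  # ~0: the two are perpendicular
print("opposite vertical signs:", np.sign(c_phase[1]) == -np.sign(c_group[1]))
```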
An internal wave may also become confined to a finite region of altitude or depth, as a result of varying stratification or wind. Here, the wave is said to be "ducted" or "trapped", and a vertically standing wave may form, where the vertical component of group velocity approaches zero. A ducted internal wave "mode" may propagate horizontally, with parallel group and phase velocity vectors, analogous to propagation within a waveguide. At large scales, internal waves are influenced both by the rotation of the Earth as well as by the stratification of the medium. The frequencies of these geophysical wave motions vary from a lower limit of the Coriolis frequency (inertial motions) up to the Brunt–Väisälä frequency, or buoyancy frequency (buoyancy oscillations). Above the Brunt–Väisälä frequency, there may be evanescent internal wave motions, for example those resulting from partial reflection. Internal waves at tidal frequencies are produced by tidal flow over topography/bathymetry, and are known as internal tides. Similarly, atmospheric tides arise from, for example, non-uniform solar heating associated with diurnal motion. Onshore transport of planktonic larvae. Cross-shelf transport, the exchange of water between coastal and offshore environments, is of particular interest for its role in delivering meroplanktonic larvae to often disparate adult populations from shared offshore larval pools. Several mechanisms have been proposed for the cross-shelf of planktonic larvae by internal waves. The prevalence of each type of event depends on a variety of factors including bottom topography, stratification of the water body, and tidal influences. Internal tidal bores. Similarly to surface waves, internal waves change as they approach the shore. As the ratio of wave amplitude to water depth becomes such that the wave “feels the bottom,” water at the base of the wave slows down due to friction with the sea floor. This causes the wave to become asymmetrical and the face of the wave to steepen, and finally the wave will break, propagating forward as an internal bore. Internal waves are often formed as tides pass over a shelf break. The largest of these waves are generated during springtides and those of sufficient magnitude break and progress across the shelf as bores. These bores are evidenced by rapid, step-like changes in temperature and salinity with depth, the abrupt onset of upslope flows near the bottom and packets of high frequency internal waves following the fronts of the bores. The arrival of cool, formerly deep water associated with internal bores into warm, shallower waters corresponds with drastic increases in phytoplankton and zooplankton concentrations and changes in plankter species abundances. Additionally, while both surface waters and those at depth tend to have relatively low primary productivity, thermoclines are often associated with a chlorophyll maximum layer. These layers in turn attract large aggregations of mobile zooplankton that internal bores subsequently push inshore. Many taxa can be almost absent in warm surface waters, yet plentiful in these internal bores. Surface slicks. While internal waves of higher magnitudes will often break after crossing over the shelf break, smaller trains will proceed across the shelf unbroken. At low wind speeds these internal waves are evidenced by the formation of wide surface slicks, oriented parallel to the bottom topography, which progress shoreward with the internal waves. 
Waters above an internal wave converge and sink in its trough and upwell and diverge over its crest. The convergence zones associated with internal wave troughs often accumulate oils and flotsam that occasionally progress shoreward with the slicks. These rafts of flotsam can also harbor high concentrations of larvae of invertebrates and fish an order of magnitude higher than the surrounding waters. Predictable downwellings. Thermoclines are often associated with chlorophyll maximum layers. Internal waves represent oscillations of these thermoclines and therefore have the potential to transfer these phytoplankton rich waters downward, coupling benthic and pelagic systems. Areas affected by these events show higher growth rates of suspension feeding ascidians and bryozoans, likely due to the periodic influx of high phytoplankton concentrations. Periodic depression of the thermocline and associated downwelling may also play an important role in the vertical transport of planktonic larvae. Trapped cores. Large steep internal waves containing trapped, reverse-oscillating cores can also transport parcels of water shoreward. These non-linear waves with trapped cores had previously been observed in the laboratory and predicted theoretically. These waves propagate in environments characterized by high shear and turbulence and likely derive their energy from waves of depression interacting with a shoaling bottom further upstream. The conditions favorable to the generation of these waves are also likely to suspend sediment along the bottom as well as plankton and nutrients found along the benthos in deeper water. References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Other. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rho" }, { "math_id": 1, "text": "\\rho_0" }, { "math_id": 2, "text": "g(\\rho-\\rho_0)" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "\\rho_{00}" }, { "math_id": 5, "text": "g^\\prime \\equiv g \\frac{\\rho-\\rho_0}{\\rho_{00}}" }, { "math_id": 6, "text": "\\rho>\\rho_0" }, { "math_id": 7, "text": "g^\\prime" }, { "math_id": 8, "text": "g^\\prime \\sim g" }, { "math_id": 9, "text": "\\rho_0(z)" }, { "math_id": 10, "text": "\\rho_0(z_0)" }, { "math_id": 11, "text": "\\Delta z" }, { "math_id": 12, "text": "\\frac{d^2 \\Delta z}{dt^2} = - g^\\prime = - g (\\rho_0(z_0)-\\rho_0(z_0+\\Delta z))/\\rho_0(z_0) \\simeq - g \\left(-\\frac{d\\rho_0}{dz} \\Delta z\\right)/\\rho_0(z_0)" }, { "math_id": 13, "text": "z_0" }, { "math_id": 14, "text": " N = \\left(-\\frac{g}{\\rho_0} \\frac{d\\rho_0}{dz}\\right)^{1/2}." }, { "math_id": 15, "text": "\\omega" }, { "math_id": 16, "text": "\\Theta" }, { "math_id": 17, "text": "\\omega = N \\cos\\Theta" }, { "math_id": 18, "text": "\\rho_1" }, { "math_id": 19, "text": "\\rho_2" }, { "math_id": 20, "text": "z=0." }, { "math_id": 21, "text": "{\\vec{u}=\\nabla\\phi,}" }, { "math_id": 22, "text": "\\nabla^2\\phi=0." }, { "math_id": 23, "text": "x-z" }, { "math_id": 24, "text": "x" }, { "math_id": 25, "text": "k>0," }, { "math_id": 26, "text": "z" }, { "math_id": 27, "text": "\\phi_1(x,z,t) = A e^{-kz} \\cos(kx - \\omega t)" }, { "math_id": 28, "text": "\\phi_2(x,z,t) = A e^{kz} \\cos(kx - \\omega t)," }, { "math_id": 29, "text": "A" }, { "math_id": 30, "text": "\\omega^2 = g^\\prime k" }, { "math_id": 31, "text": "g^\\prime = \\frac{\\rho_2-\\rho_1}{\\rho_2+\\rho_1}\\, g," }, { "math_id": 32, "text": "g^\\prime=g." }, { "math_id": 33, "text": "\\partial_x u + \\partial_z w = 0" }, { "math_id": 34, "text": "\\rho_{00} \\partial_t u = - \\partial_x p" }, { "math_id": 35, "text": "\\rho_{00} \\partial_t w = - \\partial_z p - \\rho g" }, { "math_id": 36, "text": "\\partial_t \\rho = -w d\\rho_0/dz" }, { "math_id": 37, "text": "p" }, { "math_id": 38, "text": "(u,w)" }, { "math_id": 39, "text": "\\exp[i(kx+mz-\\omega t)]" }, { "math_id": 40, "text": "\\omega^2 = N^2 \\frac{k^2}{k^2+m^2} = N^2 \\cos^2\\Theta" }, { "math_id": 41, "text": "N" }, { "math_id": 42, "text": "\\Theta=\\tan^{-1}(m/k)" } ]
https://en.wikipedia.org/wiki?curid=900160
90021
Flocking
Swarming behaviour of birds when flying or foraging Flocking is the behavior exhibited when a group of birds, called a flock, are foraging or in flight. Sheep and goats also exhibit flocking behavior. Computer simulations and mathematical models that have been developed to emulate the flocking behaviours of birds can also generally be applied to the "flocking" behaviour of other species. As a result, the term "flocking" is sometimes applied, in computer science, to species other than birds, to mean collective motion by a group of self-propelled entities, a collective animal behaviour exhibited by many living beings such as fish, bacteria, and insects. Flocking is considered an emergent behaviour arising from simple rules that are followed by individuals and does not involve any central coordination. In nature. There are parallels with the shoaling behaviour of fish, the swarming behaviour of insects, and herd behaviour of land animals. During the winter months, starlings are known for aggregating into huge flocks of hundreds to thousands of individuals, murmurations, which when they take flight altogether, render large displays of intriguing swirling patterns in the skies above observers. Flocking behaviour was simulated on a computer in 1987 by Craig Reynolds with his simulation program, Boids. This program simulates simple agents (boids) that are allowed to move according to a set of basic rules. The result is akin to a flock of birds, a school of fish, or a swarm of insects. Measurement. Measurements of bird flocking have been made using high-speed cameras, and a computer analysis has been made to test the simple rules of flocking mentioned below. It is found that they generally hold true in the case of bird flocking, but the long range attraction rule (cohesion) applies to the nearest 5–10 neighbors of the flocking bird and is independent of the distance of these neighbors from the bird. In addition, there is an anisotropy with regard to this cohesive tendency, with more cohesion being exhibited towards neighbors to the sides of the bird, rather than in front or behind. This is likely due to the field of vision of the flying bird being directed to the sides rather than directly forward or backward. Another recent study is based on an analysis of high speed camera footage of flocks above Rome, and uses a computer model assuming minimal behavioural rules. Algorithm. Rules. Basic models of flocking behaviour are controlled by three simple rules: Avoid crowding neighbours (short range repulsion) Steer towards average heading of neighbours Steer towards average position of neighbours (long range attraction) With these three simple rules, the flock moves in an extremely realistic way, creating complex motion and interaction that would be extremely hard to create otherwise. Rule variants. The basic model has been extended in several different ways since Reynolds proposed it. For instance, Delgado-Mata et al. extended the basic model to incorporate the effects of fear. Olfaction was used to transmit emotion between animals, through pheromones modelled as particles in a free expansion gas. Hartman and Benes introduced a complementary force to the alignment that they call the change of leadership. This steer defines the chance of the bird to become a leader and try to escape. 
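The three basic rules listed under "Rules" above (short-range separation, alignment with the average heading, and long-range cohesion) translate almost directly into code. The sketch below is a bare-bones illustration: the neighbourhood radius, rule weights, and update scheme are arbitrary assumptions, not part of Reynolds' original Boids program.

```python
import numpy as np

rng = np.random.default_rng(0)
n_boids, radius, dt = 50, 5.0, 0.1
pos = rng.uniform(0, 50, size=(n_boids, 2))   # positions in a 2D plane
vel = rng.uniform(-1, 1, size=(n_boids, 2))   # velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbours = (d < radius) & (d > 0)
        if not neighbours.any():
            continue
        # Rule 1 (separation): steer away from neighbours that are too close.
        too_close = (d < radius / 3) & (d > 0)
        separation = (pos[i] - pos[too_close]).sum(axis=0) if too_close.any() else 0.0
        # Rule 2 (alignment): steer towards the average heading of neighbours.
        alignment = vel[neighbours].mean(axis=0) - vel[i]
        # Rule 3 (cohesion): steer towards the average position of neighbours.
        cohesion = pos[neighbours].mean(axis=0) - pos[i]
        # Weights are arbitrary; a fuller implementation would also cap the speed.
        new_vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
    return pos + dt * new_vel, new_vel

for _ in range(100):      # each step is O(n^2), as noted in the Complexity section
    pos, vel = step(pos, vel)
```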
Hemelrijk and Hildenbrandt used attraction, alignment, and avoidance, and extended this with a number of traits of real starlings: The authors showed that the specifics of flying behaviour as well as large flock size and low number of interaction partners were essential to the creation of the variable shape of flocks of starlings. Complexity. In flocking simulations, there is no central control; each bird behaves autonomously. In other words, each bird has to decide for itself which flocks to consider as its environment. Usually environment is defined as a circle (2D) or sphere (3D) with a certain radius (representing reach). A basic implementation of a flocking algorithm has complexity formula_0 – each bird searches through all other birds to find those which fall into its environment. Possible improvements: Lee Spector, Jon Klein, Chris Perry and Mark Feinstein studied the emergence of collective behaviour in evolutionary computation systems. Bernard Chazelle proved that under the assumption that each bird adjusts its velocity and position to the other birds within a fixed radius, the time it takes to converge to a steady state is an iterated exponential of height logarithmic in the number of birds. This means that if the number of birds is large enough, the convergence time will be so great that it might as well be infinite. This result applies only to convergence to a steady state. For example, arrows fired into the air at the edge of a flock will cause the whole flock to react more rapidly than can be explained by interactions with neighbors, which are slowed down by the time delay in the bird's central nervous systems—bird-to-bird-to-bird. Applications. In Cologne, Germany, two biologists from the University of Leeds demonstrated a flock-like behaviour in humans. The group of people exhibited a very similar behavioural pattern to that of a flock, where if 5% of the flock would change direction the others would follow suit. When one person was designated as a predator and everyone else was to avoid him, the flock behaved very much like a school of fish. Flocking has also been considered as a means of controlling the behaviour of Unmanned Air Vehicles (UAVs). Flocking is a common technology in screensavers, and has found its use in animation. Flocking has been used in many films to generate crowds which move more realistically. Tim Burton's "Batman Returns" (1992) featured flocking bats. Flocking behaviour has been used for other interesting applications. It has been applied to automatically program Internet multi-channel radio stations. It has also been used for visualizing information and for optimization tasks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(n^2)" }, { "math_id": 1, "text": "O(n k)" }, { "math_id": 2, "text": "O(1)" } ]
https://en.wikipedia.org/wiki?curid=90021
9002263
Blind equalization
Blind equalization is a digital signal processing technique in which the transmitted signal is inferred (equalized) from the received signal, while making use only of the transmitted signal statistics. Hence the use of the word "blind" in the name. Blind equalization is essentially blind deconvolution applied to digital communications. Nonetheless, the emphasis in blind equalization is on the online estimation of the equalization filter, which is the inverse of the channel impulse response, rather than on the estimation of the channel impulse response itself. This reflects the way blind deconvolution is commonly used in digital communication systems, namely as a means to extract the continuously transmitted signal from the received signal, with the channel impulse response being of secondary intrinsic importance. The estimated equalizer is then convolved with the received signal to yield an estimate of the transmitted signal. Problem statement. Noiseless model. Assuming a linear time-invariant channel with impulse response formula_0, the noiseless model relates the received signal formula_1 to the transmitted signal formula_2 via formula_3 The blind equalization problem can now be formulated as follows: given the received signal formula_1, find a filter formula_4, called an equalization filter, such that formula_5 where formula_6 is an estimate of formula_7. The solution formula_6 to the blind equalization problem is not unique. In fact, it may be determined only up to a signed scale factor and an arbitrary time delay. That is, if formula_8 are estimates of the transmitted signal and channel impulse response, respectively, then formula_9 give rise to the same received signal formula_10 for any real scale factor formula_11 and integral time delay formula_12. Indeed, by symmetry, the roles of formula_7 and formula_13 are interchangeable. Noisy model. In the noisy model, an additional term, formula_14, representing additive noise, is included. The model is therefore formula_15 Algorithms. Many algorithms for the solution of the blind equalization problem have been suggested over the years. However, as one usually has access to only a finite number of samples of the received signal formula_16, further restrictions must be imposed on the above models to render the blind equalization problem tractable. One such assumption, common to all of the algorithms described below, is that the channel has a finite impulse response, formula_17, where formula_18 is an arbitrary natural number. This assumption may be justified on physical grounds, since the energy of any real signal must be finite, and therefore its impulse response must tend to zero. Thus it may be assumed that all coefficients beyond a certain point are negligibly small. Minimum phase. If the channel impulse response is assumed to be minimum phase, the problem becomes trivial. Bussgang methods. Bussgang methods make use of the least mean squares filter algorithm formula_19 with formula_20 where formula_21 is an appropriate positive adaptation step and formula_22 is a suitable nonlinear function. Polyspectra techniques. Polyspectra techniques utilize higher-order statistics in order to compute the equalizer. References. [1] C. Richard Johnson, Jr., et al., "Blind Equalization Using the Constant Modulus Criterion: A Review", Proceedings of the IEEE, vol. 86, no. 10, October 1998.
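The Bussgang update described above can be sketched in a few lines. The nonlinearity g chosen below is a constant-modulus-style correction, in the spirit of the constant modulus criterion cited in the reference, and the channel, step size, and tap count are assumptions for illustration, not the only possible choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic setup: BPSK symbols sent through an assumed finite-impulse-response channel.
s = rng.choice([-1.0, 1.0], size=5000)          # transmitted symbols
h = np.array([1.0, 0.4, -0.2])                  # example channel impulse response
r = np.convolve(s, h, mode="full")[:len(s)]     # received signal (noiseless model)

# Blind equalizer with 2N+1 taps and the LMS-style Bussgang update
#   w <- w + mu * e[n] * r[n-k],   e[n] = g(s_hat[n]) - s_hat[n]
N = 5
w = np.zeros(2 * N + 1)
w[N] = 1.0                                      # start from a pass-through filter
mu = 1e-3

def g(x):
    # Constant-modulus-style nonlinearity, so that e = g(x) - x = x * (1 - x**2)
    # pushes the equalizer output toward unit modulus.
    return x * (2.0 - x**2)

for n in range(N, len(r) - N):
    window = r[n - N:n + N + 1][::-1]           # r[n-k] for k = -N, ..., N
    s_hat = np.dot(w, window)                   # equalizer output, estimate of s (up to sign/delay)
    e = g(s_hat) - s_hat
    w += mu * e * window

# After adaptation the output should cluster near +/-1 (up to sign and delay).
out = np.convolve(r, w, mode="full")[N:N + len(s)]
print("mean |output|^2 over the last samples:", np.mean(out[-1000:] ** 2))
```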
[ { "math_id": 0, "text": "\\{h[n]\\}_{n=-\\infty}^{\\infty}" }, { "math_id": 1, "text": "r[k]" }, { "math_id": 2, "text": "s[k]" }, { "math_id": 3, "text": "r[k]=\\sum_{n=-\\infty}^{\\infty}h[n]s[k-n]" }, { "math_id": 4, "text": "w[k]" }, { "math_id": 5, "text": "\\hat{s}[k]=\\sum_{n=-\\infty}^{\\infty}w[n]r[k-n]" }, { "math_id": 6, "text": "\\hat{s}" }, { "math_id": 7, "text": "s" }, { "math_id": 8, "text": "\\{\\tilde{s}[n],\\tilde{h}[n]\\}" }, { "math_id": 9, "text": "\\{c\\tilde{s}[n+d],\\tilde{h}[n-d]/c\\}" }, { "math_id": 10, "text": "r" }, { "math_id": 11, "text": "c" }, { "math_id": 12, "text": "d" }, { "math_id": 13, "text": "h" }, { "math_id": 14, "text": "n[k]" }, { "math_id": 15, "text": "r[k]=\\sum_{n=-\\infty}^{\\infty}h[n]s[k-n]+n[k]" }, { "math_id": 16, "text": "r(t)" }, { "math_id": 17, "text": "\\{h[n]\\}_{n=-N}^{N}" }, { "math_id": 18, "text": "N" }, { "math_id": 19, "text": "w_{n+1}[k] = w_n[k]+\\mu\\,e^{*}[n]r[n-k], \nk=-N,...N" }, { "math_id": 20, "text": "e[n] = \\mathbf{g}(\\hat{s}[n])-\\hat{s}[n]" }, { "math_id": 21, "text": "\\mu" }, { "math_id": 22, "text": "\\mathbf{g}" } ]
https://en.wikipedia.org/wiki?curid=9002263
9006362
Paint sheen
Glossiness of a paint finish Sheen is a measure of the reflected light (glossiness) from a paint finish. "Glossy" and "flat" (or "matte") are typical extreme levels of glossiness of a finish. Gloss paint is shiny and reflects most light in the specular (mirror-like) direction, while on flat paints most of the light diffuses in a range of angles. The gloss level of paint can also affect its apparent colour. Between those extremes, there are a number of intermediate gloss levels. Their common names, from the most dull to the most shiny, include "matte", "eggshell", "satin", "silk", "semi-gloss" and "high gloss". These terms are not standardized, and not all manufacturers use all these terms. Terminology. Firwood, a UK paint manufacturer measures gloss as percentages of light reflected from an emitted source back into an apparatus from specified angles, ranging between 60° and 20° depending on the reflectivity. With very low gloss levels (such as matte finishes), a 60° angle is too great to measure light reflectance accurately, so a lower angle of 20° is usually used. The returned light into the apparatus allows the gloss to be classified as follows: Technology. The sheen or gloss level of a paint is principally determined by the ratio of resinous, adhesive binder, which solidifies after drying, and solid, powdery pigment. The more binder the coating contains, the more regular reflection will be made from its smooth surface; conversely, with less binder, grains of pigment become exposed to the surface, scattering the light and providing matte effect. To a lesser extent, gloss is also affected by other factors: refraction index of the pigment particles, viscosity and refraction index of the binder. An important indicator is "pigment-volume concentration" (PVC), defined as the ratio of pigment volume and total paint volume: formula_0 PVC affects both physical and optical properties of a paint. Matte paints have less binder, which makes them more susceptible to mechanical damages (however, they are less visible than on glossy surfaces). More binder provides a smoother and more solid surface. However, at a certain PVC, called "critical PVC" (CPVC), the paint is already saturated with binder and the surface becomes solid and glossy, without protruding particles; adding more binder (lowering PVC) will not affect the sheen. CPVC generally depends on the binder-pigment system used, and generally falls in the 35–65% range. As a gloss finish will reveal surface imperfections such as sanding marks, surfaces must generally be prepared more thoroughly for gloss finishes. Gloss-finish paints are generally more resistant to damage than flat paint, more resistant to staining, and easier to clean. Flat paint may become glossier through burnishing or staining with grease; glossy paint may lose its gloss and look scratched if abraded. Unlike gloss paint, flat paint can generally be touched up locally without repainting the entire surface. Gloss level can be characterized by the angular distribution of light scattered from a surface, measured with a glossmeter, but there are various ways of measuring this, and different industries have different standards. Applications. In traditional household interiors, walls are usually painted in flat or eggshell gloss, wooden trim (including doors and window sash) in high gloss, and ceilings almost invariably in flat. Similarly, exterior trim is usually painted with a gloss paint, while the body of the house is painted in a lower gloss. 
Gloss paint is commonplace in the automotive industry for car bodies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{PVC} = \\frac{V_\\text{pigment}}{V_\\text{pigment} + V_\\text{binder}}" } ]
https://en.wikipedia.org/wiki?curid=9006362
900726
Hyperpolarization (physics)
Hyperpolarization is the nuclear spin polarization of a material in a magnetic field far beyond thermal equilibrium conditions determined by the Boltzmann distribution. It can be applied to gases such as 129Xe and 3He, and small molecules where the polarization levels can be enhanced by a factor of 104-105 above thermal equilibrium levels. Hyperpolarized noble gases are typically used in magnetic resonance imaging (MRI) of the lungs. Hyperpolarized small molecules are typically used for "in vivo" metabolic imaging. For example, a hyperpolarized metabolite can be injected into animals or patients and the metabolic conversion can be tracked in real-time. Other applications include determining the function of the neutron spin-structures by scattering polarized electrons from a very polarized target (3He), surface interaction studies, and neutron polarizing experiments. Spin-exchange optical pumping. Introduction. Spin exchange optical pumping (SEOP) is one of several hyperpolarization techniques discussed on this page. This technique specializes in creating hyperpolarized (HP) noble gases, such as 3He, 129Xe, and quadrupolar 131Xe, 83Kr, and 21Ne. Noble gases are required because SEOP is performed in the gas phase, they are chemically inert, non-reactive, chemically stable with respect to alkali metals, and their T1 is long enough to build up polarization. Spin 1/2 noble gases meet all these requirements, and spin 3/2 noble gases do to an extent, although some spin 3/2 do not have a sufficient T1. Each of these noble gases has their own specific application, such as characterizing lung space and tissue via "in vivo" molecular imaging and functional imaging of lungs, to study changes in metabolism of healthy versus cancer cells, or use as targets for nuclear physics experiments. During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal, such as caesium or rubidium inside a sealed glass vessel. Infrared light is necessary because it contains the wavelengths necessary to excite the alkali metal electrons, although the wavelength necessary to excite sodium electrons is below this region (Table 1). The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. Nitrogen is used as a quenching gas, which prevents the fluorescence of the polarized alkali metal, which would lead to de-polarization of the noble gas. If fluorescence was not quenched, the light emitted during relaxation would be randomly polarized, working against the circularly polarized laser light. While different sizes of glass vessels (also called cells), and therefore different pressures, are used depending on the application, one amagat of total pressure of noble gas and nitrogen is sufficient for SEOP and 0.1 amagat of nitrogen density is needed to quench fluorescence. Great improvements in 129Xe hyperpolarization technology have achieved &gt; 50% level at flow rates of 1–2 L/min, which enables human clinical applications. History. The discovery of SEOP took decades for all the pieces to fall into place to create a complete technique. First, in 1897, Zeeman's studies of sodium vapor led to the first result of "optical pumping". The next piece was found in 1950 when Kastler determined a method to electronically spin-polarize rubidium alkali metal vapor using an applied magnetic field and illuminating the vapor with resonant circularly polarized light. Ten years later, Marie-Anne Bouchiat, T. M. 
Carver, and C. M. Varnum performed "spin exchange", in which the electronic spin polarization was transferred to nuclear spins of a noble gas (3He and 129Xe) through gas-phased collisions. Since then, this method has been greatly improved and expanded to use with other noble gases and alkali metals. Theory. To explain the processes of excitation, optical pumping, and spin exchange easier, the most common alkali metal used for this process, rubidium, will be used as an example. Rubidium has an odd number of electrons, with only one in the outermost shell that can be excited under the right conditions. There are two transitions that can occur, one referred to as the D1 line where the transition occurs from the 52S1/2 state to the 52P1/2 state and another referred to the D2 line where the transition occurs from the 52S1/2 to the 52P3/2 state. The D1 and D2 transitions can occur if the rubidium atoms are illuminated with light at a wavelength of 794.7 nm and 780 nm, respectively (Figure 1). While it is possible to cause either excitation, laser technology is well-developed for causing the D1 transition to occur. Those lasers are said to be tuned to the D1 wavelength (794.7 nm) of rubidium. In order to increase the polarization level above thermal equilibrium, the populations of the spin states must be altered. In the absence of magnetic field, the two spin states of a spin I = nuclei are in the same energy level, but in the presence of a magnetic field, the energy levels split into ms = ±1/2 energy levels (Figure 2). Here, ms is the spin angular momentum with possible values of +1/2 (spin up) or -1/2 (spin down), often drawn as vectors pointing up or down, respectively. The difference in population between these two energy levels is what produces an NMR signal. For example, the two electrons in the spin down state cancel two of the electrons in the spin up state, leaving only one spin up nucleus to be detected with NMR. However, the populations of these states can be altered via hyperpolarization, allowing the spin up energy level to be more populated and therefore increase the NMR signal. This is done by first optically pumping alkali metal, then transferring the polarization to a noble gas nucleus to increase the population of the spin up state. The absorption of laser light by the alkali metal is the first process in SEOP. Left-circularly polarized light tuned to the D1 wavelength of the alkali metal excites the electrons from the spin down 2S1/2 (ms=-1/2) state into the spin up 2P1/2 (ms=+1/2) state, where collisional mixing then occurs as the noble gas atoms collide with the alkali metal atoms and the ms=-1/2 state is partially populated (Figure 3). Circularly polarized light is necessary at low magnetic fields because it allows only one type of angular momentum to be absorbed, allowing the spins to be polarized. Relaxation then occurs from the excited states (ms=±1/2) to the ground states (ms=±1/2) as the atoms collide with nitrogen, thus quenching any chance of fluorescence and causing the electrons to return to the two ground states in equal populations. Once the spins are depolarized (return to the ms=-1/2 state), they are excited again by the continuous wave laser light and the process repeats itself. In this way, a larger population of electron spins in the ms=+1/2 state accumulates. The polarization of the rubidium, PRb, can be calculated by using the formula below: formula_0 Where n↑ and n↓ and are the number of atoms in the spin up (mS=+1/2) and spin down (mS=-1/2) 2S1/2 states. 
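For comparison with the definition of PRb just given, the brief sketch below contrasts a population-based polarization with the tiny thermal-equilibrium polarization predicted by the Boltzmann distribution for a spin-1/2 nucleus. The field, temperature, and gyromagnetic ratio used are illustrative values, not taken from the article.

```python
import math

def population_polarization(n_up, n_down):
    """P = (n_up - n_down) / (n_up + n_down), as in the definition of P_Rb above."""
    return (n_up - n_down) / (n_up + n_down)

def thermal_polarization(gamma_hz_per_tesla, b0_tesla, temp_kelvin):
    """Boltzmann (thermal-equilibrium) polarization of a spin-1/2 nucleus:
    P = tanh(h * gamma * B0 / (2 * k_B * T))."""
    h = 6.62607015e-34       # Planck constant, J s
    kB = 1.380649e-23        # Boltzmann constant, J/K
    return math.tanh(h * gamma_hz_per_tesla * b0_tesla / (2 * kB * temp_kelvin))

# Illustration: 129Xe (|gamma|/2pi roughly 11.8 MHz/T, an assumed value) at 1.5 T and 298 K.
p_thermal = thermal_polarization(11.8e6, 1.5, 298.0)
print(f"thermal 129Xe polarization at 1.5 T: {p_thermal:.2e}")   # of order 1e-6

# A hyperpolarized sample with, say, a 55:45 split of spin-up to spin-down nuclei:
print(f"population polarization example: {population_polarization(55, 45):.2f}")  # 0.10
```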
Next, the optically pumped alkali metal collides with the noble gas, allowing for spin exchange to occur where the alkali metal electron polarization is transferred to the noble gas nuclei (Figure 4). There are two mechanisms in which this can occur. The angular momentum can be transferred via binary collisions (Figure 4A, also called two-body collisions) or while the noble gas, N2 buffer gas, and vapor phase alkali metal are held in close proximity via van der Waals forces (Figure 4B, also called three body collisions). In cases where van der Waals forces are very small compared to binary collisions (such is the case for 3He), the noble gas and alkali metal collide and polarization is transferred from the AM to the noble gas. Binary collisions are also possible for 129Xe. At high pressures, van der Waals forces dominate, but at low pressures binary collisions dominate. Buildup of polarization. This cycle of excitation, polarization, depolarization, and re-polarization, etc. takes time before a net polarization is achieved. The buildup of nuclear polarization, PN(t), is given by: formula_1 Where ⟨PA⟩ is the alkali metal polarization, γSE is the spin exchange rate, and Γ is the longitudinal relaxation rate of the noble gas. Relaxation of the nuclear polarization can occur via several mechanisms and is written as a sum of these contributions: formula_2 Where Γt, Γp, Γg, and Γw represent the relaxation from the transient Xe2 dimer, the persistent Xe2 dimer, diffusion through gradients in the applied magnetic field, and wall relaxation, respectively. In most cases, the largest contributors to the total relaxation are persistent dimers and wall relaxations. A Xe2 dimer can occur when two Xe atoms collide and are held together via van der Waals forces, and it can be broken when a third atom collides with it. It is similar to Xe-Rb during spin exchange (spin transfer) where they are held in close proximity to each other via van der Waals forces. Wall relaxation is when the hyperpolarized Xe collides with the walls of the cell and is de-polarized due to paramagnetic impurities in the glass. The buildup time constant, ΓB, can be measured by collecting NMR spectra at time intervals falling within the time it takes to reach steady-state polarization (i.e. the maximum polarization that can be achieved, seen by the maximum signal output). The signal integrals are then plotted over time and can be fit to obtain the buildup time constant. Collecting a buildup curve at several different temperatures and plotting the values as a function of alkali metal vapor density (since vapor density increases with an increase in cell temperature) can be used to determine the spin destruction rate and the per-atom spin exchange rate using: formula_3 Where γ' is the per-atom spin exchange rate, [AM] is the alkali metal vapor density, and ΓSD is the spin destruction rate. This plot should be linear, where γ' is the slope and ΓSD is the y-intercept. Relaxation: T1. Spin exchange optical pumping can continue indefinitely with continuous illumination, but there are several factors that cause relaxation of polarization and thus a return to the thermal equilibrium populations when illumination is stopped. In order to use hyperpolarized noble gases in applications such as lung imaging, the gas must be transferred from the experimental setup to a patient. As soon as the gas is no longer actively being optically pumped, the degree of hyperpolarization begins to decrease until thermal equilibrium is reached. 
However, the hyperpolarization must last long enough to transfer the gas to the patient and obtain an image. The longitudinal spin relaxation time, denoted as T1, can be measured easily by collecting NMR spectra as the polarization decreases over time once illumination is stopped. This relaxation rate is governed by several depolarization mechanisms and is written as: formula_4 Where the three contributing terms are for collisional relaxation (CR), magnetic field inhomogeneity (MFI) relaxation, and relaxation caused by the presence of paramagnetic oxygen (O2). The T1 duration could be anywhere from minutes to several hours, depending on how much care is put into lessening the effects of CR, MFI, and O2. The last term has been quantified to be 0.360 s−1 amagat−1, but the first and second terms are hard to quantify since the degree of their contribution to the overall T1 is dependent on how well the experimental setup and cell are optimized and prepared. Experimental setup in SEOP. In order to perform SEOP, it is first necessary to prepare the optical cell. Optical cells (Figure 5) are designed for the particular system in mind and glass blown using a transparent material, typically pyrex glass (borosilicate). This cell must then be cleaned to eliminate all contaminants, particularly paramagnetic materials which decrease polarization and the T1. The inner surface of the cell is then coated to (a) serve as a protective layer for the glass in order to lessen the chance of corrosion by the alkali metal, and (b) minimize depolarization caused by the collisions of polarized gas molecules with the walls of the cell. Decreasing wall relaxation leads to longer and higher polarization of the noble gas. While several coatings have been tested over the years, SurfaSil (Figure 6, now referred to as hydrocarbon soluble siliconizing fluid) is the most common coating used in a ratio of 1:10 SurfaSil: hexane because it provides long T1 values. The thickness of the SurfaSil layer is about 0.3-0.4 μm. Once evenly coated and dried, the cell is then placed in an inert environment and a droplet of alkali metal (≈200 mg) is placed in the cell, which is then dispersed to create an even coating on the walls of the cells. One method for transferring the alkali metal into the cell is by distillation. In the distillation method, the cell is connected to a glass manifold equipped to hold both pressurized gas and vacuum, where an ampoule of alkali metal is connected. The manifold and cell are vacuumed, then the ampoule seal is broken and the alkali metal is moved into the cell using the flame of a gas torch. The cell is then filled with the desired gas mixture of nitrogen and noble gas. Care must be taken not to poison the cell at any stage of cell preparation (expose the cell to atmospheric air). Several cell sizes and designs have been used over the years. The application desired is what governs the design of the optical pumping cell and is dependent on laser diameter, optimization needs, and clinical use considerations. The specific alkali metal(s) and gases are also chosen based on the desired applications. Once the cell is complete, a surface coil (or coils, depending on the desired coil type) is taped to the outside of the cell, which a) allows RF pulses to be produced in order to tip the polarized spins into the detection field (x,y plane) and b) detects the signal produced by the polarized nuclear spins. 
The cell is placed in an oven which allows for the cell and its contents to be heated so the alkali metal enters the vapor phase, and the cell is centered in a coil system which generates an applied magnetic field (along the z-axis). A laser, tuned to the D1 line (electric-dipole transition) of the alkali metal and with a beam diameter matching the diameter of the optical cell, is then aligned with the optical flats of the cell in such a way where the entirety of the cell is illuminated by laser light to provide the largest polarization possible (Figure 7). The laser can be anywhere between tens of watts to hundreds of watts, where higher the power yields larger polarization but is more costly. In order to further increase polarization, a retro-reflective mirror is placed behind the cell in order to pass the laser light through the cell twice. Additionally, an IR iris is placed behind the mirror, providing information of laser light absorption by the alkali metal atoms. When the laser is illuminating the cell, but the cell is at room temperature, the IR iris is used to measure the percent transmittance of laser light through the cell. As the cell is heated, the rubidium enters the vapor phase and starts to absorb laser light, causing the percent transmittance to decrease. The difference in the IR spectrum between a room temperature spectrum and a spectrum taken while the cell is heated can be used to calculate an estimated rubidium polarization value, PRb. As SEOP continues to develop and improve, there are several types of NMR coils, ovens, magnetic field generating coils, and lasers that have been and are being used to generate hyperpolarized gases. Generally, the NMR coils are hand made for the specific purpose, either by turning copper wire by hand in the desired shape, or by 3D printing the coil. Commonly, the oven is a forced-air oven, with two faces made of glass for the laser light to pass through the cell, a removable lid, and a hole through which a hot air line is connected, which allows the cell to be heated via conduction. The magnetic field generating coils can be a pair of Helmholtz coils, used to generate the desired magnetic field strength, whose desired field is governed by: formula_5 Where ω is the Larmour frequency, or desired detection frequency, γ is the gyromagnetic ratio of the nuclei of interest, and B0 is the magnetic field required to detect the nuclei at the desired frequency. A set of four electromagnetic coils can also be used (i.e. from Acutran) and other coil designs are being tested. In the past, laser technology was a limiting factor for SEOP, where only a couple alkali metals could be used due to the lack of, for example, cesium lasers. However, there have been several new developments, including better cesium lasers, higher power, narrower spectral width, etc. which are allowing the reaches of SEOP to increase. Nevertheless, there are several key features required. Ideally, the laser should be continuous wave to ensure the alkali metal and noble gas remains polarized at all times. In order to induce this polarization, the laser light must be circularly polarized in the direction which allows the electrons to become spin polarized. This is done by passing the laser light through a polarizing beam splitter to separate the "s" and " p" components, then through a quarter wave plate, which converts the linearly polarized light into circularly polarized light. Noble gases and alkali metals. 
SEOP has successfully been used and is fairly well developed for 3He, 129Xe, and 83Kr for biomedical applications. Additionally, several improvements are under way to obtain enhanced and interpretable imaging of cancer cells in biomedical science. Studies involving hyperpolarization of 131Xe are underway, piquing the interest of physicists. Improvements are also being made to allow not only rubidium but also cesium to be utilized in the spin transfer. In principle, any alkali metal can be used for SEOP, but rubidium is usually preferred due to its high vapor pressure, allowing experiments to be carried out at relatively low temperatures (80–130 °C) and decreasing the chance of damaging the glass cell. Additionally, laser technology for the alkali metal of choice has to exist and be developed enough to achieve substantial polarization. Previously, the lasers available to excite the D1 cesium transition were not well developed, but they are now becoming more powerful and less expensive. Preliminary studies even show that cesium may provide better results than rubidium, even though rubidium has been the go-to alkali metal of choice for SEOP. Spin-exchange optical pumping (SEOP) is being used to hyperpolarize noble gases such as xenon-129 and helium-3. When an inhaled hyperpolarized gas like 3He or 129Xe is imaged, there is a higher magnetization density of NMR-active molecules in the lung compared to traditional 1H imaging, which improves the MRI images that can be obtained. Unlike proton MRI, which reports on anatomical features of lung tissues, xenon MRI reports lung function, including gas ventilation, diffusion, and perfusion. Rationale. The aim is to identify infection or disease (cancer, for example) anywhere in the body: in the brain, blood, other fluids, and tissues. The measurable indicators of such disease are collectively called biomarkers. The World Health Organization (WHO), in collaboration with the United Nations and the International Labour Organization, has defined a biomarker as "any substance, structure, or process that can be measured in the body or its products and influence or predict the incidence of outcome or disease". A biomarker has to be quantifiable to a certain level in the biological processes of well-being. One familiar example of a biomarker is blood cholesterol, a reliable indicator of coronary heart disease; another is PSA (prostate-specific antigen), which is associated with prostate cancer. Many biomarkers are considered in cancer diagnosis: hepatitis C virus ribonucleic acid (HCV-RNA), International Normalized Ratio (INR), prothrombin time (PT), monoclonal protein (M protein), cancer antigen 125 (CA-125), human immunodeficiency virus ribonucleic acid (HIV RNA), B-type natriuretic peptide (BNP), and lymphoma cells (Ramos cell lines and Jurkat cell lines), a form of cancer. Other common biomarkers are associated with breast cancer, ovarian cancer, colorectal cancer, lung cancer, and brain tumors. However, these disease-indicating biomarkers are present only in extremely trace amounts, especially in the initial stages of the disease. Therefore, identifying or imaging a biomarker is tricky and, in some circumstances, uncertain with NMR techniques. Hence, a contrast agent must be used to enhance the images at least to a level that physicians can interpret. Biomarker molecules are, moreover, of low abundance in the "in vivo" system. 
The NMR or MRI experiment therefore provides only a very small signal, and in some cases the analyst can miss the signal peak in the data entirely because of the low abundance of the biomarkers. Therefore, to reach a reliable conclusion about the presence of trouble-causing biomarkers, the probe (contrast mechanism) must be enhanced to obtain a clear peak at the most visible level, in terms of both peak height and peak position in the data. If acceptable and clearly interpretable data can be gathered from an NMR or MRI experiment by using a contrast agent, then experts can take the right initial steps to help patients who are already suffering from cancer. Among the various techniques for obtaining enhanced data in MRI experiments, SEOP is one. Researchers in SEOP are particularly interested in using 129Xe, because 129Xe has a number of favorable properties for working as a contrast agent in NMR, even compared with the other noble gases: the solubility of xenon in water is 11%, meaning that at 25 °C, 11 mL of xenon gas can be absorbed by 100 mL of water. As shown in Figure 9 below, different tissues in the "in vivo" environment give rise to different chemical shift values in NMR experimental data, and positioning all of these peaks across such a large range of chemical shifts is viable for 129Xe because it has a long chemical shift range, up to 1700 ppm, in NMR data. Other important spectral information includes: Figure 9. NMR data for a 129Xe biosensor in an "in vivo" biological system. 129Xe(g) shows satisfactory enhancement in polarization during SEOP compared to thermal polarization. This is demonstrated by the experimental data obtained when NMR spectra are acquired at different magnetic field strengths. A couple of important points from the experimental data are (Figure 11): the longitudinal spin relaxation time (T1) is very sensitive to increases in the magnetic field, and hence the enhancement of the NMR signal is noticeable in SEOP in the case of 129Xe; because T1 is higher for the conditions marked in blue, that NMR experiment shows a more enhanced peak than the others. For hyperpolarized 129Xe in Tedlar bags, T1 is 38±12 minutes when data are collected in the presence of a 1.5 mT magnetic field. However, there is a satisfactory increase in the T1 delay time (354±24 minutes) when data are collected in the presence of a 3000 mT magnetic field. Use of Rb vs. Cs in SEOP NMR experiments. In general, either 87Rb or 133Cs alkali metal atoms can be used with inert nitrogen gas. Using 133Cs atoms with nitrogen for the spin exchange with 129Xe, however, offers a number of advantages: Although 129Xe has many preferable characteristics for applications in NMR techniques, 83Kr can also be used, since it has advantages for NMR techniques in different ways than 129Xe. Imaging applications of SEOP. Steps are also being taken in academia and industry to use this hyperpolarized gas for lung imaging. Once the gas (129Xe) is hyperpolarized through the SEOP process and the alkali metal is removed, a patient (either healthy or suffering from a lung disease) can breathe in the gas and an MRI can be taken. This results in an image of the spaces in the lungs filled with the gas. While the process of getting to the point of imaging the patient may require knowledge from scientists very familiar with this technique and the equipment, steps are being taken to eliminate the need for this knowledge, so that a hospital technician would be able to produce the hyperpolarized gas using a polarizer. 
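To see why the longer storage-field T1 matters in practice, the sketch below assumes simple exponential decay of the hyperpolarization, P(t) = P0·exp(−t/T1), and compares how much polarization survives a hypothetical 30-minute delay between dispensing and imaging for the two T1 values quoted above; the initial polarization and the delay are assumptions for illustration.

// Exponential decay of hyperpolarization: P(t) = P0 * exp(-t / T1).
function remainingPolarization(p0, tMinutes, t1Minutes) {
  return p0 * Math.exp(-tMinutes / t1Minutes);
}

const p0 = 0.20;   // assumed initial 129Xe polarization (20%)
const delay = 30;  // assumed transport delay in minutes
for (const t1 of [38, 354]) {  // T1 values from the text (1.5 mT and 3000 mT storage)
  const p = remainingPolarization(p0, delay, t1);
  console.log(`T1 = ${t1} min -> P(${delay} min) ≈ ${(100 * p).toFixed(1)} %`);
}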
Hyperpolarization machines are currently being used to produce hyperpolarized xenon gas, which is used as a visualization agent for the lungs. Xenon-129 is a safe, inert noble gas that can be used to quantify lung function. With a single 10-second breath hold, hyperpolarized xenon-129 is used with MRI to enable 3-dimensional lung imaging. Xenon MRI is being used to monitor patients with pulmonary-vascular, obstructive, or fibrotic lung disease. Temperature-ramped 129Xe SEOP in an automated, high-output, batch-mode hyperpolarizer can make use of three principal temperature regimes under certain conditions: first, the 129Xe hyperpolarization rate is extremely high in the hot condition; second, in the warm condition the hyperpolarization of 129Xe approaches unity; third, in the cold condition, the level of hyperpolarization of the 129Xe gas is at least sufficient for imaging (at human body temperature), even though the gas transferred into the Tedlar bag carries only a small fraction of 87Rb (less than a 5 ng/L dose). Multiparameter analysis of 87Rb/129Xe SEOP at high xenon pressure and photon flux could be used for 3D-printed and stopped-flow contrast agents on a clinical scale. Using an "in situ" technique, the NMR machine was run to track the dynamics of 129Xe polarization as a function of SEOP-cell conditioning under different operating parameters, such as data-collection temperature, photon flux, and 129Xe partial pressure, in order to enhance the 129Xe polarization ("P"Xe). All of these 129Xe polarization values were validated by dispensing the hyperpolarized 129Xe gas, and all MRI experiments were also done at a low magnetic field of 47.5 mT. Finally, the demonstrations indicated that in such a high-pressure regime, the polarization of 129Xe gas could be increased even beyond the limit that had already been shown. SEOP thermal management and the optimization of the polarization kinetics have been further improved with good efficacy. SEOP on solids. Not only can SEOP be used to hyperpolarize noble gases, but a more recent development is SEOP on solids. It was first performed in 2007 and was used to polarize nuclei in a solid, allowing nuclei that cannot be polarized by other methods to become hyperpolarized. For example, nuclear polarization of 133Cs in the form of a solid film of CsH can be increased above the Boltzmann limit. This is done by first optically pumping cesium vapor, then transferring the spin polarization to CsH salt, yielding an enhancement of 4.0. The cells are made as previously described using distillation, then filled with hydrogen gas and heated to allow the Cs metal to react with the gaseous hydrogen to form the CsH salt. Unreacted hydrogen was removed, the process was repeated several times to increase the thickness of the CsH film, and the cell was then pressurized with nitrogen gas. Usually, SEOP experiments are done with the cell centered in Helmholtz or electromagnetic coils, as previously described, but these experiments were done in a superconducting 9.4 T magnet by shining the laser through the magnet and electrically heating the cell. In the future, it may be possible to use this technique to transfer polarization to 6Li or 7Li, leading to even more applications, since the T1 is expected to be longer. Since the discovery of this technique that allows solids to be characterized, it has been improved in such a way that polarized light is not necessary to polarize the solid; instead, unpolarized light in a magnetic field can be used. 
In this method, glass wool is coated with CsH salt, increasing the surface area of the CsH and therefore increasing the chances of spin transfer, yielding 80-fold enhancements at low field (0.56 T). Like in hyperpolarizing CsH film, the cesium metal in this glass wool method was allowed to react with hydrogen gas, but in this case the CsH formed on the glass fibers instead of the glass cell. Metastability exchange optical pumping. 3He can also be hyperpolarized using metastability exchange optical pumping (MEOP). This process is able to polarize 3He nuclei in the ground state with optically pumped 3He nuclei in the metastable state. MEOP only involves 3He nuclei at room temperature and at low pressure (≈a few mbars). The process of MEOP is very efficient (high polarization rate), however, compression of the gas up to atmospheric pressure is needed. Dynamic nuclear polarization. Compounds containing NMR-sensitive nuclei, such as 1H, 13C or 15N, can be hyperpolarized using Dynamic nuclear polarization (DNP). DNP is typically performed at low temperature (≈1 K) and high magnetic field (≈3 T). The compound is subsequently thawed and dissolved to yield a room temperature solution containing hyperpolarized nuclei. This liquid can be used in "in vivo" metabolic imaging for oncology and other applications. The 13C polarization levels in solid compounds can reach up to ≈64% and the losses during dissolution and transfer of the sample for NMR measurements can be minimized to a few percent. Compounds containing NMR-active nuclei can also be hyperpolarized using chemical reactions with para-hydrogen, see Para-Hydrogen Induced Polarization (PHIP). Parahydrogen induced polarization. Molecular hydrogen, H2, contains two different spin isomers, para-hydrogen and ortho-hydrogen, with a ratio of 25:75 at room temperature. Creating para-hydrogen induced polarization (PHIP) means that this ratio is increased, in other words that para-hydrogen is enriched. This can be accomplished by cooling hydrogen gas and then inducing ortho-to-para conversion via an iron-oxide or charcoal catalyst. When performing this procedure at ≈70 K (i.e. with liquid nitrogen), para-hydrogen is enriched from 25% to ca. 50%. When cooling to below 20 K and then inducing the ortho-to-para conversion, close to 100% parahydrogen can be obtained. For practical applications, the PHIP is most commonly transferred to organic molecules by reacting the hyperpolarized hydrogen with precursor molecules in the presence of a transition metal catalyst. Proton NMR signals with ca. 10,000-fold increased intensity can be obtained compared to NMR signals of the same organic molecule without PHIP and thus only "thermal" polarization at room temperature. Signal amplification by reversible exchange (SABRE). Signal amplification by reversible exchange (SABRE) is a technique to hyperpolarize samples without chemically modifying them. Compared to orthohydrogen or organic molecules, a much greater fraction of the hydrogen nuclei in parahydrogen align with an applied magnetic field. In SABRE, a metal center reversibly binds to both the test molecule and a parahydrogen molecule facilitating the target molecule to pick up the polarization of the parahydrogen. This technique can be improved and utilized for a wide range of organic molecules by using an intermediate "relay" molecule like ammonia. The ammonia efficiently binds to the metal center and picks up the polarization from the parahydrogen. 
The ammonia then transfers it to other molecules that do not bind as well to the metal catalyst. This enhanced NMR signal allows the rapid analysis of very small amounts of material. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
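For a sense of the scale of the signal gains discussed in this article, the sketch below estimates the thermal-equilibrium polarization of a spin-1/2 nucleus from P = tanh(γħB0/2kBT) and compares it with a hyperpolarized level; the approximate 129Xe gyromagnetic ratio, the field, and the 20% hyperpolarization figure are assumptions for illustration, not values taken from the text.

// Thermal polarization of a spin-1/2 nucleus versus an assumed hyperpolarized level.
const hbar = 1.054571817e-34;            // J s
const kB = 1.380649e-23;                 // J/K
const gammaXe = 2 * Math.PI * 11.777e6;  // rad s^-1 T^-1, approximate |gamma| of 129Xe (assumed)

function thermalPolarization(B0, T) {
  return Math.tanh(gammaXe * hbar * B0 / (2 * kB * T));
}

const pThermal = thermalPolarization(1.5, 298); // 1.5 T scanner at room temperature (assumed)
const pHyper = 0.20;                            // assumed hyperpolarization level
console.log(`thermal P ≈ ${pThermal.toExponential(2)}`);
console.log(`enhancement ≈ ${(pHyper / pThermal).toExponential(2)}x`);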
[ { "math_id": 0, "text": "{P_{Rb}}={{\\mathrm{n}_\\uparrow-\\mathrm{n}_\\downarrow}\\over{\\mathrm{n}_\\uparrow+\\mathrm{n}_\\downarrow}}" }, { "math_id": 1, "text": "\\mathrm{P}_N(\\mathrm{t})=\\left \\langle\\mathrm{P}_A\\right \\rangle \\left ( \\frac{\\gamma_{SE}}{\\gamma_{SE}+\\Gamma} \\right ) [1-e^{(\\gamma_{SE}+\\Gamma)\\mathrm{t}}]" }, { "math_id": 2, "text": "\\Gamma=\\Gamma_t+\\Gamma_p+\\Gamma_g+\\Gamma_w" }, { "math_id": 3, "text": "\\Gamma_B=\\gamma'\\times[AM]+\\Gamma_{SD}" }, { "math_id": 4, "text": "\\frac{1}{\\mathrm{T}_1}=\\left ( \\frac{1}{T_1} \\right )_{CR}+\\left ( \\frac{1}{T_1} \\right )_{MFI}+\\left ( \\frac{1}{T_1} \\right )_{O2}" }, { "math_id": 5, "text": "\\omega=\\gamma\\mathrm{B}_0" } ]
https://en.wikipedia.org/wiki?curid=900726
900733
Plasma oscillation
Rapid oscillations of electron density Plasma oscillations, also known as Langmuir waves (after Irving Langmuir), are rapid oscillations of the electron density in conducting media such as plasmas or metals in the ultraviolet region. The oscillations can be described as an instability in the dielectric function of a free electron gas. The frequency depends only weakly on the wavelength of the oscillation. The quasiparticle resulting from the quantization of these oscillations is the "plasmon". Langmuir waves were discovered by American physicists Irving Langmuir and Lewi Tonks in the 1920s. They are parallel in form to Jeans instability waves, which are caused by gravitational instabilities in a static medium. Mechanism. Consider an electrically neutral plasma in equilibrium, consisting of a gas of positively charged ions and negatively charged electrons. If one displaces by a tiny amount an electron or a group of electrons with respect to the ions, the Coulomb force pulls the electrons back, acting as a restoring force. 'Cold' electrons. If the thermal motion of the electrons is ignored, it is possible to show that the charge density oscillates at the "plasma frequency" formula_0 (SI units), formula_1 (cgs units), where formula_2 is the number density of electrons, formula_3 is the electric charge, formula_4 is the effective mass of the electron, and formula_5 is the permittivity of free space. Note that the above formula is derived under the approximation that the ion mass is infinite. This is generally a good approximation, as the electrons are so much lighter than ions. Proof using Maxwell equations. Assume charge density oscillations formula_6 together with the continuity equation formula_7 the Gauss law formula_8 and the conductivity formula_9 Taking the divergence of the conductivity relation and substituting the relations above gives formula_10 which, for a non-vanishing charge density, can hold only if formula_11 But this is also the dielectric constant (see Drude model) formula_12 and the condition of transparency (i.e. formula_13 from a certain plasma frequency formula_14 and above); the same condition formula_15 here also makes possible the propagation of density waves in the charge density. This expression must be modified in the case of electron-positron plasmas, often encountered in astrophysics. Since the frequency is independent of the wavelength, these oscillations have an infinite phase velocity and zero group velocity. Note that, when formula_16, the plasma frequency, formula_17, depends only on physical constants and electron density formula_2. The numeric expression for angular plasma frequency is formula_18 Metals are only transparent to light with a frequency higher than the metal's plasma frequency. For typical metals such as aluminium or silver, formula_2 is approximately 10^23 cm−3, which brings the plasma frequency into the ultraviolet region. This is why most metals reflect visible light and appear shiny. 'Warm' electrons. When the effects of the electron thermal speed formula_19 are taken into account, the electron pressure acts as a restoring force as well as the electric field and the oscillations propagate with frequency and wavenumber related by the longitudinal Langmuir wave: formula_20 called the Bohm–Gross dispersion relation. If the spatial scale is large compared to the Debye length, the oscillations are only weakly modified by the pressure term, but at small scales the pressure term dominates and the waves become dispersionless with a speed of formula_21. 
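Before turning to thermal corrections, a quick numerical check of the cold-electron plasma frequency defined above: plugging the free-space constants and an electron density of 10^23 cm−3 (the order of magnitude quoted for typical metals) into the SI expression gives a frequency in the ultraviolet. The sketch below is runnable with Node.js; the density is the only input taken from the text.

// Cold-plasma frequency in SI units: omega_pe = sqrt(n_e * e^2 / (m_e * eps0)).
const e = 1.602176634e-19;      // elementary charge, C
const me = 9.1093837015e-31;    // electron mass, kg
const eps0 = 8.8541878128e-12;  // vacuum permittivity, F/m
const c = 2.99792458e8;         // speed of light, m/s

const ne = 1e23 * 1e6;          // electron density: 10^23 cm^-3 expressed in m^-3
const omegaPe = Math.sqrt(ne * e * e / (me * eps0)); // rad/s
const fPe = omegaPe / (2 * Math.PI);                 // Hz
console.log(`omega_pe ≈ ${omegaPe.toExponential(2)} rad/s`);
console.log(`f_pe ≈ ${fPe.toExponential(2)} Hz, cutoff wavelength ≈ ${(c / fPe * 1e9).toFixed(0)} nm (ultraviolet)`);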
For such waves, however, the electron thermal speed is comparable to the phase velocity, i.e., formula_22 so the plasma waves can accelerate electrons that are moving with speed nearly equal to the phase velocity of the wave. This process often leads to a form of collisionless damping, called Landau damping. Consequently, the large-"k" portion in the dispersion relation is difficult to observe and seldom of consequence. In a bounded plasma, fringing electric fields can result in propagation of plasma oscillations, even when the electrons are cold. In a metal or semiconductor, the effect of the ions' periodic potential must be taken into account. This is usually done by using the electrons' effective mass in place of "m". Plasma oscillations and the effect of the negative mass. Plasma oscillations may give rise to the effect of the “negative mass”. The mechanical model giving rise to the negative effective mass effect is depicted in Figure 1. A core with mass formula_23 is connected internally through a spring with constant formula_24 to a shell with mass formula_25. The system is subjected to the external sinusoidal force formula_26. If we solve the equations of motion for the masses formula_25 and formula_23 and replace the entire system with a single effective mass formula_27, we obtain: formula_28 where formula_29. When the frequency formula_30 approaches formula_31 from above, the effective mass formula_27 will be negative. A negative effective mass (density) is also possible based on electro-mechanical coupling that exploits plasma oscillations of a free electron gas (see Figure 2). The negative mass appears as a result of the vibration of a metallic particle at a frequency formula_30 close to the frequency of the plasma oscillations of the electron gas formula_23 relative to the ionic lattice formula_25. The plasma oscillations are represented by the elastic spring formula_32, where formula_14 is the plasma frequency. Thus, a metallic particle vibrating at the external frequency "ω" is described by the effective mass formula_33, which is negative when the frequency formula_30 approaches formula_14 from above. Metamaterials exploiting the effect of the negative mass in the vicinity of the plasma frequency have been reported. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
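A short numerical sketch of the core-shell effective-mass expression above: with illustrative (assumed) masses and internal resonance frequency, the effective mass grows large just below the resonance and swings negative just above it, which is the behaviour exploited near the plasma frequency.

// Effective mass of the core-shell model: m_eff = m1 + m2 * w0^2 / (w0^2 - w^2).
// All numerical values are assumptions chosen for illustration.
const m1 = 1.0;                // shell mass (arbitrary units)
const m2 = 0.5;                // core mass (arbitrary units)
const w0 = 2 * Math.PI * 100;  // internal resonance, rad/s (100 Hz assumed)

function mEff(w) {
  return m1 + m2 * w0 * w0 / (w0 * w0 - w * w);
}

for (const fHz of [90, 99, 101, 110, 200]) {
  const w = 2 * Math.PI * fHz;
  console.log(`f = ${fHz} Hz -> m_eff = ${mEff(w).toFixed(2)}`);
}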
[ { "math_id": 0, "text": "\\omega_{\\mathrm{pe}} = \\sqrt{\\frac{n_\\mathrm{e} e^{2}}{m^*\\varepsilon_0}}, \\left[\\mathrm{rad/s}\\right]" }, { "math_id": 1, "text": "\\omega_{\\mathrm{pe}} = \\sqrt{\\frac{4 \\pi n_\\mathrm{e} e^{2}}{m^*}}, \\left[\\mathrm{rad/s}\\right]" }, { "math_id": 2, "text": "n_\\mathrm{e}" }, { "math_id": 3, "text": "e" }, { "math_id": 4, "text": "m^*" }, { "math_id": 5, "text": "\\varepsilon_0" }, { "math_id": 6, "text": "\\rho(\\omega)=\\rho_0 e^{-i\\omega t}" }, { "math_id": 7, "text": "\\nabla \\cdot \\mathbf{j} = - \\frac{\\partial \\rho}{\\partial t} = i \\omega \\rho(\\omega) " }, { "math_id": 8, "text": "\\nabla \\cdot \\mathbf{E}(\\omega) = 4 \\pi \\rho(\\omega)" }, { "math_id": 9, "text": "\\mathbf{j}(\\omega) = \\sigma(\\omega) \\mathbf{E}(\\omega)" }, { "math_id": 10, "text": "i \\omega \\rho(\\omega) = 4 \\pi \\sigma(\\omega) \\rho(\\omega)" }, { "math_id": 11, "text": "1+ \\frac {4 \\pi i \\sigma(\\omega)}{\\omega} = 0" }, { "math_id": 12, "text": "\\epsilon(\\omega) = 1+ \\frac {4 \\pi i \\sigma(\\omega)}{\\omega} " }, { "math_id": 13, "text": "\\epsilon \\ge 0" }, { "math_id": 14, "text": "\\omega_{\\rm p}" }, { "math_id": 15, "text": "\\epsilon = 0" }, { "math_id": 16, "text": "m^*=m_\\mathrm{e}" }, { "math_id": 17, "text": "\\omega_{\\mathrm{pe}}" }, { "math_id": 18, "text": "f_\\text{pe} = \\frac{\\omega_\\text{pe}}{2\\pi}~\\left[\\text{Hz}\\right]" }, { "math_id": 19, "text": "v_{\\mathrm{e,th}} = \\sqrt{k_\\mathrm{B} T_{\\mathrm{e}} / m_\\mathrm{e}}" }, { "math_id": 20, "text": "\n\\omega^2 =\\omega_{\\mathrm{pe}}^2 +\\frac{3k_\\mathrm{B}T_{\\mathrm{e}}}{m_\\mathrm{e}}k^2=\\omega_{\\mathrm{pe}}^2 + 3 k^2 v_{\\mathrm{e,th}}^2,\n" }, { "math_id": 21, "text": "\\sqrt{3} \\cdot v_{\\mathrm{e,th}}" }, { "math_id": 22, "text": "\nv \\sim v_{\\mathrm{ph}} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{\\omega}{k},\n" }, { "math_id": 23, "text": "m_2" }, { "math_id": 24, "text": "k_2" }, { "math_id": 25, "text": "m_1" }, { "math_id": 26, "text": "F(t)=\\widehat{F}\\sin\\omega t" }, { "math_id": 27, "text": "m_{\\rm eff}" }, { "math_id": 28, "text": "m_{\\rm eff}=m_1+{m_2\\omega_0^2\\over \\omega_0^2-\\omega^2}," }, { "math_id": 29, "text": "\\omega_0=\\sqrt{k_2 / m_2}" }, { "math_id": 30, "text": "\\omega" }, { "math_id": 31, "text": "\\omega_0" }, { "math_id": 32, "text": "k_2 = \\omega_{\\rm p}^2m_2" }, { "math_id": 33, "text": "m_{\\rm eff}=m_1+{m_2\\omega_{\\rm p}^2\\over \\omega_{\\rm p}^2-\\omega^2}," } ]
https://en.wikipedia.org/wiki?curid=900733
9007520
ISO/IEC 80000
Published standard series about physical quantities and units of measurement ISO 80000 or IEC 80000, Quantities and units, is an international standard describing the International System of Quantities (ISQ). It was developed and promulgated jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It serves as a style guide for using physical quantities and units of measurement, formulas involving them, and their corresponding units, in scientific and educational documents for worldwide use. The ISO/IEC 80000 family of standards was completed with the publication of the first edition of Part 1 in November 2009. Overview. By 2021, ISO/IEC 80000 comprised 13 parts, two of which (parts 6 and 13) were developed by IEC and the remaining 11 were developed by ISO, with a further three parts (15, 16 and 17) under development. Part 14 was withdrawn. Subject areas. By 2021 the 80000 standard had 13 published parts. A description of each part is available online, with the complete parts for sale. Part 1: General. ISO 80000-1:2022 revised ISO 80000-1:2009, which replaced ISO 31-0:1992 and ISO 1000:1992. This document gives general information and definitions concerning quantities, systems of quantities, units, quantity and unit symbols, and coherent unit systems, especially the International System of Quantities (ISQ). The descriptive text of this part is available online. Part 2: Mathematics. ISO 80000-2:2019 revised ISO 80000-2:2009, which superseded ISO 31-11. It specifies mathematical symbols, explains their meanings, and gives verbal equivalents and applications. The descriptive text of this part is available online. Part 3: Space and time. ISO 80000-3:2019 revised ISO 80000-3:2006, which supersedes ISO 31-1 and ISO 31-2. It gives names, symbols, definitions and units for quantities of space and time. The descriptive text of this part is available online. A definition of the decibel, included in the original 2006 publication, was omitted in the 2019 revision, leaving ISO/IEC 80000 without a definition of this unit; a new part of the standard, IEC 80000-15 (Logarithmic and related quantities), is under development. Part 4: Mechanics. ISO 80000-4:2019 revised ISO 80000-4:2006, which superseded ISO 31-3. It gives names, symbols, definitions and units for quantities of mechanics. The descriptive text of this part is available online. Part 5: Thermodynamics. ISO 80000-5:2019 revised ISO 80000-5:2007, which superseded ISO 31-4. It gives names, symbols, definitions and units for quantities of thermodynamics. The descriptive text of this part is available online. Part 6: Electromagnetism. IEC 80000-6:2022 revised IEC 80000-6:2008, which superseded ISO 31-5 as well as IEC 60027-1. It gives names, symbols, and definitions for quantities and units of electromagnetism. The descriptive text of this part is available online. Part 7: Light and radiation. ISO 80000-7:2019 revised ISO 80000-7:2008, which superseded ISO 31-6. It gives names, symbols, definitions and units for quantities used for light and optical radiation in the wavelength range of approximately 1 nm to 1 mm. The descriptive text of this part is available online. Part 8: Acoustics. ISO 80000-8:2020 revised ISO 80000-8:2007, which revised ISO 31-7:1992. It gives names, symbols, definitions and units for quantities of acoustics. The descriptive text of this part is available online. 
It has a foreword, scope introduction, scope, normative references (of which there are none), as well as terms and definitions. It includes definitions of sound pressure, sound power and sound exposure, and their corresponding levels: sound pressure level, sound power level and sound exposure level. It includes definitions of the following quantities: Part 13: Information science and technology. IEC 80000-13:2008 was reviewed and confirmed in 2022 and published in 2008, and replaced subclauses 3.8 and 3.9 of IEC 60027-2:2005 and IEC 60027-3. It defines quantities and units used in information science and information technology, and specifies names and symbols for these quantities and units. It has a scope; normative references; names, definitions and symbols; and prefixes for binary multiples. Quantities defined in this standard are: The Standard also includes definitions for units relating to information technology, such as the erlang (E), bit (bit), octet (o), byte (B), baud (Bd), shannon (Sh), hartley (Hart) and the natural unit of information (nat). Clause 4 of the Standard defines standard binary prefixes used to denote powers of 1024 as 1024^1 (kibi-), 1024^2 (mebi-), 1024^3 (gibi-), 1024^4 (tebi-), 1024^5 (pebi-), 1024^6 (exbi-), 1024^7 (zebi-) and 1024^8 (yobi-). International System of Quantities. Part 1 of ISO 80000 introduces the International System of Quantities and describes its relationship with the International System of Units (SI). Specifically, its introduction states "The system of quantities, including the relations among the quantities used as the basis of the units of the SI, is named the "International System of Quantities", denoted 'ISQ', in all languages." It further clarifies that "ISQ is simply a convenient notation to assign to the essentially infinite and continually evolving and expanding system of quantities and equations on which all of modern science and technology rests. ISQ is a shorthand notation for the 'system of quantities on which the SI is based'." Units of the ISO and IEC 80000 series. The standard includes all SI units but is not limited to only SI units. Units that form part of the standard but not the SI include the units of information storage (bit and byte), units of entropy (shannon, natural unit of information and hartley), and the erlang (a unit of traffic intensity). The standard includes all SI prefixes as well as the binary prefixes kibi-, mebi-, gibi-, etc., originally introduced by the International Electrotechnical Commission to standardise binary multiples of byte such as mebibyte (MiB), for 2^20 bytes, to distinguish them from their decimal counterparts such as megabyte (MB), for precisely one million (10^6) bytes. In the standard, the application of the binary prefixes is not limited to units of information storage. For example, a frequency ten octaves above one hertz, i.e., 2^10 Hz (1024 Hz), is one kibihertz (1 KiHz). These binary prefixes were standardized first in a 1999 addendum to IEC 60027-2. The harmonized IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005, which had defined the prefixes for binary multiples. The only significant change in IEC 80000-13 is the addition of explicit definitions for some quantities. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
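To make the decimal/binary distinction concrete, the sketch below formats the same byte count with SI prefixes (powers of 1000) and with the IEC binary prefixes defined in Clause 4 (powers of 1024); the helper function and the example file size are illustrative assumptions.

// Format a byte count with decimal (SI) prefixes versus binary (IEC) prefixes.
function formatBytes(n, base, units) {
  let i = 0;
  while (n >= base && i < units.length - 1) { n /= base; i++; }
  return `${n.toFixed(2)} ${units[i]}`;
}

const si  = ['B', 'kB', 'MB', 'GB', 'TB'];
const iec = ['B', 'KiB', 'MiB', 'GiB', 'TiB'];

const size = 5 * 10 ** 9;                  // a "5 GB" file as marketed (5 * 10^9 bytes)
console.log(formatBytes(size, 1000, si));  // -> 5.00 GB
console.log(formatBytes(size, 1024, iec)); // -> 4.66 GiB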
[ { "math_id": 0, "text": "[L, (\\Omega)]" } ]
https://en.wikipedia.org/wiki?curid=9007520
9007528
Computer performance
Amount of useful work accomplished by a computer In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. When it comes to high computer performance, one or more of the following factors might be involved: Technical and non-technical definitions. The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. This way the performance can be Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience: "The word "performance" in computer performance means the same thing that performance means in other contexts, that is, it means "How well is the computer doing the work it is supposed to do?"" As an aspect of software quality. Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions. Performance engineering. Performance engineering within systems engineering encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle which ensures that a solution will be designed, implemented, and operationally supported to meet the performance requirements defined for the solution. Performance engineering continuously deals with trade-offs between types of performance. Occasionally a CPU designer can find a way to make a CPU with better overall performance by improving one of the aspects of performance, presented below, without sacrificing the CPU's performance in other areas. For example, building the CPU out of better, faster transistors. However, sometimes pushing one type of performance to an extreme leads to a CPU with worse overall performance, because other important aspects were sacrificed to get one impressive-looking number, for example, the chip's clock rate (see the megahertz myth). Application performance engineering. Application Performance Engineering (APE) is a specific methodology within performance engineering designed to meet the challenges associated with application performance in increasingly distributed mobile, cloud and terrestrial IT environments. It includes the roles, skills, activities, practices, tools and deliverables applied at every phase of the application lifecycle that ensure an application will be designed, implemented and operationally supported to meet non-functional performance requirements. Aspects of performance. Computer performance metrics (things to measure) include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speed up. CPU benchmarks are available. Availability. Availability of a system is typically measured as a factor of its reliability - as reliability increases, so does availability (that is, less downtime). Availability of a system may also be increased by the strategy of focusing on increasing testability and maintainability and not on reliability. Improving maintainability is generally easier than reliability. Maintainability estimates (repair rates) are also generally more accurate. 
However, because the uncertainties in the reliability estimates are in most cases very large, it is likely to dominate the availability (prediction uncertainty) problem, even while maintainability levels are very high. Response time. Response time is the total amount of time it takes to respond to a request for service. In computing, that service can be any unit of work from a simple disk IO to loading a complex web page. The response time is the sum of three numbers: Processing speed. Most consumers pick a computer architecture (normally Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth). Some system designers building parallel computers pick CPUs based on the speed per dollar. Channel capacity. Channel capacity is the tightest upper bound on the rate of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution. Latency. Latency is a time delay between the cause and the effect of some physical change in the system being observed. Latency is a result of the limited velocity with which any physical interaction can take place. This velocity is always lower or equal to speed of light. Therefore, every physical system that has non-zero spatial dimensions will experience some sort of latency. The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human-machine interaction, perceptible latency (delay between what the user commands and when the computer provides the results) has a strong effect on user satisfaction and usability. Computers run sets of instructions called a process. In operating systems, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system may choose to adjust the scheduling of each transition (high-low or low-high) based on an internal clock. The latency is the delay between the process instruction commanding the transition and the hardware actually transitioning the voltage from high to low or low to high. System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has a deterministic response. Bandwidth. 
In computer networking, bandwidth is a measurement of bit-rate of available or consumed data communication resources, expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.). Bandwidth sometimes defines the net bit rate (aka. peak bit rate, information rate, or physical layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth. Throughput. In general terms, throughput is the rate of production or the rate at which something can be processed. In communication networks, throughput is essentially synonymous to digital bandwidth consumption. In wireless networks or cellular communication networks, the system spectral efficiency in bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area. In integrated circuits, often a block in a data flow diagram has a single input and a single output, and operates on discrete packets of information. Examples of such blocks are FFT modules or binary multipliers. Because the units of throughput are the reciprocal of the unit for propagation delay, which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as an ASIC or embedded processor to a communications channel, simplifying system analysis. Scalability. Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner or its ability to be enlarged to accommodate that growth. Power consumption. The amount of electric power used by the computer (power consumption). This becomes especially important for systems with limited power sources such as solar, batteries, and human power. Performance per watt. System designers building parallel computers, such as Google's hardware, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself. For spaceflight computers, the processing speed per watt ratio is a more useful performance criterion than raw processing speed due to limited on-board resources of power. Compression ratio. Compression is useful because it helps reduce resource usage, such as data storage space or transmission capacity. Because compressed data must be decompressed to use, this extra processing imposes computational or other costs through decompression; this situation is far from being a free lunch. Data compression is subject to a space–time complexity trade-off. Size and weight. This is an important performance feature of mobile systems, from the smart phones you keep in your pocket to the portable embedded systems in a spacecraft. Environmental impact. The effect of computing on the environment, during manufacturing and recycling as well as during use. Measurements are taken with the objectives of reducing waste, reducing hazardous materials, and minimizing a computer's ecological footprint. Transistor count. The number of transistors on an integrated circuit (IC). 
Transistor count is the most common measure of IC complexity. Benchmarks. Because there are so many programs to test a CPU on all aspects of performance, benchmarks were developed. The most famous benchmarks are the SPECint and SPECfp benchmarks developed by Standard Performance Evaluation Corporation and the Certification Mark benchmark developed by the Embedded Microprocessor Benchmark Consortium EEMBC. Software performance testing. In software engineering, performance testing is, in general, conducted to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design, and architecture of a system. Profiling (performance analysis). In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or frequency and duration of function calls. The most common use of profiling information is to aid program optimization. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a "profiler" (or "code profiler"). A number of different techniques may be used by profilers, such as event-based, statistical, instrumented, and simulation methods. Performance tuning. Performance tuning is the improvement of system performance. This is typically a computer application, but the same methods can be applied to economic markets, bureaucracies or other complex systems. The motivation for such activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept a higher load is called scalability, and modifying a system to handle a higher load is synonymous to performance tuning. Systematic tuning follows these steps: Perceived performance. Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects. The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see Splash screen) or a file progress dialog box. However, it satisfies some human needs: it appears faster to the user as well as provides a visual cue to let them know the system is handling their request. In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance. Performance Equation. 
The total amount of time (t) required to execute a particular benchmark program is formula_0, or equivalently formula_1 where formula_2 is the performance (the reciprocal of the execution time), formula_3 is the number of instructions actually executed (the instruction path length), formula_4 is the clock frequency in cycles per second, formula_5 is the average cycles per instruction (CPI) for this benchmark, and formula_6 is the average instructions per cycle (IPC) for this benchmark. Even on one machine, a different compiler or the same compiler with different compiler optimization switches can change N and CPI—the benchmark executes faster if the new compiler can improve N or C without making the other worse, but often there is a trade-off between them—is it better, for example, to use a few complicated instructions that take a long time to execute, or to use instructions that execute very quickly, although it takes more of them to execute the benchmark? A CPU designer is often required to implement a particular instruction set, and so cannot change N. Sometimes a designer focuses on improving performance by making significant improvements in f (with techniques such as deeper pipelines and faster caches), while (hopefully) not sacrificing too much C—leading to a speed-demon CPU design. Sometimes a designer focuses on improving performance by making significant improvements in CPI (with techniques such as out-of-order execution, superscalar CPUs, larger caches, caches with improved hit rates, improved branch prediction, speculative execution, etc.), while (hopefully) not sacrificing too much clock frequency—leading to a brainiac CPU design. For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speedracer techniques. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
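A small worked example of the performance equation above, with assumed values for the instruction count, CPI, and clock frequency of two hypothetical designs; it illustrates how a lower-clocked machine with a better CPI can still finish the benchmark sooner.

// t = N * C / f: execution time = instruction count * cycles per instruction / clock frequency.
function executionTime(N, cpi, fHz) {
  return N * cpi / fHz;
}

const N = 2e9; // instructions executed by the benchmark (assumed)

const tSpeedDemon = executionTime(N, 1.8, 4.0e9); // assumed high clock, higher CPI
const tBrainiac   = executionTime(N, 0.9, 2.5e9); // assumed lower clock, better CPI

console.log(`speed-demon design: ${tSpeedDemon.toFixed(2)} s`);
console.log(`brainiac design:    ${tBrainiac.toFixed(2)} s`);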
[ { "math_id": 0, "text": "t=\\tfrac{NC}{f}" }, { "math_id": 1, "text": "P=\\tfrac{If}{N}" }, { "math_id": 2, "text": "P = \\frac{1}{t}" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "C = \\frac{1}{I}" }, { "math_id": 6, "text": "I = \\frac{1}{C}" } ]
https://en.wikipedia.org/wiki?curid=9007528
9009334
Monotone cubic interpolation
Variant of cubic interpolation that preserves monotonicity In the mathematical field of numerical analysis, monotone cubic interpolation is a variant of cubic interpolation that preserves monotonicity of the data set being interpolated. Monotonicity is preserved by linear interpolation but not guaranteed by cubic interpolation. Monotone cubic Hermite interpolation. Monotone interpolation can be accomplished using cubic Hermite spline with the tangents formula_0 modified to ensure the monotonicity of the resulting Hermite spline. An algorithm is also available for monotone quintic Hermite interpolation. Interpolant selection. There are several ways of selecting interpolating tangents for each data point. This section will outline the use of the Fritsch–Carlson method. Note that only one pass of the algorithm is required. Let the data points be formula_1 indexed in sorted order for formula_2. (a) the function formula_23, or (b) formula_24, or (c) formula_25. Only condition (a) is sufficient to ensure strict monotonicity: formula_26 must be positive. One simple way to satisfy this constraint is to restrict the vector formula_27 to a circle of radius 3. That is, if formula_28, then set formula_29, and rescale the tangents via formula_30. Alternatively it is sufficient to restrict formula_31 and formula_32. To accomplish this, if formula_33, then set formula_34, and if formula_35, then set formula_36. Cubic interpolation. After the preprocessing above, evaluation of the interpolated spline is equivalent to cubic Hermite spline, using the data formula_37, formula_38, and formula_39 for formula_2. To evaluate at formula_40, find the index formula_41 in the sequence where formula_40, lies between formula_37, and formula_42, that is: formula_43. Calculate formula_44 then the interpolated value is formula_45 where formula_46 are the basis functions for the cubic Hermite spline. Example implementation. The following JavaScript implementation takes a data set and produces a monotone cubic spline interpolant function: * Monotone cubic spline interpolation * Usage example listed at bottom; this is a fully-functional package. For * example, this can be executed either at sites like * https://www.programiz.com/javascript/online-compiler/ * or using nodeJS. function DEBUG(s) { /* Uncomment the following to enable verbose output of the solver: */ //console.log(s); var j = 0; var createInterpolant = function(xs, ys) { var i, length = xs.length; // Deal with length issues if (length === 1) { // Impl: Precomputing the result prevents problems if ys is mutated later and allows garbage collection of ys // Impl: Unary plus properly converts values to numbers var result = +ys[0]; return function(x) { return result; }; // Rearrange xs and ys so that xs is sorted var indexes = []; indexes.sort(function(a, b) { return xs[a] &lt; xs[b] ? 
-1 : 1; }); var oldXs = xs, oldYs = ys; // Impl: Creating new arrays also prevents problems if the input arrays are mutated later xs = []; ys = []; // Impl: Unary plus properly converts values to numbers DEBUG("debug: xs = [ " + xs + " ]") DEBUG("debug: ys = [ " + ys + " ]") // Get consecutive differences and slopes var dys = [], dxs = [], ms = []; for (i = 0; i &lt; length - 1; i++) { var dx = xs[i + 1] - xs[i], dy = ys[i + 1] - ys[i]; dxs.push(dx); dys.push(dy); ms.push(dy/dx); // Get degree-1 coefficients var c1s = [ms[0]]; for (i = 0; i &lt; dxs.length - 1; i++) { var m = ms[i], mNext = ms[i + 1]; if (m*mNext &lt;= 0) { c1s.push(0); } else { var dx_ = dxs[i], dxNext = dxs[i + 1], common = dx_ + dxNext; c1s.push(3*common/((common + dxNext)/m + (common + dx_)/mNext)); c1s.push(ms[ms.length - 1]); DEBUG("debug: dxs = [ " + dxs + " ]") DEBUG("debug: ms = [ " + ms + " ]") DEBUG("debug: c1s.length = " + c1s.length) DEBUG("debug: c1s = [ " + c1s + " ]") // Get degree-2 and degree-3 coefficients var c2s = [], c3s = []; for (i = 0; i &lt; c1s.length - 1; i++) { var c1 = c1s[i]; var m_ = ms[i]; var invDx = 1/dxs[i]; var common_ = c1 + c1s[i + 1] - m_ - m_; DEBUG("debug: " + i + ". c1 = " + c1); DEBUG("debug: " + i + ". m_ = " + m_); DEBUG("debug: " + i + ". invDx = " + invDx); DEBUG("debug: " + i + ". common_ = " + common_); c2s.push((m_ - c1 - common_)*invDx); c3s.push(common_*invDx*invDx); DEBUG("debug: c2s = [ " + c2s + " ]") DEBUG("debug: c3s = [ " + c3s + " ]") // Return interpolant function return function(x) { // The rightmost point in the dataset should give an exact result var i = xs.length - 1; // Search for the interval x is in, returning the corresponding y if x is one of the original xs var low = 0, mid, high = c3s.length - 1, rval, dval; while (low &lt;= high) { mid = Math.floor(0.5*(low + high)); var xHere = xs[mid]; else { j++; i = mid; var diff = x - xs[i]; rval = ys[i] + diff * (c1s[i] + diff * (c2s[i] + diff * c3s[i])); dval = c1s[i] + diff * (2*c2s[i] + diff * 3*c3s[i]); DEBUG("debug: " + j + ". x = " + x + ". i = " + i + ", diff = " + diff + ", rval = " + rval + ", dval = " + dval); return [ rval, dval ]; i = Math.max(0, high); // Interpolate var diff = x - xs[i]; j++; rval = ys[i] + diff * (c1s[i] + diff * (c2s[i] + diff * c3s[i])); dval = c1s[i] + diff * (2*c2s[i] + diff * 3*c3s[i]); DEBUG("debug: " + j + ". x = " + x + ". i = " + i + ", diff = " + diff + ", rval = " + rval + ", dval = " + dval); return [ rval, dval ]; }; Usage example below will approximate x^2 for 0 &lt;= x &lt;= 4. Command line usage example (requires installation of nodejs): node monotone-cubic-spline.js var X = [0, 1, 2, 3, 4]; var F = [0, 1, 4, 9, 16]; var f = createInterpolant(X,F); var N = X.length; console.log("# BLOCK 0 :: Data for monotone-cubic-spline.js"); console.log("X" + "\t" + "F"); for (var i = 0; i &lt; N; i += 1) { console.log(F[i] + '\t' + X[i]); console.log(" "); console.log(" "); console.log("# BLOCK 1 :: Interpolated data for monotone-cubic-spline.js"); console.log(" x " + "\t\t" + " P(x) " + "\t\t" + " dP(x)/dx "); var message = "; var M = 25; for (var i = 0; i &lt;= M; i += 1) { var x = X[0] + (X[N-1]-X[0])*i/M; var rvals = f(x); var P = rvals[0]; var D = rvals[1]; message += x.toPrecision(15) + '\t' + P.toPrecision(15) + '\t' + D.toPrecision(15) + '\n'; console.log(message);
[ { "math_id": 0, "text": "m_i" }, { "math_id": 1, "text": "(x_k,y_k)" }, { "math_id": 2, "text": "k=1,\\,\\dots\\,n" }, { "math_id": 3, "text": "\\delta_k =\\frac{y_{k+1}-y_k}{x_{k+1}-x_k}" }, { "math_id": 4, "text": "k=1,\\,\\dots\\,n-1" }, { "math_id": 5, "text": "m_k = \\frac{\\delta_{k-1}+\\delta_k}{2}" }, { "math_id": 6, "text": "k=2,\\,\\dots\\,n-1" }, { "math_id": 7, "text": "m_1 = \\delta_1 \\quad \\text{ and } \\quad m_n = \\delta_{n-1}\\," }, { "math_id": 8, "text": "\\delta_{k-1}" }, { "math_id": 9, "text": "\\delta_k" }, { "math_id": 10, "text": "m_k = 0 " }, { "math_id": 11, "text": "\\delta_k = 0" }, { "math_id": 12, "text": "y_k=y_{k+1}" }, { "math_id": 13, "text": "m_k = m_{k+1} = 0," }, { "math_id": 14, "text": "k\\," }, { "math_id": 15, "text": "\\alpha_k = m_k/\\delta_k \\quad \\text{ and } \\quad \\beta_k = m_{k+1}/\\delta_k" }, { "math_id": 16, "text": "\\alpha_k" }, { "math_id": 17, "text": "\\beta_k" }, { "math_id": 18, "text": "(x_k,\\,y_k)" }, { "math_id": 19, "text": "m_{k}=0\\," }, { "math_id": 20, "text": "\\alpha_k < 0" }, { "math_id": 21, "text": "m_{k+1}=0\\," }, { "math_id": 22, "text": "\\beta_k < 0" }, { "math_id": 23, "text": "\\phi_k = \\alpha_k - \\frac{(2 \\alpha_k + \\beta_k - 3)^2}{3(\\alpha_k + \\beta_k - 2)} > 0\\," }, { "math_id": 24, "text": "\\alpha_k + 2\\beta_k - 3 \\le 0\\," }, { "math_id": 25, "text": "2\\alpha_k + \\beta_k - 3 \\le 0\\," }, { "math_id": 26, "text": "\\phi_k" }, { "math_id": 27, "text": "(\\alpha_k,\\,\\beta_k)" }, { "math_id": 28, "text": "\\alpha_k^2 + \\beta_k^2 > 9\\," }, { "math_id": 29, "text": "\\tau_k = \\frac{3}{\\sqrt{\\alpha_k^2 + \\beta_k^2}}\\," }, { "math_id": 30, "text": "m_k = \\tau_k\\, \\alpha_k \\,\\delta_k \\quad \\text{ and } \\quad m_{k+1} = \\tau_k\\, \\beta_k\\, \\delta_k\\," }, { "math_id": 31, "text": "\\alpha_k \\le 3" }, { "math_id": 32, "text": "\\beta_k \\le 3\\," }, { "math_id": 33, "text": "\\alpha_k > 3\\," }, { "math_id": 34, "text": "m_k = 3 \\, \\delta_k\\," }, { "math_id": 35, "text": "\\beta_k > 3\\," }, { "math_id": 36, "text": "m_{k+1} = 3 \\, \\delta_k\\," }, { "math_id": 37, "text": "x_k" }, { "math_id": 38, "text": "y_k" }, { "math_id": 39, "text": "m_k" }, { "math_id": 40, "text": "x" }, { "math_id": 41, "text": "k" }, { "math_id": 42, "text": "x_{k+1}" }, { "math_id": 43, "text": "x_k \\leq x \\leq x_{k+1}" }, { "math_id": 44, "text": "\\Delta = x_{k+1}-x_k \\quad \\text{ and } \\quad t = \\frac{x - x_k}{\\Delta}" }, { "math_id": 45, "text": "f_\\text{interpolated}(x) = y_k\\cdot h_{00}(t) + \\Delta\\cdot m_k\\cdot h_{10}(t) + y_{k+1}\\cdot h_{01}(t) + \\Delta\\cdot m_{k+1}\\cdot h_{11}(t)" }, { "math_id": 46, "text": "h_{ii}" } ]
https://en.wikipedia.org/wiki?curid=9009334
9010084
PTAS reduction
In computational complexity theory, a PTAS reduction is an approximation-preserving reduction that is often used to perform reductions between solutions to optimization problems. It preserves the property that a problem has a polynomial time approximation scheme (PTAS) and is used to define completeness for certain classes of optimization problems such as APX. Notationally, if there is a PTAS reduction from a problem A to a problem B, we write formula_0. With ordinary polynomial-time many-one reductions, if we can describe a reduction from a problem A to a problem B, then any polynomial-time solution for B can be composed with that reduction to obtain a polynomial-time solution for the problem A. Similarly, our goal in defining PTAS reductions is that, given a PTAS reduction from an optimization problem A to a problem B, a PTAS for B can be composed with the reduction to obtain a PTAS for the problem A. Definition. Formally, we define a PTAS reduction from A to B using three polynomial-time computable functions, "f", "g", and "α", with the following properties: "f" maps instances "x" of problem A to instances formula_1 of problem B; "g" maps an instance "x" of A, a candidate solution "y" for formula_1, and the error parameter ε to a candidate solution formula_3 for "x"; and whenever "y" is a formula_2-approximate solution to formula_1, the solution formula_3 is a formula_4-approximate solution to "x". Properties. From the definition it is straightforward to show that formula_5, or, contrapositively, formula_6. L-reductions imply PTAS reductions. As a result, one may show the existence of a PTAS reduction via an L-reduction instead. PTAS reductions are used to define completeness in APX, the class of optimization problems with constant-factor approximation algorithms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{A} \\leq_{\\text{PTAS}} \\text{B}" }, { "math_id": 1, "text": "f(x)" }, { "math_id": 2, "text": "1 + \\alpha(\\epsilon)" }, { "math_id": 3, "text": "g(x,y,\\epsilon)" }, { "math_id": 4, "text": "1 + \\epsilon" }, { "math_id": 5, "text": "\\text{B} \\in \\text{PTAS} \\implies \\text{A} \\in \\text{PTAS}" }, { "math_id": 6, "text": "\\text{A} \\not\\in \\text{PTAS} \\implies \\text{B} \\not\\in \\text{PTAS}" } ]
https://en.wikipedia.org/wiki?curid=9010084
901172
Zone plate
Device used to focus light using diffraction A zone plate is a device used to focus light or other things exhibiting wave character. Unlike lenses or curved mirrors, zone plates use diffraction instead of refraction or reflection. Based on analysis by French physicist Augustin-Jean Fresnel, they are sometimes called Fresnel zone plates in his honor. The zone plate's focusing ability is an extension of the Arago spot phenomenon caused by diffraction from an opaque disc. A zone plate consists of a set of concentric rings, known as Fresnel zones, which alternate between being opaque and transparent. Light hitting the zone plate will diffract around the opaque zones. The zones can be spaced so that the diffracted light constructively interferes at the desired focus, creating an image there. Design and manufacture. To get constructive interference at the focus, the zones should switch from opaque to transparent at radii where formula_0 where "n" is an integer, λ is the wavelength of the light the zone plate is meant to focus and "f" is the distance from the center of the zone plate to the focus. When the zone plate is small compared to the focal length, this can be approximated as formula_1 For plates with many zones, you can calculate the distance to the focus if you only know the radius of the outermost zone, "r""N", and its width, Δ"r""N": formula_2 In the long focal length limit, the area of each zone is equal, because the width of the zones must decrease farther from the center. The maximum possible resolution of a zone plate depends on the smallest zone width, formula_3 Because of this, the smallest size object you can image, Δ"l", is limited by how small you can reliably make your zones. Zone plates are frequently manufactured using lithography. As lithography technology improves and the size of features that can be manufactured decreases, the possible resolution of zone plates manufactured with this technique can improve. Continuous zone plates. Unlike a standard lens, a binary zone plate produces intensity maxima along the axis of the plate at odd fractions ("f"/3, "f"/5, "f"/7, etc.). Although these contain less energy (counts of the spot) than the principal focus (because it is wider), they have the same maximum intensity (counts/m2). However, if the zone plate is constructed so that the opacity varies in a gradual, sinusoidal manner, the resulting diffraction causes only a single focal point to be formed. This type of zone plate pattern is the equivalent of a transmission hologram of a converging lens. For a smooth zone plate, the opacity (or transparency) at a point can be given by: formula_4 where formula_5 is the distance from the plate center, and formula_6 determines the plate's scale. Binary zone plates use almost the same formula, however they depend only on the sign: formula_7 Free parameter. It does not matter to the constructive interference what the absolute phase is, but only that it is the same from each ring. So an arbitrary length can be added to all the paths formula_8 This reference phase can be chosen to optimize secondary properties such as side lobes. Applications. Physics. There are many wavelengths of light outside of the visible area of the electromagnetic spectrum where traditional lens materials like glass are not transparent, and so lenses are more difficult to manufacture. Likewise, there are many wavelengths for which there are no materials with a refractive index significantly differing from one. 
X-rays, for example, are only weakly refracted by glass or other materials, and so require a different technique for focusing. Zone plates eliminate the need for finding transparent, refractive, easy-to-manufacture materials for every region of the spectrum. The same zone plate will focus light of many wavelengths to different foci, which means they can also be used to filter out unwanted wavelengths while focusing the light of interest. Other waves such as sound waves and, due to quantum mechanics, matter waves can be focused in the same way. Zone plates have been used to focus beams of neutrons and helium atoms. Photography. Zone plates are also used in photography in place of a lens or pinhole for a glowing, soft-focus image. One advantage over pinholes (aside from the unique, fuzzy look achieved with zone plates) is that the transparent area is larger than that of a comparable pinhole. The result is that the effective f-number of a zone plate is lower than for the corresponding pinhole and the exposure time can be decreased. Common f-numbers for a pinhole camera range from f/150 to f/200 or higher, whereas zone plates are frequently f/40 and lower. This makes hand-held shots feasible at the higher ISO settings available with newer DSLR cameras. Gunsights. Zone plates have been proposed as a cheap alternative to more expensive optical sights or targeting lasers. Lenses. Zone plates may be used as imaging lenses with a single focus as long as the type of grating used is sinusoidal in nature. A specifically designed Fresnel zone plate with blazed phase structures is sometimes called a kinoform. Reflection. A zone plate used as a reflector will allow radio waves to be focused as if by a parabolic reflector. This allows the reflector to be flat, and so easier to make. It also allows an appropriately patterned Fresnel reflector to be mounted flush to the side of a building, avoiding the wind loading that a paraboloid would be subject to. Software testing. A bitmap representation of a zone plate image may be used for testing various image processing algorithms. An open-source zone-plate image generator is available.
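The design relations above lend themselves to a quick numerical check. The following Python sketch (a minimal illustration assuming NumPy is available; the wavelength, focal length, zone count and grid size are arbitrary example values, not taken from any source) computes the exact and approximate zone radii, recovers the focal length from the outermost zone, and renders a binary zone-plate pattern from the sign formula; the half-zone phase offset implied by that pattern is covered by the free reference phase discussed above.

import numpy as np

# Example design parameters (arbitrary values for illustration).
wavelength = 550e-9          # 550 nm light, in metres
f = 0.1                      # desired focal length, in metres

# Exact and approximate radii of the first ten zone boundaries.
n = np.arange(1, 11)
r_exact = np.sqrt(n * wavelength * f + 0.25 * n**2 * wavelength**2)
r_approx = np.sqrt(n * wavelength * f)

# Focal length recovered from the outermost zone radius and width.
r_N = r_exact[-1]
dr_N = r_exact[-1] - r_exact[-2]
print(2 * r_N * dr_N / wavelength)      # close to the assumed 0.1 m

# Binary zone-plate pattern from the sign of cos(k r^2);
# 1 marks transparent zones, 0 marks opaque zones.
k = np.pi / (wavelength * f)
x = np.linspace(-1.2 * r_N, 1.2 * r_N, 501)
xx, yy = np.meshgrid(x, x)
pattern = (1 + np.sign(np.cos(k * (xx**2 + yy**2)))) / 2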
[ { "math_id": 0, "text": "r_n = \\sqrt{n\\lambda f + \\frac{1}{4}n^2 \\lambda^2}" }, { "math_id": 1, "text": "r_n \\simeq \\sqrt{n\\lambda f}" }, { "math_id": 2, "text": "f = \\frac{2r_N \\Delta r_N}{\\lambda}" }, { "math_id": 3, "text": "\\frac{\\Delta l}{\\Delta r_N} \\approx 1.22" }, { "math_id": 4, "text": "\\frac{1 \\pm \\cos\\left(kr^2\\right)}{2}\\," }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\frac{1 \\pm \\sgn\\left(\\cos\\left(kr^2\\right)\\right)}{2}\\," }, { "math_id": 8, "text": "r_n = \\sqrt{(n + \\alpha)\\lambda f + \\frac{1}{4}(n + \\alpha)^2 \\lambda^2}" } ]
https://en.wikipedia.org/wiki?curid=901172
901260
Close-packing of equal spheres
Dense arrangement of congruent spheres in an infinite, regular arrangement In geometry, close-packing of equal spheres is a dense arrangement of congruent spheres in an infinite, regular arrangement (or lattice). Carl Friedrich Gauss proved that the highest average density – that is, the greatest fraction of space occupied by spheres – that can be achieved by a lattice packing is formula_0. The same packing density can also be achieved by alternate stackings of the same close-packed planes of spheres, including structures that are aperiodic in the stacking direction. The Kepler conjecture states that this is the highest density that can be achieved by any arrangement of spheres, either regular or irregular. This conjecture was proven by T. C. Hales. The highest density is known only for 1, 2, 3, 8, and 24 dimensions. Many crystal structures are based on a close-packing of a single kind of atom, or a close-packing of large ions with smaller ions filling the spaces between them. The cubic and hexagonal arrangements are very close to one another in energy, and it may be difficult to predict which form will be preferred from first principles. FCC and HCP lattices. There are two simple regular lattices that achieve this highest average density. They are called face-centered cubic (FCC) (also called cubic close packed) and hexagonal close-packed (HCP), based on their symmetry. Both are based upon sheets of spheres arranged at the vertices of a triangular tiling; they differ in how the sheets are stacked upon one another. The FCC lattice is also known to mathematicians as that generated by the A3 root system. Cannonball problem. The problem of close-packing of spheres was first mathematically analyzed by Thomas Harriot around 1587, after a question on piling cannonballs on ships was posed to him by Sir Walter Raleigh on their expedition to America. Cannonballs were usually piled in a rectangular or triangular wooden frame, forming a three-sided or four-sided pyramid. Both arrangements produce a face-centered cubic lattice – with different orientation to the ground. Hexagonal close-packing would result in a six-sided pyramid with a hexagonal base. The cannonball problem asks which flat square arrangements of cannonballs can be stacked into a square pyramid. Édouard Lucas formulated the problem as the Diophantine equation formula_1 or formula_2 and conjectured that the only solutions are formula_3 and formula_4. Here formula_5 is the number of layers in the pyramidal stacking arrangement and formula_6 is the number of cannonballs along an edge in the flat square arrangement. Positioning and spacing. In both the FCC and HCP arrangements each sphere has twelve neighbors. For every sphere there is one gap surrounded by six spheres (octahedral) and two smaller gaps surrounded by four spheres (tetrahedral). The distances to the centers of these gaps from the centers of the surrounding spheres are √(3/2) (approximately 1.22) for the tetrahedral, and √2 for the octahedral, when the sphere radius is 1. Relative to a reference layer with positioning A, two more positionings B and C are possible. Every sequence of A, B, and C without immediate repetition of the same one is possible and gives an equally dense packing for spheres of a given radius. The most regular ones are HCP, with the stacking sequence ABABAB..., and FCC, with the stacking sequence ABCABC... There is an uncountably infinite number of disordered arrangements of planes (e.g. ABCACBABABAC...) that are sometimes collectively referred to as "Barlow packings", after crystallographer William Barlow. 
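As a quick numerical illustration of the cannonball problem described above (an informal check over a finite range of formula_5, not a proof), a few lines of Python confirm that the only square pyramidal numbers below the chosen bound that are also perfect squares occur at formula_3 and formula_4:

import math

# Check which N make 1^2 + 2^2 + ... + N^2 = N(N+1)(2N+1)/6 a perfect square.
for N in range(1, 100_000):
    total = N * (N + 1) * (2 * N + 1) // 6
    M = math.isqrt(total)
    if M * M == total:
        print(N, M)   # prints 1 1 and 24 70 in this range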
In close-packing, the center-to-center spacing of spheres in the "xy" plane is a simple honeycomb-like tessellation with a pitch (distance between sphere centers) of one sphere diameter. The distance between sphere centers, projected on the "z" (vertical) axis, is: formula_7 where "d" is the diameter of a sphere; this follows from the tetrahedral arrangement of close-packed spheres. The coordination number of HCP and FCC is 12 and their atomic packing factors (APFs) are equal to the number mentioned above, 0.74. Lattice generation. When forming any sphere-packing lattice, the first fact to notice is that whenever two spheres touch, a straight line may be drawn from the center of one sphere to the center of the other, intersecting the point of contact. The distance between the centers along the shortest path, namely that straight line, will therefore be "r"1 + "r"2 where "r"1 is the radius of the first sphere and "r"2 is the radius of the second. In close packing all of the spheres share a common radius, "r". Therefore, two centers would simply have a distance 2"r". Simple HCP lattice. To form an A-B-A-B-... hexagonal close packing of spheres, the coordinate points of the lattice will be the spheres' centers. Suppose the goal is to fill a box with spheres according to HCP. The box would be placed on the "x"-"y"-"z" coordinate space. First form a row of spheres. The centers will all lie on a straight line. Their "x"-coordinate will vary by 2"r", since the distance between the centers of touching spheres is 2"r". The "y"- and "z"-coordinates will be the same. For simplicity, say that the balls are the first row and that their "y"- and "z"-coordinates are simply "r", so that their surfaces rest on the zero-planes. Coordinates of the centers of the first row will look like (2"r", "r", "r"), (4"r", "r", "r"), (6"r", "r", "r"), (8"r", "r", "r"), ... . Now, form the next row of spheres. Again, the centers will all lie on a straight line with "x"-coordinate differences of 2"r", but there will be a shift of distance "r" in the "x"-direction so that the center of every sphere in this row aligns with the "x"-coordinate of where two spheres touch in the first row. This allows the spheres of the new row to slide in closer to the first row until all spheres in the new row are touching two spheres of the first row. Since the new spheres "touch" two spheres, their centers form an equilateral triangle with those two neighbors' centers. The side lengths are all 2"r", so the height or "y"-coordinate difference between the rows is √3"r". Thus, this row will have coordinates like this: formula_8 The first sphere of this row only touches one sphere in the original row, but its location follows suit with the rest of the row. The next row follows this pattern of shifting the "x"-coordinate by "r" and the "y"-coordinate by √3"r". Add rows until reaching the "x" and "y" maximum borders of the box. In an A-B-A-B-... stacking pattern, the odd numbered "planes" of spheres will have exactly the same coordinates save for a pitch difference in the "z"-coordinates, and the even numbered "planes" of spheres will share the same "x"- and "y"-coordinates. Both types of planes are formed using the pattern mentioned above, but the starting place for the "first" row's first sphere will be different. Using the plane described precisely above as plane #1, the A plane, place a sphere on top of this plane so that it lies touching three spheres in the A-plane. 
The three spheres are all already touching each other, forming an equilateral triangle, and since they all touch the new sphere, the four centers form a regular tetrahedron. All of the sides are equal to 2"r" because all of the sides are formed by two spheres touching. The height of this tetrahedron, which is also the "z"-coordinate difference between the two "planes", is 2√6"r"/3. This, combined with the offsets in the "x"- and "y"-coordinates, gives the centers of the first row in the B plane: formula_9 The second row's coordinates follow the pattern first described above and are: formula_10 The difference to the next plane, the A plane, is again 2√6"r"/3 in the "z"-direction and a shift in the "x" and "y" to match those "x"- and "y"-coordinates of the first A plane. In general, the coordinates of sphere centers can be written as: formula_11 where "i", "j" and "k" are indices starting at 0 for the "x"-, "y"- and "z"-coordinates. Miller indices. Crystallographic features of HCP systems, such as vectors and atomic plane families, can be described using a four-value Miller index notation ( "hkil" ) in which the third index "i" denotes a degenerate but convenient component which is equal to −"h" − "k". The "h", "i" and "k" index directions are separated by 120°, and are thus not orthogonal; the "l" component is mutually perpendicular to the "h", "i" and "k" index directions. Filling the remaining space. The FCC and HCP packings are the densest known packings of equal spheres with the highest symmetry (smallest repeat units). Denser sphere packings are known, but they involve unequal sphere packing. A packing density of 1, filling space completely, requires non-spherical shapes, such as honeycombs. Replacing each contact point between two spheres with an edge connecting the centers of the touching spheres produces tetrahedrons and octahedrons of equal edge lengths. The FCC arrangement produces the tetrahedral-octahedral honeycomb. The HCP arrangement produces the gyrated tetrahedral-octahedral honeycomb. If, instead, every sphere is augmented with the points in space that are closer to it than to any other sphere, the duals of these honeycombs are produced: the rhombic dodecahedral honeycomb for FCC, and the trapezo-rhombic dodecahedral honeycomb for HCP. Spherical bubbles appear in soapy water in an FCC or HCP arrangement when the water in the gaps between the bubbles drains out. This pattern also approaches the rhombic dodecahedral honeycomb or trapezo-rhombic dodecahedral honeycomb. However, such FCC or HCP foams of very small liquid content are unstable, as they do not satisfy Plateau's laws. The Kelvin foam and the Weaire–Phelan foam are more stable, having smaller interfacial energy in the limit of a very small liquid content. There are two types of interstitial holes left by hcp and fcc conformations: tetrahedral and octahedral voids. Four spheres surround the tetrahedral hole, with three spheres being in one layer and one sphere from the next layer. Six spheres surround an octahedral void, with three spheres coming from one layer and three spheres coming from the next layer. Structures of many simple chemical compounds, for instance, are often described in terms of small atoms occupying tetrahedral or octahedral holes in close-packed systems that are formed from larger atoms. Layered structures are formed by alternating empty and filled octahedral planes. Two octahedral layers usually allow for four structural arrangements that can be filled by either an hcp or fcc packing system. 
In filling tetrahedral holes, a complete filling leads to an fcc array. In unit cells, hole filling can sometimes lead to polyhedral arrays with a mix of hcp and fcc layering. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
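As a supplementary illustration of the "Simple HCP lattice" construction above, the following Python sketch (a minimal example assuming NumPy; the function name and grid sizes are arbitrary choices) generates sphere centers from the general index formula and checks that the nearest-neighbour spacing equals one sphere diameter, 2"r".

import numpy as np

def hcp_centers(ni, nj, nk, r=1.0):
    # Sphere centres of an A-B-A-B-... (HCP) packing, following the
    # index formula in the text; i, j, k start at 0.
    pts = []
    for k in range(nk):
        for j in range(nj):
            for i in range(ni):
                x = 2 * i + ((j + k) % 2)
                y = np.sqrt(3) * (j + (k % 2) / 3.0)
                z = 2 * np.sqrt(6) / 3.0 * k
                pts.append((r * x, r * y, r * z))
    return np.array(pts)

centers = hcp_centers(6, 6, 4)
# The nearest-neighbour distance should be one sphere diameter (2r = 2).
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
d[d == 0] = np.inf
print(round(d.min(), 6))   # 2.0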
[ { "math_id": 0, "text": "\\frac{\\pi}{3\\sqrt 2} \\approx 0.74048" }, { "math_id": 1, "text": "\\sum_{n=1}^{N} n^2 = M^2" }, { "math_id": 2, "text": "\\frac{1}{6} N(N+1)(2N+1) = M^2" }, { "math_id": 3, "text": "N = 1, M = 1," }, { "math_id": 4, "text": "N = 24, M = 70" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "\\text{pitch}_Z = \\sqrt{6} \\cdot {d\\over 3}\\approx0.816\\,496\\,58 d," }, { "math_id": 8, "text": "\\left(r, r + \\sqrt{3}r, r\\right),\\ \\left(3r, r + \\sqrt{3}r, r\\right),\\ \\left(5r, r + \\sqrt{3}r, r\\right),\\ \\left(7r, r + \\sqrt{3}r, r\\right), \\dots." }, { "math_id": 9, "text": "\\left(r, r + \\frac{\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\ \\left(3r, r + \\frac{\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\ \\left(5r, r + \\frac{\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\ \\left(7r, r + \\frac{\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right), \\dots. " }, { "math_id": 10, "text": "\\left(2r, r + \\frac{4\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\ \\left(4r, r + \\frac{4\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\ \\left(6r, r + \\frac{4\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\ \\left(8r,r + \\frac{4\\sqrt{3}r}{3}, r + \\frac{\\sqrt{6}r2}{3}\\right),\\dots. " }, { "math_id": 11, "text": "\\begin{bmatrix}\n 2i + ((j\\ +\\ k) \\bmod 2)\\\\\n \\sqrt{3}\\left[j + \\frac{1}{3}(k \\bmod 2)\\right]\\\\\n \\frac{2\\sqrt{6}}{3}k\n\\end{bmatrix}r" } ]
https://en.wikipedia.org/wiki?curid=901260
901382
Interaction picture
View of quantum mechanics In quantum mechanics, the interaction picture (also known as the interaction representation or Dirac picture after Paul Dirac, who introduced it) is an intermediate representation between the Schrödinger picture and the Heisenberg picture. Whereas in the other two pictures either the state vector or the operators carry time dependence, in the interaction picture both carry part of the time dependence of observables. The interaction picture is useful in dealing with changes to the wave functions and observables due to interactions. Most field-theoretical calculations use the interaction representation because they construct the solution to the many-body Schrödinger equation as the solution to the free-particle problem plus some unknown interaction parts. Equations that include operators acting at different times, which hold in the interaction picture, don't necessarily hold in the Schrödinger or the Heisenberg picture. This is because time-dependent unitary transformations relate operators in one picture to the analogous operators in the others. The interaction picture is a special case of unitary transformation applied to the Hamiltonian and state vectors. Definition. Operators and state vectors in the interaction picture are related by a change of basis (unitary transformation) to those same operators and state vectors in the Schrödinger picture. To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts: formula_0 Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that "H"0,S is well understood and exactly solvable, while "H"1,S contains some harder-to-analyze perturbation to this system. If the Hamiltonian has "explicit time-dependence" (for example, if the quantum system interacts with an applied external electric field that varies in time), it will usually be advantageous to include the explicitly time-dependent terms with "H"1,S, leaving "H"0,S time-independent: formula_1 We proceed assuming that this is the case. If there "is" a context in which it makes sense to have "H"0,S be time-dependent, then one can proceed by replacing formula_2 by the corresponding time-evolution operator in the definitions below. State vectors. Let formula_3 be the time-dependent state vector in the Schrödinger picture. A state vector in the interaction picture, formula_4, is defined with an additional time-dependent unitary transformation. formula_5 Operators. An operator in the interaction picture is defined as formula_6 Note that "A"S("t") will typically not depend on t and can be rewritten as just "A"S. It only depends on t if the operator has "explicit time dependence", for example, due to its dependence on an applied external time-varying electric field. Another instance of explicit time dependence may occur when "A"S("t") is a density matrix (see below). Hamiltonian operator. For the operator formula_7 itself, the interaction picture and Schrödinger picture coincide: formula_8 This is easily seen through the fact that operators commute with differentiable functions of themselves. This particular operator then can be called formula_7 without ambiguity. For the perturbation Hamiltonian formula_9, however, formula_10 where the interaction-picture perturbation Hamiltonian becomes a time-dependent Hamiltonian, unless ["H"1,S, "H"0,S] = 0. 
It is possible to obtain the interaction picture for a time-dependent Hamiltonian "H"0,S("t") as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by "H"0,S("t"), or more explicitly with a time-ordered exponential integral. Density matrix. The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let "ρ"I and "ρ"S be the density matrices in the interaction picture and the Schrödinger picture respectively. If there is probability "pn" to be in the physical state |"ψ""n"⟩, then formula_11 Time-evolution. Time-evolution of states. Transforming the Schrödinger equation into the interaction picture gives formula_12 which states that in the interaction picture, a quantum state is evolved by the interaction part of the Hamiltonian as expressed in the interaction picture. A proof is given in Fetter and Walecka. Time-evolution of operators. If the operator "A"S is time-independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for "A"I("t") is given by formula_13 In the interaction picture the operators evolve in time like the operators in the Heisenberg picture with the Hamiltonian "H"' = "H"0. Time-evolution of the density matrix. The evolution of the density matrix in the interaction picture is formula_14 consistent with the Schrödinger equation in the interaction picture. Expectation values. For a general operator formula_15, the expectation value in the interaction picture is given by formula_16 Using the density-matrix expression for the expectation value, we get formula_17 Schwinger–Tomonaga equation. The term interaction representation was invented by Schwinger. In this new mixed representation the state vector is no longer constant in general, but it is constant if there is no coupling between fields. The change of representation leads directly to the Tomonaga–Schwinger equation: formula_18 formula_19 where the Hamiltonian in this case is the QED interaction Hamiltonian, but it can also be a generic interaction, and formula_20 is a spacelike surface that is passing through the point formula_21. The derivative formally represents a variation over that surface with formula_21 held fixed. It is difficult to give a precise mathematical formal interpretation of this equation. This approach is called the 'differential' and 'field' approach by Schwinger, as opposed to the 'integral' and 'particle' approach of the Feynman diagrams. The core idea is that if the interaction has a small coupling constant (e.g., in the case of electromagnetism, of the order of the fine structure constant) successive perturbative terms will be powers of the coupling constant and therefore smaller. Use. The purpose of the interaction picture is to shunt all the time dependence due to "H"0 onto the operators, thus allowing them to evolve freely, and leaving only "H"1,I to control the time-evolution of the state vectors. The interaction picture is convenient when considering the effect of a small interaction term, "H"1,S, being added to the Hamiltonian of a solved system, "H"0,S. 
By utilizing the interaction picture, one can use time-dependent perturbation theory to find the effect of "H"1,I, e.g., in the derivation of Fermi's golden rule, or the Dyson series in quantum field theory: in 1947, Shin'ichirō Tomonaga and Julian Schwinger appreciated that covariant perturbation theory could be formulated elegantly in the interaction picture, since field operators can evolve in time as free fields, even in the presence of interactions, now treated perturbatively in such a Dyson series. Summary comparison of evolution in all pictures. For a time-independent Hamiltonian "H"S, where "H"0,S is the free Hamiltonian, the state vectors carry all of the time dependence in the Schrödinger picture, none of it in the Heisenberg picture, and only the part generated by the interaction in the interaction picture; correspondingly, the observables are constant in the Schrödinger picture, evolve under the full Hamiltonian "H"S in the Heisenberg picture, and evolve under the free Hamiltonian "H"0,S in the interaction picture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
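The defining transformations can be checked numerically on a small example. The following Python sketch (a minimal illustration assuming NumPy and SciPy; the two-level Hamiltonian, observable and initial state are arbitrary choices, not taken from any source) builds the interaction-picture state and operator from their Schrödinger-picture counterparts and verifies that expectation values agree in the two pictures.

import numpy as np
from scipy.linalg import expm

hbar = 1.0
H0 = np.diag([0.0, 1.0])                       # solvable part, H_{0,S}
H1 = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])  # perturbation, H_{1,S}
H = H0 + H1
A_S = np.array([[1, 0], [0, -1]], dtype=complex)        # a Schrödinger-picture observable
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)

t = 0.7
U0 = expm(-1j * H0 * t / hbar)     # evolution generated by H0 alone
U = expm(-1j * H * t / hbar)       # full Schrödinger-picture evolution

psi_S = U @ psi0                   # |psi_S(t)>
psi_I = U0.conj().T @ psi_S        # |psi_I(t)> = e^{+i H0 t/hbar} |psi_S(t)>
A_I = U0.conj().T @ A_S @ U0       # A_I(t) = e^{+i H0 t/hbar} A_S e^{-i H0 t/hbar}

# Expectation values are the same in both pictures:
print(np.allclose(psi_I.conj() @ A_I @ psi_I,
                  psi_S.conj() @ A_S @ psi_S))   # True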
[ { "math_id": 0, "text": "H_\\text{S} = H_{0,\\text{S}} + H_{1,\\text{S}}." }, { "math_id": 1, "text": "H_\\text{S}(t) = H_{0,\\text{S}} + H_{1,\\text{S}}(t)." }, { "math_id": 2, "text": "\\mathrm{e}^{\\pm \\mathrm{i} H_{0,\\text{S}} t/\\hbar}" }, { "math_id": 3, "text": "|\\psi_\\text{S}(t)\\rangle = \\mathrm{e}^{-\\mathrm{i}H_\\text{S}t/\\hbar}|\\psi(0)\\rangle" }, { "math_id": 4, "text": "|\\psi_\\text{I}(t)\\rangle" }, { "math_id": 5, "text": " | \\psi_\\text{I}(t) \\rangle = \\text{e}^{\\mathrm{i} H_{0,\\text{S}} t / \\hbar} | \\psi_\\text{S}(t) \\rangle." }, { "math_id": 6, "text": "A_\\text{I}(t) = \\mathrm{e}^{\\mathrm{i} H_{0,\\text{S}} t / \\hbar} A_\\text{S}(t) \\mathrm{e}^{-\\mathrm{i} H_{0,\\text{S}} t / \\hbar}." }, { "math_id": 7, "text": "H_0" }, { "math_id": 8, "text": "H_{0,\\text{I}}(t) = \\mathrm{e}^{\\mathrm{i} H_{0,\\text{S}} t / \\hbar} H_{0,\\text{S}} \\mathrm{e}^{-\\mathrm{i} H_{0,\\text{S}} t / \\hbar} = H_{0,\\text{S}}." }, { "math_id": 9, "text": "H_{1,\\text{I}}" }, { "math_id": 10, "text": "H_{1,\\text{I}}(t) = \\mathrm{e}^{\\mathrm{i} H_{0,\\text{S}} t / \\hbar} H_{1,\\text{S}} \\mathrm{e}^{-\\mathrm{i} H_{0,\\text{S}} t / \\hbar}," }, { "math_id": 11, "text": "\\begin{align}\n\\rho_\\text{I}(t)\n&= \\sum_n p_n(t) \\left|\\psi_{n,\\text{I}}(t)\\right\\rang \\left\\lang \\psi_{n,\\text{I}}(t)\\right| \\\\\n&= \\sum_n p_n(t) \\mathrm{e}^{\\mathrm{i} H_{0,\\text{S}} t / \\hbar} \\left|\\psi_{n,\\text{S}}(t)\\right\\rang \\left\\lang \\psi_{n,\\text{S}}(t)\\right| \\mathrm{e}^{-\\mathrm{i} H_{0,\\text{S}} t / \\hbar} \\\\\n&= \\mathrm{e}^{\\mathrm{i} H_{0,\\text{S}} t / \\hbar} \\rho_\\text{S}(t) \\mathrm{e}^{-\\mathrm{i} H_{0,\\text{S}} t / \\hbar}.\n\\end{align}" }, { "math_id": 12, "text": " \\mathrm{i} \\hbar \\frac{\\mathrm{d}}{\\mathrm{d}t} |\\psi_\\text{I}(t)\\rang = H_{1,\\text{I}}(t) |\\psi_\\text{I}(t)\\rang, " }, { "math_id": 13, "text": " \\mathrm{i}\\hbar\\frac{\\mathrm{d}}{\\mathrm{d}t}A_\\text{I}(t) = [A_\\text{I}(t),H_{0,\\text{S}}]." }, { "math_id": 14, "text": " \\mathrm{i}\\hbar \\frac{\\mathrm{d}}{\\mathrm{d}t} \\rho_\\text{I}(t) = [H_{1,\\text{I}}(t), \\rho_\\text{I}(t)]," }, { "math_id": 15, "text": "A" }, { "math_id": 16, "text": "\n \\langle A_\\text{I}(t) \\rangle =\n \\langle \\psi_\\text{I}(t) | A_\\text{I}(t) | \\psi_\\text{I}(t) \\rangle =\n \\langle \\psi_\\text{S}(t) | e^{-i H_{0,\\text{S}} t} e^{i H_{0,\\text{S}} t} \\, A_\\text{S} \\, e^{-i H_{0,\\text{S}} t} e^{i H_{0,\\text{S}} t } | \\psi_\\text{S}(t) \\rangle =\n \\langle A_\\text{S}(t) \\rangle.\n" }, { "math_id": 17, "text": "\\langle A_\\text{I}(t) \\rangle = \\operatorname{Tr}\\big(\\rho_\\text{I}(t) \\, A_\\text{I}(t)\\big)." }, { "math_id": 18, "text": "ihc \\frac {\\partial \\Psi[\\sigma]}{\\partial \\sigma(x)} = \\hat{H}(x)\\Psi(\\sigma) " }, { "math_id": 19, "text": " \\hat{H}(x) = - \\frac{1}{c} j_{\\mu}(x) A^{\\mu}(x) " }, { "math_id": 20, "text": "\\sigma" }, { "math_id": 21, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=901382
901459
Second-countable space
Topological space whose topology has a countable base In topology, a second-countable space, also called a completely separable space, is a topological space whose topology has a countable base. More explicitly, a topological space formula_0 is second-countable if there exists some countable collection formula_1 of open subsets of formula_0 such that any open subset of formula_0 can be written as a union of elements of some subfamily of formula_2. A second-countable space is said to satisfy the second axiom of countability. Like other countability axioms, the property of being second-countable restricts the number of open sets that a space can have. Many "well-behaved" spaces in mathematics are second-countable. For example, Euclidean space (R"n") with its usual topology is second-countable. Although the usual base of open balls is uncountable, one can restrict to the collection of all open balls with rational radii and whose centers have rational coordinates. This restricted set is countable and still forms a basis. Properties. Second-countability is a stronger notion than first-countability. A space is first-countable if each point has a countable local base. Given a base for a topology and a point "x", the set of all basis sets containing "x" forms a local base at "x". Thus, if one has a countable base for a topology then one has a countable local base at every point, and hence every second-countable space is also a first-countable space. However, any uncountable discrete space is first-countable but not second-countable. Second-countability implies certain other topological properties. Specifically, every second-countable space is separable (has a countable dense subset) and Lindelöf (every open cover has a countable subcover). The reverse implications do not hold. For example, the lower limit topology on the real line is first-countable, separable, and Lindelöf, but not second-countable. For metric spaces, however, the properties of being second-countable, separable, and Lindelöf are all equivalent. Therefore, the lower limit topology on the real line is not metrizable. In second-countable spaces—as in metric spaces—compactness, sequential compactness, and countable compactness are all equivalent properties. Urysohn's metrization theorem states that every second-countable, regular Hausdorff space is metrizable. It follows that every such space is completely normal as well as paracompact. Second-countability is therefore a rather restrictive property on a topological space, requiring only a separation axiom to imply metrizability. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
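For concreteness, the countable base of Euclidean space mentioned above can be written out explicitly (a standard construction stated here for illustration; B(q, 1/m) denotes the open ball of radius 1/m centered at q):

\mathcal{B} = \left\{ B\left(q, \tfrac{1}{m}\right) : q \in \mathbb{Q}^n,\ m \in \mathbb{N} \right\}

This family is a countable union over "m" of sets indexed by the countable set Q"n", hence countable, and every open subset of R"n" is a union of members of it, so it is indeed a base.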
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "\\mathcal{U} = \\{U_i\\}_{i=1}^{\\infty}" }, { "math_id": 2, "text": "\\mathcal{U}" }, { "math_id": 3, "text": " X = [0,1] \\cup [2,3] \\cup [4,5] \\cup \\dots \\cup [2k, 2k+1] \\cup \\dotsb" } ]
https://en.wikipedia.org/wiki?curid=901459
901593
Lambda cube
In mathematical logic and type theory, the λ-cube (also written lambda cube) is a framework introduced by Henk Barendregt to investigate the different dimensions in which the calculus of constructions is a generalization of the simply typed λ-calculus. Each dimension of the cube corresponds to a new kind of dependency between terms and types. Here, "dependency" refers to the capacity of a term or type to bind a term or type. The respective dimensions of the λ-cube correspond to: the "x"-axis (formula_0), along which types may bind terms, corresponding to dependent types; the "y"-axis (formula_1), along which terms may bind types, corresponding to polymorphism; and the "z"-axis (formula_2), along which types may bind types, corresponding to type operators. The different ways to combine these three dimensions yield the 8 vertices of the cube, each corresponding to a different kind of typed system. The λ-cube can be generalized into the concept of a pure type system. Examples of Systems. (λ→) Simply typed lambda calculus. The simplest system found in the λ-cube is the simply typed lambda calculus, also called λ→. In this system, the only way to construct an abstraction is by making "a term depend on a term", with the typing rule: formula_3 (λ2) System F. In System F (also named λ2 for the "second-order typed lambda calculus") there is another type of abstraction, written with a formula_4, that allows "terms to depend on types", with the following rule: formula_5 The terms beginning with a formula_4 are called polymorphic, as they can be applied to different types to get different functions, similarly to polymorphic functions in ML-like languages. For instance, the polymorphic identity fun x -> x of OCaml has type 'a -> 'a meaning it can take an argument of any type codice_0 and return an element of that type. This type corresponds in λ2 to the type formula_6. (λω) System Fω. In System Fformula_7 a construction is introduced to supply "types that depend on other types". This is called a type constructor and provides a way to build "a function with a type as a "value"". An example of such a type constructor is the type of binary trees with leaves labeled by data of a given type formula_8: formula_9, where "formula_10" informally means "formula_8 is a type". This is a function that takes a type parameter formula_8 as an argument and returns the type of formula_11s of values of type formula_8. In concrete programming, this feature corresponds to the ability to define type constructors inside the language, rather than considering them as primitives. The previous type constructor roughly corresponds to the following definition of a tree with labeled leaves in OCaml: type 'a tree = | Leaf of 'a | Node of 'a tree * 'a tree This type constructor can be applied to other types to obtain new types. E.g., to obtain the type of trees of integers: type int_tree = int tree System Fformula_7 is generally not used on its own, but is useful to isolate the independent feature of type constructors. (λP) Lambda-P. In the λP system, also named ΛΠ, and closely related to the LF Logical Framework, one has so-called dependent types. These are "types that are allowed to depend on terms". The crucial introduction rule of the system is formula_12 where formula_13 represents valid types. The new type constructor formula_14 corresponds via the Curry-Howard isomorphism to a universal quantifier, and the system λP as a whole corresponds to first-order logic with implication as the only connective. An example of these dependent types in concrete programming is the type of vectors of a certain length: the length is a term, on which the type depends. (λω) System Fω. System Fω combines both the formula_4 constructor of System F and the type constructors from System Fformula_7. 
Thus System Fω provides both "terms that depend on types" and "types that depend on types". (λC) Calculus of constructions. In the calculus of constructions, denoted as λC in the cube or as λPω, these four features cohabit, so that both types and terms can depend on types and terms. The clear border that exists in λ→ between terms and types is somewhat abolished, as all types except the universal formula_15 are themselves terms with a type. Formal definition. As for all systems based upon the simply typed lambda calculus, all systems in the cube are given in two steps: first, raw terms, together with a notion of β-reduction, and then typing rules that allow those terms to be typed. The set of sorts is defined as formula_16; sorts are represented with the letter formula_17. There is also a set formula_18 of variables, represented by the letters formula_19. The raw terms of the eight systems of the cube are given by the following syntax: formula_20 and formula_21 denoting formula_22 when formula_23 does not occur free in formula_24. The environments, as is usual in typed systems, are given by formula_25 The notion of β-reduction is common to all systems in the cube. It is written formula_26 and given by the rules formula_27, formula_28, formula_29, formula_30 and formula_31. Its reflexive, transitive closure is written formula_32. The following typing rules are also common to all systems in the cube: formula_33, formula_34, formula_35, formula_36 and formula_37. The difference between the systems is in the pairs of sorts formula_38 that are allowed in the following two typing rules: formula_39 and formula_40. The correspondence between the systems and the pairs formula_38 allowed in the rules is the following: Each direction of the cube corresponds to one pair (excluding the pair formula_41 shared by all systems), and in turn each pair corresponds to one possibility of dependency between terms and types: the pair formula_42 allows types that depend on terms (the dependent types of λP), the pair formula_43 allows terms that depend on types (the polymorphism of λ2), and the pair formula_44 allows types that depend on types (the type constructors of λω). Each system of the cube therefore allows the pair formula_41 together with the subset of the other three pairs corresponding to its position in the cube. Comparison between the systems. λ→. A typical derivation that can be obtained is formula_45 or, with the arrow shortcut, formula_46, closely resembling the identity (of type formula_47) of the usual λ→. Note that all types used must appear in the context, because the only derivation that can be done in an empty context is formula_48. The computing power is quite weak; it corresponds to the extended polynomials (polynomials together with a conditional operator). λ2. In λ2, such terms can be obtained as formula_49 with formula_50. If one reads formula_51 as a universal quantification, via the Curry-Howard isomorphism, this can be seen as a proof of the principle of explosion. In general, λ2 adds the possibility to have impredicative types such as formula_52, that is, terms quantifying over all types, including themselves. The polymorphism also allows the construction of functions that were not constructible in λ→. More precisely, the functions definable in λ2 are those provably total in second-order Peano arithmetic. In particular, all primitive recursive functions are definable. λP. In λP, the ability to have types depending on terms means one can express logical predicates. For instance, the following is derivable: formula_53 which corresponds, via the Curry-Howard isomorphism, to a proof of formula_54. From the computational point of view, however, having dependent types does not enhance computational power, only the possibility to express more precise type properties. The conversion rule is strongly needed when dealing with dependent types, because it allows computation to be performed on the terms in the type. 
For instance, if you have formula_55 and formula_56, you need to apply the conversion rule to obtain formula_57 to be able to type formula_58. λω. In λω, the following operator formula_59 is definable, that is, formula_60. The derivation formula_61 can be obtained already in λ2; however, the polymorphic formula_62 can only be defined if the rule formula_43 is also present. From a computing point of view, λω is extremely strong, and has been considered as a basis for programming languages. λC. The calculus of constructions has both the predicate expressiveness of λP and the computational power of λω, which is why λC is also called λPω; it is very powerful, both on the logical side and on the computational side. Relation to other systems. The system Automath is similar to λ2 from a logical point of view. The ML-like languages, from a typing point of view, lie somewhere between λ→ and λ2, as they admit a restricted kind of polymorphic types, that is the types in prenex normal form. However, because they feature some recursion operators, their computing power is greater than that of λ2. The Coq system is based on an extension of λC with a linear hierarchy of universes, rather than only one untypable formula_63, and the ability to construct inductive types. Pure type systems can be seen as a generalization of the cube, with an arbitrary set of sorts, axiom, product and abstraction rules. Conversely, the systems of the lambda cube can be expressed as pure type systems with two sorts formula_64, the only axiom formula_65, and a set of rules formula_66 such that formula_67. Via the Curry-Howard isomorphism, there is a one-to-one correspondence between the systems in the lambda cube and logical systems, namely: λ→ corresponds to propositional logic, λ2 to second-order propositional logic, λP to first-order predicate logic, and λC to higher-order predicate logic, with the other vertices corresponding to intermediate second-order and (weak) higher-order variants of these propositional and predicate logics. All the logics are implicative (i.e. the only connectives are formula_68 and formula_69); however, one can define other connectives such as formula_70 or formula_71 in an impredicative way in second and higher order logics. In the weak higher order logics, there are variables for higher order predicates, but no quantification on those can be done. Common properties. All systems in the cube enjoy the Church–Rosser property (if formula_72 and formula_73, then there exists a term formula_74 such that formula_75 and formula_76), subject reduction (if formula_77 and formula_78, then formula_79), and uniqueness of types (if formula_80 and formula_81, then formula_82). All of these can be proven on generic pure type systems. Any term well-typed in a system of the cube is strongly normalizing, although this property is not common to all pure type systems. No system in the cube is Turing complete. Subtyping. Subtyping, however, is not represented in the cube, even though systems like formula_83, known as higher-order bounded quantification, which combine subtyping and polymorphism, are of practical interest and can be further generalized to bounded type operators. Further extensions to formula_83 allow the definition of purely functional objects; these systems were generally developed after the lambda cube paper was published. The idea of the cube is due to the mathematician Henk Barendregt (1991). The framework of pure type systems generalizes the lambda cube in the sense that all corners of the cube, as well as many other systems, can be represented as instances of this general framework. This framework predates the lambda cube by a couple of years. In his 1991 paper, Barendregt also defines the corners of the cube in this framework. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightarrow" }, { "math_id": 1, "text": "\\uparrow" }, { "math_id": 2, "text": "\\nearrow" }, { "math_id": 3, "text": "\\frac{\\Gamma, x : \\sigma \\;\\vdash\\; t : \\tau}{\\Gamma \\;\\vdash\\; \\lambda x . t : \\sigma \\to \\tau}" }, { "math_id": 4, "text": "\\Lambda" }, { "math_id": 5, "text": "\\frac{\\Gamma \\;\\vdash\\; t : \\sigma}{\\Gamma \\;\\vdash\\; \\Lambda \\alpha . t : \\Pi \\alpha . \\sigma} \\;\\text{ if } \\alpha\\text{ does not occur free in }\\Gamma" }, { "math_id": 6, "text": "\\Pi \\alpha . \\alpha \\to \\alpha" }, { "math_id": 7, "text": "\\underline{\\omega}" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "\\mathsf{TREE} := \\lambda A : * . \\Pi B . (A \\to B) \\to (B \\to B \\to B) \\to B" }, { "math_id": 10, "text": "A:*" }, { "math_id": 11, "text": "\\mathsf{TREE}" }, { "math_id": 12, "text": "\\frac{\\Gamma, x : A \\;\\vdash\\; B : *}{\\Gamma \\;\\vdash\\; (\\Pi x : A . B) : *}" }, { "math_id": 13, "text": "*" }, { "math_id": 14, "text": "\\Pi" }, { "math_id": 15, "text": "\\square" }, { "math_id": 16, "text": "S := \\{*, \\square \\}" }, { "math_id": 17, "text": "s" }, { "math_id": 18, "text": "V" }, { "math_id": 19, "text": "x,y,\\dots" }, { "math_id": 20, "text": "A := x \\mid s \\mid A~A \\mid \\lambda x : A . A \\mid \\Pi x : A . A" }, { "math_id": 21, "text": "A \\to B" }, { "math_id": 22, "text": "\\Pi x : A . B" }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": "B" }, { "math_id": 25, "text": "\\Gamma := \\emptyset \\mid \\Gamma, x : A" }, { "math_id": 26, "text": "\\to_{\\beta}" }, { "math_id": 27, "text": "\\frac{}{(\\lambda x : A . B)~C \\to_{\\beta} B[C/x]}" }, { "math_id": 28, "text": "\\frac{B \\to_{\\beta} B'}{\\lambda x : A . B \\to_{\\beta} \\lambda x : A . B'}" }, { "math_id": 29, "text": "\\frac{A \\to_{\\beta} A'}{\\lambda x : A . B \\to_{\\beta} \\lambda x : A' . B}" }, { "math_id": 30, "text": "\\frac{B \\to_{\\beta} B'}{\\Pi x : A . B \\to_{\\beta} \\Pi x : A . B'}" }, { "math_id": 31, "text": "\\frac{A \\to_{\\beta} A'}{\\Pi x : A . B \\to_{\\beta} \\Pi x : A' . B}" }, { "math_id": 32, "text": "=_\\beta" }, { "math_id": 33, "text": "\\frac{}{\\vdash * : \\square}\\quad \\text{(Axiom)}" }, { "math_id": 34, "text": "\\frac{\\Gamma \\vdash A : s \\quad x\\text{ does not occur in }\\Gamma}{\\Gamma, x : A \\vdash x : A }\\quad \\text{(Start)}" }, { "math_id": 35, "text": "\\frac{\\Gamma \\vdash A : B \\quad \\Gamma \\vdash C : s}{\\Gamma, x : C \\vdash A : B}\\quad \\text{(Weakening)}" }, { "math_id": 36, "text": "\\frac{\\Gamma \\vdash C : \\Pi x : A . B \\quad \\Gamma \\vdash a : A}{\\Gamma \\vdash Ca : B[a/x]}\\quad\\text{(Application)}" }, { "math_id": 37, "text": "\\frac{\\Gamma \\vdash A : B \\quad B =_{\\beta} B' \\quad \\Gamma \\vdash B' : s}{\\Gamma \\vdash A : B'}\\quad\\text{(Conversion)}" }, { "math_id": 38, "text": "(s_1,s_2)" }, { "math_id": 39, "text": "\\frac{\\Gamma \\vdash A : s_1 \\quad \\Gamma, x : A \\vdash B : s_2}{\\Gamma \\vdash \\Pi x : A . B : s_2}\\quad\\text{(Product)}" }, { "math_id": 40, "text": "\\frac{\\Gamma \\vdash A : s_1 \\quad \\Gamma, x : A \\vdash b : B \\quad \\Gamma, x : A \\vdash B : s_2}{\\Gamma \\vdash \\lambda x : A . b : \\Pi x : A . B}\\quad\\text{(Abstraction)}" }, { "math_id": 41, "text": "(*,*)" }, { "math_id": 42, "text": "(*,\\square)" }, { "math_id": 43, "text": "(\\square, *)" }, { "math_id": 44, "text": "(\\square, \\square)" }, { "math_id": 45, "text": "\\alpha : * \\vdash \\lambda x : \\alpha . x : \\Pi x : \\alpha . 
\\alpha" }, { "math_id": 46, "text": "\\alpha : * \\vdash \\lambda x : \\alpha . x : \\alpha \\to \\alpha" }, { "math_id": 47, "text": "\\alpha" }, { "math_id": 48, "text": "\\vdash * : \\square" }, { "math_id": 49, "text": "\\vdash (\\lambda \\beta : * . \\lambda x : \\bot . x \\beta) : \\Pi \\beta : * . \\bot \\to \\beta" }, { "math_id": 50, "text": "\\bot = \\Pi \\alpha : * . \\alpha" }, { "math_id": 51, "text": "\\Pi" }, { "math_id": 52, "text": "\\bot" }, { "math_id": 53, "text": "\\alpha : *, a_0 : \\alpha, p : \\alpha \\to *, q : * \\vdash \\lambda z : (\\Pi x : \\alpha . p x \\to q) . \\lambda y : (\\Pi x : \\alpha . p x) . (z a_0) (y a_0) : (\\Pi x : \\alpha . p x \\to q) \\to (\\Pi x : \\alpha . p x) \\to q" }, { "math_id": 54, "text": "(\\forall x : A, P x \\to Q) \\to (\\forall x : A, P x) \\to Q" }, { "math_id": 55, "text": "\\Gamma \\vdash A : P((\\lambda x . x)y)" }, { "math_id": 56, "text": "\\Gamma \\vdash B : \\Pi x : P(y) . C" }, { "math_id": 57, "text": "\\Gamma \\vdash A : P(y)" }, { "math_id": 58, "text": "\\Gamma \\vdash B A : C" }, { "math_id": 59, "text": "AND := \\lambda \\alpha : * . \\lambda \\beta : * . \\Pi \\gamma : * . (\\alpha \\to \\beta \\to \\gamma) \\to \\gamma" }, { "math_id": 60, "text": "\\vdash AND : * \\to * \\to *" }, { "math_id": 61, "text": "\\alpha : *, \\beta : * \\vdash \\Pi \\gamma : * . (\\alpha \\to \\beta \\to \\gamma) \\to \\gamma : *" }, { "math_id": 62, "text": "AND" }, { "math_id": 63, "text": "\\square" }, { "math_id": 64, "text": "\\{*, \\square\\}" }, { "math_id": 65, "text": "\\{*,\\square\\}" }, { "math_id": 66, "text": "R" }, { "math_id": 67, "text": "\\{(*,*,*)\\} \\subseteq R \\subseteq \\{(*,*,*), (*,\\square, \\square), (\\square, *, *), (\\square, \\square, \\square) \\}" }, { "math_id": 68, "text": "\\to" }, { "math_id": 69, "text": "\\forall" }, { "math_id": 70, "text": "\\wedge" }, { "math_id": 71, "text": "\\neg" }, { "math_id": 72, "text": "M \\to_\\beta N" }, { "math_id": 73, "text": "M \\to_\\beta N'" }, { "math_id": 74, "text": "N''" }, { "math_id": 75, "text": "N \\to^*_\\beta N''" }, { "math_id": 76, "text": "N' \\to^*_\\beta N''" }, { "math_id": 77, "text": "\\Gamma \\vdash M : T" }, { "math_id": 78, "text": "M \\to_\\beta M'" }, { "math_id": 79, "text": "\\Gamma \\vdash M' : T" }, { "math_id": 80, "text": "\\Gamma \\vdash A : B" }, { "math_id": 81, "text": "\\Gamma \\vdash A : B'" }, { "math_id": 82, "text": "B =_\\beta B'" }, { "math_id": 83, "text": "F^\\omega_{<:}" } ]
https://en.wikipedia.org/wiki?curid=901593
901613
Logical framework
In logic, a logical framework provides a means to define (or present) a logic as a signature in a higher-order type theory in such a way that provability of a formula in the original logic reduces to a type inhabitation problem in the framework type theory. This approach has been used successfully for (interactive) automated theorem proving. The first logical framework was Automath; however, the name of the idea comes from the more widely known Edinburgh Logical Framework, LF. Several more recent proof tools like Isabelle are based on this idea. Unlike a direct embedding, the logical framework approach allows many logics to be embedded in the same type system. Overview. A logical framework is based on a general treatment of syntax, rules and proofs by means of a dependently typed lambda calculus. Syntax is treated in a style similar to, but more general than, Per Martin-Löf's system of arities. To describe a logical framework, one must provide the following: a characterization of the class of object-logics to be represented; an appropriate meta-language; and a characterization of the mechanism by which object-logics are represented. This is summarized by: ""Framework = Language + Representation"." LF. In the case of the LF logical framework, the meta-language is the λΠ-calculus. This is a system of first-order dependent function types which are related by the propositions as types principle to first-order minimal logic. The key features of the λΠ-calculus are that it consists of entities of three levels: objects, types and kinds (or type classes, or families of types). It is predicative; all well-typed terms are strongly normalizing and Church-Rosser, and the property of being well-typed is decidable. However, type inference is undecidable. A logic is represented in the LF logical framework by the judgements-as-types representation mechanism. This is inspired by Per Martin-Löf's development of Kant's notion of judgement, in the 1983 Siena Lectures. The two higher-order judgements, the hypothetical, formula_0, and the general, formula_1, correspond to the ordinary and dependent function space, respectively. The methodology of judgements-as-types is that judgements are represented as the types of their proofs. A logical system formula_2 is represented by its signature which assigns kinds and types to a finite set of constants that represents its syntax, its judgements and its rule schemes. An object-logic's rules and proofs are seen as primitive proofs of hypothetico-general judgements formula_3. An implementation of the LF logical framework is provided by the Twelf system at Carnegie Mellon University. Twelf includes a logic programming engine and support for meta-theoretic reasoning about the encoded logics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "J\\vdash K" }, { "math_id": 1, "text": "\\Lambda x\\in J. K(x)" }, { "math_id": 2, "text": "{\\mathcal L}" }, { "math_id": 3, "text": "\\Lambda x\\in C. J(x)\\vdash K" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "\\lambda\\Pi" } ]
https://en.wikipedia.org/wiki?curid=901613
9017103
Error diffusion
Type of halftoning Error diffusion is a type of halftoning in which the quantization residual is distributed to neighboring pixels that have not yet been processed. Its main use is to convert a multi-level image into a binary image, though it has other applications. Unlike many other halftoning methods, error diffusion is classified as an area operation, because what the algorithm does at one location influences what happens at other locations. This means buffering is required, and complicates parallel processing. Point operations, such as ordered dither, do not have these complications. Error diffusion has the tendency to enhance edges in an image. This can make text in images more readable than in other halftoning techniques. Early history. Richard Howland Ranger received United States patent 1790723 for his invention, "Facsimile system". The patent, which issued in 1931, describes a system for transmitting images over telephone or telegraph lines, or by radio. Ranger's invention permitted continuous-tone photographs to be converted first into black and white, then transmitted to remote locations, which had a pen moving over a piece of paper. To render black, the pen was lowered to the paper; to produce white, the pen was raised. Shades of gray were rendered by intermittently raising and lowering the pen, depending upon the luminance of the gray desired. Ranger's invention used capacitors to store charges, and vacuum tube comparators to determine when the present luminance, plus any accumulated error, was above a threshold (causing the pen to be raised) or below (causing the pen to be lowered). In this sense, it was an analog version of error diffusion. Digital era. Floyd and Steinberg described a system for performing error diffusion on digital images based on a simple kernel formula_0 where "formula_1" denotes a pixel in the current row which has already been processed (hence diffusing error to it would be pointless), and "#" denotes the pixel currently being processed. Nearly concurrently, J. F. Jarvis, C. N. Judice, and W. H. Ninke of Bell Labs disclosed a similar method, which they termed "minimized average error" using a larger kernel formula_2 Algorithm description. Error diffusion takes a monochrome or color image and reduces the number of quantization levels. A popular application of error diffusion involves reducing the number of quantization states to just two per channel. This makes the image suitable for printing on binary printers such as black and white laser printers. In the discussion which follows, it is assumed that the number of quantization states in the error diffused image is two per channel, unless otherwise stated. One-dimensional error diffusion. The simplest form of the algorithm scans the image one row at a time and one pixel at a time. The current pixel is compared to a half-gray value. If it is above the value, a white pixel is generated in the resulting image. If the pixel is below the halfway brightness, a black pixel is generated. Different methods may be used if the target palette is not monochrome, such as thresholding with two values if the target palette is black, gray and white. The generated pixel is either full bright or full black, so there is an error in the image. The error is then added to the next pixel in the image and the process repeats. Two-dimensional error diffusion. 
One-dimensional error diffusion tends to have severe image artifacts that show up as distinct vertical lines. Two-dimensional error diffusion reduces the visual artifacts. The simplest algorithm is exactly like one-dimensional error diffusion, except that half the error is added to the next pixel, and half of the error is added to the pixel on the next line below. The kernel is formula_3 where "#" denotes the pixel currently being processed. Further refinement can be had by dispersing the error further away from the current pixel, as in the matrices given above in "Digital era". The sample image at the start of this article is an example of two-dimensional error diffusion. Color error diffusion. The same algorithms may be applied to each of the red, green, and blue (or cyan, magenta, yellow, black) channels of a color image to achieve a color effect on printers such as color laser printers that can only print single color values. However, better visual results may be obtained by first converting the color channels into a perceptive color model that will separate lightness, hue and saturation channels, so that a higher weight for error diffusion will be given to the lightness channel than to the hue channel. The motivation for this conversion is that human vision better perceives small differences of lightness in small local areas than similar differences of hue in the same area, and even more than similar differences of saturation on the same area. For example, if there is a small error in the green channel that cannot be represented, and another small error in the red channel in the same case, the properly weighted sum of these two errors may be used to adjust a perceptible lightness error that can be represented in a balanced way between all three color channels (according to their respective statistical contribution to the lightness), even if this produces a larger error for the hue when converting the green channel. This error will be diffused in the neighboring pixels. In addition, gamma correction may be needed on each of these perceptive channels, if they don't scale linearly with the human vision, so that error diffusion can be accumulated linearly to these gamma-corrected linear channels, before computing the final color channels of the rounded pixel colors, using a reverse conversion to the native non-gamma-corrected image format, from which the new residual error will be computed and converted again to be distributed to the next pixels. Error diffusion with several gray levels. Error diffusion may also be used to produce output images with more than two levels (per channel, in the case of color images). This has application in displays and printers which can produce 4, 8, or 16 levels in each image plane, such as electrostatic printers and displays in compact mobile telephones. Rather than use a single threshold to produce binary output, the closest permitted level is determined, and the error, if any, is diffused as described above. Printer considerations. Most printers overlap the black dots slightly, so there is not an exact one-to-one relationship between dot frequency (in dots per unit area) and lightness. Tone scale linearization may be applied to the source image to get the printed image to look correct. Edge enhancement versus lightness preservation. When an image has a transition from light to dark, the error-diffusion algorithm tends to make the next generated pixel be black. Dark-to-light transitions tend to result in the next generated pixel being white. 
This causes an edge-enhancement effect at the expense of gray-level reproduction accuracy. This results in error diffusion having a higher apparent resolution than other halftone methods. This is especially beneficial with images with text in them, such as the typical facsimile. This effect shows fairly well in the picture at the top of this article. The grass detail and the text on the sign are well preserved, as is the lightness in the sky, which contains little detail. A cluster-dot halftone image of the same resolution would be much less sharp.
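The Floyd–Steinberg kernel shown in the "Digital era" section can be written as a short routine. The following Python sketch (a minimal illustration assuming NumPy; the function and variable names are this example's own) binarizes a grayscale image with values in [0, 1], scanning left to right and top to bottom and distributing the quantization error with the weights 7/16, 3/16, 5/16 and 1/16.

import numpy as np

def floyd_steinberg(gray):
    # Binarize a 2-D array of values in [0, 1] by Floyd-Steinberg error diffusion.
    img = gray.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Example: dither a horizontal gray ramp.
ramp = np.tile(np.linspace(0, 1, 256), (64, 1))
halftone = floyd_steinberg(ramp)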
[ { "math_id": 0, "text": "\\frac{1}{16} \\begin{bmatrix}\n - & \\# & 7 \\\\\n 3 & 5 & 1\n\\end{bmatrix}," }, { "math_id": 1, "text": "-" }, { "math_id": 2, "text": "\\frac{1}{48} \\begin{bmatrix}\n - & - & \\# & 7 & 5 \\\\\n 3 & 5 & 7 & 5 & 3 \\\\\n 1 & 3 & 5 & 3 & 1\n\\end{bmatrix}." }, { "math_id": 3, "text": "\\frac{1}{2} \\begin{bmatrix}\n \\# & 1 \\\\\n 1 & 0\n\\end{bmatrix}," } ]
https://en.wikipedia.org/wiki?curid=9017103
9017838
Virtual fixture
Overlay of augmented sensory information upon a user's perception of a real environment A virtual fixture is an overlay of augmented sensory information upon a user's perception of a real environment in order to improve human performance in both direct and remotely manipulated tasks. Developed in the early 1990s by Louis Rosenberg at the U.S. Air Force Research Laboratory (AFRL), Virtual Fixtures was a pioneering platform in virtual reality and augmented reality technologies. History. Virtual Fixtures was first developed by Louis Rosenberg in 1992 at the USAF Armstrong Labs, resulting in the first immersive augmented reality system ever built. Because 3D graphics were too slow in the early 1990s to present a photorealistic and spatially-registered augmented reality, Virtual Fixtures used two real physical robots, controlled by a full upper-body exoskeleton worn by the user. To create the immersive experience for the user, a unique optics configuration was employed that involved a pair of binocular magnifiers aligned so that the user's view of the robot arms was brought forward so as to appear registered in the exact location of the user's real physical arms. The result was a spatially-registered immersive experience in which the user moved his or her arms, while seeing robot arms in the place where his or her arms should be. The system also employed computer-generated virtual overlays in the form of simulated physical barriers, fields, and guides, designed to assist the user while performing real physical tasks. Fitts' law performance testing was conducted on batteries of human test subjects, demonstrating for the first time that a significant enhancement in human performance of real-world dexterous tasks could be achieved by providing immersive augmented reality overlays to users. Concept. The concept of virtual fixtures was first introduced as an overlay of virtual sensory information on a workspace in order to improve human performance in direct and remotely manipulated tasks. The virtual sensory overlays can be presented as physically realistic structures, registered in space such that they are perceived by the user to be fully present in the real workspace environment. The virtual sensory overlays can also be abstractions that have properties not possible for real physical structures. The concept of sensory overlays is difficult to visualize and talk about; as a consequence, the virtual fixture metaphor was introduced. To understand what a virtual fixture is, an analogy with a real physical fixture such as a ruler is often used. A simple task such as drawing a straight line on a piece of paper free-hand is a task that most humans are unable to perform with good accuracy and high speed. However, the use of a simple device such as a ruler allows the task to be carried out quickly and with good accuracy. The use of a ruler helps the user by guiding the pen along the ruler, reducing the tremor and mental load of the user, thus increasing the quality of the results. When the Virtual Fixture concept was proposed to the U.S. Air Force in 1991, augmented surgery was an example use case, expanding the idea from a virtual ruler guiding a real pencil to a virtual medical fixture guiding a real physical scalpel manipulated by a real surgeon. 
The objective was to overlay virtual content upon the surgeon's direct perception of the real workspace with sufficient realism that it would be perceived as authentic additions to the surgical environment and thereby enhance surgical skill, dexterity, and performance. A proposed benefit of virtual medical fixtures as compared to real hardware was that because they were virtual additions to the ambient reality, they could be partially submerged within real patients, providing guidance and/or barriers within unexposed tissues. The definition of virtual fixtures is much broader than simply providing guidance of the end-effector. For example, auditory virtual fixtures are used to increase the user's awareness by providing multi-modal audio cues that help localize the end-effector. However, in the context of human-machine collaborative systems, the term virtual fixtures is often used to refer to a task-dependent virtual aid that is overlaid upon a real environment and guides the user's motion along desired directions while preventing motion in undesired directions or regions of the workspace. Virtual fixtures can be either "guiding virtual fixtures" or "forbidden regions virtual fixtures". A forbidden regions virtual fixture could be used, for example, in a teleoperated setting where the operator has to drive a vehicle at a remote site to accomplish an objective. If there are pits at the remote site into which it would be harmful for the vehicle to fall, forbidden regions could be defined at the locations of the various pits, thus preventing the operator from issuing commands that would result in the vehicle ending up in such a pit. Such illegal commands could easily be sent by an operator because of, for instance, delays in the teleoperation loop, poor telepresence or a number of other reasons. An example of a guiding virtual fixture could be when the vehicle must follow a certain trajectory. The operator is then able to control the progress along the "preferred direction" while motion along the "non-preferred direction" is constrained. With both forbidden regions and guiding virtual fixtures, the "stiffness", or its inverse the "compliance", of the fixture can be adjusted. If the compliance is high (low stiffness), the fixture is "soft". On the other hand, when the compliance is zero (maximum stiffness), the fixture is "hard". Virtual fixture control law. This section describes how a control law that implements virtual fixtures can be derived. It is assumed that the robot is a purely kinematic device with end-effector position formula_0 and end-effector orientation formula_1 expressed in the robot's base frame formula_2. The input control signal formula_3 to the robot is assumed to be a desired end-effector velocity formula_4. In a tele-operated system it is often useful to scale the input velocity from the operator, formula_5, before feeding it to the robot controller. If the input from the user is of another form, such as a force or a position, it must first be transformed into an input velocity, for example by scaling or differentiating. Thus the control signal formula_3 would be computed from the operator's input velocity formula_5 as: formula_6 If formula_7, there exists a one-to-one mapping between the operator and the slave robot. If the constant formula_8 is replaced by a diagonal matrix formula_9, it is possible to adjust the compliance independently for different dimensions of formula_10. 
For example, setting the first three elements on the diagonal of formula_9 to formula_8 and all other elements to zero would result in a system that only permits translational motion and not rotation. This would be an example of a hard virtual fixture that constrains the motion from formula_11 to formula_12. If the rest of the elements on the diagonal were set to a small value, instead of zero, the fixture would be soft, allowing some motion in the rotational directions. To express more general constraints, assume a time-varying matrix formula_13 that represents the preferred direction at time formula_14. Thus, if formula_15, the preferred direction is along a curve in formula_16. Likewise, formula_17 would give preferred directions that span a surface. From formula_18, two projection operators can be defined, the span and the kernel of the column space: formula_19 If formula_18 does not have full column rank, the span cannot be computed in this way; consequently, in practice the span is computed using the pseudo-inverse: formula_20 where formula_21 denotes the pseudo-inverse of formula_18. If the input velocity is split into two components as: formula_22 it is possible to rewrite the control law as: formula_23 Next, introduce a new compliance that affects only the non-preferred component of the velocity input and write the final control law as: formula_24 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
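To make the control law above concrete, here is a minimal Python sketch (using NumPy) of the final law formula_24, i.e. v = c([D] + c_tau <D>) v_op. It is only an illustration: the preferred-direction matrix, the compliances and the operator velocity below are assumed placeholder values, not parameters from the original system.

import numpy as np

def span_and_kernel(D):
    # Projection onto the column space of D (the "span") and onto its complement (the "kernel"),
    # computed with the pseudo-inverse so that a rank-deficient D is also handled.
    span = D @ np.linalg.pinv(D.T @ D) @ D.T
    kernel = np.eye(D.shape[0]) - span
    return span, kernel

def virtual_fixture_law(v_op, D, c=1.0, c_tau=0.1):
    # Final control law: v = c * ([D] + c_tau * <D>) @ v_op.
    # c_tau = 0 gives a hard guiding fixture, 0 < c_tau < 1 a soft one, c_tau = 1 removes the fixture.
    span, kernel = span_and_kernel(D)
    return c * (span + c_tau * kernel) @ v_op

# Example: a guiding fixture whose preferred direction is pure x-translation in the
# 6-DOF velocity space [dx, dy, dz, droll, dpitch, dyaw] (hypothetical operator input).
D = np.array([[1.0, 0, 0, 0, 0, 0]]).T                 # 6x1, so n = 1 (a curve in R^6)
v_op = np.array([0.3, 0.2, 0.0, 0.0, 0.0, 0.1])
print(virtual_fixture_law(v_op, D, c=1.0, c_tau=0.0))  # hard fixture: only the x-component survives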
[ { "math_id": 0, "text": "\\mathbf{p} = \\left[ x,y,z \\right]" }, { "math_id": 1, "text": "\\mathbf{r} = \\left[ r_\\textrm{x}, r_\\textrm{y}, r_\\textrm{z} \\right]" }, { "math_id": 2, "text": "F_\\textrm{r}" }, { "math_id": 3, "text": "\\mathbf{u}" }, { "math_id": 4, "text": "\\mathbf{v} = \\dot{\\mathbf{x}} = \\left[ \\dot{\\mathbf{p}}, \\dot{\\mathbf{r}} \\right]" }, { "math_id": 5, "text": "\\mathbf{v}_\\textrm{op}" }, { "math_id": 6, "text": "\\mathbf{u} = c \\cdot \\mathbf{v}_\\textrm{op}" }, { "math_id": 7, "text": "c=1" }, { "math_id": 8, "text": "c" }, { "math_id": 9, "text": "\\mathbf{C}" }, { "math_id": 10, "text": "\\dot{\\mathbf{x}}" }, { "math_id": 11, "text": "\\mathbf{x} \\in \\mathbb{R}^6" }, { "math_id": 12, "text": "\\mathbf{p} \\in \\mathbb{R}^3" }, { "math_id": 13, "text": "\\mathbf{D}(t) \\in \\mathbb{R}^{6 \\times n},~ n \\in [1..6]" }, { "math_id": 14, "text": "t" }, { "math_id": 15, "text": "n=1" }, { "math_id": 16, "text": "\\mathbb{R}^{6}" }, { "math_id": 17, "text": "n=2" }, { "math_id": 18, "text": "\\mathbf{D}" }, { "math_id": 19, "text": "\n\\begin{align}\n\\textrm{Span}(\\mathbf{D}) & \\equiv \\left[ \\mathbf{D} \\right] = \n\\mathbf{D}(\\mathbf{D}^T\\mathbf{D})^{-1}\\mathbf{D}^T \\\\\n\\textrm{Kernel}(\\mathbf{D}) & \\equiv \\langle \\mathbf{D} \\rangle = \\mathbf{I} - \\left[ \\mathbf{D} \\right]\n\\end{align}\n" }, { "math_id": 20, "text": "\n\\textrm{Span}(\\mathbf{D}) \\equiv \\left[ \\mathbf{D} \\right] = \\mathbf{D}(\\mathbf{D}^T\\mathbf{D})^{\\dagger}\\mathbf{D}^T\n" }, { "math_id": 21, "text": "\\mathbf{D}^\\dagger" }, { "math_id": 22, "text": "\\mathbf{v}_\\textrm{D} \\equiv \\left[ \\mathbf{D} \\right]\n\\mathbf{v}_\\textrm{op} \\textrm{~and~} \\mathbf{v}_\\tau \\equiv\n\\mathbf{v}_\\textrm{op} - \\mathbf{v}_\\textrm{D} = \\langle \\mathbf{D} \\rangle\n\\mathbf{v}_\\textrm{op}\n" }, { "math_id": 23, "text": "\\mathbf{v} = c \\cdot \\mathbf{v}_\\textrm{op} = c \\left( \\mathbf{v}_\\textrm{D} +\n\\mathbf{v}_\\tau \\right)\n" }, { "math_id": 24, "text": "\n\\mathbf{v} = c \\left( \\mathbf{v}_\\textrm{D} +\nc_\\tau \\cdot \\mathbf{v}_\\tau \\right) = \nc \\left( \\left[ \\mathbf{D} \\right] + c_\\tau \\langle \\mathbf{D} \\rangle \\right)\n\\mathbf{v}_\\textrm{op}\n" } ]
https://en.wikipedia.org/wiki?curid=9017838
9018311
Machmeter
Flight instrument A Machmeter is an aircraft pitot-static system flight instrument that shows the ratio of the true airspeed to the speed of sound, a dimensionless quantity called Mach number. This is shown on a Machmeter as a decimal fraction. An aircraft flying at the speed of sound is flying at a Mach number of one, expressed as "Mach 1". Use. As an aircraft in transonic flight approaches the speed of sound, it first reaches its critical Mach number, where air flowing over low-pressure areas of its surface locally reaches the speed of sound, forming shock waves. The indicated airspeed for this condition changes with ambient temperature, which in turn changes with altitude. Therefore, indicated airspeed is not entirely adequate to warn the pilot of impending problems. Mach number is more useful, and most high-speed aircraft are limited to a maximum operating Mach number, also known as MMO. For example, if the MMO is Mach 0.83, then at where the speed of sound under standard conditions is , the true airspeed at MMO is . The speed of sound increases with air temperature, so at Mach 0.83 at where the air is much warmer than at , the true airspeed at MMO would be . Operation. Modern electronic Machmeters use information from an air data computer system which makes calculations using inputs from a pitot-static system. Some older mechanical Machmeters use an altitude aneroid and an airspeed capsule which together convert pitot-static pressure into Mach number. The Machmeter suffers from instrument and position errors. Calibration. In subsonic flow the Machmeter can be calibrated according to: formula_0 where formula_1 is the Mach number, "qc" is the impact pressure (dynamic pressure) and formula_2 is the static pressure, assuming the ratio of specific heats is 1.4. When a shock wave forms across the pitot tube, the required formula is derived from the Rayleigh supersonic pitot equation and is solved iteratively: formula_3 where formula_4 is now the total pressure measured behind the normal shock. Note that the inputs required are total pressure and static pressure. Air temperature input is not required.
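As a rough illustration of the calibration formulas above, the following Python sketch computes Mach number from impact (or behind-shock total) pressure and static pressure, assuming a ratio of specific heats of 1.4 as in the text; the supersonic case solves the Rayleigh pitot relation by simple fixed-point iteration. The pressure values in the example are arbitrary, not taken from any particular flight condition.

def mach_subsonic(qc, p):
    # Subsonic calibration: M = sqrt(5 * ((qc/p + 1)**(2/7) - 1))
    return (5.0 * ((qc / p + 1.0) ** (2.0 / 7.0) - 1.0)) ** 0.5

def mach_supersonic(pt, p, iterations=30):
    # Rayleigh supersonic pitot equation, iterated:
    # M = 0.88128485 * sqrt((pt/p) * (1 - 1/(7*M**2))**2.5)
    M = 1.5  # initial guess above Mach 1
    for _ in range(iterations):
        M = 0.88128485 * ((pt / p) * (1.0 - 1.0 / (7.0 * M * M)) ** 2.5) ** 0.5
    return M

print(mach_subsonic(qc=30_000.0, p=70_000.0))     # subsonic example (pressures in Pa)
print(mach_supersonic(pt=160_000.0, p=50_000.0))  # supersonic example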
[ { "math_id": 0, "text": "\n{M}=\\sqrt{5\\left[\\left(\\frac{q_c}{p}+1\\right)^\\frac{2}{7}-1\\right]}\\,\nor, \n{M}=\\sqrt{5\\left[\\left(\\frac{p_t}{p}\\right)^\\frac{2}{7}-1\\right]}\\,\n" }, { "math_id": 1, "text": "\\ M\\," }, { "math_id": 2, "text": "\\ p" }, { "math_id": 3, "text": "{M}=0.88128485\\sqrt{\\frac{p_t}{p}\\left(1-\\frac{1}{[7M^2]}\\right)^\\frac{5}{2}}" }, { "math_id": 4, "text": "\\ p_t" } ]
https://en.wikipedia.org/wiki?curid=9018311
9018986
Mass segregation (astronomy)
Gravitational process, e.g. in star clusters In astronomy, dynamical mass segregation is the process by which heavier members of a gravitationally bound system, such as a star cluster, tend to move toward the center, while lighter members tend to move farther away from the center. Equipartition of kinetic energy. During a close encounter of two members of the cluster, the members exchange both energy and momentum. Although energy can be exchanged in either direction, there is a statistical tendency for the kinetic energy of the two members to equalize during an encounter; this statistical phenomenon is called equipartition, and is similar to the fact that the expected kinetic energies of the molecules of a gas are all the same at a given temperature. Since kinetic energy is proportional to mass times the square of the speed, equipartition requires the less massive members of a cluster to be moving faster. The more massive members will thus tend to sink into lower orbits (that is, orbits closer to the center of the cluster), while the less massive members will tend to rise to higher orbits. The time it takes for the kinetic energies of the cluster members to roughly equalize is called the relaxation time of the cluster. A relaxation time-scale assuming energy is exchanged through two-body interactions was approximated in the textbook by Binney &amp; Tremaine as formula_0 where formula_1 is the number of stars in the cluster and formula_2 is the typical time it takes for a star to cross the cluster. This is on the order of 100 million years for a typical globular cluster with radius 10 parsecs consisting of 100 thousand stars. The most massive stars in a cluster can segregate more rapidly than the less massive stars. This time-scale can be approximated using a toy model developed by Lyman Spitzer of a cluster where stars only have two possible masses (formula_3 and formula_4). In this case, the more massive stars (mass formula_3) will segregate in the time formula_5 Outward segregation of white dwarfs was observed in the globular cluster 47 Tucanae in an HST study of the region. Primordial mass segregation. Primordial mass segregation is a non-uniform distribution of masses present at the formation of a cluster. The argument that a star cluster is primordially mass segregated is typically based on a comparison of virialization timescales and the cluster's age. However, several dynamical mechanisms to accelerate virialization compared to two-body interactions have been examined. In star-forming regions, it is often observed that O-type stars are preferentially located in the center of a young cluster. Evaporation. After relaxation, the speed of some low-mass members can be greater than the escape velocity of the cluster, which results in these members being lost from the cluster. This process is called evaporation. (A similar phenomenon explains the loss of lighter gases from a planet, such as hydrogen and helium from the Earth—after equipartition, some molecules of sufficiently light gases at the top of the atmosphere will exceed the escape velocity of the planet and be lost.) Through evaporation, most open clusters eventually dissipate, as indicated by the fact that most existing open clusters are quite young. Globular clusters, being more tightly bound, appear to be more durable. In the Galaxy. The relaxation time of the Milky Way galaxy is approximately 10 trillion years, on the order of a thousand times the age of the galaxy itself. 
Thus, any observed mass segregation in our galaxy must be almost entirely primordial. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
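As a quick numerical check of the relaxation-time estimate given above, the sketch below evaluates formula_0 and the Spitzer two-mass segregation time formula_5 in Python. The cluster radius, typical stellar speed and masses are assumed illustrative values; the result scales linearly with the adopted crossing time, so it should only be read as an order-of-magnitude figure.

import math

def relaxation_time(n_stars, t_cross):
    # Binney & Tremaine two-body estimate: t_relax = N / (8 ln N) * t_cross
    return n_stars / (8.0 * math.log(n_stars)) * t_cross

def segregation_time(m_heavy, m_light, t_relax):
    # Spitzer toy model: stars of mass m_heavy segregate in (m_light / m_heavy) * t_relax
    return (m_light / m_heavy) * t_relax

pc_km = 3.086e13                             # one parsec in kilometres
t_cross_yr = (10 * pc_km / 10.0) / 3.156e7   # 10 pc radius / ~10 km/s, in years (~1 Myr)
t_relax_yr = relaxation_time(1e5, t_cross_yr)
print(f"t_cross ~ {t_cross_yr:.1e} yr, t_relax ~ {t_relax_yr:.1e} yr")
# A 2-solar-mass star among 0.5-solar-mass stars segregates about four times faster:
print(f"t_seg   ~ {segregation_time(2.0, 0.5, t_relax_yr):.1e} yr")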
[ { "math_id": 0, "text": "t_\\mathrm{relax}=\\frac {N}{8\\ln N}\\times t_\\mathrm{cross} \\ ," }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": " t_\\mathrm{cross}" }, { "math_id": 3, "text": "m_1" }, { "math_id": 4, "text": "m_2" }, { "math_id": 5, "text": "t_\\mathrm{m_1}=\\frac {m_2}{m_1}\\times t_\\mathrm{relax} \\ ." } ]
https://en.wikipedia.org/wiki?curid=9018986
902
Atom
Smallest unit of a chemical element Atoms are the basic particles of the chemical elements. An atom consists of a nucleus of protons and generally neutrons, surrounded by an electromagnetically bound swarm of electrons. The chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. Atoms with the same number of protons but a different number of neutrons are called isotopes of the same element. Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. Atoms are smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. They are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects. More than 99.94% of an atom's mass is in the nucleus. Protons have a positive electric charge and neutrons have no charge, so the nucleus is positively charged. The electrons are negatively charged, and this opposing charge is what binds them to the nucleus. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral as a whole. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation). The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to attach and detach from each other is responsible for most of the physical changes observed in nature. Chemistry is the science that studies these changes. History of atomic theory. In philosophy. The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word "atom" is derived from the ancient Greek word "atomos", which means "uncuttable". But this ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton found evidence that matter really is composed of discrete units, and so applied the word "atom" to those units. Dalton's law of multiple proportions. In the early 1800s, John Dalton compiled experimental data gathered by him and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in any group of chemical compounds which all contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This pattern suggested that each element combines with other elements in multiples of a basic unit of weight, with each element having a unit of unique weight. 
Dalton decided to call these units "atoms". For example, there are two types of tin oxide: one is a grey powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. Dalton concluded that in the grey oxide there is one atom of oxygen for every atom of tin, and in the white oxide there are two atoms of oxygen for every atom of tin (SnO and SnO2). Dalton also analyzed iron oxides. There is one type of iron oxide that is a black powder which is 78.1% iron and 21.9% oxygen; and there is another iron oxide that is a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. Dalton concluded that in these oxides, for every two atoms of iron, there are two or three atoms of oxygen respectively (Fe2O2 and Fe2O3). As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Discovery of the electron. In 1897, J. J. Thomson discovered that cathode rays are not a form of light but made of negatively charged particles because they can be deflected by electric and magnetic fields. He measured these particles to be at least a thousand times lighter than hydrogen (the lightest atom). He called these new particles "corpuscles", but they were later renamed "electrons" since these are the particles that carry electricity. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. Thomson explained that an electric current is the passing of electrons from one atom to the next, and when there was no current the electrons embedded themselves in the atoms. This in turn meant that atoms were not indivisible as scientists thought. The atom was composed of electrons whose negative charge was balanced out by some source of positive charge to create an electrically neutral atom. Ions, Thomson explained, must be atoms which have an excess or shortage of electrons. Discovery of the nucleus. The electrons in the atom logically had to be balanced out by a commensurate amount of positive charge, but Thomson had no idea where this positive charge came from, so he tentatively proposed that this positive charge was everywhere in the atom, the atom being in the shape of a sphere. Following from this, he imagined the balance of electrostatic forces would distribute the electrons throughout the sphere in a more or less even manner. Thomson's model is popularly known as the plum pudding model, though neither Thomson nor his colleagues used this analogy. Thomson's model was incomplete; it was unable to predict any other properties of the elements, such as emission spectra and valencies. It was soon rendered obsolete by the discovery of the atomic nucleus. 
Between 1908 and 1913, Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They did this to measure the scattering patterns of the alpha particles. They spotted a small number of alpha particles being deflected by angles greater than 90°. This shouldn't have been possible according to the Thomson model of the atom, whose charges were too diffuse to produce a sufficiently strong electric field. The deflections should have all been negligible. Rutherford proposed that the positive charge of the atom along with most of the atom's mass is concentrated in a tiny nucleus at the center of the atom. Only such an intense concentration of positive charge, anchored by its high mass and separated from the negative charge, could produce an electric field that could deflect the alpha particles so strongly. Bohr model. A problem in classical mechanics is that an accelerating charged particle radiates electromagnetic radiation, causing the particle to lose kinetic energy. Circular motion counts as acceleration, which means that an electron orbiting a central charge should spiral down into the nucleus as it loses speed. In 1913, the physicist Niels Bohr proposed a new model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable and why elements absorb and emit electromagnetic radiation in discrete spectra. Bohr's model could only predict the emission spectra of hydrogen, not atoms with more than one electron. Discovery of protons and neutrons. Back in 1815, William Prout observed that the atomic weights of many elements were multiples of hydrogen's atomic weight, which is true for all of them if one takes isotopes into account. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion is equal to the negative charge of an electron. In 1913, Henry Moseley discovered that the frequencies of X-ray emissions from an excited atom were a mathematical function of its atomic number and hydrogen's nuclear charge. In 1917 Rutherford bombarded nitrogen gas with alpha particles and detected hydrogen ions being emitted from the gas, and concluded that they were produced by alpha particles hitting and splitting the nitrogen atoms. These observations led Rutherford to conclude that the hydrogen nucleus is a singular particle with a positive charge equal to the electron's negative charge. He named this particle "proton" in 1920. An element's atomic number, which is defined as the element's position on the periodic table, is also the number of protons it has in its nucleus. The atomic weight of each element is higher than its proton number, so Rutherford hypothesized that the surplus weight was carried by unknown particles with no electric charge and a mass equal to that of the proton. In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. 
Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick now claimed these particles as Rutherford's neutrons. The current consensus model. In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed that all particles behave like waves to some extent, and in 1926 Erwin Schroedinger used this idea to develop the Schroedinger equation, which describes electrons as three-dimensional waveforms rather than points in space. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be found. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Structure. Subatomic particles. Though the word "atom" originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is the least massive of these particles by four orders of magnitude at , with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass of . The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a mass of . Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. 
Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of —although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with a charge of +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus. All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to formula_0 femtometres, where formula_1 is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10^5 fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm, this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits "identical" fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. 
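As a small numerical aside on the nuclear radius scaling mentioned above, the sketch below assumes the common empirical form r ≈ r0 · A^(1/3) femtometres, with r0 taken here as roughly 1.07 fm (an assumed constant, not a value quoted in this text), and compares the result with an atomic radius of order 10^5 fm.

def nuclear_radius_fm(nucleon_count, r0=1.07):
    # Empirical nuclear radius estimate in femtometres: r ~ r0 * A**(1/3)
    return r0 * nucleon_count ** (1.0 / 3.0)

ATOMIC_RADIUS_FM = 1.0e5  # order-of-magnitude atomic radius (~100 pm), as in the text

for A in (1, 12, 56, 208):  # hydrogen-1, carbon-12, iron-56, lead-208
    r = nuclear_radius_fm(A)
    print(f"A={A:3d}: nucleus ~ {r:5.2f} fm, atom/nucleus size ratio ~ {ATOMIC_RADIUS_FM / r:,.0f}")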
Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high-energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc^2, where "m" is the mass loss and "c" is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon begins to decrease. That means that a fusion process producing a nucleus that has an atomic number higher than about 26, and a mass number higher than about 60, is an endothermic process. Thus, more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud. The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exists around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. 
For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 "million" eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. Properties. Nuclear properties. By definition, any two atoms with an identical number of "protons" in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of "neutrons" are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 251 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 161 (bringing the total to 251) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 35 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides is known as primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.1 stable isotopes per element. Twenty-six "monoisotopic elements" have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 251 known stable nuclides, only four have both an odd number of protons "and" an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10, and nitrogen-14. (Tantalum-180m is odd-odd and observationally stable, but is predicted to decay with a very long half-life.) Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138, and lutetium-176. 
Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Mass. The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately . Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of . As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about ). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size. Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. 
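To illustrate the mole arithmetic described in the Mass subsection above, here is a small Python sketch that converts a macroscopic mass into an atom count using the Avogadro constant (about 6.022×10^23 per mole); the sample masses are arbitrary examples.

AVOGADRO = 6.022e23  # atoms per mole (approximate)

def atom_count(mass_grams, molar_mass_g_per_mol):
    # Number of atoms in a sample: (mass / molar mass) * Avogadro's number
    return mass_grams / molar_mass_g_per_mol * AVOGADRO

# One mole of carbon-12 (12 g) contains Avogadro's number of atoms by definition.
print(f"{atom_count(12.0, 12.0):.3e} atoms in 12 g of carbon-12")
# One gram of hydrogen-1 (molar mass ~1.008 g/mol) contains nearly the same number.
print(f"{atom_count(1.0, 1.008):.3e} atoms in 1 g of hydrogen-1")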
To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond with a mass of contains about 10 sextillion (10^22) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Radioactive decay. Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are alpha decay, beta decay, and gamma decay. Other, rarer types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission, which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Magnetic moment. Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 ħ, or "spin-&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Because electrons obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. 
When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. Energy levels. The potential energy of an electron in an atom is negative relative to its value when the distance from the nucleus goes to infinity; its dependence on the electron's position reaches the minimum inside the nucleus, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. stationary state, while an electron transition to a higher level results in an excited state. The electron's energy increases along with "n" (the principal quantum number) because the (average) distance to the nucleus increases. Dependence of the energy on ℓ is caused not by the electrostatic potential of the nucleus, but by interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model, which can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a direction that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) 
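As a concrete example of the discrete transition energies discussed above, the following Python sketch uses the hydrogen energy levels E_n ≈ −13.6 eV / n^2 (the 13.6 eV ionization energy is quoted earlier in the article) to compute a few emission-line photon energies and wavelengths; the constants are rounded values, so the output is approximate.

H_IONIZATION_EV = 13.6  # hydrogen ground-state binding energy in eV, from the text
HC_EV_NM = 1239.84      # h*c in eV*nm (rounded), so wavelength_nm = HC_EV_NM / E_photon

def photon_energy_ev(n_upper, n_lower):
    # Energy of the photon emitted when a hydrogen electron drops from n_upper to n_lower
    return H_IONIZATION_EV * (1.0 / n_lower**2 - 1.0 / n_upper**2)

for n in (3, 4, 5):  # the first few Balmer lines (transitions down to n = 2)
    e = photon_energy_ev(n, 2)
    print(f"n={n} -> 2: {e:.2f} eV, wavelength ~ {HC_EV_NM / e:.0f} nm")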
Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valence and bonding behavior. Valency is the combining power of an element. It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one-electron more than a filled shell, and others that are one-electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. States. Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. 
Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. Identification. While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves to and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then periodicities can be observed that correspond to individual atoms. The method alone is not chemically specific, and cannot identify the atomic species present at the surface. Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. 
Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Origin and current state. Baryonic matter forms about 4% of the total energy density of the observable universe, with an average density of about 0.25 particles/m^3 (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10^5 to 10^9 atoms/m^3. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10^3 atoms/m^3. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. The high temperature inside stars makes most "atoms" fully ionized, that is, separates "all" electrons from the nuclei. In stellar remnants—with the exception of their surface layers—immense pressure makes electron shells impossible. Formation. Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Earth. Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. 
Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. Rare and theoretical forms. Superheavy elements. All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with "Z" &gt; 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Exotic matter. Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics. Notes. 
&lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "1.07 \\sqrt[3]{A}" }, { "math_id": 1, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=902
9021355
Null function
Type of subroutine in computer science In computer science, a null function (or null operator) is a subroutine that leaves the program state unchanged. When it is part of the instruction set of a processor, it is called a NOP or NOOP (No OPeration). Mathematically, a (computer) function formula_0 is null if and only if its execution leaves the program state formula_1 unchanged. That is, a null function is an identity function whose domain and codomain are both the state space formula_2 of the program, and for which: formula_3 for all elements formula_4. Less rigorous definitions may also be encountered. For example, a function may take a single operand, transform it into a new data type, and return the result. While such usages bear a strong visual resemblance to identity functions, they create or alter a binary data value and thus change the program state. From a software maintainability perspective it is better to identify such "minor" alterations of state explicitly, since calling them null functions provides future maintainers of the code with no insight into their actual purposes. Uses. Null functions have several uses. During software development, null functions with the same names and type signatures as planned functions are often used as stubs—that is, as non-functional placeholders that allow the incomplete body of code to be compiled and tested prior to completion of all planned features. Null functions, particularly the NOP variety, are also used to provide delays of indeterminate length within wait loops. This is a common strategy in dedicated device controllers that must wait for an external input and have no other tasks to perform while they are waiting. Such wait loops are also used in software applications on larger multiprocessing computer systems. However, for multiprocessing systems a better approach is to use operating system functions that let other processes use the CPU during the waiting period. A third use of null functions is as the definition of a program feature that, if created inadvertently, is almost always deleterious. Unintended null functions can arise during the development of complex programs, and like dead code, such occurrences indicate serious flaws in program structures. A null function or method is often used as the default behavior of a revectorable function or overrideable method in an object framework. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
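As a concrete illustration of the definition above, the following short Python sketch shows a strict null function (the identity on a toy program state) alongside a NOP-style stub; the names ProgramState, null_function, and planned_feature_stub are illustrative only and are not taken from any particular codebase.

from dataclasses import dataclass

@dataclass
class ProgramState:
    """A toy stand-in for the program's state space S."""
    counter: int = 0

def null_function(state: ProgramState) -> ProgramState:
    """Identity on the state space: f(s) = s for every s in S."""
    return state

def planned_feature_stub(*args, **kwargs) -> None:
    """A stub with the name and signature of a planned function; it can be
    called during development but leaves the program state untouched."""
    pass

s = ProgramState(counter=41)
assert null_function(s) is s               # the state is returned unchanged
planned_feature_stub("ignored", log=True)  # a safe no-op call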
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "s" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "f(s)=s" }, { "math_id": 4, "text": "s \\in S" } ]
https://en.wikipedia.org/wiki?curid=9021355
9023
Discounted cash flow
Method of valuing a project, company, or asset The discounted cash flow (DCF) analysis, in financial analysis, is a method used to value a security, project, company, or asset, that incorporates the time value of money. Discounted cash flow analysis is widely used in investment finance, real estate development, corporate financial management, and patent valuation. Used in industry as early as the 1700s or 1800s, it was widely discussed in financial economics in the 1960s, and U.S. courts began employing the concept in the 1980s and 1990s. Application. In discounted cash flow analysis, all future cash flows are estimated and discounted by using the cost of capital to give their present values (PVs). The sum of all future cash flows, both incoming and outgoing, is the net present value (NPV), which is taken as the value of the cash flows in question. For the mechanics, see valuation using discounted cash flows, which includes modifications typical for startups, private equity and venture capital, corporate finance "projects", and mergers and acquisitions. Using DCF analysis to compute the NPV takes as input cash flows and a discount rate and gives as output a present value. The opposite process takes cash flows and a price (present value) as inputs, and provides as output the discount rate; this is used in bond markets to obtain the yield. History. Discounted cash flow calculations have been used in some form since money was first lent at interest in ancient times. Studies of ancient Egyptian and Babylonian mathematics suggest that they used techniques similar to discounting future cash flows. Modern discounted cash flow analysis has been used since at least the early 1700s in the UK coal industry. Discounted cash flow valuation is differentiated from the accounting book value, which is based on the amount paid for the asset. Following the stock market crash of 1929, discounted cash flow analysis gained popularity as a valuation method for stocks. Irving Fisher in his 1930 book "The Theory of Interest" and John Burr Williams's 1938 text "The Theory of Investment Value" first formally expressed the DCF method in modern economic terms. Mathematics. Discounted cash flows. The discounted cash flow formula is derived from the present value formula for calculating the time value of money formula_0 and compounding returns: formula_1. Thus the discounted present value (for one cash flow in one future period) is expressed as: formula_2 where "DPV" is the discounted present value of the future cash flow ("FV"), "r" is the interest rate or discount rate, and "n" is the number of periods (typically years) before the cash flow occurs. Where multiple cash flows in multiple time periods are discounted, it is necessary to sum them as follows: formula_3 for each future cash flow ("FV") at any time period ("t") in years from the present time, summed over all time periods. The sum can then be used as a net present value figure. If the amount to be paid at time 0 (now) for all the future cash flows is known, then that amount can be substituted for "DPV" and the equation can be solved for "r", that is the internal rate of return. All the above assumes that the interest rate remains constant throughout the whole period. If the cash flow stream is assumed to continue indefinitely, the finite forecast is usually combined with the assumption of constant cash flow growth beyond the discrete projection period. The total value of such a cash flow stream is the sum of the finite discounted cash flow forecast and the terminal value. Continuous cash flows. 
For continuous cash flows, the summation in the above formula is replaced by an integration: formula_4 where formula_5 is now the "rate" of cash flow, and formula_6. Discount rate. The act of discounting future cash flows asks "how much money would have to be invested currently, at a given rate of return, to yield the forecast cash flow, at its future date?" In other words, discounting returns the present value of future cash flows, where the rate used is the cost of capital that "appropriately" reflects the risk, and timing, of the cash flows. This "required return" thus incorporates: For the latter, various models have been developed, where the premium is (typically) calculated as a function of the asset's performance with reference to some macroeconomic variable - for example, the CAPM compares the asset's historical returns to the "overall market's"; see and . An alternate, although less common approach, is to apply a "fundamental valuation" method, such as the "T-model", which instead relies on accounting information. Other methods of discounting, such as hyperbolic discounting, are studied in academia and said to reflect intuitive decision-making, but are not generally used in industry. In this context the above is referred to as "exponential discounting". The terminology "expected return", although formally the mathematical expected value, is often used interchangeably with the above, where "expected" means "required" or "demanded" by investors. The method may also be modified by industry, for example various formulae have been proposed when choosing a discount rate in a healthcare setting; similarly in a mining setting, where risk-characteristics can differ (dramatically) by property. Methods of appraisal of a company or project. For these valuation purposes, a number of different DCF methods are distinguished today, some of which are outlined below. The details are likely to vary depending on the capital structure of the company. However the assumptions used in the appraisal (especially the equity discount rate and the projection of the cash flows to be achieved) are likely to be at least as important as the precise model used. Both the income stream selected and the associated cost of capital model determine the valuation result obtained with each method. (This is one reason these valuation methods are formally referred to as the Discounted Future Economic Income methods.) The below is offered as a high-level treatment; for the components / steps of business modeling here, see . Shortcomings. The following difficulties are identified with the application of DCF in valuation: Integrated future value. To address the lack of integration of the short and long term importance, value and risks associated with natural and social capital into the traditional DCF calculation, companies are valuing their environmental, social and governance (ESG) performance through an Integrated Management approach to reporting, that expands DCF or Net Present Value to Integrated Future Value (IntFV). This allows companies to value their investments not just for their financial return but also the long term environmental and social return of their investments. By highlighting environmental, social and governance performance in reporting, decision makers have the opportunity to identify new areas for value creation that are not revealed through traditional financial reporting. 
As an example, the social cost of carbon is one value that can be incorporated into Integrated Future Value calculations to encompass the damage to society from greenhouse gas emissions that result from an investment. This is an integrated approach to reporting that supports Integrated Bottom Line (IBL) decision making, which takes triple bottom line (TBL) a step further and combines financial, environmental and social performance reporting into one balance sheet. This approach provides decision makers with the insight to identify opportunities for value creation that promote growth and change within an organization. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
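As a numerical illustration of the discounting formulas given earlier, the following short Python sketch computes a discounted present value for a stream of yearly cash flows; the cash-flow figures and the 10% discount rate are made-up inputs chosen only for the example.

def discounted_present_value(cash_flows, r):
    """Sum FV_t / (1 + r)**t over the periods t = 0, 1, 2, ..."""
    return sum(fv / (1.0 + r) ** t for t, fv in enumerate(cash_flows))

# Hypothetical project: pay 1000 now, then receive 400 at the end of each of the next three years.
flows = [-1000.0, 400.0, 400.0, 400.0]
npv = discounted_present_value(flows, r=0.10)
print(round(npv, 2))  # -5.26: at a 10% discount rate the project is (marginally) not worthwhile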
[ { "math_id": 0, "text": "DCF = \\frac{CF_1}{(1+r)^1} + \\frac{CF_2}{(1+r)^2} + \\dotsb +\n\\frac{CF_n}{(1+r)^n}" }, { "math_id": 1, "text": "FV = DCF \\cdot (1+r)^n" }, { "math_id": 2, "text": "DPV = \\frac{FV}{(1+r)^n}" }, { "math_id": 3, "text": "DPV = \\sum_{t=0}^{N} \\frac{FV_t}{(1+r)^{t}}" }, { "math_id": 4, "text": "DPV= \\int_0^T FV(t) \\, e^{-\\lambda t} dt = \\int_0^T \\frac{FV(t)}{(1 + r)^t} \\, dt\\,," }, { "math_id": 5, "text": "FV(t)" }, { "math_id": 6, "text": "\\lambda = \\ln(1+r)" } ]
https://en.wikipedia.org/wiki?curid=9023
9023027
Exponential polynomial
In mathematics, exponential polynomials are functions on fields, rings, or abelian groups that take the form of polynomials in a variable and an exponential function. Definition. In fields. An exponential polynomial generally has both a variable "x" and some kind of exponential function "E"("x"). In the complex numbers there is already a canonical exponential function, the function that maps "x" to "e""x". In this setting the term exponential polynomial is often used to mean polynomials of the form "P"("x", "e""x") where "P" ∈ C["x", "y"] is a polynomial in two variables. There is nothing particularly special about C here; exponential polynomials may also refer to such a polynomial on any exponential field or exponential ring with its exponential function taking the place of "e""x" above. Similarly, there is no reason to have one variable, and an exponential polynomial in "n" variables would be of the form "P"("x"1, ..., "x""n", "e""x"1, ..., "e""x""n"), where "P" is a polynomial in 2"n" variables. For formal exponential polynomials over a field "K" we proceed as follows. Let "W" be a finitely generated Z-submodule of "K" and consider finite sums of the form formula_0 where the "f""i" are polynomials in "K"["X"] and the exp("w""i" "X") are formal symbols indexed by "w""i" in "W" subject to exp("u" + "v") = exp("u") exp("v"). In abelian groups. A more general framework where the term 'exponential polynomial' may be found is that of exponential functions on abelian groups. Similarly to how exponential functions on exponential fields are defined, given a topological abelian group "G" a homomorphism from "G" to the additive group of the complex numbers is called an additive function, and a homomorphism to the multiplicative group of nonzero complex numbers is called an exponential function, or simply an exponential. A product of additive functions and exponentials is called an exponential monomial, and a linear combination of these is then an exponential polynomial on "G". Properties. Ritt's theorem states that the analogues of unique factorization and the factor theorem hold for the ring of exponential polynomials. Applications. Exponential polynomials on R and C often appear in transcendental number theory, where they appear as auxiliary functions in proofs involving the exponential function. They also act as a link between model theory and analytic geometry. If one defines an exponential variety to be the set of points in R"n" where some finite collection of exponential polynomials vanish, then results like Khovanskiǐ's theorem in differential geometry and Wilkie's theorem in model theory show that these varieties are well-behaved in the sense that the collection of such varieties is stable under the various set-theoretic operations as long as one allows the inclusion of the image under projections of higher-dimensional exponential varieties. Indeed, the two aforementioned theorems imply that the set of all exponential varieties forms an o-minimal structure over R. Exponential polynomials also appear in the characteristic equation associated with linear delay differential equations. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
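As a small concrete illustration of the one-variable complex case, the sketch below evaluates an exponential polynomial of the form "P"("x", "e""x") in plain Python; the particular polynomial "P" is an arbitrary choice made for the example.

import math

def P(x, y):
    """An arbitrary polynomial in two variables: P(x, y) = x**2 * y + 3*x - y**2."""
    return x**2 * y + 3 * x - y**2

def exponential_polynomial(x):
    """The exponential polynomial obtained by substituting y = e**x into P."""
    return P(x, math.exp(x))

print(round(exponential_polynomial(1.0), 3))  # e + 3 - e**2, approximately -1.671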
[ { "math_id": 0, "text": "\\sum_{i=1}^{m} f_i(X) \\exp(w_i X) \\ , " } ]
https://en.wikipedia.org/wiki?curid=9023027
9023795
General Relativity (book)
1984 graduate textbook by Robert M. Wald General Relativity is a graduate textbook and reference on Albert Einstein's general theory of relativity written by the gravitational physicist Robert Wald. Overview. First published by the University of Chicago Press in 1984, the book, a tome of almost 500 pages, covers many aspects of the general theory of relativity. It is divided into two parts. Part I covers the fundamentals of the subject and Part II the more advanced topics such as causal structure, and quantum effects. The book uses the abstract index notation for tensors. It treats spinors, the variational-principle formulation, the initial-value formulation, (exact) gravitational waves, singularities, Penrose diagrams, Hawking radiation, and black-hole thermodynamics. It is aimed at beginning graduate students and researchers. To this end, most of the materials in Part I is geared towards an introductory course on the subject while Part II covers a wide range of advanced topics for a second term or further study. The essential mathematical methods for the formulation of general relativity are presented in Chapters 2 and 3 while more advanced techniques are discussed in Appendices A to C. Wald believes that this is the best way forward because putting all the mathematical techniques at the beginning of the book would prove to be a major obstruction for students while developing these mathematical tools as they get used would mean they are too scattered to be useful. While the Hamiltonian formalism is often presented in conjunction with the initial-value formulation, Wald's coverage of the latter is independent of the former, which is thus relegated to the appendix, alongside the Lagrangian formalism. This book uses the formula_0 sign convention for reasons of technical convenience. However, there is one important exception. In Chapter 13 – and "only" in Chapter 13 –, the sign convention is switched to formula_1 because it is easier to treat spinors this way. Moreover, this is the most common sign convention used in the literature. Most of the book uses geometrized units, meaning the fundamental natural constants formula_2 (Newton's gravitational constant) and formula_3 (the speed of light in vacuum) are set equal to one, except when predictions that can be tested are made. Assessment. According to Daniel Finley, a professor at the University of New Mexico, this textbook offers good physics intuition. However, the author did not use the most modern mathematical methods available, and his treatment of cosmology is now outdated. Finley believes that the abstract index notation is difficult to learn, though convenient for those who have mastered it. Theoretical physicist James W. York wrote that "General Relativity" is a sophisticated yet concise book on the subject that should be appealing to the mathematically inclined, as a high level of rigor is maintained throughout the book. However, he believed the material on linearized gravity is too short, and recommended "Gravitation" by Charles Misner, Kip Thorne, and John Archibald Wheeler, and "Gravitation and Cosmology" by Steven Weinberg as supplements. Hans C. Ohanian, who taught and researched gravitation at the Rensselaer Polytechnic Institute, opined that "General Relativity" provides a modern introduction to the subject with emphasis on tensor and topological methods and offers some "sharp insights." However, its quality is very variable. 
Topics such as geodesic motion in the Schwarzschild metric, the Kruskal extension, and energy extraction from black holes are handled well, while empirical tests of Einstein's theory are barely touched on and the treatment of advanced topics, including cosmology, is just too brief to be useful to students. Due to its heavy use of higher mathematics, it may not be suitable for an introductory course. Lee Smolin argued that "General Relativity" bridges the gap between the presentation of the material in older textbooks and the literature. For example, while the early pioneers of the subject, including Einstein himself, employed coordinate-based methods, researchers since the mid-1960s have switched to coordinate-free formulations, on which Wald's text is entirely based. Its style is uniformly clear and economical, if too brief at times. Topics that deserve more attention include gravitational radiation and cosmology. However, this book can be supplemented by those by Misner, Thorne, and Wheeler, and by Weinberg. Smolin was teaching a course on general relativity to undergraduates as well as graduate students at Yale University using this book and felt satisfied with the results. He also found it useful as a reference to refresh his memory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "-+++" }, { "math_id": 1, "text": "+---" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=9023795
9025098
Hermite normal form
In linear algebra, the Hermite normal form is an analogue of reduced echelon form for matrices over the integers Z. Just as reduced echelon form can be used to solve problems about the solution to the linear system Ax=b where x is in R"n", the Hermite normal form can solve problems about the solution to the linear system Ax=b where this time x is restricted to have integer coordinates only. Other applications of the Hermite normal form include integer programming, cryptography, and abstract algebra. Definition. Various authors may prefer to talk about Hermite normal form in either row-style or column-style. They are essentially the same up to transposition. Row-style Hermite normal form. An m-by-n matrix A with integer entries has a (row) Hermite normal form H if there is a square unimodular matrix U where H=UA and H has the following restrictions: H is upper triangular (and any rows consisting entirely of zeros are located below the nonzero rows); the leading coefficient (the pivot) of a nonzero row is always strictly to the right of the pivot of the row above it, and is positive; and the entries above a pivot are nonnegative and strictly smaller than the pivot. The third condition is not standard among authors, for example some sources force non-pivots to be nonpositive or place no sign restriction on them. However, these definitions are equivalent by using a different unimodular matrix U. A unimodular matrix is a square invertible integer matrix whose determinant is 1 or −1. Column-style Hermite normal form. An "m"-by-"n" matrix A with integer entries has a (column) Hermite normal form H if there is a square unimodular matrix U where H=AU and H has the following restrictions: H is lower triangular (and any columns consisting entirely of zeros are located to the right of the nonzero columns); the pivot (the topmost nonzero entry) of a nonzero column is always strictly below the pivot of the column to its left, and is positive; and the entries to the left of a pivot are nonnegative and strictly smaller than the pivot. Note that the row-style definition has a unimodular matrix U multiplying A on the left (meaning U is acting on the rows of A), while the column-style definition has the unimodular matrix action on the columns of A. The two definitions of Hermite normal forms are simply transposes of each other. Existence and uniqueness of the Hermite normal form. Every full row rank "m"-by-"n" matrix A with integer entries has a unique "m"-by-"n" matrix H in Hermite normal form, such that H=UA for some square unimodular matrix U. Examples. In the examples below, H is the Hermite normal form of the matrix A, and U is a unimodular matrix such that "UA" = "H". formula_0 formula_1 If A has only one row then either "H" = "A" or "H" = −"A", depending on whether the single row of A has a positive or negative leading coefficient. Algorithms. There are many algorithms for computing the Hermite normal form, dating back to 1851. However, an algorithm that computes the Hermite normal form in strongly polynomial time was first developed only in 1979; that is, the number of steps to compute the Hermite normal form is bounded above by a polynomial in the dimensions of the input matrix, and the space used by the algorithm (intermediate numbers) is bounded by a polynomial in the binary encoding size of the numbers in the input matrix. One class of algorithms is based on Gaussian elimination in that special elementary matrices are repeatedly used. The LLL algorithm can also be used to efficiently compute the Hermite normal form. Applications. Lattice calculations. A typical lattice in R"n" has the form formula_2 where the a"i" are in R"n". If the "columns" of a matrix A are the a"i", the lattice can be associated with the columns of a matrix, and A is said to be a basis of L. Because the Hermite normal form is unique, it can be used to answer many questions about two lattice descriptions. For what follows, formula_3 denotes the lattice generated by the columns of A. Because the basis is in the columns of the matrix A, the column-style Hermite normal form must be used. 
Given two bases for a lattice, A and A', the equivalence problem is to decide if formula_4 This can be done by checking whether the column-style Hermite normal forms of A and A' are the same up to the addition of zero columns. This strategy is also useful for deciding if a lattice is a subset (formula_5 if and only if formula_6), deciding if a vector v is in a lattice (formula_7 if and only if formula_8), and for other calculations. Integer solutions to linear systems. The linear system Ax = b has an integer solution x if and only if the system Hy = b has an integer solution y, where y = U^−1x and H is the column-style Hermite normal form of A. Checking whether Hy = b has an integer solution is easier than checking Ax = b directly because the matrix H is triangular. Implementations. Many mathematical software packages can compute the Hermite normal form. Over an arbitrary Dedekind domain. The Hermite normal form can be defined when Z is replaced by an arbitrary Dedekind domain (for instance, any principal ideal domain). For instance, in control theory it can be useful to consider the Hermite normal form for the ring of polynomials F["x"] over a given field F. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
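The first example above can be checked numerically. The following short Python sketch, which assumes numpy is available, verifies that the matrix U is unimodular and that UA reproduces H.

import numpy as np

A = np.array([[3, 3, 1, 4], [0, 1, 0, 0], [0, 0, 19, 16], [0, 0, 0, 3]])
H = np.array([[3, 0, 1, 1], [0, 1, 0, 0], [0, 0, 19, 1], [0, 0, 0, 3]])
U = np.array([[1, -3, 0, -1], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]])

# U is unimodular: a square integer matrix with determinant +1 or -1.
assert round(np.linalg.det(U)) in (1, -1)

# H is the row-style Hermite normal form of A, obtained as H = UA.
assert np.array_equal(U @ A, H)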
[ { "math_id": 0, "text": "\nA=\\begin{pmatrix}\n3 & 3 & 1 & 4 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 19 & 16 \\\\\n0 & 0 & 0 & 3\n\\end{pmatrix}\n\\qquad\nH=\\begin{pmatrix}\n3 & 0 & 1 & 1\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 &19 & 1\\\\\n0 & 0 & 0 & 3\n\\end{pmatrix}\n\\qquad\nU = \\left(\\begin{array}{rrrr}\n1 & -3 & 0 & -1 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & -5 \\\\\n0 & 0 & 0 & 1\n\\end{array}\\right)\n" }, { "math_id": 1, "text": "\nA = \\begin{pmatrix}\n2 & 3 & 6 & 2 \\\\\n5 & 6 & 1 & 6 \\\\\n8 & 3 & 1 & 1\n\\end{pmatrix}\n\\qquad\nH = \\left(\\begin{array}{rrrr}\n1 & 0 & 50 & -11 \\\\\n0 & 3 & 28 & -2 \\\\\n0 & 0 & 61 & -13\n\\end{array}\\right)\n\\qquad\nU = \\left(\\begin{array}{rrr}\n9 & -5 & 1 \\\\\n5 & -2 & 0 \\\\\n11 & -6 & 1\n\\end{array}\\right)\n" }, { "math_id": 2, "text": "\nL = \\left\\{\\left. \\sum_{i=1}^n \\alpha_i \\mathbf a_i \\; \\right\\vert \\; \\alpha_i \\in{\\textbf Z} \\right\\}\n" }, { "math_id": 3, "text": "L_A" }, { "math_id": 4, "text": "L_{A} = L_{A'}." }, { "math_id": 5, "text": "L_{A} \\subseteq L_{A'}" }, { "math_id": 6, "text": "L_{[A \\mid A']} = L_{A'}" }, { "math_id": 7, "text": "v \\in L_{A}" }, { "math_id": 8, "text": "L_{[v \\mid A]} = L_A" } ]
https://en.wikipedia.org/wiki?curid=9025098
902592
Handle decomposition
In mathematics, a handle decomposition of an "m"-manifold "M" is a union formula_0 where each formula_1 is obtained from formula_2 by the attaching of formula_3-handles. A handle decomposition is to a manifold what a CW-decomposition is to a topological space—in many regards the purpose of a handle decomposition is to have a language analogous to CW-complexes, but adapted to the world of smooth manifolds. Thus an "i"-handle is the smooth analogue of an "i"-cell. Handle decompositions of manifolds arise naturally via Morse theory. The modification of handle structures is closely linked to Cerf theory. Motivation. Consider the standard CW-decomposition of the "n"-sphere, with one zero cell and a single "n"-cell. From the point of view of smooth manifolds, this is a degenerate decomposition of the sphere, as there is no natural way to see the smooth structure of formula_4 from the eyes of this decomposition—in particular the smooth structure near the "0"-cell depends on the behavior of the characteristic map formula_5 in a neighbourhood of formula_6. The problem with CW-decompositions is that the attaching maps for cells do not live in the world of smooth maps between manifolds. The germinal insight to correct this defect is the tubular neighbourhood theorem. Given a point "p" in a manifold "M", its closed tubular neighbourhood formula_7 is diffeomorphic to formula_8, thus we have decomposed "M" into the disjoint union of formula_7 and formula_9 glued along their common boundary. The vital issue here is that the gluing map is a diffeomorphism. Similarly, take a smooth embedded arc in formula_9, its tubular neighbourhood is diffeomorphic to formula_10. This allows us to write formula_11 as the union of three manifolds, glued along parts of their boundaries: 1) formula_8 2) formula_10 and 3) the complement of the open tubular neighbourhood of the arc in formula_9. Notice all the gluing maps are smooth maps—in particular when we glue formula_10 to formula_8 the equivalence relation is generated by the embedding of formula_12 in formula_13, which is smooth by the tubular neighbourhood theorem. Handle decompositions are an invention of Stephen Smale. In his original formulation, the process of attaching a "j"-handle to an "m"-manifold "M" assumes that one has a smooth embedding of formula_14. Let formula_15. The manifold formula_16 (in words, "M" union a "j"-handle along "f" ) refers to the disjoint union of formula_11 and formula_17 with the identification of formula_18 with its image in formula_19, i.e., formula_20 where the equivalence relation formula_21 is generated by formula_22 for all formula_23. One says a manifold "N" is obtained from "M" by attaching "j"-handles if the union of "M" with finitely many "j"-handles is diffeomorphic to "N". The definition of a handle decomposition is then as in the introduction. Thus, a manifold has a handle decomposition with only "0"-handles if it is diffeomorphic to a disjoint union of balls. A connected manifold containing handles of only two types (i.e.: 0-handles and "j"-handles for some fixed "j") is called a handlebody. Terminology. When forming "M" union a "j"-handle formula_17 formula_20 formula_24 is known as the attaching sphere. formula_25 is sometimes called the framing of the attaching sphere, since it gives trivialization of its normal bundle. formula_26 is the belt sphere of the handle formula_17 in formula_27. A manifold obtained by attaching "g" "k"-handles to the disc formula_8 is an "(m,k)"-handlebody of genus "g" . 
Cobordism presentations. A handle presentation of a cobordism consists of a cobordism "W" where formula_28 and an ascending union formula_29 where "M" is m-dimensional, "W" is "m+1"-dimensional, formula_30 is diffeomorphic to formula_31 and formula_32 is obtained from formula_33 by the attachment of "i"-handles. Whereas handle decompositions are the analogue for manifolds what cell decompositions are to topological spaces, handle presentations of cobordisms are to manifolds with boundary what relative cell decompositions are for pairs of spaces. Morse theoretic viewpoint. Given a Morse function formula_34 on a compact boundaryless manifold "M", such that the critical points formula_35 of "f" satisfy formula_36, and provided formula_37 then for all "j", formula_38 is diffeomorphic to formula_39 where "I"("j") is the index of the critical point formula_40. The "index" "I(j)" refers to the dimension of the maximal subspace of the tangent space formula_41 where the Hessian is negative definite. Provided the indices satisfy formula_42 this is a handle decomposition of "M", moreover, every manifold has such Morse functions, so they have handle decompositions. Similarly, given a cobordism formula_43 with formula_44 and a function formula_45 which is Morse on the interior and constant on the boundary and satisfying the increasing index property, there is an induced handle presentation of the cobordism "W". When "f" is a Morse function on "M", -"f" is also a Morse function. The corresponding handle decomposition / presentation is called the dual decomposition. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\emptyset = M_{-1} \\subset M_0 \\subset M_1 \\subset M_2 \\subset \\dots \\subset M_{m-1} \\subset M_m = M" }, { "math_id": 1, "text": "M_i" }, { "math_id": 2, "text": "M_{i-1}" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "S^n" }, { "math_id": 5, "text": "\\chi : D^n \\to S^n" }, { "math_id": 6, "text": "S^{n-1} \\subset D^n" }, { "math_id": 7, "text": "N_p" }, { "math_id": 8, "text": "D^m" }, { "math_id": 9, "text": "M \\setminus \\operatorname{int}(N_p)" }, { "math_id": 10, "text": "I \\times D^{m-1}" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": "(\\partial I)\\times D^{m-1}" }, { "math_id": 13, "text": "\\partial D^m" }, { "math_id": 14, "text": "f : S^{j-1} \\times D^{m-j} \\to \\partial M" }, { "math_id": 15, "text": "H^j = D^j \\times D^{m-j}" }, { "math_id": 16, "text": "M \\cup_f H^j" }, { "math_id": 17, "text": "H^j" }, { "math_id": 18, "text": "S^{j-1} \\times D^{m-j}" }, { "math_id": 19, "text": "\\partial M" }, { "math_id": 20, "text": " M \\cup_f H^j = \\left( M \\sqcup (D^j \\times D^{m-j}) \\right) / \\sim" }, { "math_id": 21, "text": "\\sim" }, { "math_id": 22, "text": "(p,x) \\sim f(p,x)" }, { "math_id": 23, "text": "(p,x) \\in S^{j-1} \\times D^{m-j} \\subset D^j \\times D^{m-j}" }, { "math_id": 24, "text": "f(S^{j-1} \\times \\{0\\}) \\subset M" }, { "math_id": 25, "text": "f" }, { "math_id": 26, "text": "\\{0\\}^j \\times S^{m-j-1} \\subset D^j \\times D^{m-j} = H^j" }, { "math_id": 27, "text": " M \\cup_f H^j" }, { "math_id": 28, "text": "\\partial W = M_0 \\cup M_1" }, { "math_id": 29, "text": "W_{-1} \\subset W_0 \\subset W_1 \\subset \\cdots \\subset W_{m+1} = W " }, { "math_id": 30, "text": "W_{-1}" }, { "math_id": 31, "text": "M_0 \\times [0,1]" }, { "math_id": 32, "text": "W_i" }, { "math_id": 33, "text": "W_{i-1}" }, { "math_id": 34, "text": "f : M \\to \\R" }, { "math_id": 35, "text": "\\{p_1, \\ldots, p_k\\} \\subset M" }, { "math_id": 36, "text": "f(p_1) < f(p_2) < \\cdots < f(p_k) " }, { "math_id": 37, "text": "t_0 < f(p_1) < t_1 < f(p_2) < \\cdots < t_{k-1} < f(p_k) < t_k ," }, { "math_id": 38, "text": "f^{-1}[t_{j-1},t_{j}]" }, { "math_id": 39, "text": "(f^{-1}(t_{j-1}) \\times [0,1]) \\cup H^{I(j)}" }, { "math_id": 40, "text": "p_{j}" }, { "math_id": 41, "text": "T_{p_j}M" }, { "math_id": 42, "text": "I(1) \\leq I(2) \\leq \\cdots \\leq I(k)" }, { "math_id": 43, "text": "W" }, { "math_id": 44, "text": " \\partial W = M_0 \\cup M_1" }, { "math_id": 45, "text": " f: W \\to \\R" }, { "math_id": 46, "text": "T^1" }, { "math_id": 47, "text": "(M \\cup_f H^i) \\cup_g H^j" }, { "math_id": 48, "text": "j \\leq i" }, { "math_id": 49, "text": "(M \\cup H^j) \\cup H^i" }, { "math_id": 50, "text": "S^m" } ]
https://en.wikipedia.org/wiki?curid=902592
902614
Howland will forgery trial
1868 U.S. court case The Howland will forgery trial ("Robinson v. Mandell") was a U.S. court case in 1868 where businesswoman Henrietta "Hetty" Howland Robinson, who would later become the richest woman in America, contested the validity of the will of her grandaunt, Sylvia Ann Howland. According to Sylvia Howland's will, half of her $2 million estate () would go to various charities and entities, the rest would be in a trust for Hetty Robinson. Robinson challenged the will's validity by producing an earlier will that left the entire estate to her, and which included a clause invalidating any subsequent wills. The case was ultimately decided against Robinson after the court ruled that the clause invalidating future wills and Sylvia's signature to it were forgeries. It is famous for the forensic use of mathematics by Benjamin Peirce as an expert witness. History. Sylvia Ann Howland died in 1865, leaving roughly half her fortune of some 2 million dollars () to various legatees, with the residue to be held in trust for the benefit of Robinson, Howland's niece. The remaining principal was to be distributed to various beneficiaries on Robinson's death. Robinson produced an earlier will, leaving her the whole estate outright. To the will was attached a second and separate page, putatively seeking to invalidate any subsequent wills. Howland's executor, Thomas Mandell, rejected Robinson's claim, insisting that the second page was a forgery, and Robinson sued. In the ensuing case of "Robinson v. Mandell", Charles Sanders Peirce testified that he had made pairwise comparisons of 42 examples of Howland's signature, overlaying them and counting the number of downstrokes that overlapped. Each signature featured 30 downstrokes and he concluded that, on average, 6 of the 30 overlapped, 1 in 5. Benjamin Peirce, Charles' father, showed that the number of overlapping downstrokes between two signatures also closely followed the binomial distribution, the expected distribution if each downstroke was an independent event. When the admittedly genuine signature on the first page of the contested will was compared with that on the second, all 30 downstrokes coincided, suggesting that the second signature was a tracing of the first. Benjamin Peirce then took the stand and asserted that, given the independence of each downstroke, the probability that all 30 downstrokes should coincide in two genuine signatures was formula_0. That is one in 2,666,000,000,000,000,000,000, in the order of magnitude of sextillions. He went on to observe: So vast improbability is practically an impossibility. Such evanescent shadows of probability cannot belong to actual life. They are unimaginably less than those least things which the law cares not for. ... The coincidence which has occurred here must have had its origin in an intention to produce it. It is utterly repugnant to sound reason to attribute this coincidence to any cause but design. The court ruled that Robinson's testimony in support of Howland's signature was inadmissible as she was a party to the will, thus having a conflict of interest. The statistical evidence was not called upon in judgment. The case is one of a series of attempts to introduce mathematical reasoning into the courts. "People v. Collins" is a more recent example. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\textstyle\\frac{1}{2.666 \\times 10^{21}}" } ]
https://en.wikipedia.org/wiki?curid=902614
902820
Emissivity
Capacity of an object to radiate electromagnetic energy The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation. Thermal radiation is electromagnetic radiation that most commonly includes both visible radiation (light) and infrared radiation, which is not visible to human eyes. A portion of the thermal radiation from very hot objects (see photograph) is easily visible to the eye. The emissivity of a surface depends on its chemical composition and geometrical structure. Quantitatively, it is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature as given by the Stefan–Boltzmann law. (A comparison with Planck's law is used if one is concerned with particular wavelengths of thermal radiation.) The ratio varies from 0 to 1. The surface of a perfect black body (with an emissivity of 1) emits thermal radiation at the rate of approximately 448 watts per square metre (W/m2) at a room temperature of about 25 °C (298 K). Real objects have emissivities less than 1.0, and emit radiation at correspondingly lower rates. However, wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures may have an emissivity greater than 1. Practical applications. Emissivities are important in a variety of contexts. Mathematical definitions. In its most general form, emissivity can be specified for a particular wavelength, direction, and polarization. However, the most commonly used form of emissivity is the "hemispherical total emissivity", which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature. Some specific forms of emissivity are detailed below. Hemispherical emissivity. Hemispherical emissivity of a surface, denoted "ε", is defined as formula_0 where "M"e is the radiant exitance of that surface and "M"e° is the radiant exitance of a black body at the same temperature as that surface. Spectral hemispherical emissivity. Spectral hemispherical emissivity in frequency and spectral hemispherical emissivity in wavelength of a surface, denoted "ε"ν and "ε"λ, respectively, are defined as formula_1 where "M"e,ν and "M"e,λ are the spectral radiant exitance in frequency and in wavelength of that surface, and "M"e,ν° and "M"e,λ° are the corresponding spectral radiant exitances of a black body at the same temperature as that surface. Directional emissivity. Directional emissivity of a surface, denoted "ε"Ω, is defined as formula_2 where "L"e,Ω is the radiance of that surface and "L"e,Ω° is the radiance of a black body at the same temperature as that surface. Spectral directional emissivity. Spectral directional emissivity in frequency and spectral directional emissivity in wavelength of a surface, denoted "ε"ν,Ω and "ε"λ,Ω, respectively, are defined as formula_3 where "L"e,Ω,ν and "L"e,Ω,λ are the spectral radiance in frequency and in wavelength of that surface, and "L"e,Ω,ν° and "L"e,Ω,λ° are the corresponding spectral radiances of a black body at the same temperature as that surface. Hemispherical emissivity can also be expressed as a weighted average of the directional spectral emissivities as described in textbooks on "radiative heat transfer". Emissivities of common surfaces. Emissivities "ε" can be measured using simple devices such as Leslie's cube in conjunction with a thermal radiation detector such as a thermopile or a bolometer. The apparatus compares the thermal radiation from a surface to be tested with the thermal radiation from a nearly ideal, black sample. The detectors are essentially black absorbers with very sensitive thermometers that record the detector's temperature rise when exposed to thermal radiation. For measuring room temperature emissivities, the detectors must absorb thermal radiation completely at infrared wavelengths near 10×10^−6 metre. Visible light has a wavelength range of about 0.4–0.7×10^−6 metre from violet to deep red. Emissivity measurements for many surfaces are compiled in many handbooks and texts. Closely related properties. Absorptance. 
There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the "absorptivity" of a surface). Kirchhoff's law is rigorously applicable with regard to the spectral directional definitions of emissivity and absorptivity. The relationship explains why emissivities cannot exceed 1, since the largest absorptivity—corresponding to complete absorption of all incident light by a truly black object—is also 1. Mirror-like, metallic surfaces that reflect light will thus have low emissivities, since the reflected light isn't absorbed. A polished silver surface has an emissivity of about 0.02 near room temperature. Black soot absorbs thermal radiation very well; it has an emissivity as large as 0.97, and hence soot is a fair approximation to an ideal black body. With the exception of bare, polished metals, the appearance of a surface to the eye is not a good guide to emissivities near room temperature. For example, white paint absorbs very little visible light. However, at an infrared wavelength of 10×10−6 metre, paint absorbs light very well, and has a high emissivity. Similarly, pure water absorbs very little visible light, but water is nonetheless a strong infrared absorber and has a correspondingly high emissivity. Emittance. Emittance (or emissive power) is the total amount of thermal energy emitted per unit area per unit time for all possible wavelengths. Emissivity of a body at a given temperature is the ratio of the total emissive power of a body to the total emissive power of a perfectly black body at that temperature. Following Planck's law, the total energy radiated increases with temperature while the peak of the emission spectrum shifts to shorter wavelengths. The energy emitted at shorter wavelengths increases more rapidly with temperature. For example, an ideal blackbody in thermal equilibrium at , will emit 97% of its energy at wavelengths below . The term emissivity is generally used to describe a simple, homogeneous surface such as silver. Similar terms, emittance and thermal emittance, are used to describe thermal radiation measurements on complex surfaces such as insulation products. Measurement of Emittance. Emittance of a surface can be measured directly or indirectly from the emitted energy from that surface. In the direct radiometric method, the emitted energy from the sample is measured directly using a spectroscope such as Fourier transform infrared spectroscopy (FTIR). In the indirect calorimetric method, the emitted energy from the sample is measured indirectly using a calorimeter. In addition to these two commonly applied methods, inexpensive emission measurement technique based on the principle of two-color pyrometry. Emissivities of planet Earth. The emissivity of a planet or other astronomical body is determined by the composition and structure of its outer skin. In this context, the "skin" of a planet generally includes both its semi-transparent atmosphere and its non-gaseous surface. The resulting radiative emissions to space typically function as the primary cooling mechanism for these otherwise isolated bodies. The balance between all other incoming plus internal sources of energy versus the outgoing flow regulates planetary temperatures. For Earth, equilibrium skin temperatures range near the freezing point of water, 260±50 K (-13±50 °C, 8±90 °F). The most energetic emissions are thus within a band spanning about 4-50 μm as governed by Planck's law. 
Emissivities for the atmosphere and surface components are often quantified separately, and validated against satellite- and terrestrial-based observations as well as laboratory measurements. These emissivities serve as input parameters within some simpler meteorlogic and climatologic models. Surface. Earth's surface emissivities (εs) have been inferred with satellite-based instruments by directly observing surface thermal emissions at nadir through a less obstructed atmospheric window spanning 8-13 μm. Values range about εs=0.65-0.99, with lowest values typically limited to the most barren desert areas. Emissivities of most surface regions are above 0.9 due to the dominant influence of water; including oceans, land vegetation, and snow/ice. Globally averaged estimates for the hemispheric emissivity of Earth's surface are in the vicinity of εs=0.95. Atmosphere. Water also dominates the planet's atmospheric emissivity and absorptivity in the form of water vapor. Clouds, carbon dioxide, and other components make substantial additional contributions, especially where there are gaps in the water vapor absorption spectrum. Nitrogen (N2) and oxygen (O2) - the primary atmospheric components - interact less significantly with thermal radiation in the infrared band. Direct measurement of Earths atmospheric emissivities (εa) are more challenging than for land surfaces due in part to the atmosphere's multi-layered and more dynamic structure. Upper and lower limits have been measured and calculated for εa in accordance with extreme yet realistic local conditions. At the upper limit, dense low cloud structures (consisting of liquid/ice aerosols and saturated water vapor) close the infrared transmission windows, yielding near to black body conditions with εa≈1. At a lower limit, clear sky (cloud-free) conditions promote the largest opening of transmission windows. The more uniform concentration of long-lived trace greenhouse gases in combination with water vapor pressures of 0.25-20 mbar then yield minimum values in the range of εa=0.55-0.8 (with ε=0.35-0.75 for a simulated water-vapor-only atmosphere). Carbon dioxide (CO2) and other greenhouse gases contribute about ε=0.2 to εa when atmospheric humidity is low. Researchers have also evaluated the contribution of differing cloud types to atmospheric absorptivity and emissivity. These days, the detailed processes and complex properties of radiation transport through the atmosphere are evaluated by general circulation models using radiation transport codes and databases such as MODTRAN/HITRAN. Emission, absorption, and scattering are thereby simulated through both space and time. For many practical applications it may not be possible, economical or necessary to know all emissivity values locally. "Effective" or "bulk" values for an atmosphere or an entire planet may be used. These can be based upon remote observations (from the ground or outer space) or defined according to the simplifications utilized by a particular model. For example, an effective global value of εa≈0.78 has been estimated from application of an idealized single-layer-atmosphere energy-balance model to Earth. Effective emissivity due to atmosphere. The IPCC reports an outgoing thermal radiation flux (OLR) of 239 (237–242) W m-2 and a surface thermal radiation flux (SLR) of 398 (395–400) W m-2, where the parenthesized amounts indicate the 5-95% confidence intervals as of 2015. 
These values indicate that the atmosphere (with clouds included) reduces Earth's overall emissivity, relative to its surface emissions, by a factor of 239/398 ≈ 0.60. In other words, emissions to space are given by formula_4 where formula_5 is the effective emissivity of Earth as viewed from space and formula_6 is the effective temperature of the surface. History. The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth thru mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and others. In 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. Emissivity, defined as a further proportionality factor to the Stefan-Boltzmann law, was thus implied and utilized in subsequent evaluations of the radiative behavior of grey bodies. For example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of space. By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\varepsilon = \\frac{M_\\mathrm{e}}{M_\\mathrm{e}^\\circ}," }, { "math_id": 1, "text": "\\begin{align}\n \\varepsilon_\\nu &= \\frac{M_{\\mathrm{e},\\nu}}{M_{\\mathrm{e},\\nu}^\\circ}, \\\\\n \\varepsilon_\\lambda &= \\frac{M_{\\mathrm{e},\\lambda}}{M_{\\mathrm{e},\\lambda}^\\circ},\n\\end{align}" }, { "math_id": 2, "text": "\\varepsilon_\\Omega = \\frac{L_{\\mathrm{e},\\Omega}}{L_{\\mathrm{e},\\Omega}^\\circ}," }, { "math_id": 3, "text": "\\begin{align}\n \\varepsilon_{\\nu,\\Omega} &= \\frac{L_{\\mathrm{e},\\Omega,\\nu}}{L_{\\mathrm{e},\\Omega,\\nu}^\\circ}, \\\\\n \\varepsilon_{\\lambda,\\Omega} &= \\frac{L_{\\mathrm{e},\\Omega,\\lambda}}{L_{\\mathrm{e},\\Omega,\\lambda}^\\circ},\n\\end{align}" }, { "math_id": 4, "text": "\\mathrm{OLR} = \\epsilon_\\mathrm{eff}\\,\\sigma\\,T_{se}^4" }, { "math_id": 5, "text": "\\epsilon_\\mathrm{eff} \\approx 0.6" }, { "math_id": 6, "text": " T_\\mathrm{se} \\equiv \\left[\\mathrm{SLR}/\\sigma\\right]^{1/4} \\approx" } ]
https://en.wikipedia.org/wiki?curid=902820
902982
Slug (unit)
Unit of mass &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The slug is a derived unit of mass in a weight-based system of measures, most notably within the British Imperial measurement system and the United States customary measures system. Systems of measure either define mass and derive a force unit "or" define a base force and derive a mass unit (cf. "poundal", a derived unit of force in a mass-based system). A slug is defined as a mass that is accelerated by 1 ft/s2 when a net force of one pound (lbf) is exerted on it. formula_0 One slug is a mass equal to about 14.59390 kg, based on standard gravity, the international foot, and the avoirdupois pound. In other words, at the Earth's surface (in standard gravity), an object with a mass of 1 slug weighs approximately 32.17 pounds-force (about 143.1 N). History. The "slug" is part of a subset of units known as the gravitational FPS system, one of several such specialized systems of mechanical units developed in the late 19th and the early 20th century. "Geepound" was another name for this unit in early literature. The name "slug" was coined before 1900 by British physicist Arthur Mason Worthington, but it did not see any significant use until decades later. It is derived from the sense of "slug" meaning "solid block of metal" (cf. "slug" as a fake coin or a projectile), not from the slug mollusc. A 1928 textbook says: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;No name has yet been given to the unit of mass and, in fact, as we have developed the theory of dynamics no name is necessary. Whenever the mass, "m", appears in our formulae, we substitute the ratio of the convenient force-acceleration pair "(w/g)", and measure the mass in lbs. per ft./sec.2 or in grams per cm./sec.2. The slug is listed in the Regulations under the Weights and Measures (National Standards) Act, 1960. This regulation defines the units of weights and measures, both regular and metric, in Australia. Related units. The inch version of the slug (equal to 1 lbf⋅s2/in, or 12 slugs) has no official name, but is commonly referred to as a "blob", "slinch" (a portmanteau of the words slug and inch), "slugette", or "snail". It is equivalent to about 175.127 kg, based on standard gravity. Similar (but long-obsolete) metric units included the "glug" (980.665 g) in a gravitational system related to the centimetre–gram–second system, and the "mug", "hyl", "par", or "TME" (9.80665 kg) in a gravitational system related to the metre–kilogram–second system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
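The kilogram equivalent of the slug follows directly from the exact definitions of the pound, the foot, and standard gravity; the short Python check below shows the arithmetic.

LB_KG = 0.45359237   # avoirdupois pound in kilograms (exact)
FT_M = 0.3048        # international foot in metres (exact)
G0 = 9.80665         # standard gravity in m/s^2 (exact)

LBF_N = LB_KG * G0   # one pound-force in newtons, about 4.4482 N

# 1 slug = 1 lbf*s^2/ft, converted to SI base units:
slug_kg = LBF_N / FT_M
print(round(slug_kg, 4))               # about 14.5939 kg

# Weight of one slug under standard gravity, expressed in pounds-force:
print(round(slug_kg * G0 / LBF_N, 5))  # 32.17405 lbf, i.e. standard gravity expressed in ft/s^2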
[ { "math_id": 0, "text": "\n 1~\\text{slug}= 1~\\text{lbf}{\\cdot}\\frac{\\text{s}^2}{\\text{ft}}\n \\quad\\Longleftrightarrow\\quad\n 1~\\text{lbf}= 1~\\text{slug}{\\cdot}\\frac{\\text{ft}}{\\text{s}^2}\n" } ]
https://en.wikipedia.org/wiki?curid=902982
903032
Modular exponentiation
Operation in modular arithmetic Modular exponentiation is exponentiation performed over a modulus. It is useful in computer science, especially in the field of public-key cryptography, where it is used in both Diffie–Hellman key exchange and RSA public/private keys. Modular exponentiation is the remainder when an integer "b" (the base) is raised to the power "e" (the exponent), and divided by a positive integer "m" (the modulus); that is, "c" = "b"^"e" mod "m". From the definition of division, it follows that 0 ≤ "c" &lt; "m". For example, given "b" = 5, "e" = 3 and "m" = 13, dividing 5^3 = 125 by 13 leaves a remainder of "c" = 8. Modular exponentiation can be performed with a "negative" exponent "e" by finding the modular multiplicative inverse "d" of "b" modulo "m" using the extended Euclidean algorithm. That is: "c" = "b"^"e" mod "m" = "d"^(−"e") mod "m", where "e" &lt; 0 and "b" ⋅ "d" ≡ 1 (mod "m"). Modular exponentiation is efficient to compute, even for very large integers. On the other hand, computing the modular discrete logarithm – that is, finding the exponent "e" when given "b", "c", and "m" – is believed to be difficult. This one-way function behavior makes modular exponentiation a candidate for use in cryptographic algorithms. Direct method. The most direct method of calculating a modular exponent is to calculate "b"^"e" directly, then to take this number modulo "m". Consider trying to compute "c", given "b" = 4, "e" = 13, and "m" = 497: "c" ≡ 4^13 (mod 497) One could use a calculator to compute 4^13; this comes out to 67,108,864. Taking this value modulo 497, the answer "c" is determined to be 445. Note that "b" is only one digit in length and that "e" is only two digits in length, but the value "b"^"e" is 8 digits in length. In strong cryptography, "b" is often at least 1024 bits. Consider "b" = 5 × 10^76 and "e" = 17, both of which are perfectly reasonable values. In this example, "b" is 77 digits in length and "e" is 2 digits in length, but the value "b"^"e" is 1,304 decimal digits in length. Such calculations are possible on modern computers, but the sheer magnitude of such numbers causes the speed of calculations to slow considerably. As "b" and "e" increase even further to provide better security, the value "b"^"e" becomes unwieldy. The time required to perform the exponentiation depends on the operating environment and the processor. The method described above requires O("e") multiplications to complete. Memory-efficient method. Keeping the numbers smaller requires additional modular reduction operations, but the reduced size makes each operation faster, saving time (as well as memory) overall. This algorithm makes use of the identity ("a" ⋅ "b") mod "m" = [("a" mod "m") ⋅ ("b" mod "m")] mod "m" The modified algorithm is: Inputs "An integer b (base), integer e (exponent), and a positive integer m (modulus)" Outputs "The modular exponent c where" "c" = "b"^"e" mod "m" # Initialise "c" = 1 and loop variable "e′" = 0 # While "e′" &lt; "e" do ## Increment "e′" by 1 ## Calculate "c" = ("b" ⋅ "c") mod "m" # Output "c" Note that at the end of every iteration through the loop, the equation "c" ≡ "b"^"e′" (mod "m") holds true. The algorithm ends when the loop has been executed "e" times. At that point "c" contains the result of "b"^"e" mod "m". In summary, this algorithm increases "e′" by one until it is equal to "e". At every step it multiplies the result from the previous iteration, "c", by "b" and performs a modulo operation on the resulting product, thereby keeping the resulting "c" a small integer. 
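A direct Python rendering of this memory-efficient loop is given below (it mirrors the pseudocode presented further on); Python's built-in three-argument pow performs the same computation and is used as a cross-check.

def modular_pow(base, exponent, modulus):
    """Compute (base ** exponent) % modulus, reducing after every multiplication."""
    if modulus == 1:
        return 0
    c = 1
    for _ in range(exponent):
        c = (c * base) % modulus  # c always stays in the range 0 .. modulus - 1
    return c

assert modular_pow(4, 13, 497) == 445
assert modular_pow(4, 13, 497) == pow(4, 13, 497)  # built-in modular exponentiation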
The example "b" = 4, "e" = 13, and "m" = 497 is presented again. The algorithm performs the iteration thirteen times:
("e′" = 1)   "c" = (4 ⋅ 1) mod 497 = 4 mod 497 = 4
("e′" = 2)   "c" = (4 ⋅ 4) mod 497 = 16 mod 497 = 16
("e′" = 3)   "c" = (4 ⋅ 16) mod 497 = 64 mod 497 = 64
("e′" = 4)   "c" = (4 ⋅ 64) mod 497 = 256 mod 497 = 256
("e′" = 5)   "c" = (4 ⋅ 256) mod 497 = 1024 mod 497 = 30
("e′" = 6)   "c" = (4 ⋅ 30) mod 497 = 120 mod 497 = 120
("e′" = 7)   "c" = (4 ⋅ 120) mod 497 = 480 mod 497 = 480
("e′" = 8)   "c" = (4 ⋅ 480) mod 497 = 1920 mod 497 = 429
("e′" = 9)   "c" = (4 ⋅ 429) mod 497 = 1716 mod 497 = 225
("e′" = 10)   "c" = (4 ⋅ 225) mod 497 = 900 mod 497 = 403
("e′" = 11)   "c" = (4 ⋅ 403) mod 497 = 1612 mod 497 = 121
("e′" = 12)   "c" = (4 ⋅ 121) mod 497 = 484 mod 497 = 484
("e′" = 13)   "c" = (4 ⋅ 484) mod 497 = 1936 mod 497 = 445
The final answer for "c" is therefore 445, as in the direct method. Like the first method, this requires O("e") multiplications to complete. However, since the numbers used in these calculations are much smaller than the numbers used in the first algorithm's calculations, the computation time decreases by a factor of at least O("e") in this method. In pseudocode, this method can be performed the following way:
function modular_pow(base, exponent, modulus) is
    if modulus = 1 then return 0
    c := 1
    for e_prime = 0 to exponent-1 do
        c := (c * base) mod modulus
    return c
Right-to-left binary method. A third method drastically reduces the number of operations to perform modular exponentiation, while keeping the same memory footprint as in the previous method. It is a combination of the previous method and a more general principle called exponentiation by squaring (also known as "binary exponentiation"). First, it is required that the exponent "e" be converted to binary notation. That is, "e" can be written as:
formula_0
In such notation, the "length" of "e" is "n" bits. "a""i" can take the value 0 or 1 for any "i" such that 0 ≤ "i" < "n". By definition, "a""n"−1 = 1. The value "b"^"e" can then be written as:
formula_1
The solution "c" is therefore:
formula_2
Pseudocode. The following is an example in pseudocode based on Applied Cryptography by Bruce Schneier. The inputs base, exponent, and modulus correspond to "b", "e", and "m" in the equations given above.
function modular_pow(base, exponent, modulus) is
    if modulus = 1 then return 0
    Assert :: (modulus - 1) * (modulus - 1) does not overflow base
    result := 1
    base := base mod modulus
    while exponent > 0 do
        if (exponent mod 2 == 1) then
            result := (result * base) mod modulus
        exponent := exponent >> 1
        base := (base * base) mod modulus
    return result
Note that upon entering the loop for the first time, the code variable base is equivalent to "b". However, the repeated squaring of base in the last line of the loop body ensures that at the completion of every loop, the variable base is equivalent to "b"^(2^"i") mod "m", where "i" is the number of times the loop has been iterated. (This makes "i" the next working bit of the binary exponent exponent, where the least-significant bit is exponent0.) The conditional multiplication at the top of the loop body simply carries out the multiplication in formula_3. If "a""i" is zero, no code executes, since this effectively multiplies the running total by one. If "a""i" instead is one, the variable base (containing the value "b"^(2^"i") mod "m" of the original base) is simply multiplied in. In this example, the base "b" is raised to the exponent "e" = 13. The exponent is 1101 in binary. There are four binary digits, so the loop executes four times, with values "a"0 = 1, "a"1 = 0, "a"2 = 1, and "a"3 = 1.
First, initialize the result formula_4 to 1 and preserve the value of "b" in the variable "x":
formula_5.
Step 1) bit 1 is 1, so set formula_6; set formula_7.
Step 2) bit 2 is 0, so do not reset "R"; set formula_8.
Step 3) bit 3 is 1, so set formula_9; set formula_10.
Step 4) bit 4 is 1, so set formula_11; this is the last step, so we do not need to square "x".
We are done: "R" is now formula_12.
Here is the above calculation, where we compute "b" = 4 to the power "e" = 13, performed modulo 497.
Initialize: formula_13 and formula_14.
Step 1) bit 1 is 1, so set formula_15; set formula_16.
Step 2) bit 2 is 0, so do not reset "R"; set formula_17.
Step 3) bit 3 is 1, so set formula_18; set formula_19.
Step 4) bit 4 is 1, so set formula_20.
We are done: "R" is now formula_21, the same result obtained in the previous algorithms.
The running time of this algorithm is O(log "exponent"). When working with large values of "exponent", this offers a substantial speed benefit over the previous two algorithms, whose time is O("exponent"). For example, if the exponent were 2^20 = 1,048,576, this algorithm would take 20 steps instead of 1,048,576 steps.
Implementation in Lua.
function modPow(b, e, m)
    if m == 1 then
        return 0
    end
    local r = 1
    b = b % m
    while e > 0 do
        if e % 2 == 1 then
            r = (r*b) % m
        end
        b = (b*b) % m
        e = e >> 1 -- use 'e = math.floor(e / 2)' on Lua 5.2 or older
    end
    return r
end
Left-to-right binary method. We can also use the bits of the exponent in left-to-right order. In practice, we would usually want the result modulo some modulus "m"; in that case, we would reduce each multiplication result (mod "m") before proceeding. For simplicity, the modulus calculation is omitted here. This example shows how to compute formula_12 using left-to-right binary exponentiation. The exponent is 1101 in binary; there are 4 bits, so there are 4 iterations.
Initialize the result to 1: formula_22.
Step 1) formula_23; bit 1 = 1, so compute formula_24;
Step 2) formula_25; bit 2 = 1, so compute formula_26;
Step 3) formula_27; bit 3 = 0, so we are done with this step;
Step 4) formula_28; bit 4 = 1, so compute formula_29.
Minimum multiplications. In "The Art of Computer Programming, Vol. 2, Seminumerical Algorithms", page 463, Donald Knuth notes that, contrary to some assertions, this method does not always give the minimum possible number of multiplications. The smallest counterexample is for a power of 15, where the binary method needs six multiplications. Instead, form "x"^3 in two multiplications, then "x"^6 by squaring "x"^3, then "x"^12 by squaring "x"^6, and finally "x"^15 by multiplying "x"^12 and "x"^3, thereby achieving the desired result with only five multiplications. However, many pages follow describing how such sequences might be contrived in general.
Generalizations. Matrices. The "m"-th term of any constant-recursive sequence (such as Fibonacci numbers or Perrin numbers) where each term is a linear function of "k" previous terms can be computed efficiently modulo "n" by computing "A"^"m" mod "n", where "A" is the corresponding "k"×"k" companion matrix. The above methods adapt easily to this application. This can be used for primality testing of large numbers "n", for example. A recursive algorithm for codice_0 = "A"^"b" mod "c", where "A" is a square matrix:
function Matrix_ModExp(Matrix A, int b, int c) is
    if b == 0 then
        return I  // The identity matrix
    if (b mod 2 == 1) then
        return (A * Matrix_ModExp(A, b - 1, c)) mod c
    Matrix D := Matrix_ModExp(A, b / 2, c)
    return (D * D) mod c
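To make the matrix generalization concrete, here is a small Python sketch (ours, not from the article) that computes Fibonacci numbers modulo "n" by raising the 2×2 companion matrix to the "m"-th power with the same square-and-multiply idea used above; the helper names are illustrative.

    def mat_mult_mod(X, Y, n):
        # Product of two 2x2 matrices, with every entry reduced modulo n
        return [
            [(X[0][0]*Y[0][0] + X[0][1]*Y[1][0]) % n, (X[0][0]*Y[0][1] + X[0][1]*Y[1][1]) % n],
            [(X[1][0]*Y[0][0] + X[1][1]*Y[1][0]) % n, (X[1][0]*Y[0][1] + X[1][1]*Y[1][1]) % n],
        ]

    def mat_pow_mod(A, b, n):
        # Right-to-left binary exponentiation, exactly as above, but with matrix multiplication
        R = [[1, 0], [0, 1]]                 # identity matrix
        while b > 0:
            if b % 2 == 1:
                R = mat_mult_mod(R, A, n)
            A = mat_mult_mod(A, A, n)
            b //= 2
        return R

    def fibonacci_mod(m, n):
        # Companion matrix of F(i) = F(i-1) + F(i-2); the entry (A**m)[0][1] equals F(m)
        A = [[1, 1], [1, 0]]
        return mat_pow_mod(A, m, n)[0][1]

    print(fibonacci_mod(10, 1000))    # 55
    print(fibonacci_mod(100, 1000))   # 75, i.e. the last three digits of F(100) are 075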
Finite cyclic groups. Diffie–Hellman key exchange uses exponentiation in finite cyclic groups. The above methods for modular matrix exponentiation clearly extend to this context. The modular matrix multiplication "C" ≡ "AB" (mod "n") is simply replaced everywhere by the group multiplication "c" = "ab". Reversible and quantum modular exponentiation. In quantum computing, modular exponentiation appears as the bottleneck of Shor's algorithm, where it must be computed by a circuit consisting of reversible gates, which can be further broken down into quantum gates appropriate for a specific physical device. Furthermore, in Shor's algorithm it is possible to know the base and the modulus of exponentiation at every call, which enables various circuit optimizations. Software implementations. Because modular exponentiation is an important operation in computer science, and there are efficient algorithms (see above) that are much faster than simply exponentiating and then taking the remainder, many programming languages and arbitrary-precision integer libraries have a dedicated function to perform modular exponentiation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
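As one example of such built-in support, Python's three-argument pow performs modular exponentiation directly; the toy Diffie–Hellman exchange below is only a sketch with deliberately tiny, illustrative parameters (real deployments use large, carefully chosen primes).

    p, g = 23, 5              # toy public prime modulus and generator (far too small for real use)
    a, b = 6, 15              # Alice's and Bob's private exponents (illustrative values)

    A = pow(g, a, p)          # Alice publishes g^a mod p = 8
    B = pow(g, b, p)          # Bob publishes   g^b mod p = 19

    # Each side raises the other's public value to its own private exponent
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob == 2    # both sides arrive at the same shared secret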
[ { "math_id": 0, "text": "e = \\sum_{i=0}^{n-1} a_i 2^i" }, { "math_id": 1, "text": "b^e = b^{\\left( \\sum_{i=0}^{n-1} a_i 2^i \\right)} = \\prod_{i=0}^{n-1} b^{a_i 2^i}" }, { "math_id": 2, "text": "c \\equiv \\prod_{i=0}^{n-1} b^{a_i 2^i} \\pmod m" }, { "math_id": 3, "text": "\\prod_{i=0}^{n-1} b^{a_i 2^i}\\pmod m" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": " R \\leftarrow 1 \\, ( = b^0) \\text{ and } x \\leftarrow b " }, { "math_id": 6, "text": " R \\leftarrow R \\cdot x \\text{ }(= b^1) " }, { "math_id": 7, "text": " x \\leftarrow x^2 \\text{ }(= b^2) " }, { "math_id": 8, "text": " x \\leftarrow x^2 \\text{ }(= b^4) " }, { "math_id": 9, "text": " R \\leftarrow R \\cdot x \\text{ }(= b^5) " }, { "math_id": 10, "text": " x \\leftarrow x^2 \\text{ }(= b^8) " }, { "math_id": 11, "text": " R \\leftarrow R \\cdot x \\text{ }(= b^{13}) " }, { "math_id": 12, "text": "b^{13}" }, { "math_id": 13, "text": " R \\leftarrow 1 \\, ( = b^0) " }, { "math_id": 14, "text": " x \\leftarrow b = 4 " }, { "math_id": 15, "text": "R \\leftarrow R \\cdot 4 \\equiv 4 \\pmod{497} " }, { "math_id": 16, "text": " x \\leftarrow x^2 \\text{ }(= b^2) \\equiv 4^2 \\equiv 16 \\pmod{497} " }, { "math_id": 17, "text": " x \\leftarrow x^2 \\text{ }(= b^4) \\equiv 16^2\\equiv 256 \\pmod{497} " }, { "math_id": 18, "text": " R \\leftarrow R \\cdot x \\text{ }(= b^5) \\equiv 4 \\cdot 256 \\equiv 30 \\pmod{497} " }, { "math_id": 19, "text": " x \\leftarrow x^2 \\text{ }(= b^8) \\equiv 256^2 \\equiv 429 \\pmod{497} " }, { "math_id": 20, "text": " R \\leftarrow R \\cdot x \\text{ }(= b^{13}) \\equiv 30 \\cdot 429 \\equiv 445 \\pmod{497} " }, { "math_id": 21, "text": "4^{13} \\equiv 445 \\pmod{497}" }, { "math_id": 22, "text": "r \\leftarrow 1 \\, ( = b^0)" }, { "math_id": 23, "text": "r \\leftarrow r^2 \\, ( = b^0)" }, { "math_id": 24, "text": "r \\leftarrow r \\cdot b \\,( = b^1)" }, { "math_id": 25, "text": "r \\leftarrow r^2 \\, ( = b^2)" }, { "math_id": 26, "text": "r \\leftarrow r \\cdot b \\, ( = b^3)" }, { "math_id": 27, "text": "r \\leftarrow r^2 \\, ( = b^6)" }, { "math_id": 28, "text": "r \\leftarrow r^2 \\, ( = b^{12})" }, { "math_id": 29, "text": "r \\leftarrow r \\cdot b \\, ( = b^{13})" } ]
https://en.wikipedia.org/wiki?curid=903032
9032191
Natural-neighbor interpolation
Method of spatial interpolation
Natural-neighbor interpolation or Sibson interpolation is a method of spatial interpolation developed by Robin Sibson. The method is based on the Voronoi tessellation of a discrete set of spatial points. This has advantages over simpler methods of interpolation, such as nearest-neighbor interpolation, in that it provides a smoother approximation of the underlying "true" function. Formulation. The basic equation is: formula_0 where formula_1 is the estimate at formula_2, formula_3 are the weights and formula_4 are the known data at formula_5. The weights, formula_3, are calculated by finding how much of each of the surrounding areas is "stolen" when inserting formula_2 into the tessellation. The Sibson weights are given by formula_6 where "A"("x") is the volume of the new cell centered at "x", and "A"("x""i") is the volume of the intersection between the new cell centered at "x" and the old cell centered at "x""i". An alternative choice, the Laplace (non-Sibsonian) weights, is given by formula_7 where "l"("x""i") is the measure of the interface between the cells linked to "x" and "x""i" in the Voronoi diagram (length in 2D, surface in 3D) and "d"("x""i") is the distance between "x" and "x""i". Properties. There are several useful properties of natural neighbor interpolation: Extensions. Natural neighbor interpolation has also been implemented in a discrete form, which has been demonstrated to be computationally more efficient in at least some circumstances. A form of discrete natural neighbor interpolation has also been developed that gives a measure of interpolation uncertainty. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
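The "stolen area" idea can be illustrated with a crude, grid-based (discrete) approximation in Python. This sketch is ours, not taken from any reference: it rasterises the Voronoi diagram on a pixel grid, counts how many pixels each site loses when the query point is inserted, and uses those fractions as the Sibson weights. NumPy is assumed, and all names are illustrative.

    import numpy as np

    def discrete_sibson_interpolate(sites, values, query, extent=(0.0, 1.0, 0.0, 1.0), res=400):
        # Build a grid of pixel centres covering the region of interest
        xmin, xmax, ymin, ymax = extent
        xs = np.linspace(xmin, xmax, res)
        ys = np.linspace(ymin, ymax, res)
        gx, gy = np.meshgrid(xs, ys)
        grid = np.column_stack([gx.ravel(), gy.ravel()])

        # Discrete Voronoi diagram: assign every pixel to its nearest site
        d_sites = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)
        owner = np.argmin(d_sites, axis=1)
        d_owner = d_sites[np.arange(len(grid)), owner]

        # Pixels closer to the query point than to their current owner are "stolen"
        d_query = np.linalg.norm(grid - query, axis=1)
        stolen = d_query < d_owner
        if not stolen.any():
            # The query (numerically) coincides with a site: return that site's value
            return float(values[owner[np.argmin(d_query)]])

        # Sibson weight of site i = area stolen from i / total area stolen
        counts = np.bincount(owner[stolen], minlength=len(sites))
        w = counts / counts.sum()
        return float(np.dot(w, values))

    # Four scattered samples of f(x, y) = x + 2y, interpolated at (0.4, 0.4)
    sites = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.9], [0.2, 0.7]])
    values = sites[:, 0] + 2 * sites[:, 1]
    print(discrete_sibson_interpolate(sites, values, np.array([0.4, 0.4])))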
[ { "math_id": 0, "text": "G(x)=\\sum^n_{i=1}{w_i(x)f(x_i)}" }, { "math_id": 1, "text": "G(x)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "w_i" }, { "math_id": 4, "text": "f(x_i)" }, { "math_id": 5, "text": "(x_i)" }, { "math_id": 6, "text": "w_i(\\mathbf{x})=\\frac{A(\\mathbf{x}_i)}{A(\\mathbf{x})}" }, { "math_id": 7, "text": "w_i(\\mathbf{x})=\\frac{\\frac{l(\\mathbf{x}_i)}{d(\\mathbf{x}_i)}}{\\sum_{k=1}^n \\frac{l(\\mathbf{x}_k)}{d(\\mathbf{x}_k)}}" } ]
https://en.wikipedia.org/wiki?curid=9032191
9032406
Tupper's self-referential formula
Formula that visually represents itself when graphed Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the ("x", "y") plane. History. The formula was defined by Jeff Tupper and appears as an example in Tupper's 2001 SIGGRAPH paper on reliable two-dimensional computer graphing algorithms. This paper discusses methods related to the GrafEq formula-graphing program developed by Tupper. Although the formula is called "self-referential", Tupper did not name it as such. Formula. The formula is an inequality defined as: formula_0 where formula_1 denotes the floor function, and mod is the modulo operation. Plots. Let formula_2 equal the following 543-digit integer: 960 939 379 918 958 884 971 672 962 127 852 754 715 004 339 660 129 306 651 505 519 271 702 802 395 266 424 689 642 842 174 350 718 121 267 153 782 770 623 355 993 237 280 874 144 307 891 325 963 941 337 723 487 857 735 749 823 926 629 715 517 173 716 995 165 232 890 538 221 612 403 238 855 866 184 013 235 585 136 048 828 693 337 902 491 454 229 288 667 081 096 184 496 091 705 183 454 067 827 731 551 705 405 381 627 380 967 602 565 625 016 981 482 083 418 783 163 849 115 590 225 610 003 652 351 370 343 874 461 848 378 737 238 198 224 849 863 465 033 159 410 054 974 700 593 138 339 226 497 249 461 751 545 728 366 702 369 745 461 014 655 997 933 798 537 483 143 786 841 806 593 422 227 898 388 722 980 000 748 404 719 Graphing the set of points formula_3 in formula_4 and formula_5 which satisfy the formula, results in the following plot: The formula is a general-purpose method of decoding a bitmap stored in the constant formula_2, and it could be used to draw any other image. When applied to the unbounded positive range formula_6, the formula tiles a vertical swath of the plane with a pattern that contains all possible 17-pixel-tall bitmaps. One horizontal slice of that infinite bitmap depicts the drawing formula itself, but this is not remarkable, since other slices depict all other possible formulae that might fit in a 17-pixel-tall bitmap. Tupper has created extended versions of his original formula that rule out all but one slice. The constant formula_2 is a simple monochrome bitmap image of the formula treated as a binary number and multiplied by 17. If formula_2 is divided by 17, the least significant bit encodes the upper-right corner formula_7; the 17 least significant bits encode the rightmost column of pixels; the next 17 least significant bits encode the 2nd-rightmost column, and so on. It fundamentally describes a way to plot points on a two-dimensional surface. The value of formula_2 is the number whose binary digits form the plot. The following plot demonstrates the addition of different values of formula_2. In the fourth subplot, the k-value of "AFGP" and "Aesthetic Function Graph" is added to get the resultant graph, where both texts can be seen with some distortion due to the effects of binary addition. The information regarding the shape of the plot is stored within formula_2. References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
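A short Python sketch (ours, not from Tupper's paper) makes the decoding concrete. It follows the bit layout described above — bit number 17"x" + ("y" mod 17) of "k"/17, with the least significant bits giving the rightmost column — and prints the 106 × 17 bitmap as text; if the output appears mirrored or flipped in your rendering, reverse the column or row order accordingly.

    def tupper_rows(k, width=106, height=17):
        # Decode the bitmap hidden in k. Bit 17*x + y_off of k // 17 is taken as the
        # pixel in column x (counted from the right-hand edge) and row y_off
        # (counted from the top), matching the description of the constant above.
        n = k // 17
        rows = []
        for y_off in range(height):
            chars = []
            for x in range(width - 1, -1, -1):   # leftmost printed column = largest x
                bit = (n >> (17 * x + y_off)) & 1
                chars.append("#" if bit else " ")
            rows.append("".join(chars))
        return rows

    # k = the 543-digit constant quoted above (join the digit groups, removing the spaces)
    # for line in tupper_rows(k):
    #     print(line)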
[ { "math_id": 0, "text": "\\frac{1}{2} < \\left\\lfloor \\mathrm{mod}\\left(\\left\\lfloor \\frac{y}{17} \\right\\rfloor 2^{-17 \\lfloor x \\rfloor - \\mathrm{mod}\\left(\\lfloor y\\rfloor, 17\\right)},2\\right)\\right\\rfloor" }, { "math_id": 1, "text": "\\lfloor \\dots \\rfloor" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "(x, y)" }, { "math_id": 4, "text": "0 \\le x < 106" }, { "math_id": 5, "text": "k \\le y < k + 17" }, { "math_id": 6, "text": "0 \\le y" }, { "math_id": 7, "text": "(k, 0)" } ]
https://en.wikipedia.org/wiki?curid=9032406
9032663
Airway resistance
In respiratory physiology, airway resistance is the resistance of the respiratory tract to airflow during inhalation and exhalation. Airway resistance can be measured using plethysmography. Definition. Analogously to Ohm's law: formula_0 Where: formula_1 So: formula_2 Where: formula_3 is the airway resistance, formula_4 is the pressure difference driving airflow, formula_5 is the atmospheric pressure, formula_6 is the alveolar pressure, and formula_7 is the volumetric flow rate of air. N.B. "P"A and formula_7 change constantly during the respiratory cycle. Determinants of airway resistance. There are several important determinants of airway resistance, including those discussed below. Hagen–Poiseuille equation. In fluid dynamics, the Hagen–Poiseuille equation is a physical law that gives the pressure drop in a fluid flowing through a long cylindrical pipe. The assumptions of the equation are that the flow is laminar, viscous and incompressible, and that the flow is through a constant circular cross-section that is substantially longer than its diameter. The equation is also known as the "Hagen–Poiseuille law", "Poiseuille law" and "Poiseuille equation". formula_8 Where: formula_9 is the pressure drop, formula_10 is the length of the tube, formula_11 is the dynamic viscosity, and formula_12 is the radius of the tube. Dividing both sides by formula_7 and given the above definition shows: formula_13 While the assumptions of the Hagen–Poiseuille equation are not strictly true of the respiratory tract, it serves to show that, because of the fourth power, relatively small changes in the radius of the airways cause large changes in airway resistance. An individual small airway has much greater resistance than a large airway; however, there are many more small airways than large ones. Therefore, resistance is greatest at the bronchi of intermediate size, in between the fourth and eighth bifurcation. Laminar flow versus turbulent flow. Where air is flowing in a laminar manner it has less resistance than when it is flowing in a turbulent manner. If flow becomes turbulent, and the pressure difference is increased to maintain flow, this response itself increases resistance. This means that a large increase in pressure difference is required to maintain flow if it becomes turbulent. Whether flow is laminar or turbulent is complicated; however, generally flow within a pipe will be laminar as long as the Reynolds number is less than 2300. formula_14 where: formula_15 is the Reynolds number, formula_16 is the diameter of the pipe, formula_17 is the mean velocity of the fluid, formula_18 is the dynamic viscosity of the fluid, and formula_19 is the density of the fluid. This shows that larger airways are more prone to turbulent flow than smaller airways. In cases of upper airway obstruction the development of turbulent flow is a very important mechanism of increased airway resistance; this can be treated by administering Heliox, a breathing gas which is much less dense than air and consequently more conducive to laminar flow. Changes in airway resistance. Airway resistance is not constant. As shown above, airway resistance is markedly affected by changes in the diameter of the airways. Therefore, diseases affecting the respiratory tract can increase airway resistance. Airway resistance can also change over time. During an asthma attack the airways constrict, causing an increase in airway resistance. Airway resistance can also vary between inspiration and expiration: In emphysema there is destruction of the elastic tissue of the lungs which helps hold the small airways open. Therefore, during expiration, particularly forced expiration, these airways may collapse, causing increased airway resistance. Derived parameters. Airway conductance (GAW). This is simply the mathematical inverse of airway resistance: formula_20 Specific airway resistance (sRaw). Also called volumic airway resistance: formula_21 where "V" is the lung volume at which "R"AW was measured. Due to the elastic nature of the tissue that supports the small airways, airway resistance changes with lung volume.
It is not practically possible to measure airway resistance at a set absolute lung volume; therefore, specific airway resistance attempts to correct for differences in the lung volume at which different measurements of airway resistance were made. Specific airway resistance is often measured at FRC, in which case: formula_22 Specific airway conductance (sGaw). Also called volumic airway conductance: formula_23 where "V" is the lung volume at which "G"AW was measured. Similarly to specific airway resistance, specific airway conductance attempts to correct for differences in lung volume. Specific airway conductance is often measured at FRC, in which case: formula_24 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
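To make the fourth-power dependence concrete, the following Python sketch (ours; the numbers are illustrative rather than physiological) evaluates the Hagen–Poiseuille resistance of a tube before and after its radius is halved.

    import math

    def poiseuille_resistance(eta, length, radius):
        # R = 8 * eta * l / (pi * r**4), as in the equation above
        return 8 * eta * length / (math.pi * radius ** 4)

    eta = 1.8e-5            # Pa*s, approximate dynamic viscosity of air
    length = 0.01           # m, an illustrative airway segment length
    r_normal = 1.0e-3       # m
    r_narrowed = 0.5e-3     # m, radius halved (e.g. by bronchoconstriction)

    R_normal = poiseuille_resistance(eta, length, r_normal)
    R_narrowed = poiseuille_resistance(eta, length, r_narrowed)
    print(R_narrowed / R_normal)   # 16.0 -- halving the radius multiplies resistance by 2**4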
[ { "math_id": 0, "text": "R_{AW} = \\frac {{\\Delta}P}{\\dot V}" }, { "math_id": 1, "text": "{\\Delta P} = P_{ATM} - P_A" }, { "math_id": 2, "text": "R_{AW} = \\frac {P_{\\mathrm{ATM}} - P_{\\mathrm{A}}}{\\dot V}" }, { "math_id": 3, "text": "R_{AW}" }, { "math_id": 4, "text": "{\\Delta}P" }, { "math_id": 5, "text": "P_{ATM}" }, { "math_id": 6, "text": "P_A" }, { "math_id": 7, "text": "\\dot V" }, { "math_id": 8, "text": " {\\Delta P} = \\frac{8 \\eta l {\\dot V}}{ \\pi r^4} " }, { "math_id": 9, "text": "\\Delta P" }, { "math_id": 10, "text": "l" }, { "math_id": 11, "text": " \\eta " }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": " R = \\frac{8 \\eta l}{\\pi r^{4}} " }, { "math_id": 14, "text": "Re = {{\\rho {\\mathrm v} d} \\over \\mu}" }, { "math_id": 15, "text": "Re" }, { "math_id": 16, "text": "d" }, { "math_id": 17, "text": "{\\mathbf \\mathrm v}" }, { "math_id": 18, "text": "{\\mu}" }, { "math_id": 19, "text": "{\\rho}\\," }, { "math_id": 20, "text": "G_{AW} = \\frac{1}{R_{AW}}" }, { "math_id": 21, "text": "sR_{AW} = {R_{AW}}{V}" }, { "math_id": 22, "text": "sR_{AW} = {R_{AW}}\\times{FRC}" }, { "math_id": 23, "text": "sG_{AW} = \\frac{G_{AW}}{V} = \\frac{1}{R_{AW}V} = \\frac{1}{sR_{AW}}" }, { "math_id": 24, "text": "sG_{AW} = \\frac{G_{AW}}{FRC}" } ]
https://en.wikipedia.org/wiki?curid=9032663