id | title | text | formulas | url
---|---|---|---|---|
826723 | Angular diameter | How large a sphere or circle appears
The angular diameter, angular size, apparent diameter, or apparent size is an angular distance describing how large a sphere or circle appears from a given point of view. In the vision sciences, it is called the visual angle, and in optics, it is the angular aperture (of a lens). The angular diameter can alternatively be thought of as the angular displacement through which an eye or camera must rotate to look from one side of an apparent circle to the opposite side. Humans can resolve with their naked eyes diameters down to about 1 arcminute (approximately 0.017° or 0.0003 radians). This corresponds to 0.3 m at a 1 km distance, or to perceiving Venus as a disk under optimal conditions.
Formula.
The angular diameter of a circle whose plane is perpendicular to the displacement vector between the point of view and the center of said circle can be calculated using the formula
formula_0
in which formula_1 is the angular diameter, formula_2 is the actual diameter of the object, and formula_3 is the distance to the object. When formula_4, we have formula_5, and the result obtained is in radians.
For a spherical object whose "actual" diameter equals formula_6 and where formula_3 is the distance to the "center" of the sphere, the angular diameter can be found by the following modified formula
formula_7
The difference is due to the fact that the apparent edges of a sphere are its tangent points, which are closer to the observer than the center of the sphere, and have a distance between them which is smaller than the actual diameter. The above formula can be found by understanding that in the case of a spherical object, a right triangle can be constructed such that its three vertices are the observer, the center of the sphere, and one of the sphere's tangent points, with formula_3 as the hypotenuse and formula_8 as the sine.
The difference is significant only for spherical objects of large angular diameter, since the following small-angle approximations hold for small values of formula_9:
formula_10
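As a quick numerical check, the three expressions can be compared directly. The sketch below (Python) uses round figures for the Moon (a diameter of about 3,474 km at a mean distance of about 384,400 km); these values are assumptions for illustration only:

import math

d = 3474.0      # approximate lunar diameter in km (assumed round value)
D = 384400.0    # approximate mean Earth–Moon distance in km (assumed round value)

delta_circle = 2 * math.atan(d / (2 * D))   # flat circle perpendicular to the line of sight
delta_sphere = 2 * math.asin(d / (2 * D))   # sphere, measured between tangent points
delta_small  = d / D                        # small-angle approximation, in radians

# For such a small angle all three agree to well under a percent,
# each about 0.52 degrees (roughly 31 arcminutes):
print(math.degrees(delta_circle), math.degrees(delta_sphere), math.degrees(delta_small))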
Estimating angular diameter using the hand.
Estimates of angular diameter may be obtained by holding the hand at right angles to a fully extended arm, as shown in the figure.
Use in astronomy.
In astronomy, the sizes of celestial objects are often given in terms of their angular diameter as seen from Earth, rather than their actual sizes. Since these angular diameters are typically small, it is common to present them in arcseconds (″). An arcsecond is 1/3600th of one degree (1°) and a radian is 180/"π" degrees. So one radian equals 3,600 × 180/formula_11 arcseconds, which is about 206,265 arcseconds (1 rad ≈ 206,264.806247"). Therefore, the angular diameter of an object with physical diameter "d" at a distance "D", expressed in arcseconds, is given by:
formula_12.
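As a sketch of how the arcsecond form is used (the solar diameter of about 1.39 × 10⁹ m and the light-year of about 9.46 × 10¹⁵ m are assumed round values):

d_sun = 1.39e9     # approximate solar diameter in metres (assumed)
one_ly = 9.46e15   # approximate length of one light-year in metres (assumed)

delta = 206265 * d_sun / one_ly
print(delta)       # about 0.03 arcseconds, matching the figure quoted below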
These objects have an angular diameter of 1″:
Thus, the angular diameter of Earth's orbit around the Sun as viewed from a distance of 1 pc is 2″, as 1 AU is the mean radius of Earth's orbit.
The angular diameter of the Sun, from a distance of one light-year, is 0.03″, and that of Earth 0.0003″. The angular diameter 0.03″ of the Sun given above is approximately the same as that of a human body at a distance of the diameter of Earth.
This table shows the angular sizes of noteworthy celestial bodies as seen from Earth:
The angular diameter of the Sun, as seen from Earth, is about 250,000 times that of Sirius. (Sirius has twice the diameter and its distance is 500,000 times as much; the Sun is 10¹⁰ times as bright, corresponding to an angular diameter ratio of 10⁵, so Sirius is roughly 6 times as bright per unit solid angle.)
The angular diameter of the Sun is also about 250,000 times that of Alpha Centauri A (it has about the same diameter and the distance is 250,000 times as much; the Sun is 4×10¹⁰ times as bright, corresponding to an angular diameter ratio of 200,000, so Alpha Centauri A is a little brighter per unit solid angle).
The angular diameter of the Sun is about the same as that of the Moon. (The Sun's diameter is 400 times as large and its distance also; the Sun is 200,000 to 500,000 times as bright as the full Moon (figures vary), corresponding to an angular diameter ratio of 450 to 700, so a celestial body with a diameter of 2.5–4″ and the same brightness per unit solid angle would have the same brightness as the full Moon.)
Even though Pluto is physically larger than Ceres, when viewed from Earth (e.g., through the Hubble Space Telescope) Ceres has a much larger apparent size.
Angular sizes measured in degrees are useful for larger patches of sky. (For example, the three stars of Orion's Belt cover about 4.5° of angular size.) However, much finer units are needed to measure the angular sizes of galaxies, nebulae, or other objects of the night sky.
Degrees, therefore, are subdivided as follows:
To put this in perspective, the full Moon as viewed from Earth is about 1⁄2°, or 30′ (or 1800″). The Moon's motion across the sky can be measured in angular size: approximately 15° every hour, or 15″ per second. A one-mile-long line painted on the face of the Moon would appear from Earth to be about 1″ in length.
In astronomy, it is typically difficult to directly measure the distance to an object, yet the object may have a known physical size (perhaps it is similar to a closer object with known distance) and a measurable angular diameter. In that case, the angular diameter formula can be inverted to yield the angular diameter distance to distant objects as
formula_13
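In ordinary Euclidean geometry this inversion is a one-liner; the following sketch reuses the approximate lunar figures from the earlier example and is illustrative only:

import math

def distance_from_angular_diameter(physical_diameter, angular_diameter_rad):
    # Invert delta = 2*arctan(d / (2*D)) for the distance D.
    return physical_diameter / (2 * math.tan(angular_diameter_rad / 2))

# An object 3,474 km across subtending about 0.52 degrees lies roughly
# 383,000 km away, consistent with the Earth–Moon distance:
print(distance_from_angular_diameter(3474.0, math.radians(0.52)))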
In non-Euclidean space, such as our expanding universe, the angular diameter distance is only one of several definitions of distance, so that there can be different "distances" to the same object. See Distance measures (cosmology).
Non-circular objects.
Many deep-sky objects such as galaxies and nebulae appear non-circular and are thus typically given two measures of diameter: major axis and minor axis. For example, the Small Magellanic Cloud has a visual apparent diameter of 5° 20′ × 3° 5′.
Defect of illumination.
Defect of illumination is the maximum angular width of the unilluminated part of a celestial body seen by a given observer. For example, if an object is 40″ of arc across and is 75% illuminated, the defect of illumination is 10″.
| [
{
"math_id": 0,
"text": "\\delta = 2\\arctan \\left(\\frac{d}{2D}\\right),"
},
{
"math_id": 1,
"text": "\\delta"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "D \\gg d"
},
{
"math_id": 5,
"text": "\\delta \\approx d / D"
},
{
"math_id": 6,
"text": "d_\\mathrm{act},"
},
{
"math_id": 7,
"text": "\\delta = 2\\arcsin \\left(\\frac{d_\\mathrm{act}}{2D}\\right)"
},
{
"math_id": 8,
"text": "\\frac{d_\\mathrm{act}}{2D}"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "\\arcsin x \\approx \\arctan x \\approx x."
},
{
"math_id": 11,
"text": "\\pi"
},
{
"math_id": 12,
"text": "\\delta = 206,265 ~ (d / D) ~ \\mathrm{arcseconds}"
},
{
"math_id": 13,
"text": "d \\equiv 2 D \\tan \\left( \\frac{\\delta}{2} \\right)."
}
] | https://en.wikipedia.org/wiki?curid=826723 |
826868 | Cutting stock problem | Mathematical problem in operations research
In operations research, the cutting-stock problem is the problem of cutting standard-sized pieces of stock material, such as paper rolls or sheet metal, into pieces of specified sizes while minimizing material wasted. It is an optimization problem in mathematics that arises from applications in industry. In terms of computational complexity, the problem is an NP-hard problem reducible to the knapsack problem. The problem can be formulated as an integer linear programming problem.
Illustration of one-dimensional cutting-stock problem.
A paper machine can produce an unlimited number of master (jumbo) rolls, each 5600 mm wide. The following 13 items must be cut, in the table below.
The important thing about this kind of problem is that many different product units can be made from the same master roll, and the number of possible combinations is itself very large, in general, and not trivial to enumerate.
The problem therefore is to find an optimum set of patterns of making product rolls from the master roll, such that the demand is satisfied and waste is minimized.
Bounds and checks.
A simple lower bound is obtained by dividing the total amount of product by the size of each master roll. The total product required is 1380 x 22 + 1520 x 25 + ... + 2200 x 20 = 407160 mm. Each master roll is 5600 mm, requiring a minimum of 72.7 rolls, which means 73 rolls or more are required.
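This check is easy to reproduce in a few lines of Python; the 13 order widths and quantities below are the instance's order data as assumed here, and they reproduce the quoted total of 407,160 mm:

widths  = [1380, 1520, 1560, 1710, 1820, 1880, 1930, 2000, 2050, 2100, 2140, 2150, 2200]   # mm
demands = [  22,   25,   12,   14,   18,   18,   20,   10,   12,   14,   16,   18,   20]   # rolls ordered

master_width = 5600                                   # width of a master (jumbo) roll in mm
total = sum(w * q for w, q in zip(widths, demands))   # 407160 mm of ordered product
lower_bound = -(-total // master_width)               # ceiling division: 73 master rolls
print(total, lower_bound)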
Solution.
There are 308 possible patterns for this small instance. The optimal answer requires 73 master rolls and has 0.401% waste; it can be shown computationally that in this case the minimum number of patterns with this level of waste is 10. It can also be computed that 19 different such solutions exist, each with 10 patterns and a waste of 0.401%, of which one such solution is shown below and in the picture:
Classification.
Cutting-stock problems can be classified in several ways. One way is the dimensionality of the cutting: the above example illustrates a one-dimensional (1D) problem; other industrial applications of 1D occur when cutting pipes, cables, and steel bars. Two-dimensional (2D) problems are encountered in furniture, clothing and glass production. When either the master item or the required parts are irregular-shaped (a situation often encountered in the leather, textile, metals industries) this is referred to as the "nesting" problem.
Not many three-dimensional (3D) applications involving cutting are known; however the closely related 3D packing problem has many industrial applications, such as packing objects into shipping containers (see e.g. containerization: the related sphere packing problem has been studied since the 17th century (Kepler conjecture)).
Applications.
Industrial applications of cutting-stock problems for high production volumes arise especially when basic material is produced in large rolls that are further cut into smaller units (see roll slitting). This is done e.g. in paper and plastic film industries but also in production of flat metals like steel or brass. There are many variants and additional constraints arising from special production constraints due to machinery and process limits, customer requirements and quality issues; some examples are:
History.
The cutting stock problem was first formulated by Kantorovich in 1939. In 1951, before computers became widely available, L. V. Kantorovich and V. A. Zalgaller suggested solving the problem of the economical use of material at the cutting stage with the help of linear programming. The proposed technique was later called the "column generation method".
Mathematical formulation and solution approaches.
The standard formulation for the cutting-stock problem (but not the only one) starts with a list of "m" orders, each requiring formula_0 pieces, where formula_1. We then construct a list of all possible combinations of cuts (often called "patterns" or "configurations"). Let formula_2 be the number of those patterns. We associate with each pattern a positive integer variable formula_3, representing how many times pattern formula_4 is to be used, where formula_5. The linear integer program is then:
formula_6
formula_7
formula_8, integer
where formula_9 is the number of times order formula_10 appears in pattern formula_4 and formula_11 is the cost (often the waste) of pattern formula_4. The precise nature of the quantity constraints can lead to subtly different mathematical characteristics. The above formulation's quantity constraints are minimum constraints (at least the given amount of each order must be produced, but possibly more).
When formula_12, the objective minimises the number of utilised master items and, if the constraint for the quantity to be produced is replaced by equality, it is called the bin packing problem.
The most general formulation has two-sided constraints (and in this case a minimum-waste solution may consume more than the minimum number of master items):
formula_13
This formulation applies not just to one-dimensional problems. Many variations are possible, including one where the objective is not to minimise the waste, but to maximise the total value of the produced items, allowing each order to have a different value.
In general, the number of possible patterns grows exponentially as a function of "m", the number of orders. As the number of orders increases, it may therefore become impractical to enumerate the possible cutting patterns.
An alternative approach uses delayed column-generation. This method solves the cutting-stock problem by starting with just a few patterns. It generates additional patterns when they are needed. For the one-dimensional case, the new patterns are introduced by solving an auxiliary optimization problem called the knapsack problem, using dual variable information from the linear program. The knapsack problem has well-known methods to solve it, such as branch and bound and dynamic programming. The Delayed Column Generation method can be much more efficient than the original approach, particularly as the size of the problem grows. The column generation approach as applied to the cutting stock problem was pioneered by Gilmore and Gomory in a series of papers published in the 1960s. Gilmore and Gomory showed that this approach is guaranteed to converge to the (fractional) optimal solution, without needing to enumerate all the possible patterns in advance.
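A sketch of the pricing step for the one-dimensional case is shown below. It is an illustrative dynamic-programming knapsack solver rather than any particular published implementation; the names (widths, duals, master_width) are placeholders, and duals would come from the current restricted master LP. With unit pattern costs, a returned pattern whose total dual value exceeds 1 has negative reduced cost and is added as a new column:

def price_new_pattern(widths, duals, master_width):
    # Unbounded knapsack by dynamic programming: best[c] is the largest total
    # dual value collectable by a pattern whose pieces have total width <= c.
    best = [0.0] * (master_width + 1)
    take = [None] * (master_width + 1)
    for c in range(1, master_width + 1):
        best[c] = best[c - 1]
        for j, w in enumerate(widths):
            if w <= c and best[c - w] + duals[j] > best[c]:
                best[c] = best[c - w] + duals[j]
                take[c] = j
    # Walk back through the table to recover how many pieces of each order
    # the best pattern contains.
    pattern, c = [0] * len(widths), master_width
    while c > 0:
        j = take[c]
        if j is None:
            c -= 1
        else:
            pattern[j] += 1
            c -= widths[j]
    return pattern, best[master_width]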
A limitation of the original Gilmore and Gomory method is that it does not handle integrality, so the solution may contain fractions, e.g. a particular pattern should be produced 3.67 times. Rounding to the nearest integer often does not work, in the sense that it may lead to a sub-optimal solution and/or under- or over-production of some of the orders (and possible infeasibility in the presence of two-sided demand constraints). This limitation is overcome in modern algorithms, which can solve to optimality (in the sense of finding solutions with minimum waste) very large instances of the problem (generally larger than encountered in practice).
The cutting-stock problem is often highly degenerate, in that multiple solutions with the same amount of waste are possible. This degeneracy arises because it is possible to move items around, creating new patterns, without affecting the amount of waste. This gives rise to a whole collection of related problems which are concerned with some other criterion, such as the following:
| [
{
"math_id": 0,
"text": "q_j"
},
{
"math_id": 1,
"text": "j = 1,\\ldots,m"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "i = 1,\\ldots,C"
},
{
"math_id": 6,
"text": "\\min\\sum_{i=1}^C c_i x_i"
},
{
"math_id": 7,
"text": "\\text{s.t.}\\sum_{i=1}^C a_{ij} x_i \\ge q_j, \\quad \\quad \\forall j=1,\\dots,m"
},
{
"math_id": 8,
"text": "x_i \\ge 0"
},
{
"math_id": 9,
"text": "a_{ij}"
},
{
"math_id": 10,
"text": "j"
},
{
"math_id": 11,
"text": "c_i"
},
{
"math_id": 12,
"text": "c_i=1"
},
{
"math_id": 13,
"text": "q_j \\le \\sum_{i=1}^n a_{ij} x_i \\le Q_j, \\quad \\quad \\forall j=1,\\dots,m"
}
] | https://en.wikipedia.org/wiki?curid=826868 |
826885 | Braess's paradox | Paradox related to increasing roadway capacity
Braess's paradox is the observation that adding one or more roads to a road network can slow down overall traffic flow through it. The paradox was first discovered by Arthur Pigou in 1920, and later named after the German mathematician Dietrich Braess in 1968.
The paradox may have analogies in electrical power grids and biological systems. It has been suggested that, in theory, the improvement of a malfunctioning network could be accomplished by removing certain parts of it. The paradox has been used to explain instances of improved traffic flow when existing major roads are closed.
Discovery and definition.
While working on traffic modelling, Dietrich Braess, a mathematician at Ruhr University, Germany, noticed that the flow in a road network could be impeded by adding a new road. His idea was that if each driver is making the optimal self-interested decision as to which route is quickest, a shortcut could be chosen too often for drivers to have the shortest travel times possible. More formally, the idea behind Braess's discovery is that the Nash equilibrium may not equate with the best overall flow through a network.
The paradox is stated as follows: "For each point of a road network, let there be given the number of cars starting from it and the destination of the cars. Under these conditions, one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favourable to them, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times."
Adding extra capacity to a network when the moving entities selfishly choose their route can in some cases reduce overall performance. That is because the Nash equilibrium of such a system is not necessarily optimal. The network change induces a new game structure which leads to a (multiplayer) prisoner's dilemma. In a Nash equilibrium, drivers have no incentive to change their routes. While the system is not in a Nash equilibrium, individual drivers are able to improve their respective travel times by changing the routes they take. In the case of Braess's paradox, drivers will continue to switch until they reach Nash equilibrium despite the reduction in overall performance.
If the latency functions are linear, adding an edge can never make total travel time at equilibrium worse by a factor of more than 4/3.
Possible instances of the paradox in action.
Prevalence.
In 1983, Steinberg and Zangwill provided, under reasonable assumptions, the necessary and sufficient conditions for Braess's paradox to occur in a general transportation network when a new route is added. (Note that their result applies to the addition of "any" new route, not just to the case of adding a single link.) As a corollary, they obtain that Braess's paradox is about as likely to occur as not occur when a random new route is added.
Traffic.
Braess's paradox has a counterpart in case of a reduction of the road network (which may cause a reduction of individual commuting time).
In Seoul, South Korea, traffic around the city sped up when a motorway was removed as part of the Cheonggyecheon restoration project. In Stuttgart, Germany, after investments into the road network in 1969, the traffic situation did not improve until a section of newly built road was closed for traffic again. In 1990 the temporary closing of 42nd Street in Manhattan, New York City, for Earth Day reduced the amount of congestion in the area. In 2008 Youn, Gastner and Jeong demonstrated specific routes in Boston, New York City and London where that might actually occur and pointed out roads that could be closed to reduce predicted travel times. In 2009, New York experimented with closures of Broadway at Times Square and Herald Square, which resulted in improved traffic flow and permanent pedestrian plazas.
In 2012, Paul Lecroart, of the institute of planning and development of the Île-de-France, wrote that "Despite initial fears, the removal of main roads does not cause deterioration of traffic conditions beyond the starting adjustments. The traffic transfers are limited and below expectations". He also notes that some private vehicle trips (and related economic activity) are not transferred to public transport and simply disappear ("evaporate").
The same phenomenon was also observed when road closing was not part of an urban project but the consequence of an accident. In 2012 in Rouen, a bridge was destroyed by fire. Over the next two years, other bridges were used more, but the total number of cars crossing bridges was reduced.
Electricity.
In 2012, scientists at the Max Planck Institute for Dynamics and Self-Organization demonstrated, through computational modelling, the potential for the phenomenon to occur in power transmission networks where power generation is decentralized.
In 2012, an international team of researchers from Institut Néel (CNRS, France), INP (France), IEMN (CNRS, France) and UCL (Belgium) published in "Physical Review Letters" a paper showing that Braess's paradox may occur in mesoscopic electron systems. In particular, they showed that adding a path for electrons in a nanoscopic network paradoxically reduced its conductance. That was shown both by simulations as well as experiments at low temperature using scanning gate microscopy.
Springs.
A model with springs and ropes can show that a hung weight can rise in height despite a taut rope in the hanging system being cut, and follows from the same mathematical structure as the original Braess's paradox.
For two identical springs joined in series by a short rope, their total spring constant is half of each individual spring, resulting in a long stretch when a certain weight is hung. This remains the case as we add two longer ropes in slack to connect the lower end of the upper spring to the hung weight (lower end of the lower spring), and the upper end of the lower spring to the hanging point (upper end of the upper spring). However, when the short rope is cut, the longer ropes become taut, and the two springs become parallel (in the mechanical sense) to each other. The total spring constant is twice that of each individual spring, and when the length of the long ropes is not too long, the hung weight will actually be higher compared to before the short rope was cut.
The fact that the hung weight rises despite cutting a taut rope (the short rope) in the hanging system is counter-intuitive, but it does follow from Hooke's law and the way springs work in series and in parallel.
Biology.
Adilson E. Motter and collaborators demonstrated that Braess's paradox outcomes may often occur in biological and ecological systems. Motter suggests removing part of a perturbed network could rescue it. For resource management of endangered species food webs, in which extinction of many species might follow sequentially, selective removal of a doomed species from the network could in principle bring about the positive outcome of preventing a series of further extinctions.
Team sports strategy.
It has been suggested that in basketball, a team can be seen as a network of possibilities for a route to scoring a basket, with a different efficiency for each pathway, and a star player could reduce the overall efficiency of the team, analogous to a shortcut that is overused increasing the overall times for a journey through a road network. A proposed solution for maximum efficiency in scoring is for a star player to shoot about the same number of shots as teammates. However, this approach is not supported by hard statistical evidence, as noted in the original paper.
Blockchain networks.
Braess's paradox has been shown to appear in blockchain payment channel networks, also known as layer-2 networks. Payment channel networks implement a solution to the scalability problem of blockchain networks, allowing transactions of high rates without recording them on the blockchain. In such a network, users can establish a channel by locking funds on each side of the channel. Transactions are executed either through a channel connecting directly the payer and payee or through a path of channels with intermediate users that ask for some fees.
While intuitively, opening new channels allows higher routing flexibility, adding a new channel might cause higher fees, and similarly closing existing channels might decrease fees. The paper presented a theoretical analysis with conditions for the paradox, methods for mitigating the paradox as well as an empirical analysis, showing the appearance in practice of the paradox and its effects on Bitcoin's Lightning network.
Mathematical approach.
Example.
Consider a road network as shown in the adjacent diagram on which 4000 drivers wish to travel from point Start to End. The travel time in minutes on the Start–A road is the number of travellers (T) divided by 100, and on Start–B is a constant 45 minutes (likewise with the roads across from them). If the dashed road does not exist (so the traffic network has 4 roads in total), the time needed to drive the Start–A–End route with formula_0 drivers would be formula_1. The time needed to drive the Start–B–End route with formula_2 drivers would be formula_3. As there are 4000 drivers, formula_4, and at equilibrium formula_5. Therefore, each route takes formula_6 minutes. If either route took less time, it would not be a Nash equilibrium: a rational driver would switch from the longer route to the shorter route.
Now suppose the dashed line A–B is a road with an extremely short travel time of approximately 0 minutes. Suppose that the road is opened and one driver tries Start–A–B–End. To his surprise he finds that his time is formula_7 minutes, a saving of almost 25 minutes. Soon, more of the 4000 drivers are trying this new route. The time taken rises from 40.01 and keeps climbing. When the number of drivers trying the new route reaches 2500, with 1500 still in the Start–B–End route, their time will be formula_8 minutes, which is no improvement over the original route. Meanwhile, those 1500 drivers have been slowed to formula_9 minutes, a 20-minute increase. They are obliged to switch to the new route via A too, so it now takes formula_10 minutes. Nobody has any incentive to travel A-End or Start-B because any driver trying them will take 85 minutes. Thus, the opening of the cross route triggers an irreversible change to it by everyone, costing everyone 80 minutes instead of the original 65. If every driver were to agree not to use the A–B path, or if that route were closed, every driver would benefit by a 15-minute reduction in travel time.
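The arithmetic of this example is easy to verify directly (a minimal sketch; the travel-time functions are exactly those defined above):

drivers = 4000

# Without the shortcut: a drivers take Start–A–End, the remaining 4000 - a take Start–B–End.
def route_times(a):
    return a / 100 + 45, (drivers - a) / 100 + 45

print(route_times(2000))                # (65.0, 65.0): the original equilibrium

# With the zero-minute A–B link, every driver ends up on Start–A–B–End,
# so both congestible edges carry all 4000 cars:
print(drivers / 100 + drivers / 100)    # 80.0 minutes for everyone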
Existence of an equilibrium.
If one assumes the travel time for each person driving on an edge to be equal, an equilibrium will always exist.
Let formula_11 be the formula for the travel time of each person traveling along edge formula_12 when formula_13 people take that edge. Suppose there is a traffic graph with formula_14 people driving along edge formula_12. Let the energy of formula_12, formula_15, be
formula_16
(If formula_17 let formula_18). Let the total energy of the traffic graph be the sum of the energies of every edge in the graph.
Take a choice of routes that minimizes the total energy. Such a choice must exist because there are finitely many choices of routes. That will be an equilibrium.
Assume, for contradiction, this is not the case. Then, there is at least one driver who can switch the route and improve the travel time. Suppose the original route is formula_19 while the new route is formula_20. Let formula_21 be the total energy of the traffic graph, and consider what happens when the route formula_22 is removed. The energy of each edge formula_23 will be reduced by formula_24 and so formula_21 will be reduced by formula_25. That is simply the total travel time needed to take the original route. If the new route is then added, formula_20, the total energy formula_21 will be increased by the total travel time needed to take the new route. Because the new route is shorter than the original route, formula_21 must decrease relative to the original configuration, contradicting the assumption that the original set of routes minimized the total energy.
Therefore, the choice of routes minimizing total energy is an equilibrium.
Finding an equilibrium.
The above proof outlines a procedure known as best response dynamics, which finds an equilibrium for a linear traffic graph and terminates in a finite number of steps. The algorithm is termed "best response" because at each step of the algorithm, if the graph is not at equilibrium then some driver has a best response to the strategies of all other drivers and switches to that response.
Pseudocode for Best Response Dynamics:
Let "P" be some traffic pattern.
while "P" is not at equilibrium:
compute the potential energy "e" of "P"
for each driver "d" in "P":
for each alternate path "p" available to "d":
compute the potential energy "n" of the pattern when "d" takes path "p"
if "n" < "e":
modify "P" so that "d" takes path "p"
continue the topmost while
At each step, if some particular driver could do better by taking an alternate path (a "best response"), doing so strictly decreases the energy of the graph. If no driver has a best response, the graph is at equilibrium. Since the energy of the graph strictly decreases with each step, the best response dynamics algorithm must eventually halt.
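For concreteness, the sketch below applies this loop to the four-road example from the previous section, with the A–B shortcut added. The route and edge names are made up for illustration, and drivers are represented only by per-route counts:

routes = {
    "Start-A-End":   ["SA", "AE"],
    "Start-B-End":   ["SB", "BE"],
    "Start-A-B-End": ["SA", "AB", "BE"],
}
edge_time = {
    "SA": lambda n: n / 100,   # congestible
    "AE": lambda n: 45,        # constant
    "SB": lambda n: 45,        # constant
    "BE": lambda n: n / 100,   # congestible
    "AB": lambda n: 0,         # the new zero-time shortcut
}
counts = {"Start-A-End": 2000, "Start-B-End": 2000, "Start-A-B-End": 0}

def edge_load(e):
    return sum(n for r, n in counts.items() if e in routes[r])

def current_time(r):
    return sum(edge_time[e](edge_load(e)) for e in routes[r])

def time_after_switch(r_from, r_to):
    # Travel time one driver would see after moving from r_from to r_to;
    # edges shared by both routes keep the same load.
    return sum(edge_time[e](edge_load(e) + (0 if e in routes[r_from] else 1))
               for e in routes[r_to])

moved = True
while moved:                      # keep applying best responses until no one benefits
    moved = False
    for r in routes:
        if counts[r] == 0:
            continue
        for s in routes:
            if s != r and time_after_switch(r, s) < current_time(r) - 1e-9:
                counts[r] -= 1
                counts[s] += 1
                moved = True
                break

print(counts)                            # everyone ends up on Start-A-B-End
print(current_time("Start-A-B-End"))     # 80.0 minutes, as in the example above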
How far from optimal is traffic at equilibrium?
If the travel time functions are linear, that is formula_26 for some formula_27, then at worst, traffic in the energy-minimizing equilibrium is twice as bad as socially optimal.
Proof: Let formula_28 be some traffic configuration, with associated energy formula_29 and total travel time formula_30. For each edge, the energy is the sum of an arithmetic progression, and using the formula for the sum of an arithmetic progression, one can show that formula_31. If formula_32 is the socially-optimal traffic flow and formula_33 is the energy-minimizing traffic flow, the inequality implies that formula_34.
Thus, the total travel time for the energy-minimizing equilibrium is at most twice as bad as for the optimal flow.
Dynamics analysis of Braess's paradox.
In 2013, Dal Forno and Merlone interpreted Braess's paradox as a dynamical ternary choice problem. The analysis shows how the new path changes the problem. Before the new path is available, the dynamics is the same as in binary choices with externalities, but the new path transforms it into a ternary choice problem. The addition of an extra resource enriches the complexity of the dynamics. In fact, there can even be coexistence of cycles, and the implication of the paradox on the dynamics can be seen from both a geometrical and an analytical perspective.
Effect of network topology.
Milchtaich proved that Braess's paradox may occur if and only if the network is not a series-parallel graph.
| [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "\\tfrac{a}{100} + 45"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "\\tfrac{b}{100} + 45"
},
{
"math_id": 4,
"text": "a + b = 4000"
},
{
"math_id": 5,
"text": "a = b = 2000"
},
{
"math_id": 6,
"text": "\\tfrac{2000}{100} + 45 = 65"
},
{
"math_id": 7,
"text": "\\tfrac{2000}{100} + \\tfrac{2001}{100} = 40.01"
},
{
"math_id": 8,
"text": "\\tfrac{2500}{100} + \\tfrac{4000}{100} = 65"
},
{
"math_id": 9,
"text": " 45 + \\tfrac{4000}{100} = 85"
},
{
"math_id": 10,
"text": "\\tfrac{4000}{100} + \\tfrac{4000}{100} = 80"
},
{
"math_id": 11,
"text": "L_e(x)"
},
{
"math_id": 12,
"text": "e"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "x_e"
},
{
"math_id": 15,
"text": "E(e)"
},
{
"math_id": 16,
"text": "\\sum_{i=1}^{x_e} L_e(i) = L_e(1) + L_e(2) + \\cdots + L_e(x_e)"
},
{
"math_id": 17,
"text": "x_e = 0"
},
{
"math_id": 18,
"text": "E(e) = 0"
},
{
"math_id": 19,
"text": "e_0, e_1, \\ldots, e_n"
},
{
"math_id": 20,
"text": "e'_0, e'_1, \\ldots, e'_m"
},
{
"math_id": 21,
"text": "E"
},
{
"math_id": 22,
"text": "e_0, e_1, ... e_n"
},
{
"math_id": 23,
"text": "e_i"
},
{
"math_id": 24,
"text": "L_{e_i}(x_{e_i})"
},
{
"math_id": 25,
"text": "\\sum_{i=0}^n L_{e_i}(x_{e_i})"
},
{
"math_id": 26,
"text": "L_e(x) = a_e x + b_e"
},
{
"math_id": 27,
"text": "a_e, b_e \\geq 0"
},
{
"math_id": 28,
"text": "Z"
},
{
"math_id": 29,
"text": "E(Z)"
},
{
"math_id": 30,
"text": "T(Z)"
},
{
"math_id": 31,
"text": "E(Z)\\leq T(Z)\\leq 2E(Z)"
},
{
"math_id": 32,
"text": "Z_o"
},
{
"math_id": 33,
"text": "Z_e"
},
{
"math_id": 34,
"text": "T(Z_e) \\leq 2E(Z_e) \\leq 2E(Z_o) \\leq 2T(Z_o)"
}
] | https://en.wikipedia.org/wiki?curid=826885 |
826951 | Data-flow analysis | Method of analyzing variables in software
Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control-flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions.
A simple way to perform data-flow analysis of programs is to set up data-flow equations for each node of the control-flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e., it reaches a fixpoint. This general approach, also known as "Kildall's method", was developed by Gary Kildall while teaching at the Naval Postgraduate School.
Basic principles.
Data-flow analysis is the process of collecting information about the way the variables are defined and used in the program. It attempts to obtain particular information at each point in a procedure. Usually, it is enough to obtain this information at the boundaries of basic blocks, since from that it is easy to compute the information at points in the basic block. In forward flow analysis, the exit state of a block is a function of the block's entry state. This function is the composition of the effects of the statements in the block. The entry state of a block is a function of the exit states of its predecessors. This yields a set of data-flow equations:
For each block b:
formula_0
formula_1
In this, formula_2 is the transfer function of the block formula_3. It works on the entry state formula_4, yielding the exit state formula_5. The join operation formula_6 combines the exit states of the predecessors formula_7 of formula_3, yielding the entry state of formula_3.
After solving this set of equations, the entry and/or exit states of the blocks can be used to derive properties of the program at the block boundaries. The transfer function of each statement separately can be applied to get information at a point inside a basic block.
Each particular type of data-flow analysis has its own specific transfer function and join operation. Some data-flow problems require backward flow analysis. This follows the same plan, except that the transfer function is applied to the exit state yielding the entry state, and the join operation works on the entry states of the successors to yield the exit state.
The entry point (in forward flow) plays an important role: Since it has no predecessors, its entry state is well defined at the start of the analysis. For instance, the set of local variables with known values is empty. If the control-flow graph does not contain cycles (there were no explicit or implicit loops in the procedure), solving the equations is straightforward. The control-flow graph can then be topologically sorted; running in the order of this sort, the entry states can be computed at the start of each block, since all predecessors of that block have already been processed, so their exit states are available. If the control-flow graph does contain cycles, a more advanced algorithm is required.
An iterative algorithm.
The most common way of solving the data-flow equations is by using an iterative algorithm. It starts with an approximation of the in-state of each block. The out-states are then computed by applying the transfer functions on the in-states. From these, the in-states are updated by applying the join operations. The latter two steps are repeated until we reach the so-called fixpoint: the situation in which the in-states (and the out-states in consequence) do not change.
A basic algorithm for solving data-flow equations is the round-robin iterative algorithm:
for "i" ← 1 to "N"
"initialize node i"
while ("sets are still changing")
for "i" ← 1 to "N"
"recompute sets at node i"
Convergence.
To be usable, the iterative approach should actually reach a fixpoint. This can be guaranteed
by imposing constraints on the combination of the value domain of the states, the transfer functions and the join operation.
The value domain should be a partial order with finite height (i.e., there are no infinite ascending chains formula_8 < formula_9 < ...). The combination of the transfer function and the join operation should be monotonic with respect to this partial order. Monotonicity ensures that on each iteration the value will either stay the same or will grow larger, while finite height ensures that it cannot grow indefinitely. Thus we will ultimately reach a situation where T(x) = x for all x, which is the fixpoint.
The work list approach.
It is easy to improve on the algorithm above by noticing that the in-state of a block will not change if the out-states of its predecessors don't change. Therefore, we introduce a work list: a list of blocks that still need to be processed. Whenever the out-state of a block changes, we add its successors to the work list. In each iteration, a block is removed from the work list. Its out-state is computed. If the out-state changed, the block's successors are added to the work list. For efficiency, a block should not be in the work list more than once.
The algorithm is started by putting information-generating blocks in the work list. It terminates when the
work list is empty.
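A compact Python version of this scheme for a forward analysis is sketched below. The helper names (preds, succs, transfer, join, init) stand for the problem-specific pieces and are assumptions, not a fixed API; seeding the list with every block is a safe, if less economical, alternative to seeding only the information-generating ones:

from collections import deque

def solve_forward(blocks, preds, succs, transfer, join, init):
    in_state  = {b: init(b) for b in blocks}
    out_state = {b: transfer(b, in_state[b]) for b in blocks}
    worklist = deque(blocks)
    while worklist:
        b = worklist.popleft()
        if preds(b):
            in_state[b] = join([out_state[p] for p in preds(b)])
        new_out = transfer(b, in_state[b])
        if new_out != out_state[b]:          # propagate only when something changed
            out_state[b] = new_out
            for s in succs(b):
                if s not in worklist:        # keep each block in the list at most once
                    worklist.append(s)
    return in_state, out_state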
Ordering.
The efficiency of iteratively solving data-flow equations is influenced by the order at which local nodes are visited. Furthermore, it depends on whether the data-flow equations are used for forward or backward data-flow analysis over the CFG. Intuitively, in a forward flow problem, it would be fastest if all predecessors of a block have been processed before the block itself, since then the iteration will use the latest information. In the absence of loops it is possible to order the blocks in such a way that the correct out-states are computed by processing each block only once.
In the following, a few iteration orders for solving data-flow equations are discussed
(a related concept to iteration order of a CFG is tree traversal of a
tree).
Initialization.
The initial value of the in-states is important to obtain correct and accurate results.
If the results are used for compiler optimizations, they should provide conservative information, i.e. when applying the information, the program should not change semantics.
The iteration of the fixpoint algorithm will take the values in the direction of the maximum element. Initializing all blocks with the maximum element is therefore not useful. At least one block starts in a state with a value less than the maximum. The details depend on the
data-flow problem. If the minimum element represents totally conservative information, the results can be used safely even during the data-flow iteration. If it represents the most accurate information, fixpoint should be reached before the results can be applied.
Examples.
The following are examples of properties of computer programs that can be calculated by data-flow analysis.
Note that the properties calculated by data-flow analysis are typically only approximations of the real
properties. This is because data-flow analysis operates on the syntactical structure of the CFG without
simulating the exact control flow of the program.
However, to still be useful in practice, a data-flow analysis algorithm is typically designed to calculate an upper or lower approximation, respectively, of the real program properties.
Forward analysis.
The reaching definition analysis calculates for each program point the set of definitions that
may potentially reach this program point.
if b == 4 then
    a = 5;
else
    a = 3;
endif
if a < 4 then
The reaching definitions of the variable "a" at the last line (the second "if" statement) are the two assignments "a = 5" and "a = 3".
Backward analysis.
The live variable analysis calculates for each program point the variables that may be
potentially read afterwards before their next write update. The result is typically used by
dead code elimination to remove statements that assign to a variable whose value is not used afterwards.
The in-state of a block is the set of variables that are live at the start of it. It initially contains all variables live (contained) in the block, before the transfer function is applied and the actual contained values are computed. The transfer function of a statement is applied by killing the variables that are written within this block (remove them from the set of live variables). The out-state of a block is the set of variables that are live at the end of the block and is computed by the union of the block's successors' in-states.
Initial code:
<templatestyles src="Col-begin/styles.css"/>
Backward analysis:
<templatestyles src="Col-begin/styles.css"/>
The in-state of b3 only contains "b" and "d", since "c" has been written. The out-state of b1 is the union of the in-states of b2 and b3. The definition of "c" in b2 can be removed, since "c" is not live immediately after the statement.
Solving the data-flow equations starts with initializing all in-states and out-states to the empty set. The work list is initialized by inserting the exit point (b3) in the work list (typical for backward flow). Its computed in-state differs from the previous one, so its predecessors b1 and b2 are inserted and the process continues. The progress is summarized in the table below.
Note that b1 was entered in the list before b2, which forced processing b1 twice (b1 was re-entered as predecessor of b2). Inserting b2 before b1 would have allowed earlier completion.
Initializing with the empty set is an optimistic initialization: all variables start out as dead. Note that the out-states cannot shrink from one iteration to the next, although the out-state can be smaller than the in-state. This can be seen from the fact that after the first iteration the out-state can only change by a change of the in-state. Since the in-state starts as the empty set, it can only grow in further iterations.
Other approaches.
Several modern compilers use static single-assignment form as the method for analysis of variable dependencies.
In 2002, Markus Mohnen described a new method of data-flow analysis that does not require the explicit construction of a data-flow graph, instead relying on abstract interpretation of the program and keeping a working set of program counters. At each conditional branch, both targets are added to the working set. Each path is followed for as many instructions as possible (until end of program or until it has looped with no changes), and then removed from the set and the next program counter retrieved.
A combination of control flow analysis and data flow analysis has shown to be useful and complementary in identifying cohesive source code regions implementing functionalities of a system (e.g., features, requirements or use cases).
Special classes of problems.
There are a variety of special classes of dataflow problems which have efficient or general solutions.
Bit vector problems.
The examples above are problems in which the data-flow value is a set, e.g. the set of reaching definitions (using a bit for a definition position in the program), or the set of live variables. These sets can be represented efficiently as bit vectors, in which each bit represents set membership of one particular element. Using this representation, the join and transfer functions can be implemented as bitwise logical operations. The join operation is typically union or intersection, implemented by bitwise "logical or" and "logical and".
The transfer function for each block can be decomposed in so-called "gen" and "kill" sets.
As an example, in live-variable analysis, the join operation is union. The "kill" set is the set of variables that are written in a block, whereas the "gen" set is the set of variables that are read without being written first. The data-flow equations become
formula_10
formula_11
In logical operations, this reads as
out("b") = 0
for "s" in succ("b")
out("b") = out("b") or in("s")
in("b") = (out("b") and not kill("b")) or gen("b")
Dataflow problems which have sets of data-flow values which can be represented as bit vectors are called bit vector problems, gen-kill problems, or locally separable problems. Such problems have generic polynomial-time solutions.
In addition to the reaching definitions and live variables problems mentioned above, the following problems are instances of bitvector problems:
IFDS problems.
Interprocedural, finite, distributive, subset problems or IFDS problems are another class of problem with a generic polynomial-time solution. Solutions to these problems provide context-sensitive and flow-sensitive dataflow analyses.
There are several implementations of IFDS-based dataflow analyses for popular programming languages, e.g. in the Soot and WALA frameworks for Java analysis.
Every bitvector problem is also an IFDS problem, but there are several significant IFDS problems that are not bitvector problems, including truly-live variables and possibly-uninitialized variables.
Sensitivities.
Data-flow analysis is typically path-insensitive, though it is possible to define data-flow equations that yield a path-sensitive analysis.
| [
{
"math_id": 0,
"text": " out_b = trans_b (in_b) "
},
{
"math_id": 1,
"text": " in_b = join_{p \\in pred_b}(out_p) "
},
{
"math_id": 2,
"text": " trans_b "
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "in_b"
},
{
"math_id": 5,
"text": "out_b"
},
{
"math_id": 6,
"text": "join"
},
{
"math_id": 7,
"text": "p \\in pred_b"
},
{
"math_id": 8,
"text": "x_1"
},
{
"math_id": 9,
"text": "x_2"
},
{
"math_id": 10,
"text": " out_b = \\bigcup_{s \\in succ_b} in_s "
},
{
"math_id": 11,
"text": " in_b = (out_b - kill_b) \\cup gen_b "
}
] | https://en.wikipedia.org/wiki?curid=826951 |
826956 | Clausius–Mossotti relation | Equation for a material's dielectric constant given its atomic polarizability
In electromagnetism, the Clausius–Mossotti relation, named for O. F. Mossotti and Rudolf Clausius, expresses the dielectric constant (relative permittivity, "ε"r) of a material in terms of the atomic polarizability, α, of the material's constituent atoms and/or molecules, or a homogeneous mixture thereof. It is equivalent to the Lorentz–Lorenz equation, which relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. It may be expressed as:
formula_0
where formula_1 is the dielectric constant of the material, N is the number density of the molecules (number per unit volume), and α is the molecular polarizability in SI units (C·m²/V).
In the case that the material consists of a mixture of two or more species, the right hand side of the above equation would consist of the sum of the molecular polarizability contribution from each species, indexed by i in the following form:
formula_2
In the CGS system of units the Clausius–Mossotti relation is typically rewritten to show the molecular polarizability "volume" formula_3, which has units of volume [m³]. Confusion may arise from the practice of using the shorter name "molecular polarizability" for both formula_4 and formula_5 within literature intended for the respective unit system.
The Clausius–Mossotti relation assumes only an induced dipole relevant to its polarizability and is thus inapplicable for substances with a significant permanent dipole. It is applicable to gases at sufficiently low densities and pressures. For example, the Clausius–Mossotti relation is accurate for N2 gas up to 1000 atm between 25 °C and 125 °C. Moreover, the Clausius–Mossotti relation may be applicable to such substances if the applied electric field is at sufficiently high frequencies that any permanent dipole modes are inactive.
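Solving the relation for the relative permittivity gives a convenient closed form, sketched here in Python; the physical constant is the SI vacuum permittivity, and any numbers passed in are placeholders rather than data for a particular substance:

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def relative_permittivity(number_density, polarizability):
    # Clausius–Mossotti solved for eps_r (SI units throughout):
    #   (eps_r - 1)/(eps_r + 2) = N*alpha/(3*eps0)   =>   eps_r = (1 + 2x)/(1 - x)
    x = number_density * polarizability / (3 * EPS0)
    return (1 + 2 * x) / (1 - x)

# e.g. relative_permittivity(2.5e25, 2.0e-40) for a hypothetical dilute gas
# gives a value only slightly above 1, as expected for a gas.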
Lorentz–Lorenz equation.
The Lorentz–Lorenz equation is similar to the Clausius–Mossotti relation, except that it relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. The Lorentz–Lorenz equation is named after the Danish mathematician and scientist Ludvig Lorenz, who published it in 1869, and the Dutch physicist Hendrik Lorentz, who discovered it independently in 1878.
The most general form of the Lorentz–Lorenz equation is (in Gaussian-CGS units)
formula_6
where n is the refractive index, N is the number of molecules per unit volume, and formula_7 is the mean polarizability.
This equation is approximately valid for homogeneous solids as well as liquids and gases.
When the square of the refractive index is formula_8, as it is for many gases, the equation reduces to:
formula_9
or simply
formula_10
This applies to gases at ordinary pressures. The refractive index n of the gas can then be expressed in terms of the molar refractivity A as:
formula_11
where p is the pressure of the gas, R is the universal gas constant, and T is the (absolute) temperature, which together determine the number density N.
| [
{
"math_id": 0,
"text": "\\frac{\\varepsilon_\\mathrm{r} - 1}{\\varepsilon_\\mathrm{r} + 2} = \\frac{N \\alpha}{3\\varepsilon_0}"
},
{
"math_id": 1,
"text": "\\varepsilon_r = \\tfrac{\\varepsilon}{\\varepsilon_0}"
},
{
"math_id": 2,
"text": "\\frac{\\varepsilon_\\mathrm{r} - 1}{\\varepsilon_\\mathrm{r} + 2} = \\sum_i \\frac{N_i \\alpha_i}{3\\varepsilon_0}"
},
{
"math_id": 3,
"text": "\\alpha' = \\tfrac{\\alpha}{4\\pi\\varepsilon_0}"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\alpha'"
},
{
"math_id": 6,
"text": " \\frac{n^2 - 1}{n^2 + 2} = \\frac{4 \\pi}{3} N \\alpha_\\mathrm{m} "
},
{
"math_id": 7,
"text": "\\alpha_\\mathrm{m}"
},
{
"math_id": 8,
"text": " n^2 \\approx 1 "
},
{
"math_id": 9,
"text": " n^2 - 1 \\approx 4 \\pi N \\alpha_\\mathrm{m}"
},
{
"math_id": 10,
"text": " n - 1 \\approx 2 \\pi N \\alpha_\\mathrm{m}"
},
{
"math_id": 11,
"text": " n \\approx \\sqrt{1 + \\frac{3 A p}{R T}}"
}
] | https://en.wikipedia.org/wiki?curid=826956 |
826997 | Regression analysis | Set of statistical processes for estimating the relationships among variables
<templatestyles src="Machine learning/styles.css"/>
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the "outcome" or "response" variable, or a "label" in machine learning parlance) and one or more independent variables (often called "regressors", "predictors", "covariates", "explanatory variables" or "features"). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line (or hyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (see linear regression), this allows the researcher to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters (e.g., quantile regression or Necessary Condition Analysis) or estimate the conditional expectation across a broader collection of non-linear models (e.g., nonparametric regression).
Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data.
History.
The earliest form of regression was the method of least squares, which was published by Legendre in 1805, and by Gauss in 1809. Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821, including a version of the Gauss–Markov theorem.
The term "regression" was coined by Francis Galton in the 19th century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean).
For Galton, regression had only this biological meaning, but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context. In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925. Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
In the 1950s and 1960s, economists used electromechanical desk calculators to calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.
Regression methods continue to be an area of active research. In recent decades, new methods have been developed for robust regression, regression involving correlated responses such as time series and growth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, and causal inference with regression.
Regression model.
In practice, researchers first select a model they would like to estimate and then use their chosen method (e.g., ordinary least squares) to estimate the parameters of that model. Regression models involve the following components:
In various fields of application, different terminologies are used in place of dependent and independent variables.
Most regression models propose that formula_3 is a function (regression function) of formula_1 and formula_5, with formula_4 representing an additive error term that may stand in for un-modeled determinants of formula_3 or random statistical noise:
formula_6
The researchers' goal is to estimate the function formula_7 that most closely fits the data. To carry out regression analysis, the form of the function formula_8 must be specified. Sometimes the form of this function is based on knowledge about the relationship between formula_3 and formula_1 that does not rely on the data. If no such knowledge is available, a flexible or convenient form for formula_8 is chosen. For example, a simple univariate regression may propose formula_9, suggesting that the researcher believes formula_10 to be a reasonable approximation for the statistical process generating the data.
Once researchers determine their preferred statistical model, different forms of regression analysis provide tools to estimate the parameters formula_11. For example, least squares (including its most common variant, ordinary least squares) finds the value of formula_11 that minimizes the sum of squared errors formula_12. A given regression method will ultimately provide an estimate of formula_0, usually denoted formula_13 to distinguish the estimate from the true (unknown) parameter value that generated the data. Using this estimate, the researcher can then use the "fitted value" formula_14 for prediction or to assess the accuracy of the model in explaining the data. Whether the researcher is intrinsically interested in the estimate formula_13 or the predicted value formula_15 will depend on context and their goals. As described in ordinary least squares, least squares is widely used because the estimated function formula_16 approximates the conditional expectation formula_17. However, alternative variants (e.g., least absolute deviations or quantile regression) are useful when researchers want to model other functions formula_18.
It is important to note that there must be sufficient data to estimate a regression model. For example, suppose that a researcher has access to formula_19 rows of data with one dependent and two independent variables: formula_20. Suppose further that the researcher wants to estimate a bivariate linear model via least squares: formula_21. If the researcher only has access to formula_22 data points, then they could find infinitely many combinations formula_23 that explain the data equally well: any combination can be chosen that satisfies formula_24, all of which lead to formula_25 and are therefore valid solutions that minimize the sum of squared residuals. To understand why there are infinitely many options, note that the system of formula_22 equations is to be solved for 3 unknowns, which makes the system underdetermined. Alternatively, one can visualize infinitely many 3-dimensional planes that go through formula_22 fixed points.
More generally, to estimate a least squares model with formula_26 distinct parameters, one must have formula_27 distinct data points. If formula_28, then there does not generally exist a set of parameters that will perfectly fit the data. The quantity formula_29 appears often in regression analysis, and is referred to as the degrees of freedom in the model. Moreover, to estimate a least squares model, the independent variables formula_30 must be linearly independent: one must "not" be able to reconstruct any of the independent variables by adding and multiplying the remaining independent variables. As discussed in ordinary least squares, this condition ensures that formula_31 is an invertible matrix and therefore that a unique solution formula_13 exists.
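A small numerical illustration of the underdetermined case described above, using two hypothetical data points and three parameters:

```python
import numpy as np

# Two data points (N = 2) but three parameters (k = 3): underdetermined.
# Hypothetical rows (Y_i, X_1i, X_2i); design matrix columns: intercept, X_1, X_2.
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])
Y = np.array([1.0, 2.0])

# Both of these parameter vectors fit the two points exactly ...
for beta in ([1.0, 1.0, 0.0], [1.0, 0.0, 1.0]):
    residuals = Y - X @ np.array(beta)
    print(beta, np.sum(residuals ** 2))   # sum of squared residuals is 0 in both cases

# ... because X^T X is singular (not invertible) when N < k.
print(np.linalg.matrix_rank(X.T @ X))     # 2, less than the 3 unknowns
```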
Underlying assumptions.
By itself, a regression is simply a calculation using the data. In order to interpret the output of regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions. These assumptions often include: that the sample is representative of the population at large; that the independent variables are measured with no error; that deviations from the model have an expected value of zero conditional on the independent variables, formula_32; that the variance of the residuals is constant across observations (homoscedasticity); and that the residuals are uncorrelated with one another.
A handful of conditions are sufficient for the least-squares estimator to possess desirable properties: in particular, the Gauss–Markov assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear unbiased estimators. Practitioners have developed a variety of methods to maintain some or all of these desirable properties in real-world settings, because these classical assumptions are unlikely to hold exactly. For example, modeling errors-in-variables can lead to reasonable estimates when the independent variables are measured with errors. Heteroscedasticity-consistent standard errors allow the variance of formula_4 to change across values of formula_1. Correlated errors that exist within subsets of the data or follow specific patterns can be handled using "clustered standard errors", "geographically weighted regression", or Newey–West standard errors, among other techniques. When rows of data correspond to locations in space, the choice of how to model formula_4 within geographic units can have important consequences. The subfield of econometrics is largely focused on developing techniques that allow researchers to draw reasonable conclusions in real-world settings, where classical assumptions do not hold exactly.
Linear regression.
In linear regression, the model specification is that the dependent variable, formula_33 is a linear combination of the "parameters" (but need not be linear in the "independent variables"). For example, in simple linear regression for modeling formula_34 data points there is one independent variable: formula_35, and two parameters, formula_36 and formula_37:
straight line: formula_38
In multiple linear regression, there are several independent variables or functions of independent variables.
Adding a term in formula_39 to the preceding regression gives:
parabola: formula_40
This is still linear regression; although the expression on the right hand side is quadratic in the independent variable formula_41, it is linear in the parameters formula_36, formula_37 and formula_42
In both cases, formula_43 is an error term and the subscript formula_2 indexes a particular observation.
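A brief sketch (hypothetical data) showing that the quadratic model is still fitted by ordinary least squares, because it is linear in the parameters: one simply adds a column for formula_39 to the design matrix.

```python
import numpy as np

# Hypothetical data roughly following a parabola.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([ 4.1,  1.2, 0.1, 0.9, 4.2, 8.8])

# Although the model is quadratic in x, it is linear in the parameters
# (beta_0, beta_1, beta_2): treat 1, x and x^2 as three separate regressors.
design = np.column_stack([np.ones_like(x), x, x ** 2])
beta_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
print(beta_hat)   # estimates of (beta_0, beta_1, beta_2)
```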
Returning our attention to the straight line case: Given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model:
formula_44
The residual, formula_45, is the difference between the value of the dependent variable predicted by the model, formula_46, and the true value of the dependent variable, formula_47. One method of estimation is ordinary least squares. This method obtains parameter estimates that minimize the sum of squared residuals, SSR:
formula_48
Minimization of this function results in a set of normal equations, a set of simultaneous linear equations in the parameters, which are solved to yield the parameter estimators, formula_49.
In the case of simple regression, the formulas for the least squares estimates are
formula_50
formula_51
where formula_52 is the mean (average) of the formula_53 values and formula_54 is the mean of the formula_55 values.
Under the assumption that the population error term has a constant variance, the estimate of that variance is given by:
formula_56
This is called the mean square error (MSE) of the regression. The denominator is the sample size reduced by the number of model parameters estimated from the same data, formula_57 for formula_58 regressors or formula_59 if an intercept is used. In this case, formula_60 so the denominator is formula_61.
The standard errors of the parameter estimates are given by
formula_62
formula_63
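The closed-form estimates, the mean square error and the standard errors above can be computed directly; the following sketch uses hypothetical data:

```python
import numpy as np

# Hypothetical sample.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

# Least squares estimates from the closed-form expressions above.
x_bar, y_bar = x.mean(), y.mean()
sxx = np.sum((x - x_bar) ** 2)
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / sxx
beta0_hat = y_bar - beta1_hat * x_bar

# Residuals, mean square error and standard errors of the estimates.
residuals = y - (beta0_hat + beta1_hat * x)
ssr = np.sum(residuals ** 2)
sigma_hat = np.sqrt(ssr / (n - 2))          # denominator n - p - 1 with p = 1 regressor
se_beta1 = sigma_hat * np.sqrt(1.0 / sxx)
se_beta0 = sigma_hat * np.sqrt(1.0 / n + x_bar ** 2 / sxx)

print(beta0_hat, beta1_hat, sigma_hat ** 2, se_beta0, se_beta1)
```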
Under the further assumption that the population error term is normally distributed, the researcher can use these estimated standard errors to create confidence intervals and conduct hypothesis tests about the population parameters.
General linear model.
In the more general multiple regression model, there are formula_58 independent variables:
formula_64
where formula_65 is the formula_2-th observation on the formula_66-th independent variable.
If the first independent variable takes the value 1 for all formula_2, formula_67, then formula_37 is called the regression intercept.
The least squares parameter estimates are obtained from formula_58 normal equations. The residual can be written as
formula_68
The normal equations are
formula_69
In matrix notation, the normal equations are written as
formula_70
where the formula_71 element of formula_72 is formula_65, the formula_2 element of the column vector formula_73 is formula_47, and the formula_66 element of formula_74 is formula_75. Thus formula_72 is formula_76, formula_73 is formula_77, and formula_74 is formula_78. The solution is
formula_79
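A minimal sketch of the matrix solution with simulated (hypothetical) data; in practice a numerically stabler least-squares routine is preferred over forming formula_31 explicitly:

```python
import numpy as np

# Hypothetical data: n = 6 observations, p = 3 regressors (first column all 1s,
# so its coefficient plays the role of the intercept as in the text).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(6), rng.normal(size=6), rng.normal(size=6)])
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + 0.1 * rng.normal(size=6)

# Solve the normal equations (X^T X) beta_hat = X^T Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)

# Equivalent, and usually preferable numerically:
beta_hat_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat_lstsq)
```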
Diagnostics.
Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an F-test of the overall fit, followed by t-tests of individual parameters.
Interpretations of these diagnostic tests rest heavily on the model's assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, then in small samples the estimated parameters will not follow normal distributions, which complicates inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations.
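As a sketch of common goodness-of-fit diagnostics (hypothetical data, simple regression assumed), R-squared and the overall F statistic can be computed from the residual and total sums of squares:

```python
import numpy as np

# Fit a simple regression and summarize its fit.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot         # R-squared

# F-test of the overall fit: compares the model against an intercept-only model.
n, k = X.shape                            # k counts the intercept as a parameter
f_stat = ((ss_tot - ss_res) / (k - 1)) / (ss_res / (n - k))
print(r_squared, f_stat)
```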
Limited dependent variables.
Limited dependent variables, which are response variables that are categorical variables or are variables constrained to fall only in a certain range, often arise in econometrics.
The response variable may be non-continuous ("limited" to lie on some subset of the real line). For binary (zero or one) variables, if analysis proceeds with least-squares linear regression, the model is called the linear probability model. Nonlinear models for binary dependent variables include the probit and logit model. The multivariate probit model is a standard method of estimating a joint relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit. For ordinal variables with more than two values, there are the ordered logit and ordered probit models. Censored regression models may be used when the dependent variable is only sometimes observed, and Heckman correction type models may be used when the sample is not randomly selected from the population of interest. An alternative to such procedures is linear regression based on polychoric correlation (or polyserial correlations) between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, then count models like the Poisson regression or the negative binomial model may be used.
Nonlinear regression.
When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative procedure. This introduces many complications which are summarized in Differences between linear and non-linear least squares.
Prediction (interpolation and extrapolation).
Regression models predict a value of the "Y" variable given known values of the "X" variables. Prediction within the range of values in the dataset used for model-fitting is known informally as "interpolation". Prediction outside this range of the data is known as "extrapolation". Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values.
A "prediction interval" that represents the uncertainty may accompany the point prediction. Such intervals tend to expand rapidly as the values of the independent variable(s) moved outside the range covered by the observed data.
For these and other reasons, some caution that it may be unwise to undertake extrapolation.
However, this does not cover the full set of modeling errors that may be made: in particular, the assumption of a particular form for the relation between "Y" and "X". A properly conducted regression analysis will include an assessment of how well the assumed form is matched by the observed data, but it can only do so within the range of values of the independent variables actually available. This means that any extrapolation is particularly reliant on the assumptions being made about the structural form of the regression relationship. If this knowledge includes the fact that the dependent variable cannot go outside a certain range of values, this can be made use of in selecting the model – even if the observed dataset has no values particularly near such bounds. The implications of this step of choosing an appropriate functional form for the regression can be great when extrapolation is considered. At a minimum, it can ensure that any extrapolation arising from a fitted model is "realistic" (or in accord with what is known).
Power and sample size calculations.
There are no generally agreed methods for relating the number of observations versus the number of independent variables in the model. One method conjectured by Good and Hardin is formula_80, where formula_19 is the sample size, formula_81 is the number of independent variables and formula_82 is the number of observations needed to reach the desired precision if the model had only one independent variable. For example, a researcher is building a linear regression model using a dataset that contains 1000 patients (formula_19). If the researcher decides that five observations are needed to precisely define a straight line (formula_82), then the maximum number of independent variables the model can support is 4, because
formula_83.
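The arithmetic of this rule of thumb, for the example above:

```python
import math

# The rule of thumb N = m^n rearranged for n, the number of independent
# variables the data can support: n = log(N) / log(m).
N = 1000   # sample size from the example above
m = 5      # observations judged necessary per independent variable
n_max = math.log(N) / math.log(m)
print(n_max)               # about 4.29
print(math.floor(n_max))   # at most 4 independent variables
```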
Other methods.
Although the parameters of a regression model are usually estimated using the method of least squares, other methods which have been used include:
Software.
All major statistical software packages perform least squares regression analysis and inference. Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized. Different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Evan J. Williams, "I. Regression," pp. 523–41.
Julian C. Stanley, "II. Analysis of Variance," pp. 541–554. | [
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "X_i"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "Y_i"
},
{
"math_id": 4,
"text": "e_i"
},
{
"math_id": 5,
"text": " \\beta"
},
{
"math_id": 6,
"text": "Y_i = f (X_i, \\beta) + e_i"
},
{
"math_id": 7,
"text": "f(X_i, \\beta)"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "f(X_i, \\beta) = \\beta_0 + \\beta_1 X_i"
},
{
"math_id": 10,
"text": "Y_i = \\beta_0 + \\beta_1 X_i + e_i"
},
{
"math_id": 11,
"text": "\\beta "
},
{
"math_id": 12,
"text": "\\sum_i (Y_i - f(X_i, \\beta))^2"
},
{
"math_id": 13,
"text": "\\hat{\\beta}"
},
{
"math_id": 14,
"text": "\\hat{Y_i} = f(X_i,\\hat{\\beta})"
},
{
"math_id": 15,
"text": "\\hat{Y_i}"
},
{
"math_id": 16,
"text": "f(X_i, \\hat{\\beta})"
},
{
"math_id": 17,
"text": "E(Y_i|X_i)"
},
{
"math_id": 18,
"text": "f(X_i,\\beta)"
},
{
"math_id": 19,
"text": "N"
},
{
"math_id": 20,
"text": "(Y_i, X_{1i}, X_{2i})"
},
{
"math_id": 21,
"text": "Y_i = \\beta_0 + \\beta_1 X_{1i} + \\beta_2 X_{2i} + e_i"
},
{
"math_id": 22,
"text": "N=2"
},
{
"math_id": 23,
"text": "(\\hat{\\beta}_0, \\hat{\\beta}_1, \\hat{\\beta}_2)"
},
{
"math_id": 24,
"text": "\\hat{Y}_i = \\hat{\\beta}_0 + \\hat{\\beta}_1 X_{1i} + \\hat{\\beta}_2 X_{2i}"
},
{
"math_id": 25,
"text": "\\sum_i \\hat{e}_i^2 = \\sum_i (\\hat{Y}_i - (\\hat{\\beta}_0 + \\hat{\\beta}_1 X_{1i} + \\hat{\\beta}_2 X_{2i}))^2 = 0"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "N \\geq k"
},
{
"math_id": 28,
"text": "N > k"
},
{
"math_id": 29,
"text": "N-k"
},
{
"math_id": 30,
"text": "(X_{1i}, X_{2i}, ..., X_{ki})"
},
{
"math_id": 31,
"text": "X^{T}X"
},
{
"math_id": 32,
"text": "E(e_i | X_i) = 0"
},
{
"math_id": 33,
"text": " y_i "
},
{
"math_id": 34,
"text": " n "
},
{
"math_id": 35,
"text": " x_i "
},
{
"math_id": 36,
"text": "\\beta_0"
},
{
"math_id": 37,
"text": "\\beta_1"
},
{
"math_id": 38,
"text": "y_i=\\beta_0 +\\beta_1 x_i +\\varepsilon_i,\\quad i=1,\\dots,n.\\!"
},
{
"math_id": 39,
"text": "x_i^2"
},
{
"math_id": 40,
"text": "y_i=\\beta_0 +\\beta_1 x_i +\\beta_2 x_i^2+\\varepsilon_i,\\ i=1,\\dots,n.\\!"
},
{
"math_id": 41,
"text": "x_i"
},
{
"math_id": 42,
"text": "\\beta_2."
},
{
"math_id": 43,
"text": "\\varepsilon_i"
},
{
"math_id": 44,
"text": " \\widehat{y}_i = \\widehat{\\beta}_0 + \\widehat{\\beta}_1 x_i. "
},
{
"math_id": 45,
"text": " e_i = y_i - \\widehat{y}_i "
},
{
"math_id": 46,
"text": " \\widehat{y}_i"
},
{
"math_id": 47,
"text": "y_i"
},
{
"math_id": 48,
"text": "SSR=\\sum_{i=1}^n e_i^2"
},
{
"math_id": 49,
"text": "\\widehat{\\beta}_0, \\widehat{\\beta}_1"
},
{
"math_id": 50,
"text": "\\widehat{\\beta}_1=\\frac{\\sum(x_i-\\bar{x})(y_i-\\bar{y})}{\\sum(x_i-\\bar{x})^2}"
},
{
"math_id": 51,
"text": "\\widehat{\\beta}_0=\\bar{y}-\\widehat{\\beta}_1\\bar{x}"
},
{
"math_id": 52,
"text": "\\bar{x}"
},
{
"math_id": 53,
"text": "x"
},
{
"math_id": 54,
"text": "\\bar{y}"
},
{
"math_id": 55,
"text": "y"
},
{
"math_id": 56,
"text": " \\hat{\\sigma}^2_\\varepsilon = \\frac{SSR}{n-2}"
},
{
"math_id": 57,
"text": "(n-p)"
},
{
"math_id": 58,
"text": "p"
},
{
"math_id": 59,
"text": "(n-p-1)"
},
{
"math_id": 60,
"text": "p=1"
},
{
"math_id": 61,
"text": "n-2"
},
{
"math_id": 62,
"text": "\\hat\\sigma_{\\beta_1}=\\hat\\sigma_{\\varepsilon} \\sqrt{\\frac{1}{\\sum(x_i-\\bar x)^2}}"
},
{
"math_id": 63,
"text": "\\hat\\sigma_{\\beta_0}=\\hat\\sigma_\\varepsilon \\sqrt{\\frac{1}{n} + \\frac{\\bar{x}^2}{\\sum(x_i-\\bar x)^2}}=\\hat\\sigma_{\\beta_1} \\sqrt{\\frac{\\sum x_i^2}{n}}. "
},
{
"math_id": 64,
"text": " y_i = \\beta_1 x_{i1} + \\beta_2 x_{i2} + \\cdots + \\beta_p x_{ip} + \\varepsilon_i, \\, "
},
{
"math_id": 65,
"text": "x_{ij}"
},
{
"math_id": 66,
"text": "j"
},
{
"math_id": 67,
"text": "x_{i1} = 1"
},
{
"math_id": 68,
"text": "\\varepsilon_i=y_i - \\hat\\beta_1 x_{i1} - \\cdots - \\hat\\beta_p x_{ip}."
},
{
"math_id": 69,
"text": "\\sum_{i=1}^n \\sum_{k=1}^p x_{ij}x_{ik}\\hat \\beta_k=\\sum_{i=1}^n x_{ij}y_i,\\ j=1,\\dots,p.\\,"
},
{
"math_id": 70,
"text": "\\mathbf{(X^\\top X )\\hat{\\boldsymbol{\\beta}}= {}X^\\top Y},\\,"
},
{
"math_id": 71,
"text": "ij"
},
{
"math_id": 72,
"text": "\\mathbf X"
},
{
"math_id": 73,
"text": "Y"
},
{
"math_id": 74,
"text": "\\hat \\boldsymbol \\beta"
},
{
"math_id": 75,
"text": "\\hat \\beta_j"
},
{
"math_id": 76,
"text": "n \\times p"
},
{
"math_id": 77,
"text": "n \\times 1"
},
{
"math_id": 78,
"text": "p \\times 1"
},
{
"math_id": 79,
"text": "\\mathbf{\\hat{\\boldsymbol{\\beta}}= (X^\\top X )^{-1}X^\\top Y}.\\,"
},
{
"math_id": 80,
"text": "N=m^n"
},
{
"math_id": 81,
"text": "n"
},
{
"math_id": 82,
"text": "m"
},
{
"math_id": 83,
"text": "\\frac{\\log 1000}{\\log5}\\approx4.29 "
}
] | https://en.wikipedia.org/wiki?curid=826997 |
8269999 | Sensitivity time control | Sensitivity time control (STC), also known as swept-gain control, is a system used to attenuate the very strong signals returned from nearby ground clutter targets in the first few range gates of a radar receiver. Without this attenuation, the receiver would routinely saturate due to the strong signals. This is used in air traffic control systems and has an influence on the shape of the elevation pattern of the surveillance antenna. It is represented in terms of numerical value typically expressed in decibels (dB), starting from zero, indicating that there is no muting and that the radar system is accepting all returns.
The radar equation contains a formula_0 dependence, meaning that doubling the range to a target results in sixteen times less energy being returned. STC addresses the corollary of this statement: nearby targets return orders of magnitude more radio signal. In the case of a long-range radar with high power output, the return from nearby targets can be so powerful that it causes the amplifiers to saturate, producing a blank area on the screen beyond which nothing can be detected until the amplifiers return to normal operation again.
For early radar systems, the solution was to point the signal away from the ground. This can be difficult for ground or ship-based radars, which required other solutions. In the case of the ground-based AMES Type 7, for instance, the radars were installed in natural dish-like depressions so that all returns below a certain angle were cut off very close to the radar. This still had the same effect in terms of causing the amplifiers to saturate, but occurred so rapidly after transmission that the saturation decayed at relatively short ranges. The downside to this approach is that it permanently hides any signal below a certain angle, which for a very long-range system might prevent it from seeing anything near the radar site.
STC addresses this problem by implementing a reverse gain curve with the same characteristics as the radar equation, that is, a formula_0 dependency or some function close to that (often there are discrete steps). This dramatically damps down amplification of signals received shortly after the detection pulse is sent, preventing them from saturating the receiver. The gain modification is reduced over time, until it reaches zero at some selected distance from the radar site, often on the order of .
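A simple illustrative model of such an STC law (the 40 dB-per-decade slope follows from the formula_0 dependence; the cut-off range used here is a hypothetical example value):

```python
import numpy as np

def stc_attenuation_db(range_km, stc_range_km=50.0):
    """One simple STC law (illustrative parameters only): attenuation that
    mirrors the 1/range^4 term of the radar equation, i.e. 40*log10 of the
    range ratio, falling to 0 dB at the selected cut-off range."""
    r = np.asarray(range_km, dtype=float)
    attenuation = 40.0 * np.log10(stc_range_km / r)   # heavy muting close in
    return np.clip(attenuation, 0.0, None)            # no muting beyond the cut-off

ranges_km = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
print(stc_attenuation_db(ranges_km))   # ≈ [68, 40, 28, 12, 0, 0] dB
```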
Because the system works by muting nearby signals, it may have the side-effect of eliminating small targets near the radar. This is fine for many applications, like air traffic control, where the targets are large and nearby aircraft are often guided using a local-area radar. | [
{
"math_id": 0,
"text": "1/range^4"
}
] | https://en.wikipedia.org/wiki?curid=8269999 |
8270363 | A (disambiguation) | A is the first letter of the Latin and English alphabet.
A may also refer to:
<templatestyles src="Template:TOC_right/styles.css" /><templatestyles src="Template:TOC limit/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "\\mathbb{A}"
},
{
"math_id": 1,
"text": "\\forall"
}
] | https://en.wikipedia.org/wiki?curid=8270363 |
8271556 | F (disambiguation) | F is the sixth letter of the Latin alphabet.
F may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "\\vec F"
}
] | https://en.wikipedia.org/wiki?curid=8271556 |
8271581 | G (disambiguation) | G is the seventh letter of the Latin alphabet.
G may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
Arts, entertainment and media.
Rating systems.
"G" is a common type of content rating that applies to media entertainment, such as films, television shows and computer games, generally denoting "General Audience" meaning that access is not restricted. The following organizations all use the rating:
The "G" rating is further documented at Motion picture content rating system and Television content rating system.
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "G_{p,q}^{m,n}"
}
] | https://en.wikipedia.org/wiki?curid=8271581 |
8271728 | L (disambiguation) | L, or l, is the twelfth letter of the English alphabet.
L or l may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "\\mathfrak{L}"
},
{
"math_id": 1,
"text": "\\mathcal{L}"
}
] | https://en.wikipedia.org/wiki?curid=8271728 |
8271924 | O (disambiguation) | O, or o, is the fifteenth letter of the English alphabet.
O may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "f = O(g)"
},
{
"math_id": 1,
"text": "f = o(g)"
},
{
"math_id": 2,
"text": "\\mathcal{O}"
}
] | https://en.wikipedia.org/wiki?curid=8271924 |
8271967 | P (disambiguation) | P, or p, is the sixteenth letter of the English alphabet.
P may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "\\mathbb{P}"
}
] | https://en.wikipedia.org/wiki?curid=8271967 |
8272039 | U (disambiguation) | U, or u, is the twenty-first letter of the English alphabet.
U may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title.
{
"math_id": 0,
"text": "\\cup"
}
] | https://en.wikipedia.org/wiki?curid=8272039 |
82728 | Quantum superposition | Principle of quantum mechanics
Quantum superposition is a fundamental principle of quantum mechanics that states that linear combinations of solutions to the Schrödinger equation are also solutions of the Schrödinger equation. This follows from the fact that the Schrödinger equation is a linear differential equation in time and position. More precisely, the state of a system is given by a linear combination of all the eigenfunctions of the Schrödinger equation governing that system.
An example is a qubit used in quantum information processing. A qubit state is most generally a superposition of the basis states formula_0 and formula_1:
formula_2
where formula_3 is the quantum state of the qubit, and formula_0, formula_1 denote particular solutions to the Schrödinger equation in Dirac notation, weighted by the two probability amplitudes formula_4 and formula_5, which are both complex numbers. Here formula_6 corresponds to the classical 0 bit, and formula_7 to the classical 1 bit. The probabilities of measuring the system in the formula_0 or formula_1 state are given by formula_8 and formula_9 respectively (see the Born rule). Before the measurement occurs the qubit is in a superposition of both states.
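A short numerical sketch of the Born rule for a qubit, with hypothetical amplitudes:

```python
import numpy as np

# Hypothetical complex amplitudes c_0 and c_1, normalized so |c_0|^2 + |c_1|^2 = 1.
c = np.array([1 + 1j, 2 - 1j], dtype=complex)
c = c / np.linalg.norm(c)

p0, p1 = np.abs(c[0]) ** 2, np.abs(c[1]) ** 2   # Born-rule probabilities
print(p0, p1, p0 + p1)                          # the probabilities sum to 1

# Simulated measurements in the {|0>, |1>} basis yield one definite outcome each time.
outcomes = np.random.default_rng(0).choice([0, 1], size=10_000, p=[p0, p1])
print(outcomes.mean())                          # fraction of "1" outcomes ≈ p1
```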
The interference fringes in the double-slit experiment provide another example of the superposition principle.
Background.
Paul Dirac described the superposition principle as follows:
The general principle of superposition of quantum mechanics applies to the states [that are theoretically possible without mutual interference or contradiction] ... of any one dynamical system. It requires us to assume that between these states there exist peculiar relationships such that whenever the system is definitely in one state we can consider it as being partly in each of two or more other states. The original state must be regarded as the result of a kind of superposition of the two or more new states, in a way that cannot be conceived on classical ideas. Any state may be considered as the result of a superposition of two or more other states, and indeed in an infinite number of ways. Conversely, any two or more states may be superposed to give a new state...
The non-classical nature of the superposition process is brought out clearly if we consider the superposition of two states, "A" and "B", such that there exists an observation which, when made on the system in state "A", is certain to lead to one particular result, "a" say, and when made on the system in state "B" is certain to lead to some different result, "b" say. What will be the result of the observation when made on the system in the superposed state? The answer is that the result will be sometimes "a" and sometimes "b", according to a probability law depending on the relative weights of "A" and "B" in the superposition process. It will never be different from both "a" and "b" [i.e., either "a" or "b"]. "The intermediate character of the state formed by superposition thus expresses itself through the probability of a particular result for an observation being intermediate between the corresponding probabilities for the original states, not through the result itself being intermediate between the corresponding results for the original states."
Anton Zeilinger, referring to the prototypical example of the double-slit experiment, has elaborated regarding the creation and destruction of quantum superposition:
"[T]he superposition of amplitudes ... is only valid if there is no way to know, even in principle, which path the particle took. It is important to realize that this does not imply that an observer actually takes note of what happens. It is sufficient to destroy the interference pattern, if the path information is accessible in principle from the experiment or even if it is dispersed in the environment and beyond any technical possibility to be recovered, but in principle still ‘‘out there.’’ The absence of any such information is "the essential criterion" for quantum interference to appear.
Theory.
General formalism.
Any state can be expanded as a sum of the eigenstates of an Hermitian operator, like the Hamiltonian, because the eigenstates form a complete basis:
formula_10
where formula_11 are the energy eigenstates of the Hamiltonian. For continuous variables like position eigenstates, formula_12:
formula_13
where formula_14 is the projection of the state into the formula_12 basis and is called the wave function of the particle. In both instances we notice that formula_15 can be expanded as a superposition of an infinite number of basis states.
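A small numerical sketch of such an expansion, using a random Hermitian matrix as a stand-in Hamiltonian (all values hypothetical):

```python
import numpy as np

# Expand a state in the eigenbasis of a Hermitian operator
# (a random 3x3 Hermitian matrix stands in for a Hamiltonian).
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                    # Hermitian by construction

eigvals, eigvecs = np.linalg.eigh(H)        # columns of eigvecs form a complete basis
alpha = np.array([1.0, 1j, -0.5])
alpha = alpha / np.linalg.norm(alpha)       # normalized state |alpha>

c = eigvecs.conj().T @ alpha                # coefficients c_n = <n|alpha>
print(np.allclose(eigvecs @ c, alpha))      # True: |alpha> = sum_n c_n |n>
print(np.sum(np.abs(c) ** 2))               # 1.0: the |c_n|^2 sum to one
```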
Example.
Given the Schrödinger equation
formula_16
where formula_11 denotes the eigenstates of the Hamiltonian with energy eigenvalues formula_17, we see immediately that
formula_18
where
formula_19
is a solution of the Schrödinger equation but is not generally an eigenstate because formula_20 and formula_21 are not generally equal. We say that formula_22 is made up of a superposition of energy eigenstates. Now consider the more concrete case of an electron that has either spin up or down. We now index the eigenstates with the spinors in the formula_23 basis:
formula_24
where formula_25 and formula_26 denote spin-up and spin-down states respectively. As previously discussed, the magnitudes of the complex coefficients give the probability of finding the electron in either definite spin state:
formula_27
formula_28
formula_29
where the probability of finding the particle with either spin up or down is normalized to 1. Notice that formula_5 and formula_30 are complex numbers, so that
formula_31
is an example of an allowed state. We now get
formula_32
formula_33
formula_34
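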
If we consider a qubit with both position and spin, the state is a superposition of all possibilities for both:
formula_35
where we have a general state formula_36 is the sum of the tensor products of the position space wave functions and spinors.
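A discretized sketch of such a combined position-and-spin state, with hypothetical amplitudes on a three-point grid:

```python
import numpy as np

# psi_plus and psi_minus are hypothetical position-space amplitudes on a small
# grid, and |up>, |down> are the two spinor basis vectors.
psi_plus = np.array([0.1, 0.5, 0.3], dtype=complex)
psi_minus = np.array([0.2, 0.1, 0.6], dtype=complex)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# The full state is a sum of tensor products of spatial amplitudes and spinors.
Psi = np.kron(psi_plus, up) + np.kron(psi_minus, down)
Psi = Psi / np.linalg.norm(Psi)

# Probability of measuring spin up, summed over all grid positions
# (the spin index is the fast-running index in this kron ordering).
p_up = np.sum(np.abs(Psi[0::2]) ** 2)
print(Psi.shape, p_up)
```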
Experiments and applications.
Experiments.
Successful experiments involving superpositions of relatively large (by the standards of quantum physics) objects have been performed.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|0 \\rangle"
},
{
"math_id": 1,
"text": "|1 \\rangle"
},
{
"math_id": 2,
"text": "|\\Psi \\rangle = c_0|0\\rangle + c_1|1\\rangle,"
},
{
"math_id": 3,
"text": "|\\Psi \\rangle"
},
{
"math_id": 4,
"text": "c_0"
},
{
"math_id": 5,
"text": "c_1"
},
{
"math_id": 6,
"text": "|0 \\rangle "
},
{
"math_id": 7,
"text": "|1 \\rangle "
},
{
"math_id": 8,
"text": "|c_0|^2"
},
{
"math_id": 9,
"text": "|c_1|^2"
},
{
"math_id": 10,
"text": "\n |\\alpha\\rangle = \\sum_n c_n |n\\rangle,\n"
},
{
"math_id": 11,
"text": "|n\\rangle"
},
{
"math_id": 12,
"text": "|x\\rangle"
},
{
"math_id": 13,
"text": "\n |\\alpha \\rangle = \\int dx' |x'\\rangle \\langle x'|\\alpha \\rangle,\n"
},
{
"math_id": 14,
"text": "\\phi_\\alpha(x) = \\langle x| \\alpha \\rangle"
},
{
"math_id": 15,
"text": "|\\alpha\\rangle"
},
{
"math_id": 16,
"text": "\\hat H |n\\rangle = E_n |n\\rangle, "
},
{
"math_id": 17,
"text": "E_n,"
},
{
"math_id": 18,
"text": "\\hat H\\big(|n\\rangle + |n'\\rangle\\big) = E_n |n\\rangle + E_{n'} |n'\\rangle,"
},
{
"math_id": 19,
"text": "|\\Psi\\rangle = |n\\rangle + |n'\\rangle"
},
{
"math_id": 20,
"text": "E_n"
},
{
"math_id": 21,
"text": "E_{n'}"
},
{
"math_id": 22,
"text": "|\\Psi\\rangle"
},
{
"math_id": 23,
"text": "\\hat z"
},
{
"math_id": 24,
"text": "|\\Psi\\rangle = c_1 |{\\uparrow}\\rangle + c_2 |{\\downarrow}\\rangle,"
},
{
"math_id": 25,
"text": "|{\\uparrow}\\rangle"
},
{
"math_id": 26,
"text": "|{\\downarrow}\\rangle"
},
{
"math_id": 27,
"text": " P\\big(|{\\uparrow}\\rangle\\big) = |c_1|^2,"
},
{
"math_id": 28,
"text": " P\\big(|{\\downarrow}\\rangle\\big) = |c_2|^2,"
},
{
"math_id": 29,
"text": " P_\\text{total} = P\\big(|{\\uparrow}\\rangle\\big) + P\\big(|{\\downarrow}\\rangle\\big) = |c_1|^2 + |c_2|^2 = 1,"
},
{
"math_id": 30,
"text": "c_2"
},
{
"math_id": 31,
"text": "|\\Psi\\rangle = \\frac{3}{5} i |{\\uparrow}\\rangle + \\frac{4}{5} |{\\downarrow}\\rangle."
},
{
"math_id": 32,
"text": "P\\big(|{\\uparrow}\\rangle\\big) = \\left|\\frac{3i}{5}\\right|^2 = \\frac{9}{25},"
},
{
"math_id": 33,
"text": "P\\big(|{\\downarrow}\\rangle\\big) = \\left|\\frac{4}{5}\\right|^2 = \\frac{16}{25},"
},
{
"math_id": 34,
"text": "P_\\text{total} = P\\big(|{\\uparrow}\\rangle\\big) + P\\big(|{\\downarrow}\\rangle\\big) = \\frac{9}{25} + \\frac{16}{25} = 1."
},
{
"math_id": 35,
"text": "\n \\Psi = \\psi_+(x) \\otimes |{\\uparrow}\\rangle + \\psi_-(x) \\otimes |{\\downarrow}\\rangle,\n"
},
{
"math_id": 36,
"text": "\\Psi"
}
] | https://en.wikipedia.org/wiki?curid=82728 |
827305 | Noncommutative quantum field theory | Quantum field theory using noncommutative mathematics
In mathematical physics, noncommutative quantum field theory (or quantum field theory on noncommutative spacetime) is an application of noncommutative mathematics to the spacetime of quantum field theory that is an outgrowth of noncommutative geometry and index theory in which the coordinate functions are noncommutative. One commonly studied version of such theories has the "canonical" commutation relation:
formula_0
where formula_1 and formula_2 are the hermitian generators of a noncommutative formula_3-algebra of "functions on spacetime". That means that (with any given set of axes), it is impossible to accurately measure the position of a particle with respect to more than one axis. In fact, this leads to an uncertainty relation for the coordinates analogous to the Heisenberg uncertainty principle.
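The commutation relation can be illustrated numerically with a standard finite-dimensional stand-in (not part of the field theory itself): truncated harmonic-oscillator ladder operators give two matrices whose commutator is approximately iθ times the identity.

```python
import numpy as np

# A finite-dimensional numerical illustration: "coordinate" matrices x, y
# built from truncated ladder operators satisfy [x, y] ≈ i*theta.
N, theta = 8, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # truncated annihilation operator
x = np.sqrt(theta / 2) * (a + a.conj().T)
y = -1j * np.sqrt(theta / 2) * (a - a.conj().T)

comm = x @ y - y @ x
# Equals i*theta times the identity, except for the last diagonal entry,
# which is an artifact of truncating the matrices to finite size.
print(np.allclose(comm[:-1, :-1], 1j * theta * np.eye(N - 1)))   # True
print(comm[-1, -1])                                              # -(N-1)*i*theta
```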
Various lower limits have been claimed for the noncommutative scale (i.e. how accurately positions can be measured), but there is currently no experimental evidence in favour of such theories, nor grounds for ruling them out.
One of the novel features of noncommutative field theories is the UV/IR mixing phenomenon in which the physics at high energies affects the physics at low energies which does not occur in quantum field theories in which the coordinates commute.
Other features include violation of Lorentz invariance due to the preferred direction of noncommutativity. Relativistic invariance can however be retained in the sense of twisted Poincaré invariance of the theory. The causality condition is modified from that of the commutative theories.
History and motivation.
Heisenberg was the first to suggest extending noncommutativity to the coordinates as a possible way of removing the infinite quantities appearing in field theories before the renormalization procedure was developed and had gained acceptance. The first paper on the subject was published in 1947 by Hartland Snyder. The success of the renormalization method resulted in little attention being paid to the subject for some time. In the 1980s, mathematicians, most notably Alain Connes, developed noncommutative geometry. Among other things, this work generalized the notion of differential structure to a noncommutative setting. This led to an operator algebraic description of noncommutative space-times, with the problem that it classically corresponds to a manifold with a positive-definite metric tensor, so that there is no description of (noncommutative) causality in this approach. However, it also led to the development of a Yang–Mills theory on a noncommutative torus.
The particle physics community became interested in the noncommutative approach because of a paper by Nathan Seiberg and Edward Witten. They argued in the context of string theory that the coordinate functions of the endpoints of open strings constrained to a D-brane in the presence of a constant Neveu–Schwarz B-field—equivalent to a constant magnetic field on the brane—would satisfy the noncommutative algebra set out above. The implication is that a quantum field theory on noncommutative spacetime can be interpreted as a low energy limit of the theory of open strings.
Two papers, one by Sergio Doplicher, Klaus Fredenhagen and John Roberts
and the other by D. V. Ahluwalia,
set out another motivation for the possible noncommutativity of space-time.
The arguments go as follows: According to general relativity, when the energy density grows sufficiently large, a black hole is formed. On the other hand, according to the Heisenberg uncertainty principle, a measurement of a space-time separation causes an uncertainty in momentum inversely proportional to the extent of the separation. Thus energy whose scale corresponds to the uncertainty in momentum is localized in the system within a region corresponding to the uncertainty in position. When the separation is small enough, the Schwarzschild radius of the system is reached and a black hole is formed, which prevents any information from escaping the system. Thus there is a lower bound for the measurement of length. A sufficient condition for preventing gravitational collapse can be expressed as an uncertainty relation for the coordinates. This relation can in turn be derived from a commutation relation for the coordinates.
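The length scale that comes out of this heuristic argument (not quoted above, but standard, up to factors of order one) is the Planck length; a one-line check using SciPy's physical constants:

```python
from scipy.constants import hbar, G, c

# Combining the Heisenberg uncertainty principle with the Schwarzschild radius,
# as in the argument above, gives the Planck length as the heuristic lower
# bound on measurable lengths (up to factors of order one).
planck_length = (hbar * G / c ** 3) ** 0.5
print(planck_length)   # ≈ 1.6e-35 m
```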
It is worth stressing that, differently from other approaches, in particular those relying upon Connes' ideas, here the noncommutative spacetime is a proper spacetime, i.e. it extends the idea of a four-dimensional pseudo-Riemannian manifold. On the other hand, differently from Connes' noncommutative geometry, the proposed model turns out to be coordinate-dependent from scratch.
In Doplicher Fredenhagen Roberts' paper noncommutativity of coordinates concerns all four spacetime coordinates and not only spatial ones.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n[x^{\\mu}, x^{\\nu}]=i \\theta^{\\mu \\nu} \\,\\!\n"
},
{
"math_id": 1,
"text": "x^{\\mu}"
},
{
"math_id": 2,
"text": "x^{\\nu}"
},
{
"math_id": 3,
"text": "C^*"
}
] | https://en.wikipedia.org/wiki?curid=827305 |
82739 | Tychonoff's theorem | Product of any collection of compact topological spaces is compact
In mathematics, Tychonoff's theorem states that the product of any collection of compact topological spaces is compact with respect to the product topology. The theorem is named after Andrey Nikolayevich Tikhonov (whose surname sometimes is transcribed "Tychonoff"), who proved it first in 1930 for powers of the closed unit interval and in 1935 stated the full theorem along with the remark that its proof was the same as for the special case. The earliest known published proof is contained in a 1935 article by Tychonoff, .
Tychonoff's theorem is often considered as perhaps the single most important result in general topology (along with Urysohn's lemma). The theorem is also valid for topological spaces based on fuzzy sets.
Topological definitions.
The theorem depends crucially upon the precise definitions of compactness and of the product topology; in fact, Tychonoff's 1935 paper defines the product topology for the first time. Conversely, part of its importance is to give confidence that these particular definitions are the most useful (i.e. most well-behaved) ones.
Indeed, the Heine–Borel definition of compactness—that every covering of a space by open sets admits a finite subcovering—is relatively recent. More popular in the 19th and early 20th centuries was the Bolzano-Weierstrass criterion that every bounded infinite sequence admits a convergent subsequence, now called sequential compactness. These conditions are equivalent for metrizable spaces, but neither one implies the other in the class of all topological spaces.
It is almost trivial to prove that the product of two sequentially compact spaces is sequentially compact—one passes to a subsequence for the first component and then a subsubsequence for the second component. An only slightly more elaborate "diagonalization" argument establishes the sequential compactness of a countable product of sequentially compact spaces. However, the product of continuum many copies of the closed unit interval (with its usual topology) fails to be sequentially compact with respect to the product topology, even though it is compact by Tychonoff's theorem (e.g., see ).
This is a critical failure: if "X" is a completely regular Hausdorff space, there is a natural embedding from "X" into [0,1]"C"("X",[0,1]), where "C"("X",[0,1]) is the set of continuous maps from "X" to [0,1]. The compactness of [0,1]"C"("X",[0,1]) thus shows that every completely regular Hausdorff space embeds in a compact Hausdorff space (or, can be "compactified".) This construction is the Stone–Čech compactification. Conversely, all subspaces of compact Hausdorff spaces are completely regular Hausdorff, so this characterizes the completely regular Hausdorff spaces as those that can be compactified. Such spaces are now called Tychonoff spaces.
Applications.
Tychonoff's theorem has been used to prove many other mathematical theorems. These include theorems about compactness of certain spaces such as the Banach–Alaoglu theorem on the weak-* compactness of the unit ball of the dual space of a normed vector space, and the Arzelà–Ascoli theorem characterizing the sequences of functions in which every subsequence has a uniformly convergent subsequence. They also include statements less obviously related to compactness, such as the De Bruijn–Erdős theorem stating that every minimal "k"-chromatic graph is finite, and the Curtis–Hedlund–Lyndon theorem providing a topological characterization of cellular automata.
As a rule of thumb, any sort of construction that takes as input a fairly general object (often of an algebraic, or topological-algebraic nature) and outputs a compact space is likely to use Tychonoff: e.g., the Gelfand space of maximal ideals of a commutative C*-algebra, the Stone space of maximal ideals of a Boolean algebra, and the Berkovich spectrum of a commutative Banach ring.
Proofs of Tychonoff's theorem.
1) Tychonoff's 1930 proof used the concept of a complete accumulation point.
2) The theorem is a quick corollary of the Alexander subbase theorem.
More modern proofs have been motivated by the following considerations: the approach to compactness via convergence of subsequences leads to a simple and transparent proof in the case of countable index sets. However, the approach to convergence in a topological space using sequences is sufficient when the space satisfies the first axiom of countability (as metrizable spaces do), but generally not otherwise. However, the product of uncountably many metrizable spaces, each with at least two points, fails to be first countable. So it is natural to hope that a suitable notion of convergence in arbitrary spaces will lead to a compactness criterion generalizing sequential compactness in metrizable spaces that will be as easily applied to deduce the compactness of products. This has turned out to be the case.
3) The theory of convergence via filters, due to Henri Cartan and developed by Bourbaki in 1937, leads to the following criterion: assuming the ultrafilter lemma, a space is compact if and only if each ultrafilter on the space converges. With this in hand, the proof becomes easy: the (filter generated by the) image of an ultrafilter on the product space under any projection map is an ultrafilter on the factor space, which therefore converges, to at least one "xi". One then shows that the original ultrafilter converges to "x" = ("xi"). In his textbook, Munkres gives a reworking of the Cartan–Bourbaki proof that does not explicitly use any filter-theoretic language or preliminaries.
4) Similarly, the Moore–Smith theory of convergence via nets, as supplemented by Kelley's notion of a universal net, leads to the criterion that a space is compact if and only if each universal net on the space converges. This criterion leads to a proof (Kelley, 1950) of Tychonoff's theorem, which is, word for word, identical to the Cartan/Bourbaki proof using filters, save for the repeated substitution of "universal net" for "ultrafilter base".
5) A proof using nets but not universal nets was given in 1992 by Paul Chernoff.
Tychonoff's theorem and the axiom of choice.
All of the above proofs use the axiom of choice (AC) in some way. For instance, the third proof uses that every filter is contained in an ultrafilter (i.e., a maximal filter), and this is seen by invoking Zorn's lemma. Zorn's lemma is also used to prove Kelley's theorem, that every net has a universal subnet. In fact these uses of AC are essential: in 1950 Kelley proved that Tychonoff's theorem implies the axiom of choice in ZF. Note that one formulation of AC is that the Cartesian product of a family of nonempty sets is nonempty; but since the empty set is most certainly compact, the proof cannot proceed along such straightforward lines. Thus Tychonoff's theorem joins several other basic theorems (e.g. that every vector space has a basis) in being "equivalent" to AC.
On the other hand, the statement that every filter is contained in an ultrafilter does not imply AC. Indeed, it is not hard to see that it is equivalent to the Boolean prime ideal theorem (BPI), a well-known intermediate point between the axioms of Zermelo-Fraenkel set theory (ZF) and the ZF theory augmented by the axiom of choice (ZFC). A first glance at the second proof of Tychonoff's theorem may suggest that the proof uses no more than (BPI), in contradiction to the above. However, the spaces in which every convergent filter has a unique limit are precisely the Hausdorff spaces. In general we must select, for each element of the index set, an element of the nonempty set of limits of the projected ultrafilter base, and of course this uses AC. However, it also shows that the compactness of the product of compact Hausdorff spaces can be proved using (BPI), and in fact the converse also holds. Studying the "strength" of Tychonoff's theorem for various restricted classes of spaces is an active area in set-theoretic topology.
The analogue of Tychonoff's theorem in pointless topology does not require any form of the axiom of choice.
Proof of the axiom of choice from Tychonoff's theorem.
To prove that Tychonoff's theorem in its general version implies the axiom of choice, we establish that every infinite cartesian product of non-empty sets is nonempty. The trickiest part of the proof is introducing the right topology. The right topology, as it turns out, is the cofinite topology with a small twist. It turns out that every set given this topology automatically becomes a compact space. Once we have this fact, Tychonoff's theorem can be applied; we then use the finite intersection property (FIP) definition of compactness. The proof itself (due to J. L. Kelley) follows:
Let {"Ai"} be an indexed family of nonempty sets, for "i" ranging in "I" (where "I" is an arbitrary indexing set). We wish to show that the cartesian product of these sets is nonempty. Now, for each "i", take "Xi" to be "Ai" with the index "i" itself tacked on (renaming the indices using the disjoint union if necessary, we may assume that "i" is not a member of "Ai", so simply take "Xi" = "Ai" ∪ {"i"}).
Now define the cartesian product
formula_0
along with the natural projection maps "πi" which take a member of "X" to its "i"th term.
We give each "Xj" the topology whose open sets are: the empty set, the singleton {"i"}, the set "Xi". This makes "Xi" compact, and by Tychonoff's theorem, "X" is also compact (in the product topology). The projection maps are continuous; all the "Ai"'s are closed, being complements of the singleton open set {"i"} in "Xi". So the inverse images π"i"−1("Ai") are closed subsets of "X". We note that
formula_1
and prove that these inverse images have the FIP. Let "i1", ..., "iN" be a finite collection of indices in "I". Then the "finite" product "Ai1" × ... × "AiN"
is non-empty (only finitely many choices here, so AC is not needed); it merely consists of "N"-tuples. Let "a" = ("a"1, ..., "aN") be such an "N"-tuple. We extend "a" to the whole index set: take "a" to the function "f" defined by "f"("j") = "ak" if "j" = "ik", and "f"("j") = "j" otherwise. "This step is where the addition of the extra point to each space is crucial", for it allows us to define "f" for everything outside of the "N"-tuple in a precise way without choices (we can already choose, by construction, "j" from "Xj" ). π"ik"("f") = "ak" is obviously an element of each "Aik" so that "f" is in each inverse image; thus we have
formula_2
By the FIP definition of compactness, the entire intersection over "I" must be nonempty, and the proof is complete.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X = \\prod_{i \\in I} X_i"
},
{
"math_id": 1,
"text": "\\prod_{i \\in I} A_i = \\bigcap_{i \\in I} \\pi_i^{-1}(A_i) "
},
{
"math_id": 2,
"text": "\\bigcap_{k = 1}^N \\pi_{i_k}^{-1}(A_{i_k}) \\neq \\varnothing."
}
] | https://en.wikipedia.org/wiki?curid=82739 |
8276156 | Dehn–Sommerville equations | In mathematics, the Dehn–Sommerville equations are a complete set of linear relations between the numbers of faces of different dimension of a simplicial polytope. For polytopes of dimension 4 and 5, they were found by Max Dehn in 1905. Their general form was established by Duncan Sommerville in 1927. The Dehn–Sommerville equations can be restated as a symmetry condition for the "h"-vector of the simplicial polytope and this has become the standard formulation in recent combinatorics literature. By duality, analogous equations hold for simple polytopes.
Statement.
Let "P" be a "d"-dimensional simplicial polytope. For "i" = 0, 1, ..., "d" − 1, let "f""i" denote the number of "i"-dimensional faces of "P". The sequence
formula_0
is called the "f"-vector of the polytope "P". Additionally, set
formula_1
Then for any "k" = −1, 0, ..., "d" − 2, the following Dehn–Sommerville equation holds:
formula_2
When "k" = −1, it expresses the fact that Euler characteristic of a ("d" − 1)-dimensional simplicial sphere is equal to 1 + (−1)"d" − 1.
Dehn–Sommerville equations with different "k" are not independent. There are several ways to choose a maximal independent subset consisting of formula_3 equations. If "d" is even then the equations with "k" = 0, 2, 4, ..., "d" − 2 are independent. Another independent set consists of the equations with "k" = −1, 1, 3, ..., "d" − 3. If "d" is odd then the equations with "k" = −1, 1, 3, ..., "d" − 2 form one independent set and the equations with "k" = −1, 0, 2, 4, ..., "d" − 3 form another.
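The equations are easy to check numerically; the following sketch verifies them for the octahedron, a simplicial 3-polytope with "f"-vector (6, 12, 8):

```python
from math import comb

# Verify the Dehn–Sommerville equations for the octahedron, a simplicial
# 3-polytope (d = 3) with f-vector (f_0, f_1, f_2) = (6, 12, 8) and f_{-1} = 1.
d = 3
f = {-1: 1, 0: 6, 1: 12, 2: 8}

for k in range(-1, d - 1):                      # k = -1, 0, ..., d - 2
    # (-1) ** (j % 2) keeps the sign an integer even for the j = -1 term.
    lhs = sum((-1) ** (j % 2) * comb(j + 1, k + 1) * f[j] for j in range(k, d))
    rhs = (-1) ** (d - 1) * f[k]
    print(k, lhs == rhs)                        # True for every k
```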
Equivalent formulations.
Sommerville found a different way to state these equations:
formula_4
where 0 ≤ k ≤ 1⁄2(d−1). This can be further facilitated by introducing the notion of the "h"-vector of "P". For "k" = 0, 1, ..., "d", let
formula_5
The sequence
formula_6
is called the "h"-vector of "P". The "f"-vector and the "h"-vector uniquely determine each other through the relation
formula_7
Then the Dehn–Sommerville equations can be restated simply as
formula_8
The equations with 0 ≤ k ≤ 1⁄2(d−1) are independent, and the others are manifestly equivalent to them.
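The symmetry of the "h"-vector can likewise be checked directly; for the octahedron the "h"-vector works out to (1, 3, 3, 1):

```python
from math import comb

def h_vector(f, d):
    """h_k = sum_{i=0}^{k} (-1)^(k-i) C(d-i, k-i) f_{i-1}, with f_{-1} = 1."""
    f_ext = [1] + list(f)                      # f_ext[i] corresponds to f_{i-1}
    return [sum((-1) ** (k - i) * comb(d - i, k - i) * f_ext[i]
                for i in range(k + 1))
            for k in range(d + 1)]

# The octahedron again: f-vector (6, 12, 8), d = 3.
h = h_vector([6, 12, 8], 3)
print(h)              # [1, 3, 3, 1]
print(h == h[::-1])   # True: h_k = h_{d-k}
```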
Richard Stanley gave an interpretation of the components of the "h"-vector of a simplicial convex polytope "P" in terms of the projective toric variety "X" associated with (the dual of) "P". Namely, they are the dimensions of the even intersection cohomology groups of "X":
formula_9
(the odd intersection cohomology groups of "X" are all zero). In this language, the last form of the Dehn–Sommerville equations, the symmetry of the "h"-vector, is a manifestation of the Poincaré duality in the intersection cohomology of "X". | [
{
"math_id": 0,
"text": " f(P)=(f_0,f_1,\\ldots,f_{d-1}) "
},
{
"math_id": 1,
"text": " f_{-1}=1, f_d=1. "
},
{
"math_id": 2,
"text": "\\sum_{j=k}^{d-1} (-1)^j \\binom{j+1}{k+1} f_j = (-1)^{d-1}f_k. "
},
{
"math_id": 3,
"text": "\\left[\\frac{d+1}{2}\\right]"
},
{
"math_id": 4,
"text": " \\sum_{i=-1}^{k-1}(-1)^{d+i}\\binom{d-i-1}{d-k} f_i = \\sum_{i=-1}^{d-k-1}(-1)^i \\binom{d-i-1}{k} f_i, "
},
{
"math_id": 5,
"text": " h_k = \\sum_{i=0}^k (-1)^{k-i}\\binom{d-i}{k-i}f_{i-1}. "
},
{
"math_id": 6,
"text": "h(P)=(h_0,h_1,\\ldots,h_d)"
},
{
"math_id": 7,
"text": " \\sum_{i=0}^d f_{i-1}(t-1)^{d-i}=\\sum_{k=0}^d h_k t^{d-k}. "
},
{
"math_id": 8,
"text": " h_k = h_{d-k} \\quad\\text{ for } 0\\leq k\\leq d. "
},
{
"math_id": 9,
"text": " h_k=\\dim_{\\mathbb{Q}}\\operatorname{IH}^{2k}(X,\\mathbb{Q}) "
}
] | https://en.wikipedia.org/wiki?curid=8276156 |
827635 | Learning vector quantization | In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.
Overview.
LVQ can be understood as a special case of an artificial neural network, more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.
An LVQ system is represented by prototypes formula_0 which are defined in the feature space of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.
An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.
LVQ systems can be applied to multi-class classification problems in a natural way.
A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system, see e.g. (Schneider, Biehl, and Hammer, 2009) and references therein.
LVQ can also be applied to the classification of text documents.
Algorithm.
Below follows an informal description.
The algorithm consists of three basic steps. The algorithm's input is: the number of prototypes formula_1, the prototype vectors formula_2 for formula_3, a class label formula_4 attached to each prototype formula_5, a learning rate formula_6, and a list formula_7 of training samples with known classes.
The algorithm's flow is: for each training sample formula_8 with known label formula_9 in formula_7, find the closest prototype formula_10, i.e. the prototype satisfying formula_11 for the chosen distance measure formula_12; update that winner prototype, moving it towards the sample via formula_13 if formula_14, and away from the sample via formula_15 if formula_16; repeat over the list of training samples until a stopping criterion is met (for example, a fixed number of passes over formula_7).
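A minimal Python sketch of this procedure (LVQ1 with Euclidean distance; all function and variable names are illustrative):

```python
import numpy as np

def lvq1_train(prototypes, prototype_labels, samples, sample_labels,
               eta=0.1, epochs=10):
    """A bare-bones LVQ1 sketch (Euclidean distance assumed).
    prototypes:        (M, n_features) array of prototype vectors
    prototype_labels:  (M,) array of class labels attached to the prototypes
    samples, sample_labels: training data with known classes
    eta:               learning rate
    """
    W = np.asarray(prototypes, dtype=float).copy()
    c = np.asarray(prototype_labels)
    for _ in range(epochs):
        for x_vec, y_true in zip(samples, sample_labels):
            # Winner-take-all step: find the prototype closest to the sample.
            m = np.argmin(np.linalg.norm(W - x_vec, axis=1))
            # Move the winner towards the sample if its label is correct,
            # away from the sample otherwise.
            sign = 1.0 if c[m] == y_true else -1.0
            W[m] += sign * eta * (x_vec - W[m])
    return W, c

def lvq_predict(W, c, samples):
    dists = np.linalg.norm(W[None, :, :] - np.asarray(samples)[:, None, :], axis=2)
    return c[np.argmin(dists, axis=1)]
```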
Note: formula_2 and formula_8 are vectors in feature space. | [
{
"math_id": 0,
"text": "W=(w(i),...,w(n))"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "\\vec{w_i}"
},
{
"math_id": 3,
"text": "i = 0,1,...,M - 1 "
},
{
"math_id": 4,
"text": "c_i"
},
{
"math_id": 5,
"text": " \\vec{w_i} "
},
{
"math_id": 6,
"text": " \\eta "
},
{
"math_id": 7,
"text": " L "
},
{
"math_id": 8,
"text": "\\vec{x}"
},
{
"math_id": 9,
"text": "y"
},
{
"math_id": 10,
"text": "\\vec{w_m}"
},
{
"math_id": 11,
"text": "d(\\vec{x},\\vec{w_m}) = \\min\\limits_i {d(\\vec{x},\\vec{w_i})} "
},
{
"math_id": 12,
"text": "\\, d"
},
{
"math_id": 13,
"text": " \\vec{w_m} \\gets \\vec{w_m} + \\eta \\cdot \\left( \\vec{x} - \\vec{w_m} \\right) "
},
{
"math_id": 14,
"text": " c_m = y"
},
{
"math_id": 15,
"text": " \\vec{w_m} \\gets \\vec{w_m} - \\eta \\cdot \\left( \\vec{x} - \\vec{w_m} \\right) "
},
{
"math_id": 16,
"text": " c_m \\neq y"
}
] | https://en.wikipedia.org/wiki?curid=827635 |
827658 | Bundle (mathematics) | In mathematics, a bundle is a generalization of a fiber bundle dropping the condition of a local product structure. The requirement of a local product structure rests on the bundle having a topology. Without this requirement, more general objects can be considered bundles. For example, one can consider a bundle π: "E" → "B" with "E" and "B" sets. It is no longer true that the preimages formula_0 must all look alike, unlike fiber bundles, where the fibers must all be isomorphic (in the case of vector bundles) and homeomorphic.
Definition.
A bundle is a triple ("E", "p", "B") where "E", "B" are sets and "p" : "E" → "B" is a map.
This definition of a bundle is quite unrestrictive. For instance, the empty function defines a bundle. Nonetheless it serves well to introduce the basic terminology, and every type of bundle has the basic ingredients of above with restrictions on "E", "p", "B" and usually there is additional structure.
For each "b" ∈ "B", "p"−1("b") is the fibre or fiber of the bundle over "b".
A bundle ("E*", "p*", "B*") is a subbundle of ("E", "p", "B") if "B*" ⊂ "B", "E*" ⊂ "E" and "p*"
"p"|"E*".
A cross section is a map "s" : "B" → "E" such that "p"("s"("b")) = "b" for each "b" ∈ "B", that is, "s"("b") ∈ "p"−1("b").
Bundle objects.
More generally, bundles or bundle objects can be defined in any category: in a category C, a bundle is simply an epimorphism π: "E" → "B". If the category is not concrete, then the notion of a preimage of the map is not necessarily available. Therefore these bundles may have no fibers at all, although for sufficiently well behaved categories they do; for instance, for a category with pullbacks and a terminal object 1 the points of "B" can be identified with morphisms "p":1→"B" and the fiber of "p" is obtained as the pullback of "p" and π. The category of bundles over "B" is a subcategory of the slice category (C↓"B") of objects over "B", while the category of bundles without fixed base object is a subcategory of the comma category ("C"↓"C") which is also the functor category C², the category of morphisms in C.
The category of smooth vector bundles is a bundle object over the category of smooth manifolds in Cat, the category of small categories. The functor taking each manifold to its tangent bundle is an example of a section of this bundle object.
| [
{
"math_id": 0,
"text": "\\pi^{-1}(x)"
}
] | https://en.wikipedia.org/wiki?curid=827658 |
827701 | Kerma (physics) | Kinetic energy released by ionizing radiation from uncharged particles per unit mass
In radiation physics, kerma is an acronym for "kinetic energy released per unit mass" (alternately, "kinetic energy released in matter", "kinetic energy released in material", or "kinetic energy released in materials"), defined as the sum of the initial kinetic energies of all the charged particles liberated by uncharged ionizing radiation (i.e., indirectly ionizing radiation such as photons and neutrons) in a sample of matter, divided by the mass of the sample. It is defined by the quotient
formula_0.
Units.
The SI unit of kerma is the gray (Gy) (or joule per kilogram), the same as the unit of absorbed dose. However, kerma can differ from absorbed dose, depending on the energies involved, because kerma counts the energy transferred to charged particles rather than the energy ultimately deposited, and because ionization energy is not accounted for. While kerma approximately equals absorbed dose at low energies, kerma is much higher than absorbed dose at higher energies, because some energy escapes from the absorbing volume in the form of bremsstrahlung (X-rays) or fast-moving electrons, and is not counted as absorbed dose.
Process of energy transfer.
Photon energy is transferred to matter in a two-step process. First, energy is transferred to charged particles in the medium through various photon interactions (e.g. photoelectric effect, Compton scattering, pair production, and photodisintegration). Next, these secondary charged particles transfer their energy to the medium through atomic excitation and ionizations.
For low-energy photons, kerma is numerically approximately the same as absorbed dose. For higher-energy photons, kerma is larger than absorbed dose because some highly energetic secondary electrons and X-rays escape the region of interest before depositing their energy. The escaping energy is counted in kerma, but not in absorbed dose. For low-energy X-rays, this is usually a negligible distinction. This can be understood when one looks at the components of kerma.
There are two independent contributions to the total kerma, collision kerma formula_1 and radiative kerma formula_2 – thus, formula_3. Collision kerma results in the production of electrons that dissipate their energy as ionization and excitation due to the interaction between the charged particle and the atomic electrons. Radiative kerma results in the production of radiative photons due to the interaction between the charged particle and atomic nuclei (mostly via Bremsstrahlung radiation), but can also include photons produced by annihilation of positrons in flight.
Frequently, the quantity formula_1 is of interest, and is usually expressed as
formula_4
where "g" is the average fraction of energy transferred to electrons that is lost through bremsstrahlung.
Calibration of radiation protection instruments.
Air kerma is of importance in the practical calibration of instruments for photon measurement, where it is used for the traceable calibration of gamma instrument metrology facilities using a "free air" ion chamber to measure air kerma.
IAEA safety report 16 states "The quantity "air kerma" should be used for calibrating the reference photon radiation fields and reference instruments. Radiation protection monitoring instruments should be calibrated in terms of dose equivalent quantities. Area dosimeters or dose ratemeters should be calibrated in terms of the ambient dose equivalent, H*(10), or the directional dose equivalent, H′(0.07), without any phantom present, i.e. free in air."
Conversion coefficients from air kerma in Gy to equivalent dose in Sv are published in the International Commission on Radiological Protection (ICRP) report 74 (1996). For instance, air kerma rate is converted to tissue equivalent dose using a factor of Sv/Gy (air) = 1.21 for Cs-137 at 0.662 MeV.
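The following small Python sketch ties these quantities together: it splits a total kerma into collision and radiative parts using formula_4 and applies the Cs-137 air-kerma-to-dose-equivalent factor quoted above. The numerical value of "g" is only a placeholder assumption for illustration.

```python
def collision_kerma(total_kerma_gy, g):
    """Collision kerma: k_col = K * (1 - g), where g is the average fraction of the
    energy transferred to electrons that is lost through bremsstrahlung."""
    return total_kerma_gy * (1.0 - g)

def radiative_kerma(total_kerma_gy, g):
    """Radiative kerma: k_rad = K * g, so that K = k_col + k_rad."""
    return total_kerma_gy * g

def tissue_dose_equivalent_sv(air_kerma_gy, factor_sv_per_gy=1.21):
    """Convert air kerma to tissue equivalent dose using the 1.21 Sv/Gy factor
    quoted for Cs-137 at 0.662 MeV."""
    return air_kerma_gy * factor_sv_per_gy

K = 2.0e-3   # total air kerma in Gy (example value)
g = 0.003    # placeholder radiative fraction (assumed for illustration, not a reference value)
print(collision_kerma(K, g), radiative_kerma(K, g), tissue_dose_equivalent_sv(K))
```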
| [
{
"math_id": 0,
"text": "K = \\operatorname{d}\\!E_\\text{tr}/\\operatorname{d}\\!m"
},
{
"math_id": 1,
"text": "k_\\text{col}"
},
{
"math_id": 2,
"text": "k_\\text{rad}"
},
{
"math_id": 3,
"text": "K = k_\\text{col} + k_\\text{rad}"
},
{
"math_id": 4,
"text": "k_\\text{col} = K (1 - g),"
}
] | https://en.wikipedia.org/wiki?curid=827701 |
827792 | Rare Earth hypothesis | Hypothesis that complex extraterrestrial life is improbable and extremely rare
In planetary astronomy and astrobiology, the Rare Earth hypothesis argues that the origin of life and the evolution of biological complexity, such as sexually reproducing, multicellular organisms on Earth, and subsequently human intelligence, required an improbable combination of astrophysical and geological events and circumstances. According to the hypothesis, complex extraterrestrial life is an improbable phenomenon and likely to be rare throughout the universe as a whole. The term "Rare Earth" originates from "Rare Earth: Why Complex Life Is Uncommon in the Universe" (2000), a book by Peter Ward, a geologist and paleontologist, and Donald E. Brownlee, an astronomer and astrobiologist, both faculty members at the University of Washington.
In the 1970s and 1980s, Carl Sagan and Frank Drake, among others, argued that Earth is a typical rocky planet in a typical planetary system, located in a non-exceptional region of a common barred spiral galaxy. From the principle of mediocrity (extended from the Copernican principle), they argued that the evolution of life on Earth, including human beings, was also typical, and therefore that the universe teems with complex life. Ward and Brownlee argue that planets, planetary systems, and galactic regions that are as accommodating for complex life as are the Earth, the Solar System, and our own galactic region are not typical at all but actually exceedingly rare.
<templatestyles src="Template:TOC limit/styles.css" />
Fermi paradox.
There is no reliable or reproducible evidence that extraterrestrial organisms of any kind have visited Earth. No transmissions or evidence of intelligent extraterrestrial life have been detected or observed anywhere other than Earth in the Universe. This runs counter to the knowledge that the Universe is filled with a very large number of planets, some of which likely hold the conditions hospitable for life. Life typically expands until it fills all available niches. These contradictory facts form the basis for the Fermi paradox, of which the Rare Earth hypothesis is one proposed solution.
Requirements for complex life.
The Rare Earth hypothesis argues that the evolution of biological complexity anywhere in the universe requires the coincidence of a large number of fortuitous circumstances, including, among others, a galactic habitable zone; a central star and planetary system having the requisite character (i.e. a circumstellar habitable zone); a terrestrial planet of the right mass; the advantage of one or more gas giant guardians like Jupiter and possibly a large natural satellite to shield the planet from frequent impact events; conditions needed to ensure the planet has a magnetosphere and plate tectonics; a chemistry similar to that present in the Earth's lithosphere, atmosphere, and oceans; the influence of periodic "evolutionary pumps" such as massive glaciations and bolide impacts; and whatever factors may have led to the emergence of eukaryotic cells, sexual reproduction, and the Cambrian explosion of animal, plant, and fungi phyla. The evolution of human beings and of human intelligence may have required yet further specific events and circumstances, all of which are extremely unlikely to have happened were it not for the Cretaceous–Paleogene extinction event 66 million years ago removing dinosaurs as the dominant terrestrial vertebrates.
In order for a small rocky planet to support complex life, Ward and Brownlee argue, the values of several variables must fall within narrow ranges. The universe is so vast that it might still contain many Earth-like planets, but if such planets exist, they are likely to be separated from each other by many thousands of light-years. Such distances may preclude communication among any intelligent species that may evolve on such planets, which would solve the Fermi paradox: "If extraterrestrial aliens are common, why aren't they obvious?"
The right location in the right kind of galaxy.
Rare Earth suggests that much of the known universe, including large parts of our galaxy, are "dead zones" unable to support complex life. Those parts of a galaxy where complex life is possible make up the galactic habitable zone, which is primarily characterized by distance from the Galactic Center.
Item #1 rules out the outermost reaches of a galaxy; #2 and #3 rule out galactic inner regions. Hence a galaxy's habitable zone may be a relatively narrow ring of adequate conditions sandwiched between its uninhabitable center and outer reaches.
Also, a habitable planetary system must maintain its favorable location long enough for complex life to evolve. A star with an eccentric (elliptical or hyperbolic) galactic orbit will pass through some spiral arms, unfavorable regions of high star density; thus a life-bearing star must have a galactic orbit that is nearly circular, with a close synchronization between the orbital velocity of the star and of the spiral arms. This further restricts the galactic habitable zone within a fairly narrow range of distances from the Galactic Center. Lineweaver et al. calculate this zone to be a ring 7 to 9 kiloparsecs in radius, including no more than 10% of the stars in the Milky Way, about 20 to 40 billion stars. Gonzalez "et al." would halve these numbers; they estimate that at most 5% of stars in the Milky Way fall within the galactic habitable zone.
Approximately 77% of observed galaxies are spiral, two-thirds of all spiral galaxies are barred, and more than half, like the Milky Way, exhibit multiple arms. According to Rare Earth, our own galaxy is unusually quiet and dim (see below), representing just 7% of its kind. Even so, this would still represent more than 200 billion galaxies in the known universe.
Our galaxy also appears unusually favorable in suffering fewer collisions with other galaxies over the last 10 billion years, which can cause more supernovae and other disturbances. Also, the Milky Way's central black hole seems to have neither too much nor too little activity.
The orbit of the Sun around the center of the Milky Way is indeed almost perfectly circular, with a period of 226 Ma (million years), closely matching the rotational period of the galaxy. However, the majority of stars in barred spiral galaxies populate the spiral arms rather than the halo and tend to move in gravitationally aligned orbits, so there is little that is unusual about the Sun's orbit. While the Rare Earth hypothesis predicts that the Sun should rarely, if ever, have passed through a spiral arm since its formation, astronomer Karen Masters has calculated that the orbit of the Sun takes it through a major spiral arm approximately every 100 million years. Some researchers have suggested that several mass extinctions do indeed correspond with previous crossings of the spiral arms.
The right orbital distance from the right type of star.
The terrestrial example suggests that complex life requires liquid water, the maintenance of which requires an orbital distance neither too close nor too far from the central star, another scale of habitable zone or Goldilocks principle.
The habitable zone varies with the star's type and age.
For advanced life, the star must also be highly stable, which is typical of middle star life, about 4.6 billion years old. Proper metallicity and size are also important to stability. The Sun has a low (0.1%) luminosity variation. To date, no solar twin star, with an exact match of the Sun's luminosity variation, has been found, though some come close. The star must also have no stellar companions, as in binary systems, which would disrupt the orbits of any planets. Estimates suggest 50% or more of all star systems are binary. Stars gradually brighten over time and it takes hundreds of millions or billions of years for animal life to evolve. The requirement for a planet to remain in the habitable zone even as its boundaries move outwards over time restricts the size of what Ward and Brownlee call the "continuously habitable zone" for animals. They cite a calculation that it is very narrow, within 0.95 and 1.15 astronomical units (one AU is the distance between the Earth and the Sun), and argue that even this may be too large because it is based on the whole zone within which liquid water can exist, and water near boiling point may be much too hot for animal life.
The liquid water and other gases available in the habitable zone bring the benefit of the greenhouse effect. Even though the Earth's atmosphere contains a water vapor concentration from 0% (in arid regions) to 4% (in rainforest and ocean regions) and – as of November 2022 – only 417.2 parts per million of CO2, these small amounts suffice to raise the average surface temperature by about 40 °C, with the dominant contribution being due to water vapor.
Rocky planets must orbit within the habitable zone for life to form. Although the habitable zone of such hot stars as Sirius or Vega is wide, hot stars also emit much more ultraviolet radiation that ionizes any planetary atmosphere. Such stars may also become red giants before advanced life evolves on their planets. These considerations rule out the massive and powerful stars of type F6 to O (see stellar classification) as homes to evolved metazoan life.
Conversely, small red dwarf stars have small habitable zones wherein planets are in tidal lock, with one very hot side always facing the star and another very cold side always facing away, and they are also at increased risk of solar flares (see Aurelia). As such, it is disputed whether they can support life. Rare Earth proponents claim that only stars from F7 to K1 types are hospitable. Such stars are rare: G type stars such as the Sun (between the hotter F and cooler K) comprise only 9% of the hydrogen-burning stars in the Milky Way.
Such aged stars as red giants and white dwarfs are also unlikely to support life. Red giants are common in globular clusters and elliptical galaxies. White dwarfs are mostly dying stars that have already completed their red giant phase. Stars that become red giants expand into or overheat the habitable zones of their youth and middle age (though theoretically planets at much greater distances may then become habitable).
An energy output that varies with the lifetime of the star will likely prevent life (e.g., as Cepheid variables). A sudden decrease, even if brief, may freeze the water of orbiting planets, and a significant increase may evaporate it and cause a greenhouse effect that prevents the oceans from reforming.
All known life requires the complex chemistry of metallic elements. The absorption spectrum of a star reveals the presence of metals within, and studies of stellar spectra reveal that many, perhaps most, stars are poor in metals. Because heavy metals originate in supernova explosions, metallicity increases in the universe over time. Low metallicity characterizes the early universe: globular clusters and other stars that formed when the universe was young, stars in most galaxies other than large spirals, and stars in the outer regions of all galaxies. Metal-rich central stars capable of supporting complex life are therefore believed to be most common in the less dense regions of the larger spiral galaxies—where radiation also happens to be weak.
The right arrangement of planets around the star.
Rare Earth proponents argue that a planetary system capable of sustaining complex life must be structured more or less like the Solar System, with small, rocky inner planets and massive outer gas giants. Without the protection of "celestial vacuum cleaner" planets with strong gravitational pulls, such as Jupiter, other planets would be subject to more frequent catastrophic asteroid collisions. An asteroid only twice the size of the one which caused the Cretaceous–Paleogene extinction might have wiped out all complex life.
Observations of exoplanets have shown that arrangements of planets similar to the Solar System are rare. Most planetary systems have super-Earths, several times larger than Earth, close to their star, whereas the Solar System's inner region has only a few small rocky planets and none inside Mercury's orbit. Only 10% of stars have giant planets similar to Jupiter and Saturn, and those few rarely have stable, nearly circular orbits distant from their star. Konstantin Batygin and colleagues argue that these features can be explained if, early in the history of the Solar System, Jupiter and Saturn drifted towards the Sun, sending showers of planetesimals towards the super-Earths which sent them spiralling into the Sun, and ferrying icy building blocks into the terrestrial region of the Solar System which provided the building blocks for the rocky planets. The two giant planets then drifted out again to their present positions. In the view of Batygin and his colleagues: "The concatenation of chance events required for this delicate choreography suggest that small, Earth-like rocky planets – and perhaps life itself – could be rare throughout the cosmos."
A continuously stable orbit.
Rare Earth proponents argue that a gas giant also must not be too close to a body where life is developing. Close placement of one or more gas giants could disrupt the orbit of a potential life-bearing planet, either directly or by drifting into the habitable zone.
Newtonian dynamics can produce chaotic planetary orbits, especially in a system having large planets at high orbital eccentricity.
The need for stable orbits rules out stars with planetary systems that contain large planets with orbits close to the host star (called "hot Jupiters"). It is believed that hot Jupiters have migrated inwards to their current orbits. In the process, they would have catastrophically disrupted the orbits of any planets in the habitable zone. To exacerbate matters, hot Jupiters are much more common orbiting F and G class stars.
A terrestrial planet of the right size.
The Rare Earth hypothesis argues that life requires terrestrial planets like Earth, and since gas giants lack such a surface, that complex life cannot arise there.
A planet that is too small cannot maintain much atmosphere, rendering its surface temperature low and variable and oceans impossible. A small planet will also tend to have a rough surface, with large mountains and deep canyons. The core will cool faster, and plate tectonics may be brief or entirely absent. A planet that is too large will retain too dense an atmosphere, like Venus. Although Venus is similar in size and mass to Earth, its surface atmospheric pressure is 92 times that of Earth, and its surface temperature is 735 K (462 °C; 863 °F). The early Earth once had a similar atmosphere, but may have lost it in the giant impact event which formed the Moon.
Plate tectonics.
Rare Earth proponents argue that plate tectonics and a strong magnetic field are essential for biodiversity, global temperature regulation, and the carbon cycle. The lack of mountain chains elsewhere in the Solar System is evidence that Earth is the only body which now has plate tectonics, and thus the only one capable of supporting life.
Plate tectonics depend on the right chemical composition and a long-lasting source of heat from radioactive decay. Continents must be made of less dense felsic rocks that "float" on underlying denser mafic rock. Taylor emphasizes that tectonic subduction zones require the lubrication of oceans of water. Plate tectonics also provide a means of biochemical cycling.
Plate tectonics and, as a result, continental drift and the creation of separate landmasses would create diversified ecosystems and biodiversity, one of the strongest defenses against extinction. An example of species diversification and later competition on Earth's continents is the Great American Interchange. North and Middle America drifted into South America at around 3.5 to 3 Ma. The fauna of South America had already evolved separately for about 30 million years, since Antarctica separated, but, after the merger, many species were wiped out, mainly in South America, by competing North American animals.
A large moon.
The Moon is unusual because the other rocky planets in the Solar System either have no satellites (Mercury and Venus), or only relatively tiny satellites which are probably captured asteroids (Mars). After Charon, the Moon is also the largest natural satellite in the Solar System relative to the size of its parent body, being 27% the size of Earth.
The giant-impact theory hypothesizes that the Moon resulted from the impact of a roughly Mars-sized body, dubbed Theia, with the young Earth. This giant impact also gave the Earth its axial tilt (inclination) and velocity of rotation. Rapid rotation reduces the daily variation in temperature and makes photosynthesis viable. The "Rare Earth" hypothesis further argues that the axial tilt cannot be too large or too small (relative to the orbital plane). A planet with a large tilt will experience extreme seasonal variations in climate. A planet with little or no tilt will lack the stimulus to evolution that climate variation provides. In this view, the Earth's tilt is "just right". The gravity of a large satellite also stabilizes the planet's tilt; without this effect, the variation in tilt would be chaotic, probably making complex life forms on land impossible.
If the Earth had no Moon, the ocean tides resulting solely from the Sun's gravity would be only half that of the lunar tides. A large satellite gives rise to tidal pools, which may be essential for the formation of complex life, though this is far from certain.
A large satellite also increases the likelihood of plate tectonics through the effect of tidal forces on the planet's crust. The impact that formed the Moon may also have initiated plate tectonics, without which the continental crust would cover the entire planet, leaving no room for oceanic crust. It is possible that the large-scale mantle convection needed to drive plate tectonics could not have emerged if the crust had a uniform composition. A further theory indicates that such a large moon may also contribute to maintaining a planet's magnetic shield by continually acting upon a metallic planetary core as dynamo, thus protecting the surface of the planet from charged particles and cosmic rays, and helping to ensure the atmosphere is not stripped over time by solar winds.
An atmosphere.
A terrestrial planet must be the right size, like Earth and Venus, in order to retain an atmosphere. On Earth, once the giant impact of Theia thinned Earth's atmosphere, other events were needed to make the atmosphere capable of sustaining life. The Late Heavy Bombardment reseeded Earth with water lost after the impact of Theia. The development of an ozone layer generated a protective shield against ultraviolet (UV) sunlight. Nitrogen and carbon dioxide are needed in a correct ratio for life to form. Lightning is needed for nitrogen fixation. The gaseous carbon dioxide needed for life comes from sources such as volcanoes and geysers. Carbon dioxide is preferably needed at relatively low levels (currently at approximately 400 ppm on Earth) because at high levels it is poisonous. Precipitation is needed to have a stable water cycle. A proper atmosphere must reduce diurnal temperature variation.
One or more evolutionary triggers for complex life.
Regardless of whether planets with similar physical attributes to the Earth are rare or not, some argue that life tends not to evolve into anything more complex than simple bacteria without being provoked by rare and specific circumstances. Biochemist Nick Lane argues that simple cells (prokaryotes) emerged soon after Earth's formation, but since almost half the planet's life had passed before they evolved into complex ones (eukaryotes), all of whom share a common ancestor, this event can only have happened once. According to some views, prokaryotes lack the cellular architecture to evolve into eukaryotes because a bacterium expanded up to eukaryotic proportions would have tens of thousands of times less energy available to power its metabolism. Two billion years ago, one simple cell incorporated itself into another, multiplied, and evolved into mitochondria that supplied the vast increase in available energy that enabled the evolution of complex eukaryotic life. If this incorporation occurred only once in four billion years or is otherwise unlikely, then life on most planets remains simple. An alternative view is that the evolution of mitochondria was environmentally triggered, and that mitochondria-containing organisms appeared soon after the first traces of atmospheric oxygen.
The evolution and persistence of sexual reproduction is another mystery in biology. The purpose of sexual reproduction is unclear, as in many organisms it has a 50% cost (fitness disadvantage) in relation to asexual reproduction. Mating types (types of gametes, according to their compatibility) may have arisen as a result of anisogamy (gamete dimorphism), or the male and female sexes may have evolved before anisogamy. It is also unknown why most sexual organisms use a binary mating system, and why some organisms have gamete dimorphism. Charles Darwin was the first to suggest that sexual selection drives speciation; without it, complex life would probably not have evolved.
The right time in evolutionary history.
While life on Earth is regarded to have spawned relatively early in the planet's history, the evolution from multicellular to intelligent organisms took around 800 million years. Civilizations on Earth have existed for about 12,000 years, and radio communication reaching space has existed for little more than 100 years. Relative to the age of the Solar System (~4.57 Ga) this is a short time, in which extreme climatic variations, super volcanoes, and large meteorite impacts were absent. These events would severely harm intelligent life, as well as life in general. For example, the Permian-Triassic mass extinction, caused by widespread and continuous volcanic eruptions in an area the size of Western Europe, led to the extinction of 95% of known species around 251.2 Ma ago. About 65 million years ago, the Chicxulub impact at the Cretaceous–Paleogene boundary (~65.5 Ma) on the Yucatán peninsula in Mexico led to a mass extinction of the most advanced species at that time.
Rare Earth equation.
The following discussion is adapted from Cramer. The Rare Earth equation is Ward and Brownlee's riposte to the Drake equation. It calculates formula_0, the number of Earth-like planets in the Milky Way having complex life forms, as:
formula_1
where N* is the number of stars in the Milky Way galaxy; formula_2 is the average number of planets in a star's habitable zone; formula_4 is the fraction of stars in the galactic habitable zone; formula_5 is the fraction of stars in the Milky Way with planets; formula_6 is the fraction of planets that are rocky ("metallic") rather than gaseous; formula_7 is the fraction of habitable planets on which microbial life arises; formula_8 is the fraction of planets on which complex life evolves; formula_9 is the fraction of a planet's total lifespan during which complex life is present; formula_10 is the fraction of habitable planets with a large moon; formula_11 is the fraction of planetary systems with large Jovian planets; and formula_12 is the fraction of planets with a sufficiently low number of extinction events.
We assume formula_3. The Rare Earth hypothesis can then be viewed as asserting that the product of the other nine Rare Earth equation factors listed above, which are all fractions, is no greater than 10−10 and could plausibly be as small as 10−12. In the latter case, formula_0 could be as small as 0 or 1. Ward and Brownlee do not actually calculate the value of formula_0, because the numerical values of quite a few of these factors can only be conjectured. They cannot be estimated simply because we have but one data point: the Earth, a rocky planet orbiting a G2 star in a quiet suburb of a large barred spiral galaxy, and the home of the only intelligent species we know; namely, ourselves.
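To make the structure of the equation concrete, here is a small sketch that multiplies the factors together. Every numerical value below is a placeholder chosen only to reproduce the order-of-magnitude argument in the text (N* · n_e = 5×10^11 and a product of the nine fractions between 10^-12 and 10^-10); none of them is an estimate by Ward and Brownlee.

```python
# Illustrative evaluation of N = N* . n_e . f_g . f_p . f_pm . f_i . f_c . f_l . f_m . f_j . f_me
N_star_times_ne = 5e11          # value assumed in the text

# Placeholder values for the nine fractional factors (purely illustrative)
fractions = {
    "f_g": 0.1,      # stars in the galactic habitable zone
    "f_p": 0.5,      # stars with planets
    "f_pm": 0.3,     # planets that are rocky rather than gaseous
    "f_i": 0.01,     # habitable planets on which microbial life arises
    "f_c": 0.001,    # planets on which complex life evolves
    "f_l": 0.1,      # fraction of a planet's lifetime with complex life present
    "f_m": 0.1,      # habitable planets with a large moon
    "f_j": 0.1,      # systems with large Jovian planets
    "f_me": 0.1,     # planets with few enough extinction events
}

product = 1.0
for value in fractions.values():
    product *= value

N = N_star_times_ne * product
print(f"product of fractions = {product:.1e}, N = {N:.1f}")   # about 1.5e-11 and N of order 10
```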
The Rare Earth equation, unlike the Drake equation, does not factor the probability that complex life evolves into intelligent life that discovers technology. Barrow and Tipler review the consensus among evolutionary biologists that the evolutionary path from primitive Cambrian chordates (e.g., "Pikaia") to "Homo sapiens" was a highly improbable event. For example, the large brains of humans have marked adaptive disadvantages, requiring as they do an expensive metabolism, a long gestation period, and a childhood lasting more than 25% of the average total life span. Other improbable features of humans include:
Advocates.
Writers who support the Rare Earth hypothesis:
Criticism.
Cases against the Rare Earth hypothesis take various forms.
The hypothesis appears anthropocentric.
The hypothesis concludes, more or less, that complex life is rare because it can evolve only on the surface of an Earth-like planet or on a suitable satellite of a planet. Some biologists, such as Jack Cohen, believe this assumption too restrictive and unimaginative; they see it as a form of circular reasoning.
According to David Darling, the Rare Earth hypothesis is neither hypothesis nor prediction, but merely a description of how life arose on Earth. In his view, Ward and Brownlee have done nothing more than select the factors that best suit their case.
Critics also argue that there is a link between the Rare Earth hypothesis and the unscientific idea of intelligent design.
Exoplanets around main sequence stars are being discovered in large numbers.
An increasing number of extrasolar planet discoveries are being made, with 7,026 planets in 4,949 planetary systems known to date. Rare Earth proponents argue life cannot arise outside Sun-like systems, due to tidal locking and ionizing radiation outside the F7–K1 range. However, some exobiologists have suggested that stars outside this range may give rise to life under the right circumstances; this possibility is a central point of contention for the theory because these late-K and M category stars make up about 82% of all hydrogen-burning stars.
Current technology limits the testing of important Rare Earth criteria: surface water, tectonic plates, a large moon and biosignatures are currently undetectable. Though planets the size of Earth are difficult to detect and classify, scientists now think that rocky planets are common around Sun-like stars. The Earth Similarity Index (ESI) of mass, radius and temperature provides a means of measurement, but falls short of the full Rare Earth criteria.
Rocky planets orbiting within habitable zones may not be rare.
Some argue that Rare Earth's estimates of rocky planets in habitable zones (formula_2 in the Rare Earth equation) are too restrictive. James Kasting cites the Titius–Bode law to contend that it is a misnomer to describe habitable zones as narrow when there is a 50% chance of at least one planet orbiting within one. In 2013, astronomers using the Kepler space telescope's data estimated that about one-fifth of G-type and K-type stars (sun-like stars and orange dwarfs) are expected to have an Earth-sized or super-Earth-sized planet ( Earths wide) close to an Earth-like orbit (), yielding about 8.8 billion of them for the entire Milky Way Galaxy.
Uncertainty over Jupiter's role.
The requirement for a system to have a Jovian planet as protector (Rare Earth equation factor formula_11) has been challenged, affecting the number of proposed extinction events (Rare Earth equation factor formula_12). Kasting's 2001 review of Rare Earth questions whether a Jupiter protector has any bearing on the incidence of complex life. Computer modelling including the 2005 Nice model and 2007 Nice 2 model yield inconclusive results in relation to Jupiter's gravitational influence and impacts on the inner planets. A study by Horner and Jones (2008) using computer simulation found that while the total effect on all orbital bodies within the Solar System is unclear, Jupiter has caused more impacts on Earth than it has prevented. Lexell's Comet, a 1770 near miss that passed closer to Earth than any other comet in recorded history, was known to be caused by the gravitational influence of Jupiter.
Plate tectonics may not be unique to Earth or a requirement for complex life.
Ward and Brownlee argue that for complex life to evolve (Rare Earth equation factor formula_8), tectonics must be present to generate biogeochemical cycles, and predicted that such geological features would not be found outside of Earth, pointing to a lack of observable mountain ranges and subduction. There is, however, no scientific consensus on the evolution of plate tectonics on Earth. Though it is believed that tectonic motion first began around three billion years ago, by this time photosynthesis and oxygenation had already begun. Furthermore, recent studies point to plate tectonics as an episodic planetary phenomenon, and that life may evolve during periods of "stagnant-lid" rather than plate tectonic states.
Recent evidence also points to similar activity either having occurred or continuing to occur elsewhere. The geology of Pluto, for example, described by Ward and Brownlee as "without mountains or volcanoes ... devoid of volcanic activity", has since been found to be quite the contrary, with a geologically active surface possessing organic molecules and mountain ranges like Tenzing Montes and Hillary Montes comparable in relative size to those of Earth, and observations suggest the involvement of endogenic processes. Plate tectonics has been suggested as a hypothesis for the Martian dichotomy, and in 2012 geologist An Yin put forward evidence for active plate tectonics on Mars. Europa has long been suspected to have plate tectonics and in 2014 NASA announced evidence of active subduction. As with Europa, analysis of strike-slip faulting and surface materials of possible endogenic origin on Jupiter's largest moon Ganymede suggests that plate tectonics has also taken place there.
In 2017, scientists studying the geology of Charon confirmed that icy plate tectonics also operated on Pluto's largest moon. Since 2017 several studies of the geodynamics of Venus have also found that, contrary to the view that the lithosphere of Venus is static, it is actually being deformed via active processes similar to plate tectonics, though with less subduction, implying that geodynamics are not a rare occurrence in Earth sized bodies.
Kasting suggests that there is nothing unusual about the occurrence of plate tectonics in large rocky planets and liquid water on the surface as most should generate internal heat even without the assistance of radioactive elements. Studies by Valencia and Cowan suggest that plate tectonics may be inevitable for terrestrial planets Earth-sized or larger, that is, Super-Earths, which are now known to be more common in planetary systems.
Free oxygen may be neither rare nor a prerequisite for multicellular life.
The hypothesis that molecular oxygen, necessary for animal life, is rare and that a Great Oxygenation Event (Rare Earth equation factor formula_8) could only have been triggered and sustained by tectonics, appears to have been invalidated by more recent discoveries.
Ward and Brownlee ask "whether oxygenation, and hence the rise of animals, would ever have occurred on a world where there were no continents to erode". Extraterrestrial free oxygen has recently been detected around other solid objects, including Mercury, Venus, Mars, Jupiter's four Galilean moons, Saturn's moons Enceladus, Dione and Rhea and even the atmosphere of a comet. This has led scientists to speculate whether processes other than photosynthesis could be capable of generating an environment rich in free oxygen. Wordsworth (2014) concludes that oxygen generated other than through photodissociation may be likely on Earth-like exoplanets, and could actually lead to false positive detections of life. Narita (2015) suggests photocatalysis by titanium dioxide as a geochemical mechanism for producing oxygen atmospheres.
Since Ward & Brownlee's assertion that "there is irrefutable evidence that oxygen is a necessary ingredient for animal life", anaerobic metazoa have been found that indeed do metabolise without oxygen. "Spinoloricus cinziae", for example, a species discovered in the hypersaline anoxic L'Atalante basin at the bottom of the Mediterranean Sea in 2010, appears to metabolise with hydrogen, lacking mitochondria and instead using hydrogenosomes. Studies since 2015 of the eukaryotic genus "Monocercomonoides", which lacks mitochondrial organelles, are also significant, as there are no detectable signs that mitochondria are part of the organism. Since then further eukaryotes, particularly parasites, have been identified that completely lack a mitochondrial genome, such as the 2020 discovery in "Henneguya zschokkei". Further investigation into the alternative metabolic pathways used by these organisms appears to present further problems for the premise.
Stevenson (2015) has proposed other membrane alternatives for complex life in worlds without oxygen. In 2017, scientists from the NASA Astrobiology Institute discovered the necessary chemical preconditions for the formation of azotosomes on Saturn's moon Titan, a world that lacks atmospheric oxygen. Independent studies by Schirrmeister and by Mills concluded that Earth's multicellular life existed prior to the Great Oxygenation Event, not as a consequence of it.
NASA scientists Hartman and McKay argue that plate tectonics may in fact slow the rise of oxygenation (and thus stymie complex life rather than promote it). Computer modelling by Tilman Spohn in 2014 found that plate tectonics on Earth may have arisen from the effects of complex life's emergence, rather than the other way around as the Rare Earth might suggest. The action of lichens on rock may have contributed to the formation of subduction zones in the presence of water. Kasting argues that if oxygenation caused the Cambrian explosion then any planet with oxygen producing photosynthesis should have complex life.
A magnetosphere may not be rare or a requirement.
The importance of Earth's magnetic field to the development of complex life has been disputed. The origin of Earth's magnetic field remains a mystery, though the presence of a magnetosphere appears to be relatively common for larger planetary mass objects, as all Solar System planets larger than Earth possess one. There is increasing evidence of present or past magnetic activity in terrestrial bodies such as the Moon, Ganymede, Mercury and Mars. Without sufficient measurements, present studies rely heavily on modelling methods developed in 2006 by Olson & Christensen to predict field strength. Using a sample of 496 planets, such models predict Kepler-186f to be one of the few of Earth size that would support a magnetosphere (though such a field around this planet has not currently been confirmed). However, recent empirical evidence points to the occurrence of much larger and more powerful fields than those found in our Solar System, some of which cannot be explained by these models.
Kasting argues that the atmosphere provides sufficient protection against cosmic rays even during times of magnetic pole reversal and atmosphere loss by sputtering. Kasting also dismisses the role of the magnetic field in the evolution of eukaryotes, citing the age of the oldest known magnetofossils.
A large moon may be neither rare nor necessary.
The requirement of a large moon (Rare Earth equation factor formula_10) has also been challenged. Even if it were required, such an occurrence may not be as unique as predicted by the Rare Earth Hypothesis. Work by Edward Belbruno and J. Richard Gott of Princeton University suggests that giant impactors such as those that may have formed the Moon can indeed form in planetary trojan points (L4 or L5 Lagrangian point) which means that similar circumstances may occur in other planetary systems.
The assertion that the Moon's stabilization of Earth's obliquity and spin is a requirement for complex life has been questioned. Kasting argues that a moonless Earth would still possess habitats with climates suitable for complex life and questions whether the spin rate of a moonless Earth can be predicted. Although the giant impact theory posits that the impact forming the Moon increased Earth's rotational speed to make a day about 5 hours long, the Moon has slowly "stolen" much of this speed to reduce Earth's solar day since then to about 24 hours and continues to do so: in 100 million years Earth's solar day will be roughly 24 hours 38 minutes (the same as Mars's solar day); in 1 billion years, 30 hours 23 minutes. Larger secondary bodies would exert proportionally larger tidal forces that would in turn decelerate their primaries faster and potentially increase the solar day of a planet in all other respects like Earth to over 120 hours within a few billion years. This long solar day would make effective heat dissipation for organisms in the tropics and subtropics extremely difficult in a similar manner to tidal locking to a red dwarf star. Short days (high rotation speed) cause high wind speeds at ground level. Long days (slow rotation speed) cause the day and night temperatures to be too extreme.
Many Rare Earth proponents argue that the Earth's plate tectonics would probably not exist if not for the tidal forces of the Moon or the impact of Theia (prolonging mantle effects). The hypothesis that the Moon's tidal influence initiated or sustained Earth's plate tectonics remains unproven, though at least one study implies a temporal correlation to the formation of the Moon. Evidence for the past existence of plate tectonics on planets like Mars which may never have had a large moon would counter this argument, although plate tectonics may fade anyway before a moon is relevant to life. Kasting argues that a large moon is not required to initiate plate tectonics.
Complex life may arise in alternative habitats.
Rare Earth proponents argue that simple life may be common, though complex life requires specific environmental conditions to arise. Critics consider life could arise on a moon of a gas giant, though this is less likely if life requires volcanicity. The moon must have stresses to induce tidal heating, but not so dramatic as seen on Jupiter's Io. However, the moon is within the gas giant's intense radiation belts, sterilizing any biodiversity before it can get established. Dirk Schulze-Makuch disputes this, hypothesizing alternative biochemistries for alien life. While Rare Earth proponents argue that only microbial extremophiles could exist in subsurface habitats beyond Earth, some argue that complex life can also arise in these environments. Examples of extremophile animals such as the "Hesiocaeca methanicola", an animal that inhabits ocean floor methane clathrates substances more commonly found in the outer Solar System, the tardigrades which can survive in the vacuum of space or "Halicephalobus mephisto" which exists in crushing pressure, scorching temperatures and extremely low oxygen levels 3.6 kilometres ( 2.2 miles) deep in the Earth's crust, are sometimes cited by critics as complex life capable of thriving in "alien" environments. Jill Tarter counters the classic counterargument that these species adapted to these environments rather than arose in them, by suggesting that we cannot assume conditions for life to emerge which are not actually known. There are suggestions that complex life could arise in sub-surface conditions which may be similar to those where life may have arisen on Earth, such as the tidally heated subsurfaces of Europa or Enceladus. Ancient circumvental ecosystems such as these support complex life on Earth such as "Riftia pachyptila" that exist completely independent of the surface biosphere.
| [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "N=N^* \\cdot n_e \\cdot f_g \\cdot f_p \\cdot f_{pm} \\cdot f_i \\cdot f_c \\cdot f_l \\cdot f_m \\cdot f_j \\cdot f_{me}"
},
{
"math_id": 2,
"text": "n_e"
},
{
"math_id": 3,
"text": "N^* \\cdot n_e=5\\cdot10^{11}"
},
{
"math_id": 4,
"text": "f_g"
},
{
"math_id": 5,
"text": "f_p"
},
{
"math_id": 6,
"text": "f_{pm}"
},
{
"math_id": 7,
"text": "f_i"
},
{
"math_id": 8,
"text": "f_c"
},
{
"math_id": 9,
"text": "f_l"
},
{
"math_id": 10,
"text": "f_m"
},
{
"math_id": 11,
"text": "f_j"
},
{
"math_id": 12,
"text": "f_{me}"
}
] | https://en.wikipedia.org/wiki?curid=827792 |
828001 | Vortex ring | Torus-shaped vortex in a fluid
A vortex ring, also called a toroidal vortex, is a torus-shaped vortex in a fluid; that is, a region where the fluid mostly spins around an imaginary axis line that forms a closed loop. The dominant flow in a vortex ring is said to be toroidal, more precisely poloidal.
Vortex rings are plentiful in turbulent flows of liquids and gases, but are rarely noticed unless the motion of the fluid is revealed by suspended particles—as in the smoke rings which are often produced intentionally or accidentally by smokers. Fiery vortex rings are also a commonly produced trick by fire eaters. Visible vortex rings can also be formed by the firing of certain artillery, in mushroom clouds, in microbursts, and rarely in volcanic eruptions.
A vortex ring usually tends to move in a direction that is perpendicular to the plane of the ring and such that the inner edge of the ring moves faster forward than the outer edge. Within a stationary body of fluid, a vortex ring can travel a relatively long distance, carrying the spinning fluid with it.
Structure.
In a typical vortex ring, the fluid particles move in roughly circular paths around an imaginary circle (the "core") that is perpendicular to those paths. As in any vortex, the velocity of the fluid is roughly constant except near the core, so that the angular velocity increases towards the core, and most of the vorticity (and hence most of the energy dissipation) is concentrated near it.
Unlike a sea wave, whose motion is only apparent, a moving vortex ring actually carries the spinning fluid along. Just as a rotating wheel lessens friction between a car and the ground, the poloidal flow of the vortex lessens the friction between the core and the surrounding stationary fluid, allowing it to travel a long distance with relatively little loss of mass and kinetic energy, and little change in size or shape. Thus, a vortex ring can carry mass much further and with less dispersion than a jet of fluid. That explains, for instance, why a smoke ring keeps traveling long after any extra smoke blown out with it has stopped and dispersed. These properties of vortex rings are exploited in the vortex ring gun for riot control, and vortex ring toys such as the air vortex cannons.
Formation.
Formation process.
The formation of vortex rings has fascinated the scientific community for more than a century, starting with William Barton Rogers who made sounding observations of the formation process of air vortex rings in air, air rings in liquids, and liquid rings in liquids. In particular, William Barton Rogers made use of the simple experimental method of letting a drop of liquid fall on a free liquid surface; a falling colored drop of liquid, such as milk or dyed water, will inevitably form a vortex ring at the interface due to the surface tension.
A method proposed by G. I. Taylor to generate a vortex ring is to impulsively start a disk from rest. The flow separates to form a cylindrical vortex sheet and by artificially dissolving the disk, one is left with an isolated vortex ring. This is the case when someone is stirring their cup of coffee with a spoon and observing the propagation of a half-vortex in the cup.
In a laboratory, vortex rings are formed by impulsively discharging fluid through a sharp-edged nozzle or orifice. The impulsive motion of the piston/cylinder system is either triggered by an electric actuator or by a pressurized vessel connected to a control valve. For a nozzle geometry, and at first approximation, the exhaust speed is uniform and equal to the piston speed. This is referred as a parallel starting jet. It is possible to have a conical nozzle in which the streamlines at the exhaust are directed toward the centerline. This is referred as a converging starting jet. The orifice geometry which consists in an orifice plate covering the straight tube exhaust, can be considered as an infinitely converging nozzle but the vortex formation differs considerably from the converging nozzle, principally due to the absence of boundary layer in the thickness of the orifice plate throughout the formation process. The fast moving fluid (A) is therefore discharged into a quiescent fluid (B). The shear imposed at the interface between the two fluids slows down the outer layer of the fluid (A) relatively to the centerline fluid. In order to satisfy the Kutta condition, the flow is forced to detach, curl and roll-up in the form of a vortex sheet. Later, the vortex sheet detaches from the feeding jet and propagates freely downstream due to its self-induced kinematics. This is the process commonly observed when a smoker forms smoke rings from their mouth, and how vortex ring toys work.
Secondary effects are likely to modify the formation process of vortex rings. Firstly, at the very first instants, the velocity profile at the exhaust exhibits extrema near the edge causing a large vorticity flux into the vortex ring. Secondly, as the ring grows in size at the edge of the exhaust, negative vorticity is generated on the outer wall of the generator which considerably reduces the circulation accumulated by the primary ring. Thirdly, as the boundary layer inside the pipe, or nozzle, thickens, the velocity profile approaches the one of a Poiseuille flow and the centerline velocity at the exhaust is measured to be larger than the prescribed piston speed. Last but not least, in the event the piston-generated vortex ring is pushed through the exhaust, it may interact or even merge with the primary vortex, hence modifying its characteristic, such as circulation, and potentially forcing the transition of the vortex ring to turbulence.
Vortex ring structures are easily observable in nature. For instance, a mushroom cloud formed by a nuclear explosion or volcanic eruption has a vortex ring-like structure. Vortex rings are also seen in many different biological flows; blood is discharged into the left ventricle of the human heart in the form of a vortex ring, and jellyfish or squid were shown to propel themselves in water by periodically discharging vortex rings into the surrounding fluid. Finally, for more industrial applications, the synthetic jet, which consists of periodically formed vortex rings, has proved to be an appealing technology for flow control, heat and mass transfer and thrust generation.
Vortex formation number.
Prior to Gharib "et al." (1998), few studies had focused on the formation of vortex rings generated with long stroke-to-diameter ratios formula_0, where formula_1 is the length of the column of fluid discharged through the exhaust and formula_2 is the diameter of the exhaust. For short stroke ratios, only one isolated vortex ring is generated and no fluid is left behind in the formation process. For long stroke ratios, however, the vortex ring is followed by some energetic fluid, referred as the trailing jet. On top of showing experimental evidence of the phenomenon, an explanation of the phenomenon was provided in terms of energy maximisation invoking a variational principle first reported by Kelvin and later proven by Benjamin (1976), or Friedman & Turkington (1981). Ultimately, Gharib "et al." (1998) observed the transition between these two states to occur at a non-dimensional time formula_3, or equivalently a stroke ratio formula_0, of about 4. The robustness of this number with respect to initial and boundary conditions suggested the quantity to be a universal constant and was thus named "formation number".
The phenomenon of 'pinch-off', or detachment, from the feeding starting jet is observed in a wide range of flows in nature. For instance, it was shown that biological systems such as the human heart or swimming and flying animals generate vortex rings with a stroke-to-diameter ratio close to the formation number of about 4, lending support to the existence of an optimal vortex ring formation process in terms of propulsion, thrust generation and mass transport. In particular, the squid "Lolliguncula brevis" was shown to propel itself by periodically emitting vortex rings at a stroke ratio close to 4. Moreover, in another study by Gharib "et al." (2006), the formation number was used as an indicator to monitor the health of the human heart and identify patients with dilated cardiomyopathy.
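As a worked illustration of the stroke ratio, the following sketch integrates an assumed piston velocity programme to obtain the length formula_1 of the discharged column, forms the ratio formula_0, and compares it with the formation number of about 4. The velocity programme and nozzle diameter are arbitrary example values, not data from the studies cited above.

```python
import numpy as np

D = 0.02                            # nozzle diameter in metres (example value)
t = np.linspace(0.0, 0.5, 1000)     # time axis in seconds
U_p = np.where(t < 0.4, 0.25, 0.0)  # assumed piston velocity programme (m/s)

L = np.trapz(U_p, t)                # length of the discharged column: L = integral of U_p dt
stroke_ratio = L / D

print(f"L/D = {stroke_ratio:.2f}")
if stroke_ratio > 4:
    print("Stroke ratio exceeds the formation number: expect a pinched-off ring plus a trailing jet.")
else:
    print("Stroke ratio below the formation number: expect a single isolated vortex ring.")
```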
Other examples.
Vortex ring state in helicopters.
Air vortices can form around the main rotor of a helicopter, causing a dangerous condition known as vortex ring state (VRS) or "settling with power". In this condition, air that moves down through the rotor turns outward, then up, inward, and then down through the rotor again. This re-circulation of flow can negate much of the lifting force and cause a catastrophic loss of altitude. Applying more power (increasing collective pitch) serves to further accelerate the downwash through which the main-rotor is descending, exacerbating the condition.
In the human heart.
A vortex ring is formed in the left ventricle of the human heart during cardiac relaxation (diastole), as a jet of blood enters through the mitral valve. This phenomenon was initially observed in vitro and subsequently strengthened by analyses based on color Doppler mapping and magnetic resonance imaging. Some recent studies have also confirmed the presence of a vortex ring during rapid filling phase of diastole and implied that the process of vortex ring formation can influence mitral annulus dynamics.
Bubble rings.
Releasing air underwater forms bubble rings, which are vortex rings of water with bubbles (or even a single donut-shaped bubble) trapped along its axis line. Such rings are often produced by scuba divers and dolphins.
Volcanoes.
Under particular conditions, some volcanic vents can produce large visible vortex rings. Though a rare phenomenon, several volcanoes have been observed emitting massive vortex rings as erupting steam and gas condense, forming visible toroidal clouds:
Separated vortex rings.
There has been research and experiments on the existence of separated vortex rings (SVR) such as those formed in the wake of the pappus of a dandelion. This special type of vortex ring effectively stabilizes the seed as it travels through the air and increases the lift generated by the seed. Compared to a standard vortex ring, which is propelled downstream, the axially symmetric SVR remains attached to the pappus for the duration of its flight and uses drag to enhance the travel. These dandelion seed structures have been used to create tiny battery-free wireless sensors that can float in the wind and be dispersed across a large area.
Theory.
Historical studies.
Vortex rings were first mathematically analyzed by the German physicist Hermann von Helmholtz, in his 1858 paper "On Integrals of the Hydrodynamical Equations which Express Vortex-motion".
Circular vortex lines.
For a single zero-thickness vortex ring, the vorticity is represented by a Dirac delta function as formula_4 where formula_5 denotes the coordinates of the vortex filament of strength formula_6 in a constant formula_7 half-plane. The Stokes stream function is:
formula_8
with formula_9
where formula_10 and formula_11 are respectively the least and the greatest distance from the point formula_12 to the vortex line, and where formula_13 is the complete elliptic integral of the first kind and formula_14 is the complete elliptic integral of the second kind.
A circular vortex line is the limiting case of a thin vortex ring. Because there is no core thickness, the speed of the ring is infinite, as well as the kinetic energy. The hydrodynamic impulse can be expressed in term of the strength, or 'circulation' formula_6, of the vortex ring as formula_15.
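As a numerical illustration, the sketch below evaluates the classical Lamb form of this stream function, ψ = (Γ/2π)(r1 + r2)[K(λ) − E(λ)] with λ = (r2 − r1)/(r2 + r1); this explicit form and its sign convention are assumptions of the example rather than a restatement of the notation above.

```python
import numpy as np
from scipy.special import ellipk, ellipe

def stream_function(x, r, R=1.0, Gamma=1.0):
    """Stokes stream function of a circular vortex filament of radius R and strength
    Gamma lying in the plane x = 0 (Lamb's classical form, assumed here).
    r1 and r2 are the least and greatest distances from the point (x, r) to the filament."""
    r1 = np.sqrt(x**2 + (r - R)**2)
    r2 = np.sqrt(x**2 + (r + R)**2)
    lam = (r2 - r1) / (r2 + r1)
    # scipy's elliptic integrals take the parameter m = lambda**2, not the modulus lambda
    return Gamma / (2 * np.pi) * (r1 + r2) * (ellipk(lam**2) - ellipe(lam**2))

print(stream_function(x=0.3, r=0.8))   # value at one field point, for illustration
```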
Thin-core vortex rings.
The discontinuity introduced by the Dirac delta function prevents the computation of the speed and the kinetic energy of a circular vortex line. It is however possible to estimate these quantities for a vortex ring having a finite small thickness. For a thin vortex ring, the core can be approximated by a disk of radius formula_16 which is assumed to be infinitesimal compared to the radius of the ring formula_17, i.e. formula_18. As a consequence, inside and in the vicinity of the core ring, one may write: formula_19, formula_20 and formula_21, and, in the limit of formula_22, the elliptic integrals can be approximated by formula_23 and formula_24.
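As a quick numerical check of these limiting forms for formula_13 and formula_14 (again only a sketch, assuming SciPy with its parameter convention m equal to the square of the modulus), the approximations approach the exact integrals in the limit formula_22:

```python
from math import log
from scipy.special import ellipk, ellipe

# Compare the exact K and E (modulus convention, SciPy parameter m = lambda**2)
# with the approximations quoted above as lambda approaches 1.
for lam in (0.9, 0.99, 0.999):
    K_exact, E_exact = ellipk(lam ** 2), ellipe(lam ** 2)
    K_approx = 0.5 * log(16.0 / (1.0 - lam ** 2))
    print(f"lambda = {lam}:  K = {K_exact:.4f} vs {K_approx:.4f},  E = {E_exact:.4f} vs 1")
```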
For a uniform vorticity distribution formula_25 in the disk, the Stokes stream function can therefore be approximated by
formula_26
The resulting circulation formula_27, hydrodynamic impulse formula_28 and kinetic energy formula_14 are
formula_29
It is also possible to find the translational ring speed (which is finite) of such isolated thin-core vortex ring:
formula_30
which finally results in the well-known expression found by Kelvin and published in the English translation by Tait of von Helmholtz's paper:
formula_31
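A short numerical sketch of these thin-core results follows (assuming Python with NumPy; the parameter values are illustrative and not taken from the text):

```python
import numpy as np

# Illustrative thin-core parameters: ring radius R, core radius a (a/R << 1),
# uniform core vorticity omega0, and fluid density rho.
R, a, omega0, rho = 1.0, 0.05, 100.0, 1000.0

Gamma = np.pi * omega0 * a ** 2                                  # circulation
I = rho * np.pi * Gamma * R ** 2                                 # hydrodynamic impulse
E = 0.5 * rho * Gamma ** 2 * R * (np.log(8 * R / a) - 7 / 4)     # kinetic energy
U = Gamma / (4 * np.pi * R) * (np.log(8 * R / a) - 1 / 4)        # Kelvin's ring speed

print(f"Gamma = {Gamma:.4f}, I = {I:.1f}, E = {E:.2f}, U = {U:.4f}")
# Consistency check with U = E/(2I) + 3*Gamma/(8*pi*R); should print ~0.
print(abs(U - (E / (2 * I) + 3 * Gamma / (8 * np.pi * R))))
```

Because the core radius enters only logarithmically, halving formula_16 at fixed circulation changes the ring speed only slightly.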
Spherical vortices.
Hill's spherical vortex is an example of steady vortex flow and may be used to model vortex rings having a vorticity distribution extending to the centerline. More precisely, the model assumes a vorticity distribution that increases linearly with radial distance from the centerline and is bounded by a sphere of radius formula_16 as:
formula_32
where formula_33 is the constant translational speed of the vortex.
Finally, the Stokes stream function of Hill's spherical vortex can be computed and is given by:
formula_34
The above expressions correspond to the stream function describing a steady flow. In a fixed frame of reference, the stream function of the bulk flow having a speed formula_33 should be added.
The circulation, the hydrodynamic impulse and the kinetic energy can also be calculated in terms of the translational speed formula_33 and radius formula_16:
formula_35
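The following sketch (illustrative only, assuming Python with NumPy; the chosen values of formula_33, formula_16 and the density are arbitrary) evaluates the piecewise stream function and the integral quantities above, and confirms that the two branches match on the bounding sphere:

```python
import numpy as np

def hill_stream_function(r, x, a, U):
    # Piecewise Stokes stream function of Hill's spherical vortex (steady frame).
    s2 = r ** 2 + x ** 2
    inside = -0.75 * (U / a ** 2) * r ** 2 * (a ** 2 - r ** 2 - x ** 2)
    outside = 0.5 * U * r ** 2 * (1.0 - a ** 3 / s2 ** 1.5)
    return np.where(s2 <= a ** 2, inside, outside)

a, U, rho = 1.0, 1.0, 1.0
# Both branches vanish on the sphere r**2 + x**2 = a**2, so the stream
# function is continuous across the boundary:
print(hill_stream_function(0.6, 0.8, a, U))       # ~0
# Integral quantities in terms of the translational speed U and radius a:
print("circulation:", 5 * U * a)
print("impulse:    ", 2 * np.pi * rho * U * a ** 3)
print("energy:     ", 10 * np.pi / 7 * rho * U ** 2 * a ** 3)
```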
Such a structure or an electromagnetic equivalent has been suggested as an explanation for the internal structure of ball lightning. For example, Shafranov used a magnetohydrodynamic (MHD) analogy to Hill's stationary fluid mechanical vortex to consider the equilibrium conditions of axially symmetric MHD configurations, reducing the problem to the theory of stationary flow of an incompressible fluid. In axial symmetry, he considered general equilibrium for distributed currents and concluded from the virial theorem that, if there were no gravitation, a bounded equilibrium configuration could exist only in the presence of an azimuthal current.
Fraenkel-Norbury model.
The Fraenkel-Norbury model of an isolated vortex ring, sometimes referred to as the standard model, describes the class of steady vortex rings having a linear distribution of vorticity in the core, parametrised by the mean core radius formula_36, where formula_37 is the area of the vortex core and formula_17 is the radius of the ring. Approximate solutions were found for thin-core rings, i.e. formula_38, and for thick Hill's-like vortex rings, i.e. formula_39, Hill's spherical vortex having a mean core radius of precisely formula_40. For mean core radii in between, one must rely on numerical methods. Norbury (1973) numerically computed the steady vortex ring of a given mean core radius for a set of 14 mean core radii ranging from 0.1 to 1.35. The resulting streamlines defining the core of the ring were tabulated, as well as the translational speed. In addition, the circulation, the hydrodynamic impulse and the kinetic energy of such steady vortex rings were computed and presented in non-dimensional form.
Instabilities.
A kind of azimuthal radiant-symmetric structure was observed by Maxworthy when the vortex ring traveled at around a critical velocity, which lies between the laminar and turbulent states. Later, Huang and Chan reported that another kind of instability occurs if the initial state of the vortex ring is not perfectly circular. An elliptical vortex ring undergoes an oscillation in which it is first stretched in the vertical direction and squeezed in the horizontal direction, then passes through an intermediate state where it is circular, and is then deformed in the opposite way (stretched in the horizontal direction and squeezed in the vertical) before reversing the process and returning to the original state.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L/D"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "t^*=Ut/D"
},
{
"math_id": 4,
"text": " \\omega\\left(r,x\\right)=\\kappa\\delta\\left(r-r'\\right)\\delta\\left(x-x'\\right)"
},
{
"math_id": 5,
"text": " \\left(r',x'\\right)"
},
{
"math_id": 6,
"text": "\\kappa"
},
{
"math_id": 7,
"text": "\\theta"
},
{
"math_id": 8,
"text": " \\psi(r,x)=-\\frac{\\kappa}{2\\pi}\\left(r_1+r_2\\right)\\left[K(\\lambda)-E(\\lambda)\\right] "
},
{
"math_id": 9,
"text": " r_1^2 = \\left(x-x'\\right)^2+\\left(r-r'\\right)^2 \\qquad r_2^2 = \\left(x-x'\\right)^2+\\left(r+r'\\right)^2 \\qquad \\lambda = \\frac{r_2-r_1}{r_2+r_1} "
},
{
"math_id": 10,
"text": "r_1"
},
{
"math_id": 11,
"text": "r_2"
},
{
"math_id": 12,
"text": "P(r,x)"
},
{
"math_id": 13,
"text": "K"
},
{
"math_id": 14,
"text": "E"
},
{
"math_id": 15,
"text": "I = \\rho \\pi \\kappa R^2 "
},
{
"math_id": 16,
"text": "a"
},
{
"math_id": 17,
"text": "R"
},
{
"math_id": 18,
"text": "a/R \\ll 1 "
},
{
"math_id": 19,
"text": " r_1/r_2 \\ll 1 "
},
{
"math_id": 20,
"text": "r_2 \\approx 2R"
},
{
"math_id": 21,
"text": " 1- \\lambda^2 \\approx 4 r_1/R "
},
{
"math_id": 22,
"text": "\\lambda \\approx 1 "
},
{
"math_id": 23,
"text": " K(\\lambda) = 1/2 \\ln\\left({16}/{(1-\\lambda^2)}\\right) "
},
{
"math_id": 24,
"text": " E(\\lambda) = 1 "
},
{
"math_id": 25,
"text": "\\omega(r,x)=\\omega_0"
},
{
"math_id": 26,
"text": " \\psi(r,x)=-\\frac{\\omega_0}{2\\pi}R\\iint{\\left(\\ln\\frac{8R}{r_1}-2\\right)\\,dr'dx'} "
},
{
"math_id": 27,
"text": "\\Gamma"
},
{
"math_id": 28,
"text": "I"
},
{
"math_id": 29,
"text": "\\begin{align}\n\\Gamma &= \\pi\\omega_0 a^2\\\\\nI &= \\rho\\pi\\Gamma R^2 \\\\\nE &= \\frac{1}{2}\\rho\\Gamma^2R\\left(\\ln\\frac{8R}{a}-\\frac{7}{4}\\right)\n\\end{align}"
},
{
"math_id": 30,
"text": " U=\\frac{E}{2I}+\\frac{3}{8\\pi}\\frac{\\Gamma}{R} "
},
{
"math_id": 31,
"text": " U=\\frac{\\Gamma}{4\\pi R}\\left(\\ln\\frac{8R}{a}-\\frac{1}{4}\\right) "
},
{
"math_id": 32,
"text": " \\frac{\\omega}{r}=\\frac{15}{2}\\frac{U}{a^2}"
},
{
"math_id": 33,
"text": "U"
},
{
"math_id": 34,
"text": "\\begin{align}\n&\\psi(r,x) = -\\frac{3}{4}\\frac{U}{a^2}r^2\\left(a^2-r^2-x^2\\right) && \\text{inside the vortex} \\\\\n&\\psi(r,x) = \\frac{1}{2}Ur^2\\left[1-\\frac{a^3}{\\left(x^2+r^2\\right)^{3/2}}\\right] && \\text{outside the vortex}\n\\end{align}"
},
{
"math_id": 35,
"text": "\\begin{align}\n\\Gamma &= 5Ua \\\\\nI &= 2\\pi\\rho Ua^3 \\\\\nE & = \\frac{10\\pi}{7}\\rho U^2a^3\n\\end{align}"
},
{
"math_id": 36,
"text": "\\epsilon=\\sqrt{A/\\pi R^2}"
},
{
"math_id": 37,
"text": "A"
},
{
"math_id": 38,
"text": "\\epsilon\\ll 1"
},
{
"math_id": 39,
"text": "\\epsilon\\rightarrow\\sqrt{2}"
},
{
"math_id": 40,
"text": "\\epsilon=\\sqrt{2}"
}
] | https://en.wikipedia.org/wiki?curid=828001 |
8280771 | Fast multipole method | The fast multipole method (FMM) is a numerical technique that was developed to speed up the calculation of long-ranged forces in the "n"-body problem. It does this by expanding the system Green's function using a multipole expansion, which allows one to group sources that lie close together and treat them as if they are a single source.
The FMM has also been applied in accelerating the iterative solver in the method of moments (MOM) as applied to computational electromagnetics problems, and in particular in computational bioelectromagnetism. The FMM was first introduced in this manner by Leslie Greengard and Vladimir Rokhlin Jr. and is based on the multipole expansion of the vector Helmholtz equation. By treating the interactions between far-away basis functions using the FMM, the corresponding matrix elements do not need to be explicitly stored, resulting in a significant reduction in required memory. If the FMM is then applied in a hierarchical manner, it can improve the complexity of matrix-vector products in an iterative solver from formula_0 to formula_1 in finite arithmetic, i.e., given a tolerance formula_2, the matrix-vector product is guaranteed to be within a tolerance formula_3 The dependence of the complexity on the tolerance formula_2 is formula_4, i.e., the complexity of FMM is formula_5. This has expanded the area of applicability of the MOM to far greater problems than were previously possible.
The FMM, introduced by Rokhlin Jr. and Greengard, has been said to be one of the top ten algorithms of the 20th century. The FMM algorithm reduces the complexity of matrix-vector multiplication involving a certain type of dense matrix which can arise out of many physical systems.
The FMM has also been applied for efficiently treating the Coulomb interaction in the Hartree–Fock method and density functional theory calculations in quantum chemistry.
Sketch of the Algorithm.
In its simplest form, the fast multipole method seeks to evaluate the following function:
formula_6,
where formula_7 are a set of poles and formula_8 are the corresponding pole weights on a set of points formula_9 with formula_10. This is the one-dimensional form of the problem, but the algorithm can be easily generalized to multiple dimensions and kernels other than formula_11.
Naively, evaluating formula_12 on formula_13 points requires formula_14 operations. The crucial observation behind the fast multipole method is that if the distance between formula_15 and formula_16 is large enough, then formula_11 is well-approximated by a polynomial. Specifically, let formula_17 be the Chebyshev nodes of order formula_18 and let formula_19 be the corresponding Lagrange basis polynomials. One can show that the interpolating polynomial:
formula_20
converges quickly with polynomial order, formula_21, provided that the pole is far enough away from the region of interpolation, formula_22 and formula_23. This is known as the "local expansion".
The speed-up of the fast multipole method derives from this interpolation: provided that all the poles are "far away", we evaluate the sum only on the Chebyshev nodes at a cost of formula_24, and then interpolate it onto all the desired points at a cost of formula_25:
formula_26
Since formula_27, where formula_28 is the numerical tolerance, the total cost is formula_29.
To ensure that the poles are indeed well-separated, one recursively subdivides the unit interval such that only formula_30 poles end up in each interval. One then uses the explicit formula within each interval and interpolation for all intervals that are well-separated. This does not spoil the scaling, since one needs at most formula_31 levels within the given tolerance.
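The far-field interpolation at the heart of this sketch can be illustrated in a few lines of code. The following is a minimal single-level example (not from the original source), assuming Python with NumPy; the counts `N`, `M`, `p`, the placement of the poles in [3, 5] (so that formula_22 holds relative to targets satisfying formula_23), and all variable names are illustrative choices.

```python
import numpy as np

def chebyshev_nodes(p):
    # Chebyshev nodes of order p on (-1, 1).
    k = np.arange(1, p + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * p))

def lagrange_basis(t, y):
    # Evaluate the p Lagrange basis polynomials built on the nodes t at the
    # points y; returns an array of shape (len(y), p).
    U = np.ones((len(y), len(t)))
    for i in range(len(t)):
        for j in range(len(t)):
            if j != i:
                U[:, i] *= (y - t[j]) / (t[i] - t[j])
    return U

rng = np.random.default_rng(0)
N, M, p = 1000, 1000, 10
x = rng.uniform(3.0, 5.0, N)        # poles, well separated from the targets
phi = rng.standard_normal(N)        # pole weights
y = rng.uniform(-1.0, 1.0, M)       # target points

# Direct O(MN) evaluation of f(y) = sum_a phi_a / (y - x_a).
f_direct = (phi / (y[:, None] - x[None, :])).sum(axis=1)

# Far-field approximation: O(Np) work gathering weights at the Chebyshev
# nodes, then O(Mp) interpolation back onto the targets.
t = chebyshev_nodes(p)
S = (phi / (t[:, None] - x[None, :])).sum(axis=1)
f_fast = lagrange_basis(t, y) @ S

err = np.max(np.abs(f_fast - f_direct))
print(f"max absolute error {err:.2e} vs field magnitude {np.max(np.abs(f_direct)):.2e}")
```

A full fast multipole method would additionally perform the recursive subdivision described above so that every pole is well separated from the targets for which it is interpolated; this sketch only demonstrates the far-field interpolation step.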
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{O}(N^2)"
},
{
"math_id": 1,
"text": "\\mathcal{O}(N)"
},
{
"math_id": 2,
"text": "\\varepsilon"
},
{
"math_id": 3,
"text": "\\varepsilon."
},
{
"math_id": 4,
"text": "\\mathcal{O}(\\log(1/\\varepsilon))"
},
{
"math_id": 5,
"text": "\\mathcal{O}(N\\log(1/\\varepsilon))"
},
{
"math_id": 6,
"text": "f(y) = \\sum_{\\alpha=1}^N \\frac{\\phi_\\alpha}{y - x_\\alpha}"
},
{
"math_id": 7,
"text": "x_\\alpha \\in [-1, 1]"
},
{
"math_id": 8,
"text": "\\phi_\\alpha\\in\\mathbb C"
},
{
"math_id": 9,
"text": "\\{y_1,\\ldots,y_M\\}"
},
{
"math_id": 10,
"text": "y_\\beta \\in [-1, 1]"
},
{
"math_id": 11,
"text": "(y - x)^{-1}"
},
{
"math_id": 12,
"text": "f(y)"
},
{
"math_id": 13,
"text": "M"
},
{
"math_id": 14,
"text": "\\mathcal O(MN)"
},
{
"math_id": 15,
"text": "y"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "-1 < t_1 < \\ldots < t_p < 1"
},
{
"math_id": 18,
"text": "p \\ge 2"
},
{
"math_id": 19,
"text": "u_1(y), \\ldots, u_p(y)"
},
{
"math_id": 20,
"text": "\\frac{1}{y-x} = \\sum_{i=1}^p \\frac{1}{t_i - x} u_i(y) + \\epsilon_p(y)"
},
{
"math_id": 21,
"text": "|\\epsilon_{p(y)}| < 5^{-p}"
},
{
"math_id": 22,
"text": "|x| \\ge 3"
},
{
"math_id": 23,
"text": "|y| < 1"
},
{
"math_id": 24,
"text": "\\mathcal O(N p)"
},
{
"math_id": 25,
"text": "\\mathcal O(M p)"
},
{
"math_id": 26,
"text": "\\sum_{\\alpha=1}^N \\frac{\\phi_\\alpha}{y_\\beta - x_\\alpha}\n= \\sum_{i=1}^p u_i(y_\\beta) \\sum_{\\alpha=1}^N \\frac{1}{t_i - x_\\alpha} \\phi_\\alpha\n"
},
{
"math_id": 27,
"text": "p = -\\log_5\\epsilon"
},
{
"math_id": 28,
"text": "\\epsilon"
},
{
"math_id": 29,
"text": "\\mathcal O((M + N) \\log(1/\\epsilon))"
},
{
"math_id": 30,
"text": "\\mathcal O(p)"
},
{
"math_id": 31,
"text": "\\log(1/\\epsilon)"
}
] | https://en.wikipedia.org/wiki?curid=8280771 |
828131 | Weak ordering | Mathematical ranking of a set
<templatestyles src="Stack/styles.css"/>
In mathematics, especially order theory, a weak ordering is a mathematical formalization of the intuitive notion of a ranking of a set, some of whose members may be tied with each other. Weak orders are a generalization of totally ordered sets (rankings without ties) and are in turn generalized by (strictly) partially ordered sets and preorders.
There are several common ways of formalizing weak orderings that are different from each other but cryptomorphic (interconvertible with no loss of information): they may be axiomatized as strict weak orderings (strictly partially ordered sets in which incomparability is a transitive relation), as total preorders (transitive binary relations in which at least one of the two possible relations exists between every pair of elements), or as ordered partitions (partitions of the elements into disjoint subsets, together with a total order on the subsets). In many cases another representation called a preferential arrangement based on a utility function is also possible.
Weak orderings are counted by the ordered Bell numbers. They are used in computer science as part of partition refinement algorithms, and in the C++ Standard Library.
Examples.
In horse racing, the use of photo finishes has eliminated some, but not all, ties or (as they are called in this context) dead heats, so the outcome of a horse race may be modeled by a weak ordering. In an example from the Maryland Hunt Cup steeplechase in 2007, The Bruce was the clear winner, but two horses Bug River and Lear Charm tied for second place, with the remaining horses farther back; three horses did not finish. In the weak ordering describing this outcome, The Bruce would be first, Bug River and Lear Charm would be ranked after The Bruce but before all the other horses that finished, and the three horses that did not finish would be placed last in the order but tied with each other.
The points of the Euclidean plane may be ordered by their distance from the origin, giving another example of a weak ordering with infinitely many elements, infinitely many subsets of tied elements (the sets of points that belong to a common circle centered at the origin), and infinitely many points within these subsets. Although this ordering has a smallest element (the origin itself), it does not have any second-smallest elements, nor any largest element.
Opinion polling in political elections provides an example of a type of ordering that resembles weak orderings, but is better modeled mathematically in other ways. In the results of a poll, one candidate may be clearly ahead of another, or the two candidates may be statistically tied, meaning not that their poll results are equal but rather that they are within the margin of error of each other. However, if candidate formula_5 is statistically tied with formula_7 and formula_6 is statistically tied with formula_8 it might still be possible for formula_5 to be clearly better than formula_8 so being tied is not in this case a transitive relation. Because of this possibility, rankings of this type are better modeled as semiorders than as weak orderings.
Axiomatizations.
Suppose throughout that formula_3 is a homogeneous binary relation on a set formula_9 (that is, formula_3 is a subset of formula_10) and as usual, write formula_4 and say that formula_4 holds or is true if and only if formula_11
Strict weak orderings.
Preliminaries on incomparability and transitivity of incomparability
Two elements formula_5 and formula_6 of formula_9 are said to be incomparable with respect to formula_3 if neither formula_4 nor formula_12 is true.
A strict partial order formula_3 is a strict weak ordering if and only if incomparability with respect to formula_3 is an equivalence relation.
Incomparability with respect to formula_3 is always a homogeneous symmetric relation on formula_13
It is reflexive if and only if formula_3 is irreflexive (meaning that formula_14 is always false), which will be assumed so that transitivity is the only property this "incomparability relation" needs in order to be an equivalence relation.
Define also an induced homogeneous relation formula_15 on formula_9 by declaring that
formula_16
where importantly, this definition is not necessarily the same as: formula_17 if and only if formula_18
Two elements formula_19 are incomparable with respect to formula_3 if and only if formula_20 are equivalent with respect to formula_15 (or less verbosely, formula_21-equivalent), which by definition means that both formula_22 are true.
The relation "are incomparable with respect to formula_23" is thus identical to (that is, equal to) the relation "are formula_21-equivalent" (so in particular, the former is transitive if and only if the latter is).
When formula_3 is irreflexive then the property known as "transitivity of incomparability" (defined below) is exactly the condition necessary and sufficient to guarantee that the relation "are formula_21-equivalent" does indeed form an equivalence relation on formula_13
When this is the case, it allows any two elements formula_19 satisfying formula_22 to be identified as a single object (specifically, they are identified together in their common equivalence class).
Definition
A strict weak ordering on a set formula_9 is a strict partial order formula_3 on formula_9 for which the incomparability relation induced on formula_9 by formula_3 is a transitive relation.
Explicitly, a strict weak order on formula_9 is a homogeneous relation formula_3 on formula_9 that has all four of the following properties: (1) irreflexivity, (2) transitivity, (3) asymmetry, and (4) transitivity of incomparability.
Properties (1), (2), and (3) are the defining properties of a strict partial order, although there is some redundancy in this list as asymmetry (3) implies irreflexivity (1), and also because irreflexivity (1) and transitivity (2) together imply asymmetry (3). The incomparability relation is always symmetric and it will be reflexive if and only if formula_3 is an irreflexive relation (which is assumed by the above definition).
Consequently, a strict partial order formula_3 is a strict weak order if and only if its induced incomparability relation is an equivalence relation.
In this case, its equivalence classes partition formula_9 and moreover, the set formula_24 of these equivalence classes can be strictly totally ordered by a binary relation, also denoted by formula_25 that is defined for all formula_26 by:
formula_27 for some (or equivalently, for all) representatives formula_28
Conversely, any strict total order on a partition formula_24 of formula_9 gives rise to a strict weak ordering formula_3 on formula_9 defined by formula_29 if and only if there exist sets formula_26 in this partition such that formula_30
Not every partial order obeys the transitive law for incomparability. For instance, consider the partial order in the set formula_31 defined by the relationship formula_32 The pairs formula_33 are incomparable but formula_1 and formula_2 are related, so incomparability does not form an equivalence relation and this example is not a strict weak ordering.
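A small brute-force sketch (assuming Python; the relation is represented as a set of ordered pairs, and the helper names are illustrative) makes the definition and the counterexample above concrete:

```python
from itertools import product

def is_strict_weak_order(elements, less):
    # `less` is a set of ordered pairs (x, y) meaning x < y.
    irreflexive = all((x, x) not in less for x in elements)
    transitive = all((x, z) in less
                     for (x, y) in less for (y2, z) in less if y == y2)
    def incomparable(x, y):
        return (x, y) not in less and (y, x) not in less
    incomparability_transitive = all(
        incomparable(x, z)
        for x, y, z in product(elements, repeat=3)
        if incomparable(x, y) and incomparable(y, z))
    return irreflexive and transitive and incomparability_transitive

abc = {"a", "b", "c"}
# The partial order given only by b < c (the example above) is not a strict
# weak order: a is incomparable with both b and c, yet b and c are comparable.
print(is_strict_weak_order(abc, {("b", "c")}))                # False
# The relation with a below both b and c, and with b and c tied, is one.
print(is_strict_weak_order(abc, {("a", "b"), ("a", "c")}))    # True
```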
For transitivity of incomparability, each of the following conditions is necessary, and for strict partial orders also sufficient: whenever formula_4, then for every formula_35, formula_34; and whenever formula_5 is incomparable with formula_6, then for every other element formula_35, either formula_36, or formula_37, or formula_35 is incomparable with both formula_5 and formula_6.
Total preorders.
Strict weak orders are very closely related to total preorders or (non-strict) weak orders, and the same mathematical concepts that can be modeled with strict weak orderings can be modeled equally well with total preorders. A total preorder or weak order is a preorder in which any two elements are comparable. A total preorder formula_15 satisfies the following properties. Transitivity: for all formula_38 if formula_39 then formula_40 Strong connectedness (totality): for all formula_41 formula_42 Together, these imply reflexivity: for all formula_43 formula_44
A total order is a total preorder which is antisymmetric, in other words, which is also a partial order. Total preorders are sometimes also called preference relations.
The complement of a strict weak order is a total preorder, and vice versa, but it seems more natural to relate strict weak orders and total preorders in a way that preserves rather than reverses the order of the elements. Thus we take the converse of the complement: for a strict weak ordering formula_25 define a total preorder formula_15 by setting formula_17 whenever it is not the case that formula_45 In the other direction, to define a strict weak ordering < from a total preorder formula_46 set formula_4 whenever it is not the case that formula_47
In any preorder there is a corresponding equivalence relation where two elements formula_5 and formula_6 are defined as equivalent if formula_48 In the case of a total preorder the corresponding partial order on the set of equivalence classes is a total order. Two elements are equivalent in a total preorder if and only if they are incomparable in the corresponding strict weak ordering.
Ordered partitions.
A partition of a set formula_9 is a family of non-empty disjoint subsets of formula_9 that have formula_9 as their union. A partition, together with a total order on the sets of the partition, gives a structure called by Richard P. Stanley an ordered partition and by Theodore Motzkin a list of sets. An ordered partition of a finite set may be written as a finite sequence of the sets in the partition: for instance, the three ordered partitions of the set formula_49 are
formula_50
formula_51
formula_52
In a strict weak ordering, the equivalence classes of incomparability give a set partition, in which the sets inherit a total ordering from their elements, giving rise to an ordered partition. In the other direction, any ordered partition gives rise to a strict weak ordering in which two elements are incomparable when they belong to the same set in the partition, and otherwise inherit the order of the sets that contain them.
Representation by functions.
For sets of sufficiently small cardinality, a fourth axiomatization is possible, based on real-valued functions. If formula_53 is any set then a real-valued function formula_54 on formula_53 induces a strict weak order on formula_53 by setting
formula_55
The associated total preorder is given by setting
formula_56
and the associated equivalence by setting
formula_57
The relations do not change when formula_58 is replaced by formula_59 (composite function), where formula_60 is a strictly increasing real-valued function defined on at least the range of formula_61 Thus for example, a utility function defines a preference relation. In this context, weak orderings are also known as preferential arrangements.
If formula_53 is finite or countable, every weak order on formula_53 can be represented by a function in this way. However, there exist strict weak orders that have no corresponding real function. For example, there is no such function for the lexicographic order on formula_62 Thus, while in most preference relation models the relation defines a utility function up to order-preserving transformations, there is no such function for lexicographic preferences.
More generally, if formula_53 is a set, formula_63 is a set with a strict weak ordering formula_64 and formula_65 is a function, then formula_58 induces a strict weak ordering on formula_53 by setting
formula_55
As before, the associated total preorder is given by setting
formula_66
and the associated equivalence by setting
formula_67
It is not assumed here that formula_58 is an injective function, so a class of two equivalent elements on formula_63 may induce a larger class of equivalent elements on formula_68 Also, formula_58 is not assumed to be a surjective function, so a class of equivalent elements on formula_63 may induce a smaller or empty class on formula_68 However, the function formula_58 induces an injective function that maps the partition on formula_53 to that on formula_69 Thus, in the case of finite partitions, the number of classes in formula_53 is less than or equal to the number of classes on formula_69
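As a concrete sketch of this representation (assuming Python; the finishing positions are illustrative numbers rather than actual race data), sorting by a key function and grouping equal keys recovers the ordered partition of the induced weak ordering, here applied to the horse-race example described earlier in the article:

```python
from itertools import groupby

def ordered_partition(items, key):
    # The weak ordering induced by a real-valued function: a < b iff
    # key(a) < key(b); items with equal key values are tied. The classes of
    # tied items, in increasing order of key, form the ordered partition.
    ranked = sorted(items, key=key)
    return [sorted(group) for _, group in groupby(ranked, key=key)]

# Hypothetical finishing positions: The Bruce first, Bug River and Lear Charm
# tied for second.
finish = {"The Bruce": 1, "Bug River": 2, "Lear Charm": 2}
print(ordered_partition(finish, key=finish.get))
# [['The Bruce'], ['Bug River', 'Lear Charm']]
```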
Related types of ordering.
Semiorders generalize strict weak orderings, but do not assume transitivity of incomparability. A strict weak order that is trichotomous is called a strict total order. The total preorder which is the inverse of its complement is in this case a total order.
For a strict weak order formula_3 another associated reflexive relation is its reflexive closure, a (non-strict) partial order formula_70 The two associated reflexive relations differ with regard to different formula_0 and formula_1 for which neither formula_29 nor formula_71: in the total preorder corresponding to a strict weak order we get formula_72 and formula_73 while in the partial order given by the reflexive closure we get neither formula_74 nor formula_75 For strict total orders these two associated reflexive relations are the same: the corresponding (non-strict) total order. The reflexive closure of a strict weak ordering is a type of series-parallel partial order.
All weak orders on a finite set.
Combinatorial enumeration.
The number of distinct weak orders (represented either as strict weak orders or as total preorders) on an formula_76-element set is given by the following sequence (sequence in the OEIS):
Note that "S"("n", "k") refers to Stirling numbers of the second kind.
These numbers are also called the Fubini numbers or ordered Bell numbers.
For example, for a set of three labeled items, there is one weak order in which all three items are tied. There are three ways of partitioning the items into one singleton set and one group of two tied items, and each of these partitions gives two weak orders (one in which the singleton is smaller than the group of two, and one in which this ordering is reversed), giving six weak orders of this type. And there is a single way of partitioning the set into three singletons, which can be totally ordered in six different ways. Thus, altogether, there are 13 different weak orders on three items.
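A short sketch (assuming Python; the function name is illustrative) reproduces these counts using a standard recurrence for the ordered Bell numbers, which chooses the k items forming the topmost equivalence class and then weakly orders the remaining items; the same values are obtained from the sum of k! S(n, k) over k mentioned above.

```python
from math import comb

def ordered_bell(n):
    # a(n) = number of weak orders on n labeled items, via the recurrence
    # a(n) = sum_{k=1..n} C(n, k) * a(n - k) with a(0) = 1: choose the k items
    # in the topmost equivalence class, then weakly order the remaining n - k.
    a = [1] * (n + 1)
    for m in range(1, n + 1):
        a[m] = sum(comb(m, k) * a[m - k] for k in range(1, m + 1))
    return a[n]

print([ordered_bell(n) for n in range(6)])   # [1, 1, 3, 13, 75, 541]
```

The value 13 for three items matches the count obtained by hand in the preceding paragraph.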
Adjacency structure.
Unlike for partial orders, the family of weak orderings on a given finite set is not in general connected by moves that add or remove a single order relation to or from a given ordering. For instance, for three elements, the ordering in which all three elements are tied differs by at least two pairs from any other weak ordering on the same set, in either the strict weak ordering or total preorder axiomatizations. However, a different kind of move is possible, in which the weak orderings on a set are more highly connected. Define a dichotomy to be a weak ordering with two equivalence classes, and define a dichotomy to be compatible with a given weak ordering if every two elements that are related in the ordering are either related in the same way or tied in the dichotomy. Alternatively, a dichotomy may be defined as a Dedekind cut for a weak ordering. Then a weak ordering may be characterized by its set of compatible dichotomies. For a finite set of labeled items, every pair of weak orderings may be connected to each other by a sequence of moves that add or remove one dichotomy at a time to or from this set of dichotomies. Moreover, the undirected graph that has the weak orderings as its vertices, and these moves as its edges, forms a partial cube.
Geometrically, the total orderings of a given finite set may be represented as the vertices of a permutohedron, and the dichotomies on this same set as the facets of the permutohedron. In this geometric representation, the weak orderings on the set correspond to the faces of all different dimensions of the permutohedron (including the permutohedron itself, but not the empty set, as a face). The codimension of a face gives the number of equivalence classes in the corresponding weak ordering. In this geometric representation the partial cube of moves on weak orderings is the graph describing the covering relation of the face lattice of the permutohedron.
For instance, for formula_77 the permutohedron on three elements is just a regular hexagon. The face lattice of the hexagon (again, including the hexagon itself as a face, but not including the empty set) has thirteen elements: one hexagon, six edges, and six vertices, corresponding to the one completely tied weak ordering, six weak orderings with one tie, and six total orderings. The graph of moves on these 13 weak orderings is shown in the figure.
Applications.
As mentioned above, weak orders have applications in utility theory. In linear programming and other types of combinatorial optimization problem, the prioritization of solutions or of bases is often given by a weak order, determined by a real-valued objective function; the phenomenon of ties in these orderings is called "degeneracy", and several types of tie-breaking rule have been used to refine this weak ordering into a total ordering in order to prevent problems caused by degeneracy.
Weak orders have also been used in computer science, in partition refinement based algorithms for lexicographic breadth-first search and lexicographic topological ordering. In these algorithms, a weak ordering on the vertices of a graph (represented as a family of sets that partition the vertices, together with a doubly linked list providing a total order on the sets) is gradually refined over the course of the algorithm, eventually producing a total ordering that is the output of the algorithm.
In the Standard Library for the C++ programming language, the set and multiset data types sort their input by a comparison function that is specified at the time of template instantiation, and that is assumed to implement a strict weak ordering.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "\\,<\\,"
},
{
"math_id": 4,
"text": "x < y"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "y,"
},
{
"math_id": 8,
"text": "z,"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "S \\times S"
},
{
"math_id": 11,
"text": "(x, y) \\in \\,<.\\,"
},
{
"math_id": 12,
"text": "y < x"
},
{
"math_id": 13,
"text": "S."
},
{
"math_id": 14,
"text": "x < x"
},
{
"math_id": 15,
"text": "\\,\\lesssim\\,"
},
{
"math_id": 16,
"text": "x \\lesssim y \\text{ is true } \\quad \\text{ if and only if } \\quad y < x \\text{ is false}"
},
{
"math_id": 17,
"text": "x \\lesssim y"
},
{
"math_id": 18,
"text": "x < y \\text{ or } x = y."
},
{
"math_id": 19,
"text": "x, y \\in S"
},
{
"math_id": 20,
"text": "x \\text{ and } y"
},
{
"math_id": 21,
"text": "\\,\\lesssim"
},
{
"math_id": 22,
"text": "x \\lesssim y \\text{ and } y \\lesssim x"
},
{
"math_id": 23,
"text": "\\,<"
},
{
"math_id": 24,
"text": "\\mathcal{P}"
},
{
"math_id": 25,
"text": "\\,<,"
},
{
"math_id": 26,
"text": "A, B \\in \\mathcal{P}"
},
{
"math_id": 27,
"text": "A < B \\text{ if and only if } a < b"
},
{
"math_id": 28,
"text": "a \\in A \\text{ and } b \\in B."
},
{
"math_id": 29,
"text": "a < b"
},
{
"math_id": 30,
"text": "a \\in A, b \\in B, \\text{ and } A < B."
},
{
"math_id": 31,
"text": "\\{ a, b ,c \\}"
},
{
"math_id": 32,
"text": "b < c."
},
{
"math_id": 33,
"text": "a, b \\text{ and } a, c"
},
{
"math_id": 34,
"text": "x < z \\text{ or } z < y"
},
{
"math_id": 35,
"text": "z"
},
{
"math_id": 36,
"text": "x < z \\text{ and } y < z"
},
{
"math_id": 37,
"text": "z < x \\text{ and } z < y"
},
{
"math_id": 38,
"text": "x, y, \\text{ and } z,"
},
{
"math_id": 39,
"text": "x \\lesssim y \\text{ and } y \\lesssim z"
},
{
"math_id": 40,
"text": "x \\lesssim z."
},
{
"math_id": 41,
"text": "x \\text{ and } y,"
},
{
"math_id": 42,
"text": "x \\lesssim y \\text{ or } y \\lesssim x."
},
{
"math_id": 43,
"text": "x,"
},
{
"math_id": 44,
"text": "x \\lesssim x."
},
{
"math_id": 45,
"text": "y < x."
},
{
"math_id": 46,
"text": "\\,\\lesssim,"
},
{
"math_id": 47,
"text": "y \\lesssim x."
},
{
"math_id": 48,
"text": "x \\lesssim y \\text{ and } y \\lesssim x."
},
{
"math_id": 49,
"text": "\\{a, b\\}"
},
{
"math_id": 50,
"text": "\\{a\\}, \\{b\\},"
},
{
"math_id": 51,
"text": "\\{b\\}, \\{a\\}, \\; \\text{ and }"
},
{
"math_id": 52,
"text": "\\{a, b\\}."
},
{
"math_id": 53,
"text": "X"
},
{
"math_id": 54,
"text": "f : X \\to \\R"
},
{
"math_id": 55,
"text": "a < b \\text{ if and only if } f(a) < f(b)."
},
{
"math_id": 56,
"text": "a {}\\lesssim{} b \\text{ if and only if } f(a) \\leq f(b)"
},
{
"math_id": 57,
"text": "a {}\\sim{} b \\text{ if and only if } f(a) = f(b)."
},
{
"math_id": 58,
"text": "f"
},
{
"math_id": 59,
"text": "g \\circ f"
},
{
"math_id": 60,
"text": "g"
},
{
"math_id": 61,
"text": "f."
},
{
"math_id": 62,
"text": "\\R^n."
},
{
"math_id": 63,
"text": "Y"
},
{
"math_id": 64,
"text": "\\,<,\\,"
},
{
"math_id": 65,
"text": "f : X \\to Y"
},
{
"math_id": 66,
"text": "a {}\\lesssim{} b \\text{ if and only if } f(a) {}\\lesssim{} f(b),"
},
{
"math_id": 67,
"text": "a {}\\sim{} b \\text{ if and only if } f(a) {}\\sim{} f(b)."
},
{
"math_id": 68,
"text": "X."
},
{
"math_id": 69,
"text": "Y."
},
{
"math_id": 70,
"text": "\\,\\leq."
},
{
"math_id": 71,
"text": "b < a"
},
{
"math_id": 72,
"text": "a \\lesssim b"
},
{
"math_id": 73,
"text": "b \\lesssim a,"
},
{
"math_id": 74,
"text": "a \\leq b"
},
{
"math_id": 75,
"text": "b \\leq a."
},
{
"math_id": 76,
"text": "n"
},
{
"math_id": 77,
"text": "n = 3,"
}
] | https://en.wikipedia.org/wiki?curid=828131 |
8281950 | Eternity puzzle | Tiling puzzle
The Eternity puzzle is a tiling puzzle created by Christopher Monckton and launched by the Ertl Company in June 1999. It was marketed as being practically unsolvable, with a £1 million prize on offer for whoever could solve it within four years. The prize was paid out in October 2000 for a winning solution arrived at by two mathematicians from Cambridge. A follow-up prize puzzle called Eternity II was launched in 2007.
Description.
The puzzle's objective was to fill a large equiangular (but not equilateral) dodecagon board with 209 puzzle pieces. The board is equipped with a triangular grid made of equilateral triangles. Its sides alternate in length: six sides coincide with the grid and are 7 triangles (placed edge-to-edge) long, while the other sides are slightly shorter and measure 8 triangles base-to-tip; since the height of each grid triangle is √3/2 of an edge length, this equals formula_0 edge lengths.
Each puzzle piece is a 12-polydrafter (dodecadrafter) made of twelve 30-60-90 triangles (that is, a continuous compound of twelve halves of equilateral triangles, restricted to the grid layout). Each piece has an area equal to that of 6 equilateral triangles, and the area of the entire dodecagon is exactly 209 * 6 = 1254 equilateral triangles' (or 2508 drafters) worth.
A hint piece was shown placed on every board and solution sheet, although it was not required to be placed there in any solution submission for the prize. Five other hints could be obtained by solving three smaller clue puzzles, which were sold separately.
Solution.
As soon as the puzzle was launched, an online community emerged devoted to solving it, centred on a mailing list on which many ideas and techniques were discussed. It was soon realised that it was trivial to fill the board almost completely, to an "end-game position" where an irregularly-shaped void had to be filled with only a few pieces, at which point the pieces left would be the "wrong shapes" to fill the remaining space. The hope of solving the end-game depended vitally on having pieces that were easy to tile together in a variety of shapes. Computer searches were carried out to find which pieces tiled well or badly, and these data used to alter otherwise-standard backtracking search programs to use the bad pieces first, in the hope of being left with only good pieces in the hard final part of the search.
The puzzle was solved on May 15, 2000, before the first deadline, by two Cambridge mathematicians, Alex Selby and Oliver Riordan. Key to their success was the mathematical rigour with which they approached the problem of determining the tileability of individual pieces and of empty regions within the board. These provided measures of the probability that a given piece could help to fill or 'tile' a given region, and the probability that a given region could be tiled by some combination of pieces. In the search for a solution, these probabilities were used to identify which partial tilings, out of a vast number explored by the computer program, were most likely to lead to a solution. A complete solution was obtained within seven months of development with the aid of two domestic PCs.
A second solution was found independently by Guenter Stertenbrink and submitted just 6 weeks later, on July 1, 2000. No other solutions have since been published, and the originally intended solution also remains unpublished.
Neither of the known solutions have any of the six hint pieces correctly placed. According to Alex Selby the puzzle was actually significantly easier to solve without enforcing any fixed hint pieces.
Prize.
The puzzle's inventor, Christopher Monckton, put up half the prize money himself, the other half being put up by underwriters in the London insurance market. According to Eternity's rules, possible solutions to the puzzle would be received by mail on September 21, 2000. If no correct solutions were opened, the mail for the next year would be kept until September 30, 2001, the process being repeated every year until 2003, after which no entries would be accepted.
Before marketing the puzzle, Monckton had thought that it would take at least three years before anyone could crack the puzzle. One estimate made at the time stated that the puzzle had 10^500 possible attempts at a solution, and it would take longer than the lifetime of the Universe to calculate all of them even if you had a million computers.
Once the puzzle was solved, Monckton claimed that the earlier-than-expected solution had forced him to sell his 67-room house, Crimonmogate, to pay the prize. In 2006, he said that the claim had been a PR stunt to boost sales over Christmas, and that the house's sale was unrelated to the prize, as he was going to sell it anyway.
Influence.
The architectural design of the Perth Arena in Perth, Western Australia, was heavily influenced by the eternity puzzle; the exterior design is also strongly reflected throughout the main arena, foyers, breakout function rooms and the entrance to the venue.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "4\\sqrt{3} \\approx 6.9"
}
] | https://en.wikipedia.org/wiki?curid=8281950 |
8282374 | Tropical cyclone | Type of rapidly rotating storm system
A tropical cyclone is a rapidly rotating storm system with a low-pressure center, a closed low-level atmospheric circulation, strong winds, and a spiral arrangement of thunderstorms that produce heavy rain and squalls. Depending on its location and strength, a tropical cyclone is called a hurricane (), typhoon (), tropical storm, cyclonic storm, tropical depression, or simply cyclone. A hurricane is a strong tropical cyclone that occurs in the Atlantic Ocean or northeastern Pacific Ocean. A typhoon occurs in the northwestern Pacific Ocean. In the Indian Ocean and South Pacific, comparable storms are referred to as "tropical cyclones". In modern times, on average around 80 to 90 named tropical cyclones form each year around the world, over half of which develop hurricane-force winds of or more.
Tropical cyclones typically form over large bodies of relatively warm water. They derive their energy through the evaporation of water from the ocean surface, which ultimately condenses into clouds and rain when moist air rises and cools to saturation. This energy source differs from that of mid-latitude cyclonic storms, such as nor'easters and European windstorms, which are powered primarily by horizontal temperature contrasts. Tropical cyclones are typically between in diameter.
The strong rotating winds of a tropical cyclone are a result of the conservation of angular momentum imparted by the Earth's rotation as air flows inwards toward the axis of rotation. As a result, cyclones rarely form within 5° of the equator. Tropical cyclones are very rare in the South Atlantic (although occasional examples do occur) due to consistently strong wind shear and a weak Intertropical Convergence Zone. In contrast, the African easterly jet and areas of atmospheric instability give rise to cyclones in the Atlantic Ocean and Caribbean Sea.
Heat energy from the ocean acts as the accelerator for tropical cyclones. This causes inland regions to suffer far less damage from cyclones than coastal regions, although the impacts of flooding are felt across the board. Coastal damage may be caused by strong winds and rain, high waves (due to winds), storm surges (due to wind and severe pressure changes), and the potential of spawning tornadoes.
Tropical cyclones draw in air from a large area and concentrate the water content of that air into precipitation over a much smaller area. This replenishing of moisture-bearing air after rain may cause multi-hour or multi-day extremely heavy rain up to from the coastline, far beyond the amount of water that the local atmosphere holds at any one time. This in turn can lead to river flooding, overland flooding, and a general overwhelming of local water control structures across a large area.
Climate change affects tropical cyclones in several ways. Scientists found that climate change can exacerbate the impact of tropical cyclones by increasing their duration, occurrence, and intensity due to the warming of ocean waters and intensification of the water cycle.
<templatestyles src="Template:TOC limit/styles.css" />
Definition and terminology.
A tropical cyclone is the generic term for a warm-cored, non-frontal synoptic-scale low-pressure system over tropical or subtropical waters around the world. The systems generally have a well-defined center which is surrounded by deep atmospheric convection and a closed wind circulation at the surface. A tropical cyclone is generally deemed to have formed once mean surface winds in excess of are observed. It is assumed at this stage that a tropical cyclone has become self-sustaining and can continue to intensify without any help from its environment.
Depending on its location and strength, a tropical cyclone is referred to by different names, including "hurricane", "typhoon", "tropical storm", "cyclonic storm", "tropical depression", or simply "cyclone". A hurricane is a strong tropical cyclone that occurs in the Atlantic Ocean or northeastern Pacific Ocean, and a typhoon occurs in the northwestern Pacific Ocean. In the Indian Ocean and South Pacific, comparable storms are referred to as "tropical cyclones", and such storms in the Indian Ocean can also be called "severe cyclonic storms".
"Tropical" refers to the geographical origin of these systems, which form almost exclusively over tropical seas. "Cyclone" refers to their winds moving in a circle, whirling round their central clear eye, with their surface winds blowing counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. The opposite direction of circulation is due to the Coriolis effect.
Formation.
Tropical cyclones tend to develop during the summer, but have been noted in nearly every month in most tropical cyclone basins. Tropical cyclones on either side of the Equator generally have their origins in the Intertropical Convergence Zone, where winds blow from either the northeast or southeast. Within this broad area of low-pressure, air is heated over the warm tropical ocean and rises in discrete parcels, which causes thundery showers to form. These showers dissipate quite quickly; however, they can group together into large clusters of thunderstorms. This creates a flow of warm, moist, rapidly rising air, which starts to rotate cyclonically as it interacts with the rotation of the earth.
Several factors are required for these thunderstorms to develop further, including sea surface temperatures of around and low vertical wind shear surrounding the system, atmospheric instability, high humidity in the lower to middle levels of the troposphere, enough Coriolis force to develop a low-pressure center, a pre-existing low-level focus or disturbance, and upper-level divergence.
There is a limit on tropical cyclone intensity which is strongly related to the water temperatures along its path.
An average of 86 tropical cyclones of tropical storm intensity form annually worldwide. Of those, 47 reach strength higher than , and 20 become intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson scale).
Climate oscillations such as El Niño–Southern Oscillation (ENSO) and the Madden–Julian oscillation modulate the timing and frequency of tropical cyclone development. Rossby waves can aid in the formation of a new tropical cyclone by disseminating the energy of an existing, mature storm. Kelvin waves can contribute to tropical cyclone formation by regulating the development of the westerlies. Cyclone formation is usually reduced 3 days prior to the wave's crest and increased during the 3 days after.
Formation regions and warning centers.
The majority of tropical cyclones each year form in one of seven tropical cyclone basins, which are monitored by a variety of meteorological services and warning centers. Ten of these warning centers worldwide are designated as either a Regional Specialized Meteorological Centre or a Tropical Cyclone Warning Centre by the World Meteorological Organization's (WMO) tropical cyclone programme. These warning centers issue advisories which provide basic information and cover a system's present and forecast position, movement, and intensity, in their designated areas of responsibility. Meteorological services around the world are generally responsible for issuing warnings for their own country; however, there are exceptions, as the United States National Hurricane Center and Fiji Meteorological Service issue alerts, watches and warnings for various island nations in their areas of responsibility. The United States Joint Typhoon Warning Center and Fleet Weather Center also publicly issue warnings about tropical cyclones on behalf of the United States Government. The Brazilian Navy Hydrographic Center names South Atlantic tropical cyclones, however the South Atlantic is not a major basin, and not an official basin according to the WMO.
Interactions with climate.
Each year on average, around 80 to 90 named tropical cyclones form around the world, of which over half develop hurricane-force winds of or more. Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are in season. In the Northern Atlantic Ocean, a distinct cyclone season occurs from June 1 to November 30, sharply peaking from late August through September. The statistical peak of the Atlantic hurricane season is September 10. The Northeast Pacific Ocean has a broader period of activity, but in a similar time frame to the Atlantic. The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone year begins on July 1 and runs all year-round encompassing the tropical cyclone seasons, which run from November 1 until the end of April, with peaks in mid-February to early March.
Of various modes of variability in the climate system, El Niño–Southern Oscillation has the largest effect on tropical cyclone activity. Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. When the subtropical ridge position shifts due to El Niño, so will the preferred tropical cyclone tracks. Areas west of Japan and Korea tend to experience much fewer September–November tropical cyclone impacts during El Niño and neutral years. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat to China and much greater intensity in the Philippines. The Atlantic Ocean experiences depressed activity due to increased vertical wind shear across the region during El Niño years. Tropical cyclones are further influenced by the Atlantic Meridional Mode, the Quasi-biennial oscillation and the Madden–Julian oscillation.
Influence of climate change.
Climate change can affect tropical cyclones in a variety of ways: an intensification of rainfall and wind speed, a decrease in overall frequency, an increase in the frequency of very intense storms and a poleward extension of where the cyclones reach maximum intensity are among the possible consequences of human-induced climate change. Tropical cyclones use warm, moist air as their fuel. As climate change is warming ocean temperatures, there is potentially more of this fuel available. Between 1979 and 2017, there was a global increase in the proportion of tropical cyclones of Category 3 and higher on the Saffir–Simpson scale. The trend was most clear in the North Atlantic and in the Southern Indian Ocean. In the North Pacific, tropical cyclones have been moving poleward into colder waters and there was no increase in intensity over this period. With warming, a greater percentage (+13%) of tropical cyclones are expected to reach Category 4 and 5 strength. A 2019 study indicates that climate change has been driving the observed trend of rapid intensification of tropical cyclones in the Atlantic basin. Rapidly intensifying cyclones are hard to forecast and therefore pose additional risk to coastal communities.
Warmer air can hold more water vapor: the theoretical maximum water vapor content is given by the Clausius–Clapeyron relation, which yields ≈7% increase in water vapor in the atmosphere per warming. All models that were assessed in a 2019 review paper show a future increase of rainfall rates. Additional sea level rise will increase storm surge levels. It is plausible that extreme wind waves see an increase as a consequence of changes in tropical cyclones, further exacerbating storm surge dangers to coastal communities. The compounding effects from floods, storm surge, and terrestrial flooding (rivers) are projected to increase due to global warming.
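As an illustration of this scaling (a sketch only, assuming Python; the Bolton approximation to the Clausius–Clapeyron relation is used, and the chosen temperatures are illustrative), the saturation vapor pressure rises by roughly 6 to 7% for each additional degree Celsius:

```python
from math import exp

def saturation_vapor_pressure(t_celsius):
    # Bolton (1980) approximation to the Clausius-Clapeyron relation;
    # returns the saturation vapor pressure in hPa for t in degrees Celsius.
    return 6.112 * exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (5.0, 15.0, 25.0):
    e0, e1 = saturation_vapor_pressure(t), saturation_vapor_pressure(t + 1.0)
    print(f"{t:.0f} -> {t + 1:.0f} degC: +{100 * (e1 / e0 - 1):.1f}% saturation vapor pressure")
# Each extra degree Celsius raises the saturation vapor pressure by roughly
# 6 to 7%, consistent with the figure quoted above.
```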
There is currently no consensus on how climate change will affect the overall frequency of tropical cyclones. A majority of climate models show a decreased frequency in future projections. For instance, a 2020 paper comparing nine high-resolution climate models found robust decreases in frequency in the Southern Indian Ocean and the Southern Hemisphere more generally, while finding mixed signals for Northern Hemisphere tropical cyclones. Observations have shown little change in the overall frequency of tropical cyclones worldwide, with increased frequency in the North Atlantic and central Pacific, and significant decreases in the southern Indian Ocean and western North Pacific. There has been a poleward expansion of the latitude at which the maximum intensity of tropical cyclones occurs, which may be associated with climate change. In the North Pacific, there may also have been an eastward expansion. Between 1949 and 2016, there was a slowdown in tropical cyclone translation speeds. It is unclear still to what extent this can be attributed to climate change: climate models do not all show this feature.
A review article published in 2021 concluded that the geographic range of tropical cyclones will probably expand poleward in response to climate warming of the Hadley circulation.
Intensity.
Tropical cyclone intensity is based on wind speeds and pressure; relationships between winds and pressure are often used in determining the intensity of a storm. Tropical cyclone scales such as the Saffir-Simpson Hurricane Wind Scale and Australia's scale (Bureau of Meteorology) only use wind speed for determining the category of a storm. The most intense storm on record is Typhoon Tip in the northwestern Pacific Ocean in 1979, which reached a minimum pressure of and maximum sustained wind speeds of . The highest maximum sustained wind speed ever recorded was in Hurricane Patricia in 2015—the most intense cyclone ever recorded in the Western Hemisphere.
Factors that influence intensity.
Warm sea surface temperatures are required in order for tropical cyclones to form and strengthen. The commonly-accepted minimum temperature range for this to occur is , however, multiple studies have proposed a lower minimum of . Higher sea surface temperatures result in faster intensification rates and sometimes even rapid intensification. High ocean heat content, also known as Tropical Cyclone Heat Potential, allows storms to achieve a higher intensity. Most tropical cyclones that experience rapid intensification are traversing regions of high ocean heat content rather than lower values. High ocean heat content values can help to offset the oceanic cooling caused by the passage of a tropical cyclone, limiting the effect this cooling has on the storm. Faster-moving systems are able to intensify to higher intensities with lower ocean heat content values. Slower-moving systems require higher values of ocean heat content to achieve the same intensity.
The passage of a tropical cyclone over the ocean causes the upper layers of the ocean to cool substantially, a process known as upwelling, which can negatively influence subsequent cyclone development. This cooling is primarily caused by wind-driven mixing of cold water from deeper in the ocean with the warm surface waters. This effect results in a negative feedback process that can inhibit further development or lead to weakening. Additional cooling may come in the form of cold water from falling raindrops (this is because the atmosphere is cooler at higher altitudes). Cloud cover may also play a role in cooling the ocean, by shielding the ocean surface from direct sunlight before and slightly after the storm passage. All these effects can combine to produce a dramatic drop in sea surface temperature over a large area in just a few days. Conversely, the mixing of the sea can result in heat being inserted into deeper waters, with potential effects on global climate.
Vertical wind shear decreases tropical cyclone predictability, with storms exhibiting a wide range of responses in the presence of shear. Wind shear often negatively affects tropical cyclone intensification by displacing moisture and heat from a system's center. Low levels of vertical wind shear are most optimal for strengthening, while stronger wind shear induces weakening. Dry air entraining into a tropical cyclone's core has a negative effect on its development and intensity by diminishing atmospheric convection and introducing asymmetries in the storm's structure. Symmetric, strong outflow leads to a faster rate of intensification than observed in other systems by mitigating local wind shear. Weakening outflow is associated with the weakening of rainbands within a tropical cyclone. Tropical cyclones may still intensify, even rapidly, in the presence of moderate or strong wind shear depending on the evolution and structure of the storm's convection.
The size of tropical cyclones plays a role in how quickly they intensify. Smaller tropical cyclones are more prone to rapid intensification than larger ones. The Fujiwhara effect, which involves interaction between two tropical cyclones, can weaken and ultimately result in the dissipation of the weaker of two tropical cyclones by reducing the organization of the system's convection and imparting horizontal wind shear. Tropical cyclones typically weaken while situated over a landmass because conditions are often unfavorable as a result of the lack of oceanic forcing. The Brown ocean effect can allow a tropical cyclone to maintain or increase its intensity following landfall, in cases where there has been copious rainfall, through the release of latent heat from the saturated soil. Orographic lift can cause a significant increase in the intensity of the convection of a tropical cyclone when its eye moves over a mountain, breaking the capped boundary layer that had been restraining it. Jet streams can both enhance and inhibit tropical cyclone intensity by influencing the storm's outflow as well as vertical wind shear.
Rapid intensification.
On occasion, tropical cyclones may undergo a process known as rapid intensification, a period in which the maximum sustained winds of a tropical cyclone increase by or more within 24 hours. Similarly, rapid deepening in tropical cyclones is defined as a minimum sea surface pressure decrease of per hour or within a 24-hour period; explosive deepening occurs when the surface pressure decreases by per hour for at least 12 hours or per hour for at least 6 hours. For rapid intensification to occur, several conditions must be in place. Water temperatures must be extremely high (near or above ), and water of this temperature must be sufficiently deep such that waves do not upwell cooler waters to the surface. In addition, Tropical Cyclone Heat Potential is one such non-conventional subsurface oceanographic parameter influencing cyclone intensity. Wind shear must be low; when wind shear is high, the convection and circulation in the cyclone will be disrupted. Usually, an anticyclone in the upper layers of the troposphere above the storm must be present as well: for extremely low surface pressures to develop, air must be rising very rapidly in the eyewall of the storm, and an upper-level anticyclone helps channel this air away from the cyclone efficiently. However, some cyclones such as Hurricane Epsilon have rapidly intensified despite relatively unfavorable conditions.
Dissipation.
There are a number of ways a tropical cyclone can weaken, dissipate, or lose its tropical characteristics. These include making landfall, moving over cooler water, encountering dry air, or interacting with other weather systems; however, once a system has dissipated or lost its tropical characteristics, its remnants could regenerate a tropical cyclone if environmental conditions become favorable.
A tropical cyclone can dissipate when it moves over waters significantly cooler than . This will deprive the storm of such tropical characteristics as a warm core with thunderstorms near the center, so that it becomes a remnant low-pressure area. Remnant systems may persist for several days before losing their identity. This dissipation mechanism is most common in the eastern North Pacific. Weakening or dissipation can also occur if a storm experiences vertical wind shear which causes the convection and heat engine to move away from the center; this normally ceases the development of a tropical cyclone. In addition, its interaction with the main belt of the Westerlies, by means of merging with a nearby frontal zone, can cause tropical cyclones to evolve into extratropical cyclones. This transition can take 1–3 days.
Should a tropical cyclone make landfall or pass over an island, its circulation could start to break down, especially if it encounters mountainous terrain. When a system makes landfall on a large landmass, it is cut off from its supply of warm moist maritime air and starts to draw in dry continental air. This, combined with the increased friction over land areas, leads to the weakening and dissipation of the tropical cyclone. Over a mountainous terrain, a system can quickly weaken; however, over flat areas, it may endure for two to three days before circulation breaks down and dissipates.
Over the years, there have been a number of techniques considered to try to artificially modify tropical cyclones. These techniques have included using nuclear weapons, cooling the ocean with icebergs, blowing the storm away from land with giant fans, and seeding selected storms with dry ice or silver iodide. These techniques, however, fail to appreciate the duration, intensity, power or size of tropical cyclones.
Methods for assessing intensity.
A variety of methods or techniques, including surface, satellite, and aerial, are used to assess the intensity of a tropical cyclone. Reconnaissance aircraft fly around and through tropical cyclones, outfitted with specialized instruments, to collect information that can be used to ascertain the winds and pressure of a system. Tropical cyclones possess winds of different speeds at different heights. Winds recorded at flight level can be converted to find the wind speeds at the surface. Surface observations, such as ship reports, land stations, mesonets, coastal stations, and buoys, can provide information on a tropical cyclone's intensity or the direction it is traveling. Wind-pressure relationships (WPRs) are used as a way to determine the pressure of a storm based on its wind speed. Several different methods and equations have been proposed to calculate WPRs. Tropical cyclone agencies each use their own fixed WPR, which can result in inaccuracies between agencies that are issuing estimates on the same system. ASCAT is a scatterometer used by the MetOp satellites to map the wind field vectors of tropical cyclones. SMAP uses an L-band radiometer channel to determine the wind speeds of tropical cyclones at the ocean surface, and has been shown to be reliable at higher intensities and under heavy rainfall conditions, unlike scatterometer-based and other radiometer-based instruments.
The Dvorak technique plays a large role in both the classification of a tropical cyclone and the determination of its intensity. Used in warning centers, the method was developed by Vernon Dvorak in the 1970s, and uses both visible and infrared satellite imagery in the assessment of tropical cyclone intensity. The Dvorak technique uses a scale of "T-numbers", scaling in increments of 0.5 from T1.0 to T8.0. Each T-number has an intensity assigned to it, with larger T-numbers indicating a stronger system. Tropical cyclones are assessed by forecasters according to an array of patterns, including curved banding features, shear, central dense overcast, and eye, in order to determine the T-number and thus assess the intensity of the storm. The Cooperative Institute for Meteorological Satellite Studies works to develop and improve automated satellite methods, such as the Advanced Dvorak Technique (ADT) and SATCON. The ADT, used by a large number of forecasting centers, uses infrared geostationary satellite imagery and an algorithm based upon the Dvorak technique to assess the intensity of tropical cyclones. The ADT has a number of differences from the conventional Dvorak technique, including changes to intensity constraint rules and the usage of microwave imagery to base a system's intensity upon its internal structure, which prevents the intensity from leveling off before an eye emerges in infrared imagery. The SATCON weights estimates from various satellite-based systems and microwave sounders, accounting for the strengths and flaws in each individual estimate, to produce a consensus estimate of a tropical cyclone's intensity which can be more reliable than the Dvorak technique at times.
Intensity metrics.
Multiple intensity metrics are used, including accumulated cyclone energy (ACE), the Hurricane Surge Index, the Hurricane Severity Index, the Power Dissipation Index (PDI), and integrated kinetic energy (IKE). ACE is a metric of the total energy a system has exerted over its lifespan. ACE is calculated by summing the squares of a cyclone's sustained wind speed every six hours, as long as the system is at or above tropical storm intensity and either tropical or subtropical. The calculation of the PDI is similar in nature to ACE, with the major difference being that wind speeds are cubed rather than squared. The Hurricane Surge Index is a metric of the potential damage a storm may inflict via storm surge. It is calculated by squaring the quotient of the storm's wind speed and a climatological value (), and then multiplying that quantity by the quotient of the radius of hurricane-force winds and its climatological value (). This can be represented in equation form as:
formula_0
where formula_1 is the storm's wind speed and formula_2 is the radius of hurricane-force winds. The Hurricane Severity Index is a scale that can assign up to 50 points to a system; up to 25 points come from intensity, while the other 25 come from the size of the storm's wind field. The IKE model measures the destructive capability of a tropical cyclone via winds, waves, and surge. It is calculated as:
formula_3
where formula_4 is the density of air, formula_5 is a sustained surface wind speed value, and formula_6 is the volume element.
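As a rough illustration of how two of these metrics are computed, the following Python sketch (not an operational implementation) sums six-hourly squared winds for ACE and applies the Hurricane Surge Index equation above; the 34-knot tropical-storm threshold and the conventional 10⁻⁴ ACE scaling are assumptions not stated in the text.

```python
# Minimal sketch of two of the intensity metrics described above.

def accumulated_cyclone_energy(winds_kt):
    """Sum the squares of six-hourly sustained winds (in knots) while the
    system is at or above tropical-storm intensity; 34 kt is assumed here
    as that threshold, and the conventional 1e-4 scaling gives the usual
    units of 10^4 kt^2."""
    return sum(v ** 2 for v in winds_kt if v >= 34) * 1e-4

def hurricane_surge_index(wind_ms, radius_km):
    """(v / 33 m/s)^2 * (r / 96.6 km), following the equation given above."""
    return (wind_ms / 33.0) ** 2 * (radius_km / 96.6)

# Example: a short six-hourly wind history (knots) and one surge-index estimate.
print(accumulated_cyclone_energy([35, 45, 60, 75, 90, 80, 50]))
print(hurricane_surge_index(wind_ms=50.0, radius_km=120.0))
```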
Classification and naming.
Classification.
Around the world, tropical cyclones are classified in different ways, based on the location (tropical cyclone basins), the structure of the system and its intensity. For example, within the Northern Atlantic and Eastern Pacific basins, a tropical cyclone with wind speeds of over is called a hurricane, while it is called a typhoon or a severe cyclonic storm within the Western Pacific or North Indian oceans. When a hurricane passes west across the International Date Line in the Northern Hemisphere, it becomes known as a typhoon. This happened in 2014 for Hurricane Genevieve, which became Typhoon Genevieve. Within the Southern Hemisphere, it is either called a hurricane, tropical cyclone or a severe tropical cyclone, depending on whether it is located within the South Atlantic, South-West Indian Ocean, Australian region or the South Pacific Ocean. The descriptors for tropical cyclones with wind speeds below also vary by tropical cyclone basin and may be further subdivided into categories such as "tropical storm", "cyclonic storm", "tropical depression", or "deep depression".
Naming.
The practice of using given names to identify tropical cyclones dates back to the late 1800s and early 1900s and gradually superseded the existing system—simply naming cyclones based on what they hit. The system currently used provides positive identification of severe weather systems in a brief form, that is readily understood and recognized by the public. The credit for the first usage of personal names for weather systems is generally given to the Queensland Government Meteorologist Clement Wragge who named systems between 1887 and 1907. This system of naming weather systems subsequently fell into disuse for several years after Wragge retired, until it was revived in the latter part of World War II for the Western Pacific. Formal naming schemes have subsequently been introduced for the North and South Atlantic, Eastern, Central, Western and Southern Pacific basins as well as the Australian region and Indian Ocean.
At present, tropical cyclones are officially named by one of twelve meteorological services and retain their names throughout their lifetimes to provide ease of communication between forecasters and the general public regarding forecasts, watches, and warnings. Since the systems can last a week or longer and more than one can be occurring in the same basin at the same time, the names are thought to reduce the confusion about what storm is being described. Names are assigned in order from predetermined lists once systems have one-, three-, or ten-minute sustained wind speeds of more than , depending on the basin in which they originate. However, standards vary from basin to basin, with some tropical depressions named in the Western Pacific, while tropical cyclones have to have a significant amount of gale-force winds occurring around the center before they are named within the Southern Hemisphere. The names of significant tropical cyclones in the North Atlantic Ocean, Pacific Ocean, and Australian region are retired from the naming lists and replaced with another name. Tropical cyclones that develop around the world are assigned an identification code consisting of a two-digit number and suffix letter by the warning centers that monitor them.
Related cyclone types.
In addition to tropical cyclones, there are two other classes of cyclones within the spectrum of cyclone types. These kinds of cyclones, known as extratropical cyclones and subtropical cyclones, can be stages a tropical cyclone passes through during its formation or dissipation. An "extratropical cyclone" is a storm that derives energy from horizontal temperature differences, which are typical in higher latitudes. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses; although not as frequently, an extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone. From space, extratropical storms have a characteristic "comma-shaped" cloud pattern. Extratropical cyclones can also be dangerous when their low-pressure centers cause powerful winds and high seas.
A "subtropical cyclone" is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form in a wide band of latitudes, from the equator to 50°. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm.
Structure.
Eye and center.
At the center of a mature tropical cyclone, air sinks rather than rises. For a sufficiently strong storm, air may sink over a layer deep enough to suppress cloud formation, thereby creating a clear "eye". Weather in the eye is normally calm and free of convective clouds, although the sea may be extremely violent. The eye is normally circular and is typically in diameter, though eyes as small as and as large as have been observed.
The cloudy outer edge of the eye is called the "eyewall". The eyewall typically expands outward with height, resembling an arena football stadium; this phenomenon is sometimes referred to as the "stadium effect". The eyewall is where the greatest wind speeds are found, air rises most rapidly, clouds reach their highest altitude, and precipitation is the heaviest. The heaviest wind damage occurs where a tropical cyclone's eyewall passes over land.
In a weaker storm, the eye may be obscured by the central dense overcast, which is the upper-level cirrus shield that is associated with a concentrated area of strong thunderstorm activity near the center of a tropical cyclone.
The eyewall may vary over time in the form of eyewall replacement cycles, particularly in intense tropical cyclones. Outer rainbands can organize into an outer ring of thunderstorms that slowly moves inward, which is believed to rob the primary eyewall of moisture and angular momentum. When the primary eyewall weakens, the tropical cyclone weakens temporarily. The outer eyewall eventually replaces the primary one at the end of the cycle, at which time the storm may return to its original intensity.
Size.
There are a variety of metrics commonly used to measure storm size. The most common metrics include the radius of maximum wind, the radius of wind (i.e. gale force), the radius of outermost closed isobar (ROCI), and the radius of vanishing wind. An additional metric is the radius at which the cyclone's relative vorticity field decreases to 1×10⁻⁵ s⁻¹.
On Earth, tropical cyclones span a large range of sizes, from as measured by the radius of vanishing wind. They are largest on average in the northwest Pacific Ocean basin and smallest in the northeastern Pacific Ocean basin. If the radius of outermost closed isobar is less than two degrees of latitude (), then the cyclone is "very small" or a "midget". A radius of 3–6 latitude degrees () is considered "average sized". "Very large" tropical cyclones have a radius of greater than 8 degrees (). Observations indicate that size is only weakly correlated to variables such as storm intensity (i.e. maximum wind speed), radius of maximum wind, latitude, and maximum potential intensity. Typhoon Tip is the largest cyclone on record, with tropical storm-force winds in diameter. The smallest storm on record is Tropical Storm Marco of 2008, which generated tropical storm-force winds only in diameter.
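Purely for illustration, the ROCI-based size categories above can be written as a small helper; the thresholds follow the latitude-degree values quoted in the text, and the unnamed 2–3 and 6–8 degree ranges are left unclassified.

```python
def roci_size_category(roci_degrees_lat):
    """Rough size label from the radius of outermost closed isobar (ROCI),
    given in degrees of latitude, using the thresholds quoted above; the
    2-3 and 6-8 degree ranges are not named in the text, so None is returned."""
    if roci_degrees_lat < 2:
        return "very small / midget"
    if 3 <= roci_degrees_lat <= 6:
        return "average sized"
    if roci_degrees_lat > 8:
        return "very large"
    return None

print(roci_size_category(4.5))  # -> "average sized"
```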
Movement.
The movement of a tropical cyclone (i.e. its "track") is typically approximated as the sum of two terms: "steering" by the background environmental wind and "beta drift". Some tropical cyclones can move across large distances, such as Hurricane John, the second longest-lasting tropical cyclone on record, which traveled , the longest track of any Northern Hemisphere tropical cyclone, over its 31-day lifespan in 1994.
Environmental steering.
Environmental steering is the primary influence on the motion of tropical cyclones. It represents the movement of the storm due to prevailing winds and other wider environmental conditions, similar to "leaves carried along by a stream".
Physically, the winds, or flow field, in the vicinity of a tropical cyclone may be treated as having two parts: the flow associated with the storm itself, and the large-scale background flow of the environment. Tropical cyclones can be treated as local maxima of vorticity suspended within the large-scale background flow of the environment. In this way, tropical cyclone motion may be represented to first-order as advection of the storm by the local environmental flow. This environmental flow is termed the "steering flow" and is the dominant influence on tropical cyclone motion. The strength and direction of the steering flow can be approximated as a vertical integration of the winds blowing horizontally in the cyclone's vicinity, weighted by the altitude at which those winds are occurring. Because winds can vary with height, determining the steering flow precisely can be difficult.
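As a sketch of the idea of a pressure-weighted deep-layer mean steering flow, the following snippet averages environmental winds at several pressure levels, weighting each by the layer thickness it represents; the specific levels and weights are illustrative assumptions, not a quoted operational definition.

```python
def deep_layer_mean_wind(levels):
    """levels: list of (weight, u, v) tuples, where weight is the layer
    thickness (e.g. in hPa) assigned to each pressure level and (u, v)
    is the environmental wind at that level in m/s.
    Returns the weighted-mean (u, v) steering vector."""
    total_w = sum(w for w, _, _ in levels)
    u_mean = sum(w * u for w, u, _ in levels) / total_w
    v_mean = sum(w * v for w, _, v in levels) / total_w
    return u_mean, v_mean

# Illustrative layer weights (hPa of thickness) and winds at a few levels.
steering = deep_layer_mean_wind([
    (150, -6.0, 1.0),    # lower-tropospheric layer (illustrative)
    (200, -8.0, 2.5),    # mid-tropospheric layer (illustrative)
    (150, -10.0, 4.0),   # upper-tropospheric layer (illustrative)
])
print(steering)
```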
The pressure altitude at which the background winds are most correlated with a tropical cyclone's motion is known as the "steering level". The motion of stronger tropical cyclones is more correlated with the background flow averaged across a thicker portion of the troposphere compared to weaker tropical cyclones, whose motion is more correlated with the background flow averaged across a narrower extent of the lower troposphere. When wind shear and latent heat release are present, tropical cyclones tend to move towards regions where potential vorticity is increasing most quickly.
Climatologically, tropical cyclones are steered primarily westward by the east-to-west trade winds on the equatorial side of the subtropical ridge—a persistent high-pressure area over the world's subtropical oceans. In the tropical North Atlantic and Northeast Pacific oceans, the trade winds steer tropical easterly waves westward from the African coast toward the Caribbean Sea, North America, and ultimately into the central Pacific Ocean before the waves dampen out. These waves are the precursors to many tropical cyclones within this region. In contrast, in the Indian Ocean and Western Pacific in both hemispheres, tropical cyclogenesis is influenced less by tropical easterly waves and more by the seasonal movement of the Intertropical Convergence Zone and the monsoon trough. Other weather systems such as mid-latitude troughs and broad monsoon gyres can also influence tropical cyclone motion by modifying the steering flow.
Beta drift.
In addition to environmental steering, a tropical cyclone will tend to drift poleward and westward, a motion known as "beta drift". This motion is due to the superposition of a vortex, such as a tropical cyclone, onto an environment in which the Coriolis force varies with latitude, such as on a sphere or beta plane. The magnitude of the component of tropical cyclone motion associated with the beta drift ranges between and tends to be larger for more intense tropical cyclones and at higher latitudes. It is induced indirectly by the storm itself as a result of feedback between the cyclonic flow of the storm and its environment.
Physically, the cyclonic circulation of the storm advects environmental air poleward east of center and equatorward west of center. Because air must conserve its angular momentum, this flow configuration induces a cyclonic gyre equatorward and westward of the storm center and an anticyclonic gyre poleward and eastward of the storm center. The combined flow of these gyres acts to advect the storm slowly poleward and westward. This effect occurs even if there is zero environmental flow. Due to a direct dependence of the beta drift on angular momentum, the size of a tropical cyclone can affect the influence of beta drift on its motion; beta drift imparts a greater influence on the movement of larger tropical cyclones than that of smaller ones.
Multiple storm interaction.
A third component of motion that occurs relatively infrequently involves the interaction of multiple tropical cyclones. When two cyclones approach one another, their centers will begin orbiting cyclonically about a point between the two systems. Depending on their separation distance and strength, the two vortices may simply orbit around one another, or else may spiral into the center point and merge. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will orbit around it. This phenomenon is called the Fujiwhara effect, after Sakuhei Fujiwhara.
Interaction with the mid-latitude westerlies.
Though a tropical cyclone typically moves from east to west in the tropics, its track may shift poleward and eastward either as it moves west of the subtropical ridge axis or else if it interacts with the mid-latitude flow, such as the jet stream or an extratropical cyclone. This motion, termed "recurvature", commonly occurs near the western edge of the major ocean basins, where the jet stream typically has a poleward component and extratropical cyclones are common. An example of tropical cyclone recurvature was Typhoon Ioke in 2006.
Effects.
Natural phenomena caused or worsened by tropical cyclones.
Tropical cyclones out at sea cause large waves, heavy rain, floods and high winds, disrupting international shipping and, at times, causing shipwrecks. Tropical cyclones stir up water, leaving a cool wake behind them, which causes the region to be less favorable for subsequent tropical cyclones. On land, strong winds can damage or destroy vehicles, buildings, bridges, and other outside objects, turning loose debris into deadly flying projectiles. The storm surge, or the increase in sea level due to the cyclone, is typically the worst effect from landfalling tropical cyclones, historically resulting in 90% of tropical cyclone deaths. Cyclone Mahina produced the highest storm surge on record, , at Bathurst Bay, Queensland, Australia, in March 1899. Other ocean-based hazards that tropical cyclones produce are rip currents and undertow. These hazards can occur hundreds of kilometers (hundreds of miles) away from the center of a cyclone, even if other weather conditions are favorable.
The broad rotation of a landfalling tropical cyclone, and vertical wind shear at its periphery, spawn tornadoes. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall. Hurricane Ivan produced 120 tornadoes, more than any other tropical cyclone. Lightning activity is produced within tropical cyclones; this activity is more intense within stronger storms and closer to and within the storm's eyewall. Tropical cyclones can increase the amount of snowfall a region experiences by delivering additional moisture. Wildfires can be worsened when a nearby storm fans their flames with its strong winds.
Effect on property and human life.
Tropical cyclones regularly affect the coastlines of most of Earth's major bodies of water along the Atlantic, Pacific, and Indian oceans. Tropical cyclones have caused significant destruction and loss of human life, resulting in about 2 million deaths since the 19th century. Large areas of standing water caused by flooding lead to infection, as well as contributing to mosquito-borne illnesses. Crowded evacuees in shelters increase the risk of disease propagation. Tropical cyclones significantly interrupt infrastructure, leading to power outages, bridge and road destruction, and the hampering of reconstruction efforts. Winds and water from storms can damage or destroy homes, buildings, and other manmade structures. Tropical cyclones destroy agriculture, kill livestock, and prevent access to marketplaces for both buyers and sellers; all of these result in financial losses. Cyclones that make landfall – moving from the ocean to over land – are often among the most powerful, although that is not always the case. An average of 86 tropical cyclones of tropical storm intensity form annually worldwide, with 47 reaching hurricane or typhoon strength, and 20 becoming intense tropical cyclones, super typhoons, or major hurricanes (at least of Category 3 intensity).
Africa.
In Africa, tropical cyclones can originate from tropical waves generated over the Sahara Desert, or otherwise strike the Horn of Africa and Southern Africa. Cyclone Idai in March 2019 hit central Mozambique, becoming the deadliest tropical cyclone on record in Africa, with 1,302 fatalities, and damage estimated at US$2.2 billion. Réunion island, located east of Southern Africa, experiences some of the wettest tropical cyclones on record. In January 1980, Cyclone Hyacinthe produced 6,083 mm (239.5 in) of rain over 15 days, the largest rainfall total recorded from a tropical cyclone.
Asia.
In Asia, tropical cyclones from the Indian and Pacific oceans regularly affect some of the most populated countries on Earth. In 1970, a cyclone struck Bangladesh, then known as East Pakistan, producing a storm surge that killed at least 300,000 people; this made it the deadliest tropical cyclone on record. In October 2019, Typhoon Hagibis struck the Japanese island of Honshu and inflicted US$15 billion in damage, making it the costliest storm on record in Japan. The islands that comprise Oceania, from Australia to French Polynesia, are routinely affected by tropical cyclones. In Indonesia, a cyclone struck the island of Flores in April 1973, killing 1,653 people, making it the deadliest tropical cyclone recorded in the Southern Hemisphere.
North and South America.
Atlantic and Pacific hurricanes regularly affect North America. In the United States, hurricanes Katrina in 2005 and Harvey in 2017 are the country's costliest ever natural disasters, with monetary damage estimated at US$125 billion. Katrina struck Louisiana and largely destroyed the city of New Orleans, while Harvey caused significant flooding in southeastern Texas after it dropped of rainfall; this was the highest rainfall total on record in the country.
The northern portion of South America experiences occasional tropical cyclones, with 173 fatalities from Tropical Storm Bret in August 1993. The South Atlantic Ocean is generally inhospitable to the formation of a tropical storm. However, in March 2004, Hurricane Catarina struck southeastern Brazil as the first hurricane on record in the South Atlantic Ocean.
Europe.
Europe is rarely affected by tropical cyclones; however, the continent regularly encounters storms after they have transitioned into extratropical cyclones. Only one tropical depression – Vince in 2005 – has struck Spain, and only one subtropical cyclone – Subtropical Storm Alpha in 2020 – has struck Portugal. Occasionally, there are tropical-like cyclones in the Mediterranean Sea.
Environmental effects.
Although cyclones take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they affect, as they may bring much-needed precipitation to otherwise dry regions. Their precipitation may also alleviate drought conditions by restoring soil moisture, though one study focused on the Southeastern United States suggested tropical cyclones did not offer significant drought recovery. Tropical cyclones also help maintain the global heat balance by moving warm, moist tropical air to the middle latitudes and polar regions, and by regulating the thermohaline circulation through upwelling. Research on Pacific cyclones has demonstrated that deeper layers of the ocean receive a heat transfer from these powerful storms. The storm surge and winds of hurricanes may be destructive to human-made structures, but they also stir up the waters of coastal estuaries, which are typically important fish breeding locales. Ecosystems, such as saltmarshes and mangrove forests, can be severely damaged or destroyed by tropical cyclones, which erode land and destroy vegetation. Tropical cyclones can cause harmful algae blooms to form in bodies of water by increasing the amount of nutrients available. Insect populations can decrease in both quantity and diversity after the passage of storms. Strong winds associated with tropical cyclones and their remnants are capable of felling thousands of trees, causing damage to forests.
When hurricanes surge upon shore from the ocean, salt is introduced to many freshwater areas and raises the salinity levels too high for some habitats to withstand. Some are able to cope with the salt and recycle it back into the ocean, but others can not release the extra surface water quickly enough or do not have a large enough freshwater source to replace it. Because of this, some species of plants and vegetation die due to the excess salt. In addition, hurricanes can carry toxins and acids onshore when they make landfall. The floodwater can pick up the toxins from different spills and contaminate the land that it passes over. These toxins are harmful to the people and animals in the area, as well as the environment around them. Tropical cyclones can cause oil spills by damaging or destroying pipelines and storage facilities. Similarly, chemical spills have been reported when chemical and processing facilities were damaged. Waterways have become contaminated with toxic levels of metals such as nickel, chromium, and mercury during tropical cyclones.
Tropical cyclones can have an extensive effect on geography, such as creating or destroying land. Cyclone Bebe increased the size of Tuvalu island, Funafuti Atoll, by nearly 20%. Hurricane Walaka destroyed the small East Island in 2018, eliminating habitat for the endangered Hawaiian monk seal as well as for threatened sea turtles and seabirds. Landslides frequently occur during tropical cyclones and can vastly alter landscapes; some storms are capable of causing hundreds to tens of thousands of landslides. Storms can erode coastlines over an extensive area and transport the sediment to other locations.
Observation and forecasting.
Observation.
Tropical cyclones have occurred around the world for millennia. Reanalyses and research are being undertaken to extend the historical record, through the usage of proxy data such as overwash deposits, beach ridges and historical documents such as diaries. Major tropical cyclones leave traces in overwash records and shell layers in some coastal areas, which have been used to gain insight into hurricane activity over the past thousands of years. Sediment records in Western Australia suggest an intense tropical cyclone in the 4th millennium BC.
Proxy records based on paleotempestological research have revealed that major hurricane activity along the Gulf of Mexico coast varies on timescales of centuries to millennia. In the year 957, a powerful typhoon struck southern China, killing around 10,000 people due to flooding. Spanish colonial records from Mexico described "tempestades" in 1730, although the official record for Pacific hurricanes only dates to 1949. In the south-west Indian Ocean, the tropical cyclone record goes back to 1848. In 2003, the Atlantic hurricane reanalysis project examined and analyzed the historical record of tropical cyclones in the Atlantic back to 1851, extending the existing database from 1886.
Before satellite imagery became available during the 20th century, many of these systems went undetected unless they impacted land or a ship encountered them by chance. Often in part because of the threat of hurricanes, many coastal regions had sparse population between major ports until the advent of automobile tourism; therefore, the most severe portions of hurricanes striking the coast may have gone unmeasured in some instances. The combined effects of ship destruction and remote landfall severely limit the number of intense hurricanes in the official record before the era of hurricane reconnaissance aircraft and satellite meteorology. Therefore, although the record shows a distinct increase in the number and strength of intense hurricanes, experts regard the early data as suspect. The ability of climatologists to make a long-term analysis of tropical cyclones is limited by the amount of reliable historical data.
Routine aircraft reconnaissance started in both the Atlantic and Western Pacific basins during the mid-1940s, providing ground-truth data; however, early flights were made only once or twice a day. Polar-orbiting weather satellites were first launched by the United States National Aeronautics and Space Administration in 1960 but were not declared operational until 1965. However, it took several years for some of the warning centers to take advantage of this new viewing platform and develop the expertise to associate satellite signatures with storm position and intensity.
Intense tropical cyclones pose a particular observation challenge, as they are a dangerous oceanic phenomenon, and weather stations, being relatively sparse, are rarely available on the site of the storm itself. In general, surface observations are available only if the storm is passing over an island or a coastal area, or if there is a nearby ship. Real-time measurements are usually taken in the periphery of the cyclone, where conditions are less catastrophic and its true strength cannot be evaluated. For this reason, there are teams of meteorologists that move into the path of tropical cyclones to help evaluate their strength at the point of landfall.
Tropical cyclones are tracked by weather satellites capturing visible and infrared images from space, usually at half-hour to quarter-hour intervals. As a storm approaches land, it can be observed by land-based Doppler weather radar. Radar plays a crucial role around landfall by showing a storm's location and intensity every several minutes. Other satellites provide information from the perturbations of GPS signals, providing thousands of snapshots per day and capturing atmospheric temperature, pressure, and moisture content.
In situ measurements, in real-time, can be taken by sending specially equipped reconnaissance flights into the cyclone. In the Atlantic basin, these flights are regularly flown by United States government hurricane hunters. These aircraft fly directly into the cyclone and take direct and remote-sensing measurements. The aircraft also launch GPS dropsondes inside the cyclone. These sondes measure temperature, humidity, pressure, and especially winds between flight level and the ocean's surface. A new era in hurricane observation began when a remotely piloted Aerosonde, a small drone aircraft, was flown through Tropical Storm Ophelia as it passed Virginia's eastern shore during the 2005 hurricane season. A similar mission was also completed successfully in the western Pacific Ocean.
Forecasting.
High-speed computers and sophisticated simulation software allow forecasters to produce computer models that predict tropical cyclone tracks based on the future position and strength of high- and low-pressure systems. Combining forecast models with increased understanding of the forces that act on tropical cyclones, as well as with a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades. However, scientists are not as skillful at predicting the intensity of tropical cyclones. The lack of improvement in intensity forecasting is attributed to the complexity of tropical systems and an incomplete understanding of factors that affect their development. New tropical cyclone position and forecast information is available at least every six hours from the various warning centers.
Geopotential height.
In meteorology, geopotential heights are used when creating forecasts and analyzing pressure systems. Geopotential heights represent an estimate of the real height of a pressure level above mean sea level. Geopotential heights used for weather forecasting are divided into several levels. The lowest geopotential height level is , which represents the lowest of the atmosphere. The moisture content, obtained from either the relative humidity or the precipitable water value, is used in creating forecasts for precipitation. The next level, , is at a height of ; 700 hPa is regarded as the highest point in the lower atmosphere. At this layer, both vertical movement and moisture levels are used to locate and create forecasts for precipitation. The middle level of the atmosphere is at or a height of . The 500 hPa level is used for measuring atmospheric vorticity, commonly known as the spin of air. The relative humidity is also analyzed at this height in order to establish where precipitation is likely to materialize. The next level occurs at or a height of . The top-most level is located at , which corresponds to a height of . Both the 200 and 300 hPa levels are mainly used to locate the jet stream.
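Purely as an illustrative summary (not an official convention), the levels named above and their typical uses can be collected in a short lookup; the lowest level, whose value is not given above, is omitted.

```python
# Illustrative summary of the pressure levels discussed above and the fields
# typically examined at each; the pairings simply restate the text.
LEVEL_USES_HPA = {
    700: "vertical motion and moisture, for precipitation forecasts",
    500: "vorticity and relative humidity",
    300: "jet stream location",
    200: "jet stream location",
}

for level, use in sorted(LEVEL_USES_HPA.items(), reverse=True):
    print(f"{level} hPa: {use}")
```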
Society and culture.
Preparations.
Ahead of the formal start of the season, politicians and weather forecasters, among others, urge people to prepare for the effects of a tropical cyclone. They prepare by determining their risk from the different types of weather tropical cyclones cause, checking their insurance coverage and emergency supplies, and determining where to evacuate to if needed. When a tropical cyclone develops and is forecast to impact land, each member nation of the World Meteorological Organization issues various watches and warnings to cover the expected effects. There are some exceptions, however, with the United States National Hurricane Center and Fiji Meteorological Service responsible for issuing or recommending warnings for other nations in their areas of responsibility.
An important decision in individual preparedness is determining if and when to evacuate an area that will be affected by a tropical cyclone. Tropical cyclone tracking charts allow people to track ongoing systems to form their own opinions regarding where the storms are going and whether or not they need to prepare for the system being tracked, including possible evacuation. This continues to be encouraged by the National Oceanic and Atmospheric Administration and National Hurricane Center.
Response.
Hurricane response is the disaster response after a hurricane. Activities performed by hurricane responders include assessment, restoration, and demolition of buildings; removal of debris and waste; repairs to land-based and maritime infrastructure; and public health services including search and rescue operations. Hurricane response requires coordination between federal, tribal, state, local, and private entities. According to the National Voluntary Organizations Active in Disaster, potential response volunteers should affiliate with established organizations and should not self-deploy, so that proper training and support can be provided to mitigate the danger and stress of response work.
Hurricane responders face many hazards. They may be exposed to chemical and biological contaminants, including stored chemicals, sewage, human remains, and mold growth encouraged by flooding, as well as asbestos and lead that may be present in older buildings. Common injuries arise from falls from heights, such as from a ladder, or falls on level surfaces; from electrocution in flooded areas, including from backfeed from portable generators; or from motor vehicle accidents. Long and irregular shifts may lead to sleep deprivation and fatigue, increasing the risk of injuries, and workers may experience mental stress associated with a traumatic incident. Additionally, heat stress is a concern, as workers are often exposed to hot and humid conditions, wear protective clothing and equipment, and perform physically demanding tasks.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\frac{v}{33\\ m/s}\\right)^2\\times\\left(\\frac{r}{96.6\\ km}\\right)\\,"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\int_{Vol} \\frac{1}{2}pu^2d_{v}\\,"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "u"
},
{
"math_id": 6,
"text": "d_v"
}
] | https://en.wikipedia.org/wiki?curid=8282374 |
828279 | NewDES | Block cipher
In cryptography, NewDES is a symmetric key block cipher. It was created in 1984–1985 by Robert Scott as a potential DES replacement.
Despite its name, it is not derived from DES and has quite a different structure. Its intended niche as a DES replacement has now mostly been filled by AES. The algorithm was revised with a modified key schedule in 1996 to counter a related-key attack; this version is sometimes referred to as NewDES-96.
In 2004, Scott posted some comments on sci.crypt reflecting on the motivation behind NewDES's design and what he might have done differently so as to make the cipher more secure.
Algorithm.
NewDES, unlike DES, has no bit-level permutations, making it easy to implement in software. All operations are performed on whole bytes. It is a product cipher, consisting of 17 rounds performed on a 64-bit data block and makes use of a 120-bit key.
In each round, subkey material is XORed with the 1-byte sub-blocks of data, then fed through an S-box, the output of which is then XORed with another sub-block of data. In total, 8 XORs are performed in each round. The S-box is derived from the United States Declaration of Independence (used as a nothing-up-my-sleeve number).
Each set of two rounds uses seven 1-byte subkeys, which are derived by splitting 56 bits of the key into bytes. The key is then rotated 56 bits for use in the next two rounds.
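To make the key-schedule description concrete, the following Python sketch derives seven one-byte subkeys per pair of rounds from a 120-bit key and rotates the key by 56 bits in between, as described above; the byte ordering, rotation direction, and number of round pairs shown are assumptions, and this is not a verified NewDES implementation.

```python
def newdes_style_subkeys(key_bytes, round_pairs):
    """key_bytes: 15 bytes (120 bits). For each pair of rounds, seven
    one-byte subkeys are taken from the front of the key, after which the
    key is rotated by 56 bits (7 bytes). Byte ordering and rotation
    direction are illustrative assumptions, not the verified NewDES spec."""
    assert len(key_bytes) == 15
    key = list(key_bytes)
    schedule = []
    for _ in range(round_pairs):
        schedule.append(bytes(key[:7]))   # seven 1-byte subkeys
        key = key[7:] + key[:7]           # rotate the 120-bit key by 56 bits
    return schedule

# Example with an arbitrary 15-byte key and an arbitrary number of round pairs.
for pair in newdes_style_subkeys(bytes(range(15)), round_pairs=3):
    print(pair.hex())
```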
Cryptanalysis.
Only a small amount of cryptanalysis has been published on NewDES. The designer showed that NewDES exhibits the full avalanche effect after seven rounds: every ciphertext bit depends on every plaintext bit and key bit.
NewDES has the same complementation property that DES has: namely, that if
formula_0
then
formula_1
where
formula_2
is the bitwise complement of "x". This means that the work factor for a brute force attack is reduced by a factor of 2. Eli Biham also noticed that changing a full byte in all the key and data bytes leads to another complementation property. This reduces the work factor by 2⁸.
Biham's related-key attack can break NewDES with 2³³ chosen-key chosen plaintexts, meaning that NewDES is not as secure as DES.
John Kelsey, Bruce Schneier, and David Wagner used related-key cryptanalysis to develop another attack on NewDES; it requires 2³² known plaintexts and one related key.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_K(P)=C,"
},
{
"math_id": 1,
"text": "E_{\\overline{K}}(\\overline{P})=\\overline{C},"
},
{
"math_id": 2,
"text": "\\overline{x}"
}
] | https://en.wikipedia.org/wiki?curid=828279 |
8284237 | Von Kármán constant | Constant in fluid dynamics
In fluid dynamics, the von Kármán constant (or Kármán's constant), named for Theodore von Kármán, is a dimensionless constant involved in the logarithmic law describing the distribution of the longitudinal velocity in the wall-normal direction of a turbulent fluid flow near a boundary with a no-slip condition. The equation for such boundary layer flow profiles is:
formula_0
where "u" is the mean flow velocity at height "z" above the boundary. The roughness height (also known as roughness length) "z0" is where formula_1 appears to go to zero. Further "κ" is the von Kármán constant being typically 0.41, and formula_2 is the friction velocity which depends on the shear stress "τw" at the boundary of the flow:
formula_3
with "ρ" the fluid density.
The Kármán constant is often used in turbulence modeling, for instance in boundary-layer meteorology to calculate fluxes of momentum, heat and moisture from the atmosphere to the land surface. It is considered to be a universal constant ("κ" ≈ 0.40).
Gaudio, Miglio and Dey argued, however, that the Kármán constant is nonuniversal in flows over mobile sediment beds.
In recent years the von Kármán constant has been subject to periodic scrutiny. Reviews (Foken, 2006; Hogstrom, 1988; Hogstrom, 1996) report values of "κ" between 0.35 and 0.42. The overall conclusion of over 18 studies is that "κ" is constant, close to 0.40. For incompressible and frictionless ("ideal") fluids, Baumert (2013) used Kolmogorov's classical ideas on turbulence to derive ideal values of a number of relevant constants of turbulent motions, among them the von Kármán constant as formula_4. | [
{
"math_id": 0,
"text": "u=\\frac{u_{\\star}}{\\kappa}\\ln\\frac{z}{z_0},"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "u_\\star"
},
{
"math_id": 3,
"text": "u_\\star = \\sqrt{\\frac{\\tau_w}{\\rho}},"
},
{
"math_id": 4,
"text": "\\kappa = 1/\\sqrt{2\\pi} \\approx 0.399 "
}
] | https://en.wikipedia.org/wiki?curid=8284237 |
828436 | QR code | Type of matrix barcode
A QR code (quick-response code) is a type of two-dimensional matrix barcode, invented in 1994 by the Japanese company Denso Wave for labelling automobile parts. It features black squares on a white background with fiducial markers, readable by imaging devices like cameras, and processed using Reed–Solomon error correction until the image can be appropriately interpreted. The required data are then extracted from patterns that are present in both the horizontal and the vertical components of the QR image.
Whereas a barcode is a machine-readable optical image that contains information specific to the labeled item, the QR code contains the data for a locator, an identifier, and web-tracking. To store data efficiently, QR codes use four standardized modes of encoding: (i) numeric, (ii) alphanumeric, (iii) byte or binary, and (iv) kanji. Compared to standard UPC barcodes, the QR labeling system was applied beyond the automobile industry because of faster reading of the optical image and greater data-storage capacity in applications such as product tracking, item identification, time tracking, document management, and general marketing.
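As an illustration of how these modes differ, the following sketch picks the densest single mode that can represent a payload; the numeric and alphanumeric character sets used are the standard ones, kanji detection is omitted, and real encoders may mix modes within one symbol.

```python
# Illustrative mode selection for the encoding modes listed above.
ALPHANUMERIC = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def choose_qr_mode(payload: str) -> str:
    """Pick the densest single mode that can represent the payload.
    Kanji mode and mixed-mode segmentation are ignored in this sketch."""
    if payload.isascii() and payload.isdigit():
        return "numeric"
    if all(ch in ALPHANUMERIC for ch in payload):
        return "alphanumeric"
    return "byte"

print(choose_qr_mode("0123456789"))                 # numeric
print(choose_qr_mode("HELLO WORLD"))                # alphanumeric
print(choose_qr_mode("https://example.com/path"))   # byte (lowercase letters)
```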
History.
The QR code system was invented in 1994, at the Denso Wave automotive products company, in Japan. The initial alternating-square design presented by the team of researchers, headed by Masahiro Hara, was influenced by the black counters and the white counters played on a Go board; the pattern of position detection was found and determined by applying the least-used ratio (1:1:3:1:1) in black and white areas on printed matter, which cannot be misidentified by an optical scanner. The functional purpose of the QR code system was to facilitate keeping track of the types and numbers of automobile parts, by replacing individually-scanned bar-code labels on each box of auto parts with a single label that contained the data of each label. The quadrangular configuration of the QR code system consolidated the data of the various bar-code labels with Kanji, Kana, and alphanumeric codes printed onto a single label.
Adoption.
QR codes are used in a much broader context, including both commercial tracking applications and convenience-oriented applications aimed at mobile phone users (termed mobile tagging). QR codes may be used to display text to the user, to open a webpage on the user's device, to add a vCard contact to the user's device, to open a Uniform Resource Identifier (URI), to connect to a wireless network, or to compose an email or text message. There are a great many QR code generators available as software or as online tools that are either free or require a paid subscription. The QR code has become one of the most-used types of two-dimensional code.
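For example, a minimal sketch using the third-party Python qrcode package (assumed to be installed along with Pillow) encodes a URL and writes the symbol to a PNG file:

```python
import qrcode  # third-party package; `pip install qrcode[pil]` assumed

# Encode a URL and save the symbol as an image file.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
qr.add_data("https://example.com/")
qr.make(fit=True)           # pick the smallest version that fits the data
img = qr.make_image(fill_color="black", back_color="white")
img.save("example-qr.png")
```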
During June 2011, 14 million American mobile users scanned a QR code or a barcode. Some 58% of those users scanned a QR or barcode from their homes, while 39% scanned from retail stores; 53% of the 14 million users were men between the ages of 18 and 34.
In 2022, 89 million people in the United States scanned a QR code using their mobile devices, up by 26 percent compared to 2020. The majority of QR code users used them to make payments or to access product and menu information.
In September 2020, a survey found that 18.8 percent of consumers in the United States and the United Kingdom strongly agreed that they had noticed an increase in QR code use since the then-active COVID-19-related restrictions had begun several months prior.
Standards.
Several standards cover the encoding of data as QR codes:
At the application layer, there is some variation between most of the implementations. Japan's NTT DoCoMo has established de facto standards for the encoding of URLs, contact information, and several other data types. The open-source "ZXing" project maintains a list of QR code data types.
Uses.
QR codes have become common in consumer advertising. Typically, a smartphone is used as a QR code scanner, reading the code and converting it to some useful form (such as a standard URL for a website, thereby obviating the need for a user to type it into a Web browser).
QR code has become a focus of advertising strategy, since it provides a way to access a brand's website more quickly than by manually entering a URL. Beyond mere convenience to the consumer, the importance of this capability is that it increases the conversion rate: the chance that contact with the advertisement will convert to a sale. It coaxes interested prospects further down the conversion funnel with little delay or effort, bringing the viewer to the advertiser's website immediately, whereas a longer and more targeted sales pitch may lose the viewer's interest.
Although initially used to track parts in vehicle manufacturing, QR codes are now used across a much wider range of applications. These include commercial tracking, warehouse stock control, entertainment and transport ticketing, product and loyalty marketing, and in-store product labeling. Examples in marketing include capturing a company's discounts and percentage-off offers with a mobile QR code decoder app, or storing a company's address and related information alongside its alphanumeric text data, as can be seen in telephone directory yellow pages.
They can also be used to store personal information for organizations. An example of this is the Philippines National Bureau of Investigation (NBI) where NBI clearances now come with a QR code. Many of these applications target mobile-phone users (via mobile tagging). Users may receive text, add a vCard contact to their device, open a URL, or compose an e-mail or text message after scanning QR codes. They can generate and print their own QR codes for others to scan and use by visiting one of several pay or free QR code-generating sites or apps. Google had an API, now deprecated, to generate QR codes, and apps for scanning QR codes can be found on nearly all smartphone devices.
QR codes storing addresses and URLs may appear in magazines, on signs, on buses, on business cards, or on almost any object about which users might want information. Users with a camera phone equipped with the correct reader application can scan the image of the QR code to display text and contact information, connect to a wireless network, or open a web page in the phone's browser. This act of linking from physical world objects is termed hardlinking or object hyperlinking. QR codes also may be linked to a location to track where a code has been scanned. Either the application that scans the QR code retrieves the geo information by using GPS and cell tower triangulation (aGPS) or the URL encoded in the QR code itself is associated with a location. In 2008, a Japanese stonemason announced plans to engrave QR codes on gravestones, allowing visitors to view information about the deceased, and family members to keep track of visits. Psychologist Richard Wiseman was one of the first authors to include QR codes in a book, in "" (2011). Microsoft Office and LibreOffice have a functionality to insert QR code into documents.
QR codes have been incorporated into currency. In June 2011, The Royal Dutch Mint ("Koninklijke Nederlandse Munt") issued the world's first official coin with a QR code to celebrate the centenary of its current building and premises. The coin can be scanned by a smartphone and originally linked to a special website with content about the historical event and design of the coin. In 2014, the Central Bank of Nigeria issued a 100-naira banknote to commemorate its centennial, the first banknote to incorporate a QR code in its design. When scanned with an internet-enabled mobile device, the code goes to a website that tells the centenary story of Nigeria.
In 2015, the Central Bank of the Russian Federation issued a 100-rubles note to commemorate the annexation of Crimea by the Russian Federation. It contains a QR code into its design, and when scanned with an internet-enabled mobile device, the code goes to a website that details the historical and technical background of the commemorative note. In 2017, the Bank of Ghana issued a 5-cedis banknote to commemorate 60 years of central banking in Ghana. It contains a QR code in its design which, when scanned with an internet-enabled mobile device, goes to the official Bank of Ghana website.
Credit card functionality is under development. In September 2016, the Reserve Bank of India (RBI) launched the eponymously named BharatQR, a common QR code jointly developed by all the four major card payment companies – National Payments Corporation of India that runs RuPay cards along with Mastercard, Visa, and American Express. It will also have the capability of accepting payments on the Unified Payments Interface (UPI) platform.
Augmented reality.
QR codes are used in some augmented reality systems to determine the positions of objects in 3-dimensional space.
Mobile operating systems.
QR codes can be used on various mobile device operating systems. Both Android and iOS devices can natively scan QR codes without downloading an external app. The camera app can scan and display the kind of QR code along with the link. These devices support URL redirection, which allows QR codes to send metadata to existing applications on the device. Many free apps are available with the ability to scan the codes and hard-link to an external URL.
Virtual stores.
QR codes have been used to establish "virtual stores", where a gallery of product information and QR codes is presented to the customer, e.g. on a train station wall. The customers scan the QR codes, and the products are delivered to their homes. This use started in South Korea and Argentina, but is currently expanding globally. Walmart, Procter & Gamble and Woolworths have already adopted the Virtual Store concept.
QR code payment.
QR codes can be used to store bank account information or credit card information, or they can be specifically designed to work with particular payment provider applications. There are several trial applications of QR code payments across the world. In developing countries, including China, India, and Bangladesh, QR code payment is a very popular and convenient method of making payments. Since Alipay designed a QR code payment method in 2011, mobile payment has been quickly adopted in China. As of 2018, around 83% of all payments in China were made via mobile payment.
In November 2012, QR code payments were deployed on a larger scale in the Czech Republic when an open format for payment information exchange – a Short Payment Descriptor – was introduced and endorsed by the Czech Banking Association as the official local solution for QR payments. In 2013, the European Payment Council provided guidelines for the EPC QR code enabling SCT initiation within the Eurozone.
In 2017, Singapore created a task force including government agencies such as the Monetary Authority of Singapore and Infocomm Media Development Authority to spearhead a system for e-payments using standardized QR code specifications. These specific dimensions are specialized for Singapore.
The e-payment system, Singapore Quick Response Code (SGQR), essentially merges various QR codes into one label that can be used by both parties in the payment system. This allows for various banking apps to facilitate payments between multiple customers and a merchant that displays a single QR code. The SGQR scheme is co-owned by MAS and IMDA. A single SGQR label contains e-payments and combines multiple payment options. People making purchases can scan the code and see which payment options the merchant accepts.
Website login.
QR codes can be used to log into websites: a QR code is shown on the login page on a computer screen, and when a registered user scans it with a verified smartphone, they will automatically be logged in. Authentication is performed by the smartphone, which contacts the server. Google deployed such a login scheme in 2012.
Mobile ticket.
There is a system whereby a QR code can be displayed on a device such as a smartphone and used as an admission ticket. Its use is common for J1 League and Nippon Professional Baseball tickets in Japan. In some cases, rights can be transferred via the Internet. In Latvia, QR codes can be scanned in Riga public transport to validate Rīgas Satiksme e-tickets.
Restaurant ordering.
Restaurants can present a QR code near the front door or at the table allowing guests to view an online menu, or even redirect them to an online ordering website or app, allowing them to order and/or possibly pay for their meal without having to use a cashier or waiter. QR codes can also link to daily or weekly specials that are not printed on the standardized menus, and enable the establishment to update the entire menu without needing to print copies. At table-serve restaurants, QR codes enable guests to order and pay for their meals without a waiter involved – the QR code contains the table number so servers know where to bring the food. This application has grown especially since the need for social distancing during the 2020 COVID-19 pandemic prompted reduced contact between service staff and customers.
Joining a Wi‑Fi network.
By specifying the SSID, encryption type, password/passphrase, and whether the SSID is hidden, mobile device users can quickly scan and join networks without having to manually enter the data. A MeCard-like format is supported by Android and iOS 11+.
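The payload is a plain text string; the Python sketch below assembles such a string following the commonly documented ZXing-style convention. The function name and escaping helper are illustrative choices, not part of any standard API.
def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA", hidden: bool = False) -> str:
    # Build the MeCard-like Wi-Fi payload encoded in the QR code.
    # Backslash-escape the characters that are special in this format.
    def esc(s: str) -> str:
        for ch in ('\\', ';', ',', '"', ':'):
            s = s.replace(ch, '\\' + ch)
        return s
    hidden_field = "H:true;" if hidden else ""
    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};{hidden_field};"

# Example with made-up credentials:
# wifi_qr_payload("CoffeeShop", "p@ss;word") -> WIFI:T:WPA;S:CoffeeShop;P:p@ss\;word;;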
Funerary use.
A QR code can link to an obituary and can be placed on a headstone. In 2008, Ishinokoe in Yamanashi Prefecture, Japan began to sell tombstones with QR codes produced by IT DeSign, where the code leads to a virtual grave site of the deceased. Other companies, such as Wisconsin-based Interactive Headstones, have also begun implementing QR codes into tombstones. In 2014, the Jewish Cemetery of La Paz in Uruguay began implementing QR codes for tombstones.
Electronic authentication.
QR codes can be used to generate time-based one-time passwords for electronic authentication.
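For illustration, such a QR code typically encodes an otpauth:// provisioning URI in the style popularized by Google Authenticator; the sketch below builds one, with the secret, account and issuer values being made-up examples rather than anything from this article.
from urllib.parse import quote

def totp_provisioning_uri(secret_b32: str, account: str, issuer: str) -> str:
    # The QR code encodes this URI; the authenticator app derives
    # time-based one-time passwords from the shared Base32 secret.
    label = quote(f"{issuer}:{account}")
    return f"otpauth://totp/{label}?secret={secret_b32}&issuer={quote(issuer)}"

# totp_provisioning_uri("JBSWY3DPEHPK3PXP", "alice@example.com", "ExampleCorp")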
Loyalty programs.
QR codes have been used by various retail outlets that have loyalty programs. Sometimes these programs are accessed with an app that is loaded onto a phone and includes a process triggered by a QR code scan. The QR codes for loyalty programs tend to be found printed on the receipt for a purchase or on the products themselves. Users in these schemes collect award points by scanning a code.
Counterfeit detection.
Serialised QR codes have been used by brands and governments to let consumers, retailers and distributors verify the authenticity of products and help detect counterfeits as part of a brand protection program. However, the security level of a regular QR code is limited, since QR codes printed on original products are easily reproduced on fake products, even though the analysis of data generated by QR code scanning can be used to detect counterfeiting and illicit activity. A higher security level can be attained by embedding a digital watermark or copy detection pattern into the image of the QR code. This makes the QR code more secure against counterfeiting attempts: a product displaying a copied code, still readable as a QR code but lacking the embedded security feature, can be identified by scanning it with the appropriate app.
The treaty regulating apostilles (documents bearing a seal of authenticity) has been updated to allow the issuance of digital apostilles by countries; a digital apostille is a PDF document with a cryptographic signature containing a QR code for a canonical URL of the original document, allowing users to verify the apostille from a printed version of the document.
Product tracing.
Different studies have been conducted to assess the effectiveness of QR codes as a means of conveying labelling information and their use as part of a food traceability system. In a field experiment, it was found that when provided free access to a smartphone with a QR code scanning app, 52.6% of participants would use it to access labelling information. A study conducted in South Korea showed that consumers appreciate QR codes used in a food traceability system, as they provide detailed information about the food, as well as information that helps them in their purchasing decisions. If QR codes are serialised, consumers can access a web page showing the supply chain for each ingredient, as well as information specific to each related batch, including meat processors and manufacturers, which helps address concerns about the origin of their food.
COVID-19 pandemic.
After the COVID-19 pandemic began spreading, QR codes began to be used as a "touchless" system to display information, show menus, or provide updated consumer information, especially in the hospitality industry. Restaurants replaced paper or laminated plastic menus with QR code decals on the table, which opened an online version of the menu. This prevented the need to dispose of single-use paper menus, or institute cleaning and sanitizing procedures for permanent menus after each use. Local television stations have also begun to utilize codes on local newscasts to allow viewers quicker access to stories or information involving the pandemic, including testing and immunization scheduling websites, or for links within stories mentioned in the newscasts overall.
In Australia, patrons were required to scan QR codes at shops, clubs, supermarkets, and other service and retail establishments on entry to assist contact tracing. Singapore, Taiwan, the United Kingdom, and New Zealand used similar systems.
QR codes are also present on COVID-19 vaccination certificates in places such as Canada and the EU (EU Digital COVID certificate), where they can be scanned to verify the information on the certificate.
Design.
Unlike the older, one-dimensional barcodes that were designed to be mechanically scanned by a narrow beam of light, a QR code is detected by a 2-dimensional digital image sensor and then digitally analyzed by a programmed processor. The processor locates the three distinctive squares at the corners of the QR code image, using a smaller square (or multiple squares) near the fourth corner to normalize the image for size, orientation, and angle of viewing. The small dots throughout the QR code are then converted to binary numbers and validated with an error-correcting algorithm.
Information capacity.
The amount of data that can be represented by a QR code symbol depends on the data type ("mode", or input character set), version (1, ..., 40, indicating the overall dimensions of the symbol, i.e. 4 × version number + 17 dots on each side), and error correction level. The maximum storage capacities occur for version 40 and error correction level L (low), denoted by 40-L:
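The relationship between version number and symbol size quoted above can be expressed directly; a minimal Python sketch (the function name is illustrative):
def qr_modules_per_side(version: int) -> int:
    # Side length, in modules, of a QR symbol of the given version (1..40).
    if not 1 <= version <= 40:
        raise ValueError("QR versions run from 1 to 40")
    return 4 * version + 17

# qr_modules_per_side(1) -> 21 (the 21x21 symbol), qr_modules_per_side(40) -> 177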
Here are some samples of QR codes:
Error correction.
QR codes use Reed–Solomon error correction over the finite field formula_0 or GF(2^8), the elements of which are encoded as bytes of 8 bits; the byte formula_1 with a standard numerical value formula_2 encodes the field element formula_3 where formula_4 is taken to be a primitive element satisfying formula_5. The primitive polynomial is formula_6, corresponding to the polynomial number 285, with initial root = 0.
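As a sketch of the field arithmetic involved (not taken from the standard's reference implementation), multiplication of two such bytes can be carried out by shift-and-XOR, reducing by the primitive polynomial 285 (0x11D) whenever the degree reaches 8:
def gf256_mul(a: int, b: int) -> int:
    # Multiply two elements of GF(2^8) as polynomials over GF(2),
    # reducing modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D, i.e. 285).
    result = 0
    while b:
        if b & 1:
            result ^= a      # addition in GF(2^8) is bitwise XOR
        b >>= 1
        a <<= 1
        if a & 0x100:        # degree reached 8: reduce
            a ^= 0x11D
    return result

# With alpha = 2 as the primitive element, gf256_mul(2, 0x80) == 0x1D,
# i.e. alpha^8 = alpha^4 + alpha^3 + alpha^2 + 1 as stated above.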
The Reed–Solomon code uses one of 37 different polynomials over formula_0, with degrees ranging from 7 to 68, depending on how many error correction bytes the code adds. The form of Reed–Solomon code used implies that these polynomials are all of the form formula_7. However, the rules for selecting the degree formula_8 are specific to the QR standard.
For example, the generator polynomial used for the Version 1 QR code (21×21), when 7 error correction bytes are used, is:
formula_9.
The highest power of formula_10 in the polynomial (the degree formula_8 of the polynomial) determines the number of error correction bytes. In this case, the degree is 7.
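Using the gf256_mul helper sketched above, this generator polynomial can be reproduced by multiplying out the product formula_7 term by term; the exp/log tables and function names below are illustrative:
# Exponential and logarithm tables for GF(2^8) with alpha = 2, built with gf256_mul.
EXP = [0] * 255
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x = gf256_mul(_x, 2)

def rs_generator_poly(n: int) -> list:
    # Coefficients, highest degree first, of prod_{i=0}^{n-1} (x - alpha^i).
    g = [1]
    for i in range(n):
        nxt = [0] * (len(g) + 1)
        for j, c in enumerate(g):
            nxt[j] ^= c                          # c * x
            nxt[j + 1] ^= gf256_mul(c, EXP[i])   # c * alpha^i (minus equals plus here)
        g = nxt
    return g

# [LOG[c] for c in rs_generator_poly(7)[1:]] -> [87, 229, 146, 149, 238, 102, 21],
# matching the exponents in the degree-7 generator polynomial shown above.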
When discussing the Reed–Solomon code phase there is some risk of confusion, in that the QR ISO/IEC standard uses the term "codeword" for the elements of formula_0, which with respect to the Reed–Solomon code are symbols, whereas it uses the term "block" for what with respect to the Reed–Solomon code are the codewords. The number of data versus error correction bytes within each block depends on (i) the version (side length) of the QR symbol and (ii) the error correction level, of which there are four. The higher the error correction level, the less storage capacity. The following table lists the approximate error correction capability at each of the four levels:
In larger QR symbols, the message is broken up into several Reed–Solomon code blocks. The block size is chosen so that no attempt is made at correcting more than 15 errors per block; this limits the complexity of the decoding algorithm. The code blocks are then interleaved together, making it less likely that localized damage to a QR symbol will overwhelm the capacity of any single block.
The Version 1 QR symbol with level L error correction, for example, consists of a single error correction block with a total of 26 code bytes (made of 19 message bytes and seven error correction bytes). It can correct up to 2 byte errors. Hence, this code is known as a (26,19,2) error correction code over GF(2^8).
Due to error correction, it is possible to create artistic QR codes with embellishments to make them more readable or attractive to the human eye, and to incorporate colors, logos, and other features into the QR code block; the embellishments are treated as errors, but the codes still scan correctly.
It is also possible to design artistic QR codes without reducing the error correction capacity by manipulating the underlying mathematical constructs. Image processing algorithms are also used to reduce errors in QR codes.
Encoding.
The format information records two things: the error correction level and the mask pattern used for the symbol. Masking is used to break up patterns in the data area that might confuse a scanner, such as large blank areas or misleading features that look like the locator marks. The mask patterns are defined on a grid that is repeated as necessary to cover the whole symbol. Modules corresponding to the dark areas of the mask are inverted. The 5-bit format information is protected from errors with a BCH code, and two complete copies are included in each QR symbol. A (15,5) triple-error-correcting BCH code over GF(2^4) is used, having the generator polynomial formula_11. It can correct up to 3 bit errors in the 15-bit format information. There are a total of 15 bits in this BCH code (10 bits are added for error correction). This 15-bit code is itself XORed with a fixed 15-bit mask pattern (101010000010010) to prevent an all-zero string.
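A Python sketch of this computation follows; the function name is illustrative, and the convention that error correction level L is indicated by the bits 01 follows common descriptions of the standard, so treat the details as an assumption rather than a reference implementation.
def qr_format_bits(ec_level_bits: int, mask_pattern: int) -> int:
    # 5 data bits: 2-bit error-correction indicator followed by 3-bit mask number.
    data = (ec_level_bits << 3) | mask_pattern
    # Append 10 BCH check bits: remainder of division by
    # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 (0x537) over GF(2).
    rem = data << 10
    for i in range(4, -1, -1):
        if rem & (1 << (i + 10)):
            rem ^= 0x537 << i
    # XOR with the fixed mask 101010000010010 to prevent an all-zero string.
    return ((data << 10) | rem) ^ 0b101010000010010

# e.g. format(qr_format_bits(0b01, 4), '015b') gives the 15-bit string
# placed (twice) in a symbol using error correction level L and mask pattern 4.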
The message dataset is placed from right to left in a zigzag pattern, as shown below. In larger symbols, this is complicated by the presence of the alignment patterns and the use of multiple interleaved error-correction blocks.
The general structure of a QR encoding is a sequence of segments, each introduced by a 4-bit mode indicator, with the payload length depending on the indicator mode (e.g. in byte mode the payload length is given by the character count field that follows the mode indicator).
Four-bit indicators are used to select the encoding mode and convey other information.
Encoding modes can be mixed as needed within a QR symbol (e.g., a URL with a long string of alphanumeric characters).
[ Mode Indicator][ Mode bitstream ] --> [ Mode Indicator][ Mode bitstream ] --> etc... --> [ 0000 End of message (Terminator) ]
After every indicator that selects an encoding mode is a length field that tells how many characters are encoded in that mode. The number of bits in the length field depends on the encoding and the symbol version.
Alphanumeric encoding mode stores a message more compactly than the byte mode can, but cannot store lower-case letters and has only a limited selection of punctuation marks, which are sufficient for rudimentary web addresses. Two characters are coded in an 11-bit value by this formula:
V = 45 × C1 + C2
This has the exception that the last character in an alphanumeric string with an odd length is read as a 6-bit value instead.
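A short Python sketch of this pairing scheme follows; the 45-character table uses the ordering commonly documented for the standard, and the mode indicator, character count field, and terminator are omitted.
ALPHANUMERIC = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def encode_alphanumeric(text: str) -> str:
    # Pairs of characters become 11-bit values 45*C1 + C2;
    # a trailing unpaired character becomes a 6-bit value.
    values = [ALPHANUMERIC.index(c) for c in text]
    bits = ""
    for i in range(0, len(values) - 1, 2):
        bits += format(45 * values[i] + values[i + 1], '011b')
    if len(values) % 2:
        bits += format(values[-1], '06b')
    return bits

# encode_alphanumeric("AC-42") -> '00111001110' + '11100111001' + '000010'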
Decoding example.
The following images offer more information about the QR code.
Variants.
Model 1.
"Model 1 QR code" is an older version of the specification. It is visually similar to the widely seen model 2 codes, but lacks alignment patterns. Differences are in the bottom right corner, and in the midsections of the bottom and right edges are additional functional regions.
Micro QR code.
Micro QR code is a smaller version of the QR code standard for applications where symbol size is limited. There are four different versions (sizes) of Micro QR codes: the smallest is 11×11 modules; the largest can hold 35 numeric characters, or 21 ASCII alphanumeric characters, or 15 bytes (128 bits).
Rectangular Micro QR Code.
Rectangular Micro QR Code (also known as rMQR Code) is a two-dimensional (2D) matrix barcode invented and standardized in 2022 by Denso Wave as ISO/IEC 23941. rMQR Code is designed as a rectangular variation of the QR code and has the same parameters and applications as the original QR code, but is better suited to rectangular areas, with a strongly elongated shape of up to roughly 19:1 width to height in the R7x139 version.
iQR code.
iQR code is an alternative to existing square QR codes developed by Denso Wave. iQR codes can be created in square or rectangular formations; this is intended for situations where a longer and narrower rectangular shape is more suitable, such as on cylindrical objects. iQR codes can fit the same amount of information in 30% less space. There are 61 versions of square iQR codes, and 15 versions of rectangular codes. For squares, the minimum size is 9 × 9 modules; rectangles have a minimum of 19 × 5 modules. iQR codes add error correction level S, which allows for 50% error correction. iQR Codes had not been given an ISO/IEC specification as of 2015, and only proprietary Denso Wave products could create or read iQR codes.
Secure QR code.
Secure Quick Response (SQR) code is a QR code that contains a "private data" segment after the terminator instead of the specified filler bytes "ec 11". This private data segment must be deciphered with an encryption key. This can be used to store private information and to manage a company's internal information.
Frame QR.
Frame QR is a QR code with a "canvas area" that can be flexibly used. In the center of this code is the canvas area, where graphics, letters, and more can be flexibly arranged, making it possible to lay out the code without losing the design of illustrations, photos, etc.
HCC2D.
Researchers have proposed a new High Capacity Colored 2-Dimensional (HCC2D) Code, which builds upon a QR code basis for preserving the QR robustness to distortions and uses colors for increasing data density (as of 2014 it was still in the prototyping phase). The HCC2D code specification is described in detail in Querini "et al." (2014), while techniques for color classification of HCC2D code cells are described in detail in Querini and Italiano (2014), which is an extended version of Querini and Italiano (2013).
Introducing colors into QR codes requires addressing additional issues. In particular, during QR code reading only the brightness information is taken into account, while HCC2D codes have to cope with chromatic distortions during the decoding phase. In order to ensure adaptation to chromatic distortions that arise in each scanned code, HCC2D codes make use of an additional field: the Color Palette Pattern. This is because color cells of a Color Palette Pattern are supposed to be distorted in the same way as color cells of the Encoding Region. Replicated color palettes are used for training machine-learning classifiers.
AQR.
Accessible QR is a type of QR code that combines a standard QR code with a dot-dash pattern positioned around one corner of the code to provide product information for people who are blind or partially sighted. The codes announce product categories and product details such as instructions, ingredients, safety warnings, and recycling information. The data is structured for the needs of users who are blind or partially sighted and offers larger text or audio output. A smartphone can read the codes from up to a metre away, activating accessibility features such as VoiceOver to announce product details.
License.
The use of QR code technology is freely licensed as long as users follow the standards for QR codes documented by JIS or ISO/IEC. Non-standardized codes may require special licensing.
Denso Wave owns a number of patents on QR code technology, but has chosen to exercise them in a limited fashion. In order to promote widespread usage of the technology Denso Wave chose to waive its rights to a key patent in its possession for "standardized" codes only. In the US, the granted QR code patent is 5726435, and in Japan 2938338, both of which have expired. The European Patent Office granted patent 0672994 to Denso Wave, which was then validated into French, UK, and German patents, all of which expired in March 2015.
The text "QR Code" itself is a registered trademark and wordmark of Denso Wave Incorporated. In UK, the trademark is registered as E921775, the term "QR Code", with a filing date of 3 September 1998. The UK version of the trademark is based on the Kabushiki Kaisha Denso (DENSO CORPORATION) trademark, filed as Trademark 000921775, the term "QR Code", on 3 September 1998 and registered on 16 December 1999 with the European Union OHIM (Office for Harmonization in the Internal Market).
The U.S. Trademark for the term "QR Code" is Trademark 2435991 and was filed on 29 September 1998 with an amended registration date of 13 March 2001, assigned to Denso Corporation. In South Korea, a trademark application filed on 18 November 2011 was refused on 20 March 2012, because the Korean Intellectual Property Office considered the phrase to have become genericized among South Koreans as a term for matrix barcodes in general.
Risks.
The only context in which common QR codes can carry executable data is the URL data type. These URLs may host JavaScript code, which can be used to exploit vulnerabilities in applications on the host system, such as the reader, the web browser, or the image viewer, since a reader will typically send the data to the application associated with the data type used by the QR code.
Even without software exploits, malicious QR codes combined with a permissive reader can still put a computer's contents and a user's privacy at risk. This practice is known as "attagging", a portmanteau of "attack tagging". Malicious codes are easily created and can be affixed over legitimate QR codes. On a smartphone, the reader's permissions may allow use of the camera, full Internet access, read/write contact data, GPS, read browser history, read/write local storage, and global system changes.
Risks include linking to dangerous web sites with browser exploits; enabling the microphone, camera, or GPS and streaming those feeds to a remote server; analysis of sensitive data (passwords, files, contacts, transactions); sending email, SMS, or IM messages, or packets for DDoS attacks as part of a botnet; corrupting privacy settings; stealing identity; and even containing malicious logic themselves, such as JavaScript or a virus. These actions could occur in the background while the user sees only the reader opening a seemingly harmless web page. In Russia, a malicious QR code caused phones that scanned it to send premium texts at a fee of $6 each. QR codes have also been linked to scams in which stickers are placed on parking meters, posing as quick payment options, as seen in Austin, San Antonio and Boston, among other cities across the United States and Australia.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{F}_{256}"
},
{
"math_id": 1,
"text": "b_7b_6b_5b_4b_3b_2b_1b_0"
},
{
"math_id": 2,
"text": "\\textstyle\\sum_{i=0}^7 b_i 2^i"
},
{
"math_id": 3,
"text": "\\textstyle\\sum_{i=0}^7 b_i \\alpha^i"
},
{
"math_id": 4,
"text": " \\alpha \\in \\mathbb{F}_{256}"
},
{
"math_id": 5,
"text": "\\alpha^8 + \\alpha^4 + \\alpha^3 + \\alpha^2 + 1 = 0"
},
{
"math_id": 6,
"text": "x^8 + x^4 + x^3 + x^2 + 1 "
},
{
"math_id": 7,
"text": "\\prod_{i=0}^{n-1} (x - \\alpha^i)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "g(x)= x^7+\\alpha^{87}x^6+\\alpha^{229}x^5+\\alpha^{146}x^4+\\alpha^{149}x^3+\\alpha^{238}x^2+\\alpha^{102}x+\\alpha^{21}"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "g(x)=x^{10}+x^8+x^5+x^4+x^2+x+1"
}
] | https://en.wikipedia.org/wiki?curid=828436 |
8284591 | Mallows's Cp | In statistics, Mallows's formula_0, named for Colin Lingwood Mallows, is used to assess the fit of a regression model that has been estimated using ordinary least squares. It is applied in the context of model selection, where a number of predictor variables are available for predicting some outcome, and the goal is to find the best model involving a subset of these predictors. A small value of formula_1 means that the model is relatively precise.
Mallows's "Cp" has been shown to be equivalent to Akaike information criterion in the special case of Gaussian linear regression.
Definition and properties.
Mallows's "Cp" addresses the issue of overfitting, in which model selection statistics such as the residual sum of squares always get smaller as more variables are added to a model. Thus, if we aim to select the model giving the smallest residual sum of squares, the model including all variables would always be selected. Instead, the "Cp" statistic calculated on a sample of data estimates the sum squared prediction error (SSPE) as its population target
formula_2
where formula_3 is the fitted value from the regression model for the "i"th case, "E"("Y""i" | "X""i") is the expected value for the "i"th case, and σ2 is the error variance (assumed constant across the cases). The mean squared prediction error (MSPE) will not automatically get smaller as more variables are added. The optimum model under this criterion is a compromise influenced by the sample size, the effect sizes of the different predictors, and the degree of collinearity between them.
If "P" regressors are selected from a set of "K" > "P", the "Cp" statistic for that particular set of regressors is defined as:
formula_4
where formula_5 is the error sum of squares for the model with "P" regressors, formula_6 is the residual mean square after regression on the complete set of "K" regressors and serves as an estimate of σ2, and "N" is the sample size.
Alternative definition.
Given a linear model such as:
formula_7
where formula_8 are the regression coefficients, formula_9 are the predictor variables, and formula_10 is the error term.
An alternate version of "Cp" can also be defined as:
formula_11
where RSS is the residual sum of squares of the candidate model with "p" predictors, formula_12 is an estimate of the error variance, and "n" is the sample size.
Note that this version of the "Cp" does not give equivalent values to the earlier version, but the model with the smallest "Cp" from this definition will also be the same model with the smallest "Cp" from the earlier definition.
Limitations.
The "Cp" criterion suffers from two main limitations
Practical use.
The "Cp" statistic is often used as a stopping rule for various forms of stepwise regression. Mallows proposed the statistic as a criterion for selecting among many alternative subset regressions. Under a model not suffering from appreciable lack of fit (bias), "Cp" has expectation nearly equal to "P"; otherwise the expectation is roughly "P" plus a positive bias term. Nevertheless, even though it has expectation greater than or equal to "P", there is nothing to prevent "Cp" < "P" or even "Cp" < 0 in extreme cases. It is suggested that one should choose a subset that has "Cp" approaching "P", from above, for a list of subsets ordered by increasing "P". In practice, the positive bias can be adjusted for by selecting a model from the ordered list of subsets, such that "Cp" < 2"P".
Since the sample-based "Cp" statistic is an estimate of the MSPE, using "Cp" for model selection does not completely guard against overfitting. For instance, it is possible that the selected model will be one in which the sample "Cp" was a particularly severe underestimate of the MSPE.
Model selection statistics such as "Cp" are generally not used blindly, but rather information about the field of application, the intended use of the model, and any known biases in the data are taken into account in the process of model selection.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol{C_p}"
},
{
"math_id": 1,
"text": "C_p"
},
{
"math_id": 2,
"text": "\nE\\sum_i (\\hat{Y}_i - E(Y_i\\mid X_i))^2/\\sigma^2,\n"
},
{
"math_id": 3,
"text": "\\hat{Y}_i"
},
{
"math_id": 4,
"text": " C_p={SSE_p \\over S^2} - N + 2(P+1), "
},
{
"math_id": 5,
"text": "SSE_p = \\sum_{i=1}^N(Y_i-\\hat{Y}_{pi})^2"
},
{
"math_id": 6,
"text": " {1 \\over N-K} \\sum_{i=1}^N (Y_i- \\hat{Y}_i)^2 "
},
{
"math_id": 7,
"text": " Y = \\beta_0 + \\beta_1X_1+\\cdots+\\beta_pX_p + \\varepsilon "
},
{
"math_id": 8,
"text": " \\beta_0,\\ldots,\\beta_p "
},
{
"math_id": 9,
"text": " X_1,\\ldots,X_p "
},
{
"math_id": 10,
"text": " \\varepsilon "
},
{
"math_id": 11,
"text": " C_p=\\frac{1}{n}(\\operatorname{RSS} + 2p\\hat{\\sigma}^2) "
},
{
"math_id": 12,
"text": " \\hat{\\sigma}^2 "
}
] | https://en.wikipedia.org/wiki?curid=8284591 |
8286275 | Suspension polymerization | Polymerization reaction among monomers suspended in a liquid
<templatestyles src="Template:Quote_box/styles.css" />
IUPAC definition
Polymerization in which polymer is formed in monomer, or monomer-solvent droplets in a "continuous phase" that is a nonsolvent for both the monomer and the formed polymer.
"Note 1": In suspension polymerization, the initiator is located mainly in the monomer phase.
"Note 2": Monomer or monomer-solvent droplets in suspension polymerization havediameters usually exceeding 10 μm.
In polymer chemistry, suspension polymerization is a heterogeneous radical polymerization process that uses mechanical agitation to mix a monomer or mixture of monomers in a liquid phase, such as water, while the monomers polymerize, forming spheres of polymer. The monomer droplets (size of the order 10–1000 μm) are suspended in the liquid phase. The individual monomer droplets can be considered as undergoing bulk polymerization. The liquid phase outside these droplets helps conduct heat away and thus tempers the rise in temperature.
While choosing a liquid phase for suspension polymerization, low viscosity, high thermal conductivity, and little variation of viscosity with temperature are generally preferred. The primary advantage of suspension polymerization over other types of polymerization is that a higher degree of polymerization can be achieved without monomer boil-off. During this process, the monomer droplets may stick to each other and cause creaming in the solution. To prevent this, the mixture is carefully stirred or a protective colloid is often added. One of the most common suspending agents is polyvinyl alcohol (PVA). Usually, the monomer conversion is completed, unlike in bulk polymerization, and the initiator used is monomer-soluble.
This process is used in the production of many commercial resins, including polyvinyl chloride (PVC), a widely-used plastic, styrene resins including polystyrene, expanded polystyrene, and high-impact polystyrene, as well as poly(styrene-acrylonitrile) and poly(methyl methacrylate).
Particle properties.
Suspension polymerization is divided into two main types, depending on the morphology of the particles that result. In bead polymerization, the polymer is soluble in its monomer and the result is a smooth, translucent bead. In powder polymerization, the polymer is not soluble in its monomer and the resultant bead will be porous and irregular. The morphology of the polymer can be changed by adding a monomer diluent, an inert liquid that is insoluble with the liquid matrix. The diluent changes the solubility of the polymer in the monomer and gives a measure of control over the porosity of the resulting polymer.
The polymer beads that result can range in size from 100 nm to 5 mm. The size is controlled by the stirring speed, the volume fraction of monomer, the concentration and identity of the stabilizers used, and the viscosities of the different components. The following empirically derived equation summarizes some of these interactions:
formula_0
d is the average particle size, k includes parameters related to the reaction vessel design, Dv is the reaction vessel diameter, Ds is the diameter of the stirrer, R is the volume ratio of the monomer to the liquid matrix, N is the stirring speed, νm and νl are the viscosity of the monomer phase and liquid matrix respectively, ε is the interfacial tension of the two phases, and Cs is the concentration of stabilizer. The most common way to control the particle size is to change the stirring speed.
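Since the constant k depends on the vessel design, the relation is most useful for comparing two sets of operating conditions; a small Python sketch of such a comparison follows, with variable names mirroring the symbols above and the numerical values being made up for illustration.
def particle_size_factor(Dv, Ds, R, N, nu_m, nu_l, eps, Cs):
    # The empirical relation up to the vessel-dependent constant k;
    # only ratios between two operating conditions are meaningful.
    return (Dv * R * nu_m * eps) / (Ds * N * nu_l * Cs)

# Doubling the stirring speed N, all else fixed, halves the factor and hence
# the expected average bead size:
# particle_size_factor(1.0, 0.3, 0.4, 600, 1e-3, 1e-3, 0.03, 0.01) is twice
# particle_size_factor(1.0, 0.3, 0.4, 1200, 1e-3, 1e-3, 0.03, 0.01)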
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bar{d} = k{D_v \\cdot R \\cdot \\nu_m \\cdot \\epsilon \\over D_s \\cdot N \\cdot \\nu_l \\cdot C_s}"
}
] | https://en.wikipedia.org/wiki?curid=8286275 |
8286632 | Digital root | Repeated sum of a number's digits
The digital root (also repeated digital sum) of a natural number in a given radix is the (single digit) value obtained by an iterative process of summing digits, on each iteration using the result from the previous iteration to compute a digit sum. The process continues until a single-digit number is reached. For example, in base 10, the digital root of the number 12345 is 6 because the sum of the digits in the number is 1 + 2 + 3 + 4 + 5 = 15, then the addition process is repeated again for the resulting number 15, so that the sum of 1 + 5 equals 6, which is the digital root of that number. In base 10, this is equivalent to taking the remainder upon division by 9 (except when the digital root is 9, where the remainder upon division by 9 will be 0), which allows it to be used as a divisibility rule.
Formal definition.
Let formula_0 be a natural number. For base formula_1, we define the digit sum formula_2 to be the following:
formula_3
where formula_4 is the number of digits in the number in base formula_5, and
formula_6
is the value of each digit of the number. A natural number formula_0 is a digital root if it is a fixed point for formula_7, which occurs if formula_8.
All natural numbers formula_0 are preperiodic points for formula_7, regardless of the base. This is because if formula_9, then
formula_10
and therefore
formula_11
because formula_1.
If formula_12, then trivially
formula_8
Therefore, the only possible digital roots are the natural numbers formula_13, and there are no cycles other than the fixed points of formula_13.
Example.
In base 12, 8 is the additive digital root of the base 10 number 3110, as for formula_14
formula_15
formula_16
formula_17
formula_18
formula_19
This process shows that 3110 is 1972 in base 12. Now for formula_20
formula_21
formula_22
formula_23
shows that 19 is 17 in base 12. And as 8 is a 1-digit number in base 12,
formula_24.
Direct formulas.
We can define the digit root directly for base formula_1 formula_25 in the following ways:
Congruence formula.
The formula in base formula_5 is:
formula_26
or,
formula_27
In base 10, the corresponding sequence is (sequence in the OEIS).
The digital root is the value modulo formula_28 because formula_29 and thus formula_30 So regardless of the position formula_31 of digit formula_32, formula_33, which explains why digits can be meaningfully added. Concretely, for a three-digit number formula_34,
formula_35
To obtain the modular value with respect to other numbers formula_36, one can take weighted sums, where the weight on the formula_31-th digit corresponds to the value of formula_37. In base 10, this is simplest for formula_38, where all digits other than the units digit vanish modulo formula_36 (since 2 and 5 divide powers of 10), which corresponds to the familiar fact that the divisibility of a decimal number with respect to 2, 5, and 10 can be checked by its last digit.
Also of note is the modulus formula_39. Since formula_40 and thus formula_41 taking the "alternating" sum of digits yields the value modulo formula_42.
Using the floor function.
It helps to see the digital root of a positive integer as the position it holds with respect to the largest multiple of formula_43 less than the number itself. For example, in base 6 the digital root of 11 is 2, which means that 11 is the second number after formula_44. Likewise, in base 10 the digital root of 2035 is 1, which means that formula_45. If a number produces a digital root of exactly formula_43, then the number is a multiple of formula_43.
With this in mind the digital root of a positive integer formula_0 may be defined by using floor function formula_46, as
formula_47
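The closed forms above translate directly into code; the one-line Python sketch below is an alternative to the iterative digit-summing shown in the programming example further down.
def digital_root_direct(n: int, b: int) -> int:
    # Closed form: 0 for n = 0, otherwise 1 + ((n - 1) mod (b - 1)).
    return 0 if n == 0 else 1 + (n - 1) % (b - 1)

# digital_root_direct(12345, 10) -> 6 and digital_root_direct(3110, 12) -> 8,
# matching the worked examples above.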
Additive persistence.
The additive persistence counts how many times we must sum its digits to arrive at its digital root.
For example, the additive persistence of 2718 in base 10 is 2: first we find that 2 + 7 + 1 + 8 = 18, then that 1 + 8 = 9.
There is no limit to the additive persistence of a number in a number base formula_5. Proof: For a given number formula_0, the persistence of the number consisting of formula_0 repetitions of the digit 1 is 1 higher than that of formula_0. The smallest numbers of additive persistence 0, 1, ... in base 10 are:
0, 10, 19, 199, 19 999 999 999 999 999 999 999, ... (sequence in the OEIS)
The next number in the sequence (the smallest number of additive persistence 5) is 2 × 10^(2×(10^22 − 1)/9) − 1 (that is, 1 followed by 2 222 222 222 222 222 222 222 nines). For any fixed base, the sum of the digits of a number is proportional to its logarithm; therefore, the additive persistence is proportional to the iterated logarithm.
Programming example.
The example below implements the digit sum described in the definition above to search for digital roots and additive persistences in Python.
def digit_sum(x: int, b: int) -> int:
    # Sum of the base-b digits of x.
    total = 0
    while x > 0:
        total = total + (x % b)
        x = x // b
    return total

def digital_root(x: int, b: int) -> int:
    # Iterate the digit sum until a previously seen value (the fixed point) recurs.
    seen = set()
    while x not in seen:
        seen.add(x)
        x = digit_sum(x, b)
    return x

def additive_persistence(x: int, b: int) -> int:
    # Count how many digit-sum iterations are needed to reach the digital root.
    seen = set()
    while x not in seen:
        seen.add(x)
        x = digit_sum(x, b)
    return len(seen) - 1
In popular culture.
Digital roots are used in Western numerology, but certain numbers deemed to have occult significance (such as 11 and 22) are not always completely reduced to a single digit.
Digital roots form an important mechanic in the visual novel adventure game "Nine Hours, Nine Persons, Nine Doors".
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "b > 1"
},
{
"math_id": 2,
"text": "F_{b} : \\mathbb{N} \\rightarrow \\mathbb{N}"
},
{
"math_id": 3,
"text": "F_{b}(n) = \\sum_{i=0}^{k - 1} d_i"
},
{
"math_id": 4,
"text": "k = \\lfloor \\log_{b}{n} \\rfloor + 1"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "d_i = \\frac{n \\bmod{b^{i+1}} - n \\bmod b^i}{b^i}"
},
{
"math_id": 7,
"text": "F_{b}"
},
{
"math_id": 8,
"text": "F_{b}(n) = n"
},
{
"math_id": 9,
"text": "n \\geq b"
},
{
"math_id": 10,
"text": "n = \\sum_{i=0}^{k - 1} d_i b^i"
},
{
"math_id": 11,
"text": "F_{b}(n) = \\sum_{i=0}^{k - 1} d_i < \\sum_{i=0}^{k - 1} d_i b^i = n"
},
{
"math_id": 12,
"text": "n < b"
},
{
"math_id": 13,
"text": "0 \\leq n < b"
},
{
"math_id": 14,
"text": "n = 3110"
},
{
"math_id": 15,
"text": "d_0 = \\frac{3110 \\bmod{12^{0+1}} - 3110 \\bmod 12^0}{12^0} = \\frac{3110 \\bmod{12} - 3110 \\bmod 1}{1} = \\frac{2 - 0}{1} = \\frac{2}{1} = 2"
},
{
"math_id": 16,
"text": "d_1 = \\frac{3110 \\bmod{12^{1+1}} - 3110 \\bmod 12^1}{12^1} = \\frac{3110 \\bmod{144} - 3110 \\bmod 12}{12} = \\frac{86 - 2}{12} = \\frac{84}{12} = 7"
},
{
"math_id": 17,
"text": "d_2 = \\frac{3110 \\bmod{12^{2+1}} - 3110 \\bmod 12^2}{12^2} = \\frac{3110 \\bmod{1728} - 3110 \\bmod 144}{144} = \\frac{1382 - 86}{144} = \\frac{1296}{144} = 9"
},
{
"math_id": 18,
"text": "d_3 = \\frac{3110 \\bmod{12^{3+1}} - 3110 \\bmod 12^3}{12^3} = \\frac{3110 \\bmod{20736} - 3110 \\bmod 1728}{1728} = \\frac{3110 - 1382}{1728} = \\frac{1728}{1728} = 1"
},
{
"math_id": 19,
"text": "F_{12}(3110) = \\sum_{i=0}^{4 - 1} d_i = 2 + 7 + 9 + 1 = 19"
},
{
"math_id": 20,
"text": "F_{12}(3110) = 19"
},
{
"math_id": 21,
"text": "d_0 = \\frac{19 \\bmod{12^{0+1}} - 19 \\bmod 12^0}{12^0} = \\frac{19 \\bmod{12} - 19 \\bmod 1}{1} = \\frac{7 - 0}{1} = \\frac{7}{1} = 7"
},
{
"math_id": 22,
"text": "d_1 = \\frac{19 \\bmod{12^{1+1}} - 19 \\bmod 12^1}{12^1} = \\frac{19 \\bmod{144} - 19 \\bmod 12}{12} = \\frac{19 - 7}{12} = \\frac{12}{12} = 1"
},
{
"math_id": 23,
"text": "F_{12}(19) = \\sum_{i=0}^{2 - 1} d_i = 1 + 7 = 8"
},
{
"math_id": 24,
"text": "F_{12}(8) = 8"
},
{
"math_id": 25,
"text": "\\operatorname{dr}_{b} : \\mathbb{N} \\rightarrow \\mathbb{N}"
},
{
"math_id": 26,
"text": " \\operatorname{dr}_{b}(n) = \n\\begin{cases}\n0 & \\mbox{if}\\ n = 0, \\\\ \nb - 1 & \\mbox{if}\\ n \\neq 0,\\ n\\ \\equiv 0 \\pmod{(b - 1)},\\\\ \nn \\bmod{(b - 1)} & \\mbox{if}\\ n \\not\\equiv 0 \\pmod{(b - 1)}\n\\end{cases}\n"
},
{
"math_id": 27,
"text": " \\operatorname{dr}_{b}(n) = \n\\begin{cases}\n0 & \\mbox{if}\\ n = 0, \\\\ \n1\\ +\\ ((n-1) \\bmod{(b - 1)}) & \\mbox{if}\\ n \\neq 0.\n\\end{cases}\n"
},
{
"math_id": 28,
"text": "(b - 1)"
},
{
"math_id": 29,
"text": "b \\equiv 1 \\pmod{(b - 1)},"
},
{
"math_id": 30,
"text": "b^i \\equiv 1^i \\equiv 1 \\pmod{(b - 1)}."
},
{
"math_id": 31,
"text": "i"
},
{
"math_id": 32,
"text": "d_i"
},
{
"math_id": 33,
"text": "d_i b^i\\equiv d_i \\pmod{(b-1)}"
},
{
"math_id": 34,
"text": "n = d_2 b^2 + d_1 b^1 + d_0 b^0"
},
{
"math_id": 35,
"text": "\\operatorname{dr}_{b}(n) \\equiv d_2 b^2 + d_1 b^1 + d_0 b^0 \\equiv d_2 (1) + d_1 (1) + d_0 (1) \\equiv d_2 + d_1 + d_0 \\pmod{(b - 1)}."
},
{
"math_id": 36,
"text": "m"
},
{
"math_id": 37,
"text": "b^i \\bmod{m}"
},
{
"math_id": 38,
"text": "m=2, 5,\\text{ and }10"
},
{
"math_id": 39,
"text": "m = b + 1"
},
{
"math_id": 40,
"text": "b \\equiv -1 \\pmod{(b + 1)},"
},
{
"math_id": 41,
"text": "b^2 \\equiv (-1)^2 \\equiv 1 \\pmod{(b + 1)},"
},
{
"math_id": 42,
"text": "(b + 1)"
},
{
"math_id": 43,
"text": "b - 1"
},
{
"math_id": 44,
"text": "6 - 1 = 5"
},
{
"math_id": 45,
"text": "2035 - 1 = 2034|9"
},
{
"math_id": 46,
"text": "\\lfloor x\\rfloor "
},
{
"math_id": 47,
"text": "\\operatorname{dr}_{b}(n)=n-(b - 1)\\left\\lfloor\\frac{n-1}{b - 1}\\right\\rfloor."
},
{
"math_id": 48,
"text": "a_1 + a_2"
},
{
"math_id": 49,
"text": "a_1"
},
{
"math_id": 50,
"text": "a_2"
},
{
"math_id": 51,
"text": "\\operatorname{dr}_{b}(a_1 + a_2) = \\operatorname{dr}_{b}(\\operatorname{dr}_{b}(a_1)+\\operatorname{dr}_{b}(a_2))."
},
{
"math_id": 52,
"text": "a_1 - a_2"
},
{
"math_id": 53,
"text": "\\operatorname{dr}_{b}(a_1 - a_2) \\equiv (\\operatorname{dr}_{b}(a_1)-\\operatorname{dr}_{b}(a_2)) \\pmod{(b - 1)}."
},
{
"math_id": 54,
"text": "-n"
},
{
"math_id": 55,
"text": "\\operatorname{dr}_{b}(-n) \\equiv -\\operatorname{dr}_{b}(n) \\bmod{b - 1}."
},
{
"math_id": 56,
"text": "a_1 \\cdot a_2"
},
{
"math_id": 57,
"text": "\\operatorname{dr}_{b}(a_1 a_2) = \\operatorname{dr}_{b}(\\operatorname{dr}_{b}(a_1)\\cdot\\operatorname{dr}_{b}(a_2) )."
}
] | https://en.wikipedia.org/wiki?curid=8286632 |
8286726 | Barabási–Albert model | Scale-free network generation algorithm
The Barabási–Albert (BA) model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Several natural and human-made systems, including the Internet, the World Wide Web, citation networks, and some social networks are thought to be approximately scale-free and certainly contain a few nodes (called hubs) with unusually high degree as compared to the other nodes of the network. The BA model tries to explain the existence of such nodes in real networks. The algorithm is named for its inventors Albert-László Barabási and Réka Albert.
Concepts.
Many observed networks (at least approximately) fall into the class of scale-free networks, meaning that they have power-law (or scale-free) degree distributions, while random graph models such as the Erdős–Rényi (ER) model and the Watts–Strogatz (WS) model do not exhibit power laws. The Barabási–Albert model is one of several proposed models that generate scale-free networks. It incorporates two important general concepts: growth and preferential attachment. Both growth and preferential attachment exist widely in real networks.
Growth means that the number of nodes in the network increases over time.
Preferential attachment means that the more connected a node is, the more likely it is to receive new links. Nodes with a higher degree have a stronger ability to grab links added to the network. Intuitively, the preferential attachment can be understood if we think in terms of social networks connecting people. Here a link from A to B means that person A "knows" or "is acquainted with" person B. Heavily linked nodes represent well-known people with lots of relations. When a newcomer enters the community, they are more likely to become acquainted with one of those more visible people rather than with a relative unknown. The BA model was proposed by assuming that in the World Wide Web, new pages link preferentially to hubs, i.e. very well known sites such as Google, rather than to pages that hardly anyone knows. If someone selects a new page to link to by randomly choosing an existing link, the probability of selecting a particular page would be proportional to its degree. The BA model claims that this explains the preferential attachment probability rule.
The later Bianconi–Barabási model addresses a limitation of this picture, namely that in the pure BA model the earliest nodes inevitably accumulate the most links, by introducing a "fitness" parameter that allows newer but "fitter" nodes to overtake older ones.
Preferential attachment is an example of a positive feedback cycle where initially random variations (one node initially having more links or having started accumulating links earlier than another) are automatically reinforced, thus greatly magnifying differences. This is also sometimes called the Matthew effect, "the rich get richer". See also autocatalysis.
Algorithm.
The only parameter in the BA model is formula_0, a positive integer. The network is initialized with formula_1 nodes.
At each step, add one new node, then sample formula_0 existing vertices from the network, with a probability that is proportional to the number of links that the existing nodes already have (The original papers did not specify how to handle cases where the same existing node is chosen multiple times.). Formally, the probability formula_2 that the new node is connected to node formula_3 is
formula_4
where formula_5 is the degree of node formula_3 and the sum is made over all pre-existing nodes formula_6 (i.e. the denominator results in twice the current number of edges in the network). This step can be performed by first uniformly sampling one edge, then sampling one of the two vertices on the edge.
Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a "preference" to attach themselves to the already heavily linked nodes.
Properties.
Degree distribution.
The degree distribution resulting from the BA model is scale free, in particular, it is a power law of the form
formula_8
Hirsch index distribution.
The h-index or Hirsch index distribution was shown to also be scale free and was proposed as the lobby index, to be used as a centrality measure
formula_9
Furthermore, an analytic result for the density of nodes with h-index 1 can be obtained in the case where formula_7
formula_10
Node degree correlations.
Correlations between the degrees of connected nodes develop spontaneously in the BA model because of the way the network evolves. The probability, formula_11, of finding a link that connects a node of degree formula_12 to an ancestor node of degree formula_13 in the BA model for the special case of formula_14 (BA tree) is given by
formula_15
This confirms the existence of degree correlations, because if the distributions were uncorrelated, we would get formula_16.
For general formula_17, the fraction of links who connect a node of degree formula_12 to a node of degree formula_13 is
formula_18
Also, the nearest-neighbor degree distribution formula_19, that is, the degree distribution of the neighbors of a node with degree formula_20, is given by
formula_21
In other words, if we select a node with degree formula_22, and then select one of its neighbors randomly, the probability that this randomly selected neighbor will have degree formula_23 is given by the expression formula_24 above.
Clustering coefficient.
An analytical result for the clustering coefficient of the BA model was obtained by Klemm and Eguíluz and proven by Bollobás. A mean-field approach to study the clustering coefficient was applied by Fronczak, Fronczak and Holyst.
This behavior is still distinct from the behavior of small-world networks where clustering is independent of system size.
In the case of hierarchical networks, clustering as a function of node degree also follows a power-law,
formula_25
This result was obtained analytically by Dorogovtsev, Goltsev and Mendes.
Spectral properties.
The spectral density of BA model has a different shape from the semicircular spectral density of random graph. It has a triangle-like shape with the top lying well above the semicircle and edges decaying as a power law. In (Section 5.1), it was proved that the shape of this spectral density is not an exact triangular function by analyzing the moments of the spectral density as a function of the power-law exponent.
Dynamic scaling.
By definition, the BA model describes a time developing phenomenon and hence, besides its scale-free property, one could also look for its dynamic scaling property.
In the BA network nodes can also be characterized by generalized degree formula_28, the product
of the square root of the birth time of each node and their corresponding degree formula_12, instead
of the degree formula_12 alone since the time of birth matters in the BA network. We find that the
generalized degree distribution formula_29 has some non-trivial features and exhibits dynamic scaling
formula_30
It implies that the distinct plots of formula_26 vs formula_28 would collapse into a universal curve if we plot formula_31 vs formula_27.
Limiting cases.
Model A.
Model A retains growth but does not include preferential attachment. The probability of a new node connecting to any pre-existing node is equal. The resulting degree distribution in this limit is geometric, indicating that growth alone is not sufficient to produce a scale-free structure.
Model B.
Model B retains preferential attachment but eliminates growth. The model begins with a fixed number of disconnected nodes and adds links, preferentially choosing high degree nodes as link destinations. Though the degree distribution early in the simulation looks scale-free, the distribution is not stable, and it eventually becomes nearly Gaussian as the network nears saturation. So preferential attachment alone is not sufficient to produce a scale-free structure.
The failure of models A and B to lead to a scale-free distribution indicates that growth and preferential attachment are needed simultaneously to reproduce the stationary power-law distribution observed in real networks.
Non-linear preferential attachment.
The BA model can be thought of as a specific case of the more general non-linear preferential attachment (NLPA) model. The NLPA algorithm is identical to the BA model with the attachment probability replaced by the more general form
formula_32
where formula_33 is a constant positive exponent. If formula_34, NLPA reduces to the BA model and is referred to as "linear". If formula_35, NLPA is referred to as "sub-linear" and the degree distribution of the network tends to a stretched exponential distribution. If formula_36, NLPA is referred to as "super-linear" and a small number of nodes connect to almost all other nodes in the network. For both formula_37 and formula_36, the scale-free property of the network is broken in the limit of infinite system size. However, if formula_33 is only slightly larger than formula_38, NLPA may result in degree distributions which appear to be transiently scale free.
History.
Preferential attachment made its first appearance in 1923 in the celebrated urn model of the Hungarian mathematician György Pólya. The master equation method, which yields a more transparent derivation, was applied to the problem by Herbert A. Simon in 1955 in the course of studies of the sizes of cities and other phenomena. It was first applied to explain citation frequencies by Derek de Solla Price in 1976. Price was interested in the accumulation of citations of scientific papers and the Price model used "cumulative advantage" (his name for preferential attachment) to generate a fat-tailed distribution. In the language of modern citation networks, Price's model produces a directed network, i.e. a directed version of the Barabási–Albert model. The name "preferential attachment" and the present popularity of scale-free network models are due to the work of Albert-László Barabási and Réka Albert, who discovered that a similar process is present in real networks, and in 1999 applied preferential attachment to explain the numerically observed degree distributions on the web.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "m_0 \\geq m"
},
{
"math_id": 2,
"text": "p_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "p_i = \\frac{k_i}{\\sum_j k_j},"
},
{
"math_id": 5,
"text": "k_i"
},
{
"math_id": 6,
"text": "j"
},
{
"math_id": 7,
"text": "m_0=1"
},
{
"math_id": 8,
"text": "P(k)\\sim k^{-3} \\, "
},
{
"math_id": 9,
"text": "H(k)\\sim k^{-6} \\, "
},
{
"math_id": 10,
"text": "H(1)\\Big|_{m_0=1}=4-\\pi \\, "
},
{
"math_id": 11,
"text": "n_{k\\ell}"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "\\ell"
},
{
"math_id": 14,
"text": "m=1"
},
{
"math_id": 15,
"text": "n_{k\\ell}=\\frac{4\\left(\\ell-1\\right)}{k\\left(k+1\\right)\\left(k+\\ell\\right)\\left(k+\\ell+1\\right)\\left(k+\\ell+2\\right)}+\\frac{12\\left(\\ell-1\\right)}{k\\left(k+\\ell-1\\right)\\left(k+\\ell\\right)\\left(k+\\ell+1\\right)\\left(k+\\ell+2\\right)}."
},
{
"math_id": 16,
"text": "n_{k\\ell}=k^{-3}\\ell^{-3}"
},
{
"math_id": 17,
"text": " m "
},
{
"math_id": 18,
"text": " p(k,\\ell)= \\frac{ 2m(m+1)}{k(k+1)\\ell(\\ell+1)} \\left[ 1-\\frac{\\binom{2m+2}{m+1} \\binom{k+\\ell-2m}{\\ell-m}}{\\binom{k+\\ell+2}{\\ell+1}} \\right] ."
},
{
"math_id": 19,
"text": " p(\\ell\\mid k) "
},
{
"math_id": 20,
"text": " k "
},
{
"math_id": 21,
"text": " p(\\ell\\mid k)= \\frac{ m (k+2) }{k \\ell (\\ell+1)} \\left[ 1-\\frac{\\binom{2m+2}{m+1} \\binom{k+\\ell-2m}{\\ell-m}}{\\binom{k+\\ell+2}{\\ell+1 }} \\right] ."
},
{
"math_id": 22,
"text": " k"
},
{
"math_id": 23,
"text": "\\ell "
},
{
"math_id": 24,
"text": " p(\\ell|k) "
},
{
"math_id": 25,
"text": "C(k) = k^{-1}. \\, "
},
{
"math_id": 26,
"text": "F(q,t)"
},
{
"math_id": 27,
"text": "q/t^{1/2}"
},
{
"math_id": 28,
"text": "q"
},
{
"math_id": 29,
"text": "F(q, t)"
},
{
"math_id": 30,
"text": "F(q,t)\\sim t^{-1/2}\\phi(q/t^{1/2})."
},
{
"math_id": 31,
"text": "F(q,t)t^{1/2}"
},
{
"math_id": 32,
"text": "p_i = \\frac{k_i^{\\alpha}}{\\sum_j k_j^{\\alpha}},"
},
{
"math_id": 33,
"text": "\\alpha"
},
{
"math_id": 34,
"text": "\\alpha=1"
},
{
"math_id": 35,
"text": "0<\\alpha<1"
},
{
"math_id": 36,
"text": "\\alpha>1"
},
{
"math_id": 37,
"text": "\\alpha<1"
},
{
"math_id": 38,
"text": "1"
}
] | https://en.wikipedia.org/wiki?curid=8286726 |
8287543 | Community structure | Concept in graph theory
In the study of complex networks, a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. In the particular case of "non-overlapping" community finding, this implies that the network divides naturally into groups of nodes with dense connections internally and sparser connections between groups. But "overlapping" communities are also allowed. The more general definition is based on the principle that pairs of nodes are more likely to be connected if they are both members of the same community(ies), and less likely to be connected if they do not share communities. A related but different problem is community search, where the goal is to find a community that a certain vertex belongs to.
Properties.
In the study of networks, such as computer and information networks, social networks and biological networks, a number of different characteristics have been found to occur commonly, including the small-world property, heavy-tailed degree distributions, and clustering, among others. Another common characteristic is community structure.
In the context of networks, community structure refers to the occurrence of groups of nodes in a network that are more densely connected internally than with the rest of the network, as shown in the example image to the right. This inhomogeneity of connections suggests that the network has certain natural divisions within it.
Communities are often defined in terms of the partition of the set of vertices, that is each node is put into one and only one community, just as in the figure. This is a useful simplification and most community detection methods find this type of community structure. However, in some cases a better representation could be one where vertices are in more than one community. This might happen in a social network where each vertex represents a person, and the communities represent the different groups of friends: one community for family, another community for co-workers, one for friends in the same sports club, and so on. The use of cliques for community detection discussed below is just one example of how such overlapping community structure can be found.
Some networks may not have any meaningful community structure. Many basic network models, for example, such as the random graph and the Barabási–Albert model, do not display community structure.
Importance.
Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc.
Finding an underlying community structure in a network, if it exists, is important for a number of reasons. Communities allow us to create a large scale map of a network since individual communities act like meta-nodes in the network which makes its study easier.
Individual communities also shed light on the function of the system represented by the network since communities often correspond to functional units of the system. In metabolic networks, such functional groups correspond to cycles or pathways whereas in the protein interaction network, communities correspond to proteins with similar functionality inside a biological cell. Similarly, citation networks form communities by research topic. Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other. Such insight can be useful in improving some algorithms on graphs such as spectral clustering.
Importantly, communities often have very different properties than the average properties of the networks. Thus, concentrating only on the average properties usually misses many important and interesting features inside the networks. For example, in a given social network, both gregarious and reticent groups might exist simultaneously.
Existence of communities also generally affects various processes like rumour spreading or epidemic spreading happening on a network. Hence to properly understand such processes, it is important to detect communities and also to study how they affect the spreading processes in various settings.
Finally, an important application that community detection has found in network science is the prediction of missing links and the identification of false links in the network. During the measurement process, some links may not be observed for a number of reasons. Similarly, some links could falsely enter the data because of errors in the measurement. Both of these cases are handled well by community detection algorithms, since they allow one to assign a probability of existence to an edge between a given pair of nodes.
Algorithms for finding communities.
Finding communities within an arbitrary network can be a computationally difficult task. The number of communities, if any, within the network is typically unknown and the communities are often of unequal size and/or density. Despite these difficulties, however, several methods for community finding have been developed and employed with varying levels of success.
Minimum-cut method.
One of the oldest algorithms for dividing networks into parts is the minimum cut method (and variants such as ratio cut and normalized cut). This method sees use, for example, in load balancing for parallel computing in order to minimize communication between processor nodes.
In the minimum-cut method, the network is divided into a predetermined number of parts, usually of approximately the same size, chosen such that the number of edges between groups is minimized. The method works well in many of the applications for which it was originally intended but is less than ideal for finding community structure in general networks since it will find communities regardless of whether they are implicit in the structure, and it will find only a fixed number of them.
Hierarchical clustering.
Another method for finding community structures in networks is hierarchical clustering. In this method one defines a similarity measure quantifying some (usually topological) type of similarity between node pairs. Commonly used measures include the cosine similarity, the Jaccard index, and the Hamming distance between rows of the adjacency matrix. Then one groups similar nodes into communities according to this measure. There are several common schemes for performing the grouping, the two simplest being single-linkage clustering, in which two groups are considered separate communities if and only if all pairs of nodes in different groups have similarity lower than a given threshold, and complete-linkage clustering, in which all nodes within every group have similarity greater than a threshold. An important step is determining the threshold at which to stop the agglomerative clustering, indicating a near-to-optimal community structure. A common strategy is to build one or several metrics monitoring global properties of the network, which peak at a given step of the clustering. An interesting approach in this direction is the use of various similarity or dissimilarity measures, combined through convex sums. Another approach is the computation of a quantity monitoring the density of edges within clusters with respect to the density between clusters, such as the partition density, which has been proposed when the similarity metric is defined between edges (which permits the definition of overlapping communities), and extended when the similarity is defined between nodes, which allows one to consider alternative definitions of communities such as guilds (i.e. groups of nodes sharing a similar number of links with respect to the same neighbours but not necessarily connected themselves). These methods can be extended to consider multidimensional networks, for instance when we are dealing with networks having nodes with different types of links.
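As a concrete illustration of this scheme, the sketch below (in Python, using networkx and SciPy as one possible toolset) clusters a small example network by the Jaccard distance between rows of its adjacency matrix and cuts the resulting dendrogram at a chosen threshold. The example graph, the similarity measure, the linkage scheme and the threshold value are all illustrative assumptions rather than prescribed choices.

```python
# A minimal sketch of agglomerative hierarchical clustering on a network,
# assuming the Jaccard distance between adjacency-matrix rows as the
# dissimilarity and an arbitrary cut threshold of 0.8.
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

G = nx.karate_club_graph()                 # example network
A = nx.to_numpy_array(G).astype(bool)      # adjacency matrix, one row per node

D = pdist(A, metric="jaccard")             # pairwise Jaccard distance between neighbourhoods
Z = linkage(D, method="single")            # single-linkage; "complete" gives complete linkage

labels = fcluster(Z, t=0.8, criterion="distance")   # cut the dendrogram at the threshold
print(dict(zip(G.nodes(), labels)))
```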
Girvan–Newman algorithm.
Another commonly used algorithm for finding communities is the Girvan–Newman algorithm. This algorithm identifies edges in a network that lie between communities and then removes them, leaving behind just the communities themselves. The identification is performed by employing the graph-theoretic measure betweenness centrality, which assigns a number to each edge which is large if the edge lies "between" many pairs of nodes.
The Girvan–Newman algorithm returns results of reasonable quality and is popular because it has been implemented in a number of standard software packages. But it also runs slowly, taking time O("m"2"n") on a network of "n" vertices and "m" edges, making it impractical for networks of more than a few thousand nodes.
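A minimal sketch of the procedure, written on top of networkx as one possible toolset, repeatedly removes the edge with the highest betweenness centrality until the network splits into a chosen number of components; the example graph and the target of two components are illustrative assumptions, and networkx also ships a ready-made girvan_newman generator.

```python
# Girvan–Newman sketch: strip the most "between" edge until the graph splits.
import networkx as nx

def girvan_newman_split(G, target_parts=2):
    H = G.copy()
    while nx.number_connected_components(H) < target_parts:
        betweenness = nx.edge_betweenness_centrality(H)
        edge = max(betweenness, key=betweenness.get)   # edge lying between many node pairs
        H.remove_edge(*edge)
    return list(nx.connected_components(H))

G = nx.karate_club_graph()
print(girvan_newman_split(G))
# networkx also provides nx.algorithms.community.girvan_newman(G).
```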
Modularity maximization.
In spite of its known drawbacks, one of the most widely used methods for community detection is modularity maximization. Modularity is a benefit function that measures the quality of a particular division of a network into communities. The modularity maximization method detects communities by searching over possible divisions of a network for one or more that have particularly high modularity. Since exhaustive search over all possible divisions is usually intractable, practical algorithms are based on approximate optimization methods such as greedy algorithms, simulated annealing, or spectral optimization, with different approaches offering different balances between speed and accuracy.
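The sketch below spells out the modularity benefit function itself for an undirected network, summing A_ij minus k_i k_j / (2m) over all node pairs that share a community and dividing by 2m. It illustrates the quantity that the maximization methods score, not a maximization algorithm (networkx, for instance, provides greedy_modularity_communities for that); the example graph and partition are illustrative.

```python
# Direct evaluation of modularity Q for a given partition of an undirected graph.
import networkx as nx

def modularity(G, partition):
    """partition: dict mapping node -> community label."""
    m = G.number_of_edges()
    degrees = dict(G.degree())
    Q = 0.0
    for i in G.nodes():
        for j in G.nodes():
            if partition[i] != partition[j]:
                continue
            a_ij = 1.0 if G.has_edge(i, j) else 0.0
            Q += a_ij - degrees[i] * degrees[j] / (2.0 * m)
    return Q / (2.0 * m)

G = nx.karate_club_graph()
clubs = {v: d["club"] for v, d in G.nodes(data=True)}   # the known split of the club
print(modularity(G, clubs))
```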
A popular modularity maximization approach is the Louvain method, which iteratively optimizes local communities until global modularity can no longer be improved given perturbations to the current community state.
An algorithm that utilizes the RenEEL scheme, which is an example of the Extremal Ensemble Learning (EEL) paradigm, is currently the best modularity maximizing algorithm.
The usefulness of modularity optimization is questionable, as it has been shown that modularity optimization often fails to detect clusters smaller than some scale, depending on the size of the network (resolution limit); on the other hand the landscape of modularity values is characterized by a huge degeneracy of partitions with high modularity, close to the absolute maximum, which may be very different from each other.
Statistical inference.
Methods based on statistical inference attempt to fit a generative model to the network data, which encodes the community structure. The overall advantage of this approach compared to the alternatives is its more principled nature, and the capacity to inherently address issues of statistical significance. Most methods in the literature are based on the stochastic block model as well as variants including mixed membership, degree-correction, and hierarchical structures. Model selection can be performed using principled approaches such as minimum description length (or equivalently, Bayesian model selection) and likelihood-ratio tests. Currently many algorithms exist to perform efficient inference of stochastic block models, including belief propagation and agglomerative Monte Carlo.
In contrast to approaches that attempt to cluster a network given an objective function, this class of methods is based on generative models, which not only serve as a description of the large-scale structure of the network, but also can be used to "generalize" the data and predict the occurrence of missing or spurious links in the network.
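As an illustration of this generative viewpoint, the sketch below fits the simplest (non-degree-corrected) stochastic block model to one fixed partition: it estimates each block-pair connection probability from the data and evaluates the resulting Bernoulli log-likelihood. The example graph and partition are illustrative, and a real inference method would additionally search over partitions and perform model selection.

```python
# Bernoulli SBM log-likelihood of a fixed partition, with block probabilities
# estimated by maximum likelihood from the observed edges.
import math
from collections import defaultdict
from itertools import combinations
import networkx as nx

def sbm_log_likelihood(G, partition):
    """partition: dict node -> block label."""
    edges = defaultdict(int)   # observed edges per block pair
    pairs = defaultdict(int)   # possible edges per block pair
    for u, v in combinations(G.nodes(), 2):
        key = tuple(sorted((partition[u], partition[v])))
        pairs[key] += 1
        if G.has_edge(u, v):
            edges[key] += 1
    ll = 0.0
    for key, n_pairs in pairs.items():
        p = edges[key] / n_pairs                 # ML estimate of the block probability
        if 0.0 < p < 1.0:
            ll += edges[key] * math.log(p) + (n_pairs - edges[key]) * math.log(1.0 - p)
    return ll

G = nx.karate_club_graph()
partition = {v: d["club"] for v, d in G.nodes(data=True)}
print(sbm_log_likelihood(G, partition))
```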
Clique-based methods.
Cliques are subgraphs in which every node is connected to every other node in the clique. As nodes cannot be more tightly connected than this, it is not surprising that there are many approaches to community detection in networks based on the detection of cliques in a graph and the analysis of how these overlap. Note that as a node can be a member of more than one clique, a node can be a member of more than one community in these methods, giving an "overlapping community structure".
One approach is to find the "maximal cliques", that is, the cliques which are not a subgraph of any other clique. The classic algorithm to find these is the Bron–Kerbosch algorithm. The overlap of these can be used to define communities in several ways. The simplest is to consider only maximal cliques bigger than a minimum size (number of nodes). The union of these cliques then defines a subgraph whose components (disconnected parts) then define communities. Such approaches are often implemented in social network analysis software such as UCInet.
The alternative approach is to use cliques of fixed size formula_0. The overlap of these can be used to define a type of formula_0-regular hypergraph or a structure which is a generalisation of the line graph (the case when formula_1) known as a "Clique graph". The clique graphs have vertices which represent the cliques in the original graph while the edges of the clique graph record the overlap of the clique in the original graph. Applying any of the previous community detection methods (which assign each node to a community) to the clique graph then assigns each clique to a community. This can then be used to determine community membership of nodes in the cliques. Again as a node may be in several cliques, it can be a member of several communities.
For instance the clique percolation method defines communities as percolation clusters of formula_0-cliques. To do this it
finds all formula_0-cliques in a network, that is all the complete sub-graphs of formula_0-nodes.
It then defines two formula_0-cliques to be adjacent if they share formula_2 nodes; this adjacency is used to define the edges of a clique graph. A community is then defined to be the maximal union of formula_0-cliques in which we can reach any formula_0-clique from any other formula_0-clique through a series of formula_0-clique adjacencies. That is, communities are just the connected components of the clique graph. Since a node can belong to several different formula_0-clique percolation clusters at the same time, the communities can overlap with each other.
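A short sketch of both ideas using networkx as one possible toolset: maximal cliques are enumerated with the Bron–Kerbosch algorithm, and overlapping communities are then obtained by k-clique percolation with k = 3. The example graph, the minimum clique size and the value of k are illustrative assumptions.

```python
# Maximal cliques (Bron–Kerbosch) and k-clique percolation communities.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.karate_club_graph()

maximal_cliques = [c for c in nx.find_cliques(G) if len(c) >= 3]
print(len(maximal_cliques), "maximal cliques of size >= 3")

# Communities = connected components of the 3-clique adjacency (clique graph).
for community in k_clique_communities(G, 3):
    print(sorted(community))
```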
Community detection in latent feature spaces.
A network can be represented or projected onto a latent space via representation learning methods to efficiently represent a system. Then, various clustering methods can be employed to detect community structures. For Euclidean spaces, methods like embedding-based Silhouette community detection can be utilized. For hyperbolic latent spaces, the critical gap method or modified density-based, hierarchical, or partitioning-based clustering methods can be utilized.
Testing methods of finding communities algorithms.
The evaluation of algorithms, to detect which are better at detecting community structure, is still an open question. It must be based on analyses of networks of known structure. A typical example is the "four groups" test, in which a network is divided into four equally-sized groups (usually of 32 nodes each) and the probabilities of connection within and between groups are varied to create more or less challenging structures for the detection algorithm. Such benchmark graphs are a special case of the planted l-partition model
of Condon and Karp, or more generally of "stochastic block models", a general class of random network models containing community structure. Other more flexible benchmarks have been proposed that allow for varying group sizes and nontrivial degree distributions, such as LFR benchmark
which is an extension of the four groups benchmark that includes heterogeneous distributions of node degree and community size, making it a more severe test of community detection methods.
Commonly used computer-generated benchmarks start with a network of well-defined communities. Then, this structure is degraded by rewiring or removing links and it gets harder and harder for the algorithms to detect the original partition. At the end, the network reaches a point where it is essentially random. This kind of benchmark may be called "open". The performance on these benchmarks is evaluated by measures such as normalized mutual information or variation of information. They compare the solution obtained by an algorithm with the original community structure, evaluating the similarity of both partitions.
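The sketch below runs one benchmark of this kind: it generates a planted l-partition graph with four groups of 32 nodes, recovers communities with an arbitrarily chosen detection algorithm, and scores the result against the planted partition with normalized mutual information (here via scikit-learn). The connection probabilities, the seed and the particular detection algorithm are illustrative assumptions.

```python
# "Four groups"-style benchmark scored with normalized mutual information.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

# Four planted groups of 32 nodes; p_in and p_out control the difficulty.
G = nx.planted_partition_graph(l=4, k=32, p_in=0.3, p_out=0.05, seed=42)

# Ground-truth labels, read from the partition stored by the generator.
truth = {v: i for i, block in enumerate(G.graph["partition"]) for v in block}
true_labels = [truth[v] for v in G.nodes()]

# Recover communities with an arbitrary detection algorithm and relabel nodes.
found = greedy_modularity_communities(G)
labels = {v: i for i, community in enumerate(found) for v in community}
found_labels = [labels[v] for v in G.nodes()]

print(normalized_mutual_info_score(true_labels, found_labels))
```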
Detectability.
During recent years, a rather surprising result has been obtained by various groups which shows that a phase transition exists in the community detection problem, showing that as the density of connections inside communities and between communities become more and more equal or both become smaller (equivalently, as the community structure becomes too weak or the network becomes too sparse), suddenly the communities become undetectable. In a sense, the communities themselves still exist, since the presence and absence of edges is still correlated with the community memberships of their endpoints; but it becomes information-theoretically impossible to label the nodes better than chance, or even distinguish the graph from one generated by a null model such as the Erdős–Rényi model without community structure. This transition is independent of the type of algorithm being used to detect communities, implying that there exists a fundamental limit on our ability to detect communities in networks, even with optimal Bayesian inference (i.e., regardless of our computational resources).
Consider a stochastic block model with total formula_3 nodes, formula_4 groups of equal size, and let formula_5 and formula_6 be the connection probabilities inside and between the groups respectively. If formula_7, the network would possess community structure since the link density inside the groups would be more than the density of links between the groups. In the sparse case, formula_5 and formula_6 scale as formula_8 so that the average degree is constant:
formula_9 and formula_10
Then it becomes impossible to detect the communities when:
formula_11
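A toy numeric check of this condition, with arbitrary illustrative values for the average intra- and inter-community degrees:

```python
# Detectability test for the two-group sparse stochastic block model:
# communities are detectable only when c_in - c_out > sqrt(2 * (c_in + c_out)).
import math

def detectable(c_in, c_out):
    return (c_in - c_out) > math.sqrt(2.0 * (c_in + c_out))

print(detectable(8.0, 2.0))   # True: well-separated communities
print(detectable(5.5, 4.5))   # False: structure exists but is undetectable
```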
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "k=2"
},
{
"math_id": 2,
"text": "k-1"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": " q=2 "
},
{
"math_id": 5,
"text": " p_\\text{in} "
},
{
"math_id": 6,
"text": "p_\\text{out}"
},
{
"math_id": 7,
"text": "p_\\text{in}>p_\\text{out}"
},
{
"math_id": 8,
"text": "O(1/n)"
},
{
"math_id": 9,
"text": "p_\\text{in}=c_\\text{in}/n"
},
{
"math_id": 10,
"text": "p_\\text{out}=c_\\text{out}/n"
},
{
"math_id": 11,
"text": "c_\\text{in}-c_\\text{out}=\\sqrt{2(c_\\text{in}+c_\\text{out})}"
}
] | https://en.wikipedia.org/wiki?curid=8287543 |
8287689 | Hierarchical clustering of networks | Hierarchical clustering is one method for finding community structures in a network. The technique arranges the network into a hierarchy of groups according to a specified weight function. The data can then be represented in a tree structure known as a dendrogram. Hierarchical clustering can either be agglomerative or divisive depending on whether one proceeds through the algorithm by adding links to or removing links from the network, respectively. One divisive technique is the Girvan–Newman algorithm.
Algorithm.
In the hierarchical clustering algorithm, a weight formula_0 is first assigned to each pair of vertices formula_1 in the network. The weight, which can vary depending on implementation (see section below), is intended to indicate how closely related the vertices are. Then, starting with all the nodes in the network disconnected, links are added between pairs of nodes in order of decreasing weight (in the divisive case, one starts from the original network and removes links in order of increasing weight). As links are added, connected subsets begin to form. These represent the network's community structures.
The components at each iterative step are always a subset of other structures. Hence, the subsets can be represented using a tree diagram, or dendrogram. Horizontal slices of the tree at a given level indicate the communities that exist above and below a value of the weight.
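A minimal sketch of this agglomerative procedure, assuming the pairwise weights are already given (see the Weights section below) and tracking the growing communities with a simple union-find structure; the node set, weights and stopping threshold are illustrative.

```python
# Agglomerative merging: add links in order of decreasing weight, stop at a threshold.
class UnionFind:
    def __init__(self, nodes):
        self.parent = {v: v for v in nodes}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path compression
            v = self.parent[v]
        return v

    def union(self, u, v):
        self.parent[self.find(u)] = self.find(v)

def agglomerate(nodes, W, threshold):
    """Merge node pairs whose weight is at least `threshold`; return the communities."""
    uf = UnionFind(nodes)
    for (i, j), w in sorted(W.items(), key=lambda kv: kv[1], reverse=True):
        if w < threshold:
            break
        uf.union(i, j)
    groups = {}
    for v in nodes:
        groups.setdefault(uf.find(v), set()).add(v)
    return list(groups.values())

# Illustrative weights on five nodes:
W = {(0, 1): 0.9, (1, 2): 0.8, (3, 4): 0.7, (2, 3): 0.2}
print(agglomerate(range(5), W, threshold=0.5))   # [{0, 1, 2}, {3, 4}]
```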
Weights.
There are many possible weights for use in hierarchical clustering algorithms. The specific weight used is dictated by the data as well as considerations for computational speed. Additionally, the communities found in the network are highly dependent on the choice of weighting function. Hence, when compared to real-world data with a known community structure, the various weighting techniques have been met with varying degrees of success.
Two weights that have been used previously with varying success are the number of node-independent paths between each pair of vertices and the total number of paths between vertices weighted by the length of the path. One disadvantage of these weights, however, is that both weighting schemes tend to separate single peripheral vertices from their rightful communities because of the small number of paths going to these vertices. For this reason, their use in hierarchical clustering techniques is far from optimal.
Edge betweenness centrality has been used successfully as a weight in the Girvan–Newman algorithm. This technique is similar to a divisive hierarchical clustering algorithm, except the weights are recalculated with each step.
The change in modularity of the network with the addition of a node has also been used successfully as a weight. This method provides a computationally less-costly alternative to the Girvan–Newman algorithm while yielding similar results.
{
"math_id": 0,
"text": "W_{ij}"
},
{
"math_id": 1,
"text": "(i,j)"
}
] | https://en.wikipedia.org/wiki?curid=8287689 |
8288796 | MDS matrix | Represents a function with diffusion properties useful in cryptography
An MDS matrix (maximum distance separable) is a matrix representing a function with certain diffusion properties that have useful applications in cryptography. Technically, an formula_0 matrix formula_1 over a finite field formula_2 is an MDS matrix if it is the transformation matrix of a linear transformation formula_3 from formula_4 to formula_5 such that no two different formula_6-tuples of the form formula_7 coincide in formula_8 or more components.
Equivalently, the set of all formula_6-tuples formula_7 is an MDS code, i.e., a linear code that reaches the Singleton bound.
Let formula_9 be the matrix obtained by joining the identity matrix formula_10 to formula_1. Then a necessary and sufficient condition for a matrix formula_1 to be MDS is that every possible formula_11 submatrix obtained by removing formula_12 rows from formula_13 is non-singular. This is also equivalent to the following: all the sub-determinants of the matrix formula_1 are non-zero. Then a binary matrix formula_1 (namely over the field with two elements) is never MDS unless it has only one row or only one column with all components formula_14.
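A small sketch of this sub-determinant test over a prime field GF(p), using SymPy for exact integer determinants as one possible toolset; the example matrix and the choice of p are illustrative.

```python
# MDS check: every square submatrix of A (of every size) must have a
# non-zero determinant over GF(p).
from itertools import combinations
from sympy import Matrix

def is_mds(A, p):
    A = Matrix(A)
    m, n = A.shape
    for size in range(1, min(m, n) + 1):
        for rows in combinations(range(m), size):
            for cols in combinations(range(n), size):
                if A.extract(list(rows), list(cols)).det() % p == 0:
                    return False            # a vanishing sub-determinant: not MDS
    return True

# Over GF(3) every entry and the full 2x2 determinant are non-zero, so MDS:
print(is_mds([[1, 1], [1, 2]], p=3))   # True
# Over GF(2) the entry 2 is congruent to 0, so a 1x1 sub-determinant vanishes:
print(is_mds([[1, 1], [1, 2]], p=2))   # False
```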
Reed–Solomon codes have the MDS property and are frequently used to obtain the MDS matrices used in cryptographic algorithms.
Serge Vaudenay suggested using MDS matrices in cryptographic primitives to produce what he called "multipermutations", not-necessarily linear functions with this same property. These functions have what he called "perfect diffusion": changing formula_15 of the inputs changes at least formula_16 of the outputs. He showed how to exploit imperfect diffusion to cryptanalyze functions that are not multipermutations.
MDS matrices are used for diffusion in such block ciphers as AES, SHARK, Square, Twofish, Anubis, KHAZAD, Manta, Hierocrypt, Kalyna, Camellia and HADESMiMC, and in the stream cipher MUGI and the cryptographic hash function Whirlpool, Poseidon.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m \\times n"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "f(x) = Ax"
},
{
"math_id": 4,
"text": "K^n"
},
{
"math_id": 5,
"text": "K^m"
},
{
"math_id": 6,
"text": "(m + n)"
},
{
"math_id": 7,
"text": "(x, f(x))"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\tilde A = \\begin{pmatrix} \\mathrm{I}_n \\\\ \\hline \\mathrm{A} \\end{pmatrix}"
},
{
"math_id": 10,
"text": "\\mathrm{I}_n"
},
{
"math_id": 11,
"text": "n \\times n"
},
{
"math_id": 12,
"text": "m"
},
{
"math_id": 13,
"text": "\\tilde A"
},
{
"math_id": 14,
"text": "1"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": "m - t + 1"
}
] | https://en.wikipedia.org/wiki?curid=8288796 |
82916 | Gear | Rotating circular machine part with teeth that mesh with another toothed part
A gear or gearwheel is a rotating machine part typically used to transmit rotational motion and/or torque by means of a series of teeth that engage with compatible teeth of another gear or other part. The teeth can be integral saliences or cavities machined on the part, or separate pegs inserted into it. In the latter case, the gear is usually called a cogwheel. A cog may be one of those pegs or the whole gear. Two or more meshing gears are called a gear train.
The smaller member of a pair of meshing gears is often called the pinion. Most commonly, gears and gear trains can be used to trade torque for rotational speed between two axles or other rotating parts, to change the axis of rotation, and/or to invert the sense of rotation. A gear may also be used to transmit linear force and/or linear motion to a rack, a straight bar with a row of compatible teeth.
Gears are among the most common mechanical parts. They come in a great variety of shapes and materials, and are used for many different functions and applications. Diameters may range from a few μm in micromachines, to a few mm in watches and toys to over 10 metres in some mining equipment. Other types of parts that are somewhat similar in shape and function to gears include the sprocket, which is meant to engage with a link chain instead of another gear, and the timing pulley, meant to engage a timing belt. Most gears are round and have equal teeth, designed to operate as smoothly as possible; but there are several applications for non-circular gears, and the Geneva drive has an extremely uneven operation, by design.
Gears can be seen as instances of the basic lever "machine". When a small gear drives a larger one, the mechanical advantage of this ideal lever causes the torque "T" to increase but the rotational speed "ω" to decrease. The opposite effect is obtained when a large gear drives a small one. The changes are proportional to the "gear ratio" "r", the ratio of the tooth counts: namely, "T"2/"T"1 = "r" = "N"2/"N"1, and "ω"2/"ω"1 = 1/"r" = "N"1/"N"2. Depending on the geometry of the pair, the sense of rotation may also be inverted (from clockwise to anti-clockwise, or vice versa).
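A toy numeric illustration of this trade-off, with arbitrary tooth counts, input torque and input speed:

```python
# Gear ratio bookkeeping: r = N2/N1, output torque = T1*r, output speed = omega1/r.
N1, N2 = 20, 60            # driver and driven tooth counts
T1, omega1 = 10.0, 300.0   # input torque (N*m) and input speed (rpm)

r = N2 / N1                # gear ratio = 3.0
T2 = T1 * r                # 30.0 N*m: a small gear driving a large one multiplies torque
omega2 = omega1 / r        # 100.0 rpm: ...while dividing rotational speed
print(r, T2, omega2)
```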
Most vehicles have a transmission or "gearbox" containing a set of gears that can be meshed in multiple configurations. The gearbox lets the operator vary the torque that is applied to the wheels without changing the engine's speed. Gearboxes are used also in many other machines, such as lathes and conveyor belts. In all those cases, terms like "first gear", "high gear", and "reverse gear" refer to the overall torque ratios of different meshing configurations, rather than to specific physical gears. These terms may be applied even when the vehicle does not actually contain gears, as in a continuously variable transmission.
History.
The earliest surviving gears date from the 4th century BC in China (Zhan Guo times – Late East Zhou dynasty), which have been preserved at the Luoyang Museum of Henan Province, China.
In Europe, Aristotle mentions gears around 330 BC, as wheel drives in windlasses. He observed that the direction of rotation is reversed when one gear wheel drives another gear wheel. Philon of Byzantium was one of the first who used gears in water raising devices. Gears appear in works connected to Hero of Alexandria, in Roman Egypt circa AD 50, but can be traced back to the mechanics of the Library of Alexandria in 3rd-century BC Ptolemaic Egypt, and were greatly developed by the Greek polymath Archimedes (287–212 BC). The earliest surviving gears in Europe were found in the Antikythera mechanism, an example of a very early and intricate geared device, designed to calculate astronomical positions of the sun, moon, and planets, and predict eclipses. Its time of construction is now estimated between 150 and 100 BC.
The Chinese engineer Ma Jun (c. 200–265 AD) described a south-pointing chariot. A set of differential gears connected to the wheels and to a pointer on top of the chariot kept the direction of the latter unchanged as the chariot turned.
Another early surviving example of a geared mechanism is a complex calendrical device, showing the phase of the Moon, the day of the month and the places of the Sun and the Moon in the Zodiac, that was invented in the Byzantine Empire in the early 6th century AD.
Geared mechanical water clocks were built in China by 725 AD.
Around 1221 AD, a geared astrolabe was built in Isfahan showing the position of the moon in the zodiac and its phase, and the number of days since new moon.
The worm gear was invented in the Indian subcontinent, for use in roller cotton gins, some time during the 13th–14th centuries.
A complex astronomical clock, called the Astrarium, was built between 1348 and 1364 by Giovanni Dondi dell'Orologio. It had seven faces and 107 moving parts; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. The Salisbury Cathedral clock, built in 1386, is the world's oldest still-working geared mechanical clock.
Differential gears were used by the British clock maker Joseph Williamson in 1720.
However, the oldest functioning gears by far were created by Nature, and are seen in the hind legs of the nymphs of the planthopper insect "Issus coleoptratus".
Etymology.
The word "gear" is probably from Old Norse "gørvi" (plural "gørvar") 'apparel, gear,' related to "gøra", "gørva" 'to make, construct, build; set in order, prepare,' a common verb in Old Norse, "used in a wide range of situations from writing a book to dressing meat". In this context, the meaning of 'toothed wheel in machinery' first attested 1520s; specific mechanical sense of 'parts by which a motor communicates motion' is from 1814; specifically of a vehicle (bicycle, automobile, etc.) by 1888.
A "cog" is a tooth on a wheel. From Middle English cogge, from Old Norse (compare Norwegian "kugg" ('cog'), Swedish "kugg", "kugge" ('cog, tooth')), from Proto-Germanic *"kuggō" (compare Dutch "kogge" ('cogboat'), German "Kock"), from Proto-Indo-European *"gugā" ('hump, ball') (compare Lithuanian "gugà" ('pommel, hump, hill'), from PIE *"gēw-" ('to bend, arch'). First used c. 1300 in the sense of 'a wheel having teeth or cogs; late 14c., 'tooth on a wheel'; cog-wheel, early 15c.
Materials.
The gears of the Antikythera mechanism are made of bronze, and the earliest surviving Chinese gears are made of iron. These metals, as well as tin, have been generally used for clocks and similar mechanisms to this day.
Historically, large gears, such as used in flour mills, were commonly made of wood rather than metal. They were cogwheels, made by inserting a series of wooden pegs or cogs around the rim of a wheel. The cogs were often made of maple wood.
Wooden gears have been gradually replaced by ones made of metal, such as cast iron at first, then steel and aluminum. Steel is most commonly used because of its high strength-to-weight ratio and low cost. Aluminum is not as strong as steel for the same geometry, but is lighter and easier to machine. Powder metallurgy may be used with alloys that cannot be easily cast or machined.
Still, because of cost or other considerations, some early metal gears had wooden cogs, each tooth forming a type of specialised 'through' mortise and tenon joint.
More recently, engineering plastics and composite materials have been replacing metals in many applications, especially those with moderate speed and torque. They are not as strong as steel, but are cheaper, can be mass-manufactured by injection molding, and do not need lubrication. Plastic gears may even be intentionally designed to be the weakest part in a mechanism, so that in case of jamming they will fail first and thus avoid damage to more expensive parts. Such sacrificial gears may be a simpler alternative to other overload-protection devices such as clutches and torque- or current-limited motors.
In spite of the advantages of metal and plastic, wood continued to be used for large gears until a couple of centuries ago, because of cost, weight, tradition, or other considerations. In 1967 the Thompson Manufacturing Company of Lancaster, New Hampshire still had a very active business in supplying tens of thousands of maple gear teeth per year, mostly for use in paper mills and grist mills, some dating back over 100 years.
Manufacture.
The most common techniques for gear manufacturing are dies, sand, and investment casting; injection molding; powder metallurgy; blanking; and gear cutting.
As of 2014, an estimated 80% of all gearing produced worldwide is produced by net shape molding. Molded gearing is usually powder metallurgy, plastic injection, or metal die casting. Gears produced by powder metallurgy often require a sintering step after they are removed from the mold. Cast gears require gear cutting or other machining to shape the teeth to the necessary precision. The most common form of gear cutting is hobbing, but gear shaping, milling, and broaching may be used instead.
In metal gears intended for heavy-duty operation, such as in the transmissions of cars and trucks, the teeth are heat treated to make them hard and more wear resistant while leaving the core soft but tough. For large gears that are prone to warp, a quench press is used.
Gears can be made by 3D printing; however, this alternative is typically used only for prototypes or very limited production quantities, because of its high cost, low accuracy, and relatively low strength of the resulting part.
Comparison with other drive mechanisms.
Besides gear trains, other alternative methods of transmitting torque between non-coaxial parts include link chains driven by sprockets, friction drives, belts and pulleys, hydraulic couplings, and timing belts.
One major advantage of gears is that their rigid body and the snug interlocking of the teeth ensure precise tracking of the rotation across the gear train, limited only by backlash and other mechanical defects. For this reason they are favored in precision applications such as watches. Gear trains also can have fewer separate parts (only two) and have minimal power loss, minimal wear, and long life. Gears are also often the most efficient and compact way of transmitting torque between two non-parallel axes.
On the other hand, gears are more expensive to manufacture, may require periodic lubrication, and may have greater mass and rotational inertia than the equivalent pulleys. More importantly, the distance between the axes of matched gears is limited and cannot be changed once they are manufactured. There are also applications where slippage under overload or transients (as occurs with belts, hydraulics, and friction wheels) is not only acceptable but desirable.
Ideal gear model.
For basic analysis purposes, each gear can be idealized as a perfectly rigid body that, in normal operation, turns around a "rotation axis" that is fixed in space, without sliding along it. Thus, each point of the gear can move only along a circle that is perpendicular to its axis and centered on it. At any moment "t", all points of the gear will be rotating around that axis with the same angular speed "ω"("t"), in the same sense. The speed need not be constant over time.
The "action surface" of the gear consists of all points of its surface that, in normal operation, may contact the matching gear with positive pressure. All other parts of the surface are irrelevant (except that they cannot be crossed by any part of the matching gear). In a gear with "N" teeth, the working surface has "N"-fold rotational symmetry about the axis, meaning that it is congruent with itself when the gear rotates by 1/"N" of a turn.
If the gear is meant to transmit or receive torque with a definite sense only (clockwise or counterclockwise with respect to some reference viewpoint), the action surface consists of "N" separate patches, the "tooth faces"; which have the same shape and are positioned in the same way relative to the axis, spaced 1/"N" turn apart.
If the torque on each gear may have both senses, the action surface will have two sets of "N" tooth faces; each set will be effective only while the torque has one specific sense, and the two sets can be analyzed independently of each other. However, in this case the gear usually has also "flip over" symmetry, so that the two sets of tooth faces are congruent after the gear is flipped. This arrangement ensures that the two gears are firmly locked together, at all times, with no backlash.
During operation, each point "p" of each tooth face will at some moment contact a tooth face of the matching gear at some point "q" of one of its tooth faces. At that moment and at those points, the two faces must have the same perpendicular direction but opposite orientation. But since the two gears are rotating around different axes, the points "p" and "q" are moving along different circles; therefore, the contact cannot last more than one instant, and "p" will then either slide across the other face, or stop contacting it altogether.
On the other hand, at any given moment there is at least one such pair of contact points; usually more than one, even a whole line or surface of contact.
Actual gears deviate from this model in many ways: they are not perfectly rigid, their mounting does not ensure that the rotation axis will be perfectly fixed in space, the teeth may have slightly different shapes and spacing, the tooth faces are not perfectly smooth, and so on. Yet, these deviations from the ideal model can be ignored for a basic analysis of the operation of a gear set.
Relative axis position.
One criterion for classifying gears is the relative position and direction of the axes of rotation of the gears that are to be meshed together.
Parallel.
In the most common configuration, the axes of rotation of the two gears are parallel, and usually their sizes are such that they contact near a point between the two axes. In this configuration, the two gears turn in opposite senses.
Occasionally the axes are parallel but one gear is nested inside the other. In this configuration, both gears turn in the same sense.
If the two gears are cut by an imaginary plane perpendicular to the axes, each section of one gear will interact only with the corresponding section of the other gear. Thus the three-dimensional gear train can be understood as a stack of gears that are flat and infinitesimally thin — that is, essentially two-dimensional.
Crossed.
In a "crossed" arrangement, the axes of rotation of the two gears are not parallel but cross at an arbitrary angle except zero or 180 degrees.
For best operation, each wheel then must be a bevel gear, whose overall shape is like a slice (frustum) of a cone whose apex is the meeting point of the two axes.
Bevel gears with equal numbers of teeth and shaft axes at 90 degrees are called "miter" (US) or "mitre" (UK) gears.
Independently of the angle between the axes, the larger of two unequal matching bevel gears may be internal or external, depending on the desired relative sense of rotation.
If the two gears are sliced by an imaginary sphere whose center is the point where the two axes cross, each section will remain on the surface of that sphere as the gear rotates, and the section of one gear will interact only with the corresponding section of the other gear. In this way, a pair of meshed 3D gears can be understood as a stack of nested infinitely thin cup-like gears.
Skew.
The gears in a matching pair are said to be "skew" if their axes of rotation are skew lines: neither parallel nor intersecting.
In this case, the best shape for each pitch surface is neither cylindrical nor conical but a portion of a hyperboloid of revolution. Such gears are called "hypoid" for short. Hypoid gears are most commonly found with shafts at 90 degrees.
Contact between hypoid gear teeth may be even smoother and more gradual than with spiral bevel gear teeth, but hypoid gears also have a sliding action along the meshing teeth as they rotate, and therefore usually require some of the most viscous types of gear oil to avoid it being extruded from the mating tooth faces; the oil is normally designated HP (for hypoid) followed by a number denoting the viscosity. Also, the pinion can be designed with fewer teeth than a spiral bevel pinion, with the result that gear ratios of 60:1 and higher are feasible using a single set of hypoid gears. This style of gear is most common in motor vehicle drive trains, in concert with a differential. Whereas a regular (non-hypoid) ring-and-pinion gear set is suitable for many applications, it is not ideal for vehicle drive trains because it generates more noise and vibration than a hypoid does. Bringing hypoid gears to market for mass-production applications was an engineering improvement of the 1920s.
Tooth orientation.
Internal and external.
A gear is said to be "external" if its teeth are directed generally away from the rotation axis, and "internal" otherwise. In a pair of matching wheels, only one of them (the larger one) may be internal.
Crown.
A "crown gear" or "contrate gear" is one whose teeth project at right angles to the plane. A crown gear is also sometimes meshed with an escapement such as found in mechanical clocks.
Tooth cut direction.
Gear teeth typically extend across the whole thickness of the gear. Another criterion for classifying gears is the general direction of the teeth across that dimension. This attribute is affected by the relative position and direction of the axes of rotation of the gears that are to be meshed together.
Straight.
In a cylindrical "spur gear" or "straight-cut gear", the tooth faces are straight along the direction parallel to the axis of rotation. Any imaginary cylinder with the same axis will cut the teeth along parallel straight lines.
The teeth can be either internal or external. Two spur gears mesh together correctly only if fitted to parallel shafts. No axial thrust is created by the tooth loads. Spur gears are excellent at moderate speeds but tend to be noisy at high speeds.
For arrangements with crossed non-parallel axes, the faces in a straight-cut gear are parts of a general conical surface whose generating lines ("generatrices") go through the meeting point of the two axes, resulting in a bevel gear. Such gears are generally used only at speeds below 5 m/s (1000 ft/min), or, for small gears, 1000 r.p.m.
Helical.
In a "helical" or "dry fixed" gear the tooth walls are not parallel to the axis of rotation, but are set at an angle. An imaginary pitch surface (cylinder, cone, or hyperboloid, depending on the relative axis positions) intersects each tooth face along an arc of an helix. Helical gears can be meshed in "parallel" or orientations. The former refers to when the shafts are parallel to each other; this is the most common orientation. In the latter, the shafts are non-parallel, and in this configuration the gears are sometimes known as "skew gears".
The angled teeth engage more gradually than do spur gear teeth, causing them to run more smoothly and quietly. With parallel helical gears, each pair of teeth first make contact at a single point at one side of the gear wheel; a moving curve of contact then grows gradually across the tooth face to a maximum, then recedes until the teeth break contact at a single point on the opposite side. In spur gears, teeth suddenly meet at a line contact across their entire width, causing stress and noise. Spur gears make a characteristic whine at high speeds. For this reason spur gears are used in low-speed applications and in situations where noise control is not a problem, and helical gears are used in high-speed applications, large power transmission, or where noise abatement is important. The speed is considered high when the pitch line velocity exceeds 25 m/s.
A disadvantage of helical gears is a resultant thrust along the axis of the gear, which must be accommodated by appropriate thrust bearings. However, this issue can be circumvented by using a herringbone gear or "double helical gear", which has no axial thrust, and also provides self-aligning of the gears. This results in less axial thrust than a comparable spur gear.
A second disadvantage of helical gears is also a greater degree of sliding friction between the meshing teeth, often addressed with additives in the lubricant.
For a "crossed" or "skew" configuration, the gears must have the same pressure angle and normal pitch; however, the helix angle and handedness can be different. The relationship between the two shafts is actually defined by the helix angle(s) of the two shafts and the handedness, as defined:
formula_0 for gears of the same handedness,
formula_1 for gears of opposite handedness,
where formula_2 is the helix angle for the gear. The crossed configuration is less mechanically sound because there is only a point contact between the gears, whereas in the parallel configuration there is a line contact.
Quite commonly, helical gears are used with the helix angle of one having the negative of the helix angle of the other; such a pair might also be referred to as having a right-handed helix and a left-handed helix of equal angles. The two equal but opposite angles add to zero: the angle between shafts is zero—that is, the shafts are "parallel". Where the sum or the difference (as described in the equations above) is not zero, the shafts are "crossed". For shafts "crossed" at right angles, the helix angles are of the same hand because they must add to 90 degrees. (This is the case with the gears in the illustration above: they mesh correctly in the crossed configuration: for the parallel configuration, one of the helix angles should be reversed. The gears illustrated cannot mesh with the shafts parallel.)
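A small numeric illustration of these relations, with helix angles in degrees chosen purely for illustration:

```python
# Shaft angle between two meshing helical gears: sum of the helix angles for
# the same handedness, difference for opposite handedness.
def shaft_angle(beta1, beta2, same_hand=True):
    return beta1 + beta2 if same_hand else abs(beta1 - beta2)

print(shaft_angle(45, 45, same_hand=True))    # 90: crossed at right angles, same hand
print(shaft_angle(30, 30, same_hand=False))   # 0: equal and opposite angles -> parallel shafts
```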
Double helical.
Double helical gears overcome the problem of axial thrust presented by single helical gears by using a double set of teeth, slanted in opposite directions. A double helical gear can be thought of as two mirrored helical gears mounted closely together on a common axle. This arrangement cancels out the net axial thrust, since each half of the gear thrusts in the opposite direction, resulting in a net axial force of zero. This arrangement can also remove the need for thrust bearings. However, double helical gears are more difficult to manufacture due to their more complicated shape.
Herringbone gears are a special type of helical gears. They do not have a groove in the middle like some other double helical gears do; the two mirrored helical gears are joined so that their teeth form a V shape. This can also be applied to bevel gears, as in the final drive of the Citroën Type A. Another type of double helical gear is a Wüst gear.
For both possible rotational directions, there exist two possible arrangements for the oppositely-oriented helical gears or gear faces. One arrangement is called stable, and the other unstable. In a stable arrangement, the helical gear faces are oriented so that each axial force is directed toward the center of the gear. In an unstable arrangement, both axial forces are directed away from the center of the gear. In either arrangement, the total (or "net") axial force on each gear is zero when the gears are aligned correctly. If the gears become misaligned in the axial direction, the unstable arrangement generates a net force that may lead to disassembly of the gear train, while the stable arrangement generates a net corrective force. If the direction of rotation is reversed, the direction of the axial thrusts is also reversed, so a stable configuration becomes unstable, and vice versa.
Stable double helical gears can be directly interchanged with spur gears without any need for different bearings.
Worm.
"Worms" resemble screws. A worm is meshed with a "worm wheel", which looks similar to a spur gear.
Worm-and-gear sets are a simple and compact way to achieve a high torque, low speed gear ratio. For example, helical gears are normally limited to gear ratios of less than 10:1 while worm-and-gear sets vary from 10:1 to 500:1. A disadvantage is the potential for considerable sliding action, leading to low efficiency.
A worm gear is a species of helical gear, but its helix angle is usually somewhat large (close to 90 degrees) and its body is usually fairly long in the axial direction. These attributes give it screw-like qualities. The distinction between a worm and a helical gear is that at least one tooth persists for a full rotation around the helix. If this occurs, it is a 'worm'; if not, it is a 'helical gear'. A worm may have as few as one tooth. If that tooth persists for several turns around the helix, the worm appears, superficially, to have more than one tooth, but what one in fact sees is the same tooth reappearing at intervals along the length of the worm. The usual screw nomenclature applies: a one-toothed worm is called "single thread" or "single start"; a worm with more than one tooth is called "multiple thread" or "multiple start". The helix angle of a worm is not usually specified. Instead, the lead angle, which is equal to 90 degrees minus the helix angle, is given.
In a worm-and-gear set, the worm can always drive the gear. However, if the gear attempts to drive the worm, it may or may not succeed. Particularly if the lead angle is small, the gear's teeth may simply lock against the worm's teeth, because the force component circumferential to the worm is not sufficient to overcome friction. In traditional music boxes, however, the gear drives the worm, which has a large helix angle. This mesh drives the speed-limiter vanes which are mounted on the worm shaft.
Worm-and-gear sets that do lock are called self locking, which can be used to advantage, as when it is desired to set the position of a mechanism by turning the worm and then have the mechanism hold that position. An example is the machine head found on some types of stringed instruments.
If the gear in a worm-and-gear set is an ordinary helical gear only a single point of contact is achieved. If medium to high power transmission is desired, the tooth shape of the gear is modified to achieve more intimate contact by making both gears partially envelop each other. This is done by making both concave and joining them at a saddle point; this is called a cone-drive or "Double enveloping".
Worm gears can be right or left-handed, following the long-established practice for screw threads.
Tooth profile.
Another criterion to classify gears is the "tooth profile", the shape of the cross-section of a tooth face by an imaginary cut perpendicular to the pitch surface, such as the transverse, normal, or axial plane.
The tooth profile is crucial for the smoothness and uniformity of the movement of matching gears, as well as for the friction and wear.
Artisanal.
The teeth of antique or artisanal gears that were cut by hand from sheet material, like those in the Antikythera mechanism, generally had simple profiles, such as triangles.
The teeth of larger gears — such as used in windmills — were usually pegs with simple shapes like cylinders, parallelepipeds, or triangular prisms inserted into a smooth wooden or metal wheel; or were holes with equally simple shapes cut into such a wheel.
Because of their sub-optimal profile, the effective gear ratio of such artisanal matching gears was not constant, but fluctuated over each tooth cycle, resulting in vibrations, noise, and accelerated wear.
Cage.
A "cage gear", also called a "lantern gear" or "lantern pinion" is one of those artisanal has cylindrical rods for teeth, parallel to the axle and arranged in a circle around it, much as the bars on a round bird cage or lantern. The assembly is held together by disks at each end, into which the tooth rods and axle are set. Cage gears are more efficient than solid pinions, and dirt can fall through the rods rather than becoming trapped and increasing wear. They can be constructed with very simple tools as the teeth are not formed by cutting or milling, but rather by drilling holes and inserting rods.
Sometimes used in clocks, a cage gear should always be driven by a gearwheel, not used as the driver. The cage gear was not initially favoured by conservative clock makers. It became popular in turret clocks where dirty working conditions were most commonplace. Domestic American clock movements often used them.
Mathematical.
In most modern gears, the tooth profile is usually not straight or circular, but of special form designed to achieve a constant angular velocity ratio.
There is an infinite variety of tooth profiles that will achieve this goal. In fact, given a fairly arbitrary tooth shape, it is possible to develop a tooth profile for the mating gear that will do it.
Parallel and crossed axes.
However, two constant velocity tooth profiles are the most commonly used in modern times for gears with parallel or crossed axes, based on the "cycloid" and "involute" curves.
Cycloidal gears were more common until the late 1800s. Since then, the involute has largely superseded it, particularly in drive train applications. The cycloid is in some ways the more interesting and flexible shape; however the involute has two advantages: it is easier to manufacture, and it permits the center-to-center spacing of the gears to vary over some range without ruining the constancy of the velocity ratio. Cycloidal gears only work properly if the center spacing is exactly right. Cycloidal gears are still commonly used in mechanical clocks.
Skew axes.
For non-parallel axes with non-straight tooth cuts, the best tooth profile is one of several spiral bevel gear shapes. These include Gleason types (circular arc with non-constant tooth depth), Oerlikon and Curvex types (circular arc with constant tooth depth), Klingelnberg Cyclo-Palloid (Epicycloid with constant tooth depth) or Klingelnberg Palloid.
The tooth faces in these gear types are not involute cylinders or cones but patches of octoidal surfaces. Manufacturing such tooth faces may require a 5-axis milling machine.
Spiral bevel gears have the same advantages and disadvantages relative to their straight-cut cousins as helical gears do to spur gears, such as lower noise and vibration. Bevel gears calculated in a simplified way, on the basis of an equivalent cylindrical gear in normal section with an involute tooth form, show a deviant tooth form with tooth strength reduced by 10-28% without offset and 45% with offset.
Special gear trains.
Rack and pinion.
A "rack" is a toothed bar or rod that can be thought of as a sector gear with an infinitely large radius of curvature. Torque can be converted to linear force by meshing a rack with a round gear called a "pinion": the pinion turns, while the rack moves in a straight line. Such a mechanism is used in the steering of automobiles to convert the rotation of the steering wheel into the left-to-right motion of the tie rod(s) that are attached to the front wheels.
Racks also feature in the theory of gear geometry, where, for instance, the tooth shape of an interchangeable set of gears may be specified for the rack (infinite radius), and the tooth shapes for gears of particular actual radii are then derived from that. The rack and pinion gear type is also used in a rack railway.
Epicyclic gear train.
In epicyclic gearing, one or more of the gear axes moves. Examples are sun and planet gearing (see below), cycloidal drive, automatic transmissions, and mechanical differentials.
Sun and planet.
Sun and planet gearing is a method of converting reciprocating motion into rotary motion that was used in steam engines. James Watt used it on his early steam engines to get around the patent on the crank, but it also provided the advantage of increasing the flywheel speed so Watt could use a lighter flywheel.
In the illustration, the sun is yellow, the planet red, the reciprocating arm is blue, the flywheel is green and the driveshaft is gray.
Non-circular gears.
Non-circular gears are designed for special purposes. While a regular gear is optimized to transmit torque to another engaged member with minimum noise and wear and maximum efficiency, a non-circular gear's main objective might be ratio variations, axle displacement oscillations and more. Common applications include textile machines, potentiometers and continuously variable transmissions.
Non-rigid gears.
Most gears are ideally rigid bodies which transmit torque and movement through the lever principle and contact forces between the teeth. Namely, the torque applied to one gear causes it to rotate as a rigid body, so that its teeth push against those of the matched gear, which in turn rotates as a rigid body transmitting the torque to its axle. Some specialized gears escape this pattern, however.
Harmonic gear.
A "harmonic gear" or "strain wave gear" is a specialized gearing mechanism often used in industrial motion control, robotics and aerospace for its advantages over traditional gearing systems, including lack of backlash, compactness and high gear ratios.
Though the diagram does not demonstrate the correct configuration, it is a "timing gear," conventionally with far more teeth than a traditional gear to ensure a higher degree of precision.
Magnetic gear.
In a "magnetic gear" pair there is no contact between the two members; the torque is instead transmitted through magnetic fields. The cogs of each gear are constant magnets with periodic alternation of opposite magnetic poles on mating surfaces. Gear components are mounted with a backlash capability similar to other mechanical gearings. Although they cannot exert as much force as a traditional gear due to limits on magnetic field strength, such gears work without touching and so are immune to wear, have very low noise, minimal power losses from friction and can slip without damage making them very reliable. They can be used in configurations that are not possible for gears that must be physically touching and can operate with a non-metallic barrier completely separating the driving force from the load. The magnetic coupling can transmit force into a hermetically sealed enclosure without using a radial shaft seal, which may leak.
formula_4 in metric units or formula_5 in imperial units.
formula_6
where m is the module and p the circular pitch. The units of module are customarily millimeters; an "English Module" is sometimes used with the units of inches. When the diametral pitch, DP, is in English units,
formula_7 in conventional metric units.
The distance between the two axis becomes:
formula_8
where a is the axis distance, and z1 and z2 are the numbers of cogs (teeth) of the two wheels (gears). These numbers (or at least one of them) are often chosen among primes to create an even contact between every cog of both wheels, and thereby avoid unnecessary wear and damage. Even, uniform gear wear is achieved by ensuring the tooth counts of the two gears meshing together are relatively prime to each other; this occurs when the greatest common divisor (GCD) of the two tooth counts equals 1, e.g. GCD(16,25)=1; if a 1:1 gear ratio is desired, a relatively prime gear may be inserted in between the two gears; this maintains the 1:1 ratio but reverses the gear direction; a second relatively prime gear could also be inserted to restore the original rotational direction while maintaining uniform wear with all 4 gears in this case. Mechanical engineers, at least in continental Europe, usually use the module instead of circular pitch. The module, just like the circular pitch, can be used for all types of cogs, not just involute-based straight cogs.
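A toy calculation of the axis distance and of the relatively-prime tooth-count check described above, with illustrative values for the module and the tooth counts:

```python
# Centre (axis) distance a = m*(z1 + z2)/2 and coprime tooth-count check.
from math import gcd

m = 2.0            # module (mm)
z1, z2 = 16, 25    # tooth counts

a = m * (z1 + z2) / 2.0       # 41.0 mm between the two axes
coprime = gcd(z1, z2) == 1    # True: every tooth of one gear meets every tooth of the other
print(a, coprime)
```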
formula_9
formula_15
Ratio of the number of teeth to the pitch diameter. Could be measured in teeth per inch or teeth per centimeter, but conventionally has units of per inch of diameter. Where the module, m, is in metric units
formula_16 in English units
Nomenclature.
Helical gear.
Several other helix parameters can be viewed either in the normal or transverse planes. The subscript n usually indicates the normal.
Worm gear.
Subscript w denotes the worm, subscript g denotes the gear.
formula_22
formula_23
formula_24
Pitch.
Pitch is the distance between a point on one tooth and the corresponding point on an adjacent tooth. It is a dimension measured along a line or curve in the transverse, normal, or axial directions. The use of the single word "pitch" without qualification may be ambiguous, and for this reason it is preferable to use specific designations such as transverse circular pitch, normal base pitch, axial pitch.
formula_25
formula_26
formula_27 degrees or formula_28 radians
Backlash.
Backlash is the error in motion that occurs when gears change direction. It exists because there is always some gap between the trailing face of the driving tooth and the leading face of the tooth behind it on the driven gear, and that gap must be closed before force can be transferred in the new direction. The term "backlash" can also be used to refer to the size of the gap, not just the phenomenon it causes; thus, one could speak of a pair of gears as having, for example, "0.1 mm of backlash." A pair of gears could be designed to have zero backlash, but this would presuppose perfection in manufacturing, uniform thermal expansion characteristics throughout the system, and no lubricant. Therefore, gear pairs are designed to have some backlash. It is usually provided by reducing the tooth thickness of each gear by half the desired gap distance. In the case of a large gear and a small pinion, however, the backlash is usually taken entirely off the gear and the pinion is given full sized teeth. Backlash can also be provided by moving the gears further apart. The backlash of a gear train equals the sum of the backlash of each pair of gears, so in long trains backlash can become a problem.
For situations that require precision, such as instrumentation and control, backlash can be minimized through one of several techniques. For instance, the gear can be split along a plane perpendicular to the axis, one half fixed to the shaft in the usual manner, the other half placed alongside it, free to rotate about the shaft, but with springs between the two halves providing relative torque between them, so that one achieves, in effect, a single gear with expanding teeth. Another method involves tapering the teeth in the axial direction and letting the gear slide in the axial direction to take up slack.
Standard pitches and the module system.
Although gears can be made with any pitch, for convenience and interchangeability standard pitches are frequently used. Pitch is a property associated with linear dimensions and so differs according to whether the standard values are in the imperial (inch) or metric systems. Using "inch" measurements, standard diametral pitch values with units of "per inch" are chosen; the "diametral pitch" is the number of teeth on a gear of one inch pitch diameter. Common standard values for spur gears are 3, 4, 5, 6, 8, 10, 12, 16, 20, 24, 32, 48, 64, 72, 80, 96, 100, 120, and 200. Certain standard pitches such as "1/10" and "1/20" in inch measurements, which mesh with linear rack, are actually (linear) "circular pitch" values with units of "inches".
When gear dimensions are in the metric system the pitch specification is generally in terms of module or "modulus", which is effectively a length measurement across the "pitch diameter". The term module is understood to mean the pitch diameter in millimetres divided by the number of teeth. When the module is based upon inch measurements, it is known as the "English module" to avoid confusion with the metric module. Module is a direct dimension ("millimeters per tooth"), unlike diametral pitch, which is an inverse dimension ("teeth per inch"). Thus, if the pitch diameter of a gear is 40 mm and the number of teeth 20, the module is 2, which means that there are 2 mm of pitch diameter for each tooth. The preferred standard module values are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.25, 1.5, 2.0, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20, 25, 32, 40 and 50.
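As a minimal numerical check of these definitions (a sketch only; the helper names are illustrative and not from any standard library):
<syntaxhighlight lang="python">
import math

def module_from_gear(pitch_diameter_mm: float, teeth: int) -> float:
    """Module m = pitch diameter in millimetres / number of teeth."""
    return pitch_diameter_mm / teeth

def diametral_pitch_from_module(m: float) -> float:
    """Diametral pitch DP = 25.4 / m, in teeth per inch of pitch diameter."""
    return 25.4 / m

def circular_pitch_from_module(m: float) -> float:
    """Circular pitch p = pi * m, in millimetres."""
    return math.pi * m

print(module_from_gear(40.0, 20))        # 2.0 mm per tooth, as in the example above
print(diametral_pitch_from_module(2.0))  # 12.7 teeth per inch
print(circular_pitch_from_module(2.0))   # ~6.283 mm
</syntaxhighlight>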
Gear model in modern physics.
Modern physics adopted the gear model in different ways. In the nineteenth century, James Clerk Maxwell developed a model of electromagnetism in which magnetic field lines were rotating tubes of incompressible fluid. Maxwell used a gear wheel and called it an "idle wheel" to explain the electric current as a rotation of particles in opposite directions to that of the rotating field lines.
More recently, quantum physics uses "quantum gears" in its models. A group of gears can serve as a model for several different systems, such as an artificially constructed nanomechanical device or a group of ring molecules.
The three wave hypothesis compares the wave–particle duality to a bevel gear.
Gear mechanism in natural world.
The gear mechanism was previously considered exclusively artificial, but as early as 1957, gears had been recognized in the hind legs of various species of planthoppers, and scientists from the University of Cambridge characterized their functional significance in 2013 by doing high-speed photography of the nymphs of "Issus coleoptratus". These gears are found only in the nymph forms of all planthoppers, and are lost during the final molt to the adult stage. In "I. coleoptratus", each leg has a 400-micrometer strip of teeth, with a pitch radius of 200 micrometers and 10 to 12 fully interlocking spur-type gear teeth, including filleted curves at the base of each tooth to reduce the risk of shearing. The joint rotates like mechanical gears and synchronizes "Issus's" hind legs to within 30 microseconds when it jumps, preventing yaw rotation. The gears are not connected all the time. One is located on each of the juvenile insect's hind legs, and when it prepares to jump, the two sets of teeth lock together. As a result, the legs move in almost perfect unison, giving the insect more power as the gears rotate to their stopping point and then unlock.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography | [
{
"math_id": 0,
"text": "E = \\beta_1 + \\beta_2"
},
{
"math_id": 1,
"text": "E = \\beta_1 - \\beta_2"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "\\psi"
},
{
"math_id": 4,
"text": "d = \\frac{Nm_n}{\\cos\\psi}"
},
{
"math_id": 5,
"text": "d = \\frac{N}{P_d\\cos\\psi}"
},
{
"math_id": 6,
"text": "m = \\frac{p}{\\pi}"
},
{
"math_id": 7,
"text": "m = \\frac{25.4}{DP}"
},
{
"math_id": 8,
"text": "a = \\frac{m}{2}(z_1 + z_2)"
},
{
"math_id": 9,
"text": " d_w = \\frac{2a}{u+1} = \\frac{2a}{\\frac{z_2}{z_1} + 1}. "
},
{
"math_id": 10,
"text": "\\theta"
},
{
"math_id": 11,
"text": "D_o"
},
{
"math_id": 12,
"text": "a = \\frac{1}{2}(D_o - D)"
},
{
"math_id": 13,
"text": "b = \\frac{1}{2}(D - \\text{root diameter})"
},
{
"math_id": 14,
"text": "h_t"
},
{
"math_id": 15,
"text": "DP = \\frac{N}{d} = \\frac{\\pi}{p}"
},
{
"math_id": 16,
"text": "DP = \\frac{25.4}{m}"
},
{
"math_id": 17,
"text": "p_b"
},
{
"math_id": 18,
"text": "p_n"
},
{
"math_id": 19,
"text": "p_n = p\\cos(\\psi)"
},
{
"math_id": 20,
"text": "\\lambda"
},
{
"math_id": 21,
"text": "d_w"
},
{
"math_id": 22,
"text": " \\epsilon_\\gamma = \\epsilon_\\alpha + \\epsilon_\\beta "
},
{
"math_id": 23,
"text": " m_{\\rm t} = m_{\\rm p} + m_{\\rm F} "
},
{
"math_id": 24,
"text": " m_{\\rm o} = \\sqrt{m_{\\rm p}^2 + m_{\\rm F}^2} "
},
{
"math_id": 25,
"text": " P_{\\rm d} = \\frac{N}{d} = \\frac{25.4}{m} = \\frac{\\pi}{p} "
},
{
"math_id": 26,
"text": " P_{\\rm nd} = \\frac{P_{\\rm d}}{\\cos\\psi} "
},
{
"math_id": 27,
"text": " \\tau = \\frac{360}{z} "
},
{
"math_id": 28,
"text": " \\frac{2\\pi}{z} "
}
] | https://en.wikipedia.org/wiki?curid=82916 |
8292324 | Solid solution strengthening | Type of alloying which improves strength of pure metals
In metallurgy, solid solution strengthening is a type of alloying that can be used to improve the strength of a pure metal. The technique works by adding atoms of one element (the alloying element) to the crystalline lattice of another element (the base metal), forming a solid solution. The local nonuniformity in the lattice due to the alloying element makes plastic deformation more difficult by impeding dislocation motion through stress fields. In contrast, alloying beyond the solubility limit can form a second phase, leading to strengthening via other mechanisms (e.g. the precipitation of intermetallic compounds).
Types.
Depending on the size of the alloying element, a substitutional solid solution or an interstitial solid solution can form. In both cases, atoms are visualised as rigid spheres where the overall crystal structure is essentially unchanged. The rationale of crystal geometry to atom solubility prediction is summarized in the Hume-Rothery rules and Pauling's rules.
Substitutional solid solution strengthening occurs when the solute atom is large enough that it can replace solvent atoms in their lattice positions. Some alloying elements are only soluble in small amounts, whereas some solvent and solute pairs form a solution over the whole range of binary compositions. Generally, higher solubility is seen when solvent and solute atoms are similar in atomic size (within about 15%, according to the Hume-Rothery rules) and adopt the same crystal structure in their pure form. Examples of completely miscible binary systems are Cu-Ni and the Ag-Au face-centered cubic (FCC) binary systems, and the Mo-W body-centered cubic (BCC) binary system.
Interstitial solid solutions form when the solute atom is small enough (radii up to 57% the radii of the parent atoms) to fit at interstitial sites between the solvent atoms. The atoms crowd into the interstitial sites, causing the bonds of the solvent atoms to compress and thus deform (this rationale can be explained with Pauling's rules). Elements commonly used to form interstitial solid solutions include H, Li, Na, N, C, and O. Carbon in iron (steel) is one example of interstitial solid solution.
Mechanism.
The strength of a material is dependent on how easily dislocations in its crystal lattice can be propagated. These dislocations create stress fields within the material depending on their character. When solute atoms are introduced, local stress fields are formed that interact with those of the dislocations, impeding their motion and causing an increase in the yield stress of the material, which means an increase in strength of the material. This gain is a result of both lattice distortion and the modulus effect.
When solute and solvent atoms differ in size, local stress fields are created that can attract or repel dislocations in their vicinity. This is known as the size effect. By relieving tensile or compressive strain in the lattice, the solute size mismatch can put the dislocation in a lower energy state. In substitutional solid solutions, these stress fields are spherically symmetric, meaning they have no shear stress component. As such, substitutional solute atoms do not interact with the shear stress fields characteristic of screw dislocations. Conversely, in interstitial solid solutions, solute atoms cause a tetragonal distortion, generating a shear field that can interact with edge, screw, and mixed dislocations. The attraction or repulsion of the dislocation to the solute atom depends on whether the atom sits above or below the slip plane. For example, consider an edge dislocation encountering a smaller solute atom above its slip plane. In this case, the interaction energy is negative, resulting in attraction of the dislocation to the solute. This is due to the reduction in dislocation energy from the compressed volume lying above the dislocation core. If the solute atom were positioned below the slip plane, the dislocation would be repelled by the solute. However, the overall interaction energy between an edge dislocation and a smaller solute is negative because the dislocation spends more time at sites with attractive energy. This is also true for a solute atom larger than the solvent atom. Thus, the interaction energy dictated by the size effect is generally negative.
The elastic modulus of the solute atom can also determine the extent of strengthening. For a “soft” solute with elastic modulus lower than that of the solvent, the interaction energy due to modulus mismatch ("U"modulus) is negative, which reinforces the size interaction energy ("U"size). In contrast, "U"modulus is positive for a “hard” solute, which results in a lower total interaction energy than for a soft atom, even though the interaction force is negative (attractive) in both cases as the dislocation approaches the solute. The maximum force ("F"max) necessary to tear the dislocation away from the lowest energy state (i.e. the solute atom) is greater for the soft solute than for the hard one. As a result, a soft solute will strengthen a crystal more than a hard solute, owing to the synergistic strengthening from combining both size and modulus effects.
The elastic interaction effects (i.e. size and modulus effects) dominate solid-solution strengthening for most crystalline materials. However, other effects, including charge and stacking fault effects, may also play a role. For ionic solids, where electrostatic interaction dictates bond strength, the charge effect is also important. For example, the addition of a divalent ion to a monovalent material may strengthen the electrostatic interaction between the solute and the charged matrix atoms that comprise a dislocation. However, this strengthening occurs to a lesser extent than the elastic strengthening effects. For materials containing a higher density of stacking faults, solute atoms may interact with the stacking faults either attractively or repulsively. This lowers the stacking fault energy, leading to repulsion of the partial dislocations, which thus makes the material stronger.
Surface carburizing, or case hardening, is one example of solid solution strengthening in which the density of solute carbon atoms is increased close to the surface of the steel, resulting in a gradient of carbon atoms throughout the material. This provides superior mechanical properties to the surface of the steel without having to use a higher-cost material for the component.
Governing equations.
Solid solution strengthening increases yield strength of the material by increasing the shear stress, formula_0, to move dislocations:
formula_1
where "c" is the concentration of the solute atoms, "G" is the shear modulus, "b" is the magnitude of the Burger's vector, and formula_2 is the lattice strain due to the solute. This is composed of two terms, one describing lattice distortion and the other local modulus change.
formula_3
Here, formula_4 the term that captures the local modulus change, formula_5 a constant dependent on the solute atoms and formula_6 is the lattice distortion term.
The lattice distortion term can be described as:
formula_7, where "a" is the lattice parameter of the material.
Meanwhile, the local modulus change is captured in the following expression:
formula_8, where "G" is shear modulus of the solute material.
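A minimal numerical sketch of these relations follows; all input values are hypothetical and given in arbitrary consistent units, chosen purely to show how the terms combine rather than as a calibrated model.
<syntaxhighlight lang="python">
import math

def strengthening_stress(G, b, eps_G, beta, eps_a, c):
    """Increase in shear stress: delta_tau = G * b * eps**(3/2) * sqrt(c),
    where eps = |eps_G - beta * eps_a| combines the local modulus change
    eps_G = (dG/dc)/G and the lattice distortion eps_a = (da/dc)/a."""
    eps = abs(eps_G - beta * eps_a)
    return G * b * eps ** 1.5 * math.sqrt(c)

# Hypothetical inputs: eps_G and eps_a would normally come from measured
# changes of shear modulus and lattice parameter with solute concentration c.
print(strengthening_stress(G=1.0, b=1.0, eps_G=0.12, beta=3.0, eps_a=0.02, c=0.01))
</syntaxhighlight>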
Implications.
In order to achieve noticeable material strengthening via solution strengthening, one should alloy with solutes of higher shear modulus, hence increasing the local shear modulus in the material. In addition, one should alloy with elements of different equilibrium lattice constants. The greater the difference in lattice parameter, the higher the local stress fields introduced by alloying.
Alloying with elements of higher shear modulus or of very different lattice parameters will increase the stiffness and introduce local stress fields respectively. In either case, the dislocation propagation will be hindered at these sites, impeding plasticity and increasing yield strength proportionally with solute concentration.
Solid solution strengthening depends on the concentration of the solute atoms, their shear modulus, their size relative to the solvent atoms, and their valency (for ionic materials).
For many common alloys, rough experimental fits can be found for the addition in strengthening provided in the form of:
formula_9
where formula_10 is a solid solution strengthening coefficient and formula_11 is the concentration of solute in atomic fractions.
Nevertheless, one should not add so much solute as to precipitate a new phase. This occurs if the concentration of the solute reaches a certain critical point given by the binary system phase diagram. This critical concentration therefore puts a limit to the amount of solid solution strengthening that can be achieved with a given material.
Examples.
Aluminum alloys.
An example is aluminum alloys in which solid solution strengthening occurs by adding magnesium and manganese to the aluminum matrix. Commercially, Mn is added in the AA3xxx series and Mg in the AA5xxx series. Mn addition to aluminum alloys assists in the recrystallization and recovery of the alloy, which also influences the grain size. Both of these systems are used in low- to medium-strength applications, with appreciable formability and corrosion resistance.
Nickel-based superalloys.
Many nickel-based superalloys depend on solid solution as a strengthening mechanism. The most popular example is the Inconel family; many of these alloys contain chromium and iron, with further additions of cobalt, molybdenum, niobium, and titanium. Nickel-based superalloys are well known for their intensive use in industry, especially the aeronautical and aerospace sectors, due to their superior mechanical and corrosion properties at high temperatures.
An example of the use of nickel-based superalloys in industry is turbine blades. In practice, one such alloy is known as MAR-M200; it is solid solution strengthened by chromium, tungsten and cobalt in the matrix and is also precipitation hardened by carbide and boride precipitates at the grain boundaries. A key factor for these turbine blades is the grain size: an increase in grain size can lead to a significant reduction in the strain rate. In MAR-M200, for example, this reduction in strain rate is seen when the grain size is increased from 100 μm to 10 mm.
This reduced strain rate is extremely important for turbine blades because they undergo significant mechanical stress and high temperatures, which can lead to the onset of creep deformation. Precise control of grain size in nickel-based superalloys is therefore key to creep resistance, mechanical reliability and longevity. Grain size can be controlled through manufacturing techniques such as directional solidification and single-crystal casting.
Stainless steel.
Stainless steel is one of the most commonly used metals in many industries. Solid solution strengthening is one of the mechanisms used to enhance the properties of the alloy. Austenitic steels mainly contain chromium, nickel, molybdenum, and manganese. They are used mostly for cookware, kitchen equipment, and marine applications because of their good corrosion resistance in saline environments.
Titanium alloys.
Titanium and titanium alloys find wide usage in aerospace, medical, and maritime applications. The best-known titanium alloy that relies on solid solution strengthening is Ti-6Al-4V. The addition of oxygen to pure titanium also produces solid solution strengthening, whereas adding it to the Ti-6Al-4V alloy does not have the same influence.
Copper alloys.
Bronze and brass are both copper alloys that are solid solution strengthened. Bronze is the result of adding about 12% tin to copper, while brass is the result of adding about 34% zinc to copper. Both of these alloys are used in coin production, ship hardware, and art.
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\Delta \\tau =Gb \\epsilon^\\tfrac 3 2 \\sqrt c "
},
{
"math_id": 2,
"text": "\\epsilon"
},
{
"math_id": 3,
"text": "\\epsilon = | \\epsilon_G - \\beta \\epsilon_a | "
},
{
"math_id": 4,
"text": "\\epsilon_G"
},
{
"math_id": 5,
"text": "\\beta"
},
{
"math_id": 6,
"text": "\\epsilon_a"
},
{
"math_id": 7,
"text": "\\epsilon_a = \\dfrac {\\Delta a}{a\\Delta c}"
},
{
"math_id": 8,
"text": "\\epsilon_G = \\dfrac {\\Delta G}{G\\Delta c}"
},
{
"math_id": 9,
"text": "\\Delta\\sigma_s = k_s\\sqrt{c}"
},
{
"math_id": 10,
"text": "k_s"
},
{
"math_id": 11,
"text": "c"
}
] | https://en.wikipedia.org/wiki?curid=8292324 |
82930 | FLOPS | Measure of computer performance
Floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance in computing, useful in fields of scientific computations that require floating-point calculations.
For such cases, it is a more accurate measure than measuring instructions per second.
Floating-point arithmetic.
Floating-point arithmetic is needed for very large or very small real numbers, or computations that require a large dynamic range. Floating-point representation is similar to scientific notation, except everything is carried out in base two, rather than base ten. The encoding scheme stores the sign, the exponent (in base two for Cray and VAX, base two or ten for IEEE floating point formats, and base 16 for IBM Floating Point Architecture) and the significand (number after the radix point). While several similar formats are in use, the most common is ANSI/IEEE Std. 754-1985. This standard defines the format for 32-bit numbers called "single precision", as well as 64-bit numbers called "double precision" and longer numbers called "extended precision" (used for intermediate results). Floating-point representations can support a much wider range of values than fixed-point, with the ability to represent very small numbers and very large numbers.
Dynamic range and precision.
The exponentiation inherent in floating-point computation assures a much larger dynamic range – the largest and smallest numbers that can be represented – which is especially important when processing data sets where some of the data may have extremely large range of numerical values or where the range may be unpredictable. As such, floating-point processors are ideally suited for computationally intensive applications.
Computational performance.
FLOPS and MIPS are units of measure for the numerical computing performance of a computer. Floating-point operations are typically used in fields such as scientific computational research, as well as in machine learning. However, before the late 1980s floating-point hardware was typically an optional feature (it is possible to implement floating-point arithmetic in software over any integer hardware), and computers that had it were said to be "scientific computers", or to have "scientific computation" capability. Thus the unit MIPS was useful to measure integer performance of any computer, including those without such a capability, and to account for architecture differences the similar unit MOPS (million operations per second) was in use as early as 1970. Note that besides integer (or fixed-point) arithmetic, examples of integer operations include data movement (A to B) or value testing (if A = B, then C). That is why MIPS is adequate as a performance benchmark when a computer is used for database queries, word processing, spreadsheets, or running multiple virtual operating systems. In 1974 David Kuck coined the terms flops and megaflops to describe the supercomputer performance of the day by the number of floating-point calculations performed per second. This was much better than using the prevalent MIPS to compare computers, as this statistic usually had little bearing on the arithmetic capability of the machine on scientific tasks.
FLOPS on an HPC-system can be calculated using this equation:
formula_0
This can be simplified to the most common case: a computer that has exactly 1 CPU:
formula_1
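As an illustration of the two equations above, the following Python sketch computes theoretical peak FLOPS. The hardware figures in the examples are hypothetical; in particular, "FLOPs per cycle" depends on the width of the vector and fused multiply-add units of the core in question.
<syntaxhighlight lang="python">
def peak_flops(racks, nodes_per_rack, sockets_per_node, cores_per_socket,
               cycles_per_second, flops_per_cycle):
    """Theoretical peak FLOPS of an HPC system, per the first equation above."""
    cores = racks * nodes_per_rack * sockets_per_node * cores_per_socket
    return cores * cycles_per_second * flops_per_cycle

# Single hypothetical CPU: 8 cores at 3.0 GHz, 16 double-precision FLOPs per cycle
print(peak_flops(1, 1, 1, 8, 3.0e9, 16) / 1e9, "GFLOPS")      # 384.0 GFLOPS

# Hypothetical cluster: 10 racks x 16 nodes x 2 sockets x 32 cores at 2.5 GHz
print(peak_flops(10, 16, 2, 32, 2.5e9, 16) / 1e12, "TFLOPS")  # 409.6 TFLOPS
</syntaxhighlight>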
FLOPS can be recorded in different measures of precision, for example, the TOP500 supercomputer list ranks computers by 64 bit (double-precision floating-point format) operations per second, abbreviated to "FP64". Similar measures are available for 32-bit ("FP32") and 16-bit ("FP16") operations.
Performance records.
Single computer records.
In June 1997, Intel's ASCI Red was the world's first computer to achieve one teraFLOPS and beyond. Sandia director Bill Camp said that ASCI Red had the best reliability of any supercomputer ever built, and "was supercomputing's high-water mark in longevity, price, and performance".
NEC's SX-9 supercomputer was the world's first vector processor to exceed 100 gigaFLOPS per single core.
In June 2006, a new computer was announced by Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, almost two times faster than the Blue Gene/L, but MDGRAPE-3 is not a general purpose computer, which is why it does not appear in the Top500.org list. It has special-purpose pipelines for simulating molecular dynamics.
By 2007, Intel Corporation unveiled the experimental multi-core POLARIS chip, which achieves 1 teraFLOPS at 3.13 GHz. The 80-core chip can raise this result to 2 teraFLOPS at 6.26 GHz, although the thermal dissipation at this frequency exceeds 190 watts.
In June 2007, Top500.org reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 teraFLOPS. The Cray XT4 hit second place with 101.7 teraFLOPS.
On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS, faster than the Blue Gene/L. When configured to do so, it can reach speeds in excess of three petaFLOPS.
On October 25, 2007, NEC Corporation of Japan issued a press release announcing its SX series model SX-9, claiming it to be the world's fastest vector supercomputer. The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core.
On February 4, 2008, the NSF and the University of Texas at Austin opened full scale research runs on an AMD, Sun supercomputer named Ranger,
the most powerful supercomputing system in the world for open science research, which operates at sustained speed of 0.5 petaFLOPS.
On May 25, 2008, an American supercomputer built by IBM, named 'Roadrunner', reached the computing milestone of one petaFLOPS. It headed the June 2008 and November 2008 TOP500 list of the most powerful supercomputers (excluding grid computers). The computer is located at Los Alamos National Laboratory in New Mexico. The computer's name refers to the New Mexico state bird, the greater roadrunner ("Geococcyx californianus").
In June 2008, AMD released ATI Radeon HD 4800 series, which are reported to be the first GPUs to achieve one teraFLOPS. On August 12, 2008, AMD released the ATI Radeon HD 4870X2 graphics card with two Radeon R770 GPUs totaling 2.4 teraFLOPS.
In November 2008, an upgrade to the Cray Jaguar supercomputer at the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) raised the system's computing power to a peak 1.64 petaFLOPS, making Jaguar the world's first petaFLOPS system dedicated to open research. In early 2009 the supercomputer was named after a mythical creature, Kraken. Kraken was declared the world's fastest university-managed supercomputer and sixth fastest overall in the 2009 TOP500 list. In 2010 Kraken was upgraded and can operate faster and is more powerful.
In 2009, the Cray Jaguar performed at 1.75 petaFLOPS, beating the IBM Roadrunner for the number one spot on the TOP500 list.
In October 2010, China unveiled the Tianhe-1, a supercomputer that operates at a peak computing rate of 2.5 petaFLOPS.
As of 2010, the fastest PC processor reached 109 gigaFLOPS (Intel Core i7 980 XE) in double precision calculations. GPUs are considerably more powerful. For example, Nvidia Tesla C2050 GPU computing processors perform around 515 gigaFLOPS in double precision calculations, and the AMD FireStream 9270 peaks at 240 gigaFLOPS.
In November 2011, it was announced that Japan had achieved 10.51 petaFLOPS with its K computer. It has 88,128 SPARC64 VIIIfx processors in 864 racks, with theoretical performance of 11.28 petaFLOPS. It is named after the Japanese word "kei", which stands for 10 quadrillion, corresponding to the target speed of 10 petaFLOPS.
On November 15, 2011, Intel demonstrated a single x86-based processor, code-named "Knights Corner", sustaining more than a teraFLOPS on a wide range of DGEMM operations. Intel emphasized during the demonstration that this was a sustained teraFLOPS (not "raw teraFLOPS" used by others to get higher but less meaningful numbers), and that it was the first general purpose processor to ever cross a teraFLOPS.
On June 18, 2012, IBM's Sequoia supercomputer system, based at the U.S. Lawrence Livermore National Laboratory (LLNL), reached 16 petaFLOPS, setting the world record and claiming first place in the latest TOP500 list.
On November 12, 2012, the TOP500 list certified Titan as the world's fastest supercomputer per the LINPACK benchmark, at 17.59 petaFLOPS. It was developed by Cray Inc. at the Oak Ridge National Laboratory and combines AMD Opteron processors with "Kepler" NVIDIA Tesla graphics processing unit (GPU) technologies.
On June 10, 2013, China's Tianhe-2 was ranked the world's fastest with 33.86 petaFLOPS.
On June 20, 2016, China's Sunway TaihuLight was ranked the world's fastest with 93 petaFLOPS on the LINPACK benchmark (out of 125 peak petaFLOPS). The system was installed at the National Supercomputing Center in Wuxi, and represented more performance than the next five most powerful systems on the TOP500 list did at the time combined.
In June 2019, Summit, an IBM-built supercomputer now running at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot with a performance of 148.6 petaFLOPS on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each one equipped with two 22-core Power9 CPUs, and six NVIDIA Tesla V100 GPUs.
In June 2022, the United States' Frontier became the most powerful supercomputer on TOP500, reaching 1102 petaFLOPS (1.102 exaFLOPS) on the LINPACK benchmarks.
Distributed computing records.
Distributed computing uses the Internet to link personal computers to achieve more FLOPS.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{FLOPS} = \\text{racks} \\times \\frac{\\text{nodes}}{\\text{rack}} \\times \\frac{\\text{sockets}}{\\text{node}} \\times \\frac{\\text{cores}}{\\text{socket}} \\times \\frac{\\text{cycles}}{ \\text{second}} \\times \\frac{\\text{FLOPs}}{\\text{cycle}}."
},
{
"math_id": 1,
"text": "\\text{FLOPS} = \\text{cores} \\times \\frac{\\text{cycles}}{ \\text{second}} \\times \\frac{\\text{FLOPs}}{\\text{cycle}}."
}
] | https://en.wikipedia.org/wiki?curid=82930 |
8293816 | Countably compact space | In mathematics a topological space is called countably compact if every countable open cover has a finite subcover.
Equivalent definitions.
A topological space "X" is called countably compact if it satisfies any of the following equivalent conditions:
(1) Every countable open cover of "X" has a finite subcover.
(2) Every infinite "set" "A" in "X" has an ω-accumulation point in "X".
(3) Every "sequence" in "X" has an accumulation point in "X".
(4) Every countable family of closed subsets of "X" with an empty intersection has a finite subfamily with an empty intersection.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[0,1]"
}
] | https://en.wikipedia.org/wiki?curid=8293816 |
8293820 | Sequentially compact space | Topological space where every sequence has a convergent subsequence
In mathematics, a topological space "X" is sequentially compact if every sequence of points in "X" has a convergent subsequence converging to a point in formula_0.
Every metric space is naturally a topological space, and for metric spaces, the notions of compactness and sequential compactness are equivalent (if one assumes countable choice). However, there exist sequentially compact topological spaces that are not compact, and compact topological spaces that are not sequentially compact.
Examples and properties.
The space of all real numbers with the standard topology is not sequentially compact; the sequence formula_1 given by formula_2 for all natural numbers "formula_3" is a sequence that has no convergent subsequence.
If a space is a metric space, then it is sequentially compact if and only if it is compact. The first uncountable ordinal with the order topology is an example of a sequentially compact topological space that is not compact. The product of formula_4 copies of the closed unit interval is an example of a compact space that is not sequentially compact.
Related notions.
A topological space "formula_0" is said to be limit point compact if every infinite subset of "formula_0" has a limit point in "formula_0", and countably compact if every countable open cover has a finite subcover. In a metric space, the notions of sequential compactness, limit point compactness, countable compactness and compactness are all equivalent (if one assumes the axiom of choice).
In a sequential (Hausdorff) space sequential compactness is equivalent to countable compactness.
There is also a notion of a one-point sequential compactification—the idea is that the non convergent sequences should all converge to the extra point.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "(s_n)"
},
{
"math_id": 2,
"text": "s_n = n"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "2^{\\aleph_0}=\\mathfrak c"
}
] | https://en.wikipedia.org/wiki?curid=8293820 |
8294303 | Payback period | Accounting term
Payback period in capital budgeting refers to the time required to recoup the funds expended in an investment, or to reach the break-even point.
For example, a $1000 investment made at the start of year 1 which returned $500 at the end of year 1 and year 2 respectively would have a two-year payback period. Payback period is usually expressed in years. Starting from the investment year, calculate the Net Cash Flow for each year:
formula_0
Then:
formula_1
Accumulate by year until Cumulative Cash Flow is a positive number: that year is the payback year.
Description.
Payback does not allow for the time value of money. Payback period intuitively measures how long something takes to "pay for itself." All else being equal, shorter payback periods are preferable to longer payback periods. Payback period is popular due to its ease of use despite the recognized limitations described below. See Cut off period.
The term is also widely used in other types of investment areas, often with respect to energy efficiency technologies, maintenance, upgrades, or other changes. For example, a compact fluorescent light bulb may be described as having a payback period of a certain number of years or operating hours, assuming certain costs. Here, the return to the investment consists of reduced operating costs.
Although primarily a financial term, the concept of a payback period is occasionally extended to other uses, such as energy payback period (the period of time over which the energy savings of a project equal the amount of energy expended since project inception); these other terms may not be standardized or widely used.
Purpose.
Payback period is often used as an analysis tool because it is easy to apply and easy to understand for most individuals, regardless of academic training or field of endeavor. When used carefully or to compare similar investments, it can be quite useful. As a stand-alone tool to compare an investment to "doing nothing," payback period has no explicit criteria for decision-making (except, perhaps, that the payback period should be less than infinity).
The payback period is considered a method of analysis with serious limitations and qualifications for its use, because it does not account for the time value of money, risk, financing, or other important considerations, such as the opportunity cost. Whilst the time value of money can be rectified by applying a weighted average cost of capital discount, it is generally agreed that this tool for investment decisions should not be used in isolation.
Alternative measures of "return" preferred by economists are net present value and internal rate of return. An implicit assumption in the use of payback period is that returns to the investment continue after the payback period. Payback period does not specify any required comparison to other investments or even to not making an investment.
Construction.
Payback period is usually expressed in years. Start by calculating Net Cash Flow for each year: Net Cash Flow Year 1 = Cash Inflow Year 1 - Cash Outflow Year 1. Then Cumulative Cash Flow = (Net Cash Flow Year 1 + Net Cash Flow Year 2 + Net Cash Flow Year 3, etc.) Accumulate by year until Cumulative Cash Flow is a positive number: that year is the payback year.
To calculate a more exact payback period:
Payback Period = Amount to be Invested/Estimated Annual Net Cash Flow.
It can also be calculated using the formula:
Payback Period = (p - n)÷p + ny = 1 + ny - n÷p (unit: years)
Where
ny= The number of years after the initial investment at which the last negative value of cumulative cash flow occurs.
n= The value of cumulative cash flow at which the last negative value of cumulative cash flow occurs.
p= The value of cash flow at which the first positive value of cumulative cash flow occurs.
This formula can only be used to calculate the soonest payback period; that is, the first period after which the investment has paid for itself. If the cumulative cash flow drops to a negative value some time after it has reached a positive value, thereby changing the payback period, this formula can't be applied. This formula ignores values that arise after the payback period has been reached.
Additional complexity arises when the cash flow changes sign several times; i.e., it contains outflows in the midst or at the end of the project lifetime. The modified payback period algorithm may be applied then.
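The cumulative-cash-flow procedure described above can be sketched in a few lines of Python. This is an illustration only: it interpolates within the payback year and assumes that once the cumulative cash flow turns non-negative it stays non-negative (i.e. it computes the soonest payback period, not the modified one).
<syntaxhighlight lang="python">
def payback_period(cash_flows):
    """Payback period in years. cash_flows[0] is the (negative) initial
    outlay at the start of year 1; cash_flows[i] is the net cash flow at
    the end of year i. Returns None if the investment is never recouped."""
    cumulative = cash_flows[0]
    for year, flow in enumerate(cash_flows[1:], start=1):
        previous = cumulative        # last (negative) cumulative cash flow
        cumulative += flow
        if cumulative >= 0:
            # full years elapsed plus the fraction of the payback year needed
            return (year - 1) + (-previous) / flow
    return None

print(payback_period([-1000, 500, 500]))  # 2.0 years, as in the example above
print(payback_period([-1000, 600, 600]))  # ~1.67 years
</syntaxhighlight>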
Shortcomings.
Payback period doesn't take into consideration the time value of money and therefore may not present the true picture when it comes to evaluating cash flows of a project. This issue is addressed by using the discounted payback period (DPP), which uses discounted cash flows.
Payback also ignores the cash flows beyond the payback period. Most major capital expenditures have a long life span and continue to provide cash flows even after the payback period. Since the payback period focuses on short term profitability, a valuable project may be overlooked if the payback period is the only consideration.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Net Cash Flow Year 1} = \\text{Cash Inflow Year 1} - \\text{Cash Outflow Year 1}"
},
{
"math_id": 1,
"text": "\\text{Cumulative Cash Flow} = (\\text{Net Cash Flow Year 1} + \\text{Net Cash Flow Year 2} +\\ldots + \\text{Net Cash Flow Year n})"
}
] | https://en.wikipedia.org/wiki?curid=8294303 |
8294978 | Physical activity level | Way of expressing a person's daily physical activity
The physical activity level (PAL) is a way to express a person's daily physical activity as a number and is used to estimate their total energy expenditure. In combination with the basal metabolic rate, it can be used to compute the amount of food energy a person needs to consume to maintain a particular lifestyle.
Definition.
The physical activity level is defined for a non-pregnant, non-lactating adult as that person's total energy expenditure (TEE) in a 24-hour period, divided by his or her basal metabolic rate (BMR):
formula_0
The level of physical activity can also be estimated based on a list of the physical activities a person performs from day to day. Each activity is connected to a number, the physical activity ratio. The physical activity level is then the time-weighted average of the physical activity ratios.
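For illustration, the time-weighted average of physical activity ratios (PARs) and the resulting energy requirement can be computed as in the sketch below; the PAR values and the BMR used here are hypothetical, not reference values.
<syntaxhighlight lang="python">
def physical_activity_level(activities):
    """Time-weighted average of physical activity ratios.
    `activities` is a list of (hours, par) pairs covering the whole day."""
    total_hours = sum(hours for hours, _ in activities)
    return sum(hours * par for hours, par in activities) / total_hours

def total_energy_expenditure(pal, bmr_kcal_per_day):
    """TEE = PAL x BMR."""
    return pal * bmr_kcal_per_day

# Hypothetical day: 8 h sleeping (PAR 1.0), 8 h desk work (PAR 1.5), 8 h on the feet (PAR 2.0)
pal = physical_activity_level([(8, 1.0), (8, 1.5), (8, 2.0)])
print(round(pal, 2))                        # 1.5
print(total_energy_expenditure(pal, 1500))  # 2250.0 kcal per day
</syntaxhighlight>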
Examples.
The following table shows indicative numbers for the Physical activity level for several lifestyles:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{PAL}=\\frac{TEE_{24h}}{\\text{BMR}}"
}
] | https://en.wikipedia.org/wiki?curid=8294978 |
8297920 | Balding–Nichols model | Model in population genetics
In population genetics, the Balding–Nichols model is a statistical description of the allele frequencies in the components of a sub-divided population. With background allele frequency "p" the allele frequencies, in sub-populations separated by Wright's "FST" "F", are distributed according to independent draws from
formula_0
where "B" is the Beta distribution. This distribution has mean "p" and variance "Fp"(1 – "p").
The model is due to David Balding and Richard Nichols and is widely used in the forensic analysis of DNA profiles and in population models for genetic epidemiology.
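Sampling sub-population allele frequencies from this model is straightforward; the sketch below uses Python's standard library and is illustrative only (the function name is not from any genetics package).
<syntaxhighlight lang="python">
import random

def balding_nichols_draw(p, F, size=1):
    """Draw sub-population allele frequencies from the Balding-Nichols model,
    i.e. Beta(p*(1-F)/F, (1-p)*(1-F)/F), with mean p and variance F*p*(1-p)."""
    a = (1.0 - F) / F
    return [random.betavariate(a * p, a * (1.0 - p)) for _ in range(size)]

random.seed(1)
# Background frequency 0.3, FST = 0.05: expected mean 0.3, variance 0.0105
print(balding_nichols_draw(0.3, 0.05, size=5))
</syntaxhighlight>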
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "B\\left(\\frac{1-F}{F}p,\\frac{1-F}{F}(1-p)\\right)"
}
] | https://en.wikipedia.org/wiki?curid=8297920 |
82991 | Chamber music | Form of classical music composed for a small group of instruments
Chamber music is a form of classical music that is composed for a small group of instruments—traditionally a group that could fit in a palace chamber or a large room. Most broadly, it includes any art music that is performed by a small number of performers, with one performer to a part (in contrast to orchestral music, in which each string part is played by a number of performers). However, by convention, it usually does not include solo instrument performances.
Because of its intimate nature, chamber music has been described as "the music of friends". For more than 100 years, chamber music was played primarily by amateur musicians in their homes, and even today, when chamber music performance has migrated from the home to the concert hall, many musicians, amateur and professional, still play chamber music for their own pleasure. Playing chamber music requires special skills, both musical and social, that differ from the skills required for playing solo or symphonic works.
Johann Wolfgang von Goethe described chamber music (specifically, string quartet music) as "four rational people conversing". This conversational paradigm – which refers to the way one instrument introduces a melody or motif and then other instruments subsequently "respond" with a similar motif – has been a thread woven through the history of chamber music composition from the end of the 18th century to the present. The analogy to conversation recurs in descriptions and analyses of chamber music compositions.
History.
From its earliest beginnings in the Medieval period to the present, chamber music has been a reflection of the changes in the technology and the society that produced it.
Early beginnings.
During the Middle Ages and the early Renaissance, instruments were used primarily as accompaniment for singers. String players would play along with the melody line sung by the singer. There were also purely instrumental ensembles, often of stringed precursors of the violin family, called consorts.
Some analysts consider the origin of classical instrumental ensembles to be the sonata da camera (chamber sonata) and the sonata da chiesa (church sonata). These were compositions for one to five or more instruments. The sonata da camera was a suite of slow and fast movements, interspersed with dance tunes; the sonata da chiesa was the same, but the dances were omitted. These forms gradually developed into the trio sonata of the Baroque – two treble instruments and a bass instrument, often with a keyboard or other chording instrument (harpsichord, organ, harp or lute, for example) filling in the harmony. Both the bass instrument and the chordal instrument would play the basso continuo part.
During the Baroque period, chamber music as a genre was not clearly defined. Often, works could be played on any variety of instruments, in orchestral or chamber ensembles. "The Art of Fugue" by Johann Sebastian Bach, for example, can be played on a keyboard instrument (harpsichord or organ) or by a string quartet or a string orchestra. The instrumentation of trio sonatas was also often flexibly specified; some of Handel's sonatas are scored for "German flute, Hoboy [oboe] or Violin". Bass lines could be played by violone, cello, theorbo, or bassoon, and sometimes three or four instruments would join in the bass line in unison. Sometimes composers mixed movements for chamber ensembles with orchestral movements. Telemann's 'Tafelmusik' (1733), for example, has five sets of movements for various combinations of instruments, ending with a full orchestral section.
Audio (on YouTube): from "The Musical Offering", played by Ensemble Brillante.
Baroque chamber music was often contrapuntal; that is, each instrument played the same melodic materials at different times, creating a complex, interwoven fabric of sound. Because each instrument was playing essentially the same melodies, all the instruments were equal. In the trio sonata, there is often no ascendent or solo instrument, but all three instruments share equal importance.
The harmonic role played by the keyboard or other chording instrument was subsidiary, and usually the keyboard part was not even written out; rather, the chordal structure of the piece was specified by numeric codes over the bass line, called figured bass.
In the second half of the 18th century, tastes began to change: many composers preferred a new, lighter Galant style, with "thinner texture, ... and clearly defined melody and bass" to the complexities of counterpoint. Now a new custom arose that gave birth to a new form of chamber music: the serenade. Patrons invited street musicians to play evening concerts below the balconies of their homes, their friends and their lovers. Patrons and musicians commissioned composers to write suitable suites of dances and tunes, for groups of two to five or six players. These works were called serenades, nocturnes, divertimenti, or cassations (from gasse=street). The young Joseph Haydn was commissioned to write several of these.
Haydn, Mozart, and the classical style.
Joseph Haydn is generally credited with creating the modern form of chamber music as we know it, although scholars today such as Roger Hickman argue "the idea that Haydn invented the string quartet and single-handedly advanced the genre is based on only a vague notion of the true history of the eighteenth-century genre." A typical string quartet of the period would consist of four movements: a fast opening movement in sonata form, a slow lyrical movement, a minuet and trio, and a fast finale in rondo or sonata form.
Haydn was by no means the only composer developing new modes of chamber music. Even before Haydn, many composers were already experimenting with new forms. Giovanni Battista Sammartini, Ignaz Holzbauer, and Franz Xaver Richter wrote precursors of the string quartet. Franz Ignaz von Beecke (1733-1803), with his Piano Quintet in A minor (1770) and 17 string quartets was also one of the pioneers of chamber music of the Classical period.
Another renowned composer of chamber music of the period was Wolfgang Amadeus Mozart. Mozart's seven piano trios and two piano quartets were the first to apply the conversational principle to chamber music with piano. Haydn's piano trios are essentially piano sonatas with the violin and cello playing mostly supporting roles, doubling the treble and bass lines of the piano score. But Mozart gives the strings an independent role, using them as a counter to the piano, and adding their individual voices to the chamber music conversation.
Mozart introduced the newly invented clarinet into the chamber music arsenal, with the Kegelstatt Trio for viola, clarinet and piano, K. 498, and the Quintet for Clarinet and String Quartet, K. 581. He also tried other innovative ensembles, including the quintet for violin, two violas, cello, and horn, K. 407, quartets for flute and strings, and various wind instrument combinations. He wrote six string quintets for two violins, two violas and cello, which explore the rich tenor tones of the violas, adding a new dimension to the string quartet conversation.
Mozart's string quartets are considered the pinnacle of the classical art. The six string quartets that he dedicated to Haydn, his friend and mentor, inspired the elder composer to say to Mozart's father, "I tell you before God as an honest man that your son is the greatest composer known to me either in person or by reputation. He has taste, and, what is more, the most profound knowledge of composition."
Many other composers wrote chamber compositions during this period that were popular at the time and are still played today. Luigi Boccherini, Italian composer and cellist, wrote nearly a hundred string quartets, and more than one hundred quintets for two violins, viola and two cellos. In this innovative ensemble, later used by Schubert, Boccherini gives flashy, virtuosic solos to the principal cello, as a showcase for his own playing. Violinist Carl Ditters von Dittersdorf and cellist Johann Baptist Wanhal, who both played pickup quartets with Haydn on second violin and Mozart on viola, were popular chamber music composers of the period.
From home to hall.
The turn of the 19th century saw dramatic changes in society and in music technology which had far-reaching effects on the way chamber music was composed and played.
Collapse of the aristocratic system.
Throughout the 18th century, the composer was normally an employee of an aristocrat, and the chamber music he or she composed was for the pleasure of aristocratic players and listeners. Haydn, for example, was an employee of Nikolaus I, Prince Esterházy, a music lover and amateur baryton player, for whom Haydn wrote many of his string trios. Mozart wrote three string quartets for the King of Prussia, Frederick William II, a cellist. Many of Beethoven's quartets were first performed with patron Count Andrey Razumovsky on second violin. Boccherini composed for the king of Spain.
With the decline of the aristocracy and the rise of new social orders throughout Europe, composers increasingly had to make money by selling their compositions and performing concerts. They often gave subscription concerts, which involved renting a hall and collecting the receipts from the performance. Increasingly, they wrote chamber music not only for rich patrons, but for professional musicians playing for a paying audience.
Changes in the structure of stringed instruments.
At the beginning of the 19th century, luthiers developed new methods of constructing the violin, viola and cello that gave these instruments a richer tone, more volume, and more carrying power. Also at this time, bowmakers made the violin bow longer, with a thicker ribbon of hair under higher tension. This improved projection, and also made possible new bowing techniques. In 1820, Louis Spohr invented the chinrest, which gave violinists more freedom of movement in their left hands, for a more nimble technique. These changes contributed to the effectiveness of public performances in large halls, and expanded the repertoire of techniques available to chamber music composers.
Invention of the pianoforte.
Throughout the Baroque era, the harpsichord was one of the main instruments used in chamber music. The harpsichord used quills to pluck strings, and it had a delicate sound. Due to the design of the harpsichord, the attack or weight with which the performer played the keyboard did not change the volume or tone. Between about 1750 and the late 1700s, the harpsichord gradually fell out of use. By the late 1700s, the pianoforte became more popular as an instrument for performance. Even though the pianoforte was invented by Bartolomeo Cristofori at the beginning of the 1700s, it did not become widely used until the end of that century, when technical improvements in its construction made it a more effective instrument. Unlike the harpsichord, the pianoforte could play soft or loud dynamics and sharp sforzando attacks depending on how hard or soft the performer played the keys. The improved pianoforte was adopted by Mozart and other composers, who began composing chamber ensembles with the piano playing a leading role. The piano was to become more and more dominant through the 19th century, so much so that many composers, such as Franz Liszt and Frédéric Chopin, wrote almost exclusively for solo piano (or solo piano with orchestra).
Beethoven.
Ludwig van Beethoven straddled this period of change as a giant of Western music. Beethoven transformed chamber music, raising it to a new plane, both in terms of content and in terms of the technical demands on performers and audiences. His works, in the words of Maynard Solomon, were "...the models against which nineteenth-century romanticism measured its achievements and failures." His late quartets, in particular, were considered so daunting an accomplishment that many composers after him were afraid to try composing quartets; Johannes Brahms composed and tore up 20 string quartets before he dared publish a work that he felt was worthy of the "giant marching behind".
Beethoven made his formal debut as a composer with three Piano Trios, Op. 1. Even these early works, written when Beethoven was only 22, while adhering to a strictly classical mold, showed signs of the new paths that Beethoven was to forge in the coming years. When he showed the manuscript of the trios to Haydn, his teacher, prior to publication, Haydn approved of the first two, but warned against publishing the third trio, in C minor, as too radical, warning it would not "...be understood and favorably received by the public."
Haydn was wrong—the third trio was the most popular of the set, and Haydn's criticisms caused a falling-out between him and the sensitive Beethoven. The trio is, indeed, a departure from the mold that Haydn and Mozart had formed. Beethoven makes dramatic deviations of tempo within phrases and within movements. He greatly increases the independence of the strings, especially the cello, allowing it to range above the piano and occasionally even the violin.
If his Op. 1 trios introduced Beethoven's works to the public, his Septet, Op. 20, established him as one of Europe's most popular composers. The septet, scored for violin, viola, cello, contrabass, clarinet, horn, and bassoon, was a huge hit. It was played in concerts again and again. It appeared in transcriptions for many combinations – one of which, for clarinet, cello and piano, was written by Beethoven himself – and was so popular that Beethoven feared it would eclipse his other works. So much so that by 1815, Carl Czerny wrote that Beethoven "could not endure his septet and grew angry because of the universal applause which it has received." The septet is written as a classical divertimento in six movements, including two minuets, and a set of variations. It is full of catchy tunes, with solos for everyone, including the contrabass.
Audio: Beethoven: Septet, Op. 20, first movement, played by the Ensemble Mediterrain.
In his 17 string quartets, composed over the course of 37 of his 56 years, Beethoven goes from classical composer par excellence to creator of musical Romanticism, and finally, with his late string quartets, he transcends classicism and romanticism to create a genre that defies categorization. Stravinsky referred to the Große Fuge, of the late quartets, as, "...this absolutely contemporary piece of music that will be contemporary forever."
The string quartets 1–6, Op. 18, were written in the classical style, in the same year that Haydn wrote his Op. 76 string quartets. Even here, Beethoven stretched the formal structures pioneered by Haydn and Mozart. In the quartet Op. 18, No. 1, in F major, for example, there is a long, lyrical solo for cello in the second movement, giving the cello a new type of voice in the quartet conversation. And the last movement of Op. 18, No. 6, "La Malincolia", creates a new type of formal structure, interleaving a slow, melancholic section with a manic dance. Beethoven was to use this form in later quartets, and Brahms and others adopted it as well.
Audio: Beethoven: Quartet, Op. 59, No. 3, played by the Modigliani Quartet.
Audio: Piano Trio, Op. 70, No. 1, "Ghost", played by the Claremont Trio.
In the years 1805 to 1806, Beethoven composed the three Op. 59 quartets on a commission from Count Razumovsky, who played second violin in their first performance. These quartets, from Beethoven's middle period, were pioneers in the romantic style. Besides introducing many structural and stylistic innovations, these quartets were much more difficult technically to perform – so much so that they were, and remain, beyond the reach of many amateur string players. When first violinist Ignaz Schuppanzigh complained of their difficulty, Beethoven retorted, "Do you think I care about your wretched violin when the spirit moves me?" Among the difficulties are complex syncopations and cross-rhythms; synchronized runs of sixteenth, thirty-second, and sixty-fourth notes; and sudden modulations requiring special attention to intonation. In addition to the Op. 59 quartets, Beethoven wrote two more quartets during his middle period – Op. 74, the "Harp" quartet, named for the unusual harp-like effect Beethoven creates with pizzicato passages in the first movement, and Op. 95, the "Serioso".
The Serioso is a transitional work that ushers in Beethoven's late period – a period of compositions of great introspection. "The particular kind of inwardness of Beethoven's last style period", writes Joseph Kerman, gives one the feeling that "the music is sounding only for the composer and for one other auditor, an awestruck eavesdropper: you." In the late quartets, the quartet conversation is often disjointed, proceeding like a stream of consciousness. Melodies are broken off, or passed in the middle of the melodic line from instrument to instrument. Beethoven uses new effects, never before essayed in the string quartet literature: the ethereal, dreamlike effect of open intervals between the high E string and the open A string in the second movement of quartet Op. 132; the use of "sul ponticello" (playing on the bridge of the violin) for a brittle, scratchy sound in the Presto movement of Op. 131; the use of the Lydian mode, rarely heard in Western music for 200 years, in Op. 132; a cello melody played high above all the other strings in the finale of Op. 132. Yet for all this disjointedness, each quartet is tightly designed, with an overarching structure that ties the work together.
Beethoven wrote eight piano trios, five string trios, two string quintets, and numerous pieces for wind ensemble. He also wrote ten sonatas for violin and piano and five sonatas for cello and piano.
Franz Schubert.
As Beethoven, in his last quartets, went off in his own direction, Franz Schubert carried on and established the emerging romantic style. In his 31 years, Schubert devoted much of his life to chamber music, composing 15 string quartets, two piano trios, string trios, a piano quintet commonly known as the "Trout Quintet", an octet for strings and winds, and his famous quintet for two violins, viola, and two cellos.
Franz Schubert, "Trout Quintet", D. 667, performed by the Chamber Music Society of Lincoln Center
Franz Schubert, String Quintet in C, D. 956, first movement, recorded at the Fredonia Quartet Program, July 2008 (on YouTube)
Schubert's music, like his life, exemplified the contrasts and contradictions of his time. On the one hand, he was the darling of Viennese society: he starred in soirées that became known as "Schubertiaden", where he played his light, mannered compositions that expressed the Gemütlichkeit of Vienna in the 1820s. On the other hand, his own short life was shrouded in tragedy, wracked by poverty and ill health. Chamber music was the ideal medium to express this conflict, "to reconcile his essentially lyric themes with his feeling for dramatic utterance within a form that provided the possibility of extreme color contrasts." The String Quintet in C, D. 956, is an example of how this conflict is expressed in music. After a slow introduction, the first theme of the first movement, fiery and dramatic, leads to a bridge of rising tension, peaking suddenly and breaking into the second theme, a lilting duet in the lower voices. The alternating Sturm und Drang and relaxation continue throughout the movement.
These contending forces are expressed in some of Schubert's other works: in the quartet Death and the Maiden, the Rosamunde quartet and in the stormy, one-movement Quartettsatz, D. 703.
Felix Mendelssohn.
Unlike Schubert, Felix Mendelssohn had a life of peace and prosperity. Born into a wealthy Jewish family in Hamburg, Mendelssohn proved himself a child prodigy. By the age of 16, he had written his first major chamber work, the String Octet, Op. 20. Already in this work, Mendelssohn showed some of the unique style that was to characterize his later works; notably, the gossamer light texture of his scherzo movements, exemplified also by the "Canzonetta" movement of the String Quartet, Op. 12, and the scherzo of the Piano Trio No. 1 in D minor, Op. 49.
Another characteristic that Mendelssohn pioneered is the cyclic form in overall structure. This means the reuse of thematic material from one movement to the next, to give the total piece coherence. In his second string quartet, he opens the piece with a peaceful adagio section in A major, that contrasts with the stormy first movement in A minor. After the final, vigorous Presto movement, he returns to the opening adagio to conclude the piece. This string quartet is also Mendelssohn's homage to Beethoven; the work is studded with quotes from Beethoven's middle and late quartets.
During his adult life, Mendelssohn wrote two piano trios, seven works for string quartet, two string quintets, the octet, a sextet for piano and strings, and numerous sonatas for piano with violin, cello, and clarinet.
Robert Schumann.
Robert Schumann continued the development of cyclic structure. In his Piano Quintet in E flat, Op. 44, Schumann wrote a double fugue in the finale, using the theme of the first movement and the theme of the last movement. Both Schumann and Mendelssohn, following the example set by Beethoven, revived the fugue, which had fallen out of favor since the Baroque period. However, rather than writing strict, full-length fugues, they used counterpoint as another mode of conversation between the chamber music instruments. Many of Schumann's chamber works, including all three of his string quartets and his piano quartet have contrapuntal sections interwoven seamlessly into the overall compositional texture.
The composers of the first half of the 19th century were acutely aware of the conversational paradigm established by Haydn and Mozart. Schumann wrote that in a true quartet "everyone has something to say ... a conversation, often truly beautiful, often oddly and turbidly woven, among four people." Their awareness is exemplified by composer and virtuoso violinist Louis Spohr. Spohr divided his 36 string quartets into two types: the "quatuor brillant", essentially a violin concerto with string trio accompaniment; and "quatuor dialogue", in the conversational tradition.
Chamber music and society in the 19th century.
During the 19th century, with the rise of new technology driven by the Industrial Revolution, printed music became cheaper and thus more accessible, and domestic music making gained widespread popularity. Composers began to incorporate new elements and techniques into their works to appeal to this open market, since consumer demand for chamber music had increased. While improvements in instruments led to more public performances of chamber music, it remained very much a type of music to be played as much as performed. Amateur quartet societies sprang up throughout Europe, and no middling-sized city in Germany or France was without one. These societies sponsored house concerts, compiled music libraries, and encouraged the playing of quartets and other ensembles. In European countries, in particular Germany and France, they brought like-minded musicians together and fostered strong connections with their communities. Orchestral works and virtuoso solo pieces, however, were in higher favor and made up the largest part of the public concert repertoire. Early French composers of chamber music included Camille Saint-Saëns and César Franck.
Apart from the "central" Austro-Germanic countries, a subculture of chamber music also arose in other regions such as Britain. There, chamber music was often performed by upper- and middle-class men of more modest musical skill in informal settings, such as small ensembles playing in private residences before a handful of listeners. The most common forms of chamber music composition in Britain were the string quartet, sentimental songs, and piano chamber works such as the piano trio, a repertoire that reflects the standard conception of conventional "Victorian music making". In the middle of the 19th century, with the rise of the feminist movement, women also began to gain acceptance as participants in chamber music.
Thousands of quartets were published by hundreds of composers; between 1770 and 1800, more than 2000 quartets were published, and the pace did not decline in the next century. Throughout the 19th century, composers published string quartets that are now long neglected: George Onslow wrote 36 quartets and 35 quintets; Gaetano Donizetti wrote dozens of quartets; and Antonio Bazzini, Anton Reicha, Carl Reissiger, Joseph Suk and others wrote to fill an insatiable demand for quartets. In addition, there was a lively market for string quartet arrangements of popular and folk tunes, piano works, symphonies, and opera arias.
But opposing forces were at work. The middle of the 19th century saw the rise of superstar virtuosi, who drew attention away from chamber music toward solo performance. The piano, which could be mass-produced, became an instrument of preference, and many composers, like Chopin and Liszt, composed primarily if not exclusively for piano.
The ascendance of the piano, and of symphonic composition, was not merely a matter of preference; it was also a matter of ideology. In the 1860s, a schism grew among romantic musicians over the direction of music. Many composers expressed their romantic persona through their works, and by this time chamber works were no longer necessarily written for a specific dedicatee; famous chamber works such as Fanny Mendelssohn's Piano Trio in D minor, Ludwig van Beethoven's Trio in E-flat major, and Franz Schubert's Piano Quintet in A major are all highly personal. Liszt and Richard Wagner led a movement that contended that "pure music" had run its course with Beethoven, and that new, programmatic forms of music–in which music created "images" with its melodies–were the future of the art. The composers of this school had no use for chamber music. Opposing this view were Johannes Brahms and his associates, especially the powerful music critic Eduard Hanslick. This War of the Romantics shook the artistic world of the period, with vituperative exchanges between the two camps, concert boycotts, and petitions.
Although amateur playing thrived throughout the 19th century, this was also a period of increasing professionalization of chamber music performance. Professional quartets began to dominate the chamber music concert stage. The Hellmesberger Quartet, led by Joseph Hellmesberger, and the Joachim Quartet, led by Joseph Joachim, debuted many of the new string quartets by Brahms and other composers. Another famous quartet player was Vilemina Norman Neruda, also known as Lady Hallé. Indeed, during the last third of the century, women performers began taking their place on the concert stage: an all-women string quartet led by Emily Shinner, and the Lucas quartet, also all women, were two notable examples.
Toward the 20th century.
It was Johannes Brahms who carried the torch of Romantic music toward the 20th century. Heralded by Robert Schumann as the forger of "new paths" in music, Brahms's music is a bridge from the classical to the modern. On the one hand, Brahms was a traditionalist, conserving the musical traditions of Bach and Mozart. Throughout his chamber music, he uses traditional techniques of counterpoint, incorporating fugues and canons into rich conversational and harmonic textures. On the other hand, Brahms expanded the structure and the harmonic vocabulary of chamber music, challenging traditional notions of tonality. An example of this is in the Brahms second string sextet, Op. 36.
Traditionally, composers wrote the first theme of a piece in the key of the piece, firmly establishing that key as the tonic, or home, key of the piece. The opening theme of Op. 36 starts in the tonic (G major), but already by the third measure has modulated to the unrelated key of E-flat major. As the theme develops, it ranges through various keys before coming back to the tonic G major. This "harmonic audacity", as Swafford describes it, opened the way for bolder experiments to come.
Brahms sextet Op. 36, played by the Borromeo Quartet, and Liz Freivogel and Daniel McDonough of the Jupiter String Quartet
Not only in harmony, but also in overall musical structure, Brahms was an innovator. He developed a technique that Arnold Schoenberg described as "developing variation". Rather than discretely defined phrases, Brahms often runs phrase into phrase, and mixes melodic motives to create a fabric of continuous melody. Schoenberg, the creator of the 12-tone system of composition, traced the roots of his modernism to Brahms, in his essay "Brahms the Progressive".
All told, Brahms published 24 works of chamber music, including three string quartets, five piano trios, the quintet for piano and strings, Op. 34, and other works. Among his last works were the clarinet quintet, Op. 115, and a trio for clarinet, cello and piano. He wrote a trio for the unusual combination of piano, violin and horn, Op. 40. He also wrote two songs for alto singer, viola and piano, Op. 91, reviving the form of voice with string obbligato that had been virtually abandoned since the Baroque.
The exploration of tonality and of structure begun by Brahms was continued by composers of the French school. César Franck's piano quintet in F minor, composed in 1879, further established the cyclic form first explored by Schumann and Mendelssohn, reusing the same thematic material in each of the three movements. Claude Debussy's string quartet, Op. 10, is considered a watershed in the history of chamber music. The quartet uses the cyclic structure, and constitutes a final divorce from the rules of classical harmony. "Any sounds in any combination and in any succession are henceforth free to be used in a musical continuity", Debussy wrote. Pierre Boulez said that Debussy freed chamber music from "rigid structure, frozen rhetoric and rigid aesthetics".
First movement, played by the Cypress String Quartet (on YouTube)
Debussy's quartet, like the string quartets of Maurice Ravel and of Gabriel Fauré, created a new tone color for chamber music, a color and texture associated with the Impressionist movement. Violist James Dunham, of the Cleveland and Sequoia Quartets, writes of the Ravel quartet, "I was simply overwhelmed by the sweep of sonority, the sensation of colors constantly changing ..." For these composers, chamber ensembles were the ideal vehicle for transmitting this atmospheric sense, and chamber works constituted much of their oeuvre.
Nationalism in chamber music.
Parallel with the trend to seek new modes of tonality and texture was another new development in chamber music: the rise of nationalism. Composers turned more and more to the rhythms and tonalities of their native lands for inspiration and material. "Europe was impelled by the Romantic tendency to establish in musical matters the national boundaries more and more sharply", wrote Alfred Einstein. "The collecting and sifting of old traditional melodic treasures ... formed the basis for a creative art-music." For many of these composers, chamber music was the natural vehicle for expressing their national characters.
Dvořák: piano quintet, Op. 81, played by the Lincoln Center Chamber Players
Czech composer Antonín Dvořák created in his chamber music a new voice for the music of his native Bohemia. In 14 string quartets, three string quintets, two piano quartets, a string sextet, four piano trios, and numerous other chamber compositions, Dvořák incorporates folk music and modes as an integral part of his compositions. For example, in the piano quintet in A major, Op. 81, the slow movement is a Dumka, a Slavic folk ballad that alternates between a slow expressive song and a fast dance. Dvořák's fame in establishing a national art music was so great that the New York philanthropist and music connoisseur Jeannette Thurber invited him to America, to head a conservatory that would establish an American style of music. There, Dvořák wrote his string quartet in F major, Op. 96, nicknamed "The American". While composing the work, Dvořák was entertained by a group of Kickapoo Indians who performed native dances and songs, and these songs may have been incorporated in the quartet.
Bedřich Smetana, another Czech, wrote a piano trio and string quartet, both of which incorporate native Czech rhythms and melodies. In Russia, Russian folk music permeated the works of the late 19th-century composers. Pyotr Ilyich Tchaikovsky uses a typical Russian folk dance in the final movement of his string sextet, "Souvenir de Florence", Op. 70. Alexander Borodin's second string quartet contains references to folk music, and the slow Nocturne movement of that quartet recalls Middle Eastern modes that were current in the Muslim sections of southern Russia. Edvard Grieg used the musical style of his native Norway in his string quartet in G minor, Op. 27 and his violin sonatas.
In Hungary, composers Zoltán Kodály and Béla Bartók pioneered the science of ethnomusicology by performing one of the first comprehensive studies of folk music. Ranging across the Magyar provinces, they transcribed, recorded, and classified tens of thousands of folk melodies. They used these tunes in their compositions, which are characterized by the asymmetrical rhythms and modal harmonies of that music. Their chamber music compositions, and those of the Czech composer Leoš Janáček, combined the nationalist trend with the 20th century search for new tonalities. Janáček's string quartets not only incorporate the tonalities of Czech folk music, they also reflect the rhythms of speech in Czech.
New sounds for a new world.
The end of western tonality, begun subtly by Brahms and made explicit by Debussy, posed a crisis for composers of the 20th century. It was not merely an issue of finding new types of harmonies and melodic systems to replace the diatonic scale that was the basis of western harmony; the whole structure of western music – the relationships between movements and between structural elements within movements – was based on the relationships between different keys. So composers were challenged with building a whole new structure for music.
This was coupled with the feeling that the era that saw the invention of automobiles, the telephone, electric lighting, and world war needed new modes of expression. "The century of the aeroplane deserves its music", wrote Debussy.
Inspiration from folk music.
The search for a new music took several directions. The first, led by Bartók, was toward the tonal and rhythmic constructs of folk music. Bartók's research into Hungarian and other eastern European and Middle Eastern folk music revealed to him a musical world built of musical scales that were neither major nor minor, and complex rhythms that were alien to the concert hall. In his fifth quartet, for example, Bartók uses an asymmetrical folk-dance time signature, "startling to the classically-trained musician, but second-nature to the folk musician." Structurally, also, Bartók often invents or borrows from folk modes. In the sixth string quartet, for example, Bartók begins each movement with a slow, elegiac melody, followed by the main melodic material of the movement, and concludes the quartet with a slow movement that is built entirely on this elegy. This is a form common in many folk music cultures.
Bartók's six string quartets are often compared with Beethoven's late quartets. In them, Bartók builds new musical structures, explores sonorities never previously produced in classical music (for example, the snap pizzicato, where the player lifts the string and lets it snap back on the fingerboard with an audible buzz), and creates modes of expression that set these works apart from all others. "Bartók's last two quartets proclaim the sanctity of life, progress and the victory of humanity despite the anti-humanistic dangers of the time", writes analyst John Herschel Baron. The last quartet, written when Bartók was preparing to flee the Nazi invasion of Hungary for a new and uncertain life in the U.S., is often seen as an autobiographical statement of the tragedy of his times.
Bartók was not alone in his explorations of folk music. Igor Stravinsky's "Three Pieces for String Quartet" is structured as three Russian folksongs, rather than as a classical string quartet. Stravinsky, like Bartók, used asymmetrical rhythms throughout his chamber music; the Histoire du soldat, in Stravinsky's own arrangement for clarinet, violin and piano, constantly shifts time signatures between two, three, four and five beats to the bar. In Britain, composers Ralph Vaughan Williams, William Walton and Benjamin Britten drew on English folk music for much of their chamber music: Vaughan Williams incorporates folksongs and country fiddling in his first string quartet. American composer Charles Ives wrote music that was distinctly American. Ives gave programmatic titles to much of his chamber music; his first string quartet, for example, is called "From the Salvation Army", and quotes American Protestant hymns in several places.
Serialism, polytonality and polyrhythms.
A second direction in the search for a new tonality was twelve-tone serialism. Arnold Schoenberg developed the twelve-tone method of composition as an alternative to the structure provided by the diatonic system. His method entails building a piece using a series of the twelve notes of the chromatic scale, permuting it and superimposing it on itself to create the composition.
Schoenberg did not arrive immediately at the serial method. His first chamber work, the string sextet Verklärte Nacht, was mostly a late German romantic work, though it was bold in its use of modulations. The first work that was frankly atonal was the second string quartet; the last movement of this quartet, which includes a soprano, has no key signature. Schoenberg further explored atonality with "Pierrot Lunaire", for singer, flute or piccolo, clarinet, violin, cello and piano. The singer uses a technique called Sprechstimme, halfway between speech and song.
After developing the twelve-tone technique, Schoenberg wrote a number of chamber works, including two more string quartets, a string trio, and a wind quintet. He was followed by a number of other twelve-tone composers, the most prominent of whom were his students Alban Berg, who wrote the "Lyric Suite" for string quartet, and Anton Webern, who wrote "Five Movements for String Quartet", op. 5.
Twelve-tone technique was not the only new experiment in tonality. Darius Milhaud developed the use of polytonality, that is, music where different instruments play in different keys at the same time. Milhaud wrote 18 string quartets; quartets number 14 and 15 are written so that each can be played by itself, or the two can be played at the same time as an octet. Milhaud also used jazz idioms, as in his "Suite" for clarinet, violin and piano.
The American composer Charles Ives used not only polytonality in his chamber works, but also polymeter. In his first string quartet he writes a section where the first violin and viola play in formula_0 time while the second violin and cello play in formula_1.
Neoclassicism.
The plethora of directions that music took in the first quarter of the 20th century led to a reaction by many composers. Led by Stravinsky, these composers looked to the music of preclassical Europe for inspiration and stability. While Stravinsky's neoclassical works – such as the "Double Canon for String Quartet" – sound contemporary, they are modeled on Baroque and early classical forms – the canon, the fugue, and the Baroque sonata form.
Second movement, "Schnelle Achtel", played by Ana Farmer, David Boyden, Austin Han, and Dylan Mattingly (on YouTube)
Paul Hindemith was another neoclassicist. His many chamber works are essentially tonal, though they use many dissonant harmonies. Hindemith wrote seven string quartets and two string trios, among other chamber works. At a time when composers were writing works of increasing complexity, beyond the reach of amateur musicians, Hindemith explicitly recognized the importance of amateur music-making, and intentionally wrote pieces that were within the abilities of nonprofessional players.
The works that the composer summarised as "Kammermusik", a collection of eight extended compositions, consist mostly of concertante works, comparable to Bach's "Brandenburg Concertos".
Largo; Allegro molto; played by the Seraphina String Quartet (Sabrina Tabby and Caeli Smith, violins; Madeline Smith, viola; Genevieve Tabby, cello) (on YouTube)
Dmitri Shostakovich was one of the most prolific chamber music composers of the 20th century, writing 15 string quartets, two piano trios, the piano quintet, and numerous other chamber works. Shostakovich's music was for a long time banned in the Soviet Union, and Shostakovich himself was in personal danger of deportation to Siberia. His eighth quartet is an autobiographical work that expresses the deep, near-suicidal depression brought on by his ostracism: it quotes from his previous compositions and uses the four-note motif DSCH, derived from the composer's initials.
Stretching the limits.
As the century progressed, many composers created works for small ensembles that, while they formally might be considered chamber music, challenged many of the fundamental characteristics that had defined the genre over the last 150 years.
Music of friends.
The idea of composing music that could be played at home has been largely abandoned. Bartók was among the first to part with this idea. "Bartók never conceived these quartets for private performance but rather for large, public concerts." Aside from the many almost insurmountable technical difficulties of many modern pieces, some of them are hardly suitable for performance in a small room. For example, "Different Trains" by Steve Reich is scored for live string quartet and recorded tape, which layers together a carefully orchestrated sound collage of speech, recorded train sounds, and three string quartets.
Relation of composer and performer.
Traditionally, the composer wrote the notes, and the performer interpreted them. But this is no longer the case in much modern music. In "Für kommende Zeiten" (For Times to Come), Stockhausen writes verbal instructions describing what the performers are to play. "Star constellations/with common points/and falling stars ... Abrupt end" is a sample.
Composer Terry Riley describes how he works with the Kronos Quartet, an ensemble devoted to contemporary music: "When I write a score for them, it's an unedited score. I put in just a minimal amount of dynamics and phrasing marks ...we spend a lot of time trying out different ideas in order to shape the music, to form it. At the end of the process, it makes the performers actually "own" the music. That to me is the best way for composers and musicians to interact."
New sounds.
Composers sought new timbres, remote from the traditional blend of strings, piano and woodwinds that characterized chamber music in the 19th century. This search led to the incorporation of new instruments, such as the theremin and the synthesizer, into 20th-century chamber music compositions.
Many composers sought new timbres within the framework of traditional instruments. "Composers begin to hear new timbres and new timbral combinations, which are as important to the new music of the twentieth century as the so-called breakdown of functional tonality," writes music historian James McCalla. Examples are numerous: Bartók's Sonata for Two Pianos and Percussion (1937), Schoenberg's "Pierrot lunaire", Charles Ives's "Quartertone Pieces" for two pianos tuned a quartertone apart. Other composers used electronics and extended techniques to create new sonorities. An example is George Crumb's "Black Angels", for electric string quartet (1970). The players not only bow their amplified instruments, they also beat on them with thimbles, pluck them with paper clips and play on the wrong side of the bridge or between the fingers and the nut. Still other composers have sought to explore the timbres created by including instruments which are not often associated with a typical orchestral ensemble. For example, Robert Davine explores the orchestral timbres of the accordion when it is included in a traditional wind trio in his Divertimento for accordion, flute, clarinet and bassoon, and Karlheinz Stockhausen wrote a "Helicopter String Quartet".
What do these changes mean for the future of chamber music? "With the technological advances have come questions of aesthetics and sociological changes in music", writes analyst Baron. "These changes have often resulted in accusations that technology has destroyed chamber music and that technological advance is in inverse proportion to musical worth. The ferocity of these attacks only underscores how fundamental these changes are, and only time will tell if humankind will benefit from them."
In contemporary society.
Analysts agree that the role of chamber music in society has changed profoundly in the last 50 years; yet there is little agreement as to what that change is. On the one hand, Baron contends that "chamber music in the home ... remained very important in Europe and America until the Second World War, after which the increasing invasion of radio and recording reduced its scope considerably." This view is supported by subjective impressions. "Today there are so many more millions of people listening to music, but far fewer playing chamber music just for the pleasure of it", says conductor and pianist Daniel Barenboim.
However, recent surveys suggest there is, on the contrary, a resurgence of home music making. In the radio program "Amateurs Help Keep Chamber Music Alive" from 2005, reporter Theresa Schiavone cites a Gallup poll showing an increase in the sale of stringed instruments in America. Joe Lamond, president of the National Association of Music Merchants (NAMM), attributes the increase to a growth of home music-making by adults approaching retirement. "I would really look to the demographics of the [baby] boomers", he said in an interview. These people "are starting to look for something that matters to them ... nothing makes them feel good more than playing music."
A study by the European Music Office in 1996 suggests that not only older people are playing music. "The number of adolescents today to have done music has almost doubled by comparison with those born before 1960", the study shows. While most of this growth is in popular music, some is in chamber music and art music, according to the study.
While there is no agreement about the number of chamber music players, the opportunities for amateurs to play have certainly grown. The number of chamber music camps and retreats, where amateurs can meet for a weekend or a month to play together, has burgeoned. "Music for the Love of It", an organization to promote amateur playing, published a directory of music workshops that listed more than 500 workshops in 24 countries for amateurs in 2008. The Associated Chamber Music Players (ACMP) offers a directory of over 5,000 amateur players worldwide who welcome partners for chamber music sessions.
Regardless of whether the number of amateur players has grown or shrunk, the number of chamber music concerts in the West has increased greatly in the last 20 years. Concert halls have largely replaced the home as the venue for concerts. Baron suggests that one of the reasons for this surge is "the spiraling costs of orchestral concerts and the astronomical fees demanded by famous soloists, which have priced both out of the range of most audiences." The repertoire at these concerts is almost universally the classics of the 19th century. However, modern works are increasingly included in programs, and some groups, like the Kronos Quartet, devote themselves almost exclusively to contemporary music and new compositions, while ensembles like the Turtle Island String Quartet combine classical, jazz, rock and other styles to create crossover music. Cello Fury and Project Trio offer a new spin on the standard chamber ensemble: Cello Fury consists of three cellists and a drummer, and Project Trio includes a flutist, a bassist, and a cellist.
Several groups such as Classical Revolution and Simple Measures have taken classical chamber music out of the concert hall and into the streets. Simple Measures, a group of chamber musicians in Seattle (Washington, US), gives concerts in shopping centers, coffee shops, and streetcars. The Providence (Rhode Island, US) String Quartet has started the "Storefront Strings" program, offering impromptu concerts and lessons out of a storefront in one of Providence's poorer neighborhoods. "What really makes this for me", said Rajan Krishnaswami, cellist and founder of Simple Measures, "is the audience reaction ... you really get that audience feedback."
Performance.
Chamber music performance is a specialized field, and requires a number of skills not normally required for the performance of symphonic or solo music. Many performers and authors have written about the specialized techniques required for a successful chamber musician. Chamber music playing, writes M. D. Herter Norton, requires that "individuals ... make a unified whole yet remain individuals. The soloist is a whole unto himself, and in the orchestra individuality is lost in numbers ...".
"Music of friends".
Many performers contend that the intimate nature of chamber music playing requires certain personality traits.
David Waterman, cellist of the Endellion Quartet, writes that the chamber musician "needs to balance assertiveness and flexibility." Good rapport is essential. Arnold Steinhardt, first violinist of the Guarneri Quartet, notes that many professional quartets suffer from frequent turnover of players. "Many musicians cannot take the strain of going "mano a mano" with the same three people year after year."
Mary Norton, a violinist who studied quartet playing with the Kneisel Quartet at the beginning of the last century, goes so far as to say that players of different parts in a quartet have different personality traits. "By tradition the first violin is the leader" but "this does not mean a relentless predominance." The second violinist "is a little everybody's servant." "The artistic contribution of each member will be measured by his skill in asserting or subduing that individuality which he must possess to be at all interesting."
Interpretation.
"For an individual, the problems of interpretation are challenging enough", writes Waterman, "but for a quartet grappling with some of the most profound, intimate and heartfelt compositions in the music literature, the communal nature of decision-making is often more testing than the decisions themselves."
Daniel Epstein teaching the Schumann piano quartet at Manhattan School of Music (on YouTube). (Picture: "The Music Lesson" by Jan Vermeer)
The problem of finding agreement on musical issues is complicated by the fact that each player is playing a different part, that may appear to demand dynamics or gestures contrary to those of other parts in the same passage. Sometimes these differences are even specified in the score – for example, where cross-dynamics are indicated, with one instrument crescendoing while another is getting softer.
One of the issues that must be settled in rehearsal is who leads the ensemble at each point of the piece. Normally, the first violin leads: the violinist indicates the start of each movement and its tempo with a gesture of the head or bowing hand. However, there are passages that require other instruments to lead. For example, John Dalley, second violinist of the Guarneri Quartet, says, "We'll often ask [the cellist] to lead in pizzicato passages. A cellist's preparatory motion for pizzicato is larger and slower than that of a violinist."
Players discuss issues of interpretation in rehearsal; but often, in mid-performance, players do things spontaneously, requiring the other players to respond in real time. "After twenty years in the [Guarneri] Quartet, I'm happily surprised on occasion to find myself totally wrong about what I think a player will do, or how he'll react in a particular passage", says violist Michael Tree.
Ensemble, blend, and balance.
Playing together constitutes a major challenge to chamber music players. Many compositions pose difficulties in coordination, with figures such as hemiolas, syncopation, fast unison passages and simultaneously sounded notes that form chords that are challenging to play in tune. But beyond the challenge of merely playing together from a rhythmic or intonation perspective is the greater challenge of sounding good together.
To create a unified chamber music sound – to blend – the players must coordinate the details of their technique. They must decide when to use vibrato and how much. They often need to coordinate their bowing and "breathing" between phrases, to ensure a unified sound. They need to agree on special techniques, such as spiccato, sul tasto, sul ponticello, and so on.
Balance refers to the relative volume of each of the instruments. Because chamber music is a conversation, sometimes one instrument must stand out, sometimes another. It is not always a simple matter for members of an ensemble to determine the proper balance while playing; frequently, they require an outside listener, or a recording of their rehearsal, to tell them that the relations between the instruments are correct.
Intonation.
Chamber music playing presents special problems of intonation. The piano is tuned using equal temperament, that is, the 12 notes of the scale are spaced exactly equally. This method makes it possible for the piano to play in any key; however, all the intervals except the octave sound very slightly out of tune. String players can play with just intonation, that is, they can play specific intervals (such as fifths) exactly in tune. Moreover, string and wind players can use "expressive intonation", changing the pitch of a note to create a musical or dramatic effect. "String intonation is more expressive and sensitive than equal-tempered piano intonation."
However, using true and expressive intonation requires careful coordination with the other players, especially when a piece is going through harmonic modulations. "The difficulty in string quartet intonation is to determine the degree of freedom you have at any given moment", says Steinhardt.
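The gap between equal-tempered and just intervals can be quantified with a little arithmetic. The sketch below is an illustration added for this discussion (the interval ratios are standard, but the code itself is not from any cited source); it expresses a few just-intonation ratios in cents and compares them with their equal-tempered counterparts.

```python
import math

def cents(ratio):
    """Convert a frequency ratio to cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Just-intonation ratios and the number of equal-tempered semitones
# conventionally spanned by each interval.
intervals = {
    "perfect fifth": (3 / 2, 7),
    "major third":   (5 / 4, 4),
    "minor third":   (6 / 5, 3),
}

for name, (just_ratio, semitones) in intervals.items():
    just_size = cents(just_ratio)     # size of the pure (just) interval
    equal_size = 100 * semitones      # size of the equal-tempered interval
    print(f"{name:14s} just {just_size:7.2f} cents, "
          f"equal {equal_size:6.1f} cents, "
          f"difference {equal_size - just_size:+6.2f} cents")
```

The equal-tempered fifth is about two cents narrow of the pure fifth, while equal-tempered thirds deviate by roughly 14–16 cents – the kind of discrepancy that string players adjust for by ear.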
The chamber music experience.
Players of chamber music, both amateur and professional, attest to a unique enchantment with playing in ensemble. "It is not an exaggeration to say that there opened out before me an enchanted world", writes Walter Willson Cobbett, instigator of the Cobbett Competition, Cobbett Medal and editor of "Cobbett's Cyclopedic Survey of Chamber Music".
Ensembles develop a close intimacy of shared musical experience. "It is on the concert stage where the moments of true intimacy occur", writes Steinhardt. "When a performance is in progress, all four of us together enter a zone of magic somewhere between our music stands and become a conduit, messenger, and missionary ... It is an experience too personal to talk about and yet it colors every aspect of our relationship, every good-natured musical confrontation, all the professional gossip, the latest viola joke."
The playing of chamber music has been the inspiration for numerous books, both fiction and nonfiction. "An Equal Music", by Vikram Seth, explores the life and love of the second violinist of a fictional quartet, the Maggiore. Central to the story are the tensions and the intimacy that develop between the four members of the quartet. "A strange composite being we are [in performance], not ourselves any more, but the Maggiore, composed of so many disjunct parts: chairs, stands, music, bows, instruments, musicians ..." "The Rosendorf Quartet", by Nathan Shaham, describes the trials of a string quartet in Palestine, before the establishment of the state of Israel. "For the Love of It" by Wayne Booth is a nonfictional account of the author's romance with cello playing and chamber music.
Chamber music societies.
Numerous societies, both national and international, are dedicated to the encouragement and performance of chamber music.
In addition to these national and international organizations, there are also numerous regional and local organizations that support chamber music, as well as many prominent professional American chamber music ensembles.
Ensembles.
Chamber music is played by many different types of ensembles. The standard repertoire for chamber ensembles is rich, and the totality of chamber music in print in sheet music form is nearly boundless. See the articles on each instrument combination for examples of repertoire.
| [
{
"math_id": 0,
"text": "\\tfrac{3}{4}"
},
{
"math_id": 1,
"text": "\\tfrac{4}{4}"
}
] | https://en.wikipedia.org/wiki?curid=82991 |
830034 | Crystal (mathematics) | In mathematics, crystals are Cartesian sections of certain fibered categories. They were introduced by Alexander Grothendieck (1966a), who named them crystals because in some sense they are "rigid" and "grow". In particular quasicoherent crystals over the crystalline site are analogous to quasicoherent modules over a scheme.
An isocrystal is a crystal up to isogeny. Isocrystals are formula_0-adic analogues of formula_1-adic étale sheaves. Convergent isocrystals are a variation of isocrystals that work better over non-perfect fields, and overconvergent isocrystals are another variation related to overconvergent cohomology theories.
A Dieudonné crystal is a crystal with Verschiebung and Frobenius maps. An F-crystal is a structure in semilinear algebra somewhat related to crystals.
Crystals over the infinitesimal and crystalline sites.
The infinitesimal site formula_2 has as objects the infinitesimal extensions of open sets of formula_3.
If formula_3 is a scheme over formula_4 then the sheaf formula_5 is defined by
formula_6 = coordinate ring of formula_7, where we write formula_7 as an abbreviation for
an object formula_8 of formula_2. Sheaves on this site grow in the sense that they can be extended from open sets to infinitesimal extensions of open sets.
A crystal on the site formula_2 is a sheaf formula_9 of formula_5 modules that is rigid in the following sense:
for any map formula_10 between objects formula_7, formula_11 of formula_2, the natural map from formula_12 to formula_13 is an isomorphism.
This is similar to the definition of a quasicoherent sheaf of modules in the Zariski topology.
An example of a crystal is the sheaf formula_5.
Crystals on the crystalline site
are defined in a similar way.
Crystals in fibered categories.
In general, if formula_14 is a fibered category over formula_9, then a crystal is a cartesian section of the fibered category. In the special case when formula_9 is the category of infinitesimal extensions of a scheme formula_3 and formula_14 the category of quasicoherent modules over objects of formula_9, then crystals of this fibered category are the same as crystals of the infinitesimal site. | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\mathbf{Q}_l"
},
{
"math_id": 2,
"text": "\\text{Inf}(X/S)"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "O_{X/S}"
},
{
"math_id": 6,
"text": "O_{X/S}(T)"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "U\\to T"
},
{
"math_id": 9,
"text": "F"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "T'"
},
{
"math_id": 12,
"text": "f^* F(T)"
},
{
"math_id": 13,
"text": "F(T')"
},
{
"math_id": 14,
"text": "E"
}
] | https://en.wikipedia.org/wiki?curid=830034 |
8300841 | British flag theorem | On distances from opposite corners to a point inside a rectangle
In Euclidean geometry, the British flag theorem says that if a point "P" is chosen inside a rectangle "ABCD" then the sum of the squares of the Euclidean distances from "P" to two opposite corners of the rectangle equals the corresponding sum for the other two opposite corners.
As an equation:
formula_0
The theorem also applies to points outside the rectangle, and more generally to the distances from a point in Euclidean space to the corners of a rectangle embedded into the space. Even more generally, if the sums of squares of distances from a point "P" to the two pairs of opposite corners of a parallelogram are compared, the two sums will not in general be equal, but the difference between the two sums will depend only on the shape of the parallelogram and not on the choice of "P".
The theorem can also be thought of as a generalisation of the Pythagorean theorem. Placing the point "P" on any of the four vertices of the rectangle yields the square of the diagonal of the rectangle being equal to the sum of the squares of the width and length of the rectangle, which is the Pythagorean theorem.
Proof.
Drop perpendicular lines from the point "P" to the sides of the rectangle, meeting sides "AB", "BC", "CD", and "AD" at points "W", "X", "Y" and "Z" respectively, as shown in the figure. These four points "WXYZ" form the vertices of an orthodiagonal quadrilateral.
By applying the Pythagorean theorem to the right triangle "AWP", and observing that "WP" = "AZ", it follows that
formula_1
and by a similar argument the squares of the lengths of the distances from "P" to the other three corners can be calculated as
formula_2
formula_3 and
formula_4
Therefore:
formula_5
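As a quick sanity check of the identity, the following sketch (added for illustration; the rectangle dimensions and sampled points are arbitrary) verifies numerically that AP² + CP² = BP² + DP² for randomly chosen points, whether inside or outside the rectangle.

```python
import random

def sq_dist(p, q):
    """Squared Euclidean distance between points p and q."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# Axis-aligned rectangle ABCD with width w and height h.
w, h = 5.0, 3.0
A, B, C, D = (0, 0), (w, 0), (w, h), (0, h)

for _ in range(5):
    P = (random.uniform(-10, 10), random.uniform(-10, 10))  # P may lie outside
    lhs = sq_dist(A, P) + sq_dist(C, P)   # opposite corners A and C
    rhs = sq_dist(B, P) + sq_dist(D, P)   # opposite corners B and D
    assert abs(lhs - rhs) < 1e-9
print("AP^2 + CP^2 == BP^2 + DP^2 for all sampled points")
```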
Isosceles trapezoid.
The British flag theorem can be generalized into a statement about (convex) isosceles trapezoids. More precisely, for a trapezoid formula_6 with parallel sides formula_7 and formula_8 and interior point formula_9, the following equation holds:
formula_10
In the case of a rectangle the fraction formula_11 evaluates to 1 and hence yields the original theorem.
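The trapezoid version can be checked numerically in the same spirit. The sketch below (an illustration with arbitrary dimensions, not taken from the source) builds an isosceles trapezoid ABCD with AB parallel to CD, samples interior points, and confirms the weighted identity above.

```python
import random

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# Isosceles trapezoid ABCD, symmetric about the y-axis,
# with AB (length 2*a) parallel to CD (length 2*c).
a, c, h = 4.0, 2.0, 3.0
A, B, C, D = (-a, 0.0), (a, 0.0), (c, h), (-c, h)
ratio = (2 * a) / (2 * c)   # |AB| / |CD|

for _ in range(5):
    y = random.uniform(0.0, h)
    half_width = a - (a - c) * (y / h)   # half-width of the trapezoid at height y
    x = random.uniform(-half_width, half_width)
    P = (x, y)
    lhs = sq_dist(A, P) + ratio * sq_dist(P, C)
    rhs = sq_dist(B, P) + ratio * sq_dist(P, D)
    assert abs(lhs - rhs) < 1e-9
print("trapezoid identity verified for sampled interior points")
```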
Naming.
This theorem takes its name from the fact that, when the line segments from "P" to the corners of the rectangle are drawn, together with the perpendicular lines used in the proof, the completed figure resembles a Union Flag.
| [
{
"math_id": 0,
"text": " AP^2 + CP^2 = BP^2 + DP^2. "
},
{
"math_id": 1,
"text": "AP^2 = AW^2 + WP^2 = AW^2 + AZ^2"
},
{
"math_id": 2,
"text": "PC^2 = WB^2 + ZD^2,"
},
{
"math_id": 3,
"text": "BP^2 = WB^2 + AZ^2,"
},
{
"math_id": 4,
"text": "PD^2 = ZD^2 + AW^2."
},
{
"math_id": 5,
"text": "\\begin{align}\nAP^2 + PC^2 &= \\left(AW^2 + AZ^2\\right) + \\left(WB^2 + ZD^2\\right) \\\\[4pt]\n &= \\left(WB^2 + AZ^2\\right) + \\left(ZD^2 + AW^2\\right) \\\\[4pt]\n &= BP^2 + PD^2\n\\end{align}"
},
{
"math_id": 6,
"text": "ABCD"
},
{
"math_id": 7,
"text": "AB"
},
{
"math_id": 8,
"text": "CD"
},
{
"math_id": 9,
"text": "P"
},
{
"math_id": 10,
"text": "|AP|^2+\\frac{|AB|}{|CD|} \\cdot |PC|^2=|BP|^2+\\frac{|AB|}{|CD|} \\cdot |PD|^2"
},
{
"math_id": 11,
"text": "\\tfrac{|AB|}{|CD|}"
}
] | https://en.wikipedia.org/wiki?curid=8300841 |
8302059 | Gamma diversity | Total species diversity in a landscape
In ecology, gamma diversity (γ-diversity) is the total species diversity in a landscape. The term was introduced by R. H. Whittaker together with the terms alpha diversity (α-diversity) and beta diversity (β-diversity). Whittaker's idea was that the total species diversity in a landscape (γ) is determined by two different things, the mean species diversity in sites at a more local scale (α) and the differentiation among those sites (β). According to this reasoning, alpha diversity and beta diversity constitute independent components of gamma diversity:
γ = α × β
Scale considerations.
The area or landscape of interest may be of very different sizes in different situations, and no consensus has been reached on what spatial scales are appropriate to quantify gamma diversity. It has therefore been proposed that the definition of gamma diversity does not need to be tied to a specific spatial scale, but gamma diversity can be measured for an existing dataset at any scale of interest. If results are extrapolated beyond the actual observations, it needs to be taken into account that the species diversity in the dataset generally gives an underestimation of the species diversity in a larger area. The smaller the available sample in relation to the area of interest, the more species that actually exist in the area are not found in the sample. The degree of underestimation can be estimated from a species-area curve.
Different concepts.
Researchers have used different ways to define diversity, which in practice has led to different definitions of gamma diversity as well. Often researchers use the values given by one or more diversity indices, such as species richness, the Shannon index or the Simpson index. However, it has been argued that it would be better to use the effective number of species as the universal measure of species diversity. This measure allows weighting rare and abundant species in different ways, just as the diversity indices collectively do, but its meaning is intuitively easier to understand. The effective number of species is the number of equally abundant species needed to obtain the same mean proportional species abundance as that observed in the dataset of interest (where all species may not be equally abundant).
Calculation.
Suppose species diversity is equated with the effective number of species in a dataset. Then gamma diversity can be calculated by first taking the weighted mean of species proportional abundances in the dataset, and then taking the inverse of this mean. The equation is
formula_0
The denominator equals mean proportional species abundance in the dataset as calculated with the weighted generalized mean with exponent "q" − 1. In the equation, "S" is the total number of species (species richness) in the dataset, and the proportional abundance of the "i"th species is formula_1.
Large values of "q" lead to smaller gamma diversity than small values of "q", because increasing "q" increases the weight given to those species with the highest proportional abundance, and fewer equally abundant species are hence needed to obtain this proportional abundance.
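As a concrete illustration of the formula, the following sketch (added here; the abundance counts are made up) computes gamma diversity as the effective number of species for several values of "q", treating "q" = 1 as the usual limit given by the exponential of Shannon entropy.

```python
import math

def gamma_diversity(abundances, q):
    """Effective number of species (Hill number) of order q
    for a single pooled dataset."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:
        # limit q -> 1: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    # inverse of the weighted generalized mean with exponent q - 1
    return sum(pi * pi ** (q - 1) for pi in p) ** (1 / (1 - q))

# hypothetical landscape-level species counts
counts = [50, 30, 10, 5, 3, 1, 1]

for q in (0, 1, 2, 3):
    print(f"q = {q}: gamma diversity = {gamma_diversity(counts, q):.3f}")
```

With these counts, "q" = 0 returns the species richness (7), while larger "q" gives progressively smaller effective numbers because rare species receive less weight.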
| [
{
"math_id": 0,
"text": "{}^q\\!D_\\gamma = \\frac 1 {\\sqrt[q-1]{\\sum_{i=1}^S p_i p_i^{q-1}}}. "
},
{
"math_id": 1,
"text": "p_i"
}
] | https://en.wikipedia.org/wiki?curid=8302059 |
8302382 | Contraharmonic mean | In mathematics, a contraharmonic mean is a function complementary to the harmonic mean. The contraharmonic mean is a special case of the Lehmer mean, formula_0, where "p" = 2.
Definition.
The contraharmonic mean of a set of positive real numbers is defined as the arithmetic mean of the squares of the numbers divided by the arithmetic mean of the numbers:
formula_1
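For illustration, the definition translates directly into code. The following sketch (not from the source; the sample values are arbitrary) computes the contraharmonic mean of a list and shows how it compares with the harmonic and arithmetic means.

```python
def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

def contraharmonic_mean(xs):
    # arithmetic mean of the squares divided by the arithmetic mean
    return sum(x * x for x in xs) / sum(xs)

data = [1.0, 2.0, 4.0, 8.0]
print("harmonic      ", harmonic_mean(data))
print("arithmetic    ", arithmetic_mean(data))
print("contraharmonic", contraharmonic_mean(data))
```

For these values the harmonic mean is about 2.13, the arithmetic mean 3.75, and the contraharmonic mean about 5.67, illustrating how the contraharmonic mean is pulled toward the larger values.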
Two-variable formulae.
From the formulas for the arithmetic mean and harmonic mean of two variables we have:
formula_2
Notice that for two variables the average of the harmonic and contraharmonic means is exactly equal to the arithmetic mean:
A(H("a", "b"), C("a", "b")) = A("a", "b")
As "a" gets closer to 0 then H("a", "b") also gets closer to 0. The harmonic mean is very sensitive to low values. On the other hand, the contraharmonic mean is sensitive to larger values, so as "a" approaches 0 then C("a", "b") approaches "b" (so their average remains A("a", "b")).
There are two other notable relationships between 2-variable means. First, the geometric mean of the arithmetic and harmonic means is equal to the geometric mean of the two values:
formula_3
The second relationship is that the geometric mean of the arithmetic and contraharmonic means is the root mean square:
formula_4
The contraharmonic mean of two variables can be constructed geometrically using a trapezoid.
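The two-variable relationships above are easy to confirm numerically. The sketch below (an added illustration with arbitrary inputs) checks that the average of the harmonic and contraharmonic means is the arithmetic mean, that the geometric mean of A and H is G, and that the geometric mean of A and C is the root mean square.

```python
import math

def A(a, b): return (a + b) / 2
def H(a, b): return 2 * a * b / (a + b)
def C(a, b): return (a * a + b * b) / (a + b)
def G(a, b): return math.sqrt(a * b)
def R(a, b): return math.sqrt((a * a + b * b) / 2)

a, b = 3.0, 7.0
assert math.isclose(A(H(a, b), C(a, b)), A(a, b))   # mean of H and C is A
assert math.isclose(G(A(a, b), H(a, b)), G(a, b))   # G of A and H is G
assert math.isclose(G(A(a, b), C(a, b)), R(a, b))   # G of A and C is the RMS
print("two-variable identities hold for a =", a, "and b =", b)
```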
Additional constructions.
The contraharmonic mean can be constructed on a circle similar to the way the Pythagorean means of two variables are constructed. The contraharmonic mean is the remainder of the diameter on which the harmonic mean lies.
History.
The contraharmonic mean was discovered by the Greek mathematician Eudoxus in the 4th century BCE.
Properties.
It is easy to show that this satisfies the characteristic properties of a mean of some list of values formula_5:
formula_6
formula_7
The first property implies the "fixed point property", that for all "k" > 0,
C("k", "k", ..., "k") = "k"
The contraharmonic mean is higher in value than the arithmetic mean and also higher than the root mean square:
formula_8
where x is a list of values, H is the harmonic mean, G is geometric mean, L is the logarithmic mean, A is the arithmetic mean, R is the root mean square and C is the contraharmonic mean. Unless all values of x are the same, the ≤ signs above can be replaced by <.
The name "contraharmonic" may be due to the fact that when taking the mean of only two variables, the contraharmonic mean is as high above the arithmetic mean as the arithmetic mean is above the harmonic mean (i.e., the arithmetic mean of the two variables is equal to the arithmetic mean of their harmonic and contraharmonic means).
Relationship to arithmetic mean and variance.
The contraharmonic mean of a random variable is equal to the sum of the arithmetic mean and the variance divided by the arithmetic mean. Since the variance is always ≥0 the contraharmonic mean is always greater than or equal to the arithmetic mean.
The ratio of the variance to the mean was proposed as a test statistic by Clapham. This statistic is the contraharmonic mean less the arithmetic mean.
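This relationship, C = A + Var/A with the variance taken in its population (divide-by-"n") form, can be verified directly; the sketch below (illustrative values only) also checks that Clapham's variance-to-mean ratio equals the contraharmonic mean minus the arithmetic mean.

```python
def contraharmonic_mean(xs):
    return sum(x * x for x in xs) / sum(xs)

data = [2.0, 3.0, 5.0, 10.0]
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / n    # population variance

c = contraharmonic_mean(data)
assert abs(c - (mean + var / mean)) < 1e-12     # C = A + Var/A

clapham = var / mean                            # variance-to-mean ratio
assert abs(clapham - (c - mean)) < 1e-12        # equals C - A
print(f"contraharmonic mean {c:.3f}, arithmetic mean {mean:.3f}, variance {var:.3f}")
```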
Other relationships.
Any integer contraharmonic mean of two different positive integers is the hypotenuse of a Pythagorean triple, while any hypotenuse of a Pythagorean triple is a contraharmonic mean of two different positive integers.
It is also related to Katz's statistic
formula_9
where "m" is the mean, "s"2 the variance and "n" is the sample size.
"J"n is asymptotically normally distributed with a mean of zero and variance of 1.
Uses in statistics.
The problem of a size biased sample was discussed by Cox in 1969 on a problem of sampling fibres. The expectation of a size-biased sample is equal to the contraharmonic mean of the original variable, and the contraharmonic mean is also used to estimate bias fields in "multiplicative" models, rather than the arithmetic mean as used in "additive" models.
The contraharmonic mean can be used to average the intensity values of neighbouring pixels in image processing, so as to reduce noise in images and make them clearer to the eye.
The probability of a fibre being sampled is proportional to its length. Because of this the usual sample mean (arithmetic mean) is a biased estimator of the true mean. To see this consider
formula_10
where "f"("x") is the true population distribution, "g"("x") is the length weighted distribution and "m" is the sample mean. Taking the usual expectation of the mean here gives the contraharmonic mean rather than the usual (arithmetic) mean of the sample. This problem can be overcome by taking instead the expectation of the harmonic mean (1/"x"). The expectation and variance of 1/"x" are
formula_11
and
formula_12
where E is the expectation operator. Asymptotically E[1/"x"] is distributed normally.
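A small simulation makes the bias concrete. The sketch below (added for illustration; the lognormal population and sample sizes are arbitrary choices) draws a length-biased sample and shows that its arithmetic mean approaches the contraharmonic mean of the population, while the harmonic mean of the biased sample recovers the true mean.

```python
import random

random.seed(0)

# hypothetical "fibre length" population
population = [random.lognormvariate(0.0, 0.5) for _ in range(100_000)]
m = sum(population) / len(population)                      # true mean
contra = sum(x * x for x in population) / sum(population)  # contraharmonic mean

# length-biased sample: selection probability proportional to length
biased = random.choices(population, weights=population, k=50_000)

biased_mean = sum(biased) / len(biased)
harmonic_est = len(biased) / sum(1 / x for x in biased)

print(f"true mean              {m:.4f}")
print(f"contraharmonic mean    {contra:.4f}")
print(f"mean of biased sample  {biased_mean:.4f}  (near the contraharmonic mean)")
print(f"harmonic-mean estimate {harmonic_est:.4f}  (near the true mean)")
```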
The asymptotic efficiency of length-biased sampling, compared with random sampling, depends on the underlying distribution. If "f"("x") is log normal the efficiency is 1, while if the population is gamma distributed with index "b", the efficiency is "b"/("b" − 1). This distribution has been used in modelling consumer behaviour as well as quality sampling.
It has been used in image analysis, and recently alongside the exponential distribution in transport planning in the form of its inverse.
| [
{
"math_id": 0,
"text": "L_p"
},
{
"math_id": 1,
"text": "\\begin{align}\n \\operatorname{C}\\left(x_1, x_2, \\dots, x_n\\right) &= {{1 \\over n} \\left(x_1^2 + x_2^2 + \\cdots + x_n^2\\right) \\over {1 \\over n}\\left(x_1 + x_2 + \\cdots + x_n\\right)}, \\\\[3pt]\n &= {{x_1^2 + x_2^2 + \\cdots + x_n^2} \\over {x_1 + x_2 + \\cdots + x_n}}.\n\\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align}\n \\operatorname{A}(a, b) &= {{a + b} \\over 2} \\\\\n \\operatorname{H}(a, b) &= {1 \\over {{1 \\over 2} \\cdot {\\left({1 \\over a} + {1 \\over b}\\right)}}} = {{2ab} \\over {a + b}} \\\\\n \\operatorname{C}(a, b) &= 2 \\cdot A(a ,b) - H(a, b) \\\\\n &= a + b - {{2ab} \\over {a + b}} = {{(a + b)^2 - 2ab} \\over {a + b}} \\\\\n &= {{a^2 + b^2} \\over {a + b}}\n\\end{align}"
},
{
"math_id": 3,
"text": "\n \\operatorname{G}(\\operatorname{A}(a, b), \\operatorname{H}(a, b)) = \\operatorname{G}\\left({{a + b} \\over 2}, {{2ab} \\over {a + b}}\\right) =\n \\sqrt{{{a + b}\\over 2} \\cdot {{2ab} \\over {a + b}}} = \\sqrt{ab} = \\operatorname{G}(a, b)\n"
},
{
"math_id": 4,
"text": "\\begin{align}\n &\\operatorname{G}\\left(\\operatorname{A}(a, b), \\operatorname{C}(a, b)\\right)\n ={} \\operatorname{G}\\left({{a + b} \\over 2}, {{a^2 + b^2} \\over {a + b}}\\right) \\\\\n ={} &\\sqrt{{{a + b} \\over 2} \\cdot {{a^2 + b^2} \\over {a + b}}}\n ={} \\sqrt{{{a^2 + b^2} \\over 2}} \\\\[2pt]\n ={} &\\operatorname{R}(a, b)\n\\end{align}"
},
{
"math_id": 5,
"text": "\\mathbf{x}"
},
{
"math_id": 6,
"text": "\\min\\left(\\mathbf{x}\\right) \\le \\operatorname{C}\\left(\\mathbf{x}\\right) \\le \\max\\left(\\mathbf{x}\\right)"
},
{
"math_id": 7,
"text": "\\operatorname{C}\\left(t \\cdot \\mathbf{x}_1, t \\cdot \\mathbf{x}_2,\\, \\dots,\\, t \\cdot \\mathbf{x}_n\\right) = t \\cdot C\\left(\\mathbf{x}_1, \\mathbf{x}_2,\\, \\dots,\\, \\mathbf{x}_n\\right)\\text{ for }t > 0"
},
{
"math_id": 8,
"text": "\\min(\\mathbf{x}) \\leq \\operatorname{H}(\\mathbf{x}) \\leq \\operatorname{G}(\\mathbf{x}) \\leq \\operatorname{L}(\\mathbf{x}) \\leq \\operatorname{A}(\\mathbf{x}) \\leq \\operatorname{R}(\\mathbf{x}) \\leq \\operatorname{C}(\\mathbf{x}) \\leq \\max(\\mathbf{x}) "
},
{
"math_id": 9,
"text": " J_n = \\sqrt { \\frac { n } { 2 } } \\frac { s^2 - m } { m } "
},
{
"math_id": 10,
"text": "g(x) = \\frac{x f(x)}{m}"
},
{
"math_id": 11,
"text": "\\operatorname{E}\\left[ \\frac{1}{x} \\right] = \\frac{1}{m}"
},
{
"math_id": 12,
"text": "\\operatorname{Var} \\left( \\frac{1}{x} \\right) = \\frac{m E\\left[\\frac{1}{x} - 1\\right]}{nm^2}"
}
] | https://en.wikipedia.org/wiki?curid=8302382 |
8304736 | RiskMetrics | American financial services company
The RiskMetrics variance model (also known as exponential smoother) was first established in 1989, when Sir Dennis Weatherstone, the new chairman of J.P. Morgan, asked for a daily report measuring and explaining the risks of his firm. Nearly four years later in 1992, J.P. Morgan launched the RiskMetrics methodology to the marketplace, making the substantive research and analysis that satisfied Sir Dennis Weatherstone's request freely available to all market participants.
In 1998, as client demand for the group's risk management expertise exceeded the firm's internal risk management resources, the Corporate Risk Management Department was spun off from J.P. Morgan as RiskMetrics Group with 23 founding employees. The RiskMetrics technical document was revised in 1996. In 2001, it was revised again in "Return to RiskMetrics". In 2006, a new method for modeling risk factor returns was introduced (RM2006). On 25 January 2008, RiskMetrics Group listed on the New York Stock Exchange (NYSE: RISK). In June 2010, RiskMetrics was acquired by MSCI for $1.55 billion.
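The variance model itself is an exponentially weighted moving average (EWMA) of squared returns. A minimal sketch follows; the seeding of the recursion and the simulated returns are illustrative assumptions, and λ = 0.94 is the decay factor commonly cited for daily data in the 1996 technical document.

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """Exponentially weighted moving-average (EWMA) variance estimate,
    the 'exponential smoother' behind the RiskMetrics variance model:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns[0] ** 2      # simple seed for the recursion (an assumption)
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2

# Example with simulated daily returns.
rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0, 0.01, size=500)
print(np.sqrt(ewma_variance(daily_returns))[-5:])   # recent volatility estimates
```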
Risk measurement process.
Portfolio risk measurement can be broken down into steps. The first is modeling the market that drives changes in the portfolio's value. The market model must be sufficiently specified so that the portfolio can be revalued using information from the market model. The risk measurements are then extracted from the probability distribution of the changes in portfolio value. The change in value of the portfolio is typically referred to by portfolio managers as profit and loss, or P&L.
Risk factors.
Risk management systems are based on models that describe potential changes in the factors affecting portfolio value. These risk factors are the building blocks for all pricing functions. In general, the factors driving the prices of financial securities are equity prices, foreign exchange rates, commodity prices, interest rates, correlation and volatility. By generating future scenarios for each risk factor, we can infer changes in portfolio value and reprice the portfolio for different "states of the world".
Portfolio risk measures.
Standard deviation.
The first widely used portfolio risk measure was the standard deviation of portfolio value, as described by Harry Markowitz. While comparatively easy to calculate, standard deviation is not an ideal risk measure since it penalizes profits as well as losses.
Value at risk.
The 1994 technical document popularized VaR as the risk measure of choice among investment banks looking to measure their portfolio risk for the benefit of banking regulators. VaR is a downside risk measure, meaning that it typically focuses on losses.
Expected shortfall.
A third commonly used risk measure is expected shortfall, also known variously as expected tail loss, XLoss, conditional VaR, or CVaR.
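As a rough illustration of both measures (an empirical-quantile sketch, not the RiskMetrics implementation), they can be read directly off a sample of P&L scenarios; the confidence level and simulated P&L below are assumptions.

```python
import numpy as np

def var_and_es(pnl, alpha=0.99):
    """Historical VaR and expected shortfall from a sample of P&L values.
    Losses are negative P&L; both measures are reported as positive numbers."""
    losses = -np.asarray(pnl)
    var = np.quantile(losses, alpha)       # loss exceeded with probability 1 - alpha
    es = losses[losses >= var].mean()      # average loss beyond the VaR level
    return var, es

rng = np.random.default_rng(2)
pnl = rng.normal(0.0, 1_000_000, size=10_000)   # simulated one-day P&L in dollars
print(var_and_es(pnl))
```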
Marginal VaR.
The Marginal VaR of a position with respect to a portfolio can be thought of as the amount of risk that the position is adding to the portfolio. It can be formally defined as the difference between the VaR of the total portfolio and the VaR of the portfolio without the position.
<templatestyles src="Template:Blockquote/styles.css" />To measure the effect of changing positions on portfolio risk, individual VaRs are insufficient. Volatility measures the uncertainty in the return of an asset, taken in isolation. When this asset belongs to a portfolio, however, what matters is the contribution to portfolio risk.
Incremental risk.
Incremental risk statistics provide information regarding the sensitivity of portfolio risk to changes in the position holding sizes in the portfolio.
An important property of incremental risk is additivity. That is, the sum of the incremental risks of the positions in a portfolio equals the total risk of the portfolio. This property has important applications in the allocation of risk to different units, where the goal is to keep the sum of the risks equal to the total risk.
Since there are three risk measures covered by RiskMetrics, there are three incremental risk measures: Incremental VaR (IVaR), Incremental Expected Shortfall (IES), and Incremental Standard Deviation (ISD).
Incremental statistics also have applications to portfolio optimization. A portfolio with minimum risk will have incremental risk equal to zero for all positions. Conversely, if the incremental risk is zero for all positions, the portfolio is guaranteed to have minimum risk only if the risk measure is subadditive.
Coherent risk measures.
A coherent risk measure satisfies the following four properties:
1. Subadditivity
A risk measure is subadditive if for any portfolios A and B, the risk of A+B is never greater than the risk of A plus the risk of B. In other words, the risk of the sum of subportfolios is smaller than or equal to the sum of their individual risks.
Standard deviation and expected shortfall are subadditive, while VaR is not; a small numerical sketch after this list illustrates how VaR can fail subadditivity.
Subadditivity is required in connection with aggregation of risks across desks, business units, accounts, or subsidiary companies. This property is important when different business units calculate their risks independently and we want to get an idea of the total risk involved. Lack of subadditivity could also be a matter of concern for regulators, where firms might be motivated to break up into affiliates to satisfy capital requirements.
2. Translation invariance
Adding cash to the portfolio decreases its risk by the same amount.
3. Positive homogeneity of degree 1
If we double the size of every position in a portfolio, the risk of the portfolio will be twice as large.
4. Monotonicity
If losses in portfolio A are larger than losses in portfolio B for all possible risk factor return scenarios, then the risk of portfolio A is higher than the risk of portfolio B.
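The failure of subadditivity for VaR noted above can be made concrete with a classic style of counterexample: two independent, low-probability losses each have zero 95% VaR on their own, yet a positive VaR when combined. The simulation below is an illustrative sketch; the loss sizes and probabilities are made up.

```python
import numpy as np

def var(losses, alpha=0.95):
    """Empirical VaR of a sample of losses (positive numbers are losses)."""
    return np.quantile(losses, alpha)

# Two independent loans, each losing 100 with probability 4% (else 0).
rng = np.random.default_rng(6)
loss_a = 100 * (rng.random(1_000_000) < 0.04)
loss_b = 100 * (rng.random(1_000_000) < 0.04)

print(var(loss_a), var(loss_b))   # 0.0 and 0.0 at the 95% level
print(var(loss_a + loss_b))       # 100.0 > 0 + 0, so VaR is not subadditive here
```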
Assessing risk measures.
The estimation process of any risk measure can be wrong by a considerable margin. If from the imprecise estimate we cannot get a good understanding of what the true value could be, then the estimate is virtually worthless. Good risk measurement practice is therefore to supplement any estimated risk measure with an indicator of its precision, or of the size of its error.
There are various ways to quantify the error of some estimates. One approach is to estimate a confidence interval of the risk measurement.
Market models.
RiskMetrics describes three models for modeling the risk factors that define financial markets.
Covariance approach.
The first is very similar to the mean-covariance approach of Markowitz. Markowitz assumed that asset covariance matrix formula_0 can be observed. The covariance matrix can be used to compute portfolio variance. RiskMetrics assumes that the market is driven by risk factors with observable covariance. The risk factors are represented by time series of prices or levels of stocks, currencies, commodities, and interest rates. Instruments are evaluated from these risk factors via various pricing models. The portfolio itself is assumed to be some linear combination of these instruments.
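As a rough sketch of this covariance approach under a linear (delta) mapping of the portfolio onto the risk factors, the portfolio variance is the quadratic form w'Σw; the exposures, covariance matrix and the normal-quantile VaR step below are illustrative assumptions.

```python
import numpy as np

# Illustrative: two risk factors with an observed covariance of returns,
# and a portfolio mapped onto them with the dollar exposures in w.
sigma = np.array([[0.04, 0.006],
                  [0.006, 0.09]])        # covariance matrix of factor returns
w = np.array([1_000_000, -250_000])      # exposures to each factor

portfolio_variance = w @ sigma @ w       # w' Sigma w
portfolio_vol = np.sqrt(portfolio_variance)
print(portfolio_vol)

# Under a normality assumption, a 95% one-period VaR estimate follows as:
print(1.645 * portfolio_vol)             # 1.645 is the one-sided 95% normal quantile
```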
Historical simulation.
The second market model assumes that the market only has finitely many possible changes, drawn from a risk factor return sample of a defined historical period. Typically one performs a historical simulation by sampling from past day-on-day risk factor changes, and applying them to the current level of the risk factors to obtain risk factor price scenarios. These perturbed risk factor price scenarios are used to generate a profit (loss) distribution for the portfolio.
This method has the advantage of simplicity, but as a model, it is slow to adapt to changing market conditions. It also suffers from simulation error, as the number of simulations is limited by the historical period (typically between 250 and 500 business days).
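A minimal sketch of historical simulation along these lines, assuming a toy linear portfolio and simulated price history standing in for real market data (the function and parameter names are hypothetical):

```python
import numpy as np

def historical_simulation_var(current_levels, past_levels, revalue, alpha=0.99):
    """Apply past day-on-day risk factor changes to today's levels,
    revalue the portfolio in each scenario and read off the VaR."""
    returns = np.diff(np.log(past_levels), axis=0)     # historical log-changes
    scenarios = current_levels * np.exp(returns)       # perturbed factor levels
    base_value = revalue(current_levels)
    pnl = np.array([revalue(s) for s in scenarios]) - base_value
    return np.quantile(-pnl, alpha)

# Illustrative portfolio: long 100 units of factor 0, short 50 of factor 1.
revalue = lambda levels: 100 * levels[0] - 50 * levels[1]
rng = np.random.default_rng(4)
past = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(500, 2)), axis=0))
print(historical_simulation_var(past[-1], past, revalue))
```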
Monte Carlo simulation.
The third market model assumes that the logarithm of the return, or log-return, of any risk factor typically follows a normal distribution. Collectively, the log-returns of the risk factors are multivariate normal. A Monte Carlo simulation generates random market scenarios drawn from that multivariate normal distribution. For each scenario, the profit (loss) of the portfolio is computed. This collection of profit (loss) scenarios provides a sampling of the profit (loss) distribution from which one can compute the risk measures of choice.
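A minimal Monte Carlo sketch under those assumptions (multivariate normal log-returns); the factor levels, covariance matrix and toy portfolio below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed inputs: today's risk factor levels, and the mean/covariance of
# their (multivariate normal) log-returns estimated from history.
levels = np.array([100.0, 50.0])
mu = np.zeros(2)
cov = np.array([[1.0e-4, 0.3e-4],
                [0.3e-4, 2.0e-4]])
revalue = lambda x: 100 * x[..., 0] - 50 * x[..., 1]   # toy linear portfolio

log_returns = rng.multivariate_normal(mu, cov, size=100_000)
scenario_levels = levels * np.exp(log_returns)
pnl = revalue(scenario_levels) - revalue(levels)

losses = -pnl
var_99 = np.quantile(losses, 0.99)
print("99% VaR:", var_99)
print("99% expected shortfall:", losses[losses >= var_99].mean())
```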
Criticism.
Nassim Taleb in his book "The Black Swan" (2007) wrote:
Banks are now more vulnerable to the Black Swan than ever before with "scientists" among their staff taking care of exposures. The giant firm J. P. Morgan put the entire world at risk by introducing in the nineties RiskMetrics, a phony method aiming at managing people’s risks. A related method called “Value-at-Risk,” which relies on the quantitative measurement of risk, has been spreading.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma"
}
] | https://en.wikipedia.org/wiki?curid=8304736 |
8305335 | Metrical task system | Task systems are mathematical objects used to model the set of possible configurations of online algorithms. They were introduced by Borodin, Linial and Saks (1992) to model a variety of online problems. A task system determines a set of states and costs to change states. Task systems obtain as input a sequence of requests such that each request assigns processing times to the states. The objective of an online algorithm for task systems is to create a schedule that minimizes the overall cost incurred due to processing the tasks with respect to the states and due to the cost to change states.
If the cost function to change states is a metric, the task system is a metrical task system (MTS). This is the most common type of task systems.
Metrical task systems generalize online problems such as paging, list accessing, and the k-server problem (in finite spaces).
Formal definition.
A task system is a pair formula_0 where formula_1 is a set of states and formula_2 is a distance function. If formula_3 is a metric, formula_0 is a metrical task system. An input to the task system is a sequence formula_4 such that for each formula_5, formula_6 is a vector of formula_7 non-negative entries that determine the processing costs for the formula_7 states when processing the formula_5th task.
An algorithm for the task system produces a schedule formula_8 that determines the sequence of states. For instance, formula_9 means that the formula_5th task formula_6 is run in state formula_10. The processing cost of a schedule is
formula_11
The objective of the algorithm is to find a schedule such that the cost is minimized.
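To make the cost expression concrete, the offline optimum of a small task system (the benchmark used in competitive analysis) can be computed by a simple dynamic program over the states; the two-state instance, the unit metric and the choice of starting state below are illustrative assumptions.

```python
import math

def offline_optimum(d, tasks, start=0):
    """Offline optimum of a (metrical) task system by dynamic programming:
    opt[s] = cheapest cost of serving the tasks seen so far and ending in state s."""
    n = len(d)
    opt = [0.0 if s == start else math.inf for s in range(n)]
    for task in tasks:                      # task[s] = processing cost of this task in state s
        opt = [min(opt[prev] + d[prev][s] + task[s] for prev in range(n))
               for s in range(n)]
    return min(opt)

# Two states with unit transition cost; each task is cheap in exactly one state.
d = [[0, 1],
     [1, 0]]
tasks = [[0, 5], [5, 0], [0, 5]]
print(offline_optimum(d, tasks))   # 2: move to whichever state processes each task for free
```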
Known results.
As usual for online problems, the most common measure to analyze algorithms for metrical task systems is the competitive analysis, where the performance of an online algorithm is compared to the performance of an optimal offline algorithm. For deterministic online algorithms, there is a tight bound formula_12 on the competitive ratio due to Borodin et al. (1992).
For randomized online algorithms, the competitive ratio is lower bounded by formula_13 and upper bounded by formula_14. The lower bound is due to Bartal et al. (2006, 2005). The upper bound is due to Bubeck, Cohen, Lee and Lee (2018) who improved upon a result of Fiat and Mendel (2003).
There are many results for various types of restricted metrics. | [
{
"math_id": 0,
"text": "(S,d)"
},
{
"math_id": 1,
"text": "S=\\{s_1,s_2,\\dotsc,s_n\\}"
},
{
"math_id": 2,
"text": " d:S \\times S \\rightarrow \\mathbb{R}"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "\\sigma = T_1,T_2,\\dotsc,T_l"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "T_i"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "\\pi"
},
{
"math_id": 9,
"text": " \\pi(i)=s_j"
},
{
"math_id": 10,
"text": "s_j"
},
{
"math_id": 11,
"text": " \\mathrm{cost}(\\pi,\\sigma) = \\sum_{i=1}^l d(\\pi(i-1),\\pi(i)) + T_i(\\pi(i))."
},
{
"math_id": 12,
"text": " 2n-1"
},
{
"math_id": 13,
"text": " \\Omega(\\log n / \\log\\log n) "
},
{
"math_id": 14,
"text": " O\\left((\\log n)^2\\right)"
}
] | https://en.wikipedia.org/wiki?curid=8305335 |
83060 | Magnetometer | Device that measures magnetism
A magnetometer is a device that measures magnetic field or magnetic dipole moment. Different types of magnetometers measure the direction, strength, or relative change of a magnetic field at a particular location. A compass is one such device, one that measures the direction of an ambient magnetic field, in this case, the Earth's magnetic field. Other magnetometers measure the magnetic dipole moment of a magnetic material such as a ferromagnet, for example by recording the effect of this magnetic dipole on the induced current in a coil.
The first magnetometer capable of measuring the absolute magnetic intensity at a point in space was invented by Carl Friedrich Gauss in 1833, and notable developments in the 19th century included the Hall effect, which is still widely used.
Magnetometers are widely used for measuring the Earth's magnetic field, in geophysical surveys, to detect magnetic anomalies of various types, and to determine the dipole moment of magnetic materials. In an aircraft's attitude and heading reference system, they are commonly used as a heading reference. Magnetometers are also used by the military as a triggering mechanism in magnetic mines to detect submarines. Consequently, some countries, such as the United States, Canada and Australia, classify the more sensitive magnetometers as military technology, and control their distribution.
Magnetometers can be used as metal detectors: they can detect only magnetic (ferrous) metals, but can detect such metals at a much greater distance than conventional metal detectors, which rely on conductivity. Magnetometers are capable of detecting large objects, such as cars, at over , while a conventional metal detector's range is rarely more than .
In recent years, magnetometers have been miniaturized to the extent that they can be incorporated in integrated circuits at very low cost and are finding increasing use as miniaturized compasses (MEMS magnetic field sensor).
Introduction.
Magnetic fields.
Magnetic fields are vector quantities characterized by both strength and direction. The strength of a magnetic field is measured in units of tesla in the SI units, and in gauss in the cgs system of units. 10,000 gauss are equal to one tesla. Measurements of the Earth's magnetic field are often quoted in units of nanotesla (nT), also called a gamma. The Earth's magnetic field can vary from 20,000 to 80,000 nT depending on location, fluctuations in the Earth's magnetic field are on the order of 100 nT, and magnetic field variations due to magnetic anomalies can be in the picotesla (pT) range. "Gaussmeters" and "teslameters" are magnetometers that measure in units of gauss or tesla, respectively. In some contexts, magnetometer is the term used for an instrument that measures fields of less than 1 millitesla (mT) and gaussmeter is used for those measuring greater than 1 mT.
Types of magnetometer.
There are two basic types of magnetometer measurement. "Vector magnetometers" measure the vector components of a magnetic field. "Total field magnetometers" or "scalar magnetometers" measure the magnitude of the vector magnetic field. Magnetometers used to study the Earth's magnetic field may express the vector components of the field in terms of "declination" (the angle between the horizontal component of the field vector and true, or geographic, north) and the "inclination" (the angle between the field vector and the horizontal surface).
"Absolute magnetometers" measure the absolute magnitude or vector magnetic field, using an internal calibration or known physical constants of the magnetic sensor. "Relative magnetometers" measure magnitude or vector magnetic field relative to a fixed but uncalibrated baseline. Also called "variometers", relative magnetometers are used to measure variations in magnetic field.
Magnetometers may also be classified by their situation or intended use. "Stationary magnetometers" are installed to a fixed position and measurements are taken while the magnetometer is stationary. "Portable" or "mobile magnetometers" are meant to be used while in motion and may be manually carried or transported in a moving vehicle. "Laboratory magnetometers" are used to measure the magnetic field of materials placed within them and are typically stationary. "Survey magnetometers" are used to measure magnetic fields in geomagnetic surveys; they may be fixed base stations, as in the INTERMAGNET network, or mobile magnetometers used to scan a geographic region.
Performance and capabilities.
The performance and capabilities of magnetometers are described through their technical specifications. Major specifications include
Early magnetometers.
The compass, consisting of a magnetized needle whose orientation changes in response to the ambient magnetic field, is a simple type of magnetometer, one that measures the direction of the field. The oscillation frequency of a magnetized needle is proportional to the square-root of the strength of the ambient magnetic field; so, for example, the oscillation frequency of the needle of a horizontally situated compass is proportional to the square-root of the horizontal intensity of the ambient field.
In 1833, Carl Friedrich Gauss, head of the Geomagnetic Observatory in Göttingen, published a paper on measurement of the Earth's magnetic field. It described a new instrument that consisted of a permanent bar magnet suspended horizontally from a gold fibre. The difference in the oscillations when the bar was magnetised and when it was demagnetised allowed Gauss to calculate an absolute value for the strength of the Earth's magnetic field.
The gauss, the CGS unit of magnetic flux density, was named in his honour; it is defined as one maxwell per square centimeter and equals 1×10⁻⁴ tesla (the SI unit).
Francis Ronalds and Charles Brooke independently invented magnetographs in 1846 that continuously recorded the magnet's movements using photography, thus easing the load on observers. They were quickly utilised by Edward Sabine and others in a global magnetic survey and updated machines were in use well into the 20th century.
Laboratory magnetometers.
Laboratory magnetometers measure the magnetization, also known as the magnetic moment, of a sample material. Unlike survey magnetometers, laboratory magnetometers require the sample to be placed inside the magnetometer, and often the temperature, magnetic field, and other parameters of the sample can be controlled. A sample's magnetization is primarily dependent on the ordering of unpaired electrons within its atoms, with smaller contributions from nuclear magnetic moments, Larmor diamagnetism, and other effects. Ordering of magnetic moments is primarily classified as diamagnetic, paramagnetic, ferromagnetic, or antiferromagnetic (although the zoology of magnetic ordering also includes ferrimagnetic, helimagnetic, toroidal, spin glass, etc.). Measuring the magnetization as a function of temperature and magnetic field can give clues as to the type of magnetic ordering, as well as any phase transitions between different types of magnetic orders that occur at critical temperatures or magnetic fields. This type of magnetometry measurement is very important to understand the magnetic properties of materials in physics, chemistry, geophysics and geology, as well as sometimes biology.
SQUID (superconducting quantum interference device).
SQUIDs are a type of magnetometer used both as survey and as laboratory magnetometers. SQUID magnetometry is an extremely sensitive absolute magnetometry technique. However SQUIDs are noise sensitive, making them impractical as laboratory magnetometers in high DC magnetic fields, and in pulsed magnets. Commercial SQUID magnetometers are available for temperatures between 300 mK and 400 K, and magnetic fields up to 7 tesla.
Inductive pickup coils.
Inductive pickup coils (also referred as inductive sensor) measure the magnetic dipole moment of a material by detecting the current induced in a coil due to the changing magnetic moment of the sample. The sample's magnetization can be changed by applying a small ac magnetic field (or a rapidly changing dc field), as occurs in capacitor-driven pulsed magnets. These measurements require differentiating between the magnetic field produced by the sample and that from the external applied field. Often a special arrangement of cancellation coils is used. For example, half of the pickup coil is wound in one direction, and the other half in the other direction, and the sample is placed in only one half. The external uniform magnetic field is detected by both halves of the coil, and since they are counter-wound, the external magnetic field produces no net signal.
VSM (vibrating-sample magnetometer).
Vibrating-sample magnetometers (VSMs) detect the dipole moment of a sample by mechanically vibrating the sample inside of an inductive pickup coil or inside of a SQUID coil. Induced current or changing flux in the coil is measured. The vibration is typically created by a motor or a piezoelectric actuator. Typically the VSM technique is about an order of magnitude less sensitive than SQUID magnetometry. VSMs can be combined with SQUIDs to create a system that is more sensitive than either one alone. Heat due to the sample vibration can limit the base temperature of a VSM, typically to 2 kelvin. VSM is also impractical for measuring a fragile sample that is sensitive to rapid acceleration.
Pulsed-field extraction magnetometry.
Pulsed-field extraction magnetometry is another method making use of pickup coils to measure magnetization. Unlike VSMs where the sample is physically vibrated, in pulsed-field extraction magnetometry, the sample is secured and the external magnetic field is changed rapidly, for example in a capacitor-driven magnet. One of multiple techniques must then be used to cancel out the external field from the field produced by the sample. These include counterwound coils that cancel the external uniform field and background measurements with the sample removed from the coil.
Torque magnetometry.
Magnetic torque magnetometry can be even more sensitive than SQUID magnetometry. However, magnetic torque magnetometry doesn't measure magnetism directly as all the previously mentioned methods do. Magnetic torque magnetometry instead measures the torque τ acting on a sample's magnetic moment μ as a result of a uniform magnetic field B, τ = μ × B. A torque is thus a measure of the sample's magnetic or shape anisotropy. In some cases the sample's magnetization can be extracted from the measured torque. In other cases, the magnetic torque measurement is used to detect magnetic phase transitions or quantum oscillations. The most common way to measure magnetic torque is to mount the sample on a cantilever and measure the displacement via capacitance measurement between the cantilever and nearby fixed object, or by measuring the piezoelectricity of the cantilever, or by optical interferometry off the surface of the cantilever.
Faraday force magnetometry.
Faraday force magnetometry uses the fact that a spatial magnetic field gradient produces force that acts on a magnetized object, F = (M⋅∇)B. In Faraday force magnetometry the force on the sample can be measured by a scale (hanging the sample from a sensitive balance), or by detecting the displacement against a spring. Commonly a capacitive load cell or cantilever is used because of its sensitivity, size, and lack of mechanical parts. Faraday force magnetometry is approximately one order of magnitude less sensitive than a SQUID. The biggest drawback to Faraday force magnetometry is that it requires some means of not only producing a magnetic field, but also producing a magnetic field gradient. While this can be accomplished by using a set of special pole faces, a much better result can be achieved by using set of gradient coils. A major advantage to Faraday force magnetometry is that it is small and reasonably tolerant to noise, and thus can be implemented in a wide range of environments, including a dilution refrigerator. Faraday force magnetometry can also be complicated by the presence of torque (see previous technique). This can be circumvented by varying the gradient field independently of the applied DC field so the torque and the Faraday force contribution can be separated, and/or by designing a Faraday force magnetometer that prevents the sample from being rotated.
Optical magnetometry.
Optical magnetometry makes use of various optical techniques to measure magnetization. One such technique, Kerr magnetometry makes use of the magneto-optic Kerr effect, or MOKE. In this technique, incident light is directed at the sample's surface. Light interacts with a magnetized surface nonlinearly so the reflected light has an elliptical polarization, which is then measured by a detector. Another method of optical magnetometry is Faraday rotation magnetometry. Faraday rotation magnetometry utilizes nonlinear magneto-optical rotation to measure a sample's magnetization. In this method a Faraday modulating thin film is applied to the sample to be measured and a series of images are taken with a camera that senses the polarization of the reflected light. To reduce noise, multiple pictures are then averaged together. One advantage to this method is that it allows mapping of the magnetic characteristics over the surface of a sample. This can be especially useful when studying such things as the Meissner effect on superconductors. Microfabricated optically pumped magnetometers (μOPMs) can be used to detect the origin of brain seizures more precisely and generate less heat than currently available superconducting quantum interference devices, better known as SQUIDs. The device works by using polarized light to control the spin of rubidium atoms which can be used to measure and monitor the magnetic field.
Survey magnetometers.
Survey magnetometers can be divided into two basic types:
A vector is a mathematical entity with both magnitude and direction. The Earth's magnetic field at a given point is a vector. A magnetic compass is designed to give a horizontal bearing direction, whereas a "vector magnetometer" measures both the magnitude and direction of the total magnetic field. Three orthogonal sensors are required to measure the components of the magnetic field in all three dimensions.
They are also rated as "absolute" if the strength of the field can be calibrated from their own known internal constants or "relative" if they need to be calibrated by reference to a known field.
A "magnetograph" is a magnetometer that continuously records data over time. This data is typically represented in magnetograms.
Magnetometers can also be classified as "AC" if they measure fields that vary relatively rapidly in time (>100 Hz), and "DC" if they measure fields that vary only slowly (quasi-static) or are static. AC magnetometers find use in electromagnetic systems (such as magnetotellurics), and DC magnetometers are used for detecting mineralisation and corresponding geological structures.
Scalar magnetometers.
Proton precession magnetometer.
"Proton precession magnetometer"s, also known as "proton magnetometers", PPMs or simply mags, measure the resonance frequency of protons (hydrogen nuclei) in the magnetic field to be measured, due to nuclear magnetic resonance (NMR). Because the precession frequency depends only on atomic constants and the strength of the ambient magnetic field, the accuracy of this type of magnetometer can reach 1 ppm.
A direct current flowing in a solenoid creates a strong magnetic field around a hydrogen-rich fluid (kerosene and decane are popular, and even water can be used), causing some of the protons to align themselves with that field. The current is then interrupted, and as protons realign themselves with the ambient magnetic field, they precess at a frequency that is directly proportional to the magnetic field. This produces a weak rotating magnetic field that is picked up by a (sometimes separate) inductor, amplified electronically, and fed to a digital frequency counter whose output is typically scaled and displayed directly as field strength or output as digital data.
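Because the precession frequency is proportional to the field through the proton gyromagnetic ratio, converting a counted frequency into a field value is a one-line calculation. The sketch below is purely illustrative and not tied to any particular instrument.

```python
import math

# Proton gyromagnetic ratio: about 2.675e8 rad s^-1 T^-1 (roughly 42.58 MHz per tesla).
GAMMA_P = 2.6752218744e8

def field_from_frequency(f_hz):
    """Return the ambient field in nanotesla for a proton precession frequency in hertz."""
    return 2 * math.pi * f_hz / GAMMA_P * 1e9

print(field_from_frequency(2130.0))   # roughly 50,000 nT, a typical Earth-field value
```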
For hand/backpack carried units, PPM sample rates are typically limited to less than one sample per second. Measurements are typically taken with the sensor held at fixed locations at approximately 10 metre increments.
Portable instruments are also limited by sensor volume (weight) and power consumption. PPMs work in field gradients up to 3,000 nT/m, which is adequate for most mineral exploration work. For higher gradient tolerance, such as mapping banded iron formations and detecting large ferrous objects, Overhauser magnetometers can handle 10,000 nT/m, and caesium magnetometers can handle 30,000 nT/m.
They are relatively inexpensive (< US$8,000) and were once widely used in mineral exploration. Three manufacturers dominate the market: GEM Systems, Geometrics and Scintrex. Popular models include G-856/857, Smartmag, GSM-18, and GSM-19T.
For mineral exploration, they have been superseded by Overhauser, caesium, and potassium instruments, all of which are fast-cycling, and do not require the operator to pause between readings.
Overhauser effect magnetometer.
The "Overhauser effect magnetometer" or "Overhauser magnetometer" uses the same fundamental effect as the "proton precession magnetometer" to take measurements. By adding free radicals to the measurement fluid, the nuclear Overhauser effect can be exploited to significantly improve upon the proton precession magnetometer. Rather than aligning the protons using a solenoid, a low power radio-frequency field is used to align (polarise) the electron spin of the free radicals, which then couples to the protons via the Overhauser effect. This has two main advantages: driving the RF field takes a fraction of the energy (allowing lighter-weight batteries for portable units), and faster sampling as the electron-proton coupling can happen even as measurements are being taken. An Overhauser magnetometer produces readings with a 0.01 nT to 0.02 nT standard deviation while sampling once per second.
Caesium vapour magnetometer.
The "optically pumped caesium vapour magnetometer" is a highly sensitive (300 fT/Hz0.5) and accurate device used in a wide range of applications. It is one of a number of alkali vapours (including rubidium and potassium) that are used in this way.
The device broadly consists of a photon emitter, such as a laser, an absorption chamber containing caesium vapour mixed with a "buffer gas" through which the emitted photons pass, and a photon detector, arranged in that order. The buffer gas is usually helium or nitrogen and they are used to reduce collisions between the caesium vapour atoms.
The basic principle that allows the device to operate is the fact that a caesium atom can exist in any of nine energy levels, which can be informally thought of as the placement of electron atomic orbitals around the atomic nucleus. When a caesium atom within the chamber encounters a photon from the laser, it is excited to a higher energy state, emits a photon and falls to an indeterminate lower energy state. The caesium atom is "sensitive" to the photons from the laser in three of its nine energy states, and therefore, assuming a closed system, all the atoms eventually fall into a state in which all the photons from the laser pass through unhindered and are measured by the photon detector. The caesium vapour has become transparent. This process happens continuously to maintain as many of the electrons as possible in that state.
At this point, the sample (or population) is said to have been optically pumped and ready for measurement to take place. When an external field is applied it disrupts this state and causes atoms to move to different states which makes the vapour less transparent. The photo detector can measure this change and therefore measure the magnitude of the magnetic field.
In the most common type of caesium magnetometer, a very small AC magnetic field is applied to the cell. Since the difference in the energy levels of the electrons is determined by the external magnetic field, there is a frequency at which this small AC field makes the electrons change states. In this new state, the electrons once again can absorb a photon of light. This causes a signal on a photo detector that measures the light passing through the cell. The associated electronics use this fact to create a signal exactly at the frequency that corresponds to the external field.
Another type of caesium magnetometer modulates the light applied to the cell. This is referred to as a Bell-Bloom magnetometer, after the two scientists who first investigated the effect. If the light is turned on and off at the frequency corresponding to the Earth's field, there is a change in the signal seen at the photo detector. Again, the associated electronics use this to create a signal exactly at the frequency that corresponds to the external field. Both methods lead to high performance magnetometers.
Potassium vapour magnetometer.
Potassium is the only optically pumped magnetometer that operates on a single, narrow electron spin resonance (ESR) line in contrast to other alkali vapour magnetometers that use irregular, composite and wide spectral lines and helium with the inherently wide spectral line.
Metastable helium-4 scalar magnetometer.
Magnetometers based on helium-4 excited to its metastable triplet state thanks to a plasma discharge have been developed in the 1960s and 70s by Texas Instruments, then by its spinoff Polatomic, and from late 1980s by . The latter pioneered a configuration which cancels the dead-zones, which are a recurrent problem of atomic magnetometers. This configuration was demonstrated to show an accuracy of 50 pT in orbit operation. The ESA chose this technology for the Swarm mission, which was launched in 2013. An experimental vector mode, which could compete with fluxgate magnetometers was tested in this mission with overall success.
Applications.
The caesium and potassium magnetometers are typically used where a higher performance magnetometer than the proton magnetometer is needed. In archaeology and geophysics, where the sensor sweeps through an area and many accurate magnetic field measurements are often needed, caesium and potassium magnetometers have advantages over the proton magnetometer.
The caesium and potassium magnetometer's faster measurement rate allows the sensor to be moved through the area more quickly for a given number of data points. Caesium and potassium magnetometers are insensitive to rotation of the sensor while the measurement is being made.
The lower noise of caesium and potassium magnetometers allow those measurements to more accurately show the variations in the field with position.
Vector magnetometers.
Vector magnetometers measure one or more components of the magnetic field electronically. Using three orthogonal magnetometers, both azimuth and dip (inclination) can be measured. By taking the square root of the sum of the squares of the components the total magnetic field strength (also called total magnetic intensity, TMI) can be calculated by the Pythagorean theorem.
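A minimal sketch of that calculation, assuming a north-east-down axis convention for the three components (the convention and example values are assumptions):

```python
import math

def field_from_components(bx, by, bz):
    """Total intensity, declination and inclination from orthogonal components.
    Convention assumed here: x = geographic north, y = east, z = down (all in nT)."""
    horizontal = math.hypot(bx, by)
    tmi = math.hypot(horizontal, bz)                          # sqrt(bx^2 + by^2 + bz^2)
    declination = math.degrees(math.atan2(by, bx))            # angle east of true north
    inclination = math.degrees(math.atan2(bz, horizontal))    # dip below the horizontal
    return tmi, declination, inclination

# Example reading, broadly typical of mid-latitude values in nT.
print(field_from_components(22_000.0, 1_500.0, 42_000.0))
```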
Vector magnetometers are subject to temperature drift and the dimensional instability of the ferrite cores. They also require leveling to obtain component information, unlike total field (scalar) instruments. For these reasons they are no longer used for mineral exploration.
Rotating coil magnetometer.
The magnetic field induces a sine wave in a rotating coil. The amplitude of the signal is proportional to the strength of the field, provided it is uniform, and to the sine of the angle between the rotation axis of the coil and the field lines. This type of magnetometer is obsolete.
Hall effect magnetometer.
The most common magnetic sensing devices are solid-state Hall effect sensors. These sensors produce a voltage proportional to the applied magnetic field and also sense polarity. They are used in applications where the magnetic field strength is relatively large, such as in anti-lock braking systems in cars, which sense wheel rotation speed via slots in the wheel disks.
Magnetoresistive devices.
These are made of thin strips of Permalloy, a high magnetic permeability, nickel-iron alloy, whose electrical resistance varies with a change in magnetic field. They have a well-defined axis of sensitivity, can be produced in 3-D versions and can be mass-produced as an integrated circuit. They have a response time of less than 1 microsecond and can be sampled in moving vehicles up to 1,000 times/second. They can be used in compasses that read within 1°, for which the underlying sensor must reliably resolve 0.1°.
Fluxgate magnetometer.
The fluxgate magnetometer was invented by H. Aschenbrenner and G. Goubau in 1936. A team at Gulf Research Laboratories led by Victor Vacquier developed airborne fluxgate magnetometers to detect submarines during World War II and after the war confirmed the theory of plate tectonics by using them to measure shifts in the magnetic patterns on the sea floor.
A fluxgate magnetometer consists of a small magnetically susceptible core wrapped by two coils of wire. An alternating electric current is passed through one coil, driving the core through an alternating cycle of magnetic saturation; i.e., magnetised, unmagnetised, inversely magnetised, unmagnetised, magnetised, and so forth. This constantly changing field induces a voltage in the second coil which is measured by a detector. In a magnetically neutral background, the input and output signals match. However, when the core is exposed to a background field, it is more easily saturated in alignment with that field and less easily saturated in opposition to it. Hence the alternating magnetic field and the induced output voltage, are out of step with the input current. The extent to which this is the case depends on the strength of the background magnetic field. Often, the signal in the output coil is integrated, yielding an output analog voltage proportional to the magnetic field.
A wide variety of sensors are currently available and used to measure magnetic fields. Fluxgate compasses and gradiometers measure the direction and magnitude of magnetic fields. Fluxgates are affordable, rugged and compact with miniaturization recently advancing to the point of complete sensor solutions in the form of IC chips, including examples from both academia and industry. This, plus their typically low power consumption makes them ideal for a variety of sensing applications. Gradiometers are commonly used for archaeological prospecting and unexploded ordnance (UXO) detection such as the German military's popular "Foerster".
The typical fluxgate magnetometer consists of a "sense" (secondary) coil surrounding an inner "drive" (primary) coil that is closely wound around a highly permeable core material, such as mu-metal or permalloy. An alternating current is applied to the drive winding, which drives the core in a continuous repeating cycle of saturation and unsaturation. To an external field, the core is alternately weakly permeable and highly permeable. The core is often a toroidally wrapped ring or a pair of linear elements whose drive windings are each wound in opposing directions. Such closed flux paths minimise coupling between the drive and sense windings. In the presence of an external magnetic field, with the core in a highly permeable state, such a field is locally attracted or gated (hence the name fluxgate) through the sense winding. When the core is weakly permeable, the external field is less attracted. This continuous gating of the external field in and out of the sense winding induces a signal in the sense winding, whose principal frequency is twice that of the drive frequency, and whose strength and phase orientation vary directly with the external-field magnitude and polarity.
There are additional factors that affect the size of the resultant signal. These factors include the number of turns in the sense winding, magnetic permeability of the core, sensor geometry, and the gated flux rate of change with respect to time.
Phase synchronous detection is used to extract these harmonic signals from the sense winding and convert them into a DC voltage proportional to the external magnetic field. Active current feedback may also be employed, such that the sense winding is driven to counteract the external field. In such cases, the feedback current varies linearly with the external magnetic field and is used as the basis for measurement. This helps to counter inherent non-linearity between the applied external field strength and the flux gated through the sense winding.
SQUID magnetometer.
SQUIDs, or superconducting quantum interference devices, measure extremely small changes in magnetic fields. They are very sensitive vector magnetometers, with noise levels as low as 3 fT/√Hz in commercial instruments and 0.4 fT/√Hz in experimental devices. Many liquid-helium-cooled commercial SQUIDs achieve a flat noise spectrum from near DC (less than 1 Hz) to tens of kilohertz, making such devices ideal for time-domain biomagnetic signal measurements. SERF atomic magnetometers demonstrated in laboratories so far reach a competitive noise floor but in relatively small frequency ranges.
SQUID magnetometers require cooling with liquid helium () or liquid nitrogen () to operate, hence the packaging requirements to use them are rather stringent both from a thermal-mechanical as well as magnetic standpoint. SQUID magnetometers are most commonly used to measure the magnetic fields produced by laboratory samples, also for brain or heart activity (magnetoencephalography and magnetocardiography, respectively). Geophysical surveys use SQUIDs from time to time, but the logistics of cooling the SQUID are much more complicated than other magnetometers that operate at room temperature.
Zero-field optically-pumped magnetometers.
Magnetometers based on atomic gasses can perform vector measurements of the magnetic field in the low field regime, where the decay of the atomic coherence becomes faster than the Larmor frequency. The physics of such magnetometers is based on the Hanle effect. Such zero-field optically pumped magnetometers have been tested in various configurations and with different atomic species, notably alkali (potassium, rubidium and cesium), helium and mercury. For the case of alkali, the coherence times were greatly limited due to spin-exchange relaxation. A major breakthrough happened at the beginning of the 2000 decade, Romalis group in Princeton demonstrated that in such a low field regime, alkali coherence times can be greatly enhanced if a high enough density can be reached by high temperature heating, this is the so-called SERF effect.
The main interest of optically-pumped magnetometers is to replace SQUID magnetometers in applications where cryogenic cooling is a drawback. This is notably the case of medical imaging where such cooling imposes a thick thermal insulation, strongly affecting the amplitude of the recorded biomagnetic signals. Several startup companies are currently developing optically pumped magnetometers for biomedical applications: those of TwinLeaf, quSpin and FieldLine being based on alkali vapors, and those of Mag4Health on metastable helium-4.
Spin-exchange relaxation-free (SERF) atomic magnetometers.
At sufficiently high atomic density, extremely high sensitivity can be achieved. Spin-exchange-relaxation-free (SERF) atomic magnetometers containing potassium, caesium, or rubidium vapor operate similarly to the caesium magnetometers described above, yet can reach sensitivities lower than 1 fT/√Hz. The SERF magnetometers only operate in small magnetic fields. The Earth's field is about 50 μT; SERF magnetometers operate in fields less than 0.5 μT.
Large volume detectors have achieved a sensitivity of 200 aT/√Hz. This technology has greater sensitivity per unit volume than SQUID detectors. The technology can also produce very small magnetometers that may in the future replace coils for detecting radio-frequency magnetic fields. This technology may produce a magnetic sensor that has all of its input and output signals in the form of light on fiber-optic cables. This lets the magnetic measurement be made near high electrical voltages.
Calibration of magnetometers.
The calibration of magnetometers is usually performed by means of coils which are supplied with an electrical current to create a magnetic field. This allows the sensitivity of the magnetometer (in terms of V/T) to be characterized. In many applications the homogeneity of the calibration coil is an important feature. For this reason, coils like Helmholtz coils are commonly used, either in a single-axis or a three-axis configuration. For demanding applications a highly homogeneous magnetic field is mandatory; in such cases magnetic field calibration can be performed using a Maxwell coil, cosine coils, or calibration in the highly homogeneous Earth's magnetic field.
Uses.
Magnetometers have a very diverse range of applications, including locating objects such as submarines, sunken ships, hazards for tunnel boring machines, hazards in coal mines, unexploded ordnance, toxic waste drums, as well as a wide range of mineral deposits and geological structures. They also have applications in heart beat monitors, weapon systems positioning, sensors in anti-locking brakes, weather prediction (via solar cycles), steel pylons, drill guidance systems, archaeology, plate tectonics and radio wave propagation and planetary exploration. Laboratory magnetometers determine the magnetic dipole moment of a magnetic sample, typically as a function of temperature, magnetic field, or other parameter. This helps to reveal its magnetic properties such as ferromagnetism, antiferromagnetism, superconductivity, or other properties that affect magnetism.
Depending on the application, magnetometers can be deployed in spacecraft, aeroplanes ("fixed wing" magnetometers), helicopters ("stinger" and "bird"), on the ground ("backpack"), towed at a distance behind quad bikes (ATVs) on a "sled" or "trailer", lowered into boreholes ("tool", "probe" or "sonde") and towed behind boats ("tow fish").
Mechanical stress measurement.
Magnetometers are used to measure or monitor mechanical stress in ferromagnetic materials. Mechanical stress improves the alignment of magnetic domains at the microscopic scale, which raises the magnetic field measured close to the material by magnetometers. There are different hypotheses about the stress-magnetisation relationship. However, the effect of mechanical stress on the magnetic field measured near the specimen is claimed to be proven in many scientific publications. There have been efforts to solve the inverse problem of magnetisation-stress resolution in order to quantify the stress based on the measured magnetic field.
Accelerator physics.
Magnetometers are used extensively in experimental particle physics to measure the magnetic field of pivotal components such as beam-concentrating or beam-focusing magnets.
Archaeology.
Magnetometers are also used to detect archaeological sites, shipwrecks, and other buried or submerged objects. Fluxgate gradiometers are popular due to their compact configuration and relatively low cost. Gradiometers enhance shallow features and negate the need for a base station. Caesium and Overhauser magnetometers are also very effective when used as gradiometers or as single-sensor systems with base stations.
The TV program "Time Team" popularised 'geophys', including magnetic techniques used in archaeological work to detect fire hearths, walls of baked bricks and magnetic stones such as basalt and granite. Walking tracks and roadways can sometimes be mapped with differential compaction in magnetic soils or with disturbances in clays, such as on the Great Hungarian Plain. Ploughed fields behave as sources of magnetic noise in such surveys.
Auroras.
Magnetometers can give an indication of auroral activity before the light from the aurora becomes visible. A grid of magnetometers around the world constantly measures the effect of the solar wind on the Earth's magnetic field, which is then published on the K-index.
Coal exploration.
While magnetometers can be used to help map basin shape at a regional scale, they are more commonly used to map hazards to coal mining, such as basaltic intrusions (dykes, sills, and volcanic plug) that destroy resources and are dangerous to longwall mining equipment. Magnetometers can also locate zones ignited by lightning and map siderite (an impurity in coal).
The best survey results are achieved on the ground in high-resolution surveys (with approximately 10 m line spacing and 0.5 m station spacing). Bore-hole magnetometers using a Ferret can also assist when coal seams are deep, by using multiple sills or looking beneath surface basalt flows.
Modern surveys generally use magnetometers with GPS technology to automatically record the magnetic field and their location. The data set is then corrected with data from a second magnetometer (the base station) that is left stationary and records the change in the Earth's magnetic field during the survey.
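One simple way such a base-station (diurnal) correction can be applied is to subtract the base station's drift, interpolated to the times of the rover readings; the function and the example values below are an illustrative sketch rather than any particular vendor's processing chain.

```python
import numpy as np

def diurnal_correction(rover_readings, rover_times, base_readings, base_times, base_datum):
    """Remove the time variation of the Earth's field recorded at a stationary
    base station from a mobile (rover) survey:
    corrected = rover - (base at the same time - chosen base datum)."""
    base_at_rover_times = np.interp(rover_times, base_times, base_readings)
    return rover_readings - (base_at_rover_times - base_datum)

# Illustrative numbers (nT and seconds): the base drifts by a few nT over the survey.
base_times = np.array([0.0, 600.0, 1200.0])
base_readings = np.array([50_000.0, 50_012.0, 50_005.0])
rover_times = np.array([100.0, 700.0, 1100.0])
rover_readings = np.array([50_230.0, 49_870.0, 50_410.0])
print(diurnal_correction(rover_readings, rover_times, base_readings, base_times,
                         base_datum=base_readings[0]))
```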
Directional drilling.
Magnetometers are used in directional drilling for oil or gas to detect the azimuth of the drilling tools near the drill. They are most often paired with accelerometers in drilling tools so that both the inclination and azimuth of the drill can be found.
Military.
For defensive purposes, navies use arrays of magnetometers laid across sea floors in strategic locations (i.e. around ports) to monitor submarine activity. The Russian Alfa-class titanium submarines were designed and built at great expense to thwart such systems (as pure titanium is non-magnetic).
Military submarines are degaussed—by passing through large underwater loops at regular intervals—to help them escape detection by sea-floor monitoring systems, magnetic anomaly detectors, and magnetically-triggered mines. However, submarines are never completely de-magnetised. It is possible to tell the depth at which a submarine has been by measuring its magnetic field, which is distorted as the pressure distorts the hull and hence the field. Heating can also change the magnetization of steel.
Submarines tow long sonar arrays to detect ships, and can even recognise different propeller noises. The sonar arrays need to be accurately positioned so they can triangulate direction to targets (e.g. ships). The arrays do not tow in a straight line, so fluxgate magnetometers are used to orient each sonar node in the array.
Fluxgates can also be used in weapons navigation systems, but have been largely superseded by GPS and ring laser gyroscopes.
Magnetometers such as the German Foerster are used to locate ferrous ordnance. Caesium and Overhauser magnetometers are used to locate and help clean up old bombing and test ranges.
UAV payloads also include magnetometers for a range of defensive and offensive tasks.
Mineral exploration.
Magnetometric surveys can be useful in defining magnetic anomalies which represent ore (direct detection), or in some cases gangue minerals associated with ore deposits (indirect or inferential detection). This includes iron ore, magnetite, hematite, and often pyrrhotite.
Developed countries such as Australia, Canada and the USA invest heavily in systematic airborne magnetic surveys of their respective continents and surrounding oceans, to assist with geological mapping and the discovery of mineral deposits. Such aeromag surveys are typically undertaken with 400 m line spacing at 100 m elevation, with readings every 10 meters or more. To overcome the asymmetry in the data density, data is interpolated between lines (usually 5 times) and data along the line is then averaged. Such data is gridded to an 80 m × 80 m pixel size and image processed using a program like ERMapper. At an exploration lease scale, the survey may be followed by a more detailed helimag or crop-duster style fixed-wing survey at 50 m line spacing and 50 m elevation (terrain permitting). Such an image is gridded on a 10 m × 10 m pixel, offering 64 times the resolution.
Where targets are shallow (<200 m), aeromag anomalies may be followed up with ground magnetic surveys on 10 m to 50 m line spacing with 1 m station spacing to provide the best detail (2 to 10 m pixel grid) (or 25 times the resolution prior to drilling).
Magnetic fields from magnetic bodies of ore fall off with the inverse distance cubed (dipole target), or at best inverse distance squared (magnetic monopole target). One analogy to the resolution-with-distance is a car driving at night with lights on. At a distance of 400 m one sees one glowing haze, but as it approaches, two headlights, and then the left blinker, are visible.
There are many challenges interpreting magnetic data for mineral exploration. Multiple targets mix together like multiple heat sources and, unlike light, there is no magnetic telescope to focus fields. The combination of multiple sources is measured at the surface. The geometry, depth, or magnetisation direction (remanence) of the targets are also generally not known, and so multiple models can explain the data.
Potent by Geophysical Software Solutions is a leading magnetic (and gravity) interpretation package used extensively in the Australian exploration industry.
Magnetometers assist mineral explorers both directly (i.e., gold mineralisation associated with magnetite, diamonds in kimberlite pipes) and, more commonly, indirectly, such as by mapping geological structures conducive to mineralisation (i.e., shear zones and alteration haloes around granites).
Airborne Magnetometers detect the change in the Earth's magnetic field using sensors attached to the aircraft in the form of a "stinger" or by towing a magnetometer on the end of a cable. The magnetometer on a cable is often referred to as a "bomb" because of its shape. Others call it a "bird".
Because hills and valleys under the aircraft make the magnetic readings rise and fall, a radar altimeter keeps track of the transducer's deviation from the nominal altitude above ground. There may also be a camera that takes photos of the ground. The location of the measurement is determined by also recording a GPS.
Mobile phones.
Many smartphones contain miniaturized microelectromechanical systems (MEMS) magnetometers which are used to detect magnetic field strength and are used as compasses. The iPhone 3GS has a magnetometer, a magnetoresistive permalloy sensor, the AN-203 produced by Honeywell. In 2009, the price of three-axis magnetometers dipped below US$1 per device and dropped rapidly. The use of a three-axis device means that it is not sensitive to the way it is held in orientation or elevation. Hall effect devices are also popular.
Researchers at Deutsche Telekom have used magnetometers embedded in mobile devices to permit touchless 3D interaction. Their interaction framework, called MagiTact, tracks changes to the magnetic field around a cellphone to identify different gestures made by a hand holding or wearing a magnet.
Oil exploration.
Seismic methods are preferred to magnetometers as the primary survey method for oil exploration although magnetic methods can give additional information about the underlying geology and in some environments evidence of leakage from traps. Magnetometers are also used in oil exploration to show locations of geologic features that make drilling impractical, and other features that give geophysicists a more complete picture of stratigraphy.
Spacecraft.
A three-axis fluxgate magnetometer was part of the "Mariner 2" and "Mariner 10" missions. A dual-technique magnetometer is part of the "Cassini–Huygens" mission to explore Saturn. This system is composed of a vector helium magnetometer and a fluxgate magnetometer. Magnetometers were also a component instrument on the Mercury "MESSENGER" mission. A magnetometer can also be used by satellites like GOES to measure both the magnitude and direction of the magnetic field of a planet or moon.
Magnetic surveys.
Systematic surveys can be used in searching for mineral deposits or locating lost objects. Such surveys are divided into:
Aeromag datasets for Australia can be downloaded from the GADDS database.
Data can be divided into point-located data and image data, the latter of which is in ERMapper format.
Magnetovision.
On the basis of the spatially measured distribution of magnetic field parameters (e.g. amplitude or direction), magnetovision images may be generated. Such a presentation of magnetic data is very useful for further analysis and data fusion.
Gradiometer.
Magnetic gradiometers are pairs of magnetometers with their sensors separated, usually horizontally, by a fixed distance. The readings are subtracted to measure the difference between the sensed magnetic fields, which gives the field gradients caused by magnetic anomalies. This is one way of compensating both for the variability in time of the Earth's magnetic field and for other sources of electromagnetic interference, thus allowing for more sensitive detection of anomalies. Because nearly equal values are being subtracted, the noise performance requirements for the magnetometers are more extreme.
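As a minimal sketch (with made-up readings), the subtraction a gradiometer performs looks like this; the regional field and its time-varying drift are common to both sensors and cancel, leaving the gradient of the local anomaly.
```python
# Hypothetical readings from two sensor heads separated vertically by 1 m.
regional_field = 50_000.0 + 12.0            # nT: Earth's field plus a drift term
anomaly_lower, anomaly_upper = 35.0, 20.0   # nT from a shallow local source
separation = 1.0                            # m between the sensor heads

reading_lower = regional_field + anomaly_lower
reading_upper = regional_field + anomaly_upper

gradient = (reading_lower - reading_upper) / separation   # nT/m
print(f"measured gradient: {gradient:.1f} nT/m (the regional/drift term cancels)")
```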
Gradiometers enhance shallow magnetic anomalies and are thus good for archaeological and site investigation work. They are also good for real-time work such as unexploded ordnance (UXO) location. It is twice as efficient to run a base station and use two (or more) mobile sensors to read parallel lines simultaneously (assuming data is stored and post-processed). In this manner, both along-line and cross-line gradients can be calculated.
Position control of magnetic surveys.
In traditional mineral exploration and archaeological work, grid pegs placed by theodolite and tape measure were used to define the survey area. Some UXO surveys used ropes to define the lanes. Airborne surveys used radio triangulation beacons, such as Siledus.
Non-magnetic electronic hipchain triggers were developed to trigger magnetometers. They used rotary shaft encoders to measure distance along disposable cotton reels.
Modern explorers use a range of low-magnetic signature GPS units, including real-time kinematic GPS.
Heading errors in magnetic surveys.
Magnetic surveys can suffer from noise coming from a range of sources. Different magnetometer technologies suffer different kinds of noise problems.
Heading errors are one group of noise. They can come from three sources:
Some total field sensors give different readings depending on their orientation. Magnetic materials in the sensor itself are the primary cause of this error. In some magnetometers, such as the vapor magnetometers (caesium, potassium, etc.), there are sources of heading error in the physics that contribute small amounts to the total heading error.
Console noise comes from magnetic components on or within the console. These include ferrite cores in inductors and transformers, steel frames around LCDs, legs on IC chips and steel cases in disposable batteries. Some popular MIL-spec connectors also have steel springs.
Operators must take care to be magnetically clean and should check the 'magnetic hygiene' of all apparel and items carried during a survey. Akubra hats are very popular in Australia, but their steel rims must be removed before use on magnetic surveys. Steel rings on notepads, steel capped boots and steel springs in overall eyelets can all cause unnecessary noise in surveys. Pens, mobile phones and stainless steel implants can also be problematic.
The magnetic response (noise) from ferrous objects on the operator and console can change with heading direction because of induction and remanence. Aeromagnetic survey aircraft and quad bike systems can use special compensators to correct for heading error noise.
Heading errors look like herringbone patterns in survey images. Alternate lines can also be corrugated.
Image processing of magnetic data.
Recording data for later image processing is superior to real-time work because subtle anomalies often missed by the operator (especially in magnetically noisy areas) can be correlated between lines, and shapes and clusters can be better defined. A range of sophisticated enhancement techniques can also be used. In addition, a hard copy is produced and systematic coverage is ensured.
Aircraft navigation.
The Magnetometer Navigation (MAGNAV) algorithm was initially run as a flight experiment in 2004. Later, diamond magnetometers were developed by the United States Air Force Research Laboratory (AFRL) as a navigation method that cannot be jammed by an adversary.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rm{nT}/\\sqrt{\\rm{Hz}}"
}
] | https://en.wikipedia.org/wiki?curid=83060 |
8307380 | Superfluid film | Thin layer of liquid in a superfluid state
Superfluidity is a phenomenon where a fluid, or a fraction of a fluid, loses all its viscosity and can flow without resistance. A superfluid film is the thin film it may then form as a result.
Superfluid helium, for example, forms a 30-nanometre film on the surface of any container. The film's properties cause the helium to climb the walls of the container and, if this is not closed, flow out.
Superfluidity, like superconductivity, is a macroscopic manifestation of quantum mechanics. There is considerable interest, both theoretical and practical, in these quantum phase transitions. There has been a tremendous amount of work done in the field of phase transitions and critical phenomena in two dimensions. Much of the interest in this field is because as the number of dimensions increases, the number of exactly solvable models diminishes drastically. In three or more dimensions one must resort to a mean field theory approach. The theory of superfluid transitions in two dimensions is known as the Kosterlitz-Thouless (KT) theory. The 2D XY model - where the order parameter is characterized by an amplitude and a phase - is the universality class for this transition.
Experimental techniques.
Maximising the film's area.
In looking at phase transitions in thin films, specifically helium, the two main experimental signatures are the superfluid fraction and the heat capacity. If either of these measurements were done on a superfluid film in a typical open container, the film signal would be overwhelmed by the background signal from the container. Therefore, when studying superfluid films, it is of paramount importance to study a system of large surface area so as to enhance the film signal.
There are several ways of doing this. In the first, a long thin strip of material such as PET film is rolled up into a "jelly roll" configuration. The result is a film that is a long continuous plane, referred to as a planar film. A second way is to have a highly porous material such as porous gold, Vycor, or Aerogel. This results in a multiply connected film where the substrate is much like Swiss cheese with the holes interconnected. These porous materials all have an extremely high surface area to volume ratio. A third method is to separate two extremely flat plates by a thin spacer, again resulting in a large surface area to volume ratio.
Measuring the superfluid fraction.
The film's superfluid response can be measured by using a "torsional oscillator" to measure the moment of inertia of a cell containing it. The oscillator comprises a torsion rod to which the cell is attached, together with an arrangement for oscillating the cell at its resonant frequency around the rod's axis. A higher resonant frequency corresponds to a lower moment of inertia.
Any superfluid fraction of the film loses its viscosity, and therefore doesn't participate in the oscillations. This means it no longer contributes to the cell's moment of inertia, and the resonant frequency increases.
The oscillation is achieved via capacitive coupling with a fin or pair of fins, depending on the configuration. (The arrangement in the diagram uses one fin, shown in grey.)
A torsional oscillator was first used by Andronikashvili to detect superfluid in bulk fluid 4He, and the design was later modified by John Reppy and co-workers at Cornell in the 1970s.
Recall that the resonant period of a torsional oscillator is formula_0. Therefore, lowering the moment of inertia reduces the resonant period of the oscillator. By measuring the period drop as a function of temperature, together with the total period shift due to loading the film (relative to the empty-cell value), one can deduce the fraction of the film that has entered the superfluid state. A typical set of data clearly showing the superfluid decoupling in helium films is shown in ref. 2.
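As a rough illustration of that procedure, the sketch below (with invented periods rather than measured data) converts the measured period shifts into a superfluid fraction.
```python
def superfluid_fraction(P_empty: float, P_loaded: float, P_T: float) -> float:
    """Fraction of the film mass that has decoupled from the oscillator.

    P_empty  : period of the empty cell
    P_loaded : period with the film adsorbed and fully normal
    P_T      : period measured at temperature T, below the transition
    """
    film_loading = P_loaded - P_empty   # period increase due to the whole film
    decoupled = P_loaded - P_T          # period drop due to the superfluid part
    return decoupled / film_loading

# Illustrative periods in microseconds (not experimental values):
print(superfluid_fraction(P_empty=2000.000, P_loaded=2000.500, P_T=2000.350))  # ~0.3
```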
Measurements at higher velocities.
A typical torsional oscillator has a resonant frequency on the order of 1000 Hz. This corresponds to a maximum velocity of the substrate of micrometres per second. The critical velocity of helium films is reported to be on the order of 0.1 m/s. Therefore, in comparison to the critical velocity, the oscillator is almost at rest. To probe theories of dynamical aspects of thin film phase transitions one must use an oscillator with a much higher frequency. The quartz crystal microbalance provides just such a tool, having a resonant frequency of about 10 kHz. The operating principles are much the same as for a torsional oscillator. When the thin film is adsorbed onto the surface of the crystal, the resonant frequency of the quartz crystal drops. As the crystal is cooled through the superfluid transition, the superfluid decouples and the frequency increases.
Some results.
The KT theory has been confirmed in a set of experiments by Bishop and Reppy in planar films, i.e. helium films on Mylar. Specifically, they found that the transition temperature scaled with film thickness and that the superfluid transition is found in films as thin as 5% of a monolayer. More recently, it has been found that near the transition temperature, when the correlation lengths exceed any relevant length scale in the system, a multiply connected film will behave as a 3D system near its critical point.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\pi\\sqrt{m/k}"
}
] | https://en.wikipedia.org/wiki?curid=8307380 |
830776 | Discrete valuation | In mathematics, a discrete valuation is an integer valuation on a field "K"; that is, a function:
formula_0
satisfying the conditions:
formula_1
formula_2
formula_3
for all formula_4.
Note that often the trivial valuation which takes on only the values formula_5 is explicitly excluded.
A field with a non-trivial discrete valuation is called a discrete valuation field.
Discrete valuation rings and valuations on fields.
To every field formula_6 with discrete valuation formula_7 we can associate the subring
formula_8
of formula_6, which is a discrete valuation ring. Conversely, the valuation formula_9 on a discrete valuation ring formula_10 can be extended in a unique way to a discrete valuation on the quotient field formula_11; the associated discrete valuation ring formula_12 is just formula_10.
Examples.
The most important example is the formula_13-adic valuation on the field of rational numbers formula_19, for a fixed prime number formula_13: every non-zero formula_14 can be written as formula_15 with formula_16, where neither of the integers formula_17 is divisible by formula_13; one then sets formula_18.
Another example comes from complex analysis: let formula_20 be a Riemann surface and formula_21 its field of meromorphic functions formula_22, and fix a point formula_23. For a non-zero function formula_28 one sets formula_24, where formula_25 is the largest integer such that formula_26 extends to a holomorphic function at formula_23. Thus formula_27 means that formula_28 has a zero of order formula_25 at formula_23, while formula_29 means that formula_28 has a pole of order formula_30 at formula_23.
More examples can be found in the article on discrete valuation rings.
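As an illustration of the first example, here is a small sketch computing the formula_13-adic valuation of a rational number; the function name and test values are chosen only for demonstration.
```python
from fractions import Fraction
import math

def padic_valuation(x: Fraction, p: int):
    """Return j such that x = p**j * a/b with a, b coprime to p (infinity for x = 0)."""
    if x == 0:
        return math.inf
    j = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:      # powers of p in the numerator raise the valuation
        num //= p
        j += 1
    while den % p == 0:      # powers of p in the denominator lower it
        den //= p
        j -= 1
    return j

print(padic_valuation(Fraction(12, 5), 2))   # 12/5 = 2^2 * 3/5  ->  2
print(padic_valuation(Fraction(7, 8), 2))    # 7/8  = 2^-3 * 7   -> -3
print(padic_valuation(Fraction(3, 4), 5))    # no factor of 5    ->  0
```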
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\nu:K\\to\\mathbb Z\\cup\\{\\infty\\}"
},
{
"math_id": 1,
"text": "\\nu(x\\cdot y)=\\nu(x)+\\nu(y)"
},
{
"math_id": 2,
"text": "\\nu(x+y)\\geq\\min\\big\\{\\nu(x),\\nu(y)\\big\\}"
},
{
"math_id": 3,
"text": "\\nu(x)=\\infty\\iff x=0"
},
{
"math_id": 4,
"text": "x,y\\in K"
},
{
"math_id": 5,
"text": "0,\\infty"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "\\nu"
},
{
"math_id": 8,
"text": "\\mathcal{O}_K := \\left\\{ x \\in K \\mid \\nu(x) \\geq 0 \\right\\}"
},
{
"math_id": 9,
"text": "\\nu: A \\rightarrow \\Z\\cup\\{\\infty\\}"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "K=\\text{Quot}(A)"
},
{
"math_id": 12,
"text": "\\mathcal{O}_K"
},
{
"math_id": 13,
"text": "p"
},
{
"math_id": 14,
"text": "x \\in \\mathbb{Q}"
},
{
"math_id": 15,
"text": "x = p^j\\frac{a}{b}"
},
{
"math_id": 16,
"text": "j, a,b \\in \\Z"
},
{
"math_id": 17,
"text": "a,b"
},
{
"math_id": 18,
"text": "\\nu(x) = j"
},
{
"math_id": 19,
"text": "\\Q"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "K=M(X)"
},
{
"math_id": 22,
"text": "X\\to\\Complex\\cup\\{\\infin\\}"
},
{
"math_id": 23,
"text": "p\\in X"
},
{
"math_id": 24,
"text": "\\nu(f)=j"
},
{
"math_id": 25,
"text": "j"
},
{
"math_id": 26,
"text": "f(z)/(z-p)^j"
},
{
"math_id": 27,
"text": "\\nu(f)=j>0"
},
{
"math_id": 28,
"text": "f"
},
{
"math_id": 29,
"text": "\\nu(f)=j<0"
},
{
"math_id": 30,
"text": "-j"
}
] | https://en.wikipedia.org/wiki?curid=830776 |
8307819 | Expected shortfall | Measure of financial risk
Expected shortfall (ES) is a risk measure—a concept used in the field of financial risk measurement to evaluate the market risk or credit risk of a portfolio. The "expected shortfall at q% level" is the expected return on the portfolio in the worst formula_0 of cases. ES is an alternative to value at risk that is more sensitive to the shape of the tail of the loss distribution.
Expected shortfall is also called conditional value at risk (CVaR), average value at risk (AVaR), expected tail loss (ETL), and superquantile.
ES estimates the risk of an investment in a conservative way, focusing on the less profitable outcomes. For high values of formula_1 it ignores the most profitable but unlikely possibilities, while for small values of formula_1 it focuses on the worst losses. On the other hand, unlike the discounted maximum loss, even for lower values of formula_1 the expected shortfall does not consider only the single most catastrophic outcome. A value of formula_1 often used in practice is 5%.
Expected shortfall is considered a more useful risk measure than VaR because it is a coherent spectral measure of financial portfolio risk. It is calculated for a given quantile-level formula_1 and is defined to be the mean loss of portfolio value given that a loss is occurring at or below the formula_1-quantile.
Formal definition.
If formula_2 (an L"p") is the payoff of a portfolio at some future time and formula_3 then we define the expected shortfall as
formula_4
where formula_5 is the value at risk. This can be equivalently written as
formula_6
where formula_7 is the lower formula_8-quantile and formula_9 is the indicator function. Note, that the second term vanishes for random variables with continuous distribution functions.
The dual representation is
formula_10
where formula_11 is the set of probability measures which are absolutely continuous to the physical measure formula_12 such that formula_13 almost surely. Note that formula_14 is the Radon–Nikodym derivative of formula_15 with respect to formula_12.
Expected shortfall can be generalized to a general class of coherent risk measures on formula_16 spaces (Lp space) with a corresponding dual characterization in the corresponding formula_17 dual space. The domain can be extended for more general Orlicz Hearts.
If the underlying distribution for formula_18 is a continuous distribution then the expected shortfall is equivalent to the tail conditional expectation defined by formula_19.
Informally, and non-rigorously, this equation amounts to saying "in case of losses so severe that they occur only alpha percent of the time, what is our average loss".
Expected shortfall can also be written as a distortion risk measure given by the distortion function
formula_20
Examples.
Example 1. If we believe our average loss on the worst 5% of the possible outcomes for our portfolio is EUR 1000, then we could say our expected shortfall is EUR 1000 for the 5% tail.
Example 2. Consider a portfolio that will have the following possible values at the end of the period:
row 1: ending value 0, with probability 10%;
row 2: ending value 80, with probability 30%;
row 3: ending value 100, with probability 40%;
row 4: ending value 150, with probability 20%.
Now assume that we paid 100 at the beginning of the period for this portfolio. Then the profit in each case is ("ending value"−100) or:
row 1: profit −100, with probability 10%;
row 2: profit −20, with probability 30%;
row 3: profit 0, with probability 40%;
row 4: profit 50, with probability 20%.
From this table let us calculate the expected shortfall formula_21 for a few values of formula_1: formula_21 equals 100 for formula_1 up to 10%, 60 for formula_1 = 20%, 40 for formula_1 = 40%, 32 for formula_1 = 50%, 20 for formula_1 = 80%, and 6 for formula_1 = 100%.
To see how these values were calculated, consider the calculation of −formula_22, the expectation in the worst 5% of cases. These cases belong to (are a subset of) row 1 in the profit table, which has a profit of −100 (total loss of the 100 invested). The expected profit for these cases is −100.
Now consider the calculation of −formula_23, the expectation in the worst 20 out of 100 cases. These cases are as follows: 10 cases from row one, and 10 cases from row two (note that 10+10 equals the desired 20 cases). For row 1 there is a profit of −100, while for row 2 a profit of −20. Using the expected value formula we get
formula_24
Similarly for any value of formula_1. We select as many rows starting from the top as are necessary to give a cumulative probability of formula_1 and then calculate an expectation over those cases. In general, the last row selected may not be fully used (for example in calculating formula_25 we used only 10 of the 30 cases per 100 provided by row 2).
As a final example, calculate formula_26. This is the expectation over all cases, or
formula_27
The value at risk formula_28 is given here for comparison: with the quantile definition above, it equals 100 for formula_1 up to 10%, 20 for formula_1 above 10% and up to 40%, 0 for formula_1 above 40% and up to 80%, and −50 for larger formula_1.
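The hand calculation above can be reproduced mechanically. The following sketch is written for this discrete example only (it is not general estimation code): it averages the profits over the worst fraction formula_1 of cases and negates the result.
```python
def expected_shortfall(outcomes, q):
    """ES_q = -(average profit over the worst fraction q of cases).

    `outcomes` is a list of (profit, probability) pairs whose probabilities sum to 1.
    """
    remaining = q
    total = 0.0
    for profit, prob in sorted(outcomes):     # worst outcomes first
        take = min(prob, remaining)
        total += take * profit
        remaining -= take
        if remaining <= 1e-12:
            break
    return -total / q

portfolio = [(-100, 0.10), (-20, 0.30), (0, 0.40), (50, 0.20)]
for q in (0.05, 0.20, 1.00):
    print(f"ES_{q:.2f} = {expected_shortfall(portfolio, q):.2f}")
# prints 100.00, 60.00 and 6.00, matching the values derived above
```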
Properties.
The expected shortfall formula_21 increases as formula_1 decreases.
The 100%-quantile expected shortfall formula_29 equals the negative of the expected value of the portfolio.
For a given portfolio, the expected shortfall formula_21 is greater than or equal to the Value at Risk formula_28 at the same formula_1 level.
Optimization of expected shortfall.
Expected shortfall, in its standard form, is known to lead to a generally non-convex optimization problem. However, it is possible to transform the problem into a linear program and find the global solution. This property makes expected shortfall a cornerstone of alternatives to mean-variance portfolio optimization, which account for the higher moments (e.g., skewness and kurtosis) of a return distribution.
Suppose that we want to minimize the expected shortfall of a portfolio. The key contribution of Rockafellar and Uryasev in their 2000 paper is to introduce the auxiliary function formula_30 for the expected shortfall:
formula_31
where formula_32 and formula_33 is a loss function for a set of portfolio weights formula_34 to be applied to the returns. Rockafellar and Uryasev proved that formula_35 is convex with respect to formula_36 and is equivalent to the expected shortfall at the minimum point. To numerically compute the expected shortfall for a set of portfolio returns, it is necessary to generate formula_37 simulations of the portfolio constituents; this is often done using copulas. With these simulations in hand, the auxiliary function may be approximated by:
formula_38
This is equivalent to the formulation:
formula_39
Finally, choosing a linear loss function formula_40 turns the optimization problem into a linear program. Using standard methods, it is then easy to find the portfolio that minimizes expected shortfall.
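A minimal sketch of that linear program on synthetic data is given below; the simulated Gaussian returns, the long-only constraint and the solver choice are assumptions added for illustration and are not part of the original formulation. In the code, alpha denotes the confidence level used in the auxiliary-function notation above (e.g. 0.95).
```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
J, p, alpha = 500, 4, 0.95
X = rng.normal(0.0005, 0.01, size=(J, p))        # simulated constituent returns

# Decision vector: [w_1..w_p, gamma, z_1..z_J]
c = np.concatenate([np.zeros(p), [1.0], np.full(J, 1.0 / ((1 - alpha) * J))])

# Loss l(w, x_j) = -w.x_j, so z_j >= -w.x_j - gamma  <=>  -X w - gamma - z <= 0
A_ub = np.hstack([-X, -np.ones((J, 1)), -np.eye(J)])
b_ub = np.zeros(J)

A_eq = np.concatenate([np.ones(p), [0.0], np.zeros(J)]).reshape(1, -1)  # sum w = 1
b_eq = [1.0]
bounds = [(0, None)] * p + [(None, None)] + [(0, None)] * J  # long-only, gamma free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
weights, gamma = res.x[:p], res.x[p]
print("weights:", np.round(weights, 3), "VaR estimate:", round(gamma, 5),
      "minimized ES:", round(res.fun, 5))
```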
Formulas for continuous probability distributions.
Closed-form formulas exist for calculating the expected shortfall when the payoff of a portfolio formula_18 or a corresponding loss formula_41 follows a specific continuous distribution. In the former case, the expected shortfall corresponds to the opposite number of the left-tail conditional expectation below formula_42:
formula_43
Typical values of formula_44 in this case are 5% and 1%.
For engineering or actuarial applications it is more common to consider the distribution of losses formula_41, the expected shortfall in this case corresponds to the right-tail conditional expectation above formula_45 and the typical values of formula_8 are 95% and 99%:
formula_46
Since some formulas below were derived for the left-tail case and some for the right-tail case, the following reconciliations can be useful:
formula_47
Normal distribution.
If the payoff of a portfolio formula_18 follows the normal (Gaussian) distribution with p.d.f. formula_48 then the expected shortfall is equal to formula_49, where formula_50 is the standard normal p.d.f., formula_51 is the standard normal c.d.f., so formula_52 is the standard normal quantile.
If the loss of a portfolio formula_53 follows the normal distribution, the expected shortfall is equal to formula_54.
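The closed-form expression can be checked against a brute-force simulation; in the sketch below the mean, standard deviation and tail level are arbitrary illustrative values.
```python
import numpy as np
from scipy.stats import norm

def es_normal_payoff(mu, sigma, alpha):
    """ES_alpha for a normal payoff X ~ N(mu, sigma^2); alpha is the tail level, e.g. 0.05."""
    return -mu + sigma * norm.pdf(norm.ppf(alpha)) / alpha

mu, sigma, alpha = 0.001, 0.02, 0.05
closed_form = es_normal_payoff(mu, sigma, alpha)

rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=1_000_000)
var_level = np.quantile(x, alpha)            # this is -VaR_alpha(X)
monte_carlo = -x[x <= var_level].mean()      # mean loss over the worst 5% of cases

print(f"closed form: {closed_form:.5f}   Monte Carlo: {monte_carlo:.5f}")
```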
Generalized Student's t-distribution.
If the payoff of a portfolio formula_18 follows the generalized Student's t-distribution with p.d.f. formula_55 then the expected shortfall is equal to formula_56, where formula_57 is the standard t-distribution p.d.f., formula_58 is the standard t-distribution c.d.f., so formula_59 is the standard t-distribution quantile.
If the loss of a portfolio formula_53 follows generalized Student's t-distribution, the expected shortfall is equal to formula_60.
Laplace distribution.
If the payoff of a portfolio formula_18 follows the Laplace distribution with the p.d.f.
formula_61
and the c.d.f.
formula_62
then the expected shortfall is equal to formula_63 for formula_64.
If the loss of a portfolio formula_53 follows the Laplace distribution, the expected shortfall is equal to
formula_65
Logistic distribution.
If the payoff of a portfolio formula_18 follows the logistic distribution with p.d.f. formula_66 and the c.d.f. formula_67 then the expected shortfall is equal to formula_68.
If the loss of a portfolio formula_53 follows the logistic distribution, the expected shortfall is equal to formula_69.
Exponential distribution.
If the loss of a portfolio formula_53 follows the exponential distribution with p.d.f. formula_70 and the c.d.f. formula_71 then the expected shortfall is equal to formula_72.
Pareto distribution.
If the loss of a portfolio formula_53 follows the Pareto distribution with p.d.f. formula_73 and the c.d.f. formula_74 then the expected shortfall is equal to formula_75.
Generalized Pareto distribution (GPD).
If the loss of a portfolio formula_53 follows the GPD with p.d.f.
formula_76
and the c.d.f.
formula_77
then the expected shortfall is equal to
formula_78
and the VaR is equal to
formula_79
Weibull distribution.
If the loss of a portfolio formula_53 follows the Weibull distribution with p.d.f. formula_80 and the c.d.f. formula_81 then the expected shortfall is equal to formula_82, where formula_83 is the upper incomplete gamma function.
Generalized extreme value distribution (GEV).
If the payoff of a portfolio formula_18 follows the GEV with p.d.f. formula_84 and c.d.f. formula_85 then the expected shortfall is equal to formula_86 and the VaR is equal to formula_87, where formula_83 is the upper incomplete gamma function, formula_88 is the logarithmic integral function.
If the loss of a portfolio formula_53 follows the GEV, then the expected shortfall is equal to formula_89, where formula_90 is the lower incomplete gamma function, formula_91 is the Euler-Mascheroni constant.
Generalized hyperbolic secant (GHS) distribution.
If the payoff of a portfolio formula_18 follows the GHS distribution with p.d.f. formula_92and the c.d.f. formula_93 then the expected shortfall is equal to formula_94, where formula_95 is the dilogarithm and formula_96 is the imaginary unit.
Johnson's SU-distribution.
If the payoff of a portfolio formula_18 follows Johnson's SU-distribution with the c.d.f. formula_97 then the expected shortfall is equal to formula_98, where formula_99 is the c.d.f. of the standard normal distribution.
Burr type XII distribution.
If the payoff of a portfolio formula_18 follows the Burr type XII distribution with the p.d.f. formula_100 and the c.d.f. formula_101, the expected shortfall is equal to formula_102, where formula_103 is the hypergeometric function. Alternatively, formula_104.
Dagum distribution.
If the payoff of a portfolio formula_18 follows the Dagum distribution with p.d.f. formula_105 and the c.d.f. formula_106, the expected shortfall is equal to formula_107, where formula_103 is the hypergeometric function.
Lognormal distribution.
If the payoff of a portfolio formula_18 follows lognormal distribution, i.e. the random variable formula_108 follows the normal distribution with p.d.f. formula_48, then the expected shortfall is equal to formula_109, where formula_51 is the standard normal c.d.f., so formula_52 is the standard normal quantile.
Log-logistic distribution.
If the payoff of a portfolio formula_18 follows log-logistic distribution, i.e. the random variable formula_108 follows the logistic distribution with p.d.f. formula_110, then the expected shortfall is equal to formula_111, where formula_112 is the regularized incomplete beta function, formula_113.
As the incomplete beta function is defined only for positive arguments, for a more generic case the expected shortfall can be expressed with the hypergeometric function: formula_114.
If the loss of a portfolio formula_53 follows log-logistic distribution with p.d.f. formula_115 and c.d.f. formula_116, then the expected shortfall is equal to formula_117, where formula_118 is the incomplete beta function.
Log-Laplace distribution.
If the payoff of a portfolio formula_18 follows log-Laplace distribution, i.e. the random variable formula_108 follows the Laplace distribution the p.d.f. formula_119, then the expected shortfall is equal to
formula_120
Log-generalized hyperbolic secant (log-GHS) distribution.
If the payoff of a portfolio formula_18 follows log-GHS distribution, i.e. the random variable formula_108 follows the GHS distribution with p.d.f. formula_121, then the expected shortfall is equal to
formula_122
where formula_103 is the hypergeometric function.
Dynamic expected shortfall.
The conditional version of the expected shortfall at the time "t" is defined by
formula_123
where formula_124.
This is not a time-consistent risk measure. The time-consistent version is given by
formula_125
such that
formula_126
See also.
Methods of statistical estimation of VaR and ES can be found in Embrechts et al. and Novak. When forecasting VaR and ES, or optimizing portfolios to minimize tail risk, it is important to account for asymmetric dependence and non-normalities in the distribution of stock returns such as auto-regression, asymmetric volatility, skewness, and kurtosis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "q\\%"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "X \\in L^p(\\mathcal{F})"
},
{
"math_id": 3,
"text": "0 < \\alpha < 1"
},
{
"math_id": 4,
"text": " \\operatorname{ES}_\\alpha(X) = -\\frac{1}{\\alpha} \\int_0^\\alpha \\operatorname{VaR}_\\gamma(X) \\, d\\gamma"
},
{
"math_id": 5,
"text": "\\operatorname{VaR}_\\gamma"
},
{
"math_id": 6,
"text": "\\operatorname{ES}_\\alpha(X) = -\\frac{1}{\\alpha} \\left(\\operatorname E[X \\ 1_{\\{X \\leq x_{\\alpha}\\}}] + x_\\alpha(\\alpha - P[X \\leq x_\\alpha])\\right)"
},
{
"math_id": 7,
"text": "x_\\alpha = \\inf\\{x \\in \\mathbb{R}: P(X \\leq x) \\geq \\alpha\\}=-\\operatorname{VaR}_\\alpha(X)"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "1_A(x) = \\begin{cases}1 &\\text{if }x \\in A\\\\ 0 &\\text{else}\\end{cases}"
},
{
"math_id": 10,
"text": " \\operatorname {ES}_\\alpha(X) = \\inf_{Q \\in \\mathcal{Q}_\\alpha} E^Q[X]"
},
{
"math_id": 11,
"text": "\\mathcal{Q}_\\alpha"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "\\frac{dQ}{dP} \\leq \\alpha^{-1}"
},
{
"math_id": 14,
"text": "\\frac{dQ}{dP}"
},
{
"math_id": 15,
"text": "Q"
},
{
"math_id": 16,
"text": "L^p"
},
{
"math_id": 17,
"text": "L^q"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "\\operatorname{TCE}_{\\alpha}(X) = E[-X\\mid X \\leq -\\operatorname{VaR}_{\\alpha}(X)]"
},
{
"math_id": 20,
"text": "g(x) = \\begin{cases}\\frac{x}{1-\\alpha} & \\text{if }0 \\leq x < 1-\\alpha,\\\\ 1 & \\text{if }1-\\alpha \\leq x \\leq 1.\\end{cases} \\quad"
},
{
"math_id": 21,
"text": "\\operatorname{ES}_q"
},
{
"math_id": 22,
"text": "\\operatorname{ES}_{0.05}"
},
{
"math_id": 23,
"text": "\\operatorname{ES}_{0.20}"
},
{
"math_id": 24,
"text": "\\frac{ \\frac{10}{100}(-100)+\\frac{10}{100}(-20) }{ \\frac{20}{100}} = -60."
},
{
"math_id": 25,
"text": "-\\operatorname{ES}_{0.20}"
},
{
"math_id": 26,
"text": "-\\operatorname{ES}_1"
},
{
"math_id": 27,
"text": "0.1(-100)+0.3(-20)+0.4\\cdot 0+0.2\\cdot 50 = -6. \\, "
},
{
"math_id": 28,
"text": "\\operatorname{VaR}_q"
},
{
"math_id": 29,
"text": "\\operatorname{ES}_{1}"
},
{
"math_id": 30,
"text": "F_{\\alpha}(w,\\gamma)"
},
{
"math_id": 31,
"text": " F_\\alpha(w,\\gamma) = \\gamma + {1\\over{1-\\alpha}} \\int_{\\ell(w,x)\\geq \\gamma} \\left[\\ell(w,x)-\\gamma\\right]_{+} p(x) \\, dx"
},
{
"math_id": 32,
"text": "\\gamma = \\operatorname{VaR}_\\alpha(X)"
},
{
"math_id": 33,
"text": "\\ell(w,x)"
},
{
"math_id": 34,
"text": "w\\in\\mathbb{R}^p"
},
{
"math_id": 35,
"text": "F_\\alpha(w,\\gamma)"
},
{
"math_id": 36,
"text": "\\gamma"
},
{
"math_id": 37,
"text": "J"
},
{
"math_id": 38,
"text": "\\widetilde{F}_\\alpha(w,\\gamma) = \\gamma + {1\\over{(1-\\alpha)J}}\\sum_{j=1}^J [\\ell(w,x_j) - \\gamma]_{+}"
},
{
"math_id": 39,
"text": "\\min_{\\gamma,z,w} \\; \\gamma + {1\\over{(1-\\alpha)J}} \\sum_{j=1}^J z_j, \\quad \\text{s.t. } z_j \\geq \\ell(w,x_j)-\\gamma,\\; z_j \\geq 0"
},
{
"math_id": 40,
"text": "\\ell(w,x_{j}) = -w^T x_j"
},
{
"math_id": 41,
"text": "L = -X"
},
{
"math_id": 42,
"text": "-\\operatorname{VaR}_\\alpha (X)"
},
{
"math_id": 43,
"text": "\\operatorname {ES}_\\alpha(X) = E[-X\\mid X \\leq -\\operatorname{VaR}_\\alpha(X)] = -\\frac{1}{\\alpha}\\int_0^\\alpha \\operatorname{VaR}_\\gamma(X) \\, d\\gamma = -\\frac{1}{\\alpha} \\int_{-\\infty}^{-\\operatorname{VaR}_\\alpha(X)} xf(x) \\, dx."
},
{
"math_id": 44,
"text": "\\alpha"
},
{
"math_id": 45,
"text": "\\operatorname{VaR}_\\alpha (L)"
},
{
"math_id": 46,
"text": "\\operatorname {ES}_\\alpha(L)\n= \\operatorname E[L\\mid L \\geq \\operatorname{VaR}_\\alpha(L)]\n= \\frac{1}{1-\\alpha} \\int^1_\\alpha \\operatorname{VaR}_\\gamma(L) \\, d\\gamma\n= \\frac{1}{1-\\alpha} \\int^{+\\infty}_{\\operatorname{VaR}_\\alpha(L)} yf(y) \\, dy."
},
{
"math_id": 47,
"text": " \\operatorname {ES}_\\alpha(X)\n= -\\frac{1}{\\alpha} \\operatorname E[X] + \\frac{1-\\alpha}{\\alpha} \\operatorname {ES}_\\alpha(L) \\text{ and } \\operatorname{ES}_\\alpha(L)\n= \\frac{1}{1-\\alpha} \\operatorname E[L]+\\frac{\\alpha}{1-\\alpha} \\operatorname {ES}_\\alpha(X)."
},
{
"math_id": 48,
"text": "f(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}"
},
{
"math_id": 49,
"text": "\\operatorname{ES}_\\alpha(X) = -\\mu+\\sigma\\frac{\\varphi(\\Phi^{-1}(\\alpha))}{\\alpha}"
},
{
"math_id": 50,
"text": "\\varphi(x)=\\frac{1}{\\sqrt{2\\pi}}e^{-\\frac{x^2}{2}}"
},
{
"math_id": 51,
"text": "\\Phi(x)"
},
{
"math_id": 52,
"text": "\\Phi^{-1}(\\alpha)"
},
{
"math_id": 53,
"text": "L"
},
{
"math_id": 54,
"text": "\\operatorname{ES}_\\alpha(L) = \\mu+\\sigma\\frac{\\varphi(\\Phi^{-1}(\\alpha))}{1-\\alpha}"
},
{
"math_id": 55,
"text": "f(x) = \\frac{\\Gamma\\left(\\frac{\\nu+1}{2}\\right)}{\\Gamma\\left(\\frac{\\nu}{2} \\right) \\sqrt{\\pi\\nu} \\sigma} \\left(1+\\frac{1}{\\nu}\\left(\\frac{x-\\mu}{\\sigma}\\right)^2\\right)^{-\\frac{\\nu+1}{2}}"
},
{
"math_id": 56,
"text": "\\operatorname{ES}_\\alpha(X) = -\\mu+\\sigma\\frac{\\nu+(\\Tau^{-1}(\\alpha))^2}{\\nu-1}\\frac{\\tau(\\Tau^{-1}(\\alpha))}{\\alpha}"
},
{
"math_id": 57,
"text": "\\tau(x)=\\frac{\\Gamma\\bigl(\\frac{\\nu+1}{2}\\bigr)}{\\Gamma\\bigl(\\frac{\\nu}{2}\\bigr)\\sqrt{\\pi\\nu}}\\Bigl(1+\\frac{x^2}{\\nu}\\Bigr)^{-\\frac{\\nu+1}{2}}"
},
{
"math_id": 58,
"text": "\\Tau(x)"
},
{
"math_id": 59,
"text": "\\Tau^{-1}(\\alpha)"
},
{
"math_id": 60,
"text": "\\operatorname{ES}_\\alpha(L) = \\mu+\\sigma\\frac{\\nu+(\\Tau^{-1}(\\alpha))^2}{\\nu-1}\\frac{\\tau(\\Tau^{-1}(\\alpha))}{1-\\alpha}"
},
{
"math_id": 61,
"text": "f(x) = \\frac{1}{2b}e^{-|x-\\mu|/b}"
},
{
"math_id": 62,
"text": "F(x) = \\begin{cases}\n1 - \\frac{1}{2} e^{-(x-\\mu)/b} & \\text{if }x \\geq \\mu,\\\\[4pt]\n\\frac{1}{2} e^{(x-\\mu)/b} & \\text{if }x < \\mu.\n\\end{cases}"
},
{
"math_id": 63,
"text": "\\operatorname{ES}_\\alpha(X) = -\\mu + b(1 - \\ln 2\\alpha)"
},
{
"math_id": 64,
"text": "\\alpha \\le 0.5"
},
{
"math_id": 65,
"text": "\\operatorname{ES}_\\alpha(L) = \\begin{cases}\n\\mu + b \\frac{\\alpha}{1-\\alpha} (1-\\ln2\\alpha) & \\text{if }\\alpha < 0.5,\\\\[4pt]\n\\mu + b[1 - \\ln(2(1-\\alpha))] & \\text{if }\\alpha \\ge 0.5.\n\\end{cases}"
},
{
"math_id": 66,
"text": "f(x) = \\frac{1}{s} e^{-\\frac{x-\\mu}{s}}\\left(1+e^{-\\frac{x-\\mu}{s}}\\right)^{-2}"
},
{
"math_id": 67,
"text": "F(x) = \\left(1+e^{-\\frac{x-\\mu}{s}}\\right)^{-1}"
},
{
"math_id": 68,
"text": "\\operatorname{ES}_\\alpha(X) = -\\mu + s \\ln\\frac{(1-\\alpha)^{1-\\frac{1}{\\alpha}}}{\\alpha}"
},
{
"math_id": 69,
"text": "\\operatorname{ES}_\\alpha(L) = \\mu + s\\frac{-\\alpha\\ln\\alpha-(1-\\alpha)\\ln(1-\\alpha)}{1-\\alpha}"
},
{
"math_id": 70,
"text": "f(x) = \\begin{cases}\\lambda e^{-\\lambda x} & \\text{if }x \\geq 0,\\\\ 0 & \\text{if }x < 0.\\end{cases}"
},
{
"math_id": 71,
"text": "F(x) = \\begin{cases}1 - e^{-\\lambda x} & \\text{if }x \\geq 0,\\\\ 0 & \\text{if }x < 0.\\end{cases}"
},
{
"math_id": 72,
"text": "\\operatorname{ES}_\\alpha(L) = \\frac{-\\ln(1-\\alpha)+1}{\\lambda}"
},
{
"math_id": 73,
"text": "f(x) = \\begin{cases}\n\\frac{a x_m^a}{x^{a+1}} & \\text{if }x \\geq x_m,\\\\\n0 & \\text{if }x < x_m.\n\\end{cases}"
},
{
"math_id": 74,
"text": "F(x) = \\begin{cases}\n1 - (x_m/x)^a & \\text{if }x \\geq x_m,\\\\\n0 & \\text{if }x < x_m.\n\\end{cases}"
},
{
"math_id": 75,
"text": "\\operatorname{ES}_\\alpha(L) = \\frac{x_m a}{(1-\\alpha)^{1/a}(a-1)}"
},
{
"math_id": 76,
"text": "f(x) = \\frac{1}{s} \\left( 1+\\frac{\\xi (x-\\mu)}{s} \\right)^{\\left(-\\frac{1}{\\xi}-1\\right)}"
},
{
"math_id": 77,
"text": "F(x) = \\begin{cases}\n1 - \\left(1+\\frac{\\xi(x-\\mu)}{s}\\right)^{-1 /\\xi} & \\text{if }\\xi \\ne 0,\\\\\n1-\\exp \\left( -\\frac{x-\\mu}{s} \\right) & \\text{if }\\xi = 0.\n\\end{cases}"
},
{
"math_id": 78,
"text": "\\operatorname{ES}_\\alpha(L) = \\begin{cases}\n\\mu + s \\left[ \\frac{(1-\\alpha)^{-\\xi}}{1-\\xi}+\\frac{(1-\\alpha)^{-\\xi}-1}{\\xi} \\right] & \\text{if }\\xi \\ne 0,\\\\\n\\mu + s \\left[1 - \\ln(1-\\alpha) \\right] & \\text{if }\\xi = 0,\n\\end{cases}"
},
{
"math_id": 79,
"text": " \\operatorname{VaR}_\\alpha(L) = \\begin{cases}\n\\mu + s \\frac{(1-\\alpha)^{-\\xi}-1}{\\xi} & \\text{if }\\xi \\ne 0,\\\\\n\\mu - s \\ln(1-\\alpha) & \\text{if }\\xi = 0.\n\\end{cases}"
},
{
"math_id": 80,
"text": "f(x) = \\begin{cases}\n\\frac{k}{\\lambda} \\left(\\frac{x}{\\lambda}\\right)^{k-1} e^{-(x/\\lambda)^k} & \\text{if }x \\geq 0,\\\\\n0 & \\text{if }x < 0.\n\\end{cases}"
},
{
"math_id": 81,
"text": "F(x) = \\begin{cases}\n1 - e^{-(x/\\lambda)^k} & \\text{if }x \\geq 0,\\\\\n0 & \\text{if }x < 0.\n\\end{cases}"
},
{
"math_id": 82,
"text": "\\operatorname{ES}_\\alpha(L) = \\frac{\\lambda}{1-\\alpha} \\Gamma\\left(1+\\frac{1}{k},-\\ln(1-\\alpha)\\right)"
},
{
"math_id": 83,
"text": "\\Gamma(s,x)"
},
{
"math_id": 84,
"text": "f(x) = \\begin{cases}\n\\frac{1}{\\sigma} \\left( 1+\\xi \\frac{ x-\\mu}{\\sigma} \\right)^{-\\frac{1}{\\xi}-1} \\exp\\left[-\\left( 1 + \\xi \\frac{x-\\mu}{\\sigma} \\right)^{-{1}/{\\xi}}\\right] & \\text{if } \\xi \\ne 0,\\\\\n\\frac{1}{\\sigma}e^{-\\frac{x-\\mu}{\\sigma}}e^{-e^{-\\frac{x-\\mu}{\\sigma}}} & \\text{if } \\xi = 0.\n\\end{cases}"
},
{
"math_id": 85,
"text": "F(x) = \\begin{cases}\n\\exp\\left(-\\left(1+\\xi\\frac{x-\\mu}{\\sigma}\\right)^{-{1}/{\\xi}}\\right) & \\text{if }\\xi \\ne 0,\\\\\n\\exp\\left(-e^{-\\frac{x-\\mu}{\\sigma}}\\right) & \\text{if }\\xi = 0.\n\\end{cases}"
},
{
"math_id": 86,
"text": "\\operatorname{ES}_\\alpha(X) = \\begin{cases}\n-\\mu - \\frac{\\sigma}{\\alpha \\xi} \\big[ \\Gamma(1-\\xi,-\\ln\\alpha)-\\alpha \\big] & \\text{if }\\xi \\ne 0,\\\\\n-\\mu - \\frac{\\sigma}{\\alpha} \\big[ \\text{li}(\\alpha) - \\alpha \\ln(-\\ln \\alpha) \\big] & \\text{if }\\xi = 0.\n\\end{cases}"
},
{
"math_id": 87,
"text": "\\operatorname{VaR}_\\alpha(X) = \\begin{cases}\n-\\mu - \\frac{\\sigma}{\\xi} \\left[(-\\ln \\alpha)^{-\\xi}-1 \\right] & \\text{if }\\xi \\ne 0,\\\\\n-\\mu + \\sigma \\ln(-\\ln\\alpha) & \\text{if }\\xi = 0.\n\\end{cases}"
},
{
"math_id": 88,
"text": "\\mathrm{li}(x) = \\int \\frac{dx}{\\ln x}"
},
{
"math_id": 89,
"text": "\\operatorname{ES}_\\alpha(X) = \\begin{cases}\n\\mu + \\frac{\\sigma}{(1-\\alpha) \\xi} \\bigl[ \\gamma(1-\\xi,-\\ln\\alpha)-(1-\\alpha) \\bigr] & \\text{if }\\xi \\ne 0,\\\\\n\\mu + \\frac{\\sigma}{1-\\alpha} \\bigl[y - \\text{li}(\\alpha) + \\alpha \\ln(-\\ln \\alpha) \\bigr] & \\text{if }\\xi = 0.\n\\end{cases}"
},
{
"math_id": 90,
"text": "\\gamma(s,x)"
},
{
"math_id": 91,
"text": "y"
},
{
"math_id": 92,
"text": "f(x) = \\frac{1}{2 \\sigma} \\operatorname{sech}\\left(\\frac{\\pi}{2} \\frac{x-\\mu}{\\sigma}\\right)"
},
{
"math_id": 93,
"text": "F(x) = \\frac{2}{\\pi}\\arctan\\left[\\exp\\left(\\frac{\\pi}{2}\\frac{x-\\mu}{\\sigma}\\right)\\right]"
},
{
"math_id": 94,
"text": "\\operatorname{ES}_\\alpha(X) =\n- \\mu - \\frac{2\\sigma}{\\pi} \\ln\\left( \\tan \\frac{\\pi\\alpha}{2} \\right)\n- \\frac{2\\sigma}{\\pi^2\\alpha}i\\left[\\operatorname{Li}_2\\left(-i\\tan\\frac{\\pi\\alpha}{2}\\right)-\\operatorname{Li}_2\\left(i\\tan\\frac{\\pi\\alpha}{2}\\right)\\right]"
},
{
"math_id": 95,
"text": "\\operatorname{Li}_2"
},
{
"math_id": 96,
"text": "i=\\sqrt{-1}"
},
{
"math_id": 97,
"text": "F(x) = \\Phi\\left[\\gamma+\\delta\\sinh^{-1}\\left(\\frac{x-\\xi}{\\lambda}\\right)\\right]"
},
{
"math_id": 98,
"text": "\\operatorname{ES}_\\alpha(X) =\n-\\xi - \\frac{\\lambda}{2\\alpha}\n\\left[\n \\exp\\left(\\frac{1-2\\gamma\\delta}{2\\delta^2}\\right) \\;\n \\Phi\\left(\\Phi^{-1}(\\alpha)-\\frac{1}{\\delta}\\right)\n - \\exp\\left(\\frac{1+2\\gamma\\delta}{2\\delta^2}\\right) \\;\n \\Phi\\left(\\Phi^{-1}(\\alpha)+\\frac{1}{\\delta}\\right)\n\\right]"
},
{
"math_id": 99,
"text": "\\Phi"
},
{
"math_id": 100,
"text": "f(x) = \\frac{ck}{\\beta}\n\\left(\\frac{x-\\gamma}{\\beta}\\right)^{c-1}\n\\left[1+\\left(\\frac{x-\\gamma}{\\beta} \\right)^c\\right]^{-k-1}"
},
{
"math_id": 101,
"text": "F(x) = 1-\\left[1+\\left(\\frac{x-\\gamma}{\\beta} \\right)^c \\right]^{-k}"
},
{
"math_id": 102,
"text": "\\operatorname{ES}_\\alpha(X) =\n- \\gamma\n- \\frac{\\beta}{\\alpha}\n \\left( (1-\\alpha)^{-1/k}-1 \\right)^{1/c}\n \\left[ \\alpha -1+{_2F_1}\\left(\\frac{1}{c},k;1+\\frac{1}{c};1-(1-\\alpha)^{-1/k}\\right) \\right]"
},
{
"math_id": 103,
"text": "_2F_1"
},
{
"math_id": 104,
"text": "\\operatorname{ES}_\\alpha(X) =\n- \\gamma\n- \\frac{\\beta}{\\alpha} \n \\frac{ck}{c+1}\n \\left( (1-\\alpha)^{-1/k}-1 \\right)^{1+\\frac{1}{c}}\n {_2F_1}\\left(1+\\frac{1}{c}, k+1;2+\\frac{1}{c};1-(1-\\alpha)^{-1/k}\\right) "
},
{
"math_id": 105,
"text": "f(x) =\n\\frac{ck}{\\beta}\n\\left(\\frac{x-\\gamma}{\\beta}\\right)^{ck-1}\n\\left[1+\\left(\\frac{x-\\gamma}{\\beta}\\right)^c\\right]^{-k-1}"
},
{
"math_id": 106,
"text": "F(x) = \\left[1+\\left(\\frac{x-\\gamma}{\\beta}\\right)^{-c}\\right]^{-k}"
},
{
"math_id": 107,
"text": "\\operatorname{ES}_\\alpha(X) =\n- \\gamma\n- \\frac{\\beta}{\\alpha}\n \\frac{ck}{ck+1}\n \\left( \\alpha^{-1/k}-1 \\right)^{-k-\\frac{1}{c}}\n {_2F_1}\\left(k+1,k+\\frac{1}{c};k+1+\\frac{1}{c};-\\frac{1}{\\alpha^{-1/k}-1}\\right) "
},
{
"math_id": 108,
"text": "\\ln(1+X)"
},
{
"math_id": 109,
"text": "\\operatorname{ES}_\\alpha(X) = 1 - \\exp\\left(\\mu+\\frac{\\sigma^2}{2}\\right) \\frac{\\Phi\\left(\\Phi^{-1}(\\alpha)-\\sigma\\right)}{\\alpha}"
},
{
"math_id": 110,
"text": "f(x) = \\frac{1}{s} e^{-\\frac{x-\\mu}{s}} \\left(1+e^{-\\frac{x-\\mu}{s}}\\right)^{-2}"
},
{
"math_id": 111,
"text": "\\operatorname{ES}_\\alpha(X) = 1-\\frac{e^\\mu}{\\alpha}I_\\alpha(1+s,1-s)\\frac{\\pi s}{\\sin\\pi s}"
},
{
"math_id": 112,
"text": "I_\\alpha"
},
{
"math_id": 113,
"text": "I_\\alpha(a,b)=\\frac{\\Beta_\\alpha(a,b)}{\\Beta(a,b)}"
},
{
"math_id": 114,
"text": "\\operatorname{ES}_\\alpha(X) = 1-\\frac{e^\\mu \\alpha^s}{s+1} {_2F_1}(s,s+1;s+2;\\alpha)"
},
{
"math_id": 115,
"text": "f(x) = \\frac{\\frac{b}{a}(x/a)^{b-1}}{(1+(x/a)^b)^2}"
},
{
"math_id": 116,
"text": "F(x) = \\frac{1}{1+(x/a)^{-b}}"
},
{
"math_id": 117,
"text": "\\operatorname{ES}_\\alpha(L) =\n\\frac{a}{1-\\alpha}\n\\left[\n \\frac{\\pi}{b} \\csc\\left(\\frac{\\pi}{b}\\right)\n - \\Beta_\\alpha \\left(\\frac{1}{b}+1,1-\\frac{1}{b}\\right)\n\\right]"
},
{
"math_id": 118,
"text": "B_\\alpha"
},
{
"math_id": 119,
"text": "f(x) = \\frac{1}{2b}e^{-\\frac{|x-\\mu|}{b}}"
},
{
"math_id": 120,
"text": "\\operatorname{ES}_\\alpha(X) = \\begin{cases}\n1 - \\frac{e^\\mu (2\\alpha)^b}{b+1} & \\text{if }\\alpha \\le 0.5,\\\\\n1 - \\frac{e^\\mu 2^{-b}}{\\alpha(b-1)} \\left[(1-\\alpha)^{(1-b)}-1\\right] & \\text{if } \\alpha > 0.5.\n\\end{cases}"
},
{
"math_id": 121,
"text": "f(x) = \\frac{1}{2 \\sigma} \\operatorname{sech} \\left(\\frac{\\pi}{2}\\frac{x-\\mu}{\\sigma}\\right)"
},
{
"math_id": 122,
"text": "\\operatorname{ES}_\\alpha(X) = 1 - \\frac{1}{\\alpha(\\sigma+{\\pi/2})} \\left(\\tan\\frac{\\pi \\alpha}{2}\\exp\\frac{\\pi \\mu}{2\\sigma}\\right)^{2\\sigma/\\pi} \\tan\\frac{\\pi \\alpha}{2} {_2F_1}\\left(1,\\frac{1}{2}+\\frac{\\sigma}{\\pi};\\frac{3}{2}+\\frac{\\sigma}{\\pi};-\\tan\\left(\\frac{\\pi \\alpha}{2}\\right)^2\\right),"
},
{
"math_id": 123,
"text": "\\operatorname{ES}_\\alpha^t(X) = \\operatorname{ess\\sup}_{Q \\in \\mathcal{Q}_{\\alpha}^t} E^Q[-X \\mid \\mathcal{F}_t]"
},
{
"math_id": 124,
"text": "\\mathcal{Q}_{\\alpha}^t = \\left\\{Q = P\\,\\vert_{\\mathcal{F}_t}: \\frac{dQ}{dP} \\leq \\alpha_t^{-1} \\text{ a.s.}\\right\\} "
},
{
"math_id": 125,
"text": "\\rho_{\\alpha}^t(X) = \\operatorname{ess\\sup}_{Q \\in \\tilde{\\mathcal{Q}}_{\\alpha}^t} E^Q[-X\\mid\\mathcal{F}_t]"
},
{
"math_id": 126,
"text": "\\tilde{\\mathcal{Q}}_{\\alpha}^t = \\left\\{Q \\ll P: \\operatorname{E}\\left[\\frac{dQ}{dP} \\mid \\mathcal{F}_{\\tau+1} \\right] \\leq \\alpha_t^{-1} \\operatorname{E}\\left[\\frac{dQ}{dP} \\mid \\mathcal{F}_{\\tau}\\right] \\; \\forall \\tau \\geq t \\text{ a.s.}\\right\\}."
}
] | https://en.wikipedia.org/wiki?curid=8307819 |
8309537 | Wind turbine design | Process of defining the form of wind turbine systems
Wind turbine design is the process of defining the form and configuration of a wind turbine to extract energy from the wind. An installation consists of the systems needed to capture the wind's energy, point the turbine into the wind, convert mechanical rotation into electrical power, and other systems to start, stop, and control the turbine.
In 1919, German physicist Albert Betz showed that for a hypothetical ideal wind-energy extraction machine, the fundamental laws of conservation of mass and energy allowed no more than 16/27 (59.3%) of the wind's kinetic energy to be captured. This Betz' law limit can be approached by modern turbine designs which reach 70 to 80% of this theoretical limit.
In addition to the blades, design of a complete wind power system must also address the hub, controls, generator, supporting structure and foundation. Turbines must also be integrated into power grids.
<templatestyles src="Template:TOC limit/styles.css" />
Aerodynamics.
Blade shape and dimension are determined by the aerodynamic performance required to efficiently extract energy, and by the strength required to resist forces on the blade.
The aerodynamics of a horizontal-axis wind turbine are not straightforward. The air flow at the blades is not the same as that away from the turbine. The way that energy is extracted from the air also causes air to be deflected by the turbine. Wind turbine aerodynamics at the rotor surface exhibit phenomena that are rarely seen in other aerodynamic fields.
Power control.
Rotation speed must be controlled for efficient power generation and to keep the turbine components within speed and torque limits. The centrifugal force on the blades increases as the square of the rotation speed, which makes this structure sensitive to overspeed. Because power increases as the cube of the wind speed, turbines must survive much higher wind loads (such as gusts of wind) than those loads from which they generate power.
A wind turbine must produce power over a range of wind speeds. The cut-in speed is around 3–4 m/s for most turbines, and cut-out at 25 m/s. If the rated wind speed is exceeded the power has to be limited.
A control system involves three basic elements: sensors to measure process variables, actuators to manipulate energy capture and component loading, and control algorithms that apply information gathered by the sensors to coordinate the actuators.
Any wind blowing above the survival speed damages the turbine. The survival speed of commercial wind turbines ranges from 40 m/s (144 km/h, 89 MPH) to 72 m/s (259 km/h, 161 MPH), typically around 60 m/s (216 km/h, 134 MPH). Some turbines can survive .
Stall.
A stall on an airfoil occurs when air passes over it in such a way that the generation of lift rapidly decreases. Usually this is due to a high angle of attack (AOA), but can also result from dynamic effects. The blades of a fixed pitch turbine can be designed to stall in high wind speeds, slowing rotation. This is a simple fail-safe mechanism to help prevent damage. However, other than systems with dynamically controlled pitch, it cannot produce a constant power output over a large range of wind speeds, which makes it less suitable for large scale, power grid applications.
A fixed-speed HAWT (horizontal-axis wind turbine) inherently increases its angle of attack at higher wind speeds, because the blade speed is held constant while the wind speed rises. A natural strategy, then, is to allow the blade to stall when the wind speed increases. This technique was successfully used on many early HAWTs. However, the degree of blade pitch tended to increase noise levels.
Vortex generators may be used to control blade lift characteristics. VGs are placed on the airfoil to enhance the lift if they are placed on the lower (flatter) surface or limit the maximum lift if placed on the upper (higher camber) surface.
Furling.
Furling works by decreasing the angle of attack, which reduces drag and blade cross-section. One major problem is getting the blades to stall or furl quickly enough in a wind gust. A fully furled turbine blade, when stopped, faces the edge of the blade into the wind.
Loads can be reduced by making a structural system softer or more flexible. This can be accomplished with downwind rotors or with curved blades that twist naturally to reduce angle of attack at higher wind speeds. These systems are nonlinear and couple the structure to the flow field - requiring design tools to evolve to model these nonlinearities.
Standard turbines all furl in high winds. Since furling requires acting against the torque on the blade, it requires some form of pitch angle control, which is achieved with a slewing drive. This drive precisely angles the blade while withstanding high torque loads. In addition, many turbines use hydraulic systems. These systems are usually spring-loaded, so that if hydraulic power fails, the blades automatically furl. Other turbines use an electric servomotor for every blade. They have a battery reserve in case of grid failure. Small wind turbines (under 50 kW) with variable pitch generally use systems operated by centrifugal force, either by flyweights or geometric design, and avoid electric or hydraulic controls.
Fundamental gaps exist in pitch control, limiting the reduction of energy costs, according to a report funded by the Atkinson Center for a Sustainable Future. Load reduction is currently focused on full-span blade pitch control, since individual pitch motors are the actuators on commercial turbines. Significant load mitigation has been demonstrated in simulations for blades, tower, and drive train. However, further research is needed to increase energy capture and mitigate fatigue loads.
One control technique applied to the pitch angle compares the actual power output with the power value at the rated engine speed (the power reference, Ps reference). Pitch control is done with a PI controller. In order to adjust the pitch rapidly enough, the actuator model uses the time constant Tservo, an integrator and limiters. The pitch angle remains between 0° and 30° with a change rate of 10°/second.
As in the figure at the right, the reference pitch angle is compared with the actual pitch angle b, and the difference is then corrected by the actuator. The reference pitch angle, which comes from the PI controller, goes through a limiter. The limits are important to keep the pitch angle within physically realistic values. Limiting the change rate is especially important during network faults, because the limiter determines how quickly the controller can reduce the aerodynamic energy in order to avoid acceleration during faults.
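A minimal sketch of such a pitch loop follows; only the 0°–30° range and the 10°/second rate limit come from the description above, while the gains, servo time constant and per-unit power signal are illustrative assumptions.
```python
def pi_pitch_step(power_pu, power_ref_pu, state, dt,
                  kp=40.0, ki=8.0, t_servo=0.5,
                  rate_limit=10.0, angle_min=0.0, angle_max=30.0):
    """One step of a PI pitch loop followed by a rate- and angle-limited servo."""
    integ, pitch = state
    error = power_pu - power_ref_pu          # excess power -> pitch towards feather
    integ += error * dt
    pitch_ref = min(max(kp * error + ki * integ, angle_min), angle_max)

    rate = (pitch_ref - pitch) / t_servo     # first-order servo response
    rate = min(max(rate, -rate_limit), rate_limit)
    pitch = min(max(pitch + rate * dt, angle_min), angle_max)
    return (integ, pitch)

state = (0.0, 0.0)                           # (integrator, pitch angle in degrees)
for _ in range(100):                         # 10 s of a sustained 15% over-power gust
    state = pi_pitch_step(power_pu=1.15, power_ref_pu=1.0, state=state, dt=0.1)
print(f"pitch after 10 s: {state[1]:.1f} degrees")
```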
Other controls.
Generator torque.
Modern large wind turbines operate at variable speeds. When wind speed falls below the turbine's rated speed, generator torque is used to control the rotor speed to capture as much power as possible. The most power is captured when the tip speed ratio is held constant at its optimum value (typically between 6 and 7). This means that rotor speed increases proportional to wind speed. The difference between the aerodynamic torque captured by the blades and the applied generator torque controls the rotor speed. If the generator torque is lower, the rotor accelerates, and if the generator torque is higher, the rotor slows. Below rated wind speed, the generator torque control is active while the blade pitch is typically held at the constant angle that captures the most power, fairly flat to the wind. Above rated wind speed, the generator torque is typically held constant while the blade pitch is adjusted accordingly.
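Below rated wind speed this strategy is commonly implemented as a generator torque demand proportional to the square of rotor speed, which holds the tip speed ratio near its optimum in steady wind. The sketch below uses that standard law with illustrative rotor parameters; it is not any particular manufacturer's controller.
```python
import math

rho = 1.225        # kg/m^3, air density
R = 50.0           # m, rotor radius (hypothetical)
cp_max = 0.45      # peak power coefficient (below the Betz limit of 16/27)
lam_opt = 7.0      # optimal tip speed ratio

A = math.pi * R**2
k = 0.5 * rho * A * R**3 * cp_max / lam_opt**3    # torque-law constant

def generator_torque(rotor_speed_rad_s: float) -> float:
    """Torque demand (N*m) that balances aerodynamic torque at the optimal tip speed ratio."""
    return k * rotor_speed_rad_s**2

for wind in (5.0, 8.0, 11.0):                     # m/s, below rated
    omega = lam_opt * wind / R                    # rotor speed that tracks lam_opt
    print(f"wind {wind:4.1f} m/s -> omega {omega:.2f} rad/s, "
          f"torque {generator_torque(omega)/1e3:6.0f} kNm")
```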
One technique to control a permanent magnet synchronous motor is field-oriented control. Field-oriented control is a closed loop strategy composed of two current controllers (an inner loop and cascading outer loop) necessary for controlling the torque, and one speed controller.
Constant torque angle control.
In this control strategy the d-axis current is kept at zero, while the vector current aligns with the q axis in order to maintain the torque angle at 90°. This is a common control strategy because only the Iqs current must be controlled. The torque equation of the generator is a linear equation dependent only on the Iqs current.
So, the electromagnetic torque for Ids = 0 (we can achieve that with the d-axis controller) is now:
formula_0
Thus, the complete system of the machine-side converter and the cascaded PI controller loops is given by the figure. The control inputs are the duty ratios mds and mqs of the PWM-regulated converter. The figure displays the control scheme for the wind turbine on the machine side and, simultaneously, how Ids is regulated to zero (so that the torque equation is linear).
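A small sketch of the resulting torque relation for a surface-mounted permanent-magnet machine is shown below; it uses the generic dq-frame textbook expression with invented parameters and is not the article's exact model.
```python
def pmsm_torque(i_qs, pole_pairs=48, flux_linkage=5.0,
                i_ds=0.0, L_d=0.002, L_q=0.002):
    """Electromagnetic torque of a PMSM in the dq frame (SI units, illustrative parameters)."""
    return 1.5 * pole_pairs * (flux_linkage * i_qs + (L_d - L_q) * i_ds * i_qs)

# With i_ds regulated to zero (constant torque angle control) the torque reduces to
# 1.5 * p * lambda_pm * i_qs, so a torque demand maps linearly to a q-axis current reference:
torque_demand = 1.0e6                          # N*m, illustrative
i_q_ref = torque_demand / (1.5 * 48 * 5.0)
print(f"i_q reference: {i_q_ref:.1f} A, torque check: {pmsm_torque(i_q_ref):.0f} N*m")
```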
Yawing.
Large turbines are typically actively controlled to face the wind direction measured by a wind vane situated on the back of the nacelle. By minimizing the yaw angle (the misalignment between wind and turbine pointing direction), power output is maximized and non-symmetrical loads minimized. However, since wind direction varies, the turbine does not strictly follow the wind and experiences a small yaw angle on average. The power output losses can be approximated as falling with the cube of the cosine of the yaw angle, (cos(yaw angle))^3. Particularly at low-to-medium wind speeds, yawing can significantly reduce output, with common wind direction variations reaching 30°. At high wind speeds, wind direction is less variable.
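A quick illustration of that cosine-cubed approximation (a sketch, not a validated loss model):
```python
import math

def yaw_power_factor(yaw_deg: float) -> float:
    """Approximate fraction of aligned-rotor power retained at a given yaw misalignment."""
    return math.cos(math.radians(yaw_deg)) ** 3

for yaw in (5, 10, 20, 30):
    loss_pct = 100 * (1 - yaw_power_factor(yaw))
    print(f"{yaw:2d} deg yaw -> about {loss_pct:4.1f}% power loss")
```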
Electrical braking.
Braking a small turbine can be done by dumping energy from the generator into a resistor bank, converting kinetic energy into heat. This method is useful if the kinetic load on the generator is suddenly reduced or is too small to keep the turbine speed within its allowed limit.
Cyclic braking slows the blades, which increases the stalling effect and reduces efficiency. Rotation can thus be kept at a safe speed in faster winds while maintaining (nominal) power output. This method is usually not applied on large, grid-connected wind turbines.
Mechanical braking.
A mechanical drum brake or disc brake stops rotation in emergency situations such as extreme gust events. The brake is a secondary means to hold the turbine at rest for maintenance, with a rotor lock system as primary means. Such brakes are usually applied only after blade furling and electromagnetic braking have reduced the turbine speed because mechanical brakes can ignite a fire inside the nacelle if used at full speed. Turbine load increases if the brake is applied at rated RPM.
Turbine size.
Turbines come in size classes. The smallest, with power less than 10 kW, are used in homes, farms and remote applications, whereas intermediate wind turbines (10–250 kW) are useful for village power, hybrid systems and distributed power. The world's largest wind turbine as of 2021 was Vestas' V236-15.0 MW turbine. The new design's blades offer the largest swept area in the world, with three blades giving a rotor diameter of . Ming Yang in China has announced a larger 16 MW design.
For a given wind speed, turbine mass is approximately proportional to the cube of its blade-length. Wind power intercepted is proportional to the square of blade-length. The maximum blade-length of a turbine is limited by strength, stiffness, and transport considerations.
Labor and maintenance costs increase more slowly than turbine size, so to minimize costs, wind farm turbines are basically limited by the strength of materials and by siting requirements.
Low temperature.
Utility-scale wind turbine generators have minimum temperature operating limits that apply in areas with temperatures below . Turbines must be protected from ice accumulation that can make anemometer readings inaccurate and which, in certain turbine control designs, can cause high structure loads and damage. Some turbine manufacturers offer low-temperature packages at extra cost, which include internal heaters, different lubricants, and different alloys for structural elements. If low-temperatures are combined with a low-wind condition, the turbine requires an external supply of power, equivalent to a few percent of its rated output, for internal heating. For example, the St. Leon Wind Farm in Manitoba, Canada, has a total rating of 99 MW and is estimated to need up to 3 MW (around 3% of capacity) of station service power a few days a year for temperatures down to .
Nacelle.
The nacelle houses the gearbox and generator connecting the tower and rotor. Sensors detect the wind speed and direction, and motors turn the nacelle into the wind to maximize output.
Gearbox.
In conventional wind turbines, the blades spin a shaft that is connected through a gearbox to the generator. The gearbox converts the turning speed of the blades (15 to 20 RPM for a one-megawatt turbine) into the 1,800 (750–3,600) RPM that the generator needs to generate electricity. Gearboxes are one of the more expensive components for installing and maintaining wind turbines. Analysts from GlobalData estimate that the gearbox market grew from $3.2bn in 2006 to $6.9bn in 2011. The market leader for gearbox production was Winergy in 2011. The use of magnetic gearboxes has been explored as a way of reducing maintenance costs.
Generator.
For large horizontal-axis wind turbines (HAWT), the generator is mounted in a nacelle at the top of a tower, behind the rotor hub. Older wind turbines generate electricity through asynchronous machines directly connected to the grid. The gearbox reduces generator cost and weight. Commercial generators have a rotor carrying a winding so that a rotating magnetic field is produced inside a set of windings called the stator. While the rotating winding consumes a fraction of a percent of the generator output, adjustment of the field current allows good control over the output voltage.
The rotor's varying output frequency and voltage can be matched to the fixed values of the grid using multiple technologies such as doubly fed induction generators or full-effect converters, which convert the variable-frequency current to DC and then back to AC using inverters. Although such alternatives require costly equipment and incur power losses, the turbine can capture a significantly larger fraction of the wind energy. Most are low voltage, 660 volts, but some offshore turbines (several MW) are 3.3 kV medium voltage.
In some cases, especially when offshore, a large collector transformer converts the wind farm's medium-voltage AC grid to DC and transmits the energy through a power cable to an onshore HVDC converter station.
Hydraulic.
Hydraulic wind turbines perform the frequency and torque adjustments of gearboxes via a pressurized hydraulic fluid. Typically, the action of the turbine pressurizes the fluid with a hydraulic pump at the nacelle. Meanwhile, components on the ground can transform this pressure into energy, and recirculate the working fluid. Typically, the working fluid used in this kind of hydrostatic transmission is oil, which serves as a lubricant, reducing losses due to friction in the hydraulic units and allowing for a broad range of operating temperatures. However, other concepts are currently under study, which involve using water as the working fluid because it is abundant and eco-friendly.
Hydraulic turbines provide benefits to both operation and capital costs. They can use hydraulic units with variable displacement to form a continuously variable transmission that adapts in real time. This decouples generator speed from rotor speed, avoiding stalling and allowing the turbine to operate at an optimum speed and torque. This built-in transmission is how these hydraulic systems avoid the need for a conventional gearbox. Furthermore, hydraulic instead of mechanical power conversion introduces a damping effect on rotation fluctuations, reducing fatigue of the drivetrain and improving turbine structural integrity. Additionally, using a pressurized fluid instead of mechanical components allows the electrical conversion to occur on the ground instead of in the nacelle: this reduces maintenance difficulty, and reduces the weight and lowers the center of gravity of the turbine. Studies estimate that these benefits may yield a 3.9-18.9% reduction in the levelized cost of power for offshore wind turbines.
Some years ago, Mitsubishi, through its branch Artemis, deployed the Sea Angel, a unique hydraulic wind turbine at the utility scale. The Digital Displacement technology underwent trials on the Sea Angel, a wind turbine rated at 7 MW. This design is capable of adjusting the displacement of the central unit in response to erratic wind velocities, thereby maintaining the optimal efficiency of the system. Still, these systems are newer and in earlier stages of commercialization compared to conventional gearboxes.
Gearless.
Gearless wind turbines (also called direct drive) eliminate the gearbox. Instead, the rotor shaft is attached directly to the generator, which spins at the same speed as the blades.
Advantages of permanent magnet direct drive generators (PMDD) over geared generators include increased efficiency, reduced noise, longer lifetime, high torque at low RPM, faster and precise positioning, and drive stiffness. PMDD generators "eliminate the gear-speed increaser, which is susceptible to significant accumulated fatigue torque loading, related reliability issues, and maintenance costs".
To make up for a direct-drive generator's slower rotation rate, the diameter of the generator's rotor is increased so that it can contain more magnets to create the required frequency and power. Gearless wind turbines are often heavier than geared wind turbines. An EU study showed that gearbox reliability is not the main problem in wind turbines. The reliability of direct drive turbines offshore is still not known, given the small sample size.
Experts from Technical University of Denmark estimate that a geared generator with permanent magnets may require 25 kg/MW of the rare-earth element neodymium, while a gearless may use 250 kg/MW.
In December 2011, the US Department of Energy announced a critical shortage of rare-earth elements such as neodymium. China produces more than 95% of rare-earth elements, while Hitachi holds more than 600 patents covering neodymium magnets. Direct-drive turbines require 600 kg of permanent magnet material per megawatt, which translates to several hundred kilograms of rare-earth content per megawatt, as neodymium content is estimated to be 31% of magnet weight. Hybrid drivetrains (intermediate between direct drive and traditional geared) use significantly less rare-earth materials. While permanent magnet wind turbines only account for about 5% of the market outside of China, their market share inside of China is estimated at 25% or higher. In 2011, demand for neodymium in wind turbines was estimated to be 1/5 of that in electric vehicles.
Blades.
Blade design.
The ratio between the blade speed and the wind speed is called tip-speed ratio. High efficiency 3-blade-turbines have tip speed/wind speed ratios of 6 to 7. Wind turbines spin at varying speeds (a consequence of their generator design). Use of aluminum and composite materials has contributed to low rotational inertia, which means that newer wind turbines can accelerate quickly if the winds pick up, keeping the tip speed ratio more nearly constant. Operating closer to their optimal tip speed ratio during energetic gusts of wind allows wind turbines to improve energy capture from sudden gusts.
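As a rough illustration of what a tip-speed ratio of 6 to 7 implies, the sketch below converts it to a rotor speed for an assumed wind speed and rotor diameter; the numbers are hypothetical and not taken from any particular turbine.

```python
import math

def rotor_rpm(tip_speed_ratio, wind_speed_ms, rotor_diameter_m):
    """Rotor speed implied by a given tip-speed ratio (illustrative, not a design formula)."""
    tip_speed = tip_speed_ratio * wind_speed_ms   # blade tip speed in m/s
    circumference = math.pi * rotor_diameter_m    # tip travel per revolution in m
    return tip_speed / circumference * 60         # revolutions per minute

# Hypothetical 100 m rotor in a 10 m/s wind at a tip-speed ratio of 7:
print(f"{rotor_rpm(7, 10, 100):.1f} rpm")         # about 13 rpm
```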
Noise increases with tip speed. Increasing tip speed without increasing noise would reduce torque into the gearbox and generator and reduce structural loads, thereby reducing cost. The noise reduction is linked to the detailed blade aerodynamics, especially factors that reduce abrupt stalling. The inability to predict stall restricts the use of aggressive aerodynamics. Some blades (mostly on Enercon turbines) have a winglet to increase performance and reduce noise.
A blade can have a lift-to-drag ratio of 120, compared to 70 for a sailplane and 15 for an airliner.
The hub.
In simple designs, the blades are directly bolted to the hub and are unable to pitch, which leads to aerodynamic stall above certain windspeeds. In more sophisticated designs, they are bolted to the pitch bearing, which adjusts their angle of attack with the help of a pitch system according to the wind speed. Pitch control is performed by hydraulic or electric systems (battery or ultracapacitor). The pitch bearing is bolted to the hub. The hub is fixed to the rotor shaft, which drives the generator directly or through a gearbox.
Blade count.
The number of blades is selected for aerodynamic efficiency, component costs, and system reliability. Noise emissions are affected by the location of the blades upwind or downwind of the tower and the rotor speed. Given that the noise emissions from the blades' trailing edges and tips vary by the 5th power of blade speed, a small increase in tip speed dramatically increases noise.
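The following small calculation, assuming the stated fifth-power dependence, shows how sharply radiated sound power rises with tip speed.

```python
import math

def noise_increase_db(speed_ratio, exponent=5):
    """Change in sound power level (dB) when blade speed is multiplied by speed_ratio,
    assuming noise power scales with the 5th power of blade speed as stated above."""
    return 10 * math.log10(speed_ratio ** exponent)

print(f"+10% tip speed: +{noise_increase_db(1.10):.1f} dB")   # about +2 dB
print(f"+25% tip speed: +{noise_increase_db(1.25):.1f} dB")   # about +5 dB
```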
Wind turbines almost universally use either two or three blades. However, patents present designs with additional blades, such as Chan Shin's multi-unit rotor blade system. Aerodynamic efficiency increases with number of blades but with diminishing return. Increasing from one to two yields a six percent increase, while going from two to three yields an additional three percent. Further increasing the blade count yields minimal improvements and sacrifices too much in blade stiffness as the blades become thinner.
Theoretically, an infinite number of blades of zero width is the most efficient, operating at a high value of the tip speed ratio, but this is not practical.
Component costs affected by blade count are primarily for materials and manufacturing of the turbine rotor and drive train. Generally, the lower the number of blades, the lower the material and manufacturing costs. In addition, fewer blades allow higher rotational speed. Blade stiffness requirements to avoid tower interference limit blade thickness, but only when the blades are upwind of the tower; deflection in a downwind machine increases tower clearance. Fewer blades with higher rotational speeds reduce peak torque in the drive train, resulting in lower gearbox and generator costs.
System reliability is affected by blade count primarily through the dynamic loading of the rotor into the drive train and tower systems. While aligning the wind turbine to changes in wind direction (yawing), each blade experiences a cyclic load at its root end depending on blade position. However, these cyclic loads when combined at the drive train shaft are symmetrically balanced for three blades, yielding smoother operation during yaw. One or two blade turbines can use a pivoting teetered hub to nearly eliminate the cyclic loads into the drive shaft and system during yawing. In 2012, a Chinese 3.6 MW two-blade turbine was tested in Denmark.
Aesthetics are a factor in that the three-bladed rotor is generally rated as more pleasing to look at than a one- or two-bladed rotor.
Blade materials.
In general, materials should meet the following criteria:
Metals are undesirable because of their vulnerability to fatigue. Ceramics have low fracture toughness, resulting in early blade failure. Traditional polymers are not stiff enough to be useful, and wood has problems with repeatability, especially considering the blade length. That leaves fiber-reinforced composites, which have high strength and stiffness and low density.
Wood and canvas sails were used on early windmills due to their low price, availability, and ease of manufacture. Smaller blades can be made from light metals such as aluminium. These materials, however, require frequent maintenance. Wood and canvas construction limits the airfoil shape to a flat plate, which has a relatively high ratio of drag to force captured (low aerodynamic efficiency) compared to solid airfoils. Construction of solid airfoil designs requires inflexible materials such as metals or composites. Some blades incorporate lightning conductors.
Increasing blade length pushed power generation from the single megawatt range to upwards of 10 megawatts. A larger area effectively increases tip-speed ratio at a given wind speed, thus increasing its energy extraction. Software such as HyperSizer (originally developed for spacecraft design) can be used to improve blade design.
As of 2015 the rotor diameters of onshore wind turbine blades reached 130 meters, while the diameter of offshore turbines reached 170 meters. In 2001, an estimated 50 million kilograms of fiberglass laminate were used in wind turbine blades.
An important goal is to control blade weight. Since blade mass scales as the cube of the turbine radius, gravity loading constrains systems with larger blades. Gravitational loads include axial and tensile/compressive loads (top/bottom of rotation) as well as bending (lateral positions). The magnitude of these loads fluctuates cyclically, and the edgewise moments (see below) are reversed every 180° of rotation. Typical rotor speeds and design lifetimes are ~10 rpm and 20 years, respectively, with the number of lifetime revolutions on the order of 10^8. Considering wind as well, turbine blades are expected to go through ~10^9 loading cycles.
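A back-of-envelope check of the lifetime-revolution figure quoted above, using a rotor speed of roughly 10 rpm and a 20-year design life:

```python
rpm = 10                                 # typical rotor speed assumed above
years = 20                               # design life assumed above
minutes_per_year = 60 * 24 * 365
revolutions = rpm * minutes_per_year * years
print(f"{revolutions:.2e} lifetime revolutions")   # about 1.05e+08, i.e. on the order of 10^8
```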
Wind is another source of rotor blade loading. Lift causes bending in the flapwise direction (out of the rotor plane), while airflow around the blade causes edgewise bending (in the rotor plane). Flapwise bending involves tension on the pressure (upwind) side and compression on the suction (downwind) side. Edgewise bending involves tension on the leading edge and compression on the trailing edge.
Wind loads are cyclical because of natural variability in wind speed and wind shear (higher speeds at top of rotation).
Failure in ultimate loading of wind-turbine rotor blades exposed to wind and gravity loading is a failure mode that needs to be considered when the rotor blades are designed. The wind speed that causes bending of the rotor blades exhibits a natural variability, and so does the stress response in the rotor blades. Also, the resistance of the rotor blades, in terms of their tensile strengths, exhibits a natural variability. Given the increasing size of production wind turbines, blade failures are increasingly relevant when assessing public safety risks from wind turbines. The most common failure is the loss of a blade or part thereof. This has to be considered in the design.
In light of these failure modes and increasingly larger blade systems, researchers seek cost-effective materials with higher strength-to-mass ratios.
Polymer.
The majority of commercialized wind turbine blades are made from fiber-reinforced polymers (FRPs), which are composites consisting of a polymer matrix and fibers. The long fibers provide longitudinal stiffness and strength, and the matrix provides fracture toughness, delamination strength, out-of-plane strength, and stiffness. Material indices based on maximizing power efficiency, high fracture toughness, fatigue resistance, and thermal stability are highest for glass and carbon fiber reinforced plastics (GFRPs and CFRPs).
In turbine blades, matrices such as thermosets or thermoplastics are used; as of 2017, thermosets are more common. These allow the fibers to be bound together and add toughness. Thermosets make up 80% of the market, as they have lower viscosity and also allow low-temperature cure, both features contributing to ease of processing during manufacture. Thermoplastics offer recyclability that the thermosets do not; however, their processing temperature and viscosity are much higher, limiting the product size and consistency, which are both important for large blades. Fracture toughness is higher for thermoplastics, but the fatigue behavior is worse.
Manufacturing blades in the 40 to 50-metre range involves proven fiberglass composite fabrication techniques. Manufacturers such as Nordex SE and GE Wind use an infusion process. Other manufacturers vary this technique, some including carbon and wood with fiberglass in an epoxy matrix. Other options include pre-impregnated ("prepreg") fiberglass and vacuum-assisted resin transfer moulding. Each of these options uses a glass-fiber reinforced polymer composite constructed with differing complexity. Perhaps the largest issue with open-mould, wet systems is the emissions associated with the volatile organic compounds ("VOCs") released. Preimpregnated materials and resin infusion techniques contain all VOCs; however, these contained processes have their own challenges, because the production of thick laminates necessary for structural components becomes more difficult. In particular, the preform resin permeability dictates the maximum laminate thickness; also, bleeding is required to eliminate voids and ensure proper resin distribution. One solution to resin distribution is to use partially impregnated fiberglass. During evacuation, the dry fabric provides a path for airflow and, once heat and pressure are applied, the resin may flow into the dry region, resulting in an evenly impregnated laminate structure.
Epoxy.
Epoxy-based composites have environmental, production, and cost advantages over other resin systems. Epoxies also allow shorter cure cycles, increased durability, and improved surface finish. Prepreg operations further reduce processing time over wet lay-up systems. As turbine blades passed 60 metres, infusion techniques became more prevalent, because traditional resin transfer moulding injection times are too long compared to resin set-up time, limiting laminate thickness. Injection forces resin through a thicker ply stack, thus depositing the resin in the laminate structure before gelation occurs. Specialized epoxy resins have been developed to customize lifetimes and viscosity.
Carbon fiber-reinforced load-bearing spars can reduce weight and increase stiffness. Using carbon fibers in 60-metre turbine blades is estimated to reduce total blade mass by 38% and decrease cost by 14% compared to 100% fiberglass. Carbon fibers have the added benefit of reducing the thickness of fiberglass laminate sections, further addressing the problems associated with resin wetting of thick lay-up sections. Wind turbines benefit from the trend of decreasing carbon fiber costs.
Although glass and carbon fibers have many optimal qualities, their downsides include the fact that high filler fraction (10-70 wt%) causes increased density as well as microscopic defects and voids that can lead to premature failure.
Carbon nanotubes.
Carbon nanotubes (CNTs) can reinforce polymer-based nanocomposites. CNTs can be grown or deposited on the fibers or added into polymer resins as a matrix for FRP structures. Using nanoscale CNTs as filler instead of traditional microscale filler (such as glass or carbon fibers) results in CNT/polymer nanocomposites, for which the properties can be changed significantly at low filler contents (typically < 5 wt%). They have low density and improve the elastic modulus, strength, and fracture toughness of the polymer matrix. The addition of CNTs to the matrix also reduces the propagation of interlaminar cracks.
Research on a low-cost carbon fiber (LCCF) at Oak Ridge National Laboratory gained attention in 2020, because it can mitigate the structural damage from lightning strikes. On glass fiber wind turbines, lightning strike protection (LSP) is usually added on top, but this is effectively deadweight in terms of structural contribution. Using conductive carbon fiber can avoid adding this extra weight.
Research.
Some polymer composites feature self-healing properties. Since the blades of the turbine form cracks from fatigue due to repetitive cyclic stresses, self-healing polymers are attractive for this application, because they can improve reliability and buffer various defects such as delamination. Embedding paraffin wax-coated copper wires in a fiber reinforced polymer creates a network of tubes. Using a catalyst, these tubes and dicyclopentadiene (DCPD) then react to form a thermosetting polymer, which repairs the cracks as they form in the material. As of 2019, this approach is not yet commercial.
Further improvement is possible through the use of carbon nanofibers (CNFs) in the blade coatings. A major problem in desert environments is erosion of the leading edges of blades by sand-laden wind, which increases roughness and decreases aerodynamic performance. The particle erosion resistance of fiber-reinforced polymers is poor when compared to metallic materials and elastomers. Replacing glass fiber with CNF on the composite surface greatly improves erosion resistance. CNFs provide good electrical conductivity (important for lightning strikes), high damping ratio, and good impact-friction resistance.
For wind turbines, especially those offshore, or in wet environments, base surface erosion also occurs. For example, in cold climates, ice can build up on the blades and increase roughness. At high speeds, this same erosion impact can occur from rainwater. A useful coating must have good adhesion, temperature tolerance, weather tolerance (to resist erosion from salt, rain, sand, etc.), mechanical strength, ultraviolet light tolerance, and have anti-icing and flame retardant properties. Along with this, the coating should be cheap and environmentally friendly.
Superhydrophobic surfaces (SHS) cause water droplets to bead and roll off the blades. SHS prevent ice formation down to -25 °C, as they change the ice formation process; specifically, small ice islands form on SHS, as opposed to a large ice front. Further, due to the lowered surface area in contact with the hydrophobic surface, aerodynamic forces on the blade allow these islands to glide off the blade, maintaining proper aerodynamics. SHS can be combined with heating elements to further prevent ice formation.
Lightning.
Lightning damage over the course of a 25-year lifetime ranges from surface-level scorching and cracking of the laminate material to ruptures in the blade or full separation of the adhesives that hold the blade together. It is most common to observe lightning strikes on the tips of the blades, especially in rainy weather, due to the embedded copper wiring there. The most common countermeasure, especially in non-conducting blade materials like GFRPs and CFRPs, is to add lightning "arresters": metallic wires that conduct the strike current to ground, bypassing the blade material and gearbox entirely.
Blade repair.
Wind turbine blades typically require repair after 2–5 years. Notable causes of blade damage include manufacturing defects, transportation, assembly, installation, lightning strikes, environmental wear, thermal cycling, leading-edge erosion, and fatigue. Due to composite blade materials and function, repair techniques found in aerospace applications often apply or provide a basis for basic repairs.
Depending on the nature of the damage, the approach of blade repairs can vary. Erosion repair and protection includes coatings, tapes, or shields. Structural repairs require bonding or fastening new material to the damaged area. Nonstructural matrix cracks and delaminations require fills and seals or resin injections. If ignored, minor cracks or delaminations can propagate and create structural damage.
Four zones have been identified with their respective repair needs:
After the past few decades of rapid wind expansion across the globe, wind turbines are aging. This aging brings operation and maintenance (O&M) costs along with it, which increase as turbines approach their end of life. If damage to blades is not caught in time, power production and blade lifespan are reduced. Estimates project that 20-25% of the total levelized cost per kWh produced stems from blade O&M alone.
Blade recycling.
The Global Wind Energy Council (GWEC) predicted that wind energy will supply 28.5% of global energy by 2030. This requires a newer and larger fleet of more efficient turbines and the corresponding decommissioning of older ones. Based on a European Wind Energy Association study, in 2010 between 110 and 140 kilotonnes of composites were consumed to manufacture blades. The majority of the blade material ends up as waste, and requires recycling. As of 2020, most end-of-use blades are stored or sent to landfills rather than recycled. Typically, glass-fiber-reinforced polymers (GFRPs) comprise around 70% of the laminate material in the blade. GFRPs are not combustible, and so hinder the incineration of combustible materials. Therefore, conventional recycling methods are inappropriate. Depending on whether individual fibers are to be recovered, GFRP recycling may involve:
Start-up company Global Fiberglass Solutions claimed in 2020 that it had a method to process blades into pellets and fiber boards for use in flooring and walls. The company started producing samples at a plant in Sweetwater, Texas.
Tower.
Height.
Wind velocities increase at higher altitudes due to surface aerodynamic drag (by land or water surfaces) and air viscosity. The variation in velocity with altitude, called wind shear, is most dramatic near the surface. Typically, the variation follows the wind profile power law, which predicts that wind speed rises proportionally to the seventh root of altitude. Doubling the altitude of a turbine, then, increases the expected wind speeds by 10% and the expected power by 34%. To avoid buckling, doubling the tower height generally requires doubling the tower diameter, increasing the amount of material by a factor of at least four.
During the night, or when the atmosphere becomes stable, wind speed close to the ground usually subsides, whereas at turbine hub altitude it does not decrease that much or may even increase. As a result, the wind speed is higher and a turbine will produce more power than expected from the 1/7 power law: doubling the altitude may increase wind speed by 20% to 60%. A stable atmosphere is caused by radiative cooling of the surface and is common in a temperate climate: it usually occurs when there is a (partly) clear sky at night. When the (high-altitude) wind is strong (a 10-meter wind speed higher than approximately 6 to 7 m/s) the stable atmosphere is disrupted because of friction turbulence and the atmosphere turns neutral. A daytime atmosphere is either neutral (no net radiation; usually with strong winds and heavy clouding) or unstable (rising air because of ground heating by the sun). In these conditions the 1/7 power law is a good approximation of the wind profile. Indiana had been rated as having a wind capacity of 30,000 MW, but raising the expected turbine height from 50 m to 70 m raised the estimated capacity to 40,000 MW, and it could be double that at 100 m.
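A minimal sketch of the wind profile power law discussed above, reproducing the quoted figures: doubling hub height raises wind speed by about 10% and available power (which scales with the cube of wind speed) by about 34%. The 7 m/s reference speed is an arbitrary example value.

```python
def wind_speed(v_ref, h_ref, h, alpha=1 / 7):
    """Wind profile power law: v(h) = v_ref * (h / h_ref) ** alpha."""
    return v_ref * (h / h_ref) ** alpha

v50 = 7.0                                 # hypothetical 7 m/s at a 50 m hub height
v100 = wind_speed(v50, 50, 100)           # doubled hub height
print(f"speed gain: {v100 / v50 - 1:.1%}")          # about +10%
print(f"power gain: {(v100 / v50) ** 3 - 1:.1%}")   # about +34%
```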
For HAWTs, tower heights approximately two to three times the blade length balance material costs of the tower against better utilisation of the more expensive active components.
Road restrictions make tower transport with a diameter of more than 4.3 m difficult. Swedish analyses showed that the bottom wing tip must be at least 30 m above the tree tops. A 3 MW turbine may increase output from 5,000 MWh to 7,700 MWh per year by rising from 80 to 125 meters. A tower profile made of connected shells rather than cylinders can have a larger diameter and still be transportable. A 100 m prototype tower with TC bolted 18 mm 'plank' shells at the wind turbine test center Høvsøre in Denmark was certified by Det Norske Veritas, with a Siemens nacelle. Shell elements can be shipped in standard 12 m shipping containers.
As of 2003, typical modern wind turbine installations used towers. Height is typically limited by the availability of cranes. This led to proposals for "partially self-erecting wind turbines" that, for a given available crane, allow taller towers that locate a turbine in stronger and steadier winds, and "self-erecting wind turbines" that could be installed without cranes.
Materials.
Currently, the majority of wind turbines are supported by conical tubular steel towers. These towers represent 30%-65% of the turbine weight and therefore account for a large percentage of transport costs. The use of lighter tower materials could reduce the overall transport and construction cost, as long as stability is maintained. Higher-grade S500 steel costs 20%-25% more than S335 steel (standard structural steel), but it requires 30% less material because of its improved strength. Therefore, replacing wind turbine towers with S500 steel offers savings in weight and cost.
Another disadvantage of conical steel towers is meeting the requirements of wind turbines taller than 90 meters. High performance concrete may increase tower height and increase lifetime. A hybrid of prestressed concrete and steel improves performance over standard tubular steel at tower heights of 120 meters. Concrete also allows small precast sections to be assembled on site. One downside of concrete towers is the higher CO2 emissions during concrete production. However, the overall environmental impact should be positive if concrete towers can double the wind turbine lifetime.
Wood is another alternative: a 100-metre tower supporting a 1.5 MW turbine operates in Germany. The wood tower shares the same transportation benefits of the segmented steel shell tower, but without the steel. A 2 MW turbine on a wooden tower started operating in Sweden in 2023.
Another approach is to form the tower on site via spiral welding rolled sheet steel. Towers of any height and diameter can be formed this way, eliminating restrictions driven by transport requirements. A factory can be built in one month. The developer claims 80% labor savings over conventional approaches.
Grid connection.
Grid-connected wind turbines, until the 1970s, were fixed-speed. As recently as 2003, nearly all grid-connected wind turbines operated at constant speed (synchronous generators) or within a few percent of constant speed (induction generators). As of 2011, many turbines used fixed-speed induction generators (FSIG). By then, most newly connected turbines were variable speed.
Early control systems were designed for peak power extraction, also called maximum power point tracking—they attempted to pull the maximum power from a given wind turbine under the current wind conditions. More recent systems deliberately pull less than maximum power in most circumstances, in order to provide other benefits, which include:
The generator produces alternating current (AC). The most common method in large modern turbines is to use a doubly fed induction generator directly connected to the grid. Some turbines drive an AC/AC converter—which converts the AC to direct current (DC) with a rectifier and then back to AC with an inverter—in order to match grid frequency and phase.
A useful technique to connect a PMSG to the grid is via a back-to-back converter. Control schemes can achieve unity power factor in the connection to the grid. In that way the wind turbine does not consume reactive power, which is the most common problem with turbines that use induction machines. This leads to a more stable power system. Moreover, with different control schemes a PMSG turbine can provide or consume reactive power. So, it can work as a dynamic capacitor/inductor bank to help with grid stability.
The diagram shows the control scheme for a unity power factor:
Reactive power regulation consists of one PI controller in order to achieve operation with unity power factor (i.e. Qgrid = 0). IdN has to be regulated to reach zero at steady state (IdNref = 0).
The complete system of the grid side converter and the cascaded PI controller loops is displayed in the figure.
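The following is a minimal, purely illustrative discrete PI loop of the kind described above, driving the d-axis grid current IdN to its reference of zero for unity power factor; the first-order current response and the gains kp and ki are hypothetical placeholders, not values from any real converter.

```python
def pi_step(error, integral, kp, ki, dt):
    """One step of a discrete PI controller; returns the control output and updated integral."""
    integral += error * dt
    return kp * error + ki * integral, integral

kp, ki, dt = 2.0, 20.0, 1e-3
idn, idn_ref, integral = 5.0, 0.0, 0.0        # start with 5 A of d-axis current
for _ in range(2000):                         # 2 s of simulated time
    u, integral = pi_step(idn_ref - idn, integral, kp, ki, dt)
    idn += (u - idn) * 50 * dt                # crude first-order current response (hypothetical plant)
print(f"IdN after 2 s: {idn:.4f} A")          # close to zero, i.e. Qgrid close to zero
```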
Construction.
As wind turbine usage has increased, so have companies that assist in the planning and construction of wind turbines. Most often, turbine parts are shipped via sea or rail, and then via truck to the installation site. Due to the massive size of the components involved, companies usually need to obtain transportation permits and ensure that the chosen trucking route is free of potential obstacles such as overpasses, bridges, and narrow roads. Groups known as "reconnaissance teams" will scout the way up to a year in advance as they identify problematic roads, cut down trees, and relocate utility poles. Turbine blades continue to increase in size, sometimes necessitating brand new logistical plans, as previously used routes may not allow a larger blade. Specialized vehicles known as Schnabel trailers are custom-designed to load and transport turbine sections: tower sections can be loaded without a crane and the rear end of the trailer is steerable, allowing for easier maneuvering. Drivers must be specially trained.
Foundations.
Wind turbines, by their nature, are very tall, slender structures, and this can cause a number of issues when the structural design of the foundations is considered. The foundations for a conventional engineering structure are designed mainly to transfer the vertical load (dead weight) to the ground, generally allowing a comparatively unsophisticated arrangement to be used. However, in the case of wind turbines, the force of the wind's interaction with the rotor at the top of the tower creates a strong tendency to tip the wind turbine over. This loading regime causes large moment loads to be applied to the foundations of a wind turbine. As a result, considerable attention needs to be given when designing the footings to ensure that the foundation will resist this tipping tendency.
One of the most common foundations for offshore wind turbines is the monopile, a single large-diameter (4 to 6 metres) tubular steel pile driven to a depth of 5-6 times the diameter of the pile into the seabed. The cohesion of the soil, and friction between the pile and the soil provide the necessary structural support for the wind turbine.
In onshore turbines the most common type of foundation is a gravity foundation, where a large mass of concrete spread out over a large area is used to resist the turbine loads. Wind turbine size & type, wind conditions and soil conditions at the site are all determining factors in the design of the foundation. Prestressed piles or rock anchors are alternative foundation designs that use much less concrete and steel.
Costs.
A wind turbine is a complex and integrated system. Structural elements comprise the majority of the weight and cost. All parts of the structure must be inexpensive, lightweight, durable, and manufacturable, surviving variable loading and environmental conditions. Turbine systems with fewer failures require less maintenance, are lighter and last longer, reducing costs.
By cost, the major parts of a turbine divide roughly as: tower 22%, blades 18%, gearbox 14%, generator 8%.
Specification.
Turbine design specifications contain a power curve and availability guarantee. The wind resource assessment makes it possible to calculate commercial viability. Typical operating temperature range is . In areas with extreme climate (like Inner Mongolia or Rajasthan) climate-specific versions are required.
Wind turbines can be designed and validated according to IEC 61400 standards.
RDS-PP (Reference Designation System for Power Plants) is a standardized system used worldwide to create a structured hierarchy of wind turbine components. This facilitates turbine maintenance and operating-cost tracking, and it is used during all stages of a turbine's creation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Te= 3/2 p (\\lambda pm Iqs + (Lds-Lqs) Ids Iqs )=3/2 p \\lambda pm Iqs"
}
] | https://en.wikipedia.org/wiki?curid=8309537 |
8309686 | Coordination number | Number of atoms, molecules or ions bonded to a molecule or crystal
In chemistry, crystallography, and materials science, the coordination number, also called ligancy, of a central atom in a molecule or crystal is the number of atoms, molecules or ions bonded to it. The ion/molecule/atom surrounding the central ion/molecule/atom is called a ligand. This number is determined somewhat differently for molecules than for crystals.
For molecules and polyatomic ions the coordination number of an atom is determined by simply counting the other atoms to which it is bonded (by either single or multiple bonds). For example, [Cr(NH3)2Cl2Br2]− has Cr3+ as its central cation, which has a coordination number of 6 and is described as "hexacoordinate". The common coordination numbers are 4, 6 and 8.
Molecules, polyatomic ions and coordination complexes.
In chemistry, coordination number, defined originally in 1893 by Alfred Werner, is the total number of neighbors of a central atom in a molecule or ion. The concept is most commonly applied to coordination complexes.
Simple and commonplace cases.
The most common coordination number for "d-"block transition metal complexes is 6. The coordination number does not distinguish the geometry of such complexes, i.e. octahedral vs trigonal prismatic.
For transition metal complexes, coordination numbers range from 2 (e.g., AuI in Ph3PAuCl) to 9 (e.g., ReVII in [ReH9]2−). Metals in the "f"-block (the lanthanoids and actinoids) can accommodate higher coordination number due to their greater ionic radii and availability of more orbitals for bonding. Coordination numbers of 8 to 12 are commonly observed for "f"-block elements. For example, with bidentate nitrate ions as ligands, CeIV and ThIV form the 12-coordinate ions [Ce(NO3)6]2− (ceric ammonium nitrate) and [Th(NO3)6]2−. When the surrounding ligands are much smaller than the central atom, even higher coordination numbers may be possible. One computational chemistry study predicted a particularly stable PbHe152+ ion composed of a central lead ion coordinated with no fewer than 15 helium atoms. Among the Frank–Kasper phases, the packing of metallic atoms can give coordination numbers of up to 16. At the opposite extreme, steric shielding can give rise to unusually low coordination numbers. An extremely rare instance of a metal adopting a coordination number of 1 occurs in the terphenyl-based arylthallium(I) complex 2,6-Tipp2C6H3Tl, where Tipp is the 2,4,6-triisopropylphenyl group.
Polyhapto ligands.
Coordination numbers become ambiguous when dealing with polyhapto ligands.
For π-electron ligands such as the cyclopentadienide ion [C5H5]−, alkenes and the cyclooctatetraenide ion [C8H8]2−, the number of adjacent atoms in the π-electron system that bind to the central atom is termed the hapticity. In ferrocene the hapticity, "η", of each cyclopentadienide anion is five, Fe("η"5-C5H5)2. Various ways exist for assigning the contribution made to the coordination number of the central iron atom by each cyclopentadienide ligand. The contribution could be assigned as one since there is one ligand, or as five since there are five neighbouring atoms, or as three since there are three electron pairs involved. Normally the count of electron pairs is taken.
Surfaces and reconstruction.
The coordination numbers are well defined for atoms in the interior of a crystal lattice: one counts the nearest neighbors in all directions. The number of neighbors of an interior atom is termed the bulk coordination number. For surfaces, the number of neighbors is more limited, so the surface coordination number is smaller than the bulk coordination number. Often the surface coordination number is unknown or variable. The surface coordination number is also dependent on the Miller indices of the surface. In a body-centered cubic (BCC) crystal, the bulk coordination number is 8, whereas, for the (100) surface, the surface coordination number is 4.
Case studies.
A common way to determine the coordination number of an atom is by X-ray crystallography. Related techniques include neutron or electron diffraction. The coordination number of an atom can be determined straightforwardly by counting nearest neighbors.
α-Aluminium has a regular cubic close packed structure, fcc, where each aluminium atom has 12 nearest neighbors, 6 in the same plane and 3 above and below and the coordination polyhedron is a cuboctahedron. α-Iron has a body centered cubic structure where each iron atom has 8 nearest neighbors situated at the corners of a cube.
The two most common allotropes of carbon have different coordination numbers. In diamond, each carbon atom is at the centre of a regular tetrahedron formed by four other carbon atoms, the coordination number is four, as for methane. Graphite is made of two-dimensional layers in which each carbon is covalently bonded to three other carbons; atoms in other layers are further away and are not nearest neighbours, giving a coordination number of 3.
For chemical compounds with regular lattices such as sodium chloride and caesium chloride, a count of the nearest neighbors gives a good picture of the environment of the ions. In sodium chloride each sodium ion has 6 chloride ions as nearest neighbours (at 276 pm) at the corners of an octahedron and each chloride ion has 6 sodium atoms (also at 276 pm) at the corners of an octahedron. In caesium chloride each caesium has 8 chloride ions (at 356 pm) situated at the corners of a cube and each chloride has eight caesium ions (also at 356 pm) at the corners of a cube.
Complications.
In some compounds the metal-ligand bonds may not all be at the same distance. For example in PbCl2, the coordination number of Pb2+ could be said to be seven or nine, depending on which chlorides are assigned as ligands. Seven chloride ligands have Pb-Cl distances of 280–309 pm. Two chloride ligands are more distant, with a Pb-Cl distances of 370 pm.
In some cases a different definition of coordination number is used that includes atoms at a greater distance than the nearest neighbours. The very broad definition adopted by the International Union of Crystallography, IUCR, states that the coordination number of an atom in a crystalline solid depends on the chemical bonding model and the way in which the coordination number is calculated.
Some metals have irregular structures. For example, zinc has a distorted hexagonal close packed structure. Regular hexagonal close packing of spheres would predict that each atom has 12 nearest neighbours and a triangular orthobicupola (also called an anticuboctahedron or twinned cuboctahedron) coordination polyhedron. In zinc there are only 6 nearest neighbours at 266 pm in the same close packed plane with six other, next-nearest neighbours, equidistant, three in each of the close packed planes above and below at 291 pm. It is considered to be reasonable to describe the coordination number as 12 rather than 6. Similar considerations can be applied to the regular body-centred cubic structure, where in addition to the 8 nearest neighbors there are 6 more, approximately 15% more distant, and in this case the coordination number is often considered to be 14.
Many chemical compounds have distorted structures. Nickel arsenide, NiAs has a structure where nickel and arsenic atoms are 6-coordinate. Unlike sodium chloride where the chloride ions are cubic close packed, the arsenic anions are hexagonal close packed. The nickel ions are 6-coordinate with a distorted octahedral coordination polyhedron where columns of octahedra share opposite faces. The arsenic ions are not octahedrally coordinated but have a trigonal prismatic coordination polyhedron. A consequence of this arrangement is that the nickel atoms are rather close to each other. Other compounds that share this structure, or a closely related one are some transition metal sulfides such as FeS and CoS, as well as some intermetallics. In cobalt(II) telluride, CoTe, the six tellurium and two cobalt atoms are all equidistant from the central Co atom.
Two other examples of commonly encountered chemicals are Fe2O3 and TiO2. Fe2O3 has a crystal structure that can be described as having a near close packed array of oxygen atoms with iron atoms filling two thirds of the octahedral holes. However, each iron atom has 3 nearest neighbors and 3 others a little further away. The structure is quite complex: the oxygen atoms are coordinated to four iron atoms, and the iron atoms in turn share vertices, edges and faces of the distorted octahedra. TiO2 has the rutile structure. The titanium atoms are 6-coordinate, with 2 atoms at 198.3 pm and 4 at 194.6 pm, in a slightly distorted octahedron. The octahedra around the titanium atoms share edges and vertices to form a 3-D network. The oxide ions are 3-coordinate in a trigonal planar configuration.
Usage in quasicrystal, liquid and other disordered systems.
The coordination number of systems with disorder cannot be precisely defined.
The first coordination number can be defined using the radial distribution function "g"("r"):
formula_0
where "r"0 is the rightmost position starting from "r" = 0 whereon "g"("r") is approximately zero, "r"1 is the first minimum. Therefore, it is the area under the first peak of "g"("r").
The second coordination number is defined similarly:
formula_1
Alternative definitions for the coordination number can be found in the literature, but in essence the main idea is the same. One such definition is as follows: denoting the position of the first peak as "r"p,
formula_2
The first coordination shell is the spherical shell with radius between "r"0 and "r"1 around the central particle under investigation.
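A numerical version of the first coordination number defined above, integrating r²g(r) over the first coordination shell; the tabulated g(r) and density used here are synthetic and purely illustrative.

```python
import numpy as np

rho = 0.05                                  # number density (particles per Å^3), hypothetical
r = np.linspace(0.0, 6.0, 601)              # radial grid in Å
g = 3.0 * np.exp(-((r - 2.8) / 0.3) ** 2)   # synthetic first peak of g(r) near 2.8 Å

r0, r1 = 2.0, 3.6                           # first rise and first minimum of g(r)
mask = (r >= r0) & (r <= r1)
dr = r[1] - r[0]
n1 = 4 * np.pi * rho * np.sum(r[mask] ** 2 * g[mask]) * dr   # n1 = 4*pi*rho * integral of r^2 g(r) dr
print(f"first coordination number ≈ {n1:.1f}")
```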
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n_1 = 4 \\pi \\int_{r_0}^{r_1} r^2 g(r) \\rho \\, dr, "
},
{
"math_id": 1,
"text": "n_2 = 4 \\pi \\int_{r_1}^{r_2} r^2 g(r) \\rho \\, dr. "
},
{
"math_id": 2,
"text": "n'_1 = 8 \\pi \\int_{r_0}^{r_p} r^2 g(r) \\rho \\, dr. "
}
] | https://en.wikipedia.org/wiki?curid=8309686 |
831261 | Bhattacharyya distance | Similarity of two probability distributions
In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.
It is not a metric, despite being named a "distance", since it does not obey the triangle inequality.
History.
Both the Bhattacharyya distance and the Bhattacharyya coefficient are named after Anil Kumar Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute. He developed the concept through a series of papers. He first developed a method to measure the distance between two non-normal distributions and illustrated it with classical multinomial populations; this work, despite being submitted for publication in 1941, appeared almost five years later in Sankhya. Bhattacharyya then worked toward a distance metric for probability distributions that are absolutely continuous with respect to the Lebesgue measure, publishing his progress in 1942 in the Proceedings of the Indian Science Congress; the final work appeared in 1943 in the Bulletin of the Calcutta Mathematical Society.
Definition.
For probability distributions formula_0 and formula_1 on the same domain formula_2, the Bhattacharyya distance is defined as
formula_3
where
formula_4
is the Bhattacharyya coefficient for discrete probability distributions.
For continuous probability distributions, with formula_5 and formula_6 where formula_7 and formula_8 are the probability density functions, the Bhattacharyya coefficient is defined as
formula_9.
More generally, given two probability measures formula_10 on a measurable space formula_11, let formula_12 be a (sigma finite) measure such that formula_0 and formula_1 are absolutely continuous with respect to formula_12 i.e. such that formula_13, and formula_14 for probability density functions formula_15 with respect to formula_12 defined formula_12-almost everywhere. Such a measure, even such a probability measure, always exists, e.g. formula_16. Then define the Bhattacharyya measure on formula_11 by
formula_17
It does not depend on the measure formula_12: if we choose a measure formula_18 such that formula_12 and another measure choice formula_19 are both absolutely continuous with respect to it, i.e. formula_20 and formula_21, then
formula_22,
and similarly for formula_1. We then have
formula_23.
We finally define the Bhattacharyya coefficient
formula_24.
By the above, the quantity formula_25 does not depend on formula_12, and by the Cauchy inequality formula_26. In particular, if formula_27 is absolutely continuous with respect to formula_1 with Radon-Nikodym derivative formula_28, then formula_29
Gaussian case.
Let formula_30, formula_31, where formula_32 is the normal distribution with mean formula_18 and variance formula_33; then
formula_34.
And in general, given two multivariate normal distributions formula_35,
formula_36,
where formula_37 Note that the first term is a squared Mahalanobis distance.
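A direct transcription of the univariate Gaussian formula above (a sketch, not a library routine):

```python
import math

def bhattacharyya_gaussian(mu_p, var_p, mu_q, var_q):
    """Bhattacharyya distance between N(mu_p, var_p) and N(mu_q, var_q)."""
    term_mean = 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q)
    term_var = 0.5 * math.log((var_p + var_q) / (2.0 * math.sqrt(var_p * var_q)))
    return term_mean + term_var

print(bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0))   # 0.0 for identical distributions
print(bhattacharyya_gaussian(0.0, 1.0, 1.0, 4.0))   # > 0 for differing distributions
```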
Properties.
formula_38 and formula_39.
formula_40 does not obey the triangle inequality, though the Hellinger distance formula_41 does.
Bounds on Bayes error.
The Bhattacharyya distance can be used to upper and lower bound the Bayes error rate:
formula_42
where formula_43 and formula_44 is the posterior probability.
Applications.
The Bhattacharyya coefficient quantifies the "closeness" of two random statistical samples.
Given two sequences from distributions formula_10, bin them into formula_45 buckets, and let the frequency of samples from formula_0 in bucket formula_46 be formula_47, and similarly for formula_48, then the sample Bhattacharyya coefficient is
formula_49
which is an estimator of formula_50. The quality of estimation depends on the choice of buckets; too few buckets would overestimate formula_50, while too many would underestimate.
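A sketch of this estimator using equal-width buckets shared by both samples; the sample sizes, bucket count and distributions are arbitrary illustrative choices.

```python
import numpy as np

def bhattacharyya_coefficient(x, y, n_bins=20):
    """Sample Bhattacharyya coefficient: sum of sqrt(p_i * q_i) over shared buckets."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()                        # relative frequencies
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)
y = rng.normal(0.5, 1.0, 10_000)
print(bhattacharyya_coefficient(x, y))     # close to exp(-D_B) ≈ 0.97 for these parameters
```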
A common task in classification is estimating the separability of classes. Up to a multiplicative factor, the squared Mahalanobis distance is a special case of the Bhattacharyya distance when the two classes are normally distributed with the same variances. When two classes have similar means but significantly different variances, the Mahalanobis distance would be close to zero, while the Bhattacharyya distance would not be.
The Bhattacharyya coefficient is used in the construction of polar codes.
The Bhattacharyya distance is used in feature extraction and selection, image processing, speaker recognition, phone clustering, and in genetics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\mathcal{X}"
},
{
"math_id": 3,
"text": "D_B(P,Q) = -\\ln \\left( BC(P,Q) \\right)"
},
{
"math_id": 4,
"text": "BC(P,Q) = \\sum_{x\\in \\mathcal{X}} \\sqrt{P(x) Q(x)}"
},
{
"math_id": 5,
"text": "P(dx) = p(x)dx"
},
{
"math_id": 6,
"text": "Q(dx) = q(x) dx"
},
{
"math_id": 7,
"text": "p(x)"
},
{
"math_id": 8,
"text": "q(x)"
},
{
"math_id": 9,
"text": "BC(P,Q) = \\int_{\\mathcal{X}} \\sqrt{p(x) q(x)}\\, dx"
},
{
"math_id": 10,
"text": "P, Q"
},
{
"math_id": 11,
"text": "(\\mathcal X, \\mathcal B)"
},
{
"math_id": 12,
"text": "\\lambda"
},
{
"math_id": 13,
"text": "P(dx) = p(x)\\lambda(dx)"
},
{
"math_id": 14,
"text": "Q(dx) = q(x)\\lambda(dx)"
},
{
"math_id": 15,
"text": "p, q"
},
{
"math_id": 16,
"text": "\\lambda = \\tfrac12(P + Q)"
},
{
"math_id": 17,
"text": " bc(dx |P,Q) = \\sqrt{p(x)q(x)}\\, \\lambda(dx) = \\sqrt{\\frac{P(dx)}{\\lambda(dx)}(x)\\frac{Q(dx)}{\\lambda(dx)}(x)}\\lambda(dx)."
},
{
"math_id": 18,
"text": "\\mu"
},
{
"math_id": 19,
"text": "\\lambda'"
},
{
"math_id": 20,
"text": "\\lambda = l(x)\\mu"
},
{
"math_id": 21,
"text": "\\lambda' = l'(x) \\mu"
},
{
"math_id": 22,
"text": "P(dx) = p(x)\\lambda(dx) = p'(x)\\lambda'(dx) = p(x)l(x) \\mu(dx) = p'(x)l'(x)\\mu(dx)"
},
{
"math_id": 23,
"text": "bc(dx |P,Q) = \\sqrt{p(x) q(x)}\\, \\lambda(dx) = \\sqrt{p(x)q(x)}\\, l(x)\\mu(x) = \\sqrt{p(x)l(x)q(x)\\, l(x)}\\mu(dx) = \\sqrt{p'(x)l'(x) q'(x)l'(x)}\\, \\mu(dx) = \\sqrt{p'(x)q'(x)}\\,\\lambda'(dx)"
},
{
"math_id": 24,
"text": " BC(P,Q) = \\int_{\\mathcal X} bc(dx|P,Q) = \\int_{\\mathcal{X}} \\sqrt{p(x) q(x)}\\, \\lambda(dx)"
},
{
"math_id": 25,
"text": "BC(P,Q)"
},
{
"math_id": 26,
"text": "0\\le BC(P,Q) \\le 1 "
},
{
"math_id": 27,
"text": "P(dx) = p(x)Q(dx)"
},
{
"math_id": 28,
"text": "p(x) = \\frac{P(dx)}{Q(dx)}(x)"
},
{
"math_id": 29,
"text": "BC(P, Q) = \\int_{\\mathcal X} \\sqrt{p(x)} Q(dx) = \\int_{\\mathcal X} \\sqrt{\\frac{P(dx)}{Q(dx)}} Q(dx) = E_Q\\left[\\sqrt{\\frac{P(dx)}{Q(dx)}}\\right] "
},
{
"math_id": 30,
"text": "p\\sim\\mathcal{N}(\\mu_p,\\sigma_p^2)"
},
{
"math_id": 31,
"text": "q\\sim\\mathcal{N}(\\mu_q,\\sigma_q^2)"
},
{
"math_id": 32,
"text": "{\\mathcal {N}}(\\mu ,\\sigma ^{2})"
},
{
"math_id": 33,
"text": "\\sigma^2"
},
{
"math_id": 34,
"text": "D_B(p,q) = \\frac{1}{4} \\frac{(\\mu_p-\\mu_q)^{2}}{\\sigma_p^2+\\sigma_q^2} + \\frac 1 2 \\ln\\left(\\frac{\\sigma^2_p + \\sigma^2_q}{2\\sigma_p\\sigma_q}\\right)"
},
{
"math_id": 35,
"text": "p_i=\\mathcal{N}(\\boldsymbol\\mu_i,\\,\\boldsymbol\\Sigma_i)"
},
{
"math_id": 36,
"text": "D_B(p_1, p_2)={1\\over 8}(\\boldsymbol\\mu_1-\\boldsymbol\\mu_2)^T \\boldsymbol\\Sigma^{-1}(\\boldsymbol\\mu_1-\\boldsymbol\\mu_2)+{1\\over 2}\\ln \\,\\left({\\det \\boldsymbol\\Sigma \\over \\sqrt{\\det \\boldsymbol\\Sigma_1 \\, \\det \\boldsymbol\\Sigma_2} }\\right)"
},
{
"math_id": 37,
"text": "\\boldsymbol\\Sigma={\\boldsymbol\\Sigma_1+\\boldsymbol\\Sigma_2 \\over 2}."
},
{
"math_id": 38,
"text": "0 \\le BC \\le 1"
},
{
"math_id": 39,
"text": "0 \\le D_B \\le \\infty"
},
{
"math_id": 40,
"text": "D_B"
},
{
"math_id": 41,
"text": "\\sqrt{1-BC(p,q)}"
},
{
"math_id": 42,
"text": " \\frac{1}{2} - \\frac{1}{2}\\sqrt{1-4\\rho^2} \\leq L^* \\leq \\rho"
},
{
"math_id": 43,
"text": "\\rho = \\mathbb E \\sqrt {\\eta(X)(1-\\eta(X))}"
},
{
"math_id": 44,
"text": "\\eta(X) = \\mathbb P(Y=1 | X)"
},
{
"math_id": 45,
"text": "n"
},
{
"math_id": 46,
"text": "i"
},
{
"math_id": 47,
"text": "p_i"
},
{
"math_id": 48,
"text": "q_i"
},
{
"math_id": 49,
"text": "BC(\\mathbf{p},\\mathbf{q}) = \\sum_{i=1}^n \\sqrt{p_i q_i},"
},
{
"math_id": 50,
"text": "BC(P, Q)"
}
] | https://en.wikipedia.org/wiki?curid=831261 |
831350 | Distance matrix | Square matrix containing the distances between elements in a set
In mathematics, computer science and especially graph theory, a distance matrix is a square matrix (two-dimensional array) containing the distances, taken pairwise, between the elements of a set. Depending upon the application involved, the "distance" being used to define this matrix may or may not be a metric. If there are N elements, this matrix will have size "N"×"N". In graph-theoretic applications, the elements are more often referred to as points, nodes or vertices.
Non-metric distance matrix.
In general, a distance matrix is a weighted adjacency matrix of some graph. In a network, a directed graph with weights assigned to the arcs, the distance between two nodes of the network can be defined as the minimum of the sums of the weights on the shortest paths joining the two nodes. This distance function, while well defined, is not a metric. There need be no restrictions on the weights other than the need to be able to combine and compare them, so negative weights are used in some applications. Since paths are directed, symmetry can not be guaranteed, and if cycles exist the distance matrix may not be hollow.
An algebraic formulation of the above can be obtained by using the min-plus algebra. Matrix multiplication in this system is defined as follows: Given two "n" × "n" matrices "A" = ("aij") and "B" = ("bij"), their distance product "C" = ("cij") = "A" ⭑ "B" is defined as an "n" × "n" matrix such that
formula_0
Note that the off-diagonal elements for vertex pairs that are not connected directly need to be set to infinity or a suitably large value for the min-plus operations to work correctly. A zero in these locations would be incorrectly interpreted as an edge with no distance, cost, etc.
If W is an "n" × "n" matrix containing the edge weights of a graph, then Wk (using this distance product) gives the distances between vertices using paths of length at most k edges, and Wn is the distance matrix of the graph.
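A minimal sketch of the distance product defined above, with infinity in place of missing edges; repeatedly multiplying W by itself in the min-plus algebra yields all-pairs shortest-path distances.

```python
import numpy as np

def distance_product(a, b):
    """Min-plus matrix product: c_ij = min over k of (a_ik + b_kj)."""
    n = a.shape[0]
    c = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(n):
            c[i, j] = np.min(a[i, :] + b[:, j])
    return c

INF = np.inf
W = np.array([[0.0, 3.0, INF],
              [3.0, 0.0, 1.0],
              [INF, 1.0, 0.0]])            # small weighted graph; INF = no direct edge

D = W.copy()
for _ in range(W.shape[0] - 1):            # iterating up to W^n gives shortest-path distances
    D = distance_product(D, W)
print(D)                                   # D[0, 2] == 4.0 via the middle vertex
```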
An arbitrary graph G on n vertices can be modeled as a weighted complete graph on n vertices by assigning a weight of one to each edge of the complete graph that corresponds to an edge of G and zero to all other edges. W for this complete graph is the adjacency matrix of G. The distance matrix of G can be computed from W as above, however, Wn calculated by the usual matrix multiplication only encodes the number of paths between any two vertices of length exactly n.
Metric distance matrix.
The value of a distance matrix formalism in many applications is in how the distance matrix can manifestly encode the metric axioms and in how it lends itself to the use of linear algebra techniques. That is, if "M" = ("xij") with 1 ≤ "i", "j" ≤ "N" is a distance matrix for a metric distance, then: (1) the entries on the main diagonal are all zero, i.e. "xii" = 0 for all "i"; (2) all the off-diagonal entries are positive, "xij" > 0 if "i" ≠ "j"; (3) the matrix is symmetric, "xij" = "xji"; and (4) for any "i" and "j", "xij" ≤ "xik" + "xkj" for all "k" (the triangle inequality).
When a distance matrix satisfies the first three axioms (making it a semi-metric) it is sometimes referred to as a pre-distance matrix. A pre-distance matrix that can be embedded in a Euclidean space is called a Euclidean distance matrix. For mixed-type data that contain numerical as well as categorical descriptors, Gower's distance is a common alternative.
Another common example of a metric distance matrix arises in coding theory when in a block code the elements are strings of fixed length over an alphabet and the distance between them is given by the Hamming distance metric. The smallest non-zero entry in the distance matrix measures the error correcting and error detecting capability of the code.
Additive distance matrix.
An additive distance matrix is a special type of matrix used in bioinformatics to build a phylogenetic tree. Let x be the lowest common ancestor of two species i and j; then we expect "Mij" = "Mix" + "Mxj". This is where the additive metric comes from. A distance matrix M for a set of species S is said to be additive if and only if there exists a phylogeny T for S such that every edge of T carries a positive weight and, for every pair of species i and j, "Mij" equals the sum of the edge weights along the path between i and j in T.
For this case, M is called an additive matrix and T is called an additive tree. Below we can see an example of an additive distance matrix and its corresponding tree:
Ultrametric distance matrix.
The ultrametric distance matrix is defined as an additive matrix which models the constant molecular clock. It is used to build a phylogenetic tree. A matrix M is said to be ultrametric if there exists a tree T such that "Mij" equals the sum of the edge weights along the path between i and j in T, and a root of the tree can be identified such that the distance from the root to every leaf is the same.
Here is an example of an ultrametric distance matrix with its corresponding tree:
Bioinformatics.
The distance matrix is widely used in the bioinformatics field, and it is present in several methods, algorithms and programs. Distance matrices are used to represent protein structures in a coordinate-independent manner, as well as the pairwise distances between two sequences in sequence space. They are used in structural and sequential alignment, and for the determination of protein structures from NMR or X-ray crystallography.
Sometimes it is more convenient to express data as a similarity matrix.
It is also used to define the distance correlation.
Sequence alignment.
An alignment of two sequences is formed by inserting spaces in arbitrary locations along the sequences so that they end up with the same length and there are no two spaces at the same position of the two augmented sequences. One of the primary methods for sequence alignment is dynamic programming. The method is used to fill the distance matrix and then obtain the alignment. In typical usage, for sequence alignment a matrix is used to assign scores to amino-acid matches or mismatches, and a gap penalty for matching an amino-acid in one sequence with a gap in the other.
Global alignment.
The Needleman–Wunsch algorithm, used to calculate global alignment, applies dynamic programming to fill the distance (score) matrix from which the alignment is obtained.
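A minimal sketch of the Needleman–Wunsch dynamic program is shown below; the match, mismatch and gap scores are arbitrary illustrative choices, and only the optimal score (not the traceback) is returned.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Fill the global-alignment score matrix by dynamic programming
    and return the optimal alignment score."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i] with b[j]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```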
Local alignment.
The Smith–Waterman algorithm is also based on dynamic programming: it likewise fills a distance (score) matrix and then derives the local alignment from it.
Multiple sequence alignment.
Multiple sequence alignment is an extension of pairwise alignment to align several sequences at a time. Most MSA methods build on the same distance-matrix idea used in global and local pairwise alignment.
There are other methods that have their own program due to their popularity:
MAFFT.
Multiple alignment using fast Fourier transform (MAFFT) is a program with an algorithm based on progressive alignment, and it offers various multiple alignment strategies. First, MAFFT constructs a distance matrix based on the number of shared 6-tuples. Second, it builds the guide tree based on that matrix. Third, it clusters the sequences with the help of the fast Fourier transform and starts the alignment. Based on the new alignment, it reconstructs the guide tree and aligns again.
Phylogenetic analysis.
To perform phylogenetic analysis, the first step is to reconstruct the phylogenetic tree: given a collection of species, the problem is to reconstruct or infer the ancestral relationships among the species, i.e., the phylogenetic tree among the species. Distance matrix methods perform this activity.
Distance matrix methods.
Distance matrix methods of phylogenetic analysis explicitly rely on a measure of "genetic distance" between the sequences being classified, and therefore require multiple sequences as an input. Distance methods attempt to construct an all-to-all matrix from the sequence query set describing the distance between each sequence pair. From this is constructed a phylogenetic tree that places closely related sequences under the same interior node and whose branch lengths closely reproduce the observed distances between sequences. Distance-matrix methods may produce either rooted or unrooted trees, depending on the algorithm used to calculate them. Given "n" species, the input is an "n" × "n" distance matrix M where "M"ij is the mutation distance between species "i" and "j" . The aim is to output a tree of degree 3 which is consistent with the distance matrix.
They are frequently used as the basis for progressive and iterative types of multiple sequence alignment. The main disadvantage of distance-matrix methods is their inability to efficiently use information about local high-variation regions that appear across multiple subtrees. Despite potential problems, distance methods are extremely fast, and they often produce a reasonable estimate of phylogeny. They also have certain benefits over the methods that use characters directly. Notably, distance methods allow use of data that may not be easily converted to character data, such as DNA-DNA hybridization assays.
The following are distance based methods for phylogeny reconstruction:
Additive tree reconstruction.
Additive tree reconstruction is based on additive and ultrametric distance matrices. These matrices have a special characteristic:
Consider an additive matrix M. For any three species i, j, k, the corresponding tree is unique. Every ultrametric distance matrix is an additive matrix. We can observe this property for the tree below, which consists of the species i, j, k.
The additive tree reconstruction technique starts with this tree and then adds one more species at a time, based on the distance matrix combined with the property mentioned above. For example, consider an additive matrix M and 5 species "a", "b", "c", "d" and "e". First we form an additive tree for two species "a" and "b". Then we choose a third one, say "c", and attach it to a point "x" on the edge between "a" and "b". The edge weights are computed with the property above. Next we add the fourth species "d" to any of the edges. If we apply the property, we find that "d" can be attached to only one specific edge. Finally, we add "e" following the same procedure as before.
UPGMA.
The basic principle of UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is that similar species should be closer in the phylogenetic tree. Hence, it builds the tree by clustering similar sequences iteratively. The method works by building the phylogenetic tree bottom up from its leaves. Initially, we have "n" leaves (or "n" singleton trees), each representing a species in "S". Those "n" leaves are referred to as "n" clusters. Then, we perform "n"−1 iterations. In each iteration, we identify the two clusters "C"1 and "C"2 with the smallest average distance and merge them to form a bigger cluster "C". If M is ultrametric, then for any cluster "C" created by the UPGMA algorithm, "C" is a valid ultrametric tree.
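A compact sketch of the UPGMA clustering loop is shown below; the distance matrix is a made-up ultrametric example, and only the tree topology is returned (branch heights are omitted for brevity).

```python
def upgma(labels, d):
    """UPGMA: repeatedly merge the two clusters with the smallest average
    pairwise distance.  d[(x, y)] holds the leaf-to-leaf distances.
    Returns the tree topology as nested tuples (branch heights omitted)."""
    dist = lambda x, y: d[(x, y)] if (x, y) in d else d[(y, x)]
    # Each cluster is a pair (tree, set of leaf labels).
    clusters = [(lab, {lab}) for lab in labels]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                avg = sum(dist(x, y) for x in clusters[i][1] for y in clusters[j][1])
                avg /= len(clusters[i][1]) * len(clusters[j][1])
                if best is None or avg < best[0]:
                    best = (avg, i, j)
        _, i, j = best
        merged = ((clusters[i][0], clusters[j][0]), clusters[i][1] | clusters[j][1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters[0][0]

# Hypothetical ultrametric distance matrix on four species.
labels = ["a", "b", "c", "d"]
d = {("a", "b"): 2, ("a", "c"): 6, ("a", "d"): 6,
     ("b", "c"): 6, ("b", "d"): 6, ("c", "d"): 4}
print(upgma(labels, d))   # (('a', 'b'), ('c', 'd'))
```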
Neighbor joining.
Neighbor joining is a bottom-up clustering method. It takes a distance matrix specifying the distance between each pair of sequences. The algorithm starts with a completely unresolved tree, whose topology corresponds to that of a star network, and iterates over the following steps until the tree is completely resolved and all branch lengths are known:
Fitch-Margoliash.
The Fitch–Margoliash method uses a weighted least squares method for clustering based on genetic distance. Closely related sequences are given more weight in the tree construction process to correct for the increased inaccuracy in measuring distances between distantly related sequences. The least-squares criterion applied to these distances is more accurate but less efficient than the neighbor-joining methods. An additional improvement that corrects for correlations between distances that arise from many closely related sequences in the data set can also be applied at increased computational cost.
Data Mining and Machine Learning.
Data Mining.
A common task in data mining is applying cluster analysis to a given set of data in order to group the data according to how similar the items within a group are to one another compared with items in other groups. Distance matrices are heavily used in cluster analysis, since similarity can be measured with a distance metric. The distance matrix thus serves as the representation of the similarity measure between all pairs of data points in the set.
Hierarchical clustering.
A distance matrix is necessary for traditional hierarchical clustering algorithms, which are often heuristic methods employed in the biological sciences, for example in phylogeny reconstruction. When implementing any of the hierarchical clustering algorithms in data mining, the distance matrix contains all pairwise distances between the points; the algorithm then repeatedly merges the two closest points or clusters, based entirely on the distances in the distance matrix.
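For illustration, and assuming SciPy is installed, a precomputed distance matrix can be fed to an off-the-shelf hierarchical clustering routine after converting it to condensed form; the matrix below is a made-up example.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical symmetric distance matrix for five data points.
D = np.array([[0.0, 1.0, 4.0, 5.0, 6.0],
              [1.0, 0.0, 3.5, 5.5, 6.5],
              [4.0, 3.5, 0.0, 1.5, 2.0],
              [5.0, 5.5, 1.5, 0.0, 0.5],
              [6.0, 6.5, 2.0, 0.5, 0.0]])

# linkage() expects the condensed (upper-triangular) form of the matrix.
Z = linkage(squareform(D), method="average")

# Cut the dendrogram into two flat clusters.
print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. [1 1 2 2 2]
```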
If N is the number of points, the complexity of hierarchical clustering is:
Machine Learning.
Distance metrics are a key part of several machine learning algorithms, which are used in both supervised and unsupervised learning. They are generally used to calculate the similarity between data points: this is where the distance matrix is an essential element. The use of an effective distance matrix improves the performance of the machine learning model, whether it is for classification tasks or for clustering.
K-Nearest Neighbors.
A distance matrix is utilized in the k-NN algorithm, one of the simplest and most widely used instance-based machine learning algorithms, applicable to both classification and regression tasks. It is also one of the slowest, since each test sample's prediction requires a fully computed distance matrix between the test sample and every training sample in the training set. Once the distance matrix is computed, the algorithm selects the K training samples that are closest to the test sample and predicts the test sample's result from the selected set's majority (classification) or average (regression) value.
This classification-focused model predicts the label of the target based on the distance matrix between the target and each of the training samples, which determines the K samples that are nearest to the target.
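As a minimal illustration (the distance values and labels below are made up), a k-NN classifier can operate directly on a precomputed distance matrix:

```python
import numpy as np

def knn_predict(D, train_labels, k=3):
    """Classify each test sample from a precomputed distance matrix.

    D[i, j] is the distance between test sample i and training sample j;
    the label of test sample i is the majority label among its k nearest
    training samples."""
    predictions = []
    for row in D:
        nearest = np.argsort(row)[:k]          # indices of the k closest training samples
        votes = train_labels[nearest]
        values, counts = np.unique(votes, return_counts=True)
        predictions.append(values[np.argmax(counts)])
    return np.array(predictions)

# Hypothetical data: 2 test samples, 5 training samples.
D = np.array([[0.2, 0.5, 3.0, 2.8, 0.1],
              [2.9, 3.1, 0.3, 0.4, 2.7]])
train_labels = np.array([0, 0, 1, 1, 0])
print(knn_predict(D, train_labels, k=3))   # [0 1]
```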
Computer Vision.
A distance matrix can be used in neural networks for 2D to 3D regression in image predicting machine learning models.
Information retrieval.
Distance matrices using Gaussian mixture distance.
A basic algorithm worth noting in information retrieval is the Fish School Search algorithm, which uses distance matrices to model the collective behavior of fish schools. A feeding operator is used to update the weights of the individuals:
Eq. A:
formula_4
Eq. B:
formula_5
The parameter step_vol defines the size of the maximum volume displacement performed with the distance matrix, specifically a Euclidean distance matrix.
Clustering Documents.
Implementing hierarchical clustering with distance-based metrics to organize and group similar documents requires a distance matrix. The distance matrix represents the degree of association between documents and is used to create clusters of closely associated documents, which are then used in retrieval methods to return documents relevant to a user's query.
Isomap.
Isomap incorporates distance matrices of geodesic distances in order to compute lower-dimensional embeddings. This makes it possible to handle document collections that reside in a space with a massive number of dimensions and to perform document clustering there.
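The following is a rough, self-contained sketch of the Isomap pipeline (k-nearest-neighbour graph, shortest-path geodesic distance matrix, classical MDS) using NumPy; the data points are synthetic and merely stand in for document feature vectors.

```python
import numpy as np

def isomap(X, n_neighbors=3, n_components=2):
    """Rough Isomap sketch: k-NN graph -> geodesic distance matrix -> classical MDS."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # Euclidean distances

    # Keep only each point's k nearest neighbours (symmetrised); other entries start at infinity.
    G = np.full((n, n), np.inf)
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    np.fill_diagonal(G, 0.0)

    # Floyd-Warshall: geodesic (shortest-path) distance matrix.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])

    # Classical MDS on the geodesic distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical "documents" as points on a curved 1-D manifold in 3-D space.
t = np.linspace(0, 3 * np.pi, 30)
X = np.c_[np.cos(t), np.sin(t), 0.1 * t]
print(isomap(X, n_neighbors=2, n_components=1)[:5])
```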
Neighborhood Retrieval Visualizer (NeRV).
NeRV is an algorithm for both unsupervised and supervised visualization that uses distance matrices to place similar data points near each other on a display or screen.
The distance matrix needed for Unsupervised NeRV can be computed through fixed input pairwise distances.
The distance matrix needed for Supervised NeRV requires formulating a supervised distance metric to be able to compute the distance of the input in a supervised manner.
Chemistry.
The distance matrix is a mathematical object widely used in both graph-theoretical (topological) and geometric (topographic) versions of chemistry. The distance matrix is used in chemistry in both explicit and implicit forms.
Interconversion mechanisms between two permutational isomers.
Distance matrices were used as the main approach to depict and reveal the shortest path sequence needed to determine the rearrangement between the two permutational isomers.
Distance Polynomials and Distance Spectra.
Explicit use of distance matrices is required in order to construct the distance polynomials and distance spectra of molecular structures.
Structure-property model.
Implicit use of distance matrices was applied through the distance-based Wiener number (Wiener index), which was formulated to represent the distances in all chemical structures. The Wiener number is equal to half the sum of the elements of the distance matrix.
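As a small illustration, the Wiener index can be read directly off a graph-theoretical distance matrix; the example below uses the path graph on four vertices (the carbon skeleton of n-butane).

```python
def wiener_index(D):
    """Wiener index = half the sum of the entries of the distance matrix."""
    return sum(sum(row) for row in D) / 2

# Graph-theoretical distance matrix of a path on 4 vertices
# (the carbon skeleton of n-butane).
D = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]

print(wiener_index(D))   # 10.0
```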
Graph-theoretical Distance matrix.
Graph-theoretical distance matrices are used in chemistry for the 2-D realization of molecular graphs, which illustrate the main foundational features of a molecule in a myriad of applications.
Geometric-Distance Matrix.
While the 2-D graph-theoretical distance matrix captures the constitutional features of the molecule, its three-dimensional (3-D) character is encoded in the geometric-distance matrix. The geometric-distance matrix is a different type of distance matrix, based on the graph-theoretical distance matrix of a molecule, that represents the 3-D structure of the molecule. The geometric-distance matrix of a molecular structure "G" is a real symmetric "n" × "n" matrix defined in the same way as the 2-D matrix; however, the matrix elements "D"ij hold the shortest Cartesian distances between "i" and "j" in "G". Also known as the topographic matrix, the geometric-distance matrix can be constructed from the known geometry of the molecule. As an example, the geometric-distance matrix of the carbon skeleton of "2,4-dimethylhexane" is shown below:
Other Applications.
Time Series Analysis.
Dynamic time warping distance matrices are used with clustering and classification algorithms for collections of time series objects.
Examples.
For example, suppose these data are to be analyzed, where pixel Euclidean distance is the distance metric.
The distance matrix would be:
These data can then be viewed in graphic form as a heat map. In this image, black denotes a distance of 0 and white is maximal distance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c_{ij} = \\min_{k=1}^n \\{a_{ik} + b_{kj}\\}."
},
{
"math_id": 1,
"text": "O(N^3)"
},
{
"math_id": 2,
"text": "O(N^2)"
},
{
"math_id": 3,
"text": "O(k * n * d)"
},
{
"math_id": 4,
"text": "\nx_i(t+1)=x_{i}(t)- step_{vol} rand(0,1)\\frac{x_{i}(t) - B(t)}{distance(x_{i}(t),B(t))},\n"
},
{
"math_id": 5,
"text": "\nx_i(t+1)=x_{i}(t)+step_{vol} rand(0,1)\\frac{x_{i}(t) - B(t)}{distance(x_{i}(t),B(t))},\n"
}
] | https://en.wikipedia.org/wiki?curid=831350 |
8313563 | James A. D. W. Anderson | British computer scientist
James Arthur Dean Wallace Anderson, known as James Anderson, is a retired member of academic staff in the School of Systems Engineering at the University of Reading, England, where he used to teach compilers, algorithms, fundamentals of computer science and computer algebra, programming and computer graphics.
Anderson quickly gained publicity in December 2006 in the United Kingdom when the regional BBC South Today reported his claim of "having solved a 1200 year old problem", namely that of division by zero. However, commentators quickly responded that his ideas are just a variation of the standard IEEE 754 concept of NaN (Not a Number), which has been commonly employed on computers in floating point arithmetic for many years.
Dr Anderson defended against the criticism of his claims on BBC Berkshire on 12 December 2006, saying, "If anyone doubts me I can hit them over the head with a computer that does it."
Research and background.
Anderson was a member of the British Computer Society, the British Machine Vision Association, Eurographics, and the British Society for the Philosophy of Science. He was also a teacher in the Computer Science department (School of Systems Engineering) at the University of Reading. He was
a psychology graduate who worked in the Electrical and Electronic Engineering departments at the University of Sussex and Plymouth Polytechnic (now the University of Plymouth). His doctorate is from the University of Reading for (in Anderson's words) "developing a canonical description of the perspective transformations in whole numbered dimensions".
He has written multiple papers on division by zero and has invented what he calls the "Perspex machine".
Anderson claims that "mathematical arithmetic is sociologically invalid" and that IEEE floating-point arithmetic, with NaN, is also faulty.
Transreal arithmetic.
Anderson's transreal numbers were first mentioned in a 1997 publication, and made well known on the Internet in 2006, but not accepted as useful by the mathematics community. These numbers are used in his concept of transreal arithmetic and the Perspex machine. According to Anderson, transreal numbers include all of the real numbers, plus three others: infinity (formula_1), negative infinity (formula_2) and "nullity" (formula_3), a number that lies outside the affinely extended real number line. (Nullity, confusingly, has an existing mathematical meaning.)
Anderson intends the axioms of transreal arithmetic to complement the axioms of standard arithmetic; they are supposed to produce the same result as standard arithmetic for all calculations where standard arithmetic defines a result. In addition, they are intended to define a consistent numeric result for the calculations which are undefined in standard arithmetic, such as division by zero.
Transreal arithmetic and other arithmetics.
"Transreal arithmetic" is derived from projective geometry but produces results similar to IEEE floating point arithmetic, a floating point arithmetic commonly used on computers. IEEE floating point arithmetic, like transreal arithmetic, uses affine infinity (two separate infinities, one positive and one negative) rather than projective infinity (a single unsigned infinity, turning the number line into a loop).
Here are some identities in transreal arithmetic with the IEEE equivalents:
The main difference is that IEEE arithmetic replaces the real (and transreal) number zero with positive and negative zero. (This is so that it can preserve the sign of a nonzero real number whose absolute value has been rounded down to zero. See also infinitesimal.) Division of any non-zero finite number by zero results in either positive or negative infinity.
Another difference between transreal and IEEE floating-point operations is that nullity compares equal to nullity, whereas NaN does not compare equal to NaN. This is because nullity is, in Anderson's system, a definite number, whereas NaN is an indeterminate value: for example, the numerator of nullity is zero, but the numerator of an indeterminate value is indeterminate, so the two have different properties. In IEEE arithmetic, the inequality holds because two expressions which both fail to have a numerical value cannot be numerically equivalent.
Anderson's analysis of the properties of transreal algebra is given in his paper on "perspex machines".
Due to the more expansive definition of numbers in transreal arithmetic, several identities and theorems which apply to all numbers in standard arithmetic are not universal in transreal arithmetic. For instance, in transreal arithmetic, formula_4 is not true for all formula_5, since formula_6. That problem is addressed in ref. pg. 7. Similarly, it is not always the case in transreal arithmetic that a number can be cancelled with its reciprocal to yield formula_0. Cancelling zero with its reciprocal in fact yields nullity.
Examining the axioms provided by Anderson, it is easy to see that any arithmetical term, being a sum, difference, product, or quotient, which contains an occurrence of the constant formula_3 is provably equivalent to formula_3. This is to say that nullity is absorptive over these arithmetical operations. Formally, let formula_7 be any arithmetical term with a sub-arithmetical-term formula_3, then formula_8 is a theorem of the theory proposed by Anderson.
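The following toy Python sketch only illustrates the behaviour described in this article: nullity absorbing the arithmetical operations and comparing equal to itself, 0/0 giving nullity, and a non-zero finite number divided by zero giving a signed infinity (the sign convention here is an assumption). It is not an implementation of Anderson's full axiom system.

```python
import math

NULLITY = "nullity"   # sentinel standing in for Anderson's Φ (nullity)

def tr_add(a, b):
    """Toy transreal addition: nullity absorbs, otherwise ordinary addition."""
    if NULLITY in (a, b):
        return NULLITY
    return a + b

def tr_div(a, b):
    """Toy transreal division, following the rules quoted in the article:
    0/0 = nullity, and a non-zero number divided by zero gives a signed infinity."""
    if NULLITY in (a, b):
        return NULLITY
    if b == 0:
        return NULLITY if a == 0 else (math.inf if a > 0 else -math.inf)
    return a / b

print(tr_div(0, 0))                  # 'nullity'
print(tr_add(tr_div(0, 0), 5))       # still 'nullity' -- nullity is absorptive
print(tr_div(0, 0) == tr_div(0, 0))  # True, unlike IEEE NaN == NaN
```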
Media coverage.
Anderson's transreal arithmetic, and concept of "nullity" in particular, were introduced to the public by the BBC with its report in December 2006 where Anderson was featured on a BBC television segment teaching schoolchildren about his concept of "nullity". The report implied that Anderson had "discovered" the solution to division by zero, rather than simply attempting to formalize it. The report also suggested that Anderson was the first to solve this problem, when in fact the result of zero divided by zero has been expressed formally in a number of different ways (for example, NaN).
The BBC was criticized for irresponsible journalism, but the producers of the segment defended the BBC, stating that the report was a light-hearted look at a mathematical problem aimed at a mainstream, regional audience for BBC South Today rather than at a global audience of mathematicians. The BBC later posted a follow-up giving Anderson's response to many claims that the theory is flawed.
Applications.
Anderson has been trying to market his ideas for transreal arithmetic and "Perspex machines" to investors. He claims that his work can produce computers which run "orders of magnitude faster than today's computers". He has also claimed that it can help solve such problems as quantum gravity, the mind-body connection, consciousness and free will.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1"
},
{
"math_id": 1,
"text": "\\infty"
},
{
"math_id": 2,
"text": "-\\infty"
},
{
"math_id": 3,
"text": "\\Phi"
},
{
"math_id": 4,
"text": "a-a=0"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "\\Phi-\\Phi=\\Phi"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "t=\\Phi"
}
] | https://en.wikipedia.org/wiki?curid=8313563 |
8315 | Diamagnetism | Magnetic property of ordinary materials
Diamagnetism is the property of materials that are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than the permeability of vacuum, "μ"0. In most materials, diamagnetism is a weak effect which can be detected only by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it entirely expels any magnetic field from its interior (the Meissner effect).
Diamagnetism was first discovered when Anton Brugmans observed in 1778 that bismuth was repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as "diamagnetic" (the prefix "dia-" meaning "through" or "across"), then later changed it to "diamagnetism".
A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic.
Materials.
Diamagnetism is a property of all materials, and always makes a weak contribution to the material's response to a magnetic field. However, other forms of magnetism (such as ferromagnetism or paramagnetism) are so much stronger that, when different forms of magnetism are present in a material, the diamagnetic contribution is usually negligible. Substances in which the diamagnetic behaviour is the strongest effect are termed diamagnetic materials, or diamagnets. Diamagnetic materials are those that people generally think of as "non-magnetic", and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants (named after Paul Pascal).
Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as "χ"v = "μ"v − 1. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is approximately "χ"v = −9.05 × 10⁻⁶. The most strongly diamagnetic material is bismuth, with "χ"v ≈ −1.66 × 10⁻⁴, although pyrolytic carbon may have a susceptibility of about "χ"v = −4.00 × 10⁻⁴ in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Because "χ"v is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value.
In rare cases, the diamagnetic contribution can be stronger than the paramagnetic contribution. This is the case for gold, which has a magnetic susceptibility less than 0 (and is thus by definition a diamagnetic material), but when measured carefully with X-ray magnetic circular dichroism, has an extremely weak paramagnetic contribution that is overcome by a stronger diamagnetic contribution.
Superconductors.
Superconductors may be considered perfect diamagnets ("χ"v = −1), because they expel all magnetic fields (except in a thin surface layer) due to the Meissner effect.
Demonstrations.
Curving water surfaces.
If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by a reflection in its surface.
Levitation.
Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space.
A thin slice of pyrolytic graphite, which is an unusually strongly diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective and relatively convenient demonstration of diamagnetism.
The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated.
In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass.
Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity.
A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet.
Theory.
The electrons in a material generally settle in orbitals, with effectively zero resistance, and act like current loops. Thus it might be imagined that diamagnetic effects in general would be common, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism but typically respond very little to the applied field.
The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory of Langevin for diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Langevin diamagnetism.
Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity B, applied to an electron with charge e and mass m, gives rise to Larmor precession with frequency ω = eB / 2m. The number of revolutions per unit time is ω / 2π, so the current for an atom with Z electrons is (in SI units)
formula_0
The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the z axis. The average loop area can be given as formula_1, where formula_2 is the mean square distance of the electrons perpendicular to the z axis. The magnetic moment is therefore
formula_3
If the distribution of charge is spherically symmetric, we can suppose that the distributions of the x, y, z coordinates are independent and identically distributed. Then formula_4, where formula_5 is the mean square distance of the electrons from the nucleus. Therefore, formula_6. If formula_7 is the number of atoms per unit volume, the volume diamagnetic susceptibility in SI units is
formula_8
In atoms, Langevin susceptibility is of the same order of magnitude as Van Vleck paramagnetic susceptibility.
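As a rough numerical illustration of the formula above, the values below (Z, n and ⟨r²⟩ are order-of-magnitude assumptions, not measured data) give a susceptibility of about −3 × 10⁻⁵, typical of ordinary diamagnets.

```python
import math

mu0 = 4 * math.pi * 1e-7      # vacuum permeability (H/m)
e   = 1.602176634e-19         # elementary charge (C)
m_e = 9.1093837015e-31        # electron mass (kg)

Z      = 10                   # electrons per atom (assumed)
n      = 5e28                 # atoms per cubic metre (assumed)
r2_avg = (1e-10) ** 2         # mean square electron-nucleus distance (assumed, ~1 angstrom)

chi = -mu0 * e**2 * Z * n * r2_avg / (6 * m_e)
print(f"Langevin diamagnetic susceptibility: {chi:.2e}")   # about -3e-05
```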
In metals.
The Langevin theory is not the full picture for metals because there are also non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the (volume) diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is
formula_9
where formula_10 is the Fermi energy. This is equivalent to formula_11, exactly formula_12 times Pauli paramagnetic susceptibility, where formula_13 is the Bohr magneton and formula_14 is the density of states (number of states per energy per volume). This formula takes into account the spin degeneracy of the carriers (spin-1/2 electrons).
In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the De Haas–Van Alphen effect, also first described theoretically by Landau.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " I = -\\frac{Ze^2B}{4 \\pi m}."
},
{
"math_id": 1,
"text": "\\scriptstyle \\pi\\left\\langle\\rho^2\\right\\rangle"
},
{
"math_id": 2,
"text": "\\scriptstyle \\left\\langle\\rho^2\\right\\rangle"
},
{
"math_id": 3,
"text": " \\mu = -\\frac{Ze^2B}{4 m}\\langle\\rho^2\\rangle."
},
{
"math_id": 4,
"text": "\\scriptstyle \\left\\langle x^2 \\right\\rangle \\;=\\; \\left\\langle y^2 \\right\\rangle \\;=\\; \\left\\langle z^2 \\right\\rangle \\;=\\; \\frac{1}{3}\\left\\langle r^2 \\right\\rangle"
},
{
"math_id": 5,
"text": "\\scriptstyle \\left\\langle r^2 \\right\\rangle"
},
{
"math_id": 6,
"text": "\\scriptstyle \\left\\langle \\rho^2 \\right\\rangle \\;=\\; \\left\\langle x^2\\right\\rangle \\;+\\; \\left\\langle y^2 \\right\\rangle \\;=\\; \\frac{2}{3}\\left\\langle r^2 \\right\\rangle"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "\\chi = \\frac{\\mu_0 n \\mu}{B} = -\\frac{\\mu_0e^2 Zn }{6 m}\\langle r^2\\rangle."
},
{
"math_id": 9,
"text": "\\chi = -\\mu_0\\frac{e^2}{12\\pi^2 m\\hbar}\\sqrt{2mE_{\\rm F}},"
},
{
"math_id": 10,
"text": "E_{\\rm F}"
},
{
"math_id": 11,
"text": "-\\mu_0\\mu_{\\rm B}^2 g(E_{\\rm F})/3"
},
{
"math_id": 12,
"text": "-1/3"
},
{
"math_id": 13,
"text": "\\mu_{\\rm B}=e\\hbar/2m"
},
{
"math_id": 14,
"text": "g(E)"
}
] | https://en.wikipedia.org/wiki?curid=8315 |
831689 | Pontryagin's maximum principle | Principle in optimal control theory for best way to change state in a dynamical system
Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.
The maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students, and its initial application was to the maximization of the terminal speed of a rocket. The result was derived using ideas from the classical calculus of variations. After a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.
Widely regarded as a milestone in optimal control theory, the significance of the maximum principle lies in the fact that maximizing the Hamiltonian is much easier than the original infinite-dimensional control problem; rather than maximizing over a function space, the problem is converted to a pointwise optimization. A similar logic leads to Bellman's principle of optimality, a related approach to optimal control problems which states that the optimal trajectory remains optimal at intermediate points in time. The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not. However, in contrast to the Hamilton–Jacobi–Bellman equation, which needs to hold over the entire state space to be valid, Pontryagin's Maximum Principle is potentially more computationally efficient in that the conditions which it specifies only need to hold over a particular trajectory.
Notation.
For set formula_0 and functions
formula_1,
formula_2,
formula_3,
formula_4,
we use the following notation:
formula_5,
formula_6,
formula_7,
formula_8,
formula_9.
Formal statement of necessary conditions for minimization problems.
Here the necessary conditions are shown for minimization of a functional.
Consider an n-dimensional dynamical system, with state variable formula_10, and control variable formula_11, where formula_0 is the set of admissible controls. The evolution of the system is determined by the state and the control, according to the differential equation formula_12. Let the system's initial state be formula_13 and let the system's evolution be controlled over the time-period with values formula_14. The latter is determined by the following differential equation:
formula_15
The control trajectory formula_16 is to be chosen according to an objective. The objective is a functional formula_17 defined by
formula_18,
where formula_19 can be interpreted as the "rate" of cost for exerting control formula_20 in state formula_21, and formula_22 can be interpreted as the cost for ending up at state formula_21. The specific choice of formula_23 depends on the application.
The constraints on the system dynamics can be adjoined to the Lagrangian formula_24 by introducing time-varying Lagrange multiplier vector formula_25, whose elements are called the "costates" of the system. This motivates the construction of the Hamiltonian formula_26 defined for all formula_27 by:
formula_28
where formula_29 is the transpose of formula_25.
Pontryagin's minimum principle states that the optimal state trajectory formula_30, optimal control formula_31, and corresponding Lagrange multiplier vector formula_32 must minimize the Hamiltonian formula_26 so that
for all time formula_27 and for all permissible control inputs formula_11. Here, the trajectory of the Lagrangian multiplier vector formula_25 is the solution to the costate equation and its terminal conditions:
If formula_33 is fixed, then these three conditions in (1)-(3) are the necessary conditions for an optimal control.
If the final state formula_33 is not fixed (i.e., its differential variation is not zero), there is an additional condition
These four conditions in (1)-(4) are the necessary conditions for an optimal control.
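As a hedged illustration, the sketch below applies the standard form of these conditions (the control minimizes H, the costate satisfies λ̇ = −∂H/∂x, and λ(T) = Ψx(x(T)) when the final state is free) to a made-up scalar problem and solves the resulting two-point boundary value problem with a simple shooting method; it is not drawn from the references of this article.

```python
# Minimal shooting-method illustration of Pontryagin's conditions for the
# scalar problem  minimize  Psi(x(T)) + integral of (1/2)u^2 dt,  with
# dynamics x' = u, x(0) = 0, T = 1 and Psi(x) = (1/2)(x - 1)^2.
# Here H = lambda*u + (1/2)u^2, so the minimizing control is u* = -lambda,
# the costate equation is lambda' = -dH/dx = 0, and the terminal condition
# (free final state) is lambda(T) = Psi'(x(T)) = x(T) - 1.

def shoot(lam0, T=1.0, steps=1000):
    """Integrate state and costate forward from a guessed initial costate
    and return the terminal-condition residual lambda(T) - (x(T) - 1)."""
    dt = T / steps
    x, lam = 0.0, lam0
    for _ in range(steps):
        u = -lam            # control that minimizes the Hamiltonian
        x += dt * u         # x' = u
        lam += dt * 0.0     # lambda' = -dH/dx = 0
    return lam - (x - 1.0)

# Solve the two-point boundary value problem by bisection on lambda(0).
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid

lam0 = 0.5 * (lo + hi)
print(f"optimal initial costate lambda(0) ~ {lam0:.4f}")   # analytic answer: -0.5
print(f"optimal (constant) control u* ~ {-lam0:.4f}")      # analytic answer: +0.5
```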
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{U}"
},
{
"math_id": 1,
"text": "\\Psi : \\reals^n \\to \\reals"
},
{
"math_id": 2,
"text": "H : \\reals^n \\times \\mathcal{U} \\times \\reals^n \\times \\reals \\to \\reals"
},
{
"math_id": 3,
"text": "L : \\reals^n \\times \\mathcal{U} \\to \\reals"
},
{
"math_id": 4,
"text": "f : \\reals^n \\times \\mathcal{U} \\to \\reals^n"
},
{
"math_id": 5,
"text": "\n\\Psi_T(x(T))= \\left.\\frac{\\partial \\Psi(x)}{\\partial T}\\right|_{x=x(T)} \\,\n"
},
{
"math_id": 6,
"text": "\n\\Psi_x(x(T))=\\begin{bmatrix} \\left.\\frac{\\partial\n\\Psi(x)}{\\partial x_1}\\right|_{x=x(T)} & \\cdots & \\left.\\frac{\\partial\n\\Psi(x)}{\\partial x_n} \\right|_{x=x(T)}\n\\end{bmatrix}\n"
},
{
"math_id": 7,
"text": "\nH_x(x^*,u^*,\\lambda^*,t)=\\begin{bmatrix} \\left.\\frac{\\partial H}{\\partial x_1}\\right|_{x=x^*,u=u^*,\\lambda=\\lambda^*}\n& \\cdots & \\left.\\frac{\\partial H}{\\partial x_n}\\right|_{x=x^*,u=u^*,\\lambda=\\lambda^*}\n\\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\nL_x(x^*,u^*)=\\begin{bmatrix} \\left.\\frac{\\partial L}{\\partial x_1}\\right|_{x=x^*,u=u^*}\n& \\cdots & \\left.\\frac{\\partial L}{\\partial x_n}\\right|_{x=x^*,u=u^*}\n\\end{bmatrix}\n"
},
{
"math_id": 9,
"text": "\nf_x(x^*,u^*)=\\begin{bmatrix} \\left.\\frac{\\partial f_1}{\\partial x_1}\\right|_{x=x^*,u=u^*} & \\cdots & \\left.\\frac{\\partial f_1}{\\partial x_n}\\right|_{x=x^*,u=u^*} \\\\\n\\vdots & \\ddots & \\vdots \\\\ \\left.\\frac{\\partial f_n}{\\partial x_1}\\right|_{x=x^*,u=u^*} &\n\\ldots & \\left.\\frac{\\partial f_n}{\\partial x_n}\\right|_{x=x^*,u=u^*}\n\\end{bmatrix}\n"
},
{
"math_id": 10,
"text": "x \\in \\R^n"
},
{
"math_id": 11,
"text": "u \\in \\mathcal{U}"
},
{
"math_id": 12,
"text": "\\dot{x}=f(x,u)"
},
{
"math_id": 13,
"text": "x_0"
},
{
"math_id": 14,
"text": "t \\in [0, T]"
},
{
"math_id": 15,
"text": "\n\\dot{x}=f(x,u), \\quad x(0)=x_0, \\quad u(t) \\in \\mathcal{U}, \\quad t \\in [0,T]\n"
},
{
"math_id": 16,
"text": "u: [0, T] \\to \\mathcal{U}"
},
{
"math_id": 17,
"text": "J"
},
{
"math_id": 18,
"text": "\nJ=\\Psi(x(T))+\\int^T_0 L\\big(x(t),u(t)\\big) \\,dt\n"
},
{
"math_id": 19,
"text": "L(x, u)"
},
{
"math_id": 20,
"text": "u"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "\\Psi(x)"
},
{
"math_id": 23,
"text": "L, \\Psi"
},
{
"math_id": 24,
"text": "L"
},
{
"math_id": 25,
"text": "\\lambda"
},
{
"math_id": 26,
"text": "H"
},
{
"math_id": 27,
"text": "t \\in [0,T]"
},
{
"math_id": 28,
"text": "\nH\\big(x(t),u(t),\\lambda(t),t\\big)=\\lambda^{\\rm T}(t)\\cdot f\\big(x(t),u(t)\\big) + L\\big(x(t),u(t)\\big)\n"
},
{
"math_id": 29,
"text": "\\lambda^{\\rm T}"
},
{
"math_id": 30,
"text": "x^*"
},
{
"math_id": 31,
"text": "u^*"
},
{
"math_id": 32,
"text": "\\lambda^*"
},
{
"math_id": 33,
"text": "x(T)"
}
] | https://en.wikipedia.org/wiki?curid=831689 |
831744 | Visitor center | Physical location that provides tourist information on the place or attraction where it is located
A visitor center or centre (see American and British English spelling differences), visitor information center or tourist information centre is a physical location that provides information to tourists.
Types.
A visitor center may be a Civic center at a specific attraction or place of interest, such as a landmark, national park, national forest, or state park, providing information (such as trail maps, and about camp sites, staff contact, restrooms, etc.) and in-depth educational exhibits and artifact displays (for example, about natural or cultural history). Often a film or other media display is used. If the site has permit requirements or guided tours, the visitor center is often the place where these are coordinated.
A tourist information center provides visitors with information on the area's attractions, lodgings, maps, and other items relevant to tourism. These are often operated at the airport or other port of entry, by the local government or chamber of commerce. Some are called information centers.
Signage.
The Unicode code block Letterlike Symbols allocates a code point (U+2139) for a symbol that may be used to identify an information source. The default form is a lower-case, roman-type, serif, extra-bold letter "i", but the script typeface form formula_0 is common.
Europe.
United Kingdom.
In the United Kingdom, there is a nationwide network of Tourist Information Centres run by the British Tourist Authority (BTA), represented online by the VisitBritain website and public relations organisation. Other TICs are run by local authorities or through private organisations such as local shops in association with BTA.
In England, VisitEngland promotes domestic tourism.
In Wales, the Welsh Government supports TICs through Visit Wales.
In Scotland, the Scottish Government supports VisitScotland, the official tourist organisation of Scotland, which also operates Tourist Information Centres across Scotland.
Poland.
In Poland there are special offices and information boards giving free information about tourist attractions. Offices are situated in interesting places in popular tourist destinations, and boards usually stand near monuments and important cultural sites.
North America.
In North America, a welcome center is a rest area with a visitor center, located after the entrance from one state or province to another state or province or in some cases another country, usually along an Interstate Highway or other freeway. These information centers are operated by the state they are located in. The first example opened on 4 May 1935, next to US 12 in New Buffalo, Michigan, near the Indiana state line.
Many United States cities, such as Houston, Texas, and Boca Raton, Florida, as well as counties and other areas smaller than states, also operate welcome centers, though usually with fewer facilities than state centers have.
In Ontario, there are 11 Ontario Travel Information Centres located along 400-series highways.
South America.
Peru.
Peru features Iperú, Tourist Information and Assistance, a free service that provides tourist information for domestic and foreign travelers, the information covers destinations, attractions, recommended routes and licensed tourism companies in Peru. It also provides assistance on various procedures or where tourists have problems of various kinds. Iperú receives complaints and suggestions for destinations and tourism companies operating in Peru (lodging, travel agencies, airlines, buses, etc.).
Iperú, Tourist Information and Assistance has a nationwide network represented online by the Peru.travel website, the 24/7 line (51 1) 5748000, and 31 local offices in 13 regions in all over Peru: Lima-Callao, Amazonas, Piura, Lambayeque, La Libertad, Ancash, Arequipa, Tacna, Puno, Ayacucho, Cusco, Tumbes and Iquitos.
The official tourist organization or national tourist board of Peru is PromPerú, a national organization that promotes both tourism and international commerce of this country worldwide.
Oceania.
In Australia, most visitor centres are local or state government-run, or in some cases as an association of tourism operators on behalf of the government, usually managed by a board or executive. Those that comply with a national accreditation programme use the italic formula_0 as pictured. These visitor information centres (often abbreviated as VICs) provide information on the local area, and usually perform services such as accommodation and tour bookings, flight/bus/train/hire car options, and act as the first point of contact a visitor has with the town or region.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
}
] | https://en.wikipedia.org/wiki?curid=831744 |
8317687 | Cauchy's theorem (geometry) | Theorem in geometry
Cauchy's theorem is a theorem in geometry, named after Augustin Cauchy. It states that
convex polytopes in three dimensions with congruent corresponding faces must be congruent to each other. That is, any polyhedral net formed by unfolding the faces of the polyhedron onto a flat surface, together with gluing instructions describing which faces should be connected to each other, uniquely determines the shape of the original polyhedron. For instance, if six squares are connected in the pattern of a cube, then they must form a cube: there is no convex polyhedron with six square faces connected in the same way that does not have the same shape.
This is a fundamental result in rigidity theory: one consequence of the theorem is that, if one makes a physical model of a convex polyhedron by connecting together rigid plates for each of the polyhedron faces with flexible hinges along the polyhedron edges, then this ensemble of plates and hinges will necessarily form a rigid structure.
Statement.
Let "P" and "Q" be "combinatorially equivalent" 3-dimensional convex polytopes; that is, they are convex polytopes with isomorphic face lattices. Suppose further that each pair of corresponding faces from "P" and "Q" are congruent to each other, i.e. equal up to a rigid motion. Then "P" and "Q" are themselves congruent.
To see that convexity is necessary, consider a regular icosahedron. One can "push in" a vertex to create a nonconvex polyhedron that is still combinatorially equivalent to the regular icosahedron; that is, one can take five faces of the icosahedron meeting at a vertex, which form the sides of a pentagonal pyramid, and reflect the pyramid with respect to its base.
History.
The result originated in Euclid's "Elements", where solids are called equal if the same holds for their faces. This version of the result was proved by Cauchy in 1813 based on earlier work by Lagrange. An error in Cauchy's proof of the main lemma was corrected by Ernst Steinitz, Isaac Jacob Schoenberg, and Aleksandr Danilovich Aleksandrov. The corrected proof is so short and elegant that it is considered to be one of the Proofs from THE BOOK.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb R^3"
}
] | https://en.wikipedia.org/wiki?curid=8317687 |
8320430 | Chan's algorithm | Algorithm for finding the convex hull of a set of points in the plane
In computational geometry, Chan's algorithm, named after Timothy M. Chan, is an optimal output-sensitive algorithm to compute the convex hull of a set formula_0 of formula_1 points, in 2- or 3-dimensional space.
The algorithm takes formula_2 time, where formula_3 is the number of vertices of the output (the convex hull). In the planar case, the algorithm combines an formula_4 algorithm (Graham scan, for example) with Jarvis march (formula_5), in order to obtain an optimal formula_2 time. Chan's algorithm is notable because it is much simpler than the Kirkpatrick–Seidel algorithm, and it naturally extends to 3-dimensional space. This paradigm has been independently developed by Frank Nielsen in his Ph.D. thesis.
Algorithm.
Overview.
A single pass of the algorithm requires a parameter formula_6 which is between 0 and formula_1 (the number of points of our set formula_0). Ideally, formula_7, but formula_3, the number of vertices in the output convex hull, is not known at the start. Multiple passes with increasing values of formula_6 are performed; the process terminates when formula_8 (see below on choosing the parameter formula_6).
The algorithm starts by arbitrarily partitioning the set of points formula_0 into formula_9 subsets formula_10 with at most formula_6 points each; notice that formula_11.
For each subset formula_12, it computes the convex hull, formula_13, using an formula_14 algorithm (for example, Graham scan), where formula_15 is the number of points in the subset. As there are formula_16 subsets of formula_17 points each, this phase takes formula_18 time.
During the second phase, Jarvis's march is executed, making use of the precomputed (mini) convex hulls, formula_19. At each step in this Jarvis's march algorithm, we have a point formula_20 in the convex hull (at the beginning, formula_20 may be the point in formula_0 with the lowest y coordinate, which is guaranteed to be in the convex hull of formula_0), and need to find a point formula_21 such that all other points of formula_0 are to the right of the line formula_22, where the notation formula_21 simply means that the next point, that is formula_23, is determined as a function of formula_20 and formula_0. The convex hull of the set formula_12, formula_13, is known and contains at most formula_6 points (listed in a clockwise or counter-clockwise order), which allows to compute formula_24 in formula_25 time by binary search. Hence, the computation of formula_24 for all the formula_16 subsets can be done in formula_26 time. Then, we can determine formula_27 using the same technique as normally used in Jarvis's march, but only considering the points formula_28 (i.e. the points in the mini convex hulls) instead of the whole set formula_0. For those points, one iteration of Jarvis's march is formula_29 which is negligible compared to the computation for all subsets. Jarvis's march completes when the process has been repeated formula_30 times (because, in the way Jarvis march works, after at most formula_3 iterations of its outermost loop, where formula_3 is the number of points in the convex hull of formula_0, we must have found the convex hull), hence the second phase takes formula_31 time, equivalent to formula_2 time if formula_6 is close to formula_3 (see below the description of a strategy to choose formula_6 such that this is the case).
By running the two phases described above, the convex hull of formula_1 points is computed in formula_2 time.
Choosing the parameter "m".
If an arbitrary value is chosen for formula_6, it may happen that formula_32. In that case, after formula_6 steps in the second phase, we interrupt the Jarvis's march as running it to the end would take too much time.
At that moment, a formula_33 time will have been spent, and the convex hull will not have been calculated.
The idea is to make multiple passes of the algorithm with increasing values of formula_6; each pass terminates (successfully or unsuccessfully) in formula_33 time. If formula_6 increases too slowly between passes, the number of iterations may be large; on the other hand, if it rises too quickly, the first formula_6 for which the algorithm terminates successfully may be much larger than formula_3, and produce a complexity formula_34.
Squaring Strategy.
One possible strategy is to "square" the value of formula_6 at each iteration, up to a maximum value of formula_1 (corresponding to a single subset containing all the points). Starting from a value of 2, at iteration formula_35, formula_36 is chosen. In that case, formula_37 iterations are made, given that the algorithm terminates once we have
formula_38
with the logarithm taken in base formula_39, and the total running time of the algorithm is
formula_40
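As a small illustration of this schedule (n and h below are hypothetical values), the following snippet lists the successive values of m that would be tried and compares the summed cost of all passes, Σ n log m, with n log h:

```python
from math import log2

def parameter_schedule(n, h):
    """Values of m tried by the squaring strategy until m >= h (a successful pass)."""
    ms, t = [], 0
    while True:
        m = min(n, 2 ** (2 ** t))
        ms.append(m)
        if m >= h or m == n:
            return ms
        t += 1

# Hypothetical instance: one million input points, 57 hull vertices.
n, h = 1_000_000, 57
ms = parameter_schedule(n, h)
print(ms)                                        # [2, 4, 16, 256]
total = sum(n * log2(m) for m in ms)             # each pass costs O(n log m)
print(f"total work ~ {total:.3g}, versus n*log2(h) = {n * log2(h):.3g}")
```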
In three dimensions.
To generalize this construction for the 3-dimensional case, an formula_4 algorithm to compute the 3-dimensional convex hull by Preparata and Hong should be used instead of Graham scan, and a 3-dimensional version of Jarvis's march needs to be used. The time complexity remains formula_2.
Pseudocode.
In the following pseudocode, text between parentheses and in italics is a comment. To fully understand the following pseudocode, it is recommended that the reader is already familiar with the Graham scan and Jarvis march algorithms to compute the convex hull, formula_41, of a set of points,
formula_0.
Input: Set formula_0 with formula_1 points.
Output: Set formula_41 with formula_3 points, the convex hull of formula_0.
"(Pick a point of formula_0 which is guaranteed to be in formula_41: for instance, the point with the lowest y coordinate.)"
"(This operation takes formula_42 time: e.g., we can simply iterate through formula_0.)"
formula_43
"(formula_44 is used in the Jarvis march part of this Chan's algorithm,"
"so that to compute the second point, formula_45, in the convex hull of formula_0.)"
"(Note: formula_44 is not a point of formula_0.)"
"(For more info, see the comments close to the corresponding part of the Chan's algorithm.)"
formula_46
"(Note: formula_3, the number of points in the final convex hull of formula_0, is not known.)"
"(These are the iterations needed to discover the value of formula_6, which is an estimate of formula_3.)"
"(formula_47 is required for this Chan's algorithm to find the convex hull of formula_0.)"
"(More specifically, we want formula_48, so that not to perform too many unnecessary iterations"
"and so that the time complexity of this Chan's algorithm is formula_49.)"
"(As explained above in this article, a strategy is used where at most formula_50 iterations are required to find formula_6.)"
"(Note: the final formula_6 may not be equal to formula_3, but it is never smaller than formula_3 and greater than formula_51.)"
"(Nevertheless, this Chan's algorithm stops once formula_3 iterations of the outermost loop are performed,"
"that is, even if formula_52, it doesn't perform formula_6 iterations of the outermost loop.)"
"(For more info, see the Jarvis march part of this algorithm below, where formula_41 is returned if formula_53.)"
for formula_54 do
"(Set parameter formula_6 for the current iteration. A "squaring scheme" is used as described above in this article."
"There are other schemes: for example, the "doubling scheme", where formula_55, for formula_56."
"If the "doubling scheme" is used, though, the resulting time complexity of this Chan's algorithm is formula_57.)"
formula_58
"(Initialize an empty list (or array) to store the points of the convex hull of formula_0, as they are found.)"
formula_59
formula_60
"(Arbitrarily split set of points formula_0 into formula_61 subsets of roughly formula_6 elements each.)"
formula_62
"(Compute the convex hull of all formula_16 subsets of points, formula_63.)"
"(It takes formula_64 time.)"
"If formula_65, then the time complexity is formula_66.)"
for formula_67 do
"(Compute the convex hull of subset formula_68, formula_12, using Graham scan, which takes formula_69 time.)"
"(formula_13 is the convex hull of the subset of points formula_12.)"
formula_70
"(At this point, the convex hulls formula_71 of respectively the subsets of points formula_72 have been computed.)"
"(Now, use a modified version of the Jarvis march algorithm to compute the convex hull of formula_0.)"
"(Jarvis march performs in formula_73 time, where formula_1 is the number of input points and formula_3 is the number of points in the convex hull.)"
"(Given that Jarvis march is an output-sensitive algorithm, its running time depends on the size of the convex hull, formula_3.)"
"(In practice, it means that Jarvis march performs formula_3 iterations of its outermost loop."
"At each of these iterations, it performs at most formula_1 iterations of its innermost loop.)"
"(We want formula_48, so we do not want to perform more than formula_6 iterations in the following outer loop.)"
"(If the current formula_6 is smaller than formula_3, i.e. formula_74, the convex hull of formula_0 cannot be found.)"
"(In this modified version of Jarvis march, we perform an operation inside the innermost loop which takes formula_75 time."
"Hence, the total time complexity of this modified version is"
"formula_76"
"If formula_65, then the time complexity is formula_66.)"
for formula_77 do
"(Note: here, a point in the convex hull of formula_0 is already known, that is formula_78.)"
"(In this inner for loop, formula_16 possible next points to be on the convex hull of formula_0, formula_79, are computed.)"
"(Each of these formula_16 possible next points is from a different formula_13:"
"that is, formula_80 is a possible next point on the convex hull of formula_0 which is part of the convex hull of formula_13.)"
"(Note: formula_79 depend on formula_81: that is, for each iteration formula_81, there are formula_16 possible next points to be on the convex hull of formula_0.)"
"(Note: at each iteration formula_81, only one of the points among formula_79 is added to the convex hull of formula_0.)"
for formula_67 do
"(formula_82 finds the point formula_83 such that the angle formula_84 is maximized ,"
"where formula_84 is the angle between the vectors formula_85 and formula_86. Such formula_87 is stored in formula_80.)"
"(Angles do not need to be calculated directly: the orientation test can be used .)"
"(formula_82 can be performed in formula_75 time.)"
"(Note: at the iteration formula_88, formula_89 and formula_78 is known and is a point in the convex hull of formula_0:"
"in this case, it is the point of formula_0 with the lowest y coordinate.)"
formula_90
"(Choose the point formula_91 which maximizes the angle formula_92 to be the next point on the convex hull of formula_0.)"
formula_93
"(Jarvis march terminates when the next selected point on the convext hull, formula_23, is the initial point, formula_94.)"
if formula_53
"(Return the convex hull of formula_0 which contains formula_95 points.)"
"(Note: of course, no need to return formula_23 which is equal to formula_94.)"
return formula_96
else
formula_97
"(If after formula_6 iterations a point formula_23 has not been found so that formula_53, then formula_74.)"
"(We need to start over with a higher value for formula_6.)"
Implementation.
Chan's paper contains several suggestions that may improve the practical performance of the algorithm, for example:
Extensions.
Chan's paper contains some other problems whose known algorithms can be made optimal output sensitive using his technique, for example:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "O(n \\log h)"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "O(n \\log n)"
},
{
"math_id": 5,
"text": "O(nh)"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "m = h"
},
{
"math_id": 8,
"text": "m \\geq h"
},
{
"math_id": 9,
"text": "K = \\lceil n/m \\rceil"
},
{
"math_id": 10,
"text": "(Q_k)_{k=1,2,...K}"
},
{
"math_id": 11,
"text": "K=O(n/m)"
},
{
"math_id": 12,
"text": "Q_k"
},
{
"math_id": 13,
"text": "C_k"
},
{
"math_id": 14,
"text": "O(p \\log p)"
},
{
"math_id": 15,
"text": "p"
},
{
"math_id": 16,
"text": "K"
},
{
"math_id": 17,
"text": "O(m)"
},
{
"math_id": 18,
"text": "K\\cdot O(m \\log m) = O(n \\log m)"
},
{
"math_id": 19,
"text": "(C_k)_{k=1,2,...K}"
},
{
"math_id": 20,
"text": "p_{i}"
},
{
"math_id": 21,
"text": "p_{i+1} = f(p_{i},P)"
},
{
"math_id": 22,
"text": "p_{i}p_{i+1}"
},
{
"math_id": 23,
"text": "p_{i+1}"
},
{
"math_id": 24,
"text": "f(p_{i},Q_k)"
},
{
"math_id": 25,
"text": "O(\\log m)"
},
{
"math_id": 26,
"text": "O(K \\log m)"
},
{
"math_id": 27,
"text": "f(p_{i},P)"
},
{
"math_id": 28,
"text": "(f(p_{i},Q_k))_{1\\leq k\\leq K}"
},
{
"math_id": 29,
"text": "O(K)"
},
{
"math_id": 30,
"text": "O(h)"
},
{
"math_id": 31,
"text": "O(Kh \\log m)"
},
{
"math_id": 32,
"text": "m<h"
},
{
"math_id": 33,
"text": "O(n \\log m)"
},
{
"math_id": 34,
"text": "O(n \\log m) > O(n \\log h)"
},
{
"math_id": 35,
"text": "t"
},
{
"math_id": 36,
"text": "m = \\min \\left(n,2^{2^t} \\right)"
},
{
"math_id": 37,
"text": "O(\\log\\log h)"
},
{
"math_id": 38,
"text": "m = 2^{2^t} \\geq h \\iff \\log \\left( 2^{2^t} \\right) \\geq \\log h \\iff 2^t \\geq \\log h \\iff \\log {2^t} \\geq \\log {\\log h} \\iff t \\geq \\log {\\log h},"
},
{
"math_id": 39,
"text": "2"
},
{
"math_id": 40,
"text": " \\sum_{t=0}^{\\lceil \\log\\log h \\rceil} O\\left(n \\log \\left(2^{2^t} \\right)\\right) = O(n) \\sum_{t=0}^{\\lceil \\log\\log h \\rceil} 2^t = O\\left(n \\cdot 2^{1+\\lceil \\log\\log h \\rceil}\\right) = O(n \\log h)."
},
{
"math_id": 41,
"text": "C"
},
{
"math_id": 42,
"text": "\\mathcal{O}(n)"
},
{
"math_id": 43,
"text": "p_1 := PICK\\_START(P) "
},
{
"math_id": 44,
"text": "p_0"
},
{
"math_id": 45,
"text": "p_2"
},
{
"math_id": 46,
"text": "p_0 := (-\\infty, 0)"
},
{
"math_id": 47,
"text": "h \\leq m"
},
{
"math_id": 48,
"text": "h \\leq m \\leq h^2"
},
{
"math_id": 49,
"text": "\\mathcal{O}(n \\log h)"
},
{
"math_id": 50,
"text": "\\log \\log n"
},
{
"math_id": 51,
"text": "h^2"
},
{
"math_id": 52,
"text": "m \\neq h"
},
{
"math_id": 53,
"text": "p_{i+1} == p_1"
},
{
"math_id": 54,
"text": "1\\leq t\\leq \\log \\log n"
},
{
"math_id": 55,
"text": "m = 2^t"
},
{
"math_id": 56,
"text": "t=1, \\dots, \\left \\lceil \\log h \\right\\rceil "
},
{
"math_id": 57,
"text": "\\mathcal{O}(n \\log^2 h)"
},
{
"math_id": 58,
"text": "m:=2^{2^t}"
},
{
"math_id": 59,
"text": "C := ()"
},
{
"math_id": 60,
"text": "ADD(C, p_1)"
},
{
"math_id": 61,
"text": "K = \\left\\lceil \\frac{n}{m} \\right\\rceil"
},
{
"math_id": 62,
"text": "Q_1, Q_2, \\dots , Q_K := SPLIT(P, m)"
},
{
"math_id": 63,
"text": "Q_1,Q_2, \\dots , Q_K"
},
{
"math_id": 64,
"text": "\\mathcal{O}(K m \\log m) = \\mathcal{O}(n \\log m)"
},
{
"math_id": 65,
"text": "m \\leq h^2"
},
{
"math_id": 66,
"text": "\\mathcal{O}(n \\log h^2) = \\mathcal{O}(n \\log h)"
},
{
"math_id": 67,
"text": "1\\leq k\\leq K"
},
{
"math_id": 68,
"text": "k"
},
{
"math_id": 69,
"text": "\\mathcal{O}(m \\log m)"
},
{
"math_id": 70,
"text": "C_k := GRAHAM\\_SCAN(Q_k)"
},
{
"math_id": 71,
"text": "C_1, C_2, \\dots, C_K"
},
{
"math_id": 72,
"text": "Q_1, Q_2, \\dots , Q_K"
},
{
"math_id": 73,
"text": "\\mathcal{O}(nh)"
},
{
"math_id": 74,
"text": "m < h"
},
{
"math_id": 75,
"text": "\\mathcal{O}(\\log m)"
},
{
"math_id": 76,
"text": "\\mathcal{O}(m K \\log m) = \\mathcal{O}(m \\left\\lceil \\frac{n}{m} \\right\\rceil \\log m) = \\mathcal{O}(n \\log m) = \\mathcal{O}(n \\log 2^{2^t}) = \\mathcal{O}(n 2^t)."
},
{
"math_id": 77,
"text": "1\\leq i\\leq m"
},
{
"math_id": 78,
"text": "p_1"
},
{
"math_id": 79,
"text": "q_{i,1}, q_{i,2}, \\dots, q_{i,K}"
},
{
"math_id": 80,
"text": "q_{i,k}"
},
{
"math_id": 81,
"text": "i"
},
{
"math_id": 82,
"text": "JARVIS\\_BINARY\\_SEARCH"
},
{
"math_id": 83,
"text": "d \\in C_k"
},
{
"math_id": 84,
"text": "\\measuredangle p_{i-1}p_id"
},
{
"math_id": 85,
"text": "\\overrightarrow{p_ip_{i-1}}"
},
{
"math_id": 86,
"text": "\\overrightarrow{p_i d}"
},
{
"math_id": 87,
"text": "d"
},
{
"math_id": 88,
"text": "i = 1"
},
{
"math_id": 89,
"text": "p_{i-1} = p_0 = (-\\infty, 0)"
},
{
"math_id": 90,
"text": "q_{i,k} := JARVIS\\_BINARY\\_SEARCH(p_{i-1}, p_i, C_k)"
},
{
"math_id": 91,
"text": "z \\in \\{ q_{i,1}, q_{i,2}, \\dots, q_{i,K} \\} "
},
{
"math_id": 92,
"text": "\\measuredangle p_{i-1}p_iz"
},
{
"math_id": 93,
"text": "p_{i+1} := JARVIS\\_NEXT\\_CH\\_POINT(p_{i-1}, p_i, (q_{i,1}, q_{i,2}, \\dots, q_{i,K}))"
},
{
"math_id": 94,
"text": "p_{1}"
},
{
"math_id": 95,
"text": "i = h"
},
{
"math_id": 96,
"text": "C := (p_1, p_2, \\dots, p_{i})"
},
{
"math_id": 97,
"text": "ADD(C, p_{i+1})"
},
{
"math_id": 98,
"text": "L(S)"
},
{
"math_id": 99,
"text": "S"
},
{
"math_id": 100,
"text": " O(n\\log n) "
},
{
"math_id": 101,
"text": " O(n\\log h) "
},
{
"math_id": 102,
"text": "O(n\\log h) "
}
] | https://en.wikipedia.org/wiki?curid=8320430 |
832212 | Multilinear form | Map from multiple vectors to an underlying field of scalars, linear in each argument
In abstract algebra and multilinear algebra, a multilinear form on a vector space formula_0 over a field formula_1 is a map
formula_2
that is separately formula_1-linear in each of its formula_3 arguments. More generally, one can define multilinear forms on a module over a commutative ring. The rest of this article, however, will only consider multilinear forms on finite-dimensional vector spaces.
A multilinear formula_3-form on formula_0 over formula_4 is called a (covariant) formula_5-tensor, and the vector space of such forms is usually denoted formula_6 or formula_7.
Tensor product.
Given a formula_3-tensor formula_8 and an formula_9-tensor formula_10, a product formula_11, known as the tensor product, can be defined by the property
formula_12
for all formula_13. The tensor product of multilinear forms is not commutative; however, it is bilinear and associative:
formula_14, formula_15
and
formula_16
If formula_17 forms a basis for an formula_18-dimensional vector space formula_0 and formula_19 is the corresponding dual basis for the dual space formula_20, then the products formula_21, with formula_22 form a basis for formula_6. Consequently, formula_6 has dimension formula_23.
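For instance (a worked case of the statement above): if formula_0 is 2-dimensional with basis (v_1, v_2) and dual basis (\phi^1, \phi^2), then every 2-tensor f is determined by its values on pairs of basis vectors,

f = \sum_{i,j=1}^{2} f(v_i, v_j)\, \phi^i \otimes \phi^j ,

so the four products \phi^i \otimes \phi^j form a basis of \mathcal{T}^2(V) and its dimension is 2^2 = 4; for an orthonormal basis, the dot product is the 2-tensor with coefficients f(v_i, v_j) = \delta_{ij}.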
Examples.
Bilinear forms.
If formula_24, formula_25 is referred to as a bilinear form. A familiar and important example of a (symmetric) bilinear form is the standard inner product (dot product) of vectors.
Alternating multilinear forms.
An important class of multilinear forms are the alternating multilinear forms, which have the additional property that
formula_26
where formula_27 is a permutation and formula_28 denotes its sign (+1 if even, –1 if odd). As a consequence, alternating multilinear forms are antisymmetric with respect to swapping of any two arguments (i.e., formula_29 and formula_30):
formula_31
With the additional hypothesis that the characteristic of the field formula_1 is not 2, setting formula_32 implies as a corollary that formula_33; that is, the form has a value of 0 whenever two of its arguments are equal. Note, however, that some authors use this last condition as the defining property of alternating forms. This definition implies the property given at the beginning of the section, but as noted above, the converse implication holds only when formula_34.
An alternating multilinear formula_3-form on formula_0 over formula_4 is called a multicovector of degree formula_5 or formula_5-covector, and the vector space of such alternating forms, a subspace of formula_6, is generally denoted formula_35, or, using the notation for the isomorphic "k"th exterior power of formula_36 (the dual space of formula_0), formula_37. Note that linear functionals (multilinear 1-forms over formula_4) are trivially alternating, so that formula_38, while, by convention, 0-forms are defined to be scalars: formula_39.
The determinant on formula_40 matrices, viewed as an formula_18 argument function of the column vectors, is an important example of an alternating multilinear form.
Exterior product.
The tensor product of alternating multilinear forms is, in general, no longer alternating. However, by summing over all permutations of the tensor product, taking into account the parity of each term, the "exterior product" (formula_41, also known as the "wedge product") of multicovectors can be defined, so that if formula_42 and formula_43, then formula_44:
formula_45
where the sum is taken over the set of all permutations over formula_46 elements, formula_47. The exterior product is bilinear, associative, and graded-alternating: if formula_42 and formula_43 then formula_48.
Given a basis formula_17 for formula_0 and dual basis formula_19 for formula_49, the exterior products formula_50, with formula_51 form a basis for formula_35. Hence, the dimension of formula_35 for "n"-dimensional formula_0 is formula_52.
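For instance, on a 3-dimensional space formula_0 with dual basis (\phi^1, \phi^2, \phi^3), the definition above gives

(\phi^1 \wedge \phi^2)(v, w) = \phi^1(v)\,\phi^2(w) - \phi^1(w)\,\phi^2(v) = v^1 w^2 - v^2 w^1 ,

a 2 \times 2 minor of the component matrix of v and w, and the three products \phi^1\wedge\phi^2, \phi^1\wedge\phi^3, \phi^2\wedge\phi^3 form a basis of \mathcal{A}^2(V), in agreement with the dimension \tbinom{3}{2} = 3.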
Differential forms.
Differential forms are mathematical objects constructed via tangent spaces and multilinear forms that behave, in many ways, like differentials in the classical sense. Though conceptually and computationally useful, differentials are founded on ill-defined notions of infinitesimal quantities developed early in the history of calculus. Differential forms provide a mathematically rigorous and precise framework to modernize this long-standing idea. Differential forms are especially useful in multivariable calculus (analysis) and differential geometry because they possess transformation properties that allow them to be integrated on curves, surfaces, and their higher-dimensional analogues (differentiable manifolds). One far-reaching application is the modern statement of Stokes' theorem, a sweeping generalization of the fundamental theorem of calculus to higher dimensions.
The synopsis below is primarily based on Spivak (1965) and Tu (2011).
Definition of differential k-forms and construction of 1-forms.
To define differential forms on open subsets formula_53, we first need the notion of the tangent space of formula_54 at formula_55, usually denoted formula_56 or formula_57. The vector space formula_57 can be defined most conveniently as the set of elements formula_58 (formula_59, with formula_60 fixed) with vector addition and scalar multiplication defined by formula_61 and formula_62, respectively. Moreover, if formula_63 is the standard basis for formula_54, then formula_64 is the analogous standard basis for formula_57. In other words, each tangent space formula_57 can simply be regarded as a copy of formula_54 (a set of tangent vectors) based at the point formula_55. The collection (disjoint union) of tangent spaces of formula_54 at all formula_60 is known as the tangent bundle of formula_54 and is usually denoted formula_65. While the definition given here provides a simple description of the tangent space of formula_54, there are other, more sophisticated constructions that are better suited for defining the tangent spaces of smooth manifolds in general ("see the article on tangent spaces for details").
A differential formula_5-form on formula_53 is defined as a function formula_66 that assigns to every formula_67 a formula_3-covector on the tangent space of formula_54 at formula_55, usually denoted formula_68. In brief, a differential formula_3-form is a formula_3-covector field. The space of formula_3-forms on formula_69 is usually denoted formula_70; thus if formula_66 is a differential formula_3-form, we write formula_71. By convention, a continuous function on formula_69 is a differential 0-form: formula_72.
We first construct differential 1-forms from 0-forms and deduce some of their basic properties. To simplify the discussion below, we will only consider smooth differential forms constructed from smooth (formula_73) functions. Let formula_74 be a smooth function. We define the 1-form formula_75 on formula_69 for formula_67 and formula_76 by formula_77, where formula_78 is the total derivative of formula_79 at formula_55. (Recall that the total derivative is a linear transformation.) Of particular interest are the projection maps (also known as coordinate functions) formula_80, defined by formula_81, where formula_82 is the "i"th standard coordinate of formula_83. The 1-forms formula_84 are known as the basic 1-forms; they are conventionally denoted formula_85. If the standard coordinates of formula_76 are formula_86, then application of the definition of formula_75 yields formula_87, so that formula_88, where formula_89 is the Kronecker delta. Thus, as the dual of the standard basis for formula_57, formula_90 forms a basis for formula_91. As a consequence, if formula_66 is a 1-form on formula_69, then formula_66 can be written as formula_92 for smooth functions formula_93. Furthermore, we can derive an expression for formula_75 that coincides with the classical expression for a total differential:
formula_94
["Comments on" "notation:" In this article, we follow the convention from tensor calculus and differential geometry in which multivectors and multicovectors are written with lower and upper indices, respectively. Since differential forms are multicovector fields, upper indices are employed to index them. The opposite rule applies to the "components" of multivectors and multicovectors, which instead are written with upper and lower indices, respectively. For instance, we represent the standard coordinates of vector formula_59 as formula_95, so that formula_96 in terms of the standard basis formula_63. In addition, superscripts appearing in the "denominator" of an expression (as in formula_97) are treated as lower indices in this convention. When indices are applied and interpreted in this manner, the number of upper indices minus the number of lower indices in each term of an expression is conserved, both within the sum and across an equal sign, a feature that serves as a useful mnemonic device and helps pinpoint errors made during manual computation.]
Basic operations on differential k-forms.
The "exterior product" (formula_41) and "exterior derivative" (formula_98) are two fundamental operations on differential forms. The exterior product of a formula_3-form and an formula_9-form is a formula_99-form, while the exterior derivative of a formula_3-form is a formula_100-form. Thus, both operations generate differential forms of higher degree from those of lower degree.
The exterior product formula_101 of differential forms is a special case of the exterior product of multicovectors in general ("see above"). As is true in general for the exterior product, the exterior product of differential forms is bilinear, associative, and is graded-alternating.
More concretely, if formula_102 and formula_103, then
formula_104
Furthermore, for any set of indices formula_105,
formula_106
If formula_107, formula_108, and formula_109, then the indices of formula_110 can be arranged in ascending order by a (finite) sequence of such swaps. Since formula_111, formula_112 implies that formula_113. Finally, as a consequence of bilinearity, if formula_66 and formula_114 are the sums of several terms, their exterior product obeys distributivity with respect to each of these terms.
The collection of the exterior products of basic 1-forms formula_115 constitutes a basis for the space of differential "k"-forms. Thus, any formula_71 can be written in the form
formula_116
where formula_117 are smooth functions. With each set of indices formula_118 placed in ascending order, (*) is said to be the standard presentation of formula_66.
In the previous section, the 1-form formula_75 was defined by taking the exterior derivative of the 0-form (continuous function) formula_79. We now extend this by defining the exterior derivative operator formula_119 for formula_120. If the standard presentation of formula_3-form formula_66 is given by (*), the formula_100-form formula_121 is defined by
formula_122
A property of formula_98 that holds for all smooth forms is that the second exterior derivative of any formula_66 vanishes identically: formula_123. This can be established directly from the definition of formula_98 and the equality of mixed second-order partial derivatives of formula_124 functions ("see the article on closed and exact forms for details").
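To illustrate, for a 0-form f on \R^2 the definition gives

d(df) = d\left(\frac{\partial f}{\partial x^1}\right)\wedge dx^1 + d\left(\frac{\partial f}{\partial x^2}\right)\wedge dx^2 = \left(\frac{\partial^2 f}{\partial x^1 \partial x^2} - \frac{\partial^2 f}{\partial x^2 \partial x^1}\right) dx^1\wedge dx^2 = 0 ,

where the middle equality uses dx^i \wedge dx^i = 0 and dx^2 \wedge dx^1 = -dx^1 \wedge dx^2, and the last uses the equality of mixed partial derivatives.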
Integration of differential forms and Stokes' theorem for chains.
To integrate a differential form over a parameterized domain, we first need to introduce the notion of the pullback of a differential form. Roughly speaking, when a differential form is integrated, applying the pullback transforms it in a way that correctly accounts for a change-of-coordinates.
Given a differentiable function formula_125 and formula_3-form formula_126, we call formula_127 the pullback of formula_114 by formula_79 and define it as the formula_3-form such that
formula_128
for formula_129, where formula_130 is the map formula_131.
If formula_132 is an formula_18-form on formula_54 (i.e., formula_133), we define its integral over the unit formula_18-cell as the iterated Riemann integral of formula_79:
formula_134
Next, we consider a domain of integration parameterized by a differentiable function formula_135, known as an "n"-cube. To define the integral of formula_136 over formula_137, we "pull back" from formula_138 to the unit "n"-cell:
formula_139
To integrate over more general domains, we define an formula_140-chain formula_141 as the formal sum of formula_18-cubes and set
formula_142
An appropriate definition of the formula_143-chain formula_144, known as the boundary of formula_145, allows us to state the celebrated Stokes' theorem (Stokes–Cartan theorem) for chains in a subset of formula_146: "If formula_66 is a smooth formula_143-form on an open set formula_147 and formula_145 is a smooth formula_18-chain in formula_138, then formula_148." Using more sophisticated machinery (e.g., germs and derivations), the tangent space formula_149 of any smooth manifold formula_150 (not necessarily embedded in formula_146) can be defined. Analogously, a differential form formula_151 on a general smooth manifold is a map formula_152. Stokes' theorem can be further generalized to arbitrary smooth manifolds-with-boundary and even certain "rough" domains ("see the article on Stokes' theorem for details").
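As a worked case of these definitions (the form and curve are chosen for illustration): let \omega = x^1 \, dx^2 \in \Omega^1(\R^2) and let c(t) = (\cos 2\pi t, \sin 2\pi t) parameterize the unit circle as a 1-cube c:[0,1]\to\R^2. Applying the pullback definition with c_*(v_t) = (v\,c'(t))_{c(t)} gives

c^*\omega = 2\pi \cos^2(2\pi t)\, dt , \qquad \int_c \omega = \int_0^1 2\pi \cos^2(2\pi t)\, dt = \pi ,

which equals the area of the enclosed disk, as Stokes' theorem predicts, since d\omega = dx^1 \wedge dx^2.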
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "f\\colon V^k \\to K"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\\R"
},
{
"math_id": 5,
"text": "\\boldsymbol{k}"
},
{
"math_id": 6,
"text": "\\mathcal{T}^k(V)"
},
{
"math_id": 7,
"text": "\\mathcal{L}^k(V)"
},
{
"math_id": 8,
"text": "f\\in\\mathcal{T}^k(V)"
},
{
"math_id": 9,
"text": "\\ell"
},
{
"math_id": 10,
"text": "g\\in\\mathcal{T}^\\ell(V)"
},
{
"math_id": 11,
"text": "f\\otimes g\\in\\mathcal{T}^{k+\\ell}(V)"
},
{
"math_id": 12,
"text": "(f\\otimes g)(v_1,\\ldots,v_k,v_{k+1},\\ldots, v_{k+\\ell})=f(v_1,\\ldots,v_k)g(v_{k+1},\\ldots, v_{k+\\ell}),"
},
{
"math_id": 13,
"text": "v_1,\\ldots,v_{k+\\ell}\\in V"
},
{
"math_id": 14,
"text": "f\\otimes(ag_1+bg_2)=a(f\\otimes g_1)+b(f\\otimes g_2)"
},
{
"math_id": 15,
"text": "(af_1+bf_2)\\otimes g=a(f_1\\otimes g)+b(f_2\\otimes g),"
},
{
"math_id": 16,
"text": "(f\\otimes g)\\otimes h=f\\otimes (g\\otimes h)."
},
{
"math_id": 17,
"text": "(v_1,\\ldots, v_n)"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": "(\\phi^1,\\ldots,\\phi^n)"
},
{
"math_id": 20,
"text": "V^*=\\mathcal{T}^1(V)"
},
{
"math_id": 21,
"text": "\\phi^{i_1}\\otimes\\cdots\\otimes\\phi^{i_k}"
},
{
"math_id": 22,
"text": "1\\le i_1,\\ldots,i_k\\le n"
},
{
"math_id": 23,
"text": "n^k"
},
{
"math_id": 24,
"text": "k=2"
},
{
"math_id": 25,
"text": "f:V\\times V\\to K"
},
{
"math_id": 26,
"text": "f(x_{\\sigma(1)},\\ldots, x_{\\sigma(k)}) = \\sgn(\\sigma)f(x_1,\\ldots, x_k), "
},
{
"math_id": 27,
"text": "\\sigma:\\mathbf{N}_k\\to\\mathbf{N}_k"
},
{
"math_id": 28,
"text": "\\sgn(\\sigma)"
},
{
"math_id": 29,
"text": "\\sigma(p)=q,\\sigma(q)=p "
},
{
"math_id": 30,
"text": "\\sigma(i)=i, 1\\le i\\le k, i\\neq p,q "
},
{
"math_id": 31,
"text": "f(x_1,\\ldots, x_p,\\ldots, x_q,\\ldots, x_k) = -f(x_1,\\ldots, x_q,\\ldots, x_p,\\ldots, x_k). "
},
{
"math_id": 32,
"text": "x_p=x_q=x "
},
{
"math_id": 33,
"text": "f(x_1,\\ldots, x,\\ldots, x,\\ldots, x_k) = 0 "
},
{
"math_id": 34,
"text": "\\operatorname{char}(K)\\neq 2 "
},
{
"math_id": 35,
"text": "\\mathcal{A}^k(V)"
},
{
"math_id": 36,
"text": "V^*"
},
{
"math_id": 37,
"text": "\\bigwedge^k V^*"
},
{
"math_id": 38,
"text": "\\mathcal{A}^1(V)=\\mathcal{T}^1(V)=V^*"
},
{
"math_id": 39,
"text": "\\mathcal{A}^0(V)=\\mathcal{T}^0(V)=\\R"
},
{
"math_id": 40,
"text": "n\\times n"
},
{
"math_id": 41,
"text": "\\wedge"
},
{
"math_id": 42,
"text": "f\\in\\mathcal{A}^k(V)"
},
{
"math_id": 43,
"text": "g\\in\\mathcal{A}^\\ell(V)"
},
{
"math_id": 44,
"text": "f\\wedge g\\in\\mathcal{A}^{k+\\ell}(V)"
},
{
"math_id": 45,
"text": "(f\\wedge g)(v_1,\\ldots, v_{k+\\ell})=\\frac{1}{k!\\ell!}\\sum_{\\sigma\\in S_{k+\\ell}} (\\sgn(\\sigma)) f(v_{\\sigma(1)}, \\ldots, v_{\\sigma(k)})g(v_{\\sigma(k+1)}\n,\\ldots,v_{\\sigma(k+\\ell)}),"
},
{
"math_id": 46,
"text": "k+\\ell"
},
{
"math_id": 47,
"text": "S_{k+\\ell}"
},
{
"math_id": 48,
"text": "f\\wedge g=(-1)^{k\\ell}g\\wedge f"
},
{
"math_id": 49,
"text": "V^*=\\mathcal{A}^1(V)"
},
{
"math_id": 50,
"text": "\\phi^{i_1}\\wedge\\cdots\\wedge\\phi^{i_k}"
},
{
"math_id": 51,
"text": "1\\leq i_1<\\cdots<i_k\\leq n"
},
{
"math_id": 52,
"text": "\\tbinom{n}{k}=\\frac{n!}{(n-k)!\\,k!}"
},
{
"math_id": 53,
"text": "U\\subset\\R^n"
},
{
"math_id": 54,
"text": "\\R^n"
},
{
"math_id": 55,
"text": "p"
},
{
"math_id": 56,
"text": "T_p\\R^n"
},
{
"math_id": 57,
"text": "\\R^n_p"
},
{
"math_id": 58,
"text": "v_p"
},
{
"math_id": 59,
"text": "v\\in\\R^n"
},
{
"math_id": 60,
"text": "p\\in\\R^n"
},
{
"math_id": 61,
"text": "v_p+w_p:=(v+w)_p"
},
{
"math_id": 62,
"text": "a\\cdot(v_p):=(a\\cdot v)_p"
},
{
"math_id": 63,
"text": "(e_1,\\ldots,e_n)"
},
{
"math_id": 64,
"text": "((e_1)_p,\\ldots,(e_n)_p)"
},
{
"math_id": 65,
"text": "T\\R^n:=\\bigcup_{p\\in\\R^n}\\R^n_p"
},
{
"math_id": 66,
"text": "\\omega"
},
{
"math_id": 67,
"text": "p\\in U"
},
{
"math_id": 68,
"text": "\\omega_p:=\\omega(p)\\in\\mathcal{A}^k(\\R^n_p)"
},
{
"math_id": 69,
"text": "U"
},
{
"math_id": 70,
"text": "\\Omega^k(U)"
},
{
"math_id": 71,
"text": "\\omega\\in\\Omega^k(U)"
},
{
"math_id": 72,
"text": "f\\in C^0(U)=\\Omega^0(U)"
},
{
"math_id": 73,
"text": "C^\\infty"
},
{
"math_id": 74,
"text": "f:\\R^n\\to\\R"
},
{
"math_id": 75,
"text": "df"
},
{
"math_id": 76,
"text": "v_p\\in\\R^n_p"
},
{
"math_id": 77,
"text": "(df)_p(v_p):=Df|_p(v)"
},
{
"math_id": 78,
"text": "Df|_p:\\R^n\\to\\R"
},
{
"math_id": 79,
"text": "f"
},
{
"math_id": 80,
"text": "\\pi^i:\\R^n\\to\\R"
},
{
"math_id": 81,
"text": "x\\mapsto x^i"
},
{
"math_id": 82,
"text": "x^i"
},
{
"math_id": 83,
"text": "x\\in\\R^n"
},
{
"math_id": 84,
"text": "d\\pi^i"
},
{
"math_id": 85,
"text": "dx^i"
},
{
"math_id": 86,
"text": "(v^1,\\ldots, v^n)"
},
{
"math_id": 87,
"text": "dx^i_p(v_p)=v^i"
},
{
"math_id": 88,
"text": "dx^i_p((e_j)_p)=\\delta_j^i"
},
{
"math_id": 89,
"text": "\\delta^i_j"
},
{
"math_id": 90,
"text": "(dx^1_p,\\ldots,dx^n_p)"
},
{
"math_id": 91,
"text": "\\mathcal{A}^1(\\R^n_p)=(\\R^n_p)^*"
},
{
"math_id": 92,
"text": "\\sum a_i\\,dx^i"
},
{
"math_id": 93,
"text": "a_i:U\\to\\R"
},
{
"math_id": 94,
"text": "df=\\sum_{i=1}^n D_i f\\; dx^i={\\partial f\\over\\partial x^1} \\, dx^1+\\cdots+{\\partial f\\over\\partial x^n} \\, dx^n."
},
{
"math_id": 95,
"text": "(v^1,\\ldots,v^n)"
},
{
"math_id": 96,
"text": "v=\\sum_{i=1}^n v^ie_i"
},
{
"math_id": 97,
"text": "\\frac{\\partial f}{\\partial x^i}"
},
{
"math_id": 98,
"text": "d"
},
{
"math_id": 99,
"text": "(k+\\ell)"
},
{
"math_id": 100,
"text": "(k+1)"
},
{
"math_id": 101,
"text": "\\wedge:\\Omega^k(U)\\times\\Omega^\\ell(U)\\to\\Omega^{k+\\ell}(U)"
},
{
"math_id": 102,
"text": "\\omega=a_{i_1\\ldots i_k} \\, dx^{i_1}\\wedge\\cdots\\wedge dx^{i_k}"
},
{
"math_id": 103,
"text": "\\eta=a_{j_1\\ldots i_{\\ell}} dx^{j_1}\\wedge\\cdots\\wedge dx^{j_{\\ell}}"
},
{
"math_id": 104,
"text": "\\omega\\wedge\\eta=a_{i_1\\ldots i_k}a_{j_1\\ldots j_\\ell} \\, dx^{i_1}\\wedge\\cdots\\wedge dx^{i_k}\\wedge dx^{j_1} \\wedge \\cdots\\wedge dx^{j_\\ell}."
},
{
"math_id": 105,
"text": "\\{\\alpha_1\\ldots,\\alpha_m\\}"
},
{
"math_id": 106,
"text": "dx^{\\alpha_1} \\wedge\\cdots\\wedge dx^{\\alpha_p} \\wedge \\cdots \\wedge dx^{\\alpha_q} \\wedge\\cdots\\wedge dx^{\\alpha_m} = -dx^{\\alpha_1} \\wedge\\cdots\\wedge dx^{\\alpha_q} \\wedge \\cdots\\wedge dx^{\\alpha_p}\\wedge\\cdots\\wedge dx^{\\alpha_m}."
},
{
"math_id": 107,
"text": "I=\\{i_1,\\ldots,i_k\\}"
},
{
"math_id": 108,
"text": "J=\\{j_1,\\ldots,j_{\\ell}\\}"
},
{
"math_id": 109,
"text": "I\\cap J=\\varnothing"
},
{
"math_id": 110,
"text": "\\omega\\wedge\\eta"
},
{
"math_id": 111,
"text": "dx^\\alpha\\wedge dx^\\alpha=0"
},
{
"math_id": 112,
"text": "I\\cap J\\neq\\varnothing"
},
{
"math_id": 113,
"text": "\\omega\\wedge\\eta=0"
},
{
"math_id": 114,
"text": "\\eta"
},
{
"math_id": 115,
"text": "\\{dx^{i_1}\\wedge\\cdots\\wedge dx^{i_k} \\mid 1\\leq i_1<\\cdots< i_k\\leq n\\}"
},
{
"math_id": 116,
"text": "\\omega=\\sum_{i_1<\\cdots<i_k} a_{i_1\\ldots i_k} \\, dx^{i_1}\\wedge\\cdots\\wedge dx^{i_k}, \\qquad (*)"
},
{
"math_id": 117,
"text": "a_{i_1\\ldots i_k}:U\\to\\R"
},
{
"math_id": 118,
"text": "\\{i_1,\\ldots,i_k\\}"
},
{
"math_id": 119,
"text": "d:\\Omega^k(U)\\to\\Omega^{k+1}(U)"
},
{
"math_id": 120,
"text": "k\\geq1"
},
{
"math_id": 121,
"text": "d\\omega"
},
{
"math_id": 122,
"text": "d\\omega:=\\sum_{i_1<\\ldots <i_k} da_{i_1\\ldots i_k}\\wedge dx^{i_1}\\wedge\\cdots\\wedge dx^{i_k}."
},
{
"math_id": 123,
"text": "d^2\\omega=d(d\\omega)\\equiv 0"
},
{
"math_id": 124,
"text": "C^2"
},
{
"math_id": 125,
"text": "f:\\R^n\\to\\R^m"
},
{
"math_id": 126,
"text": "\\eta\\in\\Omega^k(\\R^m)"
},
{
"math_id": 127,
"text": "f^*\\eta\\in\\Omega^k(\\R^n)"
},
{
"math_id": 128,
"text": "(f^*\\eta)_p(v_{1p},\\ldots, v_{kp}):=\\eta_{f(p)}(f_*(v_{1p}),\\ldots,f_*(v_{kp})),"
},
{
"math_id": 129,
"text": "v_{1p},\\ldots,v_{kp}\\in\\R^n_p"
},
{
"math_id": 130,
"text": "f_*:\\R^n_p\\to\\R^m_{f(p)}"
},
{
"math_id": 131,
"text": "v_p\\mapsto(Df|_p(v))_{f(p)}"
},
{
"math_id": 132,
"text": "\\omega=f\\, dx^1\\wedge\\cdots\\wedge dx^n"
},
{
"math_id": 133,
"text": "\\omega\\in\\Omega^n(\\R^n)"
},
{
"math_id": 134,
"text": "\\int_{[0,1]^n} \\omega = \\int_{[0,1]^n} f\\,dx^1\\wedge\\cdots \\wedge dx^n:= \\int_0^1\\cdots\\int_0^1 f\\, dx^1\\cdots dx^n."
},
{
"math_id": 135,
"text": "c:[0,1]^n\\to A\\subset\\R^m"
},
{
"math_id": 136,
"text": "\\omega\\in\\Omega^n(A)"
},
{
"math_id": 137,
"text": "c"
},
{
"math_id": 138,
"text": "A"
},
{
"math_id": 139,
"text": "\\int_c \\omega :=\\int_{[0,1]^n}c^*\\omega."
},
{
"math_id": 140,
"text": "\\boldsymbol{n}"
},
{
"math_id": 141,
"text": "C=\\sum_i n_ic_i"
},
{
"math_id": 142,
"text": "\\int_C \\omega :=\\sum_i n_i\\int_{c_i} \\omega."
},
{
"math_id": 143,
"text": "(n-1)"
},
{
"math_id": 144,
"text": "\\partial C"
},
{
"math_id": 145,
"text": "C"
},
{
"math_id": 146,
"text": "\\R^m"
},
{
"math_id": 147,
"text": "A\\subset\\R^m"
},
{
"math_id": 148,
"text": "\\int_C d\\omega=\\int_{\\partial C} \\omega"
},
{
"math_id": 149,
"text": "T_p M"
},
{
"math_id": 150,
"text": "M"
},
{
"math_id": 151,
"text": "\\omega\\in\\Omega^k(M)"
},
{
"math_id": 152,
"text": "\\omega:p\\in M\\mapsto\\omega_p\\in \\mathcal{A}^k(T_pM)"
}
] | https://en.wikipedia.org/wiki?curid=832212 |
8323351 | Fresnel rhomb | Optical prism
A Fresnel rhomb is an optical prism that introduces a 90° phase difference between two perpendicular components of polarization, by means of two total internal reflections. If the incident beam is linearly polarized at 45° to the plane of incidence and reflection, the emerging beam is circularly polarized, and vice versa. If the incident beam is linearly polarized at some other inclination, the emerging beam is elliptically polarized with one principal axis in the plane of reflection, and vice versa.
The rhomb usually takes the form of a right parallelepiped, or in other words, a solid with six parallelogram faces (a square is to a cube as a parallelogram is to a parallelepiped). If the incident ray is perpendicular to one of the smaller rectangular faces, the angle of incidence and reflection at both of the longer faces is equal to the acute angle of the parallelogram. This angle is chosen so that each reflection introduces a phase difference of 45° between the components polarized parallel and perpendicular to the plane of reflection. For a given, sufficiently high refractive index, there are two angles meeting this criterion; for example, an index of 1.5 requires an angle of 50.2° or 53.3°.
Conversely, if the angle of incidence and reflection is fixed, the phase difference introduced by the rhomb depends only on its refractive index, which typically varies only slightly over the visible spectrum. Thus the rhomb functions as if it were a wideband quarter-wave plate – in contrast to a conventional birefringent (doubly-refractive) quarter-wave plate, whose phase difference is more sensitive to the frequency (color) of the light. The material of which the rhomb is made – usually glass – is specifically "not" birefringent.
The Fresnel rhomb is named after its inventor, the French physicist Augustin-Jean Fresnel, who developed the device in stages between 1817 and 1823. During that time he deployed it in crucial experiments involving polarization, birefringence, and optical rotation, all of which contributed to the eventual acceptance of his transverse-wave theory of light.
Operation.
Incident electromagnetic waves (such as light) consist of transverse vibrations in the electric and magnetic fields; these are proportional to and at right angles to each other and may therefore be represented by (say) the electric field alone. When striking an interface, the electric field oscillations can be resolved into two perpendicular components, known as the "s" and "p" components, which are parallel to the "surface" and the "plane" of incidence, respectively; in other words, the "s" and "p" components are respectively "square" and "parallel" to the plane of incidence.
Light passing through a Fresnel rhomb undergoes two total internal reflections at the same carefully chosen angle of incidence. After one such reflection, the "p" component is advanced by 1/8 of a cycle (45°; π/4 radians) relative to the "s" component. With "two" such reflections, a relative phase shift of 1/4 of a cycle (90°; π/2) is obtained. The word "relative" is critical: as the wavelength is very small compared with the dimensions of typical apparatus, the "individual" phase advances suffered by the "s" and "p" components are not readily observable, but the "difference" between them is easily observable through its effect on the state of polarization of the emerging light.
If the incoming light is "linearly" polarized (plane-polarized), the "s" and "p" components are initially in phase; hence, after two reflections, "the "p" component is 90° ahead in phase", so that the polarization of the emerging light is "elliptical" with principal axes in the "s" and "p" directions (Fig. 1). Similarly, if the incoming light is elliptically polarized with axes in the "s" and "p" directions, the emerging light is linearly polarized.
In the special case in which the incoming "s" and "p" components not only are in phase but also have equal magnitudes, the initial linear polarization is at 45° to the plane of incidence and reflection, and the final elliptical polarization is "circular". If the circularly polarized light is inspected through an "analyzer" (second polarizer), it "seems" to have been completely "depolarized", because its observed brightness is independent of the orientation of the analyzer. But if this light is processed by a second rhomb, it is "repolarized" at 45° to the plane of reflection in that rhomb – a property not shared by ordinary (unpolarized) light.
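The bookkeeping can be checked with a minimal Jones-vector sketch (a schematic calculation added here, ignoring overall phase factors and losses): light linearly polarized at 45° has equal, in-phase "s" and "p" components, and two reflections that each advance the "p" component by 45° leave the amplitudes equal but 90° apart in phase, i.e. circular polarization.

import cmath, math

s, p = 1 / math.sqrt(2), 1 / math.sqrt(2)      # 45° linear: equal s and p components, in phase
for _ in range(2):                             # the two internal reflections in the rhomb
    p *= cmath.exp(1j * math.pi / 4)           # each advances p by 45° relative to s
print(abs(s), abs(p), cmath.phase(p) - cmath.phase(s))   # equal amplitudes, phase gap of pi/2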
Related devices.
For a general input polarization, the net effect of the rhomb is identical to that of a birefringent (doubly-refractive) quarter-wave plate, except that a simple birefringent plate gives the desired 90° separation at a single frequency, and not (even approximately) at widely different frequencies, whereas the phase separation given by the rhomb depends on its refractive index, which varies only slightly over a wide frequency range (see "Dispersion"). Two Fresnel rhombs can be used in tandem (usually cemented to avoid reflections at their interface) to achieve the function of a half-wave plate. The tandem arrangement, unlike a single Fresnel rhomb, has the additional feature that the emerging beam can be collinear with the original incident beam.
Theory.
In order to specify the phase shift on reflection, we must choose a sign convention for the "reflection coefficient", which is the ratio of the reflected amplitude to the incident amplitude. In the case of the "s" components, for which the incident and reflected vibrations are both normal (perpendicular) to the plane of incidence, the obvious choice is to say that a "positive" reflection coefficient, corresponding to "zero" phase shift, is one for which the incident and reflected fields have the same direction (no reversal; no "inversion"). In the case of the "p" components, this article adopts the convention that a "positive" reflection coefficient is one for which the incident and reflected fields are inclined towards the same medium. We may then cover both cases by saying that a positive reflection coefficient is one for which the direction of the field vector normal to the plane of incidence (the electric vector for the "s" polarization, or the magnetic vector for the "p" polarization) is unchanged by the reflection. (But the reader should be warned that some authors use a different convention for the "p" components, with the result that the stated phase shift differs by 180° from the value given here.)
With the chosen sign convention, the phase advances on total internal reflection, for the "s" and "p" components, are respectively given by
\delta_s = 2\arctan\left(\frac{\sqrt{\sin^2\theta_\text{i} - 1/n^2}}{\cos\theta_\text{i}}\right)
and
\delta_p = 2\arctan\left(\frac{n^2\sqrt{\sin^2\theta_\text{i} - 1/n^2}}{\cos\theta_\text{i}}\right),
where "θ"i is the angle of incidence, and n is the refractive index of the internal (optically denser) medium relative to the external (optically rarer) medium. (Some authors, however, use the reciprocal refractive index, so that their expressions for the phase shifts look different from the above.)
The phase advance of the "p" component relative to the "s" component is then given by
formula_0.
This is plotted in black in Fig. 2, for angles of incidence exceeding the critical angle, for three values of the refractive index. It can be seen that a refractive index of 1.45 is not enough to give a 45° phase difference, whereas a refractive index of 1.5 is enough (by a slim margin) to give a 45° phase difference at two angles of incidence: about 50.2° and 53.3°.
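These figures can be reproduced with a short numerical check (an illustrative script added here, not part of the original analysis), scanning the angles above the critical angle with the phase-advance expressions quoted above:

import math

def phase_difference(theta_deg, n):
    """delta_p - delta_s, in degrees, for total internal reflection at angle theta_deg."""
    th = math.radians(theta_deg)
    root = math.sqrt(math.sin(th) ** 2 - 1.0 / n ** 2)   # requires theta above the critical angle
    d_s = 2 * math.atan(root / math.cos(th))
    d_p = 2 * math.atan(n ** 2 * root / math.cos(th))
    return math.degrees(d_p - d_s)

n = 1.5
critical = math.degrees(math.asin(1.0 / n))
previous = None
for k in range(int(critical * 100) + 1, 9000):            # 0.01° steps up to 90°
    theta = k / 100.0
    delta = phase_difference(theta, n)
    if previous is not None and (previous - 45.0) * (delta - 45.0) < 0:
        print(f"45° phase difference near {theta:.2f}°")  # prints angles near 50.2° and 53.3°
    previous = delta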
For "θ"i greater than the critical angle, the phase shifts on total reflection are deduced from complex values of the reflection coefficients. For completeness, Fig. 2 also shows the phase shifts on "partial" reflection, for "θ"i "less" than the critical angle. In the latter case, the reflection coefficients for the "s" and "p" components are "real", and are conveniently expressed by "Fresnel's sine law"
and "Fresnel's tangent law"
where "θ"i is the angle of incidence and "θ"t is the angle of refraction (with subscript "t" for "transmitted"), and the sign of the latter result is a function of the convention described above. (We can now see a disadvantage of that convention, namely that the two coefficients have opposite signs as we approach normal incidence; the corresponding advantage is that they have the same signs at grazing incidence.)
By Fresnel's sine law, rs is positive for all angles of incidence with a transmitted ray (since "θ"t > "θ"i for dense-to-rare incidence), giving a phase shift δs of zero. But, by his tangent law, rp is negative for small angles (that is, near normal incidence), and changes sign at "Brewster's angle", where "θ"i and "θ"t are complementary. Thus the phase shift δp is 180° for small "θ"i but switches to 0° at Brewster's angle. Combining the complementarity with Snell's law yields "θ"i = arctan (1/"n") as Brewster's angle for dense-to-rare incidence.
That completes the information needed to plot δs and δp for all angles of incidence in Fig. 2, in which δp is in red and δs in blue. On the angle-of-incidence scale (horizontal axis), Brewster's angle is where δp (red) falls from 180° to 0°, and the critical angle is where both δp and δs (red and blue) start to rise again. To the left of the critical angle is the region of "partial" reflection; here both reflection coefficients are real (phase 0° or 180°) with magnitudes less than 1. To the right of the critical angle is the region of "total" reflection; there both reflection coefficients are complex with magnitudes equal to 1.
In Fig. 2, the phase difference "δ" is computed by a final subtraction; but there are other ways of expressing it. Fresnel himself gave one such formula in 1823. Born and Wolf (1970, p. 50) derive an expression for tan ("δ"/2), and find its maximum analytically.
History.
Background.
Augustin-Jean Fresnel came to the study of total internal reflection through his research on polarization. In 1811, François Arago discovered that polarized light was apparently "depolarized" in an orientation-dependent and color-dependent manner when passed through a slice of birefringent crystal: the emerging light showed colors when viewed through an analyzer (second polarizer). "Chromatic polarization", as this phenomenon came to be called, was more thoroughly investigated in 1812 by Jean-Baptiste Biot. In 1813, Biot established that one case studied by Arago, namely quartz cut perpendicular to its optic axis, was actually a gradual rotation of the plane of polarization with distance. He went on to discover that certain liquids, including turpentine ("térébenthine"), shared this property (see "Optical rotation").
In 1816, Fresnel offered his first attempt at a "wave-based" theory of chromatic polarization. Without (yet) explicitly invoking transverse waves, this theory treated the light as consisting of two perpendicularly polarized components.
Stage 1: Coupled prisms (1817).
In 1817, Fresnel noticed that plane-polarized light seemed to be partly depolarized by total internal reflection, if initially polarized at an acute angle to the plane of incidence. By including total internal reflection in a chromatic-polarization experiment, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them. Choosing an appropriate angle of incidence (not yet exactly specified) gave a phase difference of 1/8 of a cycle. Two such reflections from the "parallel faces" of "two coupled prisms" gave a phase difference of 1/4 of a cycle. In that case, if the light was initially polarized at 45° to the plane of incidence and reflection, it appeared to be "completely" depolarized after the two reflections. These findings were reported in a memoir submitted and read to the French Academy of Sciences in November 1817.
In a "supplement" dated January 1818, Fresnel reported that optical rotation could be emulated by passing the polarized light through a pair of "coupled prisms", followed by an ordinary birefringent lamina sliced parallel to its axis, with the axis at 45° to the plane of reflection of the prisms, followed by a second pair of prisms at 90° to the first. This was the first experimental evidence of a mathematical relation between optical rotation and birefringence.
Stage 2: Parallelepiped (1818).
The memoir of November 1817 bears the undated marginal note: "I have since replaced these two coupled prisms by a parallelepiped in glass." A "dated" reference to the parallelepiped form – the form that we would now recognize as a Fresnel rhomb – is found in a memoir which Fresnel read to the Academy on 30 March 1818, and which was subsequently lost until 1846. In that memoir, Fresnel reported that if polarized light was fully "depolarized" by a rhomb, its properties were not further modified by a subsequent passage through an optically rotating medium, whether that medium was a crystal or a liquid or even his own emulator; for example, the light retained its ability to be repolarized by a second rhomb.
Interlude (1818–22).
As an engineer of bridges and roads, and as a proponent of the wave theory of light, Fresnel was still an outsider to the physics establishment when he presented his parallelepiped in March 1818. But he was increasingly difficult to ignore. In April 1818 he claimed priority for the Fresnel integrals. In July he submitted the great memoir on diffraction that immortalized his name in elementary physics textbooks. In 1819 came the announcement of the prize for the memoir on diffraction, the publication of the Fresnel–Arago laws, and the presentation of Fresnel's proposal to install "stepped lenses" in lighthouses.
In 1821, Fresnel derived formulae equivalent to his sine and tangent laws (Eqs. (3) and (4), above) by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Using old experimental data, he promptly confirmed that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water. The experimental confirmation was reported in a "postscript" to the work in which Fresnel expounded his mature theory of chromatic polarization, introducing transverse waves. Details of the derivation were given later, in a memoir read to the academy in January 1823. The derivation combined conservation of energy with continuity of the "tangential" vibration at the interface, but failed to allow for any condition on the "normal" component of vibration. (The first derivation from electromagnetic principles was given by Hendrik Lorentz in 1875.)
Meanwhile, by April 1822, Fresnel accounted for the directions and polarizations of the refracted rays in birefringent crystals of the "biaxial" class – a feat that won the admiration of Pierre-Simon Laplace.
Use in experiments (1822–23).
In a memoir on stress-induced birefringence (now called "photoelasticity") read in September 1822, Fresnel reported an experiment involving a row of glass prisms with their refracting angles in alternating directions, and with two half-prisms at the ends, making the whole assembly rectangular. When the prisms facing the same way were compressed in a vise, objects viewed through the length of the assembly appeared double. At the end of this memoir he proposed a variation of the experiment, involving a Fresnel rhomb, for the purpose of verifying that optical rotation is a form of birefringence: he predicted that if the compressed glass prisms were replaced by (unstressed) monocrystalline quartz prisms with the same direction of optical rotation and with their optic axes aligned along the row, an object seen by looking along the common optic axis would give two images, which would seem unpolarized if viewed through an analyzer alone; but if viewed through a Fresnel rhomb, they would be polarized at ±45° to the plane of reflection.
Confirmation of this prediction was reported in a memoir read in December 1822, in which Fresnel coined the terms "linear polarization", "circular polarization", and "elliptical polarization". In the experiment, the Fresnel rhomb revealed that the two images were circularly polarized in opposite directions, and the separation of the images showed that the different (circular) polarizations propagated at different speeds. To obtain a visible separation, Fresnel needed only one 14°-152°-14° prism and two half-prisms. He found, however, that the separation was improved if the glass half-prisms were replaced by quartz half-prisms whose direction of optical rotation was opposite to that of the 14°-152°-14° prism.
Thus, although we now think of the Fresnel rhomb primarily as a device for converting between linear and circular polarization, it was not until the memoir of December 1822 that Fresnel himself could describe it in those terms.
In the same memoir, Fresnel explained optical rotation by noting that linearly-polarized light could be resolved into two circularly-polarized components rotating in opposite directions. If these components propagated at slightly different speeds (as he had demonstrated for quartz), then the phase difference between them – and therefore the orientation of their linearly-polarized resultant – would vary continuously with distance.
Stage 3: Calculation of angles (1823).
The concept of circular polarization was useful in the memoir of January 1823, containing the detailed derivations of the sine and tangent laws: in that same memoir, Fresnel found that for angles of incidence greater than the critical angle, the resulting reflection coefficients were complex with unit magnitude. Noting that the magnitude represented the amplitude ratio as usual, he guessed that the argument represented the phase shift, and verified the hypothesis by experiment. The verification involved calculating the angle of incidence that would introduce a total phase difference of 90° between the "s" and "p" components, for various numbers of total internal reflections at that angle; subjecting light initially polarized at 45° to the plane of incidence to that number of reflections at that angle; and checking that the emerging polarization was circular.
This procedure was necessary because, with the technology of the time, one could not measure the "s" and "p" phase-shifts directly, and one could not measure an arbitrary degree of ellipticality of polarization, such as might be caused by the difference between the phase shifts. But one could verify that the polarization was "circular", because the brightness of the light was then insensitive to the orientation of the analyzer.
For glass with a refractive index of 1.51, Fresnel calculated that a 45° phase difference between the two reflection coefficients (hence a 90° difference after two reflections) required an angle of incidence of 48°37' or 54°37'. He cut a rhomb to the latter angle and found that it performed as expected. Thus the specification of the Fresnel rhomb was completed.
Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after "three" reflections at the same angle, and "four" reflections at the same angle. In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. (Compare Fig. 2 above, which shows that the phase difference "δ" is more sensitive to the refractive index for smaller angles of incidence.)
For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.
Significance.
In summary, the invention of the rhomb was not a single event in Fresnel's career, but a process spanning a large part of it. Arguably, the calculation of the phase shift on total internal reflection marked not only the completion of his theory of the rhomb, but also the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see "Augustin-Jean Fresnel").
The calculation of the phase shift was also a landmark in the application of complex numbers. Leonhard Euler had pioneered the use of complex exponents in solutions of ordinary differential equations, on the understanding that the "real part" of the solution was the relevant part. But Fresnel's treatment of total internal reflection seems to have been the first occasion on which a physical meaning was attached to the "argument" of a complex number. According to Salomon Bochner,
<templatestyles src="Template:Blockquote/styles.css" />We think that this was the first time that complex numbers or any other mathematical objects which are "nothing-but-symbols" were put into the center of an interpretative context of "reality," and it is an extraordinary fact that this interpretation, although the first of its kind, stood up so well to verification by experiment and to the later "maxwellization" of the entire theory. In very loose terms one can say that this was the first time in which "nature" was abstracted from "pure" mathematics, that is from a mathematics which had not been previously abstracted from nature itself.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta = \\delta_{p\\!} - \\delta_s \\,"
}
] | https://en.wikipedia.org/wiki?curid=8323351 |
8324 | Difference engine | Automatic mechanical calculator
A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. It was designed in the 1820s, and was first created by Charles Babbage. The name "difference engine" is derived from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Some of the most common mathematical functions used in engineering, science and navigation are built from logarithmic and trigonometric functions, which can be approximated by polynomials, so a difference engine can compute many useful tables.
History.
The notion of a mechanical calculator for mathematical functions can be traced back to the Antikythera mechanism of the 2nd century BC, while early modern examples are attributed to Pascal and Leibniz in the 17th century.
In 1784 J. H. Müller, an engineer in the Hessian army, devised and built an adding machine and described the basic principles of a difference machine in a book published in 1786 (the first written reference to a difference machine is dated to 1784), but he was unable to obtain funding to progress with the idea.
Charles Babbage's difference engines.
Charles Babbage began to construct a small difference engine in c. 1819 and had completed it by 1822 (Difference Engine 0). He announced his invention on 14 June 1822, in a paper to the Royal Astronomical Society, entitled "Note on the application of machinery to the computation of astronomical and mathematical tables". This machine used the decimal number system and was powered by cranking a handle. The British government was interested, since producing tables was time-consuming and expensive and they hoped the difference engine would make the task more economical.
In 1823, the British government gave Babbage £1700 to start work on the project. Although Babbage's design was feasible, the metalworking techniques of the era could not economically make parts to the precision and in the quantity required. Thus the implementation proved to be much more expensive and doubtful of success than the government's initial estimate. According to the 1830 design for Difference Engine No. 1, it would have about 25,000 parts, weigh 4 tons, and operate on 20-digit numbers by sixth-order differences. In 1832, Babbage and Joseph Clement produced a small working model (one-seventh of the plan), which operated on 6-digit numbers by second-order differences. Lady Byron described seeing the working prototype in 1833: "We both went to see the thinking machine (or so it seems) last Monday. It raised several Nos. to the 2nd and 3rd powers, and extracted the root of a Quadratic equation." Work on the larger engine was suspended in 1833.
By the time the government abandoned the project in 1842, Babbage had received and spent over £17,000 on development, which still fell short of achieving a working engine. The government valued only the machine's output (economically produced tables), not the development (at unpredictable cost) of the machine itself. Babbage refused to recognize that predicament. Meanwhile, Babbage's attention had moved on to developing an analytical engine, further undermining the government's confidence in the eventual success of the difference engine. By improving the concept as an analytical engine, Babbage had made the difference engine concept obsolete, and the project to implement it an utter failure in the view of the government.
The incomplete Difference Engine No. 1 was put on display to the public at the 1862 International Exhibition in South Kensington, London.
Babbage went on to design his much more general analytical engine, but later produced an improved "Difference Engine No. 2" design (31-digit numbers and seventh-order differences), between 1846 and 1849. Babbage was able to take advantage of ideas developed for the analytical engine to make the new difference engine calculate more quickly while using fewer parts.
Scheutzian calculation engine.
Inspired by Babbage's difference engine in 1834, Per Georg Scheutz built several experimental models. In 1837 his son Edward proposed to construct a working model in metal, and in 1840 finished the calculating part, capable of calculating series with 5-digit numbers and first-order differences, which was later extended to third-order (1842). In 1843, after adding the printing part, the model was completed.
In 1851, funded by the government, construction of the larger and improved (15-digit numbers and fourth-order differences) machine began, and finished in 1853. The machine was demonstrated at the World's Fair in Paris, 1855 and then sold in 1856 to the Dudley Observatory in Albany, New York. Delivered in 1857, it was the first printing calculator sold. In 1857 the British government ordered the next Scheutz's difference machine, which was built in 1859. It had the same basic construction as the previous one, weighing about .
Others.
Martin Wiberg improved Scheutz's construction (c. 1859, his machine has the same capacity as Scheutz's: 15-digit and fourth-order) but used his device only for producing and publishing printed tables (interest tables in 1860, and logarithmic tables in 1875).
Alfred Deacon of London in c. 1862 produced a small difference engine (20-digit numbers and third-order differences).
American George B. Grant started working on his calculating machine in 1869, unaware of the works of Babbage and Scheutz. One year later (1870) he learned about difference engines and proceeded to design one himself, describing his construction in 1871. In 1874 the Boston Thursday Club raised a subscription for the construction of a large-scale model, which was built in 1876. It could be expanded to enhance precision and weighed about .
Christel Hamann built one machine (16-digit numbers and second-order differences) in 1909 for the "Tables of Bauschinger and Peters" ("Logarithmic-Trigonometrical Tables with eight decimal places"), which was first published in Leipzig in 1910. It weighed about .
Burroughs Corporation in about 1912 built a machine for the Nautical Almanac Office which was used as a difference engine of second-order. It was later replaced in 1929 by a Burroughs Class 11 (13-digit numbers and second-order differences, or 11-digit numbers and [at least up to] fifth-order differences).
Around 1927, Alexander John Thompson built an "integrating and differencing machine" (13-digit numbers and fifth-order differences) for his table of logarithms "Logarithmetica britannica". This machine was composed of four modified Triumphator calculators.
Leslie Comrie in 1928 described how to use the Brunsviga-Dupla calculating machine as a difference engine of second-order (15-digit numbers). He also noted in 1931 that the National Accounting Machine Class 3000 could be used as a difference engine of sixth-order.
Construction of two working No. 2 difference engines.
During the 1980s, Allan G. Bromley, an associate professor at the University of Sydney, Australia, studied Babbage's original drawings for the Difference and Analytical Engines at the Science Museum library in London. This work led the Science Museum to construct a working calculating section of difference engine No. 2 from 1985 to 1991, under Doron Swade, the then Curator of Computing. This was to celebrate the 200th anniversary of Babbage's birth in 1991. In 2002, the printer which Babbage originally designed for the difference engine was also completed. The conversion of the original design drawings into drawings suitable for engineering manufacturers' use revealed some minor errors in Babbage's design (possibly introduced as a protection in case the plans were stolen), which had to be corrected. The difference engine and printer were constructed to tolerances achievable with 19th-century technology, resolving a long-standing debate as to whether Babbage's design could have worked using Georgian-era engineering methods. The machine contains 8,000 parts and weighs about 5 tons.
The printer's primary purpose is to produce stereotype plates for use in printing presses, which it does by pressing type into soft plaster to create a flong. Babbage intended that the Engine's results be conveyed directly to mass printing, having recognized that many errors in previous tables were not the result of human calculating mistakes but from slips in the manual typesetting process. The printer's paper output is mainly a means of checking the engine's performance.
In addition to funding the construction of the output mechanism for the Science Museum's difference engine, Nathan Myhrvold commissioned the construction of a second complete Difference Engine No. 2, which was on exhibit at the Computer History Museum in Mountain View, California, from May 2008 to January 2016. It has since been transferred to Intellectual Ventures in Seattle where it is on display just outside the main lobby.
Operation.
The difference engine consists of a number of columns, numbered from 1 to "N". The machine is able to store one decimal number in each column. The machine can only add the value of column "n" + 1 to column "n" to produce the new value of column "n". Column "N" can only store a constant; column 1 displays (and possibly prints) the value of the calculation on the current iteration.
The engine is programmed by setting initial values to the columns. Column 1 is set to the value of the polynomial at the start of computation. Column 2 is set to a value derived from the first and higher derivatives of the polynomial at the same value of X. Each of the columns from 3 to "N" is set to a value derived from the formula_0 first and higher derivatives of the polynomial.
Timing.
In the Babbage design, one iteration (i.e. one full set of addition and carry operations) happens for each rotation of the main shaft. Odd and even columns alternately perform an addition in one cycle. The sequence of operations for column formula_1 is thus:
Steps 1,2,3,4 occur for every odd column, while steps 3,4,1,2 occur for every even column.
While Babbage's original design placed the crank directly on the main shaft, it was later realized that the force required to crank the machine would have been too great for a human to handle comfortably. Therefore, the two models that were built incorporate a 4:1 reduction gear at the crank, and four revolutions of the crank are required to perform one full cycle.
Steps.
Each iteration creates a new result, and is accomplished in four steps corresponding to four complete turns of the handle shown at the far right in the picture below. The four steps are:
Subtraction.
The engine represents negative numbers as ten's complements. Subtraction amounts to addition of a negative number. This works in the same manner in which modern computers perform subtraction, where the binary equivalent is known as two's complement.
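As a hedged illustration of the arithmetic idea only (not a model of Babbage's mechanism), the following Python sketch carries out subtraction purely by addition on fixed-width decimal registers using ten's complements; the register width of 8 digits and the function names are choices made for this example.

```python
# Ten's-complement subtraction on fixed-width decimal registers (illustrative sketch).
DIGITS = 8                 # arbitrary register width chosen for the example
MODULUS = 10 ** DIGITS     # registers hold values modulo 10^DIGITS

def tens_complement(n):
    """Return the ten's complement of n, i.e. the register pattern representing -n."""
    return (MODULUS - n) % MODULUS

def subtract_by_addition(a, b):
    """Compute a - b using only addition: add the ten's complement of b and drop the carry."""
    result = (a + tens_complement(b)) % MODULUS
    # Interpret patterns in the upper half of the range as negative numbers.
    return result if result < MODULUS // 2 else result - MODULUS

if __name__ == "__main__":
    print(subtract_by_addition(1234, 567))   # 667
    print(subtract_by_addition(567, 1234))   # -667
```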
Method of differences.
The principle of a difference engine is Newton's method of divided differences. If the initial value of a polynomial (and of its finite differences) is calculated by some means for some value of X, the difference engine can calculate any number of nearby values, using the method generally known as the method of finite differences. For example, consider the quadratic polynomial
formula_4
with the goal of tabulating the values "p"(0), "p"(1), "p"(2), "p"(3), "p"(4), and so forth. The table below is constructed as follows: the second column contains the values of the polynomial, the third column contains the differences of the two left neighbors in the second column, and the fourth column contains the differences of the two neighbors in the third column:
x | p(x) = 2x^2 − 3x + 2 | diff1(x) = p(x+1) − p(x) | diff2(x) = diff1(x+1) − diff1(x)
0 | 2 | −1 | 4
1 | 1 | 3 | 4
2 | 4 | 7 | 4
3 | 11 | 11 |
4 | 22 | |
The numbers in the third values column (the second differences) are constant. In fact, starting from any polynomial of degree "n", column number "n" + 1 will always be constant. This is the crucial fact behind the success of the method.
This table was built from left to right, but it is possible to continue building it from right to left down a diagonal in order to compute more values. To calculate "p"(5) use the values from the lowest diagonal. Start with the fourth column constant value of 4 and copy it down the column. Then continue the third column by adding 4 to 11 to get 15. Next continue the second column by taking its previous value, 22 and adding the 15 from the third column. Thus "p"(5) is 22 + 15 = 37. In order to compute "p"(6), we iterate the same algorithm on the "p"(5) values: take 4 from the fourth column, add that to the third column's value 15 to get 19, then add that to the second column's value 37 to get 56, which is "p"(6). This process may be continued "ad infinitum". The values of the polynomial are produced without ever having to multiply. A difference engine only needs to be able to add. From one loop to the next, it needs to store 2 numbers—in this example (the last elements in the first and second columns). To tabulate polynomials of degree "n", one needs sufficient storage to hold "n" numbers.
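The procedure just described is easy to mimic in software. The following Python sketch illustrates the method of finite differences (it is not a model of Babbage's hardware): it starts from the lowest diagonal of the table and produces further values of p(x) = 2x² − 3x + 2 using additions only. The function name is made up for this example.

```python
def tabulate_by_differences(last_values, steps):
    """Tabulate further values of a polynomial using only additions.

    last_values -- the lowest diagonal of the difference table, ordered from the
                   function value down to the constant difference,
                   e.g. [p(4), diff1(3), diff2(2)] = [22, 11, 4].
    steps       -- how many further values of the polynomial to produce.
    """
    cols = list(last_values)
    out = []
    for _ in range(steps):
        # Add each column into the one above it, working upward from the constant column.
        for i in range(len(cols) - 2, -1, -1):
            cols[i] += cols[i + 1]
        out.append(cols[0])
    return out

# Continuing the example p(x) = 2x^2 - 3x + 2 from the diagonal [22, 11, 4]:
print(tabulate_by_differences([22, 11, 4], 3))   # [37, 56, 79] = p(5), p(6), p(7)
```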
Babbage's difference engine No. 2, finally built in 1991, can hold 8 numbers of 31 decimal digits each and can thus tabulate 7th degree polynomials to that precision. The best machines from Scheutz could store 4 numbers with 15 digits each.
Initial values.
The initial values of columns can be calculated by first manually calculating N consecutive values of the function and by backtracking (i.e. calculating the required differences).
Col formula_5 gets the value of the function at the start of computation formula_6. Col formula_7 is the difference between formula_8 and formula_6...
If the function to be calculated is a polynomial function, expressed as
formula_9
the initial values can be calculated directly from the constant coefficients "a"0, "a"1,"a"2, ..., "an" without calculating any data points. The initial values are thus:
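The explicit expressions for the initial values in terms of the coefficients are not reproduced here, but equivalent values can be obtained numerically by evaluating the polynomial at x = 0, 1, ..., n and taking forward differences. A minimal Python sketch (the helper names are made up for this illustration):

```python
def poly_eval(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n (coefficients given lowest order first)."""
    return sum(a * x**k for k, a in enumerate(coeffs))

def initial_column_values(coeffs):
    """Initial column settings: the forward differences of the polynomial at 0, 1, ..., n."""
    n = len(coeffs) - 1
    row = [poly_eval(coeffs, x) for x in range(n + 1)]
    cols = []
    while row:
        cols.append(row[0])                       # leading entry of each difference row
        row = [b - a for a, b in zip(row, row[1:])]
    return cols

# For p(x) = 2x^2 - 3x + 2, i.e. coefficients [2, -3, 2]:
print(initial_column_values([2, -3, 2]))   # [2, -1, 4] = p(0), diff1(0), diff2(0)
```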
Use of derivatives.
Many commonly used functions are analytic functions, which can be expressed as power series, for example as a Taylor series. The initial values can be calculated to any degree of accuracy; if done correctly the engine will give exact results for the first "N" steps. After that, the engine will only give an approximation of the function.
The Taylor series expresses the function as a sum obtained from its derivatives at one point. For many functions the higher derivatives are trivial to obtain; for instance, the sine function at 0 has values of 0 or formula_15 for all derivatives. Setting 0 as the start of computation we get the simplified Maclaurin series
formula_16
The same method of calculating the initial values from the coefficients can be used as for polynomial functions. The polynomial constant coefficients will now have the value
formula_17
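As a hedged sketch of this idea, the Python code below truncates the Maclaurin series of the sine function at a chosen order and then derives initial column values from the resulting polynomial, exactly as for an ordinary polynomial above. The helper names, the truncation order and the tabulation step are all choices made for this illustration.

```python
import math

def sine_maclaurin_coeffs(order):
    """Coefficients a_k = f^(k)(0)/k! of the Maclaurin polynomial of sin, up to x^order."""
    coeffs = []
    for k in range(order + 1):
        deriv_at_0 = [0.0, 1.0, 0.0, -1.0][k % 4]   # sin, cos, -sin, -cos evaluated at 0
        coeffs.append(deriv_at_0 / math.factorial(k))
    return coeffs

def initial_column_values(coeffs, step=0.1):
    """Forward differences of the truncated series at 0, step, 2*step, ..."""
    n = len(coeffs) - 1
    row = [sum(a * (i * step) ** k for k, a in enumerate(coeffs)) for i in range(n + 1)]
    cols = []
    while row:
        cols.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return cols

print(initial_column_values(sine_maclaurin_coeffs(7)))   # column settings for tabulating sin
```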
Curve fitting.
The problem with the methods described above is that errors will accumulate and the series will tend to diverge from the true function. A solution which guarantees a constant maximum error is to use curve fitting. A minimum of "N" values are calculated, evenly spaced along the range of the desired calculations. Using a curve-fitting technique such as Gaussian reduction, an ("N"−1)th-degree polynomial interpolation of the function is found. With the optimized polynomial, the initial values can be calculated as above.
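A minimal sketch of this approach in Python, using NumPy's least-squares polynomial fit as a stand-in for the "Gaussian reduction" mentioned above; the target function, the range and the degree are illustrative assumptions, not part of the historical procedure.

```python
import numpy as np

# Fit an (N-1)th-degree polynomial to N evenly spaced samples of the target function,
# here sin over [0, 1] (both the function and the range are arbitrary choices).
N = 8
xs = np.linspace(0.0, 1.0, N)
fitted = np.polynomial.Polynomial.fit(xs, np.sin(xs), deg=N - 1)

# Initial column values for the engine: evaluate the fitted polynomial at the first
# N tabulation points (here the sample points themselves) and take forward differences.
row = fitted(xs)
columns = []
while len(row):
    columns.append(row[0])
    row = np.diff(row)
print(columns)
```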
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "(n-1)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "n+1"
},
{
"math_id": 3,
"text": "n-1"
},
{
"math_id": 4,
"text": "p(x) = 2x^2 - 3x + 2 \\,"
},
{
"math_id": 5,
"text": "1_0"
},
{
"math_id": 6,
"text": "f(0)"
},
{
"math_id": 7,
"text": "2_0"
},
{
"math_id": 8,
"text": "f(1)"
},
{
"math_id": 9,
"text": " f(x) = a_n x^n + a_{n-1} x^{n-1} + \\cdots + a_2 x^2 + a_1 x + a_0 \\, "
},
{
"math_id": 10,
"text": "3_0"
},
{
"math_id": 11,
"text": "4_0"
},
{
"math_id": 12,
"text": "5_0"
},
{
"math_id": 13,
"text": "6_0"
},
{
"math_id": 14,
"text": "..."
},
{
"math_id": 15,
"text": "\\pm1"
},
{
"math_id": 16,
"text": "\n\\sum_{n=0}^{\\infin} \\frac{f^{(n)}(0)}{n!}\\ x^{n}\n"
},
{
"math_id": 17,
"text": "\na_n \\equiv \\frac{f^{(n)}(0)}{n!}\n"
}
] | https://en.wikipedia.org/wiki?curid=8324 |
8324345 | Minimal subtraction scheme | Renormalization scheme in quantum field theory
In quantum field theory, the minimal subtraction scheme, or MS scheme, is a particular renormalization scheme used to absorb the infinities that arise in perturbative calculations beyond leading order, introduced independently by Gerard 't Hooft and Steven Weinberg in 1973. The MS scheme consists of absorbing only the divergent part of the radiative corrections into the counterterms.
In the similar and more widely used modified minimal subtraction, or MS-bar scheme (formula_0), one absorbs the divergent part plus a universal constant that always arises along with the divergence in Feynman diagram calculations into the counterterms. When using dimensional regularization, i.e. formula_1, it is implemented by rescaling the renormalization scale: formula_2, with formula_3 the Euler–Mascheroni constant. | [
{
"math_id": 0,
"text": "\\overline{\\text{MS}}"
},
{
"math_id": 1,
"text": " d^4 p \\to \\mu^{4-d} d^d p"
},
{
"math_id": 2,
"text": "\\mu^2 \\to \\mu^2 \\frac{ e^{\\gamma_{\\rm E}} }{4 \\pi}"
},
{
"math_id": 3,
"text": "\\gamma_{\\rm E}"
}
] | https://en.wikipedia.org/wiki?curid=8324345 |
8324507 | Renormalon | Divergence in perturbative quantum field theory
In physics, a renormalon (a term suggested by 't Hooft) is a particular source of divergence seen in perturbative approximations to quantum field theories (QFT). When a formally divergent series in a QFT is summed using Borel summation, the associated Borel transform of the series can have singularities as a function of the complex transform parameter. The renormalon is a possible type of singularity arising in this complex "Borel plane", and is a counterpart of an instanton singularity. Associated with such singularities, renormalon contributions are discussed in the context of quantum chromodynamics (QCD) and usually have the power-like form formula_0 as functions of the momentum formula_1 (here formula_2 is the momentum cut-off). They are cited against the usual logarithmic effects like formula_3.
Brief history.
Perturbation series in quantum field theory are usually divergent, as was first pointed out by Freeman Dyson. According to the Lipatov method, the formula_4-th order contribution of perturbation theory to any quantity can be evaluated at large formula_4 in the saddle-point approximation for functional integrals and is determined by instanton configurations. This contribution typically behaves as formula_5 as a function of formula_4 and is frequently associated with approximately the same (formula_5) number of Feynman diagrams. Lautrup noted that there exist individual diagrams giving approximately the same contribution. In principle, it is possible that such diagrams are automatically taken into account in Lipatov's calculation, because its interpretation in terms of diagrammatic technique is problematic. However, 't Hooft put forward a conjecture that Lipatov's and Lautrup's contributions are associated with different types of singularities in the Borel plane: the former with instanton singularities and the latter with renormalon ones. The existence of instanton singularities is beyond any doubt, while the existence of renormalon singularities has never been proved rigorously in spite of numerous efforts. Among the essential contributions one should mention the application of the operator product expansion, as suggested by Parisi.
Recently a proof was suggested for the absence of renormalon singularities in formula_6 theory, and a general criterion for their existence was formulated in terms of the asymptotic behavior of the Gell-Mann–Low function formula_7. Analytical results for the asymptotic behavior of formula_7 in formula_6 theory and QED indicate the absence of renormalon singularities in these theories.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\Lambda/Q\\right)^p"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\Lambda"
},
{
"math_id": 3,
"text": "\\ln\\left(\\Lambda/Q\\right)"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "N!"
},
{
"math_id": 6,
"text": "\\phi^4"
},
{
"math_id": 7,
"text": "\\beta(g)"
}
] | https://en.wikipedia.org/wiki?curid=8324507 |
8325288 | Consensus theorem | Theorem in Boolean algebra
In Boolean algebra, the consensus theorem or rule of consensus is the identity:
formula_0
The consensus or resolvent of the terms formula_1 and formula_2 is formula_3. It is the conjunction of all the unique literals of the terms, excluding the literal that appears unnegated in one term and negated in the other. If formula_4 includes a term that is negated in formula_5 (or vice versa), the consensus term formula_3 is false; in other words, there is no consensus term.
The conjunctive dual of this equation is:
formula_6
formula_7
Consensus.
The consensus or consensus term of two conjunctive terms of a disjunction is defined when one term contains the literal formula_8 and the other the literal formula_9, an opposition. The consensus is the conjunction of the two terms, omitting both formula_8 and formula_9, and repeated literals. For example, the consensus of formula_10 and formula_11 is formula_12. The consensus is undefined if there is more than one opposition.
For the conjunctive dual of the rule, the consensus formula_13 can be derived from formula_14 and formula_15 through the resolution inference rule. This shows that the LHS is derivable from the RHS (if "A" → "B" then "A" → "AB"; replacing "A" with RHS and "B" with ("y" ∨ "z") ). The RHS can be derived from the LHS simply through the conjunction elimination inference rule. Since RHS → LHS and LHS → RHS (in propositional calculus), then LHS = RHS (in Boolean algebra).
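The identity is small enough to verify exhaustively. A brute-force check over all eight assignments, as an illustrative Python snippet:

```python
from itertools import product

# Verify the consensus theorem  xy + x'z + yz == xy + x'z  over all assignments of x, y, z.
for x, y, z in product([False, True], repeat=3):
    lhs = (x and y) or ((not x) and z) or (y and z)
    rhs = (x and y) or ((not x) and z)
    assert lhs == rhs, (x, y, z)
print("consensus theorem holds for all assignments")
```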
Applications.
In Boolean algebra, repeated consensus is the core of one algorithm for calculating the Blake canonical form of a formula.
In digital logic, including the consensus term in a circuit can eliminate race hazards.
History.
The concept of consensus was introduced by Archie Blake in 1937, related to the Blake canonical form. It was rediscovered by Samson and Mills in 1954 and by Quine in 1955. Quine coined the term 'consensus'. Robinson used it for clauses in 1965 as the basis of his "resolution principle".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "xy \\vee \\bar{x}z \\vee yz = xy \\vee \\bar{x}z"
},
{
"math_id": 1,
"text": "xy"
},
{
"math_id": 2,
"text": "\\bar{x}z"
},
{
"math_id": 3,
"text": "yz"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "z"
},
{
"math_id": 6,
"text": "(x \\vee y)(\\bar{x} \\vee z)(y \\vee z) = (x \\vee y)(\\bar{x} \\vee z)"
},
{
"math_id": 7,
"text": " \\begin{align}\n xy \\vee \\bar{x}z \\vee yz &= xy \\vee \\bar{x}z \\vee (x \\vee \\bar{x})yz \\\\\n &= xy \\vee \\bar{x}z \\vee xyz \\vee \\bar{x}yz \\\\\n &= (xy \\vee xyz) \\vee (\\bar{x}z \\vee \\bar{x}yz) \\\\\n &= xy(1\\vee z)\\vee\\bar{x}z(1\\vee y) \\\\\n &= xy \\vee \\bar{x}z\n \\end{align} \n"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "\\bar{a}"
},
{
"math_id": 10,
"text": "\\bar{x}yz"
},
{
"math_id": 11,
"text": "w\\bar{y}z"
},
{
"math_id": 12,
"text": "w\\bar{x}z"
},
{
"math_id": 13,
"text": "y \\vee z"
},
{
"math_id": 14,
"text": "(x\\vee y)"
},
{
"math_id": 15,
"text": "(\\bar{x} \\vee z)"
}
] | https://en.wikipedia.org/wiki?curid=8325288 |
832566 | Northern red snapper | Species of fish
<templatestyles src="Template:Taxobox/core/styles.css" />
The northern red snapper (Lutjanus campechanus) is a species of marine ray-finned fish, a snapper belonging to the family Lutjanidae. It is native to the western Atlantic Ocean, the Caribbean Sea, and the Gulf of Mexico, where it inhabits environments associated with reefs. This species is commercially important and is also sought-after as a game fish.
Taxonomy.
The northern red snapper was first formally described in 1860 as "Mesoprion campechanus" by the Cuban zoologist Felipe Poey with the type locality given as Campeche in Mexico. The specific name reflects the type locality.
Characteristics.
The northern red snapper's body is very similar in shape to other snappers, such as the mangrove snapper, mutton snapper, lane snapper, and dog snapper. All feature a sloped profile, medium-to-large scales, a spiny dorsal fin, and a laterally compressed body. Northern red snapper have short, sharp, needle-like teeth, but they lack the prominent upper canine teeth found on the mutton, dog, and mangrove snappers. They are rather large and are red in color.
This snapper reaches maturity at a length of about . The common adult length is , but may reach . The maximum published weight is 50 lb, 4 oz (22.79 kg) and the oldest reported age is 57+ years.
Coloration of the northern red snapper is light red, with more intense pigment on the back. It has 10 dorsal spines, 14 soft dorsal rays, three anal spines and eight to nine anal soft rays. Juvenile fish (shorter than 30–35 cm) can also have a dark spot on their sides, below the anterior soft dorsal rays, which fades with age.
Distribution.
The northern red snapper is found at a depth of 30 to 620 feet in the Gulf of Mexico, the Caribbean Sea, and the southeastern Atlantic coast of Mexico and the United States and much less commonly northward as far as Massachusetts. In Latin American Spanish, it is known as ', ', ', or '.
This species commonly inhabits waters from , but can be caught as deep as on occasion. They stay relatively close to the bottom, and inhabit rocky bottoms, ledges, ridges, and artificial reefs, including offshore oil rigs and shipwrecks. Like most other snappers, northern red snapper are gregarious and form large schools around wrecks and reefs. These schools are usually made up of fish of very similar size.
The preferred habitat of this species changes as it grows and matures due to increased need for cover and changing food habits. Newly hatched red snapper spread out over large areas of open benthic habitat, then move to low-relief habitats, such as oyster beds. As they near one year of age, they move to intermediate-relief habitats as the previous year's fish move on to high-relief reefs with room for more individuals. Around artificial reefs such as oil platforms, smaller fish spend time in the upper part of the water column while more mature (and larger) adults live in deeper areas. These larger fish do not allow smaller individuals to share this territory. The largest red snapper spread out over open habitats, as well as reefs.
Reproduction and growth.
Diaz reported weight vs. length data for "L. campechanus" for the National Marine Fisheries Service (US). As northern red snapper grow longer, they increase in weight, but the relationship between length and weight is not linear. The relationship between total length (L, in inches) and total weight (W, in pounds) for nearly all species of fish can be expressed by an equation of the form:
formula_0
Invariably, b is close to 3.0 for all species, and c is a constant that varies among species. Diaz reported that for red snapper, c=0.000010 and b=3.076. These values are for inputs of length in cm and result in weight in kg.
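As a hedged illustration of the reported relationship, using the constants as given above (length in centimetres, weight in kilograms), a short Python computation; the example lengths are arbitrary choices.

```python
# Weight-length relationship W = c * L**b with the constants reported by Diaz for red snapper.
c, b = 0.000010, 3.076          # length in cm, weight in kg, as reported
for length_cm in (40, 60, 80):  # example lengths, chosen only for illustration
    weight_kg = c * length_cm ** b
    print(f"{length_cm} cm  ->  {weight_kg:.2f} kg")
```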
Szedlmayer et al. reported length vs. age data for "L. campechanus" in a primarily artificial reef environment off the coast of Alabama, US: TL(age) = 1,025 (1 – e^( -0.15 age)), N=409, R = 0.96. For the first five years, growth can be estimated as being approximately linear: TL(age) = 97.7 age + 67.6, N = 397, R = 0.87 (for each equation, age is in years and total length is in mm). Szedlmayer & Shipp 1994, Patterson 1999, Nelson and Manooch 1982, Patterson et al 2001, Wilson & Nieland 2001, and Fischer et al 2004 show "L. campechanus" growing most rapidly over its first 8–10 years.
Northern red snappers move to different types of habitats during their growth process. When they are newly spawned, red snapper settle over large areas of open benthic habitat(s). Below age 1, the red snapper move to low-relief habitats for food and cover. If available, oyster shell beds are preferred. The second stage is when these fish outgrow low-relief habitats and move to intermediate-relief habitats as age 1 snapper leave to move on to another growth stage. Next, at about age 2, snapper seek high-relief reefs having low densities of larger snapper. Next, at platforms, smaller snapper occupy the upper water column. Then, the larger, older snapper occupy the deeper areas of the platforms and large benthic reefs and they prevent smaller snapper and other fish from using these habitats. In spite of local habitat preferences, Szedlmayer reported that of 146 "L. campechanus" tagged, released and recaptured within about a year, 57% were still approximately at their respective release site, and 76% were recaptured within 2 km of their release site. The greatest movement by a single fish was 32 km.
A northern red snapper attains sexual maturity at two to five years old, and an adult snapper can live for more than 50 years. Research from 1999–2001 suggested the populations of red snapper off the coast of Texas reach maturity faster and at a smaller size than populations off of the Louisiana and Alabama coasts.
Commercial and recreational use.
Northern red snapper are a prized food fish, caught commercially, as well as recreationally. It is sometimes used in Vietnamese canh chua ("Sour soup"). Red snapper is the most commonly caught snapper in the continental US (almost 50% of the total catch), with similar species being more common elsewhere. They eat almost anything, but prefer small fish and crustaceans. They can be caught on both live and cut bait, and also take artificial lures, but with less vigor. They are commonly caught up to and in length, but fish over have been taken.
Recreational fishing for northern red snapper has been popular for a long time, restricted mostly by fishing limits intended to ensure a sustainable population. The first minimum size limit was introduced in 1984, after a 1981 report described quickly declining harvests (both commercial and recreational). From 1985 to 1990, the annual recreational catch of red snapper was about 1.5 million. From 1991 to 2005, the catch was substantially higher, varying from year to year between 2.5 and 4.0 million.
When northern red snapper bite on a line, they tend to be nibblers and pickers, and a soft touch is needed when trying to catch them. Because the older red snapper like structure, anglers use bottom fishing over reefs, wrecks, and oil rigs, and use line and supplies in the 50-lb class. Since the anglers have to both choose the right bait and present it correctly, they tend to use multiple hooked baits. Favorite baits include squid, whole medium-sized fish, and small strips of fish such as amberjack. Although many northern red snapper are caught on the bottom, in some situations the larger fish are caught on heavy jigs (artificial lures), often tipped with a strip of bait or by freelining baits at the proper upper level.
Interest in recreational fishing for northern red snapper, and in the Gulf of Mexico in general, has increased dramatically. From 1995–2003, the number of Louisiana fishing charter guide license holders increased eight-fold. In 2017, the Gulf of Mexico Fishery Management Council estimated the commercial value of the red snapper fishery to be $129 million. While specific numbers on the economic impact of recreational red snapper fishing are not available, it is clear that the activity has a significant economic impact on coastal communities through tourism and fishing-related activities.
Since 1990, the total catch limit for northern red snapper has been divided into 49% for recreational fishermen and 51% for commercial. Commercially, they are caught on multiple-hook gear with electric reels. Fishing for red snapper has been a major industry in the Gulf of Mexico, but permit restrictions and changes in the quota system for commercial snapper fishermen in the Gulf have made the fish less commercially available. Researchers estimate the bycatch of young red snapper, especially by shrimp trawlers, is a significant concern.
Genetic studies have shown many fish sold as red snapper in the US are not actually "L. campechanus", but other species in the family. Substitution of other species for red snapper is more common in large chain restaurants which serve a common menu nationwide. In these cases, suppliers provide a less costly substitute (usually imported) for red snapper. In countries such as India, where the actual red snapper does not occur in local waters, John's snapper and Russell's snapper are sold as "red snapper".
Stocking in artificial reefs.
Juvenile northern red snappers have been released on artificial reef habitats off the coast of Sarasota, Florida, to conduct investigations into the use of hatchery-reared juveniles to supplement native populations in the Gulf of Mexico. Artificial reefs off the coast of Alabama have proven to be a favorite habitat of red snapper two years old and older. Gallaway et al. (2009) analyzed several studies and concluded that, in 1992, 70–80% of the age-two red snapper in that area were living around offshore oil platforms.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Media related to "Lutjanus campechanus" at Wikimedia Commons
{
"math_id": 0,
"text": "W = cL^b\\!\\,"
}
] | https://en.wikipedia.org/wiki?curid=832566 |
8327127 | Dense-in-itself | Topological subset with no isolated point
In general topology, a subset formula_0 of a topological space is said to be dense-in-itself or crowded
if formula_0 has no isolated point.
Equivalently, formula_0 is dense-in-itself if every point of formula_0 is a limit point of formula_0.
Thus formula_0 is dense-in-itself if and only if formula_1, where formula_2 is the derived set of formula_0.
A dense-in-itself closed set is called a perfect set. (In other words, a perfect set is a closed set without isolated point.)
The notion of dense set is distinct from "dense-in-itself". This can sometimes be confusing, as ""X" is dense in "X"" (always true) is not the same as ""X" is dense-in-itself" (no isolated point).
Examples.
A simple example of a set that is dense-in-itself but not closed (and hence not a perfect set) is the set of irrational numbers (considered as a subset of the real numbers). This set is dense-in-itself because every neighborhood of an irrational number formula_3 contains at least one other irrational number formula_4. On the other hand, the set of irrationals is not closed because every rational number lies in its closure. Similarly, the set of rational numbers is also dense-in-itself but not closed in the space of real numbers.
The above examples, the irrationals and the rationals, are also dense sets in their topological space, namely formula_5. As an example that is dense-in-itself but not dense in its topological space, consider formula_6. This set is not dense in formula_5 but is dense-in-itself.
Properties.
A singleton subset of a space formula_7 can never be dense-in-itself, because its unique point is isolated in it.
The dense-in-itself subsets of any space are closed under unions. In a dense-in-itself space, they include all open sets. In a dense-in-itself T1 space they include all dense sets. However, spaces that are not T1 may have dense subsets that are not dense-in-itself: for example in the dense-in-itself space formula_8 with the indiscrete topology, the set formula_9 is dense, but is not dense-in-itself.
The closure of any dense-in-itself set is a perfect set.
In general, the intersection of two dense-in-itself sets is not dense-in-itself. But the intersection of a dense-in-itself set and an open set is dense-in-itself.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
"This article incorporates material from Dense in-itself on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "A\\subseteq A'"
},
{
"math_id": 2,
"text": "A'"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y \\neq x"
},
{
"math_id": 5,
"text": "\\mathbb{R}"
},
{
"math_id": 6,
"text": "\\mathbb{Q} \\cap [0,1]"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "X=\\{a,b\\}"
},
{
"math_id": 9,
"text": "A=\\{a\\}"
}
] | https://en.wikipedia.org/wiki?curid=8327127 |
8328 | Divergence | Vector operator in vector calculus
In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.
Physical interpretation of divergence.
In physical terms, the divergence of a vector field is the extent to which the vector field flux behaves like a source at a given point. It is a local measure of its "outgoingness" – the extent to which there are more of the field vectors exiting from an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence.
The divergence of a vector field is often illustrated using the simple example of the velocity field of a fluid, a liquid or gas. A moving gas has a velocity, a speed and direction at each point, which can be represented by a vector, so the velocity of the gas forms a vector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore the velocity field has negative divergence everywhere. In contrast, in a gas at a constant temperature and pressure, the net flux of gas out of any closed surface is zero. The gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the "net" flux is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal.
If the gas is heated only at one point or small region, or a small tube is introduced which supplies a source of additional gas at one point, the gas there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the gas, centered on the heated point. Any closed surface enclosing the heated point will have a flux of gas particles passing out of it, so there is positive divergence at that point. However any closed surface "not" enclosing the point will have a constant density of gas inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore the divergence at any other point is zero.
Definition.
The divergence of a vector field F(x) at a point x0 is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume "V" enclosing x0 to the volume of "V", as "V" shrinks to zero
\left. \operatorname{div} \mathbf{F} \right|_{\mathbf{x_0}} = \lim_{V \to 0} \frac{1}{|V|} \oint_{S(V)} \mathbf{F} \cdot \mathbf{\hat n} \, dS
where |"V"| is the volume of "V", "S"("V") is the boundary of "V", and formula_3 is the outward unit normal to that surface. It can be shown that the above limit always converges to the same value for any sequence of volumes that contain x0 and approach zero volume. The result, div F, is a scalar function of x.
Since this definition is coordinate-free, it shows that the divergence is the same in any coordinate system. However it is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use.
A vector field with zero divergence everywhere is called "solenoidal" – in which case any closed surface has no net flux across it.
Definition in coordinates.
Cartesian coordinates.
In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field formula_4 is defined as the scalar-valued function:
formula_5
Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an "N"-dimensional vector field F in N-dimensional space is invariant under any invertible linear transformation.
The common notation for the divergence ∇ · F is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the ∇ operator (see del), apply them to the corresponding components of F, and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
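A quick symbolic check of the Cartesian formula using SymPy; this is an illustrative sketch, and the vector field chosen here is arbitrary.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary example field F = (x*y, sin(y)*z, z**2).
F = (x * y, sp.sin(y) * z, z**2)

# Divergence as the sum of the partial derivatives of the components.
div_F = sum(sp.diff(comp, var) for comp, var in zip(F, (x, y, z)))
print(sp.simplify(div_F))   # y + z*cos(y) + 2*z
```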
Cylindrical coordinates.
For a vector expressed in local unit cylindrical coordinates as
formula_6
where e"a" is the unit vector in direction "a", the divergence is
formula_7
The use of local coordinates is vital for the validity of the expression. If we consider x the position vector and the functions "r"(x), "θ"(x), and "z"(x), which assign the corresponding global cylindrical coordinate to a vector, in general formula_8, formula_9, and formula_10. In particular, if we consider the identity function F(x) = x, we find that:
formula_11.
Spherical coordinates.
In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, and F again written in local unit coordinates, the divergence is
formula_12
Tensor field.
Let A be a continuously differentiable second-order tensor field defined as follows:
formula_13
the divergence in cartesian coordinate system is a first-order tensor field and can be defined in two ways:
formula_14
and
formula_15
We have
formula_16
If the tensor is symmetric, "Aij" = "Aji", then formula_17. Because of this, the two definitions (and the symbols div and formula_18) are often used interchangeably in the literature (especially in mechanics equations where tensor symmetry is assumed).
Expressions of formula_19 in cylindrical and spherical coordinates are given in the article del in cylindrical and spherical coordinates.
General coordinates.
Using Einstein notation we can consider the divergence in general coordinates, which we write as "x"1, …, "x""i", …, "x""n", where n is the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, so "x"2 refers to the second component, and not the quantity x squared. The index variable i is used to refer to an arbitrary component, such as "x""i". The divergence can then be written via the Voss-Weyl formula, as:
formula_20
where formula_21 is the local coefficient of the volume element and "Fi" are the components of formula_22 with respect to the local unnormalized covariant basis (sometimes written as formula_23). The Einstein notation implies summation over i, since it appears as both an upper and lower index.
The volume coefficient ρ is a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we have "ρ" = 1, "ρ" = "r" and "ρ" = "r"2 sin "θ", respectively. The volume can also be expressed as formula_24, where "gab" is the metric tensor. The determinant appears because it provides the appropriate invariant definition of the volume, given a set of vectors. Since the determinant is a scalar quantity which doesn't depend on the indices, these can be suppressed, writing formula_25. The absolute value is taken in order to handle the general case where the determinant might be negative, such as in pseudo-Riemannian spaces. The reason for the square-root is a bit subtle: it effectively avoids double-counting as one goes from curved to Cartesian coordinates, and back. The volume (the determinant) can also be understood as the Jacobian of the transformation from Cartesian to curvilinear coordinates, which for "n" = 3 gives formula_26.
Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write formula_27 for the normalized basis, and formula_28 for the components of F with respect to it, we have that
formula_29
using one of the properties of the metric tensor. By dotting both sides of the last equality with the contravariant element formula_30, we can conclude that formula_31. After substituting, the formula becomes:
formula_32
See "" for further discussion.
Properties.
The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator, i.e.,
formula_33
for all vector fields F and G and all real numbers "a" and "b".
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then
formula_34
or in more suggestive notation
formula_35
Another product rule for the cross product of two vector fields F and G in three dimensions involves the curl and reads as follows:
formula_36
or
formula_37
The Laplacian of a scalar field is the divergence of the field's gradient:
formula_38
The divergence of the curl of any vector field (in three dimensions) is equal to zero:
formula_39
If a vector field F with zero divergence is defined on a ball in R3, then there exists some vector field G on the ball with F = curl G. For regions in R3 more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of "failure" of the truth of the statement, measured by the homology of the chain complex
formula_40
serves as a nice quantification of the complicatedness of the underlying region "U". These are the beginnings and main motivations of de Rham cohomology.
Decomposition theorem.
It can be shown that any stationary flux v(r) that is twice continuously differentiable in R3 and vanishes sufficiently fast for |r| → ∞ can be decomposed uniquely into an "irrotational part" E(r) and a "source-free part" B(r). Moreover, these parts are explicitly determined by the respective "source densities" (see above) and "circulation densities" (see the article Curl):
For the irrotational part one has
formula_41
with
formula_42
The source-free part, B, can be similarly written: one only has to replace the "scalar potential" Φ(r) by a "vector potential" A(r) and the terms −∇Φ by +∇ × A, and the source density div v
by the circulation density ∇ × v.
This "decomposition theorem" is a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well.
In arbitrary finite dimensions.
The divergence of a vector field can be defined in any finite number formula_43 of dimensions. If
formula_44
in a Euclidean coordinate system with coordinates "x"1, "x"2, ..., "x""n", define
formula_45
In the 1D case, F reduces to a regular function, and the divergence reduces to the derivative.
For any "n", the divergence is a linear operator, and it satisfies the "product rule"
formula_46
for any scalar-valued function φ.
Relation to the exterior derivative.
One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in R3. Define the current two-form as
formula_47
It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density "ρ" = 1 "dx" ∧ "dy" ∧ "dz" moving with local velocity F. Its exterior derivative "dj" is then given by
formula_48
where formula_49 is the wedge product.
Thus, the divergence of the vector field F can be expressed as:
formula_50
Here the superscript ♭ is one of the two musical isomorphisms, and ⋆ is the Hodge star operator. When the divergence is written in this way, the operator formula_51 is referred to as the codifferential. Working with the current two-form and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system.
In curvilinear coordinates.
The appropriate expression is more complicated in curvilinear coordinates. The divergence of a vector field extends naturally to any differentiable manifold of dimension "n" that has a volume form (or density) μ, e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on R3, on such a manifold a vector field "X" defines an ("n" − 1)-form "j" = "i""X" "μ" obtained by contracting "X" with μ. The divergence is then the function defined by
formula_52
The divergence can be defined in terms of the Lie derivative as
formula_53
This means that the divergence measures the rate of expansion of a unit of volume (a volume element) as it flows with the vector field.
On a pseudo-Riemannian manifold, the divergence with respect to the volume can be expressed in terms of the Levi-Civita connection ∇:
formula_54
where the second expression is the contraction of the vector field valued 1-form ∇"X" with itself and the last expression is the traditional coordinate expression from Ricci calculus.
An equivalent expression without using a connection is
formula_55
where g is the metric and formula_56 denotes the partial derivative with respect to coordinate "x""a". The square-root of the (absolute value of the determinant of the) metric appears because the divergence must be written with the correct conception of the volume. In curvilinear coordinates, the basis vectors are no longer orthonormal; the determinant encodes the correct idea of volume in this case. It appears twice, here, once, so that the formula_57 can be transformed into "flat space" (where coordinates are actually orthonormal), and once again so that formula_56 is also transformed into "flat space", so that finally, the "ordinary" divergence can be written with the "ordinary" concept of volume in flat space ("i.e." unit volume, "i.e." one, "i.e." not written down). The square-root appears in the denominator, because the derivative transforms in the opposite way (contravariantly) to the vector (which is covariant). This idea of getting to a "flat coordinate system" where local computations can be done in a conventional way is called a vielbein. A different way to see this is to note that the divergence is the codifferential in disguise. That is, the divergence corresponds to the expression formula_58 with formula_59 the differential and formula_60 the Hodge star. The Hodge star, by its construction, causes the volume form to appear in all of the right places.
The divergence of tensors.
Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector is given by
formula_61
where ∇"μ" denotes the covariant derivative. In this general setting, the correct formulation of the divergence is to recognize that it is a codifferential; the appropriate properties follow from there.
Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism ♯: if "T" is a ("p", "q")-tensor ("p" for the contravariant vector and "q" for the covariant one), then we define the "divergence of T" to be the ("p", "q" − 1)-tensor
formula_62
that is, we take the trace over the "first two" covariant indices of the covariant derivative.
The formula_63 symbol refers to the musical isomorphism.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left. \\operatorname{div} \\mathbf{F} \\right|_\\mathbf{x_0} = \\lim_{V \\to 0} \\frac{1}{|V|}"
},
{
"math_id": 1,
"text": "\\scriptstyle S(V)"
},
{
"math_id": 2,
"text": "\\mathbf{F} \\cdot \\mathbf{\\hat n} \\, dS"
},
{
"math_id": 3,
"text": "\\mathbf{\\hat n}"
},
{
"math_id": 4,
"text": "\\mathbf{F} = F_x\\mathbf{i} + F_y\\mathbf{j} + F_z\\mathbf{k}"
},
{
"math_id": 5,
"text": "\\operatorname{div} \\mathbf{F} = \\nabla\\cdot\\mathbf{F} = \\left(\\frac{\\partial}{\\partial x}, \\frac{\\partial}{\\partial y}, \\frac{\\partial}{\\partial z} \\right) \\cdot (F_x,F_y,F_z) = \\frac{\\partial F_x}{\\partial x}+\\frac{\\partial F_y}{\\partial y}+\\frac{\\partial F_z}{\\partial z}."
},
{
"math_id": 6,
"text": "\\mathbf{F}= \\mathbf{e}_r F_r + \\mathbf{e}_\\theta F_\\theta + \\mathbf{e}_z F_z,"
},
{
"math_id": 7,
"text": "\\operatorname{div} \\mathbf F = \\nabla \\cdot \\mathbf{F} = \\frac{1}{r} \\frac{\\partial}{\\partial r} \\left(r F_r\\right) + \\frac1r \\frac{\\partial F_\\theta}{\\partial\\theta} + \\frac{\\partial F_z}{\\partial z}.\n"
},
{
"math_id": 8,
"text": "r(\\mathbf{F}(\\mathbf{x}))\\neq F_r(\\mathbf{x})"
},
{
"math_id": 9,
"text": "\\theta(\\mathbf{F}(\\mathbf{x}))\\neq F_{\\theta}(\\mathbf{x})"
},
{
"math_id": 10,
"text": "z(\\mathbf{F}(\\mathbf{x}))\\neq F_z(\\mathbf{x})"
},
{
"math_id": 11,
"text": "\\theta(\\mathbf{F}(\\mathbf{x})) = \\theta \\neq F_{\\theta}(\\mathbf{x}) = 0"
},
{
"math_id": 12,
"text": "\\operatorname{div}\\mathbf{F} = \\nabla \\cdot \\mathbf{F} = \\frac1{r^2} \\frac{\\partial}{\\partial r}\\left(r^2 F_r\\right) + \\frac1{r\\sin\\theta} \\frac{\\partial}{\\partial \\theta} (\\sin\\theta\\, F_\\theta) + \\frac1{r\\sin\\theta} \\frac{\\partial F_\\varphi}{\\partial \\varphi}."
},
{
"math_id": 13,
"text": "\\mathbf{A} = \\begin{bmatrix}\nA_{11} & A_{12} & A_{13} \\\\\nA_{21} & A_{22} & A_{23} \\\\\nA_{31} & A_{32} & A_{33} \n\\end{bmatrix}"
},
{
"math_id": 14,
"text": "\\operatorname{div} (\\mathbf{A}) \n= \\cfrac{\\partial A_{ik}}{\\partial x_k}~\\mathbf{e}_i = A_{ik,k}~\\mathbf{e}_i \n= \\begin{bmatrix}\n\\dfrac{\\partial A_{11}}{\\partial x_1} +\\dfrac{\\partial A_{12}}{\\partial x_2} +\\dfrac{\\partial A_{13}}{\\partial x_3} \\\\\n\\dfrac{\\partial A_{21}}{\\partial x_1} +\\dfrac{\\partial A_{22}}{\\partial x_2} +\\dfrac{\\partial A_{23}}{\\partial x_3} \\\\\n\\dfrac{\\partial A_{31}}{\\partial x_1} +\\dfrac{\\partial A_{32}}{\\partial x_2} +\\dfrac{\\partial A_{33}}{\\partial x_3}\n\\end{bmatrix}"
},
{
"math_id": 15,
"text": "\n\\nabla\\cdot \\mathbf A = \\cfrac{\\partial A_{ki}}{\\partial x_k}~\\mathbf{e}_i = A_{ki,k}~\\mathbf{e}_i = \n\\begin{bmatrix}\n\\dfrac{\\partial A_{11}}{\\partial x_1} + \\dfrac{\\partial A_{21}}{\\partial x_2} + \\dfrac{\\partial A_{31}}{\\partial x_3} \\\\\n\\dfrac{\\partial A_{12}}{\\partial x_1} + \\dfrac{\\partial A_{22}}{\\partial x_2} + \\dfrac{\\partial A_{32}}{\\partial x_3} \\\\\n\\dfrac{\\partial A_{13}}{\\partial x_1} + \\dfrac{\\partial A_{23}}{\\partial x_2} + \\dfrac{\\partial A_{33}}{\\partial x_3} \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 16,
"text": "\\operatorname{div} (\\mathbf{A^T}) = \\nabla\\cdot\\mathbf A"
},
{
"math_id": 17,
"text": "\\operatorname{div} (\\mathbf{A}) = \\nabla\\cdot\\mathbf A"
},
{
"math_id": 18,
"text": "\\nabla\\cdot"
},
{
"math_id": 19,
"text": "\\nabla\\cdot\\mathbf A"
},
{
"math_id": 20,
"text": "\\operatorname{div}(\\mathbf{F}) = \\frac{1}{\\rho} \\frac{\\partial \\left(\\rho\\, F^i\\right)}{\\partial x^i},"
},
{
"math_id": 21,
"text": "\\rho"
},
{
"math_id": 22,
"text": "\\mathbf{F}=F^i\\mathbf{e}_i"
},
{
"math_id": 23,
"text": "\\mathbf{e}_i = \\partial\\mathbf{x} / \\partial x^i"
},
{
"math_id": 24,
"text": "\\rho = \\sqrt{\\left|\\det g_{ab}\\right|}"
},
{
"math_id": 25,
"text": "\\rho=\\sqrt{\\left|\\det g\\right|}"
},
{
"math_id": 26,
"text": "\\rho = \\left| \\frac{\\partial(x,y,z)}{\\partial (x^1,x^2,x^3)}\\right|"
},
{
"math_id": 27,
"text": "\\hat{\\mathbf{e}}_i"
},
{
"math_id": 28,
"text": "\\hat{F}^i"
},
{
"math_id": 29,
"text": "\\mathbf{F}=F^i \\mathbf{e}_i =\nF^i \\|{\\mathbf{e}_i }\\| \\frac{\\mathbf{e}_i}{\\| \\mathbf{e}_i \\|} =\nF^i \\sqrt{g_{ii}} \\, \\hat{\\mathbf{e}}_i =\n\\hat{F}^i \\hat{\\mathbf{e}}_i,"
},
{
"math_id": 30,
"text": "\\hat{\\mathbf{e}}^i"
},
{
"math_id": 31,
"text": "F^i = \\hat{F}^i / \\sqrt{g_{ii}}"
},
{
"math_id": 32,
"text": "\\operatorname{div}(\\mathbf{F}) = \\frac 1{\\rho} \\frac{\\partial \\left(\\frac{\\rho}{\\sqrt{g_{ii}}}\\hat{F}^i\\right)}{\\partial x^i} =\n \\frac 1{\\sqrt{\\det g}} \\frac{\\partial \\left(\\sqrt{\\frac{\\det g}{g_{ii}}}\\,\\hat{F}^i\\right)}{\\partial x^i}."
},
{
"math_id": 33,
"text": "\\operatorname{div}(a\\mathbf{F} + b\\mathbf{G}) = a \\operatorname{div} \\mathbf{F} + b \\operatorname{div} \\mathbf{G}"
},
{
"math_id": 34,
"text": "\\operatorname{div}(\\varphi \\mathbf{F}) = \\operatorname{grad} \\varphi \\cdot \\mathbf{F} + \\varphi \\operatorname{div} \\mathbf{F},"
},
{
"math_id": 35,
"text": "\\nabla\\cdot(\\varphi \\mathbf{F}) = (\\nabla\\varphi) \\cdot \\mathbf{F} + \\varphi (\\nabla\\cdot\\mathbf{F})."
},
{
"math_id": 36,
"text": "\\operatorname{div}(\\mathbf{F}\\times\\mathbf{G}) = \\operatorname{curl} \\mathbf{F} \\cdot\\mathbf{G} - \\mathbf{F} \\cdot \\operatorname{curl} \\mathbf{G},"
},
{
"math_id": 37,
"text": "\\nabla\\cdot(\\mathbf{F}\\times\\mathbf{G}) = (\\nabla\\times\\mathbf{F})\\cdot\\mathbf{G} - \\mathbf{F}\\cdot(\\nabla\\times\\mathbf{G})."
},
{
"math_id": 38,
"text": "\\operatorname{div}(\\operatorname{grad}\\varphi) = \\Delta\\varphi."
},
{
"math_id": 39,
"text": "\\nabla\\cdot(\\nabla\\times\\mathbf{F})=0."
},
{
"math_id": 40,
"text": "\\{ \\text{scalar fields on } U \\} ~ \\overset{\\operatorname{grad}}{\\rarr} ~ \\{ \\text{vector fields on } U \\} ~ \\overset{\\operatorname{curl}}{\\rarr} ~ \\{ \\text{vector fields on } U \\} ~ \\overset{\\operatorname{div}}{\\rarr} ~ \\{ \\text{scalar fields on } U \\}"
},
{
"math_id": 41,
"text": "\\mathbf E=-\\nabla \\Phi(\\mathbf r),"
},
{
"math_id": 42,
"text": "\\Phi (\\mathbf{r})=\\int_{\\mathbb R^3}\\,d^3\\mathbf r'\\;\\frac{\\operatorname{div} \\mathbf{v}(\\mathbf{r}')}{4\\pi\\left|\\mathbf{r}-\\mathbf{r}'\\right|}."
},
{
"math_id": 43,
"text": "n"
},
{
"math_id": 44,
"text": "\\mathbf{F} = (F_1 , F_2 , \\ldots F_n) ,"
},
{
"math_id": 45,
"text": "\\operatorname{div} \\mathbf{F} = \\nabla\\cdot\\mathbf{F} = \\frac{\\partial F_1}{\\partial x_1} + \\frac{\\partial F_2}{\\partial x_2} + \\cdots + \\frac{\\partial F_n}{\\partial x_n}."
},
{
"math_id": 46,
"text": "\\nabla\\cdot(\\varphi \\mathbf{F}) = (\\nabla\\varphi) \\cdot \\mathbf{F} + \\varphi (\\nabla\\cdot\\mathbf{F})"
},
{
"math_id": 47,
"text": "j = F_1 \\, dy \\wedge dz + F_2 \\, dz \\wedge dx + F_3 \\, dx \\wedge dy ."
},
{
"math_id": 48,
"text": "dj = \\left(\\frac{\\partial F_1}{\\partial x} +\\frac{\\partial F_2}{\\partial y} +\\frac{\\partial F_3}{\\partial z} \\right) dx \\wedge dy \\wedge dz = (\\nabla \\cdot {\\mathbf F}) \\rho "
},
{
"math_id": 49,
"text": "\\wedge"
},
{
"math_id": 50,
"text": "\\nabla \\cdot {\\mathbf F} = {\\star} d{\\star} \\big({\\mathbf F}^\\flat \\big) ."
},
{
"math_id": 51,
"text": "{\\star} d{\\star}"
},
{
"math_id": 52,
"text": "dj = (\\operatorname{div} X) \\mu ."
},
{
"math_id": 53,
"text": "{\\mathcal L}_X \\mu = (\\operatorname{div} X) \\mu ."
},
{
"math_id": 54,
"text": "\\operatorname{div} X = \\nabla \\cdot X = {X^a}_{;a} ,"
},
{
"math_id": 55,
"text": "\\operatorname{div}(X) = \\frac{1}{\\sqrt{\\left|\\det g \\right|}} \\, \\partial_a \\left(\\sqrt{\\left|\\det g \\right|} \\, X^a\\right),"
},
{
"math_id": 56,
"text": "\\partial_a"
},
{
"math_id": 57,
"text": "X^a"
},
{
"math_id": 58,
"text": "\\star d\\star"
},
{
"math_id": 59,
"text": "d"
},
{
"math_id": 60,
"text": "\\star"
},
{
"math_id": 61,
"text": "\\nabla \\cdot \\mathbf{F} = \\nabla_\\mu F^\\mu ,"
},
{
"math_id": 62,
"text": "(\\operatorname{div} T) (Y_1 , \\ldots , Y_{q-1}) = {\\operatorname{trace}} \\Big(X \\mapsto \\sharp (\\nabla T) (X , \\cdot , Y_1 , \\ldots , Y_{q-1}) \\Big);"
},
{
"math_id": 63,
"text": "\\sharp"
}
] | https://en.wikipedia.org/wiki?curid=8328 |
8330403 | Dirichlet process | Family of stochastic processes
In probability theory, Dirichlet processes (after the distribution associated with Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a probability distribution whose range is itself a set of probability distributions. It is often used in Bayesian inference to describe the prior knowledge about the distribution of random variables—how likely it is that the random variables are distributed according to one or another particular distribution.
As an example, a bag of 100 real-world dice is a "random probability mass function (random pmf)"—to sample this random pmf you put your hand in the bag and draw out a die, that is, you draw a pmf. A bag of dice manufactured using a crude process 100 years ago will likely have probabilities that deviate wildly from the uniform pmf, whereas a bag of state-of-the-art dice used by Las Vegas casinos may have barely perceptible imperfections. We can model the randomness of pmfs with the Dirichlet distribution.
The Dirichlet process is specified by a base distribution formula_1 and a positive real number formula_0 called the concentration parameter (also known as scaling parameter). The base distribution is the expected value of the process, i.e., the Dirichlet process draws distributions "around" the base distribution the way a normal distribution draws real numbers around its mean. However, even if the base distribution is continuous, the distributions drawn from the Dirichlet process are almost surely discrete. The scaling parameter specifies how strong this discretization is: in the limit of formula_2, the realizations are all concentrated at a single value, while in the limit of formula_3 the realizations become continuous. Between the two extremes the realizations are discrete distributions with less and less concentration as formula_0 increases.
The Dirichlet process can also be seen as the infinite-dimensional generalization of the Dirichlet distribution. In the same way as the Dirichlet distribution is the conjugate prior for the categorical distribution, the Dirichlet process is the conjugate prior for infinite, nonparametric discrete distributions. A particularly important application of Dirichlet processes is as a prior probability distribution in infinite mixture models.
The Dirichlet process was formally introduced by Thomas S. Ferguson in 1973.
It has since been applied in data mining and machine learning, among others for natural language processing, computer vision and bioinformatics.
Introduction.
Dirichlet processes are usually used when modelling data that tends to repeat previous values in a so-called "rich get richer" fashion. Specifically, suppose that the generation of values formula_4 can be simulated by the following algorithm.
Input: formula_1 (a probability distribution called base distribution), formula_0 (a positive real number called scaling parameter)
For formula_5:
<poem>
a) With probability formula_6 draw formula_7 from formula_1.
b) With probability formula_8 set formula_9, where formula_10 is the number of previous observations of formula_11.
(Formally, formula_12 where formula_13 denotes the number of elements in the set.) </poem>
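This procedure can be simulated directly. The following Python sketch (an illustration, not part of the original description) implements it; the function name and the callable `draw_from_base`, which returns one sample from the base distribution formula_1, are assumptions of the sketch.

```python
import random

def sample_dp_sequence(n, alpha, draw_from_base):
    """Generate n values by the 'rich get richer' scheme described above.

    alpha is the scaling parameter and draw_from_base() returns one
    sample from the base distribution H (an assumption of this sketch).
    """
    values = []
    for i in range(n):                      # i previous observations exist
        if random.random() < alpha / (alpha + i):
            values.append(draw_from_base())  # step a): fresh draw from H
        else:
            # step b): repeating a uniformly chosen previous observation
            # gives each distinct value x probability n_x / (alpha + i)
            values.append(random.choice(values))
    return values

# Example with a standard normal base distribution:
# draws = sample_dp_sequence(1000, alpha=3.0,
#                            draw_from_base=lambda: random.gauss(0.0, 1.0))
```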
At the same time, another common model for data is that the observations formula_4 are assumed to be independent and identically distributed (i.i.d.) according to some (random) distribution formula_14. The goal of introducing Dirichlet processes is to be able to describe the procedure outlined above in this i.i.d. model.
The formula_4 observations in the algorithm are not independent, since we have to consider the previous results when generating the next value. They are, however, exchangeable. This fact can be shown by calculating the joint probability distribution of the observations and noticing that the resulting formula only depends on which formula_11 values occur among the observations and how many repetitions they each have. Because of this exchangeability, de Finetti's representation theorem applies and it implies that the observations formula_4 are conditionally independent given a (latent) distribution formula_14. This formula_14 is a random variable itself and has a distribution. This distribution (over distributions) is called a Dirichlet process (formula_15). In summary, this means that we get an equivalent procedure to the above algorithm:
In practice, however, drawing a concrete distribution formula_14 is impossible, since its specification requires an infinite amount of information. This is a common phenomenon in the context of Bayesian non-parametric statistics where a typical task is to learn distributions on function spaces, which involve effectively infinitely many parameters. The key insight is that in many applications the infinite-dimensional distributions appear only as an intermediary computational device and are not required for either the initial specification of prior beliefs or for the statement of the final inference.
Formal definition.
Given a measurable set "S", a base probability distribution "H" and a positive real number formula_0, the Dirichlet process formula_17 is a stochastic process whose sample path (or realization, i.e. an infinite sequence of random variates drawn from the process) is a probability distribution over "S", such that the following holds. For any measurable finite partition of "S", denoted formula_18,
formula_19
formula_20
where formula_21 denotes the Dirichlet distribution and the notation formula_22 means that the random variable formula_23 has the distribution formula_24.
Alternative views.
There are several equivalent views of the Dirichlet process. Besides the formal definition above, the Dirichlet process can be defined implicitly through de Finetti's theorem as described in the first section; this is often called the Chinese restaurant process. A third alternative is the stick-breaking process, which defines the Dirichlet process constructively by writing a distribution sampled from the process as formula_25, where formula_26 are samples from the base distribution formula_1, formula_27 is an indicator function centered on formula_28 (zero everywhere except for formula_29) and the formula_30 are defined by a recursive scheme that repeatedly samples from the beta distribution formula_31.
The Chinese restaurant process.
A widely employed metaphor for the Dirichlet process is based on the so-called Chinese restaurant process. The metaphor is as follows:
Imagine a Chinese restaurant into which customers enter. A new customer sits down at a table with a probability proportional to the number of customers already sitting there. Additionally, a customer opens a new table with a probability proportional to the scaling parameter formula_0. After infinitely many customers have entered, one obtains a probability distribution over infinitely many tables.
This probability distribution over the tables is a random sample of the probabilities of observations drawn from a Dirichlet process with scaling parameter formula_0.
If one associates draws from the base measure formula_1 with every table, the resulting distribution over the sample space formula_32 is a random sample of a Dirichlet process.
The Chinese restaurant process is related to the Pólya urn sampling scheme which yields samples from finite Dirichlet distributions.
Because customers sit at a table with a probability proportional to the number of customers already sitting at the table, two properties of the DP can be deduced: the process is self-reinforcing ("rich get richer"), in that the more often a given value has been sampled in the past, the more likely it is to be sampled again; and, even if formula_1 is a continuous distribution, there is a nonzero probability that two samples take exactly the same value, so that draws from the process are discrete distributions.
The stick-breaking process.
A third approach to the Dirichlet process is the so-called stick-breaking process view. Conceptually, this involves repeatedly breaking off and discarding a random fraction (sampled from a Beta distribution) of a "stick" that is initially of length 1. Remember that draws from a Dirichlet process are distributions over a set formula_32. As noted previously, the distribution drawn is discrete with probability 1. In the stick-breaking process view, we explicitly use the discreteness and give the probability mass function of this (random) discrete distribution as:
formula_33
where formula_34 is the indicator function which evaluates to zero everywhere, except for formula_35. Since this distribution is random itself, its mass function is parameterized by two sets of random variables: the locations formula_36 and the corresponding probabilities formula_37. In the following, we present without proof what these random variables are.
The locations formula_38 are independent and identically distributed according to formula_1, the base distribution of the Dirichlet process. The probabilities formula_30 are given by a procedure resembling the breaking of a unit-length stick (hence the name):
formula_39
where formula_40 are independent random variables with the beta distribution formula_31. The resemblance to 'stick-breaking' can be seen by considering formula_30 as the length of a piece of a stick. We start with a unit-length stick and in each step we break off a portion of the remaining stick according to formula_40 and assign this broken-off piece to formula_30. The formula can be understood by noting that after the first "k" − 1 values have their portions assigned, the length of the remainder of the stick is formula_41 and this piece is broken according to formula_40 and gets assigned to formula_30.
The smaller formula_0 is, the less of the stick will be left for subsequent values (on average), yielding more concentrated distributions.
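A truncated version of the stick-breaking construction is straightforward to simulate; the sketch below is an illustration that keeps only a finite number of sticks (a standard approximation), and its function and argument names are assumptions.

```python
import numpy as np

def stick_breaking(alpha, draw_from_base, truncation=1000, seed=None):
    """Approximate a draw G ~ DP(H, alpha) by truncated stick-breaking.

    Returns atoms theta_k drawn i.i.d. from H (via draw_from_base) and
    weights beta_k = beta'_k * prod_{i<k} (1 - beta'_i) with
    beta'_k ~ Beta(1, alpha); sticks beyond the truncation are dropped.
    """
    rng = np.random.default_rng(seed)
    fractions = rng.beta(1.0, alpha, size=truncation)                  # beta'_k
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - fractions)[:-1]))
    weights = fractions * remaining                                    # beta_k
    atoms = np.array([draw_from_base() for _ in range(truncation)])    # theta_k
    return atoms, weights

# atoms, weights = stick_breaking(3.0, np.random.standard_normal)
```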
The stick-breaking process is similar to the construction where one samples sequentially from marginal beta distributions in order to generate a sample from a Dirichlet distribution.
The Pólya urn scheme.
Yet another way to visualize the Dirichlet process and Chinese restaurant process is as a modified Pólya urn scheme sometimes called the Blackwell–MacQueen sampling scheme. Imagine that we start with an urn filled with formula_0 black balls. Then we proceed as follows: each time an observation is needed, we draw a ball from the urn and place it back together with one additional ball. If the ball drawn was black, the additional ball is given a new, previously unseen colour, and that colour is recorded as the observation; otherwise, the additional ball is given the same colour as the ball drawn, and that colour is recorded.
The resulting distribution over colours is the same as the distribution over tables in the Chinese restaurant process. Furthermore, when we draw a black ball, if rather than generating a new colour, we instead pick a random value from a base distribution formula_1 and use that value to label the new ball, the resulting distribution over labels will be the same as the distribution over the values in a Dirichlet process.
Use as a prior distribution.
The Dirichlet Process can be used as a prior distribution to estimate the probability distribution that generates the data. In this section, we consider the model
formula_42
The Dirichlet Process distribution satisfies prior conjugacy, posterior consistency, and the Bernstein–von Mises theorem.
Prior conjugacy.
In this model, the posterior distribution is again a Dirichlet process. This means that the Dirichlet process is a conjugate prior for this model. The posterior distribution is given by
formula_43
where formula_44 is defined below.
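Since the posterior is again a Dirichlet process, the next observation can be predicted by mixing the base distribution with the empirical distribution of the data. A minimal sketch, with illustrative function names:

```python
import random

def posterior_predictive_draw(data, alpha, draw_from_base):
    """Sample once from the base measure of the posterior DP,
    alpha/(alpha+n) * H + n/(alpha+n) * (empirical distribution),
    which is also the predictive distribution of the next observation."""
    n = len(data)
    if random.random() < alpha / (alpha + n):
        return draw_from_base()     # fresh draw from the base distribution H
    return random.choice(data)      # re-draw one of the observed values
```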
Posterior consistency.
If we take the frequentist view of probability, we believe there is a true probability distribution formula_45 that generated the data. Then it turns out that the Dirichlet process is consistent in the weak topology, which means that for every weak neighbourhood formula_46 of formula_45, the posterior probability of formula_46 converges to formula_47.
Bernstein–Von Mises theorem.
In order to interpret the credible sets as confidence sets, a Bernstein–von Mises theorem is needed. In case of the Dirichlet process we compare the posterior distribution with the empirical process formula_48. Suppose formula_49 is a formula_50-Donsker class, i.e.
formula_51
for some Brownian Bridge formula_52. Suppose also that there exists a function formula_53 such that formula_54 such that formula_55, then, formula_56 almost surely
formula_57
This implies that the credible sets constructed in this way are asymptotic confidence sets, and that Bayesian inference based on the Dirichlet process is asymptotically also valid frequentist inference.
Use in Dirichlet mixture models.
To understand what Dirichlet processes are and the problem they solve we consider the example of data clustering. It is a common situation that data points are assumed to be distributed in a hierarchical fashion where each data point belongs to a (randomly chosen) cluster and the members of a cluster are further distributed randomly within that cluster.
Example 1.
For example, we might be interested in how people will vote on a number of questions in an upcoming election. A reasonable model for this situation might be to classify each voter as a liberal, a conservative or a moderate and then model the event that a voter says "Yes" to any particular question as a Bernoulli random variable with the probability dependent on which political cluster they belong to. By looking at how votes were cast in previous years on similar pieces of legislation one could fit a predictive model using a simple clustering algorithm such as "k"-means. That algorithm, however, requires knowing in advance the number of clusters that generated the data. In many situations, it is not possible to determine this ahead of time, and even when we can reasonably assume a number of clusters we would still like to be able to check this assumption. For example, in the voting example above the division into liberal, conservative and moderate might not be finely tuned enough; attributes such as a religion, class or race could also be critical for modelling voter behaviour, resulting in more clusters in the model.
Example 2.
As another example, we might be interested in modelling the velocities of galaxies using a simple model assuming that the velocities are clustered, for instance by assuming each velocity is distributed according to the normal distribution formula_59, where the formula_60th observation belongs to the formula_61th cluster of galaxies with common expected velocity. In this case it is far from obvious how to determine a priori how many clusters (of common velocities) there should be and any model for this would be highly suspect and should be checked against the data. By using a Dirichlet process prior for the distribution of cluster means we circumvent the need to explicitly specify ahead of time how many clusters there are, although the concentration parameter still controls it implicitly.
We consider this example in more detail. A first naive model is to presuppose that there are formula_62 clusters of normally distributed velocities with common known fixed variance formula_63. Denoting the event that the formula_60th observation is in the formula_61th cluster as formula_64 we can write this model as:
formula_65
That is, we assume that the data belongs to formula_62 distinct clusters with means formula_58 and that formula_66 is the (unknown) prior probability of a data point belonging to the formula_61th cluster. We assume that we have no initial information distinguishing the clusters, which is captured by the symmetric prior formula_67. Here formula_21 denotes the Dirichlet distribution and formula_68 denotes a vector of length formula_62 where each element is 1. We further assign independent and identical prior distributions formula_69 to each of the cluster means, where formula_1 may be any parametric distribution with parameters denoted as formula_70. The hyper-parameters formula_0 and formula_70 are taken to be known fixed constants, chosen to reflect our prior beliefs about the system. To understand the connection to Dirichlet process priors we rewrite this model in an equivalent but more suggestive form:
formula_71
Instead of imagining that each data point is first assigned a cluster and then drawn from the distribution associated to that cluster we now think of each observation being associated with parameter formula_72 drawn from some discrete distribution formula_73 with support on the formula_62 means. That is, we are now treating the formula_72 as being drawn from the random distribution formula_73 and our prior information is incorporated into the model by the distribution over distributions formula_73.
We would now like to extend this model to work without pre-specifying a fixed number of clusters formula_62. Mathematically, this means we would like to select a random prior distribution formula_74 where the values of the clusters means formula_58 are again independently distributed according to formula_75 and the distribution over formula_66 is symmetric over the infinite set of clusters. This is exactly what is accomplished by the model:
formula_76
With this in hand we can better understand the computational merits of the Dirichlet process. Suppose that we wanted to draw formula_77 observations from the naive model with exactly formula_62 clusters. A simple algorithm for doing this would be to draw formula_62 values of formula_58 from formula_69, a distribution formula_78 from formula_67 and then for each observation independently sample the cluster formula_61 with probability formula_66 and the value of the observation according to formula_79. It is easy to see that this algorithm does not work in the case where we allow infinite clusters, because this would require sampling an infinite-dimensional parameter formula_80. However, it is still possible to sample observations formula_81. One can e.g. use the Chinese restaurant representation described above and calculate the probability of each used cluster and of a new cluster being created. This avoids having to explicitly specify formula_80. Other solutions are based on a truncation of clusters: a (high) upper bound on the true number of clusters is introduced, and cluster indices above this truncation level are treated as a single cluster.
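As an illustration of sampling observations without representing formula_80 explicitly, the following sketch draws from the infinite Gaussian mixture through the Chinese restaurant representation; the function name, the callable `draw_cluster_mean` (one draw from formula_69), and the common standard deviation `sigma` are assumptions of this sketch.

```python
import random

def sample_dp_mixture(n, alpha, sigma, draw_cluster_mean):
    """Draw n observations v_i from the infinite mixture with a DP prior,
    using the Chinese restaurant representation: clusters are created on
    the fly, so no infinite-dimensional weight vector is ever sampled."""
    means, counts, obs = [], [], []
    for i in range(n):
        if random.random() < alpha / (alpha + i):
            means.append(draw_cluster_mean())   # open a new cluster, mu_k ~ H
            counts.append(1)
            k = len(means) - 1
        else:
            # join an existing cluster k with probability counts[k] / i
            k = random.choices(range(len(means)), weights=counts)[0]
            counts[k] += 1
        obs.append(random.gauss(means[k], sigma))  # v_i ~ N(mu_k, sigma^2)
    return obs, means

# obs, means = sample_dp_mixture(500, alpha=1.0, sigma=1.0,
#                                draw_cluster_mean=lambda: random.gauss(0.0, 10.0))
```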
Fitting the model described above based on observed data formula_24 means finding the posterior distribution formula_82 over cluster probabilities and their associated means. In the infinite dimensional case it is obviously impossible to write down the posterior explicitly. It is, however, possible to draw samples from this posterior using a modified Gibbs sampler. This is the critical fact that makes the Dirichlet process prior useful for inference.
Applications of the Dirichlet process.
Dirichlet processes are frequently used in "Bayesian nonparametric statistics". "Nonparametric" here does not mean a parameter-less model, but rather a model in which representations grow as more data are observed. Bayesian nonparametric models have gained considerable popularity in the field of machine learning because of the above-mentioned flexibility, especially in unsupervised learning. In a Bayesian nonparametric model, the prior and posterior distributions are not parametric distributions, but stochastic processes. The fact that the Dirichlet distribution is a probability distribution on the simplex of non-negative vectors that sum to one makes it a good candidate to model distributions over distributions or distributions over functions. Additionally, the nonparametric nature of this model makes it an ideal candidate for clustering problems where the number of distinct clusters is unknown beforehand. In addition, the Dirichlet process has also been used for developing mixture-of-experts models, in the context of supervised learning algorithms (regression or classification settings). For instance, in mixtures of Gaussian process experts, the number of required experts must be inferred from the data.
As draws from a Dirichlet process are discrete, an important use is as a prior probability in infinite mixture models. In this case, formula_32 is the parametric set of component distributions. The generative process is therefore that a sample is drawn from a Dirichlet process, and for each data point, in turn, a value is drawn from this sample distribution and used as the component distribution for that data point. The fact that there is no limit to the number of distinct components which may be generated makes this kind of model appropriate for the case when the number of mixture components is not well-defined in advance. Examples include the infinite mixture of Gaussians model, as well as the associated infinite mixture regression models.
The infinite nature of these models also lends them to natural language processing applications, where it is often desirable to treat the vocabulary as an infinite, discrete set.
The Dirichlet Process can also be used for nonparametric hypothesis testing, i.e. to develop Bayesian nonparametric versions of the classical nonparametric hypothesis tests, e.g. sign test, Wilcoxon rank-sum test, Wilcoxon signed-rank test, etc.
For instance, Bayesian nonparametric versions of the Wilcoxon rank-sum test and the Wilcoxon signed-rank test have been developed by using the imprecise Dirichlet process, a prior ignorance Dirichlet process. | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "\\alpha\\rightarrow 0"
},
{
"math_id": 3,
"text": "\\alpha\\rightarrow\\infty"
},
{
"math_id": 4,
"text": "X_1,X_2,\\dots"
},
{
"math_id": 5,
"text": "n\\ge 1"
},
{
"math_id": 6,
"text": "\\frac{\\alpha}{\\alpha+n-1}"
},
{
"math_id": 7,
"text": "X_n"
},
{
"math_id": 8,
"text": "\\frac{n_x}{\\alpha+n-1}"
},
{
"math_id": 9,
"text": "X_n=x"
},
{
"math_id": 10,
"text": "n_x"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "n_x := | \\{ j\\colon X_j=x \\text{ and } j<n \\} |"
},
{
"math_id": 13,
"text": "| \\cdot |"
},
{
"math_id": 14,
"text": "P"
},
{
"math_id": 15,
"text": "\\operatorname{DP}"
},
{
"math_id": 16,
"text": "\\operatorname{DP}\\left(H,\\alpha\\right)"
},
{
"math_id": 17,
"text": "\\operatorname{DP}(H, \\alpha)"
},
{
"math_id": 18,
"text": "\\{B_i\\}_{i=1}^n"
},
{
"math_id": 19,
"text": "\\text{if } X \\sim \\operatorname{DP}(H,\\alpha)"
},
{
"math_id": 20,
"text": "\\text{then }(X(B_1),\\dots,X(B_n)) \\sim \\operatorname{Dir}(\\alpha H(B_1),\\dots, \\alpha H(B_n)),"
},
{
"math_id": 21,
"text": "\\operatorname{Dir}"
},
{
"math_id": 22,
"text": "X \\sim D"
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "D"
},
{
"math_id": 25,
"text": "f(x)=\\sum_{k=1}^\\infty\\beta_k\\delta_{x_k}(x)"
},
{
"math_id": 26,
"text": "\\{x_k\\}_{k=1}^\\infty"
},
{
"math_id": 27,
"text": "\\delta_{x_k}"
},
{
"math_id": 28,
"text": "x_k"
},
{
"math_id": 29,
"text": "\\delta_{x_k}(x_k)=1"
},
{
"math_id": 30,
"text": "\\beta_k"
},
{
"math_id": 31,
"text": "\\operatorname{Beta}(1,\\alpha)"
},
{
"math_id": 32,
"text": "S"
},
{
"math_id": 33,
"text": "f(\\theta) = \\sum_{k=1}^\\infty \\beta_k \\cdot \\delta_{\\theta_k}(\\theta)"
},
{
"math_id": 34,
"text": "\\delta_{\\theta_k}"
},
{
"math_id": 35,
"text": "\\delta_{\\theta_k}(\\theta_k)=1"
},
{
"math_id": 36,
"text": "\\left\\{\\theta_k\\right\\}_{k=1}^\\infty "
},
{
"math_id": 37,
"text": "\\left\\{\\beta_k\\right\\}_{k=1}^\\infty "
},
{
"math_id": 38,
"text": "\\theta_k"
},
{
"math_id": 39,
"text": "\\beta_k = \\beta'_k\\cdot\\prod_{i=1}^{k-1}\\left(1-\\beta'_i\\right)"
},
{
"math_id": 40,
"text": "\\beta'_k"
},
{
"math_id": 41,
"text": "\\prod_{i=1}^{k-1}\\left(1-\\beta'_i\\right)"
},
{
"math_id": 42,
"text": "\n\\begin{align}\nP & \\sim \\textrm{DP}(H, \\alpha )\\\\\nX_1, \\ldots, X_n \\mid P & \\, \\overset{\\textrm{i.i.d.}}{\\sim} \\, P.\n\\end{align}\n"
},
{
"math_id": 43,
"text": " \n\\begin{align}\nP \\mid X_1, \\ldots, X_n \n&\\sim \\textrm{DP}\\left(\\frac{\\alpha}{\\alpha + n} H + \\frac{1}{\\alpha + n} \\sum_{i = 1}^n \\delta_{X_i}, \\; \\alpha + n \\right) \\\\\n&= \\textrm{DP}\\left(\\frac{\\alpha}{\\alpha + n} H + \\frac{n}{\\alpha + n} \\mathbb{P}_n, \\; \\alpha + n \\right) \n\\end{align}\n"
},
{
"math_id": 44,
"text": "\\mathbb{P}_n"
},
{
"math_id": 45,
"text": "P_0"
},
{
"math_id": 46,
"text": "U"
},
{
"math_id": 47,
"text": "1"
},
{
"math_id": 48,
"text": "\\mathbb{P}_n = \\frac{1}{n} \\sum_{i = 1}^n \\delta_{X_i}"
},
{
"math_id": 49,
"text": " \\mathcal{F}"
},
{
"math_id": 50,
"text": "P_0 "
},
{
"math_id": 51,
"text": "\n\\sqrt n \\left( \\mathbb{P}_n - P_0 \\right) \\rightsquigarrow G_{P_0}\n"
},
{
"math_id": 52,
"text": " G_{P_0} "
},
{
"math_id": 53,
"text": " F"
},
{
"math_id": 54,
"text": " F(x) \\geq \\sup_{f \\in \\mathcal{F}} f(x) "
},
{
"math_id": 55,
"text": " \\int F^2 \\, \\mathrm{d} H < \\infty "
},
{
"math_id": 56,
"text": " P_0 "
},
{
"math_id": 57,
"text": "\n\\sqrt{n} \\left( P - \\mathbb{P}_n \\right) \\mid X_1, \\cdots, X_n \\rightsquigarrow G_{P_0}.\n"
},
{
"math_id": 58,
"text": "\\mu_k"
},
{
"math_id": 59,
"text": "v_i\\sim N(\\mu_k,\\sigma^2)"
},
{
"math_id": 60,
"text": "i"
},
{
"math_id": 61,
"text": "k"
},
{
"math_id": 62,
"text": "K"
},
{
"math_id": 63,
"text": "\\sigma^2"
},
{
"math_id": 64,
"text": "z_i=k"
},
{
"math_id": 65,
"text": "\n\\begin{align}\n(v_i \\mid z_i=k, \\mu_k) & \\sim N(\\mu_k,\\sigma^2) \\\\\n\\operatorname{P} (z_i=k) & = \\pi_k \\\\\n(\\boldsymbol{\\pi}\\mid \\alpha) & \\sim \\operatorname{Dir}\\left(\\frac{\\alpha}{K} \\cdot \\mathbf{1}_K\\right) \\\\\n\\mu_k & \\sim H(\\lambda)\n\\end{align}\n"
},
{
"math_id": 66,
"text": "\\pi_k"
},
{
"math_id": 67,
"text": "\\operatorname{Dir}\\left(\\alpha/K\\cdot\\mathbf{1}_K\\right)"
},
{
"math_id": 68,
"text": "\\mathbf{1}_K"
},
{
"math_id": 69,
"text": "H(\\lambda)"
},
{
"math_id": 70,
"text": "\\lambda"
},
{
"math_id": 71,
"text": "\n\\begin{align}\n(v_i \\mid \\tilde{\\mu}_i) & \\sim N(\\tilde{\\mu}_i,\\sigma^2) \\\\\n\\tilde{\\mu}_i & \\sim G=\\sum_{k=1}^K \\pi_k \\delta_{\\mu_k} (\\tilde{\\mu}_i) \\\\\n(\\boldsymbol{\\pi}\\mid \\alpha) & \\sim \\operatorname{Dir}\\left(\\frac{\\alpha}{K} \\cdot \\mathbf{1}_K \\right) \\\\\n\\mu_k & \\sim H(\\lambda)\n\\end{align}\n"
},
{
"math_id": 72,
"text": "\\tilde{\\mu}_i"
},
{
"math_id": 73,
"text": "G"
},
{
"math_id": 74,
"text": "G(\\tilde{\\mu}_i)=\\sum_{k=1}^\\infty\\pi_k \\delta_{\\mu_k}(\\tilde{\\mu}_i)"
},
{
"math_id": 75,
"text": "H\\left(\\lambda\\right)"
},
{
"math_id": 76,
"text": "\n\\begin{align}\n(v_i \\mid \\tilde{\\mu}_i) & \\sim N(\\tilde{\\mu}_i,\\sigma^2)\\\\\n\\tilde{\\mu}_i & \\sim G \\\\\nG & \\sim \\operatorname{DP}(H(\\lambda),\\alpha)\n\\end{align}\n"
},
{
"math_id": 77,
"text": "n"
},
{
"math_id": 78,
"text": "\\pi"
},
{
"math_id": 79,
"text": "N\\left(\\mu_k,\\sigma^2\\right)"
},
{
"math_id": 80,
"text": "\\boldsymbol{\\pi}"
},
{
"math_id": 81,
"text": "v_i"
},
{
"math_id": 82,
"text": "p\\left(\\boldsymbol{\\pi},\\boldsymbol{\\mu}\\mid D\\right)"
}
] | https://en.wikipedia.org/wiki?curid=8330403 |
8334381 | Coble creep | Coble creep, a form of diffusion creep, is a mechanism for deformation of crystalline solids. Contrasted with other diffusional creep mechanisms, Coble creep is similar to Nabarro–Herring creep in that it is dominant at lower stress levels and higher temperatures than creep mechanisms utilizing dislocation glide. Coble creep occurs through the diffusion of atoms in a material along grain boundaries. This mechanism is observed in polycrystals or along the surface in a single crystal, which produces a net flow of material and a sliding of the grain boundaries.
Robert L. Coble first reported his theory of how materials creep along grain boundaries at high temperatures, based on observations in alumina. There he famously noticed a different creep mechanism that was more strongly dependent on the grain size.
The strain rate in a material experiencing Coble creep is given by
formula_0
where
formula_1 is a geometric prefactor
formula_2 is the applied stress,
formula_3 is the average grain diameter,
formula_4 is the grain boundary width,
formula_5 is the diffusion coefficient in the grain boundary,
formula_6 is the vacancy formation energy,
formula_7 is the activation energy for diffusion along the grain boundary
formula_8 is the Boltzmann constant,
formula_9 is the temperature in kelvins
formula_10 is the atomic volume for the material.
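For orientation, the strain-rate expression can be coded directly. The sketch below merely transcribes the formula in SI units; the function name and the lumped activation-energy argument `Q` (standing in for formula_6 + formula_7) are assumptions, and no material-specific values are implied.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant in J/K

def coble_strain_rate(stress, grain_size, delta, D0, Q, T, omega, A_c=1.0):
    """Coble creep strain rate:
        eps_dot = A_c * (delta / d**3) * (stress * omega / (k_B * T)) * D_gb,
    with the grain-boundary diffusivity D_gb = D0 * exp(-Q / (k_B * T)),
    where Q lumps the vacancy formation and migration energies."""
    D_gb = D0 * math.exp(-Q / (K_B * T))
    return A_c * (delta / grain_size**3) * (stress * omega / (K_B * T)) * D_gb
```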
Derivation.
Coble creep, a diffusive mechanism, is driven by a vacancy (or mass) concentration gradient. The change in vacancy concentration formula_11 from its equilibrium value formula_12 is given by
formula_13
This can be seen by noting that formula_14 and taking a high temperature expansion, where the first term on the right hand side is the vacancy concentration from tensile stress and the second term is the concentration due to compressive stress. This change in concentration occurs perpendicular to the applied stress axis, while parallel to the stress there is no change in vacancy concentration (due to the resolved stress and work being zero).
We continue by assuming a spherical grain, to be consistent with the derivation for Nabarro–Herring creep; however, we will absorb geometric constants into a proportionality constant formula_15. If we consider the vacancy concentration across the grain under an applied tensile stress, then we note that there is a larger vacancy concentration at the equator (perpendicular to the applied stress) than at the poles (parallel to the applied stress). Therefore, a vacancy flux exists between the poles and equator of the grain. The vacancy flux is given by Fick's first law at the boundary: the diffusion coefficient formula_16 times the gradient of vacancy concentration. For the gradient, we take the average value given by formula_17 where we've divided the total concentration difference by the arclength between equator and pole then multiply by the boundary width formula_4 and length formula_18.
formula_19
where formula_20 is a proportionality constant. From here, we note that the volume change formula_21 due to a flux of vacancies diffusing from a source of area formula_22 is the vacancy flux formula_23 times atomic volume formula_10:
formula_24
where the second equality follows from the definition of strain rate: formula_25. From here we can read off the strain rate:
formula_26
where formula_27 has absorbed constants and the vacancy diffusivity through the grain boundary formula_28.
Comparison to other creep mechanisms.
Nabarro–Herring.
Coble creep and Nabarro–Herring creep are closely related mechanisms. They are both diffusion processes driven by the same vacancy concentration gradient, they both occur in high-temperature, low-stress environments, and their derivations are similar. For both mechanisms, the strain rate formula_29 is linearly proportional to the applied stress formula_2 and there is an exponential temperature dependence. The difference is that for Coble creep, mass transport occurs along grain boundaries, whereas for Nabarro–Herring the diffusion occurs through the crystal. Because of this, Nabarro–Herring creep does not have a dependence on grain boundary thickness, and has a weaker dependence on grain size formula_3. In Nabarro–Herring creep, the strain rate is proportional to formula_30 as opposed to the formula_31 dependence for Coble creep. When considering the net diffusional creep rate, the sum of both diffusional rates is what matters, as they operate as parallel processes.
The activation energy for Nabarro–Herring creep is, in general, different than that of Coble creep. This can be used to identify which mechanism is dominant. For example, the activation energy for dislocation climb is the same as for Nabarro–Herring, so by comparing the temperature dependence of low and high stress regimes, one can determine whether Coble creep or Nabarro–Herring creep is dominant.
Researchers commonly use these relationships to determine which mechanism is dominant in a material; by varying the grain size and measuring how the strain rate is affected, they can determine the value of formula_32 in formula_33 and conclude whether Coble or Nabarro–Herring creep is dominant.
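A minimal sketch of that procedure, assuming measured pairs of grain size and steady-state strain rate at fixed stress and temperature; the commented data values are placeholders chosen to illustrate a slope of about −3, not real measurements.

```python
import numpy as np

def grain_size_exponent(grain_sizes, strain_rates):
    """Fit strain_rate ∝ d**n on a log-log scale and return the exponent n.
    A slope near -3 suggests Coble creep; near -2 suggests Nabarro-Herring."""
    slope, _intercept = np.polyfit(np.log(grain_sizes), np.log(strain_rates), 1)
    return slope

# n = grain_size_exponent([1e-6, 2e-6, 5e-6, 1e-5], [4e-6, 5e-7, 3.2e-8, 4e-9])
```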
Dislocation creep.
Under moderate to high stress, the dominant creep mechanism is no longer linear in the applied stress formula_2. Dislocation creep, sometimes called power law creep (PLC), has a power law dependence on the applied stress ranging from 3 to 8. Dislocation movement is related to the atomic and lattice structure of the crystal, so different materials respond differently to stress, as opposed to Coble creep which is always linear. This makes the two mechanisms easily identifiable by finding the slope of formula_34 vs formula_35.
Dislocation climb-glide and Coble creep both induce grain boundary sliding.
Deformation mechanism maps.
To understand the temperature and stress regimes in which Coble creep is dominant for a material, it is helpful to look at deformation mechanism maps. These maps plot a normalized stress versus a normalized temperature and demarcate where specific creep mechanisms are dominant for a given material and grain size (some maps imitate a 3rd axis to show grain size). These maps should only be used as a guide, as they are based on heuristic equations. These maps are helpful to determine the creep mechanism when the working stresses and temperature are known for an application to guide the design of the material.
Grain boundary sliding.
Since Coble creep involves mass transport along grain boundaries, cracks or voids would form within the material without proper accommodation. Grain boundary sliding is the process by which grains move to prevent separation at grain boundaries. This process typically occurs on timescales significantly faster than that of mass diffusion (an order of magnitude quicker). Because of this, the rate of grain boundary sliding is typically irrelevant to determining material processes. However, certain grain boundaries, such as coherent boundaries or boundaries where structural features inhibit grain boundary movement, can slow down the rate of grain boundary sliding to the point where it needs to be taken into consideration. The processes underlying grain boundary sliding are the same as those causing diffusional creep.
This mechanism was originally proposed by Ashby and Verrall in 1973 as grain-switching creep. It is competitive with Coble creep; however, grain switching dominates at large stresses while Coble creep dominates at low stresses.
This model predicts a strain rate with a threshold stress for grain switching of formula_36.
formula_37
The relation to Coble creep is clear from the first term, which depends on the grain boundary thickness formula_4 and on the inverse grain size cubed, formula_31.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{d\\epsilon_C}{dt} \\equiv \\dot{\\epsilon}_C = A_C\\frac{\\delta'}{d^3}\\frac{\\sigma\\Omega}{kT}D_0e^{-(Q_f+Q_m)/kT} = A_C\\frac{\\delta'}{d^3}\\frac{\\sigma\\Omega}{kT}D_{GB},"
},
{
"math_id": 1,
"text": "A_c"
},
{
"math_id": 2,
"text": "\\sigma"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "\\delta'"
},
{
"math_id": 5,
"text": "D_{GB} = D_0e^{-(Q_f+Q_m)/kT}"
},
{
"math_id": 6,
"text": "Q_f"
},
{
"math_id": 7,
"text": "Q_m"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\\Omega"
},
{
"math_id": 11,
"text": "\\Delta C"
},
{
"math_id": 12,
"text": "C_0"
},
{
"math_id": 13,
"text": "\\Delta C = C_0 \\frac{\\sigma\\Omega}{kT}"
},
{
"math_id": 14,
"text": "\\Delta C\\propto C_0 e^{\\sigma\\Omega/kT} - C_0 e^{-\\sigma\\Omega/kT}"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "D_v"
},
{
"math_id": 17,
"text": "\\Delta C / (\\pi R / 2)"
},
{
"math_id": 18,
"text": "2\\pi R"
},
{
"math_id": 19,
"text": "\nJ_v = A' D_v \\frac{\\Delta C}{\\pi R/2}\\delta' (2\\pi R)\n"
},
{
"math_id": 20,
"text": "A'"
},
{
"math_id": 21,
"text": "\\pi R^2 dR/dt"
},
{
"math_id": 22,
"text": "\\pi R^2"
},
{
"math_id": 23,
"text": "J_v"
},
{
"math_id": 24,
"text": "\nJ_v\\Omega = \\pi R^2 dR/dt = \\pi R^3 \\dot{\\epsilon}\n"
},
{
"math_id": 25,
"text": "\\dot{\\epsilon} = (1/R)(dR/dt)"
},
{
"math_id": 26,
"text": " \n\\dot{\\epsilon}_C = \\frac{J_v\\Omega}{\\pi R^3} = A D_v\\Delta C\\Omega\\frac{\\delta'}{R^3} = A D_{GB}\\frac{\\sigma\\Omega}{kT}\\frac{\\delta'}{R^3}\n"
},
{
"math_id": 27,
"text": "A' \\rightarrow A"
},
{
"math_id": 28,
"text": "D_{GB}=D_v C_0\\Omega"
},
{
"math_id": 29,
"text": "\\dot{\\epsilon}"
},
{
"math_id": 30,
"text": "d^{-2}"
},
{
"math_id": 31,
"text": "d^{-3}"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "\\dot{\\epsilon}~\\alpha ~d^n"
},
{
"math_id": 34,
"text": "\\log{\\dot{\\epsilon}}"
},
{
"math_id": 35,
"text": "\\log{\\sigma}"
},
{
"math_id": 36,
"text": "0.72\\gamma/d"
},
{
"math_id": 37,
"text": "\n\\dot{\\epsilon}_{GS} \\propto \\frac{\\Omega}{kT}\\frac{\\delta'D_{GB}}{d^3}(\\sigma-\\frac{0.72\\gamma}{d})\n"
}
] | https://en.wikipedia.org/wiki?curid=8334381 |
8334655 | Einstein–Brillouin–Keller method | Semi-classical method for computing quantum eigenvalues
The Einstein–Brillouin–Keller (EBK) method is a semiclassical technique (named after Albert Einstein, Léon Brillouin, and Joseph B. Keller) used to compute eigenvalues in quantum-mechanical systems. EBK quantization is an improvement over Bohr–Sommerfeld quantization, which did not consider the caustic phase jumps at classical turning points. This procedure is able to reproduce exactly the spectrum of the 3D harmonic oscillator, the particle in a box, and even the relativistic fine structure of the hydrogen atom.
In 1976–1977, Michael Berry and M. Tabor derived an extension of the Gutzwiller trace formula for the density of states of an integrable system, starting from EBK quantization.
There have been a number of recent results on computational issues related to this topic, for example, the work of Eric J. Heller and Emmanuel David Tannenbaum using a partial differential equation gradient descent approach.
Procedure.
Given a separable classical system defined by coordinates formula_0, in which every pair formula_1 describes a closed function or a periodic function in formula_2, the EBK procedure involves quantizing the line integrals of formula_3 over the closed orbit of formula_2:
formula_4
where formula_5 is the action-angle coordinate, formula_6 is a positive integer, and formula_7 and formula_8 are Maslov indexes. formula_7 corresponds to the number of classical turning points in the trajectory of formula_2 (Dirichlet boundary condition), and formula_8 corresponds to the number of reflections with a hard wall (Neumann boundary condition).
Examples.
1D Harmonic oscillator.
The Hamiltonian of a simple harmonic oscillator is given by
formula_9
where formula_10 is the linear momentum and formula_11 the position coordinate. The action variable is given by
formula_12
where we have used that formula_13 is the energy and that the closed trajectory is 4 times the trajectory from 0 to the turning point formula_14.
The integral turns out to be
formula_15,
and under EBK quantization there are two soft turning points in each orbit, so formula_16 and formula_17. Finally, this yields
formula_18,
which is the exact result for quantization of the quantum harmonic oscillator.
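This can also be checked numerically by evaluating the action integral and solving formula_4 for the energy. The sketch below assumes ħ = m = ω = 1 for simplicity; its function names and root-finding bracket are choices of the illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBAR = M = OMEGA = 1.0   # natural units chosen for this illustration

def action(E):
    """I(E) = (2/pi) * integral_0^{x0} sqrt(2 m E - m^2 w^2 x^2) dx."""
    x0 = np.sqrt(2.0 * E / (M * OMEGA**2))
    integrand = lambda x: np.sqrt(max(2.0 * M * E - (M * OMEGA * x)**2, 0.0))
    value, _ = quad(integrand, 0.0, x0)
    return 2.0 * value / np.pi

def ebk_energy(n):
    """Solve I(E) = hbar*(n + 1/2): two soft turning points (mu = 2, b = 0)."""
    target = HBAR * (n + 0.5)
    return brentq(lambda E: action(E) - target, 1e-9, 10.0 * (n + 1.0))

# ebk_energy(0) ≈ 0.5 and ebk_energy(3) ≈ 3.5, i.e. hbar*omega*(n + 1/2)
```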
2D hydrogen atom.
The Hamiltonian for a non-relativistic electron (electric charge formula_19) in a hydrogen atom is:
formula_20
where formula_21 is the canonical momentum to the radial distance formula_22, and formula_23 is the canonical momentum of the azimuthal angle formula_24.
Take the action-angle coordinates:
formula_25
For the radial coordinate formula_22:
formula_26
formula_27
where we are integrating between the two classical turning points formula_28 (formula_29)
formula_30
Using EBK quantization formula_31 :
formula_32
formula_33
formula_34
and by making formula_35 the spectrum of the 2D hydrogen atom is recovered :
formula_36
Note that for this case formula_37 almost coincides with the usual quantization of the angular momentum operator on the plane formula_38. For the 3D case, the EBK method for the total angular momentum is equivalent to the Langer correction.
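The radial action integral can likewise be checked numerically. The sketch below adopts atomic units (ħ = m = e²/(4πε0) = 1), an assumption of the illustration, and evaluates formula_27 by quadrature between the classical turning points.

```python
import numpy as np
from scipy.integrate import quad

def radial_action(E, L):
    """I_r = (1/pi) * integral_{r1}^{r2} sqrt(2E + 2/r - L**2/r**2) dr
    in atomic units, for a bound orbit with E < 0 and L > 0."""
    s = np.sqrt(1.0 + 2.0 * E * L**2)
    r1, r2 = (1.0 - s) / (-2.0 * E), (1.0 + s) / (-2.0 * E)   # turning points
    integrand = lambda r: np.sqrt(max(2.0 * E + 2.0 / r - L**2 / r**2, 0.0))
    value, _ = quad(integrand, r1, r2)
    return value / np.pi

# With E = -1/(2*(n_r + m + 1/2)**2), the result is n_r + 1/2; for example
# radial_action(-1.0 / (2.0 * 2.5**2), L=1.0) ≈ 1.5 (n_r = 1, m = 1).
```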
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(q_i,p_i);i\\in\\{1,2,\\cdots,d\\}"
},
{
"math_id": 1,
"text": "(q_i,p_i)"
},
{
"math_id": 2,
"text": "q_i"
},
{
"math_id": 3,
"text": "p_i"
},
{
"math_id": 4,
"text": "I_i=\\frac{1}{2\\pi}\\oint p_i dq_i = \\hbar \\left(n_i+\\frac{\\mu_i}{4}+\\frac{b_i}{2}\\right) "
},
{
"math_id": 5,
"text": "I_i"
},
{
"math_id": 6,
"text": "n_i"
},
{
"math_id": 7,
"text": "\\mu_i"
},
{
"math_id": 8,
"text": "b_i"
},
{
"math_id": 9,
"text": "H=\\frac{p^2}{2m}+\\frac{m\\omega^2x^2}{2}"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "I=\\frac{2}{\\pi}\\int_0^{x_0}\\sqrt{2mE-m^2\\omega^2x^2}\\mathrm{d}x"
},
{
"math_id": 13,
"text": "H=E"
},
{
"math_id": 14,
"text": "x_0=\\sqrt{2E/m\\omega^2}"
},
{
"math_id": 15,
"text": "E=I\\omega"
},
{
"math_id": 16,
"text": "\\mu_x=2"
},
{
"math_id": 17,
"text": "b_x=0"
},
{
"math_id": 18,
"text": "E=\\hbar\\omega(n+1/2)"
},
{
"math_id": 19,
"text": "e"
},
{
"math_id": 20,
"text": "H=\\frac{p_r^2}{2m}+\\frac{p_\\varphi^2}{2mr^2}-\\frac{e^2}{4\\pi\\epsilon_{0} r}"
},
{
"math_id": 21,
"text": "p_r"
},
{
"math_id": 22,
"text": "r"
},
{
"math_id": 23,
"text": "p_\\varphi"
},
{
"math_id": 24,
"text": "\\varphi"
},
{
"math_id": 25,
"text": "I_\\varphi=\\text{constant}=|L|"
},
{
"math_id": 26,
"text": "p_r=\\sqrt{2mE-\\frac{L^2}{r^2}+\\frac{e^2}{4\\pi\\epsilon_0 r}}"
},
{
"math_id": 27,
"text": "I_r=\\frac{1}{\\pi}\\int_{r_1}^{r_2} p_r dr = \\frac{me^2}{4\\pi\\epsilon_0\\sqrt{-2mE}}-|L| "
},
{
"math_id": 28,
"text": "r_1,r_2"
},
{
"math_id": 29,
"text": "\\mu_r=2"
},
{
"math_id": 30,
"text": "E=-\\frac{me^4}{32\\pi^2\\epsilon_0^2(I_r+I_\\varphi)^2}"
},
{
"math_id": 31,
"text": "b_r=\\mu_\\varphi=b_\\varphi=0,n_\\varphi=m"
},
{
"math_id": 32,
"text": " I_\\varphi=\\hbar m\\quad;\\quad m=0,1,2,\\cdots "
},
{
"math_id": 33,
"text": "I_r=\\hbar(n_r+1/2)\\quad;\\quad n_r=0,1,2,\\cdots "
},
{
"math_id": 34,
"text": "E=-\\frac{me^4}{32\\pi^2\\epsilon_0^2\\hbar^2(n_r+m+1/2)^2}"
},
{
"math_id": 35,
"text": "n=n_r+m+1"
},
{
"math_id": 36,
"text": "E_n=-\\frac{me^4}{32\\pi^2\\epsilon_0^2\\hbar^2(n-1/2)^2}\\quad;\\quad n=1,2,3,\\cdots"
},
{
"math_id": 37,
"text": "I_\\varphi=|L| "
},
{
"math_id": 38,
"text": "L_z"
}
] | https://en.wikipedia.org/wiki?curid=8334655 |
833690 | Tolerance interval | Type of statistical probability
A tolerance interval (TI) is a statistical interval within which, with some confidence level, a specified sampled proportion of a population falls. "More specifically, a 100×"p"%/100×(1−α) tolerance interval provides limits within which at least a certain proportion ("p") of the population falls with a given level of confidence (1−α)." "A ("p", 1−α) tolerance interval (TI) based on a sample is constructed so that it would include at least a proportion "p" of the sampled population with confidence 1−α; such a TI is usually referred to as p-content − (1−α) coverage TI." "A (p, 1−α) upper tolerance limit (TL) is simply a 1−α upper confidence limit for the 100 "p" percentile of the population."
Definition.
Given a sample formula_0, observed as a realization of a random sample formula_1 from a distribution formula_2 with unknown parameter formula_3, a further random variable formula_4 from the same distribution, and the sample size formula_5:
a tolerance interval is an interval with endpoints formula_6 that has the defining property
formula_7, without reference to the additional random variable formula_4.
This is in contrast to a prediction interval with endpoints formula_8, which has the defining property formula_9.
Calculation.
One-sided normal tolerance intervals have an exact solution in terms of the sample mean and sample variance based on the noncentral "t"-distribution.
Two-sided normal tolerance intervals can be estimated using the chi-squared distribution.
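A brief sketch of both calculations with SciPy; the function names are assumptions, the one-sided factor follows the exact noncentral-"t" construction, and the two-sided factor uses a common chi-squared-based approximation rather than an exact method.

```python
import numpy as np
from scipy import stats

def one_sided_k(n, p, conf):
    """Factor k such that xbar + k*s is an upper (p, conf) tolerance limit
    for a normal population (exact, via the noncentral t-distribution)."""
    delta = stats.norm.ppf(p) * np.sqrt(n)                 # noncentrality
    return stats.nct.ppf(conf, n - 1, delta) / np.sqrt(n)

def two_sided_k(n, p, conf):
    """Approximate factor k for the two-sided interval xbar +/- k*s,
    using a chi-squared based approximation."""
    z = stats.norm.ppf((1.0 + p) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - conf, n - 1)
    return z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)

# e.g. one_sided_k(20, p=0.90, conf=0.95) gives k for covering at least 90%
# of the population with 95% confidence from a sample of size 20.
```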
Relation to other intervals.
"In the parameters-known case, a 95% tolerance interval and a 95% prediction interval are the same." If we knew a population's exact parameters, we would be able to compute a range within which a certain proportion of the population falls. For example, if we know a population is normally distributed with mean formula_10 and standard deviation formula_11, then the interval formula_12 includes 95% of the population (1.96 is the z-score for 95% coverage of a normally distributed population).
However, if we have only a sample from the population, we know only the sample mean formula_13 and sample standard deviation formula_14, which are only estimates of the true parameters. In that case, formula_15 will not necessarily include 95% of the population, due to variance in these estimates. A tolerance interval bounds this variance by introducing a confidence level formula_16, which is the confidence with which this interval actually includes the specified proportion of the population. For a normally distributed population, a z-score can be transformed into a "k factor" or tolerance factor for a given formula_16 via lookup tables or several approximation formulas. "As the degrees of freedom approach infinity, the prediction and tolerance intervals become equal."
The tolerance interval is less widely known than the confidence interval and prediction interval, a situation some educators have lamented, as it can lead to misuse of the other intervals where a tolerance interval is more appropriate.
The tolerance interval differs from a confidence interval in that the confidence interval bounds a single-valued population parameter (the mean or the variance, for example) with some confidence, while the tolerance interval bounds the range of data values that includes a specific proportion of the population. Whereas a confidence interval's size is entirely due to sampling error, and will approach a zero-width interval at the true population parameter as sample size increases, a tolerance interval's size is due partly to sampling error and partly to actual variance in the population, and will approach the population's probability interval as sample size increases.
The tolerance interval is related to a prediction interval in that both put bounds on variation in future samples. However, the prediction interval only bounds a single future sample, whereas a tolerance interval bounds the entire population (equivalently, an arbitrary sequence of future samples). In other words, a prediction interval covers a specified proportion of a population "on average", whereas a tolerance interval covers it "with a certain confidence level", making the tolerance interval more appropriate if a single interval is intended to bound multiple future samples.
Examples.
gives the following example: So consider once again a proverbial EPA mileage test scenario, in which several nominally identical autos of a particular model are tested to produce mileage figures formula_17. If such data are processed to produce a 95% confidence interval for the mean mileage of the model, it is, for example, possible to use it to project the mean or total gasoline consumption for the manufactured fleet of such autos over their first 5,000 miles of use. Such an interval, would however, not be of much help to a person renting one of these cars and wondering whether the (full) 10-gallon tank of gas will suffice to carry him the 350 miles to his destination. For that job, a prediction interval would be much more useful. (Consider the differing implications of being "95% sure" that formula_18 as opposed to being "95% sure" that formula_19.) But neither a confidence interval for formula_10 nor a prediction interval for a single additional mileage is exactly what is needed by a design engineer charged with determining how large a gas tank the model really needs to guarantee that 99% of the autos produced will have a 400-mile cruising range. What the engineer really needs is a tolerance interval for a fraction formula_20 of mileages of such autos.
Another example is the following: air lead levels were collected from formula_21 different areas within a facility. It was noted that the log-transformed lead levels fitted a normal distribution well (that is, the data are from a lognormal distribution). Let formula_10 and formula_22, respectively, denote the population mean and variance for the log-transformed data. If formula_23 denotes the corresponding random variable, we thus have formula_24. We note that formula_25 is the median air lead level. A confidence interval for formula_10 can be constructed in the usual way, based on the "t"-distribution; this in turn will provide a confidence interval for the median air lead level. If formula_26 and formula_27 denote the sample mean and standard deviation of the log-transformed data for a sample of size n, a 95% confidence interval for formula_10 is given by formula_28, where formula_29 denotes the formula_30 quantile of a "t"-distribution with formula_31 degrees of freedom. It may also be of interest to derive a 95% upper confidence bound for the median air lead level. Such a bound for formula_10 is given by formula_32. Consequently, a 95% upper confidence bound for the median air lead level is given by formula_33. Now suppose we want to predict the air lead level at a particular area within the facility. A 95% upper prediction limit for the log-transformed lead level is given by formula_34. A two-sided prediction interval can be similarly computed. The meaning and interpretation of these intervals are well known. For example, if the confidence interval formula_28 is computed repeatedly from independent samples, 95% of the intervals so computed will include the true value of formula_10, in the long run. In other words, the interval is meant to provide information concerning the parameter formula_10 only. A prediction interval has a similar interpretation, and is meant to provide information concerning a single lead level only. Now suppose we want to use the sample to conclude whether or not at least 95% of the population lead levels are below a threshold. The confidence interval and prediction interval cannot answer this question, since the confidence interval is only for the median lead level, and the prediction interval is only for a single lead level. What is required is a tolerance interval; more specifically, an upper tolerance limit. The upper tolerance limit is to be computed subject to the condition that at least 95% of the population lead levels are below the limit, with a certain confidence level, say 99%.
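The quantities in this example can be computed as sketched below; the numerical sample is a placeholder invented for illustration (it is not the data of the original example), and the upper tolerance limit uses the standard one-sided noncentral-"t" factor.

```python
import numpy as np
from scipy import stats

# Placeholder air lead levels for n = 15 areas (illustrative values only)
lead = np.array([7.2, 11.5, 3.9, 15.1, 6.3, 9.8, 4.4, 12.7,
                 8.1, 5.6, 10.2, 14.0, 6.9, 9.1, 7.7])
x = np.log(lead)                       # log-transformed data
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
t95 = stats.t.ppf(0.95, n - 1)

ucb_median = np.exp(xbar + t95 * s / np.sqrt(n))              # 95% UCB for the median
upl_single = np.exp(xbar + t95 * s * np.sqrt(1.0 + 1.0 / n))  # 95% upper prediction limit

# Upper tolerance limit: at least 95% of lead levels lie below it, with 99% confidence
delta = stats.norm.ppf(0.95) * np.sqrt(n)
k = stats.nct.ppf(0.99, n - 1, delta) / np.sqrt(n)
utl = np.exp(xbar + k * s)
```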
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{x}=(x_1,\\ldots,x_n)"
},
{
"math_id": 1,
"text": "\\mathbf{X}=(X_1,\\ldots,X_n)"
},
{
"math_id": 2,
"text": "F_\\theta"
},
{
"math_id": 3,
"text": "\\theta"
},
{
"math_id": 4,
"text": "X_0"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "(L(\\mathbf{x}), U(\\mathbf{x})]"
},
{
"math_id": 7,
"text": "\\inf_\\theta\\{{\\Pr}_\\theta\\left(F_\\theta(U(\\mathbf{X})) - F_\\theta(L(\\mathbf{X})\\right) \\ge p)\\} = 100(1-\\alpha)"
},
{
"math_id": 8,
"text": "[l(\\mathbf{x}), u(\\mathbf{x})]"
},
{
"math_id": 9,
"text": " \\inf_\\theta\\{{\\Pr}_\\theta(X_0 \\in [l(\\mathbf{X}), u(\\mathbf{X})])\\}= 100(1-\\alpha)"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "\\sigma"
},
{
"math_id": 12,
"text": "\\mu \\pm 1.96\\sigma"
},
{
"math_id": 13,
"text": "\\hat{\\mu}"
},
{
"math_id": 14,
"text": "\\hat{\\sigma}"
},
{
"math_id": 15,
"text": "\\hat{\\mu} \\pm 1.96\\hat{\\sigma}"
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "y_1, y_2, ..., y_n"
},
{
"math_id": 18,
"text": "\\mu \\ge 35"
},
{
"math_id": 19,
"text": "y_{n+1} \\ge 35"
},
{
"math_id": 20,
"text": "p = .99"
},
{
"math_id": 21,
"text": "n=15"
},
{
"math_id": 22,
"text": "\\sigma^2"
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "X \\sim \\mathcal{N}(\\mu, \\sigma^2)"
},
{
"math_id": 25,
"text": "\\exp(\\mu)"
},
{
"math_id": 26,
"text": "\\bar{X}"
},
{
"math_id": 27,
"text": "S"
},
{
"math_id": 28,
"text": "\\bar{X} \\pm t_{n-1,0.975} S / \\sqrt{n}"
},
{
"math_id": 29,
"text": "t_{m,1-\\alpha}"
},
{
"math_id": 30,
"text": "1-\\alpha"
},
{
"math_id": 31,
"text": "m"
},
{
"math_id": 32,
"text": "\\bar{X} + t_{n-1,0.95} S / \\sqrt{n}"
},
{
"math_id": 33,
"text": "\\exp{\\left( \\bar{X} + t_{n-1,0.95} S / \\sqrt{n} \\right)}"
},
{
"math_id": 34,
"text": "\\bar{X} + t_{n-1,0.95} S \\sqrt{\\left( 1 + 1/n \\right)}"
}
] | https://en.wikipedia.org/wiki?curid=833690 |
8337525 | Watts–Strogatz model | Method of generating random small-world graphs
The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their article published in 1998 in the "Nature" scientific journal. The model also became known as the (Watts) "beta" model after Watts used formula_0 to formulate it in his popular science book "Six Degrees: The Science of a Connected Age".
Rationale for the model.
The formal study of random graphs dates back to the work of Paul Erdős and Alfréd Rényi. The graphs they considered, now known as the classical or Erdős–Rényi (ER) graphs, offer a simple and powerful model with many applications.
However the ER graphs do not have two important properties observed in many real-world networks:
The Watts and Strogatz model was designed as the simplest possible model that addresses the first of the two limitations. It accounts for clustering while retaining the short average path lengths of the ER model. It does so by interpolating between a randomized structure close to ER graphs and a regular ring lattice. Consequently, the model is able to at least partially explain the "small-world" phenomena in a variety of networks, such as the power grid, neural network of C. elegans, networks of movie actors, or fat-metabolism communication in budding yeast.
Algorithm.
Given the desired number of nodes formula_1, the mean degree formula_2 (assumed to be an even integer), and a parameter formula_0, all satisfying formula_3 and formula_4, the model constructs an undirected graph with formula_1 nodes and formula_5 edges in the following way. First, construct a regular ring lattice, a graph with formula_1 nodes each connected to formula_2 neighbors, formula_6 on each side; that is, if the nodes are labeled formula_7, there is an edge formula_8 if and only if formula_9 Then, for every node formula_10, take every edge connecting formula_11 to its formula_6 rightmost neighbors, that is, every edge formula_8 such that formula_12, and rewire it with probability formula_0. Rewiring is done by replacing formula_8 with formula_13, where formula_14 is chosen uniformly at random from all possible nodes while avoiding self-loops (formula_15) and link duplication (there is no edge formula_16 with formula_17 at this point in the algorithm).
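A compact sketch of this construction in Python; the adjacency-set representation and the handling of corner cases (skipping a rewiring when no admissible new endpoint exists) are choices of the illustration.

```python
import random

def watts_strogatz(N, K, beta, rng=random):
    """Build a Watts-Strogatz graph as an adjacency dict of sets.
    N nodes on a ring, each joined to its K nearest neighbours (K even);
    every 'clockwise' edge (i, j) is then rewired with probability beta."""
    adj = {i: set() for i in range(N)}
    for i in range(N):                            # regular ring lattice
        for off in range(1, K // 2 + 1):
            j = (i + off) % N
            adj[i].add(j); adj[j].add(i)
    for i in range(N):                            # rewiring step
        for off in range(1, K // 2 + 1):
            j = (i + off) % N
            if rng.random() < beta:
                candidates = [k for k in range(N) if k != i and k not in adj[i]]
                if candidates:                    # avoid self-loops and duplicates
                    k = rng.choice(candidates)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(k); adj[k].add(i)
    return adj

# g = watts_strogatz(N=1000, K=10, beta=0.1)
```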
Properties.
The underlying lattice structure of the model produces a locally clustered network, while the randomly rewired links dramatically reduce the average path lengths. The algorithm introduces about formula_18 of such non-lattice edges. Varying formula_0 makes it possible to interpolate between a regular lattice (formula_19) and a structure close to an Erdős–Rényi random graph formula_20 with formula_21 at formula_22. It does not approach the actual ER model since every node will be connected to at least formula_23 other nodes.
The three properties of interest are the average path length, the clustering coefficient, and the degree distribution.
Average path length.
For a ring lattice, the average path length is formula_24 and scales linearly with the system size. In the limiting case of formula_25, the graph approaches a random graph with formula_26, while not actually converging to it. In the intermediate region formula_27, the average path length falls very rapidly with increasing formula_0, quickly approaching its limiting value.
Clustering coefficient.
For the ring lattice the clustering coefficient formula_28, and so tends to formula_29 as formula_2 grows, independently of the system size. In the limiting case of formula_25 the clustering coefficient is of the same order as the clustering coefficient for classical random graphs, formula_30 and is thus inversely proportional to the system size. In the intermediate region the clustering coefficient remains quite close to its value for the regular lattice, and only falls at relatively high formula_0. This results in a region where the average path length falls rapidly, but the clustering coefficient does not, explaining the "small-world" phenomenon.
If we use the Barrat and Weigt measure for clustering formula_31 defined as the fraction between the average number of edges between the neighbors of a node and the average number of possible edges between these neighbors, or, alternatively,
formula_32
then we get formula_33
Degree distribution.
The degree distribution in the case of the ring lattice is just a Dirac delta function centered at formula_2. The degree distribution for a large number of nodes and formula_27 can be written as,
formula_34
where formula_35 is the number of edges that the formula_36 node has or its degree. Here formula_37, and formula_38. The shape of the degree distribution is similar to that of a random graph and has a pronounced peak at formula_39 and decays exponentially for large formula_40. The topology of the network is relatively homogeneous, meaning that all nodes are of similar degree.
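The distribution can be evaluated by transcribing the formula directly; the function name below is an assumption.

```python
from math import comb, exp, factorial

def ws_degree_pmf(k, K, beta):
    """P(k) of the Watts-Strogatz model for k >= K/2, per the formula above."""
    half = K // 2
    if k < half:
        return 0.0
    total = 0.0
    for n in range(min(k - half, half) + 1):
        total += (comb(half, n) * (1.0 - beta)**n * beta**(half - n)
                  * (beta * half)**(k - half - n) / factorial(k - half - n)
                  * exp(-beta * half))
    return total
```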
Limitations.
The major limitation of the model is that it produces an unrealistic degree distribution. In contrast, real networks are often scale-free networks inhomogeneous in degree, having hubs and a scale-free degree distribution. Such networks are better described in that respect by the preferential attachment family of models, such as the Barabási–Albert (BA) model. (On the other hand, the Barabási–Albert model fails to produce the high levels of clustering seen in real networks, a shortcoming not shared by the Watts and Strogatz model. Thus, neither the Watts and Strogatz model nor the Barabási–Albert model should be viewed as fully realistic.)
The Watts and Strogatz model also implies a fixed number of nodes and thus cannot be used to model network growth.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "0 \\le \\beta \\le 1"
},
{
"math_id": 4,
"text": "N\\gg K \\gg \\ln N \\gg 1"
},
{
"math_id": 5,
"text": "\\frac{NK}{2}"
},
{
"math_id": 6,
"text": "K/2"
},
{
"math_id": 7,
"text": "0 \\ldots {N-1}"
},
{
"math_id": 8,
"text": "(i, j)"
},
{
"math_id": 9,
"text": " 0 < |i - j|\\ \\mathrm{mod}\\ \\left( N-1-\\frac K 2 \\right) \\leq \\frac K 2. "
},
{
"math_id": 10,
"text": "i=0,\\dots, {N-1}"
},
{
"math_id": 11,
"text": "i"
},
{
"math_id": 12,
"text": "0 < (j-i) \\ \\mathrm{mod}\\ N \\leq K/2"
},
{
"math_id": 13,
"text": "(i, k)"
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "k \\ne i"
},
{
"math_id": 16,
"text": "(i, {k'})"
},
{
"math_id": 17,
"text": "k' = k"
},
{
"math_id": 18,
"text": "\\beta\\frac{NK}{2}"
},
{
"math_id": 19,
"text": "\\beta=0"
},
{
"math_id": 20,
"text": "G(N, p)"
},
{
"math_id": 21,
"text": "p = \\frac{K}{N-1}"
},
{
"math_id": 22,
"text": "\\beta=1"
},
{
"math_id": 23,
"text": " K / 2 "
},
{
"math_id": 24,
"text": "\\ell(0)\\approx N/2K\\gg 1"
},
{
"math_id": 25,
"text": "\\beta \\rightarrow 1"
},
{
"math_id": 26,
"text": "\\ell(1)\\approx\\frac{\\ln N}{\\ln K}"
},
{
"math_id": 27,
"text": "0<\\beta<1"
},
{
"math_id": 28,
"text": "C(0)=\\frac{3(K-2)}{4(K-1)}"
},
{
"math_id": 29,
"text": "3/4"
},
{
"math_id": 30,
"text": "C=K/(N-1)"
},
{
"math_id": 31,
"text": "C'(\\beta)"
},
{
"math_id": 32,
"text": "C'(\\beta)\\equiv\\frac{3\\times \\text{number of triangles}}{\\text{number of connected triples}}"
},
{
"math_id": 33,
"text": " C'(\\beta)\\sim C(0)(1-\\beta)^3."
},
{
"math_id": 34,
"text": "P(k) \\approx \\sum_{n=0}^{f(k,K)} {{K/2}\\choose{n}} (1-\\beta)^n \\beta^{K/2-n} \\frac{(\\beta K/2)^{k-K/2-n}}{(k-K/2-n)!} e^{-\\beta K/2},"
},
{
"math_id": 35,
"text": "k_i"
},
{
"math_id": 36,
"text": "i^\\text{th}"
},
{
"math_id": 37,
"text": "k\\geq K/2"
},
{
"math_id": 38,
"text": "f(k,K)=\\min(k-K/2,K/2)"
},
{
"math_id": 39,
"text": "k=K"
},
{
"math_id": 40,
"text": "|k-K|"
}
] | https://en.wikipedia.org/wiki?curid=8337525 |
8337647 | Average path length | Concept in network topology
Average path length, or average shortest path length, is a concept in network topology that is defined as the average number of steps along the shortest paths for all possible pairs of network nodes. It is a measure of the efficiency of information or mass transport on a network.
Concept.
Average path length is one of the three most robust measures of network topology, along with its clustering coefficient and its degree distribution. Some examples are: the average number of clicks which will lead you from one website to another, or the number of people you will have to communicate through, on average, to contact a complete stranger. It should not be confused with the diameter of the network, which is defined as the longest geodesic, i.e., the longest shortest path between any two nodes in the network (see Distance (graph theory)).
The average path length distinguishes an easily negotiable network from one which is complicated and inefficient, with a shorter average path length being more desirable. However, the average path length is only an average: the network itself might have some very remotely connected nodes alongside many nodes which are neighbors of each other.
Definition.
Consider an unweighted directed graph formula_0 with the set of vertices formula_1. Let formula_2, where formula_3, denote the shortest distance between formula_4 and formula_5.
Assume that formula_6 if formula_5 cannot be reached from formula_4. Then, the average path length formula_7 is:
formula_8
where formula_9 is the number of vertices in formula_0.
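As a concrete illustration of this definition, the sketch below computes formula_7 by running a breadth-first search from every vertex; the adjacency-list input format and the small example graph are assumptions made for demonstration, and unreachable pairs contribute zero, following the convention above.

```python
from collections import deque

def average_path_length(adj):
    """Average shortest path length of an unweighted directed graph.

    `adj` maps each vertex to an iterable of its out-neighbours; unreachable
    pairs contribute a distance of 0, following the convention above.
    """
    vertices = list(adj)
    n = len(vertices)
    total = 0
    for source in vertices:
        dist = {source: 0}
        queue = deque([source])
        while queue:  # breadth-first search from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for v, d in dist.items() if v != source)
    return total / (n * (n - 1))

# Directed 4-cycle: distances 1, 2 and 3 from every vertex, so the mean is 2.
print(average_path_length({0: [1], 1: [2], 2: [3], 3: [0]}))  # 2.0
```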
Applications.
In a real network like the Internet, a short average path length facilitates the quick transfer of information and reduces costs. The efficiency of mass transfer in a metabolic network can be judged by studying its average path length. A power grid network will have fewer losses if its average path length is minimized.
Most real networks have a very short average path length leading to the concept of a small world where everyone is connected to everyone else through a very short path.
As a result, most models of real networks are created with this condition in mind. One of the first models which tried to explain real networks was the random network model. It was later followed by the Watts and Strogatz model, and even later there were the scale-free networks starting with the BA model. All these models had one thing in common: they all predicted very short average path length.
The average path length depends on the system size but does not change drastically with it. Small world network theory predicts that the average path length changes proportionally to log n, where n is the number of nodes in the network.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "d(v_1, v_2)"
},
{
"math_id": 3,
"text": "v_1, v_2 \\in V"
},
{
"math_id": 4,
"text": "v_1"
},
{
"math_id": 5,
"text": "v_2"
},
{
"math_id": 6,
"text": "d(v_1, v_2) = 0"
},
{
"math_id": 7,
"text": "l_G"
},
{
"math_id": 8,
"text": "l_G = \\frac{1}{n \\cdot (n - 1)} \\cdot \\sum_{i \\ne j} d(v_i, v_j),"
},
{
"math_id": 9,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=8337647 |
8337703 | Water model | Aspect of computational chemistry
In computational chemistry, a water model is used to simulate and thermodynamically calculate water clusters, liquid water, and aqueous solutions with explicit solvent. The models are determined from quantum mechanics, molecular mechanics, experimental results, and combinations of these. To imitate specific properties of water molecules, many types of models have been developed. In general, these can be classified by the following three points: (i) the number of interaction points, called "sites"; (ii) whether the model is rigid or flexible; (iii) whether the model includes polarization effects.
An alternative to the explicit water models is to use an implicit solvation model, also termed a continuum model, an example of which would be the COSMO solvation model or the polarizable continuum model (PCM) or a hybrid solvation model.
Simple water models.
The rigid models are considered the simplest water models and rely on non-bonded interactions. In these models, bonding interactions are implicitly treated by holonomic constraints. The electrostatic interaction is modeled using Coulomb's law, and the dispersion and repulsion forces using the Lennard-Jones potential. The potential for models such as TIP3P (transferable intermolecular potential with 3 points) and TIP4P is represented by
formula_0
where "kC", the electrostatic constant, has a value of 332.1 Å·kcal/(mol·e²) in the units commonly used in molecular modeling; "qi" and "qj" are the partial charges relative to the charge of the electron; "rij" is the distance between two atoms or charged sites; and "A" and "B" are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms.
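To make the energy expression concrete, the sketch below evaluates it for one pair of rigid water molecules. The function signature, the coordinate format, and the restriction of the Lennard-Jones term to the O–O pair are illustrative assumptions in the spirit of TIP3P-style models; no published parameter set is reproduced here.

```python
from math import dist  # Euclidean distance (Python 3.8+)

K_C = 332.1  # electrostatic constant in A*kcal/(mol*e^2), as quoted above

def water_pair_energy(sites_a, sites_b, charges, A, B):
    """Interaction energy (kcal/mol) between two rigid water molecules.

    `sites_a` and `sites_b` map site labels (here 'O', 'H1', 'H2') to
    Cartesian coordinates in angstroms, and `charges` maps the same labels
    to partial charges in units of e. The Lennard-Jones term is applied to
    the O-O pair only, as described above; A and B are model parameters.
    """
    e_coulomb = sum(K_C * charges[i] * charges[j] / dist(sites_a[i], sites_b[j])
                    for i in sites_a for j in sites_b)
    r_oo = dist(sites_a['O'], sites_b['O'])
    return e_coulomb + A / r_oo**12 - B / r_oo**6
```

For models with charged dummy sites (four- and five-site models), the same double loop applies once the extra labels are added to the dictionaries.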
The figure below shows the general shape of the 3- to 6-site water models. The exact geometric parameters (the OH distance and the HOH angle) vary depending on the model.
2-site.
A 2-site model of water based on the familiar three-site SPC model (see below) has been shown to predict the dielectric properties of water using site-renormalized molecular fluid theory.
3-site.
Three-site models have three interaction points corresponding to the three atoms of the water molecule. Each site has a point charge, and the site corresponding to the oxygen atom also has the Lennard-Jones parameters. Since 3-site models achieve a high computational efficiency, these are widely used for many applications of molecular dynamics simulations. Most of the models use a rigid geometry matching that of actual water molecules. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°.
The table below lists the parameters for some 3-site models.
The SPC/E model adds an average polarization correction to the potential energy function:
formula_1
where μ is the electric dipole moment of the effectively polarized water molecule (2.35 D for the SPC/E model), μ0 is the dipole moment of an isolated water molecule (1.85 D from experiment), and αi is an isotropic polarizability constant, with a value of . Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model.
The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original. The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms too, in addition to the one on oxygen. The charges are not modified. The three-site model (TIP3P) has better performance in calculating specific heats.
Flexible SPC water model.
The flexible simple point-charge water model (or flexible SPC water model) is a re-parametrization of the three-site SPC water model. The "SPC" model is rigid, whilst the "flexible SPC" model is flexible. In the model of Toukan and Rahman, the O–H stretching is made anharmonic, and thus the dynamical behavior is well described. This is one of the most accurate three-center water models without taking into account the polarization. In molecular dynamics simulations it gives the correct density and dielectric permittivity of water.
Flexible SPC is implemented in the programs MDynaMix and Abalone.
4-site.
The four-site models have four interaction points by adding one dummy atom near the oxygen along the bisector of the HOH angle of the three-site models (labeled M in the figure). The dummy atom only has a negative charge. This model improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal–Fowler model published in 1933, which may also be the earliest water model. However, the BF model doesn't reproduce well the bulk properties of water, such as density and heat of vaporization, and is thus of historical interest only. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough.
The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; TIP4P/2005, a general parameterization for simulating the entire phase diagram of condensed water; and TIP4PQ/2005, a similar model but designed to accurately describe the properties of solid and liquid water when quantum effects are included in the simulation.
Most of the four-site water models use an OH distance and HOH angle which match those of the free water molecule. One exception is the OPC model, in which no geometry constraints are imposed other than the fundamental C2v molecular symmetry of the water molecule. Instead, the point charges and their positions are optimized to best describe the electrostatics of the water molecule. OPC reproduces a comprehensive set of bulk properties more accurately than several of the commonly used rigid "n"-site water models. The OPC model is implemented within the AMBER force field.
Others:
5-site.
The 5-site models place the negative charge on dummy atoms (labelled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of this type was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximal density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums.
Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by the switching function "S"("r"):
formula_2
Thus, the "R"L and "R"U parameters only apply to BNS and ST2.
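As a minimal sketch of how this short-range scaling works, the function below transcribes the piecewise definition above (the denominator is cubed so that S(r) rises continuously to 1 at "R"U, matching the constant branches); no model-specific "R"L or "R"U values are assumed.

```python
def switching_function(r, r_lower, r_upper):
    """Cubic switching function S(r) applied to the electrostatic term.

    S rises smoothly from 0 at r_lower (R_L) to 1 at r_upper (R_U), matching
    the constant branches of the piecewise definition above.
    """
    if r <= r_lower:
        return 0.0
    if r >= r_upper:
        return 1.0
    return ((r - r_lower) ** 2 * (3 * r_upper - r_lower - 2 * r)
            / (r_upper - r_lower) ** 3)

# The scaling is 0 below R_L, 1 above R_U and intermediate in between.
print([switching_function(r, 2.0, 3.0) for r in (1.5, 2.5, 3.5)])  # [0.0, 0.5, 1.0]
```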
6-site.
Originally designed to study water/ice systems, a 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden. Since it had a very high melting temperature when employed under periodic electrostatic conditions (Ewald summation), a modified version, optimized by using the Ewald method for estimating the Coulomb interaction, was published later.
Computational cost.
The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O–O interaction, or 3 × 3 + 1). For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1).
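The counting rule above can be written down directly; the tiny sketch below merely reproduces those per-pair distance counts and is only an illustration of the arithmetic.

```python
def pair_distances(n_sites):
    """Interatomic distances evaluated per pair of molecules for n-site models."""
    charged_sites = {3: 3, 4: 3, 5: 4, 6: 5}[n_sites]  # charge-carrying sites
    extra_oo = 0 if n_sites == 3 else 1  # separate O-O Lennard-Jones distance
    return charged_sites ** 2 + extra_oo

print({n: pair_distances(n) for n in (3, 4, 5, 6)})  # {3: 9, 4: 10, 5: 17, 6: 26}
```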
When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E_{ab} = \\sum_i^{\\text{on }a} \\sum_j^{\\text{on }b}\n \\frac {k_C q_i q_j}{r_{ij}}\n + \\frac {A}{r_{\\text{OO}}^{12}}\n - \\frac {B}{r_{\\text{OO}}^6},\n"
},
{
"math_id": 1,
"text": "E_\\text{pol} = \\frac 1 2 \\sum_i\n \\frac{(\\mu - \\mu^0)^2}{\\alpha_i},\n"
},
{
"math_id": 2,
"text": "\nS(r_{ij}) = \n\\begin{cases} \n 0 & \\text{if }r_{ij} \\le R_\\text{L}, \\\\\n \\frac{(r_{ij} - R_\\text{L})^2(3R_\\text{U} - R_\\text{L} - 2r_{ij})}{(R_\\text{U} - R_\\text{L})^3} & \\text{if }R_\\text{L} \\le r_{ij} \\le R_\\text{U}, \\\\\n 1 & \\text{if }R_\\text{U} \\le r_{ij}.\n\\end{cases}\n"
}
] | https://en.wikipedia.org/wiki?curid=8337703 |
8339605 | Oriented projective geometry | Oriented projective geometry is an oriented version of real projective geometry.
Whereas the real projective plane describes the set of all unoriented lines through the origin in R3, the oriented projective plane describes lines with a given orientation. There are applications in computer graphics and computer vision where it is necessary to distinguish between rays of light being emitted or absorbed by a point.
Elements in an oriented projective space are defined using signed homogeneous coordinates. Let formula_1 be the set of elements of formula_2 excluding the origin.
These spaces can be viewed as extensions of Euclidean space. formula_3 can be viewed as the union of two copies of formula_10, the sets ("x",1) and ("x",-1), plus two additional points at infinity, (1,0) and (-1,0). Likewise formula_7 can be viewed as two copies of formula_11, ("x","y",1) and ("x","y",-1), plus one copy of formula_0 ("x","y",0).
An alternative way to view the spaces is as points on the circle or sphere, given by the points ("x","y","w") with
"x"² + "y"² + "w"² = 1.
Oriented real projective space.
Let "n" be a nonnegative integer. The (analytical model of, or canonical) oriented (real) projective space or (canonical) two-sided projective space formula_12 is defined as
formula_13
Here, we use formula_14 to stand for "two-sided".
Distance in oriented real projective space.
Distances between two points formula_15 and formula_16 in formula_7 can be defined as elements
formula_17
in formula_3.
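As a hedged illustration of this definition, the sketch below transcribes the distance element directly; the tuple-based representation of signed homogeneous coordinates and the sample points are assumptions made for demonstration.

```python
from math import copysign

def t2_distance(p, q):
    """Distance element in T^1 between two points of T^2.

    p and q are signed homogeneous triples (x, y, w); the result is itself a
    signed homogeneous pair, following the expression above.
    """
    px, py, pw = p
    qx, qy, qw = q
    first = (px * qw - qx * pw) ** 2 + (py * qw - qy * pw) ** 2
    second = copysign((pw * qw) ** 2, pw * qw)
    return (first, second)

# Two finite points on the "front" range (w > 0):
print(t2_distance((0.0, 0.0, 1.0), (3.0, 4.0, 1.0)))  # (25.0, 1.0), i.e. squared distance 25
```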
Oriented complex projective geometry.
Let "n" be a nonnegative integer. The oriented complex projective space formula_18 is defined as
formula_19. Here, we write formula_20 to stand for the 1-sphere.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{T}"
},
{
"math_id": 1,
"text": "\\mathbb{R}_{*}^n"
},
{
"math_id": 2,
"text": "\\mathbb{R}^n"
},
{
"math_id": 3,
"text": "\\mathbb{T}^1"
},
{
"math_id": 4,
"text": "(x,w) \\in \\mathbb{R}^2_*"
},
{
"math_id": 5,
"text": "(x,w)\\sim(a x,a w)\\,"
},
{
"math_id": 6,
"text": "a>0"
},
{
"math_id": 7,
"text": "\\mathbb{T}^2"
},
{
"math_id": 8,
"text": "(x,y,w) \\in \\mathbb{R}^3_*"
},
{
"math_id": 9,
"text": "(x,y,w)\\sim(a x,a y,a w)\\,"
},
{
"math_id": 10,
"text": "\\mathbb{R}"
},
{
"math_id": 11,
"text": "\\mathbb{R}^2"
},
{
"math_id": 12,
"text": "\\mathbb T^n"
},
{
"math_id": 13,
"text": "\\mathbb T^n=\\{\\{\\lambda Z:\\lambda\\in\\mathbb R_{>0}\\}:Z\\in\\mathbb R^{n+1}\\setminus\\{0\\}\\}=\\{\\mathbb R_{>0}Z:Z\\in\\mathbb R^{n+1}\\setminus\\{0\\}\\}."
},
{
"math_id": 14,
"text": "\\mathbb T"
},
{
"math_id": 15,
"text": "p=(p_x,p_y,p_w)"
},
{
"math_id": 16,
"text": "q=(q_x,q_y,q_w)"
},
{
"math_id": 17,
"text": "((p_x q_w-q_x p_w)^2+(p_y q_w-q_y p_w)^2,\\mathrm{sign}(p_w q_w)(p_w q_w)^2)"
},
{
"math_id": 18,
"text": "{\\mathbb{CP}}^n_{S^1}"
},
{
"math_id": 19,
"text": "{\\mathbb{CP}}^n_{S^1}=\\{\\{\\lambda Z:\\lambda\\in\\mathbb R_{>0}\\}:Z\\in\\mathbb C^{n+1}\\setminus\\{0\\}\\}=\\{\\mathbb R_{>0}Z:Z\\in\\mathbb C^{n+1}\\setminus\\{0\\}\\}"
},
{
"math_id": 20,
"text": "S^1"
}
] | https://en.wikipedia.org/wiki?curid=8339605 |
8339650 | List of mathematic operators | In mathematics, an operator or transform is a function from one space of functions to another. Operators occur commonly in engineering, physics and mathematics. Many are integral operators and differential operators.
In the following "L" is an operator
formula_0
which takes a function formula_1 to another function formula_2. Here, formula_3 and formula_4 are some unspecified function spaces, such as Hardy space, "L"p space, Sobolev space, or, more vaguely, the space of holomorphic functions. | [
{
"math_id": 0,
"text": "L:\\mathcal{F}\\to\\mathcal{G}"
},
{
"math_id": 1,
"text": "y\\in\\mathcal{F}"
},
{
"math_id": 2,
"text": "L[y]\\in\\mathcal{G}"
},
{
"math_id": 3,
"text": "\\mathcal{F}"
},
{
"math_id": 4,
"text": "\\mathcal{G}"
}
] | https://en.wikipedia.org/wiki?curid=8339650 |
8340179 | Number One Electronic Switching System | Defunct telecommunications node in the United States; part of the Bell System
The Number One Electronic Switching System (1ESS) was the first large-scale stored program control (SPC) telephone exchange or electronic switching system in the Bell System. It was manufactured by Western Electric and first placed into service in Succasunna, New Jersey, in May 1965. The switching fabric was composed of a reed relay matrix controlled by wire spring relays which in turn were controlled by a central processing unit (CPU).
The 1AESS central office switch was a plug compatible, higher capacity upgrade from 1ESS with a faster 1A processor that incorporated the existing instruction set for programming compatibility, and used smaller remreed switches, fewer relays, and featured disk storage. It was in service from 1976 to 2017.
Switching fabric.
The voice switching fabric plan was similar to that of the earlier 5XB switch in being bidirectional and in using the call-back principle. The largest full-access matrix switches (the 12A line grids had partial access) in the system, however, were 8x8 rather than 10x10 or 20x16. Thus they required eight stages rather than four to achieve large enough junctor groups in a large office. Crosspoints being more expensive in the new system but switches cheaper, system cost was minimized with fewer crosspoints organized into more switches. The fabric was divided into "Line Networks" and "Trunk Networks" of four stages, and partially folded to allow connecting line-to-line or trunk-to-trunk without exceeding eight stages of switching.
In the traditional implementation of a nonblocking minimal spanning switch, able to connect formula_0 input customers to formula_0 output customers simultaneously with the connections initiated in any order, the connection matrix scales as formula_1. This being impractical, statistical theory is used to design hardware that can connect most of the calls, and block others when traffic exceeds the design capacity. These "blocking switches" are the most common in modern telephone exchanges. They are generally implemented as smaller switch fabrics in cascade. In many, a randomizer is used to select the start of a path through the multistage fabric so that the statistical properties predicted by the theory can be gained. In addition, if the control system is able to rearrange the routing of existing connections on the arrival of a new connection, a full non-blocking matrix requires fewer switch points.
Line and trunk networks.
Each four stage Line Network (LN) or Trunk Network (TN) was divided into Junctor Switch Frames (JSF) and either Line Switch Frames (LSF) in the case of a Line Network, or Trunk Switch Frames (TSF) in the case of a Trunk Network. Links were designated A, B, C, and J for Junctor. A Links were internal to the LSF or TSF; B Links connected LSF or TSF to JSF, C were internal to JSF, and J links or Junctors connected to another net in the exchange.
All JSFs had a unity concentration ratio, that is the number of B links within the network equalled the number of junctors to other networks. Most LSFs had a 4:1 Line Concentration Ratio (LCR); that is the lines were four times as numerous as the B links. In some urban areas 2:1 LSF were used. The B links were often multipled to make a higher LCR, such as 3:1 or (especially in suburban 1ESS) 5:1. Line Networks always had 1024 Junctors, arranged in 16 grids that each switched 64 junctors to 64 B links. Four grids were grouped for control purposes in each of four LJFs.
TSF had a unity concentration, but a TN could have more TSFs than JSFs. Thus their B links were usually multipled to make a Trunk Concentration Ratio (TCR) of 1.25:1 or 1.5:1, the latter being especially common in 1A offices. TSFs and JSFs were identical except for their position in the fabric and the presence of a ninth test access level or "no-test level" in the JSF. Each JSF or TSF was divided into 4 two-stage grids.
Early TNs had four JSF, for a total of 16 grids, 1024 J links and the same number of B links, with four B links from each Trunk Junctor grid to each Trunk Switch grid. Starting in the mid-1970s, larger offices had their B links wired differently, with only two B links from each Trunk Junctor Grid to each Trunk Switch Grid. This allowed a larger TN, with 8 JSF containing 32 grids, connecting 2048 junctors and 2048 B links. Thus the junctor groups could be larger and more efficient. These TN had eight TSF, giving the TN a unity trunk concentration ratio.
Within each LN or TN, the A, B, C and J links were counted from the outer termination to the inner. That is, for a trunk, the trunk Stage 0 switch could connect each trunk to any of eight A links, which in turn were wired to Stage 1 switches to connect them to B links. Trunk Junctor grids also had Stage 0 and Stage 1 switches, the former to connect B links to C links, and the latter to connect C to J links also called Junctors. Junctors were gathered into cables, 16 twisted pairs per cable constituting a Junctor Subgroup, running to the Junctor Grouping Frame where they were plugged into cables to other networks. Each network had 64 or 128 subgroups, and was connected to each other network by one or (usually) several subgroups.
The original 1ESS Ferreed switching fabric was packaged as separate 8x8 switches or other sizes, tied into the rest of the speech fabric and control circuitry by wire wrap connections. The transmit/receive path of the analog voice signal is through a series of magnetic-latching reed switches (very similar to latching relays).
The much smaller Remreed crosspoints, introduced at about the same time as 1AESS, were packaged as grid boxes of four principal types. Type 10A Junctor Grids and 11A Trunk Grids were a box about 16x16x5 inches (40x40x12 cm) with sixteen 8x8 switches inside. Type 12A Line Grids with 2:1 LCR were only about 5 inches (12 cm) wide, with eight 4x4 Stage 0 line switches with ferrods and cutoff contacts for 32 lines, connected internally to four 4x8 Stage 1 switches connecting to B-links. Type 14A Line Grids with 4:1 LCR were about 16x12x5 inches (40x30x12 cm) with 64 lines, 32 A-links and 16 B-links. The boxes were connected to the rest of the fabric and control circuitry by slide-in connectors. Thus the worker had to handle a much bigger, heavier piece of equipment, but did not have to unwrap and rewrap dozens of wires.
Fabric error.
The two controllers in each Junctor Frame had no-test access to their Junctors via their F-switch, a ninth level in the Stage 1 switches which could be opened or closed independently of the crosspoints in the grid. When setting up each call through the fabric, but before connecting the fabric to the line and/or trunk, the controller could connect a test scan point to the talk wires in order to detect potentials. Current flowing through the scan point would be reported to the maintenance software, resulting in a "False Cross and Ground" (FCG) teleprinter message listing the path. Then the maintenance software would tell the call completion software to try again with a different junctor.
With a clean FCG test, the call completion software told the "A" relay in the trunk circuit to operate, connecting its transmission and test hardware to the switching fabric and thus to the line. Then, for an outgoing call, the trunk's scan point would scan for the presence of an off hook line. If the short was not detected, the software would command the printing of a "Supervision Failure" (SUPF) and try again with a different junctor. A similar supervision check was performed when an incoming call was answered. Any of these tests could alert for the presence of a bad crosspoint.
Staff could study a mass of printouts to find which links and crosspoints (out of, in some offices, a million crosspoints) were causing calls to fail on first tries. In the late 1970s, teleprinter channels were gathered together in Switching Control Centers (SCC), later Switching Control Center System, each serving a dozen or more 1ESS exchanges and using their own computers to analyze these and other kinds of failure reports. They generated a so-called histogram (actually a scatterplot) of parts of the fabric where failures were particularly numerous, usually pointing to a particular bad crosspoint, even if it failed sporadically rather than consistently. Local workers could then busy out the appropriate switch or grid and replace it.
When a test access crosspoint itself was stuck closed, it would cause sporadic FCG failures all over both grids that were tested by that controller. Since the J links were externally connected, switchroom staff discovered that such failures could be found by making busy both grids, grounding the controller's test leads, and then testing all 128 J links, 256 wires, for a ground.
Given the restrictions of 1960s hardware, occasional failures were unavoidable. When a failure was detected, the system was designed to connect the calling party to the wrong person rather than to a disconnect, intercept, etc.
Scan and distribute.
The computer received input from peripherals via magnetic scanners, composed of ferrod sensors, similar in principle to magnetic core memory except that the output was controlled by control windings analogous to the windings of a relay. Specifically, the ferrod was a transformer with four windings. Two small windings ran through holes in the center of a rod of ferrite. A pulse on the Interrogate winding was induced into the Readout winding, if the ferrite was not magnetically saturated. The larger control windings, if current was flowing through them, saturated the magnetic material, hence decoupling the Interrogate winding from the Readout winding which would return a Zero signal. The Interrogate windings of 16 ferrods of a row were wired in series to a driver, and the Readout windings of 64 ferrods of a column were wired to a sense amp. Check circuits ensured that an Interrogate current was indeed flowing.
Scanners were Line Scanners (LSC), Universal Trunk Scanners (USC), Junctor Scanners (JSC) and Master Scanners (MS). The first three only scanned for supervision, while Master Scanners did all other scan jobs. For example, a DTMF Receiver, mounted in a Miscellaneous Trunk frame, had eight demand scan points, one for each frequency, and two supervisory scan points, one to signal the presence of a valid DTMF combination so the software knew when to look at the frequency scan points, and the other to supervise the loop. The supervisory scan point also detected Dial Pulses, with software counting the pulses as they arrived. Each digit when it became valid was stored in a software hopper to be given to the Originating Register.
Ferrods were mounted in pairs, usually with different control windings, so one could supervise a switchward side of a trunk and the other the distant office. Components inside the trunk pack, including diodes, determined for example, whether it performed reverse battery signaling as an incoming trunk, or detected reverse battery from a distant trunk; i.e. was an outgoing trunk.
Line ferrods were also provided in pairs, of which the even numbered one had contacts brought out to the front of the package in lugs suitable for wire wrap so the windings could be strapped for loop start or ground start signaling. The original 1ESS packaging had all the ferrods of an LSF together, and separate from the line switches, while the later 1AESS had each ferrod at the front of the steel box containing its line switch. Odd numbered line equipment could not be made ground start, their ferrods being inaccessible.
The computer controlled the magnetic latching relays by Signal Distributors (SD) packaged in the Universal Trunk frames, Junctor frames, or in Miscellaneous Trunk frames, according to which they were numbered as USD, JSD or MSD. SD were originally contact trees of 30-contact wire spring relays, each driven by a flipflop. Each magnetic latching relay had one transfer contact dedicated to sending a pulse back to the SD, on each operate and release. The pulser in the SD detected this pulse to determine that the action had occurred, or else alerted the maintenance software to print a "FSCAN" report. In later 1AESS versions SD were solid state with several SD points per circuit pack generally on the same shelf or adjacent shelf to the trunk pack.
A few peripherals that needed quicker response time, such as Dial Pulse Transmitters, were controlled via Central Pulse Distributors, which otherwise were mainly used for enabling (alerting) a peripheral circuit controller to accept orders from the Peripheral Unit Address Bus.
1ESS computer.
The duplicate Harvard architecture central processor or "CC" (Central Control) for the 1ESS operated at approximately 200 kHz. It comprised five bays, each two meters high and totaling about four meters in length per CC. Packaging was in cards approximately 4x10 inches (10x25 centimeters) with an edge connector in the back. Backplane wiring was cotton covered wire-wrap wires, not ribbons or other cables. CPU logic was implemented using discrete diode–transistor logic. One hard plastic card commonly held the components necessary to implement, for example, two gates or a flipflop.
A great deal of logic was given over to diagnostic circuitry. CPU diagnostics could be run that would attempt to identify failing card(s). In single card failures, first attempt to repair success rates of 90% or better were common. Multiple card failures were not uncommon and the success rate for first time repair dropped rapidly.
The CPU design was quite complex, using three-way interleaving of instruction execution (later called an instruction pipeline) to improve throughput. Each instruction would go through an indexing phase, an actual instruction execution phase and an output phase. While an instruction was going through the indexing phase, the previous instruction was in its execution phase and the instruction before it was in its output phase.
In many instructions of the instruction set, data could be optionally masked and/or rotated. Single instructions existed for such esoteric functions as "find first set bit (the rightmost bit that is set) in a data word, optionally reset the bit and tell me the position of the bit". Having this function as an atomic instruction (rather than implementing as a subroutine) dramatically sped scanning for service requests or idle circuits. The central processor was implemented as a hierarchical state machine.
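The semantics of that instruction are easy to mimic in software; the sketch below is only an illustration of the described behaviour (locate the rightmost set bit, optionally reset it, and report its position), not actual 1ESS code.

```python
def find_first_set(word, reset=False):
    """Locate the rightmost set bit of `word`, as the instruction described above.

    Returns (position, word): position counts from 0 at the least significant
    bit (-1 if no bit is set); with reset=True the found bit is also cleared.
    """
    if word == 0:
        return -1, word
    position = (word & -word).bit_length() - 1
    if reset:
        word &= word - 1  # clears the lowest set bit
    return position, word

print(find_first_set(0b101000, reset=True))  # (3, 32) -> bit 3 found and cleared
```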
Memory had a 44-bit word length for program stores, of which six bits were for Hamming error correction and one was used for an additional parity check. This left 37 bits for the instruction, of which usually 22 bits were used for the address. This was an unusually wide instruction word for the time.
Program stores also contained permanent data, and could not be written online. Instead, the aluminum memory cards, also called twistor planes, had to be removed in groups of 128 so their permanent magnets could be written offline by a motorized writer, an improvement over the non motorized single card writer used in Project Nike. All memory frames, all busses, and all software and data were fully dual modular redundant. The dual CCs operated in lockstep and the detection of a mismatch triggered an automatic sequencer to change the combination of CC, busses and memory modules until a configuration was reached that could pass a sanity check. Busses were twisted pairs, one pair for each address, data or control bit, connected at the CC and at each store frame by coupling transformers, and ending in terminating resistors at the last frame.
Call Stores were the system's read/write memory, containing the data for calls in progress and other temporary data. They had a 24-bit word, of which one bit was for parity check. They operated similar to magnetic core memory, except that the ferrite was in sheets with a hole for each bit, and the coincident current address and readout wires passed through that hole. The first Call Stores held 8 kilowords, in a frame approximately a meter wide and two meters tall.
The separate program memory and data memory were operated in antiphase, with the addressing phase of Program Store coinciding with the data fetch phase of Call Store and vice versa. This resulted in further overlapping, thus higher program execution speed than might be expected from the slow clock rate.
Programs were mostly written in machine code. Bugs that previously went unnoticed became prominent when 1ESS was brought to big cities with heavy telephone traffic, and delayed the full adoption of the system for a few years. Temporary fixes included the Service Link Network (SLN), which did approximately the job of the Incoming Register Link and Ringing Selection Switch of the 5XB switch, thus diminishing CPU load and decreasing response times for incoming calls, and a Signal Processor (SP) or peripheral computer of only one bay, to handle simple but time-consuming tasks such as the timing and counting of Dial Pulses. 1AESS eliminated the need for SLN and SP.
The half inch tape drive was write only, being used only for Automatic Message Accounting. Program updates were executed by shipping a load of Program Store cards with the new code written on them.
The Basic Generic program included constant "audits" to correct errors in the call registers and other data. When a critical hardware failure in the processor or peripheral units occurred, such as both controllers of a line switch frame failing and unable to receive orders, the machine would stop connecting calls and go into a "phase of memory regeneration", "phase of reinitialization", or "Phase" for short. The Phases were known as Phase 1,2,4 or 5. Lesser phases only cleared the call registers of calls that were in an unstable state that is not yet connected, and took less time.
During a Phase, the system, normally roaring with the sound of relays operating and releasing, would go quiet as no relays were getting orders. The Teletype Model 35 would ring its bell and print a series of P's while the phase lasted. For Central office staff this could be a scary time as seconds and then perhaps minutes passed while they knew subscribers who picked up their phones would get dead silence until the phase was over and the processor regained "sanity" and resumed connecting calls. Greater phases took longer, clearing all call registers, thus disconnecting all calls and treating any off-hook line as a request for dial tone. If the automated phases failed to restore system sanity, there were manual procedures to identify and isolate bad hardware or buses.
1AESS.
Most of the thousands of 1ESS and 1AESS offices in the USA were replaced in the 1990s by DMS-100, 5ESS Switch and other digital switches, and since 2010 also by packet switches. As of late 2014, just over 20 1AESS installations remained in the North American network, which were located mostly in AT&T's legacy BellSouth and AT&T's legacy Southwestern Bell states, especially in the Atlanta GA metro area, the Saint Louis MO metro area, and in the Dallas/Fort Worth TX metro area. In 2015, AT&T did not renew a support contract with Alcatel-Lucent (now Nokia) for the 1AESS systems still in operation and notified Alcatel-Lucent of its intent to remove them all from service by 2017. As a result, Alcatel-Lucent dismantled the last 1AESS lab at the Naperville Bell Labs location in 2015, and announced the discontinuation of support for the 1AESS. In 2017, AT&T completed the removal of remaining 1AESS systems by moving customers to other newer technology switches, typically with Genband switches with TDM trunking only.
The last known 1AESS switch was in Odessa, TX (Odessa Lincoln Federal wirecenter ODSSTXLI). It was disconnected from service around June 3, 2017 and cut over to a Genband G5/G6 packet switch.
Other electronic switching systems.
The No. 1 Electronic Switching System Arranged with Data Features (No. 1 ESS ADF) was an adaptation of the Number One Electronic Switching System to create a store and forward message switching system. It used both single and multi-station lines for transmitting teletypewriter and data messages. It was created to respond to a growing need for rapid and economical delivery of data and printed copy.
Features.
The No. 1 ESS ADF had a large number of features, including:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n^2"
}
] | https://en.wikipedia.org/wiki?curid=8340179 |
834100 | Superparticular ratio | Ratio of two consecutive integers
In mathematics, a superparticular ratio, also called a superparticular number or epimoric ratio, is the ratio of two consecutive integer numbers.
More particularly, the ratio takes the form:
formula_0 where n is a positive integer.
Thus:
<templatestyles src="Template:Blockquote/styles.css" />A superparticular number is when a great number contains a lesser number, to which it is compared, and at the same time one part of it. For example, when 3 and 2 are compared, they contain 2, plus the 3 has another 1, which is half of two. When 3 and 4 are compared, they each contain a 3, and the 4 has another 1, which is a third part of 3. Again, when 5, and 4 are compared, they contain the number 4, and the 5 has another 1, which is the fourth part of the number 4, etc.
Superparticular ratios were written about by Nicomachus in his treatise "Introduction to Arithmetic". Although these numbers have applications in modern pure mathematics, the areas of study that most frequently refer to the superparticular ratios by this name are music theory and the history of mathematics.
Mathematical properties.
As Leonhard Euler observed, the superparticular numbers (including also the multiply superparticular ratios, numbers formed by adding an integer other than one to a unit fraction) are exactly the rational numbers whose continued fraction terminates after two terms. The numbers whose continued fraction terminates in one term are the integers, while the remaining numbers, with three or more terms in their continued fractions, are superpartient.
The Wallis product
formula_1
represents the irrational number π in several ways as a product of superparticular ratios and their inverses. It is also possible to convert the Leibniz formula for π into an Euler product of superparticular ratios in which each term has a prime number as its numerator and the nearest multiple of four as its denominator:
formula_2
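A quick way to see these identities numerically is to take partial products; the sketch below does so with a modest, arbitrarily chosen number of terms and, for the Euler product, with the first few odd primes, so the convergence shown is only approximate.

```python
from math import pi

def wallis_partial(terms):
    """Partial Wallis product of superparticular ratios; converges to pi/2."""
    product = 1.0
    for n in range(1, terms + 1):
        product *= (2 * n / (2 * n - 1)) * (2 * n / (2 * n + 1))
    return product

def leibniz_euler_partial(odd_primes):
    """Partial product of p over its nearest multiple of 4; converges to pi/4."""
    product = 1.0
    for p in odd_primes:
        product *= p / (4 * round(p / 4))
    return product

print(wallis_partial(10_000), pi / 2)                        # ~1.5707
print(leibniz_euler_partial([3, 5, 7, 11, 13, 17]), pi / 4)  # slow convergence
```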
In graph theory, superparticular numbers (or rather, their reciprocals, 1/2, 2/3, 3/4, etc.) arise via the Erdős–Stone theorem as the possible values of the upper density of an infinite graph.
Other applications.
In the study of harmony, many musical intervals can be expressed as a superparticular ratio (for example, due to octave equivalency, the ninth harmonic, 9/1, may be expressed as a superparticular ratio, 9/8). Indeed, whether a ratio was superparticular was the most important criterion in Ptolemy's formulation of musical harmony. In this application, Størmer's theorem can be used to list all possible superparticular numbers for a given limit; that is, all ratios of this type in which both the numerator and denominator are smooth numbers.
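As a hedged illustration of that use of Størmer's theorem, the brute-force sketch below lists every such ratio within the 5-limit; the search bound is an assumption chosen comfortably above the largest case for small limits.

```python
def is_smooth(n, limit):
    """True if every prime factor of n is at most `limit`."""
    for p in range(2, limit + 1):
        while n % p == 0:
            n //= p
    return n == 1

def smooth_superparticulars(limit, search_bound=100_000):
    """Superparticular ratios (n+1)/n whose numerator and denominator are smooth.

    Stormer's theorem guarantees the full list is finite; `search_bound` is an
    assumed cutoff, chosen comfortably above the largest case for small limits.
    """
    return [(n + 1, n) for n in range(1, search_bound)
            if is_smooth(n, limit) and is_smooth(n + 1, limit)]

# All superparticular ratios within the 5-limit (both terms 5-smooth):
print(smooth_superparticulars(5))
# [(2, 1), (3, 2), (4, 3), (5, 4), (6, 5), (9, 8), (10, 9), (16, 15), (25, 24), (81, 80)]
```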
These ratios are also important in visual harmony. Aspect ratios of 4:3 and 3:2 are common in digital photography, and aspect ratios of 7:6 and 5:4 are used in medium format and large format photography respectively.
Ratio names and related intervals.
Every pair of adjacent positive integers represents a superparticular ratio, and similarly every pair of adjacent harmonics in the harmonic series (music) represents a superparticular ratio. Many individual superparticular ratios have their own names, either in historical mathematics or in music theory. These include the following:
The root of some of these terms comes from Latin "sesqui-" "one and a half" (from "semis" "a half" and "-que" "and") describing the ratio 3:2.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{n + 1}{n} = 1 + \\frac{1}{n}"
},
{
"math_id": 1,
"text": " \\prod_{n=1}^{\\infty} \\left(\\frac{2n}{2n-1} \\cdot \\frac{2n}{2n+1}\\right) = \\frac{2}{1} \\cdot \\frac{2}{3} \\cdot \\frac{4}{3} \\cdot \\frac{4}{5} \\cdot \\frac{6}{5} \\cdot \\frac{6}{7} \\cdots = \\frac{4}{3}\\cdot\\frac{16}{15}\\cdot\\frac{36}{35}\\cdots=2\\cdot\\frac{8}{9}\\cdot\\frac{24}{25}\\cdot\\frac{48}{49}\\cdots=\\frac{\\pi}{2}"
},
{
"math_id": 2,
"text": "\\frac{\\pi}{4} = \\frac{3}{4} \\cdot \\frac{5}{4} \\cdot \\frac{7}{8} \\cdot \\frac{11}{12} \\cdot \\frac{13}{12} \\cdot\\frac{17}{16}\\cdots"
}
] | https://en.wikipedia.org/wiki?curid=834100 |
8343463 | Lateral earth pressure | Pressure of soil in horizontal direction
The lateral earth pressure is the pressure that soil exerts in the horizontal direction. It is important because it affects the consolidation behavior and strength of the soil and because it is considered in the design of geotechnical engineering structures such as retaining walls, basements, tunnels, deep foundations and braced excavations.
The earth pressure problem dates from the beginning of the 18th century, when Gautier listed five areas requiring research, one of which was the dimensions of gravity-retaining walls needed to hold back soil. However, the first major contribution to the field of earth pressures was made several decades later by Coulomb, who considered a rigid mass of soil sliding upon a shear surface. Rankine extended earth pressure theory by deriving a solution for a complete soil mass in a state of failure, as compared with Coulomb's solution which had considered a soil mass bounded by a single failure surface. Originally, Rankine's theory considered the case of only cohesionless soils, with Bell subsequently extending it to cover the case of soils possessing both cohesion and friction. Caquot and Kerisel modified Muller-Breslau's equations to account for a nonplanar rupture surface.
The coefficient of lateral earth pressure.
The coefficient of lateral earth pressure, K, is defined as the ratio of the horizontal effective stress, σ’h, to the vertical effective stress, σ’v. The effective stress is the intergranular stress calculated by subtracting the pore water pressure from the total stress as described in soil mechanics. K for a particular soil deposit is a function of the soil properties and stress history. The minimum stable value of K is called the active earth pressure coefficient, Ka; the active earth pressure is obtained, for example, when a retaining wall moves away from the soil. The maximum stable value of K is called the passive earth pressure coefficient, Kp; the passive earth pressure would develop, for example against a vertical plow that is pushing soil horizontally. For a level ground deposit with zero lateral strain in the soil, the "at-rest" coefficient of lateral earth pressure, K0 is obtained.
There are many theories for predicting lateral earth pressure; some are empirically based, and some are analytically derived.
Symbols definitions.
In this article, the following variables in the equations are defined as follows:
At rest pressure.
The "in situ" lateral pressure of soil is called earth pressure at rest and it is generally calculated as the product of the overburden stress and the coefficient K0; the latter is called the coefficient of earth pressure at rest. K0 can be obtained directly in the field based on e.g. the dilatometer test (DMT) or a borehole pressuremeter test (PMT), although it is more commonly calculated using the well-known Jaky's formula. For loosely deposited sands at rest, Jaky showed analytically that K0 decreases from unity as the sine of the internal friction angle of the material increases, i.e.
formula_0
Jaky's coefficient has been proved later to be also valid for normally consolidated granular deposits and normally consolidated clays.
From a purely theoretical point of view, the very simple formula_1 formula works ideally for the two extreme values of formula_2: for formula_2 = 0° it gives formula_3, referring to hydrostatic conditions, and for formula_2 = 90° (a theoretical value) it gives formula_4, referring to a frictional material that can stand vertically without support, thus exerting no lateral pressure. These extreme cases are enough evidence that the correct expression for the coefficient of earth pressure at rest is the formula_5 expression.
There is a general impression that Jaky's (1944) coefficient of earth pressure at rest is empirical and, indeed, the formula_5 expression is just a simplification of the expression below:
formula_6
However, the latter derives from a fully analytical procedure and corresponds to an intermediate state between the state at rest and the active state (for more information see Pantelidis).
As mentioned earlier, according to the literature, Jaky's formula_5 equation matches very well the experimental data for both normally consolidated sands and clays. Some researchers state, however, that slightly modified forms of Jaky's equation show a better fit to their data. Although some of these modifications gained great popularity, they do not provide better estimations for formula_7. For example, Brooker and Ireland's formula_8 has been based on the laboratory determination of formula_7 of only five samples, whilst the effective angle of shearing resistance of three of them was obtained from the literature, with no control over them. Moreover, refinements on the order of a few percentage points support the validity of the formula_5 expression rather than the superiority of the refined expressions.
For overconsolidated soils, Mayne & Kulhawy suggested the following expression:
formula_9
The latter requires the OCR profile with depth to be determined. OCR is the overconsolidation ratio and formula_10 is the effective stress friction angle.
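A minimal sketch of these two estimates is given below, assuming the usual closed forms K0 = 1 − sin φ′ (Jaky) and K0 = (1 − sin φ′)·OCR^(sin φ′) (Mayne & Kulhawy); the input values in the example are arbitrary.

```python
from math import sin, radians

def k0_jaky(phi_deg):
    """At-rest coefficient for a normally consolidated deposit (Jaky)."""
    return 1.0 - sin(radians(phi_deg))

def k0_mayne_kulhawy(phi_deg, ocr):
    """At-rest coefficient for an overconsolidated deposit (OCR >= 1)."""
    s = sin(radians(phi_deg))
    return (1.0 - s) * ocr ** s

print(k0_jaky(30))                  # ~0.5 for phi' = 30 degrees
print(k0_mayne_kulhawy(30, ocr=4))  # ~1.0, i.e. doubled by overconsolidation
```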
To estimate K0 due to compaction pressures, refer to Ingold (1979)
Pantelidis offered an analytical expression for the coefficient of earth pressure at rest, applicable to cohesive-frictional soils and both horizontal and vertical pseudo-static conditions, which is part of a unified continuum mechanics approach (the expression in question is given in the section below).
Soil lateral active pressure and passive resistance.
The active state occurs when a retained soil mass is allowed to relax or deform laterally and outward (away from the soil mass) to the point of mobilizing its available full shear resistance (or engaging its shear strength) in trying to resist lateral deformation. That is, the soil is at the point of incipient failure by shearing due to unloading in the lateral direction. It is the minimum theoretical lateral pressure that a given soil mass will exert on a retaining wall that will move or rotate away from the soil until the soil active state is reached (not necessarily the actual in-service lateral pressure on walls that do not move when subjected to soil lateral pressures higher than the active pressure).
The passive state occurs when a soil mass is externally forced laterally and inward (towards the soil mass) to the point of mobilizing its available full shear resistance in trying to resist further lateral deformation; that is, the soil mass is at the point of incipient failure by shearing, this time due to loading in the lateral direction. It is the maximum lateral resistance that a given soil mass can offer to a retaining wall that is being pushed towards the soil mass. Thus active pressure and passive resistance define the minimum lateral pressure and the maximum lateral resistance possible from a given mass of soil.
Coulomb's earth pressure coefficients.
Coulomb (1776) first studied the problem of lateral earth pressures on retaining structures. He used the limit equilibrium theory, which considers the failing soil block as a free body in order to determine the limiting horizontal earth pressure.
The limiting horizontal pressures at failure in extension or compression are used to determine the formula_11 and formula_12 respectively. Since the problem is indeterminate, a number of potential failure surfaces must be analysed to identify the critical failure surface (i.e. the surface that produces the maximum or minimum thrust on the wall). Coulomb's main assumption is that the failure surface is planar.
Mayniel (1808) later extended Coulomb's equations to account for wall friction, denoted by formula_13. Müller-Breslau (1906) further generalized Mayniel's equations for a non-horizontal backfill and a non-vertical soil-wall interface (represented by an angle formula_14 from the vertical).
formula_15
formula_16
Instead of evaluating the above equations or using commercial software applications for this, books of tables for the most common cases can be used. Generally, instead of formula_11, the horizontal part formula_17 is tabulated. It is the same as formula_11 times formula_18. [Note that under certain conditions, the equation for formula_12 "blows up". For example, if formula_19 and formula_20, then formula_21.]
The actual earth pressure force formula_22 is the sum of the part formula_23 due to the weight of the earth and a part formula_24 due to extra loads such as traffic, minus a part formula_25 due to any cohesion present.
formula_23 is the integral of the pressure over the height of the wall, which equates to formula_11 times the specific gravity of the earth, times one half the wall height squared.
In the case of a uniform pressure loading on a terrace above a retaining wall, formula_24 equates to this pressure times formula_11 times the height of the wall. This applies if the terrace is horizontal or the wall vertical. Otherwise, formula_24 must be multiplied by formula_26.
formula_25 is generally assumed to be zero unless a value of cohesion can be maintained permanently.
formula_23 acts on the wall's surface at one third of its height from the bottom and at an angle formula_13 relative to a right angle at the wall. formula_24 acts at the same angle, but at one half the height.
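As a rough numerical companion to this section, the sketch below evaluates one common textbook form of the Müller-Breslau active coefficient; the angle conventions (wall back face measured from the vertical, wall friction angle, backfill slope) are an assumption and differ between references, so the expression should be checked against the formula actually in use.

```python
from math import cos, sin, sqrt, radians

def coulomb_ka(phi, delta=0.0, theta=0.0, beta=0.0):
    """One textbook (Mueller-Breslau) form of the active coefficient.

    Angles in degrees: phi = soil friction angle, delta = wall friction angle,
    theta = wall back-face inclination from the vertical, beta = backfill
    slope. Sign and angle conventions differ between references.
    """
    phi, delta, theta, beta = map(radians, (phi, delta, theta, beta))
    numerator = cos(phi - theta) ** 2
    denominator = (cos(theta) ** 2 * cos(delta + theta)
                   * (1 + sqrt(sin(delta + phi) * sin(phi - beta)
                               / (cos(delta + theta) * cos(beta - theta)))) ** 2)
    return numerator / denominator

# With a smooth vertical wall and level backfill this reduces to Rankine's value:
print(round(coulomb_ka(30), 4))  # 0.3333
```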
Rankine's earth pressure coefficients and Bell's extension for cohesive soils.
Rankine's theory, developed in 1857, is a stress field solution that predicts active and passive earth pressure. It assumes that the soil is cohesionless, the wall is non-battered and frictionless whilst the backfill is horizontal. The failure surface on which the soil moves is planar. The expressions for the active and passive lateral earth pressure coefficients are given below.
formula_27
formula_28
For soils with cohesion, Bell developed an analytical solution that uses the square root of the pressure coefficient to predict the cohesion's contribution to the overall resulting pressure. These equations represent the total lateral earth pressure. The first term represents the non-cohesive contribution and the second term the cohesive contribution. The first equation is for the active earth pressure condition and the second for the passive earth pressure condition.
formula_29
formula_30
Note that c' and φ' are the effective cohesion and angle of shearing resistance of the soil respectively. For cohesive soils, the depth of tension crack (referring to the active state) is:
formula_31
For purely frictional soils with sloping backfill exerting pressure on a non-battered, frictionless wall, the coefficients are:
formula_32
formula_33
with horizontal components of earth pressure:
formula_34
formula_35
where, β is the backfill inclination angle.
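A short sketch of these relations for a level backfill is given below, assuming the standard closed forms Ka = tan²(45° − φ′/2), Kp = tan²(45° + φ′/2) and Bell's σa = Ka·σv − 2c′√Ka, σp = Kp·σv + 2c′√Kp; the soil parameters in the example are arbitrary.

```python
from math import tan, sqrt, radians

def rankine_coefficients(phi_deg):
    """Rankine active and passive coefficients for a level backfill."""
    ka = tan(radians(45 - phi_deg / 2)) ** 2
    kp = tan(radians(45 + phi_deg / 2)) ** 2
    return ka, kp

def bell_pressures(sigma_v, phi_deg, c):
    """Total lateral active and passive pressures per Bell's extension (kPa)."""
    ka, kp = rankine_coefficients(phi_deg)
    return ka * sigma_v - 2 * c * sqrt(ka), kp * sigma_v + 2 * c * sqrt(kp)

# 3 m of soil with gamma = 18 kN/m3, phi' = 30 degrees and c' = 5 kPa:
print(bell_pressures(sigma_v=18 * 3, phi_deg=30, c=5))  # ~(12.2, 179.3)
```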
Caquot and Kerisel's analysis for log-spiral failure surfaces.
In 1948, Albert Caquot (1881–1976) and Jean Kerisel (1908–2005) developed an advanced theory that modified Muller-Breslau's equations to account for a non-planar rupture surface. They used a logarithmic spiral to represent the rupture surface instead. This modification is extremely important for passive earth pressure where there is soil-wall friction. Mayniel and Muller-Breslau's equations are unconservative in this situation and are dangerous to apply. For the active pressure coefficient, the logarithmic spiral rupture surface provides a negligible difference compared to Muller-Breslau. The resulting equations are too cumbersome to evaluate by hand, so tables or computers are used instead.
Mononobe-Okabe's and Kapilla's earth pressure coefficients for dynamic conditions.
Mononobe-Okabe's and Kapilla's earth pressure coefficients for dynamic active and passive conditions respectively have been obtained on the same basis as Coulomb's solution. These coefficients are given below:
formula_36
formula_37
with horizontal components of earth pressure:
formula_34
formula_35
where, formula_38 and formula_39 are the seismic coefficients of horizontal and vertical acceleration respectively, formula_40, formula_41 is the back face inclination angle of the structure with respect to vertical, formula_42 is the angle of friction between structure and soil and formula_43 is the back slope inclination.
The above coefficients are included in numerous seismic design codes worldwide (e.g., EN1998-5, AASHTO), since being suggested as standard methods by Seed and Whitman. The problems with these two solutions are known (e.g., see Anderson), with the most important one being the square root of a negative number for formula_44 (the minus sign stands for the active case whilst the plus sign stands for the passive case).
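The condition behind that square-root problem can be checked numerically; the sketch below assumes the usual definition of the seismic inertia angle ψ = arctan(kh/(1 − kv)) and the commonly quoted requirement φ ≥ ψ + β for the active-case radical to stay real, both of which should be verified against the specific code formulation in use.

```python
from math import atan, degrees

def mononobe_okabe_applicable(phi_deg, beta_deg, kh, kv=0.0):
    """Check the usual validity condition for the dynamic active coefficient.

    psi = arctan(kh / (1 - kv)) is the seismic inertia angle; the radical in
    the active coefficient stays real only while phi >= psi + beta (assumed
    textbook condition; code formulations differ slightly).
    """
    psi_deg = degrees(atan(kh / (1.0 - kv)))
    return psi_deg, phi_deg >= psi_deg + beta_deg

print(mononobe_okabe_applicable(phi_deg=30, beta_deg=10, kh=0.4))
# psi ~ 21.8 degrees, so 30 < 21.8 + 10 and the coefficient is undefined
```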
The various design codes recognize the problem with these coefficients and they either attempt an interpretation, dictate a modification of these equations, or propose alternatives. In this respect:
It is noted that the above empirical corrections on formula_38 made by AASHTO and the Building Seismic Safety Council return coefficients of earth pressure very close to those derived by the analytical solution proposed by Pantelidis (see below).
Mazindrani and Ganjale's approach for cohesive-frictional soils with inclined surface.
Mazindrani and Ganjale presented an analytical solution to the problem of earth pressures exerted on a frictionless, non-battered wall by a cohesive-frictional soil with inclined surface. The derived equations are given below for both the active and passive states:
formula_48
formula_49
The horizontal components of the active and passive earth pressure are:
formula_34
formula_35
ka and kp coefficients for various values of formula_50, formula_41, and formula_51 can be found in tabular form in Mazindrani and Ganjale.
Based on a similar analytical procedure, Gnanapragasam gave a different expression for ka. It is noted, however, that both Mazindrani and Ganjale's and Gnanapragasam's expressions lead to identical active earth pressure values.
Following either approach for the active earth pressure, the depth of tension crack appears to be the same as in the case of zero backfill inclination (see Bell's extension of Rankine's theory).
Pantelidis' unified approach: the generalized coefficients of earth pressure.
Pantelidis offered a unified fully analytical continuum mechanics approach (based on Cauchy's first law of motion) for deriving earth pressure coefficients for all soil states, applicable to cohesive-frictional soils and both horizontal and vertical pseudo-static conditions.
The following symbols are used:
formula_38 and formula_39 are the seismic coefficients of horizontal and vertical acceleration respectively
formula_52, formula_50 and formula_53 are the effective cohesion, effective internal friction angle (peak values) and unit weight of soil respectively
formula_54 is the mobilized cohesion of soil (the mobilized shear strength of soil, i.e. the formula_54 and formula_55 parameters, can be obtained either analytically or through relevant charts; see Pantelidis)
formula_56 and formula_57 are the effective elastic constants of soil (i.e. the Young modulus and Poisson's ratio respectively)
formula_58 is the wall height
formula_59 is the depth where the earth pressure is calculated
The coefficient of earth pressure "at rest".
formula_60 with formula_61
The coefficient of "active" earth pressure.
formula_62 with formula_63
The coefficient of "passive" earth pressure.
formula_64 with formula_65
The coefficient of "intermediate" earth pressure on the active "side".
formula_66 with formula_67
The coefficient of "intermediate" earth pressure on the passive "side".
formula_68 where formula_69, formula_70, formula_71 and formula_72
with formula_73
formula_74 and formula_75 are parameters related to the transition from the soil wedge of the state at rest to the soil wedge of the passive state (i.e., the inclination angle of the soil wedge changing from formula_76 to formula_77).
Also, formula_78 and formula_79 are the lateral displacement of the wall and the lateral (maximum) displacement of the wall corresponding to the active or passive state (both at depth formula_59). The latter is given below.
The lateral maximum displacement of the wall corresponding to the active or passive state.
formula_80 for a smooth retaining wall and formula_81 for a rough retaining wall
with formula_82 or formula_83 for the active and passive "side" respectively.
The depth of tension crack (active state) or neutral zone (state at rest).
The depth of "neutral zone" in the state at rest is:formula_84whilst the depth of "tension crack" in the active state is:formula_85Under static conditions ( formula_86=formula_87=0), where the mobilized cohesion, formula_88, is equal to the cohesion value at the critical state, formula_89, the above expression is transformed to the well-known:formula_31
Derivation of the earth pressure at rest by the active earth pressure coefficient.
Such a derivation has been foreseen in EM1110-2-2502 with the application of a Strength Mobilization Factor (SMF) to c′ and tanφ′. According to this Engineer Manual, an appropriate SMF value allows calculation of greater-than-active earth pressures using Coulomb's active force equation. Assuming an average SMF value equal to 2/3 along Coulomb's failure surface, it has been shown that for purely frictional soils the derived coefficient value of earth pressure matches quite well with the respective one derived from Jaky's formula_5 equation.
In the solution proposed by Pantelidis, the SMF is the formula_90 ratio, and what has been foreseen by EM1110-2-2502 can thus be calculated exactly.
Example #1: For formula_89 = 20 kPa, formula_2 = 30°, γ = 18 kN/m³, formula_86 = formula_87 = 0, and formula_91 = 2 m, the state at rest gives formula_92 = 0.211, formula_88 = 9.00 kPa and formula_93 = 14.57°. Using this (formula_88, formula_93) pair of values in place of the (formula_89, formula_2) pair in the coefficient of active earth pressure (formula_94) given previously, the latter returns a coefficient of earth pressure equal to 0.211, that is, the coefficient of earth pressure at rest.
Example #2: For formula_89 = 0 kPa, formula_2 = 30°, γ = 18 kN/m³, formula_86 = 0.3, formula_87 = 0.15, and formula_91 = 2 m, the state at rest gives formula_95 = 0.602, formula_88 = 0 kPa and formula_93 = 14.39°. Using this (formula_88, formula_93) pair of values in place of the (formula_89, formula_2) pair, with formula_86 = formula_87 = 0, in the coefficient of active earth pressure (formula_94) given previously, the latter again returns a coefficient of earth pressure equal to 0.602, that is, the coefficient of earth pressure at rest.
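The two examples can be reproduced numerically from the active-state coefficient formula_62 by substituting the mobilized pair (formula_88, formula_93) for (formula_89, formula_2). The short sketch below does this; the function name and the rounding are illustrative only.

```python
import math

def k_ae(phi_deg, c, gamma, z, kh=0.0, kv=0.0):
    """Active earth pressure coefficient (formula_62); c and phi may be either
    the effective strength pair (c', phi') or a mobilized pair (c_m, phi_m)."""
    phi = math.radians(phi_deg)
    frictional = (1 - math.sin(phi)) / (1 + math.sin(phi)) \
                 * (1 + 2 * kh / (1 - kv) * math.tan(phi))
    cohesive = (1 / (1 - kv)) * (2 * c / (gamma * z)) \
               * math.tan(math.radians(45 - phi_deg / 2))
    return frictional - cohesive

# Example #1: mobilized pair c_m = 9.00 kPa, phi_m = 14.57 deg, static case.
print(round(k_ae(14.57, 9.00, 18.0, 2.0), 3))   # ~0.211, the at-rest value

# Example #2: mobilized pair c_m = 0 kPa, phi_m = 14.39 deg, with kh = kv = 0.
print(round(k_ae(14.39, 0.0, 18.0, 2.0), 3))    # ~0.602, the at-rest value
```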
Comparing Eurocode 8-5 and AASHTO methods for earth pressure analysis against centrifuge tests, finite elements, and the Generalized Coefficients of Earth Pressure.
An exhaustive comparison of the earth pressure methods included in the EN1998-5:2004 (use of the Mononobe-Okabe method, M-O), prEN1998-5:2021 and AASHTO (M-O with half the peak ground acceleration) standards, against contemporary centrifuge tests, finite elements, and the Generalized Coefficients of Earth Pressure, has been carried out by Pantelidis and Christodoulou. The comparison includes, among others, results from 157 numerical cases with two finite element programs (Rocscience’s RS2 and mrearth2d) and two different contemporary centrifuge test studies.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " K_{0(NC)} = 1 - \\sin \\phi' "
},
{
"math_id": 1,
"text": " 1 - \\sin \\phi' "
},
{
"math_id": 2,
"text": " \\phi' "
},
{
"math_id": 3,
"text": " K_{0} = 1 "
},
{
"math_id": 4,
"text": " K_{0} = 0 "
},
{
"math_id": 5,
"text": " K_{0} = 1 - \\sin \\phi' "
},
{
"math_id": 6,
"text": " K_{0(NC)} = \\frac{1+2/3 \\sin \\phi'}{1 + \\sin \\phi'} (1 - \\sin \\phi') "
},
{
"math_id": 7,
"text": " K_{0} "
},
{
"math_id": 8,
"text": " K_{0} = 0.95 - \\sin \\phi' "
},
{
"math_id": 9,
"text": " K_{0(OC)} = K_{0(NC)} * OCR^{(\\sin \\phi ')} \\ "
},
{
"math_id": 10,
"text": "\\phi '"
},
{
"math_id": 11,
"text": "K_a"
},
{
"math_id": 12,
"text": "K_p"
},
{
"math_id": 13,
"text": "\\delta"
},
{
"math_id": 14,
"text": "\\theta"
},
{
"math_id": 15,
"text": " K_a = \\frac{ \\cos ^2 \\left( \\phi' - \\theta \\right)}{\\cos ^2 \\theta \\cos \\left( \\delta + \\theta \\right) \\left( 1 + \\sqrt{ \\frac{ \\sin \\left( \\delta + \\phi' \\right) \\sin \\left( \\phi' - \\beta \\right)}{\\cos \\left( \\delta + \\theta \\right) \\cos \\left( \\beta - \\theta \\right)}} \\ \\right) ^2}"
},
{
"math_id": 16,
"text": " K_p = \\frac{ \\cos ^2 \\left( \\phi' + \\theta \\right)}{\\cos ^2 \\theta \\cos \\left( \\delta - \\theta \\right) \\left( 1 - \\sqrt{ \\frac{ \\sin \\left( \\delta + \\phi' \\right) \\sin \\left( \\phi' + \\beta \\right)}{\\cos \\left( \\delta - \\theta \\right) \\cos \\left( \\beta - \\theta \\right)}} \\ \\right) ^2}"
},
{
"math_id": 17,
"text": "K_{ah}"
},
{
"math_id": 18,
"text": "\\cos(\\delta + \\theta)"
},
{
"math_id": 19,
"text": "\\theta = \\beta = 0"
},
{
"math_id": 20,
"text": "\\phi' = \\delta = 45 ^ \\circ"
},
{
"math_id": 21,
"text": "K_p \\to \\infty"
},
{
"math_id": 22,
"text": "E_a"
},
{
"math_id": 23,
"text": "E_{ag}"
},
{
"math_id": 24,
"text": "E_{ap}"
},
{
"math_id": 25,
"text": "E_{ac}"
},
{
"math_id": 26,
"text": "\\frac{\\cos \\theta \\cdot \\cos \\beta}{\\cos(\\theta - \\beta)}"
},
{
"math_id": 27,
"text": " K_a = \\tan ^2 \\left( 45 - \\frac{\\phi'}{2} \\right) = \\frac{ 1 - \\sin(\\phi') }{ 1 + \\sin(\\phi') }"
},
{
"math_id": 28,
"text": " K_p = \\tan ^2 \\left( 45 + \\frac{\\phi'}{2} \\right) = \\frac{ 1 + \\sin(\\phi') }{ 1 - \\sin(\\phi') } "
},
{
"math_id": 29,
"text": " \\sigma_a = K_a \\sigma_v - 2c' \\sqrt{K_a} \\ "
},
{
"math_id": 30,
"text": " \\sigma_p = K_p \\sigma_v + 2c' \\sqrt{K_p} \\ "
},
{
"math_id": 31,
"text": " z_{tc}=\\frac{2c'}{\\gamma \\tan{(45^o-\\phi'/2)}}"
},
{
"math_id": 32,
"text": " K_a = \\cos \\beta \\frac{\\cos \\beta - \\sqrt{\\cos ^2 \\beta - \\cos ^2 \\phi'}}{\\cos \\beta + \\sqrt{\\cos ^2 \\beta - \\cos ^2 \\phi'}}"
},
{
"math_id": 33,
"text": " K_p = \\cos \\beta \\frac{\\cos \\beta + \\sqrt{\\cos ^2 \\beta - \\cos ^2 \\phi'}}{\\cos \\beta - \\sqrt{\\cos ^2 \\beta - \\cos ^2 \\phi'}}"
},
{
"math_id": 34,
"text": " \\sigma_a =K_a \\gamma z \\cos\\beta"
},
{
"math_id": 35,
"text": " \\sigma_p =K_p \\gamma z \\cos\\beta"
},
{
"math_id": 36,
"text": " K_{ae} = \\frac{\\cos^2{(\\phi'-\\psi-\\beta)}}{\\cos{\\psi}\\cos^2{\\beta}\\cos(\\delta+\\beta+\\psi)\\Bigl(1+\\sqrt{\\frac{\\sin{(\\phi'+\\delta)}\\sin{(\\phi'-\\psi+\\alpha)}}{\\sqrt{\\cos{(\\delta+\\beta+\\phi)}\\cos{(\\alpha-\\beta)}}}}\\Bigr)^2}"
},
{
"math_id": 37,
"text": " K_{pe} = \\frac{\\cos^2{(\\phi'-\\psi+\\beta)}}{\\cos{\\psi}\\cos^2{\\beta}\\cos(\\delta-\\beta+\\psi)\\Bigl(1-\\sqrt{\\frac{\\sin{(\\phi'+\\delta)}\\sin{(\\phi'-\\psi+\\alpha)}}{\\sqrt{\\cos{(\\delta-\\beta+\\phi)}\\cos{(\\alpha-\\beta)}}}}\\Bigr)^2}"
},
{
"math_id": 38,
"text": " k_h"
},
{
"math_id": 39,
"text": " k_v"
},
{
"math_id": 40,
"text": " \\psi=\\arctan{(k_h/(1-k_v))}"
},
{
"math_id": 41,
"text": " \\beta"
},
{
"math_id": 42,
"text": " \\delta"
},
{
"math_id": 43,
"text": " \\alpha"
},
{
"math_id": 44,
"text": " \\phi'<\\psi\\mp\\beta"
},
{
"math_id": 45,
"text": " k_h=(1/2)PGA"
},
{
"math_id": 46,
"text": " PGA"
},
{
"math_id": 47,
"text": " k_h=(2/3)PGA"
},
{
"math_id": 48,
"text": " K_a = \\frac{1}{\\cos^2\\phi'} \\biggl(2\\cos^2\\beta+2\\frac{c'}{\\gamma z}\\cos\\phi'\\sin\\phi'-\\sqrt{4\\cos^2\\beta\\Bigl(\\cos^2\\beta-\\cos^2\\phi'\\Bigr)+4\\biggl(\\frac{c'}{\\gamma z}\\biggl)^2\\cos^2\\phi'+8\\biggl(\\frac{c'}{\\gamma z}\\biggl)\\cos^2\\beta\\sin\\phi'\\cos\\phi'}\\biggl)-1"
},
{
"math_id": 49,
"text": " K_p = \\frac{1}{\\cos^2\\phi'} \\biggl(2\\cos^2\\beta+2\\frac{c'}{\\gamma z}\\cos\\phi'\\sin\\phi'+\\sqrt{4\\cos^2\\beta\\Bigl(\\cos^2\\beta-\\cos^2\\phi'\\Bigr)+4\\biggl(\\frac{c'}{\\gamma z}\\biggl)^2\\cos^2\\phi'+8\\biggl(\\frac{c'}{\\gamma z}\\biggl)\\cos^2\\beta\\sin\\phi'\\cos\\phi'}\\biggl)-1"
},
{
"math_id": 50,
"text": " \\phi'"
},
{
"math_id": 51,
"text": " c'/(\\gamma z)"
},
{
"math_id": 52,
"text": " c'"
},
{
"math_id": 53,
"text": " \\gamma"
},
{
"math_id": 54,
"text": " c_m"
},
{
"math_id": 55,
"text": " \\phi_m"
},
{
"math_id": 56,
"text": " E"
},
{
"math_id": 57,
"text": " \\nu"
},
{
"math_id": 58,
"text": " H"
},
{
"math_id": 59,
"text": " z"
},
{
"math_id": 60,
"text": " K_{oe} = (1 -\\sin\\phi')\\left (1+\\frac{k_h}{1-k_v}\\tan\\phi' \\right )-\\frac{1}{1-k_v}\\frac{2c_m}{\\gamma z}\\tan \\left ( 45^o-\\frac{\\phi'}{2} \\right )"
},
{
"math_id": 61,
"text": " \\sigma_o=K_{oe} (1-k_v)\\gamma z"
},
{
"math_id": 62,
"text": " K_{ae} = \\frac{1 -\\sin\\phi'}{1 +\\sin\\phi'}\\left (1+2\\frac{k_h}{1-k_v}\\tan\\phi' \\right )-\\frac{1}{1-k_v}\\frac{2c_m}{\\gamma z}\\tan \\left ( 45^o-\\frac{\\phi'}{2} \\right )"
},
{
"math_id": 63,
"text": " \\sigma_a=K_{ae} (1-k_v)\\gamma z"
},
{
"math_id": 64,
"text": " K_{pe} = \\frac{1 +\\sin\\phi'}{1 -\\sin\\phi'}\\left (1-2\\frac{k_h}{1-k_v}\\tan\\phi' \\right )+\\frac{1}{1-k_v}\\frac{2c_m}{\\gamma z}\\tan \\left ( 45^o+\\frac{\\phi'}{2} \\right )"
},
{
"math_id": 65,
"text": " \\sigma_p=K_{pe} (1-k_v)\\gamma z"
},
{
"math_id": 66,
"text": " K_{xe,a} = \\biggl(\\frac{1 -\\sin\\phi'}{1 +\\sin\\phi'}\\biggr) \\biggl(\\left (1-\\xi\\sin\\phi' \\right )-\\frac{k_h}{1-k_v}\\tan\\phi'(2+\\xi(1-\\sin\\phi'))\\biggl)-\\frac{1}{1-k_v}\\frac{2c_m}{\\gamma z}\\tan \\left ( 45^o-\\frac{\\phi'}{2} \\right )"
},
{
"math_id": 67,
"text": " \\sigma_{x,a}=K_{xe,a} (1-k_v)\\gamma z"
},
{
"math_id": 68,
"text": " K_{xe,p} = \\biggl(\\frac{1 +\\sin\\phi'}{1 -\\sin\\phi'}\\biggr)^{\\xi_1} \\biggl(\\left (1+\\xi\\sin\\phi' \\right )+\\xi_2\\frac{k_h}{1-k_v}\\tan\\phi'(2+\\xi(1+\\sin\\phi'))\\biggl)+\\frac{1}{1-k_v}\\frac{2c_m}{\\gamma z} \\Biggl( \\frac{\\tan \\left ( 45^o+\\frac{\\phi'}{2} \\right )}{\\tan \\left ( 45^o-\\frac{\\phi'}{2} \\right )}\\Biggr)^{\\xi_1} \\tan \\left ( 45^o-\\frac{\\phi'}{2} \\right )"
},
{
"math_id": 69,
"text": " \\xi_1=1+\\xi"
},
{
"math_id": 70,
"text": " \\xi_2=\\frac{2}{m}-1"
},
{
"math_id": 71,
"text": " \\xi=\\frac{m-1}{m+1}\\Bigl(1-\\frac{1}{m}\\Bigr)-1"
},
{
"math_id": 72,
"text": " m=\\Bigl(1-\\frac{\\Delta x}{\\Delta x_{max}}\\Bigr)^{-1}"
},
{
"math_id": 73,
"text": " \\sigma_{x,p}=K_{xe,p} (1-k_v)\\gamma z"
},
{
"math_id": 74,
"text": " \\xi_1"
},
{
"math_id": 75,
"text": " \\xi_2"
},
{
"math_id": 76,
"text": " 45^o+\\phi'/2"
},
{
"math_id": 77,
"text": " 45^o-\\phi'/2"
},
{
"math_id": 78,
"text": " \\Delta x"
},
{
"math_id": 79,
"text": " \\Delta x_{max}"
},
{
"math_id": 80,
"text": " \\Delta x_{max}=\\frac{\\pi}{4} \\frac{1-\\nu ^2}{E} \\frac{(H+z)^3 (H-z)}{H^2 z} \\Delta \\Kappa (1-k_v) \\gamma z"
},
{
"math_id": 81,
"text": " \\Delta x_{max}=\\frac{\\pi}{4} \\frac{(3-\\nu -4 \\nu^2)(1+\\nu)}{E} \\Biggl(\\frac{1-\\nu^2}{H^2-z^2}+\\frac{\\nu(1+\\nu)H-(1-\\nu^2)z}{(H+z)^3}\\Biggl)^{-1} \\frac{1}{H} \\Delta \\Kappa (1-k_v) \\gamma z"
},
{
"math_id": 82,
"text": " \\Delta \\Kappa=K_{oe}-K_{xe,a}"
},
{
"math_id": 83,
"text": " \\Delta \\Kappa=K_{xe,p}-K_{oe}"
},
{
"math_id": 84,
"text": " z_{nz}=\\frac{c'}{(1-k_v)\\gamma \\tan{\\phi'}}\\Biggl[\\frac{1}{\\biggl(\\cos{\\phi'}+\\frac{k_h}{1-k_v}\\sin{\\phi'}\\biggl)^2}-1 \\Biggl]"
},
{
"math_id": 85,
"text": " z_{tc}=\\frac{c'}{(1-k_v)\\gamma \\tan{\\phi'}}\\Biggl[\\frac{\\tan^2(45+\\phi'/2)}{\\biggl(1+2\\frac{k_h}{1-k_v}\\tan{\\phi'}\\biggl)^2}-1 \\Biggl]"
},
{
"math_id": 86,
"text": " k_h "
},
{
"math_id": 87,
"text": " k_v "
},
{
"math_id": 88,
"text": " c_m "
},
{
"math_id": 89,
"text": " c' "
},
{
"math_id": 90,
"text": " 1/f_m "
},
{
"math_id": 91,
"text": " z "
},
{
"math_id": 92,
"text": " K_0 "
},
{
"math_id": 93,
"text": " \\phi_m "
},
{
"math_id": 94,
"text": " K_{ae}"
},
{
"math_id": 95,
"text": " K_{oe} "
}
] | https://en.wikipedia.org/wiki?curid=8343463 |
834367 | 212 (number) | Natural number
212 (two hundred [and] twelve) is the natural number following 211 and preceding 213.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "212=2^{2}\\times53"
}
] | https://en.wikipedia.org/wiki?curid=834367 |
834644 | James B. Francis | British-American civil engineer (1815–1892)
James Bicheno Francis (May 18, 1815 – September 18, 1892) was a British-American civil engineer, who invented the Francis turbine.
Early years.
James Francis was born in South Leigh, near Witney, Oxfordshire, in England, United Kingdom. He started his engineering career at the early age of seven, working as his father's apprentice at the Porth Cawl Railway and Harbor Works in South Wales. In 1833, at the age of 18, he emigrated to the United States. His first job was in Stonington, Connecticut, as an assistant to the railway engineer George Washington Whistler Jr., working on the New York and New Haven Railroad. A year later, James and his boss, Whistler, travelled north to Lowell, Massachusetts, where, at the age of 19, he got a draftsman job with the Locks and Canal Company, and Whistler became chief engineer.
A few years later, in 1837, Whistler resigned, and went to work on Russia's major railroads. Before departing, he appointed Francis chief engineer, and sold him his house on Worthen Street. In the same year, James married Sarah W. Brownell in Lowell on July 12. Their first son, James Jr., was born March 30, 1840, and then they had five more children.
In 1841 came his first major project for the company. Francis was to analyse how much water each mill factory was using from the company's channel system. Impressed by his abilities, the company named him "Manager of Locks and Canals" in 1845.
As manager and chief engineer, Francis was responsible for the construction of the Northern Canal and Moody Street Feeder. These two canals, built in the late 1840s and early 1850s, completed the 5.6 mile long Lowell canal system, and greatly increased the industrial power of the thriving industrial city's mill complexes.
During his work on the Lowell systems, Francis was also consulted on many other water projects nationwide. When New York needed to increase their water supply, he consulted on the construction of the Quaker Bridge Dam on the Croton River in New York. He also consulted on the dam at Saint Anthony Falls on the Mississippi River.
Their son James fought in the American Civil War as a second lieutenant in 1861. He was wounded in the hand at the Battle of Antietam as a captain, and finished the war in July 1865 as lieutenant colonel.
Francis was elected as a member of the American Philosophical Society in 1865.
Fire sprinkler system.
In 1845, Francis developed the first sprinkler systems ever devised in the United States. However, any use of the system would flood the entire structure and its contents. It was not until 1875 that Henry S. Parmelee invented a sprinkler head that activated only one head at a time.
Turbines.
Francis became fascinated with turbine designs and began tinkering with them after Uriah A. Boyden first demonstrated his Boyden turbine in Lowell. The two engineers worked on improving the turbine, and in 1848 Francis and Boyden arrived at what is now known as the Francis turbine. Francis's turbine eclipsed the Boyden turbine in power by 90%. In 1855, Francis published these findings in the "Lowell Hydraulic Experiments".
Flood control.
In 1850, Francis ordered the construction of the Great Gate over the Pawtucket Canal to protect the downtown mills from any devastating floods. This project quickly became known as "Francis's Folly", given that no one believed it would work, let alone ever be needed. But less than two years later, in 1852, the gates saved the city of Lowell from the devastating floods of 1852, and again in 1936, 1938, 2006, and 2007 by preventing the Merrimack River from entering the canal system. However, arson damage to the wooden gate in the 1970s, and the difficult method of dropping it (by breaking a large chain link) prompted the city to use a more modern steel-beam bulkhead in its place in 2006. For his efforts in saving the city from great disaster Francis was awarded a massive silver pitcher and a salver by the City of Lowell.
In 1886, Francis teamed up with two other civil engineers; Eliot C. Clarke and Clemens Herschel to study and publish their findings in the "Prevention of Floods in the Valley of Stony Brook" which laid out a flood prevention system for the city of Boston. The detailed study reviewed the Forest Hills, Hyde Park, Franklin park and Roslindale sections of the city that were subject to flooding.
Later years.
Francis stayed active at all levels of civic life in the city of Lowell, and served as an alderman from 1862 to 1864.
In 1865, Francis researched and published his findings on cast iron, and its use in structural columns in "The Strength of Cast-Iron Columns".
In 1874, Francis served on the American Society of Civil Engineers committee to investigate the cause of the breach of the Mill River dam in Massachusetts. Also on the committee were engineers Theodore Ellis and William Worthen. The investigation proceeded quickly and its report was published within a month of the disaster. It concluded that no engineer was responsible for the design and that it was the “work of non-professional persons”. “The remains of the dam indicate defects of workmanship of the grossest character.”
Francis originated scientific methods of testing hydraulic machinery, and was a founding member of the American Society of Civil Engineers and its president in 1880.
Francis formula.
In 1883, Francis completed his calculation standards for water flow rates, now known as the "Francis equation" or "Francis formula", usually used in fluid dynamics in conjunction with calculating weirs. The equation is
formula_0
where:
Q is the discharge in cubic feet per second over the weir,
L is the length of the weir in feet, and
h1 is the height of the water above the top of the weir.
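A minimal sketch of the formula in use is given below, with made-up illustrative inputs and the units stated above (feet and cubic feet per second).

```python
def francis_weir_discharge(length_ft, head_ft):
    """Discharge over a rectangular weir by the Francis formula,
    Q = 3.33 * h^(3/2) * (L - 0.2*h), with Q in cfs and L, h in feet."""
    return 3.33 * head_ft ** 1.5 * (length_ft - 0.2 * head_ft)

# Illustrative example: a 10 ft long weir carrying 1.5 ft of head over the crest.
print(francis_weir_discharge(10.0, 1.5))   # ~59.3 cubic feet per second
```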
He stayed with the Locks and Canal Company for his entire career, retiring in 1884 at the age of 69, and remained on as a consultant right up until his death. His son James took over as chief engineer. He spent the rest of his life with his wife Sarah and their six children in their home on Worthen Street, which was Whistler's old home and is now the Whistler House Museum of Art.
Francis was called to duty in 1889 as a member of an ASCE committee to examine the cause of the Johnstown Flood disaster when the South Fork Dam in Johnstown, Pennsylvania, broke, killing over 2,200 people. Although the report was completed by January 1890, it was immediately suppressed and not released to other ASCE members or the public until mid-1891, two years after the 1889 flood. Francis himself presented the results of the investigation at the ASCE convention in Chattanooga, Tennessee. The other three committee members did not attend. Although Francis’ name headed the report, the "de facto" chairman of the committee was Max Becker, a railroad engineer based in Pittsburgh with virtually no experience in dams or hydraulic engineering. According to Francis, it was Becker who delayed the release of the report. A detailed discussion of the South Fork investigation, the participating engineers, and the science behind the 1889 flood was published in 2018.
Francis died on September 18, 1892, at the age of 77, and is buried at Lowell Cemetery under a massive pillar of cut granite stones, symbolizing the stones used to make the canals.
Honors.
For all his contributions to the world of engineering and his personal generosity, the following are named in his honor:
Notable uses of the Francis turbine.
Today, the Francis turbine is the most widely used water turbine in the world, including in China's Three Gorges Dam, one of the world's largest hydroelectric power stations, with an installed capacity of 22,500 MW. With the incorporation of the Francis turbine into almost every hydroelectric dam built since 1900, it is responsible for generating almost one fifth of all the world's electricity.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "Q = 3.33 h_1^{3 /2} (L-0.2h_1)"
}
] | https://en.wikipedia.org/wiki?curid=834644 |
8349310 | Biangular coordinates | In mathematics, biangular coordinates are a coordinate system for the plane where formula_0 and formula_1 are two fixed points, and the position of a point "P" not on the line formula_2 is determined by the angles formula_3 and formula_4
The sine rule can be used to convert from biangular coordinates to two-center bipolar coordinates.
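A minimal sketch of that conversion is given below, assuming formula_0 is placed at the origin, formula_1 lies at distance "d" along the x-axis, and both angles are measured from the base segment toward "P"; the function names are illustrative.

```python
import math

def biangular_to_bipolar(alpha, beta, d=1.0):
    """Distances r1 = |P C1| and r2 = |P C2| from the two base angles,
    by the law of sines in triangle C1 C2 P (angles in radians)."""
    s = math.sin(alpha + beta)
    return d * math.sin(beta) / s, d * math.sin(alpha) / s

def biangular_to_cartesian(alpha, beta, d=1.0):
    """Cartesian coordinates of P with C1 = (0, 0) and C2 = (d, 0)."""
    r1, _ = biangular_to_bipolar(alpha, beta, d)
    return r1 * math.cos(alpha), r1 * math.sin(alpha)

# Example: both angles equal to 60 degrees place P at the apex of an
# equilateral triangle built on the base segment.
print(biangular_to_cartesian(math.radians(60), math.radians(60)))   # ~(0.5, 0.866)
```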
Applications.
Biangular coordinates can be used in geometric modelling and CAD.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_1"
},
{
"math_id": 1,
"text": "C_2"
},
{
"math_id": 2,
"text": "\\overline{C_1C_2}"
},
{
"math_id": 3,
"text": "\\angle PC_1C_2"
},
{
"math_id": 4,
"text": "\\angle PC_2C_1."
}
] | https://en.wikipedia.org/wiki?curid=8349310 |
835157 | Adhesion | Molecular property
<templatestyles src="Template:Quote_box/styles.css" />
IUPAC definition
Process of attachment of a substance to the surface of another substance.
"Note 1": Adhesion requires energy that can come from chemical and/or physicallinkages, the latter being reversible when enough energy is applied.
"Note 2": In biology, adhesion reflects the behavior of cells shortly after contactto the surface.
"Note 3": In surgery, adhesion is used when two tissues fuse unexpectedly.
Adhesion is the tendency of dissimilar particles or surfaces to cling to one another. (Cohesion refers to the tendency of similar or identical particles and surfaces to cling to one another.)
The forces that cause adhesion and cohesion can be divided into several types. The intermolecular forces responsible for the function of various kinds of stickers and sticky tape fall into the categories of chemical adhesion, dispersive adhesion, and diffusive adhesion. In addition to the cumulative magnitudes of these intermolecular forces, there are also certain emergent mechanical effects.
Surface energy.
Surface energy is conventionally defined as the work that is required to build an area of a particular surface. Another way to view the surface energy is to relate it to the work required to cleave a bulk sample, creating two surfaces. If the new surfaces are identical, the surface energy γ of each surface is equal to half the work of cleavage, W: γ = (1/2)W11.
If the surfaces are unequal, the Young-Dupré equation applies:
W12 = γ1 + γ2 – γ12, where γ1 and γ2 are the surface energies of the two new surfaces, and γ12 is the interfacial energy.
This methodology can also be used to discuss cleavage that happens in another medium: γ12 = (1/2)W121 = (1/2)W212. These two energy quantities refer to the energy that is needed to cleave one species into two pieces while it is contained in a medium of the other species. Likewise for a three species system: γ13 + γ23 – γ12 = W12 + W33 – W13 – W23 = W132, where W132 is the energy of cleaving species 1 from species 2 in a medium of species 3.
A basic understanding of the terminology of cleavage energy, surface energy, and surface tension is very helpful for understanding the physical state and the events that happen at a given surface, but as discussed below, the theory of these variables also yields some interesting effects that concern the practicality of adhesive surfaces in relation to their surroundings.
Mechanisms.
There is no single theory covering adhesion, and particular mechanisms are specific to particular material scenarios.
Five mechanisms of adhesion have been proposed to explain why one material sticks to another:
Mechanical.
Adhesive materials fill the voids or pores of the surfaces and hold surfaces together by interlocking. Other interlocking phenomena are observed on different length scales. Sewing is an example of two materials forming a large scale mechanical bond, velcro forms one on a medium scale, and some textile adhesives (glue) form one at a small scale.
Chemical.
Two materials may form a compound at the joint. The strongest joints are where atoms of the two materials share or swap electrons (known respectively as covalent bonding or ionic bonding). A weaker bond is formed if a hydrogen atom in one molecule is attracted to an atom of nitrogen, oxygen, or fluorine in another molecule, a phenomenon called hydrogen bonding.
Chemical adhesion occurs when the surface atoms of two separate surfaces form ionic, covalent, or hydrogen bonds. The engineering principle behind chemical adhesion in this sense is fairly straightforward: if surface molecules can bond, then the surfaces will be bonded together by a network of these bonds. It bears mentioning that these attractive ionic and covalent forces are effective over only very small distances – less than a nanometer. This means in general not only that surfaces with the potential for chemical bonding need to be brought very close together, but also that these bonds are fairly brittle, since the surfaces then need to be kept close together.
Dispersive.
In dispersive adhesion, also known as physisorption, two materials are held together by van der Waals forces: the attraction between two molecules, each of which has a region of slight positive and negative charge. In the simple case, such molecules are therefore polar with respect to average charge density, although in larger or more complex molecules, there may be multiple "poles" or regions of greater positive or negative charge. These positive and negative poles may be a permanent property of a molecule (Keesom forces) or a transient effect which can occur in any molecule, as the random movement of electrons within the molecules may result in a temporary concentration of electrons in one region (London forces).
In surface science, the term "adhesion" almost always refers to dispersive adhesion. In a typical solid-liquid-gas system (such as a drop of liquid on a solid surrounded by air) the contact angle is used to evaluate adhesiveness indirectly, while a Centrifugal Adhesion Balance allows for direct quantitative adhesion measurements. Generally, cases where the contact angle is low are considered of higher adhesion per unit area. This approach assumes that the lower contact angle corresponds to a higher surface energy. Theoretically, the more exact relation between contact angle and work of adhesion is more involved and is given by the Young-Dupre equation. The contact angle of the three-phase system is a function not only of dispersive adhesion (interaction between the molecules in the liquid and the molecules in the solid) but also cohesion (interaction between the liquid molecules themselves). Strong adhesion and weak cohesion results in a high degree of wetting, a lyophilic condition with low measured contact angles. Conversely, weak adhesion and strong cohesion results in lyophobic conditions with high measured contact angles and poor wetting.
London dispersion forces are particularly useful for the function of adhesive devices, because they do not require either surface to have any permanent polarity. They were described in the 1930s by Fritz London, and have been observed by many researchers. Dispersive forces are a consequence of statistical quantum mechanics. London theorized that attractive forces between molecules that cannot be explained by ionic or covalent interaction can be caused by polar moments within molecules. Multipoles could account for attraction between molecules having permanent multipole moments that participate in electrostatic interaction. However, experimental data showed that many of the compounds observed to experience van der Waals forces had no multipoles at all. London suggested that momentary dipoles are induced purely by virtue of molecules being in proximity to one another. By solving the quantum mechanical system of two electrons as harmonic oscillators at some finite distance from one another, being displaced about their respective rest positions and interacting with each other's fields, London showed that the energy of this system is given by:
formula_0
While the first term is simply the zero-point energy, the negative second term describes an attractive force between neighboring oscillators. The same argument can also be extended to a large number of coupled oscillators, and thus skirts issues that would negate the large scale attractive effects of permanent dipoles cancelling through symmetry, in particular.
The additive nature of the dispersion effect has another useful consequence. Consider a single such dispersive dipole, referred to as the origin dipole. Since any origin dipole is inherently oriented so as to be attracted to the adjacent dipoles it induces, while the other, more distant dipoles are not correlated with the original dipole by any phase relation (thus on average contributing nothing), there is a net attractive force in a bulk of such particles. When considering identical particles, this is called cohesive force.
When discussing adhesion, this theory needs to be converted into terms relating to surfaces. If there is a net attractive energy of cohesion in a bulk of similar molecules, then cleaving this bulk to produce two surfaces will yield surfaces with a dispersive surface energy, since the form of the energy remains the same. This theory provides a basis for the existence of van der Waals forces at the surface, which exist between any molecules having electrons. These forces are easily observed through the spontaneous jumping of smooth surfaces into contact. Smooth surfaces of mica, gold, various polymers and solid gelatin solutions do not stay apart when their separation becomes small enough – on the order of 1–10 nm. The equation describing these attractions was predicted in the 1930s by De Boer and Hamaker:
formula_1
where P is the force (negative for attraction), z is the separation distance, and A is a material-specific constant called the Hamaker constant.
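A rough numerical sketch of the magnitudes this expression predicts is given below, assuming an illustrative Hamaker constant of about 10⁻¹⁹ J (a typical order of magnitude, not a value taken from the text); it shows how steeply the attraction falls off with separation.

```python
import math

def vdw_pressure(hamaker_J, separation_m):
    """Attractive pressure between two flat surfaces,
    P/area = -A / (24 * pi * z^3), the form quoted above."""
    return -hamaker_J / (24 * math.pi * separation_m ** 3)

# Illustrative values: A ~ 1e-19 J, separations of 1 nm and 10 nm.
for z_nm in (1.0, 10.0):
    print(z_nm, "nm:", vdw_pressure(1e-19, z_nm * 1e-9), "Pa")
```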
The effect is also apparent in experiments where a polydimethylsiloxane (PDMS) stamp is made with small periodic post structures. The surface with the posts is placed face down on a smooth surface, such that the surface area in between each post is elevated above the smooth surface, like a roof supported by columns. Because of these attractive dispersive forces between the PDMS and the smooth substrate, the elevated surface – or “roof” – collapses down onto the substrate without any external force aside from the van der Waals attraction. Simple smooth polymer surfaces – without any microstructures – are commonly used for these dispersive adhesive properties. Decals and stickers that adhere to glass without using any chemical adhesives are fairly common as toys and decorations and useful as removable labels because they do not rapidly lose their adhesive properties, as do sticky tapes that use adhesive chemical compounds.
These forces also act over very small distances – 99% of the work necessary to break van der Waals bonds is done once surfaces are pulled more than a nanometer apart. As a result of this limited motion in both the van der Waals and ionic/covalent bonding situations, practical effectiveness of adhesion due to either or both of these interactions leaves much to be desired. Once a crack is initiated, it propagates easily along the interface because of the brittle nature of the interfacial bonds.
As an additional consequence, increasing surface area often does little to enhance the strength of the adhesion in this situation. This follows from the aforementioned crack failure – the stress at the interface is not uniformly distributed, but rather concentrated at the area of failure.
Electrostatic.
Some conducting materials may pass electrons to form a difference in electrical charge at the joint. This results in a structure similar to a capacitor and creates an attractive electrostatic force between the materials.
Diffusive.
Some materials may merge at the joint by diffusion. This may occur when the molecules of both materials are mobile and soluble in each other. This would be particularly effective with polymer chains where one end of the molecule diffuses into the other material. It is also the mechanism involved in sintering. When metal or ceramic powders are pressed together and heated, atoms diffuse from one particle to the next. This joins the particles into one.
Diffusive forces are somewhat like mechanical tethering at the molecular level. Diffusive bonding occurs when species from one surface penetrate into an adjacent surface while still being bound to the phase of their surface of origin. One instructive example is that of polymer-on-polymer surfaces. Diffusive bonding in polymer-on-polymer surfaces is the result of sections of polymer chains from one surface interdigitating with those of an adjacent surface. The freedom of movement of the polymers has a strong effect on their ability to interdigitate, and hence, on diffusive bonding. For example, cross-linked polymers are less capable of diffusion and interdigitation because they are bonded together at many points of contact, and are not free to twist into the adjacent surface. Uncrosslinked polymers (thermoplastics), on the other hand are freer to wander into the adjacent phase by extending tails and loops across the interface.
Another circumstance under which diffusive bonding occurs is “scission”. Chain scission is the cutting up of polymer chains, resulting in a higher concentration of distal tails. The heightened concentration of these chain ends gives rise to a heightened concentration of polymer tails extending across the interface. Scission is easily achieved by ultraviolet irradiation in the presence of oxygen gas, which suggests that adhesive devices employing diffusive bonding actually benefit from prolonged exposure to heat/light and air. The longer such a device is exposed to these conditions, the more tails are scissed and branch out across the interface.
Once across the interface, the tails and loops form whatever bonds are favorable. In the case of polymer-on-polymer surfaces, this means more van der Waals forces. While these may be brittle, they are quite strong when a large network of these bonds is formed. The outermost layer of each surface plays a crucial role in the adhesive properties of such interfaces, as even a tiny amount of interdigitation – as little as one or two tails of 1.25 angstrom length – can increase the van der Waals bonds by an order of magnitude.
Strength.
The strength of the adhesion between two materials depends on which of the above mechanisms occur between the two materials, and the surface area over which the two materials contact. Materials that wet against each other tend to have a larger contact area than those that do not. Wetting depends on the surface energy of the materials.
Low surface energy materials such as polyethylene, polypropylene, polytetrafluoroethylene and polyoxymethylene are difficult to bond without special surface preparation.
Another factor determining the strength of an adhesive contact is its shape. Adhesive contacts of complex shape begin to detach at the "edges" of the contact area. The process of destruction of adhesive contacts can be seen in the film.
Other effects.
In concert with the primary surface forces described above, there are several circumstantial effects in play. While the forces themselves each contribute to the magnitude of the adhesion between the surfaces, the following play a crucial role in the overall strength and reliability of an adhesive device.
Stringing.
Stringing is perhaps the most crucial of these effects, and is often seen on adhesive tapes. Stringing occurs when a separation of two surfaces is beginning and molecules at the interface bridge out across the gap, rather than cracking like the interface itself. The most significant consequence of this effect is the restraint of the crack. By providing the otherwise brittle interfacial bonds with some flexibility, the molecules that are stringing across the gap can stop the crack from propagating. Another way to understand this phenomenon is by comparing it to the stress concentration at the point of failure mentioned earlier. Since the stress is now spread out over some area, the stress at any given point has less of a chance of overwhelming the total adhesive force between the surfaces. If failure does occur at an interface containing a viscoelastic adhesive agent, and a crack does propagate, it happens by a gradual process called “fingering”, rather than a rapid, brittle fracture.
Stringing can apply to both the diffusive bonding regime and the chemical bonding regime. The strings of molecules bridging across the gap would either be the molecules that had earlier diffused across the interface or the viscoelastic adhesive, provided that there was a significant volume of it at the interface.
Microstructures.
The interplay of molecular scale mechanisms and hierarchical surface structures is known to result in high levels of static friction and bonding between pairs of surfaces. Technologically advanced adhesive devices sometimes make use of microstructures on surfaces, such as tightly packed periodic posts. These are biomimetic technologies inspired by the adhesive abilities of the feet of various arthropods and vertebrates (most notably, geckos). By intermixing periodic breaks into smooth, adhesive surfaces, the interface acquires valuable crack-arresting properties. Because crack initiation requires much greater stress than does crack propagation, surfaces like these are much harder to separate, as a new crack has to be restarted every time the next individual microstructure is reached.
Hysteresis.
Hysteresis, in this case, refers to the restructuring of the adhesive interface over some period of time, with the result being that the work needed to separate two surfaces is greater than the work that was gained by bringing them together (W > γ1 + γ2). For the most part, this is a phenomenon associated with diffusive bonding. The more time is given for a pair of surfaces exhibiting diffusive bonding to restructure, the more diffusion will occur, the stronger the adhesion will become. The aforementioned reaction of certain polymer-on-polymer surfaces to ultraviolet radiation and oxygen gas is an instance of hysteresis, but it will also happen over time without those factors.
In addition to being able to observe hysteresis by determining if W > γ1 + γ2 is true, one can also find evidence of it by performing “stop-start” measurements. In these experiments, two surfaces are slid against one another continuously and occasionally stopped for some measured amount of time. Results from experiments on polymer-on-polymer surfaces show that if the stopping time is short enough, resumption of smooth sliding is easy. If, however, the stopping time exceeds some limit, there is an initial increase of resistance to motion, indicating that the stopping time was sufficient for the surfaces to restructure.
Wettability and absorption.
Some atmospheric effects on the functionality of adhesive devices can be characterized by following the theory of surface energy and interfacial tension. It is known that γ12 = (1/2)W121 = (1/2)W212. If γ12 is high, then each species finds it favorable to cohere while in contact with a foreign species, rather than dissociate and mix with the other. If this is true, then it follows that when the interfacial tension is high, the force of adhesion is weak, since each species does not find it favorable to bond to the other. The interfacial tension of a liquid and a solid is directly related to the liquid's wettability (relative to the solid), and thus one can extrapolate that cohesion increases in non-wetting liquids and decreases in wetting liquids. One example that verifies this is polydimethyl siloxane rubber, which has a work of self-adhesion of 43.6 mJ/m2 in air, 74 mJ/m2 in water (a nonwetting liquid) and 6 mJ/m2 in methanol (a wetting liquid).
This argument can be extended to the idea that when a surface is in a medium with which binding is favorable, it will be less likely to adhere to another surface, since the medium is taking up the potential sites on the surface that would otherwise be available to adhere to another surface. Naturally this applies very strongly to wetting liquids, but also to gas molecules that could adsorb onto the surface in question, thereby occupying potential adhesion sites. This last point is actually fairly intuitive: Leaving an adhesive exposed to air too long gets it dirty, and its adhesive strength will decrease. This is observed in the experiment: when mica is cleaved in air, its cleavage energy, W121 or Wmica/air/mica, is smaller than the cleavage energy in vacuum, Wmica/vac/mica, by a factor of 13.
Lateral adhesion.
Lateral adhesion is the adhesion associated with sliding one object on a substrate such as sliding a drop on a surface. When the two objects are solids, either with or without a liquid between them, the lateral adhesion is described as "friction". However, the behavior of lateral adhesion between a drop and a surface is tribologically very different from friction between solids, and the naturally adhesive contact between a flat surface and a liquid drop makes the lateral adhesion in this case, an individual field. Lateral adhesion can be measured using the centrifugal adhesion balance (CAB), which uses a combination of centrifugal and gravitational forces to decouple the normal and lateral forces in the problem.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = 3 h \\nu - \\frac{3}{4} \\frac{h \\nu \\alpha^2}{R^6}"
},
{
"math_id": 1,
"text": "\\frac{P}{area} = -\\frac{A}{24 \\pi z^3}"
}
] | https://en.wikipedia.org/wiki?curid=835157 |
8354104 | Vergence (optics) | Angle between converging or diverging light rays
In optics, vergence is the angle formed by rays of light that are not perfectly parallel to one another. Rays that move closer to the optical axis as they propagate are said to be "converging", while rays that move away from the axis are "diverging". These imaginary rays are always perpendicular to the wavefront of the light, thus the vergence of the light is directly related to the radii of curvature of the wavefronts. A convex lens or concave mirror will cause parallel rays to focus, converging toward a point. Beyond that focal point, the rays diverge. Conversely, a concave lens or convex mirror will cause parallel rays to diverge.
Light does not actually consist of imaginary rays and light sources are not single-point sources, thus vergence is typically limited to simple ray modeling of optical systems. In a real system, the vergence is a product of the diameter of a light source, its distance from the optics, and the curvature of the optical surfaces. An increase in curvature causes an increase in vergence and a decrease in focal length, and the image or spot size (waist diameter) will be smaller. Likewise, a decrease in curvature decreases vergence, resulting in a longer focal length and an increase in image or spot diameter. This reciprocal relationship between vergence, focal length, and waist diameter is constant throughout an optical system, and is referred to as the optical invariant. A beam that is expanded to a larger diameter will have a lower degree of divergence, but if condensed to a smaller diameter the divergence will be greater.
The simple ray model fails for some situations, such as for laser light, where Gaussian beam analysis must be used instead.
Definition.
In geometrical optics, vergence describes the curvature of optical wavefronts. Vergence is defined as
formula_0
where "n" is the medium's refractive index and "r" is the distance from the point source to the wavefront. Vergence is measured in units of dioptres (D) which are equivalent to m−1. This describes the vergence in terms of optical power. For optics like convex lenses, the converging point of the light exiting the lens is on the input side of the focal plane, and is positive in optical power. For concave lenses, the focal point is on the back side of the lens, or the output side of the focal plane, and is negative in power. A lens with no optical power is called an optical window, having flat, parallel faces. The optical power directly relates to how large positive images will be magnified, and how small negative images will be diminished.
All light sources produce some degree of divergence, as the waves exiting these sources always have some degree of curvature. At the proper distance, these waves can be straightened by use of a lens or mirror, creating collimated beams with minimal divergence, but some degree of divergence will remain, depending on the diameter of the beam versus the focal length. When the distance between the point source and wavefront becomes very large, the vergence goes to zero, meaning that the wavefronts are planar and no longer have any detectable curvature. Light from distant stars has such a large wavefront radius that any curvature of the wavefronts is undetectable, and it has no vergence.
The light can also be pictured as consisting of a bundle of lines radiating in the direction of propagation, which are always perpendicular to the wavefront, called "rays". These imaginary lines of infinitely small thicknesses are separated by only the angle between them. In ray tracing, the vergence can then be pictured as the angle between any two rays. For imaging or beams, the vergence is often described as the angle between the outermost rays in the bundle (marginal rays), at the edge (verge) of a cone of light, and the optical axis. This slope is typically measured in radians. Thus, in this case the convergence of the rays transmitted by a lens is equal to the radius of the light source divided by its distance from the optics. This limits the size of an image or the minimum spot diameter that can be produced by any focusing optics, which is determined by the reciprocal of that equation; the divergence of the light source multiplied by the distance. This relationship between vergence, focal length, and the minimum spot diameter (also called the "waist diameter") remains constant through all space, and is commonly referred to as the optical invariant.
This angular relationship becomes especially important with laser operations such as laser cutting or laser welding, as there is always a trade off between spot diameter, which affects the intensity of the energy, and the distance to the object. When low divergence in the beam is desired, then a larger diameter beam is necessary, but if a smaller beam is needed one must settle for greater divergence, and no change in the position of the lens will alter this. The only way to achieve a smaller spot is to use a lens with a shorter focal-length, or expand the beam to a larger diameter.
However, this measure of the curvature of wavefronts is only fully valid in geometrical optics, not in Gaussian beam optics or in wave optics, where the wavefront at the focus is wavelength-dependent and the curvature is not proportional to the distance from the focus. In this case, the diffraction of the light starts to play a very active role, often limiting the spot size to even larger diameters, especially in the far field. For non-circular light sources, the divergence may differ depending on the cross-sectional position of the rays from the optical axis. Diode lasers, for example, have greater divergence across the parallel direction (fast axis) than the perpendicular (slow axis), producing beams with rectangular profiles. This type of difference in divergence can be reduced by beam-shaping methods, such as using a rod lens that only affects divergence along a single cross-sectional direction.
Convergence, divergence, and sign convention.
Wavefronts propagating toward a single point yield positive vergence. This is also referred to as "convergence", since the wavefronts are all converging to the same point of focus. Conversely, wavefronts propagating away from a single source point have negative vergence. Negative vergence is also called "divergence".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = \\frac{n}{r},"
}
] | https://en.wikipedia.org/wiki?curid=8354104 |
835827 | Empty string | Unique string of length zero
In formal language theory, the empty string, or empty word, is the unique string of length zero.
Formal theory.
Formally, a string is a finite, ordered sequence of characters such as letters, digits or spaces. The empty string is the special case where the sequence has length zero, so there are no symbols in the string.
There is only one empty string, because two strings are only different if they have different lengths or a different sequence of symbols.
In formal treatments, the empty string is denoted with ε or sometimes Λ or λ.
The empty string should not be confused with the empty language ∅, which is a formal language (i.e. a set of strings) that contains no strings, not even the empty string.
The empty string has several properties:
Its string length is zero.
Concatenating the empty string with any string leaves that string unchanged.
Reversing the empty string yields the empty string.
A statement about all characters of the empty string, such as formula_0, is vacuously true, since the empty string has no characters.
In context-free grammars, a production rule that allows a symbol to produce the empty string is known as an ε-production, and the symbol is said to be "nullable".
Use in programming languages.
In most programming languages, strings are a data type. Strings are typically stored at distinct memory addresses (locations). Thus, the same string (e.g.: the empty string) may be stored in two or more places in memory.
In this way, there could be multiple empty strings in memory, in contrast with the formal theory definition, for which there is only one possible empty string. However, a string comparison function would indicate that all of these empty strings are equal to each other.
Even a string of length zero can require memory to store it, depending on the format being used. In most programming languages, the empty string is distinct from a null reference (or null pointer) because a null reference points to no string at all, not even the empty string.
The empty string is a legitimate string, upon which most string operations should work. Some languages treat some or all of the following in similar ways: empty strings, null references, the integer 0, the floating point number 0, the Boolean value false, the ASCII character NUL, or other such values.
The empty string is usually represented similarly to other strings. In implementations with string terminating character (null-terminated strings or plain text lines), the empty string is indicated by the immediate use of this terminating character.
Different functions, methods, macros, or idioms exist for checking if a string is empty in different languages.
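For instance, in Python (one illustrative language among many), the following idioms all test for the empty string, and the empty string remains distinct from a null reference:

```python
s = ""

# Common, equivalent ways to test for the empty string in Python.
print(len(s) == 0)   # True
print(s == "")       # True
print(not s)         # True (the empty string is "falsy")

# The empty string is not the same thing as a null reference.
t = None
print(t is None, t == "")   # True False
```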
Examples of empty strings.
The empty string is a syntactically valid representation of zero in positional notation (in any base), which does not contain leading zeros. Since the empty string does not have a standard visual representation outside of formal language theory, the number zero is traditionally represented by a single decimal digit 0 instead.
Zero-filled memory area, interpreted as a null-terminated string, is an empty string.
Empty lines of text show the empty string. This can occur from two consecutive EOLs, as often occur in text files, and this is sometimes used in text processing to separate paragraphs, e.g. in MediaWiki.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall c \\in s: P(c)"
}
] | https://en.wikipedia.org/wiki?curid=835827 |
8361 | Definable real number | Real number uniquely specified by description
Informally, a definable real number is a real number that can be uniquely specified by its description. The description may be expressed as a construction or as a formula of a formal language. For example, the positive square root of 2, formula_0, can be defined as the unique positive solution to the equation formula_1, and it can be constructed with a compass and straightedge.
Different choices of a formal language or its interpretation give rise to different notions of definability. Specific varieties of definable numbers include the constructible numbers of geometry, the algebraic numbers, and the computable numbers. Because formal languages can have only countably many formulas, every notion of definable numbers has at most countably many definable real numbers. However, by Cantor's diagonal argument, there are uncountably many real numbers, so almost every real number is undefinable.
Constructible numbers.
One way of specifying a real number uses geometric techniques. A real number formula_2 is a constructible number if there is a method to construct a line segment of length formula_2 using a compass and straightedge, beginning with a fixed line segment of length 1.
Each positive integer, and each positive rational number, is constructible. The positive square root of 2 is constructible. However, the cube root of 2 is not constructible; this is related to the impossibility of doubling the cube.
Real algebraic numbers.
A real number formula_2 is called a real algebraic number if there is a polynomial formula_3, with only integer coefficients, so that formula_2 is a root of formula_4, that is, formula_5.
Each real algebraic number can be defined individually using the order relation on the reals. For example, if a polynomial formula_6 has 5 real roots, the third one can be defined as the unique formula_2 such that formula_7 and such that there are two distinct numbers less than formula_2 at which formula_8 is zero.
All rational numbers are constructible, and all constructible numbers are algebraic. There are numbers such as the cube root of 2 which are algebraic but not constructible.
The real algebraic numbers form a subfield of the real numbers. This means that 0 and 1 are algebraic numbers and, moreover, if formula_9 and formula_10 are algebraic numbers, then so are formula_11, formula_12, formula_13 and, if formula_10 is nonzero, formula_14.
The real algebraic numbers also have the property, which goes beyond being a subfield of the reals, that for each positive integer formula_15 and each real algebraic number formula_9, all of the formula_15th roots of formula_9 that are real numbers are also algebraic.
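As an illustration of this closure, a computer algebra system can exhibit an integer polynomial having a given combination of algebraic numbers as a root; the sketch below assumes SymPy and its minimal_polynomial helper are available.

```python
from sympy import Rational, minimal_polynomial, sqrt, symbols

x = symbols("x")

# sqrt(2) + 2**(1/3) is algebraic: SymPy finds an integer polynomial
# of degree 6 having it as a root.
a = sqrt(2) + 2 ** Rational(1, 3)
print(minimal_polynomial(a, x))   # x**6 - 6*x**4 - 4*x**3 + 12*x**2 - 24*x - 4
```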
There are only countably many algebraic numbers, but there are uncountably many real numbers, so in the sense of cardinality most real numbers are not algebraic. This nonconstructive proof that not all real numbers are algebraic was first published by Georg Cantor in his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers".
Non-algebraic numbers are called transcendental numbers. The best known transcendental numbers are π and e.
Computable real numbers.
A real number is a computable number if there is an algorithm that, given a natural number formula_15, produces a decimal expansion for the number accurate to formula_15 decimal places. This notion was introduced by Alan Turing in 1936.
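For example, the positive square root of 2 is computable in exactly this sense: a short program can produce its decimal expansion to any requested number of places using only integer arithmetic, as in the minimal sketch below (Python 3.8+ for math.isqrt).

```python
from math import isqrt

def sqrt2_to_n_places(n):
    """Decimal expansion of sqrt(2), truncated to n decimal places,
    computed with exact integer arithmetic only."""
    digits = str(isqrt(2 * 10 ** (2 * n)))   # floor(sqrt(2) * 10**n)
    return digits if n == 0 else digits[0] + "." + digits[1:]

print(sqrt2_to_n_places(30))   # 1.414213562373095048801688724209
```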
The computable numbers include the algebraic numbers along with many transcendental numbers including formula_16 and formula_17. Like the algebraic numbers, the computable numbers also form a subfield of the real numbers, and the positive computable numbers are closed under taking formula_15th roots for each positive formula_15.
Not all real numbers are computable. Specific examples of noncomputable real numbers include the limits of Specker sequences, and algorithmically random real numbers such as Chaitin's Ω numbers.
Definability in arithmetic.
Another notion of definability comes from the formal theories of arithmetic, such as Peano arithmetic. The language of arithmetic has symbols for 0, 1, the successor operation, addition, and multiplication, intended to be interpreted in the usual way over the natural numbers. Because no variables of this language range over the real numbers, a different sort of definability is needed to refer to real numbers. A real number formula_9 is "definable in the language of arithmetic" (or "arithmetical") if its Dedekind cut can be defined as a predicate in that language; that is, if there is a first-order formula formula_18 in the language of arithmetic, with three free variables, such that
formula_19
Here "m", "n", and "p" range over nonnegative integers.
The second-order language of arithmetic is the same as the first-order language, except that variables and quantifiers are allowed to range over sets of naturals. A real that is second-order definable in the language of arithmetic is called "analytical".
Every computable real number is arithmetical, and the arithmetical numbers form a subfield of the reals, as do the analytical numbers. Every arithmetical number is analytical, but not every analytical number is arithmetical. Because there are only countably many analytical numbers, most real numbers are not analytical, and thus also not arithmetical.
Not every arithmetical number is computable, however. For example, the limit of a Specker sequence is an arithmetical number that is not computable.
The definitions of arithmetical and analytical reals can be stratified into the arithmetical hierarchy and analytical hierarchy. In general, a real is computable if and only if its Dedekind cut is at level formula_20 of the arithmetical hierarchy, one of the lowest levels. Similarly, the reals with arithmetical Dedekind cuts form the lowest level of the analytical hierarchy.
Definability in models of ZFC.
A real number formula_9 is first-order definable in the language of set theory, without parameters, if there is a formula formula_18 in the language of set theory, with one free variable, such that formula_9 is the unique real number such that formula_21 holds. This notion cannot be expressed as a formula in the language of set theory.
All analytical numbers, and in particular all computable numbers, are definable in the language of set theory. Thus the real numbers definable in the language of set theory include all familiar real numbers such as 0, 1, formula_16, formula_17, et cetera, along with all algebraic numbers. Assuming that they form a set in the model, the real numbers definable in the language of set theory over a particular model of ZFC form a field.
Each set model formula_22 of ZFC set theory that contains uncountably many real numbers must contain real numbers that are not definable within formula_22 (without parameters). This follows from the fact that there are only countably many formulas, and so only countably many elements of formula_22 can be definable over formula_22. Thus, if formula_22 has uncountably many real numbers, one can prove from "outside" formula_22 that not every real number of formula_22 is definable over formula_22.
This argument becomes more problematic if it is applied to class models of ZFC, such as the von Neumann universe. The assertion "the real number formula_23 is definable over the "class" model formula_24" cannot be expressed as a formula of ZFC. Similarly, the question of whether the von Neumann universe contains real numbers that it cannot define cannot be expressed as a sentence in the language of ZFC. Moreover, there are countable models of ZFC in which all real numbers, all sets of real numbers, functions on the reals, etc. are definable.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2}"
},
{
"math_id": 1,
"text": "x^2 = 2"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "p(x)"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "p(r)=0"
},
{
"math_id": 6,
"text": "q(x)"
},
{
"math_id": 7,
"text": "q(r)=0"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "b"
},
{
"math_id": 11,
"text": "a+b"
},
{
"math_id": 12,
"text": "a-b"
},
{
"math_id": 13,
"text": "ab"
},
{
"math_id": 14,
"text": "a/b"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "\\pi"
},
{
"math_id": 17,
"text": "e"
},
{
"math_id": 18,
"text": "\\varphi"
},
{
"math_id": 19,
"text": "\\forall m \\, \\forall n \\, \\forall p \\left (\\varphi(n,m,p)\\iff\\frac{(-1)^p\\cdot n}{m+1}<a \\right )."
},
{
"math_id": 20,
"text": "\\Delta^0_1"
},
{
"math_id": 21,
"text": "\\varphi(a)"
},
{
"math_id": 22,
"text": "M"
},
{
"math_id": 23,
"text": "x"
},
{
"math_id": 24,
"text": "N"
}
] | https://en.wikipedia.org/wiki?curid=8361 |
8364462 | Energetic space | In mathematics, more precisely in functional analysis, an energetic space is, intuitively, a subspace of a given real Hilbert space equipped with a new "energetic" inner product. The motivation for the name comes from physics, as in many physical problems the energy of a system can be expressed in terms of the energetic inner product. An example of this will be given later in the article.
Energetic space.
Formally, consider a real Hilbert space formula_0 with the inner product formula_1 and the norm formula_2. Let formula_3 be a linear subspace of formula_0 and formula_4 be a strongly monotone symmetric linear operator, that is, a linear operator satisfying
formula_5 for all formula_6 in formula_3, and
formula_7 for some constant formula_8 and for all formula_9 in formula_10
The energetic inner product is defined as
formula_11 for all formula_12 in formula_3
and the energetic norm is
formula_13 for all formula_9 in formula_10
The set formula_3 together with the energetic inner product is a pre-Hilbert space. The energetic space formula_14 is defined as the completion of formula_3 in the energetic norm. formula_14 can be considered a subset of the original Hilbert space formula_15 since any Cauchy sequence in the energetic norm is also Cauchy in the norm of formula_0 (this follows from the strong monotonicity property of formula_16).
The energetic inner product is extended from formula_3 to formula_14 by
formula_17
where formula_18 and formula_19 are sequences in "Y" that converge in the energetic norm to formula_9 and formula_29, respectively.
Energetic extension.
The operator formula_16 admits an energetic extension formula_20
formula_21
defined on formula_14 with values in the dual space formula_22 that is given by the formula
formula_23 for all formula_12 in formula_24
Here, formula_25 denotes the duality bracket between formula_22 and formula_26 so formula_27 actually denotes formula_28
If formula_9 and formula_29 are elements in the original subspace formula_30 then
formula_31
by the definition of the energetic inner product. If one views formula_32 which is an element in formula_15 as an element in the dual formula_33 via the Riesz representation theorem, then formula_34 will also be in the dual formula_35 (by the strong monotonicity property of formula_16). Via these identifications, it follows from the above formula that formula_36 In different words, the original operator formula_4 can be viewed as an operator formula_37 and then formula_21 is simply the function extension of formula_16 from formula_3 to formula_24
An example from physics.
Consider a string whose endpoints are fixed at two points formula_38 on the real line (here viewed as a horizontal line). Let the vertical outer force density at each point formula_39 formula_40 on the string be formula_41, where formula_42 is a unit vector pointing vertically and formula_43 Let formula_44 be the deflection of the string at the point formula_39 under the influence of the force. Assuming that the deflection is small, the elastic energy of the string is
formula_45
and the total potential energy of the string is
formula_46
The deflection formula_44 minimizing the potential energy will satisfy the differential equation
formula_47
with boundary conditions
formula_48
To study this equation, consider the space formula_49 that is, the Lp space of all square-integrable functions formula_50 with respect to the Lebesgue measure. This space is a Hilbert space with respect to the inner product
formula_51
with the norm being given by
formula_52
Let formula_3 be the set of all twice continuously differentiable functions formula_50 with the boundary conditions formula_53 Then formula_3 is a linear subspace of formula_54
Consider the operator formula_4 given by the formula
formula_55
so the deflection satisfies the equation formula_56 Using integration by parts and the boundary conditions, one can see that
formula_57
for any formula_9 and formula_29 in formula_10 Therefore, formula_16 is a symmetric linear operator.
formula_16 is also strongly monotone, since, by Friedrichs's inequality,
formula_58
for some formula_59
The energetic space with respect to the operator formula_16 is then the Sobolev space formula_60 We see that the elastic energy of the string, which motivated this study, is
formula_61
so it is half of the energetic inner product of formula_9 with itself.
To calculate the deflection formula_9 minimizing the total potential energy formula_62 of the string, one writes this problem in the form
formula_63 for all formula_29 in formula_14.
Next, one usually approximates formula_9 by some formula_64, a function in a finite-dimensional subspace of the true solution space. For example, one might let formula_64 be a continuous piecewise linear function in the energetic space, which gives the finite element method. The approximation formula_64 can be computed by solving a system of linear equations.
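A minimal numerical sketch of this procedure is given below, assuming a uniform mesh on (0, 1), continuous piecewise linear basis functions, and the hypothetical load f(x) = 1; the mesh size and all variable names are illustrative choices, not part of the theory above.
```python
# Sketch of the finite element method for -u'' = f on (a, b) with
# u(a) = u(b) = 0, using continuous piecewise linear ("hat") functions.
import numpy as np

a, b, m = 0.0, 1.0, 50                    # interval and number of interior nodes
h = (b - a) / (m + 1)
x = np.linspace(a, b, m + 2)[1:-1]        # interior nodes

# Stiffness matrix: entries (phi_i | phi_j)_E of the energetic inner product.
K = (np.diag(2.0 * np.ones(m))
     - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h

f = np.ones(m)                            # hypothetical load f(x) = 1
F = h * f                                 # load vector (f | phi_i)

u_h = np.linalg.solve(K, F)               # nodal values of the approximation
# For f = 1 the exact deflection is u(x) = x(1 - x)/2; compare at the nodes:
print(np.max(np.abs(u_h - x * (1 - x) / 2)))
```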
The energetic norm turns out to be the natural norm in which to measure the error between formula_9 and formula_64, see Céa's lemma. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "(\\cdot|\\cdot)"
},
{
"math_id": 2,
"text": "\\|\\cdot\\|"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "B:Y\\to X"
},
{
"math_id": 5,
"text": "(Bu|v)=(u|Bv)\\, "
},
{
"math_id": 6,
"text": "u, v"
},
{
"math_id": 7,
"text": "(Bu|u) \\ge c\\|u\\|^2"
},
{
"math_id": 8,
"text": "c>0"
},
{
"math_id": 9,
"text": "u"
},
{
"math_id": 10,
"text": "Y."
},
{
"math_id": 11,
"text": "(u|v)_E =(Bu|v)\\,"
},
{
"math_id": 12,
"text": "u,v"
},
{
"math_id": 13,
"text": "\\|u\\|_E=(u|u)^\\frac{1}{2}_E \\, "
},
{
"math_id": 14,
"text": "X_E"
},
{
"math_id": 15,
"text": "X,"
},
{
"math_id": 16,
"text": "B"
},
{
"math_id": 17,
"text": " (u|v)_E = \\lim_{n\\to\\infty} (u_n|v_n)_E"
},
{
"math_id": 18,
"text": "(u_n)"
},
{
"math_id": 19,
"text": "(v_n)"
},
{
"math_id": 20,
"text": "B_E"
},
{
"math_id": 21,
"text": "B_E:X_E\\to X^*_E"
},
{
"math_id": 22,
"text": "X^*_E"
},
{
"math_id": 23,
"text": "\\langle B_E u | v \\rangle_E = (u|v)_E"
},
{
"math_id": 24,
"text": "X_E."
},
{
"math_id": 25,
"text": "\\langle \\cdot |\\cdot \\rangle_E"
},
{
"math_id": 26,
"text": "X_E,"
},
{
"math_id": 27,
"text": "\\langle B_E u | v \\rangle_E"
},
{
"math_id": 28,
"text": "(B_E u)(v)."
},
{
"math_id": 29,
"text": "v"
},
{
"math_id": 30,
"text": "Y,"
},
{
"math_id": 31,
"text": "\\langle B_E u | v \\rangle_E = (u|v)_E = (Bu|v) = \\langle u|B|v\\rangle"
},
{
"math_id": 32,
"text": "Bu,"
},
{
"math_id": 33,
"text": "X^*"
},
{
"math_id": 34,
"text": "Bu"
},
{
"math_id": 35,
"text": "X_E^*"
},
{
"math_id": 36,
"text": "B_E u= Bu."
},
{
"math_id": 37,
"text": "B:Y\\to X_E^*,"
},
{
"math_id": 38,
"text": "a<b"
},
{
"math_id": 39,
"text": "x"
},
{
"math_id": 40,
"text": "(a\\le x \\le b)"
},
{
"math_id": 41,
"text": "f(x)\\mathbf{e}"
},
{
"math_id": 42,
"text": "\\mathbf{e}"
},
{
"math_id": 43,
"text": "f:[a, b]\\to \\mathbb R."
},
{
"math_id": 44,
"text": "u(x)"
},
{
"math_id": 45,
"text": "\\frac{1}{2} \\int_a^b\\! u'(x)^2\\, dx"
},
{
"math_id": 46,
"text": "F(u) = \\frac{1}{2} \\int_a^b\\! u'(x)^2\\,dx - \\int_a^b\\! u(x)f(x)\\,dx."
},
{
"math_id": 47,
"text": "-u''=f\\,"
},
{
"math_id": 48,
"text": "u(a)=u(b)=0.\\,"
},
{
"math_id": 49,
"text": "X=L^2(a, b), "
},
{
"math_id": 50,
"text": "u:[a, b]\\to \\mathbb R"
},
{
"math_id": 51,
"text": "(u|v)=\\int_a^b\\! u(x)v(x)\\,dx,"
},
{
"math_id": 52,
"text": "\\|u\\|=\\sqrt{(u|u)}."
},
{
"math_id": 53,
"text": "u(a)=u(b)=0."
},
{
"math_id": 54,
"text": "X."
},
{
"math_id": 55,
"text": "Bu = -u'',\\,"
},
{
"math_id": 56,
"text": "Bu=f."
},
{
"math_id": 57,
"text": "(Bu|v)=-\\int_a^b\\! u''(x)v(x)\\, dx=\\int_a^b u'(x)v'(x) = (u|Bv) "
},
{
"math_id": 58,
"text": "\\|u\\|^2 = \\int_a^b u^2(x)\\, dx \\le C \\int_a^b u'(x)^2\\, dx = C\\,(Bu|u)"
},
{
"math_id": 59,
"text": "C>0."
},
{
"math_id": 60,
"text": "H^1_0(a, b)."
},
{
"math_id": 61,
"text": "\\frac{1}{2} \\int_a^b\\! u'(x)^2\\, dx = \\frac{1}{2} (u|u)_E,"
},
{
"math_id": 62,
"text": "F(u)"
},
{
"math_id": 63,
"text": "(u|v)_E=(f|v)\\,"
},
{
"math_id": 64,
"text": "u_h"
}
] | https://en.wikipedia.org/wiki?curid=8364462 |
8364800 | Strongly monotone operator | Math concept
In functional analysis, a set-valued mapping formula_0 where "X" is a real Hilbert space is said to be strongly monotone if
formula_1
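The following is a quick numerical sanity check, not a proof; the set-valued map is specialised to the single-valued linear case given by a symmetric positive-definite matrix, and the matrix, the constant (taken as its smallest eigenvalue), and the number of random samples are arbitrary example choices.
```python
# Sketch: numerically checking the strong monotonicity inequality for the
# single-valued linear map A(x) = M x with M symmetric positive definite.
import numpy as np

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # symmetric positive definite (example)
c = np.linalg.eigvalsh(M).min()            # a valid strong-monotonicity constant

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = (M @ x - M @ y) @ (x - y)        # <Ax - Ay, x - y>
    assert lhs >= c * np.linalg.norm(x - y) ** 2 - 1e-12
print("inequality holds on all samples; c =", c)
```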
This is analogous to the notion of strictly increasing for scalar-valued functions of one scalar argument. | [
{
"math_id": 0,
"text": "A:X\\to 2^X"
},
{
"math_id": 1,
"text": "\\exists\\,c>0 \\mbox{ s.t. } \\langle u-v , x-y \\rangle\\geq c \\|x-y\\|^2 \\quad \\forall x,y\\in X, u\\in Ax, v\\in Ay."
}
] | https://en.wikipedia.org/wiki?curid=8364800 |
8365274 | O-ring theory of economic development | Model of economic development
The O-ring theory of economic development is a model of economic development put forward by Michael Kremer in 1993, which proposes that tasks of production must be executed proficiently together in order for any of them to be of high value. The key feature of this model is positive assortative matching, whereby people with similar skill levels work together.
The model is used to explain why rich countries produce more complicated products, have larger firms, and exhibit much higher worker productivity than poor countries.
The name is a reference to the 1986 "Challenger" shuttle disaster, a catastrophe caused by the failure of O-rings.
Model.
The model assumes that firms are risk-neutral, labor markets are competitive, workers supply labor inelastically, workers are imperfect substitutes for one another, and there is a sufficient complementarity of tasks.
Production is broken down into formula_0 tasks. Laborers can use a multitude of techniques of varying efficiency to carry out these tasks, depending on their skill. Skill is denoted by formula_1, where formula_2. The concept of formula_1 differs depending on interpretation. It could represent the probability of a worker successfully completing a task, the quality of task completion expressed as a percentage, or the quality of task completion subject to a margin of error that could reduce quality. Output equals the product of the formula_1 values of the formula_0 tasks, scaled by a firm-specific constant formula_3 that is positively correlated with the number of tasks. With two tasks, the production function is:
F("qi, qj") = "Bqiqj"
The important implication of this production function is positive assortative matching. This can be seen in a hypothetical four-person economy with two low-skill workers (q_L) and two high-skill workers (q_H). The following inequality captures the productive efficiency of skill matching:
"qH2" + "qL2" ≥ 2"qHqL"
By this inequality, total product is maximized by pairing workers with similar skill levels.
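A small numerical check of this point, with hypothetical skill levels q_L = 0.5 and q_H = 0.9 and the firm constant B normalised to 1:
```python
# Sketch: total output in the four-person economy under matched versus
# mismatched pairings (all numbers are illustrative).
q_L, q_H, B = 0.5, 0.9, 1.0

def output(q_i, q_j):
    return B * q_i * q_j

matched    = output(q_H, q_H) + output(q_L, q_L)   # high with high, low with low
mismatched = 2 * output(q_H, q_L)                  # each high paired with a low
print(matched, mismatched)   # 1.06 versus 0.90: assortative matching wins
```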
Conclusions.
Several implications can be derived from the model.
This model helps explain brain drain and international economic disparity. As Kremer puts it, "If strategic complementarity is sufficiently strong, microeconomically identical nations or groups within nations could settle into equilibria with different levels of human capital".
Extensions.
Garett Jones (2013) builds upon Kremer's O-ring theory to explain why differences in worker skills are associated with "massive" differences in international productivity levels despite causing only modest differences in wages within a country. For this purpose, he distinguishes between O-ring jobs—jobs featuring high strategic complementarities in terms of skill—and foolproof jobs—jobs characterized by diminishing returns to labor—and assumes both production technologies to be available to all countries. He then goes on to show that small international variations in average worker skill per country result in both large international and small intra-national income inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "0 \\le q \\le 1"
},
{
"math_id": 3,
"text": "B"
}
] | https://en.wikipedia.org/wiki?curid=8365274 |
8367 | Depth of field | Distance between the nearest and the furthest objects that are in focus in an image
The depth of field (DOF) is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image captured with a camera. See also the closely related depth of focus.
Factors affecting depth of field.
For cameras that can only focus on one object distance at a time, depth of field is the distance between the nearest and the farthest objects that are in acceptably sharp focus in the image. "Acceptably sharp focus" is defined using a property called the "circle of confusion".
The depth of field can be determined by focal length, distance to subject (object to be imaged), the acceptable circle of confusion size, and aperture. Limitations of depth of field can sometimes be overcome with various techniques and equipment. The approximate depth of field can be given by:
formula_0
for a given maximum acceptable circle of confusion c, focal length f, f-number N, and distance to subject u.
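As a rough numerical illustration of this formula, the sketch below evaluates it for a hypothetical setup (c = 0.03 mm, f = 50 mm, f/8, subject at 3 m); the values are arbitrary examples, and the result is only as good as the approximation above.
```python
# Sketch: the approximate DOF formula above (all lengths in millimetres).
def depth_of_field(u_mm, N, c_mm, f_mm):
    """Approximate total depth of field for subject distance u_mm."""
    return 2 * u_mm**2 * N * c_mm / f_mm**2

print(depth_of_field(u_mm=3000, N=8, c_mm=0.03, f_mm=50) / 1000, "m")
# roughly 1.7 m of total depth of field
```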
As distance or the size of the acceptable circle of confusion increases, the depth of field increases; however, increasing the size of the aperture (i.e., reducing the f-number) or increasing the focal length reduces the depth of field. Depth of field changes linearly with f-number and circle of confusion, but changes in proportion to the square of the distance to the subject and inversely in proportion to the square of the focal length. As a result, photos taken at extremely close range (i.e., small u) have a proportionally much smaller depth of field.
Rearranging the DOF equation shows that it is the ratio between distance and focal length that affects DOF;
formula_1
Note that formula_2 is the transverse magnification which is the ratio of the lateral image size to the lateral subject size.
Image sensor size affects DOF in counterintuitive ways. Because the circle of confusion is directly tied to the sensor size, decreasing the size of the sensor while holding focal length and aperture constant will decrease the depth of field (by the crop factor). The resulting image however will have a different field of view. If the focal length is altered to maintain the field of view, the change in focal length will counter the decrease of DOF from the smaller sensor and increase the depth of field (also by the crop factor).
Effect of lens aperture.
For a given subject framing and camera position, the DOF is controlled by the lens aperture diameter, which is usually specified as the f-number (the ratio of lens focal length to aperture diameter). Reducing the aperture diameter (increasing the f-number) increases the DOF because only the light travelling at shallower angles passes through the aperture, so only cones of rays with shallower angles reach the image plane. In other words, the circles of confusion are reduced, increasing the DOF.
For a given size of the subject's image in the focal plane, the same f-number on any focal length lens will give the same depth of field. This is evident from the above DOF equation by noting that the ratio "u"/"f" is constant for constant image size. For example, if the focal length is doubled, the subject distance is also doubled to keep the subject image size the same. This observation contrasts with the common notion that "focal length is twice as important to defocus as f/stop", which applies to a constant subject distance, as opposed to constant image size.
Motion pictures make limited use of aperture control; to produce a consistent image quality from shot to shot, cinematographers usually choose a single aperture setting for interiors (e.g., scenes inside a building) and another for exteriors (e.g., scenes in an area outside a building), and adjust exposure through the use of camera filters or light levels. Aperture settings are adjusted more frequently in still photography, where variations in depth of field are used to produce a variety of special effects.
Effect of circle of confusion.
Precise focus is only possible at an exact distance from a lens; at that distance, a point object will produce a small spot image. Otherwise, a point object will produce a larger or blur spot image that is typically and approximately a circle. When this circular spot is sufficiently small, it is visually indistinguishable from a point, and appears to be in focus. The diameter of the largest circle that is indistinguishable from a point is known as the acceptable circle of confusion, or informally, simply as the circle of confusion.
The acceptable circle of confusion depends on how the final image will be used. A circle of confusion of 0.25 mm is generally accepted for an image viewed from 25 cm away.
For 35 mm motion pictures, the image area on the film is roughly 22 mm by 16 mm. The limit of tolerable error was traditionally set at about 0.05 mm in diameter, while for 16 mm film, where the size is about half as large, the tolerance is stricter, about 0.025 mm. More modern practice for 35 mm productions sets the circle of confusion limit at about 0.025 mm.
Camera movements.
The term "camera movements" refers to swivel (swing and tilt, in modern terminology) and shift adjustments of the lens holder and the film holder. These features have been in use since the 1800s and are still in use today on view cameras, technical cameras, cameras with tilt/shift or perspective control lenses, etc. Swiveling the lens or sensor causes the plane of focus (POF) to swivel, and also causes the field of acceptable focus to swivel with the POF; and depending on the DOF criteria, to also change the shape of the field of acceptable focus. While calculations for DOF of cameras with swivel set to zero have been discussed, formulated, and documented since before the 1940s, documenting calculations for cameras with non-zero swivel seem to have begun in 1990.
More so than in the case of the zero swivel camera, there are various methods to form criteria and set up calculations for DOF when swivel is non-zero. There is a gradual reduction of clarity in objects as they move away from the POF, and at some virtual flat or curved surface the reduced clarity becomes unacceptable. Some photographers do calculations or use tables, some use markings on their equipment, some judge by previewing the image.
When the POF is rotated, the near and far limits of DOF may be thought of as wedge-shaped, with the apex of the wedge nearest the camera; or they may be thought of as parallel to the POF.
Object-field calculation methods.
Traditional depth-of-field formulas can be hard to use in practice. As an alternative, the same effective calculation can be done without regard to the focal length and f-number. Moritz von Rohr and later Merklinger observe that the effective absolute aperture diameter can be used in a similar formula in certain circumstances.
Moreover, traditional depth-of-field formulas assume equal acceptable circles of confusion for near and far objects. Merklinger suggested that distant objects often need to be much sharper to be clearly recognizable, whereas closer objects, being larger on the film, do not need to be so sharp. The loss of detail in distant objects may be particularly noticeable with extreme enlargements. Achieving this additional sharpness in distant objects usually requires focusing beyond the hyperfocal distance, sometimes almost at infinity. For example, if photographing a cityscape with a traffic bollard in the foreground, this approach, termed the "object field method" by Merklinger, would recommend focusing very close to infinity, and stopping down to make the bollard sharp enough. With this approach, foreground objects cannot always be made perfectly sharp, but the loss of sharpness in near objects may be acceptable if recognizability of distant objects is paramount.
Other authors such as Ansel Adams have taken the opposite position, maintaining that slight unsharpness in foreground objects is usually more disturbing than slight unsharpness in distant parts of a scene.
Overcoming DOF limitations.
Some methods and equipment allow altering the apparent DOF, and some even allow the DOF to be determined after the image is made. These are based on or supported by computational imaging processes. For example, focus stacking combines multiple images focused on different planes, resulting in an image with a greater (or less, if so desired) apparent depth of field than any of the individual source images. Similarly, in order to reconstruct the 3-dimensional shape of an object, a depth map can be generated from multiple photographs with different depths of field. Xiong and Shafer concluded, in part, "...the improvements on precisions of focus ranging and defocus ranging can lead to efficient shape recovery methods."
Another approach is focus sweep. The focal plane is swept across the entire relevant range during a single exposure. This creates a blurred image, but with a convolution kernel that is nearly independent of object depth, so that the blur is almost entirely removed after computational deconvolution. This has the added benefit of dramatically reducing motion blur.
Light Scanning Photomacrography (LSP) is another technique used to overcome depth of field limitations in macro and micro photography. This method allows for high-magnification imaging with exceptional depth of field. LSP involves scanning a thin light plane across the subject that is mounted on a moving stage perpendicular to the light plane. This ensures the entire subject remains in sharp focus from the nearest to the farthest details, providing comprehensive depth of field in a single image. Initially developed in the 1960s and further refined in the 1980s and 1990s, LSP was particularly valuable in scientific and biomedical photography before digital focus stacking became prevalent.
Other technologies use a combination of lens design and post-processing: Wavefront coding is a method by which controlled aberrations are added to the optical system so that the focus and depth of field can be improved later in the process.
The lens design can be changed even more: in colour apodization the lens is modified such that each colour channel has a different lens aperture. For example, the red channel may be f/2.4, green may be f/2.4, whilst the blue channel may be f/5.6. Therefore, the blue channel will have a greater depth of field than the other colours. The image processing identifies blurred regions in the red and green channels and in these regions copies the sharper edge data from the blue channel. The result is an image that combines the best features from the different f-numbers.
At the extreme, a plenoptic camera captures 4D light field information about a scene, so the focus and depth of field can be altered after the photo is taken.
Diffraction and DOF.
Diffraction causes images to lose sharpness at high f-numbers (i.e., narrow aperture stop opening sizes), and hence limits the potential depth of field. (This effect is not considered in the above formula giving approximate DOF values.) In general photography this is rarely an issue; because large f-numbers typically require long exposure times to acquire acceptable image brightness, motion blur may cause greater loss of sharpness than the loss from diffraction. However, diffraction is a greater issue in close-up photography, and the overall image sharpness can be degraded as photographers are trying to maximize depth of field with very small apertures.
Hansma and Peterson have discussed determining the combined effects of defocus and diffraction using a root-square combination of the individual blur spots. Hansma's approach determines the f-number that will give the maximum possible sharpness; Peterson's approach determines the minimum f-number that will give the desired sharpness in the final image and yields a maximum depth of field for which the desired sharpness can be achieved. In combination, the two methods can be regarded as giving a maximum and minimum f-number for a given situation, with the photographer free to choose any value within the range, as conditions (e.g., potential motion blur) permit. Gibson gives a similar discussion, additionally considering blurring effects of camera lens aberrations, enlarging lens diffraction and aberrations, the negative emulsion, and the printing paper. Couzin gave a formula essentially the same as Hansma's for optimal f-number, but did not discuss its derivation.
Hopkins, Stokseth, and Williams and Becklund have discussed the combined effects using the modulation transfer function.
DOF scales.
Many lenses include scales that indicate the DOF for a given focus distance and f-number; the 35 mm lens in the image is typical. That lens includes distance scales in feet and meters; when a marked distance is set opposite the large white index mark, the focus is set to that distance. The DOF scale below the distance scales includes markings on either side of the index that correspond to f-numbers. When the lens is set to a given f-number, the DOF extends between the distances that align with the f-number markings.
Photographers can use the lens scales to work backwards from the desired depth of field to find the necessary focus distance and aperture. For the 35 mm lens shown, if it were desired for the DOF to extend from 1 m to 2 m, focus would be set so that the index mark was centered between the marks for those distances, and the aperture would be set to f/11.
On a view camera, the focus and f-number can be obtained by measuring the depth of field and performing simple calculations. Some view cameras include DOF calculators that indicate focus and f-number without the need for any calculations by the photographer.
Near:far distribution.
The DOF beyond the subject is always greater than the DOF in front of the subject. When the subject is at the hyperfocal distance or beyond, the far DOF is infinite, so the ratio is 1:∞; as the subject distance decreases, the near:far DOF ratio increases, approaching unity at high magnification. For large apertures at typical portrait distances, the ratio is still close to 1:1.
DOF formulae.
This section covers some additional formulas for evaluating depth of field; however, they are all subject to significant simplifying assumptions: for example, they assume the paraxial approximation of Gaussian optics. They are suitable for practical photography; lens designers would use significantly more complex ones.
Focus and f-number from DOF limits.
For given near and far DOF limits "D"N and "D"F, the required f-number is smallest when focus is set to
formula_3
the harmonic mean of the near and far distances. In practice, this is equivalent to the arithmetic mean for shallow depths of field. Sometimes, view camera users refer to the difference "v"N − "v"F as the "focus spread".
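A one-line computation of this focus distance, using the 1 m to 2 m example given under DOF scales above (the values are illustrative):
```python
# Sketch: focus distance from desired near and far DOF limits (same units).
def focus_distance(d_near, d_far):
    return 2 * d_near * d_far / (d_near + d_far)   # harmonic mean

print(focus_distance(1.0, 2.0))   # about 1.33 m for DOF limits of 1 m and 2 m
```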
Foreground and background blur.
If a subject is at distance s and the foreground or background is at distance D, let the distance between the subject and the foreground or background be indicated by
formula_4
The blur disk diameter b of a detail at distance "x"d from the subject can be expressed as a function of the subject magnification "m"s, focal length f, f-number N, or alternatively the aperture d, according to
formula_5
The minus sign applies to a foreground object, and the plus sign applies to a background object.
The blur increases with the distance from the subject; when b is less than the circle of confusion, the detail is within the depth of field.
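A sketch of this blur calculation is given below; it assumes the thin-lens relation m_s = f/(s - f) for the subject magnification, and the 50 mm f/2 lens, 2 m subject distance, and 3 m background distance are hypothetical example values.
```python
# Sketch: blur disk diameter of a background or foreground detail
# (all lengths in millimetres).
def blur_disk(f_mm, N, s_mm, D_mm):
    m_s = f_mm / (s_mm - f_mm)            # subject magnification (thin lens)
    x_d = abs(D_mm - s_mm)
    sign = 1 if D_mm > s_mm else -1       # plus for background, minus for foreground
    return (f_mm * m_s / N) * x_d / (s_mm + sign * x_d)

print(blur_disk(f_mm=50, N=2, s_mm=2000, D_mm=3000))   # about 0.21 mm
```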
See also.
<templatestyles src="Div col/styles.css"/>
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
General and cited references.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\text{DOF} \\approx \\frac{2u^2Nc}{f^2} \n"
},
{
"math_id": 1,
"text": "\n\\text{DOF} \\approx 2Nc\\left(\\frac{u}{f}\\right)^2=2Nc\\left(1 - \\frac{1}{M_T}\\right)^2\n"
},
{
"math_id": 2,
"text": "M_T = -\\frac{f}{u - f}"
},
{
"math_id": 3,
"text": "s = \\frac{2 D_{\\mathrm N} D_{\\mathrm F}}{D_{\\mathrm N} + D_{\\mathrm F}},"
},
{
"math_id": 4,
"text": "x_{\\mathrm d} = |D - s|."
},
{
"math_id": 5,
"text": "b = \\frac{fm_\\mathrm s}{N} \\frac{x_\\mathrm{d}}{s \\pm x_\\mathrm{d}} = dm_\\mathrm{s} \\frac{x_\\mathrm{d}}{D}."
}
] | https://en.wikipedia.org/wiki?curid=8367 |
8367611 | Two-center bipolar coordinates | In mathematics, two-center bipolar coordinates is a coordinate system based on two coordinates which give distances from two fixed centers formula_0 and formula_1. This system is very useful in some scientific applications (e.g. calculating the electric field of a dipole on a plane).
Transformation to Cartesian coordinates.
When the centers are at formula_2 and formula_3, the transformation to Cartesian coordinates formula_4 from two-center bipolar coordinates formula_5 is
formula_6
formula_7
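A short numerical sketch of this transformation, with a round-trip check against the defining distances; the centre separation a = 2 and the distances r1 = 3, r2 = 4 are arbitrary example values, and only the upper-half-plane branch of the ± sign is taken.
```python
# Sketch: two-center bipolar (r1, r2) to Cartesian (x, y), with r1 measured
# from (+a, 0) and r2 from (-a, 0), consistent with the x formula above.
from math import hypot, sqrt

def bipolar_to_cartesian(r1, r2, a):
    x = (r2**2 - r1**2) / (4 * a)
    y = sqrt(16 * a**2 * r2**2 - (r2**2 - r1**2 + 4 * a**2) ** 2) / (4 * a)
    return x, y

a = 2.0
x, y = bipolar_to_cartesian(r1=3.0, r2=4.0, a=a)
print(x, y)
print(hypot(x - a, y), hypot(x + a, y))   # recovers r1 = 3.0 and r2 = 4.0
```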
Transformation to polar coordinates.
When "x" > 0, the transformation to polar coordinates from two-center bipolar coordinates is
formula_8
formula_9
where formula_10 is the distance between the poles (coordinate system centers).
Applications.
Polar plotters use two-center bipolar coordinates to describe the drawing paths required to draw a target image.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "c_1"
},
{
"math_id": 1,
"text": "c_2"
},
{
"math_id": 2,
"text": "(+a, 0)"
},
{
"math_id": 3,
"text": "(-a, 0)"
},
{
"math_id": 4,
"text": "(x, y)"
},
{
"math_id": 5,
"text": "(r_1, r_2)"
},
{
"math_id": 6,
"text": "x = \\frac{r_2^2-r_1^2}{4a}"
},
{
"math_id": 7,
"text": "y = \\pm \\frac{1}{4a}\\sqrt{16a^2r_2^2-(r_2^2-r_1^2+4a^2)^2}"
},
{
"math_id": 8,
"text": "r = \\sqrt{\\frac{r_1^2+r_2^2-2a^2}{2}}"
},
{
"math_id": 9,
"text": "\\theta = \\arctan\\left( \\frac{\\sqrt{r_1^4-8a^2r_1^2-2r_1^2r_2^2-(4a^2-r_2^2)^2}}{r_2^2-r_1^2} \\right)"
},
{
"math_id": 10,
"text": "2 a"
}
] | https://en.wikipedia.org/wiki?curid=8367611 |
8370770 | Monopolistic competition in international trade | Imperfect competition of differentiated products that are not perfect substitutes
Monopolistic competition models are used under the rubric of imperfect competition in International Economics. This model is a derivative of the monopolistic competition model that is part of basic economics. Here, it is tailored to international trade.
Setting up the model.
Monopolies are not often found in practice. The more usual market format is oligopoly: several firms, each of which is large enough that a change in its price will affect the other firms' prices (firms holding true monopolies are the exception). When looking at oligopolies, the problem of "interdependence" arises. Interdependence means that the firms will, when setting their prices, consider the effect this price will have on the actions of both consumers and competitors. For their part, the competitors will consider their expectations of the firm's response to any action they may take in return. Thus, there is a complex game with each side "trying to second guess each others' strategies." The Monopolistic Competition model is used because its simplicity allows the examination of one type of oligopoly while avoiding the issue of interdependence.
Benefits of the model.
The appeal of this model is not its closeness to the real world but its simplicity. What this model accomplishes most is that it shows us the benefits to trade presented by economies of scale.
Assumptions of the model.
*An industry consisting of a number of firms, each of which produces differentiated products. The firms are monopolists for their products but depend somewhat on the number of reasonable alternatives available and the price of those alternatives. Each firm within the industry thus faces a demand that is affected by the price and prevalence of reasonable alternatives.
Background of the model.
*Generally, we expect a firm's sales to increase the stronger the total demand for the industry's product as a whole. Conversely, we expect the firm to sell less the more firms there are in the industry and/or the higher the firm's price relative to that of its competitors. The demand equation for such a firm would be:
formula_0
Q = "S" x [1/n - "b" x (P - "P")]
* "Q" = the firm's sales. "S" is the total sales of the industry. "n" is the number of firms in the industry, and ""b" is a constant term representing the responsiveness of a firm's sales to its price. "P"firm" is the price charged by the firm itself. ""P"comp" is the average price charged by its competitors.
* The intuition of this model is:
* If all firms charge the same price, their respective market share will be 1/n. Firms charging more get less, and firms charging less get more.
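This intuition can be checked directly from the demand equation; the sketch below plugs in hypothetical values for S, n, and b.
```python
# Sketch: the demand equation Q = S * (1/n - b * (P_firm - P_comp)),
# evaluated with illustrative parameter values.
def firm_sales(S, n, b, p_firm, p_comp):
    return S * (1.0 / n - b * (p_firm - p_comp))

S, n, b = 1_000_000, 10, 0.002
print(firm_sales(S, n, b, p_firm=100, p_comp=100))   # equal prices: S/n = 100000.0
print(firm_sales(S, n, b, p_firm=105, p_comp=100))   # pricing above rivals: 90000.0
```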
*(Note) Assume that lower prices will not bring new consumers into the market. In this model, consumers can only be gained at the expense of other firms. This simplifies things, allowing a focus on the competition among firms and also allows the assumption that if "S" represents the market size, and the firms are charging the same price, the market share of each firm will be "S"/n. | [
{
"math_id": 0,
"text": "Q=S\\times \\left[\\frac{1}{n} - b\\left(P_{firm} - \\bar{P}_{comp}\\right)\\right]"
}
] | https://en.wikipedia.org/wiki?curid=8370770 |