id | title | text | formulas | url
---|---|---|---|---|
6838326 | Ghost leg | Method of random selection
Ghost leg is a method of lottery designed to create random pairings between two sets of any number of things, as long as the number of elements in each set is the same. This is often used to distribute things among people, where the number of things distributed is the same as the number of people. For instance, chores or prizes could be assigned fairly and randomly this way.
It is known in Japanese as "amidakuji", in Korean as "sadaritagi" (사다리타기, literally "ladder climbing") and in Chinese as "guijiaotu" (literally "ghost leg diagram").
It consists of vertical lines with horizontal lines connecting two adjacent vertical lines scattered randomly along their length; the horizontal lines are called "legs". The number of vertical lines equals the number of people playing, and at the bottom of each line there is an item - a thing that will be paired with a player. The general rule for playing this game is: choose a line on the top, and follow this line downwards. When a horizontal line is encountered, follow it to get to another vertical line and continue downwards. Repeat this procedure until reaching the end of the vertical line. Then the player is given the thing written at the bottom of the line.
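The traversal rule above is simple enough to simulate directly. The following is an illustrative sketch (Python; the leg encoding is a hypothetical choice, not part of the game's description):

```python
def ghost_leg(num_lines, legs):
    """Follow each vertical line from top to bottom.

    `legs` lists the legs in top-to-bottom order; a leg at position j
    connects vertical lines j and j+1 (0-indexed).  Returns a list whose
    entry i is the bottom position reached from top position i.
    """
    result = []
    for start in range(num_lines):
        pos = start
        for j in legs:            # legs are met in top-to-bottom order
            if j == pos:          # leg attached on the right: step right
                pos += 1
            elif j == pos - 1:    # leg attached on the left: step left
                pos -= 1
        result.append(pos)
    return result

# Example: 3 lines with legs between (0,1), then (1,2), then (0,1) again.
print(ghost_leg(3, [0, 1, 0]))  # [2, 1, 0]: every bottom item is reached exactly once
```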
If the elements written above the ghost leg are treated as a sequence, and after the ghost leg is used, the same elements are written at the bottom, then the starting sequence has been transformed to another permutation. Hence, ghost leg can be treated as a kind of permuting operator.
Process.
As an example, consider assigning roles in a play to actors.
Another process involves creating the ladder beforehand, then concealing it. Then people take turns choosing a path to start from at the top. If no part of the amidakuji is concealed, then it is possible to fix the system so that a certain pairing is guaranteed, thus defeating the idea of random chance.
Mathematics.
Part of the appeal for this game is that, unlike random chance games like rock, paper, scissors, "amidakuji" will always create a 1:1 correspondence, and can handle arbitrary numbers of pairings. It is guaranteed that two items at the top will never have the same corresponding item at the bottom, nor will any item on the bottom ever lack a corresponding item at the top.
It also works regardless of how many horizontal lines are added. Each person could add one, two, three, or any number of lines, and the 1:1 correspondence would remain.
One way of realizing how this works is to consider the analogy of coins in cups. There are "n" coins in "n" cups, representing the items at the bottom of the "amidakuji". Then, each leg that is added represents swapping the position of two adjacent cups. Thus, in the end there will still be "n" cups, and each cup will have one coin, regardless of how many swaps are performed.
Properties.
Permutation.
A ghost leg transforms an input sequence into an output sequence with the same number of elements with (possibly) different order.
Thus, it can be regarded as a permutation of "n" symbols, where "n" is the number of vertical lines in the ghost leg; hence it can be represented by the corresponding permutation matrix.
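Continuing the sketch above, the mapping produced by a ghost leg can be packaged as a permutation matrix (an illustrative construction assuming NumPy, not part of the original text):

```python
import numpy as np

def permutation_matrix(mapping):
    """Build the permutation matrix M of a ghost-leg mapping, where
    mapping[i] is the bottom position reached from top position i
    (e.g. the output of ghost_leg above)."""
    n = len(mapping)
    M = np.zeros((n, n), dtype=int)
    for i, j in enumerate(mapping):
        M[j, i] = 1   # column i carries input i to output j
    return M

M = permutation_matrix([2, 1, 0])
print(M)
print(np.linalg.matrix_power(M, 2))  # the identity: this ghost leg has period 2 (see Periodicity below)
```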
Periodicity.
Applying a ghost leg a finite number of times to an input sequence eventually generates an output sequence identical to the original input sequence.
I.e., if "M" is a matrix representing a particular ghost leg, then "M""n"
"I" for some finite "n".
Reversibility.
For any ghost leg with matrix representation "M", there exists a ghost leg with representation "M"^−1, such that "M" "M"^−1 = "I".
Odd-even property of permutation.
As each leg exchanges the two neighboring elements at its ends, the number of legs indicates the odd/even permutation property of the ghost leg. An odd number of legs represents an odd permutation, and an even number of legs gives an even permutation.
Infinite ghost legs with same permutation.
It is possible to express every permutation as a ghost leg, but the expression is not one-to-one, i.e. a particular permutation does not correspond to a unique ghost leg. An infinite number of ghost legs represent the same permutation.
Prime.
As there are an infinite number of ghost legs representing a particular permutation, those ghost legs have a kind of equivalence. Among those equivalent ghost legs, the ones which have the smallest number of legs are called "prime".
Bubble sort and highest simplicity.
A ghost leg can be constructed arbitrarily, but such a ghost leg is not necessarily prime. It can be proven that only the ghost legs constructed by bubble sort contain the fewest legs, and hence are prime. This is equivalent to saying that bubble sort performs the minimum number of adjacent exchanges needed to sort a sequence.
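As an illustrative sketch of this point (Python; the helper is hypothetical and not taken from the article), counting the adjacent swaps made by bubble sort gives the number of legs in a prime ghost leg realising a given permutation:

```python
def prime_leg_count(permutation):
    """Count the adjacent exchanges bubble sort needs to sort `permutation`;
    this equals the number of inversions, i.e. the number of legs in a
    prime ghost leg realising that permutation."""
    seq = list(permutation)
    swaps = 0
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                swaps += 1
    return swaps

print(prime_leg_count([2, 1, 0]))  # 3, which is n(n-1)/2 for n = 3 (see the next section)
```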
Maximum number of legs of prime.
For a permutation with "n" elements, the maximum number of neighbor exchanging = formula_0
In the same way, the maximum number of legs in a prime with "n" tracks = formula_0
Bubblization.
For an arbitrary ghost leg, it is possible to transform it into prime by a procedure called "bubblization".
When bubblization operates, the following two identities are repeatedly applied in order to move and eliminate "useless" legs.
When the two identities cannot be applied any more, the ghost leg is proven to be exactly the same as the ghost leg constructed by bubble sort, thus bubblization can reduce ghost legs to primes.
Randomness.
Since, as mentioned above, an odd number of legs produces an odd permutation and an even number of legs produces an even permutation, a given number of legs can produce a maximum of half the total possible permutations (less than half if the number of legs is small relative to the number of tracks, reaching half as the number of legs increases beyond a certain critical number).
If the legs are drawn randomly (for reasonable definitions of "drawn randomly"), the evenness of the distribution of permutations increases with the number of legs. If the number of legs is small relative to number of tracks, the probabilities of different attainable permutations may vary greatly; for large numbers of legs the probabilities of different attainable permutations approach equality.
In popular culture.
The 1981 arcade game "Amidar", programmed by Konami and published by Stern, uses the same lattice as a maze. The game took its name from "amidakuji", and most of the enemy movement conformed to the game's rules.
An early Master System game called "Psycho Fox" uses the mechanics of an "amidakuji" board as a means to bet a bag of coins on a chance at a prize at the top of the screen. Later Sega Genesis games based on the same game concept "DecapAttack" and its Japanese predecessor "Magical Hat no Buttobi Tabo! Daibōken" follow the same game mechanics, including the "amidakuji" bonus levels.
' features an "amidakuji-style" bonus game that rewards the player with a power-up. "New Super Mario Bros.", "Super Mario 64 DS" and ' feature an "amidakuji"-style minigame in which the player uses the stylus to add new lines that must lead the player character down the winning path as he slides down the board.
In "Mega Man X" Bospider climbs down a web shaped rail similar to
"amidakuji" to attack in the first Sigma's fortress stage.
Azalea Gym in "Pokémon HeartGold" and "SoulSilver" was redesigned with an "amidakuji"-based system of carts to cross. The correct choices lead to the gym leader; the wrong ones lead to other trainers to fight.
"Phantasy Star Online 2" uses the principle of "amidakuji" for a randomly appearing bomb-defusing minigame. One must trace an "amidakuji" path around each bomb to determine which button defuses it; incorrect selections knock players away for a few seconds, wasting time.
In the Japanese drama "Don Quixote" (episode 10), the character Shirota (Shota Matsuda) uses "amidakuji" to help decide between candidate families for an adoption.
In "Raging Loop", a "ghost leg lottery" is described as an analogy for the selection of roles across a village for a ceremony that is central to the game's plot.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{n(n-1)}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=6838326 |
6838711 | Thermoeconomics | Heterodox economic theory
Thermoeconomics, also referred to as biophysical economics, is a school of heterodox economics that applies the laws of statistical mechanics to economic theory. Thermoeconomics can be thought of as the statistical physics of economic value and is a subfield of econophysics.
It is the study of the ways and means by which human societies procure and use energy and other biological and physical resources to produce, distribute, consume and exchange goods and services, while generating various types of waste and environmental impacts. Biophysical economics builds on both social sciences and natural sciences to overcome some of the most fundamental limitations and blind spots of conventional economics. It makes it possible to understand some key requirements and framework conditions for economic growth, as well as related constraints and boundaries.
Thermodynamics.
"Rien ne se perd, rien ne se crée, tout se transforme"
"Nothing is lost, nothing is created, everything is transformed."
-"Antoine Lavoisier, one of the fathers of chemistry"Thermoeconomists maintain that human economic systems can be modeled as thermodynamic systems. Thermoeconomists argue that economic systems always involve matter, energy, entropy, and information. Then, based on this premise, theoretical economic analogs of the first and second laws of thermodynamics are developed. The global economy is viewed as an open system.
Moreover, many economic activities result in the formation of structures. Thermoeconomics applies the statistical mechanics of non-equilibrium thermodynamics to model these activities. In thermodynamic terminology, human economic activity may be described as a dissipative system, which flourishes by consuming free energy in transformations and exchange of resources, goods, and services.
Energy Return on Investment.
formula_0
Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood not through the second law of thermodynamics but in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work.
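A trivial numerical illustration of the EROI ratio in formula_0 above (the figures are hypothetical and purely illustrative):

```python
# EROI = energy returned to society / energy required to get that energy
energy_returned = 50.0  # hypothetical units of energy delivered
energy_invested = 5.0   # hypothetical energy spent obtaining it
print(energy_returned / energy_invested)  # 10.0: ten units delivered per unit invested
```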
Peak oil.
Political Implications.
"[T]he escalation of social protest and political instability around the world is causally related to the unstoppable thermodynamics of global hydrocarbon energy decline and its interconnected environmental and economic consequences."
Energy Backed Credit.
"Under this analysis, a reduction of GDP in advanced economies is now likely:"
"The 20th century experienced increasing energy quality and decreasing energy prices. The 21st century will be a story of decreasing energy quality and increasing energy cost."
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "EROI = \\frac{\\text{Energy returned to society}}{\\text{Energy required to get that energy}}"
}
]
| https://en.wikipedia.org/wiki?curid=6838711 |
68388573 | Hantzsche–Wendt manifold | Closed flat 3-manifold
The Hantzsche–Wendt manifold, also known as the HW manifold or didicosm, is a compact, orientable, flat 3-manifold, first studied by Walter Hantzsche and Hilmar Wendt in 1934. It is the only closed flat 3-manifold with first Betti number zero. Its holonomy group is formula_0. It has been suggested as a possible shape of the universe because its complicated geometry can obscure the features in the cosmic microwave background that would arise if the universe is a closed flat manifold, such as the 3-torus.
Construction.
The HW manifold can be built from two cubes that share a face. One construction proceeds as follows:
Generalizations.
In addition to the orientable one (the Hantzsche–Wendt manifold), there are two non-orientable flat 3-manifolds with holonomy group formula_0, known as the first and second amphidicosms, both with first Betti number 1.
Similar flat "n"-dimensional manifolds with holonomy formula_1, known as generalized Hantzsche–Wendt manifolds, may be constructed for any "n"≥2, but orientable ones exist only in odd dimensions. The number of orientable HW manifolds up to diffeomorphism increases exponentially with dimension. All of these have first Betti number "β"1 0 or 1.
Trivia.
The didicosm is eponymous and plays a central role in Greg Egan's science-fiction short story "Didicosm".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}_2^2"
},
{
"math_id": 1,
"text": "\\mathbb{Z}_2^{n-1}"
}
]
| https://en.wikipedia.org/wiki?curid=68388573 |
6838895 | N-jet | An "N"-jet is the set of (partial) derivatives of a function formula_0 up to order "N".
Specifically, in the area of computer vision, the "N"-jet is usually computed from a scale space representation formula_1 of the input image formula_2, and the partial derivatives of formula_1 are used as a basis for expressing various types of visual modules. For example, algorithms for tasks such as feature detection, feature classification, stereo matching, tracking and object recognition can be expressed in terms of "N"-jets computed at one or several scales in scale space.
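As an illustrative sketch (assuming Python with SciPy; not part of the original text), the components of a 2-jet at a given scale can be computed by filtering the image with Gaussian derivative kernels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_jet(image, sigma):
    """Return the 2-jet components (L, Lx, Ly, Lxx, Lxy, Lyy) of a 2-D image
    at scale t = sigma**2, computed as Gaussian derivatives of the
    scale space representation."""
    orders = {"L": (0, 0), "Lx": (0, 1), "Ly": (1, 0),
              "Lxx": (0, 2), "Lxy": (1, 1), "Lyy": (2, 0)}  # (row, column) derivative orders
    return {name: gaussian_filter(image.astype(float), sigma, order=order)
            for name, order in orders.items()}

jet = two_jet(np.random.rand(64, 64), sigma=2.0)  # hypothetical input image
print(sorted(jet))  # ['L', 'Lx', 'Lxx', 'Lxy', 'Ly', 'Lyy']
```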
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "f(x, y)"
}
]
| https://en.wikipedia.org/wiki?curid=6838895 |
68392400 | Egalitarian cake-cutting | Egalitarian cake-cutting is a kind of fair cake-cutting in which the fairness criterion is the egalitarian rule. The "cake" represents a continuous resource (such as land or time), that has to be allocated among people with different valuations over parts of the resource. The goal in egalitarian cake-cutting is to maximize the smallest value of an agent; subject to this, maximize the next-smallest value; and so on. It is also called leximin cake-cutting, since the optimization is done using the leximin order on the vectors of utilities.
The concept of egalitarian cake-cutting was first described by Dubins and Spanier, who called it "optimal partition".
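As an illustrative sketch (Python, with hypothetical utility profiles; not part of the original text), the leximin comparison described above amounts to comparing utility vectors after sorting each in increasing order:

```python
def leximin_key(utilities):
    """Leximin sort key: compare the smallest utility first,
    then the second smallest, and so on."""
    return sorted(utilities)

# Three hypothetical allocations and the utility profile each induces.
profiles = [(3, 5, 5), (4, 4, 6), (4, 5, 5)]
print(max(profiles, key=leximin_key))  # (4, 5, 5) leximin-dominates the other two
```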
Existence.
Leximin-optimal allocations exist whenever the set of allocations is a compact space. This is always the case when allocating discrete objects, and easy to prove when allocating a finite number of continuous homogeneous resources. Dubins and Spanier proved that, with a continuous "heterogeneous" resource ("cake"), the set of allocations is compact. Therefore, leximin-optimal cake allocations always exist. For this reason, the leximin cake-allocation rule is sometimes called the Dubins–Spanier rule.
Variants.
When the agents' valuations are not normalized (i.e., different agents may assign a different value to the entire cake), there is a difference between the "absolute utility profile" of an allocation (where element "i" is just the utility of agent "i"), and its "relative utility profile" (where element "i" is the utility of agent "i" divided by the total value for agent "i"). The absolute leximin rule chooses an allocation in which the absolute utility profile is leximin-maximal, and the relative leximin rule chooses an allocation in which the relative utility profile is leximin-maximal.
Properties.
Both variants of the leximin rule are Pareto-optimal and population monotonic. However, they differ in other properties:
Relation to envy-freeness.
Both variants of the leximin rule may yield allocations that are not envy-free. For example, suppose there are 5 agents, the cake is piecewise-homogeneous with 3 regions, and the agents' valuations are (missing values are zeros):
All agents value the entire cake at 15, so absolute-leximin and relative-leximin are equivalent. The largest possible minimum value is 5, so a leximin allocation must give all agents at least 5. This means that the Right must be divided equally among agents C, D, E, and the Middle must be given entirely to agent B. But then A envies B.
Dubins and Spanier proved that, when all value-measures are strictly positive, every relative-leximin allocation is envy-free.
Weller showed an envy-free and efficient cake allocation that is not relative-leximin. The cake is [0,1], there are three agents, and their value measures are triangular distributions centered at 1/4, 1/2 and 3/4 respectively. The allocation ([0,3/8],[3/8,5/8],[5/8,1]) has utility profile (3/8,7/16,7/16). It is envy-free and utilitarian-optimal, hence Pareto-efficient. However, there is another allocation ([0,5/16],[5/16,11/16],[11/16,1]) with a leximin-better utility profile.
Computation.
Dall'aglio presents an algorithm for computing a leximin-optimal resource allocation.
Aumann, Dombb and Hassidim present an algorithm that, for every "ε" > 0, computes an allocation with egalitarian welfare at least (1 − "ε") of the optimum using formula_0 queries. This is exponential in "n", but polynomial in the number of accuracy digits.
On the other hand, they prove that, unless P=NP, it is impossible to approximate the optimal egalitarian value to a factor better than 2, in time polynomial in "n". The proof is by reduction from 3-dimensional matching (3DM). For every instance of 3DM matching with "m" hyperedges, they construct a cake-cutting instance with "n" agents, where 4"m" ≤ "n" ≤ 5"m". They prove that, if the 3DM instance admits a perfect matching, then there exists a cake allocation with egalitarian value at least 1/"m"; otherwise, there is no cake-allocation with egalitarian value larger than 1/(2"m").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^n \\cdot n\\cdot \\log_2(n/\\epsilon)"
}
]
| https://en.wikipedia.org/wiki?curid=68392400 |
68394738 | Thrust (particle physics) | Variable quantifying the coherence of jets
In high energy physics, thrust is a property (one of the event shape observables) used to characterize the collision of high energy particles in a collider.
When two high energy particles collide, they typically produce jets of secondary particles. This happens when one or several quark-antiquark pairs are produced during the collision. The quark and the antiquark of each colored pair travel their separate ways and subsequently hadronize. Many new particles are created by the hadronization process and travel in approximately the same direction as the original quark or antiquark. This set of particles constitutes a jet.
The thrust quantifies the coherence, or "jettiness", of the group of particles resulting from one collision. It is defined as:
formula_0,
where formula_1 is the momentum of particle formula_2, and formula_3 is a unit vector that maximizes formula_4 and defines the thrust axis. The sum is over all the final particles resulting from the collision. In practice, the sum may be carried over the detected particles only.
The thrust formula_4 is stable under collinear splitting of particles, and therefore it is a robust observable, largely insensitive to the details of the specific hadronization process.
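As an illustrative numerical sketch (assuming Python with NumPy; a brute-force scan over randomly sampled unit vectors rather than an exact maximization of formula_0):

```python
import numpy as np

def thrust(momenta, n_trials=20000, seed=0):
    """Approximate the thrust T of a set of 3-momenta by scanning
    randomly sampled unit vectors as candidate thrust axes."""
    p = np.asarray(momenta, dtype=float)                   # shape (N, 3)
    rng = np.random.default_rng(seed)
    axes = rng.normal(size=(n_trials, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)    # unit vectors n
    numerators = np.abs(axes @ p.T).sum(axis=1)            # sum_i |p_i . n| for each candidate n
    denominator = np.linalg.norm(p, axis=1).sum()          # sum_i |p_i|
    best = numerators.argmax()
    return numerators[best] / denominator, axes[best]

# Two back-to-back particles form a perfectly "jetty" event, so T is close to 1.
T, axis = thrust([[0.0, 0.0, 10.0], [0.0, 0.0, -10.0]])
print(round(T, 3))
```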
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T= \\underset{|n|=1}{\\operatorname{max}} \\bigg[\\frac{\\sum_i|p_i.n|}{\\sum_i|p_i|}\\bigg]"
},
{
"math_id": 1,
"text": "p_i"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "T"
}
]
| https://en.wikipedia.org/wiki?curid=68394738 |
6840149 | Metric derivative | Mathematical concept
In mathematics, the metric derivative is a notion of derivative appropriate to parametrized paths in metric spaces. It generalizes the notion of "speed" or "absolute velocity" to spaces which have a notion of distance (i.e. metric spaces) but not direction (such as vector spaces).
Definition.
Let formula_0 be a metric space. Let formula_1 have a limit point at formula_2. Let formula_3 be a path. Then the metric derivative of formula_4 at formula_5, denoted formula_6, is defined by
formula_7
if this limit exists.
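As an illustrative numerical sketch (Python; not part of the original article), the limit can be approximated with a small finite step, here for a path in the Euclidean plane:

```python
import math

def metric_derivative(gamma, dist, t, h=1e-6):
    """Approximate |gamma'|(t) = lim_{s -> 0} d(gamma(t+s), gamma(t)) / |s|
    by taking a small but finite step h."""
    return dist(gamma(t + h), gamma(t)) / h

# Example metric space: R^2 with the Euclidean distance; the path is the unit-speed circle.
euclidean = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
circle = lambda t: (math.cos(t), math.sin(t))
print(metric_derivative(circle, euclidean, t=0.3))  # approximately 1.0
```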
Properties.
Recall that AC"p"("I"; "X") is the space of curves "γ" : "I" → "X" such that
formula_8
for some "m" in the "L""p" space "L""p"("I"; R). For "γ" ∈ AC"p"("I"; "X"), the metric derivative of "γ" exists for Lebesgue-almost all times in "I", and the metric derivative is the smallest "m" ∈ "L""p"("I"; R) such that the above inequality holds.
If Euclidean space formula_9 is equipped with its usual Euclidean norm formula_10, and formula_11 is the usual Fréchet derivative with respect to time, then
formula_12
where formula_13 is the Euclidean metric. | [
{
"math_id": 0,
"text": "(M, d)"
},
{
"math_id": 1,
"text": "E \\subseteq \\mathbb{R}"
},
{
"math_id": 2,
"text": "t \\in \\mathbb{R}"
},
{
"math_id": 3,
"text": "\\gamma : E \\to M"
},
{
"math_id": 4,
"text": "\\gamma"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "| \\gamma' | (t)"
},
{
"math_id": 7,
"text": "| \\gamma' | (t) := \\lim_{s \\to 0} \\frac{d (\\gamma(t + s), \\gamma (t))}{| s |},"
},
{
"math_id": 8,
"text": "d \\left( \\gamma(s), \\gamma(t) \\right) \\leq \\int_{s}^{t} m(\\tau) \\, \\mathrm{d} \\tau \\mbox{ for all } [s, t] \\subseteq I"
},
{
"math_id": 9,
"text": "\\mathbb{R}^{n}"
},
{
"math_id": 10,
"text": "\\| - \\|"
},
{
"math_id": 11,
"text": "\\dot{\\gamma} : E \\to V^{*}"
},
{
"math_id": 12,
"text": "| \\gamma' | (t) = \\| \\dot{\\gamma} (t) \\|,"
},
{
"math_id": 13,
"text": "d(x, y) := \\| x - y \\|"
}
]
| https://en.wikipedia.org/wiki?curid=6840149 |
6840205 | Blob detection | A particular task in computer vision
In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other. The most common method for blob detection is by using convolution.
Given some property of interest expressed as a function of position on the image, there are two main classes of blob detectors: (i) "differential methods", which are based on derivatives of the function with respect to position, and (ii) "methods based on local extrema", which are based on finding the local maxima and minima of the function. With the more recent terminology used in the field, these detectors can also be referred to as "interest point operators", or alternatively interest region operators (see also interest point detection and corner detection).
There are several motivations for studying and developing blob detectors. One main reason is to provide complementary information about regions, which is not obtained from edge detectors or corner detectors. In early work in the area, blob detection was used to obtain regions of interest for further processing. These regions could signal the presence of objects or parts of objects in the image domain with application to object recognition and/or object tracking. In other domains, such as histogram analysis, blob descriptors can also be used for peak detection with application to segmentation. Another common use of blob descriptors is as main primitives for texture analysis and texture recognition. In more recent work, blob descriptors have found increasingly popular use as interest points for wide baseline stereo matching and to signal the presence of informative image features for appearance-based object recognition based on local image statistics. There is also the related notion of ridge detection to signal the presence of elongated objects.
The Laplacian of Gaussian.
One of the first and also most common blob detectors is based on the "Laplacian of the Gaussian" (LoG). Given an input image formula_0, this image is convolved by a Gaussian kernel
formula_1
at a certain scale formula_2 to give a scale space representation formula_3. Then, the result of applying the Laplacian operator
formula_4
is computed, which usually results in strong positive responses for dark blobs of radius formula_5 (for a two-dimensional image, formula_6 for a formula_7-dimensional image) and strong negative responses for bright blobs of similar size. A main problem when applying this operator at a single scale, however, is that the operator response is strongly dependent on the relationship between the size of the blob structures in the image domain and the size of the Gaussian kernel used for pre-smoothing. In order to automatically capture blobs of different (unknown) size in the image domain, a multi-scale approach is therefore necessary.
A straightforward way to obtain a "multi-scale blob detector with automatic scale selection" is to consider the "scale-normalized Laplacian operator"
formula_8
and to detect "scale-space maxima/minima", that are points that are "simultaneously local maxima/minima of formula_9 with respect to both space and scale" (Lindeberg 1994, 1998). Thus, given a discrete two-dimensional input image formula_0 a three-dimensional discrete scale-space volume formula_10 is computed and a point is regarded as a bright (dark) blob if the value at this point is greater (smaller) than the value in all its 26 neighbours. Thus, simultaneous selection of interest points formula_11 and scales formula_12 is performed according to
formula_13.
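The detection scheme just described can be sketched compactly in code (assuming Python with NumPy and SciPy; a simplified illustration rather than a reference implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(image, sigmas, threshold=0.1):
    """Detect blobs as scale-space extrema of the scale-normalised
    Laplacian t * (Lxx + Lyy), with t = sigma**2."""
    image = image.astype(float)
    # Stack the scale-normalised Laplacian responses into a 3-D scale-space volume.
    volume = np.stack([(s ** 2) * gaussian_laplace(image, s) for s in sigmas])
    # Keep points whose |response| is maximal over the surrounding 3x3x3 neighbourhood.
    magnitude = np.abs(volume)
    local_max = magnitude == maximum_filter(magnitude, size=3)
    scale_idx, rows, cols = np.nonzero(local_max & (magnitude > threshold))
    return [(int(r), int(c), sigmas[k]) for k, r, c in zip(scale_idx, rows, cols)]

# Hypothetical test image: a single bright Gaussian blob of standard deviation 5.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2))
print(log_blobs(img, sigmas=[2, 3, 4, 5, 6, 7]))  # one detection near (32, 32) at sigma 5
```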
Note that this notion of blob provides a concise and mathematically precise operational definition of the notion of "blob", which directly leads to an efficient and robust algorithm for blob detection. Some basic properties of blobs defined from scale-space maxima of the normalized Laplacian operator are that the responses are covariant with translations, rotations and rescalings in the image domain. Thus, if a scale-space maximum is assumed at a point formula_14, then under a rescaling of the image by a scale factor formula_15, there will be a scale-space maximum at formula_16 in the rescaled image (Lindeberg 1998). This property, which is highly useful in practice, implies that, besides the specific topic of Laplacian blob detection, "local maxima/minima of the scale-normalized Laplacian are also used for scale selection in other contexts", such as in corner detection, scale-adaptive feature tracking (Bretzner and Lindeberg 1998), in the scale-invariant feature transform (Lowe 2004) as well as other image descriptors for image matching and object recognition.
The scale selection properties of the Laplacian operator and other closely related scale-space interest point detectors are analyzed in detail in (Lindeberg 2013a).
In (Lindeberg 2013b, 2015) it is shown that there exist other scale-space interest point detectors, such as the determinant of the Hessian operator, that perform better than the Laplacian operator or its difference-of-Gaussians approximation for image-based matching using local SIFT-like image descriptors.
The difference of Gaussians approach.
From the fact that the scale space representation formula_10 satisfies the diffusion equation
formula_17
it follows that the Laplacian of the Gaussian operator formula_18 can also be computed as the limit case of the difference between two Gaussian smoothed images (scale space representations)
formula_19.
In the computer vision literature, this approach is referred to as the difference of Gaussians (DoG) approach. Besides minor technicalities, however, this operator is in essence similar to the Laplacian and can be seen as an approximation of the Laplacian operator. In a similar fashion as for the Laplacian blob detector, blobs can be detected from scale-space extrema of differences of Gaussians—see (Lindeberg 2012, 2015) for the explicit relation between the difference-of-Gaussian operator and the scale-normalized Laplacian operator. This approach is for instance used in the scale-invariant feature transform (SIFT) algorithm—see Lowe (2004).
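A brief companion sketch (same assumptions as the Laplacian sketch above) of the difference-of-Gaussians approximation in formula_19, with scales sampled as t = sigma**2:

```python
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma, k=1.6):
    """Approximate the scale-normalised Laplacian at t = sigma**2 by a
    difference of Gaussians, (L(k*sigma) - L(sigma)) / (k**2 - 1),
    which corresponds to (t / delta_t) * (L(t + delta_t) - L(t))."""
    image = image.astype(float)
    return (gaussian_filter(image, k * sigma)
            - gaussian_filter(image, sigma)) / (k ** 2 - 1)
```

Stacking dog_response over a set of scales and searching for 3x3x3 extrema, as in the previous sketch, gives a DoG blob detector of the kind used in SIFT.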
The determinant of the Hessian.
By considering the scale-normalized determinant of the Hessian, also referred to as the Monge–Ampère operator,
formula_20
where formula_21 denotes the Hessian matrix of the scale-space representation formula_22 and then detecting scale-space maxima of this operator one obtains another straightforward differential blob detector with automatic scale selection which also responds to saddles (Lindeberg 1994, 1998)
formula_23.
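A corresponding sketch (again assuming SciPy; illustrative only) of the scale-normalised determinant of the Hessian, whose scale-space maxima can then be located exactly as in the Laplacian sketch above:

```python
from scipy.ndimage import gaussian_filter

def doh_response(image, sigma):
    """Scale-normalised determinant of the Hessian,
    t**2 * (Lxx * Lyy - Lxy**2), at scale t = sigma**2."""
    image = image.astype(float)
    Lxx = gaussian_filter(image, sigma, order=(0, 2))  # second derivative along x (columns)
    Lyy = gaussian_filter(image, sigma, order=(2, 0))  # second derivative along y (rows)
    Lxy = gaussian_filter(image, sigma, order=(1, 1))  # mixed derivative
    return (sigma ** 4) * (Lxx * Lyy - Lxy ** 2)
```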
The blob points formula_11 and scales formula_12 are also defined from an operational differential geometric definition that leads to blob descriptors that are covariant with translations, rotations and rescalings in the image domain. In terms of scale selection, blobs defined from scale-space extrema of the determinant of the Hessian (DoH) also have slightly better scale selection properties under non-Euclidean affine transformations than the more commonly used Laplacian operator (Lindeberg 1994, 1998, 2015). In simplified form, the scale-normalized determinant of the Hessian computed from Haar wavelets is used as the basic interest point operator in the SURF descriptor (Bay et al. 2006) for image matching and object recognition.
A detailed analysis of the scale selection properties of the determinant of the Hessian operator and other closely related scale-space interest point detectors is given in (Lindeberg 2013a), showing that the determinant of the Hessian operator has better scale selection properties under affine image transformations than the Laplacian operator.
In (Lindeberg 2013b, 2015) it is shown that the determinant of the Hessian operator performs significantly better than the Laplacian operator or its difference-of-Gaussians approximation, as well as better than the Harris or Harris-Laplace operators, for image-based matching using local SIFT-like or SURF-like image descriptors, leading to higher efficiency values and lower 1-precision scores.
The hybrid Laplacian and determinant of the Hessian operator (Hessian-Laplace).
A hybrid operator between the Laplacian and the determinant of the Hessian blob detectors has also been proposed, where spatial selection is done by the determinant of the Hessian and scale selection is performed with the scale-normalized Laplacian (Mikolajczyk and Schmid 2004):
formula_24
formula_25
This operator has been used for image matching, object recognition as well as texture analysis.
Affine-adapted differential blob detectors.
The blob descriptors obtained from these blob detectors with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain blob descriptors that are more robust to perspective transformations, a natural approach is to devise a blob detector that is "invariant to affine transformations". In practice, affine invariant interest points can be obtained by applying affine shape adaptation to a blob descriptor, where the shape of the smoothing kernel is iteratively warped to match the local image structure around the blob, or equivalently a local image patch is iteratively warped while the shape of the smoothing kernel remains rotationally symmetric (Lindeberg and Garding 1997; Baumberg 2000; Mikolajczyk and Schmid 2004, Lindeberg 2008). In this way, we can define affine-adapted versions of the Laplacian/Difference of Gaussian operator, the determinant of the Hessian and the Hessian-Laplace operator (see also Harris-Affine and Hessian-Affine).
Spatio-temporal blob detectors.
The determinant of the Hessian operator has been extended to joint space-time by Willems et al. and Lindeberg, leading to the following scale-normalized differential expression:
formula_26
In the work by Willems et al., a simpler expression corresponding to formula_27 and formula_28 was used. In Lindeberg, it was shown that formula_29 and formula_30 implies better scale selection properties in the sense that the selected scale levels obtained from a spatio-temporal Gaussian blob with spatial extent formula_31 and temporal extent formula_32 will perfectly match the spatial extent and the temporal duration of the blob, with scale selection performed by detecting spatio-temporal scale-space extrema of the differential expression.
The Laplacian operator has been extended to spatio-temporal video data by Lindeberg, leading to the following two spatio-temporal operators, which also constitute models of receptive fields of non-lagged vs. lagged neurons in the LGN:
formula_33
formula_34
For the first operator, scale selection properties call for using formula_27 and formula_35, if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of an onset Gaussian blob. For the second operator, scale selection properties call for using formula_27 and formula_36, if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of a blinking Gaussian blob.
Grey-level blobs, grey-level blob trees and scale-space blobs.
A natural approach to detect blobs is to associate a bright (dark) blob with each local maximum (minimum) in the intensity landscape. A main problem with such an approach, however, is that local extrema are very sensitive to noise. To address this problem, Lindeberg (1993, 1994) studied the problem of detecting local maxima with extent at multiple scales in scale space. A region with spatial extent defined from a watershed analogy was associated with each local maximum, as well as a local contrast defined from a so-called delimiting saddle point. A local extremum with extent defined in this way was referred to as a "grey-level blob". Moreover, by proceeding with the watershed analogy beyond the delimiting saddle point, a "grey-level blob tree" was defined to capture the nested topological structure of level sets in the intensity landscape, in a way that is invariant to affine deformations in the image domain and monotone intensity transformations. By studying how these structures evolve with increasing scales, the notion of "scale-space blobs" was introduced. Beyond local contrast and extent, these scale-space blobs also measured how stable image structures are in scale-space, by measuring their "scale-space lifetime".
It was proposed that regions of interest and scale descriptors obtained in this way, with associated scale levels defined from the scales at which normalized measures of blob strength assumed their maxima over scales, could be used for guiding other early visual processing. An early prototype of simplified vision systems was developed where such regions of interest and scale descriptors were used for directing the focus-of-attention of an active vision system. While the specific technique that was used in these prototypes can be substantially improved with the current knowledge in computer vision, the overall general approach is still valid, for example in the way that local extrema over scales of the scale-normalized Laplacian operator are nowadays used for providing scale information to other visual processes.
Lindeberg's watershed-based grey-level blob detection algorithm.
For the purpose of detecting "grey-level blobs" (local extrema with extent) from a watershed analogy, Lindeberg developed an algorithm based on "pre-sorting" the pixels, or alternatively connected regions having the same intensity, in decreasing order of the intensity values. Comparisons were then made between nearest neighbours of either pixels or connected regions. For simplicity, consider the case of detecting bright grey-level blobs and let the notation "higher neighbour" stand for "neighbour pixel having a higher grey-level value". Then, the classification at any stage of the algorithm (carried out in decreasing order of intensity values) is based on the following rules:
Compared to other watershed methods, the flooding in this algorithm stops once the intensity level falls below the intensity value of the so-called "delimiting saddle point" associated with the local maximum. However, it is rather straightforward to extend this approach to other types of watershed constructions. For example, by proceeding beyond the first delimiting saddle point a "grey-level blob tree" can be constructed. Moreover, the grey-level blob detection method was embedded in a scale space representation and performed at all levels of scale, resulting in a representation called the "scale-space primal sketch".
This algorithm with its applications in computer vision is described in more detail in Lindeberg's thesis as well as the monograph on scale-space theory partially based on that work. Earlier presentations of this algorithm can also be found in . More detailed treatments of applications of grey-level blob detection and the scale-space primal sketch to computer vision and medical image analysis are given in .
Maximally stable extremal regions (MSER).
Matas et al. (2002) were interested in defining image descriptors that are robust under perspective transformations. They studied level sets in the intensity landscape and measured how stable these were along the intensity dimension. Based on this idea, they defined a notion of "maximally stable extremal regions" and showed how these image descriptors can be used as image features for stereo matching.
There are close relations between this notion and the above-mentioned notion of grey-level blob tree. The maximally stable extremal regions can be seen as making a specific subset of the grey-level blob tree explicit for further processing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x, y)"
},
{
"math_id": 1,
"text": "g(x, y, t) = \\frac{1}{2\\pi t} e^{-\\frac{x^2 + y^2}{2 t}}"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "L(x, y; t)\\ = g(x, y, t) * f(x, y)"
},
{
"math_id": 4,
"text": "\\nabla^2 L =L_{xx} + L_{yy}"
},
{
"math_id": 5,
"text": "r^2 = 2 t"
},
{
"math_id": 6,
"text": "r^2 = d t"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": "\\nabla^2_\\mathrm{norm} L = t \\, (L_{xx} + L_{yy})"
},
{
"math_id": 9,
"text": "\\nabla^2_\\mathrm{norm} L"
},
{
"math_id": 10,
"text": "L(x, y, t)"
},
{
"math_id": 11,
"text": "(\\hat{x}, \\hat{y})"
},
{
"math_id": 12,
"text": "\\hat{t}"
},
{
"math_id": 13,
"text": "(\\hat{x}, \\hat{y}; \\hat{t}) = \\operatorname{argmaxminlocal}_{(x, y; t)}((\\nabla^2_\\mathrm{norm} L)(x, y; t))"
},
{
"math_id": 14,
"text": "(x_0, y_0; t_0)"
},
{
"math_id": 15,
"text": "s"
},
{
"math_id": 16,
"text": "\\left(s x_0, s y_0; s^2 t_0\\right)"
},
{
"math_id": 17,
"text": "\\partial_t L = \\frac{1}{2} \\nabla^2 L"
},
{
"math_id": 18,
"text": "\\nabla^2 L(x, y, t)"
},
{
"math_id": 19,
"text": "\\nabla^2_\\mathrm{norm} L(x, y; t) \\approx \\frac{t}{\\Delta t} \\left( L(x, y; t+\\Delta t) - L(x, y; t) \\right) "
},
{
"math_id": 20,
"text": "\\det H_\\mathrm{norm} L = t^2 \\left(L_{xx} L_{yy} - L_{xy}^2\\right)"
},
{
"math_id": 21,
"text": "H L"
},
{
"math_id": 22,
"text": "L"
},
{
"math_id": 23,
"text": "(\\hat{x}, \\hat{y}; \\hat{t}) = \\operatorname{argmaxlocal}_{(x, y; t)}((\\det H_\\mathrm{norm} L)(x, y; t))"
},
{
"math_id": 24,
"text": "(\\hat{x}, \\hat{y}) = \\operatorname{argmaxlocal}_{(x, y)}((\\det H L)(x, y; t))"
},
{
"math_id": 25,
"text": "\\hat{t} = \\operatorname{argmaxminlocal}_{t}((\\nabla^2_\\mathrm{norm} L)(\\hat{x}, \\hat{y}; t))"
},
{
"math_id": 26,
"text": "\n\\det(H_{(x,y,t),\\mathrm{norm}} L) \n = s^{2 \\gamma_s} \\tau^{\\gamma_{\\tau}} \n \\left( L_{xx} L_{yy} L_{tt} + 2 L_{xy} L_{xt} L_{yt} \n - L_{xx} L_{yt}^2 - L_{yy} L_{xt}^2 - L_{tt} L_{xy}^2 \\right).\n"
},
{
"math_id": 27,
"text": "\\gamma_s = 1"
},
{
"math_id": 28,
"text": "\\gamma_{\\tau} = 1"
},
{
"math_id": 29,
"text": "\\gamma_s = 5/4"
},
{
"math_id": 30,
"text": "\\gamma_{\\tau} = 5/4"
},
{
"math_id": 31,
"text": "s = s_0"
},
{
"math_id": 32,
"text": "\\tau = \\tau_0"
},
{
"math_id": 33,
"text": "\n\\partial_{t,\\mathrm{norm}} (\\nabla_{(x,y),\\mathrm{norm}}^2 L) = s^{\\gamma_s} \\tau^{\\gamma_{\\tau}/2} (L_{xxt} + L_{yyt}),\n"
},
{
"math_id": 34,
"text": "\n\\partial_{tt,\\mathrm{norm}} (\\nabla_{(x,y),\\mathrm{norm}}^2 L) = s^{\\gamma_s} \\tau^{\\gamma_{\\tau}} (L_{xxtt} + L_{yytt}).\n"
},
{
"math_id": 35,
"text": "\\gamma_{\\tau} = 1/2"
},
{
"math_id": 36,
"text": "\\gamma_{\\tau} = 3/4"
}
]
| https://en.wikipedia.org/wiki?curid=6840205 |
684037 | Austerity | Economic policies intended to reduce government budget deficits
In economic policy, austerity is a set of political-economic policies that aim to reduce government budget deficits through spending cuts, tax increases, or a combination of both. There are three primary types of austerity measures: higher taxes to fund spending, raising taxes while cutting spending, and lower taxes and lower government spending. Austerity measures are often used by governments that find it difficult to borrow or meet their existing obligations to pay back loans. The measures are meant to reduce the budget deficit by bringing government revenues closer to expenditures. Proponents of these measures state that this reduces the amount of borrowing required and may also demonstrate a government's fiscal discipline to creditors and credit rating agencies and make borrowing easier and cheaper as a result.
In most macroeconomic models, austerity policies which reduce government spending lead to increased unemployment in the short term. These reductions in employment usually occur directly in the public sector and indirectly in the private sector. Where austerity policies are enacted using tax increases, these can reduce consumption by cutting household disposable income. Reduced government spending can reduce gross domestic product (GDP) growth in the short term as government expenditure is itself a component of GDP. In the longer term, reduced government spending can reduce GDP growth if, for example, cuts to education spending leave a country's workforce less able to do high-skilled jobs or if cuts to infrastructure investment impose greater costs on business than they saved through lower taxes. In both cases, if reduced government spending leads to reduced GDP growth, austerity may lead to a higher debt-to-GDP ratio than the alternative of the government running a higher budget deficit. In the aftermath of the Great Recession, austerity measures in many European countries were followed by rising unemployment and slower GDP growth. The result was increased debt-to-GDP ratios despite reductions in budget deficits.
Theoretically in some cases, particularly when the output gap is low, austerity can have the opposite effect and stimulate economic growth. For example, when an economy is operating at or near capacity, higher short-term deficit spending (stimulus) can cause interest rates to rise, resulting in a reduction in private investment, which in turn reduces economic growth. Where there is excess capacity, the stimulus can result in an increase in employment and output. Alberto Alesina, Carlo Favero, and Francesco Giavazzi argue that austerity can be expansionary in situations where government reduction in spending is offset by greater increases in aggregate demand (private consumption, private investment, and exports).
History.
The origin of modern austerity measures is mostly undocumented among academics. During the United States occupation of Haiti that began in 1915, the United States utilized austerity policies where American corporations received a low tax rate while Haitians saw their taxes increase, with a forced labor system creating a "corporate paradise" in occupied Haiti. Another historical example of contemporary austerity is Fascist Italy during a liberal period of the economy from 1922 to 1925. The fascist government utilized austerity policies to prevent the democratization of Italy following World War I, with Luigi Einaudi, Maffeo Pantaleoni, Umberto Ricci and Alberto de' Stefani leading this movement. Austerity measures used by the Weimar Republic of Germany were unpopular and contributed towards the increased support for the Nazi Party in the 1930s.
Justifications.
Austerity measures are typically pursued if there is a threat that a government cannot honour its debt obligations. This may occur when a government has borrowed in currencies that it has no right to issue, for example a South American country that borrows in US dollars. It may also occur if a country uses the currency of an independent central bank that is legally restricted from buying government debt, for example in the Eurozone.
In such a situation, banks and investors may lose confidence in a government's ability or willingness to pay, and either refuse to roll over existing debts, or demand extremely high interest rates. International financial institutions such as the International Monetary Fund (IMF) may demand austerity measures as part of Structural Adjustment Programmes when acting as lender of last resort.
Austerity policies may also appeal to the wealthier class of creditors, who prefer low inflation and the higher probability of payback on their government securities by less profligate governments. More recently austerity has been pursued after governments became highly indebted by assuming private debts following banking crises. (This occurred after Ireland assumed the debts of its private banking sector during the European debt crisis. This rescue of the private sector resulted in calls to cut back the profligacy of the public sector.)
According to Mark Blyth, the concept of austerity emerged in the 20th century, when large states acquired sizable budgets. However, Blyth argues that the theories and sensibilities about the role of the state and capitalist markets that underline austerity emerged from the 17th century onwards. Austerity is grounded in liberal economics' view of the state and sovereign debt as deeply problematic. Blyth traces the discourse of austerity back to John Locke's theory of private property and derivative theory of the state, David Hume's ideas about money and the virtue of merchants, and Adam Smith's theories on economic growth and taxes. On the basis of classic liberal ideas, austerity emerged as a doctrine of neoliberalism in the 20th century.
Economist David M. Kotz suggests that the implementation of austerity measures following the 2007–2008 financial crisis was an attempt to preserve the neoliberal capitalist model.
Theoretical considerations.
In the 1930s during the Great Depression, anti-austerity arguments gained more prominence. John Maynard Keynes became a well known anti-austerity economist, arguing that "The boom, not the slump, is the right time for austerity at the Treasury."
Contemporary Keynesian economists argue that budget deficits are appropriate when an economy is in recession, to reduce unemployment and help spur GDP growth. According to Paul Krugman, since a government is not like a household, reductions in government spending during economic downturns worsen the crisis.
Across an economy, one person's spending is another person's income. In other words, if everyone is trying to reduce their spending, the economy can be trapped in what economists call the paradox of thrift, worsening the recession as GDP falls. In the past this has been offset by encouraging consumerism to rely on debt, but after the 2008 crisis, this has looked like a less and less viable option for sustainable economics.
Krugman argues that, if the private sector is unable or unwilling to consume at a level that increases GDP and employment sufficiently, then the government should be spending more in order to offset the decline in private spending. Keynesian theory is proposed as being responsible for post-war boom years, before the 1970s, and when public sector investment was at its highest across Europe, partially encouraged by the Marshall Plan.
An important component of economic output is business investment, but there is no reason to expect it to stabilize at full utilization of the economy's resources. High business profits do not necessarily lead to increased economic growth. (When businesses and banks have a disincentive to spend accumulated capital, such as cash repatriation taxes from profits in overseas tax havens and interest on excess reserves paid to banks, increased profits can lead to decreasing growth.)
Economists Kenneth Rogoff and Carmen Reinhart wrote in April 2013, "Austerity seldom works without structural reforms – for example, changes in taxes, regulations and labor market policies – and if poorly designed, can disproportionately hit the poor and middle class. Our consistent advice has been to avoid withdrawing fiscal stimulus too quickly, a position identical to that of most mainstream economists."
To help improve the U.S. economy, they (Rogoff and Reinhart) advocated reductions in mortgage principal for 'underwater homes' – those whose negative equity (where the value of the asset is less than the mortgage principal) can lead to a stagnant housing market with no realistic opportunity to reduce private debts.
Multiplier effects.
In October 2012, the IMF announced that its forecasts for countries that implemented austerity programs have been consistently overoptimistic, suggesting that tax hikes and spending cuts have been doing more damage than expected and that countries that implemented fiscal stimulus, such as Germany and Austria, did better than expected.
The IMF reported that this was due to fiscal multipliers that were considerably larger than expected: for example, the IMF estimated that fiscal multipliers based on data from 28 countries ranged between 0.9 and 1.7. In other words, a 1% GDP fiscal consolidation (i.e., austerity) would reduce GDP between 0.9% and 1.7%, thus inflicting far more economic damage than the 0.5 previously estimated in IMF forecasts.
In many countries, little is known about the size of multipliers, as data availability limits the scope for empirical research.
For these countries, Nicoletta Batini, Luc Eyraud and Anke Weber propose a simple method—dubbed the "bucket approach"—to come up with reasonable multiplier estimates. The approach bunches countries into groups (or "buckets") with similar multiplier values, based on their characteristics, and taking into account the effect of (some) temporary factors such as the state of the business cycle.
Different tax and spending choices of equal magnitude have different economic effects:
For example, the U.S. Congressional Budget Office estimated that the payroll tax (levied on all wage earners) has a higher multiplier (impact on GDP) than does the income tax (which is levied primarily on wealthier workers). In other words, raising the payroll tax by $1 as part of an austerity strategy would slow the economy more than would raising the income tax by $1, resulting in less net deficit reduction.
In theory, it would stimulate the economy and reduce the deficit if the payroll tax were lowered and the income tax raised in equal amounts.
Crowding in or out.
The term "crowding out" refers to the extent to which an increase in the budget deficit offsets spending in the private sector. Economist Laura Tyson wrote in June 2012, "By itself an increase in the deficit, either in the form of an increase in government spending or a reduction in taxes, causes an increase in demand". How this affects output, employment, and growth depends on what happens to interest rates:
When the economy is operating near capacity, government borrowing to finance an increase in the deficit causes interest rates to rise and higher interest rates reduce or "crowd out" private investment, reducing growth. This theory explains why large and sustained government deficits take a toll on growth: they reduce capital formation. But this argument rests on how government deficits affect interest rates, and the relationship between government deficits and interest rates varies.
When there is considerable excess capacity, an increase in government borrowing to finance an increase in the deficit does not lead to higher interest rates and does not crowd out private investment. Instead, the higher demand resulting from the increase in the deficit bolsters employment and output directly. The resultant increase in income and economic activity in turn encourages, or "crowds in", additional private spending.
Some argue that the "crowding-in" model is an appropriate solution for current economic conditions.
Government budget balance as a sectoral component.
According to economist Martin Wolf, the U.S. and many Eurozone countries experienced rapid increases in their budget deficits in the wake of the 2008 crisis as a result of significant private-sector retrenchment and ongoing capital account surpluses.
Policy choices had little to do with these deficit increases. This makes austerity measures counterproductive. Wolf explained that government fiscal balance is one of three major financial sectoral balances in a country's economy, along with the foreign financial sector (capital account) and the private financial sector.
By definition, the sum of the surpluses or deficits across these three sectors must be zero. In the U.S. and many Eurozone countries other than Germany, a foreign financial surplus exists because capital is imported (net) to fund the trade deficit. Further, there is a private-sector financial surplus because household savings exceed business investment.
By definition, a government budget deficit must exist so all three net to zero: for example, the U.S. government budget deficit in 2011 was approximately 10% of GDP (8.6% of GDP of which was federal), offsetting a foreign financial surplus of 4% of GDP and a private-sector surplus of 6% of GDP.
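The three sectoral balances must sum to zero by accounting identity; a trivial check with the 2011 figures quoted above (illustrative only):

```python
government_balance = -10.0  # U.S. government budget deficit, % of GDP (2011)
foreign_balance = 4.0       # foreign financial surplus, % of GDP
private_balance = 6.0       # private-sector financial surplus, % of GDP
print(government_balance + foreign_balance + private_balance)  # 0.0
```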
Wolf explained in July 2012 that the sudden shift in the private sector from deficit to surplus forced the U.S. government balance into deficit: "The financial balance of the private sector shifted towards surplus by the almost unbelievable cumulative total of 11.2 per cent of gross domestic product between the third quarter of 2007 and the second quarter of 2009, which was when the financial deficit of US government (federal and state) reached its peak. ... No fiscal policy changes explain the collapse into massive fiscal deficit between 2007 and 2009, because there was none of any importance. The collapse is explained by the massive shift of the private sector from financial deficit into surplus or, in other words, from boom to bust."
Wolf also wrote that several European economies face the same scenario and that a lack of deficit spending would likely have resulted in a depression. He argued that a private-sector depression (represented by the private- and foreign-sector surpluses) was being "contained" by government deficit spending.
Economist Paul Krugman also explained in December 2011 the causes of the sizable shift from private-sector deficit to surplus in the U.S.: "This huge move into surplus reflects the end of the housing bubble, a sharp rise in household saving, and a slump in business investment due to lack of customers."
One reason why austerity can be counterproductive in a downturn is due to a significant private-sector financial surplus, in which consumer savings is not fully invested by businesses. In a healthy economy, private-sector savings placed into the banking system by consumers are borrowed and invested by companies. However, if consumers have increased their savings but companies are not investing the money, a surplus develops.
Business investment is one of the major components of GDP. For example, a U.S. private-sector financial deficit from 2004 to 2008 transitioned to a large surplus of savings over investment that exceeded $1 trillion by early 2009, and remained above $800 billion into September 2012. Part of this investment reduction was related to the housing market, a major component of investment. This surplus explains how even significant government deficit spending would not increase interest rates (because businesses still have access to ample savings if they choose to borrow and invest it, so interest rates are not bid upward) and how Federal Reserve action to increase the money supply does not result in inflation (because the economy is awash with savings with no place to go).
Economist Richard Koo described similar effects for several of the developed world economies in December 2011: "Today private sectors in the U.S., the U.K., Spain, and Ireland (but not Greece) are undergoing massive deleveraging [paying down debt rather than spending] in spite of record low interest rates. This means these countries are all in serious balance sheet recessions. The private sectors in Japan and Germany are not borrowing, either. With borrowers disappearing and banks reluctant to lend, it is no wonder that, after nearly three years of record low interest rates and massive liquidity injections, industrial economies are still doing so poorly. Flow of funds data for the U.S. show a massive shift away from borrowing to savings by the private sector since the housing bubble burst in 2007. The shift for the private sector as a whole represents over 9 percent of U.S. GDP at a time of zero interest rates. Moreover, this increase in private sector savings exceeds the increase in government borrowings (5.8 percent of GDP), which suggests that the government is not doing enough to offset private sector deleveraging."
Framing of the debate surrounding austerity.
Many scholars have argued that the way the debate surrounding austerity is framed has a heavy impact on how austerity is viewed by the public, and on how the public understands macroeconomics as a whole. Wren-Lewis, for example, coined the term 'mediamacro', which refers to "the role of the media reproducing particularly corrosive forms of economic illiteracy—of which the idea that deficits are ipso facto 'bad' is a strong example." This can go as far as ignoring economists altogether; more often, however, it manifests itself in a minority of economists whose ideas about austerity have been thoroughly debunked being pushed to the front to justify public policy, as in the case of Alberto Alesina (2009), whose pro-austerity works were "thoroughly debunked by the likes of the economists, the IMF, and the Centre for Budget and Policy Priorities (CBPP)." Other anti-austerity economists, such as Seymour, have argued that the debate must be reframed as one about a social and class movement, and its impact judged accordingly, since statecraft is viewed as the main goal.
Further, critics such as Major have highlighted how the OECD and associated international finance organisations have framed the debate to promote austerity, for example, the concept of 'wage-push inflation' which ignores the role played by the profiteering of private companies, and seeks to blame inflation on wages being too high.
Empirical considerations.
According to a 2020 study, austerity increases the risk of default in situations of severe fiscal stress, but reduces the risk of default in situations of low fiscal stress.
Europe.
A typical goal of austerity is to reduce the annual budget deficit without sacrificing growth. Over time, this may reduce the overall debt burden, often measured as the ratio of public debt to GDP.
Eurozone.
During the European debt crisis, many countries embarked on austerity programs, reducing their budget deficits relative to GDP from 2010 to 2011.
According to the "CIA World Factbook", Greece decreased its budget deficit from 10.4% of GDP in 2010 to 9.6% in 2011. Iceland, Italy, Ireland, Portugal, France, and Spain also decreased their budget deficits from 2010 to 2011 relative to GDP but the austerity policy of the Eurozone achieves not only the reduction of budget deficits. The goal of economic consolidation influences the future development of the European social model.
With the exception of Germany, each of these countries had public-debt-to-GDP ratios that increased from 2010 to 2011, as indicated in the chart at right. Greece's public-debt-to-GDP ratio increased from 143% in 2010 to 165% in 2011, indicating that, despite declining budget deficits, GDP growth was not sufficient to support a decline in the debt-to-GDP ratio for these countries during this period.
Eurostat reported that the overall debt-to-GDP ratio for the EA17 was 70.1% in 2008, 80.0% in 2009, 85.4% in 2010, 87.3% in 2011, and 90.6% in 2012.
Further, real GDP in the EA17 declined for six straight quarters from Q4 2011 to Q1 2013.
Unemployment is another variable considered in evaluating austerity measures. According to the "CIA World Factbook", from 2010 to 2011, the unemployment rates in Spain, Greece, Ireland, Portugal, and the UK increased. France and Italy had no significant changes, while in Germany and Iceland the unemployment rate declined. Eurostat reported that Eurozone unemployment reached record levels in March 2013 at 12.1%, up from 11.6% in September 2012 and 10.3% in 2011. Unemployment varied significantly by country.
Economist Martin Wolf analyzed the relationship between cumulative GDP growth from 2008 to 2012 and total reduction in budget deficits due to austerity policies in several European countries during April 2012 (see chart at right). He concluded, "In all, there is no evidence here that large fiscal contractions [budget deficit reductions] bring benefits to confidence and growth that offset the direct effects of the contractions. They bring exactly what one would expect: small contractions bring recessions and big contractions bring depressions."
Changes in budget balances (deficits or surpluses) explained approximately 53% of the change in GDP, according to the equation derived from the IMF data used in his analysis.
Similarly, economist Paul Krugman analyzed the relationship between GDP and reduction in budget deficits for several European countries in April 2012 and concluded that austerity was slowing growth. He wrote: "this also implies that 1 euro of austerity yields only about 0.4 euros of reduced deficit, even in the short run. No wonder, then, that the whole austerity enterprise is spiraling into disaster."
Greece.
The Greek government-debt crisis brought a package of austerity measures, put forth by the EU and the IMF mostly in the context of the three successive bailouts the country endured from 2010 to 2018; it was met with great anger by the Greek public, leading to riots and social unrest. On 27 June 2011, trade union organizations began a 48-hour labour strike in advance of a parliamentary vote on the austerity package, the first such strike since 1974.
Massive demonstrations were organized throughout Greece, intended to pressure members of parliament into voting against the package. The second set of austerity measures was approved on 29 June 2011, with 155 out of 300 members of parliament voting in favor. However, one United Nations official warned that the second package of austerity measures in Greece could pose a violation of human rights.
Around 2011, the IMF started issuing guidance suggesting that austerity could be harmful when applied without regard to an economy's underlying fundamentals.
In 2013, it published a detailed analysis concluding that "if financial markets focus on the short-term behavior of the debt ratio, or if country authorities engage in repeated rounds of tightening in an effort to get the debt ratio to converge to the official target", austerity policies could slow or reverse economic growth and inhibit full employment. Keynesian economists and commentators such as Paul Krugman have suggested that this has in fact been occurring, with austerity yielding worse results in proportion to the extent to which it has been imposed.
Overall, Greece lost 25% of its GDP during the crisis. Although the government debt increased only 6% between 2009 and 2017 (from €300 bn to €318 bn), thanks in part to the 2012 debt restructuring, the critical debt-to-GDP ratio shot up from 127% to 179%, mostly due to the severe GDP drop during the handling of the crisis. In all, the Greek economy suffered the longest recession of any advanced capitalist economy to date, overtaking the US Great Depression. The crisis adversely affected the populace, as the series of sudden reforms and austerity measures led to impoverishment and loss of income and property, as well as a small-scale humanitarian crisis. Unemployment shot up from 8% in 2008 to 27% in 2013 and remained at 22% in 2017. As a result of the crisis, the Greek political system was upended, social exclusion increased, and hundreds of thousands of well-educated Greeks left the country.
France.
In April and May 2012, France held a presidential election in which the winner, François Hollande, had opposed austerity measures, promising to eliminate France's budget deficit by 2017 by canceling recently enacted tax cuts and exemptions for the wealthy, raising the top tax bracket rate to 75% on incomes over one million euros, restoring the retirement age to 60 with a full pension for those who have worked 42 years, restoring 60,000 jobs recently cut from public education, regulating rent increases, and building additional public housing for the poor. In the legislative elections in June, Hollande's Socialist Party won a supermajority capable of amending the French Constitution and enabling the immediate enactment of the promised reforms. Interest rates on French government bonds fell by 30% to record lows, fewer than 50 basis points above German government bond rates.
Latvia.
Latvia's economy returned to growth in 2011 and 2012, outpacing the 27 nations in the EU, while implementing significant austerity measures. Advocates of austerity argue that Latvia represents an empirical example of the benefits of austerity, while critics argue that austerity created unnecessary hardship with the output in 2013 still below the pre-crisis level. While Anders Åslund maintains that internal devaluation was not opposed by the Latvian public, Jokubas Salyga has recently chronicled widespread protests against austerity in the country.
According to the CIA World Fact Book, "Latvia's economy experienced GDP growth of more than 10% per year during 2006–07, but entered a severe recession in 2008 as a result of an unsustainable current account deficit and large debt exposure amid the softening world economy. Triggered by the collapse of the second largest bank, GDP plunged 18% in 2009. The economy has not returned to pre-crisis levels despite strong growth, especially in the export sector in 2011–12. The IMF, EU, and other international donors provided substantial financial assistance to Latvia as part of an agreement to defend the currency's peg to the euro in exchange for the government's commitment to stringent austerity measures.
The IMF/EU program successfully concluded in December 2011. The government of Prime Minister Valdis Dombrovskis remained committed to fiscal prudence and reducing the fiscal deficit from 7.7% of GDP in 2010 to 2.7% of GDP in 2012." The CIA estimated that Latvia's GDP declined by 0.3% in 2010, then grew by 5.5% in 2011 and 4.5% in 2012. Unemployment was 12.8% in 2011 and rose to 14.3% in 2012. Latvia's currency, the lats, fell from 0.47 lati per U.S. dollar in 2008 to 0.55 in 2012, a decline of 17%. Latvia entered the euro zone in 2014. Latvia's trade deficit improved from over 20% of GDP in 2006–07 to under 2% of GDP by 2012.
Eighteen months after harsh austerity measures were enacted (including both spending cuts and tax increases), economic growth began to return, although unemployment remained above pre-crisis levels. Latvian exports have skyrocketed and both the trade deficit and budget deficit have decreased dramatically. More than one-third of government positions were eliminated, and the rest received sharp pay cuts. Exports increased after goods prices were reduced due to private business lowering wages in tandem with the government.
Paul Krugman wrote in January 2013 that Latvia had yet to regain its pre-crisis level of employment. He also wrote, "So we're looking at a Depression-level slump, and 5 years later only a partial bounceback; unemployment is down but still very high, and the decline has a lot to do with emigration. It's not what you'd call a triumphant success story, any more than the partial US recovery from 1933 to 1936—which was actually considerably more impressive—represented a huge victory over the Depression. And it's in no sense a refutation of Keynesianism, either. Even in Keynesian models, a small open economy can, in the long run, restore full employment through deflation and internal devaluation; the point, however, is that it involves many years of suffering".
Latvian Prime Minister Valdis Dombrovskis defended his policies in a television interview, stating that Krugman refused to admit his error in predicting that Latvia's austerity policy would fail. Krugman had written a blog post in December 2008 entitled "Why Latvia is the New Argentina", in which he argued for Latvia to devalue its currency as an alternative or in addition to austerity.
United Kingdom.
Post war austerity.
Following the Second World War, the United Kingdom had huge debts, large commitments, and had sold many income-producing assets. Rationing of food and other goods, which had started during the war, continued for some years.
21st century austerity programme.
Following the financial crisis of 2007–2008, a period of economic recession began in the UK. The austerity programme was initiated in 2010 by the Conservative and Liberal Democrat coalition government, despite some opposition from the academic community. In his June 2010 budget speech, the Chancellor George Osborne identified two goals. The first was that the structural current budget deficit would be eliminated to "achieve cyclically-adjusted current balance by the end of the rolling, five-year forecast period". The second was that national debt as a percentage of GDP would fall. The government intended to achieve both of its goals through a combination of substantial reductions in public expenditure and tax increases.
Economists Alberto Alesina, Carlo A. Favero and Francesco Giavazzi, writing in "Finance & Development" in 2018, argued that deficit reduction policies based on spending cuts typically have almost no effect on output, and hence form a better route to achieving a reduction in the debt-to-GDP ratio than raising taxes. The authors commented that the UK government austerity programme had resulted in growth that was higher than the European average and that the UK's economic performance had been much stronger than the International Monetary Fund had predicted. This claim was challenged most strongly by Mark Blyth, whose 2014 book on austerity claims that austerity not only fails to stimulate growth, but effectively passes that debt down to the working classes. Accordingly, many academics, such as Andrew Gamble, view austerity in Britain less as an economic necessity and more as a tool of statecraft, driven by ideology rather than economic requirements.
A study published in "The BMJ" in November 2017 found the Conservative government austerity programme had been linked to approximately 120,000 deaths since 2010; however, this was disputed, for example on the grounds that it was an observational study which did not show cause and effect. Further studies claim adverse effects of austerity on population health, including an increase in the mortality rate among pensioners which has been linked to unprecedented reductions in income support, an increase in suicides and in the prescription of antidepressants for patients with mental health issues, and an increase in violence, self-harm, and suicide in prisons.
United States.
The United States' response to the 2008 economic crash was largely influenced by Wall Street and IMF interests, which favored fiscal retrenchment in the face of the crisis. Evidence exists to suggest that Pete Peterson (and the Petersonites) have heavily influenced US policy on economic recovery since the Nixon era, and this influence presented itself again in 2008, despite austerity measures being "wildly out of step with public opinion and reputable economic policy...[and showing] anti-Keynesian bias of supply-side economics and a political system skewed to favor Wall Street over Main Street". The nuance of the economic logic of Keynesianism is, however, difficult to put across to the American public, and it compares poorly with the simplistic message that blames government spending. This might explain Obama's preferred position of a compromise, economic stimulus followed by austerity, for which he was criticized by economists such as Joseph Stiglitz.
Controversy.
Austerity programs can be controversial. In the Overseas Development Institute (ODI) briefing paper "The IMF and the Third World", the ODI addresses five major complaints against the IMF's austerity conditions. Complaints include such measures being "anti-developmental", "self-defeating", and tending "to have an adverse impact on the poorest segments of the population".
In many situations, austerity programs are implemented by countries that were previously under dictatorial regimes, leading to criticism that citizens are forced to repay the debts of their oppressors.
In 2009, 2010, and 2011, workers and students in Greece and other European countries demonstrated against cuts to pensions, public services, and education spending as a result of government austerity measures.
Following the announcement of plans to introduce austerity measures in Greece, massive demonstrations occurred throughout the country aimed at pressing parliamentarians to vote against the austerity package. In Athens alone, 19 arrests were made, while 46 civilians and 38 policemen had been injured by 29 June 2011. The third round of austerity was approved by the Greek parliament on 12 February 2012 and met strong opposition, especially in Athens and Thessaloniki, where police clashed with demonstrators.
Opponents argue that austerity measures depress economic growth and ultimately cause reduced tax revenues that outweigh the benefits of reduced public spending. Moreover, in countries with already anemic economic growth, austerity can engender deflation, which inflates existing debt. Such austerity packages can also cause the country to fall into a liquidity trap, causing credit markets to freeze up and unemployment to increase. Opponents point to cases in Ireland and Spain in which austerity measures instituted in response to financial crises in 2009 proved ineffective in combating public debt and placed those countries at risk of defaulting in late 2010.
In October 2012, the IMF announced that its forecasts for countries that implemented austerity programs have been consistently overoptimistic, suggesting that tax hikes and spending cuts have been doing more damage than expected and that countries that implemented fiscal stimulus, such as Germany and Austria, did better than expected. These data have been scrutinized by the "Financial Times", which found no significant trends when outliers like Germany and Greece were excluded. Determining the multipliers used in the research to achieve the results found by the IMF was also described as an "exercise in futility" by Professor Carlos Vegh of the University of Michigan. Moreover, Barry Eichengreen of the University of California, Berkeley and Kevin H. O'Rourke of Oxford University write that the IMF's new estimate of the extent to which austerity restricts growth was much lower than historical data suggest.
On 3 February 2015, Joseph Stiglitz wrote: "Austerity had failed repeatedly from its early use under US president Herbert Hoover, which turned the stock-market crash into the Great Depression, to the IMF programs imposed on East Asia and Latin America in recent decades. And yet when Greece got into trouble, it was tried again." Government spending actually rose significantly under Hoover, while revenues were flat.
According to a 2020 study, which used survey experiments in the UK, Portugal, Spain, Italy and Germany, voters strongly disapprove of austerity measures, in particular spending cuts. Voters disapprove of fiscal deficits but not as strongly as austerity. A 2021 study found that incumbent European governments that implemented austerity measures in the Great Recession lost support in opinion polls.
Austerity has been blamed for at least 120,000 deaths between 2010 and 2017 in the UK, with one study putting it at 130,000 and another at 30,000 in 2015 alone. The first study added that "no firm conclusions can be drawn about cause and effect, but the findings back up other research in the field" and campaigners have claimed that cuts to benefits, healthcare and mental health services lead to more deaths including through suicide.
Balancing stimulus and austerity.
Strategies that involve short-term stimulus with longer-term austerity are not mutually exclusive. Steps can be taken in the present that will reduce future spending, such as "bending the curve" on pensions by reducing cost of living adjustments or raising the retirement age for younger members of the population, while at the same time creating short-term spending or tax cut programs to stimulate the economy to create jobs.
IMF managing director Christine Lagarde wrote in August 2011, "For the advanced economies, there is an unmistakable need to restore fiscal sustainability through credible consolidation plans. At the same time we know that slamming on the brakes too quickly will hurt the recovery and worsen job prospects. So fiscal adjustment must resolve the conundrum of being neither too fast nor too slow. Shaping a Goldilocks fiscal consolidation is all about timing. What is needed is a dual focus on medium-term consolidation and short-term support for growth. That may sound contradictory, but the two are mutually reinforcing. Decisions on future consolidation, tackling the issues that will bring sustained fiscal improvement, create space in the near term for policies that support growth."
Federal Reserve Chair Ben Bernanke wrote in September 2011, "the two goals—achieving fiscal sustainability, which is the result of responsible policies set in place for the longer term, and avoiding creation of fiscal headwinds for the recovery—are not incompatible. Acting now to put in place a credible plan for reducing future deficits over the long term, while being attentive to the implications of fiscal choices for the recovery in the near term, can help serve both objectives."
"Age of austerity".
The term "age of austerity" was popularised by UK Conservative Party leader David Cameron in his keynote speech to the Conservative Party forum in Cheltenham on 26 April 2009, in which he committed to end years of what he called "excessive government spending". Theresa May claimed that "Austerity is over" as of 3 October 2018, a statement which was almost immediately met with criticism on the reality of its central claim, particularly in relation to the high possibility of a substantial economic downturn due to Brexit.
Word of the year.
"Merriam-Webster's Dictionary" named the word "austerity" as its "Word of the year" for 2010 because of the number of web searches this word generated that year. According to the president and publisher of the dictionary, ""austerity" had more than 250,000 searches on the dictionary's free online [website] tool" and the spike in searches "came with more coverage of the debt crisis".
Criticism.
According to economist David Stuckler and physician Sanjay Basu in their study "The Body Economic: Why Austerity Kills", a health crisis is being triggered by austerity policies, including up to 10,000 additional suicides that have occurred across Europe and the U.S. since the introduction of austerity programs.
Much of the acceptance of austerity in the general public has centred on the way debate has been framed, and relates to an issue with representative democracy; since the public do not have widely available access to the latest economic research, which is highly critical of economic retrenchment in times of crisis, the public must rely on which politician sounds most plausible. This can unfortunately lead to authoritative leaders pursuing policies which make little, if any, economic sense.
An analysis by Hübscher et al. of 166 elections across Europe since 1980 demonstrates that austerity measures lead to increased electoral abstention and a rise in votes for non-mainstream parties, thereby exacerbating political polarization. Their detailed examination of specific austerity episodes reveals that new, small, and radical parties are the primary beneficiaries of such policies.
A study by Gabriel et al., analyzing elections in 124 European regions from eight countries between 1980 and 2015, found that fiscal consolidations increased the vote share of extreme parties, lowered voter turnout, and heightened political fragmentation. Notably, after the European debt crisis, a 1% reduction in regional public spending resulted in an approximate 3 percentage point rise in the vote share of extreme parties. The findings suggest that austerity measures diminish trust in political institutions and encourage support for more extreme political positions.
According to a 2020 study, austerity does not pay off in terms of reducing the default premium in situations of severe fiscal stress. Rather, austerity increases the default premium. However, in situations of low fiscal stress, austerity does reduce the default premium. The study also found that increases in government consumption had no substantial impact on the default premium.
Clara E. Mattei, assistant professor of economics at the New School for Social Research, posits that austerity is less of a means to "fix the economy" and is more of an ideological weapon of class oppression wielded by economic and political elites in order to suppress revolts and unrest by the working class public and close off any alternatives to the capitalist system. She traces the origins of modern austerity to post-World War I Britain and Italy, when it served as a "powerful counteroffensive" to rising working class agitation and anti-capitalist sentiment. In this, she quotes British economist G. D. H. Cole writing on the British response to the economic downturn of 1921:
"The big working-class offensive had been successfully stalled off; and British capitalism, though threatened with economic adversity, felt itself once more safely in the saddle and well able to cope, both industrially and politically, with any attempt that might still be made from the labour side to unseat it."
DeLong–Summers condition.
J. Bradford DeLong and Lawrence Summers explained why an expansionary fiscal policy is effective in reducing a government's future debt burden, pointing out that the policy has a positive impact on its future productivity level. They pointed out that when an economy is depressed and its nominal interest rate is near zero, the real interest rate charged to firms formula_0 is linked to the output as formula_1. This means that the rate decreases as the real GDP increases, and the actual fiscal multiplier formula_2 is higher than that in normal times; a fiscal stimulus is more effective for the case where the interest rates are at the zero bound. As the economy is boosted by government spending, the increased output yields higher tax revenue, and so we have
formula_3
where formula_4 is a baseline marginal tax-and-transfer rate. Also, we need to take account of the economy's long-run growth rate formula_5, as a steady economic growth rate may reduce its debt-to-GDP ratio. Then we can see that an expansionary fiscal policy is self-financing:
formula_6
formula_7
as long as formula_8 is less than zero. It follows that a fiscal stimulus improves the long-term budget balance if the real government borrowing rate satisfies the following condition:
formula_9
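As a rough numerical illustration of the condition above, the following minimal Python sketch simply evaluates the inequality for hypothetical parameter values (the numbers are illustrative only and are not estimates from DeLong and Summers; the parameter η that appears in the condition is treated here as a given coefficient):
<syntaxhighlight lang="python">
# Minimal sketch of the DeLong-Summers self-financing check.
# All parameter values are illustrative assumptions, not estimates.

def stimulus_is_self_financing(r, g, mu, tau, eta):
    """True if (r - g) * (1 - mu * tau) < tau * eta * mu,
    i.e. the stimulus improves the long-term budget balance."""
    return (r - g) * (1 - mu * tau) < tau * eta * mu

# Hypothetical values: real borrowing rate 1%, trend growth 2.5%,
# fiscal multiplier 1.5, marginal tax-and-transfer rate 1/3, eta 0.05.
print(stimulus_is_self_financing(r=0.01, g=0.025, mu=1.5, tau=1/3, eta=0.05))
# Prints True: with r < g the left-hand side is already negative.
</syntaxhighlight>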
Impacts on short-run budget deficit.
Research by Gauti Eggertsson et al. indicates that a government's fiscal austerity measures actually increase its short-term budget deficit if the nominal interest rate is very low. In normal times, the government sets the tax rates formula_10 and the central bank controls the nominal interest rate formula_11. If the rate is so low that monetary policy cannot mitigate the negative impact of the austerity measures, the resulting contraction of the tax base reduces government revenue and worsens the budget position. If the multiplier satisfies
formula_12
then we have formula_13, where
formula_14
That is, the austerity measures are counterproductive in the short run as long as the multiplier is larger than a certain level formula_15. This erosion of the tax base is the effect of the endogenous component of the deficit. Therefore, if the government increases sales taxes, it reduces the tax base through its negative effect on demand, and thereby worsens the budget balance.
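To make the threshold concrete, the sketch below evaluates formula_15 for hypothetical parameter values (chosen only for illustration; they are not taken from Eggertsson's work) and shows how it would be compared with an assumed multiplier:
<syntaxhighlight lang="python">
# Illustrative evaluation of the threshold multiplier gamma defined above:
# gamma = (1 + tau_s + theta * psi / sigma) / (tau_i + tau_s + theta),
# with  theta = (b / Y) * (1 + i) * kappa / (1 - beta * mu).
# All numbers below are hypothetical and only show how the formula is used.

def gamma_threshold(tau_s, tau_i, sigma, psi, b_over_Y, i, kappa, beta, mu):
    theta = b_over_Y * (1 + i) * kappa / (1 - beta * mu)
    return (1 + tau_s + theta * psi / sigma) / (tau_i + tau_s + theta)

gamma = gamma_threshold(tau_s=0.1, tau_i=0.25, sigma=2.0, psi=1.0,
                        b_over_Y=0.8, i=0.0, kappa=0.02, beta=0.99, mu=0.7)
multiplier = 1.5                      # assumed actual multiplier dY/dG
print(gamma, multiplier > gamma)      # austerity raises the short-run deficit
                                      # only if the multiplier exceeds gamma
</syntaxhighlight>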
No credit risk.
For a country that has its own currency, the government can create credit by itself, and its central bank can keep the interest rate close to or equal to the nominal risk-free rate. Former Federal Reserve chairman Alan Greenspan has said that the probability that the US defaults on its debt repayment is zero, because the US government can print money. The Federal Reserve Bank of St. Louis states that the US government's debt is denominated in US dollars; therefore the government will never go bankrupt, though it may introduce the risk of inflation.
Alternatives to austerity.
A number of alternative plans have been used or proposed in place of austerity measures.
Alternatives to implementing austerity measures may utilise increased government borrowing in the short term (such as for infrastructure development and public works projects) in an attempt to achieve long-term economic growth. Alternatively, instead of government borrowing, governments can raise taxes to fund public sector activity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " r^{f} "
},
{
"math_id": 1,
"text": " \\frac{\\partial r^{f} }{\\partial Y} = - \\delta "
},
{
"math_id": 2,
"text": " \\mu "
},
{
"math_id": 3,
"text": " \\frac{\\partial D}{\\partial G} = 1 - \\mu \\tau \\; , "
},
{
"math_id": 4,
"text": " \\tau "
},
{
"math_id": 5,
"text": " g "
},
{
"math_id": 6,
"text": " (r - g) dD - \\tau dY = (r - g )(1 - \\mu \\tau) dG - \\tau \\eta \\mu dG "
},
{
"math_id": 7,
"text": " \\frac{\\partial B}{\\partial G} = (r-g) (1 - \\mu \\tau) - \\tau \\eta \\mu \\; , "
},
{
"math_id": 8,
"text": " \\frac{\\partial B}{\\partial G} "
},
{
"math_id": 9,
"text": " (r-g) (1 - \\mu \\tau) < \\tau \\eta \\mu "
},
{
"math_id": 10,
"text": " \\tau_{s}, \\tau_{i} "
},
{
"math_id": 11,
"text": " i "
},
{
"math_id": 12,
"text": " \\frac{\\partial Y}{\\partial G} > \\gamma \n= \\frac{1 + \\tau_{s} + \\theta \\sigma^{-1} \\psi }{\\tau_{i} + \\tau_{s} + \\theta} "
},
{
"math_id": 13,
"text": " \\frac{\\partial D}{\\partial G} < 0 \\; "
},
{
"math_id": 14,
"text": " \\theta = \\frac{b}{Y } (1+i) \\frac{ \\kappa }{ (1-\\beta \\mu) } \\; . "
},
{
"math_id": 15,
"text": " \\gamma "
}
]
| https://en.wikipedia.org/wiki?curid=684037 |
68403860 | Double extension set theory | Axiomatic set theory
In mathematics, the Double extension set theory (DEST) is an axiomatic set theory proposed by Andrzej Kisielewicz consisting of two separate membership relations on the universe of sets, denoted here by formula_0 and formula_1, and a set of axioms relating the two. The intention behind defining the two membership relations is to avoid the usual paradoxes of set theory, without substantially weakening the axiom of unrestricted comprehension.
Intuitively, in DEST, comprehension is used to define the elements of a set under one membership relation using formulas that involve only the other membership relation. Let formula_2 be a first-order formula with free variable formula_3 in the language of DEST not involving the membership relation formula_1. Then, the axioms of DEST posit a set formula_4 such that formula_5. For instance, formula_6 is a formula involving only formula_0, and thus DEST posits the Russell set formula_7, where formula_8. Observe that for formula_9, we obtain formula_10. Since the two membership relations are different, Russell's paradox is thus avoided.
The focus in DEST is on regular sets, which are sets whose extensions under the two membership relations coincide, i.e., sets formula_11 for which it holds that formula_12. The preceding discussion suggests that the Russell set formula_7 cannot be regular, as otherwise it would lead to Russell's paradox.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\in"
},
{
"math_id": 1,
"text": "\\varepsilon"
},
{
"math_id": 2,
"text": "\\phi(x)"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "A = \\{ x | \\phi(x)\\}"
},
{
"math_id": 5,
"text": "x \\varepsilon A \\iff \\phi(x)"
},
{
"math_id": 6,
"text": "x \\notin x"
},
{
"math_id": 7,
"text": "R = \\{ x | x \\notin x\\}"
},
{
"math_id": 8,
"text": " x \\varepsilon R \\iff x \\notin x"
},
{
"math_id": 9,
"text": "x = R"
},
{
"math_id": 10,
"text": " R \\varepsilon R \\iff R \\notin R"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "\\forall x. x \\in A \\iff x \\varepsilon A"
}
]
| https://en.wikipedia.org/wiki?curid=68403860 |
68407315 | Manganese oxalate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Manganese oxalate is a chemical compound, a salt of manganese and oxalic acid with the chemical formula MnC2O4. The compound forms light pink crystals, is insoluble in water, and forms crystalline hydrates. It occurs naturally as the mineral lindbergite.
Synthesis.
Exchange reaction between sodium oxalate and manganese chloride:
formula_0
Physical properties.
Manganese oxalate forms light pink crystals.
It is insoluble in water (pKsp = 6.8).
It forms crystalline hydrates of the composition MnC2O4•"n"H2O, where "n" = 2 or 3.
The crystalline hydrate of composition MnC2O4•2H2O forms light pink crystals of the orthorhombic system, space group "P"212121, with cell parameters a = 0.6262 nm, b = 1.3585 nm, c = 0.6091 nm, Z = 4; it melts in its own water of crystallization at 100 °C.
Chemical properties.
Decomposes on heating:
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ MnCl_2 + Na_2C_2O_4 + 2H_2O \\ \\xrightarrow{}\\ MnC_2O_4\\cdot 2H_2O\\downarrow + 2NaCl }"
},
{
"math_id": 1,
"text": "\\mathsf{ MnC_2O_4 \\ \\xrightarrow{215^oC}\\ MnO + CO\\uparrow + CO_2\\uparrow }"
}
]
| https://en.wikipedia.org/wiki?curid=68407315 |
68407499 | Mathematics of apportionment | Mathematical principles
In mathematics and social choice, apportionment problems are a class of fair division problems where the goal is to divide ("apportion") a whole number of identical goods fairly between multiple groups with different entitlements. The original example of an apportionment problem involves distributing seats in a legislature between different federal states or political parties. However, apportionment methods can be applied to other situations as well, including bankruptcy problems, inheritance law (e.g. dividing animals), manpower planning (e.g. demographic quotas), and rounding percentages.
Mathematically, an apportionment method is just a method of rounding real numbers to integers. Despite the simplicity of this problem, every rounding method suffers from one or more paradoxes, as proven by the Balinski-Young theorem. The mathematical theory of apportionment identifies which properties can be expected from an apportionment method.
The mathematical theory of apportionment was studied as early as 1907 by the mathematician Agner Krarup Erlang. It was later developed in great detail by the mathematician Michel Balinski and the economist Peyton Young.
Definitions.
Input.
The inputs to an apportionment method are the number of items to allocate, denoted by formula_0 (for example, the house size, i.e., the total number of seats); the number of agents among whom the items are divided, denoted by formula_1 (for example, states or parties); and a vector of entitlements formula_2, where formula_3 denotes the entitlement of agent formula_4, normalized so that formula_5. Entitlements are often derived from a vector of populations (or vote counts) formula_9 via formula_10.
Output.
The output is a vector of integers formula_11 with formula_12, called an "apportionment" of formula_0, where formula_13 is the number of items allocated to agent "i".
For each agent formula_4, the real number formula_7 is called the quota of formula_4, and denotes the exact number of items that should be given to formula_4. In general, a "fair" apportionment is one in which each allocation formula_13 is as close as possible to the quota formula_6.
An apportionment method may return a set of apportionment vectors (in other words: it is a multivalued function). This is required, since in some cases there is no fair way to distinguish between two possible solutions. For example, if formula_14 (or any other odd number) and formula_15, then (50,51) and (51,50) are both equally reasonable solutions, and there is no mathematical way to choose one over the other. While such ties are extremely rare in practice, the theory must account for them (in practice, when an apportionment method returns multiple outputs, one of them may be chosen by some external priority rules, or by coin flipping, but this is beyond the scope of the mathematical apportionment theory).
An apportionment method is denoted by a multivalued function formula_16; a particular formula_17-solution is a single-valued function formula_18 which selects a single apportionment from formula_16.
A partial apportionment method is an apportionment method for specific fixed values of formula_1 and formula_0; it is a multivalued function formula_19 that accepts only "formula_1"-vectors.
Variants.
Sometimes, the input also contains a vector of integers formula_20 representing "minimum requirements": formula_21 represents the smallest number of items that agent formula_4 should receive, regardless of its entitlement. So there is an additional requirement on the output: formula_22 for all formula_4.
When the agents are political parties, these numbers are usually 0, so this vector is omitted. But when the agents are states or districts, these numbers are often positive in order to ensure that all are represented. They can be the same for all agents (e.g. 1 for USA states, 2 for France districts), or different (e.g. in Canada or the European parliament).
Sometimes there is also a vector of "maximum requirements", but this is less common.
Basic requirements.
There are basic properties that should be satisfied by any reasonable apportionment method. They were given different names by different authors: the names on the left are from Pukelsheim; the names in parentheses on the right are from Balinski and Young.
Other considerations.
The proportionality of an apportionment can be measured by the seats-to-votes ratio and the Gallagher index. The proportionality of an apportionment, together with electoral thresholds, affects political fragmentation and the barrier to entry to political competition.
Common apportionment methods.
There are many apportionment methods, and they can be classified into several approaches.
Staying within the quota.
The "exact quota" of agent formula_4 is formula_31. A basic requirement from an apportionment method is that it allocates to each agent formula_4 its quota formula_66 if it is an integer; otherwise, it should allocate it an integer that is near the exact quota, that is, either its "lower quota" formula_67 or its "upper quota" formula_68. We say that an apportionment method -
Hamilton's largest-remainder method satisfies both lower quota and upper quota by construction. This does not hold for the divisor methods.
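The contrast can be illustrated with a short computation (an illustrative sketch; the vote numbers below are chosen purely for illustration and are not from the cited sources). Hamilton's method starts from the lower quotas and hands out at most one extra seat per agent, so it stays within the quota, whereas Jefferson's (D'Hondt) highest-averages method can exceed the upper quota when one agent is much larger than the others:
<syntaxhighlight lang="python">
from math import floor

def hamilton(quotas, h):
    """Largest-remainder apportionment from the exact quotas q_i."""
    seats = [floor(q) for q in quotas]              # start from lower quotas
    order = sorted(range(len(quotas)),
                   key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in order[:h - sum(seats)]:                # at most one extra seat each
        seats[i] += 1
    return seats

def jefferson(votes, h):
    """Jefferson/D'Hondt: give each seat to the highest average votes/(seats+1)."""
    seats = [0] * len(votes)
    for _ in range(h):
        i = max(range(len(votes)), key=lambda j: votes[j] / (seats[j] + 1))
        seats[i] += 1
    return seats

votes, h = [87, 5, 4, 4], 10                        # illustrative, lopsided instance
quotas = [v * h / sum(votes) for v in votes]        # 8.7, 0.5, 0.4, 0.4
print(hamilton(quotas, h))   # [9, 1, 0, 0] -- every agent within its quota
print(jefferson(votes, h))   # [10, 0, 0, 0] -- exceeds the upper quota of 9
</syntaxhighlight>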
Among the divisor methods, Jefferson's method is the only one that satisfies lower quota, and Adams's method is the only one that satisfies upper quota. Since these are different methods, no divisor method satisfies both upper quota and lower quota. The uniqueness of Jefferson and Adams holds even in the much larger class of rank-index methods.
This can be seen as a disadvantage of divisor methods, but it can also be considered a disadvantage of the quota criterion: "For example, to give D 26 instead of 25 seats in Table 10.1 would mean taking a seat from one of the smaller states A, B, or C. Such a transfer would penalize the per capita representation of the small state much more - in both absolute and relative terms - than state D is penalized by getting one less than its lower quota. Similar examples can be invented in which some state might reasonably get more than its upper quota. It can be argued that staying within the quota is not really compatible with the idea of proportionality at all, since it allows a much greater variance in the per capita representation of smaller states than it does for larger states."
In Monte-Carlo simulations, Webster's method satisfies both quotas with a very high probability. Moreover, Webster's method is the only divisor method that satisfies "near quota": there are no agents formula_75 such that moving a seat from formula_4 to formula_76 would bring both of them nearer to their quotas: formula_77.
Jefferson's method can be modified to satisfy both quotas, yielding the Quota-Jefferson method. Moreover, "any" divisor method can be modified to satisfy both quotas. This yields the Quota-Webster method, Quota-Hill method, etc. This family of methods is often called the quatatone methods, as they satisfy both quotas and house-monotonicity.
Minimizing pairwise inequality.
One way to evaluate apportionment methods is by whether they minimize the amount of "inequality" between pairs of agents. Clearly, inequality should take into account the different entitlements: if formula_78 then the agents are treated "equally" (with respect to their entitlements); otherwise, if formula_79 then agent formula_4 is favored, and if formula_80 then agent formula_76 is favored. However, since there are 16 ways to rearrange the equality formula_78, there are correspondingly many ways by which inequality can be defined.
This analysis was done by Huntington in the 1920s. Some of the possibilities do not lead to a stable solution. For example, if we define inequality as formula_87, then there are instances in which, for any allocation, moving a seat from one agent to another might decrease their pairwise inequality. There is an example with 3 states with populations (737,534,329) and 16 seats.
Bias towards large/small agents.
The "seat bias" of an apportionment is the tendency of an apportionment method to systematically favor either large or small parties. Jefferson's method and Droop's method are heavily biased in favor of large states; Adams' method is biased in favor of small states; and the Webster and Huntington–Hill methods are effectively unbiased toward either large or small states.
Consistency properties.
Consistency properties are properties that characterize an apportionment "method", rather than a particular apportionment. Each consistency property compares the outcomes of a particular method on different inputs. Several such properties have been studied.
State-population monotonicity means that, if the entitlement of an agent increases, its apportionment should not decrease. The name comes from the setting where the agents are federal states, whose entitlements are determined by their population. A violation of this property is called the "population paradox". There are several variants of this property. One variant - the "pairwise PM" - is satisfied exclusively by divisor methods. That is, an apportionment method is pairwise PM if-and-only-if it is a divisor method.
When formula_88 and formula_89, no partial apportionment method satisfies pairwise-PM, lower quota and upper quota. Combined with the previous statements, it implies that no divisor method satisfies both quotas.
House monotonicity means that, when the total number of seats formula_0 increases, no agent loses a seat. The violation of this property is called the "Alabama paradox". It was considered particularly important in the early days of the USA, when the congress size increased every ten years. House-monotonicity is weaker than pairwise-PM. All rank-index methods (hence all divisor methods) are house-monotone - this clearly follows from the iterative procedure. Besides the divisor methods, there are other house-monotone methods, and some of them also satisfy both quotas. For example, the "Quota method" of Balinski and Young satisfies house-monotonicity and upper-quota by construction, and it can be proved that it also satisfies lower-quota. It can be generalized: there is a general algorithm that yields "all" apportionment methods which are both house-monotone and satisfy both quotas. However, all these quota-based methods (Quota-Jefferson, Quota-Hill, etc.) may violate pairwise-PM: there are examples in which one agent gains in population but loses seats.
Uniformity (also called coherence) means that, if we take some subset of the agents formula_90, and apply the same method to their combined allocation formula_91, then the result is the vector formula_92. All rank-index methods (hence all divisor methods) are uniform, since they assign seats to agents in a pre-determined order - the order determined by formula_60 - and this order does not depend on the presence or absence of other agents. Moreover, every uniform method that is also "anonymous" and "balanced" must be a rank-index method.
Every uniform method that is also "anonymous", "weakly-exact" and "concordant" (= formula_28 implies formula_29) must be a divisor method. Moreover, among all anonymous methods:
Encouraging coalitions.
When the agents are political parties, they often split or merge. How such splitting/merging affects the apportionment will impact political fragmentation. Suppose a certain apportionment method gives two agents formula_93 some formula_94 seats respectively, and then these two agents form a coalition, and the method is re-activated.
Among the divisor methods, Jefferson's method is the only one under which a coalition of "formula_93" never receives fewer than formula_95 seats (so merging never hurts), while Adams's method is the only one under which such a coalition never receives more than formula_95 seats (so splitting never hurts).
Since these are different methods, no divisor method gives every coalition of "formula_93" "exactly" formula_95 seats. Moreover, this uniqueness can be extended to the much larger class of rank-index methods.
A weaker property, called "coalitional-stability", is that every coalition of "formula_93" should receive between formula_96 and formula_97 seats; so a party can gain at most one seat by merging/splitting.
Moreover, every method satisfying both quotas is "almost coalitionally-stable" - it gives every coalition between formula_100 and formula_101 seats.
Summary table.
The following table summarizes uniqueness results for classes of apportionment methods. For example, the top-left cell states that Jefferson's method is the unique divisor method satisfying the lower quota rule.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "(t_1,\\ldots,t_n)"
},
{
"math_id": 3,
"text": "t_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "\\sum_{i=1}^n t_i = 1"
},
{
"math_id": 6,
"text": "q_i"
},
{
"math_id": 7,
"text": "q_i := t_i\\cdot h"
},
{
"math_id": 8,
"text": "\\sum_{i=1}^n q_i = h"
},
{
"math_id": 9,
"text": "(p_1,\\ldots,p_n)"
},
{
"math_id": 10,
"text": "t_i = p_i / \\sum_{j=1}^n p_j"
},
{
"math_id": 11,
"text": "a_1,\\ldots,a_n"
},
{
"math_id": 12,
"text": "\\sum_{i=1}^n a_i = h"
},
{
"math_id": 13,
"text": "a_i"
},
{
"math_id": 14,
"text": "h = 101"
},
{
"math_id": 15,
"text": "t_1 = t_2 = 1/2"
},
{
"math_id": 16,
"text": "M(\\mathbf{t}, h)"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "f(\\mathbf{t}, h)"
},
{
"math_id": 19,
"text": "M^*(\\mathbf{t})"
},
{
"math_id": 20,
"text": "r_1,\\ldots,r_n"
},
{
"math_id": 21,
"text": "r_i"
},
{
"math_id": 22,
"text": "a_i \\geq r_i"
},
{
"math_id": 23,
"text": "\\mathbf{t'}"
},
{
"math_id": 24,
"text": "\\mathbf{t}"
},
{
"math_id": 25,
"text": "M(\\mathbf{t'}, h)"
},
{
"math_id": 26,
"text": "t_i = t_j"
},
{
"math_id": 27,
"text": "a_i \\geq a_j-1"
},
{
"math_id": 28,
"text": "t_i > t_j"
},
{
"math_id": 29,
"text": "a_i \\geq a_j"
},
{
"math_id": 30,
"text": "M(c\\cdot \\mathbf{t}, h) = M(\\mathbf{t}, h) "
},
{
"math_id": 31,
"text": "q_i = t_i\\cdot h"
},
{
"math_id": 32,
"text": "(q_1,\\ldots,q_n)"
},
{
"math_id": 33,
"text": "\\mathbf{a}"
},
{
"math_id": 34,
"text": "\\lfloor q_i\\rfloor"
},
{
"math_id": 35,
"text": "\\mathbf{a'}\\in M(\\mathbf{t}, h')"
},
{
"math_id": 36,
"text": "h < h'"
},
{
"math_id": 37,
"text": "\\mathbf{a}'"
},
{
"math_id": 38,
"text": "M(\\mathbf{t}, 6)"
},
{
"math_id": 39,
"text": "M(\\mathbf{t}, 4)"
},
{
"math_id": 40,
"text": "\\{\\mathbf{t}| \\mathbf{a}\\in M(\\mathbf{t}, h)\\}"
},
{
"math_id": 41,
"text": "\\lfloor q_1 \\rfloor,\\ldots,\\lfloor q_n \\rfloor"
},
{
"math_id": 42,
"text": "q_i - \\lfloor q_i \\rfloor"
},
{
"math_id": 43,
"text": "t_i h"
},
{
"math_id": 44,
"text": "t_i\\cdot(h+1)"
},
{
"math_id": 45,
"text": "h+1"
},
{
"math_id": 46,
"text": "M(\\mathbf{t},h) := \\{\\mathbf{a} | a_i = \\operatorname{round}(t_i\\cdot H) \\text{ and } \\sum_{i=1}^n a_i = h \\text{ for some real number } H \\}."
},
{
"math_id": 47,
"text": "d(k)"
},
{
"math_id": 48,
"text": "k\\geq 0"
},
{
"math_id": 49,
"text": "[k, k+1]"
},
{
"math_id": 50,
"text": "[k, d(k)]"
},
{
"math_id": 51,
"text": "k"
},
{
"math_id": 52,
"text": "[d(k), k+1]"
},
{
"math_id": 53,
"text": "k+1"
},
{
"math_id": 54,
"text": "\\operatorname{round}^d(x)"
},
{
"math_id": 55,
"text": "d(k-1)\\leq x \\leq d(k)"
},
{
"math_id": 56,
"text": "d(k) = k"
},
{
"math_id": 57,
"text": "d(k) = k+1"
},
{
"math_id": 58,
"text": "d(k) = k+0.5"
},
{
"math_id": 59,
"text": "\\frac{t_i}{d(a_i)}"
},
{
"math_id": 60,
"text": "r(t,a)"
},
{
"math_id": 61,
"text": "a"
},
{
"math_id": 62,
"text": "r(t_i,a_i)"
},
{
"math_id": 63,
"text": "d(a)"
},
{
"math_id": 64,
"text": "r(t,a) = t/d(a)"
},
{
"math_id": 65,
"text": "a_i = q_i"
},
{
"math_id": 66,
"text": "q_i "
},
{
"math_id": 67,
"text": "\\lfloor q_i\\rfloor "
},
{
"math_id": 68,
"text": "\\lceil q_i\\rceil "
},
{
"math_id": 69,
"text": "a_i\\geq \\lfloor q_i\\rfloor "
},
{
"math_id": 70,
"text": "i "
},
{
"math_id": 71,
"text": "a_i + 1 > q_i "
},
{
"math_id": 72,
"text": "a_i\\leq \\lceil q_i\\rceil "
},
{
"math_id": 73,
"text": "a_i - 1 < q_i "
},
{
"math_id": 74,
"text": "\\frac{q_i}{a_i+1} < 1 < \\frac{q_i}{a_i-1} "
},
{
"math_id": 75,
"text": "i, j"
},
{
"math_id": 76,
"text": "j"
},
{
"math_id": 77,
"text": "q_i-(a_i-1) ~<~ a_i - q_i ~~\\text{ and }~~ (a_j+1)-q_j ~<~ q_j - a_j"
},
{
"math_id": 78,
"text": "a_i/t_i = a_j / t_j"
},
{
"math_id": 79,
"text": "a_i/t_i > a_j / t_j"
},
{
"math_id": 80,
"text": "a_i/t_i < a_j / t_j"
},
{
"math_id": 81,
"text": "|a_i/t_i - a_j / t_j|"
},
{
"math_id": 82,
"text": "a_i - (t_i/t_j)a_j"
},
{
"math_id": 83,
"text": "a_i/t_i \\geq a_j/t_j"
},
{
"math_id": 84,
"text": "a_i(t_j/t_i) - a_j"
},
{
"math_id": 85,
"text": "|t_i/a_i - t_j /a_j|"
},
{
"math_id": 86,
"text": "\\left|\\frac{a_i/t_i}{a_j/t_j} - 1\\right|"
},
{
"math_id": 87,
"text": "|a_i/a_j - t_i/t_j|"
},
{
"math_id": 88,
"text": "n\\geq 4"
},
{
"math_id": 89,
"text": "h\\geq n +3"
},
{
"math_id": 90,
"text": "1,\\ldots,k"
},
{
"math_id": 91,
"text": "h_k = a_1+\\cdots+a_k"
},
{
"math_id": 92,
"text": "(a_1,\\ldots,a_k)"
},
{
"math_id": 93,
"text": "i,j"
},
{
"math_id": 94,
"text": "a_i, a_j"
},
{
"math_id": 95,
"text": "a_i + a_j"
},
{
"math_id": 96,
"text": "a_i + a_j-1"
},
{
"math_id": 97,
"text": "a_i + a_j+1"
},
{
"math_id": 98,
"text": "d"
},
{
"math_id": 99,
"text": "d(a_1 + a_2) \\leq d(a_1) + d(a_2) \\leq d(a_1 + a_2+1)"
},
{
"math_id": 100,
"text": "a_i + a_j-2"
},
{
"math_id": 101,
"text": "a_i + a_j+2"
}
]
| https://en.wikipedia.org/wiki?curid=68407499 |
68408304 | Wolfgang Soergel | German mathematician
Wolfgang Soergel (born 12 June 1962 in Geneva) is a German mathematician, specializing in geometry and representation theory.
Early life and education.
Wolfgang Soergel is the son of the physicist Volker Soergel and a grandson of the paleontologist Johannes Wolfgang Adolf Werner Soergel (1887–1946).
Soergel received his "Promotion" (PhD) in 1988 from the University of Hamburg. His PhD dissertation "Universelle versus relative Einhüllende: Eine geometrische Untersuchung von Quotienten von universellen Einhüllenden halbeinfacher Lie-Algebren" (Universal versus relative envelopes: a geometric investigation of quotients of universal envelopes of semi-simple Lie algebras) was supervised by Jens Carsten Jantzen.
Career.
After postdoctoral positions at UC Berkeley, Harvard University, and MIT, Soergel completed his "Habilitation" at the University of Bonn in 1991. In 1994 he was appointed to a professorial chair at the University of Freiburg. He was an invited speaker at the 1994 International Congress of Mathematicians in Zurich. Since 2008 he has been a full member of the Heidelberg Academy of Sciences and Humanities.
He is the author or coauthor of over 30 research articles. His research in representation theory has important applications to Kazhdan-Lusztig theory and Koszul duality. The category of Soergel bimodules is named in his honor.
His doctoral students include Peter Fiebig, Catharina Stroppel and Geordie Williamson.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
}
]
| https://en.wikipedia.org/wiki?curid=68408304 |
68408392 | Tin(II) oxalate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Tin(II) oxalate is an inorganic compound, a salt of tin and oxalic acid with the chemical formula SnC2O4. The compound appears as colorless crystals, is insoluble in water, and forms crystalline hydrates.
Synthesis.
Effect of oxalic acid solution on tin(II) oxide:
formula_0
Tin(II) oxalate can also be obtained by using tin(II) chloride and oxalic acid.
Properties.
Tin(II) oxalate forms colorless crystals.
It is insoluble in water and acetone, but soluble in dilute HCl, methanol, and petroleum ether.
It forms crystalline hydrates of the composition SnC2O4•"n"H2O, where "n" = 1 or 2.
Decomposes on heating:
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ SnO + H_2C_2O_4 \\ \\xrightarrow{}\\ SnC_2O_4\\downarrow + H_2O }"
},
{
"math_id": 1,
"text": "\\mathsf{ SnC_2O_4 \\ \\xrightarrow{380^oC}\\ SnO_2 + 2CO }"
}
]
| https://en.wikipedia.org/wiki?curid=68408392 |
68413173 | House monotonicity | Method of allocating seats in a parliament
House monotonicity (also called house-size monotonicity) is a property of apportionment methods. These are methods for allocating seats in a parliament among federal states (or among political parties). The property says that, if the number of seats in the "house" (the parliament) increases, and the method is re-activated, then no state (or party) should have fewer seats than it previously had. A method that fails to satisfy house-monotonicity is said to have the Alabama paradox.
In the context of committee elections, house monotonicity is often called committee monotonicity. It says that, if the size of the committee increases, then all the candidates that were previously elected are still elected.
House monotonicity is the special case of "resource monotonicity" for the setting in which the resource consists of identical discrete items (the seats).
Methods violating house-monotonicity.
An example of a method violating house-monotonicity is the largest remainder method (= Hamilton's method). Consider an instance with three states A, B, and C in which, when one seat is added to the house, the number of seats allocated to state C decreases from 2 to 1; a worked computation is sketched below.
This occurs because increasing the number of seats increases the fair share faster for the large states than for the small states. In particular, large A and B had their fair share increase faster than small C. Therefore, the fractional parts for A and B increased faster than those for C. In fact, they overtook C's fraction, causing C to lose its seat, since the method examines which states have the largest remaining fraction.
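A short computation reproduces the paradox. The populations below are chosen purely for illustration, but they exhibit exactly the behaviour described above: state C receives 2 seats out of 10 and only 1 seat out of 11.
<syntaxhighlight lang="python">
from math import floor

def largest_remainder(populations, house_size):
    """Hamilton's (largest remainder) apportionment."""
    total = sum(populations)
    quotas = [p * house_size / total for p in populations]
    seats = [floor(q) for q in quotas]
    # give the leftover seats to the largest fractional remainders
    order = sorted(range(len(quotas)),
                   key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in order[:house_size - sum(seats)]:
        seats[i] += 1
    return seats

populations = [6, 6, 2]                        # states A, B, C (illustrative)
print(largest_remainder(populations, 10))      # [4, 4, 2]
print(largest_remainder(populations, 11))      # [5, 5, 1] -- C loses a seat
</syntaxhighlight>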
This violation is known as the Alabama paradox due to the history of its discovery. After the 1880 census, C. W. Seaton, chief clerk of the United States Census Bureau, computed apportionments for all House sizes between 275 and 350, and discovered that Alabama would get eight seats with a House size of 299 but only seven with a House size of 300.
Methods satisfying house-monotonicity.
Methods for apportionment.
All the highest-averages methods (= divisor methods) satisfy house monotonicity. This is easy to see when considering the implementation of divisor methods as picking sequences: when a seat is added, the only change is that the picking sequence is extended with one additional pick. Therefore, all states keep their previously picked seats. Similarly, rank-index methods, which are generalizations of divisor methods, satisfy house-monotonicity.
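A minimal sketch of a highest-averages method as a picking sequence makes this concrete (the code and vote counts below are illustrative, not taken from any cited source): seats are handed out one at a time, so the picking sequence for a house of size "h" is a prefix of the sequence for size "h" + 1, and no party can lose a seat when the house grows.
<syntaxhighlight lang="python">
def highest_averages(votes, house_size, divisor=lambda s: s + 1):
    """Assign seats one at a time to the party with the highest votes/divisor(seats).
    The default divisor s + 1 gives the D'Hondt (Jefferson) method;
    divisor=lambda s: s + 0.5 gives the Webster/Sainte-Laguë method."""
    seats = [0] * len(votes)
    picks = []                                   # the picking sequence
    for _ in range(house_size):
        i = max(range(len(votes)), key=lambda j: votes[j] / divisor(seats[j]))
        seats[i] += 1
        picks.append(i)
    return seats, picks

votes = [53, 24, 23]                             # illustrative vote counts
seats_7, picks_7 = highest_averages(votes, 7)
seats_8, picks_8 = highest_averages(votes, 8)
assert picks_8[:7] == picks_7                    # adding a seat extends the sequence
assert all(a >= b for a, b in zip(seats_8, seats_7))   # nobody loses a seat
</syntaxhighlight>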
Moreover, "capped divisor methods", which are variants of divisor methods in which a state never gets more seats than its upper quota, also satisfy house-monotonicity. An example is the Balinsky-Young quota method.
Every house-monotone method can be defined as a recursive function of the house size "h". Formally, an apportionment method formula_0 is house-monotone and satisfies both quotas if-and-only-if it is constructed recursively as follows (see mathematics of apportionment for the definitions and notation):
Every coherent apportionment method is house-monotone.
Methods for multiwinner voting.
The sequential Phragmén voting rules, both for approval ballots and for ranked ballots, are committee-monotone. The same is true for Thiele's addition method and Thiele's elimination method. However, Thiele's optimization method is not committee-monotone.
{
"math_id": 0,
"text": "M(\\mathbf{t},h)"
},
{
"math_id": 1,
"text": "M(\\mathbf{t},0) = 0"
},
{
"math_id": 2,
"text": "M(\\mathbf{t},h) = \\mathbf{a}"
},
{
"math_id": 3,
"text": "M(\\mathbf{t},h+1)"
},
{
"math_id": 4,
"text": "a_i+1"
},
{
"math_id": 5,
"text": "i\\in U(\\mathbf{t},\\mathbf{a})\\cap L(\\mathbf{t},\\mathbf{a})"
},
{
"math_id": 6,
"text": "U(\\mathbf{t},\\mathbf{a})"
},
{
"math_id": 7,
"text": "L(\\mathbf{t},\\mathbf{a})"
}
]
| https://en.wikipedia.org/wiki?curid=68413173 |
684143 | Wedge sum | In topology, the wedge sum is a "one-point union" of a family of topological spaces. Specifically, if "X" and "Y" are pointed spaces (i.e. topological spaces with distinguished basepoints formula_0 and formula_1) the wedge sum of "X" and "Y" is the quotient space of the disjoint union of "X" and "Y" by the identification formula_2
formula_3
where formula_4 is the equivalence closure of the relation formula_5
More generally, suppose formula_6 is an indexed family of pointed spaces with basepoints formula_7 The wedge sum of the family is given by:
formula_8
where formula_4 is the equivalence closure of the relation formula_9
In other words, the wedge sum is the joining of several spaces at a single point. This definition is sensitive to the choice of the basepoints formula_10 unless the spaces formula_6 are homogeneous.
The wedge sum is again a pointed space, and the binary operation is associative and commutative (up to homeomorphism).
Sometimes the wedge sum is called the wedge product, but this is not the same concept as the exterior product, which is also often called the wedge product.
Examples.
The wedge sum of two circles is homeomorphic to a figure-eight space. The wedge sum of formula_11 circles is often called a "bouquet of circles", while a wedge product of arbitrary spheres is often called a bouquet of spheres.
A common construction in homotopy is to identify all of the points along the equator of an formula_11-sphere formula_12. Doing so results in two copies of the sphere, joined at the point that was the equator:
formula_13
Let formula_14 be the map formula_15 that is, the map which identifies the equator down to a single point. Then addition of two elements formula_16 of the formula_11-dimensional homotopy group formula_17 of a space formula_18 at the distinguished point formula_19 can be understood as the composition of formula_20 and formula_21 with formula_14:
formula_22
Here, formula_23 are maps which take a distinguished point formula_24 to the point formula_25 Note that the above uses the wedge sum of two functions, which is possible precisely because they agree at formula_26 the point common to the wedge sum of the underlying spaces.
Categorical description.
The wedge sum can be understood as the coproduct in the category of pointed spaces. Alternatively, the wedge sum can be seen as the pushout of the diagram formula_27 in the category of topological spaces (where formula_28 is any one-point space).
Properties.
Van Kampen's theorem gives certain conditions (which are usually fulfilled for well-behaved spaces, such as CW complexes) under which the fundamental group of the wedge sum of two spaces formula_18 and formula_29 is the free product of the fundamental groups of formula_18 and formula_30 | [
{
"math_id": 0,
"text": "x_0"
},
{
"math_id": 1,
"text": "y_0"
},
{
"math_id": 2,
"text": "x_0 \\sim y_0:"
},
{
"math_id": 3,
"text": "X \\vee Y = (X \\amalg Y)\\;/{\\sim},"
},
{
"math_id": 4,
"text": "\\,\\sim\\,"
},
{
"math_id": 5,
"text": "\\left\\{ \\left(x_0, y_0\\right) \\right\\}."
},
{
"math_id": 6,
"text": "\\left(X_i\\right)_{i \\in I}"
},
{
"math_id": 7,
"text": "\\left(p_i\\right)_{i \\in I}."
},
{
"math_id": 8,
"text": "\\bigvee_{i \\in I} X_i = \\coprod_{i \\in I} X_i\\;/{\\sim},"
},
{
"math_id": 9,
"text": "\\left\\{ \\left(p_i, p_j\\right) : i, j \\in I\\right\\}."
},
{
"math_id": 10,
"text": "\\left(p_i\\right)_{i \\in I},"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "S^n"
},
{
"math_id": 13,
"text": "S^n/{\\sim} = S^n \\vee S^n."
},
{
"math_id": 14,
"text": "\\Psi"
},
{
"math_id": 15,
"text": "\\Psi : S^n \\to S^n \\vee S^n,"
},
{
"math_id": 16,
"text": "f, g \\in \\pi_n(X,x_0)"
},
{
"math_id": 17,
"text": "\\pi_n(X,x_0)"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "x_0 \\in X"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "g"
},
{
"math_id": 22,
"text": "f + g = (f \\vee g) \\circ \\Psi."
},
{
"math_id": 23,
"text": "f, g : S^n \\to X"
},
{
"math_id": 24,
"text": "s_0 \\in S^n"
},
{
"math_id": 25,
"text": "x_0 \\in X."
},
{
"math_id": 26,
"text": "s_0,"
},
{
"math_id": 27,
"text": "X \\leftarrow \\{ \\bull \\} \\to Y"
},
{
"math_id": 28,
"text": "\\{ \\bull \\}"
},
{
"math_id": 29,
"text": "Y"
},
{
"math_id": 30,
"text": "Y."
}
]
| https://en.wikipedia.org/wiki?curid=684143 |
68414337 | Neptunium(IV) oxalate | Chemical compound
Neptunium(IV) oxalate is an inorganic compound, a salt of neptunium and oxalic acid with the chemical formula Np(C2O4)2. The compound is slightly soluble in water and forms a crystalline hydrate with green crystals.
Synthesis.
Neptunium(IV) oxalate is formed by the oxalic acid precipitation of neptunium(IV) solutions:
formula_0
Physical properties.
Neptunium(IV) oxalate forms a crystalline hydrate of the composition Np(C2O4)2 • 6H2O with green crystals.
It is insoluble in acetone, and slightly soluble in water.
Chemical properties.
Neptunium(IV) oxalate decomposes on heating:
formula_1
Applications.
Neptunium(IV) oxalate is used as an intermediate product in the purification of neptunium.
| [
{
"math_id": 0,
"text": "\\mathsf{ NpCl_4 + 2H_2C_2O_4 \\ \\xrightarrow{}\\ Np(C_2O_4)_2\\cdot 6H_2O\\downarrow + 4HCl }"
},
{
"math_id": 1,
"text": "\\mathsf{ Np(C_2O_4)_2 \\ \\xrightarrow{400^oC}\\ NpO_2 + 2CO_2 + 2CO }"
}
]
| https://en.wikipedia.org/wiki?curid=68414337 |
684207 | Helicoid | Mathematical shape
The helicoid, also known as helical surface, is a smooth surface embedded in three-dimensional space. It is the surface traced by an infinite line that is simultaneously being rotated and lifted along its fixed axis of rotation. It is the third minimal surface to be known, after the plane and the catenoid.
Description.
It was described by Euler in 1774 and by Jean Baptiste Meusnier in 1776. Its name derives from its similarity to the helix: for every point on the helicoid, there is a helix contained in the helicoid which passes through that point. Since the planar range is taken to extend through negative and positive infinity, close observation shows the appearance of two parallel or mirror planes, in the sense that if the slope of one plane is traced, the co-plane appears to be bypassed or skipped, though in actuality the co-plane is also traced from the opposite perspective.
The helicoid is also a ruled surface (and a right conoid), meaning that it is traced out by a moving line; equivalently, through any point on the surface there is a line that lies entirely on the surface. Indeed, Catalan proved in 1842 that the helicoid and the plane were the only ruled minimal surfaces.
A helicoid is also a translation surface in the sense of differential geometry.
The helicoid and the catenoid are parts of a family of helicoid-catenoid minimal surfaces.
The helicoid is shaped like an Archimedes' screw, but extends infinitely in all directions. It can be described by the following parametric equations in Cartesian coordinates:
formula_0
formula_1
formula_2
where "ρ" and "θ" range from negative infinity to positive infinity, while "α" is a constant. If "α" is positive, then the helicoid is right-handed as shown in the figure; if negative then left-handed.
The helicoid has principal curvatures formula_3. The sum of these quantities gives the mean curvature (zero since the helicoid is a minimal surface) and the product gives the Gaussian curvature.
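These curvature claims can be checked symbolically. The following sympy sketch (an illustrative check added here, not part of the original article) computes the first and second fundamental forms of the parametrization above and confirms that the mean curvature is zero while the Gaussian curvature equals the product of the principal curvatures stated above.

import sympy as sp

rho, theta, alpha = sp.symbols('rho theta alpha', positive=True)
# the parametrization given above: x = rho*cos(alpha*theta), y = rho*sin(alpha*theta), z = theta
r = sp.Matrix([rho*sp.cos(alpha*theta), rho*sp.sin(alpha*theta), theta])
r_rho, r_theta = r.diff(rho), r.diff(theta)
cross = r_rho.cross(r_theta)
nhat = cross / sp.sqrt(cross.dot(cross))                   # unit normal
E, F, G = r_rho.dot(r_rho), r_rho.dot(r_theta), r_theta.dot(r_theta)
l, m, n = r.diff(rho, 2).dot(nhat), r_rho.diff(theta).dot(nhat), r.diff(theta, 2).dot(nhat)
H = sp.simplify((l*G - 2*m*F + n*E) / (2*(E*G - F**2)))    # mean curvature
K = sp.simplify((l*n - m**2) / (E*G - F**2))               # Gaussian curvature
print(H)  # 0: the helicoid is a minimal surface
print(K)  # -alpha**2/(alpha**2*rho**2 + 1)**2, the product of the principal curvatures +/- alpha/(1 + alpha**2*rho**2)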
The helicoid is homeomorphic to the plane formula_4. To see this, let "α" decrease continuously from its given value down to zero. Each intermediate value of "α" will describe a different helicoid, until "α" = 0 is reached and the helicoid becomes a vertical plane.
Conversely, a plane can be turned into a helicoid by choosing a line, or "axis", on the plane, then twisting the plane around that axis.
If a helicoid of radius "R" revolves by an angle of "θ" around its axis while rising by a height "h", the area of the surface is given by
formula_5
Helicoid and catenoid.
The helicoid and the catenoid are locally isometric surfaces; see Catenoid#Helicoid transformation. | [
{
"math_id": 0,
"text": " x = \\rho \\cos (\\alpha \\theta), \\ "
},
{
"math_id": 1,
"text": " y = \\rho \\sin (\\alpha \\theta), \\ "
},
{
"math_id": 2,
"text": " z = \\theta, \\ "
},
{
"math_id": 3,
"text": "\\pm \\alpha /(1+ \\alpha^2 \\rho ^2) \\ "
},
{
"math_id": 4,
"text": " \\mathbb{R}^2 "
},
{
"math_id": 5,
"text": "\\frac{\\theta}{2} \\left[R \\sqrt{R^2+c^2}+c^2 \\ln \\left(\\frac{R + \\sqrt{R^2+c^2}} c\\right) \\right],\n\\ c = \\frac{h}{\\theta}."
}
]
| https://en.wikipedia.org/wiki?curid=684207 |
684210 | Mean curvature | Differential geometry measure
In mathematics, the mean curvature formula_0 of a surface formula_1 is an "extrinsic" measure of curvature that comes from differential geometry and that locally describes the curvature of an embedded surface in some ambient space such as Euclidean space.
The concept was used by Sophie Germain in her work on elasticity theory. Jean Baptiste Marie Meusnier used it in 1776, in his studies of minimal surfaces. It is important in the analysis of minimal surfaces, which have mean curvature zero, and in the analysis of physical interfaces between fluids (such as soap films) which, for example, have constant mean curvature in static flows, by the Young–Laplace equation.
Definition.
Let formula_2 be a point on the surface formula_3 inside the three dimensional Euclidean space R3. Each plane through formula_2 containing the normal line to formula_3 cuts formula_3 in a (plane) curve. Fixing a choice of unit normal gives a signed curvature to that curve. As the plane is rotated by an angle formula_4 (always containing the normal line) that curvature can vary. The maximal curvature formula_5 and minimal curvature formula_6 are known as the "principal curvatures" of formula_3.
The mean curvature at formula_7 is then the average of the signed curvature over all angles formula_4:
formula_8.
By applying Euler's theorem, this is equal to the average of the principal curvatures:
formula_9
More generally, for a hypersurface formula_10 the mean curvature is given as
formula_11
More abstractly, the mean curvature is the trace of the second fundamental form divided by "n" (or equivalently, the shape operator).
Additionally, the mean curvature formula_0 may be written in terms of the covariant derivative formula_12 as
formula_13
using the "Gauss-Weingarten relations," where formula_14 is a smoothly embedded hypersurface, formula_15 a unit normal vector, and formula_16 the metric tensor.
A surface is a minimal surface if and only if the mean curvature is zero. Furthermore, a surface formula_1 which evolves under its mean curvature is said to obey a heat-type equation called the mean curvature flow equation.
The sphere is the only embedded surface of constant positive mean curvature without boundary or singularities. However, the result is not true when the condition "embedded surface" is weakened to "immersed surface".
Surfaces in 3D space.
For a surface defined in 3D space, the mean curvature is related to a unit normal of the surface:
formula_17
where the sign of the curvature depends on the choice of normal: the curvature is positive if the surface curves "towards" the normal. The formula above holds for surfaces in 3D space defined in any manner, as long as the divergence of the unit normal may be calculated. The mean curvature may also be calculated as
formula_18
where I and II denote the matrices of the first and second fundamental forms, respectively.
If formula_19 is a parametrization of the surface and formula_20 are two linearly independent vectors in parameter space then the mean curvature can be written in terms of the first and second fundamental forms as
formula_21
where formula_22, formula_23, formula_24, formula_25, formula_26, formula_27.
For the special case of a surface defined as a function of two coordinates, e.g. formula_28, and using the upward pointing normal the (doubled) mean curvature expression is
formula_29
In particular at a point where formula_30, the mean curvature is half the trace of the Hessian matrix of formula_3.
If the surface is additionally known to be axisymmetric with formula_31,
formula_32
where formula_33 comes from the derivative of formula_34.
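As a sanity check of the graph formula above, the following sympy sketch (an illustration added here, not part of the original article) evaluates the doubled mean curvature of the upper hemisphere of radius "R", written as the graph z = sqrt(R^2 - x^2 - y^2), at a sample point; with the upward-pointing normal used in this formula the result is -2/R, i.e. a sphere of radius "R" has mean curvature of magnitude 1/R.

import sympy as sp

x, y, R = sp.symbols('x y R', positive=True)
S = sp.sqrt(R**2 - x**2 - y**2)   # upper hemisphere of radius R as a graph z = S(x, y)
Sx, Sy = S.diff(x), S.diff(y)
num = (1 + Sx**2) * S.diff(y, 2) - 2 * Sx * Sy * sp.diff(S, x, y) + (1 + Sy**2) * S.diff(x, 2)
two_H = num / (1 + Sx**2 + Sy**2) ** sp.Rational(3, 2)
# evaluate at a point inside the domain; the output is approximately -1.0, i.e. 2H = -2/R for R = 2
print(float(two_H.subs({R: 2.0, x: 0.3, y: 0.4})))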
Implicit form of mean curvature.
The mean curvature of a surface specified by an equation formula_35 can be calculated by using the gradient formula_36 and the Hessian matrix
formula_37
The mean curvature is given by:
formula_38
Another form is as the divergence of the unit normal. A unit normal is given by formula_39 and the mean curvature is
formula_40
In fluid mechanics.
An alternate definition is occasionally used in fluid mechanics to avoid factors of two:
formula_41.
This results in the pressure according to the Young–Laplace equation inside an equilibrium spherical droplet being surface tension times formula_42; the two curvatures are equal to the reciprocal of the droplet's radius
formula_43.
Minimal surfaces.
A minimal surface is a surface which has zero mean curvature at all points. Classic examples include the catenoid, helicoid and Enneper surface. Recent discoveries include Costa's minimal surface and the Gyroid.
CMC surfaces.
An extension of the idea of a minimal surface are surfaces of constant mean curvature. The surfaces of unit constant mean curvature in hyperbolic space are called Bryant surfaces.
| [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": " S"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "\\theta"
},
{
"math_id": 5,
"text": "\\kappa_1"
},
{
"math_id": 6,
"text": "\\kappa_2"
},
{
"math_id": 7,
"text": "p\\in S"
},
{
"math_id": 8,
"text": "H = \\frac{1}{2\\pi}\\int_0^{2\\pi} \\kappa(\\theta) \\;d\\theta"
},
{
"math_id": 9,
"text": "H = {1 \\over 2} (\\kappa_1 + \\kappa_2)."
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "H=\\frac{1}{n}\\sum_{i=1}^{n} \\kappa_{i}."
},
{
"math_id": 12,
"text": "\\nabla"
},
{
"math_id": 13,
"text": "H\\vec{n} = g^{ij}\\nabla_i\\nabla_j X,"
},
{
"math_id": 14,
"text": " X(x) "
},
{
"math_id": 15,
"text": "\\vec{n}"
},
{
"math_id": 16,
"text": "g_{ij}"
},
{
"math_id": 17,
"text": "2 H = -\\nabla \\cdot \\hat n"
},
{
"math_id": 18,
"text": " 2 H = \\text{Trace}((\\mathrm{II})(\\mathrm{I}^{-1}))"
},
{
"math_id": 19,
"text": "S(x,y)"
},
{
"math_id": 20,
"text": "u, v"
},
{
"math_id": 21,
"text": "\\frac{l G-2 m F + n E}{2 ( E G - F^2)}"
},
{
"math_id": 22,
"text": "E = \\mathrm{I}(u,u)"
},
{
"math_id": 23,
"text": "F = \\mathrm{I}(u,v)"
},
{
"math_id": 24,
"text": "G = \\mathrm{I}(v,v)"
},
{
"math_id": 25,
"text": "l = \\mathrm{II}(u,u)"
},
{
"math_id": 26,
"text": "m = \\mathrm{II}(u,v)"
},
{
"math_id": 27,
"text": "n = \\mathrm{II}(v,v)"
},
{
"math_id": 28,
"text": "z = S(x, y)"
},
{
"math_id": 29,
"text": "\\begin{align}\n2 H & = -\\nabla \\cdot \\left(\\frac{\\nabla(z-S)}{|\\nabla(z - S)|}\\right) \\\\\n& = \\nabla \\cdot \\left(\\frac{\\nabla S-\\nabla z}\n{\\sqrt{1 + |\\nabla S|^2}}\\right) \\\\\n& = \n\\frac{\n\\left(1 + \\left(\\frac{\\partial S}{\\partial x}\\right)^2\\right) \\frac{\\partial^2 S}{\\partial y^2} - \n2 \\frac{\\partial S}{\\partial x} \\frac{\\partial S}{\\partial y} \\frac{\\partial^2 S}{\\partial x \\partial y} + \n\\left(1 + \\left(\\frac{\\partial S}{\\partial y}\\right)^2\\right) \\frac{\\partial^2 S}{\\partial x^2}\n}{\\left(1 + \\left(\\frac{\\partial S}{\\partial x}\\right)^2 + \\left(\\frac{\\partial S}{\\partial y}\\right)^2\\right)^{3/2}}.\n\\end{align}\n"
},
{
"math_id": 30,
"text": "\\nabla S=0"
},
{
"math_id": 31,
"text": "z = S(r)"
},
{
"math_id": 32,
"text": "2 H = \\frac{\\frac{\\partial^2 S}{\\partial r^2}}{\\left(1 + \\left(\\frac{\\partial S}{\\partial r}\\right)^2\\right)^{3/2}} + {\\frac{\\partial S}{\\partial r}}\\frac{1}{r \\left(1 + \\left(\\frac{\\partial S}{\\partial r}\\right)^2\\right)^{1/2}},"
},
{
"math_id": 33,
"text": "{\\frac{\\partial S}{\\partial r}} \\frac{1}{r}"
},
{
"math_id": 34,
"text": "z = S(r) = S\\left(\\sqrt{x^2 + y^2} \\right)"
},
{
"math_id": 35,
"text": "F(x,y,z)=0"
},
{
"math_id": 36,
"text": "\\nabla F=\\left( \\frac{\\partial F}{\\partial x}, \\frac{\\partial F}{\\partial y}, \\frac{\\partial F}{\\partial z} \\right)"
},
{
"math_id": 37,
"text": "\\textstyle \\mbox{Hess}(F)=\n\\begin{pmatrix}\n\\frac{\\partial^2 F}{\\partial x^2} & \\frac{\\partial^2 F}{\\partial x\\partial y} & \\frac{\\partial^2 F}{\\partial x\\partial z} \\\\\n\\frac{\\partial^2 F}{\\partial y\\partial x} & \\frac{\\partial^2 F}{\\partial y^2} & \\frac{\\partial^2 F}{\\partial y\\partial z} \\\\\n\\frac{\\partial^2 F}{\\partial z\\partial x} & \\frac{\\partial^2 F}{\\partial z\\partial y} & \\frac{\\partial^2 F}{\\partial z^2} \n\\end{pmatrix}\n.\n"
},
{
"math_id": 38,
"text": "H = \\frac{ \\nabla F\\ \\mbox{Hess}(F) \\ \\nabla F^{\\mathsf {T}} - |\\nabla F|^2\\, \\text{Trace}(\\mbox{Hess}(F)) } { 2|\\nabla F|^3 }"
},
{
"math_id": 39,
"text": "\\frac{\\nabla F}{|\\nabla F|}"
},
{
"math_id": 40,
"text": "H = -{\\frac{1}{2}}\\nabla\\cdot \\left(\\frac{\\nabla F}{|\\nabla F|}\\right)."
},
{
"math_id": 41,
"text": "H_f = (\\kappa_1 + \\kappa_2) \\,"
},
{
"math_id": 42,
"text": "H_f"
},
{
"math_id": 43,
"text": "\\kappa_1 = \\kappa_2 = r^{-1} \\,"
}
]
| https://en.wikipedia.org/wiki?curid=684210 |
6842670 | Extensional tectonics | Geological process of stretching planet crust
Extensional tectonics is concerned with the structures formed by, and the tectonic processes associated with, the stretching of a planetary body's crust or lithosphere.
Deformation styles.
The types of structure and the geometries formed depend on the amount of stretching involved. Stretching is generally measured using the parameter "β", known as the "beta factor", where
formula_0
"t"0 is the initial crustal thickness and "t"1 is the final crustal thickness. It is also the equivalent of the strain parameter "stretch".
Low beta factor.
In areas of relatively low crustal stretching, the dominant structures are high to moderate angle normal faults, with associated half grabens and tilted fault blocks.
High beta factor.
In areas of high crustal stretching, individual extensional faults may become rotated to too low a dip to remain active and a new set of faults may be generated. Large displacements may juxtapose syntectonic sediments against metamorphic rocks of the mid to lower crust and such structures are called detachment faults. In some cases the detachments are folded such that the metamorphic rocks are exposed within antiformal closures and these are known as metamorphic core complexes.
Passive margins.
Passive margins above a weak layer develop a specific set of extensional structures. Large listric regional faults dipping towards the ocean develop with rollover anticlines and related crestal collapse grabens. On some margins, such as the Niger Delta, large counter-regional faults are observed, dipping back towards the continent, forming large grabenal mini-basins with antithetic regional faults.
Geological environments associated with extensional tectonics.
Areas of extensional tectonics are typically associated with:
Continental rifts.
Rifts are linear zones of localized crustal extension. They range in width from somewhat less than 100 km up to several hundred km, consisting of one or more normal faults and related fault blocks. In individual rift segments, one polarity (i.e. dip direction) normally dominates, giving a half-graben geometry. Other common geometries include metamorphic core complexes and tilted blocks. Examples of active continental rifts are the Baikal Rift Zone and the East African Rift.
Divergent plate boundaries.
Divergent plate boundaries are zones of active extension as the crust newly formed at the mid-ocean ridge system becomes involved in the opening process.
Gravitational spreading of zones of thickened crust.
Zones of thickened crust, such as those formed during continent-continent collision tend to spread laterally; this spreading occurs even when the collisional event is still in progress. After the collision has finished the zone of thickened crust generally undergoes gravitational collapse, often with the formation of very large extensional faults. Large-scale Devonian extension, for example, followed immediately after the end of the Caledonian orogeny particularly in East Greenland and western Norway.
Releasing bends along strike-slip faults.
When a strike-slip fault is offset along strike such as to create a gap e.g. a left-stepping bend on a sinistral fault, a zone of extension or transtension is generated. Such bends are known as "releasing bends" or "extensional stepovers" and often form pull-apart basins or "rhombochasms". Examples of active pull-apart basins include the Dead Sea, formed at a left-stepping offset of the sinistral sense Dead Sea Transform system, and the Sea of Marmara, formed at a right-stepping offset on the dextral sense North Anatolian Fault system.
Back-arc basins.
Back-arc basins form behind many subduction zones due to the effects of oceanic trench roll-back which leads to a zone of extension parallel to the island arc.
Passive margins.
A passive margin built out over a weaker layer, such as an overpressured mudstone or salt, tends to spread laterally under its own weight. The inboard part of the sedimentary prism is affected by extensional faulting, balanced by outboard shortening.
| [
{
"math_id": 0,
"text": " \\beta = \\frac{t_1}{t_0} \\,,"
}
]
| https://en.wikipedia.org/wiki?curid=6842670 |
68428832 | Samarium(III) oxalate | Chemical compound
Samarium(III) oxalate is an inorganic compound, a salt of samarium and oxalic acid with the formula Sm2(C2O4)3. The compound does not dissolve in water and forms a crystalline hydrate with yellow crystals.
Synthesis.
Precipitation of soluble samarium salts with oxalic acid:
formula_0
Also a reaction of samarium nitrate and oxalic acid in an aqueous solution:
formula_1
Physical properties.
Samarium(III) oxalate forms a crystalline hydrate of the composition Sm2(C2O4)3 • 10H2O with yellow crystals.
Chemical properties.
Samarium(III) oxalate decomposes on heating:
formula_2
The crystalline hydrate Sm2(C2O4)3 • 10H2O decomposes stepwise.
| [
{
"math_id": 0,
"text": "\\mathsf{ 2SmCl_3 + 3H_2C_2O_4 \\ \\xrightarrow{}\\ Sm_2(C_2O_4)_3\\downarrow + 6HCl }"
},
{
"math_id": 1,
"text": "\\mathsf{ 2Sm(NO_3)_3 + 3H_2C_2O_4 \\ \\xrightarrow{}\\ Sm_2(C_2O_4)_3\\downarrow + 6HNO_3 }"
},
{
"math_id": 2,
"text": "\\mathsf{ Sm_2(C_2O_4)_3 \\ \\xrightarrow{800^oC}\\ Sm_2O_3 + 3CO_2 + 3CO }"
}
]
| https://en.wikipedia.org/wiki?curid=68428832 |
68429957 | Next-fit bin packing | Next-fit is an online algorithm for bin packing. Its input is a list of items of different sizes. Its output is a "packing" - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The next-fit algorithm uses the following heuristic:
Next-Fit is a bounded space algorithm - it requires only one partially-filled bin to be open at any time. The algorithm was studied by David S. Johnson in his doctoral thesis in 1973.
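The rule is short enough to state directly in code. The following Python sketch (an illustration, not from the cited sources; item sizes are assumed to be at most the bin capacity) packs a list with Next-Fit and returns the resulting bins.

def next_fit(sizes, capacity=1.0):
    # keep exactly one bin open; close it whenever the next item does not fit
    bins, current, load = [], [], 0.0
    for s in sizes:
        if load + s > capacity:
            bins.append(current)
            current, load = [], 0.0
        current.append(s)
        load += s
    if current:
        bins.append(current)
    return bins

print(next_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))
# [[0.5], [0.7], [0.5, 0.2], [0.4, 0.2], [0.5, 0.1]]: five bins, since only one bin is ever open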
Run time.
The running time of NextFit can be bounded by formula_0, where formula_1 is the number of items in the list.
Approximation ratio.
Denote by NF(L) the number of bins used by NextFit, and by OPT(L) the optimal number of bins possible for the list L.
Upper bound.
Then, for each list formula_2, formula_3. The intuition behind the proof is the following: the number of bins used by this algorithm is no more than twice the optimal number of bins, because it is impossible for two bins to both be at most half full. Such a possibility would imply that at some point, exactly one bin was at most half full and a new one was opened to accommodate an item of size at most formula_4. But since the first one had at least a space of formula_4 left, the algorithm would not open a new bin for any item whose size is at most formula_4. Only after the bin fills to more than formula_4, or if an item of size larger than formula_4 arrives, may the algorithm open a new bin. Thus if we have formula_5 bins, at least formula_6 bins are more than half full. Therefore, formula_7. Because formula_8 is a lower bound of the optimum value formula_9, we get that formula_10 and therefore formula_11.
Lower bound.
For each formula_12, there exists a list formula_2 such that formula_13 and formula_14.
The family of lists for which it holds that formula_15 is given by formula_16 with formula_17. The optimal solution for this list has formula_18 bins containing two items with size formula_19 and one bin with formula_20 items with size formula_21 (i.e., formula_22 bins total), while the solution generated by NF has formula_20 bins with one item of size formula_19 and one item with size formula_23.
Bounded item size.
If the maximum size of an item is formula_24, then the asymptotic approximation ratio formula_25 satisfies formula_26 for formula_27, and formula_28 for formula_29.
Other properties.
Next-Fit packs a list and its inverse into the same number of bins.
Next-"k"-Fit (NkF).
Next-k-Fit is a variant of Next-Fit, but instead of keeping only one bin open, the algorithm keeps the last formula_30 bins open and chooses the first bin in which the item fits.
For formula_31, NkF delivers results that are improved compared to the results of NF; however, increasing formula_30 to constant values larger than formula_32 improves the algorithm no further in its worst-case behavior. If algorithm formula_33 is an AlmostAnyFit-algorithm and formula_34, then formula_35. | [
{
"math_id": 0,
"text": "\\mathcal{O}(n)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "NF(L) \\leq 2 \\cdot \\mathrm{OPT}(L) -1 "
},
{
"math_id": 4,
"text": "B/2"
},
{
"math_id": 5,
"text": "K"
},
{
"math_id": 6,
"text": "K-1"
},
{
"math_id": 7,
"text": "\\sum_{i \\in I} s(i)>\\tfrac{K-1}{2}B"
},
{
"math_id": 8,
"text": "\\tfrac{\\sum_{i \\in I} s(i)}{B}"
},
{
"math_id": 9,
"text": "\\mathrm{OPT}"
},
{
"math_id": 10,
"text": "K-1<2\\mathrm{OPT}"
},
{
"math_id": 11,
"text": "K \\leq 2\\mathrm{OPT}"
},
{
"math_id": 12,
"text": "N \\in \\mathbb{N}"
},
{
"math_id": 13,
"text": "\\mathrm{OPT}(L) = N"
},
{
"math_id": 14,
"text": "NF(L) = 2 \\cdot \\mathrm{OPT}(L) -2"
},
{
"math_id": 15,
"text": "NF(L) = 2 \\cdot \\mathrm{OPT}(L) - 2"
},
{
"math_id": 16,
"text": "L := \\left(\\frac{1}{2},\\frac{1}{2(N-1)},\\frac{1}{2},\\frac{1}{2(N-1)}, \\dots, \\frac{1}{2},\\frac{1}{2(N-1)}\\right)"
},
{
"math_id": 17,
"text": "|L| = 4(N-1)"
},
{
"math_id": 18,
"text": "N - 1"
},
{
"math_id": 19,
"text": "1/2"
},
{
"math_id": 20,
"text": "2(N-1)"
},
{
"math_id": 21,
"text": "1/2(N-1)"
},
{
"math_id": 22,
"text": "N"
},
{
"math_id": 23,
"text": "1/(2(N-1))"
},
{
"math_id": 24,
"text": "\\alpha"
},
{
"math_id": 25,
"text": "R_{NF}^\\infty"
},
{
"math_id": 26,
"text": "R_{NF}^\\infty(\\text{size}\\leq\\alpha) \\leq 2"
},
{
"math_id": 27,
"text": "\\alpha \\geq 1/2"
},
{
"math_id": 28,
"text": "R_{NF}^\\infty(\\text{size}\\leq\\alpha) \\leq 1/(1-\\alpha)"
},
{
"math_id": 29,
"text": "\\alpha \\leq 1/2"
},
{
"math_id": 30,
"text": "k"
},
{
"math_id": 31,
"text": "k\\geq 2"
},
{
"math_id": 32,
"text": "2"
},
{
"math_id": 33,
"text": "A"
},
{
"math_id": 34,
"text": "m = \\lfloor 1/\\alpha\\rfloor \\geq 2"
},
{
"math_id": 35,
"text": "R_{A}^{\\infty}(\\text{size}\\leq\\alpha)\\leq R_{N2F}^{\\infty}(\\text{size}\\leq\\alpha) = 1+1/m"
}
]
| https://en.wikipedia.org/wiki?curid=68429957 |
68430030 | Best-fit bin packing | Best-fit is an online algorithm for bin packing. Its input is a list of items of different sizes. Its output is a "packing" - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The best-fit algorithm uses the following heuristic:
Approximation ratio.
Denote by BF(L) the number of bins used by Best-Fit, and by OPT(L) the optimal number of bins possible for the list L. The analysis of BF(L) was done in several steps.
Worst-fit.
Worst-Fit is a "dual" algorithm to best-fit: it tries to put the next item in the bin with "minimum" load.
This algorithm can behave as badly as Next-Fit, and will do so on the worst-case list for Next-Fit, for which formula_6. Furthermore, it holds that formula_7.
Since Worst-Fit is an AnyFit-algorithm, there exists an AnyFit-algorithm such that formula_8. | [
{
"math_id": 0,
"text": "BF(L) \\leq 1.7\\mathrm{OPT}+3"
},
{
"math_id": 1,
"text": "BF(L) \\leq 1.7\\mathrm{OPT}+2"
},
{
"math_id": 2,
"text": "BF(L) \\leq \\lceil 1.7\\mathrm{OPT}\\rceil"
},
{
"math_id": 3,
"text": "FF(L) \\leq \\lfloor 1.7\\mathrm{OPT}\\rfloor"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "BF(L)"
},
{
"math_id": 6,
"text": "NF(L) = 2 \\cdot \\mathrm{OPT}(L) -2 "
},
{
"math_id": 7,
"text": "R_{WF}^{\\infty}(\\text{size}\\leq \\alpha) = R_{NF}^{\\infty}(\\text{size}\\leq \\alpha)"
},
{
"math_id": 8,
"text": "R_{AF}^{\\infty}(\\alpha) = R_{NF}^{\\infty}(\\alpha)"
}
]
| https://en.wikipedia.org/wiki?curid=68430030 |
68430116 | First-fit bin packing | First-fit (FF) is an online algorithm for bin packing. Its input is a list of items of different sizes. Its output is a "packing" - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The first-fit algorithm uses the following heuristic:
Approximation ratio.
Denote by FF(L) the number of bins used by First-Fit, and by OPT(L) the optimal number of bins possible for the list L. The analysis of FF(L) was done in several steps.
Below we explain the proof idea.
Asymptotic ratio at most 2.
Here is a proof that the asymptotic ratio is at most 2. If there is an FF bin with sum less than 1/2, then every item placed in a later bin has size more than 1/2 (otherwise it would have been placed in that earlier bin), so every later bin has sum more than 1/2. Therefore, all FF bins except at most one have sum at least 1/2. All optimal bins have sum at most 1, so the sum of all sizes is at most OPT. Therefore, the number of FF bins is at most 1+OPT/(1/2) = 2*OPT+1.
Asymptotic ratio at most 1.75.
Consider first a special case in which all item sizes are at most 1/2. If there is an FF bin with sum less than 2/3, then every remaining item has size more than 1/3. Since the sizes are at most 1/2, all following bins (except maybe the last one) have at least two items, and sum larger than 2/3. Therefore, all FF bins except at most two have sum at least 2/3, and the number of FF bins is at most 2+OPT/(2/3) = 3/2*OPT+2.
The "problematic" items are those with size larger than 1/2. So, to improve the analysis, let's give every item larger than 1/2 a "bonus" of R. Define the "weight" of an item as its size plus its bonus. Define the weight of a set of items as the sum of weights of its contents.
Now, the weight of each FF bin with one item (except at most one) is at least 1/2+R, and the weight of each FF bin with two or more items (except at most one) is 2/3. Taking R=1/6 yields that the weight of all FF bins is at least 2/3.
On the other hand, the weight of every bin in the optimal packing is at most 1+R = 7/6, since each such bin has at most one item larger than 1/2. Therefore, the total weight of all items is at most 7/6*OPT, and the number of FF bins is at most 2+(7/6*OPT/(2/3)) = 7/4*OPT+2.
Asymptotic ratio at most 1.7.
The following proof is adapted from.sec.1.2 Define the "weight" of an input item of size "x" as the size plus some "bonus" computed as follows:
formula_9
formula_10.
The asymptotic approximation ratio follows from two claims: (1) the weight of each bin in the optimal packing is at most 17/12, so the total weight of all items is at most 17/12*OPT; (2) the total weight of all bins in the FF packing is at least 5/6*(FF-3).
Therefore, asymptotically, the number of bins in the FF packing must be at most 17/10 * OPT.
For claim 1, it is sufficient to prove that, for any set "B" with sum at most 1, bonus("B") is at most 5/12. Indeed:
Therefore, the weight of "B" is at most 1+5/12 = 17/12.
For claim 2, consider first an FF bin "B" with a single item.
Consider now the FF bins "B" with two or more items.
Therefore, the total weight of all FF bins is at least 5/6*(FF - 3) (where we subtract 3 for the single one-item bin with sum<1/2, single two-item bin with sum<2/3, and the k-1 from the two-item bins with sum ≥ 2/3).
All in all, we get that 17/12*OPT ≥ 5/6*(FF-3), so FF ≤ 17/10*OPT+3.
Dósa and Sgall present a tighter analysis that gets rid of the 3, and get that FF ≤ 17/10*OPT.
Lower bound.
There are instances on which the performance bound of 1.7OPT is tight. The following example is based on. The bin capacity is 101, and:
Performance with divisible item sizes.
An important special case of bin-packing is that in which the item sizes form a "divisible sequence" (also called "factored"). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, and in addition, the largest item size divides the bin size, then FF always finds an optimal packing.Thm.3
Refined first-fit.
Refined-First-Fit (RFF) is another online algorithm for bin packing, that improves on the previously developed FF algorithm. It was presented by Andrew Chi-Chih Yao.
The algorithm.
The items are categorized into four classes, according to their sizes (where the bin capacity is 1): formula_11-pieces, with sizes in formula_12; formula_13-pieces, with sizes in formula_14; formula_15-pieces, with sizes in formula_16; and formula_17-pieces, with sizes in formula_18.
Similarly, the bins are categorized into four classes: 1, 2, 3 and 4.
Let formula_19 be a fixed integer. The next item formula_20 is assigned to a bin in one of the four classes, according to its size class.
Once the class of the item is selected, it is placed inside bins of that class using first-fit bin packing.
Note that RFF is not an Any-Fit algorithm since it may open a new bin despite the fact that the current item fits inside an open bin (from another class).
Approximation ratio.
RFF has an approximation guarantee of formula_24. There exists a family of lists formula_25 with formula_26 for formula_27. | [
{
"math_id": 0,
"text": "FF(L) \\leq 1.7\\mathrm{OPT}+3"
},
{
"math_id": 1,
"text": "FF(L) \\leq 1.7\\mathrm{OPT}+2"
},
{
"math_id": 2,
"text": "FF(L) \\leq \\lceil 1.7\\mathrm{OPT}\\rceil"
},
{
"math_id": 3,
"text": "FF(L) \\leq 1.7\\mathrm{OPT}+0.9"
},
{
"math_id": 4,
"text": "FF(L)"
},
{
"math_id": 5,
"text": "\\mathrm{OPT}"
},
{
"math_id": 6,
"text": "FF(L) \\leq 1.7\\mathrm{OPT}+0.7"
},
{
"math_id": 7,
"text": "FF(L) \\leq \\lfloor 1.7\\mathrm{OPT}\\rfloor"
},
{
"math_id": 8,
"text": "L"
},
{
"math_id": 9,
"text": "bonus(x) := \\begin{cases}\n0 & x \\leq 1/6\n\\\\\nx/2-1/12 & 1/6<x<1/3\n\\\\\n1/12 & 1/3 \\leq x \\leq 1/2\n\\\\\n1/3 & 1/2 < x\n\\end{cases}\n"
},
{
"math_id": 10,
"text": "weight(x) := x + bonus(x)"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "(1/2,1]"
},
{
"math_id": 13,
"text": "B_1"
},
{
"math_id": 14,
"text": "(2/5,1/2]"
},
{
"math_id": 15,
"text": "B_2"
},
{
"math_id": 16,
"text": "(1/3,2/5]"
},
{
"math_id": 17,
"text": "X"
},
{
"math_id": 18,
"text": "(0,1/3]"
},
{
"math_id": 19,
"text": "m \\in \\{6,7,8,9\\}"
},
{
"math_id": 20,
"text": "i \\in L"
},
{
"math_id": 21,
"text": "i"
},
{
"math_id": 22,
"text": "(mk)"
},
{
"math_id": 23,
"text": "k \\geq 1"
},
{
"math_id": 24,
"text": "RFF(L) \\leq (5/3) \\cdot \\mathrm{OPT}(L) +5 "
},
{
"math_id": 25,
"text": "L_k"
},
{
"math_id": 26,
"text": "RFF(L_k) = (5/3)\\mathrm{OPT}(L_k) +1/3"
},
{
"math_id": 27,
"text": "\\mathrm{OPT}(L) = 6k+1"
}
]
| https://en.wikipedia.org/wiki?curid=68430116 |
68430199 | Harmonic bin packing | Harmonic bin-packing is a family of online algorithms for bin packing. The input to such an algorithm is a list of items of different sizes. The output is a "packing" - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem.
The harmonic bin-packing algorithms rely on partitioning the items into categories based on their sizes, following a Harmonic progression. There are several variants of this idea.
Harmonic-"k".
The Harmonic-"k" algorithm partitions the interval of sizes formula_0 harmonically into formula_1 pieces formula_2 for formula_3 and formula_4 such that formula_5. An item formula_6 is called an formula_7-item, if formula_8.
The algorithm divides the set of empty bins into formula_9 infinite classes formula_10 for formula_11, one bin type for each item type. A bin of type formula_10 is only used to pack items of type formula_12. Each bin of type formula_10 for formula_3 can contain exactly formula_12 formula_7-items. The algorithm now acts as follows: each formula_7-item with formula_3 is placed into the currently open bin of type formula_10; once that bin contains formula_12 items, it is closed and a new bin of type formula_10 is opened. The formula_13-items are packed into bins of type formula_14 using Next-Fit: if the current item does not fit into the open bin of type formula_14, that bin is closed and a new one is opened.
This algorithm was first described by Lee and Lee. It has a time complexity of formula_15 where "n" is the number of input items. At each step, there are at most formula_9 open bins that can be potentially used to place items, i.e., it is a "k"-bounded space algorithm.
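A minimal Python sketch of Harmonic-"k" (an illustration, not from Lee and Lee's paper; item sizes are assumed positive and at most the capacity, and floating-point boundary cases are handled naively):

def harmonic_k(sizes, k, capacity=1.0):
    closed = 0
    open_count = {j: 0 for j in range(1, k)}   # items in the open bin of each class j < k
    load_k, open_k = 0.0, False                # the single open class-k bin (packed next-fit)
    for s in sizes:
        j = min(k, int(capacity // s))         # class j: size in (capacity/(j+1), capacity/j]
        if j < k:
            open_count[j] += 1
            if open_count[j] == j:             # a class-j bin is closed once it holds j items
                closed += 1
                open_count[j] = 0
        else:                                  # small items go next-fit into class-k bins
            if open_k and load_k + s > capacity:
                closed += 1
                load_k = 0.0
            load_k += s
            open_k = True
    still_open = sum(1 for c in open_count.values() if c > 0) + (1 if open_k else 0)
    return closed + still_open

print(harmonic_k([0.6, 0.3, 0.45, 0.3, 0.25, 0.1], k=4))  # 4 bins: one closed and three still open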
Lee and Lee also studied the asymptotic approximation ratio. They defined a sequence formula_16, formula_17 for formula_18 and proved that for formula_19 it holds that formula_20. For formula_21 it holds that formula_22. Additionally, they presented a family of worst-case examples for which formula_23
Refined-Harmonic (RH).
The Refined-Harmonic combines ideas from the Harmonic-k algorithm with ideas from Refined-First-Fit. It places the items larger than formula_24 similarly to Refined-First-Fit, while the smaller items are placed using Harmonic-k. The intuition for this strategy is to reduce the large waste in bins containing pieces that are only slightly larger than formula_25.
The algorithm classifies the items with regard to the following intervals: formula_26, formula_27, formula_28, formula_29, formula_30, for formula_31, and formula_4. The algorithm places the formula_7-items as in Harmonic-k, while it follows a different strategy for the items in formula_32 and formula_33. There are four possibilities to pack formula_32-items and formula_33-items into bins: a bin containing a single formula_32-item, a bin containing a single formula_33-item, an formula_34-bin containing one formula_32-item and one formula_33-item, and an formula_35-bin containing two formula_33-items.
An formula_36-bin denotes a bin that is designated to contain a second formula_33-item. The algorithm uses the numbers N_a, N_b, N_ab, N_bb, and N_b' to count the numbers of the corresponding bins in the solution. Furthermore, N_c = N_b + N_ab.
Algorithm Refined-Harmonic-k for a list L = (i_1, ..., i_n):
1. N_a = N_b = N_ab = N_bb = N_b' = N_c = 0
2. If i_j is an I_k-piece
   then use algorithm Harmonic-k to pack it
3. else if i_j is an I_a-item
   then if N_b != 0
      then pack i_j into any I_b-bin; N_b--; N_ab++;
      else place i_j in a new (empty) bin; N_a++;
4. else if i_j is an I_b-item
   then if N_b' = 1
      then place i_j into the I_b'-bin; N_b' = 0; N_bb++;
5.    else if N_bb <= 3N_c
      then place i_j in a new bin and designate it as an I_b'-bin; N_b' = 1
      else if N_a != 0
         then place i_j into any I_a-bin; N_a--; N_ab++; N_c++
         else place i_j in a new bin; N_b++; N_c++
This algorithm was first described by Lee and Lee. They proved that for formula_37 it holds that formula_38.
Other variants.
Modified Harmonic (MH) has asymptotic ratio formula_39.
Modified Harmonic 2 (MH2) has asymptotic ratio formula_40.
Harmonic + 1 (H+1) has asymptotic ratio formula_41.
Harmonic ++ (H++) has asymptotic ratio formula_42 and formula_43. | [
{
"math_id": 0,
"text": "(0,1]"
},
{
"math_id": 1,
"text": "k-1"
},
{
"math_id": 2,
"text": "I_j := (1/(j+1),1/j] "
},
{
"math_id": 3,
"text": "1\\leq j < k"
},
{
"math_id": 4,
"text": "I_k := (0,1/k]"
},
{
"math_id": 5,
"text": "\\bigcup_{j=1}^k I_j = (0,1]"
},
{
"math_id": 6,
"text": "i \\in L"
},
{
"math_id": 7,
"text": "I_j"
},
{
"math_id": 8,
"text": "s(i) \\in I_j"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "B_j"
},
{
"math_id": 11,
"text": "1\\leq j \\leq k"
},
{
"math_id": 12,
"text": "j"
},
{
"math_id": 13,
"text": "I_k"
},
{
"math_id": 14,
"text": "B_k"
},
{
"math_id": 15,
"text": "\\mathcal{O}(n\\log(n))"
},
{
"math_id": 16,
"text": "\\sigma_1 := 1"
},
{
"math_id": 17,
"text": "\\sigma_{i+1} := \\sigma_{i}(\\sigma_{i}+1)"
},
{
"math_id": 18,
"text": "i \\geq 1"
},
{
"math_id": 19,
"text": "\\sigma_{l} < k <\\sigma_{l+1} "
},
{
"math_id": 20,
"text": "R_{Hk}^{\\infty} \\leq \\sum_{i = 1}^{l} 1/\\sigma_i + k/(\\sigma_{l+1}(k-1))"
},
{
"math_id": 21,
"text": "k \\rightarrow \\infty"
},
{
"math_id": 22,
"text": "R_{Hk}^{\\infty} \\approx 1.6910"
},
{
"math_id": 23,
"text": "R_{Hk}^{\\infty} = \\sum_{i = 1}^{l} 1/\\sigma_i + k/(\\sigma_{l+1}(k-1))"
},
{
"math_id": 24,
"text": "1/3"
},
{
"math_id": 25,
"text": "1/2"
},
{
"math_id": 26,
"text": "I_1 := (59/96,1]"
},
{
"math_id": 27,
"text": "I_a := (1/2,59/96]"
},
{
"math_id": 28,
"text": "I_2 := (37/96,1/2]"
},
{
"math_id": 29,
"text": "I_b := (1/3,37/96]"
},
{
"math_id": 30,
"text": "I_j := (1/(j+1),1/j]"
},
{
"math_id": 31,
"text": "j \\in \\{3, \\dots, k-1\\}"
},
{
"math_id": 32,
"text": "I_a"
},
{
"math_id": 33,
"text": "I_b"
},
{
"math_id": 34,
"text": "I_ab"
},
{
"math_id": 35,
"text": "I_bb"
},
{
"math_id": 36,
"text": "I_b'"
},
{
"math_id": 37,
"text": "k = 20"
},
{
"math_id": 38,
"text": "R^\\infty_{RH} \\leq 373/228"
},
{
"math_id": 39,
"text": "R_{MH}^{\\infty} \\leq 538/33 \\approx 1.61562"
},
{
"math_id": 40,
"text": "R_{MH2}^{\\infty} \\leq 239091/148304 \\approx 1.61217"
},
{
"math_id": 41,
"text": "R_{H+1}^\\infty \\geq 1.59217"
},
{
"math_id": 42,
"text": "R_{H++}^\\infty \\leq 1.58889"
},
{
"math_id": 43,
"text": "R_{H++}^{\\infty} \\geq 1.58333"
}
]
| https://en.wikipedia.org/wiki?curid=68430199 |
68430334 | Hitchin's equations | System of partial differential equations used in Higgs field theory
In mathematics, and in particular differential geometry and gauge theory, Hitchin's equations are a system of partial differential equations for a connection and Higgs field on a vector bundle or principal bundle over a Riemann surface, written down by Nigel Hitchin in 1987. Hitchin's equations are locally equivalent to the harmonic map equation for a surface into the symmetric space dual to the structure group. They also appear as a dimensional reduction of the self-dual Yang–Mills equations from four dimensions to two dimensions, and solutions to Hitchin's equations give examples of Higgs bundles and of holomorphic connections. The existence of solutions to Hitchin's equations on a compact Riemann surface follows from the stability of the corresponding Higgs bundle or the corresponding holomorphic connection, and this is the simplest form of the Nonabelian Hodge correspondence.
The moduli space of solutions to Hitchin's equations was constructed by Hitchin in the rank two case on a compact Riemann surface and was one of the first examples of a hyperkähler manifold to be constructed.
The nonabelian Hodge correspondence shows it is isomorphic to the Higgs bundle moduli space, and to the moduli space of holomorphic connections.
Using the metric structure on the Higgs bundle moduli space afforded by its description in terms of Hitchin's equations, Hitchin constructed the Hitchin system, a completely integrable system whose twisted generalization over a finite field was used by Ngô Bảo Châu in his proof of the fundamental lemma in the Langlands program, for which he was awarded the 2010 Fields Medal.
Definition.
The definition may be phrased for a connection on a vector bundle or principal bundle, with the two perspectives being essentially interchangeable. Here the definition in terms of principal bundles is presented, which is the form that appears in Hitchin's work.
Let formula_0 be a principal formula_1-bundle for a compact real Lie group formula_1 over a compact Riemann surface. For simplicity we will consider the case of formula_2 or formula_3, the special unitary group or special orthogonal group. Suppose formula_4 is a connection on formula_5, and let formula_6 be a section of the complex vector bundle formula_7, where formula_8 is the complexification of the adjoint bundle of formula_5, with fibre given by the complexification formula_9 of the Lie algebra formula_10 of formula_1. That is, formula_6 is a complex formula_11-valued formula_12-form on formula_13. Such a formula_6 is called a Higgs field in analogy with the auxiliary Higgs field appearing in Yang–Mills theory.
For a pair formula_14, Hitchin's equations assert that
formula_15
where formula_16 is the curvature form of formula_4, formula_17 is the formula_18-part of the induced connection on the complexified adjoint bundle formula_19, and formula_20 is the commutator of formula_11-valued one-forms in the sense of Lie algebra-valued differential forms.
Since formula_20 is of type formula_21, Hitchin's equations assert that the formula_22-component formula_23. Since formula_24, this implies that formula_17 is a Dolbeault operator on formula_8 and gives this Lie algebra bundle the structure of a holomorphic vector bundle. Therefore, the condition formula_25 means that formula_6 is a holomorphic formula_11-valued formula_12-form on formula_13. A pair consisting of a holomorphic vector bundle formula_26 with a holomorphic endomorphism-valued formula_12-form formula_6 is called a Higgs bundle, and so every solution to Hitchin's equations produces an example of a Higgs bundle.
Derivation.
Hitchin's equations can be derived as a dimensional reduction of the Yang–Mills equations from four dimensions to two dimensions. Consider a connection formula_4 on a trivial principal formula_1-bundle over formula_27. Then there exist four functions formula_28 such that
formula_29
where formula_30 are the standard coordinate differential forms on formula_27. The self-duality equations for the connection formula_4, a particular case of the Yang–Mills equations, can be written
formula_31
where formula_32 is the curvature two-form of formula_4. To dimensionally reduce to two dimensions, one imposes that the connection forms formula_33 are independent of the coordinates formula_34 on formula_27. Thus the components formula_35 define a connection on the restricted bundle over formula_36, and if one relabels formula_37, formula_38 then these are auxiliary formula_10-valued fields over formula_36.
If one now writes formula_39 and formula_40 where formula_41 is the standard complex formula_12-form on formula_42, then the self-duality equations above become precisely Hitchin's equations. Since these equations are conformally invariant on formula_36, they make sense on a conformal compactification of the plane, a Riemann surface.
| [
{
"math_id": 0,
"text": "P\\to \\Sigma"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "G=\\text{SU}(2)"
},
{
"math_id": 3,
"text": "G=\\text{SO}(3)"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "\\Phi"
},
{
"math_id": 7,
"text": "\\text{ad} P^{\\Complex} \\otimes T_{1,0}^* \\Sigma"
},
{
"math_id": 8,
"text": "\\text{ad} P^{\\Complex}"
},
{
"math_id": 9,
"text": "\\mathfrak{g}\\otimes \\Complex"
},
{
"math_id": 10,
"text": "\\mathfrak{g}"
},
{
"math_id": 11,
"text": "\\text{ad} P"
},
{
"math_id": 12,
"text": "(1,0)"
},
{
"math_id": 13,
"text": "\\Sigma"
},
{
"math_id": 14,
"text": "(A,\\Phi)"
},
{
"math_id": 15,
"text": "\\begin{cases}\nF_A + [\\Phi, \\Phi^*] = 0\\\\\n\\bar \\partial_A \\Phi = 0.\n\\end{cases}"
},
{
"math_id": 16,
"text": "F_A\\in \\Omega^2(\\Sigma, \\text{ad} P)"
},
{
"math_id": 17,
"text": "\\bar \\partial_A"
},
{
"math_id": 18,
"text": "(0,1)"
},
{
"math_id": 19,
"text": "\\text{ad} P \\otimes \\Complex"
},
{
"math_id": 20,
"text": "[\\Phi,\\Phi^*]"
},
{
"math_id": 21,
"text": "(1,1)"
},
{
"math_id": 22,
"text": "(0,2)"
},
{
"math_id": 23,
"text": "F_A^{0,2}=0"
},
{
"math_id": 24,
"text": "\\bar \\partial_A^2 = F_A^{0,2}"
},
{
"math_id": 25,
"text": "\\bar \\partial_A \\Phi = 0"
},
{
"math_id": 26,
"text": "E"
},
{
"math_id": 27,
"text": "\\Reals^4"
},
{
"math_id": 28,
"text": "A_1,A_2,A_3,A_4: \\Reals^4 \\to \\mathfrak{g}"
},
{
"math_id": 29,
"text": "A = A_1 dx^1 + A_2 dx^2 + A_3 dx^3 + A_4 dx^4"
},
{
"math_id": 30,
"text": "dx^i"
},
{
"math_id": 31,
"text": " \\begin{cases}\nF_{12} = F_{34}\\\\\nF_{13} = F_{42}\\\\\nF_{14} = F_{23}\n\\end{cases}"
},
{
"math_id": 32,
"text": "F = \\sum_{i<j} F_{ij} dx^i \\wedge dx^j"
},
{
"math_id": 33,
"text": "A_i"
},
{
"math_id": 34,
"text": "x^3,x^4"
},
{
"math_id": 35,
"text": "A_1 dx^1 + A_2 dx^2"
},
{
"math_id": 36,
"text": "\\Reals^2"
},
{
"math_id": 37,
"text": "A_3 = \\phi_1"
},
{
"math_id": 38,
"text": "A_4 = \\phi_2"
},
{
"math_id": 39,
"text": "\\phi = \\phi_1 - i \\phi_2"
},
{
"math_id": 40,
"text": "\\Phi = \\frac{1}{2} \\phi dz"
},
{
"math_id": 41,
"text": "dz = dx^1 + i dx^2"
},
{
"math_id": 42,
"text": "\\Reals^2 = \\Complex"
}
]
| https://en.wikipedia.org/wiki?curid=68430334 |
68431669 | First-fit-decreasing bin packing | Computer science algorithm
First-fit-decreasing (FFD) is an algorithm for bin packing. Its input is a list of items of different sizes. Its output is a "packing" - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem, so we use an approximately-optimal heuristic.
Description.
The FFD algorithm works as follows.
In short: FFD orders the items by descending size, and then calls first-fit bin packing.
An equivalent description of the FFD algorithm is as follows.
In the standard description, we loop over the items once, but keep many open bins. In the equivalent description, we loop over the items many times, but keep only a single open bin each time.
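A minimal Python sketch of the standard description (an illustration, not from the cited sources):

def first_fit_decreasing(sizes, capacity):
    loads = []                          # loads of the open bins, in opening order
    for s in sorted(sizes, reverse=True):
        for j, load in enumerate(loads):
            if load + s <= capacity:    # first open bin that still has room
                loads[j] += s
                break
        else:
            loads.append(s)             # no bin fits the item: open a new bin
    return len(loads)

print(first_fit_decreasing([44, 24, 24, 22, 21, 17, 8, 8, 6, 6], 60))  # 3, as in the first example below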
Performance analysis.
The performance of FFD was analyzed in several steps; below, formula_0 denotes the number of bins used by FFD for input set "S" and bin-capacity C. The first upper bound, formula_1, was proved by Johnson in his doctoral thesis; it was later improved to formula_2 and then to formula_3; finally, Dósa proved the tight bound formula_4, together with matching instances for which formula_5.
Worst-case example.
The lower bound example given by Dósa is the following. Consider the two bin configurations formula_6 and formula_7.
If there are 4 copies of formula_8 and 2 copies of formula_9 in the optimal solution, FFD will compute the following bins: 4 bins of the form formula_10, then one bin of the form formula_11, then one bin of the form formula_12, then one bin of the form formula_13, and finally one bin of the form formula_14.
That is, 8 bins total, while the optimum has only 6 bins. Therefore, the upper bound is tight, because formula_15.
This example can be extended to larger values of formula_16: in the optimal configuration there are 9"k"+6 bins: 6"k"+4 of type B1 and 3"k"+2 of type B2. But FFD needs at least 11"k"+8 bins, which is formula_17.
Performance with divisible item sizes.
An important special case of bin-packing is that in which the item sizes form a "divisible sequence" (also called "factored"). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. In this case, FFD always finds the optimal packing.Thm.2
Monotonicity properties.
Contrary to intuition, formula_0 is "not" a monotonic function of "C". Similarly, formula_0 is not a monotonic function of the sizes of items in "S": it is possible that an item shrinks in size, but the number of bins increases.
However, the FFD algorithm has an "asymptotic monotonicity" property, defined as follows.
Examples.
For example, suppose the input is: 44, 24, 24, 22, 21, 17, 8, 8, 6, 6. With capacity 60, FFD packs 3 bins: {44, 8, 8}, {24, 24, 6, 6}, and {22, 21, 17}; each bin is filled to exactly 60.
But with capacity 61, FFD packs 4 bins: {44, 17}, {24, 24, 8}, {22, 21, 8, 6}, and {6}.
This is because, with capacity 61, the 17 fits into the first bin, and thus blocks the way to the following 8, 8.
As another example, suppose the inputs are: 51, 28, 28, 28, 27, 25, 12, 12, 10, 10, 10, 10, 10, 10, 10, 10. With capacity 75, FFD packs 4 bins: {51, 12, 12}, {28, 28, 10}, {28, 27, 10, 10}, and {25, 10, 10, 10, 10, 10}.
But with capacity 76, it needs 5 bins: {51, 25}, {28, 28, 12}, {28, 27, 12}, {10, 10, 10, 10, 10, 10, 10}, and {10}.
Consider the above example with capacity 60. If the 17 becomes 16, then the resulting packing is: {44, 16}, {24, 24, 8}, {22, 21, 8, 6}, and {6}; so shrinking the item forces FFD to use 4 bins instead of 3.
Modified first-fit-decreasing.
Modified first fit decreasing (MFFD) improves on FFD for items larger than half a bin by classifying items by size into four size classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller items, respectively. Then it proceeds through five phases:
This algorithm was first studied by Johnson and Garey in 1985, who proved that formula_20. This bound was improved in 1995 by Yue and Zhang, who proved that formula_21.
Other variants.
Best-fit-decreasing (BFD) is very similar to FFD, except that after the list is sorted, it is processed by best-fit bin packing. Its asymptotic approximation ratio is the same as that of FFD, namely 11/9. | [
{
"math_id": 0,
"text": "FFD(S,C)"
},
{
"math_id": 1,
"text": "FFD(S,C) \\leq 11/9 \\mathrm{OPT}(S,C) +4"
},
{
"math_id": 2,
"text": "FFD(S,C) \\leq 11/9 \\mathrm{OPT}(S,C) +1"
},
{
"math_id": 3,
"text": "FFD(S,C) \\leq 11/9 \\mathrm{OPT}(S,C) + 7/9"
},
{
"math_id": 4,
"text": "FFD(S,C) \\leq 11/9 \\mathrm{OPT}(S,C) +6/9"
},
{
"math_id": 5,
"text": "FFD(S,C) = 11/9 \\mathrm{OPT}(S,C) +6/9"
},
{
"math_id": 6,
"text": "B_1 := \\{1/2+\\varepsilon, 1/4+\\varepsilon, 1/4 - 2\\varepsilon\\}"
},
{
"math_id": 7,
"text": "B_2 := \\{1/4 + 2\\varepsilon,1/4 + 2\\varepsilon, 1/4-2\\varepsilon, 1/4 - 2\\varepsilon\\}"
},
{
"math_id": 8,
"text": "B_1"
},
{
"math_id": 9,
"text": "B_2"
},
{
"math_id": 10,
"text": "\\{1/2+\\varepsilon,1/4+2\\varepsilon\\}"
},
{
"math_id": 11,
"text": "\\{1/4+\\varepsilon,1/4+\\varepsilon,1/4+\\varepsilon\\}"
},
{
"math_id": 12,
"text": "\\{1/4+\\varepsilon,1/4-2\\varepsilon,1/4-2\\varepsilon,1/4-2\\varepsilon\\}"
},
{
"math_id": 13,
"text": "\\{1/4-2\\varepsilon,1/4-2\\varepsilon,1/4-2\\varepsilon,1/4-2\\varepsilon\\}"
},
{
"math_id": 14,
"text": "\\{1/4-2\\varepsilon\\}"
},
{
"math_id": 15,
"text": "11/9 \\cdot 6 + 6/9 = 72/9 = 8"
},
{
"math_id": 16,
"text": "\\text{OPT}(S,C)"
},
{
"math_id": 17,
"text": "\\frac{11}{9}(6k+4)+\\frac{6}{9}"
},
{
"math_id": 18,
"text": "OPT(S,C)\\leq m"
},
{
"math_id": 19,
"text": "FFD(S,r\\cdot \\text{MinCap}(S,m))\\leq m"
},
{
"math_id": 20,
"text": "MFFD(S,C) \\leq (71/60)\\mathrm{OPT}(S,C) + (31/6)"
},
{
"math_id": 21,
"text": "MFFD(S,C) \\leq (71/60)\\mathrm{OPT}(S,C) + 1"
}
]
| https://en.wikipedia.org/wiki?curid=68431669 |
6843217 | Formation matrix | In statistics and information theory, the expected formation matrix of a likelihood function formula_0 is the matrix inverse of the Fisher information matrix of formula_0, while the observed formation matrix of formula_0 is the inverse of the observed information matrix of formula_0.
Currently, no notation for dealing with formation matrices is widely used, but in books and articles by Ole E. Barndorff-Nielsen and Peter McCullagh, the symbol formula_1 is used to denote the element of the i-th line and j-th column of the observed formation matrix. The geometric interpretation of the Fisher information matrix (metric) leads to a notation of formula_2 following the notation of the (contravariant) metric tensor in differential geometry. The Fisher information metric is denoted by formula_3 so that using Einstein notation we have formula_4.
These matrices appear naturally in the asymptotic expansion of the distribution of many statistics related to the likelihood ratio.
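As a small numerical illustration (added here, not part of the original article): for "n" independent observations from a normal distribution with parameters (μ, σ), the Fisher information matrix is diagonal with entries n/σ² and 2n/σ²; its inverse is the expected formation matrix, and multiplying the two recovers the identity, i.e. formula_4.

import numpy as np

n, sigma = 50, 2.0
# Fisher information for (mu, sigma) from n i.i.d. N(mu, sigma^2) observations
fisher = np.diag([n / sigma**2, 2 * n / sigma**2])
formation = np.linalg.inv(fisher)                     # expected formation matrix
print(formation)                                      # diag(sigma^2/n, sigma^2/(2n)) = diag(0.08, 0.04)
print(np.allclose(fisher @ formation, np.eye(2)))     # True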
| [
{
"math_id": 0,
"text": "L(\\theta)"
},
{
"math_id": 1,
"text": "j^{ij}"
},
{
"math_id": 2,
"text": "g^{ij}"
},
{
"math_id": 3,
"text": "g_{ij}"
},
{
"math_id": 4,
"text": " g_{ik}g^{kj} = \\delta_i^j"
}
]
| https://en.wikipedia.org/wiki?curid=6843217 |
68432221 | Neptunium arsenide | Chemical compound
Neptunium arsenide is a binary inorganic compound of neptunium and arsenic with the chemical formula NpAs. The compound forms crystals.
Synthesis.
Heating stoichiometric amounts of the pure elements:
formula_0
Physical properties.
Neptunium arsenide forms crystals of several modifications:
Neptunium arsenide becomes antiferromagnetic at 175 K.
| [
{
"math_id": 0,
"text": "\\mathsf{ Np + As \\ \\xrightarrow{T, I_2}\\ NpAs }"
}
]
| https://en.wikipedia.org/wiki?curid=68432221 |
68432447 | Next-fit-decreasing bin packing | Next-fit-decreasing (NFD) is an algorithm for bin packing. Its input is a list of items of different sizes. Its output is a "packing" - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The NFD algorithm uses the following heuristic:
In short: NFD orders the items by descending size, and then calls next-fit bin packing.
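A minimal Python sketch (an illustration, not from the cited sources):

def next_fit_decreasing(sizes, capacity=1.0):
    bins, load = 0, capacity            # start "full" so the first item opens a bin
    for s in sorted(sizes, reverse=True):
        if load + s > capacity:         # Next-Fit rule: close the open bin, open a new one
            bins += 1
            load = 0.0
        load += s
    return bins

print(next_fit_decreasing([0.5, 0.1, 0.6, 0.4, 0.2, 0.2]))  # 3 bins: {0.6}, {0.5, 0.4}, {0.2, 0.2, 0.1}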
Performance upper bound.
Baker and Coffman proved that, for every integer "r", when the size of all items is at most 1/"r", the asymptotic approximation ratio of NFD satisfies formula_0, where formula_1 is a sequence whose first elements are approximately 1.69103, 1.42312, 1.30238. In particular, taking "r"=1 implies that formula_2.
Later, NFD has also been analyzed probabilistically.
Variants.
Next-Fit packs a list and its inverse into the same number of bins. Therefore, Next-Fit-Increasing has the same performance as Next-Fit-Decreasing.
However, Next-Fit-Increasing performs better when there are general cost structures. | [
{
"math_id": 0,
"text": "R^{\\infty}_{NFD}(\\text{size}\\leq 1/r) = h_{\\infty}(r)"
},
{
"math_id": 1,
"text": "h_{\\infty}(r)"
},
{
"math_id": 2,
"text": "R^{\\infty}_{NFD} = h_{\\infty}(1) \\approx 1.69103"
}
]
| https://en.wikipedia.org/wiki?curid=68432447 |
68435956 | Neptunium diarsenide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Neptunium diarsenide is a binary inorganic compound of neptunium and arsenic with the chemical formula NpAs2. The compound forms crystals.
Synthesis.
Heating stoichiometric amounts of neptunium hydride and arsenic:
formula_0
Physical properties.
Neptunium diarsenide forms crystals of the tetragonal system, space group "P"4/"nmm", cell parameters a = 0.3958 nm, c = 0.8098 nm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ NpH_x + 2As \\ \\xrightarrow{450^oC}\\ NpAs_2 + (x/2)H_2 }"
}
]
| https://en.wikipedia.org/wiki?curid=68435956 |
68436355 | Limiting amplitude principle | Mathematical concept for solving the Helmholtz equation
In mathematics, the limiting amplitude principle is a concept from operator theory and scattering theory used for choosing a particular solution to the Helmholtz equation. The choice is made by considering a particular time-dependent problem of the forced oscillations due to the action of a periodic force.
The principle was introduced by Andrey Nikolayevich Tikhonov and Alexander Andreevich Samarskii.
It is closely related to the limiting absorption principle (1905) and the Sommerfeld radiation condition (1912).
The terminology, both the limiting absorption principle and the limiting amplitude principle, was introduced by Aleksei Sveshnikov.
Formulation.
To find which solution to the Helmholtz equation with nonzero right-hand side
formula_0
with some fixed formula_1, corresponds to the outgoing waves,
one considers the wave equation with the source term,
formula_2
with zero initial data formula_3. A particular solution to the Helmholtz equation corresponding to outgoing waves is obtained as the limit
formula_4
for large times.
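For instance, for a point source (a standard illustrative special case, not taken from the references cited here), the limiting amplitude procedure singles out the outgoing spherical wave rather than the incoming one:
```latex
% Illustrative special case (assumption of this example): F(x) = \delta(x).
\[
  v(x) \;=\; \lim_{t\to+\infty} u(x,t)\,e^{ikt}
       \;=\; \frac{e^{ik|x|}}{4\pi|x|},
\]
% i.e. the outgoing fundamental solution of the Helmholtz equation;
% combined with the time factor e^{-ikt} it gives an outgoing spherical
% wave, while the incoming solution e^{-ik|x|}/(4\pi|x|) is rejected.
```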
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta v(x)+k^2 v(x)=-F(x),\\quad x\\in\\R^3,"
},
{
"math_id": 1,
"text": "k>0"
},
{
"math_id": 2,
"text": "(\\Delta-\\partial_t^2)u(x,t)=-F(x)e^{-i k t},\\quad t\\ge 0, \\quad x\\in\\R^3,"
},
{
"math_id": 3,
"text": "u(x,0)=0,\\,\\partial_t u(x,0)=0"
},
{
"math_id": 4,
"text": "v(x)=\\lim_{t\\to +\\infty}u(x,t)e^{i k t}"
}
]
| https://en.wikipedia.org/wiki?curid=68436355 |
68437256 | Plethystic exponential | In mathematics, the plethystic exponential is a certain operator defined on (formal) power series which, like the usual exponential function, translates addition into multiplication. This exponential operator appears naturally in the theory of symmetric functions, as a concise relation between the generating series for elementary, complete and power sums homogeneous symmetric polynomials in many variables. Its name comes from the operation called plethysm, defined in the context of so-called lambda rings.
In combinatorics, the plethystic exponential is a generating function for many well studied sequences of integers, polynomials or power series, such as the number of integer partitions. It is also an important technique in the enumerative combinatorics of unlabelled graphs, and many other combinatorial objects.
In geometry and topology, the plethystic exponential of a certain geometric/topological invariant of a space determines the corresponding invariant of its symmetric products.
Definition, main properties and basic examples.
Let formula_0 be a ring of formal power series in the variable formula_1, with coefficients in a commutative ring formula_2. Denote by
formula_3
the ideal consisting of power series without constant term. Then, given formula_4, its plethystic exponential formula_5 is given by
formula_6
where formula_7 is the usual exponential function. It is readily verified that (writing simply formula_5 when the variable is understood):
formula_8
Some basic examples are:
formula_9
In this last example, formula_10 is the number of partitions of formula_11.
The plethystic exponential can also be defined for power series rings in many variables.
Product-sum formula.
The plethystic exponential can be used to provide numerous product-sum identities. This is a consequence of a product formula for plethystic exponentials themselves. If formula_12 denotes a formal power series with real coefficients formula_13, then it is not difficult to show that formula_14. The analogous product expression also holds in the many variables case. One particularly interesting case is its relation to integer partitions and to the cycle index of the symmetric group.
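A quick computational check of this product formula (an illustrative sympy sketch, not part of the original text), taking formula_13 equal to 1 for every "k", so that the coefficients of the plethystic exponential are the partition numbers:
```python
# Expand PE[f](x) for f(x) = x/(1-x), i.e. a_k = 1 for every k, and compare
# with the product formula prod_{k>=1} (1 - x^k)^{-1}.
from sympy import symbols, exp, series, Integer

x = symbols('x')
N = 8  # truncation order

def plethystic_exponential(f, order):
    # PE[f](x) = exp( sum_{k>=1} f(x^k)/k ), truncated at the given order.
    s = sum(f(x**k) / k for k in range(1, order + 1))
    return series(exp(s), x, 0, order).removeO()

pe = plethystic_exponential(lambda t: t / (1 - t), N)

product_side = Integer(1)
for k in range(1, N + 1):
    product_side *= (1 - x**k)**(-1)
product_side = series(product_side, x, 0, N).removeO()

print(pe.expand())   # 1 + x + 2*x**2 + 3*x**3 + 5*x**4 + 7*x**5 + 11*x**6 + 15*x**7
assert (pe - product_side).expand() == 0
```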
Relation with symmetric functions.
Working with variables formula_15, denote by formula_16 the complete homogeneous symmetric polynomial, that is the sum of all monomials of degree "k" in the variables formula_17, and by formula_18 the elementary symmetric polynomials. Then, the formula_16 and the formula_18 are related to the power sum polynomials: formula_19 by Newton's identities, that can succinctly be written, using plethystic exponentials, as:
formula_20
formula_21
Macdonald's formula for symmetric products.
Let "X" be a finite CW complex, of dimension "d", with Poincaré polynomialformula_22where formula_23 is its "k"th Betti number. Then the Poincaré polynomial of the "n"th symmetric product of "X", denoted formula_24, is obtained from the series expansion:formula_25
The plethystic programme in physics.
In a series of articles, a group of theoretical physicists, including Bo Feng, Amihay Hanany and Yang-Hui He, proposed a programme for systematically counting single and multi-trace gauge invariant operators of supersymmetric gauge theories. In the case of quiver gauge theories of D-branes probing Calabi–Yau singularities, this count is codified in the plethystic exponential of the Hilbert series of the singularity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R[[x]]"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "R^0[[x]] \\subset R[[x]]"
},
{
"math_id": 4,
"text": "f(x)\\in R^0[[x]]"
},
{
"math_id": 5,
"text": "\\text{PE}[f]"
},
{
"math_id": 6,
"text": "\\text{PE}[f](x)= \\exp \\left( \\sum_{k=1}^{\\infty} \\frac{f(x^k)}{k} \\right)"
},
{
"math_id": 7,
"text": "\\exp(\\cdot)"
},
{
"math_id": 8,
"text": "\\begin{align}[ll]\n\\text{PE}[0] & = 1\\\\ \n\\text{PE}[f+g] & = \\text{PE}[f] \\text{PE}[g]\\\\ \n\\text{PE}[-f] & = \\text{PE}[f]^{-1}\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}[ll]\n\\text{PE}[x^n] & = \\frac{1}{1-x^n}, n \\in \\mathbb{N} \\\\ \n\\text{PE}\\left[ \\frac{x}{1-x} \\right] & = 1+\\sum_{n\\geq1}p(n)x^{n}\n\\end{align}"
},
{
"math_id": 10,
"text": "p(n)"
},
{
"math_id": 11,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 12,
"text": "f(x)=\\sum_{k=1}^{\\infty} a_k x^k"
},
{
"math_id": 13,
"text": "a_k"
},
{
"math_id": 14,
"text": "\\text{PE}[f](x)=\\prod_{k=1}^\\infty (1-x^k)^{-a_k} "
},
{
"math_id": 15,
"text": "x_1, x_2, \\ldots, x_n"
},
{
"math_id": 16,
"text": "h_k"
},
{
"math_id": 17,
"text": "x_i"
},
{
"math_id": 18,
"text": "e_k"
},
{
"math_id": 19,
"text": "p_k=x_1^k + \\cdots + x_n^k"
},
{
"math_id": 20,
"text": " \\sum_{n=0}^\\infty h_n \\,t^n = \\text{PE}[p_1 \\,t] = \\text{PE}[x_1 t + \\cdots + x_n t] "
},
{
"math_id": 21,
"text": " \\sum_{n=0}^\\infty (-1)^n e_n \\,t^n = \\text{PE}[- p_1 \\,t] = \\text{PE}[-x_1 t - \\cdots - x_n t] "
},
{
"math_id": 22,
"text": "P_X (t) = \\sum_{k=0}^d b_k(X) \\, t^k"
},
{
"math_id": 23,
"text": "b_k(X)"
},
{
"math_id": 24,
"text": "\\operatorname{Sym}^n (X)"
},
{
"math_id": 25,
"text": "\\text{PE}[P_X(-t)\\,x] = \\prod_{k=0}^d \\left(1-t^k x\\right)^{(-1)^{k+1}b_{k}(X)} = \\sum_{n\\geq 0} P_{\\operatorname{Sym}^n(X)}(-t) \\, x^n "
}
]
| https://en.wikipedia.org/wiki?curid=68437256 |
6843863 | Pure spinor | Class of spinors constructed using Clifford algebras
In the domain of mathematics known as representation theory, pure spinors (or simple spinors) are spinors that are annihilated, under the Clifford algebra representation, by a maximal isotropic subspace of a vector space formula_0 with respect to a scalar product formula_1.
They were introduced by Élie Cartan in the 1930s and further developed by Claude Chevalley.
They are a key ingredient in the study of spin structures and higher dimensional generalizations of twistor theory, introduced by Roger Penrose in the 1960s.
They have been applied to the study of supersymmetric Yang-Mills theory in 10D, superstrings, generalized complex structures
and parametrizing solutions of integrable hierarchies.
Clifford algebra and pure spinors.
Consider a complex vector space formula_0, with either even dimension formula_2 or odd dimension formula_3, and a nondegenerate complex scalar product
formula_4, with values formula_5 on pairs of vectors formula_6.
The Clifford algebra formula_7 is the quotient of the full tensor algebra
on formula_0 by the ideal generated by the relations
formula_8
Spinors are modules of the Clifford algebra, and so in particular there is an action of the
elements of formula_0 on the space of spinors. The complex subspace formula_9 that annihilates
a given nonzero spinor formula_10 has dimension formula_11. If formula_12 then formula_10 is said to be a pure spinor. In terms of stratification of spinor modules by orbits of the spin group formula_13, pure spinors correspond to the smallest orbits, which are the Shilov boundary of the stratification by the orbit types of the spinor representation on the irreducible spinor (or half-spinor) modules.
Pure spinors, defined up to projectivization, are called projective pure spinors. For formula_14 of even dimension formula_15, the space of projective pure spinors is the homogeneous space
formula_16; for formula_14 of odd dimension formula_17, it is formula_18.
Irreducible Clifford module, spinors, pure spinors and the Cartan map.
The irreducible Clifford/spinor module.
Following Cartan and Chevalley,
we may view formula_19 as a direct sum
formula_20
where formula_21 is a totally isotropic subspace of dimension formula_22, and formula_23 is its dual space, with scalar product defined as
formula_24
or
formula_25
respectively.
The Clifford algebra representation formula_26 as endomorphisms of the irreducible Clifford/spinor module formula_27, is generated by the linear elements formula_28, which act as
formula_29
for either formula_30 or formula_31, and
formula_32
for formula_31, when formula_10 is homogeneous of degree formula_33.
Pure spinors and the Cartan map.
A pure spinor formula_34 is defined to be any element formula_35 that is annihilated by a maximal isotropic subspace formula_36 with respect to the scalar product formula_37. Conversely, given a maximal isotropic subspace it is possible to determine the pure spinor that annihilates it, up to multiplication by a complex number, as follows.
Denote the Grassmannian of maximal isotropic (formula_22-dimensional) subspaces of formula_19 as formula_38. The Cartan map
formula_39
is defined, for any element formula_40, with basis formula_41, to have value
formula_42
i.e. the image of formula_43 under the endomorphism formed from taking the product of the Clifford representation endomorphisms
formula_44, which is independent of the choice of basis formula_45.
This is a formula_46-dimensional subspace, due to the isotropy conditions,
formula_47
which imply
formula_48
and hence formula_49 defines an element of the projectivization formula_50 of the irreducible Clifford module formula_51.
It follows from the isotropy conditions that, if the projective class formula_52 of a spinor formula_53 is in the image formula_49 and formula_54, then
formula_55
So any spinor formula_34 with formula_56 is annihilated, under the Clifford representation, by all elements of formula_57. Conversely, if formula_34 is annihilated by formula_58 for all formula_59, then formula_56.
If formula_60 is even dimensional, there are two connected components in the isotropic Grassmannian formula_38, which get mapped, under formula_61, into the two half-spinor subspaces formula_62 in the direct sum decomposition
formula_63
where formula_64 and formula_65 consist, respectively, of the even and odd degree elements of formula_66 .
The Cartan relations.
Define a set of bilinear forms formula_67 on the spinor module formula_27,
with values in formula_68 (which are isomorphic via the scalar product formula_1), by
formula_69
where, for homogeneous elements formula_70,
formula_71 and volume form formula_72 on formula_27,
formula_73
As shown by Cartan, pure spinors formula_74 are uniquely determined by the fact that they satisfy the following set of homogeneous quadratic equations, known as the Cartan relations:
formula_75
on the standard irreducible spinor module.
These determine the image of the submanifold of maximal isotropic subspaces of the vector space formula_76 with respect to the scalar product formula_1, under the Cartan map, which defines an embedding of the Grassmannian of isotropic subspaces of formula_19 in the projectivization of the spinor module (or half-spinor module, in the even dimensional case), realizing these as projective varieties.
There are therefore, in total,
formula_77
Cartan relations, signifying the vanishing of the bilinear forms formula_78 with values in the exterior spaces formula_79 for formula_80, corresponding to these skew symmetric elements of the Clifford algebra. However, since the dimension of the Grassmannian of maximal isotropic subspaces of formula_14 is formula_81 when formula_14 is of even dimension formula_2 and formula_82 when formula_14 has odd dimension formula_83, and the Cartan map is an embedding of the connected components of this in the projectivization of the half-spinor modules when formula_14 is of even dimension and in the irreducible spinor module if it is of odd dimension, the number of independent quadratic constraints is only
formula_84
in the formula_85 dimensional case, and
formula_86
in the formula_87 dimensional case.
In 6 dimensions or fewer, all spinors are pure. In 7 or 8 dimensions, there is a single pure spinor constraint. In 10 dimensions, there are 10 constraints
formula_88
where formula_89 are the Gamma matrices that represent the vectors
in formula_90 that generate the Clifford algebra. However, only formula_91 of these are independent, so the variety of projectivized pure spinors for formula_92 is formula_93 (complex) dimensional.
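A quick arithmetic check of these counts, using the formulas quoted above (an illustrative computation, not part of the original exposition):
```python
# Number of independent quadratic constraints in even dimension 2n:
# 2^(n-1) - n(n-1)/2 - 1, where 2^(n-1) is the half-spinor dimension and
# n(n-1)/2 is the dimension of the isotropic Grassmannian SO(2n)/U(n).
for n in range(3, 7):
    half_spinor_dim = 2 ** (n - 1)
    grassmannian_dim = n * (n - 1) // 2
    constraints = half_spinor_dim - grassmannian_dim - 1
    print(2 * n, constraints)
# 2n = 6 -> 0 (all spinors pure), 8 -> 1, 10 -> 5, 12 -> 16,
# consistent with the statements above; for 2n = 10 the projectivized pure
# spinors form a 15 - 5 = 10 (complex) dimensional variety.
```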
Applications of pure spinors.
Supersymmetric Yang Mills theory.
For formula_94-dimensional, formula_95 supersymmetric Yang-Mills theory, the super-ambitwistor correspondence consists of an equivalence between the supersymmetric field equations and the vanishing of supercurvature along super null lines, which are of dimension formula_96, where the formula_97 Grassmannian dimensions correspond to a pure spinor. Dimensional reduction gives the corresponding results for formula_98, formula_99 and formula_100, formula_101 or formula_102.
String theory and generalized Calabi-Yau manifolds.
Pure spinors were introduced in string quantization by Nathan Berkovits. Nigel Hitchin
introduced generalized Calabi–Yau manifolds, where the generalized complex structure is defined by a pure spinor. These spaces describe the geometry of flux compactifications in string theory.
Integrable systems.
In the approach to integrable hierarchies developed by Sato and his students, the equations of the hierarchy are viewed as compatibility conditions for commuting flows on an infinite dimensional Grassmannian. Under the (infinite dimensional) Cartan map, projective pure spinors are equivalent to elements of the infinite dimensional Grassmannian consisting of maximal isotropic subspaces of a Hilbert space under a suitably defined complex scalar product. They therefore serve as moduli for solutions of the BKP integrable hierarchy, parametrizing the associated BKP formula_103-functions, which are generating functions for the flows. Under the Cartan map correspondence, these may be expressed as infinite dimensional Fredholm Pfaffians.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " V "
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": " 2n "
},
{
"math_id": 3,
"text": " 2n+1 "
},
{
"math_id": 4,
"text": " Q "
},
{
"math_id": 5,
"text": " Q(u,v) "
},
{
"math_id": 6,
"text": " (u, v) "
},
{
"math_id": 7,
"text": " Cl(V, Q) "
},
{
"math_id": 8,
"text": "u\\otimes v + v \\otimes u = 2 Q(u,v), \\quad \\forall \\ u, v \\in V. "
},
{
"math_id": 9,
"text": " V^0_\\psi \\subset V "
},
{
"math_id": 10,
"text": " \\psi "
},
{
"math_id": 11,
"text": " m \\le n "
},
{
"math_id": 12,
"text": " m=n "
},
{
"math_id": 13,
"text": "Spin(V,Q)"
},
{
"math_id": 14,
"text": "\\,V\\,"
},
{
"math_id": 15,
"text": "2n"
},
{
"math_id": 16,
"text": " SO(2n)/U(n)"
},
{
"math_id": 17,
"text": "2n+1"
},
{
"math_id": 18,
"text": " SO(2n+1)/U(n)"
},
{
"math_id": 19,
"text": "V"
},
{
"math_id": 20,
"text": "V= V_n \\oplus V_n^*\\ \\text{ or }\\ V= V_n \\oplus V_n^*\\oplus\\mathbf{C},"
},
{
"math_id": 21,
"text": "V_n\\subset V"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "V^*_n"
},
{
"math_id": 24,
"text": " Q(v_1 + w_1,v_2 + w_2) := w_2(v_1) + w_1(v_2),\\quad v_1, v_2 \\in V_n, \\ w_1, w_2 \\in V^*_n, "
},
{
"math_id": 25,
"text": " Q(v_1 + w_1 + a_1,v_2 + w_2+a_2) := w_2(v_1) + w_1(v_2) + a_1 a_2,\\quad a_1, a_2 \\in \\mathbf{C}, "
},
{
"math_id": 26,
"text": "\\Gamma_X \\in \\mathrm{End}(\\Lambda(V_n))"
},
{
"math_id": 27,
"text": "\\Lambda(V_n)"
},
{
"math_id": 28,
"text": "X\\in V"
},
{
"math_id": 29,
"text": " \\Gamma_v(\\psi) = v \\wedge \\psi \\ \\text{ (wedge product) } \\ \\text {for } v \\in V_n \\ \\text{ and } \\Gamma_w(\\psi) = \\iota(w) \\psi \\ \\text{ (inner product) } \\text{for}\\ w \\in V^*_n, "
},
{
"math_id": 30,
"text": "V= V_n \\oplus V_n^*"
},
{
"math_id": 31,
"text": "V= V_n \\oplus V_n^*\\oplus\\mathbf{C}"
},
{
"math_id": 32,
"text": " \\Gamma_a \\psi = (-1)^p a\\ \\psi, \\quad a \\in \\mathbf{C}, \\ \\psi \\in \\Lambda^p(V_n), "
},
{
"math_id": 33,
"text": "p"
},
{
"math_id": 34,
"text": "\\psi"
},
{
"math_id": 35,
"text": "\\psi\\in \\Lambda (V_n) "
},
{
"math_id": 36,
"text": "w\\subset V"
},
{
"math_id": 37,
"text": "\\,Q\\,"
},
{
"math_id": 38,
"text": "\\mathbf{Gr}^0_n(V, Q)"
},
{
"math_id": 39,
"text": " \\mathbf{Ca}: \\mathbf{Gr}^0_n(V, Q)\\rightarrow \\mathbf{P}(\\Lambda (V_n)) "
},
{
"math_id": 40,
"text": "w\\in \\mathbf{Gr}^0_n(V, Q)"
},
{
"math_id": 41,
"text": "(X_1, \\dots, X_n)"
},
{
"math_id": 42,
"text": "\\mathbf{Ca}(w): = \\mathrm{Im}(\\Gamma_{X_1}\\cdots \\Gamma_{X_n});"
},
{
"math_id": 43,
"text": "\\Lambda (V_n) "
},
{
"math_id": 44,
"text": "\\{\\Gamma_{X_i} \\in \\mathrm{End}(\\Lambda (V_n))\\}_{i=1, \\dots, n}"
},
{
"math_id": 45,
"text": "(X_1, \\cdots , X_n)"
},
{
"math_id": 46,
"text": "1"
},
{
"math_id": 47,
"text": "Q(X_i, X_j) =0, \\quad 1\\le i, j \\le n, "
},
{
"math_id": 48,
"text": "\\Gamma_{X_i} \\Gamma_{X_j} + \\Gamma_{X_j} \\Gamma_{X_i}=0, \\quad 1\\le i, j \\le n, "
},
{
"math_id": 49,
"text": "\\mathbf{Ca}(w)"
},
{
"math_id": 50,
"text": " \\mathbf{P}(\\Lambda (V_n))"
},
{
"math_id": 51,
"text": "\\Lambda (V_n)"
},
{
"math_id": 52,
"text": "[\\psi]"
},
{
"math_id": 53,
"text": "\\psi \\in \\Lambda(V_n)"
},
{
"math_id": 54,
"text": "X\\in w"
},
{
"math_id": 55,
"text": " \\Gamma_X(\\psi) =0. "
},
{
"math_id": 56,
"text": "[\\psi]\\in \\mathbf{Ca}(w)"
},
{
"math_id": 57,
"text": "w"
},
{
"math_id": 58,
"text": "\\Gamma_X"
},
{
"math_id": 59,
"text": "X \\in w"
},
{
"math_id": 60,
"text": "V = V_n \\oplus V^*_n"
},
{
"math_id": 61,
"text": "\\mathbf{Ca}"
},
{
"math_id": 62,
"text": "\\Lambda^+(V_n) , \\Lambda^-(V_n) "
},
{
"math_id": 63,
"text": "\\Lambda(V_n) = \\Lambda^+(V_n) \\oplus \\Lambda^-(V_n), "
},
{
"math_id": 64,
"text": "\\Lambda^+(V_n)"
},
{
"math_id": 65,
"text": " \\Lambda^-(V_n) "
},
{
"math_id": 66,
"text": "\\Lambda^(V_n) "
},
{
"math_id": 67,
"text": "\\{\\beta_m\\}_{m=0, \\dots 2n}"
},
{
"math_id": 68,
"text": "\\Lambda^m(V^*) \\sim \\Lambda^m(V)"
},
{
"math_id": 69,
"text": " \\beta_m(\\psi, \\phi)(X_1, \\dots, X_m)\n:=\\beta_0(\\psi, \\Gamma_{X_1} \\cdots \\Gamma_{X_m} \\phi), \\quad\\text{for } \\psi, \\phi \\in \\Lambda(V_n),\\ X_1, \\dots, X_m \\in V, "
},
{
"math_id": 70,
"text": "\\psi\\in \\Lambda^p(V_n)"
},
{
"math_id": 71,
"text": "\\phi\\in \\Lambda^q(V_n)"
},
{
"math_id": 72,
"text": "\\Omega"
},
{
"math_id": 73,
"text": " \\beta_0(\\psi, \\phi)\\,\\Omega = \\begin{cases} \n \\psi \\wedge \\phi \\quad \\text{if }p+q = n \\\\\n0 \\quad \\text{otherwise. }\n\\end{cases} "
},
{
"math_id": 74,
"text": "\\psi\\in \\Lambda(V_n)"
},
{
"math_id": 75,
"text": " \\beta_m(\\psi, \\psi) =0 \\quad \\forall\\ m \\equiv n \\mod(4), \\quad 0\\le m < n "
},
{
"math_id": 76,
"text": "V,"
},
{
"math_id": 77,
"text": " \\sum_{0\\le m \\le n-1 \\atop m \\equiv n, \\text{ mod } 4} {\\text{dim}(V) \\choose m} "
},
{
"math_id": 78,
"text": "\\beta_m"
},
{
"math_id": 79,
"text": "\\,\\Lambda^m(V)\\,"
},
{
"math_id": 80,
"text": " m \\equiv n, \\text{ mod } 4 "
},
{
"math_id": 81,
"text": " \\,\\tfrac{1}{2}\\,n (n-1)\\,"
},
{
"math_id": 82,
"text": " \\,\\tfrac{1}{2}\\,n (n+1)\\,"
},
{
"math_id": 83,
"text": " 2n +1"
},
{
"math_id": 84,
"text": " 2^{n-1} - \\tfrac{1}{2}\\,n(n-1) - 1 "
},
{
"math_id": 85,
"text": "\\,2n\\,"
},
{
"math_id": 86,
"text": " 2^n - \\tfrac{1}{2}\\,n(n+1) - 1 "
},
{
"math_id": 87,
"text": "\\,2n + 1\\,"
},
{
"math_id": 88,
"text": "\\psi \\; \\Gamma_\\mu \\, \\psi = 0~, \\quad \\mu= 1, \\dots, 10, "
},
{
"math_id": 89,
"text": "\\,\\Gamma_\\mu\\,"
},
{
"math_id": 90,
"text": "\\,\\mathbb{C}^{10}\\,"
},
{
"math_id": 91,
"text": "5"
},
{
"math_id": 92,
"text": " V =\\mathbb{C}^{10} "
},
{
"math_id": 93,
"text": "10"
},
{
"math_id": 94,
"text": "d=10 "
},
{
"math_id": 95,
"text": " N=1"
},
{
"math_id": 96,
"text": "(1 | 16) "
},
{
"math_id": 97,
"text": "16"
},
{
"math_id": 98,
"text": "d=6"
},
{
"math_id": 99,
"text": " N=2"
},
{
"math_id": 100,
"text": "d=4"
},
{
"math_id": 101,
"text": " N=3"
},
{
"math_id": 102,
"text": " 4"
},
{
"math_id": 103,
"text": "\\tau"
}
]
| https://en.wikipedia.org/wiki?curid=6843863 |
68439949 | Tamper (nuclear weapon) | Nuclear weapon component
In a nuclear weapon, a tamper is an optional layer of dense material surrounding the fissile material. It is used in nuclear weapon design to reduce the critical mass and, through its inertia, to delay the expansion of the reacting fuel mass, keeping it supercritical for longer. Often the same layer serves both as tamper and as neutron reflector. The weapon disintegrates as the reaction proceeds, and this stops the reaction, so the use of a tamper makes for a longer-lasting, more energetic and more efficient explosion. The yield can be further enhanced using a fissionable tamper.
The first nuclear weapons used heavy natural uranium or tungsten carbide tampers, but a heavy tamper necessitates a larger high-explosive implosion system and makes the entire device larger and heavier. The primary stage of a modern thermonuclear weapon may instead use a lightweight beryllium reflector, which is also transparent to X-rays when ionized, allowing the primary's energy output to escape quickly to be used in compressing the secondary stage. More exotic tamper materials such as gold are used for special purposes like emitting large amounts of X-rays or altering the amount of nuclear fallout.
While the effect of a tamper is to increase efficiency, both by reflecting neutrons and by delaying the expansion of the bomb, the effect on the critical mass is not as great. The reason for this is that the process of reflection is time-consuming. By the time reflected neutrons return to the core, several generations of the chain reaction have passed, meaning the contribution from the older generation is a tiny fraction of the neutron population.
Function.
In "Atomic Energy for Military Purposes" (1945), physicist Henry DeWolf Smyth describes the function of a tamper in nuclear weapon design as similar to the neutron reflector used in a nuclear reactor: <templatestyles src="Template:Blockquote/styles.css" />A similar envelope can be used to reduce the critical size of the bomb, but here the envelope has an additional role: its very inertia delays the expansion of the reacting material. For this reason such an envelope is often called a tamper. Use of a tamper clearly makes for a longer lasting, more energetic and more efficient explosion.
History.
The concept of surrounding the core of a nuclear weapon with a tamper was introduced by Robert Serber in his "Los Alamos Primer", a series of lectures given in April 1943 as part of the Manhattan Project, which built the first nuclear weapons. He noted that since inertia was the key, the densest materials were preferable, and he identified gold, rhenium, tungsten and uranium as the best candidates. He believed they also had good neutron-reflecting properties, although he cautioned that a great deal more work needed to be done in this area. Using elementary diffusion theory, he predicted that the critical mass of a nuclear weapon with a tamper would be one-eighth that of an identical but untamped weapon. He added that in practice this would only be about a quarter instead of an eighth.
Serber noted that the neutron reflection property was not as good as it might first seem, because the neutrons returning from collisions in the tamper would take time to do so. He estimated that for a uranium tamper they might take about 10−7 seconds. By the time reflected neutrons return to the core, several generations of the chain reaction would have passed, meaning the contribution from the older generation is a tiny fraction of the neutron population. The returning neutrons would also be slowed by the collision. It followed that 15% more fissile material was required to get the same energy release with a gold tamper compared to a uranium one, despite the fact that the critical masses differed by 50%. At the time, the critical masses of uranium (and more particularly plutonium) were not precisely known. It was thought that uranium with a uranium tamper might be about 25 kg, while that of plutonium would be about 5 kg.
The Little Boy uranium bomb used in the atomic bombing of Hiroshima had a tungsten carbide tamper. This was important not just for neutron reflection but also for its strength in preventing the projectile from blowing through the target. The tamper had a radius of and a thickness of , for a mass of . This was about 3.5 times the mass of the fissile material used. Tungsten carbide has a high density and a low neutron absorbency cross section. Despite being available in adequate quantity during the Manhattan Project, depleted uranium was not used because it has a relatively high rate of spontaneous fission of about 675 per kg per second; a 300 kg depleted uranium tamper would therefore have an unacceptable chance of initiating a predetonation. Tungsten carbide was commonly used in uranium-233 gun-type nuclear weapons used with artillery pieces for the same reason.
There are advantages to using a fissionable tamper to increase the yield. Uranium-238 will fission when struck by a neutron with , and about half the neutrons produced by the fission of uranium-235 will exceed this threshold. However, a fast neutron striking a uranium-238 nucleus is eight times as likely to be inelastically scattered as to produce a fission, and when it does so, it is slowed to the point below the fission threshold of uranium-238. In the Fat Man type used in the Trinity test and at Nagasaki, the tamper consisted of shells of natural uranium and aluminium. It is estimated that up to 30% of the yield came from fission of the natural uranium tamper. An estimated of the yield was contributed by the photofission of the tamper.
In a boosted fission weapon or a thermonuclear weapon, the neutrons produced by a deuterium-tritium reaction can remain sufficiently energetic to fission uranium-238 even after three collisions with deuterium, but the ones produced by deuterium-deuterium fusion no longer have sufficient energy after even a single collision. A uranium-235 tamper will fission even with slow neutrons. A highly enriched uranium tamper is therefore more efficient than a depleted uranium one, and a smaller tamper can be used to achieve the same yield. The use of enriched uranium tampers therefore became more common once enriched uranium became more plentiful.
An important development after World War II was the lightweight beryllium tamper. In a boosted device the thermonuclear reactions greatly increase the production of neutrons, which makes the inertial property of tampers less important. Beryllium has a low slow neutron absorbency cross section but a very high scattering cross section. When struck by high energy neutrons produced by fission reactions, beryllium emits neutrons. With a beryllium reflector, the critical mass of highly enriched uranium is 14.1 kg, compared with 52.5 kg in an untamped sphere. A beryllium tamper also minimizes the loss of X-rays, which is important for a thermonuclear primary which uses its X-rays to compress the secondary stage.
The beryllium tamper had been considered by the Manhattan Project, but beryllium was in short supply, and experiments with a beryllium tamper did not commence until after the war. Physicist Louis Slotin was killed in May 1946 in a criticality accident involving one. A device with a beryllium tamper was successfully tested in the Operation Tumbler–Snapper How shot on 5 June 1952, and since then beryllium has been widely used as a tamper in thermonuclear primaries. The secondary's tamper (or "pusher") functions to reflect neutrons, confine the fusion fuel with its inertial mass, and enhance the yield with its fissions produced by neutrons emitted from the thermonuclear reactions. It also helps drive the radiation implosion and prevent the loss of thermal energy. For this reason, the heavy tamper is still preferred.
Alternative materials.
Thorium can also be used as a fissionable tamper. It has an atomic weight nearly as high as uranium and a lower propensity to fission, which means that the tamper has to be much thicker. It is possible that a state seeking to develop nuclear weapons capability might add reactor-grade plutonium to a natural uranium tamper. This would cause problems with neutron emissions from the plutonium, but it might be possible to overcome this with a layer of boron-10, which has a high neutron cross section for the absorption of the slow neutrons that fission uranium-235 and plutonium-239, but a low cross-section for the absorption of the fast neutrons that fission uranium-238. It was used in thermonuclear weapons to protect the plutonium spark plug from stray neutrons emitted by the uranium-238 tamper. In the Fat Man type the natural uranium tamper was coated with boron.
Non-fissionable materials can be used as tampers. Sometimes these were substituted for fissionable ones in nuclear tests where a high yield was unnecessary. The most commonly used non-fissionable tamper material is lead, which is both widely available and cheap. British designs often used a lead-bismuth alloy. Bismuth has the highest atomic number of any non-fissionable tamper material. The use of lead and bismuth reduces nuclear fallout, as neither produces isotopes that emit significant amounts of gamma radiation when irradiated with neutrons.
The W71 warhead used in the LIM-49 Spartan anti-ballistic missile had a gold tamper around its secondary to maximize its output of X-rays, which it used to incapacitate incoming nuclear warheads. The irradiation of gold-197 produces gold-198, which has a half-life of 2.697 days and emits gamma rays and beta particles. It therefore produces short-lived but intense radiation, which may have battlefield uses, although this was not its purpose in the W71. Another element evaluated by the US for such a purpose was tantalum. Natural tantalum is almost entirely tantalum-181, which, when irradiated with neutrons, becomes tantalum-182, a beta and gamma ray emitter with a half-life of 115 days.
In the theoretical cobalt bomb, cobalt is a poor prospect for a tamper because it is relatively light and ionizes at . Natural cobalt is entirely cobalt-59, which becomes cobalt-60 when irradiated with neutrons. With a half-life of 5.26 years, this could produce long-lasting radioactive contamination. The British Tadje nuclear test at Maralinga used cobalt pellets as a "tracer" for determining yield. This fuelled rumours that Britain had been developing a cobalt bomb.
Physics.
The diffusion equation for the number of neutrons within a bomb core is given by:
formula_0
where formula_1 is the number density of neutrons, formula_2 is the average neutron velocity, formula_3 is the number of secondary neutrons produced per fission, formula_4 is the fission mean free path and formula_5 is the transport mean free path for neutrons in the core.
formula_1 doesn't depend on the direction, so we can use this form of the Laplace operator in spherical coordinates:
formula_6
Solving the separable partial differential equation gives us:
formula_7
where
formula_8
and
formula_9
For the tamper, the first term in the first equation relating to the production of neutrons can be disregarded, leaving:
formula_10
Set the separation constant as formula_11. If formula_12 (meaning that the neutron density in the tamper is constant) the solution becomes:
formula_13
Where formula_14 and formula_15 are constants of integration.
If formula_16 (meaning that the neutron density in the tamper is growing) the solution becomes:
formula_17
where
formula_18
Serber noted that at the boundary between the core and the tamper, the diffusion stream of neutrons must be continuous, so if the core has radius formula_19 then:
formula_20
If the neutron velocity in the core and the tamper is the same, then formula_21 and:
formula_22
Otherwise each side would have to be multiplied by the relevant neutron velocity. Also:
formula_23
For the case where formula_24:
formula_25
If the tamper is very thick, i.e. formula_26, this can be approximated as:
formula_27
If the tamper (unrealistically) is a vacuum, then the neutron scattering cross section would be zero and formula_28. The equation becomes:
formula_29
which is satisfied by:
formula_30
If the tamper is very thick and has neutron scattering properties similar to the core, i.e.:
formula_31
Then the equation becomes:
formula_32
which is satisfied when:
formula_33
In this case, the critical radius is twice what it would be if no tamper were present. Since the volume is proportional to the cube of the radius, we reach Serber's conclusion that an eightfold reduction in the critical mass is theoretically possible.
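A small numerical sketch of the thick-tamper condition above (illustrative only; the mean-free-path ratios are arbitrary values chosen for this example):
```python
# Solve (R/d) cot(R/d) = 1 - lambda_t^tamper / lambda_t^core for R/d by
# bisection on (0, pi), using the fact that y*cot(y) decreases from 1 to
# -infinity on that interval.
import math

def critical_ratio(lambda_ratio):
    target = 1.0 - lambda_ratio          # right-hand side of the condition
    lo, hi = 1e-9, math.pi - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid / math.tan(mid) > target:  # still above the target: root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for ratio in (1.0, 2.0, 5.0, 50.0):
    print(ratio, round(critical_ratio(ratio), 4))
# ratio = 1 (tamper scatters like the core) gives R/d = pi/2, half the
# untamped value R/d = pi; large ratios approach the untamped limit.
```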
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{\\partial N}{ \\partial t} = \\frac {v_n}{ \\lambda^{core}_f} (\\nu - 1) N + \\frac { \\lambda^{core}_t v_n }{ 3 } (\\nabla^2 N) "
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "v_n"
},
{
"math_id": 3,
"text": "\\nu"
},
{
"math_id": 4,
"text": "\\lambda^{core}_f"
},
{
"math_id": 5,
"text": "\\lambda^{core}_t"
},
{
"math_id": 6,
"text": "\\nabla^2 N = \\frac{1}{r^2} \\frac{\\partial}{\\partial r} \\bigl(r^2 \\frac{\\partial N}{\\partial r} \\bigr) "
},
{
"math_id": 7,
"text": " N_{core} (r, t) = N_0 e^{(\\alpha / \\tau) t} \\Bigl[ \\frac {sin ( r / d_{core} )}{ r } \\Bigr] "
},
{
"math_id": 8,
"text": " \\tau = \\lambda^{core}_f / v_n "
},
{
"math_id": 9,
"text": " d_{core} = \\sqrt { \\frac { \\lambda^{core}_f \\lambda^{core}_t }{3 (- \\alpha + \\nu - 1) } } "
},
{
"math_id": 10,
"text": " \\frac{\\partial N}{ \\partial t} = \\frac { \\lambda^{core}_t v_n }{ 3 } (\\nabla^2 N) "
},
{
"math_id": 11,
"text": " \\delta / \\tau "
},
{
"math_id": 12,
"text": "\\delta = 0"
},
{
"math_id": 13,
"text": " N_{tamper} = \\frac {A}{r} + B "
},
{
"math_id": 14,
"text": "A"
},
{
"math_id": 15,
"text": "B"
},
{
"math_id": 16,
"text": "\\delta > 0"
},
{
"math_id": 17,
"text": " N_{tamper} = e^{(\\delta / \\tau) t} \\Bigl[ A \\frac{e^{r / d_{tamper} }}{r} + B \\frac{e^{-r / d_{tamper} }}{r} \\Bigr] "
},
{
"math_id": 18,
"text": " d_{tamper} = \\sqrt { \\frac {\\lambda^{core}_f \\lambda^{core}_t }{3} }"
},
{
"math_id": 19,
"text": "R_{core}"
},
{
"math_id": 20,
"text": "N_{core} (R_{core}) = N_{tamper} (R_{core}) "
},
{
"math_id": 21,
"text": "\\alpha = \\delta"
},
{
"math_id": 22,
"text": " \\lambda^{core}_t {\\Bigl( \\frac { \\partial N_{core} } {\\partial r } \\Bigr)}_{R_{core}} = \\lambda^{tamper}_t \\Bigl( \\frac { \\partial N_{tamper} } {\\partial r } \\Bigr)_{R_{core}}"
},
{
"math_id": 23,
"text": " N_{tamper} (R_{tamper}) = - \\frac {2}{3} \\lambda^{tamper}_t \\Bigl( \\frac { \\partial N_{tamper} } {\\partial r } \\Bigr)_{R_{core}} "
},
{
"math_id": 24,
"text": "\\alpha = \\delta = 0"
},
{
"math_id": 25,
"text": " \\Bigl[ 1 + \\frac {2 R^{threshold}_{tamper} \\lambda^{tamper}_t } { 3 R^2_{tamper} } - \\frac { R^{threshold}_{tamper} } { R_{tamper} } \\Bigr] \\Bigl[ \\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) cot \\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) - 1 \\Bigr] + \\frac { \\lambda^{tamper}_t} {\\lambda^{core}_t} = 0"
},
{
"math_id": 26,
"text": "R_{tamper} \\gg R^{threshold}_{tamper} "
},
{
"math_id": 27,
"text": "\\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) cot \\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) = 1 - \\frac { \\lambda^{tamper}_t} {\\lambda^{core}_t} "
},
{
"math_id": 28,
"text": "\\lambda^{tamper}_t = \\infty "
},
{
"math_id": 29,
"text": "\\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) cot \\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) = -\\infty"
},
{
"math_id": 30,
"text": "\\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) = \\pi"
},
{
"math_id": 31,
"text": "\\lambda^{tamper}_t \\sim \\lambda^{core}_t "
},
{
"math_id": 32,
"text": "\\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) cot \\Bigl( \\frac {R^{threshold}_{tamper}}{ d_{core} } \\Bigr) = 0"
},
{
"math_id": 33,
"text": "\\frac {R^{threshold}_{tamper}}{ d_{core} } = \\pi/2"
}
]
| https://en.wikipedia.org/wiki?curid=68439949 |
68440670 | Modern Hopfield network | Neural networks
Modern Hopfield networks (also known as Dense Associative Memories) are generalizations of the classical Hopfield networks that break the linear scaling relationship between the number of input features and the number of stored memories. This is achieved by introducing stronger non-linearities (either in the energy function or neurons’ activation functions) leading to super-linear (even an exponential) memory storage capacity as a function of the number of feature neurons. The network still requires a sufficient number of hidden neurons.
The key theoretical idea behind the Modern Hopfield networks is to use an energy function and an update rule that is more sharply peaked around the stored memories in the space of neurons’ configurations compared to the classical Hopfield network.
Classical Hopfield networks.
Hopfield networks are recurrent neural networks with dynamical trajectories converging to fixed point attractor states and described by an energy function. The state of each model neuron formula_0 is defined by a time-dependent variable formula_1, which can be chosen to be either discrete or continuous. A complete model describes the mathematics of how the future state of activity of each neuron depends on the known present or previous activity of all the neurons.
In the original Hopfield model of associative memory, the variables were binary, and the dynamics were described by a one-at-a-time update of the state of the neurons. An energy function quadratic in the formula_1 was defined, and the dynamics consisted of changing the activity of each single neuron formula_2 only if doing so would lower the total energy of the system. This same idea was extended to the case of formula_1 being a continuous variable representing the output of neuron formula_2, and formula_1 being a monotonic function of an input current. The dynamics became expressed as a set of first-order differential equations for which the "energy" of the system always decreased. The energy in the continuous case has one term which is quadratic in the formula_1 (as in the binary model), and a second term which depends on the gain function (neuron's activation function). While having many desirable properties of associative memory, both of these classical systems suffer from a small memory storage capacity, which scales linearly with the number of input features.
Discrete variables.
A simple example of the Modern Hopfield network can be written in terms of binary variables formula_1 that represent the active formula_3 and inactive formula_4 state of the model neuron formula_2. The energy of this network is formula_5. In this formula, the weights formula_6 represent the matrix of memory vectors (index formula_7 enumerates different memories, and index formula_8 enumerates the content of each memory corresponding to the formula_2-th feature neuron), and the function formula_9 is a rapidly growing non-linear function. The update rule for individual neurons (in the asynchronous case) can be written in the following form: formula_10, which states that in order to calculate the updated state of the formula_11-th neuron the network compares two energies: the energy of the network with the formula_2-th neuron in the ON state and the energy of the network with the formula_2-th neuron in the OFF state, given the states of the remaining neurons. The updated state of the formula_2-th neuron selects the state that has the lower of the two energies.
In the limiting case when the non-linear energy function is quadratic formula_12 these equations reduce to the familiar energy function and the update rule for the classical binary Hopfield network.
The memory storage capacity of these networks can be calculated for random binary patterns. For the power energy function formula_13 the maximal number of memories that can be stored and retrieved from this network without errors is given by formula_14. For an exponential energy function formula_15 the memory storage capacity is exponential in the number of feature neurons: formula_16
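A minimal numpy sketch of this update rule (an illustration written for this description, not the authors' code; the network size, the cubic choice of formula_9, and the retrieval test are arbitrary):
```python
# Dense Associative Memory with binary units: asynchronous updates that
# greedily pick the lower-energy state of each neuron.
import numpy as np

rng = np.random.default_rng(0)
N_f, N_mem = 64, 40                           # feature neurons, stored memories
xi = rng.choice([-1, 1], size=(N_mem, N_f))   # memory matrix xi[mu, i]
F = lambda x: x ** 3                          # rapidly growing non-linearity (n = 3)

def energy(V):
    return -F(xi @ V).sum()

def update(V, steps=3):
    V = V.copy()
    for _ in range(steps):
        for i in rng.permutation(N_f):        # asynchronous, random order
            for s in (+1, -1):                # keep whichever state lowers the energy
                V_try = V.copy()
                V_try[i] = s
                if energy(V_try) < energy(V):
                    V = V_try
    return V

# Retrieve a stored pattern from a corrupted cue.
target = xi[0]
cue = target.copy()
cue[rng.choice(N_f, size=10, replace=False)] *= -1
recovered = update(cue)
print("overlap:", (recovered == target).mean())   # close to 1.0 if retrieval works
```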
Continuous variables.
Modern Hopfield networks or Dense Associative Memories can be best understood in continuous variables and continuous time. Consider the network architecture, shown in Fig.1, and the equations for the neurons' state evolution, in which the currents of the feature neurons are denoted by formula_17, and the currents of the memory neurons are denoted by formula_18 (formula_19 stands for hidden neurons). There are no synaptic connections among the feature neurons or the memory neurons. A matrix formula_20 denotes the strength of synapses from a feature neuron formula_2 to the memory neuron formula_21. The synapses are assumed to be symmetric, so that the same value characterizes a different physical synapse from the memory neuron formula_21 to the feature neuron formula_2. The outputs of the memory neurons and the feature neurons are denoted by formula_22 and formula_23, which are non-linear functions of the corresponding currents. In general these outputs can depend on the currents of all the neurons in that layer, so that formula_24 and formula_25. It is convenient to define these activation functions as derivatives of the Lagrangian functions for the two groups of neurons. This way the specific form of the equations for the neurons' states is completely defined once the Lagrangian functions are specified. Finally, the time constants for the two groups of neurons are denoted by formula_26 and formula_27, and formula_28 is the input current to the network that can be driven by the presented data.
General systems of non-linear differential equations can have many complicated behaviors that can depend on the choice of the non-linearities and the initial conditions. For Hopfield networks, however, this is not the case: the dynamical trajectories always converge to a fixed point attractor state. This property is achieved because these equations are specifically engineered so that they have an underlying energy function. The terms grouped into square brackets represent a Legendre transform of the Lagrangian function with respect to the states of the neurons. If the Hessian matrices of the Lagrangian functions are positive semi-definite, the energy function is guaranteed to decrease on the dynamical trajectory. This property makes it possible to prove that the system of dynamical equations describing the temporal evolution of neurons' activities will eventually reach a fixed point attractor state.
In certain situations one can assume that the dynamics of hidden neurons equilibrates at a much faster time scale compared to the feature neurons, formula_29. In this case the steady state solution of the second equation in the system (1) can be used to express the currents of the hidden units through the outputs of the feature neurons. This makes it possible to reduce the general theory (1) to an effective theory for feature neurons only. The resulting effective update rules and the energies for various common choices of the Lagrangian functions are shown in Fig.2. In the case of log-sum-exponential Lagrangian function the update rule (if applied once) for the states of the feature neurons is the attention mechanism commonly used in many modern AI systems (see Ref. for the derivation of this result from the continuous time formulation).
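A common way to write this effective update in the log-sum-exponential case, where a single application takes the softmax-attention form, is sketched below (an illustration and an assumption of this sketch; the inverse temperature and the stored patterns are arbitrary choices):
```python
# One step of the effective feature-neuron update: project the current
# state onto the stored patterns, apply a softmax, and read out.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_attention_update(xi, v, beta=8.0):
    # xi: (N_mem, N_f) stored patterns; v: (N_f,) current feature state.
    return xi.T @ softmax(beta * (xi @ v))

rng = np.random.default_rng(1)
xi = rng.standard_normal((16, 32))
xi /= np.linalg.norm(xi, axis=1, keepdims=True)
query = xi[3] + 0.3 * rng.standard_normal(32)   # noisy version of pattern 3
out = hopfield_attention_update(xi, query)
print(np.argmax(xi @ out))                      # typically 3: pattern recalled
```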
Relationship to classical Hopfield network with continuous variables.
The classical formulation of continuous Hopfield networks can be understood as a special limiting case of the Modern Hopfield networks with one hidden layer. Continuous Hopfield Networks for neurons with graded response are typically described by the dynamical equations and the energy function, where formula_30, and formula_31 is the inverse of the activation function formula_32. This model is a special limit of the class of models that is called models A, with the following choice of the Lagrangian functions that, according to the definition (2), leads to the activation functions. If we integrate out the hidden neurons, the system of equations (1) reduces to the equations on the feature neurons (5) with formula_33, and the general expression for the energy (3) reduces to the effective energy. While the first two terms in equation (6) are the same as those in equation (9), the third terms look superficially different. In equation (9) it is a Legendre transform of the Lagrangian for the feature neurons, while in (6) the third term is an integral of the inverse activation function. Nevertheless, these two expressions are in fact equivalent, since the derivatives of a function and its Legendre transform are inverse functions of each other. The easiest way to see that these two terms are equal explicitly is to differentiate each one with respect to formula_34. The results of these differentiations for both expressions are equal to formula_35. Thus, the two expressions are equal up to an additive constant. This completes the proof that the classical Hopfield network with continuous states is a special limiting case of the modern Hopfield network (1) with energy (3).
General formulation of the modern Hopfield network.
Biological neural networks have a large degree of heterogeneity in terms of different cell types. This section describes a mathematical model of a fully connected Modern Hopfield network assuming the extreme degree of heterogeneity: every single neuron is different. Specifically, an energy function and the corresponding dynamical equations are described assuming that each neuron has its own activation function and kinetic time scale. The network is assumed to be fully connected, so that every neuron is connected to every other neuron using a symmetric matrix of weights formula_36, where indices formula_37 and formula_38 enumerate different neurons in the network (see Fig.3). The easiest way to mathematically formulate this problem is to define the architecture through a Lagrangian function formula_39 that depends on the activities of all the neurons in the network. The activation function for each neuron is defined as a partial derivative of the Lagrangian with respect to that neuron's activity. From the biological perspective, one can think about formula_40 as an axonal output of the neuron formula_37. In the simplest case, when the Lagrangian is additive for different neurons, this definition results in an activation that is a non-linear function of that neuron's activity. For non-additive Lagrangians this activation function can depend on the activities of a group of neurons. For instance, it can contain contrastive (softmax) or divisive normalization. The dynamical equations describing the temporal evolution of a given neuron belong to the class of models called firing rate models in neuroscience. Each neuron formula_37 collects the axonal outputs formula_41 from all the neurons, weights them with the synaptic coefficients formula_36 and produces its own time-dependent activity formula_42. The temporal evolution has a time constant formula_43, which in general can be different for every neuron. This network has a global energy function, in which the first two terms represent the Legendre transform of the Lagrangian function with respect to the neurons' currents formula_42. The temporal derivative of this energy function can be computed on the dynamical trajectories and is non-positive provided that the matrix formula_44 (or its symmetric part) is positive semi-definite. If, in addition to this, the energy function is bounded from below, the non-linear dynamical equations are guaranteed to converge to a fixed point attractor state. The advantage of formulating this network in terms of the Lagrangian functions is that it makes it possible to easily experiment with different choices of the activation functions and different architectural arrangements of neurons. For all those flexible choices the conditions of convergence are determined by the properties of the matrix formula_45 and the existence of the lower bound on the energy function.
Hierarchical associative memory network.
The neurons can be organized in layers so that every neuron in a given layer has the same activation function and the same dynamic time scale. If we assume that there are no horizontal connections between the neurons within the layer (lateral connections) and there are no skip-layer connections, the general fully connected network (11), (12) reduces to the architecture shown in Fig.4. It has formula_46 layers of recurrently connected neurons with the states described by continuous variables formula_47 and the activation functions formula_48, where index formula_49 enumerates the layers of the network and index formula_2 enumerates individual neurons in that layer. The activation functions can depend on the activities of all the neurons in the layer. Every layer can have a different number of neurons formula_50. These neurons are recurrently connected with the neurons in the preceding and the subsequent layers. The matrices of weights that connect neurons in layers formula_49 and formula_51 are denoted by formula_52 (the order of the upper indices for weights is the same as the order of the lower indices; in the example above this means that the index formula_2 enumerates neurons in the layer formula_49, and index formula_53 enumerates neurons in the layer formula_51). The feedforward weights and the feedback weights are equal. The dynamical equations for the neurons' states can then be written down, together with the appropriate boundary conditions. The main difference of these equations from conventional feedforward networks is the presence of a second term, which is responsible for the feedback from higher layers. These top-down signals help neurons in lower layers to decide on their response to the presented stimuli. Following the general recipe, it is convenient to introduce a Lagrangian function formula_54 for the formula_49-th hidden layer, which depends on the activities of all the neurons in that layer. The activation functions in that layer can be defined as partial derivatives of the Lagrangian. With these definitions an energy (Lyapunov) function can be written down. If the Lagrangian functions, or equivalently the activation functions, are chosen in such a way that the Hessians for each layer are positive semi-definite and the overall energy is bounded from below, this system is guaranteed to converge to a fixed point attractor state. The temporal derivative of this energy function is non-positive on the dynamical trajectories; thus, the hierarchical layered network is indeed an attractor network with a global energy function. This network is described by a hierarchical set of synaptic weights that can be learned for each specific problem.
{
"math_id": 0,
"text": "i "
},
{
"math_id": 1,
"text": "V_i"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "V_i=+1"
},
{
"math_id": 4,
"text": "V_i=-1"
},
{
"math_id": 5,
"text": "E = - \\sum\\limits_{\\mu = 1}^{N_\\text{mem}} F\\Big(\\sum\\limits_{i=1}^{N_f}\\xi_{\\mu i} V_i\\Big)"
},
{
"math_id": 6,
"text": "\\xi_{\\mu i}"
},
{
"math_id": 7,
"text": "\\mu = 1...N_\\text{mem}"
},
{
"math_id": 8,
"text": "i=1...N_f"
},
{
"math_id": 9,
"text": "F(x)"
},
{
"math_id": 10,
"text": "V^{(t+1)}_i = \\operatorname{sign}\\bigg[ \\sum\\limits_{\\mu=1}^{N_\\text{mem}} \\bigg(F\\Big(\\xi_{\\mu i} + \\sum\\limits_{j\\neq i}\\xi_{\\mu j} V^{(t)}_j\\Big) - F\\Big(-\\xi_{\\mu i} + \\sum\\limits_{j\\neq i}\\xi_{\\mu j} V^{(t)}_j\\Big) \\bigg)\\bigg]"
},
{
"math_id": 11,
"text": "i"
},
{
"math_id": 12,
"text": "F(x) = x^2"
},
{
"math_id": 13,
"text": "F(x)=x^n"
},
{
"math_id": 14,
"text": "N^{\\max}_{\\text{mem}}\\approx \\frac{1}{2 (2n-3)!!} \\frac{N_f^{n-1}}{\\ln(N_f)}"
},
{
"math_id": 15,
"text": "F(x)=e^x"
},
{
"math_id": 16,
"text": "N^{\\max}_{\\text{mem}}\\approx 2^{N_f/2}"
},
{
"math_id": 17,
"text": "x_i"
},
{
"math_id": 18,
"text": "h_\\mu"
},
{
"math_id": 19,
"text": "h"
},
{
"math_id": 20,
"text": "\\xi_{\\mu i}"
},
{
"math_id": 21,
"text": "\\mu"
},
{
"math_id": 22,
"text": "f_\\mu"
},
{
"math_id": 23,
"text": "g_i"
},
{
"math_id": 24,
"text": "f_\\mu = f(\\{h_\\mu\\})"
},
{
"math_id": 25,
"text": "g_i = g(\\{x_i\\})"
},
{
"math_id": 26,
"text": "\\tau_f"
},
{
"math_id": 27,
"text": "\\tau_h"
},
{
"math_id": 28,
"text": "I_i"
},
{
"math_id": 29,
"text": "\\tau_h\\ll\\tau_f"
},
{
"math_id": 30,
"text": "V_i = g(x_i)"
},
{
"math_id": 31,
"text": "g^{-1}(z)"
},
{
"math_id": 32,
"text": "g(x)"
},
{
"math_id": 33,
"text": "T_{ij} = \\sum\\limits_{\\mu=1}^{N_h} \\xi_{\\mu i }\\xi_{\\mu j}"
},
{
"math_id": 34,
"text": "x_i"
},
{
"math_id": 35,
"text": "x_i g(x_i)'"
},
{
"math_id": 36,
"text": "W_{IJ}"
},
{
"math_id": 37,
"text": "I"
},
{
"math_id": 38,
"text": "J"
},
{
"math_id": 39,
"text": "L(\\{x_I\\})"
},
{
"math_id": 40,
"text": "g_I"
},
{
"math_id": 41,
"text": "g_J"
},
{
"math_id": 42,
"text": "x_I"
},
{
"math_id": 43,
"text": "\\tau_I"
},
{
"math_id": 44,
"text": "M_{IK}"
},
{
"math_id": 45,
"text": "M_{IJ}"
},
{
"math_id": 46,
"text": "N_\\text{layer}"
},
{
"math_id": 47,
"text": "x_i^{A}"
},
{
"math_id": 48,
"text": "g_i^{A}"
},
{
"math_id": 49,
"text": "A"
},
{
"math_id": 50,
"text": "N_A"
},
{
"math_id": 51,
"text": "B"
},
{
"math_id": 52,
"text": "\\xi^{(A,B)}_{ij}"
},
{
"math_id": 53,
"text": "j"
},
{
"math_id": 54,
"text": "L^A(\\{x^A_i\\})"
}
]
| https://en.wikipedia.org/wiki?curid=68440670 |
68444148 | Walther graph | Planar bipartite graph with 25 vertices and 31 edges
In the mathematical field of graph theory, the Walther graph, also called the Tutte fragment, is a planar bipartite graph with 25 vertices and 31 edges named after Hansjoachim Walther. It has chromatic index 3, girth 3 and diameter 8.
If the single vertex of degree 1 whose neighbour has degree 3 is removed, the resulting graph has no Hamiltonian path. This property was used by Tutte when combining three Walther graphs to produce the Tutte graph, the first known counterexample to Tait's conjecture that every 3-regular polyhedron has a Hamiltonian cycle.
Algebraic properties.
The Walther graph is an identity graph; its automorphism group is the trivial group.
The characteristic polynomial of the Walther graph is:
formula_0
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\nx^3 \\left(x^{22} \\right. & {} -31 x^{20}+411 x^{18}-3069 x^{16}+14305 x^{14}-43594 x^{12} \\\\\n& \\left. {} +88418 x^{10}-119039 x^8+103929 x^6-55829 x^4+16539 x^2-2040\\right)\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=68444148 |
68444579 | Takeuti–Feferman–Buchholz ordinal | In the mathematical fields of set theory and proof theory, the Takeuti–Feferman–Buchholz ordinal (TFBO) is a large countable ordinal, which acts as the limit of the range of Buchholz's psi function and Feferman's theta function. It was named by David Madore, after Gaisi Takeuti, Solomon Feferman and Wilfried Buchholz. It is written as formula_0 using Buchholz's psi function, an ordinal collapsing function invented by Wilfried Buchholz, and formula_1 in Feferman's theta function, an ordinal collapsing function invented by Solomon Feferman. It is the proof-theoretic ordinal of several formal theories:
Despite being one of the largest large countable ordinals and recursive ordinals, it is still vastly smaller than the proof-theoretic ordinal of ZFC. | [
{
"math_id": 0,
"text": "\\psi_0(\\varepsilon_{\\Omega_\\omega + 1})"
},
{
"math_id": 1,
"text": "\\theta_{\\varepsilon_{\\Omega_\\omega + 1}}(0)"
},
{
"math_id": 2,
"text": "\\Pi_1^1 -CA + BI"
},
{
"math_id": 3,
"text": "\\Pi_1^1"
},
{
"math_id": 4,
"text": "\\Omega_\\alpha"
},
{
"math_id": 5,
"text": "\\aleph_\\alpha"
},
{
"math_id": 6,
"text": "\\varepsilon_\\beta"
},
{
"math_id": 7,
"text": "\\beta"
},
{
"math_id": 8,
"text": "1+\\beta"
},
{
"math_id": 9,
"text": "\\alpha \\mapsto \\omega^\\alpha"
},
{
"math_id": 10,
"text": "\\psi"
}
]
| https://en.wikipedia.org/wiki?curid=68444579 |
68445623 | Retrieval Data Structure | In computer science, a retrieval data structure, also known as static function, is a space-efficient dictionary-like data type composed of a collection of (key, value) pairs that allows the following operations:
They can also be thought of as a function formula_0 for a universe formula_1 and the set of keys formula_2 where retrieve has to return formula_3 for any value formula_4 and an arbitrary value from formula_5 otherwise.
In contrast to static functions, AMQ-filters support (probabilistic) membership queries and dictionaries additionally allow operations like listing keys or looking up the value associated with a key and returning some other symbol if the key is not contained.
As can be derived from the operations, this data structure does not need to store the keys at all and may actually use "less" space than would be needed for a simple list of the key value pairs. This makes it attractive in situations where the associated data is small (e.g. a few bits) compared to the keys because we can save a lot by reducing the space used by keys.
As a simple example, suppose we are given formula_6 video game names, each annotated with a boolean indicating whether the game contains a dog that can be petted. A static function built from this database can reproduce the associated flag for all names contained in the original set and an arbitrary one for other names. The size of this static function can be made to be only formula_7 bits for a small formula_8, which is obviously much less than any pair-based representation.
Examples.
A trivial example of a static function is a sorted list of the keys and values which implements all the above operations and many more.
However, retrieval from such a list is slow, and the list implements many operations that are not needed and can be removed to allow optimizations.
Furthermore, we are even allowed to return an arbitrary value if the queried key is not contained, a freedom the list-based solution does not exploit at all.
Perfect hash functions.
Another simple way to build a static function is to use a perfect hash function: After building the PHF for our keys, store the corresponding values at the correct position for each key. As can be seen, this approach also allows updating the associated values, but the set of keys has to be static. The correctness follows from the correctness of the perfect hash function. Using a minimal perfect hash function gives a big space improvement if the associated values are relatively small.
XOR-retrieval.
Hashed filters can be categorized by their queries into OR-, AND- and XOR-filters. For example, the Bloom filter is an AND-filter, since it returns true for a membership query if all probed locations match. XOR filters work only for static key sets but are the most promising for building retrieval data structures space-efficiently. They are built by solving a linear system which ensures that a query for every key returns true.
Construction.
Given a hash function formula_9 that maps each key to a bitvector of length formula_10 where all formula_11 are linearly independent the following system of linear equations has a solution formula_12:
formula_13
Therefore, the static function is given by formula_9 and formula_14, and the space usage is dominated by formula_14, which is roughly formula_7 bits per key for formula_15; the hash function itself is assumed to be small.
A retrieval for formula_16 can be expressed as the bitwise XOR of the rows formula_17 for all set bits formula_18 of formula_19. Furthermore, fast queries require a sparse formula_19; thus the problems that need to be solved for this method are finding a suitable hash function while still being able to solve the system of linear equations efficiently.
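The construction above can be illustrated with a short sketch (not from the source; the hash family, parameter choices and variable names are illustrative). It builds formula_14 by online Gaussian elimination over GF(2), representing each hash vector formula_19 and each row of formula_14 as an integer bitmask, and answers queries by XOR-ing the rows selected by the set bits of formula_19:
```python
import hashlib
import random

def hash_vector(key: str, m: int, seed: int = 0) -> int:
    """Pseudo-random m-bit row h(key) as an integer bitmask (illustrative choice;
    any family whose rows are linearly independent with high probability works)."""
    digest = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8).digest()
    return random.Random(int.from_bytes(digest, "big")).getrandbits(m)

def build(pairs, m: int, seed: int = 0):
    """Solve h(x) . Z = b(x) over GF(2); Z is returned as m integers, each holding
    the r-bit entry of one row."""
    pivots = {}                              # pivot bit -> (reduced row, right-hand side)
    for key, value in pairs:
        mask, rhs = hash_vector(key, m, seed), value
        while mask:
            p = mask.bit_length() - 1        # highest set bit of the current row
            if p not in pivots:
                pivots[p] = (mask, rhs)
                break
            pmask, prhs = pivots[p]
            mask, rhs = mask ^ pmask, rhs ^ prhs   # eliminate with the existing pivot row
        else:
            if rhs:                          # row reduced to 0 = nonzero: rows were dependent
                raise ValueError("dependent rows, retry with another seed")
    Z = [0] * m
    for p in sorted(pivots):                 # back-substitution, lowest pivot first
        mask, rhs = pivots[p]
        acc, rest = rhs, mask ^ (1 << p)
        while rest:
            i = rest.bit_length() - 1
            acc ^= Z[i]
            rest ^= 1 << i
        Z[p] = acc
    return Z

def retrieve(Z, key: str, m: int, seed: int = 0) -> int:
    """XOR of the rows Z_i for all set bits i of h(key)."""
    mask, out = hash_vector(key, m, seed), 0
    while mask:
        i = mask.bit_length() - 1
        out, mask = out ^ Z[i], mask ^ (1 << i)
    return out

pairs = [("stardew valley", 1), ("half-life", 0), ("doom", 1), ("fez", 0)]
m, seed = 8, 0                               # m is roughly (1 + eps) * n positions
while True:                                  # rebuild on the rare dependent-row failure
    try:
        Z = build(pairs, m, seed)
        break
    except ValueError:
        seed += 1
assert all(retrieve(Z, k, m, seed) == v for k, v in pairs)
```
Only the m row entries of Z (plus the hash seed) need to be stored to reproduce every associated value, mirroring the roughly formula_7 bits per key discussed above.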
Ribbon retrieval.
Using a sparse random matrix formula_9 makes retrievals cache-inefficient, because they access most of formula_14 in a random, non-local pattern. Ribbon retrieval improves on this by giving each formula_19 a consecutive "ribbon" of width formula_20 in which bits are set at random.
Using the properties of formula_11, the matrix formula_14 can be computed in formula_21 expected time: Ribbon solving works by first sorting the rows by their starting position (e.g. by counting sort). Then, a REM form can be constructed iteratively by performing row operations on rows strictly below the current row, eliminating all 1-entries in all columns below the first 1-entry of this row. Row operations do not produce any values outside of the ribbon and are very cheap, since they only require an XOR of formula_22 bits, which can be done in formula_23 time on a RAM. It can be shown that the expected number of row operations is formula_24. Finally, the solution is obtained by backsubstitution.
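Continuing the sketch above (again an illustration, not the reference implementation), a ribbon row is a band of formula_20 random bits placed at a pseudo-random start position. The generic GF(2) solver shown earlier still accepts such rows; a dedicated ribbon solver would additionally sort the rows by start position and touch only a band of bits per row operation:
```python
def ribbon_hash_vector(key: str, m: int, w: int, seed: int = 0) -> int:
    """Band row of width w: random bits in positions [start, start + w), with the
    first bit of the band forced to 1 (illustrative construction)."""
    digest = hashlib.blake2b(f"ribbon:{seed}:{key}".encode(), digest_size=8).digest()
    rng = random.Random(int.from_bytes(digest, "big"))
    start = rng.randrange(m - w + 1)
    return (rng.getrandbits(w) | 1) << start
```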
Applications.
Approximate membership.
To build an approximate membership data structure, use a fingerprinting function formula_25. Then build a static function formula_26 on formula_27, the restriction of the fingerprinting function to our set of keys formula_28.
Checking the membership of an element formula_16 is done by evaluating formula_26 with formula_29 and returning true if the returned value equals formula_19.
The performance of this data structure is exactly the performance of the underlying static function.
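A minimal sketch of this reduction, reusing build and retrieve from the XOR-retrieval example above (the fingerprint function below is an illustrative stand-in for formula_25):
```python
def fingerprint(key: str, r: int = 8) -> int:
    """r-bit fingerprint of a key (illustrative hash choice)."""
    d = hashlib.blake2b(f"fp:{key}".encode(), digest_size=4).digest()
    return int.from_bytes(d, "big") & ((1 << r) - 1)

def build_filter(keys, m: int, r: int = 8, seed: int = 0):
    """Static function mapping every stored key to its fingerprint."""
    return build([(k, fingerprint(k, r)) for k in keys], m, seed)

def maybe_contains(Z, key: str, m: int, r: int = 8, seed: int = 0) -> bool:
    # Always true for stored keys; for other keys the retrieved value is arbitrary,
    # so false positives occur with probability about 2**-r.
    return retrieve(Z, key, m, seed) == fingerprint(key, r)
```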
Perfect hash functions.
A retrieval data structure can be used to construct a perfect hash function: First insert the keys into a cuckoo hash table with formula_32 hash functions formula_33 and buckets of size 1. Then, for every key, store the index of the hash function that led to the key's insertion into the hash table in a formula_30-bit retrieval data structure formula_34. The perfect hash function is given by formula_35.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "b \\colon \\, \\mathcal{U} \\to \\{0, 1\\}^r"
},
{
"math_id": 1,
"text": "\\mathcal{U}"
},
{
"math_id": 2,
"text": "S \\subseteq \\mathcal{U}"
},
{
"math_id": 3,
"text": "b(x)"
},
{
"math_id": 4,
"text": "x \\in S"
},
{
"math_id": 5,
"text": "\\{0, 1\\}^r"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "(1 + \\epsilon) n"
},
{
"math_id": 8,
"text": "\\epsilon"
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "m \\geq \\left\\vert S \\right\\vert = n"
},
{
"math_id": 11,
"text": "(h(x))_{x \\in S}"
},
{
"math_id": 12,
"text": "Z \\in \\{ 0, 1 \\}^{m \\times r}"
},
{
"math_id": 13,
"text": "(h(x) \\cdot Z = b(x))_{x \\in S}"
},
{
"math_id": 14,
"text": "Z"
},
{
"math_id": 15,
"text": "m = (1 + \\epsilon) n"
},
{
"math_id": 16,
"text": "x \\in \\mathcal{U}"
},
{
"math_id": 17,
"text": "Z_i"
},
{
"math_id": 18,
"text": "i"
},
{
"math_id": 19,
"text": "h(x)"
},
{
"math_id": 20,
"text": "w = \\mathcal{O}(\\log n / \\epsilon)"
},
{
"math_id": 21,
"text": "\\mathcal{O}(n/\\epsilon^2)"
},
{
"math_id": 22,
"text": "\\mathcal{O}(\\log n/\\epsilon)"
},
{
"math_id": 23,
"text": "\\mathcal{O}(1/\\epsilon)"
},
{
"math_id": 24,
"text": "\\mathcal{O}(n/\\epsilon)"
},
{
"math_id": 25,
"text": "h \\colon\\, \\mathcal{U} \\to \\{ 0, 1 \\}^r"
},
{
"math_id": 26,
"text": "D_{h_S}"
},
{
"math_id": 27,
"text": "h_S"
},
{
"math_id": 28,
"text": "S"
},
{
"math_id": 29,
"text": "x"
},
{
"math_id": 30,
"text": "r"
},
{
"math_id": 31,
"text": "f = 2^r"
},
{
"math_id": 32,
"text": "H=2^r"
},
{
"math_id": 33,
"text": "h_i"
},
{
"math_id": 34,
"text": "D"
},
{
"math_id": 35,
"text": "h_{D(x)}(x)"
}
]
| https://en.wikipedia.org/wiki?curid=68445623 |
68445647 | Positive element | In mathematics, an element of a *-algebra is called positive if it is the sum of elements of the form formula_0.
Definition.
Let formula_1 be a *-algebra. An element formula_2 is called positive if there are finitely many elements formula_3 such that formula_4 holds. This is also denoted by formula_5.
The set of positive elements is denoted by formula_6.
A special case of particular importance is the case where formula_1 is a complete normed *-algebra that satisfies the C*-identity (formula_7); such an algebra is called a C*-algebra.
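For instance, in the C*-algebra of complex square matrices with the operator norm, the positive elements are exactly the positive semidefinite matrices, i.e. the Hermitian matrices with non-negative spectrum. The following sketch (an illustration added here, not part of the source) checks this numerically for an element of the form formula_0:
```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a = b.conj().T @ b                   # an element of the form b*b, hence positive

assert np.allclose(a, a.conj().T)    # a is self-adjoint
eigenvalues = np.linalg.eigvalsh(a)  # real spectrum of a Hermitian matrix
assert eigenvalues.min() >= -1e-12   # spectrum lies in [0, infinity), up to round-off
```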
Examples.
In case formula_1 is a C*-algebra, the following holds:
Criteria.
Let formula_1 be a C*-algebra and formula_2. Then the following are equivalent:
If formula_1 is a unital *-algebra with unit element formula_8, then in addition the following statements are equivalent:
Properties.
In *-algebras.
Let formula_1 be a *-algebra. Then:
In C*-algebras.
Let formula_1 be a C*-algebra. Then:
Partial order.
Let formula_1 be a *-algebra. The property of being a positive element defines a translation invariant partial order on the set of self-adjoint elements formula_26. If formula_59 holds for formula_28, one writes formula_60 or formula_61.
This partial order fulfills the properties formula_62 and formula_63 for all formula_64 with formula_60 and formula_65.
If formula_1 is a C*-algebra, the partial order also has the following properties for formula_28:
Citations.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "a^*a"
},
{
"math_id": 1,
"text": "\\mathcal{A}"
},
{
"math_id": 2,
"text": "a \\in \\mathcal{A}"
},
{
"math_id": 3,
"text": "a_k \\in \\mathcal{A} \\; (k = 1,2,\\ldots,n)"
},
{
"math_id": 4,
"text": "a = \\sum_{k=1}^n a_k^*a_k"
},
{
"math_id": 5,
"text": "a \\geq 0"
},
{
"math_id": 6,
"text": "\\mathcal{A}_+"
},
{
"math_id": 7,
"text": "\\left\\| a^*a \\right\\| = \\left\\| a \\right\\|^2 \\ \\forall a \\in \\mathcal{A}"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "a^* a"
},
{
"math_id": 10,
"text": "aa^*"
},
{
"math_id": 11,
"text": "a \\in \\mathcal{A}_N"
},
{
"math_id": 12,
"text": "f \\geq 0"
},
{
"math_id": 13,
"text": "a"
},
{
"math_id": 14,
"text": "f(a)"
},
{
"math_id": 15,
"text": "a = a^* = a^2"
},
{
"math_id": 16,
"text": "\\sigma(a)"
},
{
"math_id": 17,
"text": "\\sigma(a) \\subseteq \\{ 0, 1 \\}"
},
{
"math_id": 18,
"text": "\\sigma(a) \\subseteq [0, \\infty)"
},
{
"math_id": 19,
"text": "b \\in \\mathcal{A}"
},
{
"math_id": 20,
"text": "a = bb^*"
},
{
"math_id": 21,
"text": "c \\in \\mathcal{A}_{sa}"
},
{
"math_id": 22,
"text": "a = c^2"
},
{
"math_id": 23,
"text": "\\left\\| te - a \\right\\| \\leq t"
},
{
"math_id": 24,
"text": "t \\geq \\left\\| a \\right\\|"
},
{
"math_id": 25,
"text": "a \\in \\mathcal{A}_+"
},
{
"math_id": 26,
"text": "\\mathcal{A}_{sa}"
},
{
"math_id": 27,
"text": "\\alpha a, a+b \\in \\mathcal{A}_+"
},
{
"math_id": 28,
"text": "a,b \\in \\mathcal{A}"
},
{
"math_id": 29,
"text": "\\alpha \\in [0, \\infty)"
},
{
"math_id": 30,
"text": "b^*ab"
},
{
"math_id": 31,
"text": "\\langle \\mathcal{A}_+ \\rangle = \\mathcal{A}^2"
},
{
"math_id": 32,
"text": "\\mathcal{A}_+ - \\mathcal{A}_+ = \\mathcal{A}_{sa} \\cap \\mathcal{A}^2"
},
{
"math_id": 33,
"text": "n \\in \\mathbb{N}"
},
{
"math_id": 34,
"text": "b \\in \\mathcal{A}_+"
},
{
"math_id": 35,
"text": "b^n = a"
},
{
"math_id": 36,
"text": "n"
},
{
"math_id": 37,
"text": "b^*b"
},
{
"math_id": 38,
"text": "|b| = (b^*b)^\\frac{1}{2}"
},
{
"math_id": 39,
"text": "\\alpha \\geq 0"
},
{
"math_id": 40,
"text": "a^\\alpha \\in \\mathcal{A}_+"
},
{
"math_id": 41,
"text": "a^\\alpha a^\\beta = a^{\\alpha + \\beta}"
},
{
"math_id": 42,
"text": "\\beta \\in [0, \\infty)"
},
{
"math_id": 43,
"text": "\\alpha \\mapsto a^\\alpha"
},
{
"math_id": 44,
"text": "\\alpha"
},
{
"math_id": 45,
"text": "ab = ba"
},
{
"math_id": 46,
"text": "a,b \\in \\mathcal{A}_+"
},
{
"math_id": 47,
"text": "ab \\in \\mathcal{A}_+"
},
{
"math_id": 48,
"text": "\\mathcal{A}_{sa} = \\mathcal{A}_+ - \\mathcal{A}_+"
},
{
"math_id": 49,
"text": "\\mathcal{A}^2 = \\mathcal{A}"
},
{
"math_id": 50,
"text": "-a"
},
{
"math_id": 51,
"text": "a = 0"
},
{
"math_id": 52,
"text": "\\mathcal{B}"
},
{
"math_id": 53,
"text": "\\mathcal{B}_+ = \\mathcal{B} \\cap \\mathcal{A}_+"
},
{
"math_id": 54,
"text": "\\Phi"
},
{
"math_id": 55,
"text": "\\Phi(\\mathcal{A}_+) = \\Phi(\\mathcal{A}) \\cap \\mathcal{B}_+"
},
{
"math_id": 56,
"text": "ab = 0"
},
{
"math_id": 57,
"text": "\\left\\| a + b \\right\\| = \\max(\\left\\| a \\right\\|, \\left\\| b \\right\\|)"
},
{
"math_id": 58,
"text": "a \\bot b"
},
{
"math_id": 59,
"text": "b - a \\in \\mathcal{A}_+"
},
{
"math_id": 60,
"text": "a \\leq b"
},
{
"math_id": 61,
"text": "b \\geq a"
},
{
"math_id": 62,
"text": "ta \\leq tb"
},
{
"math_id": 63,
"text": "a + c \\leq b + c"
},
{
"math_id": 64,
"text": "a,b,c \\in \\mathcal{A}_{sa}"
},
{
"math_id": 65,
"text": "t \\in [0, \\infty)"
},
{
"math_id": 66,
"text": "c^*ac \\leq c^*bc"
},
{
"math_id": 67,
"text": "c \\in \\mathcal{A}"
},
{
"math_id": 68,
"text": "c \\in \\mathcal{A}_+"
},
{
"math_id": 69,
"text": "b"
},
{
"math_id": 70,
"text": "ac \\leq bc"
},
{
"math_id": 71,
"text": "-b \\leq a \\leq b"
},
{
"math_id": 72,
"text": "\\left\\| a \\right\\| \\leq \\left\\| b \\right\\|"
},
{
"math_id": 73,
"text": "0 \\leq a \\leq b"
},
{
"math_id": 74,
"text": "a^\\alpha \\leq b^\\alpha"
},
{
"math_id": 75,
"text": "0 < \\alpha \\leq 1"
},
{
"math_id": 76,
"text": "b^{-1} \\leq a^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=68445647 |
68446709 | Coulomb crystal | Physical structures important for trapped ions
A Coulomb crystal (also Ion Coulomb crystal) is a collection of trapped ions confined in a crystal-like structure at low temperature. The structures represent an equilibrium between the repulsive Coulomb interaction between ions and the electric and magnetic fields used to confine the ions. Depending on the confinement techniques and parameters, as well as the number of ions in the trap, these can be 1-, 2- or 3-dimensional, with typical spacing between ions of ~10μm, which is significantly larger than the lattice spacing of typical solid-state crystals. Outside of ion traps, Coulomb crystals also occur naturally in celestial objects such as neutron stars.
Description.
The magnitude of the Coulomb interaction "F" between two ions of charge "q" and "Q" a distance "R" apart is given by
formula_0
directed along the axis between the two ions, where a positive value represents a repulsive force and vice versa.
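As a rough numerical illustration (the values below are chosen here for concreteness and are not taken from the article), the repulsion between two singly charged ions at a typical crystal spacing of 10 μm is:
```python
import math

e = 1.602176634e-19          # elementary charge, C
epsilon_0 = 8.8541878128e-12 # vacuum permittivity, F/m
R = 10e-6                    # ion separation, m (typical Coulomb-crystal spacing)

F = e * e / (4 * math.pi * epsilon_0 * R ** 2)
print(F)                     # about 2.3e-18 N
```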
Trapping techniques include variations on the Paul trap and Penning trap, where the former uses only electric fields while the latter also uses magnetic fields to confine the ions. Considering the simple case of two ions confined in a linear Paul trap, we have a radiofrequency oscillating field, which itself can confine a single ion in the radial direction.
Experimental realisation.
The typical process for creating ICCs in the lab involves ionisation of an elemental source, followed by confinement in an ion trap, where they are imaged via their fluorescence. Changing parameters such as the axial or radial confining potentials may lead to different observed geometries of the crystal, even if the number of ions does not change.
For measurements involving highly charged ions, these are typically observed as "dark" areas in the fluorescence of the Coulomb crystal, due to their different energy levels. This effect is also noticeable when ions in the Coulomb crystal appear to disappear, without changing the structure of the crystal, due to mixing with impurities in a non-ideal vacuum.
Heating effects are also important in the characterisation of Coulomb crystals, since thermal motion can cause the image to blur. This may be stimulated by the cooling laser being slightly off-resonance, and so needs to be carefully monitored.
Applications and properties.
Coulomb crystals of various ionic species have applications across much of physics, for example, in high precision spectroscopy, quantum information processing and cavity QED.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F = \\frac{qQ}{4\\pi \\epsilon_0 R^2}"
}
]
| https://en.wikipedia.org/wiki?curid=68446709 |
6844674 | Dangerously irrelevant operator | Class of operators in quantum field theory
In statistical mechanics and quantum field theory, a dangerously irrelevant operator (or dangerous irrelevant operator) is an operator which is irrelevant at a renormalization group fixed point, yet affects the infrared (IR) physics significantly (e.g. because the vacuum expectation value (VEV) of some field depends sensitively upon the coefficient of this operator).
Critical phenomena.
In the theory of critical phenomena, the free energy of a system near the critical point depends analytically on the coefficients of generic (not dangerous) irrelevant operators, while the dependence on the coefficients of dangerously irrelevant operators is non-analytic (p. 49).
The presence of dangerously irrelevant operators leads to the violation of the hyperscaling relation formula_0 between the critical exponents formula_1 and formula_2 in formula_3 dimensions. The simplest example (p. 93) is the critical point of the Ising ferromagnet in formula_4 dimensions, which is a Gaussian theory (free massless scalar formula_5), but the leading irrelevant perturbation formula_6 is dangerously irrelevant. Another example occurs for the Ising model with random-field disorder, where the fixed point occurs at zero temperature, and the temperature perturbation is dangerously irrelevant (p. 164).
Quantum field theory.
Let us suppose there is a field formula_5 with a potential depending upon two parameters, formula_7 and formula_8.
formula_9
Let us also suppose that formula_7 is positive and nonzero and that formula_10 > formula_1. If formula_8 is zero, there is no stable equilibrium. If the scaling dimension of formula_5 is formula_11, then the scaling dimension of formula_8 is formula_12, where formula_3 is the number of dimensions. It is clear that if the scaling dimension of formula_8 is negative, formula_8 is an irrelevant parameter. However, the crucial point is that the VEV
formula_13.
depends very sensitively upon formula_8, at least for small values of formula_8. Because the nature of the infrared physics also depends upon the VEV, it looks very different even for a tiny change in formula_8, not because the physics in the vicinity of formula_14 changes much (it hardly changes at all), but because the VEV we are expanding about has changed enormously.
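A quick numerical check of this stationary point (with illustrative parameter values, not taken from the source) confirms that the derivative of the potential vanishes at the quoted VEV and that the VEV grows rapidly as formula_8 is taken towards zero:
```python
a, b, alpha, beta = 1.0, 0.01, 2.0, 6.0       # assumed values with beta > alpha > 0

def dV(phi):
    """Derivative of V(phi) = -a*phi**alpha + b*phi**beta."""
    return -a * alpha * phi ** (alpha - 1) + b * beta * phi ** (beta - 1)

vev = (a * alpha / (b * beta)) ** (1.0 / (beta - alpha))
print(vev, dV(vev))                            # dV(vev) is 0 up to floating-point error
print((a * alpha / (0.0001 * beta)) ** (1.0 / (beta - alpha)))  # smaller b, much larger VEV
```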
Supersymmetric models with a modulus can often have dangerously irrelevant parameters.
Other uses of the term.
Consider a renormalization group (RG) flow triggered at short distances by a relevant perturbation of an ultra-violet (UV) fixed point, and flowing at long distances to an infra-red (IR) fixed point. It may be possible (e.g. in perturbation theory) to monitor how the dimensions of UV operators change along the RG flow. In such a situation, one sometimes calls a UV operator dangerously irrelevant if its scaling dimension, while irrelevant at short distances (formula_15), receives a negative correction along the renormalization group flow, so that the operator becomes relevant at long distances (formula_16). This usage of the term is different from the one originally introduced in statistical physics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha=2-d\\nu"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "\\nu"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "d\\ge4"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "\\phi^4"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "b"
},
{
"math_id": 9,
"text": "V\\left(\\phi\\right)=-a \\phi^\\alpha + b\\phi^\\beta"
},
{
"math_id": 10,
"text": "\\beta"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "d-\\beta c"
},
{
"math_id": 13,
"text": "\\langle\\phi\\rangle=\\left(\\frac{a\\alpha}{b\\beta}\\right)^{\\frac{1}{\\beta-\\alpha}}=\\left(\\frac{a\\alpha}{\\beta}\\right)^{\\frac{1}{\\beta-\\alpha}}b^{-\\frac{1}{\\beta-\\alpha}}"
},
{
"math_id": 14,
"text": "\\phi=0"
},
{
"math_id": 15,
"text": "\\Delta_{\\rm UV}>d"
},
{
"math_id": 16,
"text": "\\Delta_{\\rm IR}<d"
}
]
| https://en.wikipedia.org/wiki?curid=6844674 |
684489 | Toughness | Material ability to absorb energy and plastically deform without fracturing
In materials science and metallurgy, toughness is the ability of a material to absorb energy and plastically deform without fracturing. Toughness is the strength with which the material opposes rupture. One definition of material toughness is the amount of energy per unit volume that a material can absorb before rupturing. This measure of toughness is different from that used for fracture toughness, which describes the capacity of materials to resist fracture.
Toughness requires a balance of strength and ductility.
Toughness and strength.
Toughness is related to the area under the stress–strain curve. In order to be tough, a material must be both strong and ductile. For example, brittle materials (like ceramics) that are strong but with limited ductility are not tough; conversely, very ductile materials with low strengths are also not tough. To be tough, a material should withstand both high stresses and high strains. Generally speaking, strength indicates how much force the material can support, while toughness indicates how much energy a material can absorb before rupturing.
Mathematical definition.
Toughness can be determined by integrating the stress-strain curve. It is the energy of mechanical deformation per unit volume prior to fracture. The explicit mathematical description is:
formula_0
where formula_1 is strain, formula_2 is the strain upon failure, and formula_3 is stress.
If the upper limit of integration is restricted to the yield point, the energy absorbed per unit volume is known as the modulus of resilience. Mathematically, the modulus of resilience can be expressed as the square of the yield stress divided by twice the Young's modulus of elasticity. That is,
<templatestyles src="Block indent/styles.css"/>Modulus of resilience =
Toughness tests.
The toughness of a material can be measured using a small specimen of that material. A typical testing machine uses a pendulum to deform a notched specimen of defined cross-section. The height from which the pendulum fell, minus the height to which it rose after deforming the specimen, multiplied by the weight of the pendulum, is a measure of the energy absorbed by the specimen as it was deformed during the impact with the pendulum. The Charpy and Izod notched impact strength tests are typical ASTM tests used to determine toughness.
Unit of toughness.
Tensile toughness (or "deformation energy", "U"T) is measured in units of joule per cubic metre (J·m−3), or equivalently newton-metre per cubic metre (N·m·m−3), in the SI system and inch-pound-force per cubic inch (in·lbf·in−3) in US customary units:
In the SI system, the unit of tensile toughness can be easily calculated by using the area underneath the stress–strain ("σ"–"ε") curve, which gives the tensile toughness value, as given below:
Toughest material.
An alloy made of almost equal amounts of chromium, cobalt and nickel (CrCoNi) is the toughest material discovered thus far. It resists fracturing even at incredibly cold temperatures close to absolute zero. It is being considered as a material used in building spacecraft.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\tfrac{\\mbox{energy}}{\\mbox{volume}} = \\int_{0}^{\\varepsilon_f} \\sigma\\, d\\varepsilon "
},
{
"math_id": 1,
"text": " \\varepsilon "
},
{
"math_id": 2,
"text": " \\varepsilon_f "
},
{
"math_id": 3,
"text": " \\sigma "
}
]
| https://en.wikipedia.org/wiki?curid=684489 |
68455761 | Missing sock | Single sock in a pair of socks known or perceived to be missing
A missing sock, lost sock, or odd sock (primarily British English) is a single sock in a pair of socks known or perceived to be permanently or temporarily missing. Socks are usually perceived to be lost immediately before, during, or immediately after doing laundry.
According to popular media articles regarding missing socks, people almost always report losing one sock in a pair, and hardly ever the entire pair of two socks. Various explanations or theories—some scientific or pseudo-scientific and others humorous or facetious—have been proposed to show how or why single socks go missing or are perceived to have gone missing.
The terms odd sock or mismatched sock may instead refer to the remaining "orphaned" sock in a pair where the other matching sock is missing or lost.
Plausible explanations.
Two common plausible explanations for missing socks are that they are lost in transit to or from the laundry, or that they are trapped inside, between, or behind components of ("eaten by") washing machines or clothes dryers. Due to the high rotational speeds of modern front-loading washing machines and dryers, it may be possible for small clothes items such as socks to slip through any holes or tears in the rubber gasket between either machine's spinning drums and their outer metal or plastic cases. Socks may also bunch up or unravel and get caught in the water drain pipe of washing machines or in the lint trap of dryers.
In 2008, American science educator and writer George B. Johnson proposed several hypotheses for why socks go missing:
In his particular case, Johnson rejected all hypotheses except the last one, as it was possible for small items like socks to slip behind the dryer's spinning drum because of gaps between the drum and the dryer's outer metal case.
A 2016 pseudo-scientific consumer study commissioned by Samsung Electronics UK (to advertise their new washing machines where users could add more laundry to a load one piece at a time) referenced multiple human errors—including errors of human perception or psychology—to explain why socks go missing: they may become mismatched by poor folding and sorting of laundry, be intentionally misplaced or stolen, fall in hard-to-reach or hard-to-see spaces behind furniture or radiators, or blow off of clothes lines in high wind. Diffusion of responsibility, poor heuristics, and confirmation bias were the cited psychological reasons. For example: people may not search for lost socks because they assume others are searching; people search for lost socks in the likeliest places they could have been lost but not in the places where they are actually lost; or people may believe socks are or are not lost because they want to believe so despite evidence to the contrary, respectively.
The authors of the Samsung study developed an equation called the "sock loss formula" or "sock loss index" which claims to predict the frequency of sock loss for a given individual: formula_0, where "L" equals laundry size (number of people in a household multiplied by the number of weekly laundry loads), "C" equals "washing complexity" (the number of types of laundry loads such as dark clothes versus white clothes done in a week multiplied by the total number of socks in those loads), "P" equals the positive or negative attitude of the individual toward doing laundry on a scale of 1 (most negative) to 5 (most positive), and "A" equals the "degree of attention" the individual has when doing laundry (the sum of whether the individual checks pockets, unrolls sleeves, turns clothes the right way if they have been turned inside out, and unrolls socks).
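A direct transcription of that formula (the parameter names below are chosen here for readability; the study itself only defines the symbols "L", "C", "P" and "A"):
```python
def sock_loss_index(people, weekly_loads, load_types, socks_washed, positivity, attention):
    """Sock loss index (L + C) - (P x A) as described in the Samsung-commissioned study.
    positivity: attitude toward laundry, 1 (most negative) to 5 (most positive);
    attention: checklist score 0-4 (pockets, sleeves, right way out, unrolled socks)."""
    L = people * weekly_loads        # laundry size
    C = load_types * socks_washed    # washing complexity
    return (L + C) - (positivity * attention)

# e.g. a four-person household doing three mixed loads a week
print(sock_loss_index(4, 3, 2, 20, positivity=2, attention=3))
```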
Complementary to the previous explanations, it was also suggested that other small clothes (of which people usually have many items and which get washed often), such as underpants, are lost as often as socks, but people do not notice that as often because such items do not come in matching pairs. The existence of the non-paired remaining sock draws attention to the lost sock in a way that cannot happen with clothes that naturally come in singles and not pairs. Another suggestion made in this context is that since most people usually take off their socks, but not their underpants, when going to sleep, there is a higher chance for socks to get lost in the bedroom (e.g. pushed under the bed or taken by a pet as a toy).
Prevention.
Home appliance repair and design specialists from Sears and GE suggest not overloading laundry machines and repairing any holes in the gaskets between the spinning drums and the rest of the machines to avoid losing socks in them.
Other practical suggestions include:
Humorous explanations.
Some explanations for the phenomenon jokingly suggest that socks have some innate propensity for going missing, and that this may be a physical property of the universe. For example, in the 1996 book "The Nature of Space and Time" by the physicists Stephen Hawking and Nobel laureate Roger Penrose, they posit that spontaneous black holes are responsible for lost socks.
In his 2008 examination of the phenomenon, George B. Johnson also rejected two humorous hypotheses for why socks go missing: that an "intrinsic property" of the socks themselves predisposes or causes them to go missing; and that the socks transform into something else, such as clothes hangers.
In popular culture.
The Bobs' 1988 song "Where Does the Wayward Footwear Go?", asks where lost socks disappear to, asking "To the bottom of the ocean? Or to China? Or to Cuba? Or Aruba?". A 1993 album by the American indie rock band Grifters is titled "One Sock Missing". In the 2001 American children's film "", lost objects including socks are magically transported to the home of a character named Gort, who is a compulsive hoarder.
American illustrator and voice actor Harry S. Robins wrote and illustrated a book titled "The Meaning of Lost and Mismatched Socks". In the British children's book series Oddies, odd socks are transported to a planet called Oddieworld by a magical washing machine.
The online sock subscription service and retailer Blacksocks was supposedly started after its founder wore mismatched socks to a Japanese tea ceremony.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{Sock loss index} = (L + C) - (P \\times A)"
}
]
| https://en.wikipedia.org/wiki?curid=68455761 |
6845737 | Disintegration theorem | Theorem in measure theory
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure.
Motivation.
Consider the unit square formula_0 in the Euclidean plane formula_1. Consider the probability measure formula_2 defined on formula_3 by the restriction of two-dimensional Lebesgue measure formula_4 to formula_3. That is, the probability of an event formula_5 is simply the area of formula_6. We assume formula_6 is a measurable subset of formula_3.
Consider a one-dimensional subset of formula_3 such as the line segment formula_7. formula_8 has formula_2-measure zero; every subset of formula_8 is a formula_2-null set; since the Lebesgue measure space is a complete measure space,
formula_9
While true, this is somewhat unsatisfying. It would be nice to say that formula_2 "restricted to" formula_8 is the one-dimensional Lebesgue measure formula_10, rather than the zero measure. The probability of a "two-dimensional" event formula_6 could then be obtained as an integral of the one-dimensional probabilities of the vertical "slices" formula_11: more formally, if formula_12 denotes one-dimensional Lebesgue measure on formula_8, then
formula_13
for any "nice" formula_5. The disintegration theorem makes this argument rigorous in the context of measures on metric spaces.
Statement of the theorem.
The assumptions of the theorem are as follows (hereafter, formula_14 denotes the collection of Borel probability measures on a topological space formula_15): formula_16 and formula_17 are two Radon spaces (topological spaces on which every Borel probability measure is inner regular); formula_18 is a probability measure on formula_16; formula_19 is a Borel-measurable function, where one should think of formula_20 as a function that "disintegrates" formula_16 by partitioning it into the fibres formula_21 (in the motivating example above one can take formula_22 for formula_23, which gives formula_24, the slice we want to capture); and formula_25 is the pushforward measure formula_26, which provides the distribution of formula_27 (corresponding to the events formula_28).
The conclusion of the theorem: There exists a formula_29-almost everywhere uniquely determined family of probability measures formula_30, which provides a "disintegration" of formula_2 into formula_31, such that: the function formula_32 is Borel measurable, in the sense that formula_33 is a Borel-measurable function for each Borel-measurable set formula_34; formula_12 "lives on" the fibre formula_28, i.e. for formula_29-almost all formula_35 one has formula_36 and so formula_37; and, for every Borel-measurable function formula_38, formula_39 In particular, for any event formula_40, taking formula_41 to be the indicator function of formula_40, formula_42
Applications.
Product spaces.
The original example was a special case of the problem of product spaces, to which the disintegration theorem applies.
When formula_16 is written as a Cartesian product formula_43 and formula_44 is the natural projection, then each fibre formula_45 can be canonically identified with formula_46 and there exists a Borel family of probability measures formula_47 in formula_48 (which is formula_49-almost everywhere uniquely determined) such that
formula_50
which is in particular
formula_51
and
formula_52
The relation to conditional expectation is given by the identities
formula_53
formula_54
Vector calculus.
The disintegration theorem can also be seen as justifying the use of a "restricted" measure in vector calculus. For instance, in Stokes' theorem as applied to a vector field flowing through a compact surface formula_55, it is implicit that the "correct" measure on formula_56 is the disintegration of three-dimensional Lebesgue measure formula_57 on formula_56, and that the disintegration of this measure on ∂Σ is the same as the disintegration of formula_57 on formula_58.
Conditional distributions.
The disintegration theorem can be applied to give a rigorous treatment of conditional probability distributions in statistics, while avoiding purely abstract formulations of conditional probability. The theorem is related to the Borel–Kolmogorov paradox, for example.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = [0,1]\\times[0,1]"
},
{
"math_id": 1,
"text": "\\mathbb{R}^2"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "\\lambda^2"
},
{
"math_id": 5,
"text": "E\\subseteq S"
},
{
"math_id": 6,
"text": "E"
},
{
"math_id": 7,
"text": "L_x = \\{x\\}\\times[0, 1]"
},
{
"math_id": 8,
"text": "L_x"
},
{
"math_id": 9,
"text": "E \\subseteq L_{x} \\implies \\mu (E) = 0."
},
{
"math_id": 10,
"text": "\\lambda^1"
},
{
"math_id": 11,
"text": "E\\cap L_x"
},
{
"math_id": 12,
"text": "\\mu_x"
},
{
"math_id": 13,
"text": "\\mu (E) = \\int_{[0, 1]} \\mu_{x} (E \\cap L_{x}) \\, \\mathrm{d} x"
},
{
"math_id": 14,
"text": "\\mathcal{P}(X)"
},
{
"math_id": 15,
"text": "(X, T)"
},
{
"math_id": 16,
"text": "Y"
},
{
"math_id": 17,
"text": "X"
},
{
"math_id": 18,
"text": "\\mu\\in\\mathcal{P}(Y)"
},
{
"math_id": 19,
"text": "\\pi : Y\\to X"
},
{
"math_id": 20,
"text": "\\pi"
},
{
"math_id": 21,
"text": "\\{ \\pi^{-1}(x)\\ |\\ x \\in X\\}"
},
{
"math_id": 22,
"text": "\\pi((a,b)) = a"
},
{
"math_id": 23,
"text": "(a,b) \\in [0,1]\\times [0,1]"
},
{
"math_id": 24,
"text": "\\pi^{-1}(a) = a \\times [0,1]"
},
{
"math_id": 25,
"text": "\\nu \\in\\mathcal{P}(X)"
},
{
"math_id": 26,
"text": "\\nu = \\pi_{*}(\\mu) = \\mu \\circ \\pi^{-1}"
},
{
"math_id": 27,
"text": "x"
},
{
"math_id": 28,
"text": "\\pi^{-1}(x)"
},
{
"math_id": 29,
"text": "\\nu"
},
{
"math_id": 30,
"text": "\\{\\mu_x\\}_{x\\in X} \\subseteq \\mathcal{P}(Y)"
},
{
"math_id": 31,
"text": "\\{\\mu_x\\}_{x \\in X}"
},
{
"math_id": 32,
"text": "x \\mapsto \\mu_{x}"
},
{
"math_id": 33,
"text": "x \\mapsto \\mu_{x} (B)"
},
{
"math_id": 34,
"text": "B\\subseteq Y"
},
{
"math_id": 35,
"text": "x\\in X"
},
{
"math_id": 36,
"text": "\\mu_{x} \\left( Y \\setminus \\pi^{-1} (x) \\right) = 0,"
},
{
"math_id": 37,
"text": "\\mu_x(E) =\\mu_x(E\\cap\\pi^{-1}(x))"
},
{
"math_id": 38,
"text": "f : Y \\to [0,\\infty]"
},
{
"math_id": 39,
"text": "\\int_{Y} f(y) \\, \\mathrm{d} \\mu (y) = \\int_{X} \\int_{\\pi^{-1} (x)} f(y) \\, \\mathrm{d} \\mu_x (y) \\, \\mathrm{d} \\nu (x)."
},
{
"math_id": 40,
"text": "E\\subseteq Y"
},
{
"math_id": 41,
"text": "f"
},
{
"math_id": 42,
"text": "\\mu (E) = \\int_X \\mu_x (E) \\, \\mathrm{d} \\nu (x)."
},
{
"math_id": 43,
"text": "Y = X_1\\times X_2"
},
{
"math_id": 44,
"text": "\\pi_i : Y\\to X_i"
},
{
"math_id": 45,
"text": "\\pi_1^{-1}(x_1)"
},
{
"math_id": 46,
"text": "X_2"
},
{
"math_id": 47,
"text": "\\{ \\mu_{x_{1}} \\}_{x_{1} \\in X_{1}}"
},
{
"math_id": 48,
"text": "\\mathcal{P}(X_2)"
},
{
"math_id": 49,
"text": "(\\pi_1)_*(\\mu)"
},
{
"math_id": 50,
"text": "\\mu = \\int_{X_{1}} \\mu_{x_{1}} \\, \\mu \\left(\\pi_1^{-1}(\\mathrm d x_1) \\right)= \\int_{X_{1}} \\mu_{x_{1}} \\, \\mathrm{d} (\\pi_{1})_{*} (\\mu) (x_{1}),"
},
{
"math_id": 51,
"text": "\\int_{X_1\\times X_2} f(x_1,x_2)\\, \\mu(\\mathrm d x_1,\\mathrm d x_2) = \\int_{X_1}\\left( \\int_{X_2} f(x_1,x_2) \\mu(\\mathrm d x_2\\mid x_1) \\right) \\mu\\left( \\pi_1^{-1}(\\mathrm{d} x_{1})\\right)"
},
{
"math_id": 52,
"text": "\\mu(A \\times B) = \\int_A \\mu\\left(B\\mid x_1\\right) \\, \\mu\\left( \\pi_1^{-1}(\\mathrm{d} x_{1})\\right)."
},
{
"math_id": 53,
"text": "\\operatorname E(f\\mid \\pi_1)(x_1)= \\int_{X_2} f(x_1,x_2) \\mu(\\mathrm d x_2\\mid x_1),"
},
{
"math_id": 54,
"text": "\\mu(A\\times B\\mid \\pi_1)(x_1)= 1_A(x_1) \\cdot \\mu(B\\mid x_1)."
},
{
"math_id": 55,
"text": "\\Sigma \\subset \\mathbb{R}^3"
},
{
"math_id": 56,
"text": "\\Sigma"
},
{
"math_id": 57,
"text": "\\lambda^3"
},
{
"math_id": 58,
"text": "\\partial\\Sigma"
}
]
| https://en.wikipedia.org/wiki?curid=6845737 |
68458113 | Stacker crane problem | In combinatorial optimization, the stacker crane problem is an optimization problem closely related to the traveling salesperson problem. Its input consists of a collection of ordered pairs of points in a metric space, and the goal is to connect these points into a cycle of minimum total length that includes all of the pairs, oriented consistently with each other. It models problems of scheduling the pickup and delivery of individual loads of cargo, by a stacker crane, construction crane or (in drayage) a truck, in a simplified form without constraints on the timing of these deliveries. It was introduced by , with an equivalent formulation in terms of mixed graphs with directed edges modeling the input pairs and undirected edges modeling their distances. Frederickson et al. credit its formulation to a personal communication of Daniel J. Rosenkrantz.
The stacker crane problem can be viewed as a generalization of the traveling salesperson problem in metric spaces: any instance of the traveling salesperson problem can be transformed into an instance of the stacker crane problem, having a pair formula_0 for each point in the travelling salesman instance. In the other direction, the stacker crane problem can be viewed as a special case of the asymmetric traveling salesperson problem, where the points of the asymmetric traveling salesperson problem are the pairs of a stacker crane instance and the distance from one pair to another is taken as the distance from the delivery point of the first pair, through the pickup point of the second pair, to the delivery point of the second pair. Because it generalizes the traveling salesperson problem, it inherits the same computational complexity: it is NP-hard, and at least as hard to approximate.
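A small sketch of that reduction (illustrative only; the point coordinates and helper names are made up here): each (pickup, delivery) pair becomes one city of an asymmetric TSP whose arc cost from pair i to pair j is the distance from the delivery point of i to the pickup point of j plus the pickup-to-delivery distance of j.
```python
from math import dist   # Euclidean distance between two points (Python 3.8+)

def atsp_costs(pairs):
    """Asymmetric TSP cost matrix: cost[i][j] = d(t_i, s_j) + d(s_j, t_j),
    where pairs[k] = (s_k, t_k) is the (pickup, delivery) pair of load k."""
    n = len(pairs)
    cost = [[0.0] * n for _ in range(n)]
    for i, (_, t_i) in enumerate(pairs):
        for j, (s_j, t_j) in enumerate(pairs):
            if i != j:
                cost[i][j] = dist(t_i, s_j) + dist(s_j, t_j)
    return cost

pairs = [((0, 0), (1, 0)), ((2, 1), (0, 2)), ((3, 3), (1, 3))]
print(atsp_costs(pairs))
```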
An approximation algorithm based on the Christofides algorithm for the traveling salesperson problem can approximate the solution of the stacker crane problem to within an approximation ratio of 9/5.
The problem of designing the back side of an embroidery pattern to minimize the total amount of thread used is closely related to the stacker crane problem, but it allows each of its pairs of points (the ends of the visible stitches on the front side of the pattern) to be traversed in either direction, rather than requiring the traversal to go through all pairs in a consistent direction. It is NP-hard by the same transformation from the traveling salesperson problem, and can be approximated to within an approximation ratio of 2. Another variation of the stacker crane problem, called the dial-a-ride problem, asks for the minimum route for a vehicle to perform a collection of pickups and deliveries while allowing it to hold some number "k" > 1 of loads at any point along its route.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(p,p)"
}
]
| https://en.wikipedia.org/wiki?curid=68458113 |
68466 | Gamma correction | Image luminance mapping function
Gamma correction or gamma is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression:
formula_0
where the non-negative real input value formula_1 is raised to the power formula_2 and multiplied by the constant "A" to get the output value formula_3. In the common case of "A" = 1, inputs and outputs are typically in the range 0–1.
A gamma value formula_4 is sometimes called an "encoding gamma", and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely, a gamma value formula_5 is called a "decoding gamma", and the application of the expansive power-law nonlinearity is called gamma expansion.
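A minimal sketch of these two operations (the function names are illustrative; following the common convention, the quoted value 2.2 is the decoding gamma and its reciprocal is used for encoding):
```python
def gamma_encode(v, gamma=2.2, A=1.0):
    """Gamma compression of a linear value v in [0, 1]: V_out = A * v ** (1/gamma)."""
    return A * v ** (1.0 / gamma)

def gamma_decode(v, gamma=2.2, A=1.0):
    """Gamma expansion of an encoded value v in [0, 1]: V_out = A * v ** gamma."""
    return A * v ** gamma

x = 0.5
assert abs(gamma_decode(gamma_encode(x)) - x) < 1e-12   # round trip is the identity
```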
Explanation.
Gamma encoding of images is used to optimize the usage of bits when encoding an image, or bandwidth used to transport an image, by taking advantage of the non-linear manner in which humans perceive light and color. The human perception of brightness (lightness), under common illumination conditions (neither pitch black nor blindingly bright), follows an approximate power function (which has no relation to the gamma function), with greater sensitivity to relative differences between darker tones than between lighter tones, consistent with the Stevens power law for brightness perception. If images are not gamma-encoded, they allocate too many bits or too much bandwidth to highlights that humans cannot differentiate, and too few bits or too little bandwidth to shadow values that humans are sensitive to and would require more bits/bandwidth to maintain the same visual quality. Gamma encoding of floating-point images is not required (and may be counterproductive), because the floating-point format already provides a piecewise linear approximation of a logarithmic curve.
Although gamma encoding was developed originally to compensate for the brightness characteristics of cathode ray tube (CRT) displays, that is not its main purpose or advantage in modern systems. In CRT displays, the light intensity varies nonlinearly with the electron-gun voltage. Altering the input signal by gamma compression can cancel this nonlinearity, such that the output picture has the intended luminance. However, the gamma characteristics of the display device do not play a factor in the gamma encoding of images and video. They need gamma encoding to maximize the visual quality of the signal, regardless of the gamma characteristics of the display device. The similarity of CRT physics to the inverse of gamma encoding needed for video transmission was a combination of coincidence and engineering, which simplified the electronics in early television sets.
Photographic film has a much greater ability to record fine differences in shade than can be reproduced on photographic paper. Similarly, most video screens are not capable of displaying the range of brightnesses (dynamic range) that can be captured by typical electronic cameras.
For this reason, considerable artistic effort is invested in choosing the reduced form in which the original image should be presented. The gamma correction, or contrast selection, is part of the photographic repertoire used to adjust the reproduced image.
Analogously, digital cameras record light using electronic sensors that usually respond linearly. In the process of rendering linear raw data to conventional RGB data (e.g. for storage into JPEG image format), color space transformations and rendering transformations will be performed. In particular, almost all standard RGB color spaces and file formats use a non-linear encoding (a gamma compression) of the intended intensities of the primary colors of the photographic reproduction. In addition, the intended reproduction is almost always nonlinearly related to the measured scene intensities, via a tone reproduction nonlinearity.
Generalized gamma.
The concept of gamma can be applied to any nonlinear relationship. For the power-law relationship formula_6, the curve on a log–log plot is a straight line, with slope everywhere equal to gamma (slope is represented here by the derivative operator):
formula_7
That is, gamma can be visualized as the slope of the input–output curve when plotted on logarithmic axes. For a power-law curve, this slope is constant, but the idea can be extended to any type of curve, in which case gamma (strictly speaking, "point gamma") is defined as the slope of the curve in any particular region.
Film photography.
When a photographic film is exposed to light, the result of the exposure can be represented on a graph showing log of exposure on the horizontal axis, and density, or negative log of transmittance, on the vertical axis. For a given film formulation and processing method, this curve is its characteristic or Hurter–Driffield curve. Since both axes use logarithmic units, the slope of the linear section of the curve is called the gamma of the film. Negative film typically has a gamma less than 1; positive film (slide film, reversal film) typically has a gamma with absolute value greater than 1.
Microsoft Windows, Mac, sRGB and TV/video standard gammas.
Analog TV.
Output to CRT-based television receivers and monitors does not usually require further gamma correction. The standard video signals that are transmitted or stored in image files incorporate gamma compression matching the gamma expansion of the CRT (although it is not the exact inverse).
For television signals, gamma values are fixed and defined by the analog video standards. CCIR System M and N, associated with NTSC color, use gamma 2.2; systems B/G, H, I, D/K, K1, L and M associated with PAL or SECAM color use gamma 2.8.
Computer displays.
In most computer display systems, images are encoded with a gamma of about 0.45 and decoded with the reciprocal gamma of 2.2. A notable exception, until the release of Mac OS X 10.6 (Snow Leopard) in September 2009, were Macintosh computers, which encoded with a gamma of 0.55 and decoded with a gamma of 1.8. In any case, binary data in still image files (such as JPEG) are explicitly encoded (that is, they carry gamma-encoded values, not linear intensities), as are motion picture files (such as MPEG). The system can optionally further manage both cases, through color management, if a better match to the output device gamma is required.
The sRGB color space standard used with most cameras, PCs, and printers does not use a simple power-law nonlinearity as above, but has a decoding gamma value near 2.2 over much of its range, as shown in the plot to the right/above. Below a compressed value of 0.04045 or a linear intensity of 0.00313, the curve is linear (encoded value proportional to intensity), so "γ" = 1. The dashed black curve behind the red curve is a standard "γ" = 2.2 power-law curve, for comparison.
Gamma correction in computers is used, for example, to display a gamma = 1.8 Apple picture correctly on a gamma = 2.2 PC monitor by changing the image gamma. Another usage is equalizing of the individual color-channel gammas to correct for monitor discrepancies.
Gamma meta information.
Some picture formats allow an image's intended gamma (of transformations between encoded image samples and light output) to be stored as metadata, facilitating automatic gamma correction. The PNG specification includes the gAMA chunk for this purpose and with formats such as JPEG and TIFF the Exif Gamma tag can be used. Some formats can specify the ICC profile which includes a transfer function.
These features have historically caused problems, especially on the web. For HTML and CSS colors and JPG or GIF images without attached color profile metadata, popular browsers passed numerical color values to the display without color management, resulting in substantially different appearance between devices; however those same browsers sent images with gamma explicitly set in metadata through color management, and also applied a default gamma to PNG images with metadata omitted. This made it impossible for PNG images to simultaneously match HTML or untagged JPG colors on every device. This situation has since improved, as most major browsers now support the gamma setting (or lack of it).
Power law for video display.
A "gamma characteristic" is a power-law relationship that approximates the relationship between the encoded luma in a television system and the actual desired image luminance.
With this nonlinear relationship, equal steps in encoded luminance correspond roughly to subjectively equal steps in brightness. Ebner and Fairchild used an exponent of 0.43 to convert linear intensity into lightness (luma) for neutrals; the reciprocal, approximately 2.33 (quite close to the 2.2 figure cited for a typical display subsystem), was found to provide approximately optimal perceptual encoding of grays.
The following illustration shows the difference between a scale with linearly-increasing encoded luminance signal (linear gamma-compressed luma input) and a scale with linearly-increasing intensity scale (linear luminance output).
On most displays (those with gamma of about 2.2), one can observe that the linear-intensity scale has a large jump in perceived brightness between the intensity values 0.0 and 0.1, while the steps at the higher end of the scale are hardly perceptible. The gamma-encoded scale, which has a nonlinearly-increasing intensity, will show much more even steps in perceived brightness.
A cathode ray tube (CRT), for example, converts a video signal to light in a nonlinear way, because the electron gun's intensity (brightness) as a function of applied video voltage is nonlinear. The light intensity "I" is related to the source voltage "V"s according to
formula_8
where "γ" is the Greek letter gamma. For a CRT, the gamma that relates brightness to voltage is usually in the range 2.35 to 2.55; video look-up tables in computers usually adjust the system gamma to the range 1.8 to 2.2, which is in the region that makes a uniform encoding difference give approximately uniform perceptual brightness difference, as illustrated in the diagram at the top of this section.
For simplicity, consider the example of a monochrome CRT. In this case, when a video signal of 0.5 (representing a mid-gray) is fed to the display, the intensity or brightness is about 0.22 (resulting in a mid-gray, about 22% the intensity of white). Pure black (0.0) and pure white (1.0) are the only shades that are unaffected by gamma.
To compensate for this effect, the inverse transfer function (gamma correction) is sometimes applied to the video signal so that the end-to-end response is linear. In other words, the transmitted signal is deliberately distorted so that, after it has been distorted again by the display device, the viewer sees the correct brightness. The inverse of the function above is
formula_9
where "V"c is the corrected voltage, and "V"s is the source voltage, for example, from an image sensor that converts photocharge linearly to a voltage. In our CRT example 1/"γ" is 1/2.2 ≈ 0.45.
A color CRT receives three video signals (red, green, and blue) and in general each color has its own value of gamma, denoted "γ""R", "γ""G" or "γ""B". However, in simple display systems, a single value of "γ" is used for all three colors.
Other display devices have different values of gamma: for example, a Game Boy Advance display has a gamma between 3 and 4 depending on lighting conditions. In LCDs such as those on laptop computers, the relation between the signal voltage "V"s and the intensity "I" is very nonlinear and cannot be described with gamma value. However, such displays apply a correction onto the signal voltage in order to approximately get a standard "γ" = 2.5 behavior. In NTSC television recording, "γ" = 2.2.
The power-law function, or its inverse, has a slope of infinity at zero. This leads to problems in converting from and to a gamma colorspace. For this reason most formally defined colorspaces such as sRGB will define a straight-line segment near zero and add raising "x" + "K" (where "K" is a constant) to a power so the curve has continuous slope. This straight line does not represent what the CRT does, but does make the rest of the curve more closely match the effect of ambient light on the CRT. In such expressions the exponent is "not" the gamma; for instance, the sRGB function uses a power of 2.4 in it, but more closely resembles a power-law function with an exponent of 2.2, without a linear portion.
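For concreteness, the standard sRGB transfer functions with this linear segment look as follows (a sketch operating on per-channel values in [0, 1]):
```python
def srgb_to_linear(v):
    """Decode an sRGB-encoded value to linear intensity."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode a linear intensity with the sRGB transfer curve."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

assert abs(srgb_to_linear(linear_to_srgb(0.5)) - 0.5) < 1e-9
```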
Methods to perform display gamma correction in computing.
Up to four elements can be manipulated in order to achieve gamma encoding to correct the image to be shown on a typical 2.2- or 1.8-gamma computer display:
In a correctly calibrated system, each component will have a specified gamma for its input and/or output encodings. Stages may change the gamma to correct for different requirements, and finally the output device will do gamma decoding or correction as needed, to get to a linear intensity domain. All the encoding and correction methods can be arbitrarily superimposed, without mutual knowledge of this fact among the different elements; if done incorrectly, these conversions can lead to highly distorted results, but if done correctly as dictated by standards and conventions will lead to a properly functioning system.
In a typical system, for example from camera through JPEG file to display, the role of gamma correction will involve several cooperating parts. The camera encodes its rendered image into the JPEG file using one of the standard gamma values such as 2.2, for storage and transmission. The display computer may use a color management engine to convert to a different color space (such as older Macintosh's "γ" = 1.8 color space) before putting pixel values into its video memory. The monitor may do its own gamma correction to match the CRT gamma to that used by the video system. Coordinating the components via standard interfaces with default standard gamma values makes it possible to get such system properly configured.
Simple monitor tests.
This procedure is useful for making a monitor display images approximately correctly, on systems in which profiles are not used (for example, the Firefox browser prior to version 3.0 and many others) or in systems that assume untagged source images are in the sRGB colorspace.
In the test pattern, the intensity of each solid color bar is intended to be the average of the intensities in the surrounding striped dither; therefore, ideally, the solid areas and the dithers should appear equally bright in a system properly adjusted to the indicated gamma.
Normally a graphics card has contrast and brightness control and a transmissive LCD monitor has contrast, brightness, and backlight control. Graphics card and monitor contrast and brightness have an influence on effective gamma, and should not be changed after gamma correction is completed.
The top two bars of the test image help to set correct contrast and brightness values. There are eight three-digit numbers in each bar. A good monitor with proper calibration shows the six numbers on the right in both bars, while a cheap monitor shows only four numbers.
Given a desired display-system gamma, if the observer sees the same brightness in the checkered part and in the homogeneous part of every colored area, then the gamma correction is approximately correct. In many cases the gamma correction values for the primary colors are slightly different.
Setting the color temperature or white point is the next step in monitor adjustment.
Before gamma correction the desired gamma and color temperature should be set using the monitor controls. Using the controls for gamma, contrast and brightness, the gamma correction on an LCD can only be done for one specific vertical viewing angle, which implies one specific horizontal line on the monitor, at one specific brightness and contrast level. An ICC profile allows one to adjust the monitor for several brightness levels. The quality (and price) of the monitor determines how much deviation from this operating point still gives a satisfactory gamma correction. Twisted nematic (TN) displays with 6-bit color depth per primary color have the lowest quality. In-plane switching (IPS) displays with typically 8-bit color depth are better. Good monitors have 10-bit color depth, have hardware color management and allow hardware calibration with a tristimulus colorimeter. Often a 6-bit plus FRC panel is sold as 8-bit, and an 8-bit plus FRC panel is sold as 10-bit; FRC is not a true replacement for more bits. The 24-bit and 32-bit color depth formats have 8 bits per primary color.
With Microsoft Windows 7 and above, the user can set the gamma correction through the display color calibration tool dccw.exe or other programs. These programs create an ICC profile file and load it as the default, which makes color management easy. Increase the gamma slider in the dccw program until the last colored area, often the green one, has the same brightness in the checkered and homogeneous areas. Use the color balance or the individual color gamma correction sliders in the gamma correction programs to adjust the other two colors. Some old graphics card drivers do not load the color look-up table correctly after waking up from standby or hibernate mode and show the wrong gamma; in this case, update the graphics card driver.
On some operating systems running the X Window System, one can set the gamma correction factor (applied to the existing gamma value) by issuing the command codice_0 to set the gamma correction factor to 0.9, and codice_1 to query the current value of that factor (the default is 1.0). In macOS systems, the gamma and other related screen calibrations are made through the System Preferences.
Scaling and blending.
"Generally," operations on pixel values should be performed in "linear light" (gamma 1). Eric Brasseur discusses the issue at length and provides test images. They serve to point out a widespread problem: Many programs perform scaling in a color space with gamma, instead of a physically correct linear space. The test images are constructed so as to have a drastically different appearance when downsampled incorrectly. Jonas Berlin has created a "your scaling software sucks/rules" image based on this principle.
In addition to scaling, the problem also applies to other forms of downsampling (scaling down), such as chroma subsampling in JPEG's gamma-enabled Y′CbCr. WebP solves this problem by calculating the chroma averages in linear space then converting back to a gamma-enabled space; an iterative solution is used for larger images. The same "sharp YUV" (formerly "smart YUV") code is used in sjpeg and optionally in AVIF. Kornelski provides a simpler approximation by luma-based weighted average. Alpha compositing, color gradients, and 3D rendering are also affected by this issue.
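As a concrete illustration of the difference (a minimal sketch, not taken from any of the implementations mentioned above, and using a plain gamma of 2.2 as an approximation of the encoding), downsampling a pair of pixels can be done either way as follows:

def downsample_pair(a, b, in_linear_light=True):
    # a, b: two adjacent gamma-encoded samples in [0, 1]; gamma 2.2 approximates the encoding.
    if in_linear_light:
        lin = (a ** 2.2 + b ** 2.2) / 2     # decode to linear light and average intensities
        return lin ** (1 / 2.2)             # re-encode the result
    return (a + b) / 2                      # naive average of the encoded values

# A black/white pixel pair: the physically correct result encodes to about 0.73,
# while the naive average gives 0.5, a visibly darker value.
print(downsample_pair(0.0, 1.0), downsample_pair(0.0, 1.0, in_linear_light=False))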
Paradoxically, when upsampling (scaling up) an image, the result processed in a "wrong" (non-physical) gamma color space is often more aesthetically pleasing. This is because resampling filters with negative lobes like Mitchell–Netravali and Lanczos create ringing artifacts linearly even though human perception is non-linear and better approximated by gamma. (Emulating "stepping back," which motivates downsampling in linear light (gamma=1), does not apply when upsampling.) A related method of reducing the visibility of ringing artifacts consists of using a sigmoidal light transfer function as pioneered by ImageMagick and GIMP's LoHalo filter and adapted to video upsampling by madVR, AviSynth and Mpv.
Terminology.
The term intensity refers strictly to the amount of light that is emitted per unit of time and per unit of surface, in units of lux. Note, however, that in many fields of science this quantity is called luminous exitance, as opposed to luminous intensity, which is a different quantity. These distinctions, however, are largely irrelevant to gamma compression, which is applicable to any sort of normalized linear intensity-like scale.
"Luminance" can mean several things even within the context of video and imaging:
One contrasts relative luminance in the sense of color (no gamma compression) with luma in the sense of video (with gamma compression), and denotes relative luminance by "Y" and luma by "Y"′, the prime symbol (′) denoting gamma compression.
Note that luma is not calculated directly from luminance; it is the (somewhat arbitrary) weighted sum of gamma-compressed RGB components.
Likewise, "brightness" is sometimes applied to various measures, including light levels, though it more properly applies to a subjective visual attribute.
Gamma correction is a type of power law function whose exponent is the Greek letter gamma ("γ"). It should not be confused with the mathematical Gamma function. The lower case gamma, "γ", is a parameter of the former; the upper case letter, Γ, is the name of (and symbol used for) the latter (as in Γ("x")). To use the word "function" in conjunction with gamma correction, one may avoid confusion by saying "generalized power law function".
Without context, a value labeled gamma might be either the encoding or the decoding value. Caution must be taken to interpret the value correctly: either as the value to be applied to compensate, or as the value to be compensated for by applying its inverse. In common parlance, the decoding value (such as 2.2) is often quoted as if it were the encoding value, instead of its inverse (1/2.2 in this case), which is the value that must actually be applied to encode gamma.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_\\text{out} = A V_\\text{in}^\\gamma,"
},
{
"math_id": 1,
"text": "V_\\text{in}"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "V_\\text{out}"
},
{
"math_id": 4,
"text": "\\gamma < 1"
},
{
"math_id": 5,
"text": "\\gamma > 1"
},
{
"math_id": 6,
"text": "V_\\text{out} = V_\\text{in}^\\gamma "
},
{
"math_id": 7,
"text": "\\gamma = \\frac{\\mathrm{d} \\log(V_\\text{out})}{\\mathrm{d} \\log(V_\\text{in})}."
},
{
"math_id": 8,
"text": "I \\propto V_\\text{s}^\\gamma,"
},
{
"math_id": 9,
"text": "V_\\text{c} \\propto V_\\text{s}^{1/\\gamma},"
}
]
| https://en.wikipedia.org/wiki?curid=68466 |
68468604 | Platinum–samarium | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Platinum–samarium is a binary inorganic compound of platinum and samarium with the chemical formula PtSm. This intermetallic compound forms crystals.
Synthesis.
Fusion of stoichiometric amounts of pure substances:
formula_0
Physical properties.
Platinum–samarium forms crystals of the orthorhombic crystal system, space group "Pnma", with cell parameters a = 0.7148 nm, b = 0.4501 nm, c = 0.5638 nm, Z = 4, and a structure similar to that of iron boride (FeB).
The compound melts congruently at a temperature of ≈1810 °C.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Pt + Sm \\ \\xrightarrow{1810^oC}\\ SmPt }"
}
]
| https://en.wikipedia.org/wiki?curid=68468604 |
684698 | Rabin–Karp algorithm | String searching algorithm
In computer science, the Rabin–Karp algorithm or Karp–Rabin algorithm is a string-searching algorithm created by Richard M. Karp and Michael O. Rabin (1987) that uses hashing to find an exact match of a pattern string in a text. It uses a rolling hash to quickly filter out positions of the text that cannot match the pattern, and then checks for a match at the remaining positions. Generalizations of the same idea can be used to find more than one match of a single pattern, or to find matches for more than one pattern.
To find a single match of a single pattern, the expected time of the algorithm is linear in the combined length of the pattern and text,
although its worst-case time complexity is the product of the two lengths. To find multiple matches, the expected time is linear in the input lengths, plus the combined length of all the matches, which could be greater than linear. In contrast, the Aho–Corasick algorithm can find all matches of multiple patterns in worst-case time and space linear in the input length and the number of matches (instead of the total length of the matches).
A practical application of the algorithm is detecting plagiarism. Given source material, the algorithm can rapidly search through a paper for instances of sentences from the source material, ignoring details such as case and punctuation. Because of the abundance of the sought strings, single-string searching algorithms are impractical.
Overview.
A naive string matching algorithm compares the given pattern against all positions in the given text. Each comparison takes time proportional to the length of the pattern,
and the number of positions is proportional to the length of the text. Therefore, the worst-case time for such a method is proportional to the product of the two lengths.
In many practical cases, this time can be significantly reduced by cutting short the comparison at each position as soon as a mismatch is found, but this idea cannot guarantee any speedup.
Several string-matching algorithms, including the Knuth–Morris–Pratt algorithm and the Boyer–Moore string-search algorithm, reduce the worst-case time for string matching by extracting more information from each mismatch, allowing them to skip over positions of the text that are guaranteed not to match the pattern. The Rabin–Karp algorithm instead achieves its speedup by using a hash function to quickly perform an approximate check for each position, and then only performing an exact comparison at the positions that pass this approximate check.
A hash function is a function which converts every string into a numeric value, called its "hash value"; for example, we might have codice_0. If two strings are equal, their hash values are also equal. For a well-designed hash function, the converse is true in an approximate sense: strings that are unequal are very unlikely to have equal hash values. The Rabin–Karp algorithm proceeds by computing, at each position of the text, the hash value of a string starting at that position with the same length as the pattern. If this hash value equals the hash value of the pattern, it performs a full comparison at that position.
In order for this to work well, the hash function should be selected randomly from a family of hash functions that are unlikely to produce many false positives, that is, positions of the text which have the same hash value as the pattern but do not actually match the pattern. These positions contribute to the running time of the algorithm unnecessarily, without producing a match. Additionally, the hash function used should be a rolling hash, a hash function whose value can be quickly updated from each position of the text to the next. Recomputing the hash function from scratch at each position would be too slow.
The algorithm.
The algorithm is as shown:
1   function RabinKarp(string s[1..n], string pattern[1..m])
2       hpattern := hash(pattern[1..m]);
3       for i from 1 to n-m+1
4           hs := hash(s[i..i+m-1])
5           if hs = hpattern
6               if s[i..i+m-1] = pattern[1..m]
7                   return i
8       return not found
Lines 2, 4, and 6 each require O("m") time. However, line 2 is only executed once, and line 6 is only executed if the hash values match, which is unlikely to happen more than a few times. Line 5 is executed O("n") times, but each comparison only requires constant time, so its impact is O("n"). The issue is line 4.
Naively computing the hash value for the substring codice_1 requires O("m") time because each character is examined. Since the hash computation is done on each loop iteration, the algorithm with a naive hash computation requires O(mn) time, the same complexity as a straightforward string matching algorithm. For speed, the hash must be computed in constant time. The trick is that the variable codice_2 already contains the previous hash value of codice_3. If that value can be used to compute the next hash value in constant time, then computing successive hash values will be fast.
The trick can be exploited using a rolling hash. A rolling hash is a hash function specially designed to enable this operation. A trivial (but not very good) rolling hash function just adds the values of each character in the substring. This rolling hash formula can compute the next hash value from the previous value in constant time:
This simple function works, but will result in the full comparison on line 6 being executed more often than with the more sophisticated rolling hash functions discussed in the next section.
Good performance requires a good hashing function for the encountered data. If the hashing is poor (such as producing the same hash value for every input), then line 6 would be executed O("n") times (i.e. on every iteration of the loop). Because character-by-character comparison of strings with length "m" takes O(m) time, the whole algorithm then takes a worst-case O("mn") time.
Hash function used.
The key to the Rabin–Karp algorithm's performance is the efficient computation of hash values of the successive substrings of the text. The Rabin fingerprint is a popular and effective rolling hash function. The hash function described here is not a Rabin fingerprint, but it works equally well. It treats every substring as a number in some base, the base being usually the size of the character set.
For example, if the substring is "hi", the base is 256, and prime modulus is 101, then the hash value would be
[(104 × 256 ) % 101 + 105] % 101 = 65
(ASCII of 'h' is 104 and of 'i' is 105)
'%' is 'mod' or modulo, or remainder after integer division, operator
Technically, this hash is only similar to interpreting the substring as a number in a non-decimal positional system, since, for example, the "base" may be smaller than one of the "digits". See hash function for a much more detailed discussion. The essential benefit achieved by using a rolling hash such as the Rabin fingerprint is that it is possible to compute the hash value of the next substring from the previous one by doing only a constant number of operations, independent of the substrings' lengths.
For example, if we have text "abracadabra" and we are searching for a pattern of length 3, the hash of the first substring, "abr", using 256 as the base, and 101 as the prime modulus is:
// ASCII a = 97, b = 98, r = 114.
hash("abr") = [ ( [ ( [ (97 × 256) % 101 + 98 ] % 101 ) × 256 ] % 101 ) + 114 ] % 101 = 4
We can then compute the hash of the next substring, "bra", from the hash of "abr" by subtracting the number added for the first 'a' of "abr", i.e. 97 × 256^2, multiplying by the base and adding the value for the last 'a' of "bra", i.e. 97 × 256^0. Like so:
// "old hash (-ve avoider)* old 'a' left base offset base shift new 'a"' prime modulus
hash("bra") = [ ( 4 + 101 - 97 * [(256%101)*256] % 101 ) * 256 + 97 ] % 101 = 30
* The added p is an "underflow avoider", necessary if unsigned integers are used for the calculation. Because every hash satisfies formula_0 for the prime modulus p, adding p to the old hash before subtracting the value corresponding to the old 'a' (mod p) guarantees that the intermediate result never becomes negative.
the last '* 256' is the shift of the subtracted hash to the left
Although ((256%101)*256)%101 is the same as 256^2 % 101, the pattern-length base offset is pre-calculated in a loop, reducing the result modulo 101 at each iteration, in order to avoid overflowing integer maximums when the pattern string is longer (e.g. 'Rabin-Karp' is 10 characters, and 256^9 would be the offset without modular reduction).
If we are matching the search string "bra", using similar calculation of hash("abr"),
hash'("bra") = [ ( [ ( [ ( 98 × 256) %101 + 114] % 101 ) × 256 ] % 101) + 97 ] % 101 = 30
If the substrings in question are long, this algorithm achieves great savings compared with many other hashing schemes.
Theoretically, other algorithms could provide convenient recomputation, e.g. multiplying together the ASCII values of all characters so that shifting the substring would only entail dividing the previous hash by the first character's value and multiplying by the new last character's value. The limitation, however, is the limited size of the integer data type and the necessity of using modular arithmetic to scale down the hash results (see the hash function article). Meanwhile, naive hash functions do not produce large numbers quickly, but, just like adding ASCII values, they are likely to cause many hash collisions and hence slow down the algorithm. Hence the described hash function is typically the preferred one in the Rabin–Karp algorithm.
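For illustration, a straightforward Python sketch of the search with this polynomial rolling hash (using base 256 and modulus 101 to match the worked example above; not code from the original paper) could read:

def rabin_karp(s, pattern, base=256, mod=101):
    # Returns the first 0-indexed position of the pattern in s, or -1 if absent.
    n, m = len(s), len(pattern)
    if m == 0 or m > n:
        return -1
    high = pow(base, m - 1, mod)                    # base^(m-1) mod mod, used to drop the leading character
    hp = hs = 0
    for j in range(m):
        hp = (hp * base + ord(pattern[j])) % mod    # hash of the pattern
        hs = (hs * base + ord(s[j])) % mod          # hash of the first window s[0..m-1]
    for i in range(n - m + 1):
        if hs == hp and s[i:i + m] == pattern:      # cheap filter, then exact comparison
            return i
        if i + m < n:
            # Roll the hash: remove s[i], shift by the base, append s[i+m].
            # Python's % always returns a non-negative result, so no underflow avoider is needed.
            hs = ((hs - ord(s[i]) * high) * base + ord(s[i + m])) % mod
    return -1

print(rabin_karp("abracadabra", "bra"))             # prints 1

On this input the successive window hashes include 4 for "abr" and 30 for "bra", matching the hand computations above.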
Multiple pattern search.
For single-pattern searching, the Rabin–Karp algorithm is inferior to the Knuth–Morris–Pratt algorithm, the Boyer–Moore string-search algorithm, and other faster single-pattern string-searching algorithms because of its slow worst-case behavior. However, it is a useful algorithm for multiple-pattern search.
To find any of a large number, say "k", fixed length patterns in a text, a simple variant of the Rabin–Karp algorithm uses a Bloom filter or a set data structure to check whether the hash of a given string belongs to a set of hash values of patterns we are looking for:
function RabinKarpSet(string s[1..n], set of string subs, m):
set hsubs := emptySet
foreach sub in subs
insert hash(sub[1..m]) into hsubs
hs := hash(s[1..m])
for i from 1 to n-m+1
if hs ∈ hsubs and s[i..i+m-1] ∈ subs
return i
hs := hash(s[i+1..i+m])
return not found
We assume all the substrings have a fixed length "m".
A naïve way to search for "k" patterns is to repeat a single-pattern search taking O("n+m") time, totaling O("(n+m)k") time. In contrast, the above algorithm can find all "k" patterns in O("n"+"km") expected time, assuming that a hash table check works in O(1) expected time.
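A direct Python transcription of this set-based variant (an illustrative sketch, again with base 256 and modulus 101) might look as follows; it returns the first position at which any of the patterns occurs:

def rk_hash(t, base=256, mod=101):
    # Polynomial hash of the whole string t.
    h = 0
    for ch in t:
        h = (h * base + ord(ch)) % mod
    return h

def rabin_karp_set(s, patterns, base=256, mod=101):
    patterns = set(patterns)
    m = len(next(iter(patterns)))                   # all patterns share the same length m
    n = len(s)
    if m > n:
        return -1
    hsubs = {rk_hash(p, base, mod) for p in patterns}
    high = pow(base, m - 1, mod)
    hs = rk_hash(s[:m], base, mod)
    for i in range(n - m + 1):
        if hs in hsubs and s[i:i + m] in patterns:  # hash-set filter, then exact membership test
            return i
        if i + m < n:
            hs = ((hs - ord(s[i]) * high) * base + ord(s[i + m])) % mod
    return -1

print(rabin_karp_set("abracadabra", {"cad", "dab"}))   # prints 4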
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h \\leq p"
}
]
| https://en.wikipedia.org/wiki?curid=684698 |
68471920 | Krzysztof Gawedzki | Polish mathematical physicist (1947–2022)
Krzysztof Gawędzki (2 July 1947 – 21 January 2022) was a Polish mathematical physicist, a graduate of the University of Warsaw and a professor at the École normale supérieure de Lyon (ENS de Lyon). He was primarily known for his research on quantum field theory and statistical physics. In 2022, he shared the Dannie Heineman Prize for Mathematical Physics with Antti Kupiainen.
Education and career.
Born in Poland, Gawędzki received in 1971 his doctorate from the University of Warsaw. His doctoral dissertation "Functional theory of geodesic fields" was supervised by (1923–2017).
In the 1980s Gawędzki did research at CNRS at the IHES near Paris. Since the 1990s, he was a professor at the École normale supérieure de Lyon (ENS de Lyon), and later an emeritus researcher there.
He was known for his research in the mathematics of quantum field theory (QFT), especially conformal field theory. In the 1980s he collaborated with Antti Kupiainen on the application of the renormalization group method in the rigorous mathematical treatment of various model systems of quantum field theory. Much of their research deals with conformal field theories, which serve as two-dimensional toy models of non-perturbative aspects of QFT (with applications to string theory and statistical mechanics). Gawędzki and collaborators studied the geometry of WZW models (also called WZNW models, Wess-Zumino-Novikov-Witten models), prototypes for rational conformal field theories.
With Kupiainen he succeeded in the 1980s in the rigorous construction of the massless lattice formula_0 model in four dimensions and the Gross-Neveu model in two space-time dimensions. At about the same time, this was achieved by Roland Sénéor, Jacques Magnen, Joel Feldman and Vincent Rivasseau. This was considered an outstanding achievement in constructive quantum field theory.
In 1986 Gawędzki identified the Kalb–Ramond field (B field), which generalizes the electromagnetic field from point particles to strings, as a degree-3 cocycle in the Deligne cohomology model.
In the 2000s he did research on turbulence, partly in collaboration with Kupiainen. In 1995 Gawędzki and Kupiainen demonstrated anomalous scaling behavior of scalar advection in random vector field models of homogeneous turbulence.
From January to June 2003 he was at the Institute for Advanced Study. In 1986 he was invited speaker with talk "Renormalization: from magic to mathematics" at the International Congress of Mathematicians in Berkeley. In 2007 at ENS de Lyon a conference on mathematical physics was held in honor of his 60th birthday. In 2017 at the University of Nice Sophia Antipolis a conference on mathematical physics was held in honor of his 70th birthday.
On 24 November 2021, the American Institute of Physics and the American Physical Society announced Krzysztof Gawędzki and Antti Kupiainen as the recipients of the 2022 Dannie Heineman Prize for Mathematical Physics. They were recognized for their "fundamental contributions to quantum field theory, statistical mechanics, and fluid dynamics using geometric, probabilistic, and renormalization group ideas."
He died in Lyon, France on 21 January 2022, at the age of 74.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi^4_4"
}
]
| https://en.wikipedia.org/wiki?curid=68471920 |
68483779 | SWIM Protocol | The Scalable Weakly Consistent Infection-style Process Group Membership (SWIM) Protocol is a group membership protocol based on "outsourced heartbeats" used in distributed systems, first introduced by Indranil Gupta in 2001. It is a hybrid algorithm which combines failure detection with group membership dissemination.
Protocol.
The protocol has two components, the "Failure Detector Component" and the "Dissemination Component".
The "Failure Detector Component" functions as follows:
The "Dissemination Component" functions as follows:
Properties.
The protocol provides the following guarantees:
Extensions.
The original SWIM paper lists the following extensions to make the protocol more robust: | [
{
"math_id": 0,
"text": "N_1"
},
{
"math_id": 1,
"text": "N_2"
},
{
"math_id": 2,
"text": "\\{N_3,...,N_{3+k}\\}"
},
{
"math_id": 3,
"text": "N_2\n"
},
{
"math_id": 4,
"text": "T'\\dot{}\\frac{1}{1-e^{-q_f}}"
},
{
"math_id": 5,
"text": "T'"
},
{
"math_id": 6,
"text": "q_f"
}
]
| https://en.wikipedia.org/wiki?curid=68483779 |
68488098 | Toroidal planet | Planet in the shape of a toroidal or doughnut shape
A toroidal planet is a hypothetical type of telluric exoplanet with a toroidal or doughnut shape. While there is no firm theoretical understanding of how toroidal planets could form naturally, the shape itself is potentially quasistable, and is analogous to the physical parameters of a speculatively constructible megastructure in self-suspension, such as a Dyson Ring, ringworld, Stanford torus or Bishop Ring.
Physical description.
At sufficiently large scales, rigid matter such as the typical silicate-ferrous composition of rocky planets behaves fluidly, and satisfies the condition for evaluating the mechanics of toroidal self-gravitating fluid bodies in context. A rotating mass in the form of a torus allows an effective balance between the gravitational attraction and the force due to centrifugal acceleration when the angular momentum is adequately large. Ring-shaped masses in equilibrium without a relatively massive central nucleus were analyzed in the past by Henri Poincaré (1885), Frank W. Dyson (1892), and Sophie Kowalewsky (1885), who established conditions under which a toroidal rotating mass is stable with respect to a displacement leading to another toroid. Dyson (1893) investigated other types of distortions and found that the rotating toroidal mass is secularly stable against "fluted" and "twisted" displacements but can become unstable against beaded displacements, in which the torus is thicker in some meridians but thinner in others. In the simple model of parallel sections, beaded instability commences when the aspect ratio of major to minor radius exceeds 3.
Wong (1974) found that toroidal fluid bodies are stable against axisymmetric perturbations for which the corresponding Maclaurin sequence is unstable, yet in the case of non-axisymmetric perturbation at any point on the sequence is unstable. Prior to this, Chandrasekhar (1965, 1967), and Bardeen (1971), had shown that a Maclaurin spheroid with an eccentricity formula_0 is unstable against displacements leading to toroidal shapes and that this Newtonian instability is excited by the effects of general relativity. Eriguchi and Sugimoto (1981) improved on this result, and Ansorg, Kleinwachter & Meinel (2003) achieved near-machine accuracy, which allowed them to study bifurcation sequences in detail and correct erroneous results.
While an integral expression for the gravitational potential of an idealized homogeneous circular torus composed of infinitely thin rings is available, more precise equations are required to describe the expected inhomogeneities in the mass distribution arising from the differentiated composition of a toroidal planet. The rotational energy of a toroidal planet in uniform rotation is formula_1 where formula_2 is the angular momentum and formula_3 the rigid-body moment of inertia about the central symmetry axis. Toroidal planets would experience a tidal force pulling matter in the inner part of the toroid toward the opposite rim, consequently flattening the object across the formula_4-axis. Tectonic plates drifting hubward would undergo significant contraction, resulting in mountainous convolutions inside the planet's inner region, whereby the elevation of such mountains would be amplified via isostasy due to the reduced gravitational effect in that region.
Formation.
Since the existence of toroidal planets is strictly hypothetical, no empirical basis for protoplanetary formation has been established. One homolog is a synestia, a loosely connected doughnut-shaped mass of vaporized rock, proposed by Simon J. Lock and Sarah T. Stewart-Mukhopadhyay to have been responsible for the isotopic similarity in composition, particularly the difference in volatiles, of the Earth-Moon system that occurred during the early-stage process of formation, according to the leading giant-impact hypothesis. The computer modelling incorporated a smoothed particle hydrodynamics code for a series of overlapping constant-density spheroids to obtain the result of a transitional region with a corotating inner region connected to a disk-like outer region.
Occurrence.
To date, no distinctly torus-shaped planet has ever been observed. Given how improbable their occurrence is, it is extremely unlikely that any will ever be observationally confirmed to exist even within our cosmological horizon, the corresponding search field being approximately formula_5 Hubble volumes, or formula_6 cubic light years.
In fiction.
The game My Singing Monsters takes place on a toroidal planet named simply The Monster World.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e \\geq 0.98523"
},
{
"math_id": 1,
"text": "E_r = L^2/2I,"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "I"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "140 \\cdot (c/H_0)^3"
},
{
"math_id": 6,
"text": "\\sim 4.211 \\times 10^{32}"
}
]
| https://en.wikipedia.org/wiki?curid=68488098 |
68488293 | Neptunium silicide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Neptunium silicide is a binary inorganic compound of neptunium and silicon with the chemical formula NpSi2. The compound forms crystals and does not dissolve in water.
Synthesis.
Heating neptunium trifluoride with powdered silicon in vacuum:
formula_0
Physical properties.
Neptunium silicide forms crystals of tetragonal crystal system, space group "I"41/"amd", cell parameters: a = 0.396 nm, c = 1.367 nm, Z = 4.
Neptunium disilicide does not dissolve in water.
Chemical properties.
Neptunium disilicide reacts with HCl:
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ 4 NpF_3 + 11Si \\ \\xrightarrow{1500^oC}\\ 4NpSi_2 + 3SiF_4 }"
},
{
"math_id": 1,
"text": "\\mathsf{ NpSi_2 + 8HCl \\ \\xrightarrow\\ NpCl_4 + 2SiH_4 }"
}
]
| https://en.wikipedia.org/wiki?curid=68488293 |
68488480 | Plutonium silicide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Plutonium silicide is a binary inorganic compound of plutonium and silicon with the chemical formula PuSi. The compound forms gray crystals.
Synthesis.
Reaction of plutonium dioxide and silicon carbide:
formula_0
Reaction of plutonium trifluoride with silicon:
formula_1
Physical properties.
Plutonium silicide forms gray crystals of orthorhombic crystal system, space group "Pnma", cell parameters: a = 0.7933 nm, b = 0.3847 nm, c = 0.5727 nm, Z = 4, TiSi type structure.
At a temperature of 72 K, plutonium silicide undergoes a ferromagnetic transition.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ PuO_2 + SiC \\ \\xrightarrow{T}\\ PuSi + CO_2\\uparrow }"
},
{
"math_id": 1,
"text": "\\mathsf{ 4 PuF_3 + 7 Si \\ \\xrightarrow{T}\\ 4 PuSi + 3 SiF_4 }"
}
]
| https://en.wikipedia.org/wiki?curid=68488480 |
68490860 | Import–export (logic) | Principle of classical logic
In propositional logic, import-export is a name given to the propositional form of Exportation:
formula_0.
This already holds in minimal logic, and thus also in classical logic, where the conditional operator "formula_1" is taken as material implication.
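Because the classical reading treats formula_1 as material implication, the equivalence can be verified by brute force over all truth-value assignments, as in the following short Python sketch (for illustration only):

from itertools import product

def implies(a, b):
    return (not a) or b                      # material implication

for P, Q, R in product([False, True], repeat=3):
    left = implies(P, implies(Q, R))         # P -> (Q -> R)
    right = implies(P and Q, R)              # (P and Q) -> R
    assert left == right                     # the two forms agree on all 8 valuations

print("import-export holds under material implication")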
In the Curry-Howard correspondence for intuitionistic logics, it can be realized through currying and uncurrying.
Discussion.
Import-export expresses a deductive argument form. In natural language terms, the formula states that the following English sentences are logically equivalent:
There are logics in which it does not hold, and its status as a true principle of logic is a matter of debate. Controversy over the principle arises from the fact that any conditional operator that satisfies it will collapse to material implication when combined with certain other principles. This conclusion would be problematic given the paradoxes of material implication, which are commonly taken to show that natural language conditionals are not material implication.
This problematic conclusion can be avoided within the framework of dynamic semantics, whose expressive power allows one to define a non-material conditional operator which nonetheless satisfies import-export along with the other principles. However, other approaches reject import-export as a general principle, motivated by cases such as the following, uttered in a context where it is most likely that the match will be lit by throwing it into a campfire, but where it is possible that it could be lit by striking it. In this context, the first sentence is intuitively true but the second is intuitively false.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (P \\rightarrow ( Q \\rightarrow R )) \\leftrightarrow ((P \\land Q) \\rightarrow R)"
},
{
"math_id": 1,
"text": "\\rightarrow"
}
]
| https://en.wikipedia.org/wiki?curid=68490860 |
68493758 | Ratio of uniforms | The ratio of uniforms is a method initially proposed by Kinderman and Monahan in 1977 for pseudo-random number sampling, that is, for drawing random samples from a statistical distribution. Like rejection sampling and inverse transform sampling, it is an exact simulation method. The basic idea of the method is to use a change of variables to create a bounded set, which can then be sampled uniformly to generate random variables following the original distribution. One feature of this method is that the distribution to sample is only required to be known up to an unknown multiplicative factor, a common situation in computational statistics and statistical physics.
Motivation.
A convenient technique to sample a statistical distribution is rejection sampling. When the probability density function of the distribution is bounded and has finite support, one can define a bounding box around it (a uniform proposal distribution), draw uniform samples in the box and return only the x coordinates of the points that fall below the function (see graph). As a direct consequence of the fundamental theorem of simulation, the returned samples are distributed according to the original distribution.
When the support of the distribution is infinite, it is impossible to draw a rectangular bounding box containing the graph of the function. One can still use rejection sampling, but with a non-uniform proposal distribution. It can be delicate to choose an appropriate proposal distribution, and one also has to know how to efficiently sample this proposal distribution.
The method of the ratio of uniforms offers a solution to this problem, by essentially using as proposal distribution the distribution created by the ratio of two uniform random variables.
Statement.
The statement and the proof are adapted from the presentation by Gobet.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_0 be a multidimensional random variable with probability density function formula_1 on formula_2. The function formula_3 is only required to be known up to a constant, so we can assume that we only know formula_4 where formula_5, with formula_6 a constant unknown or difficult to compute. Let formula_7, a parameter that can be adjusted as we choose to improve the properties of the method. We can define the set formula_8:formula_9The Lebesgue measure of the set formula_8 is finite and equal to formula_10.
Furthermore, let formula_11 be a random variable uniformly distributed on the set formula_8. Then, formula_12 is a random variable on formula_2 distributed like formula_0.
<templatestyles src="Math_proof/styles.css" />Proof
We will first assume that the first statement is correct, i.e. formula_13.
Let formula_14 be a measurable function on formula_2. Let's consider the expectation of formula_15 on the set formula_8:
formula_16
With the change of variables formula_17, we have
formula_18
where we can see that formula_19 has indeed the density formula_3.
Coming back to the first statement, a similar argument shows that formula_13.
Complements.
Rejection sampling in formula_8.
The above statement does not specify how one should perform the uniform sampling in formula_8. However, the interest of this method is that under mild conditions on formula_4 (namely that formula_20 and formula_21 for all formula_22 are bounded), formula_8 is bounded. One can define the rectangular bounding box formula_23 such thatformula_24This allows to sample uniformly the set formula_8 by rejection sampling inside formula_23. The parameter formula_25 can be adjusted to change the shape of formula_8 and maximize the acceptance ratio of this sampling.
Parametric description of the boundary of formula_8.
The definition of formula_8 is already convenient for the rejection sampling step. For illustration purposes, it can be interesting to draw the set, in which case it can be useful to know the parametric description of its boundary:formula_26or for the common case where formula_0 is a 1-dimensional variable, formula_27.
Generalized ratio of uniforms.
The ratio of uniforms, parameterized above only by formula_25, can be described by a more general class of transformations in terms of a transformation "g". In the 1-dimensional case, if formula_28 is a strictly increasing and differentiable function such that formula_29, then we can define formula_30 such that
formula_31
If formula_32 is a random variable uniformly distributed in formula_30, then formula_33 is distributed with the density formula_3.
Examples.
The exponential distribution.
Assume that we want to sample the exponential distribution, formula_36, with the ratio of uniforms method. Here we will take formula_37.
We can start constructing the set formula_34:
formula_38
The condition formula_39 is equivalent, after computation, to formula_40, which allows us to plot the shape of the set (see graph).
This inequality also allows us to determine the rectangular bounding box formula_35 where formula_34 is included. Indeed, with formula_41, we have formula_42 and formula_43, from where formula_44.
From here, we can draw pairs of uniform random variables formula_45 and formula_46 until formula_47, and when that happens, we return formula_48, which is exponentially distributed.
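The whole procedure fits in a few lines of Python (an illustrative sketch, not library code); it draws the pair (u, v) uniformly in the bounding box formula_35 and accepts it exactly when the defining inequality of formula_34 holds:

import math, random

def sample_exponential(lam):
    # Ratio-of-uniforms sampler for the exponential distribution with rate lam, taking r = 1 and f = p.
    u_max = math.sqrt(lam)                          # sup of f(x)^(1/2)
    v_max = 2.0 / (math.e * math.sqrt(lam))         # max of g(u), attained at u = sqrt(lam)/e
    while True:
        u = random.uniform(0.0, u_max)
        v = random.uniform(0.0, v_max)
        if u > 0.0 and u * u <= lam * math.exp(-lam * v / u):
            return v / u                            # exponentially distributed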
A mixture of normal distributions.
Consider the mixture of two normal distributions formula_50. To apply the method of the ratio of uniforms, with a given formula_25, one should first determine the boundaries of the rectangular bounding box formula_23 enclosing the set formula_8. This can be done numerically, by computing the minimum and maximum of formula_51 and formula_52 on a grid of values of formula_49. Then, one can draw uniform samples formula_53, only keep those that fall inside the set formula_8 and return them as formula_54.
It is possible to optimize the acceptance ratio by adjusting the value of formula_25, as seen on the graphs.
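The numerical recipe above can be sketched in Python as follows (illustrative only; the grid bounds, the grid resolution and the default formula_25 are assumptions of the sketch):

import math, random

def mixture_pdf(x):
    # Density of 0.6*N(-1, 2) + 0.4*N(3, 1).
    def normal(t, mu, sigma):
        return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return 0.6 * normal(x, -1.0, 2.0) + 0.4 * normal(x, 3.0, 1.0)

def sample_mixture(r=1.0, grid_lo=-15.0, grid_hi=15.0, grid_n=10001):
    # Determine the bounding box numerically on a grid of x values.
    xs = [grid_lo + (grid_hi - grid_lo) * i / (grid_n - 1) for i in range(grid_n)]
    u_max = max(mixture_pdf(x) ** (1.0 / (1.0 + r)) for x in xs)
    vs = [x * mixture_pdf(x) ** (r / (1.0 + r)) for x in xs]
    v_min, v_max = min(vs), max(vs)
    # Rejection sampling inside the box; accepted points are returned as v / u^r.
    while True:
        u = random.uniform(0.0, u_max)
        v = random.uniform(v_min, v_max)
        if u > 0.0 and u <= mixture_pdf(v / u ** r) ** (1.0 / (1.0 + r)):
            return v / u ** r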
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "p(x_1, x_2, \\ldots, x_d)"
},
{
"math_id": 2,
"text": "\\mathbb{R}^d"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "p(x_1, x_2, \\ldots, x_d) = c f(x_1, x_2, \\ldots, x_d)"
},
{
"math_id": 6,
"text": "c"
},
{
"math_id": 7,
"text": "r>0"
},
{
"math_id": 8,
"text": "A_{f,r}"
},
{
"math_id": 9,
"text": "A_{f,r} = \\left\\{(u, v_1, v_2, \\ldots, v_d) \\in \\mathbb{R}^{d+1}: 0 \\leq u \\leq f\\left(\\frac{v_1}{u^r},\\frac{v_2}{u^r}, \\ldots,\\frac{v_d}{u^r} \\right)^{\\frac{1}{1+rd}}\\right\\}"
},
{
"math_id": 10,
"text": "\\frac{1}{c\\,(1+rd)}"
},
{
"math_id": 11,
"text": "(U, V_1, V_2, \\ldots, V_d)"
},
{
"math_id": 12,
"text": "\\left(\\frac{V_1}{U^r}, \\frac{V_2}{U^r}, \\ldots, \\frac{V_d}{U^r}\\right)"
},
{
"math_id": 13,
"text": "|A_{f,r}| = \\frac{1}{c\\,(1+rd)}"
},
{
"math_id": 14,
"text": "\\varphi"
},
{
"math_id": 15,
"text": "\\varphi\\left(\\frac{V_1}{U^r}, \\ldots, \\frac{V_d}{U^r}\\right)"
},
{
"math_id": 16,
"text": "E\\left[\\varphi\\left(\\frac{V_1}{U^r}, \\ldots, \\frac{V_d}{U^r}\\right)\\right]\n= \\frac{1}{|A_{f,r}|}\n\\int_{-\\infty}^\\infty\\int_{-\\infty}^\\infty\\cdots\\int_{-\\infty}^\\infty\\varphi\\left(\\frac{v_1}{u^r},\\ldots,\\frac{v_d}{u^r}\\right)\\mathbf{1}_{(u, v_1, \\ldots, v_d)\\in A_{f,r}}\\mathrm{d}u\\,\\mathrm{d}v_1\\ldots\\mathrm{d}v_d"
},
{
"math_id": 17,
"text": "x_i = \\frac{v_i}{u^r}"
},
{
"math_id": 18,
"text": "\\begin{align}\nE\\left[\\varphi\\left(\\frac{V_1}{U^r},\\ldots,\\frac{V_d}{U^r}\\right)\\right]\n&= \\frac{1}{|A_{f,r}|}\n\\int_{-\\infty}^\\infty\\int_{-\\infty}^\\infty\\cdots\\int_{-\\infty}^\\infty\n\\varphi(x_1,\\ldots,x_d)\n\\mathbf{1}_{0\\leq u\\leq f(x_1,\\ldots, x_d)^\\frac{1}{1+rd}}\nu^{rd}\n\\mathrm{d}u\\,\\mathrm{d}x_1\\cdots\\mathrm{d}x_d\\\\\n&= \\frac{1}{|A_{f,r}|}\n\\int_{-\\infty}^\\infty\\cdots\\int_{-\\infty}^\\infty\n\\varphi\\left(x_1,\\ldots,x_d\\right)\n\\frac{1}{1+rd}f\\left(x_1,\\ldots,x_d\\right)\n\\mathrm{d}x_1\\cdots\\mathrm{d}x_d\\\\\n&= \\int_{-\\infty}^\\infty\\ldots\\int_{-\\infty}^\\infty\n\\varphi\\left(x_1,\\ldots,x_d\\right)p\\left(x_1,\\ldots,x_d\\right)\n\\mathrm{d}x_1\\cdots\\mathrm{d}x_d\n\\end{align}"
},
{
"math_id": 19,
"text": "\\left(\\frac{V_1}{U^r}, \\ldots, \\frac{V_d}{U^r}\\right)"
},
{
"math_id": 20,
"text": "f(x_1, x_2, \\ldots, x_d)^{\\frac{1}{1+rd}}"
},
{
"math_id": 21,
"text": "x_i f(x_1, x_2, \\ldots, x_d)^\\frac{r}{1+rd}"
},
{
"math_id": 22,
"text": "i"
},
{
"math_id": 23,
"text": "\\tilde{A}_{f,r}"
},
{
"math_id": 24,
"text": "A_{f,r} \\subset \\tilde{A}_{f,r} = \n\\left[0, \\sup_{x_1, \\ldots, x_d}{f(x_1, \\ldots, x_d)^{\\frac{1}{1+rd}}}\\right]\n\\times\n\\prod_i \\left[\\inf_{x_1, \\ldots, x_d}{x_i f(x_1, \\ldots, x_d)^{\\frac{r}{1+rd}}},\n\\sup_{x_1, \\ldots, x_d}{x_i f(x_1, \\ldots, x_d)^{\\frac{r}{1+rd}}}\\right]"
},
{
"math_id": 25,
"text": "r"
},
{
"math_id": 26,
"text": "u = f\\left(x_1, x_2, \\ldots, x_d \\right)^\\frac{1}{1+rd}\n\\quad\\text{and}\\quad\n\\forall i \\in [|1, n|], v_i = x_i u^r"
},
{
"math_id": 27,
"text": "(u, v) = \\left(f(x)^\\frac{1}{1+r}, x\\,f(x)^\\frac{r}{1+r}\\right)"
},
{
"math_id": 28,
"text": "g: \\mathbb{R}^+\\rightarrow\\mathbb{R}^+"
},
{
"math_id": 29,
"text": "g(0) = 0"
},
{
"math_id": 30,
"text": "A_{f,g}"
},
{
"math_id": 31,
"text": "A_{f,g} = \\left\\{(u, v)\\in\\mathbb{R}^2: 0\\leq u \\leq g^{-1}\\left[f\\left(\\frac{v}{g'(u)}\\right)\\right]\\right\\}"
},
{
"math_id": 32,
"text": "(U, V)"
},
{
"math_id": 33,
"text": "\\frac{V}{g'(U)}"
},
{
"math_id": 34,
"text": "A_{f,1}"
},
{
"math_id": 35,
"text": "\\tilde{A}_{f,1}"
},
{
"math_id": 36,
"text": "p(x) = \\lambda \\mathrm{e}^{-\\lambda x}"
},
{
"math_id": 37,
"text": "r = 1"
},
{
"math_id": 38,
"text": "A_{f,1} = \\left\\{(u,v)\\in\\mathbb{R}^2:0\\leq u\\leq \\sqrt{p\\left(\\frac{v}{u}\\right)} \\right\\}"
},
{
"math_id": 39,
"text": "0\\leq u\\leq \\sqrt{p\\left(\\frac{v}{u}\\right)}"
},
{
"math_id": 40,
"text": "0\\leq v\\leq -\\frac{u}{\\lambda}\\ln\\frac{u^2}{\\lambda}"
},
{
"math_id": 41,
"text": "g(u) = -\\frac{u}{\\lambda}\\ln\\frac{u^2}{\\lambda}"
},
{
"math_id": 42,
"text": "g\\left(\\sqrt{\\lambda}\\right) = 0"
},
{
"math_id": 43,
"text": "g'\\left(\\frac{2}{\\mathrm{e}\\sqrt{\\lambda}}\\right) = 0"
},
{
"math_id": 44,
"text": "\\tilde{A}_{f,1} = \\left[0, \\sqrt{\\lambda}\\right]\\times\\left[0, g\\left(\\frac{2}{\\mathrm{e}\\sqrt{\\lambda}}\\right)\\right]"
},
{
"math_id": 45,
"text": "U \\sim \\mathrm{Unif}\\left(0, \\sqrt{\\lambda}\\right)"
},
{
"math_id": 46,
"text": "V \\sim \\mathrm{Unif}\\left(0, g\\left(\\frac{2}{\\mathrm{e}\\sqrt{\\lambda}}\\right)\\right)"
},
{
"math_id": 47,
"text": "u \\leq \\sqrt{\\lambda\\,\\mathrm{e}^{-\\lambda\\frac{v}{u}}}"
},
{
"math_id": 48,
"text": "\\frac{v}{u}"
},
{
"math_id": 49,
"text": "x"
},
{
"math_id": 50,
"text": "\\mathcal{D} = 0.6\\,N(\\mu=-1, \\sigma=2)+0.4\\,N(\\mu=3, \\sigma=1)"
},
{
"math_id": 51,
"text": "u(x) = f(x)^\\frac{1}{1+r}"
},
{
"math_id": 52,
"text": "v(x) = x\\,f(x)^\\frac{r}{1+r}"
},
{
"math_id": 53,
"text": "(u, v)\\in \\tilde{A}_{f,r}"
},
{
"math_id": 54,
"text": "\\frac{v}{u^r}"
}
]
| https://en.wikipedia.org/wiki?curid=68493758 |
68495772 | Mabuchi functional | In mathematics, and especially complex geometry, the Mabuchi functional or K-energy functional is a functional on the space of Kähler potentials of a compact Kähler manifold whose critical points are constant scalar curvature Kähler metrics. The Mabuchi functional was introduced by Toshiki Mabuchi in 1985 as a functional which integrates the Futaki invariant, which is an obstruction to the existence of a Kähler–Einstein metric on a Fano manifold.
The Mabuchi functional is an analogue of the log-norm functional of the moment map in geometric invariant theory and symplectic reduction. The Mabuchi functional appears in the theory of K-stability as an analytical functional which characterises the existence of constant scalar curvature Kähler metrics. The slope at infinity of the Mabuchi functional along any geodesic ray in the space of Kähler potentials is given by the Donaldson–Futaki invariant of a corresponding test configuration.
Due to the variational techniques of Berman–Boucksom–Jonsson in the study of Kähler–Einstein metrics on Fano varieties, the Mabuchi functional and various generalisations of it have become critically important in the study of K-stability of Fano varieties, particularly in settings with singularities.
Definition.
The Mabuchi functional is defined on the space of Kähler potentials inside a fixed Kähler cohomology class on a compact complex manifold. Let formula_0 be a compact Kähler manifold with a fixed Kähler metric formula_1. Then by the formula_2-lemma, any other Kähler metric in the class formula_3 in de Rham cohomology may be related to formula_1 by a smooth function formula_4, the Kähler potential:
formula_5
In order to ensure this new two-form is a Kähler metric, it must be a positive form:
formula_6
These two conditions define the space of Kähler potentials
formula_7
Since any two Kähler potentials which differ by a constant function define the same Kähler metric, the space of Kähler metrics in the class formula_8 can be identified with formula_9, the Kähler potentials modulo the constant functions. One can instead restrict to those Kähler potentials which are normalised so that their integral over formula_10 vanishes.
The tangent space to formula_11 can be identified with the space of smooth real-valued functions on formula_10. Let formula_12 denote the scalar curvature of the Riemannian metric corresponding to formula_13, and let formula_14 denote the average of this scalar curvature over formula_10, which does not depend on the choice of formula_15 by Stokes theorem. Define a differential one-form on the space of Kähler potentials by
formula_16
This one-form is closed. Since formula_11 is a contractible space, this one-form is exact, and there exists a functional formula_17 normalised so that formula_18 such that formula_19, the Mabuchi functional or K-energy.
The Mabuchi functional has an explicit description given by integrating the one-form formula_20 along a path. Let formula_21 be a fixed Kähler potential, which may be taken as formula_22, and let formula_23, and formula_24 be a path in formula_11 from formula_21 to formula_25. Then
formula_26
This integral can be shown to be independent of the choice of path formula_24.
Constant scalar curvature Kähler metrics.
From the definition of the Mabuchi functional in terms of the one-form formula_20, it can be seen that for a Kähler potential formula_27, the variation
formula_28
vanishes for all tangent vectors formula_29 if and only if formula_30. That is, the critical points of the Mabuchi functional are precisely the Kähler potentials which have constant scalar curvature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(M,\\omega)"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "\\partial \\bar \\partial"
},
{
"math_id": 3,
"text": "[\\omega]\\in H^2_{\\text{dR}}(M)"
},
{
"math_id": 4,
"text": "\\varphi\\in C^{\\infty}(X)"
},
{
"math_id": 5,
"text": "\\omega_\\varphi = \\omega + i \\partial \\bar \\partial \\varphi."
},
{
"math_id": 6,
"text": "\\omega_\\varphi > 0."
},
{
"math_id": 7,
"text": "\\mathcal{K} = \\{ \\varphi: M \\to \\mathbb{R} \\mid \\varphi\\in C^{\\infty}(X),\\quad \\omega + i \\partial \\bar \\partial \\varphi > 0\\}."
},
{
"math_id": 8,
"text": "[\\omega]"
},
{
"math_id": 9,
"text": "\\mathcal{K}/\\mathbb{R}"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "\\mathcal{K}"
},
{
"math_id": 12,
"text": "S_\\varphi"
},
{
"math_id": 13,
"text": "\\omega_\\varphi"
},
{
"math_id": 14,
"text": "\\hat S"
},
{
"math_id": 15,
"text": "\\varphi"
},
{
"math_id": 16,
"text": "\\alpha_\\varphi (\\psi) = \\int_M \\psi (\\hat S - S_\\varphi) \\omega_\\varphi^n."
},
{
"math_id": 17,
"text": "\\mathcal{M}: \\mathcal{K} \\to \\mathbb{R}"
},
{
"math_id": 18,
"text": "\\mathcal{M}(0)=0"
},
{
"math_id": 19,
"text": "d\\mathcal{M} = \\alpha"
},
{
"math_id": 20,
"text": "\\alpha"
},
{
"math_id": 21,
"text": "\\varphi_0"
},
{
"math_id": 22,
"text": "\\varphi_0=0"
},
{
"math_id": 23,
"text": "\\varphi_1=\\varphi"
},
{
"math_id": 24,
"text": "\\varphi_t"
},
{
"math_id": 25,
"text": "\\varphi_1"
},
{
"math_id": 26,
"text": "\\mathcal{M}(\\varphi) = \\int_0^1 \\int_M \\dot \\varphi_t (\\hat S - S_{\\varphi_t}) \\omega_{\\varphi_t}^n dt."
},
{
"math_id": 27,
"text": "\\varphi\\in \\mathcal{K}"
},
{
"math_id": 28,
"text": "\\left.\\frac{d}{dt}\\right|_{t=0} \\mathcal{M}(\\varphi + t \\psi) = \\int_M \\psi (\\hat S - S_\\varphi) \\omega_\\varphi^n"
},
{
"math_id": 29,
"text": "\\psi \\in C^{\\infty}(M)"
},
{
"math_id": 30,
"text": "\\hat S = S_\\varphi"
}
]
| https://en.wikipedia.org/wiki?curid=68495772 |
68497153 | Theories of iterated inductive definitions | In set theory and logic, Buchholz's ID hierarchy is a hierarchy of subsystems of first-order arithmetic. The systems/theories formula_0 are referred to as "the formal theories of ν-times iterated inductive definitions". IDν extends PA by ν iterated least fixed points of monotone operators.
Definition.
Original definition.
The formal theory IDω (and IDν in general) is an extension of Peano Arithmetic, formulated in the language LID, by the following axioms:
The theory IDν with ν ≠ ω is defined as:
Explanation / alternate definition.
ID1.
A set formula_6 is called inductively defined if for some monotonic operator formula_7, formula_8, where formula_9 denotes the least fixed point of formula_10. The language of ID1, formula_11, is obtained from that of first-order number theory, formula_12, by the addition of a set (or predicate) constant IA for every X-positive formula A(X, x) in LN[X] that only contains X (a new set variable) and x (a number variable) as free variables. The term X-positive means that X only occurs positively in A (X is never on the left of an implication). We allow ourselves a bit of set-theoretic notation:
Then ID1 contains the axioms of first-order number theory (PA) with the induction scheme extended to the new language as well as these axioms:
Where formula_22 ranges over all formula_11 formulas.
Note that formula_23 expresses that formula_24 is closed under the arithmetically definable set operator formula_25, while formula_26 expresses that formula_24 is the least such (at least among sets definable in formula_11).
Thus, formula_24 is meant to be the least pre-fixed-point, and hence the least fixed point of the operator formula_27.
IDν.
To define the system of ν-times iterated inductive definitions, where ν is an ordinal, let formula_28 be a primitive recursive well-ordering of order type ν. We use Greek letters to denote elements of the field of formula_28. The language of IDν, formula_29 is obtained from formula_12 by the addition of a binary predicate constant JA for every X-positive formula_30 formula formula_31 that contains at most the shown free variables, where X is again a unary (set) variable, and Y is a fresh binary predicate variable. We write formula_32 instead of formula_33, thinking of x as a distinguished variable in the latter formula.
The system IDν is now obtained from the system of first-order number theory (PA) by expanding the induction scheme to the new language and adding the scheme formula_34 expressing transfinite induction along formula_28 for an arbitrary formula_29 formula formula_16 as well as the axioms:
where formula_22 is an arbitrary formula_29 formula. In formula_37 and formula_38 we used the abbreviation formula_39 for the formula formula_40, where formula_41 is the distinguished variable. We see that these express that each formula_42, for formula_43, is the least fixed point (among definable sets) for the operator formula_44. Note how all the previous sets formula_45, for formula_46, are used as parameters.
We then define formula_47.
Variants.
formula_48 - formula_48 is a weakened version of formula_49. In the system of formula_48, a set formula_6 is instead called inductively defined if for some monotonic operator formula_7, formula_50 is a fixed point of formula_51, rather than the least fixed point. This subtle difference makes the system significantly weaker: formula_52, while formula_53.
formula_54 is formula_48 weakened even further. It not only uses fixed points rather than least fixed points (as formula_48 does), but also has induction only for positive formulas. This once again subtle difference makes the system even weaker: formula_55, while formula_52.
formula_56 is the weakest of all variants of formula_49, based on W-types. The amount of weakening compared to regular iterated inductive definitions is identical to removing bar induction given a certain subsystem of second-order arithmetic. formula_57, while formula_53.
formula_58 is an "unfolding" strengthening of formula_49. It is not exactly a first-order arithmetic system, but captures what one can get by predicative reasoning based on ν-times iterated generalized inductive definitions. The amount of increase in strength is identical to the increase from formula_59 to formula_60: formula_53, while formula_61.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ID_\\nu"
},
{
"math_id": 1,
"text": "\\forall y \\forall x (\\mathfrak{M}_y(P^\\mathfrak{M}_y, x) \\rightarrow x \\in P^\\mathfrak{M}_y)"
},
{
"math_id": 2,
"text": "\\forall y (\\forall x (\\mathfrak{M}_y(F, x) \\rightarrow F(x)) \\rightarrow \\forall x (x \\in P^\\mathfrak{M}_y \\rightarrow F(x)))"
},
{
"math_id": 3,
"text": "\\forall y \\forall x_0 \\forall x_1(P^\\mathfrak{M}_{<y}x_0x_1 \\leftrightarrow x_0 < y \\land x_1 \\in P^\\mathfrak{M}_{x_0})"
},
{
"math_id": 4,
"text": "\\forall y \\forall x (Z_y(P^\\mathfrak{M}_y, x) \\rightarrow x \\in P^\\mathfrak{M}_y)"
},
{
"math_id": 5,
"text": "\\forall x (\\mathfrak{M}_u(F, x) \\rightarrow F(x)) \\rightarrow \\forall x (P^\\mathfrak{M}_ux \\rightarrow F(x))"
},
{
"math_id": 6,
"text": "I \\subseteq \\N"
},
{
"math_id": 7,
"text": "\\Gamma: P(N) \\rightarrow P(N)"
},
{
"math_id": 8,
"text": "LFP(\\Gamma) = I"
},
{
"math_id": 9,
"text": "LFP(f)"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "L_{ID_1}"
},
{
"math_id": 12,
"text": "L_\\N"
},
{
"math_id": 13,
"text": "F(x) = \\{x \\in N \\mid F(x)\\}"
},
{
"math_id": 14,
"text": "s \\in F"
},
{
"math_id": 15,
"text": "F(s)"
},
{
"math_id": 16,
"text": "F"
},
{
"math_id": 17,
"text": "G"
},
{
"math_id": 18,
"text": "F \\subseteq G"
},
{
"math_id": 19,
"text": "\\forall x F(x) \\rightarrow G(x)"
},
{
"math_id": 20,
"text": "(ID_1)^1: A(I_A) \\subseteq I_A"
},
{
"math_id": 21,
"text": "(ID_1)^2: A(F) \\subseteq F \\rightarrow I_A \\subseteq F"
},
{
"math_id": 22,
"text": "F(x)"
},
{
"math_id": 23,
"text": "(ID_1)^1"
},
{
"math_id": 24,
"text": "I_A"
},
{
"math_id": 25,
"text": "\\Gamma_A(S) = \\{x \\in \\N \\mid \\N \\models A(S, x)\\}"
},
{
"math_id": 26,
"text": "(ID_1)^2"
},
{
"math_id": 27,
"text": "\\Gamma_A"
},
{
"math_id": 28,
"text": "\\prec"
},
{
"math_id": 29,
"text": "L_{ID_\\nu}"
},
{
"math_id": 30,
"text": "L_\\N[X, Y]"
},
{
"math_id": 31,
"text": "A(X, Y, \\mu, x)"
},
{
"math_id": 32,
"text": "x \\in J^\\mu_A"
},
{
"math_id": 33,
"text": "J_A(\\mu, x)"
},
{
"math_id": 34,
"text": "(TI_\\nu): TI(\\prec, F)"
},
{
"math_id": 35,
"text": "(ID_\\nu)^1: \\forall \\mu \\prec \\nu; A^\\mu(J^\\mu_A) \\subseteq J^\\mu_A"
},
{
"math_id": 36,
"text": "(ID_\\nu)^2: \\forall \\mu \\prec \\nu; A^\\mu(F) \\subseteq F \\rightarrow J^\\mu_A \\subseteq F"
},
{
"math_id": 37,
"text": "(ID_\\nu)^1"
},
{
"math_id": 38,
"text": "(ID_\\nu)^2"
},
{
"math_id": 39,
"text": "A^\\mu(F)"
},
{
"math_id": 40,
"text": "A(F, (\\lambda\\gamma y; \\gamma \\prec \\mu \\land y \\in J^\\gamma_A), \\mu, x)"
},
{
"math_id": 41,
"text": "x"
},
{
"math_id": 42,
"text": "J^\\mu_A"
},
{
"math_id": 43,
"text": "\\mu \\prec \\nu"
},
{
"math_id": 44,
"text": "\\Gamma^\\mu_A(S) = \\{n \\in \\N | (\\N, (J^\\gamma_A)_{\\gamma \\prec \\mu}\\}"
},
{
"math_id": 45,
"text": "J^\\gamma_A"
},
{
"math_id": 46,
"text": "\\gamma \\prec \\mu"
},
{
"math_id": 47,
"text": "ID_{\\prec \\nu} = \\bigcup _{\\xi \\prec \\nu}ID_\\xi"
},
{
"math_id": 48,
"text": "\\widehat{\\mathsf{ID}}_\\nu"
},
{
"math_id": 49,
"text": "\\mathsf{ID}_\\nu"
},
{
"math_id": 50,
"text": "I"
},
{
"math_id": 51,
"text": "\\Gamma"
},
{
"math_id": 52,
"text": "PTO(\\widehat{\\mathsf{ID}}_1) = \\psi(\\Omega^{\\varepsilon_0})"
},
{
"math_id": 53,
"text": "PTO(\\mathsf{ID}_1) = \\psi(\\varepsilon_{\\Omega+1})"
},
{
"math_id": 54,
"text": "\\mathsf{ID}_\\nu\\#"
},
{
"math_id": 55,
"text": "PTO(\\mathsf{ID}_1\\#) = \\psi(\\Omega^\\omega) "
},
{
"math_id": 56,
"text": "\\mathsf{W-ID}_\\nu"
},
{
"math_id": 57,
"text": "PTO(\\mathsf{W-ID}_1) = \\psi_0(\\Omega\\times\\omega) "
},
{
"math_id": 58,
"text": "\\mathsf{U(ID}_\\nu\\mathsf{)}"
},
{
"math_id": 59,
"text": "\\varepsilon_0"
},
{
"math_id": 60,
"text": "\\Gamma_0"
},
{
"math_id": 61,
"text": "PTO(\\mathsf{U(ID}_1\\mathsf{)}) = \\psi(\\Gamma_{\\Omega+1}) "
},
{
"math_id": 62,
"text": "\\Pi^1_1 - CA + BI"
},
{
"math_id": 63,
"text": "\\Pi^0_2"
},
{
"math_id": 64,
"text": "\\forall x \\exists y \\varphi(x, y) (\\varphi \\in \\Sigma^0_1)"
},
{
"math_id": 65,
"text": "p \\in N"
},
{
"math_id": 66,
"text": "\\forall n \\geq p \\exists k < H_{D_0D^n_\\nu0}(1) \\varphi(n, k)"
},
{
"math_id": 67,
"text": "\\vdash_k^{D^k_\\nu0} A^N"
},
{
"math_id": 68,
"text": "\\psi_0(\\Omega_\\nu)"
},
{
"math_id": 69,
"text": "\\psi_0(\\varepsilon_{\\Omega_\\nu+1}) = \\psi_0(\\Omega_{\\nu+1})"
},
{
"math_id": 70,
"text": "\\widehat{ID}_{<\\omega}"
},
{
"math_id": 71,
"text": "\\widehat{ID}_\\nu"
},
{
"math_id": 72,
"text": "\\nu < \\omega"
},
{
"math_id": 73,
"text": "\\varphi(\\varphi(\\nu, 0), 0)"
},
{
"math_id": 74,
"text": "\\widehat{ID}_{\\varphi(\\alpha, \\beta)}"
},
{
"math_id": 75,
"text": "\\varphi(1, 0, \\varphi(\\alpha+1, \\beta-1))"
},
{
"math_id": 76,
"text": "\\widehat{ID}_{<\\varphi(0, \\alpha)}"
},
{
"math_id": 77,
"text": "\\alpha > 1"
},
{
"math_id": 78,
"text": "\\varphi(1, \\alpha, 0)"
},
{
"math_id": 79,
"text": "\\widehat{ID}_{<\\nu}"
},
{
"math_id": 80,
"text": "\\nu \\geq \\varepsilon_0"
},
{
"math_id": 81,
"text": "\\varphi(1, \\nu, 0)"
},
{
"math_id": 82,
"text": "ID_\\nu\\#"
},
{
"math_id": 83,
"text": "\\varphi(\\omega^\\nu, 0)"
},
{
"math_id": 84,
"text": "ID_{<\\nu}\\#"
},
{
"math_id": 85,
"text": "\\varphi(0, \\omega^{\\nu+1})"
},
{
"math_id": 86,
"text": "W\\textrm{-}ID_{\\varphi(\\alpha, \\beta)}"
},
{
"math_id": 87,
"text": "\\psi_0(\\Omega_{\\varphi(\\alpha, \\beta)}\\times\\varphi(\\alpha+1, \\beta-1))"
},
{
"math_id": 88,
"text": "W\\textrm{-}ID_{<\\varphi(\\alpha, \\beta)}"
},
{
"math_id": 89,
"text": "\\psi_0(\\varphi(\\alpha+1, \\beta-1)^{\\Omega_{\\varphi(\\alpha, \\beta-1)}+1})"
},
{
"math_id": 90,
"text": "U(ID_\\nu)"
},
{
"math_id": 91,
"text": "\\psi_0(\\varphi(\\nu, 0, \\Omega+1))"
},
{
"math_id": 92,
"text": "U(ID_{<\\nu})"
},
{
"math_id": 93,
"text": "\\psi_0(\\Omega^{\\Omega+\\varphi(\\nu, 0, \\Omega)})"
},
{
"math_id": 94,
"text": "\\mathsf{KP}"
},
{
"math_id": 95,
"text": "\\mathsf{KP\\omega}"
},
{
"math_id": 96,
"text": "\\mathsf{CZF}"
},
{
"math_id": 97,
"text": "\\mathsf{ML_{1}V}"
},
{
"math_id": 98,
"text": "\\psi_0(\\Omega_\\omega\\varepsilon_0)"
},
{
"math_id": 99,
"text": "\\mathsf{W-KPI}"
},
{
"math_id": 100,
"text": "\\mathsf{KPI}"
},
{
"math_id": 101,
"text": "\\Pi^1_1 - \\mathsf{CA} + \\mathsf{BI}"
},
{
"math_id": 102,
"text": "\\Delta^1_2 - \\mathsf{CA} + \\mathsf{BI}"
},
{
"math_id": 103,
"text": "\\psi_0(\\Omega_{\\omega^\\omega})"
},
{
"math_id": 104,
"text": "\\Delta^1_2 - \\mathsf{CR}"
},
{
"math_id": 105,
"text": "\\psi_0(\\Omega_{\\varepsilon_0})"
},
{
"math_id": 106,
"text": "\\Delta^1_2 - \\mathsf{CA}"
},
{
"math_id": 107,
"text": "\\Sigma^1_2 - \\mathsf{AC}"
},
{
"math_id": 108,
"text": "\\Sigma^1_2 - AC"
}
]
| https://en.wikipedia.org/wiki?curid=68497153 |
68498 | Ocean thermal energy conversion | Extracting energy from the ocean
Ocean thermal energy conversion (OTEC) is a renewable energy technology that harnesses the temperature difference between the warm surface waters of the ocean and the cold depths to run a heat engine to produce electricity. Although the technology still faces technical and economic challenges, it has the potential to provide a consistent and sustainable source of clean power, particularly in tropical regions with access to deep ocean water.
Description.
OTEC uses the ocean thermal gradient between cooler deep and warmer shallow or surface seawaters to run a heat engine and produce useful work, usually in the form of electricity. OTEC can operate with a very high capacity factor and so can supply power in base load mode.
The denser cold water masses, formed by the interaction of ocean surface water with the cold atmosphere in quite specific areas of the North Atlantic and the Southern Ocean, sink into the deep sea basins and spread throughout the deep ocean by thermohaline circulation. Upwelling of cold water from the deep ocean is replenished by the downwelling of cold surface seawater.
Among ocean energy sources, OTEC is one of the continuously available renewable energy resources that could contribute to base-load power supply. The resource potential for OTEC is considered to be much larger than for other ocean energy forms. Up to 10,000 TWh/yr of power could be generated from OTEC without affecting the ocean's thermal structure.
Systems may be either closed-cycle or open-cycle. Closed-cycle OTEC uses working fluids that are typically thought of as refrigerants such as ammonia or R-134a. These fluids have low boiling points, and are therefore suitable for powering the system's generator to generate electricity. The most commonly used heat cycle for OTEC to date is the Rankine cycle, using a low-pressure turbine. Open-cycle engines use vapor from the seawater itself as the working fluid.
OTEC can also supply quantities of cold water as a by-product. This can be used for air conditioning and refrigeration and the nutrient-rich deep ocean water can feed biological technologies. Another by-product is fresh water distilled from the sea.
OTEC theory was first developed in the 1880s and the first bench size demonstration model was constructed in 1926. Currently operating pilot-scale OTEC plants are located in Japan, overseen by Saga University, and Makai in Hawaii.
History.
Attempts to develop and refine OTEC technology started in the 1880s. In 1881, Jacques Arsene d'Arsonval, a French physicist, proposed tapping the thermal energy of the ocean. D'Arsonval's student, Georges Claude, built the first OTEC plant, in Matanzas, Cuba in 1930. The system generated 22 kW of electricity with a low-pressure turbine. The plant was later destroyed in a storm.
In 1935, Claude constructed a plant aboard a 10,000-ton cargo vessel moored off the coast of Brazil. Weather and waves destroyed it before it could generate net power. (Net power is the amount of power generated after subtracting power needed to run the system).
In 1956, French scientists designed a 3 MW plant for Abidjan, Ivory Coast. The plant was never completed, because new finds of large amounts of cheap petroleum made it uneconomical.
In 1962, J. Hilbert Anderson and James H. Anderson, Jr. focused on increasing component efficiency. They patented their new "closed cycle" design in 1967. This design improved upon the original closed-cycle Rankine system, and included this in an outline for a plant that would produce power at lower cost than oil or coal. At the time, however, their research garnered little attention since coal and nuclear were considered the future of energy.
Japan is a major contributor to the development of OTEC technology. Beginning in 1970 the Tokyo Electric Power Company successfully built and deployed a 100 kW closed-cycle OTEC plant on the island of Nauru. The plant became operational on 14 October 1981, producing about 120 kW of electricity; 90 kW was used to power the plant and the remaining electricity was used to power a school and other places. This set a world record for power output from an OTEC system where the power was sent to a real (as opposed to an experimental) power grid.
1981 also saw a major development in OTEC technology when Russian engineer, Dr. Alexander Kalina, used a mixture of ammonia and water to produce electricity. This new ammonia-water mixture greatly improved the efficiency of the power cycle. In 1994, the Institute of Ocean Energy at Saga University designed and constructed a 4.5 kW plant for the purpose of testing a newly invented Uehara cycle, also named after its inventor Haruo Uehara. This cycle included absorption and extraction processes that allow this system to outperform the Kalina cycle by 1–2%.
The 1970s saw an uptick in OTEC research and development in the aftermath of the 1973 Arab–Israeli War, which caused oil prices to triple. The U.S. federal government poured $260 million into OTEC research after President Carter signed a law that committed the US to a production goal of 10,000 MW of electricity from OTEC systems by 1999.
In 1974, the U.S. established the Natural Energy Laboratory of Hawaii Authority (NELHA) at Keahole Point on the Kona coast of Hawaii. Hawaii is the best US OTEC location, due to its warm surface water, access to very deep, very cold water, and high electricity costs. The laboratory has become a leading test facility for OTEC technology. In the same year, Lockheed received a grant from the U.S. National Science Foundation to study OTEC. This eventually led to an effort by Lockheed, the US Navy, Makai Ocean Engineering, Dillingham Construction, and other firms to build the world's first and only net-power producing OTEC plant, dubbed "Mini-OTEC". For three months in 1979, a small amount of electricity was generated. NELHA operated a 250 kW demonstration plant for six years in the 1990s. With funding from the United States Navy, a 105 kW plant at the site began supplying energy to the local power grid in 2015.
A European initiative EUROCEAN - a privately funded joint venture of 9 European companies already active in offshore engineering - was active in promoting OTEC from 1979 to 1983. Initially a large scale offshore facility was studied. Later a 100 kW land based installation was studied combining land based OTEC with Desalination and Aquaculture nicknamed ODA. This was based on the results from a small scale aquaculture facility at the island of St Croix that used a deepwater supply line to feed the aquaculture basins. Also a shore based open cycle plant was investigated.
The case study location was Curaçao, an island within the Kingdom of the Netherlands.
Research related to making open-cycle OTEC a reality began earnestly in 1979 at the Solar Energy Research Institute (SERI) with funding from the US Department of Energy. Evaporators and suitably configured direct-contact condensers were developed and patented by SERI. An original design for a power-producing experiment, then called the 165-kW experiment, was described by Kreith and Bharathan and presented as the Max Jakob Memorial Award Lecture. The initial design used two parallel axial turbines, using last-stage rotors taken from large steam turbines. Later, a team led by Dr. Bharathan at the National Renewable Energy Laboratory (NREL) developed the initial conceptual design for an updated 210 kW open-cycle OTEC experiment. This design integrated all components of the cycle, namely the evaporator, condenser and turbine, into one single vacuum vessel, with the turbine mounted on top to prevent any potential for water to reach it. The vessel was made of concrete as the first process vacuum vessel of its kind. Attempts to make all components from low-cost plastic material could not be fully realized, as some conservatism was required for the turbine and the vacuum pumps, which were developed as the first of their kind. Later Dr. Bharathan worked with a team of engineers at the Pacific Institute for High Technology Research (PICHTR) to further pursue this design through preliminary and final stages. It was renamed the Net Power Producing Experiment (NPPE) and was constructed at the Natural Energy Laboratory of Hawaii (NELH) by PICHTR, with a team led by Chief Engineer Don Evans; the project was managed by Dr. Luis Vega.
In 2002, India tested a 1 MW floating OTEC pilot plant near Tamil Nadu. The plant was ultimately unsuccessful due to a failure of the deep sea cold water pipe. Its government continues to sponsor research.
In 2006, Makai Ocean Engineering was awarded a contract from the U.S. Office of Naval Research (ONR) to investigate the potential for OTEC to produce nationally significant quantities of hydrogen in at-sea floating plants located in warm, tropical waters. Realizing the need for larger partners to actually commercialize OTEC, Makai approached Lockheed Martin to renew their previous relationship and determine whether the time was right for OTEC. In 2007, Lockheed Martin resumed work in OTEC and became a subcontractor to Makai to support their SBIR, which was followed by subsequent collaborations.
In March 2011, Ocean Thermal Energy Corporation signed an Energy Services Agreement (ESA) with the Baha Mar resort, Nassau, Bahamas, for the world's first and largest seawater air conditioning (SWAC) system. In June 2015, the project was put on pause while the resort resolved financial and ownership issues. In August 2016, it was announced that the issues had been resolved and that the resort would open in March 2017. It is expected that the SWAC system's construction will resume at that time.
In July 2011, Makai Ocean Engineering completed the design and construction of an OTEC Heat Exchanger Test Facility at the Natural Energy Laboratory of Hawaii. The purpose of the facility is to arrive at an optimal design for OTEC heat exchangers, increasing performance and useful life while reducing cost (heat exchangers being the #1 cost driver for an OTEC plant). And in March 2013, Makai announced an award to install and operate a 100 kilowatt turbine on the OTEC Heat Exchanger Test Facility, and once again connect OTEC power to the grid.
In July 2016, the Virgin Islands Public Services Commission approved Ocean Thermal Energy Corporation's application to become a Qualified Facility. The company is thus permitted to begin negotiations with the Virgin Islands Water and Power Authority (WAPA) for a Power Purchase Agreement (PPA) pertaining to an Ocean Thermal Energy Conversion (OTEC) plant on the island of St. Croix. This would be the world's first commercial OTEC plant.
A project is set to be installed in the African country of São Tomé and Príncipe, which will be the first commercial-scale floating OTEC platform in the world. Developed by Global OTEC, the structure named Dominique will generate 1.5MW, with subsequent barges being installed to help supply the full demand of the country. In 2022, an MoU was signed between the government and British startup Global OTEC.
Currently operating OTEC plants.
In March 2013, Saga University with various Japanese industries completed the installation of a new OTEC plant. Okinawa Prefecture announced the start of the OTEC operation testing at Kume Island on April 15, 2013. The main aim is to prove the validity of computer models and demonstrate OTEC to the public. The testing and research will be conducted with the support of Saga University until the end of FY 2016. IHI Plant Construction Co. Ltd, Yokogawa Electric Corporation, and Xenesys Inc were entrusted with constructing the 100 kilowatt class plant within the grounds of the Okinawa Prefecture Deep Sea Water Research Center. The location was specifically chosen in order to utilize existing deep seawater and surface seawater intake pipes installed for the research center in 2000. The pipe is used for the intake of deep sea water for research, fishery, and agricultural use.
The plant consists of two 50 kW units in double Rankine configuration. The OTEC facility and deep seawater research center are open to free public tours by appointment in English and Japanese. Currently, this is one of only two fully operational OTEC plants in the world. This plant operates continuously when specific tests are not underway.
In 2011, Makai Ocean Engineering completed a heat exchanger test facility at NELHA. Used to test a variety of heat exchange technologies for use in OTEC, Makai has received funding to install a 105 kW turbine. Installation will make this facility the largest operational OTEC facility, though the record for largest power will remain with the Open Cycle plant also developed in Hawaii.
In July 2014, the DCNS group, partnered with Akuo Energy, announced NER 300 funding for their NEMO project. Had the project succeeded, the 16 MW gross / 10 MW net offshore plant would have been the largest OTEC facility to date. DCNS planned to have NEMO operational by 2020. Early in April 2018, Naval Energies shut down the project indefinitely due to technical difficulties relating to the main cold-water intake pipe.
An ocean thermal energy conversion power plant built by Makai Ocean Engineering went operational in Hawaii in August 2015. The governor of Hawaii, David Ige, "flipped the switch" to activate the plant. This is the first true closed-cycle ocean Thermal Energy Conversion (OTEC) plant to be connected to a U.S. electrical grid. It is a demo plant capable of generating 105 kilowatts, enough to power about 120 homes.
Thermodynamic efficiency.
A heat engine gives greater efficiency when run with a large temperature difference. In the oceans the temperature difference between surface and deep water is greatest in the tropics, although still a modest 20 to 25 °C. It is therefore in the tropics that OTEC offers the greatest possibilities. OTEC has the potential to offer global amounts of energy that are 10 to 100 times greater than other ocean energy options such as wave power.
OTEC plants can operate continuously providing a base load supply for an electrical power generation system.
The main technical challenge of OTEC is to generate significant amounts of power efficiently from small temperature differences. It is still considered an emerging technology. Early OTEC systems were 1 to 3 percent thermally efficient, well below the theoretical maximum of 6 to 7 percent for this temperature difference. Modern designs allow performance approaching the theoretical maximum Carnot efficiency.
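As a rough, illustrative check on these figures, the ideal Carnot efficiency for the temperature differences quoted above can be computed directly; the surface and deep-water temperatures in the sketch below are assumed, typical tropical values, not measurements from any particular site.

```python
# Carnot (ideal) efficiency for typical OTEC temperature differences.
# The temperature pairs are illustrative assumptions for tropical waters.

def carnot_efficiency(t_warm_c, t_cold_c):
    """Ideal heat-engine efficiency between two temperatures given in °C."""
    t_warm_k = t_warm_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_warm_k

for t_warm, t_cold in [(25.0, 5.0), (24.0, 4.0)]:   # ~20 °C differences
    eta = carnot_efficiency(t_warm, t_cold)
    print(f"Surface {t_warm} °C, deep {t_cold} °C -> Carnot limit {eta:.1%}")
# Both cases give about 6.7%, consistent with the 6-7 percent theoretical
# maximum quoted above; real plants achieve only a fraction of this.
```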
Power cycle types.
Cold seawater is an integral part of each of the three types of OTEC systems: closed-cycle, open-cycle, and hybrid. To operate, the cold seawater must be brought to the surface. The primary approaches are active pumping and desalination. Desalinating seawater near the sea floor lowers its density, which causes it to rise to the surface.
The alternative to costly pipes to bring condensing cold water to the surface is to pump vaporized low boiling point fluid into the depths to be condensed, thus reducing pumping volumes and reducing technical and environmental problems and lowering costs.
Closed.
Closed-cycle systems use fluid with a low boiling point, such as ammonia (having a boiling point around -33 °C at atmospheric pressure), to power a turbine to generate electricity. Warm surface seawater is pumped through a heat exchanger to vaporize the fluid. The expanding vapor turns the turbo-generator. Cold water, pumped through a second heat exchanger, condenses the vapor into a liquid, which is then recycled through the system.
In 1979, the Natural Energy Laboratory and several private-sector partners developed the "mini OTEC" experiment, which achieved the first successful at-sea production of net electrical power from closed-cycle OTEC. The mini OTEC vessel was moored off the Hawaiian coast and produced enough net electricity to illuminate the ship's light bulbs and run its computers and television.
Open.
Open-cycle OTEC uses warm surface water directly to make electricity. The warm seawater is first pumped into a low-pressure container, which causes it to boil. In some schemes, the expanding vapor drives a low-pressure turbine attached to an electrical generator. The vapor, which has left its salt and other contaminants in the low-pressure container, is pure fresh water. It is condensed into a liquid by exposure to cold temperatures from deep-ocean water. This method produces desalinized fresh water, suitable for drinking water, irrigation or aquaculture.
In other schemes, the rising vapor is used in a gas lift technique of lifting water to significant heights. Depending on the embodiment, such vapor lift pump techniques generate power from a hydroelectric turbine either before or after the pump is used.
In 1984, the "Solar Energy Research Institute" (now known as the National Renewable Energy Laboratory) developed a vertical-spout evaporator to convert warm seawater into low-pressure steam for open-cycle plants. Conversion efficiencies were as high as 97% for seawater-to-steam conversion (overall steam production would only be a few percent of the incoming water). In May 1993, an open-cycle OTEC plant at Keahole Point, Hawaii, produced close to 80 kW of electricity during a net power-producing experiment. This broke the record of 40 kW set by a Japanese system in 1982.
Hybrid.
A hybrid cycle combines the features of the closed- and open-cycle systems. In a hybrid, warm seawater enters a vacuum chamber and is flash-evaporated, similar to the open-cycle evaporation process. The steam vaporizes the ammonia working fluid of a closed-cycle loop on the other side of an ammonia vaporizer. The vaporized fluid then drives a turbine to produce electricity. The steam condenses within the heat exchanger and provides desalinated water (see heat pipe).
Working fluids.
A popular choice of working fluid is ammonia, which has superior transport properties, easy availability, and low cost. Ammonia, however, is toxic and flammable. Fluorinated carbons such as CFCs and HCFCs are not toxic or flammable, but they contribute to ozone layer depletion. Hydrocarbons too are good candidates, but they are highly flammable; in addition, this would create competition for use of them directly as fuels. The power plant size is dependent upon the vapor pressure of the working fluid. With increasing vapor pressure, the size of the turbine and heat exchangers decreases while the wall thickness of the pipe and heat exchangers increase to endure high pressure especially on the evaporator side.
Land, shelf and floating sites.
OTEC has the potential to produce gigawatts of electrical power, and in conjunction with electrolysis, could produce enough hydrogen to completely replace all projected global fossil fuel consumption. Reducing costs remains an unsolved challenge, however. OTEC plants require a long, large diameter intake pipe, which is submerged a kilometer or more into the ocean's depths, to bring cold water to the surface.
Land-based.
Land-based and near-shore facilities offer three main advantages over those located in deep water. Plants constructed on or near land do not require sophisticated mooring, lengthy power cables, or the more extensive maintenance associated with open-ocean environments. They can be installed in sheltered areas so that they are relatively safe from storms and heavy seas. Electricity, desalinated water, and cold, nutrient-rich seawater could be transmitted from near-shore facilities via trestle bridges or causeways. In addition, land-based or near-shore sites allow plants to operate with related industries such as mariculture or those that require desalinated water.
Favored locations include those with narrow shelves (volcanic islands), steep (15–20 degrees) offshore slopes, and relatively smooth sea floors. These sites minimize the length of the intake pipe. A land-based plant could be built well inland from the shore, offering more protection from storms, or on the beach, where the pipes would be shorter. In either case, easy access for construction and operation helps lower costs.
Land-based or near-shore sites can also support mariculture or chilled water agriculture. Tanks or lagoons built on shore allow workers to monitor and control miniature marine environments. Mariculture products can be delivered to market via standard transport.
One disadvantage of land-based facilities arises from the turbulent wave action in the surf zone. OTEC discharge pipes should be placed in protective trenches to prevent subjecting them to extreme stress during storms and prolonged periods of heavy seas. Also, the mixed discharge of cold and warm seawater may need to be carried several hundred meters offshore to reach the proper depth before it is released, requiring additional expense in construction and maintenance.
One way that OTEC systems can avoid some of the problems and expenses of operating in a surf zone is by building them just offshore in waters ranging from 10 to 30 meters deep (Ocean Thermal Corporation 1984). This type of plant would use shorter (and therefore less costly) intake and discharge pipes, which would avoid the dangers of turbulent surf. The plant itself, however, would require protection from the marine environment, such as breakwaters and erosion-resistant foundations, and the plant output would need to be transmitted to shore.
Shelf based.
To avoid the turbulent surf zone as well as to move closer to the cold-water resource, OTEC plants can be mounted to the continental shelf at depths up to . A shelf-mounted plant could be towed to the site and affixed to the sea bottom. This type of construction is already used for offshore oil rigs. The complexities of operating an OTEC plant in deeper water may make them more expensive than land-based approaches. Problems include the stress of open-ocean conditions and more difficult product delivery. Addressing strong ocean currents and large waves adds engineering and construction expense. Platforms require extensive pilings to maintain a stable base. Power delivery can require long underwater cables to reach land. For these reasons, shelf-mounted plants are less attractive.
Floating.
Floating OTEC facilities operate off-shore. Although potentially optimal for large systems, floating facilities present several difficulties. The difficulty of mooring plants in very deep water complicates power delivery. Cables attached to floating platforms are more susceptible to damage, especially during storms. Cables at depths greater than 1000 meters are difficult to maintain and repair. Riser cables, which connect the sea bed and the plant, need to be constructed to resist entanglement.
As with shelf-mounted plants, floating plants need a stable base for continuous operation. Major storms and heavy seas can break the vertically suspended cold-water pipe and interrupt warm water intake as well. To help prevent these problems, pipes can be made of flexible polyethylene attached to the bottom of the platform and gimballed with joints or collars. Pipes may need to be uncoupled from the plant to prevent storm damage. As an alternative to a warm-water pipe, surface water can be drawn directly into the platform; however, it is necessary to prevent the intake flow from being damaged or interrupted during violent motions caused by heavy seas.
Connecting a floating plant to power delivery cables requires the plant to remain relatively stationary. Mooring is an acceptable method, but current mooring technology is limited to depths of about . Even at shallower depths, the cost of mooring may be prohibitive.
Political concerns.
Because OTEC facilities are more-or-less stationary surface platforms, their exact location and legal status may be affected by the United Nations Convention on the Law of the Sea treaty (UNCLOS). This treaty grants coastal nations zones of varying legal authority from land, creating potential conflicts and regulatory barriers. OTEC plants and similar structures would be considered artificial islands under the treaty, giving them no independent legal status. OTEC plants could be perceived as either a threat or potential partner to fisheries or to seabed mining operations controlled by the International Seabed Authority.
Cost and economics.
Because OTEC systems have not yet been widely deployed, cost estimates are uncertain. A 2010 study by University of Hawaii estimated the cost of electricity for OTEC at 94.0 cents per kilowatt hour (kWh) for a 1.4 MW plant, 44.0 cents per kWh for a 10 MW plant, and 18.0 cents per kWh for a 100 MW plant. A 2015 report by the organization Ocean Energy Systems under the International Energy Agency gave an estimate of about 20.0 cents per kWh for 100 MW plants. Another study estimated power generation costs as low as 7.0 cents per kWh. Comparing to other energy sources, a 2019 study by Lazard estimated the unsubsidized cost of electricity to 3.2 to 4.2 cents per kWh for Solar PV at utility scale and 2.8 to 5.4 cents per kWh for wind power.
A report published by IRENA in 2014 claimed that commercial use of OTEC technology can be scaled in a variety of ways. “...small-scale OTEC plants can be made to accommodate the electricity production of small communities (5,000–50,000 residents), but would require the production of valuable by-products – like fresh water or cooling – to be economically viable”. Larger-scale OTEC plants would have much higher overhead and installation costs.
Beneficial factors that should be taken into account include OTEC's lack of waste products and fuel consumption, the area in which it is available (often within 20° of the equator), the geopolitical effects of petroleum dependence, compatibility with alternate forms of ocean power such as wave energy, tidal energy and methane hydrates, and supplemental uses for the seawater.
Some proposed projects.
OTEC projects under consideration include a small plant for the U.S. Navy base on the British overseas territory island of Diego Garcia in the Indian Ocean. Ocean Thermal Energy Corporation (formerly OCEES International, Inc.) is working with the U.S. Navy on a design for a proposed 13-MW OTEC plant, to replace the current diesel generators. The OTEC plant would also provide 1.25 million gallons per day of potable water. This project is currently waiting for changes in US military contract policies. OTE has proposed building a 10-MW OTEC plant on Guam.
Bahamas.
Ocean Thermal Energy Corporation (OTE) currently has plans to install two 10 MW OTEC plants in the US Virgin Islands and a 5–10 MW OTEC facility in The Bahamas. OTE has also designed the world's largest Seawater Air Conditioning (SWAC) plant for a resort in The Bahamas, which will use cold deep seawater as a method of air-conditioning. In mid-2015, the 95%-complete project was temporarily put on hold while the resort resolved financial and ownership issues. On August 22, 2016, the government of the Bahamas announced that a new agreement had been signed under which the Baha Mar resort will be completed. On September 27, 2016, Bahamian Prime Minister Perry Christie announced that construction had resumed on Baha Mar, and that the resort was slated to open in March 2017.
This is on hold, and may never resume.
Hawaii.
Lockheed Martin's Alternative Energy Development team has partnered with Makai Ocean Engineering
to complete the final design phase of a 10-MW closed cycle OTEC pilot system which planned to become operational in Hawaii in the 2012–2013 time frame. This system was designed to expand to 100-MW commercial systems in the near future. In November, 2010 the U.S. Naval Facilities Engineering Command (NAVFAC) awarded Lockheed Martin a US$4.4 million contract modification to develop critical system components and designs for the plant, adding to the 2009 $8.1 million contract and two Department of Energy grants totaling over $1 million in 2008 and March 2010.
A small but operational ocean thermal energy conversion (OTEC) plant was inaugurated in Hawaii in August 2015. The opening of the research and development 100-kilowatt facility marked the first time a closed-cycle OTEC plant was connected to the U.S. grid.
Hainan.
On April 13, 2013, Lockheed contracted with the Reignwood Group to build a 10 megawatt plant off the coast of southern China to provide power for a planned resort on Hainan island. A plant of that size would power several thousand homes. The Reignwood Group acquired Opus Offshore in 2011 which forms its Reignwood Ocean Engineering division which also is engaged in development of deepwater drilling.
Japan.
Currently the only continuously operating OTEC system is located in Okinawa Prefecture, Japan. Governmental support, local community support, and advanced research carried out by Saga University were key for the contractors, IHI Plant Construction Co. Ltd, Yokogawa Electric Corporation, and Xenesys Inc, to succeed with this project. Work is being conducted to develop a 1MW facility on Kume Island requiring new pipelines. In July 2014, more than 50 members formed the Global Ocean reSource and Energy Association (GOSEA), an international organization formed to promote the development of the Kumejima Model and work towards the installation of larger deep seawater pipelines and a 1MW OTEC facility. The companies involved in the current OTEC projects, along with other interested parties, have developed plans for offshore OTEC systems as well. For more details, see "Currently Operating OTEC Plants" above.
United States Virgin Islands.
On March 5, 2014, Ocean Thermal Energy Corporation (OTEC) and the 30th Legislature of the United States Virgin Islands (USVI) signed a Memorandum of Understanding to move forward with a study to evaluate the feasibility and potential benefits to the USVI of installing on-shore Ocean Thermal Energy Conversion (OTEC) renewable energy power plants and Seawater Air Conditioning (SWAC) facilities. The benefits to be assessed in the USVI study include both the baseload (24/7) clean electricity generated by OTEC, as well as the various related products associated with OTEC and SWAC, including abundant fresh drinking water, energy-saving air conditioning, sustainable aquaculture and mariculture, and agricultural enhancement projects for the Islands of St Thomas and St Croix.
On July 18, 2016, OTE's application to be a Qualifying Facility was approved by the Virgin Islands Public Services Commission. OTE also received permission to begin negotiating contracts associated with this project.
Kiribati.
South Korea's Research Institute of Ships and Ocean Engineering (KRISO) received approval in principle from Bureau Veritas for their 1MW offshore OTEC design. No timeline was given for the project which will be located 6 km offshore of the Republic of Kiribati.
Martinique.
Akuo Energy and DCNS were awarded NER300 funding on July 8, 2014 for their NEMO (New Energy for Martinique and Overseas) project which is expected to be a 10.7MW-net offshore facility completed in 2020. The award to help with development totaled 72 million Euro.
Maldives.
On February 16, 2018, Global OTEC Resources announced plans to build a 150 kW plant in the Maldives, designed bespoke for hotels and resorts. "All these resorts draw their power from diesel generators. Moreover, some individual resorts consume 7,000 litres of diesel a day to meet demands which equates to over 6,000 tonnes of CO2 annually," said Director Dan Grech. The EU awarded a grant and Global OTEC resources launched a crowdfunding campaign for the rest.
Related activities.
OTEC has uses other than power production.
Desalination.
Desalinated water can be produced in open- or hybrid-cycle plants using surface condensers to turn evaporated seawater into potable water. System analysis indicates that a 2-megawatt plant could produce about of desalinated water each day. Another system patented by Richard Bailey creates condensate water by regulating deep ocean water flow through surface condensers correlating with fluctuating dew-point temperatures. This condensation system uses no incremental energy and has no moving parts.
On March 22, 2015, Saga University opened a Flash-type desalination demonstration facility on Kumejima. This satellite of their Institute of Ocean Energy uses post-OTEC deep seawater from the Okinawa OTEC Demonstration Facility and raw surface seawater to produce desalinated water. Air is extracted from the closed system with a vacuum pump. When raw sea water is pumped into the flash chamber it boils, allowing pure steam to rise and the salt and remaining seawater to be removed. The steam is returned to liquid in a heat exchanger with cold post-OTEC deep seawater. The desalinated water can be used in hydrogen production or drinking water (if minerals are added).
The NELHA plant established in 1993 produced an average of 7,000 gallons of freshwater per day. KOYO USA was established in 2002 to capitalize on this new economic opportunity. KOYO bottles the water produced by the NELHA plant in Hawaii. With the capacity to produce one million bottles of water every day, KOYO is now Hawaii's biggest exporter with $140 million in sales.[81]
Air conditioning.
The cold seawater made available by an OTEC system creates an opportunity to provide large amounts of cooling to industries and homes near the plant. The water can be used in chilled-water coils to provide air conditioning for buildings. It is estimated that a pipe in diameter can deliver 4,700 gallons of water per minute. Water at could provide more than enough air conditioning for a large building. Operating 8,000 hours per year in lieu of electrical conditioning selling for 5–10¢ per kilowatt-hour, it would save $200,000-$400,000 in energy bills annually.
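The annual savings figure above follows from simple arithmetic once a displaced chiller load is assumed. In the sketch below, only the 8,000 operating hours and the 5–10¢/kWh tariff come from the text; the roughly 500 kW of avoided electrical load is an assumption chosen to show how the quoted range arises.

```python
# Back-of-envelope check of the annual air-conditioning savings quoted above.
# The displaced electrical chiller load is an assumed illustrative value.

displaced_electric_kw = 500.0        # kW of chiller load avoided (assumption)
hours_per_year = 8000.0              # operating hours, from the text

for price_per_kwh in (0.05, 0.10):   # 5-10 cents per kWh, from the text
    savings = displaced_electric_kw * hours_per_year * price_per_kwh
    print(f"At ${price_per_kwh:.2f}/kWh: ${savings:,.0f} per year")
# Yields $200,000 and $400,000 per year, matching the range quoted above
# for an avoided electrical load of about 500 kW.
```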
The InterContinental Resort and Thalasso-Spa on the island of Bora Bora uses an SWAC system to air-condition its buildings. The system passes seawater through a heat exchanger where it cools freshwater in a closed loop system. This freshwater is then pumped to buildings and directly cools the air.
In 2010, Copenhagen Energy opened a district cooling plant in Copenhagen, Denmark. The plant delivers cold seawater to commercial and industrial buildings, and has reduced electricity consumption by 80 percent. Ocean Thermal Energy Corporation (OTE) has designed a 9800-ton SDC system for a vacation resort in The Bahamas.
Chilled-soil agriculture.
OTEC technology supports chilled-soil agriculture. When cold seawater flows through underground pipes, it chills the surrounding soil. The temperature difference between roots in the cool soil and leaves in the warm air allows plants that evolved in temperate climates to be grown in the subtropics. Dr. John P. Craven, Dr. Jack Davidson and Richard Bailey patented this process and demonstrated it at a research facility at the Natural Energy Laboratory of Hawaii Authority (NELHA). The research facility demonstrated that more than 100 different crops can be grown using this system. Many normally could not survive in Hawaii or at Keahole Point.
Japan has also been researching agricultural uses of Deep Sea Water since 2000 at the Okinawa Deep Sea Water Research Institute on Kume Island. The Kume Island facilities use regular water cooled by Deep Sea Water in a heat exchanger run through pipes in the ground to cool soil. Their techniques have developed an important resource for the island community as they now produce spinach, a winter vegetable, commercially year round. An expansion of the deep seawater agriculture facility was completed by Kumejima Town next to the OTEC Demonstration Facility in 2014. The new facility is for researching the economic practicality of chilled-soil agriculture on a larger scale.
Aquaculture.
Aquaculture is the best-known byproduct, because it reduces the financial and energy costs of pumping large volumes of water from the deep ocean. Deep ocean water contains high concentrations of essential nutrients that are depleted in surface waters due to biological consumption. This artificial upwelling mimics the natural upwellings that are responsible for fertilizing and supporting the world's largest marine ecosystems, and the largest densities of life on the planet.
Cold-water sea animals, such as salmon and lobster, thrive in this nutrient-rich, deep seawater. Microalgae such as "Spirulina", a health food supplement, also can be cultivated. Deep-ocean water can be combined with surface water to deliver water at an optimal temperature.
Non-native species such as salmon, lobster, abalone, trout, oysters, and clams can be raised in pools supplied by OTEC-pumped water. This extends the variety of fresh seafood products available for nearby markets. Such low-cost refrigeration can be used to maintain the quality of harvested fish, which deteriorate quickly in warm tropical regions. In Kona, Hawaii, aquaculture companies working with NELHA generate about $40 million annually, a significant portion of Hawaii's GDP.
Hydrogen production.
Hydrogen can be produced via electrolysis using OTEC electricity. Generated steam with electrolyte compounds added to improve efficiency is a relatively pure medium for hydrogen production. OTEC can be scaled to generate large quantities of hydrogen. The main challenge is cost relative to other energy sources and fuels.
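As a hedged illustration of scale, the sketch below estimates annual hydrogen output for a hypothetical OTEC-powered electrolyser; the 100 MW plant size, 90% capacity factor and 50 kWh/kg electrolyser consumption are all assumed round numbers, not figures from the text.

```python
# Illustrative estimate of hydrogen output from an OTEC-powered electrolyser.
# Plant size, capacity factor and specific electricity consumption are
# assumed round numbers chosen only to show the arithmetic.

plant_mw = 100.0            # assumed OTEC plant net output
capacity_factor = 0.90      # OTEC can run near base load (assumed value)
kwh_per_kg_h2 = 50.0        # typical electrolyser consumption (assumed)

annual_kwh = plant_mw * 1000.0 * 8760.0 * capacity_factor
h2_tonnes = annual_kwh / kwh_per_kg_h2 / 1000.0
print(f"~{h2_tonnes:,.0f} tonnes of hydrogen per year")   # roughly 16,000 t/yr
```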
Mineral extraction.
The ocean contains 57 trace elements in salts and other forms and dissolved in solution. In the past, most economic analyses concluded that mining the ocean for trace elements would be unprofitable, in part because of the energy required to pump the water. Mining generally targets minerals that occur in high concentrations, and can be extracted easily, such as magnesium. With OTEC plants supplying water, the only cost is for extraction.
The Japanese investigated the possibility of extracting uranium and found developments in other technologies (especially materials sciences) were improving the prospects.
Climate control.
The ocean thermal gradient could, in principle, be used to enhance rainfall and moderate high summer temperatures in the tropics, to the benefit of people, flora and fauna. When sea surface temperatures are relatively high over an area, a zone of lower atmospheric pressure forms there compared with the pressure over the nearby land mass, inducing winds from the landmass towards the ocean. These oceanward winds are dry and warm, and contribute less to rainfall on the landmass than landward, moist winds do. For adequate rainfall and comfortable summer temperatures (below 35 °C) on the landmass, landward moist winds from the ocean are preferred. Selectively creating high-pressure zones over the sea by artificial upwelling could also be used to deflect or guide the normal monsoon winds towards the landmass. Artificial upwelling of nutrient-rich deep ocean water to the surface also enhances fisheries growth in tropical and temperate areas. It could further lead to enhanced carbon sequestration by the oceans through improved algae growth, and to mass gain by glaciers from extra snowfall, mitigating sea level rise and global warming. Tropical cyclones also do not pass through such high-pressure zones, since they intensify by gaining energy from warm surface waters.
In this concept, cold deep-sea water (<10 °C) is pumped to the sea surface to suppress the sea surface temperature (>26 °C), using electricity produced by mega-scale floating wind turbine plants in the deep sea. The lower sea surface temperature would raise the local atmospheric pressure so that landward winds are created. To upwell the cold sea water, a stationary hydraulically driven propeller (≈50 m diameter) is located on the deep sea floor at 500 to 1,000 m depth, with a flexible draft tube extending up to the sea surface. The draft tube is anchored to the sea bed at its bottom and to floating pontoons at the surface. The flexible draft tube would not collapse, since its internal pressure exceeds the outside pressure when colder water is pumped to the surface. The Middle East, northeast Africa, the Indian subcontinent and Australia, regions prone to hot, dry summers and erratic rainfall, could gain relief by pumping deep-sea water to the surface of the Persian Gulf, the Red Sea, the Indian Ocean and the Pacific Ocean respectively.
Thermodynamics.
A rigorous treatment of OTEC reveals that a 20 °C temperature difference will provide as much energy as a hydroelectric plant with 34 m head for the same volume of water flow.
The low temperature difference means that water volumes must be very large to extract useful amounts of heat. A 100MW power plant would be expected to pump on the order of 12 million gallons (44,400 tonnes) per minute. For comparison, pumps must move a mass of water greater than the weight of the "battleship Bismarck", which weighed 41,700 tonnes, every minute. This makes pumping a substantial parasitic drain on energy production in OTEC systems, with one Lockheed design consuming 19.55 MW in pumping costs for every 49.8 MW net electricity generated. For OTEC schemes using heat exchangers, to handle this volume of water the exchangers need to be enormous compared to those used in conventional thermal power generation plants, making them one of the most critical components due to their impact on overall efficiency. A 100 MW OTEC power plant would require 200 exchangers each larger than a 20-foot shipping container making them the single most expensive component.
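The following sketch makes the bookkeeping behind these figures explicit. It uses only the 34 m head equivalence and the Lockheed pumping numbers quoted above, plus standard values for gravity and the specific heat of seawater, and it assumes pumping is the dominant parasitic load.

```python
# Rough bookkeeping behind the "large water volumes" point above.
# Physical constants are standard values; plant figures are those quoted
# in this section (34 m head equivalence, Lockheed pumping numbers).

G = 9.81          # m/s^2, gravitational acceleration
CP = 4186.0       # J/(kg*K), approximate specific heat of seawater

# Work per kilogram of water implied by a 34 m hydroelectric head:
work_per_kg = G * 34.0                      # ~334 J/kg
# Heat released if that same kilogram of warm water were cooled by 20 °C:
heat_per_kg = CP * 20.0                     # ~83,700 J/kg
print(f"Extractable work ~{work_per_kg:.0f} J/kg vs ~{heat_per_kg:.0f} J/kg of heat")
print(f"-> only ~{work_per_kg / heat_per_kg:.1%} of the heat flow can become work")

# Parasitic pumping share for the Lockheed design quoted above
# (assumes pumping is the main parasitic load):
pumping_mw, net_mw = 19.55, 49.8
gross_mw = pumping_mw + net_mw
print(f"Pumping is ~{pumping_mw / gross_mw:.0%} of ~{gross_mw:.1f} MW gross output")
```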
Variation of ocean temperature with depth.
The total insolation received by the oceans (covering 70% of the earth's surface, with a clearness index of 0.5 and average energy retention of 15%) is: 5.45×10¹⁸ MJ/yr × 0.7 × 0.5 × 0.15 = 2.87×10¹⁷ MJ/yr.
We can use Beer–Lambert–Bouguer's law to quantify the solar energy absorption by water,
formula_0
where, "y" is the depth of water, "I" is intensity and "μ" is the absorption coefficient.
Solving the above differential equation,
formula_1
The absorption coefficient "μ" may range from 0.05 m⁻¹ for very clear fresh water to 0.5 m⁻¹ for very salty water.
Since the intensity falls exponentially with depth "y", heat absorption is concentrated at the top layers. Typically in the tropics, surface temperature values are in excess of , while at , the temperature is about . The warmer (and hence lighter) waters at the surface mean there are no thermal convection currents. Due to the small temperature gradients, heat transfer by conduction is too low to equalize the temperatures. The ocean is thus both a practically infinite heat source and a practically infinite heat sink.
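The attenuation law (formula_1) and the insolation arithmetic above can be evaluated numerically. In the sketch below the depths are illustrative choices and the surface intensity is normalized to 1.

```python
import math

# Exponential attenuation of sunlight with depth (formula_1 above), evaluated
# for the two absorption coefficients quoted in the text; depths are illustrative.
for mu in (0.05, 0.5):                      # 1/m, very clear vs very salty water
    for depth in (1.0, 10.0, 50.0):         # metres
        frac = math.exp(-mu * depth)        # I(y)/I0
        print(f"mu={mu:4.2f} /m, y={depth:4.0f} m -> I/I0 = {frac:.3f}")

# Insolation arithmetic from the paragraph above:
total = 5.45e18 * 0.7 * 0.5 * 0.15          # MJ/yr absorbed by the oceans
print(f"Absorbed solar energy ~ {total:.2e} MJ/yr")   # ~2.9e17 MJ/yr
```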
This temperature difference varies with latitude and season, with the maximum in tropical, subtropical and equatorial waters. Hence the tropics are generally the best OTEC locations.
Open/Claude cycle.
In this scheme, warm surface water at around enters an evaporator at a pressure slightly below the saturation pressure, causing it to vaporize.
formula_2
Where "Hf" is enthalpy of liquid water at the inlet temperature, "T"1.
This temporarily superheated water undergoes volume boiling, as opposed to the pool boiling of conventional boilers, where the heating surface is in contact with the liquid. Thus the water partially flashes to steam, with two-phase equilibrium prevailing. Suppose that the pressure inside the evaporator is maintained at the saturation pressure corresponding to "T"2.
formula_3
Here, "x"2 is the fraction of water by mass that vaporizes. The warm water mass flow rate per unit turbine mass flow rate is 1/"x"2.
The low pressure in the evaporator is maintained by a vacuum pump that also removes the dissolved non-condensable gases from the evaporator. The evaporator now contains a mixture of water and steam of very low vapor quality (steam content). The steam is separated from the water as saturated vapor. The remaining water is saturated and is discharged to the ocean in the open cycle. The steam is a low pressure/high specific volume working fluid. It expands in a special low pressure turbine.
formula_4
Here, "Hg" corresponds to "T"2. For an ideal isentropic (reversible adiabatic) turbine,
formula_5
The above equation corresponds to the temperature at the exhaust of the turbine, "T"5. "x"5,"s" is the mass fraction of vapor at state 5.
The enthalpy at "T"5 is,
formula_6
This enthalpy is lower. The adiabatic reversible turbine work = "H"3-"H"5,"s".
Actual turbine work "W"T = ("H"3-"H"5,"s") x "polytropic efficiency"
formula_7
The condenser temperature and pressure are lower. Since the turbine exhaust is to be discharged back into the ocean, a direct contact condenser is used to mix the exhaust with cold water, which results in near-saturated water. That water is then discharged back to the ocean.
"H"6="Hf", at "T"5. "T"7 is the temperature of the exhaust mixed with cold sea water, as the vapor content now is negligible,
formula_8
The temperature differences between stages include that between warm surface water and working steam, that between exhaust steam and cooling water, and that between cooling water reaching the condenser and deep water. These represent external irreversibilities that reduce the overall temperature difference.
The cold water flow rate per unit turbine mass flow rate,
formula_9
Turbine mass flow rate, formula_10
Warm water mass flow rate, formula_11
Cold water mass flow rate, formula_12
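A minimal numerical sketch of the flash step described above is given below. The inlet and flash temperatures and the latent heat are assumed, textbook-style values (roughly 26 °C, 22 °C and 2,450 kJ/kg), not data from any particular plant.

```python
# Order-of-magnitude sketch of the open (Claude) cycle flash step above.
# All numbers are assumed textbook values, not data from a specific plant.

CP_WATER = 4.186     # kJ/(kg*K), specific heat of water (approx.)
H_FG = 2450.0        # kJ/kg, latent heat of vaporization near 20-25 °C (approx.)

t_warm_in = 26.0     # °C, warm surface water entering the evaporator (assumed)
t_flash = 22.0       # °C, saturation temperature maintained in the evaporator (assumed)

# Energy balance across the flash: cp*(T1 - T2) ≈ x2 * h_fg
x2 = CP_WATER * (t_warm_in - t_flash) / H_FG
warm_water_per_kg_steam = 1.0 / x2

print(f"Flash fraction x2 ≈ {x2:.4f} ({x2:.2%} of the warm water becomes steam)")
print(f"Warm-water flow per unit turbine (steam) flow ≈ {warm_water_per_kg_steam:.0f}")
# Under these assumptions only of order 0.5-1% of the warm water flashes to
# steam, which is why the warm-water flow per unit turbine flow (1/x2) is so large.
```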
Closed Anderson cycle.
As developed starting in the 1960s by J. Hilbert Anderson of Sea Solar Power, Inc., in this cycle, "QH" is the heat transferred in the evaporator from the warm sea water to the working fluid. The working fluid exits the evaporator as a gas near its dew point.
The high-pressure, high-temperature gas then is expanded in the turbine to yield turbine work, "WT". The working fluid is slightly superheated at the turbine exit and the turbine typically has an efficiency of 90% based on reversible, adiabatic expansion.
From the turbine exit, the working fluid enters the condenser where it rejects heat, "-QC", to the cold sea water. The condensate is then compressed to the highest pressure in the cycle, requiring condensate pump work, "WC". Thus, the Anderson closed cycle is a Rankine-type cycle similar to the conventional power plant steam cycle except that in the Anderson cycle the working fluid is never superheated more than a few degrees Fahrenheit. Owing to viscosity effects, working fluid pressure drops in both the evaporator and the condenser. This pressure drop, which depends on the types of heat exchangers used, must be considered in final design calculations but is ignored here to simplify the analysis. Thus, the parasitic condensate pump work, "WC", computed here will be lower than if the heat exchanger pressure drop was included. The major additional parasitic energy requirements in the OTEC plant are the cold water pump work, "WCT", and the warm water pump work, "WHT". Denoting all other parasitic energy requirements by "WA", the net work from the OTEC plant, "WNP" is
formula_13
The thermodynamic cycle undergone by the working fluid can be analyzed without detailed consideration of the parasitic energy requirements. From the first law of thermodynamics, the energy balance for the working fluid as the system is
formula_14
where "WN" = "WT" + "WC" is the net work for the thermodynamic cycle. For the idealized case in which there is no working fluid pressure drop in the heat exchangers,
formula_15
and
formula_16
so that the net thermodynamic cycle work becomes
formula_17
Subcooled liquid enters the evaporator. Due to the heat exchange with warm sea water, evaporation takes place and usually superheated vapor leaves the evaporator. This vapor drives the turbine and the 2-phase mixture enters the condenser. Usually, the subcooled liquid leaves the condenser and finally, this liquid is pumped to the evaporator completing a cycle.
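The net-power bookkeeping described above can be illustrated with placeholder numbers. Every value in the sketch below is a hypothetical round figure chosen only to show how the turbine, condensate-pump, seawater-pump and other parasitic terms combine, consistent with the relations given in this section.

```python
# Net-power bookkeeping for the closed (Anderson-type) cycle described above:
#   W_N  = W_T + W_C           (net thermodynamic cycle work)
#   W_NP = W_N - W_CT - W_HT - W_A
# All numbers are hypothetical placeholders, not results for any real design.

w_turbine = 16.0        # MW, gross turbine work W_T (assumed)
w_cond_pump = -0.5      # MW, condensate pump work W_C, a work input (assumed)
w_cold_pump = 3.0       # MW, cold seawater pump work W_CT (assumed)
w_warm_pump = 2.0       # MW, warm seawater pump work W_HT (assumed)
w_other = 0.5           # MW, all other parasitic loads W_A (assumed)

w_net_cycle = w_turbine + w_cond_pump
w_net_plant = w_net_cycle - w_cold_pump - w_warm_pump - w_other

print(f"Cycle work W_N    = {w_net_cycle:.1f} MW")
print(f"Plant output W_NP = {w_net_plant:.1f} MW")
```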
Environmental impact.
Carbon dioxide dissolved in deep cold and high pressure layers is brought up to the surface and released as the water warms.
Mixing of deep ocean water with shallower water brings up nutrients and makes them available to shallow water life. This may be an advantage for aquaculture of commercially important species, but may also unbalance the ecological system around the power plant.
OTEC plants use very large flows of warm surface seawater and cold deep seawater to generate constant renewable power. The deep seawater is oxygen deficient and generally 20–40 times more nutrient rich (in nitrate and nitrite) than shallow seawater. When these plumes are mixed, they are slightly denser than the ambient seawater. Though no large scale physical environmental testing of OTEC has been done, computer models have been developed to simulate the effect of OTEC plants.
Hydrodynamic modeling.
In 2010, a computer model was developed to simulate the physical oceanographic effects of one or several 100 megawatt OTEC plant(s). The model suggests that OTEC plants can be configured such that the plant can conduct continuous operations, with resulting temperature and nutrient variations that are within naturally occurring levels. Studies to date suggest that by discharging the OTEC flows downwards at a depth below 70 meters, the dilution is adequate and nutrient enrichment is small enough so that 100-megawatt OTEC plants could be operated in a sustainable manner on a continuous basis.
Biological modeling.
The nutrients from an OTEC discharge could potentially cause increased biological activity if they accumulate in large quantities in the photic zone. In 2011 a biological component was added to the hydrodynamic computer model to simulate the biological response to plumes from 100 megawatt OTEC plants. In all cases modeled (discharge at 70 meters depth or more), no unnatural variations occur in the upper 40 meters of the ocean's surface. The picoplankton response in the 70–110 meter depth layer is approximately a 10–25% increase, which is well within naturally occurring variability. The nanoplankton response is negligible. The enhanced productivity of diatoms (microplankton) is small. The subtle phytoplankton increase of the baseline OTEC plant suggests that higher-order biochemical effects will be very small.
Studies.
A previous Final Environmental Impact Statement (EIS) for the United States' NOAA from 1981 is available, but needs to be brought up to current oceanographic and engineering standards. Studies have been done to propose the best environmental baseline monitoring practices, focusing on a set of ten chemical oceanographic parameters relevant to OTEC. Most recently, NOAA held an OTEC Workshop in 2010 and 2012 seeking to assess the physical, chemical, and biological impacts and risks, and identify information gaps or needs.
The Tethys database provides access to scientific literature and general information on the potential environmental effects of OTEC.
Technical difficulties.
Dissolved gases.
The performance of direct contact heat exchangers operating at typical OTEC boundary conditions is important to the Claude cycle. Many early Claude cycle designs used a surface condenser since their performance was well understood. However, direct contact condensers offer significant disadvantages. As cold water rises in the intake pipe, the pressure decreases to the point where gas begins to evolve. If a significant amount of gas comes out of solution, placing a gas trap before the direct contact heat exchangers may be justified. Experiments simulating conditions in the warm water intake pipe indicated about 30% of the dissolved gas evolves in the top of the tube. The trade-off between pre-deaeration of the seawater and expulsion of non-condensable gases from the condenser depends on the gas evolution dynamics, deaerator efficiency, head loss, vent compressor efficiency and parasitic power. Experimental results indicate vertical spout condensers perform some 30% better than falling jet types.
Microbial fouling.
Because raw seawater must pass through the heat exchanger, care must be taken to maintain good thermal conductivity. Biofouling layers as thin as can degrade heat exchanger performance by as much as 50%. A 1977 study in which mock heat exchangers were exposed to seawater for ten weeks concluded that although the level of microbial fouling was low, the thermal conductivity of the system was significantly impaired. The apparent discrepancy between the level of fouling and the heat transfer impairment is the result of a thin layer of water trapped by the microbial growth on the surface of the heat exchanger.
Another study concluded that fouling degrades performance over time, and determined that although regular brushing was able to remove most of the microbial layer, over time a tougher layer formed that could not be removed through simple brushing. The study passed sponge rubber balls through the system. It concluded that although the ball treatment decreased the fouling rate it was not enough to completely halt growth and brushing was occasionally necessary to restore capacity. The microbes regrew more quickly later in the experiment (i.e. brushing became necessary more often) replicating the results of a previous study. The increased growth rate after subsequent cleanings appears to result from selection pressure on the microbial colony.
Both continuous chlorination for 1 hour per day and intermittent schedules, in which periods of unchecked fouling were followed by chlorination periods (again 1 hour per day), were studied. Chlorination slowed but did not stop microbial growth; however, chlorination levels of 0.1 mg per liter for 1 hour per day may prove effective for long-term operation of a plant. The study concluded that although microbial fouling was an issue for the warm surface-water heat exchanger, the cold-water heat exchanger suffered little or no biofouling and only minimal inorganic fouling.
Besides water temperature, microbial fouling also depends on nutrient levels, with growth occurring faster in nutrient rich water. The fouling rate also depends on the material used to construct the heat exchanger. Aluminium tubing slows the growth of microbial life, although the oxide layer which forms on the inside of the pipes complicates cleaning and leads to larger efficiency losses. In contrast, titanium tubing allows biofouling to occur faster but cleaning is more effective than with aluminium.
Sealing.
The evaporator, turbine, and condenser operate in partial vacuum ranging from 3% to 1% of atmospheric pressure. The system must be carefully sealed to prevent in-leakage of atmospheric air that can degrade or shut down operation. In closed-cycle OTEC, the specific volume of low-pressure steam is very large compared to that of the pressurized working fluid. Components must have large flow areas to ensure steam velocities do not attain excessively high values.
Parasitic power consumption by exhaust compressor.
An approach for reducing the exhaust compressor parasitic power loss is as follows. After most of the steam has been condensed by spout condensers, the non-condensable gas-steam mixture is passed through a counter-current region, which increases the gas-steam reaction by a factor of five. The result is an 80% reduction in the exhaust pumping power requirements.
Cold air/warm water conversion.
In winter in coastal Arctic locations, the temperature difference between the seawater and ambient air can be as high as 40 °C (72 °F). Closed-cycle systems could exploit the air-water temperature difference. Eliminating seawater extraction pipes might make a system based on this concept less expensive than OTEC. This technology is due to H. Barjot, who suggested butane as the cryogen because of its boiling point of and its non-solubility in water. Assuming a realistic efficiency of 4%, calculations show that the amount of energy generated from one cubic meter of water at a temperature of in a place with an air temperature of equals the amount of energy generated by letting this cubic meter of water run through a hydroelectric plant of 4000 feet (1,200 m) height.
Barjot Polar Power Plants could be located on islands in the polar region or designed as floating barges or platforms attached to the ice cap. The weather station Myggbuka on Greenland's east coast, for example, which is only 2,100 km away from Glasgow, records monthly mean temperatures below during 6 winter months of the year.
Application of the thermoelectric effect.
In 1979 SERI proposed using the Seebeck effect to produce power with a total conversion efficiency of 2%.
In 2014 Liping Liu, Associate Professor at Rutgers University, envisioned an OTEC system that utilises the solid state thermoelectric effect rather than the fluid cycles traditionally used.
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "-\\frac{dI(y)}{dy}=\\mu I"
},
{
"math_id": 1,
"text": " I(y)=I_{0}\\exp(-\\mu y) \\,"
},
{
"math_id": 2,
"text": " H_{1}=H_{f} \\,"
},
{
"math_id": 3,
"text": "H_{2}=H_{1}=H_{f}+x_{2}H_{fg} \\,"
},
{
"math_id": 4,
"text": "H_{3}=H_{g} \\,"
},
{
"math_id": 5,
"text": "s_{5,s}=s_{3}=s_{f}+x_{5,s}s_{fg} \\, "
},
{
"math_id": 6,
"text": " H_{5,s}=H_{f}+x_{5,s}H_{fg} \\,"
},
{
"math_id": 7,
"text": "H_{5}=H_{3}-\\ \\mathrm{actual}\\ \\mathrm{work} "
},
{
"math_id": 8,
"text": "H_{7}\\approx H_{f}\\,\\ at\\ T_{7} \\,"
},
{
"math_id": 9,
"text": "\\dot{m_{c}=\\frac{H_{5}-\\ H_{6}}{H_{6}-\\ H_{7}}} \\,"
},
{
"math_id": 10,
"text": "\\dot{M_{T}}=\\frac{\\mathrm{turbine}\\ \\mathrm{work}\\ \\mathrm{required}}{W_{T}} "
},
{
"math_id": 11,
"text": " \\dot{M_{w}}=\\dot{M_{T}\\dot{m_{w}}} \\,"
},
{
"math_id": 12,
"text": "\\dot{\\dot{M_{c}}=\\dot{M_{T}m_{C}}} \\,"
},
{
"math_id": 13,
"text": " W_{NP}=W_{T}-W_{C}-W_{CT}-W_{HT}-W_{A} \\,"
},
{
"math_id": 14,
"text": " W_{N}=Q_{H}-Q_{C} \\,"
},
{
"math_id": 15,
"text": " Q_{H}=\\int_{H}T_{H}ds \\,"
},
{
"math_id": 16,
"text": " Q_{C}=\\int_{C}T_{C}ds \\,"
},
{
"math_id": 17,
"text": " W_{N}=\\int_{H}T_{H}ds-\\int_{C}T_{C}ds \\,"
}
]
| https://en.wikipedia.org/wiki?curid=68498 |
68498620 | Conservativity | Proposed linguistic universal
In formal semantics, conservativity is a proposed linguistic universal which states that any determiner formula_0 must obey the equivalence formula_1. For instance, the English determiner "every" can be seen to be conservative by the equivalence of the following two sentences, schematized in generalized quantifier notation: "Every aardvark bites" formula_2 and "Every aardvark is an aardvark that bites" formula_3.
Conceptually, conservativity can be understood as saying that the elements of formula_4 which are not elements of formula_5 are not relevant for evaluating the truth of the determiner phrase as a whole. For instance, truth of the first sentence above does not depend on which biting non-aardvarks exist.
Conservativity is significant to semantic theory because there are many logically possible determiners which are not attested as denotations of natural language expressions. For instance, consider the imaginary determiner formula_6 defined so that formula_7 is true iff formula_8. If there are 50 biting aardvarks, 50 non-biting aardvarks, and millions of non-aardvark biters, formula_7 will be false but formula_9 will be true.
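The finite-set version of this comparison can be checked mechanically. The following Python sketch (the set sizes and function names are illustrative choices, not taken from the literature) tests the defining equivalence on the aardvark/biter example above, using subset inclusion for "every" and cardinality comparison for the imaginary determiner "shmore":
def every(A, B):
    return A <= B                      # "every A is a B": A is a subset of B
def shmore(A, B):
    return len(A) > len(B)             # the imaginary, non-conservative determiner
def conservative_here(D, A, B):
    return D(A, B) == D(A, A & B)      # the equivalence D(A, B) <-> D(A, A intersect B)
aardvarks = set(range(100))                          # 50 biting + 50 non-biting aardvarks
biters = set(range(50)) | set(range(100, 1100))      # plus many non-aardvark biters
print(conservative_here(every, aardvarks, biters))   # True: both sides are False here
print(conservative_here(shmore, aardvarks, biters))  # False: shmore(A, B) is False, shmore(A, A & B) is True
A single instance like this can only refute conservativity, as it does for "shmore"; it cannot establish conservativity in general.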
Some potential counterexamples to conservativity have been observed, notably, the English expression "only". This expression has been argued to not be a determiner since it can stack with bona fide determiners and can combine with non-nominal constituents such as verb phrases.
Different analyses have treated conservativity as a constraint on the lexicon, a structural constraint arising from the architecture of the syntax-semantics interface, as well as constraint on learnability.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D"
},
{
"math_id": 1,
"text": "D(A,B) \\leftrightarrow D(A, A\\cap B)"
},
{
"math_id": 2,
"text": "\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\rightsquigarrow every(A,B)"
},
{
"math_id": 3,
"text": "\\ \\ \\rightsquigarrow every(A,A\\cap B)"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "shmore"
},
{
"math_id": 7,
"text": "shmore(A,B)"
},
{
"math_id": 8,
"text": "|A|>|B|"
},
{
"math_id": 9,
"text": "shmore(A, A\\cap B)"
}
]
| https://en.wikipedia.org/wiki?curid=68498620 |
68498668 | Ramanujan machine | Software that produces mathematical conjectures about continued fractions
The Ramanujan machine is a specialised software package, developed by a team of scientists at the Technion – Israel Institute of Technology, to discover new formulas in mathematics. It has been named after the Indian mathematician Srinivasa Ramanujan because it supposedly imitates the thought process of Ramanujan in his discovery of hundreds of formulas.
The machine has produced several conjectures in the form of continued fraction expansions of expressions involving some of the most important constants in mathematics like "e" and π (pi). Some of these conjectures produced by the Ramanujan machine have subsequently been proved true. The others continue to remain as conjectures. The software was conceptualised and developed by a group of undergraduates of the Technion under the guidance of Ido Kaminer, an electrical engineering faculty member of Technion. The details of the machine were published online on 3 February 2021 in the journal "Nature".
According to George Andrews, an expert on the mathematics of Ramanujan, even though some of the results produced by the Ramanujan machine are amazing and difficult to prove, the results produced by the machine are not of the caliber of Ramanujan and so calling the software the "Ramanujan machine" is slightly outrageous. Doron Zeilberger, an Israeli mathematician, has opined that the Ramanujan machine is a harbinger of a new methodology of doing mathematics.
Formulas discovered by the Ramanujan machine.
The following are some of the formulas discovered by the Ramanujan machine which have been later proved to be true:
formula_0
formula_1
The following are some of the many formulas conjectured by the Ramanujan machine whose truth or falsity has not yet been established:
formula_2
formula_3
In the last expression, the numbers 4, 14, 30, 52, . . . are defined by the sequence formula_4 for formula_5 and the numbers 8, 72, 288, 800, . . . are generated using the formula formula_6 for formula_7.
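The conjectured expansions can be checked numerically by evaluating truncations of the continued fractions. The following Python sketch (the function names are chosen here only for illustration) evaluates the last expression above from the bottom up, using the stated formulas for the partial terms, and compares the truncated value with 1/(1 − log 2):
from math import log

def truncated_cf(depth):
    a = lambda n: 3 * n**2 + 7 * n + 4       # 4, 14, 30, 52, 80, ...
    b = lambda n: 2 * n**2 * (n + 1)**2      # 8, 72, 288, 800, ...
    value = float(a(depth))
    # evaluate 4 - 8/(14 - 72/(30 - ...)) from the innermost term outward
    for n in range(depth, 0, -1):
        value = a(n - 1) - b(n) / value
    return value

print(truncated_cf(100))                     # truncation of the conjectured continued fraction
print(1 / (1 - log(2)))                      # the conjectured value, approximately 3.2589...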
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\cfrac{4}{3\\pi -8} = 3 - \\cfrac{1\\cdot 1}{6 - \\cfrac{2\\cdot 3}{9 - \\cfrac{3\\cdot 5}{12 - \\cfrac{4\\cdot 7}{15 - {_\\ddots}}}}}"
},
{
"math_id": 1,
"text": "\\cfrac{e}{e - 2} = 4 - \\cfrac{1}{5 - \\cfrac{1}{6 - \\cfrac{2}{7 - \\cfrac{3}{8 - {_\\ddots}}}}}"
},
{
"math_id": 2,
"text": "\\cfrac{8}{\\pi^2} = 1 - \\cfrac{2\\cdot 1^4 - 3\\cdot 1^3}{7 - \\cfrac{2\\cdot 2^4 - 3\\cdot 2^3}{19 - \\cfrac{2\\cdot 3^4 - 3\\cdot 3^3}{37 - \\cfrac{2\\cdot 4^4 - 3\\cdot 4^3}{61 - {_\\ddots}}}}}"
},
{
"math_id": 3,
"text": "\\cfrac{1}{1 - \\log 2} = 4 - \\cfrac{8}{14 - \\cfrac{72}{30 - \\cfrac{288}{52 - \\cfrac{800}{80 - {_\\ddots}}}}}"
},
{
"math_id": 4,
"text": "a_n=3n^2+7n+4"
},
{
"math_id": 5,
"text": "n=0,1,2,3, \\ldots"
},
{
"math_id": 6,
"text": "b_n= 2n^2(n+1)^2 "
},
{
"math_id": 7,
"text": "n=1,2,3 \\ldots"
}
]
| https://en.wikipedia.org/wiki?curid=68498668 |
68503 | Pseudometric space | Generalization of metric spaces in mathematics
In mathematics, a pseudometric space is a generalization of a metric space in which the distance between two distinct points can be zero. Pseudometric spaces were introduced by Đuro Kurepa in 1934. In the same way as every normed space is a metric space, every seminormed space is a pseudometric space. Because of this analogy, the term semimetric space (which has a different meaning in topology) is sometimes used as a synonym, especially in functional analysis.
When a topology is generated using a family of pseudometrics, the space is called a gauge space.
Definition.
A pseudometric space formula_0 is a set formula_1 together with a non-negative real-valued function formula_2 called a <templatestyles src="Template:Visible anchor/styles.css" />pseudometric, such that for every formula_3 the following hold: formula_4 (every point is at distance zero from itself); symmetry, formula_5; and the triangle inequality, formula_6.
Unlike a metric space, points in a pseudometric space need not be distinguishable; that is, one may have formula_7 for distinct values formula_8
Examples.
Any metric space is a pseudometric space.
Pseudometrics arise naturally in functional analysis. Consider the space formula_9 of real-valued functions formula_10 together with a special point formula_11 This point then induces a pseudometric on the space of functions, given by formula_12 for formula_13
A seminorm formula_14 induces the pseudometric formula_15. This is a convex function of an affine function of formula_16 (in particular, a translation), and therefore convex in formula_16. (Likewise for formula_17.)
Conversely, a homogeneous, translation-invariant pseudometric induces a seminorm.
Pseudometrics also arise in the theory of hyperbolic complex manifolds: see Kobayashi metric.
Every measure space formula_18 can be viewed as a complete pseudometric space by defining formula_19 for all formula_20 where the triangle denotes symmetric difference.
If formula_21 is a function and d2 is a pseudometric on X2, then formula_22 gives a pseudometric on X1. If d2 is a metric and f is injective, then d1 is a metric.
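The function-space example above can be made concrete in a few lines of Python (the particular functions and the base point are chosen here only for illustration); it exhibits two distinct functions at pseudodistance zero, which is exactly what a metric would forbid:
x0 = 0.0                                # the special point x_0

def d(f, g):                            # pseudometric d(f, g) = |f(x0) - g(x0)|
    return abs(f(x0) - g(x0))

f = lambda x: x                         # f(0) = 0
g = lambda x: x ** 2                    # g(0) = 0, although g != f
h = lambda x: x + 1.0                   # h(0) = 1

print(d(f, g))                          # 0.0: distinct points of the space at distance zero
print(d(f, h), d(g, h))                 # 1.0 1.0
print(d(f, h) <= d(f, g) + d(g, h))     # True: the triangle inequality still holds
Under the metric identification described below, f and g land in the same equivalence class.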
Topology.
The <templatestyles src="Template:Visible anchor/styles.css" />pseudometric topology is the topology generated by the open balls
formula_23
which form a basis for the topology. A topological space is said to be a <templatestyles src="Template:Visible anchor/styles.css" />pseudometrizable space if the space can be given a pseudometric such that the pseudometric topology coincides with the given topology on the space.
The difference between pseudometrics and metrics is entirely topological. That is, a pseudometric is a metric if and only if the topology it generates is T0 (that is, distinct points are topologically distinguishable).
The definitions of Cauchy sequences and metric completion for metric spaces carry over to pseudometric spaces unchanged.
Metric identification.
The vanishing of the pseudometric induces an equivalence relation, called the metric identification, that converts the pseudometric space into a full-fledged metric space. This is done by defining formula_24 if formula_25. Let formula_26 be the quotient space of formula_1 by this equivalence relation and define
formula_27
This is well defined because for any formula_28 we have that formula_29 and so formula_30 and vice versa. Then formula_31 is a metric on formula_32 and formula_33 is a well-defined metric space, called the metric space induced by the pseudometric space formula_34.
The metric identification preserves the induced topologies. That is, a subset formula_35 is open (or closed) in formula_34 if and only if formula_36 is open (or closed) in formula_37 and formula_38 is saturated. The topological identification is the Kolmogorov quotient.
An example of this construction is the completion of a metric space by its Cauchy sequences.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X,d)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "d : X \\times X \\longrightarrow \\R_{\\geq 0},"
},
{
"math_id": 3,
"text": "x, y, z \\in X,"
},
{
"math_id": 4,
"text": "d(x,x) = 0."
},
{
"math_id": 5,
"text": "d(x,y) = d(y,x)"
},
{
"math_id": 6,
"text": "d(x,z) \\leq d(x,y) + d(y,z)"
},
{
"math_id": 7,
"text": "d(x, y) = 0"
},
{
"math_id": 8,
"text": "x \\neq y."
},
{
"math_id": 9,
"text": "\\mathcal{F}(X)"
},
{
"math_id": 10,
"text": "f : X \\to \\R"
},
{
"math_id": 11,
"text": "x_0 \\in X."
},
{
"math_id": 12,
"text": "d(f,g) = \\left|f(x_0) - g(x_0)\\right|"
},
{
"math_id": 13,
"text": "f, g \\in \\mathcal{F}(X)"
},
{
"math_id": 14,
"text": "p"
},
{
"math_id": 15,
"text": "d(x, y) = p(x - y)"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "y"
},
{
"math_id": 18,
"text": "(\\Omega,\\mathcal{A},\\mu)"
},
{
"math_id": 19,
"text": "d(A,B) := \\mu(A \\vartriangle B)"
},
{
"math_id": 20,
"text": "A, B \\in \\mathcal{A},"
},
{
"math_id": 21,
"text": "f : X_1 \\to X_2"
},
{
"math_id": 22,
"text": "d_1(x, y) := d_2(f(x), f(y))"
},
{
"math_id": 23,
"text": "B_r(p) = \\{x \\in X : d(p, x) < r\\},"
},
{
"math_id": 24,
"text": "x\\sim y"
},
{
"math_id": 25,
"text": "d(x,y)=0"
},
{
"math_id": 26,
"text": "X^* = X/{\\sim}"
},
{
"math_id": 27,
"text": "\\begin{align}\n d^*:(X/\\sim)&\\times (X/\\sim) \\longrightarrow \\R_{\\geq 0} \\\\\n d^*([x],[y])&=d(x,y)\n\\end{align}"
},
{
"math_id": 28,
"text": "x' \\in [x]"
},
{
"math_id": 29,
"text": "d(x, x') = 0"
},
{
"math_id": 30,
"text": "d(x', y) \\leq d(x, x') + d(x, y) = d(x, y)"
},
{
"math_id": 31,
"text": "d^*"
},
{
"math_id": 32,
"text": "X^*"
},
{
"math_id": 33,
"text": "(X^*,d^*)"
},
{
"math_id": 34,
"text": "(X, d)"
},
{
"math_id": 35,
"text": "A \\subseteq X"
},
{
"math_id": 36,
"text": "\\pi(A) = [A]"
},
{
"math_id": 37,
"text": "\\left(X^*, d^*\\right)"
},
{
"math_id": 38,
"text": "A"
}
]
| https://en.wikipedia.org/wiki?curid=68503 |
68503678 | Actinium(III) nitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Actinium(III) nitrate is an inorganic compound, an actinium salt of nitric acid with the chemical formula Ac(NO3)3. The compound is a white substance that is readily soluble in water.
Synthesis.
Actinium nitrate can be obtained by dissolving actinium or actinium hydroxide in nitric acid.
formula_0
Properties.
Actinium(III) nitrate decomposes on heating above 600 °C:
formula_1
This salt is used as a source of Ac3+ ions to obtain insoluble actinium compounds by precipitation from aqueous solutions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{Ac(OH)_3 + 3HNO_3 \\ \\xrightarrow{}\\ Ac(NO_3)_3 + 3H_2O}"
},
{
"math_id": 1,
"text": "\\mathsf{4Ac(NO_3)_3=2Ac_2O_3+12NO_2+3O_2}"
}
]
| https://en.wikipedia.org/wiki?curid=68503678 |
68505364 | Dysprosium(III) nitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Dysprosium(III) nitrate is an inorganic compound, a salt of dysprosium and nitric acid with the chemical formula Dy(NO3)3. The compound forms yellowish crystals, dissolves in water, forms a crystalline hydrate.
Synthesis.
Anhydrous salt is obtained by the action of nitrogen dioxide on dysprosium(III) oxide:
formula_0
The action of nitrogen dioxide on metallic dysprosium:
formula_1
Physical properties.
Dysprosium(III) nitrate forms yellowish crystals.
The anhydrous nitrate forms a crystalline hydrate in wet air with the ideal composition of , which melts in its own crystallization water at 88.6 °C.
The anhydrous salt and its hydrates (pentahydrate and hexahydrate) are soluble in water and ethanol, and are hygroscopic.
Chemical properties.
Hydrated dysprosium nitrate thermally decomposes to form the oxynitrate DyONO3, and further heating produces dysprosium oxide.
Application.
Dysprosium(III) nitrate is used as a catalyst.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{2Dy_2O_3 + 9N_2O_4 \\ \\xrightarrow{150^oC}\\ 4Dy(NO_3)_3 + 6NO }"
},
{
"math_id": 1,
"text": "\\mathsf{Dy + 3N_2O_4 \\ \\xrightarrow{200^oC}\\ Dy(NO_3)_3 + 3NO }"
}
]
| https://en.wikipedia.org/wiki?curid=68505364 |
68508465 | Hypertabastic survival models | Hypertabastic survival models were introduced in 2007 by Mohammad Tabatabai, Zoran Bursac, David Williams, and Karan Singh. This distribution can be used to analyze time-to-event data in biomedical and public health areas and normally called survival analysis. In engineering, the time-to-event analysis is referred to as reliability theory and in business and economics it is called duration analysis. Other fields may use different names for the same analysis. These survival models are applicable in many fields such as biomedical, behavioral science, social science, statistics, medicine, bioinformatics, medical informatics, data science especially in machine learning, computational biology, business economics, engineering, and commercial entities. They not only look at the time to event, but whether or not the event occurred. These time-to-event models can be applied in a variety of applications for instance, time after diagnosis of cancer until death, comparison of individualized treatment with standard care in cancer research, time until an individual defaults on loans, relapsed time for drug and smoking cessation, time until property sold after being put on the market, time until an individual upgrades to a new phone, time until job relocation, time until bones receive microscopic fractures when undergoing different stress levels, time from marriage until divorce, time until infection due to catheter, and time from bridge completion until first repair.
Hypertabastic cumulative distribution function.
The Hypertabastic cumulative distribution function or simply the hypertabastic distribution function formula_0 is defined as the probability that random variable formula_1 will take a value less than or equal to formula_2. The hypertabastic distribution function is defined as
formula_3,
where formula_4 represents the hyperbolic secant function and formula_5, formula_6 are parameters.
The parameters formula_5 and formula_6 are both positive with formula_4 and formula_7 as hyperbolic secant and hyperbolic cotangent respectively. The Hypertabastic probability density function is
formula_8,
where formula_9 and formula_10 are hyperbolic cosecant and hyperbolic tangent respectively and
formula_11
Hypertabastic survival function.
The Hypertabastic survival function is defined as
formula_12,
where formula_13 is the probability that waiting time exceeds formula_2.
For formula_14, the Restricted Expected (mean) Survival Time of the random variable formula_1 is denoted by formula_15, and is defined as
formula_16.
Hypertabastic hazard function.
For the continuous random variable formula_1 representing time to event, the Hypertabastic hazard function formula_17, which represents the instantaneous failure rate at time formula_2 given survival up to time formula_2, is defined as
formula_18.
The Hypertabastic hazard function has the flexibility to model varieties of hazard shapes. These different hazard shapes could apply to different mechanisms for which the hazard functions may not agree with conventional models. The following is a list of possible shapes for the Hypertabastic hazard function:
For formula_19, the Hypertabastic hazard function is monotonically decreasing indicating higher likelihood of failure at early times. For formula_20, the Hypertabastic hazard curve first increases with time until it reaches its maximum failure rate and thereafter the failure decreases with time (unimodal). For formula_21, the Hypertabastic hazard function initially increases with time, then it reaches its horizontal asymptote formula_5. For formula_22, the Hypertabastic hazard function first increases with time with an upward concavity until it reaches its inflection point and subsequently continues to increase with a downward concavity. For formula_23, the Hypertabastic hazard function initially increases with an upward concavity until it reaches its point of inflection, thereafter becoming a linear asymptote with slope formula_5. For formula_24, the Hypertabastic hazard function increases with an upward concavity.
The Hypertabastic cumulative hazard function is
formula_25
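For readers who want to experiment with these functions, the following Python sketch implements the survival, hazard, and density functions exactly as displayed above (the parameter values are arbitrary illustrations, not estimates from any data set), and checks numerically that the density equals the negative derivative of the survival function:
import numpy as np

def W(t, alpha, beta):
    return alpha * (1.0 - t**beta / np.tanh(t**beta)) / beta

def S(t, alpha, beta):                      # hypertabastic survival function
    return 1.0 / np.cosh(W(t, alpha, beta))

def h(t, alpha, beta):                      # hypertabastic hazard function
    u = t**beta
    return alpha * (t**(2*beta - 1) / np.sinh(u)**2
                    - t**(beta - 1) / np.tanh(u)) * np.tanh(W(t, alpha, beta))

def f(t, alpha, beta):                      # density, f(t) = h(t) * S(t)
    return h(t, alpha, beta) * S(t, alpha, beta)

alpha, beta = 1.0, 0.75                     # illustrative parameters only
t0, eps = 1.3, 1e-6
print(f(t0, alpha, beta))
print(-(S(t0 + eps, alpha, beta) - S(t0 - eps, alpha, beta)) / (2 * eps))   # agrees with f(t0)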
Hypertabastic proportional hazards model.
The hazard function formula_26 of the Hypertabastic proportional hazards model has the form
formula_27,
where formula_28 is a p-dimensional vector of explanatory variables and formula_29 is a vector of unknown parameters. The combined effect of explanatory variables formula_30 is a non-negative function of formula_31 with formula_32. The Hypertabastic survival function formula_33 for the proportional hazards model is defined as:
formula_34
and the Hypertabastic probability density function for the proportional hazard model is given by
formula_35.
Depending on the type of censoring, the maximum likelihood technique, along with an appropriate log-likelihood function, may be used to estimate the model parameters.
If the sample consists of right censored data and the model to use is Hypertabastic proportional hazards model, then, the proportional hazards log-likelihood function is
formula_36.
Hypertabastic accelerated failure time model.
When the covariates act multiplicatively on the time-scale, the model is called accelerated failure time model. The Hypertabastic survival function for the accelerated failure time model is given by
formula_37.
The Hypertabastic accelerated failure time model has a hazard function formula_26 of the form
formula_38.
The Hypertabastic probability density function for the accelerated failure time model is
formula_39.
For the right censored data, the log-likelihood function for the Hypertabastic accelerated failure time model is given by
formula_40,
where formula_41.
A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is used to test the goodness-of-fit of Hypertabastic accelerated failure time models and to compare them with models having unimodal hazard rate functions. Simulation studies have shown that the Hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions because of its flexible hazard function shapes. The Hypertabastic distribution is also a competitor of the Birnbaum-Saunders and inverse Gaussian distributions for statistical modeling.
Likelihood functions for survival analysis.
Consider a sample of survival times of n individuals formula_42 with associated p-dimensional covariate vectors formula_43 and an unknown parameter vector formula_44. Let formula_45 and formula_46 stand for the corresponding probability density function, cumulative distribution function, survival function and hazard function respectively.
In the absence of censoring (censoring normally occurs when the failure time of some individuals cannot be observed), the likelihood function is
formula_47
and the log-likelihood formula_48 is
formula_49
For the right censored data, the likelihood function is
formula_50
or equivalently,
formula_51,
and the log-likelihood function is
formula_52
or equivalently,
formula_53
where
formula_54,
In the presence of left censored data, the likelihood function is
formula_55
and the corresponding log-likelihood function is
formula_56
where
formula_57,
In the presence of interval censored data, the likelihood function is
formula_58
and the log-likelihood function is
formula_59
where formula_60 for all interval censored observations and
formula_61,
If the intended sample consists of all types of censored data (right censored, left censored and interval censored), then its likelihood function takes the following form
formula_62
and its corresponding log-likelihood function is given by
formula_63
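As an illustration of how the right-censored form of the log-likelihood above can be used in practice, the following Python sketch fits the two hypertabastic parameters to synthetic right-censored data by maximum likelihood; there are no covariates here, the data are simulated, and all names and settings are illustrative assumptions rather than part of any published analysis:
import numpy as np
from scipy.optimize import minimize

def log_S(t, a, b):
    u = t**b
    return -np.log(np.cosh(a * (1.0 - u / np.tanh(u)) / b))

def log_h(t, a, b):
    u = t**b
    Wt = a * (1.0 - u / np.tanh(u)) / b
    return np.log(a * (t**(2*b - 1) / np.sinh(u)**2 - t**(b - 1) / np.tanh(u)) * np.tanh(Wt))

def neg_loglik(params, t, delta):
    a, b = np.exp(params)                                  # keep both parameters positive
    ll = log_S(t, a, b) + np.where(delta > 0, log_h(t, a, b), 0.0)
    return -np.sum(ll)                                     # LL = sum(delta * ln h + ln S)

rng = np.random.default_rng(1)                             # synthetic data for illustration
event = rng.weibull(1.2, size=300)
censor = rng.uniform(0.5, 3.0, size=300)
t_obs = np.minimum(event, censor)
delta = (event <= censor).astype(float)                    # 1 = observed event, 0 = right censored

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(t_obs, delta), method="Nelder-Mead")
print(np.exp(fit.x))                                       # estimated (alpha, beta)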
Applications of hypertabastic survival models.
Cutaneous or mucosal melanoma.
The Hypertabastic Accelerated Failure Time model was used to analyze a total of 27,532 patients regarding the impact of histology on the survival of patients with cutaneous or mucosal melanoma. Understanding patients’ histological subtypes and their failure rate assessment would enable clinicians and healthcare providers to perform individualized treatment, resulting in a lower risk of complication and higher survivability of patients.
Oil field quantities.
The quantities at 49 locations of the same area of an oil field were examined to identify their underlying distribution. Using a generalized chi-squared test, the distribution of oil field quantities was represented by the Hypertabastic distribution and compared with the lognormal (LN), log-logistic (LL), Birnbaum-Saunders (BS) and inverse Gaussian (IG) distributions.
Remission duration for acute leukemia.
The times of remission from a clinical trial of acute leukemia in children were used to analyze the remission duration of acute leukemia for two groups of patients, controlling for the log of white blood cell counts. The Hypertabastic accelerated failure time model was used to analyze the remission duration of acute leukemia patients.
Brain tumor study of malignant glioma patients.
A randomized clinical trial compared two chemotherapy regimens for 447 individuals with malignant glioma. A total of 293 patients died within a five-year time period and the median survival time was about 11 months. The overall model fit, in comparison with other parametric distributions, was assessed using the generalized chi-square test statistic and the proportional hazards model.
Analysis of breast cancer patients.
The Hypertabastic proportional hazard model was used to analyze numerous breast cancer data including the survival of breast cancer patients by exploring the role of a metastasis variable in combination with clinical and gene expression variables.
Analysis of hypertensive patients.
One hundred five Nigerian patients who were diagnosed with hypertension from January 2013 to July 2018 were included in this study, where death was the event of interest. Six parametric models (exponential, Weibull, lognormal, log-logistic, Gompertz, and Hypertabastic distributions) were fitted to the data to determine the best-fitting model. The parametric models were considered because they are all lifetime distributions. S.E., AIC, and BIC goodness-of-fit measures were used to compare these parametric models.
Analysis of cortical bone fracture.
Stress fractures in older individuals are very important due to the growing number of elderly. Fatigue tests on 23 female bone samples from three individuals were analyzed. Hypertabastic survival and hazard functions of the normalized stress level and age were developed using previously published bone fatigue stress data. The event of interest was the number of cycles until the bone develops a microscopic fracture. Furthermore, Hypertabastic proportional hazard models were used to investigate tensile fatigue and cycle-to-fatigue for cortical bone data.
Analysis of unemployment.
Hypertabastic survival models have been used in the analysis of unemployment data and its comparison with the cox regression model.
Analysis of kidney carcinoma patients.
Using National Cancer Institute data from 1975 to 2016, the impact of histological subtypes on the survival probability of 134,150 kidney carcinoma patients was examined. The study variables were race/ethnicity, age, sex, tumor grade, type of surgery, geographical location of the patient, and stage of disease. The Hypertabastic proportional hazards model was used to analyze the survival time of patients diagnosed with kidney carcinoma to explore the effect of histological subtypes on their survival probability and assess the relationship between the histological subtypes, tumor stage, tumor grade, and type of surgery.
Kidney carcinoma SAS example code.
Sample code in SAS:
Proc nlp data=sasuser.KidneyCarcinoma tech=quanew cov=2 vardef=n pcov phes maxiter=250;
/* Hypertabastic Proportional Hazards Model with Log Time */
title1 'Kidney Carcinoma';
max logf;
/* Model Parameter Initial Values for Explanatory Variables */
parms a=0.01,b=0.1,
c=0.01, /* Age */ /* Continuous */
d=-.01, /* Male */ /* reference: Female */
r1=.071, /* Hispanic */
r2=.044, /* Asian */
r3=.134, /* Black */ /* reference: White */
h1=.205, /* Adeno Carcinoma w/ Mixed Subtypes */
h2=.505, /* Papillary Adeno Carcinoma NOS */
h3=.537, /* Clear Cell Adeno Carcinoma */
h4=.316, /* Renal Cell Adeno Carcinoma */
h5=1.15, /* Chromophobe Renal Cell Carcinoma */
h6=-.21, /* Sarcomatoid Renal Cell Carcinoma */
h7=.378, /* Granular Cell Carcinoma */ /* reference: Other */
g1=.03, /* East */
g2=.088, /* Northern Plains */
g3=.06, /* Pacific Coast */ /* reference: Southwest */
s1=1.2, /* Localized */
s2=-1.3, /* Distant */ /* reference: Regional */
gr1=1.169, /* Well Differentiated */
gr2=.99, /* Moderately Differentiated */
gr3=.413, /* Poorly Differentiated */ /* reference: Undifferentiated */
su1=-.945, /* No Surgery */
su2=.84, /* Cryocergery */
su3=.56, /* Thermal Ablation */
su4=.574, /* Cryosurgery */
su5=1.173, /* Partial Nephrectomy or Partial Uretterectomy */
su6=.25, /* Complete Nephrectomy */
su7=.073, /* Radical Nephrectomy */
su8=-.096, /* Any Nephrectomy */
su9=.028; /* Nephrectomy, Urectomy */ /* reference: Other */
/* Log-Likelihood Function */
in6=exp(-(c*Age+
d*Gender+
r1*Race1+r2*Race2+r3*Race3+
h1*Hist1+h2*Hist2+h3*Hist3+h4*Hist4+h5*Hist5+h6*Hist6+h7*Hist7+
g1*Geo1+g2*Geo2+g3*Geo3+
s1*Stage1+s2*Stage2+
gr1*Grade1+gr2*Grade2+gr3*Grade3+
su1*Surgery1+su2*Surgery2+su3*Surgery3+su4*Surgery4+su5*Surgery5+su6*Surgery6+su7*Surgery7+su8*Surgery8+su9*Surgery9)); /* covariates */
s = log(1/cosh(a*(1-(time**b)/tanh(time**b))/b))*in6+Status*log(((a*time**(-1+2*b)/sinh(time**b)**2-
a*time**(-1+b)/tanh(time**b))*tanh(a*(1-time**b/tanh(time**b))/b))*in6);
logf=s;
run;
Applications of hypertabastic survival models in bridge engineering.
Although survival analysis tools and techniques have been widely used in medical and biomedical applications over the last few decades, their applications to engineering problems have been more sporadic and limited. The probabilistic assessment of service life of a wide variety of engineering systems, from small mechanical components to large bridge structures, can substantially benefit from the well-established survival analysis techniques. Modeling of time-to-event phenomena in engineering applications can be performed under the influence of numerical and categorical covariates using observational or test data. The "survival" of an engineering component or system is synonymous with the more commonly used term "reliability". The term "hazard rate" or "conditional failure rate" (defined as the probability of failure per unit time, given survival up to that time) is an important measure of the change in the rate of failure over time. In this context, failure is defined as reaching the target event in the time-to-event process. This could be defined as reaching a particular serviceability condition state, localized/partial structural failure, or global/catastrophic failure. Tabatabai et al. applied the Hypertabastic parametric accelerated failure time survival model to develop probabilistic models of bridge deck service life for Wisconsin. Bridge decks are typically concrete slabs on which traffic rides, as seen in the Marquette Interchange bridge. The authors used the National Bridge Inventory (NBI) dataset to obtain the needed data for their study. NBI records include discrete numerical ratings for bridge decks (and other bridge components) as well as other basic information such as Average Daily Traffic (ADT) and deck surface area (obtained by multiplying the provided bridge length by the bridge deck width). The numerical ratings range from 0 to 9, with 9 corresponding to brand new condition and 0 being complete failure. A deck condition rating of 5 was selected as the effective end of service life of the bridge deck. The numerical covariates used were the ADT and deck surface area, while the categorical covariate was the superstructure material (structural steel or concrete).
The hypertabastic Proportional Hazards and Accelerated Failure Time models are useful techniques in analyzing bridge-related structures due to its flexibility of hazard curves, which can be monotonically increasing or decreasing with upward or downward concavity. It can also take the shape of a single mound curve. This flexibility in modeling various hazard shapes makes the model suitable for a wide variety of engineering problems.
Tabatabai et al. extended the Hypertabastic bridge deck models developed for Wisconsin bridges to bridges in six northern US states and then to all 50 US states. The study of bridge decks in all 50 states indicated important differences in reliability of bridge decks in different states and regions. Stevens et al.
discuss the importance of survival analyses in identifying key bridge performance indicators and the use of Hypertabastic survival models for bridges, and Nabizadeh et al.
further extended the use of Hypertabastic survival models to bridge superstructures. The covariates used were ADT, maximum bridge span length and superstructure type.
The survival function can be used to determine the expected life using the following equation (area under the entire survival curve)
formula_64
It is important to note that both the survival function and the expected life would change as the time passes by. The conditional survival function formula_65 is a function of time formula_2 and survival time formula_66 and is defined as
formula_67,
Nabizadeh et al. used the Hypertabastic survival functions developed for Wisconsin to analyze conditional survival functions and conditional expected service lives formula_68
formula_69
The conditional expected life would continue to increase as the survival time formula_66 increases. Nabizadeh et al. term this additional expected life the "survival dividend".
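A numerical sketch of these two quantities, using the hypertabastic survival function with arbitrary illustrative parameters (not values fitted to any bridge data), can be written in a few lines of Python:
import numpy as np
from scipy.integrate import quad

alpha, beta = 1.0, 0.75                     # illustrative parameters only

def S(t):
    u = t**beta
    if u == 0.0:
        return 1.0                          # S(0) = 1
    return 1.0 / np.cosh(alpha * (1.0 - u / np.tanh(u)) / beta)

upper = 100.0                               # S is numerically negligible well before this point
EL0, _ = quad(S, 0.0, upper)                # expected life at time zero
ts = 1.0                                    # survival time already reached
tail, _ = quad(S, ts, upper)
ELc = ts + tail / S(ts)                     # conditional expected life given survival to ts
print(EL0, ELc)                             # ELc exceeds EL0: the "survival dividend"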
An important mode of failure in bridge engineering is metal fatigue, which can result from repetitive applications of stress cycles to various details and connections in the structure. As the number of cycles formula_70 increases, the probability of fatigue failure increases. An important factor in fatigue life formula_71 is the stress range (Sr), the maximum minus minimum stress in a cycle. The probabilistic engineering fatigue problem can be treated as a "time"-to-event survival analysis problem if the number of cycles formula_70 is treated as a fictitious time variable formula_72.
This facilitates the application of well-established survival analysis techniques to engineering fatigue problems, as done by Tabatabai et al. The survival function formula_73, probability density function formula_74, hazard rate formula_75, and cumulative probability of failure formula_76 can then be defined as
formula_77
formula_78
formula_79
The hypertabastic accelerated failure time model was used to analyze the probabilistic fatigue life for various fatigue detail categories in steel bridges.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(t)"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "F(t) = \n\\begin{cases}\n1 - \\operatorname{sech}(\\frac{\\alpha(1-t^{\\beta} \\operatorname{coth}(t^{\\beta}))}{\\beta}) & t > 0 \\\\\n0 & t\\leq 0\n\\end{cases}\n"
},
{
"math_id": 4,
"text": "\\operatorname{sech}"
},
{
"math_id": 5,
"text": "\\alpha"
},
{
"math_id": 6,
"text": "\\beta"
},
{
"math_id": 7,
"text": "\\operatorname{coth}"
},
{
"math_id": 8,
"text": "f(t) =\n\\begin{cases}\n\\operatorname{sech}(W(t))(\\alpha t^{2 \\beta - 1}\\operatorname{csch}^{2}(t^{\\beta})-\\alpha t^{\\beta-1}\\operatorname{coth}(t^{\\beta}))\\operatorname{tanh}(W(t))& t > 0 \\\\\t\t\t\t\t\n0 & t < 0\n\\end{cases}\n"
},
{
"math_id": 9,
"text": "\\operatorname{csch}"
},
{
"math_id": 10,
"text": "\\operatorname{tanh}"
},
{
"math_id": 11,
"text": "W(t)=\\frac{\\alpha(1-t^{\\beta} \\operatorname{coth}(t^{\\beta}))}{\\beta} "
},
{
"math_id": 12,
"text": "S(t)=\\operatorname{sech}[\\frac{\\alpha(1-t^\\beta \\operatorname{coth}(t^\\beta))}{\\beta}]"
},
{
"math_id": 13,
"text": "S(t)"
},
{
"math_id": 14,
"text": "t>0"
},
{
"math_id": 15,
"text": "REST(t)"
},
{
"math_id": 16,
"text": "REST(t)=\\int_{0}^{t}{S(u)} du"
},
{
"math_id": 17,
"text": "h(t)"
},
{
"math_id": 18,
"text": "h(t) = \\lim_{\\Delta(t) \\to 0^{+}} \\frac{P(t \\leq T < t + \\Delta(t) | T \\geq t)}{\\Delta(t)}= \\alpha (t^{2 \\beta-1} \\operatorname{csch}^{2} (t^{\\beta}) -t^{\\beta-1}\\operatorname{coth} (t^{\\beta})) \\operatorname{tanh} (W(t))"
},
{
"math_id": 19,
"text": "0 < \\beta \\leq 0.25"
},
{
"math_id": 20,
"text": "0.25 < \\beta < 1"
},
{
"math_id": 21,
"text": "\\beta = 1"
},
{
"math_id": 22,
"text": "1 < \\beta < 2"
},
{
"math_id": 23,
"text": "\\beta = 2"
},
{
"math_id": 24,
"text": "\\beta > 2"
},
{
"math_id": 25,
"text": "H(t)= \\int_{0}^{t}h(v)dv = -ln(S(t))"
},
{
"math_id": 26,
"text": "h(t|\\mathbf{x},\\mathbf{\\theta})"
},
{
"math_id": 27,
"text": "h(t|\\mathbf{x},\\mathbf{\\theta})=h(t)g(\\mathbf{\\theta}|\\mathbf{x})"
},
{
"math_id": 28,
"text": "\\mathbf{x}"
},
{
"math_id": 29,
"text": "\\theta"
},
{
"math_id": 30,
"text": "g(\\mathbf{\\theta}|\\mathbf{x})= e^{-\\theta_0 - \\sum_{k=1}^{p}{\\theta_kx_k}}"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": "g(\\mathbf{\\theta}|\\mathbf{0})=e^{-\\theta_0}"
},
{
"math_id": 33,
"text": "S(t|\\mathbf{x},\\mathbf{\\theta})"
},
{
"math_id": 34,
"text": "S(t|\\mathbf{x},\\mathbf{\\theta})=[S(t)]^{g(\\mathbf{\\theta}|\\mathbf{x})}"
},
{
"math_id": 35,
"text": "f(t|\\mathbf{x},\\mathbf{\\theta})=f(t)[S(t)]^{g(\\mathbf{\\theta}|\\mathbf{x})-1}g(\\mathbf{\\theta}|\\mathbf{x})"
},
{
"math_id": 36,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}{(ln[{\\operatorname{sech}(W(t_i))}]g(\\mathbf{\\theta}|\\mathbf{x}_i)+\\delta_i ln{[(\\alpha {t_i}^{-1+2 \\beta}\\operatorname{csch}^{2}{({t_i}^{\\beta})}-\\alpha {t_i}^{-1+\\beta}\\operatorname{coth}({t_i}^{\\beta}))\\operatorname{tanh}(W(t_i))g(\\mathbf{\\theta}|\\mathbf{x}_i)]})}"
},
{
"math_id": 37,
"text": "S(t|\\mathbf{x},\\mathbf{\\theta})=S(tg(\\mathbf{\\theta}|\\mathbf{x}))"
},
{
"math_id": 38,
"text": "h(t|\\mathbf{x},\\mathbf{\\theta})=h(tg(\\mathbf{\\theta}|\\mathbf{x}))g(\\mathbf{\\theta}|\\mathbf{x})"
},
{
"math_id": 39,
"text": "f(t|\\mathbf{x},\\mathbf{\\theta})= f(tg(\\mathbf{\\theta}|\\mathbf{x}))g(\\mathbf{\\theta}|\\mathbf{x})"
},
{
"math_id": 40,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}{(ln{[\\operatorname{sech}(\\frac{\\alpha{(Z(t_i))}^{\\beta}\\operatorname{coth}({Z(t_i)}^{\\beta})}{\\beta})]}+\\delta_i ln{[(\\alpha {(Z(t_i))}^{-1+2 \\beta}\\operatorname{csch}^{2}{[{Z(t_i)}^{\\beta}]}-\\alpha {Z(t_i)}^{\\beta}\\operatorname{tanh}(\\frac{\\alpha (1-{(Z(t_i))}^{\\beta}\\operatorname{coth}{(Z(t_i))}^\\beta)}{\\beta}))]}g(\\mathbf{\\theta}|\\mathbf{x}_i))} "
},
{
"math_id": 41,
"text": "Z(t_i) = t_i g(\\mathbf{\\theta}|\\mathbf{x}_i)"
},
{
"math_id": 42,
"text": "t_1,t_2,\\ldots,t_n"
},
{
"math_id": 43,
"text": "\\mathbf{x}_1,\\mathbf{x}_2,\\ldots,\\mathbf{x}_n"
},
{
"math_id": 44,
"text": "\\mathbf{\\theta}=(\\theta_0,\\theta_1,\\ldots,\\theta_p)"
},
{
"math_id": 45,
"text": "f(t_i|\\mathbf{x}_i,\\mathbf{\\theta}), F(t_i|\\mathbf{x}_i,\\theta), S(t_i|\\mathbf{x}_i,)"
},
{
"math_id": 46,
"text": "h(t_i|\\mathbf{x}_i,\\mathbf{\\theta})"
},
{
"math_id": 47,
"text": "L(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\prod_{i=1}^{n}f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})"
},
{
"math_id": 48,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})"
},
{
"math_id": 49,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}ln{[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}"
},
{
"math_id": 50,
"text": "L(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\prod_{i=1}^{n}{[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{\\delta_i}[S(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{1-\\delta_i}}"
},
{
"math_id": 51,
"text": "L(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\prod_{i=1}^{n}{[h(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{\\delta_i}S(t_i|\\mathbf{x}_i,\\mathbf{\\theta})}"
},
{
"math_id": 52,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}(\\delta_i ln{[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}+(1-\\delta_i) ln{[S(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]})"
},
{
"math_id": 53,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}(\\delta_i ln{[h(t_i|\\mathbf{x}_i,\\mathbf{\\theta}]}+ln{[S(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]})"
},
{
"math_id": 54,
"text": "\\delta_i =\n\\begin{cases}\n0 & t_i \\text{is a right censored observation} \\\\\t\t\t\t\t\n1 & otherwise\n\\end{cases}\n"
},
{
"math_id": 55,
"text": "L(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\prod_{i=1}^{n}[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{\\gamma_i}[F(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{1-\\gamma_i}"
},
{
"math_id": 56,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}(\\gamma_i ln{[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}+(1-\\gamma_i)ln{[F(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]})"
},
{
"math_id": 57,
"text": "\\gamma_i =\n\\begin{cases}\n0 & t_i \\text{is a left censored observation} \\\\\t\t\t\t\t\n1 & otherwise\n\\end{cases}\n"
},
{
"math_id": 58,
"text": "L(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\prod_{i=1}^{n}([f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{\\xi_i}[F(v_i|\\mathbf{x}_i,\\mathbf{\\theta})-F(u_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{1-\\xi_i})"
},
{
"math_id": 59,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}(\\xi_i ln{[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}+(1-\\xi_i) ln{[F(v_i|\\mathbf{x}_i,\\mathbf{\\theta})-F(u_i|\\mathbf{x}_i,\\mathbf{\\theta})]})"
},
{
"math_id": 60,
"text": "u_i \\le t_i \\le v_i"
},
{
"math_id": 61,
"text": "\\xi_i =\n\\begin{cases}\n0 & t_i \\text{is an interval censored observation} \\\\\t\t\t\t\t\n1 & otherwise\n\\end{cases}\n"
},
{
"math_id": 62,
"text": " L(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\prod_{i=1}^{n}([S(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{1-\\delta_i}[F(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{1-\\gamma_i}[F(v_i|\\mathbf{x}_i,\\mathbf{\\theta})-F(u_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{1-\\xi_i}[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]^{\\delta_i+\\gamma_i+\\xi_i-2})"
},
{
"math_id": 63,
"text": "LL(\\mathbf{\\theta},\\alpha,\\beta:{\\mathbf{x}})=\\sum_{i=1}^{n}{(1-\\delta_i) ln{[S(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}+(1-\\gamma_i)ln{[F(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}(1-\\xi_i)ln{[F(v_i|\\mathbf{x}_i,\\mathbf{\\theta})-F(u_i|\\mathbf{x}_i,\\mathbf{\\theta})]}+ (\\delta_i+\\gamma_i+\\xi_i-2)ln{[f(t_i|\\mathbf{x}_i,\\mathbf{\\theta})]}}"
},
{
"math_id": 64,
"text": "{EL}_0=\\int_{0}^{\\infty}S(t)dt"
},
{
"math_id": 65,
"text": "C_S"
},
{
"math_id": 66,
"text": "t_s"
},
{
"math_id": 67,
"text": "CS(t,t_s) =\n\\begin{cases}\n1 & 0 \\le t \\le t_s \\\\\t\t\t\t\t\n\\frac{S(t)}{S(t_s)} & t > t_s\n\\end{cases}\n"
},
{
"math_id": 68,
"text": "(EL_c(t_s))"
},
{
"math_id": 69,
"text": "{EL}_c(t_s)=\\int_{0}^{\\infty}CS(t)dt=t_s+\\int_{t_s}^{\\infty}{CS(t)dt=t_s+\\int_{t_s}^{\\infty}\\frac{S(t)}{S(t_s)}}dt"
},
{
"math_id": 70,
"text": "(n_c)"
},
{
"math_id": 71,
"text": "(N_c)"
},
{
"math_id": 72,
"text": "(t)"
},
{
"math_id": 73,
"text": "S(n_c)"
},
{
"math_id": 74,
"text": "f(n_c)"
},
{
"math_id": 75,
"text": "h(n_c)"
},
{
"math_id": 76,
"text": "F(n_c)"
},
{
"math_id": 77,
"text": "S(n_c)=P(N_c>n_c)=1-F(n_c)"
},
{
"math_id": 78,
"text": "f(n_c)=\\lim_{\\delta n_c \\to 0} \\frac{P(n_c<N_c<n_c+\\delta n_c)}{\\delta n_c}"
},
{
"math_id": 79,
"text": "h(n_c)=\\lim_{\\delta n_c \\to 0} \\frac{P(n_c<N_c<n_c+\\delta n_c| N_c>n_c)}{\\delta n_c}"
}
]
| https://en.wikipedia.org/wiki?curid=68508465 |
68508470 | Holmium(III) nitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Holmium (III) nitrate is an inorganic compound, a salt of holmium and nitric acid with the chemical formula Ho(NO3)3. The compound forms yellowish crystals, dissolves in water, also forms crystalline hydrates.
Synthesis.
Anhydrous salt is obtained by the action of nitrogen dioxide on holmium(III) oxide:
formula_0
Effect of nitrogen dioxide on metallic holmium:
formula_1
Reaction of holmium hydroxide and nitric acid:
formula_2
Physical properties.
Holmium(III) nitrate forms yellowish crystals.
Forms a crystalline hydrate of the composition Ho(NO3)3•5H2O.
Soluble in water and ethanol.
Chemical properties.
Hydrated holmium nitrate thermally decomposes to form HoONO3, and decomposes to holmium oxide upon subsequent heating.
Application.
The compound is used for the production of ceramics and glass.
Also used to produce metallic holmium and as a chemical reagent.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{2Ho_2O_3 + 9N_2O_4 \\ \\xrightarrow{150^oC}\\ 4Ho(NO_3)_3 + 6NO }"
},
{
"math_id": 1,
"text": "\\mathsf{Ho + 3N_2O_4 \\ \\xrightarrow{200^oC}\\ Ho(NO_3)_3 + 3NO }"
},
{
"math_id": 2,
"text": "\\mathsf{Ho(OH)_3 + 3NHO_3 \\ \\xrightarrow{150^oC}\\ Ho(NO_3)_3 + 3H_2O }"
}
]
| https://en.wikipedia.org/wiki?curid=68508470 |
68509217 | Uniformly disconnected space | In mathematics, a uniformly disconnected space is a metric space formula_0 for which there exists formula_1
such that no pair of distinct points formula_2 can be connected by a formula_3-chain.
A formula_3-chain between formula_4 and formula_5 is a sequence of points
formula_6 in formula_7 such that formula_8.
Properties.
Uniform disconnectedness is invariant under quasi-Möbius maps. | [
{
"math_id": 0,
"text": "(X,d)"
},
{
"math_id": 1,
"text": "\\lambda > 0"
},
{
"math_id": 2,
"text": "x,y \\in X"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "x= x_0, x_1, \\ldots, x_n = y"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "d(x_i,x_{i+1}) \\leq \\lambda d(x,y), \\forall i \\in \\{0,\\ldots,n\\}"
}
]
| https://en.wikipedia.org/wiki?curid=68509217 |
68511640 | Ytterbium(III) nitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Ytterbium(III) nitrate is an inorganic compound, a salt of ytterbium and nitric acid with the chemical formula Yb(NO3)3. The compound forms colorless crystals, dissolves in water, and also forms crystalline hydrates.
Synthesis.
Reaction of ytterbium with nitrogen dioxide in ethyl acetate:
formula_0
Reaction of ytterbium hydroxide and nitric acid:
formula_1
Physical properties.
Ytterbium(III) nitrate forms colorless hygroscopic crystals.
Soluble in water and ethanol.
Forms crystalline hydrates of the composition Yb(NO3)3•nH2O, where n = 4, 5, 6.
Chemical properties.
The hydrated ytterbium nitrate thermally decomposes to form YbONO3 and decomposes to ytterbium oxide upon further heating.
Application.
Ytterbium(III) nitrate hydrate is used for nanoscale coatings of carbon composites.
Also used to obtain metallic ytterbium and as a chemical reagent.
Used as a component for the production of ceramics and glass.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{Yb + 3N_2O_4 \\ \\xrightarrow{}\\ Yb(NO_3)_3 + 3NO\\uparrow }"
},
{
"math_id": 1,
"text": "\\mathsf{Yb(OH)_3 + 3HNO_3 \\ \\xrightarrow{}\\ Yb(NO_3)_3 + 3H_2O\\uparrow }"
}
]
| https://en.wikipedia.org/wiki?curid=68511640 |
68511809 | Lutetium(III) nitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Lutetium(III) nitrate is an inorganic compound, a salt of lutetium and nitric acid with the chemical formula Lu(NO3)3. The compound forms colorless crystals, dissolves in water, and also forms crystalline hydrates. The compound is poisonous.
Synthesis.
Dissolving lutetium oxide in nitric acid:
formula_0
To obtain anhydrous nitrate, the powdered metal is added to nitrogen dioxide dissolved in ethyl acetate:
formula_1
Physical properties.
Lutetium(III) nitrate forms colorless hygroscopic crystals.
Soluble in water and ethanol.
Forms crystalline hydrates of the composition Lu(NO3)3•nH2O, where n = 3, 4, 5, 6.
Chemical properties.
The hydrated lutetium nitrate thermally decomposes to form LuONO3 and decomposes to lutetium oxide upon further heating.
The compound reacts with ammonium fluoride to form ammonium hexafluorolutetate:
formula_2
Applications.
Lutetium(III) nitrate is used to obtain metallic lutetium and also as a chemical reagent.
It is used as a component of materials for the production of laser crystals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Lu_2O_3 + 6HNO_3 \\ \\xrightarrow{90^oC}\\ 2Lu(NO_3)_3 + 3H_2O }"
},
{
"math_id": 1,
"text": "\\mathsf{ Lu + 3N_2O_4 \\ \\xrightarrow{77^oC}\\ Lu(NO_3)_3 + 3NO }"
},
{
"math_id": 2,
"text": "\\mathsf{ Lu(NO_3)_3 + 6 NH_4F \\ \\xrightarrow{}\\ (NH_4)_3[LuF_6]\\downarrow + 3NH_4NO_3 }"
}
]
| https://en.wikipedia.org/wiki?curid=68511809 |
68511991 | Erbium(III) nitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Erbium(III) nitrate is an inorganic compound, a salt of erbium and nitric acid with the chemical formula Er(NO3)3. The compound forms pink crystals that are readily soluble in water, and it also forms crystalline hydrates.
Synthesis.
Dissolving metallic erbium in nitric acid:
formula_0
Dissolving erbium oxide or hydroxide in nitric acid:
formula_1
Reaction of nitrogen dioxide with metallic erbium:
formula_2
Physical properties.
Erbium(III) nitrate forms pink hygroscopic crystals.
Forms crystalline hydrates of the composition <chem>Er(NO3)3*5H2O</chem>.
Both erbium(III) nitrate and its crystalline hydrate decompose on heating.
Dissolves in water and ethanol.
Chemical properties.
The hydrated erbium nitrate thermally decomposes to form ErONO3 and then to erbium oxide.
Applications.
It is used to obtain metallic erbium and is also used as a chemical reagent.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{Er + 6HNO_3 \\ \\xrightarrow{}\\ Er(NO_3)_3 + 3NO_2 + 3H_2O\\uparrow }"
},
{
"math_id": 1,
"text": "\\mathsf{Er(OH)_3 + 3HNO_3 \\ \\xrightarrow{}\\ Er(NO_3)_3 + 3H_2O\\uparrow }"
},
{
"math_id": 2,
"text": "\\mathsf{Er + 3N_2O_4 \\ \\xrightarrow{}\\ Er(NO_3)_3 + 3NO\\uparrow }"
}
]
| https://en.wikipedia.org/wiki?curid=68511991 |
6851367 | Langer correction | Improvement of WKB approximation
The Langer correction, named after the mathematician Rudolf Ernest Langer, is a correction to the WKB approximation for problems with radial symmetry.
Description.
In 3D systems.
When applying the WKB approximation method to the radial Schrödinger equation,
formula_0
where the effective potential is given by
formula_1
(formula_2 being the azimuthal quantum number related to the angular momentum operator), the eigenenergies and the wave function behaviour obtained differ from the exact solution.
In 1937, Rudolf E. Langer suggested a correction
formula_3
which is known as the Langer correction or Langer replacement. This manipulation is equivalent to adding a constant 1/4 whenever formula_4 appears. Heuristically, it is said that this factor arises because the range of the radial Schrödinger equation is restricted from 0 to infinity, as opposed to the entire real line. By such a change of the constant term in the effective potential, the results obtained by the WKB approximation reproduce the exact spectrum for many potentials. That the Langer replacement is correct follows from the WKB calculation of the Coulomb eigenvalues with the replacement, which reproduces the well-known result.
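The following is a sketch of that Coulomb calculation (in Gaussian units, with n_r denoting the radial quantum number); the phase integral is a standard textbook result, written here directly in LaTeX rather than as a numbered formula:
\int_{r_-}^{r_+} \sqrt{2m\Big(E+\frac{e^2}{r}\Big)-\frac{\hbar^2\big(\ell+\tfrac{1}{2}\big)^2}{r^2}}\, dr
  = \pi\left(e^2\sqrt{\frac{m}{2|E|}}-\Big(\ell+\tfrac{1}{2}\Big)\hbar\right)
  = \Big(n_r+\tfrac{1}{2}\Big)\pi\hbar ,
so that e^2\sqrt{m/(2|E|)} = (n_r+\ell+1)\hbar = n\hbar and hence E_n = -\frac{me^4}{2\hbar^2 n^2}, the exact Coulomb (hydrogen) spectrum. Without the replacement, the same integral would contain \sqrt{\ell(\ell+1)}\,\hbar in place of \big(\ell+\tfrac{1}{2}\big)\hbar and the exact spectrum would not be recovered.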
In 2D systems.
Note that for 2D systems, as the effective potential takes the form
formula_5
so Langer correction goes:
formula_6
This manipulation is also equivalent to insert a 1/4 constant factor whenever formula_7 appears.
Justification.
An even more convincing calculation is the derivation of Regge trajectories (and hence eigenvalues) of the radial Schrödinger equation with the Yukawa potential, by both a perturbation method (with the old formula_8 factor) and, independently, by the WKB method (with the Langer replacement), in both cases even to higher orders. For the perturbation calculation see the Müller-Kirsten book and for the WKB calculation Boukema.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " -\\frac{\\hbar^2}{2 m} \\frac{d^2 R(r)}{dr^2} + [E-V_\\textrm{eff}(r)] R(r) = 0 , "
},
{
"math_id": 1,
"text": "V_\\textrm{eff}(r) = V(r) - \\frac{\\hbar^2\\ell(\\ell+1)}{2mr^2}"
},
{
"math_id": 2,
"text": " \\ell"
},
{
"math_id": 3,
"text": "\\ell(\\ell+1) \\rightarrow \\left(\\ell+\\frac{1}{2}\\right)^2"
},
{
"math_id": 4,
"text": "\\ell(\\ell+1)"
},
{
"math_id": 5,
"text": "V_\\textrm{eff}(r) = V(r) - \\frac{\\hbar^2(\\ell^2-\\frac{1}{4})}{2mr^2},"
},
{
"math_id": 6,
"text": "\\left(\\ell^2-\\frac{1}{4}\\right) \\rightarrow \\ell^2."
},
{
"math_id": 7,
"text": "\\ell^2"
},
{
"math_id": 8,
"text": " \\ell(\\ell+1)"
}
]
| https://en.wikipedia.org/wiki?curid=6851367 |
685158 | Neyman–Pearson lemma | Theorem about the power of the likelihood ratio test
In statistics, the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood ratio as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts like errors of the second kind, power function, and inductive behavior (see Wald, Chapter II: The Neyman–Pearson Theory of Testing a Statistical Hypothesis). The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest, but they do show that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all formula_0 level tests while subsequently minimizing type II error, traditionally denoted by formula_1. Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error (formula_0), but also providing a way to construct such tests. The Karlin–Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.
Statement.
Consider a test with hypotheses formula_2 and formula_3, where the probability density function (or probability mass function) is formula_4 for formula_5.
For any hypothesis test with rejection set formula_6, and any formula_7, we say that it satisfies condition formula_8 if
formula_11
where formula_12 is a negligible set in both formula_13 and formula_14 cases: formula_15.
For any formula_7, let the set of level formula_16 tests be the set of all hypothesis tests with size at most formula_16. That is, letting its rejection set be formula_6, we have formula_17.
<templatestyles src="Math_theorem/styles.css" />
Neyman–Pearson lemma — Existence:
If a hypothesis test satisfies formula_8 condition, then it is a uniformly most powerful (UMP) test in the set of level formula_0 tests.
Uniqueness:
If there exists a hypothesis test formula_18 that satisfies formula_8 condition, with formula_19 , then every UMP test formula_6 in the set of level formula_0 tests satisfies formula_8 condition with the same formula_20.
Further, the formula_18 test and the formula_6 test agree with probability formula_21 whether formula_22 or formula_23.
In practice, the likelihood ratio is often used directly to construct tests — see likelihood-ratio test. However it can also be used to suggest particular test-statistics that might be of interest or to suggest simplified tests — for this, one considers algebraic manipulation of the ratio to see if there are key statistics in it related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
<templatestyles src="Math_proof/styles.css" />Proof
Given any hypothesis test with rejection set formula_6, define its statistical power function formula_24.
Existence:
Given some hypothesis test that satisfies formula_8 condition, call its rejection region formula_18 (where NP stands for Neyman–Pearson).
For any level formula_0 hypothesis test with rejection region formula_6 we have
formula_25 except on some ignorable set formula_12.
Then integrate it over formula_26 to obtain
formula_27
Since formula_28 and formula_29, we find that formula_30.
Thus the formula_18 rejection test is a UMP test in the set of level formula_0 tests.
Uniqueness:
For any other UMP level formula_0 test, with rejection region formula_6, we have from Existence part,
formula_31.
Since the formula_6 test is UMP, the left side must be zero. Since formula_19 the right side gives
formula_32, so the formula_6 test has size formula_0.
Since the integrand formula_33 is nonnegative, and integrates to zero, it must be exactly zero except on some ignorable set formula_12.
Since the formula_18 test satisfies formula_8 condition, let the ignorable set in the definition of formula_8 condition be formula_34.
formula_35 is ignorable, since for all formula_36, we have formula_37.
Similarly, formula_38 is ignorable.
Define formula_39 (where formula_40 means symmetric difference). It is the union of three ignorable sets, thus it is an ignorable set.
Then we have
formula_41 and formula_42. So the formula_6 rejection test satisfies formula_8 condition with the same formula_20.
Since formula_43 is ignorable, its subset formula_44 is also ignorable. Consequently, the two tests agree with probability formula_21 whether formula_22 or formula_23.
Example.
Let formula_45 be a random sample from the formula_46 distribution where the mean formula_47 is known, and suppose that we wish to test for formula_48 against formula_49. The likelihood for this set of normally distributed data is
formula_50
We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome:
formula_51
This ratio only depends on the data through formula_52. Therefore, by the Neyman–Pearson lemma, the most powerful test of this type of hypothesis for this data will depend only on formula_52. Also, by inspection, we can see that if formula_53, then formula_54 is a decreasing function of formula_52. So we should reject formula_55 if formula_52 is sufficiently large. The rejection threshold depends on the size of the test. In this example, the test statistic can be shown to be a scaled chi-square distributed random variable and an exact critical value can be obtained.
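A minimal numerical sketch of this example follows; the mean, the two variances, the sample size and the test size are arbitrary illustrative values, not part of the original treatment.

```python
# Minimal numerical sketch of the example above: testing sigma^2 = sigma0^2
# against sigma^2 = sigma1^2 (with sigma1^2 > sigma0^2) and known mean mu.
# All numerical values are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
mu, sigma0_sq, sigma1_sq = 0.0, 1.0, 4.0   # known mean, null and alternative variances
n, alpha = 30, 0.05                        # sample size and test size

# Under H0, T = sum((x_i - mu)^2) / sigma0^2 follows a chi-square distribution with
# n degrees of freedom, so the most powerful test rejects for large values of the sum.
critical_value = chi2.ppf(1 - alpha, df=n)

x = rng.normal(mu, np.sqrt(sigma1_sq), size=n)   # data drawn under H1, for illustration
T = np.sum((x - mu) ** 2) / sigma0_sq
print(f"T = {T:.2f}, critical value = {critical_value:.2f}, reject H0: {T > critical_value}")

# The power can be computed exactly: under H1, T * sigma0^2 / sigma1^2 is again
# chi-square with n degrees of freedom.
power = 1 - chi2.cdf(critical_value * sigma0_sq / sigma1_sq, df=n)
print(f"power against sigma1^2 = {sigma1_sq}: {power:.3f}")
```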
Application in economics.
A variant of the Neyman–Pearson lemma has found an application in the seemingly unrelated domain of the economics of land value. One of the fundamental problems in consumer theory is calculating the demand function of the consumer given the prices. In particular, given a heterogeneous land-estate, a price measure over the land, and a subjective utility measure over the land, the consumer's problem is to calculate the best land parcel that they can buy – i.e. the land parcel with the largest utility, whose price is at most their budget. It turns out that this problem is very similar to the problem of finding the most powerful statistical test, and so the Neyman–Pearson lemma can be used.
Uses in electrical engineering.
The Neyman–Pearson lemma is quite useful in electronics engineering, namely in the design and use of radar systems, digital communication systems, and in signal processing systems.
In radar systems, the Neyman–Pearson lemma is used in first setting the rate of missed detections to a desired (low) level, and then minimizing the rate of false alarms, or vice versa.
Neither false alarms nor missed detections can be set at arbitrarily low rates, including zero. All of the above goes also for many systems in signal processing.
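The following is a minimal sketch of this trade-off for the simplest textbook detection model, a known constant signal in Gaussian noise observed in a single sample, fixing the false-alarm rate and computing the resulting detection probability; the model and all numbers are illustrative assumptions, not tied to any particular radar system.

```python
# Sketch of the Neyman-Pearson trade-off for detecting a known constant signal A in
# zero-mean Gaussian noise of standard deviation sigma from a single observation.
# The model and every number here are illustrative assumptions.
import numpy as np
from scipy.stats import norm

A, sigma = 1.0, 1.0          # assumed signal amplitude and noise level
false_alarm_rates = [1e-1, 1e-2, 1e-3, 1e-4]

for pfa in false_alarm_rates:
    # Fix the false-alarm probability and choose the threshold accordingly; for this
    # model the likelihood-ratio test reduces to comparing the observation to a threshold.
    threshold = sigma * norm.ppf(1 - pfa)
    # Probability of detection (one minus the missed-detection rate) at that threshold.
    pd = 1 - norm.cdf((threshold - A) / sigma)
    print(f"P_fa = {pfa:.0e}  ->  threshold = {threshold:.2f},  P_d = {pd:.3f}")
```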
Uses in particle physics.
The Neyman–Pearson lemma is applied to the construction of analysis-specific likelihood-ratios, used to e.g. test for signatures of new physics against the nominal Standard Model prediction in proton–proton collision datasets collected at the LHC.
Discovery of the lemma.
Neyman wrote about the discovery of the lemma as follows. Paragraph breaks have been inserted.
<templatestyles src="Template:Blockquote/styles.css" />I can point to the particular moment when I understood how to formulate the undogmatic problem of the most powerful test of a simple statistical hypothesis against a fixed simple alternative. At the present time [probably 1968], the problem appears entirely trivial and within easy reach of a beginning undergraduate. But, with a degree of embarrassment, I must confess that it took something like half a decade of combined effort of E. S. P. [Egon Pearson] and myself to put things straight.
The solution of the particular question mentioned came on an evening when I was sitting alone in my room at the Statistical Laboratory of the School of Agriculture in Warsaw, thinking hard on something that should have been obvious long before. The building was locked up and, at about 8 p.m., I heard voices outside calling me. This was my wife, with some friends, telling me that it was time to go to a movie.
My first reaction was that of annoyance. And then, as I got up from my desk to answer the call, I suddenly understood: for any given critical region and for any given alternative hypothesis, it is possible to calculate the probability of the error of the second kind; it is represented by this particular integral. Once this is done, the optimal critical region would be the one which minimizes this same integral, subject to the side condition concerned with the probability of the error of the first kind. We are faced with a particular problem of the calculus of variation, probably a simple problem.
These thoughts came in a flash, before I reached the window to signal to my wife. The incident is clear in my memory, but I have no recollections about the movie we saw. It may have been Buster Keaton.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\beta"
},
{
"math_id": 2,
"text": "H_0: \\theta = \\theta_0"
},
{
"math_id": 3,
"text": "H_1:\\theta=\\theta_1"
},
{
"math_id": 4,
"text": "\\rho(x\\mid \\theta_i)"
},
{
"math_id": 5,
"text": "i=0,1"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "\n\\alpha\\in [0, 1]"
},
{
"math_id": 8,
"text": "P_\\alpha"
},
{
"math_id": 9,
"text": "\\alpha = {\\Pr}_{\\theta_0}(X\\in R)"
},
{
"math_id": 10,
"text": "\\exists \\eta \\geq 0"
},
{
"math_id": 11,
"text": "\n\\begin{align}\nx\\in{}& R\\smallsetminus A\\implies \\rho(x\\mid \\theta_1) > \\eta \\rho(x\\mid \\theta_0) \\\\\nx\\in{}& R^c\\smallsetminus A \\implies \\rho(x\\mid\\theta_1) < \\eta \\rho(x\\mid \\theta_0)\n\\end{align}"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "\\theta_0"
},
{
"math_id": 14,
"text": "\\theta_1"
},
{
"math_id": 15,
"text": "{\\Pr}_{\\theta_0}(X\\in A) = {\\Pr}_{\\theta_1}(X\\in A) = 0"
},
{
"math_id": 16,
"text": "\n\\alpha"
},
{
"math_id": 17,
"text": "{\\Pr}_{\\theta_0}(X\\in R)\\leq \\alpha"
},
{
"math_id": 18,
"text": "R_{NP}"
},
{
"math_id": 19,
"text": "\\eta > 0"
},
{
"math_id": 20,
"text": "\\eta"
},
{
"math_id": 21,
"text": "1"
},
{
"math_id": 22,
"text": "\\theta = \\theta_0"
},
{
"math_id": 23,
"text": "\\theta = \\theta_1"
},
{
"math_id": 24,
"text": "\\beta_R(\\theta) = {\\Pr}_\\theta(X\\in R)"
},
{
"math_id": 25,
"text": "[1_{R_{NP}}(x) - 1_R(x)][\\rho(x\\mid\\theta_1) - \\eta \\rho(x\\mid\\theta_0)] \\geq 0"
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "0 \\leq [\\beta_{R_{NP}}(\\theta_1) - \\beta_R(\\theta_1)] - \\eta[\\beta_{R_{NP}}(\\theta_0) - \\beta_R(\\theta_0)]."
},
{
"math_id": 28,
"text": "\\beta_{R_{NP}}(\\theta_0) = \\alpha"
},
{
"math_id": 29,
"text": "\\beta_R(\\theta_0) \\leq \\alpha"
},
{
"math_id": 30,
"text": "\\beta_{R_{NP}}(\\theta_1) \\geq \\beta_R(\\theta_1)"
},
{
"math_id": 31,
"text": "[\\beta_{R_{NP}}(\\theta_1) - \\beta_R(\\theta_1)] \\geq \\eta[\\beta_{R_{NP}}(\\theta_0) - \\beta_R(\\theta_0)]"
},
{
"math_id": 32,
"text": "\\beta_R(\\theta_0) = \\beta_{R_{NP}}(\\theta_0) =\\alpha"
},
{
"math_id": 33,
"text": "[1_{R_{NP}}(x) - 1_R(x)][\\rho(x\\mid\\theta_1) - \\eta \\rho(x\\mid\\theta_0)]"
},
{
"math_id": 34,
"text": "A_{NP}"
},
{
"math_id": 35,
"text": "R\\smallsetminus (R_{NP}\\cup A_{NP})"
},
{
"math_id": 36,
"text": "x\\in R\\smallsetminus (R_{NP}\\cup A_{NP})"
},
{
"math_id": 37,
"text": "[1_{R_{NP}}(x) - 1_R(x)][\\rho(x\\mid\\theta_1) - \\eta \\rho(x\\mid\\theta_0)] = \\eta \\rho(x\\mid\\theta_0)-\\rho(x\\mid\\theta_1)> 0"
},
{
"math_id": 38,
"text": "R_{NP}\\smallsetminus (R\\cup A_{NP})"
},
{
"math_id": 39,
"text": "A_R := (R \\mathbin{\\Delta} R_{NP})\\cup A_{NP}"
},
{
"math_id": 40,
"text": "\\Delta"
},
{
"math_id": 41,
"text": "x\\in R\\smallsetminus A_R\\implies \\rho(x\\mid\\theta_1) > \\eta \\rho(x \\mid \\theta_0)"
},
{
"math_id": 42,
"text": "x\\in R^c\\smallsetminus A_R \\implies \\rho(x\\mid\\theta_1) < \\eta \\rho(x \\mid \\theta_0)"
},
{
"math_id": 43,
"text": "A_R"
},
{
"math_id": 44,
"text": "R \\mathbin{\\Delta} R_{NP}\\subset A_R"
},
{
"math_id": 45,
"text": "X_1,\\dots,X_n"
},
{
"math_id": 46,
"text": "\\mathcal{N}(\\mu,\\sigma^2)"
},
{
"math_id": 47,
"text": "\\mu"
},
{
"math_id": 48,
"text": "H_0:\\sigma^2=\\sigma_0^2"
},
{
"math_id": 49,
"text": "H_1:\\sigma^2=\\sigma_1^2"
},
{
"math_id": 50,
"text": "\\mathcal{L}\\left(\\sigma^2\\mid\\mathbf{x}\\right)\\propto \\left(\\sigma^2\\right)^{-n/2} \\exp\\left\\{-\\frac{\\sum_{i=1}^n (x_i-\\mu)^2}{2\\sigma^2}\\right\\}."
},
{
"math_id": 51,
"text": "\\Lambda(\\mathbf{x}) = \\frac{\\mathcal{L}\\left({\\sigma_0}^2\\mid\\mathbf{x}\\right)}{\\mathcal{L}\\left({\\sigma_1}^2\\mid\\mathbf{x}\\right)} =\n\\left(\\frac{\\sigma_0^2}{\\sigma_1^2}\\right)^{-n/2} \\exp\\left\\{-\\frac{1}{2}(\\sigma_0^{-2} -\\sigma_1^{-2})\\sum_{i=1}^n (x_i-\\mu)^2\\right\\}."
},
{
"math_id": 52,
"text": "\\sum_{i=1}^n (x_i-\\mu)^2"
},
{
"math_id": 53,
"text": "\\sigma_1^2>\\sigma_0^2"
},
{
"math_id": 54,
"text": "\\Lambda(\\mathbf{x})"
},
{
"math_id": 55,
"text": "H_0"
}
]
| https://en.wikipedia.org/wiki?curid=685158 |
6851758 | Cusp (singularity) | Point on a curve where motion must move backwards
In mathematics, a cusp, sometimes called spinode in old texts, is a point on a curve where a moving point must reverse direction. A typical example is given in the figure. A cusp is thus a type of singular point of a curve.
For a plane curve defined by an analytic, parametric equation
formula_0
a cusp is a point where both derivatives of "f" and "g" are zero, and the directional derivative, in the direction of the tangent, changes sign (the direction of the tangent is the direction of the slope formula_1). Cusps are "local singularities" in the sense that they involve only one value of the parameter "t", in contrast to self-intersection points that involve more than one value. In some contexts, the condition on the directional derivative may be omitted, although, in this case, the singularity may look like a regular point.
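As a quick check of this definition, the following SymPy sketch verifies that for the standard cusp "x" = "t"2, "y" = "t"3 both derivatives vanish at "t" = 0 and the velocity along the tangent direction changes sign there; the particular curve and the use of SymPy are illustrative choices.

```python
# Symbolic check that the standard cusp x = t**2, y = t**3 fits the definition above:
# both derivatives vanish at t = 0, and the velocity component along the tangent
# direction (here the x-axis, since g'/f' = 3t/2 -> 0) changes sign at t = 0.
import sympy as sp

t = sp.symbols('t', real=True)
f, g = t**2, t**3

df, dg = sp.diff(f, t), sp.diff(g, t)
print(df.subs(t, 0), dg.subs(t, 0))       # 0 0  -> both derivatives vanish at t = 0

slope = sp.limit(dg / df, t, 0)
print(slope)                              # 0   -> the tangent at the cusp is the x-axis

# The velocity along the tangent direction (1, 0) is f'(t) = 2t, which is negative
# for t < 0 and positive for t > 0: the moving point reverses direction at the cusp.
print(sp.sign(df.subs(t, -sp.Rational(1, 2))), sp.sign(df.subs(t, sp.Rational(1, 2))))
```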
For a curve defined by an implicit equation
formula_2
which is smooth, cusps are points where the terms of lowest degree of the Taylor expansion of "F" are a power of a linear polynomial; however, not all singular points that have this property are cusps. The theory of Puiseux series implies that, if "F" is an analytic function (for example a polynomial), a linear change of coordinates allows the curve to be parametrized, in a neighborhood of the cusp, as
formula_3
where "a" is a real number, "m" is a positive even integer, and "S"("t") is a power series of order "k" (degree of the nonzero term of the lowest degree) larger than "m". The number "m" is sometimes called the "order" or the "multiplicity" of the cusp, and is equal to the degree of the nonzero part of lowest degree of "F". In some contexts, the definition of a cusp is restricted to the case of cusps of order two—that is, the case where "m" = 2.
The definitions for plane curves and implicitly-defined curves have been generalized by René Thom and Vladimir Arnold to curves defined by differentiable functions: a curve has a cusp at a point if there is a diffeomorphism of a neighborhood of the point in the ambient space, which maps the curve onto one of the above-defined cusps.
Classification in differential geometry.
Consider a smooth real-valued function of two variables, say "f" ("x", "y") where x and y are real numbers. So f is a function from the plane to the line. The space of all such smooth functions is acted upon by the group of diffeomorphisms of the plane and the diffeomorphisms of the line, i.e. diffeomorphic changes of coordinate in both the source and the target. This action splits the whole function space up into equivalence classes, i.e. orbits of the group action.
One such family of equivalence classes is denoted by "A""k"±, where k is a non-negative integer. A function f is said to be of type "A""k"± if it lies in the orbit of formula_4 i.e. there exists a diffeomorphic change of coordinate in source and target which takes f into one of these forms. These simple forms formula_5 are said to give normal forms for the type "A""k"±-singularities. Notice that the "A"2"n"+ are the same as the "A"2"n"− since the diffeomorphic change of coordinate ("x", "y") → ("x", −"y") in the source takes formula_6 to formula_7 So we can drop the ± from "A"2"n" notation.
The cusps are then given by the zero-level-sets of the representatives of the "A"2"n" equivalence classes, where "n" ≥ 1 is an integer.
Examples.
An ordinary cusp is given by formula_8, the zero-level-set of a singularity of type "A"2. A rhamphoid cusp originally denoted a cusp with both branches on the same side of the tangent; the term is now used for any singularity in the differential class of the type "A"4 singularity, such as the curves of equation formula_9 and formula_10, and a parametric form of such a cusp is formula_11.
For a type "A"4-singularity we need f to have a degenerate quadratic part (this gives type "A"≥2), that L "does" divide the cubic terms (this gives type "A"≥3), another divisibility condition (giving type "A"≥4), and a final non-divisibility condition (giving type exactly "A"4).
To see where these extra divisibility conditions come from, assume that f has a degenerate quadratic part "L"2 and that L divides the cubic terms. It follows that the third-order Taylor series of f is given by formula_12 where Q is quadratic in x and y. We can complete the square to show that formula_13 We can now make a diffeomorphic change of variable (in this case we simply substitute polynomials with linearly independent linear parts) so that formula_14 where "P"1 is quartic (order four) in "x"1 and "y"1. The divisibility condition for type "A"≥4 is that "x"1 divides "P"1. If "x"1 does not divide "P"1 then we have type exactly "A"3 (the zero-level-set here is a tacnode). If "x"1 divides "P"1 we complete the square on formula_15 and change coordinates so that we have formula_16 where "P"2 is quintic (order five) in "x"2 and "y"2. If "x"2 does not divide "P"2 then we have exactly type "A"4, i.e. the zero-level-set will be a rhamphoid cusp.
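The completing-the-square step can be verified mechanically; the following SymPy sketch checks the identity for both signs, treating "L" and "Q" as plain symbols (that "L" is linear and "Q" quadratic does not affect the algebra).

```python
# Verify the completing-the-square identity used above:
#   L**2 + L*Q == (L + Q/2)**2 - Q**2/4   (and similarly with the minus sign).
# L and Q are treated as plain symbols; in the text L is linear and Q quadratic,
# which does not change the algebraic identity.
import sympy as sp

L, Q = sp.symbols('L Q')
for sign in (+1, -1):
    lhs = L**2 + sign * L * Q
    rhs = (L + sign * Q / 2)**2 - Q**2 / 4
    assert sp.simplify(lhs - rhs) == 0
print("identity holds for both signs")
```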
Applications.
Cusps appear naturally when projecting into a plane a smooth curve in three-dimensional Euclidean space. In general, such a projection is a curve whose singularities are self-crossing points and ordinary cusps. Self-crossing points appear when two different points of the curves have the same projection. Ordinary cusps appear when the tangent to the curve is parallel to the direction of projection (that is when the tangent projects on a single point). More complicated singularities occur when several phenomena occur simultaneously. For example, rhamphoid cusps occur for inflection points (and for undulation points) for which the tangent is parallel to the direction of projection.
In many cases, and typically in computer vision and computer graphics, the curve that is projected is the curve of the critical points of the restriction to a (smooth) spatial object of the projection. A cusp appears thus as a singularity of the contour of the image of the object (vision) or of its shadow (computer graphics).
Caustics and wave fronts are other examples of curves having cusps that are visible in the real world. | [
{
"math_id": 0,
"text": "\\begin{align}\nx &= f(t)\\\\\ny &= g(t),\n\\end{align}\n"
},
{
"math_id": 1,
"text": " \\lim (g'(t)/f'(t))"
},
{
"math_id": 2,
"text": "F(x,y) = 0,"
},
{
"math_id": 3,
"text": "\\begin{align}\nx &= at^m\\\\\ny &= S(t),\n\\end{align}\n"
},
{
"math_id": 4,
"text": "x^2 \\pm y^{k+1},"
},
{
"math_id": 5,
"text": "x^2 \\pm y^{k+1}"
},
{
"math_id": 6,
"text": "x^2 + y^{k+1}"
},
{
"math_id": 7,
"text": "x^2 - y^{2n+1}."
},
{
"math_id": 8,
"text": "x^2-y^3=0,"
},
{
"math_id": 9,
"text": "x^2-x^4-y^5=0."
},
{
"math_id": 10,
"text": "x^2-y^5=0,"
},
{
"math_id": 11,
"text": "x = t^2,\\, y = a x^4 + x^5."
},
{
"math_id": 12,
"text": "L^2 \\pm LQ,"
},
{
"math_id": 13,
"text": "L^2 \\pm LQ = (L \\pm Q/2)^2 - Q^4/4."
},
{
"math_id": 14,
"text": "(L \\pm Q/2)^2 - Q^4/4 \\to x_1^2 + P_1"
},
{
"math_id": 15,
"text": "x_1^2 + P_1"
},
{
"math_id": 16,
"text": "x_2^2 + P_2"
}
]
| https://en.wikipedia.org/wiki?curid=6851758 |
685179 | Schwinger's quantum action principle | Approach to quantum theory
The Schwinger's quantum action principle is a variational approach to quantum mechanics and quantum field theory. This theory was introduced by Julian Schwinger in a series of articles starting 1950.
Approach.
In Schwinger's approach, the action principle is targeted towards quantum mechanics. The action becomes a quantum action, i.e. an operator, formula_0. Although this is superficially different from the path integral formulation, where the action is a classical function, the modern formulations of the two formalisms are identical.
Suppose we have two states defined by the values of a complete set of commuting operators at two times. Let the early and late states be formula_1 and formula_2, respectively. Suppose that there is a parameter in the Lagrangian which can be varied, usually a source for a field. The main equation of Schwinger's quantum action principle is:
formula_3
where the variation formula_4 is taken with respect to small changes in the parameter, and formula_5 with formula_6 the Lagrangian operator.
In the path integral formulation, the transition amplitude is represented by the sum over all histories of formula_7, with appropriate boundary conditions representing the states formula_1 and formula_2. The infinitesimal change in the amplitude is clearly given by Schwinger's formula. Conversely, starting from Schwinger's formula, it is easy to show that the fields obey canonical commutation relations and the classical equations of motion, and so have a path integral representation. Schwinger's formulation was most significant because it could treat fermionic anticommuting fields with the same formalism as bose fields, thus implicitly introducing differentiation and integration with respect to anti-commuting coordinates. | [
{
"math_id": 0,
"text": " S "
},
{
"math_id": 1,
"text": "| A \\rang"
},
{
"math_id": 2,
"text": "| B \\rang"
},
{
"math_id": 3,
"text": " \\delta \\langle B|A\\rangle = i \\langle B| \\delta S |A\\rangle,\\ "
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "S=\\int \\mathcal{L} \\, \\mathrm{d}t"
},
{
"math_id": 6,
"text": "\\mathcal{L}"
},
{
"math_id": 7,
"text": "\\exp(iS)"
}
]
| https://en.wikipedia.org/wiki?curid=685179 |
68518 | Chemisorption | Phenomenon of surface adhesion
Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds.
In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species.
Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties.
The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent.
Uses.
An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface.
Self-assembled monolayers.
Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface.
Gas-surface chemisorption.
Adsorption kinetics.
Like other forms of adsorption, chemisorption follows a sequence of kinetic steps. The first stage is for the adsorbate particle to come into contact with the surface. The particle must be trapped on the surface by not possessing enough energy to leave the gas-surface potential well. If it collides elastically with the surface, it returns to the bulk gas. If it loses enough momentum through an inelastic collision, it "sticks" onto the surface, forming a precursor state bonded to the surface by weak forces, similar to physisorption. The particle then diffuses on the surface until it finds a deep chemisorption potential well, where it reacts with the surface or, given enough energy and time, simply desorbs.
The reaction with the surface is dependent on the chemical species involved. Applying the Gibbs energy equation for reactions:
formula_0
General thermodynamics states that for spontaneous reactions at constant temperature and pressure, the change in free energy must be negative. Since a free particle that becomes bound to a surface loses translational freedom, entropy is lowered (unless the adsorbed species remains highly mobile on the surface). This means that the enthalpy term must be negative, implying an exothermic reaction.
Physisorption is commonly modeled by a Lennard-Jones potential and chemisorption by a Morse potential. The two potential curves cross at some distance from the surface, which marks the point of transfer between the physisorbed and chemisorbed states. This crossover can occur above or below the zero-energy line (depending on the parameters of the Morse potential), corresponding to the presence or absence of an activation energy requirement. Most simple gases on clean metal surfaces lack the activation energy requirement.
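A rough numerical illustration of this picture is sketched below, in the common Lennard-Jones diagram for dissociative adsorption in which the chemisorption (Morse) curve is referenced to an assumed gas-phase dissociation energy so that the two curves cross; all parameter values are arbitrary assumptions chosen only to make the crossover visible.

```python
# Illustrative comparison of a shallow Lennard-Jones well (physisorption) with a deep
# Morse well (chemisorption), the latter shifted by an assumed dissociation energy, and
# the crossover point between the two curves.  All parameter values are assumptions.
import numpy as np

def physisorption(z, eps=0.05, z0=3.0):
    """Shallow Lennard-Jones well of depth eps (eV) at distance z0 (angstrom)."""
    return eps * ((z0 / z) ** 12 - 2 * (z0 / z) ** 6)

def chemisorption(z, D=3.0, a=1.5, z_eq=1.5, E_dis=1.0):
    """Deep Morse well of depth D at z_eq, shifted by an assumed dissociation energy E_dis."""
    return E_dis + D * ((1 - np.exp(-a * (z - z_eq))) ** 2 - 1)

z = np.linspace(1.8, 5.0, 4000)
diff = physisorption(z) - chemisorption(z)
idx = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]   # sign changes = crossings

for i in idx:
    z_c, e_c = z[i], physisorption(z[i])
    side = "above" if e_c > 0 else "below"
    # A crossing above the zero-energy line corresponds to an activated process.
    print(f"crossover near z = {z_c:.2f} angstrom, energy = {e_c:+.3f} eV ({side} the zero line)")
```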
Modeling.
For experimental setups of chemisorption, the amount of adsorption of a particular system is quantified by a sticking probability value.
However, chemisorption is very difficult to model theoretically. A multidimensional potential energy surface (PES) derived from effective medium theory is used to describe the effect of the surface on adsorption, but only certain parts of it are used depending on what is to be studied. A simple example of a PES takes the total energy as a function of location:
formula_1
where formula_2 is the energy eigenvalue of the Schrödinger equation for the electronic degrees of freedom and formula_3 is the ion-ion interaction. This expression does not include translational energy, rotational energy, vibrational excitations, and other such contributions.
There exist several models to describe surface reactions: the Langmuir–Hinshelwood mechanism in which both reacting species are adsorbed, and the Eley–Rideal mechanism in which one is adsorbed and the other reacts with it.
Real systems have many irregularities, making theoretical calculations more difficult:
Compared to physisorption where adsorbates are simply sitting on the surface, the adsorbates can change the surface, along with its structure. The structure can go through relaxation, where the first few layers change interplanar distances without changing the surface structure, or reconstruction where the surface structure is changed. A direct transition from physisorption to chemisorption has been observed by attaching a CO molecule to the tip of an atomic force microscope and measuring its interaction with a single iron atom.
For example, oxygen can form very strong bonds (~4 eV) with metals, such as Cu(110). This comes with the breaking of surface bonds as surface-adsorbate bonds form. A large restructuring occurs in the form of a missing-row reconstruction.
Dissociative chemisorption.
A particular kind of gas-surface chemisorption is the dissociation of diatomic gas molecules, such as hydrogen, oxygen, and nitrogen. One model used to describe the process is precursor mediation. The incoming molecule is first adsorbed onto the surface into a precursor state. The molecule then diffuses across the surface to the chemisorption sites, where the molecular bond is broken in favor of new bonds to the surface. The energy to overcome the activation barrier for dissociation usually comes from translational and vibrational energy.
An example is the hydrogen and copper system, one that has been studied many times over. It has a large activation energy of 0.35 – 0.85 eV. The vibrational excitation of the hydrogen molecule promotes dissociation on low index surfaces of copper.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta G = \\Delta H - T\\Delta S"
},
{
"math_id": 1,
"text": "E(\\{R_i\\}) = E_{el}(\\{R_i\\}) + V_{\\text{ion-ion}}(\\{R_i\\})"
},
{
"math_id": 2,
"text": "E_{el}"
},
{
"math_id": 3,
"text": "V_{ion-ion}"
}
]
| https://en.wikipedia.org/wiki?curid=68518 |
68520 | Physisorption | Process involving electronic structure
Physisorption, also called physical adsorption, is a process in which the electronic structure of the atom or molecule is barely perturbed upon adsorption.
Overview.
The fundamental interacting force of physisorption is Van der Waals force. Even though the interaction energy is very weak (~10–100 meV), physisorption plays an important role in nature. For instance, the van der Waals attraction between surfaces and foot-hairs of geckos (see Synthetic setae) provides the remarkable ability to climb up vertical walls. Van der Waals forces originate from the interactions between induced, permanent or transient electric dipoles.
In comparison with chemisorption, in which the electronic structure of bonding atoms or molecules is changed and covalent or ionic bonds form, physisorption does not result in changes to the chemical bonding structure. In practice, the categorisation of a particular adsorption as physisorption or chemisorption depends principally on the binding energy of the adsorbate to the substrate, with physisorption being far weaker on a per-atom basis than any type of connection involving a chemical bond.
Modeling by image charge.
To give a simple illustration of physisorption, we can first consider an adsorbed hydrogen atom in front of a perfect conductor, as shown in Fig. 1. A nucleus with positive charge is located at R = (0, 0, "Z"), and the position coordinate of its electron, r = ("x", "y", "z") is given with respect to the nucleus. The adsorption process can be viewed as the interaction between this hydrogen atom and its image charges of both the nucleus and electron in the conductor. As a result, the total electrostatic energy is the sum of attraction and repulsion terms:
formula_0
The first term is the attractive interaction of nucleus and its image charge, and the second term is due to the interaction of the electron and its image charge. The repulsive interaction is shown in the third and fourth terms arising from the interaction between the nucleus and the image electron, and, the interaction between the electron and the image nucleus, respectively.
By Taylor expansion in powers of |r| / |R|, this interaction energy can be further expressed as:
formula_1
One can find from the first non-vanishing term that the physisorption potential depends on the distance "Z" between adsorbed atom and surface as "Z"−3, in contrast with the "r"−6 dependence of the molecular van der Waals potential, where "r" is the distance between two dipoles.
Modeling by quantum-mechanical oscillator.
The van der Waals binding energy can be analyzed by another simple physical picture: modeling the motion of an electron around its nucleus by a three-dimensional simple harmonic oscillator with a potential energy "Va":
formula_2
where "me" and "ω" are the mass and vibrational frequency of the electron, respectively.
As this atom approaches the surface of a metal and forms adsorption, this potential energy "Va" will be modified due to the image charges by additional potential terms which are quadratic in the displacements:
formula_3 (from the Taylor expansion above.)
Assuming
formula_4
the potential is well approximated as
formula_5,
where
formula_6
If one assumes that the electron is in the ground state, then the van der Waals binding energy is essentially the change of the zero-point energy:
formula_7
This expression also shows the nature of the "Z"−3 dependence of the van der Waals interaction.
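The algebra above can be checked mechanically; the following SymPy sketch verifies that the change of zero-point energy computed from the two shifted frequencies indeed reduces to the quoted result formula_7.

```python
# Symbolic check of the zero-point-energy argument above: with the two shifted
# oscillator frequencies, (hbar/2)(2*w1 + w2 - 3*w) reduces to
# -hbar*e**2 / (16*pi*eps0*m*w*Z**3).  Purely a consistency check of the formulas.
import sympy as sp

hbar, e, eps0, m, w, Z = sp.symbols('hbar e varepsilon_0 m_e omega Z', positive=True)

w1 = w - e**2 / (32 * sp.pi * eps0 * m * w * Z**3)
w2 = w - e**2 / (16 * sp.pi * eps0 * m * w * Z**3)

V_v = sp.Rational(1, 2) * hbar * (2 * w1 + w2 - 3 * w)
expected = -hbar * e**2 / (16 * sp.pi * eps0 * m * w * Z**3)
print(sp.simplify(V_v - expected))   # prints 0
```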
Furthermore, by introducing the atomic polarizability,
formula_8
the van der Waals potential can be further simplified:
formula_9
where
formula_10
is the van der Waals constant which is related to the atomic polarizability.
Also, by expressing the fourth-order correction in the Taylor expansion above as ("aCvZ"0) / ("Z"4), where "a" is some constant, we can define "Z"0 as the position of the "dynamical image plane" and obtain
formula_11
The origin of "Z"0 comes from the spilling of the electron wavefunction out of the surface. As a result, the position of image plane representing the reference for the space coordinate is different from the substrate surface itself and modified by "Z"0.
Table 1 shows the jellium model calculation for van der Waals constant "Cv" and dynamical image plane "Z"0 of rare gas atoms on various metal surfaces. The increasing of "Cv" from He to Xe for all metal substrates is caused by the larger atomic polarizability of the heavier rare gas atoms. For the position of the dynamical image plane, it decreases with increasing dielectric function and is typically on the order of 0.2 Å.
Physisorption potential.
Even though the van der Waals interaction is attractive, as the adsorbed atom moves closer to the surface the wavefunction of its electron starts to overlap with those of the surface atoms. Further, the energy of the system increases due to the orthogonality of the wavefunctions of the approaching atom and the surface atoms.
This Pauli exclusion and repulsion are particularly strong for atoms with closed valence shells that dominate the surface interaction. As a result, the minimum energy of physisorption must be found by the balance between the long-range van der Waals attraction and short-range Pauli repulsion. For instance, by separating the total interaction of physisorption into two contributions—a short-range term described by Hartree–Fock theory and a long-range van der Waals attraction—the equilibrium position of physisorption for rare gases adsorbed on jellium substrate can be determined. Fig. 2 shows the physisorption potential energy of He adsorbed on Ag, Cu, and Au substrates which are described by the jellium model with different densities of smeared-out background positive charges. It can be found that the weak van der Waals interaction leads to shallow attractive energy wells (<10 meV). One of the experimental methods for exploring physisorption potential energy is the scattering process, for instance, inert gas atoms scattered from metal surfaces. Certain specific features of the interaction potential between the scattered atoms and the surface can be extracted by analyzing the experimentally determined angular distribution and cross sections of the scattered particles.
Quantum mechanical – thermodynamic modelling for surface area and porosity.
Since 1980, two theories have been developed to explain adsorption and obtain working equations. These are referred to as the chi hypothesis (a quantum mechanical derivation) and excess surface work (ESW). Both theories yield the same equation for flat surfaces:
formula_12
Where "U" is the unit step function. The definitions of the other symbols is as follows:
formula_13
where "ads" stands for "adsorbed", "m" stands for "monolayer equivalence" and "vap" is reference to the vapor pressure ("ads" and "vap" are the latest IUPAC convention but "m" has no IUAPC equivalent notation) of the liquid adsorptive at the same temperature as the solid sample. The unit function creates the definition of the molar energy of adsorption for the first adsorbed molecule by:
formula_16
The plot of formula_17 adsorbed versus formula_14 is referred to as the chi plot. For flat surfaces, the slope of the chi plot yields the surface area. Empirically, this plot was noticed to be a very good fit to the isotherm by Polanyi and also by de Boer and Zwikker, but not pursued. This was due to criticism in the former case by Einstein and in the latter case by Brunauer. This flat surface equation may be used as a "standard curve" in the normal tradition of comparison curves, with the exception that the early portion of the porous sample's plot of formula_17 versus formula_14 acts as a self-standard. Ultramicroporous, microporous and mesoporous conditions may be analyzed using this technique. Standard deviations for full isotherm fits, including porous samples, are typically less than 2%.
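A minimal sketch of a chi-plot analysis on synthetic data follows; the isotherm is generated from the flat-surface equation itself with added noise, so the fit simply recovers the assumed parameters, and no conversion of the monolayer equivalence into an absolute surface area is attempted.

```python
# Sketch of a chi-plot fit on synthetic data: n_ads = n_m*(chi - chi_c)*U(chi - chi_c),
# with chi = -ln(-ln(P/P_vap)).  The "measured" isotherm is generated from this very
# equation plus noise, so the linear fit recovers the assumed n_m and chi_c.
import numpy as np

rng = np.random.default_rng(1)
n_m_true, chi_c_true = 5.0, -1.2              # assumed monolayer amount and chi_c

p_rel = np.linspace(0.01, 0.95, 60)           # relative pressures P / P_vap
chi = -np.log(-np.log(p_rel))
theta = np.clip(chi - chi_c_true, 0.0, None)  # (chi - chi_c) * U(chi - chi_c)
n_ads = n_m_true * theta + rng.normal(0.0, 0.05, chi.size)

# Fit the linear region of the chi plot: slope = n_m, intercept = -n_m * chi_c.
# (In a real analysis the linear region would be identified from the data itself.)
mask = chi > chi_c_true + 0.1
slope, intercept = np.polyfit(chi[mask], n_ads[mask], 1)
print(f"fitted n_m = {slope:.2f}, fitted chi_c = {-intercept / slope:.2f}")
# E_a then follows from chi_c = -ln(-E_a/RT); the slope is proportional to the surface area.
```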
A typical fit to good data on a homogeneous non-porous surface is shown in figure 3. The data are by Payne, Sing and Turk and were used to create the formula_15-s standard curve. Unlike the BET equation, which at best can only be fit over the relative pressure range of about 0.05 to 0.35 of "P"/"P"vap, the range of the fit here is the full isotherm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = {e^2\\over 4\\pi\\varepsilon_0}\\left(\\frac{-1}{|2\\mathbf R|}+\\frac{-1}{|2\\mathbf R+\\mathbf r-\\mathbf r'|}+\\frac{1}{|2\\mathbf R-\\mathbf r'|}+\\frac{1}{|2\\mathbf R+\\mathbf r|}\\right)."
},
{
"math_id": 1,
"text": "V = {-e^2\\over 16\\pi\\varepsilon_0 Z^3}\\left(\\frac{x^2+y^2}{2}+z^2\\right)+ {3e^2\\over 32\\pi\\varepsilon_0 Z^4}\\left(\\frac{x^2+y^2}{2}{z}+z^3\\right)+O\\left(\\frac{1}{Z^5}\\right)."
},
{
"math_id": 2,
"text": "V_a = \\frac{m_e}{2}{\\omega^2}(x^2+y^2+z^2),"
},
{
"math_id": 3,
"text": "V_a = \\frac{m_e}{2}{\\omega^2}(x^2+y^2+z^2)-{e^2\\over 16\\pi\\varepsilon_0 Z^3}\\left(\\frac{x^2+y^2}{2}+z^2\\right)+\\ldots"
},
{
"math_id": 4,
"text": " m_e \\omega^2\\gg{e^2\\over 16\\pi\\varepsilon_0 Z^3},"
},
{
"math_id": 5,
"text": "V_a \\sim \\frac{m_e}{2}{\\omega_1^2}(x^2+y^2)+\\frac{m_e}{2}{\\omega_2^2}z^2"
},
{
"math_id": 6,
"text": "\n\\begin{align}\n\\omega_1 &= \\omega - {e^2\\over 32\\pi\\varepsilon_0 m_e\\omega Z^3},\\\\\n\\omega_2 &= \\omega - {e^2\\over 16\\pi\\varepsilon_0 m_e\\omega Z^3}.\n\\end{align}\n"
},
{
"math_id": 7,
"text": "V_v = \\frac{\\hbar}{2}(2\\omega_1+\\omega_2-3\\omega)= - {\\hbar e^2\\over 16\\pi\\varepsilon_0 m_e\\omega Z^3}."
},
{
"math_id": 8,
"text": " \\alpha= \\frac {e^2} {m_e\\omega^2},"
},
{
"math_id": 9,
"text": "V_v = - {\\hbar \\alpha \\omega\\over 16\\pi\\varepsilon_0 Z^3}= -\\frac{C_v}{Z^3},"
},
{
"math_id": 10,
"text": "C_v = {\\hbar \\alpha \\omega\\over 16\\pi\\varepsilon_0},"
},
{
"math_id": 11,
"text": "V_v = - \\frac{C_v}{(Z-Z_0)^3}+O\\left(\\frac{1}{Z^5}\\right)."
},
{
"math_id": 12,
"text": "\\theta=(\\chi-\\chi_\\text{c})U(\\chi-\\chi_\\text{c})"
},
{
"math_id": 13,
"text": "\\theta:=n_\\text{ads}/n_\\text{m} \\quad,\\quad \\chi := -\\ln\\bigl(-\\ln\\bigl(P/P_{\\text{vap}}\\bigr)\\bigr)"
},
{
"math_id": 14,
"text": "\\chi"
},
{
"math_id": 15,
"text": "\\alpha"
},
{
"math_id": 16,
"text": "\\chi_\\text{c} =:-\\ln\\bigl(-E_\\text{a}/RT\\bigr) "
},
{
"math_id": 17,
"text": "n_{ads}"
}
]
| https://en.wikipedia.org/wiki?curid=68520 |
68522044 | Polonium tetranitrate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Polonium tetranitrate is an inorganic compound, a salt of polonium and nitric acid with the chemical formula Po(NO3)4. The compound is radioactive and forms white crystals.
Synthesis.
Dissolution of metallic polonium in concentrated nitric acid:
formula_0
Physical properties.
Polonium(IV) nitrate forms white or colorless crystals. It dissolves in water with hydrolysis.
Chemical properties.
It disproportionates in aqueous weakly acidic nitric acid solutions:
formula_1
The polonium(II) ion (Po2+) is then oxidized by nitric acid to polonium(IV).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Po + 8 HNO_3 \\ \\xrightarrow{}\\ Po(NO_3)_4 + 4 NO_2\\uparrow + 4 H_2O }"
},
{
"math_id": 1,
"text": "\\mathsf{ 2 Po(NO_3)_4 + 2H_2O \\ \\xrightarrow{}\\ PoO_2(NO_3)_2\\downarrow + Po^{{2+}} + 2NO_3^- + 4HNO_3 }"
}
]
| https://en.wikipedia.org/wiki?curid=68522044 |
68532798 | Five-qubit error correcting code | The five-qubit error correcting code is the smallest quantum error correcting code that can protect a logical qubit from any arbitrary single qubit error. In this code, 5 physical qubits are used to encode the logical qubit. With formula_0 and formula_1 being Pauli matrices and formula_2 the Identity matrix, this code's generators are formula_3. Its logical operators are formula_4 and formula_5. Once the logical qubit is encoded, errors on the physical qubits can be detected via stabilizer measurements. A lookup table that maps the results of the stabilizer measurements to the types and locations of the errors gives the control system of the quantum computer enough information to correct errors.
Measurements.
Stabilizer measurements are parity measurements that measure the stabilizers of physical qubits.
For example, to measure the first stabilizer (formula_7), a joint parity measurement of formula_0 on the first qubit, formula_1 on the second, formula_1 on the third, formula_0 on the fourth, and formula_2 on the fifth is performed.
Since there are four stabilizers, four ancilla qubits are used to measure them. The resulting bit string read out from the ancillas is the syndrome, which encodes the type of error that occurred and its location.
A logical qubit can be measured in the computational basis by performing a parity measurement on formula_6. If the measured ancilla is formula_8, the logical qubit is formula_9. If the measured ancilla is formula_10, the logical qubit is formula_11.
Error correction.
It is possible to compute all the single-qubit errors that can occur and how to correct them. This is done by calculating which stabilizers each error commutes or anticommutes with. For example, if there is an formula_0 error on the first qubit and no errors on the others (formula_12), it commutes with the first stabilizer, formula_13. This means that if an X error occurs on the first qubit, the first ancilla qubit will read 0. For the second ancilla qubit, formula_14; for the third, formula_15; and for the fourth, formula_16. So if an X error occurs on the first qubit, the syndrome will be formula_17, which is shown in the table below, to the right of formula_18. Similar calculations are carried out for all other possible errors to fill out the table.
To correct an error, the same operation is performed on the physical qubit based on its syndrome. If the syndrome is formula_17, an formula_0 gate is applied to the first qubit to reverse the error.
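The syndrome table can be generated mechanically. The following sketch, in plain Python without any quantum-computing library, computes the syndrome of every single-qubit Pauli error by checking which generators it anticommutes with; the bit ordering follows the generator order used above.

```python
# Build the syndrome lookup table of the five-qubit code by checking, for each
# single-qubit Pauli error, which stabilizer generators it anticommutes with.
# Two Pauli strings anticommute iff they differ (and are both non-identity) on an
# odd number of qubits.
generators = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def anticommutes(p, q):
    count = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return count % 2 == 1

def syndrome(error):
    return "".join("1" if anticommutes(error, g) else "0" for g in generators)

table = {}
for qubit in range(5):
    for pauli in "XYZ":
        error = "I" * qubit + pauli + "I" * (4 - qubit)
        table[error] = syndrome(error)

print(table["XIIII"])   # '0001', matching the X error on the first qubit discussed above
for error, s in sorted(table.items(), key=lambda kv: kv[1]):
    print(s, error)     # each of the 15 single-qubit errors has its own distinct syndrome
```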
Encoding.
The first step in executing error corrected quantum computation is to encode the computer's initial state by transforming the physical qubits into logical codewords. The logical codewords for the five qubit code are
formula_19
formula_20
Stabilizer measurements followed by a formula_6 measurement can be used to encode a logical qubit into 5 physical qubits. To prepare formula_9, perform stabilizer measurements and apply error correction. After error correction, the logical state is guaranteed to be a logical codeword. If the result of measuring formula_6 is formula_8, the logical state is formula_9. If the result is formula_10, the logical state is formula_11 and applying formula_22 will transform it into formula_9.
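A minimal numerical sketch of these codewords follows, using NumPy only: it builds formula_9 by projecting the all-zeros state onto the joint +1 eigenspace of the four generators and checks the result against the properties stated above. The projector construction is a standard stabilizer-code technique used here for illustration; it is not the measurement-based procedure described in the text.

```python
# Construct |0_L> numerically by projecting |00000> onto the joint +1 eigenspace of
# the four stabilizer generators, then check the codeword properties stated above.
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
P = {"I": I, "X": X, "Z": Z}

def pauli(string):
    return reduce(np.kron, [P[c] for c in string])

generators = [pauli(s) for s in ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]]
logical_Z = pauli("ZZZZZ")
logical_X = pauli("XXXXX")

# Apply Prod_i (I + S_i)/2 to |00000>: the result is proportional to |0_L>, since it
# remains a +1 eigenstate of ZZZZZ (which commutes with every generator).
state = np.zeros(32)
state[0] = 1.0
for S in generators:
    state = (state + S @ state) / 2
zero_L = state / np.linalg.norm(state)
one_L = logical_X @ zero_L

for S in generators:                      # both codewords are stabilized by every generator
    assert np.allclose(S @ zero_L, zero_L) and np.allclose(S @ one_L, one_L)
assert np.allclose(logical_Z @ zero_L, zero_L)   # Z_L |0_L> = +|0_L>
assert np.allclose(logical_Z @ one_L, -one_L)    # Z_L |1_L> = -|1_L>
print(np.round(zero_L * 4, 3))            # 16 nonzero amplitudes of +-1/4, as written above
```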
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Z"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": " \\langle XZZXI, IXZZX, XIXZZ,ZXIXZ \\rangle"
},
{
"math_id": 4,
"text": "\\bar{X} = XXXXX "
},
{
"math_id": 5,
"text": "\\bar{Z} = ZZZZZ"
},
{
"math_id": 6,
"text": "\\bar{Z}"
},
{
"math_id": 7,
"text": "XZZXI"
},
{
"math_id": 8,
"text": " 0 "
},
{
"math_id": 9,
"text": "|0_{\\rm L}\\rangle"
},
{
"math_id": 10,
"text": "1"
},
{
"math_id": 11,
"text": "|1_{\\rm L}\\rangle"
},
{
"math_id": 12,
"text": "X_{1} = XIIII"
},
{
"math_id": 13,
"text": "[XIIII, XZZXI] = 0"
},
{
"math_id": 14,
"text": "[XIIII, IXZZX] = 0"
},
{
"math_id": 15,
"text": "[XIIII, XIXZZ] = 0"
},
{
"math_id": 16,
"text": "[XIIII, ZXIXZ] \\neq 0"
},
{
"math_id": 17,
"text": "0001"
},
{
"math_id": 18,
"text": "X_{1}"
},
{
"math_id": 19,
"text": "\\begin{align}\n|0_{\\rm L} \\rangle = \\frac{1}{4}[ &|00000\\rangle + |10010\\rangle + |01001\\rangle + |10100\\rangle + |01010\\rangle - |11011\\rangle - |00110\\rangle - |11000\\rangle \n\\\\ - &|11101\\rangle - |00011\\rangle - |11110\\rangle - |01111\\rangle - |10001\\rangle - |01100\\rangle - |10111\\rangle + |00101\\rangle],\n\\end{align}"
},
{
"math_id": 20,
"text": "\\begin{align}\n|1_{\\rm L} \\rangle=\\frac{1}{4}[ &|11111\\rangle + |01101\\rangle + |10110\\rangle + |01011\\rangle + |10101\\rangle - |00100\\rangle - |11001\\rangle - |00111\\rangle \n\\\\- &|00010\\rangle - |11100\\rangle - |00001\\rangle - |10000\\rangle - |01110\\rangle - |10011\\rangle - |01000\\rangle + |11010\\rangle ].\n\\end{align}\n"
},
{
"math_id": 21,
"text": "|1\\rangle"
},
{
"math_id": 22,
"text": "\\bar{X}"
}
]
| https://en.wikipedia.org/wiki?curid=68532798 |
68551165 | Fractional approval voting |
In fractional social choice, fractional approval voting refers to a class of electoral systems using approval ballots (each voter selects one or more candidate alternatives), in which the outcome is "fractional": for each alternative "j" there is a fraction "pj" between 0 and 1, such that the sum of "pj" is 1. It can be seen as a generalization of approval voting: in the latter, one candidate wins ("pj" = 1) and the other candidates lose ("pj" = 0). The fractions "pj" can be interpreted in various ways, depending on the setting. Examples are:
Fractional approval voting is a special case of fractional social choice in which all voters have "dichotomous preferences". It appears in the literature under many different terms: lottery, sharing, portioning, mixing and distribution.
Formal definitions.
There is a finite set "C" of "candidates" (also called: "outcomes" or "alternatives"), and a finite set "N" of "n voters" (also called: "agents"). Each voter "i" specifies a subset "Ai" of "C", which represents the set of candidates that the voter "approves".
A "fractional approval voting" rule takes as input the set of sets "Ai", and returns as output a "mixture" (also called: "distribution" or "lottery") - a vector "p" of real numbers in [0,1], one number for each candidate, such that the sum of numbers is 1.
It is assumed that each agent "i" gains a utility of 1 from each candidate in his approval set "Ai", and a utility of 0 from each candidate not in "Ai". Hence, agent "i" gains from each mixture "p", a utility of formula_0. For example, if the mixture "p" is interpreted as a budget distribution, then the utility of "i" is the total budget allocated to outcomes he likes.
Desired properties.
Efficiency properties.
Pareto-efficiency (PE) means no mixture gives a higher utility to one agent and at least as high utility to all others.
Ex-post PE is a weaker property, relevant only for the interpretation of a mixture as a lottery. It means that, after the lottery, no outcome gives a higher utility to one agent and at least as high utility to all others (in other words, it is a mixture over PE outcomes). For example, suppose there are 5 candidates (a,b,c,d,e) and 6 voters with approval sets (ac, ad, ae, bc, bd, be). Selecting any single candidate is PE, so every lottery is ex-post PE. But the lottery selecting c,d,e with probability 1/3 each is not PE, since it gives an expected utility of 1/3 to each voter, while the lottery selecting a,b with probability 1/2 each gives an expected utility of 1/2 to each voter.
PE always implies ex-post PE. The opposite is also true in the following cases:
Fairness properties.
Fairness requirements are captured by variants of the notion of fair share (FS).
Individual"-"FS (also called Fair Welfare Share) means that the utility of each voter "i" is at least 1/"n", that is, at least 1/"n" of the budget is allocated to candidates approved by "i".
Individual-Outcome-FS means that the utility of each voter "i" is at least his utility in a lottery that selects a candidate randomly, that is, at least "k"/|"C"|, where "k" is the number of candidates approved by "i".
Single-vote-FS (also called faithful) means that, if each voter approves a single candidate, then the fraction assigned to each candidate "j" equals the number of voters who approve "j" divided by "n".
Unanimous-FS means that, for each set "S" of voters with "identical" preferences, the utility of each member in "S" is at least |"S"|/"n."
Group-FS (also called "proportional sharing") means that, for each voter set "S", the total budget allocated to candidates approved by "at least one" member of "S", is at least |"S"|/"n."
Average-FS means that, for each voter set "S" with at least one approved candidate in common, the average utility of voters in "S" is at least |"S"|/"n."
Core-FS means that, for each voter set "S", there is no other distribution of their |"S"|/"n" budget, which gives all members of "S" a higher utility.
Strategic properties.
Several variants of strategyproofness (SP) have been studied for voting rules:
A weaker variant of SP is excludable SP. It is relevant in situations where it is possible to exclude voters from using some candidate alternatives. For example, if the candidates are meeting times, then it is possible to exclude voters from participating in the meeting in times which they did not approve. This makes it harder to manipulate, and therefore, the requirement is weaker.
Participation properties.
Rules should encourage voters to participate in the voting process. Several participation criteria have been studied:
A stronger property is required in participatory budgeting settings in which the budget to distribute is donated by the voters themselves:
Rules.
Utilitarian rule.
The utilitarian rule aims to maximize the sum of utilities, and therefore it distributes the entire budget among the candidates approved by the largest number of voters. In particular, if there is one candidate with the largest number of votes, then this candidate gets 1 (that is, all the budget) and the others get 0, as in single-winner approval voting. If there are some "k" candidates with the same largest number of votes, then the budget is distributed equally among them, giving 1/"k" to each such candidate and 0 to all others. The utilitarian rule has several desirable properties: it is anonymous, neutral, PE, individual-SP, and preference-monotone. It is also easy to compute.
However, it is not fair towards minorities - it violates Individual-FS (as well as all stronger variants of FS). For example, if 51% of the voters approve X and 49% of the voters approve Y, then the utilitarian rule gives all the budget to X and no budget at all to Y, so the 49% who vote for Y get a utility of 0. In other words, it allows for tyranny of the majority.
The utilitarian rule is also not weak-group-SP (and hence not group-SP). For example, suppose there are 3 candidates (a,b,c) and 3 voters, each of them approves a single candidate. If they vote sincerely, then the utilitarian mixture is (1/3,1/3,1/3) so each agent's utility is 1/3. If a "single" voter votes insincerely (say, the first one votes for both a and b), then the mixture is (0,1,0), which is worse for the insincere voter. However, if "two" voters collude and vote insincerely (say, the first two voters vote for the first two outcomes), then the utilitarian mixture is (1/2, 1/2, 0), which is "better" for both insincere voters.
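The following sketch implements the utilitarian rule in plain Python and reproduces the manipulation example above; it is an illustration only, with the same three voters and candidates as in the text.

```python
# Utilitarian rule for fractional approval voting, reproducing the group-manipulation
# example above (3 voters, candidates a, b, c).
from collections import Counter

def utilitarian(approvals, candidates):
    """Split the budget equally among the candidates with the most approvals."""
    counts = Counter(c for ballot in approvals for c in ballot)
    top = max(counts[c] for c in candidates)
    winners = [c for c in candidates if counts[c] == top]
    return {c: (1.0 / len(winners) if c in winners else 0.0) for c in candidates}

candidates = ["a", "b", "c"]
truthful = [{"a"}, {"b"}, {"c"}]
p = utilitarian(truthful, candidates)
print(p)   # each candidate gets 1/3, so every voter's utility is 1/3

# Voters 1 and 2 jointly misreport {a, b}: the mixture becomes (1/2, 1/2, 0), giving
# both of them true utility 1/2 > 1/3, so the rule is not weakly group-strategyproof.
manipulated = [{"a", "b"}, {"a", "b"}, {"c"}]
q = utilitarian(manipulated, candidates)
print(q, "true utility of voter 1:", q["a"], "true utility of voter 2:", q["b"])
```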
Nash-optimal rule.
The Nash-optimal rule maximizes the sum of "logarithms" of utilities. It is anonymous and neutral, and satisfies the following additional properties:
The Nash-optimal rule can be computed by solving a convex program. There is another rule, called fair utilitarian, which satisfies similar properties (PE and group-FS) but is easier to compute.
Egalitarian rule.
The egalitarian (leximin) rule maximizes the smallest utility, then the next-smallest, etc. It is anonymous and neutral, and satisfies the following additional properties:
Other welfarist rules.
For any monotonically increasing function "f", one can maximize the sum of "f"("ui"). The utilitarian rule is a special case where f("x")="x", and the Nash rule is a special case where f("x")=log("x"). Every "f"-maximizing rule is PE, and has the following additional properties:
Priority rules.
A priority rule (also called "serial dictatorship") is parametrized by a permutation of the voters, representing a priority ordering. It selects an outcome that maximizes the utility of the highest-priority agent; subject to that, maximizes the utility of the second-highest-priority agent; and so on. Every priority rule is neutral, PE, weak-group-SP, and preference-monotone. However, it is not anonymous and does not satisfy any fairness notion.
The random priority rule selects a permutation of the voters uniformly at random, and then implements the priority rule for that permutation. It is anonymous, neutral, and satisfies the following additional properties:
A disadvantage of this rule is that it is computationally-hard to find the exact probabilities (see Dictatorship mechanism#Computation).
Conditional utilitarian rule.
In the conditional utilitarian rule, each agent receives 1/"n" of the total budget. Each agent finds, among the candidates that he approves, those that are supported by the largest number of other agents, and splits his budget equally among them. It is anonymous and neutral, and satisfies the following additional properties:
Majoritarian rule.
The majoritarian rule aims to concentrate as much power as possible in the hands of a few candidates, while still guaranteeing fairness. It proceeds in rounds. Initially, all candidates and voters are active. In each round, the rule selects an active candidate "c" who is approved by the largest set of active voters, "Nc". Then, the rule "assigns" these voters "Nc" to "c", that is, it assumes that the voters in "Nc" voted "only" for "c", and assigns "c" the fraction |"Nc"|/"n". Then, the candidate "c" and the voters in "Nc" become inactive, and the rule proceeds to the next round. Note that the conditional-utilitarian rule is similar, except that the voters in "Nc" do not become inactive.
The majoritarian rule is anonymous, neutral, guarantees individual-FS and single-vote-FS.
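A sketch of this procedure in plain Python follows; the tie-breaking order, which the description leaves unspecified, is an arbitrary choice here, and the ballot profile is the same six-voter example used earlier in the article.

```python
# Majoritarian rule: in each round, pick the active candidate approved by the most
# active voters, give it that fraction of the budget, and deactivate those voters and
# that candidate.  Ties are broken by candidate name (an arbitrary choice).
def majoritarian(approvals, candidates):
    n = len(approvals)
    active_voters = set(range(n))
    active_candidates = list(candidates)
    share = {c: 0.0 for c in candidates}
    while active_voters and active_candidates:
        supporters = {c: {i for i in active_voters if c in approvals[i]}
                      for c in active_candidates}
        best = max(active_candidates, key=lambda c: (len(supporters[c]), c))
        if not supporters[best]:
            break          # remaining active voters approve no active candidate
        share[best] = len(supporters[best]) / n
        active_voters -= supporters[best]
        active_candidates.remove(best)
    return share

approvals = [{"a", "c"}, {"a", "d"}, {"a", "e"}, {"b", "c"}, {"b", "d"}, {"b", "e"}]
print(majoritarian(approvals, ["a", "b", "c", "d", "e"]))
# -> a and b each receive 1/2; every voter approves a funded candidate, so each voter's
#    utility (here 1/2) is at least the individual fair share 1/6.
```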
Impossibility results.
Some combinations of properties cannot be attained simultaneously.
Summary table.
In the table below, the number in each cell represents the "strength" of the property: 0 means none (the property is not satisfied); 1 corresponds to the weak variant of the property; 2 corresponds to a stronger variant; etc. | [
{
"math_id": 0,
"text": "\\sum_{j\\in A_i} p_j"
}
]
| https://en.wikipedia.org/wiki?curid=68551165 |
6855275 | Courant bracket | In a field of mathematics known as differential geometry, the Courant bracket is a generalization of the Lie bracket from an operation on the tangent bundle to an operation on the direct sum of the tangent bundle and the vector bundle of "p"-forms.
The case "p" = 1 was introduced by Theodore James Courant in his 1990 doctoral dissertation as a structure that bridges Poisson geometry and pre-symplectic geometry, based on work with his advisor Alan Weinstein. The twisted version of the Courant bracket was introduced in 2001 by Pavol Severa, and studied in collaboration with Weinstein.
Today a complex version of the "p"=1 Courant bracket plays a central role in the field of generalized complex geometry, introduced by Nigel Hitchin in 2002. Closure under the Courant bracket is the integrability condition of a generalized almost complex structure.
Definition.
Let "X" and "Y" be vector fields on an N-dimensional real manifold "M" and let "ξ" and "η" be "p"-forms. Then "X+ξ" and "Y+η" are sections of the direct sum of the tangent bundle and the bundle of "p"-forms. The Courant bracket of "X+ξ" and "Y+η" is defined to be
formula_0
where formula_1 is the Lie derivative along the vector field "X", "d" is the exterior derivative and "i" is the interior product.
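As an illustration of this definition in the case "p" = 1, the following sketch evaluates the bracket in coordinates on R3 with SymPy, using the standard component formulas for the Lie bracket, the Lie derivative of a one-form, the exterior derivative of a function, and the interior product; the two sections chosen are arbitrary.

```python
# Compute the p = 1 Courant bracket in coordinates on R^3, using the component formulas
#   [X,Y]^i = X^j d_j Y^i - Y^j d_j X^i,   (L_X eta)_i = X^j d_j eta_i + eta_j d_i X^j,
#   i(X)eta = X^j eta_j,   (df)_i = d_i f.
import sympy as sp

x, y, z = coords = sp.symbols('x y z')

def lie_bracket(X, Y):
    return [sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
                for j in range(3)) for i in range(3)]

def lie_derivative_oneform(X, eta):
    return [sum(X[j] * sp.diff(eta[i], coords[j]) + eta[j] * sp.diff(X[j], coords[i])
                for j in range(3)) for i in range(3)]

def interior(X, eta):
    return sum(X[j] * eta[j] for j in range(3))

def courant(X, xi, Y, eta):
    vec = [sp.simplify(v) for v in lie_bracket(X, Y)]
    LXeta, LYxi = lie_derivative_oneform(X, eta), lie_derivative_oneform(Y, xi)
    f = interior(X, eta) - interior(Y, xi)
    form = [sp.simplify(LXeta[i] - LYxi[i] - sp.Rational(1, 2) * sp.diff(f, coords[i]))
            for i in range(3)]
    return vec, form

# Example sections: X + xi = y*d_x + x*d_z + z*dx  and  Y + eta = d_y + x*dy + y*dz.
X,  xi  = [y, 0, x], [z, 0, 0]
Y,  eta = [0, 1, 0], [0, x, y]
vec_part, form_part = courant(X, xi, Y, eta)
print("vector part:", vec_part)    # here [X, Y] = -d_x, i.e. [-1, 0, 0]
print("one-form part:", form_part)
```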
Properties.
The Courant bracket is antisymmetric but it does not satisfy the Jacobi identity for "p" greater than zero.
The Jacobi identity.
However, at least in the case "p=1", the Jacobiator, which measures a bracket's failure to satisfy the Jacobi identity, is an exact form. It is the exterior derivative of a form which plays the role of the Nijenhuis tensor in generalized complex geometry.
The Courant bracket is the antisymmetrization of the Dorfman bracket, which does satisfy a kind of Jacobi identity.
Symmetries.
Like the Lie bracket, the Courant bracket is invariant under diffeomorphisms of the manifold "M". It also enjoys an additional symmetry under the vector bundle automorphism
formula_2
where "α" is a closed "p+1"-form. In the "p=1" case, which is the relevant case for the geometry of flux compactifications in string theory, this transformation is known in the physics literature as a shift in the B field.
Dirac and generalized complex structures.
The cotangent bundle, formula_3 of M is the bundle of differential one-forms. In the case "p"=1 the Courant bracket maps two sections of formula_4, the direct sum of the tangent and cotangent bundles, to another section of formula_4. The fibers of formula_4 admit inner products with signature (N,N) given by
formula_5
A linear subspace of formula_4 in which all pairs of vectors have zero inner product is said to be an isotropic subspace. The fibers of formula_4 are "2N"-dimensional and the maximal dimension of an isotropic subspace is "N". An "N"-dimensional isotropic subspace is called a maximal isotropic subspace.
A Dirac structure is a maximally isotropic subbundle of formula_4 whose sections are closed under the Courant bracket. Dirac structures include as special cases symplectic structures, Poisson structures and foliated geometries.
A generalized complex structure is defined identically, but one tensors formula_4 by the complex numbers and uses the complex dimension in the above definitions and one imposes that the direct sum of the subbundle and its complex conjugate be the entire original bundle (Tformula_6T*)formula_7C. Special cases of generalized complex structures include complex structure and a version of Kähler structure which includes the B-field.
Dorfman bracket.
In 1987 Irene Dorfman introduced the Dorfman bracket [,]D, which like the Courant bracket provides an integrability condition for Dirac structures. It is defined by
formula_8.
The Dorfman bracket is not antisymmetric, but it is often easier to calculate with than the Courant bracket because it satisfies a Leibniz rule which resembles the Jacobi identity
formula_9
Courant algebroid.
The Courant bracket does not satisfy the Jacobi identity and so it does not define a Lie algebroid, in addition it fails to satisfy the Lie algebroid condition on the anchor map. Instead it defines a more general structure introduced by Zhang-Ju Liu, Alan Weinstein and Ping Xu known as a Courant algebroid.
Twisted Courant bracket.
Definition and properties.
The Courant bracket may be twisted by a "(p+2)"-form "H", by adding the interior product of the vector fields "X" and "Y" with "H". It remains antisymmetric and invariant under the addition of the interior product with a "(p+1)"-form "B". When "B" is not closed, this invariance is still preserved if one adds "dB" to the final "H".
If "H" is closed then the Jacobiator is exact and so the twisted Courant bracket still defines a Courant algebroid. In string theory, "H" is interpreted as the Neveu–Schwarz 3-form.
"p=0": Circle-invariant vector fields.
When "p"=0 the Courant bracket reduces to the Lie bracket on a principal circle bundle over "M" with curvature given by the 2-form twist "H". The bundle of 0-forms is the trivial bundle, and a section of the direct sum of the tangent bundle and the trivial bundle defines a circle invariant vector field on this circle bundle.
Concretely, a section of the sum of the tangent and trivial bundles is given by a vector field "X" and a function "f" and the Courant bracket is
formula_10
which is just the Lie bracket of the vector fields
formula_11
where "θ" is a coordinate on the circle fiber. Note in particular that the Courant bracket satisfies the Jacobi identity in the case "p=0".
Integral twists and gerbes.
The curvature of a circle bundle always represents an integral cohomology class, the Chern class of the circle bundle. Thus the above geometric interpretation of the twisted "p=0" Courant bracket only exists when "H" represents an integral class. Similarly at higher values of "p" the twisted Courant brackets can be geometrically realized as untwisted Courant brackets twisted by gerbes when "H" is an integral cohomology class. | [
{
"math_id": 0,
"text": "[X+\\xi,Y+\\eta]=[X,Y]\n+\\mathcal{L}_X\\eta-\\mathcal{L}_Y\\xi\n-\\frac{1}{2}d(i(X)\\eta-i(Y)\\xi)"
},
{
"math_id": 1,
"text": "\\mathcal{L}_X"
},
{
"math_id": 2,
"text": "X+\\xi\\mapsto X+\\xi+i(X)\\alpha"
},
{
"math_id": 3,
"text": "{\\mathbf T}^*"
},
{
"math_id": 4,
"text": "{\\mathbf T}\\oplus{\\mathbf{T}}^*"
},
{
"math_id": 5,
"text": "\\langle X+\\xi,Y+\\eta\\rangle=\\frac{1}{2}(\\xi(Y)+\\eta(X))."
},
{
"math_id": 6,
"text": "\\oplus"
},
{
"math_id": 7,
"text": "\\otimes"
},
{
"math_id": 8,
"text": "[A,B]_D=[A,B]+d\\langle A,B\\rangle"
},
{
"math_id": 9,
"text": "[A,[B,C]_D]_D=[[A,B]_D,C]_D+[B,[A,C]_D]_D."
},
{
"math_id": 10,
"text": "[X+f,Y+g]=[X,Y]+Xg-Yf"
},
{
"math_id": 11,
"text": "[X+f,Y+g]=[X+f\\frac{\\partial}{\\partial\\theta},Y+g\\frac{\\partial}{\\partial\\theta}]_{Lie}"
}
]
| https://en.wikipedia.org/wiki?curid=6855275 |
6855527 | Fisher kernel | In statistical classification, the Fisher kernel, named after Ronald Fisher, is a function that measures the similarity of two objects on the basis of sets of measurements for each object and a statistical model. In a classification procedure, the class for a new object (whose real class is unknown) can be estimated by minimising, across classes, an average of the Fisher kernel distance from the new object to each known member of the given class.
The Fisher kernel was introduced in 1998. It combines the advantages of generative statistical models (like the hidden Markov model) and those of discriminative methods (like support vector machines).
Derivation.
Fisher score.
The Fisher kernel makes use of the Fisher score, defined as
formula_0
with "θ" being a set (vector) of parameters. The function taking "θ" to log P("X"|"θ") is the log-likelihood of the probabilistic model.
Fisher kernel.
The Fisher kernel is defined as
formula_1
with "formula_2" being the Fisher information matrix.
Applications.
Information retrieval.
The Fisher kernel is the kernel for a generative probabilistic model. As such, it constitutes a bridge between generative and discriminative models of documents. Fisher kernels exist for numerous models, notably tf–idf, Naive Bayes and probabilistic latent semantic analysis.
Image classification and retrieval.
The Fisher kernel can also be applied to image representation for classification or retrieval problems. Currently, the most popular bag-of-visual-words representation suffers from sparsity and high dimensionality. The Fisher kernel can result in a compact and dense representation, which is more desirable for image classification and retrieval problems.
The Fisher Vector (FV), a special, approximate, and improved case of the general Fisher kernel, is an image representation obtained by pooling local image features. The FV encoding stores the mean and the covariance deviation vectors per component k of the Gaussian-Mixture-Model (GMM) and each element of the local feature descriptors together. In a systematic comparison, FV outperformed all compared encoding methods (Bag of Visual Words (BoW), Kernel Codebook encoding (KCB), Locality Constrained Linear Coding (LLC), Vector of Locally Aggregated Descriptors (VLAD)) showing that the encoding of second order information (aka codeword covariances) indeed benefits classification performance. | [
{
"math_id": 0,
"text": "\nU_X = \\nabla_{\\theta} \\log P(X|\\theta)\n"
},
{
"math_id": 1,
"text": "\nK(X_i, X_j) = U_{X_i}^T \\mathcal{I}^{-1} U_{X_j}\n"
},
{
"math_id": 2,
"text": "\\mathcal{I}"
}
]
| https://en.wikipedia.org/wiki?curid=6855527 |
6856307 | Damgård–Jurik cryptosystem | The Damgård–Jurik cryptosystem is a generalization of the Paillier cryptosystem. It uses computations modulo formula_0 where formula_1 is an RSA modulus and formula_2 a (positive) natural number. Paillier's scheme is the special case with formula_3. The order formula_4 (Euler's totient function) of formula_5 can be divided by formula_6. Moreover, formula_5 can be written as the direct product of formula_7. formula_8 is cyclic and of order formula_6, while formula_9 is isomorphic to formula_10. For encryption, the message is transformed into the corresponding coset of the factor group formula_11 and the security of the scheme relies on the difficulty of distinguishing random elements in different cosets of formula_9. It is semantically secure if it is hard to decide if two given elements are in the same coset. Like Paillier, the security of Damgård–Jurik can be proven under the decisional composite residuosity assumption.
Simplification.
At the cost of no longer containing the classical Paillier cryptosystem as an instance, Damgård–Jurik can be simplified by fixing the base formula_34 and choosing the secret exponent "d" such that formula_35 and formula_36.
In this case decryption produces formula_37. Using recursive Paillier decryption this gives us directly the plaintext "m".
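The following Python sketch (an illustration with toy primes, not a secure implementation; real use requires large random primes and cryptographic randomness) implements this simplified variant: key generation picks "d" by the Chinese remainder theorem, encryption computes formula_27, and decryption recovers "m" from formula_37 with the standard Damgård–Jurik extraction of the exponent of 1+"n":

```python
import math
import random

def crt(r1, m1, r2, m2):
    # Solve x = r1 (mod m1), x = r2 (mod m2) for coprime m1, m2
    return (r1 + m1 * ((r2 - r1) * pow(m1, -1, m2) % m2)) % (m1 * m2)

def keygen(p, q, s):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    d = crt(1, n ** s, 0, lam)        # d = 1 mod n^s and d = 0 mod lambda
    return (n, s), d

def encrypt(pub, m):
    n, s = pub
    ns1 = n ** (s + 1)
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return pow(1 + n, m, ns1) * pow(r, n ** s, ns1) % ns1

def dlog_one_plus_n(a, n, s):
    """Recover i from a = (1 + n)^i mod n^(s+1) (Damgård–Jurik extraction)."""
    i = 0
    for j in range(1, s + 1):
        nj = n ** j
        t1 = ((a % n ** (j + 1)) - 1) // n          # L(a mod n^(j+1))
        t2 = i
        for k in range(2, j + 1):
            i -= 1
            t2 = t2 * i % nj
            # k! is invertible mod n^j when n has only large prime factors
            t1 = (t1 - t2 * n ** (k - 1) * pow(math.factorial(k), -1, nj)) % nj
        i = t1
    return i

def decrypt(pub, d, c):
    n, s = pub
    return dlog_one_plus_n(pow(c, d, n ** (s + 1)), n, s)

pub, d = keygen(3, 5, 2)   # toy primes for illustration only
c = encrypt(pub, 7)
assert decrypt(pub, d, c) == 7
```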
| [
{
"math_id": 0,
"text": "n^{s+1}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "s"
},
{
"math_id": 3,
"text": "s=1"
},
{
"math_id": 4,
"text": "\\varphi(n^{s+1})"
},
{
"math_id": 5,
"text": "Z^*_{n^{s+1}}"
},
{
"math_id": 6,
"text": "n^s"
},
{
"math_id": 7,
"text": "G \\times H"
},
{
"math_id": 8,
"text": "G"
},
{
"math_id": 9,
"text": "H"
},
{
"math_id": 10,
"text": "Z^*_n"
},
{
"math_id": 11,
"text": "G\\times H/H"
},
{
"math_id": 12,
"text": "n=pq"
},
{
"math_id": 13,
"text": "\\lambda=\\operatorname{lcm}(p-1,q-1)"
},
{
"math_id": 14,
"text": "g \\in \\mathbb{Z}^*_{n^{s+1}}"
},
{
"math_id": 15,
"text": "g=(1+n)^j x \\mod n^{s+1}"
},
{
"math_id": 16,
"text": "j"
},
{
"math_id": 17,
"text": "x \\in H"
},
{
"math_id": 18,
"text": "d"
},
{
"math_id": 19,
"text": "d \\mod n \\in \\mathbb{Z}^*_n"
},
{
"math_id": 20,
"text": "d= 0 \\mod \\lambda"
},
{
"math_id": 21,
"text": "\\lambda"
},
{
"math_id": 22,
"text": "(n, g)"
},
{
"math_id": 23,
"text": "m"
},
{
"math_id": 24,
"text": "m\\in \\mathbb Z_{n^s}"
},
{
"math_id": 25,
"text": "r"
},
{
"math_id": 26,
"text": "r\\in \\mathbb Z^{*}_{n} "
},
{
"math_id": 27,
"text": " c=g^m \\cdot r^{n^s} \\mod n^{s+1} "
},
{
"math_id": 28,
"text": "c\\in \\mathbb Z^{*}_{n^{s+1}} "
},
{
"math_id": 29,
"text": "c^d \\;mod\\;n^{s+1}"
},
{
"math_id": 30,
"text": "c^d = (g^m r^{n^s})^d = ((1+n)^{jm}x^m r^{n^s})^d = (1+n)^{jmd \\;mod\\; n^s} (x^m r^{n^s})^{d \\;mod\\; \\lambda} = (1+n)^{jmd \\;mod\\; n^s}"
},
{
"math_id": 31,
"text": "jmd"
},
{
"math_id": 32,
"text": "jd"
},
{
"math_id": 33,
"text": "m=(jmd)\\cdot (jd)^{-1} \\;mod\\;n^s"
},
{
"math_id": 34,
"text": "g=n+1"
},
{
"math_id": 35,
"text": "d=1 \\;mod\\; n^s"
},
{
"math_id": 36,
"text": "d=0 \\;mod\\; \\lambda"
},
{
"math_id": 37,
"text": "c^d = (1+n)^{m} \\;mod\\; n^{s+1}"
}
]
| https://en.wikipedia.org/wiki?curid=6856307 |
68569453 | Generalized blockmodeling of binary networks | Generalized blockmodeling of binary networks (also relational blockmodeling) is an approach of generalized blockmodeling, analysing the binary network(s).
As most network analyses deal with binary networks, this approach is also considered as the fundamental approach of blockmodeling. This is especially noted, as the set of ideal blocks, when used for interpretation of blockmodels, have binary link patterns, which precludes them to be compared with valued empirical blocks.
When analysing the binary networks, the criterion function is measuring block inconsistencies, while also reporting the possible errors. The ideal block in binary blockmodeling has only three types of conditions: "a certain cell must be (at least) 1, a certain cell must be 0 and the formula_0 over each row (or column) must be at least 1".
It is also used as a basis for developing the generalized blockmodeling of valued networks.
| [
{
"math_id": 0,
"text": "f"
}
]
| https://en.wikipedia.org/wiki?curid=68569453 |
685714 | Dark state | In atomic physics, a dark state refers to a state of an atom or molecule that cannot absorb (or emit) photons. All atoms and molecules are described by quantum states; different states can have different energies and a system can make a transition from one energy level to another by emitting or absorbing one or more photons. However, not all transitions between arbitrary states are allowed. A state that cannot absorb an incident photon is called a dark state. This can occur in experiments using laser light to induce transitions between energy levels, when atoms can spontaneously decay into a state that is not coupled to any other level by the laser light, preventing the atom from absorbing or emitting light from that state.
A dark state can also be the result of quantum interference in a three-level system, when an atom is in a coherent superposition of two states, both of which are coupled by lasers at the right frequency to a third state. With the system in a particular superposition of the two states, the system can be made dark to both lasers as the probability of absorbing a photon goes to 0.
Two-level systems.
In practice.
Experiments in atomic physics are often done with a laser of a specific frequency formula_0 (meaning the photons have a specific energy), so they only couple one set of states with a particular energy formula_1 to another set of states with an energy formula_2. However, the atom can still decay spontaneously into a third state by emitting a photon of a different frequency. The new state with energy formula_3 of the atom no longer interacts with the laser simply because no photons of the right frequency are present to induce a transition to a different level. In practice, the term dark state is often used for a state that is not accessible by the specific laser in use even though transitions from this state are in principle allowed.
In theory.
Whether or not we say a transition between a state formula_4 and a state formula_5 is allowed often depends on how detailed a model we use for the atom-light interaction. From a particular model follows a set of selection rules that determine which transitions are allowed and which are not. Often these selection rules can be boiled down to conservation of angular momentum (the photon has angular momentum). In most cases we only consider an atom interacting with the electric dipole field of the photon; then some transitions are not allowed at all, while others are only allowed for photons of a certain polarization.
Consider for example the hydrogen atom. The transition from the state formula_6 with "mj=-1/2" to the state formula_7 with "mj=-1/2" is only allowed for light with polarization along the z axis (quantization axis) of the atom. The state formula_7 with "mj=-1/2" therefore appears dark for light of other polarizations.
Transitions from the "2S" level to the "1S" level are not allowed at all. The "2S" state can not decay to the ground state by emitting a single photon. It can only decay by collisions with other atoms or by emitting multiple photons. Since these events are rare, the atom can remain in this excited state for a very long time, such an excited state is called a metastable state.
Three-level systems.
We start with a three-state Λ-type system, where formula_8 and formula_9 are dipole-allowed transitions and formula_10 is forbidden. In the rotating wave approximation, the semi-classical Hamiltonian is given by
formula_11
with
formula_12
formula_13
where formula_14 and formula_15 are the Rabi frequencies of the probe field (of frequency formula_16) and the coupling field (of frequency formula_17) in resonance with the transition frequencies formula_18 and formula_19, respectively, and H.c. stands for the Hermitian conjugate of the entire expression. We will write the atomic wave function as
formula_20
Solving the Schrödinger equation formula_21, we obtain the solutions
formula_22
formula_23
formula_24
Using the initial condition
formula_25
we can solve these equations to obtain
formula_26
formula_27
formula_28
with formula_29. We observe that we can choose the initial conditions
formula_30
which gives a time-independent solution to these equations with no probability of the system being in state formula_31. This state can also be expressed in terms of a mixing angle formula_32 as
formula_33
with
formula_34
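The time independence of this solution can also be checked numerically. The sketch below (an illustration with ħ = 1, exact resonance and arbitrarily chosen real Rabi frequencies) evolves the three amplitudes under the rotating-frame Hamiltonian corresponding to the equations above: the population of formula_31 stays at zero for the dark superposition, while it oscillates for the orthogonal "bright" combination.

```python
import numpy as np
from scipy.linalg import expm

omega_p, omega_c = 1.0, 2.0                 # probe and coupling Rabi frequencies
omega = np.hypot(omega_p, omega_c)

# Matrix form of the amplitude equations: i d/dt c = H c  (hbar = 1)
H = -0.5 * np.array([[0, 0, omega_p],
                     [0, 0, omega_c],
                     [omega_p, omega_c, 0]], dtype=complex)

dark   = np.array([omega_c, -omega_p, 0], dtype=complex) / omega
bright = np.array([omega_p,  omega_c, 0], dtype=complex) / omega

for t in np.linspace(0.0, 10.0, 5):
    U = expm(-1j * H * t)                   # time-evolution operator
    p3_dark   = abs((U @ dark)[2]) ** 2     # stays ~0: no excitation to |3>
    p3_bright = abs((U @ bright)[2]) ** 2   # oscillates: couples to |3>
    print(f"t = {t:5.2f}   P3(dark) = {p3_dark:.1e}   P3(bright) = {p3_bright:.3f}")
```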
This means that when the atoms are in this state, they will stay in this state indefinitely. This is a dark state, because it can not absorb or emit any photons from the applied fields. It is, therefore, effectively transparent to the probe laser, even when the laser is exactly resonant with the transition. Spontaneous emission from formula_31 can result in an atom being in this dark state or another coherent state, known as a bright state. Therefore, in a collection of atoms, over time, decay into the dark state will inevitably result in the system being "trapped" coherently in that state, a phenomenon known as "coherent population trapping". | [
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "E_1"
},
{
"math_id": 2,
"text": "E_2=E_1 + \\hbar \\omega"
},
{
"math_id": 3,
"text": "E_3<E_2"
},
{
"math_id": 4,
"text": "|1\\rangle"
},
{
"math_id": 5,
"text": "|2\\rangle"
},
{
"math_id": 6,
"text": "1^2S_{1/2}"
},
{
"math_id": 7,
"text": "2^2P_{3/2}"
},
{
"math_id": 8,
"text": "|1\\rangle\\leftrightarrow|3\\rangle"
},
{
"math_id": 9,
"text": "|2\\rangle\\leftrightarrow|3\\rangle"
},
{
"math_id": 10,
"text": "|1\\rangle\\leftrightarrow|2\\rangle"
},
{
"math_id": 11,
"text": "H=H_0+H_1"
},
{
"math_id": 12,
"text": "H_0=\\hbar\\omega_1|1\\rangle\\langle 1|+\\hbar\\omega_2|2\\rangle\\langle 2|+\\hbar\\omega_3|3\\rangle\\langle 3|,"
},
{
"math_id": 13,
"text": "H_1=-\\frac \\hbar 2\\left(\\Omega_p e^{+i\\omega_p t}|1\\rangle\\langle 3|+\\Omega_c e^{+i\\omega_c t}|2\\rangle\\langle 3|\\right)+\\mbox{H.c.},"
},
{
"math_id": 14,
"text": "\\Omega_p "
},
{
"math_id": 15,
"text": "\\Omega_c "
},
{
"math_id": 16,
"text": "\\omega_p"
},
{
"math_id": 17,
"text": "\\omega_c"
},
{
"math_id": 18,
"text": "\\omega_1-\\omega_3"
},
{
"math_id": 19,
"text": "\\omega_2-\\omega_3"
},
{
"math_id": 20,
"text": "|\\psi(t)\\rangle=c_1(t)e^{-i\\omega_1 t}|1\\rangle+c_2(t)e^{-i\\omega_2 t}|2\\rangle+c_3(t)e^{-i\\omega_3 t}|3\\rangle."
},
{
"math_id": 21,
"text": "i\\hbar|\\dot\\psi\\rangle=H|\\psi\\rangle"
},
{
"math_id": 22,
"text": "\\dot c_1=\\frac i2\\Omega_p c_3"
},
{
"math_id": 23,
"text": "\\dot c_2=\\frac i2\\Omega_c c_3 "
},
{
"math_id": 24,
"text": "\\dot c_3=\\frac i2(\\Omega_p c_1+\\Omega_c c_2)."
},
{
"math_id": 25,
"text": "|\\psi(0)\\rangle=c_1(0)|1\\rangle+c_2(0)|2\\rangle+c_3(0)|3\\rangle,"
},
{
"math_id": 26,
"text": "\nc_1(t)=c_1(0)\\left[\\frac{\\Omega_c ^2}{\\Omega^2}+\\frac{\\Omega_p ^2}{\\Omega^2}\\cos\\frac{\\Omega t}{2}\\right]+c_2(0)\\left[-\\frac{\\Omega_p \\Omega_c }{\\Omega^2}+\\frac{\\Omega_p \\Omega_c }{\\Omega^2}\\cos\\frac{\\Omega t}{2}\\right]\n\\quad-ic_3(0)\\frac{\\Omega_p }{\\Omega}\\sin\\frac{\\Omega t}{2}"
},
{
"math_id": 27,
"text": "\nc_2(t)=c_1(0)\\left[-\\frac{\\Omega_p \\Omega_c }{\\Omega^2}+\\frac{\\Omega_p \\Omega_c }{\\Omega^2}\\cos\\frac{\\Omega t}{2}\\right]+c_2(0)\\left[\\frac{\\Omega_p ^2}{\\Omega^2}+\\frac{\\Omega_c^2}{\\Omega^2}\\cos\\frac{\\Omega t}{2}\\right]\n\\quad-ic_3(0)\\frac{\\Omega_c }{\\Omega}\\sin\\frac{\\Omega t}{2}"
},
{
"math_id": 28,
"text": "\nc_3(t)=-ic_1(0)\\frac{\\Omega_p }{\\Omega}\\sin\\frac{\\Omega t}{2}-ic_2(0)\\frac{\\Omega_c }{\\Omega}\\sin\\frac{\\Omega t}{2}+c_3(0)\\cos\\frac{\\Omega t}{2}"
},
{
"math_id": 29,
"text": "\\Omega=\\sqrt{\\Omega_c ^2+\\Omega_p ^2}"
},
{
"math_id": 30,
"text": "c_1(0)=\\frac{\\Omega_c }{\\Omega},\\qquad c_2(0)=-\\frac{\\Omega_p }{\\Omega},\\qquad c_3(0)=0,"
},
{
"math_id": 31,
"text": "|3\\rangle"
},
{
"math_id": 32,
"text": "\\theta"
},
{
"math_id": 33,
"text": "|D\\rangle=\\cos\\theta|1\\rangle-\\sin\\theta|2\\rangle"
},
{
"math_id": 34,
"text": "\\cos\\theta=\\frac{\\Omega_{c}}{\\sqrt{\\Omega_{p}^2+\\Omega_{c}^2}},\\qquad \\sin\\theta=\\frac{\\Omega_{p}}{\\sqrt{\\Omega_{p}^2+\\Omega_{c}^2}}."
}
]
| https://en.wikipedia.org/wiki?curid=685714 |
68572267 | Method of moments (electromagnetics) | Numerical method in computational electromagnetics
The method of moments (MoM), also known as the moment method and method of weighted residuals, is a numerical method in computational electromagnetics. It is used in computer programs that simulate the interaction of electromagnetic fields such as radio waves with matter, for example antenna simulation programs like NEC that calculate the radiation pattern of an antenna. Generally being a frequency-domain method, it involves the projection of an integral equation into a system of linear equations by the application of appropriate boundary conditions. This is done by using discrete meshes as in finite difference and finite element methods, often for the surface. The solutions are represented with the linear combination of pre-defined basis functions; generally, the coefficients of these basis functions are the sought unknowns. Green's functions and Galerkin method play a central role in the method of moments.
For many applications, the method of moments is identical to the boundary element method. It is one of the most common methods in microwave and antenna engineering.
History.
Development of boundary element method and other similar methods for different engineering applications is associated with the advent of digital computing in the 1960s. Prior to this, variational methods were applied to engineering problems at microwave frequencies by the time of World War II. While Julian Schwinger and Nathan Marcuvitz have respectively compiled these works into lecture notes and textbooks, Victor Rumsey has formulated these methods into the "reaction concept" in 1954. The concept was later shown to be equivalent to the Galerkin method. In the late 1950s, an early version of the method of moments was introduced by Yuen Lo at a course on mathematical methods in electromagnetic theory at University of Illinois.
In the 1960s, early research work on the method was published by Kenneth Mei, Jean van Bladel and Jack Richmond. In the same decade, the systematic theory for the method of moments in electromagnetics was largely formalized by Roger Harrington. While the term "the method of moments" was coined earlier by Leonid Kantorovich and Gleb Akilov for analogous numerical applications, Harrington adapted the term for the electromagnetic formulation. Harrington published the seminal textbook "Field Computation by Moment Methods" on the moment method in 1968. The development of the method and its applications in radar and antenna engineering attracted interest; MoM research was subsequently supported by the United States government. The method was further popularized by the introduction of generalized antenna modeling codes such as Numerical Electromagnetics Code, which was released into the public domain by the United States government in the late 1980s. In the 1990s, the introduction of fast multipole and multilevel fast multipole methods enabled efficient MoM solutions to problems with millions of unknowns.
Being one of the most common simulation techniques in RF and microwave engineering, the method of moments forms the basis of many commercial design software such as FEKO. Many non-commercial and public domain codes of different sophistications are also available. In addition to its use in electrical engineering, the method of moments has been applied to light scattering and plasmonic problems.
Background.
Basic concepts.
An inhomogeneous integral equation can be expressed as:
formula_0
where "L" denotes a linear operator, "g" denotes the known forcing function and "f" denotes the unknown function. "f" can be approximated by a finite number of basis functions (formula_1):
formula_2
By linearity, substitution of this expression into the equation yields:
formula_3
We can also define a residual for this expression, which denotes the difference between the actual and the approximate solution:
formula_4
The aim of the method of moments is to minimize this residual, which can be done by using appropriate weighting or testing functions, hence the name method of weighted residuals. After the determination of a suitable inner product for the problem, the expression then becomes:
formula_5
Thus, the expression can be represented in the matrix form:
formula_6
The resulting matrix is often referred to as the impedance matrix. The coefficients of the basis functions can be obtained by inverting the matrix. For large matrices with a large number of unknowns, iterative methods such as the conjugate gradient method can be used for acceleration. The actual field distributions can be obtained from the coefficients and the associated integrals. The interactions between the basis functions in MoM are captured by the Green's function of the system.
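As a minimal illustration of these steps (a standard electrostatics exercise rather than an example from the references), the sketch below uses pulse basis functions and point matching to find the charge distribution on a thin straight wire held at a potential of 1 V. The matrix entries are the potentials that each uniformly charged segment produces at the match points, and solving the linear system yields the basis-function coefficients:

```python
import numpy as np

eps0 = 8.854e-12                       # vacuum permittivity (F/m)
L, a, N = 1.0, 1e-3, 51                # wire length (m), wire radius (m), segments
dz = L / N
z = (np.arange(N) + 0.5) * dz          # match points at segment centres

# Z[m, n]: potential at match point m due to unit line-charge density on segment n,
# using the exact integral of 1/sqrt(u^2 + a^2) over the source segment
Z = np.empty((N, N))
for m in range(N):
    lo = z - dz / 2 - z[m]
    hi = z + dz / 2 - z[m]
    Z[m, :] = (np.arcsinh(hi / a) - np.arcsinh(lo / a)) / (4 * np.pi * eps0)

V = np.ones(N)                         # boundary condition: 1 V everywhere on the wire
sigma = np.linalg.solve(Z, V)          # pulse-basis coefficients (C/m per segment)
Q = np.sum(sigma) * dz                 # total charge -> capacitance, since V = 1 V
print(f"capacitance of the wire ≈ {Q:.2e} F")
```

The charge density obtained this way peaks near the ends of the wire, as expected for an isolated charged conductor.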
Basis and testing functions.
Different basis functions can be chosen to model the expected behavior of the unknown function in the domain; these functions can either be subsectional or global. Choice of Dirac delta function as basis function is known as point-matching or collocation. This corresponds to enforcing the boundary conditions on formula_7 discrete points and is often used to obtain approximate solutions when the inner product operation is cumbersome to perform. Other subsectional basis functions include pulse, piecewise triangular, piecewise sinusoidal and rooftop functions. Triangular patches, introduced by S. Rao, D. Wilton and A. Glisson in 1982, are known as RWG basis functions and are widely used in MoM. Characteristic basis functions were also introduced to accelerate computation and reduce the matrix equation.
The testing and basis functions are often chosen to be the same; this is known as the Galerkin method. Depending on the application and studied structure, the testing and basis functions should be chosen appropriately to ensure convergence and accuracy, as well as to prevent possible high order algebraic singularities.
Integral equations.
Depending on the application and sought variables, different integral or integro-differential equations are used in MoM. Radiation and scattering by thin wire structures, such as many types of antennas, can be modeled by specialized equations. For surface problems, common integral equation formulations include electric field integral equation (EFIE), magnetic field integral equation (MFIE) and mixed-potential integral equation (MPIE).
Thin-wire equations.
As many antenna structures can be approximated as wires, thin wire equations are of interest in MoM applications. Two commonly used thin-wire equations are Pocklington and Hallén integro-differential equations. Pocklington's equation precedes the computational techniques, having been introduced in 1897 by Henry Cabourn Pocklington. For a linear wire that is centered on the origin and aligned with the z-axis, the equation can be written as:
formula_8
where formula_9 and formula_10 denote the total length and thickness, respectively. formula_11 is the Green's function for free space. The equation can be generalized to different excitation schemes, including magnetic frills.
Hallén integral equation, published by E. Hallén in 1938, can be given as:
formula_12
This equation, despite being better behaved than Pocklington's equation, is generally restricted to delta-gap voltage excitations at the antenna feed point, which can be represented as an impressed electric field.
Electric field integral equation (EFIE).
The general form of electric field integral equation (EFIE) can be written as:
formula_13
where formula_14 is the incident or impressed electric field. formula_15 is the Green function for Helmholtz equation and formula_16 represents the wave impedance. The boundary conditions are met at a defined PEC surface. EFIE is a Fredholm integral equation of the first kind.
Magnetic field integral equation (MFIE).
Another commonly used integral equation in MoM is the magnetic field integral equation (MFIE), which can be written as:
formula_17
MFIE is often formulated to be a Fredholm integral equation of the second kind and is generally well-posed. Nevertheless, the formulation necessitates the use of closed surfaces, which limits its applications.
Other formulations.
Many different surface and volume integral formulations for MoM exist. In many cases, EFIEs are converted to mixed-potential integral equations (MPIE) through the use of the Lorenz gauge condition; this aims to reduce the orders of singularities through the use of magnetic vector and scalar electric potentials. In order to bypass the internal resonance problem in dielectric scattering calculations, combined-field integral equation (CFIE) and Poggio–Miller–Chang–Harrington–Wu–Tsai (PMCHWT) formulations are also used. Another approach, the volumetric integral equation, necessitates the discretization of volume elements and is often computationally expensive.
MoM can also be integrated with physical optics theory and finite element method.
Green's functions.
Appropriate Green's function for the studied structure must be known to formulate MoM matrices: automatic incorporation of the radiation condition into the Green's function makes MoM particularly useful for radiation and scattering problems. Even though the Green function can be derived in closed form for very simple cases, more complex structures necessitate numerical derivation of these functions.
Full wave analysis of planarly-stratified structures in particular, such as microstrips or patch antennas, necessitates the derivation of spatial-domain Green's functions that are peculiar to these geometries. Nevertheless, this involves the inverse Hankel transform of the spectral Green's function, which is defined on the Sommerfeld integration path. This integral cannot be evaluated analytically, and its numerical evaluation is often computationally expensive due to the oscillatory kernels and slowly-converging nature of the integral. Following the extraction of quasi-static and surface pole components, these integrals can be approximated as closed-form complex exponentials through Prony's method or the generalized pencil-of-function method; thus, the spatial Green's functions can be derived through the use of appropriate identities such as the Sommerfeld identity. This method is known in the computational electromagnetics literature as the discrete complex image method (DCIM), since the Green's function is effectively approximated with a discrete number of image dipoles that are located within a complex distance from the origin. The associated Green's functions are referred to as closed-form Green's functions. The method has also been extended for cylindrically-layered structures.
Rational-function fitting method, as well as its combinations with DCIM, can also be used to approximate closed-form Green's functions. Alternatively, the closed-form Green's function can be approximated through method of steepest descent. For the periodic structures such as phased arrays, Ewald summation is often used to accelerate the computation of the periodic Green's function.
| [
{
"math_id": 0,
"text": "L(f) = g"
},
{
"math_id": 1,
"text": "f_n"
},
{
"math_id": 2,
"text": "f \\approx \\sum_n^N a_n f_n."
},
{
"math_id": 3,
"text": "\\sum_n^N a_n L(f_n) \\approx g ."
},
{
"math_id": 4,
"text": "R = \\sum_n^N a_n L(f_n) - g"
},
{
"math_id": 5,
"text": "\\sum_n^N a_n \\langle w_m, L(f_n) \\rangle \\approx \\langle w_m, g \\rangle"
},
{
"math_id": 6,
"text": "\\left[\\ell_{mn}\\right] \\left[\\alpha_m\\right] = [g_n]"
},
{
"math_id": 7,
"text": "N"
},
{
"math_id": 8,
"text": "\\int^{l/2}_{-l/2} I_z(z') \\left[ \\left(\\frac{d^2}{dz^2}+\\beta^2 \\right) G(z,z') \\right]\\,dz'=-j \\omega \\varepsilon E^\\text{inc}_z(p=a)"
},
{
"math_id": 9,
"text": "l"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "G(z,z')"
},
{
"math_id": 12,
"text": " \\left(\\frac{d^2}{dz^2} + \\beta^2 \\right) \\int^{l/2}_{-l/2} I_z(z') G(z,z')\\,dz' = -j \\omega \\varepsilon E^\\text{inc}_z(p=a)"
},
{
"math_id": 13,
"text": "\\hat\\mathbf{n} \\times \\mathbf{E}^\\text{inc}(\\mathbf{r}) = \\hat\\mathbf{n} \\times \\int_S \\left[ \\eta j k \\, \\mathbf{J}(\\mathbf{r}') G(\\mathbf{r},\\mathbf{r}') + \\frac{\\eta}{jk} \\left\\{\\boldsymbol\\nabla_s' \\cdot \\mathbf{J}(\\mathbf{r}') \\right\\} \\boldsymbol\\nabla' G(\\mathbf{r},\\mathbf{r}') \\right] \\, dS'"
},
{
"math_id": 14,
"text": "\\mathbf{E}_\\text{inc}"
},
{
"math_id": 15,
"text": "G(r,r')"
},
{
"math_id": 16,
"text": "\\eta"
},
{
"math_id": 17,
"text": "-\\frac{1}{2} \\mathbf{J}(r) + \\hat\\mathbf{n} \\times \\oint_S \\mathbf{J}(r') \\times \\boldsymbol\\nabla' G(r,r')\\,dS' = \\hat\\mathbf{n} \\times \\mathbf{H}_\\text{inc}(r)"
}
]
| https://en.wikipedia.org/wiki?curid=68572267 |
68577665 | Lak wettability index | In petroleum engineering, Lak wettability index is a quantitative indicator to measure wettability of rocks from relative permeability data. This index is based on a combination of Craig's first rule. and modified Craig's second rule
formula_0
where
formula_1 : Lak wettability index (index values near -1 and 1 represent strongly oil-wet and strongly water-wet rocks, respectively)
formula_2 : Water relative permeability measured at residual oil saturation
formula_3 : Water saturation at the intersection point of water and oil relative permeability curves (fraction)
formula_4 : Residual oil saturation (in fraction)
formula_5 : Irreducible water saturation (in fraction)
formula_6 : Reference crossover saturation (in fraction) defined as:
formula_7
and formula_8 and formula_9 are two constant coefficients defined as:
formula_10 and formula_11 if formula_12
formula_13 and formula_11 if formula_14
formula_13 and formula_15 if formula_16
To use the above formula, relative permeability is defined as the effective permeability divided by the oil permeability measured at irreducible water saturation.
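A small helper (an illustrative sketch, not code from the cited work) that evaluates the index from these quantities could look as follows; the example values are arbitrary:

```python
def lak_index(krw_sor, cs, sor, swc):
    """Lak wettability index from krw at Sor, crossover saturation CS, Sor and Swc."""
    rcs = 0.5 + (swc - sor) / 2.0                  # reference crossover saturation
    if krw_sor < 0.3:
        a, b = 0.5, 0.0
    elif krw_sor <= 0.5:
        a, b = 0.0, 0.0
    else:
        a, b = 0.0, 0.5
    return (a * (0.3 - krw_sor) / 0.3
            + b * (0.5 - krw_sor) / 0.5
            + (cs - rcs) / (1.0 - sor - swc))

# Example: low krw at Sor and a crossover point to the right of RCS
print(lak_index(krw_sor=0.15, cs=0.62, sor=0.25, swc=0.20))   # ≈ 0.51, i.e. water-wet
```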
Craig's triple rules of thumb.
Craig proposed three rules of thumb for interpretation of wettability from relative permeability curves. These rules are based on the value of interstitial water saturation, the water saturation at the crossover point of relative permeability curves (i.e., where relative permeabilities are equal to each other), and the normalized water permeability at residual oil saturation (i.e., normalized by the oil permeability at interstitial water saturation).
According to Craig's first rule of thumb, in water-wet rocks the relative permeability to water at residual oil saturation is generally less than 30%, whereas in oil-wet systems this is greater than 50% and approaching 100%. The second rule of thumb considers a system as water-wet, if saturation at the crossover point of relative permeability curves is greater than water saturation of 50%, otherwise oil-wet. The third rule of thumb states that in a water-wet rock the value of interstitial water saturation is usually greater than 20 to 25% pore volume, whereas this is generally less than 15% pore volume (frequently less than 10%) for an oil-wet porous medium.
Modified Craig's second rule.
In 2021, Abouzar Mirzaei-Paiaman investigated the validity of Craig's rules of thumb and showed that while the third rule is generally unreliable, the first rule is suitable. Moreover, he showed that the second rule needed a modification. He pointed out that using 50% water saturation as a reference value in the Craig's second rule is unrealistic. That author defined a reference crossover saturation (RCS). According to the modified Craig's second rule, the crossover point of relative permeability curves lies to the right of RCS in water-wet rocks, whereas for oil-wet systems, the crossover point is expected to be located at the left of the RCS.
Modified Lak wettability index.
A modified Lak wettability index also exists, which is based on the areas below the water and oil relative permeability curves; a small computational sketch follows the definitions below.
formula_17
where
formula_18 : modified Lak wettability index (index values near -1 and 1 represent strongly oil-wet and strongly water-wet rocks, respectively)
formula_19 : Area under the oil relative permeability curve
formula_20 : Area under the water relative permeability curve
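A corresponding sketch for the modified index, assuming the two relative permeability curves are available as sampled arrays over water saturation (the sample curves below are purely illustrative), is:

```python
import numpy as np

def modified_lak_index(sw, kro, krw):
    """Modified Lak index from sampled oil and water relative permeability curves."""
    ao = np.sum((kro[1:] + kro[:-1]) * np.diff(sw)) / 2.0   # trapezoidal area, oil
    aw = np.sum((krw[1:] + krw[:-1]) * np.diff(sw)) / 2.0   # trapezoidal area, water
    return (ao - aw) / (ao + aw)

sw = np.linspace(0.2, 0.75, 12)
kro = np.linspace(1.0, 0.0, 12)      # illustrative, decreasing oil curve
krw = np.linspace(0.0, 0.3, 12)      # illustrative, increasing water curve
print(modified_lak_index(sw, kro, krw))   # positive -> water-wet tendency
```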
| [
{
"math_id": 0,
"text": "I_{\\mathit{L}} = \\frac{A (0.3 - k_{\\mathit{rw,Sor}})} {\\ 0.3} + \\frac{B (0.5 - k_{\\mathit{rw,Sor}})} {\\ 0.5} + \\frac{CS - RCS} {\\ 1-Sor - Swc}"
},
{
"math_id": 1,
"text": "I_{\\mathit{L}}"
},
{
"math_id": 2,
"text": "k_{\\mathit{rw,Sor}}"
},
{
"math_id": 3,
"text": "CS"
},
{
"math_id": 4,
"text": "Sor"
},
{
"math_id": 5,
"text": "Swc"
},
{
"math_id": 6,
"text": "RCS"
},
{
"math_id": 7,
"text": "RCS = 0.5 + \\frac{Swc - Sor} {\\ 2}"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "B"
},
{
"math_id": 10,
"text": "A = 0.5"
},
{
"math_id": 11,
"text": "B = 0"
},
{
"math_id": 12,
"text": "k_{\\mathit{rw,Sor}} < 0.3 "
},
{
"math_id": 13,
"text": "A = 0"
},
{
"math_id": 14,
"text": "0.3 <= k_{\\mathit{rw,Sor}} <= 0.5 "
},
{
"math_id": 15,
"text": "B = 0.5"
},
{
"math_id": 16,
"text": "k_{\\mathit{rw,Sor}} > 0.5 "
},
{
"math_id": 17,
"text": "I_{\\mathit{ML}} = \\frac{A_{\\mathit{o}} - A_{\\mathit{w}}} {\\ A_{\\mathit{o}} + A_{\\mathit{w}}} "
},
{
"math_id": 18,
"text": "I_{\\mathit{ML}}"
},
{
"math_id": 19,
"text": "A_{\\mathit{o}}"
},
{
"math_id": 20,
"text": "A_{\\mathit{w}}"
}
]
| https://en.wikipedia.org/wiki?curid=68577665 |
68578227 | Lentz's algorithm | In mathematics, Lentz's algorithm is an algorithm to evaluate continued fractions and compute tables of spherical Bessel functions.
The version usually employed now is due to Thompson and Barnett.
History.
The idea was introduced in 1973 by William J. Lentz and was simplified by him in 1982. Lentz suggested that calculating ratios of spherical Bessel functions of complex arguments can be difficult. He developed a new continued fraction technique for calculating the ratios of spherical Bessel functions of consecutive order. This method was an improvement compared to other methods because it started from the beginning of the continued fraction rather than the tail, had a built-in check for convergence, and was numerically stable. The original algorithm uses algebra to bypass a zero in either the numerator or denominator. Simpler improvements to overcome unwanted zero terms include an altered recurrence relation suggested by Jaaskelainen and Ruuskanen in 1981 or a simple shift of the denominator by a very small number as suggested by Thompson and Barnett in 1986.
Initial work.
This theory was initially motivated by Lentz's need for accurate calculation of ratios of spherical Bessel function necessary for Mie scattering. He created a new continued fraction algorithm that starts from the beginning of the continued fraction and not at the tail-end. This eliminates guessing how many terms of the continued fraction are needed for convergence. In addition, continued fraction representations for both ratios of Bessel functions and spherical Bessel functions of consecutive order themselves can be computed with Lentz's algorithm. The algorithm suggested that it is possible to terminate the evaluation of continued fractions when formula_0 is relatively small.
Algorithm.
Lentz's algorithm is based on the Wallis-Euler relations. If
formula_1
formula_2
formula_3
formula_4
etc., or using the big-K notation, if
formula_5
is the formula_6th convergent to formula_7 then
formula_8
where formula_9 and formula_10 are given by the Wallis-Euler recurrence relations
formula_11
formula_12
formula_13
formula_14
formula_15
formula_16
Lentz's method defines
formula_17
formula_18
so that the formula_6th convergent is
formula_19
and uses the recurrence relations
formula_20
formula_21
formula_1
formula_22
formula_23
When the product formula_24 approaches unity with increasing formula_6, it is hoped that formula_25 has converged to formula_7.
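A sketch of the Thompson–Barnett form of the algorithm is given below; the continued fraction for tan "x" and the tolerance values are illustrative choices used to exercise it:

```python
import math

def lentz(b0, coeffs, tol=1e-15, tiny=1e-30, max_terms=10_000):
    """Evaluate b0 + a1/(b1 + a2/(b2 + ...)); coeffs yields (a_j, b_j) pairs."""
    f = b0 if b0 != 0.0 else tiny          # Thompson-Barnett shift away from zero
    C, D = f, 0.0
    for j, (a, b) in enumerate(coeffs, start=1):
        if j > max_terms:
            raise RuntimeError("continued fraction did not converge")
        C = b + a / C
        if C == 0.0:
            C = tiny
        D = b + a * D
        if D == 0.0:
            D = tiny
        D = 1.0 / D
        delta = C * D
        f *= delta
        if abs(delta - 1.0) < tol:         # C_n * D_n -> 1 signals convergence
            return f

def tan_cf(x):
    # Lambert's continued fraction: tan x = x/(1 - x^2/(3 - x^2/(5 - ...)))
    def coeffs():
        yield (x, 1.0)
        j = 3.0
        while True:
            yield (-x * x, j)
            j += 2.0
    return lentz(0.0, coeffs())

print(tan_cf(1.2), math.tan(1.2))          # both ≈ 2.57215
```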
Applications.
Lentz's algorithm was used widely in the late twentieth century. It has been noted that the algorithm lacks a rigorous analysis of error propagation; however, a few empirical tests suggest that it is at least as good as the other methods. As an example, it was applied to evaluate exponential integral functions, an application that was then called the modified Lentz algorithm. It has also been noted that the Lentz algorithm is not applicable to every calculation: convergence can be quite rapid for some continued fractions and quite slow for others.
| [
{
"math_id": 0,
"text": "|f_j-f_{j-1} |"
},
{
"math_id": 1,
"text": "{f}_{0} = {b}_{0}"
},
{
"math_id": 2,
"text": "{f}_{1} = {b}_{0} + \\frac{{a}_{1}}{{b}_{1}}"
},
{
"math_id": 3,
"text": "{f}_{2} = {b}_{0} + \\frac{{a}_{1}}{{b}_{1} + \\frac{{a}_{2}}{{b}_{2}}}"
},
{
"math_id": 4,
"text": "{f}_{3} = {b}_{0} + \\frac{{a}_{1}}{{b}_{1} + \\frac{{a}_{2}}{{b}_{2} + \\frac{{a}_{3}}{{b}_{3}}}}"
},
{
"math_id": 5,
"text": "{f}_{n} = {b}_{0} + \\underset{j = 1}\\overset{n}\\operatorname{K}\\frac{{a}_{j}}{{b}_{j} +}"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "{f}_{n} = \\frac{{A}_{n}}{{B}_{n}}"
},
{
"math_id": 9,
"text": "{A}_{n}"
},
{
"math_id": 10,
"text": "{B}_{n}"
},
{
"math_id": 11,
"text": "{A}_{- 1} = 1"
},
{
"math_id": 12,
"text": "{B}_{- 1} = 0"
},
{
"math_id": 13,
"text": "{A}_{0} = {b}_{0}"
},
{
"math_id": 14,
"text": "{B}_{0} = 1"
},
{
"math_id": 15,
"text": "{A}_{n} = {b}_{n} {A}_{n - 1} + {a}_{n} {A}_{n - 2}"
},
{
"math_id": 16,
"text": "{B}_{n} = {b}_{n} {B}_{n - 1} + {a}_{n} {B}_{n - 2}"
},
{
"math_id": 17,
"text": "{C}_{n} = \\frac{{A}_{n}}{{A}_{n - 1}}"
},
{
"math_id": 18,
"text": "{D}_{n} = \\frac{{B}_{n - 1}}{{B}_{n}}"
},
{
"math_id": 19,
"text": "{f}_{n} = {C}_{n} {D}_{n} {f}_{n - 1}"
},
{
"math_id": 20,
"text": "{C}_{0} = \\frac{{A}_{0}}{{A}_{- 1}} = {b}_{0}"
},
{
"math_id": 21,
"text": "{D}_{0} = \\frac{{B}_{- 1}}{{B}_{0}} = 0"
},
{
"math_id": 22,
"text": "{C}_{n} = {b}_{n} + \\frac{{a}_{n}}{{C}_{n - 1}}"
},
{
"math_id": 23,
"text": "{D}_{n} = \\frac{1}{{b}_{n} + {a}_{n} {D}_{n - 1}}"
},
{
"math_id": 24,
"text": "{C}_{n} {D}_{n}"
},
{
"math_id": 25,
"text": "{f}_{n}"
}
]
| https://en.wikipedia.org/wiki?curid=68578227 |
685810 | Boiling liquid expanding vapor explosion | Explosion of a vessel containing liquid above and beyond boiling point
A boiling liquid expanding vapor explosion (BLEVE, ) is an explosion caused by the rupture of a vessel containing a pressurized liquid that is or has reached a temperature sufficiently higher than its boiling point at atmospheric pressure. Because the boiling point of a liquid rises with pressure, the contents of the pressurized vessel can remain a liquid as long as the vessel is intact. If the vessel's integrity is compromised, the loss of pressure drops the boiling point, which can cause the liquid to convert to gas expanding rapidly. BLEVEs are manifestations of explosive boiling.
If the gas is flammable, as is the case with e.g., hydrocarbons and alcohols, further damage can be caused by the ensuing fire. However, BLEVEs do not necessarily involve fire.
Name.
On 24 April 1957, a process reactor at a Factory Mutual (FM) facility underwent a powerful explosion as a consequence of a rapid depressurization. It contained formalin mixed with phenol. The burst damaged the plant. However, no fire developed, as the mixture was not flammable. In the wake of the accident, researchers James B. Smith, William S. Marsh, and Wilbur L. Walls, who were employed with FM, came up with the terms "boiling liquid expanding vapor explosion" and its acronym "BLEVE". The expressions did not become of common use until the early 1970s, when the National Fire Protection Association's (NFPA) "Fire Command" and "Fire Journal" magazines started publishing articles using them.
Mechanism.
There are three key elements in the formation of a BLEVE:
Typically, a BLEVE starts with a vessel containing liquid held above its atmospheric-pressure boiling temperature. Many substances normally stored as liquids, such as carbon dioxide, propane, and other industrial gases, have boiling temperatures below room temperature at atmospheric pressure. In the case of water, a BLEVE could occur if a pressure vessel is heated beyond 100 °C (212 °F). That container, because the boiling water pressurizes it, must be capable of holding liquid water at very high temperatures. If the pressurized vessel ruptures, the pressure which prevents the liquid from boiling is lost. If the rupture is catastrophic, i.e., the vessel suddenly becomes incapable of holding any pressure, the liquid finds itself at a temperature far above its boiling point. This causes a portion of the liquid to vaporize instantaneously with extremely rapid expansion. Depending on the temperatures, pressures and material involved, the expansion may be so rapid that it can be classified as an explosion, fully capable of inflicting severe damage on its surroundings.
For example, to hold liquid water well above its atmospheric boiling point, a tank must be pressurized substantially above atmospheric (gauge) pressure. If such a tank were to rupture, there would for a brief moment exist a volume of liquid water at a temperature far above 100 °C (212 °F) but suddenly exposed to atmospheric pressure.
At atmospheric pressure the boiling point of water is 100 °C (212 °F), and liquid water at atmospheric pressure cannot exist at higher temperatures. At that moment, the water would boil and turn to vapor explosively, and the resulting steam would take up significantly more volume (≈ 1,600-fold) than the liquid did, causing a vapor explosion. Such explosions can happen when superheated water escapes through a crack in a boiler, causing a boiler explosion.
The vaporization of liquid resulting in a BLEVE typically occurs within 1 millisecond after a catastrophic loss of containment.
Superheat limit theory.
For a BLEVE to occur, the boiling liquid must be sufficiently superheated upon loss of containment. Water that is only moderately above its atmospheric boiling point, for example, will not generate a BLEVE when released from a closed container, as homogeneous nucleation of vapor bubbles is not possible at such a low degree of superheat. There is no consensus about the minimal temperature above which a BLEVE will occur. A formula proposed by Robert Reid to predict it is:
formula_0
where "T"C is the critical temperature of the fluid (expressed in kelvin). The minimum BLEVE temperatures of some fluids, based on this formula, are as follows:
<templatestyles src="Template:Table alignment/tables.css" />
According to Reid, BLEVE will occur, more in general, if the expansion crosses a "superheat-limit locus". In Reid's model, this curve is essentially the fluid's spinodal curve as represented in a pressure–temperature diagram, and the BLEVE onset is a manifestation of explosive boiling, where the spinodal is crossed "from above", i.e., via sudden depressurization. However, direct correspondence between the superheat limit and the spinodal has not been proven experimentally. In practical BLEVEs, the way the pressure vessel fails may influence decisively the way the expansion takes place, for example causing pressure waves and non-uniformities. Additionally, there may be stratification in the liquid, due to local temperature variations. Because of this, it is possible for BLEVEs to occur at temperatures less than those predicted with Reid's formula.
Physical BLEVEs.
The term BLEVE is often associated with explosive fires from pressure vessels containing a flammable liquid. However, a BLEVE can occur even with a non-flammable substance such as water, liquid nitrogen, liquid helium or other refrigerants or cryogens. Such materials can go through purely physical BLEVEs, not entailing flames or other chemical reactions. In the case of unignited BLEVEs of liquefied gases, rapid cooling due to the absorption of the enthalpy of vaporization is a hazard that can cause frostbite. Asphyxiation from the expanding vapors is also possible if the vapor cloud is not rapidly dispersed, as can be the case inside a building, or in a trough in the case of heavier-than-air gases. The vapors can also be toxic, in which case harm and possibly death can occur at relatively low concentrations and, therefore, even far from the source.
BLEVE–fireball.
If a flammable substance, however, is subject to a BLEVE, it can ignite upon release, either due to friction, mechanical spark or other point sources, or from a pre-existing fire that had engulfed the pressure vessel and caused it to fail in the first place. In such a case, the burning vapors will further expand, adding to the force of the explosion. Furthermore, a very significant amount of the escaped fluid will burn in a matter of seconds in a rising fireball, which will generate extremely high levels of thermal radiation. While the blast effects can be devastating, a flammable substance BLEVE typically causes more damage due to the fireball thermal radiation than the blast overpressure.
Effect of impinging fires.
BLEVEs are often caused by an external fire near the storage vessel causing heating of the contents and pressure build-up. While tanks are often designed to withstand great pressure, constant heating can cause the metal to weaken and eventually fail. If the tank is being heated in an area where there is no liquid (such as near its top), it may rupture faster because the boiling liquid does not afford cooling in that area. Pressure vessels are usually equipped with relief valves that vent off excess pressure, but the tank can still fail if the pressure is not released quickly enough. A pressure vessel is designed to withstand the set pressure of its relief valves, but only if its mechanical integrity is not weakened as it can be in the case of an impinging fire. In an impinging fire scenario, flammable vapors released in the BLEVE will ignite upon release, forming a fireball. The origin of the impinging fire may be from a release of flammable fluid from the vessel itself, or from an external source, including releases from nearby tanks and equipment. For example, rail tank cars have BLEVEd under the effect of a jet fire from the open relief valve of another derailed tank car.
Hazards.
The main damaging effects of a BLEVE are three: the blast wave from the explosion; the projection of fragments, or missiles, from the pressure vessel; and the thermal radiation from the fireball, where one occurs.
Horizontal cylindrical ("bullet") tanks tend to rupture longitudinally. This causes the failed tank and its fragments to get propelled like rockets and travel long distances. At Feyzin, three of the propelled fragments weighed in excess of 100 tons and were thrown 150–350 meters (490–1150 ft) from the source of the explosion. One bullet tank at San Juanico travelled in the air before landing, possibly the farthest ever for a BLEVE missile. Fragments can impact on other tanks or equipment, which may result in a domino effect propagation of the accidental sequence.
Fireballs can rise to significant heights above ground. They are spheroidal when developed and rise from the ground in a mushroom shape. The diameter of fireballs at San Juanico was estimated at 200–300 meters (660–980 ft), with a duration of around 20 seconds. Such massive fires can injure people at distances of hundreds of meters (e.g., 300 m (980 ft) at Feyzin and 400 m (1310 ft) at San Juanico).
An additional hazard from BLEVE-fireball events is the formation of secondary fires, by direct exposure to the fireball thermal radiation, as pool fires from fuel that does not get combusted in the fireball, or from the scattering of blazing tank fragments. Another secondary effect of importance is the dispersion of a toxic gas cloud, if the vapors involved are toxic and do not catch fire upon release. Chlorine, ammonia and phosgene are example of toxic gases that underwent BLEVE in past accidents and produced toxic clouds as a consequence.
Notable accidents.
Notable BLEVE accidents include the 1966 Feyzin refinery disaster in France and the 1984 San Juanico disaster in Mexico, both mentioned above.
| [
{
"math_id": 0,
"text": "T_{\\text{min,BLEVE}}=0.895\\ T_\\text{C}"
}
]
| https://en.wikipedia.org/wiki?curid=685810 |
685851 | Batting average against | Baseball statistic
In baseball statistics, batting average against (denoted by BAA or AVG), also known as opponents' batting average (denoted by OBA), is a statistic that measures a pitcher's ability to prevent hits during official at bats. It can alternatively be described as the league's hitters' combined batting average against the pitcher.
Definition.
Batting average against is calculated as:
formula_0
for which:
For example, if a pitcher faced 125 batters and allowed 25 hits, issued 8 walks, hit 1 batsman, allowed 2 sacrifice hits, allowed 3 sacrifice flies, and had 1 instance of catcher's interference, the pitcher's BAA would be calculated as:
formula_1
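The calculation is straightforward to automate; the helper below (an illustrative snippet with hypothetical argument names) reproduces the worked example:

```python
def batting_average_against(hits, batters_faced, bb, hbp, sh, sf, cint):
    # Denominator is the effective at bats: batters faced minus walks,
    # hit batsmen, sacrifice hits, sacrifice flies and catcher's interference
    at_bats = batters_faced - bb - hbp - sh - sf - cint
    return hits / at_bats

# The example above: 25 hits over 125 batters faced, 110 effective at bats
print(round(batting_average_against(25, 125, 8, 1, 2, 3, 1), 3))   # 0.227
```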
Reference site Baseball-Reference.com more simply defines BAA as hits divided by at bats, as "at bats" for a pitcher equates to the above noted denominator (batters faced less walks, hit by pitch, sacrifice hits, sacrifice flies, and catcher's interference):
formula_2
For example, in 2021, Max Scherzer had the best (lowest) batting average against in Major League Baseball among qualified pitchers. Scherzer's BAA for the 2021 season was:
formula_3
| [
{
"math_id": 0,
"text": "BAA = \\frac{H}{BF-BB-HBP-SH-SF-CINT}"
},
{
"math_id": 1,
"text": "BAA = \\frac{25}{125-8-1-2-3-1} = \\frac{25}{110} = .227"
},
{
"math_id": 2,
"text": "BAA = \\frac{H}{AB}"
},
{
"math_id": 3,
"text": "BAA = \\frac{H}{AB} = \\frac{119}{644} = .185"
}
]
| https://en.wikipedia.org/wiki?curid=685851 |
6858862 | Neutral third | Musical interval
A neutral third is a musical interval wider than a minor third but narrower than a major third , named by Jan Pieter Land in 1880. Land makes reference to the neutral third attributed to Zalzal (8th c.), described by Al-Farabi (10th c.) as corresponding to a ratio of 27:22 (354.5 cents) and by Avicenna (Ibn Sina, 11th c.) as 39:32 (342.5 cents). The Zalzalian third may have been a mobile interval.
Three distinct intervals may be termed neutral thirds: the undecimal neutral third, with a frequency ratio of 11:9 (about 347.4 cents); the tridecimal neutral third, with a ratio of 16:13 (about 359.5 cents); and the equal-tempered neutral third of 350 cents.
These intervals are all within about 12 cents of each other and are difficult for most people to distinguish by ear. Neutral thirds are roughly a quarter tone sharp from 12 equal temperament minor thirds and a quarter tone flat from 12-ET major thirds. In just intonation, as well as in tunings such as 31-ET, 41-ET, or 72-ET, which more closely approximate just intonation, the intervals are closer together.
In addition to the above examples, a "square root neutral third" can be characterized by a ratio of formula_3 between two frequencies, being exactly half of a just perfect fifth of 3/2 and measuring about 350.98 cents. Such a definition stems from the two thirds traditionally making a fifth-based triad.
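The cent values quoted in this article follow from the definition cents = 1200·log2(ratio); the short check below evaluates it for the ratios mentioned above (the selection of ratios is just those examples):

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

for name, r in [("27:22 (Zalzal)", 27 / 22), ("11:9 undecimal", 11 / 9),
                ("16:13 tridecimal", 16 / 13), ("sqrt(3:2)", math.sqrt(3 / 2))]:
    print(f"{name:18s} {cents(r):7.2f} cents")
# 354.55, 347.41, 359.47 and 350.98 cents respectively
```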
A triad formed by two neutral thirds is neither major nor minor; thus the neutral thirds triad is ambiguous. While it is not found in twelve-tone equal temperament, it is found in others such as the quarter tone scale and 31-ET.
<templatestyles src="Template:TOC limit/styles.css" />
Occurrence in human music.
In infants' song.
Infants experiment with singing, and a few studies of individual infants' singing found that neutral thirds regularly arise in their improvisations. In two separate case studies of the progression and development of these improvisations, neutral thirds were found to arise in infants' songs after major and minor seconds and thirds, but before intervals smaller than a semitone and also before intervals as large as a perfect fourth or larger.
In modern classical Western music.
The neutral third has been used by a number of modern composers, including Charles Ives, James Tenney, and Gayle Young.
In traditional music.
Claudius Ptolemy describes an "even diatonic" tuning which uses two justly tuned neutral thirds in built off the 12:11 and 11:10 neutral seconds in compound intervals with 9:8 and 10:9 whole tones, forming the intervals: (12/11)*(9/8) = 27/22, (11/10)*(10/9) = 11/9. The latter of these is an interval found in the harmonic series as the interval between partials 9 and 11.
The equal-tempered neutral third may be found in the quarter tone scale and in some traditional Arab music (see also Arab tone system). Undecimal neutral thirds appear in traditional Georgian music. Neutral thirds are also found in American folk music.
In contemporary popular music.
Blue notes (a note found in country music, blues, and some rock music) on the third note of a scale can be seen as a variant of a neutral third with the tonic, as they fall in between a major third and a minor third. Similarly the blue note on the seventh note of the scale can be seen as a neutral third with the dominant.
In equal temperaments.
Two steps of seven-tone equal temperament form an interval of 342.8571 cents, which is within 5 cents of 347.4079 for the undecimal (11:9) neutral third. This is an equal temperament in reasonably common use, at least in the form of "near seven equal", as it is a tuning used for Thai music as well as the Ugandan Chopi tradition of music.
The neutral third also has good approximations in other commonly used equal temperaments including 24-ET (7 steps, 350 cents) and similarly by all multiples of 24 equal steps such as 48-ET and 72-ET, 31-ET (9 steps, 348.39), 34-ET (10 steps, 352.941 cents), 41-ET (12 steps, 351.22 cents), and slightly less closely by 53-ET (15 steps, 339.62 cents).
Close approximations to the tridecimal neutral third (16:13) appear in 53-ET and 72-ET. Both of these temperaments distinguish between the tridecimal (16:13) and undecimal (11:9) neutral thirds. All the other tuning systems mentioned above fail to distinguish between these intervals; they temper out the comma 144:143.
| [
{
"math_id": 0,
"text": "|5 f - 4 (11/9) f| = (1/9) f"
},
{
"math_id": 1,
"text": "|6 f - 5 (11/9) f| = |-(1/9) f| = (1/9) f"
},
{
"math_id": 2,
"text": "|7 f - 6 (16/13) f| = |9 f - 7 (16/13) f| = (5/13) f"
},
{
"math_id": 3,
"text": "\\sqrt{3/2}"
}
]
| https://en.wikipedia.org/wiki?curid=6858862 |
68600757 | 1 Samuel 4 | First Book of Samuel chapter
1 Samuel 4 is the fourth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter describes how the Ark of Covenant was taken by the Philistines, a part of the "Ark Narrative" (–) within a section concerning the life of Samuel (–7:17).
Text.
This chapter was originally written in the Hebrew language. It is divided into 22 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 3–4, 9–10, 12.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
By the beginning of this chapter, Samuel was no longer a boy but had grown into a powerful prophet whose words were fulfilled; and with Shiloh stripped of its pre-eminence, Samuel was no longer associated with that town.
Verses 4:1b to 7:1 form the so-called "Ark Narrative", distinguished by their distinctive vocabulary and their focus on the Ark of the Covenant, while Samuel disappears from the scene and Shiloh's influence diminishes. The historical setting suggests the tenth century BCE as the composition date of this narrative, the main argument being that 'an account of the previous misfortunes of the ark would be unnecessary and irrelevant once David was on his way to be king in Jerusalem'.
The Philistines capture the Ark (4:1–10).
The position of the two camps at Ebenezer and Aphek in the southern end of the plain of Sharon indicates the intention of the Philistines to gain land further north of their current territories, whereas the Israelites intended to move westwards. Israel was defeated twice: the first defeat was attributed to God's decision 'to put us to rout today' (verse 3), and the second happened despite the presence of the Ark of the Covenant in battle (verse 7). The importance of the ark in Israel's battles, as a visible sign of God's presence, is known from several passages such as Numbers 10:35–36 and 2 Samuel 11:11. Israel's defeat and the capture of the ark by the Philistines were attributed in verse 11 (recalling 1 Samuel 2:34) to 'the degenerate priesthood of Shiloh'. The Philistines regarded the Israelites as worshippers of several gods (verses 7–8) and were aware of the Exodus tradition.
"And the word of Samuel came to all Israel."
"And Israel went out to battle against the Philistines and they made camp beside Ebenezer, and the Philistines encamped in Aphek."
Verse 1.
Before the words "and Israel", the LXX (Septuagint) and the Vulgate have the statement "And it came to pass in those days that the Philistines gathered themselves together to fight" (the LXX further adds "against Israel"); this addition is not found in the Masoretic Text or the Targum.
Death of Eli (4:11–22).
News of Israel's defeat was brought to Eli (verses 12–17), who was 'more concerned about the ark than anything else' (verse 13). The loss of the ark caused a triad of calamities for Eli and his family: Eli fell to his death (verses 17–18), and Phinehas's wife gave premature birth, which led to her untimely death (verse 19). The naming of her son, Ichabod ('where is glory?' or 'alas (for) glory'), and her death-cry 'both allude to the loss of the ark'.
"Then it happened, when he made mention of the ark of God, that Eli fell off the seat backward by the side of the gate; and his neck was broken and he died, for the man was old and heavy. And he had judged Israel forty years."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=68600757 |
68602630 | Fair division among groups | Fair division among groups (or families) is a class of fair division problems, in which the resources are allocated among "groups" of agents, rather than among individual agents. After the division, all members in each group consume the same share, but they may have different preferences; therefore, different members in the same group might disagree on whether the allocation is fair or not. Some examples of group fair division settings are:
In all the above examples, the groups are fixed in advance. In some settings, the groups can be determined ad-hoc, that is, people can be grouped based on their preferences. An example of such a setting is:
Fairness criteria.
Common fairness criteria, such as proportionality and envy-freeness, judge the division from the point-of-view of a single agent, with a single preference relation. There are several ways to extend these criteria to fair division among groups.
Unanimous fairness requires that the allocation be considered fair in the eyes of all agents in all groups. For example:
Unanimous fairness is a strong requirement, and often cannot be satisfied.
Aggregate fairness assigns to each group a certain aggregate function, such as: sum, product, arithmetic mean or geometric mean. It requires that the allocation be considered fair according to this aggregate function. For example:
Democratic fairness requires that, in each group, a certain fraction of the agents agree that the division is fair; preferably this fraction should be at least 1/2. A practical situation in which such a requirement may be useful is when two democratic countries agree to divide a certain disputed land among them, and the agreement should be approved by a referendum in both countries.
Unanimous-fairness implies both aggregate-fairness and democratic-fairness. Aggregate-fairness and democratic fairness are independent: neither of them implies the other.
Pareto efficiency is another important criterion that is required in addition to fairness. It is defined in the usual way: no other allocation is strictly better for at least one individual agent and at least as good for all individual agents.
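As a concrete illustration of these criteria, the following Python sketch (a hypothetical example with made-up valuations, not drawn from the literature) checks unanimous and democratic proportionality for an allocation of indivisible items to two groups, with each agent judging the group's bundle by their own valuation.

```python
# Illustrative sketch: each agent asks whether their group's bundle is worth at
# least 1/k of the total, by the agent's own (additive) valuation.
from typing import Dict, List, Set

def proportional_shares(groups: List[List[Dict[str, float]]],
                        allocation: List[Set[str]]) -> List[List[bool]]:
    k = len(groups)
    verdicts = []
    for group, bundle in zip(groups, allocation):
        verdicts.append([
            sum(agent[item] for item in bundle) >= sum(agent.values()) / k
            for agent in group
        ])
    return verdicts

# Two groups of two agents each; items a..d; each agent is a dict item -> value.
groups = [
    [{"a": 3, "b": 1, "c": 1, "d": 1}, {"a": 1, "b": 1, "c": 3, "d": 1}],
    [{"a": 1, "b": 4, "c": 1, "d": 2}, {"a": 2, "b": 2, "c": 2, "d": 2}],
]
allocation = [{"a", "c"}, {"b", "d"}]
verdicts = proportional_shares(groups, allocation)
print("unanimously proportional:", all(all(v) for v in verdicts))
print("democratically proportional (>= 1/2 in each group):",
      all(sum(v) >= len(v) / 2 for v in verdicts))
```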
Results for divisible resources.
In the context of fair cake-cutting, the following results are known (where "k" is the number of groups, and "n" is the number of agents in all groups together).
The division problem is easier when the agents can be grouped ad-hoc based on their preferences. In this case, there exists a unanimous envy-free connected allocation for any number of groups and any number of agents in each group.
Unanimous proportionality and exact division.
In an "exact division" (also called "consensus division"), there are "n" agents, and the goal is to partition the cake into "k" pieces such that all agents value all pieces at exactly 1/"k". It is known that an exact division with "n"("k"-1) always exists. However, even for "k"=2, finding an exact division with "n" cuts is FIXP-hard, and finding an approximate exact division with "n" cuts is PPA-complete (see exact division for more information). It can be proved that unanimous-proportionality is equivalent to consensus division in the following sense:
Results for indivisible items.
In the context of fair item allocation, the following results are known.
Unanimous approximate maximin-share fairness:
Unanimous approximate envy-freeness:
Unanimous envy-freeness with high probability:
Democratic fairness:
Group fair division of items and money.
In the context of rental harmony (envy-free division of rooms and rent), the following results are known.
Fair division of ticket lotteries.
A practical application of fair division among groups is dividing tickets to parks or other experiences with limited capacity. Often, tickets are divided at random. When people arrive on their own, a simple uniformly-random lottery among all candidates is a fair solution. But people often come in families or groups of friends, who want to enter together. This leads to various considerations in how exactly to design the lottery. The following results are known: | [
{
"math_id": 0,
"text": "({2c+1\\choose c+1}, {2c+1 \\choose c+1})"
},
{
"math_id": 1,
"text": "O(\\sqrt{n})\\geq c \\geq \\Omega(\\sqrt{n/k^3})"
},
{
"math_id": 2,
"text": "O(\\sqrt{n})\\geq c \\geq \\Omega(\\sqrt{n/k})"
},
{
"math_id": 3,
"text": "\\Omega(n \\log n)"
},
{
"math_id": 4,
"text": "(1-1/2^{c-1})"
},
{
"math_id": 5,
"text": "(1-1/2^{c})"
},
{
"math_id": 6,
"text": "c=2"
}
]
| https://en.wikipedia.org/wiki?curid=68602630 |
686036 | Wave vector | Vector describing a wave; often its propagation direction
In physics, a wave vector (or wavevector) is a vector used in describing a wave, with a typical unit being cycle per metre. It has a magnitude and direction. Its magnitude is the wavenumber of the wave (inversely proportional to the wavelength), and its direction is perpendicular to the wavefront. In isotropic media, this is also the direction of wave propagation.
A closely related vector is the angular wave vector (or angular wavevector), with a typical unit being radian per metre. The wave vector and angular wave vector are related by a fixed constant of proportionality, 2π radians per cycle.
It is common in several fields of physics to refer to the angular wave vector simply as the "wave vector", in contrast to, for example, crystallography. It is also common to use the symbol k for whichever is in use.
In the context of special relativity, "wave vector" can refer to a four-vector, in which the (angular) wave vector and (angular) frequency are combined.
Definition.
The terms "wave vector" and "angular wave vector" have distinct meanings. Here, the wave vector is denoted by formula_0 and the wavenumber by formula_1. The angular wave vector is denoted by k and the angular wavenumber by "k" = |k|. These are related by formula_2.
A sinusoidal traveling wave follows the equation
formula_3
where A is the amplitude of the wave, r is position, t is time, k is the angular wave vector, formula_4 is the angular frequency (T being the period), formula_5 is the angular wavenumber (λ being the wavelength), and φ is the phase offset of the wave.
The equivalent equation using the wave vector and frequency is
formula_6
where formula_7 is the frequency of the wave (the number of oscillations per unit time) and formula_0 is the wave vector defined above.
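The two forms are numerically identical when formula_2 and the angular frequency is 2π times the frequency. The following Python sketch (with arbitrarily chosen numbers) evaluates both expressions at the same point and time.

```python
# Sketch: the angular and cyclic forms of the plane wave agree when
# k = 2*pi*nu and omega = 2*pi*f.
import numpy as np

wavelength, period, amplitude, phase = 2.0, 0.5, 1.0, 0.3
direction = np.array([1.0, 0.0, 0.0])           # unit propagation direction

nu = direction / wavelength                      # wave vector, cycles per metre
f = 1.0 / period                                 # frequency, cycles per second
k = 2 * np.pi * nu                               # angular wave vector, rad per metre
omega = 2 * np.pi * f                            # angular frequency, rad per second

r = np.array([0.7, 0.2, -0.1])                   # a sample position
t = 0.4                                          # a sample time

psi_angular = amplitude * np.cos(np.dot(k, r) - omega * t + phase)
psi_cyclic = amplitude * np.cos(2 * np.pi * (np.dot(nu, r) - f * t) + phase)
print(psi_angular, psi_cyclic)                   # identical values
```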
Direction of the wave vector.
The direction in which the wave vector points must be distinguished from the "direction of wave propagation". The "direction of wave propagation" is the direction of a wave's energy flow, and the direction that a small wave packet will move, i.e. the direction of the group velocity. For light waves in vacuum, this is also the direction of the Poynting vector. On the other hand, the wave vector points in the direction of phase velocity. In other words, the wave vector points in the normal direction to the surfaces of constant phase, also called wavefronts.
In a lossless isotropic medium such as air, any gas, any liquid, amorphous solids (such as glass), and cubic crystals, the direction of the wavevector is the same as the direction of wave propagation. If the medium is anisotropic, the wave vector in general points in directions other than that of the wave propagation. The wave vector is always perpendicular to surfaces of constant phase.
For example, when a wave travels through an anisotropic medium, such as light waves through an asymmetric crystal or sound waves through a sedimentary rock, the wave vector may not point exactly in the direction of wave propagation.
In solid-state physics.
In solid-state physics, the "wavevector" (also called k-vector) of an electron or hole in a crystal is the wavevector of its quantum-mechanical wavefunction. These electron waves are not ordinary sinusoidal waves, but they do have a kind of "envelope function" which is sinusoidal, and the wavevector is defined via that envelope wave, usually using the "physics definition". See Bloch's theorem for further details.
In special relativity.
A moving wave surface in special relativity may be regarded as a hypersurface (a 3D subspace) in spacetime, formed by all the events passed by the wave surface. A wavetrain (denoted by some variable X) can be regarded as a one-parameter family of such hypersurfaces in spacetime. This variable X is a scalar function of position in spacetime. The derivative of this scalar is a vector that characterizes the wave, the four-wavevector.
The four-wavevector is a wave four-vector that is defined, in Minkowski coordinates, as:
formula_8
where the angular frequency formula_9 is the temporal component, and the wavenumber vector formula_10 is the spatial component.
Alternatively, the wavenumber k can be written as the angular frequency ω divided by the phase-velocity vp, or in terms of inverse period T and inverse wavelength λ.
When written out explicitly its contravariant and covariant forms are:
formula_11
In general, the Lorentz scalar magnitude of the wave four-vector is:
formula_12
The four-wavevector is null for massless (photonic) particles, where the rest mass formula_13
An example of a null four-wavevector would be a beam of coherent, monochromatic light, which has phase-velocity formula_14
formula_15 {for light-like/null}
which would have the following relation between the frequency and the magnitude of the spatial part of the four-wavevector:
formula_16 {for light-like/null}
The four-wavevector is related to the four-momentum as follows:
formula_17
The four-wavevector is related to the four-frequency as follows:
formula_18
The four-wavevector is related to the four-velocity as follows:
formula_19
Lorentz transformation.
Taking the Lorentz transformation of the four-wavevector is one way to derive the relativistic Doppler effect. The Lorentz matrix is defined as
formula_20
In the situation where light is being emitted by a fast moving source and one would like to know the frequency of light detected in an earth (lab) frame, we would apply the Lorentz transformation as follows. Note that the source is in a frame "S"s and earth is in the observing frame, "S"obs.
Applying the Lorentz transformation to the wave vector
formula_21
and choosing just to look at the formula_22 component results in
formula_23
where formula_24 is the direction cosine of formula_25 with respect to formula_26
So ωobs/ωs = 1/(γ(1 − β cos θ)), which specializes to the following cases.
Source moving away (redshift).
As an example, to apply this to a situation where the source is moving directly away from the observer (formula_27), this becomes:
formula_28
Source moving towards (blueshift).
To apply this to a situation where the source is moving straight towards the observer ("θ" = 0), this becomes:
formula_29
Source moving tangentially (transverse Doppler effect).
To apply this to a situation where the source is moving transversely with respect to the observer ("θ" = "π"/2), this becomes:
formula_30
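All three cases follow from the relation just derived. The short Python sketch below (the speed β = 0.6 is an arbitrary example) evaluates the observed-to-source frequency ratio for each geometry.

```python
# Sketch: omega_obs / omega_s from the Lorentz-transformed four-wavevector,
# i.e. omega_obs / omega_s = 1 / (gamma * (1 - beta * cos(theta))).
import math

def doppler_ratio(beta: float, theta: float) -> float:
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

beta = 0.6
print("receding   (theta = pi):  ", doppler_ratio(beta, math.pi))       # sqrt(0.4/1.6) = 0.5
print("approaching (theta = 0):  ", doppler_ratio(beta, 0.0))           # sqrt(1.6/0.4) = 2.0
print("transverse (theta = pi/2):", doppler_ratio(beta, math.pi / 2))   # 1/gamma = 0.8
```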
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\tilde{\\boldsymbol{\\nu}} "
},
{
"math_id": 1,
"text": "\\tilde{\\nu} = \\left| \\tilde{\\boldsymbol{\\nu}} \\right|"
},
{
"math_id": 2,
"text": "\\mathbf k = 2\\pi \\tilde{\\boldsymbol{\\nu}}"
},
{
"math_id": 3,
"text": "\\psi(\\mathbf{r},t) = A \\cos (\\mathbf{k} \\cdot \\mathbf{r} - \\omega t + \\varphi) ,"
},
{
"math_id": 4,
"text": "\\omega= \\tfrac{2\\pi}{T},"
},
{
"math_id": 5,
"text": "|\\mathbf{k}|= \\tfrac{2\\pi}{\\lambda}."
},
{
"math_id": 6,
"text": " \\psi \\left( \\mathbf{r}, t \\right) = A \\cos \\left(2\\pi \\left( \\tilde{\\boldsymbol{\\nu}} \\cdot {\\mathbf r} - f t \\right) + \\varphi \\right) ,"
},
{
"math_id": 7,
"text": " f "
},
{
"math_id": 8,
"text": "K^\\mu = \\left(\\frac{\\omega}{c}, \\vec{k}\\right) = \\left(\\frac{\\omega}{c}, \\frac{\\omega}{v_p}\\hat{n}\\right) = \\left(\\frac{2 \\pi}{cT}, \\frac{2 \\pi \\hat{n}}{\\lambda}\\right) \\,"
},
{
"math_id": 9,
"text": "\\tfrac{\\omega}{c}"
},
{
"math_id": 10,
"text": "\\vec{k}"
},
{
"math_id": 11,
"text": "\\begin{align}\n K^\\mu &= \\left(\\frac{\\omega}{c}, k_x, k_y, k_z \\right)\\, \\\\[4pt]\n K_\\mu &= \\left(\\frac{\\omega}{c}, -k_x, -k_y, -k_z \\right)\n\\end{align}"
},
{
"math_id": 12,
"text": "K^\\mu K_\\mu = \\left(\\frac{\\omega}{c}\\right)^2 - k_x^2 - k_y^2 - k_z^2 = \\left(\\frac{\\omega_o}{c}\\right)^2 = \\left(\\frac{m_o c}{\\hbar}\\right)^2"
},
{
"math_id": 13,
"text": "m_o = 0"
},
{
"math_id": 14,
"text": "v_p = c"
},
{
"math_id": 15,
"text": "K^\\mu = \\left(\\frac{\\omega}{c}, \\vec{k}\\right) = \\left(\\frac{\\omega}{c}, \\frac{\\omega}{c}\\hat{n}\\right) = \\frac{\\omega}{c}\\left(1, \\hat{n}\\right) \\,"
},
{
"math_id": 16,
"text": "K^\\mu K_\\mu = \\left(\\frac{\\omega}{c}\\right)^2 - k_x^2 - k_y^2 - k_z^2 = 0"
},
{
"math_id": 17,
"text": "P^\\mu = \\left(\\frac{E}{c}, \\vec{p}\\right) = \\hbar K^\\mu = \\hbar\\left(\\frac{\\omega}{c}, \\vec{k}\\right) "
},
{
"math_id": 18,
"text": "K^\\mu = \\left(\\frac{\\omega}{c}, \\vec{k}\\right) = \\left(\\frac{2 \\pi}{c}\\right)N^\\mu = \\left(\\frac{2 \\pi}{c}\\right)\\left(\\nu, \\nu \\vec{n}\\right)"
},
{
"math_id": 19,
"text": "K^\\mu = \\left(\\frac{\\omega}{c}, \\vec{k}\\right) = \\left(\\frac{\\omega_o}{c^2}\\right)U^\\mu = \\left(\\frac{\\omega_o}{c^2}\\right) \\gamma \\left(c, \\vec{u}\\right)"
},
{
"math_id": 20,
"text": "\\Lambda = \\begin{pmatrix}\n \\gamma & -\\beta \\gamma & \\ 0 \\ & \\ 0 \\ \\\\\n -\\beta \\gamma & \\gamma & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n \\end{pmatrix}"
},
{
"math_id": 21,
"text": "k^{\\mu}_s = \\Lambda^\\mu_\\nu k^\\nu_{\\mathrm{obs}} "
},
{
"math_id": 22,
"text": "\\mu = 0"
},
{
"math_id": 23,
"text": "\\begin{align}\n k^{0}_s &= \\Lambda^0_0 k^0_{\\mathrm{obs}} + \\Lambda^0_1 k^1_{\\mathrm{obs}} + \\Lambda^0_2 k^2_{\\mathrm{obs}} + \\Lambda^0_3 k^3_{\\mathrm{obs}} \\\\[3pt]\n \\frac{\\omega_s}{c} &= \\gamma \\frac{\\omega_{\\mathrm{obs}}}{c} - \\beta \\gamma k^1_{\\mathrm{obs}} \\\\\n &= \\gamma \\frac{\\omega_{\\mathrm{obs}}}{c} - \\beta \\gamma \\frac{\\omega_{\\mathrm{obs}}}{c} \\cos \\theta.\n\\end{align}"
},
{
"math_id": 24,
"text": "\\cos \\theta "
},
{
"math_id": 25,
"text": "k^1"
},
{
"math_id": 26,
"text": "k^0, k^1 = k^0 \\cos \\theta. "
},
{
"math_id": 27,
"text": "\\theta=\\pi"
},
{
"math_id": 28,
"text": "\\frac{\\omega_{\\mathrm{obs}}}{\\omega_s} = \\frac{1}{\\gamma (1 + \\beta)} = \\frac{\\sqrt{1-\\beta^2}}{1+\\beta} = \\frac{\\sqrt{(1+\\beta)(1-\\beta)}}{1+\\beta} = \\frac{\\sqrt{1-\\beta}}{\\sqrt{1+\\beta}} "
},
{
"math_id": 29,
"text": "\\frac{\\omega_{\\mathrm{obs}}}{\\omega_s} = \\frac{1}{\\gamma (1 - \\beta)} = \\frac{\\sqrt{1-\\beta^2}}{1-\\beta} = \\frac{\\sqrt{(1+\\beta)(1-\\beta)}}{1-\\beta} = \\frac{\\sqrt{1+\\beta}}{\\sqrt{1-\\beta}} "
},
{
"math_id": 30,
"text": "\\frac{\\omega_{\\mathrm{obs}}}{\\omega_s} = \\frac{1}{\\gamma (1 - 0)} = \\frac{1}{\\gamma} "
}
]
| https://en.wikipedia.org/wiki?curid=686036 |
68609370 | Finding Ellipses | Mathematics book
Finding Ellipses: What Blaschke Products, Poncelet’s Theorem, and the Numerical Range Know about Each Other is a mathematics book on "some surprising connections among complex analysis, geometry, and linear algebra", and on the connected ways that ellipses can arise from other subjects of study in all three of these fields. It was written by Ulrich Daepp, Pamela Gorkin, Andrew Shaffer, and Karl Voss, and published in 2019 by the American Mathematical Society and Mathematical Association of America as volume 34 of the Carus Mathematical Monographs, a series of books aimed at presenting technical topics in mathematics to a wide audience.
Topics.
"Finding Ellipses" studies a connection between Blaschke products, Poncelet's closure theorem, and the numerical range of matrices.
A Blaschke product is a rational function that maps the unit disk in the complex plane to itself, and maps some given points within the disk to the origin. In the main case considered by the book, there are three distinct given points formula_0, formula_1, and formula_2, and their Blaschke product has the formula
formula_3
For this function, each point on the unit circle has three preimages, also on the unit circle. These triples of preimages form triangles inscribed in the unit circle, and (it turns out) they all circumscribe an ellipse with foci at formula_1 and formula_2. Thus, they form an infinite system of polygons inscribed in and circumscribing two conics, which is exactly the kind of system that Poncelet's theorem describes. This theorem states that, whenever one polygon is inscribed in a conic and circumscribes another conic, it is part of an infinite family of polygons of the same type, one through each point of either conic. The family of triangles constructed from the Blaschke product is one of these infinite families of Poncelet's theorem.
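This construction is easy to explore numerically. The sketch below (written in Python with NumPy, and using arbitrarily chosen zeros a and b; none of this appears in the book) solves B(z) = w for a few points w on the unit circle and confirms that the three preimages also lie on the unit circle, giving the vertices of one of the Poncelet triangles.

```python
# Sketch: preimages of unit-circle points under a degree-3 Blaschke product.
import numpy as np

a, b = 0.3 + 0.2j, -0.4 + 0.1j                            # zeros inside the unit disk

def blaschke(z):
    return z * (z - a) / (1 - np.conj(a) * z) * (z - b) / (1 - np.conj(b) * z)

num = np.polymul([1, 0], np.polymul([1, -a], [1, -b]))    # z (z - a)(z - b)
den = np.polymul([-np.conj(a), 1], [-np.conj(b), 1])      # (1 - conj(a) z)(1 - conj(b) z)

for theta in (0.0, 1.0, 2.5):
    w = np.exp(1j * theta)                                # a point on the unit circle
    roots = np.roots(np.polysub(num, w * den))            # solve B(z) = w, a cubic in z
    print(np.abs(roots), np.allclose(blaschke(roots), w)) # moduli ~1, and B maps them to w
```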
The third part of the connection surveyed by the book is the numerical range of a matrix, a region within which the eigenvalues of the matrix can be found. In the case of a formula_4 complex matrix, the numerical range is an ellipse, by a result commonly called the elliptical range theorem, with the eigenvalues as its foci. For a certain matrix whose coefficients are derived from the two given points, and having these points on its diagonal, this ellipse is the one circumscribed by the triangles of Poncelet's theorem. More, the numerical range of any matrix is the intersection of the numerical ranges of its unitary dilations, which in this case are formula_5 unitary matrices each having one of the triangles of Poncelet's theorem as its numerical range and the three vertices of the triangle as its eigenvalues.
"Finding Ellipses" is arranged into three parts. The first part develops the mathematics of Blaschke products, Poncelet's closure theorem, and numerical ranges separately, before revealing the close connections between them. The second part of the book generalizes these ideas to higher-order Blaschke products, larger matrices, and Poncelet-like results for the corresponding numerical ranges, which generalize ellipses. These generalizations connect to more advanced topics in mathematics: "Lebesgue theory, Hardy spaces, functional analysis, operator theory and more". The third part consists of projects and exercises for students to develop this material beyond the exposition in the book. An online collection of web applets allow students to experiment with the constructions in the book.
Audience and reception.
"Finding Ellipses" is primarily aimed at advanced undergraduates in mathematics, although more as a jumping-off point for undergraduate research projects than as a textbook for courses. The first part of the book uses only standard undergraduate mathematics, but the second part is more demanding, and reviewer Bill Satzer writes that "even the best students might find themselves paging backward and forward in the book, feeling frustrated while trying to make connections". Despite that, Line Baribeau writes that it is "clear and engaging", and appealing in its use of modern topics. Yunus Zeytuncu is even more positive, calling it a "delight" that "realizes the dream" of bringing this combination of disciplines together into a neat package that is accessible to undergraduates.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "B(z)= z\\cdot\\frac{z-a}{1-\\bar a z}\\cdot\\frac{z-b}{1-\\bar b z}."
},
{
"math_id": 4,
"text": "2\\times 2"
},
{
"math_id": 5,
"text": "3\\times 3"
}
]
| https://en.wikipedia.org/wiki?curid=68609370 |
68618685 | Seat bias | Metric for fairness of apportionment methods
Seat bias is a property describing methods of apportionment. These are methods used to allocate seats in a parliament among federal states or among political parties. A method is "biased" if it systematically favors small parties over large parties, or vice versa. There are several mathematical measures of bias, which can disagree slightly, but all measures broadly agree that rules based on Droop's quota or Jefferson's method are strongly biased in favor of large parties, while rules based on Webster's method, Hill's method, or Hare's quota are effectively unbiased.
Notation.
There is a positive integer formula_0 (=house size), representing the total number of seats to allocate. There is a positive integer formula_1 representing the number of parties to which seats should be allocated. There is a vector of fractions formula_2 with formula_3, representing "entitlements", that is, the fraction of seats to which some party formula_4 is entitled (out of a total of formula_0). This is usually the fraction of votes the party has won in the elections.
The goal is to find an "apportionment" of formula_0: a vector of integers formula_5 with formula_6, where formula_7 is the number of seats allocated to party "i".
An apportionment method is a multi-valued function formula_8, which takes as input a vector of entitlements and a house-size, and returns as output an apportionment of formula_0.
Majorization order.
We say that an apportionment method formula_9 "favors small parties more than" formula_10 if, for every t and "h", and for every formula_11 and formula_12, formula_13 implies either formula_14 or formula_15.
If formula_10 and formula_9 are two divisor methods with divisor functions formula_16 and formula_17, and formula_18 whenever formula_19, then formula_9 favors small agents more than formula_10.
This fact can be expressed using the majorization ordering on vectors. A vector a "majorizes" another vector b if for all "k", the "k" largest parties receive in a at least as many seats as they receive in b. An apportionment method formula_10 majorizes another method formula_9, if for any house-size and entitlement-vector, formula_8 majorizes formula_20. If formula_10 and formula_9 are two divisor methods with divisor functions formula_16 and formula_17, and formula_18 whenever formula_19, then formula_9 majorizes formula_10. Therefore, Adams' method is majorized by Dean's, which is majorized by Hill's, which is majorized by Webster's, which is majorized by Jefferson's.
The shifted-quota methods (largest-remainders) with quota formula_21 are also ordered by majorization, where methods with smaller "s" are majorized by methods with larger "s".
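The ordering can be seen on a small example. The following Python sketch (with made-up vote shares) implements the highest-averages procedure for the stationary divisors d(a) = a + r discussed further below, where r = 0, 1/2 and 1 give Adams', Webster's and Jefferson's methods respectively.

```python
def divisor_apportionment(entitlements, house_size, r):
    """Highest-averages apportionment with the stationary divisor d(a) = a + r."""
    seats = [0] * len(entitlements)
    for _ in range(house_size):
        quotients = [t / (a + r) if a + r > 0 else float("inf")
                     for t, a in zip(entitlements, seats)]
        seats[quotients.index(max(quotients))] += 1       # next seat to the largest quotient
    return seats

t = [0.53, 0.25, 0.13, 0.09]                              # entitlements (e.g. vote shares)
for name, r in [("Adams", 0.0), ("Webster", 0.5), ("Jefferson", 1.0)]:
    print(name, divisor_apportionment(t, 10, r))
# Adams     [5, 2, 2, 1]   (most favourable to the small parties)
# Webster   [5, 3, 1, 1]
# Jefferson [6, 2, 1, 1]   (most favourable to the largest party)
```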
Averaging over all house sizes.
To measure the bias of a certain apportionment method M, one can check, for each pair of entitlements formula_22, the set of all possible apportionments yielded by M, for all possible house sizes. Theoretically, the number of possible house sizes is infinite, but since formula_22 are usually rational numbers, it is sufficient to check the house sizes up to the product of their denominators. For each house size, one can check whether formula_23 or formula_24. If the number of house-sizes for which formula_23 equals the number of house-sizes for which formula_24, then the method is unbiased. The only unbiased method, by this definition, is Webster's method.
Averaging over all entitlement-pairs.
One can also check, for each pair of possible allocations formula_25, the set of all entitlement-pairs formula_22 for which the method "M" yields the allocations formula_25 (for formula_26). Assuming the entitlements are distributed uniformly at random, one can compute the probability that "M" favors state 1 vs. the probability that it favors state 2. For example, the probability that a state receiving 2 seats is favored over a state receiving 4 seats is 75% for Adams, 63.5% for Dean, 57% for Hill, 50% for Webster, and 25% for Jefferson. The unique proportional divisor method for which this probability is always 50% is Webster. There are other divisor methods yielding a probability of 50%, but they do not satisfy the criterion of "proportionality" as defined in the "Basic requirements" section above. The same result holds if, instead of checking pairs of agents, we check pairs of groups of agents.
Averaging over all entitlement-vectors.
One can also check, for each vector of entitlements (each point in the standard simplex), what is the "seat bias" of the agent with the "k"-th highest entitlement. Averaging this number over the entire standard simplex gives a "seat bias formula".
Stationary divisor methods.
For each "stationary" divisor method, i.e. one where formula_27 seats correspond to a divisor formula_28, and electoral threshold formula_29:
formula_30
In particular, Webster's method is the only unbiased one in this family. The formula is applicable when the house size is sufficiently large, particularly, when formula_31. When the threshold is negligible, the third term can be ignored. Then, the sum of mean biases is:
formula_32, where the approximation is valid for formula_33.
Since the mean bias favors large parties when formula_34, there is an incentive for small parties to form party alliances (=coalitions). Such alliances can tip the bias in their favor. The seat-bias formula can be extended to settings with such alliances.
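A direct evaluation of the seat-bias formula makes the direction of the bias visible. The sketch below (with n = 10 parties, a Jefferson-like r = 1 and a negligible threshold, all chosen purely for illustration) prints the mean bias by rank.

```python
def mean_bias(r, k, n, t=0.0):
    """Mean seat bias of the party with the k-th largest entitlement, per the formula above."""
    return (r - 0.5) * (sum(1.0 / i for i in range(k, n + 1)) - 1.0) * (1.0 - n * t)

n = 10
print([round(mean_bias(r=1.0, k=k, n=n), 3) for k in range(1, n + 1)])
# Positive values for the largest parties (small k), negative for the smallest.
```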
For shifted-quota methods.
For each shifted-quota method (largest-remainders method) with quota formula_21, when entitlement vectors are drawn uniformly at random from the standard simplex,
formula_35
In particular, Hamilton's method is the only unbiased one in this family.
Empirical data.
Using United States census data, Balinski and Young argued Webster's method is the least median-biased estimator for comparing pairs of states, followed closely by the Huntington-Hill method. However, researchers have found that under other definitions or metrics for bias, the Huntington-Hill method can also be described as least biased.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "(t_1,\\ldots,t_n)"
},
{
"math_id": 3,
"text": "\\sum_{i=1}^n t_i = 1"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "a_1,\\ldots,a_n"
},
{
"math_id": 6,
"text": "\\sum_{i=1}^n a_i = h"
},
{
"math_id": 7,
"text": "a_i"
},
{
"math_id": 8,
"text": "M(\\mathbf{t},h)"
},
{
"math_id": 9,
"text": "M'"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "\\mathbf{a'}\\in M'(\\mathbf{t},h)"
},
{
"math_id": 12,
"text": "\\mathbf{a}\\in M(\\mathbf{t},h)"
},
{
"math_id": 13,
"text": "t_i < t_j"
},
{
"math_id": 14,
"text": "a_i'\\geq a_i"
},
{
"math_id": 15,
"text": "a_j'\\leq a_j"
},
{
"math_id": 16,
"text": "d"
},
{
"math_id": 17,
"text": "d'"
},
{
"math_id": 18,
"text": "d'(a)/d'(b) > d(a)/d(b)"
},
{
"math_id": 19,
"text": "a > b"
},
{
"math_id": 20,
"text": "M'(\\mathbf{t},h)"
},
{
"math_id": 21,
"text": "q_i = t_i\\cdot (h+s)"
},
{
"math_id": 22,
"text": "t_1, t_2"
},
{
"math_id": 23,
"text": "a_1/t_1 > a_2 / t_2"
},
{
"math_id": 24,
"text": "a_1/t_1 < a_2 / t_2"
},
{
"math_id": 25,
"text": "a_1, a_2"
},
{
"math_id": 26,
"text": "h = a_1 + a_2"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": "d(a) = a+r"
},
{
"math_id": 29,
"text": "t"
},
{
"math_id": 30,
"text": "\\text{MeanBias}(r, k, t) = (r-1/2)\\cdot\\left(\\sum_{i=k}^n(1/i) -1\\right)\\cdot(1-n t)"
},
{
"math_id": 31,
"text": "h\\geq 2n"
},
{
"math_id": 32,
"text": "\\sum_{k=1}^n \\text{MeanBias}(r, k, 0) \\approx (r-1/2)\\cdot (n/e-1)"
},
{
"math_id": 33,
"text": "n\\geq 5"
},
{
"math_id": 34,
"text": "r>1/2"
},
{
"math_id": 35,
"text": "\\text{MeanBias}(s, k, t) = \\frac{s}{n}\\cdot\\left(\\sum_{i=k}^n(1/i) -1\\right)\\cdot(1-n t)"
}
]
| https://en.wikipedia.org/wiki?curid=68618685 |
6862105 | End (topology) | In topology, a branch of mathematics, the ends of a topological space are, roughly speaking, the connected components of the "ideal boundary" of the space. That is, each end represents a topologically distinct way to move to infinity within the space. Adding a point at each end yields a compactification of the original space, known as the end compactification.
The notion of an end of a topological space was introduced by Hans Freudenthal (1931).
Definition.
Let formula_0 be a topological space, and suppose that
<templatestyles src="Block indent/styles.css"/>formula_1
is an ascending sequence of compact subsets of formula_0 whose interiors cover formula_0. Then formula_0 has one end for every sequence
<templatestyles src="Block indent/styles.css"/>formula_2
where each formula_3 is a connected component of formula_4. The number of ends does not depend on the specific sequence formula_5 of compact sets; there is a natural bijection between the sets of ends associated with any two such sequences.
Using this definition, a neighborhood of an end formula_6 is an open set formula_7 such that formula_8 for some formula_9. Such neighborhoods represent the neighborhoods of the corresponding point at infinity in the end compactification (this "compactification" is not always compact; the topological space "X" has to be connected and locally connected).
The definition of ends given above applies only to spaces formula_0 that possess an exhaustion by compact sets (that is, formula_0 must be hemicompact). However, it can be generalized as follows: let formula_0 be any topological space, and consider the direct system formula_10 of compact subsets of formula_0 and inclusion maps. There is a corresponding inverse system formula_11, where formula_12 denotes the set of connected components of a space formula_13, and each inclusion map formula_14 induces a function formula_15. Then set of ends of formula_0 is defined to be the inverse limit of this inverse system.
Under this definition, the set of ends is a functor from the category of topological spaces, where morphisms are only "proper" continuous maps, to the category of sets. Explicitly, if formula_16 is a proper map and formula_17 is an end of formula_0 (i.e. each element formula_18 in the family is a connected component of formula_19 and they are compatible with maps induced by inclusions) then formula_20 is the family formula_21 where formula_22 ranges over compact subsets of "Y" and formula_23 is the map induced by formula_24 from formula_25 to formula_26. Properness of formula_24 is used to ensure that each formula_27 is compact in formula_0.
The original definition above represents the special case where the direct system of compact subsets has a cofinal sequence.
Ends of graphs and groups.
In infinite graph theory, an end is defined slightly differently, as an equivalence class of semi-infinite paths in the graph, or as a haven, a function mapping finite sets of vertices to connected components of their complements. However, for locally finite graphs (graphs in which each vertex has finite degree), the ends defined in this way correspond one-for-one with the ends of topological spaces defined from the graph.
The ends of a finitely generated group are defined to be the ends of the corresponding Cayley graph; this definition is insensitive to the choice of generating set. Every finitely-generated infinite group has either 1, 2, or infinitely many ends, and Stallings theorem about ends of groups provides a decomposition for groups with more than one end.
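The definition for locally finite graphs can be explored on finite truncations of infinite graphs. The following Python sketch (using the networkx library; the truncation sizes and ball radii are arbitrary) counts the connected components of the complement of growing balls, which is 2 for a truncated two-way infinite path and 1 for a truncated square grid, matching the ends of the Cayley graphs of Z and Z².

```python
# Exploratory sketch: finite truncations stand in for the infinite graphs.
import networkx as nx

def complement_components(graph, center, radius):
    ball = nx.ego_graph(graph, center, radius=radius)
    rest = graph.subgraph(set(graph) - set(ball))
    return nx.number_connected_components(rest)

# A two-way infinite path (Cayley graph of Z), truncated to [-50, 50]: two ends.
line = nx.path_graph(range(-50, 51))
print([complement_components(line, 0, r) for r in (1, 2, 5, 10)])         # 2, 2, 2, 2

# The square grid (Cayley graph of Z^2), truncated to 41 x 41: one end.
grid = nx.grid_2d_graph(41, 41)
print([complement_components(grid, (20, 20), r) for r in (1, 2, 5, 10)])  # 1, 1, 1, 1
```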
Ends of a CW complex.
For a path-connected CW-complex, the ends can be characterized as homotopy classes of proper maps formula_32, called rays in "X": more precisely, if the restrictions to the subset formula_33 of any two of these maps are connected by a proper homotopy, we say that the maps are equivalent, and they define an equivalence class of proper rays. Such an equivalence class is called an end of "X".
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "K_1 \\subseteq K_2 \\subseteq K_3 \\subseteq \\cdots"
},
{
"math_id": 2,
"text": "U_1 \\supseteq U_2 \\supseteq U_3 \\supseteq \\cdots,"
},
{
"math_id": 3,
"text": "U_n"
},
{
"math_id": 4,
"text": "X\\setminus K_n"
},
{
"math_id": 5,
"text": "(K_i)"
},
{
"math_id": 6,
"text": "(U_i)"
},
{
"math_id": 7,
"text": "V"
},
{
"math_id": 8,
"text": "V\\supset U_n"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "\\{K\\}"
},
{
"math_id": 11,
"text": "\\{\\pi_0(X\\setminus K)\\}"
},
{
"math_id": 12,
"text": "\\pi_0(Y)"
},
{
"math_id": 13,
"text": "Y"
},
{
"math_id": 14,
"text": "Y\\to Z"
},
{
"math_id": 15,
"text": "\\pi_0(Y)\\to\\pi_0(Z)"
},
{
"math_id": 16,
"text": "\\varphi:X\\to Y"
},
{
"math_id": 17,
"text": "x=(x_K)_K"
},
{
"math_id": 18,
"text": "x_K"
},
{
"math_id": 19,
"text": "X\\setminus K"
},
{
"math_id": 20,
"text": "\\varphi(x)"
},
{
"math_id": 21,
"text": "\\varphi_*(x_{\\varphi^{-1}(K')})"
},
{
"math_id": 22,
"text": "K'"
},
{
"math_id": 23,
"text": "\\varphi_*"
},
{
"math_id": 24,
"text": "\\varphi"
},
{
"math_id": 25,
"text": "\\pi_0(X \\setminus \\varphi^{-1}(K'))"
},
{
"math_id": 26,
"text": "\\pi_0(Y \\setminus K')"
},
{
"math_id": 27,
"text": "\\varphi^{-1}(K)"
},
{
"math_id": 28,
"text": "\\mathbb{R}"
},
{
"math_id": 29,
"text": "\\mathbb{R}^n "
},
{
"math_id": 30,
"text": "\\mathbb{R}^n \\smallsetminus K"
},
{
"math_id": 31,
"text": "\\mathbb{R}^2 "
},
{
"math_id": 32,
"text": "\\mathbb{R}^+\\to X"
},
{
"math_id": 33,
"text": "\\mathbb{N}"
}
]
| https://en.wikipedia.org/wiki?curid=6862105 |
6863 | Compression ratio | Ratio of the volume of a combustion chamber from its largest capacity to its smallest capacity
The compression ratio is the ratio between the volume of the cylinder and combustion chamber in an internal combustion engine at their maximum and minimum values.
A fundamental specification for such engines, it is measured in two ways: the static compression ratio is calculated as the ratio of the cylinder volume when the piston is at the bottom of its stroke to the volume of the cylinder when the piston is at the top of its stroke.
The dynamic compression ratio is a more advanced calculation which also takes into account gases entering and exiting the cylinder during the compression phase.
Effect and typical ratios.
A high compression ratio is desirable because it allows an engine to extract more mechanical energy from a given mass of air–fuel mixture due to its higher thermal efficiency. This occurs because internal combustion engines are heat engines, and higher compression ratios permit the same combustion temperature to be reached with less fuel, while giving a longer expansion cycle, creating more mechanical power output and lowering the exhaust temperature.
Petrol engines.
In petrol (gasoline) engines used in passenger cars for the past 20 years, compression ratios have typically been between 8:1 and 12:1. Several production engines have used higher compression ratios, including:
When forced induction (e.g. a turbocharger or supercharger) is used, the compression ratio is often lower than naturally aspirated engines. This is due to the turbocharger or supercharger already having compressed the air before it enters the cylinders. Engines using port fuel-injection typically run lower boost pressures and/or compression ratios than direct injected engines because port fuel injection causes the air–fuel mixture to be heated together, leading to detonation. Conversely, directly injected engines can run higher boost because heated air will not detonate without a fuel being present.
Higher compression ratios can make gasoline (petrol) engines subject to engine knocking (also known as "detonation", "pre-ignition", or "pinging") if lower octane-rated fuel is used. This can reduce efficiency or damage the engine if knock sensors are not present to modify the ignition timing.
Diesel engines.
Diesel engines use higher compression ratios than petrol engines, because the lack of a spark plug means that the compression ratio must increase the temperature of the air in the cylinder sufficiently to ignite the diesel using compression ignition. Compression ratios are often between 14:1 and 23:1 for direct injection diesel engines, and between 18:1 and 23:1 for indirect injection diesel engines.
At the lower end of 14:1, NOx emissions are reduced at a cost of more difficult cold-start. Mazda's Skyactiv-D, the first such commercial engine from 2013, used adaptive fuel injectors among other techniques to ease cold start.
Other fuels.
The compression ratio may be higher in engines running exclusively on liquefied petroleum gas (LPG or "propane autogas") or compressed natural gas, due to the higher octane rating of these fuels.
Kerosene engines typically use a compression ratio of 6.5 or lower. The petrol-paraffin engine version of the Ferguson TE20 tractor had a compression ratio of 4.5:1 for operation on tractor vaporising oil with an octane rating between 55 and 70.
Motorsport engines.
Motorsport engines often run on high-octane petrol and can therefore use higher compression ratios. For example, motorcycle racing engines can use compression ratios as high as 14.7:1, and it is common to find motorcycles with compression ratios above 12.0:1 designed for 95 or higher octane fuel.
Ethanol and methanol can take significantly higher compression ratios than gasoline. Racing engines burning methanol and ethanol fuel often have a compression ratio of 14:1 to 16:1.
Mathematical formula.
In a piston engine, the static compression ratio (formula_0) is the ratio between the volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, and the volume of the combustion chamber when the piston is at the top of its stroke. It is therefore calculated by the formula
formula_1
where formula_2 is the displacement volume swept by the piston and formula_3 is the clearance volume of the combustion chamber, i.e. the volume remaining above the piston at the top of its stroke.
formula_2 can be estimated by the cylinder volume formula:
formula_4
where formula_5 is the cylinder bore (diameter) and formula_6 is the piston stroke length.
Because of the complex shape of formula_3 it is usually measured directly. This is often done by filling the cylinder with liquid and then measuring the volume of the used liquid.
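A minimal Python sketch of this calculation (the bore, stroke and clearance volume are made-up values for a single hypothetical cylinder):

```python
# Sketch: static compression ratio from bore, stroke and measured clearance volume.
import math

def static_compression_ratio(bore_mm, stroke_mm, clearance_cc):
    swept_cc = math.pi / 4 * bore_mm**2 * stroke_mm / 1000.0    # mm^3 to cm^3
    return (swept_cc + clearance_cc) / clearance_cc

# A hypothetical cylinder with an 86 mm bore and stroke and 55.5 cc clearance volume.
print(round(static_compression_ratio(bore_mm=86.0, stroke_mm=86.0, clearance_cc=55.5), 1))  # 10.0
```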
Variable compression ratio engines.
Most engines use a fixed compression ratio, however a variable compression ratio engine is able to adjust the compression ratio while the engine is in operation. The first production engine with a variable compression ratio was introduced in 2019.
Adjusting the compression ratio in this way increases fuel efficiency under varying loads; variable compression engines allow the volume above the piston at top dead centre to be changed.
Higher loads require lower ratios to increase power, while lower loads need higher ratios to increase efficiency, i.e. to lower fuel consumption. For automotive use this needs to be done as the engine is running in response to the load and driving demands.
The 2019 Infiniti QX50 is the first commercially available car that uses a variable compression ratio engine.
Dynamic compression ratio.
The "static compression ratio" discussed above — calculated solely based on the cylinder and combustion chamber volumes — does not take into account any gases entering or exiting the cylinder during the compression phase. In most automotive engines, the intake valve closure (which seals the cylinder) takes place during the compression phase (i.e. after bottom dead centre, BDC), which can cause some of the gases to be pushed back out through the intake valve. On the other hand, intake port tuning and scavenging can cause a greater amount of gas to be trapped in the cylinder than the static volume would suggest. The "dynamic compression ratio" accounts for these factors.
The dynamic compression ratio is higher with more conservative intake camshaft timing (i.e. soon after BDC), and lower with more radical intake camshaft timing (i.e. later after BDC). Regardless, the dynamic compression ratio is always lower than the static compression ratio.
Absolute cylinder pressure is used to calculate the dynamic compression ratio, using the following formula:
formula_7
where formula_8 is a polytropic value for the ratio of specific heats for the combustion gases at the temperatures present (this compensates for the temperature rise caused by compression, as well as heat lost to the cylinder)
Under ideal (adiabatic) conditions, the ratio of specific heats would be 1.4, but a lower value, generally between 1.2 and 1.3, is used, since the amount of heat lost will vary among engines based on design, size and materials used. For example, if the static compression ratio is 10:1, and the dynamic compression ratio is 7.5:1, a useful value for cylinder pressure would be 7.5^1.3 × atmospheric pressure, or 13.7 bar (relative to atmospheric pressure).
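The same worked example can be reproduced in a few lines of Python (assuming a standard atmospheric pressure of about 1.013 bar):

```python
# Sketch: absolute cylinder pressure from the polytropic relation above,
# with gamma = 1.3 and the example dynamic compression ratio of 7.5:1.
P_ATM_BAR = 1.013                              # assumed standard atmospheric pressure

def cylinder_pressure_bar(dynamic_cr, gamma=1.3):
    return P_ATM_BAR * dynamic_cr ** gamma

print(round(cylinder_pressure_bar(7.5), 1))    # ~13.9 bar absolute for a 7.5:1 dynamic ratio
print(round(cylinder_pressure_bar(10.0), 1))   # ~20.2 bar if the static 10:1 ratio were used instead
```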
The two corrections for dynamic compression ratio affect cylinder pressure in opposite directions, but not in equal strength. An engine with high static compression ratio and late intake valve closure will have a dynamic compression ratio similar to an engine with lower compression but earlier intake valve closure.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{CR}"
},
{
"math_id": 1,
"text": "\\mathrm{CR} = \\frac { V_d + V_c} {V_c}"
},
{
"math_id": 2,
"text": "V_d"
},
{
"math_id": 3,
"text": "V_c"
},
{
"math_id": 4,
"text": "V_d = \\tfrac{\\pi} {4} b^2 s"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "P_\\text{cylinder} = P_\\text{atmospheric} \\times \\text{CR}^\\gamma"
},
{
"math_id": 8,
"text": "\\gamma"
}
]
| https://en.wikipedia.org/wiki?curid=6863 |
68638522 | Buchholz hydra | Hydra game in mathematical logic
In mathematics, especially mathematical logic, graph theory and number theory, the Buchholz hydra game is a type of hydra game, a single-player game based on the idea of chopping pieces off a mathematical tree. The hydra game can be used to generate a rapidly growing function, formula_0, which eventually dominates all recursive functions that are provably total in "formula_1", and the statement that every hydra game terminates is not provable in formula_2.
Rules.
The game is played on a "hydra", a finite, rooted connected tree formula_3, with the following properties:
If the player decides to remove the top node formula_7 of formula_3, the hydra will then choose an arbitrary formula_8, where formula_9 is a current turn number, and then transform itself into a new hydra formula_10 as follows. Let formula_11 represent the parent of formula_7, and let formula_12 represent the part of the hydra which remains after formula_7 has been removed. The definition of formula_10 depends on the label of formula_7:
If formula_7 is the rightmost head of formula_3, one writes formula_22. A series of moves is called a strategy, and a strategy is called a winning strategy if, after a finite number of moves, the hydra is reduced to its root. Every such game terminates, even though the hydra can grow enormously along the way.
Hydra theorem.
Buchholz's 1987 paper showed that the canonical correspondence between a hydra and an infinitary well-founded tree (or the corresponding term in the notation system formula_23 associated with Buchholz's function, which does not necessarily belong to the ordinal notation system formula_24) preserves fundamental sequences: the strategy of choosing the rightmost leaves corresponds to the formula_25 operation on the infinitary well-founded tree, or to the formula_26 operation on the corresponding term in formula_23.
The hydra theorem for Buchholz hydra, stating that there are no losing strategies for any hydra, is unprovable in formula_27.
BH(n).
Suppose a tree consists of just one branch with formula_28 nodes, labelled formula_29. Call such a tree formula_30. It cannot be proven in formula_27 that for all formula_28, there exists formula_31 such that formula_32 is a winning strategy. (The latter expression means taking the tree formula_33, then transforming it with formula_34, then formula_35, then formula_36, etc. up to formula_37.)
Define formula_38 as the smallest formula_31 such that formula_32 as defined above is a winning strategy. By the hydra theorem this function is well-defined, but its totality cannot be proven in formula_27. Hydras grow extremely fast: the number of turns required to kill formula_39 is larger than Graham's number, and even larger than the number of turns required to kill a Kirby–Paris hydra; indeed, formula_40 has an entire Kirby–Paris hydra as one of its branches. To be precise, its rate of growth is believed, though without proof, to be comparable to formula_41 with respect to an unspecified system of fundamental sequences. Here, formula_42 denotes Buchholz's function, and formula_43 is the Takeuti–Feferman–Buchholz ordinal, which measures the strength of formula_27.
The first two values of the BH function are virtually degenerate: formula_44 and formula_45. Similarly to the weak tree function, formula_46 is very large, but less so.
The Buchholz hydra function eventually surpasses TREE(n) and SCG(n), yet it is likely weaker than Loader's number as well as the numbers arising from finite promise games.
Analysis.
It is possible to make a one-to-one correspondence between some hydras and ordinals. To convert a tree or subtree to an ordinal:
The resulting ordinal expression is only useful if it is in normal form. Some examples are:
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
External links.
" This article incorporates text available under the license." | [
{
"math_id": 0,
"text": "BH(n)"
},
{
"math_id": 1,
"text": "\\textrm{ID}_{\\nu}"
},
{
"math_id": 2,
"text": "\\textrm{(}\\Pi_1^1\\textrm{-CA)+BI}"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "+"
},
{
"math_id": 5,
"text": "\\nu \\leq \\omega"
},
{
"math_id": 6,
"text": "0"
},
{
"math_id": 7,
"text": "\\sigma"
},
{
"math_id": 8,
"text": "n \\in \\N"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "A(\\sigma, n)"
},
{
"math_id": 11,
"text": "\\tau"
},
{
"math_id": 12,
"text": "A^-"
},
{
"math_id": 13,
"text": "u"
},
{
"math_id": 14,
"text": "u \\in \\N"
},
{
"math_id": 15,
"text": "\\varepsilon"
},
{
"math_id": 16,
"text": "v < u"
},
{
"math_id": 17,
"text": "B"
},
{
"math_id": 18,
"text": "A_\\varepsilon"
},
{
"math_id": 19,
"text": "u - 1"
},
{
"math_id": 20,
"text": "\\omega"
},
{
"math_id": 21,
"text": "n + 1"
},
{
"math_id": 22,
"text": "A(n)"
},
{
"math_id": 23,
"text": "T"
},
{
"math_id": 24,
"text": "OT \\subset T"
},
{
"math_id": 25,
"text": "(n)"
},
{
"math_id": 26,
"text": "[n]"
},
{
"math_id": 27,
"text": "\\mathsf{\\Pi^1_1 - CA + BI}"
},
{
"math_id": 28,
"text": "x"
},
{
"math_id": 29,
"text": "+, 0, \\omega, ..., \\omega"
},
{
"math_id": 30,
"text": "R^n"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "R_x(1)(2)(3)...(k)"
},
{
"math_id": 33,
"text": "R_x"
},
{
"math_id": 34,
"text": "n=1"
},
{
"math_id": 35,
"text": "n=2"
},
{
"math_id": 36,
"text": "n=3"
},
{
"math_id": 37,
"text": "n=k"
},
{
"math_id": 38,
"text": "BH(x)"
},
{
"math_id": 39,
"text": "R_x(1)(2)"
},
{
"math_id": 40,
"text": "R_x(1)(2)(3)(4)(5)(6)"
},
{
"math_id": 41,
"text": "f_{\\psi_0(\\varepsilon_{\\Omega_\\omega + 1})}(x)"
},
{
"math_id": 42,
"text": "\\psi_0"
},
{
"math_id": 43,
"text": "\\psi_0(\\varepsilon_{\\Omega_\\omega + 1})"
},
{
"math_id": 44,
"text": "BH(1) = 0"
},
{
"math_id": 45,
"text": "BH(2) = 1"
},
{
"math_id": 46,
"text": "BH(3)"
},
{
"math_id": 47,
"text": "\\psi_\\alpha"
},
{
"math_id": 48,
"text": "\\alpha"
},
{
"math_id": 49,
"text": "\\psi"
}
]
| https://en.wikipedia.org/wiki?curid=68638522 |
68641808 | Extension complexity | In convex geometry and polyhedral combinatorics, the extension complexity of a convex polytope formula_0 is the smallest number of facets among convex polytopes formula_1 that have formula_0 as a projection. In this context, formula_1 is called an extended formulation of formula_0; it may have much higher dimension than formula_0.
The extension complexity depends on the precise shape of formula_0, not just on its combinatorial structure. For instance, regular polygons with formula_2 sides have extension complexity formula_3 (expressed using big O notation), but some other convex formula_2-gons have extension complexity at least proportional to formula_4.
If a polytope describing the feasible solutions to a combinatorial optimization problem has low extension complexity, this could potentially be used to devise efficient algorithms for the problem, using linear programming on its extended formulation. For this reason, researchers have studied the extension complexity of the polytopes arising in this way. For instance, it is known that the matching polytope has exponential extension complexity. On the other hand, the independence polytope of regular matroids has polynomial extension complexity.
The notion of extension complexity has also been generalized from linear programming to semidefinite programming, by considering projections of spectrahedra in place of projections of polytopes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "O(\\log n)"
},
{
"math_id": 4,
"text": "\\sqrt{n}"
}
]
| https://en.wikipedia.org/wiki?curid=68641808 |
6864370 | Outgoing longwave radiation | Energy transfer mechanism which enables planetary cooling
In climate science, longwave radiation (LWR) is electromagnetic thermal radiation emitted by Earth's surface, atmosphere, and clouds. It may also be referred to as "terrestrial radiation". This radiation is in the infrared portion of the spectrum, but is distinct from the shortwave (SW) near-infrared radiation found in sunlight.
Outgoing longwave radiation (OLR) is the longwave radiation emitted to space from the top of Earth's atmosphere. It may also be referred to as "emitted terrestrial radiation". Outgoing longwave radiation plays an important role in planetary cooling.
Longwave radiation generally spans wavelengths ranging from 3–100 micrometres (μm). A cutoff of 4 μm is sometimes used to differentiate sunlight from longwave radiation. Less than 1% of sunlight has wavelengths greater than 4 μm. Over 99% of outgoing longwave radiation has wavelengths between 4 μm and 100 μm.
The flux of energy transported by outgoing longwave radiation is typically measured in units of watts per metre squared (W⋅m−2). In the case of global energy flux, the W/m2 value is obtained by dividing the total energy flow over the surface of the globe (measured in watts) by the surface area of the Earth, .
Emitting outgoing longwave radiation is the only way Earth loses energy to space, i.e., the only way the planet cools itself. Radiative heating from absorbed sunlight, and radiative cooling to space via OLR power the heat engine that drives atmospheric dynamics.
The balance between OLR (energy lost) and incoming solar shortwave radiation (energy gained) determines whether the Earth is experiencing global heating or cooling (see Earth's energy budget).
Planetary energy balance.
Outgoing longwave radiation (OLR) constitutes a critical component of Earth's energy budget.
The principle of conservation of energy says that energy cannot appear or disappear. Thus, any energy that enters a system but does not leave must be retained within the system. So, the amount of energy retained on Earth (in Earth's climate system) is governed by an equation:
"[change in Earth's energy]" = "[energy arriving]" − "[energy leaving]".
Energy arrives in the form of absorbed solar radiation (ASR). Energy leaves as outgoing longwave radiation (OLR). Thus, the rate of change in the energy in Earth's climate system is given by Earth's energy imbalance (EEI):
formula_0.
When energy is arriving at a higher rate than it leaves (i.e., ASR > OLR, so that EEI is positive), the amount of energy in Earth's climate increases. Temperature is a measure of the amount of thermal energy in matter. So, under these circumstances, temperatures tend to increase overall (though temperatures might decrease in some places as the distribution of energy changes). As temperatures increase, the amount of thermal radiation emitted also increases, leading to more outgoing longwave radiation (OLR), and a smaller energy imbalance (EEI).
Similarly, if energy arrives at a lower rate than it leaves (i.e., ASR < OLR, so that EEI is negative), the amount of energy in Earth's climate decreases, and temperatures tend to decrease overall. As temperatures decrease, OLR decreases, making the imbalance closer to zero.
In this fashion, a planet constantly and naturally adjusts its temperature so as to keep the energy imbalance small. If more solar radiation is absorbed than OLR is emitted, the planet heats up. If there is more OLR than absorbed solar radiation, the planet cools. In both cases, the temperature change works to shift the energy imbalance towards zero. When the energy imbalance is zero, a planet is said to be in "radiative equilibrium". Planets naturally tend towards a state of approximate radiative equilibrium.
In recent decades, energy has been measured to be arriving on Earth at a higher rate than it leaves, corresponding to planetary warming. The energy imbalance has been increasing. It can take decades to centuries for oceans to warm and planetary temperature to shift sufficiently to compensate for an energy imbalance.
Emission.
Thermal radiation is emitted by nearly all matter, in proportion to the fourth power of its absolute temperature.
In particular, the emitted energy flux, formula_1 (measured in W/m2) is given by the Stefan–Boltzmann law for non-blackbody matter:
formula_2
where formula_3 is the absolute temperature, formula_4 is the Stefan–Boltzmann constant, and formula_5 is the emissivity. The emissivity is a value between zero and one which indicates how much less radiation is emitted compared to what a perfect blackbody would emit.
Surface.
The emissivity of Earth's surface has been measured to be in the range 0.65 to 0.99 (based on observations in the 8–13 μm wavelength range), with the lowest values occurring in barren desert regions. The emissivity is mostly above 0.9, and the global average surface emissivity is estimated to be around 0.95.
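As a rough illustration of the Stefan–Boltzmann law above, the following Python sketch combines the global-average surface emissivity of about 0.95 with an assumed global-mean surface temperature of 288 K (a commonly used round figure, not a value from this article):

```python
# Sketch of the Stefan-Boltzmann law for non-blackbody matter: M = eps * sigma * T^4.
# emissivity = 0.95 follows the global-average estimate quoted above;
# 288 K is an assumed, typical global-mean surface temperature.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(temperature_k: float, emissivity: float = 1.0) -> float:
    """Thermal flux (W/m2) emitted by matter at the given absolute temperature."""
    return emissivity * SIGMA * temperature_k ** 4

print(f"{emitted_flux(288.0, emissivity=0.95):.0f} W/m2")  # roughly 370 W/m2
```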
Atmosphere.
The most common gases in air (i.e., nitrogen, oxygen, and argon) have a negligible ability to absorb or emit longwave thermal radiation. Consequently, the ability of air to absorb and emit longwave radiation is determined by the concentration of trace gases like water vapor and carbon dioxide.
According to Kirchhoff's law of thermal radiation, the emissivity of matter is always equal to its absorptivity, at a given wavelength. At some wavelengths, greenhouse gases absorb 100% of the longwave radiation emitted by the surface. So, at those wavelengths, the emissivity of the atmosphere is 1 and the atmosphere emits thermal radiation much like an ideal blackbody would. However, this applies only at wavelengths where the atmosphere fully absorbs longwave radiation.
Although greenhouse gases in air have a high emissivity at some wavelengths, this does not necessarily correspond to a high rate of thermal radiation being emitted to space. This is because the atmosphere is generally much colder than the surface, and the rate at which longwave radiation is emitted scales as the fourth power of temperature. Thus, the higher the altitude at which longwave radiation is emitted, the lower its intensity.
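The point about emission altitude can be made concrete with the same law. In the sketch below, 288 K and 220 K are assumed, typical-magnitude temperatures for the surface and the upper troposphere; they are illustrative values only.

```python
# Flux scales as T^4, so emission from a colder, higher layer is much weaker.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

surface_flux = SIGMA * 288.0 ** 4             # about 390 W/m2
upper_troposphere_flux = SIGMA * 220.0 ** 4   # about 133 W/m2

print(f"upper troposphere / surface = {upper_troposphere_flux / surface_flux:.2f}")
# about 0.34: emission from the colder layer is roughly a third as intense
```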
Atmospheric absorption.
The atmosphere is relatively transparent to solar radiation, but it is nearly opaque to longwave radiation. The atmosphere typically absorbs most of the longwave radiation emitted by the surface. Absorption of longwave radiation prevents that radiation from reaching space.
At wavelengths where the atmosphere absorbs surface radiation, some portion of the radiation that was absorbed is replaced by a lesser amount of thermal radiation emitted by the atmosphere at a higher altitude.
When longwave radiation is absorbed, its energy is transferred to the substance that absorbed it. Overall, however, greenhouse gases in the troposphere emit more thermal radiation than they absorb, so longwave radiative heat transfer has a net cooling effect on air.
Atmospheric window.
Assuming no cloud cover, most of the "surface emissions" that reach space do so through the atmospheric window. The atmospheric window is a region of the electromagnetic wavelength spectrum between 8 and 11 μm where the atmosphere does not absorb longwave radiation (except for the ozone band between 9.6 and 9.8 μm).
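A rough sense of how much surface emission the 8–11 μm window can carry comes from integrating the Planck function over that band. The Python sketch below treats the surface as a blackbody at an assumed 288 K and ignores the ozone band; it is an order-of-magnitude illustration, not a radiative-transfer calculation.

```python
import math

# Physical constants (SI)
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_exitance(wavelength_m: float, temperature_k: float) -> float:
    """Spectral exitance of a blackbody, W m^-2 per metre of wavelength."""
    a = 2.0 * math.pi * H * C ** 2 / wavelength_m ** 5
    return a / math.expm1(H * C / (wavelength_m * KB * temperature_k))

def band_fraction(lo_um: float, hi_um: float, temperature_k: float, steps: int = 2000) -> float:
    """Fraction of total blackbody emission between two wavelengths (midpoint rule)."""
    lo, hi = lo_um * 1e-6, hi_um * 1e-6
    dw = (hi - lo) / steps
    band = sum(planck_exitance(lo + (i + 0.5) * dw, temperature_k) for i in range(steps)) * dw
    return band / (SIGMA * temperature_k ** 4)

print(f"{band_fraction(8.0, 11.0, 288.0):.0%}")  # roughly 19% of surface blackbody emission
```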
Gases.
Greenhouse gases in the atmosphere are responsible for a majority of the absorption of longwave radiation in the atmosphere. The most important of these gases are water vapor, carbon dioxide, methane, and ozone.
The absorption of longwave radiation by gases depends on the specific absorption bands of the gases in the atmosphere. The specific absorption bands are determined by their molecular structure and energy levels. Each type of greenhouse gas has a unique group of absorption bands that correspond to particular wavelengths of radiation that the gas can absorb.
Clouds.
The OLR balance is affected by clouds, dust, and aerosols in the atmosphere. Clouds are effective at absorbing and scattering longwave radiation: they block upwelling longwave radiation, lowering the flux that penetrates to higher altitudes and thereby reducing the amount of outgoing longwave radiation.
Clouds have both cooling and warming effects. They have a cooling effect insofar as they reflect sunlight (as measured by cloud albedo), and a warming effect, insofar as they absorb longwave radiation. For low clouds, the reflection of solar radiation is the larger effect; so, these clouds cool the Earth. In contrast, for high thin clouds in cold air, the absorption of longwave radiation is the more significant effect; so these clouds warm the planet.
Details.
The interaction between emitted longwave radiation and the atmosphere is complicated by several factors that affect absorption. The path of the radiation through the atmosphere also matters: longer paths result in greater absorption because of the cumulative absorption by many layers of gas. Finally, the temperature and altitude of the absorbing gas also affect its absorption of longwave radiation.
OLR is affected by Earth's surface skin temperature (i.e., the temperature of the top layer of the surface), skin surface emissivity, atmospheric temperature, the water vapor profile, and cloud cover.
Day and night.
The net all-wave radiation is dominated by longwave radiation during the night and in the polar regions. While there is no absorbed solar radiation during the night, terrestrial radiation continues to be emitted, primarily as a result of solar energy absorbed during the day.
Relationship to greenhouse effect.
The reduction of the outgoing longwave radiation (OLR), relative to longwave radiation emitted by the surface, is at the heart of the greenhouse effect.
More specifically, the greenhouse effect may be defined quantitatively as the amount of longwave radiation emitted by the surface that does not reach space. On Earth as of 2015, about 398 W/m2 of longwave radiation was emitted by the surface, while OLR, the amount reaching space, was 239 W/m2. Thus the greenhouse effect was 398 − 239 = 159 W/m2; equivalently, 159/398 ≈ 40% of surface emissions did not reach space.
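The arithmetic in the preceding paragraph, expressed as a short Python sketch (the 2015 flux values are the ones quoted above):

```python
# Greenhouse effect as surface longwave emission that does not reach space (2015 values).
surface_emission = 398.0  # W/m2, longwave emitted by the surface
olr = 239.0               # W/m2, longwave reaching space

greenhouse_effect = surface_emission - olr
print(f"{greenhouse_effect:.0f} W/m2")                                      # 159 W/m2
print(f"{greenhouse_effect / surface_emission:.0%} of surface emissions")   # about 40%
```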
Effect of increasing greenhouse gases.
When the concentration of a greenhouse gas such as carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), or water vapor (H2O) is increased, this has a number of effects. At a given wavelength where the gas absorbs, more of the surface emission is absorbed and the altitude from which the atmosphere emits to space rises to colder levels, so OLR at that wavelength is reduced.
The size of the reduction in OLR will vary by wavelength. Even if OLR does not decrease at certain wavelengths (e.g., because 100% of surface emissions are absorbed and the emission altitude is in the stratosphere), increased greenhouse gas concentration can still lead to significant reductions in OLR at other wavelengths where absorption is weaker.
When OLR decreases, this leads to an energy imbalance, with energy received being greater than energy lost, causing a warming effect. Therefore, an increase in the concentrations of greenhouse gases causes energy to accumulate in Earth's climate system, contributing to global warming.
Surface budget fallacy.
If the absorptivity of the gas is high and the gas is present in a high enough concentration, the absorption at certain wavelengths becomes saturated. This means there is enough gas present to completely absorb the radiated energy at that wavelength before the upper atmosphere is reached.
It is sometimes "incorrectly" argued that this means an increase in the concentration of this gas will have no additional effect on the planet's energy budget. This argument neglects the fact that outgoing longwave radiation is determined not only by the amount of surface radiation that is "absorbed", but also by the altitude (and temperature) at which longwave radiation is "emitted" to space. Even if 100% of surface emissions are absorbed at a given wavelength, the OLR at that wavelength can still be reduced by increased greenhouse gas concentration, since the increased concentration leads to the atmosphere emitting longwave radiation to space from a higher altitude. If the air at that higher altitude is colder (as is true throughout the troposphere), then thermal emissions to space will be reduced, decreasing OLR.
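The emission-altitude argument can be illustrated with the Planck function at a single wavelength. In the sketch below, the 15 μm wavelength (within a CO2 absorption band) and the two emission temperatures are assumed, illustrative values; the point is only that moving the effective emission level to colder air reduces the radiance emitted to space at that wavelength.

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(wavelength_m: float, temperature_k: float) -> float:
    """Blackbody spectral exitance at one wavelength, W m^-2 per metre of wavelength."""
    return (2.0 * math.pi * H * C ** 2 / wavelength_m ** 5) / \
        math.expm1(H * C / (wavelength_m * KB * temperature_k))

wavelength = 15e-6                         # 15 um, in a CO2 band (illustrative)
warmer_level = planck(wavelength, 255.0)   # assumed lower, warmer emission level
colder_level = planck(wavelength, 245.0)   # assumed higher, colder emission level

print(f"Emission at 15 um drops by {1 - colder_level / warmer_level:.0%}")  # roughly 15%
```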
False conclusions about the implications of absorption being "saturated" are examples of the "surface budget fallacy", i.e., erroneous reasoning that results from focusing on energy exchange at the surface, instead of focusing on the top-of-atmosphere (TOA) energy balance.
Measurements.
Measurements of outgoing longwave radiation at the top of the atmosphere and of longwave radiation back towards the surface are important to understand how much energy is retained in Earth's climate system: for example, how thermal radiation cools and warms the surface, and how this energy is distributed to affect the development of clouds. Observing this radiative flux from a surface also provides a practical way of assessing surface temperatures on both local and global scales. This energy distribution is what drives atmospheric thermodynamics.
OLR.
Outgoing longwave radiation (OLR) has been monitored and reported since 1970 by a progression of satellite missions and instruments.
Surface LW radiation.
Longwave radiation at the surface (both outward and inward) is mainly measured by pyrgeometers. One of the most notable ground-based networks for monitoring surface longwave radiation is the Baseline Surface Radiation Network (BSRN), which provides crucial, well-calibrated measurements for studying global dimming and brightening.
Data.
Data on surface longwave radiation and OLR is available from a number of sources including:
OLR calculation and simulation.
Many applications call for calculation of longwave radiation quantities: local radiative cooling by outgoing longwave radiation, the suppression of that cooling (when downwelling longwave radiation offsets the energy carried upward by longwave radiation), and radiative heating by incoming solar radiation together drive the temperature and dynamics of different parts of the atmosphere.
By using the radiance measured from a particular direction by an instrument, atmospheric properties (like temperature or humidity) can be inversely inferred.
Calculations of these quantities solve the radiative transfer equations that describe radiation in the atmosphere. Usually the solution is done numerically by atmospheric radiative transfer codes adapted to the specific problem.
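As a drastically simplified stand-in for what such codes compute, the following single-layer "gray atmosphere" sketch adds the fraction of surface emission transmitted through the layer to the layer's own upward emission. The layer emissivity and temperatures are assumed values chosen only so that the output lands near the observed global-mean OLR; this is a pedagogical toy, not one of the radiative transfer codes mentioned above.

```python
# Toy single-layer gray-atmosphere OLR: transmitted surface emission plus layer emission.
# All parameter values are assumptions for illustration.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def toy_olr(t_surface_k: float, t_layer_k: float, layer_emissivity: float) -> float:
    """OLR from a blackbody surface under one isothermal, gray atmospheric layer."""
    transmitted = (1.0 - layer_emissivity) * SIGMA * t_surface_k ** 4
    layer_emission = layer_emissivity * SIGMA * t_layer_k ** 4
    return transmitted + layer_emission

print(f"{toy_olr(288.0, 245.0, 0.8):.0f} W/m2")  # about 241 W/m2, near the observed ~240 W/m2
```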
Another common approach is to estimate values from surface temperature and emissivity, and then compare those estimates to satellite measurements of top-of-atmosphere radiance or brightness temperature.
There are online interactive tools that allow one to see the spectrum of outgoing longwave radiation that is predicted to reach space under various atmospheric conditions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{EEI} = \\mathrm{ASR} - \\mathrm{OLR}"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "M = \\epsilon\\, \\sigma\\, T^4"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "\\sigma"
},
{
"math_id": 5,
"text": "\\epsilon"
}
]
| https://en.wikipedia.org/wiki?curid=6864370 |